Synthetic Media & Deepfakes
How do we protect societies from synthetic media and "deepfakes"?

Deepfakes, a form of synthetic media that uses artificial intelligence (AI) to create realistic depictions of people and events, have proliferated in recent years. There are many questions about how this content affects journalists, fact-based news and mis- and disinformation. Addressing these concerns requires weighing both freedom of expression and safety. Relatedly, policies targeting deepfakes must be clear about what types of content qualify as such. Detection technologies and provenance approaches are developing rapidly, but they are unlikely to prevent all potential harms from AI-altered content. Additional research should consider (1) what effects deepfakes have on journalism, (2) how content labeling addresses concerns about deepfakes (and what types of labels are most effective), (3) what international standards should be applied to content to confirm its authenticity and (4) how best to teach the public to identify synthetic media.

Manipulated imagery has been around for over 150 years, but it has reached a new level with "deepfakes." The term "deepfake" originated in 2017 to describe audio and video manipulated with the assistance of artificial intelligence (AI) to resemble a real person even when that person did not say or do what is depicted in the content. Deepfakes are a subset of "synthetic media," which includes audio, image, text and video content created with the assistance of AI. Conversations continue about how to differentiate synthetic audiovisual content from deepfake audiovisual content. Like many topics CNTI covers, the definitional clarity of these terms remains a work in progress and is important when considering policy.

The number of deepfakes online increased tenfold from 2022 to 2023. While some research raises questions about the degree to which harm can be directly attributed to manipulated media, there is certainly some evidence of it in countries such as Slovakia, the United Kingdom and the United States. This is especially alarming amid a record-breaking number of national elections held in 2024 and growing global concern about threats to countries' overall stability. A March 2022 deepfake, for example, depicted the Ukrainian President falsely ordering his country's military to surrender.

To date, the highest-quality, most convincing deepfakes require large amounts of training data and considerable time to produce. Most actors intending to cause harm are therefore likely to pursue less resource-intensive means of spreading false narratives, such as the disinformation tactics discussed in a separate CNTI issue primer. But as technological innovations advance, deepfakes are rapidly becoming easier to make and more persuasive.

Alongside the worry about the direct impact of a deepfake is concern about a “liar’s dividend” and the sowing of further distrust in “real news,” the news media and government figures.

Responses to the growth in deepfakes are occurring on several fronts. Many online platforms (e.g., Facebook, Instagram, TikTok, X and YouTube) have begun implementing disclosure policies that require advertisements using AI-created content to be labeled. Others are banning certain types of synthetic material, creating training resources to help combat these images and working on ways to embed content with tags that confirm authenticity.

News organizations have also developed online training courses to assist with identifying deepfakes. However, interviews with expert fact-checkers reveal that while deepfakes are a concern, many feel that more critical threats come from text, images, audio and/or video taken out of context (i.e., “decontextualized”) as well as other forms of manipulated media such as “cheap fakes” where information is recontextualized into a false narrative. 

Methods to detect synthetic media have advanced over the last several years. These identification technologies examine the shadows, geometry, pixels and/or audio anomalies of suspected synthetic media and look for hidden watermarks to evaluate authenticity. Detection challenges have encouraged researchers and the public to experiment with new ways to accurately identify deepfakes. While these techniques will almost certainly be unable to fully erase the threats of synthetic media, they do offer steps toward establishing guardrails to protect the public's access to authentic information. To identify how best to implement detection methods, further collaboration across sectors (e.g., technology, communications, policy and government) is needed.
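To make these detection cues concrete, the minimal Python sketch below shows how a couple of weak forensic signals – uneven high-frequency noise across image blocks and the presence of a generator watermark – might be combined into a single authenticity score. It is an illustration only: the helper functions, weights and thresholds are assumptions for this example, and real detectors rely on far richer features and trained models.

```python
# Minimal sketch of combining forensic cues into one score; the weights,
# thresholds and helper logic are illustrative assumptions, not a real detector.
import numpy as np
from PIL import Image

def noise_inconsistency(img: np.ndarray) -> float:
    """Crude proxy for pixel-level anomalies: how unevenly high-frequency
    noise is spread across 8x8 blocks (splices and edits often stand out)."""
    gray = img.mean(axis=2)
    residual = gray - np.roll(gray, 1, axis=0)           # simple high-pass filter
    h, w = (residual.shape[0] // 8) * 8, (residual.shape[1] // 8) * 8
    blocks = residual[:h, :w].reshape(h // 8, 8, w // 8, 8)
    per_block_std = blocks.std(axis=(1, 3))              # noise level per block
    return float(per_block_std.std())                    # spread across blocks

def has_generator_watermark(img: np.ndarray) -> bool:
    """Placeholder for a hidden-watermark check; a real implementation would
    follow the generating tool's published watermark specification."""
    return False                                          # assumption for this sketch

def authenticity_score(path: str) -> float:
    """Returns 1.0 when no red flags are found; lower values warrant human review."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    if has_generator_watermark(img):                      # explicit "AI-made" marker
        return 0.0
    return 1.0 - 0.5 * min(noise_inconsistency(img) / 50.0, 1.0)
```

As the research discussed below suggests, automated scores like this are most useful when paired with trained human review rather than relied on alone.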

Most countries do not have specific legislation on synthetic media, but among those that do, policies fall into two general categories: (1) banning all deepfake content that does not obtain consent from the individual(s) depicted or (2) requiring disclosure and/or labeling of deepfake content. Deepfake-related regulation is particularly complex due to many countries’ protections for freedom of speech and expression. 

Clearly delineating what qualifies as a deepfake is difficult but critical.

Governments and researchers are confronting how best to differentiate deepfakes from other types of synthetic media content. For instance, there is a question about whether the material depicted must be deceptive in nature to be classified as a "deepfake." Other definitional considerations revolve around intent, harm and consent. For example, a 2019 synthetic video of soccer star David Beckham speaking nine languages was intended to disseminate factual information about malaria, but it used deepfake technologies to make the dialogue sound authentic. The intent was not to deceive or cause harm, yet the video is still widely considered a deepfake because of how it was made. Conversely, altered images have existed for over 150 years – without the use of artificial intelligence – yet can be just as intentionally deceptive as AI-generated deepfakes. The spectrum of synthetic media becomes increasingly complex with the inclusion of "shallowfakes" and "cheap fakes" – forms of manipulated media that do not require advanced technological tools. Better delineating what types of content are classified broadly as synthetic media versus specifically as "deepfakes" (a subset of synthetic media) is crucial to separating benign and beneficial uses (e.g., for education and entertainment) from harmful ones.

While developing software to detect and counter deepfakes requires strong digital infrastructure and financial resources that only certain countries have available, new labeling and disclosure tools are making methods for addressing deepfakes more accessible globally.

Developing independent software for detecting and countering deepfakes is expensive, but tools to identify whether media have been manipulated are becoming available for wider use. One potential strategy is to use trained human graders in combination with pre-trained AI detection models; researchers find these combined approaches can have advantages over any single detection method. In response to the growing number of deepfakes, content creators and the technology industry have also begun developing ways to tag and label manipulated media. These include both direct and indirect disclosure approaches to maintain transparency and assert provenance (i.e., authenticity), as well as different types of content labeling. Watermarking is one technique that can be visible to users or embedded in media to certify its authenticity. Arriving at a global standard for this type of labeling should be a priority. The Coalition for Content Provenance and Authenticity (C2PA) is one possible standard and has received support from Adobe, Google, Intel and Microsoft, among many other organizations. While technology tools and disclosure and labeling requirements greatly help to address deepfakes, they are unlikely to remove all mis- and disinformation from the news ecosystem, so understanding how to mitigate threats from all sources is critical for promoting fact-based news.
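As a rough illustration of the invisible-watermarking idea mentioned above, the toy sketch below hides a short tag in an image's least significant bits and reads it back. This is a teaching example under stated assumptions, not how C2PA works: C2PA attaches cryptographically signed metadata manifests rather than pixel-level marks, and a simple mark like this would not survive heavy re-encoding.

```python
# Toy least-significant-bit (LSB) watermark: embed a short tag in an image's
# blue channel and read it back. Illustrative only; not how C2PA or any
# production watermarking scheme actually works.
import numpy as np
from PIL import Image

TAG = "cnti-demo"   # assumed payload; real systems embed signed identifiers

def embed_tag(in_path: str, out_path: str, tag: str = TAG) -> None:
    img = np.asarray(Image.open(in_path).convert("RGB")).copy()
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    h, w, _ = img.shape
    blue = img[:, :, 2].reshape(-1).copy()                   # flattened blue channel
    blue[: bits.size] = (blue[: bits.size] & 0xFE) | bits    # overwrite lowest bits
    img[:, :, 2] = blue.reshape(h, w)
    Image.fromarray(img).save(out_path, format="PNG")        # lossless, so bits survive

def read_tag(path: str, length: int = len(TAG)) -> str:
    blue = np.asarray(Image.open(path).convert("RGB"))[:, :, 2].reshape(-1)
    bits = blue[: length * 8] & 1                            # recover lowest bits
    return np.packbits(bits).tobytes().decode(errors="replace")
```

The fragility of such marks – they vanish under cropping or lossy compression – is one reason both signed-metadata standards and more robust watermarking research are being pursued in parallel.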

Efforts to regulate deepfake content must be compatible with laws protecting freedom of speech and expression.

Governments need to determine where to draw the line between legal and illegal deepfake content. In countries that legally protect free speech, deepfakes raise difficult questions about what content is legal or illegal. To a degree, sharing false statements is a protected right under freedom of speech and expression laws, so banning all deepfake content would itself likely be illegal – making the regulation of deepfakes particularly complex.

Synthetic media create opportunities for journalists to protect their identity in threatening situations, but deceptive behavior runs counter to many news outlets’ codes of ethics on misrepresentation.

The technological innovations brought forth by deepfakes may allow journalists and/or their sources to remain anonymous by altering their appearance and voice when working on sensitive projects. However, this goes against many outlets’ codes about journalists remaining honest and transparent during reporting as well as policies about deception. While news organizations may outline rare circumstances for journalists to protect anonymity or to engage in deceptive practices, these are only allowed for matters of public interest or personal safety. Determining when journalists can use deepfakes for their work is an important ethical consideration.

While deepfakes are a relatively new technological innovation, much research has explored how individuals interpret false information presented in differing forms of media (e.g., text, audio and/or video). In one recent study, even when participants were warned that they would encounter deepfake content, nearly 80% did not correctly identify the only deepfake in a series of five videos. Other work has found that up to half of respondents in a nationally representative sample cannot differentiate between manipulated and authentic videos. These findings indicate that deepfakes are difficult for audiences to identify and that correcting false information is needed to support a well-informed public.

There are also evidence-based reasons for hope. Research suggests people can be trained to better detect deepfakes. Interventions that emphasize the accuracy of information and the low cost of producing consumer-grade deepfakes (which can now be made in a matter of minutes using apps and websites) show positive results in countering the negative effects of this type of content.

To complement training individuals about detecting deepfakes, one strategy is to study how responsive individuals are to informational fact checks and labels on manipulated media: 

  • Fact-checking politicians’ false statements has been shown to decrease beliefs that the statements were factual, though further research into how partisanship shapes interpretation of synthetic media is crucial. Developing digital media literacy approaches, like how to spot false information, will likely be important to help individuals recognize high-quality, fact-based news. 
  • Findings suggest that tagging information as false, while beneficial, also has consequences for true, authentic information: general, broad disclosures about false information can cause viewers to discount true, accurate news. The public may grow more accustomed to looking for labels and other kinds of disclosures as a way of judging content's veracity, but for now that remains an open question.
  • False information that is not tagged as such tends to be interpreted as more accurate than false information that has been tagged. These findings suggest that labels can be effective, but they must be comprehensive and cover all applicable synthetic media.
  • The value of asserting provenance, or tracking the authenticity and origin of content, has also been studied. While the public may not widely grasp provenance as a concept, it has been shown to decrease trust in deceptive media when presented to individuals. Further education on the importance of provenance for a fact-based news ecosystem is needed.

Future research should continue to study (1) how individuals engage with synthetic content and (2) how persuasive the public finds this content, which is especially relevant given how realistic and life-like these media are becoming. In response to the increased presence of synthetic media, researchers should also consider what techniques – including labeling and disclosure – are most effective for mitigating the negative effects stemming from deepfake content. Understanding how to respond to and “treat” individuals who have encountered deepfake content is an important consideration that can support fact-based news endeavors. Finally, research should also examine how newsrooms will confront the proliferation of deepfake content and its potential harms.

Most countries do not have any existing policies that specifically target deepfakes. The existing legislation on deepfakes grapples with how to accommodate less harmful and/or benign uses of synthetic media (e.g., art, education, comedy) while addressing a broad range of harmful uses (e.g., nonconsensual adult content, deceiving consumers). Defining the harmful, illegal uses of deepfake content is critical for effective policy. Legislation attempting to protect societies from deepfake content must decide whether to ban all AI-generated deepfake content or to set restrictions on material allowed in deepfake content and develop regulations on how it is presented to audiences (e.g., labeling). 

Another crucial consideration for public policy is how to best classify deepfake media content. Much of the legislation being debated and passed in U.S. states is about images, videos and audio. However, some pieces of legislation also address text-related synthetic media, while others focus solely on videos – even if generative AI was not used in the manipulated videos. Global standards for what types of media are classified as deepfake content might be helpful but only if they are fully inclusive of the ways people receive information and interact with news and allow for future developments.

Current approaches across several U.S. states, the European Union and China involve implementing disclosure requirements or labeling content that has been generated using AI. These approaches include watermarking, content labeling and disclaimers. The parties responsible for enforcing these types of deepfake regulations have included government agencies but concerns persist that the technology is moving more quickly than legislation and oversight. For many countries, current laws that may be affected by deepfake content (e.g., right to privacy, defamation or cybercrime) do not specifically address synthetic media. These gaps make regulating manipulated content difficult. Enforcement is also problematic in cases where the content creator resides outside of a country’s jurisdiction. 

Experts recommend focusing policies on the general harms of technological innovations rather than on the technologies themselves, as it is likely impossible to detect and/or ban all manipulated synthetic media. There are also concerns that deepfake regulation will curtail freedom of speech and expression. Future legislation should be crafted so that the costs of regulation do not outweigh its benefits. As such, regulations ought to balance the need for freedom of expression (and an open internet) against the harms of mis- and disinformation.

Notable Articles & Statements


A look at global deepfake regulation approaches
Responsible Artificial Intelligence Institute (April 2023)

Artificial intelligence, deepfakes, and disinformation
RAND Corporation (July 2022)

Deepfakes and international conflict
Brookings Institute (January 2023)

From deepfakes to TikTok filters: How do you label AI content?
Nieman Lab (May 2021)

Increasing threats of deepfake identities
U.S. Department of Homeland Security (n.d.)

Regulating AI deepfakes and synthetic media in the political arena
Brennan Center for Justice (December 2023)

Snapshot paper – Deepfakes and audiovisual disinformation
Centre for Data Ethics and Innovation (September 2019)

Tackling deepfakes in European policy
European Parliamentary Research Service (July 2021)

Key Institutions & Resources

Coalition for Content Provenance and Authenticity: Organization that develops technological standards for identifying authentic media.

Partnership on AI: Non-profit organization that is dedicated to understanding AI through cross-industry discussions and partnerships to promote positive outcomes for society and has an initiative for synthetic media.

Responsible Artificial Intelligence Institute: Non-profit organization focusing on how to assist organizations with responsible AI usage and implementation.

University of North Carolina Center on Technology Policy: Public policy-focused organization addressing current technology issues and providing meaningful policy considerations.

WITNESS: Non-profit organization that provides information about how individuals around the world may use technology and video recordings to improve and secure human rights.

Notable Voices

David Doermann, School of Engineering and Applied Sciences, University at Buffalo

Hany Farid, Electrical Engineering & Computer Sciences and the School of Information, University of California, Berkeley

Henry Ajder, Founder, Latent Space

Matthew Groh, Kellogg School of Management, Northwestern University

Matthew Wright, Department of Cybersecurity, Rochester Institute of Technology

Maura Grossman, Cheriton School of Computer Science, University of Waterloo

Sam Gregory, Executive Director, WITNESS

Siwei Lyu, School of Engineering and Applied Sciences, University at Buffalo

Recent & Upcoming Events

Deepfakes and the Law Conference
University of Leeds and City University, London
May 20, 2024 – London, UK or Online

The Impact of Deepfakes on the Justice System
American Bar Association
January 22, 2024 – Online

Journalists & Cyber Threats
How can we better ensure the digital security of the press and protect against cyber threats?


The digital security of publishers, journalists and their sources is under threat in many parts of the world. At the governmental level, policymakers must acknowledge the very real threats facing journalists specifically and ensure that digital policy initiatives both protect them and, in doing so, do not threaten free expression, basic privacy rights, or encryption and VPN protections. At the platform level, technology companies can establish and protect human rights and privacy safeguards, but they also must at times navigate challenging state demands (at times via legislation) to provide private data and information on their users, potentially permitting governments’ abuses of power. At the publisher level, proactive efforts around cyber education, safety and support, as well as sharing experiences within the industry, are equally critical. Finally, researchers and civil society need to do their part to collaboratively shed light and provide data on the trends, risks and potential avenues forward.

The physical and digital security of journalists and their sources is under threat in many parts of the world. Digital security and cybersecurity threats, in particular, have become more important than ever for the global news media, as journalists and publishers become high-profile targets for malware, spyware and digital surveillance that compromise their own and their sources' personal information and safety. More broadly, digital security and cybersecurity have gained the attention of policymakers globally, with 156 countries having enacted cybercrime legislation as of 2021.

Cybersecurity threats come in many forms, including increasingly sophisticated domestic and transnational spyware, distributed denial-of-service (DDoS) attacks, malware (malicious software used to gain unauthorized access to IT systems and spread across a network), ransomware (malware whose operators demand a payment, or ransom, in exchange for restoring access) and phishing attacks. A range of actors can be behind these tactics, including nation-states and politicians, powerful individuals, corporations, criminal networks and extremist organizations.

Digital security challenges facing the global news media also involve broader threats to data privacy and security which include growing concerns about software vulnerabilities as well as the use of digital platform surveillance. For example, some governments attempt to chip away at encryption protections offered by secure apps like Signal, WhatsApp, ProtonMail or SecureDrop. Additionally, there are digital safety and privacy concerns about legacy social media platforms like X (formerly known as Twitter) — a staple of global journalism practice and sourcing — where data and communications are rapidly becoming far less secure for journalists and civil society organizations in the wake of ownership and infrastructure changes.

Digital and physical threats to journalists are connected. For instance, the use of spyware has been linked to hundreds of acts of physical violence around the world. In particular, human rights organizations and cybersecurity experts have expressed concerns about the use of spyware largely developed by companies in Europe, the Middle East and the U.S. — including NSO Group’s ‘Pegasus’ and QuaDream’s ‘Reign’ — being used in dozens of countries around the world, especially in Latin America. There is also an increasingly important link between governments’ use of spyware and their use of lawfare (i.e., the weaponization of legal systems or institutions) against journalists. Together, these threats have critical detrimental effects on journalists’ mental health (discussed in more detail in CNTI’s upcoming issue primer on journalist safety), leading to calls for a holistic approach to security efforts.

In addition to raising safety and psychosocial concerns, digital security threats also damage trust in the news media. In an attention economy, cyberattacks can eliminate entire business models and push audiences away by slowing or crashing websites. A lack of digital security training in newsrooms — particularly newsrooms outside of large national (and largely Western) publishers with the means to provide such training — can create chilling effects for sources and whistleblowers who increasingly fear being unintentionally exposed by journalists. These patterns are representative of the public’s broader lack of trust in the safety of personal data, including in the hands of news publishers. In cases where sources are less aware of digital security concerns, research finds journalists are often hesitant to request protective measures for fear of scaring off sources. 

These threats are expensive and difficult for newsrooms to address on their own. They are even more difficult for independent journalists or publishers operating in (or in exile from) countries with hostile governments or authoritarian regimes. Thus, collaboration among policymakers, platforms, researchers and domestic and international civil society organizations is critical to ensure digital security of the global press.

Addressing the scope of cybercrime threats through policy — both in general and for journalists and sources specifically — fundamentally depends on the definition and scope of cybercrime.

There is no single, internationally accepted definition of cybercrime or cyberattacks. The definition may include crimes dependent on the use of technology, crimes facilitated by the use of technology or both. Limitations to this definition may be set based on the seriousness of the crime or the type of case (e.g., criminal, civil, administrative or all of the above). Recent efforts to account for this, such as the United Nations’ cybercrime treaty negotiations, reveal the complexity of global consensus on and oversight of cybercrime. Too broad or ambiguous of a definition raises serious risks of abuse, threatening global journalists’ safety, freedom of expression and human rights.

At the same time, "cybercrime" is a moving target as new technologies such as AI give rise to new cyber threats. Policies that do not account for these developments cannot defend against or punish such actions. Because there is so little existing case law, it is unclear how cybercrime laws apply to the multi-billion-dollar spyware industry, which poses threats to press freedom.

Policies related to digital matters sometimes lead to unintended consequences that impact the digital security of both journalists and the general public.

For instance, some policies inadvertently make it easier for spyware to be deployed, compromise end-to-end encryption vital for press freedom, or restrict the use of virtual private networks (VPNs) on which journalists depend to do their work. It’s crucial to recognize and address these potential cyber threats when formulating digital policies to ensure the overall digital security of the independent press and the public at large.

The ability to mitigate digital security risks differs across countries and across newsrooms.

Widespread uptake of digital security tools has been limited due to a lack of resources, funding and technological infrastructure as well as how difficult it can be to navigate complex, less user-friendly technologies. Even in highly digital environments, those with less developed (or unevenly developed) infrastructures may have insufficient capacity to protect against cyber threats.

Journalistic practices and norms can, at times, be in tension with digital security practices.

The nature of journalism as a highly public-facing and accessible occupation gives rise to specific risks for those in the industry. Digital security threats include online harassment, abuse and doxing campaigns against journalists (discussed in more detail in CNTI's upcoming issue primer on journalist safety), which make the easy online availability of journalists' contact information particularly risky. At the same time, removing such information is fundamentally at odds with newsroom efforts around audience interactivity and engagement.

In 2020, research also found a broad reluctance to prioritize digital security among publishers or editors, noting an emphasis on physical security over information security, conflicts with IT teams over recommended security measures such as virtual private networks (VPNs) and perceptions that security is an individual — rather than a collective — issue. Journalists are also often hesitant to become the story, leading to less visibility for these critical issues.

In parallel with the rise of other threats to press freedom and journalists' security – particularly in the wake of the Snowden revelations – a burgeoning academic field has turned its attention to questions around the digital and information security of the global press. Academic and public attention to cybercrime, ransomware and spyware has also grown, both in the wake of prominent attacks and amid growing concerns about the adoption of generative AI technologies in cyberattacks – for instance, leveraging these systems to manipulate journalists' likenesses – and AI's broader threats to digital privacy and security (see CNTI's issue primer on AI in journalism).

It is clear from this body of work that journalists and their sources in many countries are increasingly concerned about, or actively experiencing, a range of digital threats (whether from domestic, foreign or unknown actors), but for several cultural, institutional and individual-level reasons, this does not often result in changes to practices or policies to counter such threats.

Further, work by organizations including Forbidden Stories, the University of Toronto's Citizen Lab and the Committee to Protect Journalists has tracked at least 180 cases of spyware targeting journalists, particularly by clients of NSO Group. Other research has described the unique threats digital surveillance poses to (1) investigative journalists and (2) marginalized people and communities, including women, queer and gender-nonconforming people and people of color. This work, as noted earlier, has revealed the relationship between journalists' digital presence and offline abuse, including but not limited to physical violence. These risks are often amplified in non-Western regions as well as in areas of conflict. (We will discuss the related issue of online harassment in detail in a separate issue primer.)

Research on the targeting of journalists (and the public more broadly) using digital surveillance or “dataveillance,” hacking and spyware often focuses specifically on:

  • How journalists perceive and adapt to information security challenges, including their protection of and secure communication with sources and whistleblowers.
  • How digital intimidation tactics threaten press freedom and journalists’ ability to deliver stories in the public interest.
  • The influence of new technologies and the rise of “dataveillance” by state and corporate actors.

While this research speaks to the global nature of digital security threats to an independent press, future research should continue to examine how journalists, technology companies, researchers and policymakers can collaborate to defend against these threats, including tracking trends and sharing practices. Currently, many conversations about how to address these threats also lack global evidence. For instance, it is unclear how, or to what extent, digital security technologies such as encryption software have been implemented by journalists outside of Western countries. It is also important to understand how communication between journalists and their sources has evolved in an increasingly digital but often less safe environment.

Further, a lack of resources continues to play a critical role in news publishers’ and journalists’ ability to protect against and respond to digital security threats. Digital forensics efforts by organizations such as Citizen Lab and Amnesty Tech illustrate the critical role intermediaries play in creating networks, developing resources and establishing support mechanisms to protect the digital security of the global press. What other areas within this ecosystem must be strengthened to account for these growing threats, and what could those resources or collaborations look like?

Cybercrime has become a growing concern among policymakers in many parts of the world. As of 2021, the UN Conference on Trade and Development found that 156 countries (80%) had enacted cybercrime legislation, though adoption rates vary by region. 

However, cybercrime policy (including but not limited to “cyber libel,” “cyberterrorism” and “online hate speech” laws) does not always account for — and at times directly threatens — the digital safety of journalists. Often, cybercrime policy efforts are led by countries’ security or banking sectors, leading to policymaking that may not take into account — or be at odds with — international standards for press freedom and privacy. Research has found that a majority of cybercrime laws include few safeguards to protect against investigatory overreach and incorporate provisions with vague or broad wording that can be used to target journalists, thus threatening an independent press and free expression.

Over the past two decades, legal frameworks established to protect an independent press and the confidentiality of journalistic sources and information have been under threat in many parts of the world. New legislation and policies, including national security or anti-terrorism legislation, override and/or contradict existing protections. Other policies pressure or force digital intermediaries to provide private user data. When legislation does not adequately account for new uses of, or technological tools for, digital data by journalists and sources, it will not provide them with necessary legal protections, thus making forward-thinking policymaking critical.

Broader debates around who qualifies as a ‘journalist’ and what qualifies as ‘journalism’ also play an important role in cyber policy, as they affect who receives which legal protections. Do the same data or source protections, for instance, extend to freelancers or citizen journalists, or to international journalists doing cross-border or conflict reporting? A lack of definitional clarity in policymaking can lead to inconsistent or unequal entitlement to protections.

Global experts have called for collaborative approaches to global cybersecurity regulations through private and public sector coordination as well as strengthened international data protection frameworks. Such frameworks loosely exist, to some extent, within agreements to advance citizens’ privacy rights by the Council of Europe and the Organisation for Economic Cooperation and Development (OECD) and as a part of efforts to establish a global cybercrime treaty by the United Nations’ Office on Drugs and Crime’s Global Programme on Cybercrime. However, these protections are not currently codified and do not account for the unique digital security threats posed to journalists and other sectors of civil society. Harmonizing these standards cross-nationally could also promote the development of more resilient digital infrastructures – though, as illustrated in recent UN efforts, consensus on definitional considerations and cross-border oversight are difficult to achieve, particularly between democratic societies and authoritarian regimes.

Finally, it is critical at both the supranational and national levels to have open legislative processes around cybercrime or cybersecurity so that the voices of the news media and journalists, as well as civil society more broadly, are taken into account from the start.

Notable Articles & Statements


Vietnam tried to hack U.S. officials, CNN with posts on X, probe finds
Washington Post (October 2023)

Investigation finds Russian journalist Galina Timchenko targeted by Pegasus spyware
Committee to Protect Journalists (September 2023)

Dissecting the UN Cybercrime Convention’s threat to coders’ rights at DEFCON
Electronic Frontier Foundation (August 2023)

Khashoggi’s widow sues Israeli firm over spyware she says ruined her life
Washington Post (June 2023)

TikTok tracked UK journalist via her cat’s account
BBC (May 2023) 

FBI takes down Russian computer malware network that attacked NATO nations, journalists
CNBC (May 2023)

Abusive spyware ban: No press freedom without journalist safety
Tech Policy Press (May 2023)

Why does the global spyware industry continue to thrive? Trends, explanations, and responses
Carnegie Endowment for International Peace (March 2023)

Digital safety: Using online platforms safely as a journalist
Committee to Protect Journalists (November 2022) 

Ex-Twitter exec blows the whistle, alleging reckless and negligent cybersecurity policies
CNN (August 2022)

Why journalism needs information security
Reuters Institute for the Study of Journalism (April 2022)

My journey down the rabbit hole of every journalist’s favorite app
Politico (February 2022)

AI risk and cybersecurity
Research ICT Africa (February 2022)

Digital security do’s and don’ts for journalists
International Journalists’ Network (April 2020)

Basic steps to enhance your privacy and security within the digital ecosystem
iWatch Africa (July 2019) 

Here are 12 principles journalists should follow to make sure they’re protecting their sources
NiemanLab (January 2019)

Five digital security tools to protect your work and sources
International Consortium of Investigative Journalists (January 2018)

Key Institutions & Resources

Carnegie Endowment for International Peace: Offers a dataset on “Mapping the Shadowy World of Spyware and Digital Forensics Sales” ranging from 2011–2023.

Center for Democracy & Technology: Nonprofit organization aiming to promote solutions for internet policy challenges.

Center for International Media Assistance (CIMA): Initiative of the U.S. National Endowment for Democracy dedicated to improving U.S. efforts to promote independent media in developing countries around the world.

Centre for Freedom of the Media: University of Sheffield’s interdisciplinary research center with global outreach on issues of media freedom and journalism safety, including a Journalism Safety Research Network.

Citizen Lab: Investigates the prevalence and impact of digital espionage operations against civil society groups.

Committee to Protect Journalists’ Emergencies Response Team: Provides comprehensive, life-saving support to journalists and media support staff worldwide and offers digital and physical safety kits and resources. 

EFF Surveillance Self-Defense: Electronic Frontier Foundation expert guide to protect from online spying.

Freedom House: Publishes an annual Freedom on the Net report, including survey and analysis of internet freedom around the world and country-level assessments.

Global Encryption Coalition: Promotes and defends encryption in key countries and multilateral fora where it is under threat.

Global Initiative Against Transnational Organized Crime: Independent civil-society organization aimed at strategies and responses to organized crime (including cybercrime).

Global Investigative Journalism Network: Hub for investigative reporters aiming to spread, strengthen and support in-depth, watchdog journalism worldwide, offering a Journalist Security Assessment Tool and a Reporter’s Guide to Investigating Digital Threats.

Internews (including SAFETAG and MONITOR programs): Independent, nonprofit media development and support organization.

Iraqi Network for Social Media: In partnership with the Human Rights Office of the United Nations Assistance Mission for Iraq, offers an online user guide for online protection and digital security for human rights defenders and activists.

iWatch Africa: Non-governmental media and policy organization tracking digital rights in Africa.

Jamii Forums: East and Central African secure whistleblowing platform that promotes accountability and transparency in Tanzania.

Open Knowledge Foundation: Provide services to create, manage and publish open data.

Organization for Security and Co-Operation in Europe: Intergovernmental organization addressing a wide range of security-related concerns.

Organized Crime & Corruption Reporting Project: Investigative reporting platform for a worldwide network of independent media centers and journalists.

RSF Resource for Journalists’ Safety: Reporters Without Borders cyber-surveillance guide.

Safety of Journalists: Platform with useful resources on journalists’ safety from academia, civil society and international organizations.

Security Lab at Amnesty International: Multi-disciplinary team supporting journalists, activists and civil society organizations at risk from targeted digital attacks.

Tactical Tech’s guide on holistic security: A practical tool for in-depth risk assessment employing a holistic approach, integrating self-care, well-being, digital security and information security into traditional security management practices.

Tech Policy Lab: Interdisciplinary collaboration at the University of Washington that aims to enhance technology policy through research, education and thought leadership.

UNCTAD: Tracks cybercrime legislation worldwide.

UNODC Global Programme on Cybercrime: UN-mandated programme to assist Member States in their struggle against cyber-related crimes through capacity building and technical assistance.

USC Election Cybersecurity Initiative: Nonpartisan independent project to help educate and protect U.S. campaigns and elections.

Notable Voices

Sadie Creese, Professor of Cyber Security, University of Oxford

Philip Di Salvo, Lecturer, Università della Svizzera Italiana

Ron Deibert, Founder and Director, Citizen Lab

Roger Dingledine, President and Co-Founder, Tor Project

Paula Fray, Founder, frayintermedia

Tanveer Hasan, Executive Director, Centre for Internet & Society

Jennifer Henrichsen, Assistant Professor, Washington State University 

Mallory Knodel, Chief Technology Officer, Center for Democracy & Technology

Carlos Lauría, Senior Americas Program Coordinator, Committee to Protect Journalists

Nayelly Loya Marín, Head of Programme, UNODC Global Programme on Cybercrime

Maxence Melo, Co-Founder & Executive Director, JamiiForums

Michael Nelson, Senior Fellow, Carnegie Endowment

Riana Pfefferkorn, Research Scholar, Stanford Internet Observatory

Erica Portnoy, Senior Staff Technologist, Electronic Frontier Foundation

Julie Posetti, Global Director of Research, International Center for Journalists

Ryan Powell, Head of Innovation and Media Business, International Press Institute

Rana Sabbagh, Senior Editor, Middle East/North Africa, Organized Crime & Corruption Reporting Project 

Drew Sullivan, Co-Founder and Publisher, Organized Crime & Corruption Reporting Project 

Joel Simon, Founding Director, Journalism Protection Initiative

Recent & Upcoming Events

RightsCon
June 5–8, 2023 – San José, Costa Rica

Chatham House Cyber2023
June 14, 2023 – London, United Kingdom

UNODC Sixth Session of the Ad Hoc Committee
August 21–September 1, 2023 – New York, United States

Internet Governance Forum 2023
October 8–12, 2023 – Kyoto, Japan

GovExec Data in Action Summit
December 6, 2023 – Virginia, United States

19th International Conference on Cyber Warfare and Security
March 26–27, 2024 – Johannesburg, South Africa

Artificial Intelligence in Journalism
How do we enable the benefits and manage the harms of artificial intelligence in journalism?


Developments in AI carry new legal and ethical challenges for how news organizations use AI in production and distribution, as well as for how AI systems use news content to learn. For newsrooms, generative AI tools offer benefits for productivity and innovation. At the same time, they risk inaccuracies, ethical lapses and the erosion of public trust, and they create opportunities for abuse of copyright in journalists' original work. To address these challenges, legislation will need to offer clear definitions of AI categories and specific disclosures for each. It must also grapple with the repercussions of AI-generated content for (1) copyright or terms-of-service violations and (2) people's civil liberties – repercussions that, in practice, will likely be hard to identify and enforce through policy. Publishers and technology companies will also be responsible for establishing transparent, ethical guidelines for and education on these practices. Forward-thinking collaboration among policymakers, publishers, technology developers and academics is critical.

Early forms of artificial intelligence (prior to the development of generative AI) have been used for years to both create and distribute online news and information. Larger newsrooms have long leveraged automation to streamline production and routine tasks, from generating earnings reports and sports recaps to producing tags and transcriptions. While these practices have been far less common in local and smaller newsrooms, they are increasingly being adopted there. Technology companies also increasingly use AI to automate critical tasks related to news and information, such as recommending and moderating content as well as generating search results and summaries.
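As a simple illustration of this pre-generative-AI automation, the sketch below fills a sentence template from structured earnings data – the basic pattern behind automated earnings and sports recaps. The field names, wording and figures are illustrative assumptions, not any newsroom's actual system.

```python
# Minimal template-driven "automated journalism" sketch: structured data in,
# formulaic sentence out. Field names and figures are illustrative only.
def earnings_blurb(company: str, quarter: str,
                   revenue_m: float, prior_revenue_m: float) -> str:
    change = (revenue_m - prior_revenue_m) / prior_revenue_m * 100
    direction = "rose" if change >= 0 else "fell"
    return (f"{company} reported revenue of ${revenue_m:,.1f} million in "
            f"{quarter}, which {direction} {abs(change):.1f}% from the "
            f"prior-year quarter.")

print(earnings_blurb("Example Corp", "Q2 2024", 125.4, 118.9))
# -> Example Corp reported revenue of $125.4 million in Q2 2024,
#    which rose 5.5% from the prior-year quarter.
```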

Until now, public debates around the rise of artificial intelligence have largely focused on its potential to disrupt manual labor and operational work such as food service or manufacturing, with the assumption that creative work would be far less affected. However, a recent wave of accessible – and far more sophisticated – "generative AI" systems such as DALL-E, Lensa AI, Stable Diffusion, ChatGPT, Poe and Bard has raised concerns about their potential to destabilize white-collar jobs and media work, abuse copyright (both against and by newsrooms), give the public inaccurate information and erode trust. At the same time, these technologies also create new pathways for sustainability and innovation in news production, ranging from generating summaries or newsletters and covering local events (with mixed results) to pitching stories and moderating comments sections.

As newsrooms experiment with new uses of generative AI, some of their practices have been criticized for errors and a lack of transparency. News publishers themselves are claiming copyright and terms-of-service violations by those using news content to build and train new AI tools (and, in some cases, striking deals with tech companies or blocking web crawler access to their content), while also grappling with the potential of generative AI tools to further shift search engine traffic away from news content. 
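One concrete form of the crawler blocking mentioned above is a robots.txt file that opts a site out of known AI training crawlers. The sketch below generates such a file; the user-agent tokens (GPTBot, Google-Extended, CCBot) reflect those crawlers' public documentation at the time of writing and should be verified before use, and robots.txt remains advisory – it cannot stop crawlers that choose to ignore it.

```python
# Sketch: generate a robots.txt that opts a site out of several known AI
# training crawlers. Token names should be re-checked against each crawler's
# current documentation; compliance is voluntary on the crawler's part.
AI_CRAWLERS = ["GPTBot",           # OpenAI's training crawler
               "Google-Extended",  # Google's AI-training opt-out token
               "CCBot"]            # Common Crawl

ROBOTS_TXT = "\n\n".join(f"User-agent: {bot}\nDisallow: /" for bot in AI_CRAWLERS) + "\n"

with open("robots.txt", "w", encoding="utf-8") as f:
    f.write(ROBOTS_TXT)

print(ROBOTS_TXT)
```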

These developments introduce novel legal and ethical challenges for journalists, creators, policymakers and social media platforms. They include how publishers use AI in news production and distribution, how AI systems draw from news content and how AI policy around the world will shape both. CNTI will address each of these, with a particular focus on areas that require legislation as well as those that should be part of ethical journalism practice and policy. Copyright challenges emerging from artificial intelligence are addressed here, though CNTI also offers a separate issue primer focusing on issues of copyright more broadly.


In considering legislation, it is unclear how to determine which AI news practices would fall within legal parameters, how to categorize those that do and how news practices differ from other AI uses.

What specific practices and types of content would be subject to legislative policy? How would it apply to other types of AI? What constitutes "artificial intelligence" is often contested and difficult to define, depending on how broad the scope is (e.g., whether it includes or excludes classical algorithms) and whether one uses technical or "human-based" language (e.g., "machine learning" vs. "AI"). New questions have emerged around the umbrella term "generative AI," most prominently Large Language Models (LLMs), further complicating these distinctions. These questions will need to be addressed in legislation in ways that protect citizens' and creators' fundamental rights and safety. At the same time, policymakers must consider the future risks of each system, as these technologies will continue to evolve faster than policy can reasonably keep pace with.

The quantity and type of data collected by generative AI programs introduce new privacy and copyright concerns.

Among these concerns is what data is collected and how it is used by AI tools, in terms of both the input (the online data scraped to train these tools) and the output (the automated content itself). For example, Stable Diffusion was originally trained on 2.3 billion captioned images, including copyrighted works as well as images from artists, Pinterest and stock image sites. Additionally, news publishers have questioned whether their articles are being used to train AI tools without authorization, potentially violating terms-of-service agreements. In response, some technology companies have begun discussing agreements to pay publishers for use of their content to train generative AI models. This issue raises a central question about what in our digital world should qualify as a derivative piece of content, thereby tying it to copyright rules. Further, does new AI-generated content deserve its own copyright? If so, who gets the copyright: the developers who built the algorithm or the entity that published the content?

The copyright challenges of AI (addressed in more detail in our copyright issue primer) also raise ethical dilemmas around profiting from the output of AI models trained on copyrighted creative work without attribution or compensation.

Establishing transparency and disclosure standards for AI practices requires a coordinated approach between legal and organizational policies.

While some areas of transparency may be best addressed through legal requirements (like current advertising disclosures), others will be more appropriate for technology companies and publishers to take on themselves. This means establishing their own principles, guidelines and policies for navigating the use of AI within their organizations, ranging from appropriate application to labeling to image manipulation. But these will need to fit alongside any legal requirements and be similar enough across organizations for the public to understand them. Newsroom education will also be critical, as journalists themselves are often unsure of how, or to what extent, their organizations rely on AI. For technology companies specifically, there is ongoing debate over requirements for algorithmic transparency (addressed in more detail in our algorithmic accountability issue primer) and the degree to which legal demands for this transparency could enable bad actors to hack or otherwise take advantage of the system in harmful ways.

The use of generative AI tools to create news stories presents a series of challenges around providing fact-based information to the public; how those challenges should factor into legal or organizational policies remains unsettled.

Early generative AI tools have been shown to produce content riddled with factual errors. Not only do they include false or entirely made-up information, but they are also "confidently wrong," creating convincing, polished content and offering authoritative-sounding arguments for inaccuracies. Distinguishing between legitimate and illegitimate content (or even satire) will therefore become increasingly difficult, particularly as counter-AI detection tools have so far been ineffective. Further, it is easy to produce AI-generated images or text and use them to manipulate search engine optimization results. This can be exploited by spammers who churn out AI-generated "news" content or by antidemocratic actors who create scalable and potentially persuasive propaganda and misinformation. For instance, while news publishers such as Semafor have generated powerful AI animations of Ukraine war eyewitness accounts in the absence of original footage, the same technology was weaponized by hackers to create convincing "deepfakes" of Ukrainian President Volodymyr Zelenskyy telling citizens to lay down their arms. While it is clear automation offers many opportunities to improve news efficiency and innovation, it also risks further commoditizing news and undermining public trust in it.

There are inherent biases in generative AI tools that content generators and policymakers need to be aware of and guard against.

Because these technologies are usually trained on massive swaths of data scraped from the internet, they tend to replicate existing social biases and inequities. For instance, Lensa AI – a photo-editing app that launched a viral AI-powered avatar feature – has been alleged to produce hypersexualized and racialized images. Experts have expressed similar concerns about DALL-E and Stable Diffusion, which use neural networks to turn text into imagery that can amplify stereotypes and produce fodder for sexual harassment or misinformation. The highly lauded AI text generator ChatGPT has been shown to generate violent, racist and sexist content (e.g., that only white men would make good scientists). Further, both the application of AI systems to sociocultural contexts they weren't developed for and the human labor, in places like Kenya, required to make generative AI output less toxic raise ethical issues. Finally, while natural language processing (NLP) is rapidly improving, AI tools' training on dominant languages worsens longstanding access barriers for those who speak marginalized languages around the world. Developers have announced efforts to reduce some of these biases, but the embedded nature of these biases and the global use of the tools will make mitigation challenging.

OpenAI’s ChatGPT notes some of these biases and limitations for users.

Artificial intelligence is no longer a fringe technology. Research finds a majority of companies, particularly those based in emerging economies, report AI adoption as of 2021. Experts have begun to document the increasingly critical role of AI for news publishers and technology companies, both separately and in relation to each other. And there is mounting evidence that AI technologies are routinely used both in social platforms’ algorithms and in everyday news work, though the latter is often concentrated among larger and upmarket publishers who have the resources to invest in these practices.

There are limitations to what journalists and the public understand when it comes to AI. Research shows there are gaps between the pervasiveness of AI uses in news and journalists' understandings of and attitudes toward these practices. Further, audience-focused research on AI in journalism has found that news users often cannot discern between AI-generated and human-generated content. They also perceive certain types of AI-generated news as less biased and more credible, despite ample evidence that AI tools can perpetuate social biases and enable the development of disinformation.

Much of the existing research on AI in journalism has been theoretical. Even when the work is evidence-based, it is often more qualitative than quantitative, which allows us to answer some important questions, but makes a representative assessment of the situation difficult. Theoretical work has focused on the changing role of AI in journalism practice, the central role of platform companies in shaping AI and the conditions of news work, and the implications for AI dependence on journalism’s value and its ability to fulfill its democratic aims. Work in the media policy space has largely concentrated around European Union policy debates and the role of transparency around AI news practices in enhancing trust.

Future work should prioritize evidence-based research on how AI reshapes the news people get to see – both directly from publishers and indirectly through platforms. Research focused outside the U.S. and other economically developed countries would offer a fuller understanding of how technological changes affect news practices globally. On the policy side, comparative analyses of use cases would aid in developing transnational best practices for transparency and disclosure around AI in news.

The latest wave of AI innovation has, in most countries, far outpaced governmental oversight or regulation. Regulatory responses to emerging technologies like AI have ranged from direct regulation to soft law (e.g., guidelines) to industry self-regulation, and they vary by country. Some governments, such as Russia and China, directly or indirectly facilitate – and thus often control – the development of AI in their countries. Others attempt to facilitate innovation by involving various stakeholders. Some actively seek to regulate AI technology and protect citizens against its risks. For example, when it comes to privacy, the EU's legislation has placed heavy emphasis on robust protections of citizens' data from commercial and state entities, while countries like China assume the state's right to collect and use citizens' data.

These differences reflect a lack of agreement over what values should underpin AI legislation or ethics frameworks and make global consensus over its regulation challenging. That said, legislation in one country can have important effects elsewhere. It is important that those proposing policy and other solutions recognize global differences and consider the full range of their potential impacts without compromising democratic values of an independent press, an open internet and free expression.

Legislative policies specifically intended to regulate AI can easily be weakened by a lack of clarity around what qualifies as AI, making violations incredibly hard to identify and rules hard to enforce. Given the complexity of these systems and the speed of innovation in this field, experts have called for individualized and adaptive provisions rather than one-size-fits-all responses. Recommendations for broader stakeholder involvement in building AI legislation also include engaging groups (such as marginalized or vulnerable communities) that are often most affected by its outcomes.

Finally, as the role of news content in the training of AI systems becomes an increasingly central part of regulatory and policy debates, responses to AI developments will likely need to account for the protection of an independent, competitive news media. Currently, this applies to policy debates about modernizing copyright and fair use provisions for digital content as well as collective bargaining codes and other forms of economic support between publishers and the companies that develop and commodify these technologies.

Notable Articles & Statements

RSF and 16 partners unveil Paris Charter on AI and Journalism
Reporters Without Borders (November 2023)

The legal framework for AI is being built in real time, and a ruling in the Sarah Silverman case should give publishers pause
Nieman Lab (November 2023)

These look like prizewinning photos. They’re AI fakes.
The Washington Post (November 2023)

How AI reduces the world to stereotypes
Rest of World (October 2023)

Standards around generative AI
Associated Press (August 2023)

The New York Times wants to go its own way on AI licensing
Nieman Lab (August 2023)

News firms seek transparency, collective negotiation over content use by AI makers – letter
Reuters (August 2023)

Automating democracy: Generative AI, journalism, and the future of democracy
Oxford Internet Institute (August 2023)

Outcry against AI companies grows over who controls internet’s content
Wall Street Journal (July 2023)

OpenAI will give local news millions to experiment with AI
Nieman Lab (July 2023)

Generative AI and journalism: A catalyst or a roadblock for African newsrooms?
Internews (May 2023)

Lost in translation: Large language models in non-English content analysis
Center for Democracy & Technology (May 2023)

AI will not revolutionise journalism, but it is far from a fad
Oxford Internet Institute (March 2023)

Section 230 won’t protect ChatGPT
Lawfare (February 2023)

Generative AI copyright concerns you must know in 2023
AI Multiple (January 2023)

ChatGPT can’t be credited as an author, says world’s largest academic publisher
The Verge (January 2023)

Guidelines for responsible content creation with generative AI
Contently (January 2023)

Governing artificial intelligence in the public interest
Stanford Cyber Policy Center (July 2022)

Initial white paper on the social, economic and political impact of media AI technologies
AI4Media (February 2021)

Toward an ethics of artificial intelligence
United Nations (2018)

Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms
Brookings Institution (May 2019)

Key Institutions & Resources

AIAAIC: Independent, public interest initiative that examines AI, algorithmic and automation transparency and openness.

AI Now Institute: Policy research institute studying the social implications of artificial intelligence and policy research.

Data & Society & Algorithmic Impact Methods Lab: Data & Society lab advancing assessments of AI systems in the public interest.

Digital Policy Alert Activity Tracker: Tracks developments in legislatures, judiciaries and the executive branches of G20, EU member states and Switzerland.

Global Partnership on Artificial Intelligence (GPAI): International initiative aiming to advance the responsible development of AI.

Electronic Frontier Foundation (EFF): Non-profit organization aiming to protect digital privacy, free speech and innovation, including for AI.

Institute for the Future of Work (IFOW): Independent research institute tracking international legislation relevant to AI in the workplace.

JournalismAI Project & Global Case Studies: Global initiative empowering newsrooms to use AI responsibly and offering practical guides for AI in journalism.

Local News AI Initiative: Knight Foundation/Associated Press initiative advancing AI in local newsrooms.

MIT Media Lab: Interdisciplinary AI research lab.

Nesta AI Governance Database: Inventory of global governance activities related to AI (up to 2020).

OECD.AI Policy Observatory: Repository of over 800 AI policy initiatives from 69 countries, territories and the EU.

Organized Crime & Corruption Reporting Project: Investigative reporting platform for a worldwide network of independent media centers and journalists.

Partnership on Artificial Intelligence: Non-profit organization offering resources and convenings to address ethical AI issues.

Research ICT Africa & Africa AI Policy Project (AI4D): Mapping AI use in Africa and associated governance issues affecting the African continent.

Stanford University AI Index Report: Independent initiative tracking data related to artificial intelligence.

Term Tabs: A digital tool for searching and comparing definitions of (U.S./English language) technology-related terms in social media legislation.

Tortoise Global AI Index: Ranks countries based on capacity for artificial intelligence by measuring levels of investment, innovation and implementation.

Notable Voices

Rachel Adams, CEO & Founder, Global Center on AI Governance

Pekka Ala-Pietilä, Chair, European Commission High-Level Expert Group on Artificial Intelligence

Norberto Andrade, Director of AI Policy and Governance, Meta

Chinmayi Arun, Executive Director, Information Society Project

Charlie Beckett, Director, JournalismAI Project

Meredith Broussard, Research Director, NYU Alliance for Public Interest Technology

Pedro Burgos, Knight Fellow, International Center for Journalists

Jack Clark, Policy Director, OpenAI 

Kate Crawford, Research Professor, USC Annenberg

Renée Cummings, Assistant Professor, University of Virginia

Claes de Vreese, Research Leader, AI, Media, and Democracy Lab 

Timnit Gebru, Founder and Executive Director, The Distributed AI Research Institute (DAIR)

Natali Helberger, Research Leader, AI, Media, and Democracy Lab 

Aurelie Jean, Founder, In Silico Veritas 

Francesco Marconi, Co-founder, AppliedXL

Surya Mattu, Lead, Digital Witness Lab

Madhumita Murgia, AI Editor, Financial Times

Felix Simon, Fellow, Tow Center for Digital Journalism

Edson Tandoc Jr., Associate Professor, Nanyang Technological University

Scott Timcke, Senior Research Associate, Research ICT Africa

Recent & Upcoming Events

Abraji International Congress of Investigative Journalism
June 29–July 2, 2023 – São Paulo, Brazil 

Association for the Advancement of Artificial Intelligence 2023 Conference
February 7–14, 2023 – Washington, DC

ACM CHI Conference on Human Factors in Computing Systems
April 23–28, 2023 – Hamburg, Germany

International Conference on Learning Representations
May 1–5, 2023 – Kigali, Rwanda

2023 IPI World Congress: New Frontiers in the Age of AI
May 25–26, 2023 – Vienna, Austria

RightsCon
June 5–8, 2023 – San José, Costa Rica

RegHorizon AI Policy Summit 2023
November 3–4, 2023 – Zurich, Switzerland


The post Artificial Intelligence in Journalism appeared first on Center for News, Technology & Innovation.

]]>
Building News Economic Sustainability https://innovating.news/article/building-news-sustainability/ Mon, 28 Aug 2023 14:00:00 +0000 https://innovating.news/?post_type=article&p=305 How can public policy addressing economic support for news enable independent, competitive journalism without creating political or legacy bias?

The post Building News Economic Sustainability appeared first on Center for News, Technology & Innovation.

]]>

Independent journalism is critical to functioning democracies. As the global news industry largely continues to face financial struggles, governments have responded with a range of policy initiatives to provide economic support to commercial, public and local news media. Among these, recent media bargaining legislation in Australia and Canada has become the starting point for policy debates in many parts of the world. These policy debates bring several challenges to light, including tensions between the news as a public good versus a market product, the role of public policy in compensating for market forces and the risk of increasing media dependence on, and undue influence from, governments and platforms. Policy aiming to support the economic sustainability of news media cannot be one-size-fits-all; it must be context-sensitive and protect media independence, free expression and an open internet.

For decades, the commercial business model of bundled news with embedded advertising generated high profitability for publishers. The internet gave rise to new structures for news consumption as well as new and more targeted ways for advertisers to reach people. As the digital landscape evolved and more of the public moved online, the traditional news business model became obsolete and the news industry plunged into financial crisis.

Simultaneously, the role of news publishers began to change, particularly when it came to distribution of news. While the advent of digital platforms (primarily social media and search engines) broadened news publishers’ access to potential audiences, it also left publishers more dependent on these intermediaries. 

Amid these changes, the news industry has made efforts to develop new business models built on digital subscriptions, donations, grants, memberships, events and other means of funding. While there have been some successes, the news industry has not been able to generate the levels of revenue it had previously enjoyed. Because news is critical to functioning democracies, governments and civil society have responded to economic challenges in the news industry with a range of policy initiatives aimed at providing financial support.

When it comes to commercial news media, a range of proposals to provide financial support exists, but the most notable are the recent media bargaining laws in Australia and Canada, which have inspired policy debates in many parts of the world. Earlier attempts to provide financial support for news were often based on copyright law; these newer policies rest on competition (antitrust) law, requiring digital platforms to negotiate payment to news publishers on the grounds that the platforms have financially benefited from digital advertising by providing links to publisher content without due compensation. Other efforts, such as in the U.S., aim to create exemptions to antitrust law to bring technology companies to the bargaining table.

In response to some of this legislation, technology companies such as Meta and Google have begun to shift away from news altogether, calling the legislation unworkable for a number of reasons, including concerns over their own financial liabilities. This shift away from news would almost certainly harm publishers’ (and particularly smaller publishers’) ability to reach audiences and inhibit the public’s ability to access legitimate news sources and information. 

There are also ongoing policy debates surrounding what is perhaps the most substantial form of direct government intervention in the news industry: support for public service media. In many democracies, public media sit at the center of the news ecosystem, making them key players in debates around public policy aimed at enabling independent, competitive journalism. However, public media also continue to face many of the same threats as the commercial news industry, including challenges with digital transitions, decades of budget cuts and, in some cases, political attacks. As we address later in this primer, policy discussions surrounding public versus commercial news media support are often siloed or considered at odds with one another (despite little to no evidence to date supporting the theory that public media support ‘crowds out’ or shrinks commercial news audiences). 

Attempts to regulate economic support for the news industry introduce difficult but critical questions, including:

  • The balance between news as a public good versus a market product.
  • The role of public or local media and freelance journalists in the news ecosystem.
  • The tension between content neutrality and curbing mis- and disinformation.
  • The challenge of making complex and proprietary platform infrastructures publicly governable and accountable.
  • The potential implications of government involvement in news media revenue structures.
  • The role of public policy in compensating for market forces (versus changing them) in the public interest.

It is critical to address the economic sustainability of the news industry, which includes promoting a pluralistic, diverse and innovative media ecosystem, local and public news, and broader public access to news. However, these multistakeholder conversations must be guided by evidence-based practices that reflect strengths and weaknesses of policy-driven news support, with an eye toward media independence, plurality and transparency.

These debates raise the broader question of whether, or to what extent, governments can and should save independent journalism from financial crisis.

Democratic governments are obligated to protect fundamental rights, including the basic freedom to access and share information. However, protecting free expression alone is not enough to sustain the news media. Strong legal protections of the press in countries like South Africa and the U.S. have not protected their news industries from financial struggles. Policymakers can help to enable an environment supportive of an independent, competitive press – if they are willing to commit resources to these efforts, prioritize strong governance and protect editorial independence.

Legislation must be sensitive to a country’s context and media environment, which makes replicable or one-size-fits-all approaches ineffective.

While early efforts to address the economic sustainability of news media can help inform approaches in other parts of the world, there are also lessons to be learned beyond these narrow (i.e., largely Global North) contexts. Approaches to developing domestic policy must consider the specific country's media environment and history. For example, histories of governmental press control or low levels of institutional trust may influence some countries' approaches to or perceptions of government involvement in funding or supporting news media. Other challenges, particularly for emerging democracies, may include less instructive experience, limited support and resources, weakened civil society, government control of the media associations or press councils that would shepherd such changes, or existing legal structures (e.g., laws preventing tax revenue from going toward news media).

Holistic solutions for media sustainability may not be possible, or even preferred, but piecemeal responses may make it more difficult to create lasting solutions or to measure success.

For instance, despite evidence that countries with well-funded, independent public media consistently have stronger democratic outcomes, public media (as well as local media) are often dealt with in separate policy initiatives from commercial news media. Public service media are, in theory, meant to serve the whole public, meaning that – when truly independent – they can play a key role in providing free public access to information and making attempts to combat disinformation. Countries that address commercial media sustainability but do not address public or local media sustainability risk perpetuating existing inequalities in news access and diversity and continuing to serve audiences who are already better-served.

If a primary concern driving legislation is to address news publishers’ increased dependence on digital platforms, it is especially critical to ensure that legislation does not inadvertently create more dependence on, provide more power to or constrain scrutiny of these companies.

Legislative policy that puts platforms in charge of negotiating with news organizations risks decisions made based on commercial interests. Further, if news publishers rely financially on payments from major technology companies, it would be reasonable to worry that publishers may not scrutinize these companies through news coverage as much as they would absent their financial relationships. There is also a question of how sustainable some policies may be if they create financial dependence on digital platforms that may or may not last, as digital habits and interests shift, both among the public and among platform companies who may simply choose to shift away from news content at any time.

It is important that these policy decisions do not jeopardize media independence or introduce new opportunities for undue political influence on publishers.

Legislation that puts governments in charge of determining who qualifies as a news organization and how news revenue is structured risks politically motivated decisions and loss of journalistic independence. These risks must be considered even in countries where the current government supports an independent news media, as ruling parties can – and indeed do – change, such as in Nicaragua and Serbia. Policymakers must consider the risks of media capture, including how to prevent political influence from determining which publishers succeed in the media market as well as how to ensure media plurality.

It is unclear to what degree certain policies will actually bolster news reporting or support news innovation and a diverse, competitive news environment.

Current policy approaches largely benefit legacy and large news publishers at the expense of smaller independent or local ones. The policies often lack full transparency, equal collective bargaining rights for smaller or independent publishers and other mechanisms that would ensure diverse distribution of funding. While policymakers in some countries have attempted to resolve these issues by allowing for collective bargaining, these policies usually leave smaller or independent publishers and freelancers unequally served. These smaller, typically digital-first startups are often paving the way for news innovation; thus, policies that harm them suppress the emergence of new business models that are essential to the success of the news industry as a whole. Further, even among those who agree that technology companies should pay for facilitating access to news on their platforms, there remains a lack of consensus over how or where publishers should spend that money.

Content-neutral funding initiatives can undermine efforts to curb mis- and disinformation.

Some media bargaining codes risk funding purveyors of disinformation due to a) efforts to maintain neutrality in who qualifies for funding and b) the use of publisher reach or traffic as a key criterion for bargaining structures. This challenge is exacerbated by the difficulty of defining or measuring news "quality" (see more discussion of this in a separate issue primer). It is critical that considerations of content neutrality in policy-driven economic support for news weigh the opportunities of promoting a diverse media environment against the threats of potentially incentivizing or directly funding disinformation or clickbait.

As CNTI notes in other issue primers, a breadth of research in recent years has emphasized both the global news industry’s continued struggle with the transition to a digital and mobile media environment and publishers’ evolving relationship with, and dependency on, platform companies.

One segment of this research is specifically concentrated around policy-driven economic support for news. Much of this recent work centers on three key areas:

  • Systematic evidence of the impact of platforms (and platform changes) on publishers and the public.
  • Evaluations of existing cases of media sustainability legislation, including Australia’s News Media Bargaining Code and Canada’s tax credits.
  • Evaluations of a broader set of initiatives with the aim of supporting an independent, pluralistic news media environment, including funding public service media or local news publishers.

It is clear across this work that no policy response addressing economic support for news is perfect. However, experts in this field have noted the strengths and weaknesses of various legislative efforts, consistently calling for measures of transparency and public accountability, the protection of editorial independence from undue commercial and political influence and the need to tailor solutions to a country’s specific needs. Further, this work reflects the broad range of tested and broadly supported policy initiatives that exist outside of media bargaining codes, including those that promote media pluralism and support public or local media.

As new policy initiatives aimed at media sustainability – and responses from major platform companies – unfold in real time, future work in this area can uniquely examine the impact of such legislation on media sustainability and independence as well as an informed public. One major obstacle to research in this area is limited independent researcher access to publishers’ closely held internal data (or smaller and local publishers simply lacking such data) as well as to platforms’ increasingly restrictive APIs, which CNTI explores in more detail in a separate issue primer. Related to this is a lack of, and at times conflicting, data on and research into news publishers’ advertising revenue and traffic. Collaboration among publishers, platforms, academics and civil society organizations will be critical to evaluating the success of policy models for media sustainability.

Globally, policymakers and civil society have tried a wide range of approaches to enable a sustainable and pluralistic media environment with varying success. These approaches have included but are not limited to:

  • Funding for public service media via license fees (e.g., the UK), media taxes (e.g., Germany) or from the state budget (e.g., Denmark).
  • Direct domestic subsidies (e.g., Denmark, France, Canada) to news organizations or independent funding bodies.
  • Indirect domestic subsidies, including Value-Added Tax (VAT) exemptions and tax rebates (e.g., Ghana) or credits (e.g., Canada) for publishers, editorial roles, freelance journalists or digital subscriptions.
  • Media subscription vouchers (e.g., France).
  • Competitive or selective funds (e.g., formerly New Zealand).
  • Emergency relief funds (e.g., during COVID-19).
  • International subsidies for public interest journalism.
  • Government advertising, though this can become a method of indirect censorship.
  • Extended copyright or copyright-like protections (e.g., Germany, Spain, EU).
  • Non-legislative approaches such as direct civil society or private-sector funding.

More recently, as we note earlier in this primer, adopted and proposed legislation in countries such as Australia, Canada, Brazil and the U.S. has required large platform companies to negotiate commercial deals with news publishers for their content, largely on the basis of antitrust or competition law. To proponents of these initiatives, they represent a much-needed rebalancing of power between digital platforms and the publishers whose content they distribute and monetize. As Australia’s News Media Bargaining Code in particular becomes a popular model for other governments, experts continue to note concerns, evidenced by research cited above, about issues of transparency, a lack of definitional clarity, long-term sustainability and further media dependence on major platforms. Additionally, media bargaining structures, which often primarily benefit legacy and larger publishers, risk becoming the arbiter of what news and information appears online, as well as whether news is carried there at all.

As we discuss in a separate issue primer, in many cases it is unclear how to assess what “quality” news – or even “news” more broadly – is. While some work indicates a certain degree of consensus around how to define ‘news’ across existing legislation in countries like Canada and the U.S., the provisions that structure these definitions (including publisher reach, size, geographic orientation, originality of reporting, etc.) vary and shape the outcomes of these laws.

Key Institutions & Resources

Ads for News: Initiative of non-profit coalition United for News and led by Internews enabling brands to advertise with trusted, local news media.

Center for International Media Assistance (CIMA): Initiative of the U.S. National Endowment for Democracy dedicated to improving U.S. efforts to promote independent media in developing countries around the world.

Endowment Fund for Independent Journalism (NFNZ): Czech non-governmental organization supporting editors and individuals who are engaged in serious journalism and honor the principles of liberal democracy.

Institute for Nonprofit News: Supports independent, non-profit news organizations.

Internews: Independent, nonprofit media development and support organization.

InternetLab: Independent Brazilian research center that aims to foster academic debate around issues involving law and technology, especially internet policy.

International Fund for Public Interest Media: Supports media organizations and ecosystem-level interventions across four focus regions (Africa/Middle East, Asia/Pacific, Latin America/Caribbean and Eastern Europe).

John S. and James L. Knight Foundation: Promotes research around building the future of local news in U.S. communities.

Lenfest Institute for Journalism: Supports local journalism through its focus on diversified revenue models, digital product development, and equity and representation.

Media Development Investment Fund (MDIF): Non-profit investment fund for independent media in countries where access to free and independent media is under threat.

News Sustainability Project – Industry research effort, led by the Google News Initiative and FT Strategies, to more deeply understand, measure and enable the drivers of publisher sustainability across the world.

SembraMedia: Non-profit organization helping independent digital media leaders build stronger organizations and develop sustainable business models.

Tow Center for Digital Journalism: Institute within Columbia University’s Graduate School of Journalism serving as a research and development center for the profession as a whole.

UNC Center for Innovation & Sustainability in Local Media: UNC research center aiming to grow a more equitable and sustainable future for local news, the journalists who make it and the communities that need it.

Notable Voices

Ramiro Álvarez Ugarte, Vice Director, Centro de Estudios en Libertad de Expresion y Acceso a la Informacion

Emily Bell, Founding Director, Tow Center for Digital Journalism

Colette Brin, Director, Centre d’études sur les médias

Francisco Brito Cruz, Executive Director, INTERNETLAB

Patrícia Campos Mello, Editor-at-Large and Reporter, Folha de São Paulo

Wahyu Dhyatmika, CEO, Tempo Digital

Marius Dragomir, Director, Center for Media, Data and Society

Sue Gardner, Former Executive Director, Wikimedia Foundation 

Jodie Ginsberg, President, Committee to Protect Journalists

Jonathan Heawood, Founder and Executive Director, Public Interest News Foundation 

Mijal Iastrebner, Co-Founder and Executive Director, SembraMedia

Mathew Ingram, Chief Digital Writer, Columbia Journalism Review 

Beatriz Kira, Research Fellow, University College London

Amy Kovac-Ashley, Head of National Programs, Lenfest Institute 

Irene Jay Liu, Regional Director for Asia & the Pacific, International Fund for Public Interest Media

Gaven Morris, Managing Director, Bastion Transform & Former News Director, ABC News

Rasmus Kleis Nielsen, Director, Reuters Institute for the Study of Journalism

Khadija Patel, Journalist-in-Residence, International Fund for Public Interest Media

Courtney Radsch, Senior Fellow, Center for International Governance Innovation

Anya Schiffrin, Director of Technology, Media, and Communications, Columbia University

Meera Selva, CEO, Internews Europe

Recent & Upcoming Events

Knight Media Forum 2023
John S. and James L. Knight Foundation
February 21–23, 2023 – Virtual

World News Media Congress
WAN-IFRA
June 28–30, 2023 – Taipei, Taiwan

Abraji International Congress of Investigative Journalism
Brazilian Association of Investigative Journalism (Abraji)
June 29–July 2, 2023 – São Paulo, Brazil

International Journalism Festival
April 17–21, 2024 – Perugia, Italy



The post Building News Economic Sustainability appeared first on Center for News, Technology & Innovation.

]]>
Building News Relevance https://innovating.news/article/building-news-relevance/ Sat, 26 Aug 2023 14:00:00 +0000 https://innovating.news/?post_type=article&p=235 How can the news media remain relevant, particularly with young audiences and underserved communities?

The post Building News Relevance appeared first on Center for News, Technology & Innovation.

]]>

CNTI aims to help news publishers and policymakers understand the current challenges of news relevance as a means to enabling more informed internet and media policy and a healthy information system. The importance the public places on the news media inherently impacts the digital news environment and any feasible solutions to its current challenges. Even the best internet policy cannot ensure the future of independent journalism and an open internet if the news media, at large, does not carry credibility and relevance with the audiences it strives to serve. This is particularly important when it comes to younger audiences and those in underserved communities.

The public's perception of the relevance of journalists, and of the role they play in providing a fact-based accounting of events and issues, is an essential ingredient in building policy that protects a strong, diverse and independent press, a crucial principle of democracy. Yet achieving relevance seems harder than ever.

The internet provides new opportunities for producing news, telling stories and reaching and connecting with audiences. It also poses new challenges – including entirely new structures and formats of news as well as a wider range of content and creators competing for the public’s attention, trust and value. The public’s ability to share information and opinions through social media also holds journalists accountable in ways they weren’t in the age of traditional media.

In a digital environment where publishers no longer serve as the sole gatekeepers of news and information, they must find new ways to stand out by developing relationships with audiences that reflect their lifestyles and habits. This means rethinking the very concept of news: how people define it, what news they’re looking for and where they access it. The “unbundling” of news and its financial structure are only the beginning of what has changed. 

Younger generations require particular attention. In addition to being digital natives, many also have grown up in the social media age, when news is just one of many kinds of content mixed into online spaces which include messaging apps, private groups, video games and social feeds. Information sources are global. With so many choices, access to verified and independent reporting can be difficult. With habits already different from other age groups, how will their relationships with news evolve as they move further into adulthood? More broadly, how do they differentiate news from other kinds of content? What do they look for versus happen upon? How do they recognize brands, if at all? Who do they trust, and why?

Members of historically marginalized and underserved communities also require attention. They often have well-earned skepticism of news media broadly, having been historically underrepresented in newsrooms and misrepresented in (and actively harmed by) news coverage. They face disproportionate targeting by disinformation campaigns and often have less access to news and information. Inequities in representation, coverage and access also affect people with disabilities, who are underserved by accessible and inclusive news, particularly visually impaired audiences and people with cognitive disabilities.

Many newsrooms have taken significant steps to connect with and serve marginalized and underserved communities, but there is still much ground to cover. One of the positives of the digital realm is the low cost of publishing, which has enabled the creation of niche news publishers. That said, building audiences can be incredibly challenging because it involves creating content that specific populations connect with and value.

For journalism to be relevant today, publishers must listen to their audiences and evaluate what they want and need, so they can meet those wants and needs while offering accountable, fact-based work. As important as independent, competitive journalism and an open internet are to the future of functioning societies, neither can be ensured if the press, at large, does not gain the recognition, relevance and trust of audiences in the communities it strives to serve. Publishers and journalists need to think hard about what news means in the future and how people find it, value it and understand who produces it. Neither further regulation nor revenue alone can solve this part of the equation.

One challenge publishers face is gathering research and analysis that differentiates between their audiences' expressed desires for news and their actual media practices. This research and analysis is complex and expensive, and it falls largely on the news industry and research communities.

What publishers think about news “relevance” does not always align with what the public considers relevant to them.

What audiences personally want and value from the news can be contradictory and can vary, particularly when it comes to younger and historically marginalized audiences. The mismatch between publishers’ perception and the public’s perception of what is relevant exists from the international level to the local level. For instance, research shows that large national news outlets in the U.S. often skew coverage toward a wealthy, white and liberal audience, and local U.S. newsrooms often fail to cover topics their local audiences are seeking out.

The primary challenges to producing relevant news are different in different contexts.

For instance, in politically polarized societies like the U.S. and Brazil, partisanship plays a critical role in perceptions of and trust in news media. There are also countries where history influences the public’s understanding of and relationship with news media. There is also variation in the degree of independence the news media has had from national governments. In many cases, larger institutional or cultural shifts are required before we can address broader issues of relevance through changes to news practices or content.

News publishers must make trade-offs to engage different groups.

Journalists cannot meet the needs and wants of every person. What appeals to one group might very well turn off another. This applies to everything from long-form to short-form content, and from news that carries viewpoints and opinions to news that does not. It also applies to the use of visuals, source type, and topics. Publishers seeking to appeal to a mass audience rather than smaller, niche ones that share news preferences face the greatest challenge engaging their audiences.

Many people do not see the news as central to their lives.

While the future of independent journalism is critical for open societies, it is not often central to people’s everyday lives or online experiences. People are increasingly getting news via distributed discovery (especially social media and search engines), but many use platforms primarily for other purposes and may actually be hesitant to get news from them. Further, people who are less trusting of news also pay less attention to and have fewer opinions about news practices, implying a broader lack of interest in what news is or what it should be. Persuading the public to see news as relevant will require communicating its value to people who do not consider news to be an important part of their lives.

Young people’s media habits and attitudes make them harder for news publishers to reach than older audiences.

Compared with older audiences, under-30s in particular are increasingly reliant on digital and social media, are more disconnected from news brands and have different perceptions of what news is. Younger and/or less educated audiences are more likely to avoid news because they perceive it as hard to follow or understand. Traditional media often assumes a certain amount of knowledge and provides little explanation of context, which can alienate younger or newer news consumers. And the online spaces where they find news and information keep changing. For instance, younger audiences increasingly use platforms like TikTok as both a search engine and a news source despite understanding that not all information on the app is credible and/or independent. It is critical that publishers meet audiences where they are, and with news formats they can understand, trust and connect with.

Research continues to indicate that global public interest in and trust of the news media are declining while news avoidance is rising. Many news avoiders perceive the news as irrelevant for navigating their daily lives. The question of news relevance, and the public’s perceptions of it, ties into a broad range of research topics, including media trust and engagement, the changing media environment and business models, and the public’s news choices and expectations (and those of young news audiences specifically). Together, this work – detailed at various points throughout this issue primer – paints a clear picture of the challenges facing publishers attempting to remain or become relevant to their audiences.

Moving forward, audience-centered work that quantitatively and qualitatively measures public perceptions of and attitudes toward news relevance and public trust, in non-U.S. and comparative contexts, is critical. In many countries, we simply lack evidence-based understandings of what different groups want and need from news publishers. For example, we could move beyond simple assumptions that news for younger audiences needs a “youth” slant if the solution is actually hiring young talent and presenting stories in ways and on platforms that align with their lifestyles.

Research can also push to explore new ways of thinking about news. Many of the decades-old parameters of how news is defined no longer apply, making it less clear what news means to people today. Many initiatives aiming to connect with audiences – such as explanatory, “good news,” solutions or constructive journalism – are also untested, and their effectiveness (as well as whether audiences actually want them) is still unclear. And, while recent research has told us much about how audiences (including young people) either intentionally or incidentally consume digital news, opportunities remain for further exploring how specific audiences navigate an increasingly mediated and fragmented digital news environment.

Legislation might not be explicitly designed to address the challenges of news relevance, but relevance nonetheless serves as a motivating factor behind a range of policies at the intersection of journalism and technology. This includes legislative debates addressed across several of CNTI’s issue primers:

  • Our issue primer addressing economic support for news outlines policies intended to support commercial, public and local media through various funding mechanisms, which may offer ways for journalists and publishers to focus on important connections with the public. However, it is often unclear whether these responses address, or even consider, what the public desires or how they connect with news today, both of which are central to the news media remaining relevant to audiences.
  • In our issue primer on artificial intelligence (AI) in journalism, we consider how automation offers opportunities for news productivity and innovation at the same time that it risks enabling disinformation, obscuring source attribution and undermining public trust in news. These questions of trust will continue to be influenced by how publishers, technology companies and policymakers handle AI transparency and accountability.
  • In our issue primer on addressing disinformation, we detail global concerns about the rise of false and misleading information online as well as the type of content the public tends to perceive as mis- and disinformation. These perceptions offer critical insights into which sources audiences trust and consider relevant.

Notable Articles & Statements

When it comes to audience diversity, newsrooms are asking the wrong questions
Nieman Lab (November 2023)

Choose your words wisely: The role of language in media trust
Digital Content Next (November 2023)

Young South Africans are shaping the news through community radio — via social media
Nieman Lab (October 2023)

The mirage in the trust desert: challenging journalistic transparency
Reuters Institute for the Study of Journalism (August 2023)

Strategies for building trust in news: What the public say they want across four countries
Reuters Institute for the Study of Journalism (September 2023)

From contributors, to co-producers – a guide to participatory production
On Our Radar (August 2023)

Unpacking news participation and online engagement over time
Reuters Institute for the Study of Journalism (June 2023)

Engage to build trust
American Press Institute (June 2023)

Journalists must understand the power of community engagement to earn trust
Poynter (February 2023)

Q&A: How can newsrooms better serve communities of color?
Columbia Journalism Review (February 2023)

Blind news audiences are being left behind in the data visualization revolution: Here’s how we fix that
Reuters Institute for the Study of Journalism (January 2023)

Gina Chua on how to reach underserved communities: Journalism for – not about – them
Global Investigative Journalism Network (June 2022)

The changing news habits and attitudes of younger audiences
Reuters Institute for the Study of Journalism (June 2022)

How do audiences really ‘engage’ with news?
Columbia Journalism Review (December 2019)

Journalism needs an audience to survive, but isn’t sure how to earn its loyalty
The Conversation (February 2019)

Key Institutions & Resources

American Press Institute: Non-profit organization providing insights, tools and research to advance journalism.

Center for Media Engagement (CME): Works with and conducts research on American newsrooms to promote connections between the press and the public.

International Press Institute: Global organization dedicated to promoting and protecting press freedom and the improvement of journalism practices.

Knight Foundation: Promotes research around building the future of local news in U.S. communities.

Medill Local News Initiative: Team of experts aiming to reinvent the relationship between news organizations and audiences to elevate enterprises that empower citizens.

Reuters Institute for the Study of Journalism (RISJ): Explores questions related to the future of journalism, including news engagement, relevance and trust through globally focused research.

Social Science Research Council’s MediaWell: Collects and synthesizes research on topics such as credibility and trust.

The Trust Project: International consortium aiming to amplify journalism’s commitment to transparency, accuracy, inclusion and fairness.

World Association of News Publishers (WAN-IFRA): Global organization of the world’s press aiming to protect independent media.

Notable Voices

Kamal Ahmed, Co-Founder and Editor-in-Chief, The News Movement

Raney Aronson-Rath, Producer, PBS FRONTLINE

S. Mitra Kalita, Co-Founder and CEO, URL Media

Amy Kovac-Ashley, Head of National Programs, Lenfest Institute

Luba Kassova, Researcher and Writer

Gaven Morris, Managing Director, Bastion Transform & Former News Director, ABC News

Nic Newman, Senior Research Associate, Reuters Institute for the Study of Journalism

Agnes Stenbom, Head, IN/LAB

Talia Jomini Stroud, Director, Center for Media Engagement

Benjamin Toff, Assistant Professor, University of Minnesota

Winston Utomo, Founder and CEO, IDN Media

Recent & Upcoming Events

International Journalism Festival
April 17–21, 2024 – Perugia, Italy

World News Media Congress 2024
WAN-IFRA
May 27–29, 2024 – Copenhagen, Denmark

Newsroom Summit 2023
WAN-IFRA
October 21–23, 2024 – Zurich, Switzerland

Trust Conference
Thomson Reuters Foundation
October 19–20, 2023 – London, United Kingdom



The post Building News Relevance appeared first on Center for News, Technology & Innovation.

Enhancing Algorithmic Transparency https://innovating.news/article/enhancing-algorithmic-transparency/ Fri, 25 Aug 2023 14:00:30 +0000 https://innovating.news/?post_type=article&p=310 How can public policy enhance algorithmic transparency and accountability while protecting against political or commercial manipulation?


Digital platforms have become central to how people around the world find and share news and information. Currently, each platform operates under its own rules and conventions, including what content is shown and prioritized by algorithmic infrastructures and internal company policies. Establishing legal and organizational policy to promote algorithmic transparency is one critical step toward a structure that allows for more accountability for often-opaque digital platform practices related to content selection and moderation processes. Various stakeholders – including policymakers, researchers, journalists and the public – often have different purposes and uses for transparency, and these differing needs must be thought through so that transparency serves them while protecting against the risks of political or commercial manipulation. These considerations also carry through to who is designated to regulate transparency requirements.

The internet is enmeshed in virtually every aspect of modern life. Most often, the public’s online experiences flow through digital platforms, which serve as intermediaries between the content being delivered and the individuals consuming it. That provides convenience but also raises serious concerns. A central one is how much control digital platforms have, or should have, over what people see, especially when it comes to news and information, and how transparent digital platforms’ decision-making processes, typically conducted through the use of algorithms, need to be.

Each digital platform, and its algorithmic infrastructure, operates under its own rules and conventions to determine which content is and is not allowed as well as what content is prioritized on the platform. Algorithmic choices have far-reaching implications, ranging from determining user experiences by selecting one news story over another to facilitating hate speech, disinformation and violence.

Policymakers, journalists, social media researchers, civil society organizations, private corporations and members of the public are raising awareness of and questions about what are often-opaque digital platform practices related to content selection and moderation processes. The global news industry in particular – which relies to a significant degree on digital intermediaries to reach audiences – has articulated the need for a better understanding of how algorithms rank, boost, restrict and recommend content (including, but not limited to, news) and target consumers. Without a better understanding of how these algorithms are currently built, it is difficult for publishers to work within their constraints to reach audiences or to play any role in determining how algorithms should be built to promote an informed public.

Thus, in addition to ensuring that algorithms are designed to identify fact-based, independent journalism (a topic CNTI addresses in a separate issue primer), it is critical to enhance transparency in and accountability for how digital platform algorithms function.

The key challenge is that regulating or mandating transparency is more complex than it appears, with many elements that need to be thought through. What forms of transparency are necessary and most likely to be effective for protecting and promoting access to high-quality, diverse and independent news? What does transparency mean to policymakers, journalists, researchers and the public? How can we enable algorithmic transparency in ways that protect against political and commercial manipulation or abuse and ensure user privacy? What are its limitations? What systems or entities should be beholden to these levels of transparency? And who should be empowered to set these rules? 

Addressing the questions introduced in this primer is a critical first step toward developing approaches to algorithmic accountability that support an independent, competitive press and an informed society. Policymakers and other stakeholders can benefit from a more nuanced understanding of what transparency can, and cannot, accomplish as well as what new risks could be introduced. Policymakers must be forward-thinking about how and where algorithms may be used in the future, what data could be exposed and who could ultimately gain access to it as a result of increased transparency.

One hurdle in creating valuable algorithmic transparency is ensuring that those on the receiving end have the knowledge needed to assess these processes.

A good deal of tension has arisen over how technology companies have responded to requests for independent researcher, journalist and public access to algorithmic processes or their digital trace data. Transparency policies must take into account each stakeholder’s differing needs, aims and interests and consider what forms of transparency are necessary and appropriate for each. Because algorithmic processes are complicated and often rely on technological jargon, there is a risk that, even with increased transparency, a lack of understanding of these processes will endure for some stakeholders.

“Open-source” transparency is not enough to achieve an understanding of algorithmic processes or to solve the problems of platform accountability, for two reasons. First, although some tech leaders have suggested “open-source” algorithms as an answer to transparency and accountability issues, algorithmic processes require a level of technical knowledge most stakeholders do not possess, along with the time and capacity to learn the complexities of data governance and content moderation. Second, context is critical to understanding how algorithms work; without a clear understanding of the internal policies dictating algorithmic behaviors or access to their underlying training data, raw information is not interpretable.
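To illustrate this limitation, consider a minimal, hypothetical sketch (written in Python purely for illustration; the signal names, weights and formula are assumptions, not any platform’s actual system). Even when ranking code is published, an outside reader cannot tell why one story outranks another without knowing how the weights were learned or how the internal policy score is produced:

```python
from dataclasses import dataclass

@dataclass
class Post:
    engagement: float    # normalized engagement signal (assumed)
    recency: float       # newer content scores higher (assumed)
    policy_score: float  # output of an internal moderation model, opaque to outsiders

# In a real system these weights would be learned from proprietary training data;
# the values below are placeholders for illustration only.
WEIGHTS = {"engagement": 0.6, "recency": 0.3, "policy_score": 0.1}

def rank_score(post: Post) -> float:
    """The visible part of 'open source': a weighted sum of signals."""
    return (WEIGHTS["engagement"] * post.engagement
            + WEIGHTS["recency"] * post.recency
            + WEIGHTS["policy_score"] * post.policy_score)

# Publishing this code alone does not explain the outcome below: the ordering is
# driven by weights and policy scores whose provenance remains undisclosed.
story_a = Post(engagement=0.9, recency=0.2, policy_score=0.1)
story_b = Post(engagement=0.4, recency=0.9, policy_score=0.8)
print(sorted([("story_a", rank_score(story_a)), ("story_b", rank_score(story_b))],
             key=lambda item: -item[1]))
```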

Determining the parameters of transparency is challenging to do in a single policy.

There is broad agreement that transparency is a critical component of digital platform governance and content moderation, and that “more transparency is better,” but it is not always clear what, and where, transparency is being sought. Collaboration among experts is critical to addressing the multifaceted impacts that transparency policy may have. Further, the aim of transparency differs in market-based versus state-led models, with the former aiming to empower users and the latter aiming to empower regulators. Policymakers, researchers, the news media and civil society must work together to weigh stakeholders’ needs against platforms’ capabilities and consider what regulatory structures will serve to enforce this legislation. Finally, as we address in a separate issue primer, addressing transparency on its own does not resolve disagreements over how to ensure people see fact-based, independent news content on digital platforms, nor does it address the choices individuals themselves make about what they click on or what kind of information they seek out.

Digital platform regulation needs to protect against political and commercial manipulation that could arise as a consequence of algorithmic transparency.

As discussed in our issue primer on addressing disinformation, algorithmic transparency (and researcher access to digital trace data) can both help experts understand the spread of online disinformation and how to counter it and, at the same time, introduce new risks for manipulation and abuse. Governments in many parts of the world increasingly pressure platforms to make content decisions that benefit their own political interests. There is also the risk that commercial actors or institutions manipulate algorithms to exploit virality, amplifying large quantities of clickbait and problematic content over fact-based, independent information – particularly by using new technologies like generative AI. Both introduce threats to an independent press and an informed public. Policy must safeguard against these risks and insulate regulation from political and commercial manipulation, both by ensuring oversight remains independent from governments and by separating regulatory bodies from direct involvement in content moderation decisions.

Platforms’ commercial structures are real factors to be considered, both in the way they can negatively drive internal decisions and in the value they bring to our digital environment.

Technology companies are private business entities that operate in a competitive marketplace with a certain level of value placed on their proprietary information and intellectual property. The current concealed nature of algorithmic selection and ranking, however, risks commercial incentives negatively affecting the work of journalists and the content the public receives. Crafting the best transparency policies will require a balance between these two elements, including consideration of potential unintended consequences for innovation or competition, such as deterring startups or smaller players from participation due to costs associated with adhering to regulation.

Debates around transparency introduce important opportunities and risks for the relationships between governments and platforms.

This is particularly important when it comes to who oversees or regulates algorithmic transparency and what can be asked of platforms when it comes to content moderation. Legislation that does not address these critical questions can threaten an independent press and freedom of expression (for instance, via “jawboning” practices or other legal demands of platforms). This is a consistent complexity across many of CNTI’s issue areas and speaks to the importance of considering the balance between legislative and organizational policy (including government-enforced organizational policy) in addressing these challenges.

Policy must balance transparency and accountability against user privacy concerns when applicable.

Digital platforms’ efforts to improve transparency, when they exist at all, are at times negotiated within the context of user privacy and data security concerns. While this does not affect all approaches to algorithmic transparency, questions about what user data digital platforms would be obligated to share and what personal information would be contained in that data will inevitably lead to trade-offs between digital platform transparency and user privacy.

Over the past decade, questions surrounding digital platform governance have moved to the forefront of political, legal and academic debates. Recently, a breadth of interdisciplinary research has focused on social media platforms and messaging apps central to global journalistic work as well as on the broader range of software companies and sharing or gig economy apps central to contemporary online commerce. 

Much of this research fits into two categories: governance of social media and governance by social media. Research focused specifically on digital platform algorithmic transparency represents one small segment of work in this field. 

To date, this research has been far more theoretical than empirical, largely due to a lack of data (as we note in greater detail below) and the slow-moving nature of legislative policy, but it still sheds light on the varying forms of transparency different stakeholders expect. For instance, public transparency via disclosures looks different from, and accomplishes different aims than, research transparency via data access. 

Research findings have also noted the limitations of algorithmic transparency alone as a means of establishing digital platform accountability as well as the challenges various legislative efforts face in addressing algorithmic transparency, ranging from potential free expression violations to individual privacy infringements. These limitations have led some experts to call for forms of digital platform accountability beyond algorithmic transparency. 

Collectively, this work also reveals how little has changed, both via research and policy, in almost a decade of calls for algorithmic accountability.

Looking forward, there is much that we do not yet know about digital platforms’ algorithmic infrastructures. Research in this area is critical to inform current debates about internet governance. Experts have called for independent researchers and civil society to be granted more access to the inner workings of digital platform technology companies, the algorithms they develop and the trace data they collect – particularly as digital platforms steadily move away from more open models of free data sharing through their APIs. Of course, what access to give to whom is itself part of the debate about transparency. Still, access, when it has the appropriate safeguards, can help to better inform policy-making and contribute to democratic processes as a form of checks and balances on government and corporate power.

Content moderation is the element of transparency that has, to date, received the most policy attention. A wide range of policy initiatives around the world have begun, albeit slowly, to regulate content moderation processes and/or government takedown requests on digital platforms, which introduce opportunities and risks. We address some of these in a separate issue primer.

Legislative policy around addressing algorithmic transparency and accountability more broadly is still nascent. While some technology companies have taken steps internally to improve organizational transparency, governments are beginning to consider policy initiatives that require specific forms of platform accountability, with a focus on (1) supporting research access to data, (2) protecting user data privacy and (3) disclosing certain content moderation practices. 

Some experts have cautioned against the risk that poorly designed transparency laws could become a mechanism for state intervention in and control over platforms’ editorial policies. In contexts like the U.S. with strong constitutional protections of free expression, it is unclear whether the government can mandate transparency. 

The establishment of international ethical frameworks for platform accountability is complicated by the fact that most institutional structures for oversight, such as review boards or ethics committees, vary greatly by country. Nonetheless, policymakers, particularly in Europe and the U.S., have called for international cooperation when it comes to facilitating transparency and access to cross-platform research. Experts have also called for multi-stakeholder and (supra-)national legislative efforts to govern digital spaces.

Policymakers need to consider that a legislative framework in any one country, even if not intended to, has the potential to influence regulation globally. For example, a rule-of-law-based approach, while effective in stable democracies, could later serve as a blueprint for suppression elsewhere.

Notable Articles & Statements

YouTube launches new watch page that only shows videos from “authoritative” news sources
Nieman Lab (October 2023)

X changes its public interest policy to redefine ‘newsworthiness’ of posts
TechCrunch (October 2023)

Platform Accountability and Transparency Act reintroduced in Senate
Tech Policy Press (June 2023)

Google opposed a shareholder proposal asking for more transparency around its AI algorithms
Business Insider (June 2023)

Meta explains how AI influences what we see on Facebook and Instagram
The Verge (June 2023)

Beyond Section 230: Three paths to making the big tech platforms more transparent and accountable
Nieman Lab (January 2023)

Declaration of principles for content and platform governance in times of crisis
AccessNow (November 2022)

“This is transparency to me”: User insights into recommendation algorithm reporting
Center for Democracy & Technology (October 2022)

Frenemies: Global approaches to rebalance the Big Tech v. journalism relationship
Techtank/The Brookings Institution (August 2022)

How social media regulation could affect the press
Committee to Protect Journalists (January 2022)

If Big Tech has the will, here are ways research shows self-regulation can work
The Conversation (February 2021)

Competition issues concerning news media and digital platforms
Organisation for Economic Co-operation and Development (December 2021)

Why am I seeing this? How video and e-commerce platforms use recommendation systems to shape user experiences
Open Technology Institute (March 2020)

No more magic algorithms: Cultural policy in an era of discoverability
Data & Society (May 2016)

Key Institutions & Resources

Africa Freedom of Information Centre (AFIC): Pan-African, membership-based civil society network and resource center promoting the right of access to information, transparency and accountability across Africa.

Algorithmic Impact Methods Lab: Data & Society lab advancing assessments of algorithmic systems in the public interest.

AlgorithmWatch: Research and advocacy organization committed to analyze automated decision-making systems and their impact on society.

ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S): Cross-disciplinary national research center for responsible automated decision-making.

Center for Democracy & Technology: Nonprofit organization aiming to promote solutions for internet policy challenges.

Centre for Media Pluralism and Media Freedom (CMPF): European University Institute research and training center, co-financed by the European Union.

European Centre for Algorithmic Transparency: European Commission center aiming to contribute to a safer online environment and to support oversight of the Digital Services Act.

iWatch Africa: Non-governmental media and policy organization tracking digital rights in Africa, including data governance.

Jordan Open Source Association: Nonprofit organization aiming to promote openness in technology and to defend the rights of technology users in Jordan.

Karlsruhe Institute for Technology (KIT): Research and education facility seeking to develop industrial applications via real-world laboratories.

Notable Voices

Chinmayi Arun, Executive Director, Yale Information Society Project

Susan Athey, Economics of Technology Professor, Stanford University

Emily Bell, Director, Tow Center for Digital Journalism, Columbia Journalism School

Guy Berger, Former Director of Policies & Strategies in Communication and Information, UNESCO

Neil W. Netanel, Professor of Law, University of California – Los Angeles

Rasmus Kleis Nielsen, Director, Reuters Institute for the Study of Journalism

Matthias Spielkamp, Executive Director, AlgorithmWatch

Recent & Upcoming Events

Abraji International Congress of Investigative Journalism
Brazilian Association of Investigative Journalism (Abraji)
June 29–July 2, 2023 – São Paulo, Brazil 

RightsCon
Access Now
June 5–8, 2023 – San José, Costa Rica

ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
Association for Computing Machinery
2024 – TBD



The post Enhancing Algorithmic Transparency appeared first on Center for News, Technology & Innovation.

Addressing Disinformation https://innovating.news/article/addressing-disinformation/ Fri, 25 Aug 2023 14:00:00 +0000 https://innovating.news/?post_type=article&p=232 How can we ensure that mechanisms to stem disinformation aren’t used to restrict press independence or free speech?


Publishers, platforms and policymakers share a responsibility to respond to growing concerns around disinformation. It is increasingly important to understand and navigate challenging trade-offs between curbing problematic content and protecting independent journalism and fundamental human rights. Efforts to stem misinformation must ensure that governments cannot determine the news that the public receives or serve as arbiters of truth or intent.

Legislation should articulate high-level goals, recognize that initiatives in one country or online context inherently impact other contexts, and delegate enforcement to independent bodies with clear structures for transparency and accountability.

The spread of false and misleading information is not a new problem and appears in many forms: online and offline, through public and private channels and across a variety of mediums. In a world with growing digital media platforms (where false and misleading information is rapidly spread and, at times, amplified), new technologies for digital manipulation, political upheavals in the Global North, coordinated election disinformation and hostile propaganda campaigns, problematic COVID-19 information, rampant denial of climate science and declining trust in institutions and news, the credibility of information the public gets online has become a global concern. 

Of particular importance, and the focus of this primer, is the impact of disinformation – false information created or spread with the intention to deceive or harm – on electoral processes, political violence and information systems around the world.

Disinformation is distinct from but often used interchangeably with terms like misinformation (which is also false, but may be benign and can be spread without the intention of harm) or malinformation (which is also intentionally harmful, but is not necessarily false). With this in mind, CNTI chooses to use the term disinformation given its (1) falsity and (2) malicious intent. This primer addresses the opportunities and challenges that come with legislative policy responses to disinformation.

The level and impact of disinformation campaigns vary widely around the world. Disinformation may come from people doctoring photos and creating memes that go viral, transnational actors and trolls seeking to sow distrust and confusion, and elected officials attempting to win elections, maintain power or incite hatred.

Governments around the world are taking action to curb disinformation: some with the goal of supporting an informed citizenry and others with the goal of undermining it. Actions taken by one entity can impact other countries when groups in one country learn from and model successful efforts elsewhere. Digital platforms and interest groups have also put in place content moderation processes to stem disinformation (though some platforms have begun to move away from these efforts), but these can vary by language and context.

Among the many well-intended legislative proposals to address disinformation, one overarching concern is that the vagueness of what constitutes disinformation (especially the difficulty of interpreting actors’ intent) can result in policy that controls the press and limits free expression. Even legislation aimed at supporting an informed citizenry can potentially lead to restrictions on both the news media and the general public within a country. Further, policies that target disinformation can easily serve as models for authoritarian regimes or antidemocratic actors to exploit. In these cases, the actual – and, at times, intended – effects are restriction of media freedom, censorship of opposing voices and control of free expression. 

Thus, it is critical to balance the opportunities and risks of policy responses to the challenge of disinformation.

A core challenge in addressing disinformation with policy is a lack of agreement on the definition of disinformation and what kinds of content constitute it.

The term itself is often interchanged with similarly opaque concepts such as misinformation, malinformation, information disorder and the particularly contested term “fake news.” Determining which content fits within each category is subject to disagreement and is often politicized. To the public, these terms are used to encompass anything from poor or sensationalized journalism and fabricated content to political propaganda and hyper-partisan content. As challenging as it may be, it is important to strive for a clear and consistent understanding of what disinformation is. Without such agreement, developing effective measures to support an informed citizenry and safeguard an independent press has the added challenge of needing to withstand differences among these labels.

Addressing disinformation is critical, but some regulatory approaches can put press freedom and human rights at great risk.

Governments’ involvement in decisions around content can allow them to serve as the arbiters of what content appears online, with potentially dangerous consequences for an independent press and free expression. Legislation may intentionally target specific groups, including journalists, political opponents and activists, and it may include loopholes that allow for suppression or censorship. Legislation intended to support an informed public may also unintentionally stem speech protected under the basic human right to express ideas. Further, legislation that is effective in one context may introduce different risks in another. Prohibiting acts of expression, particularly using vague or undefined terms, can infringe upon international human rights laws. If measures against disinformation are selectively or unequally enforced — whether by police, regulatory bodies or technology companies — they can be used as a tool for crushing political dissent, impinging upon freedom of opinion and paving the way for illegal surveillance or self-censorship.

Adopting measures such as blocks or bans to combat disinformation can allow state actors to exercise undue control over the flow of information and can isolate users from an open, global internet.

For example, when state actors apply blocks and bans to journalists, they may harm both the freedom of the press and citizens’ basic right to access and share ideas and information, both of which impact people’s participation in public and political life.

It may not be possible to develop disinformation interventions that suit all digital contexts.

The various forms, contexts and audiences in the online space introduce different (and unequal) harms and risks. For instance, users of encrypted messaging apps such as WhatsApp or Telegram in India, Brazil and elsewhere have differing rights to and expectations of privacy and reach than users of Twitter or Reddit. Even within the same platform, structured spaces can vary from public to private. Dismantling encrypted spaces, in particular, does little to combat disinformation and discourages free expression. Another major challenge is in determining which platforms and which countries fall under the parameters of a piece of legislation. This is further complicated by the fact that U.S.-based technology companies’ levels of cultural expertise and engagement lessen the further they get from the U.S., which means the effectiveness of efforts to combat disinformation vary. Finally, disjointed efforts to combat online disinformation risk contributing to a fragmented internet, in which people’s online experiences vary by country or region. We address this issue in a separate issue primer.

In recent years, as governments, platforms and funders turned their attention and investments toward policy and technical solutions to address mis- and disinformation (though this has started to wane), academic and media attention to the topic has dramatically increased. The research to date has produced helpful insights, including putting the scope of mis- and disinformation in context with other online content. The research field also has several shortcomings, however, that reveal the need for a deeper and more global approach.

Future work could provide more systematic global research needed to design more effective measures against mis- and disinformation. This includes studying the scale and impact of mis- and disinformation in countries outside of the U.S. and in comparative contexts. For instance, it is unclear whether strategies that are proven to be effective in countries with higher education and literacy levels would also apply elsewhere. There is also a need for understanding the agents and infrastructures involved in the spread of mis- and disinformation online and offline, particularly when it comes to video and image-based content as well as messaging applications. Finally, more data and research is needed to understand the effects that laws against disinformation – and related government action against platforms – have on civil liberties.

The global landscape around what legislators consider harmful content or disinformation is diverse, often complicated and reaches back centuries in some countries. Legislators’ treatment of disinformation has ranged from a desire to protect election integrity against domestic or foreign interference to obvious schemes to stifle political dissent. There has been a considerably greater effort to regulate what can be said and by whom in recent years, particularly in the wake of the COVID-19 pandemic. Efforts to respond to disinformation are critical, but policy must not set the stage for the dismantling of an independent press or an open internet. Specific areas of concern include:

  • Both highly democratic countries and authoritarian regimes increasingly regulate online discourse. The latter regularly target critical voices under the banner of tackling disinformation, often by abusing a state of crisis or emergency to justify state censorship, and often without time limits. Measures may either have vague wording and broad scope, thus intentionally or unintentionally creating room for misuse, or may be too narrow in scope to effectively combat disinformation.

  • Sanctions within the legislative framework include financial penalties, jail time, bandwidth restrictions, advertising bans and blocking, depending on whether they address individuals or companies. Several challenges remain, including how to enforce regulations across borders (if the source for disinformation comes from a wholly different legal environment) as well as how to prevent “chilling effects” and self-censorship in newsrooms for fear of punishment.

  • Even non-authoritarian governments have vastly different approaches toward what they deem illegal content. In the EU, companies hosting others’ data are liable if, upon actual knowledge of it, they fail to act and remove illegal content. This fundamentally differs from existing immunities in countries like the U.S., likely due to its historical commitment to free speech. This legal patchwork creates further challenges for transnational corporations.

  • Despite the wealth of evidence that disinformation flows both top-down and bottom-up, policy attempting to address top-down disinformation (e.g., from domestic politicians or celebrities and foreign governments) has largely been absent while there has been an overwhelming focus on bottom-up mis- and disinformation (e.g., within platforms and their users).

As disinformation receives growing attention from elected leaders and academics, there should be a similar focus on legislative attempts to address a rapidly changing information environment. As much as possible, efforts need to be rooted in a clear understanding of the actors and their differing roles as well as in protection for an independent press, freedom of expression and fundamental human rights. Both the inadequacies and best practices of existing global legislation must be discussed openly. Additionally, in rule-of-law countries, there is a need for more political and public awareness that legislation may be weaponized by authoritarian regimes, worsening already restrictive situations for human rights groups, political opposition and independent news media.

Notable Articles & Statements

Korean president’s battle against ‘fake news’ alarms critics
The New York Times (November 2023)

Chilling legislation: Tracking the impact of “fake news” laws on press freedom internationally
Center for International Media Assistance (July 2023)

Most Americans favor restrictions on false information, violent content online
Pew Research Center (July 2023)

Twitter agrees to comply with tough EU disinformation laws
The Guardian (June 2023)

Regulating online platforms beyond the Marco Civil in Brazil: The controversial “fake news bill”
Tech Policy Press (May 2023)

What’s the key to regulating misinformation? Let’s start with a common language
Poynter (April 2023)

Policy reinforcements to counter information disorders in the African context
Research ICT Africa (February 2023)

Lessons from the global South on how to counter harmful information
Herman Wasserman (April 2022)

Why we need a global framework to regulate harm online
World Economic Forum (July 2021)

How well do laws to combat misinformation work?
Empirical Studies of Conflict Project, Princeton University (May 2021)

Rush to pass ‘fake news’ laws during Covid-19 intensifying global media freedom challenges
International Press Institute (October 2020)

Disinformation legislation and freedom of expression
UC Irvine Law Review (March 2020)

Story labels alone don’t increase trust
Center for Media Engagement (2019)

A human rights-based approach to disinformation
Global Partners Digital (October 2019)

Six key points from the EU Commission’s new report on disinformation
Clara Jiménez Cruz, Alexios Mantzarlis, Rasmus Kleis Nielsen, and Claire Wardle (March 2018)

Protecting democracy from online disinformation requires better algorithms, not censorship
Council on Foreign Relations (August 2017)

Key Institutions & Resources

Center for an Informed Public: University of Washington research center translating research about misinformation and disinformation into policy, technology design, curriculum development and public engagement.

Empirical Studies of Conflict (ESOC): Multi-university consortium that identifies global disinformation campaigns and their effects on worldwide democratic elections.

EU Disinfo Lab: Independent nonprofit organization gathering knowledge and expertise on disinformation in Europe.

First Draft: Offers training, research and tools on how to combat online mis- and disinformation.

Global Disinformation Index: Nonprofit organization aiming to provide transparent, independent neutral disinformation risk ratings across the open web.

International Press Institute (IPI): Monitored media freedom violations, including policies or legislation passed against online misinformation, throughout the COVID-19 pandemic.

Laws on Expression Online: Tracker and Analysis (LEXOTA): Coalition of civil society groups that launched an interactive tool to help track and analyze government responses to online disinformation across Sub-Saharan Africa.

LupaMundi: Interactive map presenting national laws to combat disinformation in several languages.

OECD DIS/MIS Resource Hub: Peer learning platform for sharing knowledge, data and analysis of government approaches to tackling mis- and disinformation.

PEN America: Nonprofit organization aiming to protect free expression in the United States and worldwide.

Poynter’s guide to anti-misinformation actions around the world: Compiled a global guide for 2018-2019 interventions for or attempts to legislate against online misinformation.

Social Science Research Council’s MediaWell: Collects and synthesizes research on topics such as targeted disinformation.

Notable Voices

Francisco Brito Cruz, Executive Director, InternetLab

Patrícia Campos Mello, Editor-at-Large and Reporter, Folha de São Paulo

Joan Donovan, Former Research Director, Shorenstein Center on Media, Politics and Public Policy

Pedro Pamplona Henriques, Co-Founder, The Newsroom

Clara Jiménez Cruz, CEO, Maldita.es

Tanit Koch, Journalist, The New European 

Vivek Krishnamurthy, Professor, University of Ottawa

Rasmus Kleis Nielsen, Director, Reuters Institute for the Study of Journalism

Elsa Pilichowski, Director for Public Governance, OECD

Maria Ressa, CEO, Rappler

Anya Schiffrin, Director of Technology, Media, and Communications, Columbia University

Nabiha Syed, CEO, The Markup

Scott Timcke, Senior Research Associate, Research ICT Africa

Claire Wardle, Executive Director, First Draft 

Herman Wasserman, Professor, University of Cape Town

Gavin Wilde, Senior Fellow, Carnegie Endowment for International Peace

Recent & Upcoming Events

Annual IDeaS Conference: Disinformation, Hate Speech, and Extremism Online
IDeaS
April 13-14, 2023 – Pittsburgh, Pennsylvania, USA

RightsCon
Access Now
June 5–8, 2023 – San José, Costa Rica

Abraji International Congress of Investigative Journalism
Brazilian Association of Investigative Journalism (Abraji)
June 29–July 2, 2023 – São Paulo, Brazil 

Cambridge Disinformation Summit
University of Cambridge
July 27–28, 2023 – Cambridge, United Kingdom

EU DisinfoLab 2023 Annual Conference
EU DisinfoLab
October 11–12, 2023 – Krakow, Poland

Korean president’s battle against ‘fake news’ alarms critics
The New York Times (November 2023)

Chilling legislation: Tracking the impact of “fake news” laws on press freedom internationally
Center for International Media Assistance (July 2023)

Most Americans favor restrictions on false information, violent content online
Pew Research Center (July 2023)

Twitter agrees to comply with tough EU disinformation laws
The Guardian (June 2023)

Regulating online platforms beyond the Marco Civil in Brazil: The controversial “fake news bill”
Tech Policy Press (May 2023)

What’s the key to regulating misinformation? Let’s start with a common language
Poynter (April 2023)

Policy reinforcements to counter information disorders in the African context
Research ICT Africa (February 2023)

Lessons from the global South on how to counter harmful information
Herman Wasserman (April 2022)

Why we need a global framework to regulate harm online
World Economic Forum (July 2021)

How well do laws to combat misinformation work?
Empirical Studies of Conflict Project, Princeton University (May 2021)

Rush to pass ‘fake news’ laws during Covid-19 intensifying global media freedom challenges
International Press Institute (October 2020)

Disinformation legislation and freedom of expression
UC Irvine Law Review (March 2020)

Story labels alone don’t increase trust
Center for Media Engagement (2019)

A human rights-based approach to disinformation
Global Partners Digital (October 2019)

Six key points from the EU Commission’s new report on disinformation
Clara Jiménez Cruz, Alexios Mantzarlis, Rasmus Kleis Nielsen, and Claire Wardle (March 2018)

Protecting democracy from online disinformation requires better algorithms, not censorship
Council on Foreign Relations (August 2017)

Center for an Informed Public: University of Washington research center translating research about misinformation and disinformation into policy, technology design, curriculum development and public engagement.

Empirical Studies of Conflict (ESOC): Multi-university consortium that identifies global disinformation campaigns and their effects on worldwide democratic elections.

EU Disinfo Lab: Independent nonprofit organization gathering knowledge and expertise on disinformation in Europe.

First Draft: Offers training, research and tools on how to combat online mis- and disinformation.

Global Disinformation Index: Nonprofit organization aiming to provide transparent, independent, neutral disinformation risk ratings across the open web.

International Press Institute (IPI): Monitored media freedom violations, including policies or legislation passed against online misinformation, throughout the COVID-19 pandemic.

Laws on Expression Online: Tracker and Analysis (LEXOTA): Coalition of civil society groups that launched an interactive tool to help track and analyze government responses to online disinformation across Sub-Saharan Africa.

LupaMundi: Interactive map presenting national laws to combat disinformation in several languages.

OECD DIS/MIS Resource Hub: Peer learning platform for sharing knowledge, data and analysis of government approaches to tackling mis- and disinformation.

PEN America: Nonprofit organization aiming to protect free expression in the United States and worldwide.

Poynter’s guide to anti-misinformation actions around the world: A global guide to 2018-2019 interventions against, and attempts to legislate against, online misinformation.

Social Science Research Council’s MediaWell: Collects and synthesizes research on topics such as targeted disinformation.


The post Addressing Disinformation appeared first on Center for News, Technology & Innovation.

]]>
Algorithms & Quality News https://innovating.news/article/algorithms-quality-news/ Fri, 25 Aug 2023 14:00:00 +0000 https://innovating.news/?post_type=article&p=233 How can we ensure that algorithms identify and promote fact-based, independent journalism?

The post Algorithms & Quality News appeared first on Center for News, Technology & Innovation.

]]>

Digital platforms subtly guide how we create and discover content. People around the world increasingly rely on digital intermediaries for news and information, and newsrooms must now optimize online content for clicks, shareability and engagement. In this environment, ensuring that algorithmic selection incentivizes high-quality information plays an important role in promoting an informed public, protecting an independent press and enhancing platform credibility. Alongside the need for legal and organizational policy to promote platform transparency, cross-industry collaboration is critical to ensuring that platform algorithms select and prioritize fact-based, independent news content.

The display of news content on digital platforms deviates from the traditional news model by decoupling news production from news distribution. In addition to news organizations’ own digital distribution, technology companies’ algorithms distribute news by selecting, filtering, ranking and bundling news for consumers. News organizations increasingly depend on digital intermediaries to reach audiences through these means, and the public, in turn, relies on digital intermediaries to access news. Social media platforms, search engines, news aggregators and video-sharing services are becoming the dominant means for news consumption across the world. While these pathways increase news access and reach, they also introduce two challenges: the algorithmic selection process is usually opaque to news publishers and the public (who both rely on it), and the algorithmic selection results have the potential to expose the public to lower-quality news and information.

In addition to addressing transparency in how digital platform algorithms function (a topic CNTI addresses in a separate issue primer), there is a need to understand how to ensure fact-based, independent journalism rises to the top on digital platforms. Both publishers and digital platforms face challenges over how to determine the newsworthiness of editorial content. In particular, how can algorithms find, select and prioritize accurate, evidence-based content? This is important in enabling:

  • An informed public. News reporting remains critical to supporting an informed public. With the digital environment open to both well- and ill-intended actors, algorithmic selections risk delivering lower-quality or erroneous content to the public. In addition, it is possible that amid declining revenues, some publishers may attempt to prioritize and elevate content selected and amplified by platform algorithms over coverage driven by public interest.
  • Journalism’s sustainability. News publishers increasingly rely on digital platforms to reach a broader audience, to drive more traffic to their own websites or apps and to increase subscriptions and donations. Less than a quarter (22%) of news audiences say they prefer to start their news journeys with a news website or app, down 10 percentage points since 2018. Instead, social media platforms play an increasingly critical role in news consumption. Amid efforts to seek more revenue from algorithmically driven news consumption (discussed in a separate CNTI issue primer), the opportunities digital platforms provide for exposure and reach to fact-based, independent content remain critical for many publishers. There is also the real possibility of platforms shifting away from news altogether, which, at least in our current structures, would almost certainly harm publishers’ ability to reach audiences and inhibit the public’s ability to access legitimate news sources and information. 
  • Digital platforms’ relevance, trust and revenue. Although news content typically makes up only a small portion (according to Facebook, around 3-4%) of digital platform content, platforms that provide independent and fact-based news content reap the benefits of increased relevance, credibility and some revenue when they display ads next to links or snippets and collect user data for targeted advertising. News generally improves the quality and range of content on digital platforms, whose aim is to entice users to spend as much time as possible in their “walled gardens.” 

Behind the challenge of ensuring that algorithms identify and promote fact-based, independent journalism is the influence digital platforms possess through subtly guiding how society creates and discovers content. Digital platforms have evolved beyond the role of distribution channels to exert direct and indirect editorial influence (though some have begun to shift away from news). For example, in the past decade, digital platforms have incentivized particular types of content (e.g., live video, short video) and set design standards (e.g., subscription policy, anti-cloaking). As a result, editorial decision-making in a competitive, data-driven news market is increasingly based on third-party choices, resulting in newsrooms optimizing online editorial content for clicks, shareability and engagement. 

These dynamics signal the growing importance of ensuring that platform algorithms are structured in a way to identify and promote fact-based, independent journalism. There is a clear need for both legal and organizational policy when it comes to algorithmic transparency. There is also a critical need for cross-industry conversation and collaboration. There is evidence that collaboration leads to increased visibility of fact-based journalism by, for example, changing algorithms (and including human validation) to prioritize and elevate original reporting over standard ranking considerations. Is there a role for news publishers in shaping these algorithmic processes? And, what do publishers need from digital platforms to be able to serve the public and make effective and informed decisions about digital news content?
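To make the idea concrete, here is a minimal, hypothetical sketch (in Python) of how a ranking step might layer an “originality” boost on top of an engagement signal. The field names, weights and scoring formula are illustrative assumptions for this primer, not any platform’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    engagement: float            # clicks, shares and comments rolled into one signal
    is_original_reporting: bool  # e.g., set via human validation or a classifier

def rank_score(article: Article, originality_boost: float = 1.5) -> float:
    """Toy ranking score: engagement multiplied by a boost for original reporting."""
    return article.engagement * (originality_boost if article.is_original_reporting else 1.0)

articles = [
    Article("Aggregated recap of another outlet's scoop", 120.0, False),
    Article("Original investigation based on new documents", 90.0, True),
]

# With the boost, the original investigation (90 * 1.5 = 135) outranks the
# higher-engagement aggregation (120 * 1.0 = 120).
for a in sorted(articles, key=rank_score, reverse=True):
    print(f"{rank_score(a):6.1f}  {a.title}")
```

Even in this toy form, the questions raised above remain: who decides what counts as “original,” and who sets the size of the boost.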

Policy in this space will need to grapple with what quality means and how it gets assessed.

What content algorithms should promote, as well as who or what defines news “quality,” can be difficult to determine or to articulate in policy. And it often means different things to academics, to news publishers and to the public. Historically, the term “quality” was used by broadsheet newspapers and public broadcasters to distinguish their coverage from the more quantity-driven, sometimes sensationalist tabloid headlines as well as from commercial channels – though whether this distinction was ever true is questionable. It can be difficult, if not outright impossible, to label many news publishers entirely along a binary measure of “quality.” (Further, research suggests that such labels don’t measurably improve people’s news diets or reduce misperceptions.) In some contexts, the question of who determines the quality of news content can open the door to governmental overreach or abuse. Governments’ involvement in decisions around what qualifies as “quality” news content can lead to them serving as the arbiters of what does or does not appear online, potentially threatening an independent press and free expression.

Even if algorithms prioritize fact-based and independent journalism, people may still choose to consume content that is not fact-based or independent.

For instance, tabloid publishers have often argued that their popularity is evidence their content is what the public wants. Should the public be compelled to see content they don’t want to see if governments or others determine it is good for them? The disconnect between what information publishers and the public deem relevant or valuable is a challenge for legal and organizational policy attempting to address algorithmic content recommendations. Efforts must strike a balance between prioritizing fact-based content and protecting the public’s right to make their own decisions about how, or to what extent, they are informed. This disconnect is explored in more depth in a separate CNTI issue primer on news relevance.

Even if we can agree on the standards of quality journalism, the challenge of how to choose among sources remains.

For instance, many news stories repeat, repackage, aggregate or comment on previously published content. How can policy or other approaches disincentivize these practices in favor of original reporting? What role might platforms perform in this? Further, in today’s attention economy, as algorithms sift through and prioritize content from an ever-growing number of news publishers and freelancers, smaller brands are often at a disadvantage amid the prioritization of publishers’ reach and quantity. Because of their assumed public and economic value, legacy and upmarket publishers often have more access to digital platform and search engine representatives and may receive more guidance on what types of content will perform better in algorithms. Meanwhile, smaller and often local news organizations may not be able to afford the same level of digital expertise needed to best navigate news algorithms. A large proportion of digital subscriptions already go to just a few big national brands, reinforcing a “winner takes most” dynamic in the online media industry.

Much like their audiences, news publishers generally have a limited understanding of how algorithms target news consumers and rank, boost, restrict and recommend news content.

To journalists, the metrics used in these algorithms range from opaque to obscure, despite some efforts to improve transparency. When platform operators strategically make regular and unannounced changes to their algorithms, the ensuing decreases in traffic often leave newsroom, SEO and social media teams feeling at the mercy of major technology companies. These knowledge gaps make it difficult for news publishers to participate in decisions about how platform algorithms should be built to promote an informed public by recognizing and encouraging fact-based, independent journalism.

Platform algorithms are entangled with broader human choices and user preferences, so they are not necessarily designed to prioritize news content.

Platform algorithms are generally built to respond to people’s broader interests, which may or may not include news and current affairs. Individuals see customized search results and feed pushes, and it is unclear whether the public desires more uniform results and pushes.

As private business entities, platforms’ commercial incentives may not always be in sync with the aim to promote fact-based, independent journalism.

For publishers, news content is the top priority; for digital platforms, it is just one content type among many. This substantial (and widening) gap in how much importance each party attaches to news content results in different algorithmic objectives. Platforms also have commercial incentives to respond to and recommend content based on users’ digital behaviors, which may or may not include news content. These differences can lead, and have led, to conflicts between publishers and technology companies, at times culminating in news bans at publishers’ expense when technology companies feel unduly regulated. Like debates around algorithmic transparency, these conflicts demonstrate the need for cross-industry conversation and collaboration to ensure that commercial interests do not take precedence over identifying and promoting fact-based, independent news and information.

The impact of algorithms on news media has been recognized as a critical area of research. The central academic debates focus on the risks and opportunities algorithmically ranked news presents to individuals and society, digital platform accountability and the shift from mass media distribution to personalized recommendation systems. Scholars across a range of fields have increasingly analyzed the critical relationship between those who manage algorithms and those who produce news.

Researchers have found evidence of publishers adjusting – even lowering – editorial standards to boost their position in ranking and recommendation systems. Researchers have also expressed concerns about accessing, understanding and evaluating the algorithms that govern a range of news-related processes. These concerns involve the opaque, “black box” nature of these algorithms as well as the increasing suppression of researchers’ access to digital trace data (which CNTI addresses in detail in a separate issue primer).

While it is clear that platform algorithms subtly guide trends and drive how people discover content, there is little evidence of exactly how they work or what they reward or amplify. Thus, it is important to differentiate between what the data firmly reveal about algorithms and news and what there is not yet evidence to support:

  • In spite of public attention to fears of algorithmic “filter bubbles,” studies in the UK and several other countries consistently find that algorithmic selection by digital platforms generally leads to slightly more diverse news consumption (though self-selection may hinder this, particularly among political partisans). 
  • Algorithmic rankings can exert influence over how people engage with and evaluate news.
  • Some work suggests that emotional language and out-group animosity are more likely to go “viral” or be shared on social media – trends that rely on user behavior alongside platform and algorithmic design. 
  • Historically, social media platforms like Facebook overwhelmingly weighted comments as more important than other types of interactions, leading posts that explicitly or implicitly encouraged commenting (e.g., divisive content) to gain more popularity; a brief illustrative sketch follows this list. But transparency around algorithms has limitations because the code is not static: Platforms’ formulas for engagement are frequently changing.
  • Research demonstrates that platforms are frequently changing algorithms and investing in both manual and algorithmic moderation processes to minimize the spread of disinformation. At the same time, problematic or extremist content is more likely to be amplified by certain digital platforms’ recommendation systems than by others.
  • News consumers are generally skeptical of algorithmic selection of online news but are equally wary of news selection by editors and journalists. Similarly, people often perceive news produced by algorithms as equally or more credible than (or simply not discernible from) news selected by human journalists. 
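As a rough illustration of the weighting point above, the sketch below shows how assigning a heavier weight to comments can push a divisive, comment-heavy post above a post with far more likes. The interaction weights and counts are invented for illustration and do not reflect any platform’s real formula, which, as noted, changes frequently.

```python
# Hypothetical interaction weights -- illustrative only, not real platform values.
INTERACTION_WEIGHTS = {"comment": 5.0, "share": 3.0, "like": 1.0}

def engagement_score(interactions: dict) -> float:
    """Weighted sum of interactions; heavy comment weights favor posts that provoke replies."""
    return sum(INTERACTION_WEIGHTS.get(kind, 0.0) * count for kind, count in interactions.items())

calm_post     = {"like": 400, "share": 20, "comment": 10}    # 400 + 60 + 50  = 510
divisive_post = {"like": 150, "share": 30, "comment": 120}   # 150 + 90 + 600 = 840

print(engagement_score(calm_post))      # 510.0
print(engagement_score(divisive_post))  # 840.0 -> ranked higher despite far fewer likes
```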

There is opportunity for future work to serve as an intermediary between digital platforms and news organizations, pushing for the transparency needed to fully understand the complexity of algorithmic mechanisms and to find the balance between demystifying these processes to promote fact-based, independent news and information and protecting against political or commercial manipulation. On the policy side, the effects of recent digital platform legislation on newsroom practices and reach are unclear. Establishing best practices, both in digital platforms and editorial environments, will depend on the findings of research in this area.

Legislative approaches to address platform algorithms in regard to news content and distribution range from direct government intervention to self-regulation. While laws and proposals vary by country, they face a common challenge: the “pacing problem” – technology moving faster than the law.

In separate issue primers, we discuss two key groups of policy approaches addressing the impacts of algorithm-driven services. The first group appears in countries where the emphasis is to protect citizens against safety risks and to increase algorithmic transparency and accountability. The second group appears in countries like Australia, Canada, Brazil and the U.S. where the aim is to require large platform companies to negotiate commercial deals with news publishers for their content. Within both sets of approaches, it is unclear how to assess what “quality” news is. 

Legislation in these areas can greatly impact – and even supersede – what news publishers and technology companies can do to promote fact-based, independent content. In some cases, these actions would instead be determined by media bargaining structures, which often primarily benefit legacy and larger publishers. Thus, legislation may end up being the arbiter of what content appears online. Empirical evidence is needed to better understand how regulation impacts the algorithmic information landscape in practice, with intended and unintended consequences for fact-based editorial content and news consumption.


Center for Media Engagement: Research center based at the University of Texas at Austin conducting research to influence media practices for the benefit of democracy.

Centre for Media Pluralism and Media Freedom (CMPF): European University Institute research and training center, co-financed by the European Union.

NewsQ: Hacks/Hackers and Tow-Knight Center for Entrepreneurial Journalism initiative seeking to elevate quality journalism when algorithms rank and recommend news online.

Reuters Institute for the Study of Journalism: University of Oxford institute exploring the future of journalism worldwide through debate, engagement and research.

Tow Center for Digital Journalism: Institute within Columbia University’s Graduate School of Journalism serving as a research and development center for the profession as a whole.

Charlie Beckett, Director, JournalismAI Project

Emily Bell, Director, Tow Center for Digital Journalism, Columbia Journalism School

Axel Bruns, Professor of Communication and Media Studies, Queensland University of Technology

Jeff Jarvis, Director, Tow-Knight Center for Entrepreneurial Journalism, CUNY

Natalie Jomini Stroud, Director, Center for Media Engagement

Connie Moon Sehat, Director, News Quality Initiative

Rasmus Kleis Nielsen, Director, Reuters Institute for the Study of Journalism

Pier Luigi Parcu, Director, Centre for Media Pluralism and Media Freedom

Matthias Spielkamp, Executive Director, AlgorithmWatch

European media between decreasing revenues and a quest for pluralism
Centre for Media Pluralism and Media Freedom
April 28, 2023 – Florence, Italy (hybrid)

Algorithmic Competition
Organisation for Economic Co-operation and Development
June 14, 2023 – Paris, France

Information Integrity in the AI Era: (Central) European Perspective
The Aspen Institute Central Europe
September 26, 2023 – Prague, Czech Republic

4th European Data & Computational Journalism Conference 2023
ETH Zurich
June 22–24, 2023 – Zurich, Switzerland


The post Algorithms & Quality News appeared first on Center for News, Technology & Innovation.

]]>
Modernizing Copyright Law https://innovating.news/article/modernizing-copyright/ Fri, 25 Aug 2023 14:00:00 +0000 https://innovating.news/?post_type=article&p=306 How can copyright law be modernized in a way that benefits independent, competitive journalism and an open internet?

The post Modernizing Copyright Law appeared first on Center for News, Technology & Innovation.

]]>

Copyright laws must be modernized for the digital age. The methods of creating, citing and utilizing creative works have changed dramatically since current laws were written. The definition of “publisher” becomes nebulous when anyone with an internet connection can create and share their work. It is challenging to propose and debate modernization strategies without fully understanding who and what would be affected or how these challenges vary by country, which may harm journalists’ ability to participate in that process. More informed and comprehensive discussions among publishers, technology companies and policymakers are needed to structure new laws in a way that protects journalistic work while also recognizing the ways the public accesses and interacts with creative works in our digital societies.

Copyright is a type of intellectual property that describes the legal rights a creator has over a wide range of content, including books, music, film, artwork, software and images. Copyrights give creators the ability to prevent others from using their work without permission and to be compensated for approved uses by others. Use of copyrighted works is dependent on copyright laws in the country of origin as well as various international copyright conventions and treaties that have been adopted to provide clearer rights for creators in a global economy.

Journalists and news publishers create original text, photo, audio and video content. They also incorporate copyrighted content into their work by quoting, paraphrasing or providing evidence from outside content. In many countries, “fair use” and “fair dealing” provisions or “quotation rights” allow for limited use of copyrighted material in circumstances such as journalism, research and teaching without the need to obtain permission from the copyright holder. But, depending on the legal system, these allowances vary significantly.

The digital age has introduced new copyright use cases that are not sufficiently addressed in current law. For example, news aggregators, social media sites and search engines now often publish headlines or snippets of a publisher’s news story along with links to it. Another new use case emerged with the creation of generative AI, which ‘learns’ from amassed digital content created by others to produce derivative content that may or may not be considered original created work. Journalists increasingly use generative AI in news production, and the original content is also used to train AI tools.

Modernizing copyright law and fair use doctrine for the global digital age is increasingly central to an open internet and competitive news environment and raises a slew of new challenges. How will policy differentiate between snippets of licensed content in a citizen’s post or email, a news story, a freelancer’s news column, a search result, a news feed, and more? What elements of building a generative AI tool fall under fair use? How should societies define original work created in a global, digital environment? While questions surrounding AI and its use in journalism are addressed more fully in a separate issue primer, rapid technological developments reflect how important it is that copyright policy is forward-looking.

A “fair use” doctrine has not been fully integrated into international law.

In addition to countries (e.g., the Philippines, Israel, South Korea) that have different fair use criteria, and countries (e.g., Australia, Canada, South Africa) that have different systems for copyright exemptions such as “fair dealing”, there are also countries (e.g., Brazil, Mexico) that have no clear fair use or fair dealing frameworks in place. Further, “fair use” largely depends on common law or legal precedent that does not always exist in all contexts, making broad adoption of fair use exceptions particularly difficult.

Copyright restrictions may have unintended negative consequences for independent news consumption.

Research shows search engines and other aggregated news lists are major sources of traffic to news stories. If aggregators delisted outlets or opted out of sharing news altogether, news publishers would get less visibility and traffic to their sites. Further, using copyright restrictions as a means of aiding financial support for journalism may favor larger, more established publishers, leaving down-market outlets at risk of shrinking audiences and revenue. That would also leave the public with less access to and consumption of a wide range of quality news.

Determining who and what falls under copyright protection or fair use is complicated and could have lasting implications for fair use protections for journalists and the public.

Areas of concern include restrictive rights contracts being imposed upon freelancers who produce a vast amount of news content online, unfair competitive advantages for larger and/or legacy news outlets, and unclear definitions of what legally constitutes “news” or a “publisher.” For example, limiting the digital sharing of news headlines or snippets could mean that any online user sharing these – even with a link to the original content – would be subject to and in violation of that law. It could also limit what journalists themselves could use in their reporting, with particular risks for freelance journalists.

Existing and future news and technology business models clash over copyright.

Online news aggregation services and search engines often provide headlines and snippets of news stories in addition to links to the full stories housed on news websites. In the U.S., these headlines and snippets traditionally fall under the fair use rules of copyright law, leaving it up to users of aggregator sites to decide if they want to view the original content. Publishers argue that aggregators should pay them for the use of snippets, saying that tech companies’ use of the media industry’s original content for free undermines publishers’ efforts to develop online news business models. Aggregators and search engine businesses argue that their services provide a substantial amount of reach to publishers: If users follow a link to a full story, the publisher does get the value of a site visit. Part of the debate is over what level constitutes “enough traffic to the site” to count as payment.

New digital technologies (e.g., generative AI or the metaverse) are not adequately accounted for in current law or in debates on ways to modernize it.

Debates over modernizing copyright law will need to grapple with legal questions surrounding the training and output of generative AI systems. If you own the copyright for creative content used to train AI, can you have a legal claim over its output? Can you copyright AI output, and if so, who owns it? Some allege that AI art software, including Stable Diffusion and Lensa AI, exploits loopholes in EU and U.S. copyright law by developing the technology through nonprofit means to avoid licensing fees. These challenges illustrate how quickly new copyright questions will continue to emerge in the digital realm alongside technological advances.

Research on issues of copyright for a free press has largely focused on media policy, with an overwhelming emphasis on U.S. fair use doctrine and European Union (EU) copyright reform. Work in this area has indicated a number of challenges facing intellectual property holders and journalists in the digital age, particularly with the rise of content aggregation, freelance work and changing media business models. Researchers note a lack of clarity around (1) what separates legitimate information-sharing from harmful copyright infringement and (2) how to define news or set professional standards for news aggregation. While protections of creative work will be critical to fulfill the mission of a free and robust press, research also cautions against the effects of copyright protection that is too strong, including press censorship and the restriction of competition, innovation and public access to information. 

In addition to global and comparative research in this area, it would be valuable for future work to explore how digital journalists understand and navigate complex and often opaque copyright regulations, the effects of enhanced copyright regulation on news audience behaviors and the implications of such provisions for smaller and/or down-market publishers and freelance news creators.

Governments and intergovernmental bodies around the world are working to update copyright laws for the digital age and a global media environment. These efforts present new legal challenges and tensions for publishers, technology companies and those involved in the creation, flow and regulation of digital content. Much of the current work centers around the role of copyright in online searches and news aggregation. In some cases, copyright is affected indirectly through copyright-adjacent legislation, such as Australia’s News Media Bargaining Code; in others, such as the European Union’s copyright directive, copyright law has been updated directly with the aim of increasing protections for news publishers and content creators. 

As research on these policies indicates, both indirect and direct copyright legislation can carry risks to an independent press, citizen journalism and other elements of an open internet. Specifically, research finds important deficiencies in current approaches, including a lack of clarity in defining what constitutes copyright protection, and raises questions about the effectiveness of these laws in reducing online copyright infringement. Further, little has been done yet to effectively address the rise of new technologies, such as generative AI, in online content creation and distribution.


News firms seek transparency, collective negotiation over content use by AI makers – letter
Reuters (August 2023)

A journalist’s guide to Creative Commons 2023
Creative Commons (June 2023)

Clarifying copyright to enable AI research in Africa
Research ICT Africa (May 2023)

Digital distribution & lawsuit threats prompt publishers to plan for copyright compliance
Editor & Publisher (February 2023)

AI-created images lose U.S. copyrights in test for new technology
Reuters (February 2023)

Generative AI copyright concerns you must know in 2023
AI Multiple (January 2023)

International copyright’s exclusion of the Global South
Michigan Journal of International Law (2022)

Why Ottawa’s efforts to get Google and Facebook to pay for news content misses the mark
The Conversation (September 2022)

Open access and closed minds: Balancing intellectual property and public interest in the digital age
Nalaka Gunawardene (August 2022)

Frenemies: Global approaches to rebalance the Big Tech v journalism relationship
Brookings Institution (August 2022)

The metaverse, NFTs and IP rights: to regulate or not to regulate?
World Intellectual Property Organization Magazine (June 2022)

Fair use win in screenshot case is a victory for media reporting
Freedom of the Press Foundation (April 2022)

Australia pressured Google and Facebook to pay for journalism. Is America next?
Columbia Journalism Review (March 2022)

Fair use essay series
MediaWell (February 2022)

Reclaim the state: Public interest in copyright and Modern Monetary Theory
InternetLab (2022)

Content aggregation: Spreading or stealing the news?
Reporters Committee for Freedom of the Press (2012)

Center for Democracy & Technology: Nonprofit organization aiming to promote solutions for internet policy challenges.

International Association for the Protection of Intellectual Property (AIPPI): Nonprofit association dedicated to the development and improvement of laws for the protection of intellectual property.

InternetLab: Independent Brazilian research center that aims to foster academic debate around issues involving law and technology, especially internet policy.

Managing IP: Trade publication following copyright issues.

World Intellectual Property Organization (WIPO): Global forum for intellectual property policy and information, with 193 member states.

Patricia Aufderheide, University Professor, American University School of Communication

Francisco Brito Cruz, Executive Director, InternetLab

Tarleton Gillespie, Principal Researcher, Microsoft Research

Eric Goldman, Co-Director, High-Tech Law Institute; Santa Clara Law School

Brewster Kahle, Founder, Internet Archive

Larry Lessig, Founder, Creative Commons 

Mike Masnick, Founder, Copia Institute & TechDirt

Neil W. Netanel, Professor of Law, University of California – Los Angeles

Ruth L. Okediji, Professor of Law, Harvard Law School

Courtney Radsch, Senior Fellow, Center for International Governance Innovation

Daren Tang, Director General, WIPO

Rebecca Tushnet, Harvard Law School; Founder, Organization for Transformative Works

Margrethe Vestager, Executive Vice President, European Commission

Intellectual Property and Frontier Technologies
World Intellectual Property Organization
March 29–30, 2023 – Geneva, Switzerland

Assemblies of the Member States of WIPO
World Intellectual Property Organization
July 6–14, 2023 – Geneva, Switzerland

Eighth Session of the WIPO Conversation
World Intellectual Property Organization
September 19–21, 2023 – Geneva, Switzerland


The post Modernizing Copyright Law appeared first on Center for News, Technology & Innovation.

]]>
Protecting an Open Internet https://innovating.news/article/protecting-open-internet/ Tue, 22 Aug 2023 14:00:39 +0000 https://innovating.news/?post_type=article&p=307 How can we discourage the development of 'splinternets' and encourage the protection of an open internet?

The post Protecting an Open Internet appeared first on Center for News, Technology & Innovation.

]]>

An open internet infrastructure is critical to functioning, free societies. As governments around the world increasingly turn their attention to issues of internet governance – including via efforts to tackle disinformation and protect user data – the risk of “splintered” internet experiences grows. Policy frameworks should address the distinctions among different forms of fragmentation, the (limited) scenarios in which content fragmentation is justified and how to minimize the impact of fragmentation. Internet regulation that discourages the “splinternet” distributes power outside of the government, protects and promotes individual rights (regarding encrypted and personal data) and open and transparent standards, and accounts for the global nature of the internet – particularly when it comes to the rights of journalists and citizens to communicate and share information within and across borders. Support for independent online media is critical to the protection of an open, globally connected internet.

The rise of the global internet in its earliest days signaled a potential for free and open societies to transcend borders and connect people through digital civic spaces and to provide citizens with a global repository of information. The internet, as a public good, can provide access to reliable and independent news as well as exposure to diverse sources and perspectives. But amid changing geopolitical contexts, new technologies and the threat of online abuse and disinformation, policymakers around the world are increasingly leading governments into rethinking models of internet governance. 

At the core of these discussions is who controls how the global internet functions. Often, this control is negotiated between governments and technology companies that preside over online spaces. 

These political, commercial and technological pressures risk splintering the internet into a collection of different networks and user experiences based on one’s location in the world (also referred to as “fragmentation” or “balkanization”). In fact, splintering is already occurring in some places; two users may encounter wholly different internet experiences based solely on their location. While practices that result in splintering are often enacted by autocratic regimes – via rhetoric, technological developments and legislation – these types of practices are also increasingly central to internet policy debates in democratic societies. 



Regulations intended to restrict and/or control online environments can involve several tactics including content takedowns, blocked or rerouted websites or platforms, government-ordered network disruptions and/or a version of the internet entirely closed-off from outside providers or services.

In weighing the challenges of this issue, it is helpful to distinguish between two types of systems: those where a “splinternet” is intentionally sought out, often to assert state control over data and digital assets or to cut off public access to independent information, and those where a “splinternet” is an unanticipated byproduct of governments’ (or even corporations’) attempts to prevent the spread of disinformation, address legitimate online harms and/or protect citizens’ data from foreign interference. 

There are crucial differences between these two systems, but both introduce risks if they do not include safeguards that protect both user rights and a free flow of global information. The preservation of an open internet is especially critical for the global protection of an independent, competitive news media. Journalists depend on access to open digital communication channels to both document and distribute news, particularly when platforms have been infiltrated or blocked by the state. 

It also becomes increasingly difficult for journalists to get any information across fragmented digital spaces. A number of organizations that analyze internet freedom have found there are a growing number of countries with restricted access to international information sources (see the Notable Research section for more details). These are powerful threats to press freedom and safety as well as to open public access to information.

As governments consider different approaches to digital regulation, some policy responses risk facilitating a “splinternet,” while others can protect societies from it. The challenge is to ensure that the public’s ability to use the internet to create, share and access information as well as journalists’ ability to report and distribute it is protected – both within and across borders. 

At its most harmful, the “splinternet” enshrines government and/or corporate power in ways that erode human rights, including those of free expression and privacy. Greater attention to and awareness of these effects, even when unintended, will be critical for policymaking that protects and promotes an open internet.

Internet “fragmentation” does not have a singular definition, so policies responding to the risks associated with fragmentation must be tailored to the particular context.

In academic and policy circles, the concept of internet “fragmentation” is broad and often contested. However, experts frequently cite two dimensions of fragmentation: technical fragmentation, referring to systems being completely disconnected from the global internet (e.g., China’s Great Firewall), and content or user experience fragmentation, referring to individuals having different online experiences depending on where they are in the world. For the latter, users may have technical connectivity but still face restrictions in what they can access (e.g., only displaying government-approved sources in a Google search or a Twitter feed, or Netflix’s geographic controls that vary content by location). Each of these types of internet fragmentation introduces different risks, requiring different approaches to address them.

Information source: IGF’s Policy Network on Internet Fragmentation 
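As a simplified illustration of the second dimension described above, the sketch below (with invented domain names and rules) shows how the same global result set can be filtered differently depending on a user’s jurisdiction, producing “user experience” fragmentation even where technical connectivity is intact.

```python
# Invented, illustrative rules: which domains a jurisdiction removes from results.
BLOCKED_BY_JURISDICTION = {
    "country_a": {"independent-news.example", "encrypted-chat.example"},
    "country_b": set(),  # no content-level restrictions
}

def visible_results(results, country):
    """Filter one global result set differently per jurisdiction."""
    blocked = BLOCKED_BY_JURISDICTION.get(country, set())
    return [url for url in results if url not in blocked]

global_results = [
    "independent-news.example",
    "state-broadcaster.example",
    "encrypted-chat.example",
]

print(visible_results(global_results, "country_a"))  # only the state broadcaster remains
print(visible_results(global_results, "country_b"))  # the full, unfiltered result set
```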

Policymakers tasked with assessing or rethinking models of internet governance face challenges in foreseeing unintended consequences of policies if they don’t have access to technical expertise in the nuances of how the internet works.

The internet is incredibly complex, so effective policy development requires evidence-based technical discussions, especially when it comes to addressing fragmentation. These global complexities include the basic technical components of the internet as a whole, how internet censorship occurs, the feasibility of digital surveillance infrastructure and the variability of fragmentation depending on its origin point(s). Origin points of fragmentation may be technical, governmental, and/or commercial:

  • Technical: Changes in infrastructure that prevent systems from inter-operating and the internet from functioning consistently at all endpoints.
  • Governmental: Policies (or other actions) that prevent use of the internet to create, share or access information.
  • Commercial: Corporate practices that prevent use of the internet to create, share or access information.

Bans on specific digital platforms have become a common response to concerns about online safety and disinformation, but they increase the risk of internet fragmentation, threaten free expression and can encourage copycat legislation.

A growing list of governments around the world have considered or taken action against TikTok and its Beijing-based parent company ByteDance amid ongoing debates around two key issues: the potential national security risks of allowing the Chinese government to access sensitive user data and/or to use the app to distribute disinformation, and the rising concern about children’s online safety on digital platforms. Some experts argue more clarity and transparency are needed from governments when they cite evidence of cybersecurity risks, given that the amount and type of user data TikTok collects is not out of step with other platform companies or American data brokers. Proposed age restrictions on platforms risk violating free speech, splintering the online world and leading platforms to narrow their offerings to avoid legal ramifications. Outside of formal legislative processes, publishers, government agencies, intergovernmental agencies and universities have begun to restrict TikTok on employee devices.

These bans threaten free expression and an open internet on several fronts – most notably that they inhibit citizens’ fundamental rights to access and distribute information in democratic societies. Some legislation banning TikTok requires a technical surveillance infrastructure that would put privacy and civil liberties at risk. Further, this kind of legislation can serve as a template for autocratic governments. For example, Russia and Zambia have temporarily restricted citizens’ access to encrypted messaging apps such as Telegram and WhatsApp because the platforms refused to break end-to-end encryption, which would have allowed government agencies to access user data. Among the many complexities noted here is the question: Are there times when platform bans are justified? For instance, in Brazil, Telegram was blocked due to its failure to comply with a lawful request from the Brazilian Supreme Court. Considerations of context, along with any unintended consequences of policy proposals, must be taken into account in policy debates.

[Figure: World map showing that 1 in 4 of the world’s countries restrict, or have in the past restricted, messaging apps. Source: NetBlocks]

The ‘splinternet’ presents new challenges for corporations asked to comply with demands that would permit governments’ abuse of power.

At times, governments may demand (sometimes through legislative policy) that platforms provide private data and information on their users. If corporations adhere to these demands, they risk enabling government abuse of power. If they do not, they risk breaking the law. These situations have arisen in Cambodia, India, Brazil and the U.S., which encompass many different sociopolitical contexts, including democratic jurisdictions. It is crucial to acknowledge that this issue is multifaceted, and corporations often choose specific jurisdictions and legal frameworks for market and product reasons.

Protecting an open internet is not always possible in autocratic societies, but actions taken elsewhere can still have global impact.

Citizens, advocacy groups and watchdog organizations often have limited power to stop autocratic action. However, open and globally focused policy discussions raise awareness and, as such, offer a means of pushing back against abuses of power and creating greater accountability. Further, democratic governments can impact citizens in authoritarian regimes by taking internet governance actions. For example, in the wake of Russia’s invasion of Ukraine, international threats to the internet infrastructure in Russia risked cutting citizens off from accessing or sharing information outside of their country.

It is crucial to consider the impact of zero rating programs on an open internet.

Technology companies, including mobile carriers and app providers, have implemented zero rating programs (largely in developing countries) to attract customers and increase internet access. Zero rating programs, while aiming to provide access to certain services without incurring data charges, can also limit users’ experiences by creating a fragmented internet landscape. By granting free access to certain sources of content but charging for others, these programs can become the arbiters of what content is accessible, creating a tiered internet ecosystem as they promote broader adoption. In addition, many models and initiatives fall within the definition of ‘zero rating,’ and little research exists that can offer systematic evaluations of these programs. Analyzing the implications of zero rating programs is vital for understanding the challenges to achieving a connected and inclusive digital world.
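The mechanism can be sketched simply: under a zero-rating rule, traffic to favored services does not count against a user’s data allowance, while everything else does. In the hypothetical example below, the domain names, prices and billing logic are all invented for illustration, not drawn from any real program.

```python
# Invented zero-rating policy: traffic to these partner services is not billed.
ZERO_RATED = {"partner-social.example", "partner-video.example"}
PRICE_PER_MB = 0.02  # hypothetical price per megabyte in local currency

def billable_mb(session_log):
    """Sum the megabytes that count against the user's data allowance."""
    return sum(mb for domain, mb in session_log if domain not in ZERO_RATED)

session_log = [
    ("partner-social.example", 300.0),   # free under the program
    ("independent-news.example", 40.0),  # billed
    ("partner-video.example", 500.0),    # free under the program
]

print(f"Billable data: {billable_mb(session_log):.0f} MB")      # 40 MB
print(f"Cost: {billable_mb(session_log) * PRICE_PER_MB:.2f}")   # 0.80
# Only the traffic to the non-partner news site is billed, nudging users
# toward the zero-rated services and away from everything else.
```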

The contemporary internet is still not a truly universal or open network; access and language translation vary considerably around the world.

Debates around public access to information and international connectivity standards must consider existing barriers to an open, global internet. According to the World Economic Forum, as of January 2023, nearly 3 billion people still do not have internet access. For many, this is because content is only available in a few languages despite the fact that around 6,000 languages are in use today. The top 10 languages account for more than four-fifths (82%) of internet content. Even among the dominant languages, the universe of information available online differs from one language to the next. In addition to inequities in internet access and language, interconnection penetration (via the distribution of internet exchange points) and broadband access speeds vary drastically in places around the world.

Research in the fields of internet studies and governance – particularly over the past decade – has resulted in the development of a broad set of terms reflecting similar concepts including internet governancedigital sovereignty and internet fragmentation or balkanization. Some of these terms have shifted in meaning and/or been contested as societies have undergone digital transformations. That does not mean past research loses its value, but it can make it challenging for researchers to build upon one another’s work. 

To date, much of the work on internet governance has centered on authoritarian contexts (with a particular focus on China and Russia) as well as some research examining the European Union’s digital sovereignty agenda. That said, a growing segment of public-facing research conducted by independent research institutions, open internet advocacy organizations and (inter)governmental bodies (several of which we note in this primer) has focused specifically on assessing the increasing threat of the “splinternet.” This work largely relies upon expert interviews and literature reviews to arrive at a common understanding of the threats and manifestations of the “splinternet” and to recommend policies that protect an open internet.

Because these bodies of research are largely focused on either conceptualizing these issues in the context of policy or studies of critical cases, it is difficult to find systematic evidence of the scope and impact of internet governance legislation. Disagreement over what internet ‘freedom’ or ‘censorship’ entail also means there is little consensus on how to best measure them.

As platform bans become a more commonly discussed approach to addressing online safety and disinformation, research analyzing the scope and impact of these policy debates will be particularly useful. This includes how particular policy language is adopted in one context and readopted elsewhere over time and on a global scale.

The internet is now a central locus of global policymaking as governments increasingly debate how to regulate digital spaces within their own borders (and, at times, outside of them). Internet governance represents a broad range of stakeholders and mechanisms including national and international legislative policy as well as technological design, company policies and global regulatory institutions.

In many cases, policies or other actions that cause internet fragmentation stem from a push for “digital sovereignty,” or a government’s capacity to control or regulate its citizens’ internet content and the flow of data through the country’s technological infrastructures. Digital sovereignty is often legitimized as serving “the national interest,” though what that means varies drastically across contexts. For instance, national interest may refer to national security or economic purposes and can take many forms within those realms.

In some cases, legislation attempts to protect societies from a “splinternet.” In others, legislation risks encouraging a “splinternet,” whether intentionally or unintentionally. As we note throughout this primer, this includes direct attempts to develop a closed national intranet, interventions to block access to specific platforms or sources of news and information, as well as broader online governance laws that may introduce indirect consequences for free expression and an open internet. For instance, a wave of new online safety regulations in countries and regions including the European Union, Australia, New Zealand, the United Kingdom and Canada promotes digital privacy and other protections for citizens but simultaneously risks fragmenting user experiences by enhancing controls over what and where content is shown.
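
As a purely illustrative aside, one common technical means of blocking access to a specific platform is filtering at the network’s DNS resolvers. The sketch below is a hypothetical example, not a description of any particular country’s system; the blocklist and domain names are assumptions, and real deployments vary (some return error codes, others redirect users to a notice page).

```python
# Minimal, hypothetical sketch of DNS-level filtering on a national network.
# The blocklist and domain names are illustrative assumptions.
import socket

BLOCKED_DOMAINS = {"blockedplatform.example", "foreign-news.example"}

def resolve(domain: str) -> str:
    """Resolve a domain name unless it appears on the blocklist."""
    if domain in BLOCKED_DOMAINS:
        # Real deployments may instead return an error code or the address
        # of a government notice page rather than raising an exception.
        raise PermissionError(f"{domain} is not resolvable on this network")
    return socket.gethostbyname(domain)

# Users behind the filtered resolver experience a different internet than
# users elsewhere, one mechanism behind what is often called the "splinternet."
```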

Legislative efforts to protect an open internet should draw on the resources and recommendations of global internet experts. These include ensuring that governments can only collect or access online user data for transparent and legitimate purposes, promoting open standards among new technologies and platforms, protecting encryption, maintaining access to internet services and digital platforms, supporting independent online media and addressing the digital divide. It is equally critical that related policy debates at the intersection of news and technology, such as those around disinformation, artificial intelligence and algorithmic transparency, consider the unintended negative impacts legislation can have on an open and interoperable internet.

Notable Articles & Statements

Key Institutions & Resources

Notable Voices

Recent & Upcoming Events

Bypassing secrecy: New tools to facilitate more cross-border investigations with RTI
Reuters Institute for the Study of Journalism (August 2023)

Safeguarding freedom of expression and access to information
UNESCO (April 2023)

Proton launches dedicated VPN servers for access to censored Deutsche Welle
PressGazette (March 2023)

It’s the great TikTok panic – and it could accelerate the end of the internet as we know it
The Guardian (March 2023)

Misguided policies the world over are slowly killing the open internet
Fortune (February 2023)

WhatsApp launches a tool to fight internet censorship
WIRED (January 2023)

Splinternet: how geopolitics is fracturing cyberspace
Polytechnique Insights (January 2023)

Internet governance doublespeak: Western governments and the open internet
Council on Foreign Relations (January 2023)

Interoperability is important for competition, consumers and the economy
Center for Democracy & Technology (January 2023)

Internet infrastructure as an emerging terrain of disinformation
Centre for International Governance Innovation (July 2022)

Biden administration risks splinternet with Europe
Tech Policy Press (July 2022)

‘Disastrous for press freedom’: What Russia’s goal of an isolated internet means for journalists
Committee to Protect Journalists (May 2022)

The declaration for the future of the internet is for wavering democracies, not China and Russia
Lawfare (May 2022) 

A declaration for the future of the internet
U.S. White House and European Union (April 2022)

Blackouts
Rest of World (April 2022)

Quem defende seus dados?
InternetLab & Electronic Frontier Foundation (2022)

The internet is splintering
The New York Times (February 2021)

What is the splinternet?
The Economist (November 2016)

Access Now: Nonprofit organization aiming to protect and extend global digital civil rights.

Carnegie Endowment for International Peace: Nonpartisan international think tank aiming to advance international peace.

Center for Democracy & Technology: Nonprofit organization aiming to promote solutions for internet policy challenges.

Centre for International Governance Innovation: Independent, nonpartisan think tank on global governance.

Collaboration on International ICT Policy for East and Southern Africa (CIPESA): Works to promote effective and inclusive ICT policy and practice for improved governance, livelihoods and human rights in Africa.

Digital Watch Observatory: Observatory compiled by global policy experts to maintain a comprehensive, up-to-date summary of developments in digital policy.

Freedom on the Net: Freedom House’s annual analysis of global internet freedom.

Global Network Initiative & Country Legal Frameworks Resource: Non-governmental organization aiming to prevent internet censorship and protect individuals’ internet privacy rights.

Internet & Jurisdiction Policy Network: Multi-stakeholder organization addressing the tension between the cross-border internet and national jurisdictions.

Internet Governance Lab: American University research lab aiming to study the implications of internet governance for society and the global economy.

Internet Governance Forum: Global multi-stakeholder platform established by the United Nations that facilitates the discussion of internet policy issues.

InternetLab: Independent Brazilian research center that aims to foster academic debate around issues involving law and technology, especially internet policy.

Internet Society: Nonprofit advocacy organization aiming to support and promote the development of an open, globally connected internet.

Internet Engineering Task Force (IETF): Develops voluntary internet standards for users, operators and vendors to adopt.

iWatch Africa: Non-governmental media and policy organization tracking digital rights in Africa.

Jordan Open Source Association: Nonprofit organization aiming to promote openness in technology and to defend the rights of technology users in Jordan.

Media Foundation for West Africa: Regional independent non-governmental organization with a network of national partner organizations in all 16 countries in West Africa.

NetBlocks: Independent internet monitor mapping and reporting internet blocks and global connectivity in real time.

PEN America: Nonprofit organization aiming to protect free expression in the United States and worldwide.

Research ICT Africa: Research center to inform African digital policy and data governance.

Folorunso Aliu, Group Managing Director, Telnet

Jon Bateman, Policy Researcher, Carnegie Endowment for International Peace

Samantha Bradshaw, Assistant Professor, American University

Francisco Brito Cruz, Executive Director, INTERNETLAB

Laura DeNardis, Endowed Chair in Tech, Ethics, and Society, Georgetown University

Nkiru Ebenmelu, Head of Cybersecurity, Nigeria National Communication Commission

Alex Engler, Fellow in Governance Studies, The Brookings Institution

Alena Epifanova, Research Fellow, German Council on Foreign Relations

Steven Feldstein, Senior Fellow, Carnegie Endowment for International Peace

Allie Funk, Research Director for Technology and Democracy, Freedom House

Dame Wendy Hall, Executive Director, Web Science Institute

Konstantinos Komaitis, Senior Researcher, Lisbon Council

Francesca Musiani, Researcher, Centre National de la Recherche Scientifique (CNRS)

Dr. Vincent Olatunji, National Commissioner/CEO, Nigeria Data Protection Commission

Jason Pielemeier, Policy Director, Global Network Initiative

James Tager, Research Director, PEN America

Emily Taylor, CEO, Oxford Information Labs Limited

Tarah Wheeler, Senior Fellow for Global Cyber Policy, Council on Foreign Relations

Timothy Wu, Julius Silver Professor of Law, Science and Technology, Columbia Law School

“Internet for Trust” Conference
UNESCO
February 21–23, 2023 – Paris, France

RightsCon
Access Now
June 5–8, 2023 – San José, Costa Rica

Internet Governance Forum 2023
Internet Governance Forum
October 8–12, 2023 – Kyoto, Japan

The post Protecting an Open Internet appeared first on Center for News, Technology & Innovation.

]]>