Research & Other Reports Archives – Center for News, Technology & Innovation
https://innovating.news/article-type/research/

World Press Freedom Scores Fall Back to 1993 Levels – the Launch Year of World Press Freedom Day
https://innovating.news/article/world-press-freedom-scores-fall-back-to-1993-levels-the-launch-year-of-world-press-freedom-day/
May 3, 2024

May 3, 2024, marks the 31st anniversary of World Press Freedom Day, which serves as a “reminder to governments of the need to respect their commitment to press freedom… [as well as] a day of reflection among media professionals about issues of press freedom and professional ethics.”

To study where we stand in these areas, CNTI examined global trends across multiple measures over the last 31 years. The findings reveal worrying trends in global press freedom and safety. While protections from media censorship and harassment of journalists grew, on average, between 1993 and the early 2010s, these scores have since fallen below their 1993 levels.

What’s more, these recent declines cut across all four government regime types: liberal democracies, electoral democracies, electoral autocracies and closed autocracies. These findings support other research showing downward trends in press freedoms, including within government policies related to news information.

For this analysis, CNTI assessed global press freedoms using data from the Varieties of Democracy Institute (V-Dem), which has country-level metrics dating back to 1789. Two measures were of primary interest: (1) government censorship of print or broadcast media and (2) harassment of journalists. These metrics are part of CNTI’s searchable dataset of 15 measures across 179 countries, which also provides links to V-Dem’s full methodology and statistical approach. (The data examined below use the linearized transformation (i.e., _osp) variables, which form an interval scale from 0 to 4.)

An outline of the two measures is presented here:

V-Dem’s measure of government censorship asks the question, “Does the government directly or indirectly attempt to censor the print or broadcast media?” (p. 207) with the following five response options:

  • 0: Attempts to censor are direct and routine.
  • 1: Attempts to censor are indirect but nevertheless routine.
  • 2: Attempts to censor are direct but limited to especially sensitive issues. 
  • 3: Attempts to censor are indirect and limited to especially sensitive issues.
  • 4: The government rarely attempts to censor major media in any way, and when such exceptional attempts are discovered, the responsible officials are usually punished.

V-Dem’s measure of journalist harassment asks, “Are individual journalists harassed — i.e., threatened with libel, arrested, imprisoned, beaten, or killed — by governmental or powerful nongovernmental actors while engaged in legitimate journalistic activities?” (p. 209) with the following five response options:

  • 0: No journalists dare to engage in journalistic activities that would offend powerful actors because harassment or worse would be certain to occur.
  • 1: Some journalists occasionally offend powerful actors but they are almost always harassed or worse and eventually are forced to stop. 
  • 2: Some journalists who offend powerful actors are forced to stop but others manage to continue practicing journalism freely for long periods of time.
  • 3: It is rare for any journalist to be harassed for offending powerful actors, and if this were to happen, those responsible for harassment would be identified and punished.
  • 4: Journalists are never harassed by governmental or powerful nongovernmental actors while engaged in legitimate journalistic activities.

Both metrics use lower scores to denote greater harm to press freedom and journalists, while higher scores represent greater safety and protection. It is also important to note that the measure of government censorship specifies print and/or broadcast media and that the measure of journalist harassment does not capture harassment from the public.

How have media censorship and journalist harassment changed over the last 31 years? The figure below shows the yearly global average values for government censorship of print and broadcast media and for journalist harassment from the roughly 179 countries in the V-Dem dataset from 1993 to 2023. (The number of countries classified by V-Dem varies slightly year to year. See the Appendix for a yearly breakdown.) Both metrics follow similar patterns (the correlation is r = 0.96).

V-Dem’s measures (on a scale from 0 to 4, with 0 being the worst score for press freedoms and 4 the best) show that protection from both government censorship and harassment of journalists increased overall from 1993 through the early 2010s. After peaking in 2012, however, both measures decreased (down 11% and 9%, respectively), landing below their 1993 starting values. As such, global averages for both government censorship and journalist harassment were slightly lower (i.e., worse) in 2023 than in 1993.
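For readers who want to reproduce this kind of aggregation, here is a minimal sketch in Python, assuming a hypothetical country-year CSV extract of the V-Dem dataset (CNTI’s actual analysis code is linked at the end of this brief). The column names v2mecenefm_osp and v2meharjrn_osp are, to our reading of recent V-Dem codebooks, the variables for government censorship of the media and harassment of journalists; adjust them if your extract differs.

```python
import pandas as pd

# Hypothetical extract of the V-Dem country-year dataset (one row per country-year).
# v2mecenefm_osp: government censorship of print/broadcast media (0-4)
# v2meharjrn_osp: harassment of journalists (0-4)
vdem = pd.read_csv("vdem_extract.csv")

# Restrict to the 1993-2023 window used in this brief.
window = vdem[(vdem["year"] >= 1993) & (vdem["year"] <= 2023)]

# Yearly global averages across all countries classified in a given year.
yearly = window.groupby("year")[["v2mecenefm_osp", "v2meharjrn_osp"]].mean()

# Correlation between the two yearly series (the brief reports r = 0.96).
r = yearly["v2mecenefm_osp"].corr(yearly["v2meharjrn_osp"])

# Percentage change from the 2012 peak to 2023 for each measure.
decline = (yearly.loc[2023] - yearly.loc[2012]) / yearly.loc[2012] * 100

print(f"correlation between yearly averages: {r:.2f}")
print(decline.round(1))
```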

Government censorship of print or broadcast media had a global average of 2.16 in 2023, which in V-Dem’s definition means that “[a]ttempts to censor are direct but limited to especially sensitive issues.” The 2023 global average for harassment of journalists of 1.99 signals that “[s]ome journalists who offend powerful actors are forced to stop but others manage to continue practicing journalism freely for long periods of time.” 

There is certainly some variation by country, but the averages are a helpful way to understand the general state of our global society. Exploration of country-level information is available in CNTI’s searchable dataset. Below we offer an additional breakdown by regime type.

Another important consideration is the relationship between government structure and press freedoms. V-Dem provides a classification of countries’ political regimes with four categories: (1) liberal democracy, (2) electoral democracy, (3) electoral autocracy and (4) closed autocracy. The table below presents the breakdown of these categories using V-Dem’s most recent report for 2023, which classified 179 countries. Democracies constituted about 51% of governments, while autocracies made up about 49%.

Analyzing the journalist harassment scores by regime type reveals how widespread these declines are across government structures. The data are examined from 2012 through 2023 to show how scores have fallen since the global peak observed in 2012.

As evident in each of the four graphs below, declines can be seen across all four regime types: every type scores lower, on average, in 2023 than it did in 2012, with electoral democracies decreasing the least.

Liberal democracies experienced a 3.5% decrease in their scores from 2012 to 2023. Electoral democracies declined 1.6% during the same period. Electoral autocracies decreased 11%. Closed autocracies, which also consistently received the lowest (i.e., worse) baseline scores for journalist harassment, decreased 9% from 2012 to 2023. 
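A companion sketch, again assuming the hypothetical V-Dem CSV extract introduced above, shows how these per-regime percent changes could be computed using V-Dem’s “Regimes of the World” variable, v2x_regime. It illustrates the approach rather than reproducing CNTI’s code.

```python
import pandas as pd

# v2x_regime (Regimes of the World): 0 = closed autocracy, 1 = electoral autocracy,
# 2 = electoral democracy, 3 = liberal democracy.
regime_labels = {0: "closed autocracy", 1: "electoral autocracy",
                 2: "electoral democracy", 3: "liberal democracy"}

vdem = pd.read_csv("vdem_extract.csv")  # hypothetical extract, as above

harass = (vdem[vdem["year"].isin([2012, 2023])]
          .assign(regime=lambda d: d["v2x_regime"].map(regime_labels))
          .groupby(["regime", "year"])["v2meharjrn_osp"].mean()
          .unstack("year"))

# Percent change in the average journalist-harassment score, 2012 to 2023, by regime type.
harass["pct_change"] = (harass[2023] - harass[2012]) / harass[2012] * 100
print(harass.round(2))
```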

It is important to note that each regime type has different starting points and vertical ranges in the figure above. As such, even with the 3.5% decline, liberal democracies still have the highest average scores in 2023 compared to the other three regime types. Similarly, electoral autocracies performed slightly better, on average, than closed autocracies. 

This analysis sheds important light on the downward trend over the last decade for press freedoms around the world. And while scores have fallen more in some regime types than others, no type has, on average, strengthened its scores since 2012.

The numeric declines above, which range between 0.04 and 0.19 units, are not massive, in part because the V-Dem measures sit on a 0-to-4 scale. Even these small decreases, however, correspond to meaningful percentage changes of approximately 2-11%, depending on regime type. The steady declines over the last decade are important for understanding the backsliding trajectory of press freedoms globally.

This research brief follows several other recent and ongoing projects CNTI is conducting. In January 2024, CNTI released a “fake news” report which found that proposed bills or existing laws on “fake news” and mis- and disinformation from 2020 to 2023 created opportunities for governments to curtail press freedoms. An issue primer on online abuse and harassment of journalists is also in development and will be released soon. For more information, please visit innovating.news.

Note: The Center for News, Technology & Innovation (CNTI) has an aggregated data resource of metrics from sources including V-Dem, Reporters Without Borders, the Committee to Protect Journalists, the World Bank and the World Justice Project, among several others. Combining the metrics from these resources helps to understand global patterns. Given that these data only show the most recent year, the findings above are supplemented with V-Dem’s time series data.

The code for this project may be found on CNTI’s GitHub here.

The Center for News, Technology & Innovation (CNTI), an independent global policy research center, seeks to encourage independent, sustainable media, maintain an open internet and foster informed public policy conversations.

The Center for News, Technology & Innovation is a project of the Foundation for Technology, News & Public Affairs.


Watermarks are Just One of Many Tools Needed for Effective Use of AI in News
https://innovating.news/article/watermarks-are-just-one-of-many-tools-needed-for-effective-use-of-ai-in-news/
April 9, 2024
A Global Cross-Industry Group of Experts Discuss Challenges and Opportunities in an AI-Incorporated News Landscape

“The literature to date suggests that watermarks and disclaimers … won’t be a silver bullet.” But they could be helpful — alongside experimentation, model transparency, collaboration and a thoughtful consideration of standards — in differentiating between harmful and helpful uses of artificial intelligence (AI).

Indeed, journalism today — the production and dissemination of fact-based news to inform the public — takes all of us: journalists to do the critical reporting, technology to enable distribution, access and information gathering, research to evaluate impact, policy to support and protect all of the above, and, importantly, the public’s involvement and interest.

Participants

  • Charlie Beckett, LSE
  • Anna Bulakh, Respeecher
  • Paul Cheung, Fmr. Center for Public Integrity
  • Gina Chua, Semafor
  • Ethan Chumley, Microsoft
  • Elik Eizenberg, Scroll
  • Deb Ensor, Internews
  • Maggie Farley, ICFJ
  • Craig Forman, NextNews Ventures
  • Richard Gingras, Google
  • Jeff Jarvis, CUNY
  • Tanit Koch, The New European
  • Amy Kovac-Ashley, Tiny News Collective
  • Marc Lavallee, Knight Foundation
  • Celeste LeCompte, Fmr. Chicago Public Media
  • Erin Logan, Fmr. LA Times
  • The Hon. Jerry McNerney, Pillsbury Winthrop Shaw Pittman LLP, Fmr. Congressman (CA)
  • Tove Mylläri, Yle (Finnish Broadcasting Company)
  • Matt Perault, UNC Center on Technology Policy
  • Adam Clayton Powell III, USC Election Cybersecurity Initiative
  • Courtney Radsch, Center for Journalism and Liberty
  • Aimee Rinehart, The Associated Press
  • Felix Simon, Oxford Internet Institute
  • Steve Waldman, Rebuild Local News (moderator)
  • Lynn Walsh, Trusting News

For more details, see the Appendix.

So concluded a day-long discussion among leaders from around the world in journalism, technology, policy and research. It was the second convening in a series hosted by the Center for News, Technology & Innovation (CNTI) on enabling the benefits — while guarding against the harms — of AI in journalism. 

Co-sponsored by and held at the USC Annenberg School for Communication and Journalism’s Washington, D.C., offices, the Feb. 15 event brought together technologists from Google, Microsoft and Scroll; journalists from the Associated Press and Semafor; academics from USC, LSE and CUNY; former members of government; researchers from UNC, Yle (the Finnish Broadcasting Company) and Oxford; and civil society experts and philanthropists from a range of organizations. (See sidebar for the full list of participants.)

Under the theme of how to apply verification, authentication and transparency to an AI-incorporated news environment, participants addressed four main questions: What information does the public need to assess the role of AI in news and information? What do journalists need from technology systems and AI models? How should technology systems enable these principles, and how should government policies protect them?

The session, held under a modified Chatham House Rule, continued the tone and style that began with CNTI’s inaugural convening in October 2023 (“Defining AI in News”). CNTI uses research as the foundation for collaborative, solutions-oriented conversations among thoughtful leaders who don’t agree on all the elements, but who all care about finding strategies to safeguard an independent news media and access to fact-based news.

Throughout the convening, participants often prefaced their remarks by describing their own optimism or pessimism about the present and future role of AI in journalism. The event’s moderator, Steve Waldman, discussed this tension in his introduction: “To me the answer is we have to be both absolutely enthusiastic about embracing the many positive aspects of this [technology] and absolutely vigilant about potential risks. Both are really important.” As the report explains, there are no easy answers but there are avenues to consider as AI technology advances at a blistering pace.

  1. There is No Silver Bullet for Addressing Harms of AI in News While Still Enabling its Benefits
  2. We Need to Experiment with a Number of Possible Tools
  3. The Role for Policy: Consider Industry Standards as a Start, Rethink Regulatory Structures, Lead by Example
  4. Successful Standards, Uses & Guardrails Require Technology Companies’ Active Participation
  5. Research, Research, Research
  6. Best Steps for Newsrooms: Innovate with a Degree of Caution

This was the second in a series of CNTI convenings on enabling the benefits while managing the harms of AI in Journalism. Stay tuned for details about CNTI’s third AI convening, to be held outside the U.S.

Artificial intelligence will impact seemingly every industry, with the news media being no exception. This can be particularly challenging in the news and information space as the use of AI by various actors can sometimes work against informing the public and sometimes work towards it. In either case, new technological developments have added new challenges for separating the real from the fake. And while the media, technology companies and others are making attempts at identifying AI, the group was in agreement that no single solution will be completely effective. In fact, some research finds that current tools, such as labels of AI use, may actually do more harm than good.

So what is being done now to verify or authenticate AI-generated content? To date, the tool that has been furthest developed and has also received the most attention is “watermarking” — a technique where markers get embedded into a piece of content (e.g., image, audio, video, text, etc.) when it is first created. Proponents say watermarks help journalists and the public identify ill-intended changes to content they find online.

Several online platforms have begun implementing their own software and/or rules for asserting the authenticity of content, many of which comply with the Coalition for Content Provenance and Authenticity (C2PA) standard. OpenAI released an update that adds watermarks to images created with its DALL-E 3 and ChatGPT software. Google’s SynthID allows users to embed digital watermarks in content that are then detectable by the software. Other companies like Meta have focused on provenance, with policies that require disclosing content as being generated with AI. Across the technology industry, companies are taking note of provenance and implementing tools to assist users.

While developing these kinds of tools clearly has benefits, the group identified several important considerations and reasons the tools alone should not be seen as a solution in and of themselves:

  • Preliminary research finds that labels related to AI in journalism can have an adverse effect on the public. Research examining the impact of content labeling raises several caution flags. First, overly broad labels about false and manipulated information can lead users to discount accurate information, suggesting labels need to be comprehensive and explicit about which content is false. Similarly, additional research on tagging content as AI-generated or enhanced finds, “… on average that audiences perceive news labeled as AI-generated as less trustworthy, not more, even when articles themselves are not evaluated as any less accurate or unfair” and that the effects are more common among people with higher pre-existing levels of trust.
  • These types of content labels can also lead to an “implied truth effect” in which false information that is not tagged as such may be interpreted as authentic. Similar findings exist when studying provenance. On the more hopeful side, some findings suggest that the public can appreciate provenance details as long as they are comprehensive — poorly-detailed provenance can lead to users discounting authentic information. To that end, researchers are exploring what specific labels to use in a given context (e.g., AI generated, manipulated, digitally altered, computer generated, etc.) and how users interpret these terms.
  • Current labeling techniques don’t differentiate between uses of AI that help inform rather than disinform. Indeed, some content alterations are done to help inform, which current methods of labeling don’t address. One participant shared an  example from Zimbabwe where a newsroom used chatbots to offer information in many more local dialects than Western-trained models could provide. There are also several non-publicly facing innovative AI uses such as fact-checking radio broadcasts or combing through and synthesizing news archives (which offer particular value for local news organizations). The current research suggests that a simple label acknowledging the use of AI risks automatic public rejection of what would otherwise add value to the news product. 
  • We need to clarify the intended audience for each label. There is a lot of conversation about the benefit of provenance for the public but, as the group discussed, it may be more crucial for journalists and content creators. Marc Lavallee noted, there is “limited value in any kind of direct consumer-facing watermark or signal” and Richard Gingras said, “the value of provenance is probably more for the world of journalism than it is for the world of consumers.” Consider the example of information cataloged for original artwork. Celeste LeCompte noted, “most of that information is not generally revealed to the public but is rather something that is part of an institutional framework.”
  • We need to better understand and articulate the various elements of identification: provenance, watermarking, fingerprinting and detection. In this convening, four similar but distinct forms were discussed. First is provenance, which refers to a “manifest or audit trail of information” to ensure content is “always attributable.” Provenance information is imperfect, however, because it can be input incorrectly or changed by malicious actors. Second is watermarking: identifiers embedded directly into the content itself, which are harder to remove and considered more robust than provenance. A third technique, fingerprinting, can serve as a lookup tool, like reverse image searches (a minimal sketch of this pattern follows this list). Finally, there are what are termed detection methods, which use AI models to detect other AI models; these remain challenging to develop, are the least robust and, as one participant asserted, need more research.
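To make the fingerprinting pattern concrete, here is a minimal, illustrative sketch rather than any participant’s or vendor’s actual system: content is hashed into a registry at publication time, and later copies can be looked up against it. Production tools such as PhotoDNA or reverse image search rely on perceptual hashes that survive re-encoding and cropping; the exact-match hashing below demonstrates only the register-and-look-up flow.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a stable identifier for a piece of content (exact-match only)."""
    return hashlib.sha256(content).hexdigest()

# A publisher registers fingerprints of its published assets (an in-memory stand-in
# for what would be a shared database or industry registry).
registry: dict[str, str] = {}

def register(content: bytes, source: str) -> None:
    registry[fingerprint(content)] = source

def look_up(content: bytes) -> str | None:
    """Return the registered source for this content, if any."""
    return registry.get(fingerprint(content))

original = b"raw bytes of a published photo"
register(original, "Example News photo desk, 2024-02-15")

print(look_up(original))                       # -> "Example News photo desk, 2024-02-15"
print(look_up(b"re-encoded or cropped copy"))  # -> None (exact hashing breaks on any change)
```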

Again, the conclusion is not that watermarks are bad or of no use, but that they need to be fully thought through within the vast array of AI uses and considered as just one tool among many that are needed — which leads to the next takeaway.

As we further develop the efficacy of tools like watermarking, participants encouraged further experimentation with additional ways to help identify and explain various uses of AI content. As one participant remarked, “Doing something is better than doing nothing while waiting for the perfect solution.”

A few ideas shared by participants:

  • SSL for AI: SSL, or Secure Sockets Layer, was designed as an encryption tool for e-commerce but is now used by virtually every website as a privacy and authentication tool. As one participant stated, there’s no “values determination,” just a determination of the content’s origin. Thus, there are no false positives or false negatives. Could publishers and technologists collaborate on something similar here? (A minimal content-signing sketch in this spirit follows this list.)
  • Accessible Incentive Structures to Adopt Standards: Another idea was to use incentive structures for journalists and other responsible content creators to adopt certain standards and labeling techniques, which could eventually become commonly understood. Search Engine Optimization (SEO), the process by which websites strive to rank higher in Google and other search engines, was offered as an example. While not a perfect analogy (with questions about gaming algorithms and the value of information that is not public facing), SEO, as originally designed, did offer a strong incentive and was fairly easy to adopt. How might something like that work for identifying AI content? And how could we measure its effectiveness? Getting the incentive structure right so that fact-based content gets promoted while “… inauthentic material or material of unknown provenance is lessened is really the place to focus,” suggested Lavallee. “If we get to a point where basically only bad actors are the ones not willing to use a system like this, I think that’s the threshold that we need to get to in order for it to be effective.”
  • Training the public: Continued attention on AI literacy and education is important. As the research (cited above) shows, most of the public seems to distrust any use of AI in news content. A more nuanced understanding is important to allow journalists to use AI in ways that help serve the public. One participant shared information about how media and technology education are included in Finland’s national education plans and how news organizations there have also developed training materials for the general public. Information about how AI is used in news needs to be understandable for a non-technical audience but also allow people who would like further details to have access to that information. As such, per Tove Mylläri, “We believe that educating people to be aware and letting them decide themselves also builds trust.” Understanding what types of training materials are most effective is crucial for increasing knowledge about how AI is used in news. 
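As a purely illustrative take on the “SSL for AI” idea above, the sketch below signs a piece of content with a publisher’s private key so that anyone holding the corresponding public key can verify its origin, with no judgment about the content’s accuracy. It uses the widely available cryptography package; the key pair and content shown are hypothetical.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The publisher generates a key pair once; the public key is shared openly
# (in practice it would be bound to the publisher through a certificate).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

article = b"Body of a news article, or an AI-assisted transcript, as published."

# Sign at publication time. This asserts origin and integrity only, not accuracy.
signature = private_key.sign(article)

# Anyone can later check that the bytes are unchanged and attributable to this key.
try:
    public_key.verify(signature, article)
    print("Signature valid: content is unmodified and attributable to the key holder.")
except InvalidSignature:
    print("Signature invalid: content was altered or did not come from this publisher.")
```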

One important element for any of these tools, especially (but not limited to) those that are public facing, is communication. The publishers and others who implement these tools need to explain to their audiences and the general public what these tools do. It will take time for them to become recognized, commonly understood and utilized. The “nutrition labels” parallel provides a sense of that timeline. Nutrition labels, noted Anna Bulakh, are now generally understood and serve a valuable purpose, but that took experimentation about what kinds of information the public wanted and it took time for shoppers to become accustomed to them. In fact, it remains a work in progress and also carries with it the important consideration of who decides what goes into the label. News consumers are not yet used to “nutrition labels” for content. “Provenance is providing you with a nutritional label of content [so you can] be aware that it is AI manipulated or not AI manipulated,” Bulakh added. Provenance should be understood to mean information about the source of the material (i.e., the publisher, author, and/or editor) as well as what tools or mechanisms might be relevant to assessing the trustworthiness of the content.

Government policy and regulation can be critical in promoting and safeguarding the public good. They can also have lasting impact and as such must be thoroughly and carefully approached. Any drafted policy should consider its potential impacts both now and in the years to come. To facilitate this, the discussants laid out several insights:

  • Established standards can help build effective policy. One part of the conversation focused on creating industry standards that, at least in some areas, can be used to inform effective legislation and regulation (recognizing these two forms of policy have different definitions which can also vary by country). Standards allow for experimentation and adjustment over time, could be developed for different parts of the system and, as past examples have shown, can then develop into effective policy. Consider, for example, the U.S. Department of Agriculture (USDA)’s development of organic food standards in the 1990s. The 1990 Organic Foods Production Act created the National Organic Program, which was tasked with developing standards for organic food production and handling regulations. The USDA defines organic as a “labeling term that indicates that the food or other agricultural product has been produced through approved methods. The organic standards describe the specific requirements that must be verified by a USDA-accredited certifying agent before products can be labeled USDA organic.” This standard, remarked one participant, took time to develop but eventually led some consumers to seek out this type of product, especially because they believe it follows specific criteria. Media content could follow a similar process. Once trusted standards are implemented, the public may seek content that conforms to those protocols, which would likely incentivize many industry actors to also use the standards.
  • In supporting this approach, former U.S. Representative & Chair of the Congressional Artificial Intelligence Caucus, Jerry McNerney, suggested that we fund AI standards agencies and, once created, enforce those standards through law, adding that it is important to have “the involvement of a wide spectrum of stakeholders,” particularly given the unique ways journalists are incorporating AI into their work.
  • It’s time to rethink what structures of “regulation” should look like today. Several participants agreed we need to rethink what the structure of regulation looks like in an AI-incorporated environment, where developments occur rapidly and the technology is complex. One participant rhetorically asked, “do you really understand what you’re trying to accomplish?” That doesn’t mean we should walk away from complicated issues, but we should think fully about the most effective approaches.
  • Ethan Chumley offered that there “are close parallels and analogies” to cybersecurity perspectives “if we start to view the challenges of media authenticity and trust in images as a security problem.” Regulation will likely require more updates (1) as technologies evolve and (2) as standards are revised.
  • This all takes time! Developing broadly adopted standards that can then develop into policy is time-consuming. Bulakh suggested that “every new standard would take around 10 years to be accessible” to tool creators, distribution platforms, content creators and consumers. She pointed to the Coalition for Content Provenance and Authenticity (C2PA), one of the leading global standards, entering year 5, even though it has not been fully adopted. Another example is HTML standards for websites, which similarly took many years to develop and are still evolving. We have past examples to use as models for the timelines that need to be built in — let’s use them!
  • This is not to say that a standards-first approach is right for all areas. There may well be some aspects of managing use of AI that call for some government oversight more immediately —  though we still need to be sure the regulatory structures are effective and that the policies are developed in a way to serve the public long-term.
  • One immediately available step is to lead by example. Many nods of agreement occurred when one participant pointed out that if governments want others to adopt and utilize various standards, rules or policies, they need to do so themselves. Governments can, in the immediate term, “start adopting these standards that are already out there in their own datasets,” said Elik Eizenberg, which would likely create momentum for the private market to follow. One of the most powerful tools governments have, Eizenberg added, is to “lead by example.” In the discussion that followed, another participant added that, conversely, many current policies relating to misinformation have built-in exceptions for politicians. 

“Journalists [and] media houses cannot cope with these issues alone without technology companies,” remarked one participant to the nodding approval of others. Another person added, “Open standards are developed by technologists. It’s [technologists’] job … to come together to provide access to those tools … and make them more accessible.” Media houses can then communicate and “change consumer behavior.” 

Tanit Koch spoke of the importance of technology companies in helping guard against those with bad intentions: “We need tech companies and not only because of the scale and the speed that disinformation can happen and is happening on their platforms, but simply because they have the money and expertise to match the expertise of the bad actors. We definitely and desperately need more involvement by those who feel a sense of responsibility to create open source tools to help the media industry detect what we cannot detect on our own and all of this full well knowing that the dark side may always try to be ahead of us.”

A separate point was raised about whether there was a role for technology companies, and policies guiding them, in helping users better understand certain risks or benefits associated with the use of certain technologies. “When I take a pill, there’s a warning label,” offered Paul Cheung, but these risks are not as clearly defined in the online space. Thus, consumers “have no information to assess whether this is a risk they’re willing to take.” People may use a certain technology tool or piece of software “because it’s free and fun” without knowing the risk level. And those risk levels likely vary, suggesting a model similar to the U.S. Food and Drug Administration (FDA) or Federal Aviation Administration (FAA) may be a better route than a “one-size-fits-all” approach. 

Understanding these risks relates to the discussion on building trust that occurred during CNTI’s first AI convening. As reported following that gathering:

Better understanding of [AI language] can also help build trust, which several participants named as critical for positive outcomes from AI use and policy development. One participant asked, “How do we generate trust around something that is complicated, complex, new and continuously changing?” Another added, “Trust is still really important in how we integrate novel technologies and develop them and think two steps ahead.” And when it comes to putting that in writing: “We need to think about what’s the framework to apportion responsibility and what responsibility lies at each level … so that you get the trust all the way up and down, because ultimately newsrooms want to be able to trust the technology they use and the end user wants to be able to trust the output or product from the newsroom.”

To date, research about labeling AI content (and how users engage with labels) is limited. We need much more data to gain a fuller understanding of the best strategies forward, as well as which strategies are likely to fall short, backfire or possibly work in some areas but not in others. To develop a deeper understanding of what, why and how certain policy and technology approaches work better than others, we need to conduct more studies, replicate findings and build theories. Approaches must also reflect geographic diversity by including researchers representing a range of local contexts and communities and by examining how people in diverse, global contexts are similar and/or unique in their interactions with AI-related news content. For example, U.S./European research that focuses solely on strategies to address internet disinformation would not serve well those places where radio is still the largest news medium. We need to provide the resources and support for this work — starting now.

Several researchers in the room noted that the existing literature does not yet fully grasp how users interact with provenance information or how the tools being developed will influence user behavior. 

Felix Simon shared recent preliminary research he’d conducted in collaboration with Benjamin Toff that featured a striking finding: “For our sample of U.S. users, we found, on average, that audiences perceived news labeled as AI-generated as less trustworthy, not more, even when articles themselves were not evaluated as any less accurate or unfair.” Yet, when the authors provided users with the sources used to generate the article, these negative effects diminished. Further research is warranted to better understand how the public interprets labels on AI-generated articles and how to best present this information.

When it comes to watermarks in particular, “we have some directional indications about efficacy but we do have extremely limited data in this area,” offered Matt Perault. Based on the limited data and research, he presented four key research questions that need to be addressed:

  1. Will disclaimers and/or watermarks be implemented correctly?
  2. Would users actually observe them?
  3. Would disclaimers and/or watermarks have a persuasive effect?
  4. What effects would watermark and/or disclosure requirements have on competitiveness and innovation?

Answers to these questions will assist in the development of evidence-based policy.

Technology has revolutionized the media many times over (e.g., the printing press, radio, television, the internet, social media, streaming, etc.) with AI being the latest example of an innovation that will change how reporters gather and share information and how consumers take it in.

AI Innovation Sidebar

Participants noted a number of innovations publishers, editors and reporters can explore to better incorporate AI into their work, with the assistance of technologists. Some of these ideas are already in the works, while others need to be developed, including, as one participant noted, broadening beyond a singular focus on large language models (LLMs) or secondary uses of LLMs. LLMs are AI models trained on massive amounts of text.

  • Local LLMs: Because AI technology is Western-centric, journalists in countries that do not use the world’s most popular languages are left behind. There needs to be a concerted effort to develop local language versions of large language models (LLMs), another example where industries will need to partner in order to achieve results. One participant offered an example in Zimbabwe where chatbots were trained to better interpret the various local dialects in the country. These types of localized innovations are also receiving support from users as they feel better represented. In addition to the technology companies that would build such LLMs, journalists and others have a major role to play: We need “people to create the information [through] front-line reporting and analysis that these models then ingest and generate new material,” remarked Maggie Farley. This type of front-line reporting also requires “talented people,” pointed out Erin Logan, who have secure employment with liveable wages and benefits which requires sustainable business models for news organizations. 
  • Shared LLMs: Aimee Rinehart shared that, as part of her Tow-Knight Fellowship on AI Studies, her capstone project is a blueprint of an LLM specifically for news organizations that would “add transparency because, as journalists, we all like to know, ‘who’s your source?’” In this case, the source isn’t the person who provided the news tip, but rather the system that supports the production of the news item. Such an LLM could do far more for journalism than simply outsource writing. A journalism-focused LLM, Rinehart added, “could resurrect the archive [and] provide a licensing opportunity for newsrooms.” These opportunities are likely to be especially relevant for local news organizations, which are struggling to remain economically viable.
  • RAGs: While LLMs are built on massive data sources, journalists often need a narrower scope for their work. That is where Retrieval-Augmented Generation (RAG) comes in. RAGs are used to limit the scope of an LLM query, such as to particular data sets, to more efficiently and accurately pull results from the billions of data points that form the input. One participant, Gina Chua, said RAGs can be used to read documents, classify data, or even turn journalism into something more conversational (and therefore more accessible). Such AI tools can be applied at scale to rebuild local newsrooms and have the potential to “improve journalism products, which [can] then improve our engagement with communities,” Chua remarked. (A minimal retrieval sketch follows this list.)
  • Pinpoint: Another participant called on technology companies to develop tools to help journalists process massive amounts of raw data that can inform their reporting. One example already available is Google’s Pinpoint, a collaborative tool that allows reporters to upload and analyze up to 200,000 files — not just documents and emails but also images, audio, and scans of hand-written material.
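To illustrate the retrieval step that makes RAG attractive for newsrooms (see the RAGs item above), here is a minimal sketch rather than any participant’s implementation: a query is ranked against a small, invented archive with TF-IDF similarity, and the top passages are packed into a prompt that would then be sent to whatever LLM the newsroom uses.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A tiny stand-in for a newsroom archive; real text would come from a CMS or archive dump.
archive = [
    "City council approves 2024 budget with new funding for road repairs.",
    "Local hospital expands emergency services after record winter demand.",
    "School board debates start times; parents split on earlier mornings.",
]

query = "What did the council decide about the budget?"

# Retrieval: rank archive items by TF-IDF cosine similarity to the query.
vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(archive)
scores = cosine_similarity(vectorizer.transform([query]), doc_matrix).ravel()
top_k = scores.argsort()[::-1][:2]

# Augmentation: retrieved passages become the model's working context, which also
# lets the newsroom show readers "who the source is."
context = "\n".join(archive[i] for i in top_k)
prompt = (
    "Answer using only the archive passages below, and cite them.\n\n"
    f"Archive:\n{context}\n\nQuestion: {query}"
)
print(prompt)  # this prompt would be passed to the newsroom's chosen LLM
```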

Panelists made a number of workplace recommendations that would help journalists incorporate AI into their reporting and build a better relationship with their audiences.

  • Newsrooms need to embrace AI technology, but do it cautiously. They must be willing to experiment with new tools to see what works for them and what helps audiences better understand the news. Offered Charlie Beckett, “The crisis in journalism at the moment is about connecting the ton of good quality stuff out there to the right people in the optimal way that doesn’t make them avoid the news.” It was further noted that quality reporting requires diligent research and thorough questions and that a similar approach should be applied to the exploration and adoption of new technology. Participants discussed a number of existing and developing technologies that could spur newsrooms to adopt the use of AI to support their work (see sidebar).  
  • Conversely, concerns remain about how AI models use journalists’ works. Courtney Radsch asserted, “As we see more generative AI content online, less human-created information, the value of journalism, I think, goes up. So we should be thinking about a model that can allow us to have some say over the system.” The complex pros and cons of methods such as licensing or copyright on the nature and effectiveness of the knowledge ecosystem, including that they can hinder broad distribution and reward quantity irrespective of quality, were mentioned but were covered more in depth at CNTI’s first AI Convening, as written about in the summary report.
  • Journalists need to apply layers of transparency in their work, just as they expect from the people and organizations they cover. Innovation and transparency are critical to serving local communities and fortifying the journalism industry for the digital age. But remember that “transparency has multiple meanings, each of which must be addressed.” First, journalists must offer the same level of transparency that they demand of others. At a non-technical level, that means explaining why a particular story is being told at that time. But it also means clearly explaining how AI is used to report, create and produce content including attributions and points of reference in the output of AI-driven answer engines (e.g. ChatGPT, etc.). Research has shown that some responses by AI-driven answer engines can be biased by the articulation of the source material used to feed them. News consumers would benefit from understanding related references and attributions which can help audiences rebuild trust in the media.
  • Recognize that there is more to AI than the content being produced. Jeff Jarvis shared his thoughts on the ABCs of disinformation, which a 2019 report by Camille François of the Berkman Klein Center for Internet & Society at Harvard University defined as manipulative actors, deceptive behavior and harmful content. Jarvis summarized, “We have to shift to the ABC framework — that is to say actors, behavior and content,” rather than a singular focus on content. While journalists may be able to identify those seeking to sow harm through diligent research and reporting, it is difficult to persuade news consumers that such people act with malicious intent.
  • Another concept offered was that of “fama,” a Latin word dating back to Europe before the printing press that brings together rumor and fame to form the concept of believing something to be true because it was said by someone the listener trusts. Its modern equivalent is believing in news — or conspiracy theories — because they are uttered by a trusted voice. As noted by Deb Ensor, “We don’t often think about our audiences in terms of their behaviors and how they share or trust or value or engage with their information suppliers.” Those with ill intentions then appeal to this behavior with deceptive tools, such as bots and troll farms, to spread disinformation. With the advent of AI, these actors have even more tools at their disposal, and journalists need partners in technology and research to keep up.

This is the second in a series of convenings CNTI will host on enabling the benefits of AI while also guarding against the harms. It is just one of many initiatives within the larger work CNTI, an independent global policy research center, does to encourage independent, sustainable media, maintain an open internet and foster informed public policy conversations. Please visit our website: www.innovating.news to see more of CNTI’s work and sign up to receive updates, and, as always, please contact us with questions and ideas.

Finally, please see the Appendix below for numerous valuable resources shared by participants of this event as well as other acknowledgements.

  • Charlie Beckett – Professor/Founding Director, Polis, LSE (CNTI Advisory Committee)
  • Anna Bulakh – Head of Ethics & Partnerships, Respeecher (CNTI Advisory Committee)
  • Paul Cheung – CEO, Center for Public Integrity (Fmr., now Sr. Advisor, Hacks/Hackers)
  • Gina Chua – Executive Editor, Semafor
  • Ethan Chumley – Senior Cybersecurity Strategist, Microsoft
  • Elik Eizenberg – Co-Founder, Scroll
  • Deb Ensor – Senior VP of Technical Leadership, Internews
  • Maggie Farley – Senior Director of Innovation and Knight Fellowships, ICFJ
  • Craig Forman – Managing General Partner, NextNews Ventures (CNTI Executive Chair)
  • Richard Gingras – Global VP of News, Google (CNTI Board)
  • Jeff Jarvis – Director of the Tow-Knight Center for Entrepreneurial Journalism & The Leonard Tow Professor of Journalism Innovation, CUNY (CNTI Advisory Committee)
  • Tanit Koch – Journalist/Co-Owner, The New European (CNTI Advisory Committee)
  • Amy Kovac-Ashley – Executive Director, Tiny News Collective (CNTI Advisory Committee)
  • Marc Lavallee – Director of Technology Product and Strategy/Journalism, Knight Foundation
  • Celeste LeCompte – Fmr. Chief Audience Officer, Chicago Public Media
  • Erin Logan – Fmr. Reporter, LA Times
  • The Hon. Jerry McNerney – Senior Policy Advisor, Pillsbury Winthrop Shaw Pittman LLP, Fmr. Congressman (CA)
  • Tove Mylläri – AI Innovation Lead, Yle (The Finnish Broadcasting Company)
  • Matt Perault – Director, UNC Center on Technology Policy
  • Adam Clayton Powell III – Executive Director, USC Election Cybersecurity Initiative (CNTI Advisory Committee)
  • Courtney Radsch – Director, Center for Journalism and Liberty at the Open Markets Institute
  • Aimee Rinehart – Local News & AI Program Manager, The Associated Press
  • Felix Simon – Researcher, Oxford Internet Institute (CNTI Advisory Committee)
  • Steve Waldman – President, Rebuild Local News (CNTI Advisory Committee)
  • Lynn Walsh – Assistant Director, Trusting News

CNTI’s cross-industry convenings espouse evidence-based, thoughtful and challenging conversations about the issue at hand, with the goal of building trust and ongoing relationships along with some agreed-upon approaches to policy. To that end, this convening adhered to a slightly amended Chatham House Rule:

  1. Individuals are invited as leading thinkers from important parts of our digital news environment and as critical voices to finding feasible solutions. For the purposes of transparency, CNTI publicly lists all attendees and affiliations present. Any reporting on the event, including CNTI’s reports summarizing key takeaways and next steps, can share information (including unattributed quotes) but cannot explicitly or implicitly identify who said what without prior approval from the individual.
  2. CNTI does request the use of photo and video at convenings. Videography is intended to help with the summary report. Any public use of video clips with dialogue by CNTI or its co-hosts requires the explicit, advance consent of the subject.
  3. To maintain focus on the discussion at hand, we ask that there be no external posting during the event itself.

To prepare, we asked that participants review CNTI’s Issue Primers on AI in Journalism, Algorithmic Transparency and Journalistic Relevance, as well as the report from CNTI’s first convening event.

Participants at our convening event shared a number of helpful resources. Many of these resources are aimed at assisting local newsrooms. We present them in alphabetical order by organization/sponsor below. 

Several news organizations were mentioned for their use of AI in content creation. One that received recognition was the Baltimore Times for its efforts to better connect with its audience through the use of AI.

Participant Aimee Rinehart shared a blueprint for her CUNY AI Innovation project. This project aims to create a journalism-specific LLM AI model for journalists and newsrooms to use.

A novel radio fact-checking algorithm in Africa, Dubawa Audio Platform, was discussed to show how countering mis- and disinformation can be done in non-Internet-based contexts. The platform was initiated by a Friend of CNTI, the Centre for Journalism Innovation and Development (CJID). The Dubawa project received support from a Google News Initiative grant.  

Information was shared about the Finnish government and academic community’s campaign on AI literacy, Elements of AI. This project aims to raise awareness about the opportunities and risks of AI among people who are strangers to computer science, so they can decide for themselves what’s beneficial and where they want their government to invest. Free educational material also exists for children. Curious readers may also learn about related research here.

The topic of transparency was discussed and a 2022 report by Courtney Radsch for the Global Internet Forum to Counter Terrorism (GIFCT) provides an important overview of transparency across various industries.

A number of participants shared information about technological tools:

  • Google’s Pinpoint project helps journalists and researchers explore and analyze large collections of documents. Users can search through hundreds of thousands of documents, images, emails, hand-written notes and audio files for specific words or phrases, locations, organizations and/or names.
  • Google’s SynthID is a tool to embed digital watermarks in content to assist users with knowing the authenticity and origin of content. 
  • Microsoft’s PhotoDNA creates a unique identifier for photographs using its system. This tool is used by organizations around the world — it has also assisted in the detection, disruption and reporting of child exploitation images. 

Information was shared about Schibsted, a Norwegian media and brand network, organizing the development of a Norwegian large language model (LLM). It will serve as a local alternative to other general LLMs. 

The Tow Center for Digital Journalism at Columbia University released a recent report by Felix Simon titled “Artificial Intelligence in the News: How AI Retools, Rationalizes, and Reshapes Journalism and the Public Arena.” 

A participant shared an innovative use of AI in Zimbabwe in which the AI model has been trained using local dialects. The chatbot is more representative of the users in that region when compared to other general AI language models. 

We appreciate all of our participants for sharing these resources with CNTI.

The Center for News, Technology & Innovation (CNTI), an independent global policy research center, seeks to encourage independent, sustainable media, maintain an open internet and foster informed public policy conversations. CNTI’s cross-industry convenings espouse evidence-based, thoughtful but challenging conversations about the issue at hand, with an eye toward feasible steps forward.

The Center for News, Technology & Innovation is a project of the Foundation for Technology, News & Public Affairs.

CNTI sincerely thanks the participants of this convening for their time and insights, and we are grateful to the University of Southern California’s Annenberg Capital Campus, the co-sponsor and host of this AI convening. Special thanks to Adam Clayton Powell III and Judy Kang for their support, and to Steve Waldman for moderating such a productive discussion.

CNTI is generously supported by Craig Newmark Philanthropies, the John D. and Catherine T. MacArthur Foundation, the John S. and James L. Knight Foundation, the Lenfest Institute for Journalism and Google.

Most “Fake News” Legislation Risks Doing More Harm Than Good Amid a Record Number of Elections in 2024
https://innovating.news/article/most-fake-news-legislation-risks-doing-more-harm-than-good-amid-a-record-number-of-elections-in-2024/
January 18, 2024
In 2024, a surge of national elections spans 50 countries, including major players like the United States, India, and Mexico. Yet, amidst this democratic wave, there’s mounting concern over widespread disinformation, emphasizing the critical need for trustworthy, fact-based news.

As the world launches into 2024, we face a year with a record-breaking number of countries (50) holding national elections, including the United States, India and Mexico. With these elections come heightened concerns about the spread of disinformation and the challenge of providing voters with fact-based news.

Discussions about how to guard against disinformation and encourage the delivery of fact-based news are critical. In working toward the best actionable outcomes, these discussions need to consider both the potential and realized impact of recent legislative policies related to this topic. This study focuses specifically on policies laid out as guarding against “fake news.” 

Legislation targeting “fake news” — a contested term used to reference both news and news providers that governments (or others) reject as well as disinformation campaigns — has increased significantly over the last few years, particularly in the wake of COVID-19. This study finds that even when technically aimed at curbing disinformation, the majority of “fake news” laws, either passed or actively considered from 2020 to 2023, lessen the protection of an independent press and risk the public’s open access to a plurality of fact-based news.

Indeed, governments can — and have — used this type of legislation to label independent journalism as “fake news” or disinformation. According to the Committee to Protect Journalists, among the 363 reporters jailed around the world in 2022, 39 were imprisoned for “fake news” or disinformation policy violations. Even within well-intended legislative policies, like Germany’s laws which focus on platform moderation of “illegal content” related to hate speech and Holocaust denial, concerns can arise over potential government censorship.

There are several pieces of legislation, such as the United Kingdom’s Online Safety Act, that are important to consider in broader policy discussions about online safety and algorithmic regulation (and are discussed in other CNTI reports) but they go beyond the scope of this study. This analysis examines the language within 32 “fake news” policies proposed or enacted from 2020 to 2023 in 31 countries, 11 of which have elections scheduled in 2024.

Overall, the study reveals that the language in the 32 pieces of legislation does little to protect fact-based news and in many cases creates significant opportunity for government control of the press. The lack of safeguards in this legislation risks curbing press and journalistic freedoms heading into a major election year. Among the key findings:

  • “Fake” or “false” news is explicitly defined in less than a quarter (7/32) of this legislation. Omission of these definitions leaves them open to interpretation by whomever has oversight authority which, in these cases, is often the government itself. 
  • In what is very much a double-edged sword, two pieces of legislation examined here explicitly define journalism or what may be considered “real” news, one defines journalists and four define news organizations. While definitions can help protect press freedom, they can also be used as legal grounds to protect media that props up the government and ban media that does not — especially if the court’s application is also dictated by the government. 
  • 14 of the 32 policies clearly designate the government with the authority to decide what is or is not “fake news.” In some cases it is the central government itself and in others it is an entity within the government whose independence from the central government is often unclear. The remaining 18 policies provide either vague or no language about who has that control, ceding it to the government by default. Putting this power in the hands of the government — whether explicitly or by default — introduces greater risk of governmental press and message control.
  • Although press control issues are more prevalent in the countries with autocratic rather than democratic regimes, definitional issues and a lack of clarity are found in legislation from both regime types. Of the 31 countries studied, 19 are autocratic and 11 are democracies as identified by the research organization V-Dem.1
  • Criminal penalties for the publication of “fake news” vary dramatically, from fines to suspension of publications to imprisonment. Among the 27 policies with clearly noted penalties, three-quarters (20 policies) include imprisonment, ranging from less than one month in Lesotho to up to 20 years in Zimbabwe.

These findings warrant concern. Vague or missing definitions can create, intentionally or not, opportunities for governments to censor opposing voices and restrict press freedom and freedom of expression. Any legislation, even if developed under leadership that values an independent press, must account for the possibility of future regime change or shifts in legal interpretation.

Putting hard lines around false information is certainly not easy. The challenge is exacerbated when trying to discern intentional versus unintentional efforts to mislead. Legislation can be an important part of creating a digital news environment that safeguards both an independent press and the public’s access to fact-based news, but those aiming to develop policy should be aware of these challenges. CNTI offers five key questions for anyone seeking to construct policy that guards against false information while safeguarding an independent press and public access to a plurality of fact-based news. Detailed in the “Important Policy Considerations” section of this report, they include: 1) whether legislative policy or non-governmental methods are the best approach for the current situation; 2) whether specific independent oversight is laid out; 3) whether there are clear adjudication processes; 4) who the subject — or target — of the policy is; and 5) what future or global implications might emerge.

This report is one of many CNTI efforts to help address the challenges of today’s digital news environment in ways that safeguard an independent, competitive news media and the public’s access to a plurality of fact-based news. It is also a part of CNTI’s work in the specific area of defining journalism in our digital, global society. Any legislation related to journalism and news needs to thoroughly address the range of implications that can ensue.

A substantial portion of the legislation we examine was developed in response to the COVID-19 pandemic, often targeting the spread of false information about the pandemic as well as information that contradicts government and public health officials. About two-fifths of the policies examined here (13/32) focus specifically on false information about COVID-19 or statements disputing government and public health officials. For example:

  • Botswana’s Emergency Powers (COVID-19) Regulations and South Africa’s amendment to its Disaster Management Act use the exact same wording, penalizing “any statement, through any medium, including social media, with the intention to deceive any other person about COVID-19; COVID-19 infection status of any person; or any measure taken by the Government to address COVID-19.”
  • The Philippines’ Bayanihan To Heal As One Act penalizes “individuals or groups creating, perpetuating, or spreading false information regarding the COVID-19 crisis on social media and other platforms.”

While a narrow scope might be less susceptible to abuse, these policies nonetheless risk chilling effects on free expression or political criticism. Indeed, narrow “health” proscriptions have been used in Zimbabwe, for example, to persecute journalists questioning COVID-19 policies and exposing corruption in COVID-linked procurement practices. These topic-specific policies also suggest the potential for passing similar legislation in other topic areas deemed risky by the government.

Other recent legislation, meanwhile, includes much broader — and more easily exploited — phrases about information that criticizes or harms the country’s military or economy, or sows discord:

  • Greece’s legislation on false information includes “anyone who publicly or via the Internet disseminates or disseminates in any way false news that is capable of causing concern or fear among citizens or of shaking public confidence in the national economy, the country’s defense capability or public health.” 
  • Hungary’s legislation denotes anyone who “states or disseminates any untrue fact or any misrepresented true fact with regard to the public danger that is capable of causing disturbance or unrest in a larger group of persons at the site of public danger” is guilty of a crime.
  • Myanmar’s draft legislation defines the creation of misinformation and disinformation as “causing public panic, loss of trust, or social division on cyberspace.”

While many countries’ legislation is similar in how it defines the type of information that qualifies as false or malicious, the level of specificity varies dramatically across policies, with only occasional shared phrasing (see Table 1). A lack of clarity and use of vague language, when definitional language exists at all, carries risks for journalists and the public.

“Fake” or “false” news (sometimes referred to as disinformation, misinformation or other terminology in legislation) is explicitly defined in less than a quarter of this legislation (7/32). As is the case in disinformation studies more broadly, the question of intent often complicates definitions of “fake news,” with four of the seven seeking to separate accidental or unintentional from intentional spreading of false news (see Table 1).2 In some contexts, such as Nigeria, legislation explicitly states that it is illegal to produce “knowingly false” information, though how this definition is adjudicated is unclear.

A natural follow-up question to how false news is defined is if and how journalism — or what might be considered “real” news — is defined. Less than a quarter of these policies define any terms related to journalism. Four of the 32 policies explicitly define news organizations, while journalism (as in news content, not “fake news”) is defined in two of the 32 policies and journalists in only one.

Even when explicit definitions of journalism are present, they are often vague and thereby open to a wide array of interpretations. For example, in Togo’s 2020 Press and Communication Code, journalism is broadly defined as “original content” about current events of “general interest.” It is unclear in this legislation what qualifies as “original” or what counts as “general interest.”

References to or definitions of “journalism” or “journalists” within policy are just as — if not more — critical to fully consider than those of “fake news.” While they may be intended to protect independent, pluralistic journalism, these definitions can also be used to legitimize government control of the press.

Another element to consider is the authority responsible for, and the process of, determining what or who constitutes “fake news.” This authority is clearly designated in less than half of the legislation (14/32), often making legislation susceptible to abuses aimed at curbing press freedom (see Table 1).

Among the remaining 18 policies with unclear or no noted oversight authority, 11 of them include regulations for “fake news” or disinformation that are wrapped into broader legislation related to COVID-19. The entity most often noted as having any kind of authority within these broader COVID-19 policies is the country’s health minister, but it is unclear exactly what authority that entails when it comes to adjudicating “fake news” specifically.

Among the 14 policies with clearly noted oversight authorities, the arbiter of such decisions is often designated as the head or body of a government commission, ranging from ministers of communications, information or technology to electronic transactions control boards and communications authorities (see Table 1). Others, in practice, have unclear or limited independence from the state. For instance, Togo’s Haute Autorité de l’Audiovisuel et de la Communication (HAAC) has the authority to sanction media actors and grant press accreditation. While its structure is technically independent from the government, it is not particularly transparent and in the past has been found to censor news organizations at the state’s request.

The criminalization of “fake news” has a long history in many countries, often rooted in penal codes imposed under colonization. Similar types of penalties are embedded in many of these modern policies.

More than four in five of these policies (27/32) explicitly note criminal penalties for creating and publishing “fake news” or disinformation — most commonly fines and/or imprisonment, though some legislation includes the temporary suspension of news publications (e.g., Togo) or compulsory community service (e.g., Uzbekistan).

Among the 27 policies with clearly noted penalties, three-quarters (20 policies) include imprisonment, ranging from less than one month in Lesotho to up to 20 years in Zimbabwe.

Together, these policies reveal the serious consequences journalists face, especially when legislation does not include clear oversight authorities or processes and is then abused by governments. Further, the risk of imprisonment or heavy fines may have broader chilling effects, discouraging independent journalists whose work could be labeled “fake news.”

While news and news organizations are not necessarily a central consideration of “fake news” or disinformation policies, the ripple effects of these policies impact independent journalism and press freedom. Codified policies focused on news, disinformation and/or journalism — as well as any language used to define these and related terms — carry tremendous power, creating opportunities for governments or other powerful actors to intentionally or unintentionally threaten the independence, diversity and freedom of the press as well as broader elements of free expression.

As Table 1 shows:

  • A majority of the countries examined in this study (19/31) have some form of autocratic governance (versus 12 democracies).
  • Among the 30 countries in this study that also appear in the Reporters Without Borders Press Freedom Index, more than half (17/30) rank in the bottom half of the index. 

Together, these findings indicate high risks of undue political influence compounded by vague language and a lack of clear definitions and oversight processes. They also suggest that a majority of these policies may not have been intended to promote democratic values such as an independent press or free expression. Those aiming to construct policy that promotes an informed public and independent press should be aware of these conditions.

It is critical to take a careful and deliberate approach to codifying what “fake news” (and what real news) is and is not. This is not an easy task, as CNTI discusses in some of our issue primers, and it has become even more complicated in the digital era. This study reveals five important questions to consider in policy development in this area:

  • Is legislative policy the best approach to address disinformation, or are non-governmental methods better suited? The latter could include processes such as human rights or information risk assessments, content moderation or data sharing on digital platforms, news and media literacy initiatives or other related efforts. Each option, including legislative policy, has potential benefits and harms which are important to think through.
  • Is specific, independent oversight of these definitions included in the policy? This can be designated via self-regulatory bodies or through independent government agencies designed to protect against undue political influence.
  • Are there clear adjudication processes for these definitions? Can journalists, news entities or civil society formally challenge definitional decisions by oversight bodies and if so, how?
  • Who is the subject of the policy? Who would be liable? Solely individuals? Publishers? Platforms? Some combination based on the circumstances? While complex, it is important to fully consider who the law would implicate and why.
  • What potential future and global implications might emerge? These decisions have global consequences, as policies in one country inherently impact those in others. And, as new technologies for disinformation and digital manipulation emerge, new tactics for addressing them may be necessary. 

1. One country, Vanuatu, has no noted V-Dem regime type. In this case, we have supplemented this data with indicators from the U.S. Department of State, the United Nations and Freedom House showing that Vanuatu is a parliamentary democracy and conducts democratic elections.

2. One potential reason we may not see greater similarities in phrasing is that most legislation was translated from the native language to English.

This study included quantitative and qualitative analyses of 32 “fake news” legislative policies. Two content analysis coders compiled case data and coded for a range of variables including country, short and long titles of legislation, dates of legislation draft and latest update, legislation status, definitions of key terms (“news”/“journalism,” “fake news,” “journalists,” “news entities”/“publishers” and “platform”/“news intermediary”) and authorities responsible for overseeing each definition. Five test cases were coded by both coders simultaneously to assess intercoder reliability, yielding 99.3% agreement.
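To illustrate the arithmetic behind that reliability figure, the sketch below computes simple percent agreement between two coders. It is a minimal, hypothetical example (the variable names and sample codes are invented for illustration), not CNTI’s actual analysis code; chance-corrected statistics such as Krippendorff’s alpha are often reported alongside raw agreement in content analysis work.

```python
# Illustrative sketch: simple percent agreement between two content analysis coders.
# Assumes each coder's decisions are stored as parallel lists, one entry per
# case-variable pair. Not CNTI's actual analysis code.

def percent_agreement(coder_a, coder_b):
    """Return the share of coding decisions on which both coders agree."""
    if len(coder_a) != len(coder_b):
        raise ValueError("Coders must have the same number of decisions.")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical example: two coders' decisions on whether "fake news" is defined.
coder_a = ["defined", "not defined", "not defined", "defined", "not defined"]
coder_b = ["defined", "not defined", "not defined", "defined", "defined"]
print(f"Agreement: {percent_agreement(coder_a, coder_b):.1%}")  # Agreement: 80.0%
```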

The coders used a conservative approach to coding definitions. Some pieces of legislation hinted at a concept in an extraneous clause, but if the term was not explicitly defined, it was coded as “not defined.” Links to all legislation included in this research and the study’s codebook can be viewed here.

Cases representing “fake news” legislative activity from 2020 to 2023 were drawn from the Center for International Media Assistance (CIMA), LEXOTA and LupaMundi reports and databases. Additional information, including the actual drafts of legislation for analysis, was compiled from government websites and international news articles. The analysis was therefore limited to legislation with a full, publicly available draft or bill online; policies whose public drafts could not be found were coded as missing cases. Google Translate was used for the 13 policies that were not available in English to keep the source of translation consistent across all pieces of legislation. All translations were also checked against ChatGPT’s translation tool to ensure that the interpretations were reliable.

Year | Number of Bills Examined
2020 | 19
2021 | 7
2022 | 5
2023 | 1

Continent | Number of Bills Examined
Africa | 14
Asia & Middle East | 9
Australia & Oceania | 1
Europe | 6
North America | 1
South America | 1
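As a quick consistency check, each breakdown above sums to the 32 bills examined; a minimal sketch of that arithmetic, with the counts copied directly from the tables:

```python
# Quick arithmetic check: both breakdowns should sum to the 32 bills examined.
by_year = {2020: 19, 2021: 7, 2022: 5, 2023: 1}
by_continent = {
    "Africa": 14, "Asia & Middle East": 9, "Australia & Oceania": 1,
    "Europe": 6, "North America": 1, "South America": 1,
}
assert sum(by_year.values()) == 32
assert sum(by_continent.values()) == 32
print(sum(by_year.values()), sum(by_continent.values()))  # 32 32
```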

The Center for News, Technology & Innovation (CNTI), an independent global policy research center, seeks to encourage independent, sustainable media, maintain an open internet and foster informed public policy conversations. CNTI’s cross-industry convenings espouse evidence-based, thoughtful but challenging conversations about the issue at hand, with an eye toward feasible steps forward.

The Center for News, Technology & Innovation is a project of the Foundation for Technology, News & Public Affairs.

The post Most “Fake News” Legislation Risks Doing More Harm Than Good Amid a Record Number of Elections in 2024 appeared first on Center for News, Technology & Innovation.

Defining AI in News https://innovating.news/article/defining-ai-in-news/ Wed, 08 Nov 2023 15:00:00 +0000 https://innovating.news/?post_type=article&p=4130 Leaders in Tech, Media, Research and Policy Lay Groundwork for Global Solutions


The Center for News, Technology & Innovation (CNTI) hosted its inaugural event convening leaders in journalism, technology, policy and research for an evidence-based discussion about enabling the benefits and managing the harms of AI use in journalism, with a focus on critical definitional considerations when constructing policy.

Co-sponsored by and hosted at the Computer History Museum in Mountain View, California, the Oct. 13 event brought together legal and intellectual property experts from OpenAI, Google and Microsoft; leading journalists from The Associated Press, Axios, Brazil’s Núcleo Jornalismo and Nigeria’s Premium Times; researchers in AI and intellectual property law from the University of Oxford, the University of Sussex, Research ICT Africa and Stanford; and technology policy and industry experts representing a range of organizations.

Participants

  • Anna Bulakh, Respeecher
  • Garance Burke, The Associated Press
  • Craig Forman, NextNews Ventures
  • Richard Gingras, Google
  • Andres Guadamuz, University of Sussex
  • Dan’l Lewin, Computer History Museum
  • Megan Morrone, Axios
  • Dapo Olorunyomi, Premium Times
  • Matt Perault, Center on Technology Policy
  • Ben Petrosky, Google
  • Kim Polese, CrowdSmart
  • Aimee Rinehart, The Associated Press
  • Tom Rubin, OpenAI
  • Marietje Schaake, Stanford Cyber Policy Center (moderator)
  • Felix Simon, Oxford Internet Institute
  • Krishna Sood, Microsoft
  • Sérgio Spagnuolo, Núcleo Jornalismo
  • Scott Timcke, Research ICT Africa

For more details, see the Appendix.

Among the questions considered were: How should policy define “artificial intelligence” in journalism, and what should fit into that bucket? How do we use language that plans for future technological changes? What are the important complexities related to copyright considerations around AI in journalism? 

The productive session, held under a modified Chatham House Rule, sets the tone for the many convenings CNTI will hold in the months and years to come across a range of issues facing our digital news environment: using research as the foundation for practical, collaborative, solutions-oriented conversations among thoughtful leaders who don’t agree on all the elements, but who all care about finding solutions that safeguard an independent news media and access to fact-based news. As one participant put it, even just for AI: “We need 50, 100, of these.”

Five key takeaways emerged from the discussion:

  1. Better articulation, categorization and understanding of AI is essential for productive discussions.
  2. Whether a particular AI use is a benefit or harm to society depends on its context and degree of use, making specificity vital to effective policy.
  3. Even when policy is groundbreaking, it must also take into account how it relates to and builds on prior policy.
  4. One policy goal should be to address disparities in the uses and benefits of AI as a public good.
  5. Both inputs and outputs, at all stages of building and using AI, need to be considered thoroughly in policy development.

The day concluded with ideas for next steps, including taking stock of AI use cases in journalism, creating clear and consistent definitions of news and news publishers, and examining copyright laws to better understand how exactly they apply to AI use in news.

Stay tuned for CNTI’s second AI convening, which will consider oversight structures assigned to organizational and legislative policies around AI use in journalism.

Cross-industry experts unanimously agreed on the need to work together to better define AI and articulate clear categories of use. This will enable a common understanding and allow for more productive conversations. Right now, that is not happening. As one participant remarked, “We jump into the pool … and we’re not even talking about the same thing.”

An overarching definition of AI: The participants chose not to spend their limited time together writing a precise definition of AI, but they shared definitions they have found to be useful starting points including those from:

  • Council of the EU: “systems developed through machine learning approaches and logic- and knowledge-based approaches.”
  • AP Stylebook’s AI chapter: separate definitions for “artificial intelligence,” “artificial general intelligence,” “generative AI,” “large language models” and “machine learning.”
  • Melanie Mitchell: “Computational simulation of human capabilities in tightly defined areas, most commonly through the application of machine learning approaches, a subset of AI in which machines learn, e.g., from data or their own performance.”

Research backs up the need for, and lack of, clarity around AI.

A lack of conceptual clarity around AI, changing interpretations of what qualifies as AI and the use of AI as an umbrella term in practice and policy can make potential violations of law incredibly hard to identify and enforce. Recent legislative activity around AI, including the European Union’s AI Act, has been criticized for offering a broad legal definition of “general purpose AI systems,” making it difficult to know what would or would not be included in this scope. Canada’s Bill C-27, similarly, does not define the “high-impact systems” it is regulating. It is important to proceed from shared understandings of the technology at issue and the harms policymakers hope to address. For example, attempting to write laws touching only “generative AI” (e.g., chatbots, LLMs) could inadvertently prohibit processes that underlie a range of technological tools, while applying requirements to broad or vaguely defined categories of technology could lead to legal uncertainty and over-regulation.

Within these definitions are several categories of use that must also be clearly differentiated and articulated. They include: 

The scope of AI: AI is not new – neither in general nor in journalism – and represents a much broader set of technologies than simply generative AI (GAI) or large language models (LLMs). Nevertheless, one participant noted research finding that many burgeoning AI newsroom policies narrowly define AI as GAI and LLMs, which “speaks to the challenges of grasping this set of technologies.”

The type of AI use: It is important to differentiate among the various types of AI use related to news. There are uses for news coverage and creation and, within that, questions around the degree of human vs. AI involvement in published content. There are uses that help with newsroom efficiency and cost savings, such as transcription, that don’t necessarily result in public-facing AI output. There are uses for news distribution and access, such as translation. At least one participant remarked on the tendency, when thinking about LLMs, to limit the scope of the definition to text-based GAI rather than conceptualizing GAI comprehensively to include other formats and uses, such as XtownLA’s use of AI models for reporting on public hearings.

Who the user is: It is critical to specify who the user is at each point in the process of AI use so that appropriate responsibility, and perhaps liability, can be attributed. Is it newsrooms? Members of the public? Technology or third-party companies? Governments? This articulation is often missed.

The part of the AI system being accessed: A few participants talked about the importance of understanding and differentiating among the different levels of AI systems. The first level is, for example, the LLM being built. The second level is the API or corpus (e.g., all data being used as inputs) of what is in the model, or further steps such as reinforcement learning from human feedback (RLHF). Third is the application use level, such as ChatGPT, in which humans do not actually interact with the model itself. Each of these, as one participant noted, should have different policy considerations.

Another pitfall in current discussions occurs when we don’t take time to articulate what is neither a part of AI nor tied directly to its use. This is particularly important when it comes to guarding against harms. Participants articulated the need to distinguish between which issues are attributed to the internet and social media age in general and which issues are linked specifically to AI technologies.

Better understanding of these definitions and categorizations can also help build trust, which several participants named as critical for positive outcomes from AI use and policy development. One participant asked, “How do we generate trust around something that is complicated, complex, new and continuously changing?” Another added, “Trust is still really important in how we integrate novel technologies and develop them and think two steps ahead.”

How do we create better knowledge and understanding? If we want this articulation to lead to better understanding and knowledge, how do we get there? How can we make that knowledge more accessible to journalists, researchers and policymakers? What can technology companies do? How can CNTI help distribute this knowledge?

Whether a particular use of AI is a benefit or a harm to society can depend on several factors, including the degree of use, the strength of protective actions and the subject matter surrounding the use. Effective policy must include specific and context-sensitive considerations. A prime example of this, discussed extensively by the group, is algorithmic transparency. (For more on this topic, see CNTI’s related Issue Primer.)

While there was general agreement on the importance of some level of transparency about how models are built and how newsrooms, journalists or others use AI – and, in most cases, agreement that more transparency is needed than currently exists – participants also discussed instances when transparency might be counterproductive. Is there a point at which the level of transparency carries more risk than value or causes more confusion than clarity – for instance, where “the level of transparency and detail … can actually undermine the security and safety of the models”? Determining where to draw that line is critical but difficult. 

For example, optimal transparency around how AI is used in hiring processes or in making bail recommendations might differ from optimal transparency around the trillions of tokens that go into an LLM. The latter may be less useful to release to the public without restriction, given how widely abilities to evaluate this knowledge vary, and there could be a point at which the benefit of transparency is outweighed by the risk of malign actors abusing these systems. There seems to be clear value, though, in public transparency about whether content was fully AI-generated, was AI-generated with human involvement or came only from a human.
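As a purely illustrative way to picture those three disclosure categories, a newsroom could attach a simple provenance label to each piece of content. The sketch below is hypothetical (the class and field names are invented, and no existing standard or CNTI recommendation is implied).

```python
# Hypothetical sketch of a content provenance label covering the three
# disclosure categories discussed above. Illustrative only.
from dataclasses import dataclass
from enum import Enum

class Provenance(Enum):
    HUMAN_ONLY = "human_only"          # written entirely by a person
    AI_ASSISTED = "ai_assisted"        # AI-generated with human involvement
    AI_GENERATED = "ai_generated"      # fully AI-generated

@dataclass
class ContentLabel:
    content_id: str
    provenance: Provenance
    disclosure_note: str = ""          # optional public-facing explanation

label = ContentLabel(
    content_id="story-123",
    provenance=Provenance.AI_ASSISTED,
    disclosure_note="Draft transcribed with an AI tool, then edited by staff.",
)
print(label.provenance.value)  # -> ai_assisted
```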

There were some different opinions about how to – or who should – determine how transparency extends to the broader public. One individual suggested “more transparency is better,” and then journalists can “make choices about how we describe all of this.” Participants agreed that we “have something to grapple with” and must work toward this together. These conversations call for “precision in the way we are discussing and analyzing different use cases so we ensure that policy that is created is specific to a given situation,” the challenge of doing so amid technology’s rapid evolution notwithstanding.

Another discussion surfaced about benefits of AI that have grown through advancements in LLMs and that AI policy may want to protect: transcription and translation. We “almost gloss over some of those [tools like transcription] because they’re such routinized things,” but “it is really the kind of scale of deployment that’s effective and interesting.” Similarly, translation has been used for a long time, but GAI has scaled it to create new opportunities for public use and benefit.

As we think about benefits of AI use, what are other AI practices that can be scaled up in ways that bring value to societies more broadly? What methods can be used to help determine how and when a benefit might become a harm, or vice versa?

While it is important to focus on clear, specific definitions when constructing new regulations, the group recognized that policy development is a layered process, and it is necessary to consider it in the context of previously adopted policy. So while developing clear definitions around AI matters, existing policy will inevitably impact new policy. 

Take, for example, anti-discrimination law. In many countries, discrimination is illegal regardless of how it occurs, including via AI. The specific definition of AI doesn’t really impact whether its use violates such laws. In a similar example, European Union and Canadian policies influence how AI gets trained and used in the context of data protection. Even if many of today’s AI systems did not exist when data protection laws were written, they still have an effect when it comes to AI. As one participant put it:

“Policy is often a layered process, where previously adopted laws matter. … It’s always a combination of things. … We are not starting totally from scratch, not legally and not procedurally.”

While there was clear agreement that definitions are important, at least one participant also asked: Have we reached a time where being clear about which principles should be protected is as important as the definitions themselves? Those principles can be articulated in ways that are technology-agnostic. Would this kind of approach help us create policy that can endure future technological developments? Or perhaps the answer is a nexus point of definitions and principles. 

A core principle for CNTI, and for this convening, is a free and independent press, which is under intense pressure in many parts of the world. Could we look at the various ways in which uses of AI in journalism impact that principle and approach policy development with that lens? 

Perhaps. But as other participants pointed out, that approach can be problematic if it is not done well, and definitions remain essential considerations. For example, a 2023 bill in Brazil proposed changing its Penal Code to include an article doubling the penalty for using AI to commit online fraud without actually defining AI – thus validating concerns about the impact of unclear definitions in legislative policy.

When it comes to AI use in journalism, what is the right balance between broad principles and specific definitions? Are there examples of successful regulatory structures that already exist, such as anti-discrimination policies, that can help us strike that balance? 

The importance of greater parity arose many times throughout the day. There are currently numerous global disparities in access to, use of and value of AI models. There was strong consensus that we need to work together on creating equality around AI as a public good, including the use of AI to help close gaps in news access. Inequalities work against the public good, which public policy is intended to protect. Use of and access to AI tools need to be democratized. How can public policy help that? Is this a case where organizational policy is as beneficial as — or perhaps more beneficial than — legislation?

Some specific areas of disparity discussed at the convening include: 

The content in current AI models misses many people and many parts of the world. Certain parts of the world can’t realize the same potential of AI, especially GAI, because a substantial proportion of the world’s languages are not accounted for in LLMs (either because they do not have enough training data to create them or because tools aren’t well-constructed for them). While the practical effect of this disparity is easy to grasp, some less-considered impacts include reduced public trust in AI tools and their outputs. 

GAI models often replicate existing social biases and inequities. Participants questioned the ability of many AI models to “reflect the full breadth of human expression,” noting examples ranging from the generation of photos of engineers as white men to the inclusion of QAnon conspiracy theories in LLM training data. Research supports this skepticism.

AI models, and the companies providing them, are largely exported from only a few countries. In many parts of the world, technological systems like GAI applications are largely imported from a small number of (often Western) companies. This means data used to build the models, as well as the models and tools, are imported from people and entities who likely do not understand the nuances of the information environment of the end users. This has led to greater public skepticism about — and even distrust toward — the implementation of these tools, both in general and in journalism specifically. One participant noted:

“There is a lot of aggravation that one can’t trust the technology because Africa is an importer of these AI tools.”

News is not a digital-first product in all parts of the world. Print, television and radio are still popular modes of news consumption in many countries. And there remain areas with much less internet access than others. So, as there is a push to digitization, “there’s an extent to which AI is still going to end up on the paper in some way, shape or form and we have to think about what that type of signaling may mean.” This raises the question, not yet given much consideration, of how AI use and policy translate to print.

Discussions should also consider ways policy can support use of AI as a means to help close gaps in news access. There are parts of global society with less access to independent, fact-based news – whether the result of financial downfalls (e.g., news deserts in the U.S.), government control or high-risk environments such as war zones. Could AI be helpful here? If so, how? “We need to look at ways that we can rebuild newsrooms and do so in a way where it’s not just the large players that survive,” expressed one participant, “but local newsrooms survive, that everyone can actually have access to information.”

Wealthier news publishers have a greater advantage when it comes to AI use and licensing. There are currently clear financial advantages for large, well-resourced — and often Western — national and international news outlets. As pointed out by one participant: “Licensing is one way to obtain information, but you need to have funds, you need to have means … those with the biggest purses can obtain data and can possibly benefit more from AI. … That is bad from a public policy perspective because it means we’re not actually democratizing access to model development.” 

The varying relationships between governments and the press must be considered in policy discussions. In some contexts, policymakers simply may not value the principle of an independent press. In other contexts, such as in Caribbean nations, financial support via government ad spending is critical for the sustainability of media organizations, leading to a hesitance to create friction with their primary sources of funding. If governments do not trust media organizations’ adoption of AI tools, their actions can create economic problems for publishers or journalists.

What roles should legislative versus organizational policy play when it comes to addressing disparities in the use of AI as a public good?

The last portion of the day was intended to be focused on copyright law, but many of the points raised during the discussion have broader policy implications. This section summarizes these points first, then makes note of considerations specific to copyright and AI in journalism.

A substantial and, as some participants noted, justifiable amount of attention and litigation has focused on the inputs of AI systems, particularly on what data models are being trained on, especially when it comes to news content. Less attention has been paid to the other end of the equation: these systems’ outputs. As one participant noted, there is only one ongoing case related to AI outputs. There was a general acknowledgement that policy and policy discussions must consider the totality of the system. 

Two key issues emerged around AI outputs: ownership and liability. Both issues connect to copyright law and will likely be addressed in courts and public policy. One participant outlined three potential policy approaches:

  1. No AI-generated output is copyrighted and, thus, everything is in the public domain. This is the current policy approach in countries like the U.S.
  2. Some form of copyright of AI-generated output is recognized, as long as there is some degree of human intervention, and the human would receive the copyrights.
  3. AI-generated output receives short protections (e.g., 10 to 15 years) that could be registered to those who want to profit from transformative works.

Each option carries critical implications that are often lost in debates around AI inputs. Where is the cutoff point for AI-generated content? At what point in the editing process do we recognize content as having been transformed by AI and, therefore, no longer protected? Would this include the use of software, such as Photoshop, that has integrated AI tools? What does “human intervention” entail? And, for option three, what would be the duration of short-term protections (an option currently available in countries such as the United Kingdom)? Any policy, the group agreed, needs a clearly defined auditing process that includes evidence of steps taken in the content creation process.

Participants also discussed the lack of clarity around output protections when it comes to patterns of language that models learn from snippets of words (rather than agreed-upon protections for creative expression like written articles). This ambiguity underlies other critical questions around issues like disinformation, such as decisions about whether to pull content like QAnon conspiracy theories out of model training data.

Clarity around outputs should be added to (but not replace) conversations about inputs: decisions around what data AI models are, or should be, trained on. As one participant noted, “Sticking to an understanding of the inputs here is important when we describe this interplay between the models and journalism, because journalism very much needs to be rooted in fact-based reporting.” Some suggested that the inputs question is potentially the easier issue to solve outside the bounds of policymaking: “You just pay [for data].” It is not clear, though, where current copyright law would fall on this, and, as noted earlier, such an approach can lead – and has led – to a winner-takes-most dynamic where technology companies and news publishers with the most money and resources have the most to gain.

This discussion also included a broader conversation about liability, both within the context of copyright policy and beyond it. Participants noted that one critical area in AI policymaking is about the various layers of responsibility for the application of expression via AI tools including the model itself, its API, and its application (e.g., ChatGPT):

“We need to think about what’s the framework to apportion responsibility and what responsibility lies at each level … so that you get the trust all the way up and down, because ultimately newsrooms want to be able to trust the technology they use and the end user wants to be able to trust the output or product from the newsroom.”

One element that came up at several points is the need for transparency from AI model developers in order to have more open discussions around how to apportion responsibility.

In addition to the areas for further consideration noted above, the convening introduced several potential opportunities for future work, including:

  • The important remaining question of definitions of journalism in policy. While time limitations did not allow for a more thorough discussion of this question in this convening, participants noted a need to focus on definitions of journalism in policymaking alongside definitions of AI.
  • Mapping the various elements of copyright and intellectual property protection and their impacts on journalism. While CNTI’s Issue Primer on copyright lays out the current state of copyright policy related to journalism, participants discussed the value of better understanding how exactly these laws apply to AI use in news. As one participant noted: “We should not only look at what makes sense for us in the here and now” but also “in contexts where different languages are spoken, where access is a huge problem, where skills are a huge problem, where disinformation has very different connotations … what can be the possible consequences for people who are more vulnerable, less empowered, in a different part of the world, and where the consequences can be way worse.”
  • A need to take stock of AI practices (separate from guidelines), and establish a living repository of AI use cases related to journalism. CNTI developed a table of potential AI benefits and harms related to journalism (shown below) as a starting point for this discussion, though this was not an exhaustive list, and there is opportunity to develop it further into a digital resource. Participants also noted the need for an open database or repository of actual AI use cases (including their positive and negative impacts) that journalists and others creating and delivering fact-based news can add to. Some foundational work around this exists.

Realized & Potential Benefits | Realized & Potential Harms
Efficiency in some tasks, enabling journalists to focus on more challenging work (e.g., transcription, translation, organization, summarization, data analysis, writing) | Loss of audience trust from errors or lack of transparency; IP infringement; journalistic loss of control
Easier mechanisms for content moderation | Potential for errors (false positives or negatives) and biases
Personalization and curation of news content for audiences | Implicit biases in methodological choices of models (language, social/cultural, technical)
Opportunities for innovation and open-access competition | Over-regulation that stifles innovation or benefits large, established news orgs and harms start-ups or freelancers
Accessible tools for new, local and smaller global newsrooms | Unequal global access and resources to invest in-house
Enhanced audience analytics/distribution (e.g., paywalls) | Unequal support for freelancers, creators, citizen journalists
Capture of evidence in unsafe or inaccessible contexts (e.g., satellite imagery) | Use of this technology and data by those seeking to create disinformation, clickbait and scams
Aggregation and promotion of news content | Journalistic loss of control; reliance on third-party data
Delivery of fact-based news within & across borders, including in unsafe contexts (e.g., AI-enabled bots converting news content into accessible formats not blockable by governments) | Potential breaches of privacy regulations
Source or document verification through watermarks, etc. | Potential for errors; falsification by those seeking to create disinformation or confusion over facts
Automating time-consuming bureaucratic processes (e.g., FOIA requests) | Lack of human sensitivity; over-conservative approaches (e.g., over-redacting information)
Corpus of easy-to-access information | Intellectual property or terms-of-service infringement; worsening financial strain on news publishers and journalists
Developing more comprehensive training data/AI models | Implicit biases in methodological choices of models

  • Anna Bulakh, Head of Ethics & Partnerships, Respeecher
  • Garance Burke, Global Investigative Journalist, The Associated Press
  • Craig Forman, Managing General Partner, NextNews Ventures (CNTI Board)
  • Richard Gingras, Global VP of News, Google (CNTI Board)
  • Andres Guadamuz, Reader in Intellectual Property Law, University of Sussex
  • Dan’l Lewin, President & CEO, Computer History Museum
  • Megan Morrone, Technology Editor, Axios
  • Dapo Olorunyomi, Publisher, Premium Times
  • Matt Perault, Director, Center on Technology Policy
  • Ben Petrosky, Senior Policy Counsel, Google
  • Kim Polese, Chairman, CrowdSmart
  • Aimee Rinehart, Local News & AI Program Manager, The Associated Press
  • Tom Rubin, Chief of Intellectual Property & Content, OpenAI
  • Marietje Schaake, International Policy Director, Stanford Cyber Policy Center (CNTI Board)
  • Felix Simon, Researcher, Oxford Internet Institute
  • Krishna Sood, Assistant General Counsel, Microsoft
  • Sérgio Spagnuolo, Founder/Executive Director, Núcleo Jornalismo
  • Scott Timcke, Senior Research Associate, Research ICT Africa

CNTI’s cross-industry convenings espouse evidence-based, thoughtful and challenging conversations about the issue at hand, with the goal of building trust and ongoing relationships along with some agreed-upon approaches to policy. To that end, this convening adhered to a slightly amended Chatham House Rule:

  1. Individuals are invited as leading thinkers from important parts of our digital news environment and as critical voices to finding feasible solutions. For the purposes of transparency, CNTI feels it is important to publicly list all attendees and affiliations present. Any reporting on the event, including CNTI’s reports summarizing key takeaways and next steps, can share information (including unattributed quotes) but cannot explicitly or implicitly identify who said what.
  2. CNTI does request the use of photo and video at convenings. Videography is intended to help with the summary report. Any public use of video clips with dialogue by CNTI or its co-hosts requires the explicit, advance consent of the subject.
  3. To maintain focus on the discussion at hand, we asked that there be no external posting during the event itself.

Participants were not asked to present prepared remarks; rather, this was a thoughtful guided discussion. To prepare, we asked that participants review CNTI’s Issue Primers on AI in journalism and modernizing copyright law.

Participants at our convening event shared a number of helpful resources. Many of these resources are aimed at assisting local newsrooms. We present them in alphabetical order by organization/sponsor below. 

The American Journalism Project (AJP), which has announced a new partnership with OpenAI, serves as a useful resource for local newsrooms, bolstering local news projects to ensure all communities in the U.S. have access to trusted information. 

The Associated Press (AP) has launched, in addition to its Stylebook AI chapter, five new tools to improve workflow efficiencies in newsrooms with a focus on automation and transcription services. The AP also has assigned journalists on its global investigative team and beyond to cover artificial intelligence tools and their impacts on communities.

Another resource that organizations may find useful is CrowdSmart’s research on AI and conducting customer interviews, using AI to measure conversations and engage with human subjects, with the benefit of quantifying conversations in real time. 

Google has developed Pinpoint, a digital tool for analyzing and transcribing documents, that organizes and facilitates document collection. The company has also launched its Data Commons resource which enables users to access publicly available data from around the world.

Microsoft’s Journalism Hub shares tools to promote sustainable approaches to local journalism and includes a partnership with NOTA, a journalist-founded AI startup aiming to streamline newsroom processes without replacing journalists. Meanwhile, Microsoft’s open-data campaign aims to address inequalities in access by developing datasets and technologies that make data sharing easier.

Finally, Respeecher, a novel startup founded in 2018, uses artificial intelligence to generate synthetic speech. The company has partnered with video game developers and motion picture studios to produce voices for characters – both fictional and real. 

We appreciate all of our participants for sharing these resources with CNTI.

The Center for News, Technology & Innovation (CNTI), an independent global policy research center, seeks to encourage independent, sustainable media, maintain an open internet and foster informed public policy conversations. CNTI’s cross-industry convenings espouse evidence-based, thoughtful but challenging conversations about the issue at hand, with an eye toward feasible steps forward.

The Center for News, Technology & Innovation is a project of the Foundation for Technology, News & Public Affairs.

CNTI sincerely thanks the participants of this convening for their time and insights, and we are grateful to the Computer History Museum, the co-sponsor and host of this AI convening. Special thanks to Dan’l Lewin, Marguerite Gong Hancock and David Murphy for their support, and to Marietje Schaake for moderating such a productive discussion.

CNTI is generously supported by Craig Newmark Philanthropies, John D. and Catherine T. MacArthur Foundation, John S. and James L. Knight Foundation, the Lenfest Institute for Journalism and Google.

The post Defining AI in News appeared first on Center for News, Technology & Innovation.
