Generative AI Policy Paper

‘Artificial minds, born of code and computation, handle with great care.’ ChatGPT, 2023

Why do we need a generative AI policy?

Generative AI tools are relatively new, but they are already becoming widely integrated into work practices and software. If we do not engage with these tools at all, we risk losing methodological and competitive advantages by not making use of all the technological tools available to us, as well as being out of step with how other organisations and citizens are experiencing new technologies. If we engage with them without consideration, we may overlook unintended consequences and the ways we may contribute to technology-facilitated harms, which could negatively impact our team, those we work with, or the integrity of our work and the trust people place in it. 

As such, this policy paper is the start of an ongoing process as these tools and our collective understanding of them develop. Given the pace of change, we will review this paper regularly. We would welcome feedback and critique on any aspect of this paper to support the development of these policies. Please email [email protected] with feedback. 

This paper sets out the current state of play, our AI policy and our future focus. We have developed these based on: 

  • An evidence review on generative AI 
  • A review of generative AI use policies published by other organisations, notably newsrooms 
  • Team consultation and discussion
  • Our existing policies on the use of digital technologies 

What do we mean by generative AI?

We use this as a catch-all term to refer to the emerging class of AI models which create new content – text, video, images – in response to a user prompt. Examples include ChatGPT, DALL-E, and Bard. 

A good description of what these tools actually do from the New Yorker: “A user types a prompt into a chat interface; this prompt is transformed into a big collection of numbers, which are then multiplied against the billions of numerical values that define the program’s constituent neural networks, creating a cascade of frenetic math directed toward the humble goal of predicting useful words to output next. The result of these efforts might very well be jaw-dropping in its nuance and accuracy, but behind the scenes its generation lacks majesty.”
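
To make the "predicting useful words to output next" step concrete, the short sketch below runs exactly that loop, one token at a time, using the small openly available GPT-2 model through the Hugging Face transformers library. The model, prompt and library are illustrative assumptions for this sketch only (they are not tools we use or endorse); the point is simply that generation is repeated next-word prediction.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative choices: a small open model, run locally
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "A generative AI policy should"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                           # add 20 tokens, one at a time
        logits = model(input_ids).logits          # a score for every possible next token
        next_id = torch.argmax(logits[0, -1])     # keep the single most likely one
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))

Each pass through the loop is one round of the "cascade of frenetic math" described above; everything else a chat interface does is built on top of this step.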

Keuka College has set out a series of principles which define ChatGPT as not a writer, owing to its lack of authorial intent, and suggests the following approach to understanding its role in writing: 

“ChatGPT a) is not a writer, nor should the text it generates be used as a sole, one-to-one replacement for student-authored writing, and b) should only be employed in the compositional process with extreme care, and the text it produces should always be subjected to the student-writer’s mindful, critical appraisal—with specific attention paid to the currency, validity, and trustworthiness of ChatGPT’s “pre-training,” which very likely replicates the biases of its programmers and the audiences to which it caters.”

What are we/could we be using generative AI for? 

  • As a route to locate evidence and literature sources, using more trusted tools and ensuring that everything we include in our work as a result has been independently and thoroughly verified 
  • To research existing policy solutions we can critique
  • To aid and speed up writing in routine tasks, such as emails
  • To help debug code 
  • To format all of our references automatically 
  • For non-expert text classification (a sketch of what this could look like follows this list)
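
On the last of these, the sketch below illustrates what prompting a model to classify short texts could look like. It is a hypothetical example only: it is written against the pre-1.0 openai Python package (the chat interface current when this paper was written), and the model name, labels and prompt wording are placeholder assumptions rather than anything we have adopted.

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; in practice load a key from an environment variable

LABELS = ["policy proposal", "opinion", "factual claim", "other"]  # illustrative labels

def classify(text):
    # Ask the model to return exactly one label for the supplied text
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Classify the user's text as exactly one of: "
                        + ", ".join(LABELS) + ". Reply with the label only."},
            {"role": "user", "content": text},
        ],
        temperature=0,  # keep the answer as deterministic as possible
    )
    return response["choices"][0]["message"]["content"].strip()

print(classify("The government should fund a citizens' assembly on AI."))

Consistent with the principles below, any labels produced this way would still be reviewed by a researcher before being used in our work, and no sensitive or personally identifiable text would be sent to the tool.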

Other organisations’ AI policies summarised (at the time of writing)

The key points of each organisation's policy are summarised below.
German news agency DPA
  • Open to increased and new uses of AI to improve their work
  • Final decisions made by humans, and human responsible for any AI generated content
  • AI generated content explained to users
  • AI used should be legal, ethical, and technically secure
  • Employees should experiment with uses in a transparent and documented way
Wired 
  • Do not publish text that is solely AI-generated or AI-edited, and will flag it if they ever do
  • May use to suggest headlines, social media posts
  • May use to generate ideas 
  • Do not use AI-generated images in place of stock photos
  • May experiment with it
Guardian 
  • Creating a working group to learn, consult and experiment with the technology
  • Adhering to journalistic standards
  • Plan forthcoming
Heidi.News
  • Adhering to the same journalistic standards
  • Not replacing people with machines
  • AI can be used to help or improve work, but human supervision and editing remains core
  • AI is a tool ‘not a source of information’
  • Journalists responsible for accuracy of work
  • AI can help refine raw data, act as a proofreader, etc. 
  • Transparency about any synthetic images used
Insider
  • Setting up a working group to experiment and prepare recommendations for use
  • Will experiment with using tools to write stories, prep interviews, outline stories, copyedit or write headlines
  • Journalists responsible for accuracy etc. in final output
  • Advised against inputting sensitive information
  • Will list its policies on AI use on its page 
Karoi Template ChatGPT Policy
  • Best used for feedback on own work 
  • Not permitted to enter sensitive or proprietary information into ChatGPT
  • Can use for feedback on snippets of work or to help with spell-checking, rewording etc. 

Principles for good practice at Demos 

The key point is that in many respects, nothing changes. We still retain our existing expectations about the quality, accuracy and originality of our work, as well as responsible uses of technology. These principles therefore clarify how those existing practices apply where they intersect with the use of generative AI tools, rather than creating new ones. 

The main message to our team is that they need to discuss with their manager, in their 1-2-1s, how they are using generative AI in their work, so we can understand the new ways it is proving useful and manage any risks accordingly. This should be an active conversation between team members and line managers.

1. HUMAN OVERSIGHT AND ACCOUNTABILITY 

We are always ultimately responsible for everything we publish.

If we publish something based on an output of a generative AI tool that subsequently proves to be misleading, harmful or false, these are our own errors and we will take appropriate action to correct them. 

The creator of any content we produce (whether a written report, blog, article, social media post, video, infographic or image) is responsible for the final output: in particular, its accuracy, clarity and substance. All content will continue to be subject to the same senior sign-off procedures we currently have in place. 

We understand the human aspects of our work that are novel 

Generative AI can help as a research tool and in drafting and developing communications. But our ideas are our greatest asset: at Demos, analysis, decision-making and interpretation are done by humans. 

2. TRANSPARENCY

We are transparent about how we use generative AI in our work. 

For instance: if conducting a systematic evidence review and using ChatGPT to identify sources, it would be appropriate to record that “we used ChatGPT [or another tool] to help us identify sources”.

If an ‘original’ idea is surfaced through a generative AI tool that we wish to include in our report (such as a policy recommendation that we had not generated ourselves but have tested and think has merit), it would be appropriate to include a footnote to that effect. This is not to assign authorial status to ChatGPT, but to recognise the intellectual capital of the people who created the data ChatGPT has been trained on, which enabled it to produce that output. It also enables other members of the team to understand in their reviews how generative AI has been used, if at all, in the production of the content. 

Other uses of generative AI tools (such as to facilitate formatting, summarise content, or identify evidence) do not need to be explicitly referenced, but should be included in a generally available published policy on how we may use generative AI in our research – for transparency with our readers and funders. 

3. ETHICAL AND SAFE USE 

We are aware of the risks involved in using generative AI tools, in particular with respect to the accuracy or bias of the outputs they produce

We are aware of the tendency of generative AI tools to ‘hallucinate’: they may create or invent information, facts, even very convincing citations and sources – which can be difficult and time-consuming to actively disprove. Newer models (e.g. GPT-4) are an improvement on earlier models in this regard, but these risks persist. 

For instance, this is a good summary, from the Wall Street Journal: “It’s difficult to quantify how often that produces something like a hallucination, but a really good rule of thumb is to think about when you’re asking about something, what do you think would be the quality of the data that ChatGPT or the AI tool was trained on? If you’re asking about something that is really common knowledge, then it’s very likely you’ll get correct information because it is just said so often on the internet, but if you start asking about something really niche or really fringe or areas that are actually known to have really high levels of misinformation on the internet, that’s when you are going to start getting garbage reflected in these answers.”

As such, we will not use generative AI tools as sole sources of information: we may use them to highlight routes to information (such as suggesting sources or evidence for us to read, or summarising commonly held arguments) but we will not reproduce facts or information presented by generative AI tools in our research without checking that they are supported by evidence. 

We will not use generative AI tools to conduct analysis for us which we then reproduce uncritically

We are also aware that bias is common, and outputs may exhibit biases as a result of the data on which the model has been trained (which is often not publicly available). We will be aware of this possibility when reviewing outputs from generative AI tools, and consider – as in all of our research – which perspectives may have been excluded, overlooked or de-amplified. We acknowledge that AI tools may surface information that is valuable to us, but realising that value requires more critical analysis and engagement than simply reproducing it. 

We are aware of the risk that generative AI may produce plagiarised material – as such, we are transparent wherever we use material generated by a generative AI tool unedited, and we will not use unedited material in our public work without a specific, explained methodological reason to do so. 

We take steps to mitigate the risks of bias, plagiarism and inaccuracy as we would with any aspect of our research. 

We will be aware of the impact our use of AI tools may have on the value of intellectual property. 

We may use AI-generated images for some purposes where there is a research need, but for general photography needs we will continue to use original and stock images where we can be confident that the images we are using have been licensed appropriately.

We will not input into ChatGPT any personally identifiable information of those that we work with, or any confidential or sensitive information. 

It is often unclear how the data a user inputs into generative AI tools will be used: it may be used to train AI models, and some models (e.g. ChatGPT) have been banned in some countries over data privacy concerns. Rule of thumb: if you wouldn't share the information publicly or outside of Demos in the normal course of our work, don't share it with a generative AI tool. Act as if anything you input can be read by a third party.

Examples of things not to do: 

  • Sharing a confidential RFP and asking ChatGPT to write a first draft response
  • Sharing personally identifiable information and asking ChatGPT to anonymise it 
  • Sharing a transcript from a focus group with personal information contained within the discussion and asking ChatGPT to summarise it 
  • Sharing code which may be proprietary and asking ChatGPT to debug it

As with our use of other digital platforms, we will abide by the terms of service and terms of use set out by the provider of the tool. 

We do not, as standard, conduct ‘jailbreak’ experiments (where we deliberately try to find workarounds to get an AI tool to output content that it has been trained not to produce). There may be exceptional cases where we deem it necessary for our work, and demonstrably in the public interest, to do so and violate these terms. Decisions to do this will be taken by the SLT. 

We will not ask generative AI to produce illegal, harmful or deliberately offensive outputs 

Examples of this might be asking it to design a chemical weapon for us, or asking it to be racist or sexist – even if this may be permitted by the terms of service. If there is a genuine research case for this (for instance: as part of a project investigating how generative AI could impact online harms), raise this with the SLT for sign-off first. If a generative AI tool you are using generates content which concerns or distresses you, please discuss it with your line manager or a member of the SLT. 

4. EXPERIMENTATION

We will share and discuss best practice, learnings, experiments, things that work well, things that don’t, and risks and opportunities we encounter. 

Everyone using a generative AI tool should be able to give a (broad, non-technical) account of how the tool works, and its strengths and weaknesses – we don’t use the tools uncritically and we test them for ourselves to see what works and what doesn’t in our own work.

We will actively engage both with teams internally and with those who we work with externally, to develop the conversation about the use of generative AI in our work. We are always open to feedback and learning on how we could be doing things differently or better. 

This could include: 

  • Involving research participants in discussions about the use of these tools if we plan to use them to engage the participants
  • Engaging with other think tanks and civil society organisations through existing networks to understand how other organisations are approaching using these tools and any challenges or opportunities they have experienced
  • Engaging with the AI community to better understand how these technologies are being developed and used, the latest thinking on ethical and responsible AI use, and contributing where we can

NEXT STEPS

We will investigate and clarify in future iterations of this policy: 

  • Which models are best suited for what tasks?
  • Which models have better/worse risk profiles?
  • We will also review how the team is using ChatGPT, to make sure our applications of the technology are in line with our principles.

Version 1.0 June 5, 2023

ENDS