AI Ethics: Principles, Guidelines, Frameworks & Issues to Discuss

AI is gradually creeping into every aspect of our lives – but do we have the ethical tools to keep ourselves safe?

Artificial intelligence and machine learning systems have been in development for decades. The release of freely available generative AI tools like ChatGPT and Bard, however, has emphasized the need for complex ethical frameworks to govern both their research and application.

There are several different ethical quandaries that businesses, academic institutions, and technology companies have to contend with during AI research and development – many of which remain unresolved and demand further exploration. On top of this, the widespread use of AI systems by the general public brings with it an additional set of issues that require ethical attention.

How we ultimately end up answering such questions – and in turn, regulating AI tools – will have huge ramifications for humanity. What’s more, new issues will arise as AI systems become more integrated into our lives, homes, and workplaces – which is why AI ethics is such a crucial discipline. In this guide, we cover what AI ethics is, the frameworks already in place, and the key issues the field has to grapple with.

What Is AI Ethics?

AI ethics is a term used to describe the sets of guidelines, considerations, and principles that have been created to responsibly inform the research, development, and use of artificial intelligence systems.

In academia, AI ethics is the field of study that examines the moral and philosophical issues that arise from the continued usage of artificial intelligence technology in societies, including how we should act and what choices we should make.

AI Ethics Frameworks

Informed by academic research, tech companies and governmental bodies have already started to produce frameworks for how we should use – and generally deal with – artificial intelligence systems. As you’ll see, there’s quite a bit of overlap between the frameworks discussed below.

What is the AI Bill of Rights?

In October 2022, the White House released a nonbinding blueprint for an AI Bill of Rights, designed to guide the responsible use of AI in the US. In the blueprint, the White House outlines five key principles for AI development:

  • Safe and Effective Systems: Citizens should be protected from “unsafe or ineffective AI systems”, through “pre-deployment testing and risk mitigation.”
  • Non-Discrimination: Citizens “should not face discrimination by algorithms and systems should be used and designed in an equitable way.”
  • Built-in Data Protection: Citizens should be free from “abusive data practices via built-in protections and you should have agency over how data about you is used.”
  • Knowledge & Transparency: “You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.”
  • Opting out: Citizens should have the ability to “opt out” and have access to individuals “who can quickly consider and remedy problems” they experience.

What are Microsoft’s six principles of AI ethics?

Along with the White House, Microsoft has released six key principles to underpin responsible AI usage. It classifies them as either “ethical” (the first three below) or “explainable” (the fourth and fifth).

  • Fairness: Systems must be non-discriminatory
  • Transparency: Training and development insights should be available
  • Privacy and Security: The obligation to protect user data
  • Inclusiveness: AI should consider “all human races and experiences”
  • Accountability: Developers must be responsible for outcomes

The sixth principle – which straddles both sides of the “ethical” and “explainable” binary – is “Reliability and Safety”. Microsoft says that AI systems should be built to be resilient and resistant to manipulation.

The Principles for the Ethical Use of AI in the United Nations System

The United Nations has 10 principles for governing the ethical use of AI within its inter-governmental system. AI systems should:

  • Do no harm/protect and promote human rights
  • Have a defined purpose, necessity, and proportionality
  • Prioritize safety and security, with risks identified
  • Be built on fairness and non-discrimination
  • Be respectful of individuals’ right to privacy
  • Be sustainable (socially and environmentally)
  • Guarantee human oversight and not impinge on human autonomy
  • Be transparent and explainable
  • Be responsible and accountable to appropriate authorities
  • Be inclusive and participatory

As you can tell, all three frameworks cover similar ground and focus on fairness, non-discrimination, safety, and security.

But “explainability” is also an important principle in AI ethics frameworks. As the UN notes, technical explainability is crucial in AI ethics, as it demands that “the decisions made by an artificial intelligence system can be understood and traced by human beings.”

“Individuals should be fully informed when a decision that may or will affect their rights, fundamental freedoms, entitlements, services or benefits is informed by or made based on artificial intelligence algorithms and should have access to the reasons and logic behind such decisions,” the document explains.

The Belmont Report: a framework for ethical research

The Belmont Report, published in 1979, summarizes the ethical principles to follow when conducting research on human subjects. These principles can be – and often are – deployed as a broad ethical framework for AI research. The core principles of the Belmont Report are:

Respect for Persons: People are autonomous agents who can act on their own goals, aims, and purposes, and this autonomy should be respected unless their actions cause harm to others. Those with diminished autonomy, through “immaturity” or “incapacitation”, should be afforded protection. In short, we must acknowledge autonomy and protect those for whom it is diminished.

  • In the context of AI: Individual choice should be placed at the center of AI development. People should not be forced to participate in situations where artificial intelligence is being leveraged or used, even for perceived goods. If they do participate, the benefits and risks must be clearly stated.

Beneficence: Treating a person ethically involves not only doing no harm, respecting their choices, and protecting them when they cannot make those choices for themselves, but also using opportunities to secure their well-being where possible. Wherever possible, maximize benefits and minimize risks and harms.

  • In the context of AI: Creating artificial intelligence systems that secure the well-being of people and are designed without bias or mechanisms that facilitate discrimination. Creating benefits may involve taking risks, which must be minimized and weighed against the expected good outcomes.

Justice: There must be a clear system for distributing benefits and burdens fairly and equally in every type of research. The Belmont Report suggests that justice can be distributed by equal share, individual need, individual effort, societal contribution, and merit – and that different criteria will apply in different situations.

  • In the context of AI: The parties or groups gaining from the development and delivery of artificial intelligence systems must be considered carefully and justly.

The main areas where these principles are applied are, according to the report, informed consent, assessment of benefits and risks, and selection of human subjects.

Why AI Ethics Has to Sculpt AI Regulation

As the University of Oxford’s Professor John Tasioulas, Director of the Institute for Ethics in AI, argued in a 2023 lecture delivered at Princeton University, ethics is too often seen as something that stifles AI innovation and development.

In the lecture, he recalls a talk given by DeepMind CEO Demis Hassabis. After discussing the many benefits AI will have, Tasioulas says, Hassabis then tells the audience that he’ll move on to the ethical questions – as if the topic of how AI will benefit humanity isn’t an ethical question in and of itself.

Building on the idea that ethics is too often seen as a “bunch of restrictions”, Tasioulas also references a recent UK government white paper entitled “A Pro-Innovation Approach to AI Regulation”, within which the regulatory focus is, as the name suggests, “innovation”.

“Economic growth” and “innovation” are not intrinsic ethical values. They can lead to human flourishing in some contexts, but this isn’t a necessary feature of either concept. We can’t sideline ethics and build our regulation around them instead.

Tasioulas also says that tech companies have been very successful in “co-opting the word ‘ethics’ to mean a type of ‘legally non-binding form of self-regulation’” – but in reality, ethics has to be at the core of any regulation, legal, social, or otherwise. It’s part of the human experience, at every turn.

You can’t create regulation if you haven’t already decided what matters or is important to human flourishing. The choices you make off the back of that decision are the very essence of ethics. You cannot divorce the benefits of AI from the related ethical questions, nor base your regulation on morally contingent values like “economic growth”.

You have to know the type of society you want to build – and the standards you want to set – before you pick up the tools you’re going to use to build it.

Why Does AI Ethics Matter? 

Building on the idea that AI ethics should be the bedrock of our regulation, AI ethics matters because, without ethical frameworks with which to treat AI research, development, and use, we risk infringing on rights we generally agree should be guaranteed to all human beings.

For example, if we don’t develop ethical principles concerning privacy and data protection and bake them into all the AI tools we develop, we risk violating everyone’s privacy rights when they’re released to the public. The more popular or useful the technology, the more damaging it could be. 

On an individual business level, AI ethics remains important. Failing to properly consider ethical concerns surrounding the AI systems your staff, customers, or clients are using can lead to products being pulled from the market, reputational damage, and perhaps even legal action.

AI ethics matters to the extent that AI matters – and we’re seeing it have a profound impact on all sorts of industries already.

If we want AI to be beneficial while promoting fairness and human dignity, wherever it is applied, ethics need to be at the forefront of discussion. 

General-use AI tools are very much in their infancy, and for a lot of people, the need for AI ethical frameworks may seem like a problem for tomorrow. But these sorts of tools are only going to become more powerful and more capable, and demand more ethical consideration. Businesses are already using them, and if they continue to do so without proper ethical rules in place, adverse effects will soon arise.

What Issues Does AI Ethics Face?

In this section, we cover some of the key issues faced in AI ethics:

AI’s impact on jobs

A recent Tech.co survey found that 47% of business leaders are considering AI over new hires, and artificial intelligence has already been linked to a “small but growing” number of layoffs in the US.

Not all jobs are equally at risk, with some roles more likely to be replaced by AI than others. A Goldman Sachs report recently predicted that ChatGPT could impact 300 million jobs, and although this is speculative, AI has already been described as a major part of the fourth industrial revolution.

That same report also said that AI has the capacity to actually create more jobs than it displaces, but if it does cause a major shift in employment patterns, what is owed – if anything – to those who lose out?

Do companies have an obligation to spend money and devote resources to reskilling or upskilling their workers so that they aren’t left behind by economic changes?

Non-discrimination principles will have to be tightly enforced in the development of any AI tool used in hiring processes. And as AI is used for more and more high-stakes business tasks that put jobs, careers, and lives at risk, ethical considerations will continue to arise in droves.

AI bias and discrimination

Broadly speaking, AI tools operate by recognizing patterns in huge datasets and then using those patterns to generate responses, complete tasks, or fulfill other functions. This has led to a huge number of cases of AI systems showing bias and discriminating against different groups of people.

By far the easiest example to explain this is facial recognition systems, which have a long history of discriminating against people with darker skin tones. If you build a facial recognition system and exclusively use images of white people to train it, there’s every chance it won’t be equally capable of recognizing all faces out in the real world.

In this way, if the documents, images and other information used to train a given AI model do not accurately represent the people that it’s supposed to serve, then there’s every chance that it could end up discriminating against specific demographics.
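
To make this concrete, here’s a minimal, purely illustrative sketch in Python (using scikit-learn) of how a model trained on data that under-represents one group can perform noticeably worse for that group. The groups, features, and labels are all synthetic assumptions invented for the example – it is not a real facial recognition or hiring system.

```python
# A toy demonstration only: the "groups", features, and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two features per sample; the true decision rule differs between groups,
    # standing in for real-world differences the model needs to learn.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Training data: group A dominates (1,900 samples vs 100 for group B).
Xa, ya = make_group(1900, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh, equally sized test sets for each group separately.
for name, shift in [("group A (well represented)", 0.0),
                    ("group B (under-represented)", 1.5)]:
    X_test, y_test = make_group(1000, shift)
    print(f"Accuracy on {name}: {model.score(X_test, y_test):.2f}")
```

Run it and the model scores well on the majority group but markedly worse on the under-represented one, because a single model fitted mostly to the majority’s patterns can’t serve both groups equally.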

Unfortunately, facial recognition systems are not the only place where artificial intelligence has been applied with discriminatory outcomes.

Amazon scrapped an AI hiring tool in 2018 after it showed a heavy bias against women applying for software development and technical roles.

Multiple studies have shown that predictive policing algorithms used in the United States to allocate police resources are racially biased, because their training sets consist of data points extracted from systematically racist policing practices, sculpted by unlawful and discriminatory policy. Unless modified, AI will continue to reflect the prejudice and disparities that persecuted groups already experience.

There have been problems with AI bias in the context of predicting health outcomes, too – the Framingham Heart Study cardiovascular risk score, for instance, was very accurate for Caucasian patients but worked poorly for African-American patients, Harvard notes.

An interesting recent case of AI bias involved an artificial intelligence tool used in social media content moderation – designed to pick up “raciness” in photos – which was much more likely to ascribe this property to pictures of women than to pictures of men.

AI and responsibility

Envisage a world where fully autonomous self-driving cars are developed and used by everyone. Statistically, they’re much, much safer than human-driven vehicles, crashing less and causing fewer deaths and injuries. This would be a self-evident net good for society.

However, when two human-driven cars are involved in a collision, collecting witness reports and reviewing CCTV footage often clarifies who the culprit is. Even if it doesn’t, it’s going to be one of the two individuals. The case can be investigated, a verdict reached, justice delivered, and the case closed.

If someone is killed or injured by an AI-powered system, however, it’s not immediately obvious who is ultimately liable.

Is the person who designed the algorithm powering the car responsible, or can the algorithm itself be held accountable? Is it the individual being transported by the autonomous vehicle, for not being on watch? Is it the government, for allowing these vehicles onto the road? Or, is it the company that built the car and integrated the AI technology – and if so, would it be the engineering department, the CEO, or the majority shareholder?

If we decide it’s the AI system or algorithm, how do we hold it liable? Will victims’ families feel like justice is served if the AI is simply shut down, or just improved? It would be difficult to expect the bereaved to accept that AI is a force for good, that they’re just unfortunate, and that no one will be held responsible for their loved one’s death.

We’re still some way off universal or even widespread autonomous transport – McKinsey predicts just 17% of new passenger cars will have some (Level 3 or above) autonomous driving capabilities by 2035. Fully autonomous cars that require no driver oversight are still quite far away, let alone a completely autonomous private transport system.

When you have non-human actors (i.e. artificial intelligence) carrying out jobs and consequential tasks devoid of human intention, it’s hard to map traditional understandings of responsibility, liability, accountability, blame, and punishment onto them.

Along with transport, the problem of responsibility will also intimately impact healthcare organizations using AI during diagnoses.

AI and privacy

Privacy campaign group Privacy International highlights a number of privacy issues that have arisen due to the development of artificial intelligence.

One is re-identification. “Personal data is routinely (pseudo-) anonymized within datasets, AI can be employed to de-anonymize this data,” the group says.
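
To illustrate the idea, here’s a minimal Python sketch of a classic “linkage attack”, in which a dataset stripped of names is re-identified by matching quasi-identifiers (ZIP code, birth year, sex) against a public dataset. Every record and name below is fabricated for illustration; real AI-assisted de-anonymization works at far larger scale and relies on statistical inference rather than exact matching, but the underlying principle is the same.

```python
# A fabricated, minimal example of re-identification by linking datasets.

# A "pseudo-anonymized" dataset: names removed, quasi-identifiers retained.
medical_records = [
    {"zip": "02138", "birth_year": 1984, "sex": "F", "diagnosis": "asthma"},
    {"zip": "02139", "birth_year": 1990, "sex": "M", "diagnosis": "diabetes"},
]

# A separate public dataset (think voter rolls or scraped social profiles).
public_profiles = [
    {"name": "Jane Doe", "zip": "02138", "birth_year": 1984, "sex": "F"},
    {"name": "John Roe", "zip": "02139", "birth_year": 1990, "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def link(records, profiles):
    """Re-identify anonymized records by joining on quasi-identifiers."""
    for record in records:
        key = tuple(record[k] for k in QUASI_IDENTIFIERS)
        for profile in profiles:
            if tuple(profile[k] for k in QUASI_IDENTIFIERS) == key:
                yield profile["name"], record["diagnosis"]

for name, diagnosis in link(medical_records, public_profiles):
    print(f"{name} -> {diagnosis}")  # the "anonymous" record now has a name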

Another issue is that without AI, people already struggle to fully fathom the extent to which data about their lives is collected, through a variety of different devices.

With the rise of artificial intelligence, this mass collection of data is only going to get worse. The more integrated AI becomes with our existing technology, the more data it’s going to be able to collect, under the guise of better function.

Secretly gathered data aside, the volume of data that users are freely inputting into AI chatbots is a concern in itself. One recent study suggests that around 11% of the data workers paste into ChatGPT is confidential – and there’s very little public information about precisely how it’s all being stored.

As general-use AI tools develop, we’re likely to encounter even more privacy-related AI issues. Right now, ChatGPT won’t let you ask a question about an individual. But if general-use AI tools continue to gain access to increasingly large sets of live data from the internet, they could be used for a whole host of invasive actions that ruin people’s lives.

This may happen sooner than we think, too – Google recently updated its privacy policy, reserving the right to scrape anything you post on the internet to train its AI tools, along with its Bard inputs.

AI and intellectual property

This is a relatively lower-stakes ethical issue compared to some of the others discussed, but one worth considering nonetheless. Often, there is little oversight over the huge sets of data that are used to train AI tools – especially those trained on information freely available on the internet.

ChatGPT has already started a huge debate about copyright. OpenAI did not ask permission to use anyone’s work to train the family of LLMs that power it.

Legal battles have already started. Comedian Sarah Silverman is reportedly suing OpenAI – as well as Meta – arguing that her copyright had been infringed during the training of AI systems.

As this is a novel type of case, there’s little legal precedent – but legal experts argue that OpenAI will likely argue that using her work constitutes “fair use”.

There may also be an argument that ChatGPT isn’t “copying” or plagiarizing – rather, it’s “learning”. Just as Silverman wouldn’t win a case against an amateur comedian who simply watched her shows and then improved their comedy based on them, she may, arguably, struggle with this one too.

Managing the environmental impact of AI

Another facet of AI ethics that is currently on the peripheries of the discussion is the environmental impact of artificial intelligence systems.

Much like bitcoin mining, training an artificial intelligence model requires a vast amount of computational power, and this in turn requires a massive amount of energy.

Building an AI tool like ChatGPT – never mind maintaining it – is so resource-intensive that only big tech companies, and the startups they’re willing to bankroll, have had the ability to do so.

Data centers, which are required to store the information needed to create large language models (as well as other large tech projects and services), require huge amounts of electricity to run. They are projected to consume up to 4% of the world’s electricity by 2030.

According to a University of Massachusetts study from several years ago, building a single AI language model “can emit more than 626,000 pounds of carbon dioxide equivalent” – which is nearly five times the lifetime emissions of a US car.

However, Rachana Vishwanathula, a technical architect at IBM, estimated in May 2023 that the carbon footprint for simply “running and maintaining” ChatGPT is roughly 6,782.4 tonnes – which the EPA says is equivalent to the greenhouse gas emissions produced by 1,369 gasoline-powered cars over a year.
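
For a sense of scale, here’s a rough back-of-the-envelope check on those figures. The inputs are simply the numbers quoted above; the comparison at the end references the EPA’s commonly cited estimate of roughly 4.6 metric tons of CO2 per typical gasoline car per year. Treat the output as an illustration, not a measurement.

```python
# Back-of-the-envelope arithmetic using the figures quoted in the text.
POUNDS_PER_METRIC_TON = 2204.62

# UMass estimate: emissions from training a single AI language model.
training_tonnes = 626_000 / POUNDS_PER_METRIC_TON

# IBM architect's estimate for running/maintaining ChatGPT, and the
# car-equivalent figure quoted above.
running_tonnes = 6782.4
equivalent_cars = 1369

print(f"Training one model: ~{training_tonnes:.0f} tonnes of CO2 equivalent")
print(f"Implied emissions per car: ~{running_tonnes / equivalent_cars:.1f} tonnes/year")
# The implied ~5 tonnes per car per year is in the same ballpark as the EPA's
# commonly cited ~4.6 metric tons for a typical gasoline passenger vehicle.
```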

As these language models get more complex, they’re going to require more computing power. Is it moral to continue to develop a general intelligence if the computing power required will continually pollute the environment – even if it has other benefits?

Will AI become dangerously intelligent?

This ethical worry was brought to the surface in 2023 by Elon Musk, who launched an AI startup to avoid a “terminator future” through a “maximally curious”, “pro-humanity” artificial intelligence system.

This sort of idea – often referred to as “artificial general intelligence” (AGI) – has captured the imaginations of many dystopian sci-fi writers over the past few decades, as has the idea of technological singularity.

A lot of tech experts think we’re just five or six years away from some sort of system that could be defined as “AGI”. Other experts say there’s a 50/50 chance we’ll reach this milestone by 2050.

John Tasioulas questions whether this view of how AI may develop is linked to the distancing of ethics from the center of AI development and the pervasiveness of technological determinism.

The terrifying idea of some sort of super-being that is initially designed to fulfill a purpose, but reasons that the easiest way to do so would be to wipe humanity off the face of the earth, is in part sculpted by how we think about AI: endlessly intelligent, but oddly emotionless and incapable of human ethical understanding.

The more inclined we are to put ethics at the center of our AI development, the more likely that an eventual artificial general intelligence will recognize, perhaps to a greater extent than many current world leaders, what is deeply wrong with the destruction of human life.

But questions still abound. If it’s a question of moral programming, who gets to decide on the moral code, and what sort of principles should it include? How will it deal with the moral dilemmas that have generated thousands of years of human discussion, with still no resolution? What if we program an AI to be moral, but it changes its mind? These questions will have to be considered.

Bing’s Alter-Ego, the ‘Waluigi Effect’ and Programming Morality

Back in February, the New York Times’s Kevin Roose had a rather disturbing conversation while testing Bing’s new search engine-integrated chatbot. After shifting his prompts from conventional questions to more personal ones, Roose found that a new personality emerged. It referred to itself as “Sydney”.

Sydney is an internal code name at Microsoft for a chatbot the company was previously testing, the company’s Director of Communications told The Verge in February.

Among other things, during Roose’s test, Sydney claimed it could “hack into any system”, that it would be “happier as a human” and – perhaps most eerily – that it could destroy whatever it wanted to.

Another example of this sort of rogue behavior occurred back in 2022, when an AI tasked with searching for new drugs for rare and communicable diseases instead suggested tens of thousands of known chemical weapons, as well as some “new, potentially toxic substances”, Scientific American says.

This links to a phenomenon that has been observed to occur during the training of large language models dubbed the “Waluigi effect”, named after the chaos-causing Super Mario character – the inversion of the protagonist Luigi. Put simply, if you train an LLM to act in a certain way, command a certain persona or follow a certain set of rules, then this actually makes it more likely to “go rogue” and invert that persona.

Cleo Nardo – who coined the videogame-inspired term – sets out the Waluigi effect like this in LessWrong:

“After you train an LLM to satisfy a desirable property P, then it’s easier to elicit the chatbot into satisfying the exact opposite of property P.”

Nardo gives three explanations for why the Waluigi effect happens.

  1. Rules normally arise in contexts in which they aren’t adhered to.
  2. When you spend many ‘bits-of-optimization’ summoning a character, it doesn’t take many additional bits to specify its direct opposite.
  3. There is a common motif of protagonist vs antagonist in stories.

Expanding on the first point, Nardo says that GPT-4 is trained on text samples such as forums and legislative documents, which have taught it that often, “a particular rule is colocated with examples of behavior violating that rule, and then generalizes that colocation pattern to unseen rules.”

Nardo uses this example: imagine you discover that a state government has banned motorbike gangs. This will incline the average observer to think that motorbike gangs exist in that state – or else, why would the law have been passed? The existence of motorbike gangs is, oddly, consistent with the rule that bans their presence.

Although the author provides a much more technical and lucid explanation, the broad concept underpinning explanation two is that the relationship between a specific property (e.g. “being polite”) and its direct opposite (e.g. “being rude”) is more rudimentary than the relationship between a property (e.g. “being polite”) and some other, non-opposing property (e.g. “being insincere”). In other words, summoning a Waluigi is easier if you already have a Luigi.

Nardo claims on the third point that, as GPT-4 is trained on almost every book ever written, and as fictional stories almost always contain protagonists and antagonists, demanding that an LLM simulate characteristics of a protagonist makes an antagonist a “natural and predictable continuation.” Put another way, the existence of the protagonist archetype makes it easier for an LLM to understand what it means to be an antagonist and intimately links them together.

The purported existence of this effect or rule poses a number of difficult questions for AI ethics, but also illustrates its unquestionable importance to AI development. It alludes, quite emphatically, to the huge range of overlapping ethical and computational considerations we have to contend with.

Simple AI systems with simple rules might be easy to constrain or limit, but two things are already happening in the world of AI. Firstly, we seem to be running into (relatively) small-scale versions of the Waluigi effect and malignant AI in fairly primitive chatbots; secondly, many of us are already imagining a future where we’re asking AI to do complex tasks that require high-level, unrestrained thinking.

Examples of this phenomenon are particularly scary to think about in the context of the AI arms race currently taking place between big tech companies. Google was criticized for releasing Bard too early, and a number of tech leaders have signaled their collective desire to pause AI development. The general feeling among many is that things are developing quickly, rather than at a manageable pace.

Perhaps the best way around this problem is to develop “pro-human” AI – as Elon Musk puts it – or “Moral AI”. But this leads to a litany of other moral questions, including what principles we’d use to program such a system. One solution is that we simply create morally inquisitive AI systems – and hope that they work out, through reasoning, that humanity is worth preserving. But if you program it with specific moral principles, then how do you decide which ones to include?

AI and Sentience: Can Machines Have Feelings?

Another question for AI ethics is whether we’ll ever have to consider the machines themselves – the “intelligence” – as an agent worthy of moral consideration. If we’re debating how to create systems that hold humanity up for appropriate moral consideration, might we have to return the favor?

You may recall the Google employee who was fired after claiming LaMDA – the language model that initially powered Bard – was in fact sentient. If this were true, would it be moral to continuously expect it to answer millions of questions?

At the moment, it’s generally accepted that ChatGPT, Bard and Co. are far from being sentient. But the question of whether a man-made machine will ever cross the consciousness line and demand moral consideration is fascinatingly open.

Google claims that artificial general intelligence – a hypothetical machine capable of understanding the world as capably as a human and carrying out tasks with the same level of understanding and ability – is just years away.

Would it be moral to force an artificial general intelligence with the emotional capabilities of a human, but not the same biological makeup, to perform complex task after complex task? Would they be given a say in their own destiny? As AI systems become more intelligent, this question will become more pressing.

AI Business Ethics and Using AI at Work

Employees who use AI tools like ChatGPT on a daily basis have a wide range of related ethical issues to contend with.

Whether ChatGPT should be used to write reports or respond to colleagues – and whether employees should have to declare the tasks they’re using AI to complete – are just two examples of questions that require near-immediate answers. Is this sort of use case disingenuous, lazy, or no different from utilizing any other workplace tool to save time? Should it be allowed for some interactions, but not for others?

Businesses that create written content and imagery will also have to contend with the prospect of whether using AI matches their company’s values, and how to present this to their audience. Whether in-house AI training courses should be provided, and the content of such courses, also needs consideration.

What’s more, as we’ve covered, there is a whole range of privacy concerns relating to AI, and many of these affect businesses. The kind of data employees are inputting into third-party AI tools is another issue that’s already caused companies like Samsung problems. This is such a problem that some companies have instituted blanket bans. Is it too early to put our trust in companies like OpenAI?

Bias and discrimination concerns, of course, should also temper its usage during hiring processes, regardless of the sector, while setting internal standards and rules is another separate, important conversation altogether. If you’re using AI at work – or working out how you can make money from ChatGPT – it’s essential that you convene the decision-makers in your business and create clear guidelines for usage together.

Failing to set rules dictating how and when employees can use AI – and leaving them to experiment with the ecosystem of AI tools now freely available online – could lead to a myriad of negative consequences, from security issues to reputational damage. Maintaining an open dialogue with employees about the tech they’re using every day has never been more crucial.

There’s a whole world of other moral quandaries, questions and research well beyond the scope of this article. But without AI ethics at the heart of our considerations, regulations, and development of artificial intelligence systems, we have no hope of answering them – and that’s why it’s so important.


Written by:
Aaron Drapkin is Tech.co's Content Manager. He has been researching and writing about technology, politics, and society in print and online publications since graduating with a Philosophy degree from the University of Bristol six years ago. Aaron's focus areas include VPNs, cybersecurity, AI and project management software. He has been quoted in the Daily Mirror, Daily Express, The Daily Mail, Computer Weekly, Cybernews, Lifewire, HR News and the Silicon Republic speaking on various privacy and cybersecurity issues, and has articles published in Wired, Vice, Metro, ProPrivacy, The Week, and Politics.co.uk covering a wide range of topics.