
AI and Ethics – How does AI see ethics? Introducing past cases and initiatives

As a society that coexists with AI becomes a reality, we stand at an important crossroads in deciding how to face AI.
Within this debate, “AI ethics” has become a central issue. In particular, the news in December 2020 that Google had fired an AI ethicist attracted a great deal of attention.
In an age when AI acts autonomously and plays many roles in society, each of us must think about how we should face it.

In this article, we focus on “AI and ethics” and consider what AI ought to be.

[What you can learn from this article]

  • AI and ethical issues
  • Examples of AI ethics
  • Criticism of corporate ethics on the rise
  • Addressing AI Ethics Issues

Contents

  • Consider AI ethics
    • What is ethics – why is AI ethics being discussed now?
    • AI ethics shakes human ethics
  • Examples of AI ethics
    • Cases caused by AI bias
    • Liability for accidents
  • Criticism of corporate ethics, not just AI, is on the rise
    • Rikunabi problem in 2019
    • AI Ethicist Fired By Google
  • Efforts to tackle AI ethics issues intensify
    • AI principles
    • Corporate Efforts to Address AI Ethics Issues
    • Cross-cutting Efforts to Address AI Ethics Issues
  • Summary

Consider AI ethics

Interest in AI is currently growing, yet its adoption by Japanese companies has not kept pace. Why is that?
According to the Ministry of Internal Affairs and Communications’ White Paper on Information and Communications (2021 edition), only 12.4% of companies have introduced systems and services such as IoT and AI, and 9.7% are planning to introduce them.

What is ethics – why is AI ethics being discussed now?

While interest in AI is growing, adoption has not increased to match.
One reason is the ethics of AI. As interest deepens, more people and companies are questioning whether AI should be introduced, and discussion of AI’s ethical issues is increasing along with the interest.

The trolley problem – an ethical dilemma

The trolley problem is a thought experiment that poses an ethical dilemma: should one person be sacrificed to save many?
The story goes as follows. A trolley is running out of control down a track to which five people are tied. You are standing at a lever that can divert the trolley onto another track, but one person is tied to that other track. The question is which choice is ethical: do nothing and let five people die, or pull the lever and sacrifice one.

The trolley problem has no correct answer; the answer changes depending on who is judging and what they consider more ethical. Problems of this kind are used when thinking about AI ethics.
As frequently cited frameworks, here is how “utilitarianism” and “Kant’s deontology” answer the trolley problem.

Framework | Way of thinking | Action | Focus
Utilitarianism | Choose the option with the least sacrifice; pursue the greatest happiness for society | Pull the lever and sacrifice one person | Ethics focused on results
Kant’s deontology | Do not use others as a “mere” means | Do not pull the lever; five people are sacrificed | Ethics focused on motives
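
To make the contrast concrete, here is a minimal sketch in Python that encodes each framework as a decision rule. Everything in it, the Option fields and the scenario values, is a hypothetical illustration, not a real autonomous-driving implementation.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    casualties: int             # how many people this option sacrifices
    uses_person_as_means: bool  # does it redirect harm onto someone as a means?

def utilitarian_choice(options):
    """Ethics of results: pick the option with the fewest casualties."""
    return min(options, key=lambda o: o.casualties)

def deontological_choice(options):
    """Ethics of motives: refuse any option that uses a person as a mere means."""
    permitted = [o for o in options if not o.uses_person_as_means]
    return permitted[0] if permitted else options[0]  # default to inaction

options = [
    Option("do not pull the lever", casualties=5, uses_person_as_means=False),
    Option("pull the lever", casualties=1, uses_person_as_means=True),
]

print("Utilitarianism:", utilitarian_choice(options).name)       # pull the lever
print("Kant's deontology:", deontological_choice(options).name)  # do not pull the lever
```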

The self-driving car trolley problem

Problems with no correct answer, like the trolley problem, carry over into the ethics of AI.
The trolley problem must also be confronted when realizing self-driving cars. Suppose a self-driving car is about to collide with another vehicle at an intersection. If it swerves and brakes hard, it may be struck by oncoming traffic, and stopping safely within the intersection is difficult; if it brakes and steers suddenly instead, the risk to other drivers and pedestrians is lower, but the occupants may be injured. What maneuver should the AI choose when an emergency arises, and how should it make that decision? This discussion is called the self-driving car trolley problem and is being studied around the world.

Ethical issues with no correct answer, such as the trolley problem, are likely to become more complex as AI permeates society.

AI ethics shakes human ethics

The trolley problem above shows that AI ethics is not a problem unique to AI that leaves humans out: AI is entangled in human ethical issues.
Consequently, if human ethics are unclear when AI is introduced, the same ethical problems will surface in the technology built with AI.

AI ethics is a manifestation of the paradoxes in human ethics

The problem of who takes responsibility when a self-driving car causes an accident stems from the fact that the concept of “responsibility” is itself unclear. Society is built on the premise that humans are free agents and that everything they do is under their control.

In reality, however, we act under the influence of luck, of events and coincidences beyond our control. This is the ethical paradox that the philosophers Thomas Nagel and Bernard Williams call “moral luck.”

The AI bias problem is a manifestation of human discrimination

AI has problems of its own, yet many of them have been shown to originate with humans. The AI bias problem is a prime example.
AI sometimes outputs racist, sexist, or religiously discriminatory results, which are said to reflect the biases of human society.
In other words, to solve AI’s bias problem, we must also address the bias problem of human society.

The ethical issues of AI pose questions to everyone involved with it.
The fairness problems that AI can create require society to consider and debate what fairness should look like in the real world.

Examples of AI ethics

One reason AI adoption is not growing may be these ethical issues. Here are some concrete cases.

Cases caused by AI bias

There are cases in which the output of an AI system turned out to be biased and its fairness became a problem.
In machine learning, AI extracts regularities from huge amounts of data and builds them into recognition systems. In some cases, however, the output of the deployed AI led to gender or racial discrimination, and the deployment was halted.

Gender Discrimination – Amazon’s Recruiting AI Discriminates Against Women

In 2014, Amazon was considering using AI in its recruiting system.
However, it was discovered that the system gave high scores only to men and discriminated against women.
The cause lay in the training data: men made up the majority of the resumes submitted to Amazon over the previous ten years, so the AI learned to preferentially recruit men.
Engineers tried to remove the bias but could not eliminate it completely, and Amazon’s AI recruiting tool was abandoned.
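
The mechanism behind this failure is easy to reproduce. The sketch below uses synthetic, entirely hypothetical data: because the historical hiring labels correlate with gender, even the simplest possible “model” (a per-group hiring rate) inherits the discrimination, no matter how skilled the female candidates are.

```python
import random

random.seed(0)

# Synthetic "past resumes": roughly 90% male, and past hiring favored men,
# so the label (hired = 1) correlates with gender rather than with skill.
def make_resume():
    gender = "male" if random.random() < 0.9 else "female"
    skill = random.random()
    hired = 1 if (skill > 0.5 and gender == "male") else 0  # biased history
    return gender, skill, hired

data = [make_resume() for _ in range(10_000)]

# "Train" the simplest possible model: the empirical hiring rate per gender.
def learned_score(gender):
    labels = [hired for g, _, hired in data if g == gender]
    return sum(labels) / len(labels)

print(f"score for men:   {learned_score('male'):.2f}")    # around 0.5
print(f"score for women: {learned_score('female'):.2f}")   # 0.0 — bias inherited
```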

Racism – Racial “bias” in facial recognition systems

In 2015, it became a problem that Google Photos, a service provided by Google, tagged a photo of a Black couple as “gorilla.” Google Photos’ tagging function uses AI to automatically attach related tags to photos.

The problem was that the AI classified the people in the photo as “gorilla” based on their skin color.
Google responded by removing the “gorilla” tag and blocking it as a search term. However, simply removing tags does not solve the AI’s underlying discrimination problem.
Besides Google, major IT companies such as Amazon and IBM have announced their withdrawal from general-purpose facial recognition technology.

The reason for withdrawing from facial recognition is its low recognition accuracy for non-white faces. One cause is the high proportion of white faces in the training data. No breakthrough technology for improving this accuracy exists yet, and the IT companies, seeing little prospect of a purely technical fix, have signaled that they will freeze sales of general-purpose facial recognition systems for as long as the problem remains unsolved.
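
Disparities like this only show up if accuracy is measured per group rather than as a single overall number. Here is a minimal sketch of such an audit; the numbers are hypothetical, chosen only to show how a decent-looking aggregate can hide a large gap.

```python
from collections import defaultdict

# Hypothetical audit results: (group, was the face recognized correctly?).
# In a real audit these would come from running the face recognition
# system against a demographically labeled benchmark.
results = ([("white", True)] * 970 + [("white", False)] * 30
         + [("black", True)] * 780 + [("black", False)] * 220)

totals, correct = defaultdict(int), defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok

overall = sum(correct.values()) / sum(totals.values())
print(f"overall accuracy: {overall:.1%}")  # 87.5% — looks acceptable
for group, n in totals.items():
    print(f"{group}: accuracy {correct[group] / n:.1%}")  # 97.0% vs 78.0%
```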

Liability for accidents

When a system equipped with AI causes an accident, there is the problem of who should be held responsible.
If an AI-equipped self-driving car causes an accident, who should pay compensation? Is the engineer who built the AI responsible?

Autonomous driving systems are expected to develop further, and who bears responsibility for their accidents remains an open question.

Fatal accidents involving self-driving cars

To date, three fatal accidents involving self-driving cars are widely known, all in the United States: two involving Tesla vehicles, in 2016 and 2018, and one involving an Uber vehicle in 2018.

Case | Cause of accident
Tesla (2016) | In strong sunlight, the white trailer it collided with blended into the glare and could not be recognized as an “object.”
Uber (2018) | The self-driving car’s AI failed to recognize a pedestrian and ran her over; it is regarded as the world’s first accident in which a pedestrian was killed by an autonomous vehicle.
Tesla (2018) | The accident occurred when the self-driving system turned the steering wheel; the driver, engrossed in a smartphone game, could not react.

The progress of autonomous driving technology and AI is remarkable, but if AI actually causes accidents, companies will hesitate to adopt it. AI ethics and liability are issues that will need further discussion.

Medical errors with AI

The movement to introduce AI into the medical field is accelerating. However, the debate continues over who should be held responsible when a medical error involves AI.
One view holds that the doctor who performed the surgery is responsible, even if it was performed according to the AI’s instructions.
However, AI does not explain its reasoning, so neither doctors nor AI developers can say why it judged as it did. For this reason, there is also the opinion that a doctor who merely followed the AI’s instructions should not bear the responsibility. From the patient’s point of view, though, the fact remains that the doctor committed a medical error.

The introduction of AI into healthcare is thus a delicate matter, because AI liability issues come into play.

Criticism of corporate ethics, not just AI, is on the rise

One reason for the low adoption rate of AI is growing public criticism of corporate ethics.
The debate over AI ethics also questions the ethics of the companies that handle AI. Companies are expected to take human rights, society as a whole, and the environment into consideration, and to define within the company the scope of what must be protected.

Rikunabi problem in 2019

The Rikunabi problem refers to the sale to companies of scores, calculated from data on the job-information site “Rikunabi,” predicting how likely students were to decline their informal job offers.
Amid the labor shortage in the new-graduate recruitment market, using AI-calculated data to secure better candidates had been attracting attention. The problem was that the students’ information was sold without their consent. This was judged ethically problematic and drew strong public criticism.

AI Ethicist Fired By Google

In 2020, Google AI researcher Timnit Gebru was fired.
She was dismissed after pointing out ethical problems relevant to Google’s search engine.
The paper in question discussed the risks of AI trained on huge amounts of data. She argued that the language models Google uses for its search engine are too large and that their biases would become severe.

She claims she was fired after a confrontation with upper management over the paper. As an Ethiopian-born researcher and one of the few Black employees at the company, she had been concerned about bias in AI. After the incident came to light, more than 2,200 Google employees signed a petition demanding that the company disclose its details.

Movement to form a labor union at Google’s parent company

In 2021, employees of Google’s parent company, Alphabet, formed a labor union called the “Alphabet Workers Union.”
The union asks the company to give workers a voice on values such as non-discrimination and labor standards.
It has protested Ms. Gebru’s dismissal, and it took up the case of an employee whose contract was suspended after complaining about working conditions; that employee was able to return to work within a week of the union’s appeal. Now that the union has formal standing, the movement demanding corporate ethics is becoming more active.

Efforts to tackle AI ethics issues intensify

As the ethical issues described above have become apparent, efforts to solve them are gaining momentum. Rather than rejecting AI, it is important to deepen the discussion from the perspective of how AI and humans can coexist. Interest is growing in measures to deal with the various problems that coexistence raises.

AI principles

FAT principle

FAT (Fairness, Accountability, Transparency) is a set of three principles that must be strictly observed in operating AI.

  • Fairness

“Fairness” means guarding against unfair discrimination by AI. Because AI learns regularities from real-world data, its decisions can reproduce historical discrimination, and the way the data is selected can reinforce it. Designers must therefore remove such inequality in the process of building machine learning models; a minimal fairness check is sketched after this list.

  • Accountability

“Accountability” means disclosing all information related to an AI system: who takes responsibility in the event of a scandal, the details of the work and operations that use the AI, and the purpose of introducing the AI into society. Affected parties may demand an explanation of any aspect of the AI.

  • Transparency

“Transparency” means that the workings and processes of an AI system can be clearly understood. At present, ensuring the transparency of AI is extremely difficult: when an AI makes an error, pinpointing where it occurred is usually unrealistic, and the reasoning behind a given judgment remains a black box. Research aimed at solving this problem so that AI can be used safely is ongoing.
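
As a concrete illustration of the fairness check referenced above, here is a minimal sketch of one common test, demographic parity: compare the rate of favorable decisions across groups before deployment. The predictions and group labels are hypothetical.

```python
# Demographic parity check: does the model give favorable decisions
# (1 = favorable) at similar rates to each group?
def positive_rate(predictions, groups, target):
    rates = [p for p, g in zip(predictions, groups) if g == target]
    return sum(rates) / len(rates)

preds  = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]   # hypothetical model outputs
labels = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

rate_a = positive_rate(preds, labels, "a")
rate_b = positive_rate(preds, labels, "b")
print(f"group a: {rate_a:.0%}  group b: {rate_b:.0%}  gap: {abs(rate_a - rate_b):.0%}")
# A large gap in favorable outcomes (here 80% vs 20%) signals the kind of
# unfairness that the Fairness principle asks designers to detect and remove.
```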

Social Principles of Human-Centric AI

In 2018, the Japanese government established the “Council on Social Principles of Human-Centric AI” in the Cabinet Office. In 2019, the council announced the “Social Principles of Human-Centric AI,” the basic principles to keep in mind when AI is put to practical use in society. A society that implements AI, and especially legislative and administrative bodies such as the government, must strive to realize the three values of “human dignity,” “diversity and inclusion,” and “sustainability.”

Statement on machine learning and fairness

In 2019, the Japanese Society for Artificial Intelligence, jointly with the Japan Society for Software Science and Technology and the Institute of Electronics, Information and Communication Engineers, announced the “Statement on Machine Learning and Fairness,” declaring that they take seriously the impact that the use of machine learning has on fairness.

Corporate Efforts to Address AI Ethics Issues

In response to these moves by academic societies and the advance of AI into society, companies are also addressing AI’s ethical issues. Although companies that have deployed AI are still few, there are many examples of work toward its practical application.

Fujitsu and NEC formulate policies for AI use

In 2019, Fujitsu announced the “Fujitsu Group AI Commitment.”

This document summarizes the ethical values that must be observed when the company uses AI. In the same year, NEC announced the “NEC Group Policy on AI and Human Rights.” NEC compiled the rules it had applied in its past AI projects, shared them within the company, and will apply them to AI projects from 2020 onward.

ABEJA Establishes Ethics Committee

In 2019, ABEJA announced the launch of a committee called the “Ethical Approach to AI.” Experts discuss ethical issues in ABEJA’s AI operations, such as drafting guidelines and action policies, and work to realize and improve sound AI operations.

Sony plans to conduct ethical reviews on all AI

In 2020, Sony decided to conduct ethics reviews of all of its AI-powered products.
The reviews examine whether an AI product raises concerns about discrimination or invasion of privacy, and can recommend cancelling a project if a problem is found. With AI ethics now a prominent topic, such large-scale efforts to improve the quality of AI are expected to spread among companies.

Cross-cutting Efforts to Address AI Ethics Issues

Organizations that conduct awareness-raising and similar activities on AI’s ethical issues are also active. Beyond individual companies, organizations are emerging to tackle AI ethics on the global stage.

Japanese Society for Artificial Intelligence Ethics Committee

In 2014, an Ethics Committee was established within the Japanese Society for Artificial Intelligence, led by Yutaka Matsuo.
The committee formulates policies for discussing how artificial intelligence technology should be used in society. The “Statement on Machine Learning and Fairness” issued by the society triggered a surge in public interest in AI ethics.

JDLA Public Policy Committee

The Japan Deep Learning Association (JDLA) was established with the aim of improving Japan’s industrial competitiveness through technologies centered on deep learning.
It devotes effort to study sessions that develop both the engineers who will build deep learning technology and the people who will use it. In cooperation with other organizations, it also investigates the ethical impact of deep learning on society and publishes policy proposals.

OpenAI

OpenAI is a research organization founded in 2015 as a non-profit, with backing from influential entrepreneurs and investors such as Elon Musk.
Its goal is to develop artificial intelligence openly, in a way that benefits all of humanity.
Aiming to develop ethical AI, it works in cooperation with the world’s top IT companies.

Summary

What did you think? In this article, we introduced the ethical issues of AI through concrete cases.

The number of companies introducing AI is increasing, but as AI spreads, ethical issues have emerged alongside it.
To solve these problems, AI literacy is required both of those who build AI and of those who use it. At the same time, we must address the biases of human society itself. For AI to develop, humanity must change as well.
