
“Discussion and practice are two-way; work in an agile manner”: the AI ethics issues being debated around the world

Table of contents

  • Current status of AI ethics efforts in the world and Japan
    • “Global-scale AI utilization” progressing around the world: discussed at the United Nations and UNESCO
    • AI governance should be considered for Japan’s distinctive “long” industrial structure, and examples should be disseminated
  • How can AI ethics, long the center of debate, be put into practice?
    • Discussion and practice of AI are bidirectional and agile
  • Future Prospects and Thoughts Contained in This Special Feature
  • Journal of the Japanese Society for Artificial Intelligence, “Artificial Intelligence,” March 2021 Issue
  • In conclusion

Current status of AI ethics efforts in the world and Japan

“Global-scale AI utilization” progressing around the world: discussed at the United Nations and UNESCO

Gender and racial discrimination caused by AI, the accidents it triggers, and the resulting loss of corporate trust are being taken up around the world as major problems for the future spread of AI.

In Japan, in April 2020, a labor union of about 120 employees of IBM Japan and its subsidiaries appealed to the Tokyo Labor Relations Commission for relief over personnel evaluations and wage decisions made using IBM’s AI “Watson,” arguing that “the decision process is becoming a black box.”

Ethical issues can no longer be avoided when AI is used in this way, and discussions are becoming active around the world.

--Please tell us briefly what you felt while translating this contribution, and about the current state of AI ethics around the world.

Dr. Ema: A consensus seems to be emerging on which principles should be discussed, but I feel that how they are put into practice, and what is treated as important, differ considerably from region to region.

For example, in Japan AI tends to be discussed in terms of economic benefits such as innovation, but in other regions AI is also being applied to global problems such as climate change, refugees, and modern slavery. There is also debate that increasingly sophisticated AI should not be monopolized by a single company.

--So around the world, AI is being considered not only as a way to pursue profit but also in ways that take the planet and the future into account. What kinds of organizations are discussing the ethics and fairness of AI and deep learning internationally, and how is that content being communicated?

Dr. Ema: The OECD began discussing the appropriate management of AI at a fairly early stage, around 2016. In Japan, incidentally, the Ministry of Internal Affairs and Communications has been very active and has collaborated with the OECD from early on.

Although not covered in this special feature, various United Nations bodies such as UNESCO have also been discussing the impact and utilization of AI. In addition, an international discussion platform called GPAI (Global Partnership on AI) was launched in 2020, where several working groups are discussing topics such as data use and employment.

The WHO and INTERPOL have also begun discussing principles and toolkits for AI use in their respective fields (healthcare and law enforcement). Many of these bodies are international committees of experts from different fields and industries, and their databases, case studies, and toolkits have been released or will be released soon.

AI governance should be considered for Japan’s distinctive “long” industrial structure, and examples should be disseminated

--Amid these global trends, what role should Japan play in the field of AI ethics going forward?

Dr. Ema: Rather than “AI ethics,” the term “AI governance” (steering and management) may be more appropriate here.

When we talk about governance, we have to be conscious of who carries it out. Primarily, it is the companies developing services that will steer the various AI-related issues.

However, unlike B2C companies such as GAFA, Japan is characterized by an industrial structure with long supply chains built on subcontracting and outsourcing. In this context, the challenge is how to conduct governance appropriately all the way down to the consumer.

In recent years, beyond government data management and corporate use of AI, the fact that AI systems keep learning means that users and consumers can also modify or abuse them. This makes it unclear who is responsible when AI causes problems. Sharing and publishing concrete examples will therefore help other companies and countries; at present, however, very few examples from Japan are shared internationally.

How can AI ethics, long the center of debate, be put into practice?

Discussion and practice of AI are bidirectional and agile

As discussions on AI ethics and governance become more active around the world, many companies find it difficult to take a step into “practice”.

--Once the direction has been decided, what should companies actually start with?

Dr. Ema: I think it is better to see it as a two-way process rather than a one-way flow from decision to practice.

AI itself is not something that is finished once it is built; it has a strong element of agile development, a perpetual beta that keeps changing. It can be used for purposes other than those originally intended, and it can be used in unexpected ways, for better or worse, by users no one anticipated. I think it will be necessary to keep an eye on trends in Japan and overseas and to be agile enough to respond appropriately when an AI-related incident or case occurs, even one involving other parties.

--As AI ethics issues attract more and more attention around the world, what points should be discussed going forward?

Dr. Ema: First of all, I think it is important to recognize that the AI-related problems described in the special feature really do exist. In Japan, it may not feel as though the fairness issues discussed at such length in the special feature have actually occurred yet. However, bias around categories such as race, gender, region of origin, and age is becoming a bigger problem as AI develops.

We also have to tackle homework that predates AI, such as unequal access, privacy, and who bears responsibility when problems occur. This needs to be discussed not only among a handful of experts; we must also find ways to pick up the voices of users, consumers, people who cannot use these systems, and people who cannot raise their voices. That is easy to say but difficult to put into practice.

Future Prospects and Thoughts Contained in This Special Feature

--Please tell us the reason behind the title of this special issue, “From AI Principles to Practice.” Whose practice, and of what?

Dr. Ema: Since around 2016, various principles concerning AI fairness, safety, robustness, and so on have been published around the world by companies, national governments, industry groups, and NPOs. There is broad consensus on the principles themselves, but the question is how to put them into practice.

For example, even if we say that the fairness of an AI system is guaranteed, what does fairness actually mean? How can transparency be ensured? Medical AI and smart speakers may call for different answers. “From Principles to Practice” was chosen to convey the message that, in this situation, we will think through individual, concrete cases and initiatives.
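As a purely illustrative aside (not part of Dr. Ema’s contribution or the special feature), the short Python sketch below shows how two common formalizations of fairness, demographic parity and equal opportunity, can be computed on the same set of invented predictions and point in opposite directions. All data, group labels, and function names here are hypothetical.

```python
# Hypothetical illustration only: the groups, labels, and predictions are invented.

def demographic_parity_gap(pred, group):
    """Difference in positive-prediction rates between groups "a" and "b"."""
    rate = lambda g: sum(p for p, gr in zip(pred, group) if gr == g) / group.count(g)
    return rate("a") - rate("b")

def equal_opportunity_gap(pred, label, group):
    """Difference in true-positive rates between groups "a" and "b"."""
    def tpr(g):
        positives = [p for p, l, gr in zip(pred, label, group) if gr == g and l == 1]
        return sum(positives) / len(positives)
    return tpr("a") - tpr("b")

# Toy data: 1 = positive prediction / positive ground-truth outcome.
group = ["a"] * 5 + ["b"] * 5
label = [1, 1, 1, 0, 0,  1, 1, 0, 0, 0]
pred  = [1, 1, 0, 1, 0,  1, 1, 0, 0, 0]

print(demographic_parity_gap(pred, group))        # ~ +0.20: group "a" receives more positive predictions
print(equal_opportunity_gap(pred, label, group))  # ~ -0.33: qualified members of "a" are selected less often
```

On this toy data the two definitions disagree about which group is favored, which is exactly why, as noted above, medical AI and smart speakers may each need their own working definition of fairness.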

--Please tell us the highlights of the special feature and what readers should focus on.

Dr. Ema: This also relates to the two-way relationship between principles and practice, but I think it is important to review periodically and keep asking who should be involved in practice, and how.

The Japanese Society for Artificial Intelligence has also run a special feature on diversity and inclusion, but we need to be careful with these words themselves. The terms diversity and inclusiveness appear repeatedly in this special issue. While the values of diversity and inclusion are principles to strive for, each contribution carries a renewed question of whether we are genuinely confronting those values.

While claiming to ensure inclusiveness, are we in fact exploiting people’s participation? Above all, and this is painful for those of us involved in the field to hear, there is the important question of whether doing AI ethics and governance has itself become an indulgence that excuses service development.

--Dr. Ema, what kind of activities would you like to pursue in the future?

Dr. Ema: In the short term, I would like to disseminate case studies.

Fortunately, I am currently able to take part in concrete, bottom-up discussions with people in the field, both in Japan and overseas, rather than in abstract theorizing. It would be great if we could abstract individual cases to some extent and create a mechanism for sharing and publishing them as a collection of case studies.

For example, the University of Tokyo is currently accumulating cases using a framework called the risk chain model, and we hope to publish some case reports by the summer of 2021. The Japan Deep Learning Association is also holding a study group on AI governance and its evaluation, where we are discussing these questions with the people concerned; I think we can publish that report in Japanese and English this summer as well.
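As a hypothetical sketch only, and not the University of Tokyo’s risk chain model or any JDLA format, the following shows one way a shared, machine-readable case-collection entry could be structured so that cases abstracted from individual companies can be compared and published; every field name here is an assumption made for illustration.

```python
# Invented schema for illustration; not an official risk chain model or JDLA format.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AICaseRecord:
    title: str                    # short name of the incident or initiative
    domain: str                   # e.g. "HR", "healthcare", "smart speakers"
    ai_function: str              # what the AI system was used for
    affected_parties: list[str]   # who was (or could be) affected
    risk_factors: list[str]       # e.g. "black-box decision process"
    mitigations: list[str] = field(default_factory=list)
    sources: list[str] = field(default_factory=list)

# Example entry summarizing the IBM Japan / Watson dispute described earlier.
record = AICaseRecord(
    title="AI-assisted personnel evaluation dispute",
    domain="HR",
    ai_function="support for personnel evaluation and wage determination",
    affected_parties=["employees"],
    risk_factors=["black-box decision process", "unclear accountability"],
    sources=["Appeal to the Tokyo Labor Relations Commission, April 2020"],
)

print(json.dumps(asdict(record), ensure_ascii=False, indent=2))
```

A lightweight structure along these lines could be serialized in both Japanese and English, which fits the bilingual case reports mentioned above.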

I am also planning to publish a book in mid-May, “AI and Society through Pictures and Diagrams,” about the debates currently taking place around AI and society. It is packed with things I would like not only AI practitioners to know, but also junior and senior high school students, legal and public relations staff at companies and in government, and anyone who will develop or use AI in the future.

You can read the original contribution translated by Dr. Ema here. ▼

Click here for details of Dr. Ema’s book “AI and Society through Pictures and Diagrams” (scheduled for release on May 21, 2021). ▼

Journal of the Japanese Society for Artificial Intelligence, “Artificial Intelligence,” March 2021 Issue

In addition to the mini special feature “From AI Principles to Practice I” introduced here, the March 2021 issue includes:

  • Special Feature “Role and Value Created by R&D Departments in Companies”
  • Lecture Series “AI Philosophy Map” (1st)
  • Joint Study Group 2020 Report

and more.

“AI Philosophy Map” is a new bird’s-eye-view perspective to be added to AI Map β, and the first installment of the lecture series appears in this March issue.

The first installment welcomes Professor Hideyuki Nakajima (Sapporo City University), known as a pioneer of AI research in Japan, and Professor Fujio Tsutsumi (CRIEPI), who led the society’s “AI Map β” project, to discuss the important role philosophy plays in opening up new frontiers of AI. From the second installment onward, the aim is to compose an “AI Philosophy Map” through repeated dialogues between AI researchers and philosophers.

Mr. Kiyota, editor-in-chief of “Artificial Intelligence,” shared his thoughts on the March issue.

Mr. Kiyota: Issues surrounding AI ethics and governance are often perceived as “troublesome” or “irrelevant to me.” However, the many cases covered in the articles in this special feature teach us the importance of facing this problem sincerely from the beginning and accumulating discussions and practices.

Trying to make an already completed AI system conform to AI principles after the fact tends to end up as an uninspiring exercise, and, as Dr. Ema points out, it can even turn the principles into a cover or an indulgence. Rather, by incorporating an ethical perspective from the development stage and going back to the roots to build “ethical AI,” there is the possibility of creating truly groundbreaking AI systems. I hope you will read the “From AI Principles to Practice” special issue as a source of hints for making your own research more interesting.

In the first installment of the lecture series “AI Philosophy Map,” Professor Hideyuki Nakajima and Professor Fujio Tsutsumi talk about the role philosophy has played in AI research and the role it may play in the future. In addition, the special feature “Role and Value Created by R&D Departments in Companies,” a joint project by four corporate researchers including myself, gathers contributions from 16 researchers across 11 groups and 13 companies working on the front lines of corporate research and development (R&D). It is a substantial feature that gives a full picture of corporate R&D, so please take a look at that as well.

If you are an individual member of the Japanese Society for Artificial Intelligence, you can read the issue for free here. ▼ (Non-members can also view some articles for free.)

In conclusion

Ethical issues in AI are likely to be discussed even more actively in the future. How to involve as many people as possible and create a world where no one is discriminated against by AI is a challenge for Japan and for the world as a whole as IT advances rapidly.

It is not only AI professionals and people working at companies who need to think about AI ethics; so do we as the consumers who use these products and services. Living in the IT society of the future, we must think about fairness and privacy.

Above all, the diversity and inclusiveness that Dr. Ema points to are required, and we will need to keep asking whether our efforts and discussions are merely cosmetic. This special feature, which vividly conveys the discussions and initiatives surrounding AI ethics around the world, is a must-read.
