
What is the direction of social implementation of AI that Japan should aim for? “AI Action Plan Symposium” Report

On Tuesday, June 15, 2021, the "NEDO Symposium on the Global R&D Action Plan in the Field of Artificial Intelligence (AI) Technology – The Direction of Social Implementation of AI That Japan Should Aim For –" was held online.

As of 2021, how far has the social implementation of AI progressed? Will Japan be able to take the initiative in deep reinforcement learning in the future? How do AI experts perceive these issues?

This article is a preliminary report on the proceedings of the symposium.

*Some parts have been edited for publication.

Table of Contents

  • Overview of the symposium
  • About the “Overall Action Plan for Research and Development in the Field of Artificial Intelligence (AI) Technology”
  • Will AI stop being called AI?
  • How far has social implementation of AI progressed?
  • Can Japan Take the Initiative in Deep Reinforcement Learning?
    • About the characteristics and differentiation strategies of AI in each country
    • Is it possible to build an AI that can understand the meaning and value of tasks?

Overview of the symposium

The symposium presented the results of the "Survey for Formulating a Global R&D Action Plan and Identifying Projects in the Field of Artificial Intelligence (AI) Technology," and the expert members of the action plan formulation committee discussed the direction of social implementation of AI that Japan should aim for.

◼Speakers

Mr. Hideyuki Nakajima (President, Sapporo City University), Chairman of the Action Plan Formulation Committee

Members of the Action Plan Formulation Committee (in alphabetical order)
・Mr. Masahiko Inami (Professor, Research Center for Advanced Science and Technology, The University of Tokyo)
・Mr. Yoshitaka Ushiku (Principal Investigator, OMRON SINIC X Corporation)
・Mr. Tofuku Kawakami (Managing Director, Management Co-Creation Platform Co., Ltd.)
・Mr. Hiroshi Maruyama (Executive Fellow, Kao Corporation / Specially Appointed Professor, Research Center for Artifacts, The University of Tokyo / PFN Fellow, Preferred Networks, Inc.)
・Mr. Masahiro Murakawa (Deputy Director, Artificial Intelligence Research Center, Department of Information Technology and Human Factors, National Institute of Advanced Industrial Science and Technology / Research Planning Office, Artificial Intelligence Research Strategy Department (concurrent))

*Mr. Inami appeared via a recorded video
*Mr. Matsuo was absent

◼Lecture schedule

09:30: Symposium start
09:35: Greetings from the organizer
09:40: Introduction of results of AI action plan
09:45: Part 1 Talk session “How far has social implementation of AI progressed?”
11:00: Part 2 Talk Session “Can Japan Take the Initiative in Deep Reinforcement Learning?”
12:00: End

About the “Overall Action Plan for Research and Development in the Field of Artificial Intelligence (AI) Technology”

AI technology is now being actively developed around the world. To enable Japan to lead the world in the AI field, NEDO formulated and published the "Overall Action Plan for Research and Development in the Field of Artificial Intelligence (AI) Technology" (the AI Action Plan) as a step toward drawing up new technology strategies and launching projects early.

In 2016, NEDO formulated the "Vision for the Social Implementation of Next-Generation Artificial Intelligence." Building on that vision, the action plan was drawn up through discussions by an action plan formulation committee consisting of six AI experts.

The committee met online six times over roughly half a year starting in January of this year, organizing and discussing the fields in which AI should be applied and their relationship to AI technology.

As a result, the fields in which AI should be actively utilized over the next 10 years (the orange vertical axis of the committee's chart) were narrowed down to five: "manufacturing (production)," "lifestyle/city," "mobility," "education," and "well-being."

On the horizontal axis, the gray portion represents AI technology to date, and the blue portion shows the direction of AI technology development over the next 10 years.

The AI Action Plan selects 12 AI technology issues (the light-blue areas) that should be addressed, not only to make use of AI but also to enable the co-evolution of humans and AI.

Based on these 12 AI technology development issues, NEDO plans to examine strategy formulation and the launch of projects that follow on from the action plan.

Will AI stop being called AI?

Chairman Nakajima

It has long been said of AI that "once it can be done, it is no longer AI." Once AI is incorporated into a system, no one recognizes it as AI anymore, and everyone simply uses it as a matter of course.
That said, AI has now stepped into genuinely intellectual territory. A typical example is Go and shogi programs, which have completely surpassed humans yet are still called AI. In that sense, I think things will be somewhat different from now on.

Mr. Maruyama

There are programs that play chess, but I don't think people really regard them as AI. If something like AlphaGo ends up inside an ordinary game console, I feel it will no longer be called AI.

Chairman Nakajima

Certainly, chess programs no longer seem to be called AI, but when you watch NHK's shogi broadcasts, an AI-evaluated win probability is now shown. What is interesting is that I assumed it would be a problem for shogi players if machines became far stronger than humans, but that has not been the case at all. It is just like how no one is bothered by a car running faster than a human.

Committee member Ushiku

AI and shogi have had a strong affinity from the start; playing shogi is one of the activities humans regard as requiring intelligence. In that sense, I find Professor Nakajima's hypothesis that things "stop being called AI" interesting.
Up to now, things have generally stopped being called AI once they work, which is known as the AI effect. Machine translation, for example, has gradually come to be called simply a "translation engine." So I think it is an interesting question whether anything will keep being called AI precisely because it keeps doing very clever things automatically.
Shogi, on the other hand, has always been closely associated with AI and has always been called AI. I feel that the possibility of it continuing to be called AI may be a phenomenon unique to shogi.

Committee member Kawakami

For domains with a strongly human-like character, I don't think there is a better word than AI. Autonomous driving, for example, will probably be called a "self-driving system" or "self-driving AI."
On the other hand, if AI is also used to control, say, the suspension, I doubt anyone will bother to call it "suspension-control AI."
The phrase "human-like domain" may be debatable, but my feeling is that for relatively human-like domains no word fits as naturally as AI, so the word "AI" will settle into use there.

How far has social implementation of AI progressed?

Chairman Nakajima

In the AI Action Plan we discussed and drew up, we emphasized "AI that understands meaning."
Until now, AI has been automated and embedded into systems that then run on their own. From now on, however, I think there will be parts that are not embedded, that is, AI that operates in a visible form while interacting with humans.
In a sense, deep learning has now, for the first time in history, brought AI into practical use in that embedded form.
Moving toward the right of the chart, toward understanding meaning, I feel this includes taking a further step to "converse with humans in words." Seen that way, as something we converse with in Japanese or English, AI will remain visible rather than buried inside systems, at least as an interface.

Chairman Nakajima

Also, until now AI has operated in the form of systems built on deep learning or computer simulation, and now that we are in the age of big data, image and speech recognition accuracy has improved tremendously.
Our argument is that over the next 10 years we want to move all of this to the right: toward understanding meaning, and toward not just simulation but the fusion of deduction and induction (roughly, the fusion of symbolic reasoning and deep learning), that is, combining learning from examples with applying rules.
Below that comes the optimization of society as a whole. Overall optimization is quite difficult, and it is not at all clear what specification we should be optimizing for, so the discussion has to start from what optimization even means here.
Below that is big data. Deep learning learns by being given a thousand, ten thousand, or a million examples. Since learning is the point, we also have to become able to "learn from small data."
And below that is integration of various modalities. Right now we have things like "images to words, words to images," but I was thinking it would be good to be able to do many more things in a much more tightly coupled form.
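The "images to words, words to images" coupling mentioned here is commonly realized with a shared embedding space: an image encoder and a text encoder are trained so that matching pairs end up close together. Below is a minimal sketch of that idea only; the encoders, feature sizes, and data are placeholders, not anything specific to the action plan.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical encoders mapping each modality into a shared 256-dim space.
image_encoder = nn.Linear(2048, 256)  # stand-in for an image feature extractor
text_encoder = nn.Linear(300, 256)    # stand-in for a text feature extractor

def similarity(image_feats, text_feats):
    """Cosine similarity between image and text embeddings."""
    img = F.normalize(image_encoder(image_feats), dim=-1)
    txt = F.normalize(text_encoder(text_feats), dim=-1)
    return img @ txt.T  # one score per (image, text) pair

# "Words to images": pick the image whose embedding best matches a caption.
images = torch.randn(10, 2048)   # placeholder image features
caption = torch.randn(1, 300)    # placeholder caption features
best_image_index = similarity(images, caption).argmax()
```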
The corresponding fields of social application, as mentioned above, are "manufacturing," "lifestyle/city," "mobility," "education," and "health," and I think education in particular will be important.
Japan's AI Strategy announced in 2019 set a goal of producing 250,000 AI-capable people every year, but so far that target appears nowhere near being met.
At the very least there are not enough people to teach them, so I think we have to create AI for education.

Mr. Maruyama

I feel that the fusion of deduction and induction is particularly important, and specifically the question of how to combine simulation and machine learning. Recently, Preferred Networks announced that it is using machine learning to speed up atomic-level simulations by several orders of magnitude.
Deep learning has advanced so much that a kind of belief in the omnipotence of data took hold, but it turns out that is not actually the case. I think more and more new fields will emerge in which we use our knowledge of physics, chemistry, and so on together with data. In terms of social implementation, I think this will have a very large impact on materials development and materials search.
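One common way to combine simulation and machine learning in the way Mr. Maruyama describes is a learned surrogate model: run the expensive simulator a limited number of times, fit a model to its outputs, and use that model to screen many candidates cheaply. The sketch below only illustrates the idea; the simulator, network size, and data are hypothetical stand-ins, not the Preferred Networks system.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_simulation(x):
    """Stand-in for a costly physics simulation (hypothetical)."""
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1] ** 2)

# Run the slow simulator only a limited number of times to collect training data.
rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(500, 2))
y_train = expensive_simulation(X_train)

# Fit a neural-network surrogate that approximates the simulator.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X_train, y_train)

# The surrogate can now screen a huge number of candidates (e.g. material
# compositions) far faster than re-running the simulator for each one.
X_candidates = rng.uniform(-1, 1, size=(100_000, 2))
best_candidate = X_candidates[np.argmax(surrogate.predict(X_candidates))]
```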

Committee member Ushiku

Since this April, I have been working with professors who study inorganic materials, launching a project to speed up the development of new materials by a factor of 1,000. There too, I think the fusion of deduction and induction will be one of the key axes going forward.
One more thing: multimodality, the part that connects language and images, is actually a research theme I have worked on for a long time.
Even for a single material, there are many kinds of measured data and many ways of representing the material itself (for example, as a character string), and the types of data that get measured vary widely.
This is truly multimodal, and by connecting what is verbalized with what is not, I think we need to become able to predict things like "if I make this material, can I get this kind of performance?"

Committee member Kawakami

I work on implementation in business, and I think small data, deduction and induction, and simulators are all related.
In some fields you simply cannot run tests very many times; when a single test takes a lot of time and money, you turn to a simulator. But if you try to drop everything into the simulator, building the simulator itself becomes an enormous job.
In that sense, I think things speed up if we use simulators that incorporate human knowledge in some parts and are data-driven in others. Where the simulator does not match the real world, we need to fill that gap with data.
It would be easier if everything could be done with small data, but in reality, when we bring large amounts of data into the digital space, I think things work better if we also incorporate human knowledge and past cases.
It is said that all sorts of things will become digital twins in the future.
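One simple way to realize the combination Mr. Kawakami describes, a simulator carrying the human knowledge plus data filling in where it misses the real world, is residual modeling: keep the knowledge-based model and learn only the gap between its predictions and real measurements. A minimal sketch, with a hypothetical simplified simulator and synthetic measurements:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def physics_simulator(x):
    """Simplified model built from human domain knowledge (hypothetical)."""
    return 2.0 * x[:, 0] + 0.5 * x[:, 1]

# Real-world measurements that the simplified simulator does not fully explain.
rng = np.random.default_rng(1)
X_obs = rng.uniform(0.0, 1.0, size=(200, 2))
y_obs = physics_simulator(X_obs) + 0.3 * np.sin(6 * X_obs[:, 0])  # unmodelled effect

# Learn only the residual between simulator output and observation.
residual_model = GradientBoostingRegressor()
residual_model.fit(X_obs, y_obs - physics_simulator(X_obs))

def hybrid_predict(x):
    """Digital-twin-style prediction: knowledge-based model plus learned correction."""
    return physics_simulator(x) + residual_model.predict(x)
```

Because the learned part covers only what the knowledge-based model misses, far less data is needed than if the whole behavior had to be learned from scratch.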

Mr. Murakawa

Looking at the overall picture, I think the discussion so far lines up well with the question Mr. Maruyama raised at the start: how to deal with the parts that currently depend on big data.
In particular, when it comes to working with small data, it is better to be able to do things with less data, but metacognition lies behind that. In that sense, I think "AI that understands meaning" will be deeply related to how we handle small data.
Once AI that understands meaning is realized, I think the relationship between humans and AI will become more diverse. As a perspective that has not come up so far, I also think we need to establish transfer learning technology in a practical sense.
For example, natural language processing models such as GPT-3 are expensive, so we cannot train them over and over again. I think we should establish the technology for reusing what has already been learned and proceeding with less data.
Humans, too, are able to abstract something and apply past experience to new domains. I think that kind of capability should also exist as a form of AI.
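The practical transfer learning Mr. Murakawa mentions usually means reusing an expensive pretrained model and adapting only a small part of it to new data. Below is a minimal PyTorch-style sketch of that pattern; the backbone, head, and data are hypothetical placeholders rather than any model discussed at the symposium.

```python
import torch
import torch.nn as nn

# Hypothetical pretrained backbone; in practice this would be loaded from
# an existing, expensively trained model rather than built from scratch.
backbone = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 768))

# Freeze the expensive-to-train parameters so they are reused, not retrained.
for p in backbone.parameters():
    p.requires_grad = False

# Train only a small task-specific head on the new, small dataset.
head = nn.Linear(768, 3)  # e.g. a 3-class downstream task
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

features = torch.randn(64, 768)          # stand-in for a few labelled examples
labels = torch.randint(0, 3, (64,))

for _ in range(100):
    optimizer.zero_grad()
    with torch.no_grad():
        z = backbone(features)           # reuse frozen pretrained knowledge
    loss = loss_fn(head(z), labels)
    loss.backward()
    optimizer.step()
```

Because only the small head is updated, adaptation needs far less data and compute than retraining the full model.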

Chairman Nakajima

The tricky part is what was just said about human experience. Human beings have bodies, live their lives, and eat, don't they?
But if you want to interact with a program, or want to ask something of a program, you have to make everything explicit, and that difficulty has long been known as the frame problem.
How to deal with the frame problem has still not been resolved. It also relates to the metacognition we discussed just now, and I think it is quite difficult to work out what to do with an AI that does not live a life.

Mr. Maruyama

Chairman Nakajima asked how we can teach things that cannot be understood without living a life, and such things are not written in any specification. In other words, wouldn't it be better to think of artificial intelligence, like the human mind, as something to be nurtured rather than designed?
The concepts of nurturing and designing do not connect very well. Design always requires a specification, whereas nurturing is more like "I will give you an environment, but what you end up able to do is left to chance." In that sense, I feel the term "artificial intelligence" is self-contradictory: intelligence is not something artificial.

Committee member Ushiku

Professor Nakajima offered a hypothesis about how to deal with the frame problem. Even thinking about it that way, it is still difficult to work out how much has to be described, so in that sense I don't think anything fundamental has changed.
On the other hand, what is actually being implemented in society in the current third AI boom is "weak AI," in the sense used by the researchers pursuing AGI. For weak AI the frame problem is comparatively small: once there is a context of "I want to build AI for this particular use," the range of options that has to be considered becomes narrower and narrower.
I think that is actually what is driving the current breakthrough: by coming to terms with the frame problem in this way, we are able to learn well from data. The "expert systems" mentioned earlier were something similar, but I think they fell short in the last mile, in communicating with humans.
So I think social implementation will progress further with AI that specializes in a particular function, ultimately collaborates with humans, and can learn things like "if humans don't like this decision, stop making it."

Can Japan Take the Initiative in Deep Reinforcement Learning?

About the characteristics and differentiation strategies of AI in each country

Secretariat

For the action plan committee, we conducted a survey of overseas cases.
As the survey shows, each country and region is pursuing AI in its own way. With that as background, I would like to ask the committee members to add their views on the current situation in Japan.

Committee member Kawakami

Each country has its own issues and circumstances, but Japanese industry and companies have basically tended to "win on technology and lose on business." In the field of AI, however, I think even "winning on technology while losing on business" is hard to sustain.
Japan is strong on the technology side, but I think we still have a great deal of work to do in rethinking how to turn that into business and how to monetize it.
In particular, if we treat AI technology as a set of tools, in an easily understandable form, quite a lot can be done with it. In a world where the technology evolves quickly, I think the task is to rethink the relationship between the people who use it and the machines and things it is used with, and how to apply it to business, to added value, and to productivity improvement.
What is happening in Japan now is that AI is being used mainly to improve productivity and reduce costs. In the US and China, by contrast, the business and the money-making come first, and they think about acquiring technology directly linked to that; I think that is the main difference from Japan.
There is also a difference in the explosive power of startups: in China and India, all sorts of PoCs are run one after another with a target market of 1.3 billion people in mind, and when something works, huge amounts of money are invested to accelerate it and win the game. I feel Japan has not yet reached that kind of business competition.
Chairman Nakajima

When thinking about social implementation, I believe one obstacle in Japan is the law. In general, law follows technology, so we cannot enact laws before the technology exists.
In the United States, by contrast, the rules essentially say "these specific things are prohibited; everything else you may do." Japan is said to take a whitelist approach and the United States a blacklist approach, and under a whitelist approach, whenever a new technology appears, social implementation lags behind other countries.
Self-driving cars, for example, are already in practical use in the United States and China and are out on the roads, whereas Japan cannot deploy them until laws on automated driving are put in place.

Is it possible to build an AI that can understand the meaning and value of tasks?

Chairman Nakajima

I would like to talk about AI that understands meaning.
In short, the question is whether an AI, when used by humans as a tool, can understand the meaning and value of the task it is given.
As I said earlier, humans live their lives; that is what distinguishes them from mass-produced programs. Creating a new dish, for example, is something a living human can do but AI is not good at. On the other hand, AI does seem able to reproduce dishes for which recipes already exist.

Chairman Nakajima

IBM's Watson is said to be, in a sense, the best modern descendant of the expert systems. Watson is basically an assistant that supports humans, and communicating work to such an assistant runs into the next problems: the frame problem, and the inability to share values because its physicality differs from ours.
There is also the so-called symbol grounding problem: whether the meaning of symbols can be properly grounded in the real world.
Current neural networks, to put it in extreme terms, have two problems: they cannot learn meaning, and they can be deceived by data crafted to fall in the gaps between what they have learned.
We would like to propose a hybrid approach in which machine learning and symbolic reasoning compensate for each other's weaknesses. The fusion of deduction and induction I mentioned earlier comes in here as well, and I think something like inductive logic programming may become possible.
I would be happy if the issues marked in red on the slide could be solved in some way. Exactly how to build the hybrid method has not yet been decided; I would like to work it out together with everyone.
While discussing various things in the committee, the keywords System 1 and System 2 came up. System 1 covers what can be done instantly, while System 2 covers what cannot be done without deliberate thought.
Fundamentally, I think it is psychologically natural for deep learning to take charge of System 1 and symbolic reasoning to take charge of System 2.

Chairman Nakajima

Ultimately, I think it would be good to build something like the figure below: an outer loop in which a deep neural network at the bottom performs System 1 and symbolic reasoning at the top performs System 2. How the two layers are coupled still needs to be studied. The proposal is that if we build such a two-story system, we will be able to create an AI that understands meaning.
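As a very rough illustration of this two-story idea, the sketch below wires a neural "System 1" that turns raw observations into symbols to a rule-based "System 2" that deliberates over them inside an outer loop. Every component here is a placeholder assumption, not the committee's actual design.

```python
import torch
import torch.nn as nn

# "System 1": a small neural network that maps raw observations to symbols.
system1 = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 4))
SYMBOLS = ["obstacle", "goal", "free", "unknown"]

# "System 2": explicit symbolic rules that deliberate over those symbols.
RULES = {
    "obstacle": "avoid",
    "goal": "approach",
    "free": "explore",
    "unknown": "observe_again",
}

def outer_loop(observations):
    """Outer loop: perception by System 1, then rule-based decision by System 2."""
    actions = []
    for obs in observations:
        logits = system1(obs)                   # fast, pattern-based System 1
        symbol = SYMBOLS[int(logits.argmax())]  # grounding: signal -> symbol
        actions.append(RULES[symbol])           # slow, rule-based System 2
    return actions

actions = outer_loop(torch.randn(5, 32))  # five placeholder observations
```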

Mr. Murakawa

Of course, I agree that System 2 should be built. However, I think it is quite difficult to decide what the target should be when first launching such a project. If it is too simple it is not interesting, and if you aim for too much generality it is hard to even get started.
I think research in this area would become even more active if we could formulate a problem that is practically useful to some degree. For example, the System 2 needed in a world where people and robots work together and the System 2 needed for nursing care seem to me quite different. I felt it was difficult to decide where to set the problem; what do you think?
Chairman Nakajima

I think you are right. Deciding which domain to start researching in is certainly challenging, but I have always thought that being able to hold a conversation is important. I think it would be interesting to start with something like discussing impressions of a movie.
Google and others can now translate to some extent, but that does not follow logical structure, and I don't think images and sentences are being handled with logical structure either. Building a composition purely at an abstract layer, which is the essence of language, and developing logic there is something today's deep learning cannot do, so I feel that if we take on that kind of task and work hard at it, something will come of it.