On Tuesday, June 15, 2021, the symposium “NEDO Artificial Intelligence (AI) Global R&D Action Plan Symposium in the field of AI technology – the direction of social implementation of AI that Japan should aim for –” was held online.
As of 2021, how far has the social implementation of AI progressed? Can Japan take the initiative in deep reinforcement learning going forward? How do AI experts see these issues?
This article is a preliminary report on the symposium.
*Some parts have been edited for publication.
Table of Contents
- Overview of the symposium
- About the “Overall Action Plan for Research and Development in the Field of Artificial Intelligence (AI) Technology”
- Will AI stop being called AI?
- How far has social implementation of AI progressed?
- Can Japan Take the Initiative in Deep Reinforcement Learning?
- About the characteristics and differentiation strategies of AI in each country
- Is it possible to build an AI that can understand the meaning and value of tasks?
Overview of the symposium
At this symposium, the results of the “Survey for formulating a broad R&D action plan and extracting projects in the field of artificial intelligence (AI) technology” were introduced, and the expert members of the action plan formulation committee discussed the direction of social implementation of AI that Japan should aim for.
◼Speakers
Mr. Hideyuki Nakajima (President, Sapporo City University), Chairman of the Action Plan Formulation Committee
Members of the Action Plan Formulation Committee (in alphabetical order):
・Mr. Masahiko Inami (Professor, Research Center for Advanced Science and Technology, The University of Tokyo)
・Mr. Yoshitaka Ushiku (Principal Investigator)
・Mr. Tofuku Kawakami (Managing Director, Management Co-Creation Platform Co., Ltd.)
・Mr. Hiroshi Maruyama (Executive Fellow, Kao Corporation / Specially Appointed Professor, Research Center for Artifacts, The University of Tokyo / PFN Fellow, Preferred Networks Co., Ltd.)
・Mr. Masahiro Murakawa (Deputy Director, Artificial Intelligence Research Center, Department of Information Technology and Human Factors, National Institute of Advanced Industrial Science and Technology; concurrently Research Planning Office, Artificial Intelligence Research Strategy Department)
*Mr. Inami appeared in a recorded segment.
*Mr. Matsuo was absent.
◼Lecture schedule
09:30 Symposium start
09:35 Greetings from the organizer
09:40 Introduction of the results of the AI Action Plan
09:45 Part 1 talk session: “How far has social implementation of AI progressed?”
11:00 Part 2 talk session: “Can Japan Take the Initiative in Deep Reinforcement Learning?”
12:00 End
About the “Overall Action Plan for Research and Development in the Field of Artificial Intelligence (AI) Technology”
AI technology is currently being actively developed around the world. In order for Japan to lead the world in the field of AI, NEDO decided to work toward formulating new technology strategies and launching projects early, and formulated and published the “Overall Action Plan for Research and Development in the Field of Artificial Intelligence (AI) Technology (AI Action Plan).”
In 2016, NEDO published the “Vision for Implementation of Next-Generation Artificial Intelligence in Society.” Building on this vision, the action plan was formulated after discussion by an action plan formulation committee consisting of six AI experts.
The committee met six times online over about half a year starting in January of this year, organizing and discussing the fields in which AI should be applied and their relationship to the underlying technologies.
As a result, the committee narrowed the fields in which AI should be actively utilized over the next 10 years (the orange vertical axis) down to five: “manufacturing (production),” “lifestyle/city,” “mobility,” “education,” and “well-being.”
The gray part of the horizontal axis represents AI technology to date, and the blue part shows the direction of AI technology development over the next 10 years.
In the AI Action Plan, we have selected 12 AI technology issues (light blue areas) that should be addressed not only for the use of AI, but also for the co-evolution of humans and AI.
Based on these 12 AI technology issues, NEDO plans to consider commercializing projects for strategy formulation as part of the action plan.
Will AI stop being called AI?
Chairman Nakajima
It means that AI has stepped into the intellectual realm. A typical example is Go and shogi programs, which have completely surpassed humans but are still called AI. In that sense, I think things will be somewhat different from now on.
Committee member Ushiku
Generally, once something works it stops being called AI; this “AI effect” is well known. Machine translation, for example, gradually came to be called simply a “translation engine.” So I thought it would be an interesting question whether there is anything that continues to be called AI precisely because it is doing something very smart automatically.
On the other hand, AI and shogi have long been closely associated, so shogi programs have always been called AI. I feel that the possibility of their continuing to be called AI is a phenomenon unique to shogi.
Committee Member Kawakami
On the other hand, if AI is used, say, to control a car’s suspension, I don’t think we will bother to call it “suspension-control AI.”
The term “human-like domain” may be controversial, but for relatively human-like domains there is no word that fits better than AI, so I felt that the word “AI” will settle there.
How far has social implementation of AI progressed?
Chairman Nakajima
Until now, systems have been automated and embedded, operating automatically behind the scenes. From now on, however, I think there will be parts that are not embedded, in other words, that operate in a visible form while interacting with humans.
In a sense, I think deep learning has now, for the first time in history, brought AI into practical use in that form.
As for the “understanding meaning” on the right side, I feel it also includes the further step of “conversing with humans in words.” So when I look at an AI as a partner that converses in Japanese or English, I think it can hold its own, at least as an interface.
Chairman Nakajima
Our argument is that we want to move that to the right over the next 10 years. Understanding meaning means not just simulation but the fusion of deduction and induction (akin to the fusion of deep learning and symbolic reasoning), that is, combining learning from examples with applying rules.
Below that is the optimization of society as a whole. Overall optimization is quite difficult: it is not at all clear what specification should be optimized. The discussion has to start from the question of what optimization even means.
Below that is big data: deep learning is trained by giving it 1,000, 10,000, or 1,000,000 examples. Since that is one form of “learning,” we also have to become able to “learn from small data.”
Below that is integrating various modalities. Right now we have things like “image to words, words to image,” but I was thinking it would be good to be able to do various things in a more tightly coupled form.
As mentioned above, the accompanying fields of social application are “manufacturing,” “lifestyle/city,” “mobility,” “education,” and “well-being,” and I think working in these fields will be important.
Japan’s AI strategy announced in 2019 set a goal of “producing 250,000 AI personnel every year,” but so far it does not seem to be anywhere near achieved.
At the very least, I don’t think there are enough people to teach it, so I believe we have to create AI programs for education.
Mr. Maruyama
Until now, deep learning has made such progress that there was a sort of belief in the omnipotence of data, but surprisingly that is not the case. I think there will be more and more new fields where we use knowledge of physics, chemistry, and so on together with data. In terms of social implementation, I think this will have a very large impact on materials and materials search.
Committee member Ushiku
One more thing: the multimodal part that connects language and images is actually a research theme I have been working on for a long time.
Even for a single material, there are various ways of expressing it: the data measured from it, the data representing the material itself (its expression as a character string), and many kinds of measured data.
This is truly multimodal, and I think we need to become able to connect what is verbalized with what is not, and predict things like “if I create this material, can this kind of performance be achieved?”
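The kind of cross-modal linking described here can be sketched, very loosely, as retrieval in a shared embedding space. The toy example below matches short captions to pre-extracted “image” vectors by cosine similarity; the vocabulary, vectors, and file names are all invented for illustration (real systems learn both encoders from large sets of paired data).

```python
from math import sqrt

# Toy vocabulary for the text side of the shared space (invented).
VOCAB = ["red", "apple", "blue", "sky"]

def embed_text(caption):
    # Bag-of-words vector over the toy vocabulary.
    words = caption.lower().split()
    return [float(words.count(w)) for w in VOCAB]

# Hypothetical pre-extracted "image" vectors, already aligned with the
# text space (in reality an image encoder would produce these).
IMAGE_DB = {
    "photo_apple.jpg": [1.0, 1.0, 0.0, 0.0],
    "photo_sky.jpg":   [0.0, 0.0, 1.0, 1.0],
}

def cosine(a, b):
    # Cosine similarity, guarding against zero vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(caption):
    # Return the image whose embedding is closest to the caption's.
    q = embed_text(caption)
    return max(IMAGE_DB, key=lambda name: cosine(q, IMAGE_DB[name]))

print(retrieve("a red apple"))  # -> photo_apple.jpg
print(retrieve("blue sky"))     # -> photo_sky.jpg
```

The same pattern would extend to the materials case discussed above: measured spectra on one side, textual descriptions on the other, embedded into one common space.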
Committee Member Kawakami
In some areas nothing can be done without a simulator, and there is the problem that real tests cannot be run many times. When a single test takes time and money, you turn to a simulator. But if you try to drop everything into the simulator, building the simulator itself becomes the burden.
In that sense, I think things will speed up if we use simulators that incorporate human knowledge in some parts and leave other parts to data. Where the simulator does not match the real world, we need to fill that gap with data.
It would be easier if this could be done with small data, but in reality, when we bring large amounts of data into the digital space, I think it works out better if we also incorporate human knowledge and past cases.
It is said that various things will become digital twins in the future.
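As a minimal sketch of this mix of knowledge and data, the toy code below corrects a hand-built “simulator” with a factor fitted from a few real measurements. The physical model, the measurements, and the correction scheme are all invented for illustration; real digital twins are far richer.

```python
# Toy sketch: a knowledge-based simulator plus a data-driven correction.

def simulate(load):
    # Physics-style model built from human knowledge:
    # predicted deflection is proportional to load.
    return 0.5 * load

# A few real measurements (load, observed deflection). In practice
# these would come from the physical system the twin mirrors.
measurements = [(2.0, 1.3), (4.0, 2.6), (6.0, 3.9)]

# Fit a simple multiplicative correction from the small data set:
# the average ratio of observed to simulated values.
ratios = [obs / simulate(load) for load, obs in measurements]
correction = sum(ratios) / len(ratios)

def corrected_simulate(load):
    # Simulator output adjusted by the data-driven factor.
    return correction * simulate(load)

print(corrected_simulate(8.0))
```

The point of the sketch is the division of labor: the structure of the model comes from human knowledge, and only the residual mismatch with reality is filled in from data.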
Mr. Murakawa
Especially with small-data approaches, the point is to get by with less data, but metacognition is involved behind that. In that sense, I think “AI that understands meaning” will be deeply related to how we work with small data.
Once AI that understands meaning is achieved, I think the “relationship between humans and AI” will become more diverse. As a point that has not come up so far, I think we need to establish transfer learning technology in a practical sense.
For example, large natural language models such as GPT-3 are expensive to train, so we cannot train them over and over. I think we should establish the surrounding technology: reusing what has already been learned and proceeding with less data.
Humans, too, can abstract things and adapt past experience to new domains; I think transfer learning is the AI counterpart of that ability.
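The reuse described here can be sketched as feature-based transfer learning: a frozen “pretrained” backbone whose outputs feed a tiny head trained on a handful of examples. The backbone, data set, and training loop below are a hypothetical minimal example, not any specific system’s method.

```python
# Minimal sketch of feature-based transfer learning. The "pretrained"
# backbone and the tiny data set are invented for illustration.

def pretrained_features(x):
    # Stand-in for an expensive pretrained network: its weights are
    # frozen, so it is only run forward and never retrained.
    return [x, x * x]

# Small labeled data set for the new task: (input, target), with
# target = x squared, so an exact linear head exists.
data = [(0.0, 0.0), (1.0, 1.0), (2.0, 4.0)]

# Only a tiny linear "head" on top of the frozen features is trained.
w = [0.0, 0.0]
lr = 0.01
for _ in range(10_000):
    for x, y in data:
        f = pretrained_features(x)
        err = w[0] * f[0] + w[1] * f[1] - y
        # Gradient step on the head only; the backbone stays fixed.
        w[0] -= lr * err * f[0]
        w[1] -= lr * err * f[1]

pred3 = w[0] * 3 + w[1] * 9  # the head applied to features of x = 3
print(f"prediction for x=3: {pred3:.2f}")
```

Because only two parameters are updated, a handful of examples suffices, which is the practical appeal of not retraining the large model itself.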
Chairman Nakajima
However, if you want to interact with a program or ask something of a program, you have to make everything explicit; this has long been known as the frame problem.
How to deal with the frame problem has still not been resolved. It is related to the meta-reasoning we just discussed, and I think it is quite difficult to work out what to do about it with an AI that does not live in the world.
Mr. Maruyama
The concepts of nurturing and designing are not well connected. Design always requires specifications, whereas nurturing is more like “I will give you an environment, but what you end up able to do is left to chance.” So I feel the term artificial intelligence is self-contradictory: intelligence is not something artificial.
Committee member Ushiku
On the other hand, in the current third AI boom, what is actually being implemented in society is “weak AI,” in the terminology of those pursuing AGI. In that case the frame problem is relatively small: given a context like “we want to create AI for this particular use,” the range of options that must be considered becomes narrower and narrower.
I think that is actually what is driving the current breakthrough: by coming to terms with something like the frame problem, we can learn well from data. There were also the “expert systems” mentioned earlier, but I think they lacked the last mile, such as communicating with humans.
So I think social implementation will progress further with AI that specializes in some function and ultimately collaborates with humans, AI that can learn things like “if humans don’t like this, let’s stop making that decision.”
Can Japan Take the Initiative in Deep Reinforcement Learning?
About the characteristics and differentiation strategies of AI in each country
Secretariat
In this way, efforts are under way in various forms; that is the current situation. In response, I would like to ask the committee members for additional comments on the situation in Japan.
Committee Member Kawakami
I think Japan is good at selling technology, but I think we still have to work hard at rethinking how to turn that into business and how to monetize it.
In particular, if we package AI technology as easy-to-understand tools, quite a lot becomes possible. In a world where evolution is fast, I think the task is to rethink the relationship between the people who use the technology and the machines and things used with it, and how to adapt it to business, added value, and productivity improvement.
What is happening in Japan now, I think, is that AI is mostly used for improving productivity and reducing costs. In the US and China, by contrast, money-making and business come first, and I think the main difference from Japan is that they think about acquiring technology directly linked to that.
Also, in China and India there are differences in the explosive power of startups: they run many detailed POCs, and, aiming to expand into markets of as many as 1.3 billion people, if a POC goes well they invest huge amounts of money to speed up the game. I feel Japan has not yet reached that kind of business competition.
Chairman Nakajima
Japan takes a whitelist approach, where only what is explicitly permitted may be done, while the United States takes a blacklist approach, written as “this is not allowed, but you may do these things.” Under a whitelist system, when a new technology is developed, social implementation lags behind other countries.
For example, self-driving cars are already in practical use in the United States and China and are actually driving around. In Japan, however, they cannot be deployed until a law on automated driving is enacted.
Is it possible to build an AI that can understand the meaning and value of tasks?
Chairman Nakajima
In short, the question is whether AI, when treated as a human tool, can understand the meaning and value of the task.
As I said earlier, humans live in the world. That is what sets them apart from mass-produced programs. For example, creating a new dish is something a living human can do, but AI is not good at it. On the other hand, it seems AI can imitate dishes that already have recipes.
There is the so-called symbol grounding problem: whether the meaning of symbols can be properly understood in the real world.
Current neural networks, to put it in extreme terms, have the problems of being unable to learn meaning and of being deceived by data crafted to fall between the results they have learned.
We would like to propose a hybrid approach in which each side reinforces the other’s weaknesses: symbolic reasoning shoring up machine learning, and machine learning shoring up symbolic reasoning. The fusion of deduction and induction I mentioned earlier also comes in here, and I think something like inductive logic programming may become possible.
And I would be happy if the parts written in red could be solved in some way. We have not yet decided exactly how to build the hybrid method; I would like to create it together with everyone.
While discussing various things in the committee this time, the keywords System 1 / System 2 came up. System 1 is processing that happens instantly and intuitively; System 2 is processing that requires deliberate thought.
Fundamentally, I think it is psychologically apt that deep learning takes charge of System 1 and symbolic reasoning takes charge of System 2.
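One very loose way to picture this division of labor is a fast cache of learned answers (System 1) backed by an explicit rule engine (System 2), with results promoted from the slow path to the fast one. The predicates, rules, and cached entries below are invented for illustration.

```python
# Toy sketch of a System 1 / System 2 hybrid.

# System 1: answers memorized from past experience (fast, intuitive).
system1_cache = {
    ("even", 4): True,
    ("even", 7): False,
}

def system2(predicate, n):
    # System 2: explicit symbolic rules, applied deliberately.
    if predicate == "even":
        return n % 2 == 0
    if predicate == "positive":
        return n > 0
    raise ValueError(f"no rule for {predicate}")

def answer(predicate, n):
    # Try the fast path first; fall back to deliberate reasoning,
    # and cache the result so it becomes a System 1 answer next time.
    key = (predicate, n)
    if key not in system1_cache:
        system1_cache[key] = system2(predicate, n)
    return system1_cache[key]

print(answer("even", 4))       # answered from the cached fast path
print(answer("positive", -3))  # computed by the rule engine, then cached
```

The caching step is only a crude stand-in for learning, but it shows the shape of the proposal: intuition handles the routine cases, reasoning handles the novel ones.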
Mr. Murakawa
I think research in this field will become even more active if we can formulate problems that are practically useful to some degree. For example, the System 2 needed in a world where people and robots work together and the System 2 needed for nursing care seem to me quite different. I felt it was difficult to decide where to set the problem; what do you think?
Chairman Nakajima
Google and others can now translate to some extent, but that translation does not follow logical structure, and I don’t think images and sentences follow logical structure either. Composing at the level of fiction (a highly abstract layer), which is the essence of words, and developing logic there is something deep learning cannot do today. I feel that if we tackle such tasks in earnest, something will come of it.