
Robot Ethics and Design with Shalaleh Rismani [2022]


Table of Contents

  • How can we design responsible AI and technology?
  • Main answers and points to note
  • Shalaleh Rismani interview highlights
  • Full interview

How can we design responsible AI and technology?

 

For the first in a long-awaited series of interviews with AI experts, I had the honor of speaking with Shalaleh Rismani.

Shalaleh is currently a co-director and design researcher at the Open Robotethics Institute (ORI) (1), a Canadian think tank researching the ethical and social implications of AI and robotics. She is also a Ph.D. student in electrical and computer engineering at McGill University (2), where, as a member of the Responsible and Autonomous Intelligent Systems Ethics Lab, she researches methods to measure and evaluate whether AI systems adhere to AI ethics principles.

This time, we ask her about the following:

  1. AI and Robot Ethics
  2. Design based on human-centered values
  3. AI development and regulation
(*Translation Note 1) The Open Robotethics Institute is a non-profit think tank founded in Vancouver, Canada in 2015 that studies the social and ethical implications of robotics and AI. As one result of its research, it offers a free tool, “Foresight into AI Ethics,” to help companies formulate AI guidelines.
(*Translation Note 2) McGill University is a public university headquartered in Montreal, Canada. Founded in 1821, it is one of Canada’s most prestigious universities and has produced 12 Nobel Prize winners.

Main answers and points to note

Note: The interview highlights below have been edited and shortened for clarity. To read the full interview, scroll down to the “Full interview” section.

Shalaleh Rismani interview highlights

Can you tell us a little bit about your background and how you got into the world of AI and robotics?

My background is in engineering. I majored in mechatronics as an undergraduate at the University of British Columbia (3), and at the time mechatronics was a new field (4). Mechatronics is a fusion of mechanical engineering, electrical engineering, and computer science, and I really liked it because it lets you do a bit of everything.

Around that time, Dr. Elizabeth Croft gave a presentation focusing on human-robot interaction, discussing how robotic systems and humans can actually build successful collaborations. Listening to that presentation got me very interested in the research, so I continued doing research and internships on human-robot interaction as an undergraduate. Through these activities, I became interested in the field of robot ethics.

In my department in particular, I learned how to build robotic systems, but we didn’t ask many questions about why we were building a given system or what kind of positive or negative influence it would have on people. That frustrated me, and that frustration led me to join the Open Robotethics Institute, which was my entry point into this field.

(*Translation Note 3) The University of British Columbia is a public university located in British Columbia, Canada. Founded in 1908 as a branch of McGill University, it became independent in 1915. It is one of Canada’s prestigious universities, having produced four former Canadian prime ministers and seven Nobel laureates.
(*Translation Note 4) The Japanese Wikipedia entry “Mechatronics” includes an image illustrating the interdisciplinary character of the field.

In traditional engineering and robotics programs, ethical and social issues are not often taught in class or considered in research, though thankfully that trend has been changing rapidly in recent years.

But when did you personally start thinking about these issues? And when did ethical principles and social implications begin to play into your research and investigations?

In my case, thinking about ethical and social issues was one of the reasons I went into engineering in the first place. In high school, I wanted to go into engineering because engineers can solve social problems.

My first contact with engineering and design was a program in which we identified problems around the theme of “zero waste” and solved them through design. I was fascinated by the fact that social problems can be partially solved by technology.

I went on to study engineering and was fortunate enough to be probably the first student at the University of British Columbia to take a course called ‘Global Engineering’. In this course, I was able to learn practically how to solve ethical and social problems with technology. There, I learned concepts such as human-centered design, service design, impact assessment, stakeholder understanding, and context understanding.

I had a personal interest in and my own thoughts about ethical and social issues, but the Global Engineering course was where these issues were first addressed formally. After that, my own reading and interest in the field allowed me to keep growing and learning.

What are you currently working on in your research and work?

The field of robot ethics and AI ethics has evolved considerably since 2012–2013, when I first encountered it. The term “robotethics” was only coined in 2002 (5), so it’s not that old. The term “AI ethics” is even newer; it came into use around five or six years ago, around the same time the first academic conferences in the field were established.

So the problem I’m looking at right now, while it might seem simple and preliminary in more established fields, is still an open question here. We talk about the need and the desire to build fair, transparent, and ethical AI systems.

But what does AI ethics actually mean, and how do you measure it concretely? When machine learning engineers actually build systems, what standards and regulations should they follow? There are some standards in development, but none yet that people actually agree on or follow as a matter of course. So my research looks specifically at how AI ethics is measured today and how it should be measured.

(*Translation Note 5) The term robotethics is credited with being coined in 2002 by Italian roboticist Gianmarco Veruggio.
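To make the idea of “measuring AI ethics” concrete, here is a minimal sketch of one widely used fairness measurement, the demographic parity difference. This is an illustrative example only, not a method from Rismani’s research; the data and the 0.1 threshold below are made up.

```python
# Minimal sketch: one concrete way an "AI ethics" property gets measured.
# Demographic parity difference compares a model's positive-prediction
# rates across groups; a value near 0 indicates parity on this one metric.

def positive_rate(predictions, groups, group):
    """Fraction of positive (1) predictions within one group."""
    in_group = [p for p, g in zip(predictions, groups) if g == group]
    return sum(in_group) / len(in_group)

def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rate between any two groups."""
    rates = [positive_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Hypothetical binary predictions (1 = approved) for two groups, "a" and "b".
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity difference: {gap:.2f}")  # 0.60 - 0.40 = 0.20
# A common (but arbitrary) rule of thumb flags gaps above 0.1 for review.
```

Note that this single number captures only one narrow notion of fairness; part of the difficulty Rismani describes is that no single metric, and no agreed-on threshold, settles whether a system is “ethical.”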

What are your thoughts on future trends and new challenges in the technology and AI ethics industry?

New trends? That’s an interesting question. The AI industry currently faces a major challenge in making AI systems practically successful on many levels. Part of this is a technical challenge, in the sense that many companies are not necessarily ready to deploy AI systems in practice.

Not being ready means not having the right data, infrastructure, and systems to actually develop and deploy machine learning models. To prepare for the introduction of AI systems, we need to better understand, and put into practice, what a good process for actually introducing AI and machine learning systems looks like.

In practice, there are technical challenges around data: how it is protected, how it is structured, who has access to it, and how secure it is.

There are also technical challenges around model implementation and model efficiency, such as whether the necessary computation can actually be performed with the limited computing power that many companies have.

Those are the technical issues, but at the same time, the social issues are becoming very big: how machine learning systems will interact with people, how they will affect people, whether people will use them over the long term, and what to do when they break down. How should we act on these questions?

Many people besides myself are aware of these social and ethical issues and are actively addressing them. These two areas, the technical and the socio-ethical issues surrounding AI, are the two main trends worth mentioning.

Do you have any advice for current or future roboticists, researchers, and developers (and related practitioners) in industry and academia on how to design more responsible technology?

When designing AI systems and robots, one thing that has been practiced in other fields for decades but is not practiced much today is understanding your stakeholders and the values those specific stakeholders hold.

For example, designing something for people in Vancouver is different from designing for an Indigenous community in Canada.

These groups have inherently different values, and even within the same country, different communities have different values about how they want (or don’t want) to incorporate automated systems into their lives.

Human-centered design, or value-sensitive design, is a very basic design approach that has been applied in other fields for decades, but I don’t think it has been applied much in AI and robot development. There is plenty of room to learn about ethical design from other disciplines and apply that learning to AI and robotics development.

In an ideal future world, how would you like AI ethics to be built into systems?

In an ideal world, I believe AI systems and AI ethics should be regulated, so that ethical systems get built consistently, not just once or twice.

In the computer science and machine learning communities, there is a school of thought that regulation stifles innovation. I do not agree with the view that regulation hinders innovation.

I believe innovation can exist in many places. Just because regulation requires you to collect data in a certain way doesn’t mean you can’t innovate as a company. You can be innovative even under various constraints. So, in the long run, I think this area should be regulated.

But until AI regulation is in place, ideally everyone designing AI systems should know how to ask themselves the ethical questions they ought to consider while designing.

In other words, if only one person in a company is promoting AI ethics, it will never work. What actually moves things forward is when everyone, or at least the vast majority, in the company feels comfortable asking these ethical questions.

AI ethics is a rapidly evolving and dynamic field, yet it is not well understood by many internet and technology users, so we would like to hear your opinion.

How can we start conversations about AI ethics with friends, colleagues, neighbors, and others? What role can we play in facilitating conversations about ethical and responsible AI and in spreading awareness of AI ethics?

That’s a great question. The issue you’re pointing out is very current: people are not necessarily participating in this conversation despite being affected by AI.

A simple method I use when thinking about this issue and talking to friends is to ask, “How am I spending my time?” and “Who is directing my time?” For example, when you use social media apps, machine learning systems automatically make certain things more visible than others.

Next time, pay attention to what is being displayed to you. What are you being directed to see? Maybe an hour of your day is being dictated by a machine learning system. This is a very big question for me, and I want everyone to think about it. And those who have thought about it even once begin to ask their own questions.


Full interview

You can watch the full interview with Shalaleh in the video below (co-hosted by Catherine Yeo and Elisée Djapa from the Fair Bytes team).
