Bias-sensitizing robot behaviours

A Quest for Avoiding Harmful Bias and Discrimination by Robots (AGAINST-19)

May 24, Half-day workshop at ICRA 2019, Montréal


About

Human behaviour is subject to a wide range of cognitive biases. The machines and robots we create naturally inherit many of these biases, as recently brought to light in both the media and academia through examples such as sexist ad targeting and racist policing, as well as robotic ageism, sexism and racism. This workshop focuses on the bias-sensitization of robot behaviour on issues such as gender, race, age and culture. Bias-sensitization, a concept brought forward by the field of gender studies, is the modification of behaviour to avoid discrimination and hidden, harmful bias.

The goal of this workshop is to raise awareness of, and provide new insights into, the biases involved in robot design, human-robot interaction and datasets, as well as the techniques required to avoid harmful bias and discrimination by robots. The workshop will provide a venue for robotics and AI researchers, as well as philosophers, psychologists and cognitive scientists, to exchange lessons learned from state-of-the-art research at the intersection of these diverse fields of study.

Topics of interest include:
- age/race/gender-biased robots
- feminist theory and robotics (e.g. gender studies in robotics, sex robots, feminist tech)
- culturally competent versus stereotyped robots
- learning from biased datasets in robotics
- fairness and transparency in robotic systems
- ethical robot decision making
- legal issues in robot discrimination
- ethical design and development of robotic systems

Location

The workshop will be hosted at the 2019 IEEE International Conference on Robotics and Automation (ICRA 2019).
Location: Room 520f, Palais des congrès de Montréal, Montréal, Canada.
Session: Friday, May 24, 13h30 - 17h00

Speakers

  • Prof. Ayanna Howard (@robotsmarts)

    14h00 - 14h30

    Diversity Biases with Socially-Interactive Robots

    As robots more fully interact with humans, the intersectionality of human-robot trust and bias must be more carefully investigated. The focus on trust emerges because humans tend to trust robots similarly to the way they trust other humans; thus, the concern is that people may misunderstand the risk associated with handing over decisions to a robot. Bias involves the disposition to allow our thoughts and behaviors, often implicitly or subconsciously, to be impacted by our generalizations or stereotypes about others. Bias can influence our relationships with social robots based on our own gender, ethnicity, and age stereotypes. In this talk, we frame issues of trust and bias in robotic technology through the lens of human-robot interaction studies that highlight these diversity-related biases.

  • Prof. James E. Young

    14h30 - 15h00

    Why is gender relevant in robotics?

    While creating robots that can sense, plan, move, and even work with people is a very challenging technical problem, gender-related issues arise that can hinder robot success. As roboticists, should we be taking gender issues into account when developing robots? I will discuss gender imbalance and bias in Computer Science and Engineering, and illustrate how they can hinder robot utility and adoption, and even lead to safety and equity issues. My talk focuses on raising awareness and understanding of real gender-related problems, and on discussing initial ideas of what we can do to manage them.

  • Dr. Barbara Bruno

    15h30 - 16h00

    On socially assistive robots, cultural competence and the importance of asking questions

    A substantial body of evidence has shown that cultural competence, generally defined as the ability to understand, appreciate, and interact with persons from cultures and/or belief systems other than one’s own, can help healthcare practitioners better understand their patients’ values and needs, and deliver care in a way that is free of discrimination. Given the increased interest in using socially assistive robots to support human caregivers in delivering care to older persons, the time may be right to ask ourselves whether socially assistive robots could improve the quality of life of users and their caregivers if they were sensitive to cultural differences while perceiving, reasoning, and acting. In this talk, we share the work we did in the CARESSES project towards the development of a culturally competent social robot for the care of older people.

  • Dr. John Danaher (@JohnDanaher)

    16h00 - 16h30

    Discrimination, Fairness and the Design of Robots

    There is a common assumption that ethical robotics must abide by the principles of non-discrimination and fairness in the design and operation of robots. But what does this actually entail? This talk will outline the major philosophical views on the nature of discrimination and fairness and consider their application to the design of robots. In doing so, it will argue that some forms of discrimination must be tolerated, possibly even encouraged, in the design of robots.

Schedule

Time            Session
13h30 - 13h40   Welcome (organizers)
13h40 - 14h00   Dr. Martim Brandao (University of Oxford, UK): The RoboTIPS Project, Responsible Robotics, and Bias
14h00 - 14h30   Prof. Ayanna Howard (Georgia Institute of Technology, US): Diversity Biases with Socially-Interactive Robots
14h30 - 15h00   Prof. James E. Young (University of Manitoba, Canada): Why is gender relevant in robotics?
15h00 - 15h30   Coffee break
15h30 - 16h00   Dr. Barbara Bruno (University of Genoa, Italy): On socially assistive robots, cultural competence and the importance of asking questions
16h00 - 16h30   Dr. John Danaher (NUI Galway, Republic of Ireland): Discrimination, Fairness and the Design of Robots
16h30 - 17h00   Panel discussion and closing

Organizers

  • Masoumeh (Iran) Mansouri (Corresponding organizer)

    Centre for Applied Autonomous Sensor Systems (AASS), Cognitive Robotics Lab, Örebro University, Sweden

  • Martim Brandao

    Oxford Robotics Institute (ORI), University of Oxford, UK

  • Alessandro Saffiotti (Workshop advisor)

    Centre for Applied Autonomous Sensor Systems (AASS), Cognitive Robotics Lab, Örebro University, Sweden

Support

This workshop is supported by:

RoboTIPS: Developing Responsible Robots for the Digital Economy, a five-year EPSRC Established Career Digital Economy Fellowship awarded to Professor Marina Jirotka (EPSRC reference EP/S005099/1).

IEEE RAS Technical Committee on Robot Ethics