
A Video-based Study Comparing Communication Modalities between an Autonomous Car and a Pedestrian

 
Authors: Chia-Ming Chang, Koki Toda, Takeo Igarashi, Masahiro Miyata and Yasuhiro Kobayashi
Category: Interaction Design / 2018
Organisation: IGARASHI Lab / The University of Tokyo & TOYOTA Japan

          
    ● The 10th ACM International Conference on Automotive User Interfaces and Interactive Vehicular Applications
       (AutomotiveUI 2018), Toronto, Canada, 23-25 September 2018

  


There is an increasing need for communication between autonomous cars and pedestrians. Conceptual solutions have been proposed to address this issue, such as equipping a car with various communication modalities (eyes, smile, text, light and projector) to communicate with pedestrians. However, there has been no detailed study comparing these communication modalities. In this study, we compare five modalities in a pedestrian street-crossing situation via a video experiment. The results show that text is better than the other modalities at expressing the car's intention to pedestrians. In addition, we compare the modalities across different scenarios and environments, as well as pedestrians' perception of the modalities.


Communication Modalities in Daytime & Evening
Paper [PDF]

 
