In 2010, a group of students and faculty members at Carnegie Mellon University in Doha, Qatar, introduced their campus to Hala, the latest in a line of what the school termed “roboceptionists.” Consisting of a truncated torso and an LCD screen featuring a blue-skinned female CGI head, Hala was designed to provide students and visitors with instructions, directions, and anecdotes in either formal Arabic or American English.
In addition to educating visitors about Qatar, Hala’s purpose was to explore human-robot interaction (HRI) in a multicultural setting. Doha’s population is a demographic mosaic, made up largely of expatriates from around the world, most of whom speak Arabic, English, or both. Because of that diversity, Hala interacted with visitors from a slew of countries, drawing on speech recognition and facial-expression perception to conduct, in Carnegie Mellon's words, “culturally appropriate” exchanges.
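To make the idea concrete, here is a minimal sketch of the kind of routing logic such a system might use: detected language and a rough read of the visitor’s expression select among greeting styles. Everything here, from the Perception fields to the GREETINGS table, is a hypothetical illustration, not code from the Hala project.

```python
# Hypothetical sketch of multilingual greeting selection for a
# roboceptionist. Names and categories are illustrative only.

from dataclasses import dataclass


@dataclass
class Perception:
    language: str    # e.g. "ar" or "en", as reported by a speech recognizer
    expression: str  # e.g. "smiling" or "neutral", from facial analysis


GREETINGS = {
    # Illustrative pairings; real culturally adapted dialogue is far richer.
    ("ar", "smiling"): "أهلاً وسهلاً! كيف أستطيع مساعدتك؟",
    ("ar", "neutral"): "السلام عليكم. كيف أستطيع مساعدتك؟",
    ("en", "smiling"): "Hi there! What can I help you find?",
    ("en", "neutral"): "Hello. How may I help you?",
}


def greet(p: Perception) -> str:
    """Pick a greeting matched to the detected language and expression."""
    return GREETINGS.get(
        (p.language, p.expression),
        "Hello / أهلاً — how can I help?",  # fallback when perception is uncertain
    )


print(greet(Perception(language="ar", expression="smiling")))
```

Even this toy version hints at the design question behind the project: the more reliably a robot can classify linguistic and visual cues, the finer-grained its responses can become.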
Within the school’s robotics department, Hala’s development sparked a line of inquiry. If a robot could read different linguistic and visual cues, could its communicative abilities improve? What might that mean for the future of HRI?