Focus
When interacting with physical objects, technology and other people, for communicating, learning or just having fun!
Capture
Record multisensory data comprising speech, full-body and face motion, and eye movements and gaze, in order to investigate social behaviour and interaction with the physical environment and with human participants.
Analyze
Synchronize, process and measure multimodal and multisensory data to shed light on the patterns of human behaviour, cognition and interaction.
Generate
Imitate human verbal and bodily behaviours through robotic platforms and virtual characters. Develop realistic systems that can interact with their users through different modalities in a natural way, across various applications and contexts.
About the lab
Leading-edge research
The HUBIC Lab is designed to facilitate leading-edge research and experimentation in multimodal interaction, spanning from Human-Computer Interaction to human social behaviour. HUBIC provides researchers with state-of-the-art devices and infrastructure for capturing, analyzing and modelling the multimodal signals pertaining to human behaviour: face and body motion, eye tracking, Kinect sensors, high-definition audio and video capture equipment, robotic and virtual character platforms, biosensors (EEG) and more. See Equipment for a complete description.
The HUBIC Lab is housed at the Institute for Language and Speech Processing / “Athena” RIC.
The HUBIC Lab was established in the framework of the LangTERRA project, “Enhancing the Research Potential of ILSP/‘Athena’ R.C. in Language Technology in the European Research ERA”. This project has received funding from the European Union’s Seventh Framework Programme for research, technological development and demonstration under grant agreement no. 285924.