Despite their status as a preeminent form of social interaction, mobile phone conversations have received relatively little investigation in terms of social behavior. This gap is all the more striking given two important developments. On one hand, Mobile HCI increasingly deals with advanced mobile phones containing a large number of sensors (e.g., GPS, accelerometers, magnetometers, capacitive touch) and with sufficient processing power to capture the behavior and context of users (e.g., position, movement, hand grip, proximity of social network members, gait type, auditory context) with unprecedented richness. On the other hand, the computing community, in particular Social Signal Processing (SSP), is making significant efforts towards the automatic understanding (via analysis of verbal and nonverbal behavior) of social interactions captured with multiple sensors.
This workshop bridges the gap outlined above by gathering SSP and Mobile HCI researchers. This cross-pollination is expected to extend the investigation area of both domains and to highlight a number of research questions that not only promise significant novelty in both SSP and Mobile HCI, but also require knowledge from both domains to be investigated effectively:
- Is it possible to integrate the input of mobile phone sensors in current approaches for automatic analysis of social phenomena in conversations?
- Does context influence the communication behavior of people talking on the phone?
- Does the transmission of nonverbal behavioral cues, so important in face-to-face communication, improve phone conversation experience?
- Does a better understanding of communication behavior influence the design of mobile phones?
- Can we evaluate how the use of a mobile phone affects key social interaction variables such as judgements of ‘trust’ and ‘competence’?
- Can we create metrics which help us evaluate the effect on social interaction of augmenting the voice channel with other feedback channels?
- Can we create non-vocal, but embodied interaction techniques which are appropriate for mobile use?
- What would be the ethical issues related to the everyday use of in-hand, automated social signal analysis?
9.30 – 9.50 Cross-Pollination between Social Signal Processing and Mobile HCI
A. Vinciarelli and R. Murray-Smith (University of Glasgow)
This presentation identifies the most promising possibilities for bridging the gap between Mobile HCI and Social Signal Processing outlined in the workshop description above.
9.50 – 10.30 Keynote 1: Rich Contextual Data in Development of Social Mobile Experiences
Juha Laurila (Nokia Research Centre, Lausanne)
Current mobile devices offer rich possibilities to collect contextual data capturing elements such as social interactions, location, application usage, and media creation/consumption. This talk gives an overview of related research efforts, focusing especially on the Lausanne data collection campaign, which has been running in the Lake Geneva region since September 2009. The campaign's motivation, approach, arrangements, and outcomes are introduced. Such multi-dimensional data, sensed non-intrusively from existing heterogeneous social networks over a long period of time, offers almost endless possibilities for researchers from various fields. The talk introduces some possible research directions and shows examples of recent findings.
10.30 – 11.10 Keynote 2: A view on Human-Human Communication
Jens Allwood (University of Goteborg)
11.10 – 11.30 Coffee Break
11.30 – 11.50 Feelabuzz – Direct Tactile Communication with Mobile Phones
R. Tünnermann, C. Mertes, T. Hermann (University of Bielefeld)
Touch can create a feeling of intimacy and connectedness even when transmitted over a distance. In this work we propose Feelabuzz, a system that transmits the movements of one mobile phone to the vibration actuator of another. This is done in a direct, non-abstract way, without pattern recognition techniques, so as not to destroy the feel for the other person. The same channel thus enables direct communication, i.e. what the other person explicitly signals, as well as implicit context communication, i.e. the complex movements that any activity consists of, or even those produced by the environment. We explore the potential of this approach, present the mapping we use, and discuss possible development beyond the existing prototype to enable a large-scale user study.
11.50 – 12.10 Negotiation Models for Mobile Tactile Interaction
D. Trendafilov, S. Lemmelä (Nokia Research Centre, Tampere), R. Murray-Smith (University of Glasgow)
With the recent introduction of mass-market mobile phones featuring touch-sensitive displays and location, bearing, and motion sensing, we are on the cusp of significant progress in highly interactive mobile social networking. We propose that such systems must work across various contexts and levels of uncertainty, and must utilize different types of human senses. To examine the feasibility of such a system, we describe an experiment with an eyes-free implementation that allows users to engage in continuous interaction with each other, using capacitive touch input and vibro-tactile feedback to perform a goal-oriented collaborative task of target acquisition. Although the eyes-free interaction method posed challenges, users were, encouragingly, able to engage in the interaction and perform the task with some success.
12.10 – 12.30 Applications of SSP at Sony Ericsson
Håkon Jonsson (Sony Ericsson)
12.30 – 12.50 Observations and discussion on the mobile/SSP interactions
Scott Jenson (Google Mobile)
12.50 – 13.00 Wrap up and Outlook
A. Vinciarelli (University of Glasgow)
Workshop topics include (but are not limited to):
- Conversational behavior analysis
- Social Location and Context – measurement, analysis and use
- Social Signal Processing in design of mobile interactions
- Social Signal Processing in mobile entertainment and wellbeing
- Databases and Social Signal Processing based content retrieval
- Cognitive modeling, automatic understanding, and synthesis of social phenomena
- Full paper submission: June 15th, 2010
- Notification of Acceptance: June 25th, 2010
- Camera ready paper submission: June 30th, 2010
- Workshop: September 7th, 2010
Workshop articles will be published by Springer in a volume of the LNCS series. Authors are expected to submit papers six to eight pages long in LNCS/LNAI format (templates for Word and LaTeX are available on the Springer website).
Submissions are now closed.
The best paper will be considered for inclusion in a special issue of the International Journal of Mobile HCI.
The workshop takes place in conjunction with Mobile HCI 2010, Lisbon (September 7-10, 2010). The Mobile HCI series provides a forum for academics and practitioners to discuss the challenges and potential solutions for effective interaction with mobile devices and services. It covers the design, evaluation and application of techniques and approaches for all mobile and wearable computing devices and services.
- Alessandro Vinciarelli (University of Glasgow/Idiap Research Institute)
- Rod Murray-Smith (University of Glasgow)
- Hervé Bourlard (Idiap Research Institute/EPFL)
- Marco Cristani (University of Verona, Italy)
- Anind Dey (Carnegie Mellon University, USA)
- Thomas Hermann (University of Bielefeld, Germany)
- Rob Jenkins (University of Glasgow, UK)
- Matt Jones (Swansea University, Wales)
- Juha K. Laurila (Nokia Research Center, Lausanne)
- Dirk Heylen (University of Twente, The Netherlands)
- Eamonn O’Neill (University of Bath, UK)
- Antti Oulasvirta (HIIT, Finland)
- Jean-Marc Odobez (Idiap Research Institute / EPFL)
- Isabella Poggi (Università Roma Tre, Italy)
- Steve Renals (University of Edinburgh, UK)
- Chris Schmandt (MIT, USA)
- Fabio Valente (Idiap Research Institute, Switzerland)