CBAR 2015, held in conjunction with IEEE FG 2015, May 2015, Ljubljana, Slovenia

1. Workshop Description and Objectives

    Unconsciously, humans evaluate situations based on environmental and social parameters when recognizing emotions in social interactions. Without context, even humans may misunderstand the observed facial, vocal, or body behavior. Contextual information, such as the ongoing task, the identity and natural expressiveness of the individual, and the intra- and inter-personal context, helps us interpret and respond to social interactions. These considerations suggest that attention to context information can deepen our understanding of affect communication and enable reliable real-world affect-related applications.
     
      Building upon the success of previous CBAR workshops, the key aim of this third workshop is to explore how computer vision can address the challenging task of automatic extraction and recognition of context information in real-world applications. Specifically, we wish to exploit advances in computer vision and machine learning for real-time scene analysis and understanding, including tracking and recognition of human actions, gender recognition, age estimation, and object recognition and tracking, in the service of real-time context-based visual, vocal, or audiovisual affect recognition.
    
      A further aim of the workshop is to explore the challenges, benefits, and drawbacks of integrating context information in affect production, interpretation, and recognition. We wish to investigate cutting-edge methods and methodologies in computer vision and machine learning that can be applied to (1) detect and interpret context information in social interaction and/or human-machine interaction, and (2) train and validate classifiers for fully automatic multimodal and context-based affect recognition.
    
      The workshop is relevant to FG given the challenging research area of context-based affect recognition and its wide range of applications, such as, but not limited to, intelligent video surveillance, human-computer interaction, intelligent humanoid robots, and clinical diagnosis (e.g., pain and depression assessment). The workshop focuses on making affect recognition more robust and useful in real-world situations (e.g., work, home, school, and health-care environments). We solicit high-quality papers from a variety of fields, such as computer vision, pattern recognition, and behavioral science, that use innovative and promising approaches to extract, interpret, and/or include contextual information in audiovisual affect recognition, and that show how such information can improve existing frameworks for human-centered affect recognition.
    
       For its third year, the workshop aims to bring together scientists working in related areas of machine learning and computer vision, scene analysis, ambient computing, and smart environments to share their expertise and achievements in the emerging field of automatic context-based visual, audio, and multimodal affect analysis and recognition.

 
2. Topics of Interest
   

We invite regular, position, and application papers on, but not limited to, the following topics:
  • Automatic detection and identification of social contexts
  • Machine learning for affective and social behavior modeling
  • Multimodal context-based fusion for affect recognition:
      - Asynchrony between the modalities such as face, head/body, and voice
      - Innate priority among the modalities
      - Temporal variations in the relative importance of the modalities according to the context
      - Cutting-edge context-based fusion tools
  • Applications:
      - Context-aware clinical applications such as depression severity detection, pain monitoring, and autism (e.g., the influence of age, gender, intimate vs. stranger interaction, physician-patient relationship, home vs. hospital environment, etc.)
      - Context-based and affect-aware intelligent tutors (e.g., learning profile, personality, assessments)
      - Affect-based human-robot or human-embodied conversational agent interactions

3. Invited Speakers


Louis-Philippe Morency, Carnegie Mellon University, USA

Title:  Context-based Modeling of Interpersonal Dynamics

Abstract:   Natural conversation is a fluid and highly interactive process in which participants exchange information continuously among themselves. Interpersonal dynamics capture this close relationship between verbal and nonverbal messages during multi-party interactions. The prediction or interpretation of an individual's behavior is often best explained in the context of the concurrent and previous conversational messages from other participants. A simple but powerful example of interpersonal dynamics is backchannel feedback, where nods or para-verbals such as “uh-huh” and “mm-hmm” are produced concurrently by the listener. In this talk, I will present our prior work on modeling dialogue context during face-to-face social interactions and show concrete applications in healthcare, education, business analytics, and social multimedia.

Bio:  Louis-Philippe Morency is an Assistant Professor in the Language Technologies Institute at Carnegie Mellon University, where he leads the Multimodal Communication and Machine Learning Laboratory (MultiComp Lab). He received his Ph.D. and Master's degrees from the MIT Computer Science and Artificial Intelligence Laboratory. In 2008, Dr. Morency was selected as one of "AI's 10 to Watch" by IEEE Intelligent Systems. He has received seven best-paper awards at ACM- and IEEE-sponsored conferences for his work on context-based gesture recognition, multimodal probabilistic fusion, and computational models of human communication dynamics. For the past three years, Dr. Morency has been leading a DARPA-funded multi-institution effort called SimSensei, which was recently named one of the year's top ten most promising digital initiatives by the NetExplo Forum, in partnership with UNESCO.
 

Roddy Cowie, Queen's University Belfast, UK

Title:  The Engines of Emotion: Towards a Shared Understanding of the Work They Do

Abstract:  Computational research is exploring more complex emotion-related phenomena, but lacks models of emotion that accommodate them naturally. A powerful approach assumes that the directly experienced phenomena point to ‘engines of emotion’, which have ongoing functions that we only partially register. Theory has proposed five broad functions: evaluating situations; preparing us to act accordingly, at multiple levels; ensuring that we learn from significant situations; interrupting conscious processes when necessary; and aligning us with other people. Emotional feelings inform conscious awareness of what they are doing, and emotion words split the space of their activity into discrete regions. The natural goal for computation is not to duplicate those forms: it is to describe what the ‘engines’ are doing.

Bio:  Roddy Cowie is Professor Emeritus of Psychology at Queen's University, Belfast. His research has focussed on the way rigorous models can be applied to highly subjective phenomena - in vision, speech perception, music, and for the past two decades, emotion. Much of his work on emotion was through influential research projects, including SEMAINE, SSPnet, Ilhaire, and HUMAINE (which he co-ordinated). The resulting databases are widely used, as are tools associated with them (notably FEELtrace and Gtrace).

 

4. Program

Opening (13:45-14:00)

Keynote 1 (14:00-15:00)
Roddy Cowie
The engines of emotion: towards a shared understanding of the work they do

Session 1 (15:00-16:00)
Chair: Zakia Hammal

Ursula Hess and Shlomo Hareli
The influence of context on emotion recognition in humans

Maria Francesca O'Connor and Laurel D. Riek        
Detecting Social Context: A Method for Social Event Classification using Naturalistic Multimodal Data

Coffee break (16:00-16:15)

Keynote 2 (16:15-17:15)
Louis-Philippe Morency
Context-based Modeling of Human Communication Dynamics

Session 2 (17:15-18:15)
Chair: Merlin Teodosia Suarez

Jonathan Aigrain, Severine Dubuisson, Marcin Detyniecki and Mohamed Chetouani
Person-specific Behavioral Features for Automatic Stress Detection

Hanan Salam and Mohamed Chetouani
A Multi-level Context-based Modeling of Engagement in Human-Robot Interaction

 

5. Submission Policy
    

We call for submissions of high-quality papers. Submitted manuscripts must not be under review at another conference or workshop. Each paper will receive at least two reviews. Acceptance will be based on relevance to the workshop, novelty, and technical quality.
 
The reviewing process for the workshop will be “double-blind”. All submissions should therefore be appropriately anonymized so as not to reveal the authors' names or institutions.

Submissions may be up to eight pages, in accordance with the IEEE FG conference format. Papers longer than six pages will be subject to a page fee (100 USD per page) for the extra pages (two pages maximum).

Workshop papers will be included in the conference proceedings.

We welcome regular, position, and application papers. Papers must be submitted via EasyChair (EasyChairCBAR2015). At least one author of each paper must register for and attend the workshop to present the paper.

6. Tentative Deadlines

Submission Deadline:          Extended to *January 20, 2015*
Notification of Acceptance:   February 10, 2015
Camera Ready:                 February 17, 2015
Workshop:                     May 4, 2015

7. Organizers 


Zakia Hammal (zakia_hammal@yahoo.fr)       

The Robotics Institute, Carnegie Mellon University (http://www.ri.cmu.edu/)

Merlin Teodosia Suarez (merlin.suarez@delasalle.ph)

Center for Empathic Human-Computer Interactions, De La Salle University (http://cehci.dlsu.edu.ph)



8. Confirmed Program Committee (to be completed)
Carlos Busso, UT-Dallas, USA
Nadia Bianchi-Berthouze, University College London, UK
Ginevra Castellano, University of Birmingham, UK
Sidney D'Mello, University of Notre Dame, USA
Dirk Heylen, University of Twente, The Netherlands
James Lester, North Carolina State University, USA
Pamela Pallett, The Ohio State University, USA
Thierry Pun, University of Geneva, Switzerland
Laurel Riek, University of Notre Dame, USA
Peter Robinson, Cambridge University, UK
Albert Ali Salah, Bogazici University, Turkey
Yan Tong, University of South Carolina, USA