An important aspect of the new generation of intelligent systems is the ability to employ more than one output modality when interacting with the user. A quick and successful interaction is expected when, for instance, the system’s output is presented via multimedia/hypermedia that merges text and graphics, or by a conversational agent that combines speech and gesture. Such multimodal systems need sophisticated specifications to combine the different output modalities so that each piece of information is presented in the most appropriate manner (i.e., the system should select the most suitable modalities and modality combinations to convey information to the user).
The MOG 2010 workshop aims to bring work on multimodal output generation from different disciplines together to establish common ground and discuss possible future collaborations. Besides contributions from research fields such as multimodal language generation and embodied conversational agents, we would like to bring in an additional angle by investigating how research on multimodal output generation can benefit from a non-engineering perspective on multimodality. For example, how can research done in psychology and cognitive sciences, related to understanding how humans perceive and process multimodal information, be properly formalized for the purposes of intelligent multimodal output generation? And to what extent is it possible to formalize existing theories about how meaning is made in multimodal communication and use that for generating more meaningful multimodal output in the context of intelligent systems?
Thus, we invite technically oriented contributions as well as work in the area of human communication, such as cognitive models of multimodal communication and interaction. In this way we hope to combine an AI/engineering perspective with input from other disciplines such as linguistics and psychology, providing a forum where international researchers from different disciplinary backgrounds can exchange ideas on multimodal output generation and engage in scientific research collaboration.
MOG 2010 is a follow-up to MOG 2008, the workshop on Multimodal Output Generation held on April 3-4, 2008, at the University of Aberdeen, and MOG 2007, the workshop on Multimodal Output Generation held on January 25-26, 2007, at the University of Aberdeen.
On the 5th of July a MOG-supported meeting will be held at Trinity College Dublin. This meeting targets opportunities for collaboration on multimodal interaction. If you are interested in attending this
LATEST NEWS
Jul 09, 2010: The proceedings of MOG 2010 are now available online
Jun 09, 2010: The detailed program is now available
May 06, 2010: Registration fee for MOG is 25 EUR
May 06, 2010: MOG supports a meeting on multimodal collaboration on the 5th of July (see above)
May 01, 2010: Gavin Doherty will be an invited speaker at MOG 2010
Mar 08, 2010: Extended submission deadline: 21st March, 2010
Mar 04, 2010: Paul Piwek will be one of the invited speakers at MOG 2010
Feb 18, 2010: Third call for papers