First International Workshop on Multimedia Pragmatics

Call for Papers
Co-located with the IEEE First International Conference on Multimedia Information Processing and Retrieval (MIPR'18), Pullman Airport Hotel
Tuesday, 10 April 2018 to Thursday, 12 April 2018
United States
William Grosky
Richard Chbeir
Submission Deadline: 
Saturday, 30 December 2017

Most multimedia objects are spatio-temporal simulacra of the real world. This supports our view that the next grand challenge for our community will be understanding and formally modeling the flow of life around us, across many modalities and scales. As technology advances, these simulacra will become ever more detailed, revealing more to us about the nature of reality.

Currently, IoT is the state-of-the-art organizational approach for constructing complex representations of the flow of life around us. Various, perhaps pervasive, sensors, working collectively, will broadcast representations of real events to us in real time. It will be our task to continuously extract the semantics of these representations and, where needed, react to them by injecting response actions to ensure a desired outcome.
Pragmatics studies context and how it affects meaning, and context is usually culturally, socially, and historically based. For example, pragmatics encompasses the speaker’s intent, body language, and penchant for sarcasm, as well as other signs, usually culturally based, such as the speaker’s type of clothing, all of which can influence a statement’s meaning. Generic signal/sensor-based retrieval should likewise use syntactic, semantic, and pragmatic approaches. If we are to understand and model the flow of life around us, this will be a necessity.

Our community has successfully developed various approaches to decode the syntax and semantics of these artifacts. The development of techniques that use contextual information is in its infancy, however. With the expansion of the data horizon, through the ever-increasing use of metadata, we can certainly put all media on more equal footing.

The NLP community has its own set of approaches in semantics and pragmatics. Natural language is certainly an excellent exemplar of multimedia, and the use of audio and text features has played a part in the development of our field.

However, if we are to develop more unified approaches to modeling the flow of life around us, both communities can certainly benefit by examining in detail what the other has to offer. Many approaches are the same, but many are different. Certainly, research in many areas from the NLP community, such as word2vec, can benefit the multimedia community.

Now is the perfect time to actively promote this cross-fertilization of our ideas to solve some very hard and important problems.

Authors are invited to submit regular papers (6 pages), short papers (4 pages), and demo papers (2 pages) via the workshop website. Submission guidelines can also be found there.

Topics of interest include, but are not limited to:
• Affective computing
• Computational semiotics
• Cross-cultural multimodal recognition techniques
• Distributional semantics
• Event modeling, recognition, and understanding
• Gesture recognition
• Human-machine multimodal interaction
• Integration of multimodal features
• Machine learning for multimodal interaction
• Multimodal analysis of human behavior
• Multimodal dataset development
• Multimodal deception detection
• Multimodal sensor fusion
• Multimodality modeling
• Sentiment analysis
• Structured semantic embeddings
• Techniques for description generation of images/videos/other signal-based modalities

To be included in the IEEE Xplore Library, accepted papers must be registered and presented.

Here are some important dates:
• Submissions due: December 30, 2017
• Acceptance notification: January 24, 2018
• Camera-ready papers and author registration due: February 20, 2018
• Workshop date: April 10, 2018 (tentative)