As previously announced, we are organizing a Workshop on Multimodal Conversational AI at ACM Multimedia 2021 (original blog post).

In this blog post, we announce the Keynote Speaker: Professor Eugene Agichtein.

Eugene Agichtein is a Winship Associate Professor in the Computer Science department at Emory University. Dr. Agichtein is the founder and leader of the Emory Intelligent Information Access Laboratory (IR Lab). Focused on information retrieval and text and data mining, the lab develops techniques for mining online user behaviour and interactions in web search and online social networks, for large-scale content analysis and information extraction, and for applications of these techniques to medical informatics. Since January 2019, he has also been a part-time “Amazon Scholar” (Principal Scientist) at Amazon Alexa.

Eugene’s research interests span web search, information retrieval, and conversational search, and, more broadly, text and data mining, social media analysis, and human-computer interaction.

We are delighted to have Professor Agichtein giving a talk at MuCAI. Stay tuned for more updates.

Without further ado, here is the call for papers.

2nd International Workshop on Multimodal Conversational AI @ACM Multimedia 2021, October 20-24, 2021, Chengdu, China

MuCAI 2021 Website

Deadline: August 10

The ACM Multimedia Workshop on Multimodal Conversational AI aims to bring together researchers and practitioners in the area of multimodal conversational AI.

Conversational systems have recently seen a significant rise in demand, driven by commercial applications such as Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, and Google Assistant. Research on multimodal chatbots, where users and the conversational agent communicate through both natural language and visual data, remains a largely underexplored area. Conversational agents are becoming a commodity as a number of companies push for this technology, and their wide use exposes the many challenges in achieving more natural, human-like, and engaging conversational agents. The research community is actively addressing several of these challenges: How are visual and text data related in user utterances? How should user intent be interpreted? How should multimodal dialog state be encoded? What are the ethical and legal aspects of conversational AI?

The Multimodal Conversational AI workshop will be a forum where researchers and practitioners share their experiences and brainstorm about successes and failures in the field. It will also promote collaboration to strengthen the conversational AI community at ACM Multimedia.

Topics of Interest

  • Visual conversations/dialogs
  • Deep learning for multimodal conversational agents
  • Preference elicitation in conversational agents
  • Conversation state tracking models and online learning
  • Recommendations in conversational systems
  • Multimodal user intent understanding
  • Opinion recommendation in conversational agents
  • Supply/demand in conversational agents for e-commerce
  • Reinforcement learning in conversational agents
  • Resources and datasets
  • Design and evaluation of conversational agents
  • User-agent legal and ethical issues in conversational systems
  • User-agent experience design
  • Conversational systems applications, including, but not limited to,
    e-commerce, social good, music, web search, and healthcare

Paper Submission Guidelines

Papers may be up to 6 pages in length, plus additional pages for references. Papers will be refereed through a double-blind peer-review process, and the proceedings will be distributed to all delegates at the conference. All submissions must be written in English and formatted according to the ACM templates and guidelines. All papers should be submitted electronically through the conference submission system.

Timeline

  • Submission: August 10
  • Notification: August 26
  • Camera-ready copy: September 2
  • Workshop: October 20

Organizers

  • Joao Magalhaes, Universidade Nova de Lisboa, Portugal
  • Alexander Hauptmann, Carnegie Mellon University, Language Technologies
    Institute
  • Ricardo G. Sousa, FARFETCH, Portugal
  • Carlos Santiago, ISR/IST, Portugal