SNPD 2026-II Workshop/Special Sessions

Workshop 1: The 7th International Workshop on Smart Media Theory and Application (SMTA 2026)

In conjunction with IEEE/ACIS SNPD 2026-II

https://acisinternational.org/conferences/snpd-2026-ii/

Dali, Yunnan, China, November 6-8, 2026

Motivation

Smart media, as an application of artificial intelligence, has emerged as a prominent research topic in recent years. By leveraging context-aware computing, smart media analyzes users’ environments, behaviors, and preferences to deliver content, products, and services that align with their needs, enhancing user experiences. This field integrates cutting-edge internet technologies with media applications, encompassing foundational theories, applied methodologies, critical technologies, and multi-level implementations. SMTA 2026 aims to foster the exchange of the latest advances in smart media technologies, systems, and applications from both research and development perspectives.

Topics

This workshop provides an interdisciplinary platform for academics and industry professionals to present and discuss the latest developments in the field. SMTA 2026 will feature high-quality oral and poster presentations to engage a diverse and extensive audience. Topics of interest include, but are not limited to:

  • Advanced artificial intelligence research
  • Innovations in internet technology
  • Big data analytics
  • Enhanced interconnectivity of the Internet of Things (IoT)
  • Cloud computing and edge computing in mobile networks
  • Extended applications of blockchain technology
  • Multi-dimensional modeling of digital humans
  • Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR)
  • VoIP and AoIP technologies
  • Intelligent recommendation systems
  • Generative artificial intelligence
  • Cloud broadcasting technology

Important Dates

Full Paper Submission: April 5, 2026

Acceptance Notification: April 19, 2026

Camera-Ready Papers & Registration Deadline: April 30, 2026

All submissions must be in English and follow the IEEE two-column template, including all tables, figures, and references (Download Template). Papers must be submitted in WORD or PDF format via email to rebeccazhang@cuc.edu.cn with the subject line “SMTA 2026 Submission”.

Publications

Submitted papers must be original, unpublished, and not under consideration for publication elsewhere. Accepted papers will be included in the SNPD Proceedings (subject to registration fee payment) and indexed by EI. Top-ranked papers, based on reviewers’ recommendations, will be considered for publication in EI- or SCI-indexed journals.

Workshop Co-Chairs

Weiguo Lin

School of Computer and Cyber Sciences, Communication University of China, Beijing, 100024, China

Email: linwei@cuc.edu.cn

Jiefeng Liu

School of Computer and Cyber Sciences, Communication University of China, Beijing, 100024, China

Email: jfliu@cuc.edu.cn

Jing Zhou

School of Computer and Cyber Sciences, Communication University of China, Beijing, 100024, China

Email: zhoujing@cuc.edu.cn

Xin Zhang

School of Computer and Cyber Sciences, Communication University of China, Beijing, 100024, China

Email: rebeccazhang@cuc.edu.cn

 

Special Session 1: The 1st Special Session on Intelligent Audio-Visual Interaction in Digital Scenarios (IAVIDS 2026)

In conjunction with IEEE/ACIS SNPD 2026-II

https://acisinternational.org/conferences/snpd-2026-ii/

Dali, Yunnan, China, November 6-8, 2026

Motivation

The proliferation of intelligent digital technologies—such as generative artificial intelligence (GenAI), edge computing, and extended reality (XR)—has reshaped media ecosystems, driving the evolution of digital scenarios into immersive, interactive, and context-aware environments. Within these media-centric ecosystems, Intelligent Audio-Visual Interaction (IAVI) serves as the core interface between users and digital media, directly determining the depth of user engagement, the efficiency of media information dissemination, and the innovation potential of media applications.

This interdisciplinary field integrates theoretical frameworks from media computing, signal processing, cognitive science, and human-computer interaction with cutting-edge technologies, addressing the unique demands of media environments. However, critical unresolved challenges persist: semantic alignment of heterogeneous audio-visual media data in dynamic interaction scenarios, low-latency and high-fidelity audio-visual interaction optimization for resource-constrained media terminals, and human-centric adaptation of audio-visual interaction systems to diverse media consumption contexts.

This Special Session aims to establish a high-caliber academic forum dedicated to media-centric intelligent audio-visual interaction research, bringing together researchers, engineers, scholars, and industry practitioners worldwide. We seek to facilitate the exchange of frontier findings, theoretical innovations, and empirical applications focused on audio-visual interaction in media environments, foster cross-disciplinary integration between computing and media science, and advance the theoretical depth and practical maturity of intelligent audio-visual interaction technology in shaping the future of digital media interaction.

Topics

This Special Session solicits original, unpublished submissions—including research papers, technical manuscripts, and comprehensive case studies—that focus on the theoretical, technical, and applied advancements of intelligent audio-visual interaction in media-centric scenarios. Topics of interest include, but are not limited to:

 

  1. Theoretical Foundations & Methodological Innovations for Media-Centric Intelligent Audio-Visual Interaction

  • Mathematical modeling and formal frameworks for audio-visual fusion in interactive media contexts
  • Semantic alignment and cross-modal knowledge graphs for media content interaction
  • Cognitive science-informed theoretical paradigms for user-centric media audio-visual interaction design
  • Data-efficient learning methodologies optimized for media interaction scenario constraints (e.g., dynamic content, variable user contexts)
  • Ontologies and metadata standards for audio-visual media interaction

 

  2. Core Technologies for Media-Oriented Audio-Visual Interaction Advancement

  • Advanced audio signal processing for media interaction (e.g., real-time adaptive spatial audio, emotion-aware speech processing, media-specific noise suppression)
  • Intelligent video analysis for interactive media (e.g., gaze-driven content adaptation, gesture-based media control, scene-aware visual feedback)
  • GenAI-driven adaptive audio-visual content generation and interaction for personalized media experiences
  • Edge-cloud collaborative computing architectures for low-latency media audio-visual interaction systems
  • Synchronization and transmission optimization of audio-visual media streams in interactive scenarios

 

  3. Media Environment Application Domains & Empirical Studies

  • Audio-visual interaction systems in XR-based immersive media (e.g., interactive virtual concerts, augmented reality news delivery, mixed reality media storytelling)
  • Intelligent audio-visual interaction in smart media platforms (e.g., interactive broadcasting, personalized content recommendation engines, cloud-based media production collaboration)
  • Audio-visual interaction for social media (e.g., real-time audio-visual enhancement for user-generated content, interactive media communication tools)
  • Educational media applications (e.g., immersive audio-visual teaching platforms, interactive media-based tutoring systems)
  • Digital cultural media (e.g., interactive audio-visual exhibitions, virtual heritage site media interaction systems)

Important Dates

Full Paper Submission: July 31, 2026

Acceptance Notification: August 18, 2026

Camera-Ready Papers & Registration Deadline: August 31, 2026

Submission Guidelines

Originality & Eligibility: All submissions must represent original work that has not been previously published and is not currently under review for publication in any journal, conference, or preprint platform. Authors must disclose any overlapping work (e.g., preliminary abstracts or technical reports) at the time of submission.

Format & Length: Manuscripts must be written in English and adhere strictly to the IEEE two-column conference template, including all tables, figures, and references (Download Template).

Full Papers: 6-8 pages (excluding references) – intended for in-depth research presentations, detailed case studies, or comprehensive technical analyses.

Submission Channel: Submissions must be uploaded in WORD (.docx) or PDF (.pdf) format via email to ymtaudio@cuc.edu.cn with the subject line: “IAVIDS 2026 Submission – [Paper Title] – [Author’s Affiliation]”.

Double-Blind Review: All submissions will undergo a rigorous double-blind peer review by at least two independent experts in the field. Authors must ensure all identifying information (names, affiliations, email addresses, and self-citations that reveal authorship) is removed from the manuscript to maintain anonymity. Review criteria include academic originality, theoretical significance, technical rigor, clarity of presentation, and relevance to media-centric intelligent audio-visual interaction.

Generative AI Usage: Authors must adhere to the SNPD2026-II conference policy on the use of generative AI tools in submissions. Any use of generative AI for content creation must be explicitly disclosed in the manuscript, with clear attribution to the tool and confirmation that authors take full responsibility for the accuracy, originality, and integrity of the final content.

Publications

Submitted papers must be original, unpublished, and not under consideration for publication elsewhere. Accepted papers will be included in the SNPD Proceedings (subject to registration fee payment) and indexed by EI. Exceptional papers (top 10-15%) will be recommended for fast-track review in EI/SCI-indexed journals (to be announced), with additional review and revision requirements as per journal guidelines. Accepted abstracts will be included in the conference program and a post-conference Book of Abstracts, enhancing the visibility of contributors’ work.

Special Session Chairs

Agnes Miaotong Yuan

School of Music and Recording Arts, Communication University of China, Beijing, 100024, China

Email: ymtaudio@cuc.edu.cn

Christopher Sauder Engeler

Multimedia Technologies, ETH Zürich, 8092 Zürich, Switzerland

Email: christopher.sauder@id.ethz.ch

Yuan Zhang

Department of Music Artificial Intelligence and Music Information Technology, Central Conservatory of Music, 100031, Beijing, China

Email: dazhangyu40@hotmail.com