Computing with Deep Neural Networks in Distributed Framework

Call for Papers

Workshop titled
“Computing with Deep Neural Networks in Distributed Framework”
as a part of the 27th International Conference on Distributed Computing and Networking,
to be held in Nara, Japan, during January 6–9, 2026

Deep Neural Networks (DNNs) exhibit high efficiency and wide applicability across domains such as image classification, image recognition, language translation, language modeling, video captioning, speech recognition, and recommendation systems, among others. Because of this broad applicability, DNNs have grown massively in size to capture more complex scenarios, making the computations required during training considerably more expensive. Moreover, improved low-cost data generation technologies have caused exponential growth of data, which in turn drives larger models with enhanced capabilities. With the emergence of increasingly nonlinear and high-dimensional datasets, there is a growing need to train deeper and more complex DNN architectures effectively. As a result, the DNN training process has become so expensive in terms of time and resources that it exceeds the capacity of a single accelerator. It has been observed that, given sufficient resources, DNNs can achieve dramatic improvements in performance. A trained DNN is typically used for inference, which is far less costly than training. The massive surge in resource requirements for DNN training has led to new devices such as Google's TPU, NVIDIA's Tesla GPUs, and Xilinx Alveo cards, in addition to custom accelerators. Since such high-end servers are often not cost-effective, parallelizing the training process over multiple commodity machines, such as small workstations, PCs, and laptops, has become relevant, necessitating efficient parallel strategies for distributed training on large clusters.

The purpose of this workshop is to provide a leading-edge forum for academics as well as industrial professionals to disseminate the most innovative research and developments in all aspects of distributed and parallel neural computing. The workshop will discuss new and challenging issues in this field that require technological innovation. We welcome submissions that identify challenges; propose new frameworks, architectures, and algorithms; explain in detail how the contributions advance the state of the art in distributed DNN training; and report experimental results in comparison with state-of-the-art systems. The workshop will focus on the distribution and parallelization of neural computing models across accelerators, including (but not limited to) the following aspects:

Guidelines for Authors

  1. Please check the ACM Policy on Authorship (https://www.acm.org/publications/policies/new-acm-policy-on-authorship) and the policy on the use of generative AI in papers (https://www.acm.org/publications/policies/frequently-asked-questions), and ensure that accepted papers comply with them. Please note that authors cannot be added or removed once a paper is accepted.
  2. Submitted manuscripts will undergo peer review following the ACM Peer Review Policy (https://www.acm.org/publications/policies/peer-review). Accepted papers will be published in a companion volume alongside the main conference proceedings, as stated below. At least one author of each accepted paper must register for the workshop and present the paper in person.

New Open Access Publishing Model for ACM

ICDCN 2026 workshop proceedings will be published as a companion volume along with the main conference proceedings. Note that ACM has introduced a new open access publishing model for the International Conference Proceedings Series (ICPS); please check the ACM Open Publication Model (https://www.acm.org/publications/icps/faq). Authors based at institutions that are not yet part of the ACM Open program (https://libraries.acm.org/acmopen/open-participants) and do not qualify for a waiver will be required to pay an article processing charge (APC) to publish their ICPS article in the ACM Digital Library. Please note that this APC is in addition to the conference registration fee and is handled by ACM directly. To determine whether an APC will apply to your article, please follow the detailed guidance here: https://www.acm.org/publications/icps/author-guidance. For any clarifications, please contact icps-info@acm.org.

Timeline

Contact Us

Professor Rajat Kumar De
Indian Statistical Institute, Kolkata
Email: rajat@isical.ac.in / rajatkde@gmail.com
Phone: +91 9433008009, +91 9088015909

Professor Nabendu Chaki
University of Calcutta, Kolkata
Email: nchaki@gmail.com
Phone: +91 9433068073