The “Compute Continuum” paradigm is transforming how we manage the heterogeneity and dynamism of widespread computing resources. By seamlessly integrating resources across the edge, fog, and cloud, this paradigm enhances data locality, performance, availability, adaptability, energy efficiency, and other non-functional properties. This is made possible by overcoming resource fragmentation and segregation in tiers, allowing applications to be seamlessly executed and relocated along a continuum of resources spanning from the edge to the cloud. Besides consolidated vertical and horizontal scaling patterns, the Compute Continuum also introduces fine-grained adaptation actions tailored to specific infrastructure components (e.g., optimizing energy consumption or leveraging specialized hardware such as GPUs, FPGAs, and TPUs). These capabilities unlock significant benefits, including support for latency-sensitive applications, reduced network bandwidth consumption, improved privacy protection, and the development of novel services across domains such as smart cities, healthcare, safety, and mobility. All of this should be achievable by application developers without their having to worry about how and where the developed components will be executed. To fully harness the potential of the Compute Continuum, proactive, autonomous, and infrastructure-aware management is essential. This calls for novel interdisciplinary approaches that exploit optimization theory, control theory, machine learning, and artificial intelligence methods.
In this landscape, the workshop aims to attract contributions in the area of distributed systems, with particular emphasis on support for geographically distributed platforms and on autonomic features that cope with variable workloads and environmental events while making the best use of heterogeneous and distributed infrastructures.
Topics of interest include, but are not limited to:
- Scalable architectures and systems for the Compute Continuum
- Orchestration, deployment, and management of resources and applications in the Compute Continuum
- Programming models, languages, and patterns for the Compute Continuum
- Compute Continuum performance modeling and analysis
- Function-as-a-Service and Backend-as-a-Service in the Compute Continuum
- Energy-efficient and carbon-aware solutions for a sustainable Compute Continuum
- Lightweight virtualization for the Compute Continuum
- AI-driven optimization and AI-related workloads in the Compute Continuum (e.g., federated, distributed, decentralized learning)
- Scalable applications for the Compute Continuum (e.g., IoT, microservices, serverless)
- Data processing and analytics in the Compute Continuum
- Digital Twins and industry applications in the Compute Continuum
- Prototypes and real-life experiments involving the Compute Continuum
- Heterogeneous hardware acceleration and domain-specific architectures
- Workflows in the Compute Continuum
- Convergence and integration of HPC and Continuum platforms
- Resilience and fault-tolerant strategies for the Compute Continuum
- Benchmarks, reproducibility frameworks, and real-world experimental platforms
Important Dates
Submission deadline: May 5, 2025
Notification of acceptance: June 23, 2025
Camera-ready deadline: July 7, 2025
Workshop dates: August 25-26, 2025
Submission Instructions
Papers should be formatted according to the Springer LNCS guidelines and should be between 10 and 12 pages long, including figures, tables, and references. Submitted papers must present novel contributions and must not have been submitted for publication elsewhere.
All papers must be submitted using the EasyChair submission system and selecting the “International Workshop on Scalable Compute Continuum” track.