The “Compute Continuum” paradigm is transforming how we manage the heterogeneity and dynamism of widely distributed computing resources. By seamlessly integrating resources across the edge, fog, and cloud, this paradigm enhances data locality, performance, availability, adaptability, energy efficiency, and other non-functional properties. This is made possible by overcoming resource fragmentation and segregation into tiers, allowing applications to be executed and relocated along a continuum of resources spanning from the edge to the cloud. Besides well-established vertical and horizontal scaling patterns, the Compute Continuum also introduces fine-grained adaptation actions tailored to specific infrastructure components (e.g., optimizing energy consumption or leveraging specialized hardware such as GPUs, FPGAs, and TPUs). These capabilities unlock significant benefits, including support for latency-sensitive applications, reduced network bandwidth consumption, improved privacy protection, and the development of novel services across domains such as smart cities, healthcare, safety, and mobility. All of this should be achievable by application developers without their having to worry about how and where the developed components will be executed. To fully harness the potential of the Compute Continuum, proactive, autonomous, and infrastructure-aware management is essential. This calls for novel interdisciplinary approaches that exploit optimization theory, control theory, machine learning, and artificial intelligence methods.
Important dates
Submission deadline: May 5, 2025
Notification of acceptance: June 23, 2025
Camera-ready deadline: July 7, 2025
Workshop dates: August 25-26, 2025