The “Compute Continuum” paradigm promises to manage the heterogeneity and dynamism of widespread computing resources, aiming to simplify the execution of distributed applications while improving data locality, performance, availability, adaptability, energy management, and other non-functional properties. This is made possible by overcoming resource fragmentation and segregation into tiers, enabling applications to be seamlessly executed and relocated along a continuum of resources spanning from the edge to the cloud. Besides well-established vertical and horizontal scaling patterns, this paradigm also offers finer-grained adaptation actions that depend on the specific infrastructure components (e.g., to reduce energy consumption, or to exploit specific hardware such as GPUs and FPGAs). This enables the enhancement of latency-sensitive applications, the reduction of network bandwidth consumption, the improvement of privacy protection, and the development of novel services aimed at improving living, health, safety, and mobility. All of this should be achievable by application developers without their having to worry about how and where the developed components will be executed. Therefore, to unleash the true potential offered by the Compute Continuum, proactive, autonomous, and infrastructure-aware management is desirable, if not mandatory, calling for novel interdisciplinary approaches that combine optimization theory, control theory, machine learning, and artificial intelligence methods.
Important dates
Submission deadline: May 20, 2024 (extended!)
Notification of acceptance: June 20, 2024
Camera-ready deadline: July 1, 2024
Workshop date: August 27, 2024