Unification of parts to form a system with cloud features
Vision
The Continuum System is designed to be provided as-a-service [4]
Continuum of Computing
Key research questions according to [1]
Portability: how do we program applications to run in such a dynamic environment?
Reactive programming: how do we program services to respond to changes in application behaviour or data variability? Rust streams are one form of reactive programming (see the sketch after this list).
Service discovery: how do we decide which services to compose? Identify the services, data sources, and computational resources that are relevant to a request.
How do we automate the process of continuous composition: N/A
How do we schedule and deploy workflows on top of the resulting infrastructure: N/A
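Below is a minimal sketch of the reactive, stream-based style mentioned above, using the `futures` crate; the sensor readings and the alert threshold are made up for illustration.

```rust
// Reactive processing of an asynchronous stream of sensor readings:
// the pipeline reacts to each item as it arrives instead of polling for data.
use futures::{executor::block_on, future, stream, StreamExt};

fn main() {
    // Stand-in for an asynchronous source of temperature readings.
    let readings = stream::iter(vec![18.5_f64, 21.0, 35.2, 19.9]);

    let alerts = readings
        .filter(|&t| future::ready(t > 30.0)) // react only to out-of-range values
        .map(|t| format!("ALERT: temperature {t} above threshold"))
        .collect::<Vec<_>>();

    for alert in block_on(alerts) {
        println!("{alert}");
    }
}
```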
Automated software requirements refinement [2]
The resource capabilities graph for the sensor set, edge devices, actuators, and cloud or HPC resources would include annotations that specify performance such as bandwidth and connectivity, storage capacity, and computation speed or domain-specific constraints (e.g., geographic location, power limitations, or maximum usage frequency) [3]
Very low overhead, distributed behavioural monitoring tools, a behavioural repository, and data sharing via CoAP.
Dynamically manage and provision services end-to-end [2]: it is still not possible to provide complex delivery guarantees from the service originator up to the customer. The reasons for this are both technical (i.e., lack of automated and dynamic SLA management and negotiation) and business-oriented (i.e., lack of incentive for cooperation).
Mobility support: users must be able to roam between networks without experiencing a major impact on service quality during and after handover.
We intend to program the continuum so that new algorithms and deep learning models can be pushed to appropriate locations (i.e., edge, fog, cloud, and/or HPC computing resources) using a simple lambda (function as a service) abstraction [3].
This would work as Function as a Service (FaaS). FaaS spins up a server only when a function is invoked, executes the expected operations, and then terminates. The major advantages of this model are increased scalability, independence of applications, and lower costs [5].
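The following is a minimal sketch of such a lambda abstraction in Rust; the `EdgeFunction` trait and the anomaly-detection example are illustrative names, not an existing API.

```rust
// A unit of computation is a stateless function over serialisable data, so an
// orchestrator is free to place it on an edge node, a fog node, or in the cloud.
pub trait EdgeFunction {
    fn invoke(&self, input: &[u8]) -> Vec<u8>;
}

/// Example function: a stub anomaly check over a raw sensor frame.
struct InferTemperatureAnomaly;

impl EdgeFunction for InferTemperatureAnomaly {
    fn invoke(&self, input: &[u8]) -> Vec<u8> {
        let anomalous = input.iter().any(|&b| b > 200); // placeholder "model"
        vec![anomalous as u8]
    }
}

fn main() {
    // The platform would trigger the function on demand and tear it down afterwards.
    let f = InferTemperatureAnomaly;
    let result = f.invoke(&[10, 42, 250]);
    println!("anomaly detected: {}", result[0] == 1);
}
```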
Five significant challenges for IoT-Cloud [5]:
Interoperability: applications on a platform should be able to amalgamate services and infrastructure from another Cloud-IoT platform. Cloud services such as storage are not easily interoperable, since they require specific client libraries and protocols such as Protobuf instead of exposing REST APIs.
What about interoperability of edge devices?
Security and Privacy
Portability: efficient migration of each application and service has to be supported from platform to platform, following users' traces and paths in the network. Portability also matters because of the uncertainty in available resources and application requirements, such as inference on edge devices.
Reliability—establishing real-time communication between objects and applications with high connectivity and accessibility
Virtualization—the potential to provision resources and provide access to heterogeneous resources and hardware such as GPUs, FPGAs, etc.
Cloud-to-Edge orchestration is a crucial feature of edge computing to speed the delivery of services, simplify optimisation, and reduce costs [5].
Challenges and opportunities according to [7]:
Programmability: the vision of a computing stream, defined as a series of functions applied to data along the data propagation path. In a computing stream a function can be reallocated, and the data and state should be reallocated along with it. This works well for ML inference models, but not for control-loop applications.
Naming: the naming scheme in edge computing is very important for programming, addressing, thing identification, and data communication. CoAP allows identifying resources using URIs (see the sketch below).
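A minimal sketch of URI-based resource naming over CoAP, assuming the `coap-lite` crate; the resource path is illustrative and the exact API may differ between crate versions.

```rust
// Build a CoAP GET request for a resource identified by a URI path,
// e.g. coap://gateway.local/sensors/temp/1, and encode it to the wire format.
use coap_lite::{CoapRequest, RequestType};
use std::net::SocketAddr;

fn main() {
    let mut request: CoapRequest<SocketAddr> = CoapRequest::new();
    request.set_method(RequestType::Get);
    request.set_path("/sensors/temp/1");

    let bytes = request.message.to_bytes().expect("encoding failed");
    println!("CoAP GET /sensors/temp/1 -> {} bytes on the wire", bytes.len());
}
```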
[10] have designed an edge computing platform based on a serverless architecture, which is able to provide flexible computation offloading to nearby clients to speed up computation-intensive and delay-sensitive applications. A serverless architecture naturally solves two important problems for edge computing:
the serverless programming model greatly reduces the burden on users and developers in developing, deploying, and managing edge applications
the functions can flexibly run on either the edge or the cloud, which lowers the barrier to edge-cloud interoperability and federation.
The architecture
How the architecture maps to Kubernetes
Orchestration along the Cloud-to-Edge continuum adds another layer of complexity and challenges. In the cloud-to-thing era, applications, as well as storage, are geo-distributed. K8s needs to take the above edge orchestration needs and requirements into account.
Rust
Rust is a good language for resource-constrained applications. The runtime can be minimal, and crates ship with configurable features to decrease dependency size.
The Rust compiler can be over-zealous about static mutable variables: they are unsafe in a concurrent OS, but the data races they allow cannot happen in single-threaded, priority-based embedded runtimes. This often results in hard-to-solve compiler errors, because no_std code frequently requires global static mutable state (see the sketch below).
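A small host-runnable illustration of that friction: a plain `static mut` forces `unsafe` at every access, while an atomic from `core::sync::atomic` (available in both `std` and `no_std`) needs none; the tick counter is a made-up example.

```rust
// A bare `static mut` requires `unsafe` even in single-threaded firmware,
// because the compiler cannot prove exclusive access.
static mut TICKS: u32 = 0;

fn on_timer_interrupt() {
    unsafe { TICKS += 1 };
}

// A common no_std-friendly alternative is an atomic, which needs no `unsafe`.
use core::sync::atomic::{AtomicU32, Ordering};

static SAFE_TICKS: AtomicU32 = AtomicU32::new(0);

fn on_timer_interrupt_atomic() {
    SAFE_TICKS.fetch_add(1, Ordering::Relaxed);
}

fn main() {
    on_timer_interrupt();
    on_timer_interrupt_atomic();
    println!("safe ticks = {}", SAFE_TICKS.load(Ordering::Relaxed));
}
```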
Ferrous Systems has announced a joint effort to improve Rust for critical systems, called Ferrocene.
Rust doesn't yet offer tools to help with timing analysis in RTIC. Rust promotes coding styles that facilitate optimization but not (worst-case) execution-time analysis.
Asynchronous programming
A microservice has a lightweight stack (which allows its software dependencies to be always fully satisfied) and can be deployed, scaled, and tested independently (which facilitates software evolution). In fact, at the present state of the art, these attractive traits can only be achieved with containerization. WASM improves software portability across CPU architectures.
Energy efficiency with Rust (?)
Absence of an embedded OS like Contiki
RTIC
WASM
Instead of dreaming of a single solution for all needs (in other words, an undesirable monopoly), it is more opportune to devise technical solutions to ease source- or object-level mobility across them [4]. Several languages (e.g., Groovy, Scala, Clojure, Kotlin) run on the JVM, but there is only one JVM. This shows that the JVM is a very convenient base to build upon, achieving robustness, interoperability, and portability at the same time.
Wasm needs a better memory management story
The evolution of virtualisation has moved away from virtual machines towards more lightweight solutions such as containers. This is specifically relevant for application packaging at a software platform and application level. [5]
Since edge-site resources are considered to be limited due to constraints in physical hosting space [5], WASM modules are even better suited than containers. Edge applications may not need all the OS features provided by containers; they might require just a small I/O API (see the sketch below).
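As a small illustration, an ordinary Rust program needs nothing beyond the WASI interface to run as a module on an edge node; this sketch assumes the `wasm32-wasip1` target is installed (e.g. `rustup target add wasm32-wasip1`) and a WASI runtime such as wasmtime is available on the device.

```rust
// Compiled natively this is an ordinary binary; built with
// `cargo build --target wasm32-wasip1 --release` the same source becomes a small
// WASI module whose only host requirement is basic I/O.
fn main() {
    let reading = 21.5_f64; // stand-in for a value read from a local sensor
    println!("edge module reporting: temperature = {reading}");
}
```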
Dynamic Software Updating (DSU) techniques aim at upgrading or modifying computer programs while they are running without the need for a shutdown and restart [6].
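In a WASM-based deployment, a rough approximation of DSU is to reload a newer version of a module into the running host process; below is a hedged sketch using the `wasmtime` embedding API, where the exported function name `step` and the file names are made up.

```rust
use wasmtime::{Engine, Instance, Module, Store};

// Load a WASM module from disk and call its exported `step` function.
// Calling this again with a newer file swaps in updated logic without
// restarting the host process.
fn load_and_step(engine: &Engine, path: &str, input: i32) -> anyhow::Result<i32> {
    let module = Module::from_file(engine, path)?;
    let mut store = Store::new(engine, ());
    let instance = Instance::new(&mut store, &module, &[])?;
    let step = instance.get_typed_func::<i32, i32>(&mut store, "step")?;
    Ok(step.call(&mut store, input)?)
}

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();
    println!("v1 -> {}", load_and_step(&engine, "app_v1.wasm", 7)?);
    // Later, after a new version has been pushed to the device:
    println!("v2 -> {}", load_and_step(&engine, "app_v2.wasm", 7)?);
    Ok(())
}
```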
OCI WASM images compared to Docker images
A more generic solution assumes that the device runs an interpreter for a high-level language (e.g., Lua, Python), so that reconfiguration is done by replacing the application code. When the device is equipped with an application processor running a high-level operating system (e.g., Linux), reconfiguration may be done by replacing whole software packages or even installing new system images. In contrast, when an embedded processor is used, reconfiguration may be done by replacing the flash image via an over-the-air (OTA) update. [11]
Alpine Linux
Alpine Linux is cool, but it often requires extra work to make binaries run because it lacks the GNU dynamic linker (it ships musl libc instead of glibc). Debian slim avoids this but weighs more (55 MB vs 5 MB).
K8s/K3s
Developer friendliness: the system ultimately provides hardware interaction and basic services for upper-level applications. How its APIs, program deployment module, resource allocation and revocation, and so on are designed is a key factor in whether the system is widely adopted. [8]
CPU and memory usage #28
Akri
EdgeX tries to unify the way IoT objects on the south side are manipulated behind a common API, so that those objects can be handled in the same way by applications on the north side. EdgeX provides an SDK for developers to create device services, so that any combination of device interfaces and protocols can be supported by programming. [8]
REST
The role of REST in the Continuum of Computing.
Continuum Systems are designed to inter-operate at the highest level of the internet protocol stack, which allows them to realize value-added functions via natural distribution (and possibly even decentralization) by functionally aggregating components regardless of their physical location – without the limitations in the addressing capabilities of lower-level protocols – and of the technology stack in which they reside [4].
The primary intent of REST is to “transfer, access, and manipulate textual data representations in a stateless manner”
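A minimal sketch of that stateless interaction style from Rust, assuming the `reqwest` crate with its blocking feature enabled; the gateway URL is made up. Each request carries everything the server needs, so any replica along the continuum can answer it.

```rust
// Fetch a representation of a resource over plain HTTP/REST.
// No session state is kept between requests.
fn main() -> Result<(), reqwest::Error> {
    let body = reqwest::blocking::get("http://edge-gateway.local/api/v1/sensors/temp/1")?
        .text()?;
    println!("representation: {body}");
    Ok(())
}
```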
EdgeX Foundry provides a common API to manage the devices, and this brings great convenience to deploying and monitoring edge applications at large scale. [8]
The problem with machine-to-machine communications is the lack of compatibility [9].
CoAP
References
[1] Computing in the Continuum: Combining Pervasive Devices and Services to Support Data-driven Applications
[2] The Fluid Internet: Service-Centric Management of a Virtualized Future Internet
[3] Harnessing the Computing Continuum for Programming Our World
[4] HIPEAC
[5] The Cloud-to-Thing Continuum
[6] Dynamic Software Updates to Enhance Security and Privacy in High Availability Energy Management Applications in Smart Cities
[7] Edge Computing: Vision and Challenges
[8] A Survey on Edge Computing Systems and Tools
[9] A Survey on the Edge Computing for the Internet of Things
[10] LAVEA: Latency-aware Video Analytics on Edge Computing Platform
[11] Embedded systems in the application of fog computing – levee monitoring use case
Random thoughts.
Key points of the thesis
Integrated virtualization approach encompassing computational, storage, and networking resources. WASM allows virtualizing computational resources.