Cumulocity IoT is a platform that enables rapid connection of a wide range of devices and applications. It allows you to monitor and respond to IoT data in real time and to spin up this capability in minutes. More information on Cumulocity IoT and how to start a free trial can be found here.
Cumulocity IoT enables companies to quickly and easily implement smart IoT solutions.
Getting started is much easier on Linux than on Windows. For development we used a CentOS system.
Log in to hub.docker.com via a web browser.
Check out the following images on Docker Hub: a) Software AG Apama Correlator, b) Software AG Apama Builder, c) Software AG Zementis Server.
Make sure you go all the way through the checkout process.
On the Cumulocity side you have to register the device in your tenant. In config.ini you have to provide an identifier such as the serial number or MAC address; this identifier is used for registration.
As soon as the containers are running, you can start the registration on your tenant. Make sure config.ini points to the correct instance, such as eu-latest.cumulocity.com or cumulocity.com.
The Thin Edge example allows the management of Docker containers via the powerful concept of operations. The Docker tab shows the Docker containers currently on the host system, together with metrics like CPU and RAM. From there you can stop, restart, or delete containers. You can also create new containers with adjustments such as port mappings.
The configuration change functionality is used to edit the current configuration on the device side, e.g. the log level or data cycle times.
In the Analytics tab of the Thin Edge management web application, all EPL, Apama Analytics Builder, and Zementis models are listed. The sliders show which models are currently running and active on the Thin Edge device; toggling a slider generates an operation that loads or unloads the model. Currently only EPL is working; the distribution and injection of Apama Analytics Builder and Zementis models is work in progress.
The agent supports remote access via the platform. Thanks to the concept of operations, the device itself initiates the connection to the platform, not vice versa: the device opens a websocket connection to the platform. Possible protocols are SSH and VNC; any endpoint reachable from the device side can be used here. Thanks @Switchel, who provided a very good library.
The device agent is the layer between device and platform and handles communication between both parties.
These tasks are handled in separate modules:
The log level is set to INFO in every module, which makes debugging a lot easier. Change it if you want.
The required and most commonly used REST API calls are modularized here. The modules are named mainly after their API paths, e.g. alarm, event, measurement, or inventory. The authentication module is initialized at the very beginning of every module and uses the credentials.key file in the config directory. If that file is not available, device registration starts. If the credentials are not valid, they are deleted. Every API module has its own logger.
Device control handles everything around operations handling from the platform to the device. There are two main modules:
Dedicated modules in this context handle, for example, configuration updates (updating the managed object) or the exchange of files on the local machine, e.g. EPL files. Others can be added for later use cases. SmartREST is currently used as the trigger. The SmartREST template is created on the initial start of the agent; its layout is defined in SmartRestTemplate.json under config.
This template is needed for RemoteAccess, ModelManagement, and Docker, since otherwise the agent does not recognize the created operations.
The device registration is implemented and documented here:
Device Registration using Rest
After the registration process, it is checked whether a managed object with the particular identity is already available. If not, a new device is created that contains the fragments of the Thin Edge use case. The device layout is configured in device.txt in config.
The streamingAnalytics module consists of three submodules:
Utils contains many functionalities that are required during the runtime of the agent, such as reading the device's configuration files or enabling communication between threads.
Reads data from the config and credentials files. The config.ini file lies within the config directory and must contain the following:
[C8Y]
tenantInstance = eu-latest.cumulocity.com
[Device]
id = git_example
[Registration]
user = management/devicebootstrap
password = Fhdt1bb1f
tenant = management
tenantPostFix = /devicecontrol/deviceCredentials
[MQTT]
prefix = aggregated
prefixSignaltype = signalType
broker = localhost
port = 1883
This information is required for the agent to run properly. Feel free to add sections and variables. The credentials.key file is stored by the agent during the registration process; in this approach it is not saved encrypted.
The device.txt contains an example of a device as it is used here. Feel free to adapt it. However, the Thin Edge management app will only show the Docker and Analytics tabs if the c8y_SupportedOperations fragment contains c8y_Docker and c8y_ThinEdge_Model.
{
    "com_cumulocity_model_Agent": {},
    "c8y_IsDevice": {},
    "c8y_ThinEdge": {},
    "c8y_Docker": {},
    "c8y_ThinEdge_Model": {},
    "c8y_Configuration": {
        "config": "c8y.operations.check.interval=5;\nc8y.device.status.update=5;\nc8y.logging.level=DEBUG;\nc8y.aggregated.apama.topic=aggregated/#;\nc8y.model.status.update=5;\nc8y.signaltype.apama.topic=signaltype/#;"
    },
    "c8y_SupportedOperations": [
        "c8y_Restart",
        "c8y_Software",
        "c8y_Configuration",
        "c8y_Command",
        "c8y_Firmware",
        "c8y_RemoteAccessConnect",
        "c8y_Docker",
        "c8y_ThinEdge_Model"
    ]
}
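The semicolon-separated c8y_Configuration string above can be turned into a key/value dict in a few lines; a sketch of how an agent might parse it (the function name is illustrative):

```python
def parse_device_config(raw):
    """Split the semicolon-separated c8y_Configuration string into a dict.

    Empty fragments (e.g. after a trailing ';') are skipped.
    """
    settings = {}
    for entry in raw.replace("\n", "").split(";"):
        if "=" in entry:
            key, value = entry.split("=", 1)
            settings[key.strip()] = value.strip()
    return settings
```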
The dockerWatcher contains a module that sends the Docker stats from "docker stats" and "docker ps" to Cumulocity. It parses the output and creates a fragment "c8y_Docker" that contains metrics like memory, CPU, and the current status, which is then sent to Cumulocity. The UI uses this fragment to visualize the current status.
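Such a watcher can be sketched with docker's --format flag, which keeps the parsing trivial. The project's actual fragment layout is not shown here, so the field names below are assumptions:

```python
import subprocess


def parse_stats_line(line):
    """Split one formatted `docker stats` line into an assumed fragment layout."""
    name, cpu, mem = line.split(";")
    return {"name": name, "cpu": cpu, "memory": mem}


def docker_stats_fragment():
    """Collect per-container CPU/RAM and wrap it in a c8y_Docker-style fragment."""
    out = subprocess.run(
        ["docker", "stats", "--no-stream", "--format",
         "{{.Name}};{{.CPUPerc}};{{.MemUsage}}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {"c8y_Docker": [parse_stats_line(l) for l in out.splitlines() if l]}
```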
The Thin Edge example here includes the following services:
Those services are orchestrated via a docker-compose file.
Basic command for Docker within this project:
Start the thin edge containers in the background:
docker-compose up -d
Build the images before starting the thin edge containers in the background:
docker-compose up --build -d
List the running containers:
docker-compose ps
Display the log output of the thin edge containers with timestamps [-t], a limited number of lines [--tail], and following the output [-f]:
docker-compose logs -t -f --tail=20
Display the log output of a single container:
docker logs <container>
Shut down the thin edge containers:
docker-compose stop
Start/stop a single container:
docker start/stop <container>
The services are initialized from the docker-compose file. You can remove or add services in this file, e.g. if you need additional services or a changed mapping of volumes or ports.
Mosquitto is an open source implementation of a server for versions 5.0, 3.1.1, and 3.1 of the MQTT protocol. Documentation for the broker, clients, and client library API can be found in the man pages, which are available online at https://mosquitto.org/man/. There are also pages with an introduction to the features of MQTT, the mosquitto_passwd utility for dealing with usernames/passwords, and a description of the configuration file options available for the broker.
Currently the MQTT broker is not protected with a username/password; this can easily be changed via configuration.
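To protect the broker, Mosquitto only needs two settings in mosquitto.conf plus a password file created with the mosquitto_passwd utility mentioned above; the file paths below are assumptions:

```conf
# mosquitto.conf - require authenticated clients
allow_anonymous false
password_file /mosquitto/config/passwd
```

Create the password file with, for example, mosquitto_passwd -c /mosquitto/config/passwd agent, and give the agent the same credentials in its configuration.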
More on Apama can be found here in the Apama Community:
Apama standalone runs within a container and listens directly on MQTT. You can control the Apama instance with the following commands:
docker exec apama engine_inspect -h
-m | --monitors List monitor types
-j | --java List java applications
-e | --events List event types
-t | --timers List timers
-x | --contexts List contexts
-a | --aggregates List aggregate functions
-R | --receivers List receivers
-P | --pluginReceivers List plugin receivers
-r | --raw Raw (i.e. parser friendly) output
-h | --help This message
-n | --hostname
docker exec apama engine_inject 1.mon 2.mon 3.mon
This injects applications into the correlator. Note: if the file 2.mon contains an error, engine_inject successfully injects 1.mon and then terminates when it finds the error in 2.mon; the tool does not operate on 3.mon.
docker cp HelloWorld.mon apama:/apama_work/Project_deployed/monitors
This copies the mon-file to the container's destination folder; it still needs to be injected. NOTE: after stopping and restarting the Apama container, the mon-file still exists in the container but will not be injected automatically. docker-compose down removes the mon-file completely from the container.
docker exec apama engine_inject ./Project_deployed/monitors/HelloWorld.mon
This injects the mon-file into the running correlator.
These tools are provided as-is and without warranty or support. They do not constitute part of the Software AG product suite. Users are free to use, fork and modify them, subject to the license agreement. While Software AG welcomes contributions, we cannot guarantee to include every contribution in the master project.
For more information you can Ask a Question in the TECHcommunity Forums.
Contact us at TECHcommunity if you have any questions.