[UNDER CONSTRUCTION] A series of chatbots that demonstrate Leolani's functionalities.
The chatbots use the CLTL EMISSOR and KnowledgeRepresentation (aka the BRAIN) models and follow the Leolani platform, in which signals are processed and generated as a stream in time. The interpretation of the signals is stored in the BRAIN, where knowledge accumulates. Reasoning over this knowledge (aka THOUGHTS) triggers responses of the system to changes in the BRAIN that result from the interpretation of input signals.
The interaction with a user is recorded by EMISSOR as signals in a scenario with a timeline. EMISSOR can record audio, text and images. The BRAIN is a triple store that records the interpretations and accumulated knowledge, as well as the perspectives of the users.
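To make the scenario-and-timeline idea concrete, here is a minimal conceptual sketch in plain Python. This is NOT the actual EMISSOR API; the class and method names are hypothetical and only illustrate how multimodal signals can be anchored to a shared timeline within a scenario.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch (not the EMISSOR API): a scenario holds a timeline
# of signals, each anchored to a timestamp in milliseconds.
@dataclass
class Signal:
    modality: str   # "text", "audio" or "image"
    timestamp: int  # position on the scenario timeline
    content: str    # payload or a file reference

@dataclass
class Scenario:
    scenario_id: str
    signals: List[Signal] = field(default_factory=list)

    def add_signal(self, modality: str, timestamp: int, content: str) -> None:
        self.signals.append(Signal(modality, timestamp, content))

    def signals_between(self, start: int, end: int) -> List[Signal]:
        # Retrieve all signals anchored within a time window.
        return [s for s in self.signals if start <= s.timestamp < end]

scenario = Scenario("demo-interaction")
scenario.add_signal("text", 0, "Hello, I am Leolani.")
scenario.add_signal("image", 500, "camera-frame-001.jpg")
print(len(scenario.signals_between(0, 1000)))  # prints 2
```

In the real platform, each signal on such a timeline additionally carries interpretations (mentions, annotations) that are pushed into the BRAIN.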
Several Jupyter notebooks are included that demonstrate different types of interactions. [NOTEBOOKS ARE OUTDATED AND NEED TO BE REVISED]
Before starting, install GraphDB and launch it with a sandbox repository, which will act as the brain. A free version of GraphDB can be downloaded and installed from:
After installing GraphDB, launch it and create a repository with the name sandbox. This repository will be used as the BRAIN.
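Once GraphDB is running, the sandbox repository is reachable over HTTP. The sketch below builds a SPARQL request against it, assuming GraphDB's default port 7200 and its standard repository endpoint path; adjust the host or port if you changed the defaults. The request is only constructed here, not sent.

```python
from urllib.parse import urlencode
from urllib.request import Request

# GraphDB serves repositories on port 7200 by default; the endpoint path
# below follows its standard REST convention for a repository named "sandbox".
GRAPHDB_ENDPOINT = "http://localhost:7200/repositories/sandbox"

def sparql_request(query: str) -> Request:
    """Build a SPARQL GET request against the sandbox repository."""
    return Request(
        GRAPHDB_ENDPOINT + "?" + urlencode({"query": query}),
        headers={"Accept": "application/sparql-results+json"},
    )

req = sparql_request("SELECT * WHERE { ?s ?p ?o } LIMIT 10")
print(req.full_url)
```

Passing `req` to `urllib.request.urlopen` will only succeed while GraphDB is running with the sandbox repository in place.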
Furthermore, some of the applications use Docker images for sensor data processing, such as face and object detection. For this, you need to install Docker Desktop. You can follow the instructions on this page: https://www.docker.com/products/docker-desktop
After installing Docker Desktop, we advise you to pull the docker images for sensor processing before you start, as the images are rather big. Use the docker pull command from the command line:
Once the docker images are loaded and running in Docker Desktop, they are available for calls from the notebooks and other code.
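Such calls are typically plain HTTP requests to a service the container exposes locally. The sketch below shows the general shape, but the host, port and path are placeholders, NOT the actual service addresses; consult the documentation of each image for the real endpoint. The request is only constructed, not sent.

```python
from urllib.request import Request

# Placeholder endpoint (hypothetical): a dockerized detection service
# listening on a local port. Replace with the actual address of the image.
DETECTOR_URL = "http://localhost:8080/detect"

def detection_request(image_bytes: bytes) -> Request:
    """Build a POST request sending raw image bytes to a detection service."""
    return Request(
        DETECTOR_URL,
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
        method="POST",
    )

req = detection_request(b"\x89PNG...")  # stand-in for real image bytes
print(req.get_method(), req.full_url)
```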
In order to install the required packages, do the following from the terminal:
When there are no error messages, you can launch Jupyter to load the notebooks.
Start Jupyter from the terminal:
jupyter lab
Select the kernel venv for each notebook.
The code has been developed and tested on macOS and Linux. Some issues may arise when installing and running on Windows 10.
You can consult the troubleshooting document for solutions: [TROUBLESHOOTING.md]
The best way to find and solve your problems is to check the GitHub issue tab. If you can't find what you are looking for, feel free to raise an issue. We are pretty responsive.
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
git checkout -b feature/AmazingFeature
git commit -m 'Add some AmazingFeature'
git push origin feature/AmazingFeature
When using this code, please make reference to the following papers:
@inproceedings{santamaria2021emissor,
  title={EMISSOR: A platform for capturing multimodal interactions as Episodic Memories and Interpretations with Situated Scenario-based Ontological References},
  author={Santamar{\'\i}a, Selene B{\'a}ez and Baier, Thomas and Kim, Taewoon and Krause, Lea and Kruijt, Jaap and Vossen, Piek},
  booktitle={Proceedings of the MMSR workshop "Beyond Language: Multimodal Semantic Representations", IWCS 2021, also available as arXiv preprint arXiv:2105.08388},
  year={2021}
}
@inproceedings{vossen2019modelling,
  title={Modelling context awareness for a situated semantic agent},
  author={Vossen, Piek and Baj{\v{c}}eti{\'c}, Lenka and Baez, Selene and Ba{\v{s}}i{\'c}, Suzana and Kraaijeveld, Bram},
  booktitle={International and Interdisciplinary Conference on Modeling and Using Context},
  pages={238--252},
  year={2019},
  organization={Springer}
}
@inproceedings{vossen2019leolani,
title={Leolani: A robot that communicates and learns about the shared world},
author={Vossen, Piek and Baez, Selene and Bajcetic, Lenka and Basic, Suzana and Kraaijeveld, Bram},
booktitle={2019 ISWC Satellite Tracks (Posters and Demonstrations, Industry, and Outrageous Ideas), ISWC 2019-Satellites},
pages={181--184},
year={2019},
organization={CEUR-WS}
}