Why is this demonstration using Docker for all logic? How exactly do the modules interface inside Docker?
First of all, the `docker-compose build` command results in `manifest unknown`; pinning a specific image version like `bionic-1.0.0` was a remedy.
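For anyone hitting the same error, this is roughly what the fix looks like in the compose file (a minimal sketch; the service and image names here are placeholders, not the project's actual ones):

```yaml
services:
  greta:
    # An untagged/"latest" pull failed with "manifest unknown";
    # pinning an explicit tag made the pull resolve.
    image: example/greta:bionic-1.0.0
```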
The build also took up a lot of time. I'm new to Docker, but what is the rationale for having separate containers in a demonstration that is less complex than any 3D game? Would it be possible to have everything as a single project, so one could work on all modules via an IDE? I'm quoting Lawrence Krubner:
> And yet the tech world made an important breakthrough when it gave up on “Objects For Everything” and I suspect it will make an important breakthrough when it gives up on “Containers For Everything”.
From a student's point of view I'd rather not use Docker. The mix of Java (Greta), Python (Dialogue and Argumentation Framework) and C# (Unity) isn't really a benefit either. All data exchange/interfacing/signaling is done via raw sockets, as in `s.connect((INTERNAL_AMQ_HOST, 61613))`. I'm not sure that communicating via `INTERNAL_AMQ_HOST = os.getenv("INTERNAL_AMQ_HOST")` and `EXTERNAL_AMQ_HOST = os.getenv("EXTERNAL_AMQ_HOST")` is the way to go. To be honest, even C/C++ programming on multiple interacting embedded devices (implementing backend services over HTTP) is easier than untangling the container mess.
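For context, here is roughly what that socket-level exchange amounts to (a minimal sketch, assuming ActiveMQ's default STOMP connector on port 61613; the env-var default and the `/topic/dialogue` destination are my assumptions for illustration, not taken from the AgentsUnited code):

```python
import os
import socket

# Inside the Docker network the compose service name resolves;
# from the host machine it is typically localhost.
AMQ_HOST = os.getenv("INTERNAL_AMQ_HOST", "localhost")

# 61613 is ActiveMQ's default STOMP port.
s = socket.create_connection((AMQ_HOST, 61613))

# STOMP frames are headers + blank line + body, terminated by a NUL byte.
s.sendall(b"CONNECT\naccept-version:1.2\nhost:/\n\n\x00")
print(s.recv(4096).decode())  # the broker should answer with a CONNECTED frame

s.sendall(b"SEND\ndestination:/topic/dialogue\n\nhello from a container\x00")
s.sendall(b"DISCONNECT\n\n\x00")
s.close()
```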
The CouncilOfCoaches project is listed as a viable master's thesis topic at the ZHAW University of Applied Sciences, Switzerland. An implementation like the one in AgentsUnited might scare off students who would actually like to study how humans perceive the interaction once it is custom-built and hosted. They would have to write an implementation from scratch unless they are well versed in all of the areas/programming languages/modules mentioned above. From a perception point of view, 2D models might also suffice, as I find the 3D models very unconvincing. The articulation in the prebuilt demonstration was also very poor; without subtitles, it is hard to follow in detail what the agents are saying.