5G-Framework / volttron-tcc-docker-ecolong

Modified TCC docker for EcoLong

Prerequisites

Notes

Recommendations on running this Docker image on a Virtual Machine

Git Submodules

Docker

Quickstart

  1. Initialize the submodules; for details, see .gitmodules
git submodule update --init --recursive
  2. Build the images without using the cache and start the containers
docker-compose build --no-cache --force-rm
docker-compose up
  3. To view the logs of a service, enter the container and tail the logs; the tail command below works in all containers. Follow the example below:
# Get inside the building1 container
myuser@1234:~$ docker exec -itu volttron building1 bash
# tail the logs; this command works in all containers
volttron@building1:~$ tail -f $VOLTTRON_USER_HOME/logs/$(hostname).volttron.log

NOTE: Aliases have been added to the containers for convenience. See the 'Aliases in the container' section below for more details.

Container info

All containers are explicitly named in docker-compose.yml; each service defined there has a 'container_name' entry. Thus, you can easily enter a container via a human-readable name. For example, to enter the Volttron Central instance, use docker-compose exec -itu volttron central bash.

In addition, the hostnames of the containers are also explicitly set in docker-compose.yml; each service has a 'hostname' entry. With this configuration, you know exactly which container you are in, which is helpful when opening several shells to monitor several buildings. For example, note the shell prompts after running the docker-compose exec commands below:

# shell1
$ docker-compose exec -itu volttron central bash
volttron@central:~$

# shell2
$ docker-compose exec -itu volttron building1 bash
volttron@building1:~$
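Both names come straight from docker-compose.yml. The excerpt below is only a sketch of what such service entries could look like; the actual definitions in this repo will carry additional settings (images, networks, volumes, and so on):

services:
  central:
    container_name: central   # name used with docker exec / docker-compose exec
    hostname: central         # name shown in the shell prompt
  building1:
    container_name: building1
    hostname: building1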

Aliases in the container

Aliases have been added to each container to aid in debugging and viewing logs. These aliases are defined in "$VOLTTRON_USER_HOME/.bash_aliases".

All containers have the following aliases:

# show status of all agents, i.e. 'vctl status'
vstat

# tail the volttron logs
tlogs

# tail the volttron logs and grep for 'ERROR'; this is helpful for debugging
tlogsERROR

# Search for ERROR in the logs
grep-ERROR

# restart all agents except EnergyPlus
vrestart
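For reference, the definitions behind a few of these aliases could look roughly like the lines below. This is only a sketch; the authoritative definitions live in each container's $VOLTTRON_USER_HOME/.bash_aliases and may differ (vrestart, for example, is more involved and is omitted here):

# hypothetical excerpt of $VOLTTRON_USER_HOME/.bash_aliases
alias vstat='vctl status'
alias tlogs='tail -f $VLOG'
alias tlogsERROR='tail -f $VLOG | grep ERROR'
alias grep-ERROR='grep -a ERROR $VLOG'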

Also, all containers have an environment variable, $VLOG, which holds the path to that container's volttron log. You can use $VLOG to search for specific strings in volttron.log. For example, to search for "ERROR" in the 'brsw' container's volttron.log, run the following:

cat $VLOG | grep ERROR -a

This searches, case sensitively, for the string "ERROR" in the log file that $VLOG points to.

The following containers have their own container-specific aliases:

'central'

# grep for building topics sent from various buildings (e.g. building1, brsw, smalloffice, largeoffice)
grep-b1
grep-brsw
grep-so
grep-lo

'building1'

# grep for campus topic sent from central forwarder
grep-cmb1

'brsw'

# grep for campus topic sent from central forwarder
grep-cmbr

'smalloffice'

# grep for campus topic sent from central forwarder
grep-cmso

'largeoffice'

# grep for campus topic sent from central forwarder
grep-cmlo

Database persistence

All containers except 'central' have an SQLite database, which is used by the SQLHistorian agent. For background on keeping such data across container rebuilds, see the Docker documentation on persisting data:

https://docs.docker.com/get-started/05_persisting_data/
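One common way to keep a historian's database across container rebuilds is to mount a named volume over the directory that holds it. The compose excerpt below is only a sketch: the service name matches this repo, but the mount path is an assumption (VOLTTRON's default home is ~/.volttron for the volttron user), and the repo's actual docker-compose.yml may already handle persistence differently.

services:
  building1:
    volumes:
      - building1-data:/home/volttron/.volttron   # assumed location of the SQLite database
volumes:
  building1-data: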

Troubleshooting

My VC Platform agent can't connect to the Volttron Central address. I see "volttron.platform.vip.agent.subsystems.auth ERROR: Couldn't connect to https://localhost:8443 or incorrect response returned response was None" in the logs.

This most likely occurs if you are deploying this container behind a proxy. Ensure that your ~/.docker/config.json has no "proxies" configuration.
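A quick way to check is to look for a proxies block in the Docker client configuration on the host:

# On the host: check whether the Docker client config defines any proxies
grep -n '"proxies"' ~/.docker/config.json
# If a "proxies" section is present, remove it and recreate the containers.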

My Forwarder shows a BAD status when I run vctl status

Ensure that your forwarder's configuration uses the same volttron-central-address that is set in your platform_config.yml file.
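One way to spot a mismatch is to compare the address the running platform was configured with against the one declared for the deployment. The commands below are a sketch only; the file locations are assumptions and can differ between deployments:

# Inside the affected container: address the running platform is using
# ($VOLTTRON_HOME/config is the platform's main configuration file)
grep volttron-central-address $VOLTTRON_HOME/config

# On the host (or wherever the deployment config lives): declared address
grep volttron-central-address platform_config.yml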