Support for envoy firmware >= D7.0.88
From around V7 of the envoy firmware, the security model for API access was changed. This is obviously problematic for software such as mine that relies on local access to the APIs.
While it is entirely up to Enphase how they develop their software, I see a number of issues with their new security model:
- It ties your Enphase community account to the token needed to access the API (if you don't want an account, or Enphase suspends your account, you will lose access)
- It does not appear to be based on a standard authentication mechanism such as OAuth (you should never write your own authentication protocol)
- It is currently broken in a number of ways and will reduce the security of your Envoy device (I will not list the issues here)
The current release does support V7 firmware, but you will either need to manage the token generation yourself or supply your Enphase web user name and password. HTTPS is also required, so the port will need to be set to 443.
EnphaseCollector uses the undocumented API on the Envoy device to collect individual solar panel data and upload it to an InfluxDB instance, a PVOutput site, or simply display it in an internal view.
It can be run as a Java application or using the Docker image.
Screenshots: Main Page | Weekly History Tab | Questions and Answers Tab
If using the jar file you will need Java 21, which you can get from https://adoptium.net/
Example #1 with the default internal website (assuming the jar is named enphasecollector-development-SNAPSHOT.jar, which is the default build artifact)
java -jar enphasecollector-development-SNAPSHOT.jar
where the application will attempt to guess the envoy location and password.
Example #2 when envoy.local does not resolve and you need to specify the IP address, and the password cannot be guessed.
java -jar enphasecollector-development-SNAPSHOT.jar --envoy.controller.host=envoy-ip --envoy.controller.password=envoy-password
where envoy-ip is the IP address of your Envoy controller and envoy-password is likely the last 6 characters of your Envoy controller serial number
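As an illustration, deriving that default password is just string slicing on the serial number (the serial below is a made-up example, not a real device):

```shell
# Hypothetical Envoy serial number as printed on the unit's label
SERIAL="122045678901"

# The default installer password is typically the last 6 characters
PASSWORD="${SERIAL: -6}"
echo "$PASSWORD"
```

If that derived value does not work, check the label on your Envoy for the actual serial number.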
Example #3 run Spring Boot locally with debugger support, connecting to Enphase to pull down a token
mvn spring-boot:run -Dspring-boot.run.arguments="--envoy.controller.host=<PRIVATE IP OF ENVOY> --envoy.controller.port=443 --envoy.enphaseWebUser=<USER> --envoy.enphaseWebPassword=<PASSWORD>" -Dspring-boot.run.jvmArguments="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"
If using the Docker image
Example #1 using influxDB for storage
docker run \
-e TZ=your-timezone \
-e ENVOY_CONTROLLER_PASSWORD=envoy-password \
-e ENVOY_CONTROLLER_HOST=envoy-ip \
-e ENVOY_INFLUXDBRESOURCE_HOST=influxdb-ip \
-e ENVOY_INFLUXDBRESOURCE_PORT=influxdb-port \
-e SPRING_PROFILES_ACTIVE=influxdb \
dlmcpaul/enphasecollector
where envoy-password is likely to be the last 6 characters of your envoy controller serial number
Example #2 in standalone mode with internal database storage
docker run \
-e TZ=your-timezone \
-e ENVOY_CONTROLLER_PASSWORD=envoy-password \
-e ENVOY_CONTROLLER_HOST=envoy-ip \
-p 8080:8080 \
dlmcpaul/enphasecollector
with a web page available at http://localhost:8080/solar that looks like this
You can also link the internal database to an external file system, so that the database is kept when you upgrade the image, using the mount point /internal_db
docker run \
-e TZ=your-timezone \
-e ENVOY_CONTROLLER_PASSWORD=envoy-password \
-e ENVOY_CONTROLLER_HOST=envoy-ip \
-p 8080:8080 \
--mount type=bind,target=/internal_db,source=host_path \
dlmcpaul/enphasecollector
and replace host_path with the path on your host machine where you want to store the data.
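The same standalone setup with persistence can also be expressed as a compose file. This is a hedged sketch: ./enphase-data is a hypothetical host path, and the environment values are the placeholders used above.

```yaml
services:
  enphasecollector:
    image: dlmcpaul/enphasecollector
    environment:
      TZ: Australia/Sydney
      ENVOY_CONTROLLER_HOST: envoy-ip
      ENVOY_CONTROLLER_PASSWORD: envoy-password
    ports:
      - "8080:8080"
    volumes:
      # Bind-mount a host directory so the internal database survives upgrades
      - ./enphase-data:/internal_db
```

Run it with docker compose up -d and the web page is again available at http://localhost:8080/solar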
Example #3 sending data to pvoutput.
docker run \
-e TZ=your-timezone \
-e ENVOY_CONTROLLER_PASSWORD=envoy-password \
-e ENVOY_CONTROLLER_HOST=envoy-ip \
-e ENVOY_PVOUTPUTRESOURCE_SYSTEMID=your-system-id \
-e ENVOY_PVOUTPUTRESOURCE_KEY=your-key \
-e SPRING_PROFILES_ACTIVE=pvoutput \
dlmcpaul/enphasecollector
where your-timezone is an IANA timezone name such as Australia/Sydney
Example #4 sending data to mqtt.
docker run \
-e TZ=your-timezone \
-e ENVOY_CONTROLLER_PASSWORD=envoy-password \
-e ENVOY_CONTROLLER_HOST=envoy-ip \
-e ENVOY_MQQTRESOURCE_HOST=mqqt-ip \
-e ENVOY_MQQTRESOURCE_PORT=mqqt-port \
-e ENVOY_MQQTRESOURCE_TOPIC=topic-name \
-e ENVOY_MQQTRESOURCE_PUBLISHERID=publisher-id \
-e SPRING_PROFILES_ACTIVE=mqtt \
dlmcpaul/enphasecollector
If ENVOY_MQQTRESOURCE_PUBLISHERID is not provided, a random value will be chosen.
Note the spelling mistake in the environment variables (MQQT instead of MQTT). This will likely be fixed in a later release.
Available environment variable descriptions:
Either supply
Or if you want auto refresh
The easiest way to configure the bands is with an external configuration file
envoy.bands[0].from = 0800
envoy.bands[0].to = 1200
envoy.bands[0].colour = #55BF3B
envoy.bands[1].from = 1600
envoy.bands[1].to = 1800
envoy.bands[1].colour = rgba(200, 60, 60, .2)
java -jar enphasecollector.jar --spring.config.additional-location=file:application.properties
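Putting the pieces together, an external application.properties might look like the sketch below. The host and password values are placeholders, and only properties already shown above are used.

```properties
# Envoy connection details (placeholders - substitute your own values)
envoy.controller.host=envoy-ip
envoy.controller.password=envoy-password

# Colour bands drawn on the internal graphs
envoy.bands[0].from=0800
envoy.bands[0].to=1200
envoy.bands[0].colour=#55BF3B
envoy.bands[1].from=1600
envoy.bands[1].to=1800
envoy.bands[1].colour=rgba(200, 60, 60, .2)
```

Pass it to the jar with --spring.config.additional-location=file:application.properties as shown above.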
All properties can be configured this way and will override any defaults set in the jar. Check the application.properties file for more properties that can be set
For Docker you will need a local directory to hold the file
docker run \
-e TZ=your-timezone \
-e ENVOY_CONTROLLER_PASSWORD=envoy-password \
-e ENVOY_CONTROLLER_HOST=envoy-ip \
-p 8080:8080 \
--mount type=bind,target=/internal_db,source=host_path \
--mount type=bind,target=/properties,source=host_path \
dlmcpaul/enphasecollector
While I make every effort to make this application secure, I cannot make any guarantees. The application should be hosted behind a firewall and only exposed through a reverse proxy that includes an authentication mechanism and utilises HTTPS.
Docker (or Java 21)
If the profile is set to influxdb then an InfluxDB instance is needed to store the statistics (it will autocreate 2 databases called 'solardb' and 'collectorStats')
If the profile is set to pvoutput then every 5 minutes the stats will be uploaded to your account at https://pvoutput.org (you will need to create an account to get the systemid and key)
You can set multiple profiles separated by a comma eg influxdb,pvoutput
The internal database is always populated so the local view is always available at /solar
Stats can be pulled to Prometheus by using the Actuator endpoint configured at /solar/actuator/prometheus
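A hedged sketch of a Prometheus scrape job for that endpoint (localhost:8080 assumes the standalone Docker setup above; adjust the target for your deployment):

```yaml
scrape_configs:
  - job_name: enphasecollector
    # Actuator exposes metrics under the application's /solar context path
    metrics_path: /solar/actuator/prometheus
    static_configs:
      - targets: ["localhost:8080"]
```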
Stats can be pushed to a mqtt server with the mqtt profile (requires mqtt server)
This is a fairly standard Maven project using Spring Boot, so
mvn package -Dmaven.test.skip
should get you started and will build a working jar in the target directory
You will need the following tools installed to develop and build this code.
There are also modules built in if you want to store the data somewhere other than the internal database. To use them you will need an installation of, or authentication for, the specific system:
There are some caveats