Removing the infinite loop and running a single round of optimization in a detached thread:
// start thread to update energy optimization
std::thread([this] {
    EVLOG_info << "running another round of energy optimization with update_interval " << config.update_interval;
    EVLOG_info << "about to init globals again";
    globals.init(date::utc_clock::now(), config.schedule_interval_duration, config.schedule_total_duration,
                 config.slice_ampere, config.slice_watt, config.debug, energy_flow_request);
    auto optimized_values = run_optimizer(energy_flow_request);
    EVLOG_info << "ran optimizer in the ready loop, got values " << optimized_values.size();
    enforce_limits(optimized_values);
    EVLOG_info << "enforced limits with " << optimized_values.size();
}).detach();
What is interesting is that with the loop and the forced limits, this log line never shows up:

2024-10-16 13:53:28.035004 [ERRO] iso15118_charge void dlog_func(dloglevel_t, const char*, int, const char*, const char*, ...) :: In ISO15118 charger impl, after updating AC max current to 0.000000, we get 0.000000
$ grep Enforce /tmp/with_forced_limit.log
2024-10-16 14:02:28.032095 [INFO] energy_manager: :: evse_manager_1 Enforce limits 32A -9999W
2024-10-16 14:02:28.110878 [INFO] energy_manager: :: evse_manager_1 Enforce limits 32A -9999W
2024-10-16 14:02:28.236020 [INFO] energy_manager: :: evse_manager_1 Enforce limits 32A -9999W
2024-10-16 14:02:28.323189 [INFO] energy_manager: :: evse_manager_1 Enforce limits 32A -9999W
2024-10-16 14:02:28.378272 [INFO] energy_manager: :: evse_manager_1 Enforce limits 32A -9999W
$ grep iso15118_charge /tmp/with_forced_limit.log
2024-10-16 14:02:25.299565 [DEBG] energy_manager void Everest::Config::load_and_validate_manifest(const std::string&, const Everest::json&) :: Found module iso15118_charger:EvseV2G, loading and verifying manifest...
2024-10-16 14:02:25.750956 [DEBG] energy_manager void Everest::Config::resolve_all_requirements() :: Manifest of connector_1:EvseManager lists requirement 'hlc' which will be fulfilled by iso15118_charger:EvseV2G->charger:ISO15118_charger...
2024-10-16 14:02:25.754719 [DEBG] energy_manager void Everest::Config::resolve_all_requirements() :: Manifest of iso15118_charger:EvseV2G lists requirement 'security' which will be fulfilled by evse_security:EvseSecurity->main:evse_security...
2024-10-16 14:02:27.377779 [INFO] iso15118_charge :: TCP server on eth0 is listening on port [fe80::42:acff:fe12:4%27]:61341
2024-10-16 14:02:27.378673 [INFO] iso15118_charge :: TLS server on eth0 is listening on port [fe80::42:acff:fe12:4%27]:64109
2024-10-16 14:02:27.378744 [INFO] iso15118_charge :: SDP socket setup succeeded
2024-10-16 14:02:27.379200 [INFO] iso15118_charge :: Module iso15118_charger initialized [2286ms]
2024-10-16 14:02:28.525104 [INFO] manager :: SIGTERM of child: iso15118_charger (pid: 11109) succeeded.
That's weird, because I set the hardcoded value to be valid for 3600 seconds, which is an hour.

Next tries: check whether enforce_limits is actually the problem, and whether the iso15118_charger is what is crashing.
In the SIL, if I enable the energy manager, it still crashes. @Abby-Wheelis can you confirm that this is true even on hardware?
Yes, I can confirm that the energy_manager also seems to crash shortly after startup on hardware. I restored the energy manager in the config, and then ran, seeing lots of the logs that I added, and a crash from energy_manager.
So we do need to muck around with the iso15118_charger code to stop the power budget from expiring. max_current_valid_until is set in Charger::set_max_current(float c, std::chrono::time_point<date::utc_clock> validUntil), which is called from ../modules/EvseManager/EvseManager.cpp:
// start with a limit of 0 amps. We will get a budget from EnergyManager that is locally limited by hw
// caps.
charger->set_max_current(0.0F, date::utc_clock::now() + std::chrono::seconds(10));
So that is where the original (expiring) limit comes from. Why aren't we getting any values from the EnergyManager?
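To make the failure mode concrete, here is a minimal, self-contained sketch (a hypothetical CurrentBudget struct, not the actual Charger class, and using steady_clock instead of date::utc_clock for brevity) of how a time-limited budget behaves: once the validity window closes without a refresh from enforce_limits, the effective limit falls back to 0 A, which is exactly the 0.000000 in the iso15118_charge error above.

#include <chrono>

// Hypothetical sketch of a time-limited current budget, modeled on
// max_current / max_current_valid_until in the Charger.
struct CurrentBudget {
    float max_current_A{0.0F};
    std::chrono::steady_clock::time_point valid_until{};

    void set(float amps, std::chrono::steady_clock::time_point until) {
        max_current_A = amps;
        valid_until = until;
    }

    // After valid_until passes, the charger falls back to 0 A unless
    // the EnergyManager has refreshed the budget in the meantime.
    float effective_current() const {
        return std::chrono::steady_clock::now() < valid_until ? max_current_A : 0.0F;
    }
};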
Aha! It looks like handle_enforce_limits can be called from EnergyNode or EvseManager:
# grep -r handle_enforce_limits ../modules
../modules/EvseManager/energy_grid/energyImpl.hpp: virtual void handle_enforce_limits(types::energy::EnforcedLimits& value) override;
../modules/EvseManager/energy_grid/energyImpl.cpp:void energyImpl::handle_enforce_limits(types::energy::EnforcedLimits& value) {
../modules/EnergyNode/energy_grid/energyImpl.hpp: virtual void handle_enforce_limits(types::energy::EnforcedLimits& value) override;
../modules/EnergyNode/energy_grid/energyImpl.cpp:void energyImpl::handle_enforce_limits(types::energy::EnforcedLimits& value) {
And both of them check to see if the response is "for them"
// is it for me?
if (value.uuid == energy_flow_request.uuid) {
Let's see if that is why they are ignoring the limit...
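To spell out the dispatch pattern (a simplified sketch; apply_limits is a hypothetical stand-in for the real apply path): a mismatched uuid is dropped silently, with no log and no error, which is why a hardcoded uuid only takes effect if it exactly matches the module's own uuid.

void energyImpl::handle_enforce_limits(types::energy::EnforcedLimits& value) {
    // is it for me?
    if (value.uuid == energy_flow_request.uuid) {
        apply_limits(value); // hypothetical stand-in for the real apply path
    }
    // else: the limit is silently ignored, so a hardcoded uuid such as
    // "connector_1" only works if it matches this module's actual uuid.
}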
@Abby-Wheelis can you also apply this additional patch and re-enable the energy manager?
Change the while(true) loop in modules/EnergyManager/EnergyManager.cpp to:
while (true) {
    std::vector<types::energy::EnforcedLimits> optimized_values;
    optimized_values.reserve(1);
    EVLOG_info << "new round of optimization launched";
    globals.init(date::utc_clock::now(), config.schedule_interval_duration, config.schedule_total_duration,
                 config.slice_ampere, config.slice_watt, config.debug, energy_flow_request);
    // auto optimized_values = run_optimizer(energy_flow_request);
    // EVLOG_info << "ran optimizer in the ready loop, got values " << optimized_values.size();
    if (optimized_values.size() == 0) {
        types::energy::EnforcedLimits l;
        l.uuid = "connector_1";
        l.valid_until =
            Everest::Date::to_rfc3339(globals.start_time + std::chrono::seconds(config.update_interval));
        types::energy::LimitsRes r;
        r.ac_max_current_A = 32;
        l.limits_root_side = r;
        optimized_values.push_back(l);
    }
    enforce_limits(optimized_values);
    EVLOG_info << "enforced limits with " << optimized_values.size();
    {
        std::unique_lock<std::mutex> lock(mainloop_sleep_mutex);
        mainloop_sleep_condvar.wait_for(lock, std::chrono::seconds(config.update_interval));
    }
}
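(A less hacky variant of the fallback, sketched here untested and discussed further below, would take the uuid from the request instead of hardcoding it:)

// Untested sketch, per the uuid discussion later in this thread: read the
// uuid from the request instead of hardcoding "connector_1".
l.uuid = energy_flow_request.uuid;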
I have also attached the expected logs. Once you confirm that this works, we can close this issue and move to #75.

EDIT: Actually attached the logs this time: expected_logs_basic_config_sil.log
I see you are running into a lot of issues, but I do not understand what problem you are solving here. The uMWC yocto image comes with an SDK that can be used to cross-compile EVerest and install it under /var/everest, and it just works.
Why do you want to hardcode the uuid? This is definitely wrong.
@corneliusclaussen our current plan for the CharIN milestone is to experiment with plugging in and yocto builds in parallel. I encourage you to look at the milestone tasks (https://github.com/EVerest/everest-demo/milestone/1); this is just one of them. Let us know if you have any feedback on our plans!
Concretely, for plugging in, on the uMWC, we are running the release from Mar/Apr 2024 + the hacks that AFS gave me for the OCPP managed charging in June + the hacks that I put in during CharIN week on raspbian.
Agreed that hardcoding the uuid is wrong; I could have used energy_flow_request.uuid but felt like I had spent enough time on this already. In parallel, we are working on rolling forward to the most recent release, which has the correct, non-hacky implementation of managed charging from AFS.
Iirc all you want to do is cross-compile EVerest and run it on the uMWC; that's why I don't understand why you are running into so many issues. I think it is a combination of a very old Debian image and a very old EVerest version with lots of changes. With the yocto SDK this is a matter of minutes; we do it all the time, especially at CharIN Testivals. So I think the quickest path by far is to upgrade to the latest uMWC yocto build (this has to be done once with the Raspberry Pi tooling, as a rauc update from Debian to yocto is not possible).
Once you are on yocto, it can be updated with rauc again, and a custom build can be installed under /var/everest. In this build there is a symlink /etc/selected-everest which you can point at the /var/everest install, and then you can also use this custom version from the UI.
So it's all prepared for that.
Iirc all you want to do is cross-compile EVerest and run it on the uMWC; that's why I don't understand why you are running into so many issues. I think it is a combination of a very old Debian image and a very old EVerest version with lots of changes.

Yup. Note that the first issue that I ran into was the SSL version - EVerest had rolled forward to OpenSSL 3, but the Debian image (or at least its toolchain) only had OpenSSL 1.x installed - 90% of my hacks were related to rolling back to the older version of SSL. See https://github.com/EVerest/everest-demo/issues/51#issuecomment-2166696592 and onwards.
I am still not sure what is causing this most recent error with run_optimizer - I did not change it in June, and it was able to start up properly then. But now it is crashing consistently, both in SIL and on hardware. We are not going to spend additional time investigating, since the version is so old and the code is so hacky.
I think that part of it was also a little bit of "too many cooks" - AFS made the smart charging changes; I did the initial cross-compile, and then tried to hand it off to @Abby-Wheelis and @faizanmir21, who are on-site and can access the EV lab but don't have a lot of experience with EVerest.
But going forward, as we update to a more recent version, we will use yocto, and hopefully it will be simpler. @catarial has already been able to compile the most recent version of EVerest on a Pi Zero, and is working on the yocto build next.
We will be tracking the work required to get the yocto build to work in #76. Fingers crossed that it will be smooth sailing!
@Abby-Wheelis can you also apply this additional patch and re-enable the energy manager?
I made this patch (was there another patch I needed to make too? It looks like most of the other work was logging/troubleshooting) and restored the manager, and it no longer seems to be crashing.
I have also attached the expected logs.
Are these the logs you were talking about?
Changed the UUID to match, and everything seems to work now.
No, only this patch. All looks good. I am closing this.
Further updates on creating a custom version of EVerest using the new yocto method will be tracked in #76. Further updates on plugging this custom version into a hardware simulator/EV will be tracked in #75.
Hopefully rolling forward will be smoother. One task down, four to go!
The EVerest project has open hardware as well (https://everest.github.io/nightly/hardware/pionix_belay_box.html), which is available as a kit from Pionix. Pionix also sells the uMWC (https://shop.pionix.com/products/umwc-micro-mega-watt-charger). This is a non-open device in a housing that shares some hardware with the Belay Box, although it has a different power module that is limited to 1 W output.
In this issue, we will track the steps to run a custom build of EVerest on the uMWC so that we can perform HIL testing.
@faizanmir21 The instructions are here: https://everest.github.io/nightly/hardware/pionix_belay_box.html#developing-with-everest-and-belaybox
They are for the Belay Box, but I'm hoping that they will apply to the uMWC as well. If not, we can ask the community for help.
My suggested steps are:
- Enable everest-dev.service and verify that it starts the dev build from /mnt/user_data/opt/everest
- Get a shell in the manager container with docker exec -it ....manager /bin/bash OR docker run -it ghcr.io/everest/everest-demo/manager:0.0.16 /bin/bash
@drmrd @couryrr-afs @wjmp for visibility