EVerest / everest-demo

EVerest demo: Dockerized demo with software in the loop simulation
Apache License 2.0

Build and run custom version of EVerest on the uMWC #51

Closed · shankari closed this 1 month ago

shankari commented 6 months ago

The EVerest project has open hardware as well (https://everest.github.io/nightly/hardware/pionix_belay_box.html), which is available as a kit from Pionix. Pionix also sells the uMWC (https://shop.pionix.com/products/umwc-micro-mega-watt-charger). This is a non-open device in a housing that shares some hardware with the Belay Box, although it has a different power module that is limited to 1W output.

In this issue, we will track the steps to run a custom build of EVerest on the uMWC so that we can perform HIL testing.

@faizanmir21 The instructions are here: https://everest.github.io/nightly/hardware/pionix_belay_box.html#developing-with-everest-and-belaybox

They are for the Belay Box, but I'm hoping that they will apply to the uMWC as well. If not, we can ask the community for help.

My suggested steps are:

  1. Check everest-dev.service and verify that it starts the dev build from /mnt/user_data/opt/everest
  2. Install a dev build from the latest stable release (2024.3.0) https://github.com/EVerest/everest-core/releases/tag/2024.3.0
    • We already have EVerest builds in the docker containers, so you can run the manager docker container and use it as the base to cross-compile. To get a shell, you can either:
    • run one of the demo scripts and then `docker exec -it ....manager /bin/bash`, OR
    • run `docker run -it ghcr.io/everest/everest-demo/manager:0.0.16 /bin/bash`
  3. Then rsync it over and try to boot up!

@drmrd @couryrr-afs @wjmp for visibility

shankari commented 1 month ago

Removing the infinite loop

    // start thread to update energy optimization
    std::thread([this] {
            EVLOG_info << "running another round of energy optimization with update_interval " << config.update_interval;
            EVLOG_info << "about to init globals again";
            globals.init(date::utc_clock::now(), config.schedule_interval_duration, config.schedule_total_duration,
                         config.slice_ampere, config.slice_watt, config.debug, energy_flow_request);
            auto optimized_values = run_optimizer(energy_flow_request);
            EVLOG_info << "ran optimizer in the ready loop, got values " << optimized_values.size();
            enforce_limits(optimized_values);
            EVLOG_info << "enforced limits with " << optimized_values.size();
    }).detach();
causes the process not to crash, but power goes back to 0, similar to disabling the module completely:

```
2024-10-16 13:53:27.276290 [INFO] energy_manager: :: about to lock the energy mutex
2024-10-16 13:53:27.276441 [INFO] energy_manager: :: launched the mutex
2024-10-16 13:53:27.276767 [INFO] energy_manager: :: returning optimized values vector of length 1
2024-10-16 13:53:27.276932 [INFO] energy_manager: :: ran optimizer in the ready loop, got values 1
2024-10-16 13:53:27.276997 [INFO] energy_manager: :: evse_manager_1 Enforce limits 32A -9999W
...
2024-10-16 13:53:27.694557 [DEBG] energy_manager: Everest::Everest::subscribe_var(const Requirement&, const std::string&, const JsonCallback&):: :: Incoming grid_connection_point:EnergyNode->energy_grid:energy->energy_flow_request
2024-10-16 13:53:27.812083 [INFO] connector_1:Evs :: Ignoring BSP Event, BSP is not enabled yet.
2024-10-16 13:53:27.812661 [INFO] connector_1:Evs :: AC HLC mode enabled.
2024-10-16 13:53:27.900739 [ERRO] iso15118_charge void dlog_func(dloglevel_t, const char*, int, const char*, const char*, ...) :: In ISO15118 charger impl, after updating AC max current to 0.000000, we get 0.000000
2024-10-16 13:53:27.918303 [INFO] ev_manager:JsEv :: Unplug detected, restarting simulation.
2024-10-16 13:53:27.944001 [INFO] connector_1:Evs :: πŸŒ€πŸŒ€πŸŒ€ Ready to start charging πŸŒ€πŸŒ€πŸŒ€
2024-10-16 13:53:28.035004 [ERRO] iso15118_charge void dlog_func(dloglevel_t, const char*, int, const char*, const char*, ...) :: In ISO15118 charger impl, after updating AC max current to 0.000000, we get 0.000000
2024-10-16 13:53:28.697083 [DEBG] energy_manager: void Everest::MQTTAbstractionImpl::on_mqtt_message(std::shared_ptr) :: topic everest/grid_connection_point/energy_grid/var starts with everest/
...
2024-10-16 13:52:21.063319 [DEBG] energy_manager: void Everest::MQTTAbstractionImpl::on_mqtt_message(std::shared_ptr) :: topic everest/grid_connection_point/energy_grid/var starts with everest/
2024-10-16 13:52:21.064637 [DEBG] energy_manager: Everest::Everest::subscribe_var(const Requirement&, const std::string&, const JsonCallback&):: :: Incoming grid_connection_point:EnergyNode->energy_grid:energy->energy_flow_request
2024-10-16 13:52:21.122293 [WARN] connector_1:Evs bool module::Charger::power_available() :: Power budget expired, falling back to 0.
2024-10-16 13:52:21.223231 [WARN] connector_1:Evs bool module::Charger::power_available() :: Power budget expired, falling back to 0.
```
shankari commented 1 month ago

What is interesting is that, with the loop and the forced limits, the `2024-10-16 13:53:28.035004 [ERRO] iso15118_charge void dlog_func(dloglevel_t, const char*, int, const char*, const char*, ...) :: In ISO15118 charger impl, after updating AC max current to 0.000000, we get 0.000000` log never shows up:

$ grep Enforce /tmp/with_forced_limit.log 
2024-10-16 14:02:28.032095 [INFO] energy_manager:  :: evse_manager_1 Enforce limits 32A -9999W 
2024-10-16 14:02:28.110878 [INFO] energy_manager:  :: evse_manager_1 Enforce limits 32A -9999W 
2024-10-16 14:02:28.236020 [INFO] energy_manager:  :: evse_manager_1 Enforce limits 32A -9999W 
2024-10-16 14:02:28.323189 [INFO] energy_manager:  :: evse_manager_1 Enforce limits 32A -9999W 
2024-10-16 14:02:28.378272 [INFO] energy_manager:  :: evse_manager_1 Enforce limits 32A -9999W 

$ grep iso15118_charge /tmp/with_forced_limit.log 
2024-10-16 14:02:25.299565 [DEBG] energy_manager  void Everest::Config::load_and_validate_manifest(const std::string&, const Everest::json&) :: Found module iso15118_charger:EvseV2G, loading and verifying manifest...
2024-10-16 14:02:25.750956 [DEBG] energy_manager  void Everest::Config::resolve_all_requirements() :: Manifest of connector_1:EvseManager lists requirement 'hlc' which will be fulfilled by iso15118_charger:EvseV2G->charger:ISO15118_charger...
2024-10-16 14:02:25.754719 [DEBG] energy_manager  void Everest::Config::resolve_all_requirements() :: Manifest of iso15118_charger:EvseV2G lists requirement 'security' which will be fulfilled by evse_security:EvseSecurity->main:evse_security...
2024-10-16 14:02:27.377779 [INFO] iso15118_charge  :: TCP server on eth0 is listening on port [fe80::42:acff:fe12:4%27]:61341
2024-10-16 14:02:27.378673 [INFO] iso15118_charge  :: TLS server on eth0 is listening on port [fe80::42:acff:fe12:4%27]:64109
2024-10-16 14:02:27.378744 [INFO] iso15118_charge  :: SDP socket setup succeeded
2024-10-16 14:02:27.379200 [INFO] iso15118_charge  :: Module iso15118_charger initialized [2286ms]
2024-10-16 14:02:28.525104 [INFO] manager          :: SIGTERM of child: iso15118_charger (pid: 11109) succeeded.

That's weird, because I set the hardcoded value to be valid for 3600 seconds, which should be an hour.
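For reference, the arithmetic in isolation (a minimal sketch using the same `date::utc_clock` and `Everest::Date::to_rfc3339` calls that appear in the snippets in this thread; it assumes it runs inside a module where those names are available):

```
// Sketch only: confirms that a 3600-second offset really is one hour ahead.
auto start = date::utc_clock::now();
auto valid_until = start + std::chrono::seconds(3600); // exactly one hour from now
EVLOG_info << "limit should be valid until " << Everest::Date::to_rfc3339(valid_until);
```

If limits stop being applied long before that timestamp, the 3600-second validity itself is unlikely to be the culprit.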

Next tries:

Abby-Wheelis commented 1 month ago

> In the SIL, if I enable the energy manager, it still crashes. @Abby-Wheelis can you confirm that this is true even on hardware?

Yes, I can confirm that the energy_manager also seems to crash shortly after startup. I restored this:

```
  energy_manager:
    connections:
      energy_trunk:
        - implementation_id: energy_grid
          module_id: grid_connection_point
    module: EnergyManager
```

And then ran it, saw lots of the logs that I added, and a crash from energy_manager.

Logs:

```
$ sudo /mnt/user_data/opt/everest/bin/manager --conf /mnt/user_data/opt/everest/etc/everest/config-sil-no-crash.yaml
sudo: unable to resolve host umwcdbde: Name or service not known
2024-10-16 16:20:42.204488 [INFO] manager :: ________ __ _
2024-10-16 16:20:42.204978 [INFO] manager :: | ____\ \ / / | |
2024-10-16 16:20:42.205069 [INFO] manager :: | |__ \ \ / /__ _ __ ___ ___| |_
2024-10-16 16:20:42.205347 [INFO] manager :: | __| \ \/ / _ \ '__/ _ \/ __| __|
2024-10-16 16:20:42.205478 [INFO] manager :: | |____ \ / __/ | | __/\__ \ |_
2024-10-16 16:20:42.205605 [INFO] manager :: |______| \/ \___|_| \___||___/\__|
2024-10-16 16:20:42.205777 [INFO] manager ::
2024-10-16 16:20:42.205919 [INFO] manager :: Using MQTT broker localhost:1883
2024-10-16 16:20:42.206048 [INFO] manager :: Telemetry enabled
2024-10-16 16:20:42.225010 [INFO] everest_ctrl :: Launching controller service on port 8849
2024-10-16 16:20:42.290588 [INFO] manager :: Loading config file at: /mnt/user_data/opt/everest/etc/everest/config-sil-no-crash.yaml
2024-10-16 16:20:43.257814 [INFO] manager :: Config loading completed in 1048ms
2024-10-16 16:20:46.946623 [INFO] energy_manager: :: Module energy_manager initialized [3481ms]
2024-10-16 16:20:47.294016 [INFO] token_validator :: Module token_validator initialized [3603ms]
2024-10-16 16:20:47.457844 [INFO] auth:Auth :: Module auth initialized [4108ms]
2024-10-16 16:20:47.478470 [INFO] api:API :: INIT API module
2024-10-16 16:20:47.508750 [INFO] token_provider: :: Module token_provider initialized [3970ms]
2024-10-16 16:20:47.534440 [INFO] grid_connection :: Module grid_connection_point initialized [4043ms]
2024-10-16 16:20:47.591294 [INFO] api:API :: Module api initialized [4224ms]
2024-10-16 16:20:47.667752 [INFO] yeti_driver_1:M :: Initializing MMWBSP
2024-10-16 16:20:47.676306 [INFO] yeti_driver_1:M :: log message before the new connector hardcoding
2024-10-16 16:20:47.676629 [INFO] yeti_driver_1:M :: FILE WITH HARDCODED CONNECTOR TYPE!
2024-10-16 16:20:47.676107 [INFO] iso15118_charge :: TCP server on eth1 is listening on port [fe80::fbba:70d9:9849:a121%3]:61341
2024-10-16 16:20:47.677077 [INFO] iso15118_charge :: TLS server on eth1 is listening on port [fe80::fbba:70d9:9849:a121%3]:64109
2024-10-16 16:20:47.677619 [INFO] iso15118_charge :: SDP socket setup succeeded
2024-10-16 16:20:47.678083 [INFO] yeti_driver_1:M :: Module yeti_driver_1 initialized [3925ms]
2024-10-16 16:20:47.679090 [INFO] iso15118_charge :: Module iso15118_charger initialized [4048ms]
2024-10-16 16:20:47.743297 [INFO] evse_security:E :: Module evse_security initialized [4162ms]
2024-10-16 16:20:47.835952 [INFO] slac:EvseSlac :: Module slac initialized [4159ms]
2024-10-16 16:20:48.041326 [INFO] connector_1:Evs :: Module connector_1 initialized [4598ms]
2024-10-16 16:20:49.157922 [INFO] manager :: πŸš™πŸš™πŸš™ All modules are initialized. EVerest up and running [6984ms] πŸš™πŸš™πŸš™
2024-10-16 16:20:49.160513 [INFO] energy_manager: :: NEW LOG STATEMENT
sh: 1: echo: echo: I/O error
2024-10-16 16:20:49.188471 [INFO] energy_manager: :: returning optimized values vector of length 0
2024-10-16 16:20:49.204564 [INFO] slac:EvseSlac :: Starting the SLAC state machine
2024-10-16 16:20:49.206176 [INFO] slac:EvseSlac :: Qualcomm PLC Device Attributes: HW Platform: QCA7000 SW Platform: MAC Firmware: Build date: 1.2.5-0 ZC signal: Missing Line frequency: 50Hz
2024-10-16 16:20:49.306508 [INFO] slac:EvseSlac :: Entered Reset state
2024-10-16 16:20:49.306791 [INFO] slac:EvseSlac :: New NMK key: 49:30:45:4E:59:58:41:44:46:54:53:41:39:5A:48:37
2024-10-16 16:20:49.307355 [INFO] slac:EvseSlac :: Received CM_SET_KEY_CNF
2024-10-16 16:20:49.307785 [INFO] slac:EvseSlac :: Entered Idle state
2024-10-16 16:20:49.592767 [INFO] connector_1:Evs :: Max AC hardware capabilities: 6A/3ph
2024-10-16 16:20:49.755591 [INFO] connector_1:Evs :: AC HLC mode enabled.
2024-10-16 16:20:49.808876 [ERRO] iso15118_charge void dlog_func(dloglevel_t, const char*, int, const char*, const char*, ...) :: In ISO15118 charger impl, after updating AC max current to 0.000000, we get 0.000000
2024-10-16 16:20:49.857819 [INFO] connector_1:Evs :: πŸŒ€πŸŒ€πŸŒ€ Ready to start charging πŸŒ€πŸŒ€πŸŒ€
2024-10-16 16:20:50.011078 [ERRO] iso15118_charge void dlog_func(dloglevel_t, const char*, int, const char*, const char*, ...) :: In ISO15118 charger impl, after updating AC max current to 0.000000, we get 0.000000
2024-10-16 16:20:50.454433 [CRIT] manager int boot(const boost::program_options::variables_map&) :: Module energy_manager (pid: 1593) exited with status: 139. Terminating all modules.
2024-10-16 16:20:50.455798 [INFO] manager :: SIGTERM of child: api (pid: 1589) succeeded.
2024-10-16 16:20:50.456008 [INFO] manager :: SIGTERM of child: auth (pid: 1590) succeeded.
2024-10-16 16:20:50.456276 [INFO] manager :: SIGTERM of child: connector_1 (pid: 1591) succeeded.
2024-10-16 16:20:50.458290 [INFO] manager :: SIGTERM of child: connector_1_powerpath (pid: 1592) succeeded.
2024-10-16 16:20:50.458561 [INFO] manager :: SIGTERM of child: evse_security (pid: 1594) succeeded.
2024-10-16 16:20:50.458716 [INFO] manager :: SIGTERM of child: grid_connection_point (pid: 1595) succeeded.
2024-10-16 16:20:50.458830 [INFO] manager :: SIGTERM of child: iso15118_car (pid: 1597) succeeded.
2024-10-16 16:20:50.458937 [INFO] manager :: SIGTERM of child: iso15118_charger (pid: 1602) succeeded.
2024-10-16 16:20:50.468762 [INFO] manager :: SIGTERM of child: slac (pid: 1603) succeeded.
2024-10-16 16:20:50.469523 [INFO] manager :: SIGTERM of child: token_provider (pid: 1604) succeeded.
2024-10-16 16:20:50.469902 [INFO] manager :: SIGTERM of child: token_validator (pid: 1605) succeeded.
2024-10-16 16:20:50.470145 [INFO] manager :: SIGTERM of child: yeti_driver_1 (pid: 1606) succeeded.
2024-10-16 16:20:50.470277 [CRIT] manager int boot(const boost::program_options::variables_map&) :: Exiting manager.
```
shankari commented 1 month ago

Bump up the validity to 3,600,000 (just in case it is milliseconds) -- did not get fixed.
Set the limits in the loop without calling the optimizer - does not crash, but we are still getting the `Power budget expired, falling back to 0.` message. Note that this is more of an indication that the problem is with the mutex:

```
        while(true) {
            std::vector<types::energy::EnforcedLimits> optimized_values;
            optimized_values.reserve(1);
            EVLOG_info << "about to init globals again";
            globals.init(date::utc_clock::now(), config.schedule_interval_duration, config.schedule_total_duration,
                         config.slice_ampere, config.slice_watt, config.debug, energy_flow_request);
            // auto optimized_values = run_optimizer(energy_flow_request);
            // EVLOG_info << "ran optimizer in the ready loop, got values " << optimized_values.size();
            if (optimized_values.size() == 0) {
                types::energy::EnforcedLimits l;
                l.uuid = "evse_manager_1";
                l.valid_until = Everest::Date::to_rfc3339(globals.start_time + std::chrono::seconds(3600000));
                types::energy::LimitsRes r;
                r.ac_max_current_A = 32;
                l.limits_root_side = r;
                optimized_values.push_back(l);
            }
            enforce_limits(optimized_values);
            EVLOG_info << "enforced limits with " << optimized_values.size();
        }
```

generates:

```
2024-10-16 18:15:51.891699 [INFO] energy_manager: :: enforced limits with 1
2024-10-16 18:15:51.891839 [INFO] energy_manager: :: about to init globals again
2024-10-16 18:15:51.892061 [INFO] energy_manager: :: evse_manager_1 Enforce limits 32A -9999W
2024-10-16 18:15:51.976614 [INFO] energy_manager: :: enforced limits with 1
2024-10-16 18:15:51.976697 [INFO] energy_manager: :: about to init globals again
2024-10-16 18:15:51.976861 [INFO] energy_manager: :: evse_manager_1 Enforce limits 32A -9999W
2024-10-16 18:15:51.982200 [WARN] connector_1:Evs bool module::Charger::power_available() :: Power budget expired, falling back to 0.
2024-10-16 18:15:52.063449 [INFO] energy_manager: :: enforced limits with 1
2024-10-16 18:15:52.063567 [INFO] energy_manager: :: about to init globals again
2024-10-16 18:15:52.063797 [INFO] energy_manager: :: evse_manager_1 Enforce limits 32A -9999W
2024-10-16 18:15:52.083250 [WARN] connector_1:Evs bool module::Charger::power_available() :: Power budget expired, falling back to 0.
2024-10-16 18:15:52.183896 [WARN] connector_1:Evs bool module::Charger::power_available() :: Power budget expired, falling back to 0.
```

So we do need to muck around with the iso15118_charger code to stop the power budget from expiring...

shankari commented 1 month ago

Found those logs in `modules/EvseManager/Charger.cpp`:

```
// returns whether power is actually available from EnergyManager
// i.e. max_current is in valid range
bool Charger::power_available() {
    if (shared_context.max_current_valid_until < date::utc_clock::now()) {
        EVLOG_warning << "Power budget expired, falling back to 0.";
        if (shared_context.max_current > 0.) {
            shared_context.max_current = 0.;
            signal_max_current(shared_context.max_current);
        }
    }
    return (get_max_current_internal() > 5.9);
}
```

`max_current_valid_until` is set in `Charger::set_max_current(float c, std::chrono::time_point<date::utc_clock> validUntil)`, which is called from `../modules/EvseManager/EvseManager.cpp` (`charger->set_max_current(0.0F, date::utc_clock::now() + std::chrono::seconds(10));`)

    //  start with a limit of 0 amps. We will get a budget from EnergyManager that is locally limited by hw    
    //  caps.                                                                                                  
    charger->set_max_current(0.0F, date::utc_clock::now() + std::chrono::seconds(10));                         

So that is where the original (expiring) limit comes from. Why aren't we getting any values from the EnergyManager?
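Presumably, when the EnergyManager does send a limit, this same setter refreshes the budget before the 10 seconds run out. A minimal sketch of the shape `Charger::set_max_current` would need to have for `power_available()` above to behave as observed (this body is an assumption for illustration, not copied from the EVerest source; the lock name is hypothetical):

```
// Assumed shape, inferred from how power_available() reads shared_context above.
// Not the actual EVerest implementation.
void Charger::set_max_current(float c, std::chrono::time_point<date::utc_clock> validUntil) {
    std::scoped_lock lock(state_machine_mutex); // hypothetical mutex name
    shared_context.max_current = c;
    shared_context.max_current_valid_until = validUntil;
    signal_max_current(c); // propagate to listeners such as the ISO 15118 charger impl
}
```

If that refresh never arrives, the initial 10-second budget from init simply expires and `power_available()` falls back to 0, which matches the logs.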

Aha! It looks like `handle_enforce_limits` is implemented by both EnergyNode and EvseManager:

# grep -r handle_enforce_limits ../modules
../modules/EvseManager/energy_grid/energyImpl.hpp:    virtual void handle_enforce_limits(types::energy::EnforcedLimits& value) override;
../modules/EvseManager/energy_grid/energyImpl.cpp:void energyImpl::handle_enforce_limits(types::energy::EnforcedLimits& value) {
../modules/EnergyNode/energy_grid/energyImpl.hpp:    virtual void handle_enforce_limits(types::energy::EnforcedLimits& value) override;
../modules/EnergyNode/energy_grid/energyImpl.cpp:void energyImpl::handle_enforce_limits(types::energy::EnforcedLimits& value) {

And both of them check to see if the response is "for them"

    // is it for me?
    if (value.uuid == energy_flow_request.uuid) {    

Let's see if that is why they are ignoring the limit...
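To make the consequence of that check explicit, here is a sketch of how the guard presumably plays out (everything past the quoted check is an assumption for illustration; `apply_limit` is a hypothetical stand-in for the module-specific handling):

```
// Sketch based on the grep hits above; not verbatim EVerest source.
void energyImpl::handle_enforce_limits(types::energy::EnforcedLimits& value) {
    // is it for me?
    if (value.uuid != energy_flow_request.uuid) {
        return; // limit is addressed to some other node/EVSE, so it is silently dropped
    }
    // Only a matching uuid reaches the module-specific handling; in EvseManager this is
    // presumably where charger->set_max_current(...) gets refreshed with the new budget.
    apply_limit(value); // hypothetical helper for illustration
}
```

So if the EnergyManager publishes limits under a uuid that neither implementation recognises, both drop them and the 10-second budget is never refreshed.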

shankari commented 1 month ago

Bingo!

```
2024-10-16 19:14:59.070561 [INFO] energy_manager: :: enforced limits with 1
2024-10-16 19:14:59.070646 [INFO] energy_manager: :: about to init globals again
2024-10-16 19:14:59.070853 [INFO] energy_manager: :: evse_manager_1 Enforce limits 32A -9999W
2024-10-16 19:14:59.112087 [ERRO] grid_connection virtual void module::energy_grid::energyImpl::handle_enforce_limits(types::energy::EnforcedLimits&) :: In EnergyNode: for UUID of grid_connection_point, received limit of { "limits_root_side": { "ac_max_current_A": 32.0 }, "uuid": "evse_manager_1", "valid_until": "2024-11-27T11:14:59.070Z" }
2024-10-16 19:14:59.154161 [ERRO] connector_1:Evs virtual void module::energy_grid::energyImpl::handle_enforce_limits(types::energy::EnforcedLimits&) :: EvseManager: for connector_1, received limit of { "limits_root_side": { "ac_max_current_A": 32.0 }, "uuid": "evse_manager_1", "valid_until": "2024-11-27T11:14:59.070Z" }
```
Changed the UUID to match and everything seems to work now:

```
2024-10-16 19:21:07.974365 [INFO] energy_manager: :: enforced limits with 1
2024-10-16 19:21:07.974448 [INFO] energy_manager: :: about to init globals again
2024-10-16 19:21:07.974649 [INFO] energy_manager: :: connector_1 Enforce limits 32A -9999W
2024-10-16 19:21:08.015554 [ERRO] grid_connection virtual void module::energy_grid::energyImpl::handle_enforce_limits(types::energy::EnforcedLimits&) :: In EnergyNode: for UUID of grid_connection_point, received limit of { "limits_root_side": { "ac_max_current_A": 32.0 }, "uuid": "connector_1", "valid_until": "2024-11-27T11:21:07.974Z" }
2024-10-16 19:21:08.032377 [ERRO] connector_1:Evs virtual void module::energy_grid::energyImpl::handle_enforce_limits(types::energy::EnforcedLimits&) :: EvseManager: for connector_1, received limit of { "limits_root_side": { "ac_max_current_A": 32.0 }, "uuid": "connector_1", "valid_until": "2024-11-27T11:21:07.974Z" }
2024-10-16 19:21:08.034156 [ERRO] iso15118_charge void dlog_func(dloglevel_t, const char*, int, const char*, const char*, ...) :: In ISO15118 charger impl, after updating AC max current to 32.000000, we get 32.000000
```

@Abby-Wheelis can you also apply this additional patch and re-enable the energy manager? Change the while(true) loop in modules/EnergyManager/EnergyManager.cpp to

        while(true) {            
            std::vector<types::energy::EnforcedLimits> optimized_values;
            optimized_values.reserve(1);
            EVLOG_info << "new round of optimization launched";
            globals.init(date::utc_clock::now(), config.schedule_interval_duration, config.schedule_total_duration,
                         config.slice_ampere, config.slice_watt, config.debug, energy_flow_request);
            // auto optimized_values = run_optimizer(energy_flow_request);
            // EVLOG_info << "ran optimizer in the ready loop, got values " << optimized_values.size();
            if (optimized_values.size() == 0) {
                types::energy::EnforcedLimits l;
                l.uuid = "connector_1";                                                       
                l.valid_until =                    
                        Everest::Date::to_rfc3339(globals.start_time + std::chrono::seconds(config.update_interval));
                types::energy::LimitsRes r;
                r.ac_max_current_A = 32;
                l.limits_root_side = r;
                optimized_values.push_back(l);
            }                                   

            enforce_limits(optimized_values);
            EVLOG_info << "enforced limits with " << optimized_values.size();
            {            
                std::unique_lock<std::mutex> lock(mainloop_sleep_mutex);
                mainloop_sleep_condvar.wait_for(lock, std::chrono::seconds(config.update_interval));
            }                
        }                 

I have also attached the expected logs. Once you confirm that this works, we can close this issue and move to #75

EDIT: Actually attached the logs: expected_logs_basic_config_sil.log

corneliusclaussen commented 1 month ago

I see you are running into a lot of issues, but I do not understand what problem you are solving here. The uMWC yocto image comes with an SDK that can be used to cross-compile EVerest and install it under /var/everest, and it just works.

corneliusclaussen commented 1 month ago

Why do you want to hardcode the uuid? This is definitely wrong.

shankari commented 1 month ago

@corneliusclaussen our current plan for the CharIN milestone is to experiment with plugging in and yocto builds in parallel. I encourage you to look at the milestone tasks (https://github.com/EVerest/everest-demo/milestone/1), this is just one of them πŸ˜„ Let us know if you have any feedback on our plans!

corneliusclaussen commented 1 month ago

Iirc all you want to do is cross-compile EVerest and run it on the uMWC; that's why I don't understand why you are running into so many issues. I think it is a combination of a very old Debian image and a very old EVerest version with lots of changes. With the yocto SDK this is a matter of minutes; we do it all the time, especially at CharIN Testivals. So I think the quickest path by far is to upgrade to the latest uMWC yocto build (this has to be done once with the Raspberry Pi tooling, as a rauc update from Debian to yocto is not possible).

Once you are on yocto it can be updated with rauc again, and a custom build can be installed under /var/everest.

In this build there is a symlink /etc/selected-everest which you can point to the /var/Everest install, and then you can also use this custom version from the ui.

So it's all prepared for that.

shankari commented 1 month ago

> Iirc all you want to do is cross compile Everest and run it on the umwc, that's why I don't understand why you are running into so many issues. I think it is a combination of a very old Debian image and a very old Everest version with lots of changes.

Yup. Note that the first issue that I ran into was the SSL version - EVerest had rolled forward to OpenSSL 3, but Debian (or at least the toolchain) had only OpenSSL 1.x installed - 90% of my hacks were related to rolling back to the older version of OpenSSL. See https://github.com/EVerest/everest-demo/issues/51#issuecomment-2166696592 and onwards.

I am still not sure what is causing this most recent error with the run_optimizer - I did not change it in June, and was able to start up properly. But now, it is crashing consistently, both in SIL and in hardware. We are not going to spend additional time investigating since it is so old and the code is so hacky.

I think that part of it was also a little bit of "too many cooks" - AFS made the smart charging changes; I did the initial cross-compile, and then tried to hand it off to @Abby-Wheelis and @faizanmir21 who are on-site and can access the EV lab, but don't have a lot of experience with EVerest.

But going forward, as we update to a more recent version, we will use yocto, and hopefully it will be simpler. @catarial has already been able to compile the most recent version of everest on a pi zero, and is working on the yocto build next.

We will be tracking the work required to get the yocto build to work in #76. Fingers crossed that it will be smooth sailing!

Abby-Wheelis commented 1 month ago

> @Abby-Wheelis can you also apply this additional patch and re-enable the energy manager?

I made this patch (was there another patch I needed to make too? It looks like most of the other work was logging/troubleshooting) and restored the manager, and it no longer seems to be crashing.

> I have also attached the expected logs.

Are these the logs you were talking about?

> Changed the UUID to match and everything seems to work now

If so, then I believe that is what I am seeing; after initialization I see a loop of messages:

```
2024-10-16 22:26:51.075596 [INFO] connector_1:Evs :: πŸŒ€πŸŒ€πŸŒ€ Ready to start charging πŸŒ€πŸŒ€πŸŒ€
2024-10-16 22:26:51.224352 [ERRO] iso15118_charge void dlog_func(dloglevel_t, const char*, int, const char*, const char*, ...) :: In ISO15118 charger impl, after updating AC max current to 0.000000, we get 0.000000
2024-10-16 22:26:52.020574 [INFO] energy_manager: :: new round of optimization launched
2024-10-16 22:26:52.087538 [ERRO] iso15118_charge void dlog_func(dloglevel_t, const char*, int, const char*, const char*, ...) :: In ISO15118 charger impl, after updating AC max current to 32.000000, we get 32.000000
2024-10-16 22:26:52.139143 [INFO] energy_manager: :: enforced limits with 1
2024-10-16 22:26:53.139522 [INFO] energy_manager: :: new round of optimization launched
2024-10-16 22:26:53.157806 [ERRO] iso15118_charge void dlog_func(dloglevel_t, const char*, int, const char*, const char*, ...) :: In ISO15118 charger impl, after updating AC max current to 32.000000, we get 32.000000
2024-10-16 22:26:53.210145 [INFO] energy_manager: :: enforced limits with 1
2024-10-16 22:26:54.210580 [INFO] energy_manager: :: new round of optimization launched
2024-10-16 22:26:54.228634 [ERRO] iso15118_charge void dlog_func(dloglevel_t, const char*, int, const char*, const char*, ...) :: In ISO15118 charger impl, after updating AC max current to 32.000000, we get 32.000000
2024-10-16 22:26:54.279238 [INFO] energy_manager: :: enforced limits with 1
2024-10-16 22:26:55.279616 [INFO] energy_manager: :: new round of optimization launched
```
shankari commented 1 month ago

No, only this patch. All looks good. I am closing this.

Further updates on creating a custom version of EVerest using the new yocto method will be tracked in #76. Further updates on plugging this custom version into a hardware simulator/EV will be tracked in #75.

Hopefully rolling forward will be smoother. One task down, four to go!