vividfog / nordpool-predict-fi

A Python app and a Random Forest ML model that predicts spot prices for the Nordpool FI market.
MIT License

GPT-4 outline: Displaying past predictions (auto-generated) #10

Closed · vividfog closed this 4 months ago

vividfog commented 4 months ago

This was auto-generated by GPT-4 from a discussion of various ideas; it is an outline for a future session.

Quote:

Motivation

To enhance the interactivity and information richness of the Nordpool Spot Price Prediction eCharts visualization, we are implementing a feature that allows users to toggle the visibility of historical prediction lines. These lines represent past predictions and help users visualize the evolution of price forecasts. Each line's opacity will decrease as it goes further back in time, providing a clear visual distinction between recent and older predictions.

Plan Overview

  1. Database Schema and Data Insertion: Set up a database schema to store snapshots of predictions and write data insertion logic.
  2. Backend Data Preparation: Implement a backend service to provide prediction data for the frontend.
  3. Frontend Implementation: Modify the HTML to include a toggle checkbox and implement the JavaScript to interact with the eCharts instance.
  4. Testing: Conduct thorough testing to ensure the feature works correctly.

Part 1: Database Schema and Data Insertion

SQL Schema

-- Table to store snapshots of predictions
CREATE TABLE prediction_snapshot (
    snapshot_id INTEGER PRIMARY KEY AUTOINCREMENT,
    snapshot_date TIMESTAMP NOT NULL
);

-- Table to store the details of each snapshot
CREATE TABLE snapshot_details (
    detail_id INTEGER PRIMARY KEY AUTOINCREMENT,
    snapshot_id INTEGER NOT NULL,
    timestamp TIMESTAMP NOT NULL,
    PricePredict_cpkWh FLOAT,
    FOREIGN KEY (snapshot_id) REFERENCES prediction_snapshot(snapshot_id)
);

Data Insertion Logic

-- Insert a new snapshot record
INSERT INTO prediction_snapshot (snapshot_date) VALUES (CURRENT_TIMESTAMP);

-- Insert the snapshot details (this would be part of a loop processing your prediction data)
INSERT INTO snapshot_details (snapshot_id, timestamp, PricePredict_cpkWh) 
VALUES ((SELECT last_insert_rowid()), '2024-05-29 00:00:00', 10.5);
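To sketch how the app itself might run this insertion from Python, here is a minimal example using the built-in sqlite3 module. The function name, the db_path argument, and the shape of the predictions argument are placeholders for this outline, not the project's actual code:

import sqlite3
from datetime import datetime, timezone

def save_prediction_snapshot(db_path, predictions):
    # predictions: iterable of (timestamp_str, price_cpkwh) pairs.
    # db_path and this function name are placeholders for this outline.
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.cursor()
        cur.execute(
            "INSERT INTO prediction_snapshot (snapshot_date) VALUES (?)",
            (datetime.now(timezone.utc).isoformat(),),
        )
        snapshot_id = cur.lastrowid  # same value last_insert_rowid() returns
        cur.executemany(
            "INSERT INTO snapshot_details (snapshot_id, timestamp, PricePredict_cpkWh) "
            "VALUES (?, ?, ?)",
            [(snapshot_id, ts, price) for ts, price in predictions],
        )
        conn.commit()
    finally:
        conn.close()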

Part 2: Backend Data Preparation

Implement a backend service to fetch the last 4 snapshots from the database and format them as JSON.
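A minimal, framework-agnostic sketch of that service, assuming the SQLite schema above. The function name and the payload shape (a list of snapshots, newest first, each a list of [timestamp, PricePredict_cpkWh] pairs) are assumptions for this outline:

import json
import sqlite3

def last_snapshots_as_json(db_path, limit=4):
    # Return the newest `limit` snapshots as a JSON string, newest first.
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT snapshot_id FROM prediction_snapshot "
            "ORDER BY snapshot_date DESC LIMIT ?",
            (limit,),
        )
        snapshot_ids = [row[0] for row in cur.fetchall()]
        snapshots = []
        for sid in snapshot_ids:
            cur.execute(
                "SELECT timestamp, PricePredict_cpkWh FROM snapshot_details "
                "WHERE snapshot_id = ? ORDER BY timestamp",
                (sid,),
            )
            snapshots.append([[ts, price] for ts, price in cur.fetchall()])
        return json.dumps(snapshots)
    finally:
        conn.close()

The frontend JavaScript below assumes the same payload shape.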

Part 3: Frontend Implementation

HTML

<label>
  <input type="checkbox" id="historyToggle" checked> Show History Lines
</label>
<div id="main" style="width: 600px; height: 400px;"></div>
<script src="path_to_echarts_lib"></script>

JavaScript

var myChart = echarts.init(document.getElementById('main'));

function loadChartData(callback) {
  // Simulate fetching JSON data from the backend.
  // Expected shape: a list of snapshots, newest first, each a list of
  // [timestamp, PricePredict_cpkWh] pairs.
  var simulatedData = []; // Replace with actual JSON data fetching logic
  callback(simulatedData);
}

function toggleHistoryLines(showHistory) {
  loadChartData(function (snapshots) {
    // Keep only the newest snapshot when history is toggled off.
    var visible = showHistory ? snapshots : snapshots.slice(0, 1);
    var option = {
      series: visible.map(function (snapshot, index) {
        return {
          type: 'line',
          data: snapshot,
          // Fade older predictions: the newest line stays fully opaque.
          lineStyle: { opacity: Math.max(0.2, 1 - index * 0.2) }
          // other series options...
        };
      })
    };
    myChart.setOption(option, true); // notMerge: drop any stale history series
  });
}

document.getElementById('historyToggle').addEventListener('change', function () {
  toggleHistoryLines(this.checked);
});

toggleHistoryLines(true); // Initialize with history lines visible

Part 4: Testing

  1. Database Tests: Verify the schema and insertion logic (a sketch follows this list).
  2. Backend Tests: Ensure the backend correctly serves prediction snapshots as JSON.
  3. Frontend Tests: Test the toggle feature to confirm that it updates the chart as expected.
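For the database tests, one possible pytest-style sketch against an in-memory SQLite database; the schema is copied from Part 1 and the test name is illustrative:

import sqlite3

SCHEMA = """
CREATE TABLE prediction_snapshot (
    snapshot_id INTEGER PRIMARY KEY AUTOINCREMENT,
    snapshot_date TIMESTAMP NOT NULL
);
CREATE TABLE snapshot_details (
    detail_id INTEGER PRIMARY KEY AUTOINCREMENT,
    snapshot_id INTEGER NOT NULL,
    timestamp TIMESTAMP NOT NULL,
    PricePredict_cpkWh FLOAT,
    FOREIGN KEY (snapshot_id) REFERENCES prediction_snapshot(snapshot_id)
);
"""

def test_snapshot_roundtrip():
    conn = sqlite3.connect(":memory:")
    conn.executescript(SCHEMA)
    cur = conn.cursor()
    cur.execute("INSERT INTO prediction_snapshot (snapshot_date) VALUES (CURRENT_TIMESTAMP)")
    snapshot_id = cur.lastrowid
    cur.execute(
        "INSERT INTO snapshot_details (snapshot_id, timestamp, PricePredict_cpkWh) "
        "VALUES (?, ?, ?)",
        (snapshot_id, "2024-05-29 00:00:00", 10.5),
    )
    conn.commit()
    # The inserted detail row should round-trip with its price intact.
    rows = conn.execute(
        "SELECT timestamp, PricePredict_cpkWh FROM snapshot_details "
        "WHERE snapshot_id = ?",
        (snapshot_id,),
    ).fetchall()
    assert rows == [("2024-05-29 00:00:00", 10.5)]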

This plan covers everything needed to add historical prediction lines to the chart, giving users context for the current forecast. Adjust the database paths, JSON fetching logic, and eCharts configuration to match your project's setup.

vividfog commented 4 months ago

Index.html updated with a new history chart, which shows up to 14 previous prediction snapshots. This should give a clearer picture of the movement of the predictions over time. Together with this, the model is now trained with all available data every time a prediction runs: continuous training. When --train and --predict are used together, the RF model is not saved to disk, as both now happen in-memory. Models can still be saved for numerical analysis etc.