SNL-WaterPower / WecOptTool-MATLAB

WEC Design Optimization Toolbox
GNU General Public License v3.0

Better example spectra #108

Open ryancoe opened 4 years ago

ryancoe commented 4 years ago

Instead of the arbitrary spectra with no weighting (i.e., equal weighting) that we are currently loading via example8Spectra, we should:

Previous discussion on this topic: #2
Issue about clustering to support doing the above for any location: #40

ssolson commented 3 years ago

From https://github.com/MHKiT-Software/MHKiT-Python/pull/91 the centers and weights are as follows. This was for 8 clusters. Is there a preferred number of clusters?

| Te (s) | Hm0 (m) | Weight |
| --- | --- | --- |
| 8.108357 | 2.908718 | 0.152038 |
| 11.841317 | 2.577053 | 0.093350 |
| 9.862090 | 2.032084 | 0.177814 |
| 10.507295 | 3.474941 | 0.122957 |
| 8.380472 | 1.431654 | 0.174200 |
| 12.453715 | 4.634644 | 0.064942 |
| 14.101427 | 2.801273 | 0.031397 |
| 6.900830 | 1.834313 | 0.183303 |
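For reference, centers and weights like those above come out of a k-means-style clustering of the (Te, Hm0) scatter. A minimal sketch, using a plain NumPy implementation of Lloyd's algorithm and synthetic data in place of the real NDBC record (the synthetic distribution and all names are illustrative, not what MHKiT does internally):

```python
import numpy as np

def kmeans(data, k, n_iter=100, seed=0):
    """Plain Lloyd's algorithm: returns (centers, labels)."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(n_iter):
        # Distance of every point to every center, then nearest-center labels.
        d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its members (keep it if empty).
        new = np.array([data[labels == i].mean(axis=0) if np.any(labels == i)
                        else centers[i] for i in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

# Synthetic (Te, Hm0) pairs stand in for the NDBC scatter data.
rng = np.random.default_rng(1)
data = np.column_stack([rng.normal(10.0, 2.0, 2000),   # energy period Te [s]
                        rng.normal(2.5, 0.8, 2000)])   # sig. wave height Hm0 [m]

centers, labels = kmeans(data, k=8)
# Probability weight of each cluster = fraction of records assigned to it.
weights = np.bincount(labels, minlength=8) / len(data)
```

The weights sum to 1 by construction, which is what lets them be used directly as occurrence probabilities later.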


H0R5E commented 3 years ago

What I don't understand is how this translates to spectra. Would you just calibrate a Bretschneider spectrum to each of these points? Why would that be better than building a bespoke spectrum based on the data you have?

EDIT: I guess the broader question is what is the benefit of subdividing your data into multiple spectra anyway?

ssolson commented 3 years ago

Hey Mat, I know this is done, but I have a similar question. I was working on relating this to spectra and thought that if I did the clustering on Tp instead of Te, I could then create 8 representative Bretschneider spectra for the proposed site 46022. I'm still looking at the results, but this is where I am.

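As an aside, for a Bretschneider shape the energy and peak periods are related by a fixed ratio (Te ≈ 0.857 Tp), so clustered Te centers can also be converted directly rather than re-clustering on Tp. A sketch of that conversion and the resulting two-parameter Bretschneider spectrum, using one cluster center from the table above (the frequency grid and variable names are arbitrary choices):

```python
import numpy as np

TE_OVER_TP = 0.857  # Te/Tp ratio for a Bretschneider spectral shape

def bretschneider(f, hm0, tp):
    """Two-parameter Bretschneider spectrum S(f) [m^2 s]."""
    fp = 1.0 / tp
    return (5.0 / 16.0) * hm0**2 * fp**4 / f**5 * np.exp(-1.25 * (fp / f)**4)

# One cluster center from the table above: Te = 8.108357 s, Hm0 = 2.908718 m.
te, hm0 = 8.108357, 2.908718
tp = te / TE_OVER_TP                    # convert energy period to peak period

f = np.linspace(0.01, 1.0, 1000)        # frequency grid [Hz]
S = bretschneider(f, hm0, tp)

# Sanity check: the zeroth moment m0 should recover Hm0 = 4*sqrt(m0).
m0 = float(np.sum(0.5 * (S[1:] + S[:-1]) * np.diff(f)))  # trapezoid rule
hm0_check = 4.0 * np.sqrt(m0)
```

The moment check is a useful habit when generating spectra from bulk parameters, since it catches unit and normalization mistakes immediately.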

H0R5E commented 3 years ago

@ssolson, I'm not saying this is not a useful analysis for some purposes. Just, in the context of WecOptTool, why would you use this approach, versus something like dat2dspec, if you had the data available?

EDIT: Probably the answer to my own question is that you need the raw sensor data for dat2dspec, and the data from the buoys are already processed into spectra. You can get the raw spectra though, so why not process those?

ssolson commented 3 years ago

Are you proposing that, in your example, we would make one wave spectrum from all the raw time-series data? If so, it is not immediately obvious to me that this would be equivalent to tuning to N weighted representative sea states. This is not to speak to the efficacy of either approach, just that they do not seem similar. The N division method, in my understanding, allows a developer to pick a manageable subset of sea states and understand how a device would perform in each of these. Interested to hear your thoughts on the correct approach to this.

H0R5E commented 3 years ago

> The N division method in my understanding allows a developer to pick a manageable subset of sea states and understand how a device would perform in each of these

OK, this makes sense, but don't we just sum the sea states together again in WecOptTool? Doesn't that sort of negate the point of splitting them up?

> Are you proposing that in your example we would make 1 wave spectra from all the time-series raw data? If so it is not immediately obvious to me that this would be equivalent to tuning to N weighted representative sea states.

It looks like you can get the hourly raw spectra from the buoy data, so my first thought for accurately simulating that environment would be to do some kind of aggregation of probability distributions to build a good single representative spectrum.

EDIT: I guess you could see this as a sort of Bayesian approach, defining the spectra based on the data available.

The risk of creating more spectra from data that has already been processed from spectral data is that you are (unnecessarily) adding error to the inputs. So I think it would be prudent to explain the pros and cons of this approach if we are going to suggest it to WecOptTool users.

EDIT: I would also wonder whether composing 8 BS spectra to represent a single BS spectrum is valid. I'm not sure I know the answer to that, but I guess the classical spectra describe the long-term state of all the waves, not just a subset of them.
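The single-representative-spectrum idea discussed above could be sketched as a probability-weighted average of the hourly raw spectra. Here random placeholder spectra on a shared frequency grid stand in for real buoy data, and all names are illustrative:

```python
import numpy as np

# Hypothetical stack of hourly raw spectra on a common frequency grid.
rng = np.random.default_rng(2)
n_hours, n_freq = 500, 128
f = np.linspace(0.02, 0.5, n_freq)                     # frequency [Hz]
spectra = rng.gamma(2.0, 0.1, size=(n_hours, n_freq))  # S(f) per hour [m^2 s]

# Occurrence probability of each hourly record; equal here, but any
# normalized weighting (e.g. seasonal) slots in the same way.
weights = np.full(n_hours, 1.0 / n_hours)

# Single representative spectrum: the expectation over observed records.
S_rep = weights @ spectra
```

Because the average is taken frequency bin by frequency bin, the result conserves total energy across the record, though it does smear out the distinct sea-state shapes that the clustering approach deliberately preserves.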

ryancoe commented 3 years ago

This method is meant to reduce the computational burden on analyzing site-specific performance. If you pull data from an NDBC buoy with 20 years of record, you will find yourself with thousands of sea states (these are all the points on your scatter plot). In general, you could run simulations for each one of these points and then take the average to get a site-specific performance -- this is rarely practical.

Instead, we find n representative sea states and their probability weightings. Thus we can run only n simulations and use the probability weightings to get a site-specific performance.

In practice this would go like:

  1. Pull data (e.g., from NDBC)
  2. Perform clustering to obtain n representative sea states (each defined by bulk parameters such as significant wave height, peak period, gamma, etc.) and their weightings.
  3. Generate spectra from the n sets of bulk parameters.
  4. Generate time realizations of the n sea states.
  5. Run simulations for each of the n sea states.
  6. Combine/weight the results to find the average performance at the site.
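The steps above can be sketched end to end. Steps 3 to 5 are collapsed into a toy power proxy here, so this only illustrates the weighting logic of step 6, not WecOptTool's actual API; the sea-state values and function names are illustrative:

```python
import numpy as np

# Step 2 output: n representative sea states (Hm0 [m], Tp [s]) and their
# probability weights (illustrative values, not from a real site).
sea_states = [(2.91, 9.5), (2.03, 11.5), (1.43, 9.8)]
weights = np.array([0.40, 0.35, 0.25])

def simulate_power(hm0, tp):
    """Stand-in for steps 3-5 (build spectrum, realize time series, simulate).
    Toy proxy roughly proportional to wave energy flux: Hm0^2 * Tp."""
    return hm0**2 * tp

# Step 6: probability-weighted average of per-sea-state performance.
powers = np.array([simulate_power(h, t) for h, t in sea_states])
mean_power = float(weights @ powers)
```

The key point is that only n = 3 "simulations" run, yet `mean_power` estimates the long-term site average, because the weights carry the occurrence statistics of the full record.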