ELIFE-ASU / PyInform

A Python Wrapper for the Inform Information Analysis Library
https://elife-asu.github.io/PyInform
MIT License

pyinform.error.InformError: an inform error occurred - "negative state in timeseries" #34

Open fishbacp opened 2 years ago

fishbacp commented 2 years ago

I have two simple time series, xs and ys, having 5000 samples each. I attempted to compute the transfer entropy via

T=transfer_entropy(xs, ys, k)

using various history lengths, k. Each attempt yielded the following error message:

Traceback (most recent call last):
  File "/Users/fishbacp/Desktop/Python2022/transfer_entropy.py", line 16, in <module>
    T=transfer_entropy(x_source,x_target,1)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/pyinform/transferentropy.py", line 222, in transfer_entropy
    error_guard(e)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/pyinform/error.py", line 63, in error_guard
    raise InformError(e, func)
pyinform.error.InformError: an inform error occurred - "negative state in timeseries"

Any insights as to the error source? Should I be adjusting other keyword arguments?

Lszzz commented 1 year ago

Have you solved the problem?

jakehanson commented 1 year ago

Hi there,

The error means that your observations must not contain negative values. Inform enumerates the discrete states of a system, so a negative value is unexpected; in particular, a negative value throws off the calculation of the base (the number of distinct states) used for the logarithm.

If your observations contain negative values, the solution is simply to remap the observed values to non-negative integers using pyinform.utils.coalesce_series.

For example:

from pyinform import transfer_entropy
from pyinform.utils import coalesce_series

xs = [0, -1, -1, -1, -1, 0, 0, 0, 0]
ys = [0, 0, 1, 1, 1, 1, 0, 0, 0]

# remap states to 0..b-1; b is the number of distinct states
coal_xs, b = coalesce_series(xs)
transfer_entropy(coal_xs, ys, k=2)

returns the correct answer:

0.6792696431662097

In general, all that matters for information-theoretic calculations is the distribution of states, not the actual values of the states. So the entropy of xs = [-1, -1, 0] is the same as that of [+1, +1, 0], since both yield the probability distribution [2/3, 1/3].
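A quick check of that claim, using a plain-numpy entropy helper (not part of pyinform) so that negative state labels are allowed:

```python
import numpy as np

def empirical_entropy(xs):
    # Shannon entropy (base 2) of the empirical state distribution;
    # only the counts matter, never the state labels themselves
    _, counts = np.unique(xs, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

empirical_entropy([-1, -1, 0])  # identical to empirical_entropy([+1, +1, 0])
```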

One last thing to note is that your observations should consist of discrete states. If you are working with continuous-valued observations, you will want to bin them first using pyinform.utils.binning.
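As a sketch of the binning idea in plain numpy (pyinform.utils.binning provides this directly; the signal below is made up for illustration):

```python
import numpy as np

# hypothetical continuous-valued signal
xs = np.sin(np.linspace(0.0, 2.0 * np.pi, 50))

# discretize into 3 equal-width bins over the observed range
n_bins = 3
edges = np.linspace(xs.min(), xs.max(), n_bins + 1)

# np.digitize maps values into bins 1..n_bins (the maximum lands in
# bin n_bins + 1), so shift to 0-based states and clip the top edge
states = np.clip(np.digitize(xs, edges) - 1, 0, n_bins - 1)
```

The resulting states are non-negative integers in [0, n_bins), which is exactly the form pyinform expects.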