gretelai / gretel-synthetics

Synthetic data generators for structured and unstructured text, featuring differentially private learning.
https://gretel.ai/platform/synthetics

[BUG] train_numpy() got multiple values for argument 'feature_types' - dgan #133

Closed jankrans closed 1 year ago

jankrans commented 1 year ago

I was trying to run the DGAN example code, but I keep getting the same error and can't figure out where it's going wrong.

The code comes from: https://github.com/gretelai/public_research/blob/main/oss_doppelganger/sample_usage.ipynb

import numpy as np
import pandas as pd

from gretel_synthetics.timeseries_dgan.dgan import DGAN
from gretel_synthetics.timeseries_dgan.config import DGANConfig, OutputType, Normalization

attributes = np.random.randint(0, 3, size=(1000,3))
features = np.random.random(size=(1000,20,2))

model = DGAN(DGANConfig(
    max_sequence_len=20,
    sample_len=4,
    batch_size=1000,
    epochs=10,  # For real data sets, 100-1000 epochs is typical
))

model.train_numpy(
    attributes, features,
    attribute_types = [OutputType.DISCRETE] * 3,
    feature_types = [OutputType.CONTINUOUS] * 2
)

synthetic_attributes, synthetic_features = model.generate_numpy(1000)

This code produces the following error:

~\AppData\Local\Temp\ipykernel_19824\3379362806.py in <module>
     19     attributes, features,
     20     attribute_types = [OutputType.DISCRETE] * 3,
---> 21     feature_types = [OutputType.CONTINUOUS] * 2
     22 )
     23 

TypeError: train_numpy() got multiple values for argument 'feature_types'

Environment: Miniconda3 with the following libs installed with pip (used with a Jupyter notebook within VS Code)

jankrans commented 1 year ago

I created a new environment with a fresh install and that resolved it (Python 3.9, gretel-synthetics 0.19.0, numpy 1.23.5). Still, when running the example code with both the example data and my own data, I'm stuck on the following error. I've been looking at the code itself, but I can't understand why it would raise this.

import numpy as np
from gretel_synthetics.timeseries_dgan.dgan import DGAN
from gretel_synthetics.timeseries_dgan.config import DGANConfig
attributes = np.random.rand(10000, 3)
features = np.random.rand(10000, 20, 2)
config = DGANConfig(
    max_sequence_len=20,
    sample_len=5,
    batch_size=1000,
    epochs=10
)
model = DGAN(config)
model.train_numpy(attributes, features)
synthetic_attributes, synthetic_features = model.generate(1000)

producing:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
Cell In [8], line 13
      6 config = DGANConfig(
      7     max_sequence_len=20,
      8     sample_len=5,
      9     batch_size=1000,
     10     epochs=10
     11 )
     12 model = DGAN(config)
---> 13 model.train_numpy(attributes, features)
     14 synthetic_attributes, synthetic_features = model.generate(1000)

File c:\Users\jankr\miniconda3\envs\tf\lib\site-packages\gretel_synthetics\timeseries_dgan\dgan.py:194, in DGAN.train_numpy(self, features, feature_types, attributes, attribute_types)
    191 _check_for_nans(attributes, features)
    193 if not self.is_built:
--> 194     attribute_outputs, feature_outputs = create_outputs_from_data(
    195         attributes,
    196         features,
    197         attribute_types,
    198         feature_types,
    199         normalization=self.config.normalization,
    200         apply_feature_scaling=self.config.apply_feature_scaling,
    201         apply_example_scaling=self.config.apply_example_scaling,
    202     )
...
     90         "feature_types must be the same length as the 3rd (last) dimemnsion of features"
     91     )
     92 feature_types = cast(List[OutputType], feature_types)

IndexError: tuple index out of range

kboyd commented 1 year ago

I'm guessing a bit since the full stack trace is truncated, but the order of arguments to train_numpy looks wrong. The example usage and the actual code are out of sync, I think. Thanks for including the link so I can confirm and get it fixed right away.

Try the following line for the train_numpy call using keyword args instead:

model.train_numpy(attributes=attributes, features=features)
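
For completeness, here's your second snippet end-to-end with that one change applied. This is just a sketch assuming the 0.19.0 API quoted below, and it uses generate_numpy as in the opening post's example:

import numpy as np

from gretel_synthetics.timeseries_dgan.config import DGANConfig
from gretel_synthetics.timeseries_dgan.dgan import DGAN

# Random continuous data: 10000 examples, 3 attributes, 20x2 feature sequences.
attributes = np.random.rand(10000, 3)
features = np.random.rand(10000, 20, 2)

config = DGANConfig(
    max_sequence_len=20,
    sample_len=5,
    batch_size=1000,
    epochs=10,
)
model = DGAN(config)

# Pass both arrays by keyword so they can't be bound to the wrong parameters.
model.train_numpy(attributes=attributes, features=features)

# Generate 1000 synthetic examples, as in the opening post.
synthetic_attributes, synthetic_features = model.generate_numpy(1000)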

And this also explains the original error you saw. From the current source code (which should be exactly what's in version 0.19.0):

    def train_numpy(
        self,
        features: np.ndarray,
        feature_types: Optional[List[OutputType]] = None,
        attributes: Optional[np.ndarray] = None,
        attribute_types: Optional[List[OutputType]] = None,
    ):

So in your opening post, replacing the positional args with their keyword equivalents according to the order above, the function call was effectively:

model.train_numpy(
    features=attributes, feature_types=features,
    attribute_types = [OutputType.DISCRETE] * 3,
    feature_types = [OutputType.CONTINUOUS] * 2
)

Hence the error about feature_types being given twice.
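
With everything passed by keyword (again assuming the 0.19.0 signature above), the opening post's call would look like:

# Corrected call from the opening post: every argument is passed by keyword,
# so attributes and features land on the intended parameters regardless of order.
model.train_numpy(
    attributes=attributes,
    features=features,
    attribute_types=[OutputType.DISCRETE] * 3,
    feature_types=[OutputType.CONTINUOUS] * 2,
)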

kboyd commented 1 year ago

https://github.com/gretelai/public_research/pull/4 fixes function calls in the sample_usage.ipynb notebook.

kboyd commented 1 year ago

Notebook examples are updated now. Thanks for making us aware of these outdated examples!

Closing this issue. Please reopen if there are further problems running the example code.