mattsherar / Temporal_Fusion_Transform

Pytorch Implementation of Google's TFT

Key Error: 'seq_length' & IndexError: index out of range in self #3

Open jbelisario opened 4 years ago

jbelisario commented 4 years ago

@mattsherar

When running the trainer notebook, in the code block where keys are being added to the 'config' dictionary, a 'seq_length' key was not added.

Do you know what value this key should hold?

When blindly testing with values related to the previous code & dataset (ex. 1000 b/c of max_samples, 192 b/c of time_steps, etc.), I am receiving the following: IndexError: index out of range in self

This happens when running the following piece of code (screenshot omitted).

I will continue to look into the issue but any help would be appreciated.
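For what it's worth, PyTorch raises exactly this `IndexError` when an `nn.Embedding` receives an index greater than or equal to its `num_embeddings`, so a guessed `seq_length` that shifts which values end up in the categorical slot can trigger it. A minimal standalone reproduction (not this repo's code):

```python
import torch
import torch.nn as nn

# "IndexError: index out of range in self" comes from an embedding lookup
# whose input index is >= num_embeddings.
emb = nn.Embedding(num_embeddings=10, embedding_dim=4)

ok = emb(torch.tensor([0, 5, 9]))  # valid: all indices < 10
print(ok.shape)                    # torch.Size([3, 4])

try:
    emb(torch.tensor([10]))        # invalid: 10 >= num_embeddings
except IndexError as e:
    print(e)
```

If the vocab sizes in `config` (e.g. `static_embedding_vocab_sizes`) are smaller than the largest category id in your data, this is the error you get.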

Xanyv commented 4 years ago

Hi! did you implement this model successfully?

jbelisario commented 4 years ago

> Hi! did you implement this model successfully?

I did not. I found a PyTorch implementation I'm trying to figure out

lastlap commented 4 years ago

Try these values and changes:

```python
time_steps = 128
num_encoder_steps = 64
static_cols = ['categorical_id']
num_static = 1

config['encode_length'] = 64
config['seq_length'] = 128

for batch in loader:
    output, encoder_output, decoder_output, attn, attn_weights, encoder_sparse_weights, decoder_sparse_weights = model(batch)
```
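For readers skimming the thread, the values above fit together like this (a sketch using the repo's config key names; the assertion is my own sanity check, not from the repo):

```python
# Assembling the suggested values from the comment above into the
# notebook's config dict.
time_steps = 128
num_encoder_steps = 64
static_cols = ['categorical_id']
num_static = 1

config = {}
config['encode_length'] = num_encoder_steps  # 64
config['seq_length'] = time_steps            # 128

# encode_length must be strictly less than seq_length, leaving room
# for the decoder horizon:
assert config['encode_length'] < config['seq_length']
```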

saberdarkknight commented 3 years ago

I use the following settings which work for me.

```python
id_col = 'categorical_id'
time_col = 'hours_from_start'
input_cols = ['power_usage', 'hour', 'day_of_week', 'hours_from_start', 'categorical_id']
target_col = 'power_usage'
time_steps = 192
num_encoder_steps = 168
output_size = 1
max_samples = 1000
input_size = 5
static_cols = ['categorical_id']
num_static = len(static_cols)
```

```python
static_cols = ['categorical_id']
categorical_cols = ['hour']
real_cols = ['power_usage', 'hour', 'day']
config = {}
config['static_variables'] = len(static_cols)
config['time_varying_categoical_variables'] = 1
config['time_varying_real_variables_encoder'] = 4
config['time_varying_real_variables_decoder'] = 3
config['num_masked_series'] = 1
config['static_embedding_vocab_sizes'] = [369]
config['time_varying_embedding_vocab_sizes'] = [369]
config['embedding_dim'] = 8
config['lstm_hidden_dimension'] = 160
config['lstm_layers'] = 1
config['dropout'] = 0.05
config['device'] = 'cpu'
config['batch_size'] = 64
config['encode_length'] = 168
config['attn_heads'] = 4
config['num_quantiles'] = 3
config['vailid_quantiles'] = [0.1, 0.5, 0.9]
config['seq_length'] = 192
```
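A quick way to sanity-check the relationships in this config (my own assertions, not part of the repo): `seq_length` should equal `time_steps`, `encode_length` should equal `num_encoder_steps`, and their difference is the decoder horizon.

```python
# Sanity checks for the settings above.
time_steps = 192
num_encoder_steps = 168

config = {'seq_length': 192, 'encode_length': 168}

assert config['seq_length'] == time_steps
assert config['encode_length'] == num_encoder_steps

# The decoder forecasts the remaining steps of each window:
decoder_steps = config['seq_length'] - config['encode_length']
print(decoder_steps)  # 24
```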

yangye19960922 commented 3 years ago

> I use the following settings which work for me. […]

Hello, I have tried your settings but still get this error: "The size of tensor a (24) must match the size of tensor b (0) at non-singleton dimension 0". Did you run into a similar problem?
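Not a fix, but some context on the message: a size-0 dimension means an empty tensor reached an elementwise op, often because a data window came out empty (e.g. a series shorter than `seq_length`). The mismatch itself is easy to reproduce outside the model:

```python
import torch

# Adding a length-24 tensor to an empty one fails at dimension 0,
# producing the same error text as above. If you see a size of 0,
# print the shapes of each tensor in the batch before the forward pass.
a = torch.zeros(24)
b = torch.zeros(0)
try:
    a + b
except RuntimeError as e:
    print(e)
```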