chickenbestlover / RNN-Time-series-Anomaly-Detection

RNN based Time-series Anomaly detector model implemented in Pytorch.
Apache License 2.0

What's the difference between `total_loss / nbatch` and `total_loss / (nbatch+1)`? #17

Closed · chenshanghao closed this issue 5 years ago

chenshanghao commented 5 years ago

Hi chickenbestlover,

Could you explain why one function has `return total_loss / nbatch` while the other has `return total_loss / (nbatch+1)`?

Thanks, Chauncey

chickenbestlover commented 5 years ago
def evaluate_1step_pred(args, model, test_dataset):
    # Turn on evaluation mode which disables dropout.
    model.eval()
    total_loss = 0
    with torch.no_grad():
        hidden = model.init_hidden(args.eval_batch_size)
        for nbatch, i in enumerate(range(0, test_dataset.size(0) - 1, args.bptt)):

            inputSeq, targetSeq = get_batch(args,test_dataset, i)
            outSeq, hidden = model.forward(inputSeq, hidden)

            loss = criterion(outSeq.view(args.batch_size,-1), targetSeq.view(args.batch_size,-1))
            hidden = model.repackage_hidden(hidden)
            total_loss+= loss.item()

    return total_loss / nbatch  # <-- you mean here? This is my mistake; it should be total_loss / (nbatch + 1)

By the way, the function evaluate_1step_pred has been deprecated and is no longer used to evaluate accuracy.
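For reference, here is a minimal sketch (a toy example with made-up loss values, not the repository's code) of why the divisor should be `nbatch + 1`: `enumerate` starts at zero, so after the loop `nbatch` holds the index of the last batch, which is one less than the number of batches.

    # Toy example with hypothetical per-batch losses, not the repository's code.
    losses = [0.5, 0.4, 0.3, 0.2]
    total_loss = 0.0
    for nbatch, loss in enumerate(losses):
        total_loss += loss

    num_batches = nbatch + 1          # 4 batches, but the last index is 3
    print(total_loss / nbatch)        # ~0.467: off by one, divides by 3
    print(total_loss / num_batches)   # 0.35: the correct mean over 4 batches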

chenshanghao commented 5 years ago

Thank you. Also, it seems the training data is always augmented, while the test data is augmented only when augment_test_data is set.

def preprocessing(self, path, train=True):
    """ Read, Standardize, Augment """

    with open(str(path), 'rb') as f:
        data = torch.FloatTensor(pickle.load(f))
        label = data[:, -1]    # the last column holds the labels
        data = data[:, :-1]    # the remaining columns are the signal channels
    if train:
        # Statistics are computed on the training split only and reused
        # later to standardize the test split.
        self.mean = data.mean(dim=0)
        self.std = data.std(dim=0)
        self.length = len(data)
        data, label = self.augmentation(data, label)
    else:
        # Test data is augmented only when explicitly requested.
        if self.augment_test_data:
            data, label = self.augmentation(data, label)

    data = standardization(data, self.mean, self.std)

    return data, label
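For context, here is a minimal sketch of how those training statistics are typically reused on the test split. The standardization helper shown is my own guess (a plain z-score with a small epsilon), not necessarily the repository's exact implementation:

    import torch

    def standardization(data, mean, std):
        # Plain z-score; the epsilon is an assumption to avoid dividing
        # by zero on constant channels.
        return (data - mean) / (std + 1e-8)

    train = torch.randn(100, 3) * 2.0 + 5.0      # hypothetical training series
    mean, std = train.mean(dim=0), train.std(dim=0)

    test = torch.randn(20, 3) * 2.0 + 5.0        # hypothetical test series
    test_std = standardization(test, mean, std)  # reuse the training statistics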