KittenCN / stock_prediction

A general stock prediction model based on neural networks
https://www.coderfan.com
GNU General Public License v3.0

DataLoader Error #23

Closed: Antoniowu closed this issue 8 months ago

Antoniowu commented 1 year ago

Running predict.py, I ran into two problems. I asked someone knowledgeable, and it turns out the DataLoader has to collate the samples at the end: if batch_size is set, that is where the batch is merged, and if the sample dimensions are not uniform, the collation raises an error. Could you point me to where in the code the arrays and tensors are built?

1. RuntimeError: Trying to resize storage that is not resizable. (A minimal repro and a padding collate_fn sketch are at the end of this comment.)

        File "C:\stock\stock_prediction-master\predict.py", line 760
          train(epoch+1, train_dataloader, scaler, ts_code, test_queue)
        File "C:\stock\stock_prediction-master\predict.py", line 37, in train
          for batch in dataloader:
        File "C:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 628, in __next__
          data = self._next_data()
        File "C:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 1333, in _next_data
          return self._process_data(data)
        File "C:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 1359, in _process_data
          data.reraise()
        File "C:\ProgramData\Anaconda3\lib\site-packages\torch\_utils.py", line 543, in reraise
          raise exception
      RuntimeError: Caught RuntimeError in DataLoader worker process 1.
      Original Traceback (most recent call last):
        File "C:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\_utils\worker.py", line 302, in _worker_loop
          data = fetcher.fetch(index)
        File "C:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\_utils\fetch.py", line 61, in fetch
          return self.collate_fn(data)
        File "C:\stock\stock_prediction-master\common.py", line 840, in custom_collate
          return torch.utils.data.dataloader.default_collate(batch)
        File "C:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\_utils\collate.py", line 265, in default_collate
          return collate(batch, collate_fn_map=default_collate_fn_map)
        File "C:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\_utils\collate.py", line 143, in collate
          return [collate(samples, collate_fn_map=collate_fn_map) for samples in transposed]  # Backwards compatibility.
        File "C:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\_utils\collate.py", line 143, in <listcomp>
          return [collate(samples, collate_fn_map=collate_fn_map) for samples in transposed]  # Backwards compatibility.
        File "C:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\_utils\collate.py", line 120, in collate
          return collate_fn_map[elem_type](batch, collate_fn_map=collate_fn_map)
        File "C:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\_utils\collate.py", line 162, in collate_tensor_fn
          out = elem.new(storage).resize_(len(batch), *list(elem.size()))
      RuntimeError: Trying to resize storage that is not resizable

2. Inspecting the code, I found the definition below. Everything after the `r` appears to be commented out, so is it wrong that the code references this class?

        class BertForSequenceClassification(BertPreTrainedModel):
            r"""
                labels: (optional) torch.LongTensor of shape (batch_size,):
                    Labels for computing the sequence classification/regression loss.
                    Indices should be in [0, ..., config.num_labels - 1].
                    If config.num_labels == 1 a regression loss is computed (Mean-Square loss),
                    If config.num_labels > 1 a classification loss is computed (Cross-Entropy).

            Outputs: Tuple comprising various elements depending on the configuration (config) and inputs:
                loss: (optional, returned when labels is provided) torch.FloatTensor of shape (1,):
                    Classification (or regression if config.num_labels==1) loss.
                logits: torch.FloatTensor of shape (batch_size, config.num_labels):
                    Classification (or regression if config.num_labels==1) scores (before SoftMax).
                hidden_states: (optional, returned when config.output_hidden_states=True)
                    list of torch.FloatTensor (one for the output of each layer + the output of the embeddings)
                    of shape (batch_size, sequence_length, hidden_size):
                    Hidden-states of the model at the output of each layer plus the initial embedding outputs.
                attentions: (optional, returned when config.output_attentions=True)
                    list of torch.FloatTensor (one for each layer) of shape
                    (batch_size, num_heads, sequence_length, sequence_length):
                    Attentions weights after the attention softmax, used to compute the weighted
                    average in the self-attention heads.

            Examples::

                tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
                model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
                input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)  # Batch size 1
                labels = torch.tensor([1]).unsqueeze(0)  # Batch size 1
                outputs = model(input_ids, labels=labels)
                loss, logits = outputs[:2]
            """

Antoniowu commented 1 year ago

Hi, have you had a chance to look at this issue yet? Thanks!

KittenCN commented 1 year ago

I saw it both in the email and here. I've been swamped with work recently and haven't had time to debug, so it will have to wait a bit.

Antoniowu commented 1 year ago

Sorry for the bother! Whenever you have time.