Closed: bellerofonte closed this issue 4 months ago
I have managed to fix that issue by recreating the Python environment in a different way:
```bash
conda create -n tsai-py12 python=3.12
conda activate tsai-py12
conda install ipykernel ipywidgets
python -m pip install tsai
python -m pip uninstall scikit-learn
python -m pip install scikit-learn==1.4.0
```
The rollback to `scikit-learn==1.4.0` was necessary because otherwise I got `ImportError: cannot import name '_get_column_indices' from 'sklearn.utils'` when calling `from tsai.all import *`.
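As a quick sanity check that the pin took effect, the following can be run in the new env (a minimal sketch, assuming the commands above completed without errors):

```python
# Sanity check for the recreated environment (assumes the reinstall above succeeded).
import sklearn
print(sklearn.__version__)  # expected: 1.4.0

# This import is what previously raised the ImportError; it should now succeed.
from tsai.all import *
```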
I am trying to launch sample notebooks on my M1 Max.
Unfortunately, I am constantly getting a dead kernel when it tries to instantiate `TSDatasets`. For example, in the *Time Series data preparation* notebook the kernel dies on executing the `TSDatasets` cell, and in the *How to use Transformers with Time Series?* notebook it dies at the same point.
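For reference, the failing cells instantiate `TSDatasets` roughly like this (a hypothetical minimal sketch modeled on the tsai tutorials, not the exact notebook code; `get_UCR_data` and `TSClassification` come from `tsai.all`):

```python
# Hypothetical minimal repro, modeled on the tsai tutorials (not the exact notebook cell).
from tsai.all import *

X, y, splits = get_UCR_data('LSST', split_data=False)  # sample UCR dataset
tfms = [None, TSClassification()]                      # input / target transforms
dsets = TSDatasets(X, y, tfms=tfms, splits=splits)     # the kernel dies around this call
```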
I have tried to debug these cells, but the kernel dies before it even enters those lines.
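Because the crash takes the whole kernel down, one way to get more information is to run the failing cell as a plain script with `faulthandler` enabled, which prints a traceback even on a hard crash such as a segfault (a minimal sketch; the actual cell contents are whatever code triggers the crash):

```python
# Run the crashing code outside Jupyter with faulthandler enabled,
# so even a segfault produces a traceback on stderr.
import faulthandler
faulthandler.enable()

from tsai.all import *
# ... paste the failing notebook cell here ...
```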
Jupyter's log shows the following:
`computer_setup()` outputs the following:

I am running these notebooks under a `conda` env which has been created as follows:

The same issue occurs when using Python `3.10` and `3.11`.
Finally, I have tried to run a plain `torch` example to see if it works on my M1, and it succeeded:
```python
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torchvision import datasets, transforms

# Define the neural network
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(784, 128)  # Input layer (28x28 images flattened to 784 pixels)
        self.fc2 = nn.Linear(128, 64)   # Hidden layer
        self.fc3 = nn.Linear(64, 10)    # Output layer (10 classes for MNIST digits)

    def forward(self, x):
        x = x.view(-1, 784)  # Flatten the input tensor
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

# Load the MNIST dataset
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5,), (0.5,))])
trainset = datasets.MNIST('data/', download=True, train=True, transform=transform)
testset = datasets.MNIST('data/', download=True, train=False, transform=transform)

# Create data loaders
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=False)

# Create an instance of the neural network
net = Net()

# Define the loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.01, momentum=0.9)

# Train the network
epochs = 5
for epoch in range(epochs):
    running_loss = 0.0
    for inputs, labels in trainloader:
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    print(f'Epoch {epoch+1}, Loss: {running_loss/len(trainloader):.4f}')

# Test the network
correct = 0
total = 0
with torch.no_grad():
    for inputs, labels in testloader:
        outputs = net(inputs)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print(f'Accuracy on test set: {100 * correct / total:.2f}%')
```

Any ideas what's going wrong here?