Open · 2naff opened this issue 1 year ago
`dls = pets.dataloaders(path/'images')`
<< when I run this line, the kernel dies
Was there an error prompt at all before the kernel shut down? And why did you assume it was the memory limit?
Apparently, `torch.device("mps")` on an M1 GPU is the analogue of `torch.device("cuda")` on an Nvidia GPU. I recommend first running this code from the official PyTorch site to see whether MPS works at all:
```python
import torch

# Check that MPS is available
if not torch.backends.mps.is_available():
    if not torch.backends.mps.is_built():
        print("MPS not available because the current PyTorch install was not "
              "built with MPS enabled.")
    else:
        print("MPS not available because the current MacOS version is not 12.3+ "
              "and/or you do not have an MPS-enabled device on this machine.")
else:
    mps_device = torch.device("mps")

    # Create a Tensor directly on the mps device
    x = torch.ones(5, device=mps_device)
    # Or
    x = torch.ones(5, device="mps")

    # Any operation happens on the GPU
    y = x * 2

    # Move your model to mps just like any other device
    model = YourFavoriteNet()
    model.to(mps_device)

    # Now every call runs on the GPU
    pred = model(x)
```
If that doesn't work, I suggest the following alternatives:
```python
# Use the GPU
dls = pets.dataloaders(path/'images', device='mps')
# Or use the CPU
dls = pets.dataloaders(path/'images', device='cpu')
```
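Rather than hard-coding one of these, the device string could be chosen at runtime. A minimal sketch, with `pick_device` as a hypothetical helper (in real code the two flags would come from `torch.backends.mps.is_available()` and `torch.backends.mps.is_built()`):

```python
def pick_device(mps_available: bool, mps_built: bool) -> str:
    """Return 'mps' when usable, otherwise fall back to 'cpu'.

    The flags stand in for torch.backends.mps.is_available()
    and torch.backends.mps.is_built().
    """
    if mps_available and mps_built:
        return "mps"
    return "cpu"

print(pick_device(True, True))   # → mps
print(pick_device(True, False))  # → cpu
```

The result could then be passed straight through, e.g. `pets.dataloaders(path/'images', device=pick_device(...))`, so the same notebook runs on machines with and without MPS.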
Please don't ask for help here. Use the forums.

device: Mac M1 Pro
I have practiced with fastai on Colab so far, but after recently learning about PyTorch's GPU support on Apple silicon, I wanted to run fastai locally.
I created the DataBlock successfully, but the kernel died when I loaded the dataset from the block.
Code:

```python
pets = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    splitter=RandomSplitter(seed=42),
    get_y=using_attr(RegexLabeller(r'(.+)_\d+.jpg$'), 'name'),
    item_tfms=Resize(460),
    batch_tfms=aug_transforms(size=244, min_scale=0.75),
)
```
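As a quick sanity check independent of fastai, the labelling pattern can be tried against a sample filename with Python's `re` module. The filename below is a hypothetical example following the Oxford-IIIT Pet naming scheme, and the pattern is the one fastbook uses with `RegexLabeller`:

```python
import re

# Hypothetical example filename from the pets dataset naming scheme
fname = "great_pyrenees_173.jpg"

# fastbook's labelling pattern: everything before the trailing _<digits>.jpg
label = re.findall(r'(.+)_\d+.jpg$', fname)
print(label)  # → ['great_pyrenees']
```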
No problems up to here.
`dls = pets.dataloaders(path/'images')` << when I run this line, the kernel dies
I also tried changing the memory limit, but it made no difference.
These are the versions I installed using conda:

- fastai 2.7.9
- fastbook 0.0.26
- torch 1.13.0.dev20220807
- torchaudio 0.14.0.dev20220603
- torchvision 0.14.0.dev20220807
What can I do? Or is MPS support still unstable?