Open · Treata11 opened this issue 18 hours ago
By the way, this is the terminal log from running the `generate.py` script:
```
$ python generate.py --cpu
/Users/Treata/Developer/Git/Contribution/Material-Map-Generator/generate.py:70: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  state_dict = torch.load(model_path)
1 ssInput
Converting PyTorch Frontend ==> MIL Ops: 100%|▉| 1502/1503 [00:00<00:00, 8498.18
Running MIL frontend_pytorch pipeline: 100%|█| 5/5 [00:00<00:00, 79.62 passes/s]
Running MIL default pipeline: 100%|████████| 89/89 [00:01<00:00, 47.04 passes/s]
Running MIL backend_mlprogram pipeline: 100%|█| 12/12 [00:00<00:00, 83.94 passes
Model saved as 1x_NormalMapGenerator-CX-Lite_200000_G.mlpackage
Converting PyTorch Frontend ==> MIL Ops: 100%|▉| 1502/1503 [00:00<00:00, 6292.29
Running MIL frontend_pytorch pipeline: 100%|█| 5/5 [00:00<00:00, 78.49 passes/s]
Running MIL default pipeline: 100%|████████| 89/89 [00:01<00:00, 48.21 passes/s]
Running MIL backend_mlprogram pipeline: 100%|█| 12/12 [00:00<00:00, 83.43 passes
Model saved as 1x_FrankenMapGenerator-CX-Lite_215000_G.mlpackage
```
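As a side note, the `FutureWarning` in the log above can be silenced by passing `weights_only=True` to `torch.load`, which is safe when the checkpoint is a plain state_dict. A minimal sketch, using a throwaway `Conv2d` as a stand-in for the real generator checkpoints:

```python
import torch
import torch.nn as nn

# Stand-in model; the real script loads the material-map generator checkpoints.
model = nn.Conv2d(3, 8, kernel_size=3)
torch.save(model.state_dict(), "demo_weights.pth")

# weights_only=True restricts unpickling to tensors and plain containers,
# avoiding arbitrary-code execution from untrusted pickle data.
state_dict = torch.load("demo_weights.pth", map_location="cpu", weights_only=True)
model.load_state_dict(state_dict)
```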
It's worth noting that the converted Core ML models take up half the disk space of the originals. That might be an indication that a layer of the original models was not translated...
Greetings,
I've tested the models on an M1 Mac and they work perfectly fine using the CPU. (They get extremely slow for images with a resolution higher than 512×512 though, which I think is expected...) I plan to use the models in a free iOS app, and in order to do that I have to convert the `.pth` files to Core ML (so that they're compatible with Xcode). Utilizing coremltools, I did the necessary steps explained in the docs to convert the models:

```python
def convert_to_coreml(torch_model, model_name):
    imgShape = (1, 3, 384, 384)
    example_input = torch.rand(*imgShape)  # Example input, needed by jit tracer.
    traced_model = torch.jit.trace(torch_model, example_input)
    # Convert the model: https://apple.github.io/coremltools/source/coremltools.converters.convert.html

# Convert each model to MLPackage/CoreML
for idx, model in enumerate(models):
    names = ['1x_NormalMapGenerator-CX-Lite_200000_G',
             '1x_FrankenMapGenerator-CX-Lite_215000_G']
    convert_to_coreml(model, names[idx])
```