wdwgonzales opened this issue 2 years ago
Here is another user id (i.e., 1184760821177413632) that worked on my Intel machine but not on my M1 machine:
python3 scripts/m3twitter.py --skip-cache --id 1184760821177413632 --auth scripts/auth.txt
Results for M1 (segmentation fault):
11/20/2021 18:50:57 - INFO - m3inference.m3inference - Version 1.1.5
11/20/2021 18:50:57 - INFO - m3inference.m3inference - Running on cpu.
11/20/2021 18:50:57 - INFO - m3inference.m3inference - Will use full M3 model.
11/20/2021 18:50:57 - INFO - m3inference.m3inference - Model full_model exists at /Users/wdwg/m3/models/full_model.mdl.
11/20/2021 18:50:57 - INFO - m3inference.utils - Checking MD5 for model full_model at /Users/wdwg/m3/models/full_model.mdl
11/20/2021 18:50:58 - INFO - m3inference.utils - MD5s match.
11/20/2021 18:50:58 - INFO - m3inference.m3inference - Loaded pretrained weight at /Users/wdwg/m3/models/full_model.mdl
11/20/2021 18:50:58 - INFO - m3inference.m3twitter - skip_cache is True. Fetching data from Twitter for id 1184760821177413632.
11/20/2021 18:50:58 - INFO - m3inference.m3twitter - GET /users/show.json?id=1184760821177413632
[1] 68546 segmentation fault python3 scripts/m3twitter.py --skip-cache --id 1184760821177413632 --auth
Results for Intel:
11/20/2021 18:50:45 - INFO - m3inference.m3inference - Version 1.1.5
11/20/2021 18:50:45 - INFO - m3inference.m3inference - Running on cpu.
11/20/2021 18:50:45 - INFO - m3inference.m3inference - Will use full M3 model.
11/20/2021 18:50:46 - INFO - m3inference.m3inference - Model full_model exists at /Users/szoriac/m3/models/full_model.mdl.
11/20/2021 18:50:46 - INFO - m3inference.utils - Checking MD5 for model full_model at /Users/szoriac/m3/models/full_model.mdl
11/20/2021 18:50:46 - INFO - m3inference.utils - MD5s match.
11/20/2021 18:50:47 - INFO - m3inference.m3inference - Loaded pretrained weight at /Users/szoriac/m3/models/full_model.mdl
11/20/2021 18:50:47 - INFO - m3inference.m3twitter - skip_cache is True. Fetching data from Twitter for id 1184760821177413632.
11/20/2021 18:50:47 - INFO - m3inference.m3twitter - GET /users/show.json?id=1184760821177413632
11/20/2021 18:50:47 - INFO - m3inference.dataset - 1 data entries loaded.
Predicting...: 0%| | 0/1 [00:00<?, ?it/s]/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at ../c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
Predicting...: 100%|██████████████████████████████| 1/1 [00:07<00:00, 7.20s/it]
{'input': {'description': 'EXO-L (OT9) | WE ARE ONE! EXO! SARANGHAJA! | ELSA🌨❄',
'id': '1184760821177413632',
'img_path': '/Users/szoriac/m3/cache/1184760821177413632_224x224.jpg',
'lang': 'en',
'name': '𝘽𝙚𝙮𝙖 | PHIXO🇵🇭 _ EXO-L for Life',
'screen_name': 'L_1485_EXOs_Bea'},
'output': {'age': {'19-29': 0.082,
'30-39': 0.7598,
'<=18': 0.1569,
'>=40': 0.0013},
'gender': {'female': 0.2303, 'male': 0.7697},
'org': {'is-org': 0.0001, 'non-org': 0.9999}}}
The Python version in general shouldn't cause this. I don't have an M1 laptop handy, but one thing you may try is to narrow down which line/function in scripts/m3twitter.py causes the segmentation fault, which could help us understand the potential cause.
How do you suggest I do this? Maybe the numbers 22412 and 68546 before the errors mean something?
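One low-effort way to narrow this down, using only the standard library (a generic sketch, not specific to m3inference): Python's built-in faulthandler module installs a handler for SIGSEGV and prints the Python-level traceback when the crash happens, which points at the line that triggered it.

```python
# Sketch: put this at the very top of scripts/m3twitter.py (or run the script
# with `python3 -X faulthandler ...`, which has the same effect). On a
# segmentation fault, stderr will show "Fatal Python error: Segmentation fault"
# followed by the current Python traceback.
import faulthandler

faulthandler.enable()  # installs handlers for SIGSEGV, SIGFPE, SIGABRT, SIGBUS

# ... the rest of the script runs unchanged ...
print("faulthandler enabled:", faulthandler.is_enabled())
```

(The numbers 22412 and 68546 are just the crashed process IDs reported by the shell; they don't carry diagnostic information.)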
Figured out how to do it, @zijwang. It would be this block of code:
if args.id != None:
pprint.pprint(m3Twitter.infer_id(args.id, skip_cache=args.skip_cache))
else:
pprint.pprint(m3Twitter.infer_screen_name(args.screen_name, skip_cache=args.skip_cache))
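To pinpoint the exact line inside that block, one generic option (a sketch using only the standard library, not something from this thread) is a line-level trace hook via sys.settrace: every Python line is printed just before it executes, so the last line printed before the segfault is the culprit.

```python
import sys

def line_tracer(frame, event, arg):
    # Print each Python line just before it runs; after a hard crash,
    # the last line printed is the one that triggered it.
    if event == "line":
        print(f"{frame.f_code.co_filename}:{frame.f_lineno}", file=sys.stderr)
    return line_tracer

sys.settrace(line_tracer)
# ... call the suspect code here, e.g. the infer_id/infer_screen_name block ...
sys.settrace(None)  # stop tracing afterwards
```

This is slow and verbose, so it's best wrapped tightly around the suspect call rather than the whole script.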
@computermacgyver You may very well be right. I think it is M1 Silicon related.
There may be an incompatibility issue for those running M3Inference with Apple M1 computers.
I just switched to a newer Apple M1 laptop and tried running m3. For certain ids there are no problems; for most ids, however, I get a segmentation fault (see the error output for (A) below).
I also tried running it on my old laptop (Apple Intel), where every id runs smoothly. Examples can be found below:
Both (A) and (B) run fine on my Apple Intel laptop: (A)
Output for (A)
(B)
Output for (B)
But only (B) works on my Apple M1 laptop:
I get this error for (A)
But not for (B)
Is anyone else encountering the same problem? Am I doing something wrong? Is there a way to fix this?
Possibly helpful information: I used m3inference 1.1.5 on both laptops. The Python version on my Apple M1 is 3.9.7, while the Intel machine runs 3.8.5 (3.8.5 is not supported on the M1). It may or may not be a version issue.
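For what it's worth, a quick way to confirm which architecture each interpreter is actually running on (a hypothetical diagnostic, not from this thread) is the platform module. This matters because on Apple Silicon an x86_64 Python running under Rosetta 2 uses different PyTorch wheels and native extensions than a native arm64 build.

```python
import platform
import sys

# A native interpreter on Apple Silicon reports 'arm64'; an x86_64 build
# running under Rosetta 2 reports 'x86_64'. The architecture determines
# which compiled wheels (e.g. for torch) pip installed.
print("machine:", platform.machine())
print("python :", sys.version.split()[0])
```

Comparing this output on both laptops would show whether the M1 run is native arm64 or translated x86_64.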