Usually the library runs fine (I often boot up fresh cloud GPU Ubuntu 16.04 instances on many providers), but on one occasion, on an instance from SnarkAI in particular, I saw this error:
`'bytes' object has no attribute 'encode'`
caused by this line: https://github.com/salesforce/pytorch-qrnn/blob/daadb0f39a1811128df7eb03933f286aa5e319ed/torchqrnn/forget_mult.py#L102, which calls the constructor in pynvrtc.
Upon further inspection, it seems that pynvrtc also performs an encode() of its own:
https://github.com/NVIDIA/pynvrtc/blob/fffa9f6f4a7ee1d452346cbdf68b84b5246ccffb/pynvrtc/interface.py#L200
This calls the encode_str function, which encodes the string into UTF-8 bytes when running under Python 3.
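For what it's worth, the failure can be reproduced without CUDA or pynvrtc at all. If the caller already did `.encode()` and the library then encodes again, the second call lands on a bytes object, which has no `.encode()` method in Python 3 (the kernel string below is just a stand-in):

```python
# Minimal reproduction of the reported error, independent of pynvrtc:
# str.encode() returns bytes, and bytes has no .encode() in Python 3,
# so encoding twice raises the AttributeError quoted above.
kernel_source = 'extern "C" __global__ void noop() {}'  # stand-in kernel string

encoded_once = kernel_source.encode("utf-8")  # bytes -- this is fine
try:
    encoded_once.encode("utf-8")              # second encode on bytes
except AttributeError as e:
    print(e)                                  # 'bytes' object has no attribute 'encode'
```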
However, I'm running Python 3.6 on all machines...
Removing the encode() in QRNN seems to have made it work on that machine for me, but I have to wonder: 1. why did only that machine have the issue, and 2. would removing the .encode() in QRNN be alright for all other cases?
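On question 2, one defensive option (a sketch only; `to_bytes` is a hypothetical helper, not part of either library) is to encode only when the value is still a str, so the call works regardless of whether the installed pynvrtc version encodes internally:

```python
def to_bytes(source):
    """Return UTF-8 bytes for str input; pass bytes through unchanged.

    Hypothetical compatibility shim: safe whether the downstream library
    expects str (and encodes itself) or already-encoded bytes.
    """
    if isinstance(source, bytes):
        return source
    return source.encode("utf-8")

print(to_bytes("kernel"))    # b'kernel'
print(to_bytes(b"kernel"))   # b'kernel' (unchanged)
```

That said, simply dropping the `.encode()` in QRNN, as the PR linked below suggests pynvrtc now handles, would break against older pynvrtc versions that expect bytes, so a guard like this may be the more portable fix.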
Edit: Apparently this is caused by this recently merged PR: https://github.com/NVIDIA/pynvrtc/pull/2. Shouldn't QRNN be updated accordingly?