wyang17 / SQuIRE

Software for Quantifying Interspersed Repeat Expression

Network is unreachable #25

Open vasilislenis opened 5 years ago

vasilislenis commented 5 years ago

Hello,

I am really sorry to bother you again, but I have a new problem when trying to run the fetch phase. The arguments look fine; however, I am getting an error that seems to be related to the connection to UCSC.

```
Downloading Compressed Chromosome files...

Traceback (most recent call last):
  File "/scratch/x.v.l.01/yes/envs/squire/bin/squire", line 11, in <module>
    load_entry_point('SQuIRE', 'console_scripts', 'squire')()
  File "/scratch/x.v.l.01/SQuIRE/squire/cli.py", line 156, in main
    subargs.func(args = subargs)
  File "/scratch/x.v.l.01/SQuIRE/squire/Fetch.py", line 212, in main
    urllib.urlretrieve(chrom_loc1, filename=chrom_name_compressed)
  File "/scratch/x.v.l.01/yes/envs/squire/lib/python2.7/urllib.py", line 98, in urlretrieve
    return opener.retrieve(url, filename, reporthook, data)
  File "/scratch/x.v.l.01/yes/envs/squire/lib/python2.7/urllib.py", line 245, in retrieve
    fp = self.open(url, data)
  File "/scratch/x.v.l.01/yes/envs/squire/lib/python2.7/urllib.py", line 213, in open
    return getattr(self, name)(url)
  File "/scratch/x.v.l.01/yes/envs/squire/lib/python2.7/urllib.py", line 350, in open_http
    h.endheaders(data)
  File "/scratch/x.v.l.01/yes/envs/squire/lib/python2.7/httplib.py", line 1038, in endheaders
    self._send_output(message_body)
  File "/scratch/x.v.l.01/yes/envs/squire/lib/python2.7/httplib.py", line 882, in _send_output
    self.send(msg)
  File "/scratch/x.v.l.01/yes/envs/squire/lib/python2.7/httplib.py", line 844, in send
    self.connect()
  File "/scratch/x.v.l.01/yes/envs/squire/lib/python2.7/httplib.py", line 821, in connect
    self.timeout, self.source_address)
  File "/scratch/x.v.l.01/yes/envs/squire/lib/python2.7/socket.py", line 575, in create_connection
    raise err
IOError: [Errno socket error] [Errno 101] Network is unreachable
```
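For what it's worth, the failing call in `Fetch.py` is a bare `urllib.urlretrieve` with no retry, so a single transient "Network is unreachable" aborts the whole run. If the link is merely flaky rather than fully blocked, a retry wrapper would distinguish the two cases. A minimal sketch (written for Python 3's `urllib.request` rather than SQuIRE's Python 2 `urllib`; the wrapper name and retry counts are my own, not part of SQuIRE):

```python
import socket
import time
import urllib.request  # Python 2's urllib.urlretrieve lives here in Python 3


def retrieve_with_retry(url, filename, attempts=3, delay=5):
    """Download url to filename, retrying on transient socket errors.

    SQuIRE's Fetch step calls urlretrieve exactly once, so any transient
    network hiccup kills the run; retrying a few times works around that.
    """
    for attempt in range(1, attempts + 1):
        try:
            return urllib.request.urlretrieve(url, filename)
        except (IOError, socket.error):  # both are aliases of OSError in Python 3
            if attempt == attempts:
                raise  # still failing after the last attempt: give up
            time.sleep(delay)  # back off briefly before retrying
```

If every attempt fails with `Errno 101`, the node genuinely has no route to the download host, and the problem is cluster networking rather than SQuIRE.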

I have tested downloading a single file from UCSC and it works fine, meaning that my cluster can reach the internet.
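One caveat with that test: on many clusters the login node has outbound internet access but the compute nodes do not, so a manual download can succeed while the batch job fails with exactly this error. A small helper like the one below (my own sketch, not part of SQuIRE) can be run inside the Slurm job to check whether the compute node itself can open a TCP connection to UCSC's download host:

```python
import socket


def can_reach(host, port=80, timeout=5):
    """Return True if a TCP connection to host:port succeeds from this node."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:  # covers DNS failure, refusal, and "Network is unreachable"
        return False


# e.g. at the top of the Slurm job, before running squire Fetch:
# print(can_reach("hgdownload.soe.ucsc.edu"))
```

If this prints `False` from inside a job but `True` on the login node, the fix is on the cluster side (an HTTP proxy module, or running fetch on the login node) rather than in SQuIRE.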

Any possible explanation? Many thanks in advance.

cpacyna commented 5 years ago

Happy to help! Could you include the command you're using to run fetch (squire Fetch ....)? We can troubleshoot from there.

vasilislenis commented 5 years ago

Yes, of course.

So, I have followed the instructions you provide (generated a project folder, copied the sample_scripts inside, and created a tmp folder).

I am activating the virtual environment:

`source activate squire`

And then I am submitting my job like:

`sbatch fetch.sh arguments.sh`

I can send you the fetch.sh script with my scheduler commands and the arguments.sh file with the parameters I am passing, if that would be easier for you.

Thank you very much, Vasilis.


cpacyna commented 5 years ago

Hi Vasilis,

Thanks for sending this! We wrote our sample_scripts for PBS (qsub); I'm sorry if that wasn't clear. I've just uploaded Slurm sample scripts for fetch and clean (in /slurm_sample_scripts) and will add map and count later today so you can see what those scripts should look like. The sbatch run command is included as a comment at the top of each script.
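For anyone landing here later: a Slurm wrapper for fetch is essentially the usual `#SBATCH` headers followed by the activate/run pair. This is my own rough sketch, not the uploaded sample script; the resource numbers, genome build, output folder, and flag list are placeholders, so check `/slurm_sample_scripts` and `squire Fetch --help` for the real ones:

```shell
#!/bin/bash
#SBATCH --job-name=squire_fetch
#SBATCH --cpus-per-task=8
#SBATCH --mem=32G
#SBATCH --time=24:00:00

# Placeholder resources and flags -- adjust to your cluster and build.
source activate squire
squire Fetch -b hg38 -f -c -r -g -x -p 8 -o squire_fetch -v
```

Submitting is then just `sbatch fetch.sh`, with no separate arguments file needed if the parameters are written directly into the script.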

I'm not sure whether that's causing your original issue (perhaps it's running out of memory and reporting it as a lost network connection), but try the Slurm scripts and let me know how it runs.

Regards, Chloe

vasilislenis commented 5 years ago

Hi Chloe,

Thank you very much for the scripts, but unfortunately I am facing the same issue again. It is definitely not a memory issue, since I am allocating the whole memory of the node. Would it be possible to send you my slurm-error file, in case it gives you a better idea of what's going on?

Thank you very much in advance, Vasilis.
