anwarMZ closed this issue 4 years ago
Thanks a lot for the bug report @anwarMZ. Would you be able to share an SRP or SRR so that I can reproduce it at my end?
I just pushed https://github.com/saketkc/pysradb/commit/b1fa5d6c7c6792abaf2ffccddc9af8e481209abb which might fix it. Can you try with the version on master?
conda create -y -n pysradb_fix && conda activate pysradb_fix && pip install git+https://github.com/saketkc/pysradb.git
Thanks for the prompt reply, I have attached the file of study accessions here - SRA_srp.txt
Thanks for the SRP list. I will update here once I have a proper fix.
The last fix works. Here is an example with your SRP list: https://colab.research.google.com/drive/1pNeuZJjjHliYFk582kGNRpGJ1Fa2h9cn?usp=sharing
Let me know if you still face any errors. I prefer giving it a few seconds of sleep time to make sure it doesn't hit NCBI's API limits.
Hi @saketkc This works well for querying the IDs. However, in this case it creates separate files for each query. In my case, I would like to have one combined file for all SRP queries. But I am not sure if the except can catch the error if the list is passed directly. Any thoughts?
You should be able to concat the dataframes using pandas:
master_df = pd.concat([df1, df2, df3, ...])
It is possible to query multiple SRPs at once; however, given NCBI's API limits, it might time out if there are multiple SRRs (100s of them, as in this case).
Sure, so I just wanted to confirm that querying multiple (100s of) IDs at once doesn't work with NCBI's API. Thank you for answering all my queries. I have a quick question - for IDs where certain metadata is missing, does it still create a column for it and leave the cell empty? Because when concatenating, we need to make sure that two files don't have varying columns and column order.
That's correct.
The only scenario in which this is not true is when you request detailed metadata via sra_metadata(srp, detailed=True). But you can still concat the dataframes with pd.concat(sort=False).
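Since detailed metadata can have different column sets per study, here is a small illustration with made-up columns showing how pd.concat with sort=False aligns them and fills the gaps with NaN:

```python
import pandas as pd

# two frames with partially overlapping columns, as can happen with detailed=True
df1 = pd.DataFrame({"run_accession": ["SRR1"], "organism": ["Homo sapiens"]})
df2 = pd.DataFrame({"run_accession": ["SRR2"], "host": ["Yersinia"]})

# sort=False keeps the columns in order of first appearance;
# cells missing from either frame become NaN
combined = pd.concat([df1, df2], sort=False, ignore_index=True)
print(combined.columns.tolist())  # ['run_accession', 'organism', 'host']
```

So varying columns across studies are not fatal for concatenation; the union of columns is kept and missing values are marked NaN.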
Closing this, feel free to reopen if you still encounter issues.
It worked well for me when we last spoke, but now I am gradually increasing my list to fetch metadata and I am facing an issue. The problem is that when a certain study accession for some reason doesn't fetch metadata, it takes a long time to catch the exception and move on to the next one. For example, in the current loop as we discussed - here in Colab - it stalls on the following IDs and takes significant time to get past them.
In this case I checked that, for example, these accession IDs have had issues:
SRP040281
SRP046387
ERP000171
Also, after looking at #47 I tried updating pysradb to v0.10.5.dev0 (after commit 6904315).
Thanks,
Zohaib
Thanks for reporting @anwarMZ, I will be taking a look at it later tomorrow.
Thanks! Saket
Hi @saketkc, did you get a chance to reproduce the error?
Cheers, Zohaib
Sorry about the delay in responding. I am able to obtain results for the first two of these IDs. The problem with the third ID, ERP000171, is a missing organism tag (which ideally should have been Yersinia). I will have a fix for this soon, but this is not really a bug at the pysradb end.
Also, SRP040281 has 120k+ records, so it takes approximately 7 minutes on Colab to fetch, which I think is reasonable.
Okay, I was trying to get details about the host species, which only come with the detailed flag, e.g. db.sra_metadata(srp, detailed=True). In this case, when I got an error on one of the accession IDs, it just froze for a significant time. But good to see I can now estimate the time for each. Thanks
Yes, for a project with a lot of runs, the metadata retrieval time will increase (though only linearly, as you can see in the last Colab notebook). The detailed mode adds additional overhead; I haven't done any benchmarking, but it should take at least 2x the time of the non-detailed mode.
I have fixed the issue with ERP000171, so I am closing this. Please feel free to reopen if you face any issues. For projects with a lot of runs, you can expect it to take ~0.004 * nrecords seconds on Colab in non-detailed mode.
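The rule of thumb above can be written as a tiny helper. The 0.004 s/record constant is the Colab estimate from this thread; the 2x factor for detailed mode is the rough guess from the previous comment, not a benchmark:

```python
def estimated_fetch_seconds(nrecords, detailed=False):
    # ~0.004 s per record on Colab in non-detailed mode (thread estimate);
    # detailed mode assumed to take at least 2x as long
    base = 0.004 * nrecords
    return base * 2 if detailed else base


# SRP040281 has 120k+ records: roughly 8 minutes in non-detailed mode,
# close to the ~7 minutes observed on Colab
print(estimated_fetch_seconds(120_000) / 60)
```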
Hi again @saketkc, thank you for the insights, I managed to get this done. I am now trying to download the SRA files for the fetched metadata. I used the example mentioned here in the ipynb. I am running this script as a job on a Sun Grid Engine based cluster and the script ended with an error:
self.retrieve()
File "/home/zohaib/.conda/envs/pysradb/lib/python3.8/site-packages/joblib/parallel.py", line 921, in retrieve
self._output.extend(job.get(timeout=self.timeout))
File "/home/zohaib/.conda/envs/pysradb/lib/python3.8/site-packages/joblib/_parallel_backends.py", line 542, in wrap_future_result
return future.result(timeout=timeout)
File "/home/zohaib/.conda/envs/pysradb/lib/python3.8/concurrent/futures/_base.py", line 439, in result
return self.__get_result()
File "/home/zohaib/.conda/envs/pysradb/lib/python3.8/concurrent/futures/_base.py", line 388, in __get_result
raise self._exception
FileNotFoundError: [Errno 2] No such file or directory: '/projects/NCBI_seqdata/pysradb_downloads/SRP251618/SRX8624823/SRR12100406.sra.part'
With this the process was killed. I would like to know if you have any idea about this? I believe it could be because the API timed out and needs a time delay between successive downloads? Also, is there a way to skip files that are already downloaded?
Thank you
The download method first downloads to a temporary location, which in this case is pysradb_downloads/SRP251618/SRX8624823/SRR12100406.sra.part (notice the .part extension). Downloads are resumable by default. Once a download finishes, the .part extension is removed to mark it complete.
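Given the .part convention described above, a wrapper script could skip runs whose download already finished by checking for the final file without a leftover .part. A minimal sketch (the path is the one from the traceback; the helper name is mine, not a pysradb API):

```python
import os


def is_download_complete(sra_path):
    """A finished pysradb download is the final .sra file with no .part sibling."""
    return os.path.exists(sra_path) and not os.path.exists(sra_path + ".part")


# hypothetical check using the path from the traceback above
sra_file = (
    "/projects/NCBI_seqdata/pysradb_downloads/"
    "SRP251618/SRX8624823/SRR12100406.sra"
)
if is_download_complete(sra_file):
    print("already downloaded, skipping")
```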
In this case, the error seems likely to arise because the parallel module gets confused when this particular file has already been downloaded (it thinks it hasn't been, but its download is probably already complete). You should already have SRR12100406.sra.
Please feel free to open a new issue otherwise.
Thanks, Saket
Thanks, I will open a new issue to discuss downloading.
pysradb version: 0.10.4
Python version: 3.8.3
OS version: 10.15.5
Using an anaconda environment and a pip installation of pysradb.
Description
I came across pysradb to extract the metadata for a batch of SRA runs (~9K). I tried two different approaches; however, both gave different errors, likely because of a missing value on SRAweb, but I am not sure how an error can be ignored so the run can move forward.

1st Method
I tried to convert the 9K SRA run accessions to SRA study IDs using srr_to_srp, and then searched approx. 500 accession IDs against SRAweb.
Error
2nd Method
In this case I tried to run all 9K SRA run accessions directly against SRAweb.
Error
Thanks in advance, looking forward to hearing from you. Zohaib