Closed lcoombe closed 8 months ago
Yes, the output is a bit weird. Here's how I might approach it (apologies for the format; this could use a rewrite). The example below takes the total column, which includes the top 2 recorded predictions from each model, and turns it into a dataframe with Ancestry, Probability, and Model columns.
import ast
import pandas as pd

# Read in the csv
df = pd.read_csv("HG00096_extracted.vcf_HG00096.csv")

models = [
    "gnomAD_continental",
    "gnomAD_eur",
    "gnomAD_eas",
    "1kGP_amr",
    "1kGP_afr",
    "1kGP_eas",
    "1kGP_eur",
    "1kGP_sas",
    "1kGP_continental",
    "SGDP_continental",
]

# Parse one 'total' cell into (ancestry, probability, model) tuples
def explode_total_column_with_models(row):
    items = ast.literal_eval(row)
    result = []
    for model, (ancestries, probabilities) in zip(models, items):
        for ancestry, prob in zip(ancestries, probabilities):
            result.append((ancestry, prob, model))
    return result

# Apply the function and flatten into a list of (ancestry, probability, model) tuples
expanded_data_with_models = df['total'].apply(explode_total_column_with_models).explode().tolist()

# Create a new DataFrame from the expanded data
expanded_df_with_models = pd.DataFrame(expanded_data_with_models, columns=['Ancestry', 'Probability', 'Model'])

# Display the expanded DataFrame with models
print(expanded_df_with_models)
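To make the parsing concrete, here is a minimal standalone version of the same loop run on a made-up 'total' string (truncated to two models for the demo; the real cell has one entry per model in the list above):

```python
import ast

# Hypothetical 'total' cell: one (ancestries, probabilities) pair per model,
# in the same order as the models list.
row = "[(['eur', 'afr'], [0.70, 0.20]), (['AMR', 'AFR'], [0.56, 0.34])]"
models = ["gnomAD_continental", "1kGP_continental"]  # truncated for the demo

result = []
for model, (ancestries, probabilities) in zip(models, ast.literal_eval(row)):
    for ancestry, prob in zip(ancestries, probabilities):
        result.append((ancestry, prob, model))

for record in result:
    print(record)
# first line printed: ('eur', 0.7, 'gnomAD_continental')
```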
Thanks so much @andreirajkovic! That's super helpful!
A follow-up question - after getting a successful run with your provided HG00096 VCF, I moved on to using a VCF generated from a different 1000 Genomes individual. The strange thing is that the results look identical between the HG00096 test and this new individual, which seems wrong to me?
Any idea what could have gone wrong? This is my command:
/usr/bin/time -pv singularity exec -B $PWD:$PWD,/path/to/data/resource_dir/:/path/to/data/resource_dir/ --env PYTHONPATH=/opt/Ancestry/:$PYTHONPATH --env TMP_DIR=$PWD/tmp_snv /path/to/data/bin/snvstory/3.0.1/snvstory_3.0.1.sif python3 -m igm_churchill_ancestry --path ERR3242326_chr21_gatk.vcf --resource /path/to/data/data/resource_dir/ --genome-ver 38 --mode WGS --output-dir /path/to/work/dir/snv_story
The snv_story directory specified by --output-dir was not created, but I found the outputs in tmp_snv/137748c7ce844513b8c18241c281346a/output/
Here are the PDFs generated:
@lcoombe if you could upload the VCF here I could take a look, but my guess is that because you're using a single chromosome (ERR3242326_chr21_gatk.vcf), the model will kinda freak out since most of the features are just zeros. A VCF that contains variants sampled across the whole genome should give a much different view.
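One quick way to check whether a VCF covers the whole genome or just one chromosome is to tally records per CHROM value. A minimal sketch (the filename is just an example; any plain or gzipped VCF should work):

```python
import gzip
from collections import Counter

def chrom_counts(vcf_path):
    """Count variant records per chromosome in a plain or gzipped VCF."""
    opener = gzip.open if vcf_path.endswith(".gz") else open
    counts = Counter()
    with opener(vcf_path, "rt") as fh:
        for line in fh:
            if line.startswith("#"):  # skip header lines
                continue
            counts[line.split("\t", 1)[0]] += 1
    return counts

# A chr21-only VCF would show a single key here:
# print(chrom_counts("ERR3242326_chr21_gatk.vcf"))
```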
Ah ok! I was just testing to make sure my own generated VCF worked OK, but that makes sense. Here's the VCF: ERR3242326_chr21_gatk.vcf.gz
But, I will generate the full genome VCF and see if that works better!
Thanks for your help! Lauren
An update - I generated the whole-genome VCF, and strangely I am still getting identical results (for the 1kGP predictions)?
In case it helps, here's the full VCF I'm using: https://www.bcgsc.ca/downloads/btl/lcoombe/ERR3242326_gatk.vcf.gz (It was too big to attach here)
My command was the same as before - just using the full VCF instead.
Here's the PDF generated - looking the same as the other two from before:
Any other insights you might have would be much appreciated!
Based on my first crack at this, it looks like there are very few variants (~55) intersecting between our feature set (the gnomAD dataset) and your supplied VCF. I'll keep looking into this to see if there is a mistake somewhere on our side, e.g. the features in the resource folder we uploaded are the wrong version, etc.
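An intersection like this can be gauged by comparing (chromosome, position) pairs; a rough sketch, assuming the feature set can be expressed as a set of such pairs (the actual snvstory resource format may differ):

```python
import gzip

def vcf_positions(vcf_path):
    """Collect (CHROM, POS) pairs from a plain or gzipped VCF."""
    opener = gzip.open if vcf_path.endswith(".gz") else open
    positions = set()
    with opener(vcf_path, "rt") as fh:
        for line in fh:
            if line.startswith("#"):  # skip header lines
                continue
            chrom, pos = line.split("\t", 2)[:2]
            positions.add((chrom, int(pos)))
    return positions

# feature_positions = {...}  # hypothetical (chrom, pos) set for the model's features
# overlap = vcf_positions("ERR3242326_gatk.vcf.gz") & feature_positions
# print(len(overlap))
```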
Ok great, thank you! Out of curiosity - do you get the same results with the HG00096 example VCF as I got above? Just thinking that could help narrow down if it's something with how I'm running snvstory vs. an issue with my sample's VCF itself?
That's a good question. The HG00096 example VCF has only 1000 random variants and so it has no intersecting variants with the ancestry specific variants. I did sample 600,000 variants and ran into the same issue. So now I need to do some deeper digging to figure out what is going on.
Okay, I think I figured it out... there is an issue with how we handle the chr prefix in front of the chromosome number, e.g. chr1 vs 1. From what I recall the chr prefix was for hg38 alignments, but this doesn't seem to always be the case. A quick fix for you would be to add chr to each line of your VCF; see the bash code below. Also, this is what I get when I compute your sample:
zcat ERR3242326_gatk.vcf.gz | awk 'BEGIN{FS=OFS="\t"} !/^#/{$1="chr"$1}1' | gzip > modified_file_ERR3242326.vcf.gz
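The same renaming can be done in Python if awk is not handy; a minimal sketch assuming a tab-delimited, gzipped VCF whose CHROM field is the first column:

```python
import gzip

def add_chr_prefix(in_path, out_path):
    """Prepend 'chr' to the CHROM field of every variant record."""
    with gzip.open(in_path, "rt") as src, gzip.open(out_path, "wt") as dst:
        for line in src:
            # Leave header lines and already-prefixed records untouched
            if line.startswith("#") or line.startswith("chr"):
                dst.write(line)
            else:
                dst.write("chr" + line)
```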
Ah ok! I'll give that a shot - what you're getting is somewhat what I'd expect for that sample. So hopefully if I make that formatting change, I'll get the same. I'll let you know - thank you!
That did the trick! With that renaming, I'm getting the same output as you, which is consistent with the sample's 1000 genomes ancestry label.
Thanks again for your help! Lauren
Hello,
Thanks again for your support!
I had a question about the output CSV format. With the HG00096 test data as an example, I was hoping to understand how to parse out the predicted ancestry from the CSV file, for example the 1kGP continental prediction.
This is the full CSV file I get:
Parsing the file, I see this for the 1kGP_continental probabilities:
However, I was unsure from this file alone how to know which probability refers to which super-population. From the PDF plots, it looks like 0.56 refers to "AMR" and 0.34 refers to "AFR", but I was having trouble seeing how to parse that info from the CSV alone.
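For what it's worth, if the 'total' column follows the (ancestries, probabilities) layout used elsewhere in this thread, the labels travel alongside the probabilities and can be zipped together. A hedged sketch with a made-up cell (the real column layout may differ):

```python
import ast

models = ["1kGP_continental"]  # truncated; the real list has ten models
total = "[(['AMR', 'AFR'], [0.56, 0.34])]"  # hypothetical 'total' cell

items = ast.literal_eval(total)
ancestries, probabilities = items[models.index("1kGP_continental")]
by_population = dict(zip(ancestries, probabilities))
print(by_population)  # {'AMR': 0.56, 'AFR': 0.34}
```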
Thank you! Lauren