blahah opened this issue 11 years ago (status: Open)
Hi,
There isn't really a full list of what is going on right now, but it's a good idea, so I've started a wiki page (https://github.com/ash-dieback-crowdsource/data/wiki/Who-is-doing-what) for anyone who chooses to say what they are doing. Not everyone wants to report in this way; some just want to play with the data and see what happens, so we aren't enforcing it. In fact, we are very happy to have overlapping analyses: different people take different approaches to the same problem, and that is very valuable to us.
If what you have planned is quite time- or resource-consuming, feel free to contact me and I'll let you know if I know of anyone doing it already.
The data and analyses that have been pushed back to the repo are all listed in the wiki (https://github.com/ash-dieback-crowdsource/data/wiki) and on our hub for the project at http://oadb.tsl.ac.uk.
Hope this helps,
Dan
Hi Dan,
No problem. My initial plan is:
From there, if the data are suitable:
I have three questions initially:
Looks great, I think improved transcript assemblies from these data would be great.
To answer the questions: I don't think anyone is doing an improved transcript assembly for the ash from these data, so yes, that would be useful (though the Chalara stuff is getting some discussion at the moment; see issue #1). Lots: multiple ash transcriptomes and multiple strains of Chalara from across the continent are being done. Some ash genomics is being done too, though for some of that we might not see the reads quickly, as we know it is being done outside the crowdsourcing effort (which is fair enough, I should add). Yep, we have FTP, but rather than store subsets of reads it's probably better to provide the tool version and command line you used for cleaning and normalising, so that they can be regenerated if needed.
Dear all,
With reference to (2) below, a very preliminary ash genome assembly based on low coverage 454 can now be found here: http://ashgenome.org. We are awaiting Illumina data that should lead to considerable improvements of the assembly.
best wishes
Richard
On 19 Apr 2013, at 09:25, Dan MacLean wrote:
Looks great, I think improved transcript assemblies from these data would be great.
To answer the questions
I don't think anyone is doing an improved transcript assembly for the ash from these data, so yes, that would be useful (though the Chalara stuff is getting some discussion at the moment; see issue #1). Lots: multiple ash transcriptomes and multiple strains of Chalara from across the continent are being done. Some ash genomics is being done too, though for some of that we might not see the reads quickly, as we know it is being done outside the crowdsourcing effort (which is fair enough, I should add). Yep, we have FTP, but rather than store subsets of reads it's probably better to provide the tool version and command line you used for cleaning and normalising, so that they can be regenerated if needed.
Dr Richard Buggs | Senior Lecturer | School of Biological and Chemical Sciences, Queen Mary University of London, E1 4NS, United Kingdom | email: r.buggs@qmul.ac.uk | website: http://www.sbcs.qmul.ac.uk/staff/richardbuggs.html | office: +44(0)207 882 3058 | mobile: +44(0)772 992 0401 | twitter: @RJABuggs
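Dan's suggestion above — recording the tool version and exact command line used for cleaning and normalising, rather than uploading subsets of reads — could be captured in a small provenance file kept alongside the analysis. A minimal sketch follows; the tool names, versions, and parameters are hypothetical examples, not the project's actual pipeline:

```shell
#!/bin/sh
# Write a provenance log so cleaned/normalised reads can be regenerated
# from the raw data instead of being stored on the FTP site.
# All tool versions and parameters below are illustrative placeholders.
cat > CLEANING_PROVENANCE.md <<'EOF'
## Read cleaning and normalisation provenance

- Adapter/quality trimming (Trimmomatic 0.30, example parameters):
  java -jar trimmomatic-0.30.jar PE -phred33 R1.fastq R2.fastq \
      R1.clean.fastq R1.unpaired.fastq R2.clean.fastq R2.unpaired.fastq \
      ILLUMINACLIP:adapters.fa:2:30:10 SLIDINGWINDOW:4:20 MINLEN:50

- Digital normalisation (khmer normalize-by-median.py, example parameters):
  normalize-by-median.py -C 20 -k 20 -N 4 -x 2e9 interleaved.fastq
EOF
echo "Wrote CLEANING_PROVENANCE.md"
```

Checking such a file into the repo next to the assemblies would let anyone regenerate the exact input reads without the storage cost of hosting them.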
Brilliant! Thanks Richard!
Is there a discussion platform for this project somewhere? It's hard to tell from this repo alone what work is currently being done and what contributions I could make.