greenelab / deep-review

A collaboratively written review paper on deep learning, genomics, and precision medicine
https://greenelab.github.io/deep-review/

Current Section Status #188

Closed cgreene closed 6 years ago

cgreene commented 7 years ago

@agitter is no longer updating the outline, and we are no longer accepting new sections.

Of course you should feel welcome to contribute to sections that already exist. We're also looking for people to take primary responsibility for sections. I'll copy an e-mail from @agitter below that has what is, as far as I know, our most up-to-date status:

As described in the intro https://github.com/greenelab/deep-review/blob/master/sections/02_intro.md we’ve broken the paper into Categorize, Study, and Treat sections. Each of these has been outlined, though we welcome suggestions for new sub-sections. There is also a Discussion of general issues pertinent to all three application areas and the future outlook. Here are the primary topics in each section that are unclaimed, as far as I know.

Categorize:

Study:

Treat:

Discussion:

https://github.com/greenelab/deep-review/issues/88 and https://github.com/greenelab/deep-review/issues/2 provide some context on our goals for the review and how we hope to differentiate it from existing papers. We don’t want to enumerate all deep learning papers in biomedicine so some of the Study sub-sections may be cut entirely if there is nothing especially interesting to say about them. To start working on a sub-section, you can create a pull request. https://github.com/greenelab/deep-review/pull/147 is an example of a completed pull request and https://github.com/greenelab/deep-review/pull/174 is one I’m actively working on where I’m still outlining and searching for relevant literature.

Please let us know if you want to discuss anything else specific or else we can take the discussion to GitHub so that others can contribute.

Edit by @agitter @cgreene had good suggestions in #200 that are helpful prompts for anyone starting a topic sub-section

akundaje commented 7 years ago

Yup. Definitely going to help with evaluation and interpretation sections. I'll work on these in the next 2-3 weeks.

agitter commented 7 years ago

Thanks @cgreene. My list above accounts for sections that are currently being written (#174, #183, #191) and @gailrosen volunteering to write about metagenomics.

akundaje commented 7 years ago

@AvantiShri I'm roping you into the Interpretation section for this review. Let's plan and start writing it.

cgreene commented 7 years ago

Chatted with @qiyanjun and @jacklanchantin at PSB and they are going to take a stab at "Transcription factors and RNA-binding proteins"

Edit: Conference was actually PSB! :)

XieConnect commented 7 years ago

I can lead the remaining efforts in the Categorize and Treat sections. I am familiar with these topics and have just made a first attempt at Categorize (sent a PR yesterday).

In addition, I can help @brettbj with his new Data sharing and privacy section later if he needs my help, since this is my primary research.

I previously helped on the Genomics section (led by @agitter) but have not completed it yet. My main obstacle is how to differentiate from existing reviews with extensive coverage of transcription factors, etc. Reading all these alternative reviews took much longer than I expected. External help, as recommended by @cgreene, would be very helpful.

cgreene commented 7 years ago

@XieConnect : agree that the sheer number of papers in many of these domains (and rapid rate of new articles appearing) has become killer. I suggest perhaps even a further divide and conquer approach. @qiyanjun and @jacklanchantin are interested in the TF question at least. Maybe you guys could strategize on how to most effectively divide the literature.

agitter commented 7 years ago

@XieConnect I also advocate spending time on papers that are especially interesting or relevant to our guiding question. I don't think we should feel compelled to cover every single paper in an area since our goal is to address a specific theme and not enumerate all relevant work.

XieConnect commented 7 years ago

@cgreene @agitter Great advice. I'll stand by and aid @qiyanjun and @jacklanchantin later if help is needed. For now, I will wrap up the aforementioned healthcare-related sections first.

gailrosen commented 7 years ago

When do you need this by? I am swamped right now.

-- Gail L. Rosen, Associate Professor Electrical and Computer Engineering Drexel University Webpage/Contact info: http://www.ece.drexel.edu/gailr

gailrosen commented 7 years ago

I am working on something but haven't gone through the GitHub fork/pull process yet.

you can view here: https://docs.google.com/document/d/1I4wbXMil5Td7yX8Ioq7JnBYR7JBmCSQCOaUwAhIOFfo/edit?usp=sharing


agitter commented 7 years ago

@gailrosen I'm not sure what our current deadline is (January 15 is not realistic), but thanks a lot for starting this section. When it's ready for review and comments please proceed with the pull request.

agitter commented 7 years ago

@blengerich will work on 'categorizing patients for clinical decision making' in the Treat section.

agitter commented 7 years ago

I added some prompts from @cgreene to the original post

agitter commented 7 years ago

@blengerich Are you still interested in drafting part of the Treat section?

bdo311 commented 7 years ago

I'm new to this! Happy to work on splicing and single-cell sections if no one's working on them right now.

agitter commented 7 years ago

@bdo311 both those sections are free, and I added you to the list in the first post. Please check out some of the suggested prompts there if you haven't already as you think about organizing the sections.

agitter commented 7 years ago

We're making a serious effort to have a first draft of most of the sections within the next week or so. I updated the outline in the first post to show what has been drafted and what remains untouched.

I'll argue that anything that hasn't been drafted in ~6 months isn't exciting enough to be considered "transformative". These topics can be alluded to in passing and covered very briefly in a couple of sentences. If there are any unclaimed topics you would like to "save", please let us know that you have started working on a draft.

@blengerich, you were interested in drafting something for 'categorizing patients for clinical decision making'. Will you have time in the next week to work on that?

@cgreene I mostly focused on the status of the Study section. Please make updates for Categorize if you have any. I think you and I can write most of the Discussion if no one else jumps in to do it.

blengerich commented 7 years ago

Hi @agitter, sorry for the delay. I've had a bit of trouble finding successful papers to include in the 'categorizing patients for clinical decision making' section. If you would like, in a few days, I can push a draft with a slightly more pessimistic tone that focuses on the challenges underlying this application. However, if you have other ideas, or anyone else would like to take over the section, I am happy to step aside.

agitter commented 7 years ago

@blengerich We are completely willing to take a pessimistic tone on some of these sections. If the state of the area is that things aren't working yet or haven't made a big difference over previous baselines, then this is the message we should deliver. We can still project an optimistic view of future opportunities, if that's warranted.

We'd be grateful to have anything you can contribute. This is an important topic, and if you aren't able to contribute I'm not sure that we have anyone else who can step in before the deadline. Thanks!

blengerich commented 7 years ago

Thanks for the feedback. I have a draft in progress and will push it in a couple of days.

bdo311 commented 7 years ago

I think I can do the remainder of the "study" subsections -- a lot of the content will potentially overlap with what has already been written, so I'll either keep it short or try to reorganize.

agitter commented 7 years ago

@bdo311 Thanks for the additional help. Do you think there is a strong message to deliver in these remaining areas beyond what has been covered in other recent reviews? There have been a lot of deep learning papers on miRNA binding prediction and epigenetics, and we shouldn't try to present all of them. I think it would be most valuable to focus on whether neural networks are being applied to the right problems (e.g. we had earlier discussions on predicting enhancer locations versus enhancer targets), offer such improved performance that new types of biological conclusions can be drawn, have architectures that are particularly well-suited for the data types (beyond 1D convolutions on sequence), etc.

For the variant detection, I suggested #159 and #171 because #159 considers an unusual type of transfer learning that uniquely takes advantage of pre-trained networks for very different problems. That's something that could not be replicated with a different type of classifier. #171 (and its predecessor #99) provides a counterpoint to #159.
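For readers unfamiliar with the "1D convolutions on sequence" idiom mentioned above, here is a hedged, dependency-free sketch: DNA is one-hot encoded and a filter slides along it. The filter weights below are hand-set and hypothetical (real models like DeepBind learn hundreds of filters from data):

```python
# Minimal sketch of a 1D convolution over one-hot encoded DNA.
# The "gata_filter" weights are invented for illustration only.
BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA string as a list of length-4 indicator vectors."""
    return [[1.0 if b == base else 0.0 for base in BASES] for b in seq]

# Filter of width 4; rows = motif positions, columns = weights for A, C, G, T.
# This hand-set filter responds maximally to the subsequence "GATA".
gata_filter = [
    [0.0, 0.0, 1.0, 0.0],  # G
    [1.0, 0.0, 0.0, 0.0],  # A
    [0.0, 0.0, 0.0, 1.0],  # T
    [1.0, 0.0, 0.0, 0.0],  # A
]

def conv1d(seq, filt):
    """Dot the filter against every window of the one-hot sequence."""
    x = one_hot(seq)
    w = len(filt)
    return [
        sum(x[j + i][k] * filt[i][k] for i in range(w) for k in range(4))
        for j in range(len(seq) - w + 1)
    ]

scores = conv1d("TTGATACC", gata_filter)
print(scores)  # the peak marks where "GATA" occurs
```

A PWM scan is essentially this convolution followed by a max-pool, which is why CNNs are such a natural fit for sequence data.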

bdo311 commented 7 years ago

@agitter I will think about those over the coming weekend. My initial feeling is still that most improvements in accuracy are incremental and the real benefit lies in interpretation and integration of datasets -- which we've talked about in some of the sections we've written for 04_study.md. Variant detection should be a different story, though, and I'll read that carefully.

agitter commented 7 years ago

I updated the outline again today.

@blengerich thanks for writing one of the remaining Treat sections. My Treat section contribution on ligand-based chemical screening should be coming this weekend.

@jacklanchantin we have several open TODOs on the first draft of the TF binding section. Do you think you'll have time to work on those in the next week or two per the updated timeline in #310? Specifically, I would like to see us be more critical about what constitutes state of the art results and how impactful deep learning has been in this area. Some evaluation strategies make it seem as if the TF binding prediction problem is solved and others show much more pessimistic performance. There are also a few specific papers we wanted to cover, and maybe even others that aren't in the TODOs.

agitter commented 7 years ago

@jisraeli offered to help with a few remaining sections, especially evaluation. You can see the basic outline we have in 06_discussion.md. Some of the problems with ROC have also come up in individual domains, such as my draft of #313. It would be great to pull in any lessons learned from the DREAM challenge if there is anything we can reference, even a stable URL.

Note that 04_study.md also has a first draft on TF binding. Because you've worked on that topic, it would be great to have your revisions there. My comments directly above summarize some of the open TODOs and there are many related papers listed as GitHub issues in this repo. I'm also wondering if the GitHub URL is the best DragoNN reference or if we should use something else.

There are some contribution suggestions here. You don't have to use the reference tags.

jacklanchantin commented 7 years ago

@agitter, I should be able to finish those TODOs by the 24th. I am leaving in a few days for ICLR in France, so I have some things I need to do before I leave, but I plan to work on it this Thursday.

agitter commented 7 years ago

@jacklanchantin thanks. It may be a good idea to coordinate with @jisraeli, who may also make a round of edits.

jisraeli commented 7 years ago

Re evaluation section - I can't comment much on the DREAM challenge until the results are published. But there are enough papers out now that a close examination of their supplementary sections will reveal where the TFBS deep learning field stands, which should be sufficient for this discussion.

Re referencing DragoNN - we are aiming to put the manuscript up on bioRxiv this month, so we will have something to cite.

agitter commented 7 years ago

@jisraeli I agree we'll be okay without DREAM if those results aren't available. There are indeed plenty of other sources to draw upon, e.g., I believe you've commented on the supplement of #258.

akundaje commented 7 years ago

We are hoping to get a preprint of the DREAM paper out by mid June. If the review is not accepted by then, it would be great to cite the discussion of performance there.

agitter commented 7 years ago

@akundaje That timing should work well. My best guess for our timeline is an initial submission in a few weeks, which means we should still be in review or making revisions when the DREAM preprint comes out. We can add it during our revisions.

akundaje commented 7 years ago

Btw here are some slides from my CSHL Sysbio talk on the DREAM Challenge https://drive.google.com/file/d/0B_ssVVyXv8ZSYWIyWnppRk5ZMDQ/view?usp=sharing

Specifically focus on slides 20-24 for the dramatic differences between performance measures as expected.
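The kind of measure-to-measure gap those slides describe can be reproduced with a toy example (the scores below are invented, not DREAM data): ranking-based AUROC is insensitive to the negative-to-positive ratio, while average precision (area under the PR curve) is not.

```python
# Toy illustration of why performance measures can diverge so dramatically
# on imbalanced genomics data. All scores are made up for this sketch.

def auroc(pos, neg):
    """Probability a random positive outscores a random negative (ties = 0.5)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def average_precision(pos, neg):
    """Mean of precision evaluated at each true positive, scores sorted descending."""
    ranked = sorted([(s, 1) for s in pos] + [(s, 0) for s in neg], reverse=True)
    tp, ap = 0, 0.0
    for rank, (_, label) in enumerate(ranked, start=1):
        if label:
            tp += 1
            ap += tp / rank
    return ap / len(pos)

pos = [0.90, 0.80, 0.70]  # classifier scores for bound sites
neg = [0.85, 0.60, 0.40]  # unbound sites, balanced setting
# Imbalanced setting: each negative appears 10x (tiny offsets break ties),
# mimicking the flood of unbound sites in a genome-wide evaluation.
neg_many = [n - 0.001 * i for n in neg for i in range(10)]

print(auroc(pos, neg), auroc(pos, neg_many))                          # identical
print(average_precision(pos, neg), average_precision(pos, neg_many))  # collapses
```

AUROC stays at 7/9 in both settings, while average precision falls from about 0.81 to about 0.47 under imbalance, which is one mechanism behind evaluation strategies that "make it seem as if the problem is solved".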

qiyanjun commented 7 years ago

I can help with some editing starting next Monday. We did a new survey after the DeepChrome paper:

https://academic.oup.com/bioinformatics/article/32/17/i639/2450757/DeepChrome-deep-learning-for-predicting-gene

Not sure if the topic fits though.


agitter commented 7 years ago

@akundaje I looked at your slides, and if I'm reading it correctly, number 30 is quite profound. For the purposes of this review, it would be hard for us to claim deep learning for TF binding has revolutionized predictive performance if a much simpler model can beat it in the DREAM setting. We'll definitely want to incorporate the DREAM preprint when it is out.

@qiyanjun Thanks for offering to help. DeepChrome is currently referenced in the Gene Expression subsection but not discussed in great detail.

akundaje commented 7 years ago

It's important to note that the 3rd-ranking deep learning model is one particular deep learning model with a specific formulation. There were other deep learning models in the challenge that failed even more miserably. Internally, our deep models outperform the winners. So once again, simply throwing a deep net at a problem does not do much if the formulation is flawed.

There are fundamental issues in the way most of the models are set up that cause them to overfit to sequence features in the training cell types. TFs often have very different partners (cofactors) in different cell types. So if you perfectly capture the sequence features that define the training cell types, giving you excellent cross-validation performance on the training cell type, you will likely fail miserably on the test cell type if the TF has switched cofactors.

Domain adaptation is hence the key. The reason the really simple model that uses only 1 PWM of the target TF works best is that it explicitly avoids overfitting training cell type sequence features by in fact underfitting them. Using the single PWM prevents modeling cofactors altogether. One can clearly do better by using other strategies to adapt to the test cell type. This is what makes the cross-cell-type TF binding prediction problem so intriguing. It's not a classical ML problem of just training a classifier on some data and using it for prediction. You always need to understand the relationship between the training and test cell types and adapt to the differences.

So yes, the vanilla approach of throwing a deep net at the problem does not give the earth-shattering results one may expect. A deep net with a better-designed formulation can go quite far. We are working on a companion paper on our approach, as we clearly could not participate in our own challenge :).

Anshul
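The single-PWM baseline described above can be sketched roughly as follows (the motif probabilities are invented for illustration, and the actual DREAM baseline differs in its details): score every window of the sequence against the PWM's log-odds and keep the best match.

```python
import math

# Rough sketch of scoring a sequence with a single PWM for the target TF.
# The 4-position motif probabilities below are hypothetical placeholders.
BASES = "ACGT"
motif_probs = [  # columns: A, C, G, T; consensus here is "AGGC"
    [0.7, 0.1, 0.1, 0.1],
    [0.1, 0.1, 0.7, 0.1],
    [0.1, 0.1, 0.7, 0.1],
    [0.1, 0.7, 0.1, 0.1],
]
background = 0.25  # uniform background base frequency
pwm = [[math.log2(p / background) for p in row] for row in motif_probs]

def pwm_score(seq: str) -> float:
    """Max log-odds score over all windows; higher means a better motif match."""
    w = len(pwm)
    return max(
        sum(pwm[i][BASES.index(seq[j + i])] for i in range(w))
        for j in range(len(seq) - w + 1)
    )

print(pwm_score("TTAGGCTT"))  # contains the consensus, scores high
print(pwm_score("TTTTTTTT"))  # no match, scores low
```

Because the model sees only the target TF's own motif, it cannot latch onto training-cell-type cofactor sequences, which is exactly the deliberate underfitting described above.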


qiyanjun commented 7 years ago

@akundaje great insight!

We actually published a related paper before, extending a string kernel with domain adaptation for cross-cell-type TFBS prediction:

https://arxiv.org/abs/1609.03490 Transfer String Kernel for Cross-Context DNA-Protein Binding Prediction


akundaje commented 7 years ago

Yes! I remember seeing this paper. Very nice work with early insights into this issue.


agitter commented 7 years ago

@akundaje This is all very interesting. Even without the DREAM preprint, it would be great to incorporate some of this conversation into the first draft of the review, using support from existing literature as much as possible. Then we could add the DREAM reference and make small changes during revision without rewriting the entire section. I expect many readers will have some familiarity with DeepBind and DeepSEA but not a deep understanding of the complexity of the domain and the limitations of standard CNNs.

@jisraeli or @jacklanchantin do you have any interest in this type of edit?

alxndrkalinin commented 7 years ago

I started writing something on Discussion: transfer learning (#129, #330, #331, #332), and am also considering adding a paragraph on multi-modal/integrative DL in the same section (#14, #110, #112, #238).

agitter commented 7 years ago

Thanks @alxndrkalinin, noted in the first post. I think we can be brief about image-to-image transfer learning and don't need to cite too many primary papers. #47 did a good job with it already.

enricoferrero commented 7 years ago

Hello, as discussed in #317 I will try to add a 'Drug repositioning' subsection with a brief overview of deep learning applications in the Treat section. I'll be covering #38, #113, #317, #333 and a few other papers for general context/background.

I'd also like to slightly modify the relevant paragraph in the Introduction to better match this subsection.

Can someone please clarify which papers you were considering for 'Effects of drugs on transcriptomic responses'? I would expect this to substantially overlap with drug repositioning, so I'd be happy to review those papers, see if I've missed something, and, if possible, merge everything into a single subsection.

EDIT: Was it maybe #24? If so, I can see it's now been included in Study (Gene expression). I would then suggest removing 'Effects of drugs on transcriptomic responses' from Treat, unless there was anything else?

agitter commented 7 years ago

@enricoferrero Thanks. I support editing the intro, but for practical purposes I suggest not editing that text directly until we figure out what we're doing with #246. Perhaps add a TODO in Treat that we need to update the intro.

The transcriptomic sub-section was indeed focused on #24 so I agree we can remove it.

bdo311 commented 7 years ago

I've unfortunately been really busy the past 2 weeks, but just submitted an initial draft of the variant calling section. I think I can do a brief review of microRNA binding and whether that is a good focus for deep learning research -- otherwise, likely not going to have time for anything else.

agitter commented 7 years ago

@bdo311 excellent, you've been a huge help. I'll review #344, and I agree that a short overview of miRNA binding is appropriate.

alxndrkalinin commented 7 years ago

See #347 for a draft for Transfer learning section. Review is requested.

Next I'm going to add a couple of things on imaging to Categorize:Imaging applications and Study:Morphological phenotypes as discussed with @cgreene before.

I suggest keeping the empty sections that are important, like evaluation, data limitations, and code, data, and model sharing. Even if the assigned authors don't have time, I believe we can write at least something and not leave them out completely.

agitter commented 7 years ago

Thanks @alxndrkalinin. I agree those Discussion sections are too important to drop. Those may be the only exception to the April 24 drop deadline, though one of us will need to draft something soon.

cgreene commented 7 years ago

I agree with @agitter: I think anything dropped at this stage would be in Treat/Categorize/Study. I agree miRNA binding would be great, @bdo311, if you have time to contribute it.

jacklanchantin commented 7 years ago

@agitter sorry it took me a while to get back to you. I added the TODO revisions for the TFBS section. What's the best way to go about submitting the pull request, since the old one already got merged?

cgreene commented 7 years ago

@jacklanchantin - I think the best workflow is to pull / update your fork's master to the current master. Make a branch off of it. Make the changes to the branch. Then file a pull request back to greenelab's master.
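That workflow can be sketched as a runnable demo using throwaway local repositories in place of GitHub ("upstream" stands in for greenelab/deep-review, "fork" for your fork, and the branch name "tfbs-revisions" is just an example):

```shell
set -e
# Self-contained demo: build a fake upstream repo and a fake fork of it.
tmp=$(mktemp -d) && cd "$tmp"

git init -q upstream
git -C upstream symbolic-ref HEAD refs/heads/master
git -C upstream -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial"

git clone -q upstream fork && cd fork
git config user.email demo@example.com
git config user.name demo

# 1. Bring your fork's master up to date with greenelab's master
git remote add upstream "$tmp/upstream"
git fetch -q upstream
git merge -q upstream/master

# 2. Make a branch off the updated master and commit your changes there
git checkout -q -b tfbs-revisions
echo "revised TFBS section" > sections-note.txt
git add sections-note.txt
git commit -qm "Revise TF binding section"

# 3. Push the branch to your fork, then open a pull request back to
#    greenelab's master on GitHub:
#    git push origin tfbs-revisions
git branch --show-current
```

Keeping each round of edits on its own branch off a freshly synced master is what lets you file follow-up pull requests cleanly after earlier ones are merged.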

jacklanchantin commented 7 years ago

@cgreene I think (?) I did it.