Closed Gaskell-1206 closed 2 years ago
Hi @Gaskell-1206!
Thank you for your pull request and welcome to our community.
In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.
In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (eg your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.
Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with "CLA signed". The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.
If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Facebook open source project. Thanks!
Hello @Gaskell-1206, here is an example of downloading a single file with no extra dependencies. The key is to use the raw.githubusercontent.com URL and the git hash. Then you can get the CSV you want.
```python
import requests

# Download the annotation CSV pinned to a specific git hash so the
# file contents are reproducible.
url = "https://raw.githubusercontent.com/microsoft/fastmri-plus/640500fb6a82257a73d0efe61963e68671bf15f4/Annotations/knee.csv"
request = requests.get(url, timeout=10, stream=True)
fname = "knee640500fb6a82257a73d0efe61963e68671bf15f4.csv"
with open(fname, "wb") as fh:
    # Stream the response to disk in 1 MB chunks.
    for chunk in request.iter_content(1024 * 1024):
        fh.write(chunk)
```
OK. I'll use this approach, which is more convenient than using git clone.
Hi @mmuckley, I updated my code in 6f224cb. I put the annotations into `self.annotated_examples` instead of adding them to `self.examples`. If that's unnecessary, I can replace it. Can you give me some feedback and more guidance for the next step? Thanks!
Hello @Gaskell-1206.
This is starting to look pretty good!
I left some comments, largely on engineering details. I also think we should provide a unit test for this new function - see the `SliceDataset` test in the `tests` directory for an example. You will also want to create your own fake annotation data, I think. Be sure your tests don't require an actual download - use monkey patching.
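A test along those lines might look like the sketch below. The function and column names here are hypothetical stand-ins (the real downloader in the PR may be named differently); the point is that the network call is patched out with `unittest.mock`, so the test never requires an actual download:

```python
import csv
import io
from unittest import mock

# Hypothetical stand-in for whatever function downloads knee.csv in the
# real dataset code; the actual name in the PR may differ.
def fetch_annotation_csv(url):
    import requests
    resp = requests.get(url, timeout=10)
    return list(csv.DictReader(io.StringIO(resp.text)))

# Fake annotation data with the same columns as the real CSV.
FAKE_CSV = (
    "file,slice,x,y,width,height,label\n"
    "file1000000,20,100,150,30,25,Meniscus Tear\n"
)

def fake_fetch(url):
    # Return the fake rows instead of touching the network.
    return list(csv.DictReader(io.StringIO(FAKE_CSV)))

def test_annotations_without_download():
    # Patch the downloader for the duration of the test.
    with mock.patch(__name__ + ".fetch_annotation_csv", side_effect=fake_fetch):
        rows = fetch_annotation_csv("unused://url")
    assert rows[0]["label"] == "Meniscus Tear"
    assert rows[0]["slice"] == "20"
```

The same idea works with pytest's `monkeypatch` fixture if the test suite already uses it.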
Hi @mmuckley.
Thank you for your comments.
I modified the code as you requested and added a unit test for `AnnotatedSliceDataset`. I hope it helps. Thanks!
Also, some tests are failing. Unfortunately, I have to enable the tests manually. If you want to run them locally, you can see the tests we run here.
Thanks! I will try to fix bugs.
Hello @hansenms, this looks nearly ready to go. Would you like to take a look at it? We could wait a week I think.
This looks good. One quick question, which I could not convince myself of 100%: have you taken into account that the images were flipped up-down for annotation (to get them roughly into DICOM orientation)? That has to be taken into account when matching the labels up with the images. See: https://github.com/microsoft/fastmri-plus/blob/main/ExampleScripts/example.ipynb
Hi Michael, thank you for checking the code. Actually, I didn't take this problem into account in the current draft PR. There are two questions I need to confirm with you before working on matching the labels up with the images. First, should cropping be driven by the ISMRMRD metadata rather than hard-coded numbers? Second, if I flip the y coordinate of the annotations (y' = 320 - y - height), will there be any problems? In addition, we will later add a notebook in this PR showing a few examples with the annotations in a new folder in fastmri_examples.
@Gaskell-1206 I am not sure what you mean by the first question. You can see the script that we used to generate the DICOM images here: https://github.com/microsoft/fastmri-plus/blob/main/ExampleScripts/fastmri_to_dicom.py. The default settings in that script capture the processing that was done. On the second question, yes, we flipped one dimension on all images. It was not perfect in terms of getting to DICOM orientation on all images, but it was the closest we could get while doing the same thing to all images. I would still recommend doing a manual smoke test (like the notebook I referenced) to make sure the labels look reasonable.
Trying to help clarify: @hansenms I think for the first question @Gaskell-1206 was asking if you use ISMRMRD metadata for things like image cropping rather than any hard numbers such as 320 x 320 for the knee data.
@Gaskell-1206 as you can see in the fastmri_to_dicom.py script from @hansenms, the answer is "yes", since the annotations are made on the ground-truth data ("reconstruction_rss") that uses the ISMRMRD metadata for cropping. So we should assume that all annotations are made with properly-cropped reconstructions, where the crops are determined by the ISMRMRD metadata.
Thanks @mmuckley, all makes sense. Lemme know if I can help clarify anything but we tried to capture the process by including the scripts we used.
@mmuckley @hansenms Thanks for making it clear. Based on that, I created a draft notebook to visualize annotations and compare different ways of adding annotations. My solution is to change the y coordinate in the labels when incorporating them into `SliceDataset`, instead of flipping the images up-down.
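That coordinate change can be sketched as follows (assuming boxes stored as (x, y, width, height) and an image height of 320, the knee crop size; the 320 is an assumption here and would differ for other data):

```python
def flip_annotation_y(y, box_height, image_height=320):
    """Map a box's y coordinate between the flipped (roughly DICOM-oriented)
    annotation frame and the un-flipped reconstruction frame.

    This is the y' = image_height - y - height adjustment discussed above.
    Applying it twice returns the original coordinate, so the same function
    maps in either direction.
    """
    return image_height - y - box_height
```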
Hello @Gaskell-1206, I really like the visualization. Could you pick an example that's a little more off-center so we can be a bit more certain? Perhaps a meniscal tear?
@mmuckley The notebook is updated. I think it's a good example of a meniscal tear. Do you need another example for the brain data? If so, can you share a test file with me? Thanks.
Okay @Gaskell-1206 I think we're almost ready to merge this. Could you
If you get these two then I will merge.
Hello @Gaskell-1206, some files need to be formatted for me to merge.
https://github.com/facebookresearch/fastMRI/runs/5115661539?check_suite_focus=true
Done.