fedorov opened this issue 1 month ago
when you say "works", can you confirm if the content returned is compressed or not?
@fedorov when I say it works I mean the proxy doesn't return a 500 error, and given its limit is 32MB it means it gets compressed. The responses come with ~27MB slices as opposed to the 32+MB of when we were going around the proxy.
Perfect, great news! This also seems to indicate, if the comment in that older issue is accurate, that there is no change in the behavior of the GHC backend.
Next, can you please test this with the v1beta1 GHC API?
After that, can you please proceed with testing against the IDC proxy?
@wlongabaugh please remind us which tiers use which version of GHC API. I do not know where to find that information.
Next, can you please test this with v1beta1 GHC endpoint?
Can you provide a server url?
After that, can you please proceed with testing against the IDC proxy?
Hmm... not sure what you mean here. Can you elaborate? I mean, I was testing against a proxy all along. You mean another proxy?
Sorry I missed that you tested against the proxy. We need to know the behavior with both direct access to GHC and via the proxy. While using the proxy, you cannot choose the API version; you can only do it while accessing directly, substituting v1 with v1beta1.
Are you able to download that study and make a Google DICOM store for direct testing, or do you need help with that?
Are you able to download that study and make a Google DICOM store for direct testing, or do you need help with that?
@fedorov I've never created a google DICOM store and I don't have access to IDC's GCP account, which means I'd need help indeed.
Note that while accessing the IDC DICOM store directly - bypassing the proxy - I am able to visualize both series.
I tried using the IDC fork's config file and the GCP server you sent in the message from the quote above, but I'm getting this error when I try to log in
(also I'm not sure that by logging in with the default OIDC config I'd be able to access this specific server anyway)
I was wondering how @pedrokohler would be able to access GHC directly, as he indeed does not have permissions.
The version of the API we are using is in the proxy runtime configuration file that lives in each tier. E.g., for test it is in gs://webapp-deployment-files-idc-test/proxy/proxy_runtime_config.txt.
The dev and test proxies are running v1beta1. Production proxy is still running v1.
@fedorov OK, I have reread my comment you highlighted above: https://github.com/OHIF/Viewers/issues/4382#issuecomment-2465278258. Yes, I originally thought it was the other way around, that we needed the transfer-syntax key/value to make it work. But in fact we needed to drop the key/value to make it work. This all makes sense now. That was my first impression when this came up at the start of 2024: that there was some change in compression that made it work. So drop the transfer-syntax in V3.
transfer-syntax=* means the server may return ANY transfer syntax; omitting transfer-syntax means LEI.
I do not know what LEI is. But can you give us background on why in the past (it appears that) transfer-syntax=* was not used, and later it was added? What triggered that change?
https://www.dicomlibrary.com/dicom/transfer-syntax/
Explicit VR Little Endian
I believe it's a sensible default for OHIF to follow. It tells the server not to bother with conversion and to send whatever is available, as OHIF can support it. This reduces the server's response time. If there's an issue with using *, it indicates a bug in OHIF.
The issue is not a bug in OHIF, but the fact that by providing it in the request, Google will not try to compress the 35MB+ slices it sends out.
This reduces the server's response time. If there's an issue with using *, it indicates a bug in OHIF.
@sedghi The issue in this case is not with OHIF per se, but with the specific combination of OHIF/data/user requirements. To summarize the long thread, we need to have the server compress the data before sending. We have good reasons for that. With the current transfer-syntax=* we cannot satisfy our needs.
Can this be somehow addressed at the OHIF level without us making customizations specific to our fork/use case?
I do not think this use case is limited to IDC.
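To make the behavior under discussion concrete, here is a minimal sketch (not IDC or OHIF code) of the two request variants; the base URL, token, and UIDs are placeholders, and the point observed in this thread is that HTTP-level compression only kicks in for the variant without transfer-syntax=*.

```python
import requests

# Placeholders - substitute a real DICOMweb base URL, bearer token, and UIDs.
BASE = "https://example.com/dicomWeb"
TOKEN = "..."
FRAME_URL = f"{BASE}/studies/<study-uid>/series/<series-uid>/instances/<sop-uid>/frames/1"

ACCEPT_WITH_TS_STAR = 'multipart/related; type="application/octet-stream"; transfer-syntax=*'
ACCEPT_DEFAULT = 'multipart/related; type="application/octet-stream"'

for accept in (ACCEPT_WITH_TS_STAR, ACCEPT_DEFAULT):
    r = requests.get(
        FRAME_URL,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": accept,
            # Advertise HTTP-level compression; the observation in this thread is
            # that the server only applies it when transfer-syntax=* is absent.
            "Accept-Encoding": "gzip, deflate",
        },
    )
    print(accept)
    print("  status:", r.status_code)
    print("  Content-Encoding:", r.headers.get("Content-Encoding"))
    print("  Content-Length (bytes on the wire, approx):", r.headers.get("Content-Length"))
```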
@pedrokohler I created a DICOM store that has the Visible Female XC study: projects/idc-sandbox-000/locations/us-central1/datasets/fedorov-dev-healthcare/dicomStores/visible-female. I gave you all the necessary permissions. You can use this store to test against the DICOM store while bypassing the proxy, and compare behavior between the v1 and v1beta1 GHC API.
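In case it is useful, the direct (proxy-bypassing) DICOMweb base URLs for that store should differ only in the API version segment; a small sketch of the standard Google Healthcare API URL layout (please verify against the GHC docs):

```python
# Google Healthcare API DICOMweb base URL layout (verify against the GHC docs);
# only the API version segment differs between the two endpoints to compare.
STORE_PATH = (
    "projects/idc-sandbox-000/locations/us-central1/"
    "datasets/fedorov-dev-healthcare/dicomStores/visible-female"
)

def dicomweb_base(api_version: str) -> str:
    """Build the DICOMweb root for the test store for a given API version ('v1' or 'v1beta1')."""
    return f"https://healthcare.googleapis.com/{api_version}/{STORE_PATH}/dicomWeb"

print(dicomweb_base("v1"))
print(dicomweb_base("v1beta1"))
```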
@fedorov if you make sure the server already has compressed data stored for that study, then the * should work, no? Can't you call an API in Google and ask for a change in the stored data format?
First, based on the explanation from @dclunie yesterday, compression in this scenario is done at the HTTP level - it does not change the transfer syntax.
Second, one of our principles in IDC is not to modify the original encoding of the images in the files that are ingested into the DICOM server.
BTW I added all transfer syntax test data we had here https://github.com/cornerstonejs/cornerstone3D/pull/1568
It seems the only one we are not supporting is DeflatedExplicitVRLittleEndianTransferSyntax.
I will reiterate that one way to deal with this is to have the proxy look at the URLs being requested, and if it includes one of these problematic studies, it can modify the request headers (ditch transfer-syntax=*) for those studies to kick off the compression.
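A minimal sketch of that idea as Python-style middleware logic; this is hypothetical (the real IDC proxy is a separate codebase and may expose different hooks), and PROBLEM_STUDIES is a made-up allow-list keyed on the study in this issue:

```python
# Hypothetical sketch of the per-study header rewrite idea; not the actual IDC proxy.
PROBLEM_STUDIES = {
    "1.3.6.1.4.1.5962.1.2.0.1677425356.1733",  # Visible Female XC study from this issue
}

def strip_transfer_syntax_star(environ):
    """Drop transfer-syntax=* from the Accept header for known oversized studies,
    so the upstream server falls back to behavior that allows HTTP compression."""
    path = environ.get("PATH_INFO", "")
    accept = environ.get("HTTP_ACCEPT", "")
    if any(uid in path for uid in PROBLEM_STUDIES) and "transfer-syntax=*" in accept:
        parts = [p for p in accept.split(";") if p.strip() != "transfer-syntax=*"]
        environ["HTTP_ACCEPT"] = ";".join(parts).strip()
    return environ
```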
I will reiterate that one way to deal with this is to have the proxy look at the URLs being requested
@wlongabaugh in my list of preferred options, increasing the complexity of a proxy that is specific to IDC is not at the top. An IDC-proxy-based solution will never benefit other users of OHIF Viewer, and I think there is a good chance that this situation is not unique to IDC. In our case, we need this because of the proxy limitations on the buffer size. In other cases users may want a smaller buffer size due to limited network bandwidth.
@sedghi would it be possible to add an option for the client/application to choose whether transfer-syntax=* should be included or not?
I hope the answer is yes, and then the client could perhaps estimate the size of PixelData based on frame size and bit depth, and decide based on that whether it should be included and request server compression. This could also be controllable via a config option. Just one idea. Maybe there is a better way to know what is the size of the object in the original encoding.
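A rough sketch of that estimate using the standard DICOM attributes (the attribute names are the usual keyword forms, and the example numbers below are illustrative, not taken from the study in question):

```python
def estimated_uncompressed_pixel_data_bytes(
    rows: int,
    columns: int,
    bits_allocated: int,
    samples_per_pixel: int = 1,
    number_of_frames: int = 1,
) -> int:
    """Estimate the size of uncompressed PixelData from standard DICOM attributes
    (Rows, Columns, BitsAllocated, SamplesPerPixel, NumberOfFrames)."""
    bytes_per_sample = bits_allocated // 8
    return rows * columns * samples_per_pixel * bytes_per_sample * number_of_frames

# Hypothetical decision rule: request server compression (i.e., omit transfer-syntax=*)
# only when the estimate exceeds the proxy's 32 MB buffer discussed in this thread.
PROXY_BUFFER_BYTES = 32 * 1024 * 1024
# Illustrative 4096x3000 RGB 8-bit single-frame image: ~36.9 million bytes.
size = estimated_uncompressed_pixel_data_bytes(4096, 3000, 8, samples_per_pixel=3)
request_server_compression = size > PROXY_BUFFER_BYTES
print(size, request_server_compression)
```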
Yes, setting it is optional, but if you don't, every compressed study will be decompressed, as far as I know. So, perhaps we should have a per-study configuration or something similar.
@sedghi, you wrote:
Yes, setting it (transfer-syntax=*) is optional, but if you don't, every compressed study will be decompressed, as far as I know.
Short version - I don't think that is a compliant way of receiving compressed frames, even if Google does support it, so you should not be asking for transfer-syntax=*.
TL;DNR:
The details here are a digression from the topic at hand (getting back uncompressed XC images, and getting large ones through the proxy by deflate applied at the Content-Encoding rather than Transfer Syntax level), but it is relevant since it has apparently been observed that requesting multipart/related; type="application/octet-stream"; transfer-syntax=* has an undesirable (and relevant) side effect of blocking Content-Encoding (e.g., as deflate), as distinct from the simpler (and correct) multipart/related; type="application/octet-stream".
If you request application/octet-stream (whether single part or multipart), as I understand it, the standard says you will get back uncompressed bytes, and the Google documentation implies you will get back uncompressed bytes because the default TS is uncompressed (and it will perform decompression as necessary if it can).
However, I don't think it is legal for a DICOMweb server to return in a frame request a compressed bitstream in an application/octet-stream media type. That media type is only to be used for uncompressed data. Rather, the server is required to return image/jpeg for JPEG transfer syntaxes, etc. Otherwise how would you know (without inspecting the compressed byte stream) what scheme is used to compress the byte stream in the response, if it was not signaled in the media type in the returned Content-Type header field? See DICOM PS3.18 8.7.3.3.2 Compressed Bulkdata Media Types.
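For illustration only, a partial mapping of the kind that section describes (a few entries from memory; the normative, complete list is in the cited PS3.18 section):

```python
# Partial, illustrative mapping of transfer syntax UID -> compressed bulkdata
# media type; consult the cited PS3.18 section for the normative and complete list.
COMPRESSED_BULKDATA_MEDIA_TYPES = {
    "1.2.840.10008.1.2.4.50": "image/jpeg",  # JPEG Baseline (Process 1)
    "1.2.840.10008.1.2.4.70": "image/jpeg",  # JPEG Lossless, Non-Hierarchical, SV1
    "1.2.840.10008.1.2.4.80": "image/jls",   # JPEG-LS Lossless
    "1.2.840.10008.1.2.4.90": "image/jp2",   # JPEG 2000 (Lossless Only)
}
# application/octet-stream is reserved for uncompressed (native) pixel data.
```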
I know that the Google server documentation for retrieval of frames says it supports multipart/related; type="application/octet-stream"; transfer-syntax=*, and further that "For application/octet-stream the bulk data will be returned in whatever format it appears in the uploaded DICOM file", but I think this is fundamentally wrong; returning compressed byte streams as application/octet-stream is not what the standard specifies; I don't think you should be requesting that, even if it (sometimes) works.
Perhaps you should be requesting multipart/related; type="*/*" or similar, if you don't care how it is compressed or if it is uncompressed, and the Content-Type in the response should tell you what it actually is. Perhaps even multipart/related; type="*/*"; transfer-syntax=*. However there is a warning in the Google documentation about image/jpeg defaulting to a .70 TS that is not supported, so if the images are lossless JPEG such as TS .50, then perhaps type="*/*"; transfer-syntax=* will freak it out.
How is OHIF figuring out what compression scheme is used when requesting multipart/related; type="application/octet-stream"; transfer-syntax=* (perhaps there is a transfer-syntax parameter on the Content-Type of the response that you are cuing off)?
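For what it is worth, if such a parameter is present, it can be pulled out of a part's Content-Type value with just the Python standard library; a small sketch (the example header value is made up):

```python
from email.message import EmailMessage

def transfer_syntax_from_content_type(content_type):
    """Extract the transfer-syntax parameter, if any, from a multipart part's
    Content-Type header value; returns None if the parameter is absent."""
    msg = EmailMessage()
    msg["Content-Type"] = content_type
    return msg.get_param("transfer-syntax")

# Made-up example of what a compliant compressed-frame part might carry.
example = 'image/jp2; transfer-syntax=1.2.840.10008.1.2.4.90'
print(transfer_syntax_from_content_type(example))  # -> 1.2.840.10008.1.2.4.90
```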
I hope you are not using any TransferSyntaxUID value that might be present in the metadata (if they return Group 0x0002 data elements in the metadata, which they shouldn't) as opposed to (trying to) use QIDO AvailableTransferSyntaxUID in CP 1901. I don't think that Google supports AvailableTransferSyntaxUID though. To state the obvious, if Group 0x0002 TransferSyntaxUID is returned in the metadata, its value will not necessarily match whatever is in the returned pixel data if any transcoding has occurred.
I will do some more experiments wrt. what the Google server does or does not support, but the bottom line is that I think a more robust approach by OHIF to getting back (lossless) compressed pixel data, if that is the form it is available in, may be required if you have felt the need to use multipart/related; type="application/octet-stream"; transfer-syntax=* for that reason and don't have an alternative pathway already.
I would be very interested to hear if you have experience with DICOMweb servers other than Google's that respond to multipart/related; type="application/octet-stream"; transfer-syntax=* with anything other than uncompressed pixel data.
David
PS. I don't think much has changed in this respect since CP 1509 cleaned up the media types, or even the original description in Sup 161; i.e., I don't think we can blame the PS3.18 reorganization for changing anything in this respect wrt. media types for uncompressed versus compressed data. I will go through the development history of WADO-RS and review the discussions of the media types to see if there is anything I have missed.
Describe the Bug
This sample is no longer rendered correctly: https://viewer.imaging.datacommons.cancer.gov/v3/viewer/?StudyInstanceUIDs=1.3.6.1.4.1.5962.1.2.0.1677425356.1733
For comparison, see how it used to work here: https://github.com/OHIF/Viewers/issues/3354#issuecomment-1673389784
To download the files in that study (note - it is ~233GB!), assuming you have python and pip on your system, do the following:
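A minimal sketch of one way to do that, assuming the idc-index Python package (pip install idc-index) and its IDCClient.download_from_selection API; adjust to however you normally pull data from IDC:

```python
# Sketch only: assumes the idc-index package and its IDCClient.download_from_selection
# API; adjust to your preferred IDC download path.
from idc_index import index

client = index.IDCClient()
client.download_from_selection(
    studyInstanceUID="1.3.6.1.4.1.5962.1.2.0.1677425356.1733",  # Visible Female XC study
    downloadDir="./visible-female",  # expect on the order of ~233GB of data
)
```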
Steps to Reproduce
Load the sample study referenced in the above into the viewer.
The current behavior
The first series shows grayscale with multiple slices arranged in a mosaic. The second does not load at all.
The expected behavior
Both should show up in color, as they used to in the past, per the screenshot in https://github.com/OHIF/Viewers/issues/3354#issuecomment-1673389784
OS
macOS
Node version
n/a
Browser
Chrome