Neural-Systems-at-UIO / VisuAlign

Online manual: https://visualign.readthedocs.io
MIT License

Export map in original image size or custom resolution #27

Open jjahanip opened 1 year ago

jjahanip commented 1 year ago

I can load large images (30,000x40,000 px) in VisuAlign and align the map without any issues. But when I want to export the map, it is downscaled to the atlas resolution.

Could I have an option to save the map at the original image resolution, or to specify a new resolution or an upscale/downscale ratio?

Tevemadar commented 1 year ago

Yes, the exported overlay has the resolution of the atlas, because that's the data we actually have. It can then be stretched over the actual image.
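As a rough sketch of that stretching step (a tiny synthetic overlay stands in for a real export here, and the target size is hypothetical), nearest-neighbour resampling keeps the label colours intact instead of blending them:

```python
from PIL import Image

# Tiny 4x4 stand-in for an atlas-resolution overlay export; a real one
# would be loaded with Image.open(...) instead.
overlay = Image.new("RGB", (4, 4), (255, 0, 0))
overlay.putpixel((1, 1), (0, 255, 0))

# Stretch to the section's full resolution. NEAREST replicates label
# colours instead of interpolating them into non-existent labels.
stretched = overlay.resize((16, 16), Image.NEAREST)
print(stretched.size)  # (16, 16)
```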

Generating overlays that match the image resolution is possible, but it introduces strange artifacts due to the nonlinear nature of the deformation. As a proof of concept, here is the snippet from NITRC, modified to do that:

import sys
for arg in sys.argv:
    if arg.startswith("series="):
        series=arg[len("series="):]
    if arg.startswith("labels="):
        labels=arg[len("labels="):]
if "series" not in vars():
    print("series=<VisuAlign .json> parameter missing")
    sys.exit()
if "labels" not in vars():
    print("labels=<Exported palette .json> parameter missing")
    sys.exit()

postfix=labels[:labels.rfind(".")].replace(" ","_")

import json
with open(labels) as f:
    labels=[(i["red"],i["green"],i["blue"]) for i in json.load(f)]
print(f"{len(labels)} labels loaded")

with open(series) as f:
    series=json.load(f)

print(f"{len(series['slices'])} sections will be processed")

# Invert a 3x3 matrix via the adjugate; returns None for a singular matrix.
def inv3x3(m):
    det = m[0][0] * (m[1][1] * m[2][2] - m[2][1] * m[1][2])\
        - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])\
        + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0])
    if det == 0:
        return None
    return [[(m[1][1] * m[2][2] - m[2][1] * m[1][2]) / det,
             (m[0][2] * m[2][1] - m[0][1] * m[2][2]) / det,
             (m[0][1] * m[1][2] - m[0][2] * m[1][1]) / det],
            [(m[1][2] * m[2][0] - m[1][0] * m[2][2]) / det,
             (m[0][0] * m[2][2] - m[0][2] * m[2][0]) / det,
             (m[1][0] * m[0][2] - m[0][0] * m[1][2]) / det],
            [(m[1][0] * m[2][1] - m[2][0] * m[1][1]) / det,
             (m[2][0] * m[0][1] - m[0][0] * m[2][1]) / det,
             (m[0][0] * m[1][1] - m[1][0] * m[0][1]) / det]]

# Small helpers: row vector times 3x3 matrix, squared distance, and a
# packed index for the (unordered) edge between vertices a and b.
def rowmul3(v,m):return [sum(v[j]*m[j][i] for j in range(3)) for i in range(3)]
def distsquare(ax,ay,bx,by):return (ax-bx)*(ax-bx)+(ay-by)*(ay-by)
def edgeindex(a,b):i=min(a,b);j=max(a,b);return j*(j-1)//2+i

class Triangle:
    def __init__(self,a,b,c,vlist,elist):
        self.patch=[]
        self.A=vlist[a]
        self.B=vlist[b]
        self.C=vlist[c]
        self.elist=elist
        self.edges=[edgeindex(a,b),
                    edgeindex(a,c),
                    edgeindex(b,c)]
        for edge in self.edges:
            elist[edge]+=1
        ax,ay=self.A[2:4]
        bx,by=self.B[2:4]
        cx,cy=self.C[2:4]
        self.decomp=inv3x3([[bx-ax,by-ay,0],
                            [cx-ax,cy-ay,0],
                            [ax,ay,1]])
        self.minx=min(ax,bx,cx)
        self.miny=min(ay,by,cy)
        self.maxx=max(ax,bx,cx)
        self.maxy=max(ay,by,cy)
        a2=distsquare(bx,by,cx,cy)
        b2=distsquare(ax,ay,cx,cy)
        c2=distsquare(ax,ay,bx,by)
        fa=a2*(b2+c2-a2)
        fb=b2*(c2+a2-b2)
        fc=c2*(a2+b2-c2)
        self.den=fa+fb+fc
        self.Mdenx=fa*ax+fb*bx+fc*cx
        self.Mdeny=fa*ay+fb*by+fc*cy
        self.r2den=distsquare(ax*self.den,ay*self.den,self.Mdenx,self.Mdeny)

    def removeedges(self):
        for edge in self.edges:
            self.elist[edge]-=1
        del self.edges
        del self.elist

    def incircle(self,x,y):
        return distsquare(x*self.den,y*self.den,self.Mdenx,self.Mdeny)<self.r2den

    def intriangle(self,x,y):
        if x<self.minx or x>self.maxx or y<self.miny or y>self.maxy:
            return
        uv1=rowmul3([x,y,1],self.decomp)
        if 0<=uv1[0]<=1 and 0<=uv1[1]<=1 and uv1[0]+uv1[1]<=1:
            return uv1

    def inforward(self,x,y):
        uv1=rowmul3([x,y,1],self.forwarddecomp)
        if 0<=uv1[0]<=1 and 0<=uv1[1]<=1 and uv1[0]+uv1[1]<=1:
            return uv1

# Incremental (Bowyer-Watson style) Delaunay triangulation of the markers,
# framed by four vertices placed outside the section.
def triangulate(w,h,markers):
    vertices=[[-0.1*w,-0.1*h,-0.1*w,-0.1*h],
              [ 1.1*w,-0.1*h, 1.1*w,-0.1*h],
              [-0.1*w, 1.1*h,-0.1*w, 1.1*h],
              [ 1.1*w, 1.1*h, 1.1*w, 1.1*h]]
    edges=[0]*((len(markers)+4)*(len(markers)+4-1)//2)
    triangles=[Triangle(0,1,2,vertices,edges),Triangle(1,2,3,vertices,edges)]
    edges[0]=edges[1]=edges[4]=edges[5]=2
    for marker in markers:
        x,y=marker[2:4]
        found=False
        keep=[]
        remove=[]
        for triangle in triangles:
            if not found and triangle.intriangle(x,y):
                found=True
            if triangle.incircle(x,y):
                remove.append(triangle)
            else:
                keep.append(triangle)
        if found:
            for triangle in remove:
                triangle.removeedges()
        else:
            keep.extend(remove)
        triangles=keep
        vcount=len(vertices)
        vertices.append(marker)
        for i in range(vcount-1):
            for j in range(i+1,vcount):
                if edges[edgeindex(i,j)]==1:
                    triangles.append(Triangle(i,j,vcount,vertices,edges))
    for triangle in triangles:
        triangle.A.append(triangle)
        triangle.B.append(triangle)
        triangle.C.append(triangle)
    for triangle in triangles:
        for t in triangle.A[4:]:
            if t is not triangle and t not in triangle.patch:
                triangle.patch.append(t)
        for t in triangle.B[4:]:
            if t is not triangle and t not in triangle.patch:
                triangle.patch.append(t)
        for t in triangle.C[4:]:
            if t is not triangle and t not in triangle.patch:
                triangle.patch.append(t)
    return {"trs":triangles,"cch":None}

# Map image coordinates (x,y) to atlas coordinates via the containing
# triangle; "cch" caches the last hit, as neighbouring pixels usually
# fall in the same triangle or one of its neighbours ("patch").
def transform(triangulation,x,y):
    uv1=None
    c=triangulation["cch"]
    if c:
        uv1=c.intriangle(x,y)
        if uv1:
            triangle=c
        else:
            for triangle in c.patch:
                uv1=triangle.intriangle(x,y)
                if uv1:
                    triangulation["cch"]=triangle
                    break

    if not uv1:
        for triangle in triangulation["trs"]:
            uv1=triangle.intriangle(x,y)
            if uv1:
                triangulation["cch"]=triangle
                break
    if uv1:
        return (triangle.A[0]
                +(triangle.B[0]-triangle.A[0])*uv1[0]
                +(triangle.C[0]-triangle.A[0])*uv1[1],
                triangle.A[1]
                +(triangle.B[1]-triangle.A[1])*uv1[0]
                +(triangle.C[1]-triangle.A[1])*uv1[1])

import struct,re,PIL.Image,time

for section in series["slices"]:
    starttime=time.time()
    base=section["filename"]
    base=base[:base.rfind(".")]
    width=section["width"]
    height=section["height"]
    print(f"Attempting to process {base}, {width} x {height}")

    triangulation=triangulate(width,height,section["markers"])

    with open(base+"-"+postfix+".flat","rb") as f:
        b,w,h=struct.unpack(">BII",f.read(9))
        data=struct.unpack(">"+("xBH"[b]*(w*h)),f.read(b*w*h))
    print("Overlay",b,"BPP,",w,"x",h)

    image=PIL.Image.new("RGB",(width,height))
    for y in range(height):
        print("\r",height-y," ",end="",flush=True)
        for x in range(width):
            i,j=transform(triangulation,x,y)
            i=i*w/width
            j=j*h/height
            d=0
            if 0<=i<w and 0<=j<h:
                d=data[int(i)+int(j)*w]
            image.putpixel((x,y),labels[d])
    image.save(base+".png","PNG")
    print("\rDone",round(time.time()-starttime),"s")

Usage:

python nonlin.py series=somemouseseries.json labels="Rainbow 2017.json"

Then wait, a lot. For me it ran for about a minute per image for 2000x2000-ish images, but for 20000x15000 images it ran for 1.5 hours. The script needs PIL, but is otherwise vanilla Python. The largest image used for testing was 29984 x 15887, which is just below 500 megapixels. PIL limitations probably kick in very soon above this resolution. That could be circumvented by simply not using PIL, and the speed could be improved a lot by not using Python, but the real problem is that the resulting images probably will not live up to expectations.
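A chunk of that runtime is the per-pixel putpixel() call in the inner loop; a small, hypothetical change that stays within PIL is to collect the colours in a list and write them in bulk with putdata() (shown here on a tiny stand-in palette and label array, not the real export):

```python
from PIL import Image

labels = [(0, 0, 0), (255, 0, 0)]  # stand-in palette (index -> RGB)
data = [0, 1, 1, 0]                # stand-in per-pixel label indices
w, h = 2, 2

# One bulk putdata() call instead of w*h putpixel() calls; pixels are
# supplied row by row, i.e. flat index x + y*w.
image = Image.new("RGB", (w, h))
image.putdata([labels[d] for d in data])
print(image.getpixel((1, 0)))  # (255, 0, 0)
```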

Majpuc commented 1 year ago

Hi @jjahanip Does this answer your question? Let us know :)

darwinjob commented 1 year ago

Exporting or converting the overlay to a (smoothed) SVG might help. The result can then easily be displayed like this: https://darwinjob.github.io/osd-demo/svg.html

Sverreg commented 1 year ago

> python nonlin.py series=somemouseseries.json labels="Rainbow 2017.json" for mouse

Hi. I'm not entirely sure which files to use here; could you clarify exactly which files are needed and how they are generated? Is this the series file from QuickNii, or from VisuAlign? We get a KeyError "marker" when we try to run it (with either the QuickNii or VisuAlign files). I assume we just have the wrong files. I can upload them if that helps!

Edit: @darwinjob, could I ask how you generated this? Looks fantastic.

Majpuc commented 1 year ago

Dear Sverreg, We will be able to answer your question next week after the Easter break. Thanks!

Tevemadar commented 1 year ago

@Sverreg you need an export from QuickNII, the .flat files in particular, and the single .json file which is also part of that export (it may not be super obvious, but there is one). It is called Rainbow 2017.json in the case of "QuickNII-ABAMouse-v3-2017", but it has a different name for other atlas variants. That is the file needed for the labels= parameter. For the series= parameter you need the .json file from VisuAlign, i.e. the one with the deformations. Caveat: as this script is meant to demonstrate that this approach doesn't really work, it will crash unceremoniously when a section has no nonlinear markup.
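For that caveat, a hypothetical guard (using the same JSON layout as the script above, sketched on an inline stand-in for the loaded series) is to simply skip sections that lack nonlinear markup instead of letting the triangulation crash:

```python
# Stand-in for json.load(f) on a VisuAlign series file; the second
# section has no "markers" key, i.e. no nonlinear markup.
series = {"slices": [
    {"filename": "s001.png", "markers": [[10, 10, 12, 11]]},
    {"filename": "s002.png"},
]}

# Keep only sections that actually carry markers before processing.
usable = [s for s in series["slices"] if s.get("markers")]
print([s["filename"] for s in usable])  # ['s001.png']
```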

Yours sincerely, "We"

(Unfortunately the smooth markup in darwinjob's example is hand drawn, not generated. The drawing tool itself was called "ZoomAnnotator")

darwinjob commented 1 year ago

> Edit: @darwinjob, could I ask how you generated this? Looks fantastic.

Hi! My colleague @PolarBean and I have been investigating this direction (raster image to vector image, tracing, 2DR -> 2DV). At the moment we don't have a production-grade algorithm or tool for this use case. However, this is what can be achieved quick and dirty just by using openly available tools like https://express.adobe.com/tools/convert-to-svg/

The input image: [image attachment]

The resulting SVG: https://anatomi.uio.no/projects/CalciMap/pair1/mouse857_P9_F_calb_s166_thumbnail_nl_adobe_express.svg

The resulting SVG can easily be overlaid on the high-resolution image, as I demonstrated above. SVG goodies: smaller file size, human readable, can be fine-tuned in any SVG editor (Adobe Illustrator, for example), not affected by zoom/scaling. The example isn't perfect, but the direction is worth investigating for sure. Let us know if you're interested.

Best regards Д

PolarBean commented 1 year ago

If you're interested in auto converting png to SVG you could also check out this great package https://github.com/chiggum/mindthegap

darwinjob commented 1 year ago

> If you're interested in auto converting png to SVG you could also check out this great package https://github.com/chiggum/mindthegap

Old school ImageMagick can do this too :) https://stackoverflow.com/questions/1132601/how-to-convert-a-jpeg-image-into-svg-format-using-imagemagick

Sverreg commented 1 year ago

Thanks for the help with the rainbow atlas! We want to use this as a figure panel, to demonstrate that our virus injections are in the correct area. However, the rainbow atlas overlay somewhat obscures the fluorescence. Would it be possible to apply the same procedure to the delineated atlas (the delineations from VisuAlign)? Or, if the arrows could be hidden in VisuAlign somehow, we could use a screenshot.

PolarBean commented 1 year ago

I don't know how to do this in VisuAlign (maybe @Tevemadar does), but this is how data shared on EBRAINS with VisuAlign registrations will appear in the LocaliZoom viewer. See an example of a shared data card here, and the viewer link here. This is probably not the most helpful for immediately making a figure, but it is a great way to share your data when you publish. Hopefully we can find a way to create outlines in the meantime.

Tevemadar commented 1 year ago

There are some related ideas in a recently revived feature request: https://github.com/Neural-Systems-at-UIO/QuickNII/issues/21#issuecomment-1507047353. I think I'll throw together something that covers both requests.

Majpuc commented 11 months ago

Follow up @Tevemadar