imxieyi / waifu2x-mac

Waifu2x-ios port to macOS, still in Core ML and Metal
MIT License

Can't compile application with build.sh OR Xcode #30

Closed: HumerousGorgon closed this issue 1 year ago

HumerousGorgon commented 1 year ago

Hey there! I'm trying to compile this myself, and it won't work no matter what I do.

Attempting to compile in Xcode gives me this error inside the Waifu2x.swift file: `Waifu2x.swift:203:9 The compiler is unable to type-check this expression in reasonable time; try breaking up the expression into distinct sub-expressions`
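For reference, the compiler's suggestion usually means binding explicitly typed sub-expressions; a generic illustration (not the actual expression at Waifu2x.swift:203) looks like this:

```swift
import CoreGraphics

// Generic illustration of "breaking up the expression": instead of one large
// CGRect(...) call mixing several Int-to-CGFloat conversions, bind explicitly
// typed intermediates so the type checker does less work per statement.
func paddedRect(x: Int, y: Int, size: Int, scale: Int, pad: Int) -> CGRect {
    let px: CGFloat = CGFloat(x * scale + pad)
    let py: CGFloat = CGFloat(y * scale + pad)
    let side: CGFloat = CGFloat(size * scale - pad * 2)
    return CGRect(x: px, y: py, width: side, height: side)
}
```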

Attempting to compile using the build.sh file in terminal gives:

[screenshot: terminal output]

I'm not sure what I'm doing wrong here, but nevertheless it isn't compiling.

Thanks!

imxieyi commented 1 year ago

Fixed. Feel free to re-open if it's still happening with the latest commit.

HumerousGorgon commented 1 year ago

Sorry to reopen; I'm still having the same issue, plus a different one...

Terminal still gives me this:

[screenshot: terminal output]

Opening the project in Xcode, the build succeeds, but it won't let me run anything.

[screenshot: Xcode]

Any ideas? Am I doing something wrong?

EDIT: Never mind, I'm dumb! Fixed it.

imxieyi commented 1 year ago

The build script has been fixed in the latest commit.

HumerousGorgon commented 1 year ago

Separate thing, but do you know if it's at all possible to output a 16-bit image from this? Swift is capable of doing it; I'm just not sure what I need to change in the code to enable it.

imxieyi commented 1 year ago

Basically what you need is to change the CGImage format to 16-bit and then convert the image to 16-bit instead of 8-bit in out_pipeline.

Due to a limitation of Core ML, the output image will not be actual 16-bit. The output from Core ML is float16 (this cannot be changed), which means a maximum of about 10 bits of precision. I don't think it's possible to store 10-bit images directly, though.
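A rough, untested sketch of the CGImage side (placeholder names, not the actual code in out_pipeline):

```swift
import Foundation
import CoreGraphics

// Untested sketch: build a 16-bit-per-channel RGBA CGImage from normalized
// Float output (0.0...1.0). Names here are placeholders, not identifiers
// from this repo.
func makeRGBA16Image(pixels: [Float], width: Int, height: Int) -> CGImage? {
    let channels = 4
    // Scale each 0...1 float to the full UInt16 range.
    var samples = [UInt16](repeating: 0, count: width * height * channels)
    for i in 0..<samples.count {
        samples[i] = UInt16(max(0, min(1, pixels[i])) * 65535)
    }
    let bytesPerSample = MemoryLayout<UInt16>.size
    let data = samples.withUnsafeBufferPointer { Data(buffer: $0) }
    guard let provider = CGDataProvider(data: data as CFData) else { return nil }
    // byteOrder16Little matches the host byte order of the UInt16 buffer above.
    let bitmapInfo = CGBitmapInfo(rawValue:
        CGImageAlphaInfo.noneSkipLast.rawValue | CGBitmapInfo.byteOrder16Little.rawValue)
    return CGImage(width: width,
                   height: height,
                   bitsPerComponent: 16,
                   bitsPerPixel: 16 * channels,
                   bytesPerRow: width * channels * bytesPerSample,
                   space: CGColorSpace(name: CGColorSpace.sRGB)!,
                   bitmapInfo: bitmapInfo,
                   provider: provider,
                   decode: nil,
                   shouldInterpolate: false,
                   intent: .defaultIntent)
}
```

Whether those extra bits survive on disk also depends on the destination format when saving (e.g. 16-bit PNG or TIFF).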

HumerousGorgon commented 1 year ago

It should be as simple as changing imgData to UInt16 in the out_pipeline, right? Unfortunately, I get errors all over the shop when I do it.

At the end of the day, I really only need 10-bit precision. I've converted 10-bit video into 16-bit frames and need to upscale them while keeping that 10-bit precision, so that when I re-encode it all back together I don't get banding.

imxieyi commented 1 year ago

You should not use UInt16. Instead, please increase the size of the UInt8 arrays.
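Roughly something like this, as an untested sketch (imgData here just stands for whatever byte buffer out_pipeline fills):

```swift
// Untested sketch: keep the UInt8 buffer, but write two bytes per channel
// sample instead of one (so the array is twice as large). The byte order
// must match whatever the 16-bit CGImage is created with (little-endian here).
func writeSample16(_ value: Float, into imgData: inout [UInt8], sampleIndex: Int) {
    let scaled = UInt16(max(0, min(1, value)) * 65535)
    let base = sampleIndex * 2
    imgData[base]     = UInt8(scaled & 0xFF)         // low byte
    imgData[base + 1] = UInt8((scaled >> 8) & 0xFF)  // high byte
}
```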

If you need a solution as soon as possible, you can try the App Store version of waifu2x. It supports 16-bit video processing via a pipe with ffmpeg (tutorial). Please note that it also suffers from the float16 issue on Core ML.