Closed: james-s-tayler closed this 1 year ago
Oops, my bad, seems I broke it for SD models, have committed a fix
I moved the codebase over to the new OnnxRuntime OrtValue API. It was a large change and I missed this in my testing, as I used LCM, which does not do guidance, so the error didn't show until I used the model you tried :/
Regarding OrtExtensions, I have noticed a bit of chatter online about this; it does not seem to be a Windows issue, but a Mac and Linux one. One thing I do know is the app MUST be x64 or it won't work at all, as Microsoft.ML is x64 only
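For what it's worth, forcing that in the test/app csproj is just something like this (assuming an SDK-style project):

<PropertyGroup>
  <PlatformTarget>x64</PlatformTarget>
</PropertyGroup>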
This repo is new and changes rapidly, so sorry if I break your tests; still trying to figure out the best way to structure this application as I find new cool things to add, so bear with me :p
If you publish the Linux build as self-contained it "should" run without issue
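i.e. just the standard self-contained publish, swapping the runtime identifier for your target:

dotnet publish -c Release -r linux-x64 --self-contained true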
Tests look great, I have not even had a chance to add one yet, so this is awesome
One small thing I noticed:

services.AddOnnxStack();
services.AddOnnxStackStableDiffusion();

AddOnnxStackStableDiffusion calls AddOnnxStack internally, so there is no need to call both
Awesome! Thanks, yeah now that I've pulled the latest master the tests are indeed passing :)
I'll try to add an LCM test in as well, so all bases are covered.
It might be easier to do a test with GuidanceScale set to 1f or below, as this would simulate a model with that issue
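Something roughly like this, maybe? (The option/service names below are only my guess at the shape of the API; whatever the existing txt2img test calls is the thing to copy, the only point here is GuidanceScale = 1f.)

[Fact]
public async Task GivenGuidanceScaleOfOne_WhenGeneratingImage_ThenImageIsGenerated()
{
    // GuidanceScale <= 1 skips classifier-free guidance, so this exercises the
    // same code path an LCM-style model would, without needing an LCM model.
    var promptOptions = new PromptOptions { Prompt = "a photo of a cat" };
    var schedulerOptions = new SchedulerOptions { GuidanceScale = 1f };

    // _stableDiffusionService resolved from DI in the test fixture, as in the existing tests
    var result = await _stableDiffusionService.GenerateAsync(promptOptions, schedulerOptions);

    Assert.NotNull(result);
}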
Given these tests are running on the CPU execution provider they run super slow, so my current goal is just to get the most minimal set of happy-path test cases (do the models load? can we generate an image consistently?) and get that merged. Then look at getting the Docker containers reworked so that they can leverage the Nvidia one that allows GPU pass-through, get the ability to run the test suite much faster, and then work through adding more comprehensive coverage.
That would be awesome, appreciate any tests added.
I could set up a local server here in CHCH with some GPUs if that's easier? You're in NZ, right?
Oh snap, you're in NZ too! Nice! Didn't see that. Yeah, I'm up in Auckland.
I've got a 4090, so I can run them plenty fast locally once the devops side of things supports it, but I just need to work through that piece by piece. Ideally, the trajectory is getting the test suite running via hardware acceleration inside a CI/CD pipeline to ensure the integrity of the project as new functionality is developed. Keen also to make it as accessible/friendly as possible in terms of local developer experience to maximize ease of contribution.
A 4090 would be nice. I have a 3090 in my dev machine but only a P100, a T4 and 2 M40s in my servers; even combined they are not even close to your compute power
Would you like me to merge this one in now, or do you want to keep adding to it?
I'm still adding to this one. I'm just about to push up the last commit on it since it looks like I've got the LCM tests working now too :) Once that is pushed I will let you know and it'll be ready to merge.
sweet as
Done! Should be ready to merge now.
thanks man!!
Hi,
I've been working on a PR to add some integration tests into OnnxStackCore.sln, so that I have a repeatable way to test and run the functionality that isn't coupled to a particular UI implementation, to guard against regressions as development continues, and to have a way to begin contributing new things. However, there are some problems I have encountered.
First, for whatever reason, the tests run fine inside the Docker container but not on my local Ubuntu installation. I figured out it was due to a wacky bug in the OrtRuntime.Extensions library: when registering the custom operations it tries to resolve the name of the native library it needs to call, and those files have been renamed at some point, so it claims it can't find ortextensions when in actual fact the file is now named libortextensions.so, and that is the one it needs. Yet for some magical reason it just works when I run it inside the Docker container. Anyway, don't worry about that; since I can run it inside Docker I'm not so fussed on trying to solve that problem for now, just thought I would mention it.
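If I do circle back to it, one workaround I might try is redirecting the lookup with a DllImportResolver rather than renaming files. Very roughly, and completely untested (the assembly to hook would be whichever one actually declares the ortextensions P/Invokes):

using System;
using System.Reflection;
using System.Runtime.InteropServices;

static class OrtExtensionsWorkaround
{
    // Untested sketch: when the runtime asks for "ortextensions", hand back the
    // renamed libortextensions.so instead of letting the default lookup fail.
    public static void Apply(Assembly onnxExtensionsAssembly)
    {
        NativeLibrary.SetDllImportResolver(onnxExtensionsAssembly, (name, assembly, searchPath) =>
        {
            if (OperatingSystem.IsLinux() &&
                name.Contains("ortextensions", StringComparison.OrdinalIgnoreCase))
            {
                return NativeLibrary.Load("libortextensions.so", assembly, searchPath);
            }
            return IntPtr.Zero; // anything else falls back to the default resolution
        });
    }
}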
Second, and more importantly... I had both tests in this PR running perfectly and passing before merging the current master branch, which contained 15 new commits. I think those changes might have actually broken something?
The error that I now get when I run the tests is as follows:
Did y'all break something, or do I need to update how I'm calling OnnxStack in order to fix my test? Either way, that likely means anyone who was calling it this way and updated to a newer version would be hitting the same exception, right?
I can see it's complaining about the tensor shape being different...