Open Hapseleg opened 8 months ago
@ranareehanaslam's fix (https://github.com/NVIDIA/Stable-Diffusion-WebUI-TensorRT/pull/285) worked for me.
Just install his fork via "Install from URL" on the Extensions tab:
https://github.com/ranareehanaslam/Stable-Diffusion-WebUI-TensorRT.git
Oh wow, the only difference is "from importlib.metadata import version" instead of "from importlib_metadata import version" :D Thanks, I'll test it later ;)
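For what it's worth, the visible change swaps the third-party importlib_metadata backport for the stdlib importlib.metadata (available since Python 3.8); a tolerant sketch that works with either setup:

```python
# Hedged sketch: prefer the stdlib module, fall back to the backport
# on older Pythons where importlib.metadata does not exist.
try:
    from importlib.metadata import version  # stdlib, Python 3.8+
except ImportError:
    from importlib_metadata import version  # third-party backport

# Query the version of any installed distribution, e.g. pip:
print(version("pip"))
```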
Brother, kindly look with open eyes 👀 I don't think that's the only change ❤️ compare with the main repo.
Ooooh, that's my bad, I was looking at the wrong fork because https://github.com/NVIDIA/Stable-Diffusion-WebUI-TensorRT/forks?include=active&page=1&period=2y&sort_by=last_updated said you had no new commits. Thank you.
Haha, thanks! Now follow up with me and let's get this approved 💖
Welp, I couldn't figure it out in any way; I've already done everything (upgrading pip packages, etc.)
@ranareehanaslam fix (#285) worked for me.
Just install his fork via "install from URL" on the extensions tab:
https://github.com/ranareehanaslam/Stable-Diffusion-WebUI-TensorRT.git
Maybe I did something wrong, but I got this error now.
Don't get me wrong, I got my tab, but I'm curious about those errors.
I got this error now
I have the same problem
All of you, check out my repo @ranareehanaslam
I'm getting this error as well after switching repo
OK, I'll check it out. Can you join me over TeamViewer, AnyDesk, Skype, etc.?
Thanks for the offer but no thanks :)
No problem, good luck! 💯
@Hapseleg @AzEExE have you tried this fix https://github.com/NVIDIA/Stable-Diffusion-WebUI-TensorRT/issues/186#issuecomment-1852122589 for the DLL errors?
I hadn't installed that in the first place, but I've done that for cudnn-cu12.
@AzEExE @Hapseleg
(Back up first.) Delete all DLL files in this folder: system\python\Lib\site-packages\nvidia\cudnn\bin\
You should stop receiving the errors, but something else might break; I'm not sure.
Note that TensorRT does not work in the Forge UI, AFAIK.
Well, a really simple method solved all the problems I mentioned before (error popups, etc.): starting the WebUI with administrator permissions.
Installed without any problems with the forge "fork" of automatic1111.
@Hapseleg Have you gotten it to function? An NVIDIA dev stated a month ago that TensorRT cannot work in Forge. Maybe (I'm not saying this would actually work) it could with a TRT model created in A1111 and moved into the Forge files.
Well... I got it installed and I am able to start up Forge, but once I press "Generate Default Engines" it complains about my checkpoint not containing an ONNX model or something like that. I tried to research it a bit, but I gave up after some time 😅
Bro, keep that running; it will probably take a few moments and then it will start generating the TRT engine.
@Hapseleg Yep, I receive the same ONNX model error in Forge. Someday soon I'll try to create an engine in A1111, copy it to Forge, then see what happens.
@ranareehanmetex Careful bro, your signature contained some identifying info.
Disable all 3rd-party extensions and shut down A1111. In your A1111 models folder, remove the Unet-onnx and Unet-trt folders.
Launch A1111 and enable the TensorRT extension; it should work now, even though you will still get the errors. The next time you run A1111 or reload the UI, it MIGHT again stop working. I'm guessing there's an issue with existing TRT profiles loading when something changes in A1111, but I have no clue what exactly.
My best guess is that the profiles were generated with a different cudnn version or libbitsandbytes_cudaxxx.lib, where 'xxx' stands for 111, 112, 113, ..., 118.
I also noticed that if you have more than one NVIDIA GPU, the TensorRT optimization will always run on device 0, even if you are running A1111 on another device. I tried CUDA_VISIBLE_DEVICES=1, but it didn't fix the issue.
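For reference, CUDA_VISIBLE_DEVICES only takes effect if it is set before the CUDA runtime initializes, i.e. before torch (or the WebUI itself) is imported; setting it later in the process is silently ignored, which could explain why it appeared to do nothing. A minimal sketch, assuming a two-GPU machine:

```python
import os

# Must be set before any CUDA-using library (e.g. torch) is imported;
# otherwise the runtime has already enumerated the devices.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# import torch  # from here on, physical GPU 1 would appear as cuda:0
```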
My crude fix: https://github.com/NVIDIA/Stable-Diffusion-WebUI-TensorRT/issues/143#issuecomment-2005044331
I'm not even able to create an engine; when I press the generate button I get this error (and I only have one GPU; my CPU is an Intel KF model, so there's no integrated GPU):
ERROR:root:Exporting to ONNX failed. Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
Building TensorRT engine... This can take a while, please check the progress in the terminal.
Building TensorRT engine for D:\stable-diffusion\forge test\webui\models\Unet-onnx\juggernautXL_v9Rdphoto2Lightning.onnx: D:\stable-diffusion\forge test\webui\models\Unet-trt\juggernautXL_v9Rdphoto2Lightning_4cd66f22_cc86_sample=2x4x128x128+2x4x128x128+2x4x128x128-timesteps=2+2+2-encoder_hidden_states=2x77x2048+2x77x2048+2x77x2048-y=2x2816+2x2816+2x2816.trt
Could not open file D:\stable-diffusion\forge test\webui\models\Unet-onnx\juggernautXL_v9Rdphoto2Lightning.onnx
Could not open file D:\stable-diffusion\forge test\webui\models\Unet-onnx\juggernautXL_v9Rdphoto2Lightning.onnx
[W] 'colored' module is not installed, will not use colors when logging. To enable colors, please install the 'colored' module: python3 -m pip install colored
[E] ModelImporter.cpp:773: Failed to parse ONNX model from file: D:\stable-diffusion\forge test\webui\models\Unet-onnx\juggernautXL_v9Rdphoto2Lightning.onnx
[!] Failed to parse ONNX model. Does the model file exist and contain a valid ONNX model?
Traceback (most recent call last):
File "D:\stable-diffusion\forge test\system\python\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "D:\stable-diffusion\forge test\system\python\lib\site-packages\gradio\blocks.py", line 1431, in process_api
result = await self.call_function(
File "D:\stable-diffusion\forge test\system\python\lib\site-packages\gradio\blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
File "D:\stable-diffusion\forge test\system\python\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "D:\stable-diffusion\forge test\system\python\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "D:\stable-diffusion\forge test\system\python\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "D:\stable-diffusion\forge test\system\python\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "D:\stable-diffusion\forge test\webui\extensions\Stable-Diffusion-WebUI-TensorRT\ui_trt.py", line 126, in export_unet_to_trt
ret = export_trt(
File "D:\stable-diffusion\forge test\webui\extensions\Stable-Diffusion-WebUI-TensorRT\exporter.py", line 231, in export_trt
ret = engine.build(
File "D:\stable-diffusion\forge test\webui\extensions\Stable-Diffusion-WebUI-TensorRT\utilities.py", line 227, in build
network = network_from_onnx_path(
File "<string>", line 3, in network_from_onnx_path
File "D:\stable-diffusion\forge test\system\python\lib\site-packages\polygraphy\backend\base\loader.py", line 40, in __call__
return self.call_impl(*args, **kwargs)
File "D:\stable-diffusion\forge test\system\python\lib\site-packages\polygraphy\util\util.py", line 710, in wrapped
return func(*args, **kwargs)
File "D:\stable-diffusion\forge test\system\python\lib\site-packages\polygraphy\backend\trt\loader.py", line 227, in call_impl
trt_util.check_onnx_parser_errors(parser, success)
File "D:\stable-diffusion\forge test\system\python\lib\site-packages\polygraphy\backend\trt\util.py", line 86, in check_onnx_parser_errors
G_LOGGER.critical("Failed to parse ONNX model. Does the model file exist and contain a valid ONNX model?")
File "D:\stable-diffusion\forge test\system\python\lib\site-packages\polygraphy\logger\logger.py", line 605, in critical
raise ExceptionType(message) from None
polygraphy.exception.exception.PolygraphyException: Failed to parse ONNX model. Does the model file exist and contain a valid ONNX model?
@Hapseleg Can you check your Device Manager for installed display adapters? Right-click the Windows button and select Device Manager. In the top menu, go to View > Show Hidden Devices. Under Display Adapters, see if there are any secondary devices installed; if so, remove them.
Another possible solution: download DDU (Display Driver Uninstaller), run it, and uninstall your display adapter drivers, following the instructions. This tool gets rid of everything related to your GPU that a normal uninstall does not. Reboot and reinstall your GPU drivers.
"2" devices also can refer to your GPU and CPU. Make sure you are not launching the app with CPU support.
I'm not familiar with Forge, but I see there is a space in the path, D:\stable-diffusion\forge test\
I would always recommend avoiding spaces in paths; use underscores instead. Though I don't think this is the issue here.
Edit: I just noticed that the error is indeed referring to your CPU and GPU as the two devices. Like I said, I'm not familiar with Forge, but you might have to add or remove launch args so it only uses the GPU.
Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0!
Try this: set COMMANDLINE_ARGS=--always-gpu
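On A1111-style Windows installs that line usually goes in webui-user.bat; a sketch of where the argument is set (whether --always-gpu is a valid flag depends on your Forge version):

```bat
rem webui-user.bat (sketch; flag availability depends on your Forge version)
set COMMANDLINE_ARGS=--always-gpu
call webui.bat
```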
Nope, there's only my GPU in Device Manager; my CPU has no integrated graphics since it's a KF model (Intel Core i5-13600KF).
Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0!
Try this: set COMMANDLINE_ARGS=--always-gpu
Sadly it didn't change anything; same error, except the first line changed to:
ERROR:root:Exporting to ONNX failed. Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
So I'm just confused about the "Could not open file D:\stable-diffusion\forge test\webui\models\Unet-onnx\juggernautXL_v9Rdphoto2Lightning.onnx" part; where would I get this ONNX file from?
It's because the ONNX model didn't get generated. You can try removing the venv first and then reinstalling, and there is a slight issue in my PR; let me fix that too.
Just delete the venv and the extension, then retry installing the extension from ranareehanaslam's PR.
None of the above worked for me; this is what worked for me and fixed exporting engines with SD Forge: https://github.com/lllyasviel/stable-diffusion-webui-forge/issues/64#issuecomment-2001987142
Hello,
I tried installing TensorRT on automatic1111 version 1.8.0 and it fails (I also tested on 1.7.0 and there were no problems).
On my other PC I am getting almost the same error, but instead it's "ModuleNotFoundError: Launch".