janbrutovsky opened this issue 3 years ago
Same issue. Right now I'm running turicreate in a QEMU virtual machine emulating an x86 CPU, which isn't exactly fast.
I'm wondering if the plan is for the CreateML app to replace/supersede turicreate?
I was hoping Apple would add support for its own open-source project much sooner.
@tagy I think all of Apple's ML software offerings sit at different levels of user abstraction (high to low).
I'm guessing we won't get news on this before WWDC, but it would be nice to know if there are plans for this. Currently running turicreate on an x64 server with a 1080 Ti, but very much looking forward to running it locally.
@Enzo90910 can you share some details on how you do this? Or is it just simple ssh?
Basically following the doc here: https://github.com/apple/turicreate/blob/main/LinuxGPU.md
You need CUDA, cuDNN, and tensorflow-gpu. I did it on a server over SSH, yes, but nothing prevents you from doing it on a local Linux desktop.
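For completeness, here is a rough sketch of what that looks like once CUDA, cuDNN, and tensorflow-gpu are in place; the dataset path and the choice of toolkit are just placeholders of mine, not from the linked doc:

```python
# Assumes an x86-64 Linux box with CUDA, cuDNN and tensorflow-gpu installed
# per LinuxGPU.md; the SFrame path and toolkit below are hypothetical.
import turicreate as tc

# Use every visible GPU; set_num_gpus(0) forces CPU, set_num_gpus(1) uses one GPU.
tc.config.set_num_gpus(-1)

# Any GPU-accelerated toolkit then picks the GPUs up, e.g. image classification:
data = tc.SFrame('training_data.sframe')
model = tc.image_classifier.create(data, target='label')
model.export_coreml('MyClassifier.mlmodel')
```

Over SSH it works exactly the same; GPU selection happens inside the Python process, so nothing about it is specific to a remote machine.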
Great, thanks!
Has there been any update on this? Or even any recent Apple announcement about Turi Create?
Bumping: any plans for Turi Create on M1?
Same here, wondering if we will see M1 support. It's 2022 now...
TuriCreate does not have native M1 support, but it should run fine on an M1 machine if you use Rosetta.
See my instructions in another GitHub issue to use TuriCreate with Rosetta.
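Not part of those instructions, but a quick way to confirm the interpreter really is running as x86_64 under Rosetta before importing TuriCreate (a small sketch of mine):

```python
# Sanity check: TuriCreate has no arm64 wheels, so on an M1 machine the
# Python process itself must be x86_64, i.e. running under Rosetta.
import platform

arch = platform.machine()
print(arch)  # 'x86_64' under Rosetta; 'arm64' means a native interpreter

if arch != 'x86_64':
    raise SystemExit('Use an x86_64 (Rosetta) Python environment for turicreate')

import turicreate as tc
print(tc.__version__)
```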
@TobyRoseman Can it use the GPU under Rosetta? Is it performant?
@TobyRoseman Is Create ML going to make this software obsolete?
Hi all, any news on when the new M1 will be supported by turicreate? I upgraded to M1 as my Intel MBP13 2019 was dying when doing more than 30 objects with one-shot learning. Thanks a lot, Jan
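For anyone curious what that workload looks like, a minimal one-shot object detector sketch is below (image files and labels are hypothetical); training cost grows with the number of starter objects, which is where an Intel laptop starts to struggle:

```python
# Minimal one-shot object detector sketch; image paths and labels are
# hypothetical placeholders, not from this thread.
import turicreate as tc

# One starter image per object class to detect.
starter = tc.SFrame({
    'image': [tc.Image('card_01.png'), tc.Image('card_02.png')],
    'label': ['card_01', 'card_02'],
})

# Synthesizes augmented training scenes from the starter images, then trains.
model = tc.one_shot_object_detector.create(starter, target='label')

# Run detection on new scenes and export for on-device use.
scenes = tc.SFrame({'image': [tc.Image('scene.png')]})
predictions = model.predict(scenes)
model.export_coreml('OneShotDetector.mlmodel')
```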