Open SamTov opened 3 years ago
Does anyone have any information about this at all?
Last commit was on 3/3. Not a word since, despite lots of complaints about this. Pretty frustrating.
I guess once Google merges it into the main TF repo it will have to be addressed there. It might be on their end as well. If there were a timeline for this release, it would help us all understand.
True, I didn't think about that. So when we install it using Apple's instructions here, is the actual TF code coming from Google's repo or Apple's? I always thought it was strange that you can't actually view the code for the Apple-optimized version here the way you can on the official TF page. My understanding is that it is basically a fork that Apple started from TF 2.4rc0 (as stated in this repo's readme). That version dates back to 11/2/2020 if you look at Google's TF repo. I went down a rabbit hole looking at all the changes that have taken place in their repo since 2.4rc0, and several are directly related to M1 performance/MLCompute. I would imagine that if someone applied to the latest version of TF2 what they did to 2.4rc0, it would fix some issues.
It seems like a lot of issues are not being responded to at the moment. Perhaps they are focusing on some other deployment! I just decided not to run an Apple benchmark for my paper, which was a little unfortunate. The M1 chip is performing remarkably well against other hardware, and demonstrating the efficiency of the Neural Engine would have been great. I guess we will just have to wait and see what happens.
Very true. In the blog post announcing the release of Apple-optimized TensorFlow, they say the plan is to merge their fork with Google's. I'm definitely not complaining about my Mini's performance using just the CPU in eager execution mode, since I mostly work with tabular hospital data as opposed to huge image- or video-based applications.
Very exciting times for data scientists and Mac users!
More a question than an issue, but will it be possible to use the GPU or Neural Engine while in eager execution mode in the near future? I am benchmarking some software and would like to compare a Linux machine with an AMD CPU and NVIDIA GPU, an Intel PC with an NVIDIA GPU, an Intel MacBook Pro, and an M1 Mac. The pure CPU comparison looks good, but I would also like to compare the GPU-enabled calculations, which at the moment seems impossible on the Macs.
No machine learning is being performed, just tensor operations.
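For what it's worth, the CPU side of a "pure tensor operations" comparison like the one described above can be sketched with a minimal timing harness. This is only an illustrative sketch, not from this repo: the function name `time_matmul`, the matrix size, and the repeat count are my own choices, and it exercises NumPy on the CPU rather than the fork's MLCompute path (which, per the fork's readme, is selected via `mlcompute.set_mlc_device` and only takes effect outside eager mode, which is exactly the limitation being asked about).

```python
import time
import numpy as np

def time_matmul(n=1024, repeats=5):
    """Return the best wall-clock time (seconds) for an n x n float32 matmul.

    Taking the best of several repeats reduces noise from other processes,
    which matters when comparing small timings across machines.
    """
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        np.matmul(a, b)  # the tensor op under test
        best = min(best, time.perf_counter() - start)
    return best

if __name__ == "__main__":
    print(f"best of 5 for 1024x1024 matmul: {time_matmul():.4f} s")
```

Running the same script on each machine gives a crude but comparable CPU baseline; the GPU column stays empty until device placement works in eager mode.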