intel / intel-extension-for-pytorch

A Python package that extends the official PyTorch to deliver better performance on Intel platforms
Apache License 2.0

Caching JIT to local disk to speed up #503

Open CarlGao4 opened 10 months ago

CarlGao4 commented 10 months ago

Describe the issue

If IPEX is not built with AOT enabled for the current device, JIT will generate objects for the current GPU (XPU). However, even if I run the same code with the same parameters after restarting Python, JIT will compile everything again, which takes a lot of time. Is there a way to store the compiled objects in a local disk cache?

BTW, it would also be very helpful if I could determine whether AOT, a JIT cache, or JIT is available without actually running a function.

jingxu10 commented 9 months ago

@gujinghui @tye1 pls check this feature request

gujinghui commented 2 months ago

This is not an IPEX capability; it should be a runtime-level feature. According to the SYCL Environment Variables page, the flag below should be what you want, but it has not been verified on the IPEX side. Thanks.

> `SYCL_CACHE_PERSISTENT` (Integer): Controls the persistent device compiled-code cache. Set to `1` to turn it on, `0` to turn it off. When the cache is enabled, the SYCL runtime will try to cache and reuse JIT-compiled binaries. Default is off.
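A minimal sketch of how this could be applied, based on the variable names from the SYCL Environment Variables page quoted above (`SYCL_CACHE_DIR` is also documented there; the cache path and script name below are only examples, not anything prescribed by IPEX):

```shell
# Enable the persistent device code cache so JIT-compiled binaries
# survive across Python restarts (per the SYCL env-var docs).
export SYCL_CACHE_PERSISTENT=1

# Optionally redirect where the cached binaries are stored
# (example path; SYCL_CACHE_DIR is documented alongside the flag above).
export SYCL_CACHE_DIR="$HOME/.cache/sycl_jit"

# Then run the workload as usual, e.g.:
# python your_ipex_script.py
```

The first run still pays the JIT compilation cost; subsequent runs should reuse the cached binaries instead of recompiling.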

CarlGao4 commented 2 months ago

Thanks. I'll give it a try.

CarlGao4 commented 2 months ago

Both https://github.com/intel/compute-runtime/blob/master/programmers-guide/COMPILER_CACHE.md and https://intel.github.io/llvm-docs/EnvironmentVariables.html should be checked. Thanks!
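For completeness, the compute-runtime COMPILER_CACHE.md linked above documents its own cache controls at the driver level. A hedged sketch, assuming the `NEO_CACHE_*` variable names from that page (verify them against your installed driver version, as they may differ across releases):

```shell
# Driver-level compiled-code cache (compute-runtime / NEO), as described
# in COMPILER_CACHE.md; names should be checked against your driver.
export NEO_CACHE_PERSISTENT=1

# Example cache directory; not a required or default location.
export NEO_CACHE_DIR="$HOME/.cache/neo_compiler_cache"
```

Both layers (the SYCL runtime cache and the driver cache) would need to be checked to confirm where recompilation time is actually being spent.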

By the way, is it possible to add these two links to the documentation?