mlc-ai/relax


Add the ability to differentiate between model loads from remote fetch v/s model loads from cache #265

Closed · narangkay closed this issue 1 year ago
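For context, here is a minimal sketch of the idea in the issue title, assuming a browser-style artifact cache built on the standard Cache API. The names `fetchWithSource`, `LoadSource`, and the `onLoad` callback are hypothetical illustrations, not the actual TVM/MLC web runtime API:

```typescript
// Hypothetical sketch: fetch a model artifact and report whether it was
// served from the browser Cache API ("cache") or downloaded ("remote").

type LoadSource = "cache" | "remote";

async function fetchWithSource(
  url: string,
  cacheName: string,
  onLoad: (url: string, source: LoadSource) => void
): Promise<Response> {
  const cache = await caches.open(cacheName);

  // Serve from the cache when the artifact is already present.
  const cached = await cache.match(url);
  if (cached !== undefined) {
    onLoad(url, "cache");
    return cached;
  }

  // Otherwise fall back to a remote fetch and populate the cache.
  const response = await fetch(url);
  await cache.put(url, response.clone());
  onLoad(url, "remote");
  return response;
}

// Example usage: surface the source so a UI can distinguish
// "loading from cache" from "downloading".
fetchWithSource(
  "https://example.com/model/params_shard_0.bin", // hypothetical artifact URL
  "model-artifact-cache",
  (url, source) => console.log(`${url} loaded from ${source}`)
);
```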

tqchen commented 1 year ago

Sorry for getting to this late. Can you upstream this to the unity branch at https://github.com/apache/tvm/tree/unity?

narangkay commented 1 year ago

Interesting... would changes made there be pulled in here automatically? It would be awesome to have WebGPU support in TVM directly!

I've sent https://github.com/apache/tvm/pull/15357, let me know if that looks okay.

MasterJH5574 commented 1 year ago

> Interesting... would changes made there be pulled in here automatically? It would be awesome to have WebGPU support in TVM directly!
>
> I've sent apache/tvm#15357, let me know if that looks okay.

@narangkay Thanks for sending it! Seeing that it's already merged :-) We periodically sync this repo with apache/tvm, so your change will be picked up here.

narangkay commented 1 year ago

@MasterJH5574 Is there a schedule for syncing? Is there a way I can help? For example, I could open a pull request with those changes if that would be useful.

MasterJH5574 commented 1 year ago

Hi @narangkay, we just did a rebase and your contribution is now included (https://github.com/mlc-ai/relax/commit/057e61575b537b0c926e2f5ac07a37160780b1e6). Thanks for your patience!