tairov / llama2.mojo

Inference Llama 2 in one file of pure 🔥
https://www.modular.com/blog/community-spotlight-how-i-built-llama2-by-aydyn-tairov
MIT License

Updated `algorithm::parallelize` calls to match v0.4.0 parameter changes #42

Closed theshteves closed 10 months ago

theshteves commented 10 months ago

Mojo v0.4.0 was just released (2023-10-05)!

We have improved and simplified the parallelize function. The function now elides some overhead by caching the Mojo parallel runtime.

https://docs.modular.com/mojo/changelog.html#changed

What I did

1. removed the deprecated runtime parameter from each call
2. reformatted llama2.mojo with `mojo format`
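The shape of the change can be sketched as follows. This is an illustrative Mojo fragment, not the actual diff from llama2.mojo: the worker function name and loop bound are made up, and the pre-0.4.0 signature (which took a runtime handle as the first argument) is shown only in a comment.

```mojo
from algorithm import parallelize

fn demo(n: Int):
    @parameter
    fn worker(i: Int):
        # per-item work for index i goes here
        pass

    # Before v0.4.0 the call threaded an explicit runtime through,
    # roughly: parallelize[worker](rt, n)
    # Since v0.4.0 the runtime is cached internally, so the call is just:
    parallelize[worker](n)
```

With the runtime argument gone, the `rt` field no longer needs to be stored in structs like `RunState` or passed down through call chains, which is what the competing PR #41 removed.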

tairov commented 10 months ago

Hi @theshteves, thank you for the PR. Another PR, #41, was already sent that gets rid of `state.rt` from all functions and the `RunState` struct. But that one also includes some `vectorize` changes, and it's not clear whether we should merge those or not.

tairov commented 10 months ago

The other PR has been merged already, ty!