intel-analytics / ipex-llm


Chronos will change its supported python version to 3.7 only #4246

Open TheaperDeng opened 2 years ago

TheaperDeng commented 2 years ago
| Supported | Python version | Comment |
|---|---|---|
| Yes | 3.6 | tf1, torch (tf1 is not compatible with pytorch-lightning); Python 3.6 reached end of life on 23 Dec 2021 |
| Yes | 3.7 | tf1/tf2, torch (tf1 is not compatible with pytorch-lightning) |
| No | 3.8 | tf1/tf2, torch (tf1 is not compatible with pytorch-lightning) |

Here is our plan.

We will reduce our Python support from 3.6/3.7 to 3.7 only.

On Jenkins:

- The Python 3.6 env will run only the tf1 unit tests (deprecated API).
- The Python 3.7 env will run the comprehensive test suite.
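This split could be mirrored in a small test driver that picks the test scope from the interpreter version (a minimal sketch; the function name and test paths are illustrative, not the actual Chronos test layout):

```python
import sys

def select_test_target(version_info=None):
    """Mirror the Jenkins split: tf1-only unit tests on Python 3.6,
    the comprehensive suite on Python 3.7 and later."""
    version_info = version_info or sys.version_info
    if version_info[:2] == (3, 6):
        return "test/tf1"  # deprecated-API tf1 unit tests only
    return "test"          # comprehensive suite

# A runner could then invoke, for example:
# subprocess.run([sys.executable, "-m", "pytest", select_test_target()])
```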

In the documentation:

- We will strongly recommend Python 3.7 once tf2 is integrated.
- We will still allow Python 3.6, but no tf2 models will be supported on that version.

Future plan:

We will add Python 3.8 support based on customer requests.
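On the user side, the documented recommendation could be enforced with an import-time guard (a sketch; the function name and messages are illustrative, not Chronos API):

```python
import sys
import warnings

def check_python_version(version_info=None):
    """Reject Python < 3.6 and warn on 3.6, where tf2 models are unavailable."""
    version_info = version_info or sys.version_info
    if version_info[:2] < (3, 6):
        raise RuntimeError("Python >= 3.6 is required; Python 3.7 is recommended.")
    if version_info[:2] == (3, 6):
        warnings.warn(
            "Python 3.6 reached end of life on 23 Dec 2021 and tf2 models "
            "are not supported on it; please upgrade to Python 3.7."
        )

check_python_version()
```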

liangs6212 commented 2 years ago

Currently, Nano uses tf 2.7.0, but Python 3.6 only supports TensorFlow up to 2.6.0.

| Supported | Python version | Comment | Version constraints |
|---|---|---|---|
| Yes | 3.6 | tf1, torch (tf1 is not compatible with pytorch-lightning); Python 3.6 reached end of life on 23 Dec 2021 | numpy<=1.19.5 and tensorflow<2.7.0 |
| Yes | 3.7 | tf1/tf2, torch (tf1 is not compatible with pytorch-lightning) | numpy<=1.21.5 and latest tensorflow |
| No | 3.8 | tf1/tf2, torch (tf1 is not compatible with pytorch-lightning) | |
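These per-interpreter pins could be expressed with PEP 508 environment markers, so a single package declaration covers both Python versions (a sketch with illustrative metadata, not the actual bigdl-nano setup.py):

```python
# Fragment of a hypothetical setup.py: environment markers let pip pick
# the constraints that match the installing interpreter.
INSTALL_REQUIRES = [
    'numpy<=1.19.5; python_version < "3.7"',
    'tensorflow<2.7.0; python_version < "3.7"',
    'numpy<=1.21.5; python_version >= "3.7"',
    'tensorflow; python_version >= "3.7"',   # latest tensorflow on 3.7+
]
# Would be passed as setup(..., install_requires=INSTALL_REQUIRES)
```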