intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, GraphRAG, DeepSpeed, Axolotl, etc

ML pipeline incorrect import statement #2208

Open ghost opened 6 years ago

ghost commented 6 years ago

In https://bigdl-project.github.io/master/#APIGuide/MLPipeline/DLEstimator_DLClassifier/, every occurrence of from bigdl.ml_pipeline.dl_classifier import * should be changed to from bigdl.models.ml_pipeline.dl_classifier import *
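For context, here is a minimal sketch of how the corrected import path is used as a Spark ML pipeline stage. The network, criterion, and hyperparameter choices below are illustrative placeholders, not taken from the docs page, and assume BigDL's Python API of that era.

```python
# Illustrative sketch: the module path follows the corrected import above;
# the layers, criterion, and hyperparameters are placeholder choices.
from bigdl.models.ml_pipeline.dl_classifier import DLClassifier
from bigdl.nn.layer import Sequential, Linear, LogSoftMax
from bigdl.nn.criterion import ClassNLLCriterion

# A toy two-class model for two-dimensional input features.
model = Sequential().add(Linear(2, 2)).add(LogSoftMax())
criterion = ClassNLLCriterion()

# DLClassifier(model, criterion, feature_size) wraps the BigDL model as a
# Spark ML estimator; batch size and epoch count here are arbitrary.
classifier = DLClassifier(model, criterion, [2]) \
    .setBatchSize(4) \
    .setMaxEpoch(10)
```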

yiheng commented 6 years ago

Thanks for pointing this out. @hhbyyh can you take a look at this?

ghost commented 6 years ago

On the same page, in the Python examples for DLClassifier and DLEstimator, dlModel.transform(df).show(False) should be changed to dlModel.transform(df).show().
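The change is needed because PySpark's DataFrame.show() takes the row count as its first positional argument, unlike the Scala API where the boolean controls truncation, so show(False) is a Scala-style call. A short sketch, assuming the classifier and a DataFrame df with "features" and "label" columns from the example above:

```python
# Fit the pipeline stage and display predictions; df is assumed to be a
# Spark DataFrame prepared with "features" and "label" columns.
dlModel = classifier.fit(df)
dlModel.transform(df).show()  # corrected call from the comment above
# To disable truncation explicitly, PySpark's keyword form can be used instead:
# dlModel.transform(df).show(truncate=False)
```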