intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, MiniCPM, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, GraphRAG, DeepSpeed, vLLM, FastChat, Axolotl, etc.
Apache License 2.0

Add Visualization support #371

Closed shane-huang closed 7 years ago

shane-huang commented 7 years ago

Need visualization tools to help users inspect their runs and models. Possible visualizations include:

  1. show data (e.g. numbers/text/video/image/audio)
  2. show data statistics (e.g. distributions, histograms)
  3. show graph
    • model topology (as in TensorBoard or optnet)
    • storage references - for potential memory sharing and optimization
  4. diagnosis (much like scalar summaries in TensorBoard, but possibly more)
    • loss, learning rate and accuracy curves
    • weights and gradients
    • throughput and latency
  5. common activities when developing a deep learning model (could be demonstrated as examples)
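Item 4 above (scalar summaries such as loss and accuracy curves) can be illustrated with a minimal in-memory logger; the class and method names below are hypothetical sketches, not BigDL or TensorBoard API:

```python
from collections import defaultdict

class ScalarSummary:
    """Collect (step, value) pairs per tag, e.g. 'loss' or 'learning_rate'."""

    def __init__(self):
        self.series = defaultdict(list)

    def add(self, tag, step, value):
        # Record one scalar observation for the given tag at a training step.
        self.series[tag].append((step, value))

    def curve(self, tag):
        """Return (steps, values) sorted by step, ready to plot as a curve."""
        points = sorted(self.series[tag])
        return [s for s, _ in points], [v for _, v in points]

# Usage: log a decreasing loss over three iterations, then read the curve back.
summary = ScalarSummary()
for step in range(3):
    summary.add("loss", step, 1.0 / (step + 1))
steps, values = summary.curve("loss")
```

A real implementation would persist these records (e.g. to TensorBoard event files) rather than keep them in memory, but the data model is the same.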
shane-huang commented 7 years ago

Resampling (down-sampling) needs to be supported to make graph generation faster. TensorBoard uses reservoir sampling: https://en.wikipedia.org/wiki/Reservoir_sampling
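Reservoir sampling keeps a uniform random sample of fixed size k from a stream of unknown length in a single pass, which is why TensorBoard uses it to down-sample long summary streams. A minimal sketch (the function name and use of Python's `random` module are illustrative, not TensorBoard's actual code):

```python
import random

def reservoir_sample(stream, k, rng=None):
    """Return a uniform random sample of up to k items from an iterable."""
    rng = rng or random.Random()
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k items.
            sample.append(item)
        else:
            # Replace a random slot with probability k / (i + 1),
            # keeping every item seen so far equally likely to survive.
            j = rng.randint(0, i)
            if j < k:
                sample[j] = item
    return sample
```

If the stream has fewer than k items, the whole stream is returned, so short runs are never lossy.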

shane-huang commented 7 years ago

Closing the umbrella issue for now. We currently support both TensorBoard and notebook visualizations. Sub-issues include:

#437, #439, #438 - Merged

#440 - Future work