kubeedge / ianvs

Distributed Synergy AI Benchmarking
https://ianvs.readthedocs.io
Apache License 2.0

Domain-Specific Large Model Benchmarking Based on KubeEdge-Ianvs #95

Open MooreZheng opened 1 month ago

MooreZheng commented 1 month ago

What would you like to be added/modified: Based on existing datasets, the issue aims to build a benchmark for domain-specific large models on KubeEdge-Ianvs. Namely, it aims to help all Edge AI application developers validate and select the best-matched domain-specific large models. This issue includes:

  1. Benchmark Dataset Map: A mapping document, e.g., a table, that lists test datasets and their download methods for various specific domains.
  2. Large Model Interfaces: Integrates open-source benchmarking projects such as OpenCompass, and provides model API addresses and keys for invoking large models online.
  3. Domain-Specific Large Model Benchmark: Focuses on NLP or multimodal tasks. Constructs a suite for the government sector, including test datasets, evaluation metrics, testing environments, and usage guidelines.
  4. (Advanced) Industrial/Medical Large Model Benchmark: Includes metrics and examples.
  5. (Advanced) Efficient Evaluation: Enables concurrent execution of evaluation tasks with automatic request dispatch and result collection.
  6. (Advanced) Task Execution and Monitoring: Visualizes the large model invocation process.
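
Items 2 and 5 above could be combined in a small evaluation harness. The sketch below is not the Ianvs or OpenCompass API; it assumes a hypothetical OpenAI-compatible endpoint (`API_URL`, `API_KEY`, model name `domain-llm` are placeholders) and shows concurrent request dispatch with exact-match accuracy collection:

```python
import concurrent.futures
import json
import urllib.request

# Hypothetical OpenAI-compatible endpoint and key; replace with the
# actual model service configured for the benchmark.
API_URL = "http://localhost:8000/v1/chat/completions"
API_KEY = "sk-placeholder"

def query_model(question: str) -> str:
    """Send one benchmark question to the model service and return its answer."""
    payload = {
        "model": "domain-llm",  # placeholder model name
        "messages": [{"role": "user", "content": question}],
    }
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

def evaluate(samples, answer_fn=query_model, workers=8):
    """Concurrently query the model and compute exact-match accuracy.

    `samples` is a list of {"question": ..., "answer": ...} dicts, the
    shape a dataset listed in the Benchmark Dataset Map might load into.
    """
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        predictions = list(pool.map(answer_fn, [s["question"] for s in samples]))
    correct = sum(p.strip() == s["answer"] for p, s in zip(predictions, samples))
    return correct / len(samples)
```

Passing `answer_fn` explicitly keeps the metric logic testable offline and lets the same harness wrap different model backends. Exact match is only a stand-in; domain suites would plug in their own metrics here.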

Why is this needed: As large models enter the era of scaled applications, the cloud already provides infrastructure and services for them. Customers have further raised targeted application requirements on the edge side, including personalization, data compliance, and real-time capabilities, making AI services with cloud-edge collaboration a major trend. However, two major challenges remain in terms of product definition, service quality, service qualifications, and industry influence: general competitiveness and customer trust. The crux of the matter is that current large model benchmarking focuses on assessing general basic capabilities and fails to drive large model applications from an industry- or domain-specific perspective.

This issue reflects the real value of large models through industry applications from the perspectives of the domain-specific large model and cloud-edge collaborative AI, using industry benchmarks to drive the incubation of large model applications. Based on the collaborative AI benchmark test suite KubeEdge-Ianvs, this issue supplements the large model testing tool interface, provides matching test datasets, and constructs large model test suites for specific domains, e.g., for governments.

Recommended Skills: KubeEdge-Ianvs, Python, LLMs

Useful links:

  * Introduction to Ianvs
  * Quick Start
  * How to test algorithms with Ianvs
  * Testing incremental learning in industrial defect detection
  * Benchmarking for embodied AI
  * KubeEdge-Ianvs Example
  * LLMs Benchmark List
  * Ianvs v0.1 documentation
  * (China) National standard plan "Artificial Intelligence, Pre-trained Models, Part 2: Evaluation Metrics and Methods" and related standardization documents for government, industrial, and other domain-specific large models

MooreZheng commented 1 month ago

If anyone has questions regarding this issue, please feel free to leave a message here. We would also appreciate it if new members could introduce themselves to the community.

IcyFeather233 commented 1 month ago

I have a question about the Benchmark Dataset Map: which domains should this dataset cover? Is it for all domains, or just industrial and government sectors? Also, if I need to submit a preliminary version, where would be the most appropriate directory to submit it?

MooreZheng commented 1 month ago

> I have a question about the Benchmark Dataset Map: which domains should this dataset cover? Is it for all domains, or just industrial and government sectors? Also, if I need to submit a preliminary version, where would be the most appropriate directory to submit it?

  1. For this issue, the preferred domains would be those where LLMs are currently making a significant impact, e.g., government affairs, industrial, and medical domains.
  2. It depends on what is included in the submitted version. To begin with, a proposal would be preferred.