masouduut94 / volleyball_analytics

This project is designed to display how we can utilize deep learning methods for Sports Data Analytics.
GNU General Public License v2.0

Volleyball Analytics

image source

Introduction to Sports Analytics

The use of Artificial Intelligence (AI) in sports analytics has become increasingly prevalent, offering teams and athletes valuable insights to enhance performance, strategy, and decision-making.

Video analysis is a crucial component of sports analytics, particularly in volleyball. It involves using video footage of games or training sessions to gather detailed insights into player and team performance, identify strengths and weaknesses, and make informed decisions that improve overall efficiency and effectiveness on the court. This is useful in many respects:

Real-Time Feedback

Some advanced video analysis tools provide real-time feedback during matches or training sessions. Coaches can use this information to make immediate adjustments to tactics, substitutions, or strategies based on the ongoing performance.

Scouting Opponents

Teams use video analysis to scout upcoming opponents. By studying their playing style, key players, and preferred strategies, teams can prepare more effectively for upcoming matches.

Performance Evaluation

Coaches and analysts can review match footage to evaluate player techniques, strategies, and overall performance. This includes aspects such as serving, passing, setting, attacking, and defensive plays.

Statistical Tracking

Video analysis software often integrates with statistical tracking systems. This enables the extraction of key performance metrics, such as hitting percentages, passing accuracy, and blocking effectiveness, directly from the video footage.

Injury Prevention

Video analysis can help identify movement patterns and techniques that may contribute to injuries. Coaches and sports scientists can use this information to design training programs focused on injury prevention and optimize players' biomechanics.

Decision Review System

Video analysis can assist referees by reviewing decisions with video footage. This technology can be used on several occasions in volleyball. For example, Hawk-Eye technology can check whether the ball bounced inside the court or not. Something like this:

video_challenge

Overall, video analysis plays a pivotal role in enhancing coaching strategies, player development, and team performance in volleyball and many other sports. The combination of video footage and analytics provides a comprehensive understanding of the game, enabling teams to make data-driven decisions for success.

About Project

This machine learning project runs in real time on top of two deep learning models.

Video Classification model:

In a live broadcast, it is important to run heavy processing only while the game is actually in play. To extract the periods in which the game is on, HuggingFace VideoMAE is utilized. This model is trained on a custom dataset with three labels: service, play, and no-play. It takes 30 frames as input and outputs one of those labels.
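As a rough sketch of how this classification step could be wired up, the snippet below splits a frame stream into 30-frame clips and runs each through a HuggingFace VideoMAE classifier. The checkpoint path and label names come from the config later in this README; the `clip_windows`/`classify_clip` helper names are illustrative, not the project's actual API.

```python
from typing import Iterator, List

# Number of frames the VideoMAE checkpoint expects per clip (see the text above).
CLIP_LEN = 30


def clip_windows(frames: List, clip_len: int = CLIP_LEN) -> Iterator[List]:
    """Split a frame sequence into consecutive non-overlapping clips of clip_len frames."""
    for start in range(0, len(frames) - clip_len + 1, clip_len):
        yield frames[start:start + clip_len]


def classify_clip(model, processor, clip):
    """Run one 30-frame clip through a HuggingFace VideoMAE classifier.

    `model` and `processor` would come from, e.g.:
        processor = VideoMAEImageProcessor.from_pretrained("/path/to/checkpoint")
        model = VideoMAEForVideoClassification.from_pretrained("/path/to/checkpoint")
    where the checkpoint path is the `video_mae.game_state_3.weight` entry
    in conf/ml_models.yaml.
    """
    inputs = processor(clip, return_tensors="pt")  # clip: list of HxWx3 uint8 frames
    logits = model(**inputs).logits
    return model.config.id2label[int(logits.argmax(-1))]  # "service" | "play" | "no-play"
```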

This is the VideoMAE architecture. videomae architecture

image source

YOLOv8 model:

This state-of-the-art object detection model is trained on a dataset that includes the ball along with several volleyball actions.

This is the YOLOv8 architecture: Yolov8

image source

The YOLOv8n variant was chosen for fine-tuning on 6 different classes (objects and actions). In the demos, you can see 6 bounding-box colors, one per class.

These outputs show the video classification and object detection results together. The video classification result can be seen in the top-left corner of the video, and the object detection results appear as bounding boxes in the colors mentioned above. Please note that object detection runs only on frames labeled SERVICE or PLAY.
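Gating the detector on the game state is the key efficiency trick described above. Below is a minimal sketch of that gate; the `detector` callable is any frame-to-detections function (with Ultralytics it could wrap `YOLO("weights.pt")`), and `detect_if_active` is a hypothetical name, not the project's real interface.

```python
# Game states in which the object detector should run
# (per the text above: detection only on SERVICE and PLAY segments).
ACTIVE_STATES = {"service", "play"}


def detect_if_active(state: str, frame, detector):
    """Run `detector` on `frame` only when the game-state label is active.

    `detector` is any callable frame -> detections; with ultralytics it could be
    `lambda f: yolo_model(f)[0].boxes` for `yolo_model = YOLO("weights.pt")`.
    Returns None for no-play frames so downstream code can skip them cheaply.
    """
    if state.lower() not in ACTIVE_STATES:
        return None
    return detector(frame)
```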

Demo 1: FRANCE - POLAND

demo1

Demo 2: USA - CANADA

demo2

Demo 3: USA - POLAND

demo3

The third step is to use the generated data to extract insights about the game. For example, in the gif below, one of the ace points is extracted.

Demo 4: FRANCE - POLAND Ace score.

demo2

Development

The whole project is developed with Python 3.11. The requirements can be found in requirements.txt.

Open conf/ml_models.yaml and configure it like this:

```yaml
yolo:
  player_segmentation:
    weight: "path/to/weights" # download: https://github.com/ultralytics/assets/releases/download/v8.1.0/yolov8n-seg.pt
    labels: {0: 'person'}
  player_detection:
    weight: "path/to/weights" # download: https://github.com/ultralytics/assets/releases/download/v8.1.0/yolov8n.pt
    labels: {0: 'person'}
  pose_estimation:
    weight: "path/to/weights" # download: https://github.com/ultralytics/assets/releases/download/v8.1.0/yolov8n-pose.pt
    labels: {0: 'person'}
  ball_segmentation:
    weight: "path/to/weights" # download: https://drive.google.com/file/d/1KXDunsC1ALOObb303n9j6HHO7Bxz1HR_/view?usp=sharing
    labels: {0: "ball"}
  action_detection6:
    weight: "/path/to/weights" # download: https://drive.google.com/file/d/1o-KpRVBbjrGbqlT8tOjFv91YS8LIJZw2/view?usp=sharing
    labels: { 0: 'ball', 1: 'block', 2: "receive", 3: 'set', 4: 'spike', 5: 'serve'}
  court_segment:
    weight: "/path/to/weights.pt" # download link: https://drive.google.com/file/d/1bShZ7hxNw_AESEgKf_EyoBXdFqCuL7V-/view?usp=sharing
    labels: { 0: "court"}

video_mae:
  game_state_3:
    weight: "/path/to/checkpoint directory" # download: https://drive.google.com/file/d/18vtJSLIpaRHOlvXNmwvSd9stYYAEsMcK/view?usp=sharing
```
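A small sketch of how this config could be consumed, assuming PyYAML is installed. The `load_model_conf` helper name is hypothetical; only the file path and key names come from the config above.

```python
import yaml  # PyYAML


def load_model_conf(path: str = "conf/ml_models.yaml") -> dict:
    """Parse the model config shown above into a plain dict."""
    with open(path) as f:
        return yaml.safe_load(f)


# Example: pick out the action-detection weight and its label map.
# cfg = load_model_conf()
# action = cfg["yolo"]["action_detection6"]
# model = YOLO(action["weight"])  # ultralytics.YOLO; class names in action["labels"]
```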

For more information about the datasets and weights, have a look at the project wiki. The wiki also covers the API, the models used in the project, their structure, and their training results, along with several scripts that run the models and produce the demo videos.

If you want to store the results in a database, you must satisfy some dependencies and provide a database configuration:

```yaml
development:
  user: "user"
  password: "*********"
  db: "volleyball_development"
  host: "some_ip"
  port: 5432
  dialect: 'postgresql'
  driver: 'psycopg2'
test:
  user: "user2"
  password: "********"
  db: "volleyball_test"
  host: "some_ip"
  port: 5432
  dialect: 'postgresql'
  driver: 'psycopg2'
```

These are sample values for the .env file:

```
MODE=development or test
DEV_USERNAME=user1
DEV_PASSWORD=********
DEV_HOST=localhost
DEV_DB=volleyball_development
DEV_PORT=5432
DEV_DRIVER=postgresql
TEST_DB_URL=sqlite:///./vb.db
```
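These variables are typically assembled into a SQLAlchemy connection URL. The sketch below shows one way to do that; the helper names are hypothetical, the variable names come from the .env sample above, and the `psycopg2` driver suffix is taken from the database config.

```python
def dev_db_url(env: dict) -> str:
    """Build a SQLAlchemy URL like postgresql+psycopg2://user:pass@host:5432/dbname.

    `env` is a mapping of the DEV_* variables shown above
    (e.g. `os.environ`, or the result of python-dotenv's `dotenv_values`).
    """
    return (
        f"{env['DEV_DRIVER']}+psycopg2://{env['DEV_USERNAME']}:{env['DEV_PASSWORD']}"
        f"@{env['DEV_HOST']}:{env['DEV_PORT']}/{env['DEV_DB']}"
    )


def db_url_from_mode(env: dict) -> str:
    """Pick the SQLite test URL or the development Postgres URL based on MODE."""
    if env.get("MODE") == "test":
        return env["TEST_DB_URL"]
    return dev_db_url(env)
```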

About data

The video clips gathered as data come from this YouTube channel.

What comes next:

  1. Data analysis will be added to the code. There are various KPIs that can be measured from the detected objects, such as service success rate, service zone analysis, and reception success rate.
  2. Publishing the datasets for video classification and volleyball object detection.
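To illustrate the kind of KPI meant in point 1, here is a small sketch of a service success rate computed from per-rally records. The `Rally` shape is entirely hypothetical (it is not the project's schema), but it shows how detection output could feed such a metric.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Rally:
    """Minimal, hypothetical record of one rally (not the project's real schema)."""
    serving_team: str
    winner: str          # team that won the rally
    is_ace: bool = False


def service_success_rate(rallies: List[Rally], team: str) -> float:
    """Fraction of a team's serves after which the serving team won the rally."""
    serves = [r for r in rallies if r.serving_team == team]
    if not serves:
        return 0.0
    return sum(r.winner == team for r in serves) / len(serves)
```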