lumina37 / rotate-captcha-crack
License: The Unlicense

Rotate-Captcha-Crack

中文 | English

Predict the rotation angle of given picture through CNN. This project can be used for rotate-captcha cracking.

Test result:

test_result

The implemented models are listed in the table below.

| Name | Backbone | Cross-Domain Loss (lower is better) | Params | MACs |
| --- | --- | --- | --- | --- |
| RotNet | ResNet50 | 75.6512° | 24.246M | 4.09G |
| RotNetR | UniRepLKNet-P | 15.1818° | 18.117M | 3.18G |

RotNet is the PyTorch implementation of d4nst/RotNet. RotNetR is based on RotNet, with UniRepLKNet-P (CVPR'24) as its backbone and 128 output classes. Its average prediction error of 15.1818° was obtained after 64 epochs of training (about 3 hours) on the Google Street View dataset.

The Cross-Domain Test uses Google Street View and Landscape-Dataset for training, and Captcha Pictures from Baidu (thanks to @xiangbei1997) for testing.

The captcha picture used in the demo above comes from RotateCaptchaBreak.

Try it!

Prepare

git clone https://github.com/lumina37/rotate-captcha-crack.git --depth 1
cd ./rotate-captcha-crack

This project strongly suggests using uv for package management. If you already have uv, run:

uv sync

Or, if you prefer conda: the following steps create a virtual env under the working directory. You can also use a named env.

conda create -p .conda
conda activate ./.conda
conda install matplotlib tqdm tomli
conda install pytorch torchvision pytorch-cuda=12.4 -c pytorch -c nvidia

Or, if you prefer plain pip:

pip install torch torchvision --index-url https://download.pytorch.org/whl/cu124
pip install -e .

Download the Pretrained Models

Download the *.zip files in Release and unzip them all to the ./models dir.

The directory structure will be like ./models/RotNetR/230228_20_07_25_000/best.pth

The model names change frequently while the project is still in beta. If a FileNotFoundError occurs, try rolling back to the corresponding tag first.

Test the Rotation Effect by a Single Captcha Picture

If no GUI is available, change the debugging behavior from showing images to saving them.

uv run test_captcha.py

If you do not have uv, please use:

python test_captcha.py

Use HTTP Server

With uv:

uv sync --extra server

or with conda:

conda install aiohttp

or with pip:

pip install -e .[server]

Then launch the server. With uv:

uv run server.py

If you do not have uv, just use:

python server.py

Use curl:

curl -X POST --data-binary @test.jpg http://127.0.0.1:4396

Or use Windows PowerShell:

irm -Uri http://127.0.0.1:4396 -Method Post -InFile test.jpg
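A Python client can mirror the raw-bytes POST from the curl example using only the standard library. This is a sketch under the assumption that the server accepts the image body directly on port 4396 as shown above; `build_request` is a hypothetical helper, and the shape of the response is not documented here.

```python
import urllib.request


def build_request(image_bytes: bytes,
                  url: str = "http://127.0.0.1:4396") -> urllib.request.Request:
    # POST the raw image bytes, mirroring `curl --data-binary @test.jpg`
    return urllib.request.Request(url, data=image_bytes, method="POST")


# Sending (requires a running server):
#     with open("test.jpg", "rb") as f:
#         req = build_request(f.read())
#     with urllib.request.urlopen(req) as resp:
#         print(resp.read())
```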

Train Your Own Model

Prepare Datasets

Train

uv run train_RotNetR.py

Validate the Model on Test Set

uv run test_RotNetR.py

Details of Design

Most of the rotate-captcha cracking methods are based on d4nst/RotNet, with ResNet50 as its backbone. RotNet regards the angle prediction as a classification task with 360 classes, then uses cross entropy to compute the loss.

Yet CrossEntropyLoss with one-hot labels assigns a uniform metric distance between all pairs of angles (e.g. $\mathrm{dist}(1°, 2°) = \mathrm{dist}(1°, 180°)$), which clearly defies common sense. Arbitrary-Oriented Object Detection with Circular Smooth Label (ECCV'20) introduces an interesting trick: by smoothing the one-hot label (e.g. [0,1,0,0] -> [0.1,0.8,0.1,0]), CSL provides a loss closer to our intuition, such that $\mathrm{dist}(1°,180°) \gt \mathrm{dist}(1°,3°)$.
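The contrast can be illustrated with a self-contained sketch (plain Python, no PyTorch; the function names are illustrative, not this project's API): cross entropy with a one-hot target is blind to how far the predicted peak sits from the true angle, while a circular smooth label restores that ordering. The Gaussian window and sigma value are assumptions mirroring CSL's window function.

```python
import math

NUM_CLASSES = 360


def cross_entropy_onehot(logits: list, target: int) -> float:
    # standard cross entropy with a one-hot target: -log softmax(logits)[target]
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_sum - logits[target]


def circular_smooth_label(target: int, sigma: float = 2.0) -> list:
    # Gaussian window wrapped around the circle, normalized to sum to 1
    label = []
    for i in range(NUM_CLASSES):
        d = min(abs(i - target), NUM_CLASSES - abs(i - target))  # circular distance
        label.append(math.exp(-d * d / (2.0 * sigma * sigma)))
    total = sum(label)
    return [v / total for v in label]


def cross_entropy_soft(logits: list, soft_target: list) -> float:
    # cross entropy against a soft (smoothed) target distribution
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(x - m) for x in logits))
    return sum(t * (log_sum - x) for t, x in zip(soft_target, logits))


# a prediction peaked 1° away vs 179° away from the true angle (class 1)
near = [10.0 if i == 2 else 0.0 for i in range(NUM_CLASSES)]
far = [10.0 if i == 180 else 0.0 for i in range(NUM_CLASSES)]
```

With a one-hot target, both mistakes cost exactly the same; with the smoothed target, the far mistake costs strictly more.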

Meanwhile, the angle_error_regression proposed by d4nst/RotNet is less effective: on outliers its gradient pushes training toward non-convergence. A SmoothL1Loss works better for regression.
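A minimal sketch of why SmoothL1 behaves better (plain Python; the function names and the wrap-to-[-180, 180) step are illustrative assumptions for angle regression in general, not code from this project): the loss is quadratic near zero but linear beyond beta, so its gradient magnitude is capped at 1 on outliers instead of growing with the error.

```python
def smooth_l1(err: float, beta: float = 1.0) -> float:
    # quadratic for |err| < beta, linear beyond;
    # the gradient magnitude therefore never exceeds 1
    a = abs(err)
    return 0.5 * a * a / beta if a < beta else a - 0.5 * beta


def wrapped_angle_err(pred_deg: float, true_deg: float) -> float:
    # wrap the difference into [-180, 180), so 359° vs 1° counts as -2°, not 358°
    return (pred_deg - true_deg + 180.0) % 360.0 - 180.0
```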