QIN2DIM / hcaptcha-challenger

🥂 Gracefully face hCaptcha challenge with MoE(ONNX) embedded solution.
https://docs.captchax.top/
GNU General Public License v3.0

feat(onnx): ViT zero-shot tasks #858

Closed by QIN2DIM 10 months ago

QIN2DIM commented 10 months ago

Intro

See the example code for details.

The CLIP multimodal model enables zero-shot image classification. I've tested this on multiple datasets, and as long as an appropriate prompt is provided, the model is over 99.9% accurate.
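
For context, "zero-shot" here just means scoring each image tile against a set of candidate text labels, with no task-specific training. A minimal sketch with the Hugging Face transformers pipeline (the checkpoint name, file path, and labels below are illustrative):

from transformers import pipeline

# Build a zero-shot image classifier from a stock CLIP checkpoint
classifier = pipeline(
    task="zero-shot-image-classification",
    model="openai/clip-vit-base-patch16",
)
results = classifier(
    "challenge_tile.png",  # hypothetical path to one of the challenge tiles
    candidate_labels=["off-road vehicle", "car", "bicycle"],
)
print(results[0])  # highest-scoring label first, e.g. {'score': ..., 'label': ...}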

We just need to write `positive_labels` and `negative_labels` based on the prompt words of known challenges (`image_binary_challenge`). If a prompt that has never been handled before is encountered, the program automatically converts and adjusts it for the binary classification task.

We reproduced the preprocessing module in NumPy, i.e., the pipeline does not need to rely on PyTorch.
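
As an illustration of what PyTorch-free preprocessing looks like, here is a minimal NumPy/PIL sketch (illustrative, not the project's actual implementation; the mean/std values are OpenAI CLIP's published normalization constants):

import numpy as np
from PIL import Image

# OpenAI CLIP's published normalization constants
CLIP_MEAN = np.array([0.48145466, 0.4578275, 0.40821073], dtype=np.float32)
CLIP_STD = np.array([0.26862954, 0.26130258, 0.27577711], dtype=np.float32)

def preprocess(image: Image.Image, size: int = 224) -> np.ndarray:
    """Resize, center-crop, rescale, and normalize without torch/torchvision."""
    # Resize so the short side equals `size`, then center-crop to size x size
    w, h = image.size
    scale = size / min(w, h)
    image = image.resize((round(w * scale), round(h * scale)), Image.BICUBIC)
    left = (image.width - size) // 2
    top = (image.height - size) // 2
    image = image.crop((left, top, left + size, top + size)).convert("RGB")
    # HWC uint8 -> float32 in [0, 1], channel-wise normalization, then NCHW
    arr = np.asarray(image, dtype=np.float32) / 255.0
    arr = (arr - CLIP_MEAN) / CLIP_STD
    return arr.transpose(2, 0, 1)[np.newaxis]  # shape (1, 3, size, size)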

By default, we use the RN50.openai variant of the model for classification tasks. Both the ONNX branch and the ViT transformers pipeline branch are encapsulated, and the program switches between them automatically: when torch and transformers are installed in your runtime environment and a CUDA GPU is available, the pipeline branch is used; otherwise, it defaults to ONNX running on the CPU.

https://github.com/QIN2DIM/hcaptcha-challenger/blob/901afd1dbf97ac25191ec6ea2398daab9db97773/hcaptcha_challenger/onnx/modelhub.py#L245-L259
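
The permalink above points at the real dispatch code; the sketch below only restates the decision rule in isolation (the function name is illustrative):

def select_backend() -> str:
    """Use the ViT transformers pipeline only when torch, transformers,
    and a CUDA GPU are all available; otherwise fall back to ONNX on CPU."""
    try:
        import torch
        import transformers  # noqa: F401  (presence check only)
    except ImportError:
        return "onnx-cpu"
    if torch.cuda.is_available():
        return "transformers-pipeline"
    return "onnx-cpu"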

DEMO

[Diagram: datalake_post workflow (drawio)]

"""
1. **positive_labels** can contain just the split prompt, i.e., the object the prompt specifies.

2. **negative_labels** usually contains multiple categories;

   observe the other objects that appear in the 9 challenge images and fill in their label names.

3. **positive_labels** may contain more than one entry when the prompt is ambiguous.

   For example, if the prompt asks you to select a `vehicle`, but `car` and `airplane` appear in the task,

   you can fill in: `positive_labels = ["vehicle", "car", "airplane"]`

4. Sometimes the prompt doesn't change, but its corresponding image group is replaced.
   If you observe this, update your `datalake_post` accordingly!

5. If a prompt has never appeared, i.e., you haven't added it to the datalake, the program automatically
decomposes the prompt and adds simple antonyms to the mapping network so that the binary classification task can still proceed (see the sketch after this listing).

   This fallback works sometimes, but its accuracy is clearly no better than filling in the labels manually
"""
from hcaptcha_challenger import split_prompt_message, label_cleaning, DataLake

def handle(x): return split_prompt_message(label_cleaning(x), "en")

datalake_post = {
    # --> off-road vehicle
    handle("Please click each image containing an off-road vehicle"): {
        "positive_labels": ["off-road vehicle"],
        "negative_labels": ["car", "bicycle"],
    },
    # --> pair of headphones
    handle("Please click each image containing a pair of headphones"): {
        "positive_labels": ["headphones"],
        "negative_labels": ["car", "elephant", "cat"]
    },
    # --> item of office equipment
    handle("Please click each image containing an item of office equipment"): {
        "positive_labels": ["office equipment", "chair"],
        "negative_labels": ["shoes", "guitar", "drum", "musical instruments"]
    }
}

def common():
    from hcaptcha_challenger import ModelHub

    # ... some operations you are already familiar with

    modelhub = ModelHub.from_github_repo()
    modelhub.parse_objects()

    print(f"Before {modelhub.datalake.keys()=}")

    # Merge the data. And use this modelhub object later
    for prompt, serialized_binary in datalake_post.items():
        modelhub.datalake[prompt] = DataLake.from_serialized(serialized_binary)

    print(f"After {modelhub.datalake.keys()=}\n")

    for prompt, dl in modelhub.datalake.items():
        print(f"{prompt=}")
        print(f"{dl=}\n")

    # ... some operations you are already familiar with

if __name__ == '__main__':
    common()
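
For illustration, here is a rough sketch of the fallback described in note 5; fallback_datalake_entry is a hypothetical helper, not part of the project's API:

# Hypothetical helper (not the project's actual API) mirroring note 5:
# when a prompt is missing from the datalake, split it and pair it with a
# generic negative so the binary classification task can still run.
def fallback_datalake_entry(prompt: str) -> dict:
    label = handle(prompt)  # reuse the prompt-splitting helper defined above
    return {
        "positive_labels": [label],
        # a crude stand-in for the "simple antonyms" mapping
        "negative_labels": [f"something that is not {label}"],
    }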

https://github.com/QIN2DIM/hcaptcha-challenger/blob/d38be1b3f148f4368e77b87cb74bccc826cf3117/src/objects.yaml#L553-L574

QIN2DIM commented 10 months ago

https://github.com/QIN2DIM/awesome-clip-production

- Preview Blog
- Tutorials
- Self Hosting
- Model Hub
- Model Card
- Benchmarks
  - Open-CLIP
  - EVA-CLIP (arXiv, submitted 27 Mar 2023)
  - DINOv2 (arXiv, submitted 14 Apr 2023)
- Datasets
  - LAION-400M (arXiv, submitted 3 Nov 2021)
  - LAION-2B (arXiv, submitted 16 Oct 2022)
  - DataComp (arXiv, submitted 27 Apr 2023; last revised 25 Jul 2023, v4)

demo

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load the CLIP checkpoint and its processor
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch16")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch16")
model.eval()

# Build a dummy text/image batch to trace the forward pass;
# any representative inputs work for tracing
image = Image.new("RGB", (224, 224))
inputs = processor(text=["a photo of a cat"], images=image, return_tensors="pt", padding=True)

torch.onnx.export(
    model,  # model being run
    # model inputs: a dict with string keys and tensor values is accepted
    dict(inputs),
    "clip-vit-base-patch16.onnx",  # where to save the model
    opset_version=14,  # the ONNX version to export the model to
    input_names=["input_ids", "pixel_values", "attention_mask"],  # the model's input names
    output_names=["logits_per_image", "logits_per_text", "text_embeds", "image_embeds"],  # the model's output names
    dynamic_axes={  # variable length axes
        "input_ids": {0: "batch", 1: "sequence"},
        "pixel_values": {0: "batch", 1: "num_channels", 2: "height", 3: "width"},
        "attention_mask": {0: "batch", 1: "sequence"},
        "logits_per_image": {0: "batch"},
        "logits_per_text": {0: "batch"},
        "text_embeds": {0: "batch"},
        "image_embeds": {0: "batch"}
    }
)
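
To sanity-check the exported graph, you can load it back with onnxruntime; a sketch, reusing the inputs dict prepared in the export snippet above:

import numpy as np
import onnxruntime as ort

# Run the exported model on CPU
session = ort.InferenceSession("clip-vit-base-patch16.onnx", providers=["CPUExecutionProvider"])
(logits_per_image,) = session.run(
    ["logits_per_image"],
    {
        "input_ids": inputs["input_ids"].numpy(),
        "pixel_values": inputs["pixel_values"].numpy(),
        "attention_mask": inputs["attention_mask"].numpy(),
    },
)
# Numerically stable softmax over the text axis gives per-label probabilities
logits = logits_per_image - logits_per_image.max(axis=-1, keepdims=True)
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
print(probs)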