
alpha-beta-CROWN: An Efficient, Scalable and GPU Accelerated Neural Network Verifier (winner of VNN-COMP 2021, 2022, 2023, and 2024)

RuntimeError when verifying a CNN model with Tanh activation function #14

Closed by 13luoyu 8 months ago

13luoyu commented 1 year ago

Hello, I noticed that alpha-beta-CROWN supports the Sigmoid and Tanh activation functions. I am using this code to verify models from ERAN. The CNN model with the Sigmoid activation function runs smoothly, but for the CNN model with the Tanh activation function I get the following error:

Traceback (most recent call last):
  File "abcrown.py", line 650, in <module>
    main()
  File "abcrown.py", line 538, in main
    incomplete_verifier(model_ori, x, data_ub=data_max, data_lb=data_min, vnnlib=vnnlib)
  File "abcrown.py", line 72, in incomplete_verifier
    domain, x, data_lb, data_ub, vnnlib, stop_criterion_func=stop_criterion_min(arguments.Config["bab"]["decision_thresh"]))
  File "/home/XueZhiYi/academic/alpha-beta-CROWN/complete_verifier/beta_CROWN_solver.py", line 1122, in build_the_model
    bound_upper=False, aux_reference_bounds=aux_reference_bounds, cutter=self.cutter)
  File "/home/XueZhiYi/academic/alpha-beta-CROWN/complete_verifier/auto_LiRPA/bound_general.py", line 1170, in compute_bounds
    cutter=cutter, decision_thresh=decision_thresh)
  File "/home/XueZhiYi/academic/alpha-beta-CROWN/complete_verifier/auto_LiRPA/optimized_bounds.py", line 592, in get_optimized_bounds
    update_mask=preserve_mask)
  File "/home/XueZhiYi/academic/alpha-beta-CROWN/complete_verifier/auto_LiRPA/bound_general.py", line 1321, in compute_bounds
    self.check_prior_bounds(final)
  File "/home/XueZhiYi/academic/alpha-beta-CROWN/complete_verifier/auto_LiRPA/bound_general.py", line 849, in check_prior_bounds
    self.check_prior_bounds(n)
  File "/home/XueZhiYi/academic/alpha-beta-CROWN/complete_verifier/auto_LiRPA/bound_general.py", line 857, in check_prior_bounds
    node.inputs[i], prior_checked=True)
  File "/home/XueZhiYi/academic/alpha-beta-CROWN/complete_verifier/auto_LiRPA/bound_general.py", line 959, in compute_intermediate_bounds
    unstable_size=unstable_size)
  File "/home/XueZhiYi/academic/alpha-beta-CROWN/complete_verifier/auto_LiRPA/backward_bound.py", line 154, in backward_general
    l.lA, l.uA, *l.inputs, start_shape=start_shape, start_node=node)
  File "/home/XueZhiYi/academic/alpha-beta-CROWN/complete_verifier/auto_LiRPA/operators/activation.py", line 258, in bound_backward
    lA, lbias = _bound_oneside(last_lA, sign=-1)
  File "/home/XueZhiYi/academic/alpha-beta-CROWN/complete_verifier/auto_LiRPA/operators/activation.py", line 255, in _bound_oneside
    A_prod, _bias = multiply_by_A_signs(last_A, w_pos, w_neg, b_pos, b_neg)
  File "/home/XueZhiYi/academic/alpha-beta-CROWN/complete_verifier/auto_LiRPA/operators/clampmult.py", line 122, in multiply_by_A_signs
    return ClampedMultiplication.apply(A, d_pos, d_neg, b_pos, b_neg, False)
  File "/home/XueZhiYi/academic/alpha-beta-CROWN/complete_verifier/auto_LiRPA/operators/clampmult.py", line 101, in forward
    return ClampedMultiplication.clamp_mutiply_forward(A, d_pos, d_neg, b_pos, b_neg, patches_mode)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
  File "/home/XueZhiYi/academic/alpha-beta-CROWN/complete_verifier/auto_LiRPA/operators/clampmult.py", line 42, in clamp_mutiply_forward
        A_pos = A.clamp(min=0)
        A_neg = A.clamp(max=0)
        A_new = d_pos * A_pos + d_neg * A_neg
                ~~~~~~~~~~~~~ <--- HERE
        bias_pos = bias_neg = torch.tensor(0.)
        if b_pos is not None:
RuntimeError: The size of tensor a (1000) must match the size of tensor b (304) at non-singleton dimension 0
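For context, the underlying failure is a plain PyTorch broadcasting error: the relaxation coefficients and the clamped A matrix disagree in size along dimension 0 (1000 vs. 304). A minimal standalone snippet that raises the same kind of RuntimeError (the shapes below are illustrative only, not the actual tensors inside auto_LiRPA):

import torch

# Illustrative shapes only: 1000 vs. 304 rows cannot broadcast,
# which is the same mismatch reported by clamp_mutiply_forward.
d_pos = torch.randn(1000, 10)
A_pos = torch.randn(304, 10).clamp(min=0)
out = d_pos * A_pos
# -> RuntimeError: The size of tensor a (1000) must match the size of
#    tensor b (304) at non-singleton dimension 0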

The command to run is:

python abcrown.py --onnx_path onnx/mnist/convMedGTANH__Point.onnx  --dataset MNIST --epsilon 0.02 --device cpu --complete_verifier skip --no_full_conv_alpha --end 100

I tried to fix it by handling Tanh the same way as Sigmoid in backward_bound.py, changing:

if (sparse_intermediate_bounds and isinstance(node, (BoundRelu, BoundSigmoid))
                and nj.name != final_node_name and not share_slopes):

to

if (sparse_intermediate_bounds and isinstance(node, (BoundRelu, BoundSigmoid, BoundTanh))
                and nj.name != final_node_name and not share_slopes):

How should I deal with this? Thank you!

huanzhang12 commented 1 year ago

Thanks for reaching out and reporting this issue! We can indeed reproduce it, and we are working on a new version with better support for more activation functions.

@shizhouxing is working on this, and it will be fixed in the next release.

shizhouxing commented 11 months ago

The latest code works if you remove --no_full_conv_alpha.
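
For example, with that flag dropped, the command from the original report would become:

python abcrown.py --onnx_path onnx/mnist/convMedGTANH__Point.onnx --dataset MNIST --epsilon 0.02 --device cpu --complete_verifier skip --end 100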