Open KehanLong opened 5 months ago
Hi @KehanLong ,
In the VNNLIB format, `(assert (>= Y_0 0.0))`
defines the condition for a counterexample (not the property to verify), and verification tries to prove the opposite (i.e., Y_0 < 0). So "safe" is the expected result.
> When I increase the hidden layer size from 6 to 8, the verification process takes an excessive amount of time (more than 10,000 seconds) and fails to complete.
Larger models can be harder to verify. It also depends on how the model is trained (e.g., whether techniques such as regularization were used to make the model more verification-friendly).
> If I use the `torch.norm` function in the forward pass of the network, I encounter the following error: `NotImplementedError: Exponent is not supported yet`
`torch.norm` is probably not supported yet. You may write the norm computation manually in the model.
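For example, an L2 norm can be rewritten with elementary operations. A minimal sketch (the `eps` guard, and the choice of `x * x` over `x ** 2` to avoid the unsupported exponent operator, are my assumptions about what bound propagation handles in a given auto_LiRPA version):

```python
import torch

def manual_l2_norm(x, eps=1e-12):
    """L2 norm along the last dim, written with elementary ops
    (multiply, sum, sqrt) instead of torch.norm."""
    # x * x avoids the exponent operator (x ** 2) that raised the error;
    # eps keeps sqrt differentiable (and its gradient finite) at the origin
    return torch.sqrt(torch.sum(x * x, dim=-1, keepdim=True) + eps)
```

Whether `sqrt` itself is supported by your auto_LiRPA version is worth checking separately.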
Hi @shizhouxing,
Thank you for your prompt response and assistance. I appreciate your help in resolving the issues!
I have been trying to manually write some norm computations in my neural network, but I am still encountering some problems. Let me provide more details about the issue.
I have a simple neural network defined as follows:
```python
import torch
import torch.nn as nn

class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(3, 32)
        self.fc2 = nn.Linear(32, 16)
        self.fc3 = nn.Linear(16, 1)
        self.relu1 = nn.ReLU()
        self.relu2 = nn.ReLU()
        # Fixed origin input; at __init__ time fc1.weight is still on CPU
        self.origin_tensor = torch.zeros(1, 3).to(self.fc1.weight.device)

    def forward(self, input):
        x = self.fc1(input)
        x = self.relu1(x)
        x = self.fc2(x)
        x = self.relu2(x)
        phi_x = self.fc3(x)
        # Network evaluated at the origin
        phi_0 = self.fc3(self.relu2(self.fc2(self.relu1(self.fc1(self.origin_tensor)))))
        V = phi_x
        return V
```
With `V = phi_x`, the verification can be done easily.
However, if I simply change `V = phi_x` to `V = phi_x - phi_0` and retrain the network, I get the following error when verifying:
```
if shape[i] == 1 and A.shape[i + 1] != 1:
IndexError: tuple index out of range
```
The error occurs in the `broadcast_backward` method in `alpha-beta-CROWN/complete_verifier/auto_LiRPA/operators/base.py` during the backward pass of the verification process.
I suspect the issue is related to `self.origin_tensor` and how it is used to compute `phi_0`; the subtraction of `phi_0` from `phi_x` seems to be what breaks the backward pass.
Could you please provide any insights or suggestions on how to resolve this issue? My goal is to enforce structure on the network output (e.g., positive definiteness) by incorporating computations such as norms or subtractions, but the current approach leads to errors during verification.
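One workaround worth trying, assuming the constant second pass over `origin_tensor` is what breaks `broadcast_backward`: once training is finished, `phi_0` is a fixed constant, so it can be folded into `fc3`'s bias, leaving a single forward path in the traced graph. A sketch under that assumption (the `fold_origin_offset` helper is hypothetical, not part of alpha-beta-CROWN; the model here is a trimmed-down copy of the one above for self-containedness):

```python
import copy
import torch
import torch.nn as nn

class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(3, 32)
        self.fc2 = nn.Linear(32, 16)
        self.fc3 = nn.Linear(16, 1)
        self.relu = nn.ReLU()

    def forward(self, x):
        # Single forward path: no second pass over a constant origin tensor
        return self.fc3(self.relu(self.fc2(self.relu(self.fc1(x)))))

def fold_origin_offset(model):
    """Return a copy whose output is phi(x) - phi(0), with the constant
    phi(0) absorbed into the last layer's bias after training."""
    shifted = copy.deepcopy(model)
    with torch.no_grad():
        origin = torch.zeros(1, shifted.fc1.in_features)
        phi_0 = shifted(origin)               # constant once weights are fixed
        shifted.fc3.bias -= phi_0.squeeze(0)  # forward now computes phi(x) - phi(0)
    return shifted
```

After folding, `shifted(x)` equals `model(x) - model(0)`, but the graph seen by the verifier contains no constant input tensor and no subtraction of a second network evaluation.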
Besides, I am also interested in verifying the gradients of learned neural networks. For example, given a learned MLP f and a known vector field g(x), can I use alpha-beta-CROWN to verify that grad_x f(x) · g(x) > 0 for all x in a bounded input region? Any suggestions or insights are appreciated.
Hi,
I've encountered a few issues while using alpha-beta-CROWN for verifying the output of a simple neural network. I would greatly appreciate your assistance in resolving these concerns.
I have a simple neural network with a negated ReLU activation in the last layer, which should always produce negative outputs. However, the verification results indicate that the network output is always non-negative, which is inconsistent with the expected behavior.
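One detail worth keeping in mind here: a negated ReLU yields non-positive outputs, not strictly negative ones, so the output sits exactly on the boundary Y_0 = 0 whenever the pre-activation is ≤ 0. A quick check:

```python
import torch

z = torch.tensor([-1.0, 0.0, 2.0])
out = -torch.relu(z)  # non-positive, and exactly 0 wherever z <= 0
```

This matters because the property boundary in the spec below is exactly Y_0 = 0.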
Here's the network definition:

```python
import torch.nn as nn

class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(1, 6)
        self.fc2 = nn.Linear(6, 6)
        self.fc3 = nn.Linear(6, 1)
        self.relu1 = nn.ReLU()
        self.relu2 = nn.ReLU()
        self.relu3 = nn.ReLU()
```
And here is the custom vnnlib file:
```
; Input variables (states).
(declare-const X_0 Real)

; Output variables.
(declare-const Y_0 Real)

; Input constraints.
(assert (<= X_0 1.0))
(assert (>= X_0 -1.0))

; Verifying output holds.
(assert (>= Y_0 0.0))
```
and here is the YAML file I use:

```yaml
general:
  device: cpu
  conv_mode: matrix
  enable_incomplete_verification: false
  root_path: ${CONFIG_PATH}
  csv_name: verification/specs/test_specs.csv
model:
  name: >-
    Customized("test.py", "create_simple_model")
  input_shape: [-1, 1]
data:
  dataset: Customized("test.py", "box_data", lower_limit = [-1.0], upper_limit = [1.0], scale = 1.0)
attack:
  pgd_order: skip
debug:
  print_verbose_decisions: false
solver:
  batch_size: 50000
  min_batch_size_ratio: 0.
  bound_prop_method: crown
bab:
  override_timeout: 1000
  decision_thresh: -1.e-4
  branching:
    method: sb
    input_split:
      enable: True
      ibp_enhancement: True
      compare_with_old_bounds: True
      adv_check: -1
      sb_coeff_thresh: 0.001
```
The verification results show that the network is "safe" and the output is always non-negative, which contradicts the expected behavior of the negated ReLU activation.
The result looks like:

```
initial crown bounds: tensor([[0.]])
Worst class: (+ rhs) 0.0
Verified by initial bound!
Result: safe in 0.0920 seconds
############# Summary #############
Final verified acc: 100.0% (total 1 examples)
Problem instances count: 1 , total verified (safe/unsat): 1 , total falsified (unsafe/sat): 0 , timeout: 0
mean time for ALL instances (total 1):0.09201649202074873, max time: 0.09201741218566895
mean time for verified SAFE instances(total 1): 0.09201741218566895, max time: 0.09201741218566895
safe (total 1), index: [0]
```
Besides, I have a few other questions:

- **Network scaling:** When I increase the hidden layer size from 6 to 8, the verification process takes an excessive amount of time (more than 10,000 seconds) and fails to complete.
- **Error with `torch.norm`:** If I use the `torch.norm` function in the forward pass of the network, I encounter the following error: `NotImplementedError: Exponent is not supported yet`
Thank you for your time and attention to this matter!