Verified-Intelligence / auto_LiRPA

auto_LiRPA: An Automatic Linear Relaxation based Perturbation Analysis Library for Neural Networks and General Computational Graphs
https://arxiv.org/pdf/2002.12920

Support for torch.clip #38

Closed chenxi-yang closed 1 year ago

chenxi-yang commented 1 year ago

Hi,

I ran into the following issue when trying to use auto_LiRPA with `x = torch.clip(x, min, max)` in the forward function. Does auto_LiRPA support torch.clip? Thanks!

```
Traceback (most recent call last):
  File "test_provability.py", line 57, in main
    worst_reward, best_reward, _ = verifier.extract_symbolic_reward(model_name, policy_dir, model_dir, normalization)
  File "/home/xx/vrl/vrl/evaluation/verifier.py", line 385, in extract_symbolic_reward
    model = BoundedModule(verified_model, (my_input_state, my_input_attack),
  File "/home/xx/miniconda3/envs/verification/lib/python3.8/site-packages/auto_LiRPA-0.3-py3.8.egg/auto_LiRPA/bound_general.py", line 102, in __init__
    self._convert(model, global_input)
  File "/home/xx/miniconda3/envs/verification/lib/python3.8/site-packages/auto_LiRPA-0.3-py3.8.egg/auto_LiRPA/bound_general.py", line 810, in _convert
    nodesOP, nodesIn, nodesOut, template = self._convert_nodes(
  File "/home/xx/miniconda3/envs/verification/lib/python3.8/site-packages/auto_LiRPA-0.3-py3.8.egg/auto_LiRPA/bound_general.py", line 602, in _convert_nodes
    nodesOP, nodesIn, nodesOut, template = parse_module(
  File "/home/xx/miniconda3/envs/verification/lib/python3.8/site-packages/auto_LiRPA-0.3-py3.8.egg/auto_LiRPA/parse_graph.py", line 148, in parse_module
    trace_graph = _optimize_graph(
  File "/home/xx/miniconda3/envs/verification/lib/python3.8/site-packages/torch/onnx/utils.py", line 308, in _optimize_graph
    graph = _C._jit_pass_onnx(graph, operator_export_type)
  File "/home/xx/miniconda3/envs/verification/lib/python3.8/site-packages/torch/onnx/__init__.py", line 416, in _run_symbolic_function
    return utils._run_symbolic_function(*args, **kwargs)
  File "/home/xx/miniconda3/envs/verification/lib/python3.8/site-packages/torch/onnx/utils.py", line 1406, in _run_symbolic_function
    return symbolic_fn(g, *inputs, **attrs)
  File "/home/xx/miniconda3/envs/verification/lib/python3.8/site-packages/torch/onnx/symbolic_opset11.py", line 75, in clamp
    return clamp_max(g, clamp_min(g, self, min), max)
  File "/home/xx/miniconda3/envs/verification/lib/python3.8/site-packages/torch/onnx/symbolic_helper.py", line 234, in wrapper
    return fn(g, *args, **kwargs)
  File "/home/xx/miniconda3/envs/verification/lib/python3.8/site-packages/torch/onnx/symbolic_opset11.py", line 94, in clamp_max
    max = g.op("Cast", max, to_i=symbolic_helper.cast_pytorch_to_onnx[dtype])
KeyError: None
```
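Editor's note: one possible workaround, not suggested in the thread itself, is to rewrite the clip as a composition of ReLUs, which auto_LiRPA does support. This relies on the identities max(x, lo) = lo + relu(x - lo) and min(y, hi) = hi - relu(hi - y), and assumes lo and hi are constant scalars. The module name `ClipAsReLU` below is hypothetical, a sketch rather than a supported API:

```python
# Hypothetical workaround: express torch.clip(x, lo, hi) using only ReLU,
# so the traced graph contains ops auto_LiRPA can bound. Assumes constant
# scalar bounds lo and hi.
import torch
import torch.nn as nn


class ClipAsReLU(nn.Module):
    """Computes torch.clip(x, lo, hi) via two ReLUs."""

    def __init__(self, lo: float, hi: float):
        super().__init__()
        self.lo, self.hi = lo, hi
        self.relu = nn.ReLU()

    def forward(self, x):
        # max(x, lo) = lo + relu(x - lo)
        y = self.lo + self.relu(x - self.lo)
        # min(y, hi) = hi - relu(hi - y)
        return self.hi - self.relu(self.hi - y)
```

In the forward function, `x = torch.clip(x, min, max)` would then become `x = self.clip(x)` with `self.clip = ClipAsReLU(min, max)` set up in `__init__`. Whether the resulting ReLU relaxation gives bounds tight enough for the downstream verification task would need to be checked case by case.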

huanzhang12 commented 1 year ago

Thank you for reporting this. We will put it on our todo list and add support in a future release.