ZJUICSR / AIcert

Mulan Permissive Software License, Version 2
207 stars 8 forks

Cannot install Flask==2.0.3 and flask==1.0.2 because these package versions have conflicting dependencies. #2

Open kevindadi opened 5 months ago

kevindadi commented 5 months ago

python version = 3.9.13

Acomand commented 5 months ago

I'd also like to know the versions of protobuf, textattack, and auto_LiRPA. There may be a few dependency conflicts. If possible, could you please update the environment dependencies file?

kevindadi commented 5 months ago

Protobuf = 3.20.0, Textattack = 0.3.10, Auto_LiRPA = 0.3. There are no dependency version changes in the requirements file.
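For reference, the pins stated in this thread so far (with Python 3.9.13 from the issue report) would look like the following requirements fragment; any package not quoted in the thread is left out because its version remains unresolved:

```text
# Python 3.9.13 (per the issue report)
protobuf==3.20.0
textattack==0.3.10
auto_LiRPA==0.3
```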

Acomand commented 5 months ago

Sorry, sir, it seems my comment confused you. I also have problems with the dependencies and am just raising new questions (or did you manage to run the project successfully?). The problems are:

  1. Flask==2.0.3 and flask==1.0.2 are in conflict.
  2. Even after choosing either flask 2.0.3 or 1.0.2, the version of protobuf becomes the new conflict.
  3. After fixing the conflicts in 2, there are new problems when running "python main.py".
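An aside on point 1: pip normalizes project names (PEP 503), so "Flask" and "flask" refer to the same package, and pinning both Flask==2.0.3 and flask==1.0.2 is a direct self-conflict that no resolver can satisfy. A minimal sketch of the normalization rule:

```python
import re

def normalize(name):
    # PEP 503 name normalization: runs of "-", "_", "." collapse to a
    # single "-", and the result is lowercased.
    return re.sub(r"[-_.]+", "-", name).lower()

# "Flask" and "flask" are the same project to pip, so a requirements file
# that pins both Flask==2.0.3 and flask==1.0.2 contradicts itself.
print(normalize("Flask"))                        # flask
print(normalize("Flask") == normalize("flask"))  # True
```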
kevindadi commented 5 months ago

> Sorry, sir. It seems my comment confused you. I also have problems with the dependencies and I am just raising the new questions (or do you run the project successfully?). All the problems are:
>
>   1. Flask==2.0.3 and flask==1.0.2 are in conflict.
>   2. Even after choosing flask between 2.0.3 or 1.0.2, the version of protobuf becomes the new conflicting one.
>   3. After fixing the conflicts in 2, there are new problems when running "python main.py".

No, sir, same problem: I didn't run the project successfully either.

hardtoname0907 commented 5 months ago

Hello! I noticed the discussion you were having and I'm experiencing something similar. I use TensorFlow==2.11.0 and installed protobuf==3.20.1 and Flask==2.0.3 to fix the dependency conflicts. I am able to run the platform interface successfully, but none of the functionalities, such as "Adversarial Sample Generation," seem to work. When I click the "Evaluate" button within this functionality, the system does not respond. I am curious whether anyone else has encountered a similar problem.

Acomand commented 5 months ago

> Hello! I noticed the discussion you were having and I'm experiencing something similar. So I use TensorFlow==2.11.0 and installed protobuf==3.20.1, Flask==2.0.3 to fix dependency conflicts. I am able to successfully run the platform interface, but I am encountering an issue where none of the functionalities, such as "Adversarial Sample Generation," seem to work. When I click on the "Evaluate" button within this functionality, the system does not respond. I am curious if anyone else has encountered a similar problem.

Hi,

  1. The dependency conflict seems to exist when using TensorFlow==2.11.0, as alibi-detect 0.8.1 depends on tensorflow >=2.2.0, <2.8.0, !=2.6.0, !=2.6.1.
  2. About your question: when you click the button to start the evaluation, you can see the log console at the bottom, and it shows some output, such as errors. The project does not provide the dataset or pretrained models, so it does not seem to work. For example, I found the error "can't find 'XXX.pth'" in the log.
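The alibi-detect constraint in point 1 can be checked mechanically. A small stdlib-only sketch of the range test (the specifier ">=2.2.0, <2.8.0, !=2.6.0, !=2.6.1" comes from the comment above; the helper names are made up for illustration):

```python
def vtuple(version):
    # Parse a simple "X.Y.Z" version string into a comparable tuple.
    return tuple(int(part) for part in version.split("."))

def satisfies_alibi_detect_081(version):
    # alibi-detect 0.8.1 requires tensorflow >=2.2.0, <2.8.0, !=2.6.0, !=2.6.1.
    v = vtuple(version)
    return (2, 2, 0) <= v < (2, 8, 0) and v not in {(2, 6, 0), (2, 6, 1)}

print(satisfies_alibi_detect_081("2.11.0"))  # False: 2.11.0 breaks the <2.8.0 bound
print(satisfies_alibi_detect_081("2.7.4"))   # True: inside the allowed range
```

This is why TensorFlow 2.11.0 and alibi-detect 0.8.1 cannot be installed together: 2.11.0 falls outside the upper bound.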
hardtoname0907 commented 5 months ago

> Hi,
>
>   1. The dependency conflicts seem to exist when using TensorFlow==2.11.0, as alibi-detect 0.8.1 depends on tensorflow >=2.2.0, <2.8.0, !=2.6.0, !=2.6.1.
>   2. About your question: when you click the button to start the evaluation, you can see the log console at the bottom, and it shows some output, such as errors. The project does not provide the dataset or pretrained models, so it does not seem to work. For example, I found the error "can't find 'XXX.pth'" in the log.

Thank you for your response and suggestions! (1) Indeed, switching to TensorFlow==2.11.0 resulted in a dependency conflict with alibi-detect 0.8.1; I'm still exploring compatible versions of alibi-detect to resolve this. (2) What's puzzling is that when I click the "Evaluate" button, no error messages appear in the terminal, nor does the debug window respond in any way. (3) I was wondering whether you have been able to run this platform feature successfully? It would be greatly appreciated if we could exchange contact information for further discussion.

Acomand commented 5 months ago

These are my steps to get the log:

  1. I modify the dependencies to run the project, and I do not install textattack or any additional dependencies.
  2. In "index.py", I remove the "import interface" line and just use "interface = 0" as the definition.
  3. I copy the function "run_concolic" into "index.py" (and also add the line "from function import concolic").
  4. Run "main.py" and use the "测试样本自动生成" (automatic test sample generation) item to get the results, where I see the error logs.
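The "interface = 0" replacement in step 2 is essentially stubbing out an import. An alternative, hypothetical sketch of the same trick: register a dummy module in sys.modules so that "import interface" elsewhere in the project still succeeds without pulling in the real module's broken dependency chain:

```python
import sys
import types

# Register a dummy module under the name "interface" so that any later
# "import interface" resolves to this stub instead of the real module.
stub = types.ModuleType("interface")
sys.modules["interface"] = stub

import interface  # picks up the stub from sys.modules

print(interface is stub)  # True
```

Compared with editing "index.py" directly, this keeps the project files unchanged; the stub only needs to be installed before the first "import interface" runs.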