CITCOM-project / causcumber

Cucumber driven causal inference for testing computational models.

pygraphviz error when running "pip install -r requirements.txt" in .\causcumber #20

Open Stanley-deng opened 3 years ago

Stanley-deng commented 3 years ago

When running `pip install -r requirements.txt` in .\causcumber, I receive the following error:

    Running setup.py install for pygraphviz ... error
    pygraphviz/graphviz_wrap.c(2711): fatal error C1083: Cannot open include file: 'graphviz/cgraph.h': No such file or directory

I have graphviz installed, and I tried installing pygraphviz manually first, but it doesn't seem to work.

How do I fix this?

Stanley-deng commented 3 years ago

I have followed the steps to set up graphviz, but it still shows the same error. Is there any specific version I should use, or is there anything else I should install or set up first?

jmafoster1 commented 3 years ago

Are you working in Windows? Is graphviz in your PATH environment variable? For now, you could try `conda install pygraphviz` instead. That seems to install OK without needing native graphviz, or at least it did for me... @bobturneruk did you have any luck getting pygraphviz working under Windows?
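If it helps, a quick sanity check from inside the environment is something like the sketch below (just to confirm that the Graphviz `dot` executable is visible on PATH and that pygraphviz imports; it won't diagnose missing build headers, but it narrows things down):

```python
import shutil

# Graphviz's `dot` executable should resolve if Graphviz is on PATH
print("dot found at:", shutil.which("dot"))

# If pygraphviz installed correctly, this import should succeed
try:
    import pygraphviz
    print("pygraphviz version:", pygraphviz.__version__)
except ImportError as exc:
    print("pygraphviz not importable:", exc)
```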

Stanley-deng commented 3 years ago

I'm working in a Windows environment, and I have tried to set it as a PATH variable. So with `conda install pygraphviz`, I don't need to install graphviz manually anymore?

Stanley-deng commented 3 years ago

Is there any difference between `conda install -c conda-forge pygraphviz` and `conda install pygraphviz`?

jmafoster1 commented 3 years ago

As I understand it, you shouldn't need to install anything for `conda install pygraphviz` to work. I used it in an environment where graphviz wasn't installed and where I couldn't install it due to not having admin rights. As for your second comment, I have absolutely no idea what the difference is, I'm afraid.

bobturneruk commented 3 years ago

Hi @Stanley-deng - the install instructions in the readme have recently changed. Would you mind having a go with the new version, please?

Stanley-deng commented 3 years ago

@bobturneruk Yes, I have seen and followed the updated readme and have added Graphviz to PATH for the current user, but I still receive the same error.

Stanley-deng commented 3 years ago

I seem to have fixed the problem related to Graphviz and have the system running, but I encountered another problem related to "behave":

    Traceback (most recent call last):
      File "c:\users\stanley\anaconda3\envs\causcumber\lib\site-packages\behave\model.py", line 1329, in run
        match.run(runner.context)
      File "c:\users\stanley\anaconda3\envs\causcumber\lib\site-packages\behave\matchers.py", line 98, in run
        self.func(context, *args, **kwargs)
      File "features\steps\compare_interventions.py", line 30, in step_impl
        context.z3_variables[parameter] = context.z3Types[cast_type]
      File "c:\users\stanley\anaconda3\envs\causcumber\lib\site-packages\behave\runner.py", line 321, in __getattr__
        raise AttributeError(msg)
    AttributeError: 'Context' object has no attribute 'z3Types'

I'm not really sure how to fix this.

bobturneruk commented 3 years ago

@Stanley-deng - how did you fix the graphviz problem, please? What command did you run to get the above error?

jmafoster1 commented 3 years ago

Sorry, the above error is on me. One of my features in development somehow got partially merged into master. I'm fixing it now.

jmafoster1 commented 3 years ago

I have fixed the above error. If you do `git pull origin main`, that should fix it.

Stanley-deng commented 3 years ago

@bobturneruk I used

```
python -m pip install --global-option=build_ext --global-option="-IC:\Program Files\Graphviz\include" --global-option="-LC:\Program Files\Graphviz\lib" pygraphviz
```

and `conda install -c conda-forge rpy2`, then I ran causcumber with the `behave features/compare_interventions_basic.feature` command.
Stanley-deng commented 3 years ago

@jmafoster1 Ok, I will try it as soon as possible.

bobturneruk commented 3 years ago

Thanks @Stanley-deng!

Stanley-deng commented 3 years ago

@jmafoster1 I just ran the new version but I'm getting another error:

Traceback (most recent call last):
        File "c:\users\stanley\anaconda3\envs\causcumber\lib\site-packages\behave\model.py", line 1329, in run
          match.run(runner.context)
        File "c:\users\stanley\anaconda3\envs\causcumber\lib\site-packages\behave\matchers.py", line 98, in run
          self.func(context, *args, **kwargs)
        File "features\steps\compare_interventions.py", line 77, in step_impl
          context.frequency,
        File "c:\users\stanley\anaconda3\envs\causcumber\lib\site-packages\behave\runner.py", line 321, in __getattr__
          raise AttributeError(msg)
      AttributeError: 'Context' object has no attribute 'frequency'

jmafoster1 commented 3 years ago

Yes, I've just noticed that myself! Classic example of "fix one bug, reveal another"! This one's harder to fix, I'm afraid, and might take some time. I know why it's happening and am looking into this as a solution. I'm struggling to make sense of it, but have a meeting with @bobturneruk later today where we will hopefully have the time to slot this in. For the time being, `behave features/compare_interventions.feature` should work (I've just tried it) if you want to practice with that.
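For context, the AttributeError itself is just how behave's `Context` works: attributes only exist on `context` once some earlier hook or step has assigned them, so a step that reads `context.frequency` (or `context.z3Types` in the earlier traceback) before anything sets it will raise. A minimal illustration with hypothetical step names, not the actual causcumber steps:

```python
from behave import given, then

@given("the variables are recorded every {frequency:d} days")
def set_frequency(context, frequency):
    # a step (or an environment.py hook) has to assign the attribute first...
    context.frequency = frequency

@then("the model output is sampled at the recorded frequency")
def use_frequency(context):
    # ...otherwise this lookup raises AttributeError via Context.__getattr__
    print(f"sampling every {context.frequency} days")
```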

Stanley-deng commented 3 years ago

Yes, I think I managed to get it working with `behave features/compare_interventions.feature`. Can you confirm this is how the system should run:

 Assertion Failed: Expected estimate < 0, got 50.08 < -123.31818181818225 < 196.56
      Captured stdout:
      {'quar_period': 14, 'n_days': 84, 'pop_type': 'hybrid', 'pop_size': 50000, 'pop_infected': 100, 'location': 'UK', 'interventions': 'baseline'}
      Looking for data in results/compare_interventions
      Running Do Why with params
        graph: dags/compare_interventions.dot
        treatment_var: ['location']
        outcome_var: ['cum_deaths_12']
        control_val: ['UK']::<class 'numpy.ndarray'>
        treatment_val: ['Rwanda']::<class 'numpy.ndarray'>
        identification: True
        verbose: True
        confidence_intervals: True
        kwargs: {'method_name': 'backdoor.linear_regression'}
        effect_modifiers: []
      GROUPS: {'Japan': 0, 'Rwanda': 1, 'UK': 2}
      Creating a causal model...
        adjustment_set []
      Datatype of treatment ['location']: [CategoricalDtype(categories=[0, 1, 2], ordered=False)]
      control_val [2.]
      treatment_val [1.]
      Identifying...
      Identified estimand
      Estimand type: nonparametric-ate

      ### Estimand : 1
      Estimand name: backdoor
      Estimand expression:
           d
      ───────────(Expectation(cum_deaths_12))
      d[location]
      Estimand assumption 1, Unconfoundedness: If U→{location} and U→cum_deaths_12 then P(cum_deaths_12|location,,U) = P(cum_deaths_12|location,)

      ### Estimand : 2
      Estimand name: iv
      No such variable found!

      ### Estimand : 3
      Estimand name: frontdoor
      No such variable found!

      Estimating...
      Total Effect Estimate: -123.31818181818225
      95% Confidence Intervals: [50.08, 196.56]

      Captured logging:
      WARNING:dowhy.causal_model:Causal Graph not provided. DoWhy will construct a graph based on data inputs.
      INFO:dowhy.causal_model:Model to find the causal effect of treatment ['location'] on outcome ['cum_deaths_12']
      INFO:dowhy.causal_estimator:b: cum_deaths_12~location
      INFO:dowhy.causal_estimator:INFO: Using Linear Regression Estimator

  Scenario: Large population                        # features/compare_interventions.feature:132
    Given a simulation with parameters              # ../../causcumber/draw_dag_steps.py:16
      | parameter     | value    | type |
      | quar_period   | 14       | int  |
      | n_days        | 84       | int  |
      | pop_type      | hybrid   | str  |
      | pop_size      | 50000    | int  |
      | pop_infected  | 100      | int  |
      | location      | UK       | str  |
      | interventions | baseline | str  |
    And the following variables are recorded weekly # features/steps/compare_interventions.py:35
      | variable        | type |
      | cum_tests       | int  |
      | n_quarantined   | int  |
      | n_exposed       | int  |
      | cum_infections  | int  |
      | cum_symptomatic | int  |
      | cum_severe      | int  |
      | cum_critical    | int  |
      | cum_deaths      | int  |
    Given we run the model with pop_size=50000      # features/steps/compare_interventions.py:65
    When we run the model with pop_size=100000      # features/steps/compare_interventions.py:85
    Then the cum_infections_12 should be > control  # features/steps/compare_interventions.py:108

  Scenario: Subsequent mortality (has confounding)       # features/compare_interventions.feature:143
    Given a simulation with parameters                   # ../../causcumber/draw_dag_steps.py:16
      | parameter     | value    | type |
      | quar_period   | 14       | int  |
      | n_days        | 84       | int  |
      | pop_type      | hybrid   | str  |
      | pop_size      | 50000    | int  |
      | pop_infected  | 100      | int  |
      | location      | UK       | str  |
      | interventions | baseline | str  |
    And the following variables are recorded weekly      # features/steps/compare_interventions.py:35
      | variable        | type |
      | cum_tests       | int  |
      | n_quarantined   | int  |
      | n_exposed       | int  |
      | cum_infections  | int  |
      | cum_symptomatic | int  |
      | cum_severe      | int  |
      | cum_critical    | int  |
      | cum_deaths      | int  |
    Given a control scenario where cum_infections_7=4000 # features/steps/compare_interventions.py:151
    When cum_infections_7=5000                           # features/steps/compare_interventions.py:163
    Then the cum_infections_12 should be > control       # features/steps/compare_interventions.py:108
Finished Feature `Compare interventions`

Failing scenarios:
  features/compare_interventions.feature:116  Test and trace -- @1.3
  features/compare_interventions.feature:120  Test and trace -- @1.7
  features/compare_interventions.feature:129  Locations -- @1.1
  features/compare_interventions.feature:130  Locations -- @1.2

0 features passed, 1 failed, 0 skipped
9 scenarios passed, 4 failed, 0 skipped
61 steps passed, 4 failed, 0 skipped, 0 undefined
Took 0m1.791s
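
For reference, the "Running Do Why" portion of that captured output (build a causal model, identify the estimand, estimate with backdoor linear regression, report 95% confidence intervals) maps roughly onto the DoWhy calls below. This is a minimal sketch with made-up data, not causcumber's actual wrapper, which supplies the DAG from dags/compare_interventions.dot and the recorded simulation variables:

```python
import pandas as pd
from dowhy import CausalModel

# made-up stand-in for the recorded simulation runs
data = pd.DataFrame({
    "location": [0, 1, 2, 0, 1, 2],  # encoded as in GROUPS above
    "cum_deaths_12": [210.0, 95.0, 180.0, 230.0, 110.0, 200.0],
})

# "Creating a causal model..." (omitting the graph triggers the warning seen
# in the captured logging; the real run works from its own DAG/adjustment set)
model = CausalModel(data=data, treatment="location", outcome="cum_deaths_12")

# "Identifying..." -> nonparametric-ate estimand
estimand = model.identify_effect(proceed_when_unidentifiable=True)

# "Estimating..." -> backdoor linear regression with confidence intervals
estimate = model.estimate_effect(
    estimand,
    method_name="backdoor.linear_regression",
    control_value=2,    # UK
    treatment_value=1,  # Rwanda
    confidence_intervals=True,
)
print("Total Effect Estimate:", estimate.value)
```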

Thanks

jmafoster1 commented 3 years ago

Yes, that's how it should look. All the features passed when I ran it, but that's potentially subject to the nondeterminism of the model.