pnnl-predictive-phenomics / emll

GNU General Public License v2.0

How does emll handle missing data? #16

Open janisshin opened 2 months ago

janisshin commented 2 months ago

In our previous meeting, Jeremy mentioned that emll now automatically handles missing data. I'm using emll right now, and it's complaining about array shape mismatches, but I have no idea where these numbers (specifically 555) are coming from.

For reference, in my model

Error message:


{
    "name": "ValueError",
    "message": "Incompatible Elemwise input shapes [(555, 28), (555, 555)]",
    "stack": "---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pytensor\\tensor\\elemwise.py:439, in Elemwise.get_output_info(self, dim_shuffle, *inputs)
    437 try:
    438     out_shapes = [
--> 439         [
    440             broadcast_static_dim_lengths(shape)
    441             for shape in zip(*[inp.type.shape for inp in inputs])
    442         ]
    443     ] * shadow.nout
    444 except ValueError:

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pytensor\\tensor\\elemwise.py:440, in <listcomp>(.0)
    437 try:
    438     out_shapes = [
    439         [
--> 440             broadcast_static_dim_lengths(shape)
    441             for shape in zip(*[inp.type.shape for inp in inputs])
    442         ]
    443     ] * shadow.nout
    444 except ValueError:

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pytensor\\tensor\\utils.py:163, in broadcast_static_dim_lengths(dim_lengths)
    162 if len(dim_lengths_set) > 1:
--> 163     raise ValueError
    164 return next(iter(dim_lengths_set))

ValueError: 

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
Cell In[7], line 7
      2     advi = pm.ADVI()
      3     tracker = pm.callbacks.Tracker(
      4         mean = advi.approx.mean.eval,
      5         std = advi.approx.std.eval
      6     )
----> 7     approx = advi.fit(
      8         n=N_ITERATIONS, 
      9         callbacks = [tracker],
     10         obj_optimizer=pm.adagrad_window(learning_rate=5E-3), 
     11         total_grad_norm_constraint=0.7,
     12         obj_n_mc=1)
     14 SAMPLE_DRAWS = 1000

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pymc\\variational\\inference.py:138, in Inference.fit(self, n, score, callbacks, progressbar, **kwargs)
    136     callbacks = []
    137 score = self._maybe_score(score)
--> 138 step_func = self.objective.step_function(score=score, **kwargs)
    139 if progressbar:
    140     progress = progress_bar(range(n), display=progressbar)

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pytensor\\configparser.py:48, in _ChangeFlagsDecorator.__call__.<locals>.res(*args, **kwargs)
     45 @wraps(f)
     46 def res(*args, **kwargs):
     47     with self:
---> 48         return f(*args, **kwargs)

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pymc\\variational\\opvi.py:379, in ObjectiveFunction.step_function(self, obj_n_mc, tf_n_mc, obj_optimizer, test_optimizer, more_obj_params, more_tf_params, more_updates, more_replacements, total_grad_norm_constraint, score, fn_kwargs)
    377 if score and not self.op.returns_loss:
    378     raise NotImplementedError(\"%s does not have loss\" % self.op)
--> 379 updates = self.updates(
    380     obj_n_mc=obj_n_mc,
    381     tf_n_mc=tf_n_mc,
    382     obj_optimizer=obj_optimizer,
    383     test_optimizer=test_optimizer,
    384     more_obj_params=more_obj_params,
    385     more_tf_params=more_tf_params,
    386     more_updates=more_updates,
    387     more_replacements=more_replacements,
    388     total_grad_norm_constraint=total_grad_norm_constraint,
    389 )
    390 seed = self.approx.rng.randint(2**30, dtype=np.int64)
    391 if score:

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pymc\\variational\\opvi.py:268, in ObjectiveFunction.updates(self, obj_n_mc, tf_n_mc, obj_optimizer, test_optimizer, more_obj_params, more_tf_params, more_updates, more_replacements, total_grad_norm_constraint)
    266     if more_tf_params:
    267         _warn_not_used(\"more_tf_params\", self.op)
--> 268 self.add_obj_updates(
    269     resulting_updates,
    270     obj_n_mc=obj_n_mc,
    271     obj_optimizer=obj_optimizer,
    272     more_obj_params=more_obj_params,
    273     more_replacements=more_replacements,
    274     total_grad_norm_constraint=total_grad_norm_constraint,
    275 )
    276 resulting_updates.update(more_updates)
    277 return resulting_updates

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pymc\\variational\\opvi.py:316, in ObjectiveFunction.add_obj_updates(self, updates, obj_n_mc, obj_optimizer, more_obj_params, more_replacements, total_grad_norm_constraint)
    312     more_replacements = dict()
    313 obj_target = self(
    314     obj_n_mc, more_obj_params=more_obj_params, more_replacements=more_replacements
    315 )
--> 316 grads = pm.updates.get_or_compute_grads(obj_target, self.obj_params + more_obj_params)
    317 if total_grad_norm_constraint is not None:
    318     grads = pm.total_norm_constraint(grads, total_grad_norm_constraint)

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pymc\\variational\\updates.py:177, in get_or_compute_grads(loss_or_grads, params)
    175     return loss_or_grads
    176 else:
--> 177     return pytensor.grad(loss_or_grads, params)

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pytensor\\gradient.py:607, in grad(cost, wrt, consider_constant, disconnected_inputs, add_names, known_grads, return_disconnected, null_gradients)
    604     if hasattr(g.type, \"dtype\"):
    605         assert g.type.dtype in pytensor.tensor.type.float_dtypes
--> 607 _rval: Sequence[Variable] = _populate_grad_dict(
    608     var_to_app_to_idx, grad_dict, _wrt, cost_name
    609 )
    611 rval: MutableSequence[Optional[Variable]] = list(_rval)
    613 for i in range(len(_rval)):

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pytensor\\gradient.py:1407, in _populate_grad_dict(var_to_app_to_idx, grad_dict, wrt, cost_name)
   1404     # end if cache miss
   1405     return grad_dict[var]
-> 1407 rval = [access_grad_cache(elem) for elem in wrt]
   1409 return rval

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pytensor\\gradient.py:1407, in <listcomp>(.0)
   1404     # end if cache miss
   1405     return grad_dict[var]
-> 1407 rval = [access_grad_cache(elem) for elem in wrt]
   1409 return rval

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pytensor\\gradient.py:1362, in _populate_grad_dict.<locals>.access_grad_cache(var)
   1360 for node in node_to_idx:
   1361     for idx in node_to_idx[node]:
-> 1362         term = access_term_cache(node)[idx]
   1364         if not isinstance(term, Variable):
   1365             raise TypeError(
   1366                 f\"{node.op}.grad returned {type(term)}, expected\"
   1367                 \" Variable instance.\"
   1368             )

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pytensor\\gradient.py:1037, in _populate_grad_dict.<locals>.access_term_cache(node)
   1034 if node not in term_dict:
   1035     inputs = node.inputs
-> 1037     output_grads = [access_grad_cache(var) for var in node.outputs]
   1039     # list of bools indicating if each output is connected to the cost
   1040     outputs_connected = [
   1041         not isinstance(g.type, DisconnectedType) for g in output_grads
   1042     ]

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pytensor\\gradient.py:1037, in <listcomp>(.0)
   1034 if node not in term_dict:
   1035     inputs = node.inputs
-> 1037     output_grads = [access_grad_cache(var) for var in node.outputs]
   1039     # list of bools indicating if each output is connected to the cost
   1040     outputs_connected = [
   1041         not isinstance(g.type, DisconnectedType) for g in output_grads
   1042     ]

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pytensor\\gradient.py:1362, in _populate_grad_dict.<locals>.access_grad_cache(var)
   1360 for node in node_to_idx:
   1361     for idx in node_to_idx[node]:
-> 1362         term = access_term_cache(node)[idx]
   1364         if not isinstance(term, Variable):
   1365             raise TypeError(
   1366                 f\"{node.op}.grad returned {type(term)}, expected\"
   1367                 \" Variable instance.\"
   1368             )

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pytensor\\gradient.py:1037, in _populate_grad_dict.<locals>.access_term_cache(node)
   1034 if node not in term_dict:
   1035     inputs = node.inputs
-> 1037     output_grads = [access_grad_cache(var) for var in node.outputs]
   1039     # list of bools indicating if each output is connected to the cost
   1040     outputs_connected = [
   1041         not isinstance(g.type, DisconnectedType) for g in output_grads
   1042     ]

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pytensor\\gradient.py:1037, in <listcomp>(.0)
   1034 if node not in term_dict:
   1035     inputs = node.inputs
-> 1037     output_grads = [access_grad_cache(var) for var in node.outputs]
   1039     # list of bools indicating if each output is connected to the cost
   1040     outputs_connected = [
   1041         not isinstance(g.type, DisconnectedType) for g in output_grads
   1042     ]

    [... skipping similar frames: _populate_grad_dict.<locals>.access_grad_cache at line 1362 (12 times), <listcomp> at line 1037 (11 times), _populate_grad_dict.<locals>.access_term_cache at line 1037 (11 times)]

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pytensor\\gradient.py:1037, in _populate_grad_dict.<locals>.access_term_cache(node)
   1034 if node not in term_dict:
   1035     inputs = node.inputs
-> 1037     output_grads = [access_grad_cache(var) for var in node.outputs]
   1039     # list of bools indicating if each output is connected to the cost
   1040     outputs_connected = [
   1041         not isinstance(g.type, DisconnectedType) for g in output_grads
   1042     ]

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pytensor\\gradient.py:1037, in <listcomp>(.0)
   1034 if node not in term_dict:
   1035     inputs = node.inputs
-> 1037     output_grads = [access_grad_cache(var) for var in node.outputs]
   1039     # list of bools indicating if each output is connected to the cost
   1040     outputs_connected = [
   1041         not isinstance(g.type, DisconnectedType) for g in output_grads
   1042     ]

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pytensor\\gradient.py:1362, in _populate_grad_dict.<locals>.access_grad_cache(var)
   1360 for node in node_to_idx:
   1361     for idx in node_to_idx[node]:
-> 1362         term = access_term_cache(node)[idx]
   1364         if not isinstance(term, Variable):
   1365             raise TypeError(
   1366                 f\"{node.op}.grad returned {type(term)}, expected\"
   1367                 \" Variable instance.\"
   1368             )

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pytensor\\gradient.py:1192, in _populate_grad_dict.<locals>.access_term_cache(node)
   1184         if o_shape != g_shape:
   1185             raise ValueError(
   1186                 \"Got a gradient of shape \"
   1187                 + str(o_shape)
   1188                 + \" on an output of shape \"
   1189                 + str(g_shape)
   1190             )
-> 1192 input_grads = node.op.L_op(inputs, node.outputs, new_output_grads)
   1194 if input_grads is None:
   1195     raise TypeError(
   1196         f\"{node.op}.grad returned NoneType, expected iterable.\"
   1197     )

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pytensor\\scan\\op.py:2570, in Scan.L_op(self, inputs, outs, dC_douts)
   2568                 known_grads[diff_outputs[i]] = dC_dXts[dc_dxts_idx]
   2569             dc_dxts_idx += 1
-> 2570 dC_dinps_t = compute_all_gradients(known_grads)
   2572 # mask inputs that get no gradients
   2573 for dx in range(len(dC_dinps_t)):

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pytensor\\scan\\op.py:2484, in Scan.L_op.<locals>.compute_all_gradients(known_grads)
   2475 # Required in case there is a pair of variables X and Y, with X
   2476 # used to compute Y, for both of which there is an external
   2477 # gradient signal. Without this, the total gradient signal on X
   (...)
   2480 # gradient obtained by propagating Y's external gradient signal
   2481 # to X.
   2482 known_grads = OrderedDict([(k.copy(), v) for (k, v) in known_grads.items()])
-> 2484 grads = grad(
   2485     cost=None,
   2486     known_grads=known_grads,
   2487     wrt=wrt,
   2488     consider_constant=wrt,
   2489     disconnected_inputs=\"ignore\",
   2490     return_disconnected=\"None\",
   2491     null_gradients=\"return\",
   2492 )
   2494 for i in range(len(wrt)):
   2495     gmp[wrt[i]] = grads[i]

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pytensor\\gradient.py:607, in grad(cost, wrt, consider_constant, disconnected_inputs, add_names, known_grads, return_disconnected, null_gradients)
    604     if hasattr(g.type, \"dtype\"):
    605         assert g.type.dtype in pytensor.tensor.type.float_dtypes
--> 607 _rval: Sequence[Variable] = _populate_grad_dict(
    608     var_to_app_to_idx, grad_dict, _wrt, cost_name
    609 )
    611 rval: MutableSequence[Optional[Variable]] = list(_rval)
    613 for i in range(len(_rval)):

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pytensor\\gradient.py:1407, in _populate_grad_dict(var_to_app_to_idx, grad_dict, wrt, cost_name)
   1404     # end if cache miss
   1405     return grad_dict[var]
-> 1407 rval = [access_grad_cache(elem) for elem in wrt]
   1409 return rval

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pytensor\\gradient.py:1407, in <listcomp>(.0)
   1404     # end if cache miss
   1405     return grad_dict[var]
-> 1407 rval = [access_grad_cache(elem) for elem in wrt]
   1409 return rval

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pytensor\\gradient.py:1362, in _populate_grad_dict.<locals>.access_grad_cache(var)
   1360 for node in node_to_idx:
   1361     for idx in node_to_idx[node]:
-> 1362         term = access_term_cache(node)[idx]
   1364         if not isinstance(term, Variable):
   1365             raise TypeError(
   1366                 f\"{node.op}.grad returned {type(term)}, expected\"
   1367                 \" Variable instance.\"
   1368             )

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pytensor\\gradient.py:1192, in _populate_grad_dict.<locals>.access_term_cache(node)
   1184         if o_shape != g_shape:
   1185             raise ValueError(
   1186                 \"Got a gradient of shape \"
   1187                 + str(o_shape)
   1188                 + \" on an output of shape \"
   1189                 + str(g_shape)
   1190             )
-> 1192 input_grads = node.op.L_op(inputs, node.outputs, new_output_grads)
   1194 if input_grads is None:
   1195     raise TypeError(
   1196         f\"{node.op}.grad returned NoneType, expected iterable.\"
   1197     )

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\emll\\pytensor_utils.py:105, in LeastSquaresSolve.L_op(self, inputs, outputs, output_gradients)
    102 def force_outer(l, r):
    103     return tensor.outer(l, r) if r.ndim == 1 else l.dot(r.T)
--> 105 A_bar = force_outer(b - A.dot(c), x) - force_outer(b_bar, c)
    106 return [A_bar, b_bar]

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pytensor\\tensor\\variable.py:125, in _tensor_py_operators.__sub__(self, other)
    121 def __sub__(self, other):
    122     # See explanation in __add__ for the error caught
    123     # and the return value in that case
    124     try:
--> 125         return pt.math.sub(self, other)
    126     except (NotImplementedError, TypeError):
    127         return NotImplemented

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pytensor\\graph\\op.py:295, in Op.__call__(self, *inputs, **kwargs)
    253 r\"\"\"Construct an `Apply` node using :meth:`Op.make_node` and return its outputs.
    254 
    255 This method is just a wrapper around :meth:`Op.make_node`.
   (...)
    292 
    293 \"\"\"
    294 return_list = kwargs.pop(\"return_list\", False)
--> 295 node = self.make_node(*inputs, **kwargs)
    297 if config.compute_test_value != \"off\":
    298     compute_test_value(node)

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pytensor\\tensor\\elemwise.py:483, in Elemwise.make_node(self, *inputs)
    477 \"\"\"
    478 If the inputs have different number of dimensions, their shape
    479 is left-completed to the greatest number of dimensions with 1s
    480 using DimShuffle.
    481 \"\"\"
    482 inputs = [as_tensor_variable(i) for i in inputs]
--> 483 out_dtypes, out_shapes, inputs = self.get_output_info(DimShuffle, *inputs)
    484 outputs = [
    485     TensorType(dtype=dtype, shape=shape)()
    486     for dtype, shape in zip(out_dtypes, out_shapes)
    487 ]
    488 return Apply(self, inputs, outputs)

File c:\\Users\\user\\miniconda3\\envs\\new_Gayles\\lib\\site-packages\\pytensor\\tensor\\elemwise.py:445, in Elemwise.get_output_info(self, dim_shuffle, *inputs)
    438     out_shapes = [
    439         [
    440             broadcast_static_dim_lengths(shape)
    441             for shape in zip(*[inp.type.shape for inp in inputs])
    442         ]
    443     ] * shadow.nout
    444 except ValueError:
--> 445     raise ValueError(
    446         f\"Incompatible Elemwise input shapes {[inp.type.shape for inp in inputs]}\"
    447     )
    449 # inplace_pattern maps output idx -> input idx
    450 inplace_pattern = self.inplace_pattern

ValueError: Incompatible Elemwise input shapes [(555, 28), (555, 555)]"
}
mcnaughtonadm commented 2 months ago

Hey @janisshin, which branch did you build emll from? I believe @augeorge is still working on this feature, so he may be able to give you better information on where to find that functionality.

Not sure if this is related to your problem, but it might be a start!

janisshin commented 2 months ago

I used the main branch; would you advise using a different one? If the functionality hasn't been implemented in the main branch, then I suppose these shape-accounting problems are coming from a different source.

augeorge commented 2 months ago

Hi @janisshin @mcnaughtonadm, that feature hasn't been fully tested and added yet! I've had to work on something else recently, but I hope to get it finished soon.

Are you able to run https://github.com/pnnl-predictive-phenomics/emll/blob/master/notebooks/run_hackett_inference.py? I had a dimension-mismatch error caused by using the wrong version of pytensor, so maybe that is also happening to you?

See issue https://github.com/pnnl-predictive-phenomics/emll/issues/13.
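
In case it helps, here is a minimal, hypothetical check (plain Python, not emll functionality) for spotting that kind of version mismatch; it prints the installed pymc and pytensor versions next to the pytensor requirement that pymc itself declares:

# Hypothetical version-mismatch check; uses only the standard library.
from importlib.metadata import requires, version

print("installed pymc     :", version("pymc"))
print("installed pytensor :", version("pytensor"))

# pymc declares which pytensor releases it was built against; an installed
# pytensor outside that range is a common source of shape/dimension errors.
for req in requires("pymc") or []:
    if req.lower().startswith("pytensor"):
        print("pymc requires      :", req)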

mcnaughtonadm commented 2 months ago

Oh yeah, it could definitely be the same problem. @janisshin, you may need to rebuild part of your environment to reinstall the correct pymc package. I'd recommend uninstalling pymc and pytensor before installing emll from source, so that the install uses the pinned pymc==5.9.2; that will automatically pull in a compatible pytensor version, which should hopefully fix some of these errors.

It could be that you have a conda-installed pymc/pytensor whose version isn't being recognized when building the repo with pip, which could result in incompatible pymc and pytensor versions. I'd follow the discussion over on issue #13 and the corresponding merged PR #14. A quick way to sanity-check what actually ends up installed is sketched below.
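
A hypothetical post-rebuild check (not emll API): print the versions and install locations of the three packages, which makes a stale conda copy shadowing the pip-installed, pinned pymc==5.9.2 easy to spot.

# Hypothetical environment check; adjust as needed for your setup.
import numpy
import pymc
import pytensor

for mod in (pymc, pytensor, numpy):
    # After a clean rebuild, pymc should report the pinned 5.9.2 and all three
    # packages should resolve to the same environment's site-packages directory.
    print(f"{mod.__name__:<10} {mod.__version__:<10} {mod.__file__}")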

janisshin commented 4 weeks ago

Okay, so I tried running hackett.ipynb after reinstalling the correct versions of Python, pytensor, and pymc. Unfortunately, I ran into an error on the second cell (the part where I run run_hackett_inference.py):

---------------------------------------------------------------------------
NoSectionError                            Traceback (most recent call last)
File c:\Users\user\miniconda3\envs\idp_new\Lib\site-packages\pytensor\configparser.py:202, in PyTensorConfigParser.fetch_val_for_key(self, key, delete_key)
    201 try:
--> 202     return self._pytensor_cfg.get(section, option)
    203 except InterpolationError:

File c:\Users\user\miniconda3\envs\idp_new\Lib\configparser.py:797, in RawConfigParser.get(self, section, option, raw, vars, fallback)
    796 try:
--> 797     d = self._unify_values(section, vars)
    798 except NoSectionError:

File c:\Users\user\miniconda3\envs\idp_new\Lib\configparser.py:1168, in RawConfigParser._unify_values(self, section, vars)
   1167     if section != self.default_section:
-> 1168         raise NoSectionError(section) from None
   1169 # Update with the entry specific variables

NoSectionError: No section: 'blas'

During handling of the above exception, another exception occurred:

KeyError                                  Traceback (most recent call last)
File c:\Users\user\miniconda3\envs\idp_new\Lib\site-packages\pytensor\configparser.py:318, in ConfigParam.__get__(self, cls, type_, delete_key)
    317 try:
--> 318     val_str = cls.fetch_val_for_key(self.name, delete_key=delete_key)
    319     self.is_default = False

File c:\Users\user\miniconda3\envs\idp_new\Lib\site-packages\pytensor\configparser.py:206, in PyTensorConfigParser.fetch_val_for_key(self, key, delete_key)
    205 except (NoOptionError, NoSectionError):
--> 206     raise KeyError(key)

KeyError: 'blas__ldflags'

During handling of the above exception, another exception occurred:

IndexError                                Traceback (most recent call last)
Cell In[2], line 1
----> 1 from run_hackett_inference import *

File c:\Users\user\Documents\research\emll\notebooks\run_hackett_inference.py:13
     11 import pandas as pd
     12 import numpy as np
---> 13 import pymc as pm
     14 import pytensor.tensor as T
     15 import cobra

File c:\Users\user\miniconda3\envs\idp_new\Lib\site-packages\pymc\__init__.py:47
     42     augmented = f"{augmented} -fno-unwind-tables -fno-asynchronous-unwind-tables"
     44     pytensor.config.gcc__cxxflags = augmented
---> 47 __set_compiler_flags()
     49 from pymc import _version, gp, ode, sampling
     50 from pymc.backends import *

File c:\Users\user\miniconda3\envs\idp_new\Lib\site-packages\pymc\__init__.py:30, in __set_compiler_flags()
     28 def __set_compiler_flags():
     29     # Workarounds for PyTensor compiler problems on various platforms
---> 30     import pytensor
     32     current = pytensor.config.gcc__cxxflags
     33     augmented = f"{current} -Wno-c++11-narrowing"

File c:\Users\user\miniconda3\envs\idp_new\Lib\site-packages\pytensor\__init__.py:119
    115     return as_tensor_variable(x, **kwargs)
    118 # isort: off
--> 119 from pytensor import scalar, tensor
    120 from pytensor.compile import (
    121     In,
    122     Mode,
   (...)
    128     shared,
    129 )
    130 from pytensor.compile.function import function, function_dump

File c:\Users\user\miniconda3\envs\idp_new\Lib\site-packages\pytensor\tensor\__init__.py:107
    105 # adds shared-variable constructors
    106 from pytensor.tensor import sharedvar  # noqa
--> 107 from pytensor.tensor import (  # noqa
    108     blas,
    109     blas_c,
    110     blas_scipy,
    111     xlogx,
    112 )
    113 import pytensor.tensor.rewriting
    116 # isort: off

File c:\Users\user\miniconda3\envs\idp_new\Lib\site-packages\pytensor\tensor\blas.py:101
     99 from pytensor.scalar import bool as bool_t
    100 from pytensor.tensor import basic as at
--> 101 from pytensor.tensor.blas_headers import blas_header_text, blas_header_version
    102 from pytensor.tensor.elemwise import DimShuffle
    103 from pytensor.tensor.math import add, mul, neg, sub

File c:\Users\user\miniconda3\envs\idp_new\Lib\site-packages\pytensor\tensor\blas_headers.py:1015
    997             header += textwrap.dedent(
    998                 """\
    999                     static float sdot_(int* Nx, float* x, int* Sx, float* y, int* Sy)
   (...)
   1009                     """
   1010             )
   1012     return header + blas_code
-> 1015 if not config.blas__ldflags:
   1016     _logger.warning("Using NumPy C-API based implementation for BLAS functions.")
   1019 def mkl_threads_text():

File c:\Users\user\miniconda3\envs\idp_new\Lib\site-packages\pytensor\configparser.py:322, in ConfigParam.__get__(self, cls, type_, delete_key)
    320 except KeyError:
    321     if callable(self.default):
--> 322         val_str = self.default()
    323     else:
    324         val_str = self.default

File c:\Users\user\miniconda3\envs\idp_new\Lib\site-packages\pytensor\link\c\cmodule.py:2782, in default_blas_ldflags()
   2779 else:
   2780     rpath = None
-> 2782 cxx_library_dirs = get_cxx_library_dirs()
   2783 searched_library_dirs = cxx_library_dirs + _std_lib_dirs
   2784 all_libs = [
   2785     l
   2786     for path in [
   (...)
   2792     if l.suffix in {".so", ".dll", ".dylib"}
   2793 ]

File c:\Users\user\miniconda3\envs\idp_new\Lib\site-packages\pytensor\link\c\cmodule.py:2734, in default_blas_ldflags.<locals>.get_cxx_library_dirs()
   2726 p = subprocess_Popen(
   2727     cmd,
   2728     stdout=subprocess.PIPE,
   (...)
   2731     shell=True,
   2732 )
   2733 (stdout, stderr) = p.communicate(input=b"")
-> 2734 maybe_lib_dirs = [
   2735     [pathlib.Path(p).resolve() for p in line[len("libraries: =") :].split(":")]
   2736     for line in stdout.decode(sys.stdout.encoding).splitlines()
   2737     if line.startswith("libraries: =")
   2738 ][0]
   2739 return [str(d) for d in maybe_lib_dirs if d.exists() and d.is_dir()]

IndexError: list index out of range
mcnaughtonadm commented 4 weeks ago

So I guess this one is a BLAS error, which could be caused by your numpy install. I'd maybe uninstall and reinstall numpy to make sure it's compiled against a version compatible with pytensor, but this is just a hunch.

If you run just import pytensor in a Python shell, do you see "WARNING (pytensor.tensor.blas): Using NumPy C-API based implementation for BLAS functions."?
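
For what it's worth, here is a hypothetical way to check the same thing non-interactively; it only gets past the import if pytensor can load at all, which the IndexError above suggests it currently cannot:

# Hypothetical BLAS sanity check; only meaningful once "import pytensor" succeeds.
import pytensor

print("pytensor version:", pytensor.__version__)
# An empty string here means pytensor could not find a BLAS library and will
# fall back to the NumPy C-API implementation (the warning quoted above);
# the IndexError in the traceback happens while these flags are auto-detected.
print("blas__ldflags   :", repr(pytensor.config.blas__ldflags))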

As a side note, I will probably do a project-management refactor of the repo to use lock files so we can guarantee that compatible versions are installed, but for now maybe just try reinstalling numpy (and if that doesn't immediately work, try reinstalling pytensor against your new numpy install).