Closed: @pandrich closed this issue 1 year ago.
Hi @pandrich, could you paste the output of `conda env list` and also the output of:
import aesara
print(aesara.config)
Hi @lucianopaz, Thanks a lot for getting back to me about this. Let me know if there is anything else I can do to help.
Here's the content of the test environment I'm using:
# packages in environment at C:\Users\andri\anaconda3\envs\pymc_test:
#
# Name Version Build Channel
aeppl 0.0.36 pyhd8ed1ab_0 conda-forge
aeppl-base 0.0.36 pyhd8ed1ab_0 conda-forge
aesara 2.8.6 py310hfa0c5ed_0 conda-forge
aesara-base 2.8.6 py310h5588dad_0 conda-forge
anyio 3.6.1 pyhd8ed1ab_1 conda-forge
argon2-cffi 21.3.0 pyhd8ed1ab_0 conda-forge
argon2-cffi-bindings 21.2.0 py310he2412df_2 conda-forge
arviz 0.12.1 pyhd8ed1ab_1 conda-forge
asttokens 2.0.8 pyhd8ed1ab_0 conda-forge
attrs 22.1.0 pyh71513ae_1 conda-forge
babel 2.10.3 pyhd8ed1ab_0 conda-forge
backcall 0.2.0 pyh9f0ad1d_0 conda-forge
backports 1.0 py_2 conda-forge
backports.functools_lru_cache 1.6.4 pyhd8ed1ab_0 conda-forge
beautifulsoup4 4.11.1 pyha770c72_0 conda-forge
blas 2.116 mkl conda-forge
blas-devel 3.9.0 16_win64_mkl conda-forge
bleach 5.0.1 pyhd8ed1ab_0 conda-forge
brotli 1.0.9 h8ffe710_7 conda-forge
brotli-bin 1.0.9 h8ffe710_7 conda-forge
brotlipy 0.7.0 py310he2412df_1004 conda-forge
bzip2 1.0.8 h8ffe710_4 conda-forge
ca-certificates 2022.9.24 h5b45459_0 conda-forge
cachetools 5.2.0 pyhd8ed1ab_0 conda-forge
certifi 2022.9.24 pyhd8ed1ab_0 conda-forge
cffi 1.15.1 py310hcbf9ad4_0 conda-forge
cftime 1.6.2 py310h9b08ddd_0 conda-forge
charset-normalizer 2.1.1 pyhd8ed1ab_0 conda-forge
cloudpickle 2.2.0 pyhd8ed1ab_0 conda-forge
colorama 0.4.5 pyhd8ed1ab_0 conda-forge
cons 0.4.5 pyhd8ed1ab_0 conda-forge
contourpy 1.0.5 py310h232114e_0 conda-forge
cryptography 37.0.1 py310h21b164f_0
curl 7.85.0 heaf79c2_0 conda-forge
cycler 0.11.0 pyhd8ed1ab_0 conda-forge
debugpy 1.6.3 py310h8a704f9_0 conda-forge
decorator 5.1.1 pyhd8ed1ab_0 conda-forge
defusedxml 0.7.1 pyhd8ed1ab_0 conda-forge
entrypoints 0.4 pyhd8ed1ab_0 conda-forge
etuples 0.3.8 pyhd8ed1ab_0 conda-forge
executing 1.1.1 pyhd8ed1ab_0 conda-forge
fastprogress 1.0.3 pyhd8ed1ab_0 conda-forge
filelock 3.8.0 pyhd8ed1ab_0 conda-forge
flit-core 3.7.1 pyhd8ed1ab_0 conda-forge
fonttools 4.37.4 py310h8d17308_0 conda-forge
freetype 2.12.1 h546665d_0 conda-forge
hdf4 4.2.15 h0e5069d_4 conda-forge
hdf5 1.12.2 nompi_h57737ce_100 conda-forge
idna 3.4 pyhd8ed1ab_0 conda-forge
importlib-metadata 4.11.4 py310h5588dad_0 conda-forge
importlib_resources 5.10.0 pyhd8ed1ab_0 conda-forge
intel-openmp 2022.1.0 h57928b3_3787 conda-forge
ipykernel 6.16.0 pyh025b116_0 conda-forge
ipython 8.5.0 pyh08f2357_1 conda-forge
ipython_genutils 0.2.0 py_1 conda-forge
jedi 0.18.1 pyhd8ed1ab_2 conda-forge
jinja2 3.1.2 pyhd8ed1ab_1 conda-forge
jpeg 9e h8ffe710_2 conda-forge
json5 0.9.5 pyh9f0ad1d_0 conda-forge
jsonschema 4.16.0 pyhd8ed1ab_0 conda-forge
jupyter_client 7.3.5 pyhd8ed1ab_0 conda-forge
jupyter_core 4.11.1 py310h5588dad_0 conda-forge
jupyter_server 1.19.1 pyhd8ed1ab_0 conda-forge
jupyterlab 3.4.8 pyhd8ed1ab_0 conda-forge
jupyterlab_pygments 0.2.2 pyhd8ed1ab_0 conda-forge
jupyterlab_server 2.15.2 pyhd8ed1ab_0 conda-forge
kiwisolver 1.4.4 py310h476a331_0 conda-forge
krb5 1.19.3 hc8ab02b_0 conda-forge
lcms2 2.12 h2a16943_0 conda-forge
lerc 4.0.0 h63175ca_0 conda-forge
libblas 3.9.0 16_win64_mkl conda-forge
libbrotlicommon 1.0.9 h8ffe710_7 conda-forge
libbrotlidec 1.0.9 h8ffe710_7 conda-forge
libbrotlienc 1.0.9 h8ffe710_7 conda-forge
libcblas 3.9.0 16_win64_mkl conda-forge
libcurl 7.85.0 heaf79c2_0 conda-forge
libdeflate 1.14 hcfcfb64_0 conda-forge
libffi 3.4.2 h8ffe710_5 conda-forge
liblapack 3.9.0 16_win64_mkl conda-forge
liblapacke 3.9.0 16_win64_mkl conda-forge
libnetcdf 4.8.1 nompi_h85765be_104 conda-forge
libpng 1.6.38 h19919ed_0 conda-forge
libpython 2.2 py310h5588dad_1 conda-forge
libsodium 1.0.18 h8d14728_1 conda-forge
libsqlite 3.39.4 hcfcfb64_0 conda-forge
libssh2 1.10.0 h9a1e1f7_3 conda-forge
libtiff 4.4.0 h8e97e67_4 conda-forge
libwebp-base 1.2.4 h8ffe710_0 conda-forge
libxcb 1.13 hcd874cb_1004 conda-forge
libzip 1.9.2 h519de47_1 conda-forge
libzlib 1.2.12 hcfcfb64_4 conda-forge
logical-unification 0.4.5 pyhd8ed1ab_0 conda-forge
m2w64-binutils 2.25.1 5 conda-forge
m2w64-bzip2 1.0.6 6 conda-forge
m2w64-crt-git 5.0.0.4636.2595836 2 conda-forge
m2w64-gcc 5.3.0 6 conda-forge
m2w64-gcc-ada 5.3.0 6 conda-forge
m2w64-gcc-fortran 5.3.0 6 conda-forge
m2w64-gcc-libgfortran 5.3.0 6 conda-forge
m2w64-gcc-libs 5.3.0 7 conda-forge
m2w64-gcc-libs-core 5.3.0 7 conda-forge
m2w64-gcc-objc 5.3.0 6 conda-forge
m2w64-gmp 6.1.0 2 conda-forge
m2w64-headers-git 5.0.0.4636.c0ad18a 2 conda-forge
m2w64-isl 0.16.1 2 conda-forge
m2w64-libiconv 1.14 6 conda-forge
m2w64-libmangle-git 5.0.0.4509.2e5a9a2 2 conda-forge
m2w64-libwinpthread-git 5.0.0.4634.697f757 2 conda-forge
m2w64-make 4.1.2351.a80a8b8 2 conda-forge
m2w64-mpc 1.0.3 3 conda-forge
m2w64-mpfr 3.1.4 4 conda-forge
m2w64-pkg-config 0.29.1 2 conda-forge
m2w64-toolchain 5.3.0 7 conda-forge
m2w64-toolchain_win-64 2.4.0 0 conda-forge
m2w64-tools-git 5.0.0.4592.90b8472 2 conda-forge
m2w64-windows-default-manifest 6.4 3 conda-forge
m2w64-winpthreads-git 5.0.0.4634.697f757 2 conda-forge
m2w64-zlib 1.2.8 10 conda-forge
markupsafe 2.1.1 py310he2412df_1 conda-forge
matplotlib-base 3.6.0 py310h51140c5_0 conda-forge
matplotlib-inline 0.1.6 pyhd8ed1ab_0 conda-forge
minikanren 1.0.3 pyhd8ed1ab_0 conda-forge
mistune 2.0.4 pyhd8ed1ab_0 conda-forge
mkl 2022.1.0 h6a75c08_874 conda-forge
mkl-devel 2022.1.0 h57928b3_875 conda-forge
mkl-include 2022.1.0 h6a75c08_874 conda-forge
mkl-service 2.4.0 py310h3d5ec83_0 conda-forge
msys2-conda-epoch 20160418 1 conda-forge
multipledispatch 0.6.0 py_0 conda-forge
munkres 1.1.4 pyh9f0ad1d_0 conda-forge
nbclassic 0.4.5 pyhd8ed1ab_0 conda-forge
nbclient 0.7.0 pyhd8ed1ab_0 conda-forge
nbconvert 7.2.1 pyhd8ed1ab_0 conda-forge
nbconvert-core 7.2.1 pyhd8ed1ab_0 conda-forge
nbconvert-pandoc 7.2.1 pyhd8ed1ab_0 conda-forge
nbformat 5.7.0 pyhd8ed1ab_0 conda-forge
nest-asyncio 1.5.6 pyhd8ed1ab_0 conda-forge
netcdf4 1.6.1 nompi_py310h459bb5f_100 conda-forge
notebook 6.4.12 pyha770c72_0 conda-forge
notebook-shim 0.1.0 pyhd8ed1ab_0 conda-forge
numpy 1.23.3 py310h4a8f9c9_0 conda-forge
openjpeg 2.5.0 hc9384bd_1 conda-forge
openssl 3.0.5 hcfcfb64_2 conda-forge
packaging 21.3 pyhd8ed1ab_0 conda-forge
pandas 1.5.0 py310h1c4a608_0 conda-forge
pandoc 2.19.2 h57928b3_0 conda-forge
pandocfilters 1.5.0 pyhd8ed1ab_0 conda-forge
parso 0.8.3 pyhd8ed1ab_0 conda-forge
pickleshare 0.7.5 py_1003 conda-forge
pillow 9.2.0 py310h52929f7_2 conda-forge
pip 22.2.2 pyhd8ed1ab_0 conda-forge
pkgutil-resolve-name 1.3.10 pyhd8ed1ab_0 conda-forge
prometheus_client 0.14.1 pyhd8ed1ab_0 conda-forge
prompt-toolkit 3.0.31 pyha770c72_0 conda-forge
psutil 5.9.2 py310h8d17308_0 conda-forge
pthread-stubs 0.4 hcd874cb_1001 conda-forge
pure_eval 0.2.2 pyhd8ed1ab_0 conda-forge
pycparser 2.21 pyhd8ed1ab_0 conda-forge
pygments 2.13.0 pyhd8ed1ab_0 conda-forge
pymc 4.2.1 hd8ed1ab_0 conda-forge
pymc-base 4.2.1 pyhd8ed1ab_0 conda-forge
pyopenssl 22.0.0 pyhd8ed1ab_1 conda-forge
pyparsing 3.0.9 pyhd8ed1ab_0 conda-forge
pyrsistent 0.18.1 py310he2412df_1 conda-forge
pysocks 1.7.1 pyh0701188_6 conda-forge
python 3.10.6 hcf16a7b_0_cpython conda-forge
python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge
python-fastjsonschema 2.16.2 pyhd8ed1ab_0 conda-forge
python_abi 3.10 2_cp310 conda-forge
pytz 2022.4 pyhd8ed1ab_0 conda-forge
pywin32 303 py310he2412df_0 conda-forge
pywinpty 2.0.8 py310h00ffb61_0 conda-forge
pyzmq 24.0.1 py310hcd737a0_0 conda-forge
requests 2.28.1 pyhd8ed1ab_1 conda-forge
scipy 1.9.1 py310h578b7cb_0 conda-forge
send2trash 1.8.0 pyhd8ed1ab_0 conda-forge
setuptools 65.4.1 pyhd8ed1ab_0 conda-forge
six 1.16.0 pyh6c4a22f_0 conda-forge
sniffio 1.3.0 pyhd8ed1ab_0 conda-forge
soupsieve 2.3.2.post1 pyhd8ed1ab_0 conda-forge
stack_data 0.5.1 pyhd8ed1ab_0 conda-forge
tbb 2021.6.0 h91493d7_0 conda-forge
terminado 0.16.0 pyh08f2357_0 conda-forge
tinycss2 1.1.1 pyhd8ed1ab_0 conda-forge
tk 8.6.12 h8ffe710_0 conda-forge
tomli 2.0.1 pyhd8ed1ab_0 conda-forge
toolz 0.12.0 pyhd8ed1ab_0 conda-forge
tornado 6.2 py310he2412df_0 conda-forge
traitlets 5.4.0 pyhd8ed1ab_0 conda-forge
typing-extensions 4.4.0 hd8ed1ab_0 conda-forge
typing_extensions 4.4.0 pyha770c72_0 conda-forge
tzdata 2022d h191b570_0 conda-forge
ucrt 10.0.20348.0 h57928b3_0 conda-forge
unicodedata2 14.0.0 py310he2412df_1 conda-forge
urllib3 1.26.11 pyhd8ed1ab_0 conda-forge
vc 14.2 hac3ee0b_8 conda-forge
vs2015_runtime 14.29.30139 h890b9b1_8 conda-forge
wcwidth 0.2.5 pyh9f0ad1d_2 conda-forge
webencodings 0.5.1 py_1 conda-forge
websocket-client 1.4.1 pyhd8ed1ab_0 conda-forge
wheel 0.37.1 pyhd8ed1ab_0 conda-forge
win_inet_pton 1.1.0 py310h5588dad_4 conda-forge
winpty 0.4.3 4 conda-forge
xarray 2022.9.0 pyhd8ed1ab_0 conda-forge
xarray-einstats 0.3.0 pyhd8ed1ab_0 conda-forge
xorg-libxau 1.0.9 hcd874cb_0 conda-forge
xorg-libxdmcp 1.1.3 hcd874cb_0 conda-forge
xz 5.2.6 h8d14728_0 conda-forge
zeromq 4.3.4 h0e60522_1 conda-forge
zipp 3.9.0 pyhd8ed1ab_0 conda-forge
zstd 1.5.2 h7755175_4 conda-forge
Aesara config:
floatX ({'float16', 'float64', 'float32'})
Doc: Default floating-point precision for python casts.
Note: float16 support is experimental, use at your own risk.
Value: float64
warn_float64 ({'raise', 'pdb', 'ignore', 'warn'})
Doc: Do an action when a tensor variable with float64 dtype is created.
Value: ignore
pickle_test_value (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x00000220550C0220>>)
Doc: Dump test values while pickling model. If True, test values will be dumped with model.
Value: True
cast_policy ({'numpy+floatX', 'custom'})
Doc: Rules for implicit type casting
Value: custom
deterministic ({'more', 'default'})
Doc: If `more`, sometimes we will select some implementation that are more deterministic, but slower. Also see the dnn.conv.algo* flags to cover more cases.
Value: default
device (cpu)
Doc: Default device for computations. only cpu is supported for now
Value: cpu
force_device (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056A9F340>>)
Doc: Raise an error if we can't use the specified device
Value: False
conv__assert_shape (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056A9F3A0>>)
Doc: If True, AbstractConv* ops will verify that user-provided shapes match the runtime shapes (debugging option, may slow down compilation)
Value: False
print_global_stats (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056A9F490>>)
Doc: Print some global statistics (time spent) at the end
Value: False
assert_no_cpu_op ({'raise', 'pdb', 'ignore', 'warn'})
Doc: Raise an error/warning if there is a CPU op in the computational graph.
Value: ignore
unpickle_function (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AE9060>>)
Doc: Replace unpickled Aesara functions with None. This is useful to unpickle old graphs that pickled them when it shouldn't
Value: True
<aesara.configparser.ConfigParam object at 0x0000022056AE8F10>
Doc: Default compilation mode
Value: Mode
cxx (<class 'str'>)
Doc: The C++ compiler to use. Currently only g++ is supported, but supporting additional compilers should not be too difficult. If it is empty, no C++ code is compiled.
Value: "C:\Users\andri\anaconda3\envs\pymc_test\Library\mingw-w64\bin\g++.exe"
linker ({'c|py_nogc', 'py', 'cvm_nogc', 'cvm', 'c|py', 'vm', 'c', 'vm_nogc'})
Doc: Default linker used if the aesara flags mode is Mode
Value: cvm
allow_gc (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AE9270>>)
Doc: Do we default to delete intermediate results during Aesara function calls? Doing so lowers the memory requirement, but asks that we reallocate memory at the next function call. This is implemented for the default linker, but may not work for all linkers.
Value: True
optimizer ({'unsafe', 'o2', 'merge', 'fast_run', 'fast_compile', 'None', 'o4', 'o1', 'o3'})
Doc: Default optimizer. If not None, will use this optimizer with the Mode
Value: o4
optimizer_verbose (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AE9030>>)
Doc: If True, we print all optimization being applied
Value: False
on_opt_error ({'raise', 'pdb', 'ignore', 'warn'})
Doc: What to do when an optimization crashes: warn and skip it, raise the exception, or fall into the pdb debugger.
Value: warn
nocleanup (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AE90C0>>)
Doc: Suppress the deletion of code files that did not compile cleanly
Value: False
on_unused_input ({'raise', 'ignore', 'warn'})
Doc: What to do if a variable in the 'inputs' list of aesara.function() is not used in the graph.
Value: raise
gcc__cxxflags (<class 'str'>)
Doc: Extra compiler flags for gcc
Value:
cmodule__warn_no_version (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AE9150>>)
Doc: If True, will print a warning when compiling one or more Op with C code that can't be cached because there is no c_code_cache_version() function associated to at least one of those Ops.
Value: False
cmodule__remove_gxx_opt (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AE91B0>>)
Doc: If True, will remove the -O* parameter passed to g++.This is useful to debug in gdb modules compiled by Aesara.The parameter -g is passed by default to g++
Value: False
cmodule__compilation_warning (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AE92D0>>)
Doc: If True, will print compilation warnings.
Value: False
cmodule__preload_cache (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AE92A0>>)
Doc: If set to True, will preload the C module cache at import time
Value: False
cmodule__age_thresh_use (<class 'int'>)
Doc: In seconds. The time after which Aesara won't reuse a compile c module.
Value: 2073600
cmodule__debug (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AE9390>>)
Doc: If True, define a DEBUG macro (if not exists) for any compiled C code.
Value: False
compile__wait (<class 'int'>)
Doc: Time to wait before retrying to acquire the compile lock.
Value: 5
compile__timeout (<class 'int'>)
Doc: In seconds, time that a process will wait before deciding to
override an existing lock. An override only happens when the existing
lock is held by the same owner *and* has not been 'refreshed' by this
owner for more than this period. Refreshes are done every half timeout
period for running processes.
Value: 120
ctc__root (<class 'str'>)
Doc: Directory which contains the root of Baidu CTC library. It is assumed that the compiled library is either inside the build, lib or lib64 subdirectory, and the header inside the include directory.
Value:
tensor__cmp_sloppy (<class 'int'>)
Doc: Relax aesara.tensor.math._allclose (0) not at all, (1) a bit, (2) more
Value: 0
tensor__local_elemwise_fusion (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AE9540>>)
Doc: Enable or not in fast_run mode(fast_run optimization) the elemwise fusion optimization
Value: True
lib__amblibm (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AE9600>>)
Doc: Use amd's amdlibm numerical library
Value: False
tensor__insert_inplace_optimizer_validate_nb (<class 'int'>)
Doc: -1: auto, if graph have less then 500 nodes 1, else 10
Value: -1
traceback__limit (<class 'int'>)
Doc: The number of stack to trace. -1 mean all.
Value: 8
traceback__compile_limit (<class 'int'>)
Doc: The number of stack to trace to keep during compilation. -1 mean all. If greater then 0, will also make us save Aesara internal stack trace.
Value: 0
experimental__local_alloc_elemwise (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AE9780>>)
Doc: DEPRECATED: If True, enable the experimental optimization local_alloc_elemwise. Generates error if not True. Use optimizer_excluding=local_alloc_elemwise to disable.
Value: True
experimental__local_alloc_elemwise_assert (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AE97B0>>)
Doc: When the local_alloc_elemwise is applied, add an assert to highlight shape errors.
Value: True
warn__ignore_bug_before ({'0.10', '1.0.3', '0.4.1', '0.7', '0.6', 'all', '1.0', '1.0.5', '0.8.2', '1.0.2', '0.5', '1.0.4', '0.8', '0.3', '1.0.1', '0.8.1', 'None', '0.4', '0.9'})
Doc: If 'None', we warn about all Aesara bugs found by default. If 'all', we don't warn about Aesara bugs found by default. If a version, we print only the warnings relative to Aesara bugs found after that version. Warning for specific bugs can be configured with specific [warn] flags.
Value: 0.9
exception_verbosity ({'low', 'high'})
Doc: If 'low', the text of exceptions will generally refer to apply nodes with short names such as Elemwise{add_no_inplace}. If 'high', some exceptions will also refer to apply nodes with long descriptions like:
A. Elemwise{add_no_inplace}
B. log_likelihood_v_given_h
C. log_likelihood_h
Value: low
print_test_value (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AE9900>>)
Doc: If 'True', the __eval__ of an Aesara variable will return its test_value when this is available. This has the practical conseguence that, e.g., in debugging `my_var` will print the same as `my_var.tag.test_value` when a test value is defined.
Value: False
compute_test_value ({'raise', 'pdb', 'warn', 'off', 'ignore'})
Doc: If 'True', Aesara will run each op at graph build time, using Constants, SharedVariables and the tag 'test_value' as inputs to the function. This helps the user track down problems in the graph before it gets optimized.
Value: off
compute_test_value_opt ({'raise', 'pdb', 'warn', 'off', 'ignore'})
Doc: For debugging Aesara optimization only. Same as compute_test_value, but is used during Aesara optimization
Value: off
check_input (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AE9960>>)
Doc: Specify if types should check their input in their C code. It can be used to speed up compilation, reduce overhead (particularly for scalars) and reduce the number of generated C files.
Value: True
NanGuardMode__nan_is_error (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AE9990>>)
Doc: Default value for nan_is_error
Value: True
NanGuardMode__inf_is_error (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AE99F0>>)
Doc: Default value for inf_is_error
Value: True
NanGuardMode__big_is_error (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AE9A80>>)
Doc: Default value for big_is_error
Value: True
NanGuardMode__action ({'raise', 'pdb', 'warn'})
Doc: What NanGuardMode does when it finds a problem
Value: raise
DebugMode__patience (<class 'int'>)
Doc: Optimize graph this many times to detect inconsistency
Value: 10
DebugMode__check_c (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AE9B10>>)
Doc: Run C implementations where possible
Value: True
DebugMode__check_py (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AE9BA0>>)
Doc: Run Python implementations where possible
Value: True
DebugMode__check_finite (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AE9B70>>)
Doc: True -> complain about NaN/Inf results
Value: True
DebugMode__check_strides (<class 'int'>)
Doc: Check that Python- and C-produced ndarrays have same strides. On difference: (0) - ignore, (1) warn, or (2) raise error
Value: 0
DebugMode__warn_input_not_reused (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AE9C00>>)
Doc: Generate a warning when destroy_map or view_map says that an op works inplace, but the op did not reuse the input for its output.
Value: True
DebugMode__check_preallocated_output (<class 'str'>)
Doc: Test thunks with pre-allocated memory as output storage. This is a list of strings separated by ":". Valid values are: "initial" (initial storage in storage map, happens with Scan),"previous" (previously-returned memory), "c_contiguous", "f_contiguous", "strided" (positive and negative strides), "wrong_size" (larger and smaller dimensions), and "ALL" (all of the above).
Value:
DebugMode__check_preallocated_output_ndim (<class 'int'>)
Doc: When testing with "strided" preallocated output memory, test all combinations of strides over that number of (inner-most) dimensions. You may want to reduce that number to reduce memory or time usage, but it is advised to keep a minimum of 2.
Value: 4
profiling__time_thunks (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AE9C90>>)
Doc: Time individual thunks when profiling
Value: True
profiling__n_apply (<class 'int'>)
Doc: Number of Apply instances to print by default
Value: 20
profiling__n_ops (<class 'int'>)
Doc: Number of Ops to print by default
Value: 20
profiling__output_line_width (<class 'int'>)
Doc: Max line width for the profiling output
Value: 512
profiling__min_memory_size (<class 'int'>)
Doc: For the memory profile, do not print Apply nodes if the size
of their outputs (in bytes) is lower than this threshold
Value: 1024
profiling__min_peak_memory (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AE9DE0>>)
Doc: The min peak memory usage of the order
Value: False
profiling__destination (<class 'str'>)
Doc: File destination of the profiling output
Value: stderr
profiling__debugprint (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AE9E40>>)
Doc: Do a debugprint of the profiled functions
Value: False
profiling__ignore_first_call (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AE9E70>>)
Doc: Do we ignore the first call of an Aesara function.
Value: False
on_shape_error ({'raise', 'warn'})
Doc: warn: print a warning and use the default value. raise: raise an error
Value: warn
openmp (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AE9ED0>>)
Doc: Allow (or not) parallel computation on the CPU with OpenMP. This is the default value used when creating an Op that supports OpenMP parallelization. It is preferable to define it via the Aesara configuration file ~/.aesararc or with the environment variable AESARA_FLAGS. Parallelization is only done for some operations that implement it, and even for operations that implement parallelism, each operation is free to respect this flag or not. You can control the number of threads used with the environment variable OMP_NUM_THREADS. If it is set to 1, we disable openmp in Aesara by default.
Value: False
openmp_elemwise_minsize (<class 'int'>)
Doc: If OpenMP is enabled, this is the minimum size of vectors for which the openmp parallelization is enabled in element wise ops.
Value: 200000
optimizer_excluding (<class 'str'>)
Doc: When using the default mode, we will remove optimizer with these tags. Separate tags with ':'.
Value:
optimizer_including (<class 'str'>)
Doc: When using the default mode, we will add optimizer with these tags. Separate tags with ':'.
Value:
optimizer_requiring (<class 'str'>)
Doc: When using the default mode, we will require optimizer with these tags. Separate tags with ':'.
Value:
optdb__position_cutoff (<class 'float'>)
Doc: Where to stop eariler during optimization. It represent the position of the optimizer where to stop.
Value: inf
optdb__max_use_ratio (<class 'float'>)
Doc: A ratio that prevent infinite loop in EquilibriumGraphRewriter.
Value: 8.0
cycle_detection ({'fast', 'regular'})
Doc: If cycle_detection is set to regular, most inplaces are allowed,but it is slower. If cycle_detection is set to faster, less inplacesare allowed, but it makes the compilation faster.The interaction of which one give the lower peak memory usage iscomplicated and not predictable, so if you are close to the peakmemory usage, triyng both could give you a small gain.
Value: regular
check_stack_trace ({'off', 'log', 'raise', 'warn'})
Doc: A flag for checking the stack trace during the optimization process. default (off): does not check the stack trace of any optimization log: inserts a dummy stack trace that identifies the optimizationthat inserted the variable that had an empty stack trace.warn: prints a warning if a stack trace is missing and also a dummystack trace is inserted that indicates which optimization insertedthe variable that had an empty stack trace.raise: raises an exception if a stack trace is missing
Value: off
metaopt__verbose (<class 'int'>)
Doc: 0 for silent, 1 for only warnings, 2 for full output withtimings and selected implementation
Value: 0
metaopt__optimizer_excluding (<class 'str'>)
Doc: exclude optimizers with these tags. Separate tags with ':'.
Value:
metaopt__optimizer_including (<class 'str'>)
Doc: include optimizers with these tags. Separate tags with ':'.
Value:
profile (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AEA1A0>>)
Doc: If VM should collect profile information
Value: False
profile_optimizer (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AEA200>>)
Doc: If VM should collect optimizer profile information
Value: False
profile_memory (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AEA230>>)
Doc: If VM should collect memory profile information and print it
Value: False
<aesara.configparser.ConfigParam object at 0x0000022056AEA260>
Doc: Useful only for the VM Linkers. When lazy is None, auto detect if lazy evaluation is needed and use the appropriate version. If the C loop isn't being used and lazy is True, use the Stack VM; otherwise, use the Loop VM.
Value: None
unittests__rseed (<class 'str'>)
Doc: Seed to use for randomized unit tests. Special value 'random' means using a seed of None.
Value: 666
warn__round (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AEA320>>)
Doc: Warn when using `tensor.round` with the default mode. Round changed its default from `half_away_from_zero` to `half_to_even` to have the same default as NumPy.
Value: False
numba__vectorize_target ({'parallel', 'cpu', 'cuda'})
Doc: Default target for numba.vectorize.
Value: cpu
numba__fastmath (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AEA410>>)
Doc: If True, use Numba's fastmath mode.
Value: True
numba__cache (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056AEA4A0>>)
Doc: If True, use Numba's file based caching.
Value: True
compiledir_format (<class 'str'>)
Doc: Format string for platform-dependent compiled module subdirectory
(relative to base_compiledir). Available keys: aesara_version, device,
gxx_version, hostname, numpy_version, platform, processor,
python_bitwidth, python_int_bitwidth, python_version, short_platform.
Defaults to compiledir_%(short_platform)s-%(processor)s-
%(python_version)s-%(python_bitwidth)s.
Value: compiledir_%(short_platform)s-%(processor)s-%(python_version)s-%(python_bitwidth)s
<aesara.configparser.ConfigParam object at 0x0000022056AEA560>
Doc: platform-independent root directory for compiled modules
Value: C:\Users\andri\AppData\Local\Aesara
<aesara.configparser.ConfigParam object at 0x0000022056AEA6E0>
Doc: platform-dependent cache directory for compiled modules
Value: C:\Users\andri\AppData\Local\Aesara\compiledir_Windows-10-10.0.22621-SP0-Intel64_Family_6_Model_158_Stepping_10_GenuineIntel-3.10.6-64
blas__ldflags (<class 'str'>)
Doc: lib[s] to include for [Fortran] level-3 blas implementation
Value: -lblas
blas__check_openmp (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x0000022056CD74C0>>)
Doc: Check for openmp library conflict.
WARNING: Setting this to False leaves you open to wrong results in blas-related operations.
Value: True
scan__allow_gc (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x00000220593CDD80>>)
Doc: Allow/disallow gc inside of Scan (default: False)
Value: False
scan__allow_output_prealloc (<bound method BoolParam._apply of <aesara.configparser.BoolParam object at 0x000002205937B9A0>>)
Doc: Allow/disallow memory preallocation for outputs inside of scan (default: True)
Value: True
@pandrich , this is an aesara bug that we’ve seen before but we still haven’t pinpointed its cause. You can see the original aesara issue (now turned into a discussion) here. Until we figure out what is causing aesara to mess up its default blas ldflags you will have to use this workaround. I’ll write it here to have a copy of the patch on GitHub too.
Before creating your model, run the following to set the BLAS flags:
import aesara
import os
import sys
aesara.config.blas__ldflags = f'"-L{os.path.join(sys.prefix, "Library", "bin")}" -lmkl_core -lmkl_intel_thread -lmkl_rt'
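To see what that f-string expands to on your machine (an illustrative sketch; the actual path depends on your conda environment's `sys.prefix`, and the library list assumes the MKL packages from the environment above):

```python
import os
import sys

# Build the same flag string as the workaround, without touching aesara.
# The "-L..." part points the compiler at the conda env's DLL directory;
# the -l flags name the MKL libraries to link against.
flags = f'"-L{os.path.join(sys.prefix, "Library", "bin")}" -lmkl_core -lmkl_intel_thread -lmkl_rt'
print(flags)
```

Printing it first is an easy way to confirm the quoted `-L` path actually exists before assigning it to `aesara.config.blas__ldflags`.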
@pandrich, maybe you can help turn the aesara discussion back into an issue by helping us get a minimum reproducible example in plain aesara. What both your model and the other one that was failing had in common was that they both used a dot product under the hood. Could you try to see if the following snippet raises the same compilation error?
import aesara
from aesara import tensor as at
assert aesara.config.blas__ldflags == "-lblas"
x = at.dvector("x")
b = at.dmatrix("b")
y = at.dot(b, x)
f = aesara.function([x, b], y)
f([1, 0], [[0.5, 0.5], [0.5, 0.5]])
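(For reference, a sketch I've added: the result that call should produce once compilation works, computed by hand with no third-party dependencies.)

```python
# Same dot product as in the Aesara snippet, evaluated in plain Python
# so the output can be checked if the function does compile.
b = [[0.5, 0.5], [0.5, 0.5]]
x = [1.0, 0.0]
y = [sum(bi * xi for bi, xi in zip(row, x)) for row in b]
print(y)  # [0.5, 0.5]
```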
I tried that and got this:
File "c:\Opher\GitHub\bsf_donchin_jordan_2022\tests\test_aesara_flags.py", line 4, in <module>
assert aesara.config.blas__ldflags == '-lblas'
AssertionError
I tried again, changing the assert to a print:
print(aesara.config.blas__ldflags)
# assert aesara.config.blas__ldflags == '-lblas'
and got this:
Great @opherdonchin, the assert didn't apply in your case because aesara was setting different blas flags than for @pandrich. The important thing is that the compilation fails with the same error.
Thanks @lucianopaz and @opherdonchin for the help. I can confirm that I also get exactly the same error when I run that code. And thanks @lucianopaz for pointing me to that workaround!
Opened this new aesara issue with the reproducible example and one of the error messages.
> @pandrich, this is an aesara bug that we’ve seen before but we still haven’t pinpointed its cause. You can see the original aesara issue (now turned into a discussion) here. Until we figure out what is causing aesara to mess up its default blas ldflags you will have to use this workaround. I’ll write it here to have a copy of the patch on GitHub too.
What exactly makes you believe this is an Aesara-specific bug? So far, no one has been able to reproduce the issue in a clean environment, and we've seen very similar issues multiple times in the past that were exclusively caused by broken/changed dependencies and different package installation orders (e.g. installing numpy without MKL-supported BLAS libraries in one `conda install`, followed by installation of the MKL libraries in a separate `conda install`). It seems at least as likely to be entirely due to the environments/packages. Even https://github.com/aesara-devs/aesara/pull/947 wasn't clearly an Aesara issue, as it could just as easily have been caused by recent changes in the Windows compiler toolchains.
Regardless, we need to walk through the relevant logic in order to determine where the issue is.
Recall that some of the relevant logic depends on the success of an `import mkl` statement and on the successful compilation of the test program generated here, so it would be worth trying that `import mkl` and test program in one of the broken environments. @pandrich, @opherdonchin, can you try those and report the results?
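For reference, the two checks can be combined into one small script. This is only a sketch: the helper name `expected_mkl_flags` is made up, the flag list mirrors the one quoted elsewhere in this thread, and the imports are guarded so the script also runs in environments where `mkl`/`aesara` are absent.

```python
import os
import sys

def expected_mkl_flags(prefix):
    """Build the MKL link flags a conda-forge Windows env is expected to use
    (hypothetical helper; mirrors the flags quoted elsewhere in this thread)."""
    lib_path = os.path.join(prefix, "Library", "bin")
    return [f'-L"{lib_path}"'] + [
        f"-l{lib}" for lib in ("mkl_core", "mkl_intel_thread", "mkl_rt")
    ]

flags = expected_mkl_flags(sys.prefix)
print(" ".join(flags))

# Check 1: does the mkl Python package import at all?
try:
    import mkl
    print("mkl version:", mkl.get_version_string())
except ImportError as exc:
    print("import mkl failed:", exc)

# Check 2: does compiling a tiny BLAS test program with those flags succeed?
# try_blas_flag returns the working flag string, or "" on failure.
try:
    from aesara.link.c.cmodule import try_blas_flag
    print("try_blas_flag:", repr(try_blas_flag(flags)))
except ImportError as exc:
    print("aesara not installed:", exc)
```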
I haven’t been able to reproduce this locally, but the problem is that aesara fails to determine the default blas ldflags, which is very odd, since when both @pandrich and @opherdonchin manually set their ldflags to what aesara should have determined (here and here), compilation works for both their models and the small snippet. I agree with you that it’s very hard to play detective and figure out the cause without a failing environment.
From reading the aesara code, it looks like @pandrich will fail to `import mkl`, because he ends up with `-lblas`, but I’m not so sure what will happen for @opherdonchin, since he gets `-lmkl_rt`.
Maybe this is the branch that ends up being executed in @opherdonchin case?
Yeah, and that could be caused by a failed `import mkl` or a failed `try_blas_flag` call, since the latter will return `""` upon failure.
I added `import mkl` to my code but it didn't make a difference. That is, without the `ae.config.blas__ldflags` line there is an error about failing to find `mkl_rt`, and with the `ae.config.blas__ldflags` line there is no error and the correct output is produced.
import os
import sys
from aesara import tensor as at
import aesara as ae
import mkl
# ae.config.blas__ldflags = f'"-L{os.path.join(sys.prefix, "Library", "bin")}" -lmkl_core -lmkl_intel_thread -lmkl_rt'
x = at.dvector('x')
b = at.dmatrix('b')
y = at.dot(b,x)
f = ae.function([x, b], y)
print(f([1, 0], [[0.5, 0.5], [0.5, 0.5]]))
Thanks, Opher
We just need to know whether or not the `import mkl` statement fails, and it sounds like it doesn't, so `try_blas_flag` is the next thing to try.
First, what's the value of `mkl.get_version_string()`?
I'm sorry, I wasn't sure how to try `try_blas_flag`. It wasn't recognized either as a stand-alone `try_blas_flag` or as an attribute of aesara (`ae.try_blas_flag`); either way it produced a 'not found' error of one sort or another.
`mkl.get_version_string()` produced `Intel(R) oneAPI Math Kernel Library Version 2022.1-Product Build 20220311 for Intel(R) 64 architecture application`.
Thanks! Opher
Hi all, thanks for the continued effort on this and sorry for the slow reply.
@brandonwillard, I have tried to specifically isolate installation-related problems, so I'm running all the tests in an environment that was created exclusively with `conda create -n pymc_test -c conda-forge "pymc>=4"`. The only package I add afterwards, using conda, is jupyterlab. I'm hoping this at least limits the possibility that this is due to the specific environment, but you never know, I guess.
I can also `import mkl` without any problem, and the output of `mkl.get_version_string()` is `'Intel(R) oneAPI Math Kernel Library Version 2022.1-Product Build 20220311 for Intel(R) 64 architecture applications'`.
What flag do you want me to test `cmodule.try_blas_flag` with (I'm assuming this is what you mean)?
Thanks again!
I'm also adding the output of the environment creation, in case it's helpful. I did not even add jupyter in this case, and the same error pops up.
Try the following:
import os
import sys
import textwrap
from aesara.link.c.cmodule import GCC_compiler, std_lib_dirs
print(f"{sys.platform=}")
lib_path = os.path.join(sys.prefix, "Library", "bin")
cflags = [f'-L"{lib_path}"']
thr = "mkl_intel_thread"
cflags += [f"-l{l}" for l in ("mkl_core", thr, "mkl_rt")]
print(f"{cflags=}")
#
# The following is from `try_blas_flag`.
#
test_code = textwrap.dedent(
"""\
extern "C" double ddot_(int*, double*, int*, double*, int*);
int main(int argc, char** argv)
{
int Nx = 5;
int Sx = 1;
double x[5] = {0, 1, 2, 3, 4};
double r = ddot_(&Nx, x, &Sx, x, &Sx);
if ((r - 30.) > 1e-6 || (r - 30.) < -1e-6)
{
return -1;
}
return 0;
}
"""
)
print(f"{os.name=}")
path_wrapper = '"' if os.name == "nt" else ""
cflags.extend([f"-L{path_wrapper}{d}{path_wrapper}" for d in std_lib_dirs()])
print(f"{cflags=}")
compilation_ok, run_ok, out, err = GCC_compiler.try_compile_tmp(
test_code, tmp_prefix="try_blas_", flags=cflags, try_run=True, output=True
)
print(f"{compilation_ok=}")
print(f"{run_ok=}")
print(out.decode())
print(err.decode())
Thanks @brandonwillard, here are the outputs of the print statements:
compilation_ok=True
run_ok=True
""
""
@pandrich, this result implies that there aren't any compilation issues involving the MKL libraries, so it seems like the NumPy installation in that environment is not correctly set up for the BLAS libraries in your environment. That's an issue outside the scope of Aesara.
To confirm, what is the result of running `from aesara.link.c.cmodule import default_blas_ldflags; default_blas_ldflags()` and `import numpy as np; np.__config__.get_info("blas_opt")` in this problem environment?
As usual, thanks for helping with this Brandon. Here are those outputs:
default_blas_ldflags()
'-L"C:\\Users\\andri\\anaconda3\\envs\\pymc_test\\Library\\bin" -lmkl_core -lmkl_intel_thread -lmkl_rt'
and
np.__config__.get_info("blas_opt")
{'define_macros': [('NO_ATLAS_INFO', 1), ('HAVE_CBLAS', None)],
'libraries': ['cblas', 'blas', 'cblas', 'blas', 'cblas', 'blas'],
'library_dirs': ['C:/Users/andri/anaconda3/envs/pymc_test\\Library\\lib'],
'include_dirs': ['C:/Users/andri/anaconda3/envs/pymc_test\\Library\\include'],
'language': 'f77'}
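As a side note, one way to line up the two results is to pull the `-l` library names out of the ldflags string and compare them against the libraries NumPy reports. This is a pure string-handling sketch built on the outputs above, not anything Aesara does internally:

```python
import re

# The ldflags value reported by default_blas_ldflags() above.
ldflags = ('-L"C:\\Users\\andri\\anaconda3\\envs\\pymc_test\\Library\\bin" '
           '-lmkl_core -lmkl_intel_thread -lmkl_rt')

# Libraries aesara intends to link against (everything after a "-l" flag).
aesara_libs = re.findall(r"-l(\S+)", ldflags)

# Libraries NumPy reported above, deduplicated with order preserved.
numpy_libs = list(dict.fromkeys(["cblas", "blas", "cblas", "blas"]))

print(aesara_libs)                         # ['mkl_core', 'mkl_intel_thread', 'mkl_rt']
print(numpy_libs)                          # ['cblas', 'blas']
print(set(aesara_libs) & set(numpy_libs))  # set() -- no overlap at all
```

The empty intersection makes Brandon's point concrete: NumPy was built against a generic `blas`/`cblas`, while aesara's default flags point at MKL.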
If `try_blas_flag` is working with only that `\bin` path and no `-lblas` flag, as it appears to in https://github.com/pymc-devs/pymc/issues/6182#issuecomment-1277373389, then we need to figure out exactly where the problematic `-lblas` is coming from.
To recap, the original compilation error in https://github.com/pymc-devs/pymc/issues/6182#issue-1396191160 is due to the `-lblas` flag (in combination with some bad/missing library paths); however, the output above says that the `-lblas` flag shouldn't be included, only the mkl flags. Since I don't see that flag in the output of `default_blas_ldflags`, and it also looks like the environment paths are not the same as in the original environment, we need to confirm that this new environment also produces the exact same `-lblas` errors.
Thanks Brandon for the follow-up.
Yes, the `-lblas` error occurs in both of the environments I tested (the only difference being that in one I installed JupyterLab for ease of running some additional tests, while the other is purely the result of `conda create -n pymc_env -c conda-forge "pymc>=4"`).
I am a bit confused at this point: is `aesara` looking for `-lblas` when it shouldn't be? Or is the problem that `numpy` does not have it amongst its libraries?
It looks like the `-lblas` flag probably shouldn't be present in the new environment with MKL. Can you make sure that there aren't any environment variables or local settings that would add the `-lblas` flag (or any others) and try the erring example again?
Here's a list of all the variables (system and local) defined on my machine.
I don't see anything particularly concerning but maybe I'm missing something.
Do you have an `.aesararc` or `.aesararc.txt` file anywhere (e.g. in your home directory)?
Also, is that the environment when the Conda venv is activated?
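The usual override sources here are the `AESARA_FLAGS` environment variable and an `.aesararc`/`.aesararc.txt` file in the home directory. A small sketch for listing which of them are present (the helper name `find_overrides` is made up for illustration):

```python
import os

def find_overrides(env, rc_paths):
    """Return which Aesara config override sources are present: the
    AESARA_FLAGS environment variable and any existing rc files."""
    found = []
    if env.get("AESARA_FLAGS"):
        found.append("AESARA_FLAGS")
    found.extend(p for p in rc_paths if os.path.exists(p))
    return found

home = os.path.expanduser("~")
candidates = [os.path.join(home, name) for name in (".aesararc", ".aesararc.txt")]
print(find_overrides(os.environ, candidates))
```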
To summarize what I'm seeing here:
- `aesara.config.blas__ldflags` is `"-lblas"`
- `aesara.link.c.cmodule.default_blas_ldflags` returns `'-L"C:\\Users\\andri\\anaconda3\\envs\\pymc_test\\Library\\bin" -lmkl_core -lmkl_intel_thread -lmkl_rt'` (i.e. no `-lblas` flag)
Since the `aesara.config.blas__ldflags` value is supposed to be determined by the `default_blas_ldflags` function (see here in `add_blas_configvars`), this result is a bit confusing, so we need to be very sure that the above assumptions are correct.
Assuming they are, it might be that something is changing after the call to `add_blas_configvars`, i.e. when the `aesara.link.c.cmodule` module is loaded.
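One mechanism that would produce exactly this mismatch is an override shadowing the computed default. A minimal sketch of that precedence (hypothetical helper names, not Aesara's actual config machinery):

```python
def default_blas_ldflags():
    # Stand-in for the real detection logic, which probes MKL, numpy, etc.
    return "-lmkl_core -lmkl_intel_thread -lmkl_rt"

def resolve_blas_ldflags(override=None):
    # A value coming from a config file or environment variable wins over
    # the default, so the computed MKL flags are never even consulted.
    return override if override is not None else default_blas_ldflags()

print(resolve_blas_ldflags())          # -lmkl_core -lmkl_intel_thread -lmkl_rt
print(resolve_blas_ldflags("-lblas"))  # -lblas
```

Under this reading, `default_blas_ldflags()` can return the correct MKL flags while `aesara.config.blas__ldflags` still ends up as `-lblas`.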
Apologies, that was the general environment; I thought you wanted to see the system variables set in the background. I'm adding here the local variables specific to the conda environment:
I do have a `.aesararc` file in my home folder.
Both your statements above are correct. Possibly what might shed some light on this is that everything works fine after I use the workaround suggested by Luciano, where I actually change the `blas__ldflags` value.
Before creating your model, you need to do the following to set the blas flags:
import aesara
import os
import sys
aesara.config.blas__ldflags = f'"-L{os.path.join(sys.prefix, "Library", "bin")}" -lmkl_core -lmkl_intel_thread -lmkl_rt'
Yes, that makes sense, and thanks again for walking through all of this; it's notoriously difficult to reproduce environment-specific issues like these, so we sometimes need such long back-and-forths.
In general, we need to see what happens in a new, clean environment with only the default settings (i.e. no changes to `aesara.config.blas__ldflags` and no `.aesararc`) so we can pinpoint any potential bugs. Fixing issues in that scenario will make things a lot easier when you upgrade and/or create new environments in the future, especially on other machines; that's why it's important.
That said, is the error still present without either of those custom settings, and, if so, can you show us the output?
Hi Brandon,
Thanks as usual, and I totally agree with you: fixing things at the root is the way to go! I think I wasn't clear in my previous message, sorry. I pointed to Luciano's message only because I found it curious that things would work if I changed the `blas__ldflags` from being just `-lblas` to something that doesn't even contain `-lblas`, so I thought that could be a helpful hint for identifying the underlying issue.
That said, I think we have a lead!
Things seem to work just fine after I deleted the .aesararc file from my home folder!
Interestingly, the value of `blas__ldflags` is now no longer `-lblas` but `'-L"C:\\Users\\andri\\anaconda3\\envs\\pymc_test\\Library\\bin" -lmkl_core -lmkl_intel_thread -lmkl_rt'` (the same value that makes the code work in Luciano's example). So I'm guessing that the aesara config file was somehow wrongly overriding this?
I double-checked this by restoring the `.aesararc` file, and indeed things went back to not working, with `blas__ldflags` again equal to `-lblas`.
And indeed the content of `.aesararc` is:
[blas]
ldflags = -lblas
I think this is it! I can't figure out, though, how that file was created in the first place.
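For anyone checking their own setup, such a file can be inspected with Python's `configparser` (Aesara's own loader is more involved; this sketch only illustrates how the `[blas]` section's `ldflags` option carries the value that shows up as `blas__ldflags`):

```python
import configparser
import os
import tempfile

# Recreate the problematic file found in the home directory (same content
# as shown above), in a temporary directory for this sketch.
rc_path = os.path.join(tempfile.mkdtemp(), ".aesararc")
with open(rc_path, "w") as f:
    f.write("[blas]\nldflags = -lblas\n")

cfg = configparser.ConfigParser()
cfg.read(rc_path)
print(cfg.get("blas", "ldflags"))  # -lblas
```

Deleting (or editing) the rc file, as done above, removes the override and lets aesara fall back to its computed defaults.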
Description of your problem
An error occurs when running a Gaussian process model within an otherwise perfectly working pymc v4 environment. The model runs without errors when using pymc3. I work on a Windows machine, and given the error output, I wonder whether there could be an incompatibility problem with the path format used in the search for the blas folder.
Please provide a minimal, self-contained, and reproducible example.
Please provide the full traceback.
Complete error traceback
```python You can find the C code in this temporary file: C:\Users\andri\AppData\Local\Temp\aesara_compilation_error_872581w9 library evel/Library/mingw-w64/bin/../lib/gcc/x86_64-w64-mingw32/5.3.0/../../../../x86_64-w64-mingw32/bin/ld.exe: is not found. library blas is not found. --------------------------------------------------------------------------- CompileError Traceback (most recent call last) File ~\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\aesara\link\vm.py:1246, in VMLinker.make_all(self, profiler, input_storage, output_storage, storage_map) 1242 # no-recycling is done at each VM.__call__ So there is 1243 # no need to cause duplicate c code by passing 1244 # no_recycling here. 1245 thunks.append( -> 1246 node.op.make_thunk(node, storage_map, compute_map, [], impl=impl) 1247 ) 1248 linker_make_thunk_time[node] = time.time() - thunk_start File ~\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\aesara\link\c\op.py:131, in COp.make_thunk(self, node, storage_map, compute_map, no_recycling, impl) 130 try: --> 131 return self.make_c_thunk(node, storage_map, compute_map, no_recycling) 132 except (NotImplementedError, MethodNotDefined): 133 # We requested the c code, so don't catch the error. 
File ~\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\aesara\link\c\op.py:96, in COp.make_c_thunk(self, node, storage_map, compute_map, no_recycling) 95 raise NotImplementedError("float16") ---> 96 outputs = cl.make_thunk( 97 input_storage=node_input_storage, output_storage=node_output_storage 98 ) 99 thunk, node_input_filters, node_output_filters = outputs File ~\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\aesara\link\c\basic.py:1202, in CLinker.make_thunk(self, input_storage, output_storage, storage_map, cache, **kwargs) 1201 init_tasks, tasks = self.get_init_tasks() -> 1202 cthunk, module, in_storage, out_storage, error_storage = self.__compile__( 1203 input_storage, output_storage, storage_map, cache 1204 ) 1206 res = _CThunk(cthunk, init_tasks, tasks, error_storage, module) File ~\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\aesara\link\c\basic.py:1122, in CLinker.__compile__(self, input_storage, output_storage, storage_map, cache) 1121 output_storage = tuple(output_storage) -> 1122 thunk, module = self.cthunk_factory( 1123 error_storage, 1124 input_storage, 1125 output_storage, 1126 storage_map, 1127 cache, 1128 ) 1129 return ( 1130 thunk, 1131 module, (...) 
1140 error_storage, 1141 ) File ~\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\aesara\link\c\basic.py:1647, in CLinker.cthunk_factory(self, error_storage, in_storage, out_storage, storage_map, cache) 1646 cache = get_module_cache() -> 1647 module = cache.module_from_key(key=key, lnk=self) 1649 vars = self.inputs + self.outputs + self.orphans File ~\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\aesara\link\c\cmodule.py:1229, in ModuleCache.module_from_key(self, key, lnk) 1228 location = dlimport_workdir(self.dirname) -> 1229 module = lnk.compile_cmodule(location) 1230 name = module.__file__ File ~\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\aesara\link\c\basic.py:1546, in CLinker.compile_cmodule(self, location) 1545 _logger.debug(f"LOCATION {location}") -> 1546 module = c_compiler.compile_str( 1547 module_name=mod.code_hash, 1548 src_code=src_code, 1549 location=location, 1550 include_dirs=self.header_dirs(), 1551 lib_dirs=self.lib_dirs(), 1552 libs=libs, 1553 preargs=preargs, 1554 ) 1555 except Exception as e: File ~\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\aesara\link\c\cmodule.py:2640, in GCC_compiler.compile_str(module_name, src_code, location, include_dirs, lib_dirs, libs, preargs, py_module, hide_symbols) 2636 # We replace '\n' by '. ' in the error message because when Python 2637 # prints the exception, having '\n' in the text makes it more 2638 # difficult to read. 2639 # compile_stderr = compile_stderr.replace("\n", ". ") -> 2640 raise CompileError( 2641 f"Compilation failed (return status={status}):\n{' '.join(cmd)}\n{compile_stderr}" 2642 ) 2643 elif config.cmodule__compilation_warning and compile_stderr: 2644 # Print errors just below the command line. 
CompileError: Compilation failed (return status=1): "C:\Users\andri\anaconda3\envs\wfp-stock-level\Library\mingw-w64\bin\g++.exe" -shared -g -O3 -fno-math-errno -Wno-unused-label -Wno-unused-variable -Wno-write-strings -Wno-c++11-narrowing -fno-exceptions -fno-unwind-tables -fno-asynchronous-unwind-tables -march=broadwell -mmmx -mno-3dnow -msse -msse2 -msse3 -mssse3 -mno-sse4a -mcx16 -msahf -mmovbe -maes -mno-sha -mpclmul -mpopcnt -mabm -mno-lwp -mfma -mno-fma4 -mno-xop -mbmi -mbmi2 -mno-tbm -mavx -mavx2 -msse4.2 -msse4.1 -mlzcnt -mno-rtm -mno-hle -mrdrnd -mf16c -mfsgsbase -mrdseed -mprfchw -madx -mfxsr -mxsave -mxsaveopt -mno-avx512f -mno-avx512er -mno-avx512cd -mno-avx512pf -mno-prefetchwt1 -mclflushopt -mxsavec -mxsaves -mno-avx512dq -mno-avx512bw -mno-avx512vl -mno-avx512ifma -mno-avx512vbmi -mno-clwb -mno-pcommit -mno-mwaitx --param l1-cache-size=32 --param l1-cache-line-size=64 --param l2-cache-size=12288 -mtune=generic -DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION -m64 -DMS_WIN64 -I"C:\Users\andri\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\numpy\core\include" -I"C:\Users\andri\anaconda3\envs\wfp-stock-level-demand\include" -I"C:\Users\andri\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\aesara\link\c\c_code" -L"C:\Users\andri\anaconda3\envs\wfp-stock-level-demand\libs" -L"C:\Users\andri\anaconda3\envs\wfp-stock-level-demand" -o "C:\Users\andri\AppData\Local\Aesara\compiledir_Windows-10-10.0.22621-SP0-Intel64_Family_6_Model_158_Stepping_10_GenuineIntel-3.8.13-64\tmpadhmjsnv\m47260be5189a2297f95a5722358fab6ed80907bc9b3b7adb3279eddf44b57064.pyd" "C:\Users\andri\AppData\Local\Aesara\compiledir_Windows-10-10.0.22621-SP0-Intel64_Family_6_Model_158_Stepping_10_GenuineIntel-3.8.13-64\tmpadhmjsnv\mod.cpp" -lblas "C:\Users\andri\anaconda3\envs\wfp-stock-level-demand\python38.dll" C:/Users/andri/anaconda3/envs/wfp-stock-level/Library/mingw-w64/bin/../lib/gcc/x86_64-w64-mingw32/5.3.0/../../../../x86_64-w64-mingw32/bin/ld.exe: cannot find -lblas 
collect2.exe: error: ld returned 1 exit status During handling of the above exception, another exception occurred: CompileError Traceback (most recent call last) Cell In [3], line 52 43 # Likelihood 44 target = pm.Normal( 45 "target", 46 mu=target_mu, (...) 49 dims="time" 50 ) ---> 52 priors = pm.sample_prior_predictive() File ~\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\pymc\sampling.py:2307, in sample_prior_predictive(samples, model, var_names, random_seed, return_inferencedata, idata_kwargs, compile_kwargs) 2304 compile_kwargs.setdefault("allow_input_downcast", True) 2305 compile_kwargs.setdefault("accept_inplace", True) -> 2307 sampler_fn, volatile_basic_rvs = compile_forward_sampling_function( 2308 vars_to_sample, 2309 vars_in_trace=[], 2310 basic_rvs=model.basic_RVs, 2311 givens_dict=None, 2312 random_seed=random_seed, 2313 **compile_kwargs, 2314 ) 2316 # All model variables have a name, but mypy does not know this 2317 _log.info(f"Sampling: {list(sorted(volatile_basic_rvs, key=lambda var: var.name))}") # type: ignore File ~\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\pymc\sampling.py:1785, in compile_forward_sampling_function(outputs, vars_in_trace, basic_rvs, givens_dict, constant_data, constant_coords, **kwargs) 1773 # Populate the givens list 1774 givens = [ 1775 ( 1776 node, (...) 
1781 for node, value in givens_dict.items() 1782 ] 1784 return ( -> 1785 compile_pymc(inputs, fg.outputs, givens=givens, on_unused_input="ignore", **kwargs), 1786 set(basic_rvs) & (volatile_nodes - set(givens_dict)), # Basic RVs that will be resampled 1787 ) File ~\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\pymc\aesaraf.py:970, in compile_pymc(inputs, outputs, random_seed, mode, **kwargs) 968 opt_qry = mode.provided_optimizer.including("random_make_inplace", check_parameter_opt) 969 mode = Mode(linker=mode.linker, optimizer=opt_qry) --> 970 aesara_function = aesara.function( 971 inputs, 972 outputs, 973 updates={**rng_updates, **kwargs.pop("updates", {})}, 974 mode=mode, 975 **kwargs, 976 ) 977 return aesara_function File ~\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\aesara\compile\function\__init__.py:317, in function(inputs, outputs, mode, updates, givens, no_default_updates, accept_inplace, name, rebuild_strict, allow_input_downcast, profile, on_unused_input) 311 fn = orig_function( 312 inputs, outputs, mode=mode, accept_inplace=accept_inplace, name=name 313 ) 314 else: 315 # note: pfunc will also call orig_function -- orig_function is 316 # a choke point that all compilation must pass through --> 317 fn = pfunc( 318 params=inputs, 319 outputs=outputs, 320 mode=mode, 321 updates=updates, 322 givens=givens, 323 no_default_updates=no_default_updates, 324 accept_inplace=accept_inplace, 325 name=name, 326 rebuild_strict=rebuild_strict, 327 allow_input_downcast=allow_input_downcast, 328 on_unused_input=on_unused_input, 329 profile=profile, 330 output_keys=output_keys, 331 ) 332 return fn File ~\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\aesara\compile\function\pfunc.py:371, in pfunc(params, outputs, mode, updates, givens, no_default_updates, accept_inplace, name, rebuild_strict, allow_input_downcast, profile, on_unused_input, output_keys, fgraph) 357 profile = ProfileStats(message=profile) 359 inputs, cloned_outputs = 
construct_pfunc_ins_and_outs( 360 params, 361 outputs, (...) 368 fgraph=fgraph, 369 ) --> 371 return orig_function( 372 inputs, 373 cloned_outputs, 374 mode, 375 accept_inplace=accept_inplace, 376 name=name, 377 profile=profile, 378 on_unused_input=on_unused_input, 379 output_keys=output_keys, 380 fgraph=fgraph, 381 ) File ~\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\aesara\compile\function\types.py:1759, in orig_function(inputs, outputs, mode, accept_inplace, name, profile, on_unused_input, output_keys, fgraph) 1747 m = Maker( 1748 inputs, 1749 outputs, (...) 1756 fgraph=fgraph, 1757 ) 1758 with config.change_flags(compute_test_value="off"): -> 1759 fn = m.create(defaults) 1760 finally: 1761 t2 = time.time() File ~\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\aesara\compile\function\types.py:1652, in FunctionMaker.create(self, input_storage, trustme, storage_map) 1649 start_import_time = aesara.link.c.cmodule.import_time 1651 with config.change_flags(traceback__limit=config.traceback__compile_limit): -> 1652 _fn, _i, _o = self.linker.make_thunk( 1653 input_storage=input_storage_lists, storage_map=storage_map 1654 ) 1656 end_linker = time.time() 1658 linker_time = end_linker - start_linker File ~\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\aesara\link\basic.py:254, in LocalLinker.make_thunk(self, input_storage, output_storage, storage_map, **kwargs) 247 def make_thunk( 248 self, 249 input_storage: Optional["InputStorageType"] = None, (...) 
252 **kwargs, 253 ) -> Tuple["BasicThunkType", "InputStorageType", "OutputStorageType"]: --> 254 return self.make_all( 255 input_storage=input_storage, 256 output_storage=output_storage, 257 storage_map=storage_map, 258 )[:3] File ~\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\aesara\link\vm.py:1255, in VMLinker.make_all(self, profiler, input_storage, output_storage, storage_map) 1253 thunks[-1].lazy = False 1254 except Exception: -> 1255 raise_with_op(fgraph, node) 1257 t1 = time.time() 1259 if self.profile: File ~\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\aesara\link\utils.py:534, in raise_with_op(fgraph, node, thunk, exc_info, storage_map) 529 warnings.warn( 530 f"{exc_type} error does not allow us to add an extra error message" 531 ) 532 # Some exception need extra parameter in inputs. So forget the 533 # extra long error message in that case. --> 534 raise exc_value.with_traceback(exc_trace) File ~\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\aesara\link\vm.py:1246, in VMLinker.make_all(self, profiler, input_storage, output_storage, storage_map) 1241 thunk_start = time.time() 1242 # no-recycling is done at each VM.__call__ So there is 1243 # no need to cause duplicate c code by passing 1244 # no_recycling here. 1245 thunks.append( -> 1246 node.op.make_thunk(node, storage_map, compute_map, [], impl=impl) 1247 ) 1248 linker_make_thunk_time[node] = time.time() - thunk_start 1249 if not hasattr(thunks[-1], "lazy"): 1250 # We don't want all ops maker to think about lazy Ops. 1251 # So if they didn't specify that its lazy or not, it isn't. 1252 # If this member isn't present, it will crash later. 
File ~\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\aesara\link\c\op.py:131, in COp.make_thunk(self, node, storage_map, compute_map, no_recycling, impl) 127 self.prepare_node( 128 node, storage_map=storage_map, compute_map=compute_map, impl="c" 129 ) 130 try: --> 131 return self.make_c_thunk(node, storage_map, compute_map, no_recycling) 132 except (NotImplementedError, MethodNotDefined): 133 # We requested the c code, so don't catch the error. 134 if impl == "c": File ~\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\aesara\link\c\op.py:96, in COp.make_c_thunk(self, node, storage_map, compute_map, no_recycling) 94 print(f"Disabling C code for {self} due to unsupported float16") 95 raise NotImplementedError("float16") ---> 96 outputs = cl.make_thunk( 97 input_storage=node_input_storage, output_storage=node_output_storage 98 ) 99 thunk, node_input_filters, node_output_filters = outputs 101 @is_cthunk_wrapper_type 102 def rval(): File ~\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\aesara\link\c\basic.py:1202, in CLinker.make_thunk(self, input_storage, output_storage, storage_map, cache, **kwargs) 1167 """Compile this linker's `self.fgraph` and return a function that performs the computations. 1168 1169 The return values can be used as follows: (...) 
1199 1200 """ 1201 init_tasks, tasks = self.get_init_tasks() -> 1202 cthunk, module, in_storage, out_storage, error_storage = self.__compile__( 1203 input_storage, output_storage, storage_map, cache 1204 ) 1206 res = _CThunk(cthunk, init_tasks, tasks, error_storage, module) 1207 res.nodes = self.node_order File ~\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\aesara\link\c\basic.py:1122, in CLinker.__compile__(self, input_storage, output_storage, storage_map, cache) 1120 input_storage = tuple(input_storage) 1121 output_storage = tuple(output_storage) -> 1122 thunk, module = self.cthunk_factory( 1123 error_storage, 1124 input_storage, 1125 output_storage, 1126 storage_map, 1127 cache, 1128 ) 1129 return ( 1130 thunk, 1131 module, (...) 1140 error_storage, 1141 ) File ~\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\aesara\link\c\basic.py:1647, in CLinker.cthunk_factory(self, error_storage, in_storage, out_storage, storage_map, cache) 1645 if cache is None: 1646 cache = get_module_cache() -> 1647 module = cache.module_from_key(key=key, lnk=self) 1649 vars = self.inputs + self.outputs + self.orphans 1650 # List of indices that should be ignored when passing the arguments 1651 # (basically, everything that the previous call to uniq eliminated) File ~\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\aesara\link\c\cmodule.py:1229, in ModuleCache.module_from_key(self, key, lnk) 1227 try: 1228 location = dlimport_workdir(self.dirname) -> 1229 module = lnk.compile_cmodule(location) 1230 name = module.__file__ 1231 assert name.startswith(location) File ~\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\aesara\link\c\basic.py:1546, in CLinker.compile_cmodule(self, location) 1544 try: 1545 _logger.debug(f"LOCATION {location}") -> 1546 module = c_compiler.compile_str( 1547 module_name=mod.code_hash, 1548 src_code=src_code, 1549 location=location, 1550 include_dirs=self.header_dirs(), 1551 lib_dirs=self.lib_dirs(), 1552 libs=libs, 1553 
preargs=preargs, 1554 ) 1555 except Exception as e: 1556 e.args += (str(self.fgraph),) File ~\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\aesara\link\c\cmodule.py:2640, in GCC_compiler.compile_str(module_name, src_code, location, include_dirs, lib_dirs, libs, preargs, py_module, hide_symbols) 2632 print( 2633 "Check if package python-dev or python-devel is installed." 2634 ) 2636 # We replace '\n' by '. ' in the error message because when Python 2637 # prints the exception, having '\n' in the text makes it more 2638 # difficult to read. 2639 # compile_stderr = compile_stderr.replace("\n", ". ") -> 2640 raise CompileError( 2641 f"Compilation failed (return status={status}):\n{' '.join(cmd)}\n{compile_stderr}" 2642 ) 2643 elif config.cmodule__compilation_warning and compile_stderr: 2644 # Print errors just below the command line. 2645 print(compile_stderr) CompileError: Compilation failed (return status=1): "C:\Users\andri\anaconda3\envs\wfp-stock-level\Library\mingw-w64\bin\g++.exe" -shared -g -O3 -fno-math-errno -Wno-unused-label -Wno-unused-variable -Wno-write-strings -Wno-c++11-narrowing -fno-exceptions -fno-unwind-tables -fno-asynchronous-unwind-tables -march=broadwell -mmmx -mno-3dnow -msse -msse2 -msse3 -mssse3 -mno-sse4a -mcx16 -msahf -mmovbe -maes -mno-sha -mpclmul -mpopcnt -mabm -mno-lwp -mfma -mno-fma4 -mno-xop -mbmi -mbmi2 -mno-tbm -mavx -mavx2 -msse4.2 -msse4.1 -mlzcnt -mno-rtm -mno-hle -mrdrnd -mf16c -mfsgsbase -mrdseed -mprfchw -madx -mfxsr -mxsave -mxsaveopt -mno-avx512f -mno-avx512er -mno-avx512cd -mno-avx512pf -mno-prefetchwt1 -mclflushopt -mxsavec -mxsaves -mno-avx512dq -mno-avx512bw -mno-avx512vl -mno-avx512ifma -mno-avx512vbmi -mno-clwb -mno-pcommit -mno-mwaitx --param l1-cache-size=32 --param l1-cache-line-size=64 --param l2-cache-size=12288 -mtune=generic -DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION -m64 -DMS_WIN64 -I"C:\Users\andri\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\numpy\core\include" 
-I"C:\Users\andri\anaconda3\envs\wfp-stock-level-demand\include" -I"C:\Users\andri\anaconda3\envs\wfp-stock-level-demand\lib\site-packages\aesara\link\c\c_code" -L"C:\Users\andri\anaconda3\envs\wfp-stock-level-demand\libs" -L"C:\Users\andri\anaconda3\envs\wfp-stock-level-demand" -o "C:\Users\andri\AppData\Local\Aesara\compiledir_Windows-10-10.0.22621-SP0-Intel64_Family_6_Model_158_Stepping_10_GenuineIntel-3.8.13-64\tmpadhmjsnv\m47260be5189a2297f95a5722358fab6ed80907bc9b3b7adb3279eddf44b57064.pyd" "C:\Users\andri\AppData\Local\Aesara\compiledir_Windows-10-10.0.22621-SP0-Intel64_Family_6_Model_158_Stepping_10_GenuineIntel-3.8.13-64\tmpadhmjsnv\mod.cpp" -lblas "C:\Users\andri\anaconda3\envs\wfp-stock-level-demand\python38.dll" C:/Users/andri/anaconda3/envs/wfp-stock-level/Library/mingw-w64/bin/../lib/gcc/x86_64-w64-mingw32/5.3.0/../../../../x86_64-w64-mingw32/bin/ld.exe: cannot find -lblas collect2.exe: error: ld returned 1 exit status Apply node that caused the error: Dot22Scalar(Elemwise{true_div,no_inplace}.0, InplaceDimShuffle{1,0}.0, TensorConstant{-2.0}) Toposort index: 15 Inputs types: [TensorType(float64, (100, 1)), TensorType(float64, (1, 100)), TensorType(float64, ())] HINT: Use a linker other than the C linker to print the inputs' shapes and strides. HINT: Re-running with most Aesara optimizations disabled could provide a back-trace showing when this node was created. This can be done by setting the Aesara flag 'optimizer=fast_compile'. If that does not work, Aesara optimizations can be disabled with 'optimizer=None'. HINT: Use the Aesara flag `exception_verbosity=high` for a debug print-out and storage map footprint of this Apply node. ```Please provide any additional information below.
I have already tried repeatedly to rebuild the environment and have only used conda for package management (I saw that another user had an issue with pip superseding their numpy installation; all my packages are from the conda-forge channel). BLAS does appear as an installed package in the environment.
Versions and main components