Closed · Vaibhav-22-dm closed this issue 1 year ago
Are you sure your MinGW is the 64-bit version? If you installed it from the MinGW website, it is very likely 32-bit. MSYS2 has 64-bit versions available.
C:\Users\vaibh>g++ -v
Using built-in specs.
COLLECT_GCC=g++
COLLECT_LTO_WRAPPER=c:/mingw/bin/../libexec/gcc/mingw32/6.3.0/lto-wrapper.exe
Target: mingw32
Configured with: ../src/gcc-6.3.0/configure --build=x86_64-pc-linux-gnu --host=mingw32 --with-gmp=/mingw --with-mpfr=/mingw --with-mpc=/mingw --with-isl=/mingw --prefix=/mingw --disable-win32-registry --target=mingw32 --with-arch=i586 --enable-languages=c,c++,objc,obj-c++,fortran,ada --with-pkgversion='MinGW.org GCC-6.3.0-1' --enable-static --enable-shared --enable-threads --with-dwarf2 --disable-sjlj-exceptions --enable-version-specific-runtime-libs --with-libiconv-prefix=/mingw --with-libintl-prefix=/mingw --enable-libstdcxx-debug --with-tune=generic --enable-libgomp --disable-libvtv --enable-nls
Thread model: win32
gcc version 6.3.0 (MinGW.org GCC-6.3.0-1)
@lucianopaz Does `Target: mingw32` mean that I have installed a 32-bit version? As far as I remember, I installed MinGW from sourceforge.net. Is there any way to verify whether I have installed a 64-bit version?
Yes, you installed the 32-bit version. I'm not an expert on this, but I know that MSYS2 ships a 64-bit version of MinGW, whereas the MinGW site only offers the 32-bit version.
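One quick way to check is `g++ -dumpmachine`, which prints the compiler's target triplet: a 64-bit MinGW toolchain reports something like `x86_64-w64-mingw32`, while plain `mingw32` or `i686-...` means 32-bit. A minimal sketch comparing the compiler's bitness against the Python interpreter's (the function names here are illustrative, not from ART):

```python
import struct

def interpreter_bits():
    # Pointer size of the running Python: 8 bytes -> 64-bit, 4 -> 32-bit.
    return struct.calcsize("P") * 8

def compiler_bits(triplet):
    # `g++ -dumpmachine` prints e.g. "x86_64-w64-mingw32" (64-bit)
    # or "mingw32" / "i686-w64-mingw32" (32-bit).
    return 64 if triplet.startswith("x86_64") else 32

# Usage on the affected machine (output shown in the `g++ -v` dump above):
#   g++ -dumpmachine  ->  "mingw32", i.e. a 32-bit compiler
# which mismatches a 64-bit Python and breaks Theano's compilation step.
```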
@lucianopaz Thanks for the solution. I installed the MinGW 64-bit compiler from MSYS2 and no longer get that error. However, I am now getting a different error:
Defaulting to a maximum of 6 cores for MCMC sampling (all available). See the max_mcmc_cores parameter to control ART's use of parallelism.
Warning: Dataframe does not have a time column matching one of the supported formats. Assuming that all data in the file comes from a single time point.
C:\Users\vaibh\anaconda3\lib\site-packages\xgboost\compat.py:36: FutureWarning: pandas.Int64Index is deprecated and will be removed from pandas in a future version. Use pandas.Index with the appropriate dtype instead.
from pandas import MultiIndex, Int64Index
C:\Users\vaibh\anaconda3\lib\site-packages\pandas\core\internals\blocks.py:938: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
arr_value = np.asarray(value)
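As the `VisibleDeprecationWarning` above states, building an ndarray from ragged nested sequences (rows of unequal length) now requires an explicit `dtype=object`. A minimal illustration of the pattern NumPy is warning about:

```python
import numpy as np

ragged = [[1, 2, 3], [4, 5]]          # rows of unequal length

# Passing dtype=object explicitly avoids the deprecation warning
# (and, in newer NumPy releases, an outright ValueError).
arr = np.array(ragged, dtype=object)

# arr is a 1-D array of two Python list objects, shape (2,).
```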
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File <timed exec>:2, in <module>
File C:\My Drive\D Drive Vaibhav\Machine Learning\Foreign Training\multiomicspaper\notebooks\../../AutomatedRecommendationTool\art\core.py:388, in RecommendationEngine.__init__(self, df, input_vars, input_var_type, bounds_file, scale_input_vars, response_vars, build_model, cross_val, ensemble_model, standardize, intercept, recommend, objective, threshold, target_values, num_recommendations, rel_rec_distance, niter, alpha, output_directory, max_mcmc_cores, verbose, testing, seed, initial_cycle, warning_callback, last_dashes_denote_replicates, num_sklearn_models, num_tpot_models)
386 self.save_pkl_object()
387 elif build_model:
--> 388 self.build_model()
389 if recommend:
390 self.optimize()
File C:\My Drive\D Drive Vaibhav\Machine Learning\Foreign Training\multiomicspaper\notebooks\../../AutomatedRecommendationTool\art\core.py:591, in RecommendationEngine.build_model(self)
588 self._initialize_models()
590 if self.cross_val:
--> 591 self._cross_val_models()
592 plot.predictions_vs_observations(self, cv_flag=True, errorbars_flag=True)
594 self._fit_models()
File C:\My Drive\D Drive Vaibhav\Machine Learning\Foreign Training\multiomicspaper\notebooks\../../AutomatedRecommendationTool\art\core.py:1042, in RecommendationEngine._cross_val_models(self)
1035 cv_predictions[j][i] = level0_cv_predictions
1037 # ================================================== #
1038 # Cross validated predictions for the ensemble model
1039 # -------------------------------------------------- #
1040
1041 # Build (fit) ensemble model
-> 1042 self._build_ensemble_model(idx=train_idx)
1044 # Predictions with ensemble model
1045 # Apart from the mean values, store prediction std and draws for plotting
1046 # (not possible always due to a bug in pymc3)
1047 f = np.zeros((len(test_idx), self.num_models, self.num_response_var))
File C:\My Drive\D Drive Vaibhav\Machine Learning\Foreign Training\multiomicspaper\notebooks\../../AutomatedRecommendationTool\art\core.py:968, in RecommendationEngine._build_ensemble_model(self, idx)
965 if self.standardize:
966 self._standardize_level1_data()
--> 968 self._ensemble_model(idx)
File C:\My Drive\D Drive Vaibhav\Machine Learning\Foreign Training\multiomicspaper\notebooks\../../AutomatedRecommendationTool\art\core.py:1407, in RecommendationEngine._ensemble_model(self, idx, testing)
1397 if not testing:
1398 # Instantiate sampler and draw samples from the posterior.
1399 # Omit the random_seed parameter, since PYMC3 @3.8 internally calls
(...)
1404 # chains. That should still be predictable since ART calls np.random.seed()
1405 # above.
1406 step = pm.NUTS() # Slice, Metropolis, HamiltonianMC, NUTS
-> 1407 self.trace[j] = pm.sample(
1408 const.n_iterations,
1409 step=step,
1410 initvals=initvals,
1411 progressbar=progressbar,
1412 tune=const.tune_steps,
1413 cores=cores,
1414 # work around an API update to be added in PYMC3 4.0
1415 return_inferencedata=False,
1416 # , init=adapt_diag
1417 # live_plot=True, skip_first=100, refresh_every=300, roll_over=1000
1418 )
1420 logger = logging.getLogger("pymc3")
1421 logger.propagate = True
File ~\anaconda3\lib\site-packages\pymc3\sampling.py:515, in sample(draws, step, init, n_init, start, trace, chain_idx, chains, cores, tune, progressbar, model, random_seed, discard_tuned_samples, compute_convergence_checks, callback, jitter_max_retries, return_inferencedata, idata_kwargs, mp_ctx, pickle_backend, **kwargs)
513 step = assign_step_methods(model, step, step_kwargs=kwargs)
514 else:
--> 515 step = assign_step_methods(model, step, step_kwargs=kwargs)
517 if isinstance(step, list):
518 step = CompoundStep(step)
File ~\anaconda3\lib\site-packages\pymc3\sampling.py:217, in assign_step_methods(model, step, methods, step_kwargs)
209 selected = max(
210 methods,
211 key=lambda method, var=var, has_gradient=has_gradient: method._competence(
212 var, has_gradient
213 ),
214 )
215 selected_steps[selected].append(var)
--> 217 return instantiate_steppers(model, steps, selected_steps, step_kwargs)
File ~\anaconda3\lib\site-packages\pymc3\sampling.py:143, in instantiate_steppers(_model, steps, selected_steps, step_kwargs)
141 unused_args = set(step_kwargs).difference(used_keys)
142 if unused_args:
--> 143 raise ValueError("Unused step method arguments: %s" % unused_args)
145 if len(steps) == 1:
146 return steps[0]
ValueError: Unused step method arguments: {'initvals'}
Is this issue somehow related to the deprecated version of Theano? If so, how can I resolve it?
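This particular `ValueError` does not point to Theano: `initvals` is a keyword of `pm.sample` in PyMC v4 and later, while PyMC3 3.x takes `start=` instead; an unrecognized keyword falls through `**kwargs` to the step methods, producing exactly "Unused step method arguments: {'initvals'}". A version-dispatch sketch (the helper name is hypothetical, not part of ART or PyMC3):

```python
# Hypothetical helper: pick the right pm.sample keyword for the
# installed PyMC version. PyMC3 3.x expects `start=`; `initvals=`
# only exists from PyMC 4.0 onward.
def initial_point_kwarg(pymc_version, init_values):
    major = int(pymc_version.split(".")[0])
    key = "initvals" if major >= 4 else "start"
    return {key: init_values}

# Usage sketch inside ART's _ensemble_model:
#   self.trace[j] = pm.sample(
#       const.n_iterations,
#       step=step,
#       **initial_point_kwarg(pm.__version__, initvals),
#       ...
#   )
```

Alternatively, installing the PyMC version that ART's call site was written against should make the keyword match without code changes.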
I am trying to use AutomatedRecommendationTool (ART), a machine-learning Automated Recommendation Tool for guiding synthetic biology. It uses PyMC3, but there are some issues with the compiler. The error is as follows:
Following is the code cell that I am trying to run:
I am using a Jupyter notebook to execute the code. System specifications: Processor: AMD Ryzen 5 4600H with Radeon Graphics, 3.00 GHz; System type: 64-bit operating system, x64-based processor; Operating system: Windows 10.
I have also installed a C++ compiler (MinGW):
This error seems to be related to Theano. I looked through Theano's GitHub issues, but they did not seem helpful.
Following are the packages that I have installed:
Versions and main components