qiskit-community / qiskit-machine-learning

Quantum Machine Learning
https://qiskit-community.github.io/qiskit-machine-learning/
Apache License 2.0

test_change_kernel and test_qsvr fail in clean stable 0.7 environment #726

Closed iyanmv closed 2 months ago

iyanmv commented 7 months ago

Environment

Qiskit Machine Learning version: 0.7 (stable)
Python version: 3.11
Operating system: Arch Linux

What is happening?

I'm trying to improve the PKGBUILD for the AUR and run the Python tests in the check() function.

The following tests fail in a clean chroot environment:

_________________________ TestQSVR.test_change_kernel __________________________

self = <test.algorithms.regressors.test_qsvr.TestQSVR testMethod=test_change_kernel>

    def test_change_kernel(self):
        """Test QSVR with QuantumKernel later"""
        qkernel = FidelityQuantumKernel(feature_map=self.feature_map)

        qsvr = QSVR()
        qsvr.quantum_kernel = qkernel
        qsvr.fit(self.sample_train, self.label_train)
        score = qsvr.score(self.sample_test, self.label_test)

>       self.assertAlmostEqual(score, 0.38359, places=4)
E       AssertionError: 0.38411965819305205 != 0.38359 within 4 places (0.0005296581930520627 difference)

test/algorithms/regressors/test_qsvr.py:71: AssertionError
______________________________ TestQSVR.test_qsvr ______________________________

self = <test.algorithms.regressors.test_qsvr.TestQSVR testMethod=test_qsvr>

    def test_qsvr(self):
        """Test QSVR"""
        qkernel = FidelityQuantumKernel(feature_map=self.feature_map)

        qsvr = QSVR(quantum_kernel=qkernel)
        qsvr.fit(self.sample_train, self.label_train)
        score = qsvr.score(self.sample_test, self.label_test)

>       self.assertAlmostEqual(score, 0.38359, places=4)
E       AssertionError: 0.38411965819305205 != 0.38359 within 4 places (0.0005296581930520627 difference)

test/algorithms/regressors/test_qsvr.py:60: AssertionError
```
test/algorithms/classifiers/test_fidelity_quantum_kernel_pegasos_qsvc.py::TestPegasosQSVC::test_save_load
Fatal Python error: Aborted

Current thread 0x00007f424fe64740 (most recent call first):
  File "/usr/lib/python3.11/pickle.py", line 578 in save
  File "/usr/lib/python3.11/site-packages/dill/_dill.py", line 412 in save
  File "/usr/lib/python3.11/pickle.py", line 902 in save_tuple
  File "/usr/lib/python3.11/pickle.py", line 560 in save
  File "/usr/lib/python3.11/site-packages/dill/_dill.py", line 412 in save
  File "/usr/lib/python3.11/pickle.py", line 717 in save_reduce
  File "/usr/lib/python3.11/pickle.py", line 603 in save
  File "/usr/lib/python3.11/site-packages/dill/_dill.py", line 412 in save
  File "/usr/lib/python3.11/pickle.py", line 1002 in _batch_setitems
  File "/usr/lib/python3.11/pickle.py", line 972 in save_dict
  File "/usr/lib/python3.11/site-packages/dill/_dill.py", line 1212 in save_module_dict
  File "/usr/lib/python3.11/pickle.py", line 560 in save
  File "/usr/lib/python3.11/site-packages/dill/_dill.py", line 412 in save
  ... (many more repeated dill/pickle save frames) ...

Extension modules: numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, ... [long list of numpy/psutil/symengine/scipy/scikit-learn extension modules elided] ..., sklearn.linear_model._sgd_fast, sklearn.linear_model._sag_fast (total: 154)

/startdir/PKGBUILD: line 38:   741 Aborted                 (core dumped) PYTHONPATH="test_dir/$_site_packages:$PYTHONPATH" pytest -v test/algorithms/classifiers/test_fidelity_quantum_kernel_pegasos_qsvc.py
```
oscar-wallis commented 4 months ago

I can confirm this is happening; we are looking into resolving these test failures.

Environment

Qiskit Machine Learning version: 0.7.1
Python version: 3.12.1
Operating system: macOS

FAIL: test_change_kernel (test.algorithms.regressors.test_qsvr.TestQSVR.test_change_kernel)
Test QSVR with QuantumKernel later
----------------------------------------------------------------------
Traceback (most recent call last):
  File "qiskit-machine-learning/test/algorithms/regressors/test_qsvr.py", line 71, in test_change_kernel
    self.assertAlmostEqual(score, 0.38359, places=4)
AssertionError: 0.38411965819305227 != 0.38359 within 4 places (0.0005296581930522848 difference)
======================================================================
FAIL: test_qsvr (test.algorithms.regressors.test_qsvr.TestQSVR.test_qsvr)
Test QSVR
----------------------------------------------------------------------
Traceback (most recent call last):
  File "qiskit-machine-learning/test/algorithms/regressors/test_qsvr.py", line 60, in test_qsvr
    self.assertAlmostEqual(score, 0.38359, places=4)
AssertionError: 0.38411965819305227 != 0.38359 within 4 places (0.0005296581930522848 difference)

----------------------------------------------------------------------
Jrbiltmore commented 4 months ago

To address Phase 1, which focuses on resolving the test failures for the QSVR class in Qiskit Machine Learning, the following code snippets are provided. These snippets include adjustments to test tolerances, review and adjustments to the quantum kernel configuration, and model parameter tuning.

### Step 1.1: Tolerance Adjustment in Tests

```python
# Adjusting tolerance in the test cases
class TestQSVR(unittest.TestCase):
    def test_change_kernel(self):
        # Assuming self.score holds the score from QSVR prediction
        self.assertAlmostEqual(score, 0.38359, places=5)  # Adjusted places to 5

    def test_qsvr(self):
        # Assuming self.score holds the score from QSVR prediction
        self.assertAlmostEqual(score, 0.38359, places=5)  # Adjusted places to 5
```

### Step 1.2: Review and Adjust Quantum Kernel Configuration

```python
from qiskit import Aer
from qiskit.circuit.library import ZZFeatureMap
from qiskit.utils import QuantumInstance
from qiskit_machine_learning.kernels import QuantumKernel

def setup_quantum_kernel():
    feature_map = ZZFeatureMap(feature_dimension=2, reps=2, entanglement='linear')
    backend = Aer.get_backend('qasm_simulator')
    quantum_instance = QuantumInstance(backend, shots=1024)
    quantum_kernel = QuantumKernel(feature_map=feature_map, quantum_instance=quantum_instance)
    return quantum_kernel

# Assuming the QSVR initialization somewhere in the tests
quantum_kernel = setup_quantum_kernel()
qsvr = QSVR(quantum_kernel=quantum_kernel)
```

### Step 1.3: Model Parameter Tuning

```python
from sklearn.model_selection import GridSearchCV

# Assuming qsvr is an instance of QSVR and X_train, y_train are the training data
parameters = {
    'C': [1, 10, 100],
    'epsilon': [0.1, 0.01, 0.001]
}
qsvr_tuned = GridSearchCV(qsvr, parameters)
qsvr_tuned.fit(X_train, y_train)
print("Best parameters found: ", qsvr_tuned.best_params_)
```

These code snippets are aimed at addressing the initial phase of resolving test failures. Adjusting the tolerance levels in tests can quickly mitigate failures due to minor discrepancies in floating-point calculations. Reviewing and adjusting the quantum kernel configuration ensures that the QSVR model is optimally set up for the given task. Finally, tuning the model parameters via grid search helps identify the best settings for improved prediction accuracy.


Jrbiltmore commented 4 months ago

Continuing with the implementation plan, we now move to Phase 2, focusing on enhancing debugging and data preprocessing capabilities within the QSVR class environment. This phase aims to improve the visibility of internal processes and ensure consistent data handling.

### Step 2.1: Implement Enhanced Debugging and Logging

```python
import logging
from typing import Optional

from sklearn.svm import SVR

from qiskit_machine_learning.algorithms import SerializableModelMixin
from qiskit_machine_learning.kernels import BaseKernel, FidelityQuantumKernel

# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')

class QSVR(SVR, SerializableModelMixin):
    def __init__(self, *, quantum_kernel: Optional[BaseKernel] = None, **kwargs):
        # Assign the kernel before handing its evaluate method to SVR
        self._quantum_kernel = quantum_kernel if quantum_kernel else FidelityQuantumKernel()
        super().__init__(kernel=self._quantum_kernel.evaluate, **kwargs)
        logging.info("QSVR instance created with quantum kernel: %s", type(self._quantum_kernel).__name__)

    def fit(self, X, y, sample_weight=None):
        logging.info("Starting fit method for QSVR.")
        super().fit(X, y, sample_weight=sample_weight)
        logging.info("QSVR model fitting completed.")

    def predict(self, X):
        logging.info("Making predictions with QSVR.")
        return super().predict(X)

    # Additional methods...
```

### Step 2.2: Standardize Data Preprocessing

```python
from sklearn.preprocessing import StandardScaler

class DataPreprocessor:
    def __init__(self):
        self.scaler = StandardScaler()

    def fit_transform(self, X_train):
        logging.info("Fitting and transforming training data.")
        return self.scaler.fit_transform(X_train)

    def transform(self, X_test):
        logging.info("Transforming test data.")
        return self.scaler.transform(X_test)

# Example of using DataPreprocessor
preprocessor = DataPreprocessor()
X_train_scaled = preprocessor.fit_transform(X_train)
X_test_scaled = preprocessor.transform(X_test)
```

These steps enhance the QSVR class by integrating a comprehensive logging mechanism that provides insights into the class's behavior during execution, which is crucial for debugging and performance analysis. Additionally, the implementation of a standardized data preprocessing approach ensures that all data inputs are consistently scaled and normalized, reducing potential sources of error and improving model performance. By systematically incorporating these improvements, the QSVR class becomes more robust, easier to debug, and more consistent in handling input data, laying a solid foundation for the subsequent phases of enhancement.

Jrbiltmore commented 4 months ago

Moving into Phase 3, we focus on long-term enhancements to the QSVR class. This phase involves implementing more significant changes to improve quantum kernel flexibility, error handling, performance optimization, documentation, and integration with quantum hardware. These steps are designed to make the QSVR more adaptable, efficient, and user-friendly.

### Step 3.1: Improve Quantum Kernel Flexibility

```python
class QuantumKernelRegistry:
    _kernels = {}

    @classmethod
    def register_kernel(cls, name, kernel_class):
        cls._kernels[name] = kernel_class

    @classmethod
    def get_kernel(cls, name, **kwargs):
        if name in cls._kernels:
            return cls._kernels[name](**kwargs)
        else:
            raise ValueError(f"Quantum kernel '{name}' is not registered.")

# Example kernel registration
QuantumKernelRegistry.register_kernel("fidelity", FidelityQuantumKernel)

# Usage in QSVR
class QSVR(SVR, SerializableModelMixin):
    def __init__(self, *, quantum_kernel_name: Optional[str] = None, **kwargs):
        if quantum_kernel_name:
            quantum_kernel = QuantumKernelRegistry.get_kernel(quantum_kernel_name)
            kwargs.update({"quantum_kernel": quantum_kernel})
        super().__init__(**kwargs)
```

### Step 3.2: Enhance Error Handling and Validation

```python
class QSVR(SVR, SerializableModelMixin):
    def __init__(self, *, quantum_kernel: Optional[BaseKernel] = None, **kwargs):
        if "kernel" in kwargs:
            raise ValueError("The 'kernel' argument is not supported. Use 'quantum_kernel' instead.")
        super().__init__(**kwargs)
```

### Step 3.3: Performance Optimization and Benchmarking

```python
# This is more of a conceptual step, requiring profiling and optimization based on specific needs
# Example pseudo-code for caching kernel evaluations
class CachedQuantumKernel(BaseKernel):
    def __init__(self, base_kernel):
        self.base_kernel = base_kernel
        self.cache = {}

    def evaluate(self, x, y):
        key = (tuple(x), tuple(y))
        if key not in self.cache:
            self.cache[key] = self.base_kernel.evaluate(x, y)
        return self.cache[key]

# Benchmarking utilities could involve comparing execution times, accuracy, and other metrics
# between quantum and classical kernels or different quantum kernel configurations.
```

### Step 3.4: Quantum Kernel Parameter Tuning

```python
# Integration with tools like scikit-learn's GridSearchCV or similar for quantum kernel parameters
# This might involve creating a wrapper or utility function that facilitates parameter tuning for quantum kernels
```

### Step 3.5: Expand Documentation and Examples

```python
# This step involves updating documentation strings and providing more comprehensive examples and tutorials
# Documentation should cover usage scenarios, parameter explanations, and best practices for working with quantum kernels
```

### Step 3.6: Facilitate Integration with Quantum Hardware

```python
# Example of specifying a quantum instance for hardware execution
from qiskit import IBMQ
from qiskit.utils import QuantumInstance

IBMQ.load_account()  # Ensure you've saved your IBM Q account credentials
provider = IBMQ.get_provider(hub='your_hub', group='your_group', project='your_project')
backend = provider.get_backend('your_quantum_device')
quantum_instance = QuantumInstance(backend=backend)
quantum_kernel = QuantumKernel(feature_map=your_feature_map, quantum_instance=quantum_instance)
qsvr = QSVR(quantum_kernel=quantum_kernel)
```

These steps collectively aim to significantly enhance the QSVR class's flexibility, performance, and user experience. Implementing a kernel registry allows for easy experimentation with various quantum kernels. Enhanced error handling prevents common configuration mistakes, while performance optimizations ensure that the class runs efficiently. Expanding documentation and facilitating integration with quantum hardware make the class more accessible and practical for a wide range of users. Each of these improvements contributes to making the QSVR a more powerful and user-friendly tool for quantum machine learning.


oscar-wallis commented 4 months ago

@adekusar-drl @woodsp-ibm have you had experience with these test failures? I ran these on the stable 0.7 branch and they still failed.

oscar-wallis commented 4 months ago

@Jrbiltmore I think your comments may be corrupted in some way; they are challenging to read.

woodsp-ibm commented 4 months ago

@oscar-wallis In CI, these pass on Ubuntu Linux, macOS, and Windows. Locally, on a different Linux, they pass for me as well; I have never been able to reproduce these failures. It seems you have, though, so maybe the test needs to be relaxed a bit: rounded to 3 decimal places the expected value would be 0.384, which seems like it would cover all cases. I am not sure why there is a difference; maybe some precision difference in a native library that gets used.
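
As a concrete illustration of that relaxation (a hypothetical edit, not code from the repository), the assertion could round to three places against 0.384, or use an explicit tolerance; the snippet below just demonstrates that both observed scores would then pass:

```python
import unittest


class RelaxedToleranceDemo(unittest.TestCase):
    """Standalone demo of the looser comparison discussed above (hypothetical)."""

    def test_relaxed_comparison(self):
        # The expected value from the test and the value observed on the failing platforms.
        for score in (0.38359, 0.38411965819305205):
            # Rounding the difference to 3 decimal places accepts both results.
            self.assertAlmostEqual(score, 0.384, places=3)
            # An explicit absolute tolerance expresses the same intent more directly.
            self.assertAlmostEqual(score, 0.38359, delta=1e-3)


if __name__ == "__main__":
    unittest.main()
```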

oscar-wallis commented 4 months ago

@woodsp-ibm That's strange; I am running these tests on an M2 Mac chip and getting these issues. If we relax the tests, don't we risk things passing the tests that really shouldn't?

woodsp-ibm commented 4 months ago

GitHub Actions only recently introduced M1 runners, and we do not have any action that tests there yet, let alone on M2.

If we relax the tests, don't we risk things passing the tests that really shouldn't?

Yes, relaxing the test condition could be a risk, but here the exact same code on a slightly different platform seems to be failing the test while presumably still working, unless you can see otherwise. If the code is assumed to be working (and it seems to be for you, aside from the small difference), either we loosen the test a little to accommodate the variance across platforms or we somehow make the expected result platform dependent. I have no idea at present exactly what characteristics cause the different result, but your test on M2 has the same values as the original post, which was done on Arch Linux and could be using an M1/M2 chip too.

We had something similar in optimization, where a test was failing for someone locally who was using a Mac M1. It was observed at the time that it might be nice to have that tested by CI; as it has just become available, I created qiskit-community/qiskit-optimization#593 over there. In searching I can find posts related to this kind of precision difference, such as https://stackoverflow.com/questions/71441137/np-float32-floating-point-differences-between-intel-macbook-and-m1

iyanmv commented 4 months ago

Yes, relaxing the test condition could be a risk, but here the exact same code on a slightly different platform seems to be failing the test while presumably still working, unless you can see otherwise. If the code is assumed to be working (and it seems to be for you, aside from the small difference), either we loosen the test a little to accommodate the variance across platforms or we somehow make the expected result platform dependent. I have no idea at present exactly what characteristics cause the different result, but your test on M2 has the same values as the original post, which was done on Arch Linux and could be using an M1/M2 chip too.

No, I'm not running on an M1/M2 chip; Arch Linux only supports x86-64 at the moment. Before opening the issue, I ran the tests on two different devices, one with an Intel i5-1135G7 and a second with an AMD Ryzen 9 7900X3D. Both fail the exact same tests I reported.

iyanmv commented 4 months ago

Also note that one of the tests is not failing because of a precision issue at all: the process crashes completely.

woodsp-ibm commented 4 months ago

@iyanmv Thanks for the info. I guess there is some other aspect of the environment, then. Here CI runs and passes these tests on the latest versions of Ubuntu, macOS, and Windows VMs, across a range of Python versions, as provisioned via GitHub Actions (those latest versions change over time). The tests pass here, and for others locally. The two tests that differ are for QSVR, which is just a simple subclass of scikit-learn's SVR that is handed the kernel built by the quantum kernel class. Perhaps the kernel matrix is different, or there is some difference in scikit-learn. As @oscar-wallis can reproduce this, maybe we can investigate that aspect further.
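
One way to narrow that down (a hypothetical diagnostic, not part of the test suite) would be to dump the fidelity kernel's Gram matrices and score scikit-learn's SVR on a precomputed kernel, so the matrices and the downstream SVR result can be compared separately across platforms; the feature map and data below are stand-ins, not the test's fixtures:

```python
import numpy as np
from sklearn.svm import SVR
from qiskit.circuit.library import ZZFeatureMap
from qiskit_machine_learning.kernels import FidelityQuantumKernel

# Stand-in regression data; the real test uses its own training/test samples.
rng = np.random.default_rng(1)
x_train, y_train = rng.uniform(0, 2 * np.pi, (10, 2)), rng.uniform(-1, 1, 10)
x_test, y_test = rng.uniform(0, 2 * np.pi, (4, 2)), rng.uniform(-1, 1, 4)

kernel = FidelityQuantumKernel(feature_map=ZZFeatureMap(feature_dimension=2))
gram_train = kernel.evaluate(x_vec=x_train)               # (10, 10) training Gram matrix
gram_test = kernel.evaluate(x_vec=x_test, y_vec=x_train)  # (4, 10) test-vs-train matrix
np.save("gram_train.npy", gram_train)                     # diff this file between machines

svr = SVR(kernel="precomputed").fit(gram_train, y_train)
print("SVR score on precomputed kernel:", svr.score(gram_test, y_test))
```

If the saved matrices match across platforms but the scores differ, the discrepancy would sit on the scikit-learn side; if the matrices already differ, it is in the kernel evaluation.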

As to the system crash yours would be the first report I have ever seen in this regard.

iyanmv commented 4 months ago

I will try to investigate the crash issue a bit more in the next few days and also check the CI pipeline you are using. I will comment if I figure something out.

oscar-wallis commented 4 months ago

Environment

Investigation Update

Hi @iyanmv, @woodsp-ibm and I investigated the failing tests on my device, where I can replicate the failures. We found the tests pass if we set enforce_psd=False in FidelityQuantumKernel, as shown below.

    def test_qsvr(self):
        """Test QSVR"""
        qkernel = FidelityQuantumKernel(feature_map=self.feature_map, enforce_psd=False)
        qsvr = QSVR(quantum_kernel=qkernel)
        qsvr.fit(self.sample_train, self.label_train)
        score = qsvr.score(self.sample_test, self.label_test)

        self.assertAlmostEqual(score, 0.38359, places=4)

    def test_change_kernel(self):
        """Test QSVR with QuantumKernel later"""
        qkernel = FidelityQuantumKernel(feature_map=self.feature_map, enforce_psd=False)

        qsvr = QSVR()
        qsvr.quantum_kernel = qkernel
        qsvr.fit(self.sample_train, self.label_train)
        score = qsvr.score(self.sample_test, self.label_test)

        self.assertAlmostEqual(score, 0.38359, places=4)

enforce_psd is an option (set to True by default) added previously to BaseKernel which, if true, will mathematically "find the closest positive semi-definite approximation to a symmetric kernel matrix. The (symmetric) matrix should always be positive semi-definite by construction, but this can be violated in case of noise, such as sampling noise." - that's lifted verbatim from the documentation. The projection is done purely with NumPy arrays and linear algebra.
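
For illustration, here is a minimal sketch of that kind of projection (assuming the usual clip-negative-eigenvalues construction; the library's exact implementation may differ):

```python
import numpy as np


def closest_psd(kernel_matrix: np.ndarray) -> np.ndarray:
    """Project a symmetric matrix onto the PSD cone by clipping negative eigenvalues."""
    eigenvalues, eigenvectors = np.linalg.eigh(kernel_matrix)
    return eigenvectors @ np.diag(np.clip(eigenvalues, 0, None)) @ eigenvectors.T


# A symmetric matrix with a tiny negative eigenvalue, as sampling noise can produce.
noisy = np.array([[1.0, 0.999], [0.999, 0.998]])
print(np.linalg.eigvalsh(noisy))               # one eigenvalue slightly below zero
print(np.linalg.eigvalsh(closest_psd(noisy)))  # all eigenvalues now >= 0 (up to rounding)
```

Even a projection this small changes the Gram matrix slightly, which would be consistent with a score shift in the fourth decimal place.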

There is a follow-on question of why enforce_psd is actually causing an issue. FidelityQuantumKernel uses qiskit.primitives.Sampler by default, and this Sampler simulates exactly by default, which means the kernel matrix should always be positive semi-definite. Additionally, if we run using qiskit_aer.primitives.Sampler with shots=None, i.e. the ideal case, no failure occurs whether enforce_psd is set to True or False. That code is shown below too.

    def test_qsvr(self):
        """Test QSVR"""
        from qiskit.algorithms.state_fidelities import ComputeUncompute
        from qiskit_aer.primitives import Sampler

        qkernel = FidelityQuantumKernel(
            fidelity=ComputeUncompute(sampler=Sampler(run_options={"shots": None})),
            feature_map=self.feature_map,
            enforce_psd=True,
        )
        qsvr = QSVR(quantum_kernel=qkernel)
        qsvr.fit(self.sample_train, self.label_train)
        score = qsvr.score(self.sample_test, self.label_test)

        self.assertAlmostEqual(score, 0.38359, places=4)

This points to qiskit.primitives.Sampler being the issue, although it may be fixed in Qiskit 1.0. I'll try to isolate the issue further and open an issue on the Qiskit repo; when I do, I'll link it here so you can follow up if you wish.
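
To isolate that further (again a hypothetical check, not something in the repo), one could build the same fidelity kernel with the reference Sampler and with the exact Aer Sampler, with enforce_psd switched off, and compare the smallest eigenvalues of the raw matrices. The ComputeUncompute import matches the snippet above; on newer installs it lives in qiskit_algorithms.state_fidelities, and the data here is a stand-in:

```python
import numpy as np
from qiskit.circuit.library import ZZFeatureMap
from qiskit.primitives import Sampler as ReferenceSampler
from qiskit_aer.primitives import Sampler as AerSampler
from qiskit.algorithms.state_fidelities import ComputeUncompute  # or qiskit_algorithms.state_fidelities
from qiskit_machine_learning.kernels import FidelityQuantumKernel

feature_map = ZZFeatureMap(feature_dimension=2, reps=2)
data = np.random.default_rng(42).uniform(0, 2 * np.pi, size=(8, 2))  # stand-in samples

for name, sampler in [
    ("reference Sampler", ReferenceSampler()),
    ("Aer Sampler, shots=None", AerSampler(run_options={"shots": None})),
]:
    kernel = FidelityQuantumKernel(
        fidelity=ComputeUncompute(sampler=sampler),
        feature_map=feature_map,
        enforce_psd=False,  # look at the raw matrix before any projection
    )
    matrix = kernel.evaluate(x_vec=data)
    print(name, "-> smallest eigenvalue:", np.linalg.eigvalsh(matrix).min())
```

A (numerically) negative smallest eigenvalue appearing only with the reference primitive would confirm that the PSD projection, and hence the score difference, is triggered there.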

Action Taken

P.S. I really liked your issue format and copied it here; I'll probably continue to use it, so thanks!

oscar-wallis commented 4 months ago

I will try to investigate the crash issue a bit more in the next few days and also check the CI pipeline you are using. I will comment if I figure something out.

We haven't been able to replicate the crash though, so I don't have anything to add to that unfortunately.

Jrbiltmore commented 4 months ago

experience am  now elevated jacking sum1 are using bilaw enforcement syszziles and the classical version of this project to filter errthing dat writ. Can I Haz yawl lookseeif I said that in plain english, an api check blocks it.

oscar-wallis commented 4 months ago

@Jrbiltmore This comment is difficult to read at best. From your previous comments, it sounds like you might have some useful insights, but either your editor is incompatible with GitHub or, as you said, some API check is blocking your message. I would like to hear what you have to say, but please use the GitHub website so you can make sure your messages are properly formatted; otherwise I will continue to hide your comments as they are unhelpful. And to answer your question, the message is in plain English characters but not plain English, unless 'syszziles' was recently added to the dictionary and I didn't see it.

iyanmv commented 4 months ago

@oscar-wallis Thanks for the detailed analysis! I still haven't had time to look into this in more detail. I think I will wait for the next release and run the tests again with qiskit 1.0.1.

oscar-wallis commented 4 months ago

@iyanmv If you were using Qiskit 1.0 for these tests when you got the test failures I was experiencing plus the additional crash, the issue could be with how you installed Qiskit 1.0. As mentioned in the Qiskit 1.0 release notes, you can't simply `pip install -U qiskit` to upgrade. You need to create a new, clean virtual environment and install Qiskit 1.0 into it directly using `pip install 'qiskit>=1'`. I assume other package managers can be used; for more detailed instructions please check the Qiskit 1.0 Installation Guide. Let me know how it goes!

iyanmv commented 4 months ago

I build each qiskit package independently in a clean, isolated environment, but I do not use pip. Instead I use the recommended `build` and `installer` tools from the Arch Linux Python Guidelines. In addition, the tests are not run against the source code directory, but against another isolated Python environment where the wheel file from the build() stage was installed. By the way, I upgraded qiskit to 1.0.1 yesterday, but I will wait for a new release of qiskit-machine-learning to test everything against that version.

iyanmv commented 2 months ago

All tests pass with 0.7.2 and qiskit 1.1.0rc1.