LHCfitNikhef / smefit_release

SMEFiT: a Standard Model Effective Field Theory fitter
GNU General Public License v3.0

Replace MultiNest with Ultranest #51

Closed · giacomomagni closed this 1 year ago

giacomomagni commented 1 year ago

This PR replaces MultiNest with dynesty (see the dynesty GitHub repository), as suggested by @LucaMantani.

This makes the installation much simpler, finally allowing a release on PyPI! Moreover, this package is newer and better maintained, both in documentation and in development.

https://dynesty.readthedocs.io/en/latest/index.html

giacomomagni commented 1 year ago

Hi @LucaMantani (and @tgiani). In the end I think we can replace MultiNest with this Python package called UltraNest:

https://johannesbuchner.github.io/UltraNest/index.html
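For readers new to the package, here is a minimal, self-contained sketch of the UltraNest API that a fit like this builds on; the parameter names, prior ranges, and the toy Gaussian likelihood below are illustrative placeholders, not the actual smefit implementation.

```python
import numpy as np
import ultranest

# Illustrative Wilson-coefficient names and flat prior ranges (placeholders)
param_names = ["OtG", "OtW"]
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])

def prior_transform(cube):
    # Map the unit hypercube onto the flat prior volume
    return lo + (hi - lo) * cube

def log_likelihood(params):
    # Toy Gaussian stand-in for -0.5 * chi2(params)
    return -0.5 * np.sum((params / 0.1) ** 2)

sampler = ultranest.ReactiveNestedSampler(param_names, log_likelihood, prior_transform)
result = sampler.run(min_num_live_points=1000)  # number of live points ("nlive")
posterior_samples = result["samples"]           # equally weighted posterior samples
```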

I have run a benchmark of the global SMEFiT2.0 NLO HO fit; it took 3.3 hours with the standard settings and 32 cores. The outcome is attached below.

coefficient_histo.pdf

I recall MultiNest was a bit faster, but given how difficult it is to install, I would move to this new package. Do you agree? If yes, I'll proceed to clean up this PR and strip out MultiNest.

PS: The benchmark was also successful at the linear level.

juanrojochacon commented 1 year ago

I think, @giacomomagni, that the benefits of a self-contained package installation largely outweigh the slightly worse performance, so I am all for it!

juanrojochacon commented 1 year ago

And I think potential new users will also appreciate this development ;)

tgiani commented 1 year ago

Hi Giacomo, thank you for this. It looks good to me and I think we should go for it.

LucaMantani commented 1 year ago

I don't know this package. Is there a reason why you moved away from dynesty? You said that it was running faster.

LucaMantani commented 1 year ago

I read in the FAQ that the author claims other packages have some biases, while this one gives more faithful uncertainties; to do so it evaluates the likelihood more often and is therefore generally slower. So I think the slower performance is indeed expected.

Did you notice any difference in the results? Are the posteriors smoother, or something like that?

giacomomagni commented 1 year ago

Well, the author is the same as MultiNest's (i.e. Johannes Buchner). The main reason to drop dynesty is that I was not able to reproduce the old results, and I think it has some intrinsically different features:

https://dynesty.readthedocs.io/en/latest/faq.html

From what I saw, I'm not sure that dynesty would be faster for a global fit with quadratics.

The only downside of UltraNest is that it still relies on MPI, so to run in parallel you will need to have conda around or an independent installation of mpicc. Given that, we can still deploy on PyPI for users who just need to run on a single core (or who already have mpicc installed).
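To make the MPI point concrete, a small hedged sketch (assuming mpi4py is installed on top of an MPI stack, i.e. the one providing mpicc): UltraNest picks up MPI through mpi4py and distributes likelihood evaluations across ranks, so the same fitting script runs serially or under mpiexec without code changes. The script name below is hypothetical.

```python
# check_mpi.py -- hypothetical helper to verify the parallel environment.
# Serial run:    python check_mpi.py
# Parallel run:  mpiexec -n 32 python check_mpi.py
# An UltraNest fitting script needs no changes: when launched under mpiexec it
# detects MPI via mpi4py and shares the live-point likelihood evaluations.
from mpi4py import MPI

comm = MPI.COMM_WORLD
print(f"rank {comm.Get_rank()} of {comm.Get_size()} ready for likelihood evaluations")
```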

> Did you notice any difference in the results? Are the posteriors smoother, or something like that?

Yes, the posteriors are a bit smoother; you can see it in the file attached above.

LucaMantani commented 1 year ago

Ok, thanks! Interesting effect in cpt: with UltraNest the two maxima are on the same footing, while with MultiNest one is considered more important than the other!

Anyway, if you are happy with UltraNest and it makes installations easier, I'm all for it!

codecov[bot] commented 1 year ago

Codecov Report

Merging #51 (ccd7218) into main (42a6cd9) will decrease coverage by 0.12%. The diff coverage is 46.66%.


@@            Coverage Diff             @@
##             main      #51      +/-   ##
==========================================
- Coverage   43.39%   43.27%   -0.12%     
==========================================
  Files          27       27              
  Lines        2187     2216      +29     
==========================================
+ Hits          949      959      +10     
- Misses       1238     1257      +19     
| Flag | Coverage Δ |
| --- | --- |
| unittests | 43.27% <46.66%> (-0.12%) ↓ |

Flags with carried forward coverage won't be shown.

| Files | Coverage Δ |
| --- | --- |
| src/smefit/optimize/__init__.py | 61.66% <77.77%> (+0.55%) ↑ |
| src/smefit/cli/__init__.py | 0.00% <0.00%> (ø) |
| src/smefit/runner.py | 46.78% <27.77%> (-0.79%) ↓ |
| src/smefit/optimize/ultranest.py | 62.37% <62.16%> (ø) |

... and 1 file with indirect coverage changes

giacomomagni commented 1 year ago

@tgiani this PR should be ready for review.

As said in the README, it will be possible to install just with pip, but not if you want to run in parallel mode. For that reason I'd keep the conda envs. MultiNest should not be needed anymore.

tgiani commented 1 year ago

Hi @giacomomagni, I'm looking at this, but the installation script does not work for me anymore. Trying to run it on stoomboot I get the following:

Package operations: 42 installs, 14 updates, 0 removals                                                                                                                                                                                                

  • Updating numpy (1.25.1 /home/conda/feedstock_root/build_artifacts/numpy_1688887056611/work -> 1.24.4): Failed

  CalledProcessError

  Command '['/data/theorie/tgiani/miniconda3/envs/smefit_ultranest/bin/python', '-m', 'pip', 'uninstall', 'numpy', '-y']' returned non-zero exit status 2.

  at /data/theorie/tgiani/miniconda3/envs/smefit_ultranest/lib/python3.10/subprocess.py:524 in run
       520│             # We don't call process.wait() as .__exit__ does that for us.
       521│             raise
       522│         retcode = process.poll()
       523│         if check and retcode:
    →  524│             raise CalledProcessError(retcode, process.args,
       525│                                      output=stdout, stderr=stderr)
       526│     return CompletedProcess(process.args, retcode, stdout, stderr)
       527│ 
       528│ 

The following error occurred when trying to handle this error:

  EnvCommandError

  Command ['/data/theorie/tgiani/miniconda3/envs/smefit_ultranest/bin/python', '-m', 'pip', 'uninstall', 'numpy', '-y'] errored with the following return code 2

  Output:
  Found existing installation: numpy 1.25.1
  Uninstalling numpy-1.25.1:
    Successfully uninstalled numpy-1.25.1
  ERROR: Exception:
  Traceback (most recent call last):
    File "/data/theorie/tgiani/miniconda3/envs/smefit_ultranest/lib/python3.10/site-packages/pip/_internal/cli/base_command.py", line 169, in exc_logging_wrapper
      status = run_func(*args)
    File "/data/theorie/tgiani/miniconda3/envs/smefit_ultranest/lib/python3.10/site-packages/pip/_internal/commands/uninstall.py", line 110, in run
      uninstall_pathset.commit()
    File "/data/theorie/tgiani/miniconda3/envs/smefit_ultranest/lib/python3.10/site-packages/pip/_internal/req/req_uninstall.py", line 432, in commit
      self._moved_paths.commit()
    File "/data/theorie/tgiani/miniconda3/envs/smefit_ultranest/lib/python3.10/site-packages/pip/_internal/req/req_uninstall.py", line 278, in commit
      save_dir.cleanup()
    File "/data/theorie/tgiani/miniconda3/envs/smefit_ultranest/lib/python3.10/site-packages/pip/_internal/utils/temp_dir.py", line 173, in cleanup
      rmtree(self._path)
    File "/data/theorie/tgiani/miniconda3/envs/smefit_ultranest/lib/python3.10/site-packages/pip/_vendor/tenacity/__init__.py", line 291, in wrapped_f
      return self(f, *args, **kw)
    File "/data/theorie/tgiani/miniconda3/envs/smefit_ultranest/lib/python3.10/site-packages/pip/_vendor/tenacity/__init__.py", line 381, in __call__
      do = self.iter(retry_state=retry_state)
    File "/data/theorie/tgiani/miniconda3/envs/smefit_ultranest/lib/python3.10/site-packages/pip/_vendor/tenacity/__init__.py", line 327, in iter
      raise retry_exc.reraise()
    File "/data/theorie/tgiani/miniconda3/envs/smefit_ultranest/lib/python3.10/site-packages/pip/_vendor/tenacity/__init__.py", line 160, in reraise
      raise self.last_attempt.result()
    File "/data/theorie/tgiani/miniconda3/envs/smefit_ultranest/lib/python3.10/concurrent/futures/_base.py", line 439, in result
      return self.__get_result()
    File "/data/theorie/tgiani/miniconda3/envs/smefit_ultranest/lib/python3.10/concurrent/futures/_base.py", line 391, in __get_result
      raise self._exception
    File "/data/theorie/tgiani/miniconda3/envs/smefit_ultranest/lib/python3.10/site-packages/pip/_vendor/tenacity/__init__.py", line 384, in __call__
      result = fn(*args, **kwargs)
    File "/data/theorie/tgiani/miniconda3/envs/smefit_ultranest/lib/python3.10/site-packages/pip/_internal/utils/misc.py", line 130, in rmtree
      shutil.rmtree(dir, ignore_errors=ignore_errors, onerror=rmtree_errorhandler)
    File "/data/theorie/tgiani/miniconda3/envs/smefit_ultranest/lib/python3.10/shutil.py", line 722, in rmtree
      _rmtree_safe_fd(fd, path, onerror)
    File "/data/theorie/tgiani/miniconda3/envs/smefit_ultranest/lib/python3.10/shutil.py", line 655, in _rmtree_safe_fd
      _rmtree_safe_fd(dirfd, fullname, onerror)
    File "/data/theorie/tgiani/miniconda3/envs/smefit_ultranest/lib/python3.10/shutil.py", line 678, in _rmtree_safe_fd
      onerror(os.unlink, fullname, sys.exc_info())
    File "/data/theorie/tgiani/miniconda3/envs/smefit_ultranest/lib/python3.10/shutil.py", line 676, in _rmtree_safe_fd
      os.unlink(entry.name, dir_fd=topfd)
  OSError: [Errno 16] Device or resource busy: '.nfs000000007670e6af004f2de3'

  at /data/theorie/tgiani/miniconda3/envs/smefit_ultranest/lib/python3.10/site-packages/poetry/utils/env.py:1525 in _run
      1521│                 output = ""
      1522│             else:
      1523│                 output = subprocess.check_output(cmd, stderr=stderr, env=env, **kwargs)
      1524│         except CalledProcessError as e:
    → 1525│             raise EnvCommandError(e, input=input_)
      1526│ 
      1527│         return decode(output)
      1528│ 
      1529│     def execute(self, bin: str, *args: str, **kwargs: Any) -> int:

  • Updating packaging (23.1 /home/conda/feedstock_root/build_artifacts/packaging_1681337016113/work -> 23.2)
  • Updating setuptools (68.0.0 -> 68.2.2)
  • Updating six (1.16.0 /home/conda/feedstock_root/build_artifacts/six_1620240208055/work -> 1.16.0)
  • Updating tomli (2.0.1 /home/conda/feedstock_root/build_artifacts/tomli_1644342247877/work -> 2.0.1)
  • Updating typing-extensions (4.7.1 /home/conda/feedstock_root/build_artifacts/typing_extensions_1688315532570/work -> 4.8.0)
Installing dependencies from lock file
Warning: poetry.lock is not consistent with pyproject.toml. You may be getting improper dependencies. Run `poetry lock [--no-update]` to fix it.

Package operations: 42 installs, 9 updates, 0 removals

  • Installing numpy (1.24.4)
  • Updating six (1.16.0 /home/conda/feedstock_root/build_artifacts/six_1620240208055/work -> 1.16.0)
  • Installing contourpy (1.1.0)
  • Installing cycler (0.11.0)
  • Installing fonttools (4.41.0)
  • Installing kiwisolver (1.4.4)
  • Updating packaging (23.2 -> 23.1)
  • Installing pillow (10.0.0)
  • Installing pyparsing (3.0.9)
  • Installing python-dateutil (2.8.2)
  • Installing asttokens (2.2.1)
  • Installing executing (1.2.0)
  • Installing matplotlib (3.7.2)
  • Installing parso (0.8.3)
  • Updating ptyprocess (0.7.0 /home/conda/feedstock_root/build_artifacts/ptyprocess_1609419310487/work/dist/ptyprocess-0.7.0-py2.py3-none-any.whl -> 0.7.0)
  • Installing pure-eval (0.2.2)
  • Installing pyrepl (0.9.0)
  • Installing pytz (2023.3)
  • Installing traitlets (5.9.0)
  • Installing wcwidth (0.2.6)
  • Installing backcall (0.2.0)
  • Updating colorama (0.4.6 /home/conda/feedstock_root/build_artifacts/colorama_1666700638685/work -> 0.4.6)
  • Installing commonmark (0.9.1)
  • Installing corner (2.2.1)
  • Installing cython (0.29.36)
  • Installing decorator (5.1.1)
  • Updating distlib (0.3.6 /home/conda/feedstock_root/build_artifacts/distlib_1668356257807/work -> 0.3.6)
  • Installing fancycompleter (0.9.1)
  • Updating filelock (3.12.2 /home/conda/feedstock_root/build_artifacts/filelock_1686612785345/work -> 3.12.2)
  • Installing jedi (0.18.2)
  • Installing matplotlib-inline (0.1.6)
  • Installing pandas (1.5.3)
  • Updating pexpect (4.8.0 /home/conda/feedstock_root/build_artifacts/pexpect_1667297516076/work -> 4.8.0)
  • Installing pickleshare (0.7.5)
  • Updating platformdirs (3.8.1 /home/conda/feedstock_root/build_artifacts/platformdirs_1688739404342/work -> 3.8.1)
  • Installing prompt-toolkit (3.0.39)
  • Installing pygments (2.15.1)
  • Installing scipy (1.10.1)
  • Installing stack-data (0.6.2)
  • Installing wmctrl (0.4)
  • Installing asv (0.4.2): Preparing...
  • Installing click (8.1.5)
  • Installing cma (3.3.0)
  • Installing devtools (0.10.0)
  • Installing ipython (8.12.2)
  • Installing pdbpp (0.10.3)
  • Installing pyyaml (5.4.1): Failed

  ChefBuildError

  Backend subprocess exited when trying to invoke get_requires_for_build_wheel

  /tmp/tmptc5jdgsi/.venv/lib/python3.10/site-packages/setuptools/config/setupcfg.py:293: _DeprecatedConfig: Deprecated config in `setup.cfg`
  !!

          ********************************************************************************
          The license_file parameter is deprecated, use license_files instead.

          By 2023-Oct-30, you need to update your project and remove deprecated calls
          or your builds will no longer be supported.

          See https://setuptools.pypa.io/en/latest/userguide/declarative_config.html for details.
          ********************************************************************************

  !!
    parsed = self.parsers.get(option_name, lambda x: x)(value)
  running egg_info
  • Installing click (8.1.5)
  • Installing cma (3.3.0)
  • Installing devtools (0.10.0)
  • Installing ipython (8.12.2)
  • Installing pdbpp (0.10.3)
  • Installing pyyaml (5.4.1): Failed

  ChefBuildError

  Backend subprocess exited when trying to invoke get_requires_for_build_wheel

  /tmp/tmptc5jdgsi/.venv/lib/python3.10/site-packages/setuptools/config/setupcfg.py:293: _DeprecatedConfig: Deprecated config in `setup.cfg`
  !!

          ********************************************************************************
          The license_file parameter is deprecated, use license_files instead.

          By 2023-Oct-30, you need to update your project and remove deprecated calls
          or your builds will no longer be supported.

          See https://setuptools.pypa.io/en/latest/userguide/declarative_config.html for details.
          ********************************************************************************

  !!
    parsed = self.parsers.get(option_name, lambda x: x)(value)
  running egg_info
  • Installing asv (0.4.2): Installing...
  • Installing click (8.1.5)
  • Installing cma (3.3.0)
  • Installing devtools (0.10.0)
  • Installing ipython (8.12.2)
  • Installing pdbpp (0.10.3)
  • Installing pyyaml (5.4.1): Failed

  ChefBuildError

  Backend subprocess exited when trying to invoke get_requires_for_build_wheel

  /tmp/tmptc5jdgsi/.venv/lib/python3.10/site-packages/setuptools/config/setupcfg.py:293: _DeprecatedConfig: Deprecated config in `setup.cfg`
  !!

          ********************************************************************************
          The license_file parameter is deprecated, use license_files instead.

          By 2023-Oct-30, you need to update your project and remove deprecated calls
          or your builds will no longer be supported.

          See https://setuptools.pypa.io/en/latest/userguide/declarative_config.html for details.
          ********************************************************************************

  !!
    parsed = self.parsers.get(option_name, lambda x: x)(value)
  running egg_info
  • Installing click (8.1.5)
  • Installing cma (3.3.0)
  • Installing devtools (0.10.0)
  • Installing ipython (8.12.2)
  • Installing pdbpp (0.10.3)
  • Installing pyyaml (5.4.1): Failed

  ChefBuildError

  Backend subprocess exited when trying to invoke get_requires_for_build_wheel

  /tmp/tmptc5jdgsi/.venv/lib/python3.10/site-packages/setuptools/config/setupcfg.py:293: _DeprecatedConfig: Deprecated config in `setup.cfg`
  !!

          ********************************************************************************
          The license_file parameter is deprecated, use license_files instead.

          By 2023-Oct-30, you need to update your project and remove deprecated calls
          or your builds will no longer be supported.

          See https://setuptools.pypa.io/en/latest/userguide/declarative_config.html for details.
          ********************************************************************************

  !!
    parsed = self.parsers.get(option_name, lambda x: x)(value)
  running egg_info
  • Installing asv (0.4.2)
  • Installing click (8.1.5)
  • Installing cma (3.3.0)
  • Installing devtools (0.10.0)
  • Installing ipython (8.12.2)
  • Installing pdbpp (0.10.3)
  • Installing pyyaml (5.4.1): Failed

  ChefBuildError

  Backend subprocess exited when trying to invoke get_requires_for_build_wheel

  /tmp/tmptc5jdgsi/.venv/lib/python3.10/site-packages/setuptools/config/setupcfg.py:293: _DeprecatedConfig: Deprecated config in `setup.cfg`
  !!

          ********************************************************************************
          The license_file parameter is deprecated, use license_files instead.

          By 2023-Oct-30, you need to update your project and remove deprecated calls
          or your builds will no longer be supported.

          See https://setuptools.pypa.io/en/latest/userguide/declarative_config.html for details.
          ********************************************************************************

  !!
    parsed = self.parsers.get(option_name, lambda x: x)(value)
  running egg_info
  writing lib3/PyYAML.egg-info/PKG-INFO
  writing dependency_links to lib3/PyYAML.egg-info/dependency_links.txt
  writing top-level names to lib3/PyYAML.egg-info/top_level.txt
  Traceback (most recent call last):
    File "/data/theorie/tgiani/miniconda3/envs/smefit_ultranest/lib/python3.10/site-packages/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
      main()
    File "/data/theorie/tgiani/miniconda3/envs/smefit_ultranest/lib/python3.10/site-packages/pyproject_hooks/_in_process/_in_process.py", line 335, in main
      json_out['return_val'] = hook(**hook_input['kwargs'])
    File "/data/theorie/tgiani/miniconda3/envs/smefit_ultranest/lib/python3.10/site-packages/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
      return hook(config_settings)
    File "/tmp/tmptc5jdgsi/.venv/lib/python3.10/site-packages/setuptools/build_meta.py", line 355, in get_requires_for_build_wheel
      return self._get_build_requires(config_settings, requirements=['wheel'])
    File "/tmp/tmptc5jdgsi/.venv/lib/python3.10/site-packages/setuptools/build_meta.py", line 325, in _get_build_requires
      self.run_setup()
    File "/tmp/tmptc5jdgsi/.venv/lib/python3.10/site-packages/setuptools/build_meta.py", line 341, in run_setup
      exec(code, locals())
    File "<string>", line 271, in <module>
    File "/tmp/tmptc5jdgsi/.venv/lib/python3.10/site-packages/setuptools/__init__.py", line 103, in setup
      return distutils.core.setup(**attrs)
    File "/tmp/tmptc5jdgsi/.venv/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 185, in setup
      return run_commands(dist)
    File "/tmp/tmptc5jdgsi/.venv/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
      dist.run_commands()
    File "/tmp/tmptc5jdgsi/.venv/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
      self.run_command(cmd)
    File "/tmp/tmptc5jdgsi/.venv/lib/python3.10/site-packages/setuptools/dist.py", line 989, in run_command
      super().run_command(command)
    File "/tmp/tmptc5jdgsi/.venv/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
      cmd_obj.run()
    File "/tmp/tmptc5jdgsi/.venv/lib/python3.10/site-packages/setuptools/command/egg_info.py", line 318, in run
      self.find_sources()
    File "/tmp/tmptc5jdgsi/.venv/lib/python3.10/site-packages/setuptools/command/egg_info.py", line 326, in find_sources
      mm.run()
    File "/tmp/tmptc5jdgsi/.venv/lib/python3.10/site-packages/setuptools/command/egg_info.py", line 548, in run
      self.add_defaults()
    File "/tmp/tmptc5jdgsi/.venv/lib/python3.10/site-packages/setuptools/command/egg_info.py", line 586, in add_defaults
      sdist.add_defaults(self)
    File "/tmp/tmptc5jdgsi/.venv/lib/python3.10/site-packages/setuptools/command/sdist.py", line 113, in add_defaults
      super().add_defaults()
    File "/tmp/tmptc5jdgsi/.venv/lib/python3.10/site-packages/setuptools/_distutils/command/sdist.py", line 251, in add_defaults
      self._add_defaults_ext()
    File "/tmp/tmptc5jdgsi/.venv/lib/python3.10/site-packages/setuptools/_distutils/command/sdist.py", line 336, in _add_defaults_ext
      self.filelist.extend(build_ext.get_source_files())
    File "<string>", line 201, in get_source_files
    File "/tmp/tmptc5jdgsi/.venv/lib/python3.10/site-packages/setuptools/_distutils/cmd.py", line 107, in __getattr__
      raise AttributeError(attr)
  AttributeError: cython_sources

  at /data/theorie/tgiani/miniconda3/envs/smefit_ultranest/lib/python3.10/site-packages/poetry/installation/chef.py:147 in _prepare
      143│ 
      144│                 error = ChefBuildError("\n\n".join(message_parts))
      145│ 
      146│             if error is not None:
    → 147│                 raise error from None
      148│ 
      149│             return path
      150│ 
      151│     def _prepare_sdist(self, archive: Path, destination: Path | None = None) -> Path:

Note: This error originates from the build backend, and is likely not a problem with poetry but with pyyaml (5.4.1) not supporting PEP 517 builds. You can verify this by running 'pip wheel --use-pep517 "pyyaml (==5.4.1)"'.

  • Installing rich (11.2.0)
  • Installing seaborn (0.11.2)
  • Installing ultranest (3.6.1): Failed

  ChefBuildError

  Backend subprocess exited when trying to invoke get_requires_for_build_wheel

  Error compiling Cython file:
  ------------------------------------------------------------
  ...
              # only consider points in the same cluster
              if clusterids[j] == clusterids[i]:
                  pair_dist = 0.0
                  for k in range(ndim):
                      pair_dist += (pts[i,k] - pts[j,k])**2
                  total_dist += pair_dist**0.5
                  ^
  ------------------------------------------------------------

  ultranest/mlfriends.pyx:180:16: Cannot assign type 'npy_double complex' to 'float_t'
  Compiling ultranest/mlfriends.pyx because it changed.
  Compiling ultranest/stepfuncs.pyx because it changed.
  [1/2] Cythonizing ultranest/mlfriends.pyx
  Traceback (most recent call last):
    File "/data/theorie/tgiani/miniconda3/envs/smefit_ultranest/lib/python3.10/site-packages/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
      main()
    File "/data/theorie/tgiani/miniconda3/envs/smefit_ultranest/lib/python3.10/site-packages/pyproject_hooks/_in_process/_in_process.py", line 335, in main
      json_out['return_val'] = hook(**hook_input['kwargs'])
    File "/data/theorie/tgiani/miniconda3/envs/smefit_ultranest/lib/python3.10/site-packages/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
      return hook(config_settings)
    File "/tmp/tmp65oyp7bm/.venv/lib/python3.10/site-packages/setuptools/build_meta.py", line 355, in get_requires_for_build_wheel
      return self._get_build_requires(config_settings, requirements=['wheel'])
    File "/tmp/tmp65oyp7bm/.venv/lib/python3.10/site-packages/setuptools/build_meta.py", line 325, in _get_build_requires
      self.run_setup()
    File "/tmp/tmp65oyp7bm/.venv/lib/python3.10/site-packages/setuptools/build_meta.py", line 507, in run_setup
      super(_BuildMetaLegacyBackend, self).run_setup(setup_script=setup_script)
    File "/tmp/tmp65oyp7bm/.venv/lib/python3.10/site-packages/setuptools/build_meta.py", line 341, in run_setup
      exec(code, locals())
    File "<string>", line 58, in <module>
    File "/tmp/tmp65oyp7bm/.venv/lib/python3.10/site-packages/Cython/Build/Dependencies.py", line 1154, in cythonize
      cythonize_one(*args)
    File "/tmp/tmp65oyp7bm/.venv/lib/python3.10/site-packages/Cython/Build/Dependencies.py", line 1321, in cythonize_one
      raise CompileError(None, pyx_file)
  Cython.Compiler.Errors.CompileError: ultranest/mlfriends.pyx

  at /data/theorie/tgiani/miniconda3/envs/smefit_ultranest/lib/python3.10/site-packages/poetry/installation/chef.py:147 in _prepare
      143│ 
      144│                 error = ChefBuildError("\n\n".join(message_parts))
      145│ 
      146│             if error is not None:
    → 147│                 raise error from None
      148│ 
      149│             return path
      150│ 
      151│     def _prepare_sdist(self, archive: Path, destination: Path | None = None) -> Path:

Note: This error originates from the build backend, and is likely not a problem with poetry but with ultranest (3.6.1) not supporting PEP 517 builds. You can verify this by running 'pip wheel --use-pep517 "ultranest (==3.6.1)"'.

  • Updating virtualenv (20.23.1 /home/conda/feedstock_root/build_artifacts/virtualenv_1687005325630/work -> 20.23.1)

Installation was successful !!

To start type:

    conda activate smefit_ultranest
    smefit -h

tgiani commented 1 year ago

Is the installation script working without any problems for you?

tgiani commented 1 year ago

some comments:

giacomomagni commented 1 year ago

Thanks @tgiani for regenerating the conda files; most likely I did something wrong when generating them myself.

tgiani commented 1 year ago

@giacomomagni Hmm, no, I think you did everything correctly, but then for some reason a different version of pyyaml was being installed following the pyproject.toml, and this was giving problems; I'm not sure. Could you maybe try to install the code both on the cluster and locally and see if it works for you?

tgiani commented 1 year ago

I've tried to run a SMEFiT2.0 fit with quadratic corrections, asking for 32 cores, but it seems much slower. When you tested this branch it was taking just slightly more than the old NS, right?

giacomomagni commented 1 year ago

> I've tried to run a SMEFiT2.0 fit with quadratic corrections, asking for 32 cores, but it seems much slower. When you tested this branch it was taking just slightly more than the old NS, right?

Yes, how much slower was it for you?

tgiani commented 1 year ago

@giacomomagni Quite a lot, I think it took something like 8 hrs. But maybe I've done something wrong. Which runcard have you used? To parallelise, it's just the same as with the old NS, no?

giacomomagni commented 1 year ago

> @giacomomagni Quite a lot, I think it took something like 8 hrs. But maybe I've done something wrong. Which runcard have you used? To parallelise, it's just the same as with the old NS, no?

Yes, it should be the same... For me it took roughly 3 hours and 20 mins with 32 cores. I used this:

```yaml
# Input YAML configurations for SMEFiT code
result_ID: test_runcard_us_ho

# absolute path where results are stored
result_path: /data/theorie/gmagni/smefit_release/results

# path to common data
data_path: /data/theorie/gmagni/smefit_database/commondata

# path to theory tables, default same as data path
theory_path: /data/theorie/gmagni/smefit_database/theory

# pQCD order (LO or NLO)
order: NLO

use_theory_covmat: True

# SMEFT Expansion Order (NHO = Lambda^-2 , HO = Lambda^-4)
use_quad: true

# Set parameter bounds to previous SCAN result
bounds: Null

# NS settings
nlive: 1000

use_t0: True

# Datasets to include
datasets:
  # TOP QUARK PRODUCTION
  # ttbar
  - ATLAS_tt_8TeV_ljets_Mtt
  - ATLAS_tt_8TeV_dilep_Mtt
  - CMS_tt_8TeV_ljets_Ytt
  - CMS_tt2D_8TeV_dilep_MttYtt
  - CMS_tt_13TeV_ljets_2015_Mtt
  - CMS_tt_13TeV_dilep_2015_Mtt
  - CMS_tt_13TeV_ljets_2016_Mtt
  - CMS_tt_13TeV_dilep_2016_Mtt
  - ATLAS_tt_13TeV_ljets_2016_Mtt
  - ATLAS_CMS_tt_AC_8TeV
  - ATLAS_tt_AC_13TeV
  # ttbar asymm and helicity frac
  - ATLAS_WhelF_8TeV
  - CMS_WhelF_8TeV
  # ttbb
  - CMS_ttbb_13TeV
  - CMS_ttbb_13TeV_2016
  - ATLAS_ttbb_13TeV_2016
  # tttt
  - CMS_tttt_13TeV
  - CMS_tttt_13TeV_run2
  - ATLAS_tttt_13TeV_run2
  # ttZ
  - CMS_ttZ_8TeV
  - CMS_ttZ_13TeV
  - CMS_ttZ_13TeV_pTZ
  - ATLAS_ttZ_8TeV
  - ATLAS_ttZ_13TeV
  - ATLAS_ttZ_13TeV_2016
  # ttW
  - CMS_ttW_8TeV
  - CMS_ttW_13TeV
  - ATLAS_ttW_8TeV
  - ATLAS_ttW_13TeV
  - ATLAS_ttW_13TeV_2016
  # Single top
  - CMS_t_tch_8TeV_inc
  - ATLAS_t_tch_8TeV
  - CMS_t_tch_8TeV_diff_Yt
  - CMS_t_sch_8TeV
  - ATLAS_t_sch_8TeV
  - ATLAS_t_tch_13TeV
  - CMS_t_tch_13TeV_inc
  - CMS_t_tch_13TeV_diff_Yt
  - CMS_t_tch_13TeV_2016_diff_Yt
  # tW
  - ATLAS_tW_8TeV_inc
  - ATLAS_tW_slep_8TeV_inc
  - CMS_tW_8TeV_inc
  - ATLAS_tW_13TeV_inc
  - CMS_tW_13TeV_inc
  # tZ
  - ATLAS_tZ_13TeV_inc
  - ATLAS_tZ_13TeV_run2_inc
  - CMS_tZ_13TeV_inc
  - CMS_tZ_13TeV_2016_inc
  # HIGGS PRODUCTION
  # ATLAS & CMS Combined Run 1 Higgs Measurements
  - ATLAS_CMS_SSinc_RunI
  - ATLAS_SSinc_RunII
  - CMS_SSinc_RunII
  # ATLAS & CMS Run II Higgs Differential
  - CMS_H_13TeV_2015_pTH
  - ATLAS_H_13TeV_2015_pTH
  # ATLAS & CMS STXS
  - ATLAS_WH_Hbb_13TeV
  - ATLAS_ZH_Hbb_13TeV
  - ATLAS_ggF_ZZ_13TeV
  - CMS_ggF_aa_13TeV
  #- CMS_ggF_tautau_13TeV
  # DIBOSON DATA
  - ATLAS_WW_13TeV_2016_memu
  - ATLAS_WZ_13TeV_2016_mTWZ
  #- CMS_WZ_13TeV_2016_mWZ
  - CMS_WZ_13TeV_2016_pTZ
  # LEP
  - LEP_eeWW_182GeV
  - LEP_eeWW_189GeV
  - LEP_eeWW_198GeV
  - LEP_eeWW_206GeV

# Coefficients to fit
coefficients:
  OpQM:
    min: -10
    max: 10
  O3pQ3:
    min: -2.0
    max: 2.0
  Opt:
    min: -25.0
    max: 15.0
  OtW:
    min: -1.0
    max: 1.0
  OtG:
    min: -1.0
    max: 1.0
  Otp:
    min: -10.0
    max: 5.0
  OtZ:
    min: -10
    max: 10
  OQQ1:
    min: -10
    max: 10.0
  OQQ8:
    min: -20.0
    max: 20.0
  OQt1:
    min: -10.0
    max: 10.0
  OQt8:
    min: -20.0
    max: 20.0
  Ott1:
    min: -10.0
    max: 10.0
  O81qq:
    min: -5
    max: 5
  O11qq:
    min: -5
    max: 5
  O83qq:
    min: -5
    max: 5
  O13qq:
    min: -5
    max: 5
  O8qt:
    min: -5
    max: 5
  O1qt:
    min: -5
    max: 5
  O8ut:
    min: -5
    max: 5
  O1ut:
    min: -5
    max: 5
  O8qu:
    min: -5
    max: 5
  O1qu:
    min: -5
    max: 5
  O8dt:
    min: -5
    max: 5
  O1dt:
    min: -5
    max: 5
  O8qd:
    min: -5
    max: 5
  O1qd:
    min: -5
    max: 5
  OpG:
    min: -0.2
    max: 0.075
  OpB:
    min: -0.5
    max: 0.35
  OpW:
    min: -0.4
    max: 0.5
  OpWB:
    min: -0.3
    max: 0.5
  Opd:
    min: -5.0
    max: 5.0
  OpD:
    min: -1.0
    max: 1.0
  OpqMi:
    constrain:
      - OpD: 0.9248
      - OpWB: 1.8347
    min: -30
    max: 30
  O3pq:
    constrain:
      - OpD: -0.8415
      - OpWB: -1.8347
    min: -0.5
    max: 1.0
  Opui:
    constrain:
      OpD: 0.3333
    min: -0.5
    max: 1.0
  Opdi:
    constrain:
      OpD: -0.1667
    min: -0.5
    max: 1.0
  Ocp:
    min: -0.44
    max: 0.81
  Obp:
    min: -0.6
    max: 0.22
  Opl1:
    constrain:
      OpD: -0.25
    min: -0.5
    max: 1.0
  Opl2:
    constrain:
      OpD: -0.25
    min: -0.5
    max: 1.0
  Opl3:
    constrain:
      OpD: -0.25
    min: -0.5
    max: 1.0
  O3pl1:
    constrain:
      - OpD: -0.8415
      - OpWB: -1.8347
    min: -30.0
    max: 60.0
  O3pl2:
    constrain:
      - OpD: -0.8415
      - OpWB: -1.8347
    min: -30.0
    max: 60.0
  O3pl3:
    constrain:
      - OpD: -0.8415
      - OpWB: -1.8347
    min: -0.5
    max: 1.0
  Ope:
    constrain:
      OpD: -0.5
    min: -0.5
    max: 1.0
  Opmu:
    constrain:
      OpD: -0.5
    min: -0.5
    max: 1.0
  Opta:
    constrain:
      OpD: -0.5
    min: -0.5
    max: 1.0
  Otap:
    min: -0.28
    max: 0.72
  Oll:
    constrain: true
    value: 0.0
    min: -0.5
    max: 1.0
  OWWW:
    min: -0.85
    max: 0.35
  OW:
    constrain:
      - OpD: -5.1754
      - OpWB: -11.2835
    min: -0.8
    max: 0.25
  OB:
    constrain:
      OpD: 2.8209
    value: null
    min: -0.85
    max: 0.3

rot_to_fit_basis: null
```
tgiani commented 1 year ago

Ok, thank you, let me try again.

tgiani commented 1 year ago

Ok, I've reinstalled everything and tried again, and now with the runcard you gave it's taking around 3 hrs and the results look fine compared with the old NS.

tgiani commented 1 year ago

@giacomomagni So I'm happy to merge if you're fine with that.

jacoterh commented 11 months ago

Hi @giacomomagni, what settings did you use in UltraNest to obtain those smooth histograms? I am getting very sharp peaks with the same plotting settings, see attached: coefficient_histo.pdf. My settings are:

nlive: 400
lepsilon: 0.05
target_evidence_unc: 0.5
target_post_unc: 0.5
frac_remain: 0.01
store_raw: false

giacomomagni commented 11 months ago

Maybe try to use a higher nlive: 500 or nlive: 1000.

Here are some more details about each parameter, in case it helps.
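In case it helps to see them side by side, here is a guess at how these runcard keys line up with the documented arguments of ReactiveNestedSampler.run(); the name mapping is my assumption rather than something read off the smefit source, and the one-parameter likelihood is a toy.

```python
import numpy as np
import ultranest

# Settings with the same names as in the runcards discussed above
settings = {
    "nlive": 1000,
    "lepsilon": 0.05,
    "target_evidence_unc": 0.5,
    "target_post_unc": 0.5,
    "frac_remain": 0.01,
}

sampler = ultranest.ReactiveNestedSampler(
    ["c"],
    lambda p: -0.5 * np.sum((p / 0.1) ** 2),  # toy log-likelihood
    lambda cube: 2.0 * cube - 1.0,            # flat prior on [-1, 1]
)
result = sampler.run(
    min_num_live_points=settings["nlive"],  # number of live points
    Lepsilon=settings["lepsilon"],          # likelihood tolerance
    dlogz=settings["target_evidence_unc"],  # target evidence uncertainty
    dKL=settings["target_post_unc"],        # target posterior uncertainty
    frac_remain=settings["frac_remain"],    # stop when this evidence fraction remains
)
```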

jacoterh commented 11 months ago

Thanks @giacomomagni. The point, it seems, is that I actually get too many samples, around 70k now with nlive: 500. I can adjust the way the histograms are plotted, but I am just curious which settings you used at the time; that might be easier, do you remember? I expect the option frac_remain might be higher in your setup - I tried with 0.5, but perhaps that's even too low.

giacomomagni commented 11 months ago

> Thanks @giacomomagni. The point, it seems, is that I actually get too many samples, around 70k now with nlive: 500. I can adjust the way the histograms are plotted, but I am just curious which settings you used at the time; that might be easier, do you remember? I expect the option frac_remain might be higher in your setup - I tried with 0.5, but perhaps that's even too low.

I used the runcard posted above (quoted in the reply to Tommaso, linked below), but the settings look like yours except for nlive.

https://github.com/LHCfitNikhef/smefit_release/pull/51#issuecomment-1766596523
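On the plotting side, a small hedged sketch of one way to adjust a histogram when the posterior sample grows to tens of thousands of points; the synthetic array below merely stands in for one coefficient's column of UltraNest's equally weighted samples (result["samples"]).

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for ~70k equally weighted posterior samples of one coefficient
samples = np.random.default_rng(0).normal(0.0, 0.1, size=70_000)

# One knob on the plotting side: choose the number of bins relative to the
# sample size (or switch to a KDE) to trade resolution against smoothness.
nbins = max(30, int(np.sqrt(samples.size) / 4))
plt.hist(samples, bins=nbins, density=True, histtype="step")
plt.xlabel("coefficient value")
plt.ylabel("posterior density")
plt.savefig("coefficient_histo_smooth.png")
```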