SMTG-Bham / doped

doped is a Python software for the generation, pre-/post-processing and analysis of defect supercell calculations, implementing the defect simulation workflow in an efficient, reproducible, user-friendly yet powerful and fully-customisable manner.
https://doped.readthedocs.io
MIT License

remake doped x py-sc-fermi #46

Open alexsquires opened 10 months ago

alexsquires commented 10 months ago
kavanase commented 8 months ago

Looking good @alexsquires. I like the mix-in/abstract class approach.

The _interpolate_chempots() function is quite nice, and I think will likely be one of the most-used features from these additions. From what we were discussing before in the #doped Slack, would it be possible to also integrate a grid-scanning approach too? As in, the current function linearly interpolates between two points (chem pot limits), which is nice and v useful in many cases, but if for instance we have a 3D chemical potential space, our min/max of the property of interest could also lie somewhere in the middle of this space, rather than along any linear path between two vertices/limits.

Ofc the user could iterate over all possible vertex/limit combinations if they knew what they were doing, which would be a good stab at this and get you most of the way there I imagine (kind of like a 'band structure path' approach), but still is only covering certain 'high-symmetry' paths in chem pot space. So if a grid approach was also possible (where e.g. the user can just set the chempot spacing in eV, or total number of grid points), that would also be v nice I think – should be doable with some of the scipy grid space interpolation functions I think? And/or pymatgen chemical potential diagram methods maybe? If it was possible to implement relatively painlessly, could we add these two options? ☝️ 🙏
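
A minimal sketch of the kind of grid scan being suggested here, assuming the chemical potential limits are available as plain `{element: chempot}` dicts and using `scipy.spatial.Delaunay` for the in-hull test (all function and variable names below are illustrative, not the eventual doped API):

```python
import numpy as np
from scipy.spatial import Delaunay


def grid_scan_chempots(vertices: list[dict], n_points: int = 50) -> np.ndarray:
    """Regular grid of chemical potential points within the region spanned
    by the given vertices (limits), for systems with 3+ elements.

    One element is treated as the dependent variable (fixed by the others),
    so the grid and hull test live in the remaining (N - 1) dimensions.
    """
    elements = sorted(vertices[0])
    independent = elements[:-1]  # treat the last element as the dependent variable
    verts = np.array([[v[el] for el in independent] for v in vertices])

    # n_points along each independent chemical potential axis
    axes = [np.linspace(col.min(), col.max(), n_points) for col in verts.T]
    grid = np.stack(np.meshgrid(*axes), axis=-1).reshape(-1, len(independent))

    # keep only the grid points inside the convex hull of the vertices
    hull = Delaunay(verts)
    return grid[hull.find_simplex(grid) >= 0]
```

The property of interest (e.g. Fermi level or a defect concentration) could then be evaluated at each returned point, so its min/max can be located anywhere in the interior of the stability region, not just along vertex-to-vertex paths.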

Sorry to be piling on the requests, but I guess the other main use case I'd imagine for this part of the code would be for the more complex defect thermodynamics analysis that py-sc-fermi allows, where you want to fix certain defect/species etc concentrations and perform some constrained solutions. Is it possible to integrate this to the fermi_solver code?

Also just fyi, about your earlier _format_defect_name() question, yes it is actually quite externally useful and I'll make it a public function in the next (minor) release.

alexsquires commented 8 months ago

Yeah, dw this is all on the list, should be ready to go by the end of the week. I sort of trashed everything to start again (for about the 6th time now), and am just adding everything back in piecewise to make sure it's all consistent with the latest thermodynamics object.

alexsquires commented 8 months ago

@adair-nicolson , could you re-parse your Cu2SiSe3 data now that everything is relatively final and I can add the grid searching stuff back in?

alexsquires commented 8 months ago

@kavanase proposing this as the full functionality - do you want to take a look at the example ipynb while I finish off the tests, before I get too comfortable?

kavanase commented 8 months ago

@alexsquires sorry for the delay in getting to this! Busy time...

Looking very very nice, thanks very much for your work with this! 🙌

Some small queries from me:

A general question: (will also look into the code in more detail myself)

kavanase commented 8 months ago

Fyi, the py-sc-fermi interface tutorial is now accessible here: https://doped.readthedocs.io/en/dopey_fermi/py_sc_fermi_interface_tutorial.html

kavanase commented 7 months ago

@alexsquires just to flag, it seems like there are some unused files included here in the CdTe examples directory. Before merging can we check that only the ones needed are included, to avoid bloating the repo? Also just fyi, I removed CdTe_Thesis_pyscfermi.ipynb, data/dos_vasprun.xml and vasprun_nsp.xml here as they weren't being used, but if they were intended to be used for something else then please revert!

kavanase commented 7 months ago

Hi @alexsquires! Some notes on these changes from going through in more depth:

alexsquires commented 7 months ago

todo:

kavanase commented 7 months ago

@alexsquires are you still working away on this or do you want me to look over? Just checking!

alexsquires commented 7 months ago

Mostly there, just checks and warning suppression to go

alexsquires commented 5 months ago

@kavanase has something changed with the chemical potentials? The Cu2SiSe3 example no longer has limits wrt the elements, but CdTe does?

kavanase commented 5 months ago

@alexsquires I'm not totally sure what you're referring to here. I checked the chempots json files in the examples folders (examples/CdTe & examples/Cu2SiSe3), and they both seem to have the "limits_wrt_el_refs" subdict present:

CdTe:

[screenshot of the CdTe chempots JSON, showing the "limits_wrt_el_refs" subdict]

Cu2SiSe3:

[screenshot of the Cu2SiSe3 chempots JSON, showing the same subdict]
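
For reference, the doped-format chempots dicts being checked above look schematically like this (made-up values; only the keys discussed in this thread are shown, and the limit names are illustrative):

```python
cdte_chempots = {
    "limits": {  # absolute (DFT) chemical potentials at each limit
        "CdTe-Cd": {"Cd": -0.9, "Te": -5.3},
        "CdTe-Te": {"Cd": -2.1, "Te": -4.1},
    },
    "limits_wrt_el_refs": {  # formal chemical potentials, relative to the elemental references
        "CdTe-Cd": {"Cd": 0.0, "Te": -1.2},
        "CdTe-Te": {"Cd": -1.2, "Te": 0.0},
    },
}
```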

kavanase commented 5 months ago

Also just to mention from looking through there, for consistency and reducing user confusion, it would be good to use the same chempots/el_refs/limit inputs format, nomenclature (chempots vs chemical_potentials) and handling as in the DefectThermodynamics methods. Should just be a copy and paste job to use them 🙏

e.g. from thermodynamics.py:

    def get_equilibrium_fermi_level(
        self,
        bulk_dos_vr: Union[str, Vasprun, FermiDos],
        chempots: Optional[dict] = None,
        limit: Optional[str] = None,
        el_refs: Optional[dict] = None,
        temperature: float = 300,
        return_concs: bool = False,
        effective_dopant_concentration: Optional[float] = None,
    ) -> Union[float, tuple[float, float, float]]:
        r"""
        ...
            chempots (dict):
                Dictionary of chemical potentials to use for calculating the defect
                formation energies (and thus concentrations and Fermi level).
                If ``None`` (default), will use ``self.chempots`` (= 0 for all chemical
                potentials by default).
                This can have the form of ``{"limits": [{'limit': [chempot_dict]}]}``
                (the format generated by ``doped``\'s chemical potential parsing
                functions (see tutorials)) and specific limits (chemical potential
                limits) can then be chosen using ``limit``.

                Alternatively this can be a dictionary of chemical potentials for a
                single limit (limit), in the format: ``{element symbol: chemical potential}``.
                If manually specifying chemical potentials this way, you can set the
                ``el_refs`` option with the DFT reference energies of the elemental phases,
                in which case it is the formal chemical potentials (i.e. relative to the
                elemental references) that should be given here, otherwise the absolute
                (DFT) chemical potentials should be given.
            limit (str):
                The chemical potential limit for which to
                determine the equilibrium Fermi level. Can be either:

                - ``None``, if ``chempots`` corresponds to a single chemical potential
                  limit - otherwise will use the first chemical potential limit in the
                  ``chempots`` dict.
                - ``"X-rich"/"X-poor"`` where X is an element in the system, in which
                  case the most X-rich/poor limit will be used (e.g. "Li-rich").
                - A key in the ``(self.)chempots["limits"]`` dictionary.

                The latter two options can only be used if ``chempots`` is in the
                ``doped`` format (see chemical potentials tutorial).
                (Default: None)
            el_refs (dict):
                Dictionary of elemental reference energies for the chemical potentials
                in the format:
                ``{element symbol: reference energy}`` (to determine the formal chemical
                potentials, when ``chempots`` has been manually specified as
                ``{element symbol: chemical potential}``). Unnecessary if ``chempots`` is
                provided/present in format generated by ``doped`` (see tutorials).
                (Default: None)
            ...
        """
        fermi_dos = self._parse_fermi_dos(bulk_dos_vr)
        chempots, el_refs = self._get_chempots(
            chempots, el_refs
        )  # returns self.chempots/self.el_refs if chempots is None
        ...
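
As a usage sketch of the above (assuming `defect_thermo` is an existing `DefectThermodynamics` object, `cdte_chempots` is a doped-format chempots dict, and all paths/values are placeholders):

```python
# doped-format chempots dict, choosing a limit by name:
fermi_level = defect_thermo.get_equilibrium_fermi_level(
    bulk_dos_vr="CdTe_bulk/vasprun.xml",
    chempots=cdte_chempots,  # {"limits": {...}, "limits_wrt_el_refs": {...}, ...}
    limit="Te-rich",
)

# or manually specified formal chemical potentials plus elemental reference energies:
fermi_level, e_conc, h_conc = defect_thermo.get_equilibrium_fermi_level(
    bulk_dos_vr="CdTe_bulk/vasprun.xml",
    chempots={"Cd": -1.2, "Te": 0.0},  # formal chempots (placeholder values)
    el_refs={"Cd": -0.9, "Te": -4.1},  # elemental reference energies (placeholder values)
    return_concs=True,
)
```
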
alexsquires commented 3 months ago

For making the fermi_solver chempot interface consistent with doped.thermodynamics, do you mind if the chempot handling methods in thermodynamics are no longer private methods?

kavanase commented 3 months ago

Yeah sure! Not an issue at all

alexsquires commented 3 months ago

I've made sure all the chempots labelling is consistent and added the option to specify chemical potentials as a limit, rather than just a dictionary. It's a bit different from the DefectThermodynamics object, because I've made doped-formatted chempots a required attribute of the FermiSolvers. Any comments before I finalise it?

kavanase commented 3 months ago

@alexsquires this looks really great, thanks for pushing forward with it! I've had time to go through the code now, and made some small fixes on the way.

Some last comments from me:

[screenshot]

Some minor points:

Not including it in the tutorial as it's irrelevant, but nice to plot the chempot scanning results here alongside the plot in the SI of Adair's paper:

[screenshot]

alexsquires commented 3 months ago

> Just curious, is there a reason why joblib.Parallel is used rather than multiprocessing?

Annoyingly, I don't remember the specifics. Serves me right for letting this drag on so long. It's to do with passing defect system objects to multiprocessing. I played around with different ways to do it and this is the least ugly I could come up with.
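
For context, the kind of joblib pattern being referred to looks roughly like this (a generic sketch with placeholder names, not the actual fermi_solver code):

```python
from joblib import Parallel, delayed


def solve_at_point(chempots: dict) -> dict:
    """Stand-in for the real per-chempot-point Fermi solve."""
    return {"chempots": chempots, "fermi_level": 0.0}


chempot_points = [{"Cd": -0.5 * i, "Te": -1.0 + 0.5 * i} for i in range(4)]

# joblib's default loky backend is generally more forgiving than stdlib
# multiprocessing when shipping functions and awkward-to-pickle objects
# (like defect system objects) to worker processes.
results = Parallel(n_jobs=2)(
    delayed(solve_at_point)(chempots) for chempots in chempot_points
)
```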

> Somewhat related; have you done any profiling/speed tests to see how much this speeds up these calculations? Would be curious to see! (On my to-do list anyway)

Not extensively. Re this and the next point, I noticed that the defaults in doped were processes != 1. I was leaving this hanging until some profiling was done. Once you get down to it on your to-do list, maybe we can come up with a sensible setting?

> For the chempot scanning functions, setting the dependent variable is mostly unnecessary/inconsequential right? Like the output is basically the same, but it might just affect the grid placement somewhat, as they're just linearly related? So best to have this as an optional argument, and if not set just pick the last element in the formula or something like that?

Just binned it, good call.

> On a related point, I don't totally get what n_points is exactly, and it isn't really explained by the docstrings. E.g. when I set it to 50 for a 2D grid, it returns much more than 50x50 = 2500 datapoints (but then also in the output plot it looks like less than this many datapoints?). Would be good to clarify this in the docstrings (and maybe tutorial) of how this sets the grid to be scanned over.

n_points is the number of points generated along each chemical potential axis on a regular grid; each point is then assessed as being either inside or outside the hull, so the number of points on the plot will always be less than n_points * n_points. The reason the results dataframe is so long is that it is in long format, because that makes plotting easier (especially with non-matplotlib packages like seaborn) and long-form data is generally more machine readable. So its length is the number of grid points within the hull * the number of defect species (+ electrons and holes) (see the illustration below). I will make sure this is better reflected in the docs.
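
To make the row counting above concrete (purely illustrative numbers and species names):

```python
import pandas as pd

n_points = 50     # grid points requested along each chemical potential axis
n_in_hull = 1800  # of the 50 x 50 = 2500 candidate points, only those inside the hull survive
species = ["v_Cd", "v_Te", "Cd_i", "Te_i", "electrons", "holes"]

# long format: one row per (grid point, species) pair
df = pd.DataFrame(
    [
        {"point": p, "species": s, "concentration": 0.0}
        for p in range(n_in_hull)
        for s in species
    ]
)
print(len(df))  # 1800 * 6 = 10800 rows, even though only 1800 points appear on the plot
```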

> min_max_X bug.

I think I fixed this by always including the vertices of the grid in any search, rather than the in-hull points in the interpolation. Which is just a good thing to have anyway.

Related to this, and the tests, do you have chempot data for the non CdTe+Cu2SiSe3 examples? Any help with testing is gratefully received, and then I can make sure the new search works for all the systems.

I will work on the minor points in the meantime.

kavanase commented 3 months ago

Ok cool! Yeah with multiprocessing the shared memory handling / 'stateful' calculations / pickling can be annoying issues. Makes sense.

And yep for processes the defaults in doped depend on the calculation task. Typically tries to set processes to num_cpus - 1 when the workload is estimated to be sufficiently substantial that multiprocessing will give a significant speedup.

All sounds good!

For the chempots Q, yes there are also chempots for V2O5, MgO and Sb2O5 (see here: https://github.com/SMTG-Bham/doped/blob/main/tests/test_thermodynamics.py#L159), along with their DefectThermodynamics objects, but all binaries... I think there might have been an example chempots file for YTOS previously, but not anymore. Will search for this. Failing that, if you need it for a more complicated system, could ask Jiayi for her LNMO data? (Checked the publications and don't think it was shared online). Or could just use the MP data to generate some test chempots for a YTOS/others? Or if useful, using your LiNiO2 data?

Edit: Jiayi's LNMO chempot data could be useful regardless, so will ask her for it

kavanase commented 3 months ago

There is this YTOS phase diagram object in the git history (though not actually sure if it's parsed hybrid data or just taken from the Materials Project). Adair was following up that defects project before, so should have the parsed chempots for it? (https://github.com/SMTG-Bham/doped/blob/2.3.0/examples/competing_phases/ytos_phase_diagram.json)

alexsquires commented 3 months ago

Docstrings and typing updated to be consistent with doped, and the example notebook is now populated with some papers that people can go and look at for reference.

kavanase commented 2 months ago

@alexsquires going through now. One question, would it make sense to allow initialisation of FermiSolverDoped/PyScFermi objects using FermiSolver(..., backend="doped/py-sc-fermi") to make it a bit cleaner for the user? Maybe defaulting to py-sc-fermi if installed, otherwise doped. If you agree, I can implement as it should just be a small update
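
A minimal sketch of the kind of backend selection being proposed (the class and attribute names here are illustrative placeholders, not the final implementation):

```python
class _DopedBackend:  # placeholder stand-ins for the two solver implementations
    def __init__(self, defect_thermodynamics, **kwargs):
        self.defect_thermodynamics = defect_thermodynamics


class _PyScFermiBackend:
    def __init__(self, defect_thermodynamics, **kwargs):
        self.defect_thermodynamics = defect_thermodynamics


class FermiSolver:
    """Single user-facing class, dispatching on a ``backend`` keyword."""

    def __init__(self, defect_thermodynamics, backend: str = "doped", **kwargs):
        if backend == "doped":
            self._backend = _DopedBackend(defect_thermodynamics, **kwargs)
        elif backend in ("py-sc-fermi", "py_sc_fermi"):
            try:
                import py_sc_fermi  # noqa: F401 -- optional dependency check
            except ImportError as exc:
                raise ImportError(
                    "backend='py-sc-fermi' requires py-sc-fermi to be installed"
                ) from exc
            self._backend = _PyScFermiBackend(defect_thermodynamics, **kwargs)
        else:
            raise ValueError(f"Unrecognised backend: {backend!r}")
```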

alexsquires commented 2 months ago

Such a good idea that I'm embarrassed I didn't think of it before

kavanase commented 2 months ago

Ok cool will implement! Also to check, is there any point keeping multiplicity_scaling as an optional input? Like the default handling (with a minor update to allow some negligible numerical differences) should basically always be the desired usage?

        if multiplicity_scaling is None:
            ms = self.defect_thermodynamics.defect_entries[0].defect.structure.volume / self.volume
            # check multiplicity scaling is almost an integer
            if not np.isclose(ms, round(ms)):
                raise ValueError(
                    "The multiplicity scaling factor is not almost an integer. "
                    "Please specify a multiplicity scaling factor."
                )
            self.multiplicity_scaling = round(ms)
        else:
            self.multiplicity_scaling = multiplicity_scaling
kavanase commented 2 months ago

Also did you check with the latest code that the tutorial all works fine?

alexsquires commented 2 months ago

> Ok cool will implement! Also to check, is there any point keeping multiplicity_scaling as an optional input? Like the default handling (with a minor update to allow some negligible numerical differences) should basically always be the desired usage?

        if multiplicity_scaling is None:
            ms = self.defect_thermodynamics.defect_entries[0].defect.structure.volume / self.volume
            # check multiplicity scaling is almost an integer
            if not np.isclose(ms, round(ms)):
                raise ValueError(
                    "The multiplicity scaling factor is not almost an integer. "
                    "Please specify a multiplicity scaling factor."
                )
            self.multiplicity_scaling = round(ms)
        else:
            self.multiplicity_scaling = multiplicity_scaling

Yeah, we can get rid of it. I suppose it is a hangover from my very "what if?" approach to py-sc-fermi, where most of the fun was just playing around with the defect system and seeing how everything changed, but if people want that, they can go to py-sc-fermi 🤓

On the question of having FermiSolver with a backend option, I think the default should just be doped in all cases. This is doped after all, and it is marginally faster for the thing I reckon 90% of users will want, which is the convenience functions with quench+anneal.

Everything was working locally when I pushed, but I was in a bit of a frenzy, have you found something that doesn't work?

kavanase commented 2 months ago

Ok! And yeah I guess if it's an advanced enough user that they were playing around with things like that, they could also achieve the same effects other ways (e.g. by just adding an entry to DefectEntry.degeneracy_factors etc).

For using the doped backend as default, ok makes sense! (Especially if it's currently a bit quicker). I think it should be fairly easy to add a convenience method that converts from the doped backend to the py-sc-fermi one, so I might try to add this so that the doped backend can be used by default and then switches to the py-sc-fermi one if needed (e.g. for free_defects), if it's installed. I guess the free_defects behaviour could also be added to the doped Fermi solving functions too, which in theory should be fairly straightforward.

For the last bit about something that doesn't work, it was just that I moved this _handle_chempots() line a couple of lines up (because the chempots output was unused, so I assumed it was meant to come before where chempots is used), but this broke some things, so the line is just removed now and all seems to be working as expected. https://github.com/SMTG-Bham/doped/blob/b8cf01515a0e3e64fb4899b9d986634696b68626/doped/interface/fermi_solver.py#L1451

kavanase commented 2 months ago

Added some of the changes mentioned above now.

I played around with joblib/multiprocessing for this, with some profiling (though not so easy). It seems that thread locking and synchronization are causing the bottlenecks and slow speeds when attempting parallelization, which I think implies the need for shared resources, but this shouldn't be the case afaik... Anyway it's pretty fast now as is. If someone needs to run it faster they can ask about multiprocessing or try it themselves.

Question: For scan_chempots, it's meant to be a list of chemical potentials that are then scanned over yeah? I guess we expect this to be a rare use case, with interpolate_chempots or scan_chemical_potential_grid usually preferred by users. Either way, the current code is broken as it doesn't loop over the input list. Just checking there's still a use case for this function before I go trying to fix it?
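
The intended looping behaviour for scan_chempots would presumably be something like this (a sketch with a hypothetical single-limit helper, not the actual method):

```python
import pandas as pd


def scan_chempots(chempots_list: list[dict], solve_one) -> pd.DataFrame:
    """Solve at each set of chemical potentials in turn and concatenate the
    long-format results, tagging each block with the chempots it came from.

    ``solve_one`` is a placeholder for whatever single-limit solve returns a
    results DataFrame.
    """
    frames = []
    for chempots in chempots_list:
        df = solve_one(chempots)
        df = df.assign(**{f"mu_{el}": mu for el, mu in chempots.items()})
        frames.append(df)
    return pd.concat(frames, ignore_index=True)
```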

A benefit of the merge to FermiSolver with backend choice now is that we also minimise docstring duplication for the methods, so easier to maintain and update

kavanase commented 2 months ago

Ok some notes on the above. I updated the chempots/limit/el_refs inputs to be flexible and adopt a similar format to that in thermodynamics, to make things easier for the user, for maintenance etc.

I also (mostly, at least) fixed the scan_chempots() function mentioned above.

From playing around, I realised that ChemicalPotentialGrid and associated functions seem to only work for ternary compounds? e.g. with CdTe:

[screenshots]

and Na2FePO4F:

[screenshots]

(I've now added the get_doped_chempots_from_entries convenience function to chemical_potentials so can generate example chempots from the Materials Project quickly and easily this way, as shown).

Would you be able to look at this, so that it works for all dimensions?

Also for the tests, it would be most efficient to use a decorator (like this for testing both the new and legacy MP APIs in the chemical potentials tests) which tests each analysis function for both the doped and py-sc-fermi backends, rather than duplicating code. Would you mind adding this, and some tests for the scan_... functions (ideally with a variety of binary/ternary/quaternary+ systems)? 🙏
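
One way to do that backend parametrisation with a decorator is `pytest.mark.parametrize` (a sketch: the import path reflects this PR's doped/interface/fermi_solver.py location, the fixture name is hypothetical, and other solver arguments are omitted):

```python
import pytest

from doped.interface.fermi_solver import FermiSolver  # module path at the time of this PR


@pytest.mark.parametrize("backend", ["doped", "py-sc-fermi"])
def test_scan_chemical_potential_grid(backend, cdte_fermi_solver_inputs):  # hypothetical fixture
    if backend == "py-sc-fermi":
        pytest.importorskip("py_sc_fermi")  # skip cleanly if the optional backend isn't installed
    solver = FermiSolver(*cdte_fermi_solver_inputs, backend=backend)
    results = solver.scan_chemical_potential_grid(n_points=10)  # other arguments omitted
    assert len(results)  # same assertions exercised for both backends
```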

kavanase commented 2 months ago

@alexsquires something I was also thinking about. Do you think it still makes sense to have this under the separate interface module? Of course py-sc-fermi is one of the optional backends here, but most of the code is backend-agnostic. What about moving it into doped.thermodynamics? I don't have any strong feelings about this, just thinking if it's the better category fit

kavanase commented 2 months ago

@alexsquires, I was looking again at the current code in this PR (hopefully nearly ready to go 🤞) as I'm currently trying to use it in some of my work (on selenium defects). I was hoping to use the py-sc-fermi use case where one can set a fixed concentration of a defect/impurity species and solve with this constraint, but I think this is not implemented as far as I can tell?

I checked back and this was mentioned way back at the start of this PR, and the effective_dopant_concentration option was then added, but I think this other functionality wasn't?

> Sorry to be piling on the requests, but I guess the other main use case I'd imagine for this part of the code would be for the more complex defect thermodynamics analysis that py-sc-fermi allows, where you want to fix certain defect/species etc concentrations and perform some constrained solutions. Is it possible to integrate this to the fermi_solver code?

Just checking I'm not missing something / avoiding redundant work. I can add this to the code if it's not there.

alexsquires commented 1 month ago

> @alexsquires, I was looking again at the current code in this PR (hopefully nearly ready to go 🤞) as I'm currently trying to use it in some of my work (on selenium defects). I was hoping to use the py-sc-fermi use case where one can set a fixed concentration of a defect/impurity species and solve with this constraint, but I think this is not implemented as far as I can tell?
>
> I checked back and this was mentioned way back at the start of this PR, and the effective_dopant_concentration option was then added, but I think this other functionality wasn't?
>
> > Sorry to be piling on the requests, but I guess the other main use case I'd imagine for this part of the code would be for the more complex defect thermodynamics analysis that py-sc-fermi allows, where you want to fix certain defect/species etc concentrations and perform some constrained solutions. Is it possible to integrate this to the fermi_solver code?
>
> Just checking I'm not missing something / avoiding redundant work. I can add this to the code if it's not there.

I think it would be nice to maybe have it accept a dictionary of defects and concentrations, and optionally a charge state, so e.g.

fixed_concentrations = {"v_Cd": x, "v_Te_+1": y}

This could fairly easily be an additional kwarg of any of the py-sc-fermi methods?
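
A sketch of how such a kwarg could be split into whole-defect vs per-charge-state constraints (the helper and the assumed name format are illustrative only):

```python
def split_fixed_concentrations(fixed_concentrations: dict) -> tuple[dict, dict]:
    """Split {"v_Cd": x, "v_Te_+1": y}-style input into constraints on whole
    defects vs constraints on individual charge states, based on whether the
    name ends in an explicit charge suffix (e.g. "_+1", "_-2", "_0")."""
    per_defect, per_charge_state = {}, {}
    for name, conc in fixed_concentrations.items():
        prefix, _, suffix = name.rpartition("_")
        if prefix and suffix.lstrip("+-").isdigit():
            per_charge_state[name] = conc
        else:
            per_defect[name] = conc
    return per_defect, per_charge_state
```

Each entry could then be mapped onto the corresponding fixed-concentration constraint in the py-sc-fermi backend.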

> @alexsquires something I was also thinking about. Do you think it still makes sense to have this under the separate interface module? Of course py-sc-fermi is one of the optional backends here, but most of the code is backend-agnostic. What about moving it into doped.thermodynamics? I don't have any strong feelings about this, just thinking if it's the better category fit

I also think this makes sense. I will fix everything up at the beginning of next week, then we could consider moving it?

kavanase commented 1 month ago

Sounds good!

alexsquires commented 1 month ago

Do you have a local version of the example notebook that is not pushed? There are some issues running the version on GH; I can fix them if not, just checking I'm not reinventing the wheel.

kavanase commented 1 month ago

As far as I can tell, my local version isn't different to the pushed version, but in case there's any formatting/etc issues, this is my local version, as well as another notebook I was using for some quick tests before. Archive.zip

alexsquires commented 1 month ago

This covers the bug fixes and feature requests. Do you want to consider the merge into thermodynamics?

kavanase commented 1 month ago

Ok great! Yes let's merge to doped.thermodynamics – and for the ChemicalPotentialGrid code, does it make sense to put it in doped.chemical_potentials?

alexsquires commented 1 month ago

Migration finished! Just updating the tests.