Another pain point, which I don't know if it can be addressed for the 4.x series, is the fact that certain nbextensions depend on (a certain version of) kernel-side code, such as custom interactive widget libraries.
At the moment, it is impossible to have two kernels with two different versions of ipywidgets, or bqplot, installed, because they will look for the JavaScript at the same location.
Therefore, I think that the extension mechanism should acknowledge that there are two categories of extensions: purely front-end extensions, and kernel-dependent extensions whose JavaScript is tightly coupled to kernel-side code.
A proposal to solve this would be to have the kernelspec contain one more piece of information: a uuid, which would typically be generated when the kernelspec is installed. When running a notebook with this kernel, nbextension_base_path/u-u-i-d/ would then become a search path, and the one we use for custom widgets.
cc @jdfreder
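Purely as illustration of the idea, here is a minimal sketch of how such a per-kernel search path could be derived, assuming a hypothetical uuid field written into kernel.json at install time (no such field exists today):

import json
import os
from jupyter_client.kernelspec import KernelSpecManager

def kernel_nbextension_path(kernel_name, nbextension_base_path):
    """Return the proposed per-kernel nbextension search path, or None."""
    spec = KernelSpecManager().get_kernel_spec(kernel_name)
    with open(os.path.join(spec.resource_dir, "kernel.json")) as f:
        kernel_json = json.load(f)
    # "uuid" is the proposed extra field, generated when the kernelspec is
    # installed; it is not part of the kernelspec format today.
    kernel_uuid = kernel_json.get("uuid")
    if kernel_uuid is None:
        return None
    # nbextension_base_path/<uuid>/ would become an additional search path,
    # and the one used for custom widgets built against this kernel.
    return os.path.join(nbextension_base_path, kernel_uuid)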
Here is a super rough draft of a PR that addresses the first pain point: #879
Some questions that this brings up:
- Do we need separate config=True traitlets on NotebookApp for enabling common, notebook, tree and terminal nbextensions?
- How do we handle multiple config files that define those? Last wins? Try to merge? .py config files allow lists to be appended/extended, but .json doesn't.
- How will folks at Continuum write these config files programmatically?
There are some design decisions to make, but the good news is that it is simple code and logic to add this. Let's discuss more tomorrow.
Release the fixes quickly in a 4.2 release.
I don't agree with "quickly". I would prefer "thoroughly tested", with good documentation and examples.
Don't standardize specifically where in the python package they are. This allows existing packages such as nbgrader to not have to change anything.
I would still like, if you make extensions in Python packages, to be able to activate them per submodule: jupyter activate nbgrader.student / jupyter activate nbgrader.teacher
throwing that data into the page.html template itself
Does that imply you need to restart the server to load extensions?
Extra note: if the packages are Python and have versions, we should be able to serve multiple versions at different URLs like /nbextension/<extension>/<version>/; if the version is omitted, latest is implied.
Re: @SylvainCorlay's point, also during that discussion, we proposed a kernel-specific nbextension path, which we never got around to implementing. I feel bad about that. I think we ended up proposing an nbextensions dir inside the kernelspec that kernel-specific extensions could go in.
It seems a bit worrisome that we would officially bless Python packages as the temporary solution for 4.x nbextensions, and then turn 180º and say that it's all npm, and not Python packages as fast as we can. But if we want to make it more convenient, adding a flag for 'install js from a Python package':
jupyter nbextension install --py nbgrader seems like a better middle ground than breaking jupyter nbextension install [path], or guessing whether [path] is a path or a package.
Do we need separate config=True traitlets on NotebookApp for enabling common, notebook, tree and terminal nbextensions?
I don't think so. What would these common nbextensions be?
How do we handle multiple config files that define those? Last wins? Try to merge?
I would do it the same way we do with the rest of config, where the more specific config has priority: user > env > system. I think that's the only thing that's missing from nbconfig for this to work.
How will folks at Continuum write these config files programmatically?
During the nbextension discussion a year ago, it was concluded that we must not activate extensions as part of installing them - that install & activate must be two separate actions. Are we changing our minds on that?
If someone wants to enable nbextensions via conda packages, the biggest hurdle is that all enabling currently resides in a single file, and conda packages should only write, not modify files. To support this, the only idea I have is a config.d-style directory of config files, so that packages could drop a file in there, and all such files are loaded.
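As a rough illustration of that idea, here is a sketch of a conf.d-style loader; the nbconfig.d directory name and the merge rule are made up for the example:

# Hypothetical sketch of a conf.d-style loader: every package drops its own
# JSON fragment into a directory, and the server merges them at startup.
import glob
import json
import os

def load_config_d(config_dir):
    """Merge all JSON fragments in <config_dir>/nbconfig.d, in filename order."""
    merged = {}
    for path in sorted(glob.glob(os.path.join(config_dir, "nbconfig.d", "*.json"))):
        with open(path) as f:
            fragment = json.load(f)
        for section, values in fragment.items():
            merged.setdefault(section, {}).update(values)
    return merged

# A conda/apt package would only ever *add* a file, e.g.
# nbconfig.d/myextension.json containing:
#   {"load_extensions": {"myextension/main": true}}
# and never modify files owned by other packages.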
Re: @SylvainCorlay's point, also during that discussion, we proposed a kernel-specific nbextension path, which we never got around to implementing.
You can load extensions from the kernelspec directory using kernel.js; you just need to do relative requires.
It seems a bit worrisome that we would officially bless Python packages as the temporary solution for 4.x nbextensions, and then turn 180º and say that it's all npm, and not Python packages as fast as we can. But if we want to make it more convenient, adding a flag for 'install js from a Python package':
What about "just" adding the npm global dir to the search path?
I would do it the same way we do with the rest of config, where the more specific config has priority: user > env > system. I think that's the only thing that's missing from nbconfig for this to work.
For general frontend config, it's hard to merge JSON, though for lists of extensions, that might be easier.
that install & activate must be two separate actions. Are we changing our minds on that?
I don't think so, I think the setup.py list is to discover extensions.
Just so that I understand it correctly:
This:
embrace npm as our package format and manager
just means that developers of packages need to use npm, not that users of the package need to install npm in addition to python+python package manager?
If not: I don't think it's good for the adoption of extensions if e.g. a julia user has to install a python environment to get a python package manager to install a jupyter notebook and then start installing node and npm to get extensions. Learning python (the language, the tools, the libs...) instead of using a preinstalled SPSS/STATA/... is already hard for students and researchers, so don't add on learning another packaging ecosystem to get notebook extensions.
Maybe this belongs in a separate issue, but it seems like the right time to fix it along with these other problems. Installing server-side extensions suffers from a similar pain point about jupyter_notebook_config.py vs jupyter_notebook_config.json configs.
We found out the hard way that the JSON config takes precedence over the Python config in jupyter-incubator/dashboards#153. So if some server-side extensions ask users to add themselves to the .py config like c.NotebookApp.server_extensions = ['my.extension'] (currently: nbexamples, dashboards, declarativewidgets) while others use the ConfigManager class to add themselves to the .json config (nbgrader, anything using nbsetuptools), only the .json ones will wind up getting enabled.
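For concreteness, the two registration styles look roughly like this; exact import paths vary between notebook versions, so treat this as a sketch rather than a recipe:

# Approach 1: ask the user to edit jupyter_notebook_config.py by hand
# (this is what nbexamples/dashboards/declarativewidgets documented):
#   c.NotebookApp.server_extensions = ['my.extension']

# Approach 2: write jupyter_notebook_config.json programmatically via the
# JSON config manager (roughly what nbgrader / nbsetuptools do).
from jupyter_core.paths import jupyter_config_dir
from notebook.services.config import ConfigManager

cm = ConfigManager(config_dir=jupyter_config_dir())
cm.update('jupyter_notebook_config', {
    'NotebookApp': {
        'server_extensions': ['my.extension'],
    },
})
# Because the .json file currently wins, only extensions registered this way
# end up enabled when both files define NotebookApp.server_extensions.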
I have no problem switching over to one or the other, but, which is the correct one? Or does something need to change so that both are supported and the lists of extensions are merged across the config types?
I have no problem switching over to one or the other, but, which is the correct one?
It's probably correct for .py config files to have higher priority, since they are generally human-edited and more powerful, while .json config files should always and only be programmatically edited. .py files can be considered 'manual overrides'.
In Python, it makes sense to do c.NotebookApp.server_extensions.append('my.extension'). The JSON config files cannot do this, since they are a simple dict-dump. And further, they don't generally need to, because programmatically opening a JSON file and appending to a list in it is doable, unlike with a Python file. A further point in favor of .py files having higher priority.
In Python, it makes sense to do c.NotebookApp.server_extensions.append('my.extension'). The JSON config files cannot do this, since they are a simple dict-dump. And further, they don't generally need to, because programmatically opening a JSON file and appending to a list in it is doable, unlike with a Python file. A further point in favor of .py files having higher priority.
We already had a long discussion on which one between .py and .json should take precedence, and decided on JSON, as we want at some point to edit configuration only through the UI. Having .py take precedence would mean in many cases that if a user does c.NotebookApp.server_extensions = ['my.extension'], no automatic tool can ever activate an extension. So which one takes precedence is still not obvious to me, and I'm not sure a single case like this one changes the balance.
JSON could perfectly well have append too, as long as we decide that an append or prepend key becomes a LazyAppend/LazyPrepend.
We maybe should just warn more loudly if one file erases the settings of another, or put some time into looking at tools like redbaron to modify a Python config file in the obvious cases where a config option is not dynamic.
We maybe should just warn more loudly if one file erases the settings of another ...
The warning is definitely there and pretty loud on server start:
[W 14:29:11.533 NotebookApp] Unrecognized JSON config file version, assuming version 1
[W 14:29:11.536 NotebookApp] Collisions detected in jupyter_notebook_config.py and jupyter_notebook_config.json config files. jupyter_notebook_config.json has higher priority: {
"NotebookApp": {
"server_extensions": "<traitlets.config.loader.LazyConfigValue object at 0x7f15cc47dc18> ignored, using ['urth.dashboard.nbexts']"
}
}
But I'll admit @jtyberg and I missed it for some time when debugging the issue I linked above. Even when we did see it, we only knew what to do because we're expert-amateurs in how the Jupyter config system works. I'd bet a typical notebook user wouldn't have an easy time rectifying the problem.
JSON could perfectly well have append too, as long as we decide that an append or prepend key becomes a LazyAppend/LazyPrepend.
If you wanted to scope it down to prepending/appending sequence values instead of having to merge arbitrary config objects, that would still solve the extension problem specifically without growing into a general purpose "merge all the configs!" solution.
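A toy sketch of how such a scoped append/prepend directive might be resolved at load time; the directive shape is entirely hypothetical:

# Hypothetical: let a JSON config say {"server_extensions": {"append": [...]}}
# instead of replacing the whole list, and resolve it when configs are combined.
def merge_sequence(base, incoming):
    """Merge one sequence-valued config entry; incoming is the higher-priority value."""
    if isinstance(incoming, dict) and set(incoming) <= {"append", "prepend"}:
        return incoming.get("prepend", []) + list(base) + incoming.get("append", [])
    # Plain lists keep today's behaviour: the higher-priority value replaces the lower.
    return incoming

print(merge_sequence(['a.ext'], {"append": ['b.ext']}))   # ['a.ext', 'b.ext']
print(merge_sequence(['a.ext'], ['b.ext']))               # ['b.ext']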
What is the non-Continuum-camp feeling about the importance of being able to support "sandboxing" of extensions? For us (at Continuum) that means having a clean mechanism to be able to use conda environments to sandbox different sets of extensions, and perhaps different versions of the same extension. From our work, which has mixed both JS and Python code in the extensions, conda packaging and conda environments have worked really well.
I'm not clear on how this works in either a pure-Python-package or pure-NPM-package world. If there is a good and clean solution that doesn't include conda, great. But is there scope to discuss how conda can fill the need here to manage cross-language packaging, or has that ship already sailed?
We maybe should just warn more loudly if one file erases the settings of another, or put some time into looking at tools like redbaron to modify a Python config file in the obvious cases where a config option is not dynamic.
If redbaron would allow us to write and update Python config files programmatically, I would be pretty happy to drop json config files altogether, since they would no longer solve a problem. But that's a long-term idea.
But is there scope to discuss how conda can fill the need here to manage cross-language packaging, or has that ship already sailed?
We can never rely on conda for this, so it's always going to be the case for us to use standard language packaging in multiple languages with some manual steps for stitching the two together, and then it can be 'simpler' for conda users, where packages can properly express cross-language dependencies. But whatever we come up with, it has to work outside conda.
Re: @SylvainCorlay's point, also during that discussion, we proposed a kernel-specific nbextension path, which we never got around to implementing. I feel bad about that. I think we ended up proposing an nbextensions dir inside the kernelspec that kernel-specific extensions could go in.
I would be ok to give it a try if you guys are ok with the proposal described earlier.
How is everyone's availability for a video chat later today? I am free after 10 am PST.
I'm free any time.
11am PT, 2pm ET would be good for me or 12pm PT, 3pm ET
An hour later is "possible" but sub-optimal. Friday AM-midday ET also WFM. Afternoon not so great.
I probably can make it too.
If I have to choose, I would do it tomorrow, so we give another 24 hours to the things raised here (in other discussions and the PR itself), but if we are doing it today, I would prefer around/after 12pm PT.
Is there a hackpad for this meeting?
Friday would be better here too if possible.
/cc @lbustelo
I like Damian's idea of having the meeting tomorrow. That will give me today to work on the implementation and allow more discussion. We also may get more turnout with the extra day's notice.
I propose 10am PST on appear.in or bluejeans to allow our "further east" folks to participate more easily. How does that sound?
I will be unavailable for tomorrow, but would like to suggest the model used by git for chaining configuration files together:
"If not set explicitly with --file, there are four files where git config will search for configuration options:
$(prefix)/etc/gitconfig
System-wide configuration file.
$XDG_CONFIG_HOME/git/config
Second user-specific configuration file. If $XDG_CONFIG_HOME is not set or empty, $HOME/.config/git/config will be used. Any single-valued variable set in this file will be overwritten by whatever is in ~/.gitconfig. It is a good idea not to create this file if you sometimes use older versions of Git, as support for this file was added fairly recently.
~/.gitconfig
User-specific configuration file. Also called "global" configuration file.
$GIT_DIR/config
Repository specific configuration file.
If no further options are given, all reading options will read all of these files that are available. If the global or the system-wide configuration file are not available they will be ignored. If the repository configuration file is not available or readable, git config will exit with a non-zero error code. However, in neither case will an error message be issued.
The files are read in the order given above, with last value found taking precedence over values read earlier. When multiple values are taken then all values of a key from all files will be used.
All writing options will per default write to the repository specific configuration file. Note that this also affects options like --replace-all and --unset. git config will only ever change one file at a time.
You can override these rules either by command-line options or by environment variables. The --global and the --system options will limit the file used to the global or system-wide file respectively. The GIT_CONFIG environment variable has a similar effect, but you can specify any filename you want."
https://www.kernel.org/pub/software/scm/git/docs/git-config.html
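For comparison, Jupyter already consults a similar chain of config directories, which can be listed with the existing jupyter_core API; a quick check:

# Print the chain of config directories Jupyter consults, analogous to git's
# system -> XDG -> user -> repo chain. Highest priority comes first.
from jupyter_core.paths import jupyter_config_path

for path in jupyter_config_path():
    print(path)
# Typically: user (~/.jupyter), then the sys.prefix (env) dir, then system-wide dirs.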
We can also push that a few days down the road, I'm not sure there is an urgent need to do that now, as 4.1 is not yet out.
I'll try to make it, but have a company event.
So we're going to use npm Real Soon Now, but we need to at least work with virtualenv/pip and conda now. For the end user, that's all they want to do, or maybe some GUI stuff, not touch magically named files in five places or run a bunch of CLI.
Let's also try to make developers happy, and ease them into the npm world (lotta bower out there right now).
And let's try to make sysadmins happy. A surrogate for this is: how simply can I use binder to make a demo of my software? If you are doing a py/js job and requirements.txt or environment.yml isn't enough and you need a Dockerfile, it's too hard.
I am :+1: on allowing enabling extensions from all the places, as suggested... and would like to see this wrapped as a switch to nbextension install, even if it is included elsewhere. I would prefer to see a data-first approach; I have never liked the Python config files. JSON is validatable, and has implementations everywhere.
As to the chaining: I would almost rather see behavior like lodash's _.merge, but the lists are still a problem. Germane here: server_extensions should really be a hash :) Then you could opt into or out of an extension by setting it to True/False.
The version stuff is scary, and treading on what a real package manager should do, but if it's an identified need, we should go ahead and address it.
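A small sketch of what a hash-shaped server_extensions setting could look like and why merging it across config files becomes trivial (this shape is a proposal, not the current traitlet):

# Hypothetical: server_extensions as a dict of name -> enabled flag instead of
# a list. Dicts from several config files can be merged key-by-key, and a
# later file can opt a single extension in or out without clobbering the rest.
system_cfg = {"nbgrader": True, "urth.dashboard.nbexts": True}
user_cfg = {"urth.dashboard.nbexts": False}   # user opts out of one extension

effective = dict(system_cfg)
effective.update(user_cfg)

enabled = [name for name, on in effective.items() if on]
print(enabled)   # ['nbgrader']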
So, with those assumptions and:
- a setup.py package that has already been installed
- a package.json with (at least) name, version and main
- an __init__.py so that python can find it (setuptools path:style)
...changing the command to make the positional argument optional, and accept --py:
$> jupyter nbextension install --prefix="${CONDA_ENV_PATH}" \
--py=nbgrader.nbextensions.assignment_list.static \
--enable
Installing and enabling nbextension to `{prefix}` from python package `nbgrader`...
... found package.json
... found name `nbgrader.assignment_list`
... found version `0.2.0`
... creating `nbgrader.assignment_list@0.2.0`
... `0.2.0` is newer than previous version in `nbgrader.assignment_list` (`0.1.0`)
... removing `nbgrader.assignment_list`
... copying `nbgrader.assignment_list@0.2.0` to `nbgrader.assignment_list`
... found main `main.js`
... `nbgrader.assignment_list/main` already enabled
Speaking of "standard package management"... being able to communicate all of that at install time with setuptools entry_points would be an option:
# setup.py
from setuptools import setup

setup(
    # ...
    entry_points={
        'jupyter.nbextension': [
            'nbgrader.assignment_list = nbgrader.nbextensions.assignment_list.static',
        ],
        'jupyter.server_extension': [
            'nbgrader = nbgrader.nbextensions.assignment_list:load_jupyter_server_extension',
        ],
    },
    # ...
)
...but there are probably complexities I am missing: environments (should be detectable), hub deployments, etc. I would really rather see the config files as a way to opt out of or customize what happens at the package-manager level, rather than a necessary step for every single installed package.
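If the entry_points route were taken, server-side discovery could be roughly as simple as the following; the group names are taken from the setup.py sketch above and are not an existing convention:

# Sketch: discover nbextensions/server extensions declared by installed packages.
import pkg_resources

def discover_nbextensions():
    for ep in pkg_resources.iter_entry_points(group='jupyter.nbextension'):
        # ep.name is the nbextension name, ep.module_name the static assets module
        yield ep.name, ep.module_name

def discover_server_extensions():
    for ep in pkg_resources.iter_entry_points(group='jupyter.server_extension'):
        # ep.load() would resolve load_jupyter_server_extension directly
        yield ep.name, ep.load()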
fyi - the original authors of Bower abandoned the project. Last I heard, they were looking for new maintainers.
FYI, it does appear that people are taking Bower up, as evidenced by the first several blog posts at http://bower.io/blog/
For reference, notes of the Dec. 18 meeting are in this hackpad.
For the record, the proposal that this morning's meeting came to was:
- jupyter nbextension subcommands, be able to read/write to all config directory locations
- a _jupyter_nbextensions_paths() API to find nbextensions in Python packages
Mmh, I thought we'd agreed to go to the conf.d bag-o-files model instead of asking package managers to edit config files... Did I miss something?
I don't think we can do that in our current approach without breaking BW compat: the jupyter nbextension command already makes edits to those same files. The only thing we could do is to add another separate layer that does the conf.d approach. But it would have to sit alongside our existing stuff and not replace it. I think we should use the current edit-based approach for now and see if we want to add the conf.d approach later.
Well, if we can offer the conf.d option for 4.2, conda/apt/etc might choose that option instead, without breaking BW compatibility (I'm not suggesting removing the existing functionality).
I think for example apt has post-install scripts, but the policy is that they shouldn't modify the files that they installed, only do other things.
So I'd like to see if we can go in the conf.d direction, as overall I think it's the saner option moving forward and towards 5.x, even if we do carry the existing solution for the 4.x lifecycle as well.
I agree that in the long run we want to have the conf.d type of approach as well.
I'm jumping into this discussion awfully late, coming over from @blink1073's great work in matplotlib/matplotlib#5754 to make interactive matplotlib plots a proper Jupyter widget.
My concern there is that the matplotlib widget fits very squarely into @SylvainCorlay's second category of "kernel dependent extensions". The coupling between the Javascript and Python side there is very tight, and as that stuff has developed it's almost never been the case that a new feature could be added or a bug fixed without changing both sides of the coin. It will be very important to the matplotlib widgets that there are no opportunities for a version mismatch between the Python and Javascript sides of the communication. It looks on first blush that #879 will address that -- matplotlib can create a NotebookApp subclass and specify where its Javascript lives (which could continue to be installed along with matplotlib in Python with Python-packaging tools etc. as it is now). Is my impression correct? And will that remain in the longer term plans?
Mike, we are still working out the details, but we expect the situation to only improve. I don't think you will end up creating a NotebookApp subclass though.
With 4.x nbextensions, most projects like matplotlib will ship their JS in the Python package and that JS code will be installed and activated as a separate step by the user (we are improving that situation).
Starting with 5.x we will start to rely on npm for a lot of this, but will still have a similar process that allows the version numbers to be synched in python and JS. Eventually you will probably want to separate out your JS code into an npm package that the python package also includes in its sources.
Cheers,
Brian
Starting with 5.x we will start to rely on npm for a lot of this, but will still have a similar process that allows the version numbers to be synched in python and JS. Eventually you will probably want to separate out your JS code into an npm package that the python package also includes in its sources.
The versioning here is critical obviously, and should prevent version mismatch problems, but it won't prevent problems with stale packages etc. On a purely logical level, it seems to me that code that is tightly coupled, and that couldn't exist without both client and server sides, should be installed in an atomic way. The piece I'm probably missing is what an npm package/installation provides over libraries in the kernel providing/pointing to their own resources (for the case of "kernel-specific extensions" -- it makes sense for primarily client-side extensions).
@mdboom Among other things, it prevents the duplicate loading of dependencies. Some client-side libs will not function correctly if they are loaded multiple times. It also makes it much simpler to specify those client-side dependencies.
@mdboom Among other things, it prevents the duplicate loading of dependencies. Some client-side libs will not function correctly if they are loaded multiple times. It also makes it much simpler to specify those client-side dependencies.
But how does any of that apply to @SylvainCorlay's second category above (kernel-dependent extensions)?
Kernel dependent extensions can provide their own list of JS dependencies, which get unioned with the front-end dependencies of the rest of the app.
If I'm understanding correctly, there appears to be no way to install the Javascript library atomically along with the kernel-side library under the proposed scheme. That is my fundamental objection with the design here. The strong versioning goes a long way, but not all the way, toward ameliorating some of the problems with that. The necessities/advantages of installing javascript content through npm seems to only apply to primarily client side extensions, and cut against the needs of kernel-dependent extensions, where atomic installation is important.
All that said, I'll apologize again for coming late to this discussion. Certainly from matplotlib's perspective, this seems like a regression for both users (which have an additional manual installation step and the possibility for confusion and more toes to shoot oneself in) and developers (that must release packages on two different package frameworks, where before there was one), with little benefit in our particular use case. But obviously, we are but one case and will go with the flow if the benefits for other kinds of extensions outweigh the disadvantages for our kind.
With nbextensions 4.x, there isn't a really reliable way of doing what you are asking. This is mainly because nbextensions are not versioned in any way.
With the new npm based 5.x approach, code in the kernel or server will be able to specify which version of an npm package they require. A user might have 5 different versions of that package installed, but we (and npm) will make sure that only the needed one gets loaded. This allows you to have separate installation of the Python and JS side, while still making sure versions match.
Jupyter kernels have some of the same issues, see https://github.com/jupyter/jupyter_core/pull/61
You want to consider one solution fixing both nbextensions and kernel specs at the same time.
@damianavila , @parente and maybe more:
Just want to know: have any decisions been made about the installation and management mechanism for extensions (including both front-end and server-side)?
As you may know, there is a popular project (https://github.com/ipython-contrib/IPython-notebook-extensions) hosting many notebook extensions (both front-end and server-side), including an extension called nbextensions that provides a web UI to manage front-end extensions.
@juhasch and other developers have made that project pip-install-able, but the current installation mechanism is sort of un-pythonic and un-notebook-like, since we already have nbextensions.py to install front-end extensions.
Actually, extensions usually have three parts (nbextensions for the front-end, extensions for Python files and templates for HTML templates). I wonder if we can modify nbextensions.py or adopt the designs in ipython-contrib/IPython-notebook-extensions to support server-side extensions (the extensions and templates folders) in an upcoming release?
Besides, I also think that for Jupyter Notebook end users (not developers), just using the command line jupyter nbextension install <extension path or url> is a good way to install extensions.
@haobibo there were several discussions about this and we now have a clear picture of what is needed and how to implement it. In fact, @ellisonbg started a PR for this and I am working on some other branches to complete that PR and add additional missing pieces (for instance, a way to enable/disable server-based extensions a la nbextensions).
It should be easy to adapt the ipython-notebook-extensions into the new mechanism once that is finally merged. I would encourage you to try porting some of the extensions with the currently proposed implementation (#879) to see if there is something we are missing.
Damian, please coordinate with @jdfreder - he is also working on finishing up the PR as well.
@ellisonbg Nice, I will ping him. Thanks for letting me know :+1:
Since #879 was merged, I think this should be closed. There are other interesting discussions here but the thread is long and they get easily missed. I would have further discussions in new threads. Thoughts?
@fperez @damianavila @bollwyvl @sccolbert @teoliphant @minrk @takluyver @ijstokes
I have been having a lot of on- and off-line discussion this week about the current state of nbextensions in 4.x. For a long time, we (at least this was my own logic) have hesitated to make any significant changes to how 4.x nbextensions are installed/loaded/packaged, because we know that much bigger changes are on the way in 5.0. After these recent conversations, I am convinced that we need to improve the existing 4.x nbextension architecture in the meantime. The current situation in 4.x is causing way too many problems for users and devs.
The goal of this issue is to 1) raise our community awareness that we need to do something about this and 2) come up with a concrete proposal for moving forward.
Current pain points of 4.x nbextensions
nbextensions can be installed in system, sys.prefix, or user locations.
Config can be loaded from system, sys.prefix, or user config directories.
By default, installed extensions are not loaded until activated. The only place extensions can be activated is in the user's config directory (~/.jupyter/nbconfig/notebook.json).
Pain Point #1: even though nbextensions and config can be installed in system, sys.prefix or user paths, the list of extensions to activate is only loaded from the user config. Thus, there is no way of activating nbextensions at the system or sys.prefix level.
Pain Point #2: There is no standard package format for nbextensions (other than a directory of stuff) and no standard way of copying an nbextension into place. Because of this, there are multiple, separate, hacky ways of packaging and installing nbextensions.
- nbgrader has nbgrader extension install and nbgrader extension activate.
- At least one project ships an install.py script that installs and activates its extension.
- @minrk uses ln -s to "install" his nbextensions here: https://github.com/minrk/ipython_extensions
Proposal for addressing these pain points
Here is the overall approach:
Here are the technical details of the proposal:
Pain Point 1
@takluyver has made an excellent point that the "nbconfig" frontend configuration system deliberately only loads from the user's config (~/.jupyter) because these are really meant to be only "user preferences". I completely agree with this, and others I have spoken to also concur. So we can't make that part of our app start to load system or sys.prefix based config.
based config.I propose to start to load nbextension activation config from all config paths (system,
sys.prefix
and user) always, but do that instead by throwing that data into thepage.html
template itself, rather than loading later using the nbconfig web service. This would allow us to keep system data out of the user level app-preferences but still load config for nbextensions from all locations. This is not difficult to implement and is fully backwards compatible.Pain Point 2
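A very rough sketch of what that could look like on the server side; the gathering function is illustrative, and the way the result reaches page.html (handler and template variable names) is invented here:

# Hypothetical sketch: gather nbextension activation config from every config
# path (system, sys.prefix, user) and hand it to the page template directly,
# instead of serving it later through the per-user nbconfig web service.
import os
from jupyter_core.paths import jupyter_config_path
from notebook.services.config import ConfigManager

def nbextension_activation():
    merged = {}
    # Lowest priority first so that user config wins on key collisions.
    for config_dir in reversed(jupyter_config_path()):
        cm = ConfigManager(config_dir=os.path.join(config_dir, 'nbconfig'))
        merged.update(cm.get('notebook').get('load_extensions', {}))
    return merged

# A handler would then pass this into the template namespace, e.g.
#   self.render_template('page.html', nbextensions=nbextension_activation(), ...)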
Pain Point 2
This one is more difficult and my proposal will be more controversial. Today, in many cases, people are shipping JS code in Python packages. For 5.0 we are going to stop doing that and embrace npm as our package format and manager, but that would require breaking changes in 4.x, so it is not on the table.
I propose that for 4.2+ we embrace putting nbextensions into Python packages:
- Have the package 1) include metadata in setup.py that gives the package-relative paths of those assets, and 2) provide that same metadata in __init__.py to enable runtime inspection. Something like this (see the sketch after this list).
- Extend the jupyter nbextension command line tool to work with Python packages: jupyter nbextension install nbgrader and jupyter nbextension enable nbgrader. We could also include flags that target installation/config to the system, sys.prefix and user directories.
This would again be fully backwards compatible.
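To make the "Something like this" concrete, here is a hedged sketch using the _jupyter_nbextensions_paths() name from the Dec. 18 meeting notes; the exact keys returned are illustrative only:

# mypackage/__init__.py -- runtime-inspectable metadata about shipped JS assets.
def _jupyter_nbextensions_paths():
    # Each entry gives a package-relative source directory of static assets,
    # the destination name under the nbextensions directory, and the
    # require.js module to enable. The keys here are illustrative.
    return [{
        'section': 'notebook',
        'src': 'static',                 # mypackage/static/
        'dest': 'mypackage',             # installed as nbextensions/mypackage/
        'require': 'mypackage/main',     # enabled via load_extensions
    }]

# setup.py -- the same information declared at packaging time so the static
# files actually ship inside the installed package.
from setuptools import setup

setup(
    name='mypackage',
    version='0.1.0',
    packages=['mypackage'],
    package_data={'mypackage': ['static/*.js']},
)

With something like this in place, jupyter nbextension install nbgrader and jupyter nbextension enable nbgrader would only need to import the package and ask it where its assets live.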