twosigma / beakerx

Beaker Extensions for Jupyter Notebook
http://BeakerX.com
Apache License 2.0
2.8k stars 382 forks source link

"New Default Notebook" not loading customized version #3966

Closed jolespin closed 8 years ago

jolespin commented 8 years ago

I'm trying to customize my default notebook by adding Python3, R, and HTML evaluators, and setting it up so it automatically loads the modules I use every time (i.e. numpy, seaborn, pandas, networkx, etc.).

When I click "New Default Notebook", it doesn't load the one I set up. I made sure to change the correct file.

Joshs-MacBook-Air:~ Mu$ cat ~/.beaker/v1/config/default.bkr
{
    "beaker": "2",
    "evaluators": [
        {
            "name": "HTML",
            "plugin": "HTML",
            "view": {
                "cm": {
                    "mode": "htmlmixed"
                }
            }
        },
        {
            "name": "R",
            "plugin": "R",
            "view": {
                "cm": {
                    "mode": "r"
                }
            }
        },
        {
            "name": "Python3",
            "plugin": "Python3",
            "setup": "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns; sns.set()\nimport networkx as nx\nimport pandas as pd\n\nimport os,sys,dill\nfrom collections import *\n\nfrom IPython.display import display\nfrom IPython.core.pylabtools import figsize, getfigs",
            "view": {
                "cm": {
                    "mode": "python"
                }
            }
        }
    ],
    "cells": [
        {
            "id": "section001",
            "type": "section",
            "level": 1,
            "title": "Hello Notebook",
            "collapsed": false,
            "evaluatorReader": false
        },
        {
            "id": "Primordial",
            "type": "code",
            "evaluator": "Python3",
            "input": {
                "body": [
                    "#Primordial",
                    "from scipy.stats import uniform",
                    "from sklearn.datasets import load_diabetes,load_iris",
                    "from sklearn.linear_model import ElasticNet,Lasso,Ridge",
                    "from sklearn.ensemble import RandomForestRegressor,GradientBoostingRegressor",
                    "from sklearn.grid_search import RandomizedSearchCV"
                ],
                "hidden": true
            },
            "output": {
                "state": {},
                "selectedType": "BeakerDisplay",
                "pluginName": "Python3",
                "shellId": "8CC62A3EB84144E6A01C1F125BBFB61C",
                "elapsedTime": 301,
                "dataresult": [
                    0,
                    1,
                    2,
                    3,
                    4,
                    5,
                    6,
                    7,
                    8,
                    9
                ]
            },
            "evaluatorReader": true,
            "lineCount": 6,
            "initialization": true
        },
        {
            "id": "codekR6ki0",
            "type": "code",
            "evaluator": "Python3",
            "input": {
                "body": [
                    "data = load_diabetes()",
                    "M_elasticnet = ElasticNet()",
                    "param_grid={\"alpha\": uniform(), \"l1_ratio\": uniform()}",
                    "",
                    ""
                ]
            },
            "output": {
                "state": {},
                "result": "<pre><span  class=\"ansired\">Init signature: </span>ElasticNet<span  class=\"ansiblue\">(</span>self<span  class=\"ansiblue\">,</span> alpha<span  class=\"ansiblue\">=</span><span  class=\"ansicyan\">1.0</span><span  class=\"ansiblue\">,</span> l1_ratio<span  class=\"ansiblue\">=</span><span  class=\"ansicyan\">0.5</span><span  class=\"ansiblue\">,</span> fit_intercept<span  class=\"ansiblue\">=</span><span  class=\"ansigreen\">True</span><span  class=\"ansiblue\">,</span> normalize<span  class=\"ansiblue\">=</span><span  class=\"ansigreen\">False</span><span  class=\"ansiblue\">,</span> precompute<span  class=\"ansiblue\">=</span><span  class=\"ansigreen\">False</span><span  class=\"ansiblue\">,</span> max_iter<span  class=\"ansiblue\">=</span><span  class=\"ansicyan\">1000</span><span  class=\"ansiblue\">,</span> copy_X<span  class=\"ansiblue\">=</span><span  class=\"ansigreen\">True</span><span  class=\"ansiblue\">,</span> tol<span  class=\"ansiblue\">=</span><span  class=\"ansicyan\">0.0001</span><span  class=\"ansiblue\">,</span> warm_start<span  class=\"ansiblue\">=</span><span  class=\"ansigreen\">False</span><span  class=\"ansiblue\">,</span> positive<span  class=\"ansiblue\">=</span><span  class=\"ansigreen\">False</span><span  class=\"ansiblue\">,</span> random_state<span  class=\"ansiblue\">=</span><span  class=\"ansigreen\">None</span><span  class=\"ansiblue\">,</span> selection<span  class=\"ansiblue\">=</span><span  class=\"ansiblue\">&apos;cyclic&apos;</span><span  class=\"ansiblue\">)</span><span  class=\"ansiblue\"></span>\n<span  class=\"ansired\">Docstring:</span>\nLinear regression with combined L1 and L2 priors as regularizer.\n\nMinimizes the objective function::\n\n        1 / (2 * n_samples) * ||y - Xw||^2_2 +\n        + alpha * l1_ratio * ||w||_1\n        + 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2\n\nIf you are interested in controlling the L1 and L2 penalty\nseparately, keep in mind that this is equivalent 
to::\n\n        a * L1 + b * L2\n\nwhere::\n\n        alpha = a + b and l1_ratio = a / (a + b)\n\nThe parameter l1_ratio corresponds to alpha in the glmnet R package while\nalpha corresponds to the lambda parameter in glmnet. Specifically, l1_ratio\n= 1 is the lasso penalty. Currently, l1_ratio &lt;= 0.01 is not reliable,\nunless you supply your own sequence of alpha.\n\nRead more in the :ref:&#96;User Guide &lt;elastic_net&gt;&#96;.\n\nParameters\n----------\nalpha : float\n    Constant that multiplies the penalty terms. Defaults to 1.0\n    See the notes for the exact mathematical meaning of this\n    parameter.\n    &#96;&#96;alpha = 0&#96;&#96; is equivalent to an ordinary least square, solved\n    by the :class:&#96;LinearRegression&#96; object. For numerical\n    reasons, using &#96;&#96;alpha = 0&#96;&#96; with the Lasso object is not advised\n    and you should prefer the LinearRegression object.\n\nl1_ratio : float\n    The ElasticNet mixing parameter, with &#96;&#96;0 &lt;= l1_ratio &lt;= 1&#96;&#96;. For\n    &#96;&#96;l1_ratio = 0&#96;&#96; the penalty is an L2 penalty. &#96;&#96;For l1_ratio = 1&#96;&#96; it\n    is an L1 penalty.  For &#96;&#96;0 &lt; l1_ratio &lt; 1&#96;&#96;, the penalty is a\n    combination of L1 and L2.\n\nfit_intercept : bool\n    Whether the intercept should be estimated or not. If &#96;&#96;False&#96;&#96;, the\n    data is assumed to be already centered.\n\nnormalize : boolean, optional, default False\n    If &#96;&#96;True&#96;&#96;, the regressors X will be normalized before regression.\n\nprecompute : True | False | &apos;auto&apos; | array-like\n    Whether to use a precomputed Gram matrix to speed up\n    calculations. If set to &#96;&#96;&apos;auto&apos;&#96;&#96; let us decide. The Gram\n    matrix can also be passed as argument. 
For sparse input\n    this option is always &#96;&#96;True&#96;&#96; to preserve sparsity.\n    WARNING : The &#96;&#96;&apos;auto&apos;&#96;&#96; option is deprecated and will\n    be removed in 0.18.\n\nmax_iter : int, optional\n    The maximum number of iterations\n\ncopy_X : boolean, optional, default True\n    If &#96;&#96;True&#96;&#96;, X will be copied; else, it may be overwritten.\n\ntol : float, optional\n    The tolerance for the optimization: if the updates are\n    smaller than &#96;&#96;tol&#96;&#96;, the optimization code checks the\n    dual gap for optimality and continues until it is smaller\n    than &#96;&#96;tol&#96;&#96;.\n\nwarm_start : bool, optional\n    When set to &#96;&#96;True&#96;&#96;, reuse the solution of the previous call to fit as\n    initialization, otherwise, just erase the previous solution.\n\npositive : bool, optional\n    When set to &#96;&#96;True&#96;&#96;, forces the coefficients to be positive.\n\nselection : str, default &apos;cyclic&apos;\n    If set to &apos;random&apos;, a random coefficient is updated every iteration\n    rather than looping over features sequentially by default. This\n    (setting to &apos;random&apos;) often leads to significantly faster convergence\n    especially when tol is higher than 1e-4.\n\nrandom_state : int, RandomState instance, or None (default)\n    The seed of the pseudo random number generator that selects\n    a random feature to update. 
Useful only when selection is set to\n    &apos;random&apos;.\n\nAttributes\n----------\ncoef_ : array, shape (n_features,) | (n_targets, n_features)\n    parameter vector (w in the cost function formula)\n\nsparse_coef_ : scipy.sparse matrix, shape (n_features, 1) |             (n_targets, n_features)\n    &#96;&#96;sparse_coef_&#96;&#96; is a readonly property derived from &#96;&#96;coef_&#96;&#96;\n\nintercept_ : float | array, shape (n_targets,)\n    independent term in decision function.\n\nn_iter_ : array-like, shape (n_targets,)\n    number of iterations run by the coordinate descent solver to reach\n    the specified tolerance.\n\nNotes\n-----\nTo avoid unnecessary memory duplication the X argument of the fit method\nshould be directly passed as a Fortran-contiguous numpy array.\n\nSee also\n--------\nSGDRegressor: implements elastic net regression with incremental training.\nSGDClassifier: implements logistic regression with elastic net penalty\n    (&#96;&#96;SGDClassifier(loss=&quot;log&quot;, penalty=&quot;elasticnet&quot;)&#96;&#96;).\n<span  class=\"ansired\">File:           </span>~/anaconda/envs/python3/lib/python3.5/site-packages/sklearn/linear_model/coordinate_descent.py\n<span  class=\"ansired\">Type:           </span>ABCMeta</pre>",
                "selectedType": "Html",
                "pluginName": "Python3",
                "shellId": "8CC62A3EB84144E6A01C1F125BBFB61C",
                "elapsedTime": 324,
                "height": 1925
            },
            "evaluatorReader": true,
            "lineCount": 5
        }
    ],
    "namespace": {}
}
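As a quick sanity check (a suggestion, not something from the thread), it's worth confirming that a hand-edited .bkr file still parses as valid JSON, since a malformed file can fail to load silently. The snippet below uses a trimmed-down sample mirroring the structure of the config above:

```python
import json

# Trimmed-down sample of a .bkr config (sample data, not the full file above)
sample = '''
{
  "beaker": "2",
  "evaluators": [
    {"name": "HTML", "plugin": "HTML"},
    {"name": "R", "plugin": "R"},
    {"name": "Python3", "plugin": "Python3"}
  ],
  "cells": [],
  "namespace": {}
}
'''

# json.loads raises json.JSONDecodeError if the text is malformed
config = json.loads(sample)

# List the evaluators the default notebook would offer
print([ev["name"] for ev in config["evaluators"]])
```

To check the real file, replace `sample` with the contents of the config file (e.g. via `open(...).read()` on its path).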
scottdraves commented 8 years ago

Thanks for the report, we'll check it out.

jolespin commented 8 years ago

Thanks. Beaker is an awesome resource and idea. Can't wait until everything is working perfectly!

scottdraves commented 8 years ago

Turns out there are two things going on here. First, the FAQ is wrong: we changed the name of that file in the last release and I forgot to update the web page. The right name is .../default-notebook.bkr.

But there's another problem, which means you need to restart Beaker to pick up that file if you change it. I've already fixed the FAQ; the fix for loading the file will come out in 1.6 next week. https://github.com/twosigma/beaker-notebook/issues/3968
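Assuming the renamed file lives in the same config directory shown in the cat command earlier in this thread, the rename could be done with a short snippet like this (a sketch based on that assumption, then restart Beaker):

```python
import os

# Config directory from the cat command earlier in the thread (assumed unchanged)
conf = os.path.expanduser("~/.beaker/v1/config")
old = os.path.join(conf, "default.bkr")
new = os.path.join(conf, "default-notebook.bkr")

# Rename only if the old file actually exists
if os.path.exists(old):
    os.rename(old, new)
```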

Many thanks for pointing that out!

scottdraves commented 8 years ago

> Thanks. Beaker is an awesome resource and idea. Can't wait until everything is working perfectly!

you are so welcome! please don't hold your breath! 🐠 🐠 🐠

not only do we fix bugs but we put them in too...

scottdraves commented 8 years ago

https://github.com/twosigma/beaker-notebook-www/commit/4c3d6a4dd7f6fcbbcfb6fe84e6352d696d7b7be4