jupyterlab / frontends-team-compass

A repository for team interaction, syncing, and handling meeting notes across the JupyterLab ecosystem.
https://jupyterlab-team-compass.readthedocs.io/en/latest/
BSD 3-Clause "New" or "Revised" License

Package location for Generative AI in Jupyter #172

Closed. 3coins closed this issue 1 year ago.

3coins commented 1 year ago

This issue is to call a vote on the proposal in https://github.com/jupyterlab/jupyterlab/pull/13804

Where should this work be placed?

We propose that the GAI service API and UI components along with the default model engines should be placed in a package under the JupyterLab org. We believe that GAI provides a core functionality central to working with generative AI in JupyterLab which is missing at the moment in the JupyterLab framework. GAI models can also produce content that is harmful, offensive, or misleading. Making this part of Jupyter will enable the Jupyter OSS community to engage directly in a manner consistent with Jupyter governance model and make sure we continue to build this in a manner that works long term for the Jupyter users.

**Full text of the proposal**

Generative AI (GAI) models can generate novel content rather than simply analyzing or acting on existing data. The generated data is both similar to the training data and a response to the user-provided natural language prompt that describes a task or question. Recent generative AI models such as Amazon CodeWhisperer, Codex, Stable Diffusion, and ChatGPT have demonstrated solid results for many tasks involving natural language (content generation, summarization, question answering), images (generation, explanation, in-painting, out-painting, transformation), data (synthetic data generation, transformation, querying, etc.), and code (autocompletion, explanation, debugging, code reviews, etc.).

Jupyter already has an architecture and user interface for code generation through autocompletion. This architecture has been used to integrate AI-provided autocompletions into JupyterLab and the Jupyter Notebook (Kite, Tabnine). As generative AI models expand to perform other tasks, a generalization of the Jupyter autocompletion architecture will empower users to use generative AI to perform any task in Jupyter’s applications.

We are proposing a new GAI architecture based on an extensible Jupyter server API for registering GAI models and the tasks they perform in Jupyter. This lets third parties integrate their generative AI models into Jupyter, and it lets users enable the models and tasks. New GAI plugins are installed using the JupyterLab extension manager UI or a simple `pip`/`conda` install. Once models have been enabled, users working in JupyterLab or the Jupyter Notebook can:

1. Select anything (text, notebook cell, image, file, etc.),
2. Select or type a prompt to perform a task with the selection, and
3. Insert the AI-generated response in a chosen location (replace, insert below, new file, etc.).

## Terminology (users & developers)

* **Task**: an object that fully describes a task that the user wishes to perform. It includes a human-readable name, a prompt template (what to do), the model engine to use (how to do it), and the insertion mode (where to put it).
* **Insertion mode**: field within a task that describes how the generated output is supposed to be inserted back into the document. For example, the generated output could replace the input, or it could be inserted above or below the input.
* **Model engine**: a class deriving from `BaseModelEngine` that executes a model.
  * Multiple model engines may share the same underlying **model**; a model engine is best thought of as a way to use a GAI model.
* **Prompt**: a string of text sent to a GAI model to generate content; the final string output resulting from a synthesis of prompt variables and the prompt template.
  * The prompt is computed in the backend model engine, and it may be shown to the user.
* **Prompt variables**: variables sent to the Prompt API from the client.
* **Prompt body**: a reserved prompt variable that serves as the main body of the prompt, e.g. the contents of a notebook cell or a text selection.
  * Analogous to the relation between the body of an HTML document and the document itself.
* **Prompt template**: the template that builds a prompt from the given prompt variables.
* **Prompt output**: the response that a GAI model sends back after a prompt is run successfully.
  * In the first version, we expect that text-based models’ prompt output will be in Markdown format, a mixture of code blocks and markdown-flavored text.
  * Some GAI models can generate images, video, and audio content.
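To make these terms concrete, here is a minimal sketch of a task object expressed as a Python dict. The field names mirror the Task API below; the specific values (the `gpt3` engine, the `explain-code` task, and so on) are illustrative assumptions rather than part of the proposal.

```python
# Illustrative only: a task tying together the terms defined above.
# Field names follow the Task API; the values are made-up examples.
example_task = {
    "id": "gpt3:explain-code",                   # engine-scoped task ID, f"{engine_name}:{task_id}"
    "name": "Explain code",                      # human-readable name
    "engine": "gpt3",                            # model engine that runs the prompt (how to do it)
    "prompt_template": "Explain this:\n{body}",  # what to do; `body` is the reserved prompt body
    "insertion_mode": "below",                   # where to put the prompt output
}
```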
## Terminology (contributors)

* **Document widget**: a main area widget that contains an editor widget.
  * Examples: `INotebookTracker#currentWidget`, `IEditorTracker#currentWidget`
* **Editor widget**: the widget within a document widget that’s meant to render the content of a document and potentially serve as an editor as well.
  * Retrieved by `DocumentWidget#content`

## Service API

### Prompt API

The Prompt API synthesizes a prompt, executes the model, and returns the output for a given task. This will be the core handler that provides our generated recommendations in notebooks.

Request schema:

```
POST /api/gai/prompt
{
  "task_id": string,       // task ID
  "prompt_variables": {    // object of prompt variables. `body` is reserved and must be specified in all requests
    "body": string,        // prompt body as a string. for images, this may be a URL or b64 encoding
    [key: string]: string  // extra prompt variables specific to a prompt template
  }
}
```

Response schema:

```
200 OK
{
  "output": string,         // model output. can be a URL or b64 encoding for images
  "insertion_mode": string  // insertion mode. see "terminology" for more details.
}
```

### Task API

The Task API allows users to create, read, update, and delete tasks.

#### ListTasks

The ListTasks API allows users to list all the tasks for registered model engines.

Request schema:

```
GET /api/gai/tasks
```

Response schema:

```
200 OK
{
  "tasks": [
    {
      "id": string,     // task ID
      "name": string,   // sentence-cased human readable name, e.g. "Explain code"
      "engine": string  // e.g. "gpt3"
    },
    ...
  ]
}
```

#### DescribeTask

The DescribeTask API allows you to retrieve additional metadata about a task, such as the prompt template.

Request schema:

```
GET /api/gai/tasks/{id}
```

Response schema:

```
200 OK
{
  "name": string,             // human readable prompt template name, e.g. "Explain code"
  "engine": string,           // model engine name
  "prompt_template": string,  // prompt template content, e.g. "Explain this:\n{body}"
  "insertion_mode": string    // insertion mode. see "terminology" for more details
}
```

### Engines API

#### ListEngines

The ListEngines API lists all model engines registered in the backend, i.e. ones that can be used to generate outputs via the Prompt API.

Request schema:

```
GET /api/gai/engines
```

Response schema:

```
{
  "engines": [
    {
      "name": string,        // name of the model, e.g. "gpt2"
      "input_type": "txt",   // input type. just "txt" for now
      "output_type": "txt"   // output type. just "txt" for now
    },
    ...
  ]
}
```

## Server implementation outline

### jupyter_gai server extension

The base `jupyter_gai` extension adds handlers for the HTTP APIs and exposes a `BaseGaiModelEngine` class for other server extensions to extend.

```python
class BaseGaiModelEngine:
    name: str
    input_type: str
    output_type: str

    def list_default_tasks(self) -> List[DefaultTaskDefinition]:
        pass

    async def execute(self, prompt: str):
        pass
```
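To illustrate how a client would exercise the Prompt API handler above, here is a minimal sketch in Python. The server URL, token handling, task ID, and code snippet are assumptions made for the example and are not prescribed by the proposal.

```python
# Illustrative client-side call against the proposed Prompt API.
# Assumes a local Jupyter server and an auth token in JUPYTER_TOKEN.
import os
import requests

base_url = "http://localhost:8888"
headers = {"Authorization": f"token {os.environ['JUPYTER_TOKEN']}"}

response = requests.post(
    f"{base_url}/api/gai/prompt",
    headers=headers,
    json={
        "task_id": "gpt3:explain-code",  # hypothetical task ID
        "prompt_variables": {
            # `body` is reserved and must be present in every request
            "body": "def add(a, b):\n    return a + b",
        },
    },
)
response.raise_for_status()
result = response.json()
print(result["insertion_mode"], result["output"])
```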
## Developer guide

### Registering GAI model engines

The GAI extension uses entry points under the group name `jupyter_gai.model_engine_class` to discover new model engines. Model engine providers can create a new engine by extending the `BaseModelEngine` class and publishing a Python package with this module exposed as an entry point. For example, here is the code and setup for `TestModelEngine`.

```python
# test_model_engine.py
class TestModelEngine(BaseModelEngine):
    name = "test"
    input_type = "txt"
    output_type = "txt"

    def list_default_tasks(self) -> List[DefaultTaskDefinition]:
        return [{
            "id": "test-task",
            "name": "Test Task",
            "prompt_template": "Test Prompt Template",
            "insertion_mode": "test-insertion-mode"
        }]

    async def execute(self, task: DescribeTaskResponse, prompt_variables: Dict[str, str]):
        return "test output"
```

Expose this model engine as an entry point. Here, we are assuming that the package is named `jupyter_gai_test`.

```toml
# pyproject.toml
[project.entry-points."jupyter_gai.model_engine_class"]
test = "jupyter_gai_test:TestModelEngine"
```

### Registering default tasks

Each model engine exposes a method `list_default_tasks()`, which lists the default tasks that it will install into the Task DB. It has a return type of `List[DefaultTaskDefinition]`, which is defined as follows:

```python
class DefaultTaskDefinition(TypedDict):
    id: str  # engine-scoped task ID. should be human readable, e.g. "explain-code"
    name: str
    prompt_template: str
```

The engine-scoped task ID is combined with the engine name to form a human-readable, unique task ID for a default task. A default task ID must have the format `f"{engine_name}:{task_id}"`. This constraint is required for the UI code to have a stable identifier for default tasks. Note that there is no such constraint on custom task IDs; any globally unique string is sufficient. For our implementation, we will likely just use UUIDv4 by default.

## UI Components

#### Explain or codify cell button in cell toolbar

There is a new button in the cell toolbar that lets users explain the code in a “code” cell or generate python3 code based on the text in a “markdown” cell.

*Explain code in code cell toolbar*

![gai-explain-code](https://user-images.githubusercontent.com/289369/213227511-c839637d-c9cc-42c8-aaaf-9c500148ab64.gif)

*Generate code in markdown cell toolbar*

![gai-generate-code](https://user-images.githubusercontent.com/289369/213227664-123dbfaf-146d-4e2d-81da-af13b1b25076.gif)

#### Generate output from selection...

“Generate output from selection” mode provides a set of tasks for generating output based on a text selection. New tasks can be installed through Python entry points (see the “Registering default tasks” section).

![Explain_from_selection](https://user-images.githubusercontent.com/289369/213227728-d94f83b6-ce2e-4d8d-b9c5-e59006f041b3.gif)

#### Command-driven insertion logic

```typescript
export type InsertionContext = {
  widget: Widget;
  request: GaiService.IPromptRequest;
  response: GaiService.IPromptResponse;
};

/**
 * Function that handles the insertion of Prompt API output into the
 * active document. This function expects that a command with an id
 * of `gai:insert-<insertion_mode>` is registered; the context is passed
 * through to the command for handling insertion. See `InsertionContext`
 * for more info.
 *
 * @param app - Jupyter front end application
 * @param context - Insertion context
 */
export async function insertOutput(
  app: JupyterFrontEnd,
  context: InsertionContext
): Promise<boolean> {
  app.commands.execute(
    `gai:insert-${context.response.insertion_mode}`,
    context as any
  );
  return true;
}
```

Default commands are registered to allow prompt output to be added above, below, or as an inline replacement of the selected text. These commands work for all document widgets with an editor.
Extension writers can enable additional insertion modes by registering a command following the pattern `gai:insert-<insertion_mode>` and handling the insertion logic within its execute callback. For example, here is the code that handles insertion of prompt output in the default inserter.

```typescript
export function buildDefaultInserter(mode: 'above' | 'below' | 'replace') {
  return function insert(context: InsertionContext): boolean {
    const { widget, request, response } = context;
    const editor = getEditor(widget);
    if (!editor) {
      return false;
    }
    switch (mode) {
      case 'above':
        editor.replaceSelection?.(
          `${response.output}\n\n${request.prompt_variables.body}`
        );
        break;
      case 'below':
        editor.replaceSelection?.(
          `${request.prompt_variables.body}${response.output}`
        );
        break;
      case 'replace':
        editor.replaceSelection?.(response.output);
        break;
    }
    return true;
  };
}
```

This handler is registered as the execute function of the default commands in the GAI app.

```typescript
commands.addCommand(CommandIDs.insertAbove, {
  execute: buildDefaultInserter('above') as any
});
commands.addCommand(CommandIDs.insertBelow, {
  execute: buildDefaultInserter('below') as any
});
commands.addCommand(CommandIDs.insertReplace, {
  execute: buildDefaultInserter('replace') as any
});
```

Summary

Vote **Yes** if you agree to temporarily creating a new repo under JupyterLab for the proposed project and moving it to the Jupyter Incubator program once it is up and running again (estimated to be 2-3 months from now). Vote **No** if you disagree.


@jupyterlab/jupyterlab-council votes

The voting window closed on February 8. The quorum of 9/18 was met, with 16/18 @jupyterlab/council members voting. The result was Yes.

13 Yes, 2 No, 1 Abstain

jasongrout commented 1 year ago

This looks like an awesome capability. Thanks for the proposal!

Who will be maintaining this repo, at least initially?

jasongrout commented 1 year ago

Summarizing notes from dev meeting discussion today (errors in summary are mine)

jasongrout commented 1 year ago

> Vote Yes if you agree to creating a new repo under JupyterLab for the proposed project.

@3coins - How do we vote? I suggest picking two emojis for the two options.

jtpio commented 1 year ago

Summarizing notes from dev meeting discussion today

Another suggestion that was posted on the chat during the meeting was to first incubate the project under the jupyterlab-contrib organization on GitHub: https://github.com/jupyterlab-contrib.

Posting it here for visibility and to add it to the list of options for where this package could go.

andrii-i commented 1 year ago

> @3coins - How do we vote? I suggest picking two emojis for the two options.

@jasongrout Let’s use a thumbs-up emoji (:+1:) for “Yes” and a thumbs-down emoji (:-1:) for “No”.

3coins commented 1 year ago

@jasongrout Thanks for posting the questions from the meeting. I agree on the use of emoji; let’s hold off on the vote for a few days until the discussion on this topic has wrapped up. My team will answer questions here today and add some more details, so people have better context before voting.

TiesdeKok commented 1 year ago

Thanks for putting this together; this would definitely be a great capability with a lot of potential!

A few high-level thoughts that come to mind (as a GAI power user, both for academic work and for my browser extension):

Happy to volunteer and contribute where helpful; this work is close to my interests.

andrii-i commented 1 year ago

> Summarizing notes from dev meeting discussion today
>
> Another suggestion that was posted on the chat during the meeting was to first incubate the project under the jupyterlab-contrib organization on GitHub: https://github.com/jupyterlab-contrib.
>
> Posting it here for visibility and to add it to the list of options for where this package could go.

@jtpio thank you for surfacing this question. Jupyter GAI should be developed and governed under the Project Jupyter organizational umbrella, according to the processes and guidelines established within the organization. And jupyterlab-contrib is not officially a part of Project Jupyter.

bollwyvl commented 1 year ago

As discussed on the call (and some other percolating thoughts from the demo):

Acronym branding

The acronym-based branding, as it stands, is relatively opaque. Before last week, GAI had broadly referred to:

Indeed, the GAI of some of these GAI is so far from GAI that they are more like a gai.

Broader applicability

The case was made that the proposed models are not substantively different tools from spell checkers and formatters.

Great! Perhaps a more effective branding is the more activity-based jupyterlab-suggest (or -suggestions), which would enable use of these proven and, crucially, uncontroversial tools. By broadening the scope, this would allow for things that any user can own (and that can be exhaustively tested in CI), such as, notionally:

The above all take different scopes:

Solving these problems in a general way will allow for many interesting applications, beyond pay-as-you-go ML models trained without regard to the licenses of the corpus, which have yet to play out in court. Jupyter just straight up can't be involved in these.

Defining Data Structures

Where Jupyter has been able to adopt existing standards, we can use existing tools to validate content. This is most notable in the notebook format, but additionally relevant in the documentation of Jupyter REST APIs, JupyterLab settings, notebook metadata, and others.

Versioned schemas, such as the Jupyter Notebook Format, also offer a way to provide forward and backward migration, which would alleviate some of the concerns about the ability of downstream plugins to be updated to the newest format.

Recommended: use well-annotated JSON Schema rather than inventing something entirely new.
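As a notional sketch of what that buys you, the off-the-shelf `jsonschema` package can already validate such definitions; the schema and sample task below borrow field names from the proposal's Task API but are otherwise my own illustration:

```python
# Notional example: validating a task definition with the `jsonschema` package.
# The field names mirror the proposed DescribeTask response; the schema itself
# and the sample task are illustrative, not part of any existing API.
from jsonschema import validate

TASK_SCHEMA = {
    "type": "object",
    "properties": {
        "name": {"type": "string", "description": "Human-readable task name"},
        "engine": {"type": "string", "description": "Model engine name"},
        "prompt_template": {"type": "string"},
        "insertion_mode": {"type": "string"},
    },
    "required": ["name", "engine", "prompt_template", "insertion_mode"],
}

task = {
    "name": "Explain code",
    "engine": "gpt3",
    "prompt_template": "Explain this:\n{body}",
    "insertion_mode": "below",
}

validate(instance=task, schema=TASK_SCHEMA)  # raises ValidationError if the task is malformed
```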

User Experience

Building on the above, JSON Schema can feed directly into react-json-schema-form, already a core dependency of JupyterLab 3, with expanded use in JupyterLab 4. This allows for an extremely composable form, with field-level customization, and better-than-average, near-instantaneous error messages without a roundtrip to the server.

Modal popups are pretty bad, and I hate seeing more added that aren't really undo-able and are world-blocking, e.g. Delete all the files? [yes][no]

As an iterative activity, I'd recommend moving the whole UI to a sidebar, such that one can still operate on and copy/paste their content. Notebook 7 will support sidebars, so there's really little downside.

In the specific case of a suggestion to a notebook cell, reusing the @jupyterlab/cells elements would allow for drag-and-drop from the sidebar into the notebook in question. Or, if the scope of a suggestion is a whole cell, allow for displaying a diff a la nbdime, and then applying the diff.

Whether new cell(s) are created or existing ones patched, it seems relevant to carry some metadata about this suggestion, which could be further formalized and inspected, or reused. For example, storing the prompt for an image could become its accessibility-enhancing image caption, as could the source information, generation time, cost, random seeds, etc. Of course things like API tokens, usernames, etc. would not, and these should likely be in an entirely separate section of the UI.
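As a purely notional illustration (none of these keys are defined anywhere today), the provenance metadata carried on a generated cell might look something like:

```python
# Hypothetical provenance metadata for a generated suggestion; the "gai"
# namespace and every key inside it are invented for illustration only.
suggestion_metadata = {
    "gai": {
        "prompt": "Explain this:\n...",          # for images, this could double as a caption
        "engine": "gpt3",
        "task_id": "gpt3:explain-code",
        "generated_at": "2023-01-25T12:00:00Z",
        "seed": 42,
        # Secrets such as API tokens or usernames deliberately do not belong here.
    }
}
```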

dlqqq commented 1 year ago

Thank you all for your engagement! It's wonderful to see everybody excited about this. Let me do my best to address some of the comments so far:

@TiesdeKok You have raised all excellent points. The only actionable item right now is potentially changing how we handle secrets server-side, since the OpenAI model engines require an API key to function. Right now, we're encouraging users to store this in a traitlets configuration file, e.g. with the contents `c.GPT3ModelEngine.api_key = "<key>"`. We're not sure if this is sufficient from a security perspective. Thoughts on this?
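For reference, the configuration we have in mind is a standard traitlets-based Jupyter config file, roughly like the sketch below; the file path is just an example, and `GPT3ModelEngine` is the engine class from the PR.

```python
# ~/.jupyter/jupyter_server_config.py  (example path; any Jupyter config file works)
# `c` is the config object provided by the traitlets config loader.
c.GPT3ModelEngine.api_key = "<key>"  # replace with your OpenAI API key; keep this file private
```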

@bollwyvl Thank you for documenting our concerns regarding the name. Again, we're very open to suggestions here, and we'll keep the team informed of any alternative name proposals we can offer.

> Solving these problems in a general way will allow for many interesting applications, beyond pay-as-you-go ML models trained without regard to the licenses of the corpus, which have yet to play out in court. Jupyter just straight up can't be involved in these.

Hm, could you elaborate more on your thoughts regarding this?

> Recommended: use well-annotated JSON Schema rather than inventing something entirely new.

> Building on the above, JSON Schema can feed directly into react-json-schema-form, already a core dependency of JupyterLab 3, with expanded use in JupyterLab 4. This allows for an extremely composable form, with field-level customization, and better-than-average, near-instantaneous error messages without a roundtrip to the server.

Thank you for the technical suggestion. What we were thinking is that the GAI server extension actually returns a form schema for prompt variables in the DescribeTask API, so that the frontend can render a form for the user to fill out, and then return these values when invoking the model via the Prompt API. This is how we plan on supporting multiple prompt variables beyond a user's text selection, e.g. "language version" or "image size".
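To make that concrete, a DescribeTask response could carry something like the sketch below (shown as a Python dict). The `prompt_variables_schema` field name and its contents are hypothetical and not part of the current API.

```python
# Hypothetical DescribeTask response extended with a JSON Schema describing
# the extra prompt variables a task accepts. All field values are examples.
describe_task_response = {
    "name": "Generate image",
    "engine": "dalle",
    "prompt_template": "{body}",
    "insertion_mode": "below",
    "prompt_variables_schema": {
        "type": "object",
        "properties": {
            "image_size": {
                "type": "string",
                "title": "Image size",
                "enum": ["256x256", "512x512", "1024x1024"],
            },
        },
        "required": ["image_size"],
    },
}
# A frontend could render `prompt_variables_schema` with react-json-schema-form
# and send the filled-in values back as prompt variables to the Prompt API.
```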

> Modal popups are pretty bad, and I hate seeing more added that aren't really undo-able and are world-blocking, e.g. Delete all the files? [yes][no]
>
> As an iterative activity, I'd recommend moving the whole UI to a sidebar, such that one can still operate on and copy/paste their content. Notebook 7 will support sidebars, so there's really little downside.

Yes, we agree. As we mentioned in the call, we would very much like to migrate away from the "select-then-dialog" user experience we are providing today, and are open to ideas from the community.

> Whether new cell(s) are created or existing ones patched, it seems relevant to carry some metadata about this suggestion, which could be further formalized and inspected, or reused. For example, storing the prompt for an image could become its accessibility-enhancing image caption, as could the source information, generation time, cost, random seeds, etc. Of course things like API tokens, usernames, etc. would not, and these should likely be in an entirely separate section of the UI.

This is actually already possible just by changing the logic of the inserters (insertion commands). Each inserter receives an `InsertionContext` that includes information about the initial request, hence a notebook inserter can add cell metadata about the request when inserting output.

bollwyvl commented 1 year ago

> server extension actually returns a form schema

Right, but extensions themselves could also provide their templates as JSON Schema, rather than as some new JSON-Schema-like invention.

> thoughts regarding this

The Jupyter Project, and by extension its user community, has been burned a number of times by having attractive capabilities that are strongly tied to a specific upstream API vendor. There is no reason to assume benign intent in any of the current large language model API providers, or their future acquirers.

Upcoming litigation (or the threat thereof), and the generally gross misunderstanding of open source's role in the software supply chain, is probably enough reason to not expose users, maintainers, or the brand to these risks.

By focusing on a sound, general API/UI, this could immediately provide offline, non-risky benefits like configurable spell check, formatting, etc., beyond what, say, the LSP CodeLens offers.

andrii-i commented 1 year ago

@bollwyvl thank you for the sound points and suggestions.

> server extension actually returns a form schema
>
> Right, but extensions themselves could also provide their templates as JSON Schema, rather than as some new JSON-Schema-like invention.

Agreed, using JSON Schema as the standard format for extension schemas seems like a good idea and fits the existing architecture well. react-json-schema-form also looks like a good solution for easily rendering JSON schemas registered by extensions as forms.

> By focusing on a sound, general API/UI, this could immediately provide offline, non-risky benefits like configurable spell check, formatting, etc., beyond what, say, the LSP CodeLens offers.

GAI is a model-agnostic framework that would allow people to experiment and interact with generative AI engines. People using it are free to implement arbitrary tasks working through arbitrary engines, or to install tasks/engines implemented by others. We created a DALL·E extension as an example; a person using GAI can just as well use it with open-source models from https://huggingface.co/ or to test their own model.
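For example, a notional engine wrapping an open-source Hugging Face model could look roughly like the sketch below. The `transformers` pipeline usage is standard, but the class itself is only an illustration against the proposed `BaseModelEngine` interface, not an existing GAI module.

```python
# Notional sketch: a model engine backed by a local open-source model.
# `BaseModelEngine`, `DescribeTaskResponse`, and the execute() signature come
# from the proposal; the transformers-based implementation is illustrative.
from typing import Dict
from transformers import pipeline

class HuggingFaceTextEngine(BaseModelEngine):
    name = "hf-gpt2"
    input_type = "txt"
    output_type = "txt"

    def __init__(self):
        # Loads the model locally; no calls to an external API at inference time.
        self._generator = pipeline("text-generation", model="gpt2")

    def list_default_tasks(self):
        return [{
            "id": "continue-text",
            "name": "Continue text",
            "prompt_template": "{body}",
            "insertion_mode": "below",
        }]

    async def execute(self, task: "DescribeTaskResponse", prompt_variables: Dict[str, str]):
        outputs = self._generator(prompt_variables["body"], max_new_tokens=64)
        return outputs[0]["generated_text"]
```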

Creating a general, extensible, engine-agnostic API/UI for interacting with generative AI engines is a goal of this project. So I think we are on the same page here. If I'm missing something and you are suggesting a different direction, please elaborate.

dlqqq commented 1 year ago

@bollwyvl

> The Jupyter Project, and by extension its user community, has been burned a number of times by having attractive capabilities that are strongly tied to a specific upstream API vendor. There is no reason to assume benign intent in any of the current large language model API providers, or their future acquirers.

I understand you are concerned about upstream AI research companies, but what are your concerns specifically about this proposal? We believe that this proposal is sufficiently general and modular to be agnostic of whichever companies these models and their APIs originate from.

fcollonval commented 1 year ago

I have one comment about the vote request. As stated in the decision-making process, an informal discussion - as started here - is required to clarify the request, and it may be sufficient to move forward without the need for a vote.

If we need a vote, we should follow the Jupyter guidance. That means the first post must have a list of all eligible voters (i.e. the JupyterLab council members) with 3 checkboxes (Yes, No, and Abstain), and you should set a deadline for the vote (at least a couple of weeks, so that people can be reminded at the weekly calls).

Example of vote request issue: https://github.com/jupyterlab/team-compass/issues/143

So I would suggest bringing this point up again at next Wednesday's meeting to see if the discussion brings a consensus. If not, we can ask for a formal vote after the meeting.

ellisonbg commented 1 year ago

Hi everyone, thank you for all the comments, questions, and suggestions (disclosure: this is my team at AWS working on this). A few quick replies:

Thanks everyone!

dlqqq commented 1 year ago

Hey team, we have finally returned with our naming suggestions for this repo and its packages. We believe a good name is described by the following:

Taking these guidelines into account, we believe the best name for the project is Jupyter AI. In other words, we propose simply dropping the "G". Keeping the adjective "generative" doesn't provide much additional context (as most models generate output), and leads to ambiguous and problematic pronunciation.

Other terms used in the project that include the phrase "GAI" have also been renamed. "GAI modules" are now just AI modules.

The monorepo name would be the project name in kebab-case, e.g. `jupyter-ai`. Its subrepositories will all follow the convention `jupyter-ai*`. For example, the monorepo with our current AI modules would have the following structure:

```
jupyter-ai/
|_ packages/
   |_ jupyter-ai-core/
   |_ jupyter-ai-gpt3/
   |_ jupyter-ai-dalle/
```

Here, `jupyter-ai-core` is the core server + lab extension, previously named just `jupyter-gai`. The other subrepos are the AI modules shown in the PR.

3coins commented 1 year ago

@jupyterlab/jupyterlab-council

As per the discussion in today's JupyterLab weekly meeting, the community has decided to put this decision to a vote. I have added all JupyterLab council member names, along with a checkbox for their decision, in the summary section.

The vote will close in a week, on Thursday, Feb 2nd.

isabela-pf commented 1 year ago

I’ve been trying to listen more than talk on this discussion because I know it’s not my area of expertise, but I still don’t feel like I’ve gotten a definitive answer to the question I asked on the January 18 JupyterLab call: why does this work belong in/under JupyterLab? Why is it not suited to be an extension or similar kind of satellite? To be clear, these questions aren’t a commentary on contribution quality or potential use cases or generative AI ethics.

Here are the answers I think I’ve heard (for your confirmation and/or debunking):

Thanks in advance for giving this comment a look. I want to make sure we have this information as clear as we can before voting.


I want to call out that the terminology section was very helpful to me. Thanks for taking the time to make sure we’re communicating precisely as we can.

ellisonbg commented 1 year ago

@isabela-pf thanks for your questions, hopefully I can answer them.

ajbozarth commented 1 year ago

I think the actual best place for this would be the Jupyter incubation program, but it is essentially dormant right now.

I think this is what felt off about this to me: I felt like the jupyterlab org was not quite the right place for it, but I couldn't figure out a solid reason why. Perhaps this package would be a good reason to reactivate the incubator and become the first active project under the new governance. I'll hold my vote until we can discuss more (possibly at tomorrow's meeting).

ellisonbg commented 1 year ago

@ajbozarth thanks, I will be sure to attend to discuss.

ellisonbg commented 1 year ago

@SylvainCorlay

ellisonbg commented 1 year ago

We have talked with a few folks this week about this proposal to get more feedback. The sense we are getting is that the best place for this work is the Jupyter Incubator program. However, the Jupyter Incubator program is dormant and has not been refactored for the new Jupyter governance model. The new Executive Council and Software Steering Council will work together to relaunch the Jupyter incubator program, but we expect that to take a few months with all the other tasks those two bodies are working on right now. Given that, we would like to change this proposal along the following lines:

We will bring this to the weekly JupyterLab meeting today and will reset the vote and update the proposal accordingly.

blink1073 commented 1 year ago

For precedent, the Jupyter Server team has agreed to "incubate" projects such as jupyter-scheduler and jupyverse, with the understanding that they would be marked as experimental until maturity was reached. Had there been an active incubator program, we probably would have preferred for both of them to exist there. It has not been a burden to incubate these projects, and we have collaborated with the authors at our weekly calls.

ellisonbg commented 1 year ago

We discussed this in the JupyterLab weekly meeting. After a short discussion, @isabela-pf and @jasongrout proposed that we continue with the revised vote. The voting form has been updated and the clock has been reset. Thanks everyone!

3coins commented 1 year ago

Thank you all for participating in the vote. The voting period for this issue has closed. Here is the summary for the votes:

Yes: 13 votes, No: 2 votes, Abstain: 1 vote, No vote: 2

Given the 16 participating council members, the 50% quorum is met and the proposal passes.

3coins commented 1 year ago

I don't seem to have permissions to create a new repo under jupyterlab org. Can one of the members with access help create this? The proposed name for the repo is jupyter-ai.

blink1073 commented 1 year ago

I created https://github.com/jupyterlab/jupyter-ai and gave "maintain" access to the council.

ellisonbg commented 1 year ago

Thanks everyone for participating in the discussion and vote, and thanks to Steve for creating the repo.
