codota / tabnine-nvim

Tabnine Client for Neovim
https://tabnine.com
346 stars, 31 forks

Tabnine Chat Feedback #92

Open amirbilu opened 1 year ago

amirbilu commented 1 year ago

This is the place to leave feedback and discuss issues on Tabnine Chat for nvim. Note this feature is still in BETA. To join the BETA, send your Tabnine Pro email to support@tabnine.com.

shuxiao9058 commented 1 year ago

I recently ported TabNine Chat to Emacs:

https://github.com/shuxiao9058/tabnine

amirbilu commented 1 year ago

@shuxiao9058 this is awesome!!! Please leave us a message at support@tabnine.com to get Tabnine Pro credits and Tabnine swag.

shuxiao9058 commented 1 year ago

Thanks @amirbilu, email already sent.

chuckpr commented 1 year ago

Been experimenting with Tabnine Chat in Neovim. Works well except that I get this message frequently:

Error executing vim.schedule lua callback: ...ck/packer/start/tabnine-nvim/lua/tabnine/chat/binary.lua:60: Expected value but found unexpected end of string at character 8193
stack traceback:
        [C]: in function 'decode'
        ...ck/packer/start/tabnine-nvim/lua/tabnine/chat/binary.lua:60: in function 'cb'
        vim/_editor.lua:325: in function <vim/_editor.lua:324>
amirbilu commented 1 year ago

Thanks! Tomorrow I'll add some debug info so we can figure this out together.


aarondill commented 1 year ago

That's the JSON decode function, which means the chat binary is emitting JSON that is not syntactically valid (I don't know why or how, though).

amirbilu commented 1 year ago

Yeah, for sure. It's not supposed to happen, so I wonder what it actually outputs.
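One detail worth noting: 8193 is exactly one byte past an 8 KiB boundary, so a plausible (unconfirmed) explanation is that the binary's stdout arrives in 8192-byte chunks and each chunk is decoded on its own instead of being buffered until a complete newline-terminated message is available. A minimal Python sketch of the two behaviours; the message shape here is hypothetical, not the binary's actual protocol:

```python
import json

# Hypothetical large single-line JSON message, as the chat binary might emit.
message = json.dumps({"command": "update_chat_conversation",
                      "data": {"text": "x" * 20000}}) + "\n"

CHUNK = 8192  # typical pipe read size
chunks = [message[i:i + CHUNK] for i in range(0, len(message), CHUNK)]

# Naive approach: decode every chunk as it arrives -> partial JSON fails.
naive_errors = 0
for chunk in chunks:
    try:
        json.loads(chunk)
    except json.JSONDecodeError:
        naive_errors += 1

# Buffered approach: accumulate until a newline, then decode complete lines.
buffer, decoded = "", []
for chunk in chunks:
    buffer += chunk
    while "\n" in buffer:
        line, buffer = buffer.split("\n", 1)
        decoded.append(json.loads(line))

print(naive_errors)   # -> 3 (every partial chunk fails to parse)
print(len(decoded))   # -> 1 (one complete message decoded)
```

If this guess is right, the fix would be to buffer partial reads in binary.lua rather than decoding each callback payload directly.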


amirbilu commented 1 year ago


Can you please try https://github.com/codota/tabnine-nvim/pull/94? You should get a nicer debug message. When you get it, attach it here. Appreciate it!

chuckpr commented 1 year ago

Ok, here are the messages I am seeing using Chat built from the debug-message branch:

[tabnine-nvim] Failed to decode chat message: {"id":"25","command":"update_chat_conversation","data":{"id":"dc4e2d3c-041b-4fe4-9b80-35c7116e78e6","messages":[{"id":"c2bd9e03-2eb1-46e7-a591-1d334959296c","conversationId":"dc4e2d3c-041b-4fe4-9b80-35c7116e78e6","text":"/explain-code","isBot":false,"timestamp":"1689879814043","intent":"explain-code","editorContext":{"fileCode":"import altair as alt\nimport pandas as pd\nfrom vega_datasets import data\n\n# Load the data from the Vega dataset\ncars = data.cars()\n\n# Create a bar chart using the Altair library\nalt.Chart(cars).mark_bar().encode(\n    x=\"Horsepower:Q\", y=\"Miles_per_Gallon:Q\", color=\"Origin:N\"\n)\n\n\n# Load the data from the Vega dataset\ncars = data.cars()\n\n\n# Load the data from the Vega dataset\ncars = data.cars()\n\n# Melt the data frame\nmelted_cars = pd.melt(\n    cars,\n    id_vars=[\"Name\", \"Miles_per_Gallon\"],\n    value_vars=[\n        \"Horsepower\",\n        \"Cylinders\",\n        \"Displacement\",\n        \"Weight_in_lbs\",\n        \"Acceleration\",\n        \"Year\",\n    ],\n)\n\n# Create a jitter chart for all the values in the melted_cars dataframe\njitter_chart = (\n    alt.Chart(melted_cars)\n    .mark_circle(size=60)\n    .encode(\n        x=\"value:Q\",\n        y=\"variable:N\",\n        color=\"variable:N\",\n        tooltip=[\"value:Q\", \"variable:N\"],\n    )\n    .transform_calculate(jitter=\"random() - 0.5\")\n    .transform_joinaggregate(mean=\"mean(value)\", count=\"count(value)\")\n    .transform_calculate(\n        x=\"if(datum.count > 1, (datum.mean - datum.stddev) + jitter, mean)\",\n        y=\"if(datum.count > 1, (datum.mean + datum.stddev) + jitter, mean)\",\n    )\n    .transform_calculate(r=\"min(width, height) / 2\")\n    .transform_scale(x=[-r, r], y=[-r, r])\n)\n\n# Display the jitter chart\njitter_chart\n# Print the first few rows of the melted data frame\nprint(melted_cars.head())import altair as alt\n\n# Load the data from the Vega dataset\ncars = 
data.cars()\n\n# Create a bar chart using the Altair library\nalt.Chart(cars).mark_bar().encode(\n    x=\"Horsepower:Q\", y=\"Miles_per_Gallon:Q\", color=\"Origin:N\"\n)\n\n\n# Load the data from the Vega dataset\ncars = data.cars()\n\n\n# Load the data from the Vega dataset\ncars = data.cars()\n\n# Melt the data frame\nmelted_cars = pd.melt(\n    cars,\n    id_vars=[\"Name\", \"Miles_per_Gallon\"],\n    value_vars=[\n        \"Horsepower\",\n        \"Cylinders\",\n        \"Displacement\",\n        \"Weight_in_lbs\",\n        \"Acceleration\",\n        \"Year\",\n    ],\n)\n\n# Create a jitter chart for all the values in the melted_cars dataframe\njitter_chart = (\n    alt.Chart(melted_cars)\n    .mark_circle(size=60)\n    .encode(\n        x=\"value:Q\",\n        y=\"variable:N\",\n        color=\"variable:N\",\n        tooltip=[\"value:Q\", \"variable:N\"],\n    )\n    .transform_calculate(jitter=\"random() - 0.5\")\n    .transform_joinaggregate(mean=\"mean(value)\", count=\"count(value)\")\n    .transform_calculate(\n        x=\"if(datum.count > 1, (datum.mean - datum.stddev) + jitter, mean)\",\n        y=\"if(datum.count > 1, (datum.mean + datum.stddev) + jitter, mean)\",\n    )\n    .transform_calculate(r=\"min(width, height) / 2\")\n    .transform_scale(x=[-r, r], y=[-r, r])\n)\n\n# Display the jitter chart\njitter_chart\n# Print the first few rows of the melted data frame\n","selectedCode":"","selectedCodeUsages":[]},"retrievalContext":{"documents":[]}},{"id":"24cc71cd-95db-4493-8e0c-42620fc5af85","conversationId":"dc4e2d3c-041b-4fe4-9b80-35c7116e78e6","text":"The selected code is a Python script that imports the Altair and Pandas libraries, loads the cars dataset from the Vega dataset, creates a bar chart using the Altair library, and displays a jitter chart.\n\nThe script starts by importing the Altair and Pandas libraries:\n```python\nimport altair as alt\nimport pandas as pd\n```\n\nIt then loads the cars dataset from the Vega dataset and creates a bar 
chart using the Altair library:\n```python\n# Load the data from the Vega dataset\ncars = data.cars()\n\n# Create a bar chart using the Altair library\nalt.Chart(cars).mark_bar().encode(\n    x=\"Horsepower:Q\", y=\"Miles_per_Gallon:Q\", color=\"Origin:N\"\n)\n```\n\nNext, the script loads the cars dataset again and uses Pandas to melt the data frame:\n```python\n# Load the data from the Vega dataset\ncars = data.cars()\n\n\n# Melt the data frame\nmelted_cars = pd.melt(\n    cars,\n    id_vars=[\"Name\", \"Miles_per_Gallon\"],\n    value_vars=[\n        \"Horsepower\",\n        \"Cylinders\",\n        \"Displacement\",\n        \"Weight_in_lbs\",\n        \"Acceleration\",\n        \"Year\",\n    ],\n)\n```\n\nThe script then creates a jitter chart for all the values in the melted_cars dataframe using Altair:\n```python\n# Create a jitter chart for all the values in the melted_cars dataframe\njitter_chart = (\n    alt.Chart(melted_cars)\n    .mark_circle(size=60)\n    .encode(\n        x=\"value:Q\",\n        y=\"variable:N\",\n        color=\"variable:N\",\n        tooltip=[\"value:Q\", \"variable:N\"],\n    )\n    .transform_calculate(jitter=\"random() - 0.5\")\n    .transform_joinaggregate(mean=\"mean(value)\", count=\"count(value)\")\n    .transform_calculate(\n        x=\"if(datum.count > 1, (datum.mean - datum.stddev) + jitter, mean)\",\n        y=\"if(datum.count > 1, (datum.mean + datum.stddev) + jitter, mean)\",\n    )\n    .transform_calculate(r=\"min(width, height) / 2\")\n    .transform_scale(x=[-r, r], y=[-r, r])\n)\n```\n\nFinally, the script displays the jitter chart:\n```python\n# Display the jitter chart\njitter_chart\n```","isBot":true,"timestamp":"1689879832134"},{"id":"c52306dc-6b04-44b7-ab57-f2201531f047","conversationId":"dc4e2d3c-041b-4fe4-9b80-35c7116e78e6","text":"/explain-code","isBot":false,"timestamp":"1689879843080","intent":"explain-code","editorContext":{"fileCode":"import altair as alt\nimport pandas as pd\nfrom vega_datasets import 
data\n\n# Load the data from the Vega dataset\ncars = data.cars()\n\n# Create a bar chart using the Altair library\nalt.Chart(cars).mark_bar().encode(\n    x=\"Horsepower:Q\", y=\"Miles_per_Gallon:Q\", color=\"Origin:N\"\n)\n\n\n# Load the data from the Vega dataset\ncars = data.cars()\n\n\n# Load the data from the Vega dataset\ncars = data.cars()\n\n# Melt the data frame\nmelted_cars = pd.melt(\n    cars,\n    id_vars=[\"Name\", \"Miles_per_Gallon\"],\n    value_vars=[\n        \"Horsepower\",\n        \"Cylinders\",\n        \"Displacement\",\n        \"Weight_in_lbs\",\n        \"Acceleration\",\n        \"Year\",\n    ],\n)\n\n# Create a jitter chart for all the values in the melted_cars dataframe\njitter_chart = (\n    alt.Chart(melted_cars)\n    .mark_circle(size=60)\n    .encode(\n        x=\"value:Q\",\n        y=\"variable:N\",\n        color=\"variable:N\",\n        tooltip=[\"value:Q\", \"variable:N\"],\n    )\n    .transform_calculate(jitter=\"random() - 0.5\")\n    .transform_joinaggregate(mean=\"mean(value)\", count=\"count(value)\")\n    .transform_calculate(\n        x=\"if(datum.count > 1, (datum.mean - datum.stddev) + jitter, mean)\",\n        y=\"if(datum.count > 1, (datum.mean + datum.stddev) + jitter, mean)\",\n    )\n    .transform_calculate(r=\"min(width, height) / 2\")\n    .transform_scale(x=[-r, r], y=[-r, r])\n)\n\n# Display the jitter chart\njitter_chart\n# Print the first few rows of the melted data frame\nprint(melted_cars.head())import altair as alt\n\n# Load the data from the Vega dataset\ncars = data.cars()\n\n# Create a bar chart using the Altair library\nalt.Chart(cars).mark_bar().encode(\n    x=\"Horsepower:Q\", y=\"Miles_per_Gallon:Q\", color=\"Origin:N\"\n)\n\n\n# Load the data from the Vega dataset\ncars = data.cars()\n\n\n# Load the data from the Vega dataset\ncars = data.cars()\n\n# Melt the data frame\nmelted_cars = pd.melt(\n    cars,\n    id_vars=[\"Name\", \"Miles_per_Gallon\"],\n    value_vars=[\n        
\"Horsepower\",\n        \"Cylinders\",\n        \"Displacement\",\n        \"Weight_in_lbs\",\n        \"Acceleration\",\n        \"Year\",\n    ],\n)\n\n# Create a jitter chart for all the values in the melted_cars dataframe\njitter_chart = (\n    alt.C
[tabnine-nvim] Failed to decode chat message: hart(melted_cars)\n    .mark_circle(size=60)\n    .encode(\n        x=\"value:Q\",\n        y=\"variable:N\",\n        color=\"variable:N\",\n        tooltip=[\"value:Q\", \"variable:N\"],\n    )\n    .transform_calculate(jitter=\"random() - 0.5\")\n    .transform_joinaggregate(mean=\"mean(value)\", count=\"count(value)\")\n    .transform_calculate(\n        x=\"if(datum.count > 1, (datum.mean - datum.stddev) + jitter, mean)\",\n        y=\"if(datum.count > 1, (datum.mean + datum.stddev) + jitter, mean)\",\n    )\n    .transform_calculate(r=\"min(width, height) / 2\")\n    .transform_scale(x=[-r, r], y=[-r, r])\n)\n\n# Display the jitter chart\njitter_chart\n# Print the first few rows of the melted data frame\n","selectedCode":"# Melt the data frame\nmelted_cars = pd.melt(\n    cars,\n    id_vars=[\"Name\", \"Miles_per_Gallon\"],\n    value_vars=[\n        \"Horsepower\",\n        \"Cylinders\",\n        \"Displacement\",\n        \"Weight_in_lbs\",\n        \"Acceleration\",\n        \"Year\",\n    ],\n)\n","selectedCodeUsages":[]},"retrievalContext":{"documents":[]}},{"id":"39850d60-2884-449b-88c9-32cedbb5eda1","conversationId":"dc4e2d3c-041b-4fe4-9b80-35c7116e78e6","text":"The selected code is a Python script that uses Pandas to melt a data frame.\n\nThe script starts by loading the cars dataset from the Vega dataset and using Pandas to create an id_vars list and a value_vars list:\n```python\n# Load the data from the Vega dataset\ncars = data.cars()\n\n# Melt the data frame\nmelted_cars = pd.melt(\n    cars,\n    id_vars=[\"Name\", \"Miles_per_Gallon\"],\n    value_vars=[\n        \"Horsepower\",\n        \"Cylinders\",\n        \"Displacement\",\n        \"Weight_in_lbs\",\n        \"Acceleration\",\n        \"Year\",\n    ],\n)\n```\n\nThe id_vars list contains the columns \"Name\" and \"Miles_per_Gallon\", while the value_vars list contains the columns \"Horsepower\", \"Cylinders\", \"Displacement\", 
\"Weight_in_lbs\", \"Acceleration\", and \"Year\".\n\nThe script then uses Pandas to melt the data frame, which creates a new column for each value variable and combines the id_vars into a single \"variable\" column.","isBot":true,"timestamp":"1689879851807"}]}}

I see these messages after highlighting some code and running /explain-code in Chat.

gunslingerfry commented 1 year ago

In case anybody is having difficulty compiling: the Rust package has an implicit dependency on webkit2gtk-4.1.

edit: even with the package installed I'm still getting linker errors. Maybe my version of webkit2gtk-4.1 is too new? I'm not familiar enough with Rust to figure this out.

EndeavourOS (Arch)
NVIM v0.9.1
libwebkit2gtk-4.1 version: 0.8.4
Rust version: 1.71.0 (just updated from rustup)
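For anyone else hitting build failures like this, a preflight check before compiling could surface missing tools earlier. This is a hedged sketch: the command names below (cargo, pkg-config) are assumptions based on this thread and a typical Rust build, not an official requirements list; a fuller version would also run `pkg-config --exists webkit2gtk-4.1`.

```python
import shutil

# Commands assumed necessary to build the chat binary (see caveat above).
REQUIRED_COMMANDS = ["cargo", "pkg-config"]

def missing_commands(commands):
    """Return the subset of commands not found on PATH."""
    return [cmd for cmd in commands if shutil.which(cmd) is None]

missing = missing_commands(REQUIRED_COMMANDS)
if missing:
    print("missing build dependencies:", ", ".join(missing))
else:
    print("all listed build commands found")
```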

gunslingerfry commented 1 year ago

Here is a gist with the error output so I don't spam this thread: https://gist.github.com/gunslingerfry/8a8bcd1adeba6c8aba017a9dce0714a3

aarondill commented 1 year ago

Ok, here are the messages I am seeing using Chat built from the debug-message branch: [...]

It seems like something is going wrong and inserting a newline into the JSON message; probably something in the returned file code.
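If that hypothesis is right, an unescaped newline inside a line-framed JSON message would split it into exactly two undecodable fragments, which matches the two "Failed to decode" halves in chuckpr's log (ending in "alt.C" and starting with "hart(melted_cars)"). A small Python sketch, assuming the hypothetical one-JSON-object-per-line framing:

```python
import json

# A message whose string value contains a newline.
payload = {"text": "first part\nsecond part"}

good = json.dumps(payload)  # a correct encoder escapes it as \n -> one line
assert "\n" not in good

# If the producer emitted the newline unescaped instead, a line-based
# reader would split one message into two fragments, neither of which parses.
broken = good.replace("\\n", "\n")
fragments = broken.split("\n")
errors = 0
for frag in fragments:
    try:
        json.loads(frag)
    except json.JSONDecodeError:
        errors += 1

print(len(fragments), errors)   # -> 2 2
```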

aarondill commented 1 year ago

@chuckpr If you don't mind, can you attempt to create a minimal file where you can reproduce this, and share it (perhaps in a gist)? I suspect that something in the handling of user code (perhaps a length issue?) is going wrong, so being able to reproduce this locally would be very helpful.

chuckpr commented 1 year ago

Sure. To reproduce the error, I ran /explain-code five times; on the fifth invocation I started to see the error.

Gist: https://gist.github.com/chuckpr/8a67b3685b9631f4d633821143df3747

amirbilu commented 1 year ago

@aarondill did you manage to reproduce this? it seems to work fine for me

aarondill commented 1 year ago

I haven't had the time to try to. I won't be able to try for at least a few days.

shuxiao9058 commented 1 year ago

TabNine for Emacs is now on MELPA: https://github.com/shuxiao9058/tabnine

gunslingerfry commented 1 year ago

Yay! @aarondill got me sorted: I had out-of-sync packages. Silly me for not trying a system upgrade first.

aarondill commented 1 year ago

@amirbilu Having just compiled and tested this on my machine, I can't seem to reproduce the error.

@chuckpr Can you still reproduce this issue on your machine? If so, does rerunning dl_binaries.sh fix the issue? If it does not, can you provide detailed system information and reproduction steps using the templates below?

An example for system information:

> uname -a
results_here
> cat /etc/os-release || cat /usr/lib/os-release
results_here
> cd /path/to/tabnine-nvim
> ls -A ./binaries
results_here
> cat chat_state.json
results_here
> cat ./chat/target/.rustc_info.json
results_here

Reproduction steps (an example):

  1. install tabnine-nvim using this file (FILENAME):
    contents of file
  2. open nvim test.py
  3. Go to line #
  4. Press V
  5. Select lines # through #
  6. Press : and type TabnineChatNew
  7. Type /explain-code repeatedly (5 times?)
  8. See Error executing vim.schedule lua callback... error in original nvim window.
aarondill commented 1 year ago

@amirbilu I think we need to put a list of dependencies in the README for the chat plugin. We seem to be attracting people who are new to compiling their own software, and without a list of dependencies we get issues like #96 and @gunslingerfry's issue above.

For now, it could just be the same list from chat.yaml and we can add to it as we encounter further dependencies, but something should be there.

aemonge commented 1 year ago

Some feedback from me, and from Neovim:

  • It would be super useful to have the ability to select text for context. I've noticed the chat uses my current buffer as context for my questions, and I rarely want it to reply with a full-file suggestion; usually I'm querying about a specific function.
  • A vi mode for the input would be really nice. I know you can have bindings, via inputrc, for emacs or vi; that would be good.
  • A CLI chat would be cool too. We often forget commands, or want to "unit" test bulk files.

Finally, this isn't a request, just awareness: I'm paying for ChatGPT-4 mainly to develop, so if this chat is smarter than or about the same as GPT-4, I wouldn't mind migrating my payment from GPT-4 to here :). Having the chat editor-integrated and focused on development is exactly what I'm looking for.

Furthermore, please take this feedback as it is: positive feedback from a delighted customer. <3

amirbilu commented 1 year ago

Andres,

Thanks for the feedback! I'm passing it to the team.

allan-simon commented 1 year ago

Hello, my Neovim runs inside a dockerized environment (so without X). Just as it's possible to get the Tabnine Hub to open via port redirection, is there a way to open the chat from my host and point it at my Neovim instance?

gsharma-jiggzy commented 1 year ago

It would be nice to have a Vim-native chat, like ChatGPT.nvim:

https://github.com/jackMort/ChatGPT.nvim

aemonge commented 1 year ago

Or a simple terminal-based integration such as https://github.com/kardolus/chatgpt-cli; this could serve more users than only Neovim ones, and we Neovim users could simply run :terminal chatgpt-cli. Right @gsharma-jiggzy?

nfwyst commented 1 year ago

Chat is not easy to use; maybe the model should be upgraded.

For example, I have code like:

function Hello(x) {
  console.log("Hello" + " " + x);
}

Hello("marvin");

Tabnine's answer is shown in the attached screenshot (截屏2023-10-02 19 35 56).

This is a good starting point...

AlexanderShvaykin commented 10 months ago

I'm getting this error:

tabnine-nvim/lua/tabnine/chat/codelens.lua:89: attempt to index field 'range' (a nil value)
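That traceback says codelens.lua indexes a `range` field on a value that is nil. A guess at the defensive pattern the plugin may need, sketched in Python for illustration only (the field names mirror the error message; the real fix would be an equivalent `if not symbol or not symbol.range then return end` guard in the Lua code):

```python
# Hypothetical shape of a symbol record the codelens might receive.
symbol_ok = {"name": "Hello", "range": {"start": 0, "end": 3}}
symbol_bad = {"name": "Hello"}  # no "range" -> an unguarded lookup would crash

def symbol_lines(symbol):
    # Defensive version: return None instead of indexing a missing field.
    rng = symbol.get("range")
    if rng is None:
        return None
    return rng["start"], rng["end"]

print(symbol_lines(symbol_ok))   # -> (0, 3)
print(symbol_lines(symbol_bad))  # -> None
```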

AlexanderShvaykin commented 10 months ago

After a response comes back from the chat, I get this error message:

Error executing vim.schedule lua callback: ...ck/packer/start/tabnine-nvim/lua/tabnine/chat/binary.lua:82: Expected value but found unexpected end of string at character 8193
stack traceback:
        [C]: in function 'decode'
        ...ck/packer/start/tabnine-nvim/lua/tabnine/chat/binary.lua:82: in function ''
        vim/_editor.lua: in function <vim/_editor.lua:0>
amirbilu commented 10 months ago

Hi @AlexanderShvaykin, does it happen consistently?

MJAS1 commented 10 months ago

I am trying to get TabnineChat to work, but the command only opens a blank window with nothing in it. I have not sent an email to support@tabnine.com to request chat to be enabled as I noticed that the instruction asking to do so was removed from the README. Was it removed on purpose?

amirbilu commented 10 months ago

Yes. Can you attach a screenshot of what you're seeing?


MJAS1 commented 10 months ago

I am on Fedora 39 and first tried opening it under i3wm, which uses X11. I then tried the Plasma desktop with Wayland, and the chat worked there. Next, I tried Plasma + X11 and again got a blank window, so it seems to be related to X11. Screenshot below: Screenshot_20231206_223617

Mate2xo commented 9 months ago

Hi, it looks like the codelens does not work for some languages. For example, the :TabnineExplain command works for Lua and JS files, but not for Ruby. The chat does see and understand the Ruby content, though: I can ask questions and request test generation directly from the chat window. From what I understand, the codelens might not set symbol_under_cursor for Ruby programs (I didn't see any error messages). The behaviour is the same for all commands in https://github.com/codota/tabnine-nvim/blob/master/lua/tabnine/chat/user_commands.lua

If Ruby and/or other languages are not fully supported yet, it might be useful to mention that in the README.

Mate2xo commented 9 months ago

I finally encountered an error on a Ruby file:

Error executing vim.schedule lua callback: ...pack/lazy/opt/tabnine-nvim/lua/tabnine/chat/codelens.lua:89: attempt to index field 'range' (a nil value)
stack traceback:
        ...pack/lazy/opt/tabnine-nvim/lua/tabnine/chat/codelens.lua:89: in function 'is_symbol_under_cursor'
        ...pack/lazy/opt/tabnine-nvim/lua/tabnine/chat/codelens.lua:100: in function 'on_collect'
        ...pack/lazy/opt/tabnine-nvim/lua/tabnine/chat/codelens.lua:65: in function 'callback'
        /usr/share/nvim/runtime/lua/vim/lsp.lua:2020: in function 'handler'
        /usr/share/nvim/runtime/lua/vim/lsp.lua:1393: in function ''
        vim/_editor.lua: in function <vim/_editor.lua:0>

This kind of error appears seemingly at random (I couldn't find out why) and disappears when launching a new Neovim instance.

amirbilu commented 9 months ago

@Mate2xo fixed by https://github.com/codota/tabnine-nvim/commit/3237a2800fd928477e10d6e122cce09abfb97cc2

amirbilu commented 9 months ago

Hi, it looks like the codelens does not work for some languages. For example, the :TabnineExplain command works for Lua and JS files, but not for Ruby. The chat does see and understand the Ruby content, though: I can ask questions and request test generation directly from the chat window. From what I understand, the codelens might not set symbol_under_cursor for Ruby programs (I didn't see any error messages). The behaviour is the same for all commands in https://github.com/codota/tabnine-nvim/blob/master/lua/tabnine/chat/user_commands.lua

If Ruby and/or other languages are not fully supported yet, it might be useful to mention that in the README.

Do you have an LSP set up for Ruby? If yes, which LSP are you using? I'd appreciate it if you could provide an example file as well.

beemdvp commented 9 months ago

I am on Fedora 39 and first tried opening it under i3wm, which uses X11. I then tried the Plasma desktop with Wayland, and the chat worked there. Next, I tried Plasma + X11 and again got a blank window, so it seems to be related to X11. Screenshot below: Screenshot_20231206_223617

Damn, you got further than me; I'm on Fedora 39 too, and after struggling to install the related system libs, I get a completely blank white screen with nothing rendered, haha.

image

amirbilu commented 9 months ago

Hi @beemdvp, do you see anything in :messages ?

beemdvp commented 9 months ago

Hey @amirbilu, there isn't anything there. I did notice, though, that if I hold left click and drag, there is actually content there, but for some reason everything is white/blank. So elements seem to render, just without the right styles maybe? I wonder if it could be some sort of file-permissions issue.

fcabjolsky commented 8 months ago

Hello, I'm trying to debug this issue. Where can I find the source code of the chat (the source code behind index.html)?

beemdvp commented 8 months ago

Okay, I've switched to a fully AMD machine (nice) running Fedora. I've noticed a bug:

  1. Highlight a block of code
  2. Generate a response from the chat
  3. Stop the response generation
  4. An error is shown

Neovim version: image

Error output: image

aarondill commented 8 months ago

It seems the binaries are outputting something that is not valid JSON. I wouldn't be able to guess what it is, though. Personally, I think we should catch decoding errors and raise our own error that includes the data that failed to decode; this would make debugging these problems much easier.
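A minimal sketch of what that could look like, assuming the plugin decodes each line received from the binary with a pluggable JSON decoder (the `safe_decode` helper and its arguments are illustrative, not the plugin's actual API):

```lua
-- Hypothetical helper: wrap the JSON decode of a chat message in pcall so a
-- truncated or malformed payload yields a readable error carrying the failing
-- data, instead of an opaque "Expected value but found..." traceback.
-- `decode` stands in for whatever decoder is in use (e.g. vim.json.decode).
local function safe_decode(decode, line)
  local ok, result = pcall(decode, line)
  if ok then
    return result
  end
  -- Include the size and a prefix of the payload so bug reports are actionable.
  return nil, string.format(
    "tabnine-nvim: failed to decode chat message (%d bytes): %q",
    #line, line:sub(1, 120))
end
```

On a decode failure, the caller could forward the returned error string to `vim.notify` rather than letting the raw traceback surface through `vim.schedule`.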

Askath commented 8 months ago

Okay, I've switched to a fully AMD machine (nice) running Fedora. I've noticed a bug:

  1. Highlight a block of code
  2. Generate a response from the chat
  3. Stop the response generation
  4. An error is shown

Neovim version: image

Error output: image

Oh yeah, I have had the same bug on macOS, regardless of whether a response is generated successfully or not.

sudoFerraz commented 8 months ago

Chat worked great on the first day I used it (yesterday).

From today on, after the first completion, Tabnine completely stops working. Unless a new update was rolled out in the last 2 days, I think the chat completely messed up my setup, though 😢

TabnineStatus still outputs normally and says that I'm a Pro user.

Askath commented 8 months ago

Chat worked great on the first day I used it (yesterday).

From today on, after the first completion, Tabnine completely stops working. Unless a new update was rolled out in the last 2 days, I think the chat completely messed up my setup, though 😢

TabnineStatus still outputs normally and says that I'm a Pro user.

Which language server do you have installed? I noticed that when I have an LSP running that does not provide symbol documentation, like angularls, Tabnine stops working, depending on the order in which the LSPs were started.

sudoFerraz commented 8 months ago

Chat worked great on the first day I used it (yesterday). From today on, after the first completion, Tabnine completely stops working. Unless a new update was rolled out in the last 2 days, I think the chat completely messed up my setup, though 😢 TabnineStatus still outputs normally and says that I'm a Pro user.

Which language server do you have installed? I noticed that when I have an LSP running that does not provide symbol documentation, like angularls, Tabnine stops working, depending on the order in which the LSPs were started.

I don't think that could be related, as I was working on the same project when Tabnine inline completions were working fine, using the same LSPs and the same setup. It really feels like there was a silent update in the past 2-3 days that broke the suggestions after I accept the first one in my current session.

amirbilu commented 7 months ago

@sudoFerraz can you please contact us at support@tabnine.com ?

Mate2xo commented 7 months ago

Hi, it looks like the codelens does not work for some languages. For example, the :TabnineExplain command works for Lua and JS files, but not for Ruby. The chat does see and understand the Ruby content, though: I can ask questions and request test generation directly from the chat window. From what I understand, the codelens might not set symbol_under_cursor for Ruby programs (I didn't see any error messages). The behaviour is the same for all commands in https://github.com/codota/tabnine-nvim/blob/master/lua/tabnine/chat/user_commands.lua If Ruby and/or other languages are not fully supported yet, it might be useful to mention that in the README.

Do you have an LSP set up for Ruby? If yes, which LSP are you using? I'd appreciate it if you could provide an example file as well.

Sorry it took me so long to respond.

I am using the Solargraph LSP. What kind of example file would you like? The fix you made resolved the issue, and I could not reproduce it anymore (thank you, btw).

Jasha10 commented 6 months ago

My feedback is that I'd prefer a chat client implemented in Neovim rather than one that uses WebView windowing via the wry crate. Compiling tabnine-nvim/chat is hard on Ubuntu because I need to apt-install deps such as libcairo and some GTK-related things.

aarondill commented 6 months ago

@Jasha10 The chat client is currently shared with the VSCode extension, so I don't see this happening. I agree that a Neovim-specific client would be better, though. (I am not a maintainer of this repo.)