goodosoft opened this issue 6 months ago
@goodosoft What statistical information would you be interested in collecting? We already store a lot of useful development data in the ~/.continue/dev_data folder, including accepted/rejected tab-autocomplete suggestions, thumbs up/down responses in chat, etc. There is also a personal usage feature: click the "?" icon in the bottom right, then "View My Usage".
If you're interested in collecting this at a team scale, we're working on a product for that and I'm happy to share details.
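In the meantime, here is a minimal sketch of reading that folder yourself. The file name autocomplete.jsonl and the accepted field are assumptions about the log schema, so check your own dev_data folder for the actual file names and shape:

```ts
// Compute a personal autocomplete acceptance rate from ~/.continue/dev_data.
// ASSUMPTION: the file name "autocomplete.jsonl" and the "accepted" field
// are guesses at the log schema; adjust to match your installation.
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

interface AutocompleteEvent {
  accepted?: boolean; // assumed field name
}

const logPath = path.join(os.homedir(), ".continue", "dev_data", "autocomplete.jsonl");

const events: AutocompleteEvent[] = fs
  .readFileSync(logPath, "utf-8")
  .split("\n")
  .filter((line) => line.trim().length > 0)
  .map((line) => JSON.parse(line));

const accepted = events.filter((e) => e.accepted === true).length;
console.log(
  `${accepted}/${events.length} suggestions accepted ` +
    `(${((100 * accepted) / Math.max(events.length, 1)).toFixed(1)}%)`
);
```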
> interested in collecting this on a team scale
Yes! I want to know the lines of code generated and the acceptance rate for each team member.
A team dashboard would be great :) I want to contribute, but I'm a backend developer :(
I think it would be good if, for autocomplete, the generated completionId were included in the outcome information that is logged. This would enable some level of de-duplication when collecting and aggregating this information.
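On the aggregation side, that would make de-duping straightforward, something like this sketch (the event shape is hypothetical; the real log schema may differ):

```ts
// De-duplicate logged outcome events by completionId, keeping the most
// recent outcome per completion. The OutcomeEvent shape is hypothetical.
interface OutcomeEvent {
  completionId: string;
  accepted: boolean;
  timestamp: number;
}

function dedupeByCompletionId(events: OutcomeEvent[]): OutcomeEvent[] {
  const latest = new Map<string, OutcomeEvent>();
  for (const e of events) {
    const prev = latest.get(e.completionId);
    if (!prev || e.timestamp > prev.timestamp) {
      latest.set(e.completionId, e);
    }
  }
  return [...latest.values()];
}
```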
We would like to collect this information as well, to compare acceptance rates against GitHub Copilot (personally, of course, I prefer the local Continue approach).
Not sure if your "product" is ready yet? Of course we would prefer to keep the open-source approach without licensing for now, especially since this area is moving so fast and it's too early to commit to one system.
My company has a lot of engineers. We need to collect the prompts submitted to the model, the model's responses, and whether each suggestion was adopted, and then use this data to tune our internal model so it works better.
The expectation is a configurable callback address for these logs, so that we only need to implement the receiving interface to collect the data for analysis!
I can read the VS Code plugin code and could implement a PR there, but I don't know Kotlin, so a PR for that side is difficult...
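For what it's worth, the receiving side of such a callback address could be as small as this sketch; the /telemetry path and the JSONL sink are hypothetical, not an existing Continue API:

```ts
// Tiny HTTP sink for telemetry events: accepts JSON POSTs and appends them
// to a local JSONL file for later analysis. Purely illustrative.
import * as http from "http";
import * as fs from "fs";

const server = http.createServer((req, res) => {
  if (req.method === "POST" && req.url === "/telemetry") {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      fs.appendFileSync("telemetry.jsonl", body.trim() + "\n");
      res.writeHead(204).end(); // no content
    });
  } else {
    res.writeHead(404).end();
  }
});

server.listen(8080, () => console.log("telemetry sink listening on :8080"));
```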
It would be incredibly helpful to have a way to collect telemetry data ourselves and create a developer satisfaction score based on metrics like autocomplete acceptance rate, thumbs up/down in chat, and number of lines of code suggested vs. accepted.
Having a callback address configuration (as suggested by @supowoers) would be fantastic, allowing us to collect all the necessary metrics from developers. This would enable us to make informed decisions about which models to prioritize and improve the overall user experience.
Would love to see this feature added!
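As a purely illustrative sketch, such a score could blend those metrics with arbitrary weights (the weights and the 0-100 scale below are assumptions, not a proposed standard):

```ts
// Combine the metrics mentioned above into a single 0-100 score.
// ASSUMPTION: the weights are arbitrary and only for illustration.
interface UsageMetrics {
  suggestionsShown: number;
  suggestionsAccepted: number;
  thumbsUp: number;
  thumbsDown: number;
  linesSuggested: number;
  linesAccepted: number;
}

function satisfactionScore(m: UsageMetrics): number {
  const acceptRate = m.suggestionsAccepted / Math.max(m.suggestionsShown, 1);
  const feedback = m.thumbsUp / Math.max(m.thumbsUp + m.thumbsDown, 1);
  const lineYield = m.linesAccepted / Math.max(m.linesSuggested, 1);
  return 100 * (0.4 * acceptRate + 0.3 * feedback + 0.3 * lineYield);
}

// Example: prints "54.0"
console.log(
  satisfactionScore({
    suggestionsShown: 200,
    suggestionsAccepted: 90,
    thumbsUp: 12,
    thumbsDown: 3,
    linesSuggested: 1500,
    linesAccepted: 600,
  }).toFixed(1)
);
```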
Problem
We want to collect statistical information ourselves. How should we do it? Or how can we send telemetry to our own system?
Solution
Send telemetry to the user's own system, or save it in a local database.
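A minimal sketch of what the sending side could look like, assuming a hypothetical TELEMETRY_URL setting (not an existing Continue option); it forwards each event to the user's own system and falls back to a local file when no endpoint is configured:

```ts
// Forward telemetry to a user-configured endpoint, or save locally.
// ASSUMPTION: "TELEMETRY_URL" is a made-up setting for illustration.
// Requires Node 18+ for the global fetch.
import * as fs from "fs";

const endpoint = process.env.TELEMETRY_URL; // e.g. http://localhost:8080/telemetry

async function logEvent(event: Record<string, unknown>): Promise<void> {
  const line = JSON.stringify(event);
  if (endpoint) {
    // Send to the user's own system.
    await fetch(endpoint, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: line,
    });
  } else {
    // Or save locally for later analysis.
    fs.appendFileSync("telemetry.jsonl", line + "\n");
  }
}

logEvent({ kind: "autocomplete", accepted: true, timestamp: Date.now() }).catch(
  console.error
);
```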