Open coldwarrl opened 1 year ago
+1
This is really useful when you want to make sure Copilot uses the project's pre-existing patterns (like named exports instead of default exports in React).
👋 We are looking into this, thanks for all the upvotes so far!
Curious to learn; what would be your custom instruction for Copilot Chat?
It would be super useful to be able to train it to respond to questions about our custom design system components and our code standards.
> Curious to learn; what would be your custom instruction for Copilot Chat?
The first one I want to add is 'prefer async/await over continuations' in TypeScript.
Also, it should be possible to include/reference a file (PDF, etc.) in the instruction descriptor, for instance to point to a (code) style guide.
+1
Some more I found:
- "use moment for time and duration, never numbers"
- "do not write any comments"
- "import not require"
- "no narrative logs, terse event based logs"
- "avoid deep nesting"
Here are some I would use:
- "don't add ; at the end of javascript statements"
- "use let instead of const"
Really useful thing for setting up standards and best practices beforehand!
I would write to use Pinia, not Vuex, to output the code snippets in Script Setup TS Composition API style rather than the Options API, and to note that I am using Quasar.
Thanks for all the code-related examples so far. I am curious if they apply just to your style (aka would be a user setting) or generally to the workspace/code you are working on, maybe even with others (aka a shared workspace setting).
Another question: Do y'all have an example of user-related custom instructions (like communication style, expertise, etc) that would make Copilot Chat work better for you? Specifically, what would work better?
I think there should be workspace/team and user instructions; both make sense.
Concerning user instructions, some examples:
Some user communication instructions I collected on Twitter for ChatGPT:
I am looking forward to being able to provide GitHub Copilot with custom preamble (prompt/instruction) text that gives context to my interactions with the AI. The two most important configuration categories, for me, are:
This seems to map to VSCode's "User Preferences" vs. "Project/Workspace Preferences" config model. I don't feel super strongly about where/how the text is stored -- it's more important that it just be even possible. That said, I generally find that a good [dev] user configuration experience comes from at least:
- user:global, e.g. `$HOME/.gitconfig`, `~/.bashrc`, `$profile` (PowerShell)
- shared:per-project, e.g. `./.gitattributes`, project `./.env.example`
- user:per-project, e.g. `./.git/config` (git config --local), `./.env`
Perhaps the various `settings.json`s could store settings related to how the extension locates and loads (or doesn't load) prompt text related to the three categories above, rather than the text itself. This would make it friendlier for other IDEs and text editors to load the same prompts that VSCode does. `ghcpconfig.toml`? :) [JSON sucks for prose]
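To make the idea concrete, here is a purely hypothetical sketch (the `github.copilot.promptFiles` key is invented for illustration and does not exist as a real setting) of a workspace `.vscode/settings.json` that points at prompt files rather than storing the prose itself; a user-level `settings.json` could hold the user:global equivalent:

```jsonc
// .vscode/settings.json — hypothetical sketch; "github.copilot.promptFiles" is not a real setting
{
  "github.copilot.promptFiles": {
    "project": "./.copilot/prompt.md",            // shared:per-project, committed with the repo
    "projectLocal": "./.copilot/prompt.local.md"  // user:per-project, git-ignored
  }
}
```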
I'm currently working with Python, so for this project I would love this custom instruction:
If you write code, use type annotations whenever possible. Include inline documentation in functions. When amending a piece of code, write only the part you amended.
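As a rough sketch, those preferences could be expressed with the `github.copilot.chat.customUserInstructions` array setting that comes up later in this thread (the setting name is taken from the later comments; how strictly each item would be honored is untested):

```jsonc
// settings.json — one instruction per array entry
"github.copilot.chat.customUserInstructions": [
  "If you write code, use type annotations whenever possible.",
  "Include inline documentation (docstrings) in functions.",
  "When amending a piece of code, write only the part you amended."
]
```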
Here is a prompt I use a lot, and would like to set (per file type, .php in my case): "..using WordPress Coding Standard". E.g., when I write "refactor code", it would be very nice if it did refactor the code using the WordPress Coding Standard.
My use case is to explain that I want conventional commit messages for the source control AI generation:
".. for string concatenation, not `str.format()` or `""%()`"
"// get the value from the Dictionary"
I personally don't find ChatGPT's web UI breakout of "What would you like ChatGPT to know about you to provide better responses?" vs. "How would you like ChatGPT to respond?" very useful. I swapped the information between the two boxes and re-asked some questions and nothing really changed. I have an idle suspicion that OpenAI made the UI this way partially so they can get "labeled" data from people who have chosen not to opt out of sharing their data. Box 1 gets them some information that's demographic-y, and Box 2 lets them understand how people write instructions, and serves to filter out the info from Box 1, perhaps for further training. After all, human-labeled data is still the gold standard. :)

tl;dr I personally don't need more than one chunk of instructional text per scope (user:global, user:per-project, shared:per-project). I don't think scoping out prompts by purpose is a bad idea per se, I just don't think it's as important as many other features.
I would very much like this as well. Currently I am working on migrating a .NET Framework project to .NET Core, and I have set aside some texts I can (and need to) paste into every new conversation with copilot to ensure it knows which framework the project is using, its current state, and my overall goal. Without this information, the bulk of its responses will involve unsolicited advice on things like how to migrate the database and how to configure some library, which is already taken care of.
While using GPTs, I write this type of instruction in markdown that I update regularly. There I can offer a decent high-level context of my various projects and what kind of assistance I'm looking for.
@mike-clark-8192 https://github.com/microsoft/vscode-copilot-release/issues/563#issuecomment-1913649724
I +1 that system. Lack of initial/system prompt control is one of the main reasons I sometimes switch to ChatGPT, since I can more accurately guide its behavior. The ability to define behavior per-workspace would be really useful and be one less thing I need to switch windows for.
I'd like to be able to define it per file or pattern:
"github.copilot.customInstructions": {
"*.md": "Use '*' for list items."
"journal.md": "Use YYYY-MM-DD"
}
Or maybe even by Language Mode?
```jsonc
{
  "[markdown]": "…",
  "[javascript]": "Use imports, not requires. ..."
}
```
I prefer the first one, unless language mode works better in multi-language files like markdown and notebooks?
I'd like to be able to:
Thank you for your work and efforts :)
I agree with everything that's been said until now ><
I would like to add / point out that we already have a lot of files in our repo that can inform Copilot how we work. For a TypeScript project, consider the amount of information contained in the following files:

- `.prettierrc.json` (for the returned code formatting)
- `.eslintrc.json` (for the rules to follow)
- `cypress.env.json` (indicates the use of the Cypress testing framework)
- `tsconfig.json` (for the TypeScript configuration...)
- `yarn.lock` (to determine the package manager)

I can imagine Copilot reading all these files and making a cache of this information (for example in `.vscode/copilot.json`) at "start up". It could then embed it in each prompt or attach it to a "session id".
These are just some ideas :)
A selection component with pre-defined prompts, and a textarea component to define these prompts.
A good Python example would be to replace certain expressions, i.e. "use the expression `is True` instead of `== True`".
"use the expression
is True
instead of== True
"
That is not a matter of style. They both mean different things, behave differently. The former tests for identity, i.e. non-Boolean "truthy" values are rejected and only True is accepted. The latter one is a beginner mistake. If the model ever generates that, it needs better training. Boolean values should not be tested again. They are already boolean.
Anyway, I think this thread has plenty of examples of what users think they want. The Copilot developers are intelligent and competent and will make good design decisions.
"use the expression
is True
instead of== True
"That is not a matter of style. They both mean different things, behave differently. The former tests for identity, i.e. non-Boolean "truthy" values are rejected and only True is accepted. The latter one is a beginner mistake. If the model ever generates that, it needs better training. Boolean values should not be tested again. They are already boolean.
Anyway, I think this thread has plenty of examples of what users think they want. The Copilot developers are intelligent and competent and will make good design decisions.
At this time, the model does generate `== True`.
Any progress on this feature?
I am eagerly anticipating this feature. Being able to write custom instructions - potentially for specific glob patterns - and share that for all contributors of the repo would be hugely valuable. The config also acts as a contributing guide of sorts which can grow/scale over time and benefits from being source-controlled.
Based on some examples posted in this thread, I can see a benefit to having one file committed to the repo, as well as configuration that isn't committed and can contain user-specific instructions. Additionally, there could be benefits to being able to extend config so teams can create shareable configs to use consistently across multiple repos.
Some use-cases I would probably use might look something like this (following a flat eslint config structure):
copilot.config.js:
```js
import sharedInstructions from "copilot-shared-config"

export default [
  sharedInstructions,
  {
    instructions: [
      'reply using UK English.',
      'when replying with a multi-step solution, give me a summary of the solution first, then ask when I am ready for each step.',
      'when writing code to be added to an existing file, review whether the file is considered to be high-complexity and suggest adding the code to a new file.',
    ],
  },
  {
    files: ['*.graphql'],
    instructions: [
      'query names must use nouns which represent the resource(s) being fetched.',
      'mutation names must use verbs to indicate the action being performed.',
      'mutation names must use one of the following prefixes: create, update, delete, add, remove.',
    ],
  },
  {
    files: ['*.spec.*'],
    instructions: [
      'Test names should use assertive, present-tense language that directly describes the expected behavior or outcome of the function being tested. Avoid using "should" or other conditional terms.',
    ],
  },
]
```
copilot.user.config.js:
```js
export default [
  {
    instructions: [
      'Use emojis to make responses more visually appealing',
      'Do not provide code comments unless requested',
    ],
  },
]
```
It looks like this is implemented now, at least partially. In the GitHub Copilot Chat settings, you can enter custom instructions, and when chatting, these custom instructions appear to be honored.
I haven't done any serious testing with/without the settings, and this doesn't cover all of the requests above, but it seems to at least understand the framework instruction I put in.
Would love to hear if this is now considered Done or if they're still working on it. But at the least, this seems like a great addition.
That's great, but we also need it in the editor, in autocomplete ghost-text mode. For example, I am not using Copilot Chat at all because Claude Sonnet 3.5 is so much better for everything.
Quick update on why this issue isn't closed even though we shipped some first explorations with the latest release. Consider it a preview, as it will change (it actually already changed in pre-release, and we have more changes planned) while we iterate on our learnings, benchmarks, and your feedback.
So, the first ask is to provide hands-on feedback on whether it actually works for your use cases. Share what you do with it and how well it works. File issues for cases that you expect to work but don't.
Release: `github.copilot.chat.customUserInstructions` is an array of instructions added to all chat-related prompts.
Pre-release: `github.copilot.chat.customInstructions` is an object syntax that allows matching instructions to specific languages. We are still finalizing this format during this month's iteration, so your feedback will be especially helpful.
Tip: Use workspace settings to scope them to specific projects, but be aware that they will also be applied for your coworkers' Copilot.
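For example, a minimal sketch of the release-channel array setting placed in a workspace's `.vscode/settings.json`, so it applies to everyone who opens that project (the instruction strings below are placeholders borrowed from earlier comments in this thread):

```jsonc
// .vscode/settings.json — shared with everyone working in this workspace
{
  "github.copilot.chat.customUserInstructions": [
    "Reply using UK English.",
    "Prefer async/await over continuations in TypeScript."
  ]
}
```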
Good job, but not enough. Using something like '@bot1' or '#bot1' to submit a pre-defined instruction would be better. The 'bot1' could be stored in the object syntax.
> So, the first ask is to provide hands-on feedback on whether it actually works for your use cases. Share what you do with it and how well it works. File issues for cases that you expect to work but don't.
My first brush with it today was promising. Angular just recently had a paradigm shift with standalone components and the new structural directives, and telling copilot about which way I wanted to code in the user instructions really seems to have helped get good results on the first try. Will continue to test this week.
Hope we’ll be able to get something similar for autocomplete
> Good job, but not enough. Using something like '@bot1' or '#bot1' to submit a pre-defined instruction would be better. The 'bot1' could be stored in the object syntax.
I am not sure I understand. Do you have specific scenarios in mind for what these pre-defined instructions cover? Sounds like you'd like to extend Chat variables and participants, which is possible with the extension API: https://code.visualstudio.com/api/extension-guides/chat#develop-a-chat-extension .
Looking forward to more hands-on feedback as y'all have a chance to try this out in release or pre-release.
@digitarald I suspect what @sheng-di is trying to communicate is that it may seem counterintuitive to use vanilla vscode settings to provide what is arguably project-specific details and context to an AI assistant that is supposed to integrate with our flow to help streamline our productivity. The ideal prompt may be something that evolves dynamically as context shifts over time, and I suspect many would like to see the ability to adjust prompts as part of the AI workflow itself. Things like perhaps highlighting some code and saying "all my classes should be formatted like this", and pointing it to some markdown documentation and example code files that should be used as a reference. Or defining patterns with a voice prompt: "help me scaffold a new class of {X} type", where I've had a chance to train it on what {X} is by showing it a few examples. Things like that.

Essentially, at least from what I've seen in this thread, it boils down to every developer and project having their own local contextual rules, patterns, preferences and so on. Rather than just hoping the AI will see all our patterns and infer the correct behaviour without any prompting, we'd like to be able to communicate our preferences for how it assists us. Think of a programmer assisting us. We'd say "make sure whenever you see X, you do Y".

Additionally, we're often working on transient tasks where the patterns, requirements, focus, etc. are specific to the task at hand. It'd be nice to say "I'm working on a task" and then be able to list a set of mandates/imperatives for the task that the AI is aware of when we indicate that that task is in focus right now. Even better, to have it respond and confirm (preferably with references, brief examples, or whatever makes sense) that it actually understands what we're referring to in our instructions.

As experienced developers yourselves you know that development can be a dynamic, fluid process, and for something like this to be truly helpful, there needs to be some kind of dialogue with the AI regarding what we're working on -- to maintain an evolving context that we can talk about with respect to ongoing changes in the minutiae of the moment.
Ultimately I think the message is this:
It's not that we want to be able to provide custom prompts. We just want something a bit more contextually fluid than what the chat panel provides.
That "Copilot Chat" feature feels like an occasional consultant who doesn't even get it half the time, necessitating trips to instead visit his big brother over on the web. Rather than thinking in terms of rigid sets of custom instructions, we just want to be able to focus on our work and be able to communicate moment-to-moment patterns of relevance as we go.
If it were me, I'd probably look at some form of hierarchical structure for maintaining context under the hood, with each node representing an extension to the parent context with respect to the user's focus/concerns/tasks/etc.
In the side panel I'd have an ongoing "developer chat log" style interface where the AI can behave as though it's actually an active participant in the project, and can use that mode to reply in tandem with things it highlights in the code, e.g. "This code I just highlighted in purple, is that what you're referring to? If so, I take it that you mean that whenever you say you want to generate a new X, that I'd generate some code that looks like: {code-block-follows-here}. I could make this behave as a snippet if you like, so you can fill in the parts that would be custom to each case."
Within the editor I'd leverage the inline chat prompt box you already use for other things in order to make quick, in-the-flow amendments to the active context and to make various kinds of momentary inline requests for AI assisted edits, and utilise decorators of every kind, inline autocompletions, dynamic code actions, dynamic code lenses and so on. Don't make me remember the exact behaviours of the various forward-slash-prefixes and other syntactical incantations whose behaviour is somewhat ambiguous. Instead, just allow free text and have the AI assess the request in the active context that you're maintaining as a well-structured hierarchical model, then make decisions about what available VSCode editor features are the most appropriate channels of feedback, and then generate instructions to the Copilot UI engine about what feedback to provide and where to provide it.
So... yeah. Our needs are fluid and dynamic. "What kind of prompting do we want to do" is the wrong question and the wrong way to think about all this.
- Let us have a conversation to build up transient contexts that it remembers.
- Make sure contexts are hierarchical. Under the hood you would generate prompts automatically based on what you find from the root context node down to the active context node.
- Make it easy to manually switch between contexts on demand.
- Extend an inferred context from the manually selected context node to also include things like which file, line, symbol and so forth we're currently focused on, etc.
- As part of our ability to manually manage our context hierarchy, allow us to communicate patterns that, when observed by the AI, allow automatic switching between certain contexts alongside the main user-controlled context.
- Have the AI provide proactive feedback confirming that it understands what we're communicating. It can use decorations of all kinds (code, gutter, scrollbar, etc.), it can reply in the dev/project chat panel, it can use popovers, it can use tentative autocompletions, or anything else that would count as feedback.
- The AI should be able to communicate with us in multiple modalities in tandem, but for all modalities leveraged, the dev/project chat interface should be central to it all, should last for the life of the project, and its visible content should reflect an association with the context hierarchy and with whatever active context node is active at the time.
Whichever way you go with this, keep in mind that the fact that LLMs seem bad at dealing with complexity is not necessarily a showstopper. Utilise meta-prompting and agent-style processes under the hood to break down complex requests into managed/supervised/coordinated compositions of simple requests. Internally break things down, generating simple internal prompts as hidden subcomponents of whatever Copilot is trying to do and then have other prompts assess the results and coordinate them into whatever comes next. A lot is possible here, I think. Copilot just needs to be broken out of the box that treats it as nothing more than a fancy autocompletion feature.
Yes, I agree with your point. I believe we should have a way to handle tasks where we can choose one type of assistant for one task and another type for a different task, rather than using a predefined, fixed set of prompts, which would limit our usage scenarios. Just like the agents everyone is using now, we should be able to select different agents based on different tasks.
For example, during my work, I need to use multiple programming languages simultaneously. I can predefine the role description for each programming language. For instance:
Python Assistant: You are a very powerful Python programmer, and you can solve issues related to Python.
Cpp Assistant: You are a relatively powerful Cpp assistant, and I can ask you Cpp-related questions.
Then, when I encounter some problems, I can conveniently go to the corresponding assistant to narrow down the scope of the large model’s answer. For example: @cpp assistant, help me solve this problem. It’s equivalent to: “You are a relatively powerful Cpp assistant, I can ask you some Cpp questions, help me solve this problem.”
You can think of it as the simplest form of prompt template replacement: `prompt = "{cppPrompt}, help me solve this problem"`.
Although it might be possible to achieve this functionality by developing an extension, that is not very convenient. I still hope this functionality can be built in.
@axefrog thanks for writing out your in-depth, forward-looking AI assistant thoughts. A few things stand out that we have been exploring, like more implicit context, handling context switching within a conversation, temporal context to infer the current task, and some more. Since this issue is about custom instructions, maybe you have a moment to file some of the specific bugs/feature ideas as separate issues.
Meanwhile, the newly landed code instructions setting allows referencing files as well as scoping instructions per language. @sheng-di, you might be able to use that format to create pre-prompts per language. Apart from these user settings, Copilot already does a lot of custom context per language, so I also recommend filing issues where Copilot doesn't feel like an expert in a particular area.
@digitarald thanks for releasing this new feature, it's very useful. I wanted to know if there is a limit to the information we can upload. Currently I have uploaded 3 different instructions for a total of 15000 characters.
What about WebStorm/PHPStorm plugin?
@digitarald A lot of documentation seems to be lacking on what Copilot is actually doing, which makes it hard to know what custom instructions would work best. I imagine development is probably progressing at a rapid pace behind the scenes which makes documenting things more difficult, but is there any way we could get a little more info on (a) exactly what Copilot is doing as of right now, and (b) how the custom instructions are integrated with the context already being generated? Copilot seems better lately at recognising patterns in my repo, but I'm still never sure what's included in the context and what's omitted. If we had more up to date information about this, we could tailor the way we use vscode to better cater to the way Copilot is trying to assist us.
Also, could you provide some examples of the kinds of custom instructions that we can reasonably expect to work? So far I'm not having much luck. In my workspace file I have tried the following, to not much avail (the instructions regarding comments are definitely being disregarded):
"github.copilot.chat.customUserInstructions": [
"Class fields assigned in constructors should always be `readonly #fieldname: FieldType` by default.",
"Class constructors that only assign a single field should place the body of the constructor on the same line as the constructor declaration.",
"Avoid completing `//` comments that appear to be developer notes or conversational in nature.",
"When you see the word FOOBAR in a comment, it should always be completed as FOOBARBAZ.",
],
It would be nice if some of the built-in functionality could also be augmented or extended.
For instance, I don't really like the style used to generate tests by default. Adding "When generating tests prefer to fully encapsulate each test such that all objects and dependencies used by each test are initialized inline within the test itself." helped, but it is hard to test, as I'm not sure how it all works together.
I'd love to be able to give a much more direct instruction which I knew would always be included when tests are being generated via the built-in actions to create tests.
Taking this further, I could also see custom actions, like generating a unit test or an end-to-end test, and within those actions I would want more control over the context, which could include text and things like paths to attach to the context that should be used when building up the solution.
@digitarald Doing various tests, I noticed that the instructions are not fully processed when using the `@workspace /new` command.
I can give instructions on which files and folders need to be created, but it doesn't seem to take them into account for the code that those files need to contain.
I don't have the same problem if I use chat normally, without the `/new` command.
It would also be nice if you could reference commands or instruction #references inside the custom user instructions; for example, telling it to create the workspace if certain conditions are met, without having to type the command in the chat.
@digitarald
> Pre-release: `github.copilot.chat.customInstructions` is an object syntax that allows matching instructions to specific languages.
Is it just me, or has this not landed yet? `customUserInstructions` works, but `chat.customInstructions` doesn't for me. I'm on v1.228.0.
"github.copilot.chat.customUserInstructions": [ // :ok:
"Start with the word 'banana' every time"
],
"github.copilot.chat.customInstructions": { // Unknown Configuration Setting
"markdown": ["Start with the word 'banana' every time"],
"*.js": "Start with the word 'banana' every time"
},
It appears faded out in settings.json too.
Also, what's the correct syntax: language mode ('javascript') or pattern ('*.js')?
Make sure you have the latest VS Code and no pending extension updates (run the Extensions: Check for Extension Updates command; Developer: Show Running Extensions should show GitHub Copilot Chat v0.20.0).
> @digitarald Doing various tests, I noticed that the instructions are not fully processed when using the `@workspace /new` command.
Could you file a new issue with your use case and instructions? Copilot is not adding instructions to `/new` right now, but it does make sense to have user instructions apply there as well.
https://github.com/microsoft/vscode-copilot-release/issues/950 was closed in favor of this thread.
The issue proposes being able to attach PDF files to the chat. This is highly beneficial when working with MCU programming. Imagine adding the datasheets for a specific MCU, such as an STM32, just to be able to ask for code implementations that gather info from the file specific to your own MCU variant.
@thernstig right now all instructions are added, without any semantic search. It feels like your pattern should find the related details from the PDF, vs attaching the full content in every request. I'll track that as a potential follow-up.
Meanwhile, a potential workaround would be to export the PDF into a text format and attach it to Chat for those questions. This might also work with `@workspace` search, if the file is in the workspace.
@digitarald yes, it all depends a bit. If GitHub Copilot starts to support prompt caching like Claude does (see https://www.anthropic.com/news/prompt-caching), then it would fit perfectly to use that to attach such custom files.
Hope GitHub Copilot gets prompt caching soon; then one could mark the files as cacheable and it would give more power.
Copilot should have custom instructions like OpenAI's ChatGPT does.
This could be achieved by an instruction file in each project. Of course, a global file might also be helpful.
Typical examples: