heejaechang opened 6 years ago
There are a lot of core issues here. I would not want to take the feature in without addressing several of them.
Also tagging @ivanbasov and @jinujoseph
Agree
Whether we should be restrictive or open is a good question. I worry about silent, hidden semantic changes, hence I preferred to offer the refactoring only if the semantic meaning stays the same to begin with. I see we follow different patterns for different refactorings, so we may need to discuss this.
I personally prefer getting all my refactorings for the entire line of the foreach, but other refactorings may have span restrictions, and I'm OK with whatever we decide here.
Agree
Agree
I fundamentally agree, but I wonder whether they will be noticeable unless you are comparing case by case.
I think we will use the design meeting on Monday to discuss some of the fundamentals, but importantly none of this sounds like a blocker to releasing the feature, or something we can't come back and build/add on.
Adding @DustinCampbell and @Pilchie as they might have background on some of these principles.
I worry about silent, hidden semantic changes
Note: we should try to avoid silent changes. If semantics change in a way we can detect, we should consider warning the user.
I personally prefer getting all my refactorings for the entire line of the foreach, but other refactorings may have span restrictions, and I'm OK with whatever we decide here.
Note: this is reasonable. However, we have to be careful. Consider the case of:
foreach (var x in ...)
    ...
If the user puts their cursor on 'var' and brings up the lightbulb, it's not good if they are offered refactorings for an outer construct above the refactorings for 'var'.
--
Note: this might require actual design work as we currently have no way for refactorings to indicate what span they belong to (unlike fixes, which can use the diagnostic span). If refactorings could say what span they applied to, then we could offer 'var' fixes and 'foreach' fixes, but prioritize the fixes depending on the user location. I think that could be reasonable. Tagging @kuhlenh for thoughts.
However, until we get there, we will have a problem if our refactorings are too broad and show up above other refactorings that are more contextually relevant. This is the crucial bit. We should not be discussing whether it's good to have. We need to discuss the badness of unwanted refactorings showing up in the same list as, or above, the refactorings that are wanted.
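To make that concrete, here is a rough sketch of what span-aware registration could look like. The `applicableToSpan` parameter is the hypothetical part (no such overload exists today), and `ConvertToForAsync` is a made-up helper:

```csharp
using System.Linq;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis.CodeActions;
using Microsoft.CodeAnalysis.CodeRefactorings;
using Microsoft.CodeAnalysis.CSharp.Syntax;

// Inside a CodeRefactoringProvider. The 'applicableToSpan' argument is the
// hypothetical part: it would tell the lightbulb which construct the action
// belongs to, so the host could prioritize by caret proximity.
public override async Task ComputeRefactoringsAsync(CodeRefactoringContext context)
{
    var root = await context.Document.GetSyntaxRootAsync(context.CancellationToken);
    var forEach = root.FindToken(context.Span.Start).Parent?
        .AncestorsAndSelf().OfType<ForEachStatementSyntax>().FirstOrDefault();
    if (forEach == null)
        return;

    context.RegisterRefactoring(
        CodeAction.Create("Convert to 'for'",
            c => ConvertToForAsync(context.Document, forEach, c)),
        applicableToSpan: forEach.ForEachKeyword.Span);
}
```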
importantly none of this sounds like a blocker to releasing the feature, or something we can't come back and build/add on.
I do agree with this. Though I will throw my skeptical hat on and say that, generally, I don't think these sorts of issues routinely come back and get addressed. So, my concerned view is that these sorts of "work we can do later" bugs are just ways to get something in soon, while not really being as consistent and high quality as I would like for an in-box IDE feature.
Now, with every feature there's certainly a point where the team can say: "it's just not worth the time and effort to tick every single box". However, my main concern here is that much of the above just seems like the "bread and butter" sort of thing that I would expect to be done as part of a normal feature. They're the sorts of things we've expected from all IDE features (both internal and external).
--
So, for example: I can accept 'query to methods' not supporting complex cases at first. That's because the work to support them might be significant. But when we're talking about an hour or so of work, I think we should do our due diligence and just do it.
I am going to wait until the team decides on the principles here.
Once we decide on the principles, we should go and change all our existing refactorings to follow the same pattern.
...
As I said to others, my opinions are:
support ForEachVariableStatementSyntax for deconstruction -> we can do that later
[design question] should a refactoring be offered even if it can change semantics? -> I am fine with being strict or being permissive, but not somewhere in the middle. If we go permissive, we should go quite a bit further.
[design question] at what span should a refactoring be offered? -> I am +1 on line-based. I just realized we offer "use explicit type" on "var". I have been playing with "foreach(var" for 4 days and never knew it was there until I accidentally moved my caret over "var".
Anyway, we should decide on the principles and change all refactorings to follow the same pattern; also the behavior when there is a selection, and the HiddenPosition behavior.
[design question] is it okay to delete comments in certain cases? -> I am fine with deleting or duplicating them, as long as we decide on a principle for these.
support embedded statements. (refactor the introduce-local-variable service into an actual service and use that in the refactoring) -> later. More importantly, we should visit all our code fix and refactoring providers and change them into solution-based services, so that we can use them in other code fixes/refactorings.
let people convert to for if the type implements an indexer and either Length or Count -> Jinu, you decide; we can do either. If you ask whether it is a must, I don't think it is.
support multiple rename in the refactoring -> this would be a better one to have than any of the others.
[design question] how much formatting will we try to preserve -> like all the other ones, I am fine with not preserving formatting too aggressively and just formatting the parts that changed, including non-generated code. But we need a principle set.
A general principle I have is: we should make our features as good as possible, as long as the time it takes is not too high. So, for easy, low-hanging fruit, we should just go put in that effort. A reason why Roslyn is great so much of the time is that we've sweated the small details in so many places. Or, in other words, polish matters, and putting in a reasonable amount of time toward polish should be a core principle. :)
I am fine with deleting or duplicating them, as long as we decide on a principle for these.
Note: the principle on this has been established already. It's always been: do a very good job with the types of coding patterns that are the most common. Try to do reasonable stuff when possible. And accept that there might be some cases missed in any transformation. If cases are missed, drive fixes based on an evaluation of filed bugs so that we are hitting the cases that are affecting users. We've likely fixed thousands of comment bugs over the years. But, importantly, during that time we still had our features enabled and helping users, rather than just not being there and leaving them without the feature. :)
Note: I think @sharwell has mentioned he has an approach that can do even better than the above. It's well worth seeing that approach and trying to adopt it if possible. However, the above is what I would consider the min-bar for our features. Currently, I don't believe we have any features that go and say "I'm going to turn myself off because of trivia", so I do not see why we would start doing that now. This is not following any of the patterns or practices we have followed so far (which have had strong arguments as to why they work that way).
[design question] how much formatting will we try to preserve -> like all the other ones, I am fine with not preserving formatting too aggressively and just formatting the parts that changed, including non-generated code. But we need a principle set.
We have a principle on this as well. Specifically, if we can move original code over without changing it, we should try to do that as much as possible. If we can't, we should attempt to minimize the formatting impact on the user's code, so that it feels as much as possible like their code, in their style.
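As a concrete illustration of that principle, here is a minimal sketch of the annotate-then-format pattern (names like `oldStatement`/`newStatement` are placeholders): only the nodes the refactoring created get formatted, and the rest of the user's code is left exactly as they wrote it.

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Formatting;

static async Task<Document> ReplaceAndFormatAsync(
    Document document, SyntaxNode root,
    SyntaxNode oldStatement, SyntaxNode newStatement,
    CancellationToken cancellationToken)
{
    // Formatter.Annotation marks the spans the formatter is allowed to touch.
    var annotated = newStatement.WithAdditionalAnnotations(Formatter.Annotation);
    var newDocument = document.WithSyntaxRoot(root.ReplaceNode(oldStatement, annotated));

    // Formats only the annotated nodes; everything else keeps its formatting.
    return await Formatter.FormatAsync(
        newDocument, Formatter.Annotation, cancellationToken: cancellationToken);
}
```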
--
Note: the point of our PRs was to bring all this up. These are the principles we've followed up till now, and which we should continue with. If we want to write these down, I think that's fine. But it would be very strange to me if, after spending so much time and effort to sweat these sorts of things, we now decided it was no longer important and that our new features would behave much more poorly in these areas than the features we've already shipped.
According to your principle, I think the features are fine then. In your PRs, when people pointed things out, you said "no, that is not important"; and when others said something was not important, you said it was important. If I take the same stance, then I think things are all working reasonably, and the things you point out are not important and can be improved later if they become a problem.
I am not sure how "as good as possible" is a principle. I thought a principle is the thing we use to determine whether something is good or not when there are differing opinions.
We just had a meeting, and nobody there knew about all the principles you say we have. So I am not sure where those principles come from.
Literally all the features we've written and the bugs we've fixed there over the years. For example, when we find that we didn't preserve formatting well enough, and someone reported it, we tend to fix it. That demonstrates a commitment to keeping user formatting the same across refactorings.
If this were not one of our principles around our features, we would simply punt on these issues. But we actually think they are important. And, as such, I think we should ensure that new code we write attempts to follow those same goals as much as is reasonable, so that we're not just getting something in early and then waiting on bugs to drive the work.
--
Similarly, though clearly from many recent conversations it seems like it isn't agreed upon, consistency within the codebase is a desirable property to me. This means that features should not be unnecessarily different for no good reason. If we've followed certain approaches in the past across all our features, then by default new features should follow those same approaches so as to be consistent. I don't know if this was ever called out as a "principle", but it's certainly been a philosophy we've tried to follow. It's particularly valuable because we can use previous decisions to drive future decisions based on consistency. When new features start following a different set of rules, we end up in a situation where new features don't have a clear idea about which rules they should follow. This is why I tend to emphatically push for avoiding that.
This came up in the discussion of "if we should allow fixes to just parse code and insert it". As far as I know, zero of our features do this. And, accordingly, we've designed our subsystems with the assumption that that does not and should not happen. I brought up many examples of the problems that approach would cause. But the more important bit to me was that we simply should not do this at all, because it becomes a confusing choice for all future refactorings/fixes. And then, every time these PRs get reviewed, someone has to figure out "wait... are there downsides to parsing? what potential problems may occur?" This was necessary in your PR. By parsing code, your feature would not behave properly under certain circumstances. By driving these features to be implemented consistently, we lower the cost to review, maintain, and ensure behavior.
You say it is important. I am waiting for others' opinions. And sure, we do fix broken formatting after refactorings, but not every formatting style or related request.
And I say "principle" so that people can decide what is important and good, rather than having to go to a design meeting or team meeting every single time.
You say it is important. I am waiting for others' opinions. And sure, we do fix broken formatting, but not all of it.
I didn't say otherwise. I think you and I have a different idea of what a "principle" is. AFAIK, Roslyn has very few "hard" principles, i.e. things that are so sacrosanct that we literally will not allow the code unless it follows them. What we do have is a ton of "soft" principles: patterns and practices that are followed because they contribute substantively to the quality of the feature. So, for example, we have "soft" principles around formatting. We believe it's important to respect user formatting when changing their code, so we'll put in a reasonable amount of effort to preserve it. But it's not a 'hard' principle. We may not be perfect here, and we may not fix all bugs in that area. But that doesn't mean we don't care, or that we shouldn't do due diligence when writing features to do the right thing, especially when the right thing is often cheap and easy to do.
Sure, I agree. And I am not saying we should never care about user formatting, or that we shouldn't do the cheap and easy thing.
And I am not sure how that is related to this.
..
Some of the design questions, on refactoring span, pattern-based behavior, or restrictive vs. permissive, have nothing to do with cheap and easy. They are design questions about behavior.
About formatting or comments, the question is what a "reasonable amount of effort to preserve it" is.
because it becomes a confusing choice for all future refactorings/fixes. And then, every time these PRs get reviewed, someone has to figure out "wait... are there downsides to parsing? what potential problems may occur?" This was necessary in your PR. By parsing code, your feature would not behave properly under certain circumstances. By driving these features to be implemented consistently, we lower the cost to review, maintain, and ensure behavior.
But at the same time, you chose not to use IOperation, and to use syntax and the semantic model instead, since that was easier in your scenario. By your logic, even if it might be simpler in that situation, you should have used the more complex IOperation, since otherwise people might ask "wait... are there downsides to using syntax/semantics? what potential problems may occur?". And using syntax/semantics, your feature would not behave properly under certain circumstances.
And on consistency: shouldn't we then move everything to IOperation to be consistent?
I know the issue with the parsing approach you are referring to. But I also believe the parsing approach is easier to use in some cases, especially when someone is creating a bunch of new nodes, or working with a very specific format of trivia.
Also, with parsing it is sometimes much easier to understand what the code is doing than with these verbose tree build-ups. I am not saying it is always better; I am saying there are such cases.
And we should be fine mixing those approaches, just as we should be fine mixing the syntax/semantic model and IOperation.
I believe the concern about trivia/annotations exists in both the parsing and the tree-transformation cases, just in different forms.
Here's my stance on these specific issues:
- support ForEachVariableStatementSyntax for deconstruction
I don't know what this question is asking.
- [design question] should a refactoring be offered even if it can change semantics? (#25460 (review))
This has always been nuanced. If it changes compilation, we should make that clear, but we've (for example) tended not to worry as much about changing order of execution. It often comes down to a judgement call about how important the semantics are, and how likely/often the change will break existing programs.
- [design question] at what span should a refactoring be offered?
Naively, I would say we should start with a smaller span and watch usage to see if it needs to be expanded, so I would tend to just have it on the foreach keyword. I'll defer to @kuhlenh here though.
- [design question] is it okay to delete comments in certain cases?
The principle from before Roslyn, which @DustinCampbell and I tried to preserve through Roslyn, is that we shouldn't ever delete comments unless it's very clear that they are associated with some other code that is being deleted. It's better to duplicate than to delete (see the trivia sketch after this list).
- support embedded statements. (refactor the introduce-local-variable service into an actual service and use that in the refactoring) #25460 (comment)
I think it would be strange and confusing to users to not support embedded statements in this feature. To me that doesn't mean that we need to block on making every refactoring available as a service. We shouldn't let the cost of adopting a desired architectural pattern throughout the whole codebase prevent us from doing the right thing for customers in a specific feature. IIRC, there are a couple of other refactorings/fixes that do something similar for expression bodied members. There is also the Add Braces fix, which we could share some code with.
- let people convert to for if the type implements an indexer and either Length or Count #25460 (review)
This is something that I definitely feel like could come later, based on demand.
- support multiple rename in the refactoring #25460 (comment)
Definitely think we can add this later.
- [design question] how much formatting will we try to preserve
Agree with @CyrusNajmabadi here. The principle is that if we're just moving the user's code, we should try to preserve their formatting as much as possible, and only format the newly added/changed code.
- [design question] ordering of caret-based refactorings. #25460 (comment)
See my answer to 3 above. I would tend to limit the span of this for now.
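Regarding the comment-preservation answer above, a minimal sketch of how a transformation can carry the original comments over when replacing a node (`oldNode` and `newNode` are placeholder names):

```csharp
using Microsoft.CodeAnalysis;

// Copy the trivia (which includes comments) from the node being replaced onto
// its replacement, so the comments survive the transformation.
static SyntaxNode PreserveComments(SyntaxNode oldNode, SyntaxNode newNode)
    => newNode
        .WithLeadingTrivia(oldNode.GetLeadingTrivia())
        .WithTrailingTrivia(oldNode.GetTrailingTrivia());
```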
@heejaechang , @Pilchie -- having it on just the 'foreach' keyword to start should be fine (it will still show in the menu if your cursor is at the end of the statement, right?).
@heejaechang , @Pilchie -- having it on just the 'foreach' keyword to start should be fine (it will still show in the menu if your cursor is at the end of the statement, right?).
By end of statement do you mean here:
foreach (var x in y) //<-- right after this )
Or do you mean here:
foreach (var x in y)
{
} //<-- right after this }
?
I took the first approach in my Convert-For-To-Foreach refactoring: basically at the start/end of the for-'header'. I didn't think to offer it right at the end of the entire for-'block'.
and using syntax/semantics, your feature would not behave properly under certain circumstances.
I didn't move to IOperation in several places because I tried it out and it didn't help with complexity. Why would syntax/semantics not behave properly under some circumstances? We've been using syntax/semantics for almost a decade, and it's worked quite well for us in all that time. It's completely battle-tested, and it has been designed for exactly these sorts of things.
Also, as was clear from the actual design of IOperation, it was not intended to deprecate syntax/semantics. Nor was it intended to be something that people would move to wholesale from syntax/semantics. It was intended to be a complementary API: one that would be beneficial for some sorts of tasks, but might not be ideal for others. Devs would decide which of these APIs to use based on which would be best for the problem they were solving.
IIRC, it was a specific non-goal for IOperation to replace syntax/semantics. And if we do want it to be a replacement, that's something that will have to be designed, communicated, and approved by the IDE team before we make any sort of replacement/deprecation move with our features.
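For illustration, a sketch of the two complementary views over the same foreach statement (`semanticModel` and `forEachSyntax` are assumed to come from whatever analysis context you're in):

```csharp
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.CodeAnalysis.Operations;

static void Inspect(SemanticModel semanticModel, ForEachStatementSyntax forEachSyntax)
{
    // Syntax + semantic model: language-specific, close to the source text.
    var collectionType = semanticModel.GetTypeInfo(forEachSyntax.Expression).Type;

    // IOperation: language-agnostic, abstracts over C#/VB differences.
    if (semanticModel.GetOperation(forEachSyntax) is IForEachLoopOperation loop)
    {
        var collection = loop.Collection; // the enumerated expression, as an operation
    }
}
```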
people might ask "wait... are there downsides to using syntax/semantics?"
You're literally talking about the systems Roslyn analyzers and refactorings have used since the beginning. IOperation only became available in the last 6 months. Why would people speculatively think there were problems with syntax/semantics, given that they've actually served us well all this time? :)
And on consistency: shouldn't we then move everything to IOperation to be consistent?
Perhaps. You're welcome to ask for that change to happen; I would support it. That said, it's completely out of scope for these individual PRs. What's not out of scope is writing these PRs in line with how we've written all the other features in the IDE layer. And that means following the patterns and practices that have been laid out since the beginning, and not going down different routes that actually do have problems introduced by them.
Also, with parsing it is sometimes much easier to understand what the code is doing than with these verbose tree build-ups. I am not saying it is always better; I am saying there are such cases.
It's not a matter of better/worse. It's the fact that the parsing approach is not actually correct. I.e., because you're parsing, you lose information like the symbol annotations that SyntaxGenerator adds to nodes. Without this information, later parts of our system cannot operate accurately.
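A sketch of the difference, using the public `Simplifier.Annotation` as a stand-in (the symbol annotations mentioned above live in the workspace layer): parsed text carries no annotations, so later passes have nothing to key off of unless you attach them yourself.

```csharp
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.Simplification;

// Parsed: just text turned into nodes. No annotations are present.
var parsed = SyntaxFactory.ParseTypeName("global::System.Collections.Generic.List<int>");

// Annotated: the same type reference, tagged so a later Simplifier.ReduceAsync
// pass can shorten it to 'List<int>' when a using directive is in scope.
var annotated = parsed.WithAdditionalAnnotations(Simplifier.Annotation);
```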
And we should be fine mixing those approaches, just as we should be fine mixing the syntax/semantic model and IOperation.
You are welcome to do the work necessary to support those cases without the drawbacks I outlined when responding to your PR. If you do that, then we can consider allowing 'parsing' as a permissible primitive when building analyzers/refactorings. Until then, we should not be taking paths that have known problems associated with them.
Again, to be totally clear:
I would be OK with allowing "parsing" with analyzers/refactorings.
However, that will only be OK once we ensure that the parsing codepath has no drawbacks/problems versus the tree-construction codepath. Right now it does. If you go the parsing route, you can end up with cases (for example, ambiguities) where later parts of the system lack the information they need to do things correctly. That's why we moved to the tree route and why we use it so consistently. It is not arbitrary, and it's not because people are in love with tree construction. It's because this system ensures proper behavior of many different subsystems that we have built in order to ensure well-functioning code actions even in strange corner cases. By using these systems you avoid entire classes of bugs. If you go the parsing route, your code is now subject to those bugs, and that then adds cost to the IDE to fix in the future (not to mention having to consider them as part of the PR).
It's fine to want to be able to use 'parsing'. And I would even support PRs that use 'parsing', but only once we've done the due diligence to make sure that a feature using 'parsing' works properly in all the same ways that using SyntaxGenerator works properly.
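As one concrete example of the ambiguity class of problem (an illustration, not taken from the PR in question): the same text can parse to a different construct than the one a tree-building refactoring would have produced directly.

```csharp
using Microsoft.CodeAnalysis.CSharp;

// '(A)-b' is ambiguous as text. Per the C# disambiguation rules, a
// parenthesized identifier followed by '-' is parsed as a subtraction,
// not as a cast applied to a unary-minus expression.
var expr = SyntaxFactory.ParseExpression("(A)-b");
System.Console.WriteLine(expr.Kind()); // SubtractExpression, not CastExpression

// A refactoring that builds the cast node directly (e.g., via
// SyntaxFactory.CastExpression) has no such ambiguity: the intent is
// encoded in the tree, not re-derived from text.
```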
--
By analogy: say we have features that use library A and work properly for scenarios 1 through 10. If someone comes along and says "use library B, it's so much simpler", but B only works for scenarios 1 through 8, then that's not a suitable suggestion.
Once library 'B' can take on all the responsibility of 'A', without regressing behavior relative to 'A', then we can move to it and its simplicity can be appreciated. Until then, it's not a suitable choice, given the years of decisions to use the existing system, since all those scenarios are important.
We had the design meeting, and this is what we decided for now.
...
Problem: What are our refactoring principles? The IDE team has tribal knowledge on how to design refactorings but many of those originators are no longer writing refactorings. Let’s codify the refactoring principles so that we can create a consistent, discoverable refactoring experience for both authors and developers.
Principle 1: Be permissive with refactorings because developers want as much assistance as possible in their code transformations.
Principle 2: Warn developers if a refactoring may potentially cause semantic errors.
Principle 3: Code comments can only be removed if the purpose of the refactoring is to remove the comments.
Principle 4: Refactorings should adhere to the codebase or user’s code style.
Note: Can we tweak simplification to make the code style changes here? This would allow both ourselves and authors to provide fixes and refactorings that follow a codebase’s style and prevent a lot of code duplication.
Principle 5: Refactoring spans??? Our current idea is to A/B test changing refactoring spans with the refactorings that we are currently working on: foreach to for, for to foreach, invert if. The goal of the new design is to make triggering a refactoring a more consistent experience, so that users don't have to guess where their caret needs to be in order to get a refactoring.
Principle 6: Refactorings, if possible, should work on broken code.???
Also tagging @AlekseyTs as he is helping us with the review/design of LINQ-related refactorings.
• B: Modify the refactoring API to be line-based (so we don't have to call the service for every position in a line). The "primary span of interest" (typically the keyword) will be used to sort the action in the lightbulb menu.
Overall, I like this. But it will be very interesting to try to implement. Basically, many refactorings will have to check every position in a line to figure out if they're valid, right? And they can't stop the moment they find something valid, since there may be many valid places on the line, right?
This effectively means that they will have to do a check per character. Or, alternatively, they'll have to at least check every token on the line.
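A sketch of the per-token variant (assuming the host hands us the caret position): enumerate the tokens whose spans intersect the caret's line, and run the applicability check once per token rather than once per character.

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Text;

static IEnumerable<SyntaxToken> TokensOnLine(SyntaxNode root, SourceText text, int position)
{
    // The span of the line the caret is on.
    var lineSpan = text.Lines.GetLineFromPosition(position).Span;

    // Every token intersecting that line is a candidate trigger location.
    return root.DescendantTokens(lineSpan)
               .Where(t => t.Span.IntersectsWith(lineSpan));
}
```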
Principle 2: Warn developers if a refactoring may potentially cause semantic errors.
Note: I'm wary about this in terms of how deep we go. I'm fairly certain that nearly 100% of our refactorings may cause semantic errors (by some definition of 'error'). In other words, we need to define what 'semantic error' actually means. In general, we've been handwavy here. For example, we've allowed changes in order of operations, with the general assumption that the user was explicit in stating that they wanted to make the change, and thus was OK with the operation. For example:
Console.WriteLine(${P}, x + y);
If you extracted "x + y" into a local, do you really want a warning that semantics have changed here just because 'P' might have been side-effecting in a way that might have affected 'x' and 'y'? We've broadly felt in the past that certain patterns and practices on the part of the user can be assumed not to need that fine a level of warning.
and red conflict boxes.
Red conflict boxes would appear in tons of cases, and would really make the experience feel far worse, unnecessarily, for 99% of cases.
I think we should err in the other direction. We should look for specific types of issues that are likely to cause problems. The general rule would be: "more often than not, if this happened, would a bug be likely?" If the answer is "yes", we should warn (or maybe put up the conflict marker). If the answer is "no", then we can presume the user is likely safe, and we don't need to make the display unnecessarily scary.
Note: How can we better inform users that a refactoring they want to apply may have ramifications to their program semantics? One proposal is to add code comments explaining…
The preview window already has the ability to show arbitrary text notifications. We already use this today when adding a warning/conflict annotation. The text of those annotations is shown at the bottom of the preview window.
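For reference, a minimal sketch of how a transformation hooks into that mechanism today (`root`, `oldNode`, and `newNode` are placeholders): attach a `WarningAnnotation`, and hosts that support previews surface its text at the bottom of the preview window.

```csharp
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CodeActions;

static SyntaxNode ReplaceWithWarning(SyntaxNode root, SyntaxNode oldNode, SyntaxNode newNode)
    => root.ReplaceNode(
        oldNode,
        newNode.WithAdditionalAnnotations(
            WarningAnnotation.Create(
                "Semantics may change: verify the collection is enumerated the same way.")));
```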
We don't need to make it complicated. We can divide refactorings into two modes (not exclusive): caret-position based and selection based, and have two different APIs for them.
For caret, it can do exactly the same thing most refactorings do: find the ancestor from the given position and, if one is found, return a token. The system will check whether that token is on the same line; if not, it throws the refactoring away; if it is, it uses the token span for ordering (see the sketch below).
For the case of multiple possible candidates from the caret position on the same line, we need to audit all refactorings, but I can already see some caring only about the one closest to the caret. So this is not turning something supported into something unsupported.
For selection mode, probably the same as before, but with higher priority than caret, since a selection shows more explicit intention.
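A sketch of the caret mode described above, using a foreach-based refactoring as the example: walk up from the token at the caret to the construct the refactoring handles, and return the token the host should use for same-line filtering and ordering.

```csharp
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp.Syntax;

static SyntaxToken? FindSuggestedToken(SyntaxNode root, int caretPosition)
{
    // Find the enclosing construct this refactoring cares about.
    var forEach = root.FindToken(caretPosition).Parent?
        .AncestorsAndSelf().OfType<ForEachStatementSyntax>().FirstOrDefault();

    // The 'foreach' keyword is the suggested token: the host keeps the
    // refactoring only if this token is on the caret's line, and uses its
    // span for ordering.
    return forEach?.ForEachKeyword;
}
```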
Principle 3: Code comments can only be removed if the purpose of the refactoring is to remove the comments.
This is an interesting principle, but I would like to understand what it means in practice. If you are doing a transformation, and there are some comments in the original code that don't cleanly map to locations in the new code, what is the expected behavior of the refactoring? Should it:
It sounds like we're saying '3', but we've underspecified things.
Note that '2' has not been a problem for the IDE since inception. We've gotten bugs at times where our best effort wasn't good enough, and we've fixed them as appropriate. But best-effort preservation not keeping every potential comment has so far not turned out to be a problem. While it seems like a nice principle, I question why we need this, given that the current approach has not been a problem so far, and the new approach doesn't actually define how it would work or what the experience would be.
On being permissive: we are saying most local refactorings should 99.99% just work. We should have some indication of whether we are 100% confident the semantics won't change, but it should never block the code transformation from happening.
We could even show the refactoring icon only when we are 100% sure the semantics will be preserved, and otherwise not show the icon but still support Ctrl+.. The point is that it is the user's choice whether to take the code transformation and fix things up or not.
Here by 'local' we mean things like foreach to for, LINQ to for, etc. Rename or change signature is the other kind, which is not local.
For caret, it can do exactly the same thing most refactorings do: find the ancestor from the given position and, if one is found, return a token. The system will check whether that token is on the same line; if not, it throws the refactoring away; if it is, it uses the token span for ordering.
I see. This is different from what you said above. The writeup said: "• B: Modify the refactoring API to be line-based (so we don't have to call the service for every position in a line)."
This would not be line-based. It would still be character-based. It would just be using the 'refactoring' span plus the position to prioritize refactorings. I would be OK with this.
Note: I would prefer another principle that says something like: the refactoring should only appear if the line the caret is on intersects a primary defining part of the construct the refactoring applies to. So, for example, if I have:
for (...)
{
    //// lots of code
    // nested lambda {
    $$ // here
    // }
}
Then I really shouldn't see refactorings for 'for', even though the for-loop would intersect my caret. I think it's reasonable for C# and VB to describe these as being available on the 'logical header' (and sometimes footer) of a construct. This would mean the for (...) part of the for loop.
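One plausible way to express that 'logical header' as a span (a sketch, using the for statement's own tokens):

```csharp
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.CodeAnalysis.Text;

// The 'for (...)' header, excluding the embedded statement, as the
// applicability span for for-loop refactorings.
static TextSpan GetForHeaderSpan(ForStatementSyntax forStatement)
    => TextSpan.FromBounds(
        forStatement.ForKeyword.SpanStart,
        forStatement.CloseParenToken.Span.End);
```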
We are talking about principles here; we will find/create a mechanism to uphold them.
On being permissive: we are saying most local refactorings should 99.99% just work. We should have some indication of whether we are 100% confident the semantics won't change,
My point is: we are never 100% confident. Users may always be dependent on something that will change.
We are talking about principles here.
Yes. I understand. I'm questioning some of the principles that introduce new constraints that didn't exist before and which were not a problem before. I'm also pointing out that some of the principles may degrade things by introducing warnings almost 100% of the time, if we take the strict approach of warning whenever we're uncertain about the code change.
For example: introduce local would warn almost every time. Move-declaration-near-reference would warn almost 100% of the time. etc. etc.
Converting any foreach to for (and vice versa) would have to warn for everything but arrays/strings, as we would never be certain that your enumerator and indexer had the exact same underlying behavior. Etc. etc.
Do we really want to warn in the cases where we're not 100% sure that no semantics change? That would be a big deviation from today, when we only warn when we see things that are truly fishy and warrant users taking extra time to examine things.
I don't think we are talking about 100% confidence. Sure, there can be cases we missed; that is a bug, and we will fix it. This is a principle, so it will be how we judge the reasoning when someone decides not to offer a refactoring. The principle simply states that the code transformation should always be possible if the user chooses to do it, regardless of whether it changes semantics or not. We will use another indicator to show whether we have high confidence in the code transformation or not.
It says line-based in the sense that the user should be able to Ctrl+. on the line where the suggested token is, rather than moving the caret exactly onto the token. And refactorings would be ordered by that suggested token.
Whether we should do LB or not is another design question (but I believe we should, since we already do it for code fixes). But this is the principle, and what it states is that we should see whether making it line-based will increase usage of refactorings.
FYI, we also talked about showing a visible cue on those suggested tokens, so that it can lead people to where to put the caret for a refactoring, but people didn't seem to like that idea.
I don't think we are talking about 100% confidence.
Well, you said this:
Principle 2: Warn developers if a refactoring may potentially cause semantic errors.
and
we should have some indication of whether we are 100% confident the semantics won't change,
:)
I'm just going off what you said above.
It says line-based in the sense that the user should be able to Ctrl+. on the line where the suggested token is, rather than moving the caret exactly onto the token. And refactorings would be ordered by that suggested token.
Got it. I'm OK with this meaning. I like this behavior, and I think it's very similar to how analyzers/fixes work. There's a span of applicability, and we prioritize based on the relation of your caret to that span on the same line.
I have no problem with this, and I think it will be a great improvement. It will also simplify a lot of refactorings that did a bunch of "interesting" applicability work, by removing all that logic and consolidating it in a single place.
Well, let's change that to 'high confidence' if having '100%' there is bothering you and it feels like it must mean literally 100% confident and correct.
We will use another indicator to show whether we have high confidence in the code transformation or not.
Yes. I like the part where you say "high confidence". I agree with this. If we have high confidence, then we don't show warnings/errors. If we have lower confidence we do. This allows us to be judicious about deciding if/when to show these warnings/conflicts.
It allows for some level of reasonableness. For example, we really shouldn't warn here:
foreach (var x in list) {
    Console.WriteLine(x);
}
// convert to
for (int i = 0; i < list.Count; i++) {
    Console.WriteLine(list[i]);
}
even though technically we're not 100% sure that this won't change semantics.
if having '100%' there is bothering you and it feels like it must mean literally 100% confident and correct.
I don't understand what this means.
I meant, you are taking '100% confident' as literally 100%.
FYI, we also talked about showing a visible cue on those suggested tokens, so that it can lead people to where to put the caret for a refactoring, but people didn't seem to like that idea.
I agree with "people" here. Refactorings should not show up in your code. You should just have the lightbulb for them. Suggestions are the point at which we've decided "hey... we have enough confidence to tell you to go do something proactively". For everything else, we should not be interfering with the code.
I meant, you are taking '100% confident' as literally 100%.
When someone says 100%, that's meant to be taken literally :D But anyway, it sounds like we're on the same page here. When there is high confidence, things are OK: no warning/conflict. When there is low confidence, provide a warning/conflict.
I'm totally behind that principle.
Problem: What are our refactoring principles? The IDE team has tribal knowledge on how to design refactorings but many of those originators are no longer writing refactorings. Let’s codify the refactoring principles so that we can create a consistent, discoverable refactoring experience for both authors and developers.
Thank you for doing this. I'm always glad when tribal knowledge can be clarified and disseminated. It's great to have this, and will make future PRs much simpler and easier to do!
Principle 5: Refactoring spans???
Note: I think we can be sensible here and use our best judgement. Ask ourselves "what would a user reasonably think is a span belonging to this construct?"
For languages like C#, this is often the 'header' portion of the construct, and maybe a few child locations. For example, for something that applies to an 'if' statement, the spans would be:
[|if (...)|]
    childStatement
[|else|]
    childStatement
Similarly, for a for loop, it would be reasonable to think of this:
[|for (...)|]
statement
For switches it would be things like:
[|switch (...)|]
{
[|case ...:|]
}
etc. etc.
Basically: when does a person still think they're 'on' the construct, as opposed to something unrelated that is just inside the construct? In general, as you descend into nested statements (or complex expressions like lambdas), the user now thinks of that as the thing they relate to, not the far-outer construct.
I personally don't think A/B testing is really necessary. It should be pretty clear based on everything we've learned so far about refactorings since VS2005.
tagging @jinujoseph @kuhlenh @DustinCampbell for other opinions.