gafter opened this issue 7 years ago
UTF-8 encodings come with a lot of issues for non-English languages and developers. So this feature might only be a good thing for English-speaking developers and a bad idea for everybody else.
It might be useful for interop with non-Unicode applications, but then I would prefer an explicit encoding conversion using System.Text.Encoding.
@MovGP0 I think this is related to https://github.com/dotnet/corefxlab/blob/master/docs/specs/parsing.md . UTF8 strings are very common and if you don't have to convert to UTF16 (== .NET strings) and back again you save memory and CPU.
When UTF-8 string literals are added it would be nice to have UTF-8 version of StringBuilder as well.
I am quite curious about how it is going to handle the index operator: utf8string utf8 = "©α中文𨙧: some regular Chinese and special characters"; utf8char utf8c = utf8[5];
Does it mean the class needs to enumerate and decode bytes into UTF-8 characters internally until it finds the sixth character? Or are you disallowing the index operator on the UTF8String?
Here are the similar questions:
If there is no support or optimization for this, I would say we probably just need syntactic sugar for converting a string to a byte array with a UTF-8 encoding.
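To make the cost the question alludes to concrete, here is a language-agnostic sketch (Python over raw UTF-8 bytes, purely illustrative, not any proposed API): without extra bookkeeping, an indexer must walk the bytes from the start, skipping continuation bytes, until it reaches the n-th code point.

```python
def utf8_index(data: bytes, n: int) -> str:
    """Return the n-th code point of UTF-8 encoded `data` via an O(n) scan."""
    count = -1
    start = 0
    for i, b in enumerate(data):
        if b & 0xC0 != 0x80:  # not a continuation byte: a code point starts here
            count += 1
            if count == n:
                start = i
            elif count == n + 1:
                return data[start:i].decode("utf-8")
    if count == n:
        return data[start:].decode("utf-8")
    raise IndexError(n)

data = "©α中文𨙧: some regular Chinese and special characters".encode("utf-8")
print(utf8_index(data, 5))  # the sixth code point: ":"
```

Every lookup restarts the scan, which is why a naive index operator on a UTF-8 string is O(n) rather than O(1).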
Those questions are better suited for corefx.
@sumtec and @MovGP0 .NET Micro Framework always had a UTF-8-only string implementation, transparent to the developer. It does support trimming, substrings, and indexing, although reverse looping is not optimized (source). It saved memory. You could, however, make the same arguments about "normal" strings with surrogate pairs.
See https://github.com/dotnet/csharplang/issues/2911 for a minimal specification for this feature.
Will there be a type that represents potentially invalid UTF-8 strings, like Linux file paths?
I'm not a fan of this approach, as it treats utf8 strings as something separate that then needs to be brought in through a side channel.
It seems this fundamentally could not be picked up by a library author. I.e. if I have a library and I'm already using System.String (highly, highly likely), I can't switch to utf8 strings because it will break all my consumers. And if I don't use utf8 strings, my consumers will similarly be less likely to, since they would not want the cost of marshalling to/from all libs.
--
I talked to @jcouv about this, and the approach that feels most likely to succeed would be to provide a way to switch the .NET runtime to/from utf8 mode (on a process boundary, most likely). The benefits here are:
There is a downside in this that often gets brought up, namely that utf8 strings do have different perf behavior for some ops compared to strings (namely indexing). However, this doesn't actually seem like a critical problem to me. First, remember that what I'm proposing involves a switch (either opt-in or opt-out) to use utf8 across the board. As such, if someone is in a domain where they index heavily and would take a perf hit, they can simply not use utf8 until they address that problem. Second, I think the problem is somewhat overblown in terms of how bad it is. We can likely break string indexing up into two domains: random access (str[i]) and sequential access (str[i] and then str[i + 1]), where the location information collected in the first op can be used to make the second fast. Code that truly needs constant-time random access could use char[] or ImmutableArray<char> instead.
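The sequential-access optimization described above can be sketched as follows (a hypothetical Python illustration, not any runtime's actual design): a string wrapper remembers the byte offset of the last code point it resolved, so str[i] followed by str[i + 1] only scans forward one step instead of restarting from the front.

```python
class Utf8String:
    """Hypothetical UTF-8 string with a cached cursor for sequential indexing."""

    def __init__(self, s: str):
        self._data = s.encode("utf-8")
        self._last = (0, 0)  # (code point index, byte offset) of last lookup

    def _advance(self, byte_off: int) -> int:
        """Move one code point forward by skipping continuation bytes."""
        byte_off += 1
        while byte_off < len(self._data) and self._data[byte_off] & 0xC0 == 0x80:
            byte_off += 1
        return byte_off

    def __getitem__(self, i: int) -> str:
        idx, off = self._last
        if i < idx:
            idx, off = 0, 0  # cache unusable for backward access: restart
        while idx < i:       # O(1) amortized when access is sequential
            off = self._advance(off)
            idx += 1
        self._last = (i, off)
        return self._data[off:self._advance(off)].decode("utf-8")

s = Utf8String("a©中𨙧b")
print(s[2], s[3])  # the second lookup reuses the cached offset
```

Backward or random access still degrades to a scan, which matches the observation that only some usage patterns would see a perf hit.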
Basically, it feels like there is a path that can get us to a future where almost everyone (final consumers and libraries alike) is on utf8 and the entire ecosystem gets the massive memory savings. It comes at the complexity of having opt-in/out and potentially needing some analyzers/classes for the people using strings in uncommon ways today. However, it seems much better to me than introducing a new utf8 string type that is highly unlikely to be picked up.
As an example of how we have a problem, take a look at Roslyn itself, including the entire Roslyn API we ship.
How could Roslyn itself possibly get the benefits of utf8 strings? Would every symbol need both a .Name and a .Name8? How would memory not explode in such a world? Effectively, afaict, a project like Roslyn could never move to utf8. And we're one of the projects that would benefit the most here. We would likely save gigabytes of memory on real projects on user boxes.
So, as mentioned at the start, this overall approach seems highly limited and constraining. It will only help projects that are isolated and can completely switch over without having to worry about dependencies. The overall ecosystem will find it nearly impossible to switch.
Conversely, the approach I outlined gives a path forward that allows big saving immediately across the board, with appropriate mechanisms for people to deal with rare problems if they arise. Then, if problems do occur in some places, they can be fixed up without holding the rest of the ecosystem back.
@CyrusNajmabadi Is there an ongoing discussion on your "side-channel" proposal without introducing a new UTF8String type? I share your concerns about the fragmentation problem a new UTF8String type would bring. However, I could only find the discussion around the design of the new utf8 types (https://github.com/dotnet/corefxlab/issues/2350) and the older compact string proposal (https://github.com/dotnet/coreclr/issues/7083).
> @CyrusNajmabadi Is there an ongoing discussion on your "side-channel" proposal without introducing a new UTF8String type?
No clue. @jcouv @gafter is there any hope of this being not a side-channel type? Note: personally, I think this is an appropriate hill to die on. It is that important.
@CyrusNajmabadi That is a question for corefxlab, coreclr, and corefx, possibly focused at https://github.com/dotnet/corefxlab/issues/2350. This proposal isn't going anywhere without that team making a decision about what they want to do to support UTF-8. If the answer is a new type, this proposal applies.
@orthoxerox Re "Will there be a type that represents potentially invalid UTF-8 strings, like Linux file paths?". Are you asking about ReadOnlySpan<byte>?
@gafter will System.IO classes use ReadOnlySpan<byte>? Like, IEnumerable<ReadOnlySpan<byte>> System.IO.Directory.EnumerateFiles(ReadOnlySpan<byte> path)?
@orthoxerox You would have to ask the folks designing those APIs.
I keep going back and forth on this... @CyrusNajmabadi's concerns are absolutely what I have felt to be the biggest downside, and I share the opinion that indexing into the string is not a major concern: I don't see indexing UTF-16 code units as being much different from indexing UTF-8 code units, as both encodings are variable-length, and so one code unit does not always represent one code point (nevermind that one code point does not always represent one character, depending on what the developer has in mind when they talk about "the third character in this string").
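The point about code units vs. code points holds for both encodings and is easy to check directly (Python used here purely as a neutral illustration, not C#-specific behavior):

```python
s = "𨙧"  # a single code point outside the Basic Multilingual Plane
print(len(s.encode("utf-16-le")) // 2)  # 2 UTF-16 code units (a surrogate pair)
print(len(s.encode("utf-8")))           # 4 UTF-8 code units (bytes)
```

Either way, "index i" does not mean "the i-th user-perceived character", so UTF-8 indexing is not uniquely worse than what UTF-16 strings already have.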
I mean, if you were to ask me, "hey @airbreather, if you were designing C# / .NET from scratch, what encoding would you use to store character data in string?", then I would say "UTF-8" without a hint of hesitation (I feel more strongly about this point than I do about array covariance being a mistake). But there's so much momentum behind UTF-16 strings that I can't unequivocally support this proposal: introducing UTF-8 companion types to today's UTF-16 string / char has a very real risk of harming performance, as the majority of the users of the UTF-8 stuff would wind up marshaling anyway to interop with third-party code that uses UTF-16 (edit: at least in the short term, until adoption picks up).
I'm also not terribly optimistic that this will really bear fruit without also investing significantly in CoreFX to add comprehensive first-class support, like what was done for Span<T> / ReadOnlySpan<T> / Memory<T> / ReadOnlyMemory<T>, and I can definitely imagine that major established third-party libraries would not share my enthusiasm for adding parallels in their public API surface.
Ultimately, however, I've settled on a :+1: for this. I personally have a phobia about wasting CPU cycles and virtual memory bytes, so if LDT thinks that, in spite of the concerns raised here, this is something that has a realistic chance of making UTF-8 more of a first-class member of our ecosystem, then I'd be delighted to see this next important step towards breaking the chicken-and-egg feedback loop of:
> We could try to expose both types of strings somehow? allowing consumers to move to utf8 when possible, while still having the System.String property. But how would this look? .Name and .Name8? How would memory not explode in such a world?
@CyrusNajmabadi in this example, would it be viable for Roslyn to use .Name8 as the actual storage, but keep the existing .Name properties around with accessors that marshal to/from UTF-16 on demand? Perf-sensitive consumers could move to .Name8, and there could be Roslyn-specific analyzers that help identify these. Admittedly, the prospect of ~doubling the public API surface alone may be enough to kill this idea...
> Admittedly, the prospect of ~doubling the public API surface alone may be enough to kill this idea...
Yes. It seems like it would just be awful :-/
Would it be possible to make a UTF8String implementation that extends the type String, so that no changes to the APIs are needed? The UTF8String would become an implementation detail. Maybe other implementations of String would even become possible, like strings based on a Span. Or wait until "type classes" are introduced and then make a String type class: https://github.com/dotnet/csharplang/issues/110
@inforithmics Almost any change to System.String would be a breaking change. For example, String has a contract that its chars can be accessed by index in constant time. That is not true of utf8 strings, so we do not want to expose that same API for them.
You are totally right. In my opinion the situation could only be simplified with the type classes of C# 10, where a String type class could be added; interfaces would accept that type class, and it wouldn't matter whether the value is a String or a UTF8String.
I read a little about other programming languages that moved from one string representation to another, and I stumbled upon Swift, which changed its representation from UTF-16 to UTF-8 in version 5: https://swift.org/blog/utf8-string/ They had the advantage of already having a base string class with different implementations, which simplified things for them, but it was still an ABI break.
The reason I'm suggesting a sort of base String type is that the current implementation is not very friendly to large strings (https://mattwarren.org/2016/05/31/Strings-and-the-CLR-a-Special-Relationship/), because it needs large contiguous blocks of memory. So if efficient string storage is needed, the only option at the moment is (jagged) byte arrays with wrapper methods. Large contiguous blocks of memory are a problem in a .NET process with pinned objects, because pinned objects cannot be rearranged, and this happens a lot in native .NET interop scenarios.
Isn't this a subset of a much larger feature - deterministic functions - and already completed work in .NET 6 by means of unfolding constants?
static byte[] _helloWorldUtf8Bytes = Encoding.UTF8.GetBytes("Hello world");
// is JITted to
static byte[] _helloWorldUtf8Bytes = new byte[] { .... };
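For reference, the bytes such a folded array would hold are just the UTF-8 encoding of the literal; for an ASCII string like this they coincide with the character values. A quick check (Python used only to illustrate the encoding, not the JIT behavior):

```python
# UTF-8 encoding of the literal; these are the bytes a folded array would contain
data = "Hello world".encode("utf-8")
print(list(data)[:5])  # [72, 101, 108, 108, 111] -> 'H', 'e', 'l', 'l', 'o'
```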
No. See the motivation section of the proposal, which mentions that exact pattern. There are startup costs as the JIT has to do the conversion, and you still have to pay memory for the UTF-16 representation you're never going to use.
> No. See the motivation section of the proposal, which mentions that exact pattern. There are startup costs as the JIT has to do the conversion, and you still have to pay memory for the UTF-16 representation you're never going to use.
These 2 reasons are also applicable to justify adding "constant expressions" to C#, not just for UTF-8 strings but for everything else:
class UTF8Encoding // or w/e it's called
{
public static deterministic byte[] GetBytes(string s) { ... }
}
static readonly byte[] _bytes = Encoding.UTF8.GetBytes("Hello world");
// is compiled by C# to this IL:
static readonly byte[] _bytes = new byte[] { .... };
PS. The proposal talks about static readonly, but not JIT unfolding. But even if it did, my point above still stands.
Follow up from a conversation with @333fred and @tannergooding on the C# Discord (#lowlevel).
Wanted to comment and add that it would be awesome if [CallerMemberName], [CallerArgumentExpression], and nameof expressions got support for target parameters of type ReadOnlySpan<byte> (and byte[] too, if we wanted consistency there).
To provide a practical example and some context on this, we could leverage this in the Store to make our managed trace logger providers more efficient. We've been migrating our remaining C++ code to C#, and one of the things I'm currently working on is a managed version of our trace logging provider. This is a manifest-less ETW provider, meaning it needs each event to also get a metadata binary blob encoding all parameters being passed in the actual event data descriptors. C++ uses some macros to achieve this, whereas in C# I've come up with a builder-like approach that lets you build the metadata and event descriptor buffers in a declarative way. It looks something like this (I've garbled up the various names):
using TracingDataBuilder builder = TracingDataBuilder.Create();
builder.AppendEventTagAndName(SOME_TAG.BAR); // [CallerMemberName]
builder.AppendWStringKeyValuePair(someText, someTextLength, "Some literal");
builder.AppendWStringKeyValuePair(someOtherText, someOtherTextName); // [CallerArgumentExpression]
builder.AppendWStringKeyValuePair(someId, someIdLength); // [CallerArgumentExpression]
builder.AppendWStringKeyValuePair(someType, someTypeLength); // [CallerArgumentExpression]
builder.AppendBoolKeyValuePair(&someBoolParameter, nameof(someBoolParameter));
builder.AppendBoolKeyValuePair(&someOtherBoolParameter, nameof(someOtherBoolParameter));
builder.AppendInt32KeyValuePair(&someIntParameter, nameof(someIntParameter));
builder.AppendInt32KeyValuePair(&someOtherIntParameter, "Some other literal");
_ = builder.EventWriteTransfer(_traceLogger, descriptor, null, null);
There are 3 ways each parameter name is passed, as you can see: implicitly via [CallerMemberName] or [CallerArgumentExpression], explicitly via nameof, or as a hardcoded string literal.
Now, the parameter name in the metadata blob needs to be encoded as a UTF8 string, meaning that currently I need to encode each parameter name into the target buffer. This is still zero-allocation (using the Encoding.GetBytes overload taking a target span), but it's not the fastest. If we could instead change those parameters to just be ReadOnlySpan<byte>, the builder could blit their contents directly into the target metadata buffer, without having to do any conversion at runtime.
Without support for those 3 scenarios, the alternative would be to either keep using a string and do the conversion at runtime (slow), or always use a UTF8 string literal, meaning the code would end up being much more verbose and error prone (string literals everywhere). It'd be great if the new UTF8 string support was just extended to cover the existing scenarios here 😄
EDIT: if all of these features couldn't be added, having support for just nameof would at least be a major win, as it'd avoid having to pass hardcoded string literals everywhere, which is particularly error prone.
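The perf difference being asked for can be sketched as follows (hypothetical Python standing in for the builder; the function names are my own, not the actual API): encoding the name on every append versus blitting bytes that were already UTF-8 before the call.

```python
def append_name_runtime(buffer: bytearray, name: str) -> None:
    buffer += name.encode("utf-8")  # conversion happens on every call

def append_name_blit(buffer: bytearray, name_utf8: bytes) -> None:
    buffer += name_utf8             # no conversion: bytes are copied as-is

buf_a, buf_b = bytearray(), bytearray()
append_name_runtime(buf_a, "someIntParameter")
append_name_blit(buf_b, b"someIntParameter")  # what a ReadOnlySpan<byte> nameof could supply
print(buf_a == buf_b)  # True: same bytes, but the second path skips the encode
```

With caller-info attributes and nameof producing UTF-8 spans, every name in the metadata blob could take the second path.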
Some feedback/questions about the design were raised here: https://github.com/dotnet/csharplang/discussions/5983
Assuming that the natural type of "literal"u8 is byte[] or ROS<byte>, is there going to be some kind of marker within the assembly data that says "this is a UTF-8 literal" vs. "this is an instantiation of some binary data blob"?
I'm specifically thinking of disassembly / debugging / diagnostic scenarios. If a decompiler sees this in the IL stream:
ldc.i4.5
newarr [System.Runtime]System.Byte
dup
ldtoken field valuetype <foo>
call <initialize_array_helper>
Which of these two should that decompile into?
byte[] a = new byte[] { 0x48, 0x65, 0x6C, 0x6C, 0x6F };
byte[] b = "Hello"u8;
A marker somewhere that a diagnostic tool could inspect would prevent guessing and would ensure that the tool displays the correct human-friendly representation.
Tagging @AlekseyTs for Levi's question about decompilation and debugger representation.
> is there going to be some kind of marker within the assembly data that says "this is a UTF-8 literal" vs. "this is an instantiation of some binary data blob"?
At the moment there are no plans to have any markers like that.
> At the moment there are no plans to have any markers like that.
Thanks for the response. Is this being tracked anywhere for future implementation, with criteria for what would move it above the cut line, or is this more of a "we're not interested in ever doing this" thing?
I'm not really certain what the need for this would be. Why does it matter how the code was originally written? To me, it's like asking: "did the user originally have parentheses in the code when they wrote an expression?"
> I'm not really certain what the need for this would be. Why does it matter how the code was originally written? To me, it's like asking: "did the user originally have parentheses in the code when they wrote an expression?"
I'd personally appreciate being able to see a human-readable string inside of a method when decompiling an assembly through ILSpy, instead of having to have my ASCII table handy to decode each character one-by-one. I can manage without it, but I can predict that this would have a much larger impact on my experience than "parentheses in the code"...
That said, I think it's totally reasonable to push that burden onto the decompiler: whenever it detects a byte sequence that's valid UTF-8 and could be represented as a UTF-8 string literal, it chooses that decompilation over any alternatives. UTF-8 is a sparse encoding, and so the risk of a false detection is kind-of low... not to mention that it could use heuristics.
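A rough sketch of such a detection pass (my own illustration, not any actual decompiler's logic): accept a byte[] as a candidate u8 literal only if it decodes as valid UTF-8 and contains no unprintable control characters.

```python
def looks_like_utf8_literal(data: bytes) -> bool:
    """Heuristic: could this byte array plausibly be shown as a u8 string literal?"""
    try:
        text = data.decode("utf-8")  # strict decode rejects malformed sequences
    except UnicodeDecodeError:
        return False
    # reject control characters other than common whitespace
    return all(ch >= " " or ch in "\t\r\n" for ch in text)

print(looks_like_utf8_literal(bytes([0x48, 0x65, 0x6C, 0x6C, 0x6F])))  # True: "Hello"
print(looks_like_utf8_literal(bytes([0xFF, 0x00, 0x10])))              # False
```

As noted in the replies below this suggestion, short binary blobs can still pass such a check by accident, which is the false-positive concern.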
This seems like a better path anyway, because that's what I would want even if the original code used new byte[] { ... }.
What about the view within the debugger? IMO that's where I would find more value in being able to see the value represented as a String rather than a byte array.
That's a good question for sure. Def should be part of the feature evaluation around things like the debugger expression evaluator/presentation. @tmat do you know who would be pulled into that part of the conversation?
Myself and the debugger team.
I can see the debugger providing more visualizers (including custom ones) for the byte[] type. It could perhaps offer to visualize any byte[] as any encoding you specify. For example, the watch window allows specifying a format like so: <expression>, <format>
I can imagine something like:
expr, u8
expr, encoding=sjis
Plus some UI around it if you don't want to type it in and for discoverability.
A workaround: Encoding.UTF8.GetString(expr) :)
DataTips are more interesting - when you hover you might want to immediately see the string for u8 literals, not the bytes. Since data tips rely on presence of source code I think we should be able to analyze the relevant source and infer that a string should be displayed.
> That said, I think it's totally reasonable to push that burden onto the decompiler: whenever it detects a byte sequence that's valid UTF-8 and could be represented as a UTF-8 string literal, it chooses that decompilation over any alternatives. UTF-8 is a sparse encoding, and so the risk of a false detection is kind-of low... not to mention that it could use heuristics.
I think you're going to find that this strategy leads to numerous false positives. Just within dotnet/runtime, this strategy would result in false positives in:
The use of heuristics to solve this problem is interesting, but that's subject to the whims of the disassembler author and would necessarily treat some languages (Chinese/Japanese/Korean being the likely candidates) as more user-hostile than English, which is a bad experience. My experience has been that heuristics aren't a good substitute for an agent (the compiler) embedding correct information at build time.
> My experience has been that heuristics aren't a good substitute for an agent (the compiler) embedding correct information at build time.
I guess I don't understand the concept of 'correct'. They're equally correct to me. What it sounds like is more about what the author wrote. But that's not relevant to me as the consumer (for all the reasons that style is not relevant). If I care about those details, I'll just look at the original source. If I'm working with an arbitrary compiled-down representation, then my preference is to follow the style I want, not what the original code was. Indeed, that's more important to me in case a user doesn't use this feature, but I would still like the clarity in the decompiled code for my own readability purposes.
My point of view is that it is not a goal of IL to accurately represent the original source code, there are other tools for that (like PDBs and Source Link). Also, I can't think of another case where the compiler chooses IL representation based on the needs of decompilers and I don't see a reason why it should start here.
Aren't the implicit operators introducing a breaking change? Take for example the following code, which with C# 10 compiles fine, but will fail to compile if C# 11 is used.
using System;
var helper = new Helper();
helper.Add( "key", "value" );
class Helper
{
public void Add(string key, ReadOnlySpan<char> data) { }
public void Add(string key, ReadOnlySpan<byte> data) { }
}
@zlatanov there shouldn't be, since plain strings should remain UTF-16, and to make the byte overload considered you would have to add the utf8 suffix to the last string, or whatever the final suffix will be.
> @zlatanov there shouldn't be, since plain strings should remain UTF-16, and to make the byte overload considered you would have to add the utf8 suffix to the last string, or whatever the final suffix will be.
As I am writing this, it seems not to be the case. Check this out: https://sharplab.io/#v2:EYLgtghglgdgPgAQEwEYCwAoTA3CAnAAgAsBTAGwAcTCBeAmEgdwIAlyq8AKASgG5NSlagDoAggBNxnAgCIA1iQCeMgDSzcZAK4kZBPpkzJW7apgDemAlYIIAzDYAsBCVIQoADAQWK1AJRIQ4gDyMGSKAMoUEDAAPADGRPgAfATiEAAuENwEZgQAvpbWdo7Okpxunt5+AcGhEVGxwIrpJClpmdm5BRh5QA==
It fails to compile and gives the error CS0121: The call is ambiguous between the following methods or properties: 'Helper.Add(string, ReadOnlySpan<char>)' and 'Helper.Add(string, ReadOnlySpan<byte>)'.
@zlatanov probably a preview bug, since the utf8 suffix also isn't implemented in this branch (it gives a syntax error), so I wouldn't worry too much; they still have about half a year before the final release.
@BreyerW This compiles just fine:
using System;
var helper = new Helper();
helper.Add( "key", "value"u8 );
class Helper
{
public void Add(string key, ReadOnlySpan<char> data) { }
public void Add(string key, ReadOnlySpan<byte> data) { }
}
@zlatanov ah, I was mistakenly using utf8. Then it is worth checking, but I still think it is just a preview glitch and will get smoothed out before the final release.
The problem is that string now has an implicit conversion operator to ReadOnlySpan<char>. In addition, with the new UTF8 proposal the compiler added an implicit conversion from string literals to ReadOnlySpan<byte> and byte[]. This conversion should have a lower priority.
> The problem is that string now has an implicit conversion operator to ReadOnlySpan<char>. In addition, with the new UTF8 proposal the compiler added an implicit conversion from string literals to ReadOnlySpan<byte> and byte[]. This conversion should have a lower priority.
The implicit operator to ReadOnlySpan<char> is not new; it has been there for a while now. The implicit operator to ReadOnlySpan<byte> is the new one.
Proposal: https://github.com/dotnet/csharplang/blob/main/proposals/csharp-11.0/utf8-string-literals.md Old draft proposal: https://github.com/dotnet/csharplang/issues/2911
Design Review
https://github.com/dotnet/csharplang/blob/main/meetings/2021/LDM-2021-10-27.md#utf-8-string-literals https://github.com/dotnet/csharplang/blob/main/meetings/2022/LDM-2022-01-26.md#open-questions-in-utf-8-string-literals https://github.com/dotnet/csharplang/blob/main/meetings/2022/LDM-2022-06-29.md#utf-8-literal-concatenation-operator