Wasted-Audio / hvcc

The heavy hvcc compiler for Pure Data patches. Updated to Python 3, with additional generators.
https://wasted-audio.github.io/hvcc/
GNU General Public License v3.0

expr/expr~ #21

Open · grrrr opened this issue 3 years ago

grrrr commented 3 years ago

Hi all, thanks for the great initiative! One addition that would be a big leap forward is the inclusion of expr/expr~ objects. The current code size and performance overhead of messaging for simple calculations is considerable, and an expr port could generate highly optimized static code for that purpose. Especially on Daisy, with its 128kB flash size, I am constantly running out of code space. best, Thomas

dromer commented 3 years ago

Nice suggestion! A better coverage of vanilla Pd objects would certainly be nice.

Knowing only the basics of Pd and Heavy internals I cannot comment on this myself, but I hope others can jump in with some suggestions on how to bring these objects into the project.

dromer commented 3 years ago

Some possibly useful reads: https://github.com/enzienaudio/hvcc/issues/21 https://github.com/Simon-L/pd-static-expr

grrrr commented 3 years ago

Hi, these two reads implement the opposite of what is desired: they convert a mathematical expression into a network of objects. The hvcc rendering of connected Pd objects results in a graph of function calls, where every call has a substantial performance overhead and needs a lot of code space. The desired expr implementation should instead generate a monolithic representation of the formula.

I have briefly looked into it, and this requires quite a substantial deviation from how things are usually done in hvcc. Nevertheless, because of the flexible Python backbone, it should be possible.
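
To make the distinction concrete, here is a minimal illustrative sketch (not hvcc output; every name below is hypothetical). A patch computing ($f1 + 2) * 3 from separate [+ 2] and [* 3] objects ends up as chained per-object message handlers, whereas a monolithic [expr ($f1 + 2) * 3] could compile down to a single inlined arithmetic expression:

// hypothetical sketch: graph-of-objects style, one handler per Pd object
static void timesThree_onMessage(float in) { /* forward (in * 3.0f) to the outlet */ }
static void plusTwo_onMessage(float in) { timesThree_onMessage(in + 2.0f); }

// hypothetical sketch: monolithic [expr ($f1 + 2) * 3], one call, no intermediate messages
static float expr_eval(float f1) { return (f1 + 2.0f) * 3.0f; }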

dromer commented 3 years ago

You are right, I mostly put these here as a reference to myself of how these are used (I'm not as experienced with pd as I'd like). I briefly looked at how pd implements these internally and realize that this could get complex rather quickly.

Honestly I have no idea how this should be done, but I do see the potential of having this capability.

Fingers crossed someone has a magical insight and opens a PR ;)

dromer commented 2 years ago

Hmm, maybe useful? -> https://github.com/codeplea/tinyexpr

grrrr commented 2 years ago

Hi, that's great. It seems largely backwards compatible with Pd's syntax and is also extensible to support additional Pd functions like pow. Now some insight is needed into how to plug that into the hvcc ecosystem...
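
For reference, a minimal sketch of tinyexpr's documented compile/eval pattern (nothing hvcc-specific; note that Pd's $f1/$f2 placeholders would have to be mapped to plain identifiers, since tinyexpr variable names cannot contain '$', and tinyexpr works in doubles):

#include <stdio.h>
#include "tinyexpr.h"

int main(void) {
    double f1 = 0.0, f2 = 1.0;                       /* stand-ins for inlets $f1/$f2 */
    te_variable vars[] = {{"f1", &f1}, {"f2", &f2}};

    int err = 0;
    te_expr *e = te_compile("sin(f1 + 2) / sqrt(f2)", vars, 2, &err);
    if (!e) { printf("parse error near position %d\n", err); return 1; }

    f1 = 0.5; f2 = 4.0;                              /* update inputs, then re-evaluate */
    printf("%f\n", te_eval(e));
    te_free(e);
    return 0;
}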

dromer commented 2 years ago

So @diplojocus suggested (and I was also thinking) to leave [expr] until the last stage, ir2c, and then create these functions as needed. The messaging graph before/after has already been established; just the function signatures and definitions need to be created. tinyexpr could help with the definition and simply wrap this up, but we'll need to dynamically create the signature for these as well, I suppose.

Do we want to limit the number of allowed inputs? https://web.archive.org/web/20201111221923/yadegari.org/expr/expr.html describes only 9 inputs, but perhaps this was for convenience. "Infinite inputs" could make this rather nasty. Perhaps fixing the signature to a set number of inputs and initializing them to 0 is the easiest?

Not sure, just thinking out loud here.
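
A minimal sketch of that fixed-signature idea (purely illustrative; the 10-inlet cap and every name here are made up, not anything hvcc defines): unused $fN inputs simply keep their zero default.

static const int kMaxExprInlets = 10;   // arbitrary cap for the sketch

struct ExprInlets {
    float f[kMaxExprInlets] = {0.0f};   // $f1..$f10 map onto f[0]..f[9]
};

static float expr_eval_sketch(const ExprInlets &in) {
    return in.f[0] + in.f[2];           // e.g. "$f1 + $f3"; $f2 was never set and reads as 0
}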

dromer commented 2 years ago

And of course there's the matter of multi-line [expr] with multiple outputs...

dgbillotte commented 2 years ago

I started diving into the hvcc code last night and have a sense of direction for the Pd -> Heavy part, and then how to slip tinyexpr in on the C generation side, but I've just started digging into HeavyIR... I think once I see how that part works this should be pretty straightforward.

One question: I notice that some pd blocks get implemented as HeavyLangObjects, others as HeavyIrObjects, and some as both (send & receive). What is the line separating what should be handled by Heavy and what by HeavyIr?

Since this seems like a new fundamental operation at the HeavyIr end and as such would warrant a HeavyIrExprObject, does there need to be a HeavyLangExprObject? I'm thinking not...

Has anybody else done any work on expr/expr~ since Nov last year?

dgbillotte commented 2 years ago

On the HeavyIrExprObject itself:

an expr in pd can have multiple expressions with an outlet for each. It seems to me that since HeavyIr is a primitive language, it would make the most sense for it to have an expr that handles a single expression (and has a single outlet), and then build up specific instances of pd-expr using one HeavyIR expr per expression.

Any thoughts about the Heavy philosophy/style and whether that fits or not?

dgbillotte commented 2 years ago

@dromer have you found any other useful docs or info about Heavy since the ones you posted last year?

Currently I'm just walking the path of a given Pd object through the transformations to C code and making a lot of guesses from there...

dromer commented 2 years ago

@dgbillotte I'm pretty much in the same position. I just go over the entire flow of things every time and see what part of the chain I end up in. It's still very complex for me and a learning process at every step.

A downside of tinyexpr is that it will likely impact performance, which is bad. Also, I don't know if it could be used on all architectures. While it would be nice to be able to easily translate the plain [expr] functions directly, ultimately it would be better to generate actual C code.

I've been thinking about this regularly, but have not made any tangible steps towards an implementation. I also can't anticipate the impact on performance and program size, not to mention the possible impact on eventual DSP results (we really need signal tests on the CI), which makes me wary of even starting down one avenue of research/implementation, only to fail and have to start all over again.

Considering there's still a lot of other things to do I've just been holding it off, but happy that others are looking into the code and what could be possible :)

For now I'm actually considering that multiple expressions would not be supported, as I think it complicates the graph and code generation a lot. From the usability perspective, even being able to make a single [expr] with a few i/o would already be very beneficial, so best to start there.

Btw I discussed with a friend whether or not translation to the internal heavy functions would be needed to be able to make use of compiler optimizations and such -> https://github.com/Wasted-Audio/hvcc/blob/develop/hvcc/generators/ir2c/static/HvMath.h Also something to think about and consider.

diplojocus commented 2 years ago

Hi Daniel,

In general the pipeline goes: pd -> heavy lang -> heavy ir -> c

HeavyLang objects should resolve at some point later to HeavyIR

I still think the best approach for this is to do it in pd2hv, and generate a heavyLang/heavyIR graph based on the expr parameters.

Cheers, Joe

dgbillotte commented 2 years ago

@dromer sounds like an adventure, I'll share any insights I find.

re tinyexpr: agreed

re proper performance: I'm thinking in baby steps for now. The SIMD part of it started dawning on me as well. I'd be happy to get a POC working and see where it goes...

re multiple expressions: as far as I can tell it is just syntactic sugar, so limiting it to a single expression seems very reasonable to me. That said, if the Heavy/HeavyIR part is done right, it should be easy enough to turn multi-expression expr's into multiple single-expression Heavy expr's, once those are working :-)

@diplojocus , after some thought and considering your and @dromer's responses I'm getting a clearer idea of the abstractions intended by the Heavy folks. As such I think that decomposing the expression into HeavyIR primitives is what the orig authors would have done and I'm gonna head down that path.

At first I was concerned that HeavyIR didn't have the core primitives needed to cover all of the math functions available in expr, but after digging into heavy.ir.json some, I can see that most-ish of the stuff is there. I'll do a further analysis of that and see exactly what is missing.

With the above thoughts in mind, this flow seems to make sense to me:

I guess I was liking the idea of tinyexpr because we wouldn't have to do that last step, but again, this smells like an adventure, so....

dgbillotte commented 2 years ago

@dromer: do you have support for pd [value] objects on your radar? I'm not pushing for it, but it has ramifications on expr implementation. If value is expected to be supported soon I would want to build that expectation into the expr stuff.

dromer commented 2 years ago

@dgbillotte I'm not sure if using HeavyIR primitives is really needed. What I was thinking of is to hold off creating any code until after the HeavyIR step, and then create actual C functions that become part of the core. Unwrapping the whole expression into a HeavyIR graph would introduce a lot of messaging overhead, which would likely kill any advantage in terms of code size.

In terms of adding pd-vanilla objects there is no roadmap at all. Whenever I see something that is trivial to add (like some of the Midi objects) I work on it, but there are no specific implementations planned. Check out the Projects tab for some of the things on the to-do list. In the near future I'd like to add more people to this section of the repo so some ideas/planning can be worked out into attainable steps and an actual kind of roadmap :)

dgbillotte commented 2 years ago

@dromer if I'm seeing it clearly, that would imply that there is a HeavyIrExpr object that takes in the complete expression, and it would be the job of the C generator to then turn that string into executable C. Is that correct?

I was thinking that going the route of the HeavyIr primitives would offload all of the SIMD related logic to the primitives where it would, presumably, be easier to deal with. I was not thinking, however, of all the extra message-passing overhead and can see how that could surpass any of the gains from SIMD.

I think doing it that way would be easier to implement. I'm just trying to understand the intentions of the layers and respect them, instead of just paving a bypass straight through ;-)

dromer commented 2 years ago

Yup understood, but in the case of [expr] the complete bypass might be the best approach in the end ;)

diplojocus commented 2 years ago

The concern about code generation size is valid, but really the solution there is to implement an optimisation pass on the heavyIR graph to do some code folding for control rate expressions.

By creating a new heavyIR primitive to handle [expr] you'll have to insert a new library to handle the runtime calculations for these nodes, defeating the purpose of the compiler itself. I agree it does appear to be the easier route though.

This is more pertinent if you're expecting to run signal rate [expr~] as well, as you're effectively splitting the work over two systems, and trying to embed one into the other.

dromer commented 2 years ago

@diplojocus I thought that inside heavy there is no difference between control and signal rate. All objects are evaluated at the same rate, no?

diplojocus commented 2 years ago

Yes, there is a difference: control rate objects are evaluated at the start of each vectorisation block (1/4/8 samples).

That being said, the boundary point when converting from control -> signal does impart some performance overhead if done liberally in a patch.

There was an idea floated at some point of calculating everything in the signal domain. But you'll still have lower frequency messaging happening at some point in the application, likely from the interface with the surrounding application.

Metasounds in Unreal Engine 5 is potentially a good reference here.
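
A toy sketch of that scheme (purely illustrative, not hvcc's actual generated process loop; the block-size constant and names are made up): control-rate work happens at most once per vectorisation block, while signal-rate work covers every sample in it.

static const int kBlock = 4;  // hvcc vectorises in blocks of 1, 4, or 8 samples

void process_sketch(const float *in, float *out, int numFrames) {
    for (int n = 0; n < numFrames; n += kBlock) {
        // control rate: queued messages (e.g. a control-rate [expr] firing)
        // would be dispatched here, i.e. at most once per kBlock samples

        // signal rate: the vectorised ops then run across the whole block
        for (int i = 0; i < kBlock && n + i < numFrames; ++i) {
            out[n + i] = in[n + i] * 0.5f;  // stand-in for the block's signal graph
        }
    }
}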

dgbillotte commented 2 years ago

I'm working on expr for now, just to get my bearings, but expr~ is my real goal once I get there...

I've been going back and forth conceptually between using the existing primitives or creating a new one. I'm currently thinking that the nature of the operation justifies a new IR primitive. But I guess it comes down to the question of what hvcc is ultimately for. It seems to me that the OWL, Bela, Daisy, etc products are what is keeping hvcc alive at the moment, though I would love to better understand who the actual users are. I'm coming from the audio/OWL perspective (Befaco LICH) so that's what I know about this world so far and am biased toward ;-)

If max support is not wanted (I haven't heard/seen much from the max community in the discussions) and C is the primary IR target (with the other targets each building off of the C code), then really the purpose of hvcc seems to be to "create accurate translations from a pd patch into C code", and since timing is a critical part of what pd is designed to do, that should be a higher priority than honoring the abstractions as they exist. I am all about using software abstractions properly, 💯 %, but the abstraction exists to serve a purpose. When the abstraction becomes a hindrance to the purpose, its value is questionable... That's just my 2 ¢, I would appreciate learning of other perspectives :-)

If that is the case, I think that the existing IR primitives do not adequately serve the core purpose of creating an accurate translation of pd source patches and should be extended to do so.

@diplojocus re "an optimisation pass on the heavyIR graph to do some code folding", by "folding" do you mean some form of combining multiple bin/unary ops into a single graph node, thus eliminating the extra message passing? If this is possible, I think it would be more of the "right" way to do it and would be happy to wander down that path some.

At the current time, hvcc seems to fill a void where I think the only real alternative is to use max instead of pd, and as such I hope folks' efforts can come together to create something that is stable and will be around for a while.

dromer commented 2 years ago

My personal interest is with DPF (vst2/3/lv2/clap plugins), OWL, Daisy, Bela, and webassembly. However there are also still users of Unity and Wwise.

So basically "all the targets" ;)

The max2hv path is there, but I have absolutely no idea if it even works. I'd be very happy to deprecate it. A discussion for it is here -> https://github.com/Wasted-Audio/hvcc/discussions/25

dgbillotte commented 2 years ago

Not extensively tested, but I have expr working for a simple patch.

I wanted to solicit any feedback on the approach I took and some next steps.

For this go at it I created a new HeavyIR object, __expr.

In short, I create per-instance evaluate functions in the Heavy_heavy.hpp/cpp files that get passed into the cExpr_init() function for each "instance" and are stored in the instance's ControlExpr "object", where they can later be called by cExpr_onMessage() any time it needs to evaluate the expression. The passed-in function just binds the variables in the expression to the input array, evaluates itself, and returns the value. With the expression compiled in, they should run plenty fast for any control-rate needs.

I like how it works in theory, but the implementation could probably be cleaner. I used the get_C_impl() and get_C_def() functions to inject the functions and their prototypes into the Heavy_heavy.hpp/cpp files, but I'm not sure if that is working with the system or against it...

I'll add some tests and open a PR once I've banged on it some.

You can have a look at https://github.com/dgbillotte/hvcc

dromer commented 2 years ago

So you are evaluating the expressions at runtime, rather than creating a compiled C function? Something tells me that for embedded purposes this could end up giving too much of a performance hit, but that will of course need testing.

Will need to play with this myself a bit, will try out your branch this weekend if I find the time. Thnx for giving this a go!

dgbillotte commented 2 years ago

no, they're compiled in, it just seems a roundabout way to do it. They live in Heavy_heavy.cpp like:

float Heavy_heavy::cExpr_ZRMzpAT8_evaluate(float* args) {
    return 3 + 5; // simple test, no variables
}

float Heavy_heavy::cExpr_KVwa098b_evaluate(float* args) {
    return ((float)(args[0])) + ((float)(args[1]));
}

and passed into cExpr_init like:

Heavy_heavy::Heavy_heavy(double sampleRate, int poolKb, int inQueueKb, int outQueueKb)
    : HeavyContext(sampleRate, poolKb, inQueueKb, outQueueKb) {
  numBytes += cExpr_init(&cExpr_ZRMzpAT8, &Heavy_heavy::cExpr_ZRMzpAT8_evaluate);
  numBytes += cExpr_init(&cExpr_KVwa098b, &Heavy_heavy::cExpr_KVwa098b_evaluate);
  numBytes += cExpr_init(&cExpr_Qev1EDBU, &Heavy_heavy::cExpr_Qev1EDBU_evaluate);

  // schedule a message to trigger all loadbangs via the __hv_init receiver
  scheduleMessageForReceiver(0xCE5CC65B, msg_initWithBang(HV_MESSAGE_ON_STACK(1), 0));
}
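
For readers following along, here is a minimal self-contained sketch of the pattern being described: an init function stores a per-instance evaluate callback, and the message handler later calls it. All names here are illustrative stand-ins, not the actual ControlExpr code in the branch (which binds member functions of the Heavy context).

typedef float (*ExprEvalFn)(float *args);

struct ControlExprSketch {
    ExprEvalFn evaluate;  // bound once at construction, like cExpr_init() above
};

static void cExprSketch_init(ControlExprSketch *o, ExprEvalFn fn) { o->evaluate = fn; }

static float expr_add(float *args) { return args[0] + args[1]; }

int main() {
    ControlExprSketch e;
    cExprSketch_init(&e, &expr_add);

    float inlets[2] = {3.0f, 4.0f};     // latest values seen on the inlets
    float result = e.evaluate(inlets);  // roughly what cExpr_onMessage() would trigger
    return (result == 7.0f) ? 0 : 1;
}
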
dromer commented 2 years ago

Aaah I see. However there is one individual function created for every [expr] object? So if you use the exact same expression, it becomes a whole new function definition?

What I would do is keep a list of used expressions by taking a heavyhash of the entire expression string, then if that expression already exists, simply point to the same one. This way code duplication could be reduced a lot.
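
A minimal sketch of that deduplication idea (in hvcc this would live in the Python generator at code-emission time; the C++ below is only to illustrate keying by the expression string, and all names are made up):

#include <string>
#include <unordered_map>

typedef float (*ExprEvalFn)(const float *args);

static float eval_add(const float *args) { return args[0] + args[1]; }

int main() {
    // key each expression string (or a hash of it) to a single emitted function
    std::unordered_map<std::string, ExprEvalFn> cache;

    // two [expr $f1 + $f2] objects resolve to the same definition
    ExprEvalFn a = cache.emplace("$f1 + $f2", &eval_add).first->second;
    ExprEvalFn b = cache.emplace("$f1 + $f2", &eval_add).first->second;

    return (a == b) ? 0 : 1;  // shared: returns 0
}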

One thing we also lose in your approach is any architecture-specific optimizations. Extensive use of these expressions could then really create a performance hit (even for control rate). On a desktop PC that may not be very apparent, but for the more bespoke architectures this could become quite a penalty (depending on the expression complexity, of course).

Just some things to consider if you move forward with this approach.

dgbillotte commented 2 years ago

I def have μ-controllers in mind and am open to all thoughts in that direction. The current approach is just a stab at it and has been useful just to learn all the parts of this thing. I could happily toss what I have and take a different direction on it. I'm all for geeking out on making it spit out small and fast code.

If having many [expr]s with the same expression is a likely case, hashing and caching would be a good route to go. That case had not occurred to me... Thinking about it that way, I like the idea of there being a single function lookup table instead of a bunch of spurious function definitions.

re optimizations: what approach do you have in mind that keeps the arch-specific optimizations possible?

dromer commented 2 years ago

Worst case we'll have to actually parse the expressions and put __hv specific operations in place. Not an easy task of course, but it would give the most control over the eventual code output.

dgbillotte commented 2 years ago

I put together some random-ish "try to break it" kind of patches and was amazed that they just kept working... The only bug I found was with an expression like "$f1 + $f3", which will have 3 inlets in pd; it seg-faults on args[2]. No surprise, easy fix...
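
Presumably the fix is just to size args[] by the highest $fN index referenced rather than by the count of distinct variables; a hypothetical sketch:

// "$f1 + $f3" references inlet indices 1 and 3, so allocate 3 slots
float args[3] = {0.0f, 0.0f, 0.0f};  // args[1] ($f2) exists but is never read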

I'll put together a thoughtful test for it this eve and then I'm gonna start investigating the signal-rate side of things. I'm sure it will be educational...

dgbillotte commented 2 years ago

After studying hvcc/generators/ir2c/static/HvMath.h last night, I now understand your references to utilizing the architecture-specific optimizations much better. 🎓

I think the approach I am using could easily be extended to expand the expressions out to take advantage of the stuff that is in HvMath.h. I look forward to getting to that part of it :-)

dgbillotte commented 1 year ago

I will say that I was naive in how I thought the expressions would ultimately get turned into C code. Studying HvMath.h and SignalMath.py shows that for the SIMD stuff to work, the incoming expression needs to be rearranged from infix notation to a sequential-prefix notation. I make the distinction of "sequential" because in my first estimations of this I was picturing a "nested" prefix notation. An example:

Input expression: "sin($f1 + 2) / sqrt($f2)"

A nested prefix representation would be:

hv_div_f(hv_sin_f(hv_add_f($f1, 2)), hv_sqrt_f($f2));

However, for the sake of efficient buffer handling, HvMath.h deals with outputs as output parameters instead of return values, so a different processing pattern is needed, which I'll call "sequential" prefix notation, in which the output buffers from earlier steps are set up to be the input buffers for later steps:

__hv_add_f($f1, 2, BO0);
__hv_sin_f(BO0, BO1);
__hv_sqrt_f($f2, BO0);
__hv_div_f(BO1, BO0, BO2);

To handle that, I wrote an expression parser/rewriter that can output either of the forms above (for use in expr and expr~). I have it at https://github.com/dgbillotte/ExprParser for now. The parsing is correct as far as I've tested it, which includes some unit tests and some ad-hoc "throw crap at it and see what comes out" kind of tests. The generated C-code is pretty rough at this point but proves the point and helps to inspire the next step.

With that taken care of I have the form of what a per-object process function would look like. For context, the following function would get declared in Heavy_heavy.hpp and then called in Heavy_heavy.cpp in the Heavy_heavy::process method under the "// process all signal functions" comment.

This function is roughly what would be generated for the input expression that I started with: "sin($f1 + 2) / sqrt($f2)"

void Heavy_heavy::cExprSig_rUZ70xyj_evaluate(hv_bInf_t* bIns, hv_bOutf_t bOut) {
  // declare tmp buffers
  hv_bufferf_t BO0, BO1;

  // declare buffers for constants
  hv_bufferf_t const_2; // initialize this to all 2's

  __hv_add_f(bIns[0], const_2, BO0);
  __hv_sin_f(BO0, BO1);
  __hv_sqrt_f(bIns[1], BO0);
  __hv_div_f(BO1, BO0, bOut);
}

the calling site in Heavy_heavy::process() would look kind of like this:

hv_bufferf_t* ins[2] = {&Bf2, &Bf0};
cExprSig_rUZ70xyj_evaluate(ins, VOf(Bf1));

The piece that creates a buffer of constants to deal with a single constant seems less than ideal, but it is a laziness that I am ok with at the moment. I've seen that there are a number of SIMD binary-op primitives that will operate on a vector and a constant which would be nice to use here, but that would involve some deeper changes/additions to HvMath.h...

This is where my brain is at on this thing at this point, any thoughts welcome...

- Daniel

dgbillotte commented 1 year ago

Right as I hit send above I glanced over at a generated Heavy_heavy.cpp that I have and saw these two lines inside process():

    __hv_var_k_f(VOf(Bf1), 0.5f, 0.5f, 0.5f, 0.5f, 0.5f, 0.5f, 0.5f, 0.5f);
    __hv_sub_f(VIf(Bf0), VIf(Bf1), VOf(Bf1));

It looks like a buffer of constants is how the heavy team was dealing with constants, so I'm not going to give that any more thought for a while...
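
Following that pattern, the const_2 buffer from the earlier sketch could presumably be filled the same way before the expression body runs (an illustrative fragment in the style of the generated lines above, not actual hvcc output):

    __hv_var_k_f(VOf(const_2), 2.0f, 2.0f, 2.0f, 2.0f, 2.0f, 2.0f, 2.0f, 2.0f);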

dromer commented 1 year ago

@dromer: do you have support for pd [value] objects on your radar? I'm not pushing for it, but it has ramifications on expr implementation. If value is expected to be supported soon I would want to build that expectation into the expr stuff.

Having looked a bit more at certain missing objects, I can see that value indeed has some impact on expr, since expr can make use of the same variables throughout a Pd program. As I understand it, [value] basically defines a global variable that can be read/set from any part of the program. Superficially it seems that it shouldn't be too hard to implement, however I have no idea where to start with this, or whether it would be compatible with your expr work :#

dromer commented 1 year ago

Hmm, I'm actually thinking value might not be that difficult, if we consider it simply as a kind of send/receive in a single object. You "just" have to get the hash value and put a receive on it to get it. Maybe I'm thinking too simplistically here, but I think it could be possible to mostly emulate value behavior. Will try to prototype something for this some time.

However, what would be very difficult (currently) is supporting arrays. According to the Pd docs you can give the name of an array and then have the input as an index that reads from it. At the moment we only support table, so this would definitely not be possible in the current state.

dromer commented 1 year ago

I think it could be possible to mostly emulate value behavior.

Forget what I said here, I'm an idiot. Even though value can be set from a send, it ostensibly works very differently from a send/receive pair.

I'd say: let's implement expr/expr~ without value/array capability in the MVP. A future addition would need changes across the board, so no need to worry about that here, I think.

dromer commented 1 year ago

Much more extensive (and clear) docs about current expr in pd: https://pd.iem.sh/objects/expr~/

Clearly there is a lot of functionality that we won't be able to support. It mentions up to 100 (!!) inputs, and things like value and array will not be possible right now. And then there are a number of additional functions that may need extended parsing.

I'm currently writing some tests for control rate to at least explore to what extent we can support this part of the objects. https://github.com/Wasted-Audio/hvcc/commit/9ff861f26e2972c4e6c9a391d2d4d38f5d2c2a8a

dgbillotte commented 1 year ago

Nice find. The docs I was looking at when I last looked at this were ancient!
