Closed: mamaso closed this issue 6 years ago
This is a known limitation and, with the current design, not something that can be easily addressed. I'm adding a comment in the SO question as the root cause is not directly related to redirects.
Is there another way to work around the issues binding redirects are usually used for? For example, I want to use the WebJobs NuGet package, which requires 7.2.1 or higher of WindowsAzure.Storage. However, 8.1.1 is out, which updates a bunch of things, including adding support for block blob uploads. So when I declare a function as taking an IQueryable<DynamicTableEntity>, I'm stuck: if I reference the 8.1.1 version to get the functionality I want, I get an error at function bind time that DynamicTableEntity doesn't implement ITableEntity.
Or is there another way to achieve this besides binding redirect support?
Is this worth looking at, @jorupp?
The development experience using this approach has been so much better for me.
@Blackbaud-MitchellThomas - it has the same basic issue - no binding redirects. In fact, I hadn't run into the issue myself until I switched to that approach yesterday and updated all my NuGet packages to the latest (since I had the package GUI to tell me about it). I got binding errors because the latest Microsoft.Azure.WebJobs needed an older version of WindowsAzure.Storage, and didn't recognize the 8.1.1 types as matching the 7.2.1 types it had loaded (since there was no binding redirect to force them to both load the 8.1.1 types).
That's good to know that it's still a potential gap. I have a project running in that format, and it cleared up my flavor of these issues. But my problem was that it was erroring looking for a given .dll and couldn't find it, so I was able to figure out that I needed to supply it via the csx file.
Blocked on this work: https://github.com/Azure/azure-webjobs-sdk-script/issues/1319
Could there be a way to work around this temporarily by providing the data on what version of an assembly to use in another form (i.e. a JSON file) and handling the AppDomain.AssemblyResolve event? Just thinking out loud here.
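Thinking along the same lines, the idea could be sketched roughly like this. Everything here is hypothetical, not an existing Functions feature: the file name, its format (a simple key=value file stands in for JSON to avoid pulling in a serializer), and the helper class are all invented for illustration.

```csharp
using System;
using System.IO;
using System.Reflection;

public static class ConfiguredRedirects
{
    // Hypothetical: reads lines like "log4net=2.0.8.0" from a config file
    // and redirects any resolve request for that short name to the listed
    // version.
    public static void Apply(string path)
    {
        foreach (var line in File.ReadAllLines(path))
        {
            var parts = line.Split('=');
            if (parts.Length != 2) continue;

            string shortName = parts[0].Trim();
            var targetVersion = new Version(parts[1].Trim());

            AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
            {
                var requested = new AssemblyName(args.Name);
                if (requested.Name != shortName) return null;

                // Guard against re-entry in case the target version itself
                // cannot be found.
                if (requested.Version == targetVersion) return null;

                requested.Version = targetVersion;
                // Assembly.Load returns the already-loaded copy when the
                // target version is present in the appdomain.
                return Assembly.Load(requested);
            };
        }
    }
}
```

This only helps for assemblies the host hasn't already loaded into its default context, as discussed further down in this thread.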
Is there any possible workaround for this? We have just tried to migrate from WebJobs to Functions, but our application depends on Microsoft.Owin.Security 3.1.0.0, while internally ASP.NET Identity depends on Microsoft.Owin.Security 2.1.0.0. We cannot migrate to Functions as we would like until this is supported.
This is a major issue. How can Microsoft state that Azure Functions are ready for use in production environments with such limitation?
@jorupp that is one possibility we have considered (we already perform a fair amount of custom resolution). The work planned in #1319 will address this in a better and more consistent (not Azure Functions specific) way.
In the meantime, an approach that will usually address this need is to place the assembly you wish to use in a bin folder inside the function folder. The runtime will use this location as a fallback when probing for assemblies.
That doesn't work. If a strong-named assembly is already loaded into the appdomain by Azure Functions, it will always use that one before loading a new assembly, and will then throw a manifest mismatch error for any strong-named assembly.
I think this is just a fundamental design problem in how Azure Functions was thought out. As a suggestion, the best thing I can think of is for Azure Functions to handle ALL assembly resolve events and have NO assemblies in any internal bin folders, loading all assemblies from byte arrays to ensure a "no load context" for all of them. Assembly resolve can then handle multiple versions of the same assembly loaded into a single app domain.
Basically, prevent any loading of assemblies by the .NET runtime itself, as assembly resolution in .NET is a total mess anyway.
@jnevins-gcm that (as mentioned) doesn't work in all scenarios, but is a viable fallback for some. Much of what you suggest above is actually how things are handled: private assemblies are loaded from byte arrays, without a load context, and side-by-side loading is supported; the Azure Functions assembly resolver establishes a function-scoped context and bypasses the .NET loader in those scenarios; and the bin folder mentioned above is not the application bin folder, but the function-scoped one (where assemblies are loaded using the method described above).
The details of how this works are a bit more complex, and there are scenarios where we must load assemblies differently, but if you are using private assemblies only, the workaround above works as described.
We are working on the longer term solution for this that will provide behavior consistent with a "regular" .NET application, but this is still a bit out.
I saw that work item. It seems like a very very bad idea. You're basically describing building a new remoting protocol.
The workaround doesn't work as described though, unless you're loading all the dlls ahead of time in the bin dirs irrespective of name, in which case one could name each conflicting dll according to its version. Is that the case?
*implementation not protocol
@jnevins-gcm I'll try to put together some documentation better describing this process and also the scenarios I'm referring to (I've been meaning to do that for a while).
For the out-of-proc issue, there are a lot of details missing there as well, and we're trying to update it so everything is out in the open and we can get more feedback, but the scope is significantly larger than just trying to run .NET in isolation.
I'll work with @christopheranderson to have more details on that issue so we can better discuss the approach.
I'm trying to reference a NetStandard library in my function and I'm getting:
mscorlib: Exception has been thrown by the target of an invocation. IoT.EuriSmartOfficeFunc: Could not load file or assembly 'System.Runtime, Version=4.1.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies
This is probably due to this issue, because actually version 4.1.1.0 is deployed. Is there any way at the moment to get this assembly redirected?
@nickverschueren what version of the runtime are you running against? (Here's how to check)
If you're running on anything below 1.0.10917, please restart your Function App and try again (if you find you're running on 10917, please also retry as it was just released)
After having played with a few small functions, I started porting one of our existing applications that has a component that's a Windows service that runs Quartz cron jobs over to Azure Functions as a precompiled assembly. I ran into an issue where one assembly is referencing log4net 2.0.8 but Microsoft.ApplicationInsights.Log4NetAppender is referencing log4net 2.0.5.
Since I can't do a binding redirect, I guess my options are to recompile the AI appender with 2.0.8, recompile the reference to downgrade to 2.0.5, or just give up and revert to WebJobs?
Recompiling in this particular case isn't a huge problem, but the lack of binding redirect support kind of makes Azure Functions a ticking time bomb when it comes to maintenance and gives me some serious pause. Is there any kind of timeline for when binding redirects will be supported?
+1
Without binding redirects, Azure Functions with precompiled assemblies is basically impossible to use, as soon as you want to use an external NuGet package
After some playing around, I've solved this by using ILRepack/ILMerge to merge all the problematic dependencies into the main assembly. This seems to have solved the binding redirect issues.
NetStandard barely works at all on the net461 runtime, with or without Azure Functions... so best not to mix two problems and just dual-compile your netstandard libraries for 461 as well.
That's a great idea about ILMerging! I had tried this, but unfortunately not all DLLs are conducive to being ILMerged into your own (some use reflection internally, etc.). Plus you'll want to ILMerge WITH internalize to prevent surface-area overlap problems.
I managed to work around my log4net problem by handling the binding redirect programmatically at the start of the function. I learned something new today! The idea is from http://blog.slaks.net/2013-12-25/redirecting-assembly-loads-at-runtime/ and could be adapted to read from an XML file, etc. Here I just hardcoded my log4net version to see if it would work, and sure enough it did:
```csharp
using System;
using System.Globalization;
using System.Reflection;

private static void ConfigureBindingRedirects()
{
    RedirectAssembly("log4net", new Version("2.0.8.0"), "669e0ddf0bb1aa2a");
}

private static void RedirectAssembly(
    string shortName,
    Version targetVersion,
    string publicKeyToken)
{
    ResolveEventHandler handler = null;
    handler = (sender, args) =>
    {
        var requestedAssembly = new AssemblyName(args.Name);
        if (requestedAssembly.Name != shortName)
        {
            return null;
        }

        var targetPublicKeyToken = new AssemblyName("x, PublicKeyToken=" + publicKeyToken)
            .GetPublicKeyToken();
        requestedAssembly.Version = targetVersion;
        requestedAssembly.SetPublicKeyToken(targetPublicKeyToken);
        requestedAssembly.CultureInfo = CultureInfo.InvariantCulture;

        // Unhook before loading so the Load call can't re-enter this handler.
        AppDomain.CurrentDomain.AssemblyResolve -= handler;
        return Assembly.Load(requestedAssembly);
    };
    AppDomain.CurrentDomain.AssemblyResolve += handler;
}
```
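For reference, a minimal way to wire the above up is from a static constructor, so the handler is registered before the first failing resolve. This is only a sketch; the class and method shown here are invented for illustration:

```csharp
public static class MyFunction
{
    // The static constructor runs once, before any member is used, so the
    // redirect handler is in place before log4net configuration triggers
    // the resolve request for the old version.
    static MyFunction()
    {
        ConfigureBindingRedirects();
    }

    public static void Run(string message)
    {
        // By the time the function body executes, resolve requests for
        // log4net 2.0.5 will be answered with the loaded 2.0.8 copy.
    }
}
```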
Now this won't help you if you need to redirect something the host has already loaded, like Microsoft.WindowsAzure.Storage. Since the Azure Functions team "owns" these, I think it's reasonable to believe they will have a compatibility story for these assemblies going forward.
But the above option seems to be workable when you're bringing in a legacy DLL that has dependencies that probably won't ever be recompiled.
OMG this is overly complex!
Won't work for strong named assemblies though right?
In my case, log4net is strong named and it works there, but I don't think I have any control over assemblies that the host has already loaded, like Newtonsoft.Json. In my case, looking through my apps that I want to port to Azure, the binding redirects used in these worker roles are typically a few "usual suspects", most often just log4net.
I get the assembly binding redirect, but since you're loading a different version than the one the reference allows, I guess I'm confused why you don't get:
A. "The located assembly's manifest definition does not match the assembly reference" for OTHER dependencies that log4net itself has (another reference to a different version)
B. Two DLLs for log4net loaded into the appdomain, ending up with strange behavior for static fields referenced by different code
Can you explain please?
Thanks.
There's only one log4net DLL, version 2.0.8, which since I referenced it in the function itself, will already be loaded by the time I even get a chance to hook into AssemblyResolve. But when I call XmlConfigurator.Configure() and log4net starts looping through my appenders, the assembly resolver notices that Microsoft.ApplicationInsights.Log4Net wants 2.0.5 and can't find it on its own, so it will call my AssemblyResolve event where I hand it 2.0.8 instead (since 2.0.8 is already loaded, Assembly.Load() just returns the existing assembly ... I could also probably just loop through AppDomain.CurrentDomain.GetAssemblies() instead). MSDN says "The event handler can return a different version of the assembly than the version that was requested" so that part seems to be by design (https://msdn.microsoft.com/en-us/library/ff527268.aspx), you are "on your own" for loading an assembly that you think will work. That's about where my knowledge on this ends, though: my logging wasn't working before and now it is.
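The GetAssemblies() variant mentioned above might look something like this. This is a sketch of that alternative, not the code actually used in the thread; the class and method names are invented:

```csharp
using System;
using System.Linq;
using System.Reflection;

public static class LoadedAssemblyRedirect
{
    // Redirects any failed resolve for 'shortName' to whatever copy of
    // that assembly is already loaded, regardless of the requested version.
    public static void RedirectToLoaded(string shortName)
    {
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            var requested = new AssemblyName(args.Name);
            if (requested.Name != shortName) return null;

            return AppDomain.CurrentDomain.GetAssemblies()
                .FirstOrDefault(a => a.GetName().Name == shortName);
        };
    }
}
```

Unlike the Assembly.Load approach, this can never trigger a recursive resolve, since it only hands back assemblies that are already in the appdomain.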
Interesting. I've done this before but I'm just surprised that you don't get "The located assembly's manifest definition does not match the assembly reference" because of the referenced assembly mismatch. It must be because the initial assembly is loaded into a "no load context".
Great idea!!
Obviously still doesn't work for dlls loaded WITH a load context (like dlls referenced by Azure Functions)
@npiasecki is correct, this will work for strong named assemblies as well (as previously mentioned, the runtime actually does some of that internally as well). @jnevins-gcm for reference, pre-compiled assemblies are loaded in the load-from context. Regular function assemblies are loaded without a context.
@flagbug that is a good approach! As previously stated, we're working on documenting some of the options and details about the resolution/load process, but I'd be curious to hear more about your specific scenario.
I bumped into this today in a different way while porting over a different system. I hadn't noticed that Application Insights had updated to 2.4.0.0, and my logging wasn't working; when I downgraded to 2.3.0.0 it started working again.
I understand that the team has plans afoot to address this, either by introducing a new major version of the runtime when the "blessed" assemblies are updated or by separating the host entirely from the function and using some kind of IPC, but in the meantime, am I right that I should be consulting this file as the list of assemblies that have been loaded by the host, and that I shouldn't go newer than the versions listed there?
@npiasecki that works, but you might be slightly better off looking at our web.config here, as the binding redirect ranges in that file ultimately determine whether an assembly you introduce will be redirected to our internal version or not.
I hit this also today with Newtonsoft. I'm trying to run some of the management API stuff in the QueueTrigger function and it's trying to find Newtonsoft 6.0.0.0 at compile time, so I can't even binding redirect it in code. Is there any way to tell the .NET Core compiler to apply binding redirects at compile time?
Any updates on assembly redirect?
It's still a while away, likely 6+ months. We're focused on proving the out-of-proc model can perform well with functional parity to the in-proc model by porting our JavaScript support over to it. We haven't started moving C#/F# yet, and these languages are even more challenging because the programming model is richer (you can bind to more types, etc).
There's a possibility that in our .NET Core port we can make our existing binding redirects more aggressive, as this would be an opportunity to make some breaking changes. This might help in scenarios where you are trying to use a slightly newer version of a given dependency, but it's not the same thing as letting you specify your own binding redirects. We should know whether this change is feasible within the next month or two.
There's no way you can just do this via the simple, surefire implementation? I think most people would disagree with the direction and timeframe you're proposing.
@jnevins-gcm what implementation are you referring to? This thread has a discussion about some workarounds that can help in a subset of cases (e.g. if the assembly is not loaded by the host, such as log4net). Is that what you're referring to?
As you said, the workarounds don't support different versions of assemblies already loaded into the host's default load context. I can think of two fairly simple implementations (the second would be my preference):
1. Allow the function app package to include an app.config file or appSettings to add custom binding redirects. The pro of this is that it's simple. The con is that you allow the client to use versions of dlls that could potentially break the function runtime itself.
2. Don't load ANY dlls except WebJobs.Script into the default load context of the function host app domain. In other words, EVERY dll, including all function runtime required dlls, should be loaded into the LoadFrom context. Pro is that this gives full flexibility of assemblies loaded. Con is that it's slightly more overhead to implement. The good news is that most dlls loaded by the function runtime already behave this way because of FunctionMetadataResolver.
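The "no load context" mechanism this discussion keeps coming back to boils down to loading from a byte array. A sketch of the mechanism (not the runtime's actual implementation):

```csharp
using System.IO;
using System.Reflection;

public static class NoContextLoader
{
    // Assembly.Load(byte[]) places the assembly in no load context ("neither"
    // context), so multiple versions of the same assembly can coexist in one
    // appdomain and be handed out selectively from an AssemblyResolve handler.
    public static Assembly LoadWithoutContext(string path)
    {
        return Assembly.Load(File.ReadAllBytes(path));
    }
}
```

The trade-off is that assemblies loaded this way are not found by normal probing, so every dependency resolution must go through AssemblyResolve.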
I've been watching this thread for a few weeks in the hope of a resolution, and it's disappointing to hear that we're still 6+ months out. I might not be fully understanding the scope of the issue, but we've attempted to move our WebJobs to Functions twice now and abandoned it twice.
As soon as we have an external dependency on almost anything newish (which we invariably do) it just falls apart with versioning conflicts. Mostly seems to be an issue with assemblies already loaded internally - or dependencies of these. But also if two external assemblies have dependency on different versions of the same thing. It's particularly problematic with something like EntityFrameworkCore which has dependency on a whole bunch of other bits - a lot of which are newer than the internal bits.
The last batch of updates & preview tooling etc was a huge move in the right direction for us and it's a real shame that we're coming unstuck on this as the Functions model suits our use-case perfectly. As it stands though, for us it's really not usable beyond the simple "read from a queue", "write to a table" style function and it does seem to me that lack of assembly redirection is the main problem. That said, we might be trying to put a square peg in a round hole here and I'd much rather know that it's 6+ months out than be holding on for something which won't appear in the timescale we need it.
Also, it would be ideal to have the FunctionMetadataResolver itself read an app.config from the function app folder and use the binding redirects specified in that file to control the dynamic resolution. That would be great as it would resolve most issues out of the box without any intervention needed by the developer (including the scenario mentioned above by wimagee)
Yes we too have been trying to migrate some of the more complex pieces of code we have to the function model (anxiously awaiting updates). However even trying to reference the Azure management libs will cause the dependency resolver to die out with some conflict on Newtonsoft v6 and v9.
At this point we're staying with a complex, unfriendly powershell implementation because those functions behave correctly. Our options are now limited to implementing the rest calls ourselves completely, and losing the nice structure and types provided by the management libs or leaving in hard-to-test powershell.
@jnevins-gcm Approach 1 puts us in a difficult position when it comes to support - we're very reluctant to introduce a feature that allows users to break themselves in new and obscure ways that are difficult to debug. The particularly painful scenario is where a customer uses a binding redirect that works correctly, and then we make an internal change that works fine against our bits, but does not work against the redirected library. We roll that change out and the customer with the redirect gets broken in prod.
I'll let @fabiocav comment on the second approach you described.
I agree on approach 1 not being a great idea. Accordingly, I'd suggest approach 2, plus specifying a configuration to the FunctionMetadataResolver for wimagee's use case (two referenced assemblies referencing different versions of another assembly). Note, though, that this specific use case can already be handled today, painfully, by managing AssemblyResolve yourself... but expecting most/all developers to do that is unreasonable.
I echo all of the concerns here. We had plans to move our large workloads to Functions, but this issue is a complete showstopper. DocumentDB, Json.NET, etc... these are core libraries used in nearly every workload. We have to take updates on them in our own libraries to continue moving our own apps forward.
Reading through, I understand that this is a complicated issue, requiring a significant refactor for the permanent solution... which is hopefully along the lines of isolating the runtime dependencies from those of the compiled function. 6 months though... we really need something we can work with in the short term, even if there is a risk of breaking production code. I mean, we're building this stuff with preview tools after all; this crowd is no stranger to things breaking or changing.
I actually like both options @jnevins-gcm proposed as interim workarounds. In addition to that, I would propose a configuration option to opt out of automatic runtime updates (or specify a specific version or range). This could mitigate some of the scenarios where automatic runtime/bits updates cause unpredictable and breaking changes to our functions. I realize that has its challenges as well; however, I do think we need to find a balance that gives us this flexibility while still keeping the ease of use, auto scale, etc. we all love about Functions.
@jnevins-gcm regarding option 2, as you've pointed out, for most of the DLLs we load in the context of functions, we already follow a similar approach to what you're describing (actually initiating the load without a context, instead of load-from for assemblies brought in by NuGet package references, for dynamically compiled functions).
Applying that to core runtime dependencies, with the current model, has a few subtle challenges, and would still require some customer/function provided unification logic in the case of types used by function bindings, which is where many customers end up running into issues with mismatches and lack of assembly redirect support. This introduces a high risk of a breaking change to the current behavior, and fragility that may be very problematic, making the runtime susceptible to external breaking changes that would be very difficult to diagnose. We've explored this in the past and landed where we are based on some of those issues.
I do agree that it would be good to have something sooner than what we're planning as a long term solution, and we have plans that are similar to what you've mentioned in your comment here. The improvement would enable redirects to be applied within the scope of a FunctionAssemblyLoadContext, influencing how the metadata resolver loads those assemblies (what version it looks for). This enhancement, combined with some relaxation options, is very safe to introduce and would address one class of issues without requiring custom code.
It's worth noting that there are a couple of different classes of issues related to binding redirect support, and this thread has a mix of them. The enhancement proposed above, as is, would not automatically resolve issues where there's "type interoperability" between the bindings and the function (e.g. a function that binds to CloudBlockBlob and references a given version of the storage SDK that differs from what the binding uses). We have other work (in addition to the long term work mentioned by @paulbatum) planned to mitigate those issues.
I've created this issue to make it easier to track this moving forward and will be adding details to it as soon as possible (based on the current plans, towards the end of the month): https://github.com/Azure/azure-webjobs-sdk-script/issues/1716
This type of issue breaks the serverless architecture concept. The goal is to execute code easily. I also struggled 2 months ago trying to build my Azure Functions because "they are the future of WebJobs", and I eventually stopped because of the high level of complexity. From now on, I apply a "12 months before use" rule to any new product released to Azure, to ensure the product is stable and the feedback from users is positive.
Precompiled C# Azure Functions come with other issues, such as "hacking" the VS project to make it work. Creating precompiled C# Azure Functions is far from a single click on "New Project".
Rudy - parts of the issue still do apply to precompiled code.
Fabio, that's good to hear you are pursuing a shorter-term interim solution. Presumably that solution is not also six months out? Basically, the current state of Functions precludes its use in many/most non-hello-world scenarios.
Per this customer issue:
http://stackoverflow.com/questions/40816285/azure-functions-with-nuget-packages-that-have-different-versions-of-the-same-dep#