dotnet / runtime

.NET is a cross-platform runtime for cloud, mobile, desktop, and IoT apps.
https://docs.microsoft.com/dotnet/core/
MIT License

Survey: Interop #40484

Closed elinor-fung closed 4 years ago

elinor-fung commented 4 years ago

We are interested in learning about the experience of developers that rely on or leverage interop. This is an opportunity for us to learn from you about how the community interops between .NET and other languages/runtimes/platforms. We would like to understand how developers think about interop conceptually and how it can be improved.

We are interested in all interop scenarios, not just those involving .NET - we want to learn from what works and doesn't work well on other languages/platforms. So if your project involves Rust and JavaScript interop, understanding those details is also interesting and very much welcome.

The below list represents questions we have, but feel free to ignore any/all and add your own framing, thoughts, and insights.

For combinations of interop (e.g. .NET and C/C++), can you tell us:

If you would rather not comment on GitHub, feel free to e-mail @AaronRobinsonMSFT or @elinor-fung - e-mail addresses can be found in our profiles.

Thank you!

john-h-k commented 4 years ago

Which language/runtime/platform are you trying to interop with. If there is a public link describing the API/ABI etc, please reference it.

CPython, via the C API - https://docs.python.org/3/extending/extending.html

What stage are you at in the interop? Planning/designing, implementing, stuck, completed, abandoned.

Stuck, due to issues with loading methods: either I need variadic function pointers (GetExport itself worked fine), or I need to work out why my DllImports were complaining about a missing ordinal on arbitrary methods.

What level of interop is required?

Ideally, 2 way interop, so Python can call C# and C# can call Python

What complications are there in terms of calling conventions, pinning memory, threads?

Python expects your module to export a native function called PyInit_&lt;modulename&gt;. There isn't an easy way to do this at the moment - I ended up using Aaron Robinson's DNNE project for it. A large proportion of Python methods are varargs, which isn't currently supported for interop on Unix as far as I know; that is a big blocker. Python also provides you with the library you invoke into at runtime, which has caused some strange effects I haven't been able to resolve (some methods work fine, and others complain about "Index not found" with a FileNotFoundException). Python also defines several exported fields (Py_NoneStruct, to name one), which need to be referenceable to be usable. This can be worked around by providing a native shim method that returns pointers to them, but that is suboptimal. Finally, Python wants pointers to several metadata structures which, as far as I can tell, it expects to be pinned for the lifetime of the interop. We have the POH, but it doesn't support single objects yet :( - that would make this nicer.
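The shim workaround for exported fields can be sketched as follows - a tiny exported function returns the address of the global that cannot be imported directly. The names (`NoneLike`, `get_none_struct`) are invented for illustration; this is not the real CPython layout:

```cpp
// Sketch of the "native shim" workaround: the exported *field* (standing in
// here for something like Py_NoneStruct) can't be imported by the managed
// side directly, so a small exported function returns its address instead.
struct NoneLike { long refcount; };

static NoneLike none_struct = {1};        // stands in for an exported global

extern "C" NoneLike* get_none_struct() {  // the shim the managed side calls
    return &none_struct;
}
```

The managed side then caches the returned pointer once and treats it like the imported field.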

What is required in terms of function pointers / delegate-like capabilities?

Vararg function pointers would be great, but this is currently a C# limitation - https://github.com/dotnet/csharplang/issues/3718

What is the lifetime management of those objects, e.g. GC, reference counted?

I believe they are reference counted.
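As a rough analogy of how CPython manages lifetimes (Py_INCREF / Py_DECREF semantics), here is a toy reference-counting sketch; the layout is invented and not the real CPython object header:

```cpp
// Toy analogue of reference counting: each object carries a count and is
// freed when the count reaches zero. The `freed` flag exists only so this
// sketch can observe deallocation.
struct RefCounted {
    int refcount;    // starts at 1 for the creating owner
    bool* freed;
};

void incref(RefCounted* o) { ++o->refcount; }

void decref(RefCounted* o) {
    if (--o->refcount == 0) {
        *o->freed = true;
        delete o;
    }
}
```

The interop implication is that every managed wrapper must balance its increfs and decrefs exactly, typically via IDisposable or a finalizer.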

How many objects/interfaces/methods are involved?

It's a relatively large API - at least a thousand methods. Most types are relatively simple structs.

Is the interop layer manually crafted, or automatically generated through some tooling?

I generated it with ClangSharp's PInvoke generator

Is all the info statically defined up front (e.g. metadata) or does it involve dynamism, e.g. IDispatch or similar?

All of the C API is static. (Working with the returned Python objects is entirely dynamic, but I don't think that needs special interop support.)

How frequently do the calls need to be made (calls / second)?

Not frequently

What is the typical size of data being transferred (bytes / API call)?

Not much - everything is pass-by-reference, so the amounts are generally pretty trivial.

What memory patterns need to be supported, e.g. streams, structs, unions etc?

Long-term pinned static readonly structs would be nice in a few places. I can work around it with AllocHGlobal and pointers, but that is less clean in my opinion.

How are errors handled? Is there a notion of exceptions, error codes, some other mechanism? What needs to happen on one of those errors, how does the other side know what is safe vs corrupted?

It just works via simple error codes, so not really applicable here

What would make interoperability easier?

Support for varargs on Unix. In-built support for exporting methods for native code to call. Support for importing exported fields.

Are there languages/runtimes that are emerging that are of interest? What makes them interesting?

CoreRT is interesting because of its support for native exports.

nathan130200 commented 4 years ago

A Java interop would be amazing. http://www.ikvm.net/

AaronRobinsonMSFT commented 4 years ago

@nathan130200 There is some interest in this area. A group of people have been working toward bringing it back - https://github.com/ikvm-revived/ikvm.

nathan130200 commented 4 years ago

@AaronRobinsonMSFT omg, thank you!

tannergooding commented 4 years ago

Naturally you can just chat with me on Teams as well, but I felt like sharing here too as my projects aren't work related 😄

Which language/runtime/platform are you trying to interop with.

C/C++, Java/Kotlin, Objective-C/Swift

C/C++ is fairly standard/ubiquitous, but Java/Kotlin are largely necessary for Android and Objective-C/Swift are largely necessary for iOS and any kind of more involved macOS development.

What stage are you at in the interop

C/C++ and Java/Kotlin are relatively straightforward to interop with. Objective-C/Swift, on the other hand, are much harder, given how the type system and metadata are exposed.

C/C++ works nearly end to end, with some areas lacking - namely more specialized calling conventions such as __vectorcall or varargs. C++ in particular is lacking in areas where you have C++03 POD vs non-POD types, as the runtime currently only has support for "POD" types.

Java/Kotlin are largely accessible given JNI but do require some level of manual management which isn't necessarily always straightforward. I have been prototyping some helper types for my own projects to better manage that.
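One shape such a helper type can take - sketched here in C++ with a stand-in for JNIEnv - is an RAII guard that releases a JNI local reference on scope exit. This is an illustration of the idea, not the actual prototype mentioned:

```cpp
// FakeEnv stands in for JNIEnv so the sketch is self-contained; the real
// thing would call env->DeleteLocalRef(ref).
struct FakeEnv {
    int live_refs = 0;
    void* NewLocalRef() { ++live_refs; return this; }
    void DeleteLocalRef(void*) { --live_refs; }
};

// RAII guard: the local reference is released when the guard goes out of
// scope, so manual bookkeeping can't be forgotten on early returns.
class LocalRef {
    FakeEnv& env_;
    void* ref_;
public:
    LocalRef(FakeEnv& env, void* ref) : env_(env), ref_(ref) {}
    ~LocalRef() { if (ref_) env_.DeleteLocalRef(ref_); }
    LocalRef(const LocalRef&) = delete;
    LocalRef& operator=(const LocalRef&) = delete;
    void* get() const { return ref_; }
};
```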

Objective-C/Swift I am still in the early prototyping stages for. There is not only a lot to ingest in how it works given I am unfamiliar with parts of it, but there is a lot of dynamic lookup involved and management overhead that makes creating bindings for something like Metal or the macOS Windowing APIs hard.

What level of interop is required?

Passing data in both directions and managing the state of data between two separate runtimes.

Is the interop layer manually crafted, or automatically generated through some tooling?

For C/C++, the raw bindings are currently automatically generated via ClangSharp P/Invoke Generator (https://github.com/microsoft/clangsharp). The generator attempts to create "raw" bindings, that is bindings which require no implicit marshaling or implicit pinning to make work, so they are effectively 1-to-1 with the native API.

However, this doesn't represent an API which is necessarily "easy" to use from C# where you might expect to use strings, arrays, spans, etc. I am still investigating if there is a way to automatically generate helper methods that expose these higher level concepts and which perform any pinning or conversions needed.

How frequently do the calls need to be made (calls / second)?

For something like TerraFX (https://github.com/terrafx/terrafx), which is meant to be used for games and other multimedia scenarios, you might expect anywhere from a few dozen to a couple hundred calls per frame at a frame rate of 60-240 as a reasonable default. That potentially works out to several thousand calls per second.

What is the typical size of data being transferred (bytes / API call)

The data is typically fairly small or is passed as a pointer if it is larger than what can fit in a register.

What memory patterns need to be supported, e.g. streams, structs, unions etc?

The most typical is unions and structs. Streams and COM style interfaces also come up.

How are errors handled?

Typically via error codes returned by the method or via errno.

What would make interoperability easier?

The current recommendation for several scenarios is (to my knowledge) to write C wrappers around the relevant calls to make them easier to use. However, this is potentially problematic, as you then need to compile your wrapper for all potential platforms. In the worst case, such as for libClangSharp, this is 10 RIDs and can amount to significant build and management overhead. Distributing these wrappers is also a problem, and one that NuGet doesn't handle amazingly today.

Doing reverse calls (calling .NET from another language) is also difficult today as you can largely just get references to static methods and then have to manage and marshal any GCHandles yourself (or some general way to track an id you give to native to a managed object, so that you can correctly call a method on the instance).

In general, I think better support for managing this state and being able to map things like ids to managed objects (or the reverse) would be beneficial.
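The id-to-object mapping described above can be sketched as a handle table: an opaque integer is handed to native code, and the managed side resolves it back to a live object (the role GCHandle plays in .NET). Names here are invented for illustration, and a `std::string` stands in for a managed object:

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <utility>

// Sketch of an "id -> managed object" table for reverse calls: native code
// holds only the opaque id, never a direct pointer to the managed object.
class HandleTable {
    std::map<std::uint64_t, std::string> objects_;
    std::uint64_t next_ = 1;
public:
    std::uint64_t pin(std::string obj) {            // like GCHandle.Alloc
        objects_[next_] = std::move(obj);
        return next_++;
    }
    const std::string& resolve(std::uint64_t id) {  // like GCHandle.Target
        return objects_.at(id);
    }
    void release(std::uint64_t id) {                // like GCHandle.Free
        objects_.erase(id);
    }
};
```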

Having a way to manage inheritance hierarchies that aren't directly representable in native would also be beneficial. For example, today you can use the COM wrapper APIs to handle vtbls and such as .NET interfaces. However, using interfaces like actual objects isn't necessarily intuitive and may not translate correctly to other runtimes like Java or Objective-C. It also doesn't allow you to easily manage additional state if required and so you can end up with a class which you give to the user, which contains an interface, which itself is a "shim" for the native vtbl.

The way I really want to expose it to the user, for example, is to have a class (that way I can have a proper inheritance hierarchy, since structs don't allow inheritance) which wraps the native object and exposes the methods in a .NET-friendly manner. The methods are then simply wrappers that invoke the corresponding vtbl method, or which can easily have additional customization/logic as needed. I can expose the underlying native pointer as a handle, and things look and feel like a regular .NET type, with minimal overhead. Some of this starts falling apart with casts, however. For example, you might get back an IDXGIFactory but it is actually an IDXGIFactory5, and there is no easy way to surface that or keep the type identity. The COM wrapper support solves this by using interfaces, but then you can't have the same level of customization, and trying to further wrap those types adds additional layers of overhead/complexity.
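The wrapper-over-vtbl pattern described above, reduced to a minimal sketch; the vtable and wrapper names are invented, not real DXGI types:

```cpp
// A COM-style native object is a pointer to a table of function pointers.
struct NativeVtbl {
    int (*GetValue)(void* self);
};
struct NativeObject {
    const NativeVtbl* vtbl;
    int value;
};

static int native_get_value(void* self) {
    return static_cast<NativeObject*>(self)->value;
}

// The ".NET friendly" wrapper class: stores the native pointer as a handle
// and forwards each method through the vtable.
class Wrapper {
    NativeObject* handle_;
public:
    explicit Wrapper(NativeObject* h) : handle_(h) {}
    int GetValue() { return handle_->vtbl->GetValue(handle_); }
    NativeObject* Handle() const { return handle_; }
};
```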

sunkin351 commented 4 years ago

Which language/runtime/platform are you trying to interop with. If there is a public link describing the API/ABI etc, please reference it.

I'm in the middle of reimplementing an interop layer with the FMOD audio engine. https://www.fmod.com/resources/documentation-api?version=2.1&page=core-api.html

Target Interop language is C/C++.

What stage are you at in the interop? Planning/designing, implementing, stuck, completed, abandoned.

One branch of my project is at the completed stage, but I have another branch at the implementing stage because I'm porting it to .NET 5. As of writing this, it's almost complete - I'm just waiting for the new function pointer syntax to ship so I can finalize everything.

From 1 to 10, where 10 is hardest, how hard was it to complete?

My experience was around a 6, mainly because it had me doing a lot of tedious boilerplate for every function in the feature set I wish to expose to my users. (Writing a source generator to take care of half of that did take a load off of me.)

What level of interop is required?

Passing data to and from, of course

Function pointers were a necessity for this project, both for callbacks and for API function calling

Reentrancy is a thing with this library, both for debug callbacks, and for DSP object configuration in user code.

Before porting to .NET 5, I did have class objects take ownership of API handles, but with .NET 5 I'm no longer doing that as I'm no longer keeping track of managed delegates in library code.

A total of around 9 API object types, API function count ranging into the hundreds

The API binding layer is a hybrid of hand-crafted and generated code. Before .NET 5 it was created by a 3rd-party runtime library that would generate it at runtime, but as of .NET 5 I wrote a source generator to take care of that work.

All functions and types are statically defined up front, making this AOT compatible as of .NET 5

How frequently do the calls need to be made (calls / second)?

Assuming a game engine environment, upwards of several hundred API calls per frame - but it really depends on what the user is doing.

What is the typical size of data being transferred (bytes / API call)?

Again, it really depends on what the user is doing. Startup API calls usually pass path strings to audio file locations, which I marshal manually. And in some circumstances, you can pass audio data directly. Otherwise, it's usually not more than 20-30 bytes per call.

What memory patterns need to be supported, e.g. streams, structs, unions etc?

Structs, big time, with minor support for unions, which is already possible.

How are errors handled?

Return codes, from every API function.

So far, I've achieved all of my goals with .NET 5 interop. But of course, I've only dabbled in C interop, so others might have ways to improve it.

FiniteReality commented 4 years ago

I'm writing a bunch of tooling to help improve the experience when it comes to .NET interop with C/C++, such as being able to compile C/C++ code on multiple platforms via a custom MSBuild SDK, so that I'm not stuck with Windows and Visual C++.

This is pretty difficult work: while MSBuild docs exist, I can't find many docs on how individual components within the MSBuild ecosystem communicate - for example, how the various targets defined by Microsoft.NET.Sdk and NuGet communicate. A high-level flowchart here would really help my understanding, without my having to dig through hundreds of XML and C# files to find targets and tasks.

My goal is to ultimately allow a user to build a C/C++ project using MSBuild, and then pack it into a NuGet package like any other project, so that it can be uploaded to NuGet and referenced as a dependency. Ideally, I'd be able to just run dotnet build from the command line and have all C, C++ and C# code for a project just compile, so that it can be deployed as a single bundle. This would really improve installation of libraries which depend on native libraries (e.g. C# wrappers for libopus, the reference implementation for the Opus audio codec) or even applications which in turn depend on those.

As an example, @tannergooding mentioned writing C wrappers was problematic as compiling for various platforms was hard - the tooling I'm trying to develop would help alleviate this, but marking my packages on NuGet correctly is awkward, and I have to rely on awkward tricks such as this, this and this in order to correctly package these C interop wrappers.

My recommendations to improve this interoperability would be to allow NuGet packages which target .NET to reference NuGet packages which target the native framework. NuGet seems to already recognise native as a framework, and it even seems to use it when a C/C++ MSBuild project which uses NuGet specifies to use a RestoreProjectStyle of PackageReference, but referencing these projects from a project targeting .NET causes an error.

Furthermore, more documentation on the design and structure of things like NuGet and the .NET SDK internally (e.g. MSBuild tasks, targets, etc.) would be extremely nice. If these already exist, making them public and/or easier to find would also help.

SteveKCheng commented 4 years ago

Hi, what a great question!

I use C++ at my company, and I built a code generator to interop between C++ and .NET Core. Originally I had used C++/CLI, but that only works on Windows, while we want to run on Linux also.

The code generator has two parts, one part is a modified Clang compiler, and the other part in C#. (I would like to open-source it, but company politics...) We annotate our header files with custom C++11 attributes on things we want exported. Reading these header files, our custom Clang outputs a big JSON file describing the ABI. This includes struct definitions with offsets (necessarily platform-dependent), and linkage and type information on the C++ functions/methods themselves.

Then the next part of the code generator reads this JSON to create extern "C" interfaces for the C++ functions/methods. In limited situations we could actually get P/Invoke to call C++ functions directly, but we need exception handling, so all the extern "C" wrappers catch exceptions and write them to out parameters. Our extern "C" wrappers will translate raw pointers to lvalue/rvalue references etc. so that we can export "modern C++" functions. To some degree, it even works with templated types, just by instantiating each individually. Of course, template functions cannot be exported, only specializations/instantiations.
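The exception-catching extern "C" wrapper pattern described above might look roughly like this (a hypothetical function, not the actual generated code):

```cpp
#include <stdexcept>
#include <cstring>

// The underlying C++ function, which may throw.
static int divide_cpp(int a, int b) {
    if (b == 0) throw std::invalid_argument("division by zero");
    return a / b;
}

// The generated-style extern "C" wrapper: catches every exception and
// reports it through out parameters, so P/Invoke never sees C++ unwinding.
extern "C" int divide_wrapped(int a, int b, int* result,
                              char* err, int errlen) {
    try {
        *result = divide_cpp(a, b);
        return 0;                              // success
    } catch (const std::exception& e) {
        std::strncpy(err, e.what(), errlen - 1);
        err[errlen - 1] = '\0';
        return 1;                              // error code for managed side
    }
}
```

The managed side then checks the return code and rethrows as a .NET exception.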

I think it is easily seen that manipulating this JSON --- in a functional style like a compiler transformation --- is not so easily done in C++, so that's why I decided to extract the JSON out first and isolate it from the enormous complexity of the Clang codebase.

Finally, the same C# code generator generates C# wrappers doing P/Invoke on those extern "C" wrappers.

This would become a very long comment if I were to describe everything about our system, but in summary we do fairly sophisticated interoperability:

The scale of difficulty is definitely an 8 or 9: I had to learn how Clang represents C++ and modify Clang to emit exactly the right information in JSON. The C# code generator mentioned above was easier: I had written the original version in Python and then had an intern port it to C# as soon as it got complex enough to need static typing. Together this took a good few months of full-time work.

Things that would be a great help (I'd be happy to give more details; just ask!):

Our tooling has a lot of sharp edges for sure but it supports a 20MB C++ DLL that processes production data, on multiple Linux servers.

Finally I want to note how our solution compares to other solutions out there:

The fortunate position that our project has is that we can change our C++ API to fit better for interop scenarios.

AtomicBlom commented 4 years ago

There's a project that I really should try again with .NET Core 3: I experimented with the idea of writing a Minecraft mod that allowed users to write C# scripts to control a robot or their environment. At the time I was more interested in the feasibility of the project than in what the end shape would be.

The original goal, following the announcement that Bedrock edition would support C# modding, was to see if I could create a compatibility layer with Java edition that would work on Windows/Mac/Linux. This later turned out not to be the case, but it was still a fun possibility I wanted to explore.

Which language/runtime/platform are you trying to interop with. If there is a public link describing the API/ABI etc, please reference it.

Java 8 and C# via C++. I'm using JNI to generate the C interfaces.

What stage are you at in the interop? Planning/designing, implementing, stuck, completed, abandoned.

Temporarily abandoned - now that you've reminded me about it, I intend to pick it up again.

From 1 to 10, where 10 is hardest, how hard was it to complete?

7 - finding the right documentation was hard. There was a lot of easily findable information about interop with .NET Core 1.x, but I was targeting 2.0. Once I managed to track down the right documentation and headers, calling from Java to C# turned out to be really easy and straightforward. Where I got stuck was the reentrancy of calling back into Java.

What level of interop is required?

I wish I could be more verbose and actually try updating the project before hitting send, but I'm a little short on time. I'll try get to it soon and update my answer if time permits.

vivainio commented 4 years ago

@john-h-k consider checking out https://github.com/pythonnet/pythonnet if you haven't already

dlemstra commented 4 years ago

Which language/runtime/platform are you trying to interop with. If there is a public link describing the API/ABI etc, please reference it.

The C library of https://github.com/dlemstra/Magick.Native, which is an API to the @ImageMagick project (https://github.com/ImageMagick/ImageMagick) and can be used on Windows/Linux/macOS with the C# library (https://github.com/dlemstra/Magick.NET/tree/master/src/Magick.NET/Native).

What stage are you at in the interop? Planning/designing, implementing, stuck, completed, abandoned.

Completed, but it has gone through various iterations. It started with a dynamically linked C++/CLI library and now uses a static C library.

From 1 to 10, where 10 is hardest, how hard was it to complete?

There was a lot of documentation and examples on how to do this (back in 2013), so I would say a 5 or a 6. The most difficult part was figuring out how to get the function pointers working and calling back from C into the .NET code.

Passing data from YYYY to XXXX or vice versa.

The data is passed in both directions. One example is where ImageMagick can use a byte array to read an image; another is where the data is sent back when the image is being written to a byte array.

Calling into methods in XXXX from YYYY or vice versa.

Also both directions: calling the API of the library, and being called back through function pointers.

What complications are there in terms of calling conventions, pinning memory, threads?

For the calling convention I can use Cdecl. Being in control of the C library also makes this a lot easier. In some places memory is pinned to improve the performance of copying data. Both .NET and the C library have their own threading, so there are no complications involved there.

What is required in terms of function pointers / delegate-like capabilities?

One example of function pointer use is reading data from a MemoryStream: a function pointer is passed into the C method, which then calls back and reads data from the memory stream. This is done synchronously; it would be nice if I could do it asynchronously, but I have no idea how I would need to do that.

Passing object references from XXXX to YYYY or vice versa.

Not really passing objects but only pointers to objects from .NET to C and back from C to .NET.

Does the interop involve reentrancy, e.g. YYYY -> XXXX -> YYYY?

I have not had this use case and I also don't know when I would need this.

What is the lifetime management of those objects, e.g. GC, reference counted?

The lifetime of these objects is controlled using IDisposable.

How many objects/interfaces/methods are involved?

The project is only calling various methods of the C library, and at the time of writing that is 1381 methods. Some of these methods create objects, some set fields of those objects, and some others call methods that use the created objects.

Is the interop layer manually crafted, or automatically generated through some tooling?

Initially this was done manually, but at some point I created a tool that uses a .json file describing the objects and the methods/properties they contain, and uses that to generate a nested class in the C# objects of the library.

Is all the info statically defined up front (e.g. metadata) or does it involve dynamism, e.g. IDispatch or similar?

All the information is known up front.

How frequently do the calls need to be made (calls / second)?

That depends on where the library is being used.

What is the typical size of data being transferred (bytes / API call)?

This depends on the size of the image file being used, but the @ImageMagick library itself reads the data in blocks. The block size depends on the image format being used.

What memory patterns need to be supported, e.g. streams, structs, unions etc?

There is no real overlapping memory between the .NET and the C side. Allocations are done inside the native library and pointers are passed back to use for future calls.

How are errors handled?

The advantage here is that the @ImageMagick library already handles this pretty well. Most methods can be called with a pointer to a struct that will set a field to make clear that an exception was thrown.

Is there a notion of exceptions, error codes, some other mechanism?

As stated before, an exception info struct is passed to most methods of the C library. When the method returns back into the C# code, the object is inspected and, when necessary, additional information is retrieved by passing the pointer of that object to a method that can retrieve the details.
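The exception-info-struct pattern described can be sketched as follows, with invented names standing in for the real ImageMagick structures:

```cpp
#include <string>

// The caller passes a pointer to a status struct; the callee fills it in on
// failure, and the managed side inspects it after the call returns.
struct ExceptionInfo {
    int severity = 0;      // 0 = no error
    std::string reason;
};

// Hypothetical API entry point: does some work, or reports a failure
// through the struct and returns a sentinel value.
int read_pixel(int x, ExceptionInfo* ex) {
    if (x < 0) {
        ex->severity = 400;
        ex->reason = "coordinate out of range";
        return -1;
    }
    return x * 2;          // stand-in for real work
}
```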

What needs to happen on one of those errors, how does the other side know what is safe vs corrupted?

When the C library fails the .NET library will raise an exception that contains the information that was given by the C library.

How is testing done for interoperability scenarios?

For most tests, the .NET class is called, which in turn calls the method in the C library. Then the object is inspected using various methods that also call the C part of the library.

How are diagnostics done for interoperability scenarios?

With Visual Studio, and by making sure that "nativeDebugging": true is set in launchSettings.json, I am able to debug the C code and diagnose most problems.

What would make interoperability easier?

Passing strings back and forth (the library uses UTF-8) uses some code that I wrote myself and is probably not the best way to do this. Having something that would "change" a .NET string into a UTF-8 char* would be very useful. And maybe some default tooling to generate the code based on the exports of the library? When I started, there was some tooling, but it was very complicated to use, which is why I ended up writing my own code generation tool (https://github.com/dlemstra/Magick.NET/tree/master/tools/FileGenerators/Native).
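The conversion being asked for is UTF-16 to UTF-8 (a .NET string is UTF-16 under the hood). A small hand-rolled encoder can sketch it; this assumes well-formed input and is illustrative, not the code the library uses:

```cpp
#include <string>
#include <cstddef>

// Minimal UTF-16 -> UTF-8 encoder: combines surrogate pairs into a code
// point, then emits 1-4 bytes per code point. Assumes well-formed input.
std::string utf16_to_utf8(const std::u16string& in) {
    std::string out;
    for (std::size_t i = 0; i < in.size(); ++i) {
        char32_t cp = in[i];
        if (cp >= 0xD800 && cp <= 0xDBFF && i + 1 < in.size()) {
            char32_t lo = in[++i];   // combine the surrogate pair
            cp = 0x10000 + ((cp - 0xD800) << 10) + (lo - 0xDC00);
        }
        if (cp < 0x80) {
            out += char(cp);
        } else if (cp < 0x800) {
            out += char(0xC0 | (cp >> 6));
            out += char(0x80 | (cp & 0x3F));
        } else if (cp < 0x10000) {
            out += char(0xE0 | (cp >> 12));
            out += char(0x80 | ((cp >> 6) & 0x3F));
            out += char(0x80 | (cp & 0x3F));
        } else {
            out += char(0xF0 | (cp >> 18));
            out += char(0x80 | ((cp >> 12) & 0x3F));
            out += char(0x80 | ((cp >> 6) & 0x3F));
            out += char(0x80 | (cp & 0x3F));
        }
    }
    return out;
}
```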

Are there languages/runtimes that are emerging that are of interest? What makes them interesting?

I have only been using interop for C, but I would love to get this library working on ARM. I have parked that until there is good ARM support for GitHub Actions.

Feel free to reach out (twitter/email) if there are any follow up questions from your side.

jonpryor commented 4 years ago

I'm one of the principal authors and maintainers of Java.Interop, core of Xamarin.Android:

Which language/runtime/platform are you trying to interop with. If there is a public link describing the API/ABI etc, please reference it.

Java via JNI:

What stage are you at in the interop? Planning/designing, implementing, stuck, completed, abandoned.

"Completed," to the extent that anything is ever actually completed…

From 1 to 10, where 10 is hardest, how hard was it to complete?

10?

What level of interop is required?

As full as possible. ;-)

C# code should be able call Java code, Java code should be able to call C# code, and -- the real kicker -- object lifetimes should be "reasonable".

  • Passing data from YYYY to XXXX or vice versa.

Both.

  • Calling into methods in XXXX from YYYY or vice versa.

Both.

  • What complications are there in terms of calling conventions, pinning memory, threads?

No major complications here.

  • What is required in terms of function pointers / delegate-like capabilities?

The JNIEnv struct is full of function pointers, which can be invoked with delegates or (presumably) function pointers. Once upon a time, delegates were used; we migrated to P/Invokes in a native library a few years ago for performance reasons.

  • Passing object references from XXXX to YYYY or vice versa.

Yes.

  • Does the interop involve reentrancy, e.g. YYYY -> XXXX -> YYYY?

Yes.

  • What is the lifetime management of those objects, e.g. GC, reference counted?

This is a major complication, and where Mono-specific extensions are used.

Java object lifetime is handled via JNI object references. Generally, we hold Global references to Java instances, preventing Java object references from being collected by the Java GC until the managed side is done with them. That's the simple part.

The complicated part is around the aforementioned "reasonable-ness": the expectation is that Java values should be able to reference managed values, and keep those values alive:

var list = new Java.Util.ArrayList();
list.Add(new MyJavaLangObjectSubclass());

given the C# declaration:

class MyJavaLangObjectSubclass : Java.Lang.Object {
}

Making this work "reasonably" involves using Mono's SGen Bridge, which alters the GC semantics of Java.Lang.Object and Java.Lang.Throwable subclasses so that Xamarin.Android code is executed to determine if an object instance can actually be collected, and the Xamarin.Android code involves "toggling" JNI Global References to JNI Weak Global References, performing a Java-side GC, and then seeing what was collected from Java.

I do not currently know if the .NET CoreCLR will gain an analogue to Mono's SGen Bridge.

There is an idea for a non-bridged backend, but it has not been implemented.

  • How many objects/interfaces/methods are involved?

"All" (most) of the types and methods declared in the Android SDK android.jar are involved, including 6168 classes and 1889 interfaces.

  • Is the interop layer manually crafted, or automatically generated through some tooling?

"Yes": large parts of the interop layer are both generated and manually crafted.

Higher-level bindings are entirely generated, but can be customized as well.

  • Is all the info statically defined up front (e.g. metadata) or does it involve dynamism, e.g. IDispatch or similar?

Dynamism is involved. JNI signatures are statically defined, but the combination of a name and signature must be looked up with e.g. JNIEnv::GetMethodID() to obtain a jmethodID/jfieldID/etc., which is then used for the "real" method invocation via e.g. JNIEnv::CallObjectMethod().

// If you have a `xamarin/java.interop` build on macOS, then run `make shell`
var jvm = new Java.InteropTests.TestJVM();
Java.Interop.JniRuntime.SetCurrent (jvm);

var Object_class = new Java.Interop.JniType("java/lang/Object");
var Object_init  = Object_class.GetConstructor("()V");
var instance     = Object_class.NewObject(Object_init, null);

var Object_toS   = Object_class.GetInstanceMethod("toString", "()Ljava/lang/String;");
var s            = Java.Interop.JniEnvironment.InstanceMethods.CallObjectMethod(instance, Object_toS, null);
// s == "0x7f92cd512170/L"
Console.WriteLine(Java.Interop.JniEnvironment.Strings.ToString(s));
// prints e.g. "java.lang.Object@2280cdac"

Java.Interop.JniObjectReference.Dispose (ref s);
Java.Interop.JniObjectReference.Dispose (ref instance);

  • How frequently do the calls need to be made (calls / second)?

Hundreds-to-thousands, or more: the intended use case is to use C# to write Android GUI applications, and developers may e.g. subclass Android.Views.View to participate in GUI layout and rendering.

  • What is the typical size of data being transferred (bytes / API call)?

Depends on the call. Most calls pass around object references, which are pointer-sized. However, we "deep marshal" arrays, so if e.g. somebody has a 3+ MB byte array (image data!), then the entire 3+MB array is marshaled.

(In retrospect, this was possibly a bad design decision.)

  • What memory patterns need to be supported, e.g. streams, structs, unions etc?

Pointers.

  • How are errors handled?
    • Is there a notion of exceptions, error codes, some other mechanism?

Java has exceptions. java.lang.Throwable is the base class for all Java exceptions, and is bound as Java.Lang.Throwable, which is a subclass of System.Exception. Thus, all bound Java exceptions are C# exception types.

When Java code calls into C# code, C# exceptions may be wrapped into a Java Throwable subclass for marshaling and stack unwinding purposes.

  • How are errors handled?
    • What needs to happen on one of those errors, how does the other side know what is safe vs corrupted?

Java expects there to be no memory corruption.

Java has a concept of a "pending exception." Non-Java code can set the pending exception, which will be thrown upon return to Java. Non-Java code can also check to see if there is a pending exception, e.g. immediately after calling a Java method.

Java requires that there be only one pending exception at a time. Causing a new exception to be thrown while a pending exception exists is a good way to cause the process to crash.
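The one-pending-exception rule can be sketched as a tiny managed model (illustrative only; the real mechanism is JNIEnv's ExceptionCheck/ExceptionOccurred/ExceptionClear functions):

```csharp
using System;

// Illustrative managed model of JNI's one-pending-exception rule; the real
// mechanism lives in JNIEnv (ExceptionCheck/ExceptionOccurred/ExceptionClear).
class PendingExceptionSlot
{
    private Exception _pending;

    public void Raise(Exception e)
    {
        // Raising while another exception is pending is the crash scenario
        // described above, modeled here as a hard failure.
        if (_pending != null)
            throw new InvalidOperationException("an exception is already pending");
        _pending = e;
    }

    // Callers check immediately after each cross-VM call, which also clears.
    public Exception CheckAndClear()
    {
        var e = _pending;
        _pending = null;
        return e;
    }

    static void Main()
    {
        var slot = new PendingExceptionSlot();
        slot.Raise(new Exception("boom"));
        Console.WriteLine(slot.CheckAndClear().Message); // boom
        Console.WriteLine(slot.CheckAndClear() == null); // True
    }
}
```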

How is testing done for interoperability scenarios?

One-off testing and unit tests.

How are diagnostics done for interoperability scenarios?

There are various "toggles" and integration points to collect additional diagnostic information at runtime, e.g. "Global reference logging".

What would make interoperability easier?

Cross-VM GC "support": it should be possible for "another VM" to be responsible for keeping a C# instance alive, in situations where the .NET GC would otherwise believe that the C# instance is garbage. C# finalizers and GC.ReRegisterForFinalize() cannot be abused in this manner (I've inadvertently tried!), and the performance there is terrible

More/better C# equivalents to Java language features. C# 9 will hopefully get Covariant Return Types -- yay! -- but the existence of Java language features that went years without C# analogues has contributed to complications around binding Java code, e.g. default interface methods (now present in C# 8, but added to Java in 2014).

A Time Machine (to fix "legacy" binding mistakes).

mfkl commented 4 years ago

Which language/runtime/platform are you trying to interop with.

C/C++, C++/WinRT, Java. Bit of Rust before. Android, iOS, Windows, macOS, tvOS, Linux, rasp.

If there is a public link describing the API/ABI etc, please reference it.

https://github.com/videolan/vlc/tree/master/include/vlc

What stage are you at in the interop? Planning/designing, implementing, stuck, completed, abandoned.

Completed.

From 1 to 10, where 10 is hardest, how hard was it to complete?

Not sure; some parts were trickier than others, I guess.

What level of interop is required? Passing data from YYYY to XXXX or vice versa.

Yes, mostly for buffer-based APIs (audio/video data, streams, etc.).

Calling into methods in XXXX from YYYY or vice versa.

Yes.

What complications are there in terms of calling conventions, pinning memory, threads?

Some use of GCHandle, some callback thread scheduling.
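The GCHandle usage mentioned here typically looks like the following minimal sketch: pin a managed buffer so its address stays valid while native code (e.g. a callback) holds it:

```csharp
using System;
using System.Runtime.InteropServices;

class PinningExample
{
    static void Main()
    {
        byte[] buffer = new byte[4096];

        // Pin the array so the GC cannot move it while native code holds
        // the address (e.g. for the duration of an audio callback).
        GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        try
        {
            IntPtr address = handle.AddrOfPinnedObject();
            // ... pass `address` to the native side here ...
            Console.WriteLine(address != IntPtr.Zero); // True
        }
        finally
        {
            handle.Free(); // unpin once the native side no longer uses it
        }
    }
}
```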

What is required in terms of function pointers / delegate-like capabilities?

Haven't used the new C# 9 function pointers yet, but many delegates for reverse native callbacks.

Passing object references from XXXX to YYYY or vice versa.

Sure.

Does the interop involve reentrancy, e.g. YYYY -> XXXX -> YYYY?

Yes, it is possible though discouraged. If the native side invokes a managed callback that calls back into the native side, the native lib can be unhappy.

What is the lifetime management of those objects, e.g. GC, reference counted?

Most C objects have new and release functions, which are hooked up to the .NET IDisposable pattern. There are also retain() methods available on some types from the native lib to increment the ref count manually.
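A minimal sketch of hooking a native new/release pair to IDisposable; the release entry point is injected as a delegate here so the lifetime logic stands alone (in a real binding it would be a DllImport, e.g. a libvlc *_release function):

```csharp
using System;

// Sketch: owning a ref-counted native handle via IDisposable. The release
// action stands in for a DllImport'd native *_release function.
sealed class NativeHandleOwner : IDisposable
{
    private IntPtr _handle;
    private readonly Action<IntPtr> _release;

    public NativeHandleOwner(IntPtr handle, Action<IntPtr> release)
    {
        _handle = handle;
        _release = release;
    }

    public void Dispose()
    {
        if (_handle != IntPtr.Zero)
        {
            _release(_handle);          // drop our native reference exactly once
            _handle = IntPtr.Zero;
            GC.SuppressFinalize(this);
        }
    }

    ~NativeHandleOwner() => Dispose();  // safety net if Dispose is forgotten

    static void Main()
    {
        int releases = 0;
        var owner = new NativeHandleOwner(new IntPtr(0x1234), _ => releases++);
        owner.Dispose();
        owner.Dispose();                // idempotent: still released only once
        Console.WriteLine(releases);    // 1
    }
}
```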

How many objects/interfaces/methods are involved?

For libvlc, about 6 main public API C# types, ~300 native functions, ~30 structs.

Is the interop layer manually crafted, or automatically generated through some tooling?

I used CppSharp to kickstart it, but then reverted to manual for several reasons:

How frequently do the calls need to be made (calls / second)?

Depends on the API, some calls are usually not in a hot loop (like play() and pause()). But some definitely are and the frame rate depends on the media (e.g. could be 60 fps). Ex: https://www.videolan.org/developers/vlc/doc/doxygen/html/group__libvlc__media__player.html#gafae363ba4c4655780b0403e00159be56

What is the typical size of data being transferred (bytes / API call)?

Usually small, except for the buffer based APIs.

What memory patterns need to be supported, e.g. streams, structs, unions etc?

Yes, all of these.

How are errors handled? Is there a notion of exceptions, error codes, some other mechanism?

There is an event for errors, and some functions return false when they fail.

What needs to happen on one of those errors, how does the other side know what is safe vs corrupted?

It doesn't usually. Logging helps.

How is testing done for interoperability scenarios?

Some very basic unit tests and manual/user testing.

How are diagnostics done for interoperability scenarios?

Ah, that's on my todo list!

What would make interoperability easier?

Maybe this is relevant https://github.com/videolan/libvlcsharp/blob/3.x/src/LibVLCSharp/Shared/Helpers/MarshalUtils.cs

Are there languages/runtimes that are emerging that are of interest? What makes them interesting?

Interop goes both ways. .NET hosting is a step in the right direction IMO, and making it easy and well documented to get started with opens up plenty of interesting new interop scenarios, not only for .NET developers.

ldematte commented 4 years ago
* Which language/runtime/platform are you trying to interop with. If there is a public link describing the API/ABI etc, please reference it.

1) Quite a classic C/C++ scenario: integrate a native .dll and use it from C# code, but with lots of callbacks/function pointers. 2) Call into C# code to provide data, in as efficient a way as possible (zero-copy, if/whenever possible)

* What stage are you at in the interop? Planning/designing, implementing, stuck, completed, abandoned.

Completed

* From 1 to 10, where 10 is hardest, how hard was it to complete?

3/4

* What level of interop is required?

  * Passing data from YYYY to XXXX or vice versa.
  * Calling into methods in XXXX from YYYY or vice versa.

Both

    * What complications are there in terms of calling conventions, pinning memory, threads?

Pinning, thread affinity

    * What is required in terms of function pointers / delegate-like capabilities?

That was the "hard" part of 1). We did not use delegates, but used a thin C++/CLI stub to convert C function pointers to an observer interface.

  * Passing object references from XXXX to YYYY or vice versa.

kind of (just plain data arrays)

  * Does the interop involve reentrancy, e.g. YYYY -> XXXX -> YYYY?

For 2), yes

  * What is the lifetime management of those objects, e.g. GC, reference counted?

GC + pinning. In C++ they are stack allocated whenever possible.

  * How many objects/interfaces/methods are involved?

Fewer than 10 interfaces with fewer than 10 methods each.

  * Is the interop layer manually crafted, or automatically generated through some tooling?

Manual

  * Is all the info statically defined up front (e.g. metadata) or does it involve dynamism, e.g. `IDispatch` or similar?

Static

  * How frequently do the calls need to be made (calls / second)?

Many. It is high frequency data, so we range between 10/sec (min) and 10k/sec (max)

  * What is the typical size of data being transferred (bytes / API call)?

It is inversely proportional to frequency. High frequency calls have smaller payloads (less than 1 KB), lower frequency calls carry more data (up to 1 MB each)

  * What memory patterns need to be supported, e.g. streams, structs, unions etc?

Structs, and contiguous memory (arrays/ReadOnlySpan). Streaming is done using callbacks with data args (push not pull)
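The push-style streaming described here can be sketched as a callback that wraps the native (pointer, length) pair in a ReadOnlySpan without copying (the delegate signature is illustrative; requires compiling with /unsafe):

```csharp
using System;

class StreamPush
{
    // In real interop this would be an [UnmanagedFunctionPointer] delegate
    // invoked from native code with a raw pointer and a length.
    unsafe delegate void DataCallback(byte* data, int length);

    static unsafe void OnData(byte* data, int length)
    {
        // Zero-copy view over the native buffer, valid only for the call.
        var span = new ReadOnlySpan<byte>(data, length);
        int sum = 0;
        foreach (byte b in span) sum += b;
        Console.WriteLine(sum);
    }

    static unsafe void Main()
    {
        DataCallback onData = OnData;

        byte[] payload = { 1, 2, 3, 4 };
        fixed (byte* p = payload)   // pinned for the duration of the "call"
        {
            onData(p, payload.Length);
        }
    }
}
```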

  * How are errors handled?

When they need to cross border, through dedicated method on the interfaces

    * Is there a notion of exceptions, error codes, some other mechanism?

Error codes, but we translate them to different callbacks

    * What needs to happen on one of those errors, how does the other side know what is safe vs corrupted?

We always assume safe unless fatal; in that case we know we need to abort.

* How is testing done for interoperability scenarios?

Unit tests + stress tests (replay of real captured production data)

* What would make interoperability easier?

Keep the work going on Span/ReadOnlySpan/Memory. It is the way to go. Better function pointer/delegate interaction (using delegates in C/C++ and using function pointers in C#)

* Are there languages/runtimes that are emerging that are of interest? What makes them interesting?

Rust/Go. Low level but better (less bloated, less complicated, less error prone) than C++. But not viable options until there is a good, dependable IDE (editor/debugger) for them. Maybe VS? One day? :)

Thank you!

You are welcome!

kostasvl commented 4 years ago

It took a LOT of effort to get the Unreal Engine to interface with C# via C++/CLI. I thought about giving up, every step of the way! See here: https://github.com/EpicGames/UnrealEngine/pull/6245. If you can't access that, I printed it to pdf and uploaded it here: https://drive.google.com/file/d/1SMBep2D1cZHMbPRzuQYhas7d8-CgfPZU/view?usp=sharing.

If Microsoft were to address all those issues, I'm sure Epic would be more motivated to address any remaining issues on their side. As it is, I hesitated to even send them that PR.

Thanks!

nathan130200 commented 4 years ago

@kostasvl I've found a .NET bindings implementation for Unreal Engine on GitHub: https://github.com/nxrighthere/UnrealCLR using .NET Core.

GSPP commented 4 years ago

I'm using TensorFlow from C#. TensorFlow is normally accessible from Python. I use Pythonnet and dynamic to interact with real Python TensorFlow objects. The code looks quite like the original Python code and it interacts nicely with my other C# code. It is a very nice solution.

I do not want to use Python because a) I lose interop with my other .NET code and b) C# is a more productive language especially for larger projects.

There has been some friction around the interop boundary but nothing major. For the most part, dynamic just works. Performance does not matter because the interop calls are only to interact with the TF object model. The actual compute-heavy load is carried by the C code inside of TF. The Python libraries themselves are only (nice) bindings to the C code plus convenience logic around them.

I also can use other Python libraries that use Python TF such as Keras. If I were to use .NET native bindings to TF I would lose the entire TF ecosystem which is a killer.

A pain point has been version compatibility of TF, Python, Pythonnet and .NET Core. It took a few attempts to find a working combination. For now, I refuse to upgrade in order to avoid having to do that experimentation again.

Another pain point was finalization performance. Finalizing a Pythonnet object seems to be rather heavy and single-threaded. I was forced to eagerly dispose a few objects in order to not be limited by finalization too much. (This is very analogous to COM objects.) I am also initiating a full GC every few seconds eagerly so that finalization can happen in parallel with the main program. I don't fully recall why this was helpful but it was.

I don't know what the .NET team could do to make this better. It's quite awesome the way it is. Maybe reach out to the Pythonnet team and ask them?

lostmsu commented 4 years ago

@GSPP I am a maintainer of Python.NET and also the author of Gradient, which is basically a generated wrapper for the TensorFlow Python API, built on top of Python.NET. It avoids most of the dynamic usage and lets you subclass TensorFlow types. We are working hard right now to publish the first release candidate - it should be out in a week or two. The latest preview is very capable though, if you're interested. We have samples for GPT-2 and RL in Unity.

rubo commented 4 years ago

Which language/runtime/platform are you trying to interop with. If there is a public link describing the API/ABI etc, please reference it.

Objective-C, iOS.

From 1 to 10, where 10 is hardest, how hard was it to complete?

8

What level of interop is required?

Need to invoke an Objective-C function with va_list (variadic arguments) from C#.

While I was able to do some magic to make it work on ARM64 (real devices), it is still a problem on x86_64 machines (iOS simulator). Actually, this is a known problem in the Xamarin world when using some of the iOS localization APIs. Here is a discussion from a few years ago with @rolfbjarne and @spouliot from the Xamarin.iOS team.

stephen-hawley commented 4 years ago

Swift interop. Specifically:

john-h-k commented 4 years ago

@stephen-hawley

It might be my lack of swift knowledge, but I'm not sure what you mean by

> the ability to pinvoke into/from the self register (or argument, depending on the CPU)

Do you just mean supporting the Swift equivalent to thiscall so you can call instance methods?

stephen-hawley commented 4 years ago

In older versions of Swift, if you had, say, a class like this:

public class Useless {
    public init () { }
    public func performFabulousTrick (where: City) { }
}

When I call Useless.performFabulousTrick, the implicit self (this) argument gets prepended to the argument list, as in many language implementations. In Swift 4 (IIRC), they switched to using a dedicated register on most ABIs for the self argument, to help cut down on register juggling. For closures, the self register is also used to hold a context block that contains unbound variable references.

Ultimately, I'd like to be able to (1) pinvoke into these methods directly without having to write wrapper methods, and (2) provide a delegate to Swift from C# that can be correctly called from Swift.

I would like to be able to, for example, build a Swift protocol witness table (essentially a vtable) from C# delegates, pass it into Swift, and have Swift call the C# methods.
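A hedged sketch of that idea in C#: the "witness table" is modeled as a struct of function pointers filled in via Marshal.GetFunctionPointerForDelegate (all names are invented, and a real Swift witness table has a specific ABI layout this ignores):

```csharp
using System;
using System.Runtime.InteropServices;

// Hypothetical "witness table": just a struct of function pointers.
[StructLayout(LayoutKind.Sequential)]
struct WitnessTable
{
    public IntPtr PerformFabulousTrick;
}

class WitnessDemo
{
    [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
    delegate int TrickDelegate(int cityId);

    // Keep a GC root to the delegate: the function pointer is only valid
    // while the delegate instance is alive.
    static readonly TrickDelegate Trick = cityId => cityId * 2;

    static void Main()
    {
        var table = new WitnessTable
        {
            PerformFabulousTrick = Marshal.GetFunctionPointerForDelegate(Trick)
        };

        // Round-trip through the raw pointer, as native code would call it.
        var fn = Marshal.GetDelegateForFunctionPointer<TrickDelegate>(
            table.PerformFabulousTrick);
        Console.WriteLine(fn(21)); // 42
    }
}
```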

rolfbjarne commented 4 years ago

I'm one of the principal authors and maintainers of Xamarin.iOS/Xamarin.Mac

  • Which language/runtime/platform are you trying to interop with. If there is a public link describing the API/ABI etc, please reference it.

C/Objective-C (specifically on Apple platforms: iOS, tvOS, watchOS, macOS)

  • What stage are you at in the interop? Planning/designing, implementing, stuck, completed, abandoned.

Completed.

  • From 1 to 10, where 10 is hardest, how hard was it to complete?

Where is "can't do it, have to resort to assembly code" on this scale? 11?

What level of interop is required?

  • Passing data from YYYY to XXXX or vice versa.

Both

  • Calling into methods in XXXX from YYYY or vice versa.

Both

  • What complications are there in terms of calling conventions, pinning memory, threads?

Calling conventions:

  • What is required in terms of function pointers / delegate-like capabilities?

We have to be able to pass function pointers to C/Objective-C.

  • Passing object references from XXXX to YYYY or vice versa.

No, only value types. We pass around IntPtrs representing instances of Objective-C types.

  • Does the interop involve reentrancy, e.g. YYYY -> XXXX -> YYYY?

Yes.

  • What is the lifetime management of those objects, e.g. GC, reference counted?

Objective-C is reference counted, so we do the same.

  • How many objects/interfaces/methods are involved?

Pretty much the entire Objective-C API as provided by Apple in iOS/tvOS/watchOS/macOS. A very rough count says ~6000 classes.

  • Is the interop layer manually crafted, or automatically generated through some tooling?

Both: a lot of the code is generated, but some parts are manual as well.

  • Is all the info statically defined up front (e.g. metadata) or does it involve dynamism, e.g. IDispatch or similar?

All static.

  • How frequently do the calls need to be made (calls / second)?

As frequently as possible. Customers tend to find ways to hit the limit... I believe this is around tens/hundreds of thousands of calls per second for the simplest variants now.

  • What is the typical size of data being transferred (bytes / API call)?

All sizes, from parameter-less calls to multi-MB data arrays.

  • What memory patterns need to be supported, e.g. streams, structs, unions etc?

Structs, unions and raw memory for the most part.

  • How are errors handled?

Objective-C usually uses a mix of error codes and out parameters which represent error objects. There's nothing special about these, just normal P/Invokes.

However, Objective-C also has exceptions, which makes things quite complicated. Luckily Apple recommends not using them, so they're somewhat rare (but that doesn't mean Apple doesn't use them sometimes too). Objective-C exceptions are just like C++ exceptions, and we handle them using custom assembly code to catch these Objective-C exceptions upon return to managed code after calling a P/Invoke, and then we throw a managed exception instead.

We also do the reverse: if we're returning to native code after native code has called managed code (through a managed function pointer for instance), we throw an Objective-C exception if managed code threw an exception.

* Is there a notion of exceptions, error codes, some other mechanism?

Both, see above.

* What needs to happen on one of those errors, how does the other side know what is safe vs corrupted?

According to Apple's documentation, Objective-C exceptions are not safe. We try to do as well as we can by converting them into managed exceptions that can be handled by managed code.

  • How is testing done for interoperability scenarios?

Mostly unit tests.

  • What would make interoperability easier?

tannergooding commented 4 years ago

> There's no C# support for passing SIMD vectors in SIMD registers as parameters (mono/mono#17868). This makes it impossible to create a P/Invoke to such functions, so we had to create a wrapper C function that translates between normal arguments and SIMD arguments. The downside is that this totally defeats the purpose of using SIMD vectors in the first place (performance).

@rolfbjarne, for these two the new Vector128<T> and Vector256<T> types (introduced in .NET Core 3.1) correspond to the __m128 and __m256 types in the relevant System V ABI (and the Windows ABI). Barring any known bugs, this should largely just work and we have a set of tests validating that this works: https://github.com/dotnet/runtime/tree/master/src/tests/Interop/PInvoke/Generics

> Some parameters must be 16-byte aligned (certain native methods take a pointer to allocated memory, and that memory must be 16-byte aligned). There's no way to specify this in a P/Invoke, which means we have to do the alignment ourselves.

At least on the stack, these types are 16-byte aligned (and there is partial support for 32-byte alignment for the latter). On the heap, the GC still won't align above 8 bytes, but there is an approved GC.AllocateArray overload that would allow you to enforce alignment for data on the Pinned Object Heap: https://github.com/dotnet/runtime/issues/27146; it just didn't make it in for .NET 5
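For reference, the manual alignment workaround mentioned above usually amounts to over-allocating and rounding the pointer up (a sketch, not Xamarin's actual code):

```csharp
using System;
using System.Runtime.InteropServices;

class AlignedAlloc
{
    static void Main()
    {
        const int size = 256;
        const int alignment = 16;

        // Over-allocate by (alignment - 1) bytes, then round the address up
        // to the next multiple of `alignment`.
        IntPtr raw = Marshal.AllocHGlobal(size + alignment - 1);
        long aligned = ((long)raw + (alignment - 1)) & ~(long)(alignment - 1);

        Console.WriteLine(aligned % alignment); // 0

        // Free with the ORIGINAL pointer, never the aligned one.
        Marshal.FreeHGlobal(raw);
    }
}
```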

rolfbjarne commented 4 years ago

@tannergooding

> @rolfbjarne, for these two the new Vector128<T> and Vector256<T> types (introduced in .NET Core 3.1) correspond to the __m128 and __m256 types in the relevant System V ABI (and the Windows ABI). Barring any known bugs, this should largely just work and we have a set of tests validating that this works: https://github.com/dotnet/runtime/tree/master/src/tests/Interop/PInvoke/Generics

Looking back at this, I might have explained myself badly. Our particular problem comes from Clang's __ext_vector_type__ attribute (of which there's very little documentation). The calling convention changes if a vector is declared using this attribute: https://gist.github.com/rolfbjarne/bff8584f19c0e83d9e194d768b6a22d6

tannergooding commented 4 years ago

Ah, I see. This is for a clang specific modifier: http://clang.llvm.org/docs/LanguageExtensions.html#vectors-and-extended-vectors

Do you know how it differs from __m128 or is it functionally treated the same when passing?

rolfbjarne commented 4 years ago

> Do you know how it differs from __m128 or is it functionally treated the same when passing?

I'm sorry to say I have no idea.

elachlan commented 4 years ago

We wrote a wrapper around a Go library compiled to C using cgo. https://github.com/expert1-pty-ltd/cloudsql-proxy

It was difficult, but we hadn't done much pinvoke before. Most of the work was done on the GO side to add functions to make it easier to manage.

For us, speed is incredibly important since it's used at "login": we make a function call and it starts the proxy, which then triggers a callback. We are also trying to target .NET Standard 2.0 due to our use of .NET Framework for the dependent application.

Documentation on P/Invoke seemed to be lacking a bit. It would have been easier if the documentation had a bit more in it to fill in the blanks; we mostly had to look at Stack Overflow articles to work out some of the issues.

AndresLuga commented 4 years ago

Does MS Office interop fit this issue?

We have used Visual Studio Tools for Office (VSTO) for Excel add-ons, some of them use WinForms controls on a TaskPane. It would be great if VSTO projects could be upgraded to .Net 6+.

MattBolitho commented 4 years ago

Which language/runtime/platform are you trying to interop with. If there is a public link describing the API/ABI etc, please reference it.

Usually C/C++ perhaps Rust in future.

What stage are you at in the interop? Planning/designing, implementing, stuck, completed, abandoned.

Mostly implemented but with extensions to make in future.

From 1 to 10, where 10 is hardest, how hard was it to complete?

3-5. It's easy enough to get basic interop done. It's more of a problem when the native side has a lot of functionality to wrap up nicely for the C# consumer.

What level of interop is required? Passing data from YYYY to XXXX or vice versa.

Yes.

Marshalling of blittable generics is going to be pretty helpful.

Calling into methods in XXXX from YYYY or vice versa.

Yes.

What complications are there in terms of calling conventions, pinning memory, threads?

Calling conventions are easy enough to map. As others have mentioned, variadic function and __vectorcall support would be nice.

What is required in terms of function pointers / delegate-like capabilities?

Strongly typed callbacks through function pointers would be incredibly handy. It looks like these are already on their way and they look nice :)

Passing object references from XXXX to YYYY or vice versa.

Yes.

Does the interop involve reentrancy, e.g. YYYY -> XXXX -> YYYY?

No.

What is the lifetime management of those objects, e.g. GC, reference counted?

Usually GC. The object's memory is pinned in C# and passed down to the native code. On the native side, anything that isn't too big to fit onto the stack is allocated there.

How many objects/interfaces/methods are involved?

Usually only a handful. Although there's no reason to assume this couldn't grow to be a lot of methods.

Is the interop layer manually crafted, or automatically generated through some tooling?

Often manually crafted to be able to meet the needs of the data types on either side.

Is all the info statically defined up front (e.g. metadata) or does it involve dynamism, e.g. IDispatch or similar?

Statically defined.

How frequently do the calls need to be made (calls / second)?

This varies depending on the use case and domain. Very frequently in the worst case, perhaps a few thousand or ten thousand times per second. Of course it's best to assume as often as physically possible :)

What is the typical size of data being transferred (bytes / API call)?

It's hard to say as it can vary so much. Pointers are often used to avoid passing large structures.

What memory patterns need to be supported, e.g. streams, structs, unions etc?

Definitely structs but streams would be very interesting!

How are errors handled? Is there a notion of exceptions, error codes, some other mechanism? What needs to happen on one of those errors, how does the other side know what is safe vs corrupted?

Return codes.

What would make interoperability easier?

Are there languages/runtimes that are emerging that are of interest? What makes them interesting?

Definitely Rust due to the high performance and safety of it.

anon17 commented 4 years ago

C/C++ interop. One complication was Unicode: the C code uses wchar_t for Unicode on Windows, but wchar_t is UTF-32 on Linux, and .NET can't marshal UTF-32 in P/Invoke. If they switch to UTF-8, they need to refactor the code to use tchar and write a compatibility layer to switch between strlen and wcslen; if they try UTF-16 on Linux, glibc doesn't provide a strlen for UTF-16. Another area is stream processing, specifically filtering like compression and encryption: can you come up with an API for it that can be consumed from both C and .NET? zlib's z_stream is pretty nice, but it requires pointer arithmetic that .NET can't easily do.
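To illustrate the wchar_t mismatch: wchar_t is 2 bytes (UTF-16) on Windows, which the built-in Unicode marshaling handles, but 4 bytes (UTF-32) on Linux, where a native wide string has to be built by hand. A sketch, with the helper name invented:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Text;

class WideStrings
{
    // Builds a native wchar_t* for the current platform. The caller frees
    // the result with Marshal.FreeHGlobal. Illustrative, not exhaustive.
    static IntPtr ToNativeWide(string s)
    {
        if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
            return Marshal.StringToHGlobalUni(s);   // UTF-16, 2-byte wchar_t

        byte[] utf32 = Encoding.UTF32.GetBytes(s);  // 4-byte wchar_t elsewhere
        IntPtr p = Marshal.AllocHGlobal(utf32.Length + 4);
        Marshal.Copy(utf32, 0, p, utf32.Length);
        Marshal.WriteInt32(p, utf32.Length, 0);     // 4-byte null terminator
        return p;
    }

    static void Main()
    {
        IntPtr p = ToNativeWide("hi");
        Console.WriteLine(p != IntPtr.Zero); // True
        Marshal.FreeHGlobal(p);
    }
}
```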

mstroppel commented 4 years ago

Which language/runtime/platform are you trying to interop with. If there is a public link describing the API/ABI etc, please reference it.

We use C++ modules from C#. The C++ modules:

Further, we use a SWIG-like generator to generate "interop code" from C++ headers. The custom-made generator was introduced for Compact Framework (Windows CE). Right now we could theoretically switch back to SWIG, but this would not add any business value to our product at the moment.

What stage are you at in the interop? Planning/designing, implementing, stuck, completed, abandoned.

From 1 to 10, where 10 is hardest, how hard was it to complete?

~3

What level of interop is required? Passing data from YYYY to XXXX or vice versa.

We have to do both. Both are realized via a struct:

This rather complex approach was chosen to avoid a lot of .GetXXX() interop calls from C# to C++.
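That batching idea can be sketched like this: one interop call fills a blittable struct, and C# properties read the cached copy instead of making a GetXXX() interop call each (all names are hypothetical, and the native call is simulated here):

```csharp
using System;
using System.Runtime.InteropServices;

// Hypothetical blittable status struct filled by ONE native call,
// replacing several per-property GetXXX() P/Invokes.
[StructLayout(LayoutKind.Sequential)]
struct ScaleStatus
{
    public int WeightGrams;
    public int ErrorCode;
    public int IsStable;   // bool kept as int so the struct stays blittable
}

class Scale
{
    private ScaleStatus _status;

    // One interop call per update cycle; simulated here.
    public void Refresh() =>
        _status = new ScaleStatus { WeightGrams = 12500, ErrorCode = 0, IsStable = 1 };

    // Properties read the cached struct: no interop call per property.
    public int WeightGrams => _status.WeightGrams;
    public bool IsStable => _status.IsStable != 0;

    static void Main()
    {
        var scale = new Scale();
        scale.Refresh();
        Console.WriteLine(scale.WeightGrams); // 12500
        Console.WriteLine(scale.IsStable);    // True
    }
}
```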

Calling into methods in XXXX from YYYY or vice versa.

Calling from C++ to C# was "worked around" by the struct mentioned above.

What complications are there in terms of calling conventions, pinning memory, threads?

Mostly solved by SWIG

What is required in terms of function pointers / delegate-like capabilities?

Existing functionality is good.

Passing object references from XXXX to YYYY or vice versa.

See above: using a struct adds more code but improves performance. Would be good to have a simple way to pass multiple "properties" from a C++ GetXXX() method to a C# XXX { get; } property.

Does the interop involve reentrancy, e.g. YYYY -> XXXX -> YYYY?

No.

What is the lifetime management of those objects, e.g. GC, reference counted?

Done by SWIG

How many objects/interfaces/methods are involved?

Mostly we have one "manager" which acts as facade. We have 2 projects = 2 managers. Each involves around 3 data or configuration classes. Methods (sum of all in all involved classes) are around ~300 to 500.

Is the interop layer manually crafted, or automatically generated through some tooling?

We are using SWIG.

Is all the info statically defined up front (e.g. metadata) or does it involve dynamism, e.g. IDispatch or similar?

Statically up front.

How frequently do the calls need to be made (calls / second)?

We are querying weight values from a scale. Currently we update at about 20 Hz with up to 4 scales in parallel. In the future it will go up to 100 Hz.

What is the typical size of data being transferred (bytes / API call)?

The struct has a size of ~350 bytes

What memory patterns need to be supported, e.g. streams, structs, unions etc?

Structs, classes.

How are errors handled? Is there a notion of exceptions, error codes, some other mechanism?

Purely by return value, due to a lack of exception support in other products where the C++ library is used.

What needs to happen on one of those errors, how does the other side know what is safe vs corrupted?

Resend the command.

How is testing done for interoperability scenarios?

No unit or integration tests done. Purely covered by End-To-End tests.

How are diagnostics done for interoperability scenarios?

Using logging. That's the only position where C++ calls into C# (into log4net).

What would make interoperability easier?

Better support to work with C++ classes.

Are there languages/runtimes that are emerging that are of interest? What makes them interesting?

I don't see something for us.

nxrighthere commented 4 years ago

The most useful interop feature for our project would be the ability to transparently create blittable "twin" structures for managed classes using attributes with explicit layout, so that only specific fields are passed and converted back and forth to unmanaged code.

A rough example:

[ClassLayout(LayoutKind.Explicit, Size = 8)] // Transparently makes a blittable "twin" structure for the class
public partial class Actor : IEquatable<Actor> { // Remains a class in managed code with all benefits
    private IntPtr pointer; // Passed back and forth to unmanaged code in a P/Invoke call and managed code
    private Component component; // Ignored, only relevant in managed code
    private string name; // Ignored
    ...
}

This would allow us to eliminate the intermediate conversion from a blittable structure to the appropriate managed classes, and instead let .NET handle such cases.

AaronRobinsonMSFT commented 4 years ago

@nxrighthere This sounds like something we are planning already. Thanks for the feedback!

/cc @jkoritzinsky @elinor-fung

deangoddard commented 4 years ago

I think of Interop as a higher level of order and function, stacked on top of the very important low level interconnection issues between managed/unmanaged code as discussed above. So without dismissing the importance of the technical issues of 'how', I'd like to share a dream towards which we can achieve a common outcome.

I reckon if we start to think of any executable (application) being a service, that can be acquired from any authorised point (WAN/LAN), then Interop becomes the ODBC or SaaS for the desktop environment. Developers will no longer have to reinvent or find a third party framework, when they could just use another application that's already purpose built. For example emails, if say Outlook is already available, why redo your own emailing system? You won't need to know about IMAP vs SMTP/POP3 etc., Outlook (or equivalent) will handle the technical side of it.

Just a thought.

FiniteReality commented 4 years ago

> I think of Interop as a higher level of order and function, stacked on top of the very important low level interconnection issues between managed/unmanaged code as discussed above. So without dismissing the importance of the technical issues of 'how', I'd like to share a dream towards which we can achieve a common outcome.
>
> I reckon if we start to think of any executable (application) being a service, that can be acquired from any authorised point (WAN/LAN), then Interop becomes the ODBC or SaaS for the desktop environment. Developers will no longer have to reinvent or find a third party framework, when they could just use another application that's already purpose built. For example emails, if say Outlook is already available, why redo your own emailing system? You won't need to know about IMAP vs SMTP/POP3 etc., Outlook (or equivalent) will handle the technical side of it.
>
> Just a thought.

This to me sounds a lot more like RPC than interop, since you're talking about applications communicating with other applications rather than applications communicating with libraries.

AaronRobinsonMSFT commented 4 years ago

Thank you to everyone who has offered feedback on the survey over the past several weeks, even those who only used the reaction emoji. We are closing this issue as it has been 3 weeks since we publicly announced it via the .NET blog.

The majority of responses have been made right here on GitHub, which is great! This level of community feedback has allowed us to confirm some assumptions, but it has also surprised us about what we thought were priorities.

Some highlights that have surprised us:

Some things that this survey has confirmed for us are:

“Wait a minute,” you may be saying, “what about X, Y, and Z?” Yes, the above two lists are not exhaustive, and there are other points made in this issue and offline which aren't on them. This is not the end of the conversation, only the end of this specific survey issue. We still want to hear from all of you, and we also want to know when we get it wrong. That is why, as we continue planning the next and future releases of .NET, the Interop team and all teams in .NET will continue posting survey issues like this one. Some will be entirely open-ended questions, while others will be more quantitative surveys like the recent Native AOT one. With either approach, our plan is to take in responses, design and plan, and then validate our interpretation/plan with the community. The “design and plan” stage is where we are with Interop right now. In the not too distant future, you can expect to see a public plan for Interop with a request for comments and feedback on direction.

Interop isn’t a trivial space and the feedback above has only confirmed it is even more varied in the wild than we expected. Please continue the feedback and believe me when I say we do and will consider all feedback as we plan for the future of .NET Interop.

jmp75 commented 4 years ago

I missed the original survey blog post and only came across this thread later. I "fell" into native interop a decade or so ago and thought I should briefly share my experience in the hope it helps. I have really under-promoted some of this software over the years; some of it may be superseded now, though probably not all. I'll be concise in text here and mostly provide links to software assets, in the hope they are more telling for @elinor-fung and @AaronRobinsonMSFT.

Two main projects come to mind.

First, my initial foray into interop was as a co-contributor to bidirectional R and .NET interop: R.NET and rClr. I refactored a utility for native dynamic interop to avoid platform-specific and static code, at a time when Mono was the only option on Linux.

Second, since 2013 I've led the development of a C++ scientific software stack that needs bindings for access from R, Python, Matlab, .NET, etc. For background, this stack now powers a 7-day flow forecasting service. The main interop technical characteristics of the stack:

Some of the open source software stemming from this stack:

Lastly and recently (without .NET in the mix), I had to interactively test and port a Fortran 90+ codebase to Python. I found f90wrap, a Fortran-to-Python interface generator with derived-type support, which was a blessing for reverse engineering and testing the Fortran codebase from computational notebooks.
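The multi-language binding pattern described above (one C-compatible API, wrapped from each host language) can be sketched in a few lines of Python with ctypes. The library and function here are illustrative stand-ins, not taken from the actual stack:

```python
import ctypes
import ctypes.util

# Locate and load a shared library exposing a plain C ABI.  A real
# binding would load the stack's own library; the C math library
# stands in here as an example.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declaring argument and return types up front makes the call safe and
# portable -- the same discipline a P/Invoke signature enforces in .NET.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(9.0))  # → 3.0
```

Each host language in such a stack repeats this same declare-then-call pattern against the one C API: R via `.C`/`.Call`, .NET via `DllImport`, Matlab via `loadlibrary`.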

jazzdelightsme commented 3 years ago

C++/Windows/COM and COM-like-but-not-quite-full-COM (dbgeng API), and PowerShell. The Windows Debugger is a native application, which supports plugins ("debugger extensions"), and I have written an extension in C# (https://github.com/Microsoft/DbgShell).

I have a complete, working project that runs on the full desktop .NET Framework; but each time I try to port to .NET Core (now just ".NET"), I get stuck. I have most recently attempted porting to .NET 5.0.

For the original full desktop .NET Framework, that was a few years ago... but let's call it a 7. The old CLR hosting APIs are/were pretty gnarly, and it required a lot of "advanced" stuff, and there wasn't much documentation or samples.

For the current attempt to port to .NET 5.0... it's at least a 9, and it appears I will not be able to complete the port with the same feature set as before at all, because I STILL will not be able to unload my code (see #45285).

"Full", ha. Data of all sorts must be passed back and forth from the native app and the C# code; callbacks in both directions; cooperation/coordination of threads and locking; re-entrancy; zillions of interfaces.

The interop layer is all hand-crafted. I would love for it to be generated somehow, and it seems like it ought to be possible, but there are just too many things that require custom handling to work right.

The current code does not involve much dynamism, but the relatively new DbgModel API has a pretty high capacity for dynamism (it was designed to work well with, and model, JavaScript), and I would like to take advantage of that.

When I started the project, I took advantage of COM Interop to deal with all the DbgEng interfaces (which are sort of "COM-lite"). It worked okay for a lot of it... but in the end I ran into problems that required me to drop down to "native" interop, using C++/CLI as a bridge layer, and handling COM stuff completely myself. A little more info here. If C#'s COM interop had been a little more flexible, it might have saved me a TON of time and work that had to be sunk into a C++/CLI layer instead (which was the first blocking issue to get to .NET Core; it has taken years to get C++/CLI support back).

In general, I am mostly happy with the direction of interop support in C#, and I'm very excited to try some of the newer features to make my code better. But I can't try those newer features until I get onto .NET 5, and there are some serious impediments to getting there, mostly in the area of CLR hosting. For example, we have AssemblyLoadContext instead of AppDomain for unloadability, but I can't specify an ALC to use when hosting the CLR and calling into my managed code, a tragic case of "you can't get there from here". And even if I could, code that I depend on (the PowerShell runtime) would immediately and accidentally "fall out" of that ALC and into the default ALC (#45285 again).