NETMF / netmf-interpreter

.NET Micro Framework Interpreter
http://netmf.github.io/netmf-interpreter/

Update STM32F4 HAL to include latest official HAL from ST #405

Open josesimoes opened 8 years ago

josesimoes commented 8 years ago

Although working, the existing headers for the STM32F4 are outdated. This is a problem when creating new solutions based on STM32F4 devices other than the '407. "Marrying" them with current CMSIS headers and code can also be difficult.

I have a branch with the latest official HAL from ST. The HAL code files were updated where required (mostly UART, SPI and PWM, because there are differences in what modules are available on each device). The HAL projects were cleaned up. The templates were updated. The platform_selector.h in the solutions was updated to use the new device identifier. A new file with the HAL configuration was added to the solutions.

josesimoes commented 8 years ago

@smaillet-ms at the risk of asking the obvious: assuming the approach implemented in Llilum was designed learning from what might be wrong/not so good in NETMF, can it be implemented in NETMF too? Or is that too complicated or does it require a great deal of changes?

smaillet-ms commented 8 years ago

Yes, Llilum was built with full awareness of things like mBed and CMSIS, which didn't exist when NETMF was created. The goal was to minimize the amount of HAL-like code we define or standardize around. One of the challenges with NETMF is the rather blurry lines between the HAL, PAL, CPU and CLR code layering. With Llilum things are clearer because of the C# vs. C/C++ split. So, what I think we need to do with NETMF is migrate in stages:

  1. Shift to where we have a standard CMSIS-PAL
    That is, provide a single common NETMF PAL that links against a BSP library for a board that implements the CMSIS-CORE and CMSIS-Driver APIs for that device. This provides clean isolation between the HAL and PAL and gets NETMF out of defining or controlling the startup sequence, which is a source of a great deal of confusion and frustration when porting to new SOC families.
  2. Unify the NETMF PAL with the Llilum managed interop layer APIs
    Unifying the top level APIs allows us to have a common native BSP used for both runtimes. This helps to make migration a lot simpler when you are ready to use Llilum.

The biggest change for NETMF will be at the HAL level; however, by leveraging the existing CMSIS support from silicon vendors, the SOC specific parts of that are already there. The major work is in the board specific configuration and drivers for any peripherals on the board that aren't part of the SOC itself. Thus this transition is focused more on what you don't have to write than what you do. In that respect there's not likely much of a migration path for those using the current "do it however you like" model. Rather it would be like starting over, but, hopefully, it would be much easier since you'd be leveraging the existing library support instead of porting it with lots of modifications and tweaks to fit the old NETMF HAL API.

josesimoes commented 8 years ago

I'm not familiar with the PAL layer; I've only dipped my toes into the HAL. I think I understand what you are suggesting. Can you "draw" a picture of what you are thinking? Maybe a folder structure with the proposed layout to implement such changes... that would help me fully understand your ideas on this.

smaillet-ms commented 8 years ago

The best picture available is this one from the Wiki. The general idea is to Create a PAL that uses CMSIS as the HAL API instead of the current NETMF HAL. This will require some changes to the CLR code that has direct calls to HAL APIs so that the CLR can remain completely dependent on the PAL API only. That will allow existing ports to continue doing what they are doing with minimal change and for new systems to spin up based on the CMSIS libraries.

The folder layout isn't planned to be any different from what we have now. I imagine the new universal CMSIS-PAL could be located at DeviceCode\Targets\OS\CMSIS-PAL. This would likely just reference the Targets\OS\CMSIS-RTX support already present.

It might be useful to further adjust the %SPOCLIENT%\CMSIS folder to a layout that matches the standard ARM tooling CMSIS-Pack repository. Thus the current folder would be moved to %SPOCLIENT%\Pack\ARM\CMSIS.

Ideally we could use the pack folders already installed by the MDK tools, if you have them installed, with an environment variable pointing to the pack root (i.e. CMSIS_PACK_ROOT). Unfortunately, we need to add the NETMF project files into the picture so that we can actually build the code. Thus we either completely replicate the pack folder in the NETMF tree (basically what we are doing now) OR we change our project files to reference the source files from their pack install location. For example, in the dotnetmf.proj file for RTX we'd have something like this: <Compile Include="%CMSIS_PACK_ROOT%\ARM\CMSIS\4.3.0\CMSIS\RTOS\RTX\SRC\HAL_CM.c" />

Thus the source can remain intact and isolated from the NETMF build requirements. (e.g. reduce the copying and tweaking needed to leverage the existing code for NETMF.)
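As a sketch of what such a "project file in the NETMF tree, sources in the pack tree" arrangement could look like, here is an illustrative MSBuild-style fragment. The fallback path, the second file name, and the property fallback logic are assumptions added for illustration; only the HAL_CM.c reference comes from the discussion above:

```xml
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- Pack root comes from the environment; the fallback path is illustrative. -->
  <PropertyGroup>
    <CMSIS_PACK_ROOT Condition="'$(CMSIS_PACK_ROOT)' == ''">C:\Keil_v5\ARM\PACK</CMSIS_PACK_ROOT>
  </PropertyGroup>
  <ItemGroup>
    <!-- Sources stay in the pack tree; only this project file lives in NETMF. -->
    <Compile Include="$(CMSIS_PACK_ROOT)\ARM\CMSIS\4.3.0\CMSIS\RTOS\RTX\SRC\HAL_CM.c" />
    <Compile Include="$(CMSIS_PACK_ROOT)\ARM\CMSIS\4.3.0\CMSIS\RTOS\RTX\SRC\rt_Task.c" />
  </ItemGroup>
</Project>
```

Because the pack version is part of the path, a newer pack can be installed side by side without touching projects that pin the old one.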

Further, using the standard layout allows using the standard pack mechanism to download and install packs. While we don't want to tie ourselves to only supporting CMSIS, we should be able to support it as it is intended to be used without needing significant "porting" efforts. As mBed 3.0 evolves we can look into the "yotta" system they are using. For managed code and our own code we can use NuGet. With some time and experience with those mechanisms, we should be able to formulate an abstraction that enables a NuGet-like development experience incorporating multiple packaging technologies. (But that's a ways out yet - baby steps...) For now I'd be happy with downloading and installing the packs manually. As long as we can use them without modification we've solved the major problem; the rest is about usability and convenience.

josesimoes commented 8 years ago

If the goal is to make NETMF "hardware agnostic" the current DeviceCode should live under each Solution folder, correct?

Having an RTOS "under" the NETMF - or should I say the CLR? - is probably a good idea. I mean always, not only because of the network. It would enable better power management, async drivers, non-blocking tasks, etc.

Should we follow a completely "mBeddish" approach here and stick to that architecture? From what I can see on their roadmap for v3 they'll have a lot of good stuff coming that NETMF could benefit from.

smaillet-ms commented 8 years ago

Device code stays where it is, if for no other reason than to not break existing users. Additionally, it should stay there because the intent of that model was that you would likely have the same peripherals hanging off several chips (or even on the same SOC within a family). That concept still applies and, for any platforms not using CMSIS, we still need a model for sharing common code, so that's where it goes.

Forget mBed for the moment: v3 is still pre-release, and v2.0 is a completely different animal from CMSIS and mBed OS 3.0. CMSIS is real now, stable, with LOTS of silicon vendor support. Once we've got CMSIS worked out we can loop back to an mBed PAL or any other "common" layer for other architectures, like Zephyr. A key point to remember is that NETMF is CPU and software platform neutral.

The CLR doesn't require an RTOS underneath and we won't be adding a requirement for one. It does its own threading, which, due to the nature of the interpreter, is much more efficient than a native code kernel. (The memory overhead for storing a thread is quite a bit smaller for the interpreter as it doesn't need to save any CPU registers or state, etc...)

josesimoes commented 8 years ago

I believe it is very important to have all this working with both MDK and GCC, so please keep that in mind.

All in all, I understand that you have this pretty much figured out, so I don't see that any further input is required at this time...

josesimoes commented 8 years ago

Regarding the hardware initialization: I'm not sure about the benefits of having the scatter file parsed/generated from those xml files in the solution folders. Unless I'm missing something, they don't seem to add any value being generated this way. It would be much simpler, more intuitive and easier to understand if they were plain, simple ld files that are edited as needed and passed to the linker.

smaillet-ms commented 8 years ago

They are xml and generate the output file because not all of the tools use ld files; in fact the ARM tools use scatter files with a completely different syntax - GCC uses ld scripts. Other tool chains use other solutions. The use of XML helps abstract all of that and enables tooling to parse them more simply. (e.g. we don't need a unique link/locate language parser for every tool-chain supported; just an XML parser will do.) While I think the usefulness of the XML is probably coming to an end, unless/until the entire build system is re-worked, I don't see a lot of value in re-working just that.

josesimoes commented 8 years ago

Understood about the loader files. I was - wrongly - under the impression that the mdk and the gcc ld files were equivalent and just had different names and extensions.

josesimoes commented 8 years ago

@smaillet-ms could you please add a new branch to submit PRs for this?

josesimoes commented 8 years ago

@smaillet-ms what should be the naming convention/layout for the "new" DeviceCode folder?

Something like... DeviceCode\Targets\Native\STM32F0_CMSIS DeviceCode\Targets\Native\STM32F4_CMSIS

Or... DeviceCode\Targets\Native\CMSIS\STM32F0 DeviceCode\Targets\Native\CMSIS\STM32F4

smaillet-ms commented 8 years ago

Just: DeviceCode\Targets\Native\CMSIS

The whole point of building a PAL that relies on CMSIS APIs is to make it completely SOC and board neutral for Cortex-M based devices.

josesimoes commented 8 years ago

@smaillet-ms You've lost me there...

I understand that you are referring to the PAL, but we still need a place for each platform HAL, right?

ST provides HAL headers, source and CMSIS specific headers for each series (F0, F1, etc) and for each device inside the series (407xx, 410cx, 429xx, etc). All these files are supplied in separate packages for each series (F0, F1, etc).

How do you suggest we handle this?

josesimoes commented 8 years ago

Is the following enough to address your requirement of "getting NETMF out of defining or controlling the startup sequence":

smaillet-ms commented 8 years ago

Essentially, yes. Though it is important to note that, aside from implementing main() to initialize your PAL and calling ApplicationEntryPoint() to kick off the CLR, everything else is platform/HAL and tool dependent and not officially mandated in any way by NETMF. That said, for any given platform+tools there are likely common solutions that would be literally copy-pasted from one project to another - such cases should be standardized into a common library for that particular platform+tool (i.e. NETMF_CMSISvX.x_MDK_HAL.lib)

josesimoes commented 8 years ago

@smaillet-ms OK.

Still need a clarification about the folder structure for the HAL drivers and the PAL stuff. I get it that the PAL code should be under: DeviceCode\Targets\Native\CMSIS

But what about the PAL headers that are platform/series/device specific? And the HAL drivers that are platform/series/device specific too? In order to match the organization of the STM32 Cube packs, can we have something like this:

DeviceCode\Targets\ST\STM32F0xx_HAL_Driver DeviceCode\Targets\ST\STM32F4xx_HAL_Driver

DeviceCode\Targets\Native\CMSIS\ DeviceCode\Targets\Native\CMSIS\ST\STM32F0xx DeviceCode\Targets\Native\CMSIS\ST\STM32F4xx

This structure makes it easy to add new vendors/platforms/series/devices.

smaillet-ms commented 8 years ago

The CMSIS-PAL should be completely independent of the target; that's the point of it. It relies exclusively on the headers defined in CMSIS. As to how to lay out the code: the source code from third-party packs should not be in the NETMF source tree at all; it should go into the CMSIS pack tree, wherever that is installed. The only thing that goes into the NETMF tree is the NETMF project files needed to get the third-party code to build in our tree. There are two factors at play in this.

  1. We don't want to fill our tree with copies of code from multiple other providers
  2. We don't want to have to force our code/projects into a source tree provided by others. (e.g. if you already have a CMSIS-Pack folder with packs installed by some other tool like Keil's MDK, then we shouldn't go adding our own custom projects throughout that tree, and shouldn't ask developers to add them.)

This is the key break from the past - we need to simplify re-use of existing source and avoid the perpetual maintenance cycle of downloading, updating and testing lots of third-party code. The CMSIS-Pack model is designed to support multiple versions of the packs installed on a system at a time. When you reference the project code, the version is part of the path to it. Thus you can add support for the latest super chip in a family without breaking, or even requiring a single change in, any projects that use the current packs.

Thus for CMSIS based systems we want a tree layout in NETMF that matches the tree layout for the standard pack folder used by the tools leveraging CMSIS-Pack. In the NETMF tree the only thing that is checked in is the project and settings files needed for building the code that is located in some other folder. Ideally we'd be able to automate the creation of such projects from the information contained in the pack descriptions but for starters if we can get one done manually to understand what is needed we should be ok.

josesimoes commented 8 years ago

@smaillet-ms I have a suggestion regarding the HAL features "configuration". Currently a HAL feature (e.g. GPIO) is added to the solution by adding the corresponding project to the solution proj, or the stubs as an alternative. With CMSIS it's much easier to have a single project for the HAL and include every "feature" and code file to compile there. My suggestion is to have a HAL.settings file in the solution folder (one for TinyBooter and another for TinyCLR, because you probably want different feature sets). In this file the features to include are defined. Then it is up to the HAL proj to add the respective code file, or the stub if the feature is disabled.

I believe this makes it much easier when you are tweaking a solution, instead of having to add all those individual projects or the stubs.
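An illustrative sketch of what such a HAL.settings file might contain (the property names and true/false convention are invented for illustration; the single HAL project would translate each flag into compiling either the real driver source or its stub):

```xml
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- Feature switches consumed by the single HAL project -->
    <HAL_FEATURE_GPIO>true</HAL_FEATURE_GPIO>
    <HAL_FEATURE_SPI>true</HAL_FEATURE_SPI>
    <!-- Disabled feature: the HAL proj compiles the stub instead -->
    <HAL_FEATURE_PWM>false</HAL_FEATURE_PWM>
  </PropertyGroup>
</Project>
```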

smaillet-ms commented 8 years ago

Maybe I'm missing something but I don't see how your proposal changes the story... Ultimately the user has to choose which modules to include or not. CMSIS-pack has a means to express dependencies including API/interface dependencies (along the lines of - "I'm the network stack so I need at least one network interface conforming to API-XYZ to be useful, don't care what particular implementation").

josesimoes commented 8 years ago

It's about how easy it is to choose which features go into a solution. The current approach is to add individual projects and libcatproj entries to the TinyCLR proj. That doesn't work so well when you add CMSIS. Not to mention that it can be confusing and takes time. My suggestion is all about simplifying that process. I have this working now, but wanted to validate the idea before going all the way with it.

smaillet-ms commented 8 years ago

I'm still missing WHAT you are actually proposing. It seems to me you are moving where you have to add the projects (e.g. from TinyCLR.proj to some other proj) either way you still have to choose which ones you want in a given device image.

Putting them into a single "HAL" so the choice is done once doesn't work for all scenarios either. While it might cover the 90% case of a development board it doesn't help a real commercial platform that has to deal with trade-offs of functionality vs. hardware resource costs and constraints. There's no way to automate the decision making required for that. Someone has to choose to include libraries or not. Unfortunately, with the way the CLR is currently implemented that's not a true pay for play story in NETMF. (e.g. there are strong API dependencies at multiple levels, and the solution for that was to use stub libraries, thus deliberately including code that doesn't do anything useful and forcing the user to make an explicit choice which to use.)

I don't think a truly better solution lies in the HAL, but instead lies in the upper layers and removing the need for the stubs. Specifically, the CLR should have little or no dependencies on anything other than the most basic of facilities required for it to execute the interpreter loop. Everything else should be optional and added as needed. If we fix the "stubs" problem to deliver on a true "pay for play" model in NETMF, then the CMSIS problem becomes a no-brainer (e.g. CMSIS-PACK)

josesimoes commented 8 years ago

@smaillet-ms Would it be too much of a breaking change to rework the USB_DYNAMIC_CONFIGURATION struct? I'm asking because it looks overly complex and seems to have a lot of room for improvement:

  1. All the WinUSB, OS string and several parts of the OS descriptors are constants and should belong in the driver, not in the configuration (because they are not actually configurable).
  2. A lot of string lengths and such can be calculated, so the developer shouldn't have to do all that math and tweaking to get it working.
  3. There are a lot of USB descriptor parameters and configs that are unnecessarily exposed, making the configuration more complicated and possibly requiring more knowledge about all that.

I guess that structure made a lot of sense when it was designed, but it doesn't anymore. Especially with the WinUSB approach a lot of that is not even configurable anymore...

I suggest that the product name, manufacturer, VID and PID remain configurable. All the rest should go into the driver. As for the interface and endpoints I'm not sure... The two endpoints required for the debug interface probably should always be there (so should be left out of the configuration). But do we need to be able to configure other endpoints and interfaces?

cw2 commented 8 years ago

I suggest that the product name, manufacturer, VID and PID remain configurable.

I vote also for the device serial number (currently not used).

smaillet-ms commented 8 years ago

The design of the configuration is based on the USB specification for the USB descriptors. There's nothing in the configuration that belongs in the driver. The driver shouldn't care about what the USB configuration is. Keep in mind that the descriptors found in the sample solutions are NOT required of a NETMF device! The sample descriptors are used for USB based debugging but they are just one use of USB for NETMF. It is entirely plausible that the device implements a well known USB interface, such as USB-MIDI or USB-HID, etc... and doesn't even support debugging over USB. That is what the configuration data structures are all about. Creating them requires knowledge of USB Descriptors.

The configuration data structures are an in-memory representation of the raw USB descriptors. This format keeps the USB device side configuration code fairly simple. Ultimately I'd like to see this simplified for the developer as well, but altering the format used at runtime doesn't strike me as the way to go.

Instead of modifying the format, which would break existing BSPs, the preferred approach would be to use a different language to describe the data structures and let a tool do the mundane computations of string lengths etc. We actually have such a representation in the form of an XML file that can be pushed to a device via MFDeploy (if USB isn't used for debugging, or the device has explicitly allowed reconfiguring the USB descriptors). Unfortunately, our current build tooling doesn't use it for creating the default configuration information, nor does it support a simple notation for the common extended descriptors. Adding that to the XML schema and building a tool to generate the current C++ code file formats wouldn't be a huge effort for one person to take on.

josesimoes commented 8 years ago

I understand that the configuration format is based on the USB descriptors, and I also understand how USB descriptors work. I also get that the current configuration is for NETMF debugging only, that it isn't mandatory for a solution to work, and that a particular device could implement any USB device. The current USB PAL is able to send the descriptors that are in the configuration, and those can be any of the standard USB classes (HID, CDC, MSD, etc.). The thing is that this would be inconsequential, because there is no NETMF API supporting those. A board implementing any of those would have to add support to properly handle the specific requests and expose that through a proper API. The USB controller class, in my opinion, is too generic.

My point here is that there are a lot of details in USB configuration that are being exposed unnecessarily and that makes it confusing to use.

Adapting the current USB PAL to use ST USB CMSIS compliant library is difficult. As a matter of fact let me tell you that ST has USB CMSIS compliant libraries for USB devices (all classes) and USB hosts. Moreover the Cortex devices that are candidates for NETMF and include the USB feature have, at most, two available controllers.

Considering all that I would suggest rethinking the USB implementation along these lines:

How to accomplish this? We would have:

I believe that this would bring simplicity to the USB configuration and implementation. Not to mention that it helps in adding support for new devices/hosts that don't exist today. I can tell you that this is doable, not just theoretical: I have a working implementation of the USB debugger using the ST USB CMSIS device library. Although the official library doesn't have support for WinUSB, it has a general purpose device class that I've used as the base to implement a WinUSB device.

As a final comment: I believe that this is not too much of a breaking change for current solutions that are based on the official repository code. The USB debug would be there from the start. Any other USB support that may exist today in private repositories will work anyway alongside this new approach.

smaillet-ms commented 8 years ago

I've been thinking about the USB configuration for some time now and had a chance to try an experiment recently. The idea is to simplify the expression of the configuration and let a tool generate whatever data structures a given HAL, framework or run-time requires. For example, the reference standard USB configuration for MCBSTM32F400 could be expressed using a MUCH simpler XAML form such as:

<!--
This file provides an example of a XAML based USB Configuration descriptor.
XAML, at its core, is an XML representation of an object graph, which fits
extremely well with the hierarchical data structures for USB descriptors.
Since the Descriptor types are defined in an actual .NET assembly, the 
mechanism for defining descriptors is extensible. It is possible to express
a Device as a set of low level primitives including interfaces, endpoints,
etc... directly or by using higher order descriptors, such as this example
uses to provide the NETMF Debug transport interface. Any common USB-IF 
defined interfaces, configurations or functions can be defined this way.
Using higher order descriptor classes simplifies the expression of USB
configuration to just defining the vendor specific details leaving the rest
to the standard implementation.

This XAML will load at runtime into an object graph of descriptors that is
normally used by a tool to generate the data structures needed for a particular
runtime. (e.g. for NETMF this would generate the otherwise VERY tedious and
error prone static USB Configuration section data structure.) That is this is
a platform neutral representation and has use across multiple libraries and
frameworks.
-->
 <DeviceDescriptor
    xmlns="http://schemas.microsoft.com/NETMF/2016/UsbDescriptors"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    Manufacturer="{x:Reference Name=ManufacturerName}"
    Product="{x:Reference Name=ProductName}"
    VendorId="0x0483" 
    ProductId="0xA08F"
    DeviceRelease="0x0200"
    Version="V2_0"
    ExtendedOsDescriptorVendorId="0xA5"
    >
    <ConfigurationDescriptor
        Attributes="SelfPowered"
        MaxPower="255"
        ConfigurationValue="1"
        >
        <!--
        This single interface entry defines all the endpoints and OS extension descriptors
        for a NETMF Debug interface. (e.g. this includes the 'WINUSB' Extended Compat ID
        and the DeviceInterfaceGuid Extended OS Property)
        -->
        <NetmfDebugInterfaceDescriptor />
    </ConfigurationDescriptor>
    <DeviceDescriptor.Strings>
        <!--
        In USB, Each individual string can have multiple values depending on the language ID requested.
        Currently, NETMF PAL libraries only support en-us but the XAML description allows for more if
        NETMF ever enables systems with full language support.
        -->
        <StringDescriptor x:Name="ManufacturerName" Index="1">
            <LocalizedString LocaleId="EnglishUnitedStates" Value="Microsoft .NET Foundation" />
        </StringDescriptor>
        <StringDescriptor x:Name="ProductName" Index="2">
            <LocalizedString LocaleId="EnglishUnitedStates" Value="MCBSTM32F400" />
        </StringDescriptor>
        <!-- 
        NOTE: Since the device has the ExtendedOsDescriptorVendorId property set
        an OsStringDescriptor is automatically added to the string descriptors
        with the correct value including the specified vendor code. This is the
        simplest way to set up the OS string descriptor, as it doesn't require
        manually coordinating the vendor specified code. 
        -->
    </DeviceDescriptor.Strings>
</DeviceDescriptor>

That's a whole heck of a lot simpler than the C data structure and, as the comments suggest, can be enhanced to handle any standard configurations (CDC, USB-MIDI, USB-MS, etc...). Ultimately, it all reduces down to the few items an OEM must provide.

While the higher level elements help keep things simpler, it is possible to use lower level primitive elements such as interfaces, endpoints, and custom descriptors directly for anything that doesn't have an existing higher level class defined. Furthermore, when edited in VS you even get IntelliSense!

The build system can be extended to generate the actual code/binary files from this simpler format whenever it is changed.

josesimoes commented 8 years ago

@smaillet-ms as I've mentioned before regarding the USB configuration, I totally agree that the current one is tedious and error prone. Your suggestion is pretty much in line with what we have proposed in our PR #434. As we have it now, the USB configuration is part of the solution.settings file (as implemented, there could even be different USB configurations for TinyBooter and TinyCLR). See below. Right now it is focused on support for USB debug, and any of the other "missing" USB configurations can be added in the same fashion.

<!-- USB configuration -->
<PropertyGroup>
    <USB_VID>0x0483</USB_VID>
    <USB_PID>0xA08F</USB_PID>
    <USB_MANUFACTURER>Microsoft OpenTech</USB_MANUFACTURER>
    <USB_PRODUCT>STM32F4DISCOVERY</USB_PRODUCT>
</PropertyGroup>

And this doesn't require any particular changes to the current build system.

See here at line 33.

smaillet-ms commented 8 years ago

Yep, we're thinking in similar directions. The XAML approach is more of a continuing evolution of the idea: it allows easy extensibility to other configurations and allows code generation for wildly different HAL/PAL implementations (i.e. the STMCube libraries, MDK middleware libs, the existing NETMF PAL) while maintaining independence from the project settings. That last point is important, as part of the project rework I'm driving at is to remove the "special NETMF knowledge" from the build infrastructure itself, so it is easier to leverage existing build systems and project representations. (There's actually someone looking into porting the NETMF build to NetBSD! 😮 )

josesimoes commented 8 years ago

Glad to know that you think that. If you care to go through the PR you'll notice that everything you mention is already there. And it's working! I would prefer to go with JSON instead of XML for the configuration and settings, because it's more compact and readable, but that would require deeper changes to the tasks code.

smaillet-ms commented 8 years ago

I have read through the PR and while I'm sure it solves the problems for you, it doesn't solve the broader general problems I'm looking to solve for NETMF going forward. (Some of that I've written up already, but it's a REALLY big topic and I haven't gotten it all written down yet.) The short form is that I want to get NETMF out of the HAL business, and mostly out of the PAL business, so it remains focused on its true core as a .NET runtime. In the past we had no choice - there weren't any alternatives. Now there are multiple HAL/PAL type layers available and NETMF needs to adapt, so we can focus on developing the core and not the plumbing it runs on top of. This is a pretty broad and bold shift that I'm sure will meet with some resistance from many places. That is, I can imagine responses like "Why do all that? If you just tweak this one rough area everything would be perfect..." - unfortunately that sentiment often leaves off the implicitly subjective "for me". While the tweak in question might well help in that particular case, it doesn't address the larger issue: the world has changed around NETMF since it was first created, and it is just plain too hard and time consuming to port/use for many real world devices it would otherwise be a great platform for. There are a LOT of reasons for this complexity; the USB config is certainly a part of that, but the real problem is much, much bigger. NETMF has danced around the problem for several major and minor releases now. It's time to face up to the reality and do the work. It's not pretty work, and it's not a big long checklist of new features the marketing folks love to show off. But it is necessary for the future usability of NETMF. There's an old saying an Engineering manager I work with has: "Sometimes you have to stop and sharpen the saw. If you are cutting wood, you can't just keep cutting non-stop efficiently. At some point you need to stop, clean and sharpen the saw so you can make new progress."

So, by all means please keep doing what you need to do to ship your products, and keep feeding back your progress, successes and failures with new ways of thinking about this stuff. Everything helps contribute to a better NETMF. Though it is important to remember we can't guarantee any specific work or idea will end up as the next solution to the problems, as there are a lot of factors, including consistency with how other similar kinds of problems will be solved in other areas of the system. (e.g. multiple perfectly valid ways of solving a particular problem may be proposed, but only one fits consistently with the patterns used throughout the rest of the system, thus raising that one above the rest.)

josesimoes commented 8 years ago

Accomplishing all that is not an easy task, that's for sure! It all makes me wonder: what about Llilum? Wasn't the plan that it would be the "next" NETMF? If that is still true, why keep investing in the current NETMF? Please don't get me wrong: I'm just asking, not saying that I see it as pointless or anything like that!

PS1: Let me stress this: my team and I are only trying to contribute to NETMF. We are doing it in the spirit of open source and in a modest and humble way. If - at any point - it looked otherwise, I apologise for that! Also know that I'm not trying to push our ideas as the best possible ones. And for sure I won't be whining for any of those to reach the final version.

PS2: I hear you on that saying about the lumberjack and his efficiency with the saw. That manager of yours must have been a Boy Scout. Baden-Powell mentions that story in his Scouting for Boys and it's obviously famous among scouts. 😉

smaillet-ms commented 8 years ago

No problem - with only text it is sometimes difficult to communicate all aspects of thought/intent. It's only natural that some confusion can creep in, so it's often worth pre-emptively clarifying to help ensure it doesn't turn into a problem. The NETMF core team hasn't really done a good job of defining, let alone communicating, a proper design contribution process/workflow. Over the last few weeks I've been reviewing what other projects have done on that front, and I will try to get something posted for discussion.

RE: Llilum - Llilum has reached a stable compiler/base runtime library point and needs some hands-on time in the real world. There are areas we know we need to improve, but we want to be sure we've nailed the fundamentals first (e.g. we know we need to improve compilation time, code size and VS integration, but do we have the right story for basic libraries, interop to native, HAL/PAL, ...?).

Llilum shares the same HAL/PAL problems that NETMF does, albeit without nearly as much of a legacy problem, so my thinking is that this level of change, while big for NETMF on its own, is also applicable to Llilum (e.g. a shared common PAL abstraction). Thus, I see this as worth the investment to help bring them together. Furthermore, there are still cases where, as an interpreter, NETMF has value (e.g. easy sandboxing / dynamically loading third party code), so NETMF still has a life of its own. I'd like to get to where NETMF is on par with CoreCLR and Llilum from the perspective of the runtime itself, so that moving code around between them becomes plausible and reasonable. While NETMF won't be 100% feature-for-feature complete in the runtime, it can be a well-defined strict subset.

Thus there are two broad arcs I'm looking at for the NETMF roadmap:

  1. Simplify the porting process to reduce friction of adoption
  2. Bring NETMF runtime into conformance with Llilum and CoreCLR

I'd love to see the day where NETMF itself was entirely written in C# with Llilum thus gaining the best of both worlds when an interpreter is desired. I was actually toying around with that idea over a year ago as I was ramping back up on NETMF to help better understand the internals of an IL Interpreter but implementing such a thing for real is a looooong ways out.

josesimoes commented 8 years ago

That sure looks like an outstanding plan. You have my vote! 😄

Let me suggest that you open this discussion to the community. It can only get better if more people contribute ideas and views from different perspectives. I know that this is a "limited democracy" and you guys will always have the final say on all this, but it surely won't hurt to listen to people.

Speaking of which, I strongly suggest that you guys start doing something about the NETMF community. I mean, the repository for NETMF is here, but the "home" of NETMF doesn't seem to be here... It probably lives at the GHI forum... To clarify: I don't think there is anything wrong with the GHI forum. I tip my hat to GHI: they've done a great job with NETMF, especially in promoting it and making it usable by everyone, from hobbyists to companies (probably more than Microsoft itself...), but they are not NETMF. Often when people have general questions about NETMF they go to that forum. Again, there is nothing wrong with that, but it does seem awkward. And there is a reason for raising this...

There are incredibly smart and valuable people out there who should be brought here - their knowledge, their contributions, their expertise. I believe people are willing to contribute; they just need to feel that it is wanted, is welcomed and makes a difference. For example: everything that is being discussed above and everything you are trying to accomplish for NETMF, divided into tasks that people could pick up and start working on, would become a reality much faster than doing it alone. Right now that is not happening, and all that talent and potential is being wasted. At least it looks that way to me.

This repository should be the NETMF home, but it is not. Create a forum for NETMF where discussions can happen (a gitter room? a NETMF forum alongside the other community forums on MSDN?). Using issues to keep a conversation or a dialog going with several people is not practical and doesn't work that well, in my opinion. Engage people. Bring them here. Start building a true community around NETMF and good things will happen. Like a famous poster out there says: "I want to believe" 😉