Azure / logicapps

Azure Logic Apps labs, samples, and tools
MIT License
363 stars 301 forks

ZERO SUPPORT FOR LOGIC APP (V2 STANDARD) DEVOPS DEPLOYMENT. #458

Closed klawrawkz closed 1 year ago

klawrawkz commented 2 years ago

Hi, my name is klawrawkz. I am fine. How are you?

What's the story with Logic Apps? How do we create an enterprise-grade DevOps process for "Standard" v2 Logic Apps, where we cannot use the function app "id" property to create connections and the JSON is (finally) compartmentalized to reflect the different aspects of the "app"? What's the DevOps deployment scenario when logic apps are compartmentalized into various separate .json files that need to be deployed, along with the various "elastic" App Service components we need to "host" these pathetic web apps?

These new "Standard" Logic Apps are bifurcated such that the function connection requires a different construct than the v1 "logic app" does. Not that the v1 logic app connections are "functional" in any DevOps sense, but the JSON construct is different in terms of deployment. This difference made a DevOps deployment almost "possible", except that connections do not work without MANUAL intervention. In v2, the function connection is contained in the Logic App's "connections.json" file. These function connections are seemingly (or ACTUALLY) "volatile" (they don't work reliably when deployed via "automation", aka VS Code), as they frequently lead to strange parsing errors that are only reported in the streaming logs. Thus it becomes much more important to provide best practices for DevOps deployment of these "new" Logic Apps.

Below is a sample function connection in a "Standard" Logic App. Note that "connectionName" is required rather than "id", e.g.:

            "My_Super_Duper_Function": {
                "inputs": {
                    "body": "@outputs('Compose')",
                    "function": {
                        "connectionName": "mySuperDuperFunctionConnection"
                    },
                    "method": "POST"
                },
                "runAfter": {
                    "Compose": [
                        "Succeeded"
                    ]
                },
                "type": "Function"
            }

The "mySuperDuperFunctionConnection" connection is defined in the "connections.json" file, e.g.:

{
    "functionConnections": {
        "azureFunctionOperation": {
            "authentication": {
                "name": "Code",
                "type": "QueryString",
                "value": "@appsetting('azureFunctionOperation_functionAppKey')"
            },
            "displayName": "mySuperDuperFunctionConnection",
            "function": {
                "id": "/subscriptions/some-random-subscription-guidThingy-c844cc805171/resourceGroups/MY_SUPER_DUPER_RG/providers/Microsoft.Web/sites/mySuperDuperfunction/functions/superDuperFunctionEndPoint"
            },
            "triggerUrl": "https://mySuperDuperfunction.azurewebsites.net/api/superDuperFunctionEndPoint"
        }
    },
    "managedApiConnections": { ... so on and so forth ...
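One way to tame the hard-coded subscription id above, in the spirit of what later comments in this thread do for managed API connections, is to move the environment-specific pieces into app settings via `@appsetting()` references. Below is a minimal sketch that generates such a parameterized connections.json fragment; the setting names are hypothetical, and whether every field of a function connection resolves `@appsetting()` should be verified against the docs:

```python
import json

# Hypothetical setting names; @appsetting() is the Logic Apps Standard
# syntax for pulling values from app settings at runtime.
function_connection = {
    "functionConnections": {
        "azureFunctionOperation": {
            "authentication": {
                "name": "Code",
                "type": "QueryString",
                "value": "@appsetting('azureFunctionOperation_functionAppKey')",
            },
            "displayName": "mySuperDuperFunctionConnection",
            "function": {
                # Environment-specific pieces moved into app settings
                "id": "/subscriptions/@appsetting('WORKFLOWS_SUBSCRIPTION_ID')"
                      "/resourceGroups/@appsetting('WORKFLOWS_RESOURCE_GROUP_NAME')"
                      "/providers/Microsoft.Web/sites/@appsetting('FUNCTION_APP_NAME')"
                      "/functions/superDuperFunctionEndPoint",
            },
            "triggerUrl": "@appsetting('FUNCTION_TRIGGER_URL')",
        }
    }
}

print(json.dumps(function_connection, indent=4))
```

The generated file can then stay identical across environments; only the app settings differ per environment.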

Are you actively supporting these logic app thingies? I mean, are you promoting them as "enterprise grade" services and supporting them? (If the enterprise I work for had it to do over again, I can assure you that logic "apps" would not be in the equation.) That is how miserable the experience is. Not to mention what we are finding: non-existent support and non-existent guidance, on top of a pretty uniformly unsatisfactory experience generally.

There is seemingly nothing "enterprise grade" about this stuff. Seriously, a bunch of JSON mashed up, converted behind the scenes into C# "function code", and unceremoniously dumped into a web app?

I have searched high and low for guidance all around the interweeeeebz. Disappointing. No helpful guidance. No samples. Nothing relevant to a modern enterprise based on automated DevOps principles. ...Honestly, my enterprise org feels terribly neglected. I don't know what to tell them except be "patient". Someday AWS will completely take over... DUH. Seriously. Where's the feedback? Where's the guidance? Where's the sense of "community spirit"? What is the story? The common-sense approach to deploying these things through DevOps is difficult or (in my case) impossible to find. My scenario must be pretty commonplace: 1) Take a logic app. 2) Connect the logic app to a function app. 3) Do some development. 4) Be marginally satisfied with the "results" of the development effort. 5) Set up a DevOps process and use this modern approach to deploy code to the enterprise. Easy-peasy, right? Where is the guidance???

What say you???

I hope you are doing well.

Sincerely,

klawrawkz

AB#15950956

mirzaciri commented 2 years ago

I support everything @klawrawkz said here.

We are looking into logic app v2 now, but the more I look at it from a DevOps perspective, the more confused and reluctant I become about using it.

There are some hacky ways of how to manage connectors, but from an enterprise view, you want clean, manageable ways of using it.

klawrawkz commented 2 years ago

Yes. I completely agree with you @mirzaciri. That's what is needed. Ideally this guidance comes from Microsoft and the community. In reviewing what samples exist on GitHub, most of the material is over a year old. Where is the "enterprise" guidance on the "new" version, e.g. logic app (standard), especially comprehensive DevOps samples? The term "enterprise" here implies organizational automation, "soup to nuts" IT department DevOps. The scenario use case is NOT a single developer using VS Code "extensions" and Kudu under the hood to zip-deploy files to pre-existing Azure infrastructure.

rohithah commented 2 years ago

Hi klawrawkz - Thank you for the detailed feedback; it is duly noted. There are several areas we are currently working on that will hopefully address some of the concerns you are raising:

  - We will soon be releasing an experience to enable customers to export their Consumption Logic Apps to Standard.
  - We do understand that some validation errors are not always surfaced in the Logic App designer or the portal, and users have to look into the log stream. This is also an area we are currently working on, to make these errors readily visible to users.

Also, have you taken a look at the DevOps-related documentation for Logic Apps Standard hosted here? If not, please take a look and let me know the specific scenario you are trying to build and the specific error you are running into. Happy to work with you to get your scenario working.

klawrawkz commented 2 years ago

Hi @rohithah, it's nice to meet you. Yes, I have reviewed the docs maintained at the location you mention.

The issue with this documentation is that it is essentially "circular": the reader will begin at the beginning and end at the beginning of this material. Once back at the beginning, it becomes clear that no progress has been made and that there is little actionable enterprise development or enterprise DevOps automation advice to be gleaned from the documents.

I recognize that the material is comprehensive, and I applaud those who labored so mightily to create this content.

However, much of it is out of date, confusing, or wrong (or a mixture of all 3 ingredients). So once an individual (Ind-A) has read through and visited the various linked documents, Ind-A is left completely underwhelmed by the experience, because there is a dearth of actionable guidance. Even though there is a ton of content, none of it is sufficient to answer Ind-A's question in simple terms: "What are the steps necessary to develop an enterprise logic app solution, create a DevOps pipeline for such a logic app (standard v2), and HOW does one create such a DevOps pipeline in an enterprise environment?" Such instructional material is critical, and IMHO it is unforgivable to promote logic apps as "enterprise" suitable, or enterprise desirable, or even enterprise anything other than anathema. Without providing pertinent guidance and useful tooling, it is not appropriate to promote logic apps as being ready for the enterprise.

"Pertinent guidance" then, is comprehensive and contains "real-world" examples that depict enterprise best practices. E.G. How to develop an enterprise logic app, and an enterprise pipeline that includes managed connections, other connection flavors, and "function connections." Or "connections to pre-existing" or "greenfield" functions.

Furthermore, the existing utilities (PowerShell, the Azure REST API, and Az CLI), at the time of this writing, do not contain methods to orchestrate and manage logic apps (standard v2). Again, this is a huge oversight. I state this at the risk of being an incredible bore: it is unacceptable to push out a technology without guidance or management utility tools sufficient to govern the outcome at an enterprise level. Especially WHEN THE TECH (logic app (standard), etc.) IS BEING PROMOTED AS AN "ENTERPRISE" GRADE SOLUTION.

Allow me to recap and achieve the highest level of tedium possible. The operational tooling we need is PowerShell, Azure CLI, and the REST API, each with methods sufficient to interact with logic "apps" at an enterprise-grade level. The guidance needed is basic-to-comprehensive "how to" information with pertinent "real-world" examples and samples we can consume to inform our enterprise best-practice approach. In the case of logic app (standard v2), this shall be guidance on:

  1. how to create a git/Azure git-based repo connected to a CI/CD pipeline,
  2. that is sufficient to create infrastructure (if needed or desired),
  3. that creates infrastructure (if needed),
  4. or does not create infrastructure (if not needed),
  5. that creates functional (functioning) connections to various other services,
  6. and finally provides deployment explanations such that we in the community can create and update, via enterprise automation and tooling, fully functional enterprise logic apps connected to the vast array of services we maintain currently, or will create and maintain in the future.

What we don't need is the Azure portal hoodwinking us into creating logic app (standard, or other v2 types), as there is no other type of logic app to create via the portal. This is a deplorable situation.

Possessing anything less than the above-described guidance and tooling implies, a priori, that logic apps are not ready for enterprise use and should not be promoted as such, i.e. "enterprise ready". This "unready" state is where we stand currently in the logic app fiasco, because we are absent (almost) any usable guidance or tooling. If these logic "apps" are to be forced on us anyway via the portal, they should bear a humongous RED label reading:

{ ... "WARNING THERE ARE LOGIC APPS HERE. USE LOGIC APPS IN THE ENTERPRISE AT YOUR OWN RISK. 
{ ... THEY ARE NOT READY FOR THE ENTERPRISE AND ARE NOT BEING SUPPORTED AT ENTERPRISE STANDARD LEVELS."

I hope you are doing well.

Sincerely,

klawrawkz

WenovateAA commented 2 years ago

I didn't read all the comments here, but I suppose the main question is how to perform multi-environment deployment with CI/CD pipelines while keeping a single connections.json. I implemented such a pipeline using samples provided by MS. The main trick is that you parameterize all ids and all dynamic parameters and keep them in app settings. You populate these settings during the resource deployment stage of your pipeline, before you actually publish the code (and the connections.json within it). So here is part of connections.json:

{
    "managedApiConnections": {
        "connectionname": {
            "api": {
                "id": "/subscriptions/@appsetting('WORKFLOWS_SUBSCRIPTION_ID')/providers/Microsoft.Web/locations/westeurope/managedApis/connectionname"
            },
            "authentication": {
                "type": "ManagedServiceIdentity"
            },
            "connection": {
                "id": "/subscriptions/@appsetting('WORKFLOWS_SUBSCRIPTION_ID')/resourceGroups/@appsetting('WORKFLOWS_RESOURCE_GROUP_NAME')/providers/Microsoft.Web/connections/connectionname"
            },
            "connectionRuntimeUrl": "@appsetting('CONNECTION_RUNTIMEURL')"
        } 

...
    }
}

Links:
https://github.com/Azure/logicapps/tree/master/azure-devops-sample
https://stackoverflow.com/questions/68888469/how-to-parameterize-the-values-in-workflow-json-and-connections-json-files-of-az
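Since this whole scheme depends on every `@appsetting()` reference having a matching app setting, a pipeline can cheaply validate that before publishing. A small sketch of such a check (a hypothetical helper, not part of any MS sample):

```python
import json
import re

def missing_appsettings(connections_json: str, app_settings: dict) -> set:
    """Return the @appsetting() names referenced in connections.json
    that are not present in the given app settings."""
    referenced = set(re.findall(r"@appsetting\('([^']+)'\)", connections_json))
    return referenced - set(app_settings)

# A trimmed-down connections.json with two parameterized values
connections = json.dumps({
    "managedApiConnections": {
        "connectionname": {
            "api": {"id": "/subscriptions/@appsetting('WORKFLOWS_SUBSCRIPTION_ID')/providers/..."},
            "connectionRuntimeUrl": "@appsetting('CONNECTION_RUNTIMEURL')",
        }
    }
})

settings = {"WORKFLOWS_SUBSCRIPTION_ID": "0000-0000"}
print(missing_appsettings(connections, settings))  # -> {'CONNECTION_RUNTIMEURL'}
```

Failing the pipeline when this set is non-empty catches the "connection silently broken after deploy" class of error before the app ever starts.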

mirzaciri commented 2 years ago

Do you keep all connection files for the various envs in a repo, or are they created through CI/CD? As I see in your .json file, ManagedServiceIdentity is used only through the portal, not during development; then you need the "Raw" authentication type.

Also, if you are using a new managed connection for a new project, how is this handled for each env? Do you first deploy the new connection in the portal for that specific env, or is this handled automatically in your CI/CD process?

WenovateAA commented 2 years ago

@mirzaciri No, a single connections.json is kept for all environments. This is the exact purpose of putting config parameters there that are env-specific. The deploy sequence through the pipeline is the following:

  1. Create LA first to get managed identity id
  2. Create managed connections ("type": "Microsoft.Web/connections") and access policies ("type": "Microsoft.Web/connections/accessPolicies"). For policies you specify identity id:
                    "identity": {
                        "tenantId": "[subscription().tenantid]",
                        "objectId": "[reference(resourceId('Microsoft.Web/sites', parameters('logicAppName')),'2019-08-01', 'full').identity.principalId]"
                    }
  3. Put connection runtime URL to output:
        "azuretablesRuntimeUrl": {
            "type": "string",
            "value": "[reference(resourceId('Microsoft.Web/connections', 'azuretables'),'2016-06-01', 'full').properties.connectionRuntimeUrl]"
        }
  4. During a later stage in the pipeline, populate the URL into an app setting using az cli. We do this because it is unknown during LA creation.

az functionapp config appsettings set --name $(LogicAppName) --resource-group $(RG) --settings "YOUR_CONNECTION_RUNTIMEURL=$(servicebusRuntimeUrl)" --query "[?name == 'YOUR_CONNECTION_RUNTIMEURL']"

Actually, as I can see now, LA supports user-assigned managed identity, which would make it possible to move the last step into LA creation, as we don't need to wait until the LA is created to know its managed identity id. Hope it makes sense.

Again, it's all described in the GitHub example for CI/CD.
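Steps 3 and 4 above (export the runtime URL as an ARM output, then push it into app settings) can be glued together in a release script. A hedged sketch that parses a hypothetical `az deployment group show --query properties.outputs` result and builds the corresponding az CLI command; the names are illustrative only:

```python
import json
import shlex

# Hypothetical output of `az deployment group show ... --query properties.outputs`
deployment_outputs = json.loads("""
{
    "azuretablesRuntimeUrl": {
        "type": "String",
        "value": "https://example-runtime.azure-apihub.net/apim/azuretables/abc123"
    }
}
""")

def appsettings_command(logic_app: str, resource_group: str, outputs: dict) -> str:
    """Build the az CLI command that copies each *RuntimeUrl deployment
    output into a correspondingly named app setting."""
    settings = [
        f"{name.upper()}={o['value']}"
        for name, o in outputs.items()
        if name.endswith("RuntimeUrl")
    ]
    return (
        f"az functionapp config appsettings set "
        f"--name {shlex.quote(logic_app)} --resource-group {shlex.quote(resource_group)} "
        f"--settings " + " ".join(shlex.quote(s) for s in settings)
    )

print(appsettings_command("my-logic-app", "MY_RG", deployment_outputs))
```

The emitted command is the same shape as the `az functionapp config appsettings set` call shown above, just derived mechanically from the deployment outputs instead of hand-maintained pipeline variables.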

klawrawkz commented 2 years ago

Hi @WenovateAA,

The managed API connection type is documented in a bit of a less confusing way, true. The issue of function app connections is more pressing for me than the managed connections, because we already have those working. I'm not sure if our method is exactly the same as yours, so I'd be interested in comparing notes, as it would be a good learning experience. It seems that you may have a workable approach that can be molded into a working sample superior (no offence, MS) to what Microsoft has produced. As I say, documentation for logic apps with complete, working, real-world scenario code samples is not readily available.

I'd be grateful, as would the community as a whole I'm sure, if you would be willing to share a functional code sample of your approach. JSON "code" kills me because it is unreadable and therefore unintelligible to me. This is even more true for JSON snippets: I can't read this stuff well enough to comprehend what a snippet is telling me. OK, OK, I admit it: I use Terraform, not ARM templates, for precisely this reason. Sadly, the Azure Terraform provider does not understand Logic App v2 at the time of this writing. This is one of the tooling shortcomings I mention above, so I am obliged to hack away with ARM templates. Honestly, I don't find JSON readable, and it is my personal belief that JSON is a data protocol, or a file format and data interchange format, not a "development language."

So you see, you'd really be helping out a fellow American who is down on his luck if you'd be willing to share some code and help out with this.

Good post brother. Thanks.

klaw

mirzaciri commented 2 years ago

> @mirzaciri No, a single connections.json is kept for all environments. This is the exact purpose of putting config parameters there that are env-specific. The deploy sequence through the pipeline is the following: [...] Again, it's all described in the GitHub example for CI/CD.

Your answer is yes actually: you have separate connection files for separate envs. And is this going all well and good when deploying new managed connections? If you want to share your code, that would be awesome, as @klawrawkz has mentioned.

WenovateAA commented 2 years ago

@mirzaciri I indeed have a single connections.json in my repo for all environments. An example of it is in my first comment here. Again, the idea is to keep env-specific values (subscription id, RG, etc.) in app settings. You then populate the app settings during deployment, with specific values for different environments. This way you keep connections.json the same. Yes, deployment of connections via ARM templates works fine. I can't share my pipeline template, as it's very specific to the service I develop. As I mentioned, a working example of how to create connections and parameterize connections.json can be seen in the official example for LA CI/CD. And when deploying in Azure, JSON is a format one has to understand; there is no way to avoid it when mastering ARM templates.

mirzaciri commented 2 years ago

> @mirzaciri I indeed have single connections.json in my repo for all environments. [...] There is no way one can avoid this when mastering ARM templates.

You say that the connections.json file is the same, but the connection file locally and for different environments should be different: 1) because local uses Raw auth and the portal uses managed identity, and 2) different environments have different API connectors.

klawrawkz commented 2 years ago

@mirzaciri, @rohithah, and @WenovateAA,

I have concluded my research, sleuthed, found, deciphered, and so on, what I believe is a "best case enterprise scenario" that really is a minimally acceptable approach limited by what I suppose we could call the "Enterprise Logic App Lifecycle Conundrum". In so doing, I also discovered a migration path away from Logic Apps (thingies) via translating JSON "instructions" to C# code base. I'll get to the end game solution in a moment. First, though, let's address the assertion that Microsoft has provided "enterprise grade" samples demonstrating best practices for Logic Apps.

I argue this is not the case. In fact, my assertion is that Logic Apps do not have the capabilities required to be appropriate for release as "enterprise components". I argue to the contrary, that Logic Apps should not be used in a production environment. Certainly they should not be used for mission critical applications where scaling, reliability, and disaster recovery are requirements.

Now on to the Microsoft logic app "samples". My complaint about the sample materials that @WenovateAA mentions is that the information is so basic as to be completely useless in illustrating principles and practices for any organization that is conducting serious work. The sample(s) are valuable in that they point to concepts which need to be taken into account when an organization considers "should we be using Logic Apps in the enterprise?" Sadly, though, these samples contain an insufficient amount of content to guide an organization to implement "enterprise best practices". There is not enough meat on the bones of the samples to be useful to me and my organization. That's my opinion, and my line in the sand over which no one shall cross and.... You get the idea.

IMHO, the samples do not provide the serious guidance that is required by enterprises that are intent upon developing, testing, deploying, and maintaining scalable, performant, software-based solutions. That's my claim and I'm sticking with it. A second opinionated claim I'll make is that Microsoft does not provide enterprise-appropriate guidance because there is none to produce. Ergo, one rational conclusion enterprises can arrive at is that Logic Apps are not suitable for enterprise use. To those enterprises who disagree: deploy such thingies at your own peril. The future development and maintenance nightmares, not to mention the subsequent attrition as front-line IT workers flee the sinking ship that was once your thriving business concern, could hint at the colossal error that was made when you opted for logic app thingies in the enterprise. I can say this in good faith because THE TERROR IS MINE.

When to use logic app thingies? These "thingies" (my term of art for logic apps) are most appropriately Azure Toys For Tots in the enterprise. What I mean is probably the best use for logic apps (lower case intentional, yup) is to enable brainstorming by non-technical users who wish to demo some future vision for software product. Perhaps the marketing staff creates a slew of logic app thingies all lashed together and covered with mud, to use in demonstrating new use cases for developing some novel service the enterprise could sell. The IT group could create a playground, a secure and isolated RG, where marketing or product development employees can freely and safely create their logic-app-thingy mudballs demonstrating new and modernistic ideas for the sales guys to peddle based on whatever it is the company does. Then once the thingy demos are all over, the ideas have been promulgated, and sales and marketing folks are safely deployed to the golf course, IT can SALT the earth surrounding and including the thingy sandpit and blast all the mudballs down the drain so they can do no harm.

One Solution To "Enterprise Management And Deployment" Of Thingies - Create A Devops Pipeline Comprised Of

  1. Terraform
  2. Azure REST API/ Azure WebJobs SDK
  3. Bash or other shell scripts
  4. Function deployment tasks

This could have been a preamble, but as it stands this strategy is a "postamble". JSON is a DATA PROTOCOL. JSON IS NOT A FIRST-CLASS DEVELOPMENT LANGUAGE. Nobody has to learn JSON as if it's a language, because there is nothing to learn other than the syntax. Not a development language, I must repeat. I must repeat. I must repeat.... This has been a public service announcement. Thank you.

  1. The DevOps thingy infrastructure approach is Terraform. We use Terraform to create all required infrastructure resources, including the hosting environment, hosting application (web app), Application Insights, storage accounts, etc. As I say, ARM templates are convoluted and difficult to work with; why not use Terraform to simplify this work? Using Terraform state management, we are guaranteed that our Azure landing zone is always current, up to date, and configured as expected. ... Oh boy, JSON is not a development language, lol....

  2. The Azure REST API contains operations that allow you to create a new thingy (workflow) in the terraformed thingy hosting environment (web app). If you choose to use the REST API, then you will not need to implement step 4. Be advised that the REST API is better suited for DevOps scenarios where thingies already exist in Azure.

  3. Bash/Shell script is used to write any custom application config settings you may require. This can also be done via terraform, so you may be able to eliminate this devops step.

  4. Function App deployment step. This approach is best for a new thingy deployment. If you have installed the vs code "Function" extension, you will find that this will allow you to create a logic workflow thingy app. Once you have "developed" your thingy workflow app, the project files can be used to create a devops deployment. Use Azure git as the source in devops, and create a function deployment step using logic app thingy details in the function deployment task. The function deployment task will handle your connections.json file for you. So in your dev environment be sure to include your Azure connections. As long as the connections are verified as being functional, they can be deployed via devops.
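Step 2 above refers to Azure REST API calls for working with workflows in an existing Standard logic app. Here is a hedged sketch that only constructs the management-plane request URL; the `Microsoft.Web/sites/{site}/workflows` path and the api-version are assumptions to verify against the current Azure REST API reference, and an actual call would also need an `Authorization: Bearer <token>` header:

```python
# Sketch only: builds the management-plane URL for listing the workflows
# inside a Logic App (Standard) site. The endpoint path and api-version
# are assumptions to check against the Azure REST API docs.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
RESOURCE_GROUP = "MY_SUPER_DUPER_RG"                      # from the thread above
SITE_NAME = "my-logic-app"                                # hypothetical site name

def list_workflows_url(subscription_id: str, resource_group: str, site: str) -> str:
    """Compose the ARM URL for enumerating workflows hosted in a site."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Web/sites/{site}"
        "/workflows?api-version=2022-03-01"
    )

print(list_workflows_url(SUBSCRIPTION_ID, RESOURCE_GROUP, SITE_NAME))
```

The same URL shape, with a different trailing segment, covers fetching a single workflow; headers and auth tokens are left out here, as in the original comment, to keep the sample short.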

Migration Path Out Of The Morass Of Thingydome - Translate Workflow JSON To IL And IL To C# WebJobs

SDKs And Approach

Azure WebJobs SDK / Azure WebJobs SDK Core Extensions / Azure Functions Host. We use the Azure Functions host to translate thingy "workflow" JSON instructions to C# code via IL. Then we use the WebJobs SDK to create triggers. The SDK/Core Extensions provide triggers and bindings that we register by calling config.UseCore(). The Core extensions provide general-purpose bindings that account for most of the common scenarios we find in thingies. There is a binding for ExecutionContext that provides invocation-specific system information in your thingy. When using the SDK, connections are first-class objects and don't require the special handling demanded by JSON-based thingies. For example, the following sample code demonstrates a means of 1) creating a connection, and 2) accessing the invocation ID for a specific function (previously a JSON-based thingy).


// Connection and configuration code using the Azure WebJobs SDK...
    var myQueueConn = ConfigurationManager
        .ConnectionStrings["MyQueueConnection"].ConnectionString;
    var config = new JobHostConfiguration();
    config.StorageConnectionString = myQueueConn;
    config.UseCore(); // register the Core extensions (ExecutionContext binding)
    var workflowHost = new JobHost(config);
    workflowHost.RunAndBlock();

// Queue trigger and logging code in the Azure WebJobs SDK...
public static void ProcessQueue(
    [QueueTrigger("items")] Widget widget,
    TextWriter logEntry,
    ExecutionContext myExecutionContext)
{
    logEntry.WriteLine("InvocationId: {0}", myExecutionContext.InvocationId);
}

Implementation is easy. All we need to do is register the Core extensions by calling config.UseCore() in our startup code. The SDK provides a rich dashboard experience for monitoring services. We can track the invocation ID above using the dashboard logs that are provided out of the box by the SDK. Programmatic access to this data allows us to connect the dots between an invocation and the generated logs. Or we can use Application Insights for analytics.

By means of this "translation" process we are easily able to convert JSON-based thingies to C# code and develop enterprise worthy "workflows" without the encumbrance of JSON-based thingies and the accompanying headaches brought on by JSON-based thingies in the enterprise.

// Yup.

    // There
   // You
  // Have
 // It
// .
erwinkramer commented 2 years ago

This product is definitely manageable. I think the main issue is that Logic Apps itself is trying to be the simplest thing imaginable in Azure, yet it truly is one of the hardest Azure resource types to manage, IaC-wise (without touching the portal) and development-strategy-wise (without touching the portal; this is still flaky, but it's getting there). Mainly because of all of the connector types and the different authentication types for those connectors (depending on where you run it, too). In fact, it is the only resource I have encountered where the official docs tell me to run a network debugger to check how a managed connector request is formatted. How are you expecting someone who wants to take the easy route (by choosing Logic Apps over C#) to handle stuff like that?

mirzaciri commented 2 years ago

> In fact it is the only resource i encountered where the official docs is telling me to run a netwerk debugger to check how a managed connector request is formatted, like how are you expecting someone that wants to take the easy route (by choosing Logic Apps over C#), to handle stuff like that?

Could not agree more. Recently we needed to use the Salesforce connector, and after 3-4 pages of Google search results, we found that the best way was to look at the network traffic to fetch the "correct" payload for building the connector. If you are an enterprise with multiple connectors, this is just stupid (and the authorization is a whole new ballpark of stupidity, unfortunately).

klawrawkz commented 2 years ago

Hi @erwinkramer and @mirzaciri,

I suppose it comes as no surprise that I am in complete agreement with both of you. At my shop we inherited an architecture that relies/relied in many cases on the "old" consumption version "logic apps" thingies. No hope of acceptable enterprise monitoring or disaster recovery in sight. Chaos all around. With the advent of the "new V2" logic app thingy offering "logic app (standard)" {the name is strange; what evs} we hoped to limp into a migration to V2 because of app insights monitoring and stateful characteristics, and some questionable disaster recovery. Once the transition was accomplished we would have bought time to devise and vet a migration path away from the dreadful thingy architecture.

Perhaps we should have been suspicious because there is no formal migration path set forth by Microsoft from thingy v1 to thingy v2 (standard). Perhaps we should have been suspicious because these thingies have been in "preview" status for an exceedingly long time. In reality, the reasons for suspicion are/were far more numerous than, and outweigh, the fluff "advantages" Microsoft is furiously pumping out.

The laundry list of oddities, compromises, downright violations of long-established enterprise best-practice patterns, and the fact that logic app thingies do not work as advertised finally led us to bail on migrating to v2, and we are employing our own scorched-earth logic app policy to leave no "logic app thingy" survivors, as I outlined above. Fiasco is the word that comes to my mind, lol.

We all thus report a strikingly similar experience:

> Ergo, one rational conclusion enterprises can arrive at is that Logic Apps are not suitable for enterprise use. To those enterprises who disagree, deploy such thingies at your own peril. The future development and maintenance nightmares, not to mention the subsequent attrition as front line IT workers flee the sinking ship that was once your thriving business concern, could hint at the colossal error that was made when you opted for logic app thingies in the enterprise.

> In fact it is the only resource i encountered where the official docs is telling me to run a netwerk debugger to check how a managed connector request is formatted, like how are you expecting someone that wants to take the easy route (by choosing Logic Apps over C#), to handle stuff like that?

> If you are an enterprise, with multiple connectors, this is just stupid (and the authorization is a whole new ballpark of stupidity unfortunately).

In closing I have shamelessly appropriated Bowie's lyrics to plead with Microsoft to stop the nonsense. Curtail, I say. Works for me.

This is ground control to Microsoft
Your circuit's dead, there's something wrong
Can you hear me, Microsoft?
Can you hear me, Microsoft?
Can you hear me, Microsoft?

Hope everyone is doing well.

klaw

mirzaciri commented 2 years ago

> Hi @erwinkramer and @mirzaciri,
>
> I suppose it comes as no surprise that ......

As much as we share the same pains, in regards to this topic, I think our main fault is that this is an early product (but Microsoft shouldn't have said it's enterprise-ready). I think it goes both ways.

Although I hope to see some improvements in the near future, for now, we are moving to Azure Functions (so I can still keep my hairline intact).

As a side note: Take a look at the blog from Integrate 2022.

esdccs1 commented 2 years ago

I'd also like to add that the newest version of VisualStudio does not plan on supporting Logic Apps as an extension. https://docs.microsoft.com/en-us/answers/questions/748062/azure-logic-apps-tools-for-the-visual-studio-2022.html

This is a bit odd, and makes me wonder... if Logic Apps are planned on being expanded upon, would they not include them in their latest premiere IDE offering?

mirzaciri commented 2 years ago

> ... would they not include it in their latest premiere IDE offering?

I think that makes sense, in regards to the support of VS Code with logic app standard.

klawrawkz commented 2 years ago

@esdccs1 & @mirzaciri, perhaps the "premiere IDE offering" is, as @esdccs1 suggests, meticulously being positioned on the chopping block? IMHO we find various imperfections and strange omissions in the product that take away from what was/is a splendid dev environment. Meanwhile, VS Code (hoisted precariously on "Electron") is being enhanced via extensions to integrate broadly with Microsoft's and other cloud providers' ecosystems. Is it odd that VS Code supports logic "apps" and generally runs the gamut of "serverless" offerings, while Visual Studio, the "Premiere Offering", does not? I don't think so, if we read the tea leaves to divine hypothetically possible future plans for development tool offerings. :)


> Electron is a framework for creating native applications with web technologies like JavaScript, HTML, and CSS. It takes care of the hard parts so you can focus on the core of your application.

yup.

github-actions[bot] commented 1 year ago

This issue is stale because it has been open for 30 days with no activity.

TeamDman commented 1 year ago

Make a DSL or something so we can track our Power Automate / logic apps in git. Bicep is doing great; I'd much rather learn a new language than more magic YAML syntax.

hartra344 commented 1 year ago

There's a lot of good discussion and information here. Going to move it to the Discussions tab for better visibility so the conversation can continue.