TylerLeonhardt opened this issue 6 years ago
Every single time I've used "write-output" for logging, I've ended up regretting it.
Just say no.
However, returning tuples from functions is useful, extremely useful. I would be very disappointed to see that capability gone from any implementation (I've taken to commonly using something I first saw in golang: $ret, $err = function-call).
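For reference, a minimal sketch of that Go-style pattern in plain PowerShell (the function and variable names here are just illustrative):

```powershell
# A function that emits two objects to the pipeline; the caller destructures
# them with multiple assignment, Go-style.
function Invoke-Something {
    $result = 42
    $err    = $null
    $result, $err    # both values go to the pipeline as a two-element array
}

$ret, $err = Invoke-Something
"$ret / $err"        # -> "42 / "
```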
So I vote for Solution 1.
First blush, I like option 2.
Care to elaborate, @dfinke? 😊
I like the notion that all the output in the function gets logged (I'm guessing to the trace console), and that with no extra work on my part I can expect the last line to be returned to the caller.
I'm guessing a majority of functions can leverage that. Most of the http triggers I've built, I emit info to the pipeline for logging/tracing, and then the last statement I have is $result > $res.
For that pattern, my code wouldn't really change.
I'd like to play with it a bit, get a better feel.
Would leveraging PowerShell classes be out of the question? Explicit return, and attributes to control bindings of functions to methods. Plus, in C# functions I have an explicit way to log information I want logged, and would like the same in PowerShell. The lazier approaches (return everything, or return the last thing) don't feel right to me.
Just to add some context:
> In C# functions I have an explicit way to log information I want logged, and would like the same in PowerShell
We are planning on just leveraging the PowerShell logging that already exists. All of the Write-* cmdlets will work out of the box and log to the respective log levels in Functions.
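As a rough sketch of what that could look like inside a function script (the exact stream-to-log-level mapping here is an assumption, not a documented contract):

```powershell
# Ordinary PowerShell streams used for logging; the assumption is that each
# stream surfaces at a corresponding Functions log level, e.g.
# Verbose -> Trace, Information -> Information, Warning -> Warning, Error -> Error.
Write-Verbose     'Starting work' -Verbose
Write-Information 'Processed item 1'
Write-Warning     'Item 2 was skipped'
Write-Error       'Item 3 failed' -ErrorAction Continue
```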
Not advocating these, but other possibilities might be a special variable:
$return = $outputForReturnVariable
Or even a cmdlet similar to the ones for output bindings:
Push-$Return $outputForReturnVariable
(That dollar in the function name would probably break things, but I thought it was a cute idea).
Neither of these strike me as great, but they might avoid some of the pipeline collection flattening problems...
@rjmholt oddly enough, Push-$Return is a perfectly legal function name:
${function:Push-$Return} = { $global:return = $args | % { $_ } }
Having said that, magic variables and extra functions are *not* the answer. That's the problem we had before: dealing with weird/odd ways of getting values in and out (by reading and writing files) put me off using PowerShell in Azure Functions completely (frankly, it put me off Azure Functions completely, as something that was obviously and unashamedly sloppy and unfinished).
Personally I think Kirk's answer is the best: let Azure Functions have the exact same semantics as a method in a PowerShell class: Write-Output is not allowed (and not implied), and return is required, and only valid once, because then you're done.
As a bonus this means that you have an easy way of building a framework outside Azure functions that lets your tools help you write the right thing (i.e. just write a wrapper for your code in a class, and specify the return type you need to return, etc).
And what can return return? I'm strongly opposed to not being able to return a tuple.
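To make that concern concrete, here is a minimal sketch (plain PowerShell class semantics, not a proposed Functions API) of returning more than one value from a class method, since a method can only return a single object:

```powershell
class Worker {
    # Class methods require a declared return type and an explicit 'return';
    # nothing else leaks to the output. Returning "a tuple" means returning
    # one object that carries both values, e.g. an array.
    [object[]] Invoke() {
        $value = 42
        $err   = $null
        return @($value, $err)
    }
}

$ret, $err = [Worker]::new().Invoke()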
In our team discussion, the logical PowerShell way of aligning with AzFunctions semantics for $returns is either to support it only via classes, which have a 1:1 mapping, or to return everything that is sent to the pipeline. Accepting everything sent to the pipeline has its own issues in terms of both usability and some technical challenges for serializing. However, we already have the need for a custom cmdlet to send output to any arbitrary output binding, since you can have multiple output bindings in an AzFunction. So you would do something like Push-OutputBinding -Name Request -Value .... We could re-use this for $return, and it would look something like Push-OutputBinding -Name '$returns' -Value ..., which is not great as $returns needs to be escaped. I think that specifically to support $returns semantics, it makes the most sense to use a PowerShell class so that there aren't any surprises for the user.
For me, I think support of $returns overall is a lower priority, pending customer feedback that it is something people want rather than just using Push-OutputBinding all the time.
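For illustration, re-using the cmdlet for the special binding would look roughly like this (binding names are illustrative, and the quoting is exactly the escaping annoyance mentioned above):

```powershell
# A normal named output binding:
Push-OutputBinding -Name Response -Value $result

# Re-using the same cmdlet for the special binding: the name has to be quoted
# so PowerShell doesn't try to expand it as a variable.
Push-OutputBinding -Name '$return' -Value $result
```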
It should be easy to move code from one platform to another. It should be easy to move scripters from one platform to another. Wherever possible, code should behave the same way wherever it runs. Moving a working script to Functions should largely be a matter of copy/paste. So Solution 1.
@Jaykul, don’t worry I also think pipeline output is the way to go. It means that PowerShell concepts will support users everywhere, rather than having to relearn things. (And yes I thought that might be a legal function name, but it did break syntax highlighting already, and it would probably break a bunch of tooling).
While there is a risk of having to serialise a large number of pipeline objects, (1) serialisation to JSON is not the bottleneck in much of my PowerShell code, and (2) PowerShell users already have a best practice of not polluting the pipeline.
My only concern is that it’s sometimes a pain trying to prevent things from being written to the pipeline. Like why @BrucePayette advises not to use an ArrayList — because Add() returns ints.
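A quick illustration of that kind of accidental pipeline output (plain PowerShell, nothing Functions-specific):

```powershell
function Get-Items {
    $list = [System.Collections.ArrayList]::new()
    $list.Add('first')      # Add() returns the new index (0), which leaks to the pipeline
    $list.Add('second')     # ...and so does 1
    $list                   # the output you actually intended
}

Get-Items                   # emits 0, 1, 'first', 'second' -- not just the list contents

# Typical fixes: [void]$list.Add('first')  or  $null = $list.Add('first')
```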
@rjmholt which is why I suggested class semantics because in classes there's already no leaking output.
@SteveL-MSFT can OutputBindings be handled as simply as setting a $global:outputname variable, or is it actually something where you need to be able to call Push over and over for the same name?
@Jaykul the author of the Azure Function can define multiple output bindings; you would call Push for each binding once.
@Jaykul I misunderstood what you were asking. Setting the variable per output binding was something that was considered, but we'd have to have it namespaced to avoid conflicts, so something like $output:outputname, and because of scoping, we'd need $global:output:outputname.
It's a little confusing to have a function called Push that I can only call once ...
I still like the idea of global variables, or even returning a hashtable with a key for each named output ;-)
Or $global:PsAzureOutput could have a property on it for each named output ...
I don't know. Don't listen to me -- I feel like I need to take another look at how this stuff is done in other languages to understand what's supposed to happen.
For context, the function to set the value to be returned for a given output binding is called Push-OutputBinding. It has a Name (aka the name of the output binding) and a Value parameter.
You can call it as many times as you like.
The code is here if you're curious.
Once your function script is done being evaluated, the worker will grab a Hashtable of OutputBindingName => OutputBindingValue using Get-OutputBinding that contains all the output bindings, and send them along their way to the Function host.
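Roughly, that flow looks like this (binding names are illustrative, and Get-OutputBinding is normally invoked by the worker rather than by user code):

```powershell
# Inside the function script: set each output binding by name.
Push-OutputBinding -Name Response -Value @{ StatusCode = 200; Body = 'OK' }
Push-OutputBinding -Name Queue    -Value 'work item payload'

# After the script finishes, the worker collects everything as a
# Hashtable of OutputBindingName => OutputBindingValue:
$outputs = Get-OutputBinding    # e.g. @{ Response = ...; Queue = ... }
```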
This was our response to a "PowerShell experience for sending multiple outputs". In C#, they use out parameters, like I call out above.
The $return feature of Functions in other languages was used as a shorthand for functions that only have one output binding. It's really a convenience feature more than anything else.
In those other languages, they take advantage of the fact that "returning" would associate that return value to the only output binding of the function (in which the name is set to $return).
That prompted the question... "well, PowerShell doesn't really follow those other languages in that you just use the return keyword and return only that to the pipeline... so what does $return look like in PowerShell?"
It's a weird question 😅
Is Push the correct verb here? I think it is rarely used and may lead to confusion. It implies adding to a stack where order matters. Does it in this case? Wouldn't Set-BindingOutput -Name foo -Value bar make more sense?
The verb can be changed, but that's a separate question we can bring up in another issue. I only say that since this issue is about $return.
On the topic of $returns, to me taking all output of the function makes sense. IMHO the output stream should not be used for logging purposes. If you want to log, use Write-Verbose/Debug/Information.
If an Azure Function were analogous to a PowerShell function, in my mind $returns would use option 1 and "take everything". I honestly wish we could do things more like C# functions, but those ways are a bit of an anti-pattern for PowerShell, IMHO.
When I first heard about Azure Functions and PowerShell Support, I expected I could basically port an advanced function or script. Instead I found some really weird pattern.
WRT multiple output bindings, IMO a separate command is hard to discover. I would personally expect to take those in as parameters and update them within the body of the function/script.
Some pseudo-code as an example:
[CmdletBinding()]
param (
[FunctionOutput[String]]
$MyOutput1,
[FunctionOutput[Int]]
$MyOutput2
)
$MyOutput1.Set("This is my string output")
$MyOutput2.Set(42)
Perhaps doing the same with return makes sense too. I'm not sure. *shrugs
At the very least, we need to close on this by the time we go to preview.
Given that Durable Functions requires $returns, our decision should be driven by the design and validation of our Durable Functions support.
Background
In Azure Functions, there is a concept of "output bindings", which are essentially the output of your function. You define these outputs in the bindings section of the function.json. Here's an example function.json for an HttpTrigger and response; notice the binding Response has a direction of out.
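A minimal sketch of what such a function.json might look like (the trigger and binding names are illustrative):

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "Request",
      "methods": [ "get", "post" ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "Response"
    }
  ]
}
```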
There exists a "special" output binding called $return, which would look like the sketch below. This, in every language that Azure Functions supports in V2, gives the user the ability to "return" the result, rather than explicitly specifying that value X goes to the output binding of "Foo".
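A sketch of the same file using the special binding instead (again illustrative):

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "Request"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ]
}
```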
For example, in C# you would normally declare outputs using the out keyword in the function definition:
(function.json / run.cs example)
But if you want to leverage $return, you would do:
(function.json / run.cs example)
Note the use of the return keyword.
The problem
PowerShell is different than most languages because it can return multiple things by leveraging the Pipeline.
In addition to that, a lot of folks write to the pipeline as a way to log something.
So implementing $return for PowerShell is strange because it's not clear what the intended behavior should be.
The possible solutions
Solution 1: Take everything
Everything that gets thrown onto the pipeline will be added to a list. That list will be set as the output for the output binding $return. In addition to that, everything added to the pipeline can be logged to the host.
This idea comes from the fact that, since PowerShell can "return" multiple things, a list of the things written to the pipeline would be the most "PowerShell-y" experience for $return.
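A sketch of what Solution 1 would mean for a function script (hypothetical behavior, not an existing API):

```powershell
# run.ps1 under Solution 1 (hypothetical): every object written to the
# pipeline is collected into the value bound to $return.
'starting work'      # intended as a log line, but it would end up in $return
Get-Date             # also collected
'the real result'    # also collected

# $return would be bound to the whole list, so logging has to go through
# Write-Host / Write-Information / Write-Verbose instead of the pipeline.
```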
This does mean that the user would need to log using Write-Host or others instead of simply writing to the pipeline; otherwise their log statements will be included in the $return output.
Solution 2: Take one thing
Everything that gets thrown onto the pipeline will be logged, but the LAST item added to the pipeline will be the output for the output binding $return.
This idea comes from wanting to align with other languages. It also gives the user the ability to use the pipeline for logging and "returning".
FWIW, AWS Lambda went with this approach - so I've heard.
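And a sketch of the same script under Solution 2 (again hypothetical):

```powershell
# run.ps1 under Solution 2 (hypothetical): pipeline output is logged,
# and only the LAST object becomes the value bound to $return.
'starting work'      # logged, not returned
Get-Date             # logged, not returned
'the real result'    # last pipeline object -> becomes $return
```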
Solution 3: Take nothing
This would mean that $return doesn't make sense for PowerShell, and the only way to specify output is by using the Push-OutputBinding command.
That said, everything added to the pipeline can be logged to the host.