Closed alfonsogarciacaro closed 6 years ago
From all my use cases, I have never needed reflection when using Fable, so for me it's a yes.
Afterwards, perhaps we can still support reflection via attributes if it's really needed. But in general I prefer to avoid reflection; I don't think it's good practice and it doesn't help in writing clean, robust code.
In order to make the JS code more readable, we could generate comments, something like: [0 /* Bar */, 6, "foo"]. We could either generate this only in DEBUG mode, or always, and let plugins like UglifyJS remove the comments from the output.
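As a sketch of what such output could look like (the tag value and the helper below are illustrative, not actual Fable output):

```javascript
// Hypothetical compiled form of Bar(6, "foo"), assuming the Bar case gets
// tag 0; the /* Bar */ comment survives in DEBUG builds and is stripped by
// minifiers like UglifyJS in production.
const bar = [0 /* Bar */, 6, "foo"];

// Pattern matching would then compile to a switch on the tag:
function describe(union) {
  switch (union[0]) {
    case 0: // Bar
      return "Bar(" + union[1] + ", " + union[2] + ")";
    default:
      throw new Error("Unknown case tag: " + union[0]);
  }
}
```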
I am not sure I understand this part:
There will be challenges with runtime equality/comparison and string conversion
@MangelMaxime
But in general I prefer to avoid reflection; I don't think it's good practice and it doesn't help in writing clean, robust code.
Not a good practice? Any kind of metaprogramming (outside of unsafe interop) will not be possible without Reflection, and since Fable doesn't support quotations, Reflection is the only way. Many library implementers use Reflection and expect to be able to read the metadata of types provided by their users.
@Zaid-Ajaj I just said that because, each time I used Reflection, I eventually switched back to another solution. :)
Because reflection usage is generally infrequent, I propose to support it via an attribute if we can't support it another way. :) That way, the code is optimized by default.
Some notes :)
Reflection will only be disabled for obj; it will still be available for types known at compile time or for generics (if you inline the function or use the PassGenerics attribute). I think this should cover most current use cases. It may be tricky for some fable-core functions that rely on reflection info at runtime, like ofJson and keyValueList, but I hope we can find a workaround.
Yes, comments are a good idea. They are automatically removed in production by UglifyJS so it should be fine.
About string conversion: for example, right now Fable first checks if the object implements ToString and then checks the type of the object. I'm not sure how this could be done without type info at runtime (or whether it will still be possible to override ToString). We can get some info for the root objects at compile time, but for nested members it's more difficult.
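Roughly, the current runtime check could be pictured like this (a simplified sketch, not fable-core's actual code):

```javascript
// Simplified sketch: prefer a user-defined ToString method if the object
// has one, otherwise fall back to a type-based conversion (here just the
// native String() as a placeholder for the type-directed logic).
function toStr(o) {
  if (o != null && typeof o.ToString === "function") {
    return o.ToString();
  }
  return String(o); // placeholder for type-directed conversion
}
```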
Compile records as JS arrays (as Reason/Bucklescript do) or as pojos
Pojos are the winner for me, as compatibility with libraries like React and easier debugging are a big plus.
as Reason/Bucklescript do
Bucklescript is a very good source for that kind of stuff.
So for example Bar(6, "foo") becomes [0, 6, "foo"]
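For records, the pojo option could look like this (field names are illustrative):

```javascript
// An F# record like { Name = "a"; Age = 1 } compiled as a plain object:
const person = { Name: "a", Age: 1 };

// Copy-and-update (F#'s `{ person with Age = 2 }`) maps to object spread,
// and the value can be passed directly to libraries like React as props.
const older = { ...person, Age: 2 };
```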
One of the key selling points of Fable is code readability, and someone who relies on this won't ever upgrade.
In Babel we usually have options to tune the output; what if Fable added a compilation flag which turns the optimizations on? The downside is having to maintain two code paths doing basically the same thing. To avoid that, I would recommend using another AST pass (when optimization is enabled) to transform Bar(6, "foo") into [0, 6, "foo"].
In the case of the optimization being turned off, you could still debug or read the code as you currently can.
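The extra pass could be a small AST rewrite; the node shapes below are invented for illustration and are not Babel's or Fable's real AST:

```javascript
// Toy lowering pass: turn union-construction nodes into array literals,
// leaving every other node untouched. `tagOf` maps a case name to its
// integer tag (both names are hypothetical).
function lowerUnion(node, tagOf) {
  if (node.type === "UnionConstruction") {
    return {
      type: "ArrayLiteral",
      elements: [tagOf(node.caseName), ...node.args],
    };
  }
  return node;
}
```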
In Babel we usually have some option to tune the output, what if Fable adds a compilation flag which turns the optimizations on.
Well, I think it just summarizes my point of view. :)
Fulma is an example of a very useful library that adds many KBs to the app bundle, which can be a big drawback for some users
We need numbers :)
Anyway, as long as Fable apps are easy to debug it will be OK for me, because in the long run it's always the maintenance of old code that becomes pricey.
@whitetigle Quoting @alfonsogarciacaro
Approximately, a normal Fable SPA app is around 100KB minified but Fulma adds 200KB extra
And we have some work in progress to remove around 100KB from Fulma with a new version of Fable, by manually adding [<Fable.Core.CompileAsArray>] (a new attribute).
Don't know if you've seen this but recent V8 versions have changed their approach to optimizing code:
Hi all,
Adding specific attributes should be avoided; an opt-in optimization looks like a better approach.
Structural equality: it should work as in standard F#.
Array/Obj: array or object is maybe not so important since we still have good source maps for debugging. I don't want to debug/read JS.
Bundle size is important: users browse more often now on mobile devices with 3G or 4G, and it's always painful to wait one or two minutes for the app.
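Structural equality over pojo-compiled records can be done generically; here is a minimal sketch, assuming fields are primitives or nested pojos and ignoring arrays and custom equality (fable-core ships its own helper of the same name, visible in the generated code quoted later in this thread; this is only an illustration of the idea):

```javascript
// Deep structural comparison of plain objects, field by field.
function equalsRecords(a, b) {
  if (a === b) return true;
  if (typeof a !== "object" || typeof b !== "object" || a === null || b === null) {
    return a === b; // primitives compare by value
  }
  const keys = Object.keys(a);
  if (keys.length !== Object.keys(b).length) return false;
  return keys.every(k => equalsRecords(a[k], b[k]));
}
```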
FWIW, I've only used reflection recently and only as a last resort. I'm only using it in a generic function (with PassGenerics), so I think these changes won't affect that.
:+1: on making compiled union / record types lighter
If this is going to be controlled with a global switch, we probably need something to override this behavior per type, module or even project (the *.fsproj file a source file comes from), in case you explicitly want certain types to be classes.
Would it make sense to introduce this feature in Fable 1.4 (?) for non-public types only and then extend it to all public APIs in Fable 2?
Thanks a lot for the comments! They are very useful to make the design for Fable 2.0, I'll write a summary later :+1: Thanks also for the link @jgrund, the post was very good.
About compilation flags for optimizations, I think this may increase the maintenance cost too much if it changes the runtime semantics which can make some functions in fable-core not work as expected. However, it should be possible to do some specific changes to make debugging easier: like using strings for union tags during development and integers in production.
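For example, that idea could look like this (the values and helper are illustrative):

```javascript
// Hypothetical case names for some union type:
const tags = ["Bar", "Baz"];

const devValue = ["Bar", 6, "foo"]; // string tag: readable while debugging
const prodValue = [0, 6, "foo"];    // integer tag: compact after minification

// A dev-form value maps onto the production form via the tag table:
function toProd(value) {
  return [tags.indexOf(value[0]), ...value.slice(1)];
}
```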
Performance is a feature, and I think Fable should offer better runtime optimisations. Opt-in via attributes makes the code less readable and also harder to share with the .NET server side. Thus my preference would be compiler/project switches for optimisation features while retaining the old behaviour (at least until proven obsolete). Opt-in is still a good choice for selected applications.
I also thought that using F# arrays and lists could optionally be optimised to plain JS arrays for performance/size reasons.
@alfonsogarciacaro performance is not that important, so why bother; a wise man once told me dev experience is more important. :)
Messing aside, reflection is the root of so much evil; you should wean people off it. If they need it, they can stay on an older version while they figure out a better way to implement things without reflection. If it wasn't for reflection and a few other .NET gems, we would be able to do much better flattening and inlining of the execution graph in the compiler (fsc).
Project switches are going to be a nightmare to maintain and will split the project in two directions anyway. There are always the historical releases for those who need the old way as an alternative version, but I think you need to pick a direction and go with it.
performance is not that important, so why bother; a wise man once told me dev experience is more important
was that me? ;P
Performance optimizations are important when anyone is actually using the project... and they actually have a problem with performance. I'd be very careful in introducing premature optimizations that may make the developer experience worse just for sake of some benchmarks.
The main point of the "Lightweight types" isn't the performance; it's the bundle size, if I understood my discussion with @alfonsogarciacaro correctly. The performance gains are just a consequence of this point :) In general, I rarely look at performance problems because it's quite rare to see them. (I am just speaking from my own experience :))
@Krzysztof-Cieslak, well, you have in some guise at some stage, but I was referring to @alfonsogarciacaro in this instance... don't worry, you're a wise man too.
I fully agree that performance is important in libraries that people are using, since some users will inevitably run into performance issues. I think the important balance is to make the dev API as simple/elegant as possible and do all the hard performance work behind the scenes. It's not a LOB app where we can/should have simple, clear abstractions for complex logic; it's a base library that should be magic and performant under the simple/elegant API, hiding complexities from the dev.
The main point of the "Lightweight types" isn't the performance; it's the bundle size, if I understood my discussion with @alfonsogarciacaro correctly. The performance gains are just a consequence of this point :)
Due to CPU cache sizes and IO bandwidth, the two are highly correlated :)
@gerardtoconnor Hehe, dev experience is very important; that's what we've been doing the past two years, and thanks to that we now have awesome projects like Ionide, Elmish or Fulma. Now we can focus on performance :wink: Also, now that we have a wider user base, we can see better how Fable is used in the wild and which features we can drop: these optimizations will mean you cannot do type testing on obj anymore (and probably also no custom equality for DUs), but if this is OK for most users we can give it a go. In any case, you're right about the difficulty of maintaining different code generations; we'll have to define a clear path for Fable 2.0 instead. About reflection: Fable only allows a small part of it, but it's useful for serialization and other tricks (like converting DUs into JS objects or dropdown options). This will be maintained (for types known at compile time or that use PassGenerics), but it will be available through module functions so it can be removed by tree shaking if not invoked.
I am passing objects as messages between Fable and Akka.NET/F# on the server. Because the type of the message identifies the message, being able to include the type in the serialization, and to deserialize something in Fable without knowing in advance what the type might be, will remain important to me. After deserialization of a message on the client, its runtime type (:? / instanceof) is used to route the message. It would be fine to have to opt into keeping the type info and the true JS/runtime type so this keeps working. The majority of types do not need this, however, so removing the proliferation of [<Pojo>] would be welcome.
I just reached 1MB for the production bundle... It is OK for high-speed internet but not for anything else, and it is indeed a big size. So, if anything helps reduce the size, I am up for it.
Just to give an idea, here is a case where I can't even use [<Pojo>]:
type Validate = {
IsValid : bool
ErrMsg : string
} with
static member isValid =
(fun m -> m.IsValid), (fun n m -> {m with IsValid = n})
static member errMsg =
(fun m -> m.ErrMsg), (fun n m -> {m with ErrMsg = n})
type DocumentStatus = Select = 0 | All = 1 | New = 2 | InProcess = 3 | Processed = 4 | Error = 5
type DocumentModel = {
File : string
FileName : string
Label : string
Status : DocumentStatus
Patient : string
Date : string
Notes : string
}with
static member file = (fun m -> m.File), (fun n m -> {m with File = n})
static member fileName = (fun m -> m.FileName), (fun n m -> {m with FileName = n})
static member label = (fun m -> m.Label), (fun n m -> {m with Label = n})
static member status = (fun m -> m.Status), (fun n m -> {m with Status = n})
static member patient = (fun m -> m.Patient), (fun n m -> {m with Patient = n})
static member date = (fun m -> m.Date), (fun n m -> {m with Date = n})
static member note = (fun m -> m.Notes), (fun n m -> {m with Notes = n})
type ErrorModel = {
File : Validate
FileName : Validate
Label : Validate
Patient : Validate
Date : Validate
Notes : Validate
}with
static member file = (fun m -> m.File), (fun n (m:ErrorModel) -> {m with File = n})
static member fileName = (fun m -> m.FileName), (fun n (m:ErrorModel) -> {m with FileName = n})
static member label = (fun m -> m.Label), (fun n (m:ErrorModel) -> {m with Label = n})
static member patient = (fun m -> m.Patient), (fun n (m:ErrorModel) -> {m with Patient = n})
static member date = (fun m -> m.Date), (fun n (m:ErrorModel) -> {m with Date = n})
static member notes = (fun m -> m.Notes), (fun n (m:ErrorModel) -> {m with Notes = n})
type Model = {
Id : int
ShowUploadModal : bool
DocumentModel : DocumentModel
ErrorModel : ErrorModel
IsValid : bool
}with
static member documentModel = (fun m -> m.DocumentModel),(fun n m -> {m with DocumentModel = n})
static member errorModel = (fun m -> m.ErrorModel), (fun n m -> {m with ErrorModel = n})
Now this is going to increase the size a lot. The static members are there to use lenses (Aether) to modify record types. And obviously many fish operators are omitted for readability...
@kunjee17 You can also avoid static members and still have lens support. You can write them like:
type Model = {
Id : int }
let modelIdLens = ...
// or
module ModelLens =
let id = ...
That way you can use the [<Pojo>] attribute.
Do you use a CDN in your application, or are you bundling everything, your code and libraries included?
@MangelMaxime I am bundling everything as of now. Here are the libraries I am using:
"dependencies": {
"@servicestack/client": "^1.0.0",
"animate.css": "^3.5.2",
"babel-polyfill": "^6.26.0",
"babel-runtime": "^6.26.0",
"flatpickr": "^4.0.6",
"font-awesome": "^4.7.0",
"izitoast": "^1.1.5",
"lowdb": "^1.0.0",
"preact": "^8.2.1",
"preact-compat": "^3.16.0",
"remotedev": "^0.2.7"
},
"devDependencies": {
"@types/lowdb": "^0.15.0",
"babel-core": "^6.26.0",
"babel-loader": "^7.1.2",
"babel-plugin-transform-runtime": "^6.23.0",
"babel-preset-env": "^1.6.1",
"bulma": "0.5.2",
"bulma-extensions": "^0.5.2",
"css-loader": "^0.28.7",
"fable-loader": "^1.1.2",
"fable-utils": "^1.0.6",
"file-loader": "^0.11.1",
"loglevel": "^1.5.0",
"node-sass": "^4.5.3",
"sass-loader": "^6.0.6",
"style-loader": "^0.18.2",
"webpack": "^3.6.0",
"webpack-dev-server": "^2.8.2"
I'll give a shot at what you said, putting things in a module instead of static members.
Update 1: @MangelMaxime, for the same code I shared in the comment above, the generated code is 50 lines shorter in the Fable REPL, and there are no imports if I use modules with Pojo instead of static members on the type.
For the above code, here is the generated code:
import { setType } from "fable-core/Symbol";
import _Symbol from "fable-core/Symbol";
import { compareRecords, equalsRecords } from "fable-core/Util";
export class Validate {
constructor(isValid, errMsg) {
this.IsValid = isValid;
this.ErrMsg = errMsg;
}
[_Symbol.reflection]() {
return {
type: "Test.Validate",
interfaces: ["FSharpRecord", "System.IEquatable", "System.IComparable"],
properties: {
IsValid: "boolean",
ErrMsg: "string"
}
};
}
Equals(other) {
return equalsRecords(this, other);
}
CompareTo(other) {
return compareRecords(this, other) | 0;
}
static get isValid() {
return [function (m) {
return m.IsValid;
}, function (n, m_1) {
return new Validate(n, m_1.ErrMsg);
}];
}
static get errMsg() {
return [function (m) {
return m.ErrMsg;
}, function (n, m_1) {
return new Validate(m_1.IsValid, n);
}];
}
}
setType("Test.Validate", Validate);
export class DocumentModel {
constructor(file, fileName, label, status, patient, date, notes) {
this.File = file;
this.FileName = fileName;
this.Label = label;
this.Status = status | 0;
this.Patient = patient;
this.Date = date;
this.Notes = notes;
}
[_Symbol.reflection]() {
return {
type: "Test.DocumentModel",
interfaces: ["FSharpRecord", "System.IEquatable", "System.IComparable"],
properties: {
File: "string",
FileName: "string",
Label: "string",
Status: "number",
Patient: "string",
Date: "string",
Notes: "string"
}
};
}
Equals(other) {
return equalsRecords(this, other);
}
CompareTo(other) {
return compareRecords(this, other) | 0;
}
static get file() {
return [function (m) {
return m.File;
}, function (n, m_1) {
return new DocumentModel(n, m_1.FileName, m_1.Label, m_1.Status, m_1.Patient, m_1.Date, m_1.Notes);
}];
}
static get fileName() {
return [function (m) {
return m.FileName;
}, function (n, m_1) {
return new DocumentModel(m_1.File, n, m_1.Label, m_1.Status, m_1.Patient, m_1.Date, m_1.Notes);
}];
}
static get label() {
return [function (m) {
return m.Label;
}, function (n, m_1) {
return new DocumentModel(m_1.File, m_1.FileName, n, m_1.Status, m_1.Patient, m_1.Date, m_1.Notes);
}];
}
static get status() {
return [function (m) {
return m.Status;
}, function (n, m_1) {
return new DocumentModel(m_1.File, m_1.FileName, m_1.Label, n, m_1.Patient, m_1.Date, m_1.Notes);
}];
}
static get patient() {
return [function (m) {
return m.Patient;
}, function (n, m_1) {
return new DocumentModel(m_1.File, m_1.FileName, m_1.Label, m_1.Status, n, m_1.Date, m_1.Notes);
}];
}
static get date() {
return [function (m) {
return m.Date;
}, function (n, m_1) {
return new DocumentModel(m_1.File, m_1.FileName, m_1.Label, m_1.Status, m_1.Patient, n, m_1.Notes);
}];
}
static get note() {
return [function (m) {
return m.Notes;
}, function (n, m_1) {
return new DocumentModel(m_1.File, m_1.FileName, m_1.Label, m_1.Status, m_1.Patient, m_1.Date, n);
}];
}
}
setType("Test.DocumentModel", DocumentModel);
export class ErrorModel {
constructor(file, fileName, label, patient, date, notes) {
this.File = file;
this.FileName = fileName;
this.Label = label;
this.Patient = patient;
this.Date = date;
this.Notes = notes;
}
[_Symbol.reflection]() {
return {
type: "Test.ErrorModel",
interfaces: ["FSharpRecord", "System.IEquatable", "System.IComparable"],
properties: {
File: Validate,
FileName: Validate,
Label: Validate,
Patient: Validate,
Date: Validate,
Notes: Validate
}
};
}
Equals(other) {
return equalsRecords(this, other);
}
CompareTo(other) {
return compareRecords(this, other) | 0;
}
static get file() {
return [function (m) {
return m.File;
}, function (n, m_1) {
return new ErrorModel(n, m_1.FileName, m_1.Label, m_1.Patient, m_1.Date, m_1.Notes);
}];
}
static get fileName() {
return [function (m) {
return m.FileName;
}, function (n, m_1) {
return new ErrorModel(m_1.File, n, m_1.Label, m_1.Patient, m_1.Date, m_1.Notes);
}];
}
static get label() {
return [function (m) {
return m.Label;
}, function (n, m_1) {
return new ErrorModel(m_1.File, m_1.FileName, n, m_1.Patient, m_1.Date, m_1.Notes);
}];
}
static get patient() {
return [function (m) {
return m.Patient;
}, function (n, m_1) {
return new ErrorModel(m_1.File, m_1.FileName, m_1.Label, n, m_1.Date, m_1.Notes);
}];
}
static get date() {
return [function (m) {
return m.Date;
}, function (n, m_1) {
return new ErrorModel(m_1.File, m_1.FileName, m_1.Label, m_1.Patient, n, m_1.Notes);
}];
}
static get notes() {
return [function (m) {
return m.Notes;
}, function (n, m_1) {
return new ErrorModel(m_1.File, m_1.FileName, m_1.Label, m_1.Patient, m_1.Date, n);
}];
}
}
setType("Test.ErrorModel", ErrorModel);
export class Model {
constructor(id, showUploadModal, documentModel, errorModel, isValid) {
this.Id = id | 0;
this.ShowUploadModal = showUploadModal;
this.DocumentModel = documentModel;
this.ErrorModel = errorModel;
this.IsValid = isValid;
}
[_Symbol.reflection]() {
return {
type: "Test.Model",
interfaces: ["FSharpRecord", "System.IEquatable", "System.IComparable"],
properties: {
Id: "number",
ShowUploadModal: "boolean",
DocumentModel: DocumentModel,
ErrorModel: ErrorModel,
IsValid: "boolean"
}
};
}
Equals(other) {
return equalsRecords(this, other);
}
CompareTo(other) {
return compareRecords(this, other) | 0;
}
static get documentModel() {
return [function (m) {
return m.DocumentModel;
}, function (n, m_1) {
return new Model(m_1.Id, m_1.ShowUploadModal, n, m_1.ErrorModel, m_1.IsValid);
}];
}
static get errorModel() {
return [function (m) {
return m.ErrorModel;
}, function (n, m_1) {
return new Model(m_1.Id, m_1.ShowUploadModal, m_1.DocumentModel, n, m_1.IsValid);
}];
}
}
setType("Test.Model", Model);
And here is the updated code:
open Fable.Core
open Fable.Core.JsInterop
[<Pojo>]
type Validate = {
IsValid : bool
ErrMsg : string
}
module ValidateLens =
let isValid =
(fun m -> m.IsValid), (fun n m -> {m with IsValid = n})
let errMsg =
(fun m -> m.ErrMsg), (fun n m -> {m with ErrMsg = n})
type DocumentStatus = Select = 0 | All = 1 | New = 2 | InProcess = 3 | Processed = 4 | Error = 5
[<Pojo>]
type DocumentModel = {
File : string
FileName : string
Label : string
Status : DocumentStatus
Patient : string
Date : string
Notes : string
}
module DocumentModelLens =
let file = (fun m -> m.File), (fun n m -> {m with File = n})
let fileName = (fun m -> m.FileName), (fun n m -> {m with FileName = n})
let label = (fun m -> m.Label), (fun n m -> {m with Label = n})
let status = (fun m -> m.Status), (fun n m -> {m with Status = n})
let patient = (fun m -> m.Patient), (fun n m -> {m with Patient = n})
let date = (fun m -> m.Date), (fun n m -> {m with Date = n})
let note = (fun m -> m.Notes), (fun n m -> {m with Notes = n})
[<Pojo>]
type ErrorModel = {
File : Validate
FileName : Validate
Label : Validate
Patient : Validate
Date : Validate
Notes : Validate
}
module ErrorModelLens =
let file = (fun m -> m.File), (fun n (m:ErrorModel) -> {m with File = n})
let fileName = (fun m -> m.FileName), (fun n (m:ErrorModel) -> {m with FileName = n})
let label = (fun m -> m.Label), (fun n (m:ErrorModel) -> {m with Label = n})
let patient = (fun m -> m.Patient), (fun n (m:ErrorModel) -> {m with Patient = n})
let date = (fun m -> m.Date), (fun n (m:ErrorModel) -> {m with Date = n})
let notes = (fun m -> m.Notes), (fun n (m:ErrorModel) -> {m with Notes = n})
[<Pojo>]
type Model = {
Id : int
ShowUploadModal : bool
DocumentModel : DocumentModel
ErrorModel : ErrorModel
IsValid : bool
}
module ModelLens =
let documentModel = (fun m -> m.DocumentModel),(fun n m -> {m with DocumentModel = n})
let errorModel = (fun m -> m.ErrorModel), (fun n m -> {m with ErrorModel = n})
It will generate more JavaScript-y code:
export const ValidateLens = function (__exports) {
const isValid = __exports.isValid = [function (m) {
return m.IsValid;
}, function (n, m_1) {
return {
IsValid: n,
ErrMsg: m_1.ErrMsg
};
}];
const errMsg = __exports.errMsg = [function (m) {
return m.ErrMsg;
}, function (n, m_1) {
return {
IsValid: m_1.IsValid,
ErrMsg: n
};
}];
return __exports;
}({});
export const DocumentModelLens = function (__exports) {
const file = __exports.file = [function (m) {
return m.File;
}, function (n, m_1) {
return {
File: n,
FileName: m_1.FileName,
Label: m_1.Label,
Status: m_1.Status,
Patient: m_1.Patient,
Date: m_1.Date,
Notes: m_1.Notes
};
}];
const fileName = __exports.fileName = [function (m) {
return m.FileName;
}, function (n, m_1) {
return {
File: m_1.File,
FileName: n,
Label: m_1.Label,
Status: m_1.Status,
Patient: m_1.Patient,
Date: m_1.Date,
Notes: m_1.Notes
};
}];
const label = __exports.label = [function (m) {
return m.Label;
}, function (n, m_1) {
return {
File: m_1.File,
FileName: m_1.FileName,
Label: n,
Status: m_1.Status,
Patient: m_1.Patient,
Date: m_1.Date,
Notes: m_1.Notes
};
}];
const status = __exports.status = [function (m) {
return m.Status;
}, function (n, m_1) {
return {
File: m_1.File,
FileName: m_1.FileName,
Label: m_1.Label,
Status: n,
Patient: m_1.Patient,
Date: m_1.Date,
Notes: m_1.Notes
};
}];
const patient = __exports.patient = [function (m) {
return m.Patient;
}, function (n, m_1) {
return {
File: m_1.File,
FileName: m_1.FileName,
Label: m_1.Label,
Status: m_1.Status,
Patient: n,
Date: m_1.Date,
Notes: m_1.Notes
};
}];
const date = __exports.date = [function (m) {
return m.Date;
}, function (n, m_1) {
return {
File: m_1.File,
FileName: m_1.FileName,
Label: m_1.Label,
Status: m_1.Status,
Patient: m_1.Patient,
Date: n,
Notes: m_1.Notes
};
}];
const note = __exports.note = [function (m) {
return m.Notes;
}, function (n, m_1) {
return {
File: m_1.File,
FileName: m_1.FileName,
Label: m_1.Label,
Status: m_1.Status,
Patient: m_1.Patient,
Date: m_1.Date,
Notes: n
};
}];
return __exports;
}({});
export const ErrorModelLens = function (__exports) {
const file_1 = __exports.file = [function (m) {
return m.File;
}, function (n, m_1) {
return {
File: n,
FileName: m_1.FileName,
Label: m_1.Label,
Patient: m_1.Patient,
Date: m_1.Date,
Notes: m_1.Notes
};
}];
const fileName_1 = __exports.fileName = [function (m) {
return m.FileName;
}, function (n, m_1) {
return {
File: m_1.File,
FileName: n,
Label: m_1.Label,
Patient: m_1.Patient,
Date: m_1.Date,
Notes: m_1.Notes
};
}];
const label_1 = __exports.label = [function (m) {
return m.Label;
}, function (n, m_1) {
return {
File: m_1.File,
FileName: m_1.FileName,
Label: n,
Patient: m_1.Patient,
Date: m_1.Date,
Notes: m_1.Notes
};
}];
const patient_1 = __exports.patient = [function (m) {
return m.Patient;
}, function (n, m_1) {
return {
File: m_1.File,
FileName: m_1.FileName,
Label: m_1.Label,
Patient: n,
Date: m_1.Date,
Notes: m_1.Notes
};
}];
const date_1 = __exports.date = [function (m) {
return m.Date;
}, function (n, m_1) {
return {
File: m_1.File,
FileName: m_1.FileName,
Label: m_1.Label,
Patient: m_1.Patient,
Date: n,
Notes: m_1.Notes
};
}];
const notes = __exports.notes = [function (m) {
return m.Notes;
}, function (n, m_1) {
return {
File: m_1.File,
FileName: m_1.FileName,
Label: m_1.Label,
Patient: m_1.Patient,
Date: m_1.Date,
Notes: n
};
}];
return __exports;
}({});
export const ModelLens = function (__exports) {
const documentModel = __exports.documentModel = [function (m) {
return m.DocumentModel;
}, function (n, m_1) {
return {
Id: m_1.Id,
ShowUploadModal: m_1.ShowUploadModal,
DocumentModel: n,
ErrorModel: m_1.ErrorModel,
IsValid: m_1.IsValid
};
}];
const errorModel = __exports.errorModel = [function (m) {
return m.ErrorModel;
}, function (n, m_1) {
return {
Id: m_1.Id,
ShowUploadModal: m_1.ShowUploadModal,
DocumentModel: m_1.DocumentModel,
ErrorModel: n,
IsValid: m_1.IsValid
};
}];
return __exports;
}({});
So, what would be a good route to take? Can this be the default in Fable? @alfonsogarciacaro
Update 2: I am halfway through modifying all models with Pojo. It is reducing the code, but it seems like very little. Will keep posting updates.
I think it would be better to keep this in a separate issue @kunjee17, and just try to keep this one a discussion :)
@MangelMaxime sorry, got carried away. But yes, we need lightweight types, or guidelines to achieve that...
@michaelsg I think we had a similar discussion when releasing Fable 0.7 or 1.0. To support deserialization with type info we added a global variable to hold a type dictionary. This is the only global variable in fable-core and I'd like to remove it for the next version, as I haven't heard of anybody else using this feature.
Generics will still be available so you could do something like the following to tag your JSON with the typename:
let inline toJsonWithTypeName (o: 'T) =
toJson (typeof<'T>.FullName, o) // On server side you would use Newtonsoft.Json
let ofJsonWithTypeName json =
let (typeName, parsed): string * obj = ofJson json
if typeName = typeof<MyType1>.FullName then
let x = inflate<MyType1> parsed
// Do something with MyType1
()
elif typeName = typeof<MyType2>.FullName then
let x = inflate<MyType2> parsed
// Do something with MyType2
()
else
failwithf "Cannot handle type %s" typeName
This code is not very elegant (probably we can improve it using active patterns or a type dictionary), but it's a small price to pay to improve the rest of your (and everybody else's) generated code. As Fable apps grow, we're noticing that bundles are getting quite big, so we need to remove some runtime info in order to remain competitive against alternative solutions (like Reason).
One note of caution: inflate is currently not recursive, but I guess we can fix that if necessary.
Two other notes: unions and records will be plain objects, so you will be able to parse them just with the native browser API JSON.parse. F# classes will still be translated to JS classes, so type testing will be possible at runtime.
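A quick sketch of what that enables (field names are illustrative):

```javascript
// With records compiled to pojos, serialization round-trips through the
// native JSON API with no Fable-specific helpers:
const record = { Name: "a", Age: 1 };
const parsed = JSON.parse(JSON.stringify(record));
// `parsed` is a plain object, directly usable where the record was expected.
```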
@kunjee17 Yes, it's better to move the discussion to Fulma or open a new issue :) Please check the size of the gzipped bundle, as that would be the size actually downloaded if your server gzips JS files, which it should be doing ;) Fable 2.0 will improve the situation, but if your bundle grows too much at some point you may need to split your app.
It seems that emulating classes with properties is not always faster. When the call frequency is high (call-heavy code), plain old JavaScript objects are significantly (~2x) faster than properties.
I have a message codec (the typical code looks like this flyweight: https://github.com/fable-compiler/Fable/issues/1352#issuecomment-385976678). I started to port my F# code to Fable 2 as it was reported to be faster than Fable 1. Surprisingly this is not the case with call-heavy code: the Fable 2 code spends lots of time doing property operations, in addition to the extra indirection cost (both the Fable 1 and Fable 2 profiles are linked). The application is not open source (but eventually I would like to have a simplified yet representative version to open up as a workload use case, to track compiler performance).
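A minimal illustration of the difference being measured (not a rigorous benchmark; real numbers depend on the engine and its optimizer):

```javascript
// The same field exposed as a plain data property vs an accessor property.
const pojo = { x: 1 };
const withGetter = { get x() { return 1; } };

// A call-heavy loop: engines can usually inline the plain property read,
// while the getter adds a function call the optimizer may not always
// eliminate, which is the ~2x gap reported above.
function sum(obj, n) {
  let total = 0;
  for (let i = 0; i < n; i++) total += obj.x;
  return total;
}
```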
Fable 1 (each line represents an alternating encode/decode operation of 1,000,000 repetitions, reusing the corresponding encode/decode buffer):
0: 3.852000(s) time, 3852.000000(ns/msg) average latency, 259605.399792(msg/s) average throughput - message size: 146 - GC count: 1
0: 4.157000(s) time, 4157.000000(ns/msg) average latency, 240558.094780(msg/s) average throughput - message size: 146 - GC count: 1
1: 3.835000(s) time, 3835.000000(ns/msg) average latency, 260756.192960(msg/s) average throughput - message size: 146 - GC count: 1
1: 4.011000(s) time, 4011.000000(ns/msg) average latency, 249314.385440(msg/s) average throughput - message size: 146 - GC count: 1
2: 3.893000(s) time, 3893.000000(ns/msg) average latency, 256871.307475(msg/s) average throughput - message size: 146 - GC count: 1
2: 4.086000(s) time, 4086.000000(ns/msg) average latency, 244738.130201(msg/s) average throughput - message size: 146 - GC count: 1
3: 3.849000(s) time, 3849.000000(ns/msg) average latency, 259807.742271(msg/s) average throughput - message size: 146 - GC count: 1
3: 4.165000(s) time, 4165.000000(ns/msg) average latency, 240096.038415(msg/s) average throughput - message size: 146 - GC count: 1
4: 3.984000(s) time, 3984.000000(ns/msg) average latency, 251004.016064(msg/s) average throughput - message size: 146 - GC count: 1
4: 4.126000(s) time, 4126.000000(ns/msg) average latency, 242365.487155(msg/s) average throughput - message size: 146 - GC count: 1
5: 3.897000(s) time, 3897.000000(ns/msg) average latency, 256607.646908(msg/s) average throughput - message size: 146 - GC count: 1
5: 4.259000(s) time, 4259.000000(ns/msg) average latency, 234796.900681(msg/s) average throughput - message size: 146 - GC count: 1
6: 3.930000(s) time, 3930.000000(ns/msg) average latency, 254452.926209(msg/s) average throughput - message size: 146 - GC count: 1
6: 4.223000(s) time, 4223.000000(ns/msg) average latency, 236798.484490(msg/s) average throughput - message size: 146 - GC count: 1
7: 3.917000(s) time, 3917.000000(ns/msg) average latency, 255297.421496(msg/s) average throughput - message size: 146 - GC count: 1
7: 4.238000(s) time, 4238.000000(ns/msg) average latency, 235960.358660(msg/s) average throughput - message size: 146 - GC count: 1
8: 3.880000(s) time, 3880.000000(ns/msg) average latency, 257731.958763(msg/s) average throughput - message size: 146 - GC count: 1
8: 4.178000(s) time, 4178.000000(ns/msg) average latency, 239348.970799(msg/s) average throughput - message size: 146 - GC count: 1
9: 3.931000(s) time, 3931.000000(ns/msg) average latency, 254388.196388(msg/s) average throughput - message size: 146 - GC count: 1
9: 4.241000(s) time, 4241.000000(ns/msg) average latency, 235793.444942(msg/s) average throughput - message size: 146 - GC count: 1
10: 3.953000(s) time, 3953.000000(ns/msg) average latency, 252972.426006(msg/s) average throughput - message size: 146 - GC count: 1
10: 4.156000(s) time, 4156.000000(ns/msg) average latency, 240615.976901(msg/s) average throughput - message size: 146 - GC count: 1
Node profile (redacted): https://gist.github.com/zpodlovics/9d42207536e15efe042a766b1fbbfae2
Fable2:
0: 7.198000(s) time, 7198.000000(ns/msg) average latency, 138927.479856(msg/s) average throughput - message size: 146 - GC count: 1
0: 7.473000(s) time, 7473.000000(ns/msg) average latency, 133815.067577(msg/s) average throughput - message size: 146 - GC count: 1
1: 7.378000(s) time, 7378.000000(ns/msg) average latency, 135538.086202(msg/s) average throughput - message size: 146 - GC count: 1
1: 7.346000(s) time, 7346.000000(ns/msg) average latency, 136128.505309(msg/s) average throughput - message size: 146 - GC count: 1
2: 7.486000(s) time, 7486.000000(ns/msg) average latency, 133582.687684(msg/s) average throughput - message size: 146 - GC count: 1
2: 7.576000(s) time, 7576.000000(ns/msg) average latency, 131995.776135(msg/s) average throughput - message size: 146 - GC count: 1
3: 7.428000(s) time, 7428.000000(ns/msg) average latency, 134625.740442(msg/s) average throughput - message size: 146 - GC count: 1
3: 7.526000(s) time, 7526.000000(ns/msg) average latency, 132872.707946(msg/s) average throughput - message size: 146 - GC count: 1
4: 7.522000(s) time, 7522.000000(ns/msg) average latency, 132943.366126(msg/s) average throughput - message size: 146 - GC count: 1
4: 7.529000(s) time, 7529.000000(ns/msg) average latency, 132819.763581(msg/s) average throughput - message size: 146 - GC count: 1
5: 7.535000(s) time, 7535.000000(ns/msg) average latency, 132714.001327(msg/s) average throughput - message size: 146 - GC count: 1
5: 7.628000(s) time, 7628.000000(ns/msg) average latency, 131095.962244(msg/s) average throughput - message size: 146 - GC count: 1
6: 7.468000(s) time, 7468.000000(ns/msg) average latency, 133904.659882(msg/s) average throughput - message size: 146 - GC count: 1
6: 7.524000(s) time, 7524.000000(ns/msg) average latency, 132908.027645(msg/s) average throughput - message size: 146 - GC count: 1
7: 7.541000(s) time, 7541.000000(ns/msg) average latency, 132608.407373(msg/s) average throughput - message size: 146 - GC count: 1
7: 7.604000(s) time, 7604.000000(ns/msg) average latency, 131509.731720(msg/s) average throughput - message size: 146 - GC count: 1
8: 7.441000(s) time, 7441.000000(ns/msg) average latency, 134390.538906(msg/s) average throughput - message size: 146 - GC count: 1
8: 7.497000(s) time, 7497.000000(ns/msg) average latency, 133386.688009(msg/s) average throughput - message size: 146 - GC count: 1
9: 7.512000(s) time, 7512.000000(ns/msg) average latency, 133120.340788(msg/s) average throughput - message size: 146 - GC count: 1
9: 7.466000(s) time, 7466.000000(ns/msg) average latency, 133940.530405(msg/s) average throughput - message size: 146 - GC count: 1
10: 7.414000(s) time, 7414.000000(ns/msg) average latency, 134879.956838(msg/s) average throughput - message size: 146 - GC count: 1
10: 7.549000(s) time, 7549.000000(ns/msg) average latency, 132467.876540(msg/s) average throughput - message size: 146 - GC count: 1
Node profile (redacted): https://gist.github.com/zpodlovics/a454ad7521945f367bc13b69a7230118
Environment: Ubuntu 16.04 x86_64 Node: v10.9.0 Fable1: 1.3.17 Fable2: 2.0.0-beta-001
These results are very interesting @zpodlovics, thanks for sharing! So in your case Fable 2 is running slower than Fable 1. Actually, we've been working on PR #1512 to inline property calls (besides that, we're also planning to enable the F# compiler optimization that inlines properties). We haven't merged it yet because we didn't see a performance gain (we usually use the REPL itself to measure that), but maybe your project can be a very interesting way to test this.
To clarify, #1512 introduces a quite large (up to 60%) performance degradation on some workloads, like the REPL.
@alfonsogarciacaro I am afraid the property inlining will not help Fable2 here, as it looks like the property itself introduces lots of additional operations and indirection (and will trash the data and instruction caches) instead of being a simple object field access: (https://gist.github.com/zpodlovics/a454ad7521945f367bc13b69a7230118#file-fable2-profile-txt-L362). And inlining is not always a win, as you usually have to execute a lot more code instead of reusing the hot instruction and uop caches via a call. Small, tight code (e.g. a K interpreter) can be really, really fast: (http://tech.marksblogg.com/billion-nyc-taxi-kdb.html).
Property related new functionality from the Fable2 profile:
[C++ entry points]:
ticks cpp total name
45882 43.0% 28.1% v8::internal::Runtime_DefineDataPropertyInLiteral(int, v8::internal::Object**, v8::internal::Isolate*)
A few property related functionality overhead from the Fable2 profile:
5137 3.1% 3.1% v8::internal::JSObject::MigrateToMap(v8::internal::Handle<v8::internal::JSObject>, v8::internal::Handle<v8::internal::Map>, int)
[..]
4623 2.8% 2.8% v8::internal::Runtime_DefineDataPropertyInLiteral(int, v8::internal::Object**, v8::internal::Isolate*)
[..]
3595 2.2% 2.2% v8::internal::LookupIterator::WriteDataValue(v8::internal::Handle<v8::internal::Object>, bool)
[..]
2734 1.7% 1.7% v8::internal::LookupIterator::ApplyTransitionToDataProperty(v8::internal::Handle<v8::internal::JSReceiver>)
[..]
2722 1.7% 1.7% v8::internal::Map::TransitionToDataProperty(v8::internal::Handle<v8::internal::Map>, v8::internal::Handle<v8::internal::Name>, v8::internal::Handle<v8::internal::Object>, v8::internal::PropertyAttributes, v8::internal::PropertyConstness, v8::internal::Object::StoreFromKeyed)
[..]
2582 1.6% 1.6% v8::internal::LookupIterator::PrepareTransitionToDataProperty(v8::internal::Handle<v8::internal::JSReceiver>, v8::internal::Handle<v8::internal::Object>, v8::internal::PropertyAttributes, v8::internal::Object::StoreFromKeyed)
[..]
2399 1.5% 1.5% v8::internal::TransitionsAccessor::SearchTransition(v8::internal::Name*, v8::internal::PropertyKind, v8::internal::PropertyAttributes)
[..]
2066 1.3% 1.3% v8::internal::LookupIterator::State v8::internal::LookupIterator::LookupInRegularHolder<false>(v8::internal::Map*, v8::internal::JSReceiver*)
[..]
1822 1.1% 1.1% v8::internal::(anonymous namespace)::UpdateDescriptorForValue(v8::internal::Handle<v8::internal::Map>, int, v8::internal::PropertyConstness, v8::internal::Handle<v8::internal::Object>)
[..]
1777 1.1% 1.1% v8::internal::Object::AddDataProperty(v8::internal::LookupIterator*, v8::internal::Handle<v8::internal::Object>, v8::internal::PropertyAttributes, v8::internal::ShouldThrow, v8::internal::Object::StoreFromKeyed)
[..]
1743 1.1% 1.1% v8::internal::Handle<v8::internal::PropertyArray> v8::internal::Factory::CopyArrayAndGrow<v8::internal::PropertyArray>(v8::internal::Handle<v8::internal::PropertyArray>, int, v8::internal::PretenureFlag)
[..]
1587 1.0% 1.0% v8::internal::LookupIterator::UpdateProtector() [clone .part.349]
[..]
1283 0.8% 0.8% v8::internal::JSObject::DefineOwnPropertyIgnoreAttributes(v8::internal::LookupIterator*, v8::internal::Handle<v8::internal::Object>, v8::internal::PropertyAttributes, v8::internal::ShouldThrow, v8::internal::JSObject::AccessorInfoHandling)
[..]
1270 0.8% 0.8% int v8::internal::BinarySearch<(v8::internal::SearchMode)1, v8::internal::DescriptorArray>(v8::internal::DescriptorArray*, v8::internal::Name*, int, int*)
[..]
1062 0.7% 0.7% void v8::internal::LookupIterator::Start<false>()
[..]
1049 0.6% 0.6% void v8::internal::LookupIterator::NextInternal<false>(v8::internal::Map*, v8::internal::JSReceiver*)
[..]
917 0.6% 0.6% v8::internal::HeapObject::synchronized_set_map(v8::internal::Map*)
A few % here and another few % there, and this small list is suddenly responsible for ~22.6%.
Thanks for the new data @zpodlovics. To be sure I'm understanding it well, is this the change in the generated JS code which is causing the performance issue?
// Fable 1
class Foo {
get MyProperty() {
return costlyOperation(this);
}
}
x.MyProperty // Use example
// Fable 2
function Foo$$get_MyProperty(this$) {
return costlyOperation(this$);
}
Foo$$get_MyProperty(x) // Use example
I was hoping that using a function (besides better compatibility with JS tree shaking and minification) would help the JS runtime use static dispatch, but maybe I was wrong. However, in both cases costlyOperation
is evaluated every time the property is accessed. We cannot change that because it's part of F# semantics. If you need to cache the value, you need to do it explicitly or use an auto-implemented property: member val MyProperty = costlyOperation()
In any case I'm open to attach properties again to the prototype if this can improve performance. If we create a branch with this change, would you test it in your project?
The performance hit, according to my profiling, comes from the Fable1 object -> Fable2 property change. There are no costly operations in the encoding/decoding path. Each call is nothing more than a primitive type read/write (plus sometimes a small (< 100 byte) array read/write) and a minimal transformation (usually integer width expansion/truncation) if needed. However, there are lots of fields in an encoder/decoder (not only primitive types, but also structures, including nested structures and variable-length data, built from those primitive types). The encoder/decoder is allocated once (with a nested encoder/decoder for each non-primitive field/structure), so message encoding/decoding can be done without allocation in the hot path.
A trivial header decoder/encoder with two fields:
Fable1 generated js: https://gist.github.com/zpodlovics/96013570fe8130628004ee8ed05cbfb1
Fable2 generated js: https://gist.github.com/zpodlovics/917dd235b1c726921d44ba16ade3271f
Flamegraphs (using the earlier benchmarks - is there any better way than this to share svg?):
Fable1 flamegraph: https://gist.github.com/zpodlovics/8b7a3fa3890388f1ad1e233e85275283
Fable2 flamegraph: https://gist.github.com/zpodlovics/96b4b4ff169e477d16328abe28138c94
The field read/write operation must be evaluated every time, as expected. In a concurrent environment (JS shared buffers + workers) not even local CPU caching is allowed; reads/writes go through physical memory (volatile operations).
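The allocation-free decoding pattern described above can be sketched roughly like this (a hedged illustration, not the actual codec: the field names, offsets, and `HeaderDecoder` type are all hypothetical). The decoder is allocated once and re-pointed at each message buffer, so hot-path field reads are plain typed reads at fixed offsets:

```javascript
// Hypothetical two-field header decoder: reused across messages, so the
// hot path performs only fixed-offset primitive reads (little-endian).
class HeaderDecoder {
  wrap(buffer, offset) {
    this.view = new DataView(buffer);
    this.offset = offset;
    return this; // reusable: no per-message allocation besides the DataView
  }
  blockLength() { return this.view.getUint16(this.offset + 0, true); }
  templateId()  { return this.view.getUint16(this.offset + 2, true); }
}
```

With `SharedArrayBuffer` instead of `ArrayBuffer`, the same reads would go through shared memory as the comment above describes.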
Thank you for your support! I am happy to help to improve Fable, so please create the branch and I will test it in my project.
Update: "This article describes some key fundamentals that are common to all JavaScript engines — and not just V8, the engine the authors (Benedikt and Mathias) work on. As a JavaScript developer, having a deeper understanding of how JavaScript engines work helps you reason about the performance characteristics of your code." https://mathiasbynens.be/notes/shapes-ics https://mathiasbynens.be/notes/prototypes
@alfonsogarciacaro Based on my earlier updates (Mathias Bynens posts + conference videos), it seems that the performance hit comes from the multiple shape definitions and the shape walking (plus the additional indirection introduced by the standalone functions?). The defineProperty usage also seems problematic. In theory V8 has some inline cache tracing functionality (at least in debug mode, --trace-ic
); I will check it out later.
const object1 = { x: 1, y: 2 };
vs
const object = {};
object.x = 5;
object.y = 6;
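The two snippets above illustrate the "shape" (hidden class) point: both build the same logical object, but the literal form keeps a single shape while incremental assignment walks through shape transitions ({} -> {x} -> {x, y}), which is the kind of work that shows up as MigrateToMap / TransitionToDataProperty in the profile. A minimal sketch (function names are illustrative):

```javascript
// One shape, established at creation: engines can keep property access
// monomorphic for every object produced here.
function makePointFast(x, y) {
  return { x: x, y: y };
}

// Starts with the empty shape and performs two shape transitions per
// object: p = {} -> {x} -> {x, y}.
function makePointSlow(x, y) {
  const p = {};
  p.x = x;
  p.y = y;
  return p;
}
```

Both return equivalent objects; only the engine-internal shape history differs.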
Probably also worth checking this: https://medium.com/@bmeurer/surprising-polymorphism-in-react-applications-63015b50abc
@zpodlovics I'm not very familiar with JS profiling tools, but I was aware that it's better to avoid polymorphism to help the JS engine optimize the code. However, I was hoping that the standalone functions could be optimized because they're always called with the same type.
In any case, I've created the properties
branch to experiment with attaching getters and setters (without indices) to the prototype. Can you please check it out and test it? You can find instructions on how to test local builds here: https://github.com/fable-compiler/Fable/tree/properties#using-your-local-build-in-your-projects
Maybe @ncave can also give it a go?
@alfonsogarciacaro My build fails with some Map/Set test error:
/usr/bin/yarn run test
yarn run v1.5.1
$ node ../node_modules/mocha/bin/mocha ../build/tests --reporter dot -t 10000
[..]
1447 passing (7s)
24 failing
TypeError: (0 , _Map.FSharpMap$$get_IsEmpty) is not a function
at Context.<anonymous> (/tmp/fable-compiler/build/tests/MapTests.js:43:69)
Full error list:
https://gist.github.com/zpodlovics/a19bbfba7c1d47796c160604af3e90f3
Update: commented out the tests, now fighting with https://github.com/fable-compiler/Fable/issues/1413
Sorry, I'm an idiot. Forgot to rebuild fable-core before running the tests. Let me fix this.
Still waiting for CI to finish, but hopefully it's fixed now :pray:
Tests are passing but the REPL is not building :/ Anyways, can you please give it a go @zpodlovics? If it works in your case and shows a performance improvement, we can work on top of it to fix the remaining issues.
@alfonsogarciacaro Thank you! I have managed to get it working (now it's based on fable-core.2.0.0-beta-002). While the code looks different, the performance remains roughly the same. I am afraid there is no other way than to dig deeper with the JS profiling tools.
0: 7.343000(s) time, 7343.000000(ns/msg) average latency, 136184.120931(msg/s) average throughput - message size: 146 - GC count: 1
0: 7.554000(s) time, 7554.000000(ns/msg) average latency, 132380.195923(msg/s) average throughput - message size: 146 - GC count: 1
1: 7.593000(s) time, 7593.000000(ns/msg) average latency, 131700.250230(msg/s) average throughput - message size: 146 - GC count: 1
1: 7.636000(s) time, 7636.000000(ns/msg) average latency, 130958.617077(msg/s) average throughput - message size: 146 - GC count: 1
2: 7.542000(s) time, 7542.000000(ns/msg) average latency, 132590.824715(msg/s) average throughput - message size: 146 - GC count: 1
2: 7.700000(s) time, 7700.000000(ns/msg) average latency, 129870.129870(msg/s) average throughput - message size: 146 - GC count: 1
3: 7.701000(s) time, 7701.000000(ns/msg) average latency, 129853.265810(msg/s) average throughput - message size: 146 - GC count: 1
3: 7.652000(s) time, 7652.000000(ns/msg) average latency, 130684.788291(msg/s) average throughput - message size: 146 - GC count: 1
4: 7.707000(s) time, 7707.000000(ns/msg) average latency, 129752.173349(msg/s) average throughput - message size: 146 - GC count: 1
4: 7.805000(s) time, 7805.000000(ns/msg) average latency, 128122.998078(msg/s) average throughput - message size: 146 - GC count: 1
5: 7.751000(s) time, 7751.000000(ns/msg) average latency, 129015.610889(msg/s) average throughput - message size: 146 - GC count: 1
5: 7.778000(s) time, 7778.000000(ns/msg) average latency, 128567.755207(msg/s) average throughput - message size: 146 - GC count: 1
6: 7.673000(s) time, 7673.000000(ns/msg) average latency, 130327.121074(msg/s) average throughput - message size: 146 - GC count: 1
6: 7.772000(s) time, 7772.000000(ns/msg) average latency, 128667.009779(msg/s) average throughput - message size: 146 - GC count: 1
7: 7.675000(s) time, 7675.000000(ns/msg) average latency, 130293.159609(msg/s) average throughput - message size: 146 - GC count: 1
7: 8.168000(s) time, 8168.000000(ns/msg) average latency, 122428.991185(msg/s) average throughput - message size: 146 - GC count: 1
8: 7.753000(s) time, 7753.000000(ns/msg) average latency, 128982.329421(msg/s) average throughput - message size: 146 - GC count: 1
8: 7.634000(s) time, 7634.000000(ns/msg) average latency, 130992.926382(msg/s) average throughput - message size: 146 - GC count: 1
9: 7.616000(s) time, 7616.000000(ns/msg) average latency, 131302.521008(msg/s) average throughput - message size: 146 - GC count: 1
9: 7.654000(s) time, 7654.000000(ns/msg) average latency, 130650.640188(msg/s) average throughput - message size: 146 - GC count: 1
10: 7.653000(s) time, 7653.000000(ns/msg) average latency, 130667.712008(msg/s) average throughput - message size: 146 - GC count: 1
10: 7.598000(s) time, 7598.000000(ns/msg) average latency, 131613.582522(msg/s) average throughput - message size: 146 - GC count: 1
https://gist.github.com/zpodlovics/266d35c2064af1ecad1733ecb21a2d56
@alfonsogarciacaro Your generic parameters / interface explanation here (https://github.com/fable-compiler/Fable/issues/1547#issuecomment-417596116) gave me an idea: try it without any interface-related "magic" in Fable2.
In the codec I replaced the interface type with a concrete implementation type, and voilà:
0: 3.847000(s) time, 3847.000000(ns/msg) average latency, 259942.812581(msg/s) average throughput - message size: 146 - GC count: 1
0: 3.824000(s) time, 3824.000000(ns/msg) average latency, 261506.276151(msg/s) average throughput - message size: 146 - GC count: 1
1: 3.822000(s) time, 3822.000000(ns/msg) average latency, 261643.118786(msg/s) average throughput - message size: 146 - GC count: 1
1: 3.821000(s) time, 3821.000000(ns/msg) average latency, 261711.593824(msg/s) average throughput - message size: 146 - GC count: 1
2: 3.827000(s) time, 3827.000000(ns/msg) average latency, 261301.280376(msg/s) average throughput - message size: 146 - GC count: 1
2: 3.855000(s) time, 3855.000000(ns/msg) average latency, 259403.372244(msg/s) average throughput - message size: 146 - GC count: 1
3: 3.925000(s) time, 3925.000000(ns/msg) average latency, 254777.070064(msg/s) average throughput - message size: 146 - GC count: 1
3: 3.833000(s) time, 3833.000000(ns/msg) average latency, 260892.251500(msg/s) average throughput - message size: 146 - GC count: 1
4: 3.825000(s) time, 3825.000000(ns/msg) average latency, 261437.908497(msg/s) average throughput - message size: 146 - GC count: 1
4: 3.827000(s) time, 3827.000000(ns/msg) average latency, 261301.280376(msg/s) average throughput - message size: 146 - GC count: 1
5: 3.841000(s) time, 3841.000000(ns/msg) average latency, 260348.867482(msg/s) average throughput - message size: 146 - GC count: 1
5: 3.818000(s) time, 3818.000000(ns/msg) average latency, 261917.234154(msg/s) average throughput - message size: 146 - GC count: 1
6: 3.803000(s) time, 3803.000000(ns/msg) average latency, 262950.302393(msg/s) average throughput - message size: 146 - GC count: 1
6: 3.794000(s) time, 3794.000000(ns/msg) average latency, 263574.064312(msg/s) average throughput - message size: 146 - GC count: 1
7: 3.825000(s) time, 3825.000000(ns/msg) average latency, 261437.908497(msg/s) average throughput - message size: 146 - GC count: 1
7: 3.836000(s) time, 3836.000000(ns/msg) average latency, 260688.216893(msg/s) average throughput - message size: 146 - GC count: 1
8: 3.853000(s) time, 3853.000000(ns/msg) average latency, 259538.022320(msg/s) average throughput - message size: 146 - GC count: 1
8: 3.838000(s) time, 3838.000000(ns/msg) average latency, 260552.371027(msg/s) average throughput - message size: 146 - GC count: 1
9: 3.851000(s) time, 3851.000000(ns/msg) average latency, 259672.812257(msg/s) average throughput - message size: 146 - GC count: 1
9: 3.833000(s) time, 3833.000000(ns/msg) average latency, 260892.251500(msg/s) average throughput - message size: 146 - GC count: 1
10: 3.849000(s) time, 3849.000000(ns/msg) average latency, 259807.742271(msg/s) average throughput - message size: 146 - GC count: 1
10: 3.904000(s) time, 3904.000000(ns/msg) average latency, 256147.540984(msg/s) average throughput - message size: 146 - GC count: 1
Oh, really cool @zpodlovics. If I read correctly, you even get a performance improvement here.
Can I ask you to share a before/after code example related to your point:
without any interface related "magic" with Fable2
That way we can refer to it later, and perhaps other people will find this trick useful :)
@MangelMaxime Nothing really, just replaced the interface type with a concrete type (but you have to do more complex things than this to measure the call overhead correctly).
[<Interface>]
type IFunc1 =
    abstract member Func1: int32 -> int32

type MyFunc1() =
    member this.Func1(x: int32) = x + 2
    interface IFunc1 with
        member this.Func1(x: int32) = this.Func1(x)

[<Class>]
type TestInterface(instance: IFunc1) =
    member this.Call(x: int32) =
        instance.Func1(x)

type TestGeneric<'T when 'T :> IFunc1>(instance: 'T) =
    member this.Call(x: int32) =
        instance.Func1(x)

type TestSpecialized(instance: MyFunc1) =
    member this.Call(x: int32) =
        instance.Func1(x)

let myF = MyFunc1()
let myCI = TestInterface(myF)
let myCG = TestGeneric(myF)
let myCS = TestSpecialized(myF)
Interesting, dynamic dispatch rears its ugly head :) SRTPs to the rescue!
@et1975 @realvictorprm Inline interfaces to the rescue: https://github.com/fsharp/fslang-suggestions/issues/641 and https://github.com/Microsoft/visualfsharp/pull/4726
Thanks for investigating this further @zpodlovics. So the performance loss comes from interface casting (in Fable 1 there was no casting, as interface members were attached directly to the object) rather than the standalone properties. This is somewhat expected, but we need to make users aware of it.
Another alternative could be to use object expressions to implement the interface instead of a type; these don't need casting.
Is it casting or dynamic dispatch? Seems the distinction would be important to know.
@et1975 It's more like shape creation and shape migration/transformation, at least based on the profiling/flamegraph.
Yes, we don't have something like virtual tables, but Fable 2 transforms (actually wraps) objects when they are cast to interfaces. As this operation is implicit when passing an object to a function that accepts the interface, it may happen too many times, hurting performance. Maybe we can find a way to detect these cases and either find a workaround or inform the user.
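To make the cost concrete, here is a hedged sketch (not Fable's actual generated code) of what wrapping on cast implies: every call site that passes a concrete type where an interface is expected allocates a fresh wrapper object, and that churn adds up in hot paths. The `castToIFunc1` helper and shapes below are hypothetical:

```javascript
// Hypothetical interface cast: build a wrapper that forwards the
// interface member to the concrete instance.
function castToIFunc1(target) {
  return { Func1: function (x) { return target.Func1(x); } };
}

const impl = { Func1: function (x) { return x + 2; } };

// Each implicit cast allocates a distinct wrapper, even for the same
// underlying instance:
const w1 = castToIFunc1(impl);
const w2 = castToIFunc1(impl);
// w1 !== w2, although both forward to impl
```

If this happens once per message in an encoder/decoder loop, the allocations alone can dominate the profile.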
One of the guidelines when designing Fable was to generate JS as standard as possible that could be easily consumed from other JS code. Because of this F# types (including records and unions) were compiled to the closest thing to real types in modern JS: ES2015 classes (with reflection info).
This is working well and it also means you can export the type to JS and call all properties and methods normally. However, it also means types become somewhat expensive (mainly in terms of code size) in JS, where it's more usual to define fewer classes and use more literal objects (aka plain old JS objects, or pojos). For example, 2 lines of F# can result in 36 lines of JS. It's becoming particularly obvious in Elmish apps, where you usually declare lots of small types to define messages and models. Fulma is an example of a very useful library that adds many KBs to the app bundle, which can be a big drawback for some users.
To counter this we are trying to optimize the compilation of certain types with special attributes (see #1308 #1316). This is giving good results, but it forces the user to remember to use the attribute and its special rules, and also to tolerate slightly different runtime semantics for the same type depending on the decorator (and we already have too many such attributes: Erase
, Pojo
...). A different approach for a new major version would be to compile all records and unions in a more lightweight fashion, so the semantics are more predictable and every user gets the benefits without using special attributes. An initial proposal could be to do something similar to what Reason/Bucklescript are doing: Bar(6, "foo")
becomes [0, 6, "foo"]
(the first element is the tag index). Unions without any data fields (e.g. type Foo = A | B | C
) could be compiled as integers (same as enums). This is what Reason/Bucklescript do, and given how often unions are used in F# programs to model messages etc., it should shave a lot of code (array syntax is very lightweight in JS) and bring performance gains (JS engines are heavily optimized to deal with arrays). Notice this means there won't be a class/function generated in JS to represent the type. Instance and static members of the type will become methods of the enclosing module. Actually, this is what the F# compiler does by default, but Fable mounts the members in the type.
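The proposed lightweight encoding could look roughly like this (an illustrative sketch, not final Fable 2 output; the constructor helpers and `show` function are hypothetical). For a union like `type Shape = Bar of int * string | Qux`, every case becomes a tagged array, and matching becomes a switch on the tag:

```javascript
// Hypothetical compiled form of: type Shape = Bar of int * string | Qux
const Bar = (n, s) => [0, n, s]; // tag 0 + payload fields
const Qux = [1];                 // tag 1, no payload

// Pattern matching compiles to a switch on the tag index:
function show(shape) {
  switch (shape[0]) {
    case 0: return "Bar " + shape[1] + " " + shape[2];
    case 1: return "Qux";
  }
}

// A union with no data fields at all (type Foo = A | B | C) would
// compile to plain integers, like an enum:
const A = 0, B = 1, C = 2;
```

So `Bar(6, "foo")` is just the array `[0, 6, "foo"]`, matching the example in the proposal above.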
This will take a considerable amount of work and will be incompatible with some of the current Fable features, specially around reflection and type testing. We have to decide carefully if the pros outweigh the cons:
Pros
Cons