gcanti / io-ts

Runtime type system for IO decoding/encoding
https://gcanti.github.io/io-ts/
MIT License

Interop with other Either types #78

Closed: OliverJAsh closed this issue 6 years ago

OliverJAsh commented 7 years ago

It would be nice if io-ts worked with different Either types. Perhaps we could provide an adapter? If we did this, we'd need to think about which parts of the Either type are required.

For context, on one project I am experimenting with the Either type from funfix (https://funfix.org/), and I want to avoid having duplicate Either types in that project.

gcanti commented 6 years ago

@OliverJAsh What do you mean by "adapter"? Calling fold on the validation result is the simplest solution I can think of:

import * as t from 'io-ts'
import { Either, Left, Right } from 'funfix-core'

type FunFixValidation<T> = Either<Array<t.ValidationError>, T>

// fold the fp-ts validation result into funfix's Either by handing it funfix's constructors
function myvalidate<T>(v: any, type: t.Type<T>): FunFixValidation<T> {
  return t.validate(v, type).fold<FunFixValidation<T>>(Left, Right)
}
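For illustration, a hypothetical call site (the User codec and the raw value below are made up) could then stay in funfix's Either end to end:

const User = t.type({ name: t.string, age: t.number })

myvalidate(JSON.parse('{"name":"foo","age":42}'), User).fold(
  errors => console.error(errors.length + ' validation error(s)'),
  user => console.log('decoded', user)
)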
sledorze commented 6 years ago

@gcanti I think @OliverJAsh was maybe thinking about something without the overhead of writing the call. But I can't see that happening without all the code that uses fp-ts's Either being abstracted over it, which seems to bring a lot more overhead.

OliverJAsh commented 6 years ago

By "adapter" I was thinking of a way to provide an Either constructor to io-ts. E.g.

t.validate(value, type, {
  getEither: () => FunfixEither
})

Essentially I want to avoid depending on (and therefore bundling) fp-ts when using io-ts if I already have another Either type in my application. 🤔
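Just to make the shape concrete, here is a rough sketch of what such an option could boil down to; validateWith and EitherAdapter are hypothetical names, not an existing io-ts API, and the "adapter" is really nothing more than the two constructors:

import * as t from 'io-ts'
import { Either, Left, Right } from 'funfix-core'

type Validation<T> = Either<Array<t.ValidationError>, T>

// the adapter only has to say how to build a failed and a successful result
interface EitherAdapter {
  left: (errors: Array<t.ValidationError>) => Validation<never>
  right: <T>(value: T) => Validation<T>
}

// hypothetical signature for the option sketched above
declare function validateWith<T>(value: any, type: t.Type<T>, adapter: EitherAdapter): Validation<T>

const funfixAdapter: EitherAdapter = { left: Left, right: Right }
// usage: validateWith(someValue, someType, funfixAdapter)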

zerobias commented 6 years ago

@OliverJAsh there is no need to create a dedicated method just for Eithers. JS provides a safe way to build custom protocols via symbols. That makes it possible to store type meta-information in fields and methods behind a symbol property, so other modern functional libraries can recognise your typeclasses in a conventional way.

And we already have a success story: streaming libraries use symbol-observable to interoperate with each other.

import { from } from 'most'
import { createStore } from 'redux'

const someStore = createStore(s => s) // typical redux store...
const storeUpdates$ =
  from(someStore) // ...also supports interconnection via Symbol.observable

So why not try it here too?

I have already implemented this concept and use it in production code: https://github.com/zerobias/apropos/tree/develop/packages/signature. It contains the symbol itself and several methods for using class type signatures in a statically typed environment.
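For illustration, the core of such a protocol is quite small. This is a hedged sketch with made-up names, not the actual apropos/signature API:

// a shared well-known symbol lets independent libraries recognise each other's Either,
// the same way symbol-observable lets streaming libraries recognise observables
const eitherSignature = Symbol.for('either-signature')

interface EitherLike<L, R> {
  readonly [eitherSignature]: true
  fold<B>(onLeft: (l: L) => B, onRight: (r: R) => B): B
}

function isEitherLike(x: any): x is EitherLike<any, any> {
  return x != null && x[eitherSignature] === true
}

// a library opts in simply by setting the symbol property on its Either instances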

OliverJAsh commented 6 years ago

That's food for thought. Thanks for sharing @zerobias!

gcanti commented 6 years ago

But I can't see that happening without all the code that uses fp-ts's Either being abstracted over it

@sledorze We could go even further: we might abstract over the effect returned by validate and use a finally tagless encoding

// assumed imports for this sketch (fp-ts 0.x HKT/Monad and io-ts's Context/ValidationError)
import { HKT } from 'fp-ts/lib/HKT'
import { Monad } from 'fp-ts/lib/Monad'
import { identity } from 'fp-ts/lib/function'
import { Context, ValidationError } from 'io-ts'

export type Errors = Array<ValidationError>

export interface MonadThrow<E, M> extends Monad<M> {
  throwError: <A>(e: E) => HKT<M, A>
}

export interface MonadType<M> extends MonadThrow<Errors, M> {
  zipWith: <A, B, C>(f: (a: A, b: B) => C) => (fa: HKT<M, A>, lazyfb: () => HKT<M, B>) => HKT<M, C>
}

export class Type<M, S, A> {
  readonly _A: A
  readonly _S: S
  readonly _M: M
  constructor(
    readonly name: string,
    readonly is: (v: any) => v is A,
    readonly validate: (s: S, c: Context) => HKT<M, A>,
    readonly serialize: (a: A) => S
  ) {}
}

export class StringType<M> extends Type<M, any, string> {
  readonly _tag: 'StringType' = 'StringType'
  constructor(M: MonadType<M>) {
    super(
      'string',
      (v): v is string => typeof v === 'string',
      (s, c) => (this.is(s) ? M.of(s) : M.throwError([{ value: s, context: c }])),
      identity
    )
  }
}

export const getStringType = <M>(M: MonadType<M>): StringType<M> => new StringType(M)
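To see what an instance has to provide, here is a self-contained toy illustration; it deliberately skips fp-ts's HKT encoding and specialises everything to one concrete Result type, so it is not the POC code:

// simplified error shape, standing in for Array<ValidationError>
type Err = Array<{ value: any; context: any }>
type Result<A> = { _tag: 'Left'; errors: Err } | { _tag: 'Right'; value: A }

// the MonadType operations, specialised to Result (Monad's map/chain/ap are omitted)
const resultMonadType = {
  of: <A>(a: A): Result<A> => ({ _tag: 'Right', value: a }),
  throwError: <A>(e: Err): Result<A> => ({ _tag: 'Left', errors: e }),
  // zipWith runs both validations and concatenates their errors ("all errors" semantics)
  zipWith: <A, B, C>(f: (a: A, b: B) => C) => (fa: Result<A>, lazyfb: () => Result<B>): Result<C> => {
    const fb = lazyfb()
    if (fa._tag === 'Right' && fb._tag === 'Right') {
      return { _tag: 'Right', value: f(fa.value, fb.value) }
    }
    return {
      _tag: 'Left',
      errors: [...(fa._tag === 'Left' ? fa.errors : []), ...(fb._tag === 'Left' ? fb.errors : [])]
    }
  }
}

// a string validator in this context, mirroring StringType above
const validateString = (s: any, c: any): Result<string> =>
  typeof s === 'string' ? resultMonadType.of(s) : resultMonadType.throwError<string>([{ value: s, context: c }])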

It would allow

gcanti commented 6 years ago

I wrote a POC supporting asynchronous validations (branch 78)

Basically it works like this: you provide an instance of MonadType<M> for some monad M and you get back a complete runtime type system which works in that monadic context.

// core.ts
export interface MonadThrow<E, M> extends Monad<M> {
  throwError: <A>(e: E) => HKT<M, A>
}

export interface MonadType<M> extends MonadThrow<Array<ValidationError>, M> {
  zipWith: <A, B, C>(f: (a: A, b: B) => C) => (fa: HKT<M, A>, lazyfb: () => HKT<M, B>) => HKT<M, C>
  attempt: <A>(fx: HKT<M, A>, lazyfy: () => HKT<M, A>) => HKT<M, A>
}

export type Is<A> = (v: any) => v is A
export type Validate<M, S, A> = (s: S, context: Context) => HKT<M, A>
export type Serialize<S, A> = (a: A) => S

export class Type<M, S, A> {
  readonly '_A': A
  readonly '_S': S
  readonly '_M': M
  constructor(
    readonly name: string,
    readonly is: Is<A>,
    readonly validate: Validate<M, S, A>,
    readonly serialize: Serialize<S, A>
  ) {}
}

export const getTypeSystem = <M>(M: MonadType<M>): TypeSystem<M> => {
  ...
}
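As a second toy illustration of why abstracting over M buys asynchronous validation, here is a Promise-based dictionary in the same spirit; again it skips the HKT encoding (and attempt), so it is not one of the POC instances and would need adapting to the real MonadType interface:

type Errs = Array<{ value: any; context: any }>

const promiseMonadType = {
  of: <A>(a: A): Promise<A> => Promise.resolve(a),
  throwError: <A>(e: Errs): Promise<A> => Promise.reject(e),
  // note: plain Promise rejection short-circuits, so this gives "first error" semantics;
  // aggregating all errors would require carrying an Either inside the Promise
  zipWith: <A, B, C>(f: (a: A, b: B) => C) => (fa: Promise<A>, lazyfb: () => Promise<B>): Promise<C> =>
    fa.then(a => lazyfb().then(b => f(a, b)))
}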

So depending on the instance you pass, you can choose

I wrote 5 instances:

gcanti commented 6 years ago

@sledorze This is a first attempt at getting a basic idea of the performance (using benchmark.js).

Code

import * as Benchmark from 'benchmark'
import * as t from 'io-ts'

const suite = new Benchmark.Suite()

suite
  .add('io-ts', function() {
    const T = t.type({
      a: t.string,
      b: t.number,
      c: t.array(t.boolean),
      d: t.tuple([t.number, t.string])
    })
    t.validate({}, T)
  })
  // cycle/complete handlers omitted here; see the full snippets below
  .run({ async: true })

Results

# 0.8.2

io-ts x 140,895 ops/sec ±3.03% (79 runs sampled)
io-ts x 140,089 ops/sec ±3.37% (77 runs sampled)
io-ts x 142,319 ops/sec ±2.71% (80 runs sampled)

# 0.9.0

io-ts x 190,435 ops/sec ±0.77% (88 runs sampled)
io-ts x 190,476 ops/sec ±0.67% (87 runs sampled)
io-ts x 182,115 ops/sec ±0.65% (88 runs sampled)

# POC

io-ts x 153,537 ops/sec ±0.68% (86 runs sampled)
io-ts x 155,058 ops/sec ±0.82% (88 runs sampled)
io-ts x 152,557 ops/sec ±0.71% (84 runs sampled)

sledorze commented 6 years ago

@gcanti that's interesting. I'm also wondering what happened between 0.8.2 and 0.9.0. Which platform was this tested on, Node.js? This raises so many questions: is the POC test doing a 'fail fast' path with the monadic 'only the first error' approach?

gcanti commented 6 years ago

what happened between 0.8.2 and 0.9.0

@sledorze it looks like creating a type is less expensive in 0.9.0+

node --version
v8.1.2

Type definitions only

Code

import * as Benchmark from 'benchmark'
import * as t from 'io-ts'

const suite = new Benchmark.Suite()

suite
  .add('io-ts', function() {
    t.type({
      a: t.string,
      b: t.number,
      c: t.array(t.boolean),
      d: t.tuple([t.number, t.string])
    })
  })
  .on('cycle', function(event: any) {
    console.log(String(event.target))
  })
  .on('complete', function(this: any) {
    console.log('Fastest is ' + this.filter('fastest').map('name'))
  })
  .run({ async: true })

Results

0.8.2

277,827 ops/sec ±2.50% (72 runs sampled)

0.9.0

431,908 ops/sec ±0.65% (88 runs sampled)

POC

424,371 ops/sec ±0.63% (88 runs sampled)

Validations only

Code

import * as Benchmark from 'benchmark'
import * as t from 'io-ts'

const suite = new Benchmark.Suite()

const T = t.type({
  a: t.string,
  b: t.number,
  c: t.array(t.boolean),
  d: t.tuple([t.number, t.string])
})
const payload = { c: [1], d: ['foo'] }

suite
  .add('io-ts', function() {
    t.validate(payload, T)
  })
  .on('cycle', function(event: any) {
    console.log(String(event.target))
  })
  .on('complete', function(this: any) {
    console.log('Fastest is ' + this.filter('fastest').map('name'))
  })
  .run({ async: true })

Results

0.8.2

210,008 ops/sec ±0.63% (87 runs sampled)

0.9.0

205,194 ops/sec ±0.66% (85 runs sampled)

POC (all errors)

158,106 ops/sec ±0.56% (87 runs sampled)

POC (first error)

745,731 ops/sec ±0.60% (88 runs sampled)

sledorze commented 6 years ago

@gcanti are there any differences between all/first error for successful matches (i.e. no errors generated)?

gcanti commented 6 years ago

@sledorze not that much

All   88,530 ops/sec ±1.46% (86 runs sampled)
First 94,668 ops/sec ±0.94% (89 runs sampled)

It looks like there is some low-hanging fruit. For example, these are the results using linked lists:

Base reference (0.9.0)

invalid payload x 197,033 ops/sec ±0.84% (82 runs sampled)
valid payload (no errors) x 178,816 ops/sec ±1.22% (89 runs sampled)

Context and Errors as Arrays (POC all errors)

invalid payload x 154,399 ops/sec ±0.97% (85 runs sampled)
valid payload (no errors) x 91,946 ops/sec ±0.98% (87 runs sampled)

Context and Errors as LinkedLists + optimized zipEithers (POC all errors)

invalid payload x 509,834 ops/sec ±0.52% (89 runs sampled)
valid payload (no errors) x 148,614 ops/sec ±1.92% (85 runs sampled)
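For what it's worth, here is an illustrative sketch of the linked-list idea (not the POC code): each nested validator can extend the context in O(1) by consing onto a shared tail instead of copying a whole array, and the list only has to be materialised on the error path.

// simplified context entry, standing in for io-ts's ContextEntry
interface ContextEntry { readonly key: string; readonly type: { readonly name: string } }

type ContextList = { readonly head: ContextEntry; readonly tail: ContextList } | null

// O(1) per nesting level, versus context.concat([entry]) which copies the array
const cons = (head: ContextEntry, tail: ContextList): ContextList => ({ head, tail })

// O(n), but only paid when an error is actually reported
const toArray = (list: ContextList): Array<ContextEntry> => {
  const out: Array<ContextEntry> = []
  for (let cur = list; cur !== null; cur = cur.tail) out.push(cur.head)
  return out.reverse() // the deepest entry sits at the head, so reverse into root-to-leaf order
}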

sledorze commented 6 years ago

@gcanti indeed, those are quite significant performance gains. BTW, are your libs integrated with a CI?

It may be interesting to track that automatically, like so: https://travis-ci.org/lloydmeta/frunk/jobs/172486500#L398

I can help with that (if we decide to integrate with a CI first).