gnumonik opened 2 years ago
Assuming that `RowToList` and `ListToRow` have linear time complexity at compile time and that `Prim.Row.Nub` is better than quadratic, this solution should be an improvement. Essentially, it replaces the handwritten uniqueness check with a reuse of the built-in `Nub` logic.
```purescript
module Uniq where

import Prelude

import Prim.Row as R
import Prim.RowList as RL
import Type.RowList (class ListToRow)

-- Holds when every label in a RowList is unique.
class AllUniqueLabels :: forall (k :: Type). RL.RowList k -> Constraint
class AllUniqueLabels list

-- Holds when two RowLists have the same number of entries.
class SameLength :: forall (a :: Type) (b :: Type). RL.RowList a -> RL.RowList b -> Constraint
class SameLength a b | a -> b

instance SameLength RL.Nil RL.Nil
instance SameLength as bs => SameLength (RL.Cons a ta as) (RL.Cons b tb bs)

-- A list has all-unique labels iff it is the same length as its nubbed form:
-- convert the list back to a row, nub it, convert to a list again, and
-- compare lengths. Every constraint here is resolved by a functional
-- dependency, so the instance head can stay fully general.
instance
  ( ListToRow rl row
  , R.Nub row nbd
  , RL.RowToList nbd nbdl
  , SameLength rl nbdl
  ) => AllUniqueLabels rl

checker :: AllUniqueLabels (RL.Cons "a" Unit (RL.Cons "b" Unit RL.Nil)) => Unit
checker = unit

test :: Unit
test = checker
```
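As a quick sanity check (my addition, not part of the snippet above), a list with a duplicate label should be rejected: `Nub` collapses the duplicate, so the nubbed list is shorter and `SameLength` has no matching instance.

```purescript
-- Hypothetical negative test: uncommenting this should produce a compile
-- error, because Nub shrinks ("a" :: Unit, "a" :: Unit) to ("a" :: Unit)
-- and the two RowLists then differ in length.
-- bad :: AllUniqueLabels (RL.Cons "a" Unit (RL.Cons "a" Unit RL.Nil)) => Unit
-- bad = unit
```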
My soon-to-be-merged PR replaces term-level constructor indices with a type-level `PSchema`. This change provides concrete benefits: compile-time validation of PureScript bridge errors, and correct `genericToData`/`FromData` instances for Ledger/user types that contain records with non-lexicographic label order. This type-level machinery works well for simple types, but compilation grinds to a halt when a data schema is given or generated for a complex type.

I am fairly certain that this slowdown is caused by the manner in which the various `Unique*` type classes (`AllUniqueLabels`/`UniqueIndices`/`AllUnique2`) have been implemented. They all follow a similar pattern (example from `TypeLevel.RowList.purs`):

This is an acceptable translation of a logical definition of uniqueness into the type system, but it is not very efficient. The PureScript compiler does not appear to optimize constraint solving much (I suspect there may be a space leak in the compiler), so the implementation should be updated with an optimized version.
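The referenced example is not reproduced in this issue, but the pattern described amounts to a pairwise membership check, roughly like the following sketch (a hypothetical reconstruction for illustration; `NotMember` and the primed class name are invented, not the actual contents of `TypeLevel.RowList.purs`):

```purescript
-- Sketch of the quadratic pattern: each label is compared against every
-- remaining label, giving O(n^2) constraints for an n-entry RowList.
class AllUniqueLabels' :: forall k. RL.RowList k -> Constraint
class AllUniqueLabels' list

instance AllUniqueLabels' RL.Nil
instance
  ( NotMember lbl rest        -- lbl must not recur anywhere in the tail
  , AllUniqueLabels' rest     -- recurse, repeating the scan at every step
  ) => AllUniqueLabels' (RL.Cons lbl ty rest)

-- NotMember lbl list holds when lbl does not occur in list
-- (an invented helper standing in for the real membership check).
class NotMember :: forall k. Symbol -> RL.RowList k -> Constraint
class NotMember lbl list
```

In contrast, the `Nub`-based version above delegates the deduplication to the compiler's built-in solver, so the cost depends on how `Prim.Row.Nub` is implemented internally rather than on user-written pairwise instances.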
Because of the various practical constraints that govern the design of on-chain code, the compilation slowdown is not likely to affect many (if any) current projects. At the moment, compilation is still acceptable for all ledger types and all Cardax types, and the slowdown has only become apparent when running various tests. However, if those practical constraints loosen over time and more complex on-chain types become common, the inefficiency could become a more urgent issue, and ought to be addressed in the near-to-medium term if possible.