Closed johnynek closed 8 years ago
Awesome! This looks great. :+1:
Note that I moved algebra/number into the ring package because it references Additive/Multiplicative stuff. The reference looks pretty light, so it could be moved back to core.
That said, I'm not sure which is a better fit. I guess any use case of numbers you always have rings, so it seemed like a fine place to put it.
What do you think?
Yeah I agreed with that decision. I think of the "number" stuff as a specialization of the ring/field stuff.
One other item: we lost the .asJoinSemilattice/.asMeetSemilattice functionality here, but it could be added back. I was thinking of JoinSemilattice.from[A](sl: Semilattice[A]): JoinSemilattice[A]. Of course, it's almost as fast to just define this in your code as needed, if it ever comes up.
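For concreteness, here is a minimal sketch of that conversion. The trait definitions below are simplified stand-ins for illustration, not algebra's actual API:

```scala
// Simplified stand-in traits (not algebra's real hierarchy).
trait Semilattice[A] { def combine(x: A, y: A): A }
trait JoinSemilattice[A] { def join(x: A, y: A): A }

object JoinSemilattice {
  // Reinterpret any semilattice's combine as a join.
  def from[A](sl: Semilattice[A]): JoinSemilattice[A] =
    new JoinSemilattice[A] { def join(x: A, y: A): A = sl.combine(x, y) }
}

val maxSl: Semilattice[Int] = new Semilattice[Int] {
  def combine(x: Int, y: Int): Int = math.max(x, y)
}
val joinSl: JoinSemilattice[Int] = JoinSemilattice.from(maxSl)
// joinSl.join(2, 5) == 5
```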
Yeah, I think for now let's just leave that stuff off. I'd rather get the core stuff right and then add useful things, rather than speculatively adding all those conversions like I did before.
Cool. Well, this is a non-trivial change, so I'll give others a chance to weigh in.
I think we are getting close to a point where core could be stable for a good while.
What is the relationship between the ring and lattice modules? I'm aware of some conversions between ring-like and lattice-like structures, but those are symmetric. Is there something to favor lattice depending on ring rather than the other way around, or is the choice more-or-less arbitrary?
The choice was more or less arbitrary with the feeling that lattices are more constrained than rings (idempotency being an additional constraint).
Also, while I think lattices, especially Semilattice, could be useful in systems (in particular, CRDTs seem to basically just be semilattices), it's actually a bit of a challenge to think of an algorithm I want to write that only depends on, for instance, DistributiveLattice.
On the other hand, rig, rng, and ring are all useful for matrix algebra, so we can, for instance, write code for scalding or spark to do matrix algebra for any T: Ring.
I agree the splits are somewhat arbitrary, but I do think they make sense -- we needed some kind of split here and this one is as good as any IMO.
I was not talking about the split, but about the dependency of lattice on ring.
I do agree that if one is going to split into 3 modules, this way makes the most sense. Another option could be to keep lattice independent of ring and then provide an additional module with conversions. This for me is purely a matter of aesthetics/symmetry though, since I'm not too concerned with the size of the JARs.
@TomasMikula would you move the constructions we have now into their own jar, or perhaps down into std?
My main concern is that I want to try to stabilize core and hope it will change at a very slow rate (hopefully incompatible changes only on the order of every 2 years or longer).
Jar size is somewhat of a concern, but I bet many will wind up pulling in std, which will depend on everything.
Not sure if typeclass conversions fit into std either. As I said, I'm not too concerned that using lattice pulls in an extra dependency, it's just that the asymmetry doesn't feel quite right. If others agree that this is a good compromise, I'm fine with it.
Related to modules, I'm trying to think where OrderedSemigroup and OrderedMonoid would go if we were to add those. On top of OrderedSemigroup/Monoid one might want an extra property: a + b ≤ a, b or a + b ≥ a, b. Turns out both @johnynek and I were interested in these: http://mathoverflow.net/questions/179390/standard-name-for-a-monoid-semigroup-with-ab-leq-a-b
By the current division, they fit into core, but if people are concerned about the size of core, maybe we should not inflate it.
My main concern is stability; my secondary concern is size (and if we are concerned with size, we should speak about the actual artifact size, which as of the last version was 3MB):
http://search.maven.org/#artifactdetails%7Corg.spire-math%7Calgebra_2.11%7C0.3.1%7Cjar
So, that's actually pretty large. I'm not sure how much we improved things here, but perhaps we should set some goals and attack the problem directly if it is a concern.
As to OrderedSemigroup/OrderedMonoid, I would like to see them (it seems a few sketch algorithms use them), and I agree they would go into core. Something like:
trait OrderedSemigroup[A] extends Semigroup[A] {
// the order under which:
// order.lteq(a, combine(a, b))
// order.lteq(b, combine(a, b))
def order: Order[A]
}
It is a shame we can't express something like this:
trait OrderedOp[A, S[X] <: Semigroup[X]] extends S[A] {
def order: Order[A]
}
I guess the closest would be:
case class OrderedOperation[A, S[X] <: Semigroup[X]](order: Order[A], op: S[A])
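A minimal sketch of that explicit-composition idea, with simplified stand-in traits (not algebra's real API): the order and the operation are paired explicitly, rather than resolved together as one combined type class.

```scala
// Simplified stand-in traits for illustration.
trait Order[A] { def lteq(x: A, y: A): Boolean }
trait Semigroup[A] { def combine(x: A, y: A): A }

// Pair an Order with a Semigroup explicitly.
case class OrderedOperation[A, S[X] <: Semigroup[X]](order: Order[A], op: S[A])

val intOrder: Order[Int] = new Order[Int] {
  def lteq(x: Int, y: Int): Boolean = x <= y
}
val minSg: Semigroup[Int] = new Semigroup[Int] {
  def combine(x: Int, y: Int): Int = math.min(x, y)
}

// The pairing is explicit, so neither Order[Int] nor Semigroup[Int]
// needs to be re-resolved implicitly.
val minOp = OrderedOperation[Int, Semigroup](intOrder, minSg)
// minOp.op.combine(4, 2) == 2
```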
Yeah, it seems the number of type classes can be almost exponential in the number of possible properties, so I understand your attempt at better composability.
For the sake of moving forward, with regard to what should be in core, this PR looks good to me. We might revisit the ring-lattice relationship later.
Is there a reason we can't have something like this?
trait OrderedSemigroup[A] extends Semigroup[A] with Order[A]
@non That is how I imagined it. I think @johnynek is trying to think of a way to avoid the combinatorial explosion of type classes.
This can work too: trait OrderedSemigroup[A] extends Semigroup[A] with Order[A], but I was worried a bit about the fact that the semigroups and orders might (even both) not be the "standard" ones, and the risk of implicitly getting the wrong one seems high. But this is just anxiety for the most part.
Consider min and max. Both are OrderedSemigroup[T] for all T: Order. But setting one of those up implicitly could yield strange results (because anything looking for an Order might get the reversed one).
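To illustrate the hazard with a hedged sketch (simplified stand-in traits, not algebra's real hierarchy): for max to satisfy combine(a, b) ≤ a, b under its own order, that order must be the reverse of the standard one, and making the instance implicit would also put that reversed Order in implicit scope.

```scala
// Simplified stand-in traits for illustration.
trait Order[A] { def lteq(x: A, y: A): Boolean }
trait Semigroup[A] { def combine(x: A, y: A): A }
trait OrderedSemigroup[A] extends Semigroup[A] {
  // law: order.lteq(combine(a, b), a) and order.lteq(combine(a, b), b)
  def order: Order[A]
}

val intOrder: Order[Int] = new Order[Int] {
  def lteq(x: Int, y: Int): Boolean = x <= y
}
val reversed: Order[Int] = new Order[Int] {
  def lteq(x: Int, y: Int): Boolean = y <= x
}

val minSg: OrderedSemigroup[Int] = new OrderedSemigroup[Int] {
  def combine(x: Int, y: Int): Int = math.min(x, y)
  def order: Order[Int] = intOrder   // standard order works for min
}
val maxSg: OrderedSemigroup[Int] = new OrderedSemigroup[Int] {
  def combine(x: Int, y: Int): Int = math.max(x, y)
  def order: Order[Int] = reversed   // law only holds under the reversed order
}
// If maxSg were implicit, code summoning Order[Int] could silently
// pick up `reversed` instead of the standard order.
```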
By using the case class approach for some of these compositions (which was just an idea), it seems like you have a bit more explicitness when picking up an OrderedSemigroup, without forcing either the Semigroup or Order to be resolved in a new way.
By the way, we might be getting far afield. Seems the main question we want to address here is:
1) Do we want to make lattice only depend on core, and add a lattice-ring-constructions or something, which has the constructions of rings from lattices or vice-versa.
2) Keep things as they are.
We want stability, small jars (pay for what you use), but also simple project structure (which pulls against the other two).
Should we merge this?
I've thought more about it, and I do feel the ring < lattice dependency is meaningful. DistributiveLattices are subsets of rings. If for no other reason, that makes sense.
So, I'm happy to merge this as is. We can consider further factoring in the future, in my view.
:+1:
I agree with merging this for pragmatic reasons :+1:
I disagree with your other justification: lattices IMO are meaningful completely independently of rings. Also, I only see a (much) weaker relationship: lower-bounded distributive lattices are rigs.
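As a concrete instance of that weaker relationship: Boolean with (join = ||, meet = &&) is a lower-bounded distributive lattice, and reading join as + and meet as * yields a rig, i.e. a ring without negation. A hedged sketch, using a simplified Rig trait for illustration (not algebra's real one):

```scala
// Simplified stand-in Rig: a "ring without negative" (no additive inverses).
trait Rig[A] {
  def zero: A
  def one: A
  def plus(x: A, y: A): A
  def times(x: A, y: A): A
}

// The Boolean lattice read as a rig: join is +, meet is *, the lattice
// bottom (false) is 0 and the top (true) is 1.
val boolRig: Rig[Boolean] = new Rig[Boolean] {
  val zero: Boolean = false                         // lattice bottom
  val one: Boolean = true                           // lattice top
  def plus(x: Boolean, y: Boolean): Boolean = x || y  // join
  def times(x: Boolean, y: Boolean): Boolean = x && y // meet
}
// plus is idempotent (x || x == x), so `true` has no additive inverse:
// this is a rig but not a ring, matching the "no negation" point above.
```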
@TomasMikula yes, I misspoke: there is no negation, of course.
Some ScalaDoc links were broken by this, and now sbt ringJVM/doc fails with:
ring/src/main/scala/algebra/ring/BoolRing.scala:4: Could not find any member to link for "BoolFromBoolRing".
ring/src/main/scala/algebra/ring/BoolRng.scala:4: Could not find any member to link for "algebra.lattice.GenBool".
This also causes sbt publishLocal to fail.
closes #43 #105