When I was first thinking this through I felt very committed to the one-hole-context interpretation, to the point that I didn't want to add behaviors that seemed not to make sense in a type-theoretic context. So I didn't use negative numbers in the random projection code, because negation of types doesn't (yet) make sense. Instead I sparsified the vectors by using nulls wherever there would normally be negatives. But now I'm more interested in developing the vector calculus interpretation, because the directional derivative idea makes sense even if we just treat the word expressivity values as ordinary scalars. And in that context the negative numbers serve a really important function: they let projections cancel as well as accumulate. So I should add an option to turn negation back on.
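The negation toggle could be sketched roughly as follows. This is a minimal illustration, not the actual project code: the function name `random_projection_matrix`, the `allow_negation` flag, and the Achlioptas-style {+1, 0, -1} entry distribution are all assumptions chosen to make the idea concrete.

```python
import numpy as np

def random_projection_matrix(n_features, n_components,
                             allow_negation=True, seed=0):
    """Sparse random projection matrix with entries drawn from
    {+1, 0, -1} (Achlioptas-style, probabilities 1/6, 2/3, 1/6).
    With allow_negation=False, entries that would have been negative
    are replaced by 0 -- i.e. the matrix is sparsified instead of
    signed, matching the null-instead-of-negative behavior."""
    rng = np.random.default_rng(seed)
    draws = rng.choice([1.0, 0.0, -1.0],
                       size=(n_features, n_components),
                       p=[1/6, 2/3, 1/6])
    if not allow_negation:
        # Hypothetical toggle: null out the negatives rather than keep them.
        draws = np.where(draws < 0, 0.0, draws)
    return draws
```

With `allow_negation=True` the projected components can cancel each other out; with `False` they can only accumulate or vanish, which is the sparser, type-friendly behavior described above.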
Also, I should add some normalization options. In particular, in the directional derivative model, the word vectors should probably be normalized to unit length, since a directional derivative is taken along a unit direction. In fact, come to think of it, leaving them unnormalized could really be causing problems: vectors of wildly different magnitudes would make the derivative values incomparable.
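A unit-vector normalization pass might look like this. Again a sketch under assumptions: the helper name `normalize_rows` and the zero-row handling are mine, not from the project.

```python
import numpy as np

def normalize_rows(vectors, eps=1e-12):
    """L2-normalize each row to unit length so it can serve as a
    direction for a directional derivative. Rows with (near-)zero
    norm are left untouched rather than divided by ~0."""
    vectors = np.asarray(vectors, dtype=float)
    norms = np.linalg.norm(vectors, axis=1, keepdims=True)
    return np.where(norms > eps, vectors / np.maximum(norms, eps), vectors)
```

Leaving zero rows as-is (rather than raising or producing NaNs) matters here because the sparsified, negation-free vectors can plausibly end up all-zero.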