gfixler closed this issue 5 years ago
Any chance we might see the source code? That would really help. I don't think it's caused by lambdas, but until I see source code, I can only guess. How big are the files? Have you tried Jedi's dev branch?
BTW: If you have "a few pages" of lambdas you should seriously consider rewriting it. I'm almost certain that's the wrong way of solving a problem in Python. There are very few reasons to even use a lambda. And nested lambdas are certainly evil. :-)
I can't share the whole source, sadly (I'd love to), because it's under a silly NDA.
BTW: If you have "a few pages" of lambdas you should seriously consider rewriting it.
I disagree. This is the rewrite :) It's really simple stuff:
ident = lambda x: x
juxt = lambda *fs: lambda x: [f(x) for f in fs]
sel = lambda: cmds.ls(selection=True, flatten=True)
wpos = lambda tform: cmds.xform(tform, query=True, worldSpace=True, translation=True)
Most simply wrap Maya's nasty, verbose, flag-filled commands into totally constrained, reasonable functions with short names. With just the above 4 lines, I can do this to get a list of 2-element lists of selected items paired with their worldspace positions, a common need in Maya work:
map(juxt(ident, wpos), sel())
Normally that would be a whole block of code. Returning functions is part of higher-order functional programming. It allows me to do cool things, e.g. I can name new compositions of abilities:
namepos = juxt(ident, wpos)
That simplifies the previous example to 3 words:
map(namepos, sel())
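Outside Maya the same shape can be shown with toy functions - a sketch in which `str.upper` and a plain list stand in for `wpos` and `sel()`, since `cmds` only exists inside Maya:

```python
# Toy, Maya-free sketch of the same pattern; str.upper stands in for wpos,
# and a plain list stands in for sel()
ident = lambda x: x
juxt = lambda *fs: lambda x: [f(x) for f in fs]

namepos = juxt(ident, str.upper)
print(list(map(namepos, ["foo", "bar"])))  # [['foo', 'FOO'], ['bar', 'BAR']]
```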
I want this. I want Haskell. I can't have Haskell for now, so I'll use a few lambdas meanwhile.
I tried the dev branch, but it was the same.
I want this. I want Haskell. I can't have Haskell for now, so I'll use a few lambdas meanwhile.
Well it is not Haskell ;-)
Just a small recommendation, write it like this:
def juxt(*fs):
    def wrapper(x):  # here, returning a lambda might also make sense
        return [f(x) for f in fs]
    return wrapper
def wpos(tform):
    return cmds.xform(tform, query=True, worldSpace=True, translation=True)
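For comparison, a quick sketch of the def-based style with toy functions in place of the Maya calls, plus one concrete benefit behind the traceback point: named defs report their own names, while every lambda reports as `<lambda>`:

```python
def juxt(*fs):
    def wrapper(x):
        return [f(x) for f in fs]
    return wrapper

print(juxt(len, str.upper)("foo"))  # [3, 'FOO']

# Named defs appear in tracebacks under their own names;
# every lambda shows up as '<lambda>'
print(juxt().__name__, (lambda x: x).__name__)  # wrapper <lambda>
```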
This is the pythonic way. I'm not saying you shouldn't use functions as callbacks. That makes sense. With lambdas you miss 3 things:
I can't share the whole source, sadly (I'd love to), because it's under a silly NDA.
Too bad. How big is the file?
BTW: I'm not at all saying that I won't fix this because you're not writing "pythonic" code. Jedi should be fast anyhow!
Well it is not Haskell ;-)
I know :(
This is the pythonic way.
I'm not really that into the pythonic way anymore. It doesn't seem to be based on much more than "this is how we all do things." The more Haskell I learn, the more wrong - or at least sub-optimal - the pythonic way has begun to seem. A really good type system is a hell of a thing.
Readability. Python people tend to see a function and ignore a lambda.
I find these one-liner lambdas to be at least an order of magnitude more readable, and I can fit dozens on one screen, which I find very useful. How can they ignore the lambda when it's all lambdas? :)
Docstrings. I have no idea what the functions above are doing.
You shouldn't. You're not a Maya TD. If you were, you'd have written this line dozens of times in your career to get the worldspace position of a transform [here, named 'tform']:
cmds.xform('tform', query=True, worldSpace=True, translation=True)
My version adds 3 words to wrap that particular incantation in a new name, wpos ('world position'):
wpos = lambda tform: cmds.xform(tform, query=True, worldSpace=True, translation=True)
Most of what I'm writing are simple wrappers like that, which remove all choices, and grant composability wins. The often dozens of available flags per command (>1200 commands) lead to enormous n-path complexity and complex interdependencies thereof. After 18 years of writing code in Maya, though, I know that we all pretty much only ever use 2 or 3 things from `cmds.xform`, so I'm giving those their own names. The 3 flags that get you the worldspace position of a transform shall be called `wpos`, and that's it. It's done. That call literally hasn't changed in 20 years. Neither have any of the others. This language just doesn't move. There's zero need for maintenance, or deep understanding of anything. Use `wpos` to get the worldspace position of something, the end. I'd rather just have a textfile - written after the library stabilizes - that says, e.g.:
sel - get flattened selection list
loc - create a locator
wpos t - get worldspace position of transform t
comp *fs - compose functions (right-associative)
juxt *fs - juxtapose functions, e.g. juxt(toUpper, reverse)('foo') => ['FOO','oof']
etc...
A handful of other one-liners are just apings of higher-order functions from Haskell and Clojure, like `juxt`, `comp`, `const`, etc.
Named functions. Tracebacks are way nicer with functions. With lambdas, the readability of tracebacks is significantly reduced!
I concede this point. Before I got some of these exactly right (I'm using TDD to build against every edge case I know of - though it seems almost unnecessary for most of these simple lines), the tracebacks were essentially useless.
I should add a few things. I have an order of magnitude more experience than the next person at my company in this niche wing of the programming world - 18 years vs. 2 years. They have quite a way to go just learning what things are in Maya, and how rigs work, and the countless bugs and crappy edge cases Maya brings to the table, before they can begin to think about composing one-liners to do bigger tasks, Linux CLI-style. I won't be allowed to share this stuff outside of my small department (sigh...), and my coworkers aren't planning to use this. In fact, only one remains even writing code in Maya - the other few are now in Unity all the time, so this is like archaic knowledge they'll never need again.
That said, this is a toolbox I'm creating for me. It's very selfish :) After 18 years, most of them wasted not getting anywhere with OOP, I've found FP, and it's doing all the things OOP promised, but never really delivered on. Don't think of this so much as standard code I'm writing to share with my fellow Python developers - I've done plenty of that, and I share with other devs in many other ways, e.g. via tech talks. Think of this in isolation, as a shell-like interface I'm using as I do manual labor each day, rigging characters and such. This isn't a library - it's my .vimrc. It's full of personal stuff that won't make sense to others, but it makes me ridiculously fast.
One of the "pythonic" things I've noticed is that any deviation from "pythonic" is met with lots of gnashing of teeth. It's like no Python programmer wants you to ever step outside of the bounds of what is "pythonic." I've never appreciated that. I stepped way outside by learning a bunch of Haskell this year, and it's made all this "pythonic" stuff seem really silly. There's a place - a big one - for making things that other coders can understand and help build and maintain, but I maintain that there should also be a place where I can explore, and create things for myself that make me super efficient. That's also a thing.
Too bad. How big is the file?
Only like 2-3 screens of code. It's silly that I can't share it, but my company is huge and litigious, and they don't understand phrases like "It's just some one-liner, functional wrappings of the original Maya code," or "And some direct apings of Clojure and Haskell functions."
BTW: I'm not at all saying that I won't fix this because you're not writing "pythonic" code. Jedi should be fast anyhow!
I appreciate it! It's low priority, though. I'm following a handful of avenues to allow me to write code for Maya in Haskell from now on (I'm just really loving it), and even these lambdas seem to need very little help from Jedi, because little depends on anything else. This is really flat coding. The small subset of things I use from other libraries I've mostly memorized.
Interesting views! Thanks for sharing :-) I don't know too much about Haskell. I have a hard time understanding it whenever I read it. There's something to functional programming that I love (in Jedi I've used a lot of immutability/recursion/function structures that are very similar to the way functional programming works). I find it troubling though that FP still seems very academic and very hard to use (especially Haskell). I like Python because of its simplicity :-)
I'm not here to argue about whether the pythonic way is the "perfect" way, but I do think style guides for a language are really important (it's what PHP is definitely still missing). It's just a great way to build a community. If you are really thinking about screen space (which I think is an issue sometimes), I would argue that
def wpos(tform): return cmds.xform(tform, query=True, worldSpace=True, translation=True)
is the one-liner :-) It's 3 characters more than its lambda equivalent, but that's probably not a big issue :-) It's more readable for the average Python programmer and you get tracebacks for free.
I would also argue that readability is not that important if you are the only one using it. However always remember that that might change in the future :-) Plus at least I tend to forget what my own code does after a few years.
Good arguments, though. You have convinced me that I should really learn Haskell. I know a little bit, but I should really start understanding Haskell's type system.
What do you think about Rust? It seems to promise quite a few things including a FP style.
Interesting views! Thanks for sharing :-)
What a great reaction. I have a hard time striking a balance between gushing about stuff I find fascinating and not sounding like a know-it-all jerk (I'm neither, I promise!). You took my words in the spirit in which they were intended. Thanks! :)
I don't know too much about Haskell.
Who does? It's huge. The more I'm learning, the more overwhelmed I'm feeling, at least in some respects. There are so many libraries now, and I'm starting to wade into the more powerful abstractions.
I have a hard time understanding it whenever I read it.
I don't think Haskell is the kind of language anyone can really get just by reading it. It's a bit alien. It also occasionally reuses words for things programmers would never expect. For example, `return` does not mean anything at all like what programmers expect it to mean. That actually bugs me a bit. I think Haskell's version comes from math, and mathematicians seem to name things in crazy ways.
There's something to functional programming that I love (in Jedi I've used a lot of immutability/recursion/function structures that are very similar to the way functional programming works).
Awesome! You sound like you'd completely love Haskell. Yes, immutability and recursion are amazing, though the latter more so with laziness. Haskell is lazy by default (surprisingly lazy), so things 'just work' nicely (i.e. time/space complexity stuff), but in Python it's a real grab-bag, so some ideas that are great in Haskell either need extra love in Python, or are just going to have to suck a bit from a complexity standpoint.
I find it troubling though that FP still seems very academic and very hard to use (especially Haskell).
Me too! I think everyone in the Haskell world wishes it wasn't quite so academic and unapproachable for folks who actually have to write code for a living :) One nice thing is that it's a tight ecosystem, and everyone's really friendly, for fear of scaring the few new people away, and almost anything you do is of huge interest to a bunch of people immediately. There's a thing in #haskell on freenode called the Haskell Hug, wherein 5 people try to tell you 10 different cool things to think about whenever you ask a question - it can be overwhelming. I've felt overwhelmed in the Python world, but for different reasons, like when someone asks a question on StackOverflow, and within 20 seconds there are 7 replies, and tons more rolling in, all of them being constantly edited for 15 minutes after that, and all for that sweet karma. Haskell's community is still pretty small, so it's a lot less like that, which is a bit of a relief, honestly.
I like Python because of its simplicity :-)
Yeah, dynamic languages - especially Python and Ruby - are sooo easy to use, that it's really hard to get anyone who loves them to look at something like Haskell. The learning curve is definitely longer. I'm still not able to just crank out a program with user interaction without fighting, even after playing for most of this year (admittedly I haven't focused much on interactivity yet). The main guy behind Haskell (of a small group of main guys) - Simon Peyton Jones - talks about Haskell's (and his) jealousy over the easiness of dynamic languages, and how they're always pushing toward that direction. He has a graph with usefulness on one axis, and safety on another. Most languages like Python start in the useful but very dangerous corner (and I can see that a lot more clearly now, after Haskell's type system taught me a lot more about true safety).
Haskell started out diagonally opposite, in the completely safe, but totally useless corner. The first version literally couldn't output anything, so you could run programs, but not see what they were doing :) Both sides are heading toward what SPJ calls "Nirvana" - the corner of complete safety and usefulness, and both have borrowed from each other. I think Haskell inspired generics in Java and C#, and I think in Java's case, Phil Wadler - one of the chief designers of Haskell - actually helped implement them. Haskell has constantly tried to attain the ease of dynlangs, too. Edward Kmett's lens library, e.g., tries to make it easy to get and set values in nested record types and such in a functionally pure way that's as easy as `foo.bar.baz = 42` in Python.
I'm not here to argue about whether the pythonic way is the "perfect" way,
Yeah, me either.
but I do think style guides for a language are really important (it's what PHP is definitely still missing). It's just a great way to build a community.
I completely agree. It would be a pain for people who know how to swing the Python hammer to jump into my code. I was just having fun expanding too much on what I could have simply stated as "This is just experimental fun, trying to mimic Haskell's goodness in Python - not really for sharing with anyone else."
def wpos(tform): return cmds.xform(tform, query=True, worldSpace=True, translation=True)
is the one-liner :-) It's 3 characters more than its lambda equivalent, but that's probably not a big issue :-) It's more readable for the average Python programmer and you get tracebacks for free.
Hmmm... you have a point there. Nice tracebacks would help. It works nicely for the Maya calls, though I still need to return lambdas if I want fake currying in the other bits. Real currying is magical:
doubleProduct a b = 2 * a * b
map (doubleProduct 7) [1, 2, 3]
=> [14, 28, 42]
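The Haskell example above translates to Python two ways - a sketch using nested lambdas to fake the currying, or `functools.partial` for the same partial application (`double_product`/`double_product2` are my names for illustration):

```python
from functools import partial

# Fake currying with nested lambdas, mirroring: map (doubleProduct 7) [1,2,3]
double_product = lambda a: lambda b: 2 * a * b
print(list(map(double_product(7), [1, 2, 3])))  # [14, 28, 42]

# The same thing with an ordinary def and functools.partial
def double_product2(a, b):
    return 2 * a * b

print(list(map(partial(double_product2, 7), [1, 2, 3])))  # [14, 28, 42]
```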
I would also argue that readability is not that important if you are the only one using it. However always remember that that might change in the future :-)
If only! Thinking about it, though, even if there were a few hundred of these, I could probably convert them all over to functions in an hour. I could probably do it with a regex in Vim :) I'll just make a lambdas branch, and merge over to a release branch with a git hook to regex them back in to regular functions and run the tests.
Plus at least I tend to forget what my own code does after a few years.
I forget weekly, but whether lambda or def, these things I'm making are ridiculously tiny. It takes seconds to load one back into my head entirely. It's really been a fun, light-hearted way to get a lot of things done.
Good arguments, though. You have convinced me that I should really learn Haskell.
Yay! One of us! One of us!
Oh shoot, I just saw the future. You're going to fall in love with Haskell, start to not like Python all that much, give up on jedi-vim and start making things for Haskell and Vim, and I'm going to end up with a village-worth of torches and pitchforks outside my door :(
I know a little bit, but I should really start understanding Haskell's type system.
I can [try to] get you started with a few simple things.
`::` means "has type," e.g. `"foo" :: String` will type-check and compile, but `7 :: String` will not. Haskell's compiler doesn't listen to your type annotations - it infers all types itself (it occasionally can't, and needs help, but it's really rare), and then just checks any of your `::` assertions, and fails to compile if necessary, or even constrains the type to something simpler than it would have gone with, e.g. it might think `7` is capable of being any kind of `Num` (e.g. Double), depending on context, but if you say `7 :: Int`, it'll say "Okay, just an Int, then." Then you can't add it to a `Double`, though, because the types don't match.

I used to think type mismatches were a pain. Now I think of them as a savior. They eliminate so many possible bugs, and you can just trust that it's impossible to use something the wrong way. That's a big thing I miss when I return to Python. It seems so crazy to me now that arguments can be literally anything at all. It's like working without a net.
Functions are values, too, and thus have types. Their types are their signatures. We don't need to, but often write types above definitions to assert our own beliefs about their inputs and output (and share that idea with other coders), and Haskell will fail to compile if we were wrong about what we believed. E.g.:
double :: Int -> Int
double x = 2 * x
You could just write the second line, and Haskell would figure out the above bit (it would actually not constrain it to Int, as we have, but whatever). The type of `double` is `Int -> Int` - we say "Int to Int." If we wrote, say, `Int -> String`, it would fail to compile, because `*` requires two numbers.
As an aside, Haskell definitions like that one above aren't assignments. We're defining "double x" to be the same thing as "2 * x". This means we can "eta-reduce" things, as in math. We don't need the `x` in that definition, because it's on both sides. It cancels out. Instead of saying that doubling x is the same as doing 2 * x, we can just say that doubling is the same as 2*, like this:

double = (2*)

Those parentheses are creating a 'section,' which is where you wrap a normally infix operator, like `*` or `+`, and thus turn it into a binary function (a function of 2 inputs). You can wrap `*` by itself, e.g. `(*)`, and you can think of it as having invisible inputs, e.g. `(_*_)`. Sections are used like functions, in prefix position:

(*) 2 3
=> 6

But, being Haskell functions, you can partially apply them by filling in one of the sides, e.g. `(*3)` or `(7*)`, and still use them as prefix functions, but now of only one input:

(*7) 3
=> 21

So when we say `double = (2*)`, we can then say `double 3`, which is the same as `(2*) 3`, which is the same as `2 * 3`. This means you can use these as functions all over the place, e.g. `map (*2) [1,2,3]` => `[2,4,6]`, or `foldr (+) 0 [1,2,3]` => `6`. I won't explain folds, but it's basically Python's reduce. Here I've folded addition over a list of 3 numbers, starting with 0.
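Those last two examples have rough Python analogues - a sketch in which `functools.partial` stands in for a section (Python has no operator sections) and `reduce` is the fold:

```python
from functools import partial, reduce
import operator

# partial(operator.mul, 2) is roughly the section (2*)
double = partial(operator.mul, 2)
print(list(map(double, [1, 2, 3])))        # [2, 4, 6]

# reduce is Python's fold; it's a left fold, which agrees with
# foldr here because + is associative
print(reduce(operator.add, [1, 2, 3], 0))  # 6, like foldr (+) 0 [1,2,3]
```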
Back to function types, the confusing bit comes when you see more than one arrow, e.g.:
times :: Int -> Int -> Int
times x y = x * y
It made sense before - `Int -> Int` == Int in, Int out - but what does having two arrows mean? The answer is weird, but super cool, and really powerful: all functions in Haskell take a single argument only. The way to understand this is to view the signature this way: `Int -> (Int -> Int)`. Haskell's inputs are curried automatically. This means that when you pass in 2 arguments - your `x` and `y` here - Haskell passes in your `x` value, which fulfils the first Int, and that returns back a function of type `Int -> Int`, which Haskell then hands your `y` value (this is the auto-currying in action), and now that function does its thing, multiplying both together and giving you back `x * y`. This happens invisibly, and it's what I've been trying to get at with my lambdas that return lambdas - partial application, as we had with sections earlier. You don't have to pass in all the args:

times7 = times 7

That just made a new function of type `Int -> Int`, wherein the `x` has been set to 7, because passing in only the first argument gave us back a function of the remaining args, in this case just the `y`. It takes the remaining value - the `y` - and gives you back it times 7. You could map `times` over a list of numbers:

map times [1,2,3]

...and that would give you back a list of functions, which each take a number, and each multiply it by a different value from the original list. You could map a lambda that takes a function and applies it to a 2 over that list:

map (\f -> f 2) (map times [1,2,3])
=> [2, 4, 6]

That `(\f -> f 2)` is a lambda function (`\` is supposed to look a bit like the Greek lambda - Haskell will also accept the Unicode lambda here). It takes a function and applies it (calls it) on a 2.

Note how function application has no syntax (e.g. `f 2`) - it's just a space. Functions are so useful in Haskell that they're given the 'quietest' syntax, e.g. none at all. I'm skipping around, and this isn't a great tutorial - more a whet-your-appetite kind of thing - but I've been finding that Haskell is often much simpler than Python. I, too, want simplicity. I'm not playing in Haskell because it's huge and complex. I'm using it because it's fun and refreshing, and it's really rocking my brain.
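That last Haskell snippet is exactly what the nested-lambda `times` from the original post gives you in Python - a sketch:

```python
# Map times over a list to get a list of one-argument functions,
# then apply each of them to 2
times = lambda x: lambda y: x * y
fs = [times(n) for n in [1, 2, 3]]  # like: map times [1,2,3]
print([f(2) for f in fs])           # [2, 4, 6], like map (\f -> f 2) ...
```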
One of the eye-openers has been how useful the types all by themselves are. For example:
a -> a
It's possible to use type variables as type stand-ins, for things that can work with any type. This type takes something of some type a, and returns something of that same type. If you keep thinking about it, this can only be the identity function - it can only return whatever you gave it. You can't query the type of something in Haskell - the types are checked at compile time, but then erased, so that info is gone at runtime. You can't call `reverse` or `square` on the input, because you have no idea what `a` will be - it could be an Int, a String, a list of lists of Bool, even a function. No function can work in isolation on something of an unknown type, so the only place a thing of type `a` could come from is the input of type `a`.

Types are so useful for reasoning about code this way, that there's actually a search engine - Hoogle - that lets you search for Haskell functions not just by names - e.g. `reverse` - but also by types - e.g. `Int -> a -> [a]` (this finds the `replicate` function) - which is actually really handy. I knew I wanted to throw away the first `n` elements of a list, but couldn't remember its name, so I searched `Int -> [a] -> [a]`, and it was the second result - `drop`. In fact, many times when you ask a question about something in #haskell, the first thing they ask back is "Well, what's its type?" I used to wonder why they kept asking that, but I get it now - it's hugely revealing.
What do you think about Rust? It seems to promise quite a few things including a FP style.
I haven't looked into it yet, but the name keeps popping up. Someone at one of my Haskell meetups a month back was saying nice things about it. I'll have to check it out.
Btw, what was this issue about? I can't remember ;)
Looks like this conversation went a bit off topic, not to mention almost 6 years old, but I wanted to tag in as I see this is still open and it is the closest thing to what I'm experiencing with my jedi-vim slow downs.
In my situation, I also use lambdas a lot, but I use them because they are my preferred way to index pandas dataframes. For example:
dff = df.loc[lambda df: (df.A > 100) & (df.B.str.contains(r'foo'))]
Here is an exact snippet where I keep getting a long hang when I type the last `(`:
DF_AREAS = DF.loc[
    lambda df: pd.notna(df.AREA_NAME)
].drop_duplicates(...  # editor hangs here for ~5 seconds
@mbkupfer Your issue probably isn't the lambdas, but the fact that pandas completions are slow. There are a couple of issues (or at least one) open in the Jedi tracker about that. The best chance you have is probably the issue about creating a database index.
Using this issue a bit more to nerd out about programming languages :)
@gfixler I know it's years later. But: I have finally read your post. I always knew I wanted to, and interestingly it came after working a bit with Haskell and Rust. IMO Haskell was a bit too mathematical in a way. I watched Dave Beazley's lambda tutorial "Lambda Calculus From the Ground Up", which was a big help in understanding lambda calculus (which also helped me to understand Haskell better).
I'm planning to rewrite Jedi and Parso in Rust now. It's a very elegant and powerful language. Rust is overall just a better fit. It has no garbage collection and a very young, vibrant community. I have also learned a ton about parsers/type inference/software architecture while working on Jedi, so now I can finally use Jedi as a prototype and implement it in a fast language :). Let's see if I ever finish this, haha. :)
@davidhalter thanks for the quick reply on this older than old issue 😆
Any chance you could link that issue? It's a bit tricky to navigate the Jedi issue tracker.
Also, excited to see a potential rewrite of Jedi in Rust. You've also piqued my interest in picking up Rust.
https://github.com/davidhalter/jedi/issues/1059 is the issue.
About Rust: You can also stick with Python if you don't do CPU-bound stuff. But if you do CPU-bound stuff Python is very slow, so you need a fast language. C/C++ sucks, Go has no features, I don't like C#/Java too much, so Rust is pretty much what I've always wanted: A modern language without GC and some very powerful features. Have fun with it :) I still love Python though. Especially for smaller projects.
The best chance you have is probably the issue about creating a database index.
@davidhalter, it looks like the issue is still open though so I'm not sure what change you are referring to.
Your issue probably isn't the lambdas, but the fact that pandas completions are slow.
Is it possible to ignore completions by module, or, say, a regex pattern?
@mbkupfer I wasn't clear enough: There is no fix as of now. This is just an issue that - once fixed - will improve performance for big modules.
Is it possible to ignore completions by module, or, say, a regex pattern?
No. The only thing you could theoretically do is add `pandas` to `jedi.settings.auto_import_modules`. That will probably make it fast, because it will be imported instead of statically analyzed.
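In code, that workaround would look something like this (a sketch; `jedi.settings.auto_import_modules` is the setting named above, and appending `"pandas"` to the existing list is my assumption about the intended usage):

```python
import jedi

# Tell Jedi to actually import pandas at analysis time instead of
# statically analyzing it (trades safety for speed on big libraries)
jedi.settings.auto_import_modules = list(jedi.settings.auto_import_modules) + ["pandas"]
```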
Also I'm closing this, the original issue was that a lot of lambdas make Jedi slow. The reason for this is probably mostly that in certain scenarios open brackets are really hard to deal with in case of error recovery. If there are a few defs/classes in between, error recovery gets a lot easier. So please just write plain old functions and you'll be fine. Also as I mentioned, for big modules, I hope to eventually implement davidhalter/jedi#1059. However this might take a long time.
Feel free to continue the discussion.
I've been writing a lot of lambda one-liners, many of which return lambda one-liners, like this:
times = lambda x: lambda y: x * y
I'm playing with function composition, and make-believe currying. I can do `times(5)(7)` to multiply two numbers, or I can do `dotimes3 = times(3)` to partially apply times, resulting in a new function that already has an argument filled in (i.e. currying). I have a few pages of such things in a file, and a mix of regular `def foo():`-style functions here and there.

I noticed yesterday, adding some new definitions, that when I opened a new line and typed `(`, everything hung. I had to open a new shell and `killall -9 vim` to get out of it, then `reset` to clean up the mess in the original shell. I tried everything to find the problem - rolling code back to earlier, rolling vim plugins back, launching Vim without plugins and without .vimrc, independently and together. The problem was plugins, and I narrowed it down to jedi-vim. When I remove that, and I've had to for now, the problem goes away.

I was also able to make it go away by commenting out a few chunks of lambda definition lines. I could leave the regular functions. Something about lambdas really seems to slow it down. I tried uncommenting a lambda, typing `(` on a new line, undoing, uncommenting another, typing `(` on a new line - things started to slow down quickly. I'd get more and more of a lag per test, until suddenly it was taking 10 seconds. Another uncommenting and it took too long for me to bother waiting for it. It's slowing down exponentially.

Any idea what this could be? Any chance of a fix?