GuillaumeGomez opened 9 months ago
And fixed mendes (forgot to generate with the where clause).
So what is/was the motivation for this? What are you hoping to get out of it? It's a little hard to get excited about a +780/-230 diff.
Performance mostly. Do you have a way to measure it for `askama_derive`?
I don't think we have a great way to do that, no. Informally, I think we've used something like checking the compile time for the testing crate in this repo.
Performance is a good goal, though -- if you can show that this moves the needle there, I'd be interested!
I think that splitting code generation into a lot of small calls to `writeln` makes it much more difficult to have an idea of what the generated code will look like.
Currently, every line is written in its own statement because the code is generated with (more or less) correct indentation. I don't know if this feature is actually needed, since `cargo expand` reformats the resulting code anyway. Using multiline strings is more readable, if we can live without our manual indentation.
So overall, I think it'll be much more efficient at parsing in the derive proc-macro (not sure exactly how to test it...), as this code doesn't try to make sense of most tokens it parses.
However, and I think it's quite important to note: it has a maintenance cost (as does any code), and if you don't think it's worth it, we can close it. The proc-macro performance isn't that critical, and the three dependencies are still in the dependency tree since `askama` uses `serde`.

Even if not merged, some improvements can still be retrieved from this code, mostly around how the code is generated. I think that splitting code generation into a lot of small calls to `writeln` makes it much more difficult to have an idea of what the generated code will look like.

Anyway, it was fun to do. Parsing rustc tokens is pretty easy, so I did it originally to see how hard it would be to not use `syn` and all the crates (which are great, but because they need to support a lot of cases, are bound to be slower than hand-made code specific to one use case).
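The "don't make sense of most tokens" idea can be illustrated with a toy sketch (this is not the PR's actual parser; `skip_balanced` and the input string are invented for illustration): instead of building a full syntax tree the way `syn` does, a hand-rolled parser can simply track delimiter depth and skip over everything it doesn't care about.

```rust
// Skip to the closing brace that matches the first opening brace,
// without interpreting any of the tokens in between.
// Assumes the input's braces are balanced (the compiler has
// already validated the token stream in a real proc-macro).
fn skip_balanced(src: &str) -> Option<usize> {
    let mut depth: usize = 0;
    for (i, c) in src.char_indices() {
        match c {
            '{' => depth += 1,
            '}' => {
                depth -= 1;
                if depth == 0 {
                    return Some(i);
                }
            }
            _ => {} // every other token is ignored entirely
        }
    }
    None
}

fn main() {
    let src = "fn render() { let x = { 1 }; }";
    // The matching close brace is the last character here.
    assert_eq!(skip_balanced(src), Some(src.len() - 1));
}
```

A general-purpose crate has to understand every construct it might encounter; a single-purpose skipper like this only does the minimum work the one use case needs, which is where the hoped-for speedup comes from.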