noahmorrison / chevron

A Python implementation of mustache
MIT License

Template Inheritance #16

Open rafi opened 8 years ago

rafi commented 8 years ago

Any chance you could support this proposal? Mustache.php implemented it nicely.

noahmorrison commented 8 years ago

Hey, sorry for the long delay before responding, haven't been feeling too well lately!

I think this would be a nice addition to the mustache spec, but since it isn't standard, I'll have to refuse it if speed takes a hit. Speed and implementing the entire spec were the biggest reasons for making chevron instead of using what was already available (plus... I've always wanted to make a compiler, and mustache is easy to compile).

That said, I'm very interested in trying this out and seeing if we can get something we like while maintaining spec compliance and speed. One thing I don't like about this (I'm allowed to have complaints because it isn't spec, right?) is that it adds the `<` tag, which is different from the `>` tag. This is just begging for confusion, and I actually didn't notice the difference for the first 30ish minutes of looking at this.

I'd suggest we unify `<` and `>`: if a partial tag (`<` or `>`) has a matching end tag, it becomes a "parent"/"super" tag; if it doesn't, it stays a normal partial tag like it is now.

This would require a change to the tokenizer. Right now the tokenizer is a generator of flat two-element tuples, the type of the tag and the value inside it, so `{{#test}}hey{{/test}}` would generate `('section', 'test'), ('literal', 'hey'), ('end', 'test')`. Instead, I'd suggest turning the tokenizer into a function rather than a generator, and having it return a list of nested tags. The previous example would then return this:

```python
[{'type': 'section',
  'value': 'test',
  'children': [{'type': 'literal',
                'value': 'hey'}]}]
```
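For illustration, a stack-based pass along these lines could build that tree from the current flat stream in one go (a rough sketch with placeholder names, not chevron's actual code):

```python
# Rough sketch only: build the nested token tree with a stack of
# children lists. Names here are placeholders, not chevron's code.
def nest(flat_tokens):
    root = []
    stack = [root]
    for type_, value in flat_tokens:
        if type_ == 'end':
            # closing tag: step back out to the parent's children list
            stack.pop()
        elif type_ == 'section':
            node = {'type': type_, 'value': value, 'children': []}
            stack[-1].append(node)
            stack.append(node['children'])
        else:
            stack[-1].append({'type': type_, 'value': value})
    return root


tokens = [('section', 'test'), ('literal', 'hey'), ('end', 'test')]
nest(tokens)
# -> [{'type': 'section', 'value': 'test',
#      'children': [{'type': 'literal', 'value': 'hey'}]}]
```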

I don't know what kind of performance hit turning the generator into a function would cause, but I imagine it would have little effect, especially because I don't support streaming the output of render anyway.

We would need this new tokenizer structure so that we can efficiently check whether a tag has been closed while tokenizing, just by looking at the list we're building, and not have to worry about it in the rendering function.
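As a rough sketch of that look-back (placeholder names, nothing from chevron's code yet): when an end tag arrives, walk back through the siblings built so far, and if an unclosed partial with that name is found, promote it to a parent tag and move everything after it into its children.

```python
# Rough sketch of the look-back (placeholder names, not chevron's code):
# partials are appended as plain nodes; when an end tag arrives we scan
# the current children list backwards, and if we find an unclosed
# partial with that name we promote it to a parent tag and adopt the
# siblings after it as its children.
def close_tag(children, name):
    for i in range(len(children) - 1, -1, -1):
        node = children[i]
        if node.get('type') == 'partial' and node.get('value') == name:
            node['type'] = 'parent'              # it has a matching end tag
            node['children'] = children[i + 1:]  # adopt trailing siblings
            del children[i + 1:]
            return True
    return False


siblings = [{'type': 'partial', 'value': 'box'},
            {'type': 'literal', 'value': 'hello'}]
close_tag(siblings, 'box')
# siblings -> [{'type': 'parent', 'value': 'box',
#               'children': [{'type': 'literal', 'value': 'hello'}]}]
```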

I'm open to comments, but I think this is a backwards-compatible and efficient way to do it. I'll start implementing it in the next few days; if you have any concerns, feel free to voice them.