linkedin / css-blocks

High performance, maintainable stylesheets.
http://css-blocks.com/
BSD 2-Clause "Simplified" License

feat: Definition ingestion and parsing into block. #446

Closed timlindvall closed 4 years ago

timlindvall commented 4 years ago

Sooo... I introduced a check in the Block constructor to ensure GUID uniqueness... which... uhh... caused a number of tests to break. 😅

So I've gone ahead and added a code path that attempts to recover from a duplicate ID (which, while exceptionally unlikely, is still something we could hit in real-life use) by appending a number to the end of the GUID. This may be a bad idea because it means GUIDs are no longer exactly 5 characters, and we still need to check that the appended GUID doesn't collide with yet another previous GUID, but I'd imagine we'd want to support GUIDs of varying lengths eventually. (Note: We only try to recover if the GUID was autogenerated; if the GUID came in from a definition file, we hard error.)
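The recovery path described above could be sketched roughly like this (a minimal illustration, not the actual css-blocks code; names like `ensureUniqueGuid` and the `-<number>` suffix convention are assumptions based on this discussion):

```typescript
// Illustrative sketch of collision recovery for autogenerated GUIDs.
// If an autogenerated GUID collides with one already registered, append
// "-1", "-2", ... until the result is unique. GUIDs from definition
// files are never rewritten; a duplicate there is a hard error.
const seenGuids = new Set<string>();

function ensureUniqueGuid(guid: string, wasAutoGenerated: boolean): string {
  if (!seenGuids.has(guid)) {
    seenGuids.add(guid);
    return guid;
  }
  if (!wasAutoGenerated) {
    // Definition-file GUIDs must already be unique; we can't silently rewrite them.
    throw new Error(`Duplicate GUID from definition file: ${guid}`);
  }
  // Reserve the "-<number>" suffix for collision recovery.
  let suffix = 1;
  while (seenGuids.has(`${guid}-${suffix}`)) {
    suffix++;
  }
  const recovered = `${guid}-${suffix}`;
  seenGuids.add(recovered);
  return recovered;
}
```

This is also where the "reserve a trailing `-##` for recovery" idea comes in: if plain GUIDs are forbidden from ending in a dash followed by digits, a suffixed GUID can never collide with a non-suffixed one.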

Hrmm... reflecting on the latest diff, I can improve the check if we set a rule that GUIDs shouldn't ~include dashes~ end with a dash followed by just numbers (thus, if we have to append, we can reserve ~dashes~ -## for that). What are your thoughts on all this?

chriseppstein commented 4 years ago

> What are your thoughts on all this?

You're right that a hash collision in an autogenerated hash is a real issue that we need to be able to handle somehow.

> This may be a bad idea because it means GUIDs are no longer 5 characters specifically

There's no guarantee that the guid be 5 chars long. It's an opaque string. But there is one important quality of a guid that we have to guarantee: the same block must always get the same guid, no matter which blocks have been processed before it.

Given this, the strategy that you're using won't be able to achieve that guarantee, because it may generate different guids for the same block depending on what blocks have been processed previously.

So what will work? I think we have to raise a hard error in all cases if this situation occurs. Then if it does happen, we can provide a configuration option that the user can set to increase the size of the sha. When config options change, all the caches get invalidated, so that will guarantee consistency.

What does this mean for your broken tests? I suspect this is because there's duplicate metadata across distinct tests. I think we either have to change the tests so that each test case has distinct metadata, or reset the tracking of duplicate guids between each test case. If we're not doing it yet, there's a reset method on the block factory that should reset the guid tracking.
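The reset-between-tests idea can be shown with a self-contained stand-in for the factory's guid tracking (the `GuidTracker` class here is illustrative; the real `BlockFactory` API differs, but the source mentions a `reset()` that should clear this state):

```typescript
// Stand-in for the factory's guid bookkeeping: registering the same GUID
// twice is an error, unless reset() is called in between (e.g. from a
// beforeEach hook in the test suite).
class GuidTracker {
  private seen = new Set<string>();

  register(guid: string): void {
    if (this.seen.has(guid)) {
      throw new Error(`Duplicate GUID: ${guid}`);
    }
    this.seen.add(guid);
  }

  reset(): void {
    this.seen.clear();
  }
}

// Two test cases can then reuse identical metadata without tripping the
// uniqueness check, as long as reset() runs between them:
const tracker = new GuidTracker();
tracker.register("abcde"); // test case 1
tracker.reset();           // beforeEach for test case 2
tracker.register("abcde"); // no error: tracking was reset
```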

timlindvall commented 4 years ago

idk what's happening, the test suite is passing locally after fixing the lint issues. =(