Closed. rrthomas closed this 4 years ago.
Your design sounds OK to me. I've implemented it; take a look!
I think I've addressed all your latest comments; thanks for making them so detailed and easy to digest. Note that the comments read with `read_comments` are prepended to `to_add`, so that, as you say in your example, the final result is to "append A=x and B=y at the end".
I think I've addressed your latest round of comments.
(replying to your comment about performance)

I beg to differ. Many micro-optimizations don’t matter much, but algorithmic complexity is a real thing. With your implementation, specifying `-a` n times makes the operation n times as slow. 10 `-a` in a row is enough to make your code an order of magnitude slower.

I’ll check your new code tomorrow.

Edit: I’m having a few Internet troubles at home. I’m not forgetting about this.
> (replying to your comment about performance) I beg to differ. Many micro-optimizations don’t matter much, but algorithmic complexity is a real thing. With your implementation, specifying `-a` n times makes the operation n times as slow. 10 `-a` in a row is enough to make your code an order of magnitude slower.
I guarantee you no user would ever care, as no user would ever notice. Trust me, I've been strenuously avoiding unnecessary optimizations for 20 years now!
Great stuff! I'll continue with my other changes.
About the performance debate, I’m not talking about complex optimizations to save a few nanoseconds, but about taking a minute to reflect on the better algorithm before writing code. Prepending or appending takes about the same time to code, but the choice has a tremendous impact on performance.

I agree with you that most of the time micro-optimizations aren’t worth spending time on, but learning to optimize is a good way to grab the much better option when it’s within arm’s reach.
> Prepending or appending takes about the same time to code, but the choice has a tremendous impact on performance.
Not in this case: the difference is between a constant-time and a linear operation (hence linear or quadratic overall performance) over perhaps a dozen items at most.
> I agree with you that most of the time micro-optimizations aren’t worth spending time on, but learning to optimize is a good way to grab the much better option when it’s within arm’s reach.
No point wasting brain cycles on that when correctness and clarity are much more important, as here.
I do agree, though, that in this case we were able to get both better performance and clearer code. The point I was trying to make is that only clearer code mattered in this case.
Maybe we could do something more elegant than that. At first I considered deleting `options::set_all` because it is equivalent to a combination of `options::delete_all` and `options::to_add`. However, I didn’t like that `parse_options` read stdin, in particular because it makes it harder to test.

An alternative would be for `run` to convert `set_all` to `delete_all` + `to_add` after reading stdin, but `run` shouldn’t edit its options, so this doesn’t sound like good design. The current design, where `process` reads stdin late, is not good either, by the way.

I think the best option is to add a `FILE` parameter to `parse_options` for it to populate `to_add`. The main function will pass stdin, but the test suite can give it an `fmemopen` handle. `parse_options` will have to keep a local `set_all` boolean to check the arguments’ consistency before actually consuming its input at the end.

What do you think?