Idea: parse the removed/added lines of a changeset and modify the tags database in-place accordingly, removing tags that come from `-` lines and adding tags that come from `+` lines. In theory this may significantly reduce the time needed to update the tagfile, and would allow ctags to run automatically from pre-commit and post-checkout hooks without disrupting the user's workflow.
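A minimal Python sketch of the first half of the idea, assuming the tags file was generated with numeric ex commands (`ctags -n`) so each entry carries a line number; the function names and the diff/test data are hypothetical illustrations, not an existing implementation:

```python
import re

# Matches unified-diff hunk headers: @@ -old_start,old_len +new_start,new_len @@
HUNK_RE = re.compile(r"^@@ -(\d+)(?:,(\d+))? \+(\d+)(?:,(\d+))? @@")

def changed_ranges(diff_text):
    """Yield (path, removed_lines, added_lines) for each hunk of a unified diff."""
    path = None
    for line in diff_text.splitlines():
        if line.startswith("+++ b/"):
            path = line[len("+++ b/"):]
        m = HUNK_RE.match(line)
        if m and path is not None:
            old_start, old_len = int(m.group(1)), int(m.group(2) or "1")
            new_start, new_len = int(m.group(3)), int(m.group(4) or "1")
            yield (path,
                   range(old_start, old_start + old_len),
                   range(new_start, new_start + new_len))

def drop_stale_tags(tag_lines, path, removed):
    """Keep tag entries that do not point into a removed line range of `path`.

    Assumes numeric ex commands (ctags -n), i.e. entries look like
    name<TAB>file<TAB>42;"<TAB>kind; pattern-based entries and the
    !_TAG_ header lines are kept untouched.
    """
    kept = []
    for entry in tag_lines:
        name, file_, excmd = entry.split("\t", 2)
        lineno = int(excmd.split(';"')[0]) if excmd[:1].isdigit() else None
        if file_ == path and lineno in removed:
            continue  # tag was defined on a removed line: drop it
        kept.append(entry)
    return kept
```

For the `+` half, one could then re-run ctags only on the changed files and merge the fresh entries back in, keeping the output sorted as the tags format requires.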
Won't the underlying filesystem ruin the theoretical performance gain, since the whole tags file still has to be read and written back out after the modifications?
Are the tagfile parser and formatter fast enough to parse a large tags file and dump it back after modifications, or will they be just as slow as recursively re-parsing the sources?
Do we need some novel logfile-like append-only DB format to reduce the performance hit, CoW cloning, and SSD wear?
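If rewriting the whole tags file on every commit proves too expensive, the append-only direction could be sketched as follows; the record format here is entirely hypothetical: each update appends a `+`/`-` record, readers replay the log to recover the live set, and an occasional compaction step bounds log growth:

```python
def replay(log_lines):
    """Rebuild the live tag set from an append-only log.

    Hypothetical record format: "+<TAB>tagline" adds an entry,
    "-<TAB>tagline" retracts it; the last operation wins.
    """
    live = set()
    for record in log_lines:
        op, entry = record.split("\t", 1)
        if op == "+":
            live.add(entry)
        elif op == "-":
            live.discard(entry)
    return live

def compact(log_lines):
    """Rewrite the log so it contains only '+' records for live entries,
    sorted as the tags format expects."""
    return ["+\t" + entry for entry in sorted(replay(log_lines))]
```

Since updates only append, CoW cloning and SSD wear are limited to the log's tail rather than the whole file; the cost moves to readers, who must replay (or consult a compacted snapshot of) the log.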
The question: is it feasible?