My previous approach to handling large files ran into trouble when a file contained long stretches of invalid tokens tucked inside a string literal at the beginning of the file (where I performed my preview). This necessitated a different approach. The new solution has the advantage of being both simpler (it is based purely on error counting) and robust to this sort of problem.
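A minimal sketch of what an error-counting check like this might look like. Everything here is illustrative: the `Token` type, the `error_ratio` and `should_skip` helpers, and the 0.2 threshold are hypothetical stand-ins, not the actual implementation.

```python
from collections import namedtuple

# Hypothetical token type; a real lexer would produce its own.
Token = namedtuple("Token", ["kind", "text"])


def error_ratio(tokens):
    """Fraction of tokens flagged as invalid by the lexer."""
    total = 0
    errors = 0
    for tok in tokens:
        total += 1
        if tok.kind == "error":
            errors += 1
    return errors / total if total else 0.0


def should_skip(tokens, threshold=0.2):
    # Count errors over the *whole* token stream, so a run of invalid
    # tokens hidden inside a string literal at the start of the file
    # cannot dominate the decision the way a fixed-size preview could.
    return error_ratio(tokens) > threshold
```

Because the ratio is computed over the entire file rather than a leading preview, a pathological prefix no longer skews the result on its own.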