cessen / ropey

A utf8 text rope for manipulating and editing large texts.
MIT License

Simple improvement to `Rope::lines()` iterator by adding a cache #69

Closed: poliorcetics closed this 1 year ago

poliorcetics commented 2 years ago

@pascalkuthe here are my first changes, the ones I'm sure about. I'm exploring improvements to the underlying methods, but this PR alone is a 34% improvement.
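For context, here is a minimal illustration (hypothetical code, not this PR's diff) of the kind of repeated work a per-iterator cache can help avoid when lines are consumed sequentially:

```rust
use ropey::Rope;

// Fetching each line independently performs its own descent of the rope's
// tree, so a full pass costs O(L log N). An iterator that keeps state
// between `next()` calls can avoid redoing part of that work for
// sequential access.
fn iterate_lines_naively(rope: &Rope) {
    for i in 0..rope.len_lines() {
        let line = rope.line(i); // independent O(log N) lookup every time
        let _ = line.len_chars();
    }
}
```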

Benchmarking

I created files with a line break at column 100, containing 10k lines, 100k lines, and 1 million lines.

Before this PR, on my machine, it took:

```
ropey-10k.txt => 10001 lines, 6.6705ms
ropey-100k.txt => 100001 lines, 67.236666ms
ropey-1m.txt => 1000001 lines, 681.34475ms
```

Now it takes:

```
ropey-10k.txt => 10001 lines, 4.445ms
ropey-100k.txt => 100001 lines, 44.790708ms
ropey-1m.txt => 1000001 lines, 454.385208ms
```

Code used for tests:

```rust
extern crate ropey;

use std::fs::File;
use std::io::BufReader;

use ropey::Rope;

fn main() {
    measure("10k");
    measure("100k");
    measure("1m");
}

fn measure(f: &str) {
    // Load the test file into a Rope.
    let file = File::open(format!("/tmp/ropey-{f}.txt")).unwrap();
    let text = Rope::from_reader(BufReader::new(file)).unwrap();

    // Time a full pass over the `lines()` iterator.
    let start = std::time::Instant::now();
    let count = text.lines().count();
    let end = start.elapsed();
    println!("ropey-{f}.txt => {count} lines, {end:?}");
}
```

I'm not uploading the files because they're quite large, but I can probably put them on some hosting service if needed.
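If it helps, a small generator along these lines (a hypothetical script matching the description above: 99 filler characters plus a newline per line) can recreate equivalent files:

```rust
use std::fs::File;
use std::io::{BufWriter, Write};

// Write `lines` lines of 99 'a' characters each followed by '\n',
// i.e. a line break at column 100 as described above.
fn generate(path: &str, lines: usize) -> std::io::Result<()> {
    let mut out = BufWriter::new(File::create(path)?);
    let line = "a".repeat(99);
    for _ in 0..lines {
        writeln!(out, "{line}")?;
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    generate("/tmp/ropey-10k.txt", 10_000)?;
    generate("/tmp/ropey-100k.txt", 100_000)?;
    generate("/tmp/ropey-1m.txt", 1_000_000)?;
    Ok(())
}
```

(The extra line in the counts above is the final empty line Ropey reports after the trailing newline.)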

archseer commented 2 years ago

Could you store a `chunk_iter: Chunks<'a>` similar to how `Bytes` and `Chars` do? It seems like `LinesEnum` is doing a similar job.

poliorcetics commented 2 years ago

> Could you store a `chunk_iter: Chunks<'a>` similar to how `Bytes` and `Chars` do? It seems like `LinesEnum` is doing a similar job.

Looking at it, there are some points where that could be a problem:

1. `Lines` is only interested in some chunks; a skip strategy needs the `TextInfo`.
2. `Chunks<'a>` yields `&'a str`, forgoing the `TextInfo`, so I'll have to modify it before working with it.
3. Once I have the `TextInfo`, there could be multiple newlines in a single chunk, meaning I'll have to iterate over them too.

None of those are blockers, just things to be aware of for the scope of the work.

I'll try to do it though; that seems like a promising perf opportunity.
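To make the discussion concrete, here is a rough sketch of that chunk-walking idea (illustrative code, not this PR's; `chunks` and `byte_slice` are Ropey's public API, and it assumes `'\n'`-only line breaks):

```rust
use ropey::{Rope, RopeSlice};

// Find line boundaries in a single pass over the chunks, then materialize
// each line with `byte_slice`. Matches `Rope::lines()` for '\n'-only text,
// including the final (possibly empty) line.
fn lines_via_chunks(rope: &Rope) -> Vec<RopeSlice<'_>> {
    let mut lines = Vec::new();
    let mut line_start = 0; // byte offset where the current line begins
    let mut chunk_start = 0; // byte offset where the current chunk begins
    for chunk in rope.chunks() {
        for (i, byte) in chunk.bytes().enumerate() {
            if byte == b'\n' {
                let line_end = chunk_start + i + 1; // keep the newline in the line
                lines.push(rope.byte_slice(line_start..line_end));
                line_start = line_end;
            }
        }
        chunk_start += chunk.len();
    }
    lines.push(rope.byte_slice(line_start..));
    lines
}
```

Each `byte_slice` call still does its own tree descent, which is essentially the `RopeSlice::new` overhead discussed in the next comment.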

pascalkuthe commented 2 years ago

Thanks for posting this! This version offers a nice speedup, but there is still a lot of performance left on the table: this version performs multiple binary searches (`get_chunk_at_line_break`, `RopeSlice::new`).

As @archseer mentioned, ideally we want to simply iterate all chunks with the `Chunks` iterator and construct the line slices during that traversal. However, then you are still calling `RopeSlice::new` (and there is also overhead because `chunks` returns strings). To avoid that, I have implemented a custom version of the chunks iterator that tracks the lowest common ancestor of a line. I wanted to post a PR last week, but reverse iteration still has some bugs and private matters got in the way. I will try to post my version as a draft later today or tomorrow at the latest.
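As a rough lower bound for what a single chunk pass can achieve (illustrative only, not #70's approach, and assuming `'\n'`-only line breaks):

```rust
use ropey::Rope;

// One pass over the chunks with no per-line slicing at all. The gap between
// this and `rope.lines().count()` is roughly the per-line overhead
// (`get_chunk_at_line_break`, `RopeSlice::new`) discussed above.
fn count_lines_lower_bound(rope: &Rope) -> usize {
    rope.chunks()
        .map(|chunk| chunk.bytes().filter(|&b| b == b'\n').count())
        .sum::<usize>()
        + 1
}
```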

pascalkuthe commented 2 years ago

I posted #70. Multiple tests still fail on that branch and it contains some unrelated changes; I just posted it to show off my approach.

cessen commented 1 year ago

@poliorcetics

I definitely appreciate the effort you've put into this! Unfortunately @pascalkuthe's PR (#70) is really the approach I had in mind. So I'm closing this in favor of that. Sorry about that!

poliorcetics commented 1 year ago

Don't worry, their solution is the better one, yes; mine was more of a stopgap :)