I came across com.athaydes.spockframework.report.internal.ReportDataAggregator#readTextFrom:
```groovy
@PackageScope
static String readTextFrom( RandomAccessFile file ) {
    def buffer = new byte[ 8 ]
    def result = new StringBuilder( file.length() as int )
    int bytesRead
    while ( ( bytesRead = file.read( buffer ) ) > 0 ) {
        result.append( new String( buffer[ 0..( bytesRead - 1 ) ] as byte[], charset ) )
    }
    return result.toString()
}
```
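As an aside, beyond the allocation churn there looks to be a subtle correctness hazard in that loop: decoding each 8-byte chunk independently can mangle a multi-byte character that straddles a chunk boundary (for variable-width charsets like UTF-8, at least). A minimal Java demonstration of the effect (class name and hardcoded UTF-8 are mine, for illustration only):

```java
import java.nio.charset.StandardCharsets;

public class SplitDecodeDemo {
    public static void main(String[] args) {
        // "é" in UTF-8 is two bytes: 0xC3 0xA9. Decoding the two bytes
        // together works; decoding each byte separately (as the chunked
        // loop can do at a chunk boundary) yields replacement characters.
        byte[] full = "é".getBytes(StandardCharsets.UTF_8);
        String whole = new String(full, StandardCharsets.UTF_8);
        String split = new String(full, 0, 1, StandardCharsets.UTF_8)
                     + new String(full, 1, 1, StandardCharsets.UTF_8);
        System.out.println(whole);              // é
        System.out.println(split);              // two U+FFFD replacement characters
        System.out.println(whole.equals(split)); // false
    }
}
```

Reading the whole file before decoding avoids this entirely.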
Reading 8 bytes at a time and creating and appending lots of tiny strings seems really inefficient! Is there perhaps a micro-optimisation that could be done here? Something like:
```groovy
static String readTextFrom( RandomAccessFile file ) {
    def buffer = new byte[ file.length() as int ]
    int bytesRead = file.read( buffer )
    if ( bytesRead != buffer.length ) {
        // error message or something...
    }
    return new String( buffer, 0, bytesRead, charset )
}
```
...this specific code would fail on a file whose length doesn't fit in an `int` (i.e. larger than `Integer.MAX_VALUE` bytes), but that's unlikely for a report file (? famous last words)?
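For what it's worth, `RandomAccessFile.readFully` already loops internally until the buffer is filled, so it would also sidestep the short-read check (a single `read(byte[])` isn't guaranteed to fill the buffer). A minimal sketch of that variant in Java (the `ReadTextDemo` wrapper and hardcoded UTF-8 are my stand-ins for the class's `charset` field; still assumes the file fits in an `int`-sized array):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReadTextDemo {

    // One-shot read: allocate a buffer for the whole file and let
    // readFully loop until it's filled (or throw EOFException if the
    // file is shorter than length() reported, e.g. it shrank meanwhile).
    static String readTextFrom(RandomAccessFile file) throws IOException {
        byte[] buffer = new byte[(int) file.length()];
        file.seek(0);
        file.readFully(buffer);
        return new String(buffer, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.writeString(tmp, "hello, Spock reports");
        try (RandomAccessFile raf = new RandomAccessFile(tmp.toFile(), "r")) {
            System.out.println(readTextFrom(raf)); // hello, Spock reports
        }
        Files.delete(tmp);
    }
}
```

One allocation, one decode, and no partially-decoded chunk boundaries to worry about.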
For context: I've been trawling through the code (trying to get a handle on https://github.com/AOEpeople/geb-spock-reports/issues/34), which is how I landed on this method.