playframework / anorm

The Anorm database library
https://playframework.github.io/anorm/
Apache License 2.0

Why streaming results are all loaded into memory #161

Closed tunhuh95 closed 6 years ago

tunhuh95 commented 6 years ago

I'm working with a big table of more than 4 GB of data (approximately 2 million rows). So I tried to process it row by row (without storing it all in memory) as described in https://github.com/playframework/anorm/blob/master/docs/manual/working/scalaGuide/main/sql/ScalaAnorm.md#streaming-results, but I got a GC overhead error.

This is my code

SQL("select * from Banner").withResult(write)

@tailrec
def write(op: Option[Cursor])(implicit writer: Writer): Unit = op match {
  case Some(cursor) =>
    writer.append(cursor.row[Int]("Id").toString).append(',')
      .append(cursor.row[String]("Content")).append(',')
    write(cursor.next)
  case _ => ()
}
cchantep commented 6 years ago

Hi,

I doubt there is anything specific to Anorm here: the write function is a plain tail-recursive one that appends/accumulates data into the Writer, so if that Writer buffers everything in memory it is prone to memory shortage.
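A minimal sketch of a bounded-memory alternative, assuming the rows are traversed with a plain Iterator standing in for Anorm's Cursor and written through a java.io.Writer; the names CsvDump, Row, and flushEvery are illustrative, not part of Anorm:

```scala
import java.io.Writer
import scala.annotation.tailrec

object CsvDump {
  // Illustrative stand-in for a result row; with Anorm the values would
  // come from cursor.row[Int]("Id") and cursor.row[String]("Content").
  final case class Row(id: Int, content: String)

  // Tail-recursive traversal that writes each row out and flushes
  // periodically, so at most `flushEvery` rows' worth of output is
  // buffered at any time instead of the whole table.
  @tailrec
  def write(rows: Iterator[Row], out: Writer,
            pending: Int = 0, flushEvery: Int = 1000): Unit =
    if (rows.hasNext) {
      val r = rows.next()
      out.append(r.id.toString).append(',').append(r.content).append('\n')
      val p = if (pending + 1 >= flushEvery) { out.flush(); 0 } else pending + 1
      write(rows, out, p, flushEvery)
    } else out.flush()
}
```

Pointing `out` at a BufferedWriter over a FileWriter (rather than accumulating into an in-memory buffer such as a StringWriter) keeps heap usage bounded regardless of row count.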

Note that this tracker is for confirmed issues and actionable feature requests. Please use the Play mailing list or StackOverflow to ask questions or request help.

Best regards