outworkers / phantom

Schema-safe, type-safe, reactive Scala driver for Cassandra/DataStax Enterprise
http://outworkers.github.io/phantom/
Apache License 2.0

Is it possible to have a default update method so that I don't have to spell out every column in the modify statement? #608

Open git4sun opened 7 years ago

git4sun commented 7 years ago

In most cases, I just need to set all the column data from a given object, because that way I can separate the data manipulation logic from the database operations. Is it possible to have an update method like update.where(_.id eqs id).modify(Record(...))? It would also prevent mistakes like forgetting to update a couple of columns when there are too many.

alexflav23 commented 7 years ago

Hi @git4sun, you can use a prepared update statement: it will automatically bind a record if you bind all of its components, and this is just a simple application of shapeless.Generic.

case class Record(email: String, name: String)

val query = update.where(_.id eqs id).modify(_.email setTo ?).and(_.name setTo ?).prepare()

query.bind(Record("test@yahoo.com", "test")).future()

The problem you have is that no part of the primary key can be updated. If your record also includes primary key columns, how would you deal with that? The bind wouldn't be deterministic, so it's not that easy to implement generically at framework level. We are working on improvements here with the macro API.
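To make the ambiguity concrete: a deterministic bind needs to know which record fields belong to the primary key (WHERE clause) and which are settable (SET clause). A minimal sketch in plain Scala 2.13, using only scala.Product introspection rather than phantom's actual API (the names `primaryKey` and `updatePairs` are hypothetical, for illustration):

```scala
// Split a record's fields into (primary-key, settable) name/value pairs.
// Plain Scala sketch; phantom itself would derive this at compile time.
case class Record(id: java.util.UUID, email: String, name: String)

// Field names belonging to the primary key, which must not appear in SET.
val primaryKey = Set("id")

def updatePairs(r: Product): (List[(String, Any)], List[(String, Any)]) =
  r.productElementNames          // field names, in declaration order (2.13+)
    .zip(r.productIterator)      // paired with the field values
    .toList
    .partition { case (n, _) => primaryKey.contains(n) }

val rec = Record(java.util.UUID.randomUUID(), "test@yahoo.com", "test")
val (whereClause, setClause) = updatePairs(rec)
// whereClause names: List("id"); setClause names: List("email", "name")
```

The framework-level difficulty is exactly the `primaryKey` set above: without schema knowledge, nothing in the record type itself says which fields are keys.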


Regards, Flavian

git4sun commented 7 years ago

For example, given case class Record(id: UUID, email: String, name: String), the default update would be:

def updateRecord(r: Record) =
  update.where(_.id eqs r.id).modify(_.email setTo r.email).and(_.name setTo r.name).future()

I realized that I have this function for all 12 of my tables, and I also have specific update methods for specific requirements. I am not suggesting making this the only update function, but a default or special update function could save time and prevent mistakes. My team made a couple of mistakes when listing every column (typically forgetting one). This may give you an idea of how it looks: https://github.com/jtdev/play-scala-spring-data-jpa/blob/master/app/controllers/Application.scala
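The "forgot a column" mistake described above can at least be caught by a test today, before any framework support lands. A hypothetical sketch in plain Scala 2.13 (not phantom's API): compare the set of columns a hand-written update actually sets against the record's non-key fields.

```scala
// Runtime guard for a hand-written update: every non-key field of the
// record should appear in the set of columns the update writes.
case class Record(id: java.util.UUID, email: String, name: String)

// Columns the hand-written update sets; "name" is missing on purpose
// to simulate the forgotten-column bug.
val updatedColumns = Set("email")

val sample = Record(java.util.UUID.randomUUID(), "test@yahoo.com", "test")
val nonKeyFields = sample.productElementNames.toSet - "id"

val missing = nonKeyFields -- updatedColumns // the forgotten column(s)
```

A one-line assertion on `missing` in a unit test per table turns the silent data bug into a failing build.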

alexflav23 commented 7 years ago

Hi @git4sun,

We are working on generating more macro-assisted methods, starting with a store and a prepared store method, to allow automated insertion of columns. It's still in progress, as there are a few edge cases we don't yet cover properly.

The schema inference and type-safe checking mechanism will support what you are after. It will take one or two more sprints to get there, but I will keep you posted.

Regards,

git4sun commented 7 years ago

@alexflav23 great news! BTW, thanks for the great software.

alexflav23 commented 7 years ago

Hi @git4sun,

We have reached the point where implementing this is very feasible, as we now have implicit machinery that's capable of generating extractor types from a table and typechecking them in advanced scenarios.

But before going further here, there are a few interesting scenarios to account for:

What I'm thinking is to use Scala symbols, but you would have to manually match the names against the column names. We could auto-match when a table has only a single column of a given type, but that's not a very realistic scenario. For store we can rely on the Record type's field names to infer associations; for updates we have no such thing.

So here's what I'm thinking:

db.table.update('colName -> 5, 'colName2 -> "some new text")

Then you could get compile-time errors if:

And so on; I'm sure there are things I haven't covered off the top of my head. Please feel free to pitch in if you had anything different in mind.
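A runtime stand-in can illustrate what the proposed symbol-based API would check at compile time. This is a hypothetical sketch in plain Scala, not phantom's API; `update` and `tableColumns` below are invented for illustration, and `Symbol("...")` is used in place of the older `'sym` literal syntax:

```scala
// Reject assignments whose symbol does not name a column of the table.
// In the real, macro-based API this Left would instead be a compile error.
val tableColumns = Set("id", "email", "name")

def update(assignments: (Symbol, Any)*): Either[Set[String], Map[String, Any]] = {
  val unknown = assignments.map(_._1.name).toSet -- tableColumns
  if (unknown.nonEmpty) Left(unknown)                          // bad column names
  else Right(assignments.map { case (s, v) => (s.name, v) }.toMap)
}

val ok  = update(Symbol("email") -> "test@yahoo.com")  // matches a column
val bad = update(Symbol("emial") -> "typo")            // misspelled column
```

The macro version would shift the `Left` branch to compilation, which is what makes the symbol-matching proposal attractive despite the manual name matching.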

Regards.