unobliged / plymlet

plymlet rails test code
http://plymlet.herokuapp.com

Architecture: Redis, Postgres Hstore #23

Open: unobliged opened this issue 12 years ago

unobliged commented 12 years ago

Look more into whether Redis is viable for the live site, but set up a separate project to test hstore on. Plan to port to pg hstore later unless the Redis math adds up. Try to set up a basic performance test to measure the difference.

unobliged commented 12 years ago

Redis does not seem viable in the long run from a cost standpoint no matter how I crunch the numbers, given the memory usage on CEDICT alone, let alone the many planned dictionaries that will be added. It might still be useful for some other purpose, but for now I will stop using it and try implementing hstore.

References:
http://travisjeffery.com/b/2012/02/using-postgress-hstore-with-rails/
https://github.com/softa/activerecord-postgres-hstore

Important! For each model that uses hstore, add a line as per this example:

    class Person < ActiveRecord::Base
      serialize :data, ActiveRecord::Coders::Hstore
    end

Notes: Upgraded postgres (with contrib) to 9.2, added the hstore gem (https://github.com/softa/activerecord-postgres-hstore), and ran migrations to change the serialized columns to hstore. I was unable to change the column type cleanly, so I ended up removing the columns and then adding them back with the hstore data type.
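For reference, a minimal sketch of the kind of migration described above, assuming a passages table with a vocab_list column (the migration name and table layout are illustrative, not taken from the actual schema):

    # Hypothetical migration sketch: drop the old serialized column and
    # re-add it with the hstore type, since an in-place type change did not work.
    class ConvertVocabListToHstore < ActiveRecord::Migration
      def up
        execute "CREATE EXTENSION IF NOT EXISTS hstore"  # requires postgres contrib
        remove_column :passages, :vocab_list
        add_column :passages, :vocab_list, :hstore
      end

      def down
        remove_column :passages, :vocab_list
        add_column :passages, :vocab_list, :text
      end
    end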

unobliged commented 12 years ago

There appear to be issues with setting an empty hash as a default value (which, oddly enough, serialization handled very cleanly): https://github.com/softa/activerecord-postgres-hstore/issues/22 I was not able to figure out a way to initialize it with an empty hash via psql, though I did try setting a junk key-value pair. That is not an ideal solution though; perhaps a callback would be better?

unobliged commented 12 years ago

Found a good workaround with after_initialize here: http://stackoverflow.com/questions/9610277/rails-after-initialize-only-on-new This gets around the problem of after_initialize being called on finds. Seems to work now with newly created users; will play around with this before implementing on Passage.vocab_list.

Update: didn't seem to break the Heroku demo, but should still do further testing.

2nd update: implemented on Passage.vocab_list, so far so good.
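A minimal sketch of what that workaround might look like on the Passage model, following the guard-on-new_record? approach from the linked Stack Overflow answer (the callback name here is illustrative):

    class Passage < ActiveRecord::Base
      serialize :vocab_list, ActiveRecord::Coders::Hstore

      # Only runs for freshly built records, so finds of existing
      # rows are not touched by the callback.
      after_initialize :set_default_vocab_list, if: :new_record?

      private

      def set_default_vocab_list
        self.vocab_list ||= {}
      end
    end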

unobliged commented 12 years ago

Next step is to look into how to load the dictionaries into postgres and see if we can take advantage of hstore to segregate all the definition attributes for querying (parts of speech, alternate symbols, definitions, etc.).
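The querying side might look something like this, assuming a hypothetical Entry model with an hstore column named attrs holding per-definition attributes (the model, column, and key names are assumptions for illustration, not from the actual schema):

    # Entries whose attrs hash contains a 'pos' key at all
    Entry.where("attrs ? 'pos'")

    # Entries whose part of speech is stored as 'noun'
    Entry.where("attrs -> 'pos' = ?", "noun")

    # Entries that list an alternate symbol
    Entry.where("attrs ? 'alt_symbols'")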

unobliged commented 11 years ago

Minor update from having worked on edict2_parser: the Heroku demo row limit is 10k, but we can cram as much as needed into Redis/memcache as long as the memory limit is not reached. It might be worth looking at those again, or somehow offloading the dictionary and only storing vocabulary per passage. Perhaps create another app and call it via an API?
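If the dictionary did get split into its own app, the lookup from plymlet could be a simple HTTP call. This is only a sketch of that idea; the host, path, and JSON shape are made up for illustration, not an existing API:

    require "net/http"
    require "json"
    require "uri"

    # Hypothetical client for a separate dictionary app exposed over HTTP.
    def lookup_definitions(word)
      uri = URI("http://plymlet-dict.example.com/api/entries?word=#{URI.encode_www_form_component(word)}")
      response = Net::HTTP.get_response(uri)
      return [] unless response.is_a?(Net::HTTPSuccess)

      JSON.parse(response.body)
    end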