The problem is caused by retrieving the learning data. But I agree the data
should be retrieved on demand.
Original comment by mrlhwlib...@gmail.com
on 21 Jan 2015 at 7:32
Do you mean these statements in CardDaoImpl.java?
for (Card c : cs) {
    learningDataDao.refresh(c.getLearningData());
    categoryDao.refresh(c.getCategory());
}
So what would be the best solution?
1. Skip the getCategory() refresh (because we don't need it explicitly in list mode?).
This should reduce the time by roughly 30-50 percent, since we would issue only half the number of single selects.
2. Extend the database cursor so that all learningData attributes are read in
the fetch, too. Once the learningData is in memory, try to put it into the
learningDataDao (is this possible in ormLite?).
3. Do another cursor fetch with ALL learningData. Afterwards, join it in
memory with the data that we have already read with:
List<Card> cs = cardQb.query();
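A rough sketch of the in-memory join from option 3, in plain Java (the Card/LearningData stubs and their field names below are illustrative stand-ins, not the real AnyMemo classes):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class InMemoryJoinSketch {
    // Minimal stand-ins for the real entities; field names are assumptions.
    static class LearningData {
        final int id;
        final int grade;
        LearningData(int id, int grade) { this.id = id; this.grade = grade; }
    }

    static class Card {
        final int id;
        final int learningDataId;  // foreign-key column read by the card cursor
        LearningData learningData; // filled in by the join below
        Card(int id, int learningDataId) { this.id = id; this.learningDataId = learningDataId; }
    }

    // Join the two fetched lists in memory instead of issuing one SELECT per card.
    static void join(List<Card> cards, List<LearningData> learningData) {
        Map<Integer, LearningData> byId = new HashMap<>();
        for (LearningData ld : learningData) {
            byId.put(ld.id, ld);
        }
        for (Card c : cards) {
            c.learningData = byId.get(c.learningDataId); // O(1) lookup, no extra query
        }
    }

    public static void main(String[] args) {
        List<Card> cards = new ArrayList<>();
        cards.add(new Card(1, 10));
        cards.add(new Card(2, 11));
        List<LearningData> lds = new ArrayList<>();
        lds.add(new LearningData(10, 3));
        lds.add(new LearningData(11, 5));
        join(cards, lds);
        System.out.println(cards.get(0).learningData.grade); // 3
        System.out.println(cards.get(1).learningData.grade); // 5
    }
}
```

The point is that two cursor fetches plus one HashMap pass replace N single-row selects.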
What do you think?
PS 1:
This
http://ormlite.com/javadoc/ormlite-core/doc-files/ormlite_2.html#Foreign-Collections
would not help:
the documentation says a single select is done for each fetch result of the
cursor...
PS 2:
something like this would not work in ormLite, would it?
SomeCollectionImplementation<Card, LearningData> cardAndLearningData =
cardAndLearningQb.query();
Original comment by haimi...@googlemail.com
on 21 Jan 2015 at 6:49
Yes, you are right: the refresh takes quite some time; it takes about two
times more time to refresh these values.
Each refresh generates a SELECT statement.
I wonder if we can put this for statement inside a callBatchTasks. It might
enhance the performance. We can try it first because it is very simple to do.
Original comment by mrlhwlib...@gmail.com
on 21 Jan 2015 at 7:00
I don't think that callBatchTasks would help. Reason: it is just meant for
update purposes...
But the easiest and best step to do first would be: use the Object Cache
feature:
http://ormlite.com/javadoc/ormlite-core/doc-files/ormlite_5.html#Object-Caches
If we do .setObjectCache(true) for learningData and category, this has only
limited effect on memory consumption (because the footprint of these tables is
not so big).
Afterwards, we could enable it for the cardDao as well, to see how fast AnyMemo
gets and how much memory is allocated with databases of >5000 cards.
Original comment by haimi...@googlemail.com
on 21 Jan 2015 at 7:52
regarding my topic #2:
this would be a solution:
http://stackoverflow.com/questions/11987552/ormlite-joins-or-rawquery-auto-mapping
(with regard to the Dao.getRawRowMapper() solution)
But we have to take into account that for this, an artificial
"cardLearningDataDao" (a combination of cardDao and learningDataDao with all
the fields of both DAOs) with no database persistence has to be created...
So, this is a somewhat strange workaround...
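A conceptual sketch of that mapping step in plain Java (the RowMapper interface and the combined CardLearningData holder are illustrative stand-ins for ORMLite's raw row mapping and the artificial DAO, not real AnyMemo code):

```java
public class RawJoinMappingSketch {
    // Stand-in for a raw row mapper: turns one joined result row into an object.
    interface RowMapper<T> {
        T mapRow(String[] columnNames, String[] resultColumns);
    }

    // The "artificial" combined holder: card fields plus learning-data fields,
    // with no database persistence of its own.
    static class CardLearningData {
        String question;
        String answer;
        int grade;
    }

    public static void main(String[] args) {
        RowMapper<CardLearningData> mapper = (names, values) -> {
            CardLearningData cld = new CardLearningData();
            for (int i = 0; i < names.length; i++) {
                switch (names[i]) {
                    case "question": cld.question = values[i]; break;
                    case "answer":   cld.answer = values[i];   break;
                    case "grade":    cld.grade = Integer.parseInt(values[i]); break;
                }
            }
            return cld;
        };

        // One row as it might come back from a
        // "SELECT ... FROM card JOIN learning_data ..." raw query.
        String[] names = {"question", "answer", "grade"};
        String[] row = {"q1", "a1", "4"};
        CardLearningData cld = mapper.mapRow(names, row);
        System.out.println(cld.question + "/" + cld.answer + "/" + cld.grade);
    }
}
```

All fields of both entities have to be mapped by hand like this, which is the maintainability cost mentioned above.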
Original comment by haimi...@googlemail.com
on 21 Jan 2015 at 9:25
regarding the object cache (
http://ormlite.com/javadoc/ormlite-core/doc-files/ormlite_5.html#Object-Caches
):
the solution to keep the instances really in the memory would be:
cardDao.setObjectCache(ReferenceObjectCache.makeSoftCache());
learningDataDao.setObjectCache(ReferenceObjectCache.makeSoftCache());
(with "setObjectCache(true)" the default weak-reference cache is used, so
entries are garbage collected quickly)
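The difference between the two cache flavors comes down to Java reference types; a small self-contained demo (plain JDK, no ORMLite):

```java
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReferenceCacheSketch {
    public static void main(String[] args) {
        Object softlyHeld = new Object();
        Object weaklyHeld = new Object();
        SoftReference<Object> soft = new SoftReference<>(softlyHeld);
        WeakReference<Object> weak = new WeakReference<>(weaklyHeld);

        // Drop the strong references, then suggest a collection.
        softlyHeld = null;
        weaklyHeld = null;
        System.gc();

        // Softly reachable objects are only cleared under memory pressure,
        // so the soft reference typically survives an ordinary GC ...
        System.out.println("soft survives: " + (soft.get() != null));
        // ... while a weakly reachable object may already be gone.
        System.out.println("weak survives: " + (weak.get() != null));
    }
}
```

Note that GC behavior is not strictly deterministic; the demo shows the usual outcome, which is why a soft cache keeps entries alive across list refreshes while the weak default does not.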
Do you have an idea where to place these statements, so that whenever an
instance of LearningData or Category is read/updated/deleted, they are cached
already?
Thanks!
Original comment by haimi...@googlemail.com
on 21 Jan 2015 at 10:13
I think using join could improve the performance. I am not exactly sure how to
do the object mapping automatically though. I think manually setting all the
fields is needed.
I won't use the object cache though, because I am not sure how it handles
updates and keeps the data consistent.
Original comment by mrlhwlib...@gmail.com
on 21 Jan 2015 at 10:55
Also, the setObjectCache call can be done after the getDao method below.
https://code.google.com/p/anymemo/source/browse/src/org/liberty/android/fantastischmemo/AnyMemoDBOpenHelper.java#175
Original comment by mrlhwlib...@gmail.com
on 21 Jan 2015 at 10:57
thanks for pointing me to the right code position.
I think you can use the object cache without fear regarding consistency,
because the framework does the persistence as usual; it just offers you a
real-time copy of the database in memory. The topics that are relevant for me
would be:
1. how big is the footprint in memory doing so?
2. is the clearCache in our code done properly in all situations?
3. is this feature of ormLite mature enough / used by enough people so that
possible bugs have already been corrected?
If these questions have positive answers, I would prefer it over the
rawMapper/join, as the latter might have a negative impact on code simplicity /
maintainability.
But in this respect, you are the boss :-)
Original comment by haimi...@googlemail.com
on 22 Jan 2015 at 3:27
I just feel that in the future I would not use ORMLite anymore, since the
maintainer stopped developing the Android version.
Instead, you are right, I will return to manual mapping and Android's content
provider so I can integrate with ListView easily. The ListView can query
content provider on demand and resolve the performance issue.
Original comment by mrlhwlib...@gmail.com
on 22 Jan 2015 at 5:56
As I understood it, the OrmLite people do not want to add "our" missing
feature on purpose (as they want to keep OrmLite lightweight). In general,
the concept of ORM frameworks seems useful to me: it avoids a lot of coding
errors and speeds up development. It's just that you have to follow other
paths when you want to do performance-critical stuff.
Regarding setObjectCache(ReferenceObjectCache.makeSoftCache()):
without reengineering parts of the code, it would not improve the situation. We
would have to keep the AnyMemoDBOpenHelper open for each used database.
Otherwise, closing a database deletes the cache, and it would have to be
built up from scratch the next time.
Regarding:
>>Instead, you are right, I will return to manual mapping and Android's
content provider so I can integrate
>>with ListView easily. The ListView can query content provider on demand and
resolve the performance issue.
could you give me a rough idea when you would start/finish this task?
Thanks in advance.
Original comment by haimi...@googlemail.com
on 22 Jan 2015 at 11:09
I like the object mapping part, but in general I don't like query building. I
would like to write customized queries and map them to objects easily.
I don't have a timeline for moving to the ContentProvider model, since I
generally don't have enough free time for AnyMemo and my friends are helping
with the new features.
The planned way is to move only the card list to a content provider as a first
step and then move the other pieces one by one.
Original comment by mrlhwlib...@gmail.com
on 23 Jan 2015 at 1:08
I have another question regarding the entity relationships of the database:
is it really necessary to have two different classes / database tables (given
that this is a 1:1 relationship):
- Card
- LearningData
(I just want to understand the motivation to do this. I do not ask for a change
:-)))) )
Thanks!
Original comment by haimi...@googlemail.com
on 27 Jan 2015 at 6:25
No, there is no need for a 1-to-1 relation between learning data and card. The
original motivation was to separate out the learning data so I can use a
different learning algorithm while keeping the basic card data the same.
In the AnyMemo client for the cloud platform, the database schema is much
simpler and flexible.
Card is basically a JSON object stored in dynamodb:
cards_table:
{
    __type__: "QA",
    id: "1234-12345678-12345678-1234",
    deck_id: "UUID",
    question: "q1",
    answer: "a1",
    tags: ["abc", "def"],
    learningData: {
        __type__: "AnyMemoAlgorithm",
        whatever: ""
    }
}
Original comment by mrlhwlib...@gmail.com
on 27 Jan 2015 at 6:40
Original issue reported on code.google.com by
haimi...@googlemail.com
on 21 Jan 2015 at 7:09