surinder-insonix / datanucleus-appengine

Automatically exported from code.google.com/p/datanucleus-appengine

List.remove(int index) doesn't work correctly #179

Closed GoogleCodeExporter closed 8 years ago

GoogleCodeExporter commented 8 years ago
What steps will reproduce the problem?
1. Create two persistent classes with a parent/child relation:
 class Account{ 
  @Persistent(mappedBy="account") 
  List<Phone> phones = new ArrayList<Phone>(); 
 } 
 and 
 class Phone { 
    @Persistent 
   Account account; 
 } 
2. Add two Phone objects to the list, so phones = [phone1, phone2] 

3. Remove children by calling
phones.remove(0); 
phones.remove(0); 

4. Save and then restore the Account object. The phone list should be empty, but it still contains the phone2 object.

If the phones are removed in this order
phones.remove(1); 
phones.remove(0); 

or via
 phones.clear(); 

everything is all right. I observe this behavior both on my local machine (Windows XP, Java 1.6.0_16, App Engine 1.3) and on the deployed version.
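For reference, against a plain in-memory ArrayList (no persistence involved) both removal orders leave the list empty; a minimal sketch:

```java
import java.util.ArrayList;
import java.util.List;

// Both removal orders empty a plain in-memory list; the reported
// discrepancy only appears once the list is backed by the datastore.
public class RemoveOrderCheck {
    public static void main(String[] args) {
        List<String> phones = new ArrayList<>(List.of("phone1", "phone2"));
        phones.remove(0);
        phones.remove(0);
        System.out.println(phones); // []

        phones = new ArrayList<>(List.of("phone1", "phone2"));
        phones.remove(1);
        phones.remove(0);
        System.out.println(phones); // []
    }
}
```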

Original issue reported on code.google.com by ailin...@gmail.com on 26 Dec 2009 at 2:02

GoogleCodeExporter commented 8 years ago

Original comment by max.r...@gmail.com on 12 Jan 2010 at 6:08

GoogleCodeExporter commented 8 years ago
I see the same behavior you're reporting when I remove the objects inside a txn. However, if I remove the objects without a txn it seems to work. Still digging...

Original comment by max.r...@gmail.com on 12 Jan 2010 at 9:55

GoogleCodeExporter commented 8 years ago
Ugh, yeah, there's a whole class of bugs like this. DataNucleus assumes that writes made in a txn can be "seen" by subsequent reads/queries within that same txn. It's a perfectly reasonable assumption, except the Datastore doesn't work this way. Writes made inside a txn are only visible to reads once the txn is committed. When you call removeAt(0) we query the datastore for the element at index 0 and then delete that element. When you call removeAt(0) again we query the datastore for the element at index 0, which is the same element that was returned as the result of the first query, because the fact that element 0 has been deleted and element 1 has been shifted down is not yet visible. We'll need to do some sort of intelligent caching to make this work. In the meantime your workaround is sound: always delete starting from the end and working towards the front of the list.
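The delete-from-the-end workaround can be sketched like this (plain Java for illustration; in the real case `phones` would be the persisted child list):

```java
import java.util.ArrayList;
import java.util.List;

public class DeleteFromEnd {
    public static void main(String[] args) {
        List<String> phones = new ArrayList<>(List.of("phone1", "phone2"));
        // Delete starting from the last index and work towards the front,
        // so no remove(i) depends on an earlier removal having already
        // shifted later elements down.
        for (int i = phones.size() - 1; i >= 0; i--) {
            phones.remove(i);
        }
        System.out.println(phones.isEmpty()); // true
    }
}
```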

Original comment by max.r...@gmail.com on 12 Jan 2010 at 10:55

GoogleCodeExporter commented 8 years ago
I see. Now I understand my second problem, but I don't see a workaround.
The problem is: I want a many-to-many relationship. Each object keeps a list of keys referencing other objects. When I create a new object I get its key and put it in some list. But that key is empty until I save the object. I don't want objects which are not referenced by another; in other words, I want to do everything in one txn. But that seems to be impossible. Any ideas?

Original comment by ailin...@gmail.com on 13 Jan 2010 at 9:54

GoogleCodeExporter commented 8 years ago
You're bumping into a very fundamental limitation of how datastore transactions work: transactions simply can't span entity groups. One option, which I don't expect to be palatable, is to accept the fact that sometimes your updates will fail partway through and your data will be left in an inconsistent state. You could run a cron job that "grooms" dangling references. Another option, which I expect you'll like better, is to wait for transactional tasks to become available. This is a feature that lets you add a task to the task queue as part of a datastore transaction, and it allows you to achieve eventual consistency across entity groups. For example, you could create an object in a txn and then add a task to a task queue as part of that same txn. This task would then create another object in a txn and then add another task to a task queue. This final task would then give the original object a pointer to the object that was created in the first task. Ordinarily this will all happen so quickly that the chances of any request seeing the incomplete state will be low, but if something does fail partway through, the tasks are guaranteed to keep executing until they succeed. But this isn't available yet. Probably in a few weeks.
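The chained-task flow can be modeled in plain Java (a toy sketch only; this is not the App Engine Task Queue API, and all names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of the chained-task flow: each "task" enqueues the next one,
// and the driver loop keeps executing tasks until none remain, so the
// back-pointer is eventually set even though no single step sets it.
public class ChainedTasks {
    static final Deque<Runnable> QUEUE = new ArrayDeque<>();
    static String backPointer = null; // illustrative stand-in for the Key

    public static void main(String[] args) {
        // "txn 1": create the original object and enqueue the next step.
        QUEUE.add(() -> {
            // "txn 2": create the second object, enqueue the final step.
            String created = "objectB";
            QUEUE.add(() -> backPointer = created); // "txn 3": link back
        });

        // Task-queue driver: tasks run until the chain completes.
        while (!QUEUE.isEmpty()) {
            QUEUE.poll().run();
        }
        System.out.println(backPointer); // objectB
    }
}
```

In the real feature each step would be a datastore txn that enqueues the next task atomically, and a failed task would be retried by the queue rather than re-run by a local loop.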

Original comment by max.r...@gmail.com on 13 Jan 2010 at 10:04

GoogleCodeExporter commented 8 years ago
I'm not sure I understand you.
All my objects are in the same group. I have something like this:
class Parent{
   @Persistent(mappedBy="parent")
   List<Foo> fooList;
   @Persistent(mappedBy="parent")
   List<Bar> barList;
}

class Bar{
  @Persistent
  Parent parent;
  @Persistent
  List<Key> fooList;
}
class Foo{
  @Persistent
  Parent parent;
  @Persistent
  List<Key> barList;
}

As far as I understand, all objects referenced by the Parent are in the same entity group. But when I create a new Foo instance I don't have its key until I save it. So I can't update the list in the Bar object.

Original comment by ailin...@gmail.com on 14 Jan 2010 at 1:55

GoogleCodeExporter commented 8 years ago
Oh, sorry, I didn't realize all your objects were in the same entity group. I don't understand what is preventing you from updating the list in the Bar object with the Foo Key. This should work:

1. Create and save the Parent.
2. Create the Bar with the Parent Key as the parent for the new Bar, and save.
3. Create the Foo with the Parent Key as the parent for the new Foo, add the Bar Key to Foo.barList, and save.
4. Update the Bar with the Foo Key.

Original comment by max.r...@gmail.com on 14 Jan 2010 at 2:17

GoogleCodeExporter commented 8 years ago
What do you mean by "save"? I save when I commit, which means I would have several transactions.

Original comment by ailin...@gmail.com on 14 Jan 2010 at 2:39

GoogleCodeExporter commented 8 years ago
Try pm.flush(). It should perform all pending writes.

Original comment by max.r...@gmail.com on 14 Jan 2010 at 2:42

GoogleCodeExporter commented 8 years ago
I can't modify a list.
If I have this class 
class Dummy{ 
  @Persistent 
  String name; 
  @Persistent 
  List<Long> list; 
} 

and run this code 
Dummy d = new Dummy(); 
d.setName("name"); 
pm.makePersistent(d); 
d.setName("new name"); 
pm.close(); 
the name "new name" will be stored, which is correct. But any modifications to the list made after pm.makePersistent(d) are lost.
This is the problem. 

Original comment by ailin...@gmail.com on 14 Jan 2010 at 3:22

GoogleCodeExporter commented 8 years ago
I'd encourage you to use transactions, since the lifecycle of a PersistenceCapable object is much more clearly defined. This works fine for me:

    beginTxn();
    Dummy d = new Dummy();
    d.setName("name");
    pm.makePersistent(d);
    d.setName("new name");
    d.setList(Utils.newArrayList(1L, 2L, 3L));
    commitTxn();
    pm.close();
    pm = pmf.getPersistenceManager();
    beginTxn();
    d = pm.getObjectById(Dummy.class, d.getId());
    assertEquals("new name", d.getName());
    assertEquals(3, d.getList().size());
    commitTxn();

Original comment by max.r...@gmail.com on 14 Jan 2010 at 4:38

GoogleCodeExporter commented 8 years ago
Yes, if I replace the entire list with a new one it works, but if I just add an object to the existing list it doesn't. I would classify that as a bug.
But anyway, thank you for the hint.

Original comment by ailin...@gmail.com on 14 Jan 2010 at 5:10

GoogleCodeExporter commented 8 years ago
Max - In your comment #11, continue by removing an item from the list (without creating a new ArrayList). That's where I'm seeing issues (I'm using detached objects).

Original comment by jamesk...@gmail.com on 1 Jul 2010 at 5:00

GoogleCodeExporter commented 8 years ago
Issue 167 has been merged into this issue.

Original comment by googleco...@yahoo.co.uk on 28 Sep 2011 at 6:12

GoogleCodeExporter commented 8 years ago
This test (the original one in the first post) passes with SVN trunk, using the latest storage version (i.e. the list is stored in a property in the parent object).

Original comment by googleco...@yahoo.co.uk on 31 Oct 2011 at 7:46