Closed: Twenkid closed this issue 4 years ago.
And where does it get longer than it should? Or does that not matter at the moment, because you're doing a major refactoring and the code will change?
What is an excessive len(root_)? A root_ with an excessive length?
Yes, that _fork_[0][0][5] (roots, = len(root_)), which is always > 1, even though there are instances of initial len(root_) == 0|1.
If you track it manually, can you point to where exactly the id changes abruptly, when it should stay the same?
No, the point where len(root_) is incremented outside of the scan_P_ run that formed it. Basically, you pick an initial root_ with len(root_) == 0|1, then add something like:
trackid = id(root_), then: if id(root_) == trackid and len(root_) > 0|1: break  # add a breakpoint before break
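A minimal sketch of that id-tracking idea (the names `watch` and `candidate` are illustrative, not from frame_dblobs.py):

```python
# Sketch of tracking a specific root_ by identity: remember id() of the
# initial list, then flag the moment that same object grows beyond the
# expected length outside the scan_P_ run that formed it.
root_ = []             # pick an initial root_ with len(root_) == 0 or 1
trackid = id(root_)    # remember its identity

def watch(candidate):
    # call this wherever a root_ might get appended to;
    # returns True where you would set the breakpoint
    return id(candidate) == trackid and len(candidate) > 1

root_.append("P1")
assert watch(root_) is False  # len == 1: still within expectations
root_.append("P2")
assert watch(root_) is True   # len > 1: the breakpoint would fire here
```

In a real debugging session you would replace the `assert` lines with a conditional breakpoint on `watch(root_)`.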
> Or at the moment it doesn't matter because you're doing a major refactoring and the code would change?
Yes, I am starting with the root counter "roots" directly. The reason I had the intermediate root_ was to contain Ps, to reassign their forks from _Ps to blob_seg, etc.
But now I am making fork a mutable [], initialized as [[_P]] and appended with additional variables as it is incorporated into blob_seg, and then blob_seg into blob. This way the up-reference (fork id) will be constant, and I don't need down-references in root_ to update it.
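A small sketch of why a mutable list keeps the up-reference constant (the structures here are placeholders, not the actual CogAlg types):

```python
# Because Python lists are held by reference, every structure that stores
# `fork` sees later in-place appends; the fork's identity never changes,
# so no down-reference is needed to propagate updates.
_P = ("P-data",)             # placeholder for a pattern tuple
fork = [[_P]]                # initialized as [[_P]]
blob_seg = {"fork": fork}    # blob_seg keeps a reference to the same list

fork.append("extra-var")     # appended in place as it joins blob_seg

assert blob_seg["fork"] is fork              # identity (fork id) is constant
assert blob_seg["fork"][-1] == "extra-var"   # the update is visible upstream
```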
> What is excessive len(root_) - root_ with an excessive length?
> Yes, that _fork_[0][0][5] (roots, = len(root_)) which is always >1 Even though there are instances of initial len(root_) == 0|1
OK
From the code online: https://github.com/boris-kz/CogAlg/blob/master/frame_dblobs.py
```python
while _ix <= x:  # while horizontal overlap between P and _P, after that: P -> P_
    if _buff_:
        _P, _x, _fork_, root_ = _buff_.popleft()  # load _P buffered in prior run of scan_P_, if any
    elif _P_:
        _P, _x, _fork_ = _P_.popleft()  # load _P: y-2, _root_: y-3, starts empty, then contains blobs that replace _Ps
        root_ = []  # Ps connected to current _P
    else:
        break
    _ix = _x - len(_P[6])
    if P[0] == _P[0]:  # if s == _s: core sign match, also selective inclusion by cont eval?
        fork_.append([])  # mutable placeholder for blobs connected to P, filled with Ps after _P inclusion with complete root_
        root_.append(fork_[len(fork_) - 1])  # to bind last P in fork_ to _P and then to blob?
```
So under if P[0] == _P[0]:, root_ is appended. Then, if not x > ix (i.e. x <= ix), a series of ifs is checked, one of which is if len(root_) == 0:  # never happens?
The other path I see is:

```python
while _ix <= x:  # while horizontal overlap between P and _P, after that: P -> P_
    if _buff_:
        _P, _x, _fork_, root_ = _buff_.popleft()
```
So something wrong was pushed and popped to _buff_.
There are cases where len(root_) is 0, but are they present when all the other conditions are met to get there?
(Thinking out loud; I have to get my hands dirty, and I will.)
> No, the point where len(root_) is incremented outside of scanP run that formed it. Basically, you pick initial root with len(root) == 0|1, then add something like: trackid = id(root), then: if id(root_) == trackid, if len(root) > 0|1 break # add a breakpoint before break
Ah, OK, I thought you were using some special hidden debugger functions that track the objects' lifecycle. I was thinking of robust tracking of any object we want: where and when exactly it was created, etc., without such manual checks.
Sorry Todor, this is not relevant anymore, I just updated frame_dblobs without root_s, check it out. scanP is probably still buggy, and functions beyond scanP are not updated yet.
OK
On Tue, Sep 18, 2018, 5:04 AM Boris Kazachenko notifications@github.com wrote:
BTW, do you know about numpy "fancy indexing" and conditional indices? They are supposed to be more efficient than normal iteration, and shorter, too.
p[p > 0] *= 1.6
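A runnable version of that one-liner, next to the equivalent explicit loop (the array values are just an example):

```python
import numpy as np

p = np.array([-2.0, 1.0, 3.0, -1.0])
p[p > 0] *= 1.6  # boolean-mask ("fancy") indexing: scale only the positives, in place

# the equivalent explicit loop, for comparison:
q = [v * 1.6 if v > 0 else v for v in [-2.0, 1.0, 3.0, -1.0]]

assert np.allclose(p, q)  # both yield [-2.0, 1.6, 4.8, -1.0]
```

Note that `p[p > 0] = p * 1.6` would raise a shape-mismatch error, since the right-hand side has the full array's size while the left-hand side selects only the masked subset; the in-place `*=` form avoids that.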
https://www.numpy.org/devdocs/reference/generated/numpy.where.html#numpy.where
numpy.where(condition[, x, y]) Return elements chosen from x or y depending on condition.
"If all the arrays are 1-D, where is equivalent to:
[xv if c else yv for c, xv, yv in zip(condition, x, y)]"
```python
x, y = np.ogrid[:3, :4]     # setup from the linked docs example
np.where(x < y, x, 10 + y)  # both x and 10+y are broadcast
# array([[10,  0,  0,  0],
#        [10, 11,  1,  1],
#        [10, 11, 12,  2]])
```
...
putmask(a, mask, values): changes elements of an array based on a conditional and input values.
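For instance, a minimal `np.putmask` example (values chosen just for illustration):

```python
import numpy as np

a = np.arange(6)              # [0, 1, 2, 3, 4, 5]
np.putmask(a, a > 2, a ** 2)  # where the mask is True, take the value from a ** 2
assert a.tolist() == [0, 1, 2, 9, 16, 25]
```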
A more complex one:
numpy.nditer: an efficient multi-dimensional iterator object to iterate over arrays.
https://www.numpy.org/devdocs/reference/generated/numpy.nditer.html#numpy.nditer
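A short sketch of in-place element-wise modification with `nditer` (the doubling operation is just an example):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
with np.nditer(a, op_flags=['readwrite']) as it:
    for x in it:
        x[...] = x * 2  # write back through the iterator, in place

assert a.tolist() == [[0, 2, 4], [6, 8, 10]]
```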
...
https://www.numpy.org/devdocs/reference/routines.indexing.html
Of course, for more meaningful transformations, some kind of nesting of such "fancy" operations or lambda functions, and perhaps a different data layout, would be required.
E.g. a more parallel process of pattern completion in pre-allocated slots in numpy arrays, which are then wrapped up, rather than going sequentially, one by one, and allocating new variables for each call. Where that's algorithmically possible; it may require intermediate steps and transformations.
https://github.com/boris-kz/CogAlg/blob/7ab8bf6e0d293e5695805c5def39d8998ecbfb0c/frame_draft.py#L482
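A hypothetical illustration of the pre-allocated-slots idea (not CogAlg code; `lateral_diff` and the shapes are made up): results are written into a buffer allocated once, instead of allocating a new array on every call.

```python
import numpy as np

def lateral_diff(row, out):
    # write lateral pixel differences into an existing slot via out=,
    # avoiding a fresh allocation per call
    np.subtract(row[1:], row[:-1], out=out)

frame = np.arange(12).reshape(3, 4)          # toy 3x4 "frame"
out = np.empty((3, 3), dtype=frame.dtype)    # pre-allocated once, up front
for i, row in enumerate(frame):
    lateral_diff(row, out[i])

assert (out == 1).all()  # rows of arange step by 1 everywhere
```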