boris-kz / CogAlg

This project is a Computer Vision implementation of general hierarchical pattern discovery principles introduced in README
http://www.cognitivealgorithm.info
MIT License

Explicitly suggesting that P in scan_P_ is a partial pattern, while P_ and _P_ are complete #4

Closed Twenkid closed 6 years ago

Twenkid commented 6 years ago

Regarding scan_P_: Todor: it's reasonable to mention that P, the input parameter, is a partial pattern, sent by form_P:

    P = pri_s, I, D, Dy, M, My, G, alt_rdn, e_

while the Ps taken from P_ or _P_ are complete patterns, with a different sequence:

    P = s, ix, x, I, D, Dy, M, My, G, alt_rdn, e_, alt_

That's confusing on first read, because by default the same name suggests a list of the same type; yet below, scan_P_ reads P[0][1] with a comment marking it as ix, i.e. a different type. Yes, it becomes clear with further study, on seeing that P_ is filled later etc., and by keeping in mind that the above P in form_P is commented as "partial". Still, using the same name confuses, and I think signaling the difference more explicitly, at least with a comment, would speed up code understanding. Alternatively, mnemonically mark partial patterns as such, for example pP, Pp or similar, in the input parameter of the function or in form_P as well.
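Named tuples (mentioned later in this thread) would make the two layouts self-documenting. A minimal sketch, with field sequences copied from the tuples above; the type names PartialP/CompleteP and the example values are hypothetical:

```python
from collections import namedtuple

# Hypothetical type names; field sequences follow the comment above.
PartialP = namedtuple('PartialP', 'pri_s I D Dy M My G alt_rdn e_')
CompleteP = namedtuple('CompleteP', 's ix x I D Dy M My G alt_rdn e_ alt_')

pP = PartialP(1, 0, 0, 0, 0, 0, 0, 0, [])            # as sent by form_P
P = CompleteP(1, 2, 5, 0, 0, 0, 0, 0, 0, 0, [], [])  # as stored in P_ / _P_

print(P.ix)  # named access replaces opaque indexing like P[0][1]
```

With distinct types, passing a partial pattern where a complete one is expected fails loudly instead of silently misreading fields.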

boris-kz commented 6 years ago

Thanks Todor. P is actually a complete 1D pattern, it adds more parameters in the process of forming 2D patterns (P2).

P_ is only there to be converted to P when current line terminates.

So, higher-line template pattern _P is more complex, and then it also mediates attached P2 forks: blobs, vPPs, dPPs.

Rather than adding endless new names, I add prefix _ to indicate a higher-line pattern or variable. We need to keep in mind that contents depend on the line a pattern belongs to: y, y-1, y-2, y-3.

Any given 2D function always accesses two lines: relatively higher and lower.

I have a brief explanation in the very top comment of level_1_2D.

But I guess it's too brief, could you expand it?

Also, thanks for mentioning named tuples, I am looking into it.


You can view, comment on, or merge this pull request online at: https://github.com/boris-kz/CogAlg/pull/4

Commit Summary:

  • Update level_1_2D_draft.py


Twenkid commented 6 years ago

I've read it, but yes, it's brief and when reading the code afterwards the detail that P is different is covered by the comments/expectations.

OK. Is this edition fine:

(...)
y-3: term_P2(P2_): P2s are evaluated for termination, re-orientation, and consolidation 

  Any given 2D function always accesses two lines: relatively higher and lower.

    postfix '_' denotes array name (vs. same-name element),
    prefix '_' as _P denotes prior-input: a higher-line pattern or variable. Notice that the
               contents depend on the line pattern belongs to: y, y-1, y-2, y-3, thus for example
               the variables of the _P patterns are different than the ones in P in scan_P_.

Which comparison functions are 2D? All but the first: comp and ycomp? Or ycomp also counts as 2D?

Your additional explanations above may also be suggestive if included in the intro, should it be also included? Or there could be an additional file with notes about the code and its logic, potentially as wordy as is suitable?

(I realise that it's possibly explained somewhere in the CogAlg blog, but recently I've been keeping myself focused only in the pure code.)

boris-kz commented 6 years ago

Yes, starting from ycomp. These explanations are specific to level_1_2D, so I think they should stay in the top comment. Which could be as long as we want. Also, I guess more initial comments in every function will help.

Twenkid commented 6 years ago

OK. (I removed "Notice that", it's redundant.)

boris-kz commented 6 years ago

Thanks. I will probably edit it later.


Twenkid commented 6 years ago

In ycomp():

1.) I noticed that the differences branch of the gradient variables doesn't have a "filter" (initially - the average).

    dg = _d + fdy  # d gradient
    vg = _m + fmy - ave  # v gradient

What's the reasoning, isn't it also supposed to be compared? Zero is assumed as a fixed filter?

2.) The alt value for alt_len is always taken from the dP branch of form_P. (First the vP branch is called, then dP; they use the same identifier "alt", so the value of the last one is used.)

    if alt[0]: dalt.append(alt); valt.append(alt)
    alt_len, alt_vG, alt_dG = alt

Then both dalt and valt are updated with the same alt tuple, which is the one computed in form_P() for the dP.

Further:

    if alt_vG > alt_dG:  # comp of alt_vG to alt_dG, == goes to alt_P or to vP: primary?
        vP[7] += alt_len  # alt_len is added to redundant overlap of lesser-oG vP or dP
    else:
        dP[7] += alt_len

Both patterns are updated with the difference-pattern's alt_len.

Shouldn't these values be different? Alternative comparisons - expecting different results by default? Is it correct and if so could you give more reasoning about that?

(I know that they are supposed to "overlap" and thus having some "redundancy", but I guess I'll reach to more clear understanding later)

3.) Then comes:

    s = 1 if g > 0 else 0
    if s != pri_s and x > rng + 2:  #

The sign s marks the first "above/below"-filter comparison. The gradient "g" is the "positive match" and it's combined (summed): the match to previous pixel on the left, and on top: vg = _m + fmy - ave # v gradient.

Again referring to 1. - the lack of "ave" filter for difference - the filter is 0?

boris-kz commented 6 years ago

1.) I noticed that differences branch of the gradient variables doesn't have a "filter" (initially - average).

dP is defined by comparison to shorter-range feedback: prior input; vP is defined by comparison to higher (prior) level feedback: filter. These are different orders of patterns.

2.) The alt-value for alt_len is taken always from the dP branch of form_P. (First vP branch is called, then dP, they use the same ident "alt", so the value of the last one is used)

They both use the same local variable alt. It is the same because it's an overlap between the two. It's a measure of redundancy to eventually increment filter A for the weaker of the two. But smaller-scale differences in value are minor and probably don't justify the costs of adjusting filter. So, alt_len is summed and buffered until 2D patterns terminate, and then the filter for the weaker one is adjusted. This is still tentative, part of that redundancy adjustment problem.

    if alt_vG > alt_dG:  # comp of alt_vG to alt_dG, == goes to alt_P or to vP: primary?
        vP[7] += alt_len  # alt_len is added to redundant overlap of lesser-oG vP or dP
    else:
        dP[7] += alt_len

Both patterns are updated with the difference-pattern's alt_len.

No, only the weaker (redundant) of the two is updated by shared alt_len, see above.

3.) Then comes:

    s = 1 if g > 0 else 0
    if s != pri_s and x > rng + 2:  #

The sign s marks the first "above/below"-filter comparison. The gradient "g" is the "positive match" and it's combined (summed): the match to the previous pixel on the left, and on top: vg = _m + fmy - ave # v gradient. Again referring to 1. - the lack of "ave" filter for difference - the filter is 0?

The equivalent of filter for dP is prior input. It is defined by the sign of difference, not by the sign of value.

boris-kz commented 6 years ago

Todor, I got rid of alt_. It was buffering individual alt_P overlaps to delete them in case stronger altP becomes relatively weaker on some higher level. I no longer think this is feasible, the patterns will diverge and it should be easier to reconstruct from their e buffers. Also replaced alt tuple with olp vars: olp_len, olp_vG, olp_dG, and alt_rdn

Twenkid commented 6 years ago

OK.

For the sorting of fork_ elements by "crit" (on the first call is it fork_oG?), for max-to-min shouldn't it be:

    fork_.sort(key=lambda fork: fork[0], reverse=True)  # max-to-min crit, or sort and select at once

https://www.programiz.com/python-programming/methods/list/sort

    >>> fork_ = [(10, 5), (6, 4), (1, 3), (15, 2), (3, 1), (245, 99), (50, 12)]
    >>> fork_.sort(key=lambda fork: fork[0], reverse=True)
    >>> print(fork_)
    [(245, 99), (50, 12), (15, 2), (10, 5), (6, 4), (3, 1), (1, 3)]

boris-kz commented 6 years ago

Yes, thanks! crit is determined by typ, which is received from scan_P_. Yes, the initial crit is fork_oG.

Twenkid commented 6 years ago

Hi, I noticed there's a big update in the code, with parts still unspecified, and I'm reading it, but currently I'm in a stagnation period as far as having something meaningful to say.

Just this: regarding the if len(a_list) - after a recheck, for lists if a_list returns False when the list is empty, as you expected. So this:

    if len(fork_):  # P is evaluated for inclusion into its fork _Ps on a higher line (y-1)

can be just:

    if fork_:

My correction stands for tuples: in their case a_tuple = 0,0,0 returns True for if a_tuple:, whether it's all zeros or has a 1 anywhere.

    >>> f = []
    >>> f.append(5)
    >>> f
    [5]
    >>> if f: print("a")
    ...
    a
    >>> f = []
    >>> if f: print("1")
    ...
    >>> a = 0, 0, 0
    >>> if a: print("1")
    ...
    1

boris-kz commented 6 years ago

Ok. Are you using PyCharm?


Twenkid commented 6 years ago

Not recently, but I did when testing PyPy. Console Python and any editor were fine for me for now. (I opened it through PyCharm now, cloned from Git (VCS->...); that would ease the process, but I've been studying it directly on GitHub anyway.)

boris-kz commented 6 years ago

It's handy for tracking variables, and I have a lot of them.


Twenkid commented 6 years ago

Yes. BTW, it seems that in this session they are less of a burden for my working memory.

However, these days I've been thinking practically about a custom analysis-debug tool and its first desirable features. Implementation could begin in the coming days, depending on my focus.

boris-kz commented 6 years ago

Yes. BTW, it seems that in this session they are less of a burden for my working memory.

Like I said, it's matter of practice.

However these days I've been thinking practically about a custom analysis-debug tool and its first desirable features. The beginning of the implementation could be these days, depending on my focus.

Huh? What happened to your custom language, custom OS, custom CPU, custom math, and custom universe? If you had focus, you would be working on the algorithm.


Twenkid commented 6 years ago

Like I said, it's matter of practice.

Yes, but practice not only with your code, it was less of a burden from the first glance.

The rest: edited (censored).

Give an unambiguous, exhaustive and practical definition of "working on the algorithm", and prove and explain how and why it excludes certain approaches, including ones you are not familiar with or have no idea about.

Twenkid commented 6 years ago

BTW, I've been unable to find the so-called "negative patterns" in your code - are they implemented in the Le2 code, and how do they "look"?

That also raises the question of where the other mythical "complemented" patterns ("Neg" + Pos.) are. And are they for both vP and dP (separately: (vP+, vP-) = complemented vP?; (dP+, dP-) = complemented dP?)?

Something about that? (unlikely):

    def form_pP(par, pP_):  # forming parameter patterns within PP

2Le: DIV of multiples (L) to form ratio patterns, over additional distances = negative pattern length LL

?

Neg. are defined as having m < Aver.match.

In the Le2 code, AFAIK, there are comparisons > A, like:

    while fork_ and (crit > A or ini == 1):

If it's not > A, the cycle is just not entered. (And I don't yet have the tooling to trace it easily enough, etc. - see previous comments :) )

boris-kz commented 6 years ago

BTW, I've been failing to detect in your code the so called "negative patterns"?, are they implemented in Le2 code and how do they "look"?

2D patterns are composed of matching-sign 1D patterns, so they can be either positive or negative.

It's just that there could be multiple forks, so I added another filter to account for redundancy.

In the current update, it's:

    while fork_ and (fork[0] > ave * rdn):

It's true that "double-negative" patterns are not represented, that would just pile-up more redundancy.
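The redundancy-scaled selection described above can be sketched roughly as follows. This is an illustration of my reading of the filter, not code from the repo: each fork is a hypothetical (crit, id) tuple, and rdn grows with every selected fork, raising the threshold for the rest:

```python
ave, rdn = 127, 1
fork_ = [(520, 'a'), (300, 'b'), (90, 'c')]  # sorted max-to-min by crit

selected = []
while fork_ and fork_[0][0] > ave * rdn:  # threshold scales with redundancy
    selected.append(fork_.pop(0))
    rdn += 1  # each selected fork makes the remaining ones more redundant

print([name for crit, name in selected])  # ['a', 'b']: 90 < 127 * 3, so 'c' is pruned
```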

That implies also where are the other mythical "complemented" patterns

"Neg" + Pos.? And they are for both vP and dP (separately: (vP+, vP-)=complemented vP?; (dP+, dP-)=complemented dP ?).

Complemented patterns would be formed in discontinuous comparison between 2D patterns, across opposite-sign P2s. That would probably be 3Le, or 4Le for video.

Something about that?:

    def form_pP(par, pP_):  # forming parameter patterns within PP

These are sub-patterns formed by individual same-type variables (parameters) within patterns.

2Le: DIV of multiples (L) to form ratio patterns, over additional distances = negative pattern length LL

?

That's the second sequence in comp_P:

    if dI * dL > div_a: ...

Except that for summed vars S, DIV is replaced by normalization: * rL. It's cheaper, though not quite the same.

But this is still tentative. LL would be from discontinuous search, not on 2Le.

BTW, is it legal to directly append tuples, etc.? I am not getting an error from PyCharm:

e_.append((p, g, alt_g))

?

Twenkid commented 6 years ago

Yes, it's legal:


>>> e_ = [1, 2, 3]
>>> p = 25; g = 16; alt_g = 3
>>> e_.append((p, g, alt_g))
>>> e_
[1, 2, 3, (25, 16, 3)]

Complemented patterns would be formed in discontinuous comparison between 2D patterns, across opposite-sign P2s.

Then what is a continuous one and what's the difference between cont. and disc. comp? Discont.: comparison between adjacent + and - patt, i.e. of "different types", that's why it's discont? Continuous: comp of the same type patt? (+ +); vP, vP; dP,dP ?

Twenkid commented 6 years ago

I see now - the negative patterns are just the ones with s = 0? (s = 1 if g > 0 else 0 # g = 0 is negative?) Anyway, I think it should be marked more explicitly in the code, in order to refer back to the text. It says "same-sign gradient", but not "negative/positive patterns".

Twenkid commented 6 years ago

Also, the ave coefficients are still not completely defined - their computation/update?

global ave; ave = 127  # filters, ultimately set by separate feedback, then ave *= rng
    global div_a; div_a = 127  # not justified
    global ave_k; ave_k = 0.25  # average V / I

Back to the olp logic discussed earlier in this thread, and the olp-code in ycomp

if olp:  # if vP x dP overlap len > 0, incomplete vg - ave / (rng / X-x)?
        odG *= ave_k; odG = odG.astype(int)  # ave_k = V / I, to project V of odG
        if ovG > odG:  # comp of olp vG and olp dG, == goes to vP: secondary pattern?
            dP[7] += olp  # overlap of lesser-oG vP or dP, or P = P, Olp?
        else:
            vP[7] += olp  # to form rel_rdn = alt_rdn / len(e_)

No, only the weaker (redundant) of the two is updated by shared alt_len, see above.

  • also: dP is defined by comparison to shorter-range feedback: prior input; vP is defined by comparison to higher (prior) level feedback: filter. These are different orders of patterns.

I read the code as implying that the weight of the difference gradient is much larger than that of the vG (unless ave_k can somehow become < 1), i.e. the initial ovG has to be ave_k times bigger than odG in order for ovG > odG, thus for dP to be counted as "redundant" and its olp updated. Therefore dP is assumed to be the more important pattern.

What's the reasoning? (or what was, if I've forgotten)

Shorter feedback is more powerful? ("Predictability decreases with distance") Higher level feedback's cost has to be justified?

There might be many/increasing number of sources for higher level feedback, thus their weight has to be spread (divided) across many - at higher levels/higher derivatives/...?
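For reference, the comparison being discussed can be sketched as follows; the names follow the ycomp snippet quoted above, but the wrapper function and values are hypothetical. Since odG is discounted by ave_k before the comparison, ovG wins more easily, i.e. vP is favored and dP more often absorbs the redundant overlap:

```python
ave_k = 0.25  # average V / I, used to project value of the difference gradient

def assign_olp(ovG, odG, olp):
    """Hypothetical wrapper around the overlap-assignment logic in ycomp."""
    vP_rdn = dP_rdn = 0
    odG = int(odG * ave_k)  # dG is discounted before the comparison
    if ovG > odG:
        dP_rdn += olp  # dP is the weaker (redundant) pattern here
    else:
        vP_rdn += olp
    return vP_rdn, dP_rdn

# ovG = 30 beats odG = 100 after discounting (100 * 0.25 = 25),
# so dP takes the overlap:
print(assign_olp(ovG=30, odG=100, olp=7))  # (0, 7)
```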

However in the next stage, form_P, both grad. are multiplied with the same ave_k:


 if typ: alt_oG *= ave_k; alt_oG = alt_oG.astype(int)  # ave V / I, to project V of odG
        else: oG *= ave_k; oG = oG.astype(int)               # same for h_der and h_comp eval?

        if oG > alt_oG:  # comp between overlapping vG and dG
            Olp += olp  # olp is assigned to the weaker of P | alt_P, == -> P: local access
        else:
            alt_P[7] += olp
boris-kz commented 6 years ago

Also the ave-coeff. are still not completely defined, their computation/update?

    global ave; ave = 127  # filters, ultimately set by separate feedback, then ave *= rng
    global div_a; div_a = 127  # not justified
    global ave_k; ave_k = 0.25  # average V / I

It's a feedback from higher-power comp, obviously on yet-undefined hLe.

Filters have multiple orders, depending on comp that formed them:

bit-filters LSB, MSB, integer filters ave, Ave, ratio filters ave_k, etc.

Back to the olp logic discussed earlier in this thread, and the olp-code in ycomp

    if olp:  # if vP x dP overlap len > 0, incomplete vg - ave / (rng / X-x)?
        odG *= ave_k; odG = odG.astype(int)  # ave_k = V / I, to project V of odG
        if ovG > odG:  # comp of olp vG and olp dG, == goes to vP: secondary pattern?
            dP[7] += olp  # overlap of lesser-oG vP or dP, or P = P, Olp?
        else:
            vP[7] += olp  # to form rel_rdn = alt_rdn / len(e_)

  • an earlier comment:

No, only the weaker (redundant) of the two is updated by shared alt_len, see above.

  • also

dP is defined by comparison to shorter-range feedback: prior input; vP is defined by comparison to higher (prior) level feedback: filter. These are different orders of patterns.

I read the code as that the weight of the diff.grad is much larger than the one of the vG

(if ave_k is not possible to turn into < 1 somehow),

Yes, ave_k is fractional. Any filters are negative by default, so this is really a division.

i.e. the initial ovG has to be ave_k times bigger than odG in order ovG > odG, thus the dP to be counted as "redundant" and its olp updated. Therefore dP is assumed as the more important pattern.

No, D is less predictive (selective) than V, see above.

However in the next stage, form_P, both grad. are multiplied with the same ave_k:

    if typ: alt_oG *= ave_k; alt_oG = alt_oG.astype(int)  # ave V / I, to project V of odG
    else: oG *= ave_k; oG = oG.astype(int)  # same for h_der and h_comp eval?

if oG > alt_oG:  # comp between overlapping vG and dG
    Olp += olp  # olp is assigned to the weaker of P | alt_P, == -> P: local access
else:
    alt_P[7] += olp

No, it's if typ: P = vP and alt_P = dP, else: reverse. So, only dG * ave_k for both.

boris-kz commented 6 years ago


Ok, I may add something.

This is confusing because we are talking about blobs: 2D Ps are defined on 1Le

I will have negative P2s, from comp_P, will get back to you on that.

Good questions!

boris-kz commented 6 years ago

Yes, it's legal:

    >>> e_ = [1, 2, 3]
    >>> p = 25; g = 16; alt_g = 3
    >>> e_.append((p, g, alt_g))
    >>> e_
    [1, 2, 3, (25, 16, 3)]

Thanks.

Complemented patterns would be formed in discontinuous comparison between 2D patterns, across opposite-sign P2s.

Then what is a continuous one and what's the difference between cont. and disc. comp? Discont.: comparison between adjacent + and - patt, i.e. of "different types", that's why it's discont? Continuous: comp of the same type patt? (+ +); vP, vP; dP,dP ?

No, +Ps and -Ps always alternate, so same-sign comp will be positionally discontinuous. And +P comparands will have a record of gap: intervening -Ps, that's why they are complemented.
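The alternation is a direct consequence of segmenting by sign. A toy sketch (not CogAlg code) of sign-based run-length segmentation, showing that consecutive patterns always carry opposite signs:

```python
def segment_by_sign(g_):
    """Toy run-length segmentation by sign; each P is [sign, start_x, elements]."""
    P_ = []
    pri_s = None
    for x, g in enumerate(g_):
        s = 1 if g > 0 else 0
        if s != pri_s:  # sign change terminates the old P and starts a new one
            P_.append([s, x, []])
        P_[-1][2].append(g)
        pri_s = s
    return P_

P_ = segment_by_sign([3, 1, -2, -5, 4, -1, 2])
print([s for s, ix, e_ in P_])  # [1, 0, 1, 0, 1]: +Ps and -Ps alternate
```

So comparing a +P to the next +P always skips over an intervening -P, which is exactly the positional discontinuity described above.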

Twenkid commented 6 years ago

i.e. the initial ovG has to be ave_k times bigger than odG in order ovG > odG, thus the dP to be counted as "redundant" and its olp updated. Therefore dP is assumed as the more important pattern.

No, D is less predictive (selective) than V, see above.

OK - I mixed up ave = 127 with ave_k = 0.25, assuming ave_k = 127.

    if typ: alt_oG *= ave_k; alt_oG = alt_oG.astype(int)  # ave V / I, to project V of odG
    else: oG *= ave_k; oG = oG.astype(int)  # same for h_der and h_comp eval?

 if oG > alt_oG:  # comp between overlapping vG and dG
     Olp += olp  # olp is assigned to the weaker of P | alt_P, == -> P: local access
 else:
     alt_P[7] += olp

No, it's if typ: P = vP and alt_P = dP, else: reverse. So, only dG * ave_k for both

I meant that in either case/branch of the if, oG is altered with the same coefficient (similarly to an earlier question about alt+=...), while in ycomp there are two kinds of overlap gradients etc.

However now I realize why - it's because in the form_P stage the two kinds of gradients v/d are merged into a common G, oG.

boris-kz commented 6 years ago

So, I was thinking about +|- dPPs and vPPs, and realised that scan_P_ and fork_eval should only apply to blobs. That's because the additional complexity of comp_P -> dPPs and vPPs is per blob. So, blobs should be evaluated for comp_P after their termination. The last update shows that blob-only scan_P_, which also includes blob-only fork_eval. Next, I will do term_blob, which will call comp_P. Thanks!

Twenkid commented 6 years ago

Cool! :)

Complemented patterns would be formed in discontinuous comparison between 2D patterns, across opposite-sign P2s.

Then what is a continuous one and what's the difference between cont. and disc. comp? Discont.: comparison between adjacent + and - patt, i.e. of "different types", that's why it's discont? Continuous: comp of the same type patt? (+ +); vP, vP; dP,dP ?

No, +Ps and -Ps always alternate, so same-sign comp will be positionally discontinuous. And +P comparands will have a record of gap: intervening -Ps, that's why they are complemented.

Right, because a +P pattern is terminated when its match is < filter and a -P is terminated when the match is > filter?

So, I was thinking about +|- dPPs and vPPs,

Complemented patterns would be formed in discontinuous comparison between 2D patterns, across opposite-sign P2s. That would probably be 3Le, or 4Le for video.

Then do you mean that continuous and discontinuous comparison are not valid concepts at le_1_2D?

Or, more likely, that the discussed comparisons are all discontinuous, because of the inherent +P, -P sequences? Which seems as one of the basic pattern-creation key-points/schemes in your algorithm: these are the edges of the creation-termination cycles?

Thus "discontinuous" here is having a gap (>1) between the end-coordinate of the first and the start-coordinate of the following?

However, regarding +-PP. So they are supposed to be the same sign, thus the comparisons which created them are continuous, meaning no coordinate gaps - because the constituent patterns are at different lines?

boris-kz commented 6 years ago

Right, because a +P pattern is terminated when its match is < filter and a -P is terminated when the match is > filter?

Yes.

Then do you mean that continuous and discontinuous comparison are not valid concepts at le_1_2D?

Or, more likely, that the discussed comparisons are all discontinuous, because of the inherent +P, -P sequences? Which seems as one of the basic pattern-creation key-points/schemes in your algorithm: these are the edges of the creation-termination cycles?

Thus "discontinuous" here is having a gap (>1) between the end-coordinate of the first and the start-coordinate of the following?

However, regarding +-PP. So they are supposed to be the same sign, thus the comparisons which created them are continuous, meaning no coordinate gaps - because the constituent patterns are at different lines?

1Le 2D patterns are all continuous, it's just that explored continuity is first horizontal and then vertical. PPs are formed from comparison between vertically consecutive Ps within selected blobs, in term_blob. But they will be defined by combined d | v of compared variables of 1D Ps, vs. by dG | vG for blobs.

Twenkid commented 6 years ago

BTW, regarding the open questions you told me a while ago, the third one:

- how to project feedback (D_input, D_average, etc.) and combine it across multiple levels.

I don't know if it's connected, but I had a thought regarding those ave, ave_k etc. hiLe feedback vars, having yet unspecified dynamics.

The use of the word "average" suggests that it's one value, and yet they are technically constants. However, I assume these values are supposed to be very dynamic and adjustable per item per pattern, or at least per constituent pattern (if not per initial p,d,m tuple), i.e. the hiLe should be able to feed a different ave_k etc. for each coordinate/constituent sub-pattern?

You're working on how to balance the feeds, how to calculate/adjust the effect of deeper levels (not only the immediately next?), and how different levels' feedback values would interact to produce some "final feedback" value to be used at the lower-level stage?

boris-kz commented 6 years ago

They are averages over hLe filter patterns. So, they are constant within fP: significantly longer span than target-level patterns.

The most basic combination is of co-derived D and M. Feedback of D would be adjusting bit-filters, to minimize overflow and underflow (this is not implemented here but conceptually important).

But back-projected D and M compete, so adjusting factor is < D. And this reduction should increase over projected distance, because match has greater base range: it is common for 2 inputs, vs. 1 input for d.

So, feedback should be = (D / 2) / (proj L / DL). Or, considering slower decay (longer base) of M: (D / 2) ^ 1/ (proj L / DL)? I am not sure.
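For concreteness, the two candidate projections can be evaluated numerically. A sketch with made-up values; D, proj L and DL are as in the formulas above:

```python
D, proj_L, DL = 20.0, 8.0, 2.0
r = proj_L / DL               # projected-distance ratio

linear = (D / 2) / r          # candidate A: (D / 2) / (proj L / DL)
decayed = (D / 2) ** (1 / r)  # candidate B: (D / 2) ^ 1/(proj L / DL)

# With D/2 = 10 and r = 4: A = 2.5, B = 10**0.25 ~ 1.78. As r grows,
# B decays slowly toward 1, while A keeps shrinking toward 0.
print(linear, decayed)
```

So the choice between the two amounts to how fast projected feedback should decay with distance, which is the open question in the comment above.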


Twenkid commented 6 years ago

The most basic combination is of co-derived D and M. Feedback of D would be adjusting bit-filters, to minimize overflow and underflow (this is not implemented here but conceptually important).

OK, that may be appropriate for performance improvement in some very low-level implementation, such as an FPGA/ASIC chip, but also at a higher level (like some sort of smart decoder to Assembly, switching code for different cases): utilising vector operations over 8-bit, 16-bit, 32-bit data, instead of a general type, possibly 32-bit float or wider, which avoids overflows at the cost of a lot of computational redundancy.

But back-projected D and M compete, so adjusting factor is < D. And this reduction should increase over projected distance, because match has greater base range: it is common for 2 inputs, vs. 1 input for d.

For summation/average that 2:1 ratio makes sense given the definition; however, correspondingly m has a lower resolution, and the prediction is for two final coordinates.

Well, so this is the overlap and redundancy that's computed? And ave_k = 0.25 for a start, because m has two overlaps for the 2 dimensions = 4 common matches?

If that is correct, again I think explicit comments in the code would be helpful.

A) So, feedback should be = (D / 2) / (proj L / DL). Or, considering slower B) decay (longer base) of M: (D / 2) ^ 1/ (proj L / DL)? I am not sure.

"Longer base" = two coordinates with conceptually defined common match, compared to 1 for difference?

proj L - projected L (length of patterns/coordinates on which the feedback has an impact)? DL - Length (number of elements) of a span of summed D?

What's ^ - power of? Thus a fractional exponent (if proj L/DL is >= 1), or a plain power if proj L/DL is less than 1.

D/2 > 1? (that's for sure?) However: div by < 1 = multiplication; pow < 1 = root; pow > 1 = pow (if proj L/DL < 1, the result is > 1) (i.e. if proj L/DL < 1 and D/2 > 1, B > A)

Should it be so? Do you know what the values of these variables would be? Could their >< relations vary? The magnitude relations between the two equations may vary too much in both directions.

L as number of low(er) level input coordinates, for the immediate lower Le?

(However all higher Le project - if so, shouldn't the distance in the hierarchy, the depth of the levels, be reflected in the formula, or is it just projected level by level from top to bottom and accumulated?)

(D / 2) / * I am not sure.

"*" means you're not sure about D/2 part? (It's not clear)

Twenkid commented 6 years ago

or proj L is projected distance? Well, clarifying the variables would explain the confusion B > A.

Twenkid commented 6 years ago

BTW, are you using graphical calculators?

https://www.desmos.com/calculator

For the above expressions:

y_1\ =\ \frac{\frac{D}{2}}{\frac{p}{L}}

y_2\ =\ \left(\frac{D}{2}\right)^{\left(\frac{1}{\frac{p}{L}}\right)}
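The two candidates can also be checked numerically. A minimal sketch, assuming arbitrary illustrative values for D, proj L and DL (none of these numbers come from the actual code):

```python
# Compare the two candidate feedback formulas discussed above:
#   A) decay by division: (D / 2) / (proj_L / D_L)
#   B) decay by root:     (D / 2) ** (1 / (proj_L / D_L))
# All values below are made up, for illustration only.

def feedback_a(D, proj_L, D_L):
    return (D / 2) / (proj_L / D_L)

def feedback_b(D, proj_L, D_L):
    return (D / 2) ** (1 / (proj_L / D_L))

for proj_L in (10, 20, 40):
    print(proj_L, feedback_a(50, proj_L, 10), feedback_b(50, proj_L, 10))
```

At proj_L = D_L the two agree, and they diverge quickly as the projection distance grows, which is the "may vary too much" concern above.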

boris-kz commented 6 years ago

On Mon, Feb 12, 2018 at 6:29 AM, Todor Arnaudov notifications@github.com wrote:

The most basic combination is of co-derived D and M. Feedback of D would be adjusting bit-filters, to minimize overflow and underflow (this is not implemented here but conceptually important).

OK, that may be appropriate for performance improvement at some very low-level implementation, such as FPGA/ASIC chip, but also for a higher level (like some sort of a smart decoder to Assembly, switching code for different cases) to utilise vector operations over 8-bit, 16-bit, 32-bit data, instead of using a general type, possibly float-32 bit or more which avoids overflows at the cost of a lot of computational redundancy.

This is just to understand general principles of feedback.

But back-projected D and M compete, so adjusting factor is < D. And this reduction should increase over projected distance, because match has greater base range: it is common for 2 inputs, vs. 1 input for d.

For summation/average that 2:1 ratio makes sense given the definition; however, respectively, m has a lower resolution and the prediction is for two final coordinates.

No, summation range is the same for D and M, but predictive value of M is x2.

And ave_k = 0.25 for a start, because m has two overlaps for the 2 dimensions = 4 common matches?

No, that's just equal weight of D and M: 0.5 * 0.5, equal match and miss per all inputs (including future input of D). It's just an initialization, no deep reasoning behind it.

A) So, feedback should be = (D / 2) / (proj L / DL). Or, considering slower B) decay (longer base) of M: (D / 2) ^ 1/ (proj L / DL)? I am not sure.

"Longer base" = two coordinates with conceptually defined common match, compared to 1 for difference?

Yes.

proj L - projected L (length of patterns/coordinates on which the feedback has an impact)? DL - Length (number of elements) of a span of summed D?

Yes, distance / length

What's ^ - power of? Thus - fractional exponent (if proj L/DL is >=1) or plain power if proj L/DL is less than 1.

D/2 > 1? (that's for sure?)

Yes, this is for skipping only, there is no projection for simple feedback

However: div by < 1 = multiplication; pow < 1 = root; pow > 1 = pow (i.e. if proj L/DL < 1 and D/2 > 1, B > A)

Should it be so? Do you know what the values of these variables would be? Could their >< relations vary?

These are summed variables of fP, which is defined by ff: filter'filter, initialized as ff = f.

The magnitude relations between the two equations may vary too much in both directions.

L as number of low(er) level input coordinates,

Yes

for the immediate lower Le?

No, skipping is by multi-level feedback

(However all higher Le project - if so, shouldn't the distance in the hierarchy, the depth of the levels, be reflected in the formula,

or is it just projected level by level from top to bottom and accumulated?)

Yes, L is in bottom-Le coord, because all lower levels skip.

(D / 2) / * I am not sure.

"*" means you're not sure about D/2 part? (It's not clear)

Not sure if fD decay should be by division or root | LOG. DIV means that decay is proportional to original D, and LOG is proportional to remaining D. So, it's probably LOG.

I will take a look at graph calc, thanks.
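For reference, the DIV vs LOG distinction above can be sketched as two toy decay models. This is only an illustration of "proportional to original D" vs "proportional to remaining D"; the step counts and rate k are made up:

```python
# DIV-style decay: each step removes a fixed fraction of the ORIGINAL D (linear).
def decay_div(D, steps, k=0.2):
    return max(D - D * k * steps, 0.0)

# LOG-style decay: each step removes a fixed fraction of the REMAINING D
# (exponential, so it never quite reaches zero).
def decay_log(D, steps, k=0.2):
    return D * (1 - k) ** steps

for steps in (1, 2, 5):
    print(steps, decay_div(100, steps), decay_log(100, steps))
```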

boris-kz commented 6 years ago

Sorry, I realized that the twice-faster decay, D / (proj_distance / D_span), should be separate from the redundancy to M: D / 2.

So, it's probably decay by division: feedback = (D / 2) / (proj_distance / D_span).

Note that only D is fed back, M doesn't update the filter.
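That update rule can be sketched as follows. The names proj_distance and D_span are from the comment above; the function names and the filter variable are hypothetical:

```python
def project_feedback(D, proj_distance, D_span):
    # D is halved for redundancy to co-derived M,
    # then decayed by relative projection distance
    return (D / 2) / (proj_distance / D_span)

def update_filter(filt, D, proj_distance, D_span):
    # only D is fed back; M does not update the filter
    return filt + project_feedback(D, proj_distance, D_span)
```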

Twenkid commented 6 years ago

So, it's probably decay by division: feedback = (D / 2) / (proj_distance / D_span), vs. (D / 2) / (proj L / DL).

BTW, semantically these identifiers proj_distance, D_span sound better - D both for summed difference and for distance is confusing.

To me the code usages implicitly suggest that "L" is a natural number - length of a list (array) of collected data items. While "distance" is more about an abstract distance, in the void, with a possible unit measure, scales. Span to me is also better reminding about the material that is covered by it, a span of inputs/patterns.

Yes, L is in bottom-Le coord, because all lower levels skip.

Thus the proj_distance and D_span have maximum values of the bottom level input resolution? (the camera, the lowest)

Note that only D is fed back, M doesn't update the filter.

This filter (which is for difference patterns?) or in general?

boris-kz commented 6 years ago

So, it's probably decay by division: feedback = (D / 2) / (proj_distance / D_span), vs. (D / 2) / (proj L / DL).

BTW, semantically these identifiers proj_distance, D_span sound better - D both for summed difference and for distance is confusing.

Yes, that was for internal use.

Yes, L is in bottom-Le coord, because all lower levels skip.

Thus the proj_distance and D_span have maximum values of the bottom level input resolution? (the camera, the lowest)

Yes.

Note that only D is fed back, M doesn't update the filter.

This filter (which is for difference patterns?) or in general?

Bit filters are for variables rather than patterns; dPs are not filtered. Integer filters are for vPs. Ratio filters are for both the filter pattern fP and the multi-Le hierarchy skip: coord_fP, I think. Etc. Filter resolution is an order lower than filtered input resolution, to justify the cost of filtering.

All filters are updated by minimal summed difference between inputs and current filter. This difference is D for bit-filters, V for integer filter ave, etc.
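A hedged sketch of that update condition; the function name, argument names and values are assumptions for illustration, not the project's code:

```python
def update_ave(ave, summed_diff, n_inputs, min_diff):
    # the integer filter `ave` moves toward the inputs only when the
    # summed input-vs-filter difference exceeds a minimum, to justify
    # the cost of filtering (filter resolution is an order lower than
    # filtered input resolution)
    if abs(summed_diff) > min_diff:
        ave += summed_diff // n_inputs
    return ave
```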

Twenkid commented 6 years ago

form_P(...

I += p    # pixels summed within P
D += d    # lateral D, for P comp and P2 orientation
Dy += dy  # vertical D, for P2 normalization
M += m    # lateral D, for P comp and P2 orientation
My += my  # vertical M, for P2 orientation
G += g    # d or v gradient summed to define P value, or V = M - 2a * W?

It doesn't matter, but that's an old "mistake" in the comments, obviously lateral M.
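For clarity, a runnable miniature of that accumulation with the M comment corrected - a sketch, not the project's actual form_P:

```python
def accumulate(P, ders):
    # P = [I, D, Dy, M, My, G]; ders = (p, d, dy, m, my, g)
    p, d, dy, m, my, g = ders
    P[0] += p   # I: pixels summed within P
    P[1] += d   # D: lateral difference, for P comp and P2 orientation
    P[2] += dy  # Dy: vertical difference, for P2 normalization
    P[3] += m   # M: lateral match (not lateral D, as the old comment said)
    P[4] += my  # My: vertical match, for P2 orientation
    P[5] += g   # G: gradient summed to define P value
    return P
```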

boris-kz commented 6 years ago

Yes, thanks.

Twenkid commented 6 years ago

def scan_blob(typ, blob): # vertical scan of Ps in Py_ for comp_P, incr_PP, form_pP_? ... What's the mnemonics of the S-vars?

vS_ders etc. - Scan?

Also in the yet-commented code:

if dw sign == ddx sign and min(dw, ddx) > a: _S /= cos (ddx) # to angle-normalize S vars for comp

( If I'm not mistaken you once mentioned "coSinus patterns"? cos (correlated to dot product) is an angle between vectors)

... But then:

S = 1 if abs(D) + V + a * len(e_) > rrdn * aS else 0 # rep M = a*w, bi v!V, rdn I? '''

That sounds like "sign" or "type".

What are the purpose and goal of the orientation?

dimensionally reduced axis: vP PP or contour: dP PP; dxP is direction pattern

It sounds like traversing the adjacent border items of the patterns/blobs (that's a contour). The border items in the pattern-records are supposed to be the ones where sign changes - the first and the last with > av. m for scanning in both dimensions/directions.

I see there's width and height (w, H).

"dimensionally reduced" - in order to simplify further processing, to have it in one list and care only about the match, not the x,y dimensions?

boris-kz commented 6 years ago

S always means summed. Dimensionally reduced: 2D blob -> 1D axis. Orientation is rescanning blob with a strong axis under the angle orthogonal to the axis. Contour areas are where dPPs are stronger than overlapping vPPs, fill-in areas are the reverse.
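The contour vs fill-in rule above reduces to a strength comparison. A minimal sketch, where the strength arguments are assumed to be the summed values of the overlapping dPP and vPP (an assumption, not the project's actual measure):

```python
def classify_area(dPP_strength, vPP_strength):
    # contour where difference-patterns dominate,
    # fill-in where value-patterns dominate
    return "contour" if dPP_strength > vPP_strength else "fill-in"
```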


boris-kz commented 6 years ago

Also, contour can be where dG-blob is stronger than overlapping vG-blob.


Twenkid commented 6 years ago

If S is always Sum, then what's the meaning of S = 1 vs 0 in the quoted line? Is it a normalized sum, thus a maximum/minimum in [0,1] range?

Rescanning - thus it'd be with the same content, but different order? I'm not sure what's the purpose, though - to have an alternative representation, or that representation is special (I guess, but don't see how exactly).

Or you mean - it's a straight line, linear scanning, following that computed angle. But it's not clear to me what's "axis under the angle orthogonal to the axis", what's "under an angle"?

Guesses:

Strong axis? - the axis is one of x and y, implying patterns produced by lateral or vertical comparison; the strong one is the one in which the G/gradient is bigger - the gradient in the x/laterally compared pattern is bigger than the gradient of the vertically compared one?

For contour as well, in decompressed language: stronger means having > G, adjusted with the compensation coefficients for the cost? (In all flavors of the gradient, depending on the stage of processing)

Therefore contours are selections? of the subpatterns, where the difference patterns are more predictive, while the fill-in area is where the value patterns are more-predictive (all according to the current "same span" filter for the vG and 0? for the dG, since they lacked a filter from the dg = _d + fdy?, and also adjusted by the cost-coefficients)?

boris-kz commented 6 years ago

You seem to like typing and guessing, Todor. We could save a lot of both by talking once in a while. I can only guess that you don't want to focus that much.


Twenkid commented 6 years ago

I think I was afraid that I'd be unprepared to ask meaningful questions. If you don't mind - OK. I'll notify you on Skype, maybe on Friday or in the following days. If there are problems - Google+. ~19h-21h my time?

boris-kz commented 6 years ago

How about 15-18 or 22-05, any day?


Twenkid commented 6 years ago

OK, what about today at ~16 h?

boris-kz commented 6 years ago

Ok, I will be on.


Twenkid commented 6 years ago

OK.