From: Doug Pearson on 8/26/08
I agree with you, Bob, that I'm not really happy with the idea that the programmer defines the support, although I can certainly see the practical benefits of what Randy is suggesting. But I do think it's reasonable that the programmer defines the PSCM operation (either implicitly -- have Soar figure it out -- or explicitly -- you define it). Or, put another way, it would be interesting to see (a) which rules are hard to determine the operation for [variable attributes are an obvious example] and (b) which rules don't actually map to a PSCM operation at all. If those sets are reasonably small, then explicit definition might not be such a big deal, as it would be uncommon.
We've generally taken the stance that any syntactically valid rule should be accepted and processed, and that gives a lot of flexibility in the language, but perhaps it's a case of enough rope to hang yourself? Of course, if you can write PSCM operations in a subgoal that produce a non-PSCM chunk, then there's really no way around allowing it. But I'd have to think harder about this than I'm inclined to do right now to see if that's possible.
From: Karen Coulter on 8/26/08
I didn't read through all the details of these emails, but I'm pretty sure that if you check the Soar rule parsing, Randy's desire to specify support on a per-rule basis is already available. It used to be that users could add :i-support or :o-support right before the LHS to force Soar to categorize a rule at load time. I'm pretty sure that those flags override any o-support mode setting. It was considered a hack, I guess, but it's been there since Soar 6. At that time, all matched rules fired in each elaboration cycle -- there was no distinction between elab and apply, so I'm not sure what weird behavior would happen now that elab and apply happen at different points.
The run-time assignment of support is incredibly complex and fairly error-prone, especially for objects with lots of structure. Randy illustrates that point quite well. Variables and how they bind make it even more confusing.
From: Bob Marinier on 8/26/08
:i-support and :o-support still work, but Soar does not currently support assigning support to learned chunks the way Randy wants it to.
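For reference, those per-rule flags go between the rule name and the first condition. A couple of made-up examples (the rule and attribute names here are invented purely for illustration):

    # Force o-support for this rule's actions, regardless of what it tests:
    sp {example*force*o-support
       :o-support
       (state <s> ^signal <x>)
    -->
       (<s> ^latched <x>)
    }

    # Force i-support, so the action retracts when the conditions stop matching:
    sp {example*force*i-support
       :i-support
       (state <s> ^operator <o>)
       (<o> ^name note-operator)
    -->
       (<s> ^noted true)
    }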
------- Comment #2 From Bob Marinier 2008-09-29 10:42:53 -------
Updating comments on o-support discussion from ~1 month ago:
From: Randy Jones on 8/30/08
Part of the question, though, is: what are the PSCM operations, and who defines them? Were "operator elaborations" a valid PSCM operation in the versions of Soar in which it was impossible to create operator elaborations? So if we allow ourselves to say that the PSCM includes "operator elaborations", "operator applications", "state elaborations", and "state applications", then my proposed approach magically becomes okay, right? (Although I'd be inclined to divide that group of four into a group of two instead: "elaborations" and "applications".) Then I suppose you would also throw in "operator proposals" and "operator comparisons", and then require that those be i-supported (which, if I remember correctly, has not traditionally been the requirement in the Soar implementations, although I believe that's been the convention). But I'll also note that if you did it this way (that is, by making my proposed changes to the PSCM, including forbidding o-supported operator proposals and comparisons), I believe it would become impossible to write or learn rules that are outside the PSCM.
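For illustration, the conventional mapping being discussed looks roughly like this in Soar rules (names here are made up): a proposal's acceptable preference gets i-support, while an application rule, because it tests the selected operator, gets o-support for its state changes.

    # Operator proposal -- actions get i-support; the acceptable
    # preference retracts if the conditions stop matching.
    sp {propose*move
       (state <s> ^block <b>)
       (<b> ^clear true)
    -->
       (<s> ^operator <o> +)
       (<o> ^name move ^block <b>)
    }

    # Operator application -- tests the selected operator, so its
    # working-memory changes get o-support and persist after the
    # operator is retracted.
    sp {apply*move
       (state <s> ^operator <o>)
       (<o> ^name move ^block <b>)
    -->
       (<s> ^moved <b>)
    }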
From: Doug Pearson on 8/30/08
I'm not sure who would define the PSCM operations, but if, as you said, there was a set which was closed over chunking and where support was clearly defined for each operation (and rules that tried to be two operations at once were invalid in some manner), then I think that would be great, whatever syntax/method was used to mark them. I'm not really arguing for what those operations should be, but I think defining the problem in those terms is possibly a better way to go than focusing on the mechanism itself.
It's a bit like chunking. If you try to think about it as the operation (backtracking from a result through rule firings to a set of wmes in the superstate), your head just spins. But if you think about which elements of the superstate should be referenced at all in the subgoal's problem solving, it becomes tractable.
In the same way, if we spent less time thinking about rules and support and more about PSCM operations (including arguing for what the correct set of operations is), it might lead to better system design and ultimately less confusing bugs.
From: John Laird on 8/31/08
Along the lines of what Doug said, it seems to be a bit arbitrary to consider the PSCM role of a new structure in a superstate to be based on the *last* inference that is made in a substate. It also seems to be very challenging to get closure over chunking for a scheme based on the last inference. For example, it is hard to see how a chunk could be learned that has the PSCM role of operator elaboration (because the result would have to be created by an operator elaboration in the substate). What the current scheme tries to do is capture the role of the complete processing in the substate -- what is tested and what is modified. I admit we haven't gotten it right yet, but that doesn't mean there isn't a solution along that path.
From: Randy Jones on 8/31/08
I'm a little confused by your example. Under the current scheme, a
learned chunk would be an operator elaboration only if the result being
returned is being attached directly to a super-operator, right? And that
result can *only* be returned by the last inference in the substate,
because that's when the chunk is created. So the PSCM role of an
operator elaboration chunk is *already* based on the last inference that
is made in the substate. I haven't yet seen an example where it would be
challenging to get closure over chunking using a scheme based on the
last inference. So what am I missing? My basic point is that it seems
like the current approach wants to allow the possibility that the PSCM
role of a chunk is somehow a "side effect", but in practice nobody does
it that way (not to mention the evils of side effects from an
engineering perspective)...at least in my experience, everybody knows
what kinds of chunks they want to get when they write their code, and
then they have to figure out a way within the current implementation to
accomplish that. In my proposed scheme that's easy, because you just
tell Soar what you want it to be when you return the result (and you
also get no conflict between the support of the original result and the
support of the chunk)...in all of the past O-support schemes there have
been at least certain kinds of chunks that you have to do twisted things
to try to get what you want out of the system.
From: John Laird on 8/31/08
My example about operator elaboration was to make a point about closure over chunking, not about the last inference. I still am curious how you would achieve closure for operator elaboration.
I do think there are issues with last inference, and here are two examples. Consider using an operator in the selection space to compare two superoperators and then generate a preference. If you make the operator application i-supported in the substate so the preference is i-supported, that risks having a flickering elaboration (going in and out) if the result doesn't terminate the substate and if the operator application tests for the absence of the preference (which applications often do). I have used operators for this in extensions to the selection space, and it works in the current system. Alternatively, consider using a state elaboration in a substate to do the final step in operator application in a superstate. I guess you could label the state elaboration as an operator application, but there is no guarantee in Soar that the result is always part of an operator application. It could sometimes be part of what has traditionally been state elaboration (depending on what is tested by prior processing in the substate), where the correct behavior is for the result to retract when the reasons for its creation are removed from working memory. One of the strengths of Soar is that it can reuse a problem space in different ways depending on the context.
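To make the flickering risk concrete, here is a minimal sketch, with made-up names and a plain attribute standing in for the returned preference. If a rule like this were given i-support, creating the superstate result would falsify its own negated condition, so the instantiation retracts, the result disappears, the rule matches again, and the structure flickers in and out:

    # Hypothetical operator-application rule in a selection substate that
    # returns a result to the superstate while testing for its absence.
    # With o-support the result persists; forced to i-support, it flickers.
    sp {selection*apply*compare
       (state <s> ^name selection ^operator <o> ^superstate <ss>)
       (<o> ^name compare ^winner <w>)
      -(<ss> ^preferred <w>)
    -->
       (<ss> ^preferred <w>)
    }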
I guess a lot of this discussion arises because you and I have two fundamentally different views of the use of Soar. In my view, as we progress more and more toward general human-level behavior, I think we need to get the programmer out of the way and have most of the creation of rules happen through chunking (and possibly other learning mechanisms if chunking is insufficient). Thus, there shouldn't be a programmer in the background that "knows" what kind of chunks they want. Maybe this means that the direction I want Soar to go in doesn't make it the best tool for knowledge engineering applications.
I also think part of this has to do with styles of writing Soar code. I think that when an approach forces you to twist your programming, a different approach should be investigated (as opposed to finding easier ways to do the twisting). That is my experience when talking to students who have had issues with the current support mechanisms using chunking (modulo the issues that Bob brought up in his original email).
From: John Laird on 9/1/08
After reading Randy's email again and thinking about this some more, I believe it does come down to a difference in philosophy about language design and the purpose of Soar. The key point probably is about substates and results. One of the distinguishing characteristics of Soar is that results are a side effect of processing in a subgoal. I've always thought that once a subgoal is set up, it just goes off and does its thing. There is no explicit return of results -- although sometimes some of the structures are obviously connected to the superstate (such as operator preferences), but not always. In terms of processing in the substate, structures that become results aren't "special", and sometimes a given structure might be a result, but other times, depending on the structure of the substate, it might not be a result. This makes it possible to do some pretty interesting things in substates that would not be possible if results had to be tagged.
This contrasts with most programming languages, which try to avoid global data and side effects. In those cases, the programmer does "control" when a result is created, and knows what the result structure should be. Thus, I would think that under that model, controlling the support would make sense.
So, if Soar is being used as an advanced programming language, I can see Randy's point. And in the design of Zoom -- if they include impasses/substates/results, it is probably appropriate to have explicit results with explicit support. But the goal of Soar is not to be an advanced programming language, and so it is appropriate for some of these design decisions to be different.
From: Randy Jones on 9/1/08
Thanks for your further thoughts on this, John. I guess my opinion
might boil down to this: The ability to create the kinds of "possibly
interesting side effects" that you describe is *exactly* what causes the
kinds of mixed-support problems that were identified in the email that
started this thread. So I don't believe it will be possible to maintain
the ability to keep these kinds of side effects but also get rid of the
potential for these kinds of problems. Put another way, I believe that
if you ever do find a way to solve the problems, you will discover that
you have effectively eliminated the possibility of having the side
effects. The reason I fall on the side of the fence I do is that I've
never seen any compelling examples of the usefulness of these particular
kinds of side effects, but I've seen lots of cases where the
support-mixing problems get in the way of a clean design.
Original issue reported on code.google.com by voigtjr@gmail.com on 23 Jul 2009 at 5:02