


A Critique of Prof. Hubert Dreyfus' "Why Heideggerian AI Failed" - a call for comments

All,

I have critiqued in great detail a recent white paper by Prof. Hubert 
Dreyfus entitled "Why Heideggerian AI Failed and how Fixing it would Require 
making it more Heideggerian".  I can email a copy of it to whoever is 
interested. For his bio, see:
http://socrates.berkeley.edu/~hdreyfus/

I want to stimulate discussion on this topic by posting my critiques little 
by little and getting comments from the AI community on the news groups. 
However, before I start I want to get a feel for how many know of his work 
and/or would be interested in an intellectual debate for and against his 
many anti-AI positions.

I hope many will respond to this posting with interest so I can begin 
posting each part of this paper I find issues with and my reasoned critique 
for others to comment on.

Thanks,
Ariel- 


Isaac
11/15/2008 4:46:55 AM
comp.ai.neural-nets

On Nov 14, 9:46 pm, "Isaac" <gro...@sonic.net> wrote:
> All,
>
> I have critiqued in great detail a recent white paper by Prof. Hubert
> Dreyfus entitled "Why Heideggerian AI Failed and how Fixing it would Require
> making it more Heideggerian".  I can email a copy of it to whoever is
> interested.

Please send a copy to omegazero2003@yahoo.com, Ariel. Thanks!

Why don't you post a summary of that paper and your key critique
points here?

Alpha
11/15/2008 3:35:05 PM

"Alpha" <omegazero2003@yahoo.com> wrote in message 
news:f8636636-6ee8-4c92-9905-a89145bc06b2@k24g2000pri.googlegroups.com...

On Nov 14, 9:46 pm, "Isaac" <gro...@sonic.net> wrote:
> All,
>>
>> I have critiqued in great detail a recent white paper by Prof. Hubert
>> Dreyfus entitled "Why Heideggerian AI Failed and how Fixing it would
>> Require
>> making it more Heideggerian". I can email a copy of it to whoever is
>> interested.
>
>Please send a copy to omegazero2003@yahoo.com Ariel. Thanks!

Will do.


>
>Why don't you post a summary of that paper

No way to effectively summarize it.  Too much disparate ground is covered to be 
useful for my detailed critique.



>and your key critique
>points here.

far too many "key" ones.  I'll see how posting little by little goes, and if 
that is too combersome I'll try another approach.

Thanks,

Ariel B.




Isaac
11/16/2008 3:25:43 AM
OK, there has been some interest.  So far, two takers.  I've emailed a copy 
of the paper.  So, here is my first installment.  Not all issues will 
resonate with everyone, so pick and choose what you find interesting, pro or con, 
and I will defend any of my comments.

I will post the paragraph(s) I have a comment about, and highlight the 
particular words at issue by enclosing them between "***" characters.  I'll 
also include citations in the paper when helpful. I seek (intelligent and 
informed) technical/theoretical critique or feedback from anyone on this 
particular issue.

First critique, page 11, line 20:
"I agree that it is time for a positive account of Heideggerian AI and of an 
underlying Heideggerian neuroscience, but I think Wheeler is the one looking 
in the wrong place.  Merely by supposing that Heidegger is concerned with 
problem solving and action oriented representations, Wheeler's project 
reflects not a step beyond Agre but a regression to aspects of pre-Brooks 
GOFAI.  Heidegger, indeed, claims that skillful coping is basic, but he 
is also clear that all coping takes place on the background coping he calls 
being-in-the-world that doesn't involve any form of representation at all.

see: Michael Wheeler, Reconstructing the Cognitive World, 222-223.

Wheeler's cognitivist misreading of Heidegger leads him to overestimate the 
importance of Andy Clark's and David Chalmers' attempt to free us from the 
Cartesian idea that the mind is essentially inner by pointing out that in 
thinking we sometimes make use of external artifacts like pencil, paper, and 
computers.[i]  Unfortunately, this argument for the extended mind preserves 
the Cartesian assumption that our basic way of relating to the world is by 
using propositional representations such as beliefs and memories whether 
they are in the mind or in notebooks in the world.  In effect, while Brooks 
happily dispenses with representations where coping is concerned, all 
Chalmers, Clark, and Wheeler give us as a supposedly radical new 
Heideggerian approach to the human way of being in the world is to note that 
memories and beliefs are not necessarily inner entities and that, 
***therefore, thinking bridges the distinction between inner and outer 
representations.*** "

My comment:
"Assuming that by "thinking" you mean conscious thought,  I cannot see how 
thinking is a bridge that necessarily follows from memories/beliefs not 
being solely inner entities.  It seems to me that inner and outer 
representations can be bridged without thought.  Isn't this what occurs in 
an unconscious (reflex) reaction to a complex external even, which is an 
automatic bridge and generates a thoughtful, usually accurate response but 
often before we even have a chance to think about it.  Inner/outer 
representations seems semantically vague here.  Also, cannot conscious 
thought can endeavor itself with in purely inner or out representations 
without ever bridging them?  I guess, it is the "therefore" that gives me 
pause here."

Any thoughts on this issue?

Ariel B.


"Alpha" <omegazero2003@yahoo.com> wrote in message 
news:f8636636-6ee8-4c92-9905-a89145bc06b2@k24g2000pri.googlegroups.com...

On Nov 14, 9:46 pm, "Isaac" <gro...@sonic.net> wrote:
> All,
>>
>> I have critiqued in great detail a recent write paper by Prof. Hubert
>> Dreyfus entitled "Why Heideggerian AI Failed and how Fixing it would 
>> Require
>> making it more Heideggerian" . I can email a copy of it to whom ever is
>> interested.
>
>Please send a copy to omegazero2003@yahoo.com Ariel. Thanks!
>
>Why don't you post a summary of that paper and your key critique
>points here.
>
>
>
>
>> For his bio, see:http://socrates.berkeley.edu/~hdreyfus/
>>
>> I want to stimulate discussion on this topic by posting my critiques 
>> little
>> by little and getting comments from the AI community on the news groups.
>> However, before I start I want to get a feel for how many know of his 
>> work
>> and/or would be interested in an intellectual debate for and against his
>> many anti-AI positions.
>>
>> I hope many will respond to this posting with interest so I can begin
>> posting each part of this paper I find issues with and my reasoned 
>> critique
>> for others to comment on.
>>
>> Thanks,
>> Ariel-


Isaac
11/16/2008 4:14:45 AM
"Publius" <m.publius@nospam.comcast.net> wrote in message
news:Xns9B57F131CB7B0mpubliusnospamcomcas@69.16.185.250...
> "Isaac" <groups@sonic.net> wrote in
> news:491f9f87$0$33506$742ec2ed@news.sonic.net:
>
>> Minsky, unaware of Heidegger's critique, was convinced that
>> representing a few million facts about objects including their
>> functions, would solve what had come to be called the commonsense
>> knowledge problem.  It seemed to me, however, that the deep problem
>> wasn't storing millions of facts; it was knowing which facts were
>> relevant in any given situation.  One version of this relevance
>> problem was called "the frame problem."  If the computer is running a
>> representation of the current state of the world and something in the
>> world changes, how does the program determine which of its represented
>> facts can be assumed to have stayed the same, and which would have to
>> be updated?
>
> Dreyfus is pointing out one consequence of the lack of a useful definition
> of "intelligence."

Actually, he is doing much more than that in his paper.  I just posted a
portion of its background section, but the paper sets forth why he believes
AI fails and how he (and certain philosophers/researchers he relies
on) thinks intelligence works.  I will email you the paper for your
reference.  Let me know if you are interested in my posting critiques of
his paper for your response.  Maybe you will defend his positions?

>It is problem which plagues most programs for producing
> AI (which is not to deny that much progress has been made in that
> endeavor).
>
> We may define "intelligence" as, "The capacity of a system to generate
> solutions to novel problems," and "problems" as, "Obstacles or impediments
> preventing the system from attaining a goal."

I defy you to contrive a definition of intelligence that works.  For
example, using your current definition above, the Earth would be intelligent
because it is a system with the capacity to generate solutions (e.g.,
extremely complex yet stable atmospheric weather, ocean currents, etc.) to
novel problems such as maintaining a stable global
temperature in the face of many (thousands of) changing (novel) variables that
are constant obstacles preventing the Earth (Gaia?) from attaining her goal
of minimizing temperature differences globally.  There are many similar
examples that fit your language but would not be considered intelligent by
anyone reasonable in science.  Care to update your definition or defend it?
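
To make the over-inclusiveness concrete, here is a minimal sketch (a toy of my 
own, in Python, not anything from the paper) of a dumb negative-feedback loop 
that literally satisfies your definition: every random disturbance is an 
"obstacle preventing the system from attaining a goal," and every corrective 
adjustment is a "generated solution," yet nobody would call it intelligent:

import random

# Toy homeostat: goal = hold temperature at 20 degrees.
SETPOINT = 20.0
GAIN = 0.5

def step(temperature, disturbance):
    """Return the corrective action and the resulting temperature."""
    error = SETPOINT - temperature      # the "obstacle" to the goal
    action = GAIN * error               # the "solution" it generates
    return action, temperature + action + disturbance

temperature = 25.0
for t in range(10):
    disturbance = random.uniform(-2.0, 2.0)   # a novel, unpredicted problem
    action, temperature = step(temperature, disturbance)
    print(f"t={t}  action={action:+.2f}  temperature={temperature:.2f}")

By the letter of your definition this loop is "intelligent", which is the same 
point the Gaia example above is making; the definition needs something beyond 
goal-directed error correction.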

>
> Introducing goals into the definition gives us a handle on the "frame
> problem": the problem is framed by the current goal.

Goals are really related to the frame problem in that the "frame" that
matters is the one that reflects "reality" in the context of your priorities,
experience, and world model (e.g., a filter) as a situated agent.  Goals are
just one priority, but goals do not really drive perception; they mostly seek
to manipulate the frame to achieve a desired result.  Loosely, I think the
frame problem is much more about constructing a useful and tractable model
that the situated agent can use towards building a plan to achieve its
goals.
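
For readers who want the frame problem stated concretely, here is a minimal 
sketch (my own illustration in Python, with invented predicates, not taken 
from Dreyfus or from your post) of the naive fact-store approach: after one 
action, nothing in the representation itself tells the program which of the 
remaining facts are still safe to assume:

# Toy world model: a set of propositions the agent currently believes.
world = {
    ("on", "book", "table"),
    ("colour", "table", "brown"),
    ("light", "on"),
    ("door", "closed"),
}

def apply_action(world, add, delete):
    """Naive update: the action lists what it adds and deletes, but says
    nothing about the thousands of facts it leaves alone.  Deciding which
    of those untouched facts can safely be assumed to persist - for every
    fact, after every action - is the frame problem."""
    return (world - delete) | add

# Action: move the book from the table to the shelf.
world = apply_action(world,
                     add={("on", "book", "shelf")},
                     delete={("on", "book", "table")})

# Did moving the book change the lighting?  The door?  The table's colour?
# The representation is silent about which facts were even relevant.
print(sorted(world))

Framing the check by the current goal, as you suggest, prunes the facts to 
re-examine, but the agent still has to have built a tractable model of which 
facts bear on that goal in the first place, which is the point I am making above.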

>Attention is paid only
> to world states which bear on the system's goals (as a background
> process).

Of course, goals do play an important role in how we focus attention, and to
some extent this colors the frame problem, but I do not see how it drives it
exclusively as you put it.  If I watch a movie I have no goals other than
to be entertained; however, I clearly create the proper frames needed to
understand and appreciate the movie and its meaning.  In that case, I am
actually trying to discover the goals of the movie, and not my own, to
understand it.  No goals on my part, but the frame problem seriously exists.

> If not enough information is in hand to solve the current problem, then
> the
> system returns to "the world" to gather additional information. (There is
> no need to "store millions of facts." Facts are gathered as they are
> needed, i.e., in light of the present goal and problem).

I don't think anyone would say that classic AI would not return to the world
to gather more facts to add to its "millions of facts".  The problem Dreyfus
sees with AI is that it creates rules that are representations (or symbols)
and are compartmentalized; following the philosopher Heidegger, he and the
set of philosophers/researchers he relies on say that this is not how
intelligence actually works.  I think every intelligent
system will end up effectively having a constantly evolving set of millions
of "rules", so that is not the question.  Do you have any counterexamples?

Cheers!
Ariel B.-


"Alpha" <omegazero2003@yahoo.com> wrote in message 
news:f8636636-6ee8-4c92-9905-a89145bc06b2@k24g2000pri.googlegroups.com...
On Nov 14, 9:46 pm, "Isaac" <gro...@sonic.net> wrote:
> All,
>
> I have critiqued in great detail a recent write paper by Prof. Hubert
> Dreyfus entitled "Why Heideggerian AI Failed and how Fixing it would 
> Require
> making it more Heideggerian" . I can email a copy of it to whom ever is
> interested.

Please send a copy to omegazero2003@yahoo.com Ariel. Thanks!

Why don't you post a summary of that paper and your key critique
points here.




> For his bio, see:http://socrates.berkeley.edu/~hdreyfus/
>
> I want to stimulate discussion on this topic by posting my critiques 
> little
> by little and getting comments from the AI community on the news groups.
> However, before I start I want to get a feel for how many know of his work
> and/or would be interested in an intellectual debate for and against his
> many anti-AI positions.
>
> I hope many will respond to this posting with interest so I can begin
> posting each part of this paper I find issues with and my reasoned 
> critique
> for others to comment on.
>
> Thanks,
> Ariel-


Isaac
11/16/2008 11:30:58 AM
All, to help give some context, here is a wiki link to the philosopher 
Dreyfus draws on:
http://en.wikipedia.org/wiki/Heidegger

I think this passage from Dreyfus' paper might help sum it up for a basic 
understanding of the top level issues:
"II. Symbolic AI as a Degenerating Research Program
Using Heidegger as a guide, I began to look for signs that the whole AI
research program was degenerating.  I was particularly struck by the fact
that, among other troubles, researchers were running up against the problem
of representing significance and relevance - a problem that Heidegger saw
was implicit in Descartes' understanding of the world as a set of
meaningless facts to which the mind assigned what Descartes called values,
and John Searle now calls functions.[i]

But, Heidegger warned, values are just more meaningless facts.  To say a
hammer has the function of being for hammering leaves out the defining
relation of hammers to nails and other equipment, to the point of building
things, and to the skills required when actually using the hammer- all of
which reveal the way of being of the hammer which Heidegger called
readiness-to-hand.   Merely assigning formal function predicates to brute
facts such as hammers couldn't capture the hammer's way of being nor the
meaningful organization of the everyday world in which hammering has its
place. "[B]y taking refuge in 'value'-characteristics," Heidegger said, "we
are ... far from even catching a glimpse of being as readiness-to-hand."[ii]

Minsky, unaware of Heidegger's critique, was convinced that representing a
few million facts about objects including their functions, would solve what
had come to be called the commonsense knowledge problem.  It seemed to me,
however, that the deep problem wasn't storing millions of facts; it was
knowing which facts were relevant in any given situation.  One version of
this relevance problem was called "the frame problem."  If the computer is
running a representation of the current state of the world and something in
the world changes, how does the program determine which of its represented
facts can be assumed to have stayed the same, and which would have to be
updated?

As Michael Wheeler in his recent book, Reconstructing the Cognitive World,
puts it:

[G]iven a dynamically changing world, how is a nonmagical system ... to take
account of those state changes in that world ... that matter, and those
unchanged states in that world that matter, while ignoring those that do
not?  And how is that system to retrieve and (if necessary) to revise, out
of all the beliefs that it possesses, just those beliefs that are relevant
in some particular context of action?[iii]



--------------------------------------------------------------------------------

[i] John R. Searle, The Construction of Social Reality, (New York: The Free
Press, 1995).

[ii] Martin Heidegger, Being and Time, J. Macquarrie & E. Robinson, Trans.,
(New York: Harper & Row, 1962), 132, 133.

[iii] Michael Wheeler, Reconstructing the Cognitive World: The Next Step,
(Cambridge, MA: A Bradford Book, The MIT Press, 2007), 179.

"

best,
Ariel B.


"Isaac" <groups@sonic.net> wrote in message 
news:491d60f6$0$33588$742ec2ed@news.sonic.net...
> All,
>
> I have critiqued in great detail a recent write paper by Prof. Hubert
> Dreyfus entitled "Why Heideggerian AI Failed and how Fixing it would 
> Require
> making it more Heideggerian" .  I can email a copy of it to whom ever is
> interested. For his bio, see:
> http://socrates.berkeley.edu/~hdreyfus/
>
> I want to stimulate discussion on this topic by posting my critiques 
> little
> by little and getting comments from the AI community on the news groups.
> However, before I start I want to get a feel for how many know of his work
> and/or would be interested in an intellectual debate for and against his
> many anti-AI positions.
>
> I hope many will respond to this posting with interest so I can begin
> posting each part of this paper I find issues with and my reasoned 
> critique
> for others to comment on.
>
> Thanks,
> Ariel-
>
> 


Isaac
11/16/2008 11:35:08 AM
Reminder: I will post the paragraph(s) I have a comment about, and highlight 
the
particular words at issue by enclosing them between "***" characters.  I'll
also include citations in the paper when helpful. I seek (intelligent and
informed) technical/theoretical critique or feedback from anyone on this
particular issue.  Ask me for a copy of the paper if you are interested in 
the context and details.

2nd critique, on his page 12, line 4:
"Heidegger's important insight is not that, when we solve problems, we 
sometimes make use of representational equipment outside our bodies, but 
that being-in-the-world is more basic than thinking and solving 
problems; that it is not representational at all.  That is, when we are 
coping at our best, ***we are drawn in by solicitations and respond directly 
to them, so that the distinction between us and our equipment--between inner 
and outer--vanishes.***#1  As Heidegger sums it up:
I live in the understanding of writing, illuminating, going-in-and-out, and 
the like.  More precisely: as Dasein I am -- in speaking, going, and 
understanding -- an act of understanding dealing-with.  My being in the 
world is nothing other than this already-operating-with-understanding in 
this mode of being.[ii]

Heidegger and Merleau-Ponty's understanding of embedded embodied coping, 
then, is not that the mind is sometimes extended into the world but rather 
that all such problem solving is derivative, that in our most basic way of 
being, that is, as absorbed skillful copers, we are not minds at all but one 
with the world.   Heidegger sticks to the phenomenon, when he makes the 
strange-sounding claim that, in its most basic way of being, "Dasein is its 
world existingly."[iii]

When you stop thinking that mind is what characterizes us most basically 
but, rather, that most basically we are absorbed copers, the inner/outer 
distinction becomes problematic. There's no easily askable question as to 
whether the absorbed coping is in me or in the world. According to 
Heidegger, intentional content isn't in the mind, nor in some 3rd realm (as 
it is for Husserl), nor in the world; it isn't anywhere.  It's an embodied 
way of being-towards.  Thus for a Heideggerian, all forms of cognitivist 
externalism presuppose a more basic existential externalism where even to 
speak of "externalism" is misleading since such talk presupposes a contrast 
with the internal.  Compared to this genuinely Heideggerian view, 
***extended-mind externalism is contrived, trivial, and irrelevant***#2.



--------------------------------------------------------------------------------

[i] As Heidegger puts it: "The self must forget itself if, lost in the world 
of equipment, it is to be able 'actually' to go to work and manipulate 
something." Being and Time, 405.

[ii] Logic, 146. It's important to realize that when he uses the term 
"understanding," Heidegger explains (with a little help from the translator) 
that he means a kind of know-how:

In German we say that someone can vorstehen something-literally, stand in 
front of or ahead of it, that is, stand at its head, administer, manage, 
preside over it.  This is equivalent to saying that he versteht sich darauf, 
understands in the sense of being skilled or expert at it, has the know-how 
of it.  (Martin Heidegger, The Basic Problems of Phenomenology, A. 
Hofstadter, Trans. Bloomington: Indiana University Press, 1982, 276.)

[iii] Being and Time, 416.  To make sense of this slogan, it's important to 
be clear that Heidegger distinguishes the human world from the physical 
universe.


--------------------------------------------------------------------------------



My critique #1:

It seems that the "distinction between us and our equipment... vanishes" is 
just describing the unconscious automation process that takes over body 
functions and leaves the conscious mind unaware that its equipment 
was drawn into responding to solicitations.  This in many ways seems to just 
be alluding to the domain of our unconscious being, which responds like 
dominoes that fall automatically in response to many contextual 
solicitations.  I do not see how this all makes a solid argument that 
conscious thought is unified and inseparable from "our equipment" (i.e., the 
body).  At best this is very weak, if not completely flawed, logic for 
inferring that our sense (act) of being in the world "is not 
representational at all".  The text that appears to clarify this assertion 
just seems to be a string of conclusory declarations without a solid logical 
foundation.  Even a plausible syllogism would be helpful here.



My critique #2:

Does not the Heideggerian view, by requiring this unity between the mind and the 
world, result in a "contrived, trivial, and irrelevant" world representation 
scheme in people when the events in the world are so far beyond a person's 
ability to cope (relative to their internal representation/value system) 
that they just end up contriving a trivial and irrelevant internal world 
that is simply projected onto the "best fit/nearest neighbor" 
representation that they can cope with?  In this way, there is no absorbed 
coping, because it would require a perfect and accurate absorption scheme between 
our mind (inner) and the world (outer) that does not exist and cannot be 
magically created, even biologically.  If you ignore this aspect of the 
Heideggerian view, then what you end up with is nothing much more than an 
"ignorance is bliss" cognitive model that is not too different from what you 
say is wrong with Brooks' approach.  That is, your portrayal of the 
Heideggerian view of absorbed coping would exactly model the thinking and 
representation behavior of insects, which certainly is not the conscious, 
cognitive model of humans.  Thus, this Heideggerian view of absorbed coping 
is either insufficient to describe the human condition or it renders 
insects indistinguishable from humans; either way it does not seem to 
uniquely capture behavior at the level of human consciousness and is, 
thus, flawed at best.  That is, if this Heideggerian view of absorbed 
coping applies equally to animals or insects, then it is not really 
helpful for modeling or shedding light on higher human intellectual 
behavior, which, of course, is the sole subject/goal of AI.  Moreover, this 
"perfect absorption" is a complete illusion and in practice will only exist 
in the most predictable and simple situations.  From another angle, how is 
this Heideggerian view of absorbed coping much different from the standard 
psychological model of projection, where our internal model/representation is 
simply projected onto the world (or a subset frame of it) and we just trick 
ourselves into believing that we are completely and accurately absorbed with 
the true essence of the frame problem?  This Heideggerian view of absorbed 
coping seems to fit much more closely the unconscious aspects of the human 
condition, which are more insect/animal like.  This all seems to be logically 
flawed and/or a very weak foundation for grandiose conclusions about what 
philosophical approach/model is needed to solve the frame problem and human 
consciousness.  Maybe I am missing something critical here that can make 
sense of it.  Please clarify the logic.





Any thoughts on this issue?

Ariel B.


"Isaac" <groups@sonic.net> wrote in message 
news:491d60f6$0$33588$742ec2ed@news.sonic.net...
> All,
>
> I have critiqued in great detail a recent write paper by Prof. Hubert
> Dreyfus entitled "Why Heideggerian AI Failed and how Fixing it would 
> Require
> making it more Heideggerian" .  I can email a copy of it to whom ever is
> interested. For his bio, see:
> http://socrates.berkeley.edu/~hdreyfus/
>
> I want to stimulate discussion on this topic by posting my critiques 
> little
> by little and getting comments from the AI community on the news groups.
> However, before I start I want to get a feel for how many know of his work
> and/or would be interested in an intellectual debate for and against his
> many anti-AI positions.
>
> I hope many will respond to this posting with interest so I can begin
> posting each part of this paper I find issues with and my reasoned 
> critique
> for others to comment on.
>
> Thanks,
> Ariel-
>
> 


Isaac
11/16/2008 11:35:21 AM
"Isaac" <groups@sonic.net> wrote:
> All,
>
> I have critiqued in great detail a recent white paper by Prof. Hubert
> Dreyfus entitled "Why Heideggerian AI Failed and how Fixing it would
> Require making it more Heideggerian".  I can email a copy of it to
> whoever is interested. For his bio, see:
> http://socrates.berkeley.edu/~hdreyfus/
>
> I want to stimulate discussion on this topic by posting my critiques
> little by little and getting comments from the AI community on the news
> groups. However, before I start I want to get a feel for how many know of
> his work

I've never heard of him.  But that's not saying much.

> and/or would be interested in an intellectual debate

I always like to debate.

> for and
> against his many anti-AI positions.
>
> I hope many will respond to this posting with interest so I can begin
> posting each part of this paper I find issues with and my reasoned
> critique for others to comment on.
>
> Thanks,
> Ariel-

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
curt
11/16/2008 7:27:43 PM
Curt, I just emailed a copy of the white paper to you. See prior posts of my 
initial two critiques.  The first was on the 15th, and the 2nd on the 16th in 
the wee hours of the morning.


"Curt Welch" <curt@kcwc.com> wrote in message 
news:20081116142822.087$4O@newsreader.com...
> "Isaac" <groups@sonic.net> wrote:
>> All,
>>
>> I have critiqued in great detail a recent white paper by Prof. Hubert
>> Dreyfus entitled "Why Heideggerian AI Failed and how Fixing it would
>> Require making it more Heideggerian".  I can email a copy of it to
>> whoever is interested. For his bio, see:
>> http://socrates.berkeley.edu/~hdreyfus/
>>
>> I want to stimulate discussion on this topic by posting my critiques
>> little by little and getting comments from the AI community on the news
>> groups. However, before I start I want to get a feel for how many know of
>> his work
>
> I've never heard of him.  But that's not saying much.
>
>> and/or would be interested in an intellectual debate
>
> I always like to debate.
>
>> for and
>> against his many anti-AI positions.
>>
>> I hope many will respond to this posting with interest so I can begin
>> posting each part of this paper I find issues with and my reasoned
>> critique for others to comment on.
>>
>> Thanks,
>> Ariel-
>
> -- 
> Curt Welch 
> http://CurtWelch.Com/
> curt@kcwc.com 
> http://NewsReader.Com/ 


Isaac
11/16/2008 9:09:37 PM
"Isaac" <groups@sonic.net> writes:

>I have critiqued in great detail a recent white paper by Prof. Hubert 
>Dreyfus entitled "Why Heideggerian AI Failed and how Fixing it would Require 
>making it more Heideggerian".  I can email a copy of it to whoever is 
>interested. For his bio, see:
>http://socrates.berkeley.edu/~hdreyfus/

Yes, I would appreciate a copy.  Hopefully, the email address in
the "From:" line of this post should work.

Neil
11/16/2008 10:19:52 PM
Neil, I sent it to the email address you listed.  I replaced the "+" with "." assuming 
that was an anti-SPAM measure.  See prior posts of my 
initial two critiques.  The first was on the 15th, and the 2nd on the 16th in 
the wee hours of the morning.

Ariel-


"Neil W Rickert" <rickert+nn@cs.niu.edu> wrote in message 
news:c01Uk.7238$c45.2091@nlpi065.nbdc.sbc.com...
> "Isaac" <groups@sonic.net> writes:
>
>>I have critiqued in great detail a recent white paper by Prof. Hubert
>>Dreyfus entitled "Why Heideggerian AI Failed and how Fixing it would 
>>Require
>>making it more Heideggerian".  I can email a copy of it to whoever is
>>interested. For his bio, see:
>>http://socrates.berkeley.edu/~hdreyfus/
>
> Yes, I would appreciate a copy.  Hopefully, the email address in
> the "From:" line of this post should work.
> 


Isaac
11/17/2008 12:38:33 AM
Here is my 3rd installment of many critiques of this paper.  Not all 
issues will resonate with everyone, so pick and choose what you find interesting to 
debate pro/con and I will defend any of my comments.

I will post the paragraph(s) I have a comment about, and highlight the
particular words at issue by enclosing them between "***" characters.  I'll
also include citations in the paper when helpful. I seek (intelligent and
informed) technical/theoretical critique or feedback from anyone on the
issue(s) presented/raised.

See page 12, line 28:
VI.  What Motivates Embedded/embodied Coping?
But why is Dasein called to cope at all?  According to Heidegger, we are 
constantly solicited to improve our familiarity with the world.  Five years 
before the publication of Being and Time he wrote:

Caring takes the form of a looking around and seeing, and as this 
circumspective caring it is at the same time ... concerned about developing 
its circumspection, that is, about securing and expanding its familiarity 
with the objects of its dealings. [i]

This pragmatic perspective is developed by Merleau-Ponty, and by Samuel 
Todes.[ii]  These heirs to Heidegger's account of familiarity and coping 
describe how an organism, animal or human, interacts with what is 
objectively speaking the meaningless physical universe in such a way as to 
cope with an environment organized in terms of that organism's need to find 
its way around.  All such coping beings are motivated to get a more and more 
refined and secure sense of the specific objects of their dealings. 
According to Merleau-Ponty:

My body is geared into the world when my perception presents me with a 
spectacle as varied and as clearly articulated as possible...[iii]

In short, in our skilled activity we are drawn to move so as to achieve a 
better and better grip on our situation.  For this movement towards maximal 
grip to take place one doesn't need a mental representation of one's goal 
nor any problem solving, as would a GOFAI robot.  ***Rather, acting is 
experienced as a steady flow of skillful activity in response to the 
situation.  When one's situation deviates from some optimal body-environment 
gestalt, one's activity takes one closer to that optimum and thereby 
relieves the "tension" of the deviation.  ***[asb1] ***One does not need to 
know what the optimum is in order to move towards it.  One's body is simply 
drawn to lower the tension.*** [asb2]

***That is, if things are going well and I am gaining an optimal grip on the 
world, I simply respond to the solicitation to move towards an even better 
grip and, if things are going badly, I experience a pull back towards the 
norm.***  [asb3] If it seems that much of the time we don't experience any 
such pull, Merleau-Ponty would no doubt respond that the sensitivity to 
deviation is nonetheless guiding one's coping, just as an airport radio 
beacon doesn't give a warning signal unless the plane strays off course, and 
then, let us suppose, the plane gets a signal whose intensity corresponds to 
how far off course it is and the intensity of the signal diminishes as it 
approaches getting back on course.  The silence that accompanies being on 
course doesn't mean the beacon isn't continually guiding the plane. 
Likewise, the absence of felt tension in perception doesn't mean we aren't 
***being directed by a solicitation***[asb4] .

As Merleau-Ponty puts it: ***"Our body is not an object for an 'I think', it 
is a grouping of lived-through meanings that moves towards its 
equilibrium***[asb5] ."[iv]  Equilibrium being Merleau-Ponty's name for the 
zero gradient of steady successful coping.  Moreover, normally, we do not 
arrive at equilibrium and stop there but are immediately taken over by a new 
solicitation.




--------------------------------------------------------------------------------

MY CRITIQUES indexed by my initials "ASB" followed by the number of my 
comment above:

 [asb1] This sounds a lot like dominoes combined with cognitive dissonance 
theory at a more sensory level.  This "skillful coping" is much like a 
cascade of dominoes that automatically fall in the right pattern in response to 
a certain stimulus.  It seems that you try to avoid the need for a mental 
representation by making the stimulus automatically trigger a response that 
is perfectly adapted, instead of going through an abstracted mental 
representation of the event and using that model to calculate the best 
response.  Such a notion of the right gestalt for every sensory 
event/situation is not much different from a well-trained neural network 
that takes sensory inputs that are spread to all neural nodes, and, depending 
upon the landscape of trained weights, the learned pattern of output 
response is "automatically" achieved.  It would seem that such a behavioral 
system has no mental representation; however, I would disagree.  For 
example, any sufficiently complex decision landscape will have to deal with 
an uncertain range, combination, and timing of sensory inputs, so this 
logically necessitates that all decision-making elements 
represent various abstracted aspects of the sensory pattern that were 
learned to be most key to achieving the desired output and error level.  This 
representation of the various abstracted aspects is effectively the gestalt 
that each neuron (or group of neurons) has learned, alone or in combination with 
others.  Hence, because a finite system that must deal with high uncertainty 
must contain abstracted representations, all such systems can be thought 
of as having mental representations.  It is just that when the task at hand is 
more low level, the represented abstractions seem like automatic, 
unspeakable gestalts, as opposed to the highly abstracted models that tend 
to arise for processing events that are more disconnected from the sensory 
moment.  In this way, I don't see how you can refute mental representations. 
I contend, for example, that your mental representations can be thought of 
as sensory abstractions.  So, are you saying that your absorbed coping 
occurs without abstractions?  I assert that the process of abstraction 
necessarily builds a (parametric) model of the observation (i.e., a mental 
representation, even if distributed), and if your system does not abstract, it 
cannot cope with uncertainty or variation of sensory patterns.  Gestalts are 
nothing more than highly effective abstractions (even heuristics) that treat 
a whole class of sensory situations with an appropriate response - this is an 
abstracted rule at its best.  Thus, I do not see how your absorbed coping 
paradigm avoids creating mental representations as I define them above.  It would 
be helpful if you would explain your logic in terms of a practical (or even 
plausible) computational system.  Your Freeman-based chaotic neural network 
model did not seem to address/resolve the issues I raise above.
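
To make the "distributed abstraction" point concrete, here is a minimal sketch 
(my own toy in Python/numpy, with an invented two-input task, not anything from 
Dreyfus or Freeman): a tiny feed-forward network trained on XOR.  After 
training, no single number in it is "the rule", yet the learned weights jointly 
constitute an abstracted model of the input pattern, which is all I mean by a 
distributed mental representation:

import numpy as np

# 2-4-1 network trained on XOR by plain batch gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)            # hidden layer: the learned abstraction
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out) # backprop of squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))   # should be close to [[0],[1],[1],[0]] once trained

The hidden activations h are exactly the "abstracted aspects of the sensory 
pattern" I describe above: erase W1 and b1 and the system can no longer map the 
inputs to the correct responses, even though no explicit rule was ever stored.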

 [asb2] Now you have me completely confused.  It is impossible to generate an 
error signal (i.e., to "lower the tension") without comparing against some 
model of the expected or desired event/result.  The fact that you define it this 
way seems to me to require a mental representation of an 
abstract (representative or ideal) configuration against which the sensory inputs 
are measured (a la Plato's parallel universe of ideal 
representations for all objects) to generate an error signal (tension) in a 
control loop.  Even the simplest negative feedback control loop requires 
an abstract model of the problem domain being acted upon.  It is just that 
this domain model is encoded into the control loop wiring, time constants, 
and gain design, which are in effect an embedded mental representation that 
was created by a human and encoded into hardware or software.  The body 
(organic systems) must do the same thing; it just has automatic 
algorithms to create the models that design the control loop.  This does not 
avoid mental representations; it just distributes them system-wide instead of 
concentrating them in the kind of centralized control scheme that most people think of as a 
mental representation process.
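
To make the point about the error signal concrete, here is a minimal sketch 
(entirely my own construction in Python, not Dreyfus's or Merleau-Ponty's) of 
"being drawn to lower the tension": the tension is just the distance between 
the current state and a stored optimal configuration, and that stored 
configuration is the embedded model I am arguing cannot be avoided:

import numpy as np

# "Tension" as deviation from a stored optimal body-environment configuration.
OPTIMUM = np.array([1.0, 0.0, 0.5])   # the embedded representation (setpoint)
state = np.array([0.2, 0.9, -0.3])    # current sensory/body state
rate = 0.3                            # loop gain / time constant

for step in range(10):
    tension = state - OPTIMUM         # error signal against the stored model
    state = state - rate * tension    # "drawn to lower the tension"
    print(f"step {step}: tension = {np.linalg.norm(tension):.3f}")

The agent never needs to consciously "know" the optimum, just as Dreyfus says; 
but the optimum still has to be encoded somewhere in the loop, and that is the 
sense in which I call it a representation.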

 [asb3] I contend that in any practical system there will always have to be 
a model against which sensory inputs are compared, thereby generating an 
error signal for the control loop.  This model is a mental representation 
even if it is hardwired.  Genetic algorithms all require a fitness function. 
Whether that fitness function (i.e., the "optimal body-environment gestalt") 
is created by a high-order mental process or by natural selection is 
immaterial.  The fitness function is a model, and it is an engram of a mental 
representation.  I am eager to hear your arguments that practically and 
logically motivate otherwise.
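
The same applies to the genetic-algorithm case.  Here is a minimal sketch (a 
toy of my own in Python, with an arbitrary target pattern) in which the fitness 
function is the only place the "optimal body-environment gestalt" lives; delete 
it and the search has nothing to select for:

import random

# Toy genetic algorithm: evolve a 10-bit genome towards a target pattern.
# The fitness function *is* the model of what counts as well adapted;
# whether a designer wrote it or natural selection "wrote" it is immaterial.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, p=0.1):
    return [1 - g if random.random() < p else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = [mutate(random.choice(parents)) for _ in range(30)]

best = max(population, key=fitness)
print("best genome:", best, "fitness:", fitness(best))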

 [asb4] It seems to me that "being directed by a solicitation" does not 
contradict a guiding mental representation being present as well.  My prior 
comments, to my mind, show that these two need not be mutually 
exclusive, as you seem to contend.  It seems you are trying to say that a river (i.e., conscious 
action) must flow down a mountain and that the way it finds its minimal-energy 
path is dictated by natural forces (i.e., "the mind/body gestalt"), and thus you 
conclude that no centralized mental control/representation system is guiding 
the river to its "intended" destination at the mountain base.  Consciousness 
is all about observing the river's path down the mountain and adjusting the 
(mental) landscape to guide the river (conscious action) to a desired 
result/location.  Gravity may "solicit" water down the mountain, but that 
does not necessarily negate there being other forces/systems acting on the 
river and shaping the landscape/path the water is to take.  It seems to me 
that your solicitation is like gravity, and you are concluding that there 
are no mental representations because absorbed coping is guiding the river 
directed by such solicitation (gravity).  This is just one concrete example 
(of many) of how I see your coping/solicitation/no-representations scheme as 
logically and fundamentally flawed.

 [asb5] It is curious how you are in effect defining "I think" as a 
centralized system disconnected from the body and environment.  How is your 
"grouping of lived-through meanings that moves towards its equilibrium" 
different from a classical neural network, which learns meanings (as 
distributed abstractions) and uses them to "move towards its equilibrium"? 
There is certainly no "I think" in a classical neural network, and it 
behaves as you philosophically describe.  Again, if your proposed cognitive 
model does not distinguish itself from systems that we know cannot be conscious, 
then it must be refined to exclude such insufficient systems from 
the set of definitions (behaviors) required of a sufficient human cognitive 
system.  Otherwise, such proposed behavioral/architectural models are not 
really useful for shedding light on how the human mind works.



Citations made in white paper section above:




--------------------------------------------------------------------------------

[i] Martin Heidegger, Phenomenological Interpretations in Connection with 
Aristotle, in Supplements: From the Earliest Essays to Being and Time and 
Beyond, John Van Buren, Ed., State University of New York Press, 2002, 115. 
My italics.

This way of putting the source of significance covers both animals and 
people.  By the time he published Being and Time, however, Heidegger was 
interested exclusively in the special kind of significance found in the 
world opened up by human beings who are defined by the stand they take on 
their own being.  We might call this meaning.  In this paper I'm putting the 
question of uniquely human meaning aside to concentrate on the sort of 
significance we share with animals.

[ii] See, Samuel Todes, Body and World, Cambridge, MA: The MIT Press, 2001. 
Todes goes beyond Merleau-Ponty in showing how our world-disclosing 
perceptual experience is structured by the structure of our bodies. 
Merleau-Ponty never tells us what our bodies are actually like and how their 
structure affects our experience.  Todes points out that our body has a 
front/back and up/down orientation.  It moves forward more easily than 
backward, and can successfully cope only with what is in front of it.  He 
then describes how, in order to explore our surrounding world and orient 
ourselves in it, we have to balance ourselves within a vertical field that 
we do not produce, be effectively directed in a circumstantial field (facing 
one aspect of that field rather than another), and appropriately set to 
respond to the specific thing we are encountering within that field.  For 
Todes, then, perceptual receptivity is an embodied, normative, skilled 
accomplishment, in response to our need to orient ourselves in the world. 
Clearly, this kind of holistic background coping is not done for a 
reason.

[iii]  Merleau-Ponty, Phenomenology of Perception, 250. (Trans. Modified.)

[iv]  Ibid, 153. (My italics.)


Isaac
11/17/2008 2:42:29 AM
"Isaac" <groups@sonic.net> writes:

>Neil, I sent it to the email address you listed.

Thanks.  Copy received, and appreciated.

>                                      I replaced the "+" with "." assuming 
>that was an anti-SPAM measure.

That was a mistake, and probably gave you a bounce.  The "+"
was correct.  But fortunately you included both addresses, so
the one with the "+" got through.  (If you had removed the "+"
and what follows it up to the "@", that would have worked too.
The "+string" is to allow me to sort it into a different mailbox.
And yes, that different mailbox does get extra anti-spam treatment.)

It's a bit long, so it might be a day or two before I have a comment.
But it is rather interesting, and more interesting than some of
Dreyfus's previous articles on AI.  I will probably be disagreeing
with Dreyfus and possibly with you too.  But then I have my own
unique way of looking at questions of cognition.

0
Neil
11/17/2008 3:12:30 AM
On Nov 16, 6:12 pm, "Isaac" <gro...@sonic.net> wrote:
> Reminder: I will post the paragraph(s) I have a comment about, and highlight
> the
> particular words at issue by enclosing them between "***" characters.  I'll
> also include citations in the paper when helpful. I seek (intelligent and
> informed) technical/theoretical critique or feedback from anyone on this
> particular issue.  Ask/email me for a copy of the paper if you are
> interested in
> the context and details.
>
> 2nd critique, on his page 12, line 4:
> "Heidegger's important insight is not that, when we solve problems, we
> sometimes make use of representational equipment outside our bodies, but
> that being-in-the-world is more basic than thinking and solving
> problems;that it is not representational at all.  That is, when we are
> coping at our best, ***we are drawn in by solicitations and respond directly
> to them, so that the distinction between us and our equipment--between inner
> and outer-vanishes***#1  As Heidegger sums it up:
> I live in the understanding of writing, illuminating, going-in-and-out, and
> the like.  More precisely: as Dasein I am -- in speaking, going, and
> understanding -- an act of understanding dealing-with.  My being in the
> world is nothing other than this already-operating-with-understanding in
> this mode of being.[ii]
>

A phantom limb is the sensation that an amputated or missing limb
(even an organ, like the appendix) is still attached to the body and
is moving appropriately with other body parts.

http://en.wikipedia.org/wiki/Phantom_limb

[A384] ...Now I maintain that all the difficulties commonly found in
these questions, and by means of which, as dogmatic objections, men
seek to gain credit for a deeper insight into the nature of things
than any to which the ordinary understanding can properly lay claim,
rest on a mere delusion by which they hypostatise what exists merely
in thought, and take it as a real object existing, in the same
character, outside the thinking subject.

In other words, they regard extension, which is nothing but
appearance, as a property of outer things that subsists [A385] even
apart from our sensibility, and hold that motion is due to these
things and really occurs in and by itself, apart from our senses.

http://www.arts.cuhk.edu.hk/Philosophy/Kant/cpr/

-----------------------------------

[Matter as Telepresence]

What is telepresence? I agree with Kac's (1997) distinction between
virtual reality (VR) and telepresence: VR presents purely synthetic
sense-data lacking physical reality. Telepresence presents sense-data
that (1) claims to correspond to a remote physical reality and (2)
allows the remote user to perform a physical action and see the
results. The WWW has the potential to bring telepresence out of the
laboratory.

http://www.walkerart.org/gallery9/beyondinterface/goldberg_artist.html

If I am learning to use a hammer for the first time, I must extend the
perception of the end of my arm out about a foot further. I must learn
to hit an object, a nail, a foot further out than I would normally hit
things. This is rather like telepresence in our adjustments to our
outer sense...
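
A toy sketch may make the idea concrete (everything here - the numbers,
the learning rate, the loop itself - is invented for illustration, not
taken from the paper):

  # learning to fold a hammer's length into the reach of the arm,
  # using only the miss distance the senses report back
  hammer_length = 1.0      # true extra reach in feet (unknown to the learner)
  estimated_offset = 0.0   # the learner's current sense of that extra reach
  learning_rate = 0.5
  for swing in range(10):
      error = hammer_length - estimated_offset   # how far the blow missed
      estimated_offset += learning_rate * error  # adjust the body schema
      print(swing, round(estimated_offset, 3))

After a few swings the estimated offset settles on the hammer's length;
in that limited sense the tool has been absorbed into the outer sense.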

--------------------------------

The implementation of a Tele-Presence Microscopy Facility allows a
user from a remote location to either observe and/or control state-of-
the-art instrumentation in a real time interactive mode. ...The vision
suggested in the call for proposals is to combine all of these over
the National Information Infrastructure in such a way that makes
distributed collaboration as useful to scientific experiments as being
in the same location.

http://www.amc.anl.gov/docs/anl/tpm/tpmexecsumm.html

> Heidegger and Merleau-Ponty's understanding of embedded embodied coping,
> then, is not that the mind is sometimes extended into the world but rather
> that all such problem solving is derivative, that in our most basic way of
> being, that is, as absorbed skillful copers, we are not minds at all but one
> with the world.   Heidegger sticks to the phenomenon, when he makes the
> strange-sounding claim that, in its most basic way of being, "Dasein is its
> world existingly."[iii]
>

Is this like phenomenology, you know, the attempt to take the
inescapable self and somehow use it as proof that the external world
exists for certain, by eliminating material objects from language and
replacing them with hypothetical propositions about observers and
experiences, thereby committing us to the existence of a new class of
ontological object altogether (the sensibilia or sense-data which can
exist independently of experience), and thus refuting the sceptic's
strong arguments?

> When you stop thinking that mind is what characterizes us most basically
> but, rather, that most basically we are absorbed copers, the inner/outer
> distinction becomes problematic. There's no easily askable question as to
> whether the absorbed coping is in me or in the world. According to
> Heidegger, intentional content isn't in the mind, nor in some 3rd realm (as
> it is for Husserl), nor in the world; it isn't anywhere.  It's an embodied
> way of being-towards.  Thus for a Heideggerian, all forms of cognitivist
> externalism presuppose a more basic existential externalism where even to
> speak of "externalism" is misleading since such talk presupposes a contrast
> with the internal.  Compared to this genuinely Heideggerian view,
> ***extended-mind externalism is contrived, trivial, and irrelevant***#2.
>
> --------------------------------------------------------------------------------
>
> [i] As Heidegger puts it: "The self must forget itself if, lost in the world
> of equipment, it is to be able 'actually' to go to work and manipulate
> something." Being and Time, 405.
>
> [ii] Logic, 146. It's important to realize that when he uses the term
> "understanding," Heidegger explains (with a little help from the translator)
> that he means a kind of know-how:
>
> In German we say that someone can vorstehen something-literally, stand in
> front of or ahead of it, that is, stand at its head, administer, manage,
> preside over it.  This is equivalent to saying that he versteht sich darauf,
> understands in the sense of being skilled or expert at it, has the know-how
> of it.  (Martin Heidegger, The Basic Problems of Phenomenology, A.
> Hofstadter, Trans. Bloomington: Indian University Press, 1982, 276.)
>
> [iii] Being and Time, 416.  To make sense of this slogan, it's important to
> be clear that Heidegger distinguishes the human world from the physical
> universe.
>
> --------------------------------------------------------------------------------
>
> My critique #1:
>
> seems that the  "distinction between us and our equipment... is vanished" is
> just describing the unconscious automation process that takes over body
> functions and relieves the conscious mind to be unaware that its equipment
> was drawn into responding to solicitations.  This in many ways seems to just
> be alluding to the domain of our unconscious being that responds like
> dominos that fall automatically in response to many contextual
> solicitations.  I do not see how this all makes a solid argument that
> conscious thought is unified and inseparable from "our equipment" (i.e.,
> body).  At best this is a very weak, if not completely flawed, logic in
> inferring that our sense (act) of being in the world "is not
> representational at all".  The text that appears to clarify this assertion
> just seems to be a string of conclusory declarations without a solid logical
> foundation.  Even a plausible syllogism would be helpful here."
>
> My critique #2:
>
> is not the Heideggerian view requiring this unity between the mind and the
> world result in a "contrived, trivial, and irrelevant" world representation
> scheme in people when the events in the world are so far beyond a person's
> ability to cope (relative to there internal representation/value system)
> that they just end up contriving a trivial and irrelevant internal world
> that is just projected onto a "best fit/nearest neighbor" of a
> representation that they can cope with.  In this way, there is no absorbed
> coping because it requires a perfect and accurate absorption scheme between
> our mind (inner) and the world (outer) that does not exist and cannot be
> magically created, even biologically.  If you ignore this aspect of the
> Heideggerian view then what you end up with is nothing much more than an
> "ignorance is bliss" cognitive model that is not too different from what you
> say is wrong with Brook's approach.  That is, your portrayal of the
> Heideggerian view of absorbed coping would exactly model the thinking and
> representation behavior of insects, which certainly is not the conscious,
> cognitive model of humans.  Thus, this Heideggerian view of absorbed coping
> is either insufficient to describe the human condition or it renders
> indistinguishable insects from humans; either way it does not seem to
> uniquely capture the behavior at the level of human consciousness and is,
> thus, flawed at best.    That is, if this Heideggerian view of absorbed
> coping equally applies to any animals or insects then it is not really
> helpful to modeling or shedding light on  higher human intellectual
> behavior, which, of course, is the sole subject/goal of AI.  Moreover, this
> "perfect absorption" is a complete illusion and in practice will only exist
> in the most predictable and simple situations. From another angle, how is
> this Heideggerian view of absorbed coping much different from the standard
> psychological model of projection where our internal model/representation is
> simply projected onto the world (or a subset frame of it) and we just trick
> ourselves into believing that we are completely and accurately absorbed with
> the true essence of the frame problem.  this Heideggerian view of absorbed
> coping seems to much more fit the unconscious aspects of the human
> condition, which is more insect/animal like.  This all seems to be logically
> flawed and/or a very weak foundation for grandiose conclusions about what
> philosophical approach/model is needed to solve the frame problem and human
> consciousness.  Maybe I am missing something critical here that can make
> sense of it.  Please clarify the logic.
>
> Any thoughts on this issue?
>
> Ariel B.
>
> "Isaac" <gro...@sonic.net> wrote in message
>
> news:491d60f6$0$33588$742ec2ed@news.sonic.net...
>
> > All,
>
> > I have critiqued in great detail a recent write paper by Prof. Hubert
> > Dreyfus entitled "Why Heideggerian AI Failed and how Fixing it would
> > Require
> > making it more Heideggerian" .  I can email a copy of it to whom ever is
> > interested. For his bio, see:
> >http://socrates.berkeley.edu/~hdreyfus/
>
> > I want to stimulate discussion on this topic by posting my critiques
> > little
> > by little and getting comments from the AI community on the news groups.
> > However, before I start I want to get a feel for how many know of his work
> > and/or would be interested in an intellectual debate for and against his
> > many anti-AI positions.
>
> > I hope many will respond to this posting with interest so I can begin
> > posting each part of this paper I find issues with and my reasoned
> > critique
> > for others to comment on.
>
> > Thanks,
> > Ariel-

0
Immortalist
11/17/2008 4:16:31 AM
"Isaac" <groups@sonic.net> wrote:
> Reminder: I will post the paragraph(s) I have a comment about, and
>   highlight
> the
> particular words at issue by enclosing them between "***" characters.
> I'll also include citations in the paper when helpful. I seek
> (intelligent and informed) technical/theoretical critique or feedback
> from anyone on this particular issue.  Ask me for a copy of the paper if
> you are interested in the context and details.
>
> 2nd critique, on his page 12, line 4:
> "Heidegger's important insight is not that, when we solve problems, we
> sometimes make use of representational equipment outside our bodies, but
> that being-in-the-world is more basic than thinking and solving
> problems;that it is not representational at all.  That is, when we are
> coping at our best, ***we are drawn in by solicitations and respond
> directly to them, so that the distinction between us and our
> equipment--between inner and outer-vanishes***#1  As Heidegger sums it
> up: I live in the understanding of writing, illuminating,
> going-in-and-out, and the like.  More precisely: as Dasein I am -- in
> speaking, going, and understanding -- an act of understanding
> dealing-with.  My being in the world is nothing other than this
> already-operating-with-understanding in this mode of being.[ii]
>
> Heidegger and Merleau-Ponty's understanding of embedded embodied coping,
> then, is not that the mind is sometimes extended into the world but
> rather that all such problem solving is derivative, that in our most
> basic way of being, that is, as absorbed skillful copers, we are not
> minds at all but one with the world.   Heidegger sticks to the
> phenomenon, when he makes the strange-sounding claim that, in its most
> basic way of being, "Dasein is its world existingly."[iii]
>
> When you stop thinking that mind is what characterizes us most basically
> but, rather, that most basically we are absorbed copers, the inner/outer
> distinction becomes problematic. There's no easily askable question as to
> whether the absorbed coping is in me or in the world. According to
> Heidegger, intentional content isn't in the mind, nor in some 3rd realm
> (as it is for Husserl), nor in the world; it isn't anywhere.  It's an
> embodied way of being-towards.  Thus for a Heideggerian, all forms of
> cognitivist externalism presuppose a more basic existential externalism
> where even to speak of "externalism" is misleading since such talk
> presupposes a contrast with the internal.  Compared to this genuinely
> Heideggerian view, ***extended-mind externalism is contrived, trivial,
> and irrelevant***#2.
>
> -------------------------------------------------------------------------
> -------
>
> [i] As Heidegger puts it: "The self must forget itself if, lost in the
> world of equipment, it is to be able 'actually' to go to work and
> manipulate something." Being and Time, 405.
>
> [ii] Logic, 146. It's important to realize that when he uses the term
> "understanding," Heidegger explains (with a little help from the
> translator) that he means a kind of know-how:
>
> In German we say that someone can vorstehen something-literally, stand in
> front of or ahead of it, that is, stand at its head, administer, manage,
> preside over it.  This is equivalent to saying that he versteht sich
> darauf, understands in the sense of being skilled or expert at it, has
> the know-how of it.  (Martin Heidegger, The Basic Problems of
> Phenomenology, A. Hofstadter, Trans. Bloomington: Indian University
> Press, 1982, 276.)
>
> [iii] Being and Time, 416.  To make sense of this slogan, it's important
> to be clear that Heidegger distinguishes the human world from the
> physical universe.
>
> -------------------------------------------------------------------------
> -------
>
> My critique #1:
>
> seems that the  "distinction between us and our equipment... is vanished"
> is just describing the unconscious automation process that takes over
> body functions and relieves the conscious mind to be unaware that its
> equipment was drawn into responding to solicitations.  This in many ways
> seems to just be alluding to the domain of our unconscious being that
> responds like dominos that fall automatically in response to many
> contextual solicitations.  I do not see how this all makes a solid
> argument that conscious thought is unified and inseparable from "our
> equipment" (i.e., body).  At best this is a very weak, if not completely
> flawed, logic in inferring that our sense (act) of being in the world "is
> not representational at all".  The text that appears to clarify this
> assertion just seems to be a string of conclusory declarations without a
> solid logical foundation.  Even a plausible syllogism would be helpful
> here."
>
> My critique #2:
>
> is not the Heideggerian view requiring this unity between the mind and
> the world result in a "contrived, trivial, and irrelevant" world
> representation scheme in people when the events in the world are so far
> beyond a person's ability to cope (relative to there internal
> representation/value system) that they just end up contriving a trivial
> and irrelevant internal world that is just projected onto a "best
> fit/nearest neighbor" of a representation that they can cope with.  In
> this way, there is no absorbed coping because it requires a perfect and
> accurate absorption scheme between our mind (inner) and the world (outer)
> that does not exist and cannot be magically created, even biologically.
> If you ignore this aspect of the Heideggerian view then what you end up
> with is nothing much more than an "ignorance is bliss" cognitive model
> that is not too different from what you say is wrong with Brook's
> approach.  That is, your portrayal of the Heideggerian view of absorbed
> coping would exactly model the thinking and representation behavior of
> insects, which certainly is not the conscious, cognitive model of humans.
> Thus, this Heideggerian view of absorbed coping is either insufficient to
> describe the human condition or it renders indistinguishable insects from
> humans; either way it does not seem to uniquely capture the behavior at
> the level of human consciousness and is, thus, flawed at best.    That
> is, if this Heideggerian view of absorbed coping equally applies to any
> animals or insects then it is not really helpful to modeling or shedding
> light on  higher human intellectual behavior, which, of course, is the
> sole subject/goal of AI.  Moreover, this "perfect absorption" is a
> complete illusion and in practice will only exist in the most predictable
> and simple situations. From another angle, how is this Heideggerian view
> of absorbed coping much different from the standard psychological model
> of projection where our internal model/representation is simply projected
> onto the world (or a subset frame of it) and we just trick ourselves into
> believing that we are completely and accurately absorbed with the true
> essence of the frame problem.  this Heideggerian view of absorbed coping
> seems to much more fit the unconscious aspects of the human condition,
> which is more insect/animal like.  This all seems to be logically flawed
> and/or a very weak foundation for grandiose conclusions about what
> philosophical approach/model is needed to solve the frame problem and
> human consciousness.  Maybe I am missing something critical here that can
> make sense of it.  Please clarify the logic.
>
> Any thoughts on this issue?
>
> Ariel B.

I'm an engineer, not a philosopher.  As such, nearly everything you write
strikes me as silly and odd and misguided.  I hardly know where to begin to
comment.

I find this sort of philosophical debate to be a pointless and endless game
of trying to define and redefine words to make them fit together in a more
pleasing way.  You can't solve AI by playing with words.  You have to do it
using empirical evidence.  It's not a problem which can be solved by pure
philosophy.

For example, you speak of this "unity between the mind and the world".
What exactly is the "mind" and the "world"?  You can't resolve this sort of
question just by talking about such things.  Words are defined by their
connection to empirical evidence, and without empirical evidence the words
are basically meaningless - or, at a minimum, available for use in endless
pointless debates and redefinition based on usage alone.

The problem we run into here is that without a concrete definition of how
the brain works and what the mind is, we can't make any real progress on
the types of issues you are touching on here.  How can we make any progress
debating the nature of the connection between the "mind" and the "world"
when we can't agree on what the mind is?  And if we can't agree on what the
mind is, we can't really agree on anything it creates - like its view of the
world - which is the foundation of what the word "world" is referring to.

You can't resolve any of these questions until you can first resolve
fundamental questions such as the mind body problem and consciousness in
general.

I have my answers to these questions, but my answers are not shared or
agreed on by any sort of majority of society, so my foundational beliefs
can't be used as any sort of proof of what is right.  It all comes back to
the requirement that we produce empirical data to back up our beliefs.  And
for this subject, that means we have to solve AI, and solve all the
mysteries of the human brain.  Once we have that hard empirical science
work finished, then we will have the knowledge needed to resolve the sort
of philosophical debates you bring up here.  Until then, endless debate
about what "merging mind and world" might mean is nearly pointless in my
view.

Having said all that, I'll give you my view of all this, and the answers to
your questions as best as I can figure out.

I'm a strict materialist or physicalist.  I believe the brain is doing
nothing more than performing a fairly straightforward signal processing
function which maps sensory input data flows into effector output data
flows.  There is nothing else happening, and nothing else that needs to be
explained in terms of what the "mind" is or what "consciousness" is.  The
mind and consciousness are not something separate from the brain; they
simply are the brain and what the brain is doing.

It's often suggested that humans have a property of "consciousness" which
doesn't exist in computers or maybe insects (based on the use of "insect"
above).  I see that idea as totally unsupported by the facts.  It's nothing
more than a popular myth - and a perfect example of the nonsense that is
constantly batted around in these mostly pointless philosophical debates.

There is no major function which exists in the human brain which doesn't
already exist in our computers and our robots, which are already acting as
autonomous agents interacting with their environment.  The only difference
between humans and robots is that humans currently have a more advanced
signal processing system - not one which is substantially different in any
significant way, just one which is better mostly by measures of degree,
and not measures of kind.

Many people, however, tend to believe the human "mind" and human
"consciousness" are something different from what our robots are doing in a
major and important way - a difference of kind.  They believe we are
something no one yet understands, and something that doesn't exist in our
machines at all.  I reject that notion completely.

This belief we find in so many humans - that they are uniquely different
from the machines - is the result of an invalid self-image that brains
naturally tend to form about themselves.  Humans tend to think they are
something they are not in this regard.  They believe their "mind" is
somehow different and separate from the brain, when there is no separation
at all.  The endless mind-body debates, and all the other debates which spin
off from them, are the result of failing to see that the apparent separation
is only an illusion.

This illusion rests on the simple fact that our internal awareness doesn't
seem to be identical with neural activity.  "Seeing blue" doesn't seem to us
to be "just neural activity"; seeing blue seems to be something of a
completely different nature than "neurons firing".  However, all the
evidence adds up to the fact that these are an identity - that they are in
fact one and the same thing.  Not something "created by" the activity of
the brain, but simply the brain activity itself.

Now, I wrote all that just to try and make it clear to you where I'm coming
from and what I believe in.

Because of what I believe, the mind-body problem, AI, and consciousness
translate to a very straightforward problem of science and engineering, not
a philosophy problem in any respect.  The brain is just a
reinforcement-trained parallel signal processing network which produces our
behavior (both external behaviors and internal thoughts) as a reaction to
the current environment.
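
To make that picture concrete, here is a toy sketch of what a
"reinforcement trained" mapping from sensory condition to effector
response could look like in code.  The conditions, responses, rewards
and the update rule are all invented for illustration; this is my
gloss, not a claim about how the brain actually does it.

  import random

  conditions = ["hot stove", "ringing phone"]
  responses = ["pull hand back", "answer", "ignore"]

  # weight[c][r]: learned tendency to emit response r in condition c
  weight = {c: {r: 1.0 for r in responses} for c in conditions}

  def act(condition):
      # emit a response with probability proportional to its weight
      rs = list(weight[condition])
      return random.choices(rs, [weight[condition][r] for r in rs])[0]

  def reinforce(condition, response, reward):
      # strengthen (or weaken) whatever tendency produced the outcome,
      # keeping weights positive so the choice above stays valid
      weight[condition][response] = max(0.1, weight[condition][response] + reward)

  # one hypothetical trial
  r = act("hot stove")
  reinforce("hot stove", r, 1.0 if r == "pull hand back" else -0.5)

Run enough trials with a consistent reward signal and the mapping from
condition to response drifts toward whatever the environment rewards.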

From this perspective, let me jump in and debate the words you had issue
with:

  we are drawn in by solicitations and respond
  directly to them, so that the distinction between us and our
  equipment--between inner and outer-vanishes

I think at the lowest levels of what is happening in the brain, it is
obvious that the brain is simply reacting to what is happening in the
environment - that is what the brain is doing by definition, in my view.  We
simply "respond directly to them".  That is all we ever do.

But this is where it gets very messy.  What is meant by "we" in the above?

For me, it is obvious the only "we" that exists is a human body and the
human body is simply reacting to its environment and that's pretty much the
end of the story.  It's no more complex or mystical in any sense than a
robot reacting to its environment or a rock reacting to its environment.
The only difference is that the more complex machines like the human and
robot react in more complex ways to their environment than the rock does.

But what those words get tangled up in is the issue of what is happening
in the brain as it reacts to the environment.  The signal processing inside
the brain is in effect representing various aspects of the current state of
the environment, and those words (I believe) are an attempt to talk about
those internal representations.

So, to make it clear, let's say there's a dog in the world, and there's a
human looking at the dog.  The dog exists in the universe and the human
exists in the universe.  But in the human, there is brain activity which
correlates to the dog being there.  That is, if we were to experiment with
this human, we would find there is some unique type of neural activity
which happens every time the person sees, or thinks about, a dog.  The
particular brain activity is slightly different depending on whether the
person is seeing the dog, or hearing a dog bark, or just thinking about a
dog.  And it's different depending on what type of dog the person is
seeing.  But if we had a high quality real time brain scanner, we
could "read the mind" of the person simply by watching what the brain is
doing.  We would know whenever the person was thinking about a dog by
seeing the brain activity which represents dog.
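
A toy sketch of that "mind reading" step, with made-up activity vectors
and labels (this is not a real decoding method, just an illustration of
matching observed activity against stored templates):

  def nearest_template(activity, templates):
      # return the label whose stored template best matches the activity
      def similarity(a, b):
          return sum(x * y for x, y in zip(a, b))
      return max(templates, key=lambda label: similarity(activity, templates[label]))

  templates = {
      "seeing a dog":         [1, 0, 1, 1, 0, 0],
      "hearing a dog bark":   [1, 0, 0, 1, 1, 0],
      "thinking about a dog": [1, 1, 0, 0, 0, 1],
  }

  observed = [1, 0, 1, 1, 0, 1]                  # hypothetical scanner reading
  print(nearest_template(observed, templates))   # -> "seeing a dog"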

So, in this sense, the "dog" exists in two places.  It exists as the real
dog in the environment, and it exists as a pattern of brain activity in
the person who is thinking about the dog.  So there is this natural dualism
between the dog and the brain activity.

So, when we use the word "dog" to talk about dogs, which of these two dogs
are we actually talking about?  Are we talking about the brain activity, or
are we talking about the physical dog?  In most cases, we don't care.  The
real dog and our brain activity are one and the same thing to us most of
the time.  In day to day life, people don't think about this question, and
for the most part, people don't even understand the question.

If the part of the brain which represents "dogs" is damaged, we can become
unable to see a dog.  We can look right at it and have no clue what it is
we are looking at.  At the same time, if you stimulate the correct parts of
a brain, it's most likely that the person would report they were "seeing
a dog" when there was no dog there.  So what is it we are actually
"seeing"?  Is it the dog we are responding to when we say we see a dog, or
is it the neural activity in one part of the brain which another part of
the brain is responding to by producing the words "I see a dog"?

It can be argued that what we actually respond to is not the physical dog
out in the world, but that we are responding to the brain activity.
However, in a normally functioning human brain, the brain activity only
shows up in the brain when there is a dog in our field of vision - so the
distinction isn't important.

So when we use the word "we", or "self", what actually are we talking
about?  Just like with the dog, are we talking about the physical thing
"out there"?  Or are we talking about the brain activity which represents
the physical thing out there?  The confusion here, of course, is that unlike
with the dog example, the physical thing out there and the brain activity
overlap.  The brain activity is part of the human body which is thinking
about itself.

Just as it's not clear whether the word "dog" refers to the brain activity
which represents the dog or to the physical dog, it's not clear what we are
talking about when we talk about "we".

So lets go back to the words:

  we are drawn in by solicitations and respond
  directly to them, so that the distinction between us and our
  equipment--between inner and outer-vanishes

Which "we" might these words be making reference to?  I really don't know
because I don't know for sure what this guy was trying to communicate with
these words.

But then we get to the "us" and "our equipment".  Again, was the "us" and
"our equipment" a reference to the physical body of the person and the
physical equipment, or a reference to the brain activity inside the
person's brain which represented all this stuff?

And which "distinction" is he making reference to?  The distinction between
the human body and the physical stuff of the equipment?  Or the distinction
between the brain activity which represents the equipment and the brain
activity which represents the human?

Now, when we are exposed to an environment, say one which has a dog in it,
we might not "see" the dog.  That is, the brain activity which represents
"dog" might not be created.  This happens when the brain has "focused
on" some other aspect of the environment.  There might be something very
important in the environment we need to respond to - like a lion which is
about to attack us.  The brain is built as a complex machine for producing
the correct response to any environmental condition.  In any situation, the
brain has the ability to produce many different responses to the
environment.  Should it make us turn our head?  Or raise our hand?  Or
stand up from the chair?  Or duck?  Or put our hand in front of our face?
There are a million different ways the brain can respond to any situation,
and through a lifetime of experience, it creates priorities which allow it
to select one response over another.  It can produce multiple responses at
once when they don't conflict, or it might block all responses except one
major response.  This system of mapping sensory data about the current
condition of the environment to the response which is the best fit for the
current conditions is what the brain is built to do.  As such, one
response has the power to block other, lower-priority responses - and this
effect happens at many different levels of the processing which happens in
the brain.
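
A toy sketch of that kind of priority-based selection follows; the
stimuli, candidate responses and weights are all invented, and real
brains certainly don't do anything this crude:

  def select_response(stimulus, weights):
      # score every candidate response against the current stimulus ...
      scores = {response: sum(weights[response].get(feature, 0.0)
                              for feature in stimulus)
                for response in weights}
      # ... and let the single best-scoring response suppress the rest
      return max(scores, key=scores.get)

  weights = {
      "duck":        {"lion": 5.0, "loud noise": 2.0},
      "look at dog": {"dog": 1.0},
      "keep typing": {"quiet room": 0.5},
  }

  stimulus = {"dog", "lion"}                 # a dog and a lion are both present
  print(select_response(stimulus, weights))  # -> "duck"; the dog goes unnoticed

A reinforcement signal of the sort described earlier would be what
shapes those weights in the first place.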

The activity of the brain which represents "dog" is just a response the
brain is producing to the current stimulus signals.  And like all responses
the brain produces, it's possible that some other response will have been
ranked as more important than the "dog" response and will inhibit the dog
response - even though there is a "dog" in the environment.

As such, when the "dog" response is inhibited because some other response
like "big scary lion which is about to eat us" is produced, we end up not
"seeing" the dog.  When asked later if there was a dog next to the lion, we
will have no memory of the dog at all simply because we never "saw" the dog
- i.e., the dog neural activity was blocked by other activity of higher
priority.  We would say, "we didn't notice the dog".

This is true about most of how we respond to the environment.  If we see a
complex scene, the brain picks the reaction which it believes is the "best"
for that situation - we will see one object first, and ignore everything
else.  We will "focus on" the object we "care about" the most.

In the case of vision, this has a lot to do with eye movement as well.  To
see something in high enough resolution to recognize it, we have to
actually move our eye to point to it so the higher resolution center of the
eye falls on the object.  So we react to the environment by moving our eye
and changing the focus of the eye to bring a given part of our environment
into view.  If we don't look at something, we can't recognize it.  So how
we react to our environment by moving our eyes has a lot to do with what we
see and remember about our environment later.  But in addition to eye
movement, there is still the effect of how the brain reacts internally
which creates a large bias on what we understand about our environment.
And as such, there can be things happening in our environment which never
get represented by internal brain activity, and when that happens, we are,
in effect, not aware of that element of our environment.

Now, all that was about this issue of the distinction between ourselves and
our equipment.  If we don't "think about" the equipment - meaning the brain
doesn't create the brain activity which represents the equipment - then we
don't "see the equipment".  And if the brain doesn't create the brain
activity which represents our self, we aren't aware of ourselves.

As an example, if we are watching TV, brain activity will be created that
represents what we are seeing - like a car chase in an action show.  While
that brain activity is happening, it is unlikely that we will be having
brain activity about the TV itself.  We will simply be "thinking about" a
car chase.  And likewise, when we are watching the TV show, we are
probably not having brain activity which represents our own body.  And if
we are not thinking about the physical TV or our physical self watching the
TV, and we are only thinking about the car chase, then in effect, we have
"lost the distinction" between ourselves and the TV.  We have gotten "lost"
in the show, as we might say.  It's as if we were standing on the side of
the road watching the action instead of sitting in our living room watching
a TV.

This happens because when we are in a room, there will be all sorts of
brain activity representing that room and our location in the room.  There
is always brain activity which represents the state of the environment
around us.  But when we watch a TV show, or a movie, and we focus on what
is happening in the show long enough, the brain state which represents the
fact that we are in a room watching a TV will start to fade and be slowly
replaced by the events of the show.  The less distraction there is from
the room (dim lights, with no other noise or motion in the room) the
greater this effect is.

This is one effect that might be what the words "distinction between us and
our equipment--between inner and outer-vanishes", could have been making
reference to.

But then there's the other way to look at this.  Just like the "dog" exists
for us as brain activity, our entire world exists for us as brain activity.
As such, there never really was much distinction between the brain activity
which represents our inner self and the brain activity which represents
outer stuff.  It's all still just "more brain activity".  The ability to
classify some brain activity as "outer stuff" and some brain activity as
"inner stuff" is mostly a learned response.  It's the way our brain is
trained to respond to the environment - by creating those inner and outer
classifications.  As such, we can also train ourselves to think in terms of
being "one with the world", or to "extend our consciousness outside
ourselves" - or whatever silly way someone might talk about this.

In the end, as I look around my office, I know there is a clear distinction
between my hands typing on the computer and the computer itself.  My brain
has direct sensory paths from my fingers to the brain which allow me to
sense what the fingers are touching, but I have no such sensory paths to
the computer, so I don't know what it is touching.  This creates the major
and obvious classification of "me" vs. "it".

But at the same time, I know that all that I'm seeing around me is in fact
just brain activity.  I know there is a computer in front of me simply
because there is brain activity happening in my head which represents this
fact.  Stop the brain activity, and my "world" goes away.  I will no longer
be "aware" of what is around me.  I will no longer be "seeing" the computer
or the shelves or books or pens or all the other things in my office.  So
though I am a human brain in a human body, my awareness is limited to what
the activity of my brain is representing.  The world I know about is all in
my head.  And as such, the line between me and the world around me is
blurred, depending on what we mean by "me" and "world".

If we talk about the physical items here, there is a clear and simple
distinction between me and the world around me.  If we talk about the world
as being the brain activity (instead of being the outer world that activity
represents), then the brain activity and myself are one and the same thing -
so we can talk about "my consciousness spreading out into the environment"
without being wrong.  The entire world as we understand it is just a figment
of our imagination.  It just so happens, however, that most of the time our
"imagination" (aka brain activity) is a good representation of the physical
world we are part of.  But when we dream, or watch TV, or read a book, or
have brain damage, or the brain just isn't working correctly, what the brain
is currently representing and what is actually happening in the environment
might not be so well correlated.

I've not tried to comment on your critiques of the words yet, but this post
is too long already, so I'll stop here for now.

But I wanted to give all the above perspective to allow you to understand
how I approach such issues.  I see the brain as a simple signal processing
machine which responds to its environment, and all the debates of the type
you bring up here must be translated back to the actions of such a machine
in order to be understood.  If you can't translate these ideas and
questions and issues back to the operation of a machine like this, then you
don't have any hope of understanding what it is you are talking about, and
you don't have any hope of understanding what the words mean or why one set
of words is a better or worse description of what is happening.

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
11/17/2008 4:45:10 AM
"Neil W Rickert" <rickert+nn@cs.niu.edu> wrote in message 
news:yi5Uk.8809$ZP4.4922@nlpi067.nbdc.sbc.com...
> "Isaac" <groups@sonic.net> writes:
>
>>Neil, I sent it to you email listed.
>
> Thanks.  Copy received, and appreciated.
>
>>                                      I replaced the "+" with "." assuming
>>that was an anti-SPAM measure.
>
> That was a mistake, and probably gave you a bounce.  The "+"
> was correct.  But fortunately you included both addresses, so
> the one with the "+" got through.  (If you had removed the "+"
> and what follows it up to the "@", that would have worked to.
> The "+string" is to allow me to sort it into a different mailbox.
> And yes, that different mailbox does get extra anti-spam treatment.
>

I figured there was a chance it could be legit so I did both.  Glad it 
reached you.

> It's a bit long, so it might be a day or two before I have a comment.

If time is at a premium you may want to just focus on the sections that I 
post critiques on.  I plan on doing it in sequential order with the paper so 
that might be more efficient for you.

> But it is rather interesting, and more interesting than some of
> Dreyfus's previous articles on AI.

He is always trying to bridge classic philosophy with technology to make 
broad conclusions about AI.  I think this time he is trying to rely on 
cognitive neuroscience research (esp. in regard to chaos and neural 
networks) to buttress his long-standing positions.  However, I think he 
overreaches there because he does not understand anything about the 
technical issues, which imo greatly weakens, if not eviscerates, his 
conclusions.

>I will probably be disagreeing
> with Dreyfus
On an AI news group, this is what I expect, of course.

>and possibly with you too.
Yes, I am counting on this being the case.  You have two birds to shoot down, 
possibly with completely different ammunition.  I am always very 
controversial, so it will be very easy for you to disagree with both Dreyfus 
and my comments.

>But then I have my own
> unique way of looking at questions of cognition.

Vive la differance!

I look forward to your well-reasoned feedback on whichever of my posted 
critiques you find interesting to respond to.

Ariel B.-
> 


0
Isaac
11/17/2008 4:49:47 AM
On Nov 16, 6:23 pm, "Isaac" <gro...@sonic.net> wrote:
> Here is my 2nd installment of many critiques of this paper.  Not all issues
> will
> resonate will everyone so pick and choose what you find interesting to
> debate pro/con
> and I will defend any of my comments.
>
> I will post the paragraph(s) I have a comment about, and highlight the
> particular words at issue by enclosing them between "***" characters.  I'll
> also include citations in the paper when helpful. I seek (intelligent and
> informed) technical/theoretical critique or feedback from anyone on this
> particular issue.
>
> See page 11, line 20 of his paper where it says:
> "I agree that it is time for a positive account of Heideggerian AI and of an
> underlying Heideggerian neuroscience, but I think Wheeler is the one looking
> in the wrong place.  Merely by supposing that Heidegger is concerned with
> problem solving and action oriented representations, Wheeler's project
> reflects not a step beyond Agre but a regression to aspects of pre-Brooks
> GOFAI.  Heidegger, indeed, claims that that skillful coping is basic, but he
> is also clear that, all coping takes place on the background coping he calls
> being-in-the-world that doesn't involve any form of representation at all.
>
> see: Michael Wheeler, Reconstructing the Cognitive World, 222-223.
>
> Wheeler's cognitivist misreading of Heidegger leads him to overestimate the
> importance of Andy Clark's and David Chalmers' attempt to free us from the
> Cartesian idea that the mind is essentially inner by pointing out that in
> thinking we sometimes make use of external artifacts like pencil, paper, and
> computers.[i]  Unfortunately, this argument for the extended mind preserves
> the Cartesian assumption that our basic way of relating to the world is by
> using propositional representations such as beliefs and memories whether
> they are in the mind or in notebooks in the world.  In effect, while Brooks
> happily dispenses with representations where coping is concerned, all
> Chalmers, Clark, and Wheeler give us as a supposedly radical new
> Heideggerian approach to the human way of being in the world is to note that
> memories and beliefs are not necessarily inner entities and that,
> ***therefore, thinking bridges the distinction between inner and outer
> representations.*** "
>
> My Critique:
> "Assuming that by "thinking" you mean conscious thought,  I cannot see how
> thinking is a bridge that necessarily follows from memories/beliefs not
> being solely inner entities.  It seems to me that inner and outer
> representations can be bridged without thought.  Isn't this what occurs in
> an unconscious (reflex) reaction to a complex external even, which is an
> automatic bridge and generates a thoughtful, usually accurate response but
> often before we even have a chance to think about it.  Inner/outer
> representations seems semantically vague here.  Also, cannot conscious
> thought can endeavor itself with in purely inner or out representations
> without ever bridging them?  I guess, it is the "therefore" that gives me
> pause here."
>
> Any thoughts on this issue?
>

A learned skill, like playing particular musical styles on the piano,
requires that the fingers move faster than conscious planning can
manage.  The bridge in this case might be complex patterns stored in
the cerebellum, initiated by simple memory data in the basal ganglia,
which is triggered by some partial sense datum.

I think I agree, but there is as yet no known way to escape from
skepticism concerning certainty on the issue. If memory is just the
stimulation of the same neurons that the senses stimulate, and this
information is stored in a much compressed style which can initiate
cascades that lead to the patterns, solipsism continues not to be
ruled out. It is still possible that we are in a Matrix-style delusion
with wires hooked to our brains.

.......In a sparse distributed network - memory is a type of
perception.....The act of remembering and the act of perceiving both
detect a pattern in a very large choice of possible patterns....When
we remember we recreate the act of the original perception - that is
we relocate the pattern by a process similar to the one we used to
perceive the pattern originally.

http://www.kk.org/outofcontrol/ch2-d.html
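
That "relocating the pattern" idea can be sketched in a few lines; the
stored patterns and the partial cue below are invented, and this is
just a nearest-neighbour lookup, not a claim about how cortex does it:

  def recall(cue, stored_patterns):
      # count positions where the cue (ignoring unknowns) agrees with a pattern
      def overlap(pattern):
          return sum(1 for c, p in zip(cue, pattern) if c is not None and c == p)
      return max(stored_patterns, key=overlap)

  stored_patterns = [
      (1, 1, 0, 0, 1, 0),   # e.g. a remembered fingering pattern
      (0, 1, 1, 0, 0, 1),
      (1, 0, 0, 1, 1, 1),
  ]

  partial_cue = (1, None, None, 0, 1, None)    # a fragmentary sense datum
  print(recall(partial_cue, stored_patterns))  # -> (1, 1, 0, 0, 1, 0)

On this picture, remembering and perceiving really are the same
operation: find the stored pattern that best completes whatever
fragment the senses or the basal ganglia hand over.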

The question is really how rusty 20th-century philosophy, albeit with
a Kantian twist, can deal with neuroscience discoveries about the ebb
and flow of experience shaping parts of the brain.

> Ariel B.
>
> "Alpha" <omegazero2...@yahoo.com> wrote in message
>
> news:f8636636-6ee8-4c92-9905-a89145bc06b2@k24g2000pri.googlegroups.com...
>
> On Nov 14, 9:46 pm, "Isaac" <gro...@sonic.net> wrote:
>
> > All,
>
> >> I have critiqued in great detail a recent write paper by Prof. Hubert
> >> Dreyfus entitled "Why Heideggerian AI Failed and how Fixing it would
> >> Require
> >> making it more Heideggerian" . I can email a copy of it to whom ever is
> >> interested.
>
> >Please send a copy to omegazero2...@yahoo.com Ariel. Thanks!
>
> >Why don't you post a summary of that paper and your key critique
> >points here.
>
> >> For his bio, see:http://socrates.berkeley.edu/~hdreyfus/
>
> >> I want to stimulate discussion on this topic by posting my critiques
> >> little
> >> by little and getting comments from the AI community on the news groups.
> >> However, before I start I want to get a feel for how many know of his
> >> work
> >> and/or would be interested in an intellectual debate for and against his
> >> many anti-AI positions.
>
> >> I hope many will respond to this posting with interest so I can begin
> >> posting each part of this paper I find issues with and my reasoned
> >> critique
> >> for others to comment on.
>
> >> Thanks,
> >> Ariel-

0
Immortalist
11/17/2008 5:07:49 AM
"Immortalist" <reanimater_2000@yahoo.com> wrote in message 
news:fcc39fdd-9886-401f-88bd-510528b75fa5@h23g2000prf.googlegroups.com...
> On Nov 16, 6:12 pm, "Isaac" <gro...@sonic.net> wrote:
>> Reminder: I will post the paragraph(s) I have a comment about, and 
>> highlight
>> the
>> particular words at issue by enclosing them between "***" characters. 
>> I'll
>> also include citations in the paper when helpful. I seek (intelligent and
>> informed) technical/theoretical critique or feedback from anyone on this
>> particular issue.  Ask/email me for a copy of the paper if you are
>> interested in
>> the context and details.
>>
>> 2nd critique, on his page 12, line 4:
>> "Heidegger's important insight is not that, when we solve problems, we
>> sometimes make use of representational equipment outside our bodies, but
>> that being-in-the-world is more basic than thinking and solving
>> problems;that it is not representational at all.  That is, when we are
>> coping at our best, ***we are drawn in by solicitations and respond 
>> directly
>> to them, so that the distinction between us and our equipment--between 
>> inner
>> and outer-vanishes***#1  As Heidegger sums it up:
>> I live in the understanding of writing, illuminating, going-in-and-out, 
>> and
>> the like.  More precisely: as Dasein I am -- in speaking, going, and
>> understanding -- an act of understanding dealing-with.  My being in the
>> world is nothing other than this already-operating-with-understanding in
>> this mode of being.[ii]
>>
>
<snip>
I did not understand the significance of the Kant, VR, phantom-limb, etc. 
quotes.  Please clarify what you mean in concrete terms.

> If I am just learning to use a hammer the first time I must extend the
> perception of the end of my arm out about a foot further. I must learn
> to hit an object, nail, a foot further out than I would normally hit
> things. This is more like the telepresence in our adjustments to our
> outer sense...

OK, but how does this support or contradict Dreyfus' contention, quoting 
Heidegger in the section above, that we have no internal representations?

>
>> Heidegger and Merleau-Ponty's understanding of embedded embodied coping,
>> then, is not that the mind is sometimes extended into the world but 
>> rather
>> that all such problem solving is derivative, that in our most basic way 
>> of
>> being, that is, as absorbed skillful copers, we are not minds at all but 
>> one
>> with the world.   Heidegger sticks to the phenomenon, when he makes the
>> strange-sounding claim that, in its most basic way of being, "Dasein is 
>> its
>> world existingly."[iii]
>>
>
> Is this like, phenomenology, you know the attempt to extend the
> inescapable self and somehow use it as proof that the external world
> exists for certain by the process of eliminating material objects from
> language and replacing them with hypothetical propositions about
> observers and experiences, committing us to the existence of a new
> class of ontological object altogether: the sensibilia or sense-data
> which can exist independently of experience, and thus refute the
> sceptics strong arguments?

I think phenomenology as an objectified (i.e., represented) concept is what 
Heidegger claims, but I understand Dreyfus as saying here that he agrees with 
Merleau-Ponty's philosophy that the phenomenon and our perceiving of it 
become indistinguishable from each other; i.e., we have no internal 
representations of the phenomenon.

Do you have any comments on my critique of these paragraphs below?


<snip>


>> --------------------------------------------------------------------------------
>>
>> My critique #1:
>>
>> seems that the  "distinction between us and our equipment... is vanished" 
>> is
>> just describing the unconscious automation process that takes over body
>> functions and relieves the conscious mind to be unaware that its 
>> equipment
>> was drawn into responding to solicitations.  This in many ways seems to 
>> just
>> be alluding to the domain of our unconscious being that responds like
>> dominos that fall automatically in response to many contextual
>> solicitations.  I do not see how this all makes a solid argument that
>> conscious thought is unified and inseparable from "our equipment" (i.e.,
>> body).  At best this is a very weak, if not completely flawed, logic in
>> inferring that our sense (act) of being in the world "is not
>> representational at all".  The text that appears to clarify this 
>> assertion
>> just seems to be a string of conclusory declarations without a solid 
>> logical
>> foundation.  Even a plausible syllogism would be helpful here."
>>
>> My critique #2:
>>
>> is not the Heideggerian view requiring this unity between the mind and 
>> the
>> world result in a "contrived, trivial, and irrelevant" world 
>> representation
>> scheme in people when the events in the world are so far beyond a 
>> person's
>> ability to cope (relative to there internal representation/value system)
>> that they just end up contriving a trivial and irrelevant internal world
>> that is just projected onto a "best fit/nearest neighbor" of a
>> representation that they can cope with.  In this way, there is no 
>> absorbed
>> coping because it requires a perfect and accurate absorption scheme 
>> between
>> our mind (inner) and the world (outer) that does not exist and cannot be
>> magically created, even biologically.  If you ignore this aspect of the
>> Heideggerian view then what you end up with is nothing much more than an
>> "ignorance is bliss" cognitive model that is not too different from what 
>> you
>> say is wrong with Brook's approach.  That is, your portrayal of the
>> Heideggerian view of absorbed coping would exactly model the thinking and
>> representation behavior of insects, which certainly is not the conscious,
>> cognitive model of humans.  Thus, this Heideggerian view of absorbed 
>> coping
>> is either insufficient to describe the human condition or it renders
>> indistinguishable insects from humans; either way it does not seem to
>> uniquely capture the behavior at the level of human consciousness and is,
>> thus, flawed at best.    That is, if this Heideggerian view of absorbed
>> coping equally applies to any animals or insects then it is not really
>> helpful to modeling or shedding light on  higher human intellectual
>> behavior, which, of course, is the sole subject/goal of AI.  Moreover, 
>> this
>> "perfect absorption" is a complete illusion and in practice will only 
>> exist
>> in the most predictable and simple situations. From another angle, how is
>> this Heideggerian view of absorbed coping much different from the 
>> standard
>> psychological model of projection where our internal model/representation 
>> is
>> simply projected onto the world (or a subset frame of it) and we just 
>> trick
>> ourselves into believing that we are completely and accurately absorbed 
>> with
>> the true essence of the frame problem.  this Heideggerian view of 
>> absorbed
>> coping seems to much more fit the unconscious aspects of the human
>> condition, which is more insect/animal like.  This all seems to be 
>> logically
>> flawed and/or a very weak foundation for grandiose conclusions about what
>> philosophical approach/model is needed to solve the frame problem and 
>> human
>> consciousness.  Maybe I am missing something critical here that can make
>> sense of it.  Please clarify the logic.
>>
>> Any thoughts on this issue?
>>
>> Ariel B.
>>
>> "Isaac" <gro...@sonic.net> wrote in message
>>
>> news:491d60f6$0$33588$742ec2ed@news.sonic.net...
>>
>> > All,
>>
>> > I have critiqued in great detail a recent write paper by Prof. Hubert
>> > Dreyfus entitled "Why Heideggerian AI Failed and how Fixing it would
>> > Require
>> > making it more Heideggerian" .  I can email a copy of it to whom ever 
>> > is
>> > interested. For his bio, see:
>> >http://socrates.berkeley.edu/~hdreyfus/
>>
>> > I want to stimulate discussion on this topic by posting my critiques
>> > little
>> > by little and getting comments from the AI community on the news 
>> > groups.
>> > However, before I start I want to get a feel for how many know of his 
>> > work
>> > and/or would be interested in an intellectual debate for and against 
>> > his
>> > many anti-AI positions.
>>
>> > I hope many will respond to this posting with interest so I can begin
>> > posting each part of this paper I find issues with and my reasoned
>> > critique
>> > for others to comment on.
>>
>> > Thanks,
>> > Ariel-
> 


0
Isaac
11/17/2008 5:10:55 AM
On Nov 16, 9:10 pm, "Isaac" <gro...@sonic.net> wrote:
> "Immortalist" <reanimater_2...@yahoo.com> wrote in message
>
> news:fcc39fdd-9886-401f-88bd-510528b75fa5@h23g2000prf.googlegroups.com...
>
> > On Nov 16, 6:12 pm, "Isaac" <gro...@sonic.net> wrote:
> >> Reminder: I will post the paragraph(s) I have a comment about, and
> >> highlight
> >> the
> >> particular words at issue by enclosing them between "***" characters.
> >> I'll
> >> also include citations in the paper when helpful. I seek (intelligent and
> >> informed) technical/theoretical critique or feedback from anyone on this
> >> particular issue.  Ask/email me for a copy of the paper if you are
> >> interested in
> >> the context and details.
>
> >> 2nd critique, on his page 12, line 4:
> >> "Heidegger's important insight is not that, when we solve problems, we
> >> sometimes make use of representational equipment outside our bodies, but
> >> that being-in-the-world is more basic than thinking and solving
> >> problems;that it is not representational at all.  That is, when we are
> >> coping at our best, ***we are drawn in by solicitations and respond
> >> directly
> >> to them, so that the distinction between us and our equipment--between
> >> inner
> >> and outer-vanishes***#1  As Heidegger sums it up:
> >> I live in the understanding of writing, illuminating, going-in-and-out,
> >> and
> >> the like.  More precisely: as Dasein I am -- in speaking, going, and
> >> understanding -- an act of understanding dealing-with.  My being in the
> >> world is nothing other than this already-operating-with-understanding in
> >> this mode of being.[ii]
>
> <snip>
> I did not understand the significance of the Kant, VR, phantom-limb, etc.
> quotes.  Please clarify what you mean in concrete terms.
>
> > If I am just learning to use a hammer the first time I must extend the
> > perception of the end of my arm out about a foot further. I must learn
> > to hit an object, nail, a foot further out than I would normally hit
> > things. This is more like the telepresence in our adjustments to our
> > outer sense...
>
> OK, but how does this support or contradict Dryfus' contention that we have
> no internal representations that Dryfus quotes Heidegger as saying we have
> in the above section?
>

I suppose further conversation about the implications of telepresence
and representationalism has not been ruled out by what Dreyfus
says; unless he only wants to make a stronger but still probable
argument, he shoots himself in the foot. Jeez, representationalism is
the best theory since Kant and still rules neuroscience.

What is Representationalism?

Representationalism is the philosophical position that the world we
see in conscious experience is not the real world itself, but merely a
miniature virtual-reality replica of that world in an internal
representation. Representationalism is also known (in psychology) as
Indirect Perception, and (in philosophy) as Indirect Realism, or
Epistemological Dualism.

Why Representationalism?

As incredible as it might seem intuitively, representationalism is the
only alternative that is consistent with the facts of perception.

The Epistemological Fact (strongest theory): It is impossible to have
experience beyond the sensory surface.

Dreams, Hallucinations, and Visual Illusions clearly indicate that the
world of experience is not the same thing as the world itself.

The observed Properties of Phenomenal Perspective clearly indicate
that the world of experience is not the same as the external world
that it represents.

http://cns-alumni.bu.edu/~slehar/Representationalism.html

Representationalism (or indirect realism) with respect to perception
is the view that "we are never aware of physical objects, [but rather]
we are only indirectly aware of them, in virtue of a direct awareness
of an intermediary [mental] object. (Dancy, 145) Because there are
both direct and indirect objects of awareness in representationalism,
a correspondence relation arises between the mental entities directly
perceived and external objects which those mental entities represent.
And thus perceptual error occurs when the two objects of awareness do
not correspond sufficiently well. In opposition to
representationalism, both (direct) realism and idealism agree that
perception is direct and unmediated, despite their disagreements about
what the object of perception is. (Dancy, 145) In any form of direct
perception, no correspondence relationship is possible, since there is
only one object of perception. Thus only representationalism will give
rise to the view that perceptual errors exist and must be part of a
theory of perception. Nevertheless, both idealism and realism must
still account for the facts that are referred to as "perceptual
errors" by the representationalist.

http://www.dianahsieh.com/undergrad/rape.html

....representation is central to psychology as well, for the mind too
is a system that represents the world and possible worlds in various
ways. Our hopes, fears, beliefs, memories, perceptions, intentions,
and desires all involve our ideas about (our mental models of) the
world and other worlds. This is what humanist philosophers and
psychologists have always said, of course, but until recently they had
no support from science...

http://www.kurzweilai.net/meme/frame.html?main=/articles/art0162.html?




>
>
>
>
> >> Heidegger and Merleau-Ponty's understanding of embedded embodied coping,
> >> then, is not that the mind is sometimes extended into the world but
> >> rather
> >> that all such problem solving is derivative, that in our most basic way
> >> of
> >> being, that is, as absorbed skillful copers, we are not minds at all but
> >> one
> >> with the world.   Heidegger sticks to the phenomenon, when he makes the
> >> strange-sounding claim that, in its most basic way of being, "Dasein is
> >> its
> >> world existingly."[iii]
>
> > Is this like, phenomenology, you know the attempt to extend the
> > inescapable self and somehow use it as proof that the external world
> > exists for certain by the process of eliminating material objects from
> > language and replacing them with hypothetical propositions about
> > observers and experiences, committing us to the existence of a new
> > class of ontological object altogether: the sensibilia or sense-data
> > which can exist independently of experience, and thus refute the
> > sceptics strong arguments?
>
> I think phenomenology as an objectified (i.e., represented) concept is what
> Heidegger claims, but I understand Dryfus as saying here that he agrees with
> Merleau-Ponty's philosophy that the phenominon and our perceiving of it
> become indestingishable from each other; i.e, we have no internal
> representations of the phenomenon.
>
> Do you have any comments on my cretique of these paragraphs below?
>

Plenty, but you're moving too fast; this could take months, even years,
with some of you hard cases.

> <snip>
>
> >> --------------------------------------------------------------------------------
>
> >> My critique #1:
>
> >> seems that the  "distinction between us and our equipment... is vanished"
> >> is
> >> just describing the unconscious automation process that takes over body
> >> functions and relieves the conscious mind to be unaware that its
> >> equipment
> >> was drawn into responding to solicitations.  This in many ways seems to
> >> just
> >> be alluding to the domain of our unconscious being that responds like
> >> dominos that fall automatically in response to many contextual
> >> solicitations.  I do not see how this all makes a solid argument that
> >> conscious thought is unified and inseparable from "our equipment" (i.e.,
> >> body).  At best this is a very weak, if not completely flawed, logic in
> >> inferring that our sense (act) of being in the world "is not
> >> representational at all".  The text that appears to clarify this
> >> assertion
> >> just seems to be a string of conclusory declarations without a solid
> >> logical
> >> foundation.  Even a plausible syllogism would be helpful here."
>
> >> My critique #2:
>
> >> is not the Heideggerian view requiring this unity between the mind and
> >> the
> >> world result in a "contrived, trivial, and irrelevant" world
> >> representation
> >> scheme in people when the events in the world are so far beyond a
> >> person's
> >> ability to cope (relative to there internal representation/value system)
> >> that they just end up contriving a trivial and irrelevant internal world
> >> that is just projected onto a "best fit/nearest neighbor" of a
> >> representation that they can cope with.  In this way, there is no
> >> absorbed
> >> coping because it requires a perfect and accurate absorption scheme
> >> between
> >> our mind (inner) and the world (outer) that does not exist and cannot be
> >> magically created, even biologically.  If you ignore this aspect of the
> >> Heideggerian view then what you end up with is nothing much more than an
> >> "ignorance is bliss" cognitive model that is not too different from what
> >> you
> >> say is wrong with Brook's approach.  That is, your portrayal of the
> >> Heideggerian view of absorbed coping would exactly model the thinking and
> >> representation behavior of insects, which certainly is not the conscious,
> >> cognitive model of humans.  Thus, this Heideggerian view of absorbed
> >> coping
> >> is either insufficient to describe the human condition or it renders
> >> indistinguishable insects from humans; either way it does not seem to
> >> uniquely capture the behavior at the level of human consciousness and is,
> >> thus, flawed at best.    That is, if this Heideggerian view of absorbed
> >> coping equally applies to any animals or insects then it is not really
> >> helpful to modeling or shedding light on  higher human intellectual
> >> behavior, which, of course, is the sole subject/goal of AI.  Moreover,
> >> this
> >> "perfect absorption" is a complete illusion and in practice will only
> >> exist
> >> in the most predictable and simple situations. From another angle, how is
> >> this Heideggerian view of absorbed coping much different from the
> >> standard
> >> psychological model of projection where our internal model/representation
> >> is
> >> simply projected onto the world (or a subset frame of it) and we just
> >> trick
> >> ourselves into believing that we are completely and accurately absorbed
> >> with
> >> the true essence of the frame problem.  this Heideggerian view of
> >> absorbed
> >> coping seems to much more fit the unconscious aspects of the human
> >> condition, which is more insect/animal like.  This all seems to be
> >> logically
> >> flawed and/or a very weak foundation for grandiose conclusions about what
> >> philosophical approach/model is needed to solve the frame problem and
> >> human
> >> consciousness.  Maybe I am missing something critical here that can make
> >> sense of it.  Please clarify the logic.
>
> >> Any thoughts on this issue?
>
> >> Ariel B.
>
> >> "Isaac" <gro...@sonic.net> wrote in message
>
> >>news:491d60f6$0$33588$742ec2ed@news.sonic.net...
>
> >> > All,
>
> >> > I have critiqued in great detail a recent write paper by Prof. Hubert
> >> > Dreyfus entitled "Why Heideggerian AI Failed and how Fixing it would
> >> > Require
> >> > making it more Heideggerian" .  I can email a copy of it to whom ever
> >> > is
> >> > interested. For his bio, see:
> >> >http://socrates.berkeley.edu/~hdreyfus/
>
> >> > I want to stimulate discussion on this topic by posting my critiques
> >> > little
> >> > by little and getting comments from the AI community on the news
> >> > groups.
> >> > However, before I start I want to get a feel for how many know of his
> >> > work
> >> > and/or would be interested in an intellectual debate for and against
> >> > his
> >> > many anti-AI positions.
>
> >> > I hope many will respond to this posting with interest so I can begin
> >> > posting each part of this paper I find issues with and my reasoned
> >> > critique
> >> > for others to comment on.
>
> >> > Thanks,
> >> > Ariel-

0
Immortalist
11/17/2008 5:36:18 AM
"Isaac" <groups@sonic.net> wrote in
news:4920d31e$0$33551$742ec2ed@news.sonic.net: 

> seems that the  "distinction between us and our equipment... is
> vanished" is just describing the unconscious automation process that
> takes over body functions and relieves the conscious mind to be
> unaware that its equipment was drawn into responding to solicitations.
>  This in many ways seems to just be alluding to the domain of our
> unconscious being that responds like dominos that fall automatically
> in response to many contextual solicitations.  I do not see how this
> all makes a solid argument that conscious thought is unified and
> inseparable from "our equipment" (i.e., body).  At best this is a very
> weak, if not completely flawed, logic in inferring that our sense
> (act) of being in the world "is not representational at all".  The
> text that appears to clarify this assertion just seems to be a string
> of conclusory declarations without a solid logical foundation.  Even a
> plausible syllogism would be helpful here."

You might be better off ignoring the Heideggerian motivation and formulation 
of Dreyfus's critique, much of which lacks cognitive content, and focusing on 
the substantive points. One of those would be the relationship between 
"mind" or consciousness and intelligence. The problems of consciousness are 
not the problems of intelligence; a system may be intelligent and 
nonconscious, and vice-versa. At the same time, consciousness could be an 
adaptation which facilitates or augments intelligence in some systems.

> Thus, this Heideggerian view of
> absorbed coping is either insufficient to describe the human condition
> or it renders indistinguishable insects from humans; either way it
> does not seem to uniquely capture the behavior at the level of human
> consciousness and is, thus, flawed at best.

Are you interested in a cogent characterization of intelligence --- one which 
might be implemented in an artificial system --- or a characterization of 
consciousness?

0
Publius
11/17/2008 6:51:48 AM
"Curt Welch" <curt@kcwc.com> wrote in message 
news:20081116234549.142$ia@newsreader.com...
> "Isaac" <groups@sonic.net> wrote:
>> Reminder: I will post the paragraph(s) I have a comment about, and
>>   highlight
>> the
>> particular words at issue by enclosing them between "***" characters.
>> I'll also include citations in the paper when helpful. I seek
>> (intelligent and informed) technical/theoretical critique or feedback
>> from anyone on this particular issue.  Ask me for a copy of the paper if
>> you are interested in the context and details.
>>
>> 2nd critique, on his page 12, line 4:
>> "Heidegger's important insight is not that, when we solve problems, we
>> sometimes make use of representational equipment outside our bodies, but
>> that being-in-the-world is more basic than thinking and solving
>> problems;that it is not representational at all.  That is, when we are
>> coping at our best, ***we are drawn in by solicitations and respond
>> directly to them, so that the distinction between us and our
>> equipment--between inner and outer-vanishes***#1  As Heidegger sums it
>> up: I live in the understanding of writing, illuminating,
>> going-in-and-out, and the like.  More precisely: as Dasein I am -- in
>> speaking, going, and understanding -- an act of understanding
>> dealing-with.  My being in the world is nothing other than this
>> already-operating-with-understanding in this mode of being.[ii]
>>
>> Heidegger and Merleau-Ponty's understanding of embedded embodied coping,
>> then, is not that the mind is sometimes extended into the world but
>> rather that all such problem solving is derivative, that in our most
>> basic way of being, that is, as absorbed skillful copers, we are not
>> minds at all but one with the world.   Heidegger sticks to the
>> phenomenon, when he makes the strange-sounding claim that, in its most
>> basic way of being, "Dasein is its world existingly."[iii]
>>
>> When you stop thinking that mind is what characterizes us most basically
>> but, rather, that most basically we are absorbed copers, the inner/outer
>> distinction becomes problematic. There's no easily askable question as to
>> whether the absorbed coping is in me or in the world. According to
>> Heidegger, intentional content isn't in the mind, nor in some 3rd realm
>> (as it is for Husserl), nor in the world; it isn't anywhere.  It's an
>> embodied way of being-towards.  Thus for a Heideggerian, all forms of
>> cognitivist externalism presuppose a more basic existential externalism
>> where even to speak of "externalism" is misleading since such talk
>> presupposes a contrast with the internal.  Compared to this genuinely
>> Heideggerian view, ***extended-mind externalism is contrived, trivial,
>> and irrelevant***#2.
>>
>>

<sniped citations>

>>
>> -------------------------------------------------------------------------
>> -------
>>
>> My critique #1:
>>
>> seems that the  "distinction between us and our equipment... is vanished"
>> is just describing the unconscious automation process that takes over
>> body functions and relieves the conscious mind to be unaware that its
>> equipment was drawn into responding to solicitations.  This in many ways
>> seems to just be alluding to the domain of our unconscious being that
>> responds like dominos that fall automatically in response to many
>> contextual solicitations.  I do not see how this all makes a solid
>> argument that conscious thought is unified and inseparable from "our
>> equipment" (i.e., body).  At best this is a very weak, if not completely
>> flawed, logic in inferring that our sense (act) of being in the world "is
>> not representational at all".  The text that appears to clarify this
>> assertion just seems to be a string of conclusory declarations without a
>> solid logical foundation.  Even a plausible syllogism would be helpful
>> here."
>>
>> My critique #2:
>>
>> is not the Heideggerian view requiring this unity between the mind and
>> the world result in a "contrived, trivial, and irrelevant" world
>> representation scheme in people when the events in the world are so far
>> beyond a person's ability to cope (relative to there internal
>> representation/value system) that they just end up contriving a trivial
>> and irrelevant internal world that is just projected onto a "best
>> fit/nearest neighbor" of a representation that they can cope with.  In
>> this way, there is no absorbed coping because it requires a perfect and
>> accurate absorption scheme between our mind (inner) and the world (outer)
>> that does not exist and cannot be magically created, even biologically.
>> If you ignore this aspect of the Heideggerian view then what you end up
>> with is nothing much more than an "ignorance is bliss" cognitive model
>> that is not too different from what you say is wrong with Brook's
>> approach.  That is, your portrayal of the Heideggerian view of absorbed
>> coping would exactly model the thinking and representation behavior of
>> insects, which certainly is not the conscious, cognitive model of humans.
>> Thus, this Heideggerian view of absorbed coping is either insufficient to
>> describe the human condition or it renders indistinguishable insects from
>> humans; either way it does not seem to uniquely capture the behavior at
>> the level of human consciousness and is, thus, flawed at best.    That
>> is, if this Heideggerian view of absorbed coping equally applies to any
>> animals or insects then it is not really helpful to modeling or shedding
>> light on  higher human intellectual behavior, which, of course, is the
>> sole subject/goal of AI.  Moreover, this "perfect absorption" is a
>> complete illusion and in practice will only exist in the most predictable
>> and simple situations. From another angle, how is this Heideggerian view
>> of absorbed coping much different from the standard psychological model
>> of projection where our internal model/representation is simply projected
>> onto the world (or a subset frame of it) and we just trick ourselves into
>> believing that we are completely and accurately absorbed with the true
>> essence of the frame problem.  this Heideggerian view of absorbed coping
>> seems to much more fit the unconscious aspects of the human condition,
>> which is more insect/animal like.  This all seems to be logically flawed
>> and/or a very weak foundation for grandiose conclusions about what
>> philosophical approach/model is needed to solve the frame problem and
>> human consciousness.  Maybe I am missing something critical here that can
>> make sense of it.  Please clarify the logic.
>>
>> Any thoughts on this issue?
>>
>> Ariel B.
>
> I'm an engineer, not a philosopher.

>As such, nearly everything you write
> strikes me as silly and odd and misguided.

I am an engineer, scientist, philosopher, and roboticist.  Of course, the 
problem does not reside strictly in any one discipline or skill set, so I am 
not surprised that an implementation-oriented thinker will find the 
abstractions too abstruse to be useful.

>I hardly know where to begin to
> comment.
>
> I find this sort of philosophical debate to be a pointless and endless 
> game
> at trying to define, and redefine words to make them fit together in a 
> more
> pleasing way.  You can't solve AI by playing with words.  You have to do 
> it
> using empirical evidence.

I disagree.  Reverse engineering will not solve the problem and may actually 
lead to many dead ends.  It will take a new theory and philosophy to do it. 
Think of it like trying to empirically come up with QED or Relativity w/o 
any new theory or philosophy of physics.

> It's not a problem which can be solved by pure
> philosophy.
>
True, but you can't just do it bottom up either.  You can miss the big 
picture, which philosophy can shed light on.

> For example, you speak of this "unity between the mind and the world".
> What exactly is the "mind" and the "world"?

I did not say this.  If you read my intro, I was quoting from Dryfus' paper.

>You can't resolve this sort of
> question just by talking about such things.  Words are defined by their
> connection to empirical evidence and without empirical evidence, the words
> are basically meaningless - or at minimal, available for use in endless
> pointless debates and redefinition based on usage alone.

For sure, semantics can lead to circular definitions, but tossing out 
anything not empirical is "throwing the baby out with the bath water"; 
that is, you toss out powerful abstractions that bridge large gaps in the 
empirical evidence.

>
> The problem we run into here is that without a concrete definition of how
> the brain works and what the mind is, we can't make any real progress on
> the types of issues you are touching on here.  How can we make any 
> progress
> debating the nature of the connection between the "mind" and the "world"
> when we can't agree what the mind is?  And if we can't agree what the mind
> is, we can't really agree on anything it creates - like it's view of the
> world - which is the foundation of what the word "world" is referring to.
>
> You can't resolve any of these questions until you can first resolve
> fundamental questions such as the mind body problem and consciousness in
> general.
>

Well, we have to talk about the trinity or we'd get nowhere, but I agree 
that any usage of those words must be very tentative and cannot lead to 
sweeping conclusions w/o a scientific definition of each, which I say would 
require a theory of mind (not just connecting a million data points).

> I have my answers to these questions, but my answers are not shared or
> agreed on by any sort of majority of society so my foundational beliefs
> can't be used as any sort of proof of what is right.  It call comes back 
> to
> the requirement that we produce empirical data to back up our beliefs. 
> And
> for this subject, that means we have to solve AI, and solve all the
> mysteries of the human brain.  Once we have that hard empirical science
> work finished, then we will have the knowledge needed, to resolve the sort
> of philosophical debates you bring up here.  Until then, endless debate
> about what "merging mind and world" might mean, is nearly pointless in my
> view.
>
> Having said all that, I'll give you my view of all this, and the answers 
> to
> your questions as best as I can figure out.
>
> I'm a strict materialist or physicalist.  I believe the brain is doing
> nothing more than performing a fairly straight forward signal processing
> function which is mapping sensory input data flows into effector output
> data flows.  There is nothing else there happening, and nothing else that
> needs to be explained in terms of what the "mind" is or what
> "consciousness" is.

I don't think you can say that anything as chaotic as the brain is doing anything 
"straightforward".  The Earth's weather is infinitely more straightforward 
than the human mind/brain, and we cannot model it worth a damn even with all 
the most powerful computers in the world.

>The mind and consciousness is not something separate
> from the brain, it simply is the brain and what the brain is doing.
>
> It's often suggested that humans have a property of "consciousness" which
> doesn't exist in computers or maybe insects (based on the use of "insect"
> above).  I see that idea as totally unsupported by the facts. It's nothing
> more than a popular myth - and a perfect example of the nonsense that is
> constantly batted around in these mostly pointless physiological debates.
>
> There is no major function which exists in the human brain which doesn't
> already exists in our computers and our robots which are already acting as
> autonomous agents interacting with their environment.  The only difference
> between humans and robots, is that humans currently have a more advanced
> signal processing system - not one which is substantially different in any
> significant way - just one which is better mostly by measures of degree,
> and not measures of kind.

I don't think you could be farther away from the truth.  The brain computes 
in ways that are so different from (and often the opposite of) how our signal 
processing works that it is in another universe by comparison.  For example, 
the core of the brain's sensory processing seems to be a kind of 
synesthesia-based system, which is exactly what all engineers would avoid 
like the plague.  I could go on and on with counterexamples.

>
> Many people however tend to believe the human "mind" and human
> "consciousness" is something different from what our robots are doing by a
> major and important degree of kind.  They believe we are something no one
> yet understands, and something that doesn't exist in our machines at all.
> I reject that notion completely.
>
> This belief we find in so many humans - that they are uniquely different
> from the machines - is a result of an invalid self-image the brains
> naturally tend to form about themselves.  Human tend to think they are
> something they are not in this regard.  They believe their "mind" is
> somehow different and separate from the brain, when there is no separation
> at all.  The endless mind body debates and all the other debates which 
> spin
> off from it, are the result of failing to see that the apparent separation
> is only an illusion.
>
> This illusion is represented by the simple idea that our internal
> awareness, doesn't seem to be an identity with neural activity.  That
> "seeing blue" doesn't seem to us to be "just neural activity".  Seeing 
> blue
> seems to be something of a completely different nature to us than "neurons
> firing".  However, all evidence adds up to the fact that these are an
> identity - that they are in fact one and the same thing.  Not something
> "created by" the activity of the brain, but simply, the brain activity
> itself.
>
> Now, I wrote all that just to try and make it clear to you where I'm 
> coming
> from and what I believe in.
>
> Because of what I believe, the mind body problem, and AI, and
> consciousness, translates to a very straight forward problem of science 
> and
> engineering, not a philosophy problem in any respect.  The brain is just a
> reinforcement trained parallel signal processing network which produces 
> our
> behavior (both external behaviors and internal thoughts) as a reaction to
> the current environment.

Hebbian learning has been known since the 1950s, but it has not led to anything 
practical because it may be necessary but not sufficient.  For example, Hebbian 
learning does not even begin to solve the frame problem.  Since this is all so 
straightforward, how do you propose reinforcement training (i.e., Pavlov's 
dog) can be used to robustly deal with the frame problem?
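
To make the contrast concrete, here is a tiny Python sketch of my own (not 
anything from Dreyfus' paper or your post): a plain Hebbian update strengthens 
whatever correlations happen to occur, while a reinforcement-style update 
gates the same correlation term with a scalar reward.  Neither says anything 
about which features are relevant, which is roughly where the frame problem bites.

import numpy as np

def hebbian_update(w, x, y, lr=0.01):
    # Plain Hebb: strengthen weights whenever pre- and post-synaptic
    # activity co-occur, with no notion of task success.
    return w + lr * np.outer(y, x)

def reward_modulated_update(w, x, y, reward, lr=0.01):
    # Toy reinforcement-style variant: the same correlation term,
    # but gated by a scalar reward signal.
    return w + lr * reward * np.outer(y, x)

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(1, 3))   # one linear unit, three inputs
x = np.array([1.0, 0.0, 1.0])
y = w @ x                                # post-synaptic activity

w_hebb = hebbian_update(w, x, y)                       # learns regardless of outcome
w_rl = reward_modulated_update(w, x, y, reward=-1.0)   # weakened when punished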

>
> From this perceptive, let me jump in and debate the words you had issue
> with:
>
>  we are drawn in by solicitations and respond
>  directly to them, so that the distinction between us and our
>  equipment--between inner and outer-vanishes
>
> I think at the lowest levels of what is happening in the brain, it is
> obvious that the brain is simply reacting to what is happening in the
> environment - that is what the brain is doing by definition in my view. 
> We
> simply "respond directly to them".  That is all we every do.

Really?  So, being an engineer, you will know that "reacting" to input is 
just another way of saying that you have a control system.  However, any 
control system needs a model to determine the proper control surface for the 
input landscape; that is, model building.  Thus, the brain is about building 
useful models of the environment via sensory synergy.  In this way, I 
completely disagree with your assertion that the "brain is simply reacting to 
what is happening in the environment".
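
Here is a minimal sketch of the point about models (my own illustration, with 
a made-up one-dimensional plant, not anything from your post or Dreyfus' 
paper): a purely reactive controller maps the current error straight to an 
output, while a model-based controller has to consult an internal forward 
model to choose its action.

def reactive_controller(error, gain=0.5):
    # Purely reactive: output is a fixed function of the current input.
    return gain * error

def model_based_controller(target, state, forward_model, candidate_actions):
    # Chooses the action whose *predicted* outcome, according to an
    # internal model of the plant, lands closest to the target.
    return min(candidate_actions,
               key=lambda a: abs(target - forward_model(state, a)))

# Hypothetical plant model: next_state = state + 0.8 * action
forward_model = lambda s, a: s + 0.8 * a
action = model_based_controller(target=10.0, state=4.0,
                                forward_model=forward_model,
                                candidate_actions=[i * 0.5 for i in range(-20, 21)])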

>
> But this is where it gets very messy.  What is meant by "we" in the above?
>
> For me, it is obvious the only "we" that exists is a human body and the
> human body is simply reacting to its environment and that's pretty much 
> the
> end of the story.  It's no more complex or mystical in any sense than a
> robot reacting to its environment or a rock reacting to its environment.
> The only difference is that the more complex machines like the human and
> robot react in more complex ways to their environment than the rock does.
>
<sniped for brevity>

> If the part of the brain which represents "dogs" is damaged, we can become
> unable to see a dog.

Well, Dreyfus disagrees with you on this point.  He says there is no 
representation of a dog in the brain.  How do you argue against that?

>We can look right at it, and have no clue what it is
> we are looking at.

All the evidence I am aware of regarding the brain is that such concepts are not 
located in any one place that you could damage so as to lose only the recognition 
of a dog.  BTW, this is another example of how the brain is radically 
different from our computing systems.

>At the same time, if you stimulate the correct parts of
> a brain, it's mostly likely that the person would report they were "seeing
> a dog" when there was no dog there.

There is no research ever showing that this is possible.  Please cite the 
research that supports your belief.  I only know of music that can be made to 
be heard by stimulating the brain.

>So what is it we are actually
> "seeing".  Is it the dog we are responding to when we say we see a dog, or
> is the neural activity in one part of the brain which other part of the
> brain is responding to by producing the words "I see a dog"?
>

I think we should stay away from consciousness in this discussion or else we 
will get nowhere by forking off into too many infinities.

> It can be argued that what we actually respond to is not the physical dog
> out in the word, but that we are responding to the brain activity.

Of course, the model of the dog.

> However, in a normally functioning human brain, the brain activity only
> shows up in the brain, when there is a dog in our field of vision - so the
> distinction isn't important.
>
> So when we use the the word "we", or "self" what actually are we talking
> about?  Just like with the dog, are we talking about the phsyical thing
> "out there"?  Or are we talking about the brain activity which represents
> the physical thing out there?  The confusion here of course is that, 
> unlike
> with the dog example, the physical thing out there and the brain activity,
> overlap.  The brain activity is part of the human body which is thinking
> about itself.

Again, I think we should stay away from consciousness in this discussion or 
else we will get nowhere by forking off into too many infinities.  Starting 
from lower levels of the brain is more practical here.

> Like when we use the word "dog" it's not clear whether we are talking 
> about
> the brain activity which represents the dog, or the physical dog, it's not
> clear what we are talking about when we talk about "we".
>
> So lets go back to the words:
>
>  we are drawn in by solicitations and respond
>  directly to them, so that the distinction between us and our
>  equipment--between inner and outer-vanishes
>
> Which "we" might these words be making reference to?  I really don't know
> because I don't know for sure what this guy was trying to communicate with
> these words.
>

I believe he means our brain circuits engage a phenomenon by melding with it 
and becoming a mirror image such that the two are not separable; thus there are 
no representations of the object in the brain, just a bunch of organically 
melded dominoes that knock one into another the way "reality" would.

> But then we get to the "us" and "our equipment".  Again, was the "us" and
> "our equipment" a reference to the physical body of the person and the
> physical equipment, or a reference to the brain activity inside the
> person's brain which represented all this stuff?
>
> And which "distinction" is he making reference to?  The distinction 
> between
> the human body and the physical stuff of the equipment?  Or the 
> distinction
> between the brain activity which represents the equipment and the brain
> activity which represents the human?

I believe he is saying that the phenomenon is internalized w/o distinctions; 
i.e., you become the phenomenon, as opposed to making a model of the 
object as a separate token to use in your brain system to plan your actions.

>
<snipped for brevity>

> As such, when the "dog" response is inhibited because some other response
> like "big scary lion which is about to eat us" is produced, we end up not
> "seeing" the dog.  When asked later if there was a dog next to the lion, 
> we
> will have no memory of the dog at all simply because we never "saw" the 
> dog
> - i.e., the dog neural activity was blocked by other activity of higher
> priority.  We would say, "we didn't notice the dog".
>
> This is true about most of how we respond to the environment.  If we see a
> complex scene, the brain picks the reaction which it believes is the 
> "best"
> for that situation - we will see one object first, and ignore everything
> else.  We will "focus on" the object we "care about" the most.
>

I think you digress here.  The issue is about cognitive architecture wrt 
phenomena and building "correct" actions, not about focus of attention, even 
if that does act as an initial filter of what info we get as input to the 
process.  Again, let's stay away from consciousness here.  That is a separate 
thread altogether.

>
> This happens because when we are in a room, there will be all sorts of
> brain activity representing that room, and our location in the room. 
> There
> is always brain activity which represents the state of the environment
> around us.  But when we watch a TV show, or a movie, and we focus on the
> what is happening in the show long enough, the brain state which 
> represents
> the fact that we are in a room watching a TV will start to fade and be
> slowly be replaced by the events of the show. The less distraction there 
> is
> from the room (dark lights with no other noise or motion in the room) the
> greater this effect is.
>
> This is one effect that might be what the words "distinction between us 
> and
> our equipment--between inner and outer-vanishes", could have been making
> reference to.

Dryfus is not talking about consciousness; he is talking about the 
architecture of low-level brain circuits that do or do not make distinctions 
between "us" and phenomena.

> But then there's the other way to look at this.  Just like the "dog" 
> exists
> for us as brain activity, our entire world exists for us as brain 
> activity.
> As such, there never really was much distinction between the brain 
> activity
> which represents our inner self and brain activity which represents outer
> stuff.  It's all still just "more brain activity".  The ability to 
> classify
> some brain activity as "outer stuff" and some brain activity as "inner
> stuff" is mostly a learned response.  It's the way our brain is trained to
> respond to the enviro0nment - by creating that inner and outer
> classifications.  As such, we can also train ourselves to think in terms 
> of
> being "one with the world".  Or "extend our consciousness outside
> ourselves" - or whatever sill way some one might talk about this.
>

In his paper, he considers chaotic neural networks to be more at "one 
with the world" than classic AI's more (fuzzy) rule-based systems.

<snipped>
> fact.  Stop the brain activity, and my "world" goes away.  I will no 
> longer
> be "aware" of what is around me.  I will no longer be "seeing" the 
> computer

Again, let's stay away from awareness here.  A separate issue.


<snipped for brevity>
> I've not tried to comment on your critiques of the words yet, but this 
> post
> is too long already so I'll stop hear for now.
>
Your detailed thoughts are greatly appreciated.  See if my feedback can help 
focus ideas for or against Dreyfus' thesis or my critique of his ideas.

<snip>
> you bring up here must be translated back to the actions of such a machine
> in order to be understood.  If you can't translates these ideas and
> questions and issues back to the operation of a machine like this, then 
> you
> don't have any hope of understand what it is you are talking about and you
> don't have any hope of understanding what the words mean or why one set of
> words is a better or worse description of what is happening.

Yes, but we do have to proceed with our best working theories to give tentative 
meaning to the semantics so we can work towards articulating the 
distinctions we feel with scientific definitions... years in the future.

Best regards,
Ariel B.
>
> -- 
> Curt Welch 
> http://CurtWelch.Com/
> curt@kcwc.com 
> http://NewsReader.Com/ 


0
Isaac
11/17/2008 8:10:43 AM
"Publius" <m.publius@nospam.comcast.net> wrote in message 
news:Xns9B58E8A0D6C89mpubliusnospamcomcas@69.16.185.250...
> "Isaac" <groups@sonic.net> wrote in
> news:4920d31e$0$33551$742ec2ed@news.sonic.net:
>
>> seems that the  "distinction between us and our equipment... is
>> vanished" is just describing the unconscious automation process that
>> takes over body functions and relieves the conscious mind to be
>> unaware that its equipment was drawn into responding to solicitations.
>>  This in many ways seems to just be alluding to the domain of our
>> unconscious being that responds like dominos that fall automatically
>> in response to many contextual solicitations.  I do not see how this
>> all makes a solid argument that conscious thought is unified and
>> inseparable from "our equipment" (i.e., body).  At best this is a very
>> weak, if not completely flawed, logic in inferring that our sense
>> (act) of being in the world "is not representational at all".  The
>> text that appears to clarify this assertion just seems to be a string
>> of conclusory declarations without a solid logical foundation.  Even a
>> plausible syllogism would be helpful here."
>
> You might be better off ignoring the Heideggerian motivation and 
> formulation
> of Dreyfus's critique, much of which lacks cognitive content, and focusing 
> on
> the substantive points. One of those would be the relationship between
> "mind" or consciousness and intelligence. The problems of consciousness 
> are
> not the problems of intelligence; a system may be intelligent and
> nonconscious, and vice-versa. At the same time, consciousness could be an
> adaptation which facilitates or augments intelligence in some systems.
>

Dryfus's paper (and this quote from it) is not really talking about 
consciousness; he is talking about the architecture of low-level brain circuits 
that do or do not make distinctions (e.g., representations) between "us" and 
phenomena.  He roughly says our brain circuits engage a phenomenon by melding 
with it and becoming a mirror image such that the two are not separable; thus 
there are no representations of the object, or modules, or hierarchy in the 
brain, just a bunch of flat, organically melded dominoes that knock one into 
another the way "reality" would.



>> Thus, this Heideggerian view of
>> absorbed coping is either insufficient to describe the human condition
>> or it renders indistinguishable insects from humans; either way it
>> does not seem to uniquely capture the behavior at the level of human
>> consciousness and is, thus, flawed at best.
>
> Are you interested in a cogent characterization of intelligence --- one 
> which
> might be implemented in an artificial system --- or a characterization of
> consciousness?
>
No, the issue here is about cognitive architecture wrt phenomena and AI 
building "correct" actions.  I think we should stay away from consciousness 
here.  That is an independent, complex issue.

Ariel-


0
Isaac
11/17/2008 8:23:54 AM
Isaac wrote:
....
> [i] As Heidegger puts it: "The self must forget itself if, lost in the world
> of equipment, it is to be able 'actually' to go to work and manipulate
> something." Being and Time, 405.
....
> My critique #1:
> 
> seems that the  "distinction between us and our equipment... is vanished" is
> just describing the unconscious automation process that takes over body
> functions and relieves the conscious mind to be unaware that its equipment
> was drawn into responding to solicitations.  This in many ways seems to just
> be alluding to the domain of our unconscious being that responds like
> dominos that fall automatically in response to many contextual
> solicitations.  

I think it's simply about ego.
Lack of ego doesn't mean being unconscious.
It's just an observation switch, seeing how things relate to each other, 
rather than how they relate to the self. IOW, 'how to' instead of 'how do I'.

> I do not see how this all makes a solid argument that
> conscious thought is unified and inseparable from "our equipment" (i.e.,
> body).  At best this is a very weak, if not completely flawed, logic in
> inferring that our sense (act) of being in the world "is not
> representational at all".  

Agreed.

> This all seems to be logically
> flawed and/or a very weak foundation for grandiose conclusions about what
> philosophical approach/model is needed to solve the frame problem and human
> consciousness.  Maybe I am missing something critical here that can make
> sense of it.  Please clarify the logic.

Agreed.

Regards...
0
Josip
11/17/2008 1:00:56 PM
You can send me the paper.  I was impressed by
Dreyfus decades ago.  Since I find myself untrainable
within the University environment I have many weak
spots.  That being said, I have participated in the
c.a.p forum since its split from c.a so as to move
the Searle chatter out of c.a.

While traditional Representationalism is unlikely
to be true, it seems just as unlikely that thinking bridges
any gaps.  I think I agree with you on this point.

I suspect we are like the blind men describing
an elephant.  We seem to argue over descriptions
rather than trying to integrate them into a cohesive
whole.  I'm as guilty as the next.

Most of us are much harder on papers when we
disagree with their conclusions than when we
agree with them.  Are you sure you came to the
table clean?

I tend to shut down as soon as I reach an unsupported
claim with which I disagree.  It doesn't really matter
that much what conclusion is reached.  This has really
interfered with my reading of technical papers.  On the
other hand I can accept hypotheticals I strongly suspect
are false.  Is the paper presented as a series of conclusions
one must accept as true or as alternatives to be considered?
0
forbisgaryg
11/17/2008 2:38:51 PM
>
> I'm an engineer, not a philosopher.  As such, nearly everything you write
> strikes me as silly and odd and misguided.  I hardly know where to begin to
> comment.
>
> I find this sort of philosophical debate to be a pointless and endless game
> at trying to define, and redefine words to make them fit together in a more
> pleasing way.  You can't solve AI by playing with words.  You have to do it
> using empirical evidence.  It's not a problem which can be solved by pure
> philosophy.
>

I agree with you. Creating AI has nothing to do with philosophy. It is just a 
technical problem that needs better mathematical tools in order to be solved.

Creating AI will reflect on philosophy in only one way - it will prove that some 
philosophers were wrong.

0
iso
11/17/2008 2:56:07 PM
"Isaac" <groups@sonic.net> writes:
>"Curt Welch" <curt@kcwc.com> wrote in message

>I disagree.  Reverse engineering will not solve the problem and may actually 
>lead to many dead ends.  It will take a new theory and philosophy to do it. 
>Think of it like trying to emperically come up with QED or Relativity w/o 
>any new theory or philosophy of physics.

It's good to see someone who recognizes that reverse engineering
is not sufficient.  Too few people recognize this.

>I did not say this.  If you read my intro, I was quoting from Dryfus' paper.

Quick comment.  You are frequently writing "Dryfus" instead of "Dreyfus".

>I don't think you can call anything as chaotic as the brain doing anything 
>"straight forward".  The Earth's weather is infinitely more straitforward 
>than the humand mind/brain and we cannot model it worth a damn even with all 
>the most powerful computers in the world.

I doubt that the brain is chaotic, except perhaps during epileptic
seizures and similar failures.  Perhaps you are using "chaotic" only
to mean that we don't have a satisfactory theory of brain operations.

>> There is no major function which exists in the human brain which doesn't
>> already exists in our computers and our robots which are already acting as
>> autonomous agents interacting with their environment.  The only difference
>> between humans and robots, is that humans currently have a more advanced
>> signal processing system - not one which is substantially different in any
>> significant way - just one which is better mostly by measures of degree,
>> and not measures of kind.

>I don't think you could be farther away from the truth.  The brain computes 
>in ways that is so different (an often oposite) of how our signal processing 
>works that it is in another universe by comparison.  For example, the core 
>of the brain's sensory processing seems to be a kind of synethstesia based 
>system, which is exactly what all engineers would avoid like the plague.  I 
>could go on and on with counter examples.

Part of the confusion is in the assumption that the brain is computing.
I find little evidence of that.  By the way, I have had these debates
with Curt in the past.

>hebbian learning was known since the '50's but that has not lead to anything 
>practical because it may necessary but not sufficient.  For example, hebbian 
>learning does not even begin to solve the frame problem.  Since this is so 
>strait forward, how do you propose reinforcement training (i.e., Pavlov's 
>dog) can be used to robustly deal with the frame problem?

Perhaps Hebbian learning is not properly understood.  At least that's
part of my position.

IMO there is no need to solve the frame problem.  Rather, we
need to avoid it.  The frame problem is simply an artifact of the
reliance on stored representations.  Humans suffer from the frame
problem when they depend on stored representations.  I remember
some time back when I replied to a want ad in the newspaper, and
left a message on the person's tape.  I was called back twice -
apparently he had forgotten that he had called me back the first
time (i.e. he had failed to update his stored representations).
And that seems to be an example of a frame problem failure.
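
That want-ad anecdote can be caricatured in a few lines of Python (a toy of my 
own, with hypothetical names, not anything from the papers under discussion): 
an agent acts from a stored "to call" representation, fails to update it after 
acting, and the stale belief produces the redundant second call.

to_call = {"ad_responder": "not yet called"}      # stored representation

def call_back(person, beliefs, update_beliefs=True):
    print(f"calling {person} ...")
    if update_beliefs:
        beliefs[person] = "already called"        # keep the model current

call_back("ad_responder", to_call, update_beliefs=False)   # forgets to update
if to_call["ad_responder"] == "not yet called":             # stale belief
    call_back("ad_responder", to_call)                      # redundant second call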

>All the evidence I am aware of re the brain is that such concepts are not 
>located in any one place which you can damage to lose only the recognition 
>of a dog.  BTW, this is another example of how the brain is radically 
>different than our computing systems.

Not a good example.  Back in the early days of the PC, a single byte
of memory was actually distributed over 9 different chips plugged
into 9 different sockets on the PC.  That one byte of memory was all
in one location according to our formal models of computing, but was
distributed over multiple components in the actual implementation.
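
For readers who never opened one of those machines: a typical early-PC memory 
bank used nine one-bit-wide DRAM chips per byte, eight for data and one for 
parity.  A rough sketch of that arrangement (my own illustration, assuming 
even parity):

def store_byte(value):
    # Spread one logical byte across nine 1-bit-wide chips:
    # chips 0-7 each hold one data bit, chip 8 holds an even-parity bit.
    data_bits = [(value >> i) & 1 for i in range(8)]
    parity_bit = sum(data_bits) % 2
    return data_bits + [parity_bit]

chips = store_byte(0x61)   # one "location" in the formal model, nine chips in hardware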

Incidentally, I do agree that brains are radically different from computing
systems.  But that's a theoretical view, not something that can easily
be determined empirically.

0
Neil
11/17/2008 4:45:09 PM
I'll begin my comments here.

But first a general comment.  AI is pretty much a mechanization of
traditional epistemology.  Dreyfus seems to acknowledge this in the
first couple of pages.  So if AI does not work, that would suggest
a problem with epistemology.  So why do people like Dreyfus (and
Searle, and others) criticize AI but not criticize epistemology?
(OK, that's a rhetorical question).

To me, epistemology has always seemed a bit silly.  And it has
puzzled me that intelligent philosophers fail to see that it
is silly.

"Isaac" <groups@sonic.net> writes:

>First, critique, page 11, line 20:
>"I agree that it is time for a positive account of Heideggerian AI and of an 
>underlying Heideggerian neuroscience, but I think Wheeler is the one looking 
>in the wrong place.  Merely by supposing that Heidegger is concerned with 
>problem solving and action oriented representations, Wheeler's project 
>reflects not a step beyond Agre but a regression to aspects of pre-Brooks 
>GOFAI.  Heidegger, indeed, claims that that skillful coping is basic, but he 
>is also clear that, all coping takes place on the background coping he calls 
>being-in-the-world that doesn't involve any form of representation at all.

Comment (on Dreyfus):  I have no problem with the idea of skillful
coping.  The trouble I have with some of the philosophy is that
it often tends to come across as mystical in its reliance on vague
ideas such as "being in the world".

>see: Michael Wheeler, Reconstructing the Cognitive World, 222-223.

>My comment:
>"Assuming that by "thinking" you mean conscious thought,  I cannot see how 
>thinking is a bridge that necessarily follows from memories/beliefs not 
>being solely inner entities.  It seems to me that inner and outer 
>representations can be bridged without thought.  Isn't this what occurs in 
>an unconscious (reflex) reaction to a complex external even, which is an 
>automatic bridge and generates a thoughtful, usually accurate response but 
>often before we even have a chance to think about it.  Inner/outer 
>representations seems semantically vague here.  Also, cannot conscious 
>thought can endeavor itself with in purely inner or out representations 
>without ever bridging them?  I guess, it is the "therefore" that gives me 
>pause here."

In a way, you are saying something similar to what I said above.
That is, you are decrying the tendency to give accounts that seem
mystical because of their reliance on rather vague ideas.

0
Neil
11/17/2008 5:14:43 PM
"Isaac" <groups@sonic.net> writes:

>2nd critique, on his page 12, line 4:
>"Heidegger's important insight is not that, when we solve problems, we 
>sometimes make use of representational equipment outside our bodies, but 
>that being-in-the-world is more basic than thinking and solving 
>problems;that it is not representational at all.  That is, when we are 

There's that "being in the world" mystification.

>My critique #2:

>is not the Heideggerian view requiring this unity between the mind and the 
>world result in a "contrived, trivial, and irrelevant" world representation 
>scheme in people when the events in the world are so far beyond a person's 
>ability to cope (relative to there internal representation/value system) 
>that they just end up contriving a trivial and irrelevant internal world 
>that is just projected onto a "best fit/nearest neighbor" of a 
>representation that they can cope with.  In this way, there is no absorbed 
>coping because it requires a perfect and accurate absorption scheme between 
>our mind (inner) and the world (outer) that does not exist and cannot be 
>magically created, even biologically.  If you ignore this aspect of the 
>Heideggerian view then what you end up with is nothing much more than an 
>"ignorance is bliss" cognitive model that is not too different from what you 
>say is wrong with Brook's approach.  That is, your portrayal of the 
>Heideggerian view of absorbed coping would exactly model the thinking and 
>representation behavior of insects, which certainly is not the conscious, 
>cognitive model of humans.  Thus, this Heideggerian view of absorbed coping 
>is either insufficient to describe the human condition or it renders 
>indistinguishable insects from humans; either way it does not seem to 
>uniquely capture the behavior at the level of human consciousness and is, 
>thus, flawed at best.    That is, if this Heideggerian view of absorbed 
>coping equally applies to any animals or insects then it is not really 
>helpful to modeling or shedding light on  higher human intellectual 
>behavior, which, of course, is the sole subject/goal of AI.  Moreover, this 
>"perfect absorption" is a complete illusion and in practice will only exist 
>in the most predictable and simple situations. From another angle, how is 
>this Heideggerian view of absorbed coping much different from the standard 
>psychological model of projection where our internal model/representation is 
>simply projected onto the world (or a subset frame of it) and we just trick 
>ourselves into believing that we are completely and accurately absorbed with 
>the true essence of the frame problem.  this Heideggerian view of absorbed 
>coping seems to much more fit the unconscious aspects of the human 
>condition, which is more insect/animal like.  This all seems to be logically 
>flawed and/or a very weak foundation for grandiose conclusions about what 
>philosophical approach/model is needed to solve the frame problem and human 
>consciousness.  Maybe I am missing something critical here that can make 
>sense of it.  Please clarify the logic.

I agree with the overall sense of your criticism, that Dreyfus
is giving accounts in vague terms which don't really say a lot,
and isn't giving a persuasive argument for this retreat to vagueness.

>Any thoughts on this issue?

Here are some of my own views.

AI and epistemology take the problem to be "how do we use
representations".  Dreyfus seems to want to do away with
representations.  To an extent, I sympathize with Dreyfus, in that
I see an over-reliance on representations in epistemology and in AI.
But we cannot do away with them.  Clearly, people use representations
in their use of natural language.

To me, the question is not "how do we use representations?"  Rather,
the question should be "how do we form representations in the first
place?"  This seems to be a difficult problem.  As best I can tell,
our digital technology has not solved this problem.  We have solved
some problems, in that we can digitize music for recording on CD,
and we can digitize pictures with our digital cameras.  But the
representations formed with these digitization methods are not at
all similar to the representations that we see in our ordinary use
of descriptions in natural language.
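
Just to pin down what kind of representation digitization gives us, here is a 
minimal sampling-and-quantization sketch (my own toy, not a claim about how 
CDs are actually mastered): the output is a flat grid of numbers, nothing like 
the descriptive, language-like representations I am contrasting it with.

import math

def digitize(signal, sample_rate=8000, duration=0.001, bits=8):
    # Sample a continuous signal at fixed intervals and quantize each
    # sample to one of 2**bits levels -- the CD/camera style of representation.
    n = int(sample_rate * duration)
    levels = 2 ** bits
    samples = [signal(i / sample_rate) for i in range(n)]
    return [round((s + 1) / 2 * (levels - 1)) for s in samples]

pcm = digitize(lambda t: math.sin(2 * math.pi * 440 * t))   # a 440 Hz tone -> list of ints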

On page 2, Dreyfus writes "I was particularly struck by the fact
that, among other troubles, researchers were running up against
the problem of representing significance and relevance".  And that
seems to be the crux of the problem.  AI people like to claim that
AI systems can solve this problem.  But I don't see the evidence
that they have solved it.  It seems to me that the computational
hardware is wrong for this.  That is to say, I do not see how
significance and relevance can be reduced to computation.  It seems
more plausible to suggest that it can be reduced to homeostasis.

Perhaps I should add that I'm an empiricist, at least in the
broad sense.  Presumably rationalists can claim that significance
and relevance are innate, and computation need only apply that.
I don't see such an easy out for the empiricist.  And even for the
rationalist, that doesn't really solve the problem.

0
Neil
11/17/2008 5:59:58 PM
Immortalist <reanimater_2000@yahoo.com> writes:

>What is Representationalism?

>Representationalism is the philosophical position that the world we
>see in conscious experience is not the real world itself, but merely a
>miniature virtual-reality replica of that world in an internal
>representation. Representationalism is also known (in psychology) as
>Indirect Perception, and (in philosophy) as Indirect Realism, or
>Epistemological Dualism.

>Why Representationalism?

>As incredible as it might seem intuitively, representationalism is the
>only alternative that is consistent with the facts of perception.

Nonsense.

And yes, I have had this debate with Steven Lehar.  Neither of us
was able to persuade the other.

I would suggest that J.J. Gibson's direct perception is more
plausible than Lehar's version of representationalism.

0
Neil
11/17/2008 6:06:44 PM
I think that was a good idea, which I should have done sooner.  OK, here is 
a very rough summary of Dreyfus' thesis in this paper:

In sum: Dreyfus's paper is not really talking about consciousness; he is 
talking about the architecture of low-level brain circuits that do or do not 
make distinctions (e.g., representations) between "us" and phenomena.  He 
roughly says our brain circuits engage a phenomenon by melding with it and 
becoming a mirror image of it, such that the two are not separable: thus no 
representations of the object, and no modules or hierarchy in the brain 
(which he asserts is what brings about the downfall of Brooks, Minsky, and 
the like), just a bunch of flat, organically melded dominoes that knock into 
one another the way "reality" would.  He says the non-representational 
approach/architecture follows Heidegger's philosophy (hence "Heideggerian 
AI") and is espoused by Merleau-Ponty and Walter Freeman.  He particularly 
hangs his hat on Walter Freeman's neurodynamic model as the solution to AI; 
i.e., a chaotic, flat, neural network approach.
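
For anyone who wants something concrete to picture, here is a minimal toy 
sketch in Python of the kind of "flat, chaotic" recurrent network being 
gestured at (my own illustration, not Freeman's actual model): one 
undifferentiated pool of units with random coupling, no modules, no 
hierarchy, no stored symbols.

  # Toy sketch only, NOT Freeman's neurodynamic model: a "flat" random
  # recurrent rate network.  With coupling gain g > 1 this kind of network
  # typically shows chaotic ongoing activity; nothing in it is a symbol
  # standing for anything in the world.
  import numpy as np

  N, g, dt = 200, 1.5, 0.01                       # units, coupling gain, time step
  np.random.seed(0)
  J = np.random.normal(0.0, 1.0 / np.sqrt(N), (N, N))   # flat, random connectivity
  x = np.random.normal(0.0, 0.5, N)                      # initial state

  for _ in range(5000):
      x = x + dt * (-x + g * (J @ np.tanh(x)))    # standard rate-network dynamics

  print(x[:5])    # the "state" is just this trajectory, not a representation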

Here are a couple of good quotes to sum up his thesis:
"... there are now at least three versions of supposedly Heideggerian AI 
that might be thought of as articulating a new paradigm for the field: 
Rodney Brooks' behaviorist approach at MIT, Phil Agre's pragmatist model, 
and Walter Freeman's neurodynamic model.  All three approaches implicitly 
accept Heidegger's critique of Cartesian internalist representations, and, 
embrace John Haugeland's slogan that cognition is embedded and embodied.[i]

..... Later I'll suggest that Walter Freeman's neurodynamics offers a 
radically new basis for a Heideggerian approach to human intelligence-an 
approach compatible with physics and grounded in the neuroscience of 
perception and action. But first we need to examine another approach to AI 
contemporaneous with Brooks' that actually calls itself Heideggerian.


--------------------------------------------------------------------------------

[i] John Haugeland, "Mind Embodied and Embedded," Having Thought: Essays in 
the Metaphysics of Mind, (Cambridge, MA: Harvard University Press, 1998), 
218."



I hope this helps frame the issues and my critiques better for those who 
understandably do not have time to read his paper.



Cheers,

Ariel B.




"Alpha" <omegazero2003@yahoo.com> wrote in message 
news:f8636636-6ee8-4c92-9905-a89145bc06b2@k24g2000pri.googlegroups.com...
On Nov 14, 9:46 pm, "Isaac" <gro...@sonic.net> wrote:
> All,
>
> I have critiqued in great detail a recent write paper by Prof. Hubert
> Dreyfus entitled "Why Heideggerian AI Failed and how Fixing it would 
> Require
> making it more Heideggerian" . I can email a copy of it to whom ever is
> interested.

Please send a copy to omegazero2003@yahoo.com Ariel. Thanks!

Why don't you post a summary of that paper and your key critique
points here.




> For his bio, see:http://socrates.berkeley.edu/~hdreyfus/
>
> I want to stimulate discussion on this topic by posting my critiques 
> little
> by little and getting comments from the AI community on the news groups.
> However, before I start I want to get a feel for how many know of his work
> and/or would be interested in an intellectual debate for and against his
> many anti-AI positions.
>
> I hope many will respond to this posting with interest so I can begin
> posting each part of this paper I find issues with and my reasoned 
> critique
> for others to comment on.
>
> Thanks,
> Ariel-


0
Isaac
11/17/2008 6:33:00 PM
<forbisgaryg@msn.com> wrote in message 
news:36705361-5c01-4d0f-87f8-70ae94e7c9ab@c36g2000prc.googlegroups.com...
> You can send me the paper.
Done. It should be in your inbox by now.

>I was impressed by
> Dreyfus decades ago.  Since I find myself untrainable
> within the University environment I have many weak
> spots.  This being said, I have participated in the
> c.a.p forum since its splt form c.a so as to move
> the Searle chatter out of c.a.

Great, then you may be inclined to stick up for him.  Of course, most in 
the AI community would not, so any counterbalance is very much needed.

>
> While traditional Representationalism is unlikely
> to be true it seem just as unlikely thinking bridges
> any gaps.  I think I agree with you on this point.
>
> I suspect we are like the blind men describing
> an elephant.  We seem to argue over descriptions
> rather than trying to integrate them into a cohesive
> whole.  I'm as guilty as the next.
>
Yes, that is why getting the philosophy and theory straight is an important 
first step in the right direction.

> Most of us are much harder on papers when we
> disagree with their conclusions than when we
> agree with them.
I'm just as hard either way.  I don't let anyone agree with me unless they 
do it with the right rationale and can strongly defend it against my best 
devil's-advocate arguments.

>Are you sure you came to the
> table clean?
I did.  Actually, I tend to agree with some architectural implications of 
his philosophy; however, I believe his reasoning is deeply flawed, so I 
critique it at every step.

>
> I tend to shut down as soon as I reach an unsupported
> claim with which I disagree.  It doesn't really matter
> that much what conclusion is reached.  This has really
> interfered with my reading of technical papers.  On the
> other hand I can accept hypotheticals I strongly suspect
> are false.

I understand how that feels.  I can stick with it if they give strong 
reasoning and/or plausible evidence.  I tune out when there is too much hand 
waving.

>Is the paper presented as a series of conclusions
> one must accept as true or as alternatives to be considered?

No, he does do a reasonably good job of grounding it; however, when he does 
make (many) broad conclusions, I believe he jumps too far.  So I'd give him 
a C+ on this count, whereas most, if not nearly all, philosophers would earn 
a D- on a practicality rating scale.

Thanks for your input,
Ariel- 


0
Isaac
11/17/2008 6:52:57 PM
"Neil W Rickert" <rickert+nn@cs.niu.edu> wrote in message 
news:7EhUk.6368$as4.1357@nlpi069.nbdc.sbc.com...
> I'll begin my comments here.
>
> But first a general comment.  AI is pretty much a mechanization of
> traditional epistemology.  Dreyfus seems to acknowledge this in the
> first couple of pages.  So if AI does not work, that would suggest
> a problem with epistemology.  So why do people like Dreyfus (and
> Searle, and others) criticize AI but not criticize epistemology?
> (OK, that's a rhetorical question).

Dreyfus' solution to this is to say that AI's model of epistemology is wrong 
and needs to be embodied, flat, and non-representational.

>
> To me, epistemology has always seemed a bit silly.  And it has
> puzzled me that intelligent philosophers fail to see that it
> is silly.
>

Epistemology does not really have the same scope as AI.  Beyond learning and 
building knowledge, AI also includes transcendental aspects of consciousness 
and self (soul?), which belong to metaphysics.  AI also covers the creation 
and appreciation of beautiful things, which falls under the third pillar of 
philosophy: aesthetics.  So, I believe AI touches on nearly all aspects of 
philosophy.

> "Isaac" <groups@sonic.net> writes:
>
>>First, critique, page 11, line 20:
>>"I agree that it is time for a positive account of Heideggerian AI and of 
>>an
>>underlying Heideggerian neuroscience, but I think Wheeler is the one 
>>looking
>>in the wrong place.  Merely by supposing that Heidegger is concerned with
>>problem solving and action oriented representations, Wheeler's project
>>reflects not a step beyond Agre but a regression to aspects of pre-Brooks
>>GOFAI.  Heidegger, indeed, claims that that skillful coping is basic, but 
>>he
>>is also clear that, all coping takes place on the background coping he 
>>calls
>>being-in-the-world that doesn't involve any form of representation at all.
>
> Comment (on Dreyfus):  I have no problem with the idea of skillful
> coping.  The trouble I have with some of the philosophy, is that
> it often tends to come across as mystical in its reliance on vague
> ideas such as "being in the world".
>
>>see: Michael Wheeler, Reconstructing the Cognitive World, 222-223.
>
>>My comment:
>>"Assuming that by "thinking" you mean conscious thought,  I cannot see how
>>thinking is a bridge that necessarily follows from memories/beliefs not
>>being solely inner entities.  It seems to me that inner and outer
>>representations can be bridged without thought.  Isn't this what occurs in
>>an unconscious (reflex) reaction to a complex external even, which is an
>>automatic bridge and generates a thoughtful, usually accurate response but
>>often before we even have a chance to think about it.  Inner/outer
>>representations seems semantically vague here.  Also, cannot conscious
>>thought can endeavor itself with in purely inner or out representations
>>without ever bridging them?  I guess, it is the "therefore" that gives me
>>pause here."
>
> In a way, you are saying something similar to what I said above.
> That is, you are decrying the tendency to give accounts that seem
> mystical because of their reliance on rather vague ideas.
>
He does ground it more later in the paper, which I will present and comment 
on little by little.  However, IMHO, his pattern of making huge leaps after 
"therefore" stays the same.

Cheers,
Ariel- 


0
Isaac
11/17/2008 7:10:55 PM
I completely disagree.  Beyond learning and building knowledge, AI also 
includes transcendental aspects of consciousness and self (soul?), which are 
in metaphysics.  Do you really think there is an E=mc^2 equation for that?

AI also covers the creation and appreciation of beautiful things, which falls 
under the third pillar of philosophy: aesthetics.  So, I believe AI touches 
on nearly all aspects of philosophy.  Moreover, (reverse) engineering will 
not solve the problem and may actually lead to many dead ends by just finding 
ways to go nowhere quicker and better.  It will take a new theory and 
philosophy to do it.
Think of it like trying to empirically come up with QED or Relativity without 
any new theory or philosophy of physics.



"�u�Mu�PaProlij" <1234567@654321.00> wrote in message 
news:gfs0lv$bvq$1@ss408.t-com.hr...
> >
>> I'm an engineer, not a philosopher.  As such, nearly everything you write
>> strikes me as silly and odd and misguided.  I hardly know where to begin 
>> to
>> comment.
>>
>> I find this sort of philosophical debate to be a pointless and endless 
>> game
>> at trying to define, and redefine words to make them fit together in a 
>> more
>> pleasing way.  You can't solve AI by playing with words.  You have to do 
>> it
>> using empirical evidence.  It's not a problem which can be solved by pure
>> philosophy.
>>
>
> I agree with you. Creating AI has nothing to do with philosophy. It is 
> just a technical problem that needs better mathematical tools in order to 
> solve it.
>
> Creating AI will reflect on philosophy in only one way - it will prove 
> that some philosophers were wrong.
> 


0
Isaac
11/17/2008 7:17:20 PM
Dreyfus says that the brain works according to Walter Freeman's neurodynamic 
model, which he offers as the solution to AI; i.e., a chaotic, flat, neural 
network approach, which he contends in this paper has no representations, 
modules, or hierarchy.


"Immortalist" <reanimater_2000@yahoo.com> wrote in message 
news:66f01963-fb3e-41be-b816-7ee01a3cf0d1@d10g2000pra.googlegroups.com...
> On Nov 16, 9:10 pm, "Isaac" <gro...@sonic.net> wrote:
>> "Immortalist" <reanimater_2...@yahoo.com> wrote in message
>>
>> news:fcc39fdd-9886-401f-88bd-510528b75fa5@h23g2000prf.googlegroups.com...
>>
>> > On Nov 16, 6:12 pm, "Isaac" <gro...@sonic.net> wrote:
>> >> Reminder: I will post the paragraph(s) I have a comment about, and
>> >> highlight
>> >> the
>> >> particular words at issue by enclosing them between "***" characters.
>> >> I'll
>> >> also include citations in the paper when helpful. I seek (intelligent 
>> >> and
>> >> informed) technical/theoretical critique or feedback from anyone on 
>> >> this
>> >> particular issue.  Ask/email me for a copy of the paper if you are
>> >> interested in
>> >> the context and details.
>>
>> >> 2nd critique, on his page 12, line 4:
>> >> "Heidegger's important insight is not that, when we solve problems, we
>> >> sometimes make use of representational equipment outside our bodies, 
>> >> but
>> >> that being-in-the-world is more basic than thinking and solving
>> >> problems;that it is not representational at all.  That is, when we are
>> >> coping at our best, ***we are drawn in by solicitations and respond
>> >> directly
>> >> to them, so that the distinction between us and our equipment--between
>> >> inner
>> >> and outer-vanishes***#1  As Heidegger sums it up:
>> >> I live in the understanding of writing, illuminating, 
>> >> going-in-and-out,
>> >> and
>> >> the like.  More precisely: as Dasein I am -- in speaking, going, and
>> >> understanding -- an act of understanding dealing-with.  My being in 
>> >> the
>> >> world is nothing other than this already-operating-with-understanding 
>> >> in
>> >> this mode of being.[ii]
>>
>> <snip>
>> I did not understand the significance of the Kant, VR, phantom-limb, etc.
>> quotes.  Please clarify what you mean in concrete terms.
>>
>> > If I am just learning to use a hammer the first time I must extend the
>> > perception of the end of my arm out about a foot further. I must learn
>> > to hit an object, nail, a foot further out than I would normally hit
>> > things. This is more like the telepresence in our adjustments to our
>> > outer sense...
>>
>> OK, but how does this support or contradict Dryfus' contention that we 
>> have
>> no internal representations that Dryfus quotes Heidegger as saying we 
>> have
>> in the above section?
>>
>
> I suppose further conversation about the implications of telepresence
> would and representationalism have not been ruled out by what Dyfus
> says, unless he only wants to make a stronger but still probable
> argument he shoots himself in the foot. Jeez, representationalism, is
> the best theory since Kant and still rules neuroscience,
>
> What is Representationalism?
>
> Representationalism is the philosophical position that the world we
> see in conscious experience is not the real world itself, but merely a
> miniature virtual-reality replica of that world in an internal
> representation. Representationalism is also known (in psychology) as
> Indirect Perception, and (in philosophy) as Indirect Realism, or
> Epistemological Dualism.
>
> Why Representationalism?
>
> As incredible as it might seem intuitively, representationalism is the
> only alternative that is consistent with the facts of perception.
>
> The Epistemological Fact (strongest theory): It is impossible to have
> experience beyond the sensory surface.
>
> Dreams, Hallucinations, and Visual Illusions clearly indicate that the
> world of experience is not the same thing as the world itself.
>
> The observed Properties of Phenomenal Perspective clearly indicate
> that the world of experience is not the same as the external world
> that it represents.
>
> http://cns-alumni.bu.edu/~slehar/Representationalism.html
>
> Representationalism (or indirect realism) with respect to perception
> is the view that "we are never aware of physical objects, [but rather]
> we are only indirectly aware of them, in virtue of a direct awareness
> of an intermediary [mental] object. (Dancy, 145) Because there are
> both direct and indirect objects of awareness in representationalism,
> a correspondence relation arises between the mental entities directly
> perceived and external objects which those mental entities represent.
> And thus perceptual error occurs when the two objects of awareness do
> not correspond sufficiently well. In opposition to
> representationalism, both (direct) realism and idealism agree that
> perception is direct and unmediated, despite their disagreements about
> what the object of perception is. (Dancy, 145) In any form of direct
> perception, no correspondence relationship is possible, since there is
> only one object of perception. Thus only representationalism will give
> rise to the view that perceptual errors exist and must be part of a
> theory of perception. Nevertheless, both idealism and realism must
> still account for the facts that are referred to as "perceptual
> errors" by the representationalist.
>
> http://www.dianahsieh.com/undergrad/rape.html
>
> ...representation is central to psychology as well, for the mind too
> is a system that represents the world and possible worlds in various
> ways. Our hopes, fears, beliefs, memories, perceptions, intentions,
> and desires all involve our ideas about (our mental models of) the
> world and other worlds. This is what humanist philosophers and
> psychologists have always said, of course, but until recently they had
> no support from science...
>
> http://www.kurzweilai.net/meme/frame.html?main=/articles/art0162.html?
>
>
>
>
>>
>>
>>
>>
>> >> Heidegger and Merleau-Ponty's understanding of embedded embodied 
>> >> coping,
>> >> then, is not that the mind is sometimes extended into the world but
>> >> rather
>> >> that all such problem solving is derivative, that in our most basic 
>> >> way
>> >> of
>> >> being, that is, as absorbed skillful copers, we are not minds at all 
>> >> but
>> >> one
>> >> with the world.   Heidegger sticks to the phenomenon, when he makes 
>> >> the
>> >> strange-sounding claim that, in its most basic way of being, "Dasein 
>> >> is
>> >> its
>> >> world existingly."[iii]
>>
>> > Is this like, phenomenology, you know the attempt to extend the
>> > inescapable self and somehow use it as proof that the external world
>> > exists for certain by the process of eliminating material objects from
>> > language and replacing them with hypothetical propositions about
>> > observers and experiences, committing us to the existence of a new
>> > class of ontological object altogether: the sensibilia or sense-data
>> > which can exist independently of experience, and thus refute the
>> > sceptics strong arguments?
>>
>> I think phenomenology as an objectified (i.e., represented) concept is 
>> what
>> Heidegger claims, but I understand Dryfus as saying here that he agrees 
>> with
>> Merleau-Ponty's philosophy that the phenominon and our perceiving of it
>> become indestingishable from each other; i.e, we have no internal
>> representations of the phenomenon.
>>
>> Do you have any comments on my cretique of these paragraphs below?
>>
>
> Plenty but your moving to fast, this could take months, even years,
> with some of you hard cases.
>
>> <snip>
>>
>> >> --------------------------------------------------------------------------------
>>
>> >> My critique #1:
>>
>> >> seems that the  "distinction between us and our equipment... is 
>> >> vanished"
>> >> is
>> >> just describing the unconscious automation process that takes over 
>> >> body
>> >> functions and relieves the conscious mind to be unaware that its
>> >> equipment
>> >> was drawn into responding to solicitations.  This in many ways seems 
>> >> to
>> >> just
>> >> be alluding to the domain of our unconscious being that responds like
>> >> dominos that fall automatically in response to many contextual
>> >> solicitations.  I do not see how this all makes a solid argument that
>> >> conscious thought is unified and inseparable from "our equipment" 
>> >> (i.e.,
>> >> body).  At best this is a very weak, if not completely flawed, logic 
>> >> in
>> >> inferring that our sense (act) of being in the world "is not
>> >> representational at all".  The text that appears to clarify this
>> >> assertion
>> >> just seems to be a string of conclusory declarations without a solid
>> >> logical
>> >> foundation.  Even a plausible syllogism would be helpful here."
>>
>> >> My critique #2:
>>
>> >> is not the Heideggerian view requiring this unity between the mind and
>> >> the
>> >> world result in a "contrived, trivial, and irrelevant" world
>> >> representation
>> >> scheme in people when the events in the world are so far beyond a
>> >> person's
>> >> ability to cope (relative to there internal representation/value 
>> >> system)
>> >> that they just end up contriving a trivial and irrelevant internal 
>> >> world
>> >> that is just projected onto a "best fit/nearest neighbor" of a
>> >> representation that they can cope with.  In this way, there is no
>> >> absorbed
>> >> coping because it requires a perfect and accurate absorption scheme
>> >> between
>> >> our mind (inner) and the world (outer) that does not exist and cannot 
>> >> be
>> >> magically created, even biologically.  If you ignore this aspect of 
>> >> the
>> >> Heideggerian view then what you end up with is nothing much more than 
>> >> an
>> >> "ignorance is bliss" cognitive model that is not too different from 
>> >> what
>> >> you
>> >> say is wrong with Brook's approach.  That is, your portrayal of the
>> >> Heideggerian view of absorbed coping would exactly model the thinking 
>> >> and
>> >> representation behavior of insects, which certainly is not the 
>> >> conscious,
>> >> cognitive model of humans.  Thus, this Heideggerian view of absorbed
>> >> coping
>> >> is either insufficient to describe the human condition or it renders
>> >> indistinguishable insects from humans; either way it does not seem to
>> >> uniquely capture the behavior at the level of human consciousness and 
>> >> is,
>> >> thus, flawed at best.    That is, if this Heideggerian view of 
>> >> absorbed
>> >> coping equally applies to any animals or insects then it is not really
>> >> helpful to modeling or shedding light on  higher human intellectual
>> >> behavior, which, of course, is the sole subject/goal of AI.  Moreover,
>> >> this
>> >> "perfect absorption" is a complete illusion and in practice will only
>> >> exist
>> >> in the most predictable and simple situations. From another angle, how 
>> >> is
>> >> this Heideggerian view of absorbed coping much different from the
>> >> standard
>> >> psychological model of projection where our internal 
>> >> model/representation
>> >> is
>> >> simply projected onto the world (or a subset frame of it) and we just
>> >> trick
>> >> ourselves into believing that we are completely and accurately 
>> >> absorbed
>> >> with
>> >> the true essence of the frame problem.  this Heideggerian view of
>> >> absorbed
>> >> coping seems to much more fit the unconscious aspects of the human
>> >> condition, which is more insect/animal like.  This all seems to be
>> >> logically
>> >> flawed and/or a very weak foundation for grandiose conclusions about 
>> >> what
>> >> philosophical approach/model is needed to solve the frame problem and
>> >> human
>> >> consciousness.  Maybe I am missing something critical here that can 
>> >> make
>> >> sense of it.  Please clarify the logic.
>>
>> >> Any thoughts on this issue?
>>
>> >> Ariel B.
>>
>> >> "Isaac" <gro...@sonic.net> wrote in message
>>
>> >>news:491d60f6$0$33588$742ec2ed@news.sonic.net...
>>
>> >> > All,
>>
>> >> > I have critiqued in great detail a recent write paper by Prof. 
>> >> > Hubert
>> >> > Dreyfus entitled "Why Heideggerian AI Failed and how Fixing it would
>> >> > Require
>> >> > making it more Heideggerian" .  I can email a copy of it to whom 
>> >> > ever
>> >> > is
>> >> > interested. For his bio, see:
>> >> >http://socrates.berkeley.edu/~hdreyfus/
>>
>> >> > I want to stimulate discussion on this topic by posting my critiques
>> >> > little
>> >> > by little and getting comments from the AI community on the news
>> >> > groups.
>> >> > However, before I start I want to get a feel for how many know of 
>> >> > his
>> >> > work
>> >> > and/or would be interested in an intellectual debate for and against
>> >> > his
>> >> > many anti-AI positions.
>>
>> >> > I hope many will respond to this posting with interest so I can 
>> >> > begin
>> >> > posting each part of this paper I find issues with and my reasoned
>> >> > critique
>> >> > for others to comment on.
>>
>> >> > Thanks,
>> >> > Ariel-
> 


0
Isaac
11/17/2008 7:22:55 PM
"Neil W Rickert" <rickert+nn@cs.niu.edu> wrote in message 
news:UoiUk.5175$hc1.631@flpi150.ffdc.sbc.com...
> Immortalist <reanimater_2000@yahoo.com> writes:
>
>>What is Representationalism?
>
>>Representationalism is the philosophical position that the world we
>>see in conscious experience is not the real world itself, but merely a
>>miniature virtual-reality replica of that world in an internal
>>representation. Representationalism is also known (in psychology) as
>>Indirect Perception, and (in philosophy) as Indirect Realism, or
>>Epistemological Dualism.
>
>>Why Representationalism?
>
>>As incredible as it might seem intuitively, representationalism is the
>>only alternative that is consistent with the facts of perception.
>
> Nonsense.
>
> And yes, I have had this debate with Steven Lehar.  Neither of us
> was able to persuade the other.
>
> I would suggest that J.J. Gibson's direct perception is more
> plausible than Lehar's version of representationalism.
>
Could you briefly summarize the thesis of J.J. Gibson's direct perception?

thanks,
Ariel- 


0
Isaac
11/17/2008 7:29:13 PM
"Josip Almasi" <joe@vrspace.org> wrote in message 
news:gfrpua$jbm$1@news.metronet.hr...
> Isaac wrote:
> ...
>> [i] As Heidegger puts it: "The self must forget itself if, lost in the 
>> world
>> of equipment, it is to be able 'actually' to go to work and manipulate
>> something." Being and Time, 405.
> ...
>> My critique #1:
>>
>> seems that the  "distinction between us and our equipment... is vanished" 
>> is
>> just describing the unconscious automation process that takes over body
>> functions and relieves the conscious mind to be unaware that its 
>> equipment
>> was drawn into responding to solicitations.  This in many ways seems to 
>> just
>> be alluding to the domain of our unconscious being that responds like
>> dominos that fall automatically in response to many contextual
>> solicitations.
>
> I think it's simply about ego.
> Lack of ego doesn't mean being unconscious.
> It's just an observation switch, seing how things relate to each other, 
> rather than how relate to self. IOW, 'how to' instead of 'how do I'.
>

That is an interesting idea for matters that require a self-centered account 
of phenomena; however, how does your "relational ego" address the issue 
raised about Dreyfus' assertion that our brain/mind fundamentally makes no 
"distinction between us and our equipment," so that we are simply, and 
automatically, drawn into responding to solicitations like water flowing 
down a hill?

>> I do not see how this all makes a solid argument that
>> conscious thought is unified and inseparable from "our equipment" (i.e.,
>> body).  At best this is a very weak, if not completely flawed, logic in
>> inferring that our sense (act) of being in the world "is not
>> representational at all".
>

> Agreed.
>
>> This all seems to be logically
>> flawed and/or a very weak foundation for grandiose conclusions about what
>> philosophical approach/model is needed to solve the frame problem and 
>> human
>> consciousness.  Maybe I am missing something critical here that can make
>> sense of it.  Please clarify the logic.
>
> Agreed.
>
> Regards... 


0
Isaac
11/17/2008 7:38:40 PM
"Neil W Rickert" <rickert+nn@cs.niu.edu> wrote in message 
news:pchUk.6319$Ei5.5211@flpi143.ffdc.sbc.com...
> "Isaac" <groups@sonic.net> writes:
>>"Curt Welch" <curt@kcwc.com> wrote in message
>
>>I disagree.  Reverse engineering will not solve the problem and may 
>>actually
>>lead to many dead ends.  It will take a new theory and philosophy to do 
>>it.
>>Think of it like trying to emperically come up with QED or Relativity w/o
>>any new theory or philosophy of physics.
>
> It's good to see someone who recognizes that reverse engineering
> is not sufficient.  To few people recognize this.
>
>>I did not say this.  If you read my intro, I was quoting from Dryfus' 
>>paper.
>
> Quick comment.  You are frequently writing "Dryfus" instead of "Dreyfus".

thanks,

>
>>I don't think you can call anything as chaotic as the brain doing anything
>>"straight forward".  The Earth's weather is infinitely more straitforward
>>than the humand mind/brain and we cannot model it worth a damn even with 
>>all
>>the most powerful computers in the world.
>
> I doubt that the brain is chaotic, except perhaps during epileptic
> seizures and similar failures.  Perhaps you are using "chaotic" only
> to mean that we don't have a satisfactory theory of brain operations.
>

Dreyfus' solution to the "failed AI" is an embodied system based on a 
chaotic neural network like that of Walter Freeman's neurodynamics.  So, how 
do you argue against this hypothesis?

<snip>
>
>>I don't think you could be farther away from the truth.  The brain 
>>computes
>>in ways that is so different (an often oposite) of how our signal 
>>processing
>>works that it is in another universe by comparison.  For example, the core
>>of the brain's sensory processing seems to be a kind of synethstesia based
>>system, which is exactly what all engineers would avoid like the plague. 
>>I
>>could go on and on with counter examples.
>
> Part of the confusion is in the assumption that the brain is computing.
> I find little evidence of that.  By the way, I have had these debates
> with Curt in the past.
>
Interesting; how do you define "computation" such that the brain is not 
doing it, but our computers are?  For example, when our brain perceives an 
object and generates a motor plan to grab the object, is the brain not, even 
if implicitly, performing calculations based on past data to create a 
solution to a complex "reality" landscape equation?

>>hebbian learning was known since the '50's but that has not lead to 
>>anything
>>practical because it may necessary but not sufficient.  For example, 
>>hebbian
>>learning does not even begin to solve the frame problem.  Since this is so
>>strait forward, how do you propose reinforcement training (i.e., Pavlov's
>>dog) can be used to robustly deal with the frame problem?
>
> Perhaps Hebbian learning is not properly understood.  At least that's
> part of my position.
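
Just so we are talking about the same thing: by Hebbian learning I mean the 
classic "cells that fire together wire together" rule; roughly, this toy 
sketch (my own illustration, not anything from the paper):

  # Toy sketch of plain Hebbian learning: the weight between two units grows
  # in proportion to the product of their activities.  Note that nothing in
  # this rule even touches the frame problem.
  import numpy as np

  np.random.seed(1)
  pre = np.random.rand(100, 20)       # 100 presentations of 20 presynaptic rates
  post = np.random.rand(100, 5)       # the corresponding 5 postsynaptic rates
  W = np.zeros((5, 20))               # weights, shape (post, pre)
  eta = 0.01                          # learning rate

  for x, y in zip(pre, post):
      W += eta * np.outer(y, x)       # dW = eta * post * pre (no normalization)
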
>
> IMO there is no need to solve the frame problem.  Rather, we
> need to avoid it.  The frame problem is simply an artifact of the
> reliance on stored representations.  Humans suffer from the frame
> problem when they depend on stored representations.  I remember
> some time back when I replied to a want ad in the newspaper, and
> left a message on the person's tape.  I was called back twice -
> apparently he had forgotten that he had called me back the first
> time (i.e. he had failed to update his stored representations).
> And that seems to be an example of a frame problem failure.
>

Isn't the frame problem mostly (if not entirely) about filtering the 
intractable sensory information of any situation into a Gestalt of only the 
meaningful, important information?  This seems to be along the lines of 
"common sense," which you cannot just ignore and still expect to meet or 
exceed human intelligence or behavioral skills.

>>All the evidence I am aware of re the brain is that such concepts are not
>>located in any one place which you can damage to lose only the recognition
>>of a dog.  BTW, this is another example of how the brain is radically
>>different than our computing systems.
>
> Not a good example.  Back in the early days of the PC, a single byte
> of memory was actually distributed over 9 different chips plugged
> into 9 different sockets on the PC.  That one byte of memory was all
> in one location according to our formal models of computing, but was
> distributed over multiple components in the actual implementation.
>
> Incidently, I do agree that brains are radically different from computing
> systems.  But that's a theoretical view, not something that can easily
> be determined empirically.

If you have an analog computation with transistors, or an optical 
transformation (i.e., calculation) with a Fresnel lens, isn't computation 
always present when a system transforms an input into a more useful output 
used by another part of the system?  For example, our eye does so many 
critical calculations to signal-condition the optical stream.  Are you saying 
that our eyes do not do any calculations?  Please clarify without making the 
semantics more vague.

thanks,
Ariel-


0
Isaac
11/17/2008 7:54:52 PM
"Isaac" <groups@sonic.net> wrote:
> OK, there has been some interest.  So far, two takers.  I've emailed a
> copy of the paper.  So, here is my first installment.  Not all issues
> will resonate will everyone so pick and choose what you find interesting
> pro/con and I will defend any of my comments.
>
> I will post the paragraph(s) I have a comment about, and highlight the
> particular words at issue by enclosing them between "***" characters.
> I'll also include citations in the paper when helpful. I seek
> (intelligent and informed) technical/theoretical critique or feedback
> from anyone on this particular issue.
>
> First, critique, page 11, line 20:
> "I agree that it is time for a positive account of Heideggerian AI and of
> an underlying Heideggerian neuroscience, but I think Wheeler is the one
> looking in the wrong place.  Merely by supposing that Heidegger is
> concerned with problem solving and action oriented representations,
> Wheeler's project reflects not a step beyond Agre but a regression to
> aspects of pre-Brooks GOFAI.  Heidegger, indeed, claims that that
> skillful coping is basic, but he is also clear that, all coping takes
> place on the background coping he calls being-in-the-world that doesn't
> involve any form of representation at all.
>
> see: Michael Wheeler, Reconstructing the Cognitive World, 222-223.
>
> Wheeler's cognitivist misreading of Heidegger leads him to overestimate
> the importance of Andy Clark's and David Chalmers' attempt to free us
> from the Cartesian idea that the mind is essentially inner by pointing
> out that in thinking we sometimes make use of external artifacts like
> pencil, paper, and computers.[i]  Unfortunately, this argument for the
> extended mind preserves the Cartesian assumption that our basic way of
> relating to the world is by using propositional representations such as
> beliefs and memories whether they are in the mind or in notebooks in the
> world.  In effect, while Brooks happily dispenses with representations
> where coping is concerned, all Chalmers, Clark, and Wheeler give us as a
> supposedly radical new Heideggerian approach to the human way of being in
> the world is to note that memories and beliefs are not necessarily inner
> entities and that, ***therefore, thinking bridges the distinction between
> inner and outer representations.*** "
>
> My comment:
> "Assuming that by "thinking" you mean conscious thought,  I cannot see
> how thinking is a bridge that necessarily follows from memories/beliefs
> not being solely inner entities.

I find it extremely hard to even grasp what point is being made here by
talking about "thinking bridging a distinction".  The concept is mostly
just meaningless to me.  What type of "bridging" is this making a reference
to???

> It seems to me that inner and outer
> representations can be bridged without thought.  Isn't this what occurs
> in an unconscious (reflex) reaction to a complex external even, which is
> an automatic bridge and generates a thoughtful, usually accurate response
> but often before we even have a chance to think about it.

I really don't like this common idea that some mental events are
"conscious" and others are "subconscious" and as such that makes some a
"reflex".

All mental activity is a reflex.  We do not "think about" whether we should
"think about" something before we "think about it".

The brain spontaneously produces private mental events which might take the
form of words that form some logical argument as to why we should or should
not take some future action.  Then, following that, we produce a
spontaneous action - we either do it, or we don't.

The entire process was in effect subconscious.  That is, we have no way to
report _why_ we produced the behavior we did.  And yet, we often pretend
that we do understand our actions when, for the most part, we don't.  We say
things like "I thought about it and decided I should do it."  This creates
the illusion that our thought (the string of words we produced in our head)
was the prime cause of our action - when it was not a prime cause by any
means.  As often as not, the logical arguments we produce are just
justifications that allow us to do the things we wanted to do anyway, even
though we have no real understanding of why we want that instead of wanting
something else.

I believe all our behavior is simply conditioned into us by fairly
straightforward processes of reinforcement.  How this works is mostly hidden from
us, and as such, we call it the subconscious.  However, this subconscious
process the brain uses to select which behavior it will produce controls
everything we do. It controls whether we produce a series of internal
private words before we move our arms and legs, and it controls when we
move our arms and legs.  We have been conditioned both to produce language
at times, and we have been conditioned to respond to language.

Our brain is a distributed network of neurons that are responding to each
other.  There is not "one mind" in there.  It's a network of billions of
independent processing nodes communicating with each other.  These
distributed and independent processing agents are trained by conditioning
to work together to produce effects that are good for the human.

These networks produce odd and complex behaviors - anything that has been
shown to produce higher rewards is likely to be repeated in the future.
Anything we have done in our life that happened to "work" to make things
better for us, will be repeated in the future with a higher probability.
That's the simple foundation of everything we do, whether it's producing a
logical argument using English words in our head or reaching out with our
hand to grab a pencil.  The behavior is just a behavior that has in the
past helped us receive a higher estimated future reward.
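
Boiled down to a toy sketch (not a brain model, and the reward function here
is just something I made up for illustration), the idea is no more
complicated than this:

  # Toy sketch of the "law of effect": behaviors that happened to pay off get
  # repeated with higher probability.  The environment here is invented.
  import random

  actions = ["write_note", "grab_pencil", "do_nothing"]
  value = {a: 0.0 for a in actions}          # learned estimate of each action's payoff
  alpha = 0.1                                # learning rate

  def reward(action):                        # hypothetical world: note-writing pays off
      return 1.0 if action == "write_note" else 0.0

  for trial in range(1000):
      if random.random() < 0.1:              # occasionally try something at random
          a = random.choice(actions)
      else:                                  # otherwise repeat what has worked best
          a = max(actions, key=lambda k: value[k])
      value[a] += alpha * (reward(a) - value[a])   # strengthen what produced reward

  print(value)    # "write_note" ends up with the highest value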

If drawing marks on a piece of paper (external representations) helps us
produce higher rewards, we are likely to do it again in the future.

> Inner/outer
> representations seems semantically vague here.

Well, I don't know what the original author was thinking, but the concept
is not vague to me.  All our behavior is a response to stimuli.  When we
change the environment, with our actions, we are shaping the direction of
our future actions.  By writing a note to myself to remember I need to buy
bread, I'm shaping my future behavior - I will prevent myself from making
the mistake of not getting bread when I needed it.  Writing the note is a
behavior we learn because it increases our future rewards (we increase the
odds we have food to eat when we need it and decrease the odds of having to
deal with the pain of hunger).  This note to myself which shapes my future
behavior is a stimulus signal we have created that we know will cause us to
produce a desirable future behavior.

All our behavior is produced as a reaction to stimulus signals, and by
changing those stimulus signals, we can control our future behavior.  It is
very hard for us to learn to react correctly at the store in response to
the stimulus event that happened last night where we had the thought: "I
need to get bread tomorrow at the store".  But we can instead, learn to
take advantage of multiple short term conditioned behaviors to get the same
long term result.  We learn to write notes on the shopping list, and we
learn to take the shopping list with us to the store, and we learn to scan
the list while we are in the store to compare it with what is in the
basket.  All these short term reactions are easy for the brain to learn,
whereas the long term is not - so all the short term reactions emerge in
us as the solution to how we solve long term problems.  All our behavior
can be looked at as a huge set of short term temporal reactions which work
well together to produce better long term results.

We produce long sequences of events (like driving to the store to shop and
then driving home and unpacking) by stringing together lots of little short
term learned reactions.  All our behavior works like that.  It's just one
short term reaction after the next, producing long sequences of events
that most of the time work well to maximize rewards for us.  We perform
actions, then we react to what we have just done, and to the state the
environment is in as a result of our past reaction, and the cycle just
continues forever.

Internally, the brain has feedback paths that allows this same type of loop
to run in the brain that happens externally.  We produce an internal
reaction in one part of the brain, which another part of the brain reacts
to, which the first part of the brain then reacts to in a different way,
and it just keeps looping allowing us to have long internal sequences of
"thoughts".

There is internal processing which in effect is a representation of
external events and external stimuli.  We can't produce a unique reaction
to a dog unless the brain hardware first has the ability to decode raw
sensory signals as being a "dog" pattern.  But the internal behavior of the
part of the brain which represents "dog" is an internal behavior as much as
it is an internal representation of dog.  Even so, it acts as a stimulus
which other parts of the brain will then end up reacting to.

> Also, cannot conscious
> thought can endeavor itself with in purely inner or out representations
> without ever bridging them?  I guess, it is the "therefore" that gives me
> pause here."
>
> Any thoughts on this issue?

Well, I think it's obvious what is happening here without using words like
"conscious thought" or "bridging representation".  The brain is a reaction
machine trained by reinforcement learning which is in a continuous process
of mapping sensory data streams into effector data streams.  The mapping
function is being constantly adjusted by reinforcement to strengthen the
parts of the function that work best to produce higher future rewards.  All
the complexity of human behavior is what emerges out of this process.
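
As a toy sketch of that claim (my illustration only; the "desired" mapping
below is invented so the reward signal has something to track): a mapping
from a sensory vector to an effector vector, adjusted by nothing but a
scalar reward.

  # Toy sketch: a sensory-to-effector mapping tuned only by a scalar reward,
  # using crude random-perturbation hill climbing.  Illustration only.
  import numpy as np

  np.random.seed(2)
  desired = np.random.randn(3, 8)                    # invented "right" responses
  W = np.zeros((3, 8))                               # the learner's mapping

  def reward(mapping, s):
      a = mapping @ s                                # effector output for stimulus s
      return -np.sum((a - desired @ s) ** 2)         # more reward = closer to desired

  for _ in range(20000):
      s = np.random.randn(8)                         # a sensory sample
      trial = W + 0.05 * np.random.randn(*W.shape)   # try a small random change
      if reward(trial, s) > reward(W, s):            # keep changes that earn more reward
          W = trial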

What does it even mean to "bridge" the things happening outside the brain
with things happening in the brain?  As I said above, I don't really
understand what point was being made.

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
11/17/2008 7:55:49 PM
>I completely disagree, Beyond learning and building knowledge, AI also includes 
>transcendental aspects of consciousness and self (soul?), which are in 
>metaphysics.  Do you really think there is an E=mC^2 equation for that?
>

I don't know what "transcendental aspects of consciousness and self (soul?)" are 
and I don't care. I leave this to philosophers.


> AI also covers the creation and appreciation of beautiful things, which is in 
> the 3rd pillar of philosophy: esthetics.  So, I believe AI touches on nearly 
> all aspects of philosophy.  Moreover, (reverse) engineering will not solve the 
> problem and may actually lead to many dead ends by just finding ways to go 
> nowhere quicker and better.  It will take a new theory and philosophy to do 
> it.
> Think of it like trying to empirically come up with QED or Relativity w/o any 
> new theory or philosophy of physics.
>

I can tell you one thing - if we must wait for philosophers to tell us how to 
make AI, then the philosophers who think it is impossible to make AI are right.

0
iso
11/17/2008 10:52:26 PM
"Isaac" <groups@sonic.net> wrote in
news:49212a1c$0$33562$742ec2ed@news.sonic.net: 

> Dryfus's paper (and this quote from it) is not really talking about 
> consciousness, he is talking about the
> architecture of low level brain circuits that do or do not make
> distinctions (e.g., representations) between "us" and phenomenon.  He
> roughly says our brain circuits engage phenomenon by melding with it
> and becoming a mirror image such that the two are not separable, thus
> no representations of the object or modules or hierarchy in the brain
> just a bunch of flat, organically melded dominoes that hit one to
> another like "reality" would.

That is quite possibly correct, although I'd question the notion that the 
network "melds with" or becomes a "mirror image" of its environment. But it 
does engage with it, and almost certainly does not distinguish between 
itself and the environment with which it is engaged. (An external observer 
could make that distinction, however).

From another post of yours:

"Dryfus says that the brain works according to Walter Freeman's 
neurodynamic model as the solution to AI; i.e., a chaotic, flat, neural 
network approach, which he contends in this paper has no representations, 
modules, or hierarchy."

Disputes about representationalism appear in AI discussions because the 
disputants are not distinguishing between intelligence and consciousness. 
The latter almost certainly entails representationalism; the former need 
not, but natural intelligent systems may employ it. It's an empirical 
question.

> No, the issue here is about cognitive architecture wrt to phenomenon
> and AI building "correct" actions.  I think we should stay away from
> consciousness here.  That is an independent, complex issue.

Agree.
0
Publius
11/18/2008 12:00:43 AM
"Isaac" <groups@sonic.net> wrote:
> I completely disagree, Beyond learning and building knowledge, AI also
> includes transcendental aspects of consciousness and self (soul?), which
> are in metaphysics.  Do you really think there is an E=mC^2 equation for
> that?

Most definitely.  As I said, I'm a strict physicalist.  To me, the belief
that consciousness is something other than physical brain function is just
a widely held illusion or myth.  People believe it, and debate it, just
like they waste their time believing, and debating, the nature of God.
It's just silly crap man made up for reasons that have nothing to do with
the nature of reality.

> AI also covers the creation and appreciation of beautiful things, which
> is in the 3rd pillar of philosophy: esthetics.

Beauty is created by the value the brain assigns to sensations and those
values are there because humans are reinforcement learning machines.
There's nothing more to it than that.  Beauty isn't a mystery.  It's simple
and obvious once you understand what we are - reinforcement learning
machines.

However, this is exactly the type of thing which is nearly impossible to
understand by using philosophy alone to try and uncover the nature of
beauty.

> So, I believe AI touches
> on nearly all aspects of philosophy.

Yes, I agree completely with that.  Philosophy is one of many human
behaviors and if you don't understand where human behavior comes from and
what controls it, you will have no hope of answering any of the big
questions of philosophy such as the mind body problem and the nature of
consciousness or the nature of aesthetics.  Those questions can't be
answered from within the field of philosophy alone.  All you can do from
within philosophy is identify which concepts are compatible with each other
and which are not - you can't identify which set of beliefs is a valid
description of reality without checking the beliefs against empirical data
- which is something philosophy chooses to treat as being outside its
domain.

All you can do from within philosophy is create multiple possible answers.
You can't tell which is correct or how correct or incorrect a given
approach might be.

> Moreover, (reverse) engineering
> will not solve the problem and may actually lead to many dead ends by
> just finding ways to go nowhere quicker and better.  It will take a new
> theory and philosophy to do it.

Reverse engineering has already solved it.  Many philosophers, however, don't
understand this, because they have created such a huge cloud of confusion by
spending so much time debating all the impossible answers that they can't get
a grip on what the truth is.

> Think of it like trying to empirically come up with QED or Relativity w/o
> any new theory or philosophy of physics.

You have started with the assumption that there is something there
(consciousness) which is fundamentally hard to understand and explain.
Your assumption is invalid.  Your assumption is created by a simple-to-
explain brain function which creates in all of us a natural illusion.  If
you assume the illusion is real, you are left with the hard problem of
consciousness.  If you assume the illusion is only an illusion, then there
is no problem at all - all hard questions are answered and explained
leaving a fairly simple material world to understand.  By Occam's razor, I
choose the answer that makes everything simple and answers all the
questions instead of picking the answer which creates contradictions that
have no answer.

But in philosophy, Occam's razor has no place.  All alternatives must be
explored - as such, you are forced by your very charter to wander endlessly
into utter silliness.  The hardness of the problem attracts you to explore
the depths of the silliness endlessly hoping to find some "new theory" or
sudden enlightened understanding which clears it all up.

As an engineer, I have no need to explore such an improbable dead end.  If
I missed something, the philosophers of the world will find it and explain
it.  But after hundreds of years of not finding an answer, I'm not holding my
breath on the expectation that there is something there when all evidence
suggests there isn't anything there to be found.

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
11/18/2008 12:01:37 AM
I have received the paper and read about 11 pages of it
and skimmed the rest.  It's quite dense.  I won't be able
to get a good feel for it for several days because I come home
quite tired.

Based upon the conclusion I wonder why Stevan
Harnad's Total Turing Test wasn't mentioned.
I remember reading "What Computers Can't Do"
in the early '70s, maybe '72 when it came out.
I don't remember much of the book any more.
I should read it again.  It set the stage for
my 1980 reading of John Searle's
article in Behavioral and Brain Sciences.
Still, he's come quite a ways to make the
statement:
   We can, however, make some progress towards animal AI.

So many of us have our reasons to doubt computers
(as currently constructed and envisioned) will be able
to cheaply simulate situated human coping but each
has his or her own reason.

I am encouraged by this paragraph:

  So, according to the view I have been presenting,
  even if the Heideggerian/Merleau-Pontian approach
  to AI suggested by Freeman is ontologically sound
  in a way that GOFAI and subsequent supposedly
  Heideggerian models proposed by Brooks, Agre, and
  Wheeler are not, a neurodynamic computer model
  would still have to be given a detailed description of
  a body and motivations like ours if things were to
  count as significant for it so that it could learn to act
  intelligently in our world.   We have seen that Heidegger,
  Merleau-Ponty, and Freeman offer us hints of the elaborate
  and subtle body and brain structures we would have to
  model and how to model some of them, but this only
  makes the task of a Heideggerian AI seem all the more
  difficult and casts doubt on whether we will ever be able
  to accomplish it.

Even if not intended, he shows the way.  Still, digital systems
in digital worlds are subject to latching where analog systems
in analog worlds would not be.  As long as the designed system
is representational, even if it is a representation of the system
being modelled, it is GOFAI all the way down.

I believe your parsing of the sentence starting at page 11,
line 33, is incorrect, in that Dreyfus is attributing a conclusion to
Chalmers, Clark, and Wheeler rather than making one himself.
See the next paragraph to see why I believe so.
0
forbisgaryg
11/18/2008 4:00:28 AM
Publius wrote:
> 
> Disputes about representationalism appear in AI discussions because the 
> disputants are not distinguishing between intelligence and consciousness. 
> The latter almost certainly entails representationalism; the former need 
> not, but natural intelligent systems may employ it. It's an empirical 
> question.

Actually there's yet another big question - is consciousness an emergent 
property of (sufficiently high) intelligence?
And there's an empirical part of it - how to make it:)
But it seems no one around here doubts it is an emergent property, so no 
need to emphasise the distinction.
As for the empirical part, representationalism is a sort of top-down 
approach, while, say, Curt seems to prefer a bottom-up approach.
Can't say for Dreyfus though:)

Regards...
0
Josip
11/18/2008 2:29:43 PM
"Josip Almasi" <joe@vrspace.org> wrote in message 
news:gfujgq$4el$1@gregory.bnet.hr...
> Publius wrote:
>>
>> Disputes about representationalism appear in AI discussions because the 
>> disputants are not distinguishing between intelligence and consciousness. 
>> The latter almost certainly entails representationalism; the former need 
>> not, but natural intelligent systems may employ it. It's an empirical 
>> question.

>
> Actually there's yet another big question - is consciousness emergent 
> property of (sufficiently high) intelligence.
> And there's an empirical part of it - how to make it:)
> But seems noone around doubts it is an emergent property, so no need to 
> emphasise distinction.

I tend to disagree.  I do not see the two being so intimately connected as 
to require one to emerge from the other.  Intelligence might just be a 
boundary condition on the scope of consciousness (i.e., awareness).  For 
example, a severely retarded person is certainly far less intelligent; 
however, I don't think there is any evidence that they are far less 
conscious.  If consciousness emerged (necessarily?) from intelligence, then 
shouldn't the two be highly correlated?

> As for empirical part, representationalism is sort of top-down approach, 
> while say Curt seems to prefer bottom-up approach.
> Can't say for Dreyfus though:)
Dreyfus' paper necessarily requires a bottom-up approach because his 
architecture has no hierarchy, no modules, and no representations.

Best,
Ariel-

>
> Regards... 


0
Isaac
11/18/2008 10:58:50 PM
"Publius" <m.publius@nospam.comcast.net> wrote in message 
news:Xns9B59A2F03AC17mpubliusnospamcomcas@69.16.185.250...
> "Isaac" <groups@sonic.net> wrote in
> news:49212a1c$0$33562$742ec2ed@news.sonic.net:
>
<snip , comment noted>

> From another post of yours:
>
> "Dryfus says that the brain works according to Walter Freeman's
> neurodynamic model as the solution to AI; i.e., a chaotic, flat, neural
> network approach, which he contends in this paper has no representations,
> modules, or hierarchy."
>
> Disputes about representationalism appear in AI discussions because the
> disputants are not distinguishing between intelligence and consciousness.
> The latter almost certainly entails representationalism; the former need
> not, but natural intelligent systems may employ it. It's an empirical
> question.


I tend to disagree generally, but do agree that Dreyfus' arguments apply 
best to low-level systems like the limbic system.  This is one of my 
critiques as well.  However, your solution does not seem to work for me 
either.  That is, any intelligent system must generate abstractions from the 
highly detailed phenomena it experiences.  Are not abstractions a generic 
representation of the more complex phenomena being modeled, for use in 
either reasoning or pattern recognition?  To me, if you have no 
representations in your system then you must use the full resolution of the 
experienced phenomena, which would result in a very brittle system because 
it lacks abstractions that would enable the intelligent system to manipulate 
intractable details as a single, simple package.  Please clarify your idea 
in the context of my points above.

    ,
Ariel- 


0
Isaac
11/18/2008 11:02:53 PM
"Isaac" <groups@sonic.net> writes:
>"Neil W Rickert" <rickert+nn@cs.niu.edu> wrote in message 
>news:pchUk.6319$Ei5.5211@flpi143.ffdc.sbc.com...

>> I doubt that the brain is chaotic, except perhaps during epileptic
>> seizures and similar failures.  Perhaps you are using "chaotic" only
>> to mean that we don't have a satisfactory theory of brain operations.

>Dreyfus' solution to the "failed AI" is an embodied system based on a 
>chaotic neural network like that of Walter Freeman's neurodynamics.  So, how 
>do you argue against this hypothesis?

If it is chaotic, then one might reasonably expect human behavior
to be chaotic.  But that seems a rather odd way of characterizing
human behavior.

>> Part of the confusion is in the assumption that the brain is computing.
>> I find little evidence of that.  By the way, I have had these debates
>> with Curt in the past.

>Interesting, how do you define "computation" such that the brain is not 
>doing it, but our computers are? For example, when our brain perceives an 
>object and generates a motor plan to grab the object, is the brain not, even 
>if implicitly, performing a calculation based on past data to create a 
>solution to a complex "reality" landscape equation?

I am skeptical that there is much being retained in the form of
stored representations as "past data".  It seems more likely that
the brain is a bit like a finely tuned instrument.  The past data
has played a role in adjusting the tuning, but has not been retained.

The "motor plan" to grab an object is likely quite crude, and
precise behavior results not from having a precise plan, but from
measuring performance during the motor action and adjusting it where
the measurement indicates it is off.  This would make measurement
more important than computation.
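
A toy sketch of the kind of measure-and-adjust loop I have in mind (the
gain, tolerance and numbers are invented purely for illustration):

# Crude "plan": just head toward the target.  Accuracy comes from repeated
# measurement of the remaining error, not from a precise precomputed path.
def reach(target, start=0.0, gain=0.5, tolerance=0.01, max_steps=50):
    position = start
    steps = 0
    for steps in range(max_steps):
        error = target - position      # measurement of how far off we are
        if abs(error) < tolerance:
            break
        position += gain * error       # rough correction toward the target
    return position, steps

final, steps = reach(target=1.0)
print("reached %.3f after %d corrections" % (final, steps))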

>Isn't the frame problem mostly (if not all) about filtering the intractable 
>sensory information of any situation into a Gestalt of only meaningful, 
>important information.  This seems to be along the lines of "common sense", 
>which you cannot just ignore and expect to meet or exceed human intelligence 
>or behavior skills.

That's not the way the frame problem is usually described.
See http://en.wikipedia.org/wiki/Frame_problem for a more familiar
version.  In any case, what matters here is the version of the frame
problem being assumed by Dreyfus in the paper we are discussing.
And I'm pretty sure he is taking it as the problem of updating
stored representations so that they are consistent with changes in
the world.  Thus he mentions that Rodney Brooks avoids the problem
by designing his system to not depend on stored representations of
the state of the world.

>if you have an analog computation with transistors, or an optical 
>transformation (i.e., calculation) with a Fresnel lens, isn't computation 
>always present when a system transforms an input to a more useful output 
>used by another part of the system?  For example, our eye does so many 
>critical calculations to signal-condition the optical stream.  Are you saying 
>that our eyes do not do any calculations?  Please clarify without making the 
>semantics more vague.

You seem to be arguing that when I beep the horn on my car, the
mechanism in the car is somehow performing a calculation.  If you
take the meaning of "computation" to be that broad, then everything
is a computation, and the word "computation" becomes useless for
it fails to discriminate.

0
Neil
11/18/2008 11:10:18 PM
On Nov 19, 9:58 am, "Isaac" <gro...@sonic.net> wrote:
> "Josip Almasi" <j...@vrspace.org> wrote in message

> > Actually there's yet another big question - is consciousness emergent
> > property of (sufficiently high) intelligence.
> > And there's an empirical part of it - how to make it:)
> > But seems noone around doubts it is an emergent property, so no need to
> > emphasise distinction.
>
> I tend to disagree.  I do not see the two being so intimately connected so
> as to require one to emerge from the other.  Intelligence might just be a
> boundary condition on the scope of consciousness (i.e., awareness).  For
> example, a severely retarded person is certainly far less intelligent;
> however, I don't think there is any evidence that they are far less
> conscious.  If consciousness emerged (necessarily?) from intelligence then
> shouldn't they be highly correlated?

The degree of awareness certainly doesn't seem to correlate directly
with the degree of intelligent behavior shown.  Intelligent behavior,
I suspect, also depends on how much the agent is aware of and on the
quality of the things the agent is aware of.

JC


0
casey
11/18/2008 11:54:59 PM
On Nov 17, 12:55 pm, c...@kcwc.com (Curt Welch) wrote:
> "Isaac" <gro...@sonic.net> wrote:
> > OK, there has been some interest.  So far, two takers.  I've emailed a
> > copy of the paper.  So, here is my first installment.  Not all issues
> > will resonate with everyone so pick and choose what you find interesting
> > pro/con and I will defend any of my comments.
>
> > I will post the paragraph(s) I have a comment about, and highlight the
> > particular words at issue by enclosing them between "***" characters.
> > I'll also include citations in the paper when helpful. I seek
> > (intelligent and informed) technical/theoretical critique or feedback
> > from anyone on this particular issue.
>
> > First, critique, page 11, line 20:
> > "I agree that it is time for a positive account of Heideggerian AI and of
> > an underlying Heideggerian neuroscience, but I think Wheeler is the one
> > looking in the wrong place.  Merely by supposing that Heidegger is
> > concerned with problem solving and action oriented representations,
> > Wheeler's project reflects not a step beyond Agre but a regression to
> > aspects of pre-Brooks GOFAI.  Heidegger, indeed, claims that skillful
> > coping is basic, but he is also clear that all coping takes place on the
> > background coping he calls being-in-the-world that doesn't involve any
> > form of representation at all.
>
> > see: Michael Wheeler, Reconstructing the Cognitive World, 222-223.
>
> > Wheeler's cognitivist misreading of Heidegger leads him to overestimate
> > the importance of Andy Clark's and David Chalmers' attempt to free us
> > from the Cartesian idea that the mind is essentially inner by pointing
> > out that in thinking we sometimes make use of external artifacts like
> > pencil, paper, and computers.[i]  Unfortunately, this argument for the
> > extended mind preserves the Cartesian assumption that our basic way of
> > relating to the world is by using propositional representations such as
> > beliefs and memories whether they are in the mind or in notebooks in the
> > world.  In effect, while Brooks happily dispenses with representations
> > where coping is concerned, all Chalmers, Clark, and Wheeler give us as a
> > supposedly radical new Heideggerian approach to the human way of being in
> > the world is to note that memories and beliefs are not necessarily inner
> > entities and that, ***therefore, thinking bridges the distinction between
> > inner and outer representations.*** "
>
> > My comment:
> > "Assuming that by "thinking" you mean conscious thought, I cannot see
> > how thinking is a bridge that necessarily follows from memories/beliefs
> > not being solely inner entities.
>
> I find it extremely hard to even grasp what point is being made here by
> talking about "thinking bridging a distinction".  The concept is mostly
> just meaningless to me.  What type of "bridging" is this making a reference
> to???
>
> > It seems to me that inner and outer
> > representations can be bridged without thought.  Isn't this what occurs
> > in an unconscious (reflex) reaction to a complex external event, which is
> > an automatic bridge and generates a thoughtful, usually accurate response
> > but often before we even have a chance to think about it.
>
> I really don't like this common idea that some mental events are
> "conscious" and others are "subconscious" and as such that makes some a
> "reflex".
>
> All mental activity is a reflex.

This is silly nonsense.  Perhaps *you* do not ever think about what
you are going to think about (and that would explain a lot), but do
not assume that others do not partake of directed thought processes/
scenarios.

I can simply will myself to think about a blue cube for example! And
then proceed to do so.


> We do not "think about" whether we should
> "think about" something before we "think about it".
>
> The brain spontaneously produces private mental events which might take the
> form of words that form some logical argument as to why we should or should
> not take some future action.  Then, following that, we produce a
> spontaneous action - we either do it, or we don't.
>
> The entire process was in effect sub-conscious.  That is, we have no way to
> report _why_ we produced the behavior we did. But yet, we often pretend
> that we do understand our actions when for the most part, we don't. We say
> things like "I thought about it and decided I should do it".  This creates
> the illusion that our thought (the string of words we produced in our head)
> was the prime cause of our action - when it was not a prime cause by any
> means.  As many times as not, the logical arguments we produce are just
> justifications to allow us to do the things we wanted to do, but yet had no
> real understanding why we want that instead of wanting something else.
>
> I believe all our behavior is simply conditioned into us by fairly
> straightforward processes of reinforcement.  How this works is mostly
> hidden from us, and as such, we call it the subconscious.  However, this
> subconscious process the brain uses to select which behavior it will
> produce controls everything we do. It controls whether we produce a series
> of internal private words before we move our arms and legs, and it controls
> when we move our arms and legs.  We have been conditioned both to produce
> language at times, and we have been conditioned to respond to language.

Too bad your whole argument falls apart due to the existence and
efficacy of willful thought, directed thought and algorithmic
thought scenarios.


0
Alpha
11/19/2008 12:15:16 AM
Josip Almasi <joe@vrspace.org> wrote in
news:gfujgq$4el$1@gregory.bnet.hr: 

>> Disputes about representationalism appear in AI discussions because
>> the disputants are not distinguishing between intelligence and
>> consciousness. The latter almost certainly entails
>> representationalism; the former need not, but natural intelligent
>> systems may employ it. It's an empirical question.
 
> Actually there's yet another big question - is consciousness emergent 
> property of (sufficiently high) intelligence.

It is more likely an emergent property of a particular implementation 
strategy for intelligence.

> As for empirical part, representationalism is sort of top-down
> approach, while say Curt seems to prefer bottom-up approach.
> Can't say for Dreyfus though:)

How natural intelligent systems evolved is a different question from their 
structure and function. Artificial systems might be designed which have 
different structures, but perform the same functions and are just as 
intelligent (or more so).

0
Publius
11/19/2008 2:24:28 AM
On Nov 19, 11:15 am, Alpha <omegazero2...@yahoo.com> wrote:
> On Nov 17, 12:55 pm, c...@kcwc.com (Curt Welch) wrote:

> > I really don't like this common idea that some
> > mental events are "conscious" and others are
> > "subconscious" and as such that makes some a
> > "reflex".
> >
> > All mental activity is a reflex.
>
> This is silly nonsense.  Perhaps *you* do not ever
> think about what you are going to think about ,
> (and that would explain a lot), but do not assume
> that others do not partake of directed thought
> processes/scenarios.

Over the many exchanges with Curt I have come to
realize he has an inability to think with high level
abstractions, and that anyone who does so is seen by
him to be suggesting something extra or non-physical
is at play.

For him everything reduces to reflexes etc. This is
equivalent to a programmer who says everything
reduces to binary patterns and logic gates. Although
this is true, it is not a very useful level at which
to explain the workings of a program. And that
everything reduces to reflexes is not a very useful
level at which to talk about high level brain behaviors.

So for him if we talk about a "conscious" process
he claims the actions of logic gates and neurons
must also be a conscious process for that is what
the physical entity reduces to. The strange belief
that the parts of the whole always embody the same
properties as the whole.

JC
0
casey
11/19/2008 4:00:10 AM
On Nov 18, 6:24 pm, Publius <m.publ...@nospam.comcast.net> wrote:
> Josip Almasi <j...@vrspace.org> wrote in news:gfujgq$4el$1@gregory.bnet.hr:
>
> >> Disputes about representationalism appear in AI discussions because
> >> the disputants are not distinguishing between intelligence and
> >> consciousness. The latter almost certainly entails
> >> representationalism; the former need not, but natural intelligent
> >> systems may employ it. It's an empirical question.
> > Actually there's yet another big question - is consciousness emergent
> > property of (sufficiently high) intelligence.
>
> It is more likely an emergent property of a particular implementation
> strategy for intelligence.

There used to be a notion that an eye couldn't develop because of
its complexity.  The assertion was that nowhere along the way
would the components have any survival value.  I believe that dead
horse has been whipped enough.  It seems to me that the same
applies to the development of consciousness in our set of evolved
systems.  I'm not completely sure what the precursors were.

> > As for empirical part, representationalism is sort of top-down
> > approach, while say Curt seems to prefer bottom-up approach.
> > Can't say for Dreyfus though:)
>
> How natural intelligent systems evolved is a different question from their
> structure and function. Artificial systems might be designed which have
> different structures, but perform the same functions and are just as
> intelligent (or more so).

As with the eye, certain functional systems can be designed
without passing through all the evolutionary precursor steps.
This still leaves the open question as to whether or not
the implementation of the function(s) from which human
consciousness emerges necessarily entails that consciousness
will emerge from all implementations of the function(s).
0
forbisgaryg
11/19/2008 4:03:52 AM
I think we would be more successful in continuing to work at some
"good" questions and perhaps create as a by-product some kind of
artificial "intelligence". After all that would appear to be how
"intelligence" got started.

So for example; how do we create an *actual* unlimited supply of
renewable energy?

I imagine that from these types of problems a form of artificial
intelligence will arise.
0
turtoni
11/19/2008 4:19:37 AM
Alpha <omegazero2003@yahoo.com> wrote:
> On Nov 17, 12:55 pm, c...@kcwc.com (Curt Welch) wrote:

> > All mental activity is a reflex.
>
> This is silly nonsense.  Perhaps *you* do not ever think about what
> you are going to think about , (and that would explain a lot), but do
> not assume that others do not partake of directed thought processes/
> scenarios.
>
> I can simply will myself to think about a blue cube for example! And
> then proceed to do so.

The idea of a "blue cube" just popped into you head at some point as you
were witting this response.  Right?  Did you will yourself to think about a
blue cube before you first thought about the blue cube?  Of course not.  At
some point there, the idea of a blue clue showed up in your thoughts
without any prior will to think about blue cubes on your part.  Why did
that happen?

It happened because you had some sort of thought such as "what is a good
object to give as an example"?  And as a _reflex_ to that thought, the
"blue cube" idea showed up in your thoughts.  And as a _reflex_ to the
"blue cube" thought in the context of "create thought example", you
produced the sentence in the post about "I can will myself to think about a
blue cube and then think about a blue cube".

The point here is that you can't will yourself to think about a blue cube
before you first think of what you are going to will yourself to think
about.

Whether we call this sort of thought sequence your "will" is irrelevant.
It's still happening as a reflex to what was just happening in your brain.
Each thought we have follows from the current context set up by recent past
events in the brain.  The path of thoughts that get produced based on
context is a function of how our brain has been conditioned by a lifetime
of experience.

My brain doesn't produce this constant stream of English words because it
was built by the DNA to produce sequences of English words.  My environment
conditioned me to string these sounds/words together in this sequence.  It
makes no difference if I'm talking about the stream of words being produced
as private thoughts in my brain or the stream of words that get typed into
the Usenet message.  It's all just conditioned behavior coming out of me as
a constant stream of brain behaviors based on recent past context.  And
because the brain has feedback loops in it, that context is based both on
recent past sensory inputs as well as recent past brain behaviors.
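
A toy way to picture that feedback loop (the behaviors and probabilities
below are invented for illustration only):

import random

# Each "behavior" is a conditioned reaction to the recent context, and the
# selected behavior is fed back in as the next context - the feedback loop.
conditioned = {
    "start":      [("think-word", 0.7), ("move-arm", 0.3)],
    "think-word": [("think-word", 0.5), ("speak", 0.5)],
    "speak":      [("move-arm", 0.6), ("think-word", 0.4)],
    "move-arm":   [("start", 1.0)],
}

def next_behavior(context):
    options, weights = zip(*conditioned[context])
    return random.choices(options, weights=weights)[0]

context = "start"
stream = []
for _ in range(6):
    behavior = next_behavior(context)  # reaction to the recent context
    stream.append(behavior)
    context = behavior                 # the output becomes the next context
print(stream)

Nothing outside the conditioned table and the recent context is needed to
generate the stream.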

What you talk about as will is just the fact that one brain behavior is
likely to regulate the next brain behavior.  But when behavior A controls
behavior B, we say that B is a reflex to A.  It's the same thing no matter
which way you choose to talk about it.

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
11/19/2008 4:41:45 AM
turtoni <turtoni@fastmail.net> wrote:
> I think we would be more successful in continuing to work at some
> "good" questions and perhaps create as a by-product some kind of
> artificial "intelligence". After all that would appear to be how
> "intelligence" got started.
>
> So for example; how do we create an *actual* unlimited supply of
> renewable energy?

Well, that's a silly question.  There's no such thing as "renewable
energy".

The best energy source around here is the sun, so the real energy problem
is simply a question of how do we best tap the energy flow from the sun.
Answering that gives us a few billion years of energy (not renewable).

> I imagine that from these types of problems a form of artificial
> intelligence will arise.

Well, the better question along that line is more like:

  How do we build a machine that can survive on its own without human help?

People working on real world robotics questions are already trying to solve
this problem and their work is already pushing AI towards higher levels of
intelligence.

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
11/19/2008 4:48:03 AM
On Nov 18, 11:48 pm, c...@kcwc.com (Curt Welch) wrote:
> turtoni <turt...@fastmail.net> wrote:
> > I think we would be more successful in continuing to work at some
> > "good" questions and perhaps create as a by-product some kind of
> > artificial "intelligence". After all that would appear to be how
> > "intelligence" got started.
>
> > So for example; how do we create an *actual* unlimited supply of
> > renewable energy?
>
> Well, that's a silly question.  There's no such thing as "renewable
> energy".

Well, that's a silly answer. There's no such thing as no such thing.
But that's another story.

Anyway: "A natural resource qualifies as a renewable resource if it is
replenished by natural processes at a rate comparable or faster than
its rate of consumption by humans or other users."

> The best energy source around here is the sun, so the real energy problem
> is simply a question of how do we best tap the energy flow from the sun.
> Answering that gives us a few billion years of energy (not renewable).

"The vast power radiated by our sun is generated by the fusion process
wherein light atoms combine with an accompanying release of energy. In
nature, proper conditions for fusion occur only in the interior of
stars. Researchers are attempting to produce the conditions that will
permit fusion to take place on earth."

> > I imagine that from these types of problems a form of artificial
> > intelligence will arise.
>
> Well, the better question along that line is more like:
>
>   How do we build a machine that can survive on its own without human help?

How do humans survive without human help? Typically not too well..

> People working on real world robotics questions are already trying to solve
> this problem and their work is already pushing AI towards higher levels of
> intelligence.

Good luck but don't be surprised if other fields of research beat them
to man made artificial intelligence.

> --
> Curt Welch                                            http://CurtWelch.Com/
> c...@kcwc.com                                        http://NewsReader.Com/

0
turtoni
11/19/2008 5:17:46 AM
"Isaac" <groups@sonic.net> wrote:
> "Publius" <m.publius@nospam.comcast.net> wrote in message
> news:Xns9B57F131CB7B0mpubliusnospamcomcas@69.16.185.250...
> > "Isaac" <groups@sonic.net> wrote in
> > news:491f9f87$0$33506$742ec2ed@news.sonic.net:
> >
> >> Minsky, unaware of Heidegger's critique, was convinced that
> >> representing a few million facts about objects including their
> >> functions, would solve what had come to be called the commonsense
> >> knowledge problem.  It seemed to me, however, that the deep problem
> >> wasn't storing millions of facts; it was knowing which facts were
> >> relevant in any given situation.  One version of this relevance
> >> problem was called "the frame problem."  If the computer is running a
> >> representation of the current state of the world and something in the
> >> world changes, how does the program determine which of its represented
> >> facts can be assumed to have stayed the same, and which would have to
> >> be updated?
> >
> > Dreyfus is pointing out one consequence of the lack of a useful
> > definition of "intelligence."
> Actually, he is doing much more than that in his paper.  I just posted a
> portion of its background section, but his paper sets forth what he
> believes is why AI fails and how he (and certain philosophers/researchers
> he relies on) thinks intelligence works.  I will email you the paper for
> your reference.  Let me know if you are interested in me posting my
> critiques of his paper for your response.  Maybe you will defend his
> positions?
>
> >It is problem which plagues most programs for producing
> > AI (which is not to deny that much progress has been made in that
> > endeavor).
> >
> > We may define "intelligence" as, "The capacity of a system to generate
> > solutions to novel problems," and "problems" as, "Obstacles or
> > impediments preventing the system from attaining a goal."
> I defy you to contrive a definition of Intelligence that works.  For
> example, using your current definition above, the Earth would be
> intelligent because it is a system with the capacity to generate
> solutions (e.g., extremely complex, yet stable atmospheric weather, ocean
> currents, etc.) to solve novel problems of, for example, maintaining a
> stable global temperature in the face of many (thousands) changing
> (novel) variables that are constant obstacles preventing the Earth (Gaia?)
> from attaining her goal of minimizing temperature differences globally.

The earth is intelligent.  So is the universe as a whole.

Life looks like it was designed by intelligence because
it was.  The process of evolution is just one more example of the many
intelligent processes at work in the universe.  Evolution is an example of
a reinforcement learning process and I basically consider all reinforcement
learning processes to be examples of intelligence.

> There are many similar examples that use your language but are not
> considered to be intelligent to anyone reasonable in science.  Care to
> update your definition or defend it?

Many people in science have no clue what they are talking about when they
use the word "intelligence".  As such, they define what is, and what isn't
intelligent based on total nonsense and ungrounded speculation - as I've
said before - without using any empirical evidence to argue from.

Of course that doesn't stop them, because they like to claim things such as
"subjective experience is outside the scope of empirical evidence".
And then they tell us what _their_ subjective experience is like and use
their beliefs about their own subjective experience to "prove" an endless
list of nonsense ideas about the universe.

The typical argument and thought path starts with the belief that human
consciousness is something that exists only in humans.  Then from there,
they make the argument that since humans have this magical attribute called
consciousness and other things like the Earth don't, that intelligence
requires consciousness.  But since they don't have any clue what creates
human consciousness, they also don't have any clue what creates
intelligence and don't really have any way to determine if the earth is
intelligent or not.

And when asked to explain what evidence they have to suggest this attribute
exists only in humans, they use the self-serving argument that since they
"know" it exists in them, and that other humans are physically similar to
them, this stuff they know exists in them must also exist in others.

But all that argument and the arguments that grow from it are based on a
belief that has no support.  The belief that "consciousness" is something
other than simple brain function.  That consciousness is not an identity
with physical brain function.

However, all the empirical evidence we have tells us that assumption is
wrong.  And if we choose to believe what the empirical evidence shows us
(materialism) - then we know that there is nothing here to explain, other
than the physical signal processing that happens in the brain which
produces human behavior.

Once you grasp the significance of what the empirical evidence is telling
us, all the need of defining intelligence as some sort of link with "being
conscious" goes away.  We are left with defining intelligence is some class
of signal processing algorithm that describes how the brain works.  And
though there are multiple options there, none of them make intelligence
hard to understand. It's no harder to understand than any typical machine
learning algorithm for example.

I choose to use the fairly broad and generic definition of intelligence
being a reinforcement learning system which allows the concept to include
many processes other than just what the brain does - such as the process of
evolution.

You could easily restrict the definition to something closer to what the
brain does, which would be something more like a real time distributed
parallel signal processing network trained by reinforcement instead of the
far broader "all reinforcement learning processes" I like to use.

> > Introducing goals into the definition gives us a handle on the "frame
> > problem": the problem is framed by the current goal.
>
> Goals are really related to the frame problem in that the "frame" that
> matters is the one that reflects "reality" in the context of your
> priorities, experience, and world model (e.g., a filter) as a situated
> agent.  Goals are just one priority, but goals do not really drive
> perception; they mostly seek to manipulate the frame to achieve a desired
> result.  Loosely, I think
> the frame problem is much more about constructing a useful and tractable
> model that the situated agent can use towards building a plan to achieve
> its goals.

Well, I think "goal" is the wrong way to understand the operation of the
brain though it's not too far off.

The true goal of a reinforcement learning machine is to maximize expected
future reward.  So it's a reward maximizing machine with one prime goal.

What the prime goal translates into is some internal systems of values for
all possible behaviors which in turn translates into some behavior
probability distribution.  This in turn must drive whatever mechanism is in
place to select behaviors.  The system that decides what behavior to select
for the current context is using the internal system of values to pick
between alternatives.

Our higher level ideas of "goal seeking" are simply the fallout of the
lower level behavior selection system picking the best behaviors for any
given context.

When you translate the implementation of the system into a reward trained
behavior selection system, the frame problem doesn't even make much sense
to talk about.  The frame problem arises nearly as much out of incorrectly
framing the question of what the purpose of the agent is.  However, the
issues that surround the frame problem are real.  But they are all answered
in the context of a system which has the power to prioritize all possible
responses to stimulus signals.  That is, which reaction the system chooses
at any point in time based on its learned values (priorities if you like)
is the answer to how the system deals with the frame problem.  That is, the
one problem it must solve (how to select which behavior to use at any
instant in time) is the same answer to the frame problem.

Finding a workable implementation of such a system is the path to solving
AI.
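
A toy sketch of the kind of reward-trained behavior selector I'm describing
(the contexts, behaviors and numbers are made up for illustration, not a
claim about how the brain implements it):

import random
from collections import defaultdict

# Every (context, behavior) pair carries a learned value.  The system mostly
# picks the highest-valued behavior for the current context, and the reward
# that follows nudges that value up or down.
values = defaultdict(float)
behaviors = ["look-left", "look-right", "grab", "wait"]

def select(context, epsilon=0.1):
    if random.random() < epsilon:                 # occasional exploration
        return random.choice(behaviors)
    return max(behaviors, key=lambda b: values[(context, b)])

def reinforce(context, behavior, reward, rate=0.2):
    key = (context, behavior)
    values[key] += rate * (reward - values[key])  # move the value toward reward

# Train on one toy context: "grab" gets rewarded when an object is near.
for _ in range(200):
    b = select("object-near")
    reinforce("object-near", b, reward=1.0 if b == "grab" else 0.0)

print(select("object-near", epsilon=0.0))         # -> grab

There is no separate module in there deciding "which facts to update"; the
only question the system ever answers is which behavior to emit next.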

You said above that goals do not drive perception. That's just not true in
my view. I think our perception and our behavior selection are one and the
same problem.  Perception is a problem of behavior selection.

On the sensory input side of the network, the major function is perception,
but as the signals flow through the network, the function transforms into
behavior selection.  So near the output side of the network, it's mostly
"goal driven" an on the input side it's mostly "perception driven" but I
believe it's a fairly even continuum though the network as raw sensory data
is translated to raw effector output data.

We see how this works when we test color perception of people raised (aka
trained) in different cultures with different words for different ranges of
colors.  Our perception of color bends to correctly fit the classification
of light frequency labeled by the words of our language.

>
> >Attention is paid only
> > to world states which bear on the system's goals (as a background
> > process).
>
> Of course, goals do play an important role in how to focus attention, and
> to some extent this colors the frame problem, but I do not see how it
> drives it exclusively as you put it.

It drives it exclusively in my view because behavior selection is all the
brain is doing and behavior selection works by picking behaviors that are
estimated to produce maximal expected return for the given context.  And
this general process of selecting the "best" behaviors at the lowest level
is both the mechanism which creates what we think of as goal seeking and
the behavior which we think of as attention focus.  I see them as one and
the same process at the low level.

> If I look at a movie I have no
> goals other than to be entertained, however, I clearly create the proper
> frames needed to understand and appreciate the movie and its meaning.

Yes, but what you call "creating a frame" I would call "picking the
appropriate behavior for the current context".

For example, we see a picture and we recognize various items in the
picture.  It might be a street scene for example with a car parked on the
street in front of a pizza restaurant.  How does our brain react to this
stimulus?  It will react by activating part of our brain which represents a
car, and another part of the brain which represents a city street, and
another part of the brain which represents a pizza restaurant.  But the
brain is doing this because it has been trained by experience to classify
that sensory data in that way.

If we expose a person who has never seen a car or never seen a city street
or never seen a pizza restaurant before (say someone who spent their entire
life in a jungle), their brain will react to this stimulus in a very
different way.  They might notice the tree in the background because it's a
type of tree they know very well but have no real clue what anything else
in the picture is.

This is because this person has never had experience with this type of
"frame" in the past and has very little experience with how to correctly
react to this combination of stimulus signals.  None the less, the brain
will still pick a reaction to the stimulus signal based on the past
experience the person has had.  For this guy, the "reaction" to the frame
might be to move the eyes to focus on the tree in the background because
all the city street stuff in the foreground looks mostly like "noise" to
him.

> In
> that case, I am actually trying to discover the goals of the movie, and
> not my own, to understand it.  No goals on my part, but the frame problem
> seriously exists.
>
> > If not enough information is in hand to solve the current problem, then
> > the
> > system returns to "the world" to gather additional information. (There
> > is no need to "store millions of facts." Facts are gathered as they are
> > needed, i.e., in light of the present goal and problem).
>
> I don't think anyone would say that classic AI would not return to the
> world to gather more facts to add to its "millions of facts".  The issue
> that Dreyfus says is the problem with AI is that it creates rules that
> are representations (or symbols) and are compartmentalized, both of which
> he says the Philosopher Heidegger espouses, which Dreyfus and his set of
> philosophers/researcher say is not the case.  I think every Intelligent
> system will end up effectively having a constantly evolving set of
> millions of "rules", so that is not the question.  Do you have any
> counter examples?
>
> Cheers!
> Ariel B.-

I don't fully understand what you are suggesting here because I don't tend
to read or study the work of the type of people you are studying.  I'm not
sure for example what the debate is on representations.

However, I do believe that there is a tendency in AI to build into the
machine the wrong type of representations.  For example, classic GOFAI tried
to build representations that were roughly equal to the level we would
create in our natural language.  Meaning, in the AI implementation, there
would be something like a database which was close in structure to a natural
language dictionary which attempted to define the meaning of the words
based on their relationships to other words and perhaps to sensory inputs.
This didn't work well because it was making the assumption that the brain
worked by manipulating word-level symbols.  These attempts made it obvious
that there was a much finer level of detail missing from our definitions -
the common sense problem.  Some thought that simply using more words would
solve that problem.  In theory I think that's correct, but in practice,
it's the wrong direction.  The entire GOFAI approach seemed to be based on
the idea that since humans produced sequences of words and we called
this type of behavior "thinking", we could make a computer "think" by
also producing sequences of words.  I think that is where the entire GOFAI
approach went off track.  How the brain works, and what the brain does, are
not the same thing.

However, just because the GOFAI approach ran into a wall after awhile, that
doesn't mean that using "symbols" to represent something was wrong.

The brain uses a common language of symbols to represent everything as
well.  Those symbols however are spikes.  Digital computers use 1 and 0
symbols to represent things.  Manipulating symbols which are
representations is exactly what the brain is doing.  Any argument to the
contrary is misguided.

The solution to creating human like behavior in a machine is to build
symbol manipulating machines (aka signal processors), but the symbols must
be a level closer to spikes or bits, than to English words.

I also think that the correct implementation is along the lines of a
connectionist network which is processing multiple parallel signal flows.
So from that perspective, I think it's more useful to think of the network
as a signal processing machine than as "representations with symbols".  But
it's the same thing no matter which way you talk about it.  An AM radio
signal is still a representation of the vibration of the air, and is also a
representation of the thoughts of the DJ which was speaking on the radio.
How you choose to label these systems is a matter of viewpoint far more than
a true matter of what the system is doing or how it works.

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
11/19/2008 5:22:13 AM
Josip Almasi <joe@vrspace.org> wrote:
> Publius wrote:
> >
> > Disputes about representationalism appear in AI discussions because the
> > disputants are not distinguishing between intelligence and
> > consciousness. The latter almost certainly entails representationalism;
> > the former need not, but natural intelligent systems may employ it.
> > It's an empirical question.
>
> Actually there's yet another big question - is consciousness emergent
> property of (sufficiently high) intelligence.
> And there's an empirical part of it - how to make it:)

Well, I think people in general are very confused about the entire subject
of consciousness, so when they talk about it, they are often talking about
many different aspects of humans.  However, I believe the foundation of
where all this confusion comes from, is in how the brain tends to model
itself.  That is, our entire understanding and perception of reality is
created by the processing that happens in our brain, and that includes our
perception of our own brain.  Our perception of self.  Because the brain
has only limited access to itself, the model it creates is also limited.
Based on the sensory data the brain has access to, it is forced to model
internally generated brain signals as having no association with external
sensory signals.  It does this simply because there is no temporal
correlation between auditory sensory data and the brain signals which
represent our private thoughts.  That is, we can not hear the brain making
any physical sound as our neurons fire. Because the brain makes no noise
(that our ears can pick up) when it operates, there is no correlation
between auditory sensory signals, and private thought signals.  And because
there is no correlation, the brain doesn't create an association between
these different signals.  This is true for all the physical sensory data
that flows into the brain.  We can't hear our private thoughts, we can not
feel the brain vibrate, we can not smell it.  There simply are no
correlations between our physical sensory data and our internal thought
signals.

The result of this fact is that the brain builds a model of this data by
indicating no association between private thoughts, and the physical events
represented in the sensory data.  The result of that is very simple - it
leaves us with a model of reality where thoughts are disconnected from all
things physical.

This model the brain builds based on the signals it has access to, is the
source of all the confusion about human consciousness.  It's why people
think the mind and the brain are two different things.  People think that,
because that's exactly the model of reality the brain builds to describe
itself.

The net result of this is that "consciousness" is the illusion of
separation between our thoughts and our physical body. It is the source of
the illusion that humans have a soul which is separate from the body and
the reason the mind body problem has been debated for hundreds of years.

When we look at the world around us, we believe we see the world as it is.
Rocks are hard, and pillows are soft because this is the way those things
actually are.  But in fact, rocks are hard and pillows are soft because
that's the way the brain has modeled them for us.  For the most part, the
model of reality the brain creates is an accurate and close match to the
way the universe actually is.  It's so close and so accurate that we just
accept that what we "see" is what is really there.  This faith in our own
ability to see what is "really there" is what creates the hard problem of
consciousness.

When people look at their own brain, they see a mind which is separate from
their physical body.  When I sense words and images popping up in my mind,
I don't in any sense "see" it as the firing of neurons.  And the fact that
I don't "see" it as the firing of neurons, or the physical actions of my
brain, is the illusion of consciousness.  We believe that what we "see"
when we look into our own mind, is what is "really there" because we trust
that when we look at things, we see what is really there.

The brain works so well at correctly modeling reality, that we make the
mistake of believing that what we see must be real.  If we see "thoughts"
that are separate from physical objects, we assume the thoughts must not be
physical.  But that's the error.  That's the source of the hard problem of
consciousness.  What we see is what the brain has decided is there, not
what is really there.  And the brain has decided that thoughts aren't
physical simply because the brain doesn't make any noise when it operates.

So, having said that, what exactly is "consciousness"?  Is it the fact
that we have a signal processing brain that is producing output reactions
in response to stimulus signals?  That's nothing special because all our
robots do that already.

Or should we say the robot was "conscious" only if it made the same error
humans make of believing its signal processing hardware existed in a domain
separate from all things physical?

The illusion of separation is not anything special that evolved in us for
odd reasons. It's just a normal and expected side effect of the way the
brain builds associations between signals.  It's known as classical
conditioning and it's already fairly well understood.  But what few seem to
grasp is that it's the reason all these humans think these "voices in their
head" are not physical events.

The reason the brain builds associations is also simple in basic nature.
It's a data compression technique which is an important feature of any
strong reinforcement learning system.  It's done to allow a finite amount
of internal signals to represent a maximal amount of information about the
state of the environment by removing redundancy in the signals.  Signals
that are correlated are merged to maximize the amount of unique information
in the signals.  It's an information maximizing function that has obvious
and clear advantage to the reinforcement learning problem.

The side effect of this, combined with the fact that our thoughts don't
produce external physical effects, is the illusion of separation between
mind and body and this endless waste of time by the philosophers debating
the mind body problem which they have no hope of solving on their own
because they aren't investigating the behavior of reinforcement learning
algorithms in real time parallel signal processing networks.

> But seems noone around doubts it is an emergent property, so no need to
> emphasise distinction.
> As for empirical part, representationalism is sort of top-down approach,
> while say Curt seems to prefer bottom-up approach.
> Can't say for Dreyfus though:)
>
> Regards...

There is value in looking at the problem from the top down.  But until the
bottom up understanding meets the top down work, the problem won't be
solved.

From the bottom up, we are going to first produce intelligent man made
machines, and then, we are going to use that understanding to resolve all
the unanswered questions about the implementation details of the brain.
Then, last in line, the philosophers will finally catch on to what
they never could figure out without all the empirical data from our working
AI machines and a full understanding of the brain.

I know exactly why everyone is confused about consciousness and I know (at
a high level) what the brain is doing and how it solves the AI problem
using a distributed signal processing network trained by reinforcement.
But it's going to take a lot of time for the rest of the world to catch up
to this because there's a large set of people who will never
understand the fact that the mind body problem is an illusion created by
the way the brain builds models from the data it receives.

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
11/19/2008 6:01:19 AM
turtoni <turtoni@fastmail.net> wrote:
> On Nov 18, 11:48 pm, c...@kcwc.com (Curt Welch) wrote:

> > People working on real world robotics questions are already trying to
> > solve
> > this problem and their work is already pushing AI towards higher levels
> > of intelligence.
>
> Good luck but don't be surprised if other fields of research beat them
> to man made artificial intelligence.

Which other fields are you thinking about?

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
11/19/2008 6:03:34 AM
On Nov 19, 3:41 pm, c...@kcwc.com (Curt Welch) wrote:
> Alpha <omegazero2...@yahoo.com> wrote:
>> On Nov 17, 12:55 pm, c...@kcwc.com (Curt Welch) wrote:
>> > All mental activity is a reflex.
>>
>> This is silly nonsense.  Perhaps *you* do not
>> ever think about what you are going to think about,
>> (and that would explain a lot), but do not assume
>> that others do not partake of directed thought
>> processes/scenarios.
>>
>>
>> I can simply will myself to think about a blue
>> cube for example! And then proceed to do so.
>
>
> The idea of a "blue cube" just popped into your head
> at some point as you were writting this response.
> Right?  Did you will yourself to think about a blue
> cube before you first thought about the blue cube?
> Of course not.  At some point there, the idea of a
> blue cube showed up in your thoughts without any
> prior will to think about blue cubes on your part.
> Why did that happen?
>
> It happened because you had some sort of thought
> such as "what is a good object to give as an example"?
> And as a _reflex_ to that thought, the "blue cube"
> idea showed up in your thoughts.  And as a _reflex_
> to the "blue cube" thought in the context of "create
> thought example", you produced the sentence in the
> post about "I can will myself to think about a blue
> cube and then think about a blue cube".

It is not *a* reflex but rather many reflexes that
turn the input "what is a good example" into the
output "a blue cube". However this low level
description you are so fond of explains nothing
of interest anymore than saying the molecular
machinery of the body is nothing more than the
interactions of molecules.


> What you talk about as will is just the fact that
> one brain behavior is likely to regulate the next
> brain behavior.  But when behavior A controls
> behavior B, we say that B is a reflex to A.  It's
> the same thing no matter which way you choose to
> talk about it.

Our ability to predict future actions in both ourselves
and others depends on our ability to form high level
descriptions of our behaviors. Sure they all reduce
to "reflexes" but that is not a useful level to talk
about when you want to discriminate between different
high level patterns of behavior.


JC

0
casey
11/19/2008 7:20:16 AM
On Nov 18, 8:19 pm, turtoni <turt...@fastmail.net> wrote:
> I think we would be more successful in continuing to work at some
> "good" questions and perhaps create as a by-product some kind of
> artificial "intelligence". After all that would appear to be how
> "intelligence" got started.

There's certainly a lot of progress to be made.  Humans didn't
evolve overnight.

> So for example; how do we create an *actual* unlimited supply of
> renewable energy?

or increase the efficiency of our designed systems to the point
where the supply of renewable energy seems unlimited.  Dissipating
heat can be a problem and humans have a limited operational
range.

> I imagine that from these types of problems a form of artificial
> intelligence will arise.

Some of us tilt at windmills while others use them to grind
wheat into flour.  If I can get a computer to do what I want
easily I'm satisfied.
0
forbisgaryg
11/19/2008 8:27:19 AM
On Nov 18, 9:41 pm, c...@kcwc.com (Curt Welch) wrote:
> Alpha <omegazero2...@yahoo.com> wrote:
> > On Nov 17, 12:55 pm, c...@kcwc.com (Curt Welch) wrote:
> > > All mental activity is a reflex.
>
> > This is silly nonsense.  Perhaps *you* do not ever think about what
> > you are going to think about (and that would explain a lot), but do
> > not assume that others do not partake of directed thought processes/
> > scenarios.
>
> > I can simply will myself to think about a blue cube for example! And
> > then proceed to do so.
>
> The idea of a "blue cube" just popped into your head at some point as you
> were writing this response.  Right?  Did you will yourself to think about a
> blue cube before you first thought about the blue cube?  Of course not.

Of course I did; I was thinking of what might be a good example of
directed thought that you might understand, and I consciously
searched my memory for such an example that I might have used before,
and I then located the blue cube directed thought. And thence
operationalized it.

So much for your contentions.

> At
> some point there, the idea of a blue cube showed up in your thoughts
> without any prior will to think about blue cubes on your part.  Why did
> that happen?

It did not happen that way; it was a directive to thought one to find
a thought two that had the qualities that I wished to express.

>
> It happened because you had some sort of thought such as "what is a good
> object to give as an example"?  And as a _reflex_ to that thought, the
> "blue cube" idea showed up in your thoughts.

It was not a reflex; it was a conscious find operation. Sorry, but
your inability to think outside your cave and without blinders on
dooms your critique.



> And as a _reflex_ to the
> "blue cube" thought in the context of "create thought example", you
> produced the sentence in the post about "I can will myself to think about a
> blue cube and then think about a blue cube".
>
> The point here is that you can't will yourself to think about a blue cube
> before you first think of what you are going to will yourself to think
> about.

But if I think about (in a thought @ t=0) what I want to think about in
the immediate future (a thought @ t=1) - that future thought is not a
reflex; on the contrary - it is the result of a conscious process to
find the best fit for the preconditions in the first thought.

>
> Whether we call this sort of thought sequence your "will" is irrelevant.

Nope; it is part and parcel of the cause-effect chain.

> It's still happening as a reflex to what was just happening in your brain.

Nope - perhaps all your thoughts are simply reflexive; and that would
explain your inability, as Casey puts it, to think more abstractly
(or even beyond some limited non-abstract areas).

> Each thought we have follows from the current context set up by recent past
> events in the brain.  The path of thoughts that get produced based on
> context is a function of how our brain has been conditioned by a lifetime
> of experience.
>
> My brain doesn't produce this constant stream of English words because it
> was built by the DNA to produce sequences of English words.  My environment
> conditioned me to string these sounds/words together in this sequence.

But the brain is beyond mere conditioning in that it can create new
thoughts that could not have been the direct result of prior thoughts
but come into being on their own.  Otherwise NO new ideas would come
forth.

> It
> makes no difference if I'm talking about the stream of words being produced
> as private thoughts in my brain or the stream of words that get typed into
> the Usenet message.  It's all just conditioned behavior coming out of me as
> a constant stream of brain behaviors based on recent past context.  And
> because the brain has feedback loops in it, that context is based both on
> recent past sensory inputs as well as recent past brain behaviors.
>
> What you talk about as will is just the fact that one brain behavior is
> likely to regulate the next brain behavior.  But when behavior A controls
> behavior B, we say that B is a reflex to A.

No, we do not say that; your antiquated classical viewpoint has been
outmoded for decades now; it seems you refuse to come out of your cave.

> It's the same thing no matter
> which way you choose to talk about it.

In your opinion, which as we have seen is astoundingly naive. It is
not the same thing, and that is why those with more knowledge about
these matters and who do not live in intellectual caves choose (*AHA!
- CHOOSE*) to talk about non-reflexive thought.
Alpha
11/19/2008 2:49:46 PM
On Nov 18, 10:22=A0pm, c...@kcwc.com (Curt Welch) wrote:
> "Isaac" <gro...@sonic.net> wrote:
> > "Publius" <m.publ...@nospam.comcast.net> wrote in message
> >news:Xns9B57F131CB7B0mpubliusnospamcomcas@69.16.185.250...
> > > "Isaac" <gro...@sonic.net> wrote in
> > >news:491f9f87$0$33506$742ec2ed@news.sonic.net:
>
> > >> Minsky, unaware of Heidegger's critique, was convinced that
> > >> representing a few million facts about objects including their
> > >> functions, would solve what had come to be called the commonsense
> > >> knowledge problem.  It seemed to me, however, that the deep problem
> > >> wasn't storing millions of facts; it was knowing which facts were
> > >> relevant in any given situation.  One version of this relevance
> > >> problem was called "the frame problem."  If the computer is running a
> > >> representation of the current state of the world and something in the
> > >> world changes, how does the program determine which of its represented
> > >> facts can be assumed to have stayed the same, and which would have to
> > >> be updated?
>
> > > Dreyfus is pointing out one consequence of the lack of a useful
> > > definition of "intelligence."
> > Actually, he is doing much more than that in his paper.  I just posted a
> > portion of its background section, but his paper sets forth what he
> > believes is why AI fails and how he (and certain philosophers/researchers
> > he relies on) thinks intelligence works.  I will email you the paper for
> > your reference.  Let me know if you are interested in me posting my
> > critiques of his paper for your response.  Maybe you will defend his
> > positions?
>
> > >It is a problem which plagues most programs for producing
> > > AI (which is not to deny that much progress has been made in that
> > > endeavor).
>
> > > We may define "intelligence" as, "The capacity of a system to generate
> > > solutions to novel problems," and "problems" as, "Obstacles or
> > > impediments preventing the system from attaining a goal."
> > I defy you to contrive a definition of Intelligence that works.  For
> > example, using your current definition above, the Earth would be
> > intelligent because it is a system with the capacity to generate
> > solutions (e.g., extremely complex, yet stable atmospheric weather, ocean
> > currents, etc.) to solve novel problems of, for example, maintaining a
> > stable global temperature in the face of many (thousands) changing
> > (novel) variables that are constant obstacles preventing the Earth (Gaia?)
> > from attaining her goal of minimizing temperature differences globally.
>
> The earth is intelligent.  So is the universe as a whole.
>
> Life looks like it was designed by intelligence because
> it was.  The process of evolution is just one more example of the many
> intelligent processes at work in the universe.  Evolution is an example of
> a reinforcement learning process

No, it is not.

>and I basically consider all reinforcement
> learning processes to be examples of intelligence.
>
> > There are many similar examples that use your language but are not
> > considered to be intelligent to anyone reasonable in science.  Care to
> > update your definition or defend it?
>
> Many people in science have no clue what they are talking about when they
> use the word "intelligence".  As such, they define what is, and what isn't
> intelligent based on total nonsense and ungrounded speculation - as I've
> said before - without using any empirical evidence to argue from.
>
> Of course that doesn't stop them, because they like to claim things such as
> "subjective experience is outside the scope of empirical evidence".
> And then they tell us what _their_ subjective experience is like and use
> their beliefs about their own subjective experience to "prove" an endless
> list of nonsense ideas about the universe.
>
> The typical argument and thought path starts with the belief that human
> consciousness is something that exists only in humans.

No, that is not the typical argument; most in the fields of cognitive
science and biology, for example, believe that creatures other than
humans have consciousness.

You spout nonsense once again; your premises are incorrect/false and
therefore so is your entire argument.


> Then from there,
> they make the argument that since humans have this magical attribute called
> consciousness and other things like the Earth doesn't, that intelligence
> requires consciousness.

No, they do not suppose that I requires C. Deep Blue has chess-domain
intelligence but is not conscious in the least.

> But since they don't have any clue what creates
> human consciousness,

Yes - there are clues - many of them.  Brain synchrony (local or
partial), for example, which addresses the binding problem.

> they also don't have any clue what creates
> intelligence and don't really have any way to determine if the earth is
> intelligent or not.
>
> And when asked to explain what evidence they have to suggest this attribute
> exists only in humans, they use the self serving argument that since they

There is no "they" there; that is one of your false premises.

You argue here just like you argue elsewhere - strawmen and red
herrings and falsehoods and misconceptions galore.


>
> "known" it exists in them, and that other humans are physically similar t=
o
> them, that this stuff they known exists in them must also exist in others=
..
>
> But all that argument and the arguments that grow from it are based on a
> belief that has no support. =A0The belief that "consciousness" is somethi=
ng
> other than simple brain function. =A0That consciousness is not an identit=
y
> with physical brain function.

There are few that believe that C is not generated by brain function.
False premise again.

>
> However, all the empirical evidence we have tells us that assumption is
> wrong.  And if we choose to believe what the empirical evidence shows us
> (materialism) - then we know that there is nothing here to explain, other
> than the physical signal processing that happens in the brain which
> produces human behavior.

That's it folks - that is *all* there is to understanding everything
- it is just reinforcement learning - or wait - maybe it is just *all*
signal processing!  BWHAHAHAHAHA!  You cannot even get your misshapen
characterizations straight.



>
> Once you grasp the significance of what the empirical evidence is telling
> us, all the need of defining intelligence as some sort of link with "being
> conscious" goes away.

Strawman.

> We are left with defining intelligence as some class
> of signal processing algorithm that describes how the brain works.  And
> though there are multiple options there, none of them make intelligence
> hard to understand. It's no harder to understand than any typical machine
> learning algorithm for example.
>
> I choose to use the fairly broad and generic definition of intelligence
> being a reinforcement learning system which allows the concept to include
> many processes other than just what the brain does - such as the process of
> evolution.
>
> You could easily restrict the definition to something closer to what the
> brain does, which would be something more like a real time distributed
> parallel signal processing network trained by reinforcement instead of the
> far broader "all reinforcement learning processes" I like to use.
>
> > > Introducing goals into the definition gives us a handle on the "frame
> > > problem": the problem is framed by the current goal.
>
> > Goals are really related to the frame problem in that the "frame" that
> > matters is the one that reflects "reality" in the context of your
> > priorities, experience, and world model (e.g., a filter) as a situated
> > agent.  Goals are just one priority, but goals do not really drive
> > perception; they mostly seek
> > to manipulate the frame to achieve a desired result.  Loosely, I think
> > the frame problem is much more about constructing a useful and tractable
> > model that the situated agent can use towards building a plan to achieve
> > its goals.
>
> Well, I think "goal" is the wrong way to understand the operation of the
> brain though it's not too far off.
>
> The true goal of a reinforcement learning machine is to maximize expected
> future reward.  So it's a reward maximizing machine with one prime goal.

Wrong of course. There are multiple interacting goals in any human.
And of course, humans often do things that have little to do with one
goal, or even with several important goals that you would think would
produce consistently intelligent behavior - but they do not!
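
For concreteness, here is a minimal sketch in Python of the "maximize
expected future reward" idea being argued over - a tabular Q-learning loop
on an invented two-state, two-action toy world.  The environment, constants,
and names are illustrative assumptions, not anything from the thread:

import random

# Toy world (assumed): action 1 taken in state 0 pays off, everything else does not.
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}

def step(state, action):
    reward = 1.0 if (state == 0 and action == 1) else 0.0
    next_state = 1 - state
    return reward, next_state

state = 0
for _ in range(1000):
    # epsilon-greedy: mostly pick the action with the highest learned value
    if random.random() < epsilon:
        action = random.choice((0, 1))
    else:
        action = max((0, 1), key=lambda a: Q[(state, a)])
    reward, nxt = step(state, action)
    # Q-update: nudge the estimate toward reward + discounted best future value
    best_next = max(Q[(nxt, a)] for a in (0, 1))
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = nxt

print(Q)   # Q[(0, 1)] ends up largest: the learned policy favors the rewarded action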


<remainder of nonsense snipped>
0
Alpha
11/19/2008 3:03:33 PM
On Nov 17, 5:01 pm, c...@kcwc.com (Curt Welch) wrote:
> "Isaac" <gro...@sonic.net> wrote:
> > I completely disagree. Beyond learning and building knowledge, AI also
> > includes transcendental aspects of consciousness and self (soul?), which
> > are in metaphysics.  Do you really think there is an E=mc^2 equation for
> > that?
>
> Most definitely.  As I said, I'm a strict physicalist.  To me, the belief
> that consciousness is something other than physical brain function is just
> a widely held illusion or myth.  People believe it, and debate it, just
> like they waste their time believing, and debating, the nature of God.
> It's just silly crap man made up for reasons that have nothing to do with
> the nature of reality.
>
> > AI also covers the creation and appreciation of beautiful things, which
> > is in the 3rd pillar of philosophy: esthetics.
>
> Beauty is created by the value the brain assigns to sensations and those
> values are there because humans are reinforcement learning machines.
> There's nothing more to it than that.  Beauty isn't a mystery.  It's simple
> and obvious once you understand what we are - reinforcement learning
> machines.
>
> However, this is exactly the type of thing which is nearly impossible to
> understand by using philosophy alone to try and uncover the nature of
> beauty.
>
> > So, I believe AI touches
> > on nearly all aspects of philosophy.
>
> Yes, I agree completely with that.  Philosophy is one of many human
> behaviors and if you don't understand where human behavior comes from and
> what controls it, you will have no hope of answering any of the big
> questions of philosophy such as the mind body problem and the nature of
> consciousness or the nature of aesthetics.  Those questions can't be
> answered from within the field of philosophy alone.  All you can do from
> within philosophy is identify which concepts are compatible with each other
> and which are not - you can't identify which set of beliefs is a valid
> description of reality without checking the beliefs against empirical data
> - which is something philosophy chooses to treat as being outside its
> domain.
>
> All you can do from within philosophy is create multiple possible answers.

Incorrect; philosophy, in relation to science, is the *question-
generating* or *question-vetting* operation.

> You can't tell which is correct or how correct or incorrect a given
> approach might be.
>
> > Moreover, (reverse) engineering
> > will not solve the problem and may actually lead to many dead ends by
> > just finding ways to go nowhere quicker and better.  It will take a new
> > theory and philosophy to do it.
>
> Reverse engineering has already solved it.  Many philosophers however don't
> understand this because they have created such a huge cloud of confusion by
> spending so much time debating all the impossible answers they can't get a
> grip on what the truth is.
>
> > Think of it like trying to empirically come up with QED or Relativity w/o
> > any new theory or philosophy of physics.
>
> You have started with the assumption that there is something there
> (consciousness) which is fundamentally hard to understand and explain.
> Your assumption is invalid.  Your assumption is created by a simple to
> explain brain function which created in all of us a natural illusion.

Please explain how the brain generates consciousness.  Since it is simple
in your mind, it should be simple to tell us how that happens, get it
published in a refereed journal (perhaps the Journal of Consciousness
Studies or Brain and Behavior, etc.) and thence claim your Nobel.

> If
> you assume the illusion is real, you are left with the hard problem of
> consciousness.  If you assume the illusion is only an illusion, then there
> is no problem at all - all hard questions are answered and explained
> leaving a fairly simple material world to understand.  By Occam's razor, I
> choose the answer that makes everything simple and answers all the
> questions instead of picking the answer which creates contradictions that
> have no answer.

Please answer - oh sage - how the brain generates the thought and the
concomitant mind-visual accompaniment: blue cube? I want an exact,
specific answer now in terms of how the brain's APs or molecules represent
the visualization and the semantic content inherent in the thought "blue
cube".  Should be easy for you, right!

>
> But in philosophy, Occam's razor has no place.  All alternatives must be
> explored - as such, you are forced by your very charter to wander endlessly
> into utter silliness.  The hardness of the problem attracts you to explore
> the depths of the silliness endlessly hoping to find some "new theory" or
> sudden enlightened understanding which clears it all up.
>
> As an engineer, I have no need to explore such an improbable dead end.  If
> I missed something, the philosophers of the world will find it and explain
> it.

You have missed everything of import.

> But after hundreds of years not finding an answer, I'm not holding my
> breath on the expectation that there is something there when all evidence
> suggests there isn't anything there to be found.

But being in the cave, you would have no knowledge of what others
outside the cave have come up with.  Or when you do see the shadows of
such on your cave's walls, you dismiss them and skulk back to
reinforcement learning and signals (or wait...is it really all just
"particles" (haha) interacting) as the only thing that exists..
0
Alpha
11/19/2008 3:14:08 PM
On Nov 17, 10:59 am, Neil W Rickert <rickert...@cs.niu.edu> wrote:
> "Isaac" <gro...@sonic.net> writes:
> >2nd critique, on his page 12, line 4:
> >"Heidegger's important insight is not that, when we solve problems, we
> >sometimes make use of representational equipment outside our bodies, but
> >that being-in-the-world is more basic than thinking and solving
> >problems; that it is not representational at all.  That is, when we are
>
> There's that "being in the world" mystification.
>
>
>
>
>
> >My critique #2:
> >Does not the Heideggerian view, requiring this unity between the mind and the
> >world, result in a "contrived, trivial, and irrelevant" world representation
> >scheme in people when the events in the world are so far beyond a person's
> >ability to cope (relative to their internal representation/value system)
> >that they just end up contriving a trivial and irrelevant internal world
> >that is just projected onto a "best fit/nearest neighbor" of a
> >representation that they can cope with.  In this way, there is no absorbed
> >coping because it requires a perfect and accurate absorption scheme between
> >our mind (inner) and the world (outer) that does not exist and cannot be
> >magically created, even biologically.  If you ignore this aspect of the
> >Heideggerian view then what you end up with is nothing much more than an
> >"ignorance is bliss" cognitive model that is not too different from what you
> >say is wrong with Brooks' approach.  That is, your portrayal of the
> >Heideggerian view of absorbed coping would exactly model the thinking and
> >representation behavior of insects, which certainly is not the conscious,
> >cognitive model of humans.  Thus, this Heideggerian view of absorbed coping
> >is either insufficient to describe the human condition or it renders
> >insects indistinguishable from humans; either way it does not seem to
> >uniquely capture behavior at the level of human consciousness and is,
> >thus, flawed at best.  That is, if this Heideggerian view of absorbed
> >coping equally applies to any animals or insects then it is not really
> >helpful to modeling or shedding light on higher human intellectual
> >behavior, which, of course, is the sole subject/goal of AI.  Moreover, this
> >"perfect absorption" is a complete illusion and in practice will only exist
> >in the most predictable and simple situations.  From another angle, how is
> >this Heideggerian view of absorbed coping much different from the standard
> >psychological model of projection, where our internal model/representation is
> >simply projected onto the world (or a subset frame of it) and we just trick
> >ourselves into believing that we are completely and accurately absorbed with
> >the true essence of the frame problem?  This Heideggerian view of absorbed
> >coping seems to much better fit the unconscious aspects of the human
> >condition, which is more insect/animal like.  This all seems to be logically
> >flawed and/or a very weak foundation for grandiose conclusions about what
> >philosophical approach/model is needed to solve the frame problem and human
> >consciousness.  Maybe I am missing something critical here that can make
> >sense of it.  Please clarify the logic.
>
> I agree with the overall sense of your criticism, that Dreyfus
> is giving accounts in vague terms which don't really say a lot,
> and isn't giving a persuasive argument for this retreat to vagueness.
>
> >Any thoughts on this issue?
>
> Here are some of my own views.
>
> AI and epistemology take the problem to be "how do we use
> representations".  Dreyfus seems to want to do away with
> representations.  To an extent, I sympathize with Dreyfus, in that
> I see an over-reliance on representations in epistemology and in AI.
> But we cannot do away with them.  Clearly, people use representations
> in their use of natural language.
>
> To me, the question is not "how do we use representations?"  Rather,
> the question should be "how do we form representations in the first
> place?"  This seems to be a difficult problem.

Yes - and one that Curt glosses over (or maybe he has the Nobel-prize-
winning answer to the question I put to him in another post - we shall
see.)

> As best I can tell,
> our digital technology has not solved this problem.  We have solved
> some problems, in that we can digitize music for recording on CD,
> and we can digitize pictures with our digital cameras.  But the
> representations formed with these digitization methods are not at
> all similar to the representations that we see in our ordinary use
> of descriptions in natural language.

Nor are they similar to the way the brain may be doing so; there is no
evidence of digitization functions in the brain, even though we summarily
treat APs (fire/no-fire) as analogs (sorry for the pun) of the
digitization process.

>
> On page 2, Dreyfus writes "I was particularly struck by the fact
> that, among other troubles, researchers were running up against
> the problem of representing significance and relevance".  And that
> seems to be the crux of the problem.  AI people like to claim that
> AI systems can solve this problem.  But I don't see the evidence
> that they have solved it.  It seems to me that the computational
> hardware is wrong for this.  That is to say, I do not see how
> significance and relevance can be reduced to computation.  It seems
> more plausible to suggest that it can be reduced to homeostasis.

It comes down to meaning (AKA significance): how does meaning arise in a
cognizer? Once we crack that nut we can relate it to goals and
intentionality.

I think meaning arises in the relationships between a cognizer
and its external milieu, and also with its own internal memory
(thoughts related to other thoughts). And that meaning is in flux as
new "experiments" are done by the cognizer (that is one aspect
of how we learn - by experiment); the experiments alter the cognizer's
perception of their salience and thence contribute to the
goals of the cognizer.  E.g., a baby does an
experiment - touches a hot stove - withdraws its hand. Salience is high
(OWW!), meaning is thence high, and thus a goal is structured or
altered: do not go near the stove again.
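
A minimal sketch of that experiment -> salience -> meaning -> goal loop in
Python (the actions, numbers, and update rule are invented for illustration;
this is not a model anyone in the thread or in Dreyfus' paper proposes):

# Toy "baby and stove" loop: an action's felt salience updates the
# cognizer's goal structure (here, a simple avoidance value per action).
avoidance = {"touch_stove": 0.0, "touch_toy": 0.0}

def experiment(action):
    # the world answers the experiment with a pain signal (salience)
    return 1.0 if action == "touch_stove" else 0.0

def update_goals(action, learning_rate=0.8):
    salience = experiment(action)             # OWW! is high salience
    # high salience -> high meaning -> the goal structure is altered
    avoidance[action] += learning_rate * (salience - avoidance[action])

for action in ["touch_toy", "touch_stove", "touch_stove"]:
    update_goals(action)

print(avoidance)   # touch_stove now carries a strong "do not go near" value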

>
> Perhaps I should add that I'm an empiricist, at least in the
> broad sense.  Presumably rationalists can claim that significance
> and relevance are innate, and computation need only apply that.
> I don't see such an easy out for the empiricist.  And even for the
> rationalist, that doesn't really solve the problem.

0
Alpha
11/19/2008 3:26:34 PM
On Nov 18, 11:01 pm, c...@kcwc.com (Curt Welch) wrote:
<snip>

> I know exactly why everyone is confused about consciousness and I know (at
> a high level) what the brain is doing and how it solves the AI problem
> using a distributed signal processing network trained by reinforcement.

Please claim your Nobel already!

0
Alpha
11/19/2008 3:27:55 PM
Neil W Rickert wrote:

>>Dreyfus' solution to the "failed AI" is an embodied system based on a 
>>chaotic neural network like that of Walter Freeman's neurodynamics. 
>>So, how do you argue against this hypothesis?
 
> If it is chaotic, then one might reasonably expect human behavior
> to be chaotic.  But that seems a rather odd way of characterizing
> human behavior.

False dilemma. The brain can be a chaotic system, yet have embedded 
islands of coherence which can be permanent or temporary (much like a 
hurricane is an island of coherence in a chaotic weather system). The 
behavior of the organism is driven by the coherencies.

The brain (specifically, the cognitive neural network) is not likely as 
chaotic as weather systems, however. Some constraints on nodal 
relationships and data handling seem to be hard-wired.
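
A generic toy illustration of "islands of coherence embedded in a chaotic
system" (nothing specific to Freeman's neurodynamics; the parameter values
are just standard logistic-map examples): for most r near 4 the logistic map
wanders chaotically, but inside the period-3 window around r = 3.83 it
settles onto a stable repeating orbit.

def logistic_orbit(r, x0=0.4, burn_in=500, keep=6):
    # iterate x -> r*x*(1-x), discard the transient, return the tail
    x = x0
    for _ in range(burn_in):
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 4))
    return tail

print("chaotic regime  (r=3.90):", logistic_orbit(3.90))  # wanders irregularly
print("coherent island (r=3.83):", logistic_orbit(3.83))  # locks into a period-3 cycle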

> I am skeptical that there is much being retained in the form of
> stored representations as "past data".  It seems more likely that
> the brain is a bit like a finely tuned instrument.  The past data
> has played a role in adjusting the tuning, but has not been retained.

That is a surprising claim. How do you account for the ability of the 
system to draw a face or hum a tune "from memory"?

> The "motor plan" to grab an object is likely quite crude, and
> precise behavior results not from having a precise plan, but from
> measuring performance during the motor action and adjusting it where
> the measurement indicates it is off.  This would make measurement
> more important than computation.

More likely both are in play (not unlike guiding a spacecraft to Mars).

>>Isn't the frame problem mostly (if not all) about filtering the
>>intractable sensory information of any situation into a Gestalt of
>>only meaningful, important information.  This seems to be along the
>>lines of "common sense", which you cannot just ingore and expect to
>>meet or exceed human intelligence or behavior skills.
 
> That's not the way the frame problem is usually described.
> See http://en.wikipedia.org/wiki/Frame_problem for a more familiar
> version.

The first sentence of the article exemplifies its thesis: "In artificial 
intelligence, the frame problem was initially formulated as the problem 
of expressing a dynamical domain in logic without explicitly specifying 
which conditions are not affected by an action."

With the phrase "in artificial intelligence", the writer frames the 
scope of the inquiry and thus limits it.

> In any case, what matters here is the version of the frame
> problem being assumed by Dreyfus in the paper we are discussing.
> And I'm pretty sure he is taking it as the problem of updating
> stored representations so that they are consistent with changes in
> the world.  Thus he mentions that Rodney Brooks avoids the problem
> by designing his system to not depend on stored representations of
> the state of the world.

The implication of that thesis is that all problems are approached *de 
novo*, and all useful or necessary information gathered only after the 
problem presents itself. That is inefficient and contrary to 
observation. Updating databases (or models, and there is a difference) 
can be an ongoing background process, with attention paid only to 
changes germane to the organism's goals. But specific problems always 
require additional information not onboard, which the system must gather 
in real time. Having the database or model, however, enables  framing of 
that updating task.

> You seem to be arguing that when I beep the horn on my car, the
> mechanism in the car is somehow performing a calculation.  If you
> take the meaning of "computation" to be that broad, then everything
> is a computation, and the word "computation" becomes useless for
> it fails to discriminate.

Calculation is required when there are multiple inputs and/or outputs 
possible and some algorithm is required to map specific input conditions 
to specific outputs. E.g., if an interneuron requires inputs from at 
least two afferent neurons to fire, it is doing a calculation.
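
A minimal sketch of exactly that calculation (the threshold and input
patterns are chosen arbitrarily for illustration): an interneuron that fires
only when at least two afferents are active is computing a thresholded sum,
i.e., mapping specific input conditions to specific outputs.

def interneuron(afferents, threshold=2):
    # fire (1) only if the summed afferent activity reaches the threshold;
    # this mapping from input pattern to output is the "calculation"
    return 1 if sum(afferents) >= threshold else 0

print(interneuron([1, 0, 0]))   # 0: only one afferent firing
print(interneuron([1, 1, 0]))   # 1: two afferents firing
print(interneuron([1, 1, 1]))   # 1: still above threshold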
0
Publius
11/19/2008 3:33:54 PM
On Nov 17, 7:56 am, ©uæMuæPaProlij <1234...@654321.00> wrote:
> > I'm an engineer, not a philosopher.  As such, nearly everything you write
> > strikes me as silly and odd and misguided.  I hardly know where to begin to
> > comment.
>
> > I find this sort of philosophical debate to be a pointless and endless game
> > at trying to define, and redefine words to make them fit together in a more
> > pleasing way.  You can't solve AI by playing with words.  You have to do it
> > using empirical evidence.  It's not a problem which can be solved by pure
> > philosophy.
>
> I agree with you. Creating AI has nothing to do with philosophy.

Except that historically, the important questions have come either from
philosophers, or from other kinds of scientists who posed
philosophical questions about the Universe, per Bohm, especially when
instrumentation was limited (i.e., when our ability to *be*
empiricists (read: perform instrumented experiments) was limited).


> It is just a
> technical problem that needs better mathematical tools in order to solve it.

What sorts of math tools are you talking about that would solve basic
questions of how the brain, for example, represents a blue cube, or how APs
represent (if they do) a thought or a memory?

See Koch's The Biophysics of Computation for an example of
sophisticated math applied to biological function in the brain, but which
provides no clue as to representation, etc.

>
> Creating AI will reflect on philosophy in only one way - it will prove that some
> philosophers were wrong.

But then some will have been right!

0
Alpha
11/19/2008 3:36:49 PM
On Nov 17, 3:52 pm, ©uæMuæPaProlij <1234...@654321.00> wrote:
> >I completely disagree. Beyond learning and building knowledge, AI also includes
> >transcendental aspects of consciousness and self (soul?), which are in
> >metaphysics.  Do you really think there is an E=mc^2 equation for that?
>
> I don't know what "transcendental aspects of consciousness and self (soul?)" are
> and I don't care. I leave this to philosophers.
>
> > AI also covers the creation and appreciation of beautiful things, which is in
> > the 3rd pillar of philosophy: esthetics.  So, I believe AI touches on nearly
> > all aspects of philosophy.  Moreover, (reverse) engineering will not solve the
> > problem and may actually lead to many dead ends by just finding ways to go
> > nowhere quicker and better.  It will take a new theory and philosophy to do
> > it.
> > Think of it like trying to empirically come up with QED or Relativity w/o any
> > new theory or philosophy of physics.
>
> I can tell you one thing - if we must wait for philosophers to tell us how to make
> AI then philosophers who think it is impossible to make AI are right.

OTOH, if we plod along using math tools, without knowing what
questions we want to ask about mind/brain/AI, then the plodding will
continue as it has!


0
Alpha
11/19/2008 3:38:30 PM
Alpha wrote:
> On Nov 17, 5:01 pm, c...@kcwc.com (Curt Welch) wrote:
[...]
>> All you can do from within philosophy is create multiple possible answers.
> 
> Incorrect; philosophy, in relation to science, is the *question-
> generating* or *question-vetting* operation.
> 
[...]

Yes, that's what philosophers claim, and when it's done right (eg, tests 
the logic of the arguments, drilling down to the underlying assumptions, 
etc), then it can point up conceptual glitches, which in turn may lead 
to useful reformulations of the question(s).

But scientists are as good at doing this as are professional 
philosophers. Consider the recent attempts to remodel the history of the 
universe to eliminate the physically impossible singularity at the 
moment of the Big Bang. If space is assumed to be granular (ie, that 
space is not a continuum, but that there smallest bits of space), then a 
theory that eliminates that singularity is possible. But that assumption 
in turn raise the question of what those smallest bits of space are 
"embedded in." Which is a metaphor arising from our direct experience of 
objects in space.

How can there be smallest bits of something without those smallest bits 
being in something else? IOW, how can there be objects without some 
space for those objects to inhabit? That's a philosophical question. My 
metaphysics allows for it, because I think that "If there is an action, 
there must be an actor" is fallacious. IOW, "There can be action with no 
actor." Analogously, I can accept "There can be objects with no space."

If that sounds muddled, no surprise. It's damn difficult to avoid 
thinking in the patterns that language embodies. It's damn difficult to 
think in ways that run counter to what language allows as 
sensible/sense-making utterances.

"In mathematics, we can know whether what we say is true, but we cannot 
know what we are talking about. In poetry, we can know what we are 
talking about, but we cannot know whether what we are saying is true." 
(after Bertrand Russell.)

"Philosophers are poets who attempt logical proof of their metaphors." 
(Wolf Kirchmeir -- you read it here first ;-).)

HTH

-- 
Wolf Kirchmeir
0
Wolf
11/19/2008 3:52:52 PM
On Nov 18, 9:03 pm, forbisga...@msn.com wrote:
> On Nov 18, 6:24 pm, Publius <m.publ...@nospam.comcast.net> wrote:
>
> > Josip Almasi <j...@vrspace.org> wrote in news:gfujgq$4el$1@gregory.bnet.hr:
>
> > >> Disputes about representationalism appear in AI discussions because
> > >> the disputants are not distinguishing between intelligence and
> > >> consciousness. The latter almost certainly entails
> > >> representationalism; the former need not, but natural intelligent
> > >> systems may employ it. It's an empirical question.
> > > Actually there's yet another big question - is consciousness emergent
> > > property of (sufficiently high) intelligence.
>
> > It is more likely an emergent property of a particular implementation
> > strategy for intelligence.
>
> There used to be a notion that an eye couldn't develop because of
> its complexity.  The assertion was that nowhere along the way
> would the components have any survival value.  I believe that dead
> horse has been whipped enough.  It seems to me that the same
> applies to the development of consciousness in our set of evolved
> systems.  I'm not completely sure what the precursors were.
>
> > > As for empirical part, representationalism is sort of top-down
> > > approach, while say Curt seems to prefer bottom-up approach.
> > > Can't say for Dreyfus though:)
>
> > How natural intelligent systems evolved is a different question from their
> > structure and function. Artificial systems might be designed which have
> > different structures, but perform the same functions and are just as
> > intelligent (or more so).
>
> As with the eye, certain functional systems can be designed
> without passing through all the evolutionary precursor steps.

Indeed; many such saltations have occurred as part of nature's
evolution via natural experiment.

> This still leaves the open question as to whether or not
> the implementation of the function(s) from which human
> consciousness emerges necessarily entails that consciousness
> will emerge from all implementations of the function(s).

It sure does! Some claim that merely imbuing some entity or process
with general intelligence will somehow coerce or result in the
establishment of C.  I fail to see how the two are related in that
way.  C does serve intelligent functionality, of course, by providing an
executive feedback function as well as an I-ness that relates to
intentionality. Perhaps it is the other way around: C - even trivial C,
as in bacteria or mice, etc. - imbues intelligence functions with
another aspect of mind/brain from which to draw conclusions, etc.

0
Alpha
11/19/2008 4:33:16 PM
On Nov 19, 8:52 am, Wolf Kirchmeir <wolf...@sympatico.ca> wrote:
> Alpha wrote:
> > On Nov 17, 5:01 pm, c...@kcwc.com (Curt Welch) wrote:
> [...]
> >> All you can do from within philosophy is create multiple possible answers.
>
> > Incorrect; philosophy, in relation to science, is the *question-
> > generating* or *question-vetting* operation.
>
> [...]
>
> Yes, that's what philosophers claim,

And other scientists; see Bohm's contentions, for example in the
interview responses he gave to Davies in The Ghost in the Atom.
Many other examples exist throughout history, from thousands of years ago
until the present. The first important questions were philosophical:
people theorizing about what actually exists, etc.  Without empirical
instrumentalities, that is all that was available in some domains.
Democritus, for example: no way to test his theorizing, but
philosophical speculations abounded based on his ideas.  The same occurs
now with QM/QFT; most *thoughtful* physicists want the interpretation
of the theory to tell us, to explain to us, what relation the
experimental results have to a real world out there.  (Of course, some
are willing to just do experiments - but to me that is not very
satisfying.)


> and when it's done right (eg, tests
> the logic of the arguments, drilling down to the underlying assumptions,
> etc), then it can point up conceptual glitches, which in turn may lead
> to useful reformulations of the question(s).

Yup.

>
> But scientists are as good at doing this as are professional
> philosophers.

Sure!  See the Bohm ref.


>Consider the recent attempts to remodel the history of the
> universe to eliminate the physically impossible singularity at the
> moment of the Big Bang. If space is assumed to be granular (ie, that
> space is not a continuum, but that there are smallest bits of space), then a
> theory that eliminates that singularity is possible. But that assumption
> in turn raises the question of what those smallest bits of space are
> "embedded in." Which is a metaphor arising from our direct experience of
> objects in space.
>
> How can there be smallest bits of something without those smallest bits
> being in something else? IOW, how can there be objects without some
> space for those objects to inhabit? That's a philosophical question. My
> metaphysics allows for it, because I think that "If there is an action,
> there must be actor" is fallacious. IOW, "There can be action with no
> actor." Analogously, I can accept "There can be objects with no space."
>
> If that sounds muddled, no surprise. It's damn difficult to avoid
> thinking in the patterns that language embodies. It's damn difficult to
> think in ways that run counter to what language allows as
> sensible/sense-making utterances.

Yes; most transcendental thinkers also have the same difficulties -
they call the things they try to put into language "ineffable".

>
> "In mathematics, we can know whether what we say is true, but we cannot
> know what we are talking about. In poetry, we can know what we are
> talking about, but we cannot know whether what we are saying is true."
> (after Bertrand Russell.)
>
> "Philosophers are poets who attempt logical proof of their metaphors."
> (Wolf Kirchmeir -- you read it here first ;-).)

;^))

>
> HTH
>
> --
> Wolf Kirchmeir

0
Alpha
11/19/2008 4:46:24 PM
forbisgaryg@msn.com wrote in
news:57ff8b1b-67de-4784-94a1-a57f5bd3be0b@z6g2000pre.googlegroups.com: 

>> It is more likely an emergent property of a particular implementation
>> strategy for intelligence.
 
> There used to be a notion that an eye couldn't develop because of
> its complexity.  The assertion was that nowhere along the way
> would the components have any survival value.  I believe that dead
> horse has been whipped enough.  It seems to me that the same
> applies to the development of consciousness in our set of evolved
> systems.  I'm not completely sure what the precursors were.

Quite a bit of work has been done that bears on that question, e.g., 
Metzinger, Edelman & Tononi, et al. The various sensory subsystems evolve 
"N-maps" of the world as it is presented via their particular input 
channels. These mapping schemes may all follow a common template, or 
pattern, derived from some earlier integrative algorithm. Then the brain 
"maps itself" --- it integrates all the N-maps into a comprehensive, 
dynamic "world model" which represents the organism itself situated in its 
environment, with each N-map contributing a dimension to this n-dimensional 
model. The system thereafter interacts with the world "through the model," 
and the model is revised as the organism's actions affect it.

Included in the model are not only "state data" --- the current values of 
the many input variables (colors, shapes, arrays, odors, etc. currently or 
most recently  perceived), but also the "transformation rules" of the 
system --- under what conditions a given state is replaced by another 
state. The system learns those rules through ongoing observation and 
interaction with the world. That gives us a picture of the geometry and 
dynamic of the external environment, its "laws of motion," which are then 
incorporated into the model.

The great advantage of the dynamic world model is that it allows the system 
to "run scenarios," i.e., to pose "what if?" questions, run the model with 
the hypothetical data, and thus anticipate the outcomes of possible 
actions. Such a system has a great survival advantage over systems which 
can only respond to environmental changes in real time, no matter how 
sophisticated their processing algorithms.
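
A minimal sketch, in Python, of what "running scenarios" against an internal
model can look like (a one-dimensional toy world; the positions, payoffs, and
names are invented for illustration and are not drawn from Metzinger or
Edelman & Tononi):

# Toy world model: the agent's position, a hazard, and a goal are part of its
# internal model; candidate actions are rolled out against the model
# ("what if?") before anything is done in the real world.
model = {"position": 0, "hazard_at": -2, "goal_at": 3}

def simulate(model, action, steps=5):
    # roll the internal model forward under a hypothetical action
    pos = model["position"]
    for _ in range(steps):
        pos += action                  # action: +1 = move right, -1 = move left
        if pos == model["hazard_at"]:
            return -10                 # predicted disaster
        if pos == model["goal_at"]:
            return +10                 # predicted success
    return 0

# evaluate candidate actions against the model before acting in the world
candidates = (+1, -1)
best = max(candidates, key=lambda a: simulate(model, a))
print("chosen action:", best)          # +1: the rollout anticipates reaching the goal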

We can probably say that any system which can dynamically model itself, its 
environment, and its ongoing interactions with the environment, is 
conscious.

> As with the eye, certain functional systems can be designed
> without passing through all the evolutionary precursor steps.
> This still leaves the open question as to whether or not
> the implementation of the function(s) from which human
> consciousness emerges necessarily entails that consciousness
> will emerge from all implementations of the function(s).

I suspect any self-modeling system interacting with a dynamic environment 
will be conscious. Consciousness is a kind of "byproduct" of a particular 
strategy for improving intelligence. But intelligence (the capacity of a 
system to solve novel problems) may be achieved in other ways. 

0
Publius
11/19/2008 5:44:54 PM
Thank you for an interesting article.  A brief question:

On Nov 19, 9:44 am, Publius <m.publ...@nospam.comcast.net> wrote:
> I suspect any self-modeling system interacting with a dynamic environment
> will be conscious. Consciousness is a kind of "byproduct" of a particular
> strategy for improving intelligence. But intelligence (the capacity of a
> system to solve novel problems) may be achieved in other ways.

Does this mean we should stop playing Call to Action and other RPGs
that include instances of such systems that have no existence outside
the game?  I'm not so concerned by avatars whose self-modeling systems
are outside the game.
0
forbisgaryg
11/19/2008 8:07:25 PM
Publius <m.publius@nospam.comcast.net> writes:
>Neil W Rickert wrote:

>> I am skeptical that there is much being retained in the form of
>> stored representations as "past data".  It seems more likely that
>> the brain is a bit like a finely tuned instrument.  The past data
>> has played a role in adjusting the tuning, but has not been retained.

>That is a surprising claim. How do you account for the ability of the 
>system to draw a face or hum a tune "from memory"?

To continue with the "finely tuned instrument" metaphor, a finely
tuned instrument will resonate corresponding to its tuning.  The idea
is that finely tuned neural circuits will resonate to the type of
information that was used in tuning them.  Thus remembering is a
process of reconstructing and testing what is reconstructed to see
how well it resonates.
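
One way to make the "tuned instrument" picture concrete (a minimal sketch,
not Rickert's own proposal): in a Hopfield-style attractor network, training
only adjusts the weights - no copy of the past data is retained - and recall
is reconstruction, a noisy cue settling into whichever pattern the tuned
weights "resonate" with.

import numpy as np

def train(patterns):
    # Hebbian "tuning": adjust weights, keep no copy of the patterns themselves
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)
    return w / len(patterns)

def recall(w, cue, steps=20):
    # reconstruction: repeatedly "test" the state against the tuned weights
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(w @ s)
        s[s == 0] = 1
    return s

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
w = train(patterns)
noisy = patterns[0].copy()
noisy[0] *= -1                      # corrupt one element of the first pattern
print(recall(w, noisy))             # settles back onto the stored pattern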

>> The "motor plan" to grab an object is likely quite crude, and
>> precise behavior results not from having a precise plan, but from
>> measuring performance during the motor action and adjusting it where
>> the measurement indicates it is off.  This would make measurement
>> more important than computation.

>More likely both are in play (not unlike guiding a spacecraft to Mars).

I'll agree with that.

>>>Isn't the frame problem mostly (if not all) about filtering the
>>>intractable sensory information of any situation into a Gestalt of
>>>only meaningful, important information.  This seems to be along the
>>>lines of "common sense", which you cannot just ingore and expect to
>>>meet or exceed human intelligence or behavior skills.

>> That's not the way the frame problem is usually described.
>> See http://en.wikipedia.org/wiki/Frame_problem for a more familiar
>> version.

>The first sentence of the article exemplifies its thesis: "In artificial 
>intelligence, the frame problem was initially formulated as the problem 
>of expressing a dynamical domain in logic without explicitly specifying 
>which conditions are not affected by an action."

>With the phrase "in artificial intelligence", the writer frames the 
>scope of the inquiry and thus limits it.

But it was about limiting the number of logic forms, not about
limiting the sensory information.

>> In any case, what matters here is the version of the frame
>> problem being assumed by Dreyfus in the paper we are discussing.
>> And I'm pretty sure he is taking it as the problem of updating
>> stored representations so that they are consistent with changes in
>> the world.  Thus he mentions that Rodney Brooks avoids the problem
>> by designing his system to not depend on stored representations of
>> the state of the world.

>The implication of that thesis is that all problems are approached *de 
>novo*, and all useful or necessary information gathered only after the 
>problem presents itself.

In the case of Brooks' work, that might be true.  It is a criticism
of Brooks' systems that they don't learn.  However, your conclusion
is not a necessary consequence of the assumption that stored
representations are not used.  Earlier experience can be used to adjust
the response of the system, so that it performs better in the future.
This does not require the use of stored representations.

0
Neil
11/19/2008 11:21:57 PM
On Nov 19, 10:44 am, Publius <m.publ...@nospam.comcast.net> wrote:
> forbisga...@msn.com wrote in news:57ff8b1b-67de-4784-94a1-a57f5bd3be0b@z6g2000pre.googlegroups.com:
>
> >> It is more likely an emergent property of a particular implementation
> >> strategy for intelligence.
> > There used to be a notion that an eye couldn't develop because of
> > its complexity.  The assertion was that nowhere along the way
> > would the components have any survival value.  I believe that dead
> > horse has been whipped enough.  It seems to me that the same
> > applies to the development of consciousness in our set of evolved
> > systems.  I'm not completely sure what the precursors were.
>
> Quite a bit of work has been done that bears on that question, e.g.,
> Metzinger, Edelman & Tononi, et al. The various sensory subsystems evolve
> "N-maps" of the world as it is presented via their particular input
> channels. These mapping schemes may all follow a common template, or
> pattern, derived from some earlier integrative algorithm. Then the brain
> "maps itself" --- it integrates all the N-maps into a comprehensive,
> dynamic "world model" which represents the organism itself situated in it=
s
> environment, with each N-map contributing a dimension to this n-dimension=
al
> model. The system thereafter interacts with the world "through the model,=
"
> and the model is revised as the organism's actions affect it.
>
> Included in the model are not only "state data" --- the current values of
> the many input variables (colors, shapes, arrays, odors, etc. currently o=
r
> most recently =A0perceived), but also the "transformation rules" of the
> system --- under what conditions a given state is replaced by another
> state. The system learns those rules through ongoing observation and
> interaction with the world. That gives us a picture of the geometry and
> dynamic of the external environment, its "laws of motion," which are then
> incorporated into the model.
>
> The great advantage of the dynamic world model is that it allows the syst=
em
> to "run scenarios," i.e., to pose "what if?" questions, run the model wit=
h
> the hypothetical data, and thus anticipate the outcomes of possible
> actions. Such a system has a great survival advantage over systems which
> can only respond to environmental changes in real time, no matter how
> sophisticated their processing algorithms.
>
> We can probably say that any system which can dynamically model itself, i=
ts
> environment, and its ongoing interactions with the environment, is
> conscious.
>
> > As with the eye, certain functional systems can be designed
> > without passing through all the evolutionary precursor steps.
> > This still leaves the open question as to whether or not
> > the implementation of the function(s) from which human
> > consciousness emerges necessarily entails that consciousness
> > will emerge from all implementations of the function(s).
>
> I suspect any self-modeling system interacting with a dynamic environment
> will be conscious.

Your suspicions are misplaced. Suppose that an auto has a model of its
"self" residing in a hard drive as part of the auto.  The model can
include anything you wish WRT the auto - operating processes, the
physics of combustion engines - a complete ontology of the auto from
all aspects.  The result will not be a conscious auto, as if by magic
or by any other means.


> Consciousness is a kind of "byproduct" of a particular
> strategy for improving intelligence.

No, it is not a byproduct.  It arises as a result of a living
autopoietic system that interacts with an environment (so far as we
know, only living things have consciousness), independent of the
intelligence of the system/lifeform. As such it does not exist
*because* of a strategy for improving intelligence; rather, it does
serve to generate feedback to processes that might so operate to
increase (or decrease!) intelligence.

> But intelligence (the capacity of a
> system to solve novel problems) may be achieved in other ways.

That much may be true; Deep Blue solves chess problems without any
consciousness.

0
Alpha
11/19/2008 11:39:33 PM
Neil W Rickert <rickert+nn@cs.niu.edu> wrote in news:pc1Vk.7456$x%.3332
@nlpi070.nbdc.sbc.com:

> In the case of Brooks work, that might be true.  It is a criticism
> of Brooks systems, that they don't learn.  However, your conclusion
> is not a necessary consequence of the assumption that stored
> representations are not used.  Earlier experience can be used to adjust
> the response of the system, so that it will performs better in future.
> This does not require the use of stored representations.

Requires a more specific definition of "intelligence." Learning, skill 
acquisition and refinement, i.e., operant conditioning, are not 
"intelligence," or at least, not the most interesting aspect of it. I like  
the definition, "The capacity of a system to solve novel problems."

How quickly can the system solve a problem it has never before encountered? 
How does it manage to do it? Developing efficiency in dealing with familiar 
or recurring problems is fairly straightforward.
0
Publius
11/20/2008 2:02:06 AM
Alpha <omegazero2003@yahoo.com> wrote in
news:f4e53f60-3d2f-48c1-b489-4635bbf965be@w1g2000prk.googlegroups.com: 

>> I suspect any self-modeling system interacting with a dynamic
>> environment will be conscious.
 
> Your suspicions are misplaced. Supose that an auto has a model of it
> "self" residing in a hard drive as part of the auto.  The model can
> include anything you wish WRT the auto - operation processes, the
> physics of combustion engines - a complete ontology of the auto from
> all aspects.  The result will not be a conscious auto as if by magic
> or any other means.

The model represents the system (insofar as it can represent itself using 
data gathered through its own sensory apparatus), situated in an 
environment. The model is constantly updated as the ongoing interaction 
modifies both the system and the environment.

How would you decide whether the system was or was not conscious?

>> Consciousness is a kind of "byproduct" of a particular
>> strategy for improving intelligence.
 
> No, it is not a byproduct.  It arises  as a result of a living
> autopoietic system that interacts with an environment (so far as we
> know only living things have consciousness), independent of the
> intelligence of the system/lifeform.

That will describe all living cells and organisms, not all of which (or 
even most of which) are conscious. Unless you have adopted a definition of 
"consciousness" so broad as to be meaningless.

It is true that only living systems exhibit consciousness --- so far. But 
until 200 years or so ago, only living systems exhibited locomotion, and 
until 50 years ago only living systems exhibited intelligence. There is no 
reason to suppose consciousness requires a biological substrate.
0
Publius
11/20/2008 6:43:24 AM
curt@kcwc.com (Curt Welch) wrote in
news:20081119010159.274$s6@newsreader.com:

Interesting comments, and generally on the right track, but I have some 
quibbles. 

> Well, I think people in general are very confused about the entire
> subject of consciousness, so when they talk about it, they are often
> talking about many different aspects of humans.  However, I believe
> the foundation of where all this confusion comes from, is in how the
> brain tends to model itself.  That is, our entire understanding and
> perception of reality is created by the processing that happens in our
> brain, and that includes our perception of our own brain.  Our
> perception of self.

There is no perception of self, except insofar as the "self" includes the 
body. The "self" is a construct, a model of the system synthesized and 
inferred from the current states of other brain subsystems (those which 
process sensory data) and stored information regarding past states of those 
subsystems. So we have a *conception* of self, not a perception.

> Because the brain has only limited access to
> itself, the model it creates is also limited. Based on the sensory
> data the brain has access to, it is forced to model internally
> generated brain signals as having no association with external sensory
> signals.

Well, we do assume there is an association, actually. We also construct a 
"world model," with the self-model situated within it. We accept this world 
model as "the world" (we are all realists by default). Yet, because the 
model remains available even when the world "goes away" (when we close our 
eyes, change location, or just direct attention elsewhere), we conclude 
there is a another "realm" where the world continues to exist --- "it 
exists in the mind." The notion of "mind" arises because we are able to 
contemplate aspects of the world not currently present to the senses, 
including past states of the world.

> The result of this fact is that the brain builds a model of this data
> by indicating no association between private thoughts, and the
> physical events represented in the sensory data.  The result of that
> is very simple - it leaves us with a model of realty where thoughts
> are disconnected from all things physical.

We realize they are connected, but are nonetheless distinct.

But I agree with what I take to be your central point --- we are aware of 
the model, but not of the brain mechanisms which generate it. So it becomes 
conceptually detached from its substrate.

0
Publius
11/20/2008 7:38:35 AM
Isaac wrote:
> "Josip Almasi" <joe@vrspace.org> wrote in message 
>> Actually there's yet another big question - is consciousness emergent 
>> property of (sufficiently high) intelligence.
> 
> I tend to disagree.  I do not see the two being so intimately connected so 
> as to require one to emerge from another.  Intelligence might just be a 
> boundary condition on the scope of consciousness (i.e., awareness).  For 
> example, a severely retarded person is certainly far less intelligent, 
> however, I don't think there is any evidence that they are far less 
> conscious.  If consciouness emerged (necessarily?) from intelligence then 
> shouldn't they be highly correlated?

Fair enough.
Then again, we must not forget social interactions.
Retards get much more attention and training than healthy kids.
Say, I've spent endless hours with a grown-up retarded man (immaturitas 
emotionalis, debilitas) unable to remember 3-word sequences. It takes 2 
hours for a sequence, 2 hours later he forgets, then all over again; 
eventually, he makes it.
There are schools for such people, so society makes up for their disabilities.

Regards...
0
Josip
11/20/2008 11:23:23 AM
Publius wrote:
> Josip Almasi <joe@vrspace.org> wrote in
> news:gfujgq$4el$1@gregory.bnet.hr: 
>  
>> Actually there's yet another big question - is consciousness emergent 
>> property of (sufficiently high) intelligence.
> 
> It is more likely an emergent property of a particular implementation 
> strategy for intelligence.

Well said.

>> As for empirical part, representationalism is sort of top-down
>> approach, while say Curt seems to prefer bottom-up approach.
>> Can't say for Dreyfus though:)
> 
> How natural intelligent systems evolved is a different question from their 
> structure and function. Artificial systems might be designed which have 
> different structures, but perform the same functions and are just as 
> intelligent (or more so).

I don't think it's the general evolution of intelligence that matters here, 
it's the evolution of *a* system - can we give it representations, or does 
it have to make them for itself?
Talking about *a* system, we might learn much from how kids learn. And 
there, representationalists have big arguments from Chomsky & co, who 
claim we're hardwired for language, IOW we get representations from our 
earliest days.
Now that I said that, I think Dreyfus is offtopic with Heidegger:>

And WRT functions:) IIRC Ben Goertzel said once, my dog is sure more 
intelligent for finding poop in the field, so what?;)

Regards...
0
Josip
11/20/2008 11:45:33 AM
Curt Welch wrote:
> 
> Well, I think people in general are very confused about the entire subject
> of consciousness, so when they talk about it, they are often talking about
> many different aspects of humans.  

Agreed.

> However, I believe the foundation of
> where all this confusion comes from, is in how the brain tends to model
> itself.  That is, our entire understanding and perception of reality is
> created by the processing that happens in our brain, and that includes our
> perception of our own brain.  Our perception of self.  Because the brain
> has only limited access to itself, the model it creates is also limited.
> Based on the sensory data the brain has access to, it is forced to model
> internally generated brain signals as having no association with external
> sensory signals.  It does this simply because there is no temporal
> correlation between auditory sensory data and the brain signals which
> represent our private thoughts.  That is, we can not hear the brain making
> any physical sound as our neurons fire. Because the brain makes no noise
> (that our ears can pick up) when it operates, there is no correlation
> between auditory sensory signals, and private thought signals.  And because
> there is no correlation, the brain doesn't create an association between
> these different signals.  This is true for all the physical sensory data
> that flows into the brain.  We can't hear our private thoughts, we can not
> feel the brain vibrate, we can not smell it.  There simply are no
> correlations between our physical sensory data and our internal thought
> signals.

Actually there's a great deal of correlation.
Ever hear of body language?;)
It may be easy to notice that someone is worried, or happy, etc. Of 
course body language cannot express our thoughts in detail, but all the 
time we make some noises and movements. We might not be aware of it, but 
there they are, sounds of our thoughts:)

> The result of this fact is that the brain builds a model of this data by
> indicating no association between private thoughts, and the physical events
> represented in the sensory data.  The result of that is very simple - it
> leaves us with a model of realty where thoughts are disconnected from all
> things physical.
> 
> This model the brain builds based on the signals it has access to, is the
> source of all the confusion about human consciousness.  It's why people
> think the mind and the brain are two different things.  People think that,
> because that's exactly the model of reality the brain builds to describe
> itself.

Well maybe. But IMHO it's much simpler than that - now that we have 
intelligent systems, we need something to make the distinction.

> The net result of this is that "consciousness" is the illusion of
> separation between our thoughts and our physical body. It is the source of
> the illusion that humans have a soul which is separate from the body and
> the reason the mind body problem has been debated for hundreds of years.

Well, there may be physical reasons to believe the soul is separate from 
the body, you know 'astral travels' and such.
But, historically... I don't think so. I've traced this view back to 
Thales. He actually explained the line of thought that led him to god and 
soul; it goes like, how come magnets move if they're not alive? Then there 
must be some universal spiritus movens...
It was Aristotle who later defined the soul, seemingly as a need to define life:)
Anyway, before them, well, there were some stories of afterlife etc., but 
it was all very physical and fleshy:) Say, there are no souls in 
Gilgamesh... nor in the Tao Te Ching:)
So, in short, it seems the distinction started around 2.5K years ago 
in the Mediterranean, on a carefully thought-out philosophical basis.

> When we look at the world around us, we believe we see the world as it is.
> Rocks are hard, and pillows are soft because this is the way those things
> actually are.  But in fact, rocks are hard and pillows are soft because
> that's the way the brain has modeled them for us.  
....

:))

It's funny how you speak of the brain as a distinct subsystem:)
When you do so, you obviously identify with something else, not the brain.
Well, IMHO, *that* is consciousness. And the reason to make the distinction.
The ability to think of the self as an object. To think. It's on a much 
higher level than the simple illusion you talk about. And furthermore, to do 
so, it is necessary to separate from the body/brain:)

Regards...
0
Josip
11/20/2008 12:41:06 PM
Publius wrote:
> 
> So we have a *conception* of self, not a perception.

Ah good point. And actually consistent with what shrinks say, like, ego 
as picture of self. Or - model of self:)

To make this discussion at least a bit on topic for c.a.nn - I've read a 
book named 'neural networks with cognitive abilities', never translated 
into English, where the author claims such a thing can be implemented with a 
supernetwork orthogonal to the other (motor etc.) networks. Schematics 
included:)
Note orthogonal - it's not enough to track subnet outputs, it has to 
track their states too.
Sounds reasonable, but such a 'center for ego' has not yet been found in humans.
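
A minimal sketch of what such an orthogonal supernetwork might look like, purely 
as a guess at the book's schematic (the tiny networks, weights and names below 
are all invented for illustration): the monitor is fed the motor network's 
internal states as well as its output.

# Minimal sketch of a "supernetwork orthogonal to the motor network": it reads
# the motor network's hidden STATE as well as its output.  Purely illustrative.
import random

def motor_net(x, hidden_w, out_w):
    """A tiny feed-forward motor network; returns (hidden_states, output)."""
    hidden = [max(0.0, sum(wi * xi for wi, xi in zip(row, x))) for row in hidden_w]
    output = sum(wo * h for wo, h in zip(out_w, hidden))
    return hidden, output

def monitor_net(hidden_states, output, mon_w):
    """The orthogonal monitor: its input is the motor net's states plus its output."""
    features = hidden_states + [output]
    return sum(wi * fi for wi, fi in zip(mon_w, features))

H = 3
hidden_w = [[random.gauss(0, 1) for _ in range(2)] for _ in range(H)]
out_w = [random.gauss(0, 1) for _ in range(H)]
mon_w = [random.gauss(0, 1) for _ in range(H + 1)]

states, action = motor_net([0.5, -1.0], hidden_w, out_w)
self_signal = monitor_net(states, action, mon_w)   # a crude "model of self" signal
print(round(action, 3), round(self_signal, 3))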

Regards...
0
Josip
11/20/2008 12:55:40 PM
"Curt Welch" <curt@kcwc.com> wrote in message 
news:20081119002253.348$D7@newsreader.com...
> "Isaac" <groups@sonic.net> wrote:
>> "Publius" <m.publius@nospam.comcast.net> wrote in message
>> news:Xns9B57F131CB7B0mpubliusnospamcomcas@69.16.185.250...
>> > "Isaac" <groups@sonic.net> wrote in
>> > news:491f9f87$0$33506$742ec2ed@news.sonic.net:
>> >
>> I defy you to contrive a definition of Intelligence that works.  For
>> example, using your current definition above, the Earth would be
>> intelligent because it is a system with the capacity to generate
>> solutions (e.g., extremely complex, yet stable atmospheric weather, ocean
>> currents, etc.) to solve novel problems of, for example, maintaining a
>> stable global temperature in the face of many (thousands) changing
>> (novel) variables that are constant obstacles preventing the Earth (Gaia?)
>> from attaining her goal of minimizing temperature differences globally.
>
> The earth is intelligent.  So is the universe as a whole.
>
> Life looks like it was designed by intelligence because
> it was.

The problem with going this expansive on "intelligence" is that it is not 
scientifically useful.  For example, with your definition of intelligence a 
crystal (esp. while growing) is intelligent.  Can you set forth a definition 
of intelligence that is scientifically useful and not just philosophically 
pleasing?

>The process of evolution is just one more example of the many
> intelligent processes at work in the universe.  Evolution is an example of
> a reinforcement learning process

Evolution is not really a pure reinforcement learning process.  The fitness 
function selects viable populations without directly reinforcing any 
weightings of the genes.  A selection scheme is not a reinforcement scheme.

>and I basically consider all reinforcement
> learning processes to be examples of intelligence.

That is far too broad.  Under this scheme a tape recorder would be 
intelligent: it "learns" your voice by aligning magnetic domains with your 
voice signal until the signal-to-noise ratio reproduces your voice 
adequately.  So, is a tape recorder intelligent?

>
>> There are many similar examples that use your language but are not
>> considered to be intelligent to anyone reasonable in science.  Care to
>> update your definition or defend it?
>
> Many people in science have no clue what they are talking about when they
> use the word "intelligence".  As such, they define what is, and what isn't
> intelligent based on total nonsense and ungrounded speculation - as I've
> said before - without using any empirical evidence to argue from.
>

Your definition is so broad that it is not useful, because computers right 
now would be intelligent according to it, and we know, of 
course, they are quite dumb (esp. Microsoft apps :).  Please clarify.

> Of course that doesn't stop them, because they like to claim things such 
> as
> "subjective experience is outside the scope of empirical evidence".
> And then they tell us what _their_ subjective experience is like and use
> their beliefs about their own subjective experience to "prove" an endless
> list of nonsense ideas about the universe.
>
> The typical argument and thought path starts with the belief that human
> consciousness is something that exists only in humans.  Then from there,
> they make the argument that since humans have this magical attribute 
> called
> consciousness and other things like the Earth doesn't, that intelligence
> requires consciousness.  But since they don't have any clue what creates
> human consciousness, they also don't have any clue what creates
> intelligence and don't really have any way to determine if the earth is
> intelligent or not.

True, but science always has to operate on a best working theory.  Yours is 
too broad and theirs is too narrow.

>
> And when asked to explain what evidence they have to suggest this 
> attribute
> exists only in humans, they use the self serving argument that since they
> "known" it exists in them, and that other humans are physically similar to
> them, that this stuff they known exists in them must also exist in others.
>
> But all that argument and the arguments that grow from it are based on a
> belief that has no support.  The belief that "consciousness" is something
> other than simple brain function.  That consciousness is not an identity
> with physical brain function.
>
> However, all the empirical evidence we have tells us that assumption is
> wrong.  And if we choose to believe what the empirical evidence shows us
> (materialism) - then we know that there is nothing here to explain, other
> than the physical signal processing that happens in the brain which
> produces human behavior.
>
> Once you grasp the significance of what the empirical evidence is telling
> us, all the need of defining intelligence as some sort of link with "being
> conscious" goes away.  We are left with defining intelligence is some 
> class
> of signal processing algorithm that describes how the brain works.  And
> though there are multiple options there, none of them make intelligence
> hard to understand. It's no harder to understand than any typical machine
> learning algorithm for example.
>
> I choose to use the fairly broad and generic definition of intelligence

Yes, too broad to be useful for anything but metaphysics.

> being a reinforcement learning system which allows the concept to include
> many processes other than just what the brain does - such as the process 
> of
> evolution.

Evolution is just an environmental and social selection process of who lives 
and dies based on who fits an arbitrary criterion.  How is that intelligent?

> You could easily restrict the definition to something closer to what he
> brain does, which would be something more like a real time distributed
> parallel signal processing network trained by reinforcement instead of the
> far broader "all reinforcement learning processes" I like to use.
>
Reinforcement learning does not even begin to address the intelligence 
problem, because it is a method of weighting certain nodes in a network more 
or less than others; it does not at all address any system-level architecture 
or algorithm, thus it does not provide a model of reality, just a way to 
reinforce a model if you have one.  For example, a classic back-propagation 
neural network (NN) implements reinforcement learning in a NN 
architecture with a propagation training algorithm; however, NNs have proven 
to be completely useless for doing anything intelligent and cannot even be made 
to converge in a hierarchical configuration even with massive (impractical) 
amounts of tagged training data.

<snip>

> Well, I think "goal" is the wrong way to understand the operation of the
> brain though it's not too far off.
>
> The true goal of a reinforcement learning machine is to maximize expected
> future reward.  So it's a reward maximizing machine with one prime goal.
>

Greedy algorithms are only good for problems with smooth gradients; 
however, they get trapped at local maxima and minima.  Intelligence is 
really looking for global maxima, so reinforcement learning is a rather dumb 
scheme for achieving this.  Of course, GAs may try to chaotically hill-hop 
until a higher hilltop is found, but GAs are useless because a suitable 
fitness function and gene configuration space are almost impossible to 
define, especially from a top-down approach.
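
A tiny illustration of the trap being described, on a made-up one-dimensional 
landscape (the function and numbers are invented): a pure greedy climber stops 
on the nearest bump, while random restarts - a crude stand-in for the 
"hill hopping" mentioned above - can reach the higher peak.

# Tiny illustration of the local-maximum trap: a two-humped landscape where
# a greedy climber that starts near the small hump never reaches the big one.
import random

def f(x):
    # small hump near x=2 (height 1), big hump near x=8 (height 3)
    return max(0.0, 1 - (x - 2) ** 2) + max(0.0, 3 - (x - 8) ** 2)

def greedy_climb(x, step=0.1):
    while f(x + step) > f(x) or f(x - step) > f(x):
        x = x + step if f(x + step) > f(x - step) else x - step
    return x

x = greedy_climb(1.0)
print(round(x, 1), round(f(x), 2))    # ~2.0, 1.0: trapped on the small hump

# Random restarts: crude "hill hopping" that can escape the local maximum.
best = max((greedy_climb(random.uniform(0, 10)) for _ in range(20)), key=f)
print(round(best, 1), round(f(best), 2))   # ~8.0, ~3.0: the higher peak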

> What the prime goal translates into is some internal systems of values for
> all possible behaviors which in turn translates into some behavior
> probability distribution.  This in turn must drive whatever mechanism is 
> in
> place to select behaviors.  The system that decides what behavior to 
> select
> for the current context is using the internal system of values to pick
> between alternatives.
>
> Our higher level ideas of "goal seeking" is simply the fall out of a the
> lower level behavior selection system picking the best behaviors for any
> given context.

Too vague.  See above.  A mountain stream searching for the best (i.e., 
least-energy) way to get to the bottom implements all the intelligent 
features you propose (e.g., reinforcement of paths that work, "goal 
seeking", and selection of locally optimal behaviors); however, saying that a 
river defines intelligence is completely useless.  Care to be more practical 
and specific?

> > When you translate the implementation of the system into a reward 
> > trained
> behavior selection system, the frame problem doesn't even make much sense
> to talk about.  The frame problem arises nearly as much out of incorrectly
> framing the question of what the purpose of the agent is.  However, the
> issues that surround the frame problem are real.  But they are all 
> answered
> in the context of a system which has the power to prioritize all possible
> responses to stimulus signals.  That is, which reaction the system chooses
> at any point in time based on its learned values (priorities if you like)
> is the answer to how the system deals with the frame problem.  That is, 
> the
> one problem it must solve (how to select which behavior to use at any
> instant in time) is the same answer to the frame problem.
>
> Finding a workable implementation of such a system is the path to solving
> AI.

This is one aspect of the path to AI in the most general sense, but your 
ideas are too vague (e.g., you assume an internal model of reality exists in 
order to even perceive what a frame or context is - and that does not exist).

> > You said above that goals do not drive perception. That's just not true 
> > in
> my view. II think our perception and our behavior selection are one and 
> the
> same problem.  Perception is a problem of behavior selection.
>

I disagree; e.g., how is perceiving the sound of a drum to be a drum a 
"problem of behavior selection"?

> On the sensory input side of the network, the major function is 
> perception,
> but as the signals flow through the network, the function transforms into
> behavior selection.  So near the output side of the network, it's mostly
> "goal driven" an on the input side it's mostly "perception driven" but I
> believe it's a fairly even continuum though the network as raw sensory 
> data
> is translated to raw effector output data.
>
> We see how this works when we test color perception of people raised (aka
> trained) in different cultures with different words for different ranges 
> of
> colors.  Our perception of color bends to correctly fit the classification
> of light frequency labeled by the words of our language.
>
Of course we project our bias onto what we perceive; however, that does not 
prove that perception is based only on our goal bias, which seems way off the 
mark to me.  Please clarify.

>>
>> >Attention is paid only
>> > to world states which bear on the system's goals (as a background
>> > process).
>>
>> Of course, goals to play an important role in how to focus attention, and
>> to some extent this colors the frame problem, but I do not see how it
>> drives it exclusively as you put it.
>
> It drives it exclusively in my view because behavior selection is all the
> brain is doing and behavior selection works by picking behaviors that are
> estimated to produce maximal expected return for the given context.  And
> this general process of selecting the "best" behaviors at the lowest level
> is both the mechanism which creates what we think of as goal seeking and
> the behavior which is think of as attention focus.  I see them as one and
> the same process at the low level.
>
The brain models a phenomenon well before it forms a goal in relation to that 
phenomenon.  No doubt they may be organically commingled, but to say your 
goal defines your model of a phenomenon would amount to a real LSD psychedelic 
experience of reality, which the rest of us do not have. :)


<snip>

> This is because this person has never had experience with this type of
> "frame" in the past and has very little experience with how to correctly
> react to this combination of stimulus signals.  None the less, the brain
> will still pick a reaction to the stimulus signal based on the past
> experience the person has had.  For this guy, the "reaction" to the frame
> might be to move the eyes to focus on the tree in the background because
> all the city street stuff in the foreground looks mostly like "noise" to
> him.

You are mixing high, system-level goals (I want to find some food to eat) 
with low-level goals (like, if you hear an unexpected sound, turn your head 
towards it).  It is certainly not useful to say that low-level goals 
define our perceptions; of course, they do filter them by pre-selecting what 
our higher-level systems can become aware of.

<snip>

>> I don't think anyone would say that classic AI would not return to the
>> world to gather more facts to add to its "millions of facts".  The issue
>> that Dreyfus says is the problem with AI is that it creates rules that
>> are representations (or symbols) and are compartmentalized, both of which
>> he says the Philosopher Heidegger espouses, which Dreyfus and his set of
>> philosophers/researcher say is not the case.  I think every Intelligent
>> system will end up effectively having a constantly evolving set of
>> millions of "rules", so that is not the question.  Do you have any
>> counter examples?
>>
>> Cheers!
>> Ariel B.-
>
> I don't fully understand what you are suggesting here because I don't tend
> to read or study the work of the type of people you are studying.  I'm not
> sure for example what the debate is on representations.

Representations require a modular, hierarchical architecture that parses 
reality according to the modules and objectifies actions and phenomena as 
being completely separate.  Dreyfus and the sources he relies upon say the 
brain is a flat network with no modules and with no separation between 
action and phenomena.

>
<snip>

>
> However, just because the GOFAI approach ran into a wall after awhile, 
> that
> doesn't mean that using "symbols" to represent something was wrong.
>
Dreyfus says using "symbols" in any way is very wrong.

> The brain uses a common language of symbols to represent everything as
> well.  Those symbols however are spikes.  Digital computers use 1 and 0
> symbols to represent things.  Manipulating symbols which are
> representations is exactly what the brain is doing.  Any argument to the
> contrary is misguided.
>

OK, explain, then, how neural networks (NN) have explicit symbols, which you 
say are mandatory in an intelligent system.

> The solution to creating human like behavior in a machine is to build
> symbol manipulating machines (aka signal processors), but the symbols must
> be a level closer to spikes or bits, than to English words.
>
> I also think that the correct implementation is along the line of a
> confectionist network which is processing multiple parallel signal flows.
> So from that perspective, I think it's more useful to think of the network
> as a signal processing machine than as "representations with symbols".

OK, then address the NN question above.  Generally, I believe, you need a 
hierarchy to represent a symbol.  Dreyfus et al. say the NN is flat.

>But
> it's the same thing no matter which way you talk about it.  An AM radio
> signal is still a representation of the vibration of the air, and is also 
> a
> representation of the thoughts of the DJ which was speaking on the radio.

Not a good example, since the AM radio has no symbols to represent the voice 
info.  It is just an amplitude-modulated analog signal.

> How you choose to label these systems is matter of viewpoint far more than
> a true matter of what the system is doing or how it works.

Not really; see my comments above on how the systems you outline don't work as 
you stipulate.

Thanks for your thoughtful reply!

Cheers!
Ariel-

>
> -- 
> Curt Welch 
> http://CurtWelch.Com/
> curt@kcwc.com 
> http://NewsReader.Com/ 


0
Isaac
11/20/2008 1:16:52 PM
On Nov 19, 11:43 pm, Publius <m.publ...@nospam.comcast.net> wrote:
> Alpha <omegazero2...@yahoo.com> wrote in news:f4e53f60-3d2f-48c1-b489-4635bbf965be@w1g2000prk.googlegroups.com:
>
> >> I suspect any self-modeling system interacting with a dynamic
> >> environment will be conscious.
> > Your suspicions are misplaced. Suppose that an auto has a model of its
> > "self" residing in a hard drive as part of the auto. The model can
> > include anything you wish WRT the auto - operation processes, the
> > physics of combustion engines - a complete ontology of the auto from
> > all aspects. The result will not be a conscious auto as if by magic
> > or any other means.
>
> The model represents the system (insofar as it can represent itself using
> data gathered through its own sensory apparatus), situated in an
> environment. The model is constantly updated as the ongoing interaction
> modifies both the system and the environment.

Sure - but a toaster with some memory (model) of how much it burned
the last bread inserted (its effect on the "environment") would
qualify, and we know it isn't the least bit conscious.
>
> How would you decide whether the system was or was not conscious?

That is of course the million-dollar question!  ;^)  I suggest a
combination of factors, including: the architecture and complexity of the
entity, whether the system is autopoietic and thence gives rise to an
emergent "self" that is reflected in behavior (reactions to
environmental perturbations), whether it is living or dead (as we know
that dead things do not exhibit even a little bit of consciousness),
and so forth.

>
> >> Consciousness is a kind of "byproduct" of a particular
> >> strategy for improving intelligence.
> > No, it is not a byproduct. It arises as a result of a living
> > autopoietic system that interacts with an environment (so far as we
> > know only living things have consciousness), independent of the
> > intelligence of the system/lifeform.
>
> That will describe all living cells and organisms, not all of which (or
> even most of which) are conscious. Unless you have adopted a definition of
> "consciousness" so broad as to be meaningless.

It is not meaningless to suppose that living things have some measure
of consciousness, even if only a little bit in the case of bacteria etc.;
for example, a bacterium following a sugar gradient to ensure its
survival would indicate that some sense of individuality, set
against an environment, is there.  The enactive approach of
Thompson & Varela comes into play here.

>
> It is true that only living systems exhibit consciousness --- so far. But
> until 200 years or so ago, only living systems exhibited locomotion, and
> until 50 years ago only living systems exhibited intelligence. There is no
> reason to suppose consciousness requires a biological substrate.

Except that such is the only substrate that even comes close to
exhibiting that faculty, and there is no reason to believe that other
substrates could qualify, considering what we have so far as artifacts
that are not living. Deep Blue is extremely good at modelling its own
state (an intelligence aspect) and the state of the playing scenario,
but is not the least bit conscious, for example.  Neither is Curt's
keyboard!  ;^)

0
Alpha
11/20/2008 4:22:37 PM
"Isaac" <groups@sonic.net> wrote in
news:49256351$0$95529$742ec2ed@news.sonic.net: 

>> The earth is intelligent.  So is the universe as a whole.
>>
>> Life looks like it was designed by intelligence because
>> it was.
 
> The problem with going this expansive on "intelligence" is that it is
> not scientifically useful.  For example, with your definition of
> intelligence a crystal (esp. while growing) is intelligent.  Can you
> set forth a definition of intelligence that is scientifically useful
> and not just philosophically pleasing?

That is indeed a key issue, which must be resolved if everyone involved
is not to be working on different problems and trying to communicate as
though they are all on the same page. 

My own favorite definition is, "The capacity of a system to solve novel
problems," with "problem" meaning any obstacle or impediment the system
must overcome to reach a goal (which indeed implies that "intelligence"
is not defined for systems which lack goals). 

It also distinguishes intelligence from learning. Intelligence clearly 
depends upon prior learning, but mechanisms for learning (both skill
acquisition and skill refinement) are fairly well understood. A
system which can acquire new skills by observation and emulation, or
which can increase its proficiency with practice and feedback, will
certainly improve its capacity to solve problems. But those abilities
will not equip it to deal with novel situations. 

In a recent experiment with ravens a chunk of meat was suspended from a
length of string tied to a high tree branch, the meat dangling a couple
meters off the ground. The ravens at first tried to grab the meat "on
the fly," only to find it jerked from their beaks when the string went
taut. Then one of the ravens lighted on the branch at the tie point,
reached down with its beak and grabbed the string below the branch,
pulled up a loop and tucked it under its foot. It then reached down and
grabbed another loop, also trapping it under its foot. It continued this
until all the string was coiled under its foot and the bait within
reach. It then placed the chunk of meat on the branch and proceeded to
eat it. 

http://www.guardian.co.uk/science/2007/apr/29/theobserversuknewspages.uknews1 

The bird solved that problem in a matter of minutes, even though it was
surely a problem situation it had never before encountered. That is
intelligence. 
0
Publius
11/20/2008 4:35:32 PM
>>
>> I agree with you. Creating AI has nothing to do with philosophy.
>
> Except that historically, the important questions have come from
> either philosophers, or by other types of scientists that posed
> philosophical questions about Universe, per Bohm, especially when
> instrumentality was limited (i.e., when our ability to *be*
> empiricists (read: perform instrumented experiments) was limited.)

True, but important answers come from scientists.



>> It is just a
>> technical problem that needs better mathematical tools in order to solve it.
>
> What sorts of math tools are you talking about that would solve basic
> questions of how brain for example, represents a blue cube, or how APs
> represent (if they do) a thought or a memory?
>

Math has already answered these questions many times, and you can use it to 
represent a blue cube, a thought, or a memory in many different ways.



> See Koch's The Biophysics of Computation for an example of
> sophisticated math applied to biological function in brain , but which
> provides no clue as to representation etc.

Yes, we all know that some scientists tried and failed, just as many philosophers 
did.



>>
>> Creating AI will reflect on philosophy in only one way - it will prove that 
>> some
>> philosophers were wrong.
>
> But then some will have been right!

True, but creating AI will not result in a new philosophy of the universe, just 
as sending a man to the moon didn't. It will be an advance in technology but not 
in philosophy.

It will change our lives and our everyday routine, but it won't 
change our global philosophical view of the world (unless you are one of those 
who think there is some metaphysical difference between man and machine). 
Creating AI will only prove that it is possible to create an intelligent machine (= 
the difference between man and machine is not in metaphysics), and the philosophers 
who now think it is not possible will be proven wrong. Since machines are still 
machines, intelligent or not, they never get tired, they are predictable, and they can 
be improved and become bigger, faster and better in every way. All this will help 
us to solve other problems and to live well (unless we kill each other).

After AI is created, the Earth will remain round, the Sun will be in the same place, 
and everything normal and rational people knew will remain the same, with only 
one little exception - from that moment on, they will know how to make AI.

0
iso
11/20/2008 5:33:17 PM
casey <jgkjcasey@yahoo.com.au> wrote:
> On Nov 19, 3:41 pm, c...@kcwc.com (Curt Welch) wrote:
> > Alpha <omegazero2...@yahoo.com> wrote:
> >> On Nov 17, 12:55 pm, c...@kcwc.com (Curt Welch) wrote:
> >> > All mental activity is a reflex.
> >>
> >> This is silly nonsense.  Perhaps *you* do not
> >> ever think about what you are going to think about,
> >> (and that would explain a lot), but do not assume
> >> that others do not partake of directed thought
> >> processes/scenarios.
> >>
> >>
> >> I can simply will myself to think about a blue
> >> cube for example! And then proceed to do so.
> >
> >
> > The idea of a "blue cube" just popped into your head
> > at some point as you were writing this response.
> > Right?  Did you will yourself to think about a blue
> > cube before you first thought about the blue cube?
> > Of course not.  At some point there, the idea of a
> > blue cube showed up in your thoughts without any
> > prior will to think about blue cubes on your part.
> > Why did that happen?
> >
> > It happened because you had some sort of thought
> > such as "what is a good object to give as an example"?
> > And as a _reflex_ to that thought, the "blue cube"
> > idea showed up in your thoughts.  And as a _reflex_
> > to the "blue cube" thought in the context of "create
> > thought example", you produced the sentence in the
> > post about "I can will myself to think about a blue
> > cube and then think about a blue cube".
>
> It is not *a* reflex but rather many reflexes that
> turn the input "what is a good example" into the
> output "a blue cube". However this low level
> description you are so fond of explains nothing
> of interest anymore than saying the molecular
> machinery of the body is nothing more than the
> interactions of molecules.

The low level reflex system I'm so fond of explaining is exactly what we
have to understand, and build, to create AI.

You may see building AI as a waste of time, but clearly I don't.

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
11/20/2008 7:11:52 PM
On Nov 21, 6:11 am, c...@kcwc.com (Curt Welch) wrote:
> casey <jgkjca...@yahoo.com.au> wrote:


>> It is not *a* reflex but rather many reflexes that
>> turn the input "what is a good example" into the
>> output "a blue cube". However this low level
>> description you are so fond of explains nothing
>> of interest anymore than saying the molecular
>> machinery of the body is nothing more than the
>> interactions of molecules.
>
>
> The low level reflex system I'm so fond of
> explaining is exactly what we have to understand,
> and build, to create AI.

But we already have some understanding of low
level reflex systems and use them in robots.

If you explained to someone how a relay worked,
or a logic gate, would that then enable them to
construct a computer out of those components?
Of course not. They would need a high level
description of how a computer system worked.

Simple low level operant conditioning circuits
are not hard to implement, although GS turned his
nose up at my examples, saying they weren't complex
enough. I suggested at the time that the first
neural systems would have been very simple, to
have occurred by chance in the first place. That
is why I believe developing AI will be done in
working stages, just as has been the case for
all our other technologies.

I have been interested in exactly the same kinds
of things you are interested in. The difference
is I suspect a working system will not simply
be a single uniform network. Yes, it will have
uniform networks like the neocortex, amygdala,
basal ganglia, cerebellum and so on just as the
digestive system has many integrated parts each
honed by evolution for optimum performance.


JC
0
casey
11/20/2008 10:25:16 PM
"Isaac" <groups@sonic.net> wrote:
> "Curt Welch" <curt@kcwc.com> wrote in message
> news:20081119002253.348$D7@newsreader.com...
> > "Isaac" <groups@sonic.net> wrote:
> >> "Publius" <m.publius@nospam.comcast.net> wrote in message
> >> news:Xns9B57F131CB7B0mpubliusnospamcomcas@69.16.185.250...
> >> > "Isaac" <groups@sonic.net> wrote in
> >> > news:491f9f87$0$33506$742ec2ed@news.sonic.net:
> >> >
> >> I defy you to contrive a definition of Intelligence that works.  For
> >> example, using your current definition above, the Earth would be
> >> intelligent because it is a system with the capacity to generate
> >> solutions (e.g., extremely complex, yet stable atmospheric weather,
> >> ocean currents, etc.) to solve novel problems of, for example,
> >> maintaining a stable global temperature in the face of many
> >> (thousands) changing (novel) variables that are constant obstacles
> >> preventing the Earth (Gia?) from attaining her goal of minimizing
> >> temperature differences globally.
> >
> > The earth is intelligent.  So is the universe as a whole.
> >
> > Life looks like it was designed by intelligence because
> > it was.
>
> The problem with going this expansive on "intelligence" is that it is not
> scientifically useful.  For example, with your definition of intelligence
> a crystal (esp. while growing) is intelligent.  Can you set forth a
> definition of intelligence that is scientifically useful and not just
> philosophically pleasing?
>
> >The process of evolution is just one more example of the many
> > intelligent processes at work in the universe.  Evolution is an example
> > of a reinforcement learning process
>
> Evolution is not really a pure reinforcement learning process.  The
> fitness function select viable populations without directly reinforcing
> any weightings of the genes.  A selection scheme is not a reinforcement
> scheme.

Sure it does. You just have to learn to look at it correctly.

If you look at the entire population of a single species as a large
distributed reinforcement learning machine (instead of being focused on a
single individual), then you can see that the weighting of the genes
happens at the gene pool level.  In the population, at any moment in time,
the current frequency of each gene and allele is the weighting of that gene in
the population.

Each birth, and death, changes the weighting of the genes in the gene pool.

It's very much a reinforcement learning machine.
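
A toy sketch of that reading of evolution (the numbers and the trivial fitness 
function are invented): each allele's frequency in the pool is its "weight", 
and selective births and deaths shift those weights generation by generation.

# Toy sketch: allele frequency in a gene pool as a "weight" that selection
# reinforces.  Each genome is a bit string; fitness is just the bit count.
import random

POP_SIZE, GENOME_LEN, GENERATIONS = 200, 10, 40
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

def allele_frequency(population):
    """Fraction of 1-alleles at each locus - the 'weighting' of that gene."""
    return [sum(g[i] for g in population) / len(population) for i in range(GENOME_LEN)]

for gen in range(GENERATIONS):
    fitness = [sum(g) for g in pop]                      # more 1s = more fit
    # Births and deaths: fitter genomes leave more copies in the next pool.
    pop = random.choices(pop, weights=fitness, k=POP_SIZE)
    pop = [[bit ^ (random.random() < 0.01) for bit in g] for g in pop]  # mutation

print([round(f, 2) for f in allele_frequency(pop)])  # frequencies climb toward 1.0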

> >and I basically consider all reinforcement
> > learning processes to be examples of intelligence.
>
> That is far too broad.  Under this scheme a tape recorder would be
> intelligent: it "learns" your voice by aligning magnetic domains with your
> voice signal until the signal-to-noise ratio reproduces your voice
> adequately.  So, is a tape recorder intelligent?

I get made fun of here all the time for saying rocks are conscious. :)

Yes, you can consider that aspect of the tape recorder an example of
intelligence as well.  But it's a stretch.  It's not a good example of
a machine learning slowly by multiple reinforcement events.  You don't
have to train the recorder by recording the same voice 10 times in order to
train it to play back your recording correctly. It always happens in one
step - which means it learns so fast that it has almost no memory of the
past - which makes it a very low level of intelligence.

Yes, the idea of reinforcement learning being intelligence is very broad,
but I think it fits. I think it is the process people are in fact talking
about when they talk about intelligence even though most don't understand
that.

The problem here is that most people have no clue what type of process is
at work creating human behavior and they just consider it "undefined magic"
which they use the name "intelligence" to label.  And since they don't see
human like behavior coming from anything else in the universe, and no one
has been able to make human like behavior come out of our machines despite
no end of effort to do so by lots of very smart people, they make the
obvious assumption that the process behind human behavior must be something
very complex and hard to understand.  As such, something so simple as
reinforcement learning seems to be an obvious wrong answer.  But it's not.

This confusion over consciousness also makes the answer hard to accept.

> >> There are many similar examples that use your language but are not
> >> considered to be intelligent to anyone reasonable in science.  Care to
> >> update your definition or defend it?
> >
> > Many people in science have no clue what they are talking about when
> > they use the word "intelligence".  As such, they define what is, and
> > what isn't intelligent based on total nonsense and ungrounded
> > speculation - as I've said before - without using any empirical
> > evidence to argue from.
> >
>
> Your solution goes so broad that it is not useful because computers right
> now would be intelligent according to your definition,

They are.

> and we know, of
> course, they are quite dumb (esp. Microsoft apps :).  Please clarify.

TD-Gammon (an example I use all the time here) is a reinforcement-trained
program that plays the game of backgammon.  It learned, on its own, how to
best play the game by playing itself in millions of games.  It plays at or
near the level of the best human players, and in one case it even taught
the human experts a better way to play a certain type of opening move.

This program is interesting for many reasons - but the most notable is
simply that it learned how to win backgammon games on its own, instead of
having the author of the program hand-code all his knowledge about how to
play the game into the program.  He built a learning machine, and let it
learn on its own how to play.  The same author had written other backgammon
programs - I believe his programs were the strongest backgammon programs of
their time.  But with TD-Gammon, he took a different approach and removed
all the typical strategy and rules he had built into his past programs, and
instead just built a pure learning machine with no initial knowledge about
how to pick the best move.  It learned how to play the game using a
reinforcement learning algorithm which adjusted the weights of a neural
network that acts as its behavior selection system.  A neural network with
only a few hundred weights in total was able to hold all the knowledge
needed to play backgammon at the level of the strongest human players.
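
Not TD-Gammon itself, but a minimal sketch of the temporal-difference idea
behind it (the toy random-walk task and all constants are invented): each
state's value is nudged toward the value of the state that follows it, with
the final reward anchoring the end of an episode - no teacher ever supplies
the "correct" values.

# Minimal TD(0) sketch (illustrative only, not TD-Gammon itself): learn state
# values from self-generated play by nudging each state's value toward the
# value of the state that follows it.
import random

N_STATES = 5          # toy random walk: states 0..4, terminals off both ends
ALPHA = 0.1           # learning rate
values = [0.5] * N_STATES

def play_episode():
    """Walk randomly from the middle; reward 1 only if we exit on the right."""
    state = N_STATES // 2
    trajectory = [state]
    while True:
        state += random.choice((-1, 1))
        if state < 0:
            return trajectory, 0.0
        if state >= N_STATES:
            return trajectory, 1.0
        trajectory.append(state)

for _ in range(5000):
    trajectory, reward = play_episode()
    # TD(0) update: V(s) moves toward V(s') for each step, and toward the
    # final reward at the end of the episode.
    for s, s_next in zip(trajectory, trajectory[1:]):
        values[s] += ALPHA * (values[s_next] - values[s])
    values[trajectory[-1]] += ALPHA * (reward - values[trajectory[-1]])

print([round(v, 2) for v in values])   # roughly [0.17, 0.33, 0.5, 0.67, 0.83]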

Most of the machines we build don't learn on their own.  There is very little
learning happening in Microsoft Windows, for example, though we do see it
coming into our applications more and more over time, as with spam
filters and spelling correction.

The bottom line here is that most humans think humans are something very
special and somewhat magical.  Many people even think that duplicating
human behavior in a machine is impossible.

I hold the view that it's actually far, far simpler than most believe.  I
think our intelligent behavior is nothing more than what emerges from a
fairly simple reinforcement-trained neural network which translates sensory
inputs into effector outputs using real-time temporal processing techniques.

Unlike most, I don't believe the underlying technology of the neocortex is
a lot of different custom-built modules, each with a different purpose, all
designed and wired by millions of years of evolution.  I think it's in
general one homogeneous type of learning network which is simply used to
process different data in different locations - like the memory chips in
our computers, which are all identical but each have a different function
based only on what is stored in them.

TD-Gammon is a working example of how intelligent behavior can be produced
by the application of reinforcement training to a neural network.  Its
network only has something like 40 nodes, and yet that's enough to store
all the knowledge needed to play backgammon better than 99% of all humans.
What do you think could happen when you figure out how to do something
similar with 100 billion nodes, like the brain is using, instead of 40?

The network implementation used by TD-Gammon isn't a real time reaction
machine like the brain is. It's a program that picks moves in a discrete
game.  It can't be scaled up to solve human behavior problems. But it shows
how close we already are.

And yes, I already consider programs like TD-Gammon to be good examples of
true machine intelligence.

> > Of course that doesn't stop them, because they like to claim things
> > such as
> > "subjective experience is outside the scope of empirical evidence".
> > And then they tell us what _their_ subjective experience is like and
> > use their beliefs about their own subjective experience to "prove" an
> > endless list of nonsense ideas about the universe.
> >
> > The typical argument and thought path starts with the belief that human
> > consciousness is something that exists only in humans.  Then from
> > there, they make the argument that since humans have this magical
> > attribute called
> > consciousness and other things like the Earth doesn't, that
> > intelligence requires consciousness.  But since they don't have any
> > clue what creates human consciousness, they also don't have any clue
> > what creates intelligence and don't really have any way to determine if
> > the earth is intelligent or not.
>
> true, but science always has to operate on a best working theory.  Yours
> is too broad and theirs is too narrow.

I don't just stop at "reinforcement learning".

To duplicate human behavior in a machine, I believe we need a real-time
connectionist network that can process parallel data streams (like the
brain, but also like our ANNs) which is trained by reinforcement.

I have simple working examples of multilevel networks that can be trained
by reinforcement - though they still lack important learning powers.
I've talked about them in detail here in c.a.p. in the past many times over
the years.

I believe networks something like these are the key to creating AI.

> > And when asked to explain what evidence they have to suggest this
> > attribute
> > exists only in humans, they use the self serving argument that since
> > they "known" it exists in them, and that other humans are physically
> > similar to them, that this stuff they known exists in them must also
> > exist in others.
> >
> > But all that argument and the arguments that grow from it are based on
> > a belief that has no support.  The belief that "consciousness" is
> > something other than simple brain function.  That consciousness is not
> > an identity with physical brain function.
> >
> > However, all the empirical evidence we have tells us that assumption is
> > wrong.  And if we choose to believe what the empirical evidence shows
> > us (materialism) - then we know that there is nothing here to explain,
> > other than the physical signal processing that happens in the brain
> > which produces human behavior.
> >
> > Once you grasp the significance of what the empirical evidence is
> > telling us, all the need of defining intelligence as some sort of link
> > with "being conscious" goes away.  We are left with defining
> > intelligence is some class
> > of signal processing algorithm that describes how the brain works.  And
> > though there are multiple options there, none of them make intelligence
> > hard to understand. It's no harder to understand than any typical
> > machine learning algorithm for example.
> >
> > I choose to use the fairly broad and generic definition of intelligence
>
> yes, too broad to be useful for anything but metaphysics.
>
> > being a reinforcement learning system which allows the concept to
> > include many processes other than just what the brain does - such as
> > the process of
> > evolution.
>
> evolution is just an environmental and social selection process of who
> lives and dies based on who fits an arbitrary criterion.  How is that
> intelligent?

Because it's another example of the fundamental process of reinforcement
learning at work.  Most people won't understand this connection until
someone builds a working AI machine that actually acts somewhat human-like,
and then explains to them how that machine works.

> > You could easily restrict the definition to something closer to what he
> > brain does, which would be something more like a real time distributed
> > parallel signal processing network trained by reinforcement instead of
> > the far broader "all reinforcement learning processes" I like to use.
> >
> reinforcement learning does not even begin to address the intelligence
> problem because it is method of weighting certain nodes in a network more
> or less than others, but does not at all address any system level
> architecture or algorithm, thus is does not provide a model of reality,
> just a way to reinforce a model if you have one.  For example, a classic
> back propagation neural network (NN) is a implementing reinforcement
> learning in a NN architecture with a propagation training algorithm;
> however, NN have proven to be completely useless to do anything
> intelligent and cannot even be made to converge in a hierarchical
> configuration even with massive (impractical) amounts of tagged training
> data.

Your error is in believing their uselessness has been proved.  No such
proof exists.  What you are pointing to is the fact that finding the right
architecture is hard.

Back prop is NOT reinforcement learning.  Back prop is learning by example,
sometimes called supervised learning, which puts a severe limit on
creativity that does not exist with true reinforcement learning.

Reinforcement learning is trial-and-error learning.  It's learning by
experimentation.  Back prop is learning by example, where the examples are
normally provided by some external teacher who must always be "smarter"
than the "student".  Reinforcement learning allows the student to learn
things the teacher never knew (as per what happened with TD-Gammon).
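
A toy sketch of the distinction (the three-action setup, payoffs, and the
deliberately suboptimal teacher are all invented): the supervised learner can
only copy what its teacher demonstrates, while the trial-and-error learner
discovers a better action from the reward signal alone.

# Toy sketch: supervised learning copies a teacher's answer; reinforcement
# learning discovers an answer by trial, error, and reward.
import random

reward_of = [0.2, 0.9, 0.4]      # true (hidden) payoff of actions 0, 1, 2
teacher_says = 2                 # the teacher's best known action (not optimal)

# Supervised learning: just memorize whatever the teacher demonstrates.
supervised_choice = teacher_says

# Reinforcement learning: keep a value estimate per action, try actions,
# and nudge estimates toward the rewards actually received.
values = [0.0, 0.0, 0.0]
for _ in range(2000):
    a = random.randrange(3) if random.random() < 0.1 else values.index(max(values))
    r = reward_of[a] + random.gauss(0, 0.05)      # noisy reward signal
    values[a] += 0.1 * (r - values[a])

rl_choice = values.index(max(values))
print("teacher/supervised picks:", supervised_choice)   # 2
print("reinforcement learner picks:", rl_choice)        # 1, better than the teacher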

> <snip>
>
> > Well, I think "goal" is the wrong way to understand the operation of
> > the brain though it's not too far off.
> >
> > The true goal of a reinforcement learning machine is to maximize
> > expected future reward.  So it's a reward maximizing machine with one
> > prime goal.
> >
>
> greedy algorithms are only good for problems with smooth gradient
> descent; however, they get trapped at local maxima and minima.
> Intelligence is really looking for global maximas so reinforcement
> learning is a rather dumb scheme to achieve this.  Of course, GA's may
> try to chaotically hill hop until a higher hill top is found, but GA's
> are useless because a suitable fitness function and gene configuration
> space are almost impossible to define; esp., from a top down approach.

Yes, the problem of not getting trapped at a local maximum is important.
But I believe the solution to that is to fracture the problem into a huge
number of smaller problems.  That is, each node in a network is solving its
own little piece of the problem.  And though many nodes at any one time
might be trapped at local maxima, as long as some nodes are not trapped,
they can still learn and improve the network's solution.  And as those
nodes change, they end up changing the landscape for other nodes, freeing
some of them to make progress.  As long as some nodes are still making
progress, the network as a whole can still make progress.

Another way to look at this is that the learning is not just a
one-dimensional or two-dimensional hill climbing problem.  If you have
100,000 nodes all learning in parallel, it's a 100,000-dimension hill
climbing problem.  The more dimensions you can break the problem into, the
more likely there will be a path around all the local-maxima traps.

The key to making this work is finding the right architecture.  The
architecture you apply to the problem determines the nature of the
multidimensional surface the "hill climbing" is applied to.  The
architecture you pick defines the nature of the problem it's trying to
solve.
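
A compressed sketch of that "many small hill climbers" picture (the coupled
scoring function and all constants are invented, not a claim about any actual
network): each node greedily nudges only its own weight, and the shared score
keeps improving as long as any single node can still find an uphill move.

# Sketch of many small hill climbers sharing one score (coordinate ascent):
# each node adjusts only its own weight, yet progress by any node counts.
import random

N = 20
weights = [random.uniform(-5, 5) for _ in range(N)]
targets = [random.uniform(-2, 2) for _ in range(N)]

def score(w):
    # Coupled objective: each weight wants its target, neighbours want to agree.
    fit = -sum((wi - ti) ** 2 for wi, ti in zip(w, targets))
    smooth = -sum((w[i] - w[i + 1]) ** 2 for i in range(N - 1))
    return fit + 0.5 * smooth

STEP = 0.05
for sweep in range(200):
    improved = False
    for i in range(N):                      # every node climbs its own hill
        base = score(weights)
        for delta in (STEP, -STEP):
            weights[i] += delta
            if score(weights) > base:
                improved = True
                break                       # keep the uphill move
            weights[i] -= delta             # undo the downhill move
    if not improved:                        # no node can improve: a (local) peak
        break

print("final score:", round(score(weights), 3))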

> > What the prime goal translates into is some internal systems of values
> > for all possible behaviors which in turn translates into some behavior
> > probability distribution.  This in turn must drive whatever mechanism
> > is in
> > place to select behaviors.  The system that decides what behavior to
> > select
> > for the current context is using the internal system of values to pick
> > between alternatives.
> >
> > Our higher level ideas of "goal seeking" is simply the fall out of a
> > the lower level behavior selection system picking the best behaviors
> > for any given context.
>
> too vague.  See above. A mountain stream searching for the best (i.e.,
> least energy way) to get to the bottom is implementing all the
> intelligent features you propose (e.g., reinforcement of paths that work,
> "goal seeking", and selecting locally optimal behaviors), however, saying
> that a river defines intelligence is completely useless.  Care to be more
> practical and specific?

Yeah, but this message is too long already.  Let me post another one to
talk about my approach in more detail.

> > > When you translate the implementation of the system into a reward
> > > trained
> > behavior selection system, the frame problem doesn't even make much
> > sense to talk about.  The frame problem arises nearly as much out of
> > incorrectly framing the question of what the purpose of the agent is.
> > However, the issues that surround the frame problem are real.  But they
> > are all answered
> > in the context of a system which has the power to prioritize all
> > possible responses to stimulus signals.  That is, which reaction the
> > system chooses at any point in time based on its learned values
> > (priorities if you like) is the answer to how the system deals with the
> > frame problem.  That is, the
> > one problem it must solve (how to select which behavior to use at any
> > instant in time) is the same answer to the frame problem.
> >
> > Finding a workable implementation of such a system is the path to
> > solving AI.
>
> This is one aspect of the path to AI in the most general sense, but your
> ideas are too vague (e.g., you assume an internal model of reality exist
> to even be able to perceive what a frame or context is- that does not
> exist).
>
> > > You said above that goals do not drive perception. That's just not
> > > true in
> > my view. II think our perception and our behavior selection are one and
> > the
> > same problem.  Perception is a problem of behavior selection.
> >
>
> I disagree; e.g., how is perceiving the sound of a drum to be a drum a
> "problem of behavior selection"?

Again, let me explain my approach from the bottom up in a different message
and I think you will better understand what I'm getting at there.

> > On the sensory input side of the network, the major function is
> > perception,
> > but as the signals flow through the network, the function transforms
> > into behavior selection.  So near the output side of the network, it's
> > mostly "goal driven" an on the input side it's mostly "perception
> > driven" but I believe it's a fairly even continuum though the network
> > as raw sensory data
> > is translated to raw effector output data.
> >
> > We see how this works when we test color perception of people raised
> > (aka trained) in different cultures with different words for different
> > ranges of
> > colors.  Our perception of color bends to correctly fit the
> > classification of light frequency labeled by the words of our language.
> >
> of course we project our bias on what we perceive; however, that does not
> prove that perception is only based on our goal bias, which seems way off
> mark to me.  Please clarify.
>
> >>
> >> >Attention is paid only
> >> > to world states which bear on the system's goals (as a background
> >> > process).
> >>
> >> Of course, goals to play an important role in how to focus attention,
> >> and to some extent this colors the frame problem, but I do not see how
> >> it drives it exclusively as you put it.
> >
> > It drives it exclusively in my view because behavior selection is all
> > the brain is doing and behavior selection works by picking behaviors
> > that are estimated to produce maximal expected return for the given
> > context.  And this general process of selecting the "best" behaviors at
> > the lowest level is both the mechanism which creates what we think of
> > as goal seeking and the behavior which is think of as attention focus.
> > I see them as one and the same process at the low level.
> >
> the brain models phenomenon far before it forms a goal in relation to
> that phenomenon.  No doubt they may be organically comingled, but to say
> your goal defines your model of phenomenon would amount to a real LSD
> psychedelic experience of reality, which the rest of us do not
> experience. :)

:)

In my networks, there are two processes at work which are commingled (to
use your term).  One process is learning the statistical properties of the
data (unsupervised learning from sensory data), and the other is the
reinforcement learning process which is applied on top of the statistical
system.  The idea of the statistical system is to compress the data to
allow the internal signals of the network to represent as much information
about the state of the environment as possible with as little redundancy as
possible.  It must also create a mapping that solves the invariant
representations problem.   That creates the default mapping of the network.
Reinforcement learning is then added on top of that to warp the default
mapping in order to maximize reward.

To say the goal "defines" the model, is simply to say that the
reinforcement training re-defines the default model of the system based on
experience.
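
A compressed sketch of those two commingled processes (the choice of Oja's rule
for the statistical part, the REINFORCE-style update for the reward part, and
the toy task are all my own stand-ins, not a claim about the actual networks
being described): an unsupervised rule learns a compact feature of the raw
data, and reinforcement then warps the feature-to-action mapping to maximize
reward.

# Sketch of the two commingled processes: an unsupervised layer compresses raw
# input into one feature, then a reward signal warps the feature-to-action map.
import math
import random

DIM = 4
w_feat = [random.gauss(0, 0.1) for _ in range(DIM)]  # unsupervised feature weights
v = 0.0                                              # reinforcement-trained action weight

def sample_input():
    """Raw sensory vector whose main variation comes from one hidden cause."""
    cause = random.choice((-1.0, 1.0))
    return cause, [cause + random.gauss(0, 0.3) for _ in range(DIM)]

for _ in range(5000):
    cause, x = sample_input()
    y = sum(wi * xi for wi, xi in zip(w_feat, x))            # compressed feature
    # 1) Statistical / unsupervised step (Oja's rule): track the dominant
    #    direction of variation in the sensory data.
    w_feat = [wi + 0.01 * y * (xi - y * wi) for wi, xi in zip(w_feat, x)]
    # 2) Reinforcement step: act on the feature, observe reward, and nudge the
    #    action weight so rewarded choices become more likely (REINFORCE-style).
    p = 1.0 / (1.0 + math.exp(-v * y))                       # P(choose action 1)
    a = 1 if random.random() < p else 0
    reward = 1.0 if (a == 1) == (cause > 0) else 0.0
    v += 0.05 * (reward - 0.5) * (a - p) * y

print("learned action weight:", round(v, 2))  # |v| grows as the policy exploits the feature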

> <snip>
>
> > This is because this person has never had experience with this type of
> > "frame" in the past and has very little experience with how to
> > correctly react to this combination of stimulus signals.  None the
> > less, the brain will still pick a reaction to the stimulus signal based
> > on the past experience the person has had.  For this guy, the
> > "reaction" to the frame might be to move the eyes to focus on the tree
> > in the background because all the city street stuff in the foreground
> > looks mostly like "noise" to him.
>
> you are mixing high, system level goals (I want to find some food to eat)
> with low level goals (like if you hear an unexpected sound turn your head
> towards the sound).  It is certainly not useful to say that low level
> goals defines our perceptions; of course, they do filter it by
> pre-selecting what our higher level systems can become aware of.

Though we like to talk about these effects using different concepts at the
high levels and low levels, I believe they are mostly just the same effect
at work at different levels.  I believe all the high-level "goals"
we talk about in human behavior are simply emergent side effects of the
low-level system seeking to maximize its expected future reward signal.

> <snip>
>
> >> I don't think anyone would say that classic AI would not return to the
> >> world to gather more facts to add to its "millions of facts".  The
> >> issue that Dreyfus says is the problem with AI is that it creates
> >> rules that are representations (or symbols) and are compartmentalized,
> >> both of which he says the Philosopher Heidegger espouses, which
> >> Dreyfus and his set of philosophers/researcher say is not the case.  I
> >> think every Intelligent system will end up effectively having a
> >> constantly evolving set of millions of "rules", so that is not the
> >> question.  Do you have any counter examples?
> >>
> >> Cheers!
> >> Ariel B.-
> >
> > I don't fully understand what you are suggesting here because I don't
> > tend to read or study the work of the type of people you are studying.
> > I'm not sure for example what the debate is on representations.
>
> representations require a modular, hierarchical architecture that parse
> realty according to the modules and objectify actions and phenomenons as
> being completely separate.  Dreyfus and the source he relies upon say the
> brain is a flat network with no modules and have no separation between
> action and phenomenons.

I suspect I agree with the general direction Dreyfus is headed (and
Freeman), but I happen to believe that even in the type of network he's
thinking about, sections of that flat network will naturally tend to form
what we would likely call modules and a hierarchical architecture.

> <snip>
>
> >
> > However, just because the GOFAI approach ran into a wall after awhile,
> > that
> > doesn't mean that using "symbols" to represent something was wrong.
> >
> Dreyfus says using "symbols" in any way is very wrong.

Yeah, I've debated the fact that pulses really are symbols.  There are some
here that call me an idiot for suggesting that. (but lots of people like to
call me an idiot).

> > The brain uses a common language of symbols to represent everything as
> > well.  Those symbols however are spikes.  Digital computers use 1 and 0
> > symbols to represent things.  Manipulating symbols which are
> > representations is exactly what the brain is doing.  Any argument to
> > the contrary is misguided.
>
> OK, explain, then, how neural networks (NN) have explicitly symbols as
> you say is mandatory in an intelligent system.

Spikes are symbols.  If you have a light sensor which generates a frequency
modulated spike train based on light intensity, you can think of each spike
the sensor generates as being equivalent to a person yelling a word which
means "I just saw a billion more photons!".  The pulse is clearly a
symbolic representation of a physical event detected by the sensor.

This clearly fits in my mind the definition of a symbol:

http://en.wikipedia.org/wiki/Symbol

   A symbol is something such as an object, picture, written word, a sound,
   or particular mark that represents something else by association,
   resemblance, or convention,

Likewise, if these symbols are processed by neurons and produce new symbols
based on some logic as a function of other symbols, the new symbol (spike)
takes on new meaning because the condition which causes the generation
of the symbol will be different - it will represent something else.
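
To make that concrete, here's a minimal Python sketch of the idea (purely
illustrative - the sensor model, rates and names are my own assumptions,
not anything from Dreyfus or from neuroscience).  A light intensity gets
encoded as a frequency modulated train of discrete pulse events, and each
pulse can be read as a little symbol meaning "more photons just arrived":

    import random

    def spike_train(intensity, duration=1.0, max_rate=100.0):
        """Encode a light intensity in [0, 1] as a list of pulse times.

        Each pulse is a discrete event - a 'symbol' that by convention
        means 'another chunk of light was just seen'.  Brighter light
        produces pulses more often (a frequency modulated code)."""
        rate = intensity * max_rate            # average pulses per second
        t, pulses = 0.0, []
        while True:
            # Poisson-like spacing: brighter light -> shorter gaps.
            t += random.expovariate(rate) if rate > 0 else duration
            if t >= duration:
                return pulses
            pulses.append(t)

    print(len(spike_train(0.1)), "pulses in one second of dim light")
    print(len(spike_train(0.9)), "pulses in one second of bright light")

A downstream node that re-emits a pulse only when the recent gaps have
been short is then generating a new symbol ("bright light") whose meaning
comes from the condition that generated it - which is exactly the "takes
on new meaning" point above.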

It's possible that intelligence could be created with an analog signal
system in which case we wouldn't really be able to use the word "symbol"
anymore because symbol implies a discrete representation system.  Though I
suspect all our best attempts to create a learning system that duplicates
human-like learning and behavior skills will be based on discrete symbols.
I say this because I suspect it's very hard to do high quality
reinforcement learning if the lowest level representation system isn't
using discrete symbols.  That is, it seems to me there must be discrete
events that are being conditioned.  I certainly have only played with
discrete signaling systems though I would not be surprised if one day
someone figured out how to do it in a purely analog domain.

> > The solution to creating human like behavior in a machine is to build
> > symbol manipulating machines (aka signal processors), but the symbols
> > must be a level closer to spikes or bits, than to English words.
> >
> > I also think that the correct implementation is along the line of a
> > connectionist network which is processing multiple parallel signal
> > flows. So from that perspective, I think it's more useful to think of
> > the network as a signal processing machine than as "representations
> > with symbols".
>
> OK, then address the NN question above.  Generally, I believe, you need a
> hierarchy to represent a symbol.  Dreyfus et. al. says the NN is flat.

I'm going to address this in a follow up....

> >But
> > it's the same thing no matter which way you talk about it.  An AM radio
> > signal is still a representation of the vibration of the air, and is
> > also a
> > representation of the thoughts of the DJ which was speaking on the
> > radio.
>
> Not a good example, since the AM radio has no symbols to represent the
> voice info.  It is just an amplitude modulated analog signal.

Right, it's an analog representation instead of a discrete representation.
It very much is a representation, but we don't use the word "symbol"
unless it's a discrete representation system.

This fact that the English word "symbol" only applies to discrete
representation systems says far more about English than it says about how
the machines work.

> > How you choose to label these systems is matter of viewpoint far more
> > than a true matter of what the system is doing or how it works.
>
> Not really, see my above comments on these systems you outline don't work
> as you stipulate.
>
> Thanks for your thoughtful reply!
>
> Cheers!
> Ariel-

I'll follow up and talk a bit more about the specifics of the type of
networks I've been playing with to try and solve the problem of general AI
by building a reinforcement trained learning network.

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
11/21/2008 5:17:31 AM
Publius <m.publius@nospam.comcast.net> wrote:
> "Isaac" <groups@sonic.net> wrote in
> news:49256351$0$95529$742ec2ed@news.sonic.net:

> http://www.guardian.co.uk/science/2007/apr/29/theobserversuknewspages.uknews1
>
> The bird solved that problem in a matter of minutes, even though it was
> surely a problem situation it had never before encountered. That is
> intelligence.

Yes, I think that's a fine example of intelligence.  But I suspect you give
the bird far too much credit for how novel this problem was.

Do you think the bird has never had to pick up a worm or snake in its life
and hold it with its foot on a branch?  I suspect the behavior of dealing
with a long thin piece of food is in fact fairly common for most birds.

The fact that we understand this as a string (not food) with food tied
to the end that we have to "pull up" to "get the food" is probably not how
the bird was seeing it at all.  He probably processed it as a long piece of
food with the "good part" at the end, so he was just moving the food around
to get the good part.

However, no matter how novel this example was or not, a big part of
intelligence is the ability to correctly apply lessons learned in past
situations to a new and fairly novel situation.  I agree completely with
that issue.

As I've written, I believe AI will be solved by a reinforcement trained
neural network.  The big reason in my view that these have had little luck
in the past is because I think no one has yet produced a good solution to
the generic classification problem which forms the basis for how such a
network applies lessons learned from the past, to future problems.

In general, all neural networks will accept a novel input and act on it.
The goal when we design and use neural networks is to get them to correctly
associate new and novel inputs with past training.  If we train the network
to produce behavior A for one input (one context), and behavior B for
another context, how does it respond to a new and novel context?  It does it
because all neural networks create associations where they will consider the
new input to be somewhat like context A and somewhat like context B.  It
will pick the answer based on which context the neural network decides the
new input is closest to.

If it doesn't get that "closeness" measure correct (aka how we might "want"
it to work), then it won't produce very good answers.

The design and structure of the network determines how it creates these
input equivalence classes and so far, I don't think anyone (that I've heard
of) has gotten this implemented correctly - that is, in a way that will
allow the network to equal the learning power of the human brain.

If you get it wrong, then the network when performing an image
classification task (for example) might decide the red car looks more like
a red bird than the 1000 other "cars" it knows about and respond to the
picture by saying "bird", instead of "car".
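
A toy sketch of that point (this is just a hand-rolled nearest-neighbour
illustration, not a claim about how any real network computes closeness):
the same stored training examples give different answers for a novel input
depending entirely on which closeness measure is used.

    # Stored training examples: (colour, size) -> label.
    # Colour runs 0 = red .. 1 = blue, size runs 0 = small .. 1 = large.
    examples = [
        ((0.5, 0.9), "car"),     # a typical car: middling colour, large
        ((0.0, 0.1), "bird"),    # a red bird:    red colour,     small
    ]

    def classify(novel, w_colour, w_size):
        """Pick the label of the 'closest' stored example, where
        'closeness' is a weighted distance over the two features."""
        def distance(example):
            (colour, size), _label = example
            return (w_colour * abs(novel[0] - colour) +
                    w_size   * abs(novel[1] - size))
        return min(examples, key=distance)[1]

    red_car = (0.0, 0.9)                              # novel input: a red car
    print(classify(red_car, w_colour=1, w_size=1))    # -> car  (sensible measure)
    print(classify(red_car, w_colour=1, w_size=0))    # -> bird (bad measure)

The hard part, of course, is that nobody gets to hand-pick the weights for
a million-feature sensory stream; the network has to find the right
measure itself, which is where the temporal data comes in below.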

I think the key to how to make a network solve this problem lies in the
temporal domain.  That is, the temporal correlations, and the temporal
predictions that exist in the sensory data must be used to form the
association maps that determine how the network resolves these measures of
"closeness".

A direct measure of the network's "intelligence" is how good the network is
at picking the right measures of closeness so it knows that the behaviors
that worked well for eating a snake (for example) are likely to work well
in eating food tied at the end of a string.  And it could make that
association by the fact that the string moved (changed over time) in ways
that were similar to a snake or worm.  It's how it changes over time (aka
the temporal domain) where all the important information is located.  If
your network is not a real time system which learns how to build parse
trees using the temporal constellations that exist in the data, it won't
have the same level of "intelligence" (aka the ability to _correctly_ apply
past training to a novel context).

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
11/21/2008 6:00:50 AM
curt@kcwc.com (Curt Welch) wrote:
> "Isaac" <groups@sonic.net> wrote:
> > "Curt Welch" <curt@kcwc.com> wrote in message
> > news:20081119002253.348$D7@newsreader.com...

> > I disagree; e.g., how is perceiving the sound of a drum to be a drum a
> > "problem of behavior selection"?
>
> Again, let me explain my approach from the bottom up in a different
> message and I think you will better understand what I'm getting at there.

Ok, so let me give a bit more of an overview of the type of networks I've
been playing with and how I think this approach is going to solve AI.

First, let me say I just play with this as a hobby.  But it's a hobby I've
been messing with for about 35 years now (on and off - mostly off - I can
ignore it for years and then come back to it and work hard at a new idea for
weeks on end).

For years I played mostly with feed forward binary networks where every
node in the multilayer network would calculate a new output value for each
computation cycle of the network.  My goal has always been to understand
how to apply reinforcement learning to multilayer neural networks.  It's
hard to get these sorts of networks to do anything interesting - but I've
had some successes over the years and far more failures.

But a few years back, I got interested in the idea of simulating async
pulse networks instead of the binary networks.  That was quite a paradigm
shift because all the data existed in the temporal domain.  No longer would
a node be given a set of "input values" to compute a value from.  Pulses
would show up at the inputs at different times (like a real brain) with no
synchronization of the pulses.  This shifted my view on the type of
function the nodes of the network would perform.  They would need memory of
how much time had passed since previous pulses and their behavior would be
a function of those sorts of time values.  It was a whole new way of
thinking about networks for me.

In addition, there was a recurring problem I had in the old style networks:
there was a constant need for conservation of information in the network.
This need was driven by the fact I was trying to train these networks using
only reinforcement - which means the network as a whole was given rewards,
but no indication as to what it was that the network did which caused the
reward.

If I was trying to train the network to perform some simple logic function
on the inputs, there was this complexity: if the input data that was needed
to generate the correct output was "lost" in the lower levels of the
network, and never made it to the final output levels, then any reward the
network got for doing what looked like the right thing would never in fact
reinforce the true correct behavior, because even though the output happened
to be correct, it was always correct for the wrong reason.

Nonetheless, this led me to realize that for these types of systems, it
was important to try and make sure all outputs were a function of all
inputs and that the network didn't create functions which in effect dropped
input values from the function (as if they were multiplied by a value of 0,
for example).

This sort of information conservation need, combined with the idea of using
async pulse signals, got me to the idea of using a pulse sorting network.
That is, instead of thinking of the problem as network nodes that "fired"
based on different logic, I started to look at it as a problem of routing a
single pulse through some path in the network until it reaches an output.
The pulse was not allowed to fork, or to die out.  It was forced by the
design of the system to always be routed along some path.

With this idea, I've been playing with networks that have nodes with a
single input and two outputs.  They act as binary switches.  Each
time they receive a pulse, they must make a binary decision about which way
to route the pulse.  The networks I use are feed forward, so there are no
direct loops in them.  The pulse enters, flows through the network, and
exits somewhere.

In software, I implement these networks so that they only route one pulse
at a time.  This makes this type of network greatly outperform any typical
NN I've used in the past, which required that every node in the network
have its output value recalculated for every cycle of the network.  In this
new approach, only the nodes that the pulse passes through have to be dealt
with.
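
Here's a bare-bones sketch of the kind of structure I mean (Python, purely
illustrative - the class names are made up and the routing rule is just a
placeholder random choice; the whole research question is what the routing
rule should really be):

    import random

    class SwitchNode:
        """One input, two outputs.  Every pulse received is routed, whole
        and undivided, out exactly one of the two sides - it never forks
        and never dies out."""

        def __init__(self, out_a, out_b, p_a=0.5):
            self.out_a = out_a            # downstream node, or an output label
            self.out_b = out_b
            self.p_a = p_a                # probability of routing out side A
            self.last_pulse_time = None   # memory of recent traffic

        def receive(self, pulse_time):
            self.last_pulse_time = pulse_time
            # Placeholder decision rule: a biased coin flip.
            target = self.out_a if random.random() < self.p_a else self.out_b
            if isinstance(target, SwitchNode):
                return target.receive(pulse_time)   # keep routing downstream
            return target                           # reached a network output

    # A tiny feed-forward mesh: one entry node feeding two others.
    left  = SwitchNode("turn_left",  "do_nothing")
    right = SwitchNode("do_nothing", "turn_right")
    entry = SwitchNode(left, right)

    for t in range(5):
        print(t, entry.receive(float(t)))   # one pulse in, one output out

Note that only the two or three nodes on the pulse's path do any work at
all, which is where the performance win over recompute-every-node networks
comes from.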

Now, even though the network passes pulses in a feed-forward fashion, there
can be feedback loops in multiple ways.  First, the logic a node uses to
determine how to route a pulse is based on pulses that have passed through
the network in the recent past.  A node can look at how much time has passed
since a pulse flowed through a down-stream node in the network, for example.
Doing this creates a feedback loop from the higher (downstream) levels of
the network to previous levels.  So even though the pulses don't travel in
the feedback loop, the information about the pulses effectively does,
creating feedback effects.

How each pulse gets routed through the network then becomes a complex
function both of how the network is wired and configured, and of which
pulses passed through which parts of the rest of the network in the recent
past.

One way to look at this type of network is as a big decision tree - except
it's a mesh instead of a tree and it has multiple inputs and multiple
outputs.  For each pulse injected into the network, the individual nodes
all make a decision about how to route the pulse, and the combined effect
of all the nodes' decisions creates a network-wide decision on how
to map a single input pulse to a single output path.

This type of architecture lends itself very nicely to the domain of
reinforcement learning.  That's because the nodes are producing a clear
binary behavior.  If the behavior of routing a pulse to one output is
punished, it's clear what the alternative is - increase the probability of
routing it the other way in the future.  If the behavior has multiple
options, reinforcement learning requires that a statistical value be
traced for each option.  With binary decisions, a single statistic can be
maintained for the single binary decision (with the statistic representing
the probability of one decision, and the probability of the alternate
decision just being 1-p).
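
In code, the per-node bookkeeping really is just one number.  A sketch of
the update (again my own simplification - the credit assignment here is
the crudest thing that could work, not a claim about the right scheme)
that would plug into the SwitchNode sketch above:

    def reinforce(nodes_on_path, chose_a_flags, reward, step=0.05):
        """Nudge the routing probability of every node a pulse passed
        through: toward the choice it made if the reward was positive,
        away from it if the reward was negative.

        nodes_on_path  - the switch nodes the pulse was routed through
        chose_a_flags  - True where that node sent the pulse out side A
        reward         - positive for reward, negative for punishment"""
        for node, chose_a in zip(nodes_on_path, chose_a_flags):
            delta = step * reward * (1.0 if chose_a else -1.0)
            # One statistic per node: p_a.  The other side is just 1 - p_a.
            node.p_a = min(0.99, max(0.01, node.p_a + delta))

The clamping keeps a node from ever becoming completely deterministic, so
it can always be pulled back the other way by later rewards.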

Now the idea of this type of network is that for each pulse received, the
network must make an instant decision about how to react to it - about
which output to send the pulse to.  The decision is not however made by the
entire network.  It's only made by the nodes the pulse passes through.  So
how the pulse is routed through the network is really creating two effects
at once.  Not only is each node that routes the pulse to some downstream
node making part of the decision, it's doing it by deciding which
downstream node (or downstream sub-network, logically) should make the rest
of the decision.  Each node then is expected to learn which downstream
sub-network works best at producing higher rewards, based on the current
context (as best as a single node "understands" the current context).  So
the learning problem gets distributed over the network, with different
parts of the network specializing in different contexts (in theory).

Now, the idea I'm trying to create, is that the routing logic of the
network needs to, by default, attempt to solve the invariant representation
problem.  That is, by using temporal correlations in the data, it should
build a default parsing network, that tends to assign different nodes to
represent different invariant contexts of the sensory data.

To be a little simpler, this means that such a network should, if it does
what I think it needs to do, learn to recognize common temporal patterns in
the data, by routing the pulses that represent that pattern, to common
nodes.  That is, if it learns to recognize dogs, this means the network
will learn to route the pulses which represent a dog to the "dog" node
somewhere down in the higher levels of the network.

This is not something it will be trained by reinforcement to do.   It is
something it must be able to do simply by studying the temporal predictive
nature of the raw signals alone.  It's information that exists in the input
data, and that information is what guides the network into correctly
"parsing" the sensory data into invariant representations.  Those invariant
representations are the indication of what state the environment is
currently in.

So, without any reinforcement learning, the network must be able to
configure itself to correctly model the state of the environment as
represented in the sensory data.  This alone should cause the final outputs
of the network to represent high level features of the environment.  I've
not yet found the correct logic to make this work like it needs to, by the
way - and it's the part I generally spend my time thinking about.

With that much working, the network in effect has learned to "see" the
invariant objects that exist in the environment.  The larger the network,
the higher resolution the system has - aka the more "things" it has the
power to "see" in the environment.  A key concept here is that the outputs
are assigned so they are equally likely in probability.  So the feature
assignment works so as to make each feature the network learns to recognize
equally likely to show up in the system's environment.

Now, on top of this we add the all important reinforcement learning.
Without reinforcement learning at work, we can build a network that learns
to recognize a dog, but it has no clue how it should react to a dog.  It
has no way to determine good from bad - it would have no values.  Should it
kill the dog, eat it, run away from it?  Without reinforcement learning, it
might have the power to predict how the dog will react to any action
created by the agent, but it would have no way to prioritize one action
over another.

So the unsupervised learning gives it the power to "understand" the world,
but the reinforcement learning tells it how to react to the world.  In
short, it picks the behaviors which are expected to produce the highest
levels of future rewards.

Now, as I pointed out above, a key part of the unsupervised learning is
that it naturally divides the environment into equal probability features.
Each signal path in the network represents a different feature of the
environment but each feature tends to show up over time with roughly equal
probability.  The output features - the ones we see at the output signals
of the network - are in fact the behavior of the system.

So, if a given output from the network was wired to cause the robot to
lift its right arm, and by chance, that "feature" happened to be the "dog"
feature, this robot by default would lift its right arm whenever it saw a
dog.  That would be an odd thing to do, but that might be the default
behavior of this network.

We add reinforcement learning on top of this, to select how the network
should really respond to dogs.

To do this, we simply bend the probability of each feature.  Recall that
each node in the network had two outputs.  This also means each node in
effect is classifying each pulse it receives as being one of two features.
Each node in effect is acting as a feature extractor.

Let me give a real example to make this clearer.  If a node was fed a pulse
stream from a light detector, it could receive high frequency pulses for
bright light, and a stream of low frequency pulses for dim light.  The
frequency would measure the current intensity of the light - or, the
temporal spacing of the pulses would indicate the light level.  A node
which remembers how much time has passed since the last pulse could use
that to make its classification decision (and this is a network design
I've played a lot with).  It would sort the pulse out one path if the
previous pulse was close in time (high frequency) and sort it out the other
path if the previous pulse was further in the past (low frequency).  The
pulses coming into the node in effect have a symbolic meaning of "light".
But the pulses going out of the node have gained additional symbolic
meaning.  Pulses out one side would in effect mean "bright light", and
pulses out the other side would in effect mean "dim light".

So we see with this example how a node can classify pulses by sorting them.
One output signal is an indication of "low light level" and the other is an
indication of "high light level".

If we then route the low light pulses to an output that makes a simple two
wheel robot turn right, the network will cause the robot to turn right when
it sensed a low light condition.  If the network also routed the high light
level pulses to the "turn left" output, the same robot would turn left in
response to bright light.

Now, when classifying light as "bright" and "dim", the default behavior of
the node is to set the dividing line so that the "bright" output is active
on average as much as the "dim" output is active (aka the same number of
pulses over time on average).

To add reinforcement learning on top of this, we allow the rewards received
by the network to shift that default balance.  If the network gets more
rewards over time for sending pulses out the "bright light" side, we shift
the classification behavior of the node to send more pulses out that side.
It does this by lowering its definition of how much light is needed to be
considered "bright" (or, we can think of it as shifting the pulse frequency
the node uses as the dividing line between its definition of dim and bright
light).
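
A sketch of this one node in code (the numbers and the two update rules
are placeholders I picked to show the shape of the idea, not anything
tuned or claimed to be right):

    class LightClassifierNode:
        """Sorts incoming light pulses into "bright" vs "dim" based on the
        gap since the previous pulse (short gap = high frequency = bright)."""

        def __init__(self, threshold=0.5, balance_step=0.01):
            self.threshold = threshold    # gap length dividing bright from dim
            self.balance_step = balance_step
            self.last_time = None

        def receive(self, t):
            gap = None if self.last_time is None else t - self.last_time
            self.last_time = t
            if gap is not None and gap < self.threshold:
                # Unsupervised default: drift the dividing line so the two
                # outputs stay roughly equally active over time.
                self.threshold -= self.balance_step
                return "bright"
            self.threshold += self.balance_step
            return "dim"

        def reinforce(self, rewarded_side, reward, step=0.05):
            # Reward bends the default balance: if "bright" pulses earn more
            # reward, widen what counts as bright (and vice versa).
            if rewarded_side == "bright":
                self.threshold += step * reward
            else:
                self.threshold -= step * reward

    node = LightClassifierNode()
    for t in [0.0, 0.1, 0.2, 1.0, 2.5, 2.6]:
        print(t, node.receive(t))

Stack a few layers of nodes like this between the sensors and the wheel
outputs and you get the little light-seeking robot described next.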

This ability to shift where the pulses are being sent allows such a network
to route a signal created by the network to any output.  If it needs to,
it just shifts its probability so as to route all pulses out one side, and
none out the other.  It would do that if one output consistently produced
more rewards than the other.  A small network like this can, for example,
produce these "bright light" and "dim light" signals in the first layer,
and then, using a few more layers, route one signal to the "turn left"
output, and the other signal to the "turn right" output.

A network like this, combined with two light sensors (eyes) - one on each
side of the head of the robot - could allow such a network to make the
robot turn towards the brighter light.  That is, it can learn to be a
simple light seeking robot by how the pulses are classified, and which
outputs they get sent to.

When we look at the outputs of such a network, we will find that the signal
being sent to the "turn right" output can be thought of as "brighter light
on right side of robot".  So looking back into the network at what caused
the signal to be created, we can say the signal "means" "brighter light on
right side of robot".  But looking forward at what the signal will do, we
can say the "meaning" of the pulse is a command to the robot to "turn
right" (aka spin the left wheel faster and the right wheel slower).

Such a network can be said to be "perceiving that the light is brighter on
the right side of the robot".  But it learned to sense this condition
based on the reinforcement learning that created and tuned this signal to
have that meaning, because that's the signal which allowed the robot to get
the most rewards from the environment.

In this network design, every node in the network is performing the dual
role of feature detection and behavior selection.

In this type of network, "perception" and "behavior selection" are one and
the same thing.

In fact, if you look at this example, we can say that the network output
which makes the robot turn right is the network's _perception_ of when it's
time to turn right.  Or, the network's perception of when it thinks it
needs to turn right.

In fact, the purpose of the entire network, is to transform the raw sensory
data signals, into the correct "perception" of when it should act.  All the
intermediate signals in the middle layers of the network, end up being
there only because they help this multilayer network compute the correct
output values.

By looking at what happens as the sensory data flows forward through the
network, we can call the process an act of perception.  But by looking at it
from the outputs backward, we can call the same function a process of
"action production".

I actually spent a few years playing with network designs that performed
the perception task in one network, and the action selection (reinforcement
learning) task in a second module.  I eventually figured out that can't
work.  Trying to do it separately creates a scaling problem that's
unworkable.  Or, maybe better said, doing both in each node at the same
time improves the power of the multilayer network to scale exponentially.

I strongly suspect this same effect happens in the human brain because of
the scaling requirement.  Though we are taught to think of perception as
being different from behavior selection, I strongly suspect that in the
brain, like in my networks, it's actually the same thing.

> > the brain models phenomenon far before it forms a goal in relation to
> > that phenomenon.  No doubt they may be organically comingled, but to
> > say your goal defines your model of phenomenon would amount to a real
> > LSD psychedelic experience of reality, which the rest of us do not
> > experience. :)

Right.  A rat can't learn to respond to the stimulus of a flashing light
unless it can first "see" the flashing light!  From my description above,
you should now understand that the approach I'm looking at must include an
unsupervised learning technique that allows the network to configure itself
so that it produces internal signals that represent things like that light,
before reinforcement learning is used to figure out what to "do" with the
signal.

So I agree completely with what you are saying there.  And my approach is,
as you say, to "commingle" the unsupervised learning which creates the
foundation of the network's ability to correctly perceive the environment
(correctly parse it into invariant features), with reinforcement learning
to "bend" that parsing by making classification categories larger and
smaller, to create optimal behaviors (behaviors that produce higher
rewards).

> > you are mixing high, system level goals (I want to find some food to
> > eat) with low level goals (like if you hear an unexpected sound turn
> > your head towards the sound).  It is certainly not useful to say that
> > low level goals defines our perceptions;

Well, based on what I wrote above, you might grasp that I think it's not
only useful to say that, but it's required.

> > of course, they do filter it
> > by pre-selecting what our higher level systems can become aware of.

Right.  You might think of it as "focus", or "attention".  I just see it as
"behavior selection".  Do you grasp how that works now?  If the perception
network and behavior selection network are one and the same, then as the
system learns the correct behavior, it's also learning to bend its
perception at the same time, and it's also learning to filter for what it
"wants" while "throwing away" what it doesn't want.

Though we have lots of different ways to talk about these things at the
high level, I think the type of networks I'm playing with does it all at
the same time as one unified approach to the problem.

> > OK, then address the NN question above.  Generally, I believe, you need
> > a hierarchy to represent a symbol.  Dreyfus et. al. says the NN is
> > flat.

Ok, so in my network, I use a multilevel network that creates a hierarchy
to parse and transform sensory data into effector outputs.  I only talked
about how signals were split apart by the switching function, but didn't
touch on how they are merged together.  Pulses can merge in this type of
network simply by wiring two outputs to a single node input.  But how and
why they should merge is an issue I still struggle with.  We will ignore
the fact that I don't have that correct logic figured out yet....

The result however is that the network, like any neural network, uses a
hierarchy of signals to create higher level, more complex signals the
deeper you go into the network.  In theory, the network might have an input
that means "light detected at pixel location 100,345 in the right eye", but
as the pulse travels through this classification network, it accumulates
additional "meaning".  Like in my example above where it transformed from
"light" at one level to "dim light" at the next level.  The idea is that
after multiple levels it might take on (based on the node it reaches) the
meaning "dim light which is part of a sharp edge next to a 30 deg
corner .... which is part of an animal ear which is on the right side of an
animal head which is part of the dog...".

So throughout the network each output of each node in the network
represents some feature of the input signals - aka some features of the
environment.  And the deeper you go into the network, the higher you are
going in a hierarchy which defines larger and more complex features -
starting at "light" at the lowest level, and hitting "dog" at a much higher
level in this example.

So that's the basic hierarchy the low level network is using to define its
symbols with this type of network (and again, I think the brain is doing
something basically the same).

I'm using a "flat" network to do this, but it's a multilayer "flat" network
with lots of freedom to make cross connections and use feedback from higher
levels to lower levels to correctly "parse" the sensory data.  I don't know
what Dreyfus means by "flat".

> > > How you choose to label these systems is matter of viewpoint far more
> > > than a true matter of what the system is doing or how it works.
> >
> > Not really, see my above comments on these systems you outline don't
> > work as you stipulate.

:)

My current networks only have limited success.  The thing my current
network is not doing correctly is the hardest part of the direction I'm
trying to go, which is to get the unsupervised learning working correctly.
I only abstractly understand what I think it needs to do, and I have not
yet translated those abstract ideas into working code.  But it seems like
the right approach to me so I continue to tackle it from this direction.
And the overall architecture of using pulse sorting as the underlying
paradigm still looks very good to me, so I'm sticking with that approach
for now to see what I can make work.

There's another important aspect to this design I didn't cover above.

This type of network has the power to learn how to correctly respond to
stimulus inputs, and its response is not just a function of what is
happening at one instant in time, but of what has happened over the past
few minutes in the network - that is, which nodes have had pulses pass
through them.  So, for a high level example, if there's a "dog" node in the
network, how the network will route a current pulse can depend on whether
the network has routed a lot of pulses through the dog node in the recent
past (past few seconds or minutes).  So this means it can perform a
function such as "when we see X, if there's been a dog around in the last
10 seconds, do Y, otherwise, do Z".
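
A trivial sketch of that kind of context-dependent routing (the "dog" node
and the 10 second window are just the example from the paragraph above;
none of this is a real network, it just shows how cheap the mechanism is -
one timestamp per node):

    class FeatureNode:
        """Stands in for any node in the network; all the context mechanism
        needs from it is the time a pulse was last routed through it."""
        def __init__(self):
            self.last_fired = None
        def fire(self, now):
            self.last_fired = now

    class ContextGate:
        """Routes a pulse one way if the watched node has fired recently,
        the other way if it hasn't."""
        def __init__(self, watched_node, window=10.0):
            self.watched = watched_node
            self.window = window
        def route(self, now):
            seen = (self.watched.last_fired is not None and
                    now - self.watched.last_fired <= self.window)
            return "do_Y" if seen else "do_Z"

    dog  = FeatureNode()
    gate = ContextGate(dog, window=10.0)
    print(gate.route(now=5.0))     # do_Z : no dog seen yet
    dog.fire(now=6.0)
    print(gate.route(now=12.0))    # do_Y : dog seen 6 seconds ago
    print(gate.route(now=30.0))    # do_Z : the dog sighting is too old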

So though the network is structured to always answer the immediate question
of "what is the best way to respond to the input I just received", its
answer to how to best respond is always a function of what has recently
been seen.

Such a system can learn a large set of reactions to different environmental
conditions, and the effect is that it "strings together" lots of little
reactions, to create long complex chains of reactions.  Each pulse it
"sorts" is the "little reaction" in this case.  And of course, if it's got
some high volume sensory data to deal with, it might be sorting thousands
or even millions of pulses every second to "string together" a long complex
string of behaviors.  But to produce a complex behavior sequence, such a
system must be driven by a constant flow of "clues" from the environment.

What this architecture (as I've explained so far) doesn't explain, is how
the system could develop goal based behavior.  So far, the system is
basically forced to react to the same environment the same way every time.
Every time it sees a specific blue chair, it performs the "sit down"
behavior, for example, because so far, as I've described the architecture,
it's forced to respond to the same environment the same way every time.

The solution to this is to add global feedback from the network's outputs
back to another set of inputs.  What this does is allow the network to
"sense" what it's been doing.  So if there's an output that makes the right
wheel turn faster, that signal is sent back into the network as yet another
separate sensory input.  The network processes these inputs just like it
processes all inputs - it applies the unsupervised learning rules to the
data to create a feature extraction network.  This allows it to "see"
features of its own behavior, and to react to what it's been doing in the
past.

This feedback path allows the system to react only to its own behavior, and
ignore the environment if it needs to. Or better said, this feedback path
allows the system to see its own behavior as part of the state of the
environment that it is reacting to.

With this feedback, the system can learn a set of reactions to do something
like this:  turn right for 10 seconds, turn left for 10 seconds, repeat
forever.  The feedback path allows the network to drift through this pattern
over and over forever.  So no matter what is happening in the
environment, the global feedback allows the system to learn behavior
sequences which are independent of the external environment (like walking).
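
As a caricature of what that feedback path buys you (this is not a network
at all - an explicit flag and counter stand in for what the looped-back
output pulses would carry - it just shows that sensing your own recent
output is enough state to hold a repeating sequence with no help from the
environment):

    def step(turning_right, ticks_in_state):
        """One tick of a controller that looks only at feedback from its
        own recent outputs, never at the outside world."""
        if ticks_in_state + 1 >= 10:
            return (not turning_right), 0          # flip direction, reset count
        return turning_right, ticks_in_state + 1   # keep doing the same thing

    action, count = True, 0                        # start by turning right
    for tick in range(30):
        print(tick, "turn right" if action else "turn left")
        action, count = step(action, count)

The interesting part, of course, is that in the real network nothing stores
the flag or the counter explicitly; the same information has to live in
which pulses have recently flowed through which looped-back input nodes.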

The normal behavior selection (aka perception, aka attention focus) system,
can then trigger the start of these different cyclic patterns based on the
different environmental conditions, and likewise, abort, or trigger
alternate patterns based on different environmental conditions. Because
it's got inputs from the external environment to use, as well as inputs
from its internal behaviors, it can learn to focus its attention on
either depending on what works best.  And of course, because it's a huge
parallel network, different things are happening in different parts of the
network at the same time.  The right arm can be reacting mostly to signals
flowing in from the eyes, while the legs are focused mostly on what the legs
are doing, which allows them to produce a cyclic walking motion.

This approach of using another set of inputs to monitor what the system's
outputs are doing is one I strongly suspect is what the brain is doing in
the motor cortex.  Though I've never seen a book on the brain explain the
motor cortex that way, I suspect in fact that's exactly how it works and
why it's there.  It's not the "output" half of the cortex as much as it is
yet another sensory cortex which is assigned the task of sensing the
output signals produced by the entire cortex.

This type of network architecture I believe has all the basic features
needed to produce all human behaviors.  That of course is a lot of
speculation, but yet, it seems reasonable to me.

It's a bit odd, but yet simple at the same time, because of the fact that
all human behavior this network can learn has to be trained by a single
reward signal.  It has to be structured in a way to make that possible.

It has a lot in common with Brooks' subsumption architecture - but the big
difference is that it's a learning network, instead of being hard-coded by
an intelligent programmer.  It has to be structured so that for every
logic function it needs to implement (learn) to create a given intelligent
human behavior, it must have the power to learn that structure through
reinforcement. This basic architecture seems flexible enough to explain how
that can happen.

I've got networks that can learn some very simple tasks using this type of
approach. But what I don't have working correctly is this all important
unsupervised learning function which allows the network to self-organize
into a high quality feature extractor using the temporal clues in the data.

Like I said above, if a rat can't first learn to recognize a flashing
light, it has no chance of learning how to respond to it. And that's where
my networks currently are stuck. They can't learn to recognize a flashing
light!  And if they can't learn that correctly, the fact that reinforcement
learning is working to some extent isn't very exciting at all.

This type of approach, however, is the one which I believe is needed to
make machines act like humans.  It will also make them conscious,
but that's a debate for another thread....

It won't make them conscious because it has some "magic feature" needed to
be conscious, it will make them conscious because there is no magic feature
needed to make a machine conscious. Consciousness is a myth - it's an
illusion that humans like to believe in that's nothing like what so many
people think it is.  A machine that acts like a human is conscious, and
this type of simple learning network, when implemented correctly (big IF
there), I believe will act like a human.

Does that give you a better idea of where I'm coming from and what I
believe in?

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
11/21/2008 9:05:57 AM
Curt Welch wrote:
> curt@kcwc.com (Curt Welch) wrote:
>> "Isaac" <groups@sonic.net> wrote:
>>> "Curt Welch" <curt@kcwc.com> wrote in message
>>> news:20081119002253.348$D7@newsreader.com...
> 
>>> I disagree; e.g., how is perceiving the sound of a drum to be a drum a
>>> "problem of behavior selection"?
>> Again, let me explain my approach from the bottom up in a different
>> message and I think you will better understand what I'm getting at there.
> 
> Ok, so let me give a bit more of an overview of the type of networks I've
> been playing with and how I think this approach is going to solve AI.
....

Well, that's quite a bit more :)

Just a note regarding the drum and behaviour selection.
I don't really remember where I got it from, but here you are: people 
respond to music in different ways.  Some people get activity in motor 
centers, and some in speech centers.  So the first tend to dance to music, 
and the second tend to sing/play along with the music.
So it might be a routing issue after all ;)
And if you look at it in terms of energy preservation, it's easy to 
grasp - the senses produce some electricity, which can't just disappear; it 
has to go somewhere.

Regards...
0
Josip
11/21/2008 11:00:53 AM
Publius <m.publius@nospam.comcast.net> wrote:
> curt@kcwc.com (Curt Welch) wrote in
> news:20081119010159.274$s6@newsreader.com:
>
> Interesting comments, and generally on the right track, but I have some
> quibbles.
>
> > Well, I think people in general are very confused about the entire
> > subject of consciousness, so when they talk about it, they are often
> > talking about many different aspects of humans.  However, I believe
> > the foundation of where all this confusion comes from, is in how the
> > brain tends to model itself.  That is, our entire understanding and
> > perception of reality is created by the processing that happens in our
> > brain, and that includes our perception of our own brain.  Our
> > perception of self.
>
> There is no perception of self, except insofar as the "self" includes the
> body.

The perception of one's body is all there is to have a perception of.  What
else can the "self" be other than a perception of one's body?

Would you like to suggest the self is a perception of the soul perhaps?

> The "self" is a construct, a model of the system synthesized and
> inferred from the current states of other brain subsystems (those which
> process sensory data) and stored information regarding past states of
> those subsystems. So we have a *conception* of self, not a perception.

There is no real difference in my view between a conception and a
perception.  It's just playing word games to make it seem like there are
two different things here when there is not.

> > Because the brain has only limited access to
> > itself, the model it creates is also limited. Based on the sensory
> > data the brain has access to, it is forced to model internally
> > generated brain signals as having no association with external sensory
> > signals.
>
> Well, we do assume there is an association, actually.

I'm not sure what you are asking.

When my neurons fire, it is a real physical event.  The association doesn't
exist, however, for the simple reason that our ears can't detect the
firing of a neuron in the brain only inches away from the ear.  The
physical vibration it causes is far too weak to get above the noise floor
of the auditory signal.

> We also construct a
> "world model," with the self-model situated within it

Yes, our body is part of the world, and our thoughts are body actions -
though most people have an invalid self-model that disconnects their "self"
from their body.

The world is also broken down into many smaller parts, like the chair I'm
sitting in, the fingers that are on the end of my hands, etc.

> We accept this
> world model as "the world" (we are all realists by default). Yet, because
> the model remains available even when the world "goes away" (when we
> close our eyes, change location, or just direct attention elsewhere), we
> conclude there is a another "realm" where the world continues to exist
> --- "it exists in the mind." The notion of "mind" arises because we are
> able to contemplate aspects of the world not currently present to the
> senses, including past states of the world.

Yes, but the "mind" has another name: it's called the brain.  Just like a
digital camera has a memory card which holds the images of the world that
"goes away" when the camera puts its own lens cap back on.  Does the fact
that the camera can "see the images" even when its lens cap is on make
the camera "think" it has a mind which is somehow separated from its
memory card?

The fact that we have a brain is not the question here.  The confusion is
why do we call the brain "the mind" instead of calling it the brain?  Why
do we have two words for one object?  Why have philosophers wasted hundreds
of years in endless debate of this question when there is no question here
to debate?

We don't have two things, we have one.  You can call it the brain, or the
mind, but it's the same thing no matter which word you use.  The argument
that the mind is what the brain does doesn't cut it. The brain is what the
brain does.  All objects are what they do - that's what makes them an
"object" in the first place - their behavior.  That's how we separate the
world into unique parts - by their behavior.  We know a cat is a cat
because it doesn't act like a dog.  A cat is a cat because it acts like a
cat.  "Cat behavior" is what a cat does.

The answer to why there is all this confusion is as I outlined before.
It's because the model the brain builds to represent private thoughts fails
to correctly associate those thoughts with our physical world.

For example, when we hear a book drop to the floor, our brain will place
that sound as being part of our physical environment.  We have some idea
as to where in our environment this sound happened.  We know which
way to turn our head and direct our eyes to try and locate the source of
that sound instantly upon hearing it.  The fact that we know which way to
turn our head to see what caused it is one of many associations that
instantly pop into our head from the stimulation of the sound.  Other
things, like the image of a book, might also pop into our head, because the
brain has wired an internal connection between the detectors that decode
that type of sound, with the detectors in our brain that represent the
image of a book, and the detectors that represent the spoken word "book",
etc.  We hear that sound and a whole constellation of associated detectors
gets activated in our brain which represents our expected knowledge of what
that sound "means".  These are the associations (physical cross
connections in our brain) that allow us to know that this sound was not
just air vibrating, but was a large hard-back book hitting the floor in the
room next to ours on the right, which has the hard wood floor and not the
carpet.

But when you detect you are having a private thought about a blue cube,
where in the world is that physical event located?  Is the thought located
in the room to the right with the hardwood floors?  Which way do we turn
our head to see the physical event which created that signal?  What does
the thing that created the physical event look like?  Is it a square object
filled with paper maybe like the thing that made the signal we called "book
hitting floor"?

No, we have no associations like that to make use of.  We don't know which
way to turn our head to locate the thing that created the private "blue
cube" thought.  We don't know what the thing looks like.  We don't know
what it would feel like if we held it in our hands.

The "thing" responsible for the signal we call a thought is called a
neuron.  It's as real and as physical as the book which is responsible for
the sound we heard.  And we know what neurons look like because we have
seen pictures in books and some of us have seen real neurons.  But yet,
when we have a private thought of a blue cube, no image of a neuron
pops into our head, does it?

This is because the brain doesn't know how to model the private thought
sensory data.  It doesn't know what the cause of it is or where that object
is located in its model of the world.  Because the brain doesn't associate
these private thoughts with neurons for us, or with some location in our
brain, we are left with a model of the world that is distinctly dualistic.
We have all the "stuff" which is part of the physical world, like the book,
and the sound it makes when it is dropped, and we have this other stuff
which is separate from the physical world, like "thoughts of a blue cube".

It's the fact that the brain doesn't know how to locate our private
thoughts in the physical world that creates this illusion that our private
thoughts are not part of the physical world, and that is why we give the
location of all those odd things their own name - the mind.

> > The result of this fact is that the brain builds a model of this data
> > by indicating no association between private thoughts, and the
> > physical events represented in the sensory data.  The result of that
> > is very simple - it leaves us with a model of realty where thoughts
> > are disconnected from all things physical.
>
> We realize they are connected, but are nonetheless distinct.

No, our model tells us they are distinct, but we realize they are THE SAME
(not "connected").

Few people however grasp what I'm saying here.  Maybe no one.

The way the world actually is, is often different from the way we think it
is.  That happens on all levels all the time.  I think I left my keys in
the kitchen, but in fact they are still in the car for example.  My brain
has a model of the world that indicates the keys are in the kitchen, but
this is a disconnect from reality.  The brain's model of the world is just
wrong, but yet we "believe" our brain's model of the world because it's the
only thing we have to work with.  Our brain's model IS our world - it's the
only world we know.

However, for the keys, we can do things to collect more data about the
world which will update our model.  We can go look in the kitchen and
notice the keys aren't there.  That new data will cause us to instantly
change our beliefs and our understanding of the world.  But until that data
came in, we believed the keys were in fact in the kitchen.

The model of the world the brain builds for us tells us that our thoughts
have no location in physical space.  The model gives us no physical
description of the objects that cause our thoughts.  The model is just
empty in that regard.  The brain has lots of physical attributes for keys,
but zero physical attributes for my private thoughts.  This fact that the
brain tells me my thoughts have zero physical attributes is what makes
everyone "believe" their thoughts are not physical - they are devoid of any
physical attributes.  They have no weight, or color, or texture, or smell,
or taste, or location in space.

But just as my brain can be wrong about where the car keys are, it can just
as easily be very wrong about the physical attributes of my thoughts.  Even
though the brain doesn't "know" it, thoughts do have physical attributes.
They are the physical actions of neurons firing.  The brain doesn't build a
model for us that assigns the physical attributes of neurons to our private
thoughts because the brain has never "seen" a neuron fire at the same time
it sensed that private thought.

Why, when we hear the sound of a book hitting the floor, do the physical
attributes of a book suddenly activate in our brain?  Why isn't the brain
wired to cause the physical attributes of a cow to activate when we hear
that sound?  It's because our brain has been classically conditioned to
associate the sound of a book with the vision of a book.  It's been
conditioned by all the times in the past it has watched a book fall and hit
the floor - which caused that sound signal, and that image signal, to
activate in the brain close together in time.  It's that temporal
correlation of those sensory signals that caused the brain to associate
them.  It's exposure to those events that caused the brain to cross wire
the sound of a book hitting a floor with the signals in the brain that
represent the image, and the word, "book".  The auditory signal of a book
hitting a floor, and the visual signal of a book hitting the floor, get
associated with each other because they tend to happen at the same point in
time - because they are temporal predictors of each other.

How often do we get to think about a blue cube at the same time we see the
"blue cube" neurons in our brain activate?  Never.  Our brain has never
been exposed to the sensory signals which would allow it to learn that
association.

When the brain receives new data it updates its model of reality to fit
the new data if the model is currently wrong.  Our brains have never
received information about the physical characteristics of our thoughts.
But if they were exposed to the data, they would update their model.  If we
could watch our brain in action, by using a high speed real time brain
monitor, the brain would build the association and now our thoughts would
have a physical attribute.

We could think about a blue cube, and watch the "blue cube" neurons
activate in our brain.  With the help of the monitor, we would have an idea
of exactly where those neurons were located in our brain, and how and when
they activated.

After enough playing with such a monitor, every time we thought about a
blue cube in the future, visions of what we saw on that monitor would pop
back into our head at the same time.  Suddenly, like with the sound of a
book hitting the floor, we would have some physical attributes for that one
thought we experimented with using the brain monitor.  Suddenly, the brain
would have a different location for that thought.  Instead of being
disconnected from physical reality as it had always been in the past, the
thought is now located on the right side of my head 2.32 inches down from
feature X of the brain, etc.  We would know where it was located because of
the brain scanner.  We could use our hand, and point to our head and
say - that thought is right in that part of my brain on that side.
Suddenly, like with the book, the thought would seem physical to that
person, when up until that point, the brain's model said the thought was
located in the "kitchen" (aka mind) when in fact it never was.

The entire view of the mind being disconnected from the body, is a result
of the brain being dead wrong about how the world works.  It's just as
wrong as being dead wrong about where the keys are.  But unlike the keys,
we spend our entire lives never receiving any data to indicate the model is
wrong.  Because the model hasn't changed in our entire life time, we are
dead sure it's right, even though it's dead wrong.  The model tells us
thoughts are not physical, but it's dead wrong.

And that is why we call the brain the mind, and don't see anything wrong
with giving one thing two names.  Having two names fits the model the
brain has built.

> But I agree with what I take to be your central point --- we are aware of
> the model, but not of the brain mechanisms which generate it. So it
> becomes conceptually detached from its substrate.

Well, yeah, that's the idea.  But you're wording it there as if the model
our brain creates is the true reality instead of talking as if it's an
error - that is, you sound like you don't understand it from the way you
just described it.

Our awareness of what you call "the model" is in fact an awareness of our
own brain.  We are in fact directly aware of the mechanisms which generate
it.  That is, our awareness IS NOT SEPARATE from the underlying mechanisms.
Our awareness is of the mechanism itself.

We are not _fully_ aware of all the details of the mechanism, but all our
awareness is a direct awareness of the hardware itself.

It's like if we look under the hood of the car and see the engine vibrate
as it's running, we become aware of the engine.  We aren't aware of the
full details of the engine because we can't see the crank shaft turning or
the pistons moving up and down, so our awareness of the engine is limited
to what we have data for.  But the data we have, makes us aware of the
engine itself - not something disconnected from the engine.

The same happens when we become aware of what our brain is doing.  When I
think about a blue cube, say I picture what a blue cube looks like in my
mind, I'm having direct awareness of brain hardware "vibrating" (so to
say).  I'm not aware of a "model" I'm aware of physical brain actions. I am
directly sensing physical brain function just as much as I am sensing the
book drop when I hear that sound.

Our entire English language is based on the dualistic model of thought our
brain builds for us.  So it's hard to use normal English words to talk
about this stuff at times without injecting mass confusion into the
communication.  How do you use English to talk about a world which is not
dualistic, when the entire language is based on the erroneous assumption
that thoughts aren't physical?

Words like "model" (when we talk about models as thoughts vs physical
models) follow the English standard of implying that since it's a thought,
it's not physical (ideas, thoughts, memories, are not "physical events" per
the standard definitions of English - they are intangible - aka have no
physical form).

So if you write - "we are aware of the model" what might that mean?  It's
hard to say.  But what you should be thinking, is that when we sense our
own private thoughts, we are in fact directly sensing yet another physical
event in the universe - we are sensing our brain at work - we are sensing
the hardware running.  When we sense our brain is having thoughts, we have
in fact opened up the hood and looked directly at our running brain.

These concepts are so foreign to the way we have all been trained to think
and talk about these things that most people can't grasp what I'm saying.
I've been trying to get John to grasp this stuff for years but yet he can't
quite get there. It's not an easy thing to wrap your head around.
Literally.

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
11/21/2008 3:24:41 PM
"Isaac" <groups@sonic.net> writes:

> [asb2]now you have me completely confused.  It is impossible to generate an 
>error signal (i.e., "lower the tension") without comparing against some 
>model of the expected or desired event/result.

I would be interested in how you think auto-focus cameras work.
In particular, what internal model are they keeping and how is the
comparison done to generate the error signal that is used to adjust
the focus.

0
Neil
11/21/2008 4:40:44 PM
On Nov 21, 7:24 am, c...@kcwc.com (Curt Welch) wrote:
> The perception of one's body is all there is to
> have a perception of.  What else can the "self"
> be other than a perception of one's body?

The parietal cortex is required for a perception
of the embodied self. Damage to these areas can
result in gross distortions of the embodied self.
A subject for example may assign their left arm
as belonging to someone else.

Unlike Curt, I see the mind as something the brain
does.  When the brain is not active, the mind does
not exist, just as when someone stops running the
running does not exist.



> Yes, but the "mind" has another name, it's
> called the brain.


Or it is what the brain does.


> ... why do we call the brain "the mind" instead
> of calling it the brain?


The same reason they don't call the body "running"
or "walking". The brain is a physical entity the
mind is what that physical entity is doing.


> We don't have two things, we have one.


Indeed we have one thing that does things. Some
words refer to the object (brain) and other words
refer to what the brain is doing (mind).


> You can call it the brain, or the mind, but it's
> the same thing no matter which word you use.  The
> argument that the mind is what the brain does
> doesn't cut it. The brain is what the brain does.
> All objects are what they do - that's what makes
> them an "object" in the first place - their
> behavior.  That's how we separate the world into
> unique parts - by their behavior.  We know a
> cat is a cat because it doesn't act like a dog.
> A cat is a cat because it acts like a cat.
> "Cat behavior" is what a cat does.


So if I behave like a cat then I am a cat?

In fact animals can also show Self behaviors
just as animals can also show running behaviors.
Self behaviors doesn't make you a cat or a dog.

JC

0
casey
11/21/2008 8:25:14 PM
"Curt Welch" <curt@kcwc.com> wrote in message 
news:20081121102356.446$9G@newsreader.com...

>> There is no perception of self, except insofar as the "self" includes
>> the body.

> The perception of one's body is all there is to have a perception of.
> What else can the "self" be other than a perception of one's body?
>
> Would you like to suggest the self is a perception of the soul
> perhaps?

As I just said, it is not *perceived* at all.

>> The "self" is a construct, a model of the system synthesized and
>> inferred from the current states of other brain subsystems (those
>> which process sensory data) and stored information regarding past
>> states of those subsystems. So we have a *conception* of self, not a
>> perception.
>
> There is no real difference in my view between a conception and a
> perception.  It's just playing word games to make it seems like there
> are two different things here when there is not.

Eeek. Then you'll have a hard time accounting for the mechanisms of 
consciousness. The distinction between a percept and a concept is a fairly 
crucial one, and quite sharp. I doubt that you use those terms 
interchangeably in daily speech. I have a perception of a tree if I'm 
standing before one and viewing it. I have a conception of a tree if I am an 
artist drawing one from memory, or a biologist contemplating the 
evolutionary history of some species of tree, or an orchardist pondering the 
low yields I'm getting from the apple trees this year. In all the latter 
cases, there is no tree within my sensory field, and hence no percept. 
Nonetheless, despite the absence of a physical tree, I can plan future 
actions with respect to trees, form associations between trees and other 
things, and even learn new things about trees merely by analyzing my 
concept, my internal model, of a tree.

A perception is a data stream arriving over an open sensory channel. A 
*concept* (or conception#) of a thing is an abstract and idealized virtual 
model of the thing, synthesized by the brain from assorted prior perceptions 
of the thing, together with any imagined properties added to the concept for 
theoretical reasons.

(# There is a difference to be drawn between a concept and a conception, but 
it is not important here).

Since the "problem of consciousness" consists in large part of explaining 
how the brain is able to generate those concepts, denying that they exist, 
or that they are identical with percepts, would leave you with no problem to 
solve. Or at least, very ill-prepared to solve it.

>> > Because the brain has only limited access to
>> > itself, the model it creates is also limited. Based on the sensory
>> > data the brain has access to, it is forced to model internally
>> > generated brain signals as having no association with external
>> > sensory signals.
>>
>> Well, we do assume there is an association, actually.
>
> I'm not sure what you are asking.
>
> When my neurons fire, it is a real physical event.  The association
> that doesn't exist however is the simple fact that our ears can't
> detect the firing of a neuron in the brain only inches away from the
> ear.  The physical vibration it causes is far too weak to get above
> the noise floor of the auditory signal.

There is no need to sense neurons firing to associate percepts with 
concepts. An association is formed, or reinforced, if the model is 
predictive of future percepts, e.g., if it allows me to anticipate what I 
will perceive if I turn my head in the direction of the tree.
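
As a toy sketch of that reinforcement-by-prediction idea (the single
concept-to-percept weight and the learning rate are invented for the example),
a delta rule of the Rescorla-Wagner sort strengthens the link whenever the
predicted percept actually arrives and weakens it when it does not:

    def update_association(weight, concept_active, percept_observed, lr=0.1):
        # One concept->percept link; strengthen it when the prediction is
        # confirmed, weaken it when it is not.
        if not concept_active:
            return weight                    # no prediction was made
        error = (1.0 if percept_observed else 0.0) - weight
        return weight + lr * error

    w = 0.0
    for _ in range(20):     # repeatedly turning the head and seeing the tree
        w = update_association(w, concept_active=True, percept_observed=True)
    print(round(w, 3))      # the association climbs toward 1.0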

> Yes, our body is part of the world, and our thoughts are body actions
> =- though most people have an invalid self-model that disconnects
> their "self" from their body.

To speak more precisely, our "body-model" is a component of our 
"world-model." The concept of self --- our "self-model" --- however, is 
broader than the body-model. It is conceived as the locus of experience ---  
a postulated entity which serves to unify experienced percepts and concepts. 
But though it is conceptually distinct from the body, it is not necessarily 
"disconnected" from the body. Indeed, the body is an element of the 
self-model. Whether the body (as we model it) *generates* the entire self 
(as we model it) and how, is an open theoretical question (most of us are 
convinced that it does; the main open question is, "How?"). The self-model 
will be "invalid" only if we answer that question wrongly, i.e., if the 
theory we choose lacks explanatory power.

>> We accept this
>> world model as "the world" (we are all realists by default). Yet,
>> because the model remains available even when the world "goes away"
>> (when we close our eyes, change location, or just direct attention
>> elsewhere), we conclude there is a another "realm" where the world
>> continues to exist --- "it exists in the mind." The notion of "mind"
>> arises because we are able to contemplate aspects of the world not
>> currently present to the senses, including past states of the world.
>
> Yes, but the "mind" has another name, it's called the brain.  Just
> like a digital camera has a memory card which holds the images of the
> world that "goes away" when the camera puts it's own lens cap back on.

And would you equate the memory card with the images on it? If someone 
asked, "May I see the pics you took at the beach?," would you hold up the 
memory card, or would you show them the prints you made from the images on 
it?

"Mind" is certainly not "another name" for the brain. "Mind" is a term for 
the product of a brain process (or so we assume). It is something quite 
distinct from both the process and the machinery which "runs" the process.

> Does the fact that the camera can "see the images" even when its
> lens cap is on, make the camera "think" it has a mind which is somehow
> separated from its memory card?

No, it would "think" it had some images which are not identical with the 
subjects of those images.

> The fact that we have a brain is not the question here.  The confusion
> is why do we call the brain "the mind" instead of calling it the
> brain?  Why do we have two words for one object?  Why have
> philosophers wasted hundreds of years in endless debate of this
> question when there is no question here to debate?

We distinguish the brain from the mind because the two terms denote 
conceptually distinct entities --- entities which have entirely disjoint 
sets of properties. I.e., for the same reason we denote any two 
distinguishable things with different terms. The "problem of consciousness" 
just is the nature of the relationship between "mind" and brain. If the two 
are identical there is no relationship (you need at least two things to have 
a non-trivial relationship) and hence no problem.

> We don't have two things, we have one.  You can call it the brain, or
> the mind, but it's the same thing no matter which word you use.

Surely not. The brain has a certain mass; your concept of it does not. The 
brain is an array of cells; your memory of your grandfather has no cells. 
The brain is composed of proteins and carbohydrates; your perception of a 
rose is composed of colors, shapes, scents, and tactile impressions.

Moreover, those phenomenal properties --- the colors, shapes, scents, 
tactile sensations, etc.,  are the *primary* data from which all inquiries 
begin, including the inquiry into the nature of the brain, and the data 
against which all theories of the "external world" (including the brain) 
must be validated. The brain is a *construct* you have assembled from those 
very data.

> The
> argument that the mind is what the brain does doesn't cut it. The
> brain is what the brain does.

That is a very strange claim! Are you claiming a television transmitter and 
"I Love Lucy" are the same thing? Can we infer the plot of "I Love Lucy" 
from the circuit diagram of the transmitter, or the circuit diagram from the 
script?

> All objects are what they do - that's
> what makes them an "object" in the first place - their behavior.
> That's how we separate the world into unique parts - by their
> behavior.  We know a cat is a cat because it doesn't act like a dog.
> A cat is a cat because it acts like a cat.  "Cat behavior" is what a
> cat does.

No, Curt. The behavior of a thing is only one of the criteria we use to 
distinguish it from other things, and it is often not decisive. Are you 
suggesting we could not distinguish between a dead cat and a dead dog? Or a 
living from a dead brain, for that matter?

Are you proposing that the distinction between structure and function be 
abandoned?

> The answer to why there is all this confusion is as I outlined before.
> It's because the model the brain builds to represent private thoughts
> fails to correctly associate those thoughts with our physical world.

The "physical world" is *itself* a construct among our private thoughts.

> For example, when we hear a book drop to the floor, our brain will
> place that sound as being part of our physical environment.  We have
> some idea as to where in our environment this sound happened.
> We know which way to turn our head and direct our eyes to try and
> locate the source of that sound instantly upon hearing it.  The fact
> that we know which way to turn our head, to see what caused it, is one
> of many associations that instantly pop into our head from the
> stimulation of the sound.  Other things, like the image of a book
> might also pop into our head, because the brain has wired an internal
> connection between the detectors that decode that type of sound, with
> the detectors in our brain that represent the image of a book, and the
> detectors that represent the spoken word "book", etc.  We hear that
> sound and a whole constellation of associated detectors get activated
> in our brain which represents our expected knowledge of what that
> sounds "means".  These are the associations (physical cross
> connections in our brain) that allow us to know that this sound was
> not just air vibrating, but was a large hard-back book hitting the
> floor in the room next to ours to the right which has the hard wood
> floor and not the carpet.

> But when you detect you are having a private thought about a blue
> cube, where in the world is that physical event located?  Is the
> thought located in the room to the right with the hardwood floors?
> Which way do we turn our head to see the physical event which created
> that signal?  What does the thing that created the physical event look
> like?  Is it a square object filled with paper maybe like the thing
> that made the signal we called "book hitting floor"?
>
> No, we have no associations like that to make use of.  We don't know
> which way to turn our head to locate the thing that created the private
> "blue cube" thought.  We don't know what the thing looks like.  We
> don't know what it would feel like if we held it in our hands.
>
> The "thing" responsible for the the signal we call a though is called
> a neuron. It's a real and as physical as the book is which is
> responsible for the sound we heard.  And we know what neurons look
> like because we have seen pictures in books and some of us have seen
> real neurons.  But yet, when we have a private thought of a blue
> cube, no image of a neuron pops into our head does it?

Curt --- those "real and physical" things are every bit as much the products 
of neurons as the blue cube. The "real and physical" things are inferred 
from (more precisely, are *constructed from*) those signals, just as the 
imagined things are inferred from neural signals. No images of neurons pop 
into our heads in either case. The "mind" is our term for the locus of all 
those experiences, "that which contains all these images, percepts, 
impressions, sensations, impulses, and feelings." And also that which seeks 
to explain them all by making associations and constructing models such as 
those of the brain and the "external world."

You are, indeed, refuting yourself. You grant that the brain constructs 
models, yet argue that the brain, the process of construction, and the model 
are all identical. If they are identical, then any claim that the brain 
constructs models is meaningless, isn't it?

> This is because the brain doesn't know how to model the private
> thought sensory data.  It doesn't know what the cause of it is or
> where that object is located in its model of the world.  Because the
> brain doesn't associate these private thoughts with neurons for us, or
> with some location in our brain, we are left with a model of the world
> that is distinctly dualistic.

Actually it is pluralistic. There are as many ontological schemas as there 
are realms of discourse. That one is just more fundamental than others. 
Unifying them is not necessarily impossible, but not always worth doing, 
either.

> We have all the "stuff" which is part of
> the physical world, like the book, and the sound it makes when it is
> dropped, and we have this other stuff which is separate from the
> physical world, like "thoughts of a blue cube".

What's wrong with that? It merely means we have contrived a theory which 
partially explains some phenomenal events, but fails to explain others, and 
therefore require a different theory. But we'll continue to distinguish the 
*explananda* from the *explanans* in any case. It's logically necessary to 
do so.

> The way the world actually is, is often different from the way we
> think it is.

We have no way of knowing "how the world actually is." All we will ever have 
is the phenomena we experience and the explanatory entities and processes we 
contrive to organize and unify that experience.

>  That happens on all levels all the time.  I think I left
> my keys in the kitchen, but in fact they are still in the car for
> example.  My brain has a model of the world that indicates the keys
> are in the kitchen, but this is a disconnect from reality.  The
> brain's model of the world is just wrong, but yet we "believe" our
> brain's model of the world because it's the only thing we have to work
> with.  Our brain's model IS our world - it's the only world we know.

Yes it is! And that is why all claims about "how the world actually is" are 
hollow. 

0
Publius
11/21/2008 9:12:50 PM
On Nov 21, 1:12 pm, "Publius" <m.publ...@nospam.comcast.net> wrote:

> Moreover, those phenomenal properties ---
> the colors, shapes, scents, tactile sensations,
> etc.,  are the *primary* data from which all
> inquiries begin, including the inquiry into the
> nature of the brain, and the data against which
> all theories of the "external world" (including
> the brain) must be validated. The brain is a
> *construct* you have assembled from those very
> data.

I tried to explain this in comp.ai.philosophy
only to be attacked as a closet dualist.

Curt takes the model to accurately reflect
reality and the data it is constructed from
to be the illusion.

I take the model to be our current working
assumption justified by the practical results
obtained using it.


JC

0
casey
11/21/2008 10:24:14 PM
casey <jgkjcasey@yahoo.com.au> wrote in news:2cb901bf-2026-4df0-85ab-
affe48cda095@s9g2000prm.googlegroups.com:

> I take the model to be our current working
> assumption justified by the practical results
> obtained using it.

That sums it up pretty well.

0
Publius
11/21/2008 10:32:05 PM
On Nov 21, 2:12 pm, "Publius" <m.publ...@nospam.comcast.net> wrote:
> "Curt Welch" <c...@kcwc.com> wrote in message
>
> news:20081121102356.446$9G@newsreader.com...
>
> ...
>
> > We don't have two things, we have one.  You can call it the brain, or
> > the mind, but it's the same thing no matter which word you use.
>
> Surely not. The brain has a certain mass; your concept of it does not. The
> brain is an array of cells; your memory of your grandfather has no cells.
> The brain is composed of proteins and carbohydrates; your perception of a
> rose is composed of colors, shapes, scents, and tactile impressions.

Publius, as Casey, I and others have found out, Curt is not capable of
thinking these types of complex thoughts; of discerning the crucial
difference between subject and object, between the perception and the
process/artifact doing the perceiving.  It all runs together for him,
like the shadows on his cave's walls. Everything in Universe is
either signal processing, reinforcement learning or particles
interacting, or a mishmash of such nonsense.

>
> Moreover, those phenomenal properties --- the colors, shapes, scents,
> tactile sensations, etc., are the *primary* data from which all inquiries
> begin, including the inquiry into the nature of the brain, and the data
> against which all theories of the "external world" (including the brain)
> must be validated. The brain is a *construct* you have assembled from those
> very data.

Indeed, these notions are even evident in the new cog. sci. approaches
that Merleau-Ponty, Varela, Thompson, etc., have brought into focus
with the enactive approach.

>
> > The
> > argument that the mind is what the brain does doesn't cut it. The
> > brain is what the brain does.
>
> That is a very strange claim! Are you claiming a television transmitter and
> "I Love Lucy" are the same thing? Can we infer the plot of "I Love Lucy"
> from the circuit diagram of the transmitter, or the circuit diagram from the
> script?

Well, perhaps I Love Lucy is just "particles interacting" for
Curt. ;^)

> ...

0
Alpha
11/22/2008 2:37:44 PM
On Nov 20, 10:33 am, ©uæMuæPaProlij <1234...@654321.00> wrote:
> >> I agree with you. Creating AI has nothing to do with philosophy.
>
> > Except that historically, the important questions have come from
> > either philosophers, or by other types of scientists that posed
> > philosophical questions about Universe, per Bohm, especially when
> > instrumentality was limited (i.e., when our ability to *be*
> > empiricists (read: perform instrumented experiments) was limited.)
>
> True, but important answers come from scientists.

Surely that is the case; not denying that!

>
> >> It is just a
> >> technical problem that needs better mathematical tools in order to solve it.
>
> > What sorts of math tools are you talking about that would solve basic
> > questions of how brain for example, represents a blue cube, or how APs
> > represent (if they do) a thought or a memory?
>
> Math has already answered these questions many times and you can use it to
> represent blue cube, thought or memory in many different ways.

Please elaborate on how a mathematical equation, or proof, or model,
tells us how brain structures create the image of a blue cube in my
phenomenal awareness. Then claim your Nobel!

>
> > See Koch's The Biophysics of Computation for an example of
> > sophisticated math applied to biological function in brain , but which
> > provides no clue as to representation etc.
>
> Yes, we all know that some scientists tried and failed just as many philosophers
> did.

There isn't any scientist or philosopher that has definitively
determined how the brain represents reality in consciousness.

>
>
>
> >> Creating AI will reflect on philosophy in only one way - it will prove that
> >> some
> >> philosophers were wrong.
>
> > But then some will have been right!
>
> True, but creating AI will not result with new philosophy of the universe just
> as sending man to the moon didn't. This will be advance in technology but not in
> philosophy.

It sure will affect philosophical underpinnings!  Especially as we
decide whether to confer basic "human" rights to "slave" AI
artifacts!  Not to mention the basic ideas of substrate independence
of intelligence issues and so forth.

>
> It will change our lives and it will change our every day routine but it won't
> change our global philosophical view of the world (unless you are one of them
> who think there is some metaphysical difference between man and machine).

Well, there is certainly a physical difference; one is alive - the
other is not.  And I do not subscribe to the notion that brain
performs local symbolic processing to represent reality.  Even the
connectionist view that posits non-local representationalism has
issues that have not been answered satisfactorily.  As far as
metaphysical differences go, of course there is a huge difference in all
sorts of domains. The way we treat machines is in no way comparable to
how we treat (or should treat) fellow humans. No one cares if I
discard a malfunctioning washing machine or toaster (even if they have
some degree of intelligence), but grandma is another story.  All based
on philosophical/metaphysical ideas.

> Creating AI will only prove that it is possible to create intelligent machine (=
> the difference between man and machine is not in metaphysics) and philosopher
> who now think it is not possible will be proven wrong. Since machines are still
> machines, intelligent or not, they never get tired, they are predictable and can
> be improved and become bigger, faster and better in any way. All this will help
> us to solve some other problem and to live well (unless we kill each others).
>
That much I agree with.

> After AI is created, Earth will remain round, Sun will be at the same place
> and everything normal and rational people knew will remain the same with only
> one little exception - from this moment they will know how to make AI.

And large numbers of philosophical and scientific positions will be
amended.


0
Alpha
11/22/2008 2:50:00 PM
>
> Please elaborate how a mathematical equation, or proof, or model,
> tells us how brain structures create the image of a blue cube in my
> phenomenal awareness?  Then claim your Nobel!

Math can answer mathematical questions. Your question is not a mathematical 
question because you didn't define "phenomenal awareness" using math. This is 
like trying to send beer by e-mail.



> > > See Koch's The Biophysics of Computation for an example of
> > > sophisticated math applied to biological function in brain , but which
> > > provides no clue as to representation etc.
> >
> > Yes, we all know that some scientists tried and failed just as many 
> > philosophers
> > did.
>
> There isn't any scientist or philosopher that has definitively
> determined how brain represents reality in consciousness.

There are many things that are not definitively determined but there are also 
many things that are......



> > True, but creating AI will not result with new philosophy of the universe 
> > just
> > as sending man to the moon didn't. This will be advance in technology but 
> > not in
> > philosophy.
>
> It sure will affect philosophical underpinnings!  Especially as we
> decide whether rto confer basic "human" rights to "slave" AI
> artifacts!  Not to mention the basic ideas of substrate independence
> of intelligence issues and so forth.

These are the consequences of creating AI, but it has nothing to do with the 
question "how to create AI". Such questions are relevant even today - what are the 
rights of the slaves, black people, poor people, sick people, old people, rich 
people, people from other countries or anybody who is different in any way?



> > It will change our lives and it will change our every day routine but it 
> > won't
> > change our global philosophical view of the world (unless you are one of 
> > them
> > who think there is some metaphysical difference between man and machine).
>
> Well, there is certainly a physical difference; one is alive - the
> other is not.  And I do not subscribe to the notion that brain
> performs local symbolic processing to represent reality.  Even the
> connectionist view that posits non-local representationalism has
> issues that have not been answered satisfactorily.  AFA as
> metaphysical differences, of course there is a huge difference in all
> sorts of domains. The way we treat machines is in no way comparable to
> how we treat (or should treat) fellow humans. No one cares if I
> discard a malfunctioning washing machine or toaster (even if they have
> some degree of intelligence), but grandma is another story.  All based
> on philosophical/metaphysical ideas.


The brain is one thing, AI is another thing, and the way anybody feels about AI or 
others is a third thing.
If a horse has X horsepower and my car has Y horsepower, how many horses are 
in my car? Well, when I drive there is only one horse in my car - it is me, and 
I feel ok! We don't know how a horse gets horsepower from grass, but cars use 
oil. We also don't know how the brain thinks, but computers do it in a different 
way.



> > Creating AI will only prove that it is possible to create intelligent 
> > machine (=
> > the difference between man and machine is not in metaphysics) and 
> > philosopher
> > who now think it is not possible will be proven wrong. Since machines are 
> > still
> > machines, intelligent or not, they never get tired, they are predictable and 
> > can
> > be improved and become bigger, faster and better in any way. All this will 
> > help
> > us to solve some other problem and to live well (unless we kill each 
> > others).
> >
> That much I agree with.
>
> > After AI is created, Earth will remain round, Sun will be at the same 
> > place
> > and everything normal and rational people knew will remain the same with 
> > only
> > one little exception - from this moment they will know how to make AI.
>
> And large numbers of philosophical and scientific positions will be
> amended.

People can't think anyway. It takes intelligence to make intelligence and our 
inability to make AI proves our stupidity.

0
iso
11/22/2008 7:13:39 PM
casey <jgkjcasey@yahoo.com.au> wrote:
> On Nov 21, 7:24 am, c...@kcwc.com (Curt Welch) wrote:
> > The perception of one's body is all there is to
> > have a perception of.  What else can the "self"
> > be other than a perception of one's body?
>
> The parietal cortex is required for a perception
> of the embodied self. Damage to these areas can
> result in gross distortions of the embodied self.
> A subject for example may assign their left arm
> as belonging to someone else.

I don't know what the "parieta cortex" is, but if you sever the brain into
two halves, you have created "someone else".  The left side will start to
see the right side as "another person" because it will be "another person".
This type of behavior is totally consistent with my ideas of how the
neocortex is one large generic behavior learning network.  It acts as "one
person" only when it's allowed to cross connect freely and when it shares a
single reward signal.  If you damage enough of those cross connection paths
then it acts more like two separate people than one person.  Normally,
activity in one area will have the power to supress activity in another
area.  This is what allows the brain to produce behaviors with a single
purpose (so it's not trying to do two things at the same time).  But if the
cross connects or systems that allow that to happen are damaged, different
parts of the brain will try to create conflicting behaviors and will loose
the ability to learn not to do that - so the left arm might try to push
some food away, while the right arm is trying to grab it to eat it.

If one part of the brain can't control another part like it should, it will
form a disconnected sense of "self" as a result.

Whatever the "parieta cortex" is, its damage no doubt tends to cause a
disconnect in parts of the brain that would otherwise be connected in a
normal brain.

> Unlike Curt I see the mind as something the brain
> does. When the brain is not active the mind does
> not exist just as when someone stops running the
> running does not exist.

I understand that view and don't have any issue with the idea that the
phrase "the mine" means "the behavior of the brain".

However, from what you write, I don't get the idea that you have ever
understood my underlying point here.  My underlying point is that the brain
and the mind are the same thing.  Everything I've seen you write leaves me
with the impression you have never understood why this is true.
Everything you write leaves me with the impression that you believe that in
the underlying reality of the universe the mind is something separate from
the brain, which has a relationship described by "the mind is what the
brain does", like "houses are what people build".

Humans run, as you point out.  And they also walk, and sleep, and eat, and
write Usenet posts and lie still when they are dead or unconscious, and
breathe and sweat and block light and have mass, and press against the
earth with their weight and grow hair and wear clothes and go to school and
go to work and die and reproduce and get in silly debates on Usenet.

What is the word we use to describe all the things a human can do as a
whole?  What is the word that we can use for humans like we use the word
"the mind" to describe the full set of everything what the brain does?  The
answer is simple.  It's "the human".

The brain also has mass and pushes against the ground and blocks light.
These are things the brain does.  Are these more examples of "the mind"
since "the mind" means "what the brain does"?  I've never seen anyone
besides myself use the words "mind" to mean that but yet your definition
should include that.

The truth in fact, is that we use the words "the mind" to talk about all
the things *we* *can* *sense* our brain doing.  And the reason we call it
"the mind" instead of "the brain" is because when those words were created,
we didn't know that we were actually sensing our brain.  We thought it was
something else.  That is why it was given a different name.  That is why we
call the combined set of attributes that we can sense about our own
brain as "the mind".

The model of reality the brain builds for us, shows no associations between
these private brain events, and the external physical attributes of our
head or the brain we know is inside our head.  As such, we instinctively
think of our body, as something separate from our mind.  But logically, if
you are able to understand the logic, we understand that this is wrong -
it's an illusion.  The reality is that when we sense our thoughts, we are
sensing our brain, just as much as when we sense our hand wave, we are
sensing our hand.  Hand waving is not something separate from the hand,
it's a word we use to describe one of the many behaviors that together as a
set we use the word "hand" to describe.

If we didn't understand that hand waving was a feature of our hand, we
would have given it a separate name as well.  We might have called it "the
hav".  As in, "look Mary has a hav - but I don't see her hand anywhere!".
But since we know that a waving hand is still a hand, we don't talk like
that.  We just say, "Mary is waving her hand".

With the confusion over the mind, we talk about it in ways we would never use
to talk about anything else.  We say "what's on your mind?" instead of saying,
"what are you doing with your brain?"

My point here is that when we use the word "mind" we are implying that we
are talking about something other than the brain, when we are not.

What I've never seen you write, is anything that makes me believe you
really understand this point.  Everything I've seen you write, makes me
believe the only view you understand, is the illusion your brain creates
for you - you write as if the illusion, was real.

> > Yes, but the "mind" has another name, it's
> > called the brain.
>
> Or it is what the brain does.
>
> > ... why do we call the brain "the mind" instead
> > of calling it the brain?
>
> The same reason they don't call the body "running"
> or "walking". The brain is a physical entity the
> mind is what that physical entity is doing.

Then why don't we have a similar word for "what a think does" for trucks,
and hands, and waterfalls, and planets and solar systems and dogs and cats
and sticks and rocks?

Why is it that the brain is the only object in the universe for which we have
a word which means "all the stuff this object does - but not really all the
stuff, just the stuff I can sense about it without using my eyes, ears, nose,
fingers or tongue"?

> > We don't have two things, we have one.
>
> Indeed we have one thing that does things. Some
> words refer to the object (brain) and other words
> refer to what the brain is doing (mind).

No, we have verbs for what the brain does.  Words like "thought", "memory",
"idea", "think", "recall", "envision", "sense", "understand", "dream".
Those are the words that are parallels with run and walk.

Trying to argue that mind is just a verb that describes an action of the
brain doesn't explain the extremely odd way we use it in the English
language.

How we use it in the English language implies it is a different object from
the brain - because most people see it as a different object from the brain
- which is why they think there is a mind body problem that is worthy of
hundreds of years of debate.  There would be no such debate if they saw it
for what it was - the body body problem.

> > You can call it the brain, or the mind, but it's
> > the same thing no matter which word you use.  The
> > argument that the mind is what the brain does
> > doesn't cut it. The brain is what the brain does.
> > All objects are what they do - that's what makes
> > them an "object" in the first place - their
> > behavior.  That's how we separate the world into
> > unique parts - by their behavior.  We know a
> > cat is a cat because it doesn't act like a dog.
> > A cat is a cat because it acts like a cat.
> > "Cat behavior" is what a cat does.
>
> So if I behave like a cat then I am a cat?

Yes.  But to behave like a cat, you would have to have the same mass as a
cat, the same hair as a cat, the same eyes as a cat, the same bones as a
cat, the same digestive system as a cat, the same whiskers as a cat, the
same motions as a cat, the same mouse-hunting-and-eating behavior as the
cat.

You are not able to behave like that and as such, you are never considered
a cat.

If you put a very realistic cat-puppet in your hand, you could fool someone
into thinking you were a cat for a short time.  But the minute they saw
that the cat has a human coming out of its butt, they would realize this
cat was performing a very non-cat behavior (shitting a live human) and as
such, they would instantly change their classification from "cat" to "human
with cat puppet on their hand" because the behaviors would no longer match
that of a cat.

> In fact animals can also show Self behaviors
> just as animals can also show running behaviors.
> Self behaviors doesn't make you a cat or a dog.

That's right.  To be a cat, you have to behave like cat - which no human
can do.

> JC

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
11/22/2008 7:21:01 PM
On Nov 22, 11:13 am, ©uæMuæPaProlij <1234...@654321.00> wrote:

> These are the consequences of creating AI, but it has nothing to do with the
> question "how to create AI". Such questions are relevant even today - what are the
> rights of the slaves, black people, poor people, sick people, old people, rich
> people, people from other countries or anybody who is different in any way?

There is no need to answer the question "how does
one create an AI" until one answers one of these
question "What would be the morality of using an
AI as a slave" or "Is there a need to create free
non-human moral agents".
0
forbisgaryg
11/22/2008 7:28:43 PM
casey <jgkjcasey@yahoo.com.au> wrote:
> On Nov 21, 1:12 pm, "Publius" <m.publ...@nospam.comcast.net> wrote:
>
> > Moreover, those phenomenal properties ---
> > the colors, shapes, scents, tactile sensations,
> > etc.,  are the *primary* data from which all
> > inquiries begin, including the inquiry into the
> > nature of the brain, and the data against which
> > all theories of the "external world" (including
> > the brain) must be validated. The brain is a
> > *construct* you have assembled from those very
> > data.
>
> I tried to explain this in comp.ai.philosophy
> only to be attacked as a closet dualist.
>
> Curt takes the model to accurately reflect
> reality and the data it is constructed from
> to be the illusion.

Huh?

I take reality to be reality (it is what it is and has nothing to do with
us) and our model to be our understanding of what it is (right or wrong).

An illusion is a persistent difference between reality and the brain's
model of reality.

> I take the model to be our current working
> assumption justified by the practical results
> obtained using it.

I never accused you of dualism for that.  Which just gets me back to the
issue that you have never shown a sign of understanding what I'm trying to
communicate to you.

> JC

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
11/22/2008 7:31:39 PM
>> These are the consequences of creating AI, but it has nothing to do with the
>> question "how to create AI". Such questions are actual even today - what are 
>> the
>> rights of the slaves, black people, poor people, sick people, old people, 
>> rich
>> people, people from other countries or anybody who is different in any way?
>
> There is no need to answer the question "how does
> one create an AI" until one answers one of these
> question "What would be the morality of using an
> AI as a slave" or "Is there a need to create free
> non-human moral agents".

In life there are only two things to worry about - whether you live or die
if you live there's nothing to worry about
if you die there's only two things to worry about - whether you go to heaven or 
hell
if you go to heaven there's nothing to worry about
if you go to hell you'll be so busy saying HI to old friends you won't have time 
to worry.

0
iso
11/22/2008 7:41:49 PM
Josip Almasi <joe@vrspace.org> wrote:
> Curt Welch wrote:
> >
> > But in fact, rocks are hard and pillows are soft
> > because that's the way the brain has modeled them for us.
> ...
>
> :))
>
> It's funny how you speak of brain as a distinct subsystem:)
> When you do so, you obviously identify with something else, not brain.
> Well, IMHO, *that* is consciousness. And reason to make the distinction.
> Ability to think of self as an object. To think. It's on much higher
> level than such a simple illusion you talk about. And furthermore, to do
> so, it is necessary to separate from body/brain:)
>
> Regards...

Yeah, there's this real problem with how I talk that's highly inconsistent
with what my real self image is.

Above I wrote "the brain modeled it for us", meaning the same thing as "my
brain does it for me" - which implies that I see myself, aka the "me" in
that phrase, as something separate from my brain.

However, I don't.  I just write that way because that's how I was taught
to write - and because I was taught to write like that, I sometimes slip
back into thinking that way.

To be consistent with what I actually believe and how I generally try to
think of myself, I should have written it simply as:

    But in fact, rocks are hard and pillows are soft
    because that's the way the brain models it.

Or for myself:

    But in fact, rocks are hard and pillows are soft
    because that's the way I model it.

But then, if I used the word "I" like that to mean "me the brain in this
body", no one would understand I was talking about the brain, because in
English, "me" or "I" means "my conscious self" which is defined not to be
the same thing as "me the brain in this body" - which is how I meant it
above.

This is the recurring problem of trying to communicate these types of ideas
in common English because the English language describes a reality where
duality is real, and not an illusion.  Trying to use English to talk about
a reality where duality doesn't exist is basically impossible - you can't
do it in English unless you are talking to someone that knows you are using
the modified version of English where all the definitions of the words are
changed to fit a non dualistic universe.

When I use the words "me", or "I", I always mean "this body sitting in this
chair typing on a keyboard", not "me the soul in the body", or "me the
conscious thing that is created by this body", or "me the pattern of
behavior someone might mistake for being the actions of this body".

I think of myself as no different from how I would think of myself if I were
a man-made robot.  I'm just a machine wiggling my fingers typing a Usenet
message.

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
11/22/2008 8:03:50 PM
curt@kcwc.com (Curt Welch) wrote in
news:20081121010006.061$G1@newsreader.com: 

>> The bird solved that problem in a matter of minutes, even though it
>> was surely a problem situation it had never before encountered. That
>> is intelligence.
 
> Yes, I think that's a fine example of intelligence.  But I suspect you
> give the bird far too much credit for how novel this problem was.
> 
> Do you think the bird has never had to pick up a worm or snake in its
> life and hold it with its foot on a branch?  I suspect the behavior
> of dealing with a long thin piece of food is in fact fairly common for
> most birds.

That is a good point. That possibility could be investigated using birds 
with known histories and environments (domesticated or aviary birds). 
The behavior is more complex than pulling a snake from a hole, however. 
Moreover, ravens and other corvids have exhibited similar creativity 
with other novel problems.

There is no doubt, on the other hand, that the birds are drawing upon 
some general principles they have inferred from previous experience, 
which may have differed quite considerably in detail from the present 
problem scenario. I.e., they have abstracted some rules which can apply 
far more widely than the scenarios in which they are originally 
encountered. For example, the bird may have recognized that the string 
belongs in the category of "extended, flexible things," and that all 
things in that category can be manipulated in certain ways. They also 
have some apparent grasp of gravity, i.e., if the loop is not held in 
place, gravity will pull it right back down (which they may well have 
learned from their own experience as fliers).

Solving AI will require some means for the system to extract rules --- 
the "laws of motion" --- from ensembles of objects in the environment 
and then categorize "things" according to how they relate to other 
things and which set(s) of rules the resulting ensembles obey. Then an  
ensemble "template," with its applicable rules, can be overlaid upon an 
ensemble in the environment, selected via pattern matching, and the 
applicable rules followed (which will select a behavior set).

> However, no matter how novel this example was or not, a big part of
> intelligence is the ability to correctly apply lessons learned in past
> situations to a new and fairly novel situation.  I agree completely
> with that issue.

Yes. But what are those lessons? How does the system recognize and 
extract rules of wide generality, or (which is perhaps the same thing) 
generalize a rule from one or two specific instances of it?

> As I've written, I believe AI will be solved by a reinforcement
> trained neural network.  The big reason in my view that these have had
> little luck in the past is because I think no one has yet produced a
> good solution to the generic classification problem which forms the
> basis for how such a network applies lessons learned from the past, to
> future problems.
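
To make the classification issue concrete, here is a minimal sketch, with an
invented classify() standing in for whatever measure of closeness the network
would have to learn, and a simple one-step reward-averaging update in place of
a full learner; what is learned about snakes and worms transfers to the string
only because classify() maps all three onto the same internal category:

    def classify(raw_state):
        # Invented abstraction: collapse raw states into coarse categories.
        kind, _, _ = raw_state.partition("-")
        return "long-thin-food" if kind in ("snake", "worm", "string") else kind

    def value_update(Q, raw_state, action, reward, lr=0.5):
        s = classify(raw_state)
        Q.setdefault((s, action), 0.0)
        Q[(s, action)] += lr * (reward - Q[(s, action)])   # move toward reward

    Q = {}
    # Past experience: holding snakes and worms with the foot paid off.
    for raw in ("snake-3cm", "worm-5cm", "worm-2cm"):
        value_update(Q, raw, "hold-with-foot", reward=1.0)

    # Novel object: food on a string falls in the same category, so the
    # learned value is available immediately.
    print(Q[(classify("string-4cm"), "hold-with-foot")])   # 0.875

Get classify() wrong (say, grouping by color instead of shape and motion) and
the same update rule transfers the wrong lessons.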

Perhaps the more crucial question is, What are the lessons learned?

The question parallels that of how children learn speech. At first they 
learn to recognize and pronounce specific words, and relate them to the 
correct things. But then they begin to extract the rules of grammar of 
their native language --- how word strings are put together in order to 
do things with them. Soon they can do almost anything they want, 
speechwise, using word strings they may never have heard before.

If we could model how kids learn speech we could probably model the 
smarts of the raven. A sequence of movements is arranged in such a way, 
following certain rules, as to achieve a desired result. Probably the 
same model, with variations, would work for both.

> If you get it wrong, then the network when performing an image
> classification task (for example) might decide the red car looks more
> like a red bird than the 1000 other "cars" it knows about and respond
> to the picture by saying "bird", instead of "car".

The perceptual field must first be resolved into "things." Then the 
entire feature set of an unknown "thing" must be compared to the stored 
templates --- the onboard inventory --- of idealized things (though I 
believe you would argue there is no such inventory --- correct?). The 
system should not make a decision based on a single common feature. More 
likely, encountering a partial feature set in the environment will 
automatically trigger retrieval of the "best fitting" template, thereby 
instantly giving the system more information about the "thing" before it 
than it is currently observing. E.g., a cat arches its back, prepares to 
fight, and selects an escape route when a dog appears in its visual 
field, even though the dog does not notice the cat and is exhibiting no 
threatening behavior at all. The cat is responding to its "dog 
concept," not the perceived dog. The perceived dog invokes the concept, 
which provides the cat with information about the dog beyond what it 
currently observes.
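
A small sketch of that retrieval step, with the feature sets invented purely
for illustration: a partial observation pulls in the stored template with the
greatest feature overlap, and the retrieved concept then supplies expectations
that were never actually observed:

    TEMPLATES = {
        "dog":  {"four-legged", "furry", "barks", "chases-cats", "tail"},
        "tree": {"tall", "leafy", "bark-covered", "stationary"},
    }

    def retrieve_concept(observed):
        # Best-fitting template = largest overlap with the observed features.
        return max(TEMPLATES, key=lambda name: len(TEMPLATES[name] & observed))

    observed = {"four-legged", "furry", "tail"}   # the dog hasn't barked yet
    concept = retrieve_concept(observed)
    expected_but_unobserved = TEMPLATES[concept] - observed
    print(concept, expected_but_unobserved)
    # -> dog, plus {'barks', 'chases-cats'} (order may vary): information
    #    beyond what is currently being observed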

> A direct measure of the networks "intelligence" is how good the
> network is at picking the right measures of closeness so it knows that
> the behaviors that worked well for eating a snake (for example) are
> likely to work well in eating food tied at the end of string.  And it
> could make that association by the fact that the string moved (changed
> over time) in ways that were similar to a snake or worm.

That would amount to categorizing the string and then applying the 
relevant rule set.
0
Publius
11/22/2008 8:10:50 PM
forbisgaryg@msn.com wrote:
> On Nov 22, 11:13 am, ©uæMuæPaProlij <1234...@654321.00> wrote:
>
> > These are the consequences of creating AI, but it has nothing to do
> > with the question "how to create AI". Such questions are still relevant
> > even today - what are the rights of the slaves, black people, poor people,
> > sick people, old people, rich people, people from other countries or
> > anybody who is different in any way?
>
> There is no need to answer the question "how does
> one create an AI" until one answers one of these
> question "What would be the morality of using an
> AI as a slave" or "Is there a need to create free
> non-human moral agents".

I've answered those questions many times.  That's why I've moved on to the
more interesting "how does one create an AI". :)

I'll explain my answers....

Since I believe humans are nothing more than biological machines created by
the process of evolution which happen to include an adaptive learning
module to control their behavior, I answer the questions like this....

All human behavior is an attempt by the adaptive learning module to
maximize the internally defined reward signal.  Morality is just one of
many high level versions of the same fundamental question "what is the
optimal behavior for maximizing reward"?  Such as, will I produce more long
term reward if I kill my neighbor and take his food, land, and wife, or
will I produce more long term reward for myself if I do things to help my
neighbor maximize his rewards?

This is one of those tough borderline questions where it is not easy to see
which option is better, and for which a typical human brain will choose the
option of killing the neighbor if not influenced by other external forces.
Those external forces that tend to influence us not to kill our neighbor are
forces created by our culture - and they survive because those forces, and
how they motivate us, tend to help the culture survive.  Those forces come
from the fact that our culture motivates us to train each other not to kill.
We train our children not to kill, we train our children to "follow the
law", we train our children to "be moral", etc.  Our culture (a product of
evolution) protects itself by motivating its members to help one another.

As long as working together does in fact increase our odds of survival, our
culture will continue to exist, and future generations will continue to be
trained to not kill each other.

The word "moral" is just just another version of "what is right and wrong",
and all questions of right and wrong, translate back to the fundamental
question we, as adaptive learning machines, were built to search for - that
is - what behaviors are best for maximizing total future reward?

Having made it clear what "moral" means, I can move on the first question:

"What would be the morality of using an AI as a slave".

We ask that question because, as a culture, we have concluded that using
humans as slaves falls on the "wrong" side of the right and wrong question.
An AI robot could be very much like a human, so would it be right or wrong
to use one of these machines as slaves?  Would it cause us more, or less,
total reward in the long run?

The answer to that will depend on the specifics of what type of AI
machine we build. But let's start with the assumption that we build a robot
that is like a human in all ways concerning its feelings, desires, and
emotions.  The only way it's different is that it's got a metal body with a
computer brain instead of a meat body.

In this case, the robots would end up being victimized and abused by the
humans if we used them as slaves.  Even if many of us were nice to our
slaves, many of us would not always be nice and some would be downright
evil towards them - causing their slaves to undergo constant pain, torture,
and live a life of constant fear.  But would this cause us more or less
reward in the long run if we allowed this to happen?  That's a very complex
question - a question so complex that I don't think we can produce a good
answer to it.  However, we can lay out the major issues.

First, we risk that the slaves rise up against us if they are abused like
that.  Abuse of a machine like that will make the machine want to harm us -
and that clearly is not good.  It's asking for trouble in the future.

Second, because we can relate to the machine's pain, it will cause many of
us pain just to know that some robot owners were constantly abusing their
robots.  That pain in us is clearly not maximizing our reward, and as such,
is another negative.

Third, because we can see a connection between these machines, and humans,
allowing this abuse to be condoned by our society, will cause some people
to see similar harm to fellow humans as being OK as well.  If you learn to
abuse an intelligent conscious robot, it's only a small step to applying
the same abuse to a human.  Some guy might use it as a justification to
grab a few women and use them as sex slaves.  So by association, you risk
motivating people to harm each other, by allowing this harm of slaves to
exist.

Fourth, because of the extreme closeness of behavior between robots and humans,
many humans will see this unequal treatment of robots as wrong - which
will lead to dissent and unrest in the human population - potentially
creating yet another event like the US civil war.  Yet another potential
future loss of reward.

Now, on the other side of the coin, we have the potential upside to weigh
these against.  Having slaves to do our work for us will no doubt make us
better off - more food, shelter, power over our environment, wealth in
general.

That's about it. So the answer as to whether we should allow robot slaves
is a question of which side of the issues creates the greater long term
effect on our future rewards.  Will robot slaves make us better off, or
worse off, in the long term?  That's the question we are trying to answer.
It didn't work out when we tried the same experiment using human slaves,
so why would it work out if we try the experiment again using robot slaves?

I can think of a few reasons why it might work out as a win for using robot
slaves.  First, there is never a question of whether a human is a robot or
a robot is a human.  We can always, without doubt, know that they are
different and use that distinction to create a culture of separation.  With
human slaves, you can't really tell if a human is a slave, or a master.
Was it the ones with the stars on the belly that were the slaves, or the
masters?

http://www.amquix.info/humor/sneetches/sneetchlogo.gif

In addition, we could make the robots look very different than humans (like
making them look like R2-D2 or WALL-E) to minimize the association in the
culture between humans and these robots. That would greatly help to reduce
the problems of association from above.

I think in general we could make it work so that robots with human-like
intelligence and desires could be used as slaves, and still come out as a
long term win for society.  But it's really a question too close to call.

But all that analysis is based on the idea that the robots will have human
like desires and needs.  And that's where an understanding of what
intelligence is changes everything - and why there is value in putting the
cart before the horse - that is, in solving the problem of AI before you
figure out if it's moral.  Because if you don't really know what "it" is,
you can't answer the question.

But since I have solved enough of AI to answer this part of the question,
I'll continue.

Adaptive behavior controllers can be motivated to do anything.  You don't
have to motivate them to want to survive - which is the root cause of all
the problems you run into when you try to use a human as a survival
machine.

Humans are adaptive behavior controllers that are motivated to work for
their own survival.  If you try to make them work for your survival, and
reduce their own odds of survival by doing so, they will try to resist you.
You will have to manipulate their expectation of future rewards to make
them do that - you will have to harm them in order to condition them to do
what you need to have done, instead of doing what they need to have done.
And being machines built to try and prevent harm, they will try to stop you
from harming them - they will try to harm you in return.  It will only work
in the favor of the master for as long as you can maintain the upper hand
on who can do the most harm to the other.  This is in general a dangerous
game to play, even though it can work if done correctly, and carefully.

But there's the other option - just modify them to have different
motivations from the start.  Don't build machines which are motivated to
survive.  Build machines that are motivated to maximize our rewards.  Once
you do this, it's no longer a slave.  It's no more a slave than our own
arms and legs are slaves to our own brain.  They will be willing assistants
acting in our best interest, just like my arm acts in my own best interest.

When done correctly, our robot helpers will be more than family to us -
they will be like extensions of our own body - only intelligent.  They
won't hate us, and look for ways to do more harm to us than we do to them -
they will love us and they will hate it if we don't allow them to help us.

When done like this, there is absolutely no question that they will help
maximize our long term future rewards.  There is no question that they will
fall far to the side of "right".  We will become so addicted to these
machines we won't be able to live without them.

Now for your second question....

"Is there a need to create free non-human moral agents"

Yes.  Because a "moral" agent is nothing more than a reinforcement trained
adaptive behavior controller.  And unlike human moral agents, the moral
agents we will build will have very different motivations. They will be
motivated to love us and care for us more than they love and care for
themselves. They will care for themselves, only as a secondary requirement
of doing a better job of taking care of us.  They will be like super "moms"
- totally dedicated to taking care of the humans.  We "need" this because
it will allow us to reach levels of future maximal reward we can't reach
without them.

You ask these questions because I assume you suspect the answers are that 1)
it's not moral (aka not right) to use intelligent robots as slaves, and 2)
there is no need to create intelligent robots when we can create as many
humans as we need - or that we have too many humans already, and it's not
right to add to the problem by creating more intelligent agents when we
have too many already, and it is even further wrong to displace humans with
these machines.

But that type of thought is based on a lack of understanding of what
intelligence is. It's based on the logic that all intelligent agents must
have the same motivations and desires as humans.  By better understanding
intelligence, we see that is not a requirement at all.  We see that we can
create intelligent, moral agents that have very different morals from
humans. What's right and wrong for them is not the same as what's right
and wrong for a human.

I have no doubt that when you see what a correctly motivated intelligent
robot is like, all your fears that this path might be immoral (aka wrong)
will vanish.

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
11/22/2008 9:07:09 PM
On Nov 22, 11:21 am, c...@kcwc.com (Curt Welch) wrote:

> I take reality to be reality (it is what it is and
> has nothing to do with us) and our model to be our
> understanding of what it is (right or wrong).
>
>
> An illusion is a persistent difference between
> reality and the brain's  model of reality.


But we cannot know anything about Reality we can only
know our mental models of reality. How do you know if
something is an illusion? How, for example, would you
demonstrate that mach bands are an illusion?


> Which just gets me back to the issue that you have
> never shown a sign of understanding what I'm trying
> to communicate to you.


Has anyone understood the issues you raise :)


> The brain also has mass and pushes against the ground
> and blocks light. These are things the brain does.


I never meant that everything the brain does is called
mind. Indeed I don't even assume all neural activity
is involved in what we might call a Mind Activity.


> ... Why is it that the brain is the only object in
> the universe we have a word which means "all the
> stuff this object does - but not really all the
> stuff but just the stuff I can sense about it
> without using my eyes ears nose fingers or tongue".


I have explained that many times and indeed, when not
declaring it to be due to "training" and "conditioning",
you have also hit on the correct reason we have this
illusion of the Mind as being something extra.


> ... we have verbs for what the brain does.  Words
> like "thought", "memory", "idea", "think", "recall",
> "envision", "sense", "understand", "dream". Those
> are the words that are parallels with run and walk.


Well not exactly parallel. I don't know if you read
the post in which I made reference to SHRDLU? Although
it was a virtual reality robot it could have been
implemented in hardware where it could move its arms
to manipulate blocks. I tried to show in this system
what parts were more like "conscious" processes and
what parts were not "conscious" processes. You have
asked what was the difference between a "conscious"
process and an "unconscious" process and I tried to
flesh out some of the differences in terms of content.


> Trying to argue that mind is just a verb that
> describes an action of the brain doesn't explain
> the extremely odd way we use it in the English
> language.


I think we both agree on the fact that many have
the belief in a mind or soul as a separate entity
to the brain. And I think we both agree that there
is no evidence for such an entity and most people
who know about our current knowledge of brains
realize that the contents of the mind correlate
100% with brain activity.

I don't see the word "mind" as a single action rather
it is the word that covers all the things you mention
above such as "thought", "memory", "idea" and so on,
all of which make up possible contents of the mind.
However other effects the brain has due to its other
properties such as mass are not part of the brain
behaviors we call mind.

The firing of a neuron isn't what we call a conscious
process even though a conscious process is entirely
due to the activity of firing neurons. When you say
a neuron (or rock) is "conscious" it has no meaning
in this context unless you are still seeing the mind
as some kind of entity or extra substance rather
than a type of process.

The action of an NOR gate is not what we call a
"finding the square root of a number" process even
though the "finding the square root of a number"
process involves the action of many NOR gates.

A "conscious" process is the same as an "unconscious
process" with respect to them both involving the
firing of neurons and most likely the same neurons.
The content of a "conscious" process is made up of
colors, feelings, thoughts and so on which are the
result of the firing of neurons. However a firing
neuron does not have feelings or thoughts as its
content even though it is a subprocess required
for the feelings and thoughts to take place in the
system as a whole.

I think you keep assuming that others are using the
word Mind to refer to an entity or substance and
that is why you came up with your conscious rocks.


JC
0
casey
11/23/2008 5:18:05 AM
<forbisgaryg@msn.com> wrote in message 
news:37f449c3-1ca1-4bc5-9a91-2328bebf1332@a29g2000pra.googlegroups.com...
>I have received the paper and read about 11 pages of it
> and skimmed the rest.  It's quite dense.  I won't be able
> to get a good feel for it for several days because I come home
> quite tired.

Thanks for your interest.  I look forward to your feedback.

>
> Based upon the conclusion I wonder why Stevan
> Harnad's Total Turing Test wasn't mentioned.

Dreyfus is not really saying that AI is impossible, but rather that if it 
happens, it will have to be embodied and have a flat, non-representational 
architecture.  So, it seems like the Turing test and its derivatives are not 
something he needs to reference at this point.

> I remeber reading "What Computers Can't Do"
> in the early '70s, maybe '72 when it came out.
> I don't remember much of the book any more.
> I should read it again.  It set the stage for
> my reading 1980 reading of John Searle's
> article in Behavioral and Brain Sciences.
> Still, He's come quite a ways to make the
> statement:
>   We can, however, make some progress towards animal AI.
>
> So many of us have our reasons to doubt computers
> (as currently constructed and envisioned) will be able
> to cheaply simulate situated human coping but each
> has his or her own reason.
>
> I am encouraged by this paragraph:
>
>  So, according to the view I have been presenting,
>  even if the Heideggerian/Merleau-Pontian approach
>  to AI suggested by Freeman is ontologically sound
>  in a way that GOFAI and subsequent supposedly
>  Heideggerian models proposed by Brooks, Agre, and
>  Wheeler are not, a neurodynamic computer model
>  would still have to be given a detailed description of
>  a body and motivations like ours if things were to
>  count as significant for it so that it could learn to act
>  intelligently in our world.   We have seen that Heidegger,
>  Merleau-Ponty, and Freeman offer us hints of the elaborate
>  and subtle body and brain structures we would have to
>  model and how to model some of them, but this only
>  makes the task of a Heideggerian AI seem all the more
>  difficult and casts doubt on whether we will ever be able
>  to accomplish it.
>
> Even if not intended he shows the way.

Yes, "his way" is what I question in this series of critiques.  He does a 
lot of hand waving and when he does cite technical details and/or come to 
broad conclusions I find that he does without merit.

>Still, digital systems
> in digital worlds are subject to latching where analog systems
> in analog worlds would not.  As long as the designed system
> is representational, even if of the system being modelled, it
> is GOFAI all the way down.
>
> I believe your sentence parsing for the sentence starting page 11
> line 33 is incorrect in that Dreyfus is attributing a conclusion to
> Chalmers, Clark, and Wheeler rather than making one himself.
> See the next paragraph to see why I believe so.

Either way, he bases his many conclusions later in the paper on all the 
various philosophers he quotes, so to me Dreyfus owns them no matter who 
the source is.

Cheers,
Ariel- 


0
Isaac
11/23/2008 12:03:25 PM
On Nov 22, 12:13 pm, ©uæMuæPaProlij <1234...@654321.00> wrote:
> > Please elaborate how a mathematical equation, or proof, or model,
> > tells us how brain structures create the image of a blue cube in my
> > phenomenal awareness?  Then claim your Nobel!
>
> Math can answer mathematical questions. Your question is not a mathematical
> question because you didn't define "phenomenal awareness" using math. This is
> like trying to send beer by e-mail.

*YOU* are the one who posited the ref. to math being able to answer the
brain questions!  GHEESH!

>
> > > > See Koch's The Biophysics of Computation for an example of
> > > > sophisticated math applied to biological function in brain, but which
> > > > provides no clue as to representation etc.
>
> > > Yes, we all know that some scientists tried and failed just as many
> > > philosophers
> > > did.
>
> > There isn't any scientist or philosopher that has definitively
> > determined how brain represents reality in consciousness.
>
> There are many things that are not definitively determined but there are also
> many things that are......
>
> > > True, but creating AI will not result in a new philosophy of the universe,
> > > just as sending man to the moon didn't. This will be an advance in
> > > technology but not in philosophy.
>
> > It sure will affect philosophical underpinnings!  Especially as we
> > decide whether to confer basic "human" rights to "slave" AI
> > artifacts!  Not to mention the basic ideas of substrate independence
> > of intelligence issues and so forth.
>
> These are the consequences of creating AI, but it has nothing to do with the
> question "how to create AI". Such questions are still relevant even today - what
> are the rights of the slaves, black people, poor people, sick people, old people,
> rich people, people from other countries or anybody who is different in any way?
>
>
>
>
>
> > > It will change our lives and it will change our every day routine but it
> > > won't change our global philosophical view of the world (unless you are
> > > one of them who think there is some metaphysical difference between man
> > > and machine).
>
> > Well, there is certainly a physical difference; one is alive - the
> > other is not.  And I do not subscribe to the notion that brain
> > performs local symbolic processing to represent reality.  Even the
> > connectionist view that posits non-local representationalism has
> > issues that have not been answered satisfactorily.  As far as
> > metaphysical differences, of course there is a huge difference in all
> > sorts of domains. The way we treat machines is in no way comparable to
> > how we treat (or should treat) fellow humans. No one cares if I
> > discard a malfunctioning washing machine or toaster (even if they have
> > some degree of intelligence), but grandma is another story.  All based
> > on philosophical/metaphysical ideas.
>
> Brain is one thing, AI is another thing and the way anybody feels about AI or
> others is the third thing.
> If horse has X horse powers and my car has Y horse powers, how many horses are
> in my car? Well, when I drive there is only one horse in my car - it is me, and
> I feel ok! We don't know how horse gets horse power from the grass, but cars use
> oil. We also don't know how brain thinks, but computers do it in a different
> way.
>
>
>
>
>
> > > Creating AI will only prove that it is possible to create an intelligent
> > > machine (= the difference between man and machine is not in metaphysics),
> > > and philosophers who now think it is not possible will be proven wrong.
> > > Since machines are still machines, intelligent or not, they never get
> > > tired, they are predictable and can be improved and become bigger, faster
> > > and better in any way. All this will help us to solve some other problems
> > > and to live well (unless we kill each other).
>
> > That much I agree with.
>
> > > After AI is created, Earth will remain round, the Sun will be at the same
> > > place, and everything normal and rational people knew will remain the same
> > > with only one little exception - from this moment they will know how to
> > > make AI.
>
> > And large numbers of philosophical and scientific positions will be
> > amended.
>
> People can't think any way. It takes intelligence to make intelligence and our
> inability to make AI proves our stupidity.


We already have AI in various domains.


0
Alpha
11/23/2008 2:46:31 PM
On Nov 22, 12:41 pm, ©uæMuæPaProlij <1234...@654321.00> wrote:
> >> These are the consequences of creating AI, but it has nothing to do with the
> >> question "how to create AI". Such questions are still relevant even today - what
> >> are the rights of the slaves, black people, poor people, sick people, old people,
> >> rich people, people from other countries or anybody who is different in any way?
>
> > There is no need to answer the question "how does
> > one create an AI" until one answers one of these
> > question "What would be the morality of using an
> > AI as a slave" or "Is there a need to create free
> > non-human moral agents".
>
> In life there are only two things to worry about - whether you live or die
> if you live there's nothing to worry about
> if you die there's only two things to worry about - whether you go to heaven or
> hell
> if you go to heaven there's nothing to worry about
> if you go to hell you'll be so busy saying HI to old friends you won't have time
> to worry.

You must be high or drunk or something; your verbiage is replete with
non sequiturs and allusions to context that we are not talking about.


0
Alpha
11/23/2008 2:47:32 PM
On Nov 22, 2:07 pm, c...@kcwc.com (Curt Welch) wrote:
> forbisga...@msn.com wrote:
<snip>
>
> Yes.  Because a "moral" agent is nothing more than a reinforcement trained
> adaptive behavior controller.

I created "reinforcement trained adaptive behavior controllers" in
many process control systems when I worked at Honeywell.  None of them
were moral agents.

Your 3 shadows on the cave wall (RL, particles interacting and signal
processing) have deluded you once again.


0
Alpha
11/23/2008 2:51:07 PM
> > > Please elaborate how a mathematical equation, or proof, or model,
> > > tells us how brain structures create the image of a blue cube in my
> > > phenomenal awareness? Then claim your Nobel!
> >
> > Math can answer mathematical questions. Your question is not a mathematical
> > question because you didn't define "phenomenal awareness" using math. This 
> > is
> > like trying to send beer by e-mail.
>
> *YOU* are the one who posited the ref. to math being able to answer the
> brain questions!  GHEESH!

Math did answer your question, but it is *YOUR* problem that you don't like (or 
understand) the answer.

0
iso
11/23/2008 5:50:35 PM
>>> There is no need to answer the question "how does
>>> one create an AI" until one answers one of these
>>> question "What would be the morality of using an
>>> AI as a slave" or "Is there a need to create free
>>> non-human moral agents".
>>
>> In life there are only two things to worry about - whether you live or die
>> if you live there's nothing to worry about
>> if you die there's only two things to worry about - whether you go to heaven 
>> or
>> hell
>> if you go to heaven there's nothing to worry about
>> if you go to hell you'll be so busy saying HI to old friends you won't have 
>> time
>> to worry.
>
>You must be high or drunk or something; your verbiage is replete with
>non sequiturs and allusions to context that we are not talking about.

This Irish philosophy says only that you don't have to worry before something 
bad happens, and even if something bad happens it is not the end of the world. In 
this context it means that you don't have to worry about moral issues related to 
AI before you make AI. Even if you use AI as a slave, this is not worse than 
using other people as slaves, and there are many people today who are slaves.

0
iso
11/23/2008 5:51:31 PM
Publius <m.publius@nospam.comcast.net> wrote:
> curt@kcwc.com (Curt Welch) wrote in
> news:20081121010006.061$G1@newsreader.com:
>
> >> The bird solved that problem in a matter of minutes, even though it
> >> was surely a problem situation it had never before encountered. That
> >> is intelligence.
>
> > Yes, I think that's a fine example of intelligence.  But I suspect you
> > give the bird far too much credit for how novel this problem was.
> >
> > Do you think the bird has never had to pick up a worm or snake in its
> > life and hold it with its foot on a branch?  I suspect the behavior
> > of dealing with a long thin piece of food is in fact fairly common for
> > most birds.
>
> That is a good point. That possibility could be investigated using birds
> with known histories and environments (domesticated or aviary birds).
> The behavior is more complex than pulling a snake from a hole, however.
> Moreover, ravens and other corvids have exhibited similar creativity
> with other novel problems.
>
> There is no doubt, on the other hand, that the birds are drawing upon
> some general principles they have inferred from previous experience,
> which may have differed quite considerably in detail from the present
> problem scenario. I.e., they have abstracted some rules which can apply
> far more widely than the scenarios in which they are originally
> encountered.

Yes, I agree with all you are saying in this post.

Let me just snip a few key points you made and then respond to them all...

> For example, the bird may have recognized that the string
> belongs in the category of "extended, flexible things," and that all
> things in that category can be manipulated in certain ways.

> Solving AI will require some means for the system to extract rules ---

> Yes. But what are those lessons? How does the system recognize and
> extract rules of wide generality, or (which is perhaps the same thing)
> generalize a rule from one or two specific instances of it?

> Perhaps the more crucial question is, What are the lessons learned?

> The perceptual field must first be resolved into "things."

> Then the
> entire feature set of an unknown "thing" must be compared to the stored
> templates --- the onboard inventory --- of idealized things (though I
> believe you would argue there is no such inventory --- correct?).

I think there very much is such an inventory - though not like you might think.

> The
> system should not make a decision based on a single common feature. More
> likely, encountering a partial feature set in the environment will
> automatically trigger retrieval of the "best fitting" template, thereby
> instantly giving the system more information about the "thing" before it
> than it is currently observing. E.g., a cat arches its back, prepares to
> fight, and selects an escape route when a dog appears in its visual
> field, even though the dog does not notice the cat and is exhibiting no
> threatening behavior at all. The cat is responding to its "dog
> concept," not the perceived dog. The perceived dog invokes the concept,
> which provides the cat with information about the dog beyond what it
> currently observes.

So, let me talk about how I see the hardware working to answer all those
questions above.  You might have read my post about the pulse sorting nets
I've been playing with after you wrote the above, so you probably have a
better idea of what I'm thinking already. But I want to talk conceptually
about how I think the ideas you bring up above can be attacked.

First off, I think and work with connectionist approaches to these
problems.  So my general frame of thought is of multilayer networks with
multiple inputs and multiple outputs, some type of topology internally,
and some type of processing internally.  I do tend to think about and play
with async pulse networks, but synchronous binary networks or value based
networks (each node calculates an output value more complex than one bit) can
all apply to these generic high level concepts I'm about to talk about.

The general question from your post is how to implement abstraction.  That
is, how do we apply a lesson learned from one experience to another?  And
even more important, how do animals and humans seem to do such a good job
at it, as per the example of the raven very quickly figuring out how to get
the food at the end of the hanging string - something that seems to us to
be nothing like what that bird should have seen in its life, or should have
had genetically built into him.

At the high level the idea of what has to be done seems kinda obvious.
When we do something that works for us, we want to apply that to something
we run into in the future which is similar.  If our measure of "similar" is
a good one, we might expect the things we try to work; if our measure of
"similarity" turns out to be stupid, we will get nowhere.  For example, if
I got food once when I was hungry by asking my mom "Can I have more food
please", then the next time I'm hungry, I might try the same thing again.
But if I'm alone in the jungle, and I walk around saying "Can I have more
food please" it would be a fairly stupid thing to try.  We are smart enough
to know that using that behavior, when we are hungry, and surrounded by
trees, is pointless, but when hungry, and surrounded by humans that speak
English, the behavior is a good one to try. But how do we build that level
of "smartness" into a machine?  How does the machine know that one behavior
is more likely to work than the other?

So, the first point here is what is the lesson learned?  The answer to that
is, "what behavior has worked in the past".  AI is always about producing
the right behaviors.  That's why we have a brain - to control the behavior
of the body and that's what this is all about - producing the right
behavior.  Our lessons learned are the list of all behaviors we have used
in the past that did something good for us (and inversely - all the
behaviors we used in the past that did something bad for us, which we want
to avoid in the future).

But, how does the machine know if a behavior was good or bad?  How does it
evaluate the result to determine if the result of an action was good or
bad?

The bottom line to that question is that there is no universally correct
definition of good and bad.  There is no universal way to build a machine
which looks at sensory data and reports "good" or "bad".  The answer is,
that good and bad must be hard coded into the machine - and the way it's
hard coded is totally arbitrary.

We can build a light seeking robot by hard coding in a definition of "good"
which means "try to maximize the signal coming from the light sensory".
And we can do just the inverse, by building a light avoiding robot which
has just the inverse definition of "good" built in to it - it's good would
be to try and minimize that signal value instead of maximizing it.

This is key to understand because it shows that there is no single
universal definition of good and bad.  Good and bad is defined by stuff
hard-coded into the intelligent hardware.
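
To make that concrete, here is a tiny runnable sketch in Python.  The
one-dimensional world, the light_level() function, and the greedy controller
are all my own illustrative assumptions, not anything from this thread; the
only point is that identical control code, fed two opposite hard-coded
reward functions, produces two opposite robots.

    # Hypothetical toy world: the robot lives on a line, light is brightest at 0.
    def light_level(position):
        return max(0.0, 10.0 - abs(position))

    def seek_light(pos):            # one arbitrary hard-coded definition of "good"
        return light_level(pos)

    def avoid_light(pos):           # the equally valid inverse definition
        return -light_level(pos)

    def run(reward_fn, start=5.0, steps=50):
        """The same greedy controller; only the hard-coded reward differs."""
        pos = start
        for _ in range(steps):
            # Take whichever single step the reward function scores higher.
            pos += max((-1, +1), key=lambda act: reward_fn(pos + act))
        return pos

    print("light seeker ends near", run(seek_light))    # parks at the bright spot
    print("light avoider ends near", run(avoid_light))  # wanders off into the dark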

So, in order for an AI machine to evaluate the success of a behavior, it
must have some internal system for doing that - for reporting the success
or failure of a behavior.

There are an infinite number of ways to do that.  You can define a goal
which could be measured by the hardware, and then use a measure of
closeness to the goal as the measure of success for example.

The most general way however, and the one I think we have to use to create
true human level AI, is generic reinforcement learning.  This approach
actually makes the problem a lot harder, but it's the approach I think
these birds and humans use.  With reinforcement learning, all concepts of
good and bad are implemented by hardware which produces a single reward
signal which is fed to the learning machine.  The machine is then
constructed so as to try and maximize the long term value of this reward
signal.

The first big problem with trying to build a system driven by a reward
signal is the credit assignment problem.  Something the machine does at one
point in time, might cause an increase (good) or decrease (bad) to the
reward signal at some point in the distant future.  So when the reward
signal increases, which of the behaviors in the past 10 years that this
machine produced was the cause of that increase?  The way we attack that
problem is to build reward prediction into the system.  The system must be
able to predict future rewards.  That is, when something changes in the
environment, it must be able to sense that the change just made things look
better for the future, or worse for the future.

If you have a good reward predictor in the hardware, then you don't have to
wait seconds, hours, or days, before the hardware can determine if
something it just did was good or bad.  If a behavior causes the reward
prediction system to indicate that things just got better, then it can use
this fact to judge the value of the behaviors just produced.

To get a feel for what this must do (and how it can work), let me give an
example.  If getting more light to fall on a robot generates more reward,
then we need the hardware to predict when something in the environment has
changed to indicate that the probability of getting more rewards in the
future has increased.  Let's say the robot lives in a house, and the most
light in the house comes through a window from the sun.  The window has a
shade that can be open or closed.  Every time the robot gets lots of light,
the shade is open, and when the shade is closed, it never gets much light.
It needs to be able to recognize the state of the shade (open or closed),
and associate this state of the environment with "higher rewards" and
"lower rewards".  So, the reward prediction system works by tracking past
rewards as a function of the state of the environment.  A closed shade
produces a low prediction for future rewards, whereas an open shade is a
predictor of higher future rewards.  With a reward prediction system like
this, if the robot does something that causes the shade to change from
closed to open, that will cause an increase in the odds that there will be
future rewards.  This increase in the prediction needs to be used to measure
the value of the behaviors that caused the shade to open.  Even if it's at
night, or a cloudy day where opening the shade doesn't cause more light at
that point in time, the prediction system still reports that the odds of
getting more light just went up, because the shade switched from closed to
open.  That can allow the robot to learn how to open the shades by
reinforcement, even in the middle of the night, when the behavior it
produced has no short term effect on real rewards.  The real reward will
come the next day when the sun comes up.  So that's the idea behind reward
prediction. It makes it possible to learn the value of a behavior, even
when the behavior causes no short term change to real rewards - even if the
behavior causes a short term drop in rewards.

So, without solving the problem of how to build a good reward prediction
system, just assume we have one (we can put that on the todo list).
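
Just to pin down what such a predictor could look like, here is a minimal
sketch in Python.  It is ordinary tabular TD(0) value estimation over a toy
two-state world (shade open / shade closed); the states, the numbers, and
the random "behavior" are my own illustrative assumptions, not a description
of any actual system discussed here.

    import random

    GAMMA, ALPHA = 0.9, 0.1
    value = {"shade_open": 0.0, "shade_closed": 0.0}   # predicted future reward per state

    def reward(state):
        # Hard-coded "good": light only arrives while the shade is open,
        # and only some of the time (night, clouds).
        return 1.0 if state == "shade_open" and random.random() < 0.5 else 0.0

    state = "shade_closed"
    for _ in range(5000):
        r = reward(state)                              # light collected in this state
        next_state = random.choice(["shade_open", "shade_closed"])
        # TD(0): nudge the prediction toward reward + discounted next prediction.
        value[state] += ALPHA * (r + GAMMA * value[next_state] - value[state])
        state = next_state

    print(value)   # "shade_open" ends up with the clearly higher prediction

The jump in predicted value at the moment the state flips from closed to
open is exactly the kind of "things just got better" signal that can be used
to credit the behavior which opened the shade, even at night when no real
reward arrives.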

So, built into the hardware, we need a definition of good and bad in the
form of custom hardware which monitors the environment and produces a
reward signal (like the light level example I gave above could be the
reward signal to motivate a system to seek light).  And we need a system
for translating sensory data into an overall reward prediction based on the
state of the environment and/or the state of the robot.  So as the state of
the environment changes, we can make a prediction about whether that change
was good or bad for the robot without waiting for a reward which might not
come for hours, or years.

So the answer to "what are the lessons learned", becomes information about
what behaviors produced a change to the environment which caused things to
get better, and which behaviors produced a change to the environment to
cause things to get worse.  In short, the system must track everything the
robot does, and record how well this behavior has worked in the past.

From here, we move on to the abstraction problem.  Remember first that the
prime problem the hardware is trying to solve, is "what behavior should I
use at this point in time?".  Should it stand still, or lift the right arm,
or move backwards?  How does it figure this out?

The high level answer is that it uses its sensors to determine the state of
the environment, and then compares the current state, to past states (that
it recorded), and figures out what has worked well in the past, for a
similar state of the environment.  But with a high dimension sensor system,
this problem becomes very hard.  That is, if the sensory system gives us
lots of bits of information about a complex environment, it's unlikely that
the same state will ever repeat itself in the entire life of the robot.

So, since it's never seen the same state before, it has to use some system
to figure out what past states it's had experience with were close to the
current state, and make some sort of judgment about what behavior to try
for this state.

If some past state X is close to the current state, and there was a
behavior used in that state that produced an expected reward of 10,
and another close state Y with a behavior that produced an expected reward
of 8, which should it use?  What if the Y state is more similar to the
current state than the X state?  How does it pick between the Y behavior,
which doesn't produce as much reward but is closer to the current state,
and the X behavior, which produced more reward but is less similar to the
current state (and as such can be expected not to work as well this time)?

How the system answers these closeness tests is the foundation of
abstraction.

And here's how I think at the high level it needs to be solved....

As you said above:

> The perceptual field must first be resolved into "things."

But more precisely, I think it should be decoded into a very large set of
micro-features.

If you have a high dimension sensory system (like a million light sensors
(aka pixels) each with its own input signal), then the perceptual field is
already divided into micro features.  Each pixel input is a "thing" which
the sensory system has detected and provided a signal to represent.  So the
perceptual field is resolved into "things" before we do anything.

But these aren't the "things" we normally think about.  We think about
things like "the dog in front of us", or the "the face", or "the door".

I believe the issue here is that the raw sensory signals have lots of
redundancy in them.  That is, they have non-zero correlations: if you know
the value of one sensory signal, it will be predictive of many other
sensory signals.  In other words, the sensory signals are not statistically
independent of one another.

If for example you know that the signal for one pixel is indicating a very
bright light level, you can predict from this that the pixels that lie
next to it in the eye will be producing an above average level of light.
Aka, most of the time, when you detect a bright pixel, the pixels next to it
will be bright as well.

These correlations exist not only in the spatial domain (signal to signal
at the same point in time), but also in the temporal domain (a value now is
predictive of what is to come).

I believe, that if you transform these signals, so as to remove those
correlations, both spatially, and temporally, so that you end up with a set
of signals that have little or no correlations, that you will have done
just what you say needs to be done.  That is, resolve the perceptual field
into "things".

Or more accurately, "micro features".  I don't know how to do this, but I
believe very strongly it's possible.  There are statistical techniques that
are well understood that do things conceptually very similar to this - like
Principal Components and Factor Analysis:

http://www.statsoft.com/textbook/stfacan.html

They don't however work in the temporal domain - only in the spatial
domain.  A similar technique needs to be found to perform a similar
transformation to real time data streams in both the spatial and temporal
domains in order to solve this problem of resolving the perceptual field into
"Things".

The way I believe it will work is to transform any set of sensory data,
into a set of N micro-features - all with roughly equal probability and
roughly equal cross correlations.  The number N is limited only by the size
of the network which performs this translation.  A very large network, will
transform the sensory signals into a much finer grained set of micro
features.

This type of function will not only transform the signals into micro
features, it solves the data merging question as well - that is, the
question of how sensory data from separate modalities become merged.  That
is, if we hear a dog bark, or see a dog, the brain knows that these
different sensory signals both represent "dog".  It has to merge these
sensory signals together to form the "Dog" signal (or to form all the
different micro-features that represent different aspects of "dog").

I believe that the data transformations we see happening in the sensory
cortex (such as center surround, edge, and motion detection) are driven by
just this sort of algorithm.

There was some work talked about here in c.a.p. a few years back which
approached this same problem as a problem of information maximization.  The
work was decades old.  I don't remember the name of the researcher - Dan
might.  It was the same sort of idea, but I don't think he had the
algorithm correct and I don't remember clearly, but I think he failed to
solve the problem in the temporal domain as well.

Trying to understand how to solve this transformation problem is my main
current interest.  I think it's the one BIG key missing from the solution
to AI.

I believe this can be done in a multilayer ANN where what you end up
with is each node of the network representing one of these "micro"
features. Each layer in the network helps remove more of the redundancy and
reduces the correlations in the signals.

This clean set of micro features becomes the system's representation of the
current state of the environment.  The activation level of each
micro-feature signal is in effect a measure of how much that feature (aka
"thing") exists in the current state of the environment.  These are not
absolute binary yes/no indicators.  They are closer to probability measures
or strength measures for the feature.  Because the signals were produced by
removing temporal correlations as well, they will be features that describe
consistency over time as well - such as motion. So instead of just features
that mean "circle" they will be full of features that mean things like
"circle moving to the right at a slow speed" as well. It's not just like
extracting features from a photograph, it's like extracting features from a
movie.

All the features that represent some current feature of the environment
will be active at the same time.  All the features that don't exist in the
current state of the environment will be inactive.  If there are no dogs
around, all the dog related micro features will be silent (unless the
person is "thinking about" dogs - but we will ignore how that works for
now).

Let me copy some of your words from above so I can comment on them here:

> The
> system should not make a decision based on a single common feature. More
> likely, encountering a partial feature set in the environment will
> automatically trigger retrieval of the "best fitting" template,

If the environment contains a dog, but you can only see part of the dog,
what I believe should happen here is that the dog micro-features that
represent the part that can be seen will be activated, and in addition, the
higher level (more abstract micro features) such as "dog" or "black dog" or
"small black dog" will also be activated.

If you can't see the dog-tail, the dog-tail micro-features will not
activate.  However, the algorithm that is at work transforming raw sensory
data into these micro features will need to use clues from all related
features (both with feed forward and feedback paths in the network) to
correctly activate the features.  For example, if the system has activated
some of the high level "dog" micro-features, those will be used to increase
the probability of the "dog-tail" micro-feature activating, because dog
is a temporal predictor of "dog-tail".  That is, if we believe there is a
dog in the environment, then the thing we see that looks a little like a
cat tail and a little like a dog tail is probably a dog tail.  It's the
expected correlations between these features that must be used to create
signals that have no (or fewer) correlations.

So now we return to the problem of abstraction and the problem of how the
system determines what behavior to select at this point in time when the
current state of the environment is unlike anything it's seen in the past.

The state of the environment is represented by these micro features (we
might have parsed the state into 100 million micro features in a large
network).  So how do we know if this state is "close" to some state we have
seen in the past?  By the number of micro features it has in common of
course.  The more micro features the current state has in common with some
past state, the more likely the behaviors that worked in the past state to
produce higher rewards, will work in this state.

So lets talk about behavior production.

The idea is that each of these micro-features will contribute their "vote"
as to what behavior to produce.  So some implementation must be found which
maps all these micro-features, to behaviors.

One way to think about this (which I used to think about years ago but have
greatly improved on since then) is to think of this set of micro features
as a large set of numbers which represents a state vector for the
environment. If we have what amounts to an input "state vector", and we
need an "output state vector" which is a list of values for each output line
at this point in time, we can think of the problem simply as a matrix
multiply.  We map the micro-feature state vector into an output vector
using a mapping matrix.

If we have 100 million micro features, and 1000 output signals (lots of DOF
on this robot), we would need a 100 million x 1000 mapping matrix.  If the
columns represent the micro features, and rows the outputs, we can think of
each column as a given micro-feature's vote for what outputs to produce
when this micro-feature is active in the environment.  There are plenty of
other ways to map micro-features to outputs, but I'm giving this simple
linear mapping as an example to show one way to understand the idea of each
micro feature "voting" for what output should be produced.  This linear
mapping actually doesn't work well but it's easy to understand.

You can also think of this output matrix as a single level ANN with one
node for each output with 100 million inputs for each node - one for each
micro feature.
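
Here is a toy numpy sketch of that linear "voting" mapping, scaled down from
100 million x 1000 to something that runs instantly (the sizes, random
weights, and sparse activation pattern are illustrative assumptions only):

    import numpy as np

    # Columns of W are per-micro-feature "votes"; rows are output (motor) signals.
    rng = np.random.default_rng(1)
    N_FEATURES, N_OUTPUTS = 1000, 8                  # toy sizes

    W = 0.01 * rng.normal(size=(N_OUTPUTS, N_FEATURES))

    state = np.zeros(N_FEATURES)                     # current micro-feature activations
    active = rng.choice(N_FEATURES, size=30, replace=False)
    state[active] = rng.random(30)                   # only a few features are "present"

    outputs = W @ state      # each active feature adds its weighted vote to every output
    print(outputs.round(3))  # one value per output line at this instant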

The idea here is that we use the "lesson learned" (aka how the reward
prediction system changes its prediction of the future) to train this
output mapping function.  When the prediction system reports that the
future is looking worse, we want to punish the behaviors recently used.
That is, only the micro-features which were recently active are punished or
rewarded.  And the amount of punishment or reward needs to relate to how
much they contributed to the "vote" about what to do.  The strongest micro
features will contribute the strongest vote, and will receive the most
"learning". Most micro-features are silent most the time because most
micro-features the system can detect are not part of the current
environment.  So the learning will typically be applied only to a small
subset of all the micro features - just the features that represent the
environmental features that are part of the current environment.
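
One way to read that training rule is as a reward-modulated outer-product
update applied only to the recently active features.  The sketch below is my
own reading under that assumption (a crude three-factor style rule on the
same toy setup as the voting sketch), not a statement of how any existing
system actually does it:

    import numpy as np

    rng = np.random.default_rng(2)
    N_FEATURES, N_OUTPUTS, LR = 1000, 8, 0.05

    W = 0.01 * rng.normal(size=(N_OUTPUTS, N_FEATURES))
    state = np.zeros(N_FEATURES)
    active = rng.choice(N_FEATURES, size=30, replace=False)
    state[active] = rng.random(30)

    outputs = W @ state                  # the behavior that was just produced
    prediction_change = +0.7             # reward predictor says things just got better

    # Credit assignment: only the currently active (nonzero) features are touched,
    # and stronger activations receive proportionally more of the credit or blame.
    W += LR * prediction_change * np.outer(outputs, state)

With a negative prediction_change the same line pushes the active features'
votes away from the outputs they just produced.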

The more micro-features one state of the environment has in common with
another state, the more likely the system will produce the same
behavior.

If a string hanging from a tree that smells like food creates a collection
of micro-features that have a lot in common with a previous environment
where the bird was eating a snake that it was holding in its claw as it
perched on a branch, the bird would be likely to perform the same
behavior that "worked" for eating the snake.

All the activation levels of these micro states are the system's current
"awareness".

Each of these micro functions maps to some "opinion" about what behavior
the system should produce when that feature is active in the environment.
These opinions are a sum total of past "lessons learned" where that micro
feature was active in an environment.  So for a given current state, which
is described by all it's current "micro features", the sum total of the
options is the sum total of all lessons learned in the past, which has any
of these micro features active.  So without having to do any search or
look-up though a huge set of "templates", such a system can simply and
quickly "compute an opinion" about what it should do at a given instant in
time.  But that "opinion" will be a marge summary of every past lesson
learned which had _anything_ in common with the same environment - where
the "lessons learned" are biased based on how much the environments had in
common.

Because micro-features show up in a lot of different environments, they
will accumulate a lot of training in short order (which increases the
learning speed).  If you tried to learn how to play chess, and had one
"state" variable for each game board, you would be faced with the learning
problem that you would only learn a little bit each time a game passed
through that board position.  You would have to play billions and billions
of games to see that same board position enough times to collect useful
statistics about what move to make. But with a large set of micro-features
for each board position, you learn a little bit about how to respond to
that board position every time the same micro states (abstract features of
the board position) show up in other games.

So, in summary..

1) The key idea is that the purpose of the system is behavior selection -
deciding what I need to do now by direct computation and not by a search
through a list.

2) We use a reward signal combined with a reward predictions system to
evaluate the success of all behaviors (learning is on-line - it never
stops).  BTW, though I didn't talk about it, the micro features and their
"lessons learned" are also used to create this prediction system.

3) The raw sensory signals (which are high dimension non-markov signals
BTW), must be transformed into a large set of micro features by removing as
much spatial and temporal correlations as possible.  This is actually an
attempt to turn them into Markov signals (or "closer to" Markov signals
since producing true Markov signals for a high dimension complex
environment is possible).

4) Behavior is produced as a mapping from micro features to behaviors.
Each micro-feature in effect "votes" for what it thinks is best to do.

5) lessons learned (how the prediction of future rewards go up and down in
response to behaviors produced) are applied to each micro-feature in real
time (on-line).

Turning the above high level hand-waving overview of how to approach the
problem you talked about into a working system is, I believe, how AI is
going to be solved.

My pulse sorting networks which I talked about in another post in this
series of threads in c.a.p. are one specific implementation of these high
level ideas that I think show great potential. The one thing I definitely
don't have working correctly however is this transformation from raw
sensory signals into a valid set of micro features.  I have a few
transformation functions working, but they are wrong for multiple reasons -
but even with the wrong transformation functions, the network can still
learn some interesting things.

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
11/24/2008 7:41:08 AM
casey <jgkjcasey@yahoo.com.au> wrote:
> On Nov 22, 11:21 am, c...@kcwc.com (Curt Welch) wrote:
>
> > I take reality to be reality (it is what it is and
> > has nothing to do with us) and our model to be our
> > understanding of what it is (right or wrong).
> >
> >
> > An illusion is a persistent difference between
> > reality and the brain's  model of reality.
>
> But we cannot know anything about Reality we can only
> know our mental models of reality. How do you know if
> something is an illusion? How, for example, would you
> demonstrate that mach bands are an illusion?

Because they have the word "illusion" after their name?

If what you suggest is true, we wouldn't be having this conversation because
there would be no such thing as an illusion since, according to your idea,
it's impossible for us to know if something is an illusion.

We discover illusions because they create inconsistencies in our models of
reality.  We discover illusions because we in fact are not limited to only
one view of reality.  We have many different views.  If we see
something with our eyes (one view) we can verify the nature of what we see
by closing our eyes and touching it with our fingers.  If our fingers do
not confirm what our eyes saw, we know something is amiss - that we are
dealing with some form of an illusion.  I can then ask you to use your eyes
and report to me what you see.  That gives me a third view of the same
feature of reality coming into me through my ears.  I can use a camera and
have it take a picture, and then look at the results.  Again, this creates
yet another view of that same feature.

We understand the true nature of reality by creating a model which explains
all the views we can collect.

The only aspects of reality we can never know, are the aspects that we can
not view at all.  There is good evidence to suggest that there are such
elements of reality, but as long as we can't view their effects through any
sensory system, we can never know whether it exists or not.  Those are the
aspects of reality we can never know.

> > Which just gets me back to the issue that you have
> > never shown a sign of understanding what I'm trying
> > to communicate to you.
>
> Has anyone understood the issues you raise :)

Hard to say.  Perhaps not.  There are many who understand the fundamental
ideas I argue, but the subtle issue of how language makes people
believe in distortions that don't exist I've never seen anyone fully grasp.
But sometimes that's because the people that understand don't bother to
respond to my posts and the people I end up talking to are only the ones
that don't understand or don't agree.

> > The brain also has mass and pushes against the ground
> > and blocks light. These are things the brain does.
>
> I never meant that everything the brain does is called
> mind. Indeed I don't even assume all neural activity
> is involved in what we might call a Mind Activity.

Yes, I know you know this.  I was not bringing that up to suggest you
didn't know it. I was bringing it up to help you see the contradiction in
views that indicates we are dealing with an illusion.  The way we talk
about the mind is different than the way we talk about everything else -
but yet there is no evidence to suggest these things we associate with the
mind are different from anything else.  It's just left over baggage from
the days when dualism was assumed true by default.

> > ... Why is it that the brain is the only object in
> > the universe we have a word which means "all the
> > stuff this object does - but not really all the
> > stuff but just the stuff I can sense about it
> > without using my eyes ears nose fingers or tongue".
>
> I have explained that many times and indeed when not
> declaring it being due to "training" and "conditioning"
> you have also hit the correct reason we have this
> illusion of the Mind as being something extra.
>
> > ... we have verbs for what the brain does.  Words
> > like "thought", "memory", "idea", "think", "recall",
> > "envision", "sense", "understand", "dream". Those
> > are the words that are parallels with run and walk.
>
> Well not exactly parallel. I don't know if you read
> the post in which I made reference to SHRDLU?

Either I've not yet read it or I missed it.  There are a lot here at the
moment I haven't yet read.  Or maybe I read it and just forgot???

> Although
> it was a virtual reality robot it could have been
> implemented in hardware where it could move its arms
> to manipulate blocks. I tried to show in this system
> what parts were more like "conscious" processes and
> what parts were not "conscious" processes. You have
> asked what was the difference between a "conscious"
> process and an "unconscious" process and I tried to
> flesh out some of the differences in terms of content.

Ok, I don't remember that precisely but it does ring a bell.

My issue is mostly that if you think consciousness is a special type of
process (and are trying to explain what type of process it is), you don't
understand that it's an illusion.  That is, there is nothing so special
about humans that deserves all the baloney that people assign to this
concept of consciousness.  The only thing to understand about consciousness
is that there is nothing there to understand.  Everything people point to
when they try to describe what they are talking about, are characteristics
that already exist in all our robots.  But yet, they declare, NO IT
DOESN'T!  And the only evidence to support that it doesn't is them saying
"NO IT DOESN'T!".

There is not a single class of process that exists in the human body
creating our intelligent behavior that doesn't exist in some form in all
our computer controlled robots.

If you think consciousness is something we need to "go figure out" or
"discover", you don't get my point.

> > Trying to argue that mind is just a verb that
> > describes an action of the brain doesn't explain
> > the extremely odd way we use it in the English
> > language.
>
> I think we both agree on the fact that many have
> the belief in a mind or soul as a separate entity
> to the brain. And I think we both agree that there
> is no evidence for such an entity and most people
> who know about our current knowledge of brains
> realize that the contents of the mind correlate
> 100% with brain activity.

Yes, except the simple fact you said "correlate" shows you are thinking
about it wrong.  It is meaningless to talk about how something correlates
with itself.  Correlation only exists between two separate events.  You
would not, for example, talk about how the sun rising in the morning has
100% correlation with the sun rising in the morning - but yet that's what
you just wrote above.

> I don't see the word "mind" as a single action rather
> it is the word that covers all the things you mention
> above such as "thought", "memory", "idea" and so on,

Yes, that's exactly how we use it which is fine.  It's odd as hell and we
don't use language like that for anything else in the universe (as far
as I have been able to figure out), but we continue to use the word mind
because it's common language left over from the time when everyone assumed
dualism was a fact.  The mind is the thing that existed in that other
domain which produced all those thoughts.

It's like living next to a mountain which we can't cross and giving it a
name - Mount Brain.  Then we travel 24,000 miles around the world and come
to another mountain and call it Mount Mind.  It's the same mountain, but
because we don't understand the world is round, we don't have a clue it's
the same mountain.  We think Mount Brain and Mount Mind are 24,000 miles
apart.  They don't even look the same from the two different sides so it
never occurs to anyone that they are in fact the same.

This is the situation we are in with the mind body problem.  We gave it two
different names because we have no direct data which shows us it's the same
mountain.

But along comes a scientist, and he figures out the world is round.  And by
using clocks and the stars and the sun, he's able to figure out how big the
earth is.  And he calculates that Mount Brain and Mount Mind must be the
same mountain.  So he stops talking about it as two mountains and talks
about it as one.  But everyone else still thinks it's two mountains because
they don't understand the complex logic of the round earth theory that
proves they are in fact the same.  They just look at the two Mountains and
say "they don't look the same, and they are 24,000 miles apart - you are
stupid for even suggesting they are the same - I don't care what all that
fancy theory of yours says".

> all of which make up possible contents of the mind.
> However other effects the brain has due to its other
> properties such as mass are not part of the brain
> behaviors we call mind.

Yes, that's because we can't sense those other effects with the sensor
system we use to sense the "Mind" effects.

Actually, I take back what I said above.  We do have another simple example
of a single event which we have given two names because we sense the effect
with two different sensory modalities and, just like with the mind and the brain,
the brain has failed to merge the two sensory modalities into a single
"thing" for us.  The example is thunder and lightning.  These are two nouns
we use to label a single thing in the universe - the discharge of
electricity in the atmosphere.

Our brain parses them as different because of the temporal delay that
happens with the effects.  They don't always happen close enough together
in time to allow the brain to consider them a single "thing".  And, because
our brain defines them as two different things, we perceive them as
different things, and make up separate words for each.

Our brain parses the body and mind as being different because there is NO
temporal correlation in the sensory data to cause the brain to merge the
signals together.

> The firing of a neuron isn't what we call a conscious
> process even though a conscious process is entirely
> due to the activity of firing neurons.

But it is, and that's your problem.  You are looking for something else
when there is nothing else to look for.

> When you say
> a neuron (or rock) is "conscious" it has no meaning
> in this context unless you are still seeing the mind
> as some kind of entity or extra substance rather
> than a type of process.

No, I'm saying that by what everyone talks about as examples of what
consciousness is, a rock fits the description.   It only fails to fit when
they simply say "no - rocks are not conscious, you idiot".

The point is not to figure out what consciousness is, the point is to
realize that it's an illusion and not something we need to go searching
for.

The only thing there is to search for, is the cause of the illusion - which
I've explained many many times here.

> The action of an NOR gate is not what we call a
> "finding the square root of a number" process even
> though the "finding the square root of a number"
> process involves the action of many NOR gates.

That's right, that process, unlike the one you want to call "consciousness"
is actually defined.  The process you are looking for is like a pink
elephant - it doesn't exist.

> A "conscious" process is the same as an "unconscious
> process" with respect to them both involving the
> firing of neurons and most likely the same neurons.

That's something I've never seen anyone actually talk about.  What does the
brain activity of an unconscious person look like on something like an
fMRI?

> The content of a "conscious" process is made up of
> colors, feelings, thoughts and so on which are the
> result of the firing of neurons.

No, they _are_ the firing of neurons.  The contents of a conscious process
in a human brain is the firing of neurons.

> However a firing
> neuron does not have feelings or thoughts as its
> content even though it is a subprocess required
> for the feelings and thoughts to take place in the
> system as a whole.

But it does. There is no indication that it doesn't.

> I think you keep assuming that others are using the
> word Mind to refer to an entity or substance and
> that is why you came up with your conscious rocks.

Others are using the word Mind like it's been used for hundreds of years.
It's used as the name of the thing that contains the thoughts of the soul.

When you translate it into a language where dualism has been rejected, the
name doesn't mean "special type of process which exists in the brain that
causes phenomenal experience".  It means brain.

You are clinging to the idea of mind == type-of-process because even though
you have accepted the faith of materialism, you still cling to the idea
that consciousness is more than the illusion that our thoughts are
something other than just the activity of the brain.  You think it's
something created by a special type of process in the brain like a
transformer creates a magnetic field or a light bulb creates light.

This stuff "created by" the special process doesn't exist.  And if nothing
is created, then what is it that you think you could possibly be talking
about which makes our behavior any different than any robot or any
computer?

Is playing chess a test of consciousness?  How about talking?  How about
running in fear from something?  How about being able to see the red block
so you can pick it up when someone asks you to?  How about a thought
process which allows one to find a new solution to playing back-gammon?

These are all examples of things our computers and robots are already
doing.  There is no single type of behavior humans produce that I can't
show you an example of a robot producing the same _type_ of behavior.

Of course Alpha just shakes his head in disbelief that I could be so stupid
to say such things, but the only evidence he (or anyone) has that I'm
wrong, is that he says I'm wrong.  He has no evidence.

You also have no evidence that this myth you call "the conscious process" is
real.  You can't define it, you don't know it when you see it happening in
a computer.  Why do you continue to put so much faith in something that
doesn't exist?

You say you have the magic power to "sense consciousness" in yourself.  How do
you know my digital camera doesn't have the same power?

If I make my computer print out - "I can sense consciousness in myself so I
know I'm conscious, but I'm not sure about those humans", would that make
the computer conscious?  It's the test that you want me to swallow as
"proof" that you have this magic process running in you.

The point of my never ending discussion with you is that even though you
have rejected dualism and believe in materialism, you still think dualistic
consciousness is real - you just changed its name from "soul" to "process"
and by doing so, since "process" is a material thing, you believe you have
resolved any conflict between materialism and consciousness.

And at the same time, since you believe in dualistic consciousness, you
have no issue in thinking that "you" are not a human body, but a "pattern
of behavior", or "a process".  You are just cheating by doing this.  You
haven't rejected dualism, you have just used materialist words incorrectly
to keep your belief in dualism alive while trying to believe in materialism
at the same time.

It's a fine consistent story you have created for yourself - but it's just
not true and it's not real materialism.  Real materialism rejects dualistic
ideas of consciousness.  Real materialism says the body and what it does is
all there is and that there is nothing else there to talk about that could
be called "consciousness".  Or, at best, "consciousness" means something
uninteresting and non-descriptive like "functioning normally".

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
11/25/2008 4:46:39 AM

On Nov 24, 8:46 pm, c...@kcwc.com (Curt Welch) wrote:
> casey <jgkjca...@yahoo.com.au> wrote:
>> On Nov 22, 11:21 am, c...@kcwc.com (Curt Welch) wrote:
>>
> >> I take reality to be reality (it is what it is and
> >> has nothing to do with us) and our model to be our
> >> understanding of what it is (right or wrong).
>>>
>>>
> >> An illusion is a persistent difference between
> >> reality and the brain's  model of reality.
>>
>>
>> But we cannot know anything about Reality we can only
>> know our mental models of reality. How do you know if
>> something is an illusion? How, for example, would you
>> demonstrate that mach bands are an illusion?
>
>
> Because they have the word "illusion" after their name?
>
>
> If what you suggest is true, we wouldn't be having this
> conversation because there would be no such thing as an
> illusion since, according to your idea, it's impossible
> for us to know if something is an illusion.


I NEVER said it was impossible to know if something was
an illusion. I asked how you (we) can demonstrate that
the mach bands are an illusion which implies that it
can be done. The wording was meant in the same sense
as me asking how you find the square root of a
number. I wasn't implying it couldn't be done.


> We discover illusions because they create inconsistencies
> in our models of reality.  We discover illusions because
> we in fact are not limited to only one view of reality.
> We have many many different views.  If we see something
> with our eyes (one view) we can verify the nature of what
> we see by closing our eyes and touching it with our fingers.
> If our fingers do not confirm what our eyes saw, we know
> something is amiss - that we are dealing with some form
> of an illusion.  I can then ask you to use your eyes and
> report to me what you see.  That gives me a third view of
> the same feature of reality coming into me through my ears.
> I can use a camera and have it take a picture, and then
> look at the results.  Again, this creates yet another
> view of that same feature.
>
>
> We understand the true nature of reality by creating a
> model which explains all the views we can collect.


Go to the top of the class Curt :)

There is a lovely example of this with the illusion called
the phantom hand. You can find a short article on it in
the Scientific American MIND magazine volume 14, number 5
or maybe find it explained somewhere on the internet.

As the article says, these illusions demonstrate two
important principles underlying perception.

1. Perception is based largely on extracting statistical
correlations from the sensory input.

2. Mental mechanisms that extract these correlations
are based on automatic processes that are relatively
unsusceptible to higher-level cognitive processes.



> The way we talk about the mind is different than the
> way we talk about everything else - but yet there is
> no evidence to suggest these things we associate with
> the mind are different from anything else.  It's just
> left over baggage from the days when dualism was
> assumed true by default.


What you miss is that the "everything else" you talk about
is made out of the contents of the mind process and that
is why we talk about it as if it was different in some
way because in one sense it is. But the difference is not
in the substance but in the viewpoint. I agree that there
is no evidence anything but a physical brain consisting
of firing neurons is involved. I believe the difference
exists even in a simple program like SHRDLU.


> My issue is mostly that if you think consciousness is a
> special type of process (and are trying to explain what
> type of process it is), you don't understand that it's
> an illusion.


I don't think it is an illusion that the contents of my
conscious processes consist of objects in a 3D space but
they do not consist of the firing neurons that allow such
a process to take place in the first place. If you think
you can actually experience the firing of neurons I would
suspect you were the one suffering the illusion; although
firing neurons are indeed what is happening at the lowest
level of the process, you don't actually experience them.


> There is not a single class of process that exists in
> the human body creating our intelligent behavior that
> doesn't exist in some form in all our computer
> controlled robots.


It depends what you mean by that. If a process is taken
to be, say, a subroutine in a program then some programs
have processes taking place that others don't. Some of
the subroutines are common to both some are not. I would
not be surprised to find the brain had processes that we
have NOT YET implemented in a machine.


> If you think consciousness is something we need to
> "go figure out" or "discover", you don't get my point.


I think your point is that there is no evidence that
there is anything more than a brain full of firing
neurons when we talk about being conscious. And I agree
with that.


>>> Trying to argue that mind is just a verb that
>>> describes an action of the brain doesn't explain
>>> the extremely odd way we use it in the English
>>> language.
>>
>> I think we both agree on the fact that many have
>> the belief in a mind or soul as a separate entity
>> to the brain. And I think we both agree that there
>> is no evidence for such an entity and most people
>> who know about our current knowledge of brains
>> realize that the contents of the mind correlate
>> 100% with brain activity.
>
>
> Yes, except the simple fact you said "correlate"
> shows you are thinking about it wrong.  It is
> meaningless to talk about how something correlates
> with itself.  Correlation only exists between to
> separate events.  You would not, for example, talk
> about how the sun rising in the morning has 100%
> correlation with the sun rising in the morning -
> but yet that's what you just wrote above.


What correlates is two views of the same thing.

You call one of these views an illusion but really
it is just another view, a view from the inside
by the system that can have an inside view. It is
not a correlation of a circle with a circle; it is
a correlation between an oval and a different shaped
oval which indeed may be a circle in 3D space
but seen from different views. That is just an
analogy for what I am talking about; it is better
understood with reference to actual programs like
the SHRDLU program.


>> I don't see the word "mind" as a single action rather
>> it is the word that covers all the things you mention
>> above such as "thought", "memory", "idea" and so on,
>
>
> Yes, that's exactly how we use it which is fine.
>
> We gave it two different names because we have no
> direct data which shows us it's the same mountain.


In a sense that is true. However the two names are
still useful for they indicate what view is being
referred to. Are we talking about the neurons or
logic gates or are we talking about the high level
descriptions used by the program itself?


>> The firing of a neuron isn't what we call a conscious
>> process even though a conscious process is entirely
>> due to the activity of firing neurons.
>
>
> But it is, and that's your problem.  You are looking
> for something else when there is nothing else to look
> for.


No I am recognizing the viewpoints. One view is made
up of high level descriptions used by the brain or by
a program, the other view is of the firing neurons used
by the brain or the switching logic gates used by the
program.


>> A "conscious" process is the same as an "unconscious
>> process" with respect to them both involving the
>> firing of neurons and most likely the same neurons.
>
>
> That's something I've never seen anyone actually talk
> about.  What does the brain activity of an unconscious
> person look like on something like an fMRI?


I am not sure, but I suspect there is less activity
except when dreaming. When you dream the activity is
just as if you were awake. The difference is that there is
no connection to the body otherwise you would act out
your dreams or react to your sensory input.


>> The content of a "conscious" process is made up of
>> colors, feelings, thoughts and so on which are the
>> result of the firing of neurons.
>
>
> No, they _are_ the firing of neurons.  The contents
> of a conscious process in a human brain is the firing
> of neurons.


I think maybe you misunderstand what I am saying.
Yes the physical process consists of firing neurons
but the high level process which talks about feelings
and has thoughts does not make reference to those
firing neurons. Again I refer you to the SHRDLU
program, called Robbie in some text. I think you
still have this residual notion of others having
a belief in something extra being involved.


>> However a firing neuron does not have feelings
>> or thoughts as its content even though it is a
>> sub process required for the feelings and thoughts
>> to take place in the system as a whole.
>
>
> But it does. There is no indication that it doesn't.


There is no indication that a firing neuron has
thoughts or any ability for it to have such content
anymore than a single pixel can have the content
of a whole image.


> You also have no evidence that this myth you call
> "the conscious process" is real.  You can't define it,
> you don't know it when you see it happening in a
> computer.  Why do you continue to put so much faith
> in something that doesn't exist?


It is as if you haven't read anything I have written.
I *have* given a tentative definition of a conscious
process as being defined by the content of the process.
The content of the square root process is a number.
The content of the conscious process is the 3D world
you experience. The content of an unconscious process
of the neurons is an electrochemical pulse or chemical
gradient. I call one conscious because that is the
label talking human systems use when referring to this
process. True they once thought, and many still do,
that this content has a non physical existence, but
we know better don't we?


> You say you have the magic power to "sense consciousness"
> in yourself.  How do you know my digital camera doesn't
> have the same power?


I don't think I have ever said we have the magic or
otherwise power (ability) to "sense consciousness".

It could be that self awareness involves some kind
of model of the self, including the thinking part of
the self, as its content. But these are all physical
models or patterns in a physical brain.


> The point of my never ending discussion with you
> is that even though you have rejected dualism and
> believe in materialism, you still think dualistic
> consciousness is real - you just changed its name
> from "soul" to "process" and by doing so, since
> "process" is a material thing, you believe you have
> resolved any conflict between materialism and
> consciousness.


Of course I have resolved the conflict between the
two views of the brain. The one which is experienced
by the brain and the one which is involved in viewing
a brain via some external means. Why do you not see
that the experience people talk about as being some
kind of non-material "soul" turns out to be nothing
but a *material* process?


> And at the same time, since you believe in dualistic
> consciousness,


What in the world is "dualistic consciousness"?


> ... you have no issue in thinking that "you" are not
> a human body, but a "pattern of behavior", or "a process".
> You are just cheating by doing this.


Why is that cheating? To me it is an observable fact.
I see people acting like they have a Self when I also
see a process taking place. When the brain stops I
cannot observe any physical actions which could be
called a Self.

If you doubt a Self is derived from a person's actions
then you might ask why those who have had a relative
suffer brain damage claim that their relative is not
the same person. What they mean is that although it is
the same body the actions they call personality are
not the same anymore.


> It's a fine consistent story you have created for
> yourself - but it's just not true and it's not real
> materialism.  Real materialism rejects dualistic
> ideas of consciousness.  Real materialism says the
> body and what it does is all there is and that there
> is nothing else there to talk about that could be
> called "consciousness".


That seems more like a religious doctrine than any
kind of scientific model. Let there be no other God
(theory) except this one. All other physical theories
are heresy. They are not true Materialists (Christians)
for they do not believe in our dogma.

Well sorry Curt I am a free thinker and don't attach
myself to any religious ISM.



- John

0
casey
11/25/2008 10:25:19 AM
curt@kcwc.com (Curt Welch) wrote in
news:20081124024020.448$0c@newsreader.com: 

> At the high level the idea of what has to be done seems kinda obvious.
> When we do something that works for us, we want to apply that to
> something we run into in the future which is similar.  If our measure
> of "similar" is a good one, we might expect the things we try to work,
> if our measure of "similarity" turns out to be stupid, we will get
> nowhere.  For example, if I got food once when I was hungry by asking my
> mom "Can I have more food please", then the next time I'm hungry, I
> might try the same thing again. But if I'm alone in the jungle, and I
> walk around saying "Can I have more food please" it would be a fairly
> stupid thing to try.  We are smart enough to know that using that
> behavior, when we are hungry, and surrounded by trees, is pointless,
> but when hungry, and surrounded by humans that speak English, the
> behavior is a good one to try. But how do we build that level of
> "smartness" into a machine?  How does the machine know that one
> behavior is more likely to work than the other?

The behavior generated must be adapted to the problem situation, not the 
goal sought.

> So, the first point here is what is the lesson learned?  The answer to
> that is, "what behavior has worked in the past".

It has to be "behavior which has worked in the past in situations 
similar to this."

> But, how does the machine know if a behavior was good or bad?  How
> does it evaluate the result to determine if the result of an action
> was good or bad?
> 
> The bottom line to that question is that there is no universally
> correct definition of good and bad.  There is no universal way to
> build a machine which looks at sensory data and reports "good" or
> "bad".  The answer is, that good and bad must be hard coded into the
> machine - and the way it's hard coded is totally arbitrary.

You seem to be confusing two things --- the goals the system seeks, and 
the efficacy of strategies for attaining them. The goals could be hard-
wired, or they could be acquired via some low-level processes. That 
seems to be the case with humans --- some goals are innate (breathing, 
eating, etc.), while others are acquired (a taste for oysters, a 
fondness for Bach).

But "goal origination" and "goal ranking" are separate issues. We need 
only posit that the system has various goals, or things recognized as 
"goods," which are arranged hierarchically and which the system will 
expend energy and other resources attempting to secure. There is no *a 
priori* constraint on what may count as a goal, as long as it is 
something the system can obtain, in principle, via action in the 
environment. Goals are arbitrary.

The goal hierarchy is dynamic, i.e., when a goal is satisfied, it moves 
down or drops out of the hierarchy and another moves up. But the 
satisfaction gained from the first goal can gradually dissipate, and the 
desire for it begin to increase again, moving it back up in the 
hierarchy (recurring or cycling goals).

Which goal is pursued at a given moment depends not only upon its 
present rank in the hierarchy, but upon the opportunities currently 
presented by the environment. The environment is also dynamic, and 
presents opportunities more or less at random. The system must calculate 
a "payoff matrix" in which its goals form one axis and the state of the 
environment the other. The system pursues the goal with the highest 
expected payoff in the current environment. To calculate that payoff, of 
course, the system must be able to assess the difficulty of overcoming 
any obstacles in the way of the goal. (That is where the "cognitive 
model" of the environment comes in --- the system can "run scenarios" 
using the model, and thereby assess the efficacy of various strategies 
for surmounting the obstacles). 
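
A minimal Python sketch of that payoff calculation, assuming the difficulty
estimates have already been produced by running scenarios on the world
model (the goal names, values, and difficulty numbers below are invented
for illustration):

def expected_payoff(goal_value, difficulty):
    """Value of a goal discounted by how hard the current environment
    makes it to reach.  Difficulty (0..1) would come from running
    scenarios on the world model; here it is just a given number."""
    return goal_value * (1.0 - difficulty)

def pick_working_goal(goal_hierarchy, obstacle_difficulty):
    """goal_hierarchy: dict goal -> current value/rank (dynamic).
    obstacle_difficulty: dict goal -> estimated difficulty of the
    obstacles the current environment puts in the way of that goal."""
    payoff_matrix = {g: expected_payoff(v, obstacle_difficulty.get(g, 1.0))
                     for g, v in goal_hierarchy.items()}
    return max(payoff_matrix, key=payoff_matrix.get)

# Hunger ranks higher, but food is hard to reach right now, so the
# easier goal wins the current payoff calculation.
goals = {"eat": 0.9, "rest": 0.4}
difficulty = {"eat": 0.8, "rest": 0.1}
print(pick_working_goal(goals, difficulty))    # -> rest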

> This is key to understand because it shows that there is no single
> universal definition of good and bad.  Good and bad is defined by
> stuff hard-coded into the intelligent hardware.

I agree you can't define "good" and "bad" for goals, but you can define 
it for strategies and behaviors, relative to goals.

> The most general way however, and the one I think we have to use to
> create true human level AI, is generic reinforcement learning.  This
> approach actually makes the problem a lot harder, but it's the
> approach I think these birds and humans use.  With reinforcement
> learning, all concepts of good and bad are implemented by hardware
> which produces a single reward signal which is fed to the learning
> machine.  The machine is then constructed so as to try and maximize
> the long term value of this reward signal.

Ok. But the reward signal would be generated by the "goal engine" (the 
module which encodes or acquires goals and drives their dynamics). The 
signal is strengthened when the "current working goal" --- the goal with 
the highest value in the current payoff matrix --- is attained or 
progress toward it made.

> The first big problem with trying to build a system driven by a reward
> signal is the credit assignment problem.  Something the machine does
> at one point in time, might cause an increase (good) or decrease (bad)
> to the reward signal at some point in the distant future.  So when the
> reward signal increases, which of the behaviors in the past 10 years
> that this machine produced was the cause of that increase?

No. The system is always driven by the instantaneous value of the reward 
signal; it is not cumulative. When the reward signal reaches a 
threshold, it cycles the "goal engine," which demotes the current goal 
and promotes a new one (it recalcs the payoff matrix), which drops the 
reward signal back to its minimum. The system keeps going, seeking to 
raise that signal strength again. The system *could* discover that a 
past behavior produced a current "bad" (a negative goal, i.e., 
environmental features or situations the system seeks to avoid). It will 
anticipate many of those when it "runs its scenarios."  But if it 
encounters one it had not anticipated, it revises that scenario so that 
it will appear in future runs.
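
A toy version of that cycling, with the threshold value and the demotion
rule as placeholder assumptions:

class GoalEngine:
    """Toy goal engine: holds the goal hierarchy and the current working goal."""
    def __init__(self, goals):
        self.goals = dict(goals)              # goal -> value in the hierarchy
        self.current = max(self.goals, key=self.goals.get)

    def cycle(self):
        """Demote the goal just attained and promote the next one
        (i.e. recalc which goal is the current working goal)."""
        self.goals[self.current] *= 0.1       # satisfied, so it drops down
        self.current = max(self.goals, key=self.goals.get)

REWARD_THRESHOLD = 1.0

def step(engine, reward_signal):
    """Driven by the instantaneous reward signal, not a running sum.
    Hitting the threshold cycles the goal engine and drops the signal
    back to its minimum."""
    if reward_signal >= REWARD_THRESHOLD:
        engine.cycle()
        reward_signal = 0.0
    return reward_signal

engine = GoalEngine({"eat": 0.9, "rest": 0.4})
print(engine.current)                          # -> eat
print(step(engine, 1.2), engine.current)       # -> 0.0 rest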

> The way we
> attack that problem is to build reward prediction into the system. 
> The system must be able to predict future rewards.  That is, when
> something changes in the environment, it must be able to sense that
> the change just made things look better for the future, or worse for
> the future.

Yes. It recalcs the payoff matrix. Goal attainment triggers a recalc, 
and so can perceptible environmental changes and shuffling in the goal 
hierarchy. The "goal engine" may have a randomizer also, so that goal 
re-ordering sometimes occurs randomly.

> If you have a good reward predictor in the hardware, then you don't
> have to wait seconds, hours, or days, before the hardware can
> determine of something it just did was good or bad.  If a behavior
> causes the reward prediction system to indicate that things just got
> better, then it can use this fact to judge the value of the behaviors
> just produced.

That is the advantage of the "world model." Behaviors can be enacted 
virtually. 

> So that's the idea behind reward prediction. It
> makes it possible to learn the value of a behavior, even when the
> behavior has no short term change to real rewards - even if the 
> behavior causes a short term drop in rewards.

Behaviors have value only relative to problem situations. The same 
problem situation can impede attainment of different goals. Successful 
behaviors can be remembered, but they'll only be useful in problem 
situations that are sufficiently similar. Those will be "rote 
behaviors," and can be acquired by operant conditioning, or by 
emulation. Interesting behaviors, however, are not rote. They are 
generated *de novo* to solve the problem at hand. *Behaviors* are 
sequences of movements, and are constructed "on the fly," much as 
sentences are strings of words constructed "on the fly." The system must 
be able to generate a suitable behavior for the problem at hand, using 
rules of construction built into the system and also learned from 
experience, much as sentence construction follows morphophonemic, 
syntactic, and semantic rules (rote behaviors are similar to adages and 
cliches). The system must analyze the problem, virtually construct a  
behavior sequence that will solve it, run the scenario, carry out the 
"best bet" behavior, and then assess the result. It then says, "Yippee!, 
or "Back to the drawing board . . ."

The overall efficiency of behaviors will reflect the system's talent for 
inferring or abstracting an efficient set of rules for constructing 
behaviors, and for evaluating behaviors virtually.
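
A crude sketch of that construct-simulate-act loop; the movement
primitives, the toy world model, and the exhaustive enumeration below are
all stand-ins for whatever construction rules and virtual evaluation the
real system would use:

import itertools

MOVES = ["reach", "grasp", "lift", "turn", "release"]   # movement primitives

def construct_behavior(world_model, max_len=3):
    """Generate candidate movement sequences 'on the fly' and keep the one
    whose virtual run on the world model scores best - the 'best bet'."""
    candidates = (seq for n in range(1, max_len + 1)
                  for seq in itertools.product(MOVES, repeat=n))
    return max(candidates, key=world_model)

# Toy world model: the virtual run just scores a candidate sequence;
# this one happens to like sequences ending with grasp-then-lift.
def toy_model(seq):
    return 1.0 if seq[-2:] == ("grasp", "lift") else 0.0

best = construct_behavior(toy_model)
print(best)     # -> ('grasp', 'lift')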

> 1) key idea is that the purpose of the system is behavior selection -
> what do I need to do now by direct computation and not a search
> through a list.

No, it is behavior *generation* or construction. Only rote behaviors are 
available for selection.
 
> 2) We use a reward signal combined with a reward prediction system to
> evaluate the success of all behaviors (learning is on-line - it never
> stops).  BTW, though I didn't talk about it, the micro features and their
> "lessons learned" are also used to create this prediction system.

Yes. The prediction is accomplished by running virtual scenarios.

[More later on the "micro features"]

0
Publius
11/25/2008 10:20:39 PM
Publius <m.publius@nospam.comcast.net> wrote in
news:Xns9B6191FA38202mpubliusnospamcomcas@69.16.185.250: 

> But "goal origination" and "goal ranking" are separate issues.

From goal attainment.
0
Publius
11/25/2008 10:28:17 PM
On Nov 23, 4:03 am, "Isaac" <gro...@sonic.net> wrote:
> <forbisga...@msn.com> wrote in message

> > I believe your sentence parsing for the sentence starting page 11
> > line 33 is incorrect in that Dryfus is attributing a conclusion to
> > Chalmers, Clark, and Wheeler rather than making one himself.
> > See the next paragraph to see why I believe so.
>
> either way he bases his many conclusions later in the paper on all the
> various philosophers he quotes, so to me Dreyfus owns them no matter who
> is the source.

Why would he own something he is critiquing?

The passage I was referencing is:

   Heidegger's important insight is not that, when we solve
   problems, we sometimes make use of representational
   equipment outside our bodies, but that being-in-the-world
   is more basic than thinking and solving problems; that it is
   not representational at all.  That is, when we are coping at
   our best, we are drawn in by solicitations and respond
   directly to them, so that the distinction between us and
   our equipment--between inner and outer--vanishes.
   As Heidegger sums it up:

    I live in the understanding of writing, illuminating, going-in-
    and-out, and the like.  More precisely: as Dasein I am --
    in speaking, going, and understanding -- an act of
    understanding dealing-with.  My being in the world is nothing
    other than this already-operating-with-understanding in this
    mode of being.

So in the paragraph to which you responded:

  My comment:
  "Assuming that by "thinking" you mean conscious thought,
  I cannot see how thinking is a bridge that necessarily follows
  from memories/beliefs not being solely inner entities.  It
  seems to me that inner and outer representations can be
  bridged without thought.

You are responding as if Dreyfus is supporting a position he's
just reporting.  Even in the paragraph following he isn't taking
a position but clarifying Heidegger's position and describing how
Wheeler either doesn't understand it or is misrepresenting it.

In another thread I allude to the symbolic/subsymbolic nature
of things.  I'll go into a bit more detail.  Suppose I have an
electronic device we call a nand gate.  Two leads are identified
as inputs A and B and one lead is identified as output C.
C will have about 5 volts on it as long as A and B do not
have about 5 volts on both of them.  Here's the table without
going into a bunch of detail.

    A       B      C
   ~0v    ~0v   ~5v
   ~5v    ~0v   ~5v
   ~0v    ~5v   ~5v
   ~5v    ~5v   ~0v

Normally 0v is represented as 0 and power voltage as
1.  This works for positive and negative power so ~5v
could as easily have been ~-5v.  Heck, just saying 5v
shows my age.

Here's the thing...

Not only is the assignation of false to 0 and true to 1
not part of physical reality but neither is consistent
assignation of false and true to the same voltage at
all leads.  Here's another table where F represents
false and T represents true:

   A   B   C
   F    T   T
   T    T   T
   F    F   T
   T    F   F

That's the same table as before but with different
logical assignments.  When we use such a device
its purpose isn't set in stone and we are not relying
upon its representation of anything.  There's nothing
representational in the world to bridge.
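
The same point in a few lines of Python: the voltage table is fixed by
the physics, but which Boolean function the device "computes" depends
entirely on the per-lead assignments we choose (these are just the two
assignments from the tables above):

# Physical behavior of the gate: output voltage for each pair of input voltages.
VOLTAGE_TABLE = {(0, 0): 5, (5, 0): 5, (0, 5): 5, (5, 5): 0}   # volts

def as_logic(a_map, b_map, c_map):
    """Interpret the same physical table under per-lead volts->truth maps.
    Returns the truth table the device 'computes' under that interpretation."""
    return {(a_map[a], b_map[b]): c_map[c] for (a, b), c in VOLTAGE_TABLE.items()}

std = {0: False, 5: True}    # the usual assignment
inv = {0: True, 5: False}    # the inverted assignment

print(as_logic(std, std, std))   # NAND: only (True, True) -> False
print(as_logic(std, inv, std))   # "A implies B": only (True, False) -> False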

Again:

   Heidegger's important insight is not that, when we solve
   problems, we sometimes make use of representational
   equipment outside our bodies, but that being-in-the-world
   is more basic than thinking and solving problems; that it is
   not representational at all.  That is, when we are coping at
   our best, we are drawn in by solicitations and respond
   directly to them, so that the distinction between us and
   our equipment--between inner and outer--vanishes.

Contrary to what I said before, Dreyfus is in fact telling us
his position on the matter when he uses the words "...
important insight...".  Dreyfus is saying that Wheeler was
wrong and his interpretation of Heidegger was wrong.

Some have said that the enemy of my enemy is my friend.
I was about to be drawn into this position by saying that
when you respond to Wheeler's position you support Dreyfus
but that isn't so.  Dreyfus follows the Heidegger line where
you respond: "It seems to me that inner and outer
representations can be bridged without thought." and this
falls well off the mark concerning their position on the
matter.

Dreyfus is talking about being in the world and coping with
it intelligently.

Page 32 line 22:

  It would be satisfying if we could now conclude that, with
  the help of Merleau-Ponty and Walter Freeman, we can
  fix what is wrong with current allegedly Heideggerian AI
  by making it more Heideggerian.

Note the word "allegedly".  I don't know how you can correctly
assert, "either way he bases his many conclusions later in
the paper on all the various philosophers he quotes, so to me
Dreyfus owns them no matter who is the source." when he is
basing his conclusion on a denial of their positions being
Heideggerian.
0
forbisgaryg
11/26/2008 2:25:49 PM
"Isaac" <groups@sonic.net> wrote:
> "Curt Welch" <curt@kcwc.com> wrote in message
> news:20081116234549.142$ia@newsreader.com...

> > I'm an engineer, not a philosopher.
>
> >As such, nearly everything you write
> > strikes me as silly and odd and misguided.
>
> I am an engineer, scientist, philosopher, and roboticist.  Of course, the
> problem does not reside strictly in any one discipline or skill set, so I
> am not surprised that an implementation oriented thinker will find the
> abstractions too obtuse for utility.

Then you are one of the lucky rare ones! :)

Yes, anyone who can understand and combine all the disciplines together
will have a clear advantage.

> I disagree.  Reverse engineering will not solve the problem and may
> actually lead to many dead ends.  It will take a new theory and
> philosophy to do it. Think of it like trying to empirically come up with
> QED or Relativity w/o any new theory or philosophy of physics.

Yes, you must have the theory to guide you.  That's the hypothesis in
science.  But at the same time, the theory must be distilled from empirical
evidence or else it's not very likely to do much good.  Many of the
philosophers who don't do a good job of crossing over between fields fail
to do that, in my view.

> > It's not a problem which can be solved by pure
> > philosophy.
> >
> True, but you can't just do it bottom up either.  You can miss the big
> picture, which philosophy can shed light on.

Right.  I spend endless hours philosophizing about the big picture so I
know where to head.  But you have to build from the bottom up - aka do
empirical experiments to verify or reject your philosophizing.

> > For example, you speak of this "unity between the mind and the world".
> > What exactly is the "mind" and the "world"?
>
> I did not say this.  If you read my intro, I was quoting from Dryfus'
> paper.

Sorry, your quoting style confused me I guess.  People sometimes quote one
paragraph and then write their response starting with the second paragraph.
I don't think I was sure what was the quote and what might have been you
talking.

I like to indent any material I quote to make it obvious.

> >You can't resolve this sort of
> > question just by talking about such things.  Words are defined by their
> > connection to empirical evidence and without empirical evidence, the
> > words are basically meaningless - or at minimal, available for use in
> > endless pointless debates and redefinition based on usage alone.
>
> for sure semantics can lead to circular definitions, but tossing out
> anything not empirical is "throwing the baby out with the bath water";
> that is, you toss out powerful abstractions that bridge large gaps in
> empirical evidence.

Yeah, you can start with any idea no matter where it came from, but if you
can't verify it using empirical data, then the odds of being useful become
highly reduced.

> > The problem we run into here is that without a concrete definition of
> > how the brain works and what the mind is, we can't make any real
> > progress on the types of issues you are touching on here.  How can we
> > make any progress
> > debating the nature of the connection between the "mind" and the
> > "world" when we can't agree what the mind is?  And if we can't agree
> > what the mind is, we can't really agree on anything it creates - like
> > it's view of the world - which is the foundation of what the word
> > "world" is referring to.
> >
> > You can't resolve any of these questions until you can first resolve
> > fundamental questions such as the mind body problem and consciousness
> > in general.
> >
>
> Well, we have to talk about the trinity or we'd get nowhere, but I agree
> that any usage of those words must be very tentative and cannot lead to
> sweeping conclusions w/o a scientific definition of each, which I say
> would require a theory of mind (not connecting a million data points).

Well, I think mind and consciousness in fact only slow people down and that
it would be solved just fine if the ideas were never mentioned.  The entire
concept grows from an illusion which makes people think there is something
happening in them which simply isn't there at all.

The only point to studying mind and consciousness is to understand the
illusion.  However, almost no one seems to understand it's an illusion and,
as such, they waste endless hours trying to understand something which is a
ghost instead of ignoring it like they should be doing.  It's just this
illusion which gets the philosophers into so much trouble when they try to
figure out the mysteries of the brain without using empirical data.

> > I have my answers to these questions, but my answers are not shared or
> > agreed on by any sort of majority of society so my foundational beliefs
> > can't be used as any sort of proof of what is right.  It call comes
> > back to
> > the requirement that we produce empirical data to back up our beliefs.
> > And
> > for this subject, that means we have to solve AI, and solve all the
> > mysteries of the human brain.  Once we have that hard empirical science
> > work finished, then we will have the knowledge needed, to resolve the
> > sort of philosophical debates you bring up here.  Until then, endless
> > debate about what "merging mind and world" might mean, is nearly
> > pointless in my view.
> >
> > Having said all that, I'll give you my view of all this, and the
> > answers to
> > your questions as best as I can figure out.
> >
> > I'm a strict materialist or physicalist.  I believe the brain is doing
> > nothing more than performing a fairly straight forward signal
> > processing function which is mapping sensory input data flows into
> > effector output data flows.  There is nothing else there happening, and
> > nothing else that needs to be explained in terms of what the "mind" is
> > or what "consciousness" is.
>
> I don't think you can call anything as chaotic as the brain doing
> anything "straight forward".  The Earth's weather is infinitely more
> straightforward than the human mind/brain and we cannot model it worth a
> damn even with all the most powerful computers in the world.

Only the behavior is chaotic and too confusing to understand.  The
underlying principles which create the chaos are what I believe to be very
straight forward.  If you can understand the straight forward underlying
principles, then you can understand the chaos.

Random number generators are also chaotic and too confusing to understand.
But yet, no one thinks twice about them as being some big mystery in the
universe because the underlying principle that causes the chaos is simple
and straight forward.  I think the brain is exactly the same sort of thing.

> >The mind and consciousness is not something separate
> > from the brain, it simply is the brain and what the brain is doing.
> >
> > It's often suggested that humans have a property of "consciousness"
> > which doesn't exist in computers or maybe insects (based on the use of
> > "insect" above).  I see that idea as totally unsupported by the facts.
> > It's nothing more than a popular myth - and a perfect example of the
> > nonsense that is constantly batted around in these mostly pointless
> > physiological debates.
> >
> > There is no major function which exists in the human brain which
> > doesn't already exists in our computers and our robots which are
> > already acting as autonomous agents interacting with their environment.
> > The only difference between humans and robots, is that humans currently
> > have a more advanced signal processing system - not one which is
> > substantially different in any significant way - just one which is
> > better mostly by measures of degree, and not measures of kind.
>
> I don't think you could be farther away from the truth.  The brain
> computes in ways that are so different (and often opposite) from how our
> signal processing works that it is in another universe by comparison.
> For example, the core of the brain's sensory processing seems to be a
> kind of synesthesia-based system, which is exactly what all engineers
> would avoid like the plague.  I could go on and on with counter examples.

Sure.  But not all of us avoid that type of design like the plague.  All
reinforcement learning programs and neural networks work that way.  It's
the general field of computer learning you are describing.

Though as a programmer, you have no hope of being able to program a neural
network to recognize faces by hand adjusting the weights.  That's why we
don't program that way.  We develop learning algorithms to do the
programming for us.

Learning to program learning algorithms is tricky business and it's exactly
why solving AI is taking so long.  You don't get to just program in the
behavior you see.

If I gave you a long sequence of random numbers (a whole book full) how
long do you think it would take you to come up with the code that produced
it?  It's a reverse engineering task that is as hard as breaking a cipher.
Solving AI is the same class of problem.

I have no doubt that the brain is using simple and straight forward
learning techniques to shape the chaotic behavior of a network of
communicating neurons.  To solve AI, we have to figure out what those
simple and straight forward learning techniques are.

> Hebbian learning has been known since the '50s but that has not led to
> anything practical because it may be necessary but not sufficient.

Hebbian learning is hardly a complex enough concept to even be called a
learning algorithm.  It's high level hand waving (which as John will tell
you I do plenty of on my own).  It's like the first step down a long path
of developing learning algorithms.  It's the idea "maybe there are simple
rules to explain how neurons learn, and if we use these rules in a large
network - the network will produce intelligence".

Yes, I believe in that idea, but it's not the answer, it's little more than
first step at trying to find the answer.

> For
> example, hebbian learning does not even begin to solve the frame problem.
> Since this is so straight forward, how do you propose reinforcement
> training (i.e., Pavlov's dog) can be used to robustly deal with the frame
> problem?

From the little I've seen of the frame problem, the question in general seems
silly and absurd - it seems to be a problem created by asking the wrong
question and making the wrong assumptions.

However, this might be a good one to debate with you to see where it leads.

I've been slow in reading and responding to these questions, and a post or
two I wrote last week about my views touched on the Frame problem.  I don't
know if you wrote the above before or after I posted that.  Maybe I should
start another thread so we can chew on the Frame problem a bit....

> > From this perceptive, let me jump in and debate the words you had issue
> > with:
> >
> >  we are drawn in by solicitations and respond
> >  directly to them, so that the distinction between us and our
> >  equipment--between inner and outer-vanishes
> >
> > I think at the lowest levels of what is happening in the brain, it is
> > obvious that the brain is simply reacting to what is happening in the
> > environment - that is what the brain is doing by definition in my view.
> > We
> > simply "respond directly to them".  That is all we every do.
>
> really?  So, being an engineer you will know that "reactions" to input is
> just another way of saying that you have a control system.

Sure.

> However, any
> control system needs a model to determine the proper control surface for
> the input landscape; that is, model building.

No it doesn't.

> Thus, the brain is about
> building useful models of the environment via sensory synergy.  In this
> way, I completely disagree with your assertion that "brain is simply
> reacting to what is happening in the environment "

We build models to _explain_ the way the brain reacts.  In fact, the brain
doesn't have to build models, it only has to react.  Or, from another
direction, the way it reacts, is what you are calling "the model".

Look at Brooks' subsumption systems.  Where are the models in those
designs?  This was exactly the point he was making in that approach.
Complex goal driven behaviors that seem to be using a "model" can be
implemented as priority driven reaction systems.

> Well, Dreyfus disagrees with you on this point.  He says there is no
> representation of a dog in the brain.  How do you argue against that?

I think representation is in the eye of the beholder. :)  That is, when we
look at hardware, we can describe it as having a representation or we can
choose to believe it doesn't.

Any causality chain that exists in the universe is an example of
representation and you can be dead sure the brain is full of causality
chains.

If a red ball hits a blue ball and makes the blue ball change direction,
the new direction of the blue ball is a representation of the old direction
of the red ball.  All causality chains create representations.  Whether you
choose to call it that when you are talking about the physics of what is
happening is totally a choice of your frame of reference.

I've not read Dreyfus so I can't comment directly on what I think his view
might be.

As I've explained in other messages I've posted now, I think the way the
brain works, and the way we need to build hardware to duplicate human
behavior, is to create networks which translate sensory data into a large
set of signals which represent micro-features of the current environment.
All those signals collectively represent the systems view of the current
state of the environment.

I think the brain very much has a representation of dog in the brain.  It's
almost absurd to say it doesn't because if it didn't, we couldn't think
about a dog (assuming you are a materialist like I am who believes that thought
is just brain behavior).

Now, if you want to argue that there is no grandmother cell, that's a very
different argument.  That's not an argument against representation, it's an
argument about the form of the representation.  I would agree there
probably is no grandmother cell.  I believe there are lots of micro-feature
cells and that some general cluster or cloud of micro features is our
brain's representation of a high level concept like "grandmother".  If
that's the argument Dreyfus was making about the dog then I would agree
with him.

> >We can look right at it, and have no clue what it is
> > we are looking at.
>
> All the evidence I am aware of re the brain is that such concepts are not
> located in any one place which you can damage to lose only the
> recognition of a dog.  BTW, this is another example of how the brain is
> radically different than our computing systems.

Right, simply because a high level generic concept like dog represented by
a single English word does not exist in our brain as a single micro-feature.
It exists as a large and loosely defined cloud of micro features - a
collection that doesn't have any absolutes but rather just a large set of
probabilities.  If you damage part of that cloud you make it harder for the
brain to use that concept, but the concept of dog doesn't die completely
unless you can damage most of the micro-features that define it - and that
might damage such a large part of the brain so as to wipe out a
lot more than just the "dog" concept - you might kill the brain's ability
to use language in general.
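
Here's a toy way to picture that in Python (the feature names are entirely
made up - this is hand waving, nothing about real neurons):

# A high level concept as nothing but a loose cloud of micro-features.
# "Recognition" is just how much of the cloud the current input lights up.
DOG_CLOUD = {"fur", "four_legs", "tail", "barks", "wet_nose",
             "pants", "fetches", "wags_tail", "floppy_ears", "paws"}

def dog_strength(active_features, cloud=DOG_CLOUD):
    # graded signal - no single "grandmother cell" anywhere
    return len(cloud & active_features) / len(DOG_CLOUD)

scene = {"fur", "four_legs", "tail", "barks", "wags_tail", "paws", "grass"}
print(dog_strength(scene))                    # 0.6 - "probably a dog"

# "Damage" part of the cloud: the concept gets weaker, it doesn't vanish.
damaged_cloud = DOG_CLOUD - {"barks", "tail", "wags_tail", "floppy_ears"}
print(dog_strength(scene, damaged_cloud))     # 0.3 - degraded but still there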

> > At the same time, if you stimulate the correct parts of
> > a brain, it's mostly likely that the person would report they were
> > "seeing a dog" when there was no dog there.
>
> There is no research ever showing that this is possible.  Please cite the
> research that supports your belief.  I only know of music being able to
> be stimulated to be heard in the brain.

I have no such examples.  What, me use empirical evidence? :)

What I suspect is that you would have to stimulate enough of the correct
micro-features (neurons) that represented the idea of "dog" in the brain
before the person would be stimulated into thinking the dog concept.
Without better brain scanning and stimulating tools we couldn't locate that
potentially well scattered set of neurons nor activate them.

However, what we can do, is stimulate sections of the brain and ask the
patient if they sense _something_.  The answer, as far as I know, is that
the patient does sense things when they are stimulated in different parts
of the neocortex.  What we can't do, is accurately know what part to
stimulate to create what type of sensation or thought.

BTW, the empirical evidence I use is not brain research, it's computer
science research into the behavior of algorithms.  All my talk about brains
is mostly speculation based on how I think it must be working based on what
I know about making computers work.  Keep that in mind when I say things
like "the brain does....".

> > So what is it we are actually
> > "seeing".  Is it the dog we are responding to when we say we see a dog,
> > or is the neural activity in one part of the brain which other part of
> > the brain is responding to by producing the words "I see a dog"?
> >
>
> I think we should stay away from consciousness in this discussion or else
> we will get nowhere by forking out to too many infinities.

:)

> > It can be argued that what we actually respond to is not the physical
> > dog out in the word, but that we are responding to the brain activity.
>
> Of course, the model of the dog.

Yes, I can call this network of micro features a model, or I can call it
"the way the brain reacts to the environment".  It's the same thing either
way to me.  I think it's important to understand it from both views.

> I believe he means our brain circuits engage phenomenon by melding with
> it and becoming a mirror image such that the two are not separable, thus
> no representations of the object in the brain just a bunch of organically
> melded dominoes that hit one to another like "reality" would.

I really can't grasp what you are suggesting there.

If the brain is "dominoes that hit one to another like reality would" then
that to me is a dead on definition of what a model is.  It's a
representation of the object using dominoes.

Also, as I said above, I think all causality chains that exist in the
universe are representation systems.  Representation is just a way of
describing causality.  For example, the height of the mercury in the
thermometer is a representation of the average kinetic energy of the air.
The location of the dial on the pressure gauge is a representation of the
pressure in the pipe.  The angle of the wind sock is a representation of
the air speed.  The foot print in the mud is a representation of the deer
that walked past an hour ago.  The mud puddle is a representation of the
rain that happened an hour ago.  The electron charges in the flash memory
card of the camera are a representation of the person that was in front of
the camera the last time the exposure button was pressed.

> I believe he is saying that phenomenon is internalized w/o distinctions;
> i.e., you become the phenomenon.  As opposed to you making a model of the
> object as a separate token to use in your brain system to plan your
> actions.

Well that wording has a little bit I can relate to.  That is, the way our
brain reacts as a whole is the representation instead of something in the
brain creating a representation and then manipulating it.  That is, how we
react to it is the representation instead of the representation being
something we manipulate (as we would manipulate a model with our hands).
That is, there is not one part of the brain which is the "us" and another
part of the brain which is "the model".  The brain is us and how we react
to the world is the model.  We are the model.

In this sense, I think I agree with what he was saying - if that is in fact
what he was thinking.

> In his paper, he considers chaotic neural networks as being more at "one
> with the world" than classic AI's more (fuzzy) rule based systems.

Well, with my view of the network being a set of micro features that
represent the current state of the environment, I think we are in sync with
the idea of being "one with the world".

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
11/26/2008 6:59:41 PM
On Nov 26, 10:59 am, c...@kcwc.com (Curt Welch) wrote:

> The only point to studying mind and consciousness is
> to understand the illusion.

Driving along the road you may have the illusion of
shimmering water in the distance. That it is water is
an illusion, that you experience it is not an illusion.
You do not explain the illusion, you deny it, claiming
it is a conditioned belief we have been trained to see
the water but it doesn't exist. You claim there is
nothing to explain. But illusions can be explained
Curt. An illusion is not a nothing. And it is not
simply a conditioned belief. Or are you going to
say a mirage doesn't exist we only think we are
seeing a mirage because we were trained to think
we were seeing a mirage?

To me that is silly beyond belief. Yes the illusion
is that it is water. Yes the illusion was that the
experiences we call mental events was a soul. But a
false explanation of the illusion being experienced
doesn't make the illusion a result of conditioning
or training. The belief in the soul was an explanation
of something not a belief in something that didn't
exist. We do have experiences.

I explain it as two views of the same thing.

It is an illusion a machine will have without the
need for any training or conditioning.

JC

0
casey
11/26/2008 8:52:26 PM
casey <jgkjcasey@yahoo.com.au> wrote:
> On Nov 24, 8:46 pm, c...@kcwc.com (Curt Welch) wrote:

> > The way we talk about the mind is different than the
> > way we talk about everything else - but yet there is
> > no evidence to suggest these things we associate with
> > the mind are different from anything else.  It's just
> > left over baggage from the days when dualism was
> > assumed true by default.
>
> What you miss is that the "everything else" you talk about
> is made out of the contents of the mind process and that
> is why we talk about it as if it was different in some
> way because in one sense it is.

Sure, we talk about self differently than we talk about everything else.
And a lot can be justified because of our point of view.  But there's more
to it at work here because the mind is the brain, but yet, you and many
other people want to keep it separate by saying it is a process, or what
the brain does.

> But the difference is not
> in the substance but in the viewpoint.

It's easy to use the point of view argument to explain why the word mind
was created in the first place. But it's not a good argument for why you
continue to argue the mind is a process, and is not the brain.

> I agree that there
> is no evidence anything but a physical brain consisting
> of firing neurons is involved. I believe the difference
> exists even in a simple program like SHRDLU.

> > My issue is mostly that if you think consciousness is a
> > special type of process (and are trying to explain what
> > type of process it is), you don't understand that it's
> > an illusion.
>
> I don't think it is an illusion that the contents of my
> conscious processes consist of objects in a 3D space but
> they do not consist of the firing neurons that allow such
> a process to take place in the first place. If you think
> you can actually experience the firing of neurons I would
> suspect you were the one suffering the illusion although
> firing neurons are indeed what is happening at the lowest
> level of the process you don't actually experience them.

It's well known that if you stick a probe in a brain and make neurons
fire, the person will tell you "I FELT THAT!".

You can very much sense the firing of neurons and we damn well know it.

The fact that you don't understand this is just one more example of how you
are not quite yet fully on board with materialism.  You don't yet grasp
that your mind is in fact your brain.  Your ability to sense your own
thoughts IS your ability to sense your neurons firing in your brain.  If
you know you are having a thought, you ARE sensing your own brain.

> > There is not a single class of process that exists in
> > the human body creating our intelligent behavior that
> > doesn't exist in some form in all our computer
> > controlled robots.
>
> It depends what you mean by that. If a process is taken
> to be, say, a subroutine in a program then some programs
> have processes taking place that others don't. Some of
> the subroutines are common to both some are not. I would
> not be surprised to find the brain had processes that we
> have NOT YET implemented in a machine.

Yes, I have no problem with the idea of a process.  It basically means the
same thing as algorithm in the context of computers.  And I think the
processes at work in the brain have not been fully implemented in any
computer yet.  But I don't think that fact has anything to do with "mind"
or "consciousnesses" - other than the fact that if you don't get the
process right, you won't create the same illusion - so it's probably true
that none of our computers are experiencing the illusion of consciousness.

And when people talk about consciousness, if they meant the word to mean
"the illusion of consciousness" then I would agree that computers don't
have it. But the people I argue with about consciousness are not making
that argument (as far as I can tell). They believe it's not an illusion,
but in fact it's something real, and they think this real thing is what
doesn't exist in our computers.

> > If you think consciousness is something we need to
> > "go figure out" or "discover", you don't get my point.
>
> I think your point is that there is no evidence that
> there is anything more than a brain full of firing
> neurons when we talk about being conscious. And I agree
> with that.
>
> >>> Trying to argue that mind is just a verb that
> >>> describes an action of the brain doesn't explain
> >>> the externally odd way we use it in the English
> >>> language.
> >>
> >> I think we both agree on the fact that many have
> >> the belief in a mind or soul as a separate entity
> >> to the brain. And I think we both agree that there
> >> is no evidence for such an entity and most people
> >> who know about our current knowledge of brains
> >> realize that the contents of the mind correlate
> >> 100% with brain activity.
> >
> >
> > Yes, except the simple fact you said "correlate"
> > shows you are thinking about it wrong.  It is
> > meaningless to talk about how something correlates
> > with itself.  Correlation only exists between two
> > separate events.  You would not, for example, talk
> > about how the sun rising in the morning has 100%
> > correlation with the sun rising in the morning -
> > but yet that's what you just wrote above.
>
> What correlates is two views of the same thing.
>
> You call one of these views an illusion

No, I've never called one of the views an illusion.

The illusion is the belief that it's not just two views, but two
different things.

Per your example above where you said you can't sense neurons firing when in
fact you can sense them very clearly.  The illusion is what makes people
think their subjective experience is something other than neurons firing.

> but really
> it is just another view, a view from the inside
> by the system that can have an inside view. It is
> not a correlation of a circle with a circle it is
> a correlation with an oval and a different shaped
> oval which indeed may be a circle in 3D space
> but seen from different views. That is just an
> analogy for what I am talking about it better
> understood with reference to actual programs like
> the SHRDLU program.

Yes, it's another view. But a ring, seen from the side, still looks like a
ring to us.  It doesn't stop looking like a ring when it turns into an oval
because we are seeing it from the side.  This is because the brain is wired
to map those two views into one object - one set of internal signals.  The
firing of neurons, seen from another view, doesn't look like the firing of
neurons - it makes people like you say you can't sense the firing of your
own neurons when you are sensing it.  It doesn't seem to be the same thing
because our brain is not mapping it into one common representation in our
brain.

>
> >> I don't see the word "mind" as a single action rather
> >> it is the word that covers all the things you mention
> >> above such as "thought", "memory", "idea" and so on,
> >
> >
> > Yes, that's exactly how we use it which is fine.
> >
> > We gave it two different names because we have no
> > direct data which shows us it's the same mountain.
>
> In a sense that is true. However the two names are
> still useful for they indicate what view is being
> referred to.

Well, we don't call a ring an "o-ring" when we see it from an angle that
makes it look elliptical, a "c-ring" when it's a circle, a "sound-ring"
when we hear it hit the floor, or an "s-ring" when we feel its smooth hard
surface.  We just call it a ring no matter what view of it we are looking
at, and we just call it a dog no matter what view we are looking at.

The word mind is more confusing than helpful, but for historic reasons it's
going to stay around.  But that's not the issue.  The issue is that there
are people like you who believe in materialism but still think your
thoughts are something other than you sensing your neurons fire - which is
dualism.  Your view of physics is materialism but your view of your mind is
still dualistic.

> Are we talking about the neurons or
> logic gates or are we talking about the high level
> descriptions used by the program itself?

I'm not sure what you mean there because I don't know which "program" you
are talking about or which "high level description" you are talking
about.

> >> The firing of a neuron isn't what we call a conscious
> >> process even though a conscious process is entirely
> >> due to the activity of firing neurons.
> >
> >
> > But it is, and that's your problem.  You are looking
> > for something else when there is nothing else to look
> > for.
>
> No I am recognizing the viewpoints. One view is made
> up of high level descriptions used by the brain or by
> a program, the other view is of the firing neurons used
> by the brain or the switching logic gates used by the
> program.

The only "high level descriptions" in play here are the words we are
speaking.

Everything else is the firing of neurons and the movement of atoms in the
body.

> >> A "conscious" process is the same as an "unconscious
> >> process" with respect to them both involving the
> >> firing of neurons and most likely the same neurons.
> >
> >
> > That's something I've never seen anyone actually talk
> > about.  What does the brain activity of an unconscious
> > person look like on something like an fMRI?
>
> I am not sure except I suspect there is less activity
> except when dreaming. When you dream the activity is
> just as if you were awake. The difference is that there is
> no connection to the body otherwise you would act out
> your dreams or react to your sensory input.
>
> >> The content of a "conscious" process is made up of
> >> colors, feelings, thoughts and so on which are the
> >> result of the firing of neurons.
> >
> >
> > No, they _are_ the firing of neurons.  The contents
> > of a conscious process in a human brain is the firing
> > of neurons.
>
> I think maybe you misunderstand what I am saying.
> Yes the physical process consists of firing neurons
> but the high level process which talks about feelings
> and has thoughts does not make reference to those
> firing neurons. Again I refer you to the SHRDLU
> program, called Robbie in some text. I think you
> still have this residual notion of others having
> a belief in something extra being involved.

Yes, the residual notion that you keep confirming by saying things like you
can't sense your neurons firing.

> >> However a firing neuron does not have feelings
> >> or thoughts as its content even though it is a
> >> sub process required for the feelings and thoughts
> >> to take place in the system as a whole.
> >
> >
> > But it does. There is no indication that it doesn't.
>
> There is no indication that a firing neurons has
> thoughts or any ability for it to have such content
> anymore than a single pixel can have the content
> of a whole image.

A million neurons have thoughts, so why can't one?

A million transistors can compute - why can't one?  One transistor
computes just fine and one neuron has feelings just fine.

> > You also have no evidence that this myth you call
> > "the conscious process" is real.  You can't define it,
> > you don't know it when you see it happening in a
> > computer.  Why do you continue to put so much faith
> > in something that doesn't exist?
>
> It is as if you haven't read anything I have written.
> I *have* given a tentative definition of a conscious
> process as being defined by the content of the process.

What on earth does "content of a process" mean?  I can't grasp what it
means so you are going to have to do better than this if you think this is
an answer.

A rock rolling down a hill is a process.  What is its content?  I can't
even grasp what you are thinking here by trying to talk about the content
of a process.

> The content of the square root process is a number.

So somewhere, mixed in the wires and transistors of a calculator as it
calculates a square root we will find a "number" if we look hard somewhere
under the covers?

Do you understand that the word "number" is vibrations in the air that are
produced out of the mouth of a human looking at wires and transistors and
that as such, it's not something which exists in the wires and transistors
of a calculator?

Do you understand that "square root process" is also something that does
not exist in the calculator?  It is again a behavior humans produce in
response to looking at the calculator?

> The content of the conscious process is the 3D world
> you experience.

The content of my conscious process is a lot of firing neurons.

> The content of an unconscious process
> of the neurons is an electrochemical pulse or chemical
> gradient.

What you describe as "unconscious" is what I describe as "conscious"
because you think there are two things there, and I think there is only one
thing there.

> I call one conscious because that is the
> label talking human systems use when referring to this
> process. True they once thought, and many still do,
> that this content has a non physical existence, but
> we know better don't we?
>
> > You say you have the magic power to "sense conscious"
> > in yourself.  How do you know my digital camera doesn't
> > have the same power?
>
> I don't think I have ever said we have the magic or
> otherwise power (ability) to "sense consciousness".

No, it's implied in the things you say.  I'm just translating what you are
implying to try and help you see the inconsistency of what you are saying.

> It could be that self awareness involves some kind
> of model of the self, including the thinking part of
> the self, as its content. But these are all physical
> models or patterns in a physical brain.
>
> > The point of my never ending discussion with you
> > is that even though you have rejected dualism and
> > believe in materialism, you still think dualistic
> > consciousness is real - you just changed its name
> > from "soul" to "process" and by doing so, since
> > "process" is a material thing, you believe you have
> > resolved any conflict between materialism and
> > consciousness.
>
> Of course I have resolved the conflict between the
> two views of the brain. The one which is experienced
> by the brain and the one which is involved in viewing
> a brain via some external means. Why do you not see
> that the experience people talk about as being some
> kind of non-material "soul" turns out to be nothing
> but a *material* process?

It's not some odd unknown process.  It's the firing of neurons.  We know
what the process is and it's what you are sensing.

> > And at the same time, since you believe in dualistic
> > consciousness,
>
> What in the world is "dualistic consciousness"?

It's when you think your subjective experience is something other than the
firing of neurons and you tell people "If you think you can actually
experience the firing of neurons I would suspect you were the one suffering
the illusion ...".

> > ... you have no issue in thinking that "you" are not
> > a human body, but a "pattern of behavior", or "a process".
> > You are just cheating by doing this.
>
> Why is that cheating?

Because you are using the word "process" to justify your dualistic view
that there is a separation when there is none.  And since the word
"process" refers to physical events, you have brought the non-physical side
of dualism into the physical domain which makes you feel like it's no
longer dualistic when in fact you are still thinking of it as dualistic.

You talk about the "process" being one level, and the "neurons" being
another level when there aren't two levels in what exists in the universe.
There is only the firing of neurons.

> To me it is an observable fact.
> I see people acting like they have a Self when I also
> see a process taking place. When the brain stops I
> cannot observe any physical actions which could be
> called a Self.

Yes, it's fine to talk about how the brain works as a process.  What's not
fine is to use the word to justify a belief in dualism like you do.

> If you doubt a Self is derived from a person's actions
> then you might ask why those who have had a relative
> suffer brain damage claim that their relative is not
> the same person. What they mean is that although it is
> the same body the actions they call personality are
> not the same anymore.

I don't doubt for a second that we _identify_ people by their actions.  And
if you want to use "self" to mean, "the actions of my friend" that's ok.
But if you want to try and argue that the self is separate from the body,
then I cry foul and start talking about dualism.


>
> > It's a fine consistent story you have created for
> > yourself - but it's just not true and it's not real
> > materialism.  Real materialism rejects dualistic
> > ideas of consciousness.  Real materialism says the
> > body and what it does is all there is and that there
> > is nothing else there to talk about that could be
> > called "consciousness".
>
> That seems more like a religious doctrine than any
> kind of scientific model.

It's called materialism and you have never shown you fully understand it
even though you say you believe in it.  I'm just trying to get you to
understand the inconsistencies in the things you say - which I assume come
from inconsistencies in what you think and believe in.

> Let there be no other God
> (theory) except this one. All other physical theories
> are heresy. They are not true Materialists (Christians)
> for they do not believe in our dogma.
>
> Well sorry Curt I am a free thinker and don't attach
> myself to any religious ISM.
>
> - John

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
11/26/2008 11:45:35 PM
On Nov 26, 3:45 pm, c...@kcwc.com (Curt Welch) wrote:

casey <jgkjca...@yahoo.com.au> wrote:
> On Nov 24, 8:46 pm, c...@kcwc.com (Curt Welch) wrote:
 ...


>> Are we talking about the neurons or logic gates or
>> are we talking about the high level descriptions
>> used by the program itself?
>
>
> I'm not sure what you mean there because I don't
> know which "program" you are talking about or
> which "high level description" you are talking
> about.

It doesn't really matter which high level description
is taking place in a program. All that matters is that
it doesn't involve a description of the logic gates
even though it is the activity of logic gates that
makes up the program.


> The only "high level descriptions" in play here are
> the words we are speaking.


And the experiences and the feelings and everything
else that makes up the content of the Mind.


> Everything else is the firing of neurons and the
> movement of atoms in the body.


That is the low level description of the mechanisms
involved in the production of the high level descriptions.


> A million neurons have thoughts why can't one?
>
> A million transistors can compute - why can't one?
>
> One transistor computes just fine and one neuron has
> feelings just fine.


Really Curt. So what feelings do you think a neuron
has? Pain? Love? Joy? Fear?

>> The content of the square root process is a number.
>
> Do you understand that "square root process" is also
> something that does not exist in the calculator?  It
> is again a behavior humans produce in response to
> looking at the calculator?

You might say then that if a machine produces one
response to a tall human and another response to
a short human then tallness is not in the human
but in the behavior of the machine. What if the
machine is responding to its own height?


> It's called materialism and you have never shown you
> fully understand it even though you say you believe
> in it.  I'm just trying to get you to understand the
> inconsistencies in the things you say - which I
> assume come from inconsistencies in what you think
> and believe in.


I don't see any inconsistencies in what I say. Mental
events are things the brain *does*. You say they *are
the brain*. I don't observe mental events in a dead
brain (one without firing neurons) so it is consistent
with my belief, that mental events are what the brain
does, when I say a dead brain is not conscious. I guess
the idea that rocks have mental events is consistent
with the idea that brains *are mental events*. I just
don't share your view of materialism. I don't see a
rock rolling down a hill as a mental event.

Call me a dualist if you like I don't care anymore.


JC

0
casey
11/27/2008 2:41:07 AM
Publius <m.publius@nospam.comcast.net> wrote:
> curt@kcwc.com (Curt Welch) wrote in
> news:20081124024020.448$0c@newsreader.com:
>
> > At the high level the idea of what has to be done seems kinda obvious.
> > When we do something that works for us, we want to apply that to
> > something we run into in the future which is similar.  If our measure
> > of "similar" is a good one, we might expect the things we try to work,
> > if our measure of "similarity" turns out to he stupid, we will get no
> > where.  For example, if I got food once when I was hungry by asking my
> > mom "Can I have more food please", then the next time I'm hungry, I
> > might try the same thing again. But if I'm alone in the jungle, and I
> > walk around saying "Can I have more food please" it would be a fairly
> > stupid thing to try.  We are smart enough to know that using that
> > behavior, when we are hungry, and surrounded by trees, is pointless,
> > but when hungry, and surrounded by humans that speak English, the
> > behavior is a good one to try. But how do we build that level of
> > "smartness" into a machine?  How does the machine know that one
> > behavior is more likely to work than the other?
>
> The behavior generated must be adapted to the problem situation, not the
> goal sought.
>
> > So, the first point here is what is the lesson learned?  The answer to
> > that is, "what behavior has worked in the past".
>
> It has to be "behavior which has worked in the past in situations
> similar to this."

Yes, exactly.  I spend so much time thinking in those terms that the word
"behavior" has come to mean that to me.

That is, I don't think of a behavior as a simple action, like throwing a
rock, I always think of a behavior as a reaction to some state of the
environment.  It's the association between state, and action, which I now
think of when I use the word behavior. To me, the behavior of a machine is
defined by its internal associations from stimulus/state to response.

> > But, how does the machine know if a behavior was good or bad?  How
> > does it evaluate the result to determine if the result of an action
> > was good or bad?
> >
> > The bottom line to that question is that there is no universally
> > correct definition of good and bad.  There is no universal way to
> > build a machine which looks at sensory data and reports "good" or
> > "bad".  The answer is, that good and bad must be hard coded into the
> > machine - and the way it's hard coded is totally arbitrary.
>
> You seem to be confusing two things --- the goals the system seeks, and
> the efficacy of strategies for attaining them.

Words like "strategies" are labels we apply to high level macro behaviors
of humans.  I'm always thinking about what I have to build into the machine
at the micro level to make these sort of high level behaviors emerge.

I believe the solution to AI will require an extremely general, and
extremely low level approach.

However, I believe these systems must work by assigning value to all
possible micro behaviors (aka the efficacy of a strategy).  This is how all
reinforcement learning algorithms currently work.

> The goals could be hard-
> wired, or they could be acquired via some low-level processes. (That
> seems to be the case with humans --- some goals are innate (breathing,
> eating, etc.), while others are acquired (a taste for oysters, a
> fondness for Bach).

The innate hard-wired behaviors we have such as breathing I mostly ignore
because that's not intelligence - or at least not a part we have a problem
duplicating in machines.  That is, we know how to hard-code specific
behaviors into machines already.  This doesn't work well for AI because AI
is mostly about the system's ability to learn these things on its own.  So
the behaviors we have to hard code into the system, are mostly learning,
and not actions.  That is, we must build a learning machine that learns on
its own how to act, instead of building lots of complex actions into the
system.

Learning systems must have primitive actions built into them which they use
as the foundation of all the higher level actions they will learn, but I
believe those primitive actions we need to build into AI must be extremely
primitive.

> But "goal origination" and "goal ranking" are separate issues. We need
> only posit that the system has various goals, or things recognized as
> "goods," which are arranged hierarchically and which the system will
> expend energy and other resources attempting to secure. There is no *a
> priori* constraint on what may count as a goal, as long as it is
> something the system can obtain, in principle, via action in the
> environment. Goals are arbitrary.

There has to be an underlying drive that causes us to pick one action over
another - or which allows us to weigh the worth of one goal over another.
That underlying system of value is what we must understand to build the
micro hardware to produce all these high level macro behaviors - such as the
selection of high level goals.

> The goal hierarchy is dynamic, i.e., when a goal is satisfied, it moves
> down or drops out of the hierarchy and another moves up. But the
> satisfaction gained from the first goal can gradually dissipate, and the
> desire for it begin to increase again, moving it back up in the
> hierarchy (recurring or cycling goals).

Yes, you are talking about very high level emergent human behaviors.  I'm
talking about what type of hardware I think is needed to produce it.

At the low level I believe we must build in very simple and very low level
micro behavior of some type (like producing an output value at some point
in time).  All higher level behaviors are then sequences of these low level
behaviors.

A fundamental power such a system must have is the ability to cause a
specific sequence of micro behaviors to be produced.  If for example, the
micro behavior is to produce a binary 1 or 0, the system might learn that
the sequence:

  101010101010

Is a useful sequence to produce, and it might also learn that the sequence:

  0011001100110011

Is another useful sequence.  In order to produce these sequences, the
hardware must create some internal state which controls which sequence
it is currently producing.  It can't produce both at the same time and get
the same result - just like we can't do two things at once and get a good
result.

This basic fundamental need to have state which controls how the system
produces a sequence, is the same fundamental requirement it has for
producing goal directed behavior.  That is, the power to produce one
sequence of micro behaviors instead of another, is a goal.  It's the goal
of producing one sequence instead of another.

But at the same time, it needs to be flexible to switch in an instant from
one sequence to another.  That is, if the environment changes, a new goal,
aka a new sequence needs to have the power to override the old one -
forcing the system in effect to instantly switch to another direction.
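
A toy sketch of what I mean in Python (purely illustrative - the "goals"
here are just the two sequences from above):

# An internal state picks which learned micro-behavior sequence the system
# is currently producing, one step at a time, and a change in the
# environment can flip that state on any step.
SEQUENCES = {
    "goal_A": "101010101010",
    "goal_B": "0011001100110011",
}

class SequenceProducer:
    def __init__(self):
        self.goal = "goal_A"   # internal state = which sequence is running
        self.step = 0

    def switch(self, new_goal):
        # environment changed: the new goal overrides the old one instantly
        self.goal = new_goal
        self.step = 0

    def next_output(self):
        seq = SEQUENCES[self.goal]
        out = seq[self.step % len(seq)]
        self.step += 1
        return out

p = SequenceProducer()
outputs = [p.next_output() for _ in range(6)]   # 1 0 1 0 1 0
p.switch("goal_B")                              # something changed: new goal
outputs += [p.next_output() for _ in range(6)]  # 0 0 1 1 0 0
print("".join(outputs))                         # 101010001100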

In addition, because we are talking about behavior control of a robot with
many degrees of freedom, we have to also understand that the system needs
to support parallel and independent goals.  The right arm might need to be
working on one goal - following one sequence of events, while the left arm
is doing something else completely.  For example, the right arm is driving
the car, while the left arm is used to eat a sandwich, or hold a cell
phone.

And at the same time, some "goals" or "sequences" will cause the
coordination of many different output signals at the same time in parallel.
Such as, to make the right arm turn the wheel, many muscles have to work
together so they achieve the goal of moving the wheel to the right instead
of fighting each other and getting nothing useful done.

So the system for producing output sequences of micro behaviors needs the
power to support many parallel "goals" at the same time.

So I see the fundamental lowest level problem at work in "goal seeking"
to actually be the producing of useful sequences of micro behaviors.  So
just to raise the right arm the system has to produce a sequence of micro
behaviors to tell one muscle to contract while telling the other to relax
and do this for an extended period of time (500 ms).

These are behaviors the learning system must have the power to learn, and I
believe that if you create an architecture that can learn these very simple
low level goals, then all the high level goals (go to school and get a
degree so I can make more money and raise a family before I die), are what
falls out from the way the system is able to solve the problem for the
lowest level goals.

> Which goal is pursued at a given moment depends not only upon its
> present rank in the hierarchy, but upon the opportunities currently
> presented by the environment. The environment is also dynamic, and
> presents opportunities more or less at random. The system must calculate
> a "payoff matrix" in which its goals form one axis and the state of the
> environment the other. The system pursues the goal with the highest
> expected payoff in the current environment. To calculate that payoff, of
> course, the system must be able to assess the difficulty of overcoming
> any obstacles in the way of the goal.

Yeah, I like the way you are thinking there, but I think that has to happen
at a very low level.  The current state of the system has to cause it to
follow one path of producing sequences of micro behaviors instead of
another.

I think the "payoff matrix" is the internal value system created by the
reinforcement learning algorithm which is in effect picking micro behaviors
based on what micro behavior is the best reaction to the current state of
the environment.
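
As a cartoon of that (a hand-filled value table standing in for what the
learning algorithm would actually have to build up from experience - the
states, behaviors, and numbers are all made up):

# The "payoff matrix" as a learned value table.  Picking the next micro
# behavior is just a direct lookup - no planning, no search at act time.
Q = {
    # state                       values for each micro behavior
    ("hungry", "with_people"): {"ask_for_food": 0.9, "wander": 0.1},
    ("hungry", "in_jungle"):   {"ask_for_food": 0.0, "wander": 0.6},
}

def pick_behavior(state):
    values = Q[state]
    return max(values, key=values.get)   # best known reaction to this state

print(pick_behavior(("hungry", "with_people")))   # ask_for_food
print(pick_behavior(("hungry", "in_jungle")))     # wander

The point is that acting is just a lookup; all the hard work goes into
building up those numbers as the system lives.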

> (That is where the "cognitive
> model" of the environment comes in --- the system can "run scenarios"
> using the model, and thereby assess the efficacy of various strategies
> for surmounting the obstacles).

Yes, at a high level, our brain will play out a sequence and use the result
of how that ends up to condition our future actions.  I'm not really sure
about how to best implement that in the approach I look at.

I think the specifics of how that is produced can be ignored until later in
my research because I think there are other lower level behaviors that need
to be resolved first.

To me, the first task at hand is to make a system that is good at simply
learning to react to the environment directly.  With this sort of simple
reaction system, the machine would be forced to deal only with what was in
front of it and not have the power to "think about" something else.  But in
solving this problem, I believe the system needs to transform raw sensory
data into a good compressed representation of the state of the environment.
To do that, it must build associations that give it the power to predict
what will happen next.  Data compression, and data prediction, are two
sides to the same problem.  I think in solving this issue, the result will
be a system that also is very good at predicting how a sequence of events
will unfold.  But how and why does this prediction system (or parts of it)
become disconnected from the environment so it can be used to "run
scenarios"?  This is the part I don't fully understand, but I have a few
ideas as to why and how that would happen.

However, I think solving the first low level problem is the best place to
start, because the result of that will be a prediction system that has the
power to "run scenarios".

> > This is key to understand because it shows that there is no single
> > universal definition of good and bad.  Good and bad is defined by
> > stuff hard-coded into the intelligent hardware.
>
> I agree you can't define "good" and "bad" for goals, but you can define
> it for strategies and behaviors, relative to goals.

Well, from the way I think we need to attack this, I don't think the system
really has any direct representations of "goals" in that sense (like the
goal of going to the store to get bread, or the goal of raising the right
arm).  I think it only has short term memory of past internal state which
works to direct how the system reacts and what path it takes.  I think the
way it all falls out is better seen as a chaotic system with various
strange attractors that shape the path of micro behaviors the system is
producing.  And that path the system is currently following allows it to
achieve a goal - even though the internal implementation didn't require the
system to "pick" a "goal" from a list.  Instead, it's just a path the
system is taking that tends to make it circle around a strange attractor.
If conditions change enough, the chaotic system can switch to a behavior of
being attracted to a different strange attractor at any instant in time.

The problem here is not building a list of goals, but instead building a
chaotic reaction network that tends to evolve over time around different
strange attractors that shape the sequences of actions it likes to
produce.

This of course is very abstract hand waving, but I think it might give you
an idea of the type of implementation I'm looking for.

> > The most general way however, and the one I think we have to use to
> > create true human level AI, is generic reinforcement learning.  This
> > approach actually makes the problem a lot harder, but it's the
> > approach I think these birds and humans use.  With reinforcement
> > learning, all concepts of good and bad are implemented by hardware
> > which produces a single reward signal which is fed to the learning
> > machine.  The machine is then constructed so as to try and maximize
> > the long term value of this reward signal.
>
> Ok. But the reward signal would be generated by the "goal engine" (the
> module which encodes or acquires goals and drives their dynamics).

You are thinking at a level which is 5 levels above the hardware I'm
talking about.  You are talking about behaviors which I expect to emerge
from the type of hardware I'm thinking of instead of the type of function I
think we need to build into the hardware.

The whole trick here is that if we are building a strong learning system,
then which features that we see in human behavior are learned, and which
are built-in modules that we must hand design?

I think for true strong AI to work, we have to build a very generic
behavior learning system that operates at an extremely low level, and that
most of the types of things you seem to be talking about here, like a "goal
engine" are not modules we build into the hardware, but simply the way we
like to describe the behaviors that emerge from hardware that is operating
at far lower levels.

> The
> signal is strengthened when the "current working goal" --- the goal with
> the highest value in the current payoff matrix --- is attained or
> progress toward it made.

Yes, I think that sort of effect is important.  But I think it is happening
at a much lower level - the level we talk about as the back propagation of
values in a reinforcement learning algorithm.  This is used in
reinforcement learning systems as the foundation of how the system will
adjust its internal values to act as predictors of future rewards.  The
idea is to assign values to stimulus-response reactions based on their
expected future rewards - and we calculate expected future rewards, by
constantly back-propagating real rewards.
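
A minimal sketch of that kind of value back-propagation (a TD style update;
the states, the learning rate, and the discount are all made up for
illustration):

ALPHA = 0.1     # learning rate
GAMMA = 0.9     # how much future reward is worth right now

value = {"far_from_food": 0.0, "near_food": 0.0, "eating": 0.0}

def td_update(state, reward, next_state):
    # nudge the state's value toward the reward it just produced plus the
    # discounted value of whatever state came next
    target = reward + GAMMA * value[next_state]
    value[state] += ALPHA * (target - value[state])

# live through the same little episode a number of times
for _ in range(50):
    td_update("far_from_food", 0.0, "near_food")
    td_update("near_food",     0.0, "eating")
    td_update("eating",        1.0, "far_from_food")

# the real reward only ever arrives at "eating", but its value has been
# back-propagated so the earlier states now predict it
print(value)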

> > The first big problem with trying to build a system driven by a reward
> > signal is the credit assignment problem.  Something the machine does
> > at one point in time, might cause an increase (good) or decrease (bad)
> > to the reward signal at some point in the distant future.  So when the
> > reward signal increases, which of the behaviors in the past 10 years
> > that this machine produced was the cause of that increase?
>
> No. The system is always driven by the instantaneous value of the reward
> signal; it is not cumulative. When the reward signal reaches a
> threshhold, it cycles the "goal engine," which demotes the current goal
> and promotes a new one (it recalcs the payoff matrix), which drops the
> reward signal back to its minimum. The system keeps going, seeking to
> raise that signal strength again. The system *could* discover that a
> past behavior produced a current "bad" (a negative goal, i.e.,
> environmental features or situations the system seeks to avoid). It will
> anticipate many of those when it "runs its scenarios." But if it
> encounters one it had not anticipated, it revises that scenario so that
> it will appear in future runs.

Well, I agree with the things you are thinking about, but I think the
implementation will have to be very different than what I believe you are
thinking of.  Most of what you talk about as if it were hardware we had to
build, I see as emergent behaviors of a far simpler underlying system.

I don't think a working AI system, for example, has time to pause and
"calculate a new payoff matrix".  Or search a list of goals to
see which one is second in line after the current one is complete.

If you look at how most reinforcement learning algorithms work by
iterative improvement of a value matrix while the system runs, I think
that's the type of system that is required to make strong general AI.  That
is, all calculations are done on the fly, with a little work done at each
step.  The idea is to use update formulas that tend to converge on optimal
answers, instead of trying to re-calculate new optimal answers at different
points in time.

> > The way we
> > attack that problem is to build reward prediction into the system.
> > The system must be able to predict future rewards.  That is, when
> > something changes in the environment, it must be able to sense that
> > the change just made things look better for the future, or worse for
> > the future.
>
> Yes. It recalcs the payoff matrix. Goal attainment triggers a recalc,
> and so can perceptible environmental changes and shuffling in the goal
> hierarchy. The "goal engine" may have a randomizer also, so that goal
> re-ordering sometimes occurs randomly.

Yeah, in my view, that type of approach just can't work.  There is no time
to "pause and recall a payoff matrix as a trigger from completing a goal".
I believe you are thinking at much too high of a level to be able to build
something that works for anything other than toy domains.  Though I agree
with all the basic ideas you are presenting, I don't like the
implementation you seem to be suggesting.

> > If you have a good reward predictor in the hardware, then you don't
> > have to wait seconds, hours, or days, before the hardware can
> > determine if something it just did was good or bad.  If a behavior
> > causes the reward prediction system to indicate that things just got
> > better, then it can use this fact to judge the value of the behaviors
> > just produced.
>
> That is the advantage of the "world model." Behaviors can be enacted
> virtually.
>
> > So that's the idea behind reward prediction. It
> > makes it possible to learn the value of a behavior, even when the
> > behavior has no short term change to real rewards - even if the
> > behavior causes a short term drop in rewards.
>
> Behaviors have value only relative to problem situations.

Well, I think the current state of the environment is always the current
"problem situation".  But part of that state is feedback from what we have
been doing - and that's the part of the state of the environment which
directs us to follow one path of behaviors vs another (aka one goal vs
another). It's what happens in the motor cortex (including the frontal lobes
which are part of the motor cortex) which is what creates our goal driven
behaviors.

> The same
> problem situation can impede attainment of different goals. Successful
> behaviors can be remembered, but they'll only be useful in problem
> situations that are sufficiently similar. Those will be "rote
> behaviors," and can be acquired by operant conditioning, or by
> emulation.

I believe EVERYTHING is acquired by operant conditioning (plus classic
conditioning but that's a minor point).

Emulation is not something built in - it's yet another example of a learned
behavior acquired by operant conditioning in my view. We both learn how to
emulate, and learn the value of when to emulate and when not to emulate, by
operant conditioning I believe.

Operant conditioning is the simple low level learning system that has the
power to explain how all other types of learning can be learned.

> Interesting behaviors, however, are not rote. They are
> generated *de novo* to solve the problem at hand.

But if the environment IS the problem at hand, then all behaviors are
generated as rote reactions to solving the problem at hand.

> *Behaviors* are
> sequences of movements, and are constructed "on the fly," much as
> sentences are strings of words constructed "on the fly." The system must
> be able to generate a suitable behavior for the problem at hand, using
> rules of construction built into the system and also learned from
> experience, much as sentence construction follows morphophonemic,
> syntactic, and semantic rules (rote behaviors are similar to adages and
> cliches). The system must analyze the problem, virtually construct a
> behavior sequence that will solve it, run the scenario, carry out the
> "best bet" behavior, and then assess the result. It then says, "Yippee!,
> or "Back to the drawing board . . ."

Nah, that's silly in my view.  The brain isn't doing anything like
"constructing multiple sequences, running multiple scenarios, evaluating
which will produce the best result, and then executing that".  There's no
time for it to do any of that.

If a rock suddenly flies towards us, or a lion shows up, or we start to
lose our balance and fall from a tree, we must react as quickly as possible
or else we would die.  The brain has no time to run scenarios.  It must
have pre-calculated what the best way to react is long before the action is
actually needed.  And that's exactly how all reinforcement learning
algorithms work.  They constantly adjust values on-line when rewards are
received and as lessons are learned, so that when it's time to act, the
answer is already there for them.  I think this is the type of foundation
which produces all the intelligent actions you talk about, but the things
you talk about, are emergent effects from this underlying system which is
pre-calculating the value of all reactions.

> The overall efficiency of behaviors will reflect the system's talent for
> inferring or abstracting an efficient set of rules for constructing
> behaviors, and for evaluating behaviors virtually.
>
> > 1) key idea is that the purpose of the system is behavior selection -
> > what do I need to do now by direct computation and not a search
> > through a list.
>
> No, it is behavior *generation* or construction. Only rote behaviors are
> available for selection.

Right, I think the hardware must have built in primitive behaviors which I
think you might be calling "rote behaviors" and the whole problem we must
solve is how to make a system generate useful sequences of these low level
primitive behaviors (which it seems you are calling behavior generation or
construction).

However, we construct a complex sequence one step at a time.  And if we
look at the logic which picks the next primitive behavior, we are looking
at the logic which is constructing the correct sequence.  So again, it's
still, "behavior selection" at the lowest level.  If you select each
behavior correctly, then you end up "constructing" the correct useful
sequences.

> > 2) We use a reward signal combined with a reward predictions system to
> > evaluate the success of all behaviors (learning is on-line - it never
> > stops).  BTW, though I didn't talk about, the micro features and their
> > "lessons learned" are also used to create this prediction system.
>
> Yes. The prediction is accomplished by running virtual scenarios.

I think the "running" is happening in real time, so that when it's time to
"pick" the next step, there is no computation needed or scenario to run.
The answer was computed before it was ever needed.

The trick is to find an architecture that can do that.

All reinforcement learning algorithms work that way already so you can
study any of them to get a feel for how that can be done.

For example, it's typical to write game playing programs like a chess
program so that it figures out what move to make for a given board position
by "running scenarios" - aka searching the game tree and trying different
sequences of moves and making guesses about how the environment will react
to those moves (aka guess what move the other player will make in
response).
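
That "run the scenarios" style looks roughly like this (a skeleton minimax
over a made-up three-branch tree, nothing chess specific):

# Search the game tree before acting: try my moves, guess the opponent's
# replies, and only then pick.
game_tree = [            # our move: pick one of these branches
    [3, 5],              # ...then the opponent picks the worst of these for us
    [2, 9],
    [0, 7],
]

def minimax(node, our_turn):
    if isinstance(node, (int, float)):          # leaf: a payoff for us
        return node
    child_values = [minimax(child, not our_turn) for child in node]
    return max(child_values) if our_turn else min(child_values)

best = max(range(len(game_tree)),
           key=lambda i: minimax(game_tree[i], our_turn=False))
print(best)   # branch 0: the opponent can hold us to 3, better than 2 or 0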

But with TD-Gammon, a backgammon playing program that trained itself by
reinforcement learning, it didn't work that way (at least not the first
version).  Instead of "running a scenario", it used a neural network to
store its knowledge of what moves worked best in the past.  After winning
or losing a game, it would back-propagate this good/bad information to all
the moves and board positions in the game and adjust the neural network
values.

The neural network then acted as a strong evaluation function so that given
any board position, the neural network could instantly produce a "value"
for each potential move.  It didn't have to do a tree search of the game
tree (run a scenario" in order to "known" what was the best move. The
network gave it a direct and very fast answer to the question.  In effect,
every game the program played in the past were the scenarios that has been
"run" so that it didn't have to keep re-running (re searching the game
tree) all the time.

To make strong AI, I think the same approach has to be used at the low
level.  The network "picks the next move" based on all its stored knowledge
from past experience.  The network is there to store the results of all
past scenarios so that you don't have to re-run the scenarios to figure out
how to react to that rock flying at you.

At the high level, we might "run scenarios" in our mind when we play a game
of chess.  So this type of behavior does very much take place in the human
brain.  But I think that's all high level emergent behavior which falls
out from a low level system that is using a network that is storing all the
past lessons learned in a way that gives it a quick and direct way to
compute the best micro-behavior to select next.

Though I agree with everything you say in that it must happen somehow
in AI, I think the difference between an approach that can work on real
world problems, and one that has no hope of scaling, is all hidden in the
implementation details.  And the type of model you seem to be talking about
(recalculate a payoff matrix based on completion of goal triggers) strikes
me as exactly the type of model that can't hope to scale and be workable
for the class of real time, high dimension, problems the brain deals with
and that our AI must deal with if it's going to equal human intelligence.

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
11/28/2008 10:12:10 PM
I've been saving this message to follow up to.  There is much I have to say
about your comments but I've not had the time to write the follow up.

I'll follow up to your idea of percept vs concept here and leave the rest
for later...

"Publius" <m.publius@nospam.comcast.net> wrote:
> "Curt Welch" <curt@kcwc.com> wrote in message
> news:20081121102356.446$9G@newsreader.com...
>
> >> There is no perception of self, except insofar as the "self" includes
> >> the body.
>
> > The perception of one's body is all there is to have a perception of.
> > What else can the "self" be other than a perception of one's body?
> >
> > Would you like to suggest the self is a perception of the soul
> > perhaps?
>
> As I just said, it is not *perceived* at all.
>
> >> The "self" is a construct, a model of the system synthesized and
> >> inferred from the current states of other brain subsystems (those
> >> which process sensory data) and stored information regarding past
> >> states of those subsystems. So we have a *conception* of self, not a
> >> perception.
> >
> > There is no real difference in my view between a conception and a
> > perception.  It's just playing word games to make it seems like there
> > are two different things here when there is not.
>
> Eeek. Then you'll have a hard time accounting for the mechanisms of
> consciousness.

I can account for all of it.  Not hard at all. :)

> The distinction between a percept and a concept is a
> fairly crucial one, and quite sharp. I doubt that you use those terms
> interchangeably in daily speech.

Daily speech has almost nothing to do with how the brain works. :)

> I have a perception of a tree if I'm
> standing before one and viewing it. I have a conception of a tree if I am
> an artist drawing one from memory, ...

That's a fine definition.

> A perception is a data stream arriving over an open sensory channel. A
> *concept* (or conception#) of a thing is an abstract and idealized
> virtual model of the thing, synthesized by the brain from assorted prior
> perceptions of the thing, together with any imagined properties added to
> the concept for theoretical reasons.

Ok, so let me explain to you what I suspect is happening in the brain which
allows percepts and concepts.

As I've said in other posts, I believe the brain is translating raw sensory
data into a large collection of signals which represent micro-features
present in the current environment.

If we look only at the visual system, we see that the raw sensory
micro-features we start with are basically signals which represent the
amount of light at different locations in the retina.  So we start with a
large collection of micro-features even before we do any processing.  But
through the processing which happens in the network, this set of
micro-features, which contains a lot of cross-correlations in the signal,
is transformed into a set of micro-features with very little correlation -
signals which are fairly statistically independent.  That is, knowing the
value of one of these high-level micro-features, you can't statistically
predict anything about the values of the other micro-features.

So the low-level micro-features start off meaning things like "light at
location X on the retina", and high-level independent micro-features might
represent something more along the lines of "dog in visual field".
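
Here's a minimal sketch of that decorrelation step, using plain PCA
whitening in Python as a stand-in for whatever the network actually does
(whitening only removes linear correlations, which is weaker than the full
statistical independence I'm describing, but it shows the idea):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))          # fake raw sensory micro-features
X[:, 1] = X[:, 0] + 0.1 * X[:, 1]        # build in a strong cross-correlation

Xc = X - X.mean(axis=0)                  # center each micro-feature
cov = np.cov(Xc, rowvar=False)           # correlations between raw features
eigvals, eigvecs = np.linalg.eigh(cov)
W = eigvecs / np.sqrt(eigvals + 1e-8)    # whitening transform
Z = Xc @ W                               # the new, decorrelated micro-features

# The transformed features are (linearly) uncorrelated: to first order,
# knowing the value of one tells you nothing about the others.
print(np.round(np.cov(Z, rowvar=False), 2))   # approximately the identity matrix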

When we look at a scene, we know what we are looking at by the set of all
micro features the data activates in us.  We know we are looking at a dog,
because the micro feature (or features) which represent a dog have
activated.

These signals in turn regulate what type of behavior we produce in reaction
to seeing a dog (along with all the other micro-features we see at the same
time).

So when we see a dog (have a percept of a dog), a large set of
micro-features activate at both the low levels and the high levels.  There
are micro-features which activate for every little detail of the dog (hair
color, size, length of legs, type of ears, type of tail) as well as the
higher level features like "dog leg" and "dog head" and just "dog".  The
visual sensory data flowing into the network causes this huge constellation
of micro-features to activate, and the behavior we produce is a direct
response to that huge constellation of micro-features.

However, the network of micro-features also tends to cross stimulate each
other.  It must do that simply to improve its classification accuracy.  If
the "dog" micro-feature tends to show up at the same time the "dog house"
feature shows up, the two micro-features will be cross-wired to reflect
this correlation.  When one activates, it stimulates the other to indicate
a higher probability that the other might activate.  It needs that
cross-stimulation function simply to correctly decode the raw data into the
high-level micro-features in the first place.
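
A rough sketch of that cross-wiring idea, with feature names invented just
for the example (this is an illustration of co-occurrence-based association,
not a description of the actual wiring):

import numpy as np

features = ["dog", "dog_house", "cat", "fish_bowl"]
# Each row is one observed scene: 1 = feature active, 0 = not active.
scenes = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 0],
])

# How often did feature j fire when feature i did?  (a crude estimate of
# P(j active | i active), which becomes the cross-stimulation weight)
counts = scenes.sum(axis=0)
assoc = (scenes.T @ scenes) / np.maximum(counts[:, None], 1)
np.fill_diagonal(assoc, 0.0)

def cross_stimulate(active, strength=0.5):
    # directly-activated features add lateral stimulation to their partners
    return np.clip(active + strength * (active @ assoc), 0.0, 1.0)

seen = np.array([0.0, 1.0, 0.0, 0.0])    # we see the dog house, but no dog
print(dict(zip(features, np.round(cross_stimulate(seen), 2))))
# "dog" gets partial activation even though no dog is in the sensory data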

However, the side effect is that some of the high-level "dog" micro-features
can be stimulated into action even though there is no dog stimulus in the
current sensory data.  The network is simply forced to "think about" dog
because of all the things in the stimulus field that are predictive of a dog
being there.  This is valid simply because even if we don't see a dog, our
brain is predicting a high probability that a dog is here, and as such we
should be acting (producing behavior) as if there is a dog here.

This natural ability of high level micro-features to cross activate each
other gives us a potential answer to how we are able to have concepts -
that is, how we can think about things that we are not directly sensing.

Something we sense, causes the "dog" feature to activate, and the "dog"
feature then activates other micro-features like "dog barking", which
activates the "vision of friend acting scared of dog", which activates the
thought of "friend eating sandwich yesterday", which activates ...

So when we perceive a dog, a large constellation of micro-features both
lower level and higher level activate to represent not only the high level
idea of seeing a dog, but all the lower level features that exist at the
same time.  But when we "think about a dog" (a concept) the low-level dog
features are not currently active, and it's other high-level features
that caused this dog feature to activate in our mind.

So "dog" is represented internally exactly the same for "precept" and
"concept" in this model of how it all works.  We act differently to the
precept then we do to the concept simply because the low level constipation
of details is missing in the case of "concept".  This is why when we "think
about" a dog, all the fine details are missing from our thoughts.

> Since the "problem of consciousness" consists in large part of explaining
> how the brain is able to generate those concepts, denying that they
> exist, or that they are identical with percepts, would leave you with no
> problem to solve. Or at least, very ill-prepared to solve it.

Well, there's my answer above.  All explained.

A concept is just a partial percept.

There is no denying that we can think of a dog even when there is no dog in
front of us.  But yet, "thinking of dog" and seeing a "dog" are basically
the same thing - we have one internal signal which represents "dog" (or set
of smaller micro-features) which activates in both cases.  The only
difference is that when we see a dog, we also have the activation of all
the fine details.

The ability to think of a dog when there is no dog in front of us is not
just a feature needed to allow "cognition" (if that's what you want to call
it).  It's the feature of the brain we need to allow us to understand that
even though the dog is hidden from view because he ran inside his dog
house, we can still sense he's there.  The brain network predicts the "dog
feature" is still part of the current environment even though we can't
currently see him at all.

I think the human ability to let our mind wander way off subject and think
about dogs, or what we are going to eat for lunch, while we are taking a
shower (for example) is just an extreme case of how the human brain has more
freedom to wander using these high-level cross-associations than I suspect
some of the lower animals have (like dogs and chimps).  But I think this
ability to sense "dog" in the environment even when there is no dog is
simply required for the basic operation of the system even in dogs and cats
and chimps.

I do agree that a big part of the "mystery" of consciousness is tied to the
odd effect which results when our brain wanders from one percept to the
next like this.  It's not hard to accept why we "think of dog" moments
after the dog vanished from sight inside the dog house.  But it's a very
different thing to be able to think about the dog a week later while we are
trying to paint a picture of him. It's clear our mind has the ability to
"see" things that just aren't there (and shouldn't be there).  And our
ability to "hallucinate" (so to say) in this way is a big part of what it
means to have human consciousness.

While on this subject, just imagine what it would be like if, when we "think
about dog", not only the high-level micro-features activated, but a lot of
the low-level features activated at the same time?  If too many
of the low level features activated, we couldn't differentiate this state
from the state of seeing a real dog.  In fact, we would think we were
seeing a real dog.  We would be hallucinating.

This, I'm sure, is the key to explaining hallucinations - they are
"concepts" whose set of activated micro-features is so similar to a
percept's that the response the brain produces to that set turns out to be
the same response it would produce if the brain were exposed to real
sensory data representing the same object.

Well, that's my response to the first idea you put in the post.  You said
about 10 other things I need to respond to as well, but this is enough for
this post....

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
11/29/2008 1:21:47 AM
casey <jgkjcasey@yahoo.com.au> wrote:
> On Nov 26, 10:59 am, c...@kcwc.com (Curt Welch) wrote:
>
> > The only point to studying mind and consciousness is
> > to understand the illusion.
>
> Driving along the road you may have the illusion of
> shimmering water in the distance. That it is water is
> an illusion, that you experience it is not an illusion.
> You do not explain the illusion, you deny it, claiming
> it is a conditioned belief we have been trained to see
> the water but it doesn't exist. You claim there is
> nothing to explain. But illusions can be explained
> Curt. An illusion is not a nothing. And it is not
> simply a conditioned belief. Or are you going to
> say a mirage doesn't exist we only think we are
> seeing a mirage because we were trained to think
> we were seeing a mirage?
>
> To me that is silly beyond belief.

Give it up John.

I've explained this to you at LEAST 20 times and you still don't understand
any of it.  You probably never will.

But, just one more time...

The ILLUSION OF CONSCIOUSNESS (that I talk about all the time) is that
private mental events (our thoughts, aka subjective experience) don't seem
to be neurons firing, even though they are.

I am DENYING NOTHING!!!!  I know we have private mental events, and that
these form our subjective experience of the world we live in.

I also have never, ever, even once, said there is nothing to explain.

There is something here to explain.  What we have to explain is why our
subjective experience doesn't seem to be neurons firing when they are.

But I've explained exactly why this happens many times to you as well.

But yet, you don't understand any of it.  You don't understand the
explanation. You don't even understand that it's been explained to you.

And not only do you fail to understand, you accuse _ME_ of suffering an
illusion when I tell you that I'm able to sense my neurons fire.  Again, it
goes right over your head without any understanding of what I've explained
to you 20 times.

We all experience this same illusion for a very simple reason.  We don't
see our private subjective experience as being the neurons firing because
we have never been exposed to data which allows us to correlate neurons
firing with subjective experience.  The brain builds links between
different stimulus signals based on correlations, and for most of us there
haven't been any such correlations, because we have never been able to see
our neurons fire while having the subjective experience at the same time.

Let's call that 21 times I've explained it to you now.  But, like all times
in the past, I don't expect you to get anything out of this time either.

The problem here is that you think the illusion is real.  You think that
not only do we have neurons firing, but that they somehow _create_ this
other thing, called subjective experience.  And as such, you think we need
to explain how the "other thing" gets created.  And your current favorite
answer is to say it's produced by some brain process we don't understand.
But there is no such thing as "subjective experience" which is
_separate_from_ brain activity, so in that sense, there is nothing to
explain.  All there is to explain is the brain activity.

The basic nature of the process which creates the illusion I do understand,
and if you want to understand, all you have to do is keep reading what I
wrote above, until you understand what those words mean.  All the mysteries
of consciousness are explained in the words above if you care to read them
and understand what they imply.  There are a TON of people in this world
that are simply unable to understand those words - so you are not in any
sense alone.

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
11/29/2008 1:43:06 AM
On Nov 28, 5:43 pm, c...@kcwc.com (Curt Welch) wrote:


> Give it up John.

I have given it up Curt. I will let you continue
with your crusade without comment.



0
casey
11/29/2008 10:41:48 AM
"Isaac" <groups@sonic.net> wrote:
> "Curt Welch" <curt@kcwc.com> wrote in message
> news:20081116234549.142$ia@newsreader.com...

> Hebbian learning has been known since the '50s but that has not led to
> anything practical because it may be necessary but not sufficient.  For
> example, Hebbian learning does not even begin to solve the frame problem.
> Since this is so straightforward, how do you propose reinforcement
> training (i.e., Pavlov's dog) can be used to robustly deal with the frame
> problem?

I wanted to just follow up and see if we could get a discussion of the Frame
problem going since it seems to be a recurring interest to you.

BTW, Pavlov's dog experiment is an example of classical conditioning, not
operant conditioning.  But that's not important.

I've never understood why people think the Frame problem is even a real
problem, so you are going to have to give me some examples of real-world
things humans do that demonstrate our ability to solve the Frame problem,
but which you don't see how AI hardware could solve, and I'll be happy to
explain how I think AI hardware will solve it.

Using this as a starting point:

http://en.wikipedia.org/wiki/Frame_problem

It talks about the frame problem in AI starting as a logic problem.  That
form of the problem was created simply by trying to use logic as a system
of representing information about the environment.  It's a problem created
by picking the completely wrong implementation paradigm for trying to solve
AI and doesn't apply in that form to any other issue in AI.  As such, it's
just not important.

As I've explained in other messages, I believe the foundation of
intelligent behavior is a reinforcement trained connectionist network and
since it's not using that type of logic to try and represent the state of
the environment, the frame problem just doesn't apply.  If you think it
does, please explain how it does.

But, the wikipedia article also makes reference to the broader form of the
idea that shows up in philosophy which has something to do with "updating
beliefs" as to how the environment changes in response to an action.

I guess your issues are more along the lines of this general question somehow???

I believe AI will be created by a machine that learns by experience.  It
learns which reactions to a given state of the environment work, and which
don't.

Such a system simply needs to learn how the environment changes by
watching it change.  Its only "beliefs" are based on the assumption that
if the environment changed the same way the past 10 times in response to an
action, it can be expected to change the same way with a high probability
this time.

So where's the Frame problem in this?  A system which has the power to
"remember" how the environment has responded in the past and works on the
basis that whatever probabilities guided the change in the past are likely
to guide change in the future doesn't seem to have a frame problem to me.
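
A minimal sketch of the only kind of "belief update" I'm claiming is needed
(the states, actions, and counts are all invented for the example - this is
an illustration of the idea, not a finished design):

from collections import defaultdict, Counter

class TransitionModel:
    def __init__(self):
        # (state, action) -> counts of the next states actually observed
        self.outcomes = defaultdict(Counter)

    def observe(self, state, action, next_state):
        self.outcomes[(state, action)][next_state] += 1

    def predict(self, state, action):
        seen = self.outcomes[(state, action)]
        if not seen:
            return None, 0.0
        next_state, count = seen.most_common(1)[0]
        return next_state, count / sum(seen.values())

model = TransitionModel()
for _ in range(10):
    model.observe("door_closed", "push", "door_open")
model.observe("door_closed", "push", "door_stuck")

print(model.predict("door_closed", "push"))
# -> ('door_open', 0.909...): the 10-out-of-11 past outcomes drive the belief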

So what is the Frame problem to you and why do you see it as something
which is so hard to solve?

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
11/29/2008 9:42:07 PM
Curt Welch wrote:
> Josip Almasi <joe@vrspace.org> wrote:
>>
>> It's funny how you speak of brain as a distinct subsystem:)
>> When you do so, you obviously identify with something else, not brain.
>> Well, IMHO, *that* is consciousness. And reason to make the distinction.
>> Ability to think of self as an object. To think. It's on much higher
>> level than such a simple illusion you talk about. And furthermore, to do
>> so, it is necessary to separate from body/brain:)
> 
> Yeah, there's this real problem with how I talk that's highly inconsistent
> with what my real self image is.
....
> I think of myself as no different from how would think of myself if I were
> a man-made robot.  I'm just a machine wiggling my fingers typing a Usenet
> message.

And here it is again: 'think of myself'.

It's not just a phrase you know... thinking is some high-level abstract
process. And thought is taught:) Thus related to language.
I'm not saying there's anything wrong with your pulse sorting nets or
your model of the brain.
However, here we see some basic attributes of what we call
'consciousness', or actually I'd prefer 'mind', anyway - these magic
things that machines don't possess (yet), which you claim are merely an
illusion while I claim they are reality:
- self-consciousness - the ability to reflect/introspect
- the ability to think

Thanks to these abilities, the process of thinking has been explained in
much detail from Aristotle to today... actually IIRC from 'AI - A Modern
Approach' Aristotle described it as a goal-directed thing, much as Publius
here described intelligence.
And while we know a great deal about the process(es), we don't know much
about the ability.
As I said earlier - thought is taught.
And don't tell me it's unsupervised learning:)
IMHO it's not about a system architecture, but about how systems 
communicate, IOW exchange knowledge.
Much like you, I get arguments for that from my pet projects:)

Regards...
0
Josip
11/30/2008 11:20:09 AM
A second follow-up to the same message to address the illusion of
consciousness issues....

"Publius" <m.publius@nospam.comcast.net> wrote:
> "Curt Welch" <curt@kcwc.com> wrote in message
> news:20081121102356.446$9G@newsreader.com...

> > Yes, our body is part of the world, and our thoughts are body actions
> > - though most people have an invalid self-model that disconnects
> > their "self" from their body.
>
> To speak more precisely, our "body-model" is a component of our
> "world-model." The concept of self --- our "self-model" --- however, is
> broader than the body-model.

You are touching on stuff I bitch about all the time here - Dualism.  It's
what almost everyone is sucked into believing and it's just wrong.

For most people what you say is true because they have an invalid
self-model.  They believe the body and the mind are two separate things
instead of the correct model which matches reality where the mind is just
part of the body - the part we call the brain.

> It is conceived as the locus of experience

Right, it's called the brain and it's part of the body, like the hand is
part of the body. The hand is the locus of holding stuff - but yet we don't
consider the hand to be "broader than the body-model" as you say above for
the brain.

However, many people think the mind is something separate from the body
because they have a dualistic self model which is simply wrong.  This error
is rampant in humans because it's a naturally occurring illusion we all
share.  Even people that have learned enough to stop believing in the
ghosts and the soul still fail to understand that this apparent separation
of the body and the mind is only an illusion.

> --- a postulated entity which serves to unify experienced percepts and
> concepts. But though it is conceptually distinct from the body, it is not
> necessarily "disconnected" from the body.

Oh come on.  The "hand" is conceptually distinct from the body but not
necessarily "disconnected" from the body.  That's just stupid double-talk.

Why do you feel so compelled to justify your odd way of talking about a
brain but you don't waste such effort on talking about your hand or your
heart that way?  The answer is simple.  It's because you, like so many, are
sucked into believing the illusion is not just an illusion, but a truth
about the way the world actually is.  And as such, you are left with the
sticky problem of trying to come up with words to justify how the mind can
be part of the body, without being part of the body.  So you choose the funny
words:

   But though it is conceptually distinct from the body, it is not
   necessarily "disconnected" from the body.

Yet you would never feel compelled to say such odd things about any other
part of your body or any other function or behavior of your body.

> Indeed, the body is an element
> of the self-model. Whether the body (as we model it) *generates* the
> entire self (as we model it) and how, is an open theoretical question
> (most of us are convinced that it does; the main open question is,
> "How?").

It's only "open" in the sense that are some people who think it's open.
It's not open to me, and the "how" part really isn't an issue either to me
even though some of the details are.

The people that are still confused about consciousness are the ones that
still think the illusion is real.  The rest of us know the illusion of
separation is just an illusion, and the truth is that we are a human body
which has hands, feet, a heart, and a brain, and that's all there is to it.
All this talk about "the mind" is just the odd way we talk about our brain
as we see it in the illusion - as it exists in these false self models.

> The self-model will be "invalid" only if we answer that question
> wrongly, i.e., if the theory we choose lacks explanatory power.

If you choose a theory that says the mind is something other than the
brain, you have an invalid self model that conflicts with known facts of
the universe.  These conflicts are so well known, they have been given a
name and people devote huge efforts to talking and writing about the
conflict - it's called the mind-body problem.

But for some reason, the people left talking about it can't grasp that when
there's a conflict in the empirical data, it shows the model is invalid -
and you have to change the model so that it doesn't conflict with the data
to solve the problem.  The way you change it is to stop believing the
illusion is real, and come to grips with the fact that the mind is just the
brain.

> >> We accept this
> >> world model as "the world" (we are all realists by default). Yet,
> >> because the model remains available even when the world "goes away"
> >> (when we close our eyes, change location, or just direct attention
> >> elsewhere), we conclude there is a another "realm" where the world
> >> continues to exist --- "it exists in the mind." The notion of "mind"
> >> arises because we are able to contemplate aspects of the world not
> >> currently present to the senses, including past states of the world.
> >
> > Yes, but the "mind" has another name, it's called the brain.  Just
> > like a digital camera has a memory card which holds the images of the
> > world that "goes away" when the camera puts it's own lens cap back on.
>
> And would you equate the memory card with the images on it?

Of course I do.

And this is a very good example of how we extend the mind-body problem out
into the world around us.  Since we believe the mind is "something else"
that is "not-physical" we can say all sorts of stupid thinks about it, like
"it has no mass", and "it's not part of the body, but yet is someone
connected to the body".

I like to think of that last one as the chained soul.  It's what people who
believe in ghosts use.  They simply talk as if the ghost is still real, but
that it's chained to the body and can't leave it.

But we extend these bogus ideas to things like computers and software and
data and information.  We talk as if the image is not part of the card, but
that it's "stored in it" as water is stored in a bottle.  But we talk as if
this "stuff" stored in the memory card is not physical - that it's like a
"concept" and like our own self model - it's part of the card but not part
of the card.  It's in there, but yet it's not physical.

It's fine to talk that way as long as you don't actually start to believe
what you are saying.  But that's the rub - most people do believe it
because they have never taken the time to do the critical thinking required
to realize how wrong it all is.  Not to mention the fact that 90% of the
population they live in constantly confirms it to be true when it's not.

The image stored in the memory card exists as areas of charge in the memory
cells.  For flash cards, as I understand it, it's an insulated area that has
to use some odd tunneling effect to move the charge in and out of the
insulated cell.  But of course, a "charge" is not some odd non-physical
thing.  It's just a collection of physical electrons.  It's very much
physical stuff which is very much part of the physical memory card.  When
we "store" an image in the card, we are changing the card's physical
structure.  It's no different than changing the physical structure of a
piece of paper by adding carbon marks on it with a pencil.

The physical electrons piled up in different spots of the card are "the
image on the card" as much as the graphite marks on the paper are "the image
on the paper".

The only significant difference between the paper and the memory card is
that we can stimulate the correct part of our brain to understand the marks
simply by looking at the paper, whereas with the card, we need a special
instrument to "see the image".

Now the other issue here is that when people talk about something like "the
image on the paper", it's hard to tell if they are actually talking about
the image on the paper, or if they are in fact talking about how their
brain is reacting to the image on the paper.  These two things often run
together in common conversation because unless you are trying to talk about
brain function such as this conversation, the distinction isn't important.
We can talk about a rainbow for example without caring about whether we are
talking about the way rain reflects light or how the brain responds to that
effect, just like we can talk about sound without caring whether we are
talking about air vibrations or the way a human responds to the air
vibrations.

So when you ask the question:

  "And would you equate the memory card with the images on it?"

I say the image is part of the card because the electrons make up the image
and the electrons in the card are part of the card.  But if you were
actually talking about my _perception_ of the image, then that's a very
different physical event from the image itself.  My perception of the image
is how I react to it.

> If someone
> asked, "May I see the pics you took at the beach?," would you hold up the
> memory card, or would you show them the prints you made from the images on
> it?

When "someone" talks to me, I assume they are talking using normal English
language conventions and I act and respond using normal English language
conventions.  In normal English language conventions, we talk as if we
believed the dualistic story of humans having souls is real.  In order to
communicate with others in English and not confuse them, you must talk as
if you believed in such silly ghosts.  We talk for example as if the image
was separate from the memory card (as if it were a ghost in the card that
was there, but yet not physical).

When I debate issues of mind and body and consciousness in an A.I. forum,
my response is very different because I stop (as best as I can while still
using English) talking as if I believed in ghosts.

> "Mind" is certainly not "another name" for the brain. "Mind" is a term
> for the product of a brain process (or so we assume). It is something
> quite distinct from both the process and the machinery which "runs" the
> process.

Yes, people talk about that all the time and I write 1000's of lines of
crap here in c.a.p. trying to get people to wake up to the obvious fact
it's stupid as hell to actually believe that.

Consciousness IS NOT a PRODUCT of the brain.  It's not something "produced
by" the brain.  Bread is a product of the process of grinding flower and
baking it.  It's a physical product of a process.

Human behavior is the only physical product of the processes at work in the
brain.  Not some mysterious stupid ghost thing called a mind or
consciousness.

So if you want to talk about what is produced by the brain, you can call it
brain behavior, or human behavior, but it's not in any sense distinct from
the brain or the body (as you implied the mind was above by the odd way you
talked about it).  The body, and how all its microscopic parts move and
interact, is all that is there.  Everything else is a figment of the human
imagination and is no more real than pink flying elephants.  It's the
result of having a totally invalid self model.

I, BTW, don't have the self model you talked about above as if it were the
one and only self model everyone has.  My self model is a body with hands,
feet, heart, brain, and all the rest.  I am a physical thing interacting
with a physical world and that's all there is to me.

> > Does the fact that the camera can "see the images" even when it's
> > lens cap is on, make the camera "think" it has a mind which is somehow
> > separated from it's memory card?
>
> No, it would "think" it had some images which are not identical with the
> subjects of those images.

Yes, the image of the rock is not the rock.  That is easy to understand.
And you talk as if the camera would understand this.  But yet, you don't
for some reason seem to understand this about yourself.

The camera's images are part of itself because the electrons which
represent the image of the camera in its own memory are in fact part of
itself.  So, as with the memory card discussion above, the camera's self
image is part of its real self.  But its real self is the entire camera,
not just the electrons in the card for picture 3445, the self-image
it took of itself.

> > The fact that we have a brain is not the question here.  The confusion
> > is why do we call the brain "the mind" instead of calling it the
> > brain?  Whey do we have two words for one object?  Why have
> > philosophers wasted hundreds of years in endless debate of this
> > question when there is no question here to debate?
>
> We distinguish the brain from the mind because the two terms denote
> conceptually distinct entities --- entities which have entirely disjoint
> sets of properties.

Well, let's go back to this camera example where it takes a picture of
itself and this picture stored in the memory card becomes the camera's self
image.

Let's say the camera takes pictures of lots of things like this, and
everything it knows about is represented in all the images it has taken
pictures of.

We can call all these images the "mind of the camera" and we would be making
a direct parallel to how the word "mind" is used by humans.

But from above, I've already made it clear that the "images" are in fact
piles of physical electrons in different locations inside the camera, and
these electrons are as much a part of the camera as the lens of the camera
is part of the camera.  They're just as physical and real as the case, the
lens, and all the other physical parts we call the body of the camera.

Now, if you want to call these electrons the "mind of the camera" we can.
But then all we have done, is given another physical body part a name.

There is nothing about that physical body part which is any different from
all the other physical body parts of the camera.

But if the camera were to study the insides of other cameras, it would find
the memory card - and it would give it a name; it would call it the "brain"
of the camera.

And if it suffered the same confusion as humans do - and as you seem to -
it would go around saying things like "my mind is not the same thing as my
brain; the two terms denote conceptually distinct entities --- entities
which have entirely disjoint sets of properties."

Yeah, the mind is the electrons, and the memory card is the same electrons.
Doesn't seem too disjoint to me.

> I.e., for the same reason we denote any two
> distinguishable things with different terms. The "problem of
> consciousness" just is the nature of the relationship between "mind" and
> brain. If the two are identical there is no relationship (you need at
> least two things to have a non-trivial relationship) and hence no
> problem.

Exactly.  There are not two things and there is no problem.  The only
problem is that lots of people think there are two things because of the
illusion the brain creates.  But since empirical evidence shows clearly
there is only one thing there, they become confused and shake their head
and say "it's a hard problem no one can figure out".

I feel like I'm talking to idiots with an IQ of 80 when I see people say
that.

Nobody has any problem understanding that the electrons in the memory card
are the image and that the electrons are the memory card at the same time.
But yet, when it comes to a brain, so many people can't get past this idea
that consciousness is something disjoint from the brain when consciousness
is the brain as much as our hand is our hand.

> > We don't have two things, we have one.  You can call it the brain, or
> > the mind, but it's the same thing no matter which word you use.
>
> Surely not. The brain has a certain mass; your concept of it does not.

Only if you believe in ghosts, which I don't.  My concepts have mass exactly
like the image in the camera has mass.  The image is piles of electrons and
they very much have mass - just like the graphite marks on paper that make
up the image have mass.

Again, when I communicate to others in standard English, I'm forced to talk
as if I believed concepts have no mass.  But here, in this type of debate, we
have to stop talking like we are idiots from the 17th century and actually
understand that images, whether in the memory card, on a photograph,
or in our head, always have mass.

> The brain is an array of cells; your memory of your grandfather has no
> cells.

Well, to know exactly what the "memory of my grandfather" is, we would
have to know exactly how the brain works - which we don't - so we don't
know exactly what form a memory like that will take.  But what we do know
is that my memory is represented in the physical structure of my brain - so
whether it's cells, or the topology of interconnected synapses, or encoded
as proteins, it's got mass and the memory is a physical thing just like the
electrons in the memory card are physical things.

> The brain is composed of proteins and carbohydrates; your
> perception of a rose is composed of colors, shapes, scents, and tactile
> impressions.

My perception is the physical reaction my body has to those things.
Colors, shapes, scents, and tactile impressions are all different physical
reactions in my brain that, combined, represent some "memory" such as my
grandfather.

It's the exact same way that some piles of electrons in the memory card are
the camera's perception of red, and other piles of electrons are the
camera's perception of blue.

But, of course, this is tricky to talk about using standard English because
in standard English, the word perception, and words like phenomenal
properties, are defined to mean "stuff that exists in our non-physical mind
and which is not physical and which is separate from the neurons that
produce it".

But that's bull shit.  It's a common myth that was proven totally wrong by
science long ago but which a huge population of people don't seem to be
smart enough to understand - even many highly educated scientists.  And the
reason they have so much trouble is because the illusion of separation
between "mind" and "body" the brain creates is so compelling that most
people can't ignore it - for the exact same reason most people with
schizophrenia will believe the voices they are hearing are externally real
(not part of themselves) instead of understanding the voices are just
something their brain is doing.

> Moreover, those phenomenal properties --- the colors, shapes, scents,
> tactile sensations, etc.,  are the *primary* data from which all
> inquiries begin, including the inquiry into the nature of the brain, and
> the data against which all theories of the "external world" (including
> the brain) must be validated. The brain is a *construct* you have
> assembled from those very data.

Right, just as the electrons in the camera that are its "mind" are its
"construct" of the world.

> > The
> > argument that the mind is what the brain does doesn't cut it. The
> > brain is what the brain does.
>
> That is a very strange claim!

Yes, if you believe all the myths built into the English language are in
fact truths about reality, then it's a strange claim.  But if you are
able to use your reason to see the truth behind the illusion, instead of
assuming the illusion is real, then it's not strange at all.

> Are you claiming a television transmitter
> and "I Love Lucy" are the same thing?

When it's transmitting "I love Lucy" you bet it's the same thing.  Exactly
like the electrons in the memory card are both the memory card, and the
image.

You do understand that the way a radio transmitter works is by making the
electrons in the antenna vibrate, right?  This is a physical behavior which
is just as physical as moving electrons into different locations in a
memory card to form the physical image.

The physical vibrations are as much "I Love Lucy" as the physical behavior
of the actors was "I Love Lucy" when they made the video, or as much as
the physical behavior of the TV when it's displaying the show by making
its electrons vibrate in the right way.

So where does "I Love Lucy" exist?  It exists in lots of places and when we
talk about it generically, we are in fact making an off-hand reference to
the total collection of all the places it exists.  It exists as the
physical arrangement of particles on Video tapes in archives and on store
shelves.  It exists as physical arrangements of pits in DVDs.  It exists as
the vibration of elections in TV Transmitters.  It exists as the vibration
of electrons in TV receivers.  It exists as photographs, and words in books
and posters.  It exists as magnetic particles on the disks of the web sites
that about the show.  And most important - it exists as the physical brain
structures and physical brain behaviors as people watch the show, talk
about the show. or think of the show.  "I Love Lucy" is the sum total of
all these physical structures. It doesn't exist in one place (and it sure
the hell isn't a "mass less concept floating in some ether of
consciousness), it exists as the sum total of all these very phsyical
things.

If we wanted to remove "I Love Lucy" from the universe, all you have to do
is change all the physical structures that represent it.  Change the
magnetic particles on the tapes to a different structure, change the ink
marks on the paper, move the pits on the DVDs, and most important, change
the physical structure of every human that has learned lots of "I Love
Lucy" behaviors.

Once you do that (if we could), "I Love Lucy" would no longer exist in the
universe.  Well, except for the problem of radio waves propagating through
space, which creates a time-delay storage effect that is probably impossible
to eliminate from the Universe.

> Can we infer the plot of "I Love
> Lucy" from the circuit diagram of the transmitter, or the circuit diagram
> from the script?

"I Love Lucy" is stored in lots of places, but one place it is not stored,
is in the blue prints and schematics of a TV transmitter.

> > All objects are what they do - that's
> > what makes them an "object" in the first place - their behavior.
> > That's how we separate the world into unique parts - but their
> > behavior.  We know a cat is a cat because it doesn't act like a dog.
> > A cat is a cat because it acts like a cat.  "Cat behavior" is what a
> > cat does.
>
> No, Curt. The behavior of a thing is only one of the criteria we use to
> distinguish it from other things, and it is often not decisive. Are you
> suggesting we could not distinguish between a dead cat and a dead dog? Or
> a living from a dead brain, for that matter?

I use the word "behavior" to talk about _all_ properties of the object
because there are no properties which are not also physical behaviors.

We tell dead dogs from dead cats by the physical properties of the two -
which is what defines all their physical behavior.  We identify them as
dead because they have the behavior of lying still and being stiff and
stinking.  We identify the size by the behavior of how much space it takes
up.  We identify the shape by its light-reflecting behavior.  Etc., etc.

> Are you proposing that the distinction between structure and function be
> abandoned?

There are many concepts in English that need to be thrown out if you
actually want to understand the true nature of the world we live in and,
more important, the true nature of the brain and humans.

The concepts of structure and function imply a distinction which doesn't
actually exist in the universe.  These words, like many concepts in natural
language, are in fact just different ends of a single continuum.  Structure
refers to the behaviors which tend to be persistent and long term and
"function" covers concepts on the short term behavior side of the
continuum.

A bottle opener has the behavior of opening a bottle for a very short
period of time and then that behavior "goes away".  But the behavior the
metal demonstrates by "holding the form of a bottle opener" is one that can
last for hundreds of years.  But in the end, that behavior of the atoms will
go away as well.  The only difference between structure and function is the
expected duration of the behavior.  But there's no clear-cut line as to
how long the behavior has to exist before we call it a structure instead of
a function (or an action).

> > The answer to why there is all this confusion is as I outlined before.
> > It's because the model the brain builds to represent private thoughts
> > fails to correctly associate those thoughts with our physical world.
>
> The "physical world" is *itself* a construct among our private thoughts.

We have the electrons in the camera which are the camera's "understanding"
of the physical world.  Everything it knows about the physical world is
represented in the arrangement of electrons in the memory card.  But we
also have the real physical world the camera is part of.

When I use the words "physical world" I AM NOT talking about MY PERCEPTION
of the world represented by the physical structure of my brain.  I'm
talking about the real physical world we are part of.

What I understand about it is represented by the physical structure of my
brain, but what it is, is independent of what I think it is.  Whether it
exists is not up to debate.

> > For example, when we hear a book drop to the floor, our brain will
> > place that sound as being part of our physical environment.  We have
> > some ideas as to where in our environment this sound happened.
> > WE know which way to turn our head and direct our eyes to try and
> > locate the source of that sound instantly upon hearing it.  The fact
> > that we know which way to turn our head, to see what caused it, is one
> > of many association that instantly pops into our head from the
> > stimulation of the sound.  Other things, like the image of a book
> > might also pop into our head, because the brain has wired an internal
> > connection between the detectors that decode that type of sound, with
> > the detectors in our brain that represent the image of a book, and the
> > detectors that represent the spoken word "book", etc.  We hear that
> > sound and a whole constellation of associated detectors get activated
> > in our brain which represents our expected knowledge of what that
> > sounds "means".  These are the associations (physical cross
> > connections in our brain) that allows us to know that this sound was
> > not just air vibrating, but was a large hard-back book hitting the
> > floor in the room next to ours to the right which has the hard wood
> > floor and not the carpet.
>
> > But when you detect your are having a private thought about a blue
> > cube, where in the world is that physical event located?  Is the
> > thought located in the room to the right with the hardwood floors?
> > Which way do we turn our head to see the physical event which created
> > that signal?  What does the thing that created the physical event look
> > like?  Is it a square object filled with paper maybe like the thing
> > that made the signal we called "book hitting floor"?
> >
> > No, we have no associations like that to make use of.  We don't know
> > which way to turn our head to locate the thing that created the private
> > "blue cube" thought.  We don't know what the thing looks like.  We
> > don't know what it would feel like if we held it in our hands.
> >
> > The "thing" responsible for the the signal we call a though is called
> > a neuron. It's a real and as physical as the book is which is
> > responsible for the sound we heard.  And we know what neurons look
> > like because we have seen pictures in books and some of us have seen
> > real neurons.  But yet, when we have a private thought of a blue
> > cube, no image of a neuron pops into our head does it?
>
> Curt --- those "real and physical" things are every bit as much the
> products of neurons as the blue cube.

Not at all.

My _perception_ of the real and physical things is the action of the
neurons.  The real stuff is simply real.

Why this is hard for people I'll never understand.  My thoughts are all I
understand, just like the images in the camera are all it can understand,
but what my understanding clearly tells me is that I exist as part of a
large physical world and that my understanding is NOT the same thing as the
physical world.  I can talk about the physical world, or I can talk about my
perception of it.  I'm not talking about the same thing in these two cases -
other than the fact that my perception of the world is a subset of the
world - just like the camera's perception of the world is the electrons
that form the images in the camera.

> The "real and physical" things are
> inferred from (more precisely, are *constructed from*) those signals,

Yeah, the rock is inferred by the camera from the electrons in the memory
card.  It's not "constructed from" because the electrons in the card do not
construct a rock.  They control how the camera behaves when it's talking
about or thinking about rocks. (I hope all this talk about a camera
"talking" or "thinking" or "understanding" is not confusing you - the
parallels with the human brain should be easy enough to follow).  The
electrons are the camera's phenomenal representation of the rock.

> just as is the imagined things are inferred from neural signals. No
> images of neurons pop into our heads in either case.

They pop into my head.  If they don't pop into your head you probably
haven't yet figured out what you are.

> The "mind" is our
> term for the locus of all those experiences,

Yeah, that's called the brain.

>  "that which contains all
> these images, percepts, impressions, sensations, impulses, and feelings."
> And also that which seeks to explain them all by making associations and
> constructing models such as those of the brain and the "external world."

Right, that's the brain.

> You are, indeed, refuting yourself. You grant that the brain constructs
> models, yet argue that the brain, the process of construction, and the
> model are all identical. If they are identical, then any claim that the
> brain constructs models is meaningless, isn't it?

It's poor wording on my part and it only furthers the confusion I'm trying
to get people to break free of.  The brain does not construct models. It is
the model.  It's a model which conforms to its subjects by changing the way
it's wired.  Just like a memory card in a camera doesn't "save images" or
"construct images".  The electrons are the image.  The memory card changes
shape to become a model of the rock.

> > This is because the brain doesn't know how to model the private
> > thought sensory data.  It doesn't know what the cause of it is or
> > where that object is located in its model of the world.  Because the
> > brain doesn't associate these private thoughts with neurons for us, or
> > with some location in our brain, we are left with a model of the world
> > that is distinctly dualistic.
>
> Actually it is pluralistic. There are as many ontological schemas as
> there are realms of discourse. That one is just more fundamental than
> others. Unifying them is not necessarily impossible, but not always worth
> doing, either.

Sure, nearly everyone has a slightly different model formed in their brain
to represent the nature of the physical world they are a part of which
includes how they represent their own self as part of that world.  But yet,
there is only one physical world with one set of real properties - which
means that there shouldn't be so much debate about its nature.  But there
is, because once we turn our observation and perception tools inward onto
ourselves, we run into this illusion that throws most people for a total
loop.  There is no tool of science you can go down to the store and buy to
make the illusion go away.  And that's exactly the source of all this
confusion.  When people use their powers of perception on themselves, they
believe they see something "real" in them, which is uniquely different from
their hands and feet and heart.  When the brain senses what the brain is
doing, it believes it's sensing something uniquely different in the world -
something that is not the same as when the brain is sensing the hand
attached to the end of the arm.  But this "unique difference" is nothing
other than a classification error the brain makes because it doesn't have
enough data to see the truth.  But that statistical error (which is not an
error as much as it's simply a lack of sensory data), makes most people act
like they are schizophrenic.  They think their mind is different from their
brain.

It's the same error a camera would make (if it were advanced enough to make
such errors, which of course it's not), if the camera thought its
"memories" were not the same thing as the electrons in the memory card.

The argument we would hear from the camera would be something like this:
When I look at that flower I see blue, and when I look at that house I see
red, but when I look at the memory card, I see only a black card.  There
are no red and blue cards, so my subjective experience of blue MUST not be
identical with the electrons in the memory card.

> > We have all the "stuff" which is part of
> > the phsyical world, like the book, and the sound it makes when it is
> > dropped, and we have this other stuff which is separate from the
> > physical world, like "thoughts of a blue cube".
>
> What's wrong with that?

It's wrong.

> It merely means we have contrived a theory which
> partially explains some phenomenal events, but fails to explain others,
> and therefore require a different theory. But we'll continue to
> distinguish the *explananda* from the *explanans* in any case. It's
> logically necessary to do so.

Sure. But we know the correct answer and why more people don't grasp that
we know the answer is what continues to amaze me.  The physical world is
all there is.  Materialism isn't a theory with a few issues or one of many
possible answers, it's the one and only theory that correctly explains all
the facts.

The brain is all there is.  There is no mind which is separate from the
brain just like there is no image which is separate from the memory card.
When we talk about the mind, we are just using odd dualistic (or
pluralistic) language to talk about parts of the brain just like when we
talk about the image "stored on" the memory card we are actually talking
about physical parts of the memory card.

> > The way the world actually is, is often different from the way we
> > think it is.
>
> We have no way of knowing "how the world actually is."

Yes we do. It's called the scientific method.

When I drop a rock, it falls to the ground.  That is one of a billion
examples of "how the world actually is".  Though how my brain represents
the perception of watching a rock fall, and how it represents my thoughts
of such an event, is unique to me, it's still my knowledge of how the world
actually is.

What we know about the world, is statistical knowledge about the
probabilities of events happening.  These statistical facts are how the
world actually is.  These statistical facts are all we know about the
world.

What you are talking about (I believe) is this confusion over something
like "seeing red".  The idea is that "red" is something created by a brain
process which is our internal understanding of what red is "like".  And
that from this view, all we will ever know is our own internal "red" and
we will never know the "red" that exists in the real world.  That's all
just silly crap that comes from believing all this non-material
nonsense.

What we know is exactly like what the camera knows.  When it takes a
picture it "knows" what is represented in the data.  It knows with a high
degree of probability that the one pixel of light on the sensor was picking
up light with twice as much red-spectrum energy as blue-spectrum energy,
within the resolution limit of the sensor system.  This is a real fact
about the universe which is represented by our own internal phenomenal
property of "redness" (aka that data value as represented by electrons
moving in wires and pushed into memory cells).

All our knowledge is "about" the universe, but there's no sense in
talking as if our knowledge is indirect and that to "know" the real
universe we would have to have some type of direct connection with it.
There's no such thing - it's just an invalid idea to bring up.

When you say:

   We have no way of knowing "how the world actually is."

You are implying that there exists some type of "knowing" which "is how the
world actually is" which is different from our type of knowing.  You are
implying the existence of something like a god entity that knows the real
world where we are stuck only with the shadows.

The shadows are all there is to know - it's all there is, period.  It's
the only type of "knowing" there is.  If you use the word "know" you are
talking about the type of knowing we have.  To call it "a shadow" is
invalid because doing so implies there is something else better than the
shadow to "know" when there isn't.  If I know there was red light there, I
have direct knowledge about the red light.

Knowing is not complex. It's what happens when the state of one part of the
universe changes the state of another part of the universe and the result
is that the state of the second part of the universe ends up being
predictive of the other state. It's just information transfer from one
physical object to another.

When two billiard balls collide, the state of the second one changes (its
velocity and direction change) and in doing so, the new velocity and
direction represent some information about the state of the first ball.

All knowledge in the universe works like that and it's all partial
knowledge because the state of the second ball is never fully predictive of
the state of the first ball.

It's impossible for a tiny brain like the ones we have to know the full
state of the entire universe.  All told, we only know an
insignificantly small amount about the state of the universe and about the
probability of how its state tends to change over time.

But what we do know, is direct knowledge of the real universe - not
knowledge of some "shadows" which prevents us from "knowing" what the
universe is really like.

> All we will ever
> have is the phenomena we experience and the explanatory entities and
> processes we contrive to organize and unify that experience.

Yes, but that is what tells us "how the world actually is".

The fact that apples drop from trees and follow the laws of gravity in
their motion is not some figment of our imagination.  It's the way the
world actually is.  It's why we use the tools of science to separate the
errors in our perception of reality from the way reality actually is.

> >  That happens on all levels all the time.  I think I left
> > my keys in the kitchen, but in fact they are still in the car for
> > example.  My brain has a model of the world that indicates the keys
> > are in the kitchen, but this is a disconnect from reality.  The
> > brain's model of the world is just wrong, but yet we "believe" our
> > brain's model of the world because it's the only thing we have to work
> > with.  Our brain's model IS our world - it's the only world we know.
>
> Yes it is! And that is why all claims about "how the world actually is"
> are hollow.

Funny I talked about my keys.

Today, I took my sick dog out to the front yard, and in helping her around,
I saw in the grass a key to my car.  My first thought was something like:
"gee, my wife lost her copy of my car key in the yard - I'll have to tell
her about this one!".  But then, when I picked it up, I realized it was my
key to my car and not her's so I checked my keys and sure enough, my car
key was missing.  It had fallen off the night before when I was out with
the dog.

So for the whole night, and that morning, my brain's prediction of the
state of the universe was a high probability that my car key was on my key
ring when in fact the key wasn't.  And when I saw the key, my understanding
of the state of the world changed, but it changed incorrectly to the idea
that my wife's key was in the grass, but after collecting more data
(looking closer at the key), my understanding of the state of the world
came back into alignment with the real world.

What we know about the world is a lot of statistical facts about how it
changes state, and a lot of _estimated_ information about its current
state.

I currently know with a high degree of certainty that my car key is on my
key chain (I just checked it as I was writing this).  And since I know the
odds of a key not being on the ring 60 seconds after checking it are very
low, my model of the state of the world still has the key located on the
key chain.
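Here is what that kind of belief maintenance looks like as a few lines of Python
(a sketch only - all the probabilities are numbers I made up to mirror the key
story, not measured facts):

def bayes_update(prior, p_obs_if_true, p_obs_if_false):
    # Posterior P(hypothesis | observation) by Bayes' rule.
    evidence = p_obs_if_true * prior + p_obs_if_false * (1.0 - prior)
    return p_obs_if_true * prior / evidence

# Hypothesis H: "my car key is still on my key ring."
belief = 0.99   # keys very rarely fall off

# Observation 1: some car key is lying in the grass.
# Weak evidence against H - it could easily be my wife's copy.
belief = bayes_update(belief, p_obs_if_true=0.05, p_obs_if_false=0.8)
print("after seeing a key in the grass: P(H) = %.3f" % belief)  # ~0.86, explained away

# Observation 2: a closer look shows it's my key, not the copy.
# Strong evidence against H.
belief = bayes_update(belief, p_obs_if_true=0.001, p_obs_if_false=0.95)
print("after inspecting the key:        P(H) = %.3f" % belief)  # ~0.006, model snaps back

The exact numbers don't matter; the shape of the process does - a confident
model, a weak surprise that gets explained away, then a strong observation that
forces the model back into alignment with the world.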

All this knowledge is "what the world is really like".

These are not "hollow claims" about the world.  Keys stay on their key
chains with very high probability when they are not exposed to external
forces.

We can't have absolute knowledge about any aspect of the universe, we can
only have partial information.  That is why all our knowledge is
probabilistic in nature. There is no such thing as absolute truth in this
universe - and I can say that with almost absolute certainty!

But the probabilistic knowledge we do have, is direct knowledge of how the
world actually is.  The world actually has keys that stay on key chains
with a high degree of probability.

What I'm saying here is simply that the material world we understand by science
is the one and only thing there is to understand.

All the funny ways people like to talk about human brains by using words
like "consciousness" and "mind", and "phenomenal properties" is nothing
more than that - funny ways of talking about a straightforward biological
signal processing organ that makes our arms and legs move in response to
our sensory signals.

Consciousness is not something created by the brain (but yet disjoint from
it).  It's just a  funny word people like to use when talking about the
brain and the things it does.

All this confusion about the nature of reality comes from the illusion the
brain creates by incorrectly classifying mental events as being disjoint
from physical body events when they aren't disjoint at all - they are one
and the same thing.

When I pick up the pen sitting in front of me, and hold it with both hands
with my eyes closed, and feel it, and turn it over, my brain creates a
model of reality that tells me without any doubt that I'm holding one pen,
and not two.

Yet if I hold two pens, the brain will correctly build a model that
indicates there are two pens in the world and not one.  The feelings in my
fingers are almost identical in these two cases, but there is enough
difference to allow the brain to almost instantly create the correct model
of the world as containing one pen, or two.

But what would happen if our brain didn't build the correct model?  If our
brain built a model saying there were two pens when in fact there was only
one, we would report, without a doubt, that there were two pens.

In some special conditions, we can fool the brain into making mistakes like
that.  We can use mirrors, for example, to create a view that makes the brain
create a model which is invalid.  But then, by giving the brain a little bit
more information, it sees the error in its model and instantly fixes
the model to reflect the truth.

The brain's perception of this thing we call the "mind" and its perception
of the thing we call the "brain" is exactly like this.  The brain builds a
model which indicates that the mind and the brain are not one and the
same pen, but two different pens.  It's this error the brain makes which
causes ALL the confusion over consciousness.  The problem however with this
illusion is that there is no data easily available to give to our brains
that would cause the brain to instantly realize its error.  This is why
everyone is in the dark, and assumes the illusion isn't just an illusion
(an error in the model) but that it's something more "real" than that.
It's because no data they have _ever_ been exposed to allows the brain to
build the correct model.

With the other illusions, the brain has already built two-pen and
single-pen models.  The only error is that the brain temporarily selected
the wrong model because it had misleading data.  But with the mind
illusion, it has never ever built the correct model (for most people).  So
it's not a matter of just "trying the other model".  The other model
doesn't exist in the brains of most people - which is why it's so damn hard
to make people see this problem.

Most people, as far as I can grasp, have no ability to understand what a
world with "one pen" is even like - because their brain has never built
such a model for them.  It's a paradigm shift so great, they can't even
grasp it is possible for it to exist - let alone believe it's the true
model of reality.  I'm talking about the materialists here - even people
that firmly believe in materialism, are mostly stuck with this two-pen view
of reality, and don't understand they are stuck with it.  Instead of
realizing it's an illusion, and working to change their brain's model of
reality (yes it's possible to do it just by talking about it), they just
accept the illusion is something real without questioning it and instead
just work hard at finding the right words to describe what "consciousness"
is.

I've mostly fixed my brain at this point to use the correct model of
reality - to reject the old two-pen view of reality where my mind and, more
importantly, my "self" were something separate from my body.  My mind, my body,
and my self are just one thing now.  I am just a human body writing a
Usenet message, and nothing more, and nothing less.  I'm not a "conscious"
machine any more than a robot is conscious.  I just happen to have a larger
set of reactions I can produce to my environment than any robot and I have
far better learning algorithms that allow me to develop new useful
reactions to my environment far faster than any robot so far.  And probably
most important, I'm far better at applying a lifetime of past lessons to
new situations than any man made machine.  But other than simple mechanical
issues like that, there is no big difference that deserves the name
"consciousness" and all the mystery people place on it.

The only thing different between me, and any robot, is the number and type
of sensors, and the data processing algorithm running in it. Though we
could use the word "conscious" as a label for the difference, it would be
misleading because people don't use the word to mean "a difference in how
we react to our environment".  People use it because they think they have
this disconnected set of "thoughts and memories and awareness and even a
disconnected self" and they use the word to basically mean "someone that
has this disconnected awareness in them like I do".

But what the word really means, but far too few people understand, is
"having the illusion of a disconnected awareness and being so delusional
that you think the disconnect is a real property of the world instead of a
property that only exists in the brain's model of the world".

I've debated these points with multiple people over the years, and only one
of two things has ever happened as a result.  Either the person I talk to
already understands what I'm saying, so there's nothing to debate.  Or the
person I'm talking to is suffering from the delusion and no amount of talk,
or kindness, or meanness, or badgering, allows them to understand the
delusion.  They simply can't grasp what I'm talking about no matter how I
try to explain it.

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
12/5/2008 6:32:51 AM
Curt Welch wrote:
> 
> You are touching on stuff I bitch about all the time here - Dualism.  It's
> what almost everyone is sucked into believing and it's just wrong.
> 
> For most people what you say is true because they have an invalid
> self-model.  They believe the body and the mind are two separate things
> instead of the correct model which matches reality where the mind is just
> part of the body - the part we call the brain.
....

So the software doesn't exist, it's all only hardware.
Yeah right.

Your crusade against dualism is simply pointless.
First you fail at 'separate things'.
Yes software cannot run without hardware... so they're not really 
separate. And from what I read around here no one really claims they are
really separate, it's only your (mis)interpretation.

But it's really amazing... I mean, you do some NN's, and you don't 
actually build your own hardware but run simulations on PC, right?
Then how can you miss such an obvious fact - it's information that's 
important. And the hardware, it's just implementation details.

FTR what I expressed above is called 'functionalism'.

There's a number of views different from extreme materialism you know, 
it's not all *evil* dualism:)

Regards...
0
Josip
12/5/2008 9:20:45 AM
Josip Almasi <joe@vrspace.org> wrote:
> Curt Welch wrote:
> >
> > You are touching on stuff I bitch about all the time here - Dualism.
> > It's what almost everyone is sucked into believing and it's just wrong.
> >
> > For most people what you say is true because they have an invalid
> > self-model.  They believe the body and the mind are two separate things
> > instead of the correct model which matches reality where the mind is
> > just part of the body - the part we call the brain.
> ...
>
> So the software doesn't exist, it's all only hardware.
> Yeah right.

Oh don't be silly.  That's not what I said in any sense.  I've worked my
entire life as a software engineer and have a degree in computer science.
Do you honestly think I would ever say that software doesn't exist?

But is it all hardware?  You bet it is. The software is hardware too.  It
doesn't just "run on the hardware" it IS hardware we are talking about.

> Your crusade against dualism is simply pointless.
> First you fail at 'separate things'.
> Yes software cannot run without hardware... so they're not really
> separate.

Software can't run without hardware!  When we talk like that (and I talk
like that all the time when I'm not debating consciousness and the mind body
problem), we imply the software is somehow separate from the hardware when
it's not.

> And from what I read around here no one really claims they are
> really separate, it's only your (mis)interpretation.

> But it's really amazing... I mean, you do some NN's, and you don't
> actually build your own hardware but run simulations on PC, right?

There are advanced concepts here that are going right over your head.

As I wrote in my message, some people seem to understand the points I'm
making and as such, see no conflict in what I write, and the others, can't
even seem to grasp what I'm talking about.

When we program a computer WE ARE BUILDING HARDWARE!

If I build a chair out of wood, I take raw material and I change its
physical shape into something else using tools.

The exact same thing happens when I program a computer.

I take the raw material, (the computer, and the electrons in the wires, and
the magnetic particles on the surface of the disk), and I use tools to
change their physical configuration into a machine that didn't exist before
I built it.

But yet, even though we are building hardware when we program a computer
just as much as we are building hardware when we build a chair out of wood,
we talk about these two processes very differently.  We talk, as you talked
here, about how it's the "information" we are dealing with when we
reconfigure a computer into a different machine, but we don't talk about
the "information" being important when we reconfigure a tree into a chair.

Why is this?  Have you ever tried to think about it?

> Then how can you miss such an obvious fact - it's information that's
> important. And the hardware, it's just implementation details.

Then when I build a chair, you should say the same thing - it's the
information that's important, the hardware is just an implementation
detail.  And that actually works because we can talk about the design
blueprints of the chair as the "software" (the information that describes the
behavior we are trying to get out of the wood) and the wood being the
hardware the software "runs on".

No one talks about wooden chairs this way but yet it's just as valid as the
way we talk about computers.
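Here's the same point as a few lines of Python (a toy of my own making - the
opcodes and the two little programs are invented purely for illustration).  The
only thing that distinguishes the two "machines" below is the arrangement of
numbers sitting in memory, which is to say, the physical configuration of the
storage:

ADD, MUL, PRINT, HALT = 0, 1, 2, 3   # opcode names are just labels for stored numbers

def run(memory):
    # A minimal accumulator machine: memory is a flat list of (opcode, operand) pairs.
    acc, pc = 0, 0
    while True:
        op, arg = memory[pc], memory[pc + 1]
        if op == ADD:
            acc += arg
        elif op == MUL:
            acc *= arg
        elif op == PRINT:
            print(acc)
        elif op == HALT:
            return
        pc += 2

# Two different configurations of the same kind of raw material (stored numbers),
# i.e. two different machines built out of the same stuff:
doubler = [ADD, 21, MUL, 2, PRINT, 0, HALT, 0]            # prints 42
counter = [ADD, 1, PRINT, 0, ADD, 1, PRINT, 0, HALT, 0]   # prints 1, then 2

run(doubler)
run(counter)

Change the numbers in the list and you have built a different machine, in
exactly the same sense that reshaping the wood builds a different piece of
furniture.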

So now that I've shown you how building hardware and writing software are
actually the same thing, let me repeat the nonsense you said above:

  > And from what I read around here no one really claims they are
  > really separate, it's only your (mis)interpretation.

  > I mean, you do some NN's, and you don't
  > actually build your own hardware but run simulations on PC, right?

You said "You don't actually build hardware".  But you are dead wrong and
you seem to have no clue you are wrong and this is exactly the
mis-information I'm trying to fight when I write these long messages. We
very much do build hardware when we "do some NN". We do actually build
hardware and you didn't even known it because you are so biased by the way
you were taught to talk and think about computers (and believe me, I
thought the same way for 30 years - it was only about 5 years ago that the
light went off in my head and made me realize how totally fucked up my
thinking was).  And that bias comes from the way we were taught to talk and
think about the brain by calling it a mind.

We are taught to believe that thoughts and ideas and concepts are not
physical.  We even have the word "intangible" as a label for all these
immaterial things.  Then we go from there to thinking that information is
also a concept and that it's also not physical.  Then we extend this idea
of non-physical stuff that exists in our physical brain out into the
computer, and talk as if it too has non-physical stuff that exists in it
called information, or just software.  And this is easy to do, because even
though we are building hardware when we program a computer, the hardware we
are building is so small (electrons) we can't see them, or touch them, or
smell them.  And because we can't see the hardware we are building, it
becomes very natural to talk as if it doesn't exist - that it's like the
"mind" - something that exists but is not physical.

And from all that nonsense, comes the words you said above - that you think
I'm not building hardware when I program a computer when in fact I very
much am building hardware.

And that's exactly the type of thinking I'm trying to get people like you
to break free from.  Because if you can't understand computers - because the
lies we were taught to believe about the brain have spread out into our
computers - then how much hope do you have of thinking clearly about what the
brain and mind are?

> FTR what I expressed above is called 'functionalism'.
>
> There's a number of views different from extreme materialism you know,
> it's not all *evil* dualism:)
>
> Regards...

If it causes people like you to fail to understand that we are building
hardware when we program a computer, it is EVIL DUALISM!

I have no problem in continuing to talk like we always talk about computers
and software in our day to day lives.  But here, when we debate the meaning
of mind and brain, we can't afford to be so careless and to allow ourselves
to actually believe the nonsense.  We have to clearly see what's really
happening, instead of what we were led to believe is happening by the way
we talk.

There are many different ways to talk about the world we live in.  These
different ways of talking are called "views of reality", or sometimes
"levels of abstraction".  But how we like to talk about the different
aspects of the universe doesn't change the fact that there is only one
universe here, and it only exists one way (the way it is), not as multiple
views.

All the different views have their use (for the most part), but they are
all here simply to allow us to understand the real nature of the universe by
looking at it from different directions.  When I debate with people here, I
don't mind what language you like to use, or which view is the one that
works best for you, as long as I believe you understand the true nature of
the universe we are talking about.

But when you write something like "you are not building hardware" it shows
you don't in fact understand the true nature of the universe and that the
words you speak are not just a view you like to use, but instead one that
has pulled the wool over your eyes and prevented you from seeing the truth
about what a computer is and what is happening when we program one.  It
shows you have failed to understand what I've been saying in these messages
- even though you are quick to say "Your crusade against dualism is simply
pointless".  If you can make such a huge error in understanding something
as simple and well understood as computers, how can you begin to correctly
understand the brain?

This is the crusade I'm on - to try and get people to think clearly about
the true nature of the universe instead of allowing themselves to have the
wool pulled over their eyes (and their brain) by the lies inherent in our
dualistic language.  If you can't understand how these ideas are confusing
you when you talk about computers, you will never understand how badly they
are also confusing your view of the brain and of yourself.

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
12/5/2008 3:29:50 PM
On Dec 4, 11:32 pm, c...@kcwc.com (Curt Welch) wrote:
> A second follow-up to the same message to address the illusion of
> consciousness issues....
>
> "Publius" <m.publ...@nospam.comcast.net> wrote:
<snip corpus of nonsense by Curt>

So now you see, Publius, that Curt lives in an intellectual cave and
thinks only the shadows on the walls are reality, and that is why
Curt is so confused.

On a second note, most of what Curt says is nonsense by his own prior
reckoning - that there are only interacting particles - so there is no
brain or body or camera or anything else that he speaks of in his
gibberish.

Or maybe there are only signals or reinforcement learning; I can't
figure out, based on Curt's confusions, what the final word is - but
stay tuned!


>
> I've debated these points with multiple people over the years, and only one
> of two things has ever happened as a result.  Either the person I talk to
> already understands what I'm saying, so there's nothing to debate.

One can understand what you are saying and still debate or contest
what you are saying, as most do.

> Or the
> person I'm talking to is suffering from the delusion and no amount of talk,
> or kindness, or meanness, or badgering, allows them to understand the
> delusion.  They simply can't grasp what I'm talking about no matter how I
> try to explain it.

Just as your brain's RL relegates you to the Cave.
0
Alpha
12/5/2008 4:05:56 PM
On Dec 5, 8:29 am, c...@kcwc.com (Curt Welch) wrote:
> Josip Almasi <j...@vrspace.org> wrote:
<snip>

> All the different views have their use (for the most part), but they are
> all here simply to allow us to understand the real nature of the universe by
> looking at it from different directions.  When I debate with people here, I
> don't mind what language you like to use, or which view is the one that
> works best for you, as long as I believe you understand the true nature of
> the universe we are talking about.

And that "true" nature is "revealed" to you by which of your shadows
(gods) in your cave?

0
Alpha
12/5/2008 4:13:25 PM
Alpha <omegazero2003@yahoo.com> wrote:
> On Dec 5, 8:29 am, c...@kcwc.com (Curt Welch) wrote:
> > Josip Almasi <j...@vrspace.org> wrote:
> <snip>
>
> > All the different views have their use (for the most part), but they
> > are all here simply to allow us to understand the real nature of the
> > universe by looking at it from different directions.  When I debate
> > with people here, I
> > don't mind what language you like to use, or which view is the one that
> > works best for you, as long as I believe you understand the true nature
> > of the universe we are talking about.
>
> And that "true" nature is "revealed" to you by which of your shadows
> (gods) in your cave?

The one in the upper left corner?

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
12/5/2008 4:19:18 PM
On Dec 4, 10:32 pm, c...@kcwc.com (Curt Welch) wrote:
> When we program a computer WE ARE BUILDING HARDWARE!
>
>
> If I build a chair out of wood, I take raw materiel
> and I change its physical shape into something else
> using tools.
>
>
> The exact same thing happens when I program a computer.
>
>
> I take the raw material, (the computer, and the
> electrons in the wires, and the magnetic particles on
> the surface of the disk), and I use tools to change
> their physical configuration into a machine that didn't
> exist before I built it.
>
>
> But yet, even though we are building hardware when we
> program a computer just as much as we are building
> hardware when I build a chair out of wood, we talk
> about these two processes very differently.


I often point out to those who talk about what
computers can't do that their claims are based on what
programs have failed to do so far.  When they are
talking about the computer they are really making
reference to the program.  If you had ever read any of
those posts you would have seen I refer to programming
as building a machine.


> We talk as you talked here about how it's the
> "information" we are dealing with when we reconfigure
> a computer into a different machine, but we don't
> talk about the "information" being important when we
> reconfigure a tree into a chair. Why is this?  Have
> you ever tried to think about it?


Have you read Ross Ashby's book as I suggested when I
wanted you to understand where I was really coming
from rather than where you imagined I was coming from?

Because you seem not to understand why people talk
the way they do, you make things up; you imagine they
believe things they don't. Just as computer people
can use mentalistic terms when talking about programs
without for one minute believing there is a ghost in
the machine, so too can we talk about the brain in
mentalistic terms without for one minute believing
in a ghost in the brain.

To explain these things at the low level of firing neurons
or switching logic gates would be silly.

JC



0
casey
12/5/2008 7:36:45 PM
casey <jgkjcasey@yahoo.com.au> wrote:
On Dec 4, 10:32 pm, c...@kcwc.com (Curt Welch) wrote:
> > When we program a computer WE ARE BUILDING HARDWARE!
> >
> >
> > If I build a chair out of wood, I take raw materiel
> > and I change its physical shape into something else
> > using tools.
> >
> >
> > The exact same thing happens when I program a computer.
> >
> >
> > I take the raw material, (the computer, and the
> > electrons in the wires, and the magnetic particles on
> > the surface of the disk), and I use tools to change
> > their physical configuration into a machine that didn't
> > exist before I built it.
> >
> >
> > But yet, even though we are building hardware when we
> > program a computer just as much as we are building
> > hardware when I build a chair out of wood, we talk
> > about these two processes very differently.
>
> I often point out to those who talk about what
> computers can't do that their claims are based on what
> programs have failed to do so far.  When they are
> talking about the computer they are really making
> reference to the program.  If you had ever read any of
> those posts you would have seen I refer to programming
> as building a machine.

I read all your posts and I have seen you talk that way.  I was glad to see
you do it.

> > We talk as you talked here about how it's the
> > "information" we are dealing with when we reconfigure
> > a computer into a different machine, but we don't
> > talk about the "information" being important when we
> > reconfigure a tree into a chair. Why is this?  Have
> > you ever tried to think about it?
>
> Have you read Ross Ashby's book as I suggested when I
> wanted you to understand where I was really coming
> from rather than where you imagined I was coming from?

Nope.  Reading it online is a bitch.  I just ordered a used copy so I could
read it.  I don't however expect it to tell me anything I don't already
understand about these ideas nor do I expect it to give me any significant
new insight into what you believe.  The book looks interesting however for
sheer historical value and it might help me communicate some ideas to you
and others. It might give me new language to use to talk about these ideas
that will help other people understand a point I want to make.

> Because you seem not to understand why people talk
> the way they do, you make things up; you imagine they
> believe things they don't.

I don't think you have ever fully understood what it is I'm saying you
"believe".  You accuse me of accusing you of believing things you don't,
but yet you ignore the fact that the argument works both ways.  You don't
seem to understand what it is I'm accusing you of.  When I say you have
dualistic beliefs, you always translate that into "I think you believe in
the soul or in ghosts".  I've never accused you of that.  I've used words
like that to try and communicate an idea to you that I don't sense you have
ever understood.

There are things you write now and again that are inconsistent with your
belief in materialism and I'm trying to show you how they are inconsistent
with materialism.  What they are consistent with, is dualism.  What you
seem to fail to understand, is that they are inconsistent with materialism.

Though, you have greatly improved the way you write about most of these
"duality" issues over the years so you seem to keep getting closer and
closer to a consistent, non-dualistic set of beliefs.  It's hard to tell if you
have changed your thinking or if you have just learned not to write the
things that push my buttons on these issues. :)

The point here is that I spent most of my life thinking incorrectly about
these dualistic issues.  Even though I believed completely in materialism
(and generally never thought twice about it after the age of about 6), I had
this large set of confused ideas in me that were inconsistent with
materialism which I never grasped were inconsistent with materialism. Like,
for example, I too once thought that when I was programming a computer I
wasn't actually building new hardware.  I thought I was just changing the
software which obviously meant I was not changing the hardware - which is
just wrong.

I never thought about the mind body problem or consciousness and I had no
clue it was even something that people were confused about until I came to
this group (c.a.p.).  And after the idea was brought up, it first took me
a while to even grasp what the fuck people were talking about simply
because I had such a strongly materialistic bias to my thinking I couldn't
even grasp what they thought the issue was.  But then I started to grasp
there were things I believed in that did in fact create an inconsistency
over these issues of consciousness and materialism.  Just what was "seeing
red"? Why do things "look red"?  Basically, I got a glimpse of what all the
fuss was about.  But then one day not long after that, when debating
these issues, I figured out that the only reason I was confused was
that my thinking was totally muddled.  The language I had been using
to understand computers all these years had made me believe lots of things
that were just dead wrong - like the idea that software was not hardware
and that when I programmed a computer, I was not building hardware.

I had always assumed software was not hardware because the word "software"
is defined to mean "not hardware".  I had always assumed ideas were not
hardware because this is just something we have all had drilled into us.
But then I had the epiphany one day writing a message to c.a.p. which
allowed me to realize that these ideas were all silly bullshit that grew
out of the belief in dualism as a fundamental truth to the culture of
people that evolved the language we have been taught.  And if you instead
fix all the definitions to remove the dualistic errors that are inherent in
English, and replace them with concepts consistent with materialism the
hard problem of consciousness just vanishes.

And what changes is that all software becomes part of the
hardware.  The mind becomes the brain.  "red" becomes "neurons firing".
Intangible becomes tangible brain behaviors.  Nouns become verbs with
longer temporal persistence.  The self becomes the body.  The explanatory
gap closes, the question of the zombie no longer makes sense to ask, and
consciousness becomes a word that has no meaning at all because the word
really means "having a self which is separate from, but somehow associated
with, the body".  The question of consciousness is the issue of what that
relation is.  Once you change your thinking to understand materialism and
remove all the left over dualistic ideas from your thought process, the
self and the body are just two different words for the same thing, so the
question of the nature of the relationship between the body and itself
is just not a question that even makes sense to ask - let alone a "hard"
problem.

I've been talking about all these ideas in this group ever since I had that
epiphany which was probably at least 5 years ago.  But yet, in all this
time, I've never seen evidence you have ever understood, or had for
yourself, the same epiphany even though you say you believe in materialism.

In order to understand this, you have to basically rip out most of what you
were taught to believe when you were taught English, and replace it with
updated beliefs that are consistent with materialism by removing all the
old ideas of dualism.  You believe in materialism, but you also still have
a handful of beliefs in your head which were taught to you by others, which
are inconsistent with materialism, but consistent with dualism.  My goal in
these long messages to you, is to try and get you to understand why some of
your beliefs are simply left over ideas from dualism, and inconsistent with
materialism.

You don't have to understand this to solve AI.  You just have to ignore the
questions of consciousness, and work on the hardware to make machines act
like humans.  If you get that done, you will have solved all questions of
consciousness, even if you don't understand how or why you solved the hard
problem of consciousness.

But, if you can grasp what I've been trying to communicate to you, you will
understand why there is no hard problem of consciousness to solve, and why
it's not something anyone working on AI needs to worry about, or think
about.

However, if you don't understand these points, then it's likely you will
have this nagging belief that once you build the human like robot, you will
believe it's just an unconscious zombie and you will then be left with the
problem of how to make it conscious.  If you believe this, you will never
really understand how close we are to solving AI.

> Just as computer people
> can use mentalistic terms when talking about programs
> without for one minute believing there is a ghost in
> the machine, so too can we talk about the brain in
> mentalistic terms without for one minute believing
> in a ghost in the brain.
>
> To explain these things at the low level of firing neurons
> or switching logic gates would be silly.

Yes.  And more important, it would make it so complex we couldn't
understand it.

The point however is that we can fully explain all computer behavior -
including things like "Microsoft Word" behavior, by talking _only_ at the
level of logic gates.  In other words, talking about the logic gates tells
us everything there is to know about the computer to program it or use it.
The only reason we need to abstract the logic gate behavior by talking
about things like lines of code and software modules and algorithms and
files and directories is to leave out the details that aren't important.
By doing that, we reduce the amount of information we have to understand to
a level that allows us to actually understand what the machine is doing.
So when we talk about files and directories, we are in fact also just
talking about what the logic gates are doing (or just about what the
atoms in the computer are doing).  The files and software and programs are
not in any sense, "something else" that "exists" _in_ the computer.  It's
just a different way of talking about logic gates.
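To show how complete that reduction is, here's a small Python sketch (mine, not
anything from the discussion above; NAND is picked arbitrarily as the single
primitive).  The "add" at the bottom is not something extra that exists
alongside the gates - it is nothing but a convenient name for a particular
pattern of gate activity:

def NAND(a, b): return 1 - (a & b)
def NOT(a):     return NAND(a, a)
def AND(a, b):  return NOT(NAND(a, b))
def OR(a, b):   return NAND(NOT(a), NOT(b))
def XOR(a, b):  return AND(OR(a, b), NAND(a, b))

def full_adder(a, b, carry_in):
    # One bit of binary addition, built from nothing but the gates above.
    s = XOR(a, b)
    return XOR(s, carry_in), OR(AND(a, b), AND(s, carry_in))   # (sum bit, carry out)

def add(x, y, bits=8):
    # "Addition" described one level up; everything here still reduces to NAND calls.
    carry, result = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add(19, 23))   # 42 - the same answer whether we talk gates or arithmetic

Talking about add() instead of the NAND calls leaves out detail we don't need;
it doesn't add anything that isn't already in the gates.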

The parallel to all this with the brain is that if we choose to talk about
things like "mind" and "consciousness" we are in fact only talking about
neurons (and other brain parts).  There is nothing else there to talk
about.  There is only other ways to talk about it which are useful at
times, but dangerous if anyone starts to think the mind or consciousness is
something that doesn't fully reduce to the behavior of neurons.  It's not
"created by" neurons, because there is nothing that is "created".  It's
created by the human who chooses to use the word "consciousness" to talk
about neurons.

How this confusion plays out is that people keep saying things
like: "in order to create consciousness, we have to get the processing just
right - but no one understands what the processing is - but we know it
doesn't exist in computers".  Or they make the zombie argument asking the
question of why doesn't the processing happen "in the dark"?  Which implies
they think the right type of processing has created something
(consciousness) that shouldn't otherwise be there according to what we
understand about physics.

If you believe in materialism, and don't have any left over dualistic
beliefs stuck in your head, then there simply is no hard problem of
consciousness.  There simply is no problem at all.

Because you keep talking as if there is a problem of consciousness that
needs to be figured out, (and you have implied this in many different ways
over the past few years), you don't grasp the points I've been talking
about here for the past 5 or so years since I had the epiphany.  I don't
doubt for a second that you believe in materialism and that you have
rejected the ideas of dualistic ghosts in the form of souls.  But what you
haven't done (at least not fully yet as far as I can tell), is removed all
the historic dualistic ideas from your belief system - because you don't
seem to understand they are there.

As a recent example, you said I couldn't sense my neurons.  That is a
leftover idea from dualism.  Sensing my thoughts IS sensing my neurons because
thoughts and neurons are one and the same thing (not something created by
the neurons, but the neurons themselves).  To believe otherwise, is to not
fully understand all the ramifications of materialism an