Finding useful functions - part 1

Our brains have innate structure tailored by evolutionary processes
over a long period of time.  This structure performs functions that
contribute to our behavior in ways that somewhere along the line
probably helped individuals to survive, or at least didn't hurt.

Many of those functions are not fully determined by genetics alone.
There is an innate framework, but details are filled in by processes
of conditioning and association, and to some degree the framework
itself is mutable if environmental conditions differ sufficiently
from those for which it evolved.  There are few sharp lines between
innate and acquired neural function.

Feature discrimination in the early visual system is sometimes
called innate.  Certainly it is innate that the cells grow into
layers of tissue appropriate for performing useful feature
discriminations.  However, it seems the specific connections and
weights to implement particular discriminations get filled in by
adaptation to correlations in the ensemble of signals flowing from
the retina.  For example, we can change the distribution of
particular detectors dramatically by raising a cat in an abnormal
visual environment.  It seems cells are not so much genetically
determined to perform specific discriminations, as that they acquire
discrimination functions appropriate to the signals they encounter
in their genetically determined position in the network.

There are places where neural projections bring together signals
originating from corresponding points in the left and right eyes.
This allows merging both images to fill in details missing from one
or the other, estimating depth from discrepancies in the two images,
and so on.  There is genetic direction to cause axonal projections
carrying signals from one eye to grow toward the normally expected
locations of the corresponding signal paths from the other.  But
(from experiments on Xenopus frogs) if one eye is surgically rotated
before the connections are formed, so that the locations of
correlated signals are altered, we see the projections grow first
toward the normal target location, then veer off sharply to connect
with the very different cells now in position to be correlated.

Many topographic maps can be found in the brain, so that for example
neighboring sections of neural tissue are excited by stimuli from
adjacent sections of skin.  One might imagine a fixed wiring scheme
under genetic control to hook up these maps, but when we surgically
swap small patches of skin the connections change to preserve the
mapping.  It takes some time, but after a while we find that the
moved sensors now activate sections of the remote neural map that
correspond to their new positions.
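
To make that concrete, here is a toy self-organizing map in the spirit
of Kohonen's models, a minimal sketch of the principle only (Python
with numpy; the sizes, rates, and input ensemble are invented for
illustration, not taken from any experiment above).  Units start with
random preferences and end up ordered so that neighboring units respond
to neighboring skin positions, with no wiring plan beyond "neighbors
learn together":

import numpy as np

rng = np.random.default_rng(0)
n_units = 20
w = rng.uniform(0.0, 1.0, n_units)  # each unit's preferred skin spot, random at first

def train(positions, steps=5000, lr=0.1, sigma=2.0):
    for _ in range(steps):
        x = rng.choice(positions)              # a touch somewhere on the skin
        win = int(np.argmin(np.abs(w - x)))    # the most strongly driven unit
        d = np.arange(n_units) - win
        h = np.exp(-d * d / (2.0 * sigma**2))  # nearby units adapt together
        w[:] += lr * h * (x - w)               # shift preferences toward the input

train(np.linspace(0.0, 1.0, 50))
print(np.round(w, 2))   # roughly monotonic: a topographic map has formed

Nothing in the update refers to consequences of any output; the order
emerges from the statistics of the inputs alone, which is why
rearranging the inputs (as in the skin-swap experiments) re-orders the
map to match.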

A reasonable interpretation is that the "wiring" of neural circuitry
is only loosely determined by a genetic blueprint.  Most of the
actual connections (and therefore the functions performed) are
established as a result of correlations between the activities of
potentially connected cells.  Not only are the initial connections
determined by correlations, but even after a stable connection
pattern is established, the connections will change if the
correlations change.

From the viewpoint of a single cell, it strengthens connections to
cells whose activity correlates with its own and weakens the rest, much as
postulated by Hebb so many years ago.  While direct observation of
such changes in individual active synapses is still difficult, we
can observe at least one related mechanism in widespread use.  Cells
in a child's brain sprout huge dendritic trees and eventually make
something like 200,000 synaptic connections.  By adulthood these are
trimmed back to an average of 10 to 20 thousand.  The only plausible
explanation for this of which I am aware is that the surviving
connections are those that showed correlation with the activity of
the cell.  Uncorrelated connections simply drop out of the picture.
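
As a bare-bones sketch of that overgrow-and-prune idea (a toy
construction in Python with numpy; the input ensemble, counts, and
threshold are invented, not data from any study above): give a unit far
more random synapses than it could need, track how each input covaries
with the unit's own activity, and discard the uncorrelated majority.

import numpy as np

rng = np.random.default_rng(1)
n_in = 1000                      # stand-in for an exuberant dendritic tree
w = rng.uniform(0.0, 0.1, n_in)  # many weak, essentially random synapses
cov = np.zeros(n_in)             # running estimate of cov(input_j, activity)

for t in range(2000):
    x = rng.standard_normal(n_in)
    x[:50] += 2.0 * rng.standard_normal()  # only channels 0..49 share a signal
    y = w @ x                              # the cell's activity
    cov += (x * y - cov) / (t + 1)         # incremental mean of x_j * y

keep = np.abs(cov) > 5.0 * np.median(np.abs(cov))
w[~keep] = 0.0                   # uncorrelated connections drop out
print(keep.sum(), np.flatnonzero(keep)[:10])  # ~50 survivors, channels 0..49

Note that the survival criterion is purely local: each synapse is
judged by correlation with the cell's own activity, not by any
consequence downstream of the cell.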

Overall, the point is that the functions computed by cells in the
brain are largely determined by the correlations encountered in the
signals accessible to the cell, rather than by genetic control.

This is learning or conditioning, but it is not the kind of
feedback-driven learning that is usually intended when one speaks of
operant conditioning.  This sort of learning does not depend on
consequences of the output of the function, and would occur even if
the output were not connected to anything else and could therefore
have no consequences extending beyond the cell doing the learning.

From an evolutionary perspective, such learning mechanisms exist
because they do indeed often have useful behavioral consequences.
But the evolutionary connection is between the learning mechanisms
and ensembles of behavior, not between the individual functions
learned and specific contingencies associated with those functions.

--------

None of the above should be taken as suggesting that other sorts of
learning can be ignored.  To implement AI we will require an
understanding of many facets of adaptive behavior, including the
operant conditioning or reinforcement learning that has been the
sole focus of certain vocal participants in CAP.

But I do suggest that these correlation-driven "unsupervised"
mechanisms provide a critically important underpinning for other
learning paradigms, that they are necessary parts of an explanation
of how all our behavior-generating mechanisms actually work.

<to be continued in further posts>

Bill Modlin





modlin1
10/25/2004 8:05:09 AM

What is important in sensation and perception is that movement of an animal
(or, more specifically, of its receptors) has consequences. When we sweep
our eyes over a patch of red, there are changes in stimulation - such
movement/consequence contingencies are at the heart of learning to perceive
the world.

"Bill Modlin" <modlin1@metrocast.net> wrote in message
news:2IOdnXgZS_WCFOHcRVn-jA@metrocastcablevision.com...
> [...snip: original post quoted in full above...]


Glen
10/25/2004 11:45:47 AM
Bill Modlin wrote:
[...snip explanation of how synapses strengthen, etc]
> This is learning or conditioning, but it is not the kind of
> feedback-driven learning that is usually intended when one speaks of
> operant conditioning.  This sort of learning does not depend on
> consequences of the output of the function, and would occur even if
> the output were not connected to anything else and could therefore
> have no consequences extending beyond the cell doing the learning.

Sorry, but the above is nonsense. Operant conditioning must use what's 
available - which is precisely the biochemical processes you outline. 
Operant conditioning merely takes advantage of what already occurs in 
the organism. As with any control of any systems, natural or artificial, 
it works by co-operating with existing processes, not by supervening them.

Also, if there is no stimulus from outside the cell, no strengthening of 
synapses etc will occur. Where do you think that stimulus comes from? 
The brain will develop "as it should" (ie, as programmed by the genome) 
if and only if it receives the appropriate stimuli. The stimuli occur 
at several levels: within the cell (switching genes on and off); from 
neighbouring cells; and from the chemical environment of the cell. These 
are ultimately controlled by the environment within which the organism 
lives (this environmental control begins in the mother's body - without 
it, a zygote cannot develop properly, cannot even implant in the womb.)

All learning proceeds by the same basic processes. ISTM that the only 
differences among organisms consist of what each organism is capable of 
learning, and that does depend on the genome. Genetics controls 
development (in concert with the environment, please note), and 
determines such things as the number of neurons and their gross 
organisation, etc. It also controls the development of all the other 
systems in the organism, and so defines the range of behaviours any 
given organism may learn. As someone once said, "Don't try to teach a 
pig to sing. It wastes your time and annoys the pig."

I'm now inclined to avoid the distinction between learning and 
development. It drags in some confusing assumptions, such as that 
learning must somehow be directed by an external agency, or worse, that 
learning _can_ be directed. As any teacher knows, you cannot direct a 
child's learning. You can only offer "learning situations", ie, you can 
arrange the environment (the lesson) so that learning may take place. 
But that learning will take place if and only if the child is "ready to 
learn" - IFF its neural and other development has reached a point where 
the inputs the teacher provides can in fact cause learning. All children 
learn from any given lesson - but no two children learn the same 
thing(s). In brief, a teacher can influence the child's development, and 
with care that influence may amount to guiding development in desired 
pathways, but that's all.

Wolf
10/25/2004 3:34:39 PM
Bill Modlin wrote:
> [...snip: most of the original post, quoted in full above...]
>
> This is learning or conditioning, but it is not the kind of
> feedback-driven learning that is usually intended when one speaks of
> operant conditioning.  This sort of learning does not depend on
> consequences of the output of the function, and would occur even if
> the output were not connected to anything else and could therefore
> have no consequences extending beyond the cell doing the learning.
> 

What evidence do you have that this happens at all?

patty


patty
10/25/2004 3:38:12 PM
"Bill Modlin" <modlin1@metrocast.net> wrote in message news:<2IOdnXgZS_WCFOHcRVn-jA@metrocastcablevision.com>...


> Overall, the point is that the functions computed by cells in the
> brain are largely determined by the correlations encountered in the
> signals accessible to the cell, rather than by genetic control.
> 

The problem comes if you believe this part so strongly that you gloss
over or disregard or downplay the underlying "foundation" for the
system as provided by genetics. Tabula rasa, it ain't.
feedbackdroids
10/25/2004 3:40:43 PM
Bill Modlin wrote:

> "Glen M. Sizemore" <gmsizemore2@yahoo.com> wrote in message
> news:20041025074544.269$G8@news.newsreader.com...
> 
>>What is important in sensation and perception is that movement of
>> an animal (or, more specifically, of its receptors) has consequences. When
>> we sweep our eyes over a patch of red, there are changes in stimulation -
>> such movement/consequence contingencies are at the heart of learning to
>> perceive the world.


> As I said at the end of my note, such consequence-driven learning
> cannot be ignored.  But it is not the only important kind of
> learning, nor is it in any sense the "most" important.   It is the
> most obvious from the outside, it puts the finishing touches on our
> behavior and fine-tunes it, but it is inadequate in itself to
> explain our behavior.  Your position is rather like that of a naive
> driver claiming that the only important parts of an automobile are
> the steering wheel and pedals,  who becomes indignant at the
> suggestion that internal things like engine, transmission, and
> linkages may also be important.
> 
> We cannot perform coordinated motion in the first place, or detect
> any changes in any sort of stimulation, until our neural circuitry
> has become sufficiently organized to support such operations.  Which
> it does by virtue of the correlation-driven learning mechanisms I
> talk about in my note.

That organisation occurs under environmental control. The notion that 
the genes somehow function, and the neural circuitry will develop just 
as it should, in the absence of inputs to the cells is simply false. In 
fact, in the absence of critical environmental input via the sensory 
nerves, neural development can not only be delayed but permanently 
inhibited.

One of the most elegant experiments to show this involved kittens who 
were raised in different visual environments. In one case, the kittens' 
cages were striped horizontally, and there were no platforms (which 
would have offered an opportunity to learn vertical separations.) Very 
early in their development, these kittens failed to respond to vertical 
stripes. Those kittens that were raised the longest in this environment 
never learned to perceive vertical stripes (edges, etc.).

Hidden in your analysis is the assumption that genes function absent 
environmental (extra-cellular) inputs. This is generally false. Genes 
must be switched on and off, in proper sequence, etc., in order for 
development to take place. The chemistry of gene control includes 
extra-cellular inputs. In fact, most genes are controlled by 
extra-cellular inputs - neurons ultimately by the inputs from the 
sensors (which are of course controlled by inputs such as photons.)

"Correlation-driven mechanisms of learning" do not deny the role of the 
environment in learning/development - they confirm it.
Wolf
10/25/2004 3:52:25 PM
In article <WvCdnUKMSp9pluDcRVn-iQ@metrocastcablevision.com>, Bill 
Modlin <modlin1@metrocast.net> writes
>
>"Glen M. Sizemore" <gmsizemore2@yahoo.com> wrote in message
>news:20041025074544.269$G8@news.newsreader.com...
>> What is important in sensation and perception is that movement of an
>> animal (or, more specifically, of its receptors) has consequences. When
>> we sweep our eyes over a patch of red, there are changes in stimulation -
>> such movement/consequence contingencies are at the heart of learning to
>> perceive the world.
>
>As I said at the end of my note, such consequence-driven learning
>cannot be ignored.  But it is not the only important kind of
>learning, nor is it in any sense the "most" important.

Nonsense.

How do you know this? Why do you talk this way at all? What makes you 
think there are different types of "learning" (leaving aside the dubious 
distinction between operant and classical conditioning)? What is an 
*important* kind of learning? The previous four questions are more 
helpful than you might think.

If you had read Catania's book as I advised you might not be writing 
this way (hence the four questions - although you probably thought them 
just expressing critical indignation at your gall <g>). I assume therefore 
that you have not read his book, and so I ask you: what do you know 
about "learning" and how do you know it?

Until you address that, how do you know whether what you are writing 
makes any more sense than the stuff Zick or Verhey write? Might it be 
that you just write more eloquently?
-- 
David Longley
David
10/25/2004 4:06:42 PM
"Glen M. Sizemore" <gmsizemore2@yahoo.com> wrote in message
news:20041025074544.269$G8@news.newsreader.com...
> What is important in sensation and perception is that movement of an
> animal (or, more specifically, of its receptors) has consequences. When
> we sweep our eyes over a patch of red, there are changes in stimulation -
> such movement/consequence contingencies are at the heart of learning to
> perceive the world.

As I said at the end of my note, such consequence-driven learning
cannot be ignored.  But it is not the only important kind of
learning, nor is it in any sense the "most" important.   It is the
most obvious from the outside, it puts the finishing touches on our
behavior and fine-tunes it, but it is inadequate in itself to
explain our behavior.  Your position is rather like that of a naive
driver claiming that the only important parts of an automobile are
the steering wheel and pedals,  who becomes indignant at the
suggestion that internal things like engine, transmission, and
linkages may also be important.

We cannot perform coordinated motion in the first place, or detect
any changes in any sort of stimulation, until our neural circuitry
has become sufficiently organized to support such operations.  Which
it does by virtue of the correlation-driven learning mechanisms I
talk about in my note.

We cannot solve the credit-assignment problem and discover just
which aspects of the many things going on at once are correlated
with which consequences without the ongoing activity of those same
correlation-driven mechanisms to sort things out into manageable
categorical clumps.

Do you deny the existence of such mechanisms?   Do you claim that
movement/consequence contingencies at the external behavioral level
guide the growth of neural projections to appropriate connections,
even when the connections to be explained are necessary
preconditions to the production of any behavior at all?

Bill


> "Bill Modlin" <modlin1@metrocast.net> wrote in message
> news:2IOdnXgZS_WCFOHcRVn-jA@metrocastcablevision.com...
> > [...snip: original post quoted in full above...]

Bill
10/25/2004 5:23:04 PM

"Bill Modlin" <modlin1@metrocast.net> writes:

[Bill wrote a lot, which I won't quote]

I'm sure you are familiar with my views, so I won't give a detailed
response.  Although I often disagree with Sizemore, his one paragraph
response to your post is very much to the point.

You posted a long wide ranging essay.  But you never tried to make
clear which parts of it are dogma (or a priori assumptions, or
whatever else you want to call it).

If you want discussion, it would help if you would indicate which
parts are open to discussion, and which parts you expect us to assume
as the basis for discussion.


-- 
 vote for regime change in Washington, Nov 02.

Neil
10/25/2004 7:37:14 PM
"Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message
news:k19fd.3946$rs5.296361@news20.bellglobal.com...
> Bill Modlin wrote:
> [...snip explanation of how synapses strengthen, etc]
> > This is learning or conditioning, but it is not the kind of
> > feedback-driven learning that is usually intended when one
> > speaks of operant conditioning.  This sort of learning does
> > not depend on consequences of the output of the function,
> > and would occur even if the output were not connected to
> > anything else and could therefore have no consequences
> > extending beyond the cell doing the learning.
>
> Sorry, but the above is nonsense. Operant conditioning must use
> what's available - which is precisely the biochemical processes
> you outline.
>
> Operant conditioning merely takes advantage of what already
> occurs in the organism. As with any control of any systems,
> natural or artificial, it works by co-operating with existing
> processes, not by supervening them.

You say that operant conditioning must use precisely the biochemical
processes I outlined, that it co-operates with these existing
processes.  I agree, that's the point.  So why is it nonsense to
suggest that these processes have their own principles of operation,
distinguishable from the feedback driven operant conditioning which
uses them?

> Also, if there is no stimulus from outside the cell, no
> strengthening of synapses etc will occur.

Absolutely correct.

> Where do you think that stimulus comes from?
> The brain will develop "as it should" (ie, as programmed by
> the genome) if and only if it receives the appropriate stimuli.
> The stimuli occur at several levels: within the cell (switching
> genes on and off); from neighbouring cells; and from the chemical
> environment of the cell.  These are ultimately controlled by the
> environment within which the organism lives (this environmental
> control begins in the mother's body -without it, a zygote cannot
> develop properly, cannot even implant in the womb.)

True.  Wordy, but nothing to disagree with.

> All learning proceeds by the same basic processes.

Well, in the sense that you sketch below that's true.

But for some purposes, and particularly for the purpose of designing
an AI, it seems useful to make somewhat more fine-grained
distinctions.  If I am looking at possible implementations, the
heuristic of searching for useful connections by making hundreds of
thousands first and quickly eliminating the majority which show no
obvious correlation with the cell's activity is a "process" somewhat
different from training a set of weights in response to proportional
error feedback.  Certainly they are related at some level of
abstraction, but they do require rather different bits of coding.
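
The difference is easy to see in code.  A minimal side-by-side sketch
(Python with numpy, purely illustrative, with made-up sizes): the
Hebbian step uses only signals locally visible to the unit, while the
error-feedback step cannot even be written down without a consequence
signal supplied from outside.

import numpy as np

def hebbian_step(w, x, lr=0.01):
    y = w @ x                         # the unit's own output
    return w + lr * y * x             # no error or reward term anywhere

def delta_step(w, x, target, lr=0.01):
    y = w @ x
    return w + lr * (target - y) * x  # update proportional to fed-back error

rng = np.random.default_rng(2)
w = 0.1 * rng.standard_normal(4)
x = rng.standard_normal(4)
w = hebbian_step(w, x)            # runs even if y is connected to nothing
w = delta_step(w, x, target=1.0)  # meaningless without the target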

> ISTM that the only differences among organisms consist of what each
> organism is capable of learning, and that does depend on the genome.
> Genetics controls development (in concert with the environment, please
> note), and determines such things as the number of neurons and their
> gross organisation, etc. It also controls the development of all the
> other systems in the organism, and so defines the range of behaviours
> any given organism may learn. As someone once said, "Don't try to teach
> a pig to sing. It wastes your time and annoys the pig."

All fine, so far as it goes.  Though we might both be surprised by how
much a pig can be conditioned to do by a determined behaviorist with
the right reinforcement schedules... :-)

>
> I'm now inclined to avoid the distinction between learning and
> development. It drags in some confusing assumptions, such as that
> learning must somehow be directed by an external agency, or worse,
> that learning _can_ be directed. As any teacher knows, you cannot
> direct a child's learning. You can only offer "learning situations",
> ie, you can arrange the environment (the lesson) so that learning
> may take place.
>
> But that learning will take place if and only if the child is
> "ready to learn" - IFF its neural and other development has reached
> a point where the inputs the teacher provides can in fact cause
> learning. All children learn from any given lesson - but no two
> children learn the same thing(s). In brief, a teacher can influence
> the child's development, and with care that influence may amount to
> guiding development in desired pathways, but that's all.

I could well have written almost the same words, and have certainly
made similar assertions in the past.

None of your post touches on the distinction I made between algorithms
driven by contingencies of the output ("supervised") and other
algorithms driven by local correlations independent of such
contingencies.

Explain to me again why that distinction was nonsense?

Bill


Bill
10/26/2004 2:47:56 AM
"Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message
news:_h9fd.3953$rs5.298231@news20.bellglobal.com...
>> Bill Modlin wrote:
>>
>>> "Glen M. Sizemore" <gmsizemore2@yahoo.com> wrote in message
>>> news:20041025074544.269$G8@news.newsreader.com...
>>>
>>>What is important in sensation and perception is that
>>>movement of an animal (or, more specifically, of its
>>>receptors) has consequences. When we sweep our eyes over a
>>>patch of red, there are changes in stimulation - such
>>>movement/consequence contingencies are at the heart of
>>>learning to perceive the world.
>
>>As I said at the end of my note, such consequence-driven
>>learning cannot be ignored.  But it is not the only
>>important kind of learning, nor is it in any sense the
>>"most" important.   It is the most obvious from the
>>outside, it puts the finishing touches on our behavior and
>>fine-tunes it, but it is inadequate in itself to explain
>>our behavior.  Your position is rather like that of a naive
>>driver claiming that the only important parts of an
>>automobile are the steering wheel and pedals,  who becomes
>>indignant at the suggestion that internal things like
>>engine, transmission, and linkages may also be important.
>>
>>We cannot perform coordinated motion in the first place, or
>>detect any changes in any sort of stimulation, until our
>>neural circuitry has become sufficiently organized to
>>support such operations.  Which it does by virtue of the
>>correlation-driven learning mechanisms I talk about in my
>>note.

>That organisation occurs under environmental control. The
>notion that the genes somehow function, and the neural
>circuitry will develop just as it should, in the absence of
>inputs to the cells is simply false. In fact, in the absence
>of critical environmental input via the sensory nerves,
>neural development can not only be delayed but permanently
>inhibited.
>
>One of the most elegant experiments to show this involved
>kittens who were raised in different visual environments. In
>one case, the kittens' cages were striped horizontally, and
>there were no platforms (which would have offered an
>opportunity to learn vertical separations.) Very early in
>their development, these kittens failed to respond to
>vertical stripes. Those kittens that were raised the longest
>in this environment never learned to perceive vertical
>stripes (edges, etc).
>
>Hidden in your analysis is the assumption that genes
>function absent environmental (extra-cellular) inputs. This
>is generally false.  Genes must be switched on and off, in
>proper sequence etc, in order for development to take place.
>The chemistry of gene control includes extra-cellular
>inputs. In fact, most genes are controlled by extra-cellular
>inputs - neurons ultimately by the inputs from the sensors
>(which are of course controlled by inputs such as photons.)
>
>"Correlation-driven mechanisms of learning" do not deny the
>role of the environment in learning/development - they
>confirm it.

I just finished a response to your previous posting and
probably could let this go, but...

Wolf, is there any chance you might take a moment to read
what I write before replying to it?

You are still agreeing with me at the top of your voice.

Where do you get the idea that I am assuming development
in the absence of inputs?  I am talking specifically about
how development is influenced by inputs, which can hardly
be construed as a denial of the role of inputs.

You give the same example that I alluded to in my original
post (the kittens raised in odd visual environments), to make
much the same point as I made, and somehow think that you
are telling me something?

Glen on the other hand actually disagrees with me.

He claims that learning occurs only by adjusting a behavior
mediating function in response to consequences of its
behavioral output.   An organism does something, that has
contingent results, and it is those contingencies that
control learning.  There is no place in his view for
learning from mere correlations among environmental inputs
independent of behavior, there must be some sense in which
the controlling inputs are viewable as contingent on a
behavior that is to be modified.

I believe that he is mistaken, that adjustments in
function induced by correlations among signals not
contingent on behavior play an important role in
behavioral adaptation.  My posting was intended to be
a counter to his position, a listing of several places
where functions are developed without reliance on
behavioral contingencies, which functions nevertheless
have critical bearing on subsequent behavior.

You seem to be taking the same stance as I am.  You
speak of developmental processes that must occur for
a child to be ready to learn any particular lesson just
as I speak of developmental processes which must occur
before a system is able to detect and learn from
behavioral contingencies.

You seem to agree that these developments include the
same ones I cited as examples.  Yet you frame your post
as one of disagreement.  Can you explain this?

Bill


Bill
10/26/2004 5:28:02 AM
"patty" <pattyNO@SPAMicyberspace.net> wrote in message
news:E59fd.303716$MQ5.274103@attbi_s52...
> Bill Modlin wrote:
> > [...snip: original post quoted in full above...]
>
> What evidence do you have that this happens at all?
>
> patty

I don't understand the question, Patty.  The post is a listing of
well-known experimental observations of functional changes occurring
under circumstances in which the only information available to drive
the change is that provided by correlations accessible locally,
circumstances which preclude the possibility that behavioral
contingency is involved.   That seems to me reasonable evidence that
it does occur... we can see it happening.

From another angle, one might take it as relevant evidence that we
can construct a simple neural model, adjust the weights by some sort
of Hebbian rule, and find that useful functions are indeed computed.
So not only do we see it happening in vivo, we have independent
verification that the interpretation of our observations is
plausible... mechanisms based on the principle we suppose to be
relevant do actually perform as expected, even when stripped down to
the barest essentials.
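
For instance, here is a minimal in-silico version of that claim (a
sketch only: Python with numpy, Oja's stabilized variant of the Hebbian
rule, and a made-up two-dimensional input ensemble).  A single unit
trained on nothing but its inputs converges on the leading principal
component of the input correlations, a genuinely useful function, with
no feedback about any output.

import numpy as np

rng = np.random.default_rng(3)
C = np.array([[3.0, 2.0],          # toy input covariance,
              [2.0, 3.0]])         # stretched along the (1, 1) direction
x = rng.multivariate_normal([0.0, 0.0], C, size=5000)

w = 0.1 * rng.standard_normal(2)
for xi in x:
    y = w @ xi                     # the unit's activity
    w += 0.005 * y * (xi - y * w)  # Oja's rule: Hebbian growth plus decay

print(np.round(w / np.linalg.norm(w), 3))  # ~(0.707, 0.707) up to sign:
                                           # the first principal component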

What sort of evidence are you asking for?

Bill




Bill
10/26/2004 5:49:11 AM
"dan michaels" <feedbackdroids@yahoo.com> wrote in message
news:8d8494cf.0410250740.6968afef@posting.google.com...
> "Bill Modlin" <modlin1@metrocast.net> wrote in message
> news:<2IOdnXgZS_WCFOHcRVn-jA@metrocastcablevision.com>...
>
> > Overall, the point is that the functions computed by cells in the
> > brain are largely determined by the correlations encountered in the
> > signals accessible to the cell, rather than by genetic control.
>
> The problem comes if you believe this part so strongly that you gloss
> over or disregard or downplay the underlying "foundation" for the
> system as provided by genetics. Tabula rasa, it ain't.

We've been here many times before, Dan.   I'm not sure we actually
disagree... at worst we quibble over just how much genetic structure
is required.   I certainly don't expect a huge random network with
no initial structure to magically self-organize into a person... at
the very least it has to be part of an organism with genetically
endowed (or designed in, for a robot) initial behaviors and drives.
Perhaps there is a lot more required.

You seem to think that there may be a need for at least 30 subtly
different frameworks to account for the 30-odd visual functional
areas that you are fond of mentioning, and for all I know you could
be right.

Our main difference is in our perception of where best to focus our
current efforts.   I am still sufficiently impressed by the
potential for self organization that I'd like to find out how far it
can take us.  If and when we find something that can't be made to
work by self organization, then we can dig in and see what
additional structure is needed to make it work.

My impression is that you would have us spend many years finding out
just how the brain does it all before even attempting to construct
anything.

My way, perhaps we'll find that we only need a handful of
specialized structures and can be done in a few years.   Worst case
we waste a little time and wind up eventually digging out all the
detail you wanted to start with.  Your way we have no chance of
early success.  Place your bets... but me, I'd rather hope for
something that might be finished in my lifetime.

Bill





Bill
10/26/2004 6:20:03 AM
Bill Modlin wrote:
> "patty" <pattyNO@SPAMicyberspace.net> wrote in message
> news:E59fd.303716$MQ5.274103@attbi_s52...
> 
>>Bill Modlin wrote:

http://Finding-useful-functions.notlong.com

In which you arrived at one particular stunning conjecture:

  This sort of learning does not depend on
  consequences of the output of the function,
  and would occur even if the output were not
  connected to anything else and could therefore
  have no consequences extending beyond the
  cell doing the learning.

>>What evidence do you have that this happens at all ?
>>
>>patty
> 
> 
> I don't understand the question, Patty.  The post is a listing of
> well-known experimental observations of functional changes occurring
> under circumstances in which the only information available to drive
> the change is that provided by correlations accessible locally,
> circumstances which preclude the possibility that behavioral
> contingency is involved.   That seems to me reasonable evidence that
> it does occur... we can see it happening.
> 

I see a lot of assertions in your article, but i see no experimental 
observations.  Reporting such observations could amount to pointers to 
actual data or to an experiment where any movement of the organism was 
prevented, yet it could still be proved that learning happened.  Perhaps 
one where muscles to the eyes and limbs are severed, yet learning can 
still be demonstrated.

> From another angle, one might take it as relevant evidence that we
> can construct a simple neural model, adjust the weights by some sort
> of Hebbian rule, and find that useful functions are indeed computed.
> So not only do we see it happening in vivo, we have independent
> verification that the interpretation of our observations is
> plausible... mechanisms based on the principle we suppose to be
> relevant do actually perform as expected, even when stripped down to
> the barest essentials.
> 

No i would not take that as evidence that this happens in a natural 
organism.

> What sort of evidence are you asking for?

Hopefully i have explained that above.


Patty
patty
10/26/2004 8:53:40 AM
patty wrote:
>[snip] I see a lot of assertions in your article, but i see no experimental
> observations.  Reporting such observations could amount to pointers to
> actual data or to an experiment where any movement of the organism was
> prevented, yet it could still be proved that learning happened. Perhaps where 
> muscles to the eyes and limbs are severed, yet learning
> can still be demonstrated.

An interesting (and short) introduction to the subject of Mr. Modlin's
post can be read in this book review:

http://www.findarticles.com/p/articles/mi_m2483/is_2_22/ai_76698533/print

If you want to dig deeper, take a look at this:

http://itb.biologie.hu-berlin.de/~wiskott/Bibliographies/LearningInvariances.html

But if you're not interested in theoretical matters, just a bit
of empirical support for this research, take a look at this:

http://waisman.wisc.edu/infantlearning/INFANT_RESEARCH.HTML

*SG*


Stargazer
10/26/2004 1:18:19 PM
"David Longley" <David@longley.demon.co.uk> wrote in message 
news:Crq7l5GSSSfBFwls@longley.demon.co.uk...
> In article <WvCdnUKMSp9pluDcRVn-iQ@metrocastcablevision.com>, Bill Modlin 
> <modlin1@metrocast.net> writes
>>
>>"Glen M. Sizemore" <gmsizemore2@yahoo.com> wrote in message
>>news:20041025074544.269$G8@news.newsreader.com...
>>> What is important in sensation and perception is that movement of
>>> an animal (or, more specifically, of its receptors) has
>>> consequences. When we sweep our eyes over a patch of red, there are
>>> changes in stimulation - such movement/consequence contingencies
>>> are at the heart of learning to perceive the world.
>>
>>As I said at the end of my note, such consequence-driven learning
>>cannot be ignored.  But it is not the only important kind of
>>learning, nor is it in any sense the "most" important.
>
> Nonsense.
>
> How do you know this? Why do you talk this way at all? What makes you 
> think there are different types of "learning" (leaving aside the dubious 
> distinction between operant and classical conditioning)? What is an 
> *important* kind of learning? The previous four questions are more helpful 
> than you might think.

Nonsense!

>
> If you had read Catania's book as I advised you might not be writing this 
> way (hence the four questions - although you probably thought them just 
> expressing critical indignation at your gall <g> I assume therefore that 
> you have not read his book, and so I ask you:  what do you know about 
> "learning" and how do you know it?
>
> Until you address that, how do you know whether what you are writing makes 
> any more sense than the stuff Zick or Verhey write? Might it be that you 
> just write more eloquently?
> -- 
> David Longley 


AlphaOmega2004
10/26/2004 3:28:09 PM
"Bill Modlin" <modlin1@metrocast.net> wrote in message news:<C5WdnYeQmMllXODcRVn-3A@metrocastcablevision.com>...
> "dan michaels" <feedbackdroids@yahoo.com> wrote in message
> news:8d8494cf.0410250740.6968afef@posting.google.com...
> > "Bill Modlin" <modlin1@metrocast.net> wrote in message
>  news:<2IOdnXgZS_WCFOHcRVn-jA@metrocastcablevision.com>...
> >
> >
> > > Overall, the point is that the functions computed by cells in the
> > > brain are largely determined by the correlations encountered in the
> > > signals accessible to the cell, rather than by genetic control.
> > >
> >
> > The problem comes if you believe this part so strongly that you gloss
> > over or disregard or downplay the underlying "foundation" for the
> > system as provided by genetics. Tabula rasa, it ain't.
> 
> We've been here many times before, Dan.   I'm not sure we actually
> disagree... at worst we quibble over just how much genetic structure
> is required.   I certainly don't expect a huge random network with
> no initial structure to magically self-organize into a person... at
> the very least it has to be part of an organism with genetically
> endowed (or designed in, for a robot) initial behaviors and drives.
> Perhaps there is a lot more required.
> 
> You seem to think that there may be a need for at least 30 subtly
> different frameworks to account for the 30-odd visual functional
> areas that you are fond of mentioning, and for all I know you could
> be right.
> 
> Our main difference is in our perception of where best to focus our
> current efforts.   I am still sufficiently impressed by the
> potential for self organization that I'd like to find out how far it
> can take us.  If and when we find something that can't be made to
> work by self organization, then we can dig in and see what
> additional structure is needed to make it work.
> 
> My impression is that you would have us spend many years finding out
> just how the brain does it all before even attempting to construct
> anything.
> 


Hi Bill, *exactly* the opposite, as I believe I've said many times
around here. I think neuroscience has already given us plenty enough
information that we could start developing computer systems which do
something similar. If I were actively working in this area, that's
what I'd be doing.

Regards the 30 visual areas, I think they are there for a reason, not
by random chance. They appeared as a result of evolution fine-tuning
the system to solve the problems the organisms were presented with. If
you postulate the various areas [for the purposes of research,
simulation, and test] simply as being preprocessing systems, then at
the least they make the job of any s.o.s. or memory-prediction system
they connect to all that much easier. Vision is such an enormously
difficult problem that nature found it couldn't be solved adequately
with only a 1-level memory system, blank self-organizing slate, simple
S-R units, etc. If it were that easy to do, then nature would have
done it that way from the get-go.
==============


> My way, perhaps we'll find that we only need a handful of
> specialized structures and can be done in a few years.   Worst case
> we waste a little time and wind up eventually digging out all the
> detail you wanted to start with.  Your way we have no chance of
> early success.  Place your bets... but me, I'd rather hope for
> something that might be finished in my lifetime.
> 
> Bill


Again, exactly the opposite - you're mixing me up with the
wait-until-eternity we-don't-know-anything guys. As I told John.H just
a day ago, if the early CV people, knowing about limulus and mach
bands 50 or so years ago, had decided to wait another 50 years for
neuroscience to nail down everything about vision, where would we be
today - still running rats. My recommendation is to immediately use
every piece of neuroscience research at our disposal, TODAY. Can I be
any more direct than that? One needn't worry about exactly "how" the
real visual system does it, but can **adapt** [what we know about]
"what" it does.
feedbackdroids
10/26/2004 3:30:24 PM
Stargazer wrote:
> patty wrote:
> 
>>[snip] I see a lot of assertions in your article, but i see no experimental
>>observations.  Reporting such observations could amount to pointers to
>>actual data or to an experiment where any movement of the organism was
>>prevented, yet it could still be proved that learning happened. Perhaps where 
>>muscles to the eyes and limbs are severed, yet learning
>>can still be demonstrated.
> 
> 
> An interesting (and short) introduction to the subject of Mr. Modlin's
> post can be read in this book review:
> 
> http://www.findarticles.com/p/articles/mi_m2483/is_2_22/ai_76698533/print
> 
> If you want to dig deeper, take a look at this:
> 
> http://itb.biologie.hu-berlin.de/~wiskott/Bibliographies/LearningInvariances.html
> 
> But if you're not interested in theoretical matters, just a bit
> of empirical support for this research, take a look at this:
> 
> http://waisman.wisc.edu/infantlearning/INFANT_RESEARCH.HTML
> 
> *SG*

Well my take on Bill's type of learning is absence of a training signal 
*and* absence of behavior ... iow it is the nature of the network to
change by noticing correlations in its input - its output is
irrelevant to the process.  The articles you cite above are all just
about "Unsupervised Learning" ... iow absence of a training signal.  In 
unsupervised learning the organism moves and then notices how those 
motions change what it senses - in Bill's learning the organism just 
notices how what it is sensing changes.  But then maybe i have misread him.

patty
patty
10/26/2004 3:56:54 PM
Bill Modlin wrote:
[...]
> None of your post touches on the distinction I made between
> algorithms
> driven by contingencies of the output ("supervised") and other
> algorithms driven by local correlations independent of such
> contingencies.
> 
> Explain to me again why that distinction was nonsense?
> 
> Bill

You assume that the "output" is different from "local correlations." 
 From the P.O.V. of the neuron, there is no difference. What the neuron 
gets is local input. It has no way of distinguishing between an input 
that originates spontaneously in the connected neighbour cell(s) (which 
is what appears to be implied by "local correlations") and input that 
originates from some cell some distance away (ie, at least one
intermediary cell away). IOW, there is no difference between the inputs 
originating in some environmental feedback and any other input. The 
neuron takes up some messenger molecule, this triggers changes within 
the cell, which eventually result in synaptic strengthening/weakening. 
But those messenger molecules aren't labelled "local" or "supervised 
feedback". NB that as far as current knowledge goes, learning at the 
neural level requires that genes be switched on and off. Again, the 
genes don't know whether the molecules that switch them are the result 
of local processes or more distant ones.

If by "local correlations" you mean something other than differences in 
inputs (which --> cell firing --> synaptic strengthening/weakening), 
then your description thus far is misleading.

Wolf
10/26/2004 4:16:15 PM
Bill Modlin wrote:
[...]
> Glen on the other hand actually disagrees with me.
> 
> He claims that learning occurs only by adjusting a behavior
> mediating function in response to consequences of its
> behavioral output.   An organism does something, that has
> contingent results, and it is those contingencies that
> control learning.  There is no place in his view for
> learning from mere correlations among environmental inputs
> independent of behavior, there must be some sense in which
> the controlling inputs are viewable as contingent on a
> behavior that is to be modified.
 >
> I believe that he is mistaken, that adjustments in
> function induced by correlations among signals not
> contingent on behavior play an important role in
> behavioral adaptation.  My posting was intended to be
> a counter to his position, a listing of several places
> where functions are developed without reliance on
> behavioral contingencies, which functions nevertheless
> have critical bearing on subsequent behavior.
> 
> You seem to be taking the same stance as I am.  You
> speak of developmental processes that must occur for
> a child to be ready to learn any particular lesson just
> as I speak of developmental processes which must occur
> before a system is able to detect and learn from
> behavioral contingencies.
> 
> You seem to agree that these developments include the
> same ones I cited as examples.  Yet you frame your post
> as one of disagreement.  Can you explain this?
> 
> Bill

Well, actually I do disagree with you. You refer to signals not 
contingent on behaviour. I see nothing but signals contingent on 
behaviour. That's the point of my reference to the kitten experiment. 
The VC of the kitten can learn only from the signals delivered by the 
eyes - and if none of those signals correlate with vertical edges, the 
kitten will not learn to perceive vertical edges. If "local 
correlations" were sufficient, the kitten would perceive vertical edges 
no matter what its early visual experience. At any rate, that's how I
interpret "local correlations": you invoke them to explain learning in 
the absence of external contingencies. But in so doing you assume that 
such learning actually happens. AFAIK, it doesn't. "Development" is as 
dependent on external contingencies as "learning" is - that's why I'm 
inclined to think it's pointless to make a distinction.

It's apparent that "behaviour" to you means only "gross behaviour", that 
of the organism as a whole. I include anything outside the neural net as 
behaviour. Eg, your eye moves, the image on its retina changes, the 
signals from the retina arrive at the VC, and the VC "develops" -- that 
is, it learns to distinguish between vertical and horizontal edges, for 
example. (NB that the retinal cells fire only if the light falling on 
them changes - that's why spontaneous eye movement is necessary.)

It occurs to me that by "local correlations" you may mean something like 
the bundling of retinal receptors, so that their signals "mean" an edge
moving vertically, diagonally, etc as the eye moves. If so, that's still 
signals contingent on behaviour - the behaviour of the eye, in this 
case. It's also a local correlation removed from the VC.

However, when it comes to engineering a learning  machine, the 
distinction between "local correlations" and "feedback" may matter - not 
in the design of the hardware, but in the design of the software, which 
is your focus. OTOH, any algorithm that produces learning in a neural 
net must include those local correlation algorithms. Whether the local 
correlation algorithms have any use on their own is another question. 
You talk of pruning synaptic connections, but whatever criteria you 
build into the algorithm must be external to the LC's. In nature, AFAIK 
the spontaneously generated synaptic connections are in fact pruned by 
feedback from precisely those external contingencies that you seem to 
wish to eliminate from the development of the organism.


Wolf
10/26/2004 4:45:52 PM
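
Whether a pruning criterion can be purely local, as Bill's picture needs,
or must be external, as Wolf argues, is at least easy to state in code.
This is a toy sketch under an assumed criterion of my own (a threshold on
a running pre/post correlation trace), not anything either party has
specified:

import numpy as np

# Prune synapses whose pre/post activity correlation stays weak. The
# criterion uses only quantities available at each synapse: the running
# correlation between presynaptic input x_j and the unit's own output y.
rng = np.random.default_rng(1)

n_in, n_samples = 8, 5000
x = rng.normal(size=(n_samples, n_in))
x[:, 1] = x[:, 0] + 0.1 * rng.normal(size=n_samples)  # inputs 0, 1 correlate

w = rng.normal(size=n_in) * 0.1
corr = np.zeros(n_in)                 # running pre/post correlation estimate
eta, tau = 0.02, 0.005

for xi in x:
    y = w @ xi
    w += eta * y * (xi - y * w)       # Oja-style Hebbian update
    corr += tau * (xi * y - corr)     # local correlation trace

w[np.abs(corr) < 0.3] = 0.0           # prune weakly correlated synapses
print(np.nonzero(w)[0])               # typically only inputs 0 and 1 survive

The threshold itself is a design choice built into the algorithm, which is
arguably Wolf's point; but every quantity the rule consults is local to
the synapse, which is Bill's.
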
"David Longley" <David@longley.demon.co.uk> wrote in message
news:Crq7l5GSSSfBFwls@longley.demon.co.uk...
> In article <WvCdnUKMSp9pluDcRVn-iQ@metrocastcablevision.com>, Bill Modlin
> <modlin1@metrocast.net> writes
>>
>>"Glen M. Sizemore" <gmsizemore2@yahoo.com> wrote in message
>>news:20041025074544.269$G8@news.newsreader.com...
>>> What is important in sensation and perception is that movement of
>>an animal
>>> (or, more specifically, of its receptors) has consequences. When
>>we sweep
>>> our eyes over a patch of red, there are changes in stimulation -
>>such
>>> movement/consequence contingencies are at the heart of learning to
>>perceive
>>> the world.
>>
>>As I said at the end of my note, such consequence-driven learning
>>cannot be ignored.  But it is not the only important kind of
>>learning, nor is it in any sense the "most" important.
>
> Nonsense.
>
> How do you know this? Why do you talk this way at all? What makes you
> think there are different types of "learning" (leaving aside the dubious
> distinction between operant and classical conditioning)? What is an
> *important* kind of learning? The previous four questions are more helpful
> than you might think.

Nonsense!

>
> If you had read Catania's book as I advised you might not be writing this
> way (hence the four questions - although you probably thought them just
> expressing critical indignation at your gall <g> I assume therefore that
> you have not read his book, and so I ask you:  what do you know about
> "learning" and how do you know it?
>
> Until you address that, how do you know whether what you are writing makes
> any more sense than the stuff Zick or Verhey write? Might it be that you
> just write more eloquently?
> -- 
> David Longley



0
AlphaOmega2004
10/26/2004 4:55:30 PM
"David Longley" <David@longley.demon.co.uk> wrote in message
news:Crq7l5GSSSfBFwls@longley.demon.co.uk...
> In article <WvCdnUKMSp9pluDcRVn-iQ@metrocastcablevision.com>, Bill Modlin
> <modlin1@metrocast.net> writes
>>
>>"Glen M. Sizemore" <gmsizemore2@yahoo.com> wrote in message
>>news:20041025074544.269$G8@news.newsreader.com...
>>> What is important in sensation and perception is that movement of
>>an animal
>>> (or, more specifically, of its receptors) has consequences. When
>>we sweep
>>> our eyes over a patch of red, there are changes in stimulation -
>>such
>>> movement/consequence contingencies are at the heart of learning to
>>perceive
>>> the world.
>>
>>As I said at the end of my note, such consequence-driven learning
>>cannot be ignored.  But it is not the only important kind of
>>learning, nor is it in any sense the "most" important.
>
> Nonsense.
>
> How do you know this? Why do you talk this way at all? What makes you
> think there are different types of "learning" (leaving aside the dubious
> distinction between operant and classical conditioning)? What is an
> *important* kind of learning? The previous four questions are more helpful
> than you might think.

Nonsense!

>
> If you had read Catania's book as I advised you might not be writing this
> way (hence the four questions - although you probably thought them just
> expressing critical indignation at your gall <g> I assume therefore that
> you have not read his book, and so I ask you:  what do you know about
> "learning" and how do you know it?
>
> Until you address that, how do you know whether what you are writing makes
> any more sense than the stuff Zick or Verhey write? Might it be that you
> just write more eloquently?
> -- 
> David Longley




0
AlphaOmega2004
10/26/2004 5:36:08 PM
On Tue, 26 Oct 2004 14:13:10 -0700, "Bill Modlin"
<modlin1@metrocast.net> in comp.ai.philosophy wrote:

>
>"Stargazer" <fuckoff@spammers.com> wrote in message
>news:10nsjko5csc5t09@news20.forteinc.com...
>> patty wrote:
>>> [snip] I see a lot of assertions in your article, but i
>>> see no experimental observations.  Reporting such
>>> observations could amount to pointers to actual data or
>>> to an experiment where any movement of the organism was
>>> prevented, yet it could still be proved that learning
>>> happened.  Perhaps where muscles to the eyes and limbs
>>> are severed, yet learning can still be demonstrated.
>>
>> An interesting (and short) introduction to the subject of
>> Mr. Modlin's post can be read in this book review:
>>
>> http://www.findarticles.com/p/articles/mi_m2483/is_2_22/ai_76698533/print
>>
>> If you want to dig deeper, take a look at this:
>>
>> http://itb.biologie.hu-berlin.de/~wiskott/Bibliographies/LearningInvariances.html
>>
>> But if you're not interested in theoretical matters, just
>> a bit of empirical support for this research, take a look
>> at this:
>>
>> http://waisman.wisc.edu/infantlearning/INFANT_RESEARCH.HTML
>>
>> *SG*
>
>Thank you, Stargazer.   Those are excellent references.
>
>If people will read them perhaps we might skip this
>introductory phase in which I was just trying to establish
>the reasonableness of talking about unsupervised learning,
>and get to talking about how it happens.
>
>I don't know whether I have anything actually new to
>contribute to the subject.  But I'd like to discuss it, and
>I think it is relevant to AI.

The subject may be relevant to ai, Bill, but there'll likely be no
reasonableness about the discussion until it's relevant to behaviorism.

>Bill
>
>P.S. to Patty... my apologies for not providing actual
>pointers to the experimental reports in that post.  I truly
>thought the experiments I alluded to were sufficiently well
>known that details were unnecessary.
>
>As to severing connections to limbs, such drastic measures
>are not needed to prove the point.  It suffices to note
>that the changes observed are such that they cannot affect
>behavior until well after they are made, so that behavioral
>results cannot be taken as guiding the change itself.
>
>Bill
>
>


Regards - Lester
lesterDELzick
10/26/2004 6:51:20 PM
"Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message 
news:BKufd.6594$rs5.515234@news20.bellglobal.com...
> Bill Modlin wrote:
> [...]
>> None of your post touches on the distinction I made between
>> algorithms
>> driven by contingencies of the output ("supervised") and other
>> algorithms driven by local correlations independent of such
>> contingencies.
>>
>> Explain to me again why that distinction was nonsense?
>>
>> Bill
>
> You assume that the "output" is different from "local correlations." From 
> the P.O.V. of the neuron, there is no difference. What the neuron gets is 
> local input. It has no way of distinguishing between an input that 
> originates spontaneously in the connected neighbour cell(s) (which is what 
> appears to be implied by "local correlations") and input that originates 
> from some cell some distance away (ie, at least one intermediary cell 
> away). IOW, there is no difference between the inputs originating in some 
> environmental feedback and any other input. The neuron takes up some 
> messenger molecule, this triggers changes within the cell, which 
> eventually result in synaptic strengthening/weakening. But those messenger 
> molecules aren't labelled "local" or "supervised feedback". NB that as far 
> as current knowledge goes, learning at the neural level requires that 
> genes be switched on and off. Again, the genes don't know whether the 
> molecules that switch them are the result of local processes or more 
> distant ones.
>
> If by "local correlations" you mean something other than differences in 
> inputs (which --> cell firing --> synaptic strengthening/weakening), then 
> your description thus far is misleading.
>

I find your comment interesting. I agree that the input is indistinguishable
at the particular cellular level, but I don't see how that means it is
identical.

Causally, the more distant process, which also has a genetic basis, can
impact the global ability to learn, introducing different domains
of information which are then processed at the particular level.

I will mention what your comment reminded me of, the Turing Test.
Specific inputs are matched with specific outputs and if the outputs
provided by the program are indistinguishable from a human then
the program output is deemed human. So the assumption is that the
inner content/structure of the AI black box is not important, but that
the functionality is identical. However at some subtle level of questioning
in sufficient time the program will exhibit an aberration due to the program
being unable to anticipate unexpected veins of interrogation provided
by the environment.

I think that near identical information arriving from the environment
is unlikely to be processed globally in an identical manner (the topological 
structure of the synapses invoked) so that two instances of learning
similar material will constitute different 'things learned' whether or not
they are indistinguishable in a particular instance. There is no guarantee
that they will be indistinguishable on all inputs so that indistinguishable
does not mean identical.

Neurons have a firing potential which is subject to "reinforcement".
The signals are subject to degradation due to distance. So some
input stimulus is coded locally at some time t due to its intensity.
If that same input stimulus were to be processed by other synaptic
patterns, then a delay could result. And though the end individual
neuron would 'function identically', the fact learned need not be
identical.

I think your POV requires stipulating that there is a static chain
of input which processes through identical neural channels. But I
think in the real world, learning arises from a stream of impressions
that acts through t+x, t+xx and so on, which is probably why
firing potentials vary in value and also cascade.

> Again, the genes don't know whether the molecules that switch them are the 
> result of local processes or more distant ones.

That is true and qualifies as indistinguishable. But that does not mean the
content to be written is identical just because it corresponds. Between
two global inputs, I think the more immediate process termination will
have a different content than a more distant/time process termination.
The factoid of what is 'learned' may well be indistinguishable but that
again doesn't mean it is identical. I think this description accords to
theories about memory being re-remembered or recreated and the
gist of the memory changes though it may not be possible to
distinguish differences of memories as time goes by. I think learning
requires memory or is manifested in memory and is reinterpreted
to match with present events.

> Again, the genes don't know whether the molecules that switch them are the 
> result of local processes or more distant ones.

This is true of the process you mention but the consequence that the
output content is identical does not follow. I think Bill means the final
derived output/content and you mean the process.




Stephen
10/26/2004 7:40:08 PM
"Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message 
news:BKufd.6594$rs5.515234@news20.bellglobal.com...
> Bill Modlin wrote:
> [...]
>> None of your post touches on the distinction I made between
>> algorithms
>> driven by contingencies of the output ("supervised") and other
>> algorithms driven by local correlations independent of such
>> contingencies.
>>
>> Explain to me again why that distinction was nonsense?
>>
>> Bill
>
> You assume that the "output" is different from "local correlations." From 
> the P.O.V. of the neuron, there is no difference. What the neuron gets is 
> local input. It has no way of distinguishing between an input that 
> originates spontaneously in the connected neighbour cell(s) (which is what 
> appears to be implied by "local correlations") and input that originates 
> from some cell some distance away (ie, at least one intermediary cell 
> away). IOW, there is no difference between the inputs originating in some 
> environmental feedback and any other input. The neuron takes up some 
> messenger molecule, this triggers changes within the cell, which 
> eventually result in synaptic strengthening/weakening. But those messenger 
> molecules aren't labelled "local" or "supervised feedback". NB that as far 
> as current knowledge goes, learning at the neural level requires that 
> genes be switched on and off. Again, the genes don't know whether the 
> molecules that switch them are the result of local processes or more 
> distant ones.
>
> If by "local correlations" you mean something other than differences in 
> inputs (which --> cell firing --> synaptic strengthening/weakening), then 
> your description thus far is misleading.
>

Perhaps, I don't really understand Bill's idea. I had in mind introspection
as in the expression of "learning from a painful experience". But of course
this has a root of experience in the environment. Otherwise, one would
have to assume it is God given.

Koch has a neurobiological model of mind which has two aspects.
One is more or less unconscious and generates something, I will call it
a representation which is the conscious mind. After sleep more
connections are made available to the conscious mind from something
we have tried to learn about. One would doubt that the conscious
mind fully determines the selection process.

There is probably something right about Quine's notion of internal
propensities which are inherited, as that matches popular intuition.
I think the idea about instincts is still considered a theory.
Maybe Bill means something like that which has been incorporated
during our evolutionary history, still ultimately from environmental
interaction.  Perhaps his idea appears to be more direct to environment
and genetic change than Darwin's selection of genes due to survival.

Certainly our intelligence and emotional maturity, which are distilled
from the environment, will contribute to getting enough food and
attracting a mate so that our genes get reintroduced into the gene pool;
that requires quite a bit of potential within the range of normalcy. This will
include what we are born with as well as how we are formed socially.
Besides the kitten there are feral children who can't learn to speak
after age 9-11 due at least in part to a reduced number of neurons.

Actually, I agree with Neil Rickert's comment:

"You posted a long wide ranging essay.  But you never tried to make
clear which parts of it are dogma (or a priori assumptions, or
whatever else you want to call it).

If you want discussion, it would help if you would indicate which
parts are open to discussion, and which parts you expect us to assume
as the basis for discussion."

SH: I don't think Bill expressed his view specifically enough for your
criticism of it (which I went into in another post) to necessarily be
on target. What you said was true enough, but I'm not so sure that
it countered what he meant by his claim, which I took to be about content
rather than function; function seemed to be your focus of dispute.

Regards,
Stephen



Stephen
10/26/2004 8:26:06 PM
"dan michaels" <feedbackdroids@yahoo.com> wrote in message 
news:8d8494cf.0410260730.72dbdbc5@posting.google.com...
> "Bill Modlin" <modlin1@metrocast.net> wrote in message 
> news:<C5WdnYeQmMllXODcRVn-3A@metrocastcablevision.com>...
>> "dan michaels" <feedbackdroids@yahoo.com> wrote in message
>> news:8d8494cf.0410250740.6968afef@posting.google.com...
>> > "Bill Modlin" <modlin1@metrocast.net> wrote in message
>>  news:<2IOdnXgZS_WCFOHcRVn-jA@metrocastcablevision.com>...
>> >
>> >
>> > > Overall, the point is that the functions computed by cells in the
>> > > brain are largely determined by the correlations encountered in the
>> > > signals accessible to the cell, rather than by genetic control.
>> > >
>> >
>> > The problem comes if you believe this part so strongly that you gloss
>> > over or disregard or downplay the underlying "foundation" for the
>> > system as provided by genetics. Tabula rasa, it ain't.
>>
>> We've been here many times before, Dan.   I'm not sure we actually
>> disagree... at worst we quibble over just how much genetic structure
>> is required.   I certainly don't expect a huge random network with
>> no initial structure to magically self-organize into a person... at
>> the very least it has to be part of an organism with genetically
>> endowed (or designed in, for a robot) initial behaviors and drives.
>> Perhaps there is a lot more required.
>>
>> You seem to think that there may be a need for at least 30 subtly
>> different frameworks to account for the 30-odd visual functional
>> areas that you are fond of mentioning, and for all I know you could
>> be right.
>>
>> Our main difference is in our perception of where best to focus our
>> current efforts.   I am still sufficiently impressed by the
>> potential for self organization that I'd like to find out how far it
>> can take us.  If and when we find something that can't be made to
>> work by self organization, then we can dig in and see what
>> additional structure is needed to make it work.
>>
>> My impression is that you would have us spend many years finding out
>> just how the brain does it all before even attempting to construct
>> anything.
>>
>
>
> Hi Bill, *exactly* the opposite, as I believe I've said many times
> around here. I think neuroscience has already given us plenty enough
> information that we could start developing computer systems which do
> something similar. If I were actively working in this area, that's
> what I'd be doing.
>
> Regards the 30 visual areas, I think they are there for a reason, not
> by random chance. They appeared as a result of evolution fine-tuning
> the system to solve the problems the organisms were presented with. If
> you postulate the various areas [for the purposes of research,
> simulation, and test] simply as being preprocessing systems, then at
> the least they make the job of any s.o.s. or memory-prediction system
> they connect to all that much easier. Vision is such an enormously
> difficult problem that nature found it couldn't be solved adequately
> with only a 1-level memory system, blank self-organizing slate, simple
> S-R units, etc. If it were that easy to do, then nature would have
> done it that way from the get-go.
> ==============
>

Some facts from Anthropology I. Nature has designed several different
visual systems during the course of evolution. Therefore Nature doesn't
know beforehand which one is optimal. There is fine-tuning involved, but
not all mistakes are eliminated by natural selection. We have both archaic/
neutral and dysfunctional genes retained in our gene pool. That is because
evolution is random; there is no goal or purpose waiting for achievement.
Selection will tend to manifest more survival traits than not. But nobody
can examine the present visual system and say all its parts are needed.

Some functions can be redundant or non-critically dysfunctional and
will not be eliminated from the gene pool just because of chance, so they
will not need to be duplicated in order to make a working system.
Neurobiology could inform what is actually needed but that is beyond
the current capability of science, meaning it is not known yet. Also the
human gene pool is not yet stable. 


Stephen
10/26/2004 8:48:06 PM
"Stargazer" <fuckoff@spammers.com> wrote in message
news:10nsjko5csc5t09@news20.forteinc.com...
> patty wrote:
>> [snip] I see a lot of assertions in your article, but i
>> see no experimental observations.  Reporting such
>> observations could amount to pointers to actual data or
>> to an experiment where any movement of the organism was
>> prevented, yet it could still be proved that learning
>> happened.  Perhaps where muscles to the eyes and limbs
>> are severed, yet learning can still be demonstrated.
>
> An interesting (and short) introduction to the subject of
> Mr. Modlin's post can be read in this book review:
>
> http://www.findarticles.com/p/articles/mi_m2483/is_2_22/ai_76698533/print
>
> If you want to dig deeper, take a look at this:
>
> http://itb.biologie.hu-berlin.de/~wiskott/Bibliographies/LearningInvariances.html
>
> But if you're not interested in theoretical matters, just
> a bit of empirical support for this research, take a look
> at this:
>
> http://waisman.wisc.edu/infantlearning/INFANT_RESEARCH.HTML
>
> *SG*

Thank you, Stargazer.   Those are excellent references.

If people will read them perhaps we might skip this
introductory phase in which I was just trying to establish
the reasonableness of talking about unsupervised learning,
and get to talking about how it happens.

I don't know whether I have anything actually new to
contribute to the subject.  But I'd like to discuss it, and
I think it is relevant to AI.

Bill

P.S. to Patty... my apologies for not providing actual
pointers to the experimental reports in that post.  I truly
thought the experiments I alluded to were sufficiently well
known that details were unnecessary.

As to severing connections to limbs, such drastic measures
are not needed to prove the point.  It suffices to note
that the changes observed are such that they cannot affect
behavior until well after they are made, so that behavioral
results cannot be taken as guiding the change itself.

Bill


Bill
10/26/2004 9:13:10 PM
"Stargazer" <fuckoff@spammers.com> wrote in message 
news:10nsjko5csc5t09@news20.forteinc.com...
> patty wrote:
>>[snip] I see a lot of assertions in your article, but i see no 
>>experimental
>> observations.  Reporting such observations could amount to pointers to
>> actual data or to an experiment where any movement of the organism was
>> prevented, yet it could still be proved that learning happened. Perhaps 
>> where muscles to the eyes and limbs are severed, yet learning
>> can still be demonstrated.
>
> An interesting (and short) introduction to the subject of Mr. Modlin's
> post can be read in this book review:
>
> http://www.findarticles.com/p/articles/mi_m2483/is_2_22/ai_76698533/print
>

"Most unsupervised learning algorithms are based on statistical estimation 
of the input data. As pointed out in the Konen and von der Malsburg article, 
such algorithms generally suffer from the problem of combinatorial explosion 
when dealing with realistically large patterns. They proposed incorporating 
structure, specifically the prior principle of conservation of topological 
structure, into their self-organization network for symmetry detection (see 
also the Gold et al. article). Their article emphasizes geometric 
principles, rather than statistical principles, for unsupervised learning. 
It is revealing to consider the old Minsky-Papert connectedness problem 
(Minsky and Papert 1969) in this context. This problem is one of telling 
connected patterns from disconnected ones. On a two-dimensional grid, there 
are exponentially many connected patterns. In theory, one could get a 
multilayer network to learn the connectedness predicate. However, as pointed 
out by Minsky and Papert (1988), it is practically infeasible because it 
requires far too many training samples and too much learning time. Not until 
recently was a neural network solution found, and the solution to the 
problem is based on a simple architecture with primarily nearest-neighbor 
coupling and an oscillatory correlation representation that labels pixels by 
synchrony and desynchrony (Wang 2000). This solution echoes the point of 
Konen and von der Malsburg on the importance of prior structure. From the 
philosophical point of view, the brain of a newborn possesses genetic 
knowledge resulting from millions of years of evolution. Although, in 
theory, all is learnable, including connectivity and representation, 
computational complexity has to be an important consideration. Hence, future 
investigation on unsupervised learning needs to incorporate appropriate 
prior structure."

SH: The history of AI has proceeded from the symbolic/top-down approach to 
the connectionistic/bottom up approach to the blend of both approaches. 
Minsky, who is mentioned above, was an early advocate of the blend approach. 
He wrote a paper, Neat vs. Scruffy which explained why he thought this was 
the best approach. He also explained why we couldn't just try to simulate 
the actual evolution of Earth/humanity itself. There are zillions of chance 
events which produce zillions to the zillionth power of possible 
permutations, one of which led to intelligent life.

I think this is similar to the cellular automaton approach to the creation 
of the universe discussed by Wolfram. Suppose humanity got lucky and, after
running millions of CA rules over millions of years, the real true CA
rule happened to be run as a simulation. How would humanity know that it was 
The Truth? Wolfram says that it takes approximately the same amount of time 
to run the simulation as the actual creation event...a few billion years.

Moreover, how do you compare the output of a neural network approach
to normal human output? Well, the best attempt will be some type of Turing
Test which maps the judges' evaluating criteria --> appropriate responses to
revealing questions onto some type of dimensional grid, so that you would see
appropriate questions/answers represented as a trajectory(ies) on the grid.
Now you run the same questions through the program and see if its outputs map
within the same trajectory(ies) as the subjectively expected human output
trajectories.

But even now we have Turing programs that could stand up to a few simple 
questions for a few minutes and pass that particular Turing Test. The 
trajectory of its output would be deemed within the realm of human 
trajectories which trace the course of similarity to human type responses, 
sort of like visually seeing several airplanes flying on the same course on 
a radar screen. Now a potential Turing Test-passing program is exposed as
non-human by long-term subtle probing which explores areas difficult to
anticipate by programming, similar to how a psychopathic human would 
eventually be exposed to have values or lack of values outside the human 
norm. This might take extended questioning by a psychiatrist before the 
deviance from the human norm was discovered which would be plotted as a 
trajectory outside the normal human grouping or domain.

I don't think it is a priori possible to know how long a suitable candidate
for unsupervised or supervised neural network learning simulating human
output will continue to provide responses within or congruent to the range
of correct or proper human responses before the trajectories diverge. So I
see a problem: even if the right unsupervised learning technique is
actually implemented, its output must still be identified and authenticated
by human evaluators, who have no way of knowing that the program output will
always stay approximately human and never depart radically from the human
norm somewhere down the line. We are still a long way from 
mapping genes to the functions they blueprint within structures of the 
brain.

Bill Modlin writes:
"But I do suggest that these correlation-driven "unsupervised"
mechanisms provide a critically important underpinning for other
learning paradigms, that they are necessary parts of an explanation
of how all our behavior-generating mechanisms actually work."

SH: This does remind me of Christof Koch and a neurological mind that
surfaces as a conscious mind representation (best description?). AI has
traditionally never wanted to invoke the sub- or unconscious mind
possibility in its methodology.









Stephen
10/26/2004 10:20:21 PM
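
The Minsky-Papert connectedness problem mentioned in the quoted review is
easy to state concretely. Wang's oscillatory-correlation network is too
involved to reproduce here, but the flavor of deciding connectedness
through purely nearest-neighbor interactions can be sketched with
union-find (my own toy illustration, not Wang's algorithm): repeatedly
merging 4-adjacent active pixels settles the predicate using only local
coupling.

import numpy as np

def is_connected(grid):
    """Decide the Minsky-Papert connectedness predicate for a binary
    grid using only nearest-neighbor merges (union-find)."""
    parent = {p: p for p in zip(*np.nonzero(grid))}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]   # path halving
            p = parent[p]
        return p

    def union(p, q):
        parent[find(p)] = find(q)

    # Couple each active pixel only to its right and lower neighbors.
    for (i, j) in list(parent):
        for q in ((i + 1, j), (i, j + 1)):
            if q in parent:
                union((i, j), q)

    roots = {find(p) for p in parent}
    return len(roots) <= 1    # one component (or none) => connected

print(is_connected(np.array([[1, 1, 0],
                             [0, 1, 0],
                             [0, 1, 1]])))   # True
print(is_connected(np.array([[1, 0, 1],
                             [0, 0, 0],
                             [1, 0, 1]])))   # False

The contrast with a learned solution is the review's point: here the
nearest-neighbor structure is given a priori rather than discovered from
training samples.
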
"Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message 
news:navfd.6610$rs5.517749@news20.bellglobal.com...
Bill Modlin


> You talk of pruning synaptic connections, but whatever criteria you build 
> into the algorithm must be external to the LC's. In nature, AFAIK the 
> spontaneously generated synaptic connections are in fact pruned by 
> feedback from precisely those external contingencies that you seem to wish 
> to eliminate from the development of the organism.
>

Well in the case of the kittens, I think the connections are never made,
rather than existing and being pruned. There is not some existing pathway
awaiting imprinting.

But other connections are surely pruned which were originally made
by interaction with the environment.

I'm not sure all other connections need to be learned from the environment.
Perhaps it is semantics but made possible by the environment does not
mean the same to me as learned from the environment.

For instance drawing the first breath. Or better yet, motor movement
within the womb. I don't think many people are going to agree that
the creation event which leads to the baby kicking in the womb has
been a learning experience which has taught the baby how to kick
in the womb. Nor does having a womb boundary to kick against
count as an external learning stimulus. You would have to count the
development of the foetus as learning experience for the baby. I don't
say that it is impossible conceptually, but that it will seem implausibly
extreme or categorically obsessive to me and a great many other people.
One would have to expand environment to mean the chain of parents of
parents that made the baby who now kicks in the womb.

Embodiment is conceptually distinct from what that body does later.
Ultimately, the universe serves as the environment and there is no
agreement that the environmental universe created itself from an
environmental source, meaning how that something came into existence,
its essence, is not the same as what it does after manifesting existence.
I don't mean that this viewpoint is objective to the point of excluding
yours, because it is a philosophical assumption, just as yours is.


Stephen
10/26/2004 11:01:57 PM
"Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message 
news:BKufd.6594$rs5.515234@news20.bellglobal.com...
> Bill Modlin wrote:
> [...]
>> None of your post touches on the distinction I made between
>> algorithms
>> driven by contingencies of the output ("supervised") and other
>> algorithms driven by local correlations independent of such
>> contingencies.
>>
>> Explain to me again why that distinction was nonsense?
>>
>> Bill
>
> You assume that the "output" is different from "local correlations." From 
> the P.O.V. of the neuron, there is no difference. What the neuron gets is 
> local input. It has no way of distinguishing between an input that 
> originates spontaneously in the connected neighbour cell(s) (which is what 
> appears to be implied by "local correlations") and input that originates 
> from some cell some distance away (ie, at least one intermediary cell 
> away). IOW, there is no difference between the inputs originating in some 
> environmental feedback and any other input. The neuron takes up some 
> messenger molecule, this triggers changes within the cell, which 
> eventually result in synaptic strengthening/weakening. But those messenger 
> molecules aren't labelled "local" or "supervised feedback". NB that as far 
> as current knowledge goes, learning at the neural level requires that 
> genes be switched on and off. Again, the genes don't know whether the 
> molecules that switch them are the result of local processes or more 
> distant ones.
>
> If by "local correlations" you mean something other than differences in 
> inputs (which --> cell firing --> synaptic strengthening/weakening), then 
> your description thus far is misleading.
>

A reference to standard usage in the literature:
http://www.science.mcmaster.ca/Psychology/becker/papers/beckerzemelHBTNN2ndEd.pdf

"Unsupervised learning algorithms can be distinguished by the absence of any
supervisory feedback from the external environment. Often, however, there is
an implicit _internally-derived_ training signal. This training signal is
based on some measure of the quality of the network's internal
representation. The main problem in unsupervised learning research is to
formulate a performance measure or cost function for the learning, to
generate this internal supervisory signal. The cost function is also known
as the objective function, since it sets the objective for the learning
process. ..."


Stephen
10/26/2004 11:40:23 PM
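
That internally derived training signal can be exhibited directly. Below
is a minimal sketch assuming reconstruction error as the cost function --
one common choice among the several the Becker chapter surveys -- so the
"supervisory" signal is manufactured from the input itself:

import numpy as np

# Unsupervised learning with an internally derived training signal: the
# cost is reconstruction error ||x - W^T W x||^2, so the "target" is the
# input itself. No feedback from the external environment is used.
rng = np.random.default_rng(0)

d, k, n = 4, 2, 3000
latents = rng.normal(size=(n, k))                # two hidden causes...
mixing = rng.normal(size=(k, d)) / np.sqrt(k)    # ...mixed into 4 channels
x = latents @ mixing + 0.05 * rng.normal(size=(n, d))

W = rng.normal(size=(k, d)) * 0.1    # encoder; decoder is W.T (tied weights)
eta = 0.01
for xi in x:
    h = W @ xi                       # internal representation
    err = xi - W.T @ h               # internally derived error signal
    # Gradient descent step on 0.5*||err||^2 for the tied-weight model
    W += eta * (np.outer(h, err) + W @ np.outer(err, xi))

# As training converges, W's rows approach an orthonormal basis for the
# subspace of the hidden causes, so W @ W.T approaches the identity.
print(np.round(W @ W.T, 2))
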
"Bill Modlin" <modlin1@metrocast.net> wrote in message 
news:XaydnYwMhNzdDeDcRVn-qg@metrocastcablevision.com...

>  None of your post touches on the distinction I made between
> algorithms
> driven by contingencies of the output ("supervised") and other
> algorithms driven by local correlations independent of such
> contingencies.
>
> Explain to me again why that distinction was nonsense?
>
> Bill
>
>

I think this is the distinction you refer to:

"Global Objective Functions or Synaptic Learning Rules"

"Since our concern is with unsupervised learning in _networks_ and
their global environment, we will focus on algorithms based upon
globally-defined objective functions, rather than synaptic learning rules.
By viewing the learning process as the optimization of a global objective
function, we can reduce a global algorithm into synaptic-level steps
(weight changes), but the converse is not necessarily true; i.e., a given
synaptic learning rule may not correspond to the derivative of any global
objective function. Further, a well-defined objective function for the
learning allows us to make global predictions about its behavior which
are typically not possible in a bottom-up approach. Finally, the global
objective function provides a qualitative measure of the success, or at
least the convergence, of the learning procedure."
http://www.science.mcmaster.ca/Psychology/becker/papers/becker-zemelHBTNN2ndEd.pdf 
by Sue Becker


Stephen
10/27/2004 12:01:50 AM
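
The reduction Becker describes -- a global objective whose derivative
yields synaptic-level weight changes -- fits in a few lines. A minimal
sketch, assuming the variance objective J(w) = E[(w.x)^2] under a
unit-norm constraint (my choice of objective, picked because its
per-sample gradient is exactly the Hebbian product of pre- and
postsynaptic activity):

import numpy as np

# Global objective: maximize J(w) = E[(w.x)^2] subject to ||w|| = 1.
# The per-sample gradient is 2*(w.x)*x, i.e. a Hebbian product, so
# stochastic gradient ascent plus renormalization reduces the global
# objective to purely synaptic-level steps.
rng = np.random.default_rng(0)

n = 4000
base = rng.normal(size=n)
x = np.stack([base,
              base + 0.2 * rng.normal(size=n),
              rng.normal(size=n)], axis=1)   # correlated pair + noise channel

w = rng.normal(size=3)
w /= np.linalg.norm(w)
eta = 0.005
for xi in x:
    y = w @ xi
    w += eta * y * xi             # synaptic step: proportional to dJ/dw
    w /= np.linalg.norm(w)        # enforce the global constraint

# w approaches the top eigenvector of the input covariance -- the
# maximizer of the global objective J.
print(np.round(w, 2))

The converse direction, as the quoted passage notes, is not guaranteed:
an arbitrary synaptic rule need not be the derivative of any global
objective at all.
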
Stephen Harris wrote:

> "Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message 
> news:BKufd.6594$rs5.515234@news20.bellglobal.com...
> 
>>Bill Modlin wrote:
>>[...]
>>
>>>None of your post touches on the distinction I made between
>>>algorithms
>>>driven by contingencies of the output ("supervised") and other
>>>algorithms driven by local correlations independent of such
>>>contingencies.
>>>
>>>Explain to me again why that distinction was nonsense?
>>>
>>>Bill
>>
>>You assume that the "output" is different from "local correlations." From 
>>the P.O.V. of the neuron, there is no difference. What the neuron gets is 
>>local input. It has no way of distinguishing between an input that 
>>originates spontaneously in the connected neighbour cell(s) (which is what 
>>appears to be implied by "local correlations") and input that originates 
>>from some cell some distance away (ie, at least one intermediary cell 
>>away). IOW, there is no difference between the inputs originating in some 
>>environmental feedback and any other input. The neuron takes up some 
>>messenger molecule, this triggers changes within the cell, which 
>>eventually result in synaptic strengthening/weakening. But those messenger 
>>molecules aren't labelled "local" or "supervised feedback". NB that as far 
>>as current knowledge goes, learning at the neural level requires that 
>>genes be switched on and off. Again, the genes don't know whether the 
>>molecules that switch them are the result of local processes or more 
>>distant ones.
>>
>>If by "local correlations" you mean something other than differences in 
>>inputs (which --> cell firing --> synaptic strengthening/weakening), then 
>>your description thus far is misleading.
>>
> 
> 
> Perhaps, I don't really understand Bill's idea. I had in mind introspection
> as in the expression of "learning from a painful experience". But of course
> this has a root of experience in the environment. Otherwise, one would
> have to assume it is God given.

Introspection is a behaviour, too, and learned, besides.  That is to 
say, it is shaped by the social environment. Watch closely how children 
are taught introspection ("How do you feel about that? What do you 
think? Can you remember...?" etc) No doubt introspection arises 
spontaneously, but without feedback of some sort it will not persist.

One of the common assumptions about behaviorism seems to be the notion 
that behaviorists claim they can produce behaviors. They can't. 
Behaviour can be shaped. Behaviorists shape existing behaviors 
experimentally in order to understand how behaviour shaping occurs 
naturally. Their experiments have yielded insights far more subtle than 
Lester for example, with his persistent references to "animal training", 
is capable of understanding.

The question of how an animal produces a particular _unshaped_ behaviour 
is the realm of neuroscience, IMO. From a teacher's p.o.v., the answer 
is irrelevant; and a teacher is a behaviour shaper. What matters is that 
some existing behaviour(s) can be elicited, and then shaped. In training 
an athlete, this is obvious. In training what we call the mind, it is 
less so, but it is what actually happens. The following remarks may 
clarify my point. A good teacher always begins with what the child can 
already do/already knows. NB that humans (like all primates) tend to 
imitate behaviour(s) they observe - this propensity helps enormously in 
the task of behaviour shaping. "Creativity" among other things includes 
variations on existing behaviours: "creative" people have been 
reinforced in this behaviour, and so do a lot of it. They also select 
variations in accordance with feedback from the 
teacher/peers/customers/clients/audience/etc. Since creativity consists 
of variations on a theme, an artists will display a characteristic 
style, as we say. A genius is an artist who is capable of very wide 
variations on different themes - but even a genius has a style. One can 
vary only what already exists/is done.

HTH
Wolf
10/27/2004 1:43:57 PM
patty wrote:
> Stargazer wrote:
> > patty wrote:
> >
> > > [snip] I see a lot of assertions in your article, but i see no
> > > experimental observations.  Reporting such observations could
> > > amount to pointers to actual data or to an experiment where any
> > > movement of the organism was prevented, yet it could still be
> > > proved that learning happened. Perhaps where muscles to the eyes
> > > and limbs are severed, yet learning can still be demonstrated.
> >
> >
> > An interesting (and short) introduction to the subject of Mr.
> > Modlin's post can be read in this book review:
> >
> > http://www.findarticles.com/p/articles/mi_m2483/is_2_22/ai_76698533/print
> >
> > If you want to dig deeper, take a look at this:
> >
> > http://itb.biologie.hu-berlin.de/~wiskott/Bibliographies/LearningInvariances.html
> >
> > But if you're not interested in theoretical matters, just a bit
> > of empirical support for this research, take a look at this:
> >
> > http://waisman.wisc.edu/infantlearning/INFANT_RESEARCH.HTML
> >
> > *SG*
>
> Well my take on Bill's type of learning is absence of a training
> signal *and* absence of behavior ... iow it is the nature of the
> network to change by noticing correlations in its input - its
> output is irrelevant to the process.  The articles you cite above are all just
> about "Unsupervised Learning" ... iow absence of a training signal. In 
> unsupervised learning the organism moves and then notices how those
> motions change what it senses - in Bill's learning the organism just
> notices how what it is sensing changes.  But then maybe i have
> misread him.

In unsupervised learning, there aren't such things as "training
signals", but just stimuli. If input stimuli are absent, you may
have only changes of neural connections due to innate architectural
constraints and noise (which, by the way, is one factor influencing
biological networks). Unsupervised learning is a method that alters
synaptic connections not because of error propagated from the desired
outputs (the "training signals"), but because of intrinsic properties
of the signal itself. If the organism moves, then this is not purely
unsupervised; it is a mixture of three processes: unsupervised,
supervised and reinforcement learning.

*SG*


Stargazer
10/27/2004 2:42:37 PM
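
Stargazer's distinction can be put side by side. A minimal sketch,
assuming the textbook delta rule for the supervised case and an
Oja-stabilized Hebbian rule for the unsupervised one (standard forms,
nothing specific to this thread): the first update cannot be computed
without an externally supplied target, the second consumes only the
stimuli themselves:

import numpy as np

rng = np.random.default_rng(0)
# A stream of stimuli with unequal variances per channel, so there is
# a dominant direction for the unsupervised rule to find.
x = rng.normal(size=(2000, 3)) * np.array([2.0, 1.0, 0.5])

w_sup = np.zeros(3)
w_unsup = rng.normal(size=3) * 0.1
true_w = np.array([1.0, -2.0, 0.5])   # hypothetical external teacher
eta = 0.01

for xi in x:
    # Supervised (delta rule): the error term needs a target t that
    # only the external environment can supply.
    t = true_w @ xi
    y = w_sup @ xi
    w_sup += eta * (t - y) * xi

    # Unsupervised (Oja's rule): built entirely from the stimulus and
    # the unit's own response -- intrinsic properties of the signal.
    y = w_unsup @ xi
    w_unsup += eta * y * (xi - y * w_unsup)

print(np.round(w_sup, 2))    # converges toward the teacher's weights
print(np.round(w_unsup, 2))  # converges toward the highest-variance axis
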
On Wed, 27 Oct 2004 09:43:57 -0400, Wolf Kirchmeir
<wwolfkir@sympatico.ca> in comp.ai.philosophy wrote:

[. . .]

>One of the common assumptions about behaviorism seems to be the notion 
>that behaviorists claim they can produce behaviors. They can't. 
>Behaviour can be shaped. Behaviorists shape existing behaviors 
>experimentally in order to understand how behaviour shaping occurs 
>naturally. Their experiments have yielded insights far more subtle than 
>Lester for example, with his persistent references to "animal training", 
>is capable of understanding.

Now, now, Wolf, be polite. Lester turns out to have been right in
this instance as in all others on record. Glen says it's all behavior.
Well, duh? It's certainly all behavior, but it's not all behaviorism.

Behaviorism only trains behavior; it doesn't explain behavior. Yet
behaviorism anthropomorphizes results of behavior modification as if
it had already explained divergent behavioral mechanisms in animals
and humans.

Lester is certainly capable of understanding subtle animal training
regimens, but Lester is too busy explaining the mechanics of behavior
to study experimental sciences which claim to explain behavior but
actually only explain the modification of behavior, particularly when
Lester has had to do all the leg work of explaining to behaviorists
what it is behaviorists actually do contrary to the anthropomorphic
claims they advance.

>The question of how an animal produces a particular _unshaped_ behaviour 
>is the realm of neuroscience, IMO. From a teacher's p.o.v., the answer 
>is irrelevant; and a teacher is a behaviour shaper. What matters is that 
>some existing behaviour(s) can be elicited, and then shaped. In training 
>an athlete, this is obvious. In training what we call the mind, it is 
>less so, but it is what actually happens. The following remarks may 
>clarify my point. A good teacher always begins with what the child can 
>already do/already knows. NB that humans (like all primates) tend to 
>imitate behaviour(s) they observe - this propensity helps enormously in 
>the task of behaviour shaping. "Creativity" among other things includes 
>variations on existing behaviours: "creative" people have been 
>reinforced in this behaviour, and so do a lot of it. They also select 
>variations in accordance with feedback from the 
>teacher/peers/customers/clients/audience/etc. Since creativity consists 
>of variations on a theme, an artists will display a characteristic 
>style, as we say. A genius is an artist who is capable of very wide 
>variations on different themes - but even a genius has a style. One can 
>vary only what already exists/is done.

Yes, but one can explain what already exists and is done as well.

Regards - Lester
lesterDELzick
10/27/2004 3:46:39 PM
On 26-Oct-2004, "Stephen Harris" <cyberguard1048-usenet@yahoo.com> wrote:

> He also explained why we couldn't just try to simulate
> the actual evolution of Earth/humanity itself. There are zillions of
> chance
> events which produce zillions to the zillionth power of possible
> permutations, one of which led to intelligent life.

I certainly agree with the premise that it is impractical to simulate the
actual evolution of Earth/humanity.  However, the conjecture that a chance
event produced life is just that -- pure conjecture.

In order to reach the point where evolution can kick in and natural
selection can work requires reaching the point where you have
self-replicating organisms that can pass on genetic information; until you
reach that point, evolution -- by definition -- cannot operate.  The problem
is that the process of passing on genetic information -- DNA/RNA -- is an
incredibly complex process involving the coordinated operation of many
organic "machines" (DNA splitters, transcription RNA, translation,
ribosomes, and transport mechanisms to mention a few).  It is an organic
factory.

No one has come even remotely close to explaining how a "chance event"
turned inanimate compounds into a system with enough components and
organization that it can self-replicate and pass on genetic information. 
You are welcome to take it on faith that chance events accomplished this,
but it certainly does not rise to the level of a scientific fact.

-- 
Phil Sherrod
(phil.sherrod 'at' sandh.com)
http://www.dtreg.com  (decision tree modeling)
http://www.nlreg.com  (nonlinear regression)
http://www.NewsRover.com (Usenet newsreader)
http://www.LogRover.com (Web statistics analysis)
Phil
10/27/2004 4:08:27 PM
"Bill Modlin" <modlin1@metrocast.net> wrote in message news:<C5WdnYeQmMllXODcRVn-3A@metrocastcablevision.com>...
> "dan michaels" <feedbackdroids@yahoo.com> wrote in message
> news:8d8494cf.0410250740.6968afef@posting.google.com...
> > "Bill Modlin" <modlin1@metrocast.net> wrote in message
>  news:<2IOdnXgZS_WCFOHcRVn-jA@metrocastcablevision.com>...
> >
> >
> > > Overall, the point is that the functions computed by cells in
>  the
> > > brain are largely determined by the correlations encountered in
>  the
> > > signals accessible to the cell, rather than by genetic control.
> > >
> >
> > The problem comes if you believe this part so strongly that you
>  gloss
> > over or disregard or downplay the underlying "foundation" for the
> > system as provided by genetics. Tabula rasa, it ain't.
> 
> We've been here many times before, Dan.   I'm not sure we actually
> disagree... at worst we quibble over just how much genetic structure
> is required.   I certainly don't expect a huge random network with
> no initial structure to magically self-organize into a person... at
> the very least it has to be part of an organism with genetically
> endowed (or designed in, for a robot) initial behaviors and drives.
> Perhaps there is a lot more required.
> 
> You seem to think that there may be a need for at least 30 subtly
> different frameworks to account for the 30-odd visual functional
> areas that you are fond of mentioning, and for all I know you could
> be right.
> 
> Our main difference is in our perception of where best to focus our
> current efforts.   I am still sufficiently impressed by the
> potential for self organization that I'd like to find out how far it
> can take us.  If and when we find something that can't be made to
> work by self organization, then we can dig in and see what
> additional structure is needed to make it work.
> 
> My impression is that you would have us spend many years finding out
> just how the brain does it all before even attempting to construct
> anything.
> 
> My way, perhaps we'll find that we only need a handful of
> specialized structures and can be done in a few years.   Worst case
> we waste a little time and wind up eventually digging out all the
> detail you wanted to start with.  Your way we have no chance of
> early success.  Place your bets... but me, I'd rather hope for
> something that might be finished in my lifetime.
> 
> Bill


Darn, I made a long reply yesterday, but it didn't show up. Didn't
keep a copy either. ?????
feedbackdroids
10/27/2004 4:44:15 PM
"Stephen Harris" <cyberguard1048-usenet@yahoo.com> wrote in message news:<aKyfd.10242$6q2.9618@newssvr14.news.prodigy.com>...
> "dan michaels" <feedbackdroids@yahoo.com> wrote in message 
> news:8d8494cf.0410260730.72dbdbc5@posting.google.com...
> > "Bill Modlin" <modlin1@metrocast.net> wrote in message 
> > news:<C5WdnYeQmMllXODcRVn-3A@metrocastcablevision.com>...
> >> "dan michaels" <feedbackdroids@yahoo.com> wrote in message
> >> news:8d8494cf.0410250740.6968afef@posting.google.com...
> >> > "Bill Modlin" <modlin1@metrocast.net> wrote in message
>  news:<2IOdnXgZS_WCFOHcRVn-jA@metrocastcablevision.com>...
> >> >
> >> >
> >> > > Overall, the point is that the functions computed by cells in
>  the
> >> > > brain are largely determined by the correlations encountered in
>  the
> >> > > signals accessible to the cell, rather than by genetic control.
> >> > >
> >> >
> >> > The problem comes if you believe this part so strongly that you
>  gloss
> >> > over or disregard or downplay the underlying "foundation" for the
> >> > system as provided by genetics. Tabula rasa, it ain't.
> >>
> >> We've been here many times before, Dan.   I'm not sure we actually
> >> disagree... at worst we quibble over just how much genetic structure
> >> is required.   I certainly don't expect a huge random network with
> >> no initial structure to magically self-organize into a person... at
> >> the very least it has to be part of an organism with genetically
> >> endowed (or designed in, for a robot) initial behaviors and drives.
> >> Perhaps there is a lot more required.
> >>
> >> You seem to think that there may be a need for at least 30 subtly
> >> different frameworks to account for the 30-odd visual functional
> >> areas that you are fond of mentioning, and for all I know you could
> >> be right.
> >>
> >> Our main difference is in our perception of where best to focus our
> >> current efforts.   I am still sufficiently impressed by the
> >> potential for self organization that I'd like to find out how far it
> >> can take us.  If and when we find something that can't be made to
> >> work by self organization, then we can dig in and see what
> >> additional structure is needed to make it work.
> >>
> >> My impression is that you would have us spend many years finding out
> >> just how the brain does it all before even attempting to construct
> >> anything.
> >>
> >
> >
> > Hi Bill, *exactly* the opposite, as I believe I've said many times
> > around here. I think neuroscience has already given us plenty enough
> > information that we could start developing computer systems which do
> > something similar. If I were actively working in this area, that's
> > what I'd be doing.
> >
> > Regards the 30 visual areas, I think they are there for a reason, not
> > by random chance. They appeared as a result of evolution fine-tuning
> > the system to solve the problems the organisms were presented with. If
> > you postulate the various areas [for the purposes of research,
> > simulation, and test] simply as being preprocessing systems, then at
> > the least they make the job of any s.o.s. or memory-prediction system
> > they connect to all that much easier. Vision is such an enormously
> > difficult problem that nature found it couldn't be solved adequately
> > with only a 1-level memory system, blank self-organizing slate, simple
> > S-R units, etc. If it were that easy to do, then nature would have
> > done it that way from the get-go.
> > ==============
> >
> 
> Some facts from Anthropology I. Nature has designed several different
> visual systems during the course of evolution. Therefore Nature doesn't
> know beforehand which one is optimal. There is fine-tuning involved, but
> not all mistakes are eliminated by natural selection. We have both archaic/
> neutral and dysfunctional genes retained in our gene pool. That is because 
> evolution is random, there is no goal or purpose waiting for achievement. 
> Selection will tend to manifest more survival traits than not. But nobody
> can examine the present visual system and say all its parts are needed.
> 
> Some functions can be redundant or non-critically dysfunctional and
> will not be eliminated from the gene pool just because of chance, so they
> will not need to be duplicated in order to make a working system.
> Neurobiology could inform what is actually needed but that is beyond
> the current capability of science, meaning it is not known yet. Also the
> human gene pool is not yet stable.



[....... well I see my post from yesterday didn't make it to google,
but part of it made it into Stephen's post ........ ????]


Hi Stephen. What you say is all well and good, but we're talking about
areas of cortex here which only showed up very recently in evolution,
not ones which have been around for 500 MY and are vestigial in
function. They are mainly specific to mammals, and as you go higher in
the tree of mammals, more regions arise, the total cortical area
involved in vision radically increases [up to somewhere around 30-40%
in higher mammals], and the complexity of interconnections increases.
It's a little soon to start referring to them as possibly archaic and
redundant.

More likely new modules have been added at various points in evolution
which specialize for additional visual tasks. From neuro recording
studies, each of the 30 areas appears to have a more or less specific
functional task. Some for color, some for motion, some for shape
recognition, some for binocular disparity, on and on. There doesn't
seem to be a lot of redundant neuroanatomy here.
feedbackdroids
10/27/2004 4:58:00 PM
Stephen Harris wrote:

> "Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message 
> news:navfd.6610$rs5.517749@news20.bellglobal.com...
> Bill Modlin
> 
> 
> 
>>You talk of pruning synaptic connections, but whatever criteria you build 
>>into the algorithm must be external to the LC's. In nature, AFAIK the 
>>spontaneously generated synaptic connections are in fact pruned by 
>>feedback from precisely those external contingencies that you seem to wish 
>>to eliminate from the development of the organism.
>>
> 
> 
> Well, in the case of the kittens, I think the connections are never made,
> rather than existing and then being pruned. There is not some existing pathway
> awaiting imprinting.

IMO that's an empirical question. From what I know (admittedly not much) 
of how the CNS develops, my opinion contradicts yours - IMO edge 
detectors are spontaneously generated, and then reinforced or destroyed, 
depending. But I don't insist on my opinion. If the question has been 
settled empirically, perhaps someone here will refer to a paper that 
provides the evidence.

Your reference to an "existing pathway awaiting imprinting" IMO shows a 
misunderstanding of how neural networks develop. "Imprinting" was coined 
by Konrad Lorenz at a time when neural nets were simply not a concept 
available for explaining brain functions (not that they help much - we 
don't have the vocabulary or the math to talk about the all-at-once-ness 
that actually occurs in them, and hobble along with the concept of 
massive parallelism, which is in fact implemented (badly IMO) in a 
collection of serial computers....) He thought of the brain as a tabula 
rasa; and was surprised at the goslings' behaviour because it implied 
that the tabula was able to accept very limited impressions from the 
outside world, that in fact it was selective in what it would accept. 
His observations and experiments actually demolished the tabula rasa 
concept, but I'm not sure that Lorenz ever actually understood what he'd 
done. At any rate, the term has taken on a life of its own, and 
continues to mislead.

Later in your post you claim
..........................
SH:
I'm not sure all other connections need to be learned from the 
environment. Perhaps it is semantics but made possible by the 
environment does not mean the same to me as learned from the environment.

For instance drawing the first breath.
..............................

Um, well, actually, the first breath does require an external stimulus. 
If that stimulus isn't delivered promptly, the baby will die. Usually, 
the shock of cold, of disconnect from the maternal blood supply, etc., 
are enough stimulus. In some cases (as in my own), it's not enough, and 
the midwife/physician works hard to produce the appropriate stimulus to 
cause the first breath, and to keep the breathing going (as they did for 
me - my mother said it took almost 3/4 hour before they were satisfied 
that I would breathe on my own. I was a slow learner, I guess. :-))

That first breath may not be "learned", but without environmental input 
it won't happen. Nor is the breathing system fully developed at birth. 
Some people argue that one function of babies' crying is to help develop 
that system, including the neural subsystem responsible for controlling 
breathing. Seems plausible to me.

................................
SH:
Or better yet, motor movement
within the womb. I don't think many people are going to agree that
the creation event which leads to the baby kicking in the womb has
been a learning experience which has taught the baby how to kick
in the womb.
....................

Um, I never claimed the first kicks were learned. They happen 
spontaneously. Why do they happen? Presumably the neurons in the 
developing motor cortex have made some connections (neurons do make 
connections spontaneously, after all), and then they fire (neurons do 
fire spontaneously, after all), and the fetus kicks. But what then?

.....................................
SH:
Nor does having a womb boundary to kick against
count as an external learning stimulus.
...............................

This seems merely silly to me. The womb is external to the baby, and in 
fact the uterine environment controls and influences development of the 
fetus from conception onward. (AFAIK, even implantation is under the 
control of the mother - it appears that if there is a serious 
developmental glitch in the first few hours after conception, the 
maternal genes that permit implantation are not switched on, and there 
is an early miscarriage - the kind that usually isn't even noticed by 
the woman. Some embryologists have estimated that upwards of 1/2 of all 
fertilisations never result in implantation.)

.....................
SH:
You would have to count the
development of the foetus as learning experience for the baby.
.....................

Well, some is and some isn't. To some extent it's a semantic quibble - 
I've already said I'm uneasy about the distinction between learning and 
development. Maybe it would be better to say I'm uneasy about where to 
draw the line.

..................
SH:
I don't
say that it is impossible conceptually, but that it will seem 
implausibly extreme or categorically obsessive to me and a great many 
other people. One would have to expand environment to mean the chain of 
parents of parents that made the baby who now kicks in the womb.
....................

Well, expanding the notion of environment that far wouldn't bother me at 
all, although then the question shifts to a different level.

However, it should be obvious that "environment" is often imprecisely 
used. I try very hard to use it in the limited sense of "whatever is 
outside the system or subsystem that is under discussion."

For DNA, RNA, etc., it's the cell itself. For a neuron, it's anything 
outside the cell. For a neural network, it includes other neural 
networks. For a brain or any other organ, it includes the other organs in 
the organism. For an organism, it's whatever is outside its skin. This 
notion of environment is one reason I prefer to think that "It's 
behaviour all the way down," even though that is more of a slogan than 
an axiom. But even as a mere slogan it has its uses: it should make one 
look for the actual environment of whatever we are talking about, and 
enable one to avoid vague notions of "what's out there."


Wolf
10/27/2004 6:45:53 PM
Stephen Harris wrote:
[...]
> 
> A reference to standard usage in the literature:
> http://www.science.mcmaster.ca/Psychology/becker/papers/beckerzemelHBTNN2ndEd.pdf
> 
> "Unsupervised learning algorithms can be distinguished by the absence of any
> supervisory feedback from the external environment. Often, however, there is
> an implicit _internally-derived_ training signal. This training signal is 
> based on
> some measure of the quality of the network's internal representation. The 
> main
> problem in unsupervised learning research is to formulate a performance 
> measure
> or cost function for the learning, to generate this internal supervisory 
> signal. The cost function is also known as the objective function, since it 
> sets the objective for the learning process. ..."
> 
> 

OK, but I find this conceptually fuzzy.

Why should any external feedback be "supervised"? Is there a difference 
between supervised and unsupervised external feedback?

External to what, exactly?

"Implicit" and "internally derived" seem to me to be conflicting 
notions. If the training signal is implicit, then it is a feature of the 
algorithm's architecture (ie, the topology of the network itself.) If it 
is "internally derived" then it isn't implicit, since deriving it makes 
it explicit.

Then there is "the training signal" as "some measure of the quality of 
the network's internal representation." "Quality" brings in some 
_external_ judgement, ie, I fail to see how an internally derived signal 
can possibly do this.

What is meant by "formulat[ing] a cost function...."? Does this refer to 
some computing of weights of connections, in terms of which one can 
decide if the network has learned what it should; or the network can 
decide that it has learned what it should? If so, there seems again to 
be a smuggled in external criterion.

Calling the learning objective a "function" makes opaque what should be 
clear. "Function" is a much abused word, and I can't figure what exactly 
is meant here. It also fuzzifies what exactly is meant by a system 
having learned something. The usual definitions all specify some change 
in the system's behaviour, either as a goal in itself, or as a sign of 
the desired goal.

And there's that term "supervisory" again. If it's an "unsupervised" 
learning algorithm, where does the supervision in the internally derived 
signal come from? What exactly is meant by supervision in this context, 
anyhow?

I can see, sort of, why "local correlations" matter. If the algorithm 
were realised in hardware, I suppose that each element's connections to 
other elements would have to stabilise when some optimal state is 
reached. Local correlations could favour some connections over others 
(if they are made at all). I suppose some built-in function in each 
element could be used for this. But some external signal would still be 
needed to fine tune the system.
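
The one concrete reading I can put on an "internally derived" training 
signal is reconstruction error: the network tries to reproduce its own 
input, and the cost is computed entirely from its own input and output, 
with nothing outside the system labelling anything. A toy sketch in 
Python (my example, not from the Becker and Zemel paper; the data and 
the sizes are invented):

    # Toy "internally derived training signal": a linear autoencoder that
    # tries to reconstruct its own input.  The cost (mean squared
    # reconstruction error) is computed from the network's own input and
    # output; no external teacher supplies a target.
    import numpy as np

    rng = np.random.default_rng(1)
    B = np.linalg.qr(rng.normal(size=(8, 2)))[0]  # 8-D data on a 2-D subspace
    X = rng.normal(size=(2000, 2)) @ B.T          # the "environment": inputs only

    W = rng.normal(scale=0.1, size=(8, 2))        # encoder weights (decoder is W.T)
    eta = 0.1

    for epoch in range(300):
        H = X @ W                   # internal representation
        Xhat = H @ W.T              # reconstruction of the input
        err = Xhat - X              # the internally derived "training signal"
        grad = (X.T @ err @ W + err.T @ X @ W) / len(X)
        W -= eta * grad             # descend the self-computed cost

    print("mean squared reconstruction error:", np.mean(err ** 2))

If that is what is meant, then "unsupervised" just says the error term is 
something the system can evaluate for itself - which answers where the 
supervision comes from, though it hardly makes the terminology less 
misleading.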
Wolf
10/27/2004 7:30:23 PM
Stargazer wrote:

[...]
> 
> In unsupervised learning, there aren't such things as "training
> signals", but just stimuli.

How can a training signal be different from a stimulus? Are you claiming 
that when you train a dog, the signals its neural networks receive are 
different than when it's learning on its own? Are you claiming that if I 
cause a signal as I intend, the signal is different than if it just 
happens? These are not frivolous questions - your language implies them.

> If input stimuli are absent, you may
> have only changes of neural connections due to innate architectural
> constraints and noise (which, by the way, is one factor influencing
> biological networks).

Er, yup, that's obvious.

> Unsupervised learning is a method that alters
> synaptic connections not because of error propagated from the desired
> outputs (the "training signals")

Outputs are training signals? How? I mean, if they are outputs, then 
they aren't received by the network, right? So what are you leaving out 
here?

> , but because of intrinsic properties
> of the signal itself.

Er, are you saying that the signals received by the network vary 
depending on their source? That may be possible in silicon, but it 
isn't in natural networks. A nerve spike train is a nerve spike train 
no matter what its ultimate origin. It's the _connections_ that 
differentiate sources (eg, the nerves in the optic bundle terminate in 
different parts of the visual cortex.)

> If the organism moves, then this is not purely
> unsupervised, it is a mixture of three processes: unsupervised,
> supervised and reinforcement learning.

Which is which?

And how do the signals differ in "intrinsic qualities"? Where are the 
supervised, unsupervised, and reinforced learning signals received? 
Where and how are they propagated? How can the receptors tell which 
is which? What experiments have shown that this is in fact what happens 
in an organism?
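
As far as I can make out, the textbook usage puts the difference not in 
the signals themselves but in what extra quantity the update rule is 
allowed to consume. A schematic sketch of the three cases (standard 
textbook forms; the particular numbers are arbitrary):

    # The three labels, schematically.  The input x and the output y are
    # the same kind of signal in every case; what differs is the extra
    # information each weight-update rule uses.
    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.normal(size=8)              # presynaptic activity
    w = rng.normal(scale=0.1, size=8)   # synaptic weights
    y = float(w @ x)                    # postsynaptic activity
    eta = 0.01

    # Supervised: the rule consumes an explicit target for this input.
    target = 1.0
    w_supervised = w + eta * (target - y) * x         # delta rule

    # Reinforcement: the rule consumes only a scalar reward, after the fact.
    reward, baseline = 1.0, 0.5                       # e.g. food pellet or not
    w_reinforced = w + eta * (reward - baseline) * y * x

    # Unsupervised: the rule consumes nothing beyond x and y themselves.
    w_unsupervised = w + eta * y * (x - y * w)        # Oja's correlation rule

On that reading, a dog being trained and a dog learning on its own 
receive the same kind of stimuli; what differs is whether anything plays 
the role of target or reward in the update rule.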
Wolf
10/27/2004 7:46:55 PM
Phil Sherrod wrote:

> On 26-Oct-2004, "Stephen Harris" <cyberguard1048-usenet@yahoo.com> wrote:
[...]
> 
> No one has come even remotely close to explaining how a "chance event"
> turned inanimate compounds into a system with enough components and
> organization that it can self-replicate and pass on genetic information. 
> You are welcome to take it on faith that chance events accomplished this,
> but it certainly does not rise to the level of a scientific fact.

Oh, well, I suppose not to your satisfaction. I suspect it's partly a 
problem with your concept of chance, and partly sheer ignorance of how 
chance actually operates to produce orderly structure and processes. 
I'll try to give you a simple example, from which you can extrapolate.

Consider the snowflake. Just how molecules of water are oriented when 
they collide with the growing snowflake is a matter of chance. So is 
their relative speed, the total number of molecules, the sequence of air 
temperatures experienced by the snowflake as it moves through the air, 
etc etc etc. Yet we get a well-ordered crystal. How come? Because it's 
not a matter of chance alone. It's a matter of chance operations 
constrained by properties of matter and energy. Because water molecules 
have certain properties, they will form hexagonal crystals. Because 
other factors in the growth of a snowflake are a matter of chance, each 
snowflake will be different. And it will be a matter of chance which 
snowflake you observe on your window for a moment or two before it melts.
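
The chance-plus-constraint point is easy to make concrete. A toy sketch 
(mine, and it has nothing to do with real ice physics): let the growth 
of one arm be random, and let a built-in constraint stamp the arm out 
six-fold, the way the water lattice enforces hexagonal symmetry. Every 
run gives a different flake; every flake is perfectly symmetric.

    # Toy "snowflake": the growth choices are random, but a built-in
    # constraint (six-fold symmetry, standing in for the water lattice)
    # guarantees order anyway.  Different seeds give different flakes;
    # all of them are symmetric.
    import numpy as np

    def make_flake(seed, steps=12):
        rng = np.random.default_rng(seed)
        # The "chance" part: one arm grows outward with random wobble.
        arm = [(0.0, 0.0)]
        x, y = 0.0, 1.0
        for _ in range(steps):
            x += rng.normal(scale=0.3)   # random sideways drift
            y += 1.0                     # steady outward growth
            arm.append((x, y))
        # The "constraint" part: replicate the arm at 60-degree intervals.
        pts = []
        for k in range(6):
            a = k * np.pi / 3
            c, s = np.cos(a), np.sin(a)
            pts += [(c * px - s * py, s * px + c * py) for px, py in arm]
        return np.array(pts)

    f1, f2 = make_flake(1), make_flake(2)
    print("two flakes, same constraint, different chances:",
          not np.allclose(f1, f2))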

BTW, the notion of randomness, which underlies the notion of chance, has 
turned out to be remarkably difficult to define in mathematically 
rigorous ways. There is a large literature on the subject, some of which 
I have consulted. It's not easy. For that matter, the related and easier 
notion of probability is difficult enough that most people have mostly 
wrong ideas about it, which is one reason casinos prosper, and why 
government lotteries have become a major source of revenue, in most 
states and provinces equalling or exceeding the revenue from sales taxes.

HTH

Wolf
10/27/2004 8:02:16 PM
On 27-Oct-2004, Wolf Kirchmeir <wwolfkir@sympatico.ca> wrote:

> Consider the snowflake. Just how molecules of water are oriented when
> they collide with the growing snowflake is a matter of chance. So is
> their relative speed, the total number of molecules, the sequence of air
> temperatures experienced by the snowflake as it moves through the air,
> etc etc etc. Yet we get a well-ordered crystal. How come? Because it's
> not a matter of chance alone. It's a matter of chance operations
> constrained by properties of matter and energy. Because water molecules
> have certain properties, they will form hexagonal crystals. Because
> other factors in the growth of a snowflake are a matter of chance, each
> snowflake will be different. And it will be a matter of chance which
> snowflake you observe on your window for a moment or two before it melts.

There is no denying that there are striking patterns in nature:  A snowflake
is a good example, as are crystalline structure and the inner workings of an
atom.

The problem is that there is an enormous difference between a snowflake that
is created by chance and an organic factory that is self-replicating and
able to pass on genetic information.  There are at least dozens -- and
probably hundreds -- of cooperating processes and machines that have to work
together for even the simplest organism to reproduce.  Random collisions and
chemical reactions may produce compounds like amino acids, but getting from
amino acids to the simplest known self-reproducing organism is like going
from bars of silicon and aluminum to a Pentium CPU.  And you can't use
incremental evolutionary development to explain the process until
reproduction is going.  You have (literally) a chicken and egg paradox.

Think about how many snowflakes have been formed since the beginning of
Earth -- zillions.  But with all of those zillions of snowflakes, and all of
the random collisions between snowflakes, snowflakes have not come alive,
they do not reproduce, and they have not evolved into super snowflakes.

-- 
Phil Sherrod
(phil.sherrod 'at' sandh.com)
http://www.dtreg.com  (decision tree modeling)
http://www.nlreg.com  (nonlinear regression)
http://www.NewsRover.com (Usenet newsreader)
http://www.LogRover.com (Web statistics analysis)
Phil
10/27/2004 8:55:11 PM
Phil Sherrod wrote:
[...]
> Reaching the point where evolution can kick in and natural selection can
> work requires self-replicating organisms that can pass on genetic
> information; until you reach that point, evolution -- by definition --
> cannot operate. [...]

There is a good deal of experimental evidence that molecules other than 
DNA and RNA replicate. In fact, molecules aren't needed. Back in the 60s 
IIRC, Scientific American had an article on flat shapes that would form 
into larger groupings that replicated the original shape, or shapes 
that would form chains analogous to carbon chains, peptides, etc. You 
put a bunch of these into a shallow tray, and shake. Fun stuff. :-)

Self-organisation seems to be a feature of molecules built on carbon 
chains and rings. It's just one of those things these molecules do... 
and some of the results are self-replicators. There are self replicators 
built on crystals, too. Some of these are more stable than others, and so are 
more likely to persist and participate in reactions that produce even 
more complex replicating molecules. So the notion that DNA/RNA were 
needed before evolution could start is false. I would turn the argument 
around: as soon as self replicating molecules appeared, evolution was 
bound to happen.

Some websites I found by googling on "self replicating molecules." 
There are loads of them.

web.mit.edu/newsoffice/tt/1990/may09/23124.html
( Class of self-replicators)

www.answersingenesis.org/docs/3974.asp
(self-replicating RNA)

www.vepachedu.org/science.htm
(report on a variety of self replicators)

www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=12650644&dopt=Abstract
(evolvable self replicators)

Have fun!


Wolf
10/27/2004 8:55:52 PM
"Bill Modlin" <modlin1@metrocast.net> wrote in message news:<C5WdnYeQmMllXODcRVn-3A@metrocastcablevision.com>...
> "dan michaels" <feedbackdroids@yahoo.com> wrote in message
> news:8d8494cf.0410250740.6968afef@posting.google.com...
> > "Bill Modlin" <modlin1@metrocast.net> wrote in message
>  news:<2IOdnXgZS_WCFOHcRVn-jA@metrocastcablevision.com>...
> >
> >
> > > Overall, the point is that the functions computed by cells in
>  the
> > > brain are largely determined by the correlations encountered in
>  the
> > > signals accessible to the cell, rather than by genetic control.
> > >
> >
> > The problem comes if you believe this part so strongly that you
>  gloss
> > over or disregard or downplay the underlying "foundation" for the
> > system as provided by genetics. Tabula rasa, it ain't.
> 
> We've been here many times before, Dan.   I'm not sure we actually
> disagree... at worst we quibble over just how much genetic structure
> is required.   I certainly don't expect a huge random network with
> no initial structure to magically self-organize into a person... at
> the very least it has to be part of an organism with genetically
> endowed (or designed in, for a robot) initial behaviors and drives.
> Perhaps there is a lot more required.
> 
> You seem to think that there may be a need for at least 30 subtly
> different frameworks to account for the 30-odd visual functional
> areas that you are fond of mentioning, and for all I know you could
> be right.

Not frameworks. I think it's pretty much the same "organizational",
"adaptive", whatever you want to call it, "intelligent" algorithm
in each and every functional area. What changes is the heuristics.
Each of these deals with different computational problems, and it might
be impossible to solve those adequately without some computational
hints that kick in at some point in the development schedule.

At any rate, the cues that can be built in will turn out to be
computational, i.e. highly general! This is the argument from the
poverty of genetic information available to build such complex
machines. There isn't much! (So, something like Chomsky's P&P
framework is way out of the question.)

Regards,

--
Eray Ozkural
examachine
10/27/2004 11:03:41 PM
"Bill Modlin" <modlin1@metrocast.net> wrote in message news:<2IOdnXgZS_WCFOHcRVn-jA@metrocastcablevision.com>...
> A reasonable interpretation is that the "wiring" of neural circuitry
> is only loosely determined by a genetic blueprint.  Most of the
> actual connections (and therefore the functions performed) are
> established as a result of correlations between the activities of
> potentially connected cells.  Not only are the initial connections
> determined by correlations, but even after a stable connection
> pattern is established, the connections will change if the
> correlations change.

Right on the target! I agree with you.

In my opinion, it's even possible that all learning can be reduced to
some form of unsupervised "correlation" learning. But just what this
"correlation" you mention is, is hard to tell. My mirror of truth
says it's not traditional statistics.
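
For what it's worth, the oldest concrete candidate is plain Hebbian 
correlation with Oja's normalization, under which a unit's weights drift 
toward the principal component of whatever ensemble of signals it sits 
in. A minimal sketch (my illustration, not anything Bill has committed to):

    # Oja's rule: purely local updates (input x, the cell's own output y,
    # its own weights w), yet the weight vector converges on the leading
    # principal component of the input ensemble.
    import numpy as np

    rng = np.random.default_rng(0)
    cov = np.array([[3.0, 2.0],
                    [2.0, 3.0]])          # inputs correlated along (1, 1)
    xs = rng.multivariate_normal([0.0, 0.0], cov, size=5000)

    w = rng.normal(size=2)                # random initial synaptic weights
    eta = 0.01

    for x in xs:
        y = w @ x                         # the cell's activity
        w += eta * y * (x - y * w)        # Hebbian term minus forgetting

    print("learned direction:", w / np.linalg.norm(w))
    # approaches +/-(0.707, 0.707), the top eigenvector of cov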

Regards,

--
Eray Ozkural
examachine
10/27/2004 11:07:46 PM
"Bill Modlin" <modlin1@metrocast.net> wrote in message news:<2IOdnXgZS_WCFOHcRVn-jA@metrocastcablevision.com>...
> Our brains have innate structure tailored by evolutionary processes
> over a long period of time.  This structure performs functions that
> contribute to our behavior in ways that somewhere along the line
> probably helped individuals to survive, or at least didn't hurt.
> 
> Many of those functions are not fully determined by genetics alone.
> There is an innate framework, but details are filled in by processes
> of conditioning and association, and to some degree the framework
> itself is mutable if environmental conditions differ sufficiently
> from those for which it evolved.  There are few sharp lines between
> innate and acquired neural function.
> 
> Feature discrimination in the early visual system is sometimes
> called innate.  Certainly it is innate that the cells grow into
> layers of tissue appropriate for performing useful feature
> discriminations.  However, it seems the specific connections and
> weights to implement particular discriminations get filled in by
> adaptation to correlations in the ensemble of signals flowing from
> the retina.  For example, we can change the distribution of
> particular detectors dramatically by raising a cat in an abnormal
> visual environment.  It seems cells are not so much genetically
> determined to perform specific discriminations, as that they acquire
> discrimination functions appropriate to the signals they encounter
> in their genetically determined position in the network.
> 
> There are places where neural projections bring together signals
> originating from corresponding points in the left and right eyes.
> This allows merging both images to fill in details missing from one
> or the other, estimating depth from discrepancies in the two images,
> and so on.  There is genetic direction to cause axonal projections
> carrying signals from one eye to grow toward the normally expected
> locations of the corresponding signal paths from the other.  But
> (from experiments on Xenopus frogs) if one eye is surgically rotated
> before the connections are formed, so that the locations of
> correlated signals are altered, we see the projections grow first
> toward the normal target location, then veer off sharply to connect
> with the very different cells now in position to be correlated.
> 
> Many topographic maps can be found in the brain, so that for example
> neigboring sections of neural tissue are excited by stimulii from
> adjacent sections of skin.  One might imagine a fixed wiring scheme
> under genetic control to hook up these maps, but when we surgically
> swap small patches of skin the connections change to preserve the
> mapping.  It takes some time, but after a while we find that the
> moved sensors now activate sections of the remote neural map that
> correspond to their new positions.
> 
> A reasonable interpretation is that the "wiring" of neural circuitry
> is only loosely determined by a genetic blueprint.  Most of the
> actual connections (and therefore the functions performed) are
> established as a result of correlations between the activities of
> potentially connected cells.  Not only are the initial connections
> determined by correlations, but even after a stable connection
> pattern is established, the connections will change if the
> correlations change.
> 
> From the viewpoint of a single cell, it strengthens connections to
> others correlated with its own activity and weakens others, much as
> postulated by Hebb so many years ago.  While direct observation of
> such changes in individual active synapses is still difficult, we
> can observe at least one related mechanism in widespread use.  Cells
> in a child's brain sprout huge dendritic trees and eventually make
> something like 200,000 synaptic connections.  By adulthood these are
> trimmed back to an average of 10 to 20 thousand.  The only plausible
> explanation for this of which I am aware is that the surviving
> connections are those that showed correlation with the activity of
> the cell.  Uncorrelated connections simply drop out of the picture.
> 
> Overall, the point is that the functions computed by cells in the
> brain are largely determined by the correlations encountered in the
> signals accessible to the cell, rather than by genetic control.
> 
> This is learning or conditioning, but it is not the kind of
> feedback-driven learning that is usually intended when one speaks of
> operant conditioning.  This sort of learning does not depend on
> consequences of the output of the function, and would occur even if
> the output were not connected to anything else and could therefore
> have no consequences extending beyond the cell doing the learning.
> 
> From an evolutionary perspective, such learning mechanisms exist
> because they do indeed often have useful behavioral consequences.
> But the evolutionary connection is between the learning mechanisms
> and ensembles of behavior, not between the individual functions
> learned and specific contingencies associated with those functions.
> 
> --------
> 
> None of the above should be taken as suggesting that other sorts of
> learning can be ignored.  To implement AI we will require an
> understanding of many facets of adaptive behavior, including the
> operant conditioning or reinforcement learning that has been the
> sole focus of certain vocal participants in CAP.
> 
> But I do suggest that these correlation-driven "unsupervised"
> mechanisms provide a critically important underpinning for other
> learning paradigms, that they are necessary parts of an explanation
> of how all our behavior-generating mechanisms actually work.
> 
> <to be continued in further posts>

This is an excellent post. Could you please also send it to
ai-philosophy? I've invited you to our group.

Regards,

--
Eray Ozkural
examachine
10/27/2004 11:25:16 PM
In article <320e992a.0410271507.1a26da83@posting.google.com>, Eray 
Ozkural  exa <examachine@gmail.com> writes
>"Bill Modlin" <modlin1@metrocast.net> wrote in message 
>news:<2IOdnXgZS_WCFOHcRVn-jA@metrocastcablevision.com>...
>> A reasonable interpretation is that the "wiring" of neural circuitry
>> is only loosely determined by a genetic blueprint.  Most of the
>> actual connections (and therefore the functions performed) are
>> established as a result of correlations between the activities of
>> potentially connected cells.  Not only are the initial connections
>> determined by correlations, but even after a stable connection
>> pattern is established, the connections will change if the
>> correlations change.
>
>Right on the target! I agree with you.
>
>In my opinion, it's even possible that all learning can be reduced to
>some form of unsupervised "correlation" learning. But just what is
>this "correlation" that you mention, hard to tell. My  mirror of truth
>says it's not traditional statistics.
>
>Regards,
>
>--
>Eray Ozkural

You are a dimwit! If you ask me (or Glen) nicely, we might tell you why.
-- 
David Longley
David
10/27/2004 11:30:49 PM
Hmmmm ... I found my missing posting from yesterday on another forum
[seems many other sites are mirroring google - cool] .....

http://www.gdse.com/servlet/gdse.news?gid=1217&st=40
http://www.gdse.com/servlet/gdse.nwsgrp?mid=231531354
========================


From: feedbackdroids@yahoo.com
Subject: Re: Finding useful functions- part 1
Newsgroups: comp.ai.philosophy comp.ai.neural-nets 
Date: 26-Oct-2004 14:00:19

"Bill Modlin" <modlin1@metrocast.net> wrote in message news:<C5WdnYeQmMllXODcRVn-3A@metrocastcablevision.com>...
> "dan michaels" <feedbackdroids@yahoo.com> wrote in message
> news:8d8494cf.0410250740.6968afef@posting.google.com...
> > "Bill Modlin" <modlin1@metrocast.net> wrote in message
>  news:<2IOdnXgZS_WCFOHcRVn-jA@metrocastcablevision.com>...
> >
> >
> > > Overall, the point is that the functions computed by cells in
>  the
> > > brain are largely determined by the correlations encountered in
>  the
> > > signals accessible to the cell, rather than by genetic control.
> > >
> >
> > The problem comes if you believe this part so strongly that you
>  gloss
> > over or disregard or downplay the underlying "foundation" for the
> > system as provided by genetics. Tabula rasa, it ain't.
> 
> We've been here many times before, Dan.   I'm not sure we actually
> disagree... at worst we quibble over just how much genetic structure
> is required.   I certainly don't expect a huge random network with
> no initial structure to magically self-organize into a person... at
> the very least it has to be part of an organism with genetically
> endowed (or designed in, for a robot) initial behaviors and drives.
> Perhaps there is a lot more required.
> 
> You seem to think that there may be a need for at least 30 subtly
> different frameworks to account for the 30-odd visual functional
> areas that you are fond of mentioning, and for all I know you could
> be right.
> 
> Our main difference is in our perception of where best to focus our
> current efforts.   I am still sufficiently impressed by the
> potential for self organization that I'd like to find out how far it
> can take us.  If and when we find something that can't be made to
> work by self organization, then we can dig in and see what
> additional structure is needed to make it work.
> 
> My impression is that you would have us spend many years finding out
> just how the brain does it all before even attempting to construct
> anything.
> 


Hi Bill, *exactly* the opposite, as I believe I've said many times
around here. I think neuroscience has already given us plenty enough
information that we could start developing computer systems which do
something similar. If I were actively working in this area, that's
what I'd be doing.

Regards the 30 visual areas, I think they are there for a reason, not
by random chance. They appeared as a result of evolution fine-tuning
the system to solve the problems the organisms were presented with. If
you postulate the various areas [for the purposes of research,
simulation, and test] simply as being preprocessing systems, then at
the least they make the job of any s.o.s. or memory-prediction system
they connect to all that much easier. Vision is such an enormously
difficult problem that nature found it couldn't be solved adequately
with only a 1-level memory system, blank self-organizing slate, simple
S-R units, etc. If it were that easy to do, then nature would have
done it that way from the get-go.
==============


> My way, perhaps we'll find that we only need a handful of
> specialized structures and can be done in a few years.   Worst case
> we waste a little time and wind up eventually digging out all the
> detail you wanted to start with.  Your way we have no chance of
> early success.  Place your bets... but me, I'd rather hope for
> something that might be finished in my lifetime.
> 
> Bill


Again, exactly the opposite - you're mixing me up with the
wait-until-eternity we-don't-know-anything guys. As I told John.H just
a day ago, if the early CV people, knowing about limulus and mach
bands 50 or so years ago, had decided to wait another 50 years for
neuroscience to nail down everything about vision, where would we be
today - still running rats. My recommendation is to immediately use
every piece of neuroscience research at our disposal, TODAY. Can I be
any more direct than that? One needn't worry about exactly "how" the
real visual system does it, but can **adapt** [what we know about]
"what" it does.
feedbackdroids
10/28/2004 2:40:34 AM
<please accept my apologies if this is double-posted.  not sure what
happened just now, but it did not show up on my news server...>

"Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message
news:BKufd.6594$rs5.515234@news20.bellglobal.com...
> Bill Modlin wrote:
> [...]
> > None of your post touches on the distinction I made
> > between algorithms driven by contingencies of the output
> > ("supervised") and other algorithms driven by local
> > correlations independent of such contingencies.
> >
> > Explain to me again why that distinction was nonsense?
> >
> > Bill


> You assume that the "output" is different from "local
> correlations." From the P.O.V. of the neuron, there is no
> difference. What the neuron gets is local input. It has no
> way of distinguishing between an input that originates
> spontaneously in the connected neighbour cell(s) (which is
> what appears to be implied by "local correlations") and
> input that originates from some cell some distance away
> (ie, at least on intermediary cell away). IOW, there is no
> difference between the inputs originating in some
> environmental feedback and any other input.  The neuron
> takes up some messenger molecule, this triggers changes
> within the cell, which eventually result in synaptic
> strengthening/weakening.  But those messenger molecules
> aren't labelled "local" or "supervised feedback". NB that
> as far as current knowledge goes, learning at the neural
> level requires that genes be switched on and off. Again,
> the genes don't know whether the molecules that switch
> them are the result of local processes or more distant
> ones.
>
> If by "local correlations" you mean something other than
> differences in inputs (which --cell firing --synaptic
> strengthening/weakening), then your description thus far
> is misleading.
>

Let's start over.

You describe pretty much the same situation I see, in which a cell
"learns"... i.e. changes its various operating parameters, thus
changing its functional mapping between inputs and outputs, based on
purely local signals.  These local signals include its own firings,
activations of its synapses by other cells firing, and changes in
various chemical concentrations in which it is immersed.  As you
say, none of these inputs are labelled.

It seems obvious to me that under these circumstances, if there is
to be any systematic rule or principle guiding the way the cell
changes its response function, it must be formulated in terms of the
signals accessible to the cell, with no reference to any possible
remote and indirect consequences of those changes.  The "adjustment
rules" can depend on the strength and frequency of these local
signals and on the timing relationships among them, and that pretty
much exhausts the available possibilities.
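
One concrete family of such rules keys on timing alone: spike-timing-dependent
plasticity, in which a synapse strengthens when its input tends to fire
just before the cell fires, and weakens when it fires just after. A
sketch of the usual exponential form (the constants are my invention,
not a claim about any particular tissue):

    # Sketch of spike-timing-dependent plasticity (STDP): the weight
    # change depends only on the relative timing of pre- and postsynaptic
    # spikes, both locally observable at the synapse.
    import numpy as np

    A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
    TAU = 20.0                      # time constant in ms

    def stdp_dw(pre_times, post_times):
        """Total weight change from all pre/post spike pairings."""
        dw = 0.0
        for t_pre in pre_times:
            for t_post in post_times:
                dt = t_post - t_pre
                if dt > 0:      # pre fired before post: strengthen
                    dw += A_PLUS * np.exp(-dt / TAU)
                elif dt < 0:    # pre fired after post: weaken
                    dw -= A_MINUS * np.exp(dt / TAU)
        return dw

    # A synapse whose input tends to precede the cell's firing gains weight:
    print(stdp_dw(pre_times=[10, 50, 90], post_times=[15, 55, 95]))   # > 0
    print(stdp_dw(pre_times=[20, 60, 100], post_times=[15, 55, 95]))  # < 0

Everything the rule consumes - two spike trains and their relative
timing - is available at the synapse itself.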

This also seems to be the position you are taking.  Which confuses
me, since on other occasions you seem to argue for a stance much
like Glen's, where all "learning" is caused by remote behavioral
contingencies.

Let me try to pose an unambiguous example of the conflict.

A pigeon can be trained to discriminate pictures containing trucks
from other pictures lacking trucks.

This is done by selectively reinforcing some behavior (pecking a
button?) in the presence of the truck pictures, and not in the
presence of others.

At this level of description, this is a supervised learning process,
driven by an experimenter-enforced correlation between rewards and
behaviors.  The rewards are contingent on the production of the
right behavior under the right conditions, and the pigeon contains
mechanisms to adjust its behavior to maximize rewards.

That's fine, so far as it goes.

But when I look at discriminating that class of pictures so that it
can be recognized as a condition for the rewarded behavior, I see a
pretty complicated process.  There are  billions of cells computing
functions of whatever inputs they have access to, responding to all
sorts of "features" at dozens of levels, bringing together
information from many areas of the picture, to eventually reach a
level at which there is a signal of some sort that indicates whether
or not there is a truck somewhere in the picture.

That truck-signal is correlated with the rewards and the behavior,
so it makes sense at least at a handwaving level that a supervised
learning process could incorporate it, and produce the behavioral
modifications that we observe.

But most of those intermediate signals in the long path from retina
to truck-signal are not correlated with anything in the high level
description of the experiment.  They aren't correlated with trucks,
or rewards, or pecking, and therefore could not have been shaped by
any of those things.

To me it seems obvious that they must be shaped by local rules
involving relationships among signals accessible to the cell.
Specifically they cannot depend on producing some effect in the
external environment and reacting to  contingent results of that
effect.  Their connections to the external environment are extremely
indirect, there is no way that any correlation mediated by external
contingencies could be communicated to them.
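
To make the division of labour concrete, here is a toy model (entirely
my own construction, with made-up "truck" statistics): the feature layer
adapts by a local correlation rule that never sees the reward, while
only the readout is pushed around by the experimenter's contingency.

    # Toy two-stage learner.  Hidden "feature" weights V adapt by a local
    # Oja-style rule that never sees the reward; only the readout weights
    # u are adjusted by the reward contingency (peck when there's a truck).
    import numpy as np

    rng = np.random.default_rng(3)

    def picture(truck):
        x = rng.normal(size=20)
        if truck:
            x[:5] += 2.0                  # crude stand-in for truck features
        return x

    V = rng.normal(scale=0.1, size=(20, 6))   # features: locally trained
    u = np.zeros(6)                           # readout: reward trained

    for _ in range(3000):
        truck = bool(rng.integers(2))
        x = picture(truck)
        h = np.tanh(V.T @ x)                  # feature activities
        peck = (u @ h) > 0                    # the behaviour
        reward = 1.0 if peck == truck else -1.0   # the contingency

        # Readout update: reward-modulated, knows nothing about "truck".
        u += 0.05 * reward * (1.0 if peck else -1.0) * h
        # Feature update: local correlations only, knows nothing about reward.
        y = V.T @ x
        V += 0.001 * (np.outer(x, y) - V * y * y)

    hits = sum(((u @ np.tanh(V.T @ picture(t))) > 0) == t
               for t in rng.integers(2, size=500).astype(bool))
    print("discrimination accuracy:", hits / 500)

In this toy nothing propagates the reward back into V, yet the system as
a whole still ends up discriminating - which is all I am claiming about
the intermediate layers.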

But Glen seems to say that discriminations must be learned as a
result of behavioral contingencies.  For example, his response to my
original post was:

> What is important in sensation and perception is that
> movement of an animal (or, more specifically, of its
> receptors) has consequences. When we sweep our eyes
> over a patch of red, there are changes in stimulation -
> such movement/consequence contingencies are at the heart
> of learning to perceive the world.

But this would seem to imply that each of those billions of cells
involved in discriminating trucks somehow "knows" that we moved our
eyes, and can correlate this with the changes in the other signals
it has access to, a claim which I find incredible.

Certainly the signals a cell can see generally originate in
environmental stimuli.  Cells learn their functions from
relationships observable in those signals, so all the learning is in
a sense ultimately traceable to the environment.  But the specific
relationships observable at any point in the network are heavily
dependent on other functional transforms which generated those
signals, and are seldom directly mappable to any particular
behavioral contingency identifiable at external levels of
description.

Where do you stand on this matter?

Bill



Bill
10/28/2004 8:42:22 AM
Bill Modlin wrote:

[...]
> 
> Let's start over.

OK.

> You describe pretty much the same situation I see, in which a cell
> "learns"... i.e. changes its various operating parameters, thus
> changing its functional mapping between inputs and outputs, based on
> purely local signals.  These local signals include its own firings,
> activations of its synapses by other cells firing, and changes in
> various chemical concentrations in which it is immersed.  As you
> say, none of these inputs are labelled.

Cells don't learn. That's a hierarchy error - it's the network that 
learns, maybe. Certainly the network of networks that we call, say, the 
visual cortex learns.

The cell's responses to inputs are modified by some inputs, yes, but 
that is not the same as learning. It's analogous to the content of a RAM 
location being modified by some inputs. I wouldn't say the memory 
location learned anything - its electrical charge changed, is all.

> It seems obvious to me that under these circumstances, if there is
> to be any systematic rule or principle guiding the way the cell
> changes its response function, it must be formulated in terms of the
> signals accessible to the cell, with no reference to any possible
> remote and indirect consequences of those changes.

Yes.

 > The "adjustment
> rules" can depend on the strength and frequency of these local
> signals and on the timing relationships among them, and that pretty
> much exhausts the available possibilities.

In a natural system, it also includes the chemistry of the surrounding 
medium, which will modify the way the signals act on and in the cell. 
That's a crucial fact, IMO. Ie, "chemical messengers" will promote or 
inhibit the transmission of signals across the synaptic gaps. Since 
these messenger molecules are emitted by other cells, including 
non-neural ones, the picture is much more complex. I haven't a 
conceptual handle on this, certainly not in terms of message content, 
signal labelling, etc etc etc.

> This also seems to be the position you are taking.  Which confuses
> me, since on other occasions you seem to argue for a stance much
> like Glen's, where all "learning" is caused by remote behavioral
> contingencies.

I've already said that cells don't learn. In any case, figuring out how 
neural nets' behaviours/functions change doesn't refute the position 
that such changes are initiated by "remote external contingencies".

  > Let me try to pose an unambiguous example of the conflict.
> 
> A pigeon can be trained to discriminate pictures containing trucks
> from other pictures lacking trucks.
> 
> This is done by selectively reinforcing some behavior (pecking a
> button?) in the presence of the truck pictures, and not in the
> presence of others.
> 
> At this level of description, this is a supervised learning process,
> driven by an experimenter-enforced correlation between rewards and
> behaviors.  The rewards are contingent on the production of the
> right behavior under the right conditions, and the pigeon contains
> mechanisms to adjust its behavior to maximize rewards.

I see no reason to talk about "supervised" learning processes, since 
that word smuggles in the experimenter's intentions. The pigeon will 
learn in exactly the same way in nature, the only difference being that 
random behaviours will be reinforced rather than pre-selected ones. So 
what?

The mechanisms that "adjust the pigeon's behaviour" include the cellular 
changes that you seem to think exemplify some other kind of learning.

> That's fine, so far as it goes.
> 
> But when I look at discriminating that class of pictures so that it
> can be recognized as a condition for the rewarded behavior, I see a
> pretty complicated process.  There are  billions of cells computing
> functions of whatever inputs they have access to, responding to all
> sorts of "features" at dozens of levels, bringing together
> information from many areas of the picture, to eventually reach a
> level at which there is a signal of some sort that indicates whether
> or not there is a truck somewhere in the picture.


So the process is complicated. So what? When I watch a rainstorm, I see 
billions of raindrops, millions of turbulence cells, etc. The process 
seems pretty complicated. The net result is still that things get very wet.

> That truck-signal is correlated with the rewards and the behavior,
> so it makes sense at least at a handwaving level that a supervised
> learning process could incorporate it, and produce the behavioral
> modifications that we observe.
> 
> But most of those intermediate signals in the long path from retina
> to truck-signal are not correllated with anything in the high level
> description of the experiment.  They aren't correlated with trucks,
> or rewards, or pecking, and therefore could not have been shaped by
> any of those things.

Yes, that's true, but why should they be?

An analogous problem: how do the hundreds or thousands of fish in a 
school of fish all "know" how to change direction? They don't. Each fish 
knows that the immediately surrounding fish are coming closer or getting 
further away, so it adjusts its direction and speed to maintain a certain 
distance. That's a fine example of "local correlations", and IMO is the 
way one must think about it. It doesn't matter where the change in 
direction originates (a few fish see a shark, and change direction) - 
the fish inside the school don't get the message "Shark nearby, get out 
of the way". They get only messages about changing distance between 
themselves, and that's what they respond to.

Let the fish be signals moving through a NN, let the changing distances 
be variations in signals passing between neurons, and let the response of 
the fish be the cellular/synaptic changes. Then the school's change of 
direction, i.e. the result, is a different NN function. The analogy is good 
enough to clarify the concept, IMO.
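
The analogy is also easy to make computable. A toy sketch (all numbers 
invented): each fish sees only the headings of neighbours within a small 
radius, a few shark-sighting fish keep fleeing north, and the turn 
propagates through the entire school without any global message at all.

    # Toy school of fish: each fish repeatedly adjusts its heading toward
    # the average heading of neighbours within a small radius.  No fish
    # receives a "shark nearby" message, yet the whole school turns.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 200
    pos = rng.uniform(0, 10, size=(n, 2))   # fixed positions, 10x10 patch
    heading = np.zeros(n)                   # radians; everyone swims east
    sighted = np.arange(5)                  # these five saw the shark...
    heading[sighted] = np.pi / 2            # ...and keep fleeing north

    for _ in range(400):
        new = heading.copy()
        for i in range(n):
            nbrs = np.linalg.norm(pos - pos[i], axis=1) < 1.5  # local info only
            new[i] = np.arctan2(np.sin(heading[nbrs]).mean(),
                                np.cos(heading[nbrs]).mean())
        new[sighted] = np.pi / 2            # the frightened fish don't average
        heading = new

    turned = np.abs(np.degrees(heading) - 90) < 10
    print("fraction of the school now heading north:", turned.mean())

Substitute neurons for fish and synaptic changes for course corrections 
and you have the picture: local rules, global result, and the 10,024th 
fish never hears about the shark.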

> To me it seems obvious that they must be shaped by local rules
> involving relationships among signals accessible to the cell.
> Specifically they cannot depend on producing some effect in the
> external environment and reacting to  contingent results of that
> effect.  Their connections to the external environment are extremely
> indirect, there is no way that any correlation mediated by external
> contingencies could be communicated to them.

It doesn't matter how indirect the connections to the external 
environment are. See the school-of-fish analogy.

> But Glen seems to say that discriminations must be learned as a
> result of behavioral contingencies.  For example, his response to my
> original post was:

No, Glen says discriminations of environmental contingencies cause 
learning. You got it wrong, hence our confusion about the relationship 
between environment and learning. The shark sets off a change of 
direction in a few fish, and that change propagates through the whole 
school. Not the preception of the shark, please note. What we see is 
"the school of fish changed direction and escaped the shark."  But the 
10024th fish doesn't know there is a shark nearby.

Just so, perception of a red light and the pecking of a key etc set off 
changes in the pigeon that we see as "The pigeon has learned to peck a 
key to get food whenever it sees a red light." But the pigeon's retinal 
cells don't know anything about the food, the pigeon's motor cortex knows 
nothing about a red light, and so on. Of course a pigeon is a more 
complex system than a school of fish, but the principle of local actions 
resulting in global behaviours applies equally well.

>>What is important in sensation and perception is that
>>movement of an animal (or, more specifically, of its
>>receptors) has consequences. When we sweep our eyes
>>over a patch of red, there are changes in stimulation -
>>such movement/consequence contingencies are at the heart
>>of learning to perceive the world.
> 
> 
> But this would seem to imply that each of those billions of cells
> involved in discriminating trucks somehow "knows" that we moved our
> eyes, and can correlate this with the changes in the other signals
> it has access to, a claim which I find incredible.

It implies no such thing. But your faulty reasoning does lead to an 
incredible conclusion. IMO you should examine your assumptions, 
specifically the one that cells "know" and "learn."

> Certainly the signals a cell can see generally originate in
> environmental stimulii.  Cells learn their functions from
> relationships observable in those signals, so all the learning is in
> a sense ultimately traceable to the environment.

Cells don't learn!!!!!! Get rid of that idea, and things will be much 
clearer. Cells don't learn functions - they have functions. Some of 
those functions depend on the uptake of extra-cellular chemicals, for 
example. But performing those functions when those chemicals are present 
isn't learning, it's just what the cell does.

 > But the specific
> relationships observable at any point in the network are heavily
> dependent on other functional transforms which generated those
> signals, and are seldom directly mappable to any particular
> behavioral contingency identifiable at external levels of
> description.

I think you are confused about what you mean by a "specific
relationship observable at any point in the network". I certainly am 
confused about what you mean. Do you mean the likelihood that a signal 
will cross that point? Do you mean the number of other points it's 
connected to? Do you mean the kind of signal present at that point? 
(see above about messenger molecules originating from other parts of the 
organism.) Do you mean the strength of the signal, or its frequency if 
repeated, etc? Etc. Or do you mean the topology of the network, both 
local and global?

> Where do you stand on this matter?

See above. IMO, you have serious gaps in factual knowledge (certainly 
more than I do, and I have a lot), and your conceptualisation of the 
hierarchy of networks and hence of processes within them is vague, 
ambiguous, and error-ridden. Eg, you want to locate learning in cells, 
which is like locating a car's motion in the valve train.

Wolf
10/28/2004 1:19:42 PM
In article <nk6gd.15541$rs5.865264@news20.bellglobal.com>, Wolf 
Kirchmeir <wwolfkir@sympatico.ca> writes
>Bill Modlin wrote:
>
>[snip - Wolf's point-by-point reply to Bill Modlin, quoted in full above]
>

He's been told this many times before, by both myself and Glen. For some 
reason, like all too many others in our species he seems to think he 
knows better. He hasn't told us why he thinks he knows better, and he 
certainly hasn't shown that he understands the problems that those who 
have worked on behavioural plasticity ("learning") over the past 70 
years or so within the EAB have been addressing. Instead he looks to the 
literature where he finds comfort from other misguided nitwits who don't 
actually make any practical headway either. The reality is that a 
rational person would look to the EAB first to see where their 
half-baked, (and therefore probably errant) "ideas" have been tacitly 
imbibed from (through the usual dynamics of our public verbal 
behaviour). That this advice or point is still so widely argued against 
(or just ignored) by so many here and elsewhere is an interesting 
observation about the resistance to behavioural plasticity itself, which 
is of course just grist to my mill and why I bother to post anything 
here at all.

Good luck with bringing Bill to his senses. I only hope you get better 
remuneration than Glen and I have - as to date, he's just come back and 
peddled more recalcitrance in a slightly more veiled (albeit still 
obnoxious) way than some of the other coarser reprobates here.
-- 
David Longley
David
10/28/2004 2:54:30 PM
Wolf Kirchmeir wrote:
> Stargazer wrote:
>
> [...]
> >
> > In unsupervised learning, there isn't such things as "training
> > signals", but just stimuli.
>
> How can a training signal be different from a stimulus? Are you
> claiming that when you train a dog, the signals its neural networks
> receive are different than when it's learning on its own? Are you
> claiming that if I cause a signal as I intend, the signal is
> different than if it just happens? These are not frivolous questions
> - your language implies them.

"Training signal" is a common term, used to mean a prepared
(contrived) sequence of stimuli. It is jargon used by
artificial neural network researchers, and must be understood
in that context.

> > If input stimuli are absent, you may
> > have only changes of neural connections due to innate architectural
> > constraints and noise (which, by the way, is one factor influencing
> > biological networks).
>
> Er, yup, that's obvious.

Mrs. Patty had some doubts.

> > Unsupervised learning is a method that alters
> > synaptic connections not because of error propagated from the
> > desired outputs (the "training signals")
>
> Outputs are training signals? How? I mean, if they are outputs, then
> they aren't received by the network, right? So what are you leaving
> out here?

Again, this is common parlance among NN researchers. When you
train a network, you present a pattern at the input and let the
network calculate its output. The desired output (what we want it to
learn, the signal that will impart training to the network,
the "training signal") is then compared with this calculated
output and, through backpropagation, error corrections are derived
and used to modify the internal weights. This is the way
supervised learning is commonly understood to happen.
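
To make that concrete, here is a minimal sketch (Python with NumPy; the
tiny network, the XOR data, and the learning rate are mine, purely for
illustration - not anyone's actual model):

    import numpy as np

    rng = np.random.default_rng(0)

    # Input patterns and the desired outputs (the "training signal").
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for epoch in range(10000):
        # forward pass: present the pattern, let the net calculate its output
        H = sigmoid(X @ W1 + b1)
        Y = sigmoid(H @ W2 + b2)
        # compare calculated output with the desired ("training") output...
        E = Y - T
        # ...and backpropagate error corrections into the internal weights
        dY = E * Y * (1 - Y)
        dH = (dY @ W2.T) * H * (1 - H)
        W2 -= 0.5 * H.T @ dY; b2 -= 0.5 * dY.sum(axis=0)
        W1 -= 0.5 * X.T @ dH; b1 -= 0.5 * dH.sum(axis=0)

    print(Y.round(2))  # should approach the desired 0, 1, 1, 0

Note that the "training signal" never enters the network as an input; it 
is an external standard against which the output is compared.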

> , but because of intrinsic properties
> > of the signal itself.
>
> Er, are you saying that the signals received by the network vary
> depending on their source? That may be possible in silicon, but they
> aren't in natural networks. A nerve spike train is a nerve spike train
> no matter what its ultimate origin. It's the _connections_ that
> differentiate sources (eg, the nerves in the optic bundle terminate in
> different parts of the visual cortex.)

Lost you here.
I said that in unsupervised learning, connection weights among neurons
aren't adjusted because of backpropagation of errors (from the outputs),
but instead by an internal process that is driven by the statistical
properties of the signals fed as input. Such a process may use
Hebbian ideas, to cite one well-known method.
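
A minimal sketch of such an internally driven rule (Python with NumPy;
this uses Oja's variant of the Hebbian rule, and the correlated input
stream is made up for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    # An unlabelled stream of correlated 2-D inputs; no desired outputs anywhere.
    C = np.array([[3.0, 2.0], [2.0, 3.0]])
    X = rng.multivariate_normal([0.0, 0.0], C, size=5000)

    w = rng.normal(0.0, 0.1, 2)  # one unit's connection weights
    eta = 0.01
    for x in X:
        y = w @ x                    # the unit's activity
        w += eta * y * (x - y * w)   # Hebbian term y*x, with a decay keeping w bounded

    # w should settle near the direction of greatest variance in the input
    # ensemble (the leading principal component), here about [0.71, 0.71]
    # up to sign.
    print(w / np.linalg.norm(w))

The weight change uses only the unit's own activity and its inputs - 
exactly the kind of "local" statistics under discussion.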

> > If the organism moves, then this is not purely
> > unsupervised, it is a mixture of three processes: unsupervised,
> > supervised and reinforcement learning.
>
> Which is which?
>
> And how do the signals differ in "intrinsic qualities"? Where are the
> supervised, unsupervised, and reinforced learning signals received?
> Where and how are they propagated? How can the receptors tell which is
> is which? What experiments have shown that this is in fact what
> happens in an organism?

Too many questions, Mr Kirchmeir. One could easily write a book
trying to answer all that. Signals differ in their "intrinsic
qualities": they have different statistical properties. Random signals,
for instance, usually cannot have their dimensions reduced. Signals
obtained from natural environments often may have their
dimensionality reduced significantly. By using methods such as
local principal component analysis or population codes learned
by MDL or even factor analysis by a conventional delta-rule process,
one can achieve important reductions in dimensionality, which reveal
relevant features. In reinforcement learning, the organism will
receive an external indication of how good or bad it is to keep
and use the features just learned.
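
To illustrate the contrast (Python with NumPy; the synthetic data and
the sizes are mine, chosen only to make the point visible):

    import numpy as np

    rng = np.random.default_rng(0)

    def top2_variance_fraction(X):
        # fraction of total variance captured by the two leading
        # principal components
        s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
        return (s[:2] ** 2).sum() / (s ** 2).sum()

    n, d = 2000, 20
    random_sig = rng.normal(size=(n, d))      # pure noise, 20 dimensions

    latent = rng.normal(size=(n, 2))          # "natural" signal: 2 hidden causes
    mixing = rng.normal(size=(2, d))
    natural_sig = latent @ mixing + 0.1 * rng.normal(size=(n, d))

    print(top2_variance_fraction(random_sig))   # near 2/20: nothing to reduce
    print(top2_variance_fraction(natural_sig))  # near 1.0: reducible to 2 dimensions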

*SG*


Stargazer
10/28/2004 3:24:43 PM
Wolf Kirchmeir wrote:

> [snip the school-of-fish analogy, quoted in full above]
>

Thanks :) ... that analogy works well for me.  Yesterday I was driving 
in Tukwila and noticed a large flock of black birds flying 
synchronously.  There must have been a hundred or more.  They would all 
change direction, this way and that way willy-nilly, seemingly 
simultaneously.  It was spectacular.  They flew and flew and flew.  What 
*were* they thinking?

> No, Glen says discriminations of environmental contingencies cause 
> learning. 

I'm just curious about the language here.  If I substitute "are changes 
in the NN" for "cause learning" in your sentence as follows 
"Discriminations of environmental contingencies are changes in the NN", 
is it still true?

patty

patty
10/28/2004 4:27:51 PM
Stargazer wrote:
> Wolf Kirchmeir wrote:
> 
>>Stargazer wrote:
[snip a number of clear answers to my questions - thanks. I think. :-)]
> *SG*

Your answers clear up some misconceptions on my part, but they also show 
terminological obfuscation on the part of artificial neural network 
researchers.

Throughout your explanation, the term "signal" is used ambiguously. It 
sometimes seems to apply to an input to a single neuron, and sometimes 
to a collection of inputs to a network of neurons. IMO this is 
confusing. Very. It's a hierarchy error, which always causes trouble.

Also, calling the calculated output of a NN a "training signal" because 
it's compared to the desired outcome is confusing, at least to me, for 
whom a "training signal" is a "signal that trains", ie, an input to the 
NN. And the use of "signal" for both inputs and outputs is confusing, 
since IMO an output is a signal to the experimenter, not the NN.

All in all, my immediate impression is that workers in artificial NNs 
don't have a clear conception of what they are trying to do. Not that 
that is a bad thing - after all, it's early days yet, and one of the 
functions of research is to clarify the questions one is trying to 
answer. My comments as a pure outsider may or may not help clarify 
vagueness. Either way, thinking about your explanations has been 
interesting.


Wolf
10/28/2004 9:45:24 PM
patty wrote:

> Wolf Kirchmeir wrote:
[...]
>> No, Glen says discriminations of environmental contingencies cause 
>> learning. 
> 
> 
> I'm just curious about the language here.  If I substitute "are changes 
> in the NN" for "cause learning" in your sentence as follows 
> "Discriminations of environmental contingencies are changes in the NN", 
> is it still true?
> 
> patty
> 

Why change "cause" to "are"?

Wolf
10/28/2004 10:31:09 PM
Wolf Kirchmeir wrote:
> patty wrote:
> 
>> Wolf Kirchmeir wrote:
> 
> [...]
> 
>>> No, Glen says discriminations of environmental contingencies cause 
>>> learning. 
>>
>> I'm just curious about the language here.  If I substitute "are 
>> changes in the NN" for "cause learning" in your sentence as follows 
>> "Discriminations of environmental contingencies are changes in the 
>> NN", is it still true?
>>
>> patty
>>
> 
> Why change "cause" to "are"?
> 

Because there may not be any distinction between the changes in the NN 
and the discriminations of environmental contingencies.  They may be 
different descriptions of the same thing.  The morning star doth not the 
evening star cause.  Now certainly they are descriptions of different 
(aspects?) of some thing, but that does not change the (fact?) that 
they are the same thing.   Does it?  Now if they are not the same thing, 
then what is the distinction?

patty

patty
10/28/2004 11:00:56 PM
patty wrote:

> [snip earlier exchange, quoted above]
> 
> Because there may not be any distinction between the changes in the NN 
> and the discriminations of environmental contingencies.  They may be 
> different descriptions of the same thing.  The morning star doth not the 
> evening star cause.  Now certainly they are descriptions of different 
> (aspects?) of a some thing, but that does not change the (fact?) that 
> they are the same thing.   Does it?  Now if they are not the same thing, 
> then what is the distinction?
> 
> patty
> 

Point taken.

IMO you are identifying the process of discriminating contingencies with 
learning. I'm not sure about that. I prefer to claim that discriminating 
contingencies causes learning, if the organism responds. But we can't 
tell whether it discriminated contingencies if it didn't respond. And if 
it doesn't respond it will not learn. So we may say that the 
contingencies caused learning, but that's a shortcut. Hence I'm not sure 
about your identification.

H'm.

Wolf
10/28/2004 11:13:08 PM
In article <tKdgd.17800$Qs6.1523401@news20.bellglobal.com>, Wolf 
Kirchmeir <wwolfkir@sympatico.ca> writes
>Stargazer wrote:
>[snip - Wolf's comments on NN terminology, quoted in full above]
>
Another way of putting it is that the early ANN folk didn't know what 
was being done in the EAB back in the 30s, 40s and 50s (note that all of 
the former folks' work came out of those decades but they seem to have 
an uncanny knack of misrepresenting or just not understanding their 
sources). Nor did they understand the way that philosophy was going in 
the same period (most "AI" and "Cognitive Scientists" *still* appear to 
be pre 1929 Carnap or early Wittgensteinian). It appears to me that they 
took some basic "programming" algorithms which *simulated* (or at best 
controlled) some of the experimental schedules/equipment (in those early 
days it was largely switchboard and other telephonic paraphernalia) and 
just renamed their statistical models or descriptions of this "rule 
governed behaviour" something more catchy ie "Artificial Neural 
Networks" or "cell assemblies" (Hebb was always talking about a 
Conceptual Nervous System and he did it rather poorly relative to the 
efforts of Hull, Guthrie or Estes - he just said it all in more popular, 
familiar intensional language ensuring that more science-shy people 
lapped it up!). This propensity to generate misnomers and repackage, 
plagiarise or re-badge others' *empirical* work as something new and 
"algorithmic" or "analytic" simply through name changes allows them to 
sell a load of nonsense to the unwary who don't see this sleight of hand 
for what it is.  When they make out that what they have to say somehow 
captures what's essential about "cognitive" or "mental" life I just see 
fraud, something which I think is endemic within psychology and has been 
for decades. It makes "Cognitive Science" a Ptolemaic monster which in 
my view is far worse than the original, as this monster has no practical 
utility over the behavioural work itself, and actually gets in the way 
of advancing that science by soaking up funding on grounds that it's 
closer to common sense folk psychology! These people make quite 
ludicrous grant proposals, with fantastic promises which make the 
realistic aims of real science look trivial in comparison. This just 
shapes up lying and turns science into marketing. These people actually 
take students and other naive folk back to ways of thinking which were 
abandoned well over a century ago.
-- 
David Longley
David
10/28/2004 11:16:19 PM
Wolf Kirchmeir wrote:
> [snip earlier exchange, quoted above]
>
> Point taken.
> 
> IMO you are identifying the process of discriminating contingencies with 
> learning. I'm not sure about that. I prefer to claim that discriminating 
> contingencies causes learning, if the organism responds. But we can't 
> tell whether it discriminated contingencies if it didn't respond. And if 
> it doesn't respond it will not learn. So we may say that the 
> contingencies caused learning, but that's a shortcut. Hence I'm not sure 
> about your identification.
> 
> H'm.
> 

Ok, just trying to get the ontology straight.  A process creates a 
change, yet the process is not the change.  The process is called 
"discriminating of environmental contingencies", the effect of the 
process is called "learning" and that learning can also be called 
"changes in the NN".   Does that hold together well?

Incidentally, is it not just a conjecture that "if it doesn't respond it 
will not learn"?  Can't there be learning without a response?  Does a 
null response count as a response?

patty
patty
10/29/2004 12:25:06 AM
patty wrote:
> [snip earlier exchange, quoted above]
> 
> Ok, just trying to get the ontology straight.  A process creates a 
> change, yet the process is not the change.  The process is called 
> "discriminating of environmental contingencies", the effect of the 
> process is called "learning" and that learning can also be called 
> "changes in the NN".   Does that hold together well?

Seems to, but needs to be more precisely stated. "Learn" usually means 
"change behaviour in some way." "Change behaviour" means any one of "add 
to, reconfigure, delete" or any combination thereof. IOW, if we observe 
a fairly consistent change in behaviour following some repeated 
contingencies, we say the animal/person has learned.

In school the contingencies are the whole paraphernalia of lessons and 
testing. (Note that students must learn how to do a test! This is one of 
the objectives of the primary curriculum, although it's not usually 
described that way. Instead, kids get "homework", and then in class they 
get similar tasks, which are called tests.) EG, we say a student has 
"learned a poem by heart" when (s)he can recite it more or less error 
free. We instruct or lead a student through a learning-by-heart routine, 
eg, read aloud several times, then recite the first few lines without 
reading the text but with prompting by another student, then repeat with 
additional lines and less prompting, and do this until reading or 
prompting is no longer necessary. NB that the reading of the text, the 
not-reading of the text, the prompting/not prompting, the repetition 
itself are all contingencies.

And so it goes.

> Incidentally, is it not just a conjecture that "if it doesn't respond it 
> will not learn"?  Can't there be learning without a response?  Does a 
> null response count as a response?

If you mean response at the whole-organism level, no, since we define 
learning as a change in behaviour. If we don't see a change in 
behaviour, there has been no learning. Learned behaviours are clearly 
responses.

If you mean response at the level of the nervous system (for example), 
then IMO we should distinguish between changes in NN functions that 
occur because some genetic program has been invoked by, say, an increase 
in HGF, and changes that occur because of some set of novel inputs to 
the NN. The first is development (I want to say pure and simple, but 
Oscar Wilde's dictum prevents me). The second is learning. So, yes, in 
the second sense, we can say that there can be no learning without some 
response even at the NN level. That response would be the changed 
function of the NN. But as I've said in another post, the boundary 
between learning and development is somewhat blurry.

You may have in mind that people can learn things that we don't know 
they have learned. Leaving aside the possibility that we merely haven't 
observed the evidence of learning, you may still wish to argue that 
people can learn all sorts of things without giving signs that learning 
has occurred. There are several objections to this claim; here are three:

A) the subject may introspect and realise/become aware of having learned 
something. For example, a student may become confident of having gotten 
a poem by heart after reciting "in his head". But that internal 
recitation is a behaviour that the student observes in him or herself, 
and so fits the definition of learning. Moreover, the student may remain 
confident even if not called on to provide proof of learning by public 
recitation.

B) people often discover that they have learned something when they 
realise they are able to do something they did not know they could do. 
Again they are observing their own behaviour. This is also true of 
"understanding something." Think about it....

C) There may be no occasion to display the learning. That is, there has 
been no contingency that elicits the behaviour that we take as a sign 
that learning has occurred. In this case, the subject may also be 
unaware that (s)he has learned something. This IMO is the situation when 
people talk about "unconscious learning", by which they usually appear 
to mean learning that they didn't know they did until some contingency 
elicits evidence that they have learned something. The universe being a 
rather cantankerous place, I believe that we have all learned all sorts 
of things we haven't had occasion to display, and so shall remain 
forever ignorant of. If that sounds like a paradox, just remember Prof. 
G. Marx's comment on paradoxes. :-0

Good night, sleep tight, etc.
Wolf
10/29/2004 3:48:31 AM
"Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message
news:nk6gd.15541$rs5.865264@news20.bellglobal.com...
> Bill Modlin wrote:
>
> [...]
> >
> > Let's start over.
>
> OK.
>
> > You describe pretty much the same situation I see, in
> > which a cell "learns"... i.e. changes its various
> > operating parameters, thus changing its functional
> > mapping between inputs and outputs, based on
> > purely local signals.  These local signals include its
> > own firings, activations of its synapses by other cells
> > firing, and changes in various chemical concentrations
> > in which it is immersed.  As you say, none of these
> > inputs are labelled.
>
> Cells don't learn. That's a hierarchy error - it's the
> network that learns, maybe. Certainly the network of
> networks that we call, say, the visual cortex learns.

Rubbish.  I put the word in quotes and immediately defined
what I meant by it... "i.e. changes its various operating
parameters, thus changing its functional mapping between
inputs and outputs".   When you are not busy inventing
quibbles you use the word the same way, as for example when
you earlier said "learning at the neural level requires that
genes be switched on and off", in context with messenger
molecules and the like.

> The cell's responses to inputs are modified by some
> inputs, yes, but that is not the same as learning. It's
> analogous to the content of a RAM location being modified
> by some inputs. I wouldn't say the memory location learned
> anything - its electrical charge changed, is all.

More pointless quibbling over a word. We both know that here
we are talking about changes to the functional parameters of
a cell.

> > It seems obvious to me that under these circumstances,
> > if there is to be any systematic rule or principle
> > guiding the way the cell changes its response function,
> > it must be formulated in terms of the signals accessible
> > to the cell, with no reference to any possible
> > remote and indirect consequences of those changes.
>
> Yes.
>
> > The "adjustment rules" can depend on the strength and
> > frequency of these local signals and on the timing
> > relationships among them, and that pretty much exhausts
> > the available possibilities.
>
> In a natural system, it also includes the chemistry of the
> surrounding medium, which will modify the way the signals
> act on and in the cell.
> That's a crucial fact, IMO. Ie, "chemical messengers" will
> promote or inhibit the transmission of signals across the
> synaptic gaps.  Since these messenger molecules are
> emitted by other cells, including non-neural ones, the
> picture is much more complex. I haven't a conceptual
> handle on this, certainly not in terms of message
> content, signal labelling, etc etc etc.

Again I ask that you read what you are responding to.

I said "These local signals include its own firings,
activations of its synapses by other cells firing, and
CHANGES IN VARIOUS CHEMICAL CONCENTRATIONS
IN WHICH IT IS IMMERSED."

How does that omit anything you mention here?

> > This also seems to be the position you are taking.
> > Which confuses me, since on other occasions you seem to
> > argue for a stance much like Glen's, where all
> > "learning" is caused by remote behavioral
> > contingencies.
>
> I've already said that cells don't learn. In any case,
> figuring out how neural nets' behaviours/functions change
> doesn't refute the position that such changes are
> initiated by "remote external contingencies".

Was your substitution of "external" for "behavioral"
significant?  "External contingencies", in the sense of
statistical regularities suggesting the presence of a
coherent object, are indeed initiators for most of the
changes of interest.   But the behavior of the organism
enters only incidentally into the causal chain leading to
the signalling of those regularities to the neurons and
their adaptation to them.   The same changes would ensue if
a similar series of stimuli were observed passively in a
moving scene rather than scanned behaviorally from a static
scene.  Behavior itself does not enter into the relevant
contingencies.

> > Let me try to pose an unambiguous example of the
> > conflict.
> >
> > A pigeon can be trained to discriminate pictures
> > containing trucks from other pictures lacking trucks.
> >
> > This is done by selectively reinforcing some behavior
> > (pecking a button?) in the presence of the truck
> > pictures, and not in the presence of others.
> >
> > At this level of description, this is a supervised
> > learning process, driven by an experimenter-enforced
> > correlation between rewards and behaviors.  The rewards
> > are contingent on the production of the right behavior
> > under the right conditions, and the pigeon contains
> > mechanisms to adjust its behavior to maximize rewards.
>
> I see no reason to talk about "supervised" learning
> processes, since that word smuggles in the experimenter's
> intentions. The pigeon will learn in exactly the same way in
> nature, the only difference being that random behaviours
> will be reinforced rather than pre-selected ones. So
> what?

Obviously you are not aware of the ordinary technical usage
of the word "supervised" in this context.   Go read a book
or something, I'm getting tired of trying to educate you.
Hint: it is still just as "supervised" when natural forces
do the supervision.

> The mechanisms that "adjust the pigeon's behaviour"
> include the cellular changes that you seem to think
> exemplify some other kind of learning.
>
> > That's fine, so far as it goes.
> >
> > But when I look at discriminating that class of pictures
> > so that it can be recognized as a condition for the
> > rewarded behavior, I see a pretty complicated process.
> > There are  billions of cells computing functions of
> > whatever inputs they have access to, responding to
> > all sorts of "features" at dozens of levels, bringing
> > together information from many areas of the picture, to
> > eventually reach a level at which there is a signal of
> > some sort that indicates whether or not there is a truck
> > somewhere in the picture.
>
>
> So the process is complicated. So what? When I watch a
> rainstorm, I see billions of raindrops, millions of
> turbulence cells, etc. The process seems pretty
> complicated. The net result is still that things get
> very wet.

An analogous situation would be if you claimed that "things
get very wet" is relevant to the path of a raindrop.  The
path is part of a big picture in which all those paths
add up to a net result of things getting wet.  But you can
hardly explain the details of a particular path by saying
that a drop moved a particular way just to get things wet.

> > That truck-signal is correlated with the rewards and the
> > behavior, so it makes sense at least at a handwaving
> > level that a supervised learning process could
> > incorporate it, and produce the behavioral
> > modifications that we observe.
> >
> > But most of those intermediate signals in the long path
> from retina to truck-signal are not correlated with
> > anything in the high level description of the
> > experiment.  They aren't correlated with trucks,
> > or rewards, or pecking, and therefore could not have
> > been shaped by any of those things.
>
> Yes, that's true, but why should they be?

They should not and are not.  But the behaviorist viewpoint
Glen espouses entails that contingencies among such
externally describable and observable things are all we are
allowed to invoke in explanation of the shaping of behavior.

> An analogous problem: how do the hundreds or thousands
> of fish in a school of fish all "know" how to change
> direction? They don't. Each fish knows that the immediately
> surrounding fish are coming closer or getting
> further away, so it adjusts its direction and speed to
> maintain a certain distance. That's a fine example of "local
> correlations", and IMO is the way one must think about it.
> It doesn't matter where the change in direction
> originates (a few fish see a shark, and change direction) -
> the fish inside the school don't get the message "Shark
> nearby, get out of the way". They get only messages about
> changing distance between themselves, and that's what they
> respond to.
>
> Let the fish be signals moving through a NN, let the
> changing distances be variations in signals passing between
> neurons, and let the response of the fish be the
> cellular/synaptic changes. Then the school's change of
> direction, i.e. the result, is a different set of NN
> functions. The analogy is good enough to clarify the concept, IMO.
>

Ah!  Finally a clue as to how you can believe things that
appear so silly to me.  Perhaps this explains Glen's
misperception as well.  Thank you, thank you, thank you!

The problem with this analogy is that the coupling functions
between neighboring fish in a school are linear and thus
correlation-preserving, while the coupling between
neighboring neurons in a network is nonlinear and
correlation-changing.

I imagine that you understand at some level the distinction
between linear and nonlinear functions.  You probably know
about intermodulation of variables input to a nonlinear
function, so that the output can no longer be treated as a
sum of independent functions applied to each variable but
must be represented as a function of the combined state of
all the inputs.

But you probably have not fully assimilated this knowledge
into your reasoning.  Which allows you to imagine that a
linear network might be usefully analogous to a nonlinear
one for purposes of our current discussion, though it is
not.  This lack of assimilation also prevents you from
appreciating the significance of "local correlations": in
your linear view of a network, locality has no significance,
since correlations persist throughout.

In the network of fish essentially the same signal is
recreated at each node, so that it retains its close
correlation with the original summary behavioral response to
the shark on the part of a few fish nearest the shark.  This
ripples out from the origin in a pleasingly intuitive
manner, with easily visualized linear interactions with
other inputs such as obstacles or constrictions to the flow
of the school.

Things don't work that way in a non-linear network.

At each node multiple signals are combined in such a way
that they intermodulate to produce new signals rather than
linear transforms of the ones with which we started.  These
new signals have new correlations with new things
corresponding to aggregates or combinations of their inputs,
and weaker correlations with individual inputs.  Formally,
there is no necessary correlation at all with the inputs.
In practice we rely on the probable coupling of real-world
relationships across multiple statistical orders to allow us
to use the original pairwise statistics as heuristic guides
to finding useful higher order functions.  But despite this
heuristically useful coupling, correlations with the inputs
are weakened at each node, and after a few steps disappear
into the background noise.

In general, there is no correlation between inputs and
outputs of a multi-stage nonlinear multivariate transform.

For a deep multilayer network, interior variables will
generally have no significant correlation with either
inputs or outputs of the network as a whole, but only with a
subset of the interior hidden variables generated at similar
levels of indirectness or abstraction.
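
A quick numerical sketch of that claim (Python with NumPy; the random
tanh layers and the sizes are stand-ins I chose for illustration, not a
model of any real network):

    import numpy as np

    rng = np.random.default_rng(0)

    n, d = 5000, 50
    x = rng.normal(size=(n, d))          # input signals
    probe = (x[:, 0] - x[:, 0].mean()) / x[:, 0].std()  # one input, standardized

    h = x
    for layer in range(1, 7):
        W = rng.normal(0.0, 1.0 / np.sqrt(d), size=(d, d))
        h = np.tanh(h @ W)               # nonlinear, intermodulating coupling
        hz = (h - h.mean(axis=0)) / h.std(axis=0)
        best = np.abs(probe @ hz / n).max()
        # strongest correlation between the probed input and *any* unit in
        # this layer; it should shrink layer by layer toward the sampling
        # noise floor
        print(layer, round(float(best), 3))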

In such networks there are no spreading ripples conveying
the same message throughout the network.  At each node there
is a new message, conveying new and different information.
Some of these may be correlated, most are not.  Only
correlated signals are useful for the directed modification
of the function to be performed by any particular node.

-----

That should be enough to make the original point that
externally identifiable factors such as behavior or
contingent results of that behavior are generally
uncorrelated with the proper function of interior nodes of a
complex nonlinear network, even though those interior nodes
are ultimately part of the causal chain leading to the
behavior.

Which means that behavior-based explanations are irrelevant
to the appropriate adaptation of those interior functions.

The next step is to introduce relevant explanations of how
interior cells come to perform useful functions, which is
what I intended for "part 2" of this topic.  It may be
obvious by now that first we need ways to collect correlated
signals together, and then we need ways to generate useful
new outputs from those correlated signals.

Part 2, a sketch of possible approaches to finding and
exploiting correlations among interior hidden variables,
should be coming soon.

Right now it's hard to see for sure what will be in part 3.
I'd guess that the next topic should probably be about how
we bridge from unsupervised data-directed self organization
to goal directed behavior... how we actually implement
reinforcement learning or operant conditioning on top of this
substructure.  But I'll have to see how the conversation
goes...

Bill



Bill
10/29/2004 9:57:47 AM
Wolf Kirchmeir wrote:
> Stargazer wrote:
> > Wolf Kirchmeir wrote:
> >
> > > Stargazer wrote:
> [snip a number of clear answers to my questions - thanks. I think.
> :-)]
> > *SG*
>
> Your answers clear up some misconceptions on my part, but they also
>  show terminological obfuscation on the part of artificial neural
> network researchers.
>
> Throughout your explanation, the term "signal" is used ambiguously. It
> sometimes seems to apply to an input to a single neuron, and sometimes
> to a collection of inputs to a network of neurons. IMO this is
> confusing. Very. It's a hierarchy error, which always causes trouble.

I agree that it is confusing. But it is not a hierarchy error; it
reflects the common understanding by a group of researchers. Such
peculiar meaning of words is also commonplace in many specialized
fields. Inside these fields such expressions makes sense (because
all of the people in that area use the linguistic expressions to
mean the same thing). For anyone outside that field, it often
appears as an incoherent lump of ideas (well, sometimes it IS an
incoherent lump of ideas; some french post-modernist texts
come to mind).

> Also, calling the calculated output of a NN a "training signal"
> because it's compared to the desired outcome is confusing, at least
> to me, for whom a "training signal" is a "signal that trains", ie, an
> input to the NN. And the use of "signal" for both inputs and outputs
> is confusing, since IMO an output is a signal to the experimenter,
> not the NN.
>
> All in all, my immediate impression is that workers in artificial NNs
> don't have a clear conception of what they are trying to do. Not that
> that is a bad thing - after all, it's early days yet, and one of the
> functions of research is to clarify the questions one is trying to
> answer. My comments as a pure outsider may or may not help clarify
> vagueness. Either way, thinking about your explanations has been
> interesting.

Things get much worse when one discipline uses the same word to
mean different things than other fields do. The word "learning", for
instance, means different things for neuroscientists, behaviorists,
computational learning theorists, machine learning specialists,
cognitive psychologists, neural researchers, AI scientists,
psychophysics psychologists, etc. In this forum (in the few
days I've been peeking) I've seen quite a lot of disagreements whose
roots lie in different ways of defining words. Most of these
definitions can be said to be tenable, but only inside that
specific discipline. It is often a conceptual failure (besides
being unproductive) to criticize one field using the understandings
of a different field.

*SG*


Stargazer
10/29/2004 1:49:48 PM
Bill Modlin wrote:
[snip quibbles about alleged quibbles, which were rather attempts 
at clarifying the issues/processes]

> Was your substitution of "external" for "behavioral"
> significant?.  "External contingencies", in the sense of
> statistical regularities suggesting the presence of a
> coherent object, are indeed initiators for most of the
> changes of interest.

At what level? Why statistical regularities? AFAIK, it's the 
organisation (structure, topology) of the network of NNs that "indicates 
the presence of a coherent object." "Statistical regularities" are 
relevant when computing "coherent objects" in a bit-mapped image, for 
example, but the retina + visual cortex appears to do it differently. Or 
so I understand from my reading on the subject.

> But the behavior of the organism
> enters only incidentally into the causal chain leading to
> the signalling of those regularities to the neurons and
> their adaptation to them.

So do all inputs, at all levels, so your use of "incidentally" seems to 
me an error. IOW, you might just as well characterise the inputs from a 
neighbouring NN as "incidental."

> The same changes would ensue if
> a similar series of stimulii were observed passively in a
> moving scene rather than scanned behaviorally from a static
> scene.  Behavior itself does not enter into the relevant
> contingencies.

If you think "behaviour" is limited to skeletal-muscular events, think 
again. IMO this misconception on your part is one of the reasons for 
some of your errors. Also, there is no such thing as "passive 
observation." Observation (perception) is a behaviour, and a very 
complex one. So complex, it's not fully understood yet. (At one end, the 
"conscious awareness" end, it may never be fully understood, only 
described, much as quantum mechanics is not fully understood, only 
described. This analogy may account for Penrose's speculations.)

>>>Let me try to pose an unambiguous example of the
>>>conflict.
>>>
>>>A pigeon can be trained to discriminate pictures
>>>containing trucks from other pictures lacking trucks.
[snip for brevity]
>>I see no reason to talk about "supervised" learning
>>processes, since that word smuggles in the experimenter's
>>intentions. The pigeon will learn in exactly the same way in
>>nature, the only difference being that random behaviours
>>will be reinforced rather than pre-selected ones. So
>>what?
> 
> 
> Obviously you are not aware of the ordinary technical usage
> of the word "supervised" in this context.   Go read a book
> or something, I'm getting tired of trying to educate you.
> Hint: it is still just as "supervised" when natural forces
> do the supervision.

Firstly, that's not "ordinary technical usage", it's AI usage, 
apparently. So why use the word "supervised"? I've been enlightened 
about the meaning of "supervised" by Stephen Harris - I think it's a 
stupid term. It assumes that there is some other non-supervised form of 
learning, which in turn assumes that learning is something other than a 
change in behaviour. If so, just what is it? Are you claiming that 
learning "really" is something other than a change in behaviour, and 
that changes in behaviour are only "incidentally" signs that learning 
has taken place? If so, you have relocated learning somewhere else, and 
should use a different term. I vote for "reconfiguration of neural 
networks." I don't have enough Latin or Greek to make up a nice technical 
word to replace that phrase, though - sorry about that.

>>The mechanisms that "adjust the pigeon's behaviour"
>>include the cellular changes that you seem to think
>>exemplify some other kind of learning.
>>
>>
>>>That's fine, so far as it goes.
>>>
>>>But when I look at discriminating that class of pictures
>>>so that it can be recognized as a condition for the
>>>rewarded behavior, I see a pretty complicated process.
>>>There are  billions of cells computing functions of
>>>whatever inputs they have access to, responding to
>>>all sorts of "features" at dozens of levels, bringing
>>>together information from many areas of the picture, to
>>>eventually reach a level at which there is a signal of
>>>some sort that indicates whether or not there is a truck
>>>somewhere in the picture.
>>
>>
>>So the process is complicated. So what? When I watch a
>>rainstorm, I see billions of raindrops, millions of
>>turbulence cells, etc. The process seems pretty
>>complicated. The net result is still that things get
>>very wet.
> 
> 
> An analogous situation would be if you claimed that "things
> get very wet" is relevant to the path of a raindrop.  The
> path is part of a big picture in which all those paths
> add up to a net result of things getting wet.  But you can
> hardly explain the details of a particular path by saying
> that a drop moved a particular way just to get things wet.

Sorry, I was refuting your assumption that that is what I was claiming. 
I don't and didn't.

You're also assuming intentions and purposes. I guess you are assuming 
that learning must be for some purpose. It isn't. Learning is just a 
change in behaviour.

[snip brief summary of pigeon pecking trials]
>>>That truck-signal is correlated with the rewards and the
>>>behavior, so it makes sense at least at a handwaving
>>>level that a supervised learning process could
>>>incorporate it, and produce the behavioral
>
>>>But most of those intermediate signals in the long path
>>>from retina to truck-signal are not correlated with
>>>anything in the high level description of the
>>>experiment.  They aren't correlated with trucks,
>>>or rewards, or pecking, and therefore could not have
>>>been shaped by any of those things.
>>
>>Yes, that's true, but why should they be?
> 
> 
> They should not and are not.  But the behaviorist viewpoint
> Glen espouses entails that contingencies among such
> externally describable and observable things are all we are
> allowed to invoke in explanation of the shaping of behavior.

You are either misunderstanding or misrepresenting Glen's p.o.v., which 
is that "Physiology mediates behaviour", and "Physiology is not needed 
to account for relationships between contingencies and behaviour." I 
would go further: adding physiology to account for those relationships 
is likely to obfuscate rather than enlighten. The question of what 
physiological changes are necessary for changes in behaviour to occur is 
at a different level of explanation than the relationships between 
contingencies and behaviour. Some generalisations may be possible along 
the lines of "When a human reaches puberty, a number of synapses are 
destroyed and new ones formed, hence the addition of courtship 
behaviours to the human repertoire." But one would still have to account 
for actual courtship behaviour in terms of the contingencies encountered 
by Jim and Joan (which include the culture in which they are raised, etc 
etc etc etc). There's no other way. As you yourself complain, 
information is changed so much at the NN level that we can't correlate 
those functions with a glimpse of Joan's or Jim's face on the other side 
of the room.

To put it another way: if you want to know why some specific behaviour 
occurs, or why it is shaped in some specific way, all that's available 
to explain that are the contingencies (and the histories of the 
individuals, but let's not complicate matters further). If you want to 
know why some _class_ of behaviours is possible at all (and hence it is 
possible to shape them in some way), then physiology (including a lot 
more than NNs) is necessary. If you want to know how to make a machine 
that will learn with experimental "supervision" or without, then you 
must study both levels of organisation. At least.

As for "explanation" - it's not clear to me what you understand by that 
term.

>>An analogous problem: how do the hundreds or thousands
>>of fish in a school of fish all "know" how to change
>>direction? They don't. Each fish knows that the immediately
>>surrounding [snip for brevity]
> 
> Ah!  Finally a clue as to how you can believe things that
> appear so silly to me.  Perhaps this explains Glen's
> misperception as well.  Thank you, thank you, thank you!
> 
> The problem with this analogy is that the coupling functions
> between neighboring fish in a school are linear and thus
> correlation-preserving, while the coupling between
> neighboring neurons in a network is nonlinear and
> correlation-changing.

So? My comments allow for that. The principle is the same whether the 
system's (school of fish, NN) behaviour changes (is different the next 
time it encounters a shark) or not. It would hold no matter what the 
form of the correlations: local action results in global behaviour. 
Granted, non-linear functions tend to be counter-intuitive - that's why 
I used a school of fish. It's easier to see the principle operating in 
such a system. But surely you should have no trouble scaling it up 
another level of complexity.
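
To make "local action results in global behaviour" concrete, here's a
minimal sketch in Python (the ring topology and the plain-averaging rule
are my own illustration, not anyone's model of real fish):

import numpy as np

def school_step(headings):
    # each fish averages its heading with its two immediate neighbours on
    # a ring; no fish sees the school's global direction
    n = len(headings)
    return np.array([(headings[i - 1] + headings[i] + headings[(i + 1) % n]) / 3
                     for i in range(n)])

headings = np.random.default_rng(0).normal(size=20)
for _ in range(500):
    headings = school_step(headings)
print(headings.std())   # ~0: the school has settled on a single direction

The coupling here is a plain average - linear, hence
correlation-preserving - but the point at issue is only that each fish
updates from purely local information.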

[snip explanation of linear and non-linear functions, and how systems
instantiate them]
> In general, there is no correlation between inputs and
> outputs of a multi-stage nonlinear multivariate transform.

Quite so, if you mean it's impossible to predict the output as a 
function of the input alone, but that it is necessary to compute all the 
intermediate steps to determine the output. I understand all that. What 
I don't understand is why you think this refutes the behaviorist claim. 
I also don't understand why the output of a non-linear function is not a 
correlation, while a linear one is, but I'm probably just using 
"correlation" to mean " computed/able relationship between two or more 
variables" or some such - not ordinary technical usage, I guess.

> For a deep multilayer network, interior variables will
> generally have no significant correlation with either
> inputs or outputs of the network as a whole, but only with a
> subset of the interior hidden variables generated at similar
> levels of indirectness or abstraction.
> 
> In such networks there are no spreading ripples conveying
> the same message throughout the network.  At each node there
> is a new message, conveying new and different information.
> Some of these may be correlated, most are not.  Only
> correlated signals are useful for the directed modification
> of the function to be performed by any particular node.

Quite so. I understand all that, perhaps even more intuitively than you 
do. But I still don't understand why you think it refutes or sidelines 
the behaviorist claim.

> -----
> 
> That should be enough to make the original point that
> externally identifiable factors such as behavior or
> contingent results of that behavior are generally
> uncorrelated with the proper function of interior nodes of a
> complex nonlinear network, even though those interior nodes
> are ultimately part of the causal chain leading to the
> behavior.

Again, your usage of un/correlated is confusing to me. I think you just 
mean "there is no simple correlation between behaviours and 
contingencies on the one hand and the functions of neural networks on 
the other." True, true.

> Which means that behavior-based explanations are irrelevant
> to the appropriate adaptation of those interior functions.

I thought that's what I said, and have said several times. I'll say it 
again: The motor cortex NNs that initiate and control the pigeon's 
pecking at a red button "know" nothing at all about the colour red, nor 
do they know anything about a truck. Neither do they "know" about the 
feedback between the visual cortex and the motor cortex that guides the 
peck to nearly the centre of the button every time. Etc.

So what? Just what is your point? Why are you so het up about 
eliminating behaviour from an attempt to figure out how to make a 
learning machine? Granted, you don't need to know what specific 
behaviour produces/triggers the changes in the NN, but you sure as hell 
need to know about it when you test your machine.

> The next step is to introduce relevant explanations of how
> interior cells come to perform useful functions, which is
> what I intended for "part 2" of this topic.  It may be
> obvious by now that first we need ways to collect correlated
> signals together, and then we need ways to generate useful
> new outputs from those correlated signals.
>
> Part 2, a sketch of possible approaches to finding and
> exploiting correlations among interior hidden variables,
> should be coming soon.
> 
> Right now it's hard to see for sure what will be in part 3.
> I'd guess that the next topic should probably be about how
> we bridge from unsupervised data-directed self organization
> to goal directed behavior... 

Hey, what's with this "unsupervised" here? Are you referring to 
spontaneous changes within the NN? If so, what does "data-directed" 
mean? Aren't inputs that originate in, uh, behaviours a kind of data? 
Aren't the behaviours themselves a kind of data? Or are you talking 
about data structures/arrays/whatever, stored in some device outside the 
NN, that the NN "learns"? If so, that would look like "supervised" 
learning to me - the "training signal" would be a match between the NNs 
output and the data structure, wouldn't it?

Once again, terminological fuzz. Sigh.


>how we actually implement
> reinforcement learning or operant conditioning on top of this
> substructure.  But I'll have to see how the conversation
> goes...

IMO, the "useful functions" that you refer to are probably those 
"reinforcements" that occur in operant conditioning. But it's too early 
to tell, since I'm not sure what you would class as useful functions.

I also suspect that behavioral neurologists (you know, the guys and gals 
that try to correlate changes in neural networks in the brain with 
behavioral changes of the organism) will arrive at some useful stage 
sooner than the effort you're exemplifying. But what do I know. It's 
been years since I did any coding, let alone any program design, and 
that was pretty simple stuff, just about using data to learn things I 
needed to know about a student.
Wolf
10/29/2004 3:12:22 PM
Stargazer wrote:
> Wolf Kirchmeir wrote:
> 
>>Stargazer wrote:
>>
>>>Wolf Kirchmeir wrote:
>>>
>>>
>>>>Stargazer wrote:
>>
>>[snip a number of clear answers to my questions - thanks. I think.
>>:-)]
>>
>>>*SG*
>>
>>Your answers clear up some misconceptions on my part, but they also
>> show terminological obfuscation on the part of artificial neural
>>network researchers.
>>
>>Throughout your explanation, the term "signal" is used ambiguously. It
>>sometimes seems to apply to an input to a single neuron, and sometimes
>>to a collection of inputs to a network of neurons. IMO this is
confusing. Very. It's a hierarchy error, which always causes trouble.
> 
> 
> I agree that it is confusing. But it is not a hierarchy error, it
> reflects the common understanding by a group of researchers. Such
> peculiar meaning of words is also commonplace in many specia
[snip useful comments that follow]

It's still a hierarchy error, IMO. Consider that the effect of the
signal is different when it's a simple input to a neuron (essentially,
the neuron will fire or it won't, although in some cases the neuron's
response to subsequent signals is changed) than when it's a complex
multiple input to a NN (it could have any of a range of effects, from
null to a change in output in response to a repetition of the signal to
a change in output to a different signal, etc.). It may make sense to
speak of the "quality" (mathematically defined) of a complex signal, but
not of a simple signal, which is of one kind or another. And so on.

But whether it's a hierarchy error or not, the term is ambiguous and 
hence likely to cause errors. That's enough to require a refinement in 
terminology IMO. Terminology should express or label concepts accurately 
and unambiguously.
Wolf
10/29/2004 4:38:07 PM

Bill (responding to Wolf): Let's start over.

You describe pretty much the same situation I see, in which a cell
"learns"... i.e. changes its various operating parameters, thus
changing its functional mapping between inputs and outputs, based on
purely local signals.  These local signals include its own firings,
activations of its synapses by other cells firing, and changes in
various chemical concentrations in which it is immersed.  As you
say, none of these inputs are labelled.

It seems obvious to me that under these circumstances, if there is
to be any systematic rule or principle guiding the way the cell
changes its response function, it must be formulated in terms of the
signals accessible to the cell, with no reference to any possible
remote and indirect consequences of those changes.  The "adjustment
rules" can depend on the strength and frequency of these local
signals and on the timing relationships among them, and that pretty
much exhausts the available possibilities.

This also seems to be the position you are taking.  Which confuses
me, since on other occasions you seem to argue for a stance much
like Glen's, where all "learning" is caused by remote behavioral
contingencies.



GS: Say there are synapses between "afferent" neurons from the sensory
systems and "efferents" to central pattern generators ("afferent" and
"efferents" appear in quotes because they are in the CNS). When the
efferents fire (and they do so spontaneously sometimes) they produce
movements. The afferents from the sensory systems have, at first, little
effect on the firing of the efferents. In addition to these considerations,
there are hardwired detectors that send signals to, say, the nucleus
accumbens, whenever a particular stimulus occurs (the reinforcer). The NAcc
sends diffuse projections to the area where the aforementioned
afferent/efferent synapses are. Now, the post-synaptic tissue may be altered
when certain chemicals are dumped into the area, IF they have recently
"fired." And they are altered in such a way as to be sensitive to the
afferents firing. The rest should be clear - the animal does something as
the result of the efferents firing and that something results in the
appearance of the reinforcer. The hard-wired system fires and this causes
the release of the chemical that alters the synapse. When the stimuli that
fired the sensory system recur, they are more likely to fire the efferents
that produced the response. This, then, eventually, produces a discriminated
operant. So, the "critical" synapses are modified by the local "reinforcer
signal," but the sequence of events is possible (on a sustained basis) only
when there is an external contingency, for it is the external contingency
which results in the proper timing of the reinforcement and the behavior in
question. The "local" events would be the same in the absence of a
contingency (i.e., response-independent delivery) and behavior would get
reinforced. The behavior would be whatever happened to accidentally be
temporally contiguous with the reinforcer. Such behavior Skinner called
"superstitious," and he pointed out that such behavior should drift over
time because, obviously, there is no contingency that forces continued
temporal relations between a particular class of responses and the
reinforcer.

Bill: Let me try to pose an unambiguous example of the conflict.

A pigeon can be trained to discriminate pictures containing trucks
from other pictures lacking trucks.

This is done by selectively reinforcing some behavior (pecking a
button?) in the presence of the truck pictures, and not in the
presence of others.

At this level of description, this is a supervised learning process,
driven by an experimenter-enforced correlation between rewards and
behaviors.



GS: What if we were talking about a child that learned to shake a tree
branch to get an apple? Would that be "supervised?"



Bill: The rewards are contingent on the production of the
right behavior under the right conditions, and the pigeon contains
mechanisms to adjust its behavior to maximize rewards.



GS: The first part is right, the second part, debatable.

Bill: That's fine, so far as it goes.

But when I look at discriminating that class of pictures so that it
can be recognized as a condition for the rewarded behavior, I see a
pretty complicated process.  There are  billions of cells computing
functions of whatever inputs they have access to, responding to all
sorts of "features" at dozens of levels, bringing together
information from many areas of the picture, to eventually reach a
level at which there is a signal of some sort that indicates whether
or not there is a truck somewhere in the picture.



GS: Well, I would agree that the process involves other learning that took
place a long time ago. However, I am arguing that this other behavior is
none other than perceptual behavior, and that perceptual behavior is
behavior that is reinforced by certain sensory consequences. For example,
convergence and tracking may come into being - or at least be fine-tuned -
because "turning two similar stimuli into one" (convergence) and maintaining
stimulation of the fovea (tracking) are reinforcers. Other sorts of sensory
changes may, at first, be reinforcing simply because they are novel. Later,
the kind of behavior acquired in this fashion becomes part of behavior
acquired because of the role the thing seen plays in larger contingencies.

Bill: That truck-signal is correlated with the rewards and the behavior,
so it makes sense at least at a handwaving level that a supervised
learning process could incorporate it, and produce the behavioral
modifications that we observe.

But most of those intermediate signals in the long path from retina
to truck-signal are not correlated with anything in the high level
description of the experiment.  They aren't correlated with trucks,
or rewards, or pecking, and therefore could not have been shaped by
any of those things.



GS: You mean that signals from the retina are uncorrelated with the pattern
of reflected light? I think you mean that the signals are also correlated
with a multitude of other things as would be the case if "hardwired feature
detectors" provided a sort of "perceptual atom." But the stimulus present
causes a certain kind of behavior if enough of the elements are present.

Bill: To me it seems obvious that they must be shaped by local rules
involving relationships among signals accessible to the cell.
Specifically they cannot depend on producing some effect in the
external environment and reacting to contingent results of that
effect.  Their connections to the external environment are extremely
indirect; there is no way that any correlation mediated by external
contingencies could be communicated to them.



GS: I just don't think I follow you here, Bill.

Bill: But Glen seems to say that discriminations must be learned as a
result of behavioral contingencies.  For example, his response to my
original post was:

> What is important in sensation and perception is that
> movement of an animal (or, more specifically, of its
> receptors) has consequences. When we sweep our eyes
> over a patch of red, there are changes in stimulation -
> such movement/consequence contingencies are at the heart
> of learning to perceive the world.

But this would seem to imply that each of those billions of cells
involved in discriminating trucks somehow "knows" that we moved our
eyes, and can correlate this with the changes in the other signals
it has access to, a claim which I find incredible.



GS: No, it means that the movements come to be caused by the stimulation
because there is a contingency (that arises because of the structure of the
sensory apparatus) between movement of the sensory receptors and changes in
stimulation. This is true even though each neuron only "knows" that it has
recently fired and that there is "reinforcer juice" in the chemical milieu.

Bill: Certainly the signals a cell can see generally originate in
environmental stimuli.  Cells learn their functions from
relationships observable in those signals, so all the learning is in
a sense ultimately traceable to the environment.



GS: That's right.



Bill: But the specific
relationships observable at any point in the network are heavily
dependent on other functional transforms which generated those
signals, and are seldom directly mappable to any particular
behavioral contingency identifiable at external levels of
description.

Where do you stand on this matter?



GS: Well, now you know where I stand.

"Bill Modlin" <modlin1@metrocast.net> wrote in message
news:XrGdnXKuAOTWGx3cRVn-tg@metrocastcablevision.com...


Glen
10/29/2004 6:25:13 PM
in article _3tgd.22767$Qs6.1769700@news20.bellglobal.com, Wolf Kirchmeir at
wwolfkir@sympatico.ca wrote on 10/29/04 8:12 AM:

> Bill Modlin wrote:

[snip]

>> Obviously you are not aware of the ordinary technical usage
>> of the word "supervised" in this context.   Go read a book
>> or something, I'm getting tired of trying to educate you.
>> Hint: it is still just as "supervised" when natural forces
>> do the supervision.
> 
> Firstly, that's not "ordinary technical usage", it's AI usage,
> apparently. So why use the word "supervised"? I've been enlightened
> about the meaning of "supervised" by Stephen Harris - I think it's a
> stupid term. It assumes that there is some other non-supervised form of
> learning, which in turn assumes that learning is something other than a
> change in behaviour. If so, just what is it? Are you claiming that
> learning "really" is something other than a change in behaviour, and
> that changes in behaviour are only "incidentally"  signs that learning
> has taken place? If so, you have relocated learning somewhere else, and
> should use a different term. I vote for "reconfiguration of neural
> networks" I don't have enough Latin or Greek to make up a nice technical
> word to replace that phrase, though - sorry about that.

Below is an excerpt from something I posted last December. It might help
clarify the terminology, the technical distinction between supervised and
unsupervised learning algorithms. But before re-posting that excerpt I'll
take another stab at clarifying some of the notions.

To appreciate the context it might help to imagine you are a programmer
charged with some programming task. Suppose the task is to write a
subroutine that converts temperatures given in degrees Fahrenheit to degrees
Centigrade. Well, it is a simple task because there is a unique function that
correctly maps inputs to outputs - the function is not something the program
has to learn, you simply implement the formula. Now suppose the task is to
write a subroutine that sorts records from a database by some field - let's
say loan applications sorted by applicant's last name. This is still a
function - you can think of the input list  of records as a point in some
tuple-space, and the output list of records as a point in the same
tuple-space - but there is in general more than one function that is correct
if there are applicants with the same last name (any ordering of the sublist
of Smiths is correct), and there are many sorting algorithms that make
various tradeoffs, but there is still no need for the program to learn a
valid sorting function - the map from input to output is well defined (up to
permutation of equi-named sublists) and you simply implement some sorting
algorithm. Now suppose the task is to write a subroutine that assigns a
default-risk to loan applications. This routine is a function that maps
tuples to scalars. There is no known correct function. One approach is to
"learn" a function from examples - a set of loan application records each
labelled with DEFAULT/NO_DEFAULT. This is a training set. It consists of
input/output pairs. There are a great many algorithms that induce a function
from labelled examples, and a substantial body of theory on the reliability of
the induced functions. All such algorithms search a space of functions for a
"good" function. One way in which they vary is the set of constraints that
define the function space searched - linear threshold functions, for
example. A simple if not useful such function would be: if (INCOME >=
THRESHOLD) default-risk = 0 else default-risk = 1. This partitions the input
space into two equivalence classes separated by a hyperplane perpendicular
to one of the axes of the input space. This is a single parameter family of
functions - a function space. Specifying the value of THRESHOLD completely
specifies a unique function from that space. The function space to be
searched can be made a little richer by allowing the input space to be
partitioned into convex regions bounded by hyperplanes at arbitrary
orientations - a space of linear threshold functions. To specify a function
from such a space requires more parameters - it has more degrees of freedom.
The function space can be made again more general by allowing the surfaces
bounding equivalence classes to be curved - polynomial classifiers, for
example. Note that the input space is not changing here, only the allowable
ways of carving it up. Each time we make the function space to be searched
more general we increase the ability of the induced function to make fine
distinctions, and we increase the number of parameters required to specify a
function from the space. There are tradeoffs. There is a substantial body of
applicable theory with results, for example, on how well such functions
induced from examples can be expected to generalize to unseen cases. The
induction of such functions is "supervised learning", so called because it
works on inputs labelled with the correct answer. When I was writing character
recognition software I paid people to sit at a terminal and label bitmaps
by character class (e.g. this one's a '2') - ground truth. Yes, they make
mistakes, they mislabel the training data sometimes. It is "supervised"
because correct answers are provided. It is "learning" because the function
is induced from data rather than known a priori. It is a form of programming
somewhat more specialised than writing a Fahrenheit-to-Centigrade converter,
or a bubblesort routine. It has its own set of concepts and specialised
terminology.
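
To make the single-parameter case concrete, here is a sketch of inducing
THRESHOLD from labelled examples (Python; the toy dataset and the
brute-force search over candidate thresholds are my own illustration):

def induce_threshold(examples):
    # examples: (income, defaulted) pairs, defaulted in {0, 1}; search the
    # one-parameter function space for the fewest training errors
    candidates = sorted({income for income, _ in examples})
    def errors(t):
        return sum((0 if income >= t else 1) != defaulted
                   for income, defaulted in examples)
    return min(candidates, key=errors)

train = [(55000, 0), (23000, 1), (80000, 0), (19000, 1), (40000, 1)]
print(induce_threshold(train))   # 55000 - zero errors on this training set

Richer function spaces just mean more parameters to search over; the
induction step is the same in spirit.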

Note by the way that "neural networks" are only one of many ways to approach
the induction and implementation of functions from training data. Support
Vector Machines are the state of the art, at least in some areas. Typically
the "learning algorithm" that finds an optimal function from the space of
functions searched by SVM's is simply a form of convex quadratic
programming, tailored somewhat to exploit characteristics of SVM's, but
still a topic well studied in optimization theory. I am skeptical that
calling this "reconfiguration of neural networks" constitutes a
clarification of terminology.

"Unsupervised learning" is different. It can still be thought of as the
induction of a function, so it is still "learning" in that sense, but in
this case there is no "supervision" in that there are no class-labelled (or
scalar/vector/tuple-value-labelled) training samples available. Often the task
is to induce naturally occurring categories. Often the task is to "learn" a
probability distribution function (e.g. the statistics of some data stream).
Applications include data reduction, hypothesis generation, hypothesis
testing, and forecasting, among others. Suppose, for example, you are given
the task of writing a data compression routine. The JPEG (Joint Photographic
Experts Group) Still Image Data Compression Standard is a case in point.
Compression consists of three stages: 1) A transformation of the image (the
discrete cosine transform in this case, back in the day before wavelets were
widely known), 2) A weighted quantization of the coefficients of the
transform (more bits for the lower frequency components), and 3) A lossless
entropy encoding of the coefficient bit-stream. The last stage, the entropy
encoder, can be either a Huffman code, or an Arithmetic code - the latter
keeps a running estimate of the probability distribution over a set of
contexts (the probability model can be as simple as the probability of a 0
or a 1 bit in the data stream). This estimate is "learned" from the data
stream as it goes by. There is no "supervision", but there is an a priori
set of constraints on the form of the probability distributions to be
learned - the function space to be searched.
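
A sketch of that last kind of "learning" - the running bit-probability
estimate an adaptive arithmetic coder keeps. This illustrates the idea
only; it is not the JPEG specification:

class AdaptiveBitModel:
    def __init__(self):
        self.counts = [1, 1]    # Laplace-smoothed counts of 0s and 1s
    def p_one(self):
        return self.counts[1] / (self.counts[0] + self.counts[1])
    def update(self, bit):
        self.counts[bit] += 1   # revise the estimate as the stream goes by

model = AdaptiveBitModel()
for bit in [1, 1, 0, 1, 1, 1, 0, 1]:
    model.update(bit)
print(model.p_one())            # has drifted toward the stream's density of 1s

There is no supervision anywhere in that loop; the a priori constraint is
just the form of the model (two counts) - the function space to be searched.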

So why lump these tasks together under "machine learning", and give them
names that both link and distinguish them - supervised and unsupervised
learning? For the same reason this is done in any field - they share a
common body of theory and techniques. Classification, regression, cluster
analysis - they are all forms of function induction. There are neural net
based classifiers and induction algorithms, regression analyzers and cluster
analyzers, nearest neighbor based classifiers and induction algorithms,
regression analyzers and cluster analyzers, and support vector machine
classifiers and induction algorithms, regression analyzers and cluster
analyzers, to name a few. Much of the underlying theory comes from outside
computer science, from statistical analysis, optimization theory, and
operator theory, but the requirements of building working programs, data
structures and algorithms, add another element. The term "machine learning"
is well chosen, and the subdivision into supervised and unsupervised machine
learning is well founded.

Anyway, here is what I wrote before.


/ Start Excerpt/
When "machine learning" is applied to practical problems, where the issue is
not to investigate, simulate, or in any way elucidate the behavior of
organisms, but just to build a mechanism that does some useful work (say
"read" the dollar amount on a check) two broad categories of algorithm are
available: supervised and unsupervised "learning". In the former there is,
as everybody knows, a "training set" consisting of input/desired-output
pairs - the task is to induce a mapping of input to output from available
examples; the hope is that the induced mapping generalizes to unseen inputs
in some useful way. When the set of desired outputs is discrete and finite,
say the set of decimal digits, the resulting input to output mapping, no
matter how it is constructed, is called a "classifier". It induces a
partition of input space into equivalence classes. So in this case
"supervised learning" boils down, mathematicaly, to the search for a "good"
partition of input space. Many such search algorithms exist, as everyone
knows, but due diligince raises questions: will the algorithm converge, how
much training is enough, how well does the induced partition generalize to
unseen inputs, can we give a mathematicaly precise answer to the question
"what tends to confirm an induction"? It is possible to give strong
guarantees about the performance on unseen data of a classifier induced from
a training set, provided certain conditions are met. There are many ways to
state the results in terms of conditions met; one of the simplest is:
assuming the training data and the unseen data are both generated by an
unknown but fixed stochastic process, then the probability of the
misclassification rate on unseen data exceeding epsilon is less than delta,
where delta increases as a function of the number of free parameters in the
space of input partitions. Note what the result does not say - it does not
require the training set to be "statistically representative" of the unseen
data, it requires only that the training set be generated by the same
stochastic process that generates the unseen data. Nor does it say the rate
of misclassification will be less than some fixed threshold; it says only
that this rate has some calculable probability of being less than a given
threshold. Note what the result does say - the probability of
misclassification rises as a function of the degrees of freedom in the set
of input space partitions available to the learning algorithm. It gives a
precise meaning to the problem of "overfitting": it is possible, given too
wide a latitude of input space partitions, to induce a partition that
"memorizes" the examples without generalizing at all. Generalization
requires constraints on the space of partitions considered. On the one hand
this space must be rich enough to make essential discriminations, on the
other hand it must be as economical as possible to allow induction to
generalize with guaranteed results.

Unsupervised "learning" amounts to characterizing statistical properties of
unlabelled input - the realm of cluster analysis, a search for categories,
where the categories are not prespecified, a search for "natural kinds",
lumpiness in the continuum of input space. There are, as everyone knows,
scads of cluster analysis algorithms. Kohonen's "self organizing maps" are
an interesting case of the "competitive learning" approach. In this scheme
there is a fixed sized pool of "representatives" R, where each
representative R[i] has both a value, typically a vector, and a neighborhood
of representatives to which it is connected. For each input vector v the
representative R[j] closest to v under some prespecified metric is found.
R[j]'s value is altered by some prespecified fraction to move it closer to
v, and the values of all of R[j]'s neighbors are also altered by some
usually smaller prespecified fraction to move closer to the value of v.
There are variations on the scheme. After convergence, if the algorithm
converges, the representatives R are topographically ordered in a way
representative of the distribution of the data - neighboring representatives
are also close under the metric over the input vector space even if the
values of neighboring representatives are not correlated initially. In
densely populated regions of input space the "map" preserves fine
distinctions, elsewhere distinctions become coarse.

In the supervised case the system's response to inputs is actively shaped
via training; in the unsupervised case the system's response to inputs
develops as a result of exposure to the input stream.
\End Excerpt\
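
For what it's worth, the competitive-learning scheme described in the
excerpt fits in a few lines. A sketch under that description (Python; the
1-D ring of representatives and all the constants are arbitrary choices of
mine):

import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(size=(10, 2))    # pool of representatives: 10 values in 2-space
ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}   # neighborhoods

def som_step(v, lr=0.1, lr_nbr=0.02):
    j = int(np.argmin(np.linalg.norm(R - v, axis=1)))  # closest representative
    R[j] += lr * (v - R[j])             # move the winner toward v
    for k in ring[j]:                   # neighbors move by a smaller fraction
        R[k] += lr_nbr * (v - R[k])

for _ in range(5000):
    som_step(rng.normal(size=2))

After enough inputs, neighbors on the ring also end up near one another in
the input space - the topographic ordering described above - even though
the values were uncorrelated initially.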


Michael
10/29/2004 10:29:53 PM
"Phil Sherrod" <phil.sherrod@REMOVETHISsandh.com> wrote in message 
news:o-6dnT695IvaWuLcRVn-pg@giganews.com...
>
> On 26-Oct-2004, "Stephen Harris" <cyberguard1048-usenet@yahoo.com> wrote:
>
>> He also explained why we couldn't just try to simulate
>> the actual evolution of Earth/humanity itself. There are zillions of
>> chance
>> events which produce zillions to the zillionth power of possible
>> permutations, one of which led to intelligent life.
>
> I certainly agree with the premise that it is impractical to simulate the
> actual evolution of Earth/humanity.  However, the conjecture that a chance
> event produced life is just that -- pure conjecture.
>

Well, the general umbrella of these forums is cognitive science.
So my post doesn't entertain a non-scientific explanation --> by God or 
Design.

So let's go back further, to the creation of the universe that allows
for the possibility of life as we know it. Our universe was created by
design, or by accident, which means randomly. I don't argue against
a religious belief, but I do argue that religion is not a proper subject
of science. So I leave it out of my post as not part of cognitive science.

Science does not claim that the universe does not have levels of what
we perceive to be organization, but rather that there was no initial purpose
that led to life. Before there was life, elements essential to life had to
be created in stars. Science says there was no guiding hand to
that, it just happened. And it is in that sense that I use the word random.

Supposing that there was a meteor that impacted earth and changed
the environment so drastically that dinosaurs became extinct and primates
had a chance to become dominant; that was a matter of chance.

Evolution is a theory, which means it is not a proven fact. Fossils are a
matter of fact, but the interpretation of how they got there is not a fact
and I don't claim that. I do claim that if you post on a scientific forum
then you use the cause and effect of science, which does not admit God as a
cause. I do not suppose that creation as a purposeful pattern would be
random, it would be Designed. I do suppose that if there is no Intelligent
Creator of the universe, that the initial conditions of the universe were
random and could change from one universe to another as in the theory
about new universes being born out of white/black hole evaporation.
These baby universes have their own laws and universal constants and
they have predictable behavior. That universe may or may not have laws
which are favorably attuned to the appearance of life.

Randomness occurs long before it is used to describe the large element of
chance involved in parents contributing genes to an egg.
To me the Universe has a Purpose, a master plan behind it, or it does
not and it is random in the sense of not having a Purpose driving a
pattern, which is the scientific assumption, not a fact or a proof.
My post is delivered within the context of that scientific assumption,
which is generally known to its readers.

I know that there are some who believe that God made the universe
and then allowed it to proceed in a manner which matches scientific
conclusion. That seems like a desperate compromise to me.

> In order to reach the point where evolution can kick in and natural
> selection can work requires reaching the point where you have
> self-replicating organisms that can pass on genetic information; until you
> reach that point, evolution -- by definition -- cannot operate.  The 
> problem
> is that the process of passing on genetic information -- DNA/RNA -- is an
> incredibly complex process involving the coordinated operation of many
> organic "machines" (DNA splitters, transcription RNA, translation,
> ribosomes, and transport mechanisms to mention a few).  It is an organic
> factory.
>
> No one has come even remotely close to explaining how a "chance event"
> turned inanimate compounds into a system with enough components and
> organization that it can self-replicate and pass on genetic information.
> You are welcome to take it on faith that chance events accomplished this,
> but it certainly does not rise to the level of a scientific fact.
>

This is a scientific theory, not a fact. Science has not and never will
prove that God does not exist. If God exists, then life is purposeful. If God does
not exist, then the conditions for life coming into existence are considered
random, and that is the scientific view which is used on scientific forums.
Science does not suppose that there was a Reason behind creating Life.
That does not mean there were no universal elements or properties involved.

For example, the decay of a radioactive isotope is considered random. But
that does not mean that there is no lawful physical process involved; it
does not decay by magic. How the universe came into existence according to
science and how it came into existence according to religion are hardly
reconcilable.

Unpredictably yours,
Stephen

> -- 
> Phil Sherrod
> (phil.sherrod 'at' sandh.com)
> http://www.dtreg.com  (decision tree modeling)
> http://www.nlreg.com  (nonlinear regression)
> http://www.NewsRover.com (Usenet newsreader)
> http://www.LogRover.com (Web statistics analysis) 


Stephen
10/30/2004 1:04:07 AM
"Phil Sherrod" <phil.sherrod@REMOVETHISsandh.com> wrote in message 
news:dbGdnfDtYcHnlx3cRVn-3g@giganews.com...
>
> On 27-Oct-2004, Wolf Kirchmeir <wwolfkir@sympatico.ca> wrote:
>
> The problem is that there is an enormous difference between a snowflake 
> that
> is created by chance and an organic factory that is self-replicating and
> able to pass on genetic information.  There are at least dozens -- and
> probably hundreds -- of cooperating processes and machines that have to 
> work
> together for even the simplest organism to reproduce.  Random collisions 
> and
> chemical reactions may produce compounds like amino acids, but getting 
> from
> amino acids to the simplest known self-reproducing organism is like going
> from bars of silicon and aluminum to a Pentium CPU.  And you can't use
> incremental evolutionary development to explain the process until
> reproduction is going.  You have (literally) a chicken and egg paradox.
>

Science has not done a very good job of explaining consciousness to date.
If it had, there would be fewer believers in various religions. I don't
deny this, but when I post on a scientific topic I use the scientific
conventions. The topic you bring up is argued on talk.origins.


Stephen
10/30/2004 1:11:23 AM
In article <tKdgd.17800$Qs6.1523401@news20.bellglobal.com>, Wolf 
Kirchmeir <wwolfkir@sympatico.ca> writes
>Stargazer wrote:
>> Wolf Kirchmeir wrote:
>>
>>>Stargazer wrote:
>[snip a number of clear answers to my questions - thanks. I think. :-)]
>> *SG*
>
>Your answers clear up some misconceptions on my part, but they also 
>show  terminological obfuscation on the part of artificial neural 
>network researchers.
>
>Throughout your explanation, the term "signal" is used ambiguously. It 
>sometimes seems to apply to an input to a single neuron, and sometimes 
>to a collection of inputs to a network of neurons. IMO this is 
>confusing. Very. It's a hierarchy error, which always causes trouble.
>
>Also, calling the calculated output of a NN a "training signal" because 
>it's compared to the desired outcome is confusing, at least to me, for 
>whom a "training signal" is a "signal that trains", ie, an input to the 
>NN. And the use of "signal" for both inputs and outputs is confusing, 
>since IMO an output is a signal to the experimenter, not the NN.
>
>All in all, my immediate impression is that workers in artificial NNs 
>don't have a clear conception of what they are trying to do. Not that 
>that is a bad thing - after all, it's early days yet, and one of the 
>functions of research is to clarify the questions one is trying to 
>answer. My comments as a pure outsider may or may not help clarify 
>vagueness. Either way, thinking about your explanations has been 
>interesting.
>
>
Another way of putting it is that the early ANN folk didn't know what 
was being done in the EAB back in the 30s, 40s and 50s (note that all of 
the former folks' work came out of those decades but they seem to have 
an uncanny knack of misrepresenting of just not understanding their 
sources). Nor did they understand the way that philosophy was going in 
the same period (most "AI" and "Cognitive Scientists" *still* appear to 
be pre 1929 Carnap or early Wittgensteinian). It appears to me that they 
took some basic "programming" algorithms which *simulated* (or at best 
controlled) some of the experimental schedules/equipment (in those early 
days it was largely switchboard and other telephonic paraphernalia) and 
just renamed their statistical models or descriptions of this "rule 
governed behaviour" something more catchy ie "Artificial Neural 
Networks" or "cell assemblies" (Hebb was always talking about a 
Conceptual Nervous System and he did it rather poorly relative to the 
efforts of Hull, Guthrie or Estes - he just said it all in more popular, 
familiar, intensional language, ensuring that more science-shy people 
lapped it up!). This propensity to generate misnomers and repackage, 
plagiarise or re-badge others' *empirical* work as something new and 
"algorithmic" or "analytic" simply through name changing allows them to 
sell a load of nonsense to the unwary who don't see this sleight of hand 
for what it is.  When they make out that what they have to say somehow 
captures what's essential about "cognitive" or "mental" life I just see 
fraud, something which I think is endemic within psychology, and has 
been for decades. It makes "Cognitive Science" a Ptolemaic monster, 
which in my view is far worse than the original, as this monster has no 
practical utility over the original behavioural work itself, and yet 
actually gets in the way of advancing that science by soaking up funding 
on grounds that it's closer to common sense folk psychology! Such people 
make quite ludicrous grant proposals, with fantastic promises, which 
make the realistic aims of real science look trivial in comparison. This 
just shapes up lying, and turns science into marketing. Such "leaders" 
take students and other naive folk back to ways of thinking which were 
abandoned well over a century ago. They're entrepreneurs!
-- 
David Longley
David
10/30/2004 1:27:01 AM
"Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message 
news:1GSfd.11528$Qs6.1183648@news20.bellglobal.com...
> Stephen Harris wrote:
> [...]
>>
>> A reference to standard usage in the literature:
>> http://www.science.mcmaster.ca/Psychology/becker/papers/beckerzemelHBTNN2ndEd.pdf
>>
>> "Unsupervised learning algorithms can be distinguished by the absence of 
>> any
>> supervisory feedback from the external environment. Often, however, there 
>> is
>> an implicit _internally-derived_ training signal. This training signal is 
>> based on
>> some measure of the quality of the network's internal representation. The 
>> main
>> problem in unsupervised learning research is to formulate a performance 
>> measure
>> or cost function for the learning, to generate this internal supervisory 
>> signal. The cost function is also known as the objective function, since 
>> it sets the objective for the learning process. ..."
>>
>>
>
> OK, but I find this conceptually fuzzy.
>
> Why should any external feedback be "supervised"? Is there a difference 
> between supervised and unsupervised external feedback?
>

I saw you in another post criticize the non-standard usage of terms in
different fields. How could it be otherwise?

Even in the same field terms are not always used synonymously.
There is no Grand Adjudicator who reads all papers, understands all papers,
decides which is the best term for all the fields to employ, decides when
"coined" words are inappropriate and then has the authority to prevent
publication in all cases where the GA deems the terminology does
not serve the common weal of conceptual clarity.

Watch out for Turing's usage of "computer" because he means a human. 


Stephen
10/30/2004 1:50:50 AM
"Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message 
news:_3tgd.22767$Qs6.1769700@news20.bellglobal.com...

> Firstly, that's not "ordinary technical usage", it's AI usage, apparently. 
> So why use the word "supervised"? I've been enlightened about the meaning 
> of "supervised" by Stephen Harris - I think it's a stupid term. It assumes 
> that there is some other non-supervised form of learning, which in turn 
> assumes that learning is something other than a change in behaviour...

Supervised learning means there are external constraints applied
to the processing of the network while it is processing which
influences the eventual output.

Unsupervised learning means that the network does not receive
external inputs while it is processing. The processing works only
with the internal structure of the network, then produces output.

Surely this describes two different situations/outputs. It seems to me that
you are claiming that because ultimately a programmer constructs
the rule for the network, and that ultimately a programmer constructs
the rules for a bias which is to be applied to the operation of that network
which results from externally monitoring that network, that these
conditions/outputs are the same. Whereas the external output generated is
going to be different because they don't have the same rules. Maybe you
assumed these situations had the same rules when you went into that
identical input stuff.

> which in turn assumes that learning is something other than a change in 
> behaviour...

It is assuming two different processes (network internal only, and
network+bias, where bias is the additional external constraint imposed)
will produce two different outputs or behaviors, which is exactly a change
in behavior and is what is expected. The supervised method has feedback
with the network while the network is running and alters its eventual
'conclusion'. That conclusion is not the same conclusion as the network
reaches without outside intervention, as when the network runs by itself
on only its own initial rules.

These are two different methods and achieve different results with the
unsupervised usually taking much longer but "learning" more accurately. 


Stephen
10/30/2004 2:20:15 AM
In article <_rCgd.15358$6q2.11265@newssvr14.news.prodigy.com>, Stephen 
Harris <cyberguard1048-usenet@yahoo.com> writes
>
>"Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message
>news:1GSfd.11528$Qs6.1183648@news20.bellglobal.com...
>> Stephen Harris wrote:
>> [...]
>>>
>>> A reference to standard usage in the literature:
>>> 
>>>http://www.science.mcmaster.ca/Psychology/becker/papers/beckerzemelHBT
>>>NN2ndEd.pdf
>>>
>>> "Unsupervised learning algorithms can be distinguished by the absence of
>>> any
>>> supervisory feedback from the external environment. Often, however, there
>>> is
>>> an implicit _internally-derived_ training signal. This training signal is
>>> based on
>>> some measure of the quality of the network's internal representation. The
>>> main
>>> problem in unsupervised learning research is to formulate a performance
>>> measure
>>> or cost function for the learning, to generate this internal supervisory
>>> signal. The cost function is also known as the objective function, since
>>> it sets the objective for the learning process. ..."
>>>
>>>
>>
>> OK, but I find this conceptually fuzzy.
>>
>> Why should any external feedback be "supervised"? Is there a difference
>> between supervised and unsupervised external feedback?
>>
>
>I saw you in another post criticize the non-standard usage of terms in
>different fields. How could it be otherwise?
>
>Even in the same field terms are not always used synonymously.
>There is no Grand Adjudicator who reads all papers, understands all papers,
>decides which is the best term for all the fields to employ, decides when
>"coined" words are inappropriate and then has the authority to prevent
>publication in all cases where the GA deems the terminology does
>not serve the common weal of conceptual clarity.
>
>Watch out for Turing's usage of "computer" because he means a human.
>
>

Watch out as it might look like you might use that anarchism within 
"cognitive science" to concoct whatever creative bullshit you wish. But 
don't mind me, I'm probably just *older* than you, and therefore more 
"senile".
-- 
David Longley
David
10/30/2004 2:54:23 AM
Glen, the compound post was getting pretty long and jumbled, so I
want to break it into pieces.  Here's the first part.

"Glen M. Sizemore" <gmsizemore2@yahoo.com> wrote in message
news:20041029142512.174$Sc@news.newsreader.com...

GS: Say there are synapses between "afferent" neurons from the
sensory systems and "efferents" to central pattern generators
("afferent" and "efferents" appear in quotes because they are in the
CNS). When the efferents fire (and they do so spontaneously
sometimes) they produce movements. The afferents from the sensory
systems have, at first, little effect on the firing of the
efferents. In addition to these considerations, there are hardwired
detectors that send signals to, say, the nucleus accumbens, whenever
a particular stimulus occurs (the reinforcer).

The NAcc sends diffuse projections to the area where the
aforementioned afferent/efferent synapses are. Now, the
post-synaptic tissue may be altered when certain chemicals are
dumped into the area, IF they have recently "fired." And they are
altered in such a way as to be sensitive to the afferents firing.

The rest should be clear - the animal does something as the result
of the efferents firing and that something results in the appearance
of the reinforcer. The hard-wired system fires and this causes the
release of the chemical that alters the synapse. When the stimuli
that fired the sensory system recur, they are more likely to fire
the efferents that produced the response. This, then, eventually,
produces a discriminated operant.

So, the "critical" synapses are modified by the local "reinforcer
signal," but the sequence of events is possible (on a sustained
basis) only when there is an external contingency, for it is the
external contingency which results in the proper timing of the
reinforcement and the behavior in question. The "local" events would
be the same in the absence of a contingency (i.e.,
response-independent delivery) and behavior would get reinforced.
The behavior would be whatever happened to accidentally be
temporally contiguous with the reinforcer. Such behavior Skinner
called "superstitious," and he pointed out that such behavior should
drift over time because, obviously, there is no contingency that
forces continued temporal relations between a particular class of
responses and the reinforcer.

Bill: (new)

Ok.  Let me play that back to you.

Some neurons have inputs from sensory circuitry and outputs to motor
control circuitry.  When those neurons in the middle fire for
whatever reason, they cause movements, aka behavior.

There is a reinforcement mechanism that can increase the sensitivity
of recently fired cells to recently activated inputs.  This
mechanism acts on the neurons mentioned above, probably by flooding
the area with some "reinforcing chemicals", and can be triggered by
various sensory inputs, some of which are pre-defined genetically.

Suppose that some particular behavioral output is coupled to (a
sensory input that will trigger) the reinforcing mechanism.  Then
whenever that behavior occurs, the sensitivity of all cells recently
fired to their recently activated inputs will be increased.  Since
the reinforcement is triggered by a particular behavior, the cells
affected will be mostly those involved in producing that behavior.
The increased sensitivity makes those cells more likely to fire in
the future, increasing the probability of that behavior rather than
others.

You didn't mention it, but to make this work right we also need to
posit some sort of offsetting reduction in sensitivity for
unreinforced connections.  If sensitivity can only increase,
eventually even randomly activated pathways will reach maximum
sensitivity and everything will be trying to fire at once.  So let's
say that every time a cell fires, its sensitivity to all inputs is
reduced a bit.  We might also have sensitivity of a particular
synapse drop a bit when the presynaptic cell fires.  Reinforcement
more than makes up for these losses, but connections that are seldom
or never reinforced tend to weaken and eventually drop out of the
picture.
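
In toy numerical form, the local rule we have been describing looks
something like this (a sketch only; the constants and the exact order of
operations are arbitrary choices of mine):

import numpy as np

def local_update(w, pre_active, cell_fired, reinforced,
                 gain=0.5, fire_decay=0.05):
    # one cell's input sensitivities w, updated from purely local signals:
    # its own firing, which inputs were recently active, and whether the
    # diffuse "reinforcer chemical" is present
    w = w.copy()
    if cell_fired:
        w *= 1.0 - fire_decay          # offsetting reduction on every firing
        if reinforced:
            w += gain * pre_active     # strengthen recently active inputs
    return w

w = np.full(4, 0.2)
pre = np.array([1.0, 1.0, 0.0, 0.0])   # inputs 0 and 1 were recently active
print(local_update(w, pre, cell_fired=True, reinforced=True))
# the reinforced active inputs rise; the idle ones only decay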

Now suppose that the reinforcement depends not just on the behavior,
but on the behavior occurring under some specific conditions,
conditions presumably detectable as some combination of inputs from
the sensory systems.

Exactly the same sequence occurs, except that now only a selected
subset of the sensory inputs to the central cells is reinforced...
just those inputs that correspond to the condition to be
distinguished.  Other inputs occuring at other times are weakened,
and cease to cause the behavior.  Eventually we see that the
behavior occurs under only the condition we reinforced, and we say
that the organism has learned a discrimination.

If reinforcement occurs without a connection to a particular
behavior, whatever happened to be going on at the time will be
reinforced, and we can see temporary increases in some behaviors
over others.  Since these behaviors are more frequent, they are more
likely than others to be accidentally reinforced again, and such
superstitious responding may build to considerable strength.
However without an actual connection, without an actually contingent
relationship, these spurious reinforcements will eventually fail to
happen, other things will be reinforced, and the original
superstition will be supplanted by another, or by some more stable
behavior driven by actual contingencies.

------------------------------------------------------

In my restatement I omitted neurological detail that did not seem
relevant and added a few refinements, but generally tried to keep
close to your own explanation.  I hope you can agree that my version
is acceptable, that it conveys essentially the same message that you
intended, and that it demonstrates that I really do understand what
you are talking about.

If that's not the case, I'd appreciate another iteration in which
you explain where you think I got it wrong, because I'd like to get
at least part of this complicated topic behind us, so there is at
least a small basis of common understanding to build on.

If we can get that far, there are a couple more refinements I'd like
to introduce...

------------------------------------------------------

Anyway.

I accept that something like this probably does happen in the brain.
We left out a lot, but so far as it goes it seems a plausible gloss
over how operant conditioning might be implemented.

But it is not enough.

The setup we described is just a minor variant of a perceptron
learning model.  You've reinvented the Perceptron, and inherited all
its well established limitations.... most importantly, with such a
scheme you can only discriminate linearly separable regions of the
input space.

Which means you are critically dependent on getting the right inputs
from the sensory channels, so that the discriminations you need to
make at this level are linearly separable.
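
The classic demonstration, for anyone who wants to watch it fail (my own
toy code, using the standard perceptron update rule):

import numpy as np

def train_perceptron(X, y, epochs=100):
    w = np.zeros(X.shape[1] + 1)            # bias plus one weight per input
    for _ in range(epochs):
        for x, t in zip(X, y):
            out = 1 if w[0] + w[1:] @ x > 0 else 0
            w[0] += t - out                 # reward/penalize the bias
            w[1:] += (t - out) * x          # and the active input weights
    return w

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
for name, y in [("AND", [0, 0, 0, 1]), ("XOR", [0, 1, 1, 0])]:
    w = train_perceptron(X, np.array(y))
    print(name, [1 if w[0] + w[1:] @ x > 0 else 0 for x in X])
# AND, being linearly separable, comes out right; XOR never does, no
# matter how many epochs you allow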

To get those inputs we need many levels of other processing, and we
need explanations of how those other levels come into existence.

We can pass some of the problem over to evolution, and say the
processing is innate.

But this still leaves a lot unexplained, a lot of functions that are
unlikely to be innate, but somehow do develop in both human and
pigeon brains.

You address this to some extent later in this post, so I'll defer
further discussion until we get to that part.

Bill


Bill
10/30/2004 7:34:48 AM
> "Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message
> news:1GSfd.11528$Qs6.1183648@news20.bellglobal.com...

> > Why should any external feedback be "supervised"? Is there a difference
> > between supervised and unsupervised external feedback?

Just noticed this question that I must have missed when originally
posted...

If there is external feedback, that constitutes supervision.
"Supervised" means "relies on external feedback".   As opposed to
"unsupervised" where there is no feedback available.

An unsupervised process still has inputs from external sources.
It's just that none of those inputs are "feedback", none of them are
conditional on or functionally derived from its prior outputs.

An unsupervised process must make its own decisions about what sort
of outputs to produce, usually based on trying to establish some
sort of desirable statistical relationship to its inputs.   This may
involve local "feedback loops" of various kinds, in which for
example the last output is used as input to some sort of parameter
adjustment function.   But the point is that there is no external
process which evaluates the output and feeds back information about
its validity or appropriateness.  There is no way for an
unsupervised process to evaluate the external effect of its outputs.
It is operating "open loop", blind to the external consequences of
its actions.
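
In toy Python terms (my own notation, nothing canonical) the contrast
comes down to this: the supervised step consumes an externally supplied
error, while the unsupervised step adjusts only toward the statistics
of its inputs and never hears about any consequence of its output.

def supervised_step(w, x, target, lr=0.1):
    out = sum(wi * xi for wi, xi in zip(w, x))
    error = target - out                    # external feedback
    return [wi + lr * error * xi for wi, xi in zip(w, x)]

def unsupervised_step(w, x, lr=0.1):
    # drift toward the input itself: a purely statistical
    # relationship to the data, no evaluation of any output
    return [wi + lr * (xi - wi) for wi, xi in zip(w, x)]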

There are a lot of things that need feedback to succeed.  You can't
use an unsupervised process to learn how to do anything in
particular, for that you need some feedback.  Feedback is such a
powerful tool, and goal directed processes are so central to our
thinking about "intelligence" and intelligent behavior, that it
might seem there is no place in the picture for unsupervised
processes.

But unsupervised processes can be very useful in organising inputs
and sorting them into categories, activities that depend only on the
data itself, not on the eventual use of the data for any particular
purpose.

Imagine that someone gives you a huge box of miscellaneous strange
objects, all jumbled together.   Even if you don't know what any of
them are or what you might do with them, a reasonable first thing to
do might be to sort them into piles of similar-looking things, to
see how many different kinds there are, and how many of each.   And
then you might start looking for bits that seem to fit together
nicely, and put those next to each other, or assemble them and look
for possible larger assemblies.... this is unsupervised activity.
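
In machine terms that first sorting pass is just clustering.  A
bare-bones k-means sketch (all details mine, for illustration) sorts
the box into k piles by similarity alone, with nobody saying what the
piles are for:

import random

def kmeans(objects, k, iters=20):
    # objects: a list of equal-length tuples of numbers
    centers = random.sample(objects, k)
    piles = []
    for _ in range(iters):
        piles = [[] for _ in range(k)]
        for obj in objects:               # put each object on the pile
            nearest = min(range(k), key=lambda j: sum(
                (a - b) ** 2 for a, b in zip(obj, centers[j])))
            piles[nearest].append(obj)
        for j, pile in enumerate(piles):  # recentre each pile on its members
            if pile:
                centers[j] = tuple(sum(c) / len(pile) for c in zip(*pile))
    return piles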

Then a supervisor comes along and explains what is to be
accomplished and starts teaching you how to use them.  Or just
starts giving encouraging smiles when he sees you doing something
good, so that you eventually figure it out for yourself.

But no matter what it turns out you can do with them eventually, you
are almost certainly better prepared for it by having gone through
that unsupervised stage of organizing the material first and
learning what you have to work with, rather than trying to dig
things out of an unfamiliar jumble when suddenly confronted with a
supervised task.

Bill Modlin





Bill
10/30/2004 8:26:13 AM
In article <Xdidnc_kNbnquB7cRVn-tA@metrocastcablevision.com>, Bill 
Modlin <modlin1@metrocast.net> writes
>
>> "Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message
>> news:1GSfd.11528$Qs6.1183648@news20.bellglobal.com...
>
>> > Why should any external feedback be "supervised"? Is there a
>> > difference between supervised and unsupervised external feedback?
>
>Just noticed this question that I must have missed when originally
>posted...
>
>If there is external feedback, that constitutes supervision.
>"Supervised" means "relies on external feedback".   As opposed to
>"unsupervised" where there is no feedback available.
>
>An unsupervised process still has inputs from external sources.
>It's just that none of those inputs are "feedback", none of them are
>conditional on or functionally derived from its prior outputs.

I thought I should ask: have you looked into the "differences" between 
classical and operant conditioning, Bill? Have you entertained the 
possibility that you may be missing something through skipping history?

If, on the other hand, you consider there's some nobility/mileage in 
neologizing and plagiarizing, perhaps you can tell us why, and why it 
isn't a delinquent sign of modern times ;-).

PS. Incidentally, have you looked into the problematic nature of 
intensional/referential opacity or bought Catania yet?

Kind regards,

-- 
David Longley
David
10/30/2004 8:48:40 AM
David Longley wrote:
> In article <tKdgd.17800$Qs6.1523401@news20.bellglobal.com>, Wolf 
> Kirchmeir <wwolfkir@sympatico.ca> writes
> 
>> Stargazer wrote:
>>
>>> Wolf Kirchmeir wrote:
>>>
>>>> Stargazer wrote:
>>
>> [snip a number of clear answers to my questions - thanks. I think. :-)]
>>
>>> *SG*
>>
>>
>> Your answers clear up some misconceptions on my part, but they also 
>> show  terminological obfuscation on the part of artificial neural 
>> network researchers.
>>
>> Throughout your explanation, the term "signal" is used ambiguously. It 
>> sometimes seems to apply to an input to a single neuron, and sometimes 
>> to a collection of inputs to a network of neurons. IMO this is 
>> confusing. Very. It's a hierarchy error, which always causes trouble.
>>
>> Also, calling the calculated output of a NN a "training signal" 
>> because it's compared to the desired outcome is confusing, at least to 
>> me, for whom a "training signal" is a "signal that trains", ie, an 
>> input to the NN. And the use of "signal" for both inputs and outputs 
>> is confusing, since IMO an output is a signal to the experimenter, not 
>> the NN.
>>
>> All in all, my immediate impression is that workers in artificial NNs 
>> don't have a clear conception of what they are trying to do. Not that 
>> that is a bad thing - after all, it's early days yet, and one of the 
>> functions of research is to clarify the questions one is trying to 
>> answer. My comments as a pure outsider may or may not help clarify 
>> vagueness. Either way, thinking about your explanations has been 
>> interesting.
>>
>>
> Another way of putting it is that the early ANN folk didn't know what 
> was being done in the EAB back in the 30s, 40s and 50s (note that all of 
> the former folks' work came out of those decades but they seem to have 
> an uncanny knack of misrepresenting or just not understanding their 
> sources). Nor did they understand the way that philosophy was going in 
> the same period (most "AI" and "Cognitive Scientists" *still* appear to 
> be pre 1929 Carnap or early Wittgensteinian). It appears to me that they 
> took some basic "programming" algorithms which *simulated* (or at best 
> controlled) some of the experimental schedules/equipment (in those early 
> days it was largely switchboard and other telephonic paraphernalia) and 
> just renamed their statistical models 

Yeah, sure, like Markov wasn't studying these structures way back in 
1906.  I coded my first NN after studying Markov chains, i didn't even 
know about Hull et al.  There has always been a close association 
between engineering (computers) and the math departments.  Did the 
psychology departments inform that process ... maybe ... but maybe not 
as much as it appears to you.  Maybe Minsky could tell us what primarily 
informed *his* research ....

patty


> or descriptions of this "rule
> governed behaviour" something more catchy ie "Artificial Neural 
> Networks" or "cell assemblies" (Hebb was always talking about a 
> Conceptual Nervous System and he did it rather poorly relative to the 
> efforts of Hull, Guthrie or Estes - he just said it all in more popular, 
> familiar, intensional language, ensuring that more science-shy people 
> lapped it up!). This propensity to generate misnomers and repackage, 
> plagiarise or re-badge others' *empirical* work as something new and 
> "algorithmic" or "analytic" simply through name changing allows them to 
> sell a load of nonsense to the unwary who don't see this sleight of hand 
> for what it is.  When they make out that what they have to say somehow 
> captures what's essential about "cognitive" or "mental" life I just see 
> fraud, something which I think is endemic within psychology, and has 
> been for decades. It makes "Cognitive Science" a Ptolemaic monster, 
> which in my view is far worse than the original, as this monster has no 
> practical utility over the original behavioural work itself, and yet 
> actually gets in the way of advancing that science by soaking up funding 
> on grounds that it's closer to common sense folk psychology! Such people 
> make quite ludicrous grant proposals, with fantastic promises, which 
> make the realistic aims of real science look trivial in comparison. This 
> just shapes up lying, and turns science into marketing. Such "leaders" 
> take students and other naive folk back to ways of thinking which were 
> abandoned well over a century ago. They're entrepreneurs!
patty
10/30/2004 10:52:10 AM
Michael Olea wrote:
> in article _3tgd.22767$Qs6.1769700@news20.bellglobal.com, Wolf
> Kirchmeir at wwolfkir@sympatico.ca wrote on 10/29/04 8:12 AM:
>
> > Bill Modlin wrote:
>
> [snip]
>
> > > Obviously you are not aware of the ordinary technical usage
> > > of the word "supervised" in this context.   Go read a book
> > > or something, I'm getting tired of trying to educate you.
> > > Hint: it is still just as "supervised" when natural forces
> > > do the supervision.
> >
> > Firstly, that's not "ordinary technical usage", it's AI usage,
> > apparently. So why use the word "supervised"? I've been enlightened
> > about the meaning of "supervised" by Stephen Harris - I think it's a
> > stupid term. It assumes that there is some other non-supervised
> > form of learning, which in turn assumes that learning is something
> > other than a change in behaviour. If so, just what is it? Are you
> > claiming that learning "really" is something other than a change in
> > behaviour, and that changes in behaviour are only "incidentally"
> > signs that learning has taken place? If so, you have relocated
> > learning somewhere else, and should use a different term. I vote
> > for "reconfiguration of neural networks" I don't have enough Latin
> > or Greek to make up a nice technical word to replace that phrase,
> > though - sorry about that.
>
> Below is an excerpt from something I posted last december. It might
> help clarify the terminology, the technical distinction between
> supervised and unsupervised learning algorithms. But before
> re-posting that excerpt I'll take another stab at clarifying some of
> the notions.
[big snip]

Excellent post (although I doubt some people here will understand
all its practical and theoretical implications).

*SG*


Stargazer
10/30/2004 1:51:34 PM
Stephen Harris wrote:
>
> Supervised learning means there are external constraints applied
> to the processing of the network while it is processing which
> influences the eventual output.
>
> Unsupervised learning means that the network does not receive
> external inputs while it is processing. The processing works only
> with the internal structure of the network, then produces output.

Just a small observation here. In unsupervised learning, the network
in fact receives external input (otherwise it would not do much). It
does not receive a user-supplied output (training signals or training
sets) with which to compare its own output in order to derive error
correction. Supervised systems, on the other hand, receive these
user-supplied sets, which are employed (usually) as an error correction
to be back-fed (at least in traditional multilayer perceptrons).
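
For instance, in a bare-bones two-layer perceptron the back-fed
correction might look like the Python sketch below (simplifications
all mine: one hidden layer, sigmoid units, no biases; a sketch, not
anyone's canonical implementation).  Note that the user-supplied
target enters only as an error signal propagated backwards:

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_step(x, target, W1, W2, lr=0.5):
    # forward pass: input -> hidden -> output
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    y = sigmoid(sum(w * hi for w, hi in zip(W2, h)))
    # backward pass: the error correction derived from the target
    delta_y = (target - y) * y * (1 - y)
    delta_h = [delta_y * W2[j] * h[j] * (1 - h[j]) for j in range(len(h))]
    W2 = [w + lr * delta_y * h[j] for j, w in enumerate(W2)]
    W1 = [[w + lr * delta_h[j] * xi for xi, w in zip(x, row)]
          for j, row in enumerate(W1)]
    return W1, W2, y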

> "Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message
> news:_3tgd.22767$Qs6.1769700@news20.bellglobal.com...
>
> > Firstly, that's not "ordinary technical usage", it's AI usage,
> > apparently. So why use the word "supervised"? I've been enlightened
> > about the meaning of "supervised" by Stephen Harris - I think it's
> > a stupid term. It assumes that there is some other non-supervised
> > form of learning, which in turn assumes that learning is something
>> > other than a change in behaviour.

Learning as a change in behavior is a definition under the behaviorist
rationale. This assumption leads to the whole well-known theoretical
edifice. However, learning is defined differently by the neuroscientist.
Thus, it may be "stupid" from a behaviorist point of view, but it
is not from neuroscience. It doesn't seem wise to criticize someone
else's definitions. You can accept them or reject them, or you can (at most)
point out that such definitions may lead to theoretical/conceptual
dead-ends or empirically flawed models, which is not the case
regarding supervised/unsupervised learning.

*SG*


Stargazer
10/30/2004 2:37:33 PM
Stephen Harris wrote:
[...]
> So lets go back further, to the creation of the universe that allows
> for the possibility of life as we know it. Our universe was created by
> design, or by accident which means randomly. 

No, "by accident" does mean "randomly." Even accidents are governed by 
the rules, and the consequnecs of the accidents are certainly not random.


Wolf
10/30/2004 2:51:52 PM
Bill Modlin wrote:

[...]
> Now suppose that the reinforcement depends not just on the behavior,
> but on the behavior occurring under some specific conditions,
> conditions presumably detectable as some combination of inputs from
> the sensory systems.

At this point, you are actually describing the discrimination of 
contingencies.

Just tho't you should know.


Wolf
10/30/2004 2:58:04 PM
"Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message 
news:OSNgd.41614$rs5.1279258@news20.bellglobal.com...
> Stephen Harris wrote:
> [...]
>> So lets go back further, to the creation of the universe that allows
>> for the possibility of life as we know it. Our universe was created 
>> by
>> design, or by accident which means randomly.
>
> No, "by accident" does mean "randomly." Even accidents are governed by 
> the rules, and the consequnecs of the accidents are certainly not 
> random.

If you say that "consequences of accidents are certainly not random", 
then our Universe did not come into being randomly either, for 
"accidents" are also consequences. 


JPL
10/30/2004 3:17:27 PM
JPL Verhey wrote:

> "Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message 
> news:OSNgd.41614$rs5.1279258@news20.bellglobal.com...
> 
>>Stephen Harris wrote:
>>[...]
>>
>>>So lets go back further, to the creation of the universe that allows
>>>for the possibility of life as we know it. Our universe was created 
>>>by
>>>design, or by accident which means randomly.
>>
>>No, "by accident" does mean "randomly." Even accidents are governed by 
>>the rules, and the consequnecs of the accidents are certainly not 
>>random.
> 
> 
> If you say that "consequences of accidents are certainly not random", 
> then our Universe did not come into being randomly either, for 
> "accidents" are also consequences. 
> 

Ahhh .. the search for an a priori.  Should parallel lines meet at 
infinity?  ... if i draw that point on my page, it exists ... if not, 
then it does not ... hence I am the a priori of my page.  God creates 
man, or man creates god ... if you are free, you can choose the a 
priori on your page.

patty


patty
10/30/2004 3:56:27 PM
Bill Modlin wrote:

>>"Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message
>>news:1GSfd.11528$Qs6.1183648@news20.bellglobal.com...
> 
> 
>>>Why should any external feedback be "supervised"? Is there a
>>>difference between supervised and unsupervised external feedback?
> 
> 
> Just noticed this question that I must have missed when originally
> posted...
> 
> If there is external feedback, that constitutes supervision.
> "Supervised" means "relies on external feedback".   As opposed to
> "unsupervised" where there is no feedback available.

This sounds like self-organisation to me.

> An unsupervised process still has inputs from external sources.
> It's just that none of those inputs are "feedback", none of them are
> conditional on or functionally derived from its prior outputs.

In other words, if a cat wanders about the yard and sees a bird it 
hasn't flushed out of hiding after hearing it, and responds, then that's an 
example of an unsupervised process? Whereas if it hears the bird, and 
flushes it out of hiding, then seeing the bird is a supervised process?

> An unsupervised process must make its own decisions about what sort
> of outputs to produce, usually based on trying to establish some
> sort of desirable statistical relationship to its inputs.   This may
> involve local "feedback loops" of various kinds, in which for
> example the last output is used as input to some sort of parameter
> adjustment function.   But the point is that there is no external
> process which evaluates the output and feeds back information about
> its validity or appropriateness.  There is no way for an
> unsupervised process to evaluate the external effect of its outputs.
> It is operating "open loop", blind to the external consequences of
> its actions.

Since AFAIK the process you describe is exactly what happens when a NN 
is adjusted as a consequence of some feedback, I fail to see any real 
difference. IOW, the NN can't tell whether an external input is fed back 
or not. It's just external input. Besides, you are describing a process 
that depends on local (neuron-to-neuron) correlations, which cannot, by 
your own admission even, "know" anything about any external source of 
inputs. IOW, this process occurs whether or not it's initiated and 
subsequently "adjusted" by external inputs. So it seems to me that 
"supervised" and "unsupervised" refer to  different levels of the same 
process. It looks very much that "supervised" means "global", and 
"unsupervised" means  "local."  If that's not the intention, the concept 
is confused, to put it gently.

> There are a lot of things that need feedback to succeed.  You can't
> use an unsupervised process to learn how to do anything in
> particular, for that you need some feedback. 

Not true, see below: repetition is sometimes enough.

> Feedback is such a
> powerful tool, and goal directed processes are so central to our
> thinking about "intelligence" and intelligent behavior, that it
> might seem there is no place in the picture for unsupervised
> processes.

IMO this is a red herring. Lets leave intelligence out of it for the moment.

> But unsupervised processes can be very useful in organising inputs
> and sorting them into categories, activities that depend only on the
> data itself, not on the eventual use of the data for any particular
> purpose.

What data? Where does the NN get them from?  It's at this point that the 
distinction seems to me to get really confusing. If the NN is operating 
on the data (eg, sorting it), then the data are external to the NN. The 
NN accepts them as input and sends the categories (or sorted data or 
whatever) as output. Whether the data are derived from external feedback 
seems to me to matter not at all - data are data.

Or are you positing some "internal data"? That's a concept borrowed from 
computers, and even in that realm it's confusing. All data are external 
to the algorithm that processes them, whether they are written into the 
program code or copied from some other source makes no difference whatever.

I can see using the algorithm code itself as data, i.e., changing it. Is 
that what you mean? But even in that case, the algorithm must treat the 
data as external - it must either process a copy of itself, or it must 
be partitioned so that subroutines within it can treat other subroutines 
as data. Either way, the data processed is external to the algorithm.

Or do you mean the properties of the elements and connections of the NN? 
If so, we are back where we started: local correlations between elements 
of the NN, at which level I can see no difference between supervised and 
unsupervised as described by you and others.

So my critique amounts to this: un/supervision is a confused concept.

> Imagine that someone gives you a huge box of miscellaneous strange
> objects, all jumbled together.   Even if you don't know what any of
> them are or what you might do with them, a reasonable first thing to
> do might be to sort them into piles of similar-looking things, to
> see how many different kinds there are, and how many of each.   And
> then you might start looking for bits that seem to fit together
> nicely, and put those next to each other, or assemble them and look
> for possible larger assemblies.... this is unsupervised activity.
> 
> Then a supervisor comes along and explains what is to be
> accomplished and starts teaching you how to use them.  Or just
> starts giving encouraging smiles when he sees you doing something
> good, so that you eventually figure it out for yourself.
>
> But no matter what it turns out you can do with them eventually, you
> are almost certainly better prepared for it by having gone through
> that unsupervised stage of organizing the material first and
> learning what you have to work with, rather than trying to dig
> things out of an unfamiliar jumble when suddenly confronted with a
> supervised task.

This example is one of global functioning. Moreover, I must already have 
been trained in sorting in order to do at least this much. When and how 
was that training done? Or was I just made that way?

Also, "unsupervised" seems to include the notion of self-organisation, 
as mentioned earlier.

And if this is how it is supposed to work _within_ a NN, then the 
neurons must themselves be a kind of network. That's an idea that I've 
been mulling over the last couple of days, but if it turns out to make 
sense, it will probably make nonsense of un/supervised.

The discussion of NNs in terms of un/supervised learning, statistical 
patterns in the data, local correlations, etc, ignores what to my mind 
is at least equally important: the topology of the connections themselves.

The discussion also ignores what EAB has in fact demonstrated, that 
external feedback is _not_ required to produce learning. Mere repetition 
of external stimuli is enough. EG, make a NN that will produce a 
specified output, eg, any given number of repeated  outputs from a 
single input. (That's purely a matter of architecture.)

Now add features such as a second input source, activation potential 
changes at the connections, and so on. We now have a network that can 
learn to produce the output from a different range of inputs than I 
started with. That is, suppose the second input source is connected 
through inactive nodes. Combine inputs from sources through those nodes 
until the relevant connections are activated as desired, and then the 
network will respond the same way to the second input as to the first. 
Note that this training does _not_ require feedback, just repeated 
inputs from two sources. But I can't do this unless the architecture of 
the NN allows it.
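
A toy version of what I mean, in Python (numbers mine, purely
illustrative): repeated pairing alone raises the weight on the second
input until it fires the response by itself, and nothing here ever
feeds an output back in.

THRESHOLD = 1.0

def responds(w1, w2, in1, in2):
    return (w1 * in1 + w2 * in2) >= THRESHOLD

w1, w2 = 1.0, 0.0              # only the first input works at the start
for trial in range(20):
    if responds(w1, w2, 1, 1): # present the two inputs together
        w2 += 0.1              # mere repetition sensitizes the node
print(responds(w1, w2, 0, 1))  # True: the second input now works alone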

Now, if I understand you right, this is an example of unsupervised 
learning. Yet because it depends on external inputs it's supervised. Or 
is it both at the same time? Why?

Now add a small complication: two NNs A and B. Let A contain some 
unactivated connections. Let A initially respond only to external 
inputs, as above. Let B be an oscillator that produces outputs at 
intervals. Let B make a connection with A through some inactive node. 
Let A respond to the external input. Let some of those inputs (depending 
on relative timing) combine with inputs from B through the inactive node 
in A. Increase the sensitivity of that node with each repetition of the 
combined inputs, as above. At some point, A will begin to produce 
outputs in time with the cyclic outputs of B. -- Here too, there has 
been no feedback, yet external inputs are crucial to the changes in A's 
behaviour, since without them the inputs from B could have no effect.

Is this supervised learning or not? Why?
Wolf
10/30/2004 4:28:10 PM
Stargazer wrote:

> Stephen Harris wrote:
> 
>>Supervised learning means there are external constraints applied
>>to the processing of the network while it is processing which
>>influences the eventual output.
>>
>>Unsupervised learning means that the network does not receive
>>external inputs while it is processing. The processing works only
>>with the internal structure of the network, then produces output.
> 
> 
> Just a small observation here. In unsupervised learning, the network
> in fact receives external input (otherwise it would not do much). It
> does not receive a user-supplied output (training signals or training
> sets) with which to compare its own output in order to derive error
> correction. Supervised systems, on the other hand, receive this
> user-supplied sets, which is employed (usually) as an error correction
> to be back-fed (at least in traditional multilayer perceptrons).
> 
>

If that is the case, the supervised and unsupervised NNs must have 
different architectures. Or so it seems to me. In that case, supervised 
NNs must contain unsupervised sub-NNs. Or?

Wolf
10/30/2004 4:31:25 PM
"Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message
news:CYNgd.41639$rs5.1280530@news20.bellglobal.com...
> Bill Modlin wrote:
>
> [...]
> > Now suppose that the reinforcement depends not just on the behavior,
> > but on the behavior occurring under some specific conditions,
> > conditions presumably detectable as some combination of inputs from
> > the sensory systems.
>
> At this point, you are actually describing the discrimination of
> contingencies.
>
> Just tho't you should know.

Which is what both of us said, if you would bother to read before
responding.

Just tho't you should know...



Bill
10/30/2004 6:37:37 PM
Stargazer wrote:
[...]
> 
> Learning as a change in behavior is a definition under the behaviorist
> rationale. This assumption leads to the whole well-known theoretical
> edifice. However, learning is defined differently by the neuroscientist.
> Thus, it may be "stupid" from a behaviorist point of view, but it
> is not from neuroscience. It doesn't seem wise to criticize someone
> else's definitions. You can accept them or reject them, or you can (at most)
> point out that such definitions may lead to theoretical/conceptual
> dead-ends or empirically flawed models, which is not the case
> regarding supervised/unsupervised learning.
> 
> *SG*

Well, I've tried to criticise the fuzziness of the concepts. I'll accept 
"learning" as any change in behaviour, at any level. What I find 
confusing is that sometimes "learning" is used in the behaviorial sense, 
sometimes in the sense of "change in the functioning of a neural 
network", sometimes in the sense "changes in a neuron's firing 
patterns", etc etc, all at the same time. Now, I don't care what you 
call these, so long as you keep the hierarchy straight. Terminology can 
help or hinder -- I would think that people would be more careful with 
their choice of terms. (Footnote)

I see the following hierarchy:
behaviour -- physiological functioning -- neural network functioning -- 
neuron functioning

Information flows both ways along this hierarchy, and it's important IMO 
to talk as clearly as possible. It seems to me that learning at one 
level implies learning at a lower level.

I also find it odd that people invent labels for two kinds of learning 
and seem to believe they have discovered something new (or in some cases 
have refuted the behaviorist stance.)  I've reread the posts on 
supervised vs unsupervised learning, and insofar as I've made sense of 
the explanations, unsupervised learning is classical conditioning, and 
supervised learning is operant conditioning. (BTW, Stephen Harris's 
comment that unsupervised learning  requires some sort of external 
input, else the NN won't do much of anything, was the key that unlocked 
the puzzle. Up till then, I was trying to figure out how unsupervised 
learning was different from random neural firing.)

The lesson I draw is that learning in the behaviorist sense scales up 
and down; or if you like, repeats fractal-like at different scales. It 
is "behaviour all the way down", and back up again. If that notion 
reflects reality, then something new has IMO been discovered. 
Considering that a neuron is a rather complex entity (as is shown when 
you sketch a meta-program to simulate how it works), it seems that 
learning may scale even to that level.

"It's all rather complicated, really."
Wolf
10/30/2004 7:06:09 PM
P: Yeah, sure, like Markov wasn't studying these structures way back
in 1906.  I coded my first NN after studying Markov chains, i didn't
even know about Hull et al.  There has always been a close
association  between engineering (computers) and the math departments.
 Did the  psychology departments inform that process ... maybe ... but
maybe not as much as it appears to you.  Maybe Minsky could tell us
what primarily informed *his* research ....

GS: But what makes you think that he can report it accurately? Or that
he would do so honestly? I think he's smart, but I think he might be
dishonest and egotistical. That would be a problem. No?


patty <pattyNO@SPAMicyberspace.net> wrote in message news:<unKgd.334213$3l3.127896@attbi_s03>...
> [...]
gmsizemore2
10/30/2004 7:17:36 PM
On Sat, 30 Oct 2004 10:51:52 -0400, Wolf Kirchmeir
<wwolfkir@sympatico.ca> in comp.ai.philosophy wrote:

>Stephen Harris wrote:
>[...]
>> So lets go back further, to the creation of the universe that allows
>> for the possibility of life as we know it. Our universe was created by
>> design, or by accident which means randomly. 
>
>No, "by accident" does mean "randomly." Even accidents are governed by 
>the rules, and the consequnecs of the accidents are certainly not random.

I have to agree with Wolf here. Nor do I think it is proven that the
universe was created.

Regards - Lester
lesterDELzick
10/30/2004 7:45:41 PM
On Sat, 30 Oct 2004 15:06:09 -0400, Wolf Kirchmeir
<wwolfkir@sympatico.ca> in comp.ai.philosophy wrote:

>Stargazer wrote:
>[...]
>> 
>> Learning as a change in behavior is a definition under the behaviorist
>> rationale. This assumption leads to the whole well-known theoretical
>> edifice. However, learning is defined differently by the neuroscientist.
>> Thus, it may be "stupid" from a behaviorist point of view, but it
>> is not from neuroscience. It doesn't seem wise to criticize someone
>> else's definitions. You can accept them or reject them, or you can (at most)
>> point out that such definitions may lead to theoretical/conceptual
>> dead-ends or empirically flawed models, which is not the case
>> regarding supervised/unsupervised learning.
>> 
>> *SG*
>
>Well, I've tried to criticise the fuzziness of the concepts. I'll accept 
>"learning" as any change in behaviour, at any level. What I find 
>confusing is that sometimes "learning" is used in the behaviorial sense, 
>sometimes in the sense of "change in the functioning of a neural 
>network", sometimes in the sense "changes in a neuron's firing 
>patterns", etc etc, all at the same time. Now, I don't care what you 
>call these, so long as you keep the hierarchy straight. Terminology can 
>help or hinder -- I would think that people would be more careful with 
>their choice of terms. (Footnote)
>
>I see the following hierarchy:
>behaviour -- physiological functioning -- neural network functioning -- 
>neuron functioning
>
>Information flows both ways along this hierarchy, and it's important IMO 
>to talk as clearly as possible. It seems to me that learning at one 
>level implies learning at a lower level.
>
>I also find it odd that people invent labels for two kinds of learning 
>and seem to believe thay have discovered something new (or in some cases 
>have refuted the behaviorist stance.)  I've reread the posts on 
>supervised vs unsupervised learning, and insofar as I've made sense of 
>the explanations, unsupervised learning is classical condioning, and 
>supervise learning is operant conditioning. (BTW, Stephen Harris's 
>comment that unsupervised learning  requires some sort of external 
>input, else the NN won't do much of anything, was the key that unlocked 
>the puzzle. Up till then, I was trying to figure out how unsupervised 
>learning was different from random neural firing.)
>
>The lesson I draw is that learning in the behaviorist sense scales up 
>and down; or if you like, repeats fractal-like at different scales. It 
>is "behaviour all the way down", and back up again. If that notion 
>reflects reality, then something new has IMO been discovered. 
>Considering that a neuron is a rather complex entity (as is shown when 
>you sketch a meta-program to simulate how it works), it seems that 
>learning may scale even to that level.
>
>"It's all rather complicated, really."

I'll say. "It is behavior all the way down" is analogous to patty's
idea that it's all belief. Such facile observations are nugatory with
respect to science because they fail to discriminate when and how belief
becomes knowledge and behavior becomes learning and a host of other
mental effects.

Regards - Lester
lesterDELzick
10/30/2004 7:53:24 PM
On Fri, 29 Oct 2004 22:29:53 GMT, Michael Olea <oleaj@sbcglobal.net>
in comp.ai.philosophy wrote:

>in article _3tgd.22767$Qs6.1769700@news20.bellglobal.com, Wolf Kirchmeir at
>wwolfkir@sympatico.ca wrote on 10/29/04 8:12 AM:
>
>> Bill Modlin wrote:
>
>[snip]
>
>>> Obviously you are not aware of the ordinary technical usage
>>> of the word "supervised" in this context.   Go read a book
>>> or something, I'm getting tired of trying to educate you.
>>> Hint: it is still just as "supervised" when natural forces
>>> do the supervision.
>> 
>> Firstly, that's not "ordinary technical usage", it's AI usage,
>> apparently. So why use the word "supervised"? I've been enlightened
>> about the meaning of "supervised" by Stephen Harris - I think it's a
>> stupid term. It assumes that there is some other non-supervised form of
>> learning, which in turn assumes that learning is something other than a
>> change in behaviour. If so, just what is it? Are you claiming that
>> learning "really" is something other than a change in behaviour, and
>> that changes in behaviour are only "incidentally"  signs that learning
>> has taken place? If so, you have relocated learning somewhere else, and
>> should use a different term. I vote for "reconfiguration of neural
>> networks" I don't have enough Latin or Greek to make up a nice technical
>> word to replace that phrase, though - sorry about that.
>
>Below is an excerpt from something I posted last december. It might help
>clarify the terminology, the technical distinction between supervised and
>unsupervised learning algorithms. But before re-posting that excerpt I'll
>take another stab at clarifying some of the notions.
>
>To appreciate the context it might help to imagine you are a programmer
>charged with some programming task. Suppose the task is to write a
>subroutine that converts temperatures given in degrees Fahrenheit to degrees
>Centigrade. Well it is a simple task because there is a unique function that
>correctly maps inputs to outputs - the function is not something the program
>has to learn, you simply implement the formula. Now suppose the task is to
>write a subroutine that sorts records from a database by some field - lets
>say loan applications sorted by applicant's last name. This is still a
>function - you can think of the input list  of records as a point in some
>tuple-space, and the output list of records as a point in the same
>tuple-space - but there is in general more than one function that is correct
>if there are applicants with the same last name (any ordering of the sublist
>of Smiths is correct), and there are many sorting algorithms that make
>various tradeoffs, but there is still no need for the program to learn a
>valid sorting function - the map from input to output is well defined (up to
>permutation of equi-named sublists) and you simply implement some sorting
>algorithm. Now suppose the task is to write a subroutine that assigns a
>default-risk to loan applications. This routine is a function that maps
>tuples to scalars. There is no known correct function. One approach is to
>"learn" a function from examples - a set of loan application records each
>labeled with DEFAULT/NO_DEFAULT. This is a training set. It consists of
>input/output pairs. There are a great many algorithms that induce a function
>from labeled examples, and a substantial body of theory on the reliability of
>the induced functions. All such algorithms search a space of functions for a
>"good" function. One way in which they vary is the set of constraints that
>define the function space searched - linear threshold functions, for
>example. A simple if not useful such function would be: if (INCOME >=
>THRESHOLD) default-risk = 0 else default-risk = 1. This partitions the input
>space into two equivalence classes separated by a hyperplane perpendicular
>to one of the axes of the input space. This is a single parameter family of
>functions - a function space. Specifying the value of THRESHOLD completely
>specifies a unique function from that space. The function space to be
>searched can be made a little richer by allowing the input space to be
>partitioned into convex regions bounded by hyperplanes at arbitrary
>orientations - a space of linear threshold functions. To specify a function
>from such a space requires more parameters - it has more degrees of freedom.
>The function space can be made again more general by allowing the surfaces
>bounding equivalence classes to be curved - polynomial classifiers, for
>example. Note that the input space is not changing here, only the allowable
>ways of carving it up. Each time we make the function space to be searched
>more general we increase the ability of the induced function to make fine
>distinctions, and we increase the number of parameters required to specify a
>function from the space. There are tradeoffs. There is a substantial body of
>applicable theory with results, for example, on how well such functions
>induced from examples can be expected to generalize to unseen cases. The
>induction of such functions is "supervised learning", so called because it
>works on inputs labeled with the correct answer. When I was writing character
>recognition software I paid people to sit at a terminal and label bitmaps
>by character class (e.g. this one's a '2') - ground truth. Yes, they make
>mistakes, they mislabel the training data sometimes. It is "supervised"
>because correct answers are provided. It is "learning" because the function
>is induced from data rather than known a priori. It is a form of programming
>somewhat more specialised than writing a Fahrenheit-to-Centigrade converter,
>or a bubblesort routine. It has its own set of concepts and specialised
>terminology.
>
>Note by the way that "neural networks" are only one of many ways to approach
>the induction and implimentation of functions from training data. Support
>Vector Machines are the state of the art, at least in some areas. Typically
>the "learning algorithm" that finds an optimal function from the space of
>functions searched by SVM's is simply a form of convex quadratic
>programming, tailored somewhat to exploit characteristics of SVM's, but
>still a topic well studied in optimization theory. I am skeptical that
>calling this "reconfiguration of neural networks" constitutes a
>clarification of terminology.
>
>"Unsupervised learning" is different. It can still be thought of as the
>induction of a function, so it is still "learning" in that sense, but in
>this case there is no "supervision" in that there are no class-labeled (or
>scalar/vector/tuple-value-labeled) training samples available. Often the task
>is to induce naturally occurring categories. Often the task is to "learn" a
>probability distribution function (e.g. the statistics of some data stream).
>Applications include data reduction, hypothesis generation, hypothesis
>testing, and forecasting, among others. Suppose, for example, you are given
>the task of writing a data compression routine. The JPEG (Joint Photographic
>Experts Group) Still Image Data Compression Standard is a case in point.
>Compression consists of three stages: 1) A transformation of the image (the
>discrete cosine transform in this case, back in the day before wavelets were
>widely known), 2) A weighted quantization of the coefficients of the
>transform (more bits for the lower frequency components), and 3) A lossless
>entropy encoding of the coefficient bit-stream. The last stage, the entropy
>encoder, can be either a Huffman code, or an Arithmetic code - the later
>keeps a running estimate of the probability distribution over a set of
>contexts (the probability model can be as simple as the probability of a 0
>or a 1 bit in the data stream). This estimate is "learned" from the data
>stream as it goes by. There is no "supervision", but there is an a priori
>set of constraints on the form of the probability distributions to be
>learned - the function space to be searched.
>
>So why lump these tasks together under "machine learning", and give them
>names that both link and distinguish them - supervised and unsupervised
>learning? For the same reason this is done in any field - they share a
>common body of theory and techniques. Classification, regression, cluster
>analysis - they are all forms of function induction, There are neural net
>based classifiers and induction algorithms, regression analyzers and cluster
>analyzers, nearest neighbor based classifiers and induction algorithms,
>regression analyzers and cluster analyzers, and support vector machine
>classifiers and induction algorithms, regression analyzers and cluster
>analyzers, to name a few. Much of the underlying theory comes from outside
>computer science, from statistical analysis, optimization theory, and
>operator theory, but the requirements of building working programs, data
>structure and algorithms, adds another element. The term "machine learning"
>is well chosen, and the subdivision into supervised and unsupervised machine
>learning is well founded.
>
>Anyway, here is what I wrote before.
>
>
>/ Start Excerpt/
>When "machine learning" is applied to practical problems, where the issue is
>not to investigate, simulate, or in any way elucidate the behavior of
>organisms, but just to build a mechanism that does some useful work (say
>"read" the dollar amount on a check) two broad categories of algorithm are
>available: supervised and unsupervised "learning". In the former there is,
>as everybody knows, a "training set" consisting of input/desired-output
>pairs - the task is to induce a mapping of input to output from available
>examples; the hope is that the induced mapping generalizes to unseen inputs
>in some useful way. When the set of desired outputs is discrete and finite,
>say the set of decimal digits, the resulting input to output mappping, no
>matter how it is constructed, is called a "classifier". It induces a
>partition of input space into equivalence classes. So in this case
>"supervised learning" boils down, mathematicaly, to the search for a "good"
>partition of input space. Many such search algorithms exist, as everyone
>knows, but due diligince raises questions: will the algorithm converge, how
>much training is enough, how well does the induced partition generalize to
>unseen inputs, can we give a mathematicaly precise answer to the question
>"what tends to confirm an induction"? It is possible to give strong
>guarantees about the performance on unseen data of a classifier induced from
>a training set, provided certain conditions are met. There are many ways to
>state the results in terms of conditions met, one of the simplest is:
>assuming the training data and the unseen data are both generated by an
>unknown but fixed stochastic process then the probability of the
>misclassification rate on unseen data exceeding epsilon is less than delta,
>where delta increases as a function of the number of free parameters in the
>space of input partitions. Note what the result does not say - it does not
>require the training set to be "statistically representative" of the unseen
>data, it requires only that the training set be generated by the same
>stochastic process that generates the unseen data. Nor does it say the rate
>of misclassification will be less than some fixed threshold, it says only
>that this rate has some calculable probability of being less than a given
>threshold. Note what the result does say - the probability of
>misclassification rises as a function of the degrees of freedom in the set
>of input space partitions available to the learning algorithm. It gives a
>precise meaning to the problem of "overfitting": it is possible, given too
>wide a lattitude of input space partitions, to induce a partition that
>"memorizes" the examples without generalizing at all. Generalization
>requires constraints on the space of partitions considered. On the one hand
>this space must be rich enough to make essential discriminations, on the
>other hand it must be as economical as possible to allow induction to
>generalize with guaranteed results.
>
>Unsupervised "learning" amounts to characterizing statistical properties of
>unlabeled input - the realm of cluster analysis, a search for categories,
>where the categories are not prespecified, a search for "natural kinds",
>lumpiness in the continuum of input space. There are, as everyone knows,
>scads of cluster analysis algorithms. Kohonen's "self organizing maps" are
>an interesting case of the "competitive learning" approach. In this scheme
>there is a fixed sized pool of "representatives" R, where each
>representative R[i] has both a value, typically a vector, and a neighborhood
>of representatives to which it is connected. For each input vector v the
>representative R[j] closest to v under some prespecified metric is found.
>R[j]'s value is altered by some prespecified fraction to move it closer to
>v, and the values of all of R[j]'s neighbors are also altered by some
>usually smaller prespecified fraction to move closer to the value of v.
>There are variations on the scheme. After convergence, if the algorithm
>converges, the representatives R are topographically ordered in a way
>representative of the distribution of the data - neighboring representatives
>are also close under the metric over the input vector space even if the
>values of neighboring representatives are not correlated initially. In
>densely populated regions of input space the "map" preserves fine
>distinctions, elsewhere distinctions become coarse.
>
>In the supervised case the system's response to inputs is actively shaped
>via training; in the unsupervised case the system's response to inputs
>develops as a result of exposure to the input stream.
>\End Excerpt\
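
For what it's worth, a bare sketch of the Kohonen scheme described
above, in Python, under assumptions of my own (a one-dimensional line
of representatives, squared Euclidean distance, only immediate
neighbors updated):

import random

def som_step(reps, v, lr=0.1, neighbor_lr=0.05):
    # reps: list of vectors (the representatives R); v: one input vector
    j = min(range(len(reps)), key=lambda i: sum(
        (a - b) ** 2 for a, b in zip(reps[i], v)))   # closest representative
    for i, rate in ((j, lr), (j - 1, neighbor_lr), (j + 1, neighbor_lr)):
        if 0 <= i < len(reps):                       # move R[i] toward v
            reps[i] = [r + rate * (x - r) for r, x in zip(reps[i], v)]
    return reps

random.seed(1)
reps = [[random.random(), random.random()] for _ in range(10)]
for _ in range(2000):                                # exposure to the stream
    reps = som_step(reps, [random.random(), random.random()])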

Allow me to comment without malice, Michael, that we now have a good
working explanation for producing software that learns to do something
useful without claiming to do something with intelligence. Is it your
contention that doing something useful is intelligence or that it
doesn't matter? Or that intelligence amounts to doing something
useful? Seems to me I've seen mechanically intelligent people doing
plenty of useless things.

Regards - Lester
lesterDELzick
10/30/2004 8:10:26 PM
Glen M. Sizemore wrote:
> P: Yeah, sure , like Markov wasn't studying these structures way back
> in  1906.  I coded my first NN after studying Markov chains, i didn't
> even  know about Hull et all.  There has always been a close
> association  between engineering (computers) and the math departments.
>  Did the  psychology departments inform that process ... maybe ... but
> maybe not as much as it appears to you.  Maybe Minsky could tell us
> what primarily informed *his* research ....
> 
> GS: But what makes you think that he can report it accurately? Or that
> he would do so honestly? I think he's smart, but I think he might be
> dishonest and egotistical. That would be a problem. No?
> 

Well he's 74, what would profit his lying about it now?  I would think 
he would want to set the record straight.  To suspect this man of 
dishonesty just because he belongs to a class of AI researchers strikes 
me as the worst kind of stereotyping.

Now in my case things might be different.  But here it is anyway to the 
best of my memory.   When i started thinking about AI seriously in the 
70's i specifically remember reading two books:  one was a discrete 
mathematics text and it included Markov chains, the other was the book 
which introduced the perceptron (i wish i could remember the title).  The 
latter was almost definitely informed by neuroscience and\or psychology 
- the former almost certainly was not.  Now to be honest i have no idea 
which i read first.  However i do know that when it came time to perform 
on a programming contract which could not be solved with a symbolic 
strategy, the discrete mathematics text told me how to do it - the other 
book was quite useless.

Incidentally the contract was to read a ten key touch tone phone input 
from a flaky signal.  This was before we had chips readily available to 
do it.

patty


> 
> patty <pattyNO@SPAMicyberspace.net> wrote in message news:<unKgd.334213$3l3.127896@attbi_s03>...
> 
>>David Longley wrote:
>>
>>>In article <tKdgd.17800$Qs6.1523401@news20.bellglobal.com>, Wolf 
>>>Kirchmeir <wwolfkir@sympatico.ca> writes
>>>
>>>
>>>>Stargazer wrote:
>>>>
>>>>
>>>>>Wolf Kirchmeir wrote:
>>>>>
>>>>>
>>>>>>Stargazer wrote:
>>>>
>>>>[snip a number oif clear answers to my questions - thanks. I think. :-)]
>>>>
>>>>
>>>>>*SG*
>>>>
>>>>
>>>>Your answers clear up some misconceptions on my part, but they also 
>>>>show  terminological obfuscation on the part of artificial neural 
>>>>network researchers.
>>>>
>>>>Throughout your explanation, the term "signal" is used ambiguously. It 
>>>>sometimes seems to apply to an input to a single neuron, and sometimes 
>>>>to a collection of inputs to a network of neurons. IMO this is 
>>>>confusing. Very. It's a hierarchy error, which always cause trouble.
>>>>
>>>>Also, calling the calculated output of a NN a "training signal" 
>>>>because it's compared to the desired outcome is confusing, at least to 
>>>>me, for whom a "training signal" is a "signal that trains", ie, an 
>>>>input to the NN. And the use of "signal" for both inputs and outputs 
>>>>is confusing, since IMO an output is a signal to the experimenter, not 
>>>>the NN.
>>>>
>>>>All in all, my immediate impression is that workers in artificial NNs 
>>>>don't have a clear conception of what they are trying to do. Not that 
>>>>that is a bad thing - after all, it's early days yet, and one of the 
>>>>functions of research is to clarify the questions one is trying to 
>>>>answer. My comments as a pure outsider may or may not help clarify 
>>>>vagueness. Either way, thinking about your explanations has been 
>>>>interesting.
>>>>
>>>>
>>>
>>>Another way of putting it is that the early ANN folk didn't know what 
>>>was being done in the EAB back in the 30s, 40s and 50s (note that all of 
>>>the former folks' work came out of those decades but they seem to have 
>>>an uncanny knack of misrepresenting of just not understanding their 
>>>sources). Nor did they understand the way that philosophy was going in 
>>>the same period (most "AI" and "Cognitive Scientists" *still* appear to 
>>>be pre 1929 Carnap or early Wittgensteinian). It appears to me that they 
>>>took some basic "programming" algorithms which *simulated* (or at best 
>>>controlled) some of the experimental schedules/equipment (in those early 
>>>days it was largely switchboard and other telephonic paraphernalia) and 
>>>just renamed their statistical models 
>>
>>Yeah, sure , like Markov wasn't studying these structures way back in 
>>1906.  I coded my first NN after studying Markov chains, i didn't even 
>>know about Hull et all.  There has always been a close association 
>>between engineering (computers) and the math departments.  Did the 
>>psychology departments inform that process ... maybe ... but maybe not 
>>as much as it appears to you.  Maybe Minsky could tell us what primarily 
>>informed *his* research ....
>>
>>patty
>>
>>
>>or descriptions of this "rule
>>
>>>governed behaviour" something more catchy ie "Artificial Neural 
>>>Networks" or "cell assemblies" (Hebb was always talking about a 
>>>Conceptual Nervous System and he did it rather poorly relative to the 
>>>efforts of Hull, Guthrie or Estes - he just said it all in more popular, 
>>>familiar, intensional language, ensuring that more science-shy people 
>>>lapped it up!). This propensity to generate misnomers and repackage, 
>>>plagiarise or re-badge others' *empirical* work as something new and 
>>>"algorithmic" or "analytic" simply through name changing allows them to 
>>>sell a load of nonsense to the unwary who don't see this sleight of hand 
>>>for what it is.  When they make out that what they have to say somehow 
>>>captures what's essential about "cognitive" or "mental" life I just see 
>>>fraud, something which I think is endemic within psychology, and has 
>>>been for decades. It makes "Cognitive Science" a Ptolemaic monster, 
>>>which in my view is far worse than the original, as this monster has no 
>>>practical utility over the original behavioural work itself, and yet 
>>>actually gets in the way of advancing that science by soaking up funding 
>>>on grounds that it's closer to common sense folk psychology! Such people 
>>>make quite ludicrous grant proposals, with fantastic promises, which 
>>>make the realistic aims of real science look trivial in comparison. This 
>>>just shapes up lying, and turns science into marketing. Such "leaders" 
>>>take students and other naive folk back to ways of thinking which were 
>>>abandoned well over a century ago. They're entrepreneurs!
0
patty
10/30/2004 8:28:30 PM
in article 10o7736it6vsd8c@news20.forteinc.com, Stargazer at
fuckoff@spammers.com wrote on 10/30/04 6:51 AM:

> Michael Olea wrote:
>> in article _3tgd.22767$Qs6.1769700@news20.bellglobal.com, Wolf
>> Kirchmeir at wwolfkir@sympatico.ca wrote on 10/29/04 8:12 AM:
>> 
>>> Bill Modlin wrote:
>> 
>> [snip]
>> 
>>>> Obviously you are not aware of the ordinary technical usage
>>>> of the word "supervised" in this context.   Go read a book
>>>> or something, I'm getting tired of trying to educate you.
>>>> Hint: it is still just as "supervised" when natural forces
>>>> do the supervision.
>>> 
>>> Firstly, that's not "ordinary technical usage", it's AI usage,
>>> apparently. So why use the word "supervised"? I've been enlightened
>>> about the meaning of "supervised" by Stephen Harris - I think it's a
>>> stupid term. It assumes that there is some other non-supervised
>>> form of learning, which in turn assumes that learning is something
>>> other than a change in behaviour. If so, just what is it? Are you
>>> claiming that learning "really" is something other than a change in
>>> behaviour, and that changes in behaviour are only "incidentally"
>>> signs that learning has taken place? If so, you have relocated
>>> learning somewhere else, and should use a different term. I vote
>>> for "reconfiguration of neural networks" I don't have enough Latin
>>> or Greek to make up a nice technical word to replace that phrase,
>>> though - sorry about that.
>> 
>> Below is an excerpt from something I posted last december. It might
>> help clarify the terminology, the technical distinction between
>> supervised and unsupervised learning algorithms. But before
>> re-posting that excerpt I'll take another stab at clarifying some of
>> the notions.
> [big snip]
> 
> Excellent post (although I doubt some people here will understand
> all its practical and theoretical implications).

Thanks. It may be interesting to note that Duda and Hart (Pattern
Classification and Scene Analysis) once referred to statistical learning
theory as elegant but without practical use ;-)

> 
> *SG*
> 
> 

0
Michael
10/30/2004 9:12:40 PM
in article 41849d2b.15043409@netnews.att.net, Lester Zick at
lesterDELzick@worldnet.att.net wrote on 10/30/04 1:10 PM:

> On Fri, 29 Oct 2004 22:29:53 GMT, Michael Olea <oleaj@sbcglobal.net>
> in comp.ai.philosophy wrote:
> 

[snip]

> 
> Allow me to comment without malice, Michael, that we now have a good
> working explanation for producing software that learns to do something
> useful without claiming to do something with intelligence. Is it your
> contention that doing something useful is intelligence or that it
> doesn't matter? Or that intelligence amounts to doing something
> useful. Seems to me I've seen mechanically intelligent people doing
> plenty of useless things.

I was not making any claim one way or another about intelligence, Lester,
rather just trying to give a sense of what "supervised learning" and
"unsupervised learning" mean in the context where those terms are part of a
technical vocabulary - the construction of algorithms - though I would lean
to the "it doesn't matter" point of view. A brief quote from Quine's essay
"Natural Kinds":

=====
An example is the disposition called intelligence - the ability, vaguely
speaking, to learn quickly and to solve problems. Sometime, whether in terms
of proteins or colloids or nerve nets or overt behavior, the relevant branch
of science may reach the stage where a similarity notion can be constructed
capable of making even the notion of intelligence respectable. And
superfluous.
=====

> 
> Regards - Lester
>

Regards.

0
Michael
10/30/2004 9:13:51 PM
Wolf Kirchmeir wrote:
> Stargazer wrote:
>
> > Stephen Harris wrote:
> >
> > > Supervised learning means there are external constraints applied
> > > to the processing of the network while it is processing which
> > > influences the eventual output.
> > >
> > > Unsupervised learning means that the network does not receive
> > > external inputs while it is processing. The processing works only
> > > with the internal structure of the network, then produces output.
> >
> >
> > Just a small observation here. In unsupervised learning, the network
> > in fact receives external input (otherwise it would not do much). It
> > does not receive a user-supplied output (training signals or
> > training sets) with which to compare its own output in order to
> > derive error correction. Supervised systems, on the other hand,
> > receive this user-supplied sets, which is employed (usually) as an
> > error correction to be back-fed (at least in traditional multilayer
> > perceptrons).
>
> If that is the case, the supervised and unsupervised NNs must have
> different architectures. Or so it seems to me. In that case,
> supervised NNs must contain unsupervised sub-NNs. Or?

Or... none of the above, they just have different learning paradigms.
In unsupervised networks, neurons adjust their connection weights
according to rules that have nothing to do with what the user
wants. Neurons act more like coincidence detectors. In that sense,
a typical unsupervised NN may have an architecture comparable to
that of a typical MLP (multilayer perceptron). There are several
unsupervised learning algorithms (Hebbian, self-organizing
maps, etc.). Different architectures, on the other hand, imply
different styles of connection among neurons. An MLP, for instance,
is architecturally different from recurrent networks (such as SRNs,
Elman's Simple Recurrent Networks). And both of them are supervised
NNs. In other networks, the supervised/unsupervised distinction may
be a bit blurred (such as in Hopfield networks, which are totally
recurrent and don't have explicit inputs or outputs).
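
To see the point in code - a minimal sketch, with invented names and a 
bare single-layer unit rather than anything posted in this thread - here 
is one and the same architecture trained under both paradigms.  The 
delta rule consumes a user-supplied target; the Hebbian rule never sees 
one:

    def train_supervised(w, samples, lr=0.1):
        """Delta rule: each sample carries a target t, and the weight
        change is driven by the error (t - y) - the 'training signal'."""
        for x, t in samples:                 # (input vector, target)
            y = sum(wi * xi for wi, xi in zip(w, x))
            err = t - y                      # needs the supplied target
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
        return w

    def train_hebbian(w, samples, lr=0.1):
        """Hebb rule on the same unit: no target anywhere; weights just
        track input/output coincidences.  (In practice one bounds the
        weights, e.g. with Oja's normalization, or they grow without
        limit.)"""
        for x in samples:                    # inputs only
            y = sum(wi * xi for wi, xi in zip(w, x))
            w = [wi + lr * y * xi for wi, xi in zip(w, x)]
        return w

Same connectivity, different update rule: the supervised/unsupervised 
distinction lives in the learning rule, not in the wiring diagram.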

*SG*


0
Stargazer
10/30/2004 9:22:25 PM
Wolf Kirchmeir wrote:
> Stargazer wrote:
> [...]
> >
> > Learning as a change in behavior is a definition under the
> > behaviorist rationale. This assumption leads to all the well known
> > theoretical edifice. However, learning is defined differently by
> > the neuroscientist. Thus, it may be "stupid" from a behaviorist
> > point of view, but it is not from neuroscience. It doesn't seem wise to 
> > criticize someone
> > else's definitions. You can accept it or reject it, or you can (at
> > most) point out that such definitions may lead to
> > theoretical/conceptual dead-ends or empirically flawed models,
> > which is not the case regarding supervised/unsupervised learning.
> >
> > *SG*
>
> Well, I've tried to criticise the fuzziness of the concepts. I'll
> accept "learning" as any change in behaviour, at any level. What I
> find confusing is that sometimes "learning" is used in the
> behavioural sense, sometimes in the sense of "change in the
> functioning of a neural network", sometimes in the sense "changes in
> a neuron's firing patterns", etc etc, all at the same time. Now, I
> don't care what you call these, so long as you keep the hierarchy
> straight. Terminology can help or hinder -- I would think that people
> would be more careful with their choice of terms. (Footnote)
>
> I see the following hierarchy:
> behaviour -- physiological functioning -- neural network functioning
> -- neuron functioning
>
> Information flows both ways along this hierarchy, and it's important
> IMO to talk as clearly as possible. It seems to me that learning at
> one level implies learning at a lower level.
>
> I also find it odd that people invent labels for two kinds of learning
> and seem to believe they have discovered something new (or in some
> cases have refuted the behaviorist stance.)  I've reread the posts on
> supervised vs unsupervised learning, and insofar as I've made sense of
> the explanations, unsupervised learning is classical conditioning, and
> supervised learning is operant conditioning. (BTW, Stephen Harris's
> comment that unsupervised learning  requires some sort of external
> input, else the NN won't do much of anything, was the key that
> unlocked the puzzle. Up till then, I was trying to figure out how
> unsupervised learning was different from random neural firing.)
>
> The lesson I draw is that learning in the behaviorist sense scales up
> and down; or if you like, repeats fractal-like at different scales. It
> is "behaviour all the way down", and back up again. If that notion
> reflects reality, then something new has IMO been discovered.
> Considering that a neuron is a rather complex entity (as is shown when
> you sketch a meta-program to simulate how it works), it seems that
> learning may scale even to that level.
>
> "It's all rather complicated, really."

It seems complicated because we're talking of different paradigms
here. Under behaviorist parlance, learning is related to the changes
in behavior of the organism/mechanism that one is studying, while in
neuroscience and artificial neural networks, learning is a change
of internal parameters (and/or architecture, if one wants to account
for plasticity) of the organism/mechanism. Although logically sound
and coherent, the behaviorist definition prevents the analysis of
internal modifications of the mechanism that aren't immediately
reflected in corresponding changes of behavior. For the behaviorist,
unsupervised learning does not exist, and this may explain why it seems
difficult to understand it.

*SG*


0
Stargazer
10/30/2004 9:27:24 PM
Lester Zick wrote:

> On Sat, 30 Oct 2004 15:06:09 -0400, Wolf Kirchmeir
> <wwolfkir@sympatico.ca> in comp.ai.philosophy wrote:
> 
> 
>>Stargazer wrote:
>>[...]
>>
>>>Learning as a change in behavior is a definition under the behaviorist
>>>rationale. This assumption leads to all the well known theoretical
>>>edifice. However, learning is defined differently by the neuroscientist.
>>>Thus, it may be "stupid" from a behaviorist point of view, but it
>>>is not from neuroscience. It doesn't seem wise to criticize someone
>>>else's definitions. You can accept it or reject it, or you can (at most)
>>>point out that such definitions may lead to theoretical/conceptual
>>>dead-ends or empirically flawed models, which is not the case
>>>regarding supervised/unsupervised learning.
>>>
>>>*SG*
>>
>>Well, I've tried to criticise the fuzziness of the concepts. I'll accept 
>>"learning" as any change in behaviour, at any level. What I find 
>>confusing is that sometimes "learning" is used in the behavioural sense, 
>>sometimes in the sense of "change in the functioning of a neural 
>>network", sometimes in the sense "changes in a neuron's firing 
>>patterns", etc etc, all at the same time. Now, I don't care what you 
>>call these, so long as you keep the hierarchy straight. Terminology can 
>>help or hinder -- I would think that people would be more careful with 
>>their choice of terms. (Footnote)
>>
>>I see the following hierarchy:
>>behaviour -- physiological functioning -- neural network functioning -- 
>>neuron functioning
>>
>>Information flows both ways along this hierarchy, and it's important IMO 
>>to talk as clearly as possible. It seems to me that learning at one 
>>level implies learning at a lower level.
>>
>>I also find it odd that people invent labels for two kinds of learning 
>>and seem to believe they have discovered something new (or in some cases 
>>have refuted the behaviorist stance.)  I've reread the posts on 
>>supervised vs unsupervised learning, and insofar as I've made sense of 
>>the explanations, unsupervised learning is classical conditioning, and 
>>supervised learning is operant conditioning. (BTW, Stephen Harris's 
>>comment that unsupervised learning  requires some sort of external 
>>input, else the NN won't do much of anything, was the key that unlocked 
>>the puzzle. Up till then, I was trying to figure out how unsupervised 
>>learning was different from random neural firing.)
>>
>>The lesson I draw is that learning in the behaviorist sense scales up 
>>and down; or if you like, repeats fractal-like at different scales. It 
>>is "behaviour all the way down", and back up again. If that notion 
>>reflects reality, then something new has IMO been discovered. 
>>Considering that a neuron is a rather complex entity (as is shown when 
>>you sketch a meta-program to simulate how it works), it seems that 
>>learning may scale even to that level.
>>
>>"It's all rather complicated, really."
> 
> 
> I'll say. "It is behavior all the way down" is analogous to patty's
> idea that it's all belief. Such facile observations are nugatory with
> respect to science because they fail to discriminate when and how belief
> becomes knowledge and behavior becomes learning and a host of other
> mental effects.
> 
> Regards - Lester

To correct the record i would not say something as stupid as "it's 
belief all the way down".  Beliefs are just a propositional attitudes 
and there certainly is more to existence than just those.  What i might 
have said would have been something more like, we tend not to rationally 
think outside of our own propositional attitudes.  A formal system 
running on a computer certainly cannot function outside of its beliefs 
(were its statements to be considered beliefs).  Those of you who think 
you can make statements that are true even outside of your assumptions 
and beliefs, are fundamentally misguided.  You are arrogantly taking a 
god's eye view.  You are quite literally talking out of your heads.

Incidentally, Lester, you can change your assumptions and make 
tautologies false.

patty
0
patty
10/30/2004 9:29:14 PM
Michael Olea wrote:
> in article 10o7736it6vsd8c@news20.forteinc.com, Stargazer at
> fuckoff@spammers.com wrote on 10/30/04 6:51 AM:
>
> > Michael Olea wrote:
> > > in article _3tgd.22767$Qs6.1769700@news20.bellglobal.com, Wolf
> > > Kirchmeir at wwolfkir@sympatico.ca wrote on 10/29/04 8:12 AM:
> > >
> > > > Bill Modlin wrote:
> > >
> > > [snip]
> > >
> > > > > Obviously you are not aware of the ordinary technical usage
> > > > > of the word "supervised" in this context.   Go read a book
> > > > > or something, I'm getting tired of trying to educate you.
> > > > > Hint: it is still just as "supervised" when natural forces
> > > > > do the supervision.
> > > >
> > > > Firstly, that's not "ordinary technical usage", it's AI usage,
> > > > apparently. So why use the word "supervised"? I've been
> > > > enlightened about the meaning of "supervised" by Stephen Harris
> > > > - I think it's a stupid term. It assumes that there is some
> > > > other non-supervised form of learning, which in turn assumes
> > > > that learning is something other than a change in behaviour. If
> > > > so, just what is it? Are you claiming that learning "really" is
> > > > something other than a change in behaviour, and that changes in
> > > > behaviour are only "incidentally" signs that learning has taken
> > > > place? If so, you have relocated learning somewhere else, and
> > > > should use a different term. I vote for "reconfiguration of
> > > > neural networks" I don't have enough Latin or Greek to make up
> > > > a nice technical word to replace that phrase, though - sorry
> > > > about that.
> > >
> > > Below is an excerpt from something I posted last december. It
> > > might help clarify the terminology, the technical distinction
> > > between supervised and unsupervised learning algorithms. But
> > > before re-posting that excerpt I'll take another stab at
> > > clarifying some of the notions.
> > [big snip]
> >
> > Excellent post (although I doubt some people here will understand
> > all its practical and theoretical implications).
>
> Thanks. It may be interesting to note that Duda and Hart (Pattern
> Classification and Scene Analysis) once referred to statistical
> learning theory as elegant but without practical use ;-)

Well, history repeats itself: some time ago I heard people
saying the very same thing about SVMs. They are starting to swallow
those words, syllable by syllable.

*SG*


0
Stargazer
10/30/2004 9:35:56 PM
Bill Modlin wrote:

> "Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message
> news:5hPgd.32047$Qs6.2170742@news20.bellglobal.com...
> 
> snip most, to try to get to the point...
> 
> 
>>Now, if I understand you right, this is an example of unsupervised
>>learning. Yet because it depends on external inputs it's supervised.
>>Or is it both at the same time? Why?
> 
> 
> You still conflate "external inputs" with the subclass of inputs
> which are "feedback".
> 
> Do you understand the concept of causal or functional dependency?
> 
> You've been told several times that "feedback" is an input which is
> caused by or functionally dependent on prior outputs of the system.
> 
> To be useful, the feedback must be detectably correlated with or
> related to the operation of the system, so that it can be used to
> guide adjustments to the way the system processes future inputs.  To
> be useful, the feedback must be processed differently from other
> inputs, it must have effects on the system distinct from those of
> other non-feedback inputs.
> 

How does the system (or even the system designer) know which are which?

patty


> The difference between having feedback and not having feedback
> should not be all that hard to grasp... I don't understand why you
> are having so much trouble.  All systems require external input, but
> most external input is not functionally dependent on prior outputs,
> so most external input is not feedback.
> 
> Is it just the vocabulary that confuses you?  Do you have the same
> problem distinguishing operant conditioning from classical
> conditioning?   Would you argue that they are indistinguishable?
> 
> If you can see a difference there, then recognize that it is the
> same difference which is denoted by the distinction between
> supervised and unsupervised.
> 
> Bill Modlin
> 
> 
0
patty
10/30/2004 9:48:16 PM
In article <10o81pseu1mn26@news20.forteinc.com>, Stargazer 
<fuckoff@spammers.com> writes
>Wolf Kirchmeir wrote:
>> Stargazer wrote:
>> [...]
>> >
>> > Learning as a change in behavior is a definition under the
>> > behaviorist rationale. This assumption leads to all the well known
>> > theoretical edifice. However, learning is defined differently by
>> > the neuroscientist. Thus, it may be "stupid" from a behaviorist
>> > point of view, but it is not from neuroscience. It doesn't seem wise to
>> > criticize someone
>> > else's definitions. You can accept it or reject it, or you can (at
>> > most) point out that such definitions may lead to
>> > theoretical/conceptual dead-ends or empirically flawed models,
>> > which is not the case regarding supervised/unsupervised learning.
>> >
>> > *SG*
>>
>> Well, I've tried to criticise the fuzziness of the concepts. I'll
>> accept "learning" as any change in behaviour, at any level. What I
>> find confusing is that sometimes "learning" is used in the
>> behaviorial sense, sometimes in the sense of "change in the
>> functioning of a neural network", sometimes in the sense "changes in
>> a neuron's firing patterns", etc etc, all at the same time. Now, I
>> don't care what you call these, so long as you keep the hierarchy
>> straight. Terminology can help or hinder -- I would think that people
>> would be more careful with their choice of terms. (Footnote)
>>
>> I see the following hierarchy:
>> behaviour -- physiological functioning -- neural network functioning
>> -- neuron functioning
>>
>> Information flows both ways along this hierarchy, and it's important
>> IMO to talk as clearly as possible. It seems to me that learning at
>> one level implies learning at a lower level.
>>
>> I also find it odd that people invent labels for two kinds of learning
>> and seem to believe they have discovered something new (or in some
>> cases have refuted the behaviorist stance.)  I've reread the posts on
>> supervised vs unsupervised learning, and insofar as I've made sense of
>> the explanations, unsupervised learning is classical conditioning, and
>> supervised learning is operant conditioning. (BTW, Stephen Harris's
>> comment that unsupervised learning  requires some sort of external
>> input, else the NN won't do much of anything, was the key that
>> unlocked the puzzle. Up till then, I was trying to figure out how
>> unsupervised learning was different from random neural firing.)
>>
>> The lesson I draw is that learning in the behaviorist sense scales up
>> and down; or if you like, repeats fractal-like at different scales. It
>> is "behaviour all the way down", and back up again. If that notion
>> reflects reality, then something new has IMO been discovered.
>> Considering that a neuron is a rather complex entity (as is shown when
>> you sketch a meta-program to simulate how it works), it seems that
>> learning may scale even to that level.
>>
>> "It's all rather complicated, really."
>
>It seems complicated because we're talking of different paradigms
>here. Under behaviorist parlance, learning is related to the changes
>in behavior of the organism/mechanism that one is studying, while in
>neuroscience and artificial neural networks, learning is a change
>of internal parameters (and/or architecture, if one wants to account
>for plasticity) of the organism/mechanism. Although logically sound
>and coherent, the behaviorist definition prevents the analysis of
>internal modifications of the mechanism that aren't immediately
>reflected in corresponding changes of behavior. For the behaviorist,
>unsupervised learning does not exist, and this may explain why it seems
>difficult to understand it.
>
>*SG*
>
>

Not that you meant to say this (as you clearly don't understand what 
Wolf and I have said), but do you have any idea as to why a radical 
behaviourist might have doubts as to whether *classical* conditioning 
naturally exists?

Modlin's ignorance and presumptive arrogance are, like yours and others' 
here, absolutely *staggering*. When he and others "ask" for enlightenment 
or share their views, it really does amount to little more than idiotic 
presumption and I know you can't see why. None of you have any idea of 
the scope of your ignorance, and why it's not something one can redress 
in five minutes, five months, or even perhaps in five years. Look into 
the "web of belief" discussed elsewhere - it's a critical mass notion 
which depends on experience and none of you have that experience. I wish 
such scotoma *were* just simple foibles of eccentric, over confident 
posters in forums such as this, but they're not, the problem is 
ubiquitous.

One consequence is that the cognoscenti generally just make polite 
excuses and leave folk such as yourself to your nonsense. Because you're 
not picked up on it, you persist in your ignorance. It really is just 
too aversive an experience for most to persist in trying to teach any of 
you - seriously! Take this as a hint that you're doing something *very*, 
*very* wrong and that the way you're currently behaving will just ensure 
that you'll never see what you're doing wrong, never mind even begin to 
redress it.

Imagine your worst fears as to your level of ignorance and try to 
believe that's true!
-- 
David Longley
0
David
10/30/2004 9:57:59 PM
In article <unKgd.334213$3l3.127896@attbi_s03>, patty 
<pattyNO@SPAMicyberspace.net> writes
>David Longley wrote:
>> In article <tKdgd.17800$Qs6.1523401@news20.bellglobal.com>, Wolf 
>>Kirchmeir <wwolfkir@sympatico.ca> writes
>>
>>> Stargazer wrote:
>>>
>>>> Wolf Kirchmeir wrote:
>>>>
>>>>> Stargazer wrote:
>>>
>>> [snip a number of clear answers to my questions - thanks. I think. :-)]
>>>
>>>> *SG*
>>>
>>>
>>> Your answers clear up some misconceptions on my part, but they also 
>>>show  terminological obfuscation on the part of artificial neural 
>>>network researchers.
>>>
>>> Throughout your explanation, the term "signal" is used ambiguously. 
>>>It  sometimes seems to apply to an input to a single neuron, and 
>>>sometimes  to a collection of inputs to a network of neurons. IMO 
>>>this is  confusing. Very. It's a hierarchy error, which always causes trouble.
>>>
>>> Also, calling the calculated output of a NN a "training signal" 
>>>because it's compared to the desired outcome is confusing, at least 
>>>to  me, for whom a "training signal" is a "signal that trains", ie, 
>>>an  input to the NN. And the use of "signal" for both inputs and 
>>>outputs  is confusing, since IMO an output is a signal to the 
>>>experimenter, not  the NN.
>>>
>>> All in all, my immediate impression is that workers in artificial 
>>>NNs  don't have a clear conception of what they are trying to do. Not 
>>>that  that is a bad thing - after all, it's early days yet, and one 
>>>of the  functions of research is to clarify the questions one is 
>>>trying to  answer. My comments as a pure outsider may or may not help 
>>>clarify  vagueness. Either way, thinking about your explanations has 
>>>been  interesting.
>>>
>>>
>> Another way of putting it is that the early ANN folk didn't know what 
>>was being done in the EAB back in the 30s, 40s and 50s (note that all 
>>of  the former folks' work came out of those decades but they seem to 
>>have  an uncanny knack of misrepresenting or just not understanding 
>>their  sources). Nor did they understand the way that philosophy was 
>>going in  the same period (most "AI" and "Cognitive Scientists" 
>>*still* appear to  be pre 1929 Carnap or early Wittgensteinian). It 
>>appears to me that they  took some basic "programming" algorithms 
>>which *simulated* (or at best controlled) some of the experimental 
>>schedules/equipment (in those early  days it was largely switchboard 
>>and other telephonic paraphernalia) and  just renamed their statistical models
>
>Yeah, sure, like Markov wasn't studying these structures way back in 
>1906.  I coded my first NN after studying Markov chains, i didn't even 
>know about Hull et al.  There has always been a close association 
>between engineering (computers) and the math departments.  Did the 
>psychology departments inform that process ... maybe ... but maybe not 
>as much as it appears to you.  Maybe Minsky could tell us what 
>primarily informed *his* research ....
>
>patty
>

This response, though written immediately after the above reply appeared, 
has been delayed intentionally to give others time to have their say, as 
you wish. I wonder if you might be able to work this out, or discover it 
through Googling? Tell us: what were the three references at the end of 
the paper in The Bulletin of Mathematical Biophysics, 5: 115-133, 1943? 
(And whilst you are at it, you might like to see if you can dig out 
chapter XII in the book "Information Storage and Neural Control", eds. 
Fields and Abbott, 1963 - the first reference is there too, but is widely 
available elsewhere.) What might that latter paper have to do with that 
Wittgenstein section you cited and which I provided a link to?

Let me tell you a story.

Somewhere around 1980/1 or so, I spent an awful lot of time and effort 
drawing what you might call "neural nets". I was trying to work out how 
noradrenaline (aka norepinephrine) and serotonin (aka 5-HT) and dopamine 
might be figuring in the control of operant behaviour. I thought this 
might be a good use of my time in between the session change over which 
I had to do every 20 mins at the animal house about 500 yards away. This 
is what I did as a way of thinking about the experiments I was running 
with rats (about  60 a day, using four Skinner Boxes, 20 mins a session 
- I'm sure you can do the maths, but that works out at 15 x 20mins, or, 
including change over time of 5-10 mins, about 7 or 8 hrs a day). One of 
these experiments ran for 3 months (an FI-60), and these experiments had 
to run 7 days a week. At this time, I wrote about 40 pages on "learning" 
and what you might call nets, and I showed it to the guy I shared the 
lab with - someone who kind of acted as one of my thesis advisors. He 
showed it to some others a couple of doors down (it was a small 
division, about 20 scientists if that, maybe 7 labs on our level in the 
division, and two or three downstairs). One of the people a few doors 
down was someone who is well known for his work on LTP (if you look into 
where I was at the time, this might become clearer). This chap (whose 
thesis had been entitled (as I recall) "How The Brain Works"), said I 
might like to go and see a good guy at the NPL (Uttley), but I didn't go 
because of the almost 12/7 commitment above, something which seemed to 
go on for nearly 4 years. My main supervisor had also made it clear that 
my work should be purely empirical and was probably a bit worried about 
me asking if I could submit a theoretical PhD. One of the things I 
recall at the time was that whilst I was running back and forth to the 
animal house every 20 minutes to change over the rats in the Skinner 
Boxes, the LTP chap, and another, inscrutable and very wise oriental 
gentleman who had the lab next to him (and who I respect and admire as 
much as I do the other chap), began running linear algebra courses in 
the coffee room. Meanwhile, I continued with my little diagrams and 
experiments. When he read my 40 pages of scribbles and rants, I remember 
the chap I shared the lab with saying "hey Dave, there might be 
something in this, but you know, there are many ways this could be 
implemented". I said "I know". I knew this because he'd earlier taught 
me how to patch wires on our BRS/LVE board to link together the solid 
state modules which together ran the operant schedules and clocked up 
the data (we later moved from solid state to FAST BASIC programming, and 
the old spaghetti largely went by the way - something you might like to 
think upon further). Anyway, the above oriental gentleman (Glen knows 
who he was), said to me one day when I was trying to enthuse him about 
the then famous 1971 model of what some might call "unsupervised 
learning "you know, Dave, the guys doing this are nice guys, and when I 
was at ******* I tried to help them out with some of their maths. What 
you really need to do is put it all into Hilbert Space......". He gave 
me some other good advice which I've tried to act on.

My advice to you is......do your homework and stop being a smartass and 
understand what the "web of belief" is really all about in terms of 
holism. Why might a sentence (or set of sentences) mean one thing to me 
and another to you? Why might what I believe be "better" than what you 
believe? What does it take to do innovative research?



>
>> or descriptions of this "rule 
>> governed behaviour" something more catchy ie "Artificial Neural 
>>Networks" or "cell assemblies" (Hebb was always talking about a 
>>Conceptual Nervous System and he did it rather poorly relative to the 
>>efforts of Hull, Guthrie or Estes - he just said it all in more 
>>popular,  familiar, intensional language, ensuring that more 
>>science-shy people  lapped it up!). This propensity to generate 
>>misnomers and repackage,  plagiarise or re-badge others' *empirical* 
>>work as something new and  "algorithmic" or "analytic" simply through 
>>name changing allows them to  sell a load of nonsense to the unwary 
>>who don't see this sleight of hand  for what it is.  When they make 
>>out that what they have to say somehow  captures what's essential 
>>about "cognitive" or "mental" life I just see  fraud, something which 
>>I think is endemic within psychology, and has  been for decades. It 
>>makes "Cognitive Science" a Ptolemeic monster,  which in my view is 
>>far worse than the original, as this monster has no  practical utility 
>>over the original behavioural work itself, and yet  actually gets in 
>>the way of advancing that science by soaking up funding  on grounds 
>>that it's closer to common sense folk psychology! Such people  make 
>>quite ludicrous grant proposals, with fantastic promises, which make 
>>the realistic aims of real science look trivial in comparison. This 
>>just shapes up lying, and turns science into marketing. Such "leaders" 
>>take students and other naive folk back to ways of thinking which were 
>>abandoned well over a century ago. They're entrepreneurs!

-- 
David Longley
0
David
10/30/2004 10:06:05 PM
David Longley wrote:
> In article <10o81pseu1mn26@news20.forteinc.com>, Stargazer
> <fuckoff@spammers.com> writes
> > Wolf Kirchmeir wrote:
> > > Stargazer wrote:
> > > [...]
> > > >
> > > > Learning as a change in behavior is a definition under the
> > > > behaviorist rationale. This assumption leads to all the well
> > > > known theoretical edifice. However, learning is defined
> > > > differently by the neuroscientist. Thus, it may be "stupid" from a 
> > > > behaviorist
> > > > point of view, but it is not from neuroscience. It doesn't seem
> > > > wise to criticize someone
> > > > else's definitions. You can accept it or reject it, or you can
> > > > (at most) point out that such definitions may lead to
> > > > theoretical/conceptual dead-ends or empirically flawed models,
> > > > which is not the case regarding supervised/unsupervised
> > > > learning. *SG*
> > >
> > > Well, I've tried to criticise the fuzziness of the concepts. I'll
> > > accept "learning" as any change in behaviour, at any level. What I
> > > find confusing is that sometimes "learning" is used in the
> > > behavioural sense, sometimes in the sense of "change
> > > functioning of a neural network", sometimes in the sense "changes
> > > in a neuron's firing patterns", etc etc, all at the same time.
> > > Now, I don't care what you call these, so long as you keep the
> > > hierarchy straight. Terminology can help or hinder -- I would
> > > think that people would be more careful with their choice of
> > > terms. (Footnote) I see the following hierarchy:
> > > behaviour -- physiological functioning -- neural network
> > > functioning -- neuron functioning
> > >
> > > Information flows both ways along this hierarchy, and it's
> > > important IMO to talk as clearly as possible. It seems to me that
> > > learning at one level implies learning at a lower level.
> > >
> > > I also find it odd that people invent labels for two kinds of
> > > learning and seem to believe they have discovered something new
> > > (or in some cases have refuted the behaviorist stance.)  I've
> > > reread the posts on supervised vs unsupervised learning, and
> > > insofar as I've made sense of the explanations, unsupervised
> > > learning is classical conditioning, and supervised learning is
> > > operant conditioning. (BTW, Stephen Harris's comment that
> > > unsupervised learning  requires some sort of external input, else
> > > the NN won't do much of anything, was the key that unlocked the
> > > puzzle. Up till then, I was trying to figure out how unsupervised
> > > learning was different from random neural firing.) The lesson I draw is that 
> > > learning in the behaviorist sense
> > > scales up and down; or if you like, repeats fractal-like at
> > > different scales. It is "behaviour all the way down", and back up
> > > again. If that notion reflects reality, then something new has
> > > IMO been discovered. Considering that a neuron is a rather
> > > complex entity (as is shown when you sketch a meta-program to
> > > simulate how it works), it seems that learning may scale even to
> > > that level. "It's all rather complicated, really."
> >
> > It seems complicated because we're talking of different paradigms
> > here. Under behaviorist parlance, learning is related to the changes
> > in behavior of the organism/mechanism that one is studying, while in
> > neuroscience and artificial neural networks, learning is a change
> > of internal parameters (and/or architecture, if one wants to account
> > for plasticity) of the organism/mechanism. Although logically sound
> > and coherent, the behaviorist definition prevents the analysis of
> > internal modifications of the mechanism that aren't immediately
> > reflected in corresponding changes of behavior. For the behaviorist,
> > unsupervised learning does not exist, and this may explain why it
> > seems difficult to understand it.
> >
> > *SG*
> >
> >
>
> Not that you meant to say this (as you clearly don't understand what
> Wolf and I have said), but do you have any idea as to why a radical
> behaviourist might have doubts as to whether *classical* conditioning
> naturally exists?
>
> [remainder nonsensical rants suppressed]

Classical, operant, vicarious learning, schedules of reinforcement,
discrimination training, etc., are all concepts that I know well,
but they are mostly restricted to the behavioristic domain.
Useful as they are, it is not wise to suggest that they may be
indiscriminately applied to other areas of scientific knowledge.
It is like using cosmological time frames as the reference in
discussing the aerodynamics of bird flight.

*SG*


0
Stargazer
10/30/2004 10:48:35 PM
In article <10o86i31bsbj437@news20.forteinc.com>, Stargazer 
<fuckoff@spammers.com> writes
>David Longley wrote:
>> In article <10o81pseu1mn26@news20.forteinc.com>, Stargazer
>> <fuckoff@spammers.com> writes
>> > Wolf Kirchmeir wrote:
>> > > Stargazer wrote:
>> > > [...]
>> > > >
>> > > > Learning as a change in behavior is a definition under the
>> > > > behaviorist rationale. This assumption leads to all the well
>> > > > known theoretical edifice. However, learning is defined
>> > > > differently by the neuroscientist. Thus, it may be "stupid" from a
>> > > > behaviorist
>> > > > point of view, but it is not from neuroscience. It doesn't seem
>> > > > wise to criticize someone
>> > > > else's definitions. You can accept it or reject it, or you can
>> > > > (at most) point out that such definitions may lead to
>> > > > theoretical/conceptual dead-ends or empirically flawed models,
>> > > > which is not the case regarding supervised/unsupervised
>> > > > learning. *SG*
>> > >
>> > > Well, I've tried to criticise the fuzziness of the concepts. I'll
>> > > accept "learning" as any change in behaviour, at any level. What I
>> > > find confusing is that sometimes "learning" is used in the
>> > > behavioural sense, sometimes in the sense of "change
>> > > functioning of a neural network", sometimes in the sense "changes
>> > > in a neuron's firing patterns", etc etc, all at the same time.
>> > > Now, I don't care what you call these, so long as you keep the
>> > > hierarchy straight. Terminology can help or hinder -- I would
>> > > think that people would be more careful with their choice of
>> > > terms. (Footnote) I see the following hierarchy:
>> > > behaviour -- physiological functioning -- neural network
>> > > functioning -- neuron functioning
>> > >
>> > > Information flows both ways along this hierarchy, and it's
>> > > important IMO to talk as clearly as possible. It seems to me that
>> > > learning at one level implies learning at a lower level.
>> > >
>> > > I also find it odd that people invent labels for two kinds of
>> > > learning and seem to believe they have discovered something new
>> > > (or in some cases have refuted the behaviorist stance.)  I've
>> > > reread the posts on supervised vs unsupervised learning, and
>> > > insofar as I've made sense of the explanations, unsupervised
>> > > learning is classical conditioning, and supervised learning is
>> > > operant conditioning. (BTW, Stephen Harris's comment that
>> > > unsupervised learning  requires some sort of external input, else
>> > > the NN won't do much of anything, was the key that unlocked the
>> > > puzzle. Up till then, I was trying to figure out how unsupervised
>> > > learning was different from random neural firing.) The lesson I 
>> > >draw is that
>> > > learning in the behaviorist sense
>> > > scales up and down; or if you like, repeats fractal-like at
>> > > different scales. It is "behaviour all the way down", and back up
>> > > again. If that notion reflects reality, then something new has
>> > > IMO been discovered. Considering that a neuron is a rather
>> > > complex entity (as is shown when you sketch a meta-program to
>> > > simulate how it works), it seems that learning may scale even to
>> > > that level. "It's all rather complicated, really."
>> >
>> > It seems complicated because we're talking of different paradigms
>> > here. Under behaviorist parlance, learning is related to the changes
>> > in behavior of the organism/mechanism that one is studying, while in
>> > neuroscience and artificial neural networks, learning is a change
>> > of internal parameters (and/or architecture, if one wants to account
>> > for plasticity) of the organism/mechanism. Although logically sound
>> > and coherent, the behaviorist definition prevents the analysis of
>> > internal modifications of the mechanism that aren't immediately
>> > reflected in corresponding changes of behavior. For the behaviorist,
>> > unsupervised learning does not exist, and this may explain why it
>> > seems difficult to understand it.
>> >
>> > *SG*
>> >
>> >
>>
>> Not that you meant to say this (as you clearly don't understand what
>> Wolf and I have said), but do you have any idea as to why a radical
>> behaviourist might have doubts as to whether *classical* conditioning
>> naturally exists?
>>
>> [remainder nonsensical rants suppressed]
>
>Classical, operant, vicarious learning, schedules of reinforcements,
>discrimination training, etc., are all concepts that I know well,
>but they are mostly restricted to the behavioristic domain.
>Useful as they are, it is not wise to suggest that they may be
>indiscriminately applied to other areas of scientific knowledge.
>It is like using cosmological time frames as reference to the
>discussion of the aerodynamics of the flight of birds.
>
>*SG*
>
>
I assure you, you *don't* know *any* of these things well. You're 
another ignorant, deluded idiot with a bad education. You'll find many 
like-minded twits posting to c.a.p. How long you wish to remain an 
ignorant, deluded idiot is basically in your hands.
-- 
David Longley
0
David
10/30/2004 11:16:48 PM
"Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message
news:5hPgd.32047$Qs6.2170742@news20.bellglobal.com...

snip most, to try to get to the point...

> Now, if I understand you right, this is an example of unsupervised
> learning. Yet because it depends on external inputs it's supervised.
> Or is it both at the same time? Why?

You still conflate "external inputs" with the subclass of inputs
which are "feedback".

Do you understand the concept of causal or functional dependency?

You've been told several times that "feedback" is an input which is
caused by or functionally dependent on prior outputs of the system.

To be useful, the feedback must be detectably correlated with or
related to the operation of the system, so that it can be used to
guide adjustments to the way the system processes future inputs.  To
be useful, the feedback must be processed differently from other
inputs, it must have effects on the system distinct from those of
other non-feedback inputs.

The difference between having feedback and not having feedback
should not be all that hard to grasp... I don't understand why you
are having so much trouble.  All systems require external input, but
most external input is not functionally dependent on prior outputs,
so most external input is not feedback.

Is it just the vocabulary that confuses you?  Do you have the same
problem distinguishing operant conditioning from classical
conditioning?   Would you argue that they are indistinguishable?

If you can see a difference there, then recognize that it is the
same difference which is denoted by the distinction between
supervised and unsupervised.

Bill Modlin


0
Bill
10/31/2004 12:32:54 AM
On Sat, 30 Oct 2004 22:57:59 +0100, David Longley
<David@longley.demon.co.uk> in comp.ai.philosophy wrote:

>In article <10o81pseu1mn26@news20.forteinc.com>, Stargazer 
><fuckoff@spammers.com> writes
>>Wolf Kirchmeir wrote:
>>> Stargazer wrote:

[. . .]

>One consequence is that the cognoscenti generally just make polite 
>excuses and leave folk such as yourself to your nonsense.

So, what are we to conclude, David, that you're not one of the
cognoscenti or that you don't know how to make polite excuses?

Regards - Lester
0
lesterDELzick
10/31/2004 1:15:55 AM
On Sat, 30 Oct 2004 21:13:51 GMT, Michael Olea <oleaj@sbcglobal.net>
in comp.ai.philosophy wrote:

>in article 41849d2b.15043409@netnews.att.net, Lester Zick at
>lesterDELzick@worldnet.att.net wrote on 10/30/04 1:10 PM:
>
>> On Fri, 29 Oct 2004 22:29:53 GMT, Michael Olea <oleaj@sbcglobal.net>
>> in comp.ai.philosophy wrote:
>> 
>
>[snip]
>
>> 
>> Allow me to comment without malice, Michael, that we now have a good
>> working explanation for producing software that learns to do something
>> useful without claiming to do something with intelligence. Is it your
>> contention that doing something useful is intelligence or that it
>> doesn't matter? Or that intelligence amounts to doing something
>> useful. Seems to me I've seen mechanically intelligent people doing
>> plenty of useless things.
>
>I was not making any claim one way or another about intelligence, Lester,
>rather just trying to give a sense of what "supervised learning" and
>"unsupervised learning" mean in the context where those terms are part of a
>technical vocabulary - the construction of algorithms - though I would lean
>to the "it doesn't matter" point of view. A brief quote from Quine's essay
>"Natural Kinds":

Pretty much as I thought, Michael, and quite well done. But please
spare me the Quine. I'm sure he must have said something one time
or another. In the larger context of unimportant things, I'm sure he
looms large.

>=====
>An example is the disposition called intelligence - the ability, vaguely
>speaking, to learn quickly and to solve problems. Sometime, whether in terms
>of proteins or colloids or nerve nets or overt behavior, the relevant branch
>of science may reach the stage where a similarity notion can be constructed
>capable of making even the notion of intelligence respectable. And
>superfluous.
>=====
>
>> 
>> Regards - Lester
>>
>
>Regards.
>


Regards - Lester
0
lesterDELzick
10/31/2004 1:21:28 AM
On Sat, 30 Oct 2004 21:29:14 GMT, patty <pattyNO@SPAMicyberspace.net>
in comp.ai.philosophy wrote:

>Lester Zick wrote:
>
>> On Sat, 30 Oct 2004 15:06:09 -0400, Wolf Kirchmeir
>> <wwolfkir@sympatico.ca> in comp.ai.philosophy wrote:
>> 
>> 
>>>Stargazer wrote:
>>>[...]
>>>
>>>>Learning as a change in behavior is a definition under the behaviorist
>>>>rationale. This assumption leads to all the well known theoretical
>>>>edifice. However, learning is defined differently by the neuroscientist.
>>>>Thus, it may be "stupid" from a behaviorist point of view, but it
>>>>is not from neuroscience. It doesn't seem wise to criticize someone
>>>>else's definitions. You can accept it or reject it, or you can (at most)
>>>>point out that such definitions may lead to theoretical/conceptual
>>>>dead-ends or empirically flawed models, which is not the case
>>>>regarding supervised/unsupervised learning.
>>>>
>>>>*SG*
>>>
>>>Well, I've tried to criticise the fuzziness of the concepts. I'll accept 
>>>"learning" as any change in behaviour, at any level. What I find 
>>>confusing is that sometimes "learning" is used in the behavioural sense, 
>>>sometimes in the sense of "change in the functioning of a neural 
>>>network", sometimes in the sense "changes in a neuron's firing 
>>>patterns", etc etc, all at the same time. Now, I don't care what you 
>>>call these, so long as you keep the hierarchy straight. Terminology can 
>>>help or hinder -- I would think that people would be more careful with 
>>>their choice of terms. (Footnote)
>>>
>>>I see the following hierarchy:
>>>behaviour -- physiological functioning -- neural network functioning -- 
>>>neuron functioning
>>>
>>>Information flows both ways along this hierarchy, and it's important IMO 
>>>to talk as clearly as possible. It seems to me that learning at one 
>>>level implies learning at a lower level.
>>>
>>>I also find it odd that people invent labels for two kinds of learning 
>>>and seem to believe they have discovered something new (or in some cases 
>>>have refuted the behaviorist stance.)  I've reread the posts on 
>>>supervised vs unsupervised learning, and insofar as I've made sense of 
>>>the explanations, unsupervised learning is classical conditioning, and 
>>>supervised learning is operant conditioning. (BTW, Stephen Harris's 
>>>comment that unsupervised learning  requires some sort of external 
>>>input, else the NN won't do much of anything, was the key that unlocked 
>>>the puzzle. Up till then, I was trying to figure out how unsupervised 
>>>learning was different from random neural firing.)
>>>
>>>The lesson I draw is that learning in the behaviorist sense scales up 
>>>and down; or if you like, repeats fractal-like at different scales. It 
>>>is "behaviour all the way down", and back up again. If that notion 
>>>reflects reality, then something new has IMO been discovered. 
>>>Considering that a neuron is a rather complex entity (as is shown when 
>>>you sketch a meta-program to simulate how it works), it seems that 
>>>learning may scale even to that level.
>>>
>>>"It's all rather complicated, really."
>> 
>> 
>> I'll say. "It is behavior all the way down" is analogous to patty's
>> idea that it's all belief. Such facile observations are nugatory with
>> respect to science because they fail to discriminate when and how belief
>> becomes knowledge and behavior becomes learning and a host of other
>> mental effects.
>> 
>> Regards - Lester
>
>To correct the record i would not say something as stupid as "it's 
>belief all the way down".  Beliefs are just propositional attitudes 
>and there certainly is more to existence than just those.  What i might 
>have said would have been something more like, we tend not to rationally 
>think outside of our own propositional attitudes.  A formal system 
>running on a computer certainly cannot function outside of its beliefs 
>(were its statements to be considered beliefs).  Those of you who think 
>you can make statements that are true even outside of your assumptions 
>and beliefs, are fundamentally misguided.  You are arrogantly taking a 
>god's eye view.  You are quite literally talking out of your heads.

Whatever. It's quite clear where your term web of belief comes from
since David uses the term. Perhaps you could rationalize your own web
of belief a little further.

>Incidentally, Lester, you can change your assumptions and make 
>tautologies false.

And incidentally, patty, I'm still awaiting some explanation for your
objections to my critique of tautologies and empirical truth.

Regards - Lester
0
lesterDELzick
10/31/2004 1:25:48 AM
in article 4184e676.16056124@netnews.att.net, Lester Zick at
lesterDELzick@worldnet.att.net wrote on 10/30/04 6:21 PM:

> On Sat, 30 Oct 2004 21:13:51 GMT, Michael Olea <oleaj@sbcglobal.net>
> in comp.ai.philosophy wrote:
> 
>> in article 41849d2b.15043409@netnews.att.net, Lester Zick at
>> lesterDELzick@worldnet.att.net wrote on 10/30/04 1:10 PM:
>> 
>>> On Fri, 29 Oct 2004 22:29:53 GMT, Michael Olea <oleaj@sbcglobal.net>
>>> in comp.ai.philosophy wrote:
>>> 
>> 
>> [snip]
>> 
>>> 
>>> Allow me to comment without malice, Michael, that we now have a good
>>> working explanation for producing software that learns to do something
>>> useful without claiming to do something with intelligence. Is it your
>>> contention that doing something useful is intelligence or that it
>>> doesn't matter? Or that intelligence amounts to doing something
>>> useful. Seems to me I've seen mechanically intelligent people doing
>>> plenty of useless things.
>> 
>> I was not making any claim one way or another about intelligence, Lester,
>> rather just trying to give a sense of what "supervised learning" and
>> "unsupervised learning" mean in the context where those terms are part of a
>> technical vocabulary - the construction of algorithms - though I would lean
>> to the "it doesn't matter" point of view. A brief quote from Quine's essay
>> "Natural Kinds":
> 
> Pretty much as I thought, Michael, and quite well done. But please
> spare me the Quine. I'm sure he must have said something one time
> or another. In the larger context of unimportant things, I'm sure he
> looms large.

Heheh. I know you did not much care for Dos Dogmas, and as I recall you got
so bored with it that you did not read the whole thing, despite its brevity,
but I find Quine's writings provocative. Of thought. Yesterday I read
Natural Kinds for the fourth time - maybe I am just slow, but I keep getting
more out of that one essay. Bear in mind that I actually write cluster
analysis code, so the ideas he wrestles with in Natural Kinds are not just
abstractions, but matters of personal practical concern. Cluster Analysis is
after all an attempt to automate a rigorous elucidation of natural kinds.

> 
>> =====
>> An example is the disposition called intelligence - the ability, vaguely
>> speaking, to learn quickly and to solve problems. Sometime, whether in terms
>> of proteins or colloids or nerve nets or overt behavior, the relevant branch
>> of science may reach the stage where a similarity notion can be constructed
>> capable of making even the notion of intelligence respectable. And
>> superfluous.
>> =====
>> 
>>> 
>>> Regards - Lester
>>> 
>> 
>> Regards.
>> 
> 
> 
> Regards - Lester

0
Michael
10/31/2004 2:12:20 AM
"patty" <pattyNO@SPAMicyberspace.net> wrote in message
news:A_Tgd.335753$MQ5.168209@attbi_s52...
> Bill Modlin wrote:
>
> > "Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message
> > news:5hPgd.32047$Qs6.2170742@news20.bellglobal.com...
> >
> > snip most, to try to get to the point...
> >
> >
> >>Now, if I understand you right, this is an example of
> >>unsupervised learning. Yet because it depends on external inputs
> >>it's supervised.
> >
> >>Or is it both at the same time? Why?
> >
> >
> > You still conflate "external inputs" with the subclass of inputs
> > which are "feedback".
> >
> > Do you understand the concept of causal or functional dependency?
> >
> > You've been told several times that "feedback" is an input which is
> > caused by or functionally dependent on prior outputs of the system.
> >
> > To be useful, the feedback must be detectably correlated with or
> > related to the operation of the system, so that it can be used to
> > guide adjustments to the way the system processes future inputs.  To
> > be useful, the feedback must be processed differently from other
> > inputs; it must have effects on the system distinct from those of
> > other non-feedback inputs.
> >
>
> How does the system (or even the system designer) know which are
> which?
>
> patty

Generally, they come through quite different unrelated channels with
distinct purposes... there is no possibility of confusion.

In the imagined biological implementation of operant conditioning
which Glen sketched in another thread, the feedback comes in through
hardwired "reinforcement detectors", and causes certain sections of
tissue to be flooded with special chemicals.   Not easy to confuse
with the normal inputs processed through the synapses of the neurons
mediating between sensory input and motor control.

In a typical ANN training experiment, one supplies frames of input
variables, and a quite separate variable that indicates what you
want the network to generate as output for each particular frame.
The training logic runs the normal inputs through the net, compares
the resulting output to the desired value, and adjusts the network
parameters to try to reduce the error for subsequent trials.

In an unsupervised situation, you have only the input frames, the
special purpose "training" signal is missing.  There may still be
logic that attempts to adjust the network parameters after each
frame, perhaps to emphasise correlations that you are keeping track
of or some such, but there is no special channel to tell you whether
the outputs you've been generating are right or wrong.
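
To make the contrast concrete, here is a toy sketch in Python of both
regimes for a single linear unit. The delta-rule and Hebbian-style
updates are illustrative assumptions of mine, stand-ins for whatever
adjustment logic a real training package actually uses:

def train_supervised(frames, targets, w, rate=0.1):
    # Each frame arrives with a separate training value: the desired
    # output.  The error between actual and desired output drives the
    # weight adjustment (a delta rule).
    for x, desired in zip(frames, targets):
        output = sum(wi * xi for wi, xi in zip(w, x))
        error = desired - output
        w = [wi + rate * error * xi for wi, xi in zip(w, x)]
    return w

def train_unsupervised(frames, w, rate=0.1):
    # Only the input frames are available; no channel says whether the
    # output was right.  A Hebbian-style rule just moves the weights
    # toward correlations present in the inputs themselves.
    for x in frames:
        output = sum(wi * xi for wi, xi in zip(w, x))
        w = [wi + rate * output * xi for wi, xi in zip(w, x)]
    return w

The only structural difference is the extra, privileged channel
(targets) in the first function; everything else is identical.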

Bill


0
Bill
10/31/2004 3:48:55 AM
In article <BDA99812.C01A%oleaj@sbcglobal.net>, Michael Olea 
<oleaj@sbcglobal.net> writes
>in article 4184e676.16056124@netnews.att.net, Lester Zick at
>lesterDELzick@worldnet.att.net wrote on 10/30/04 6:21 PM:
>
>> On Sat, 30 Oct 2004 21:13:51 GMT, Michael Olea <oleaj@sbcglobal.net>
>> in comp.ai.philosophy wrote:
>>
>>> in article 41849d2b.15043409@netnews.att.net, Lester Zick at
>>> lesterDELzick@worldnet.att.net wrote on 10/30/04 1:10 PM:
>>>
>>>> On Fri, 29 Oct 2004 22:29:53 GMT, Michael Olea <oleaj@sbcglobal.net>
>>>> in comp.ai.philosophy wrote:
>>>>
>>>
>>> [snip]
>>>
>>>>
>>>> Allow me to comment without malice, Michael, that we now have a good
>>>> working explanation for producing software that learns to do something
>>>> useful without claiming to do something with intelligence. Is it your
>>>> contention that doing something useful is intelligence or that it
>>>> doesn't matter? Or that intelligence amounts to doing something
>>>> useful. Seems to me I've seen mechanically intelligent people doing
>>>> plenty of useless things.
>>>
>>> I was not making any claim one way or another about intelligence, Lester,
>>> rather just trying to give a sense of what "supervised learning" and
>>> "unsupervised learning" mean in the context where those terms are part of a
>>> technical vocabulary - the construction of algorithms - though I would lean
>>> to the "it doesn't matter" point of view. A brief quote from Quine's essay
>>> "Natural Kinds":
>>
>> Pretty much as I thought, Michael, and quite well done. But please
>> spare me the Quine. I'm sure he must have said something one time
>> or another. In the larger context of unimportant things, I'm sure he
>> looms large.
>
>Heheh. I know you did not much care for Dos Dogmas, and as I recall you got
>so bored with it that you did not read the whole thing, despite its brevity,
>but I find Quine's writings provocative. Of thought. Yesterday I read
>Natural Kinds for the fourth time - maybe I am just slow, but I keep getting
>more out of that one essay. Bear in mind that I actually write cluster
>analysis code, so the ideas he wrestles with in Natural Kinds are not just
>abstractions, but matters of personal practical concern. Cluster Analysis is
>after all an attempt to automate a rigorous elucidation of natural kinds.
>

The one thing one soon learns after using cluster analysis practically 
is that cluster analysis (agglomerative or divisive) *always* produces 
clusters (just as factor analysis always produces factors; the two are 
closely related). In many applications (apart from the nice concrete 
biological ones used to illustrate them) there's nothing natural about 
them <g>. They're descriptive statistical tools. cf. the first website 
reference for the beginning of a long series which took just this line, 
and then think "Fragments".
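
The point is easily demonstrated: hand a clustering routine data with
no structure whatever and it still returns exactly as many "clusters"
as you ask for. A bare-bones k-means in plain Python (a sketch of my
own, purely illustrative, not anyone's production code):

import random

def kmeans(points, k, iters=20):
    # Standard Lloyd iteration: assign each point to its nearest
    # centre, then move each centre to the mean of its members.
    centres = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda j: (p[0] - centres[j][0]) ** 2 +
                                        (p[1] - centres[j][1]) ** 2)
            groups[nearest].append(p)
        centres = [(sum(p[0] for p in g) / len(g),
                    sum(p[1] for p in g) / len(g)) if g else centres[j]
                   for j, g in enumerate(groups)]
    return groups

# Uniformly random points: no "natural kinds" here at all...
noise = [(random.random(), random.random()) for _ in range(300)]
groups = kmeans(noise, 3)
print([len(g) for g in groups])   # ...yet we always get three clusters.

Whether the three partitions track anything real in the world is
exactly the question the algorithm cannot answer for you.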



>>
>>> =====
>>> An example is the disposition called intelligence - the ability, vaguely
>>> speaking, to learn quickly and to solve problems. Sometime, whether in terms
>>> of proteins or colloids or nerve nets or overt behavior, the relevant branch
>>> of science may reach the stage where a similarity notion can be constructed
>>> capable of making even the notion of intelligence respectable. And
>>> superfluous.
>>> =====
>>>
>>>>
>>>> Regards - Lester
>>>>
>>>
>>> Regards.
>>>
>>
>>
>> Regards - Lester
>

-- 
David Longley
http://www.longley.demon.co.uk
0
David
10/31/2004 8:00:24 AM
In article <4184e532.15732286@netnews.att.net>, Lester Zick 
<lesterDELzick@worldnet.att.net> writes
>On Sat, 30 Oct 2004 22:57:59 +0100, David Longley
><David@longley.demon.co.uk> in comp.ai.philosophy wrote:
>
>>In article <10o81pseu1mn26@news20.forteinc.com>, Stargazer
>><fuckoff@spammers.com> writes
>>>Wolf Kirchmeir wrote:
>>>> Stargazer wrote:
>
>[. . .]
>
>>One consequence is that the cognoscenti generally just make polite
>>excuses and leave folk such as yourself to your nonsense.
>
>So, what are we to conclude, David, that you're not one of the
>cognoscenti or that you don't know how to make polite excuses?
>
>Regards - Lester

Not quite Lester, I'm a super-being, and I'm benevolently playing with 
you ;-)

You won't know it (cf. intensional opacity) but there are lots of us 
super-beings about. Some of us created political correctness, others 
"cognitive science", others (a little less imaginative) had a go at even 
more eccentric products (<http://www.phallic.org/onanism-fashion/>) in an 
effort to keep you and the rest of your onanistic finger dribbling ilk 
from more serious forms of abuse.

We're not users but anti-abusers (educators). Consider it a subtle 
selection, or management procedure.

cf. "Beyond Freedom and Dignity" (1971), "Manufacturing Consent" (1992), 
"The Bell Curve" 1994, or ..well...., I'll let you work that out for 
yourself (but do look up Nisbett & Wilson 1977 sometime, and remind 
Patty to do the same <g>)

-- 
David Longley
http://www.longley.demon.co.uk/Frag.htm

0
David
10/31/2004 8:25:30 AM
In article <unKgd.334213$3l3.127896@attbi_s03>, patty 
<pattyNO@SPAMicyberspace.net> writes
>David Longley wrote:
>> In article <tKdgd.17800$Qs6.1523401@news20.bellglobal.com>, Wolf 
>>Kirchmeir <wwolfkir@sympatico.ca> writes
>>
>>> Stargazer wrote:
>>>
>>>> Wolf Kirchmeir wrote:
>>>>
>>>>> Stargazer wrote:
>>>
>>> [snip a number of clear answers to my questions - thanks. I think. :-)]
>>>
>>>> *SG*
>>>
>>>
>>> Your answers clear up some misconceptions on my part, but they also 
>>>show  terminological obfuscation on the part of artificial neural 
>>>network researchers.
>>>
>>> Throughout your explanation, the term "signal" is used ambiguously. 
>>>It  sometimes seems to apply to an input to a single neuron, and 
>>>sometimes  to a collection of inputs to a network of neurons. IMO 
>>>this is  confusing. Very. It's a hierarchy error, which always causes trouble.
>>>
>>> Also, calling the calculated output of a NN a "training signal" 
>>>because it's compared to the desired outcome is confusing, at least 
>>>to  me, for whom a "training signal" is a "signal that trains", ie, 
>>>an  input to the NN. And the use of "signal" for both inputs and 
>>>outputs  is confusing, since IMO an output is a signal to the 
>>>experimenter, not  the NN.
>>>
>>> All in all, my immediate impression is that workers in artificial 
>>>NNs  don't have a clear conception of what they are trying to do. Not 
>>>that  that is a bad thing - after all, it's early days yet, and one 
>>>of the  functions of research is to clarify the questions one is 
>>>trying to  answer. My comments as a pure outsider may or may not help 
>>>clarify  vagueness. Either way, thinking about your explanations has 
>>>been  interesting.
>>>
>>>
>> Another way of putting it is that the early ANN folk didn't know what 
>>was being done in the EAB back in the 30s, 40s and 50s (note that all 
>>of  the former folks' work came out of those decades but they seem to 
>>have  an uncanny knack of misrepresenting of just not understanding 
>>their  sources). Nor did they understand the way that philosophy was 
>>going in  the same period (most "AI" and "Cognitive Scientists" 
>>*still* appear to  be pre 1929 Carnap or early Wittgensteinian). It 
>>appears to me that they  took some basic "programming" algorithms 
>>which *simulated* (or at best controlled) some of the experimental 
>>schedules/equipment (in those early days it was largely switchboard 
>>and other telephonic paraphernalia) and  just renamed their statistical models
>
>Yeah, sure , like Markov wasn't studying these structures way back in 
>1906.  I coded my first NN after studying Markov chains, i didn't even 
>know about Hull et al.  There has always been a close association 
>between engineering (computers) and the math departments.  Did the 
>psychology departments inform that process ... maybe ... but maybe not 
>as much as it appears to you.  Maybe Minsky could tell us what 
>primarily informed *his* research ....
>
>patty
>

This response, though written immediately after the above reply 
appeared, has been delayed intentionally to give others time to have 
their say as you wish. I wonder if you might be able to work this out, 
or discover it through Googling? Tell us: what were the three references 
at the end of the paper in The Bulletin of Mathematical Biophysics, 5: 
115-133, 1943? Whilst you're at it, you might like to dig out chapter 
XII in the book "Information Storage and Neural Control" eds: Fields and 
Abbott 1963 (the first article above is there as well, but it's also 
widely available elsewhere too). What might the chapter XII have to do 
with that Wittgenstein section you cited from the Tractatus, and which I 
provided a further link to? Just how good is your web eh?

Let me tell you a story.

Somewhere around 1980/1 or so, I spent an awful lot of time and effort 
drawing what you might call "neural nets". I was trying to work out how 
noradrenaline (aka norepinephine) and serotonin (aka 5-HT) and dopamine 
(not to mention loads of peptides) might be figuring in the control of 
operant behaviour. I thought this might be a good use of my time in 
between the session change-overs which I had to do every 20 mins at the 
animal house which was about 500 yards away via my own "runway" <g>. 
This is what I did as a way of thinking about the experiments I was 
running with rats (about  60 a day, using four Skinner Boxes, 20 mins a 
session - I'm sure you can do the maths, but that works out at 15 x 
20mins, or, including change over time of 5-10 mins where animals were 
swapped over and the equipment checked, about 7 or 8 hrs a day). One of 
these experiments ran for 3 months (an FI-60), and such experiments had 
to run 7 days a week (no weekends off when you run experiments with 
rats).

At this time, I wrote about 40 pages on "learning" and what you might 
call nets, and I showed it to the guy I shared the lab with - (someone 
who kind of acted as one of my thesis advisors). He showed it to some 
others a couple of doors down (it was a small division, about 20 
scientists if that, maybe 7 labs on our level in the division, and two 
or three downstairs). One of the people a few doors down was someone who 
is well known for his work on LTP (if you look into where I was at the 
time, this might become clearer). This chap (whose thesis had been 
entitled (as I recall) "How The Brain Works"), said I might like to go 
and see a good guy at the NPL (Uttley), but I didn't go because of the 
almost 12/7 commitment above, something which seemed to go on for nearly 
4 years without a break. My main supervisor (a research psychiatrist) 
had also made it clear that my work should be purely empirical and I 
suspect he was probably a bit worried about me asking if I could submit 
a theoretical PhD. One of the things I recall at the time was that 
whilst I was running back and forth to the animal house every 20 minutes 
to change over the rats in the Skinner Boxes, the LTP chap, and another, 
inscrutable and very wise oriental gentleman who had the lab next to him 
(and whom I respect and admire as much as I do the other chap), began 
running linear algebra courses in the coffee room. Meanwhile, I 
continued with my little diagrams and experiments.

When the chap in my lab read my 40 pages of scribbles and rants, I 
remember him saying "hey Dave, there might be something in this, but you 
know, there are many ways this could be implemented". I said "I know". I 
knew this because he'd earlier taught me how to patch wires on our 
BRS/LVE board to link together the solid state modules which together 
ran the operant schedules and clocked up the data (we later moved from 
solid state to FAST BASIC programming, and the old spaghetti largely 
went by the way - something you might like to think upon further). 
Anyway, the above oriental gentleman (Glen knows who he was), said to me 
one day as I was rather naively trying to enthuse him about the then 
famous 1971 model of what some might call "unsupervised learning" (and 
this might make Glen squeal with laughter!) "you know, Dave, the guys 
doing this are nice guys, but when I was at ******* I tried to help them 
out with some of their maths. What you really need to do is put it all 
into Hilbert Space......". I rushed off and bought Halmos. He also gave 
me some other good practical advice which I've tried to act upon.

My advice to you is......do your homework, stop being a smartass and 
understand what the "web of belief" is *really* all about in terms of 
holism. Why might a sentence (or set of sentences) mean one thing to me 
and another to you? Why might what I believe be "better" than what you 
believe? What does it take to do innovative research? What is the sine 
qua non? What is the message of "enlightened" empiricism?


>
>or descriptions of this "rule
>> governed behaviour" something more catchy ie "Artificial Neural 
>>Networks" or "cell assemblies" (Hebb was always talking about a 
>>Conceptual Nervous System and he did it rather poorly relative to the 
>>efforts of Hull, Guthrie or Estes - he just said it all in more 
>>popular,  familiar, intensional language, ensuring that more 
>>science-shy people  lapped it up!). This propensity to generate 
>>misnomers and repackage,  plagiarise or re-badge others' *empirical* 
>>work as something new and  "algorithmic" or "analytic" simply through 
>>name changing allows them to  sell a load of nonsense to the unwary 
>>who don't see this sleight of hand  for what it is.  When they make 
>>out that what they have to say somehow  captures what's essential 
>>about "cognitive" or "mental" life I just see  fraud, something which 
>>I think is endemic within psychology, and has  been for decades. It 
>>makes "Cognitive Science" a Ptolemeic monster,  which in my view is 
>>far worse than the original, as this monster has no  practical utility 
>>over the original behavioural work itself, and yet  actually gets in 
>>the way of advancing that science by soaking up funding  on grounds 
>>that it's closer to common sense folk psychology! Such people  make 
>>quite ludicrous grant proposals, with fantastic promises, which make 
>>the realistic aims of real science look trivial in comparison. This 
>>just shapes up lying, and turns science into marketing. Such "leaders" 
>>take students and other naive folk back to ways of thinking which were entrepreneurs!
-- 
David Longley
http://www.longley.demon.co.uk/Frag.htm

0
David
10/31/2004 9:19:05 AM
Just a technical note: the phrase "discrimination of contingencies"
implies something other than the fact that discriminative
contingencies frequently produce discriminated operants. It implies
that the contingencies themselves are what are "discriminated" rather
than the discriminative stimuli that enter into the contingencies.
Such a thing is possible but has not been done very often - Andy
Lattal did it, I think. The way it is done is that the animal responds
under two different schedules, and periodically, two other manipulanda
become available and responding on one is reinforced if Schedule A was
in effect and responding on the other is reinforced if Schedule B was
in effect.
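
A schematic rendering of that procedure in Python may help; the
schedule labels, the left/right report manipulanda, and the random
stand-in for the animal's choice are all placeholders of mine, not
Lattal's actual design:

import random

def trial():
    # The animal responds under one of two schedules of reinforcement.
    schedule = random.choice(["A", "B"])

    # ...responding proceeds under that schedule for a period...

    # Then two report manipulanda become available.  Which report is
    # reinforced depends only on which schedule was just in effect, so
    # the contingency itself, not any exteroceptive stimulus, is what
    # must be discriminated.
    report = random.choice(["left", "right"])   # stand-in for the choice
    reinforced = (schedule == "A" and report == "left") or \
                 (schedule == "B" and report == "right")
    return schedule, report, reinforced

for _ in range(5):
    print(trial())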

Glen


Wolf Kirchmeir <wwolfkir@sympatico.ca> wrote in message news:<CYNgd.41639$rs5.1280530@news20.bellglobal.com>...
> Bill Modlin wrote:
> 
> [...]
> > Now suppose that the reinforcement depends not just on the behavior,
> > but on the behavior occuring under some specific conditions,
> > conditions presumably detectable as some combination of inputs from
> > the sensory systems.
> 
> At this point, you are actually describing the discrimination of 
> contingencies.
> 
> Just tho't you should know.
0
gmsizemore2
10/31/2004 1:36:37 PM
On Sun, 31 Oct 2004 02:12:20 GMT, Michael Olea <oleaj@sbcglobal.net>
in comp.ai.philosophy wrote:

>in article 4184e676.16056124@netnews.att.net, Lester Zick at
>lesterDELzick@worldnet.att.net wrote on 10/30/04 6:21 PM:
>
>> On Sat, 30 Oct 2004 21:13:51 GMT, Michael Olea <oleaj@sbcglobal.net>
>> in comp.ai.philosophy wrote:
>> 
>>> in article 41849d2b.15043409@netnews.att.net, Lester Zick at
>>> lesterDELzick@worldnet.att.net wrote on 10/30/04 1:10 PM:
>>> 
>>>> On Fri, 29 Oct 2004 22:29:53 GMT, Michael Olea <oleaj@sbcglobal.net>
>>>> in comp.ai.philosophy wrote:
>>>> 
>>> 
>>> [snip]
>>> 
>>>> 
>>>> Allow me to comment without malice, Michael, that we now have a good
>>>> working explanation for producing software that learns to do something
>>>> useful without claiming to do something with intelligence. Is it your
>>>> contention that doing something useful is intelligence or that it
>>>> doesn't matter? Or that intelligence amounts to doing something
>>>> useful. Seems to me I've seen mechanically intelligent people doing
>>>> plenty of useless things.
>>> 
>>> I was not making any claim one way or another about intelligence, Lester,
>>> rather just trying to give a sense of what "supervised learning" and
>>> "unsupervised learning" mean in the context where those terms are part of a
>>> technical vocabulary - the construction of algorithms - though I would lean
>>> to the "it doesn't matter" point of view. A brief quote from Quine's essay
>>> "Natural Kinds":
>> 
>> Pretty much as I thought, Michael, and quite well done. But please
>> spare me the Quine. I'm sure he must have said something one time
>> or another. In the larger context of unimportant things, I'm sure he
>> looms large.
>
>Heheh. I know you did not much care for Dos Dogmas, and as I recall you got
>so bored with it that you did not read the whole thing, despite its brevity,
>but I find Quine's writings provocative. Of thought. Yesterday I read
>Natural Kinds for the fourth time - maybe I am just slow, but I keep getting
>more out of that one essay. Bear in mind that I actually write cluster
>analysis code, so the ideas he wrestles with in Natural Kinds are not just
>abstractions, but matters of personal practical concern. Cluster Analysis is
>after all an attempt to automate a rigorous elucidation of natural kinds.

I've had occasion to watch Quine respond to questions on various
topics recently in an interview on a PBS program called "The Examined
Life" regarding philosophy in general. He starts off sensibly enough
but gradually begins to babble. But he was in good company.

In Los Dos he uses the reverse strategy and starts off babbling. I
think it was only a page or two at most I got through. And people
complain about my writing. At least I have something sensible to say.
It may not be recognized for what it is - as I recall you couldn't
make heads or tails of Differential Cognition - but it quickly comes
to a point. Whether people agree or disagree they can do it quickly.

>>> =====
>>> An example is the disposition called intelligence - the ability, vaguely
>>> speaking, to learn quickly and to solve problems. Sometime, whether in terms
>>> of proteins or colloids or nerve nets or overt behavior, the relevant branch
>>> of science may reach the stage where a similarity notion can be constructed
>>> capable of making even the notion of intelligence respectable. And
>>> superfluous.
>>> =====
>>> 
>>>> 
>>>> Regards - Lester
>>>> 
>>> 
>>> Regards.
>>> 
>> 
>> 
>> Regards - Lester
>


Regards - Lester
0
lesterDELzick
10/31/2004 3:13:16 PM
On Sun, 31 Oct 2004 08:25:30 +0000, David Longley
<David@longley.demon.co.uk> in comp.ai.philosophy wrote:

>In article <4184e532.15732286@netnews.att.net>, Lester Zick 
><lesterDELzick@worldnet.att.net> writes
>>On Sat, 30 Oct 2004 22:57:59 +0100, David Longley
>><David@longley.demon.co.uk> in comp.ai.philosophy wrote:
>>
>>>In article <10o81pseu1mn26@news20.forteinc.com>, Stargazer
>>><fuckoff@spammers.com> writes
>>>>Wolf Kirchmeir wrote:
>>>>> Stargazer wrote:
>>
>>[. . .]
>>
>>>One consequence is that the cognoscenti generally just make polite
>>>excuses and leave folk such as yourself to your nonsense.
>>
>>So, what are we to conclude, David, that you're not one of the
>>cognoscenti or that you don't know how to make polite excuses?
>>
>>Regards - Lester
>
>Not quite Lester, I'm a super-being, and I'm benevolently playing with 
>you ;-)

Yeah, David, you've really got the benevolent part down pat.

>You won't know it (cf. intensional opacity) but there are lots of us 
>super-beings about. Some of us created political correctness, others 
>"cognitive science", others (a little less imaginative) had a go at even 
>more eccentric products (<http://www.phallic.org/onanism-fashion/>) in an 
>effort to keep you and the rest of your onanistic finger dribbling ilk 
>from more serious forms of abuse.

Which didn't work either.

>We're not users but anti-abusers (educators). Consider it a subtle 
>selection, or management procedure.

Apparently so subtle it doesn't work.

>cf. "Beyond Freedom and Dignity" (1971), "Manufacturing Consent" (1992), 
>"The Bell Curve" 1994, or ..well...., I'll let you work that out for 
>yourself (but do look up Nisbett & Wilson 1977 sometime, and remind 
>Patty to do the same <g>)

Why?

Regards - Lester
0
lesterDELzick
10/31/2004 3:17:35 PM
On Sun, 31 Oct 2004 02:12:20 GMT, Michael Olea <oleaj@sbcglobal.net>
in comp.ai.philosophy wrote:

>in article 4184e676.16056124@netnews.att.net, Lester Zick at
>lesterDELzick@worldnet.att.net wrote on 10/30/04 6:21 PM:
>
>> On Sat, 30 Oct 2004 21:13:51 GMT, Michael Olea <oleaj@sbcglobal.net>
>> in comp.ai.philosophy wrote:
>> 
>>> in article 41849d2b.15043409@netnews.att.net, Lester Zick at
>>> lesterDELzick@worldnet.att.net wrote on 10/30/04 1:10 PM:
>>> 
>>>> On Fri, 29 Oct 2004 22:29:53 GMT, Michael Olea <oleaj@sbcglobal.net>
>>>> in comp.ai.philosophy wrote:
>>>> 
>>> 
>>> [snip]
>>> 
>>>> 
>>>> Allow me to comment without malice, Michael, that we now have a good
>>>> working explanation for producing software that learns to do something
>>>> useful without claiming to do something with intelligence. Is it your
>>>> contention that doing something useful is intelligence or that it
>>>> doesn't matter? Or that intelligence amounts to doing something
>>>> useful. Seems to me I've seen mechanically intelligent people doing
>>>> plenty of useless things.
>>> 
>>> I was not making any claim one way or another about intelligence, Lester,
>>> rather just trying to give a sense of what "supervised learning" and
>>> "unsupervised learning" mean in the context where those terms are part of a
>>> technical vocabulary - the construction of algorithms - though I would lean
>>> to the "it doesn't matter" point of view. A brief quote from Quine's essay
>>> "Natural Kinds":
>> 
>> Pretty much as I thought, Michael, and quite well done. But please
>> spare me the Quine. I'm sure he must have said something one time
>> or another. In the larger context of unimportant things, I'm sure he
>> looms large.
>
>Heheh. I know you did not much care for Dos Dogmas, and as I recall you got
>so bored with it that you did not read the whole thing, despite its brevity,
>but I find Quine's writings provocative. Of thought. Yesterday I read
>Natural Kinds for the fourth time - maybe I am just slow, but I keep getting
>more out of that one essay. Bear in mind that I actually write cluster
>analysis code, so the ideas he wrestles with in Natural Kinds are not just
>abstractions, but matters of personal practical concern. Cluster Analysis is
>after all an attempt to automate a rigorous elucidation of natural kinds.

Hi Michael - Let me append a few substantive comments. However, since
I have no hands on experience doing the kind of machine heuristics you
discuss, I hope you'll excuse any egregious blunders I make.

With respect to supervised versus unsupervised learning, I prefer to
call the supervised version simply training. Learning is something we
know changes behavior, but we don't really exactly know how or why.

One thing that occurred to me as I read your analysis of training was
that you give an algorithm the correct answer and allow it to come up
with a correct result somehow. But how is the provided answer correct?
You admit that actual answers are problematic and do not originate in
any well defined algorithm.

So I would think a more appropriate term might be the expected answer,
in which case the problem remains as to why any expected answer is
correct or not if it is, and if it isn't, then why is it expected? If
the only answer is that it's your money at stake, then I can certainly
understand the prerogative but not the idea of correctness in other
than applied and not abstract terms. And if answers cannot be given in
abstract terms, then the application is really just ambiguous.

Question: Are you looking for the machine to give you answers on
optimal utility under given parameters? Or are you giving the machine
some standard of optimal utility which you expect it to reach as part
of its training?

Regards - Lester
0
lesterDELzick
10/31/2004 3:46:48 PM
Stargazer wrote:
[...]
> 
> It seems complicated because we're talking of different paradigms
> here. Under behaviorist parlance, learning is related to the changes
> in behavior of the organism/mechanism that one is studying, while in
> neuroscience and artificial neural networks, learning is a change
> of internal parameters (and/or architecture, if one wants to account
> for plasticity) of the organism/mechanism. Although logically sound
> and coherent, the behaviorist definition prevents the analysis of
> internal modifications of the mechanism that aren't immediately
> reflected in corresponding changes of behavior. For the behaviorist,
> unsupervised learning does not exist, and this may explain why it seems
> difficult to understand it.
> 
> *SG*

I don't think that for the behaviorist unsupervised learning does not 
exist - as I understand the term, it seems to be more or less synonymous 
with classical conditioning, which does not depend on feedback to the 
organism's behaviour, but only on repeated presentation of the 
conditioning stimulus.

Nor does behaviourism prevent study of the underlying changes; it claims 
that that is a different level of analysis, is all. Nor does it claim 
that changes in the underlying mechanism must be "immediately reflected 
in corresponding changes in behaviour." Drop the "immediately", though, 
and you've characterised the behaviourist p.o.v. correctly, IMO.

On the whole, I think the distinctions are still fuzzy. According to 
Stephen Harris, unsupervised learning may be triggered by external 
stimuli, but does not depend on external feedback. Your comments above 
and elsewhere imply that unsupervised learning may take place without 
external inputs. I would distinguish these as two types of change in the 
NN, and call the latter "development."

I earlier said I was uneasy with the distinction between development and 
learning, since much development requires external inputs. On 
reflection, IMO we should distinguish between:
a) NN changes controlled by external, fed back inputs (supervised 
learning, or operant conditioning);
b) NN changes caused by external inputs (unsupervised learning, or 
classical conditioning);
c) NN changes caused by changes in the NN itself (??? development???)

In nature, changes in hormones (for example) surrounding the NN may 
affect the NN's functions, including those that cause changes in the NN. 
Moreover, a NN may be extended, curtailed, or combined with another NN 
when the chemistry of the surrounding medium changes. If we avoid 
calling such external chemical changes inputs, then we have development 
clearly distinguished from learning. Or so it seems to me. 
Behaviourally, development would then differ from learning in that it 
modifies the repertoire of behaviours that may be shaped, but does not 
shape those behaviours. In fact, behaviorism cannot and does not claim 
to account for the emergence or production of new behaviours, only for 
their shaping by environmental contingencies. For clarity, it should be 
said that new combinations of existing behaviours are not new 
behaviours in this sense. Note also that there must be corresponding 
development/change in the physiology of the organism - it's no good 
producing a NN that controls flight if the organism doesn't have wings 
to fly with.

There is of course a complication, in that learning in the behaviorist 
sense can be and is influenced by changes in the chemistry surrounding 
the NN. So much so that, for example, if one studies for a test while 
high on caffeine, one's performance on the test is affected by the 
amount of coffee drunk prior to the test. But this is IMO a minor quirk 
of the system -- caffeine AFAIK does not trigger NN development as 
hormones do.
0
Wolf
10/31/2004 5:15:23 PM
Bill Modlin wrote:
> "Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message
> news:5hPgd.32047$Qs6.2170742@news20.bellglobal.com...
> 
> snip most, to try to get to the point...
> 
> 
>>Now, if I understand you right, this is an example of unsupervised
>>learning. Yet because it depends on external inputs it's supervised.
>
>>Or is it both at the same time? Why?
> 
> 
> You still conflate "external inputs" with the subclass of inputs
> which are "feedback".

No I don't, but the way you wrote your previous posts implied that 
conflation.

0
Wolf
10/31/2004 5:17:51 PM
Lester Zick wrote:
[...]
> 
> Pretty much as I thought, Michael, and quite well done. But please
> spare me the Quine. I'm sure he must have said something one time
> or another. In the larger context of unimportant things, I'm sure he
> looms large.


As the backwoods farmer said, when he went to the city and saw a giraffe 
in the zoo, "There ain't no sech animal!"
0
Wolf
10/31/2004 5:24:13 PM
Lester Zick wrote:
> On Sat, 30 Oct 2004 21:29:14 GMT, patty <pattyNO@SPAMicyberspace.net>
> in comp.ai.philosophy wrote:
> 
> 
>>Lester Zick wrote:
>>
>>
>>>On Sat, 30 Oct 2004 15:06:09 -0400, Wolf Kirchmeir
>>><wwolfkir@sympatico.ca> in comp.ai.philosophy wrote:
>>>
>>>
>>>
>>>>Stargazer wrote:
>>>>[...]
>>>>
>>>>
>>>>>Learning as a change in behavior is a definition under the behaviorist
>>>>>rationale. This assumption leads to all the well known theoretical
>>>>>edifice. However, learning is defined differently by the neuroscientist.
>>>>>Thus, it may be "stupid" from a behaviorist point of view, but it
>>>>>is not from neuroscience. It doesn't seem wise to criticize someone
>>>>>else's definitions. You can accept it or reject it, or you can (at most)
>>>>>point out that such definitions may lead to theoretical/conceptual
>>>>>dead-ends or empirically flawed models, which is not the case
>>>>>regarding supervised/unsupervised learning.
>>>>>
>>>>>*SG*
>>>>
>>>>Well, I've tried to criticise the fuzziness of the concepts. I'll accept 
>>>>"learning" as any change in behaviour, at any level. What I find 
>>>>confusing is that sometimes "learning" is used in the behavioural sense, 
>>>>sometimes in the sense of "change in the functioning of a neural 
>>>>network", sometimes in the sense "changes in a neuron's firing 
>>>>patterns", etc etc, all at the same time. Now, I don't care what you 
>>>>call these, so long as you keep the hierarchy straight. Terminology can 
>>>>help or hinder -- I would think that people would be more careful with 
>>>>their choice of terms. (Footnote)
>>>>
>>>>I see the following hierarchy:
>>>>behaviour -- physiological functioning -- neural network functioning -- 
>>>>neuron functioning
>>>>
>>>>Information flows both ways along this hierarchy, and it's important IMO 
>>>>to talk as clearly as possible. It seems to me that learning at one 
>>>>level implies learning at a lower level.
>>>>
>>>>I also find it odd that people invent labels for two kinds of learning 
>>>>and seem to believe they have discovered something new (or in some cases 
>>>>have refuted the behaviorist stance.)  I've reread the posts on 
>>>>supervised vs unsupervised learning, and insofar as I've made sense of 
>>>>the explanations, unsupervised learning is classical conditioning, and 
>>>>supervised learning is operant conditioning. (BTW, Stephen Harris's 
>>>>comment that unsupervised learning  requires some sort of external 
>>>>input, else the NN won't do much of anything, was the key that unlocked 
>>>>the puzzle. Up till then, I was trying to figure out how unsupervised 
>>>>learning was different from random neural firing.)
>>>>
>>>>The lesson I draw is that learning in the behaviorist sense scales up 
>>>>and down; or if you like, repeats fractal-like at different scales. It 
>>>>is "behaviour all the way down", and back up again. If that notion 
>>>>reflects reality, then something new has IMO been discovered. 
>>>>Considering that a neuron is a rather complex entity (as is shown when 
>>>>you sketch a meta-program to simulate how it works), it seems that 
>>>>learning may scale even to that level.
>>>>
>>>>"It's all rather complicated, really."
>>>
>>>
>>>I'll say. "It is behavior all the way down" is analogous to patty's
>>>idea that it's all belief. Such facile observations are nugatory with
>>>respect to science because they fail to discriminate when and how belief
>>>becomes knowledge and behavior becomes learning and a host of other
>>>mental effects.
>>>
>>>Regards - Lester
>>
>>To correct the record i would not say something as stupid as "it's 
>>belief all the way down".  Beliefs are just propositional attitudes 
>>and there certainly is more to existence than just those.  What i might 
>>have said would have been something more like, we tend not to rationally 
>>think outside of our own propositional attitudes.  A formal system 
>>running on a computer certainly cannot function outside of its beliefs 
>>(were its statements to be considered beliefs).  Those of you who think 
>>you can make statements that are true even outside of your assumptions 
>>and beliefs, are fundamentally misguided.  You are arrogantly taking a 
>>god's eye view.  You are quite literally talking out of your heads.
> 
> 
> Whatever. It's quite clear where your term web of belief comes from
> since David uses the term. Perhaps you could rationalize your own web
> of belief a little further.
> 
> 
>>Incidentally, Lester, you can change your assumptions and make 
>>tautologies false.
> 
> 
> And incidentally, patty, I'm still awaiting some explanation for your
> objections to my critique of tautologies and empirical truth.
> 

We have been through all of that before.  What will we learn by going 
through it again?  You do not use these words (tautology, empirical, 
truth, analytic, proof, etc) the way they are used in the culture. 
Consequently there is no real communication between us.

patty
0
patty
10/31/2004 5:32:48 PM
"Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message
news:r39hd.6043$OD3.108863@news20.bellglobal.com...

> I earlier said I was uneasy with the distinction between development
> and learning, since much development requires external inputs. On
> reflection, IMO we should distinguish between:
> a) NN changes controlled by external, fed back inputs (supervised
> learning, or operant conditioning);
> b) NN changes caused by external inputs (unsupervised learning, or
> classical conditioning);
> c) NN changes caused by changes in the NN itself (??? development???)

Excellent.  And "development" seems as good a word as any.  If we
use the word "learning", it would seem to apply to the first two,
the ones dependent on inputs, and not to development except perhaps
by a poetic stretch.

Of course, in practice these 3 kinds of changes are often all going
on at once in the same networks and even in the same cells, and they
may share mechanisms in common.   So it can be difficult or perhaps
even pointless to try to say just which is responsible for some
changes... generally they all contribute in their subtly different
ways to the behavior of an organism.   Even "input" is fuzzy... is
locally generated random thermal noise an "external input"?  Is a
signal from spontaneous firing of a cell an "input"?   It depends on
one's purpose; there is no "correct" answer.

Nevertheless, I find that the distinctions are worthwhile for the
guidance they provide for constructing systems that change in
ultimately useful ways.   If we are to have operant conditioning, we
have to include special mechanisms to induce changes in response to
particular inputs treated as reward or feedback, distinct from those
other inputs which are causally connected to the outputs from which
the feedback is derived.  If we are to have classical conditioning
or other forms of unsupervised self-organization, we must have
mechanisms such that changes are induced by the processing of any
inputs to produce results, not requiring a separate privileged
class of inputs designated as feedback or reward.  If we are to have
development independent of inputs, we must have mechanisms which
change the system over time in accordance with some predetermined
plan.

Again, this posting is good news.  Excellent.

Now perhaps we can talk about a few more details of how these things
might be implemented?
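
As one possible opening move, here is a toy sketch of the three kinds
of change in a single unit. Every rule and constant below is an
invention of mine for illustration, not a proposal for the actual
mechanisms:

class ToyUnit:
    def __init__(self, n):
        self.w = [0.0] * n   # connection weights
        self.age = 0

    def out(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x))

    def operant_step(self, x, reward, rate=0.1):
        # (a) Change controlled by fed-back input: a privileged reward
        # signal, causally dependent on prior outputs, gates the update.
        y = self.out(x)
        self.w = [wi + rate * reward * y * xi for wi, xi in zip(self.w, x)]

    def unsupervised_step(self, x, rate=0.1):
        # (b) Change caused by ordinary inputs alone: processing any
        # frame nudges the weights toward its correlations.
        y = self.out(x)
        self.w = [wi + rate * y * xi for wi, xi in zip(self.w, x)]

    def develop_step(self):
        # (c) Change on a predetermined plan, independent of inputs:
        # here, scheduled pruning of weak connections as the unit ages.
        self.age += 1
        if self.age % 10 == 0:
            self.w = [wi if abs(wi) > 0.01 else 0.0 for wi in self.w]

The three methods can of course run interleaved on the same weights,
which is just your point about the categories blurring in practice.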

Bill






0
Bill
10/31/2004 6:54:12 PM
In article <FeqdnaaiD9dOqRjcRVn-hg@metrocastcablevision.com>, Bill 
Modlin <modlin1@metrocast.net> writes
>
>"Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message
>news:r39hd.6043$OD3.108863@news20.bellglobal.com...
>
>> I earlier said I was uneasy with the distinction between development
>> and learning, since much development requires external inputs. On
>> reflection, IMO we should distinguish between:
>> a) NN changes controlled by external, fed back inputs (supervised
>> learning, or operant conditioning);
>> b) NN changes caused by external inputs (unsupervised learning, or
>> classical conditioning);
>> c) NN changes caused by changes in the NN itself (??? development???)
>
>Excellent.  And "development" seems as good a word as any.  If we
>use the word "learning", it would seem to apply to the first two,
>the ones dependent on inputs, and not to development except perhaps
>by a poetic stretch.
>
>Of course, in practice these 3 kinds of changes are often all going
>on at once in the same networks and even in the same cells, and they
>may share mechanisms in common.   So it can be difficult or perhaps
>even pointless to try to say just which is responsible for some
>changes... generally they all contribute in their subtly different
>ways to the behavior of an organism.   Even "input" is fuzzy... is
>locally generated random thermal noise an "external input"?  Is a
>signal from spontaneous firing of a cell an "input"?   It depends on
>one's purpose; there is no "correct" answer.
>
>Nevertheless, I find that the distinctions are worthwhile for the
>guidance they provide for constructing systems that change in
>ultimately useful ways.   If we are to have operant conditioning, we
>have to include special mechanisms to induce changes in response to
>particular inputs treated as reward or feedback, distinct from those
>other inputs which are causally connected to the outputs from which
>the feedback is derived.  If we are to have classical conditioning
>or other forms of unsupervised self-organization, we must have
>mechanisms such that changes are induced by the processing of any
>inputs to produce results, not requiring a separate privileged
>class of inputs designated as feedback or reward.  If we are to have
>development independent of inputs, we must have mechanisms which
>change the system over time in accordance with some predetermined
>plan.
>
>Again, this posting is good news.  Excellent.
>
>Now perhaps we can talk about a few more details of how these things
>might be implemented?
>
>Bill
>

Not to pre-empt Wolf, but let me remind you of the section on ANNs in 
"Fragments" <http://www.longley.demon.co.uk/Frag.htm>. What was that all 
about? What's *wrong* with ANNs? What's wrong with our folk psychology? 
What's wrong with people?

Why are people prejudiced? What's wrong, for instance, with concluding 
from the -2SD mean IQ of sub-Saharan Africa relative to the UK mean, or 
the -1SD mean IQ of African undergraduates, or USA Afro-Caribbeans, that 
blacks are less intelligent than whites and whites are less intelligent 
than yellows (East Asians)? Is anything missing? If so, can you tell us?

-- 
David Longley
0
David
10/31/2004 9:56:58 PM
On Sun, 31 Oct 2004 12:24:13 -0500, Wolf Kirchmeir
<wwolfkir@sympatico.ca> in comp.ai.philosophy wrote:

>Lester Zick wrote:
>[...]
>> 
>> Pretty much as I thought, Michael, and quite well done. But please
>> spare me the Quine. I'm sure he must have said something one time
>> or another. In the larger context of unimportant things, I'm sure he
>> looms large.
>
>
>As the backwoods farmer said, when he went to the city and saw a giraffe 
>in the zoo, "There ain't no sech animal!"

OK. I'll bite. Why did you say "There ain't no sech animal!"?

Regards - Lester
0
lesterDELzick
10/31/2004 10:03:34 PM
On Sun, 31 Oct 2004 17:32:48 GMT, patty <pattyNO@SPAMicyberspace.net>
in comp.ai.philosophy wrote:

>Lester Zick wrote:
>> On Sat, 30 Oct 2004 21:29:14 GMT, patty <pattyNO@SPAMicyberspace.net>
>> in comp.ai.philosophy wrote:
>> 
>> 
>>>Lester Zick wrote:
>>>
>>>
>>>>On Sat, 30 Oct 2004 15:06:09 -0400, Wolf Kirchmeir
>>>><wwolfkir@sympatico.ca> in comp.ai.philosophy wrote:
>>>>
>>>>
>>>>
>>>>>Stargazer wrote:
>>>>>[...]
>>>>>
>>>>>
>>>>>>Learning as a change in behavior is a definition under the behaviorist
>>>>>>rationale. This assumption leads to all the well known theoretical
>>>>>>edifice. However, learning is defined differently by the neuroscientist.
>>>>>>Thus, it may be "stupid" from a behaviorist point of view, but it
>>>>>>is not from neuroscience. It doesn't seem wise to criticize someone
>>>>>>else's definitions. You can accept it or reject it, or you can (at most)
>>>>>>point out that such definitions may lead to theoretical/conceptual
>>>>>>dead-ends or empirically flawed models, which is not the case
>>>>>>regarding supervised/unsupervised learning.
>>>>>>
>>>>>>*SG*
>>>>>
>>>>>Well, I've tried to criticise the fuzziness of the concepts. I'll accept 
>>>>>"learning" as any change in behaviour, at any level. What I find 
>>>>>confusing is that sometimes "learning" is used in the behavioural sense, 
>>>>>sometimes in the sense of "change in the functioning of a neural 
>>>>>network", sometimes in the sense "changes in a neuron's firing 
>>>>>patterns", etc etc, all at the same time. Now, I don't care what you 
>>>>>call these, so long as you keep the hierarchy straight. Terminology can 
>>>>>help or hinder -- I would think that people would be more careful with 
>>>>>their choice of terms. (Footnote)
>>>>>
>>>>>I see the following hierarchy:
>>>>>behaviour -- physiological functioning -- neural network functioning -- 
>>>>>neuron functioning
>>>>>
>>>>>Information flows both ways along this hierarchy, and it's important IMO 
>>>>>to talk as clearly as possible. It seems to me that learning at one 
>>>>>level implies learning at a lower level.
>>>>>
>>>>>I also find it odd that people invent labels for two kinds of learning 
>>>>>and seem to believe they have discovered something new (or in some cases 
>>>>>have refuted the behaviorist stance.)  I've reread the posts on 
>>>>>supervised vs unsupervised learning, and insofar as I've made sense of 
>>>>>the explanations, unsupervised learning is classical conditioning, and 
>>>>>supervised learning is operant conditioning. (BTW, Stephen Harris's 
>>>>>comment that unsupervised learning  requires some sort of external 
>>>>>input, else the NN won't do much of anything, was the key that unlocked 
>>>>>the puzzle. Up till then, I was trying to figure out how unsupervised 
>>>>>learning was different from random neural firing.)
>>>>>
>>>>>The lesson I draw is that learning in the behaviorist sense scales up 
>>>>>and down; or if you like, repeats fractal-like at different scales. It 
>>>>>is "behaviour all the way down", and back up again. If that notion 
>>>>>reflects reality, then something new has IMO been discovered. 
>>>>>Considering that a neuron is a rather complex entity (as is shown when 
>>>>>you sketch a meta-program to simulate how it works), it seems that 
>>>>>learning may scale even to that level.
>>>>>
>>>>>"It's all rather complicated, really."
>>>>
>>>>
>>>>I'll say. "It is behavior all the way down" is analogous to patty's
>>>>idea that it's all belief. Such facile observations are nugatory with
>>>>respect to science because they fail to discriminate when and how belief
>>>>becomes knowledge and behavior becomes learning and a host of other
>>>>mental effects.
>>>>
>>>>Regards - Lester
>>>
>>>To correct the record i would not say something as stupid as "it's 
>>>belief all the way down".  Beliefs are just propositional attitudes 
>>>and there certainly is more to existence than just those.  What i might 
>>>have said would have been something more like, we tend not to rationally 
>>>think outside of our own propositional attitudes.  A formal system 
>>>running on a computer certainly cannot function outside of its beliefs 
>>>(were its statements to be considered beliefs).  Those of you who think 
>>>you can make statements that are true even outside of your assumptions 
>>>and beliefs, are fundamentally misguided.  You are arrogantly taking a 
>>>god's eye view.  You are quite literally talking out of your heads.
>> 
>> 
>> Whatever. It's quite clear where your term web of belief comes from
>> since David uses the term. Perhaps you could rationalize your own web
>> of belief a little further.
>> 
>> 
>>>Incidentally, Lester, you can change your assumptions and make 
>>>tautologies false.
>> 
>> 
>> And incidentally, patty, I'm still awaiting some explanation for your
>> objections to my critique of tautologies and empirical truth.
>> 
>
>We have been through all of that before.  What will we learn by going 
>through it again?  

We haven't been through it at all. I've been through it all. You
registered a demurrer but refuse to relate it to anything I said.
You're just spinning your web of belief aimlessly without direction. 

>                               You do not use these words (tautology, empirical, 
>truth, analytic, proof, etc) the way they are used in the culture.

Of course not. What would be the purpose of that? I was explaining
properties of tautologies in relation to empirical truth as these
concepts are used in the culture. So far you've said nothing
responsive except you don't like what I said.
 
>Consequently there is no real communication between us.

Yet apparently there was enough communication for you to register
your dislike for what I said but not enough communication for you to
explain your dislike. If that's all you intend to do, I suggest you do
us both a favor and keep your remarks to yourself. I think we can all
take your disapprobation for granted.

Regards - Lester
0
lesterDELzick
10/31/2004 10:13:41 PM
On Sun, 31 Oct 2004 08:25:30 +0000, David Longley
<David@longley.demon.co.uk> in comp.ai.philosophy wrote:

>In article <4184e532.15732286@netnews.att.net>, Lester Zick 
><lesterDELzick@worldnet.att.net> writes
>>On Sat, 30 Oct 2004 22:57:59 +0100, David Longley
>><David@longley.demon.co.uk> in comp.ai.philosophy wrote:
>>
>>>In article <10o81pseu1mn26@news20.forteinc.com>, Stargazer
>>><fuckoff@spammers.com> writes
>>>>Wolf Kirchmeir wrote:
>>>>> Stargazer wrote:
>>
>>[. . .]
>>
>>>One consequence is that the cognoscenti generally just make polite
>>>excuses and leave folk such as yourself to your nonsense.
>>
>>So, what are we to conclude, David, that that you're not one of the
>>cognoscenti or that you don't know how to make polite excuses?
>>
>>Regards - Lester
>
>Not quite Lester, I'm a super-being, and I'm benevolently playing with 
>you ;-)

Yeah, David, you've really got the benevolent part down pat.

>You won't know it (cf. intensional opacity) but there are lots of us 
>super-beings about. Some of us created political correctness, others 
>"cognitive science", others (a little less imaginative) had a go at even 
>more eccentric products (<http://www.phallic.org/onanism-fashion/>) in 
>an effort to keep you and the rest of your onanistic finger-dribbling 
>ilk from more serious forms of abuse.

Which didn't work either.

>We're not users but anti-abusers (educators). Consider it a subtle 
>selection, or management procedure.

Apparently so subtle it doesn't work.

>cf. "Beyond Freedom and Dignity" (1971), "Manufacturing Consent" (1992), 
>"The Bell Curve" 1994, or ..well...., I'll let you work that out for 
>yourself (but do look up Nisbett & Wilson 1977 sometime, and remind 
>Patty to do the same <g>

Why?

Regards - Lester
0
lesterDELzick
10/31/2004 10:14:39 PM
"David Longley" <David@longley.demon.co.uk> wrote in message

>><Wolf listed 3 categories of modification strategies for
neural nets: supervised, unsupervised, and developmental,
and I agreed with the list. I then suggested that we shift
to talking about how to implement some of the strategies.
Details elided to address David's separate topic.>

> Not to pre-empt Wolf, but let me remind you of the section
> on ANNs in "Fragments"
> <http://www.longley.demon.co.uk/Frag.htm>.
> What was that all about? What's *wrong* with ANNs? What's
> wrong with our folk psychology?  What's wrong with people?

> Why are people prejudiced? What's wrong, for instance,
> with concluding from the -2SD mean IQ of sub-Saharan Africa
> relative to the UK mean, or the -1SD mean IQ of African
> undergraduates, or USA Afro-Caribbeans, that blacks are
> less intelligent than whites and whites are less
> intelligent than yellows (East Asians)? Is anything
> missing? If so, can you tell us?
>
> --
> David Longley

Thank you for reminding me... I'd almost forgotten the
underlying agenda that puts your raving into perspective.

    David vs CAP

David holds that human thinking, that what passes for
"reasoning" in naive human experience, is so terribly
flawed, biased, and generally unscientific that there can
be no value whatever in reproducing it with a machine.

To David, the only imaginable purpose for an AI is to
supplant error-riddled human intensional heuristics with
rational extensional methods.  To him, AI *is* the
advancement of extensional science.

Which leads to conflict with those of us who notice that
despite obvious inadequacies of our biological intelligence
in matters of rational estimation and deductive logic, there
are still many things people manage to do quite well, things
people accomplish much more effectively than any machine
we've yet been able to design.

Those of us who find value in natural intelligence would
like nothing better than to emulate it with a machine.  To
us, AI is the search for ways to do that, ways to get a
machine to do the things we find so trivially easy.   Like
shopping for groceries, or cleaning a house, or following
ambiguous instructions.  Without becoming immediately bogged
down in an exponential explosion of combinatorial
possibilities to be evaluated.  Without tripping over a
frame problem.

The answer to David's first question, "what's wrong with
ANNs", is that their operation is somewhat analogous to the
operation of our own brains.  Oversimplified, incomplete,
but still recognizably similar in some of the ways they
solve problems.

And since our brains clearly use those terrible intensional
heuristics, as David has painstakingly documented for all
the world to see, this makes ANNs irrational, and makes
anyone who takes them seriously a threat to the progress of
humanity.  It even makes them despicable and evil, if they
seem to understand the problem and yet knowingly persist in
talking about such nasty things.

Of course, to the rest of us the resemblance of ANN
heuristics to some human methods is what makes them worth
talking about.

David simply is unable to understand that many problems
cannot be addressed without heuristics.  He includes quotes
in Fragments that make this point quite well, but manages to
slide past them undaunted, his grail of replacing human
thought with pristine rationality miraculously intact.

The sad part is that David is right about the value of
replacing intuitive judgment with objective measures and
rational statistical analysis in places where the data is
available to support such an effort.  He's got a legitimate
point, which probably deserves more attention than it gets
in the flux of politically biased funding schemes.  If he'd
stick to that, he might actually do some good.  But instead
he falls prey to the biases he decries and overgeneralizes
to the point of obvious irrationality... while steadfastly
resisting any appeal to intuitive notions like common sense
to recognize his error.

Bill Modlin



0
Bill
11/1/2004 1:36:56 AM
Bill Modlin wrote:
> "Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message
> news:r39hd.6043$OD3.108863@news20.bellglobal.com...
> 
> 
>>I earlier said I was uneasy with the distinction between development
>>and learning, since much development requires external inputs. On
>>reflection, IMO we should distinguish between:
>>a) NN changes controlled by external, fed back inputs (supervised
>>learning, or operant conditioning);
>>b) NN changes caused by external inputs (unsupervised learning, or
>>classical conditioning);
>>c) NN changes caused by changes in the NN itself (??? development???)
> 
> Excellent.  And "development" seems as good a word as any.  If we
> use the word "learning", it would seem to apply to the first two,
> the ones dependent on inputs, and not to development except perhaps
> by a poetic stretch.
> 
> Of course, in practice these 3 kinds of changes are often all going
> on at once in the same networks and even in the same cells, and they
> may share mechanisms in common.   So it can be difficult or perhaps
> even pointless to try to say just which is responsible for some
> changes... 

Precisely.
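
To make the three-way distinction above concrete, here is a minimal
sketch in Python (all names and numbers invented for illustration, not
anyone's model) of the three kinds of change applied to one linear
unit: (a) supervised change driven by a fed-back error, (b)
unsupervised change driven by the input alone, and (c) developmental
change driven by the unit's own internal state:

import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=4)        # weights of one linear unit
eta = 0.01                              # learning rate

def supervised_step(w, x, target):
    """(a) Change controlled by external, fed back input: delta rule."""
    y = w @ x
    return w + eta * (target - y) * x

def unsupervised_step(w, x):
    """(b) Change caused by the input alone: Oja's Hebbian rule."""
    y = w @ x
    return w + eta * y * (x - y * w)

def developmental_step(w, age):
    """(c) Change driven by the NN itself: a scheduled pruning that
    zeroes small weights as the unit 'matures'; no input involved."""
    return np.where(np.abs(w) < 0.01 * age, 0.0, w)

x = rng.normal(size=4)
w = supervised_step(w, x, target=1.0)   # needs an external teacher
w = unsupervised_step(w, x)             # needs only the signal itself
w = developmental_step(w, age=2.0)      # needs neither

As Modlin notes, nothing stops all three updates from acting on the same
weights at once, at which point attributing a particular change to one
mechanism becomes guesswork.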
0
Wolf
11/1/2004 2:24:58 AM
Bill Modlin wrote:
> [...] If we are to have classical conditioning
> or other forms of unsupervised self-organization, we must have
> mechanisms such that changes are induced by the processing of any
> inputs to produce results, not requiring a separate privileged
> class of inputs designated as feedback or reward.

Inputs to neurons, or signals to NNs, are not labelled feedback or 
reward. From the p.o.v. of a neuron, a synaptic input is a synaptic 
input, etc. From the p.o.v. of a NN, a signal is a signal. _You_ may be 
able to observe that the signal is a fed back one, or that it comes from 
a "reward module", but to the receiving NN it's just another signal.

Posit a NN of sufficient complexity to "do something useful." It will 
have some receptor neurons that receive messages from other parts of the 
system; some output neurons, that send messages to other parts of the 
system; and some (prob. a large number of) intermediate neurons. A signal 
will consist of some pattern of input (or output) messages - and by 
pattern I intend both space and time. Just how a particular input signal 
is converted into some particular output signal depends of course on its 
pattern, since that will determine which processor neurons fire and in 
which sequence, etc. And whether or not the input signals will change 
the function of the NN will again depend on their pattern, since that will 
determine whether any of the neurons will change their activation 
levels. And whether the function of the NN stabilises will depend on 
such things as the topology of the NN and the inhibiting of further 
changes in activation levels of individual neurons, etc.

But in none of this is there anything that requires us to posit 
different mechanisms for processing feedback or "reward" signals. They 
are just another signal, is all.

0
Wolf
11/1/2004 4:08:31 AM
"Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message
news:RDihd.44$dj2.27841@news20.bellglobal.com...
> Bill Modlin wrote:
> [...] If we are to have classical conditioning
> > or other forms of unsupervised self-organization, we
> > must have mechanisms such that changes are induced by
> > the processing of any inputs to produce results, not
> > requiring a separate privileged class of inputs
> > designated as feedback or reward.
>
> Inputs to neurons, or signals to NNs, are not labelled
> feedback or reward. From the p.o.v of a neuron, a synaptic
> input is a synaptic input, etc. From the p.o.v. of a NN, a
> signal is a signal. _You_ may be able to observe that the
> signal is a fed back one, or that it comes from a "reward
> module", but to the receiving NN it's just another signal.

> Posit a NN of sufficient complexity to "do something
> useful." It will have some receptor neurons that receive
> messages from other parts of the system; some output
> neurons, that send messages to other parts of the system;
> and some (prob. a large number of) intermediate neurons. A
> signal will consist of some pattern of input (or output)
> messages - and by pattern I intend both space and time.
> Just how a particular input signal is converted into some
> particular output signal depends of course on its pattern,
> since that will determine which processor neurons fire and
> in which sequence, etc. And whether or not the input
> signals will change the function of the NN will again
> depend on their pattern, since that will determine whether
> any of the neurons will change their activation levels.
> And whether the function of the NN stabilises will depend
> on such things as the topology of the NN and the
> inhibiting of further changes in activation levels of
> individual neurons, etc.

> But in none of this is there anything that requires us to
> posit different mechanisms for processing feedback or
> "reward" signals.  They are just another signal, is all.

No, you missed the point.  The reward signal is *not* just
another signal from the p.o.v. of the neuron.

You cannot have feedback directed systems unless there are
special paths for the feedback, which do something quite
different from the normal inputs to the adjusted cells.  The
"something quite different" is that they change the
operating parameters for subsequent inputs, rather than
being themselves processed as inputs.

One possibility is for example the one described by Glen, in
which cells with inputs from sensory data and outputs to
motor control are affected by a separate hardwired "reward
detector" which floods the area with chemicals causing
selective reinforcement of recently active connections.  This
flooding isn't "just another signal"; it does not itself produce an
output.  Instead it changes the way subsequent outputs are
produced.
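
One way to render the mechanism just described in code - a toy sketch,
not Glen's actual proposal - is to give each synapse an eligibility
trace of recent coactivity and treat the "flood" as a scalar event that
produces no output of its own:

import numpy as np

class RewardModulatedUnit:
    def __init__(self, n_inputs, eta=0.05, decay=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(0.0, 0.1, size=n_inputs)
        self.trace = np.zeros(n_inputs)   # recent pre/post coactivity
        self.eta, self.decay = eta, decay

    def step(self, x):
        """Ordinary signal path: an input produces an output."""
        y = float(self.w @ x)
        self.trace = self.decay * self.trace + y * x  # mark active synapses
        return y

    def flood(self, reward):
        """The 'reward detector' path: produces no output, but changes
        the operating parameters for subsequent inputs."""
        self.w += self.eta * reward * self.trace

The asymmetry at issue is visible in the code: step() maps inputs to
outputs, while flood() only rewrites parameters.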

For "unsupervised" operation, we still need something that
changes the operating parameters... the "reinforcement
chemicals" or whatever, and some way to generate a "change"
signal.  But instead of a separate channel importing a
fed-back "change" signal, we derive the change signal from
the other locally available signals, without waiting for the
output to propagate to the external world and possibly
produce a contingent feedback.

This is important because this way we can have many
different local "change" signals, each tailored to the
circumstances existing at the place where it is generated,
and correlated with what is going on there.

With external feedback, there is only one global "change"
signal, which may be correlated with signals in the
immediate behavior generating circuitry, but will not be
correlated with other signals along the path from sensors to
motor control... the signals processed by those many
intermediate neurons you mentioned.  Many of these other
signals contribute as intermediate terms in the production
of the behavior, but their relationship is indirect, and
there is no useful direct correlation with the behavior or
its results which might be used to direct changes.

The NN you described is actually more like a pure
unsupervised system, with no provision for supervision
or operant conditioning.

Bill Modlin



0
Bill
11/1/2004 5:40:38 AM
In article <K8-dna5buaCrDhjcRVn-jA@metrocastcablevision.com>, Bill 
Modlin <modlin1@metrocast.net> writes
>
>"David Longley" <David@longley.demon.co.uk> wrote in message
>
>>><Wolf listed 3 categories of modification strategies for
>neural nets: supervised, unsupervised, and developmental,
>and I agreed with the list. I then suggested that we shift
>to talking about how to implement some of the strategies.
>Details elided to address David's separate topic.>
>
>> Not to pre-empt Wolf, but let me remind you of the section
>> on ANNs in "Fragments"
>> <http://www.longley.demon.co.uk/Frag.htm>.
>> What was that all about? What's *wrong* with ANNs? What's
>> wrong with our folk psychology?  What's wrong with people?
>
>> Why are people prejudiced? What's wrong, for instance,
>> with concluding from the -2SD mean IQ of sub-Saharan Africa
>> relative to the UK mean, or the -1SD mean IQ of African
>> undergraduates, or USA Afro-Caribbeans, that blacks are
>> less intelligent than whites and whites are less
>> intelligent than yellows (East Asians)? Is anything
>> missing? If so, can you tell us?
>>
>> --
>> David Longley
>
>Thank you for reminding me... I'd almost forgotten the
>underlying agenda that puts your raving into perspective.
>
>    David vs CAP

1) It's wider than CAP.


>
>David holds that human thinking, that what passes for
>"reasoning" in naive human experience, is so terribly
>flawed, biased, and generally unscientific that there can
>be no value whatever in reproducing it with a machine.
>
>To David, the only imaginable purpose for an AI is to
>supplant error-riddled human intensional heuristics with
>rational extensional methods.  To him, AI *is* the
>advancement of extensional science.
>
>Which leads to conflict with those of us who notice that
>despite obvious inadequacies of our biological intelligence
>in matters of rational estimation and deductive logic, there
>are still many things people manage to do quite well, things
>people accomplish much more effectively than any machine
>we've yet been able to design.
>
>Those of us who find value in natural intelligence would
>like nothing better than to emulate it with a machine.  To
>us, AI is the search for ways to do that, ways to get a
>machine to do the things we find so trivially easy.   Like
>shopping for groceries, or cleaning a house, or following
>ambiguous instructions.  Without becoming immediately bogged
>down in an exponential explosion of combinatorial
>possibilities to be evaluated.  Without tripping over a
>frame problem.
>
>The answer to David's first question, "what's wrong with
>ANNs", is that their operation is somewhat analogous to the
>operation of our own brains.  Oversimplified, incomplete,
>but still recognizably similar in some of the ways they
>solve problems.
>
>And since our brains clearly use those terrible intensional
>heuristics, as David has painstakingly documented for all
>the world to see, this makes ANNs irrational, and makes
>anyone who takes them seriously a threat to the progress of
>humanity.  It even makes them despicable and evil, if they
>seem to understand the problem and yet knowingly persist in
>talking about such nasty things.
>
>Of course, to the rest of us the resemblance of ANN
>heuristics to some human methods is what makes them worth
>talking about.
>
>David simply is unable to understand that many problems
>cannot be addressed without heuristics.  He includes quotes
>in Fragments that make this point quite well, but manages to
>slide past them undaunted, his grail of replacing human
>thought with pristine rationality miraculously intact.
>
>The sad part is that David is right about the value of
>replacing intuitive judgment with objective measures and
>rational statistical analysis in places where the data is
>available to support such an effort.  He's got a legitimate
>point, which probably deserves more attention than it gets
>in the flux of politically biased funding schemes.  If he'd
>stick to that, he might actually do some good.  But instead
>he falls prey to the biases he decries and overgeneralizes
>to the point of obvious irrationality... while steadfastly
>resisting any appeal to intuitive notions like common sense
>to recognize his error.
>
>Bill Modlin
>
>
>
2) I'll leave "someone else" to point out the *obvious* biases/flaws in 
the operation of your intensional heuristics!

3) You're behaving like an ignorant and arrogant twit. Let's see if you 
can redeem yourself by making the person who does 2) *yourself* and 
whether *you* can tell us how you did it.
-- 
David Longley
0
David
11/1/2004 8:37:21 AM
David Longley wrote:
> In article <10o86i31bsbj437@news20.forteinc.com>, Stargazer
> <fuckoff@spammers.com> writes
> > David Longley wrote:
> > >
> > > Not that you meant to say this (as you clearly don't understand
> > > what Wolf and I have said), but do you have any idea as to why a
> > > radical behaviourist might have doubts as to whether *classical*
> > > conditioning naturally exists?
> > >
> > > [remainder nonsensical rants suppressed]
> >
> > Classical, operant, and vicarious learning, schedules of reinforcement,
> > discrimination training, etc., are all concepts that I know well,
> > but they are mostly restricted to the behavioristic domain.
> > Useful as they are, it is not wise to suggest that they may be
> > indiscriminately applied to other areas of scientific knowledge.
> > It is like using cosmological time frames as a reference for
> > discussing the aerodynamics of the flight of birds.
> >
> > *SG*
> >
> I assure you, you *don't* know *any* of these things well. You're
> another ignorant, deluded, idiot with a bad education. You'll find
> many like minded twits posting to c.a.p. How long you wish to remain
> an ignorant, deluded, idiot is basically in your hands.

Ah, I see. I have stepped on the resident troll. I thought
George Bajzdar was the worst of this NG, but he doesn't seem
to be comparable to the idiocy and psychotic behavior of this
Longley chap. Why does your dad let you play with the Internet?
Isn't it time for you to go back to high school?

*SG*


0
Stargazer
11/1/2004 1:09:51 PM
In article <10ocdcsepneulfa@news20.forteinc.com>, Stargazer 
<fuckoff@spammers.com> writes
>David Longley wrote:
>> In article <10o86i31bsbj437@news20.forteinc.com>, Stargazer
>> <fuckoff@spammers.com> writes
>> > David Longley wrote:
>> > >
>> > > Not that you meant to say this (as you clearly don't understand
>> > > what Wolf and I have said), but do you have any idea as to why a
>> > > radical behaviourist might have doubts as to whether *classical*
>> > > conditioning naturally exists?
>> > >
>> > > [remainder nonsensical rants suppressed]
>> >
>> > Classical, operant, and vicarious learning, schedules of reinforcement,
>> > discrimination training, etc., are all concepts that I know well,
>> > but they are mostly restricted to the behavioristic domain.
>> > Useful as they are, it is not wise to suggest that they may be
>> > indiscriminately applied to other areas of scientific knowledge.
>> > It is like using cosmological time frames as a reference for
>> > discussing the aerodynamics of the flight of birds.
>> >
>> > *SG*
>> >
>> I assure you, you *don't* know *any* of these things well. You're
>> another ignorant, deluded, idiot with a bad education. You'll find
>> many like minded twits posting to c.a.p. How long you wish to remain
>> an ignorant, deluded, idiot is basically in your hands.
>
>Ah, I see. I have stepped on the resident troll. I thought
>George Bajzdar was the worst of this NG, but he doesn't seem
>to be comparable to the idiocy and psychotic behavior of this
>Longley chap. Why does your dad let you play with the Internet?
>Isn't it time for you to go back to high school?
>
>*SG*
>
>
Clueless.... but clearly predicted... Sadly.
-- 
David Longley
0
David
11/1/2004 1:30:22 PM
Wolf Kirchmeir wrote:
> Stargazer wrote:
> [...]
> >
> > It seems complicated because we're talking of different paradigms
> > here. Under behaviorist parlance, learning is related to the changes
> > in behavior of the organism/mechanism that one is studying, while in
> > neuroscience and artificial neural networks, learning is a change
> > of internal parameters (and/or architecture, if one wants to account
> > for plasticity) of the organism/mechanism. Although logically sound
> > and coherent, the behaviorist definition prevents the analysis of
> > internal modifications of the mechanism that aren't immediately
> > reflected in corresponding changes of behavior. For the behaviorist,
> > unsupervised learning does not exist, and this may explain why it
> > seems difficult to understand it.
> >
> > *SG*
>
> I don't think that for the behaviorist unsupervised learning does not
> exist - as I understand the term, it seems to be more or less
> synonymous with classical conditioning, which does not depend on
> feedback to the organism's behaviour, but only on repeated
> presentation of the conditioning stimulus.

It's not exactly the same thing. In classical conditioning you
must have a US-CS pair; in "pure" unsupervised learning that's not
necessary.

If I were required to stretch such notions a bit, I would equate
unsupervised learning with the behavioristic notion of vicarious
learning (or "observational learning"). This notion was first
investigated by Thorndike (with negative results), and only
later did it receive substantial empirical support. For
example, an experiment by Barnett & Benedetti (1960) had a
subject watch an acquaintance being subjected to shocks
after a buzzer sounded. The observer was never shocked, but
it was found that the GSR (galvanic skin response) of the observer
reflected an emotional arousal due to hearing the buzzer.
Thus, the buzzer became a CS for fear in the observer.

This is perhaps the closest one can get to unsupervised learning
from a behavioristic standpoint, but it is still very far from
what is meant at a lower level of analysis.

Unsupervised learning, from a cortical point of view, is often
associated with the changes in the patterns of connections among
neurons due to experience (stimulation). In one of the most studied
cases, the visual cortex, orientation selectivity and binocularity
emerge as a function of the kind of experiences that the visual
system is subjected to. Yes, this is a self-organizing process.
There are several computational models of such processes, some
of them replicating the formation of cortical columns. This is
nowadays a very advanced area of cortical modeling.
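
For flavor, a toy version of such a self-organizing process (a single
Hebbian unit under Oja's rule; real cortical-column models are far
richer, and the numbers here are invented): exposed to an ensemble of
stimuli with a dominant orientation, the unit's weights tune themselves
to that orientation with no supervision and no US-CS pairing:

import numpy as np

rng = np.random.default_rng(1)
theta = np.deg2rad(30)              # dominant orientation in the ensemble
w = rng.normal(0.0, 0.1, size=2)
eta = 0.01

for _ in range(5000):
    jitter = rng.normal(0.0, 0.2)
    x = np.array([np.cos(theta + jitter), np.sin(theta + jitter)])
    x *= rng.choice([-1.0, 1.0])    # random contrast polarity
    y = w @ x
    w += eta * y * (x - y * w)      # Oja's rule: Hebbian growth + normalization

print(np.rad2deg(np.arctan2(w[1], w[0])) % 180)   # ~30: tuned by experience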

*SG*


0
Stargazer
11/1/2004 1:52:49 PM
In article <10ocftehqh2q87c@news20.forteinc.com>, Stargazer 
<fuckoff@spammers.com> writes
>Wolf Kirchmeir wrote:
>> Stargazer wrote:
>> [...]
>> >
>> > It seems complicated because we're talking of different paradigms
>> > here. Under behaviorist parlance, learning is related to the changes
>> > in behavior of the organism/mechanism that one is studying, while in
>> > neuroscience and artificial neural networks, learning is a change
>> > of internal parameters (and/or architecture, if one wants to account
>> > for plasticity) of the organism/mechanism. Although logically sound
>> > and coherent, the behaviorist definition prevents the analysis of
>> > internal modifications of the mechanism that aren't immediately
>> > reflected in corresponding changes of behavior. For the behaviorist,
>> > unsupervised learning does not exist, and this may explain why it
>> > seems difficult to understand it.
>> >
>> > *SG*
>>
>> I don't think that for the behaviorist unsupervised learning does not
>> exist - as I understand the term, it seems to be more or less
>> synonymous with classical conditioning, which does not depend on
>> feedback to the organism's behaviour, but only on repeated
>> presentation of the conditioning stimulus.
>
>It's not exactly the same thing. In classical conditioning you
>must have a US-CS pair; in "pure" unsupervised learning that's not
>necessary.

Tell us how you know that.

Whilst you are at it, tell us what the sine qua non is for classical 
conditioning and how you know *that*. What are you betraying by the way 
that you write, and apropos my second question, what's the implicit 
reference? (These are basic undergraduate level questions incidentally, 
but what I have been saying requires more).
>
>If I were required to stretch such notions a bit, I would equate
>unsupervised learning with the behavioristic notion of vicarious
>learning (or "observational learning").

What is "the behavioristic notion" of.......learning"? It's quite clear 
to me that you don't know what you're talking about (or what we are 
talking about and think the way to address that is to rush off and 
browse the web for a potted high school level history). As a 
consequence, you're just writing journalistic. You don't appear to 
understand that this is an *empirical science* not an opportunity for 
post modern literary analysis!


> This notion was first
>investigated by Thorndike (with negative results), and only
>later it received substantial empirical support. For
>example, an experiment by Barnett & Benedetti (1960) put a
>subject to watch one acquaintance subjected to shocks
>after a buzzer sounded. The observer was never shocked, but
>it was found that GSR (galvanic skin response) of the observer
>reflected an emotional arousal due to listening the buzzer.
>Thus, the buzzer became a CS for fear in the observer.
>
>This is perhaps the closest one can get to unsupervised learning
>from a behavioristic standpoint, but it is still very far from
>what is meant at a lower level of analysis.
>
>Unsupervised learning, from a cortical point of view, is often
>associated with the changes in the patterns of connections among
>neurons due to experience (stimulation).

How do you know this? You're just making it up!

> In one of the most studied
>cases, the visual cortex, orientation selectivity and binocularity
>emerge as a function of the kind of experiences that the visual
>system is subjected to. Yes, this is a self-organizing process.

Do you have any idea what the cortex is? Where do its "inputs" come 
from, where do its "outputs" go? Take the afferent and efferent arms of 
the "reflex arc" and, besides critiquing the notion of the "reflex arc", 
try to find out what I said a while back (I called it "part 3") about 
primary, secondary, tertiary and so on sensory and motor nuclei and 
inter-neurones. What am I describing? How are these "layers" modulated? 
What modulates them? Is it "unsupervised" (perhaps) just because the 
researcher doesn't look anywhere else, because elsewhere isn't his 
"research interest" (aka competence)? You are writing nonsense, not to 
mention the fact that you're now blurring ANNs with real neurones. What 
you're writing is just more of the insane metaphysics that I've 
criticised elsewhere, and that you can find others writing articles and 
books on it is precisely why I don't write books and articles on it but 
do what I do instead. Like you, those who write like that are idiots.

>There are several computational models of such processes, some
>of them replicating the formation of cortical columns. This is
>nowadays a very advanced area of cortical modeling.

Really? Several "computational models" eh? You don't say? Do you not 
think Ptolemaic astronomers had "computational models"? Do you not think 
astrologers and "auraologists" currently have "computational models" 
(pretty close to fMRIs in many cases)? Do you not see what's wrong with 
"computational models"? Silly question, of course you don't, and you 
won't listen either.

You're an idiot because you don't know when to listen. My advice to you 
is to look up the origin of the word "I D I O T" and try to learn to 
listen... and whilst you are at it, look up what Wiesel said about 
C. elegans.
-- 
David Longley
0
David
11/1/2004 3:37:57 PM
David Longley wrote:
>
> [hallucinations omitted]

Time to spice up your medications, cranky boy.
Are you bipolar or is this the way you always behave?

God bless the killfile!

*PLONK*



0
Stargazer
11/1/2004 4:20:10 PM
In article <10ocohohemdr000@news20.forteinc.com>, Stargazer 
<fuckoff@spammers.com> writes
>David Longley wrote:
>>
>> [hallucinations omitted]

How do you know?
>
>Time to spice up your medications, cranky boy.
>Are you bipolar or is this the way you always behave?
>
>God bless the killfile!
>
>*PLONK*
>

The noise you make as you stick your head up your rectum!

You really won't find the answers to anything up there you know, 
familiar though it may be. You'll just write even more intoxicated 
nonsense. Did you make *any* effort to read Modlin's post, my response, 
or what I said in my above corrective post to you? You should ask why 
not if you haven't, and if you surreptitiously have, you should ask why 
you do that too!
-- 
David Longley
0
David
11/1/2004 4:46:20 PM
On 29-Oct-2004, "Stephen Harris" <cyberguard1048-usenet@yahoo.com> wrote:

> Well, the general umbrella of these forums is cognitive science.
> So my post doesn't entertain a non-scientific explanation --> by God or
> Design.

I don't recall mentioning God or religion in any of my messages.  If I did,
please quote the section.

Regarding Design, it is hard for anyone with a rational mind to deny the
remarkable order and "design" of the universe and living creatures.  We can
argue about the origin of this design, but the natural order is so far from
random that you have to postulate either a designer or a natural process
that operated contrary to entropy to produce the universe we live in.  I'm
not trying to convince you that the designer was God, but I do argue that it is
difficult to hold the position that the universe as we know it is simply the
product of random chance.

> Science does not claim that the universe does not have levels of what
> we perceive to be organization. But that there is no initial purpose
> that led to life. Before there was life, elements had to be created in
> stars that are essential to life. Science says there was no guiding
> hand to that, it just happened. And it is in that sense I use the word
> random.

There are hundreds of known physical constants in the universe (speed of
light, mass of electron, gravitational constant, nuclear forces, etc.) that
must have precisely the values they have for the universe to exist as we
know it; if you make even slight changes to any of them, the universe would
be wildly different.  It sure is a coincidence that all of them fell into
place through random chance.

> Evolution is a theory, which means it is not a proven fact. Fossils are a
> matter of fact, but the interpretation of how they got there is not a fact
> and I don't claim that. I do claim that if you post on a scientific forum
> then you use the cause and effect of science, which does not admit God as
> a cause.

I never stated that life was created by God (but I don't argue against it
either).  My point was that as deeper understanding is gained of the
incredible complexity of the fundamental life processes, it becomes
increasingly difficult to explain it through random events and incremental
Darwinian evolutionary theory.  Too many simultaneous, interacting processes
have to get going at the same time to make a self-replicating organism that
can pass on genetic information and begin a natural selection cycle. 
Getting from minerals in water to DNA/RNA and all of the machinery necessary
for replication is an unexplained phenomenon.  If you want to accept that it
happened on faith, that's fine; but it is not science.

-- 
Phil Sherrod
(phil.sherrod 'at' sandh.com)
http://www.dtreg.com  (decision tree modeling)
http://www.nlreg.com  (nonlinear regression)
http://www.NewsRover.com (Usenet newsreader)
http://www.LogRover.com (Web statistics analysis)
0
Phil
11/1/2004 4:55:10 PM
"Phil Sherrod" <phil.sherrod@REMOVETHISsandh.com> wrote in message 
news:3t-dnVHnwtUh9BvcRVn-uw@giganews.com...
>
> On 29-Oct-2004, "Stephen Harris" <cyberguard1048-usenet@yahoo.com> 
> wrote:
>
>> Well, the general umbrella of these forums is cognitive science.
>> So my post doesn't entertain a non-scientific explanation --> by God 
>> or
>> Design.
>
> I don't recall mentioning God or religion in any of my messages.  If I 
> did,
> please quote the section.
>
> Regarding Design, it is hard for anyone with a rational mind to deny 
> the
> remarkable order and "design" of the universe and living creatures. 
> We can
> argue about the origin of this design, but the natural order is so far 
> from
> random that you have to postulate either a designer or a natural 
> process
> that operated contrary to entropy to produce the universe we live in. 
> I'm
> not trying to convince you that the designer was God, but I do argue
> that it is difficult to hold the position that the universe as we know
> it is
> simply the
> product of random chance.
>
>> Science does not claim that the universe does not have levels of what
>> we perceive to be organization. But that there is no initial purpose
>> that led to life. Before there was life, elements had to be created
>> in stars that are essential to life. Science says there was no
>> guiding hand to that, it just happened. And it is in that sense I use
>> the word random.
>
> There are hundreds of known physical constants in the universe (speed 
> of
> light, mass of electron, gravitational constant, nuclear forces, etc.) 
> that
> must have precisely the values they have for the universe to exist as 
> we
> know it; if you make even slight changes to any of them, the universe 
> would
> be wildly different.  It sure is a coincidence that all of them fell 
> into
> place through random chance.
>
>> Evolution is a theory, which means it is not a proven fact. Fossils
>> are a matter of fact, but the interpretation of how they got there is
>> not a fact and I don't claim that. I do claim that if you post on a
>> scientific forum then you use the cause and effect of science, which
>> does not admit God as a cause.
>
> I never stated that life was created by God (but I don't argue against 
> it
> either).  My point was that as deeper understanding is gained of the
> incredible complexity of the fundamental life processes, it becomes
> increasingly difficult to explain it through random events and 
> incremental
> Darwinian evolutionary theory.  Too many simultaneous, interacting 
> processes
> have to get going at the same time to make a self-replicating organism 
> that
> can pass on genetic information and begin a natural selection cycle.
> Getting from minerals in water to DNA/RNA and all of the machinery 
> necessary
> for replication is an unexplained phenomenon.  If you want to accept 
> that it
> happened on faith, that's fine; but it is not science.

I agree with the general notion of what you say, but would like to add 
that perhaps the way organic life forms emerged and evolved on earth was 
no less a miracle than the emergence of protons and neutrons in the 
early universe. In fact, without protons and neutrons organic life would 
not be possible, nor be what it is. 


0
JPL
11/1/2004 5:01:08 PM
Bill Modlin wrote:
[...]
> 
> No, you missed the point.  The reward signal is *not* just
> another signal from the p.o.v. of the neuron.
> 
> You cannot have feedback directed systems unless there are
> special paths for the feedback,

Precisely.

> which do something quite
> different from the normal inputs to the adjusted cells.  The
> "something quite different" is that they change the
> operating parameters for subsequent inputs, rather than

Many things can change the operating parameters of a neuron - 
endorphins, pH levels, temperature, etc. If some molecule switches on a 
gene within the neuron, then the next inputs (up to some number) will 
change the way the neuron responds to subsequent inputs. Then the gene 
switches off, and the neuron's activation level stabilises at the new 
level. Or so I understand the process. In nature, some of these genes 
switch on and off repeatedly, others only once in the organism's 
lifetime. Does that give you any ideas? What are the analogues to these 
processes in an artificial NN? It seems to me that a neuron must itself 
be a pretty complicated system, in order for it to be "adjusted" so that 
subsequent inputs have different effects.

It also seems to me that "input" is a fuzzy concept.
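
The gene-switch scenario above has a natural artificial analogue. A
sketch (names and numbers invented, not a biophysical model): a
modulatory event opens a plasticity window during which the next N
inputs retune the unit; then the "gene" switches off and the new
response function is frozen:

class GeneGatedNeuron:
    def __init__(self, weight=1.0, eta=0.1):
        self.weight, self.eta = weight, eta
        self.plastic_inputs_left = 0          # "gene" currently off

    def modulatory_event(self, n_inputs=10):
        """Some molecule (endorphin, pH shift, ...) switches the gene on."""
        self.plastic_inputs_left = n_inputs

    def receive(self, x):
        y = self.weight * x
        if self.plastic_inputs_left > 0:      # gene on: input also retunes
            self.weight += self.eta * y * x
            self.plastic_inputs_left -= 1     # gene off after N inputs
        return y

Note how "input" is doing double duty here: while the window is open,
the same x is both processed and a cause of change, which is one way of
cashing out the fuzziness just mentioned.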
0
Wolf
11/1/2004 5:40:28 PM
in article unKgd.334213$3l3.127896@attbi_s03, patty at
pattyNO@SPAMicyberspace.net wrote on 10/30/04 3:52 AM:

> 
> Yeah, sure, like Markov wasn't studying these structures way back in
> 1906.  I coded my first NN after studying Markov chains, i didn't even
> know about Hull et al.  There has always been a close association
> between engineering (computers) and the math departments.  Did the
> psychology departments inform that process ... maybe ... but maybe not
> as much as it appears to you.  Maybe Minsky could tell us what primarily
> informed *his* research ....
> 
> patty
> 

Let us hasten then to a higher plane
Where dyads tread the fields of Venn
Their indices bedecked from 1 to N
Comingled in an endless Markov Chain

  - best effort recollection of something uttered by a contraption in
Stanislaw Lem's "Cyberiad"

0
Michael
11/1/2004 6:09:11 PM
in article 41850290.20590540@netnews.att.net, Lester Zick at
lesterDELzick@worldnet.att.net wrote on 10/31/04 7:46 AM:

>
> Hi Michael - Let me append a few substantive comments. However, since
> I have no hands on experience doing the kind of machine heuristics you
> discuss, I hope you'll excuse any egregious blunders I make.

Hi, Lester. Sure. I'm not worried about what you have your hands on.

> 
> With respect to supervised versus unsupervised learning, I prefer to
> call the supervised version simply training. Learning is something we
> know changes behavior, but we don't really exactly know how or why.

Well it often is called training - the ubiquitous task lists with laughable
schedule and cost estimates dear to the managerial caste will sometimes have
a bulleted item: train the nets - 1 week. Using the first law of project
estimation - double the estimate and change to the next higher order units -
this translates to 2 months.

In context, "training" is a reasonable term, but both "unsupervised
learning" algorithms and "supervised learning" algorithms can be considered
"training", though in the unsupervised case it would be rather like training
without a safety net. Often there is a "training phase", and once the
product, whatever it is, moves into production the training wheels come off.
Or they may be reattached for a little retraining now and then in a system
with a "training mode" (flip a toggle). Then there are systems that aspire
to "continuous learning" - these may go through a supervised learning phase
in the lab, and continue their education via unsupervised learning in the
field (on-the-job training).

> 
> One thing that occurred to me as I read your analysis of training, was
> that you give an algorithm the correct answer and allow it to come up
> with a correct result somehow. But how is the provided answer correct?
> You admit that actual answers are problematic and do not originate in
> any well defined algorithm.

There are two issues here - in one case there are identifiable correct
answers, but the procedure by which humans, say, make such identifications
is not sufficiently well known to yield an algorithm. In the other case
expectation (the calculation of a probability distribution) is the best that
can be done. For example, character recognizers are often constructed using
supervised learning algorithms. Let's suppose the task is to build a
recognizer of handwritten decimal digits. First you collect (or purchase) a
set of bitmaps of handwritten digits, maybe around 100,000 bitmaps, 10,000
for each digit. Then you pay data-entry operators (which may be you and your
buddies, or your colleagues' teenage kids, or someone's grad students, or
bona fide keyers from some temp agency, people whose jobs will be eliminated
by your work) to sit at a terminal and create the truth files associated
with the bitmaps. The assumption is that data-entry operators can correctly
distinguish '0' from '1' from '2', etc. even though nobody knows how they do
it. In reality, of course, some handwritten digits are so ambiguous that
there is no "correct" answer, and these go into the heck-if-I-know bin '?'.
This data collection phase is a non-trivial enterprise - best to use some
sort of two-pass blind keying scheme where each bitmap is labeled (as a '2'
or whatever) by two different operators, where the second operator is not
allowed to know what the first operator called it. When both operators give
it the same label, the bitmap and its label go into the training set;
otherwise it goes to a resolution operator with the authority to label it
and send it into the database, or to file it in the '?' bin. You can buy
"truthed" data sets.

The other example I gave, a routine that classifies loan applications into
WILL_DEFAULT and WONT_DEFAULT bins, is of the second type. Few people can look
at a loan application and identify the "correct" answer. So you look at case
histories. You get some large set of loan applications labeled DID_DEFAULT or
DID_NOT_DEFAULT, and attempt to construct a function that takes an
application record as input and creates a probability-of-defaulting as
output. One hopes the information on the application has some value in
assessing such probabilities (astrological sign, for example - the learning
algorithm may conclude that Leos rarely pay up).
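
A minimal rendering of that second case (synthetic data; logistic
regression stands in for whatever learning algorithm one actually
prefers), where the output is a probability of defaulting rather than an
identifiable "correct" label:

import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 3))              # application features
true_w = np.array([1.5, -2.0, 0.5])         # hidden "truth" of the simulation
y = (rng.random(1000) < 1 / (1 + np.exp(-(X @ true_w)))).astype(float)

w = np.zeros(3)
for _ in range(2000):                       # fit by gradient ascent on likelihood
    p = 1 / (1 + np.exp(-(X @ w)))          # predicted P(DID_DEFAULT)
    w += 0.1 * X.T @ (y - p) / len(y)

print(1 / (1 + np.exp(-(X[0] @ w))))        # probability-of-defaulting, one case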

> 
> So I would think a more appropriate term might be the expected answer
> in which case the problem remains as to why any expected answer is
> correct or not if it is and if it isn't then why is it expected? If
> the only answer is that it's your money at stake, then I can certainly
> understand the prerogative but not the idea of correctness in other
> than applied and not abstract terms. And if answers cannot be given in
> abstract terms, then the application is really just ambiguous.

The answer lies in the scandal of induction - the suspicion that similar
circumstances lead to similar outcomes. The actuarial sciences concern
themselves with investigating "what tends to confirm an induction", and
that, in a nutshell, is what "machine learning" is all about. (What does he
mean by "similar"?)

By the way, equating, as some here seem to do, "machine learning" with the
study of what various forms of ANN can or cannot do is rather like equating
mathematics with the study of what a particular sliderule can do.

> 
> Question: Are you looking for the machine to give you answers on
> optimal utility under given parameters? Or are you giving the machine
> some standard of optimal utility which you expect it to reach as part
> of its training?

Both, depending on circumstances.

> 
> Regards - Lester

0
Michael
11/1/2004 6:09:17 PM
Phil Sherrod wrote:
> On 29-Oct-2004, "Stephen Harris" <cyberguard1048-usenet@yahoo.com>
> wrote:
>
> > Well, the general umbrella of these forums is cognitive science.
> > So my post doesn't entertain a non-scientific explanation --> by
> > God or Design.
>
> I don't recall mentioning God or religion in any of my messages.  If
> I did, please quote the section.
>
> Regarding Design, it is hard for anyone with a rational mind to deny
> the remarkable order and "design" of the universe and living
> creatures.  We can argue about the origin of this design, but the
> natural order is so far from random that you have to postulate either
> a designer or a natural process that operated contrary to entropy to
> produce the universe we live in.  I'm not trying to convince you that
> the designer was God, but I do argue that it is difficult to hold the
> position that the universe as we know it is simply the product of
> random chance.

Living creatures are not the product of random chance. They are
the product of natural selection. Random mutations (and sexual
recombinations) are just components that add diversity, upon
which the pressures of natural selection operate. Additionally,
these natural processes do not operate "contrary to entropy".
They may locally diminish entropy, but all living creatures
dissipate more energy than they "concentrate". At the system level,
the tendency of entropy is always toward growth.

*SG*


0
Stargazer
11/1/2004 6:15:59 PM
Michael Olea wrote:

> in article unKgd.334213$3l3.127896@attbi_s03, patty at
> pattyNO@SPAMicyberspace.net wrote on 10/30/04 3:52 AM:
> 
> 
>>Yeah, sure, like Markov wasn't studying these structures way back in
>>1906.  I coded my first NN after studying Markov chains, i didn't even
>>know about Hull et al.  There has always been a close association
>>between engineering (computers) and the math departments.  Did the
>>psychology departments inform that process ... maybe ... but maybe not
>>as much as it appears to you.  Maybe Minsky could tell us what primarily
>>informed *his* research ....
>>
>>patty
>>
> 
> 
> Let us hasten then to a higher plane
> Where dyads tread the fields of Venn
> Their indices bedecked from 1 to N
> Comingled in an endless Markov Chain
> 
>   - best effort recollection of something uttered by a contraption in
> Stanislaw Lem's "Cyberiad"
> 

:)  but ...

Let us hasten then to a higher plane
where triads rule and quads tread the fields of Venn,
their molecules bedecked from 1 to N,
combining in endless webs of change!

patty


0
patty
11/1/2004 6:33:32 PM
Stargazer wrote:

> Phil Sherrod wrote:
> 
>>On 29-Oct-2004, "Stephen Harris" <cyberguard1048-usenet@yahoo.com>
>>wrote:
>>
>>
>>>Well, the general umbrella of these forums is cognitive science.
>>>So my post doesn't entertain a non-scientific explanation --> by
>>>God or Design.
>>
>>I don't recall mentioning God or religion in any of my messages.  If
>>I did, please quote the section.
>>
>>Regarding Design, it is hard for anyone with a rational mind to deny
>>the remarkable order and "design" of the universe and living
>>creatures.  We can argue about the origin of this design, but the
>>natural order is so far from random that you have to postulate either
>>a designer or a natural process that operated contrary to entropy to
>>produce the universe we live in.  I'm not trying to convince you that
>>the designer was God, but I do argue that it is difficult to hold the
>>position that the universe as we know it is simply the product of
>>random chance.
> 
> 
> Living creatures are not the product of random chance. They are
> the product of natural selection. Random mutations (and sexual
> recombinations) are just components that add diversity, upon
> which the pressures of natural selection operate. Additionally,
> these natural processes do not operate "contrary to entropy".
> They may locally diminish entropy, but all living creatures
> dissipate more energy than they "concentrate". At the system level,
> the tendency of entropy is always toward growth.
> 
> *SG*
> 

Is there not some bias implicit in any measure of entropy?

patty
0
patty
11/1/2004 6:38:06 PM
Bill Modlin wrote:
> "Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message
> news:RDihd.44$dj2.27841@news20.bellglobal.com...
> 
[...]
> No, you missed the point.  The reward signal is *not* just
> another signal from the p.o.v. of the neuron.
> 
> You cannot have feedback directed systems unless there are
> special paths for the feedback, which do something quite
> different from the normal inputs to the adjusted cells.  The
> "something quite different" is that they change the
> operating parameters for subsequent inputs, rather than
> being themselves processed as inputs.

[answered in another post]

> One possibility is for example the one described by Glen, in
> which cells with inputs from sensory data and outputs to
> motor control are affected by a separate hardwired "reward
> detector" which floods the area with chemicals causing
> selective reinforcement of recently active connections.  This
> flooding isn't "just another signal", it does not itself produce an
> output.  Instead it changes the way subsequent outputs are
> produced.

[snip an elaboration of the above, including what amounts to a 
description of how a signal might cause changes within a NN]

a) why should a signal produce an output? It doesn't do so in 
unsupervised learning, by definition of "unsupervised." I understand 
"signal" to mean "some spatio-temporal combination of inputs", which as 
far as I can tell is the meaning that makes sense in the passages quoted 
to enlighten me on certain technical usages.

b) "reward modules" could also provide synaptic signals to a NN.

c) "floods of chemicals" also affect outputs.

Again, I see nothing distinctive about feedback as compared to other 
signals. The only difference is that of originating source, which is 
hidden from the responding NN by intervening layers of NNs. Those 
intervening layers may combine inputs from further out spatially and 
temporally in such a way that the NN responds by adjusting itself; IOW 
those intervening NNs shape the signal.

"Floods of chemicals" affect the synaptic junctions etc, which amounts 
to filtering and shaping signals. But there is nothing in the _signal_ 
that labels it "feedback." You yourself point out that feedback 
"requires different pathways." That means different topologies of the 
NNs involved, which IMO is the crucial fact about any changes in NNs, 
feedback controlled or not.

There may be electronic or data-structure analogues to "floods of 
chemicals", but these too must resolve into patterns of inputs, ie 
signals, to the artificial NN. Just why some of these signals should be 
recognised as feedback, and others not, escapes me. It matters for the 
designer of the system, and it matters for testing the system, but at 
the level of NNs it matters not at all. What you need at that level is 
signals that cause the desired changes; etc. How to use feedback to 
create these signals is IMO another question.

0
Wolf
11/1/2004 7:20:50 PM
Stargazer wrote:
[...]
> Unsupervised learning, from a cortical point of view, is often
> associated with the changes in the patterns of connections among
> neurons due to experience (stimulation). In one of the most studied
> cases, the visual cortex, orientation selectivity and binocularity
> emerge as a function of the kind of experiences that the visual
> system is subjected to. Yes, this is a self-organizing process.
> There are several computational models of such processes, some
> of them replicating the formation of cortical columns. This is
> nowadays a very advanced area of cortical modeling.
> 
> *SG*

This does not IMO refute, nor does it extend, the behaviourist 
concept of learning. It is an example of it, in fact. IOW, cortical and 
other physiological changes must occur when an organism learns anything 
at all, including orientation sensitivity (which can be nicely 
controlled by environmental contingencies, as I'm sure you know), and 
binocularity (which can also be controlled by environmental 
contingencies, as I'm sure you know).
0
Wolf
11/1/2004 7:32:17 PM
in article Opmcr6AYuJhBFwk5@longley.demon.co.uk, David Longley at
David@longley.demon.co.uk wrote on 10/31/04 12:00 AM:

> In article <BDA99812.C01A%oleaj@sbcglobal.net>, Michael Olea
> <oleaj@sbcglobal.net> writes

[snip]

>> 
>> Yesterday I read
>> Natural Kinds for the fourth time - maybe I am just slow, but I keep getting
>> more out of that one essay. Bear in mind that I actualy write cluster
>> analysis code, so the ideas he wrestles with in Natural Kinds are not just
>> abstractions, but matters of personal practical concern. Cluster Analysis is
>> after all an attempt to automate a rigorous elucidation of natural kinds.
>> 
> 
> The one thing one soon learns after using cluster analysis practically
> is that cluster analysis (agglomerative or divisive) *always* produces
> clusters, just like factor analysis always produces factors (they are
> closely related).

Well, the statement that Factor Analysis always produces factors is another
way of stating that every finite vector space has a finite set of basis
vectors - the issue is whether or not the analysis yields a reduction in the
dimension of the basis (or, as in the case of JPEG, a useful tradeoff
between compression and fidelity), but yes, point taken. Any diligent
application of these tools, in the cluster analysis form of the problem,
therefore begins with an assessment of "clustering tendency", and concludes
with an investigation of "cluster validity".
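
In code the point looks like this (a sketch using scikit-learn for
brevity; the library and the synthetic data are incidental): k-means
will happily return k clusters for any k, so one checks a validity
index across candidate k before believing any of them:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(3)
data = np.vstack([rng.normal(0, 1, (50, 2)),    # two genuine clumps
                  rng.normal(5, 1, (50, 2))])

for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(data)
    print(k, silhouette_score(data, labels))    # peaks at the true k = 2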

The working assumption is that there is structure in the world, regularities
to be unveiled, not necessarily in tune with our "innate subjective spacing
of qualities". An interesting paper in which clustering (an agglomerative
hierarchical clustering) is based not on a metric, or similarity function,
but on mutual information is:

[A] E Schneidman, W Bialek, & MJ Berry II,  An information theoretic
approach to the functional classification of neurons, in Advances in Neural
Information Processing 15, S Becker, S Thrun & K Obermayer, eds, pp 197-204
(MIT Press, Cambridge, 2003).

Available at: 
http://www.princeton.edu/~wbialek/our_papers/schneidman+al_03a.pdf

/Start Abstract/

A population of neurons typically exhibits a broad diversity of responses to
sensory inputs. The intuitive notion of functional classification is that
cells can be clustered so that most of the diversity is captured by the
identity of the clusters rather than by individuals within clusters. We show
how this intuition can be made precise using information theory, without any
need to introduce a metric on the space of stimuli or responses. Applied to
the retinal ganglion cells of the salamander, this approach recovers
classical results, but also provides clear evidence for subclasses beyond
those identified previously. Further, we find that each of the ganglion
cells is functionally unique, and that even within the same subclass only a
few spikes are needed to reliably distinguish between cells.

\End Abstract\
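
In the same spirit, though much cruder (a toy sketch, not Schneidman et
al.'s method; the "cells" are simulated): pairwise mutual information
between discretized responses can drive an agglomerative clustering
without ever defining a metric on the responses themselves:

import numpy as np
from scipy.cluster.hierarchy import linkage

def mutual_info_bits(a, b):
    """MI in bits between two discrete response sequences."""
    joint, _, _ = np.histogram2d(a, b, bins=(a.max() + 1, b.max() + 1))
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(4)
s = rng.integers(0, 2, 500)                 # one stimulus feature
t = rng.integers(0, 2, 500)                 # an independent feature

def flip(v):
    return v ^ (rng.random(500) < 0.1)      # 10% response noise

cells = np.array([s, flip(s), t, flip(t)])  # cells 0/1 track s; 2/3 track t
dist = [1.0 - mutual_info_bits(cells[i], cells[j])
        for i in range(4) for j in range(i + 1, 4)]
print(linkage(dist, method="average"))      # pairs (0,1) and (2,3) merge first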


> In many applications (apart from the nice concrete
> biological ones used to illustrate them) there's nothing natural about
> them <g>

This is probably how grue emeralds were first discovered ;-)

> They're descriptive statistical tools. cf. first website
> reference for the beginning of a long series which took just this line
> and then think "Fragments".

Back to the question I think Modlin was contemplating - are there processes
in primate cortex that are analogous in their operation to such tools? Or to
put it another way:

Atick JJ, Redlich AN (1992) What does the retina know about natural scenes?
Neural Computation, 4, 196-210.

Barlow HB (2001) Redundancy reduction revisited. Network: Computation in
Neural Systems, 12, 241-253.

Olshausen BA, Field DJ (1996a). Emergence of simple-cell receptive field
properties by learning a sparse code for natural images. Nature, 381,
607-609.

(and many more)

http://redwood.ucdavis.edu/bruno/
http://www-2.cs.cmu.edu/~lewicki/
http://www.cnbc.cmu.edu/~tai/
http://www.jneurosci.org/cgi/content/abstract/13/11/4700


0
Michael
11/1/2004 7:44:28 PM
On  1-Nov-2004, "Stargazer" <fuckoff@spammers.com> wrote:

> Living creatures are not the product of random chance. They are
> the product of natural selection. Random mutations (and sexual
> recombinations) are just components that add diversity, upon
> which pressures of natural selection operates.

You left out something... How do you go from inanimate compounds to living
creatures capable of reproducing, passing on genetic material and engaging
in natural selection.  Is that a result of random chance?

You also didn't address the rather remarkable coincidence that so many
physical constants required to produce our universe all happen to have
exactly the right values  -- random chance?

-- 
Phil Sherrod
(phil.sherrod 'at' sandh.com)
http://www.dtreg.com  (decision tree modeling)
http://www.nlreg.com  (nonlinear regression)
http://www.NewsRover.com (Usenet newsreader)
http://www.LogRover.com (Web statistics analysis)
Phil
11/1/2004 7:56:45 PM
On Mon, 01 Nov 2004 18:09:17 GMT, Michael Olea <oleaj@sbcglobal.net>
in comp.ai.philosophy wrote:

>in article 41850290.20590540@netnews.att.net, Lester Zick at
>lesterDELzick@worldnet.att.net wrote on 10/31/04 7:46 AM:
>
>>
>> Hi Michael - Let me append a few substantive comments. However, since
>> I have no hands on experience doing the kind of machine heuristics you
>> discuss, I hope you'll excuse any egregious blunders I make.
>
>Hi, Lester. Sure. I'm not worried about what you have your hands on.

Not a problem. I wash my hands frequently. I just wish I could wash my
hands of David. He doesn't seem to be soluble in science, only in pure
snake oil.
 
>> With respect to supervised versus unsupervised learning, I prefer to
>> call the supervised version simply training. Learning is something we
>> know changes behavior, but we don't really exactly know how or why.
>
>Well it often is called training - the ubiquitous task lists with laughable
>schedule and cost estimates dear to the managerial caste will sometimes have
>a bulleted item: train the nets - 1 week. Using the first law of project
>estimation - double the estimate and change to the next higher order units -
>this translates to 2 months.

Some of the toughest things to learn were realistic planning and
budgeting.

>In context, "training" is a reasonable term, but both "unsupervised learning"
>algorithms and "supervised learning" algorithms can be considered
>"training", though in the unsupervised case it would be rather like training
>without a safety net. Often there is a "training phase" and once the
>product, whatever it is, moves into production the training wheels come off.
>Or they may be reattached for a little retraining now and then in a  system
>with a "training mode" (flip a toggle). Then there are systems that aspire
>to "continuous learning" - these may go through a supervised learning phase
>in the lab, and continue their education via unsupervised learning in the
>field (on the job training).

Seems pretty reasonable.

>> One thing that occurred to me as I read your analysis of training, was
>> that you give an algorithm the correct answer and allow it to come up
>> with a correct result somehow. But how is the provided answer correct?
>> You admit that actual answers are problematic and do not originate in
>> any well defined algorithm.
>
>There are two issues here - in one case there are identifiable correct
>answers, but the procedure by which humans, say, make such identifications
>is not sufficiently well known to yield an algorithm. In the other case
>expectation (the calculation of a probability distribution) is the best that
>can be done. For example, character recognizers are often constructed using
>supervised learning algorithms. Let's suppose the task is to build a
>recognizer of handwritten decimal digits. First you collect (or purchase) a
>set of bitmaps of handwritten digits, maybe around 100,000 bitmaps, 10,000
>for each digit. Then you pay data-entry operators (which may be you and your
>buddies, or your colleagues' teenage kids, or someone's grad students, or
>bona fide keyers from some temp agency, people whose jobs will be eliminated
>by your work) to sit at a terminal and create the truth files associated
>with the bitmaps. The assumption is that data-entry operators can correctly
>distinguish '0' from '1' from '2', etc. even though nobody knows how they do
>it. In reality, of course, some handwritten digits are so ambiguous that
>there is no "correct" answer, and these go into the heck-if-I-know bin '?'.
>This data collection phase is a non-trivial enterprise - best to use some
>sort of two-pass blind keying scheme where each bitmap is labeled (as a '2'
>or whatever) by two different operators, where the second operator is not
>allowed to know what the first operator called it. When both operators give
>it the same label the bitmap and its label go into the training set,
>otherwise it goes to a resolution operator with the authority to label it
>and send it into the database, or to file it in the '?'- bin. You can buy
>"truthed" data sets.

Character sets have ready definitions, however.
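
To make the two-pass scheme described above concrete, a minimal sketch -
the function names and the '?' convention are illustrative, not anyone's
production system:

    def resolve_label(first_pass, second_pass):
        # Two-pass blind keying: accept a label only when two independent
        # operators agree; disagreements go to a resolution operator.
        return first_pass if first_pass == second_pass else None

    def build_training_set(bitmaps, keyer_a, keyer_b, resolver):
        training, unknown = [], []
        for bmp in bitmaps:
            label = resolve_label(keyer_a(bmp), keyer_b(bmp))
            if label is None:
                label = resolver(bmp)  # may still return '?' for hopeless cases
            (unknown if label == '?' else training).append((bmp, label))
        return training, unknown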

>The other example I gave, a routine that classifies loan applications into
>WILL_DEFAULT, WONT_DEFAULT bins is of the second type. Few people can look
>at a loan application and identify the "correct" answer. So you look at case
>histories. You get some large set of loan applications labeled DID_DEFAULT or
>DID_NOT_DEFAULT, and attempt to construct a function that takes an
>application record as input and creates a probability-of-defaulting as
>output. One hopes the information on the application has some value in
>assessing such probabilities (astrological sign, for example - the learning
>algorithm may conclude that Leos rarely pay up).

A more difficult application obviously. So, in this case there would
be no uniquely correct machine training algorithm?
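
One standard (though by no means uniquely correct) way to turn labeled case
histories into a probability-of-defaulting is logistic regression - a minimal
sketch, where the learning rate, step count, and plain gradient descent are
all illustrative assumptions:

    import numpy as np

    def train_logistic(X, y, lr=0.1, steps=2000):
        # X: (n, d) application features; y: 1 = DID_DEFAULT, 0 = DID_NOT_DEFAULT
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(steps):
            p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(default)
            w -= lr * (X.T @ (p - y)) / len(y)      # gradient of the log-loss
            b -= lr * np.mean(p - y)
        return w, b

    def probability_of_default(x, w, b):
        return 1.0 / (1.0 + np.exp(-(x @ w + b)))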
 
>> So I would think a more appropriate term might be the expected answer
>> in which case the problem remains as to why any expected answer is
>> correct or not if it is and if it isn't then why is it expected? If
>> the only answer is that it's your money at stake, then I can certainly
>> understand the prerogative but not the idea of correctness in other
>> than applied and not abstract terms. And if answers cannot be given in
>> abstract terms, then the application is really just ambiguous.
>
>The answer lies in the scandal of induction - the suspicion that similar
>circumstances lead to similar outcomes. The actuarial sciences concern
>themselves with investigating "what tends to confirm an induction", and
>that, in a nutshell, is what "machine learning" is all about. (What does he
>mean by "similar"?)
>
>By the way, equating, as some here seem to do, "machine learning" with the
>study of what various forms of ANN can or cannot do is rather like equating
>mathematics with the study of what a particular sliderule can do.

Thankfully we've had mathematicians to elucidate math. And, of course,
cognitive science has me.
 
>> Question: Are you looking for the machine to give you answers on
>> optimal utility under given parameters? Or are you giving the machine
>> some standard of optimal utility which you expect it to reach as part
>> of its training?
>
>Both, depending on circumstances.

More or less as I expected, Michael. At least I was asking relevant
questions. Thanks. Now, if you could just develop a machinable
learning algorithm to get rid of David.

Regards - Lester
lesterDELzick
11/1/2004 8:17:07 PM
Lester Zick wrote:

> Not a problem. I wash my hands frequently. I just wish I could wash my
> hands of David. 

Your wish is my command!  Look under your Usenet menu, tools, message 
filters ... or as appropriate to your particular platform ...  and 
put David on ignore.  Never respond to him again!  If you follow these 
instructions your hands will be, of him, washed.
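
Mechanically, a killfile is nothing more than a predicate over message
headers - a minimal sketch (simplified headers; the choice of entry is, of
course, purely illustrative):

    def killfiled(headers, killfile={"From": ["David@longley.demon.co.uk"]}):
        # Drop the message when any killfile entry matches its header field.
        return any(needle in headers.get(field, "")
                   for field, needles in killfile.items()
                   for needle in needles)

    # e.g. killfiled({"From": "David Longley <David@longley.demon.co.uk>"}) -> True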

> He doesn't seem to be soluble in science, only in pure
> snake oil.
>  

ROFL !!

> More or less as I expected, Michael. At least I was asking relevant
> questions. Thanks. Now, if you could just develop a machinable
> learning algorithm to get rid of David.
> 

The problem is not with the availability of algorithms (see above).  The 
problem is with your ability to follow them.   David is addictive.  He 
should come with a warning.

patty
patty
11/1/2004 8:32:03 PM
In article <BDABD221.C06F%oleaj@sbcglobal.net>, Michael Olea 
<oleaj@sbcglobal.net> writes
>in article Opmcr6AYuJhBFwk5@longley.demon.co.uk, David Longley at
>David@longley.demon.co.uk wrote on 10/31/04 12:00 AM:
>
>> In article <BDA99812.C01A%oleaj@sbcglobal.net>, Michael Olea
>> <oleaj@sbcglobal.net> writes
>
>[snip]
>
>>>
>>> Yesterday I read
>>> Natural Kinds for the fourth time - maybe I am just slow, but I keep getting
>>> more out of that one essay. Bear in mind that I actually write cluster
>>> analysis code, so the ideas he wrestles with in Natural Kinds are not just
>>> abstractions, but matters of personal practical concern. Cluster Analysis is
>>> after all an attempt to automate a rigorous elucidation of natural kinds.
>>>
>>
>> The one thing one soon learns after using cluster analysis practically
>> is that cluster analysis (agglomerative or divisive), *always* produces
>> clusters (just like Factor analysis always produces factors (they are
>> closely related).
>
>Well, the statement that Factor Analysis always produces factors is another
>way of stating that every finite vector space has a finite set of basis
>vectors - the issue is whether or not the analysis yields a reduction in the
>dimension of the basis (or, as in the case of JPEG, a useful tradeoff
>between compression and fidelity), but yes, point taken. Any diligent
>application of these tools, in the cluster analysis form of the problem,
>therefore begins with an assessment of "clustering tendency", and concludes
>with an investigation of "cluster validity".
>
>The working assumption is that there is structure in the world, regularities
>to be unveiled, not necessarily in tune with our "innate subjective spacing
>of qualities". An interesting paper in which clustering (an agglomerative
>hierarchical clustering) is based not on a metric, or similarity function,
>but on mutual information is:
>
>[A] E Schneidman, W Bialek, & MJ Berry II,  An information theoretic
>approach to the functional classification of neurons, in Advances in Neural
>Information Processing Systems 15, S Becker, S Thrun & K Obermayer, eds, pp 197-204
>(MIT Press, Cambridge, 2003).
>
>Available at:
>http://www.princeton.edu/~wbialek/our_papers/schneidman+al_03a.pdf
>
>[snip]
>
>
>> In many applications (apart from the nice concrete
>> biological ones used to illustrate them) there's nothing natural about
>> them <g>
>
>This is probably how grue emeralds were first discovered ;-)
>
>> They're descriptive statistical tools. cf. first website
>> reference for the beginning of a long series which took just this line
>> and then think "Fragments".
>
>Back to the question I think Modlin was contemplating - are there processes
>in primate cortex that are analogous in their operation to such tools? Or to
>put it another way:
>
>[snip]
>
>
I don't think the above is going to help you,  Modlin, or anyone else 
Michael. It's on completely the wrong track (except for *modelling* 
other empirical work perhaps in neuroscience and behaviour - but you 
have to begin to ask questions there, not stop). If I thought otherwise, 
I wouldn't have gone into *applied* behaviour analysis research and 
development *from* neuroscience over twenty years ago. This is why I've 
said it would be wiser to try to criticize Modlin's post in order 
to see *why* I've been saying that he (and others pursuing that line) 
may be *fundamentally* misguided. It is a radical criticism and so far 
most folk here (and elsewhere) have failed to grasp it.

There are, incidentally, many distance metrics which can be used in 
cluster analysis, just as there are different rotation etc. techniques in 
factor analysis (the one you refer to above was central to some 
statistical (stochastic) learning models back in the 1950s, and folk 
ignore Skinner's early criticism of Bush and Mosteller etc. at their 
peril, as I keep seeing here and elsewhere). Folk just think they know 
better, despite the evidence telling them they don't.  Note, one can even 
select different methods in SAS and SPSS! The bottom line is that a few 
years' work with these technologies will usually teach one that in the 
end, one has to make other decisions in terms of a greater web of belief 
- this is the importance of pragmatic, empirical context.
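
How much the choice of metric matters is easy to see even on toy data - a
minimal sketch (numbers invented) in which the nearest neighbour of a point
changes when the metric does:

    import numpy as np

    X = np.array([[0.0, 0.0],   # item 0
                  [2.0, 0.0],   # item 1
                  [1.4, 1.4]])  # item 2

    def euclidean(a, b):
        return float(np.sqrt(((a - b) ** 2).sum()))

    def manhattan(a, b):
        return float(np.abs(a - b).sum())

    for name, d in [("euclidean", euclidean), ("manhattan", manhattan)]:
        dists = [d(X[0], X[1]), d(X[0], X[2])]
        print(name, "-> nearest to item 0 is item", 1 + int(np.argmin(dists)))
    # euclidean -> item 2 (1.980 < 2.0); manhattan -> item 1 (2.0 < 2.8)

Any agglomerative procedure built on top of those distances can then, of
course, merge in a different order.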

To paraphrase (and I shouldn't): it isn't what's inside the head (or 
chips) that matters, it's what they're both inside of. Read "Fragments", 
follow up the other references in that list of related papers and links 
I've periodically posted here, and give it all some careful rethinking.

Kind regards,

-- 
David Longley
David
11/1/2004 8:33:26 PM
Phil Sherrod wrote:
> On  1-Nov-2004, "Stargazer" <fuckoff@spammers.com> wrote:
>
> > Living creatures are not the product of random chance. They are
> > the product of natural selection. Random mutations (and sexual
> > recombinations) are just components that add diversity, upon
> > which pressures of natural selection operates.
>
> You left out something... How do you go from inanimate compounds to
> living creatures capable of reproducing, passing on genetic material
> and engaging in natural selection.  Is that a result of random chance?

We don't know yet. What we do know is that all the difficult questions
we had in the past (such as the once-difficult "inability of humans to
synthesize organic compounds") were eventually explained as the result
of natural processes, not of "supernatural influences". Christian de
Duve's "Vital Dust" is one hell of a read on these issues.

> You also didn't address the rather remarkable coincidence that so many
> physical constants required to produce our universe all happen to have
> exactly the right values  -- random chance?

I'm not surprised by the fact that several physical constants have
exactly the values they must have in order for life to thrive. I'm not
surprised because I don't know whether this is a "suspicious coincidence"
or just a statistically frequent situation. Leaving anthropic ideas
aside, one could only be surprised by such a thing if one knew
how many "universes" (sample size) there are out there. Is our
universe the one and only? Or are there 10^10^10 universes?
These questions aren't scientific questions (they cannot be
empirically investigated), so anything related to these matters
is only metaphysical speculation - fine ways to entertain people
at cocktail parties.

*SG*


Stargazer
11/1/2004 8:35:00 PM
patty wrote:
> Stargazer wrote:
>
> > Phil Sherrod wrote:
> >
> > > On 29-Oct-2004, "Stephen Harris" <cyberguard1048-usenet@yahoo.com>
> > > wrote:
> > >
> > >
> > > > Well, the general umbrella of these forums is cognitive science.
> > > > So my post doesn't entertain a non-scientific explanation --> by
> > > > God or Design.
> > >
> > > I don't recall mentioning God or religion in any of my messages. If I did, 
> > > please quote the section.
> > >
> > > Regarding Design, it is hard for anyone with a rational mind to
> > > deny the remarkable order and "design" of the universe and living
> > > creatures.  We can argue about the origin of this design, but the
> > > natural order is so far from random that you have to postulate
> > > either a designer or a natural process that operated contrary to
> > > entropy to produce the universe we live in.  I'm not trying to
> > > convince you that the designer was God, but I do argue that is
> > > difficult to hold the position that the universe as we know it is
> > > simply the product of random chance.
> >
> >
> > Living creatures are not the product of random chance. They are
> > the product of natural selection. Random mutations (and sexual
> > recombinations) are just components that add diversity, upon
> > which pressures of natural selection operates. Additionally,
> > these natural processes do not operate "contrary to entropy".
> > They may locally diminish entropy, but all living creatures
> > spread more energy than they "concentrate". On a system level,
> > tendency of entropy is always of growth.
> >
> > *SG*
>
> Is there not some bias implicit in any measure of entropy ?

If one is using entropy in the "folk" sense, yes. Social sciences,
for instance, often use the term as a synonym for "disorder". One
can only say that something is "disorganized" if one is comparing
it with some orderly standard. That obviously depends on the eye of
the beholder (anything written in Cyrillic would seem garbled to
our eyes). Entropy, in the physical sense, is related to the state
of dispersion of energy in a system. This does not depend on any
observer bias.

*SG*


Stargazer
11/1/2004 8:41:36 PM
Wolf Kirchmeir wrote:
> Stargazer wrote:
> [...]
> > Unsupervised learning, from a cortical point of view, is often
> > associated with the changes in the patterns of connections among
> > neurons due to experience (stimulation). In one of the most studied
> > cases, the visual cortex, orientation selectivity and binocularity
> > emerge as a function of the kind of experiences that the visual
> > system is subjected to. Yes, this is a self-organizing process.
> > There are several computational models of such processes, some
> > of them replicating the formation of cortical columns. This is
> > nowadays a very advanced area of cortical modeling.
> >
> > *SG*
>
> This does not IMO refute, neither does it extend, the behaviourist
> concept of learning. It is an example of it, in fact. IOW, cortical and
> other physiological changes must occur when an organism learns
> anything at all, including orientation sensitivity (which can be
> nicely controlled by environmental contingencies, as I'm sure you
> know), and binocularity (which can also be controlled by environmental
> contingencies, as I'm sure you know.)

Of course it does not refute the behaviorist concept of learning
(after all, that's a *definition*), but it obviously extends it. For
example, Gary Blasdel's experimental work on the ocular dominance
bands of monkey striate cortex has been computationally
reproduced by Miikkulainen and Sirosh in a quite remarkable study
(the LISSOM model).
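
For a toy flavor of that family of models - not LISSOM itself, just the
generic self-organizing-map update such models build on, with every
parameter here invented:

    import numpy as np

    def som_step(weights, x, lr=0.1, sigma=1.0):
        # One update: find the best-matching unit on a 1-D sheet of units,
        # then pull it and its neighbours toward the input x.
        bmu = int(np.argmin(((weights - x) ** 2).sum(axis=1)))
        idx = np.arange(len(weights))
        neighbourhood = np.exp(-((idx - bmu) ** 2) / (2 * sigma ** 2))
        return weights + lr * neighbourhood[:, None] * (x - weights)

    # 20 units, 2-D inputs; after many presentations the sheet becomes
    # topographically ordered: neighbouring units prefer neighbouring inputs.
    rng = np.random.default_rng(0)
    weights = rng.random((20, 2))
    for _ in range(2000):
        weights = som_step(weights, rng.random(2))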

*SG*


Stargazer
11/1/2004 8:54:10 PM
Lester Zick wrote:
>
> Not a problem. I wash my hands frequently. I just wish I could wash my
> hands of David. He doesn't seem to be soluble in science, only in pure
> snake oil.

LOL. Jerry Seinfeld beware: you have a serious competitor here.

> More or less as I expected, Michael. At least I was asking relevant
> questions. Thanks. Now, if you could just develop a machinable
> learning algorithm to get rid of David.

You already have this algorithm: it's called killfile.

*SG*


Stargazer
11/1/2004 9:00:07 PM
Michael Olea wrote:

> in article Opmcr6AYuJhBFwk5@longley.demon.co.uk, David Longley at
> David@longley.demon.co.uk wrote on 10/31/04 12:00 AM:
> 
>>The one thing one soon learns after using cluster analysis practically
>>is that cluster analysis (agglomerative or divisive), *always* produces
>>clusters (just like Factor analysis always produces factors (they are
>>closely related).
> 
> 
> Well, the statement that Factor Analysis always produces factors is another
> way of stating that every finite vector space has a finite set of basis
> vectors - the issue is whether or not the analysis yields a reduction in the
> dimension of the basis (or, as in the case of JPEG, a useful tradeoff
> between compression and fidelity), but yes, point taken. Any diligent
> application of these tools, in the cluster analysis form of the problem,
> therefore begins with an assessment of "clustering tendency", and concludes
> with an investigation of "cluster validity".
> 
> The working assumption is that there is structure in the world, regularities
> to be unveiled, not necessarily in tune with our "innate subjective spacing
> of qualities". 

.... and which assumption is frequently valid :)  Another prime example 
of clustering and its validity is a Google search.  There is a reason we 
find these working as well as we do.  Incidentally it may not be any 
coincidence that the head of R&D at Google wrote the text book on AI.

patty
patty
11/1/2004 9:26:49 PM
Stargazer wrote:

> patty wrote:
> 
>>[...]
>>Is there not some bias implicit in any measure of entropy ?
> 
> 
> If one is using entropy in the "folk" sense, yes. Social sciences,
> for instance, often use the term as a synonym for "disorder". One
> can only say that something is "disorganized" if one is comparing
> it with some orderly standard. That obviously depends on the eye of
> the beholder (anything written in Cyrillic would seem garbled to
> our eyes). Entropy, in the physical sense, is related to the state
> of dispersion of energy in a system. This does not depend on any
> observer bias.
> 
> *SG*
> 

I think the question is what kind of entropy are you and Phil talking 
about above?  Also bear in mind that any measurement of entropy must 
needs contain the bias of the measuring instruments.  In fact the very 
definition of "energy" contains a bias.  So for my money you have not 
gotten out of my bind just by calling it physical.

patty
patty
11/1/2004 9:55:16 PM
patty wrote:
> [...]
>
> I think the question is what kind of entropy are you and Phil talking
> about above?  Also bear in mind that any measurement of entropy must
> needs contain the bias of the measuring instruments.  In fact the very
> definition of "energy" contains a bias.  So for my money you have not
> gotten out of my bind just by calling it physical.

I'm not sure I understand what you call "bias" here. Instrumentation
may be subject to reading errors, but that's a topic to be handled
by experimental procedures and adequate statistical analysis, with
several techniques that can be used to minimize such influences.

But it is time now to say that entropy is a word with two relatively
distinct senses. In physics, it measures a property of irreversible
processes and is defined by dS = dq/T. Life is an irreversible process
(dead people don't live again ;-), thus entropy increases. The
compression and decompression of a gas by an ideal piston is a
reversible process (notice: ideal piston).

Entropy, in communications theory (commonly known as information
theory) is defined as H = - Sum(pi.log(pi)), where pi is the
probability of occurrence of the ith event. This is related to
Shannon's definition of information (main difference is the
minus sign), which is also associated to the probabilities of
events.
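
A quick check of that formula, for anyone who wants to see it run (the
distributions are, of course, made up):

    import numpy as np

    def shannon_entropy(p):
        # H = -sum(p_i * log2(p_i)) in bits; zero-probability events drop out
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    print(shannon_entropy([0.5, 0.5]))  # 1.0 bit - a fair coin
    print(shannon_entropy([0.9, 0.1]))  # ~0.47 bits - a biased coin says less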

*SG*


Stargazer
11/1/2004 10:40:56 PM
in article tSxhd.284777$wV.10852@attbi_s54, patty at
pattyNO@SPAMicyberspace.net wrote on 11/1/04 1:26 PM:

> Michael Olea wrote:
> 
>> [snip]
>> 
>> The working assumption is that there is structure in the world, regularities
>> to be unveiled, not necessarily in tune with our "innate subjective spacing
>> of qualities". 
> 
> ... and which assumption is frequently valid :)  Another prime example
> of clustering and its validity is a Google search.  There is a reason we
> find these working as well as we do.  Incidentally it may not be any
> coincidence that the head of R&D at Google wrote the text book on AI.
> 
> patty

So I tried a google search on:  Head R&D AI Google textbook

The first of 33 results found contained this:

Peter Norvig is the Director of Search Quality at Google Inc. He is a
Fellow and Councilor of the American Association for Artificial Intelligence
and co-author of Artificial Intelligence: A Modern Approach, the leading
textbook in the field.

There was also on the same page something about head-mounted displays...

Michael
11/1/2004 10:42:15 PM
Michael Olea wrote:
> in article tSxhd.284777$wV.10852@attbi_s54, patty at
> pattyNO@SPAMicyberspace.net wrote on 11/1/04 1:26 PM:
> 
> 
>>Michael Olea wrote:
>>[snip]
>>>
>>>The working assumption is that there is structure in the world, regularities
>>>to be unveiled, not necessarily in tune with our "innate subjective spacing
>>>of qualities". 
>>
>>... and which assumption is frequently valid :)  Another prime example
>>of clustering and its validity is a Google search.  There is a reason we
>>find these working as well as we do.  Incidentally it may not be any
>>coincidence that the head of R&D at Google wrote the text book on AI.
>>
>>patty
> 
> 
> So I tried a google search on:  Head R&D AI Google textbook
> 
> The first of 33 results found contained this:
> 
> Peter Norvig is the Director of Search Quality at Google Inc. He is a
> Fellow and Councilor of the American Association for Artificial Intelligence
> and co-author of Artificial Intelligence: A Modern Approach, the leading
> textbook in the field.
> 

Now that's a live example of what i call Internet ai :)   Incidentally i 
didn't pre-test my words to fake that retrieval ... they came verbatim 
out of my flaky memory (were i to have such a memory).

patty

patty
11/1/2004 11:01:01 PM
On Mon, 1 Nov 2004 18:00:07 -0300, "Stargazer" <fuckoff@spammers.com>
in comp.ai.philosophy wrote:

>Lester Zick wrote:
>>
>> Not a problem. I wash my hands frequently. I just wish I could wash my
>> hands of David. He doesn't seem to be soluble in science, only in pure
>> snake oil.
>
>LOL. Jerry Seinfeld beware: you have a serious competitor here.

Much grass, SG. You're a good audience and David cuts such a
ridiculous figure.

>> More or less as I expected, Michael. At least I was asking relevant
>> questions. Thanks. Now, if you could just develop a machinable
>> learning algorithm to get rid of David.
>
>You already have this algorithm: it's called killfile.

Unfortunately my newsreader is FreeAgent and doesn't support killfile
that I can tell. Apart from that it's a great reader. Besides, how
could I develop my comic appeal without David?

Regards - Lester
lesterDELzick
11/1/2004 11:40:13 PM
On Mon, 1 Nov 2004 17:35:00 -0300, "Stargazer" <fuckoff@spammers.com>
in comp.ai.philosophy wrote:

>[snip]
>I'm not surprised by the fact that several physical constants have
>exactly the values they must have in order for life to thrive. I'm not
>surprised because I don't know whether this is a "suspicious coincidence"
>or just a statistically frequent situation. Leaving anthropic ideas
>aside, one could only be surprised by such a thing if one knew
>how many "universes" (sample size) there are out there. Is our
>universe the one and only? Or are there 10^10^10 universes?
>These questions aren't scientific questions (they cannot be
>empirically investigated), so anything related to these matters
>is only metaphysical speculation - fine ways to entertain people
>at cocktail parties.

Only one universe, only three spatial dimensions. Which is why I'm not
on late night TV. I guess I just can't be funny all the time.

Regards - Lester
lesterDELzick
11/1/2004 11:44:12 PM
Stargazer wrote:
> patty wrote:
> 
>>[...]
>>
>>I think the question is what kind of entropy are you and Phil talking
>>about above?  Also bear in mind that any measurement of entropy must
>>needs contain the bias of the measuring instruments.  In fact the very
>>definition of "energy" contains a bias.  So for my money you have not
>>gotten out of my bind just by calling it physical.
> 
> 
> I'm not sure I understand what you call "bias" here. Instrumentation
> may be subject to reading errors, but that's a topic to be handled
> by experimental procedures and adequate statistical analysis, with
> several techniques that can be used to minimize such influences.
> 
> But it is time now to say that entropy is a word with two relatively
> distinct senses. In physics, it measures a property of irreversible
> processes and is defined by dS = dq/T. Life is an irreversible
> (dead people don't live again ;-), thus entropy increases. The
> compression and decompression of a gas by an ideal piston is a
> reversible process (notice: ideal piston).
> 

Well with this definition one obvious source of bias is what is the 
process and what is not the process.  For different choices, you will 
get different measurements.  What is telling is your comment "Life is an 
irreversible process (dead people don't live again ;-)".   I, for 
example, would not have made that choice ... i would *not* have chosen 
the individual as the foreground/background of the process ... i would 
have chosen the species and said "Life is a cyclic reversible process 
... a mother gives birth to her son and the process repeats".

Now i must admit that i am a bit beyond my depth here, not being a 
physicist, but in my heart of hearts i do believe that life breaks the 
second law of thermodynamics for some frame of reference.

patty

> Entropy, in communications theory (commonly known as information
> theory) is defined as H = - Sum(pi.log(pi)), where pi is the
> probability of occurrence of the ith event. This is related to
> Shannon's definition of information (main difference is the
> minus sign), which is also associated to the probabilities of
> events.
> 
> *SG*
> 
> 
patty
11/1/2004 11:46:04 PM
On Mon, 01 Nov 2004 22:42:15 GMT, Michael Olea <oleaj@sbcglobal.net>
in comp.ai.philosophy wrote:

>in article tSxhd.284777$wV.10852@attbi_s54, patty at
>pattyNO@SPAMicyberspace.net wrote on 11/1/04 1:26 PM:
>
>> Michael Olea wrote:
>> [snip]
>>> 
>>> The working assumption is that there is structure in the world, regularities
>>> to be unveiled, not necessarily in tune with our "innate subjective spacing
>>> of qualities". 
>> 
>> ... and which assumption is frequently valid :)  Another prime example
>> of clustering and its validity is a Google search.  There is a reason we
>> find these working as well as we do.  Incidentally it may not be any
>> coincidence that the head of R&D at Google wrote the text book on AI.
>> 
>> patty
>
>So I tried a google search on:  Head R&D AI Google textbook
>
>The first of 33 results found contained this:
>
>Peter Norvig is the Director of Search Quality at Google Inc. He is a
>Fellow and Councilor of the American Association for Artificial Intelligence
>and co-author of Artificial Intelligence: A Modern Approach, the leading
>textbook in the field.
>
>There was also on the same page something about head-mounted displays...

Michael, you need a license to do comedy on this newsgroup. But as I'm
sure you're licensed elsewhere, I'll let it pass this time.

Regards - Lester
lesterDELzick
11/1/2004 11:49:11 PM
Stargazer wrote:
> Wolf Kirchmeir wrote:
[...]
>>This does not IMO refute, neither does it extend, the behaviourist
>>concept of learning. [...]
> 
> 
> Of course it does not refute the behaviorist concept of learning
> (after all, that's a *definition*), but it obviously extends it. For
> example, Gary Blasdel's experimental work on the ocular dominance
> bands of monkey striate cortex has been computationally
> reproduced by Miikkulainen and Sirosh in a quite remarkable study
> (the LISSOM model).
> 
> *SG*


Thanks for the reference. That's the sort of thing that may prove 
useful. I still think, though, that the structure of the NNs is of the 
essence. "Some population of neurons", as the phrase went in some other 
post, just doesn't cut it. "Some configuration of neurons", however, might.
Wolf
11/2/2004 3:14:43 AM
Lester Zick wrote:
[...]
> 
> Unfortunately my newsreader is FreeAgent and doesn't support killfile
> that I can tell. Apart from that it's a great reader. Besides, how
> could I develop my comic appeal without David?
> 
> Regards - Lester

Doesn't it have filtering?
Wolf
11/2/2004 3:17:17 AM
patty wrote:
> Stargazer wrote:
> > patty wrote:
> >
> > > Stargazer wrote:
> > >
> > I'm not sure I understand what you call "bias" here. Instrumentation
> > may be subject to reading errors, but that's a topic to be handled
> > by experimental procedures and adequate statistical analysis, with
> > several techniques that can be used to minimize such influences.
> >
> > But it is time now to say that entropy is a word with two relatively
> > distinct senses. In physics, it measures a property of irreversible
> > processes and is defined by dS = dq/T. Life is an irreversible
> > process (dead people don't live again ;-), thus entropy increases.
> > The compression and decompression of a gas by an ideal piston is a
> > reversible process (notice: ideal piston).
> >
>
> Well with this definition one obvious source of bias is what is the
> process and what is not the process.  For different choices, you will
> get different measurements.  What is telling is your comment "Life is
> an irreversible process (dead people don't live again ;-)".   I, for
> example, would not have made that choice ... i would *not* have chosen
> the individual as the foreground/background of the process ... i would
> have chosen the species and said "Life is a cyclic reversible process
> ... a mother gives birth to her son and the process repeats".
>
> Now i must admit that i am a bit beyond my depth here, not being a
> physicist, but in my heart of hearts i do believe that life breaks the
> second law of thermodynamics for some frame of reference.

Now I understand what is troubling you: the frame of reference or
(more appropriately) the conceptual level of analysis. Physicists
usually choose the atomic or subatomic level of analysis. Chemists
choose the molecular level. Biologists choose the cellular level.
Neuroscientists choose the neural or groups of neurons level.
Psychologists choose the behavior level. Social scientists choose
the population level. Diplomats and politicians choose the nation
level.

Although some fundamental laws may permeate from one level to another,
each level usually has its own specific theoretical and conceptual
constructions. Some of these conceptual constructions may be
explained ("reduced") in terms of lower levels, but some may not. In terms
of physics, entropy is usually seen at the particle level (or, at
most, the population of particles, as is the case in statistical
mechanics). But when you talk about life as a perennial process,
you're talking at a different level, where conventional definitions
of entropy won't apply. Life, seen from a "micro" level, is
indistinguishable from all other chemical reactions that happen
around us, and it always obeys the second law of thermodynamics.
One must be cautious to understand what is the current level of
analysis.

As an example, suppose that you hook up a gigantic computer system
to capture all the financial transactions of the planet. Each burger
bought at McDonalds is registered as a transaction. This computer
will register all "micropayments" of all the monetary transactions
between people. From that level of analysis, all you can do is to
observe eventual changes in the sum of the monetary values in
circulation in the planet (something like a "global inflation factor"),
and perhaps something like a "speed factor" accounting for the
velocity with which money is changing hands, but not much more.
Thus, at this level of analysis you would not be able to answer
such an interesting question as this one: "is money flowing from
the rich to the poor or the opposite?". In order to answer that,
you have to step up one level of analysis, where you will have to
create, define and consider new concepts, such as "rich", "poor",
"third-world country", etc. This is a different model than the one
of the micropayments (although it uses it as a starting point). Much
of the diversity found in today's scientific disciplines must be
seen as a result of these different levels of analysis.

*SG*


Stargazer
11/2/2004 2:21:21 PM
On Mon, 01 Nov 2004 22:17:17 -0500, Wolf Kirchmeir
<wwolfkir@sympatico.ca> in comp.ai.philosophy wrote:

>Lester Zick wrote:
>[...]
>> 
>> Unfortunately my newsreader is FreeAgent and doesn't support killfile
>> that I can tell. Apart from that it's a great reader. Besides, how
>> could I develop my comic appeal without David?
>> 
>> Regards - Lester
>
>Doesn't it have filtering?

My Netscape mail reader does have filtering, which unfortunately
doesn't include non-English criteria, and I have used it. However, my
newsreader doesn't seem to. But the price was right. Maybe their
pay-per-view version has that capability.

Regards - Lester
lesterDELzick
11/2/2004 2:42:28 PM
Lester Zick wrote:
> On Mon, 01 Nov 2004 22:17:17 -0500, Wolf Kirchmeir
> <wwolfkir@sympatico.ca> in comp.ai.philosophy wrote:
> 
> 
>>Lester Zick wrote:
>>[...]
>>
>>>Unfortunately my newsreader is FreeAgent and doesn't support killfile
>>>that I can tell. Apart from that it's a great reader. Besides, how
>>>could I develop my comic appeal without David?
>>>
>>>Regards - Lester
>>
>>Doesn't it have filtering?
> 
> 
> My Netscape mail reader does have filtering which unfortunately
> doesn't include non English criteria and I have used it. However, my
> newsreader doesn't seem to. But the price was right. Maybe their
> pay-per-view version has that capability.
> 
> Regards - Lester

Switch to Mozilla 1.7.3 - it's better than Netscape and it's always free. 
<http://www.mozilla.org/>

patty
patty
11/2/2004 2:45:54 PM
Stargazer wrote:

> patty wrote:
> 
>>[...]
> 
> 
> Now I understand what is troubling you: the frame of reference or
> (more appropriately) the conceptual level of analysis. Physicists
> usually choose the atomic or subatomic level of analysis. Chemists
> choose the molecular level. Biologists choose the cellular level.
> Neuroscientists choose the neural or groups of neurons level.
> Psychologists choose the behavior level. Social scientists choose
> the population level. Diplomats and politicians choose the nation
> level.
> 
> [...]
> 
> *SG*
> 
> 

I think we are on the same wavelength here :)   The conceptual 
definitions for things like life, intelligence, free will, etc are 
particularly vulnerable to choices of the frame of reference.

patty
patty
11/2/2004 3:58:19 PM
On Tue, 02 Nov 2004 14:45:54 GMT, patty <pattyNO@SPAMicyberspace.net>
in comp.ai.philosophy wrote:

>[snip]
>
>Switch to Mozilla 1.7.3 - it's better than Netscape and it's always free. 
><http://www.mozilla.org/>

Never heard of it. I use Netscape for email only. I use Forte FreeAgent
for Usenet.

Regards - Lester
lesterDELzick
11/2/2004 7:12:56 PM
In article <v8Ohd.46889$R05.27839@attbi_s53>, patty 
<pattyNO@SPAMicyberspace.net> writes
>
>I think we are on the same wavelength here :)   The conceptual 
>definitions for things like life, intelligence, free will, etc are 
>particularly vulnerable to choices of the frame of reference.
>
>patty

In other words, such talk is irrelevant and just bosh.

A variant might be hoping one is "on the same wavelength" with someone 
regarding whether chocolate vs. blue-berry muffins are "nice" - perhaps 
as a prelude to getting into each other's pants!

Why else would people such as yourself and starstruck utter such asinine 
rubbish? You're doing nothing different from guys who talk about "the 
game" or valley-gals who talk "whatever". All you're doing is using a 
different *medium*, and that's because you basically don't know how to 
talk *rationally* (scientifically).
-- 
David Longley
David
11/2/2004 7:24:32 PM
Lester Zick wrote:
> On Tue, 02 Nov 2004 14:45:54 GMT, patty <pattyNO@SPAMicyberspace.net>
> in comp.ai.philosophy wrote:
> 
> 
>>Lester Zick wrote:
>>
>>>On Mon, 01 Nov 2004 22:17:17 -0500, Wolf Kirchmeir
>>><wwolfkir@sympatico.ca> in comp.ai.philosophy wrote:
>>>
>>>
>>>
>>>>Lester Zick wrote:
>>>>[...]
>>>>
>>>>
>>>>>Unfortunately my newsreader is FreeAgent and doesn't support killfile
>>>>>that I can tell. Apart from that it's a great reader. Besides, how
>>>>>could I develop my comic appeal without David?
>>>>>
>>>>>Regards - Lester
>>>>
>>>>Doesn't it have filtering?
>>>
>>>
>>>My Netscape mail reader does have filtering which unfortunately
>>>doesn't include non English criteria and I have used it. However, my
>>>newsreader doesn't seem to. But the price was right. Maybe their
>>>pay-per-view version has that capability.
>>>
>>>Regards - Lester
>>
>>Switch to Mozilla 1.7.3 - it's better than Netscape and it's always free. 
>><http://www.mozilla.org/>
> 
> 
> Never heard of it. I use Netscape for email only. I use Forte FreeAgent
> for Usenet.
> 
> Regards - Lester

I'm surprised you haven't heard of it.  Mozilla is the code base that 
supports the Netscape product.  It is the open software project started 
by (and still, i think, primarily funded by) Netscape which is now owned 
by AOL.  The complete system gives you a browser, email, and a 
newsreader all in one integrated package.  Its biggest advantage for me 
over the other one is that you can eliminate pop-ups and it is not as 
vulnerable to viruses, worms and other pestilence.   Not to mention tab 
browsing which has become essential to the way i navigate the web.   And 
it's not just me, it has been making serious inroads against IE since 
mid year <http://technocrat.net/article.pl?sid=04/07/13/037250>.  And, 
for me, george is gone, it's a lot quieter in here without his racket. 
If i wanted to, i could get rid of you and Longley too.

Try it you'll like it :)
<http://www.mozilla.org/>

patty
patty
11/2/2004 7:56:44 PM
In article <+Xqwk5Kq+VhBFwhk@longley.demon.co.uk>, David Longley 
<David@longley.demon.co.uk> writes
>In article <FeqdnaaiD9dOqRjcRVn-hg@metrocastcablevision.com>, Bill 
>Modlin <modlin1@metrocast.net> writes
>>
>>"Wolf Kirchmeir" <wwolfkir@sympatico.ca> wrote in message
>>news:r39hd.6043$OD3.108863@news20.bellglobal.com...
>>
>>> I earlier said I was uneasy with the distinction between development
>>> and learning, since much development requires external inputs. On
>>> reflection, IMO we should distinguish between:
>>> a) NN changes controlled by external, fed back inputs (supervised
>>> learning, or operant conditioning);
>>> b) NN changes caused by external inputs (unsupervised learning, or
>>> classical conditioning);
>>> c) NN changes caused by changes in the NN itself (??? development ???)
>>
>>Excellent.  And "development" seems as good a word as any.  If we
>>use the word "learning", it would seem to apply to the first two,
>>the ones dependent on inputs, and not to development except perhaps
>>by a poetic stretch.
>>
>>Of course, in practice these 3 kinds of changes are often all going
>>on at once in the same networks and even in the same cells, and they
>>may share mechanisms in common.   So it can be difficult or perhaps
>>even pointless to try to say just which is responsible for some
>>changes... generally they all contribute in their subtly different
>>ways to the behavior of an organism.   Even "input" is fuzzy... is
>>locally generated random thermal noise an "external input"?  Is a
>>signal from spontaneous firing of a cell an "input"?   It depends on
>>one's purpose, there is no "correct" answer.
>>
>>Nevertheless, I find that the distinctions are worthwhile for the
>>guidance they provide for constructing systems that change in
>>ultimately useful ways.   If we are to have operant conditioning, we
>>have to include special mechanisms to induce changes in response to
>>particular inputs treated as reward or feedback, distinct from those
>>other inputs which are causally connected to the outputs from which
>>the feedback is derived.  If we are to have classical conditioning
>>or other forms of unsupervised self-organization, we must have
>>mechanisms such that changes are induced by the processing of any
>>inputs to produce results, not requiring a separate privileged
>>class of inputs designated as feedback or reward.  If we are to have
>>development independent of inputs, we must have mechanisms which
>>change the system over time in accordance with some predetermined
>>plan.
>>
>>Again, this posting is good news.  Excellent.
>>
>>Now perhaps we can talk about a few more details of how these things
>>might be implemented?
>>
>>Bill
>>
>
>Not to pre-empt Wolf, but let me remind you of the section on ANNs in 
>"Fragments" <http://www.longley.demon.co.uk/Frag.htm>. What was that 
>all about? What's *wrong* with ANNs? What's wrong with our folk 
>psychology? What's wrong with people?
>
>Why are people prejudiced? What's wrong, for instance, with concluding 
>from the -2SD mean IQ of sub-sahara Africa relative to the UK mean, or 
>the -1SD mean IQ of African undergraduates, or USA Afro-Carribeans that 
>blacks are less intelligent than whites and whites are less intelligent 
>than yellows (East Asians)? Is anything missing? If so, can you tell us?
>

You notably haven't responded to the second point. There's a profoundly 
critical point within it which you shouldn't worry too much about 
having some difficulty with grasping (though that doesn't mean I will 
tell you). As it is, I suspect you (and many others here) haven't a clue 
what I'm talking about. That's because it's representative of something 
worthy of serious, critical self-analysis, and it *is* something I have 
covered before.

In your reply, you assert that I generalise to the point of being 
irrational. That's a very bold assertion, but as stated it's *only* a 
bold assertion (i.e. nefarious rhetoric in my book). As far as I know, 
all that I have said is sound, and to date not refuted (and I don't 
refer to folk in c.a.p, although I would take what Glen, Wolf and 
sometimes even you <g> say, seriously). I just try *not* to keep repeating 
the evidence, as I've already done so ad nauseam, to no avail, not to 
mention tedious criticism. That's very much my point, of course. The 
evidence often doesn't matter because most people behave like idiots and 
don't pick up on it. This is something that many, sadly, are only too 
happy to count on (as I'm sure most folk reading this and thinking about 
their *current* concerns will appreciate).

I think you should state your evidence or make an effort to listen more 
carefully. Assuming the latter, what was wrong with your post? Let's 
see if you can set an example for less "enlightened" others <g>.
-- 
David Longley
David
11/2/2004 10:17:00 PM
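
An aside on the taxonomy quoted above, since the distinctions blur
easily in prose: the three kinds of NN change can be told apart by what
each update rule is allowed to see.  The sketch below is a toy of my
own devising (a single linear unit; none of the names come from the
thread), meant only to show that (a) operant/supervised change needs a
privileged feedback signal, (b) classical/unsupervised change needs
only the input stream, and (c) development needs neither.

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 0.1, size=4)   # weights of one linear unit

    def hebbian_step(w, x, lr=0.01):
        # (b) unsupervised / classical: driven purely by the correlation
        # between the input x and the unit's own output y
        y = w @ x
        return w + lr * y * x

    def operant_step(w, x, reward, lr=0.01):
        # (a) supervised / operant: the same correlational change, but
        # gated by a separate, privileged reward input
        y = w @ x
        return w + lr * reward * y * x

    def development_step(w, t, decay=0.001):
        # (c) development: change on a predetermined schedule,
        # independent of any input at all
        return w * (1.0 - decay * t)

    x = rng.normal(size=4)
    w = hebbian_step(w, x)
    w = operant_step(w, x, reward=1.0)
    w = development_step(w, t=1)

As Bill notes above, real tissue runs all three at once in the same
cells; the separation here is purely bookkeeping.
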
On Tue, 02 Nov 2004 19:56:44 GMT, patty <pattyNO@SPAMicyberspace.net>
in comp.ai.philosophy wrote:

>[...]
>
>I'm surprised you haven't heard of it.  Mozilla is the code base that 
>supports the Netscape product.  It is the open-source project started 
>by (and still, i think, primarily funded by) Netscape, which is now owned 
>by AOL.  The complete system gives you a browser, email, and a 
>newsreader all in one integrated package.  Its biggest advantage for me 
>over the other one is that you can eliminate pop-ups and it is not as 
>vulnerable to viruses, worms and other pestilence.   Not to mention tabbed 
>browsing, which has become essential to the way i navigate the web.   And 
>it's not just me, it has been making serious inroads against IE since 
>mid-year <http://technocrat.net/article.pl?sid=04/07/13/037250>.  And, 
>for me, george is gone, it's a lot quieter in here without his racket. 
>If i wanted to, i could get rid of you and Longley too.
>
>Try it you'll like it :)
><http://www.mozilla.org/>

Well, I would like it if George would vanish. David I can live with.
The problem with the kind of software you're talking about (if I get
the picture) is that it works with non-resident code it has to import
from the web. I use Netscape 2, which is resident code and comes up and
operates much faster. My FreeAgent is also resident code. I use just
the public telephone lines, so resident/non-resident software execution
speed is a significant issue for me. So I'll probably stick with the
quo status.

Regards - Lester
lesterDELzick
11/2/2004 10:55:48 PM
Lester Zick wrote:
[...]
>>Switch to Mozilla 1.7.3, it's better than Netscape and it's always free. 
>><http://www.mozilla.org/>
> 
> 
> Never heard of it. I use Netscape for email only. I use Forte FreeAgent
> for Usenet.
> 
> Regards - Lester

It's better than IE, IMO, and unlike IE it doesn't do dumb things like 
open attachments without your say-so. Also, it doesn't suffer from the 
security holes of IE. That alone makes it worth using. Maximum PC 
Magazine rated it above IE, BTW, which should tell you something, since 
generally MaxPc prefers Windows over other OSs.

There are two other Mozilla products:
a) Firefox, which is the browser in Mozilla, with no e-mail or usenet 
capability. I use it almost exclusively for web-surfing (there are a few 
sites that work only with IE, guess why). Only downside: you can't contact 
a website owner via a clickable email link - for that you need Mozilla.
b) Thunderbird: e-mail and Usenet only, no websites. Not as good as 
PMMail for e-mail, but OK. Good for Usenet.

All Mozilla products are free, but you are invited to contribute cash, 
or buy related merchandise to help defray costs.
Wolf
11/3/2004 1:11:11 AM
Lester Zick wrote:
[...]
> 
> Well, I would like it if George would vanish. David I can live with.
> The problem with the kind of software you're talking about (if I get
> the picture) is that it works with non resident code it has to import
> from the web. I use Netscape 2 that is resident code and comes up and
> operates much faster. My FreeAgent is also resident code. I use just
> the public telephone lines so resident/nonresident software execution
> speed is a significant issue for me. So I'll probably stick with the
> quo status.
> 
> Regards - Lester

Oh fergawdssake, just go visit the site and see what it offers. If you 
don't want to spend the time to download the program, just order the CD. 
It's $5 last time I looked.

Wolf
11/3/2004 1:17:58 AM
Lester Zick wrote:
> On Tue, 02 Nov 2004 19:56:44 GMT, patty <pattyNO@SPAMicyberspace.net>
> in comp.ai.philosophy wrote:
> 
>>[...]
> 
> Well, I would like it if George would vanish. David I can live with.
> The problem with the kind of software you're talking about (if I get
> the picture) is that it works with non-resident code it has to import
> from the web. I use Netscape 2, which is resident code and comes up and
> operates much faster. My FreeAgent is also resident code. I use just
> the public telephone lines, so resident/non-resident software execution
> speed is a significant issue for me. So I'll probably stick with the
> quo status.
> 
> Regards - Lester

All of Mozilla's code is "resident"; you can operate it without even 
being on the Internet.  Are you afraid to download and install it?

Newsgroups trimmed.

patty
patty
11/3/2004 8:52:42 AM
David Longley wrote:
> In article <+Xqwk5Kq+VhBFwhk@longley.demon.co.uk>, David Longley 
> <David@longley.demon.co.uk> writes

< snipped an exchange with Wolf as only marginally related to these
remarks David interjected earlier, or to his current questions: >

>> Not to pre-empt Wolf, but let me remind you of the section on ANNs in 
>> "Fragments" <http://www.longley.demon.co.uk/Frag.htm>. What was that 
>> all about? What's *wrong* with ANNs? What's wrong with our folk 
>> psychology? What's wrong with people?
>>
>> Why are people prejudiced? What's wrong, for instance, with concluding 
>> from the -2SD mean IQ of sub-sahara Africa relative to the UK mean, or 
>> the -1SD mean IQ of African undergraduates, or USA Afro-Carribeans 
>> that blacks are less intelligent than whites and whites are less 
>> intelligent than yellows (East Asians)? Is anything missing? If so, 
>> can you tell us?

Since David did not include my previous reply I'll paste it here:

---- posted previously ----

David holds that human thinking, that what passes for
"reasoning" in naive human experience, is so terribly
flawed, biased, and generally unscientific that there can
be no value whatever in reproducing it with a machine.

To David, the only imaginable purpose for an AI is to
supplant error-riddled human intensional heuristics with
rational extensional methods.  To him, AI *is* the
advancement of extensional science.

Which leads to conflict with those of us who notice that
despite obvious inadequacies of our biological intelligence
in matters of rational estimation and deductive logic, there
are still many things people manage to do quite well, things
people accomplish much more effectively than any machine
we've yet been able to design.

Those of us who find value in natural intelligence would
like nothing better than to emulate it with a machine.  To
us, AI is the search for ways to do that, ways to get a
machine to do the things we find so trivially easy.   Like
shopping for groceries, or cleaning a house, or following
ambiguous instructions.  Without becoming immediately bogged
down in an exponential explosion of combinatorial
possibilities to be evaluated.  Without tripping over a
frame problem.

The answer to David's first question, "what's wrong with
ANNs", is that their operation is somewhat analogous to the
operation of our own brains.  Oversimplified, incomplete,
but still recognizably similar in some of the ways they
solve problems.

And since our brains clearly use those terrible intensional
heuristics, as David has painstakingly documented for all
the world to see, this makes ANNs irrational, and makes
anyone who takes them seriously a threat to the progress of
humanity.  It even makes them despicable and evil, if they
seem to understand the problem and yet knowingly persist in
talking about such nasty things.

Of course, to the rest of us the resemblance of ANN
heuristics to some human methods is what makes them worth
talking about.

David simply is unable to understand that many problems
cannot be addressed without heuristics.  He includes quotes
in Fragments that make this point quite well, but manages to
slide past them undaunted, his grail of replacing human
thought with pristine rationality miraculously intact.

The sad part is that David is right about the value of
replacing intuitive judgment with objective measures and
rational statistical analysis in places where the data is
available to support such an effort.  He's got a legitimate
point, which probably deserves more attention than it gets
in the flux of politically biased funding schemes.  If he'd
stick to that, he might actually do some good.  But instead
he falls prey to the biases he decries and overgeneralizes
to the point of obvious irrationality... while steadfastly
resisting any appeal to intuitive notions like common sense
to recognize his error.

Bill Modlin

----- end of pasted material ----

> You notably haven't responded to the second point. There's a profoundly 
> critical point within it which you shouldn't worry too much about 
> having some difficulty with grasping (though that doesn't mean I will 
> tell you). As it is, I suspect you (and many others here) haven't a clue 
> what I'm talking about. That's because it's representative of something 
> worthy of serious, critical self-analysis, and it *is* something I have 
> covered before.

You are right, I haven't a clue as to why you feel IQ scores are
relevant here.  Since you won't tell me and said not to worry, I won't.

Personally I feel that enough cultural bias is inherent in such tests to
invalidate comparison across cultural groups.  But even if a
"culture-free" IQ test were possible, IQ is only a weak correlate of
the complex of laudatory attributes we call "intelligence", so the
judgmental statements would still be inappropriate.

> In your reply, you assert that I generalise to the point of being 
> irrational. That's a very bold assertion, but as stated it's *only* a 
> bold assertion (i.e. nefarious rhetoric in my book). As far as I know, 
> all that I have said is sound, and to date not refuted (and I don't 
> refer to folk in c.a.p, although I would take what Glen, Wolf and 
> sometimes even you <g> say, seriously). I just try *not* to keep repeating 
> the evidence, as I've already done so ad nauseam, to no avail, not to 
> mention tedious criticism. That's very much my point, of course. The 
> evidence often doesn't matter because most people behave like idiots and 
> don't pick up on it. This is something that many, sadly, are only too 
> happy to count on (as I'm sure most folk reading this and thinking about 
> their *current* concerns will appreciate).
> 
> I think you should state your evidence or make an effort to listen more 
> carefully. Assuming the latter, what was wrong with your post? Let's 
> see if you can set an example for less "enlightened" others <g>.

You have presented evidence that human judgment is based on heuristics
which lead to suboptimal results in cases where there is sufficient data
to apply more accurate procedures for comparison.  You have also
presented evidence that the biases inherent in these heuristics are
difficult to overcome by education and training, that training in better
methods of reasoning often does not generalize well.

You argue that we should therefore prefer to collect data and apply
well-founded statistical and logical procedures rather than rely on
expert clinical judgment, as that judgment is likely to be biased and
suboptimal.  This is a reasonable position.

However, you leap from the reasonable assertion that there are more
accurate procedures which should be used when possible to a conviction
that it is always possible.  You assume that these procedures, which you
label "rational", "scientific", and "extensional", are sufficient in
themselves to guide all decision processes, that they can replace naive
intensional heuristic reasoning for all purposes.

Which isn't true. The type of reasoning you recommend depends on
repeatable observations of recurring events, and those do not exist
until discovered by the intensional heuristics of perception.

You focus on what happens after a scientist defines an experimental 
protocol, with some objective specification of measurements or 
observations to be made.  At that point we can collect data and look for 
useful relationships among our variables, and at that point it is true 
that there are potentially better ways to analyze the information than 
naive clinical judgment.

I'm looking at how we get to that point, how we can decide in the first 
place that some particular set of observations out of all the 
combinatorial explosion of possibilities is indeed worth investigating.

It may be hard for you to see this as a difficult problem, since you 
look at a situation through human eyes, and it is "obvious" that there 
are only a few potentially relevant variables involved.  There are 
skills needed to do a good job of experimental design to be sure you are 
measuring what you intended to measure, but you don't see even a 
fraction of the vast number of potentially measurable aspects of any 
given situation.

You don't have to do any sort of analysis to decide that it is probably 
not worth recording the variations in the relative positions of each of 
the hairs on a rat's skin as he moves around a Skinner box, or noting 
the moment-to-moment minute fluctuations in the air pressure and 
temperature a millimeter away from an arbitrary point on its 
hindquarters.  You don't consider such possibilities and reject them, 
they simply never arise for consideration in the first place.

This is a good thing.  It also turns out to be hard to explain in terms 
of purely logical extensional processes, so hard that it is essentially 
impossible.  This is a facet of what is sometimes known as the "frame 
problem", the problem of getting a logical device to focus on the 
relevant aspects of a situation without first having to sort through an 
infinite regress of irrelevant possibilities.

You solve it effortlessly, without even thinking about it, as part of 
your innate perceptual processing.  As you examine any situation you 
project on it an intensional structure highlighting the probable major 
causal foci and their rough relationships... this pushes on that, and 
may cause that other thing to topple...

You can do this because you use those intensional heuristics your own 
evidence shows to dominate human reasoning.  You can do this better than 
any system using extensional logic because intensional heuristics are 
more appropriate than extensional logic for managing the sort of 
information flow available to us through our senses.

We reason using intensional heuristics because they work better than 
extensional analysis, in real-life situations characterized by too many 
variables and too little information to support extensional analysis.

These heuristics don't give us the accuracy achievable with extensional 
procedures under ideal conditions.  You focus on that aspect of the 
comparison and argue that we should strive to suppress the use of 
heuristics, change our way of thinking and speaking to conform to an 
extensional ideal.

But your argument is fundamentally misguided, as the two are not 
interchangeable.  Extensional analysis is impotent for guiding decisions 
in the vast majority of situations encountered in the real world.  An 
organism cannot survive using extensional reasoning alone: heuristic 
intensionality is required to deal with the frame problem and reduce the 
dimensionality of problems sufficiently to bring them within the scope 
of extensional analysis.  Extensional analysis is icing on the cake, we 
can survive without it.  We can't survive without heuristics.

As with humans, so with our constructs.  An AI must be constructed first 
and foremost to employ intensional heuristics successfully.  Once we get 
that working, we can add in extensional reasoning as an occasionally 
useful overlay.

Bill Modlin
Bill
11/5/2004 4:22:59 AM
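
A toy rendering of the dimensionality argument in Bill's post above
(the setup and the numbers are mine, not his): with 10,000 candidate
variables an exhaustive "extensional" search would face 2^10000
subsets, yet a crude salience heuristic cuts the problem down to ten
variables that ordinary least squares then handles trivially.

    import numpy as np

    rng = np.random.default_rng(1)
    n_samples, n_vars = 100, 10_000

    X = rng.normal(size=(n_samples, n_vars))
    # only two of the 10,000 candidate variables actually matter
    y = 3.0 * X[:, 7] - 2.0 * X[:, 42] + rng.normal(size=n_samples)

    # heuristic attention: keep the k variables most correlated with y,
    # without ever enumerating subsets
    k = 10
    scores = np.abs(X.T @ y)
    keep = np.argsort(scores)[-k:]

    # extensional analysis on the survivors is now easy
    coef, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
    print(sorted(zip(keep.tolist(), np.round(coef, 2))))

The heuristic is fallible in exactly the way Bill concedes - it can
miss a relevant variable - but without some such pre-selection the
"more accurate procedures" never receive a problem small enough to run.
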
Bill Modlin wrote:
> [big snip]
>  An organism cannot survive using extensional reasoning alone:
> heuristic intensionality is required to deal with the frame problem
> and reduce the dimensionality of problems sufficiently to bring them
> within the scope of extensional analysis.  Extensional analysis is
> icing on the cake, we can survive without it.  We can't survive
> without heuristics.

Your whole post could have just said the above, and that would be
enough to make your point. Congratulations.

*SG*


Stargazer
11/5/2004 11:51:52 AM
> <David@longley.demon.co.uk> writes
> >
> > Why are people prejudiced? What's wrong, for instance, with
> > concluding from the -2SD mean IQ of sub-sahara Africa relative to
> > the UK mean, or the -1SD mean IQ of African undergraduates, or
> > USA Afro-Carribeans that blacks are less intelligent than whites
> > and whites are less intelligent than yellows (East Asians)? Is
> > anything missing? If so, can you tell us?

Stop being so idiotic and do yourself a favor: grab a copy of
Jared Diamond's "Guns, Germs and Steel" and learn why what
you say is just fermented old crap.

*SG*


Stargazer
11/5/2004 11:55:35 AM
Stargazer wrote:
> Bill Modlin wrote:
> 
>>[big snip]
>> An organism cannot survive using extensional reasoning alone:
>>heuristic intensionality is required to deal with the frame problem
>>and reduce the dimensionality of problems sufficiently to bring them
>>within the scope of extensional analysis.  Extensional analysis is
>>icing on the cake, we can survive without it.  We can't survive
>>without heuristics.
> 
> 
> Your whole post could have just said the above, and that would be
> enough to make your point. Congratulations.
> 
> *SG*


IMO this is a specious distinction, or else shows a misunderstanding of 
extensionality, or else uses extension/intension in an unorthodox way. I 
can't tell which, since I'm not privy to Bill's intensions. To the 
extent that heuristics work, they are extensional. The intensional 
component is relevant only to the degree to which it affects extension 
(which may be great.) See Quine on the translation problem, which IMO 
crystallises the extension - intension issue. Quine shows that a) two 
people with a different intension for a term may have the same 
extension; and b) that common extension is the necessary condition for 
communication. The same principle applies to heuristics: the intensional 
component can be utterly different for two systems, yet so long as they 
have the same extension, they will be equally effective. At the level of 
NNs, IMO it's an irrelevant issue, since the only test that makes sense 
is efficacy: presumably (if we're still on topic for this thread) we 
want to design the NN so it performs "useful functions". In that case, 
extensions rule. Its intension(s), if any, don't matter.

Footnote:
Quine's result has implications for the communicating-with-an-alien 
problem, which took up a lot of unnecessary space on this forum not so 
long ago. IMO this one reduces to communicating-with-other-species, a 
problem that every person who interacts with domestic animals has solved 
more or less successfully. Of course, there will be people who insist that 
unless you have the same intension as your interlocutor, you haven't 
"really communicated." Such people have a touching trust in the ability 
of humans to guess correctly at each other's intensions, a trust that 15 
minutes in a literature class discussing a poem should dispel forever.

If BM merely means "internal" vs "external", the distinction is fuzzy, 
to put it mildly. But IMO BM has a fuzzy notion of internal - external 
anyhow, since he ignores levels: analysis at the level of a single 
(minimal) NN automatically makes all other NNs connected to it external 
to it. That's a distinction that matters IMO, since it affects the 
extension of "external signal" and "quality of external signal", both of 
which are necessary concepts, I think (allowing for the ambiguity of 
"quality", which should be resolvable.)
Wolf
11/5/2004 2:48:09 PM
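
Wolf's point about extension is easy to exhibit in code (the example is
mine, not his): two procedures with utterly different intensions -
different internal structure - but identical extensions, i.e. the same
input-output mapping over their whole domain.

    def is_even_arith(n: int) -> bool:
        # intension: reason about divisibility by 2
        return n % 2 == 0

    def is_even_bits(n: int) -> bool:
        # intension: inspect the lowest bit of the binary form
        return (n & 1) == 0

    # any extensional test - any test of efficacy - treats them alike
    assert all(is_even_arith(n) == is_even_bits(n)
               for n in range(-1000, 1000))

By any behavioural test the two are the same function; only an
inspection of their insides tells them apart.  That is the sense in
which, for a NN judged on the useful functions it performs, extensions
rule and intensions don't matter.
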
In article <16idncPwtouMnRbcRVn-vQ@metrocastcablevision.com>, Bill 
Modlin <wdmalias-cap@yahoo.com> writes
>David Longley wrote:
>> In article <+Xqwk5Kq+VhBFwhk@longley.demon.co.uk>, David Longley 
>><David@longley.demon.co.uk> writes
>
>< snipped an exchange with Wolf as only marginally related to these
>remarks David interjected earlier, or to his current questions: >
>
>>> Not to pre-empt Wolf, but let me remind you of the section on ANNs 
>>>in "Fragments" <http://www.longley.demon.co.uk/Frag.htm>. What was 
>>>that  all about? What's *wrong* with ANNs? What's wrong with our folk 
>>>psychology? What's wrong with people?
>>>
>>> Why are people prejudiced? What's wrong, for instance, with 
>>>concluding from the -2SD mean IQ of sub-sahara Africa relative to the 
>>>UK mean, or  the -1SD mean IQ of African undergraduates, or USA 
>>>Afro-Carribeans  that blacks are less intelligent than whites and 
>>>whites are less  intelligent than yellows (East Asians)? Is anything 
>>>missing? If so,  can you tell us?
>
>Since David did not include my previous reply I'll paste it here:
>
>[...]
>
>> You notably haven't responded to the second point. There's a 
>>profoundly critical point within it which you shouldn't worry too 
>>much about having some difficulty with grasping (though that doesn't 
>>mean I will tell you). As it is, I suspect you (and many others here) 
>>haven't a clue what I'm talking about. That's because it's 
>>representative of something worthy of serious, critical 
>>self-analysis, and it *is* something I have covered before.
>
>You are right, I haven't a clue as to why you feel IQ scores are
>relevant here.  Since you won't tell me and said not to worry, I won't.
>
I've given you enough hints. Why don't you think about what might happen 
if you pointed one of your imaginary "neural nets" at the issue? You seem 
to only take my advice when it suits you, and that, I'm afraid, is what 
makes you an idiot!

>Personally I feel that enough cultural bias is inherent in such tests to
>invalidate comparison across cultural groups.

You know nothing about this, you're just making up what you like to suit 
your ignorant preconceptions. What are the "cultural biases"? How has 
such talk been repudiated? Do you realise that when you tell us what you 
think, you're likely as not just showing us what you don't know and that 
you won't be corrected? That's why *I* get so angry with you. There's no 
humility or awareness that you may not know, just naive presumption.

>  But even if a
>"culture-free" IQ test were possible, IQ is only a weak correlate of
>the complex of laudatory attributes we call "intelligence", so the
>judgmental statements would still be inappropriate.
>
>> In your reply, you assert that I generalise to the point of being 
>>irrational. That's a very bold assertion, but as stated it's *only* a 
>>bold assertion (i.e. nefarious rhetoric in my book). As far as I know, 
>>all that I have said is sound, and to date not refuted (and I don't 
>>refer to folk in c.a.p, although I would take what Glen, Wolf and 
>>sometimes even you <g> say, seriously). I just try *not* to keep repeating 
>>the evidence, as I've already done so ad nauseam, to no avail, not to 
>>mention tedious criticism. That's very much my point, of course. The 
>>evidence often doesn't matter because most people behave like idiots 
>>and don't pick up on it. This is something that many, sadly, are only 
>>too happy to count on (as I'm sure most folk reading this and 
>>thinking about their *current* concerns will appreciate).
>>  I think you should state your evidence or make an effort to listen 
>>more carefully. Assuming the latter, what was wrong with your post? 
>>Let's see if you can set an example for less "enlightened" others <g>.
>
>You have presented evidence that human judgment is based on heuristics
>which lead to suboptimal results in cases where there is sufficient data
>to apply more accurate procedures for comparison.  You have also
>presented evidence that the biases inherent in these heuristics are
>difficult to overcome by education and training, that training in better
>methods of reasoning often does not generalize well.
>
>You argue that we should therefore prefer to collect data and apply
>well-founded statistical and logical procedures rather than rely on
>expert clinical judgment, as that judgment is likely to be biased and
>suboptimal.  This is a reasonable position.
>
But you haven't understood it; your paraphrasing is a translation which 
loses everything that's important. It isn't a matter of anything being 
"sub-optimal"! If it were, we could just say that more evidence is better 
than less evidence, and that our "onboard computer's" memory isn't big 
enough. That isn't the issue - it's more profound than that. Intension 
does not determine extension. You need to learn the language of science 
here, and that science is behaviour analysis.

>However, you leap from the reasonable assertion that there are more
>accurate procedures which should be used when possible to a conviction
>that it is always possible.  You assume that these procedures, which you
>label "rational", "scientific", and "extensional", are sufficient in
>themselves to guide all decision processes, that they can replace naive
>intensional heuristic reasoning for all purposes.

There's no leaping except in your imagination. How many times have you 
been told to look into the nature of intensional opacity, the "double 
standard" and anomalous monism? What is the problem I alluded to with 
the IQ issue? What is the *subtle* point I was making which makes all 
your small minded talk about unsupervised learning such absolute bosh? 
You need to think far more deeply about this and stop attributing 
trivial stuff to *me*.

>
>Which isn't true. The type of reasoning you recommend depends on
>repeatable observations of recurring events, and those do not exist
>until discovered by the intensional heuristics of perception.

They aren't "types" of reasoning. This is about what we *call* 
reasoning. You have no grasp of the scope of this analysis. You're not 
choosing between a trowel and a shovel!

>
>You focus on what happens after a scientist defines an experimental 
>protocol, with some objective specification of measurements or 
>observations to be made.  At that point we can collect data and look 
>for useful relationships among our variables, and at that point it is 
>true that there are potentially better ways to analyze the information 
>than naive clinical judgment.

No, this is just your daft rendering of what's been explicated. You are 
still peddling your own unquestioned agenda without looking into the 
reasons why I have said it is sterile. When you finally start to 
question your own assumptions you may start to see what you are being 
told.
>
>I'm looking at how we get to that point, how we can decide in the first 
>place that some particular set of observations out of all the 
>combinatorial explosion of possibilities is indeed worth investigating.
>
>It may be hard for you to see this as a difficult problem, since you 
>look at a situation through human eyes, and it is "obvious" that there 
>are only a few potentially relevant variables involved.  There are 
>skills needed to do a good job of experimental design to be sure you 
>are measuring what you intended to measure, but you don't see even a 
>fraction of the vast number of potentially measurable aspects of any 
>given situation.
>

How would you know? What research have you ever done in these areas?

>You don't have to do any sort of analysis to decide that it is probably 
>not worth recording the variations in the relative positions of each of 
>the hairs on a rat's skin as he moves around a skinner box, or noting 
>the moment-to-moment minute fluctuations in the air pressure and 
>temperature a millimeter away from an arbitrary point on its hindquarters.

How do you know? What do you know about rats' whiskers?

> You don't consider such possibilities and reject them, they simply 
>never arise for consideration in the first place.

Wrong! Read some of the literature!
>
>This is a good thing.  It also turns out to be hard to explain in terms 
>of purely logical extensional processes, so hard that it is essentially 
>impossible.  This is a facet of what is sometimes known as the "frame 
>problem", the problem of getting a logical device to focus on the 
>relevant aspects of a situation without first having to sort through an 
>infinite regress of irrelevant possibilities.
>

More cognitivist nonsense. We have been down this route before viz. that 
quaint take which people in "AI" put on "credit assignment". That's so 
stupid that Glen and I have both said that it just shows how idiotic 
those in "AI" really are. They're idiotic because of their ignorance of 
the fact that "credit assignment" *is* what operant conditioning is 
largely all about! Like you, these people have completely the wrong take 
on "knowledge" and I have gone to great lengths to show how and why 
they're still operating as pre 1929 empiricists if not Kantians. Stop 
thinking you have *any* of this right, you don't.

>You solve it effortlessly, without even thinking about it, as part of 
>your innate perceptual processing.  As you examine any situation you 
>project on it an intensional structure highlighting the probable major 
>causal foci and their rough relationships... this pushes on that, and 
>may cause that other thing to topple...

There you go again. All those hidden little assumptions doing all the 
work for you. It's an appeal to essentialism and ultimately vitalism. 
Everything that science is not, everything that science repudiates, and 
guess what, people like you make it your core assumption. Have you ever 
wondered whether you are getting nowhere because you're looking in the 
wrong direction, begging all the questions?

>
>You can do this because you use those intensional heuristics your own 
>evidence shows to dominate human reasoning.  You can do this better 
>than any system using extensional logic because intensional heuristics 
>are more appropriate than extensional logic for managing the sort of 
>information flow available to us through our senses.
>
>We reason using intensional heuristics because they work better than 
>extensional analysis, in real-life situations characterized by too many 
>variables and too little information to support extensional analysis.

How do you know any of this? Over the past 30,000 years we have advanced 
not because of what's inside our heads but what we've managed to put our 
heads inside of. Strange though that may seem to some, that is not our 
rectums!

>
>These heuristics don't give us the accuracy achievable with extensional 
>procedures under ideal conditions.  You focus on that aspect of the 
>comparison and argue that we should strive to suppress the use of 
>heuristics, change our way of thinking and speaking to conform to an 
>extensional ideal.
>
No, what I do is use that research to illustrate something far more 
profound. You're just so arrogant that you haven't seen it.

>But your argument is fundamentally misguided, as the two are not 
>interchangeable.  Extensional analysis is impotent for guiding 
>decisions in the vast majority of situations encountered in the real 
>world.

Look, this is not a neo-Kantian rehash that I am pointing out (do you 
even know what that famous precept was about concepts? Do you understand 
how it pertained to the analytic and synthetic? the a priori and a 
posteriori? Do you understand what his "Copernican revolution" was and 
how it was supposed to sort out the Humean predicament? Do you not know 
why he was *wrong* and why I keep telling people to read "Two Dogmas of 
Empiricism"? Do you know why I keep saying that cognitivists like you 
don't know what happened back in 1928 and are just peddling stuff well 
past its sell-by date? Idiots here think it's radical behaviourism 
which is passe! They have no idea how wrong they are, and most behaviour 
analysts don't bother enlightening them, they just treat them!!

> An organism cannot survive using extensional reasoning alone: 
>heuristic intensionality is required to deal with the frame problem and 
>reduce the dimensionality of problems sufficiently to bring them within 
>the scope of extensional analysis.  Extensional analysis is icing on 
>the cake, we can survive without it.  We can't survive without heuristics.

You're writing ignorant nonsense, substituting your own uses for these 
terms and missing the point as a consequence.

>
>As with humans, so with our constructs.  An AI must be constructed 
>first and foremost to employ intensional heuristics successfully.  Once 
>we get that working, we can add in extensional reasoning as an 
>occasionally useful overlay.
>
>Bill Modlin

The above represents a Modlinesque *translation* of what I have said in 
"Fragments" and elsewhere. This is what you persistently do. You 
substitute your own misunderstanding of what you read and then argue 
with me about it. This is the classic straw man argument, except I've 
told you how you go about doing it without realising - ie I've 
highlighted an example of intensional opacity. What you do above, as 
usual, is just repeat your naive prejudices about what I have said, and 
because you haven't critically questioned whether you have got what I 
have been saying right, you just repeat the same old mistakes. It's 
tiresome and it's why I say it's a waste of time trying to tell you 
anything - you just argue!

This really is hopeless as I've said many times before. It is not a 
matter of choosing between two "methods", that's where you, Patty and 
many others get it all so fundamentally wrong. Intensional and 
extensional are not some alternative labels for something else familiar 
like subjective and objective, they're terms which explicate something 
important about our linguistic behaviour and what that, like our other 
behaviours, is all too prone to. This ultimately comes down to operant 
discrimination, and you've been pointed, many times, to key pieces of 
research such as the Lashley-Wade hypothesis to give you some idea how 
and why this is so fundamental. Instead of looking into that carefully 
and seeing the problem I have highlighted (it's not in your head or 
computer) you just translate extensional and intensional into a couple 
of familiar psychological constructs and go wandering down the yellow 
brick road!

You've not grasped how and why this is such an important point, and 
until you do you won't learn anything useful, or understand the scope of 
what I have been saying. If you'd looked carefully into what I have 
cited in "Fragments" and other material which supplements it, you'd have 
realised that what I've referred to as clinical judgement *is* the same 
as actuarial judgement, it's just done "in the head", ie less rigorously. 
You need to come to grips with the nature of the 
"double standard" and what it means to be the value of a variable.

-- 
David Longley
David
11/5/2004 2:49:09 PM
Stargazer wrote:
>><David@longley.demon.co.uk> writes
>>
>>>Why are people prejudiced? What's wrong, for instance, with
>>>concluding from the -2SD mean IQ of sub-sahara Africa relative to
>>>the UK mean, or the -1SD mean IQ of African undergraduates, or
>>>USA Afro-Carribeans that blacks are less intelligent than whites
>>>and whites are less intelligent than yellows (East Asians)? Is
>>>anything missing? If so, can you tell us?
> 
> 
> Stop being so idiotic and do yourself a favor: grab a copy of
> Jared Diamond's "Guns, Germs and Steel" and learn why what
> you say is just fermented old crap.
> 
> *SG*
> 
> 

You've misunderstood DL's point. He's questioning the prejudiced 
conclusions. He's pointing to why they are wrong: they omit the 
environment within which the systems (people) operate. Diamond's book is 
an extended (and thoroughly engaging) gloss on this principle.

The reason the whole IQ testing enterprise is misguided (and hence very 
nasty in its consequences) is that it is not behaviorist: it pays no 
attention whatever to the role the environment plays in developing 
"intelligence", and hence in the role it plays in the testee's responses 
to the test questions. Etc etc etc. The assumption that intelligence is 
a property of the system alone, without relation to its environment, is 
what DL is criticising.

The behaviorist stance may be paraphrased thus: If you take a fish out 
of the water, it can't behave like a fish.
Wolf
11/5/2004 2:57:15 PM
In article <10omqapmfsbe71e@news20.forteinc.com>, Stargazer 
<fuckoff@spammers.com> writes
>Bill Modlin wrote:
>> [big snip]
>>  An organism cannot survive using extensional reasoning alone:
>> heuristic intensionality is required to deal with the frame problem
>> and reduce the dimensionality of problems sufficiently to bring them
>> within the scope of extensional analysis.  Extensional analysis is
>> icing on the cake, we can survive without it.  We can't survive
>> without heuristics.
>
>Your whole post could have just said the above, and that would be
>enough to make your point. Congratulations.
>
>*SG*
>
You idiot - we also kill, cheat, maim and act like apes (and you plus 
the other obnoxiously ignorant idiots here and elsewhere) because of 
"heuristics". What's that got to do with "intelligence"? Do you not see 
the problem? How ya gonna decide eh?
-- 
David Longley
http://www.longley.demon.co.uk/Frag.htm
David
11/5/2004 3:04:57 PM
In article <10omqhp41vnkp79@news20.forteinc.com>, Stargazer 
<fuckoff@spammers.com> writes
>> <David@longley.demon.co.uk> writes
>> >
>> > Why are people prejudiced? What's wrong, for instance, with
>> > concluding from the -2SD mean IQ of sub-sahara Africa relative to
>> > the UK mean, or the -1SD mean IQ of African undergraduates, or
>> > USA Afro-Carribeans that blacks are less intelligent than whites
>> > and whites are less intelligent than yellows (East Asians)? Is
>> > anything missing? If so, can you tell us?
>
>Stop being so idiotic and do yourself a favor: grab a copy of
>Jared Diamond's "Guns, Germs and Steel" and learn why what
>you say is just fermented old crap.
>
>*SG*
>
>
Tell us.
-- 
David Longley
David
11/5/2004 3:05:49 PM
In article <10omqhp41vnkp79@news20.forteinc.com>, Stargazer 
<fuckoff@spammers.com> writes
>> <David@longley.demon.co.uk> writes
>> >
>> > Why are people prejudiced? What's wrong, for instance, with
>> > concluding from the -2SD mean IQ of sub-sahara Africa relative to
>> > the UK mean, or the -1SD mean IQ of African undergraduates, or
>> > USA Afro-Carribeans that blacks are less intelligent than whites
>> > and whites are less intelligent than yellows (East Asians)? Is
>> > anything missing? If so, can you tell us?
>
>Stop being so idiotic and do yourself a favor: grab a copy of
>Jared Diamond's "Guns, Germs and Steel" and learn why what
>you say is just fermented old crap.
>
>*SG*
>
>
<http://www.lrainc.com/swtaboo/stalkers/jpr_ggs.html>
-- 
David Longley
David
11/5/2004 3:29:55 PM
On Fri, 05 Nov 2004 09:48:09 -0500, Wolf Kirchmeir
<wwolfkir@sympatico.ca> in comp.ai.philosophy wrote:

>Stargazer wrote:
>> Bill Modlin wrote:
>> 
>>>[big snip]
>>> An organism cannot survive using extensional reasoning alone:
>>>heuristic intensionality is required to deal with the frame problem
>>>and reduce the dimensionality of problems sufficiently to bring them
>>>within the scope of extensional analysis.  Extensional analysis is
>>>icing on the cake, we can survive without it.  We can't survive
>>>without heuristics.
>> 
>> 
>> Your whole post could have just said the above, and that would be
>> enough to make your point. Congratulations.
>> 
>> *SG*
>
>
>IMO this is a specious disinction, or else shows a misunderstanding of 
>extensionality, or else uses extension/intension in an unorthodox way.

So please to explain extension/intension in an orthodox way or stop
complaining how people use the terms. They are nothing more than
robust neologistic euphemisms for objective/subjective. So, please to
explain, sahib, how to get from one to the other without subjective
heuristics.

>I can't tell which, since I'm not privy to Bill's intensions. To the 
>extent that heuristics work, they are extensional.

Say what?

>The intensional 
>component is relevant only to the degree to which it affects extension

And your proof for this conjecture would be what, Wolf? That it
conforms to your arbitrary assumption above?

Regards - Lester
lesterDELzick
11/5/2004 3:46:15 PM
Bill Modlin wrote:
[...]
> Personally I feel that enough cultural bias is inherent in such tests to
> invalidate comparison across cultural groups. 

This is a behaviorist statement, and then some.

> But even if a
> "culture-free" IQ test were possible, IQ is only a weak correlate of
> the complex of laudatory attributes we call "intelligence", so the
> judgmental statements would still be inappropriate.

Sheesh, Bill, the attributes aren't laudatory, they're laudable. But to 
talk of intelligence as something laudable is itself a culture-bound 
behaviour, so this sentence is a muddle and a mess. But I won't hold you 
responsible for it, I'm sure you tapped it out in the heat of the 
moment, and didn't revise your post (which was a hell of a long one for 
the points you wanted to make: reminds me of one of Charles Lamb's PSs: 
"I apologise for the length of this letter, but I am very busy, and did 
not have the time to make it shorter.")
Wolf
11/5/2004 3:52:30 PM
On Thu, 04 Nov 2004 23:22:59 -0500, Bill Modlin
<wdmalias-cap@yahoo.com> in comp.ai.philosophy wrote:

>David Longley wrote:
>> In article <+Xqwk5Kq+VhBFwhk@longley.demon.co.uk>, David Longley 
>> <David@longley.demon.co.uk> writes
>[...]
>
>The answer to David's first question, "what's wrong with
>ANNs", is that their operation is somewhat analogous to the
>operation of our own brains.  Oversimplified, incomplete,
>but still recognizably similar in some of the ways they
>solve problems.

David's main complaint is that subjective heuristics would supplant
behaviorism's anthropomorphosis of homologies in rat brains. It's
more professional jealousy than science. He doesn't care one way or
the other about the truth of the matter. To David it's all a matter of
rationalizing away the subjective so he can get some respect for
training rats. It's a threat to the philosophical anthropomorphosis of
animal behavior in behaviorism.

>And since our brains clearly use those terrible intensional
>heuristics, as David has painstakingly documented for all
>the world to see, this makes ANNs irrational, and makes
>anyone who takes them seriously a threat to the progress of
>humanity.  It even makes them despicable and evil, if they
>seem to understand the problem and yet knowingly persist in
>talking about such nasty things.
>
>Of course, to the rest of us the resemblance of ANN
>heuristics to some human methods is what makes them worth
>talking about.
>
>David simply is unable to understand that many problems
>cannot be addressed without heuristics.  He includes quotes
>in Fragments that make this point quite well, but manages to
>slide past them undaunted, his grail of replacing human
>thought with pristine rationality miraculously intact.
>
>The sad part is that David is right about the value of
>replacing intuitive judgment with objective measures and
>rational statistical analysis in places where the data is
>available to support such an effort.  He's got a legitimate
>point, which probably deserves more attention than it gets
>in the flux of politically biased funding schemes.  If he'd
>stick to that, he might actually do some good.  But instead
>he falls prey to the biases he decries and overgeneralizes
>to the point of obvious irrationality... while steadfastly
>resisting any appeal to intuitive notions like common sense
>to recognize his error.
>
>Bill Modlin
>
>----- end of pasted material ----
>
>> You notably haven't responded to the second point. There's a profoundly 
>> critically point within it which you shouldn't worry too much about 
>> having some difficulty with grasping (though that doesn't mean I will 
>> tell you). As it is, I suspect you (and many others here) haven't a clue 
>> what I'm talking about. That's because it's representative of something 
>> worthy of serious, critical self-analysis, and it *is* something I have 
>> covered before.
>
>You are right, I haven't a clue as to why you feel IQ scores are
>relevant here.  Since you won't tell me and said not to worry, I won't.
>
>Personally I feel that enough cultural bias is inherent in such tests to
>invalidate comparison across cultural groups.  But even if a
>"culture-free" IQ test were possible, IQ is only a weak correlate of
>the complex of laudatory attributes we call "intelligence", so the
>judgmental statements would still be inappropriate.
>
>> In your reply, you assert that I generalise to the point of being 
>> irrational. That's a very bold assertion, but as stated it's *only* a 
>> bold assertion (i.e. nefarious rhetoric in my book). As far as I know, 
>> all that I have said is sound, and to date not refuted (and I don't 
>> refer to folk in c.a.p, although I would take what Glen, Wolf and 
>> sometimes even you <g> say seriously). I just try *not* to keep repeating 
>> the evidence, as I've already done so ad nauseam to no avail, not to 
>> mention tedious criticism. That's very much my point of course. The 
>> evidence often doesn't matter because most people behave like idiots and 
>> don't pick up on it. This is something that many, sadly, are only too 
>> happy to count on (as I'm sure most folk reading this and thinking about 
>> their *current* concerns will appreciate).
>> 
>> I think you should state your evidence or make an effort to listen more 
>> carefully. Assuming the latter, what was wrong with your post? Let's 
>> see if you can set an example for less "enlightened" others <g>.
>
>You have presented evidence that human judgment is based on heuristics
>which lead to suboptimal results in cases where there is sufficient data
>to apply more accurate procedures for comparison.  You have also
>presented evidence that the biases inherent in these heuristics are
>difficult to overcome by education and training, that training in better
>methods of reasoning often does not generalize well.
>
>You argue that we should therefore prefer to collect data and apply
>well-founded statistical and logical procedures rather than rely on
>expert clinical judgment, as that judgment is likely to be biased and
>suboptimal.  This is a reasonable position.
>
>However, you leap from the reasonable assertion that there are more
>accurate procedures which should be used when possible to a conviction
>that it is always possible.  You assume that these procedures, which you
>label "rational", "scientific", and "extensional", are sufficient in
>themselves to guide all decision processes, that they can replace naive
>intensional heuristic reasoning for all purposes.
>
>Which isn't true. The type of reasoning you recommend depends on
>repeatable observations of recurring events, and those do not exist
>until discovered by the intensional heuristics of perception.
>
>You focus on what happens after a scientist defines an experimental 
>protocol, with some objective specification of measurements or 
>observations to be made.  At that point we can collect data and look for 
>useful relationships among our variables, and at that point it is true 
>that there are potentially better ways to analyze the information than 
>naive clinical judgment.
>
>I'm looking at how we get to that point, how we can decide in the first 
>place that some particular set of observations out of all the 
>combinatorial explosion of possibilities is indeed worth investigating.
>
>It may be hard for you to see this as a difficult problem, since you 
>look at a situation through human eyes, and it is "obvious" that there 
>are only a few potentially relevant variables involved.  There are 
>skills needed to do a good job of experimental design to be sure you are 
>measuring what you intended to measure, but you don't see even a 
>fraction of the vast number of potentially measurable aspects of any 
>given situation.
>
>You don't have to do any sort of analysis to decide that it is probably 
>not worth recording the variations in the relative positions of each of 
>the hairs on a rat's skin as it moves around a Skinner box, or noting 
>the moment-to-moment minute fluctuations in the air pressure and 
>temperature a millimeter away from an arbitrary point on its 
>hindquarters.  You don't consider such possibilities and reject them, 
>they simply never arise for consideration in the first place.
>
>This is a good thing.  It also turns out to be hard to explain in terms 
>of purely logical extensional processes, so hard that it is essentially 
>impossible.  This is a facet of what is sometimes known as the "frame 
>problem", the problem of getting a logical device to focus on the 
>relevant aspects of a situation without first having to sort through an 
>infinite regress of irrelevant possibilities.
>
>You solve it effortlessly, without even thinking about it, as part of 
>your innate perceptual processing.  As you examine any situation you 
>project on it an intensional structure highlighting the probable major 
>causal foci and their rough relationships... this pushes on that, and 
>may cause that other thing to topple...
>
>You can do this because you use those intensional heuristics your own 
>evidence shows to dominate human reasoning.  You can do this better than 
>any system using extensional logic because intensional heuristics are 
>more appropriate than extensional logic for managing the sort of 
>information flow available to us through our senses.
>
>We reason using intensional heuristics because they work better than 
>extensional analysis, in real-life situations characterized by too many 
>variables and too little information to support extensional analysis.
>
>These heuristics don't give us the accuracy achievable with extensional 
>procedures under ideal conditions.  You focus on that aspect of the 
>comparison and argue that we should strive to suppress the use of 
>heuristics, change our way of thinking and speaking to conform to an 
>extensional ideal.
>
>But your argument is fundamentally misguided, as the two are not 
>interchangeable.  Extensional analysis is impotent for guiding decisions 
>in the vast majority of situations encountered in the real world.  An 
>organism cannot survive using extensional reasoning alone: heuristic 
>intensionality is required to deal with the frame problem and reduce the 
>dimensionality of problems sufficiently to bring them within the scope 
>of extensional analysis.  Extensional analysis is icing on the cake, we 
>can survive without it.  We can't survive without heuristics.

The objective can't even be without the subjective. We can't even be
without the subjective. Rocks don't do extensional heuristics.
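
Modlin's pruning point is easy to make concrete. Here's a minimal
Python sketch (the variable names and relevance scores below are
invented purely for illustration, they come from nobody's post): an
exhaustive extensional search over candidate variable sets blows up
as 2^n, while a heuristic never generates the irrelevant candidates
in the first place.

# Toy illustration of the frame problem's combinatorics.  With n
# measurable aspects of a situation there are 2**n candidate variable
# sets; exhaustive "extensional" search is hopeless long before n
# reaches the number of hairs on a rat's skin.
n = 40
print(f"candidate variable sets for n={n}: {2**n:,}")  # ~1.1e12

# A heuristic does not enumerate and reject those candidates -- they
# never arise for consideration.  The made-up relevance scores below
# stand in for whatever weighting innate perception supplies.
relevance = {
    "lever_presses": 0.90,
    "food_delivery": 0.85,
    "light_on": 0.60,
    "hair_positions": 0.001,
    "air_pressure_1mm_from_flank": 0.0005,
}

# Only the handful of survivors is ever handed to extensional analysis.
candidates = sorted(v for v, s in relevance.items() if s > 0.1)
print(candidates)  # ['food_delivery', 'lever_presses', 'light_on']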

>As with humans, so with our constructs.  An AI must be constructed first 
>and foremost to employ intensional heuristics successfully.  Once we get 
>that working, we can add in extensional reasoning as an occasionally 
>useful overlay.
>
>Bill Modlin


Regards - Lester
lesterDELzick
11/5/2004 4:00:27 PM
David Longley wrote:

<his usual spew, snipped>

Ah well.  But at least I'm relieved of one worry... after posting the 
previous note I was concerned that David might actually understand, and 
be driven to despondency...


Bill
Bill
11/5/2004 4:40:39 PM
In article <DjNid.27834$OD3.1327879@news20.bellglobal.com>, Wolf 
Kirchmeir <wwolfkir@sympatico.ca> writes
>Bill Modlin wrote:
>[...]
>> Personally I feel that enough cultural bias is inherent in such tests to
>> invalidate comparison across cultural groups.
>
>This is a behaviorist statement, and then some.
>
>> But even if a
>> "culture-free" IQ test were possible, IQ is only a weak correlate of
>> the complex of laudatory attributes we call "intelligence", so the
>> judgmental statements would still be inappropriate.
>
>Sheesh, Bill, the attributes aren't laudatory, they're laudable. But to 
>talk of intelligence as something laudable is itself a culture-bound 
>behaviour, so this sentence is a muddle and a mess. But I won't hold 
>you responsible for it, I'm sure you tapped it out in the heat of the 
>moment, and didn't revise your post (which was a hell of a long one for 
>the points you wanted to make: reminds me of one of Charles Lamb's PSs: 
>"I apologise for the length of this letter, but I am very busy, and did 
>not have the time to make it shorter.")

One small (?) point. There are technical problems with the notion of IQ 
which have to do with the measurement scale and the tendency to reify 
"intelligence" as an essential property. Some of these issues are being 
addressed and one should look to Jensen and Plomin there perhaps. In the 
UK, government has surreptitiously (in my view) smuggled IQ back in as 
"Cognitive Ability Tests" which have a critical place in our education 
system today, just as SATs etc do in the USA. People exploit intensional 
opacity, and many of those who are most vociferous about what is wrong 
with IQ etc end up using proxies without a care in the world, simply 
because they don't think it's functionally the same thing. The extent to 
which this sleight of hand goes on (hence my incessant references to 
intensional or referential opacity) is staggering. People don't see it 
in themselves or others, and that's a function of what this post is all 
about too. This leads to all kinds of problems as I'm sure you, at 
least, appreciate.

But we should not be too quick to throw out the baby with the bath water 
when it comes to IQ. As a tool for investigating and managing human 
behavioural diversity I think it has great merits, but there are sound 
reasons why its use is usually restricted to those who know how to use 
the tests and how to make sense of people's scores (e.g. the WAIS and its 
subscales). People don't tend to understand what Herrnstein and Murray 
repeatedly said about individuals vs groups either. I think Jensen, 
Rushton, Brand, Herrnstein, Eysenck etc are much maligned and 
fundamentally misrepresented and misunderstood as a consequence, and this 
is extremely bad for science and culture (everywhere). There *is* human 
behavioural diversity and some of this is physiological. Some of this 
cannot be compensated for by technology (as visual deficits or other 
ailments are) - at least not conceivably, because the research is being 
hampered! Individual and population sub-group differences are bound to 
have come under selection pressures over the past hundred thousand years 
or so, and some of these differences have been socially engineered 
within our very recent history (slavery (note that blacks in Africa were 
involved), Jewish persecution for two thousand years, HIV recently, 
poverty and war, and exponential population growth in Africa today etc). 
We need to look into anything which gives us better control over our 
environments, and thus, behaviour. Treating some problems as "sensitive" 
or "politically incorrect" is anathema to open science. "Political 
correctness" just drives the issues underground and taints discussion 
with bias as a consequence. I've pointed out some of the practical 
problems with this elsewhere over "Cognitive Skills", "SSRIs" and I 
could name others. Unless the unpalatable facts are openly discussed, 
others will, and do, egregiously exploit the uncertainty for their own 
ends. One of my points to Bill was to encourage him to look further than 
phenotypical markers. How does one see further than the heuristics take 
us? I'm not sure my point there has been picked up on yet - which is 
very much the point of course <g>

Kind regards,


Ps. Advice to others, please just ignore Zick. It's clear from the way 
that he constructs his sentences that he's just an obnoxious troll out 
to foment and feed on conflict. Sadly, I think Patty plays this same 
emotional, almost psychopathic, vampirish game.
-- 
David Longley
David
11/5/2004 4:55:42 PM
In article <XbOdnU3w8-xvMRbcRVn-3w@metrocastcablevision.com>, Bill 
Modlin <wdmalias-cap@yahoo.com> writes
>David Longley wrote:
>
><his usual spew, snipped>
>
>Ah well.  But at least I'm relieved of one worry... after posting the 
>previous note I was concerned that David might actually understand, and 
>be driven to despondency...
>
>
>Bill

It doesn't seem to bother you that you have nothing to show for your 
bold claims. Neither does anyone else (except where they pilfer and 
rename). That's why I keep telling you that you're behaving like the 
other idiots in "AI" and Cognitive Science. I'm explaining to you why 
you're on a vain, ignorant and bankrupt search for the Holy Grail and 
other chimeras - and guess what, you know better and think the way to 
deal with criticism is to become abusive (either by ignoring the 
criticism or explicitly as above). I at least have shown what we can do, 
and how we go about doing it.

You think you know better. Ask yourself how you know that. Where's your 
evidence? Or doesn't that matter?
-- 
David Longley
http://www.longley.demon.co.uk

David
11/5/2004 5:08:18 PM
Wolf Kirchmeir wrote:
> Stargazer wrote:
> > > <David@longley.demon.co.uk> writes
> > >
> > > > Why are people prejudiced? What's wrong, for instance, with
> > > > concluding from the -2SD mean IQ of sub-sahara Africa relative
> > > > to the UK mean, or the -1SD mean IQ of African undergraduates,
> > > > or USA Afro-Carribeans that blacks are less intelligent than
> > > > whites and whites are less intelligent than yellows (East
> > > > Asians)? Is anything missing? If so, can you tell us?
> >
> >
> > Stop being so idiotic and do yourself a favor: grab a copy of
> > Jared Diamond's "Guns, Germs and Steel" and learn why what
> > you say is just fermented old crap.
> >
> > *SG*
>
> You've misunderstood DL's point. He's questioning the prejudiced
> conclusions. He's pointing to why they are wrong: they omit the
> environment within which the systems (people) operate. Diamond's book
> is an extended (and thoroughly engaging) gloss on this principle.
>
> The reason the whole IQ testing enterprise is misguided (and hence
> very nasty in its consequences) is that it is not behaviorist: it
> pays no attention whatever to the role the environment plays in
> developing "intelligence", and hnece in the role it plays in the
> testee's responses to the test questions. Etc etc etc. The assumption
> that intelligence is a proeprty of the sytem alone, without realtion
> to its environment, is what DL is criticising.
>
> The behaviorist stance may be paraphrased thus: If you take a fish out
> of the water, it can't behave like a fish.

Thanks for supporting my original claim.

If I understand what you say in your last sentence above, the
behavior that behaviorists measure of rats and pigeons in a
laboratory is not the "real behavior" of these animals "in natura",
but just particular behaviors in an artificial setting.
Ok, we agree after all ;-)

*SG*


Stargazer
11/5/2004 6:16:02 PM
Wolf Kirchmeir wrote:
> Stargazer wrote:
> > Bill Modlin wrote:
> >
> > > [big snip]
> > > An organism cannot survive using extensional reasoning alone:
> > > heuristic intensionality is required to deal with the frame
> > > problem and reduce the dimensionality of problems sufficiently to
> > > bring them within the scope of extensional analysis.  Extensional
> > > analysis is icing on the cake, we can survive without it.  We
> > > can't survive without heuristics.
> >
> > Your whole post could have just said the above, and that would be
> > enough to make your point. Congratulations.
> >
> > *SG*
>
> IMO this is a specious distinction, or else shows a misunderstanding of
> extensionality, or else uses extension/intension in an unorthodox
> way. I can't tell which, since I'm not privy to Bill's intensions. To
> the extent that heuristics work, they are extensional. The intensional
> component is relevant only to the degree to which it affects extension
> (which may be great.) See Quine on the translation problem, w/ IMO
> crystallises the extension - intension issue. WQ shows that a) two
> people with a different intension for a term may have the same
> extension; and b) that common extension is the necessary condition for
> communication. The same principle applies to heuristics: the
> intensional component can be utterly different for two systems, yet
> so long as they have the same extension, they will be equally
> effective. At the level of NNs, IMO it's an irrelevant issue, since
> the only test that makes sense is efficacy: presumably (if we're
> still on topic for this thread) we want to design the NN so it
> performs "useful functions". In that case, extensions rule. Its
> intension(s), if any, don't matter.
> Footnote:
> Quine's result has implications for the communicating-with-an-alien
> problem, w/ took up a lot of unnecessary space on this forum not so
> long ago. IMO this one reduces to communicating-with-other-species, a
> problem that every person who interacts with domestic animals has
> solved more or less successfully. Of course, there will be people who
> insist that unless you have the same intension as your interlocutor,
> you haven't "really communicated." Such people have a touching trust
> in the ability of humans to guess correctly at each other's
> intensions, a trust that 15 minutes in a literature class discussing
> a poem should dispel forever.
> If BM merely means "internal" vs "external", the distinction is fuzzy,
> to put it mildly. But IMO BM has a fuzzy notion of internal - external
> anyhow, since he ignores levels: analysis at the level of a single
> (minimal) NN automatically makes all other NNs connected to it
> external to it. That's a distinction that matters IMO, since it
> affects the extension of "external signal" and "quality of external
> signal", both of which are necessary concepts, I think (allowing for
> the ambiguity of "quality", w/ should be resolvable.)

You're trying to mix Quinean philosophically charged ideas with
what is done in science, as practiced by today's scientists. These
things are immiscible. It is not possible to develop science (the
pursuit of knowledge of _unknown realms_) within purely extensional
contexts. Scientific definitions during that phase must begin
as provisional and property-oriented. This is not to say
that extensional definitions, heuristics, logics and practices
are not useful or that they should not be constructed. Of course
they are important (even methodologically), but not during most
of the time of discovery. Most of this time is spent creating
and evaluating intensional heuristics and formulating hypotheses
to be empirically tested (in other words, refining and polishing
the eventual extensional referents that will later be used in
formalization). Without proceeding this way nothing new would ever be
discovered. If baby Quine had thought this way, he would still be
trying to find an extensional definition of even numbers.

*SG*


Stargazer
11/5/2004 6:29:29 PM
Wolf Kirchmeir wrote:
> Stargazer wrote:
> 
>> Bill Modlin wrote:
>>
>>> [big snip]
>>> An organism cannot survive using extensional reasoning alone:
>>> heuristic intensionality is required to deal with the frame problem
>>> and reduce the dimensionality of problems sufficiently to bring them
>>> within the scope of extensional analysis.  Extensional analysis is
>>> icing on the cake, we can survive without it.  We can't survive
>>> without heuristics.
>>
>>
>>
>> Your whole post could have just said the above, and that would be
>> enough to make your point. Congratulations.
>>
>> *SG*
> 
> 
> 
> IMO this is a specious distinction, or else shows a misunderstanding of 
> extensionality, or else uses extension/intension in an unorthodox way. I 
> can't tell which, since I'm not privy to Bill's intensions. 

Translating that last sentence into Valley Girl Talk (VGT) you get "you 
can't read Bill's thoughts".   But in a more serious vein I asked about 
this use of these terms when i first started posting here.  You can read 
the thread off of
<http://groups.google.com/groups?selm=4e799a1d.0308240420.65309d34%40posting.google.com>
For me the best answer came from Stephen Harris referring to Anders' post
<http://groups.google.com/groups?selm=599i7m%24p6u%40usenet.srv.cis.pitt.edu>

From Quine we get

""An opaque construction is one in which you cannot in general supplant 
a singular term by a codesignative term (one referring to the same 
object) without disturbing the truth value of the containing sentence. 
In an opaque construction you also cannot in general supplant a general 
term by a coextensive term (one true of the same objects), nor a 
component sentence by a sentence of the same truth value, without 
disturbing the truth value of the containing sentence. All three 
failures are called failures of extensionality"

And from Longley himself we get words to the effect that an intensional 
context is one which fails extensionality.  Note that Longley does not 
(usually) use the intensional\extensional distinction in the set 
theoretic sense of the word.  It is difficult for us who come from other 
backgrounds to keep that straight and frequently there is confusion on 
that point.

Longley uses the term  "intensional opacity" a lot.  Just this morning i 
figured out how to translate that into VGT.  It just means that people 
don't understand each other.  Where a context is extensional, people 
are talking about the same thing; otherwise they probably are not.  We 
do see a lot of that on cap.
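
Quine's definition above can be put in a few lines of Python (a toy
sketch only; the names, the belief set and the dictionaries are all
invented here): supplanting a codesignative term preserves truth in
the transparent context but not in the opaque one.

# Two codesignative singular terms: both denote the same object.
denotation = {"Cicero": "marcus_tullius", "Tully": "marcus_tullius"}
romans = {"marcus_tullius"}

def is_roman(name):
    # Transparent (extensional) context: only the denoted object
    # matters, so supplanting "Cicero" with the codesignative "Tully"
    # cannot disturb the truth value.
    return denotation[name] in romans

# Opaque context: the containing sentence is kept as an uninterpreted
# string, so the term itself rather than its denotation decides the
# outcome -- a failure of extensionality in Quine's sense.
beliefs = {"Cicero denounced Catiline"}

print(is_roman("Cicero"), is_roman("Tully"))     # True True
print("Cicero denounced Catiline" in beliefs,    # True
      "Tully denounced Catiline" in beliefs)     # False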


> To the 
> extent that heuristics work, they are extensional. The intensional 
> component is relevant only to the degree to which it affects extension 
> (which may be great.) 

Which can be true by definition.  It does, however, presume that there 
is some God's eye extensional context to which we have access.  My point 
is that we can remove that presumption by talking of the relationship 
between contexts rather than some extensionality "property" of a single 
context.  For example: Israel's context of the defense of its nation is 
intensional to the context of the plight of Palestinians.  In VGT, they 
don't understand each other.

> See Quine on the translation problem, w/ IMO 
> crystallises the extension - intension issue. WQ shows that a) two 
> people with a different intension for a term may have the same 
> extension; and b) that common extension is the necessary condition for 
> communication. The same principle applies to heuristics: the intensional 
> component can be utterly different for two systems, yet so long as they 
> have the same extension, they will be equally effective. 

Well, i think you are using the set theoretic sense of the dichotomy 
here, but i still understand what you mean.  Quine's quip that the 
class of animals with hearts has the same extension as the class of 
animals with kidneys comes to mind.  But if a forensic examiner's 
heuristic for determining whether people had kidneys was to look for 
their hearts, then he would be jumping to some bad conclusions.  In this 
case the definition of the class by intension (by the test for the 
particular organs) is what works.  I think that there has been a lot of 
confusion on this point.
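
That forensic example fits in a few lines of Python (the records are
made up): two tests with different intensions can agree on every case
sampled so far, and substituting one for the other is only safe while
they do.

# Two predicates with different intensions (different tests) that
# happen to have the same extension over the sample.
def has_heart(animal):
    return "heart" in animal["organs"]

def has_kidneys(animal):
    return "kidneys" in animal["organs"]

sample = [
    {"name": "dog",   "organs": {"heart", "kidneys"}},
    {"name": "trout", "organs": {"heart", "kidneys"}},
]
assert all(has_heart(a) == has_kidneys(a) for a in sample)

# The examiner's heuristic substitutes one test for the other.  One
# counterexample and the "coextensive" substitution jumps to a bad
# conclusion.
odd_case = {"name": "donor", "organs": {"heart"}}  # kidneys removed
print(has_heart(odd_case), has_kidneys(odd_case))  # True False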

> At the level of 
> NNs, IMO it's an irrelevant issue, since the only test that makes sense 
> is efficacy: presumably (if we're still on topic for this thread) we 
> want to design the NN so it performs "useful functions". In that case, 
> extensions rule. Its intension(s), if any, don't matter.
> 

In this case i think it is you who are playing fast and loose with this 
word "intension".  But i still think i know what you mean :)  Now using 
our new "intensional to" predicate we can state your thesis above from a 
wider context:  "A designer wants to design the NN so that it performs 
useful functions for the designer.  Functions which are intensional to 
the NN do not matter to the designer. "

> Footnote:
> Quine's result has implications for the communicating-with-an-alien 
> problem, w/ took up a lot of unnecessary space on thsi forum not so long 
> ago. IMO this one reduces to communicating-with-other-species, a problem 
> that every person who interacts with domestic animals has solved more or 
> less successfully. Of course, there will be people who insist that 
> unless you have the same intension as your interlocutor, you haven't 
> "really communicated." 

That's a funny one.  We communicate well with our domestic chickens ... 
err we eat them.  Yes, we have solved our problems of communicating with 
them.  Remember that famous Twilight Zone episode where humans were 
translating an alien manual, and just as the humans were getting on the 
aliens' ship to be taken to their home planet, somebody found out it was 
a cookbook.

> Such people have a touching trust in the ability 
> of humans to guess correctly at each other's intensions, a trust that 15 
> minutes in a literature class discussing a poem should dispel forever.
> 

I am totally with you here.  Now could you tell me why DL plays so many 
guessing games?

> If BM merely means "internal" vs "external", the distinction is fuzzy, 
> to put it mildly. 

Personally i don't see what your problem is with this distinction. 
Specify a time-space manifold, like for example the skin of an 
individual: those processes which occur inside that manifold are 
internal, those outside are external.
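
Once the manifold is chosen, the bookkeeping is trivial; a sketch in
Python (the node names and edges below are made up):

# Pick any boundary you like -- the set of things counted as "inside".
# The internal/external classification follows mechanically, and
# redrawing the manifold redraws the classification.
def classify(edges, manifold):
    return {edge: ("internal" if set(edge) <= manifold else "external")
            for edge in edges}

edges = [("n1", "n2"), ("n2", "retina"), ("retina", "lgn")]

# Analysed at the level of the two-cell net {n1, n2}, everything else
# is external to it...
print(classify(edges, {"n1", "n2"}))
# ...while a wider manifold internalises the same signals.
print(classify(edges, {"n1", "n2", "retina", "lgn"}))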

> But IMO BM has a fuzzy notion of internal - external 
> anyhow, since he ignores levels: analysis at the level of a single 
> (minimal) NN automatically makes all other NNs connected to it external 
> to it. That's a distinctction that matters IMO, since it affects the 
> extension of "external signal" and "quality of external signal", both of 
> which are necessary concepts, I think (allowing for the ambiguity of 
> "quality, w/ should be resolvable.)

How the manifold is defined is entirely up to the convenience and 
purposes of the agent and the technology making the analysis.

Incidentally, TIA for a rational discussion of these issues :)

patty
patty
11/5/2004 7:15:41 PM
In article <10onhkajel02623@news20.forteinc.com>, Stargazer 
<fuckoff@spammers.com> writes
>Wolf Kirchmeir wrote:
>> [big snip - Wolf's post, quoted in full earlier in the thread]
>
>You're trying to mix Quinean philosophically charged ideas with
>what is done in science, as practiced by today's scientists. These
>things are immiscible. It is not possible to develop science (the
>pursuit of knowledge of _unknown realms_) within purely extensional
>contexts. Scientific definitions during that phase must begin
>as provisional and property-oriented. This is not to say
>that extensional definitions, heuristics, logics and practices
>are not useful or that they should not be constructed. Of course
>they are important (even methodologically), but not during most
>of the time of discovery. Most of this time is spent creating
>and evaluating intensional heuristics and formulating hypotheses
>to be empirically tested (in other words, refining and polishing
>the eventual extensional referents that will later be used in
>formalization). Without proceeding this way nothing new would ever be
>discovered. If baby Quine had thought this way, he would still be
>trying to find an extensional definition
>of even numbers.
>
>*SG*
>
>
The above merely confirms what I have said elsewhere about what you are 
doing, i.e. writing idiotic drivel. What you say above is all false or 
just nonsense, and you can't see why because like others of your ilk you 
have no conception of truth. That what you do write *is* rubbish could 
be ascertained by checking the facts, but you don't. Only human beings 
can behave as idiotically as you illustrate here and elsewhere, so 
you'll be welcomed by other idiots like Zick, Verhey, Michaels, Ozkural, 
Zero and no end of other delinquents.

People like you are a blight.
-- 
David Longley
David
11/5/2004 7:23:25 PM
On Fri, 5 Nov 2004 15:04:57 +0000, David Longley
<David@longley.demon.co.uk> in comp.ai.philosophy wrote:

>In article <10omqapmfsbe71e@news20.forteinc.com>, Stargazer 
><fuckoff@spammers.com> writes
>>Bill Modlin wrote:
>>> [big snip]
>>>  An organism cannot survive using extensional reasoning alone:
>>> heuristic intensionality is required to deal with the frame problem
>>> and reduce the dimensionality of problems sufficiently to bring them
>>> within the scope of extensional analysis.  Extensional analysis is
>>> icing on the cake, we can survive without it.  We can't survive
>>> without heuristics.
>>
>>Your whole post could have just said the above, and that would be
>>enough to make your point. Congratulations.
>>
>>*SG*
>>
>You idiot - we also kill, cheat, maim and act like apes (and you plus 
>the other obnoxiously ignorant idiots here and elsewhere) because of 
>"heuristics". What's that got to do with "intelligence"? Do you not see 
>the problem? How ya gonna decide eh?

Two questions, David -

What's intelligence, and how do the rats you train do on IQ tests?

Regards - Lester
lesterDELzick
11/5/2004 7:27:10 PM
On Fri, 05 Nov 2004 09:57:15 -0500, Wolf Kirchmeir
<wwolfkir@sympatico.ca> in comp.ai.philosophy wrote:

>Stargazer wrote:
>>><David@longley.demon.co.uk> writes
>>>
>>>>Why are people prejudiced? What's wrong, for instance, with
>>>>concluding from the -2SD mean IQ of sub-Saharan Africa relative to
>>>>the UK mean, or the -1SD mean IQ of African undergraduates, or
>>>>USA Afro-Caribbeans that blacks are less intelligent than whites
>>>>and whites are less intelligent than yellows (East Asians)? Is
>>>>anything missing? If so, can you tell us?
>> 
>> 
>> Stop being so idiotic and do yourself a favor: grab a copy of
>> Jared Diamond's "Guns, Germs and Steel" and learn why what
>> you say is just fermented old crap.
>> 
>> *SG*
>> 
>> 
>
>You've misunderstood DL's point. He's questioning the prejudiced 
>conclusions. He's pointing to why they are wrong: they omit the 
>environment within which the systems (people) operate. Diamond's book is 
>an extended (and thoroughly engaging) gloss on this principle.
>
>The reason the whole IQ testing enterprise is misguided (and hence very 
>nasty in its consequences) is that it is not behaviorist: it pays no 
>attention whatever to the role the environment plays in developing 
>"intelligence", and hnece in the role it plays in the testee's responses 
>to the test questions. Etc etc etc. The assumption that intelligence is 
>a proeprty of the sytem alone, without realtion to its environment, is 
>what DL is criticising.

Funny I got the impression David was critiquing relative stupidity in
sub-Saharans. But I'm sure he'll appreciate the apologia nonetheless.

>The behaviorist stance may be paraphrased thus: If you take a fish out 
>of the water, it can't behave like a fish.


Regards - Lester
lesterDELzick
11/5/2004 7:31:20 PM
In article <xjQid.300500$wV.185021@attbi_s54>, patty 
<pattyNO@SPAMicyberspace.net> writes
<snip>
>And from Longley himself we get words to the effect that an intensional 
>context is one which fails extensionality.  Note that Longley does not 
>(usually) use the intensional\extensional distinction in the set 
>theoretic sense of the word.  It is difficult for us who come from 
>other backgrounds to keep that straight and frequently there is 
>confusion on that point.
>

The work I'm always referring to tacitly is a computer system where the 
critical issue was always the nature of the predicates. Anyone who has 
even had a casual look at that and remembered any of it would not have 
fabricated the nonsense you keep peddling here.

>Longley uses the term  "intensional opacity" a lot.  Just this morning 
>i figured out how to translate that into VGT.

Why do you do any such thing? Solely to foment the same sort of nonsense 
which Zick does.

> It just means that people don't understand each other.

No it doesn't. But why let that little fact bother you eh? You spin out 
no end of nonsense in order to engage others in an exchange and like 
Zick, that is the end of the matter. That's all you are after, and it's 
that that's being explicated as pathological, both here and elsewhere as 
"cognitive science". It's vacuous and neurotic.
-- 
David Longley
David
11/5/2004 7:41:53 PM
David Longley wrote:

> Ps. Advice to others, please just ignore Zick. It's clear from the way 
> that he constructs his sentences that he's just an obnoxious troll out 
> to foment and feed on conflict. Sadly, I think Patty plays this same 
> emotional, almost psychopathic, vampirish game.

The question is who is sucking whose blood here.

Newsgroups trimmed.

patty
patty
11/5/2004 7:47:20 PM
On Fri, 5 Nov 2004 17:08:18 +0000, David Longley
<David@longley.demon.co.uk> in comp.ai.philosophy wrote:

>In article <XbOdnU3w8-xvMRbcRVn-3w@metrocastcablevision.com>, Bill 
>Modlin <wdmalias-cap@yahoo.com> writes
>>David Longley wrote:
>>
>><his usual spew, snipped>
>>
>>Ah well.  But at least I'm relieved of one worry... after posting the 
>>previous note I was concerned that David might actually understand, and 
>>be driven to despondency...
>>
>>
>>Bill
>
>It doesn't seem to bother you that you have nothing to show for your 
>bold claims. Neither does anyone else (except where they pilfer and 
>rename). That's why I keep telling you that you're behaving like the 
>other idiots in "AI" and Cognitive Science. I'm explaining to you why 
>you're on a vain, ignorant and bankrupt search for the Holy Grail and 
>other chimeras - and guess what, you know better and think the way to 
>deal with criticism is to become abusive (either by ignoring the 
>criticism or explicitly as above). I at least have shown what we can do, 
>and how we go about doing it.
>
>You think you know better. Ask yourself how you know that. Where's your 
>evidence? Or doesn't that matter?

Well, David, here's the problem. We've been asking you exactly that
for a long time now and all we get are ambiguous tract citations. So,
I guess we're in a standoff. You claim we have no evidence. We claim
you have no evidence.

Regards - Lester
lesterDELzick
11/5/2004 8:01:33 PM
On Fri, 05 Nov 2004 10:52:30 -0500, Wolf Kirchmeir
<wwolfkir@sympatico.ca> in comp.ai.philosophy wrote:

>Bill Modlin wrote:
>[...]
>> Personally I feel that enough cultural bias is inherent in such tests to
>> invalidate comparison across cultural groups. 
>
>This is a behaviorist statement, and then some.
>
>> But even if a
>> "culture-free" IQ test were possible, IQ is only a weak correlate of
>> the complex of laudatory attributes we call "intelligence", so the
>> judgmental statements would still be inappropriate.
>
>Sheesh, Bill, the attributes aren't laudatory, they're laudable. But to 
>talk of intelligence as something laudable is itself a culture-bound 
>behaviour, so this sentence is a muddle and a mess. But I won't hold you 
>responsible for it, I'm sure you tapped it out in the heat of the 
>moment, and didn't revise your post (which was a hell of a long one for 
>the points you wanted to make: reminds me of one of Charles Lamb's PSs: 
>"I apologise for the length of this letter, but I am very busy, and did 
>not have the time to make it shorter.")

Well, there's at least one behaviorist who makes a better essayist
than scientist.

Regards - Lester
lesterDELzick
11/5/2004 8:01:34 PM
On Fri, 5 Nov 2004 16:55:42 +0000, David Longley
<David@longley.demon.co.uk> in comp.ai.philosophy wrote:

[. . .]

>Ps. Advice to others, please just ignore Zick. It's clear from the way 
>that he constructs his sentences that he's just an obnoxious troll out 
>to foment and feed on conflict. Sadly, I think Patty plays this same 
>emotional, almost psychopathic, vampirish game.
>-- 
>David Longley

Ah, the thanks I get for spelling out the value of tautological truth.
David would make me a martyr to the cause of the mind and mental
effects rather than deal in science. Glen and John H. have already
taken the pledge. Looks like I've been more successful than I would
have thought possible. Bruised egos heal slowly.

Regards - Lester
lesterDELzick
11/5/2004 8:01:35 PM
On Fri, 05 Nov 2004 19:47:20 GMT, patty <pattyNO@SPAMicyberspace.net>
in comp.ai.philosophy wrote:

>David Longley wrote:
>
>> Ps. Advice to others, please just ignore Zick. It's clear from the way 
>> that he constructs his sentences that he's just an obnoxious troll out 
>> to foment and feed on conflict. Sadly, I think Patty plays this same 
>> emotional, almost psychopathic, vampirish game.
>
>The question is who is sucking whose blood here.

David's rather obviously just sucking wind.

Regards - Lester
lesterDELzick
11/5/2004 8:04:30 PM
On Fri, 5 Nov 2004 19:23:25 +0000, David Longley
<David@longley.demon.co.uk> in comp.ai.philosophy wrote:

>In article <10onhkajel02623@news20.forteinc.com>, Stargazer 
><fuckoff@spammers.com> writes

[. . .]

>>[big snip - quoted in full earlier in the thread]
>>
>>*SG*
>>
>>
>The above merely confirms what I have said elsewhere about what you are 
>doing, i.e. writing idiotic drivel. What you say above is all false or 
>just nonsense, and you can't see why because like others of your ilk you 
>have no conception of truth. That what you do write *is* rubbish could 
>be ascertained by checking the facts, but you don't. Only human beings 
>can behave as idiotically as you illustrate here and elsewhere, so 
>you'll be welcomed by other idiots like Zick, Verhey, Michaels, Ozkural, 
>Zero and no end of other delinquents.
>
>People like you are a blight.
>-- 
>David Longley

But David. We've given you the best years of our lives. Unfortunately.

Regards - Lester
lesterDELzick
11/5/2004 8:07:26 PM
On Fri, 5 Nov 2004 19:41:53 +0000, David Longley
<David@longley.demon.co.uk> in comp.ai.philosophy wrote:

>In article <xjQid.300500$wV.185021@attbi_s54>, patty 
><pattyNO@SPAMicyberspace.net> writes
><snip>
>>And from Longley himself we get words to the effect that an intensional 
>>context is one which fails extensionality.  Note that Longley does not 
>>(usually) use the intensional\extensional distinction in the set 
>>theoretic sense of the word.  It is difficult for us who come from 
>>other backgrounds to keep that straight and frequently there is 
>>confusion on that point.
>>
>
>The work I'm always referring to tacitly is a computer system where the 
>critical issue was always the nature of the predicates. Anyone who has 
>even had a casual look at that and remembered any of it would not have 
>fabricated the nonsense you keep peddling here.
>
>>Longley uses the term  "intensional opacity" a lot.  Just this morning 
>>i figured out how to translate that into VGT.
>
>Why do you do any such thing? Solely to foment the same sort of nonsense 
>which Zick does.

Ah, jealousy is indeed a green eyed monster. No matter, you'll always
remain the group's pet behaviorist.

>> It just means that people don't understand each other.
>
>No it doesn't. But why let that little fact bother you eh? You spin out 
>no end of nonsense in order to engage others in an exchange and like 
>Zick, that is the end of the matter. That's all you are after, and it's 
>that that's being explicated as pathological, both here and elsewhere as 
>"cognitive science". It's vacuous and neurotic.

And scientific.

Regards - Lester
lesterDELzick
11/5/2004 8:13:19 PM
"David Longley" <David@longley.demon.co.uk> wrote in message 
news:1NYSCdBtM9iBFw4m@longley.demon.co.uk...
> In article <10onhkajel02623@news20.forteinc.com>, Stargazer 
> <fuckoff@spammers.com> writes
>>Wolf Kirchmeir wrote:
>>> [big snip - Wolf's post, quoted in full earlier in the thread]