Why are Neural Nets not AI? #2

Hi there,

I made an appointment to see one of my lecturers a few days back and said it
was going to be about AI.  When I got there, I asked him about Neural Nets
and he told me that NNs are not considered a branch of AI.
I didn't want to disagree but he did not explain why.  Could someone please
elaborate and explain why NNs are not considered a branch of AI?

Thanks
Allan

[ comp.ai is moderated.  To submit, just post and be patient, or if ]
[ that fails mail your article to <comp-ai@moderators.isc.org>, and ]
[ ask your news administrator to fix the problems with your system. ]
Allan
7/29/2003 12:31:10 PM

Allan Bruce <abruce@takeawaycsd.abdn.ac.uk> wrote:
> Hi there,

> I made an appointment to see one of my lecturers a few days back and said it
> was going to be about AI.  When I got there, I asked him about Neural Nets
> and he told me that NNs are not considered a branch of AI.
> I didn't want to disagree but he did not explain why.  Could someone please
> elaborate and explain why NNs are not considered a branch of AI?

> Thanks
> Allan

Dunno. They were when I went to grad school. But that was almost a decade
ago.  Since I'm not working in that area of the field, I don't know.

But my experience is that a lot of these statements in academia may be 
an individual's opinion.  You might get a second opinion. ;-)

Julie

-- 
Julie 
**********
Check out my Travel Pages (non-commercial) at 
http://www.dragonsholm.org/travel.htm

Juliana
7/30/2003 2:59:42 AM
Allan Bruce wrote:
> Hi there,
> 
> I made an appointment to see one of my lecturers a few days back and said it
> was going to be about AI.  When I got there, I asked him about Neural Nets
> and he told me that NNs are not considered a branch of AI.
> I didn't want to disagree but he did not explain why.  Could someone please
> elaborate and explain why NNs are not considered a branch of AI?

Many people consider NNs to be part of AI.  So you'd probably have to 
ask him.

Randolph
7/30/2003 3:14:48 AM
Allan Bruce wrote:
> Hi there,
> 
> I made an appointment to see one of my lecturers a few days back and said it
> was going to be about AI.  When I got there, I asked him about Neural Nets
> and he told me that NNs are not considered a branch of AI.
> I didn't want to disagree but he did not explain why.  Could someone please
> elaborate and explain why NNs are not considered a branch of AI?
> 

One could define AI however one chose to, I suppose, but despite your
lecturer's claim, NNs are considered AI by many people. They are an
important component of work in the fields of speech recognition and
computer vision, and still an important topic in their own right.

Foss
7/30/2003 3:15:31 AM
"Allan Bruce" <abruce@TAKEAWAYcsd.abdn.ac.uk> writes:

> Hi there,
> 
> I made an appointment to see one of my lecturers a few days back and said it
> was going to be about AI.  When I got there, I asked him about Neural Nets
> and he told me that NNs are not considered a branch of AI.
> I didn't want to disagree but he did not explain why.  Could someone please
> elaborate and explain why NNs are not considered a branch of AI?
> 
> Thanks
> Allan
> 

NNs are indeed a branch of AI. This is not only my opinion, but
evidently also that of Russell & Norvig since they devote a section in
their book "Artificial Intelligence: A Modern Approach" to it. There
are some (older people especially) who think that NNs are not part of
AI, because they think NNs are trivial. This is largely due, IMO, to
the work of Minsky & Papert (in their book Perceptrons). Since that
time, there have been numerous developments in the field - although
even then they were only trivial if you limited yourself to analyzing
networks that were easily analyzed (i.e., the trivial
ones). Ironically, when all is said and done, NNs may end up being the
only enduring part of AI (although I'm sure to get some howls here),
due to the perception, as Parnas wrote, that AI is those problems
that we don't know how to solve. Therefore, once we understand those
problems, they cease to be AI. (This is meant to be flippant, but
it still explains how things such as playing chess and speech
recognition are more and more NOT considered to be AI, although this
depends on who is doing the considering.)

---------------------------------------------------------------------
                          | "Good and evil both increase at compound
Ben Hocking, Grad Student | interest. That is why the little
hocking@cs.virginia.edu   | decisions you and I make every day are of
                          | such infinite importance." - C. S. Lewis
---------------------------------------------------------------------

Ashlie
7/30/2003 3:42:02 AM
"Allan Bruce" <abruce@TAKEAWAYcsd.abdn.ac.uk> wrote in message news:<bg5pcg$5lf$1@mulga.cs.mu.OZ.AU>...
> Hi there,
> 
> I made an appointment to see one of my lecturers a few days back and said it
> was going to be about AI.  When I got there, I asked him about Neural Nets
> and he told me that NNs are not considered a branch of AI.
> I didn't want to disagree but he did not explain why.  Could someone please
> elaborate and explain why NNs are not considered a branch of AI?

That's not quite correct. *Artificial* neural nets are part of machine
learning. At least people do use them. I have a book called "Readings
in machine learning", edited by Shavlik and Dietterich, which is a
collection of ML papers, and it has neural net papers. ML is considered
a branch of AI. ANNs are not some kind of sworn enemy. :)

But the biological neural nets aren't AI, because they aren't computer
science. Of course, studying them can turn out to be computer science,
because as soon as you write an algorithm or you have a mathematical
theory about a computing device you are doing computer science.

What would not be AI, I think, unless it is used in modelling
intelligence, is artificial life. Genetic algorithms and genetic
programming methods are used in AI, but they are not unique to AI. (In
fact, GP is one of the methods that has the potential for "stronger"
AIs.) You can find GAs in graph layout research, too.

I think the real reason your lecturer said that is that there was
a kind of tension among some outdated AI researchers, in the form of
awful camps called connectionism and symbolicism. And connectionists
had to say unscientific things about symbolicists and vice versa.

Thanks,

__
Eray Ozkural

erayo
7/30/2003 3:45:27 AM
You should ask your lecturer what he means.  Frankly, it's nigh absurd to
say neural networks are not part of AI.

> This is largely due, IMO, to the work of Minsky & Papert (in their book
> Perceptrons). Since that time, their have been numerous developments in
> the field - although even then they were only trivial if you limited
> yourself to analyzing networks that were easily analyzed (i.e, the
> trivial ones).

What's most ironic and tragic about Perceptrons is that, nowadays, people
do a lot of interesting stuff with perceptrons.  A lot of image
analysis, for instance, begins with a wavelet transform of the image,
followed by some kind of component analysis (PCA, e.g.), followed by
running the resulting coefficients through a perceptron.  Or, SVMs can be
thought of as fancy perceptrons -- just add a kernel to a perceptron and
POOF! you have an SVM.
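
For concreteness, here is a rough sketch of that "add a kernel" move in
Python/NumPy.  The data and parameters are invented for illustration,
and note it is the kernel *perceptron*, not a full SVM -- an SVM would
also maximize the margin, which this does not:

import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Gaussian (RBF) kernel -- the "just add a kernel" step
    return np.exp(-gamma * np.sum((a - b) ** 2))

def train_kernel_perceptron(X, y, epochs=20):
    # Dual-form perceptron: alpha[i] counts mistakes on example i
    alpha = np.zeros(len(X))
    for _ in range(epochs):
        for i in range(len(X)):
            s = sum(alpha[j] * y[j] * rbf_kernel(X[j], X[i])
                    for j in range(len(X)))
            if y[i] * s <= 0:            # misclassified -> update
                alpha[i] += 1
    return alpha

# Toy data: XOR, which a plain (linear) perceptron cannot learn
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., 1., 1., -1.])
alpha = train_kernel_perceptron(X, y)
for i in range(len(X)):
    s = sum(alpha[j] * y[j] * rbf_kernel(X[j], X[i]) for j in range(len(X)))
    print(X[i], np.sign(s))              # recovers the XOR labels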

> Ironically, when all is said and done, NNs may end up being the
> only enduring part of AI (although I'm sure to get some howls here),

Oh come on!  This is beyond flippant, especially given the reality that 
neural nets as a field has virtually imploded.

About chess.  People fight about this as a test problem.  For all
practical purposes, it's not very compelling anymore.  But that's a matter
of popular press -- it's not sexy.  However, the AI problem of learning to
play chess with minimal human hand holding is far from being solved.  
Deep Blue does not suffice as a solution by any stretch of the
imagination.  I should point out that the AI problem of learning to play
TIC-TAC-TOE from minimal human input has barely, and in my opinion
unconvincingly, been solved.  We have a very long way to go indeed.  As a
field, we can hardly agree on how to frame the problems, let alone say
what it means to solve them.

Anthony

Anthony
7/30/2003 10:07:23 PM
> Expert systems are considered to be strong AI. In an expert system we
> define rules, and based on the rules inferencing is done on the
> concepts defined. Hence you are deriving more information from your
> existing information based on the rules.
>
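
(A minimal sketch of that rule-plus-inferencing loop, in Python; the
facts and rules here are invented purely for illustration:)

# Facts and rules invented purely for illustration.
facts = {"has_feathers", "lays_eggs"}
rules = [({"has_feathers"}, "is_bird"),
         ({"is_bird", "lays_eggs"}, "builds_nest")]

# Forward chaining: keep firing any rule whose premises all hold,
# adding its conclusion, until nothing new can be derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True
print(facts)   # now also contains the derived facts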

Would I be correct in thinking that a planning agent is a type of expert
system?

Thanks
Allan

Allan
7/31/2003 8:50:48 AM
Ashlie Benjamin Hocking <abh2n@cobra.cs.Virginia.EDU> wrote in message news:<bg7eq0$qeo$1@mulga.cs.mu.OZ.AU>...
> Ironically, when all is said and done, NNs may end up being the
> only enduring part of AI (although I'm sure to get some howls here),
> due to the perception, as Parnas wrote, that AI is those problems
> that we don't know how to solve. Therefore, once we understand those
> problems, they cease to be AI.

NNs may end up being the only enduring part of AI? May I ask you why
mimicking the form of the CNS is more likely to succeed than mimicking the
form of biological evolution, then? Why is such a cargo cult going to
achieve more than scientific analysis of problems?

The definition you cite from Parnas does not look scientific at all to
me. I thought AI had something to do with *intelligence*. You could
make that sort of definition for any field of science: "Physics is
about problems that we don't know how to solve. Therefore, Newtonian
mechanics is no longer physics." Judge for yourself how absurd that kind
of argument is, especially to a scientist of the field being harassed.

Let us assume we accomplished making a human-level AI. Once we
understand how that works, it ceases to be AI? No.

On the other hand, let's say you make a really simple learning program
that does optical character recognition and you make it part of an
engineering app like a hand-held computer. Then, does the learning
algorithm cease to be AI? I don't think so. Nevertheless, to people
who are not concerned with the algorithm, the whole application will
seem like mere engineering.

Thanks,

__
Eray Ozkural (exa) <erayo at cs.bilkent.edu.tr>

erayo
8/1/2003 8:55:52 AM
Allan Bruce wrote:
> 
> Hi there,
> 
> I made an appointment to see one of my lecturers a few days back and said it
> was going to be about AI.  When I got there, I asked him about Neural Nets
> and he told me that NNs are not considered a branch of AI.
> I didn't want to disagree but he did not explain why.

I'm disappointed: It was a good question and it deserved an answer.

>  Could someone please
> elaborate and explain why NNs are not considered a branch of AI?

It depends on how AI is defined.   I would define it as attempting to
model real intelligence, and with that definition, /artificial/ neural
nets (backprop, Kohonen, Hopfield, etc.) don't meet the definition
because they don't model how the brain actually works -- they do things
quite differently.   There is a fair amount known about the working of
neurons, and at least parts of the brain (e.g. the visual cortex),
and they work quite differently from ANNs.

Of course, my definition would tend to rule out a lot of GOFAI
as well, where there is no attempt to model cognition as it is
understood by cognitive scientists.

HTH.

> Thanks
> Allan

-- 
:ugah179 (home page: http://web.onetel.com/~hibou/)

"I'm outta here.  Python people are much nicer."
                -- Erik Naggum (out of context)

Donald
8/3/2003 1:19:26 AM
Ashlie Benjamin Hocking wrote:
> 
> "Allan Bruce" <abruce@TAKEAWAYcsd.abdn.ac.uk> writes:
> 
> > Hi there,
> >
> > I made an appointment to see one of my lecturers a few days back and said it
> > was going to be about AI.  When I got there, I asked him about Neural Nets
> > and he told me that NNs are not considered a branch of AI.
> > I didn't want to disagree but he did not explain why.  Could someone please
> > elaborate and explain why NNs are not considered a branch of AI?
> >
> > Thanks
> > Allan
> >
> 
> NNs are indeed a branch of AI. This is not only my opinion, but
> evidently also that of Russell & Norvig since they devote a section in
> their book "Artificial Intelligence: A Modern Approach" to it. There
> are some (older people especially) who think that NNs are not part of
> AI, because they think NNs are trivial. This is largely due, IMO, to
> the work of Minsky & Papert (in their book Perceptrons).

Concepts as complex as AI are fairly fuzzy and there's room for
disagreement about definitions. 

For me, it's nothing to do with age, and nothing to do with
the capabilities of Rosenblatt's perceptrons, and everything to
do with how ANNs work.   Many of the ANN algorithms have counterparts
in statistics (e.g. non-linear regression, clustering algorithms).
Would you consider a program which embodies those techniques, written
in, say, SAS, to be AI?
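
To make the statistics connection concrete, here is a small Python/NumPy
sketch (all data and parameters invented for illustration).  With the
neighbourhood radius shrunk to zero, the "Kohonen-style" winner-take-all
update below is exactly online k-means, a standard clustering method
from statistics:

import numpy as np

rng = np.random.default_rng(0)
# Two synthetic clusters, centred near (-3,-3) and (+3,+3)
data = rng.normal(size=(500, 2)) + rng.choice([-3.0, 3.0], size=(500, 1))

k, lr = 2, 0.05
protos = data[rng.choice(len(data), size=k, replace=False)].copy()
for x in data:
    winner = np.argmin(np.sum((protos - x) ** 2, axis=1))   # closest unit
    protos[winner] += lr * (x - protos[winner])              # move it closer
print(protos)   # two prototypes, near the two cluster centres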

> Since that
> time, there have been numerous developments in the field - although
> even then they were only trivial if you limited yourself to analyzing
> networks that were easily analyzed (i.e., the trivial
> ones).

No one's denying there's been considerable progress since Perceptrons
was written.

> Ironically, when all is said and done, NNs may end up being the
> only enduring part of AI (although I'm sure to get some howls here),
> due to the perception, as Parnas wrote, that AI is those problems
> that we don't know how to solve. Therefore, once we understand those
> problems, they cease to be AI. (This is meant to be flippant, but
> it still explains how things such as playing chess and speech
> recognition are more and more NOT considered to be AI, although this
> depends on who is doing the considering.)

The successful chess programs largely use brute-force look-ahead and
memorization of textbook openings.   While good chess players do indeed
use look-ahead and memorization, they also use much more complex
reasoning which is /not/ modelled in those chess programs.   The chess
programs which /do/ model this don't yet play at grandmaster level.

Parnas's point is often made, and I think it may be based on a
misunderstanding of the history of AI.   Long ago, it /was/
thought that people reasoned using search and logic, and back
then programs such as General Problem Solver were considered AI;
later they realized that people needed vast amounts of knowledge,
and programs such as Mycin were considered AI, but GPS no longer was;
later yet they realized that people used common sense, and how to
do /that/ became the hottest topic in AI.   A program like GPS
was AI at the time, because back then it attempted to model
human reasoning; but now that we know that people don't reason
in that way, such a program would no longer be considered AI.

-- 
:ugah179 (home page: http://web.onetel.com/~hibou/)

"I'm outta here.  Python people are much nicer."
                -- Erik Naggum (out of context)

Donald
8/3/2003 1:20:25 AM
Ashlie Benjamin Hocking wrote:
>> Ironically, when all is said and done, NNs may end up being the
>> only enduring part of AI (although I'm sure to get some howls here),
>> due to the perception, as Parnas wrote, that AI is those problems
>> that we don't know how to solve. Therefore, once we understand those
>> problems, they cease to be AI.

Eray Ozkural wrote:
> NNs may end up being the only enduring part of AI? May I ask you why
> mimicking the form of the CNS is more likely to succeed than mimicking the
> form of biological evolution, then? Why is such a cargo cult going to
> achieve more than scientific analysis of problems?
> 
> The definition you cite from Parnas does not look scientific at all to
> me. I thought AI had something to do with *intelligence*. You could
> make that sort of definition for any field of science: "Physics is
> about problems that we don't know how to solve. Therefore, Newtonian
> mechanics is no longer physics." Judge for yourself how absurd that kind
> of argument is, especially to a scientist of the field being harassed.
> 
> Let us assume we accomplished making a human-level AI. Once we
> understand how that works, it ceases to be AI? No.
> 
> On the other hand, let's say you make a really simple learning program
> that does optical character recognition and you make it part of an
> engineering app like a hand-held computer. Then, does the learning
> algorithm cease to be AI? I don't think so. Nevertheless, to people
> who are not concerned with the algorithm, the whole application will
> seem like mere engineering.

The observation by Parnas was made decades ago, and seems even more
apt to me now. This was meant to be tongue-in-cheek and nothing
more. Many people actually in AI do indeed consider many solved
problems (i.e., understood) to be AI, and rightly so. However, these
solved problems (such as speech and handwriting recognition) are
slowly becoming less mysterious and so less appealing to many of us
who study AI. My comment about NNs, very few of which mimic the CNS
very realistically, IMO, was meant to refer to the intractability of
understanding NNs that are actually capable of solving interesting
problems. Sure, one can talk about Ising spin-glasses and minimizing
energy states, finding minimal entropies, etc., but the _truly_
interesting neural networks are not currently strongly amenable to
such analysis. I will agree that genetic programming and GAs could
also be said to fall into such a category.

Furthermore, I do consider myself a scientist of (both) fields being
"harassed", although I willingly admit to still having much to
learn. Very few people think that "Physics is about problems that we
don't know how to solve", but certainly there is that perception about
AI (rightly or wrongly). Consider this: Do you consider DFS and BFS to
be AI? What if these are applied to a program to play a perfect game
of tic-tac-toe (AKA noughts and crosses)? If you do not think that DFS
(or BFS) is AI, then what about minimax? These are all covered in most
AI textbooks, although some might argue only as introductory material
to understand before tackling "true" AI. I'm sure many people in the
field of AI would consider these all to be (trivial) examples of AI,
but I would wager that to most people outside the field of AI, but
within the field of CS, DFS and BFS are nothing like AI.
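
As a concrete (toy) illustration of the tic-tac-toe/minimax point, here
is a short Python sketch -- the recursion is nothing but DFS over the
game tree, which is exactly why one can ask whether it deserves the
label AI:

def winner(b):
    lines = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7),
             (2,5,8), (0,4,8), (2,4,6)]
    for i, j, k in lines:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    # Best achievable score for X (+1 win, 0 draw, -1 loss),
    # found by exhaustive DFS over the game tree.
    w = winner(b)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, c in enumerate(b) if c == ' ']
    if not moves:
        return 0
    other = 'O' if player == 'X' else 'X'
    scores = [minimax(b[:i] + player + b[i+1:], other) for i in moves]
    return max(scores) if player == 'X' else min(scores)

print(minimax(' ' * 9, 'X'))   # 0: perfect play from both sides is a draw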

Additionally, your last sentence proves the very point I was trying to
make (albeit not very well, evidently).

Finally, let me reiterate - the comment by Parnas was meant to be
tongue-in-cheek - definitely by me, and I suspect even by him when he
made it. Don't take it too seriously. (For those not in the know,
David Lorge Parnas is not a researcher in the field of AI, but is a
highly respected professor and software engineer, author of "A rational
design process: how and why to fake it" as well as numerous other
articles and books.) Also, here's the exact quote from Parnas: "[the
first of two definitions of AI] The use of computers to solve problems
that previously could be solved only by applying human intelligence."
Followed later in the same article by "Something can fit the
definition of AI-1 today, but, once we see how the program works and
understand the problem, we will not think of it as AI anymore." (From
Parnas, D. L., Software Aspects of Strategic Defense Systems, American
Scientist, Vol. 73, No. 5, Sept.-Oct. 1985, pp. 432-440.)

---------------------------------------------------------------------
                          | "Good and evil both increase at compound
Ben Hocking, Grad Student | interest. That is why the little
hocking@cs.virginia.edu   | decisions you and I make every day are of
                          | such infinite importance." - C. S. Lewis
---------------------------------------------------------------------

Ashlie
8/3/2003 1:23:41 AM
Ashlie Benjamin Hocking <abh2n@cobra.cs.Virginia.EDU> wrote:
>> My comment about NNs, very few of which mimic the CNS
>> very realistically,
 
erayo@bilkent.edu.tr (Eray Ozkural  exa) wrote:
> I wonder which algorithms (not models, that isn't the real question)
> mimic the CNS very realistically. It looks like only Hebbian learning
> comes close to biological plausibility (like used in Kohonen networks)

Well, _none_ of them mimic the entire CNS very realistically (unless
you limit yourself to squids, etc.). Levy neural networks do an
excellent job (in my very _biased_ opinion) of modelling the CA3
region of the mammalian hippocampus.

Ashlie Benjamin Hocking <abh2n@cobra.cs.Virginia.EDU> wrote:
>>  IMO, was meant to refer to the intractability of
>> understanding NNs that are actually capable of solving interesting
>> problems.

erayo@bilkent.edu.tr (Eray Ozkural  exa) wrote:
> You sound as if you think ANN learning algorithms are the only methods
> that are actually capable of solving interesting problems, but that is
> not the case. There are a variety of methods that are capable of such
> feats. In fact, Hans Moravec said he didn't use large-scale ANNs
> because they didn't fare well for his perception algorithms. I can
> understand that because training algorithms are too inefficient to
> scale up.

I'll admit to sounding that way, and, no, I don't really believe
that. My reaction was more an over-reaction to the idea (that I've
heard too many times) that NNs are not really AI. My attempt was to
turn a weakness of NNs (the extreme difficulty of analyzing them) into
a strength (through the Parnas argument).
 
erayo@bilkent.edu.tr (Eray Ozkural  exa) wrote:
> Yours is an intriguing thought. What are those interesting problems? I
> don't see neural networks handling high-dimensional large-scale
> complex machine learning problems any time soon (at least not the
> current algorithms). I suppose that would be your definition of
> "interesting" in the context of machine learning.

There are several different classes of interesting problems, but I'm
sure we agree on the most interesting of those. (E.g., passing the
Turing test - which I'm sure you agree won't happen any time soon with
_any_ algorithm. "Soon" means within a couple decades - I won't
predict for or against passing the Turing test 20 years from now.) A
very, very biased class is understanding the human brain. As mentioned
earlier, the Levy NNs definitely are already contributing to this. (By
making predictions that can and have been verified through
neuroscience.)

As for other interesting problems solved by neural nets, I enjoy the
work of Elman with respect to learning parts of speech without direct
input that such things even exist. He uses a multi-layer perceptron
with "context neurons" and back-prop. The "context neurons" are what
help the network to maintain state, and hence, be aware of chronology,
etc.
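
A rough sketch of that architecture in Python/NumPy (the weights here
are random just to show the wiring; Elman trains them with back-prop):

import numpy as np

def elman_step(x, context, W_xh, W_hh, W_hy):
    # One step of a simple recurrent (Elman) network: the "context
    # neurons" are simply the previous hidden activations, fed back
    # in alongside the current input -- that is what carries state.
    h = np.tanh(W_xh @ x + W_hh @ context)
    y = W_hy @ h
    return y, h                      # h becomes the next context

rng = np.random.default_rng(1)
n_in, n_hid = 5, 8
W_xh = rng.normal(scale=0.1, size=(n_hid, n_in))
W_hh = rng.normal(scale=0.1, size=(n_hid, n_hid))
W_hy = rng.normal(scale=0.1, size=(n_in, n_hid))

context = np.zeros(n_hid)
for t in range(3):                   # feed a short "sentence"
    x = np.eye(n_in)[t]              # one-hot stand-in for a word
    y, context = elman_step(x, context, W_xh, W_hh, W_hy)
print(y)                             # prediction for the next "word"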

A more trivial, but still interesting, example is that of handwriting
recognition - used in today's Palm Pilots. (Something that would not
have been considered trivial at all 10-20 years ago, I believe - just
to beat a dead horse.)

>> Sure, one can talk about Ising spin-glasses and minimizing
>> energy states, finding minimal entropies, etc., but the _truly_
>> interesting neural networks are not currently strongly amenable to
>> such analysis.
> 
> This, I believe, is a misleading view of neural networks, as it seems to
> assign ANNs a special status in theoretical analysis. One should not
> forget that an MLFF network is basically a general purpose computer.
> Then, MLFF learning (with fixed topology) is a search in a *subset* of
> the computation space, i.e. a (small) function space. Algorithms such
> as error back-propagation learning seek the proper function to model
> the I/O.

My point was to argue pretty much the same. I.e., although certain
classes of NNs can be analyzed, by the time you've simplified them
enough to analyze, you've removed much of what makes them interesting.
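
A tiny back-prop sketch in Python/NumPy of that "search in a function
space" reading: the topology (2-4-1) is fixed, and gradient descent on
the weights walks through the corresponding function space until it
finds a function matching the I/O pairs (XOR here; all numbers are
illustrative):

import numpy as np

rng = np.random.default_rng(2)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)    # XOR targets
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)      # fixed 2-4-1 topology
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sig(X @ W1 + b1)                 # forward pass
    y = sig(h @ W2 + b2)
    dy = (y - t) * y * (1 - y)           # backward pass (squared error)
    dh = (dy @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ dy; b2 -= 0.5 * dy.sum(axis=0)
    W1 -= 0.5 * X.T @ dh; b1 -= 0.5 * dh.sum(axis=0)
print(np.round(y.ravel(), 2))   # typically close to [0, 1, 1, 0]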

> Now, it should not come as a surprise why we cannot "see inside" those
> neural networks that are learnt. A hundred lines of computer code has the
> same properties. The function it corresponds to can be so complex that
> it can avoid analysis for years (especially if that is a high level
> programming language).
> 
> Every algorithm is a constructive proof. If you think about it, a lot
> of the state-of-the-art algorithms (like say, all-to-all shortest path
> algorithms) that require very elaborate mathematical understanding,
> are just a few 10s of lines. Then, it is not hard to see why it would
> be hard to analyze a large enough ANN. But give me a small enough ANN
> with numbers on it, and I will tell you what it does. (It's not
> different than giving me a piece of machine code, and asking me what
> the algorithm does)
> 
>> I will agree that genetic programming and GAs could
>> also be said to fall into such a category.
> 
> In fact, all CS falls into that category as indicated above.

Touché. In fact, the aforementioned Parnas would make exactly that
argument, I'm sure. However, there are definitely differences here, if
only of scale. Most programmers at least have the illusion that they
know what their programs are doing and why. Although people working
with NNs can give general arguments about why one NN architecture
works better than another to solve a particular class of problems,
they'll be hard-pressed to explain exactly why a (large) NN fails and
another succeeds when the two have the same architecture.

>> Consider this: Do you consider DFS and BFS to
>> be AI? 
> 
> DFS and BFS are graph algorithms. They are mentioned in AIMA as
> "uninformed search algorithms", because that's what they do:
> systematic searching in graphs. They are there to show the
> fundamentals of search algorithms, to give a complete picture of the
> subject. Therefore, I would say they are part of the AI subject, but they
> are not unique to AI, since they are two fundamental algorithms in
> general algorithms research.
> 
> Do you consider a gradient-descent search in a function space to be
> AI?

I think gradient-descent search would fall into the same
category, namely "[it is] part of the AI subject, but [it is] not
unique to AI".

So, in summary, I rescind many of my previous statements, but I stand
by my basic premise: Not only are NNs part of AI, they are a
fundamental part of AI and will continue to be for the foreseeable
future.

---------------------------------------------------------------------
                          | "Good and evil both increase at compound
Ben Hocking, Grad Student | interest. That is why the little
hocking@cs.virginia.edu   | decisions you and I make every day are of
                          | such infinite importance." - C. S. Lewis
---------------------------------------------------------------------

Ashlie
8/6/2003 12:34:03 AM
Anthony Bucci <abucci@cs.brandeis.edu> wrote in message news:<bgpifj$br7$1@mulga.cs.mu.OZ.AU>...
> This statement about ANNs is not right.  You can implement a general
> purpose computer inside of a specially-designed neural network.  You can
> make some appeal to the fact that anything happening inside a general
> purpose computer is itself a computer.  But, neural networks are best
> thought of as functions, as classifiers, as models of brains.  They are
> independent of computing devices -- you can implement them directly in
> hardware, in tinker toys, or in chemistry, quite independently of
> general-purpose computation.

The moment you do that with hardware, tinker toys, or chemistry, you've
built a computing device, much the same as all kinds of computers. I
can build you an x86-compatible computer from water pipes and valves
(or whatnot).

All computation is multiply realizable.

> > In fact, all CS falls into that category as indicated above.
> 
> This statement is very hard to understand, especially in light of what was
> said previously.  Hocking said that "interesting" neural nets are not
> amenable to the sorts of analyses which have been done thus far.  Fine,
> that specifies which neural networks s/he likes.  You talked about not
> being able to understand ANNs (which already seems off topic).  Now you're
> lumping CS into the same bin.  Am I understanding you rightly?  If so, I
> don't get it.  Much theoretical CS is very crisply argued and quite well
> understood, regardless of the complicatedness of other parts of CS.

It's not too hard to understand.

I will give you an ML program of 200 lines that does something complex.
Can you figure out what it does mathematically? The answer is the same
with neural networks. You can do some sorts of analysis, but nothing
*guaranteed* to give you the answer. And among all possible algorithms
of that program size, it is likely that a computer scientist won't be
able to tell the function of 99% of the code. That too is a
basic property of all computation. I wish researchers took theory of
computation courses more seriously.

I maintain my position that ANNs do not possess the sort of special
significance "connectionists" want to assign to them. They are just
computers.

Regards,

__
Eray Ozkural

erayo
8/7/2003 1:37:08 AM
> The moment you do that with hardware, tinker toys, or chemistry, you've
> built a computing device, much the same as all kinds of computers. I
> can build you an x86-compatible computer from water pipes and valves
> (or whatnot).

The question is, what's the most "natural" description of the system.  If
you connect a bunch of pipes together, and it just so happens those pipes
can be used to compute something, that knowledge won't help you figure out
why your toilet won't flush.  There's a lower level of description
(plumbing) which is more likely to tell you something useful about your
toilet.

To put it another way, computation is an abstract process which ignores
the details of the substrate in which it's embedded.  If those substrate
details matter to what you're thinking about, as they do in ANN research,
then abstracting them away and thinking only of the computation is
ignoring the very essence of what you're trying to study.
 
> The answer is the same with neural networks. You can do some sorts of
> analysis, but nothing *guaranteed* to give you the answer.

I refer you to Ofer Melnik's PhD dissertation, in which he presents
algorithms for getting exact information about what an ANN is doing.  You
can see this at my lab's web site, http://demo.cs.brandeis.edu/ in the
"publications" section.  My take on the matter is that people in neural
nets disagree with you wholeheartedly.

> And among all possible algorithms of that program size, it is likely
> that a computer scientist won't be able to tell the function of 99%
> of the code. That too is a basic property of all computation. I
> wish researchers took theory of computation courses more seriously.

This is still very hard to understand.  A language needn't be dismissed as
flawed or useless simply because you can express uninterpretable things in
it.  For instance, in English I can say:  in over in the running above
Fred upside purple.  The fact that this is gibberish does not change that
you're able to understand everything else I've said.  So I still don't 
understand what you're getting at.
 
> I maintain my position that ANNs do not possess the sort of special
> significance "connectionists" want to assign to them. They are just
> computers.

ANNs come with extra baggage.  The entirety of the field is in the extra
baggage, not in the computational substrate.  Similarly, if I build a
computer out of yellow tinker toys, I'm missing the point if I focus my
attention on the fact that it's yellow -- color is incidental to what's
being demonstrated.

Anthony

Anthony
8/8/2003 12:07:57 AM
Anthony Bucci <abucci@cs.brandeis.edu> wrote in message news:<bgupkn$be9$1@mulga.cs.mu.OZ.AU>...
> The question is, what's the most "natural" description of the system.  If
> you connect a bunch of pipes together, and it just so happens those pipes
> can be used to compute something, that knowledge won't help you figure out
> why your toilet won't flush.  There's a lower level of description
> (plumbing) which is more likely to tell you something useful about your
> toilet. 

If I intend to make a computer, I can make a computer out of any kind
of physical substrate. Same with ANNs, which are just another
formulation of computation.

> > And among all possible algorithms of that program size, it is likely
> > that a computer scientist won't be able to tell the function of 99%
> > of the code. That too is a basic property of all computation. I
> > wish researchers took theory of computation courses more seriously.
> 
> This is still very hard to understand.  A language needn't be dismissed as
> flawed or useless simply because you can express uninterpretable things in
> it.  For instance, in English I can say:  in over in the running above
> Fred upside purple.  The fact that this is gibberish does not change that
> you're able to understand everything else I've said.  So I still don't 
> understand what you're getting at.
>  

I don't think this has anything to do with the language being flawed.
That is a natural property of computation. Any formal system as
powerful as a TM will necessarily be hard to analyze; it can get
complex very rapidly. ANNs are hard to analyze, sure; any machine code
is hard to analyze in exactly the same way.

The nice thing about ANNs is that they are quite uniform and they
represent a clear compositional structure, much like a primitive
functional language.
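
That compositional reading can be made literal in a few lines of
Python; everything below is illustrative, but it shows an ANN written
as plain function composition, like a primitive functional language:

import numpy as np

# A layer is just a function; a network is just a composition of them.
layer = lambda W, b: (lambda x: np.tanh(W @ x + b))
compose = lambda f, g: (lambda x: f(g(x)))

rng = np.random.default_rng(3)
f1 = layer(rng.normal(size=(4, 2)), np.zeros(4))   # 2 -> 4
f2 = layer(rng.normal(size=(3, 4)), np.zeros(3))   # 4 -> 3
net = compose(f2, f1)                # the whole network, one function
print(net(np.array([0.5, -1.0])))    # a 3-vector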

> > I maintain my position that ANNs do not possess the sort of special
> > significance "connectionists" want to assign to them. They are just
> > computers.
> 
> ANNs come with extra baggage.  The entirety of the field is in the extra
> baggage, not in the computational substrate.  Similarly, if I build a
> computer out of yellow tinker toys, I'm missing the point if I focus my
> attention on the fact that it's yellow -- color is incidental to what's
> being demonstrated.

Your computer-toy analogy is misleading.

I have claimed that ANNs are *just* computers. It turns out that an
ANN is a computer, which means that ANNs can do anything other
computers can do and vice versa! (A Universal Turing Machine can
simulate all of them, that is.)

You have claimed that they are something else in addition. If that
were the case, there wouldn't be a formal translation of an ANN model
into another computational model. My mathematical knowledge says that
if you can simulate ANNs in software, that cannot be the case.
Therefore, you must be wrong.

I must challenge you to show what that "extra baggage" that makes ANNs
so special is! Please respond in the framework of the above two
paragraphs.

To say that ANNs are "more powerful" or "something else" than
computation is highly misleading.

Any machine learning algorithm that produces Turing-complete models
has exactly the same affordances as ANN learning algorithms (say,
genetic programming). Your arguments in fact comprise much of the
confusion that started the camps of "connectionism" and "symbolicism".

My standard reply to this confusion was crafted without reference to
other works, only based on my knowledge of theory of computation.
However, I then came across an essay on Marvin Minsky's home page that
said quite similar things. I suggest you read it:
http://web.media.mit.edu/~minsky/papers/SymbolicVs.Connectionist.html

Regards,

__
Eray Ozkural
PhD student
CS Dept., Bilkent Univ., Ankara

erayo
8/9/2003 7:42:01 AM