Hypothesis: Paradoxes of self-reference such as the Halting Problem are errors of reasoning

Basis:
Lemma01: Meaning can only be correctly specified within an acyclic 
directed graph:

a) Montague [meaning postulates] must be specified within acyclic 
di-graphs. (connections between elements)

b) Connections between Montague [meaning postulates] (principle of 
compositionality) must not produce cycles.

Lemma02: The [meaning postulate] of all self-reference paradoxes can 
only be fully specified within a di-graph that contains cycles.

Lemma03: The Halting Problem and the Liar Paradox are both 
self-reference paradoxes.

Conclusion:
The Halting Problem and the Liar Paradox are errors of 
specification/reasoning because their complete [meaning postulates] 
necessarily always contain cycles.
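To make Lemma02 concrete, here is a minimal C sketch (an illustration
only; the node names and edges are assumptions for the example, not part
of the lemmas). It encodes one ordinary composition and the Liar
sentence as a di-graph and looks for cycles with a depth-first search:

#include <stdio.h>

#define N 3               /* nodes: 0 = "snow", 1 = "is white", 2 = Liar */
static int edge[N][N];    /* edge[a][b] = 1 if a's meaning depends on b  */
static int visiting[N], done[N];

static int has_cycle(int v)       /* DFS: revisiting a grey node = cycle */
{
    if (visiting[v]) return 1;
    if (done[v])     return 0;
    visiting[v] = 1;
    for (int w = 0; w < N; w++)
        if (edge[v][w] && has_cycle(w)) return 1;
    visiting[v] = 0;
    done[v] = 1;
    return 0;
}

int main(void)
{
    edge[0][1] = 1;   /* "snow is white": meaning composed acyclically */
    edge[2][2] = 1;   /* Liar sentence: its meaning depends on itself  */
    printf("snow-is-white cyclic? %d\n", has_cycle(0));   /* prints 0 */
    printf("liar cyclic?          %d\n", has_cycle(2));   /* prints 1 */
    return 0;
}

On this encoding the Liar's [meaning postulate] is exactly the di-graph
that Lemma02 describes: it cannot be drawn without the self-loop.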
Peter
11/19/2013 11:25:27 AM

This is a re-post because I messed up the headers...

Peter Olcott <OCR4Screen> writes:

> Basis:
> Lemma01: Meaning can only be correctly specified within an acyclic
> directed graph:
>
> a) Montague [meaning postulates] must be specified within acyclic
> di-graphs. (connections between elements)
>
> b) Connections between Montague [meaning postulates] (principle of
> compositionality) must not produce cycles.
>
> Lemma02: The [meaning postulate] of all self-reference paradoxes can
> only be fully specified within a di-graph that contains cycles.
>
> Lemma03: The Halting Problem and the Liar Paradox are both
> self-reference paradoxes.
>
> Conclusion:
> The Halting Problem and the Liar Paradox are errors of
> specification/reasoning because their complete [meaning postulates]
> necessarily always contain cycles.

That's a fine collection of linguistic definitions, and I hope you find
the conclusion comforting.  However, the halting theorem remains a
theorem of mathematics, no matter how you label it.

-- 
Ben.
Ben
11/19/2013 2:00:41 PM
On 11/19/2013 8:00 AM, Ben Bacarisse wrote:
> This is a re-post because I messed up the headers...
>
> Peter Olcott <OCR4Screen> writes:
>
>> Basis:
>> Lemma01: Meaning can only be correctly specified within an acyclic
>> directed graph:
>>
>> a) Montague [meaning postulates] must be specified within acyclic
>> di-graphs. (connections between elements)
>>
>> b) Connections between Montague [meaning postulates] (principle of
>> compositionality) must not produce cycles.
>>
>> Lemma02: The [meaning postulate] of all self-reference paradoxes can
>> only be fully specified within a di-graph that contains cycles.
>>
>> Lemma03: The Halting Problem and the Liar Paradox are both
>> self-reference paradoxes.
>>
>> Conclusion:
>> The Halting Problem and the Liar Paradox are errors of
>> specification/reasoning because their complete [meaning postulates]
>> necessarily always contain cycles.
>
> That's a fine collection of linguistic definitions, and I hope you find
> the conclusion comforting.  However, the halting theorem remains a
> theorem of mathematics, no matter how you label it.
>

The language of mathematics is insufficiently expressive to discern the 
error of the fallacy of self-reference.

It cannot be correctly concluded that an error does not exist on the 
basis that this error cannot be expressed within the limitations of any 
specific mode of expression such as the language of mathematics.

It can only be correctly concluded that, within this specific mode of 
expression, this error cannot be seen.
Peter
11/19/2013 2:17:47 PM
Peter Olcott wrote:

> The language of mathematics is insufficiently expressive to discern the
> error of the fallacy of self-reference.

The language of mathematics is as expressive as any language.  Why? 
Because the language of mathematics includes the natural languages of 
mathematicians.


-- 
Madam Life's a piece in bloom,
Death goes dogging everywhere:
She's the tenant of the room,
He's the ruffian on the stair.
Peter
11/19/2013 4:14:24 PM
Peter Olcott <OCR4Screen> writes:

> On 11/19/2013 8:00 AM, Ben Bacarisse wrote:
>> This is a re-post because I messed up the headers...
>>
>> Peter Olcott <OCR4Screen> writes:
<snip>
>>> Conclusion:
>>> The Halting Problem and the Liar Paradox are errors of
>>> specification/reasoning because their complete [meaning postulates]
>>> necessarily always contain cycles.
>>
>> That's a fine collection of linguistic definitions, and I hope you find
>> the conclusion comforting.  However, the halting theorem remains a
>> theorem of mathematics, no matter how you label it.
>
> The language of mathematics is insufficiently expressive to discern
> the error of the fallacy of self-reference.

The same sets are decidable.  The same sets are undecidable.  Nothing
has changed since you first started this nearly a decade ago[1].  Had
you not been distracted by making your fortune from on-screen OCR[2] you
might have something more to show for the years of pondering these
questions.

For those not familiar with the history, Peter Olcott starts from a
theological position and does whatever is needed to resolve the ensuing
conflict:

Message-ID: <L7mdnZ0QUroncUbSnZ2dnUVZ_radnZ2d@giganews.com>
| If God can not solve the Halting Problem, then there is something
| wrong with the problem.

Of course, that does not mean he's wrong (he's wrong for very
down-to-earth reasons) but it does give some hints about how open to
reason he is on this matter.  Given the flexibility in the notion of
'god' it would surely have been simpler to decide that he *can* solve
the halting problem and move on from there.  Since that, sadly, seems
not to be an option, the solution is to find a way to label the question
as invalid or fallacious.  Although this is misleading, it does not, of
course, alter the question in any substantive way.  The answer is still
the same.

[1] It's been going on since at least 2004, but it may be even longer
than that.

[2] http://www.ocr4screen.com/Download.html A summer deadline has been
missed, it seems.  Oh well.
-- 
Ben.
Ben
11/19/2013 5:03:15 PM
On 11/19/2013 10:14 AM, Peter Percival wrote:
> Peter Olcott wrote:
>
>> The language of mathematics is insufficiently expressive to discern the
>> error of the fallacy of self-reference.
>
> The language of mathematics is as expressive as any language.  Why?
> Because the language of mathematics includes the natural languages of
> mathematicians.
>
>
The language of humans (natural language), although sufficiently 
expressive to show things such as the fallacy of self-reference, has not 
yet been made precise enough for this error to be discerned by those 
accustomed to using conventional mathematical notation (or most other 
notations).

The most typical response is along the lines that the inability to 
express this error using conventional mathematical notation indicates 
that the error does not actually exist.

What is needed is a sufficiently expressive and completely precise 
notational convention. Richard Montague provided the foundational basis 
for such a notational system.

I propose one more detail to be added to the system proposed by Richard 
Montague: Meanings can only be correctly connected together (principle 
of compositionality) using an acyclic directed graph.

Fallacies of self-reference cannot be completely specified within any 
acyclic directed graph; this is what indicates that they are erroneous.


Peter
11/19/2013 5:04:05 PM
On 11/19/2013 11:03 AM, Ben Bacarisse wrote:
> [2]http://www.ocr4screen.com/Download.html  A summer deadline has been
> missed it seems.  Oh well,

http://pixels2words.com/Download.html
Peter
11/19/2013 5:21:04 PM
On 11/21/2013 2:49 AM, Franz Gnaedinger wrote:
> On Tuesday, November 19, 2013 12:25:27 PM UTC+1, Peter Olcott wrote:
>> Basis:
>>
>> Lemma01: Meaning can only be correctly specified within an acyclic
>> directed graph:
>>
>> a) Montague [meaning postulates] must be specified within acyclic
>> di-graphs. (connections between elements)
>>
>> b) Connections between Montague [meaning postulates] (principle of
>> compositionality) must not produce cycles.
>>
>> Lemma02: The [meaning postulate] of all self-reference paradoxes can
>> only be fully specified within a di-graph that contains cycles.
>>
>> Lemma03: The Halting Problem and the Liar Paradox are both
>> self-reference paradoxes.
>>
>> Conclusion:
>>
>> The Halting Problem and the Liar Paradox are errors of
>> specification/reasoning because their complete [meaning postulates]
>> necessarily always contain cycles.
>
> Last time I gave you the advice to look out for
> a more modest but realistic application of your
> knowledge-condensing machine, but you said No,
> it must be the machine of All Knowledge that goes
> against Goedel's proved theorems.

I do not think that is what I said; your paraphrase seems incorrect.
What I said would have been something along the lines that we must first 
find the inherent essential structure of the universal set of all 
knowledge before we can correctly begin composing any subset of this 
knowledge.

> Meanwhile someone
> else did what you refused to consider, a teenager,
> by then seventeen years old: he wrote an app that
> condenses online articles into summaries of four
> hundred words - you can read the summary, and if you
> like it you can download the entire article, saves
> you a lot of reading time. He sold his app to a big
> company, and made a lot of money. While you go on
> relying on the magic of words and dreaming of your
> God machine that will undo Goedel.
>

http://en.wikipedia.org/wiki/Cyc
http://www.cyc.com/

I am searching for the most efficient and effective strategy to complete 
the CYC project such that it can by itself compose the complete meaning 
postulates for any and all knowledge not yet contained within its [boot 
strap] knowledge base.
Peter
11/21/2013 11:36:10 AM
In article <HKSdnc8L-5O0bRDPnZ2dnUVZ_radnZ2d@giganews.com>,
 Peter Olcott <OCR4Screen> wrote:

> I do not think that is what I said, your paraphrase seems incorrect.
> What I said would have been something along the lines that we must first 
> find the inherent essential structure of the universal set of all 
> knowledge before we can correctly begin correctly composing any subset 
> of this knowledge.

Or, put another way, we need to define what intelligence is before we 
can seriously attempt to construct it artificially.  I've covered this 
many times in past discussions.  My current thinking remains that we 
function as Expectation Engines.

> http://en.wikipedia.org/wiki/Cyc
> http://www.cyc.com/
> 
> I am searching for the most efficient and effective strategy to complete 
> the CYC project such that it can by itself compose the complete meaning 
> postulates for any and all knowledge not yet contained within its [boot 
> strap] knowledge base.

Give up on CYC.  Start from scratch.

-- 
iPhone apps that matter:    http://appstore.subsume.com/
My personal UDP list: 127.0.0.1, localhost, googlegroups.com, theremailer.net,
    and probably your server, too.
Doc
11/21/2013 6:28:34 PM
On 11/21/2013 12:28 PM, Doc O'Leary wrote:
> In article <HKSdnc8L-5O0bRDPnZ2dnUVZ_radnZ2d@giganews.com>,
>   Peter Olcott <OCR4Screen> wrote:
>
>> I do not think that is what I said, your paraphrase seems incorrect.
>> What I said would have been something along the lines that we must first
>> find the inherent essential structure of the universal set of all
>> knowledge before we can correctly begin correctly composing any subset
>> of this knowledge.
>
> Or, put another way, we need to define what intelligence is before we
> can seriously attempt to construct it artificially.

I don't think that this is entirely true. We only need the capability to 
reproduce the functional end results of intelligence; this may not 
require any understanding at all of the underlying implementation details.

The key aspect that I was addressing in my prior response is that any 
entirely adequate model of knowledge must make sure that this model 
works with every element in the set of all knowledge. This will not 
require omniscience; it will only require fully understanding the 
inherent structure of this set of all knowledge.

> I've covered this
> many times in past discussions.  My current thinking remains that we
> function as Expectation Engines.
>
>> http://en.wikipedia.org/wiki/Cyc
>> http://www.cyc.com/
>>
>> I am searching for the most efficient and effective strategy to complete
>> the CYC project such that it can by itself compose the complete meaning
>> postulates for any and all knowledge not yet contained within its [boot
>> strap] knowledge base.
>
> Give up on CYC.  Start from scratch.
>

Peter
11/21/2013 7:52:21 PM
A much more readable and competent attempt to deny undecidability of
halting was written by Eric C.R. Hehner. He is an established computer
scientist with a decent publication record. To me the most rewarding
part (rewarding in a perhaps weird sense) was when I temporarily ignored
the flaws in his reasoning and thought about the consequences of his
conclusion: what would change if he were right. Here:

  www.cs.toronto.edu/~hehner/PHP.pdf

The discussions in comp.theory eventually led me to do a literature
survey and research of my own on the proportion of hard instances among
all instances, given an incomplete halting tester. I found surprisingly
few papers; excluding one, they were surprisingly recent; they were
surprisingly diverse and unaware of each other; and the old one had been
cited surprisingly few times. After submitting the camera-ready version
but before the conference I realized that the part of my Theorem 9 about
type-B testers is wrong. The truth is formulated in the conference
slides and basically the opposite to what I claimed in the paper. Here
is an (uncorrected) extended version of the paper:

  http://arxiv.org/abs/1307.7066

The slides of the conference talk are here:

  http://www.cs.tut.fi/%7eava/ill_k.pdf

If anybody reading this is fluent in 40-year-old versions of recursive
function theory and willing to help me, I would like to discuss the
second result of Nancy Lynch mentioned in my survey. If it can be
carried over to programming languages, it would be a very strong result.

--- Antti Valmari ---

Antti
11/22/2013 12:09:39 PM
On 11/22/2013 5:45 AM, Franz Gnaedinger wrote:
> On Thursday, November 21, 2013 12:36:10 PM UTC+1, Peter Olcott wrote:
>> I do not think that is what I said, your paraphrase seems incorrect.
>> What I said would have been something along the lines that we must first
>> find the inherent essential structure of the universal set of all
>> knowledge before we can correctly begin correctly composing any subset
>> of this knowledge.
>>
>> I am searching for the most efficient and effective strategy to complete
>> the CYC project such that it can by itself compose the complete meaning
>> postulates for any and all knowledge not yet contained within its [boot
>> strap] knowledge base.
> Your machine already works, not condensing knowledge
> but generating feelings of grandiosity and omnipotence.
> Look at the words you are using.

> Building an actual
> machine and writing a computer program that really works,
I didn't say that. The most I hope to accomplish is finding a strategy 
that can provide some guidance for the architectural design of such a 
machine.

> although limited in application and performance, is far more
> satisfying than blowing verbal soap bubbles, invoking universal
> all encompassing total knowledge by means of word magic and
> nothing else, against Goedel's proved theorems.
I am only hoping to make some progress on the above. To discover the 
inherent structure of all knowledge would be called Compositionality 
within Linguistics.
http://plato.stanford.edu/entries/compositionality/

I have had some ideas that seemed to be insightful about this, yet so few 
people work in this field that I could find no one to critique them.

Another problem is that the conventional terminology within the field of 
Linguistics seems to divide things up in a way that greatly obscures 
rather than elucidates. Discovering exactly how the AtomicUnitsOfMeaning 
form larger meanings is stymied by this conventional terminology.

The ideas of the two paragraphs can be divided like this:
1) Determine the inherent structure of all knowledge
(Compositionality within Linguistics)

http://en.wikipedia.org/wiki/Bootstrapping
2) Determine the BootStrap subset of this knowledge
Peter
11/22/2013 2:04:22 PM
In article <ceidnRFFcYvp-RPPnZ2dnUVZ_qCdnZ2d@giganews.com>,
 Peter Olcott <OCR4Screen> wrote:

> On 11/21/2013 12:28 PM, Doc O'Leary wrote:
> > In article <HKSdnc8L-5O0bRDPnZ2dnUVZ_radnZ2d@giganews.com>,
> >   Peter Olcott <OCR4Screen> wrote:
> >
> >> I do not think that is what I said, your paraphrase seems incorrect.
> >> What I said would have been something along the lines that we must first
> >> find the inherent essential structure of the universal set of all
> >> knowledge before we can correctly begin correctly composing any subset
> >> of this knowledge.
> >
> > Or, put another way, we need to define what intelligence is before we
> > can seriously attempt to construct it artificially.
> 
> I don't think that this is entirely true. We only need the capability to 
> reproduce the functional end results of intelligence, this may not 
> require any understanding at all of the underlying implementation details.

You are wrong.  Your way of thinking is precisely why very little 
progress has been made in AI in decades.  Oh, sure, we may soon have 
some "functional end results" like self-driving cars, but they have 
*nothing* to do with artificial intelligence.  Watson was just another 
such cheat.  The Turing Test itself is a cop-out.

> The key aspect that I was addressing in my prior response is that any 
> entirely adequate model of knowledge must make sure that this model 
> works with every element in the set of all knowledge. This will not 
> require omniscience it will only require fully understanding the 
> inherent structure of this set of all knowledge.

Yes, exactly what I referred to.  The "inherent structure" of knowledge 
with respect to intelligence is a core issue that seldom gets discussed.  
I have often called out the proponents who hand wave "learning" as the 
AI panacea, because they never seem to be able to explain *what* is 
fundamentally being "learned".  And, something I maintain is just as 
important, what happens when an intelligent system determines it is 
*wrong* about something.

-- 
iPhone apps that matter:    http://appstore.subsume.com/
My personal UDP list: 127.0.0.1, localhost, googlegroups.com, theremailer.net,
    and probably your server, too.
Doc
11/22/2013 6:21:24 PM
On 11/22/2013 12:21 PM, Doc O'Leary wrote:
> In article <ceidnRFFcYvp-RPPnZ2dnUVZ_qCdnZ2d@giganews.com>,
>   Peter Olcott <OCR4Screen> wrote:
>
>> On 11/21/2013 12:28 PM, Doc O'Leary wrote:
>>> In article <HKSdnc8L-5O0bRDPnZ2dnUVZ_radnZ2d@giganews.com>,
>>>    Peter Olcott <OCR4Screen> wrote:
>>>
>>>> I do not think that is what I said, your paraphrase seems incorrect.
>>>> What I said would have been something along the lines that we must first
>>>> find the inherent essential structure of the universal set of all
>>>> knowledge before we can correctly begin correctly composing any subset
>>>> of this knowledge.
>>> Or, put another way, we need to define what intelligence is before we
>>> can seriously attempt to construct it artificially.
>> I don't think that this is entirely true. We only need the capability to
>> reproduce the functional end results of intelligence, this may not
>> require any understanding at all of the underlying implementation details.
> You are wrong.  Your way of thinking is precisely why very little
> progress has been made in AI in decades.  Or, sure, we may soon have
> some "functional end results" like self-driving cars, but they have
> *nothing* to do with artificial intelligence.  Watson was just another
> such cheat.  The Turing Test itself is a cop-out.

I would tend to agree with your assessment of Watson and the Turing test.
What I am proposing is that once the structure of thought is fully 
understood, human-like reasoning will be (at least mostly) entailed 
by the details of this structure.

>
>> The key aspect that I was addressing in my prior response is that any
>> entirely adequate model of knowledge must make sure that this model
>> works with every element in the set of all knowledge. This will not
>> require omniscience it will only require fully understanding the
>> inherent structure of this set of all knowledge.
> Yes, exactly what I referred to.  The "inherent structure" of knowledge
> with respect to intelligence is a core issue that seldom gets discussed.
It is good to have agreement on this crucial point.

> I have often called out the proponents who hand wave "learning" as the
> AI panacea, because they never seem to be able to explain *what* is
> fundamentally being "learned".
Exactly!
Everyone seems to be skipping this fundamental prerequisite.

> And, something I maintain is just as
> important, what happens when when an intelligent system determines it is
> *wrong* about something.

Yes, learning from its mistakes.
Peter
11/22/2013 7:25:03 PM
sci.lang removed

Peter Olcott wrote:

> Lemma03: The Halting Problem and the Liar Paradox are both
> self-reference paradoxes.

Why do you call the halting problem a self-reference paradox?


-- 
Madam Life's a piece in bloom,
Death goes dogging everywhere:
She's the tenant of the room,
He's the ruffian on the stair.
Peter
11/22/2013 7:28:19 PM
On 11/22/2013 1:28 PM, Peter Percival wrote:
> sci.lang removed
>
> Peter Olcott wrote:
>
>> Lemma03: The Halting Problem and the Liar Paradox are both
>> self-reference paradoxes.
>
> Why do you call the halting problem a self-reference paradox?
>
>
http://plato.stanford.edu/entries/self-reference/#ConConProCom
Peter
11/22/2013 8:30:00 PM
On Thursday, November 21, 2013 11:36:10 AM UTC, Peter Olcott wrote:
> On 11/21/2013 2:49 AM, Franz Gnaedinger wrote:
> > On Tuesday, November 19, 2013 12:25:27 PM UTC+1, Peter Olcott wrote:
> >> Basis:
> >>
> >> Lemma01: Meaning can only be correctly specified within an acyclic
> >> directed graph:
> >>
> >> a) Montague [meaning postulates] must be specified within acyclic
> >> di-graphs. (connections between elements)
> >>
> >> b) Connections between Montague [meaning postulates] (principle of
> >> compositionality) must not produce cycles.
> >>
> >> Lemma02: The [meaning postulate] of all self-reference paradoxes can
> >> only be fully specified within a di-graph that contains cycles.
> >>
> >> Lemma03: The Halting Problem and the Liar Paradox are both
> >> self-reference paradoxes.
> >>
> >> Conclusion:
> >>
> >> The Halting Problem and the Liar Paradox are errors of
> >> specification/reasoning because their complete [meaning postulates]
> >> necessarily always contain cycles.
> >
> > Last time I gave you the advice to look out for
> > a more modest but realistic application of your
> > knowledge-condensing machine, but you said No,
> > it must be the machine of All Knowledge that goes
> > against Goedel's proved theorems.
>
> I do not think that is what I said, your paraphrase seems incorrect.
> What I said would have been something along the lines that we must first
> find the inherent essential structure of the universal set of all
> knowledge before we can correctly begin correctly composing any subset
> of this knowledge.
>
> > Meanwhile someone
> > else did what you refused to consider, a teenager,
> > by then seventeen years old: he wrote an app that
> > condenses online articles into summaries of four
> > hundred words - you can read the summary, and if you
> > like it you can download the entire article, saves
> > you a lot of reading time. He sold his app to a big
> > company, and made a lot of money. While you go on
> > relying on the magic of words and dreaming of your
> > God machine that will undo Goedel.
>
> http://en.wikipedia.org/wiki/Cyc
> http://www.cyc.com/
>
> I am searching for the most efficient and effective strategy to complete
> the CYC project such that it can by itself compose the complete meaning
> postulates for any and all knowledge not yet contained within its [boot
> strap] knowledge base.
Maybe if you looked at map reading & GPS? I was looking at some fabulous engineering theories in comp.ai patents? like 'speed' & Lambda?
me
11/22/2013 10:04:52 PM
"Antti Valmari" <Antti.Valmari@c.s.t.u.t.f.i.invalid> wrote in message 
news:l6nhi4$cue$1@news.cc.tut.fi...
>
> A much more readable and competent attempt to deny undecidability of
> halting was written by Eric C.R. Hehner. He is an established computer
> scientist with a decent publication record. To me the most rewarding
> part (rewarding in a perhaps weird sense) was when I temporarily ignored
> the flaws in his reasoning

I find that line of reasoning quite compelling, although at an informal 
level.  Could you maybe tell which (at least in essence) are those flaws?

Julio

> and thought about the consequences of his
> conclusion: what would change if he were right. Here:
>
>  www.cs.toronto.edu/~hehner/PHP.pdf
<snip>
Julio
11/23/2013 9:09:14 AM
Julio Di Egidio schreef op 23/11/2013 10:09:
> "Antti Valmari" <Antti.Valmari@c.s.t.u.t.f.i.invalid> wrote in message
> news:l6nhi4$cue$1@news.cc.tut.fi...
>>
>> A much more readable and competent attempt to deny undecidability of
>> halting was written by Eric C.R. Hehner. He is an established computer
>> scientist with a decent publication record. To me the most rewarding
>> part (rewarding in a perhaps weird sense) was when I temporarily ignored
>> the flaws in his reasoning
>
> I find that line of reasoning quite compelling, although at an informal
> level.  Could you maybe tell which (at least in essence) are those flaws?

For decades I've been stuck with this layman's question about halting 
decidability:
Is a simple line like
"IF RND < 0.0000000000001 THEN END"
allowed to be part of the program under examination?
If not, why not?
If yes, then how would one even conceive the idea of deciding upon its 
halting or not?

guido google:wugi

wugi
11/23/2013 10:08:48 PM
"wugi" <brol@brol.be> wrote in message 
news:l6r91d$cu1$1@speranza.aioe.org...

> Since decades I've been stuck with this layman's question about halting 
> decidability:
> Is a simple line like
> "IF RND < 0.0000000000001 THEN END"
> allowed to be part of the program under examination?
> If not, why not?
> If yes, then how would one but conceive the idea of deciding upon its 
> halting or not?

Rather consider this:

    while (!(rnd() < 1e-13)) {}

where we assume rnd() in [0, 1) reasonably distributed, and 0 < 1e-13 == 
true.

Unless I am missing something, of course it halts, eventually...
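If rnd() is uniform on [0, 1), each test succeeds with probability
p = 1e-13, so the number of iterations is geometric with mean
1/p = 1e13.  A small simulation sketch (rand()/RAND_MAX stands in for
rnd(), which is an assumption, and p is enlarged so the demo finishes):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const double p = 1e-6;     /* stand-in for 1e-13, so the run ends */
    unsigned long n = 0;
    srand(42);                 /* fixed seed for a reproducible run   */
    /* same shape as: while (!(rnd() < 1e-13)) {}                     */
    while (!((double)rand() / ((double)RAND_MAX + 1.0) < p))
        n++;
    printf("halted after %lu iterations; mean is about %.0f\n",
           n, 1.0 / p);
    return 0;
}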

Julio
Julio
11/23/2013 11:36:13 PM
wugi wrote:
> Julio Di Egidio schreef op 23/11/2013 10:09:
>> "Antti Valmari" <Antti.Valmari@c.s.t.u.t.f.i.invalid> wrote in message
>> news:l6nhi4$cue$1@news.cc.tut.fi...
>>>
>>> A much more readable and competent attempt to deny undecidability of
>>> halting was written by Eric C.R. Hehner. He is an established computer
>>> scientist with a decent publication record. To me the most rewarding
>>> part (rewarding in a perhaps weird sense) was when I temporarily ignored
>>> the flaws in his reasoning
>>
>> I find that line of reasoning quite compelling, although at an informal
>> level.  Could you maybe tell which (at least in essence) are those flaws?
>
> Since decades I've been stuck with this layman's question about halting
> decidability:
> Is a simple line like
> "IF RND < 0.0000000000001 THEN END"
> allowed to be part of the program under examination?

Yes, so long as RND is a Turing computable function.  Note that the RNDs 
found in program libraries are deterministic and only seemingly random.
The numbers they return are often called pseudo-random for that reason.
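For instance (a sketch using an assumed textbook generator, not any
particular library's RND): a linear congruential generator returns the
same sequence from the same seed on every run, so a program calling it
is fully deterministic and the usual halting question applies unchanged.

#include <stdio.h>

static unsigned long state = 1;           /* the seed                   */

static double lcg_rnd(void)               /* deterministic "RND" in [0,1) */
{
    state = (state * 1103515245UL + 12345UL) % 2147483648UL; /* mod 2^31 */
    return (double)state / 2147483648.0;
}

int main(void)
{
    for (int i = 0; i < 3; i++)
        printf("%f\n", lcg_rnd());        /* identical on every run     */
    return 0;
}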

> If not, why not?
> If yes, then how would one but conceive the idea of deciding upon its
> halting or not?

No difference.

> guido google:wugi
>


-- 
Madam Life's a piece in bloom,
Death goes dogging everywhere:
She's the tenant of the room,
He's the ruffian on the stair.
Peter
11/23/2013 11:54:38 PM
"Julio Di Egidio" <julio@diegidio.name> wrote in message news:l6re5j$32l$1@dont-email.me...
> "wugi" <brol@brol.be> wrote in message 
> news:l6r91d$cu1$1@speranza.aioe.org...
> 
>> Since decades I've been stuck with this layman's question about halting 
>> decidability:
>> Is a simple line like
>> "IF RND < 0.0000000000001 THEN END"
>> allowed to be part of the program under examination?
>> If not, why not?
>> If yes, then how would one but conceive the idea of deciding upon its 
>> halting or not?
> 
> Rather consider this:
> 
>    while (!(rnd() < 1e-13)) {}
> 
> where we assume rnd() in [0, 1) reasonably distributed, and 0 < 1e-13 == 
> true.
> 
> Unless I am missing something, of course it halts, eventually...
> 
> Julio

It'll eventually halt only if the mantissa of the hardware floating point
delivered by rnd() has at least 13 digits. Internally the rnd() function
will have to be using arithmetic with significantly longer mantissas.

pjk
pauljk
11/25/2013 11:32:33 AM
"pauljk" <paul.kriha@xtra.co.nz> wrote in message 
news:l6vcgl$6gv$1@dont-email.me...
> "Julio Di Egidio" <julio@diegidio.name> wrote in message 
> news:l6re5j$32l$1@dont-email.me...
>> "wugi" <brol@brol.be> wrote in message 
>> news:l6r91d$cu1$1@speranza.aioe.org...
>>
>>> Since decades I've been stuck with this layman's question about halting 
>>> decidability:
>>> Is a simple line like
>>> "IF RND < 0.0000000000001 THEN END"
>>> allowed to be part of the program under examination?
>>> If not, why not?
>>> If yes, then how would one but conceive the idea of deciding upon its 
>>> halting or not?
>>
>> Rather consider this:
>>
>>    while (!(rnd() < 1e-13)) {}
>>
>> where we assume rnd() in [0, 1) reasonably distributed, and 0 < 1e-13 == 
>> true.
>>
>> Unless I am missing something, of course it halts, eventually...
>
> It'll eventually halt only if the mantissa of the hardware floating point
> delivered by rnd() has at least 13 digits. Internally the rnd() function
> will have to be using arithmetic with significantly longer mantissas.

No, that's wrong: I have given the conditions, in particular you need 0 < 
1e-13, regardless of the precision of the output of rnd().

Julio
Julio
11/25/2013 12:31:46 PM
"Julio Di Egidio" <julio@diegidio.name> wrote in message 
news:l6vfvq$p13$1@dont-email.me...
> "pauljk" <paul.kriha@xtra.co.nz> wrote in message 
> news:l6vcgl$6gv$1@dont-email.me...
>> "Julio Di Egidio" <julio@diegidio.name> wrote in message 
>> news:l6re5j$32l$1@dont-email.me...
>>> "wugi" <brol@brol.be> wrote in message 
>>> news:l6r91d$cu1$1@speranza.aioe.org...
>>>
>>>> Since decades I've been stuck with this layman's question about halting 
>>>> decidability:
>>>> Is a simple line like
>>>> "IF RND < 0.0000000000001 THEN END"
>>>> allowed to be part of the program under examination?
>>>> If not, why not?
>>>> If yes, then how would one but conceive the idea of deciding upon its 
>>>> halting or not?
>>>
>>> Rather consider this:
>>>
>>>    while (!(rnd() < 1e-13)) {}
>>>
>>> where we assume rnd() in [0, 1) reasonably distributed, and 0 < 1e-13 == 
>>> true.
>>>
>>> Unless I am missing something, of course it halts, eventually...
>>
>> It'll eventually halt only if the mantissa of the hardware floating point
>> delivered by rnd() has at least 13 digits. Internally the rnd() function
>> will have to be using arithmetic with significantly longer mantissas.
>
> No, that's wrong: I have given the conditions, in particular you need 0 < 
> 1e-13, regardless of the precision of the output of rnd().

In fact, the lower the precision, the more quickly it will halt (on 
average).  Just think of the case where rnd() returns either 0.0 or 0.5...

Julio
Julio
11/25/2013 12:35:03 PM
On 11/25/2013 1:55 AM, Franz Gnaedinger wrote:
> On Sunday, November 24, 2013 1:24:27 PM UTC+1, Peter Olcott wrote:
>>
>> The reference did not merely use the term [Atomic Unit of Meaning] it
>> explained what this is.
>> Did you bother to look at the reference?
>
> No, I did not waste my time that way. I asked you
> for one example of a word with an atomic unit of
> meaning. You didn't bother giving me an example.

Simple Theory of Types (Atoms of Meaning)
http://en.wikipedia.org/wiki/History_of_type_theory#G.C3.B6del_1944

Atoms of meaning are below the level of meaning contained within words. 
Every word is composed of its constituent (Atoms of Meaning).

Objects of thought are divided into types: (Atoms of Meaning)
a) Individuals
b) Properties of individuals
c) Relations between individuals
d) Properties of such relations

The Concept {Is Larger Than} forms a type of relation between a pair of 
other types. This concept also has its own constituent parts. 
Conventionally this would be referred to as a two-place predicate.

A two-place predicate would itself be a type of relation.
It would be a relation between a Predicate type and its two 
ObjectOfPredicate types.
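One way to picture this (a sketch; the type names and the example
individuals are illustrative assumptions, not anyone's published
system): the two-place predicate is a record tying a predicate to its
two ObjectOfPredicate arguments.

#include <stdio.h>

typedef struct { const char *name; double size; } Individual;   /* (a) */

typedef int (*Property)(const Individual *);                     /* (b) */

typedef struct {                                                 /* (c) */
    const char *predicate;               /* e.g. "IsLargerThan"        */
    const Individual *subject, *object;  /* its two arguments          */
} Relation;                              /* (d) omitted for brevity    */

static int is_larger_than(const Relation *r)   /* evaluate the relation */
{
    return r->subject->size > r->object->size;
}

int main(void)
{
    Individual sun = { "Sun", 1391000.0 }, earth = { "Earth", 12742.0 };
    Relation r = { "IsLargerThan", &sun, &earth };
    printf("%s(%s, %s) = %d\n", r.predicate, r.subject->name,
           r.object->name, is_larger_than(&r));    /* ... = 1 */
    return 0;
}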

> And you don't get the difference between a model
> and a theorem. The atomic unit of meaning is a model,
> a working model, just like the atom in early quantum
> theory was seen as a tiny tiny solar system. One
> formulates a model and looks how far one can get.
> The model of the mini-solar system was helpful
> for a while, and led quite far, but then it had to be
> abandoned for a more complex model of probability waves.
> You could get somewhere with an atomic unit of meaning,
> but it is not a theorem, you can't announce a machine
> that will condense all knowledge - a machine of all
> knowledge, a God machine - on the basis of a very limited
> working model. So again, give me an example of a word
> that has an atomic unit of meaning, and I'll show you
> how limited that working model is.
>

Peter
11/25/2013 1:24:59 PM
On 11/23/13 11:09, Julio Di Egidio wrote:
> "Antti Valmari" <Antti.Valmari@c.s.t.u.t.f.i.invalid> wrote in message
> news:l6nhi4$cue$1@news.cc.tut.fi...
>>
>> A much more readable and competent attempt to deny undecidability of
>> halting was written by Eric C.R. Hehner. ...
>> the flaws in his reasoning ...
>
> I find that line of reasoning quite compelling, although at an informal
> level.  Could you maybe tell which (at least in essence) are those flaws?
>
> Julio

He presents a number of examples of ill-defined concepts. Then he
presents the halting problem contradiction and says that it is
analogous, so couldn't it be that the halting function is ill-defined
instead of well-defined but uncomputable. All his examples work out also
under the interpretation that a function may be well-defined but
uncomputable. When reading, just be careful to distinguish program code
from definitions.

When deriving a contradiction from an ill-defined concept, the reason for
the contradiction is indeed that the definition is bad. If we assume the
existence of the smallest natural number that is different from itself,
then indeed we get weird consequences.

In the halting problem contradiction, the starting point is a
hypothetical piece of code that computes the halting function. In that
kind of a situation, there are two possible sources of the
contradiction: (A) the function that the code should compute does not
exist, or (B) the function does exist but the code does not. How could
we know which one is the right one?

Before continuing, let me tell that in this research field, only
deterministic programs are considered. So each time the same program is
started with the same input, it behaves the same. This assumption is
usually not said out loud, but it is there. I point this out because
some other posts wondered about this.

(A) requires accepting that the halting function is ill-defined. The
definition of the halting function is intuitively among the simplest
possible definitions, and mathematically it is similar to many others.
Intuitively, when some program is executed on some input, it either is
or is not the case that the execution eventually stops. Almost all
computer scientists and mathematicians find nothing wrong with this. If
something were wrong here, then we would have a nasty problem: what does
distinguish this definition from numerous others, so that we could
reject this one as ill-defined, without rejecting everything?

Some opponents of uncomputability confuse this by saying that the result
of the halting function is self-contradictory or undefined, when a
certain piece of code based on the halting tester is given as the input.
So, at least for this input, both the reply "eventually stops" and the
reply "does not eventually stop" is wrong, they say. This they call the
self-reference paradox. What they fail to notice or refuse to accept is
that this input is non-existent. The input is based on a halting tester,
but because the halting tester does not exist, also the input does not
exist. The contradiction does not arise from assuming the correct answer
for each program and instance. The contradiction arises from assuming
that there is a piece of code that finds the correct answer.
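The construction behind this paragraph, as a compilable sketch: halts()
is the hypothetical tester, stubbed here to answer "halts"; either fixed
answer is refuted the same way.

#include <stdio.h>

/* Stub standing in for the HYPOTHETICAL tester.  No correct version
   can exist; the point of the construction is to refute it.          */
static int halts(const char *program, const char *input)
{
    (void)program; (void)input;
    return 1;                    /* assumed answer, to be contradicted */
}

static void confound(const char *p)
{
    if (halts(p, p)) {           /* tester said "halts"...             */
        puts("now looping forever, contradicting the tester");
        /* for (;;) {}  -- the real construction loops here            */
    }
    /* had the tester said "loops", confound would halt at once        */
}

int main(void)
{
    confound("confound");        /* feed the program its own text      */
    return 0;
}

confound(confound) halts exactly when halts(confound, confound) says it
does not, so it is the assumed tester, not the halting function itself,
that is refuted.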

On the other hand, (B) requires accepting the existence of uncomputable
functions. It means abandoning the idea that for each well-defined
function, there is a program that computes it. But what evidence is
there that for each well-defined function, there is a program that
computes it? None. There is only some people's hope that it should be so.


>> and thought about the consequences of his
>> conclusion: what would change if he were right.

The established science partitions the attempts to define functions into
three classes:

(A) Bad definitions
(B) Good definitions yielding uncomputable functions
(C) Good definitions yielding computable functions

It seems to me that accepting Hehner's view would mean fusing the
classes (A) and (B) and giving new names:

(A union B) Bad definitions
(C) Good definitions

The fact that there is no halting tester would remain, it would just be
given a new explanation. The same holds for all undecidable problems. We
would lose a huge amount of established non-computer-science
mathematics. We would have a very difficult obligation of stating
criteria for well-definedness, so that we could avoid bad definitions in
the future.


--- Antti Valmari ---

Antti
11/25/2013 2:05:43 PM
On 11/24/13 00:08, wugi wrote:
> Since decades I've been stuck with this layman's question about halting
> decidability:
> Is a simple line like
> "IF RND < 0.0000000000001 THEN END"
> allowed to be part of the program under examination?

I assume that RND yields a random value.

Usually, for simplicity, it is assumed that the programs under
discussion are deterministic, so this line would be ruled out. After you
understand the standard theory, you can let randomness or nondeterminism
enter the game and analyse the consequences.

Randomness brings in new notions that may sound weird. Assume tossing a
fair coin until getting heads. Non-termination is possible but has
probability zero. So probability zero is not the same thing as
impossibility.
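Worked out: P(no heads in the first n tosses) = (1/2)^n, which is
nonzero for every finite n but tends to 0 as n grows. The single
never-halting run (all tails forever) exists as an outcome, yet it
carries probability exactly 0.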


--- Antti Valmari ---

Antti
11/25/2013 2:14:43 PM
On 11/25/2013 8:05 AM, Antti Valmari wrote:
<snip>
>
> The established science partitions the attempts to define functions into
> three classes:
>
> (A) Bad definitions
> (B) Good definitions yielding uncomputable functions
> (C) Good definitions yielding computable functions
>
> It seems to me that accepting Hehner's view would mean fusing the
> classes (A) and (B) and giving new names:
>
> (A union B) Bad definitions
> (C) Good definitions
>
> The fact that there is no halting tester would remain, it would just be
> given a new explanation. The same holds for all undecidable problems. We
> would lose a huge amount of established non-computer-science
> mathematics. We would have a very difficult obligation of stating
> criteria for well-definedness, so that we could avoid bad definitions in
> the future.
>

Your conclusion here is similar to a remark in the
link Mr. Greene provided elsewhere.  If one looks at
the paragraph just above the position given in the link,

http://plato.stanford.edu/entries/self-reference/#ExtAltKriTheTru

one will find that Kripke's approach to the liar paradox using
many-valued Kleene truth is different from others, but does not
eliminate the problem, which introduces hierarchies.

In fact, the remark expresses this fact in another
way related to hierarchies.  Just as there can be
truths of second-order logic which are not provable
in first-order logic, the truth that Kripke's object
language cannot express the liar paradox is expressible
in the meta-language.
fom
11/25/2013 6:07:29 PM
"Julio Di Egidio" <julio@diegidio.name> wrote in message news:l6vg5v$q47$1@dont-email.me...
> "Julio Di Egidio" <julio@diegidio.name> wrote in message 
> news:l6vfvq$p13$1@dont-email.me...
>> "pauljk" <paul.kriha@xtra.co.nz> wrote in message 
>> news:l6vcgl$6gv$1@dont-email.me...
>>> "Julio Di Egidio" <julio@diegidio.name> wrote in message 
>>> news:l6re5j$32l$1@dont-email.me...
>>>> "wugi" <brol@brol.be> wrote in message 
>>>> news:l6r91d$cu1$1@speranza.aioe.org...
>>>>
>>>>> Since decades I've been stuck with this layman's question about halting 
>>>>> decidability:
>>>>> Is a simple line like
>>>>> "IF RND < 0.0000000000001 THEN END"
>>>>> allowed to be part of the program under examination?
>>>>> If not, why not?
>>>>> If yes, then how would one but conceive the idea of deciding upon its 
>>>>> halting or not?
>>>>
>>>> Rather consider this:
>>>>
>>>>    while (!(rnd() < 1e-13)) {}
>>>>
>>>> where we assume rnd() in [0, 1) reasonably distributed, and 0 < 1e-13 == 
>>>> true.
>>>>
>>>> Unless I am missing something, of course it halts, eventually...
>>>
>>> It'll eventually halt only if the mantissa of the hardware floating point
>>> delivered by rnd() has at least 13 digits. Internally the rnd() function
>>> will have to be using arithmetic with significantly longer mantissas.
>>
>> No, that's wrong: I have given the conditions, in particular you need 0 < 
>> 1e-13, regardless of the precision of the output of rnd().
> 
> In fact, the lower the precision, more quickly it will halt (on the 
> average).  Just think the case where rnd() returns either 0.0 or 0.5...

This just shows that without defining the precision of the random
number calculations and defining the rounding rules you cannot
say how often rnd() will be smaller than 1e-13.

What's more, if your computer's mantissa holds 12 or fewer decimal digits,
your 1e-13 would be held internally as zero, and rnd() would never be
smaller than that.

Regarding your example, assuming the same precision applies
to both 1e-13 and rnd(), neither of the rnd() values of 0.0 and 0.5
would be smaller than 1e-13 held internally as 0.0.
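For reference, a quick check under an assumed IEEE 754 implementation
(the posts above do not fix the floating-point format): 1e-13 is an
ordinary positive normal number in both float and double (the smallest
normal double is about 2.2e-308), so it is not held internally as zero
there; whether rnd() ever falls below it is a question about the
generator's granularity.

#include <stdio.h>
#include <float.h>

int main(void)
{
    printf("double 1e-13 = %.17g (min normal %.3g)\n", 1e-13, DBL_MIN);
    printf("float  1e-13 = %.9g (min normal %.3g)\n",
           (double)1e-13f, (double)FLT_MIN);
    printf("0.0 < 1e-13 ? %d\n", 0.0 < 1e-13);    /* prints 1 */
    return 0;
}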

pjk
pauljk
11/26/2013 1:43:34 AM
On 11/26/2013 1:50 AM, Franz Gnaedinger wrote:
> On Monday, November 25, 2013 11:01:09 AM UTC+1, Peter Olcott wrote:
>>
>> I see what other people disrespect you.
>
> Yes, those who eternally post meta-messages,
> no linguistic and no scientific messages,
> and hang around in sci.lang for playing online
> killer games.
>
> I prefer to speak of language and to bring up
> scientific arguments. Meanwhile I found out
> where you make the principle mistake, all other
> mistakes come from that one. The 'atomic unit of
> meaning' is a working model - let us start from
> the idea of prime words and look how far we get -,
> but you consider it a theorem, a theorem of absolute
> validity. But then you get into problems, for your
> theory of a machine that will condense the absolute
> universal total complete knowledge you always conjure
> goes against Goedel's proved theorems. You dismiss
> Goedel's theorems that are proved for a working model
> you elevate into the status of a theorem of the general
> validity mathematicians and logicians proved for Goedel's
> theorems ...
>

My original Hypothesis is that ALL self-reference paradoxes are formed 
only through errors of reasoning. This would include Goedel's 
Incompleteness. Apparently Kurt Goedel also agrees with this statement:

Simple Theory of Types (Atoms of Meaning)
http://en.wikipedia.org/wiki/History_of_type_theory#G.C3.B6del_1944

He concluded the (1) theory of simple types and (2) axiomatic set 
theory, "permit the derivation of modern mathematics and at the same 
time avoid all known paradoxes" (Gödel 1944:126).

Atoms of meaning are below the level of meaning contained within words. 
Every word is composed of its constituent (Atoms of Meaning).

Objects of thought are divided into types: (Atoms of Meaning)
a) Individuals
b) Properties of individuals
c) Relations between individuals
d) Properties of such relations

The Concept {Is Larger Than} forms a type of relation between a pair of 
other types. This concept also has its own constituent parts. 
Conventionally this would be referred to as a two-place predicate.

A two-place predicate would itself be a type of relation.
It would be a relation between a Predicate type and its two 
ObjectOfPredicate types.
Peter
11/26/2013 12:22:41 PM
Peter Olcott <OCR4Screen> writes:

> My original Hypothesis is that ALL self-reference paradoxes are formed
> only through errors of reasoning. This would include Goedel's
> Incompleteness. Apparently Kurt Goedel also agrees with this
> statement:
>
> Simple Theory of Types (Atoms of Meaning)
> http://en.wikipedia.org/wiki/History_of_type_theory#G.C3.B6del_1944
>
> He concluded the (1) theory of simple types and (2) axiomatic set
> theory, "permit the derivation of modern mathematics and at the same
> time avoid all known paradoxes" (Gödel 1944:126

You think that Gödel thought that GIT involved errors of reasoning?

-- 
Alan Smaill
Alan
11/26/2013 12:44:24 PM
On 11/26/2013 6:44 AM, Alan Smaill wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> My original Hypothesis is that ALL self-reference paradoxes are formed
>> only through errors of reasoning. This would include Goedel's
>> Incompleteness. Apparently Kurt Goedel also agrees with this
>> statement:
>>
>> Simple Theory of Types (Atoms of Meaning)
>> http://en.wikipedia.org/wiki/History_of_type_theory#G.C3.B6del_1944
>>
>> He concluded the (1) theory of simple types and (2) axiomatic set
>> theory, "permit the derivation of modern mathematics and at the same
>> time avoid all known paradoxes" (Gödel 1944:126
>
> You think that Gödel thought that GIT involved errors of resoning?
>

Since ALL paradoxes were resolved (by the Simple Theory of Types), 
the Goedel Incompleteness Theorem paradox was also resolved. He did 
not go as far as saying that the alternative view (of the Incompleteness 
Theorem) was erroneous; he did say that the alternative view was not 
necessary. Mathematics is not necessarily incomplete.
Peter
11/26/2013 12:53:51 PM
Peter Olcott <OCR4Screen> writes:

> On 11/26/2013 6:44 AM, Alan Smaill wrote:
>> Peter Olcott <OCR4Screen> writes:
>>
>>> My original Hypothesis is that ALL self-reference paradoxes are formed
>>> only through errors of reasoning. This would include Goedel's
>>> Incompleteness. Apparently Kurt Goedel also agrees with this
>>> statement:
>>>
>>> Simple Theory of Types (Atoms of Meaning)
>>> http://en.wikipedia.org/wiki/History_of_type_theory#G.C3.B6del_1944
>>>
>>> He concluded the (1) theory of simple types and (2) axiomatic set
>>> theory, "permit the derivation of modern mathematics and at the same
>>> time avoid all known paradoxes" (Gödel 1944:126
>>
>> You think that Gödel thought that GIT involved errors of resoning?
>
> Since ALL paradoxes were resolved, (by the Simple Theory of Types)
> therefore the Goedel Incompleteness Theorem paradox was resolved.

In what sense?
That GIT is seen as not paradoxical by Gödel, maybe?

> He did not go as far as saying that the alternative view (of the
> Incompleteness Theorem) was erroneous, he did say that the alternative
> view was not necessary.

Where did he say that?
Not in the quote above, anyway.

> Mathematics is not necessarily incomplete.

I don't see at all that you have Gödel's imprimatur for that claim.

-- 
Alan Smaill
Alan
11/26/2013 1:15:09 PM
On 11/26/2013 7:15 AM, Alan Smaill wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 11/26/2013 6:44 AM, Alan Smaill wrote:
>>> Peter Olcott <OCR4Screen> writes:
>>>
>>>> My original Hypothesis is that ALL self-reference paradoxes are formed
>>>> only through errors of reasoning. This would include Goedel's
>>>> Incompleteness. Apparently Kurt Goedel also agrees with this
>>>> statement:
>>>>
>>>> Simple Theory of Types (Atoms of Meaning)
>>>> http://en.wikipedia.org/wiki/History_of_type_theory#G.C3.B6del_1944
>>>>
>>>> He concluded the (1) theory of simple types and (2) axiomatic set
>>>> theory, "permit the derivation of modern mathematics and at the same
>>>> time avoid all known paradoxes" (Gödel 1944:126
>>>
>>> You think that Gödel thought that GIT involved errors of resoning?
>>
>> Since ALL paradoxes were resolved, (by the Simple Theory of Types)
>> therefore the Goedel Incompleteness Theorem paradox was resolved.
>
> In what sense?
> That GIT is seen as not paradoxical by Gödel, maybe?
>
>> He did not go as far as saying that the alternative view (of the
>> Incompleteness Theorem) was erroneous, he did say that the alternative
>> view was not necessary.
>
> Where did he say that?
> Not in the quote above, anyway.
>
>> Mathematics is not necessarily incomplete.
>
> I don't see at all that you have Gödel's imprimatur for that claim.
>

He concluded that the theory of simple types "permit[s] the derivation 
of modern mathematics and at the same time avoid[s] all known paradoxes" 
(Gödel 1944:126).

I guess for communication to effectively occur we must mutually agree on 
the meaning of the terms in the above. It is my understanding that the 
key aspect of the Incompleteness Theorem was that it resulted in a 
paradox when attempting to derive modern mathematics. Within the 
specific context of these meanings Kurt Gödel said that this result was 
not necessary.
0
Peter
11/26/2013 1:36:39 PM
Peter Olcott wrote:

> It is my understanding that the
> key aspect of the Incompleteness Theorem was that it resulted in a
> paradox when attempting to derive modern mathematics.

Then your understanding is wrong.  Gödel's incompleteness theorem can be 
looked up, thereby enabling you to correct your misunderstanding.

-- 
Madam Life's a piece in bloom,
Death goes dogging everywhere:
She's the tenant of the room,
He's the ruffian on the stair.
Peter
11/26/2013 1:43:47 PM
On 11/26/2013 7:15 AM, Alan Smaill wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 11/26/2013 6:44 AM, Alan Smaill wrote:
>>> Peter Olcott <OCR4Screen> writes:
>>>
>>>> My original Hypothesis is that ALL self-reference paradoxes are formed
>>>> only through errors of reasoning. This would include Goedel's
>>>> Incompleteness. Apparently Kurt Goedel also agrees with this
>>>> statement:
>>>>
>>>> Simple Theory of Types (Atoms of Meaning)
>>>> http://en.wikipedia.org/wiki/History_of_type_theory#G.C3.B6del_1944
>>>>
>>>> He concluded the (1) theory of simple types and (2) axiomatic set
>>>> theory, "permit the derivation of modern mathematics and at the same
>>>> time avoid all known paradoxes" (Gödel 1944:126
>>>
>>> You think that Gödel thought that GIT involved errors of resoning?
>>
>> Since ALL paradoxes were resolved, (by the Simple Theory of Types)
>> therefore the Goedel Incompleteness Theorem paradox was resolved.
>
> In what sense?
> That GIT is seen as not paradoxical by Gödel, maybe?

Perhaps, yet it seems clear that Wittgenstein thought it to be the same 
sort of self-reference paradox that I am referring to:

"He interpreted it as a kind of logical paradox"

They are particularly concerned with the interpretation of a Gödel 
sentence for an ω-inconsistent theory as actually saying "I am not provable"

http://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_theorems#Wittgenstein

>
>> He did not go as far as saying that the alternative view (of the
>> Incompleteness Theorem) was erroneous, he did say that the alternative
>> view was not necessary.
>
> Where did he say that?
> Not in the quote above, anyway.
>
>> Mathematics is not necessarily incomplete.
>
> I don't see at all that you have Gödel's imprimatur for that claim.
>

0
Peter
11/26/2013 2:15:14 PM
On 11/26/2013 7:43 AM, Peter Percival wrote:
> Peter Olcott wrote:
>
>> It is my understanding that the
>> key aspect of the Incompleteness Theorem was that it resulted in a
>> paradox when attempting to derive modern mathematics.
>
> Then your understanding is wrong.  Gödel's incompleteness theorem can be
> looked up, thereby enabling you to correct your misunderstanding.
>

Perhaps, yet it seems clear that Wittgenstein thought it to be the same 
sort of self-reference paradox that I am referring to:

"He interpreted it as a kind of logical paradox"

They are particularly concerned with the interpretation of a Gödel 
sentence for an ω-inconsistent theory as actually saying "I am not provable"

http://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_theorems#Wittgenstein 

0
Peter
11/26/2013 2:16:37 PM
On 11/26/2013 7:43 AM, Peter Percival wrote:
> Peter Olcott wrote:
>
>> It is my understanding that the
>> key aspect of the Incompleteness Theorem was that it resulted in a
>> paradox when attempting to derive modern mathematics.
>
> Then your understanding is wrong.  Gödel's incompleteness theorem can be
> looked up, thereby enabling you to correct your misunderstanding.
>

http://sammelpunkt.philo.at:8080/1674/1/ohmacht.pdf
Apparently Wittgenstein's view is very close to my own.
0
Peter
11/26/2013 3:18:06 PM
Peter Olcott wrote:
> On 11/26/2013 7:43 AM, Peter Percival wrote:
>> Peter Olcott wrote:
>>
>>> It is my understanding that the
>>> key aspect of the Incompleteness Theorem was that it resulted in a
>>> paradox when attempting to derive modern mathematics.
>>
>> Then your understanding is wrong.  Gödel's incompleteness theorem can be
>> looked up, thereby enabling you to correct your misunderstanding.
>>
>
> Perhaps, yet it seems clear that Wittgenstein thought it to be the same
> sort of self-reference paradox that I am referring to:
>
> "He interpreted it as a kind of logical paradox"
>
> They are particularly concerned with the interpretation of a Gödel
> sentence for an ω-inconsistent theory as actually saying "I am not
> provable"
>
> http://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_theorems#Wittgenstein

You will have noticed that not everything in that section is positive 
about Wittgenstein.  I know nothing about the matter and it may be that 
close reading of the works referred to (both Wittgenstein's and the 
scholarly commentaries) repays the effort.

-- 
Madam Life's a piece in bloom,
Death goes dogging everywhere:
She's the tenant of the room,
He's the ruffian on the stair.
0
Peter
11/26/2013 3:28:56 PM
Antti Valmari schreef op 25/11/2013 15:14:
> On 11/24/13 00:08, wugi wrote:
>> Since decades I've been stuck with this layman's question about halting
>> decidability:
>> Is a simple line like
>> "IF RND < 0.0000000000001 THEN END"
>> allowed to be part of the program under examination?
>
> I assume that RND yields a random value.
>
> Usually, for simplicity, it is assumed that the programs under
> discussion are deterministic, so this line would be ruled out. After you
> understand the standard theory, you can let randomness or nondeterminism
> enter the game and analyse the consequences.
>
> Randomness brings in new notions that may sound weird. Assume tossing a
> fair coin until getting heads. Non-termination is possible but has
> probability zero. So probability zero is not the same thing as
> impossibility.

That's why, with a few such tricks, meseems undecidability is just
around the corner...
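
Rendered in Python, the line in question might look like this (a
hypothetical sketch; it halts with probability 1, though nothing
bounds in advance how long that takes):

    import random

    while True:
        # expected number of iterations is about 10**13, so don't
        # actually wait for it
        if random.random() < 0.0000000000001:
            break   # the 'THEN END' branch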

guido google:wugi


0
wugi
11/26/2013 4:42:39 PM
On 11/26/2013 9:28 AM, Peter Percival wrote:
> Peter Olcott wrote:
>> On 11/26/2013 7:43 AM, Peter Percival wrote:
>>> Peter Olcott wrote:
>>>
>>>> It is my understanding that the
>>>> key aspect of the Incompleteness Theorem was that it resulted in a
>>>> paradox when attempting to derive modern mathematics.
>>>
>>> Then your understanding is wrong.  Gödel's incompleteness theorem can be
>>> looked up, thereby enabling you to correct your misunderstanding.
>>>
>>
>> Perhaps, yet it seems clear that Wittgenstein thought it to be the same
>> sort of self-reference paradox that I am referring to:
>>
>> "He interpreted it as a kind of logical paradox"
>>
>> They are particularly concerned with the interpretation of a Gödel
>> sentence for an ω-inconsistent theory as actually saying "I am not
>> provable"
>>
>> http://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_theorems#Wittgenstein
>>
>
> You will have noticed that not everything in that section is positive
> about Wittgenstein.  I know nothing about the matter and it may be that
> close reading of the works referred to (both Wittgenstein's and the
> scholarly commentaries) repays the effort.
>

Yes here is one of those:
http://sammelpunkt.philo.at:8080/1674/1/ohmacht.pdf
0
Peter
11/26/2013 5:08:49 PM
On 11/25/2013 8:14 AM, Antti Valmari wrote:
> On 11/24/13 00:08, wugi wrote:
>> Since decades I've been stuck with this layman's question about halting
>> decidability:
>> Is a simple line like
>> "IF RND < 0.0000000000001 THEN END"
>> allowed to be part of the program under examination?
>
> I assume that RND yields a random value.
>
> Usually, for simplicity, it is assumed that the programs under
> discussion are deterministic, so this line would be ruled out. After you
> understand the standard theory, you can let randomness or nondeterminism
> enter the game and analyse the consequences.
>
> Randomness brings in new notions that may sound weird. Assume tossing a
> fair coin until getting heads. Non-termination is possible but has
> probability zero. So probability zero is not the same thing as
> impossibility.
>
>
> --- Antti Valmari ---
>
I think that you are making a subtle fallacy of equivocation error: It 
seems to me that any probability of exactly zero must perfectly equate 
with impossibility.

Maybe you are saying something along the lines of:
Although it never will happen, it is not impossible for it to happen.
0
Peter
11/26/2013 5:18:46 PM
Peter Olcott <OCR4Screen> writes:

> On 11/26/2013 7:15 AM, Alan Smaill wrote:
>> Peter Olcott <OCR4Screen> writes:
>>
>>> On 11/26/2013 6:44 AM, Alan Smaill wrote:
>>>> Peter Olcott <OCR4Screen> writes:
>>>>
>>>>> My original Hypothesis is that ALL self-reference paradoxes are formed
>>>>> only through errors of reasoning. This would include Goedel's
>>>>> Incompleteness. Apparently Kurt Goedel also agrees with this
>>>>> statement:
>>>>>
>>>>> Simple Theory of Types (Atoms of Meaning)
>>>>> http://en.wikipedia.org/wiki/History_of_type_theory#G.C3.B6del_1944
>>>>>
>>>>> He concluded the (1) theory of simple types and (2) axiomatic set
>>>>> theory, "permit the derivation of modern mathematics and at the same
>>>>> time avoid all known paradoxes" (Gödel 1944:126)
>>>>
>>>> You think that Gödel thought that GIT involved errors of reasoning?
>>>
>>> Since ALL paradoxes were resolved, (by the Simple Theory of Types)
>>> therefore the Goedel Incompleteness Theorem paradox was resolved.
>>
>> In what sense?
>> That GIT is seen as not paradoxical by Gödel, maybe?
>>
>>> He did not go as far as saying that the alternative view (of the
>>> Incompleteness Theorem) was erroneous, he did say that the alternative
>>> view was not necessary.
>>
>> Where did he say that?
>> Not in the quote above, anyway.
>>
>>> Mathematics is not necessarily incomplete.
>>
>> I don't see at all that you have Gödel's imprimatur for that claim.
>
> He concluded the theory of simple types, "permit the derivation of
> modern mathematics and at the same time avoid all known paradoxes"
> (Gödel 1944:126)
>
> I guess for communication to effectively occur we must mutually agree
> on the meaning of the terms in the above. It is my understanding that
> the key aspect of the Incompleteness Theorem was that it resulted in a
> paradox when attempting to derive modern mathematics.

If you mean by paradox a situation in which our usual reasoning
breaks down, then I do not think that this is the case
(and nor did Gödel say this anywhere I have seen).

If you mean by paradox that the situation appears strange or unforeseen,
well, maybe -- but that is a whole different situation.

> Within the
> specific context of these meanings Kurt Gödel said that this result
> was not necessary.

You are ignoring the possibility that Gödel thought that GIT was *not*
paradoxical in the first sense above, and so his comment does not touch
GIT at all.


-- 
Alan Smaill
0
Alan
11/26/2013 5:37:25 PM
Peter Olcott <OCR4Screen> writes:

> On 11/26/2013 7:15 AM, Alan Smaill wrote:
>> Peter Olcott <OCR4Screen> writes:
>>
>>> On 11/26/2013 6:44 AM, Alan Smaill wrote:
>>>> Peter Olcott <OCR4Screen> writes:
>>>>
>>>>> My original Hypothesis is that ALL self-reference paradoxes are formed
>>>>> only through errors of reasoning. This would include Goedel's
>>>>> Incompleteness. Apparently Kurt Goedel also agrees with this
>>>>> statement:
>>>>>
>>>>> Simple Theory of Types (Atoms of Meaning)
>>>>> http://en.wikipedia.org/wiki/History_of_type_theory#G.C3.B6del_1944
>>>>>
>>>>> He concluded the (1) theory of simple types and (2) axiomatic set
>>>>> theory, "permit the derivation of modern mathematics and at the same
>>>>> time avoid all known paradoxes" (Gödel 1944:126)
>>>>
>>>> You think that Gödel thought that GIT involved errors of reasoning?
>>>
>>> Since ALL paradoxes were resolved, (by the Simple Theory of Types)
>>> therefore the Goedel Incompleteness Theorem paradox was resolved.
>>
>> In what sense?
>> That GIT is seen as not paradoxical by Gödel, maybe?
>
> Perhaps, yet it seems clear that Wittgenstein thought it to be the
> same sort of self-reference paradox that I am referring to:
>
> "He interpreted it as a kind of logical paradox"
>
> They are particularly concerned with the interpretation of a Gödel
> sentence for an ω-inconsistent theory as actually saying "I am not
> provable"
>
> http://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_theorems#Wittgenstein

It's your view on Gödel's own opinion that I find unconvincing;
yes, Wittgenstein had an idiosyncratic take on the situation,
such that you will struggle to find that anyone takes W. seriously
on this (quite unlike most of W's work).

>>> He did not go as far as saying that the alternative view (of the
>>> Incompleteness Theorem) was erroneous, he did say that the alternative
>>> view was not necessary.
>>
>> Where did he say that?
>> Not in the quote above, anyway.
>>
>>> Mathematics is not necessarily incomplete.
>>
>> I don't see at all that you have Gödel's imprimatur for that claim.
>>
>

-- 
Alan Smaill
0
Alan
11/26/2013 5:47:08 PM
On Tue, 26 Nov 2013 11:18:46 -0600, Peter Olcott
<OCR4Screen> wrote in
<news:Y-ednY_cqplqSgnPnZ2dnUVZ_hmdnZ2d@giganews.com> in
sci.lang,comp.theory,sci.logic:

> On 11/25/2013 8:14 AM, Antti Valmari wrote:

[...]

>> Randomness brings in new notions that may sound weird.
>> Assume tossing a fair coin until getting heads.
>> Non-termination is possible but has probability zero. So
>> probability zero is not the same thing as impossibility.

> I think that you are making a subtle fallacy of
> equivocation error: It seems to me that any probability
> of exactly zero must perfectly equate with impossibility.

It doesn’t, and there is no equivocation error.  Antti’s
statement is precisely correct.
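
A quick simulation makes the distinction concrete (a minimal Python
sketch, assuming a fair coin; illustrative only):

    import random

    def tosses_until_heads():
        # toss a fair coin until it lands heads; return the toss count
        n = 1
        while random.random() < 0.5:   # tails with probability 1/2
            n += 1
        return n

    # every simulated run terminates, yet no finite bound on run length
    # exists: P(no head in the first n tosses) = (1/2)**n tends to 0
    # without the all-tails outcome being logically excluded
    runs = [tosses_until_heads() for _ in range(100000)]
    print(max(runs), sum(runs) / len(runs))   # longest run; mean near 2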

[...]

Brian
0
Brian
11/26/2013 5:56:47 PM
On 11/26/2013 11:37 AM, Alan Smaill wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 11/26/2013 7:15 AM, Alan Smaill wrote:
>>> Peter Olcott <OCR4Screen> writes:
>>>
>>>> On 11/26/2013 6:44 AM, Alan Smaill wrote:
>>>>> Peter Olcott <OCR4Screen> writes:
>>>>>
>>>>>> My original Hypothesis is that ALL self-reference paradoxes are formed
>>>>>> only through errors of reasoning. This would include Goedel's
>>>>>> Incompleteness. Apparently Kurt Goedel also agrees with this
>>>>>> statement:
>>>>>>
>>>>>> Simple Theory of Types (Atoms of Meaning)
>>>>>> http://en.wikipedia.org/wiki/History_of_type_theory#G.C3.B6del_1944
>>>>>>
>>>>>> He concluded the (1) theory of simple types and (2) axiomatic set
>>>>>> theory, "permit the derivation of modern mathematics and at the same
>>>>>> time avoid all known paradoxes" (Gödel 1944:126)
>>>>>
>>>>> You think that Gödel thought that GIT involved errors of reasoning?
>>>>
>>>> Since ALL paradoxes were resolved, (by the Simple Theory of Types)
>>>> therefore the Goedel Incompleteness Theorem paradox was resolved.
>>>
>>> In what sense?
>>> That GIT is seen as not paradoxical by Gödel, maybe?
>>>
>>>> He did not go as far as saying that the alternative view (of the
>>>> Incompleteness Theorem) was erroneous, he did say that the alternative
>>>> view was not necessary.
>>>
>>> Where did he say that?
>>> Not in the quote above, anyway.
>>>
>>>> Mathematics is not necessarily incomplete.
>>>
>>> I don't see at all that you have Gödel's imprimatur for that claim.
>>
>> He concluded the theory of simple types, "permit the derivation of
>> modern mathematics and at the same time avoid all known paradoxes"
>> (Gödel 1944:126)
>>
>> I guess for communication to effectively occur we must mutually agree
>> on the meaning of the terms in the above. It is my understanding that
>> the key aspect of the Incompleteness Theorem was that it resulted in a
>> paradox when attempting to derive modern mathematics.
>
> If you mean by paradox a situation in which our usual reasoning
> breaks down, then I do not think that this is the case
> (and nor did Gödel say this anywhere I have seen).
>
> If you mean by paradox that the situation appears strange or unforeseen,
> well, maybe -- but that is a whole different situation.
>
>> Within the
>> specific context of these meanings Kurt Gödel said that this result
>> was not necessary.
>
> You are ignoring the possibility that Gödel thought that GIT was *not*
> paradoxical in the first sense above, and so his comment does not touch
> GIT at all.
>
>
Yes you may be correct.
0
Peter
11/26/2013 6:08:15 PM
On 11/26/2013 11:47 AM, Alan Smaill wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 11/26/2013 7:15 AM, Alan Smaill wrote:
>>> Peter Olcott <OCR4Screen> writes:
>>>
>>>> On 11/26/2013 6:44 AM, Alan Smaill wrote:
>>>>> Peter Olcott <OCR4Screen> writes:
>>>>>
>>>>>> My original Hypothesis is that ALL self-reference paradoxes are formed
>>>>>> only through errors of reasoning. This would include Goedel's
>>>>>> Incompleteness. Apparently Kurt Goedel also agrees with this
>>>>>> statement:
>>>>>>
>>>>>> Simple Theory of Types (Atoms of Meaning)
>>>>>> http://en.wikipedia.org/wiki/History_of_type_theory#G.C3.B6del_1944
>>>>>>
>>>>>> He concluded the (1) theory of simple types and (2) axiomatic set
>>>>>> theory, "permit the derivation of modern mathematics and at the same
>>>>>> time avoid all known paradoxes" (Gödel 1944:126)
>>>>>
>>>>> You think that Gödel thought that GIT involved errors of reasoning?
>>>>
>>>> Since ALL paradoxes were resolved, (by the Simple Theory of Types)
>>>> therefore the Goedel Incompleteness Theorem paradox was resolved.
>>>
>>> In what sense?
>>> That GIT is seen as not paradoxical by Gödel, maybe?
>>
>> Perhaps, yet it seems clear that Wittgenstein thought it to be the
>> same sort of self-reference paradox that I am referring to:
>>
>> "He interpreted it as a kind of logical paradox"
>>
>> They are particularly concerned with the interpretation of a Gödel
>> sentence for an ω-inconsistent theory as actually saying "I am not
>> provable"
>>
>> http://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_theorems#Wittgenstein
>
> It's your view on Gödel's own opinion that I find unconvincing;
> yes, Wittgenstein had an idiosyncratic take on the situation,
> such that you will struggle to find that anyone takes W. seriously
> on this (quite unlike most of W's work).
>
Yes that seems to be the case. Do most people tend to agree that the 
Halting Problem and the Incompleteness Theorem are essentially analogous?

0
Peter
11/26/2013 6:10:39 PM
On 11/26/2013 11:56 AM, Brian M. Scott wrote:
> On Tue, 26 Nov 2013 11:18:46 -0600, Peter Olcott
> <OCR4Screen> wrote in
> <news:Y-ednY_cqplqSgnPnZ2dnUVZ_hmdnZ2d@giganews.com> in
> sci.lang,comp.theory,sci.logic:
>
>> On 11/25/2013 8:14 AM, Antti Valmari wrote:
>
> [...]
>
>>> Randomness brings in new notions that may sound weird.
>>> Assume tossing a fair coin until getting heads.
>>> Non-termination is possible but has probability zero. So
>>> probability zero is not the same thing as impossibility.
>
>> I think that you are making a subtle fallacy of
>> equivocation error: It seems to me that any probability
>> of exactly zero must perfectly equate with impossibility.
>
> It doesn’t, and there is no equivocation error.  Antti’s
> statement is precisely correct.
>
> [...]
>
> Brian
>
Maybe he was saying something along the lines of:
Although it never will happen, it is not impossible for it to happen.
In that case what he said made sense.

On the other hand, the probability of any impossible event would be zero,
would it not?
0
Peter
11/26/2013 6:14:23 PM
Brian M. Scott wrote (26-11-2013 17:56):
> On Tue, 26 Nov 2013 11:18:46 -0600, Peter Olcott
> <OCR4Screen> wrote in
> <news:Y-ednY_cqplqSgnPnZ2dnUVZ_hmdnZ2d@giganews.com> in
> sci.lang,comp.theory,sci.logic:
>
>> On 11/25/2013 8:14 AM, Antti Valmari wrote:
>
> [...]
>
>>> Randomness brings in new notions that may sound weird.
>>> Assume tossing a fair coin until getting heads.
>>> Non-termination is possible but has probability zero. So
>>> probability zero is not the same thing as impossibility.
>
>> I think that you are making a subtle fallacy of
>> equivocation error: It seems to me that any probability
>> of exactly zero must perfectly equate with impossibility.
>
> It doesn’t, and there is no equivocation error.  Antti’s
> statement is precisely correct.

But is the probability exactly zero? How would one go about proving it?
(The problem could be that 'exact' can't apply to 'probability', but I have 
no idea what the literature says about that.)

0
António
11/26/2013 6:15:23 PM
On 11/26/2013 9:28 AM, Peter Percival wrote:
> Peter Olcott wrote:
>> On 11/26/2013 7:43 AM, Peter Percival wrote:
>>> Peter Olcott wrote:
>>>
>>>> It is my understanding that the
>>>> key aspect of the Incompleteness Theorem was that it resulted in a
>>>> paradox when attempting to derive modern mathematics.
>>>
>>> Then your understanding is wrong.  Gödel's incompleteness theorem can be
>>> looked up, thereby enabling you to correct your misunderstanding.
>>>
>>
>> Perhaps, yet it seems clear that Wittgenstein thought it to be the same
>> sort of self-reference paradox that I am referring to:
>>
>> "He interpreted it as a kind of logical paradox"
>>
>> They are particularly concerned with the interpretation of a Gödel
>> sentence for an ω-inconsistent theory as actually saying "I am not
>> provable"
>>
>> http://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_theorems#Wittgenstein
>>
>
> You will have noticed that not everything in that section is positive
> about Wittgenstein.  I know nothing about the matter and it may be that
> close reading of the works referred to (both Wittgenstein's and the
> scholarly commentaries) repays the effort.
>

Following links, one can get to this
paper, at least,

http://www3.nd.edu/~tbays/papers/wnp.pdf

It contains the paragraph from Wittgenstein
in question.  I think it unlikely that it supports
Peter's contention of being "the same sort of self-reference
paradox" since Peter began his thread speaking of
meaning postulates and Wittgenstein seems to be
questioning the admissibility of a provability predicate.




0
fom
11/26/2013 6:18:52 PM
On 11/26/2013 12:15 PM, António Marques wrote:
> Brian M. Scott wrote (26-11-2013 17:56):
>> On Tue, 26 Nov 2013 11:18:46 -0600, Peter Olcott
>> <OCR4Screen> wrote in
>> <news:Y-ednY_cqplqSgnPnZ2dnUVZ_hmdnZ2d@giganews.com> in
>> sci.lang,comp.theory,sci.logic:
>>
>>> On 11/25/2013 8:14 AM, Antti Valmari wrote:
>>
>> [...]
>>
>>>> Randomness brings in new notions that may sound weird.
>>>> Assume tossing a fair coin until getting heads.
>>>> Non-termination is possible but has probability zero. So
>>>> probability zero is not the same thing as impossibility.
>>
>>> I think that you are making a subtle fallacy of
>>> equivocation error: It seems to me that any probability
>>> of exactly zero must perfectly equate with impossibility.
>>
>> It doesn’t, and there is no equivocation error.  Antti’s
>> statement is precisely correct.
>
> But is the probability exactly zero? How would one go about proving
> it? (The problem could be that 'exact' can't apply to 'probability', but
> I have no idea what the literature says about that.)
>

I can't tell exactly what differences there could be
between a probability of zero and an impossibility. Certainly any
impossible task has a probability of zero; perhaps not the other way
around, depending upon how one defines one's terms.
0
Peter
11/26/2013 6:25:17 PM
On 11/26/2013 12:18 PM, fom wrote:
> On 11/26/2013 9:28 AM, Peter Percival wrote:
>> Peter Olcott wrote:
>>> On 11/26/2013 7:43 AM, Peter Percival wrote:
>>>> Peter Olcott wrote:
>>>>
>>>>> It is my understanding that the
>>>>> key aspect of the Incompleteness Theorem was that it resulted in a
>>>>> paradox when attempting to derive modern mathematics.
>>>>
>>>> Then your understanding is wrong.  Gödel's incompleteness theorem
>>>> can be
>>>> looked up, thereby enabling you to correct your misunderstanding.
>>>>
>>>
>>> Perhaps, yet it seems clear that Wittgenstein thought it to be the same
>>> sort of self-reference paradox that I am referring to:
>>>
>>> "He interpreted it as a kind of logical paradox"
>>>
>>> They are particularly concerned with the interpretation of a Gödel
>>> sentence for an ω-inconsistent theory as actually saying "I am not
>>> provable"
>>>
>>> http://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_theorems#Wittgenstein
>>>
>>>
>>
>> You will have noticed that not everything in that section is positive
>> about Wittgenstein.  I know nothing about the matter and it may be that
>> close reading of the works referred to (both Wittgenstein's and the
>> scholarly commentaries) repays the effort.
>>
>
> Following links, one can get to this
> paper, at least,
>
> http://www3.nd.edu/~tbays/papers/wnp.pdf
>
> It contains the paragraph from Wittgenstein
> in question.  I think it unlikely that it supports
> Peter's contention of being "the same sort of self-reference
> paradox" since Peter began his thread speaking of
> meaning postulates and Wittgenstein seems to be
> questioning the admissibility of a provability predicate.
>
>
>

Attributed to Wittgenstein or Godel?
  "I am not provable"

http://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_theorems#Wittgenstein

In any case it seems to have the same error as:
"this proposition is false"

0
Peter
11/26/2013 6:30:24 PM
Peter Olcott wrote:

> Yes that seems to be the case. Do most people tend to agree that the
> Halting Problem and the Incompleteness Theorem are essentially analogous?

The unsolvability of the halting problem implies a version of Gödel's 
incompleteness theorem--a version weaker than Gödel's original.

The answer to your question is "no": most people haven't heard of the 
halting problem or the incompleteness theorem.


-- 
Madam Life's a piece in bloom,
Death goes dogging everywhere:
She's the tenant of the room,
He's the ruffian on the stair.
0
Peter
11/26/2013 6:33:13 PM
On 11/26/2013 12:15 PM, António Marques wrote:
> Brian M. Scott wrote (26-11-2013 17:56):
>> On Tue, 26 Nov 2013 11:18:46 -0600, Peter Olcott
>> <OCR4Screen> wrote in
>> <news:Y-ednY_cqplqSgnPnZ2dnUVZ_hmdnZ2d@giganews.com> in
>> sci.lang,comp.theory,sci.logic:
>>
>>> On 11/25/2013 8:14 AM, Antti Valmari wrote:
>>
>> [...]
>>
>>>> Randomness brings in new notions that may sound weird.
>>>> Assume tossing a fair coin until getting heads.
>>>> Non-termination is possible but has probability zero. So
>>>> probability zero is not the same thing as impossibility.
>>
>>> I think that you are making a subtle fallacy of
>>> equivocation error: It seems to me that any probability
>>> of exactly zero must perfectly equate with impossibility.
>>
>> It doesn’t, and there is no equivocation error.  Antti’s
>> statement is precisely correct.
>
> But is the probability exactly zero? How would one go about proving
> it? (The problem could be that 'exact' can't apply to 'probability', but
> I have no idea what the literature says about that.)
>

The numeric value of the probability is
not an issue.  A probability is defined
as a mapping over subsets of an underlying
system of events.  Relative to that system,
an impossible event corresponds to the empty
set of events.  So, the probability of impossibility
is 0.  The converse does not hold.

When probability is interpreted relative
to measure-theoretic ideas, the analogy
is a set of measure zero.  Not every set
of measure zero is empty.
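
A minimal numeric sketch of why the converse fails, assuming the
fair-coin example quoted above: the lone "tails forever" outcome sits
inside the event "first n tosses are all tails" for every n, so its
probability is at most (1/2)**n for all n, hence exactly 0, yet it is
not the empty (impossible) event.

    for n in (10, 20, 30, 60):
        print(n, 0.5 ** n)   # P(first n tosses all tails) -> 0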

Hopefully, I have not misstated anything
here.  It has been some time since I studied
these matters.

0
fom
11/26/2013 7:24:00 PM
On Tue, 26 Nov 2013 18:15:23 +0000, António Marques
<antonioprm@sapo.pt> wrote in
<news:l72ofo$ut8$1@dont-email.me> in
sci.lang,comp.theory,sci.logic:

> Brian M. Scott wrote (26-11-2013 17:56):

>> On Tue, 26 Nov 2013 11:18:46 -0600, Peter Olcott
>> <OCR4Screen> wrote in
>> <news:Y-ednY_cqplqSgnPnZ2dnUVZ_hmdnZ2d@giganews.com> in
>> sci.lang,comp.theory,sci.logic:

>>> On 11/25/2013 8:14 AM, Antti Valmari wrote:

>> [...]

>>>> Randomness brings in new notions that may sound weird.
>>>> Assume tossing a fair coin until getting heads.
>>>> Non-termination is possible but has probability zero. So
>>>> probability zero is not the same thing as impossibility.

>>> I think that you are making a subtle fallacy of
>>> equivocation error: It seems to me that any probability
>>> of exactly zero must perfectly equate with impossibility.

>> It doesn’t, and there is no equivocation error.  Antti’s
>> statement is precisely correct.

> But is the probability exactly zero? 

Yes.

> How would one go about proving it?

By demonstrating that the singleton set representing that
outcome has measure 0 in the appropriate probability measure
-- which is trivial.

[...]
0
Brian
11/26/2013 7:27:49 PM
On Tue, 26 Nov 2013 12:14:23 -0600, Peter Olcott
<OCR4Screen> wrote in
<news:37GdncPPYsydeAnPnZ2dnUVZ_rqdnZ2d@giganews.com> in
sci.lang,comp.theory,sci.logic:

> On 11/26/2013 11:56 AM, Brian M. Scott wrote:

>> On Tue, 26 Nov 2013 11:18:46 -0600, Peter Olcott
>> <OCR4Screen> wrote in
>> <news:Y-ednY_cqplqSgnPnZ2dnUVZ_hmdnZ2d@giganews.com> in
>> sci.lang,comp.theory,sci.logic:

>>> On 11/25/2013 8:14 AM, Antti Valmari wrote:

>> [...]

>>>> Randomness brings in new notions that may sound weird.
>>>> Assume tossing a fair coin until getting heads.
>>>> Non-termination is possible but has probability zero. So
>>>> probability zero is not the same thing as impossibility.

>>> I think that you are making a subtle fallacy of
>>> equivocation error: It seems to me that any probability
>>> of exactly zero must perfectly equate with impossibility.

>> It doesn’t, and there is no equivocation error.  Antti’s
>> statement is precisely correct.

> Maybe he was saying something along the lines of:

> Although it never will happen, it is not impossible for it
> to happen. In that case what he said made sense.

He said exactly what he meant.  It is in principle possible,
and its probability is 0.

> On the other hand the probability of any impossible event
> would be zero would it not?

Yes.

Brian
0
Brian
11/26/2013 7:29:16 PM
Brian M. Scott wrote (26-11-2013 19:27):
> On Tue, 26 Nov 2013 18:15:23 +0000, António Marques
> <antonioprm@sapo.pt> wrote in
> <news:l72ofo$ut8$1@dont-email.me> in
> sci.lang,comp.theory,sci.logic:
>
>> Brian M. Scott wrote (26-11-2013 17:56):
>
>>> On Tue, 26 Nov 2013 11:18:46 -0600, Peter Olcott
>>> <OCR4Screen> wrote in
>>> <news:Y-ednY_cqplqSgnPnZ2dnUVZ_hmdnZ2d@giganews.com> in
>>> sci.lang,comp.theory,sci.logic:
>
>>>> On 11/25/2013 8:14 AM, Antti Valmari wrote:
>
>>> [...]
>
>>>>> Randomness brings in new notions that may sound weird.
>>>>> Assume tossing a fair coin until getting heads.
>>>>> Non-termination is possible but has probability zero. So
>>>>> probability zero is not the same thing as impossibility.
>
>>>> I think that you are making a subtle fallacy of
>>>> equivocation error: It seems to me that any probability
>>>> of exactly zero must perfectly equate with impossibility.
>
>>> It doesn’t, and there is no equivocation error.  Antti’s
>>> statement is precisely correct.
>
>> But is the probability exactly zero?
>
> Yes.
>
>> How would one go about proving it?
>
> By demonstrating that the singleton set representing that
> outcome has measure 0 in the appropriate probability measure
> -- which is trivial.

Does not then the problem morph into how to prove that the probability 
measure is the appropriate one?

0
António
11/26/2013 8:12:02 PM
fom wrote (26-11-2013 19:24):
> On 11/26/2013 12:15 PM, António Marques wrote:
>> Brian M. Scott wrote (26-11-2013 17:56):
>>> On Tue, 26 Nov 2013 11:18:46 -0600, Peter Olcott
>>> <OCR4Screen> wrote in
>>> <news:Y-ednY_cqplqSgnPnZ2dnUVZ_hmdnZ2d@giganews.com> in
>>> sci.lang,comp.theory,sci.logic:
>>>
>>>> On 11/25/2013 8:14 AM, Antti Valmari wrote:
>>>
>>> [...]
>>>
>>>>> Randomness brings in new notions that may sound weird.
>>>>> Assume tossing a fair coin until getting heads.
>>>>> Non-termination is possible but has probability zero. So
>>>>> probability zero is not the same thing as impossibility.
>>>
>>>> I think that you are making a subtle fallacy of
>>>> equivocation error: It seems to me that any probability
>>>> of exactly zero must perfectly equate with impossibility.
>>>
>>> It doesn’t, and there is no equivocation error.  Antti’s
>>> statement is precisely correct.
>>
>> But is the probability exactly zero? How would one go about proving
>> it? (The problem could be that 'exact' can't apply to 'probability', but
>> I have no idea what the literature says about that.)
>>
>
> The numeric value of the probability is
> not an issue.  A probability is defined
> as a mapping over subsets of an underlying
> system of events.  Relative to that system,
> an impossible event corresponds to the empty
> set of events.  So, the probability of impossibility
> is 0.  The converse does not hold.

I suppose the issue is with this last statement.

> When probability is interpreted relative
> to measure-theoretic ideas, the analogy
> is a set of measure zero.  Not every set
> of measure zero is empty.

This is interesting; is there anywhere I can find a discussion of this
specific point? (Some things don't google well.)

> Hopefully, I have not misstated anything
> here.  It has been some time since I studied
> these matters.

I'm quite ignorant of set theory. I don't mind it if others frame the 
questions in terms of it or anything else that works, I'd just like to 
follow where it all leads to.

0
António
11/26/2013 8:18:18 PM
On 11/26/2013 3:18 PM, António Marques wrote:
> fom wrote (26-11-2013 19:24):
[...]
>>>>> On 11/25/2013 8:14 AM, Antti Valmari wrote:

>>>>>> Randomness brings in new notions that may sound weird.
>>>>>> Assume tossing a fair coin until getting heads.
>>>>>> Non-termination is possible but has probability zero. So
>>>>>> probability zero is not the same thing as impossibility.

[...]

>> The numeric value of the probability is
>> not an issue.  A probability is defined
>> as a mapping over subsets of an underlying
>> system of events.  Relative to that system,
>> an impossible event corresponds to the empty
>> set of events.  So, the probability of impossibility
>> is 0.  The converse does not hold.
>
> I suppose the issue is with this last statement.
>
>> When probability is interpreted relative
>> to measure-theoretic ideas, the analogy
>> is a set of measure zero.  Not every set
>> of measure zero is empty.
>
> This is interesting, is there anywhere I can find a discussion of
> this specific point? (some things don't google well)

This might be a good place to start. I haven't done more than glance at
the page, but I've had good luck with Wikipedia in the past.
http://en.wikipedia.org/wiki/Measure_zero

The measure of a set might be thought of as a generalization of
length or area, but usable for really quite bizarre sets.

So, the measure of a set with one point is zero, because the
greatest lower bound of lengths of intervals containing it is zero.
Not very surprising.

The measure of a countable set is also zero. This may be less obvious.
Consider the set A = { a1, a2, a3, ... }. Cover a1 with an interval of
length L/2, a2 with an interval of length L/4, ..., ak with an
interval of length L/2^k, ...    All the intervals together can cover
no more than a length L/2 + L/4 + ... = L, and L can be made
arbitrarily small. So the greatest lower bound of the measure of
a cover of A is zero.
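
The arithmetic of the cover is easy to check numerically (a minimal
Python sketch; the value of L is arbitrary):

    L = 1e-6
    # partial sums of L/2 + L/4 + ... + L/2**k never exceed L,
    # and L can be made as small as we like
    total = sum(L / 2 ** k for k in range(1, 60))
    print(total)   # just under 1e-6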

Speaking of weird notions, this is one of my favorites:
The rationals are a countable set. This means that they are a set of
measure zero. This means that they can be covered by intervals
of an arbitrarily small total length. What does this look like?

Perhaps we can think of it as an arbitrarily tiny pat of butter
being spread on an infinite piece of toast. Every rational is
inside at least one of these intervals, and almost none of the
irrationals are inside. But the rationals and irrationals all
are arbitrarily close to each other.

I'm sure this is true, but I just do not see how to imagine this.

>> Hopefully, I have not misstated anything
>> here.  It has been some time since I studied
>> these matters.
>
> I'm quite ignorant of set theory. I don't mind it if others
> frame the questions in terms of it or anything else that works,
> I'd just like to follow where it all leads to.

There is another way to look at this particular problem, not as
powerful as measure theory, but more direct.

The probability of getting an infinite sequence of heads (ISoH)
is the probability of getting one head (H) followed by an
infinite sequence of heads (ISoH). Therefore,
    P(ISoH) = P(H)*P(ISoH) = 1/2*P(ISoH)
Thus
    P(ISoH) = 0


0
Jim
11/27/2013 1:28:12 AM
On 11/26/2013 2:18 PM, António Marques wrote:
> fom wrote (26-11-2013 19:24):
>> On 11/26/2013 12:15 PM, António Marques wrote:
>>> Brian M. Scott wrote (26-11-2013 17:56):
>>>> On Tue, 26 Nov 2013 11:18:46 -0600, Peter Olcott
>>>> <OCR4Screen> wrote in
>>>> <news:Y-ednY_cqplqSgnPnZ2dnUVZ_hmdnZ2d@giganews.com> in
>>>> sci.lang,comp.theory,sci.logic:
>>>>
>>>>> On 11/25/2013 8:14 AM, Antti Valmari wrote:
>>>>
>>>> [...]
>>>>
>>>>>> Randomness brings in new notions that may sound weird.
>>>>>> Assume tossing a fair coin until getting heads.
>>>>>> Non-termination is possible but has probability zero. So
>>>>>> probability zero is not the same thing as impossibility.
>>>>
>>>>> I think that you are making a subtle fallacy of
>>>>> equivocation error: It seems to me that any probability
>>>>> of exactly zero must perfectly equate with impossibility.
>>>>
>>>> It doesn’t, and there is no equivocation error.  Antti’s
>>>> statement is precisely correct.
>>>
>>> But is the probability exactly zero? How would one go about proving
>>> it? (The problem could be that 'exact' can't apply to 'probability', but
>>> I have no idea what the literature says about that.)
>>>
>>
>> The numeric value of the probability is
>> not an issue.  A probability is defined
>> as a mapping over subsets of an underlying
>> system of events.  Relative to that system,
>> an impossible event corresponds to the empty
>> set of events.  So, the probability of impossibility
>> is 0.  The converse does not hold.
>
> I suppose the issue is with this last statement.
>

Yes.


>> When probability is interpreted relative
>> to measure-theoretic ideas, the analogy
>> is a set of measure zero.  Not every set
>> of measure zero is empty.
>
> This is interesting, is there anywhere I can find a discussion of this
> specific point? (some things don't google well)
>

It helps to know a few words.  Start with "ring
of sets":

http://en.wikipedia.org/wiki/Ring_of_sets

Next, read a little bit about measures.  You
will not have to read everything.  But, it is
important to grasp the general idea that what
is being measured are subsets of some collection:

http://en.wikipedia.org/wiki/Measure_%28mathematics%29


You may read these links to your heart's content (or
until you are bored with irrelevancies):

http://en.wikipedia.org/wiki/Probability_measure

http://en.wikipedia.org/wiki/Probability_space

http://en.wikipedia.org/wiki/Null_set

To my knowledge, Kolmogorov gave the first axiomatization
of probability.  There is a simple discussion of his
axioms here,

http://en.wikipedia.org/wiki/Probability_axioms

And, there is a slightly different version of probability
discussed here in which countable additivity fails,

http://en.wikipedia.org/wiki/Cox%27s_theorem



>> Hopefully, I have not misstated anything
>> here.  It has been some time since I studied
>> these matters.
>
> I'm quite ignorant of set theory. I don't mind it if others frame the
> questions in terms of it or anything else that works, I'd just like to
> follow where it all leads to.
>

The classical axioms are cast in terms of set
theory (put this under the auspices of the notion
that "set" is a unifying principle of mathematics --
you may or may not disagree :-) ).  To the best of
my knowledge, the general notion of measure falls out
of real analysis and the work Lebesgue did on integration.
But, as the link above mentions, there had been others who
contributed.  I do not know the exact history of
the matter.






0
fom
11/27/2013 4:37:13 AM
fom writes:

[snip]

> To my knowledge, Kolmogorov gave the first axiomatization of
> probability.  There is a simple discussion of his axioms here,
> 
> http://en.wikipedia.org/wiki/Probability_axioms
> 
> And, there is a slightly different version of probability
> discussed here in which countable additivity fails,
> 
> http://en.wikipedia.org/wiki/Cox%27s_theorem

I think it's better to say that Cox's system doesn't have countable
additivity. Saying it fails sounds like it's there to be used but
produces wrong results, which is not the case. Kolmogorov presents
such a finite axiomatization in his book, too, before generalizing to
countable additivity.

Other differences are more interesting: For Cox, following Keynes,
conditional probability is the basic concept, and probability is meant
to be universal like logic. Kolmogorov axiomatizes the concept of a
probability measure and only then defines conditional probability, and
each probability measure is different.

Incidentally, the infinite sequence of heads is not an observable
event. It would take literally forever to get it.

[I didn't notice this thread is cross-posted. Too late.]
0
Jussi
11/27/2013 6:41:48 AM
On 11/27/2013 12:41 AM, Jussi Piitulainen wrote:
> fom writes:
>
> [snip]
>
>> To my knowledge, Kolmogorov gave the first axiomatization of
>> probability.  There is a simple discussion of his axioms here,
>>
>> http://en.wikipedia.org/wiki/Probability_axioms
>>
>> And, there is a slightly different version of probability
>> discussed here in which countable additivity fails,
>>
>> http://en.wikipedia.org/wiki/Cox%27s_theorem
>
> I think it's better to say that Cox's system doesn't have countable
> additivity. Saying it fails sounds like it's there to be used but
> produces wrong results, which is not the case. Kolmogorov presents
> such a finite axiomatization in his book, too, before generalizing to
> countable additivity.
>
> Other differences are more interesting: For Cox, following Keynes,
> conditional probability is the basic concept, and probability is meant
> to be universal like logic. Kolmogorov axiomatizes the concept of a
> probability measure and only then defines conditional probability, and
> each probability measure is different.
>
> Incidentally, the infinite sequence of heads is not an observable
> event. It would take literally forever to get it.
>
> [I didn't notice this thread is cross-posted. Too late.]
>

I am in full agreement.  This is not anything
for which I have great proficiency.  Time takes
its toll when one does not calculate with
regularity.

Thanks for the correction.

I left the cross-postings because I know not from
whence the question arose.



0
fom
11/27/2013 7:05:55 AM
On 11/26/2013 4:04 PM, George Greene wrote:
> On Thursday, November 21, 2013 9:34:14 PM UTC-5, Peter Olcott wrote:
>> No Turing Machines ever halt or fail to halt in any possible world
>> because they are defined as fictional.
>
> That is an extremely idiotic use of the word "because".
> ALL worlds other than THE ACTUAL one ARE FICTIONAL, yet some of them
> ARE ALSO POSSIBLE.  Besides, TMs are in THE ACTUAL world.  They are part
> OF THIS reality.  Everything that gets defined BY REAL PEOPLE and talked
> about IN THE REAL WORLD participates, thereby, in THE REAL world.
> The fact that abstractions are not concrete DOES NOT mean that they
> are not part OF THIS FACTUAL world.  In addition to being part of this
> world, they are part of MOST POSSIBLE worlds AS WELL, if they are
> consistently defined.  What is more important is that No Turing Machines
> BOTH halt AND fail to halt in any possible world -- THAT is a fact.
>
> You look kind of stupid saying that a thing doesn't exist when it is presented
> directly to you.
>
No Turing Machine ever Halts because Halting logically entails that a
thing exists physically.

Turing Machines are defined to exist only conceptually; thus they do not
exist physically, and therefore no Turing Machine ever Halts. We can imagine
that one halts and we can imagine that it fails to halt, yet this is not
identical with {halting} and {failing to halt}. Therefore no Turing
Machine ever halts.

We can also imagine that all Turing Machines never halt and always halt;
with imagination even the incoherent is possible. In the physical world
the incoherent is not possible.

The concept of {Possible Worlds} is from the formal semantics of natural 
language as developed by Richard Montague. Lacking a formal semantics of 
natural language (the mathematics of the meaning of words), complete
communication is impossible. People get stuck in very subtle differences 
in the meaning of their terms, not ever realizing that these differences 
even exist.
0
Peter
11/27/2013 11:43:06 AM
fom wrote (27-11-2013 07:05):
> I left the cross-postings because I know not from whence the question
> arose.

Mine came from sci.lang; I have no idea where Mr Olcott lives, but he does
show up here sometimes.

Thank you three for your replies. Your leads took me to the concept of 
'almost everywhere' and then 'almost surely' (this very problem is discussed 
in https://en.wikipedia.org/wiki/Almost_surely#Tossing_a_coin). The 
explanations given in 
https://en.wikipedia.org/wiki/Law_of_large_numbers#Differences_between_the_weak_law_and_the_strong_law 
seem to be exactly what I originally wondered about.
0
António
11/27/2013 12:28:55 PM
On 11/28/2013 1:50 AM, Franz Gnaedinger wrote:
> On Wednesday, November 27, 2013 11:01:37 AM UTC+1, Peter Olcott wrote:
>> I already did it twice and like many on Usenet, all you do is glance at
>> a couple of words before forming your refutation.
>>
>> And those that do this also tend to dishonestly snip the context so that
>> their error is not as obvious to others reading.
> I formed my refutation along the discussions with you
> the other time you came here and flooded sci.lang for
> weeks and weeks in many parallel threads with your
> phantasy of an omniscient machine composed from
> Wikipedia snippets. You gave one or two words and their
> 'atomic units of meaning' then, but I showed you that they
> are far from atomic, you just truncated a lot of meanings
> that are also covered and implied by those words.
>
> For the fourth time now: give me a word and its atomic
> unit of meaning.
This is only a further elaboration of what I already said:

http://en.wikipedia.org/wiki/History_of_type_theory#G.C3.B6del_1944
Objects of thought are divided into types:
a) Individuals
b) Properties of individuals
c) Relations between individuals
d) Properties of such relations

The above are the Atoms of Meaning that I (and Kurt Gödel) propose all
objects of thought are comprised of.
The word {GreaterThan} is a type of {Relation} between a pair of
{Individuals}.

This is not all of the {Atoms of Meaning} that {GreaterThan} is composed 
of.
The full meaning of the word {GreaterThan} can only be specified in 
terms of everything that it applies to.
The word {GreaterThan} has many subtypes, depending upon what it is 
being applied to:
{Numerically_Greater_Than}, {Physically_Greater_Than}  et cetera.

Thus the full meaning of any word is defined in terms of other words
that are defined in terms of other words, recursively and quite deeply.

Nonetheless, all of these meanings of all of these words have the
{Atoms of Meaning} listed above as their foundation.
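
A toy rendering of this hierarchy as code may make it easier to
inspect (a hypothetical Python sketch; the class names are mine, not
Gödel's):

    class Individual: pass            # a) individuals
    class Property: pass              # b) properties of individuals
    class Relation: pass              # c) relations between individuals
    class RelationProperty: pass      # d) properties of such relations

    class GreaterThan(Relation):
        # a relation between a pair of individuals
        def holds(self, a, b):
            return a > b

    class NumericallyGreaterThan(GreaterThan):
        pass                          # one subtype among many

    print(NumericallyGreaterThan().holds(3, 2))   # True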
0
Peter
11/28/2013 12:34:40 PM
Peter Olcott <OCR4Screen> writes:
<snip>
> No Turing Machine ever Halts because Halting logically entails that a
> thing exists physically.

So, given that it's universal, what possible interest can there be in
this property that you call 'Halting'?  (I've kept your capitals to
distinguish 'Halting' from 'halting' -- a property possessed only by an
interesting subset of Turing machines).

Please try to keep the distinction between Halting and halting clear in
future posts.  It is critical to know when you are talking about a
made-up and useless property shared by all Turing machines, and when you
might be trying to talk about the mathematical concept of halting.

<snip>
-- 
Ben.
0
Ben
11/28/2013 12:42:44 PM
On 11/28/2013 6:42 AM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
> <snip>
>> No Turing Machine ever Halts because Halting logically entails that a
>> thing exists physically.
> So, given that it's universal, what possible interest can there be in
> this property that you call 'Halting'?  (I've kept your capitals to
> distinguish 'Halting' from 'halting' -- a property possessed only by an
> interesting subset of Turing machines).

It is not really a property that is actually possessed by
imaginary Turing Machines.
{halting} can only be a property of things that exist physically.
A thing that never moves cannot {halt} (cease moving).

>
> Please try to keep the distinction between Halting and halting clear in
> future posts.  It is critical to know when you are talking about a
> made-up and useless property shared by all Turing machines, and when you
> might be trying to talk about the mathematical concept of halting.
>
> <snip>

0
Peter
11/28/2013 1:26:26 PM
On 11/28/2013 6:42 AM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
> <snip>
>> No Turing Machine ever Halts because Halting logically entails that a
>> thing exists physically.
> So, given that it's universal, what possible interest can there be in
> this property that you call 'Halting'?  (I've kept your capitals to
> distinguish 'Halting' from 'halting' -- a property possessed only by an
> interesting subset of Turing machines).

It is not really a property that is actually possessed by
imaginary Turing Machines.
{halting} can only be a property of things that exist physically.
A thing that never moves (because it does not exist physically) cannot
{halt} (cease moving).

>
> Please try to keep the distinction between Halting and halting clear in
> future posts.  It is critical to know when you are talking about a
> made-up and useless property shared by all Turing machines, and when you
> might be trying to talk about the mathematical concept of halting.
>
> <snip>

0
Peter
11/28/2013 2:47:07 PM
Ben Bacarisse wrote (28-11-2013 12:42):
> Peter Olcott <OCR4Screen> writes:
>> No Turing Machine ever Halts because Halting logically entails that a
>> thing exists physically.
>
> So, given that it's universal, what possible interest can there be in
> this property that you call 'Halting'?  (I've kept your capitals to
> distinguish 'Halting' from 'halting' -- a property possessed only by an
> interesting subset of Turing machines).
>
> Please try to keep the distinction between Halting and halting clear in
> future posts.  It is critical to know when you are talking about a
> made-up and useless property shared by all Turing machines, and when you
> might be trying to talk about the mathematical concept of halting.

As enticing as it is to treat Mr Olcott's statement this way, it does hark 
back to some of the arguments that are sometimes made in earnest apropos 
some problems and paradoxes.

0
António
11/28/2013 2:54:29 PM
On 11/28/2013 8:54 AM, António Marques wrote:
> Ben Bacarisse wrote (28-11-2013 12:42):
>> Peter Olcott <OCR4Screen> writes:
>>> No Turing Machine ever Halts because Halting logically entails that a
>>> thing exists physically.
>>
>> So, given that it's universal, what possible interest can there be in
>> this property that you call 'Halting'?  (I've kept your capitals to
>> distinguish 'Halting' from 'halting' -- a property possessed only by an
>> interesting subset of Turing machines).
>>
>> Please try to keep the distinction between Halting and halting clear in
>> future posts.  It is critical to know when you are talking about a
>> made-up and useless property shared by all Turing machines, and when you
>> might be trying to talk about the mathematical concept of halting.
>
> As enticing as it is to treat Mr Olcott's statement this way, it does 
> hark back to some of the arguments that are sometimes made in earnest 
> apropos some problems and paradoxes.
>
http://dictionary.reference.com/browse/apropos
0
Peter
11/28/2013 3:46:25 PM
Peter Olcott <OCR4Screen> writes:

> On 11/28/2013 6:42 AM, Ben Bacarisse wrote:
>> Peter Olcott <OCR4Screen> writes:
>> <snip>
>>> No Turing Machine ever Halts because Halting logically entails that a
>>> thing exists physically.
>> So, given that it's universal, what possible interest can there be in
>> this property that you call 'Halting'?  (I've kept your capitals to
>> distinguish 'Halting' from 'halting' -- a property possessed only by an
>> interesting subset of Turing machines).
>
> It is not really an actual property that is actually possessed by
> imaginary Turing Machines.

I think that was clear, but it can't hurt to have you re-state it.  We
now know you are not saying anything interesting about Turing
machines.

> {halting} can only be a property of things that exist physically.
> A thing that never moves can not {halt} (cease moving).

I might wonder what the {}s signify, but since you are not making any
non-trivial claims about Turing machines, the exact details do not
really matter.

<snip>
-- 
Ben.
0
Ben
11/28/2013 4:01:23 PM
António Marques <antonioprm@sapo.pt> writes:

> Ben Bacarisse wrote (28-11-2013 12:42):
>> Peter Olcott <OCR4Screen> writes:
>>> No Turing Machine ever Halts because Halting logically entails that a
>>> thing exists physically.
>>
>> So, given that it's universal, what possible interest can there be in
>> this property that you call 'Halting'?  (I've kept your capitals to
>> distinguish 'Halting' from 'halting' -- a property possessed only by an
>> interesting subset of Turing machines).
>>
>> Please try to keep the distinction between Halting and halting clear in
>> future posts.  It is critical to know when you are talking about a
>> made-up and useless property shared by all Turing machines, and when you
>> might be trying to talk about the mathematical concept of halting.
>
> As enticing as it is to treat Mr Olcott's statement this way, it does
> hark back to some of the arguments that are sometimes made in earnest
> apropos some problems and paradoxes.

I don't see your point.  Re-defining a key term can, of course, make a
problem appear to vanish, but it's always important to keep track of which
definition is in use.  Why does the fact that people often re-define
terms make it any less 'enticing' to be clear that it is happening here?

-- 
Ben.
0
Ben
11/28/2013 4:14:50 PM
Ben Bacarisse wrote (28-11-2013 16:14):
> António Marques <antonioprm@sapo.pt> writes:
>
>> Ben Bacarisse wrote (28-11-2013 12:42):
>>> Peter Olcott <OCR4Screen> writes:
>>>> No Turing Machine ever Halts because Halting logically entails that a
>>>> thing exists physically.
>>>
>>> So, given that it's universal, what possible interest can there be in
>>> this property that you call 'Halting'?  (I've kept your capitals to
>>> distinguish 'Halting' from 'halting' -- a property possessed only by an
>>> interesting subset of Turing machines).
>>>
>>> Please try to keep the distinction between Halting and halting clear in
>>> future posts.  It is critical to know when you are talking about a
>>> made-up and useless property shared by all Turing machines, and when you
>>> might be trying to talk about the mathematical concept of halting.
>>
>> As enticing as it is to treat Mr Olcott's statement this way, it does
>> hark back to some of the arguments that are sometimes made in earnest
>> apropos some problems and paradoxes.
>
> I don't see your point.  Re-defining a key term can, of course, make a
> problem appear to vanish, but it's always important to keep track of which
> definition is in use.  Why does the fact that people often re-define
> terms make it any less 'enticing' to be clear that it is happening here?

I said nothing about being clear. My objection was to the sarcasm, and only 
inasmuch as others are spared it.

0
António
11/28/2013 4:22:17 PM
On 11/28/2013 10:22 AM, António Marques wrote:
> Ben Bacarisse wrote (28-11-2013 16:14):
>> António Marques <antonioprm@sapo.pt> writes:
>>
>>> Ben Bacarisse wrote (28-11-2013 12:42):
>>>> Peter Olcott <OCR4Screen> writes:
>>>>> No Turing Machine ever Halts because Halting logically entails that a
>>>>> thing exists physically.
>>>>
>>>> So, given that it's universal, what possible interest can there be in
>>>> this property that you call 'Halting'?  (I've kept your capitals to
>>>> distinguish 'Halting' from 'halting' -- a property possessed only by an
>>>> interesting subset of Turing machines).
>>>>
>>>> Please try to keep the distinction between Halting and halting clear in
>>>> future posts.  It is critical to know when you are talking about a
>>>> made-up and useless property shared by all Turing machines, and when
>>>> you
>>>> might be trying to talk about the mathematical concept of halting.
>>>
>>> As enticing as it is to treat Mr Olcott's statement this way, it does
>>> hark back to some of the arguments that are sometimes made in earnest
>>> apropos some problems and paradoxes.
>>
>> I don't see your point.  Re-defining a key term can, of course, make a
>> problem appear to vanish, but it's always important to keep track of which
>> definition is in use.  Why does the fact that people often re-define
>> terms make it any less 'enticing' to be clear that it is happening here?
>
> I said nothing about being clear. My objection was to the sarcasm, and
> only inasmuch as others are spared it.
>

I thought that of Ben once, too, and made a similar remark.

In that case, subsequent exchanges had shown me to be in
error.  He simply demonstrates a certain "matter of factness"
that is subject to misinterpretation at times.

You, of course, may draw different conclusions.  And, without
question, Ben is as capable of sarcasm as anyone.



0
fom
11/28/2013 4:34:38 PM
In article <L62dncjDPaf9pQrPnZ2dnUVZ_g-dnZ2d@giganews.com>,
 Peter Olcott <OCR4Screen> wrote:

> This is only a further elaboration of what I already said:
> 
> http://en.wikipedia.org/wiki/History_of_type_theory#G.C3.B6del_1944
> Objects of thought are divided into types:
> a) Individuals
> b) Properties of individuals
> c) Relations between individuals
> d) Properties of such relations

There are a few problems with this.  One is reducibility.  For example, 
many programming languages have the notion of "first-class" types; a 
function might not be accessible as such, but a method or closure might 
be.  In your above classification, it strikes me that "properties" and 
"relations" may actually be a form of "individuals", thus reducing 
everything to one first-class type.

I also take issue with types as simple taxonomy.  OK, so you call 
something an "individual".  So what?  As I noted for learning (another 
obvious, "so what" AI attribute), it doesn't say anything about the 
underlying system's function.  If you want to go looking for meaning, 
you have to go *much* farther than that.

> The above are the Atoms of Meaning that I (and Kurt Gödel) propose all 
> objects of thought are comprised of.
> The word {GreaterThan} is a type of {Relation} between a pair of 
> {Individuals}.

Words are not meaning.  They don't even point to atoms of meaning.  
Language is but an imperfect artifact of brain communication.  Neurons 
themselves communicate in a very . . . inhuman way.  Meaning (to the 
degree that such a concept exists) must be shown to be derivable in such 
a system.  And *then* we can talk about what it will take to 
artificially duplicate it.

> Thus the full meaning of any word is defined in terms of other words 
> that are defined in terms of other words recursively quite deep.

To me, words are not defined in terms of other words.  Things have 
internal meaning, to which I can *ascribe* words.  If those words *to 
you* do not match my meaning, I can try to use different words until we 
are in sync.  I don't have any particular notion that such a search is 
necessarily recursive or deep.

> Nonetheless all of these meanings of all of these words have the 
> {Atoms of Meaning} listed above as their foundation.

I doubt it.  Even in physics, atoms are not the starting or stopping 
point when it comes to explaining the Universe.

-- 
iPhone apps that matter:    http://appstore.subsume.com/
My personal UDP list: 127.0.0.1, localhost, googlegroups.com, theremailer.net,
    and probably your server, too.
0
Doc
11/28/2013 6:29:35 PM
António Marques <antonioprm@sapo.pt> writes:

> Ben Bacarisse wrote (28-11-2013 16:14):
>> António Marques <antonioprm@sapo.pt> writes:
>>
>>> Ben Bacarisse wrote (28-11-2013 12:42):
>>>> Peter Olcott <OCR4Screen> writes:
>>>>> No Turing Machine ever Halts because Halting logically entails that a
>>>>> thing exists physically.
>>>>
>>>> So, given that it's universal, what possible interest can there be in
>>>> this property that you call 'Halting'?  (I've kept your capitals to
>>>> distinguish 'Halting' from 'halting' -- a property possessed only by an
>>>> interesting subset of Turing machines).
>>>>
>>>> Please try to keep the distinction between Halting and halting clear in
>>>> future posts.  It is critical to know when you are talking about a
>>>> made-up and useless property shared by all Turing machines, and when you
>>>> might be trying to talk about the mathematical concept of halting.
>>>
>>> As enticing as it is to treat Mr Olcott's statement this way, it does
>>> hark back to some of the arguments that are sometimes made in earnest
>>> apropos some problems and paradoxes.
>>
>> I don't see your point.  Re-defining a key term can, of course, make a
>> problem appear to vanish, but it's always important to keep track of which
>> definition is in use.  Why does the fact that people often re-define
>> terms make it any less 'enticing' to be clear that it is happening here?
>
> I said nothing about being clear. My objection was to the sarcasm, and
> only inasmuch as others are spared it.

Ah, I missed that altogether.  I intended no sarcasm at all -- I was
being absolutely literal -- but I accept that what I intended may very
well not be what I achieved.

When I try to read it as sarcastic, I am still a bit confused, but since
the base confusion is due to my not having had the effect I intended, it
may not be worth drilling down any further.

-- 
Ben.
0
Ben
11/28/2013 7:04:28 PM
On 11/28/2013 12:29 PM, Doc O'Leary wrote:
> In article <L62dncjDPaf9pQrPnZ2dnUVZ_g-dnZ2d@giganews.com>,
>   Peter Olcott <OCR4Screen> wrote:
>
>> This is only a further elaboration of what I already said:
>>
>> http://en.wikipedia.org/wiki/History_of_type_theory#G.C3.B6del_1944
>> Objects of thought are divided into types:
>> a) Individuals
>> b) Properties of individuals
>> c) Relations between individuals
>> d) Properties of such relations
>
> There are a few problems with this.  One is reducibility.  For example,
> many programming languages have the notion of "first-class" types; a
> function might not be accessible as such, but a method or closure might
> be.  In your above classification, it strikes me that "properties" and
> "relations" may actually be a form of "individuals", thus reducing
> everything to one first-class type.

I am not talking about programming languages. I am talking about the 
inherent structure of the natural order of the set of all conceptual 
knowledge.

>
> I also take issue with types as simple taxonomy.  OK, so you call
> something an "individual".  So what?  As I noted for learning (another
> obvious, "so what" AI attribute), it doesn't say anything about the
> underlying system's function.  If you want to go looking for meaning,
> you have to go *much* farther than that.

That is true. I am only proposing {atoms of meaning}; there is, of course, 
very much more to meaning than its atoms.

>
>> The above are the Atoms of Meaning that I (and Kurt Gödel) propose all
>> objects of thought are comprised of.
>> The word {GreaterThan} is a type of {Relation} between  a pair of
>> {Individuals}.
>
> Words are not meaning.  They don't even point to atoms of meaning.

I would agree. I see words as placeholders for meaning postulates that 
are composed of other meaning postulates, on and on ... recursively 
quite deep. These meaning postulates are each connected together on the 
basis of their semantic atom role.

> Language is but an imperfect artifact of brain communication.  Neurons
> themselves communicate in a very . . . inhuman way.  Meaning (to the
> degree that such a concept exists) must be shown to be derivable in such
> a system.  And *then* we can talk about what it will take to
> artificially duplicate it.
>
>> Thus the full meaning of any word is defined in terms of other words
>> that are defined in terms of other words recursively quite deep.
>
> To me, words are not defined in terms of other words.  Things have
> internal meaning, to which I can *ascribe* words.  If those words *to
> you* do not match my meaning, I can try to use different words until we
> are in sync.  I don't have any particular notion that such a search is
> necessarily recursive or deep.
If one were to completely define the single noun {house} such that every 
detail about everything that could ever relate to this word in any way 
is fully specified, the complete definition of the single word {house} 
would be quite large.

For example one word related to the noun {house} would be the noun 
{carpenter}. One property of the noun {house} that is {size} would pull 
in a subset of {mathematics} to the meaning postulate of the noun {house}.
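
A minimal Python sketch of this idea (every entry in the lexicon below 
is invented for illustration): each word's meaning postulate names the 
other words it depends on, and expanding {house} transitively shows how 
large the "complete" definition becomes.

    # Toy dependency digraph: word -> words its meaning postulate pulls in.
    LEXICON = {
        "house":     ["carpenter", "size", "shelter"],
        "carpenter": ["wood", "nail", "human"],
        "size":      ["mathematics"],
        "nail":      ["nail_gun"],
        "nail_gun":  ["human"],
        "shelter": [], "wood": [], "human": [], "mathematics": [],
    }

    def expand(word, lexicon):
        """Collect every word reachable from `word` in the digraph."""
        seen, stack = set(), [word]
        while stack:
            w = stack.pop()
            if w not in seen:
                seen.add(w)
                stack.extend(lexicon.get(w, []))
        return seen

    print(sorted(expand("house", LEXICON)))
    # ['carpenter', 'house', 'human', 'mathematics', 'nail',
    #  'nail_gun', 'shelter', 'size', 'wood']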

>
>> Nonetheless all of these meanings of all of these words have the
>> {Atoms of Meaning} listed above as their foundation.
>
> I doubt it.  Even in physics, atoms are not the starting or stopping
> point when it comes to explaining the Universe.
>

a) Individuals
b) Properties of individuals
c) Relations between individuals
d) Properties of such relations

I (and Kurt Gödel) propose that objects of thought can be divided into 
the above types. Although these objects of thought entail their own 
meaning postulates, they still remain semantic atoms because they form 
the basis for all meaning postulates, including their own.
0
Peter
11/29/2013 12:09:53 PM
On 11/29/2013 6:12 AM, Franz Gnaedinger wrote:
> On Thursday, November 28, 2013 1:34:40 PM UTC+1, Peter Olcott wrote:
>
> computerese again but finally provided an example of what
> I asked him for: 'greater than' has the atomic unit of meaning
> 'greater than'.

I use the convention {GreaterThan} as a placeholder for the meaning 
postulate that completely specifies the meaning of the word. I am 
referring to these things mostly within the context of Richard 
Montague's foundation.

http://en.wikipedia.org/wiki/History_of_type_theory#G.C3.B6del_1944
Objects of thought are divided into types:
a) Individuals
b) Properties of individuals
c) Relations between individuals
d) Properties of such relations

{GreaterThan} is not by itself an atom of meaning because it can be 
further divided into a: [Relation] between two [Individuals]

We might also say that the [Property] of this [Relation] is {Size}.
Each [Individual] would have its own {MeasureOfSize}.
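
One way to render that decomposition as code, purely as a sketch (the 
class and field names below are my own, not Montague's or Gödel's):

    from dataclasses import dataclass

    @dataclass
    class Individual:
        name: str
        measure_of_size: float      # each Individual's own {MeasureOfSize}

    @dataclass
    class Relation:
        property: str               # the [Property] of this [Relation]

    class GreaterThan(Relation):
        def holds(self, a: Individual, b: Individual) -> bool:
            return a.measure_of_size > b.measure_of_size

    gt = GreaterThan(property="Size")
    print(gt.holds(Individual("fifty", 50.0), Individual("pi", 3.14159)))  # True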

>
> I recall an article in The American Scientist (either from
> the late 1990s or early 2000s) on the ambiguities of terms
> like for example 'equal' and 'the same' and 'identical -
> fine in everyday language but problematic in higher logic.
> I don't remember whether they included the relation 'greater
> than' or not but will anyway make my case for its ambiguity.
>
> Look at the following numbers:
>
>    0.00000000000000000000000000000000000000000000000001
>    0.00000000000000000000000000000000000000000000000002
>
> The second number is greater than the first one, twice as much.
> English 'greater than' implies that someone or something else
> is great. Neither of these numbers is great. They are tiny
> tiny tiny. So in this case 'greater than' is not even great,
> far far less than great.

The term {GreaterThan} is not assumed to have anything at all to do with 
the term {Great}. These each would have their own meaning postulates.

>
> Fifty is greater than pi, but while nobody calls fifty a great
> number, pi certainly is, a universal number appearing everywhere
> and at most unexpected places
>
>    50 greater than 3.14159...
>    3.14159... greater than 50
>
> A fly has a small moving energy, but when an elementary particle
> has that energy then it is greater than the energy of absolutely
> any other particle - and neutrinos of that incredibly high energy
> have been detected by Ice Cube.
>
> Being invited to a fine meal is great, but being invited to watch
> a baseball game from the VIP lounge is even greater. How do you
> measure 'great' and 'greater than' in such cases?
>
> We human beings have a lot of internal scales from which we
> choose and apply the right one with ease and not even noticing
> that we make a choice. Machines don't have them, so the simple
> relation 'greater than' becomes a big problem for the machine
> you have in mind.
>

0
Peter
11/29/2013 12:32:08 PM
In article <Dqmdnf5RspiNGQXPnZ2dnUVZ_jCdnZ2d@giganews.com>,
 Peter Olcott <OCR4Screen> wrote:

> I am not talking about programming languages. I am talking about the 
> inherent structure of the natural order of the set of all conceptual 
> knowledge.

I am not talking about programming languages either.  I was merely 
referring to them as an example, hence my "For example" qualifier.  
Please update the structure of your conceptual knowledge properly . . .

> That is true. I am only proposing {atoms of meaning}, there is of course 
> very much more to meaning than its atoms.

And my point is that your proposal may be inherently wrong.  Meaning 
might not exist at the atomic level.  It might be necessary to go to the 
equivalent of subatomic particles.  Or maybe we'll need to break out the 
equivalent of quantum field theory.  Unless you have a genuine theory 
that quantifies meaning at some level you're labeling "atomic", it isn't 
scientific to impose your assumptions on the nature of intelligence.

> I would agree. I see words as placeholders for meaning postulates that 
> are composed of other meaning postulates, on and on ... recursively 
> quite deep. These meaning postulates are each connected together on the 
> basis of their semantic atom role.

You're likely wrong.  Or, put another way, you don't get to side-step 
the definition of intelligence by hand waving with recursion.

> If one were to completely define the single noun {house} such that every 
> detail about everything that could ever relate to this word in any way 
> is fully specified, the complete definition of the single word {house} 
> would be quite large.

No, it wouldn't.  It would be essentially *infinite*.  You're looking at 
this backwards, and it's leading you astray.  There is simply no way 
that words "completely" do anything except imperfectly refer to one 
individual's thoughts.  The mapping between meaning and words is *very* 
complex, in both directions; so much so that even the most intelligent 
creatures on the planet don't do it particularly well.

> For example one word related to the noun {house} would be the noun 
> {carpenter}. One property of the noun {house} that is {size} would pull 
> in a subset of {mathematics} to the meaning postulate of the noun {house}.

Nonsense.  Carpentry is only related to housing when it is 
professionally built out of wood.  Thatched huts and igloos (and 
countless other constructions) may have no such relation.  Neither does 
it require explicit mathematics to make something large enough for 
regular human habitation.  Entities are multiplying as we speak.  Your 
approach is not grounded in reality and should be abandoned.

>> I (and Kurt Gödel) propose that objects of thought can be divided into 
> the above types.

I understood that.  You are both wrong.  Do you understand that?  It 
takes more than name dropping to be right.

-- 
iPhone apps that matter:    http://appstore.subsume.com/
My personal UDP list: 127.0.0.1, localhost, googlegroups.com, theremailer.net,
    and probably your server, too.
0
Doc
11/29/2013 5:43:40 PM
On 11/29/2013 11:43 AM, Doc O'Leary wrote:
> In article <Dqmdnf5RspiNGQXPnZ2dnUVZ_jCdnZ2d@giganews.com>,
>   Peter Olcott <OCR4Screen> wrote:
>
>> I am not talking about programming languages. I am talking about the
>> inherent structure of the natural order of the set of all conceptual
>> knowledge.
>
> I am not talking about programming languages either.  I was merely
> referring to them as an example, hence my "For example" qualifier.
> Please update the structure of your conceptual knowledge properly . . .
>
>> That is true. I am only proposing {atoms of meaning}, there is of course
>> very much more to meaning than its atoms.
>
> And my point is that your proposal may be inherently wrong.  Meaning

My response to Franz provides a concrete example that shows otherwise.

> might not exist at the atomic level.  It might be necessary to go to the
> equivalent of subatomic particles.  Or maybe we'll need to break out the
> equivalent of quantum field theory.  Unless you have a genuine theory
> that quantifies meaning at some level you're labeling "atomic", it isn't
> scientific to impose your assumptions on the nature of intelligence.
>
>> I would agree. I see words as placeholders for meaning postulates that
>> are composed of other meaning postulates, on and on ... recursively
>> quite deep. These meaning postulates are each connected together on the
>> basis of their semantic atom role.
>
> You're likely wrong.  Or, put another way, you don't get to side-step
> the definition of intelligence by hand waving with recursion.
>
>> If one were to completely define the single noun {house} such that every
>> detail about everything that could ever relate to this word in any way
>> is fully specified, the complete definition of the single word {house}
>> would be quite large.
>
> No, it wouldn't.  It would be essentially *infinite*.  You're looking at
> this backwards, and it's leading you astray.  There is simply no way
> that words "completely" do anything except imperfectly refer to one
> individual's thoughts.  The mapping between meaning and words is *very*
> complex, in both directions; so much so that even the most intelligent
> creatures on the planet don't do it particularly well.
>
>> For example one word related to the noun {house} would be the noun
>> {carpenter}. One property of the noun {house} that is {size} would pull
>> in a subset of {mathematics} to the meaning postulate of the noun {house}.
>
> Nonsense.  Carpentry is only related to housing when it is
> professionally built out of wood.  Thatched huts and igloos (and
> countless other constructions) may have no such relation.  Neither does
> it require explicit mathematics to make something large enough for
> regular human habitation.  Entities are multiplying as we speak.  Your
> approach is not grounded in reality and should be abandoned.
>
>> I (and Kurt Gödel) propose that objects of thought can be divided into
>> the above types.
>
> I understood that.  You are both wrong.  Do you understand that?  It
> takes more than name dropping to be right.
>

0
Peter
11/29/2013 5:56:20 PM
Ben Bacarisse wrote (28-11-2013 19:04):
> I intended no sarcasm at all -- I was being absolutely literal

OK, I understand.
0
UTF
11/29/2013 6:03:11 PM
On 11/30/2013 2:39 AM, Dr. HotSalt wrote:
> On Friday, November 29, 2013 4:09:53 AM UTC-8, Peter Olcott wrote:
>> On 11/28/2013 12:29 PM, Doc O'Leary wrote:
>>
>>> In article <L62dncjDPaf9pQrPnZ2dnUVZ_g-dnZ2d@giganews.com>,
>>>    Peter Olcott <OCR4Screen> wrote:
>>>> This is only a further elaboration of what I already said:
>>>> http://en.wikipedia.org/wiki/History_of_type_theory#G.C3.B6del_1944
>>>> Objects of thought are divided into types:
>>>> a) Individuals
>>>> b) Properties of individuals
>>>> c) Relations between individuals
>>>> d) Properties of such relations
>>> There are a few problems with this.  One is reducibility.  For example,
>>> many programming languages have the notion of "first-class" types; a
>>> function might not be accessible as such, but a method or closure might
>>> be.  In your above classification, it strikes me that "properties" and
>>> "relations" may actually be a form of "individuals", thus reducing
>>> everything to one first-class type.
>>>
>>> I also take issue with types as simple taxonomy.  OK, so you call
>>> something an "individual".  So what?  As I noted for learning (another
>>> obvious, "so what" AI attribute), it doesn't say anything about the
>>> underlying system's function.  If you want to go looking for meaning,
>>> you have to go *much* farther than that.
>> That is true. I am only proposing {atoms of meaning}, there is of course
>> very much more to meaning than its atoms.
>>
>>>> The above are the Atoms of Meaning that I (and Kurt Gödel) propose all
>>>> objects of thought are comprised of.
>>>> The word {GreaterThan} is a type of {Relation} between  a pair of
>>>> {Individuals}.
>>> Words are not meaning.  They don't even point to atoms of meaning.
>> I would agree. I see words as placeholders for meaning postulates that
>> are composed of other meaning postulates, on and on ... recursively
>> quite deep. These meaning postulates are each connected together on the
>> basis of their semantic atom role.
>    Words, meaning postulates, whatever placeholder you want to use for a currently ill-defined term, must be themselves defined, as you say, recursively by others of their kind.
Words are the variable names, and the meanings are what are assigned to 
these variable names. When this is done in a totally rigorous manner, 
as in Montague Grammar, the combination of a word and its meaning is 
called a meaning postulate.
>
>    Your use of the qualifier "atoms" in "atoms of meaning" may therefore be the, uh, "incorrect" term, since "atom" means literally "indivisible". Ordinary physical atoms, and your kind, all comprise parts that combine according to certain rules, just as atoms obey certain rules for them to combine.
>
>    Do you now propose a "subatomic" layer of meaning on which to base "atomic units"?
>
>    There may well be multiple layers of recursion but there's got to be a bottom layer. I can't wait till you get to the Higgs Field of meaning.
The atoms of meaning that I refer to form the set of the most basic 
types of elements that can be connected together (within an acyclic 
digraph) to form meaning postulates. The latter three of these elements 
also indicate the type of connection that is formed (see the sketch 
after the list below).
a) Individuals
b) Properties of individuals
c) Relations between individuals
d) Properties of such relations
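
A minimal Python sketch of the acyclic-digraph requirement just 
described (both graphs below are invented for illustration): a postulate 
whose full specification loops back on itself is flagged, which is the 
cycle this thread attributes to the self-reference paradoxes.

    def has_cycle(graph, start):
        """Depth-first search; True if any node repeats along a path."""
        def visit(node, path):
            if node in path:
                return True
            return any(visit(n, path | {node}) for n in graph.get(node, ()))
        return visit(start, frozenset())

    well_founded = {"GreaterThan": ["Relation", "Individual"],
                    "Relation": [], "Individual": []}
    liar = {"ThisSentence": ["IsFalse"], "IsFalse": ["ThisSentence"]}

    print(has_cycle(well_founded, "GreaterThan"))  # False: acyclic, specifiable
    print(has_cycle(liar, "ThisSentence"))         # True: cyclic, flagged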
>>>> Thus the full meaning of any word is defined in terms of other words
>>>> that are defined in terms of other words recursively quite deep.
>>> To me, words are not defined in terms of other words.  Things have
>>> internal meaning, to which I can *ascribe* words.  If those words *to
>>> you* do not match my meaning, I can try to use different words until we
>>> are in sync.  I don't have any particular notion that such a search is
>>> necessarily recursive or deep.
>    Logically indefensible. If a word has intrinsic meaning, not only can it *not* be accurately defined in terms of other words, it does not *need* defining.
>
>    Please give one example of a word that needs no defining, or that can not *accurately* be defined by other words.

I never said or meant anything like what you are responding to.
Here is a concrete example:

{GreaterThan} is not by itself an atom of meaning because it can be 
further divided into a: [Relation] between two [Individuals]
We might also say that the [Property] of this [Relation] is {Size}.
Each [Individual] would have its own {MeasureOfSize}.

>
>> If one were to completely define the single noun {house} such that every
>> detail about everything that could ever relate to this word in any way
>> is fully specified, the complete definition of the single word {house}
>> would be quite large.
>    I dispute the need to include the full specification of everything that *could* relate to "house". Did you remember "blue ice" and "meteors"?
I am unaware of your reference.
One thing that must be specified to make sure that the specification of 
the term {house} is categorically exhaustively complete is {nail}.

If {nail} is not specified, then {building} (verb) a {house} cannot be 
fully understood.
{Nail} further requires {NailGun}, and {NailGun} requires knowledge of 
using a {NailGun}, which requires knowledge of {HumanBeing}.
Without all of these connections, knowledge is not complete.

>
>> For example one word related to the noun {house} would be the noun
>> {carpenter}. One property of the noun {house} that is {size} would pull
>> in a subset of {mathematics} to the meaning postulate of the noun {house}.
>    Uh, that seems to assume a priori that a house must be built at least partially of wood.
>
>    I would have started with the concept of "shelter", but that's me.
Yes, that is a good idea. The way to do it is to define a knowledge 
inheritance hierarchy, as sketched below.
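
A minimal sketch of such a hierarchy in Python (all names invented): the 
broader concept {Shelter} sits at the root, and {carpenter} attaches 
only to the wooden kind, which answers the igloo objection above.

    class Shelter:
        related = {"size"}                          # every shelter has a size

    class House(Shelter):
        related = Shelter.related | {"carpenter"}   # wood construction

    class Igloo(Shelter):
        related = Shelter.related | {"snow"}        # no carpenter involved

    print("carpenter" in House.related)   # True
    print("carpenter" in Igloo.related)   # False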

>
>>>> Nonetheless all of these meanings of all of these words have the
>>>> {Atoms of Meaning} listed above as their foundation.
>>> I doubt it.  Even in physics, atoms are not the starting or stopping
>>> point when it comes to explaining the Universe.
>> a) Individuals
>>
>> b) Properties of individuals
>>
>> c) Relations between individuals
>>
>> d) Properties of such relations
>    Do you suggest that this represents how information is parsed in the human neural network?
>
>
>    Dr. HotSalt
I (and Kurt Gödel) am suggesting that these are the fundamental 
building blocks of all knowledge.
0
Peter
11/30/2013 12:45:58 PM
In article <U8mdnT9g9JjfSAXPnZ2dnUVZ_h-dnZ2d@giganews.com>,
 Peter Olcott <OCR4Screen> wrote:

> My response to Franz provides a concrete example that shows otherwise.

Doubtful.  More to the point, whatever discussion you're referencing is 
not in the newsgroups I'm following.  Answer *me* if you want to answer 
me.

-- 
iPhone apps that matter:    http://appstore.subsume.com/
My personal UDP list: 127.0.0.1, localhost, googlegroups.com, theremailer.net,
    and probably your server, too.
0
Doc
11/30/2013 5:49:37 PM
On 11/30/2013 11:49 AM, Doc O'Leary wrote:
> In article <U8mdnT9g9JjfSAXPnZ2dnUVZ_h-dnZ2d@giganews.com>,
>   Peter Olcott <OCR4Screen> wrote:
>
>> My response to Franz provides a concrete example that shows otherwise.
> Doubtful.  More to the point, whatever discussion you're referencing is
> not in the newsgroups I'm following.  Answer *me* if you want to answer
> me.
>
Okay here you go.

http://en.wikipedia.org/wiki/History_of_type_theory#G.C3.B6del_1944
Objects of thought are divided into types:
a) Individuals
b) Properties of individuals
c) Relations between individuals
d) Properties of such relations

Concrete example of partial meaning postulate for GreaterThan using 
proposed AtomsOfMeaning:

{GreaterThan} is not by itself an atom of meaning because it can be 
further divided into a: [Relation] between two [Individuals]

We might also say that the [Property] of this [Relation] is {Size}.
Each [Individual] would have its own {MeasureOfSize}.

{MeaningPostulate}
[SemanticAtom]
0
Peter
11/30/2013 7:31:25 PM
On 12/1/2013 8:36 AM, George Greene wrote:
> On Saturday, November 30, 2013 11:50:10 PM UTC-5, Peter Olcott wrote:
>> Do you know much about the formal semantics of natural language?
> Basic logic is a lot simpler than natural language.
> If you don't know basic logic then you don't know enough
> to be trying to call error on people who DO know it.
>
>> If not then you would not know enough to know that you do not know enough.
> I know enough.  It is you who don't.
> What I DON'T know but would like to IS WHY YOU PERSIST.
> You are STUNNINGLY BAD at this.
> What kind OF HUBRIS DOES IT TAKE to think you can take on
> THE WHOLE ACADEMIC COMMUNITY over this issue??
>
>
The representational gap between mathematics and the formal semantics of 
natural language is that the former has no way of specifying 
semantically well-formed propositions whereas the latter does have such 
a way. It is this representation gap that continues to hide the fact 
that the HP, IT, and LP are merely ill-formed.
0
Peter
12/1/2013 3:19:05 PM
Peter Olcott wrote:

> The representational gap between mathematics and the formal semantics of
> natural language is that the former has no way of specifying
> semantically well-formed propositions whereas the latter does have such
> a way. It is this representation gap that continues to hide the fact
> that the HP, IT, and LP are merely ill-formed.

Here's my guess; you've not read one, not even one, formal account of 
the halting problem or the incompleteness theorem.  If you want some 
references, do ask.  You don't, you won't.  The last thing a crank wants 
is to learn the facts.

-- 
Madam Life's a piece in bloom,
Death goes dogging everywhere:
She's the tenant of the room,
He's the ruffian on the stair.
0
Peter
12/1/2013 3:37:10 PM
In article <2JSdnUQp5eCToAfPnZ2dnUVZ_vidnZ2d@giganews.com>,
 Peter Olcott <OCR4Screen> wrote:

> Okay here you go.
> 
> http://en.wikipedia.org/wiki/History_of_type_theory#G.C3.B6del_1944
> Objects of thought are divided into types:
> a) Individuals
> b) Properties of individuals
> c) Relations between individuals
> d) Properties of such relations

Uh, that is just you continuing to parrot the same thing you always do.  
I have already refuted this.  Please stop repeating yourself and apply 
some critical thinking.

> Concrete example of partial meaning postulate for GreaterThan using 
> proposed AtomsOfMeaning:
> 
> {GreaterThan} is not by itself an atom of meaning because it can be 
> further divided into a: [Relation] between two [Individuals]
> 
> We might also say that the [Property] of this [Relation] is {Size}.
> Each [Individual] would have its own {MeasureOfSize}.
> 
> {MeaningPostulate}
> [SemanticAtom]

And exactly what part of my post is this supposed to address?  Again, I 
say your position is flawed from fundamental principles.  Please show 
*any* "atom of meaning" that cannot be related to anything else.

Even this simpler example is as flawed as the house example.  As a 
comparison, greater than may have nothing at all to do with any 
particular measure of size.  For example, situationally, I'd probably 
rather have a knife than a machine gun to defend myself if I were diving 
under water, but in other scenarios the "greater than" evaluation would 
go the other way.  In that way, as I've quoted Carl Sagan on before, 
you're trying to make an apple pie from scratch without first inventing 
the Universe.

As the related saying goes, the whole is greater than the sum of its 
parts.  You are flat-out *wrong* to think you can decompose meaning into 
some a priori packaging of information.  AI has made very little 
progress because many people have followed that same path.  Reality 
doesn't work that way.  You are best served by picking a more solid 
foundation for your work than what you currently reference ad nauseam.

-- 
iPhone apps that matter:    http://appstore.subsume.com/
My personal UDP list: 127.0.0.1, localhost, googlegroups.com, theremailer.net,
    and probably your server, too.
0
Doc
12/1/2013 5:06:31 PM
On 12/1/2013 11:06 AM, Doc O'Leary wrote:
> In article <2JSdnUQp5eCToAfPnZ2dnUVZ_vidnZ2d@giganews.com>,
>   Peter Olcott <OCR4Screen> wrote:
>
>> Okay here you go.
>>
>> http://en.wikipedia.org/wiki/History_of_type_theory#G.C3.B6del_1944
>> Objects of thought are divided into types:
>> a) Individuals
>> b) Properties of individuals
>> c) Relations between individuals
>> d) Properties of such relations
> Uh, that is just you continuing to parrot the same thing you always do.
> I have already refuted this.  Please stop repeating yourself and apply
> some critical thinking.
>
>> Concrete example of partial meaning postulate for GreaterThan using
>> proposed AtomsOfMeaning:
>>
>> {GreaterThan} is not by itself an atom of meaning because it can be
>> further divided into a: [Relation] between two [Individuals]
>>
>> We might also say that the [Property] of this [Relation] is {Size}.
>> Each [Individual] would have its own {MeasureOfSize}.
>>
>> {MeaningPostulate}
>> [SemanticAtom]
> And exactly what part of my post is this supposed to address?  Again, I
> say your position is flawed from fundamental principles.  Please show
> *any* "atom of meaning" that cannot be related to anything else.
We are defining our terms differently. I am not defining my terms such 
that semantic atomicity is opposed to semantic holism. I am saying that 
everything is related to everything else on the basis of the above four 
semantic atoms. Also, I am speaking from the context of Montague 
Grammar, and not any other conception of semantics.

I specifically broke the meaning postulate of {GreaterThan} down into 
its atoms of the [relation] between two [individuals].

>
> Even this simpler example is as flawed as the house example.  As a
> comparison, greater than may have nothing at all to do with any
> particular measure of size.  For example, situationally, I'd probably
> rather have a knife than a machine gun to defend myself if I were diving
> under water,
That would not be an example of the literal conception of {GreaterThan} 
that I am referring to.
Your example is more along the lines of {Appropriateness} rather than 
{GreaterThan}.

> but in other scenarios the "greater than" evaluation would
> go the other way.  In that way, as I've quoted Carl Sagan on before,
> you're trying to make an apple pie from scratch without first inventing
> the Universe.
That is an excellent example of the interconnectedness of all knowledge 
that I have referred to.

> As the related saying goes, the whole is greater than the sum of its
> parts.  You are flat-out *wrong* to think you can decompose meaning into
> some a priori packaging of information.  AI has made very little
> progress because many people have followed that same path.  Reality
> doesn't work that way.  You are best served by picking a more solid
> foundation for your work than what you currently reference ad nauseam.
>

0
Peter
12/1/2013 6:47:04 PM
On 12/1/2013 9:37 AM, Peter Percival wrote:
> Peter Olcott wrote:
>
>> The representational gap between mathematics and the formal semantics of
>> natural language is that the former has no way of specifying
>> semantically well-formed propositions whereas the latter does have such
>> a way. It is this representation gap that continues to hide the fact
>> that the HP, IT, and LP are merely ill-formed.
>
> Here's my guess; you've not read one, not even one, formal account of 
> the halting problem or the incompleteness theorem.  If you want some 
> references, do ask.  You don't, you won't.  The last thing a crank 
> wants is to learn the facts.
>
So is there a way to define the requirements of a semantically well 
formed proposition or not?
0
Peter
12/2/2013 1:22:43 AM
Peter Olcott wrote:
> On 12/1/2013 9:37 AM, Peter Percival wrote:
>> Peter Olcott wrote:
>>
>>> The representational gap between mathematics and the formal semantics of
>>> natural language is that the former has no way of specifying
>>> semantically well-formed propositions whereas the latter does have such
>>> a way. It is this representation gap that continues to hide the fact
>>> that the HP, IT, and LP are merely ill-formed.
>>
>> Here's my guess; you've not read one, not even one, formal account of
>> the halting problem or the incompleteness theorem.  If you want some
>> references, do ask.  You don't, you won't.  The last thing a crank
>> wants is to learn the facts.
>>
> So is there a way to define the requirements of a semantically well
> formed proposition or not?

There are various theories of meaning: 
http://plato.stanford.edu/entries/meaning/.

-- 
Madam Life's a piece in bloom,
Death goes dogging everywhere:
She's the tenant of the room,
He's the ruffian on the stair.
0
Peter
12/2/2013 10:50:01 AM
On 12/2/2013 4:50 AM, Peter Percival wrote:
> Peter Olcott wrote:
>> On 12/1/2013 9:37 AM, Peter Percival wrote:
>>> Peter Olcott wrote:
>>>
>>>> The representational gap between mathematics and the formal
>>>> semantics of
>>>> natural language is that the former has no way of specifying
>>>> semantically well-formed propositions whereas the latter does have such
>>>> a way. It is this representation gap that continues to hide the fact
>>>> that the HP, IT, and LP are merely ill-formed.
>>>
>>> Here's my guess; you've not read one, not even one, formal account of
>>> the halting problem or the incompleteness theorem.  If you want some
>>> references, do ask.  You don't, you won't.  The last thing a crank
>>> wants is to learn the facts.
>>>
>> So is there a way to define the requirements of a semantically well
>> formed proposition or not?
>
> There are various theories of meaning:
> http://plato.stanford.edu/entries/meaning/.
>

Yes. I am going on the basis of what Richard Montague developed, the 
Montague (semantic) Grammar. Thus I am speaking in terms of meaning 
postulates.

http://plato.stanford.edu/entries/montague-semantics/
0
Peter
12/2/2013 11:26:05 AM
On 12/2/2013 5:03 AM, Peter Percival wrote:
> Peter Olcott wrote:
>
>> P = "The color of my car is five feet long"
>> What rules of logic would provide the means to determine that this
>> proposition is not true?
>
> There are numerous logics.  If you have one in mind, is "The color of my
> car is five feet long" a wff in it?  Does your logic have semantics? The
> rules of English (a different matter all together) change from time to
> time and from place to place, but you might start with a dictionary that
> will tell you how "color" works.

http://plato.stanford.edu/entries/montague-semantics/

http://en.wikipedia.org/wiki/History_of_type_theory#G.C3.B6del_1944

Montague meaning postulates combined with Kurt Gödel's objects of thought:

divided into types, namely: individuals, properties of individuals, 
relations between individuals, properties of such relations, etc. (with 
a similar hierarchy for extensions), and that sentences of the form: 
"a has the property φ", "b bears the relation R to c", etc. are 
meaningless, if a, b, c, R, φ are not of types fitting together.

The last line of this indicates at least one criterion measure for 
discerning utterances that do not form semantically valid propositions.

In particular it defines the way that we know that the color [property] 
of a car [individual] would not have a length [property].
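
A minimal Python sketch of that criterion (the type table below is 
invented): a property may be ascribed to an individual, but ascribing 
one property to another, as in the car-colour example, fails the 
"types fitting together" test.

    INDIVIDUAL, PROPERTY = "individual", "property"
    TYPES = {"car": INDIVIDUAL, "color": PROPERTY, "length": PROPERTY}

    def types_fit(prop, arg):
        """True only for a property ascribed to an individual."""
        return TYPES[prop] == PROPERTY and TYPES[arg] == INDIVIDUAL

    print(types_fit("color", "car"))     # True:  a car has a color
    print(types_fit("length", "color"))  # False: a color has no length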

>
>>> Obviously, for MOST mathematical and programming statements,
>>> YOU CAN'T do that.  BUT IN LOGIC, YOU CAN.
>>> That is what MAKES logic LOGIC!
>>>
>>> You can, for locally relevant example, know that there DOES NOT
>>> exist a barber who shaves all and only those barbers who do not
>>> shave themselves, WITHOUT KNOWING what "shave" means or "barber" means!!
>>
>> Unless it is translated into its set theory form...
>> The set of all sets that do not contain themselves.
>
> Just because you can write the words "The set" doesn't mean there is
> such a thing.
>

The set of all sets that do not contain themselves exists at least as a 
misconception. Likewise with these utterances:
"I am not provable"
"I am lying"

Utterances can be validated as semantically ill-formed in other ways:
utterances that apply self-reference in a way that results in 
contradiction are not well-formed propositions.

1) This sentence contains five words.
2) This sentence contains thirty-five words.
3) This sentence is true.
4) This sentence truly contains six words.
5) This sentence proves itself true.
6) It is false that this sentence contains seventeen words.
7) It can not be proven that this sentence is true.

Some of the above sentences are semantically valid propositions and 
others are not.
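
For the word-count sentences, the self-reference bottoms out in a 
checkable fact, so truth can be evaluated mechanically; a minimal sketch 
(the number table below is invented):

    NUMBERS = {"five": 5, "six": 6, "seventeen": 17, "thirty-five": 35}

    def count_matches(sentence, claimed):
        """True iff the sentence really has the word count it claims."""
        return len(sentence.split()) == NUMBERS[claimed]

    print(count_matches("This sentence contains five words.", "five"))
    # True -> consistent
    print(count_matches("This sentence contains thirty-five words.", "thirty-five"))
    # False -> inconsistent, but still checkable

By contrast, sentences such as (3) and (5) never reduce to such a check: 
evaluating them only leads back to themselves, the cycle discussed above.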

0
Peter
12/2/2013 11:58:31 AM
Peter Olcott wrote:

> The set of all sets that do not contain themselves exists at least as a
> misconception. Likewise with these utterances:
> "I am not provable"

You know, don't you, that Gödel's sentence does _not_ say "I am not 
provable"?  You should because I've told you more than once already.  "I 
am not provable" is a very informal rendering of it.  Gödel's actual 
sentence is a wff in the theory of arithmetic, it is about the natural 
numbers under successor, sum and product, it is as well formed (and as 
true) as "2+3=5".

> "I am lying"
>
> Utterances can be validated as semantically ill formed in other ways:
> Utterances that apply self-reference in a way that results in
> contradiction are not well-formed propositions.
>
> 1) This sentence contains five words.
> 2) This sentence contains thirty-five words.
> 3) This sentence is true.
> 4) This sentence truly contains six words.
> 5) This sentence proves itself true.
> 6) It is false that this sentence contains seventeen words.
> 7) It can not be proven that this sentence is true.
>
> Some of the above sentences are semantically valid propositions and
> others are not.
>


-- 
Madam Life's a piece in bloom,
Death goes dogging everywhere:
She's the tenant of the room,
He's the ruffian on the stair.
0
Peter
12/2/2013 12:06:48 PM
On 12/2/2013 6:06 AM, Peter Percival wrote:
> Peter Olcott wrote:
>
>> The set of all sets that do not contain themselves exists at least as a
>> misconception. Likewise with these utterances:
>> "I am not provable"
>
> You know, don't you, that Gödel's sentence does _not_ say "I am not
> provable"?  You should because I've told you more than once already.  "I

I also know (that it was reported) that Gödel included a direct quote of 
this sentence in his paper, thus outweighing your apparent error.

> am not provable" is a very informal rendering of it.  Gödel's actual
> sentence is a wff in the theory of arithmetic, it is about the natural
> numbers under successor, sum and product, it is as well formed (and as
> true) as "2+3=5".

Using a means of expression that lacked any criterion measure for 
verifying the semantic validity of its proposition.

http://www.cs.odu.edu/~toida/nerzic/content/logic/pred_logic/construction/wff_intro.html

This validation is entirely at the syntactic rather than semantic level 
of analysis.

How could these rules discern that the utterance:
"The color of my car is five feet long."
is not a well formed proposition?

Can the WFF rules do this?
The {color} [Property] of my {car} [Individual] has a length [Property]...

>
>> "I am lying"
>>
>> Utterances can be validated as semantically ill formed in other ways:
>> Utterances that apply self-reference in a way that results in
>> contradiction are not well-formed propositions.
>>
>> 1) This sentence contains five words.
>> 2) This sentence contains thirty-five words.
>> 3) This sentence is true.
>> 4) This sentence truly contains six words.
>> 5) This sentence proves itself true.
>> 6) It is false that this sentence contains seventeen words.
>> 7) It can not be proven that this sentence is true.
>>
>> Some of the above sentences are semantically valid propositions and
>> others are not.
>>
>
>

0
Peter
12/2/2013 12:22:59 PM
Peter Olcott wrote:
> On 12/2/2013 6:06 AM, Peter Percival wrote:
>> Peter Olcott wrote:
>>
>>> The set of all sets that do not contain themselves exists at least as a
>>> misconception. Likewise with these utterances:
>>> "I am not provable"
>>
>> You know, don't you, that Gödel's sentence does _not_ say "I am not
>> provable"?  You should because I've told you more than once already.  "I
>
> I also know (that it was reported) that Gödel included a direct quote of
> this sentence in his paper, thus outweighing your apparent error.

It's an informal version, do you not get that?  Have you read the 
paper[1]?  No.  And yet you feel qualified to comment on it.  Here's an 
English translation by van Heijenoort:

   We therefore have before us a proposition that says about itself
   that it is not provable [in PM].^{15}

Footnote 15 reads:

   Contrary to appearances, such a proposition involves no faulty
   circularity, for initially it [only] asserts that a certain well-
   defined formula (namely, the one obtained from the qth
   formula in the lexicographical order by a certain substitution)
   is unprovable.  Only subsequently (and so to speak by chance)
   does it turn out that this formula is precisely the one by which
   the proposition was expressed.

The bits in [] are not my additions.  So, did you get that:

   such a proposition involves no faulty circularity

?  The translation was endorsed by Gödel.  Next time you quote him, 
please give chapter and verse as I have.

>> am not provable" is a very informal rendering of it.  Gödel's actual
>> sentence is a wff in the theory of arithmetic, it is about the natural
>> numbers under successor, sum and product, it is as well formed (and as
>> true) as "2+3=5".
>
> Using a means of expression that lacked any criterion measure for
> verifying the semantic validity of its proposition.
>
> http://www.cs.odu.edu/~toida/nerzic/content/logic/pred_logic/construction/wff_intro.html
>
>
> This validation is entirely at the syntactic rather than semantic level
> of analysis.
>
> How could these rules discern that the utterance:
> "The color of my car is five feet long."
> is not a well formed proposition?
>
> Can the WFF rules do this?
> The {color} [Property] of my {car} [Individual] has a length [Property]...


I don't know.  You haven't said what the rules are.

[1] Gödel, 'On formally undecidable propositions...' English translation 
in 'From Frege to Gödel', ed Jean van Heijenoort.
Other versions are available.

-- 
Madam Life's a piece in bloom,
Death goes dogging everywhere:
She's the tenant of the room,
He's the ruffian on the stair.
0
Peter
12/2/2013 1:43:28 PM
On 12/2/2013 7:43 AM, Peter Percival wrote:
> Peter Olcott wrote:
>> On 12/2/2013 6:06 AM, Peter Percival wrote:
>>> Peter Olcott wrote:
>>>
>>>> The set of all sets that do not contain themselves exists at least as a
>>>> misconception. Likewise with these utterances:
>>>> "I am not provable"
>>>
>>> You know, don't you, that Gödel's sentence does _not_ say "I am not
>>> provable"?  You should because I've told you more than once already.  "I
>>
>> I also know (that it was reported) that Gödel included a direct quote of
>> this sentence in his paper, thus outweighing your apparent error.
>
> It's an informal version, do you not get that?  Have you read the
> paper[1]?  No.  And yet you feel qualified to comment on it.  Here's an
> English translation by van Heijenoort:
>
>    We therefore have before us a proposition that says about itself
>    that it is not provable [in PM].^{15}
>
> Footnote 15 reads:
>
>    Contrary to appearances, such a proposition involves no faulty
>    circularity, for initially it [only] asserts that a certain well-
>    defined formula (namely, the one obtained from the qth
>    formula in the lexicographical order by a certain substitution)
>    is unprovable.  Only subsequently (and so to speak by chance)
>    does it turn out that this formula is precisely the one by which
>    the proposition was expressed.
>
> The bits in [] are not my additions.  So, did you get that:
>
>    such a proposition involves no faulty circularity
>
> ?  The translation was endorsed by Gödel.  Next time you quote him,
> please give chapter and verse as I have.

I have no links to English translations of his papers.

>
>>> am not provable" is a very informal rendering of it.  Gödel's actual
>>> sentence is a wff in the theory of arithmetic, it is about the natural
>>> numbers under successor, sum and product, it is as well formed (and as
>>> true) as "2+3=5".
>>
>> Using a means of expression that lacked any criterion measure for
>> verifying the semantic validity of its proposition.
>>
>> http://www.cs.odu.edu/~toida/nerzic/content/logic/pred_logic/construction/wff_intro.html
>>
>>
>>
>> This validation is entirely at the syntactic rather than semantic level
>> of analysis.
>>
>> How could these rules discern that the utterance:
>> "The color of my car is five feet long."
>> is not a well formed proposition?
>>
>> Can the WFF rules do this?
>> The {color} [Property] of my {car} [Individual] has a length
>> [Property]...
>
>
> I don't know.  You haven't said what the rules are.

In the first case the rules are specified as the WFF rules referenced in 
the link shown above. Can the WFF rules by themselves discern that the 
following utterance is not a semantically well-formed proposition?

The length of the color of my car is five feet.

In the second case, the combination of Montague {meaning postulates} with 
Gödel [objects of thought] can determine that the {color} {meaning 
postulate} does not have a {length} [property] (object of thought), thus 
discerning that the preceding utterance is not a semantically 
well-formed proposition.
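
To illustrate the contrast (the tiny grammar below is invented for the 
sketch): a purely syntactic WFF test accepts any predicate applied to 
well-formed terms, so the car-colour sentence passes it; only a semantic 
type check of the kind sketched earlier can reject it.

    PREDICATES = {"Length"}          # 2-place predicate symbols
    FUNCTIONS  = {"Color"}           # 1-place function symbols
    CONSTANTS  = {"car", "five_feet"}

    def is_term(t):
        if isinstance(t, str):
            return t in CONSTANTS
        f, arg = t
        return f in FUNCTIONS and is_term(arg)

    def is_wff(formula):
        pred, *args = formula
        return pred in PREDICATES and all(is_term(a) for a in args)

    # "The length of the color of my car is five feet."
    print(is_wff(("Length", ("Color", "car"), "five_feet")))
    # True: syntax alone cannot reject it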

http://en.wikipedia.org/wiki/History_of_type_theory#G.C3.B6del_1944
...meaningless, if a, b, c, R, φ are not of types fitting together.

>
> [1] Gödel, 'On formally undecidable propositions...' English translation
> in 'From Frege to Gödel', ed Jean van Heijenoort.
> Other versions are available.
>

0
Peter
12/2/2013 1:59:55 PM
On 12/2/2013 11:02 AM, DKleinecke wrote:
> On Monday, December 2, 2013 5:59:55 AM UTC-8, Peter Olcott wrote:
>>
>> The length of the color of my car is five feet.
>>
>> In the second case the combination of Montague {meaning postulates} with
>> Gödel [objects of thought] can determine that the {color} {meaning
>> postulate} does not have a {length} [property] (object of thought), thus
>> discerning that the preceding utterance is not a semantically
>> well-formed proposition.
>
> The trouble with this example is that colors DO have lengths. The visible spectrum is 400-700 nanometers in wavelength.
>
> I admit that this counter-example is "not playing fair". But, if one wants to deal with the real world, this is the kind of thing that happens. And, when it does, all too often the response is denial (I imagine this denial as the first step in response to a dying idea) and nothing happens.
>

I would call your counter-example the fallacy of equivocation: it 
equates the WaveLength of a spectrum of light with the length (in feet) 
of the (direct physical sensation) of {Color}.

These two would definitely have different meaning postulates within any 
correct ontology using Montague Grammar.

I am not talking about the real world in terms of the human tendency to 
make mistakes. I am talking about the mathematical abstraction of the 
inherent and fundamental structure of the set of all conceptual knowledge.
0
Peter
12/2/2013 5:31:48 PM
In article <Y4mdnejUkrK0GQbPnZ2dnUVZ_hOdnZ2d@giganews.com>,
 Peter Olcott <OCR4Screen> wrote:

> We are defining our terms differently. I am not defining my terms such 
> that semantic atomicity is opposed to semantic holism. I am saying that 
> everything is related to everything else on the basis of the above four 
> semantic atoms. Also I am speaking from the Context of Montague Grammar, 
> and not any other conception of semantics.

To what end?  It's a big "so what" in my book if you get no closer to 
the definition of intelligence.  You're over 50 years too late if you 
think symbolic reasoning is going to cause a revolution in AI.


> I specifically broke the meaning postulate of {GreaterThan} down into 
> its atoms of the [relation] between two [individuals].

And I specifically pointed out that the relationship is more complicated 
than you suggest.  Science demands that either your theory account for 
reality, or you wisely should work on a theory that does.

> > Even this simpler example is as flawed as the house example.  As a
> > comparison, greater than may have nothing at all to do with any
> > particular measure of size.  For example, situationally, I'd probably
> > rather have a knife than a machine gun to defend myself if I were diving
> > under water,
> That would not be an example of the literal conception of {GreaterThan}  
> that I am referring to.

> There is no inherent "atom" like that.  Or, rather, to suggest there is 
a singular mathematical definition requires you to incorporate the 
universe of mathematics into your base expressions.  The meaning is what 
people bring to the concept, not the other way around.

> Your example is more along the lines of {Appropriateness} rather than 
> {GreaterThan}.

Now you're starting to compare relationships.  That doesn't appear to be 
covered in your 4 types.  This is the nature of "first-class" issues 
that I noted in my initial response.  Please think through the 
implications of what you're doing, because it starts sounding a bit 
nutty when empty theories turn into patchwork creations.

> > but in other scenarios the "greater than" evaluation would
> > go the other way.  In that way, as I've quoted Carl Sagan on before,
> > you're trying to make an apple pie from scratch without first inventing
> > the Universe.
> That is an excellent example of the interconnectedness of all knowledge 
> that I have referred to.

Stop stopping at the surface.  Not just interconnected, but sometimes 
contradictory.  Or even outright wrong.  Even more challenging: 
unknowns!  None of your reductionistic "atomic" thinking seems to have 
any hopes of meaningfully reducing these sorts of things into 4 vague 
types.

-- 
iPhone apps that matter:    http://appstore.subsume.com/
My personal UDP list: 127.0.0.1, localhost, googlegroups.com, theremailer.net,
    and probably your server, too.
0
Doc
12/2/2013 6:15:15 PM
On 12/2/2013 12:15 PM, Doc O'Leary wrote:
> In article <Y4mdnejUkrK0GQbPnZ2dnUVZ_hOdnZ2d@giganews.com>,
>   Peter Olcott <OCR4Screen> wrote:
>
>> We are defining our terms differently. I am not defining my terms such
>> that semantic atomicity is opposed to semantic holism. I am saying that
>> everything is related to everything else on the basis of the above four
>> semantic atoms. Also I am speaking from the Context of Montague Grammar,
>> and not any other conception of semantics.
>
> To what end?  It's a big "so what" in my book if you get no closer to
> the definition of intelligence.  You're over 50 years too late if you
> think symbolic reasoning is going to cause a revolution in AI.

As soon as we can determine the fundamental natural inherent structure 
of the set of all conceptual knowledge, a self-populating ontology can 
be built. This is the key to machine learning at the conceptual level 
of words.

>
>> I specifically broke the meaning postulate of {GreaterThan} down into
>> its atoms of the [relation] between two [individuals].
>
> And I specifically pointed out that the relationship is more complicated
> than you suggest.  Science demands that either your theory account for
> reality, or you wisely should work on a theory that does.

It is enormously complex. It is complex to the degree that accomplishing 
much of anything useful by manually encoding common-sense knowledge will 
take an infeasible amount of time unless the entire focus is on achieving 
the critical mass of a self-populating ontology. In this case only a 
BootStrap ontology need be built manually.

http://en.wikipedia.org/wiki/No_Silver_Bullet
Because the problem is infeasibly complex, we must eliminate every 
subtle trace of Accidental Complexity and boil the problem space down 
to its totally essential elements.

>
>>> Even this simpler example is as flawed as the house example.  As a
>>> comparison, greater than may have nothing at all to do with any
>>> particular measure of size.  For example, situationally, I'd probably
>>> rather have a knife than a machine gun to defend myself if I were diving
>>> under water,
>> That would not be an example of the literal conception of {GreaterThan}
>> that I am referring to.
>
> There is no inherent, "atom" like that.  Or, rather, to suggest there is
> a singular mathematical definition requires you to incorporate the
> universe of mathematics into your base expressions.  The meaning is what
> people bring to the concept, not the other way around.
>
>> Your example is more along the lines of {Appropriateness} rather than
>> {GreaterThan}.
>
1) individuals,
2) properties of individuals,
3) relations between individuals,
4) properties of such relations

> Now you're starting to compare relationships.  That doesn't appear to be
> covered in your 4 types.

4) properties of such relations

> This is the nature of "first-class" issues
> that I noted in my initial response.  Please think through the
> implications of what you're doing, because it starts sounding a bit
> nutty when empty theories turn into patchwork creations.

0
Peter
12/2/2013 7:09:43 PM
Peter Olcott wrote:
> On 12/2/2013 7:43 AM, Peter Percival wrote:
>> Peter Olcott wrote:
>>> On 12/2/2013 6:06 AM, Peter Percival wrote:
>>>> Peter Olcott wrote:
>>>>
>>>>> The set of all sets that do not contain themselves exists at least
>>>>> as a
>>>>> misconception. Likewise with these utterances:
>>>>> "I am not provable"
>>>>
>>>> You know, don't you, that Gödel's sentence does _not_ say "I am not
>>>> provable"?  You should because I've told you more than once
>>>> already.  "I
>>>
>>> I also know (that it was reported) that Gödel included a direct quote of
>>> this sentence in his paper, thus outweighing your apparent error.
>>
>> It's an informal version, do you not get that?  Have you read the
>> paper[1]?  No.  And yet you feel qualified to comment on it.  Here's an
>> English translation by van Heijenoort:
>>
>>    We therefore have before us a proposition that says about itself
>>    that it is not provable [in PM].^{15}
>>
>> Footnote 15 reads:
>>
>>    Contrary to appearances, such a proposition involves no faulty
>>    circularity, for initially it [only] asserts that a certain well-
>>    defined formula (namely, the one obtained from the qth
>>    formula in the lexicographical order by a certain substitution)
>>    is unprovable.  Only subsequently (and so to speak by chance)
>>    does it turn out that this formula is precisely the one by which
>>    the proposition was expressed.
>>
>> The bits in [] are not my additions.  So, did you get that:
>>
>>    such a proposition involves no faulty circularity
>>
>> ?  The translation was endorsed by Gödel.  Next time you quote him,
>> please give chapter and verse as I have.
>
> I have no links to English translations of his papers.
>
>>
>>>> am not provable" is a very informal rendering of it.  Gödel's actual
>>>> sentence is a wff in the theory of arithmetic, it is about the natural
>>>> numbers under successor, sum and product, it is as well formed (and as
>>>> true) as "2+3=5".
>>>
>>> Using a means of expression that lacked any criterion measure for
>>> verifying the semantic validity of its proposition.
>>>
>>> http://www.cs.odu.edu/~toida/nerzic/content/logic/pred_logic/construction/wff_intro.html
>>>
>>>
>>>
>>>
>>> This validation is entirely at the syntactic rather than semantic level
>>> of analysis.
>>>
>>> How could these rules discern that the utterance:
>>> "The color of my car is five feet long."
>>> is not a well formed proposition?
>>>
>>> Can the WFF rules do this?
>>> The {color} [Property] of my {car} [Individual] has a length
>>> [Property]...
>>
>>
>> I don't know.  You haven't said what the rules are.
>
> In the first case the rules are specified as the WFF rules referenced in
> the link shown above. Can the WFF rules by themselves discern that the
> following utterance is not a semantically well formed proposition?

Gödel's incompleteness theorem has got nothing to do with semantics, 
it's all about the limitations of proof, a syntactic matter.


-- 
Madam Life's a piece in bloom,
Death goes dogging everywhere:
She's the tenant of the room,
He's the ruffian on the stair.
0
Peter
12/3/2013 1:53:41 PM
On 12/3/2013 7:53 AM, Peter Percival wrote:
> Peter Olcott wrote:
>> In the first case the rules are specified as the WFF rules referenced in
>> the link shown above. Can the WFF rules by themselves discern that the
>> following utterance is not a semantically well formed proposition?
>
> Gödel's incompleteness theorem has got nothing to do with semantics, 
> it's all about the limitations of proof, a syntactic matter.
>
>
Thus if there were an error at the semantic level, this error would be 
completely invisible when examined only at the syntactic level. Proofs 
outside of mathematics must also include the semantic [correspondence 
theory of truth] level of analysis, and thus must rely upon more than 
the mere reshuffling of meaningless symbols.
0
Peter
12/3/2013 2:06:54 PM
On 2013-12-03, Peter Percival <peterxpercival@hotmail.com> wrote:
> Gödel's incompleteness theorem has got nothing to do with semantics, 
> it's all about the limitations of proof, a syntactic matter.

The incompleteness theorem itself is as you say, but it does involve
a matter of semantics.  What led to it was the paradox that certain
statements could be neither true nor false, and also that certain
apparent definitions are not definitions; these are semantic considerations.


-- 
This address is for information only.  I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Department of Statistics, Purdue University
hrubin@stat.purdue.edu         Phone: (765)494-6054   FAX: (765)494-0558
0
Herman
12/3/2013 6:32:57 PM
In article <3ZWdnSHhEL5qRwHPnZ2dnUVZ_uydnZ2d@giganews.com>,
 Peter Olcott <OCR4Screen> wrote:

> On 12/2/2013 12:15 PM, Doc O'Leary wrote:
> > In article <Y4mdnejUkrK0GQbPnZ2dnUVZ_hOdnZ2d@giganews.com>,
> >   Peter Olcott <OCR4Screen> wrote:
> >
> >> We are defining our terms differently. I am not defining my terms such
> >> that semantic atomicity is opposed to semantic holism. I am saying that
> >> everything is related to everything else on the basis of the above four
> >> semantic atoms. Also I am speaking from the Context of Montague Grammar,
> >> and not any other conception of semantics.
> >
> > To what end?  It's a big "so what" in my book if you get no closer to
> > the definition of intelligence.  You're over 50 years too late if you
> > think symbolic reasoning is going to cause a revolution in AI.
> 
> As soon as we can determine the fundamental natural inherent structure 
> of the set of all conceptual knowledge, a self-populating ontology can be 
> built.

While likely necessary, it is not sufficient.  More importantly, *your* 
preferred structure has been refuted.  It lacks the rigor to stand up to 
even my casual pokes.

> This is the key to machine learning at the conceptual level of 
> words.

I see no evidence that learning occurs at a "word" level, to whatever 
"atomic" degree you think you can decompose meaning.  Plainly stated, 
our sole examples of intelligence come from brains, which are 
collections of neurons that, after being constructed with great 
complexity, communicate in very complex ways and only (possibly) behave 
intelligently after decades of exercise.  Whether you think you can 
simulate that and call it AI, or work out a higher-level construct the 
brain implements and actually get AI, you have to do more than blindly 
adhere to some taxonomy conceived by Gödel.

> In this case only a 
> BootStrap ontology need be built manually.

And I have mentally constructed one based on your 4 types.  It is 
crashing constantly.  I have given you the debugging output, but what I 
*don't* hear is you fixing your code.

> http://en.wikipedia.org/wiki/No_Silver_Bullet
> Because the problem is infeasibly complex we must eliminate every subtle 
> trace of Accidental Complexity and boil the problem space down to its 
> totally essential elements.

And, as I stated (e.g., reduction and self-reference), you have not done 
that.  Stop parroting the words of others and *fix* your theory!

> 1) individuals,
> 2) properties of individuals,
> 3) relations between individuals,
> 4) properties of such relations
> 
> > Now you're starting to compare relationships.  That doesn't appear to be
> > covered in your 4 types.
> 
> 4) properties of such relations

But how do you make the comparison without a relationship rule 
equivalent to rule 3?  And, back to the reduction/first-class issue, 
what is fundamentally the difference between 2 and 4?  I'm still waiting 
for a solid answer why you don't just reduce everything to "individuals" 
and have "relations" be individuals that have the "property" of relating 
other individuals.

For the last time, stop parroting and start adding to the work.  If you 
do, as I have tried to, you may also come to the same conclusion that I 
have.  If not, you appear to be lost in the woods and, having come 
across the path of symbolic reasoning, think you're on the trail to AI, 
but you're still just going around in circles.

-- 
iPhone apps that matter:    http://appstore.subsume.com/
My personal UDP list: 127.0.0.1, localhost, googlegroups.com, theremailer.net,
    and probably your server, too.
0
Doc
12/3/2013 8:31:45 PM
On 12/3/2013 2:31 PM, Doc O'Leary wrote:
> In article <3ZWdnSHhEL5qRwHPnZ2dnUVZ_uydnZ2d@giganews.com>,
>   Peter Olcott <OCR4Screen> wrote:
>
>> On 12/2/2013 12:15 PM, Doc O'Leary wrote:
>>> In article <Y4mdnejUkrK0GQbPnZ2dnUVZ_hOdnZ2d@giganews.com>,
>>>    Peter Olcott <OCR4Screen> wrote:
>>>
>>>> We are defining our terms differently. I am not defining my terms such
>>>> that semantic atomicity is opposed to semantic holism. I am saying that
>>>> everything is related to everything else on the basis of the above four
>>>> semantic atoms. Also I am speaking from the Context of Montague Grammar,
>>>> and not any other conception of semantics.
>>> To what end?  It's a big "so what" in my book if you get no closer to
>>> the definition of intelligence.  You're over 50 years too late if you
>>> think symbolic reasoning is going to cause a revolution in AI.
>> As soon as we can determine the fundamental natural inherent structure
>> of the set of all conceptual knowledge a self-populating ontology can be
>> built.
> While likely necessary, it is not sufficient.  More importantly, *your*
> preferred structure has been refuted.  It lacks the rigor to stand up to
> even my casual pokes.
I would estimate based on that statement that you are not very familiar 
with the Montague Grammar of semantics.

>> This is the key to machine learning at the conceptual level of
>> words.
> I see no evidence that learning occurs at a "word" level, to whatever
> "atomic" degree you think you can decompose meaning.  Plainly stated,
> our sole examples of intelligence come from brains, which are
> collections of neurons that, after being constructed with great
> complexity, communicate in very complex ways and only (possibly) behave
> intelligently after decades of exercise.  Whether you think you can
> simulate that and call it AI, or work out a higher-level construct the
> brain implements and actually get AI, you have to do more than blindly
> adhere to some taxonomy conceived by Gödel.

It is almost entirely the work of Richard Montague to which I refer.
Without having a great understanding of this work it may not be possible 
to fully appreciate what I am saying.

>> In this case only a
>> BootStrap ontology need be built manually.
> And I have mentally constructed one based on your 4 types.  It is
> crashing constantly.  I have given you the debugging output, but what I
> *don't* hear is you fixing your code.
It seems to me that I was correcting your misconceptions, not fixing my 
code.

>
>> http://en.wikipedia.org/wiki/No_Silver_Bullet
>> Because the problem is infeasibly complex we must eliminate every subtle
>> trace of Accidental Complexity and boil the problem space down to its
>> totally essential elements.
> And, as I stated (e.g., reduction and self-reference), you have not done
> that.  Stop parroting the words of others and *fix* your theory!
>
>> 1) individuals,
>> 2) properties of individuals,
>> 3) relations between individuals,
>> 4) properties of such relations
>>
>>> Now you're starting to compare relationships.  That doesn't appear to be
>>> covered in your 4 types.
>> 4) properties of such relations
> But how do you make the comparison without a relationship rule
> equivalent to rule 3?
That is why rules 1) through 4) are called atoms. These atoms mutually 
depend upon each other.

> And, back to the reduction/first-class issue,
> what is fundamentally the difference between 2 and 4?
From OOP, a property is an attribute. An attribute is a component part.
Thus a [property] is a special type of [relation] between an [individual] 
whole and its [individual] part.

> I'm still waiting
> for a solid answer why you don't just reduce everything to "individuals"
> and have "relations" be individuals that have the "property" of relating
> other individuals.
That may be equally valid.
> For the last time, stop parroting and start adding to the work.  If you
> do, as I have tried to, you may also come to the same conclusion that I
> have.  If not, you appear to be lost in the woods and, having come
> across the path of symbolic reasoning, think you're on the trail to AI,
> but you're still just going around in circles.
0
Peter
12/3/2013 10:44:05 PM
On 12/3/2013 12:32 PM, Herman Rubin wrote:
> On 2013-12-03, Peter Percival <peterxpercival@hotmail.com> wrote:
>> Peter Olcott wrote:
>>> In the first case the rules are specified as the WFF rules referenced in
>>> the link shown above. Can the WFF rules by themselves discern that the
>>> following utterance is not a semantically well formed proposition?
>> Gödel's incompleteness theorem has got nothing to do with semantics,
>> it's all about the limitations of proof, a syntactic matter.
> The incompleteness theorem itself is as you say, but it does involve
> a matter of semantics.  What led to it was the paradox that certain
> statements could be neither true nor false, and also that certain
> apparent definitions are not definitions; these are semantic considerations.
>
>
My hypothesis is that all utterances that can be neither true nor false 
are not semantically well formed propositions.

If the incompleteness theorem derives anything that can be neither true 
nor false, then it is not dealing with semantically well formed propositions.
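
A check like that has to operate at the semantic level. As a minimal
sketch of one way to do it (in Python; the category names and the toy
lexicon are illustrative assumptions, not anything from this thread),
each predicate declares the semantic category its argument must belong
to, so that "The color of my car is five feet long" fails even though
purely syntactic WFF rules would pass it:

# Semantic-level well-formedness check: every predicate declares the
# category of argument it accepts; a proposition is rejected when the
# categories clash.  All names here are illustrative assumptions.

PREDICATE_DOMAINS = {
    "has_length": "physical_object",  # only physical objects have lengths
    "has_color":  "physical_object",
}

TERM_CATEGORIES = {
    "my_car":           "physical_object",
    "color_of(my_car)": "color",      # a color value, not a physical object
}

def well_formed(predicate, term):
    # Semantically well formed only if the term's category matches the
    # predicate's declared domain.
    return TERM_CATEGORIES.get(term) == PREDICATE_DOMAINS.get(predicate)

print(well_formed("has_length", "my_car"))            # True
print(well_formed("has_length", "color_of(my_car)"))  # False: category error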
0
Peter
12/4/2013 12:26:35 AM
On 12/4/2013 2:27 AM, Franz Gnaedinger wrote:
> On Tuesday, December 3, 2013 12:00:41 PM UTC+1, Peter Olcott wrote:
>>
>> Individuals are one of four types of atoms of meaning that connect
>> together within an acyclic di-graph.
>
> Individual means un-dividable, Latin / Romance
> counterpart of Greek a-tomos, un-cuttable. You know
> that physical atoms can be split. And human beings
> are not really individuals. As Goethe said: how little

http://en.wikipedia.org/wiki/History_of_type_theory#G.C3.B6del_1944

The meanings that I and Kurt Gödel are providing may have as little as 
nothing to do with their original meanings.

All that we are saying is that every element of the set of all 
conceptual knowledge can aptly fit into at least one of these four basic 
types:
1) individuals
2) properties of individuals
3) relations between individuals
4) properties of such relations

Only a valid counter-example could show otherwise.

I further propose that these four basic types form the most useful basis 
for organizing the set of all conceptual knowledge such that every 
element of this set is most efficiently and effectively represented 
within an ontology using the Richard Montague grammar of semantics to 
form connected meaning postulates.

As another poster indicated, this ontology must also be organized as an 
inheritance hierarchy. I am certain that this advice is correct.
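
As a minimal sketch of how the four types might be written down as an
inheritance hierarchy (in Python; the class and attribute names are
illustrative assumptions, not anything specified by Montague or Gödel):

class Thing:                      # root of the inheritance hierarchy
    def __init__(self, name):
        self.name = name

class Individual(Thing):          # 1) individuals
    pass

class Property(Thing):            # 2) properties of individuals
    def __init__(self, name, bearer):
        super().__init__(name)
        self.bearer = bearer      # the Individual that has the property

class Relation(Thing):            # 3) relations between individuals
    def __init__(self, name, *members):
        super().__init__(name)
        self.members = members    # the Individuals it relates

class RelationProperty(Thing):    # 4) properties of such relations
    def __init__(self, name, relation):
        super().__init__(name)
        self.relation = relation  # the Relation that has the property

# Example: "the ownership of the car by John is recent"
john = Individual("John")
car = Individual("car")
owns = Relation("owns", john, car)
recent = RelationProperty("recent", owns)
print(recent.name, "applies to", recent.relation.name)  # recent applies to owns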

http://en.wikipedia.org/wiki/Ward_Cunningham
As Ward Cunningham taught me in a personal email correspondence when I 
asked for his single most important criterion measure of the quality of 
a software system, he said redundancy must be eliminated.

http://en.wikipedia.org/wiki/No_Silver_Bullet
To create the most effective and efficient universal ontology of the 
elements of the set of conceptual knowledge all redundancy must be 
completely eliminated, otherwise accidental (inessential) complexity is 
added. Accidental (inessential) complexity is devastating to the design 
of a universal ontology.

There can be nothing more complex than the design of a system capable of 
completely representing everything, therefore every subtle trace of 
inessential complexity must be totally eliminated or the creation of 
such a system remains forever infeasible.

It is my opinion that there are only two criterion measures required to 
find the optimal representation of the set of all conceptual knowledge:

1) Each (meaning postulate) element should be complete. Every aspect of 
the meaning of a term must be exhaustively specified by its connections 
to other meaning postulates.

2) All redundancy should be completely eliminated.

> do we have and are we that we can rightly call our own.
> You got your genes from a long line of ancestors.
> Your mind is formed by the culture you are born into.
> Your body contains ten times more bacteria than cells,
> and the eukaryotic cell emerged as a symbiosis of ancient
> bacteria. What is really individual about you? Even your
> project of an all knowing machine goes back to someone
> else, Laplace and his all knowing demon. Nothing new
> and original and individual in what you are telling here,
> you are just warming up old ideas. What could really be
> original and individual would be some idea you gained from
> your own practical experience, but you sweep that away
> in the name of your phantasm.
>
> Perhaps your quest for completely complete completeness
> is an excuse for never doing and achieving anything?
> You could use the idea of 'individuals' and 'relations'
> in the proper sense of a working model and, say, examine
> relations among words in the English language and mind
> - what words appear in each other's vicinity? For that
> purpose you'd have to find a good formulation for vicinity
> in language, then write a program, then run it on large
> electronic databases, without knowing if you'll get
> a result. But if it worked, you could map the English language
> in a new way, and gain both recognition and funding.
> Or you might have an insight as you go along with your
> work, and develop it. However, you prefer the free
> delusion of grandeur others find in squaring the circle
> or designing one more perpetuum mobile. They always make
> words, as you do, and achieve nothing, like you.
>

0
Peter
12/4/2013 1:00:50 PM
On Wednesday, December 4, 2013 3:00:50 PM UTC+2, Peter Olcott wrote:
> On 12/4/2013 2:27 AM, Franz Gnaedinger wrote:
>> On Tuesday, December 3, 2013 12:00:41 PM UTC+1, Peter Olcott wrote:
>>> Individuals are one of four types of atoms of meaning that connect
>>> together within an acyclic di-graph.
>>
>> Individual means un-dividable, Latin / Romance
>> counterpart of Greek a-tomos, un-cuttable. You know
>> that physical atoms can be split. And human beings
>> are not really individuals. As Goethe said: how little
>
> http://en.wikipedia.org/wiki/History_of_type_theory#G.C3.B6del_1944
>
> The meanings that I and Kurt Gödel are providing may have as little as
> nothing to do with their original meanings.

Is your above sentence semantically correct?
You are saying that You (first) and then KG provided...
How could that happen, and especially when?
Why don't you write a simple program where you list all the properties that you can imagine, and then for each individual you checkmark if they apply or not.
(In this particular case if you had the DOB and DOD for "I" and KG as properties, you would have found if there is an intersection, a period of time when "I" and KG could have met, and thus you might have seen if the sentence is semantically correct.)
After you have a comprehensive list of the properties and the checkmarks for each individual, the program would be able to verify if a sentence is semantically correct or not.
JP
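
As a minimal sketch of the program JP describes (in Python; Gödel's
dates are historical, while the birth date for "Peter" is a placeholder
assumption), the "intersection" JP mentions becomes a lifespan-overlap
test over the property table:

from datetime import date

# One row of property values ("checkmarks") per individual.
individuals = {
    "Peter":      {"born": date(1956, 1, 1),  "died": None},  # assumed DOB
    "Kurt Godel": {"born": date(1906, 4, 28), "died": date(1978, 1, 14)},
}

def could_have_met(a, b):
    # True if the two individuals' lifespans overlap.
    pa, pb = individuals[a], individuals[b]
    end_a = pa["died"] or date.today()
    end_b = pb["died"] or date.today()
    return pa["born"] <= end_b and pb["born"] <= end_a

print(could_have_met("Peter", "Kurt Godel"))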

0
JP
12/4/2013 3:17:11 PM
On 12/4/2013 9:17 AM, JP wrote:
> On Wednesday, December 4, 2013 3:00:50 PM UTC+2, Peter Olcott wrote:
>> On 12/4/2013 2:27 AM, Franz Gnaedinger wrote:
>>> On Tuesday, December 3, 2013 12:00:41 PM UTC+1, Peter Olcott wrote:
>>>> Individuals are one of four types of atoms of meaning that connect
>>>> together within an acyclic di-graph.
>>>
>>> Individual means un-dividable, Latin / Romance
>>> counterpart of Greek a-tomos, un-cuttable. You know
>>> that physical atoms can be split. And human beings
>>> are not really individuals. As Goethe said: how little
>>
>> http://en.wikipedia.org/wiki/History_of_type_theory#G.C3.B6del_1944
>>
>> The meanings that I and Kurt Gödel are providing may have as little as
>> nothing to do with their original meanings.
>
> Is your above sentence semantically correct?
> You are saying that You (first) and then KG provided...
> How could that happen, and especially when?

He provided these meanings within one context (objects of thought).
I am providing these same meanings with the additional context of 
Montague Grammar. In both cases it seems that KG and I are referring to 
the essential concept of semantic atoms (objects of thought).
0
Peter
12/4/2013 3:29:02 PM
In article <4ZidnWq8rPoqwwPPnZ2dnUVZ_qWdnZ2d@giganews.com>,
 Peter Olcott <OCR4Screen> wrote:

> I would estimate based on that statement that you are not very familiar 
> with the Montague Grammar of semantics.

I would estimate based on that statement that you still are not adopting 
a reality-based, scientific approach to AI.  I am familiar enough with 
Montague's work to understand that the fundamental premise is wrong.  
Like the work of Gödel you parrot, it doesn't stand up to even 
rudimentary analysis.  Why you keep digging at the failed efforts (from 
an AI standpoint) of people from 50+ years ago is beyond me.  For your 
own sake, employ some independent critical thinking when you evaluate 
the work of others; it will directly improve your own work.

> It is almost entirely the work of Richard Montague to which I refer.
> Without having a great understanding of this work it may not be possible 
> to fully appreciate what I am saying.

Wrong is wrong.  If you want to be appreciated, you'll need to do more 
than echo the misguided efforts of those who have come before you.

> > And I have mentally constructed one based on your 4 types.  It is
> > crashing constantly.  I have given you the debugging output, but what I
> > *don't* hear is you fixing your code.
> It seems to me that I was correcting your misconceptions, not fixing my 
> code.

You were doing neither.  When I point out inconsistencies in your 
approach, it is a mistake to start from the position that I am failing 
to understand the task at hand.  That's not to say I can't be wrong, but 
as a scientist (if you are a scientist) you must first look back at your 
own theory to see if *you* aren't in error.

> >> 1) individuals,
> >> 2) properties of individuals,
> >> 3) relations between individuals,
> >> 4) properties of such relations
> >>
> >>> Now you're starting to compare relationships.  That doesn't appear to be
> >>> covered in your 4 types.
> >> 4) properties of such relations
> > But how do you make the comparison without a relationship rule
> > equivalent to rule 3?
> That is why rules 1) through 4) are called atoms. These atoms mutually 
> depend upon each other.

Uh, what?  The points I raise have *nothing* to do with your 
"corrections".  I am exploring the clarity of expression in the proposed 
system, which is questionable/lacking.  You are unwilling or unable to 
address these issues, which likely means that the lack of understanding 
is on your end, not mine.

> > And, back to the reduction/first-class issue,
> > what is fundamentally the difference between 2 and 4?
> From OOP, a property is an attribute. An attribute is a component part.
> Thus a [property] is a special type of [relation] between an [individual]
> whole and its [individual] part.

But this is not stated when you parrot your list of 4 rules/types.  
Either you're throwing together a patchwork, Frankenstein of a theory on 
the fly, or you're not being forthright in discussing it.

> > I'm still waiting
> > for a solid answer why you don't just reduce everything to "individuals"
> > and have "relations" be individuals that have the "property" of relating
> > other individuals.
> That may be equally valid.

I assure you, it is not.

-- 
iPhone apps that matter:    http://appstore.subsume.com/
My personal UDP list: 127.0.0.1, localhost, googlegroups.com, theremailer.net,
    and probably your server, too.
0
Doc
12/4/2013 6:41:17 PM
On 12/4/2013 12:41 PM, Doc O'Leary wrote:
> In article <4ZidnWq8rPoqwwPPnZ2dnUVZ_qWdnZ2d@giganews.com>,
>   Peter Olcott <OCR4Screen> wrote:
>
>> I would estimate based on that statement that you are not very familiar
>> with the Montague Grammar of semantics.
>
> I would estimate based on that statement that you still are not adopting
> a reality-based, scientific approach to AI.  I am familiar enough with
> Montague's work to understand that the fundamental premise is wrong.

Perhaps you do not understand it as well as you think you do.
I myself can see how it can be extended such that zero gaps of 
specification would exist.

> Like the work of Gödel you parrot, it doesn't stand up to even
> rudimentary analysis.

Provide one simple concrete example where it does not work. Perhaps your 
analysis was too rudimentary.

0
Peter
12/4/2013 7:00:15 PM
On Wednesday, December 4, 2013 5:29:02 PM UTC+2, Peter Olcott wrote:
> On 12/4/2013 9:17 AM, JP wrote:
>> On Wednesday, December 4, 2013 3:00:50 PM UTC+2, Peter Olcott wrote:
>>> The meanings that I and Kurt Gödel are providing may have as little as
>>> nothing to do with their original meanings.
>>
>> Is your above sentence semantically correct?
>> You are saying that You (first) and then KG provided...
>> How could that happen, and especially when?
>
> He provided these meanings within one context (objects of thought).
> I am providing these same meanings with the additional context of
> Montague Grammar. In both cases it seems that KG and I are referring to
> the essential concept of semantic atoms (objects of thought).

That was not my question.
My question is if your statement is semantically well defined, according to Montague grammar:

"The meanings that I and Kurt Gödel are providing may have as little as nothing to do with their original meanings."

I hope you will show that all your statements in this discussion are based on the Montague grammar so that I will have something to learn from following it.
JP
0
JP
12/5/2013 4:21:33 AM
On 12/4/2013 10:21 PM, JP wrote:
> On Wednesday, December 4, 2013 5:29:02 PM UTC+2, Peter Olcott wrote:
>> On 12/4/2013 9:17 AM, JP wrote:
>>> Is your above sentence semantically correct?
>>> You are saying that You (first) and then KG provided...
>>> How could that happen, and especially when?
>>
>> He provided these meanings within one context (objects of thought).
>> I am providing these same meanings with the additional context of
>> Montague Grammar. In both cases it seems that KG and I are referring to
>> the essential concept of semantic atoms (objects of thought).
>
> That was not my question.
> My question is if your statement is semantically well defined, according to Montague grammar:
>
> "The meanings that I and Kurt Gödel are providing may have as little as nothing to do with their original meanings."

This only pertains to how the meanings of words evolve over time such 
that no one continues to use the words as they were intended to be used 
originally.

The meanings that I and KG are applying to the terms above become 
terms-of-the-art of the formal semantics of natural language. These 
terms-of-the-art meanings take one of the current usage sense meanings 
as their basis.

>
> I hope you will show that all your statements in this discussion are based on the Montague grammar so that I will  have something to learn from following it.
> JP
>

I am attempting to extend and/or adapt the application of Montague 
Grammar such that it can be universally applied to encode the universal 
set of all conceptual knowledge.

It took me quite a while to see how Montague Grammar could be applied to 
questions. Then I found one author's extension that simply defined a 
question as a proposition with a missing piece.
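
As a minimal sketch of that extension (in Python, with illustrative
names throughout): the question is a proposition with a hole in one
argument position, and an answer is any value that fills the hole to
produce a known fact.

HOLE = object()  # marks the missing piece of the proposition

facts = {("color", "my_car", "red")}  # a toy knowledge base

def ask(predicate, subject):
    # Build "what is the <predicate> of <subject>?" as a proposition
    # with a hole in the value position.
    return (predicate, subject, HOLE)

def answer(question):
    # Return every value that fills the hole to yield a known fact.
    pred, subj, _ = question
    return [v for (p, s, v) in facts if (p, s) == (pred, subj)]

print(answer(ask("color", "my_car")))  # ['red']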
0
Peter
12/5/2013 11:24:28 AM
On Thursday, December 5, 2013 1:24:28 PM UTC+2, Peter Olcott wrote:
> On 12/4/2013 10:21 PM, JP wrote:
>> I hope you will show that all your statements in this discussion are
>> based on the Montague grammar so that I will have something to learn
>> from following it.
>> JP
>
> I am attempting to extend and/or adapt the application of Montague
> Grammar such that it can be universally applied to encode the universal
> set of all conceptual knowledge.
>
> It took me quite a while to see how Montague Grammar could be applied to
> questions. Then I found one author's extension that simply defined a
> question as a proposition with a missing piece.

Thank you for the response, but you are still as far away from anything interesting as last year.
JP
0
JP
12/5/2013 1:28:55 PM
On 12/5/2013 7:28 AM, JP wrote:
> On Thursday, December 5, 2013 1:24:28 PM UTC+2, Peter Olcott wrote:
>> I am attempting to extend and/or adapt the application of Montague
>> Grammar such that it can be universally applied to encode the universal
>> set of all conceptual knowledge.
>>
>> It took me quite a while to see how Montague Grammar could be applied to
>> questions. Then I found one author's extension that simply defined a
>> question as a proposition with a missing piece.
>
> Thank you for the response, but you are still as far away from anything interesting as last year.
> JP

The problem with this is that intuitions are difficult to translate into 
words. I can see (for example) how the Montague Grammar of semantics 
could address every aspect of natural language compositionality.

I estimate that the reason others may not appreciate these verbalized 
insights is that they have too much invested in how the conventional 
terminology of linguistics divides up the key conceptions.

It is difficult to simply throw all these conceptions away and start 
from scratch.

I could (for example) elaborate on each conventional division of the 
compositionality problem and show how MG could be used to express these. 
The conversation has never gotten that far yet.
0
Peter
12/5/2013 2:40:15 PM
On Thursday, December 5, 2013 4:40:15 PM UTC+2, Peter Olcott wrote:
> On 12/5/2013 7:28 AM, JP wrote:
>> Thank you for the response, but you are still as far away from anything
>> interesting as last year.
>> JP
>
> The problem with this is that intuitions are difficult to translate into
> words. I can see (for example) how the Montague Grammar of semantics
> could address every aspect of natural language compositionality.
>
> I estimate that the reason others may not appreciate these verbalized
> insights is that they have too much invested in how the conventional
> terminology of linguistics divides up the key conceptions.
>
> It is difficult to simply throw all these conceptions away and start
> from scratch.
>
> I could (for example) elaborate on each conventional division of the
> compositionality problem and show how MG could be used to express these.
> The conversation has never gotten that far yet.

As I understand it, you are using the following model:

Objects of thought are divided into types:
a) Individuals
b) Properties of individuals
c) Relations between individuals
d) Properties of such relations

Why don't you just start working through an example and see where it leads you?

a) an individual is a What or a Who
b) properties are the responses to what, who, where and when
c) relations of properties are responses to how and why
d) go back to each property in b) and answer to what, who, when and where

and so on.

You will soon find that you have problems starting with b), when you will have to differentiate between physical, objective properties and subjective properties.
JP
0
JP
12/5/2013 4:35:31 PM
> As I understand it, you are using the following model:
>
> Objects of thought are divided into types:
> a) Individuals
> b) Properties of individuals
> c) Relations between individuals
> d) Properties of such relations
>
> Why don't you just start working through an example and see
> where it leads you?
>
> a) an individual is a What or a Who
> b) properties are the responses to what, who, where and when
> c) relations of properties are responses to how and why
> d) go back to each property in b) and answer to what, who,
> when and where and so on.
>
> You will soon find that you have problems starting
> with b), when you will have to differentiate between
> physical, objective properties and subjective properties.
> JP

What do you mean by subjective property?

a) Individuals  (any element in U)
b) Properties of individuals (Like an attribute in OOP)
c) Relations between individuals (any kind of relationship at all)
d) Properties of such relations  (the type of (c))

I see no problems at all.

Can you please provide a specific concrete example, and I will address 
it directly.

0
Peter
12/5/2013 5:18:00 PM
In article <aM6dnWnbEZwj5gLPnZ2dnUVZ_rqdnZ2d@giganews.com>,
 Peter Olcott <OCR4Screen> wrote:

> On 12/4/2013 12:41 PM, Doc O'Leary wrote:
> > In article <4ZidnWq8rPoqwwPPnZ2dnUVZ_qWdnZ2d@giganews.com>,
> >   Peter Olcott <OCR4Screen> wrote:
> >
> >> I would estimate based on that statement that you are not very familiar
> >> with the Montague Grammar of semantics.
> >
> > I would estimate based on that statement that you still are not adopting
> > a reality-based, scientific approach to AI.  I am familiar enough with
> > Montague's work to understand that the fundamental premise is wrong.
> 
> Perhaps you do not understand it as well as you think you do.
> I myself can see how it can be extended such that zero gaps of 
> specification would exist.

Perhaps you do not understand it as well as you think you do.  I myself 
have already pointed out that to "extend" it essentially results in an 
infinite specification.  The fundamental problem is implicit vs. 
explicit meaning.

> 
> > Like the work of Godel you parrot, it doesn't stand up to even
> > rudimentary analysis.
> 
> Provide one simple concrete example where it does not work. Perhaps your 
> analysis was too rudimentary.

A more likely explanation is that your knee-jerk desire to defend your 
pet theory has you taking a non-scientific approach.  It is *you* who 
should be offering tests to your theory.  I have provided countless 
examples of their failings, but you have yet to take them to heart.  
Here is another one; hopefully simple enough for you to grasp:

Billy thinks Sally said the ball was red.

-- 
iPhone apps that matter:    http://appstore.subsume.com/
My personal UDP list: 127.0.0.1, localhost, googlegroups.com, theremailer.net,
    and probably your server, too.
0
Doc
12/5/2013 5:58:03 PM
On Thursday, December 5, 2013 7:18:00 PM UTC+2, Peter Olcott wrote:
>  >> The problem with this is that intuitions are difficult to translate
>  >> into words. I can see (for example) how the Montague Grammar of semantics
>  >> could address every aspect of natural language compositionality.
>  >>
>  >> I estimate the reason why others may not appreciate the verbalized
>  >> insights might be that they may have too much investment in how the
>  >> conventional terminology of linguistics divides up the key conceptions.
>  >>
>  >> It is difficult to simply throw all these conceptions away and start
>  >> from scratch.
>  >>
>  >> I could (for example) elaborate on each conventional division of the
>  >> compositionality problem and show how MG could be used to express these.
>  >> The conversation has never gotten that far yet.
>  >
>  > As I understand it, you are using the following model:
>  >
>  > Objects of thought are divided into types:
>  > a) Individuals
>  > b) Properties of individuals
>  > c) Relations between individuals
>  > d) Properties of such relations
>  >
>  > Why don't you just start working with an example and see
>  > where it leads you?
>  >
>  > a) an individual is a What or a Who
>  > b) properties are the responses to what, who, where and when
>  > c) relations of properties are responses to how and why
>  > d) go back to each property in b) and answer to what, who,
>  > when and where and so on.
>  >
>  > You will find soon that you have problems starting
>  > with b), when you will have to differentiate between
>  > physical, objective properties and subjective properties.
>  > JP
>
> What do you mean by subjective property?

Tall, big, small, etc.
Whenever these simple physical properties are not expressed as quantities, they become subjective, different from one observer to another.
JP

> a) Individuals  (any element in U)
> b) Properties of individuals (Like an attribute in OOP)
> c) Relations between individuals (any kind of relationship at all)
> d) Properties of such relations  (the type of (c))
>
> I see no problems at all:
>
> Can you please provide a specific concrete example, and I will address
> it directly.

I don't know where to start but I think that you have used the term house once, am I correct?
Can you start with it or any other one you wish. It is your project, not mine.
JP
0
JP
12/5/2013 6:26:31 PM
On 12/5/2013 11:58 AM, Doc O'Leary wrote:
> In article <aM6dnWnbEZwj5gLPnZ2dnUVZ_rqdnZ2d@giganews.com>,
>   Peter Olcott <OCR4Screen> wrote:
>
>> On 12/4/2013 12:41 PM, Doc O'Leary wrote:
>>> In article <4ZidnWq8rPoqwwPPnZ2dnUVZ_qWdnZ2d@giganews.com>,
>>>    Peter Olcott <OCR4Screen> wrote:
>>>
>>>> I would estimate based on that statement that you are not very familiar
>>>> with the Montague Grammar of semantics.
>>>
>>> I would estimate based on that statement that you still are not adopting
>>> a reality-based, scientific approach to AI.  I am familiar enough with
>>> Montague's work to understand that the fundamental premise is wrong.
>>
>> Perhaps you do not understand it as well as you think you do.
>> I myself can see how it can be extended such that zero gaps of
>> specification would exist.
>
> Perhaps you do not understand it as well as you think you do.  I myself
> have already pointed out that to "extend" it essentially results in an
> infinite specification.  The fundamental problem is implicit vs.
> explicit meaning.
>

Could you provide a specific concrete example?

>>
>>> Like the work of Godel you parrot, it doesn't stand up to even
>>> rudimentary analysis.
>>
>> Provide one simple concrete example where it does not work. Perhaps your
>> analysis was too rudimentary.
>
> A more likely explanation is that your knee-jerk desire to defend your
> pet theory has you taking a non-scientific approach.  It is *you* who
> should be offering tests to your theory.  I have provided countless
> examples of their failings, but you have yet to take them to heart.
> Here is another one; hopefully simple enough for you to grasp:
>
> Billy thinks Sally said the ball was red.
>

I am just throwing this together from imperfect memory, but it is 
something like this:

BILLY {believes} (SALLY {said} "The ball was red")
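
As a rough sketch only (purely illustrative; the tags and helper are invented, not Montague notation), such a nested attitude report could be stored as data:

    # Python sketch: nested propositional attitudes as tagged tuples.
    belief = ("believes", "BILLY",
              ("said", "SALLY",
               ("proposition", "the ball was red")))

    def outer_agent(report):
        """Return the agent of the outermost attitude."""
        tag, agent, content = report
        return agent

    assert outer_agent(belief) == "BILLY"

The point of the nesting is that the truth of the outer layer does not depend on the ball actually being red, only on what Billy believes about what Sally said.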

0
Peter
12/5/2013 6:37:15 PM
On 12/5/2013 12:26 PM, JP wrote:
> On Thursday, December 5, 2013 7:18:00 PM UTC+2, Peter Olcott wrote:
>>>>
>>
>>   > You will find soon that you have problems starting
>>   > with b), when you will have to differentiate between
>>   > physical, objective properties and subjective properties.
>>   > JP
>>
>>
>>
>> What do you mean by subjective property?
>>
>
> Tall, big, small, etc.
> Whenever these simple physical properties are not expressed as quantities, they become subjective, different from one observer to another.
> JP
>

Simply store them at the precision that they specify.
{House} [Individual]
{House}.{Size} [Property]
{House}.{Size}.{Value} [Property]
{House}.{Size}.{Value}.{Large} [Property]
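
A hedged sketch of what "store them at the precision that they specify" could mean as data (the nested-dictionary layout is an assumption, not the poster's stated design):

    # Python sketch: store a dotted property chain such as
    # {House}.{Size}.{Value}.{Large} as nested dictionaries,
    # recording only what was actually asserted.
    def store(ontology, path, kind="Property"):
        node = ontology
        for key in path:
            node = node.setdefault(key, {})
        node["_kind"] = kind
        return ontology

    kb = {}
    store(kb, ["House"], kind="Individual")
    store(kb, ["House", "Size", "Value", "Large"])
    # kb now holds exactly the stated precision ("Large") and nothing finer.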

>
>>
>> a) Individuals  (any element in U)
>> b) Properties of individuals (Like an attribute in OOP)
>> c) Relations between individuals (any kind of relationship at all)
>> d) Properties of such relations  (the type of (c))
>>
>>
>>
>> I see no problems at all:
>>
>> Can you please provide a specific concrete example, and I will address
>>
>> it directly.
>
>
> I don't know where to start but I think that you have used the term of house once, am I correct?
> Can you start with it or any other one you wish. It is your project, not mine.
> JP
>
(see above).

0
Peter
12/5/2013 7:00:57 PM
On Thursday, December 5, 2013 9:00:57 PM UTC+2, Peter Olcott wrote:
> On 12/5/2013 12:26 PM, JP wrote:
> > On Thursday, December 5, 2013 7:18:00 PM UTC+2, Peter Olcott wrote:
> >>   > You will find soon that you have problems starting
> >>   > with b), when you will have to differentiate between
> >>   > physical, objective properties and subjective properties.
> >>   > JP
> >>
> >> What do you mean by subjective property?
> >
> > Tall, big, small, etc.
> > Whenever these simple physical properties are not expressed as quantities, they become subjective, different from one observer to another.
> > JP
>
> Simply store them at the precision that they specify.
> {House} [Individual]
> {House}.{Size} [Property]
> {House}.{Size}.{Value} [Property]
> {House}.{Size}.{Value}.{Large} [Property]

What is Value?
JP

> >> a) Individuals  (any element in U)
> >> b) Properties of individuals (Like an attribute in OOP)
> >> c) Relations between individuals (any kind of relationship at all)
> >> d) Properties of such relations  (the type of (c))
> >>
> >> I see no problems at all:
> >>
> >> Can you please provide a specific concrete example, and I will address
> >> it directly.
> >
> > I don't know where to start but I think that you have used the term house once, am I correct?
> > Can you start with it or any other one you wish. It is your project, not mine.
> > JP
>
> (see above).

0
JP
12/5/2013 7:06:47 PM
On 12/5/2013 1:06 PM, JP wrote:
> On Thursday, December 5, 2013 9:00:57 PM UTC+2, Peter Olcott wrote:
>> On 12/5/2013 12:26 PM, JP wrote:
>>
>>> On Thursday, December 5, 2013 7:18:00 PM UTC+2, Peter Olcott wrote:
>>
>>>>>>
>>
>>>>
>>
>>>>    > You will find soon that you have problems starting
>>
>>>>    > with b), when you will have to differentiate between
>>
>>>>    > physical, objective properties and subjective properties.
>>
>>>>    > JP
>>
>>>>
>>
>>>>
>>
>>>>
>>
>>>> What do you mean by subjective property?
>>
>>>>
>>
>>>
>>
>>> Tall, big, small, etc.
>>
>>> Whenever these simple physical properties are not expressed as quantities, they become subjective, different from one observer to another.
>>
>>> JP
>>
>>>
>>
>>
>>
>> Simply store them at the precision that they specify.
>>
>> {House} [Individual]
>>
>> {House}.{Size} [Property]
>>
>> {House}.{Size}.{Value} [Property]
>>
>> {House}.{Size}.{Value}.{Large} [Property]
>>
>>
>
> What is Value?
> JP

A [Property] of {Size}.

0
Peter
12/5/2013 7:41:14 PM
On Thursday, December 5, 2013 9:41:14 PM UTC+2, Peter Olcott wrote:
> On 12/5/2013 1:06 PM, JP wrote:
> > On Thursday, December 5, 2013 9:00:57 PM UTC+2, Peter Olcott wrote:
> >> On 12/5/2013 12:26 PM, JP wrote:
> >>> On Thursday, December 5, 2013 7:18:00 PM UTC+2, Peter Olcott wrote:
> >>>>    > You will find soon that you have problems starting
> >>>>    > with b), when you will have to differentiate between
> >>>>    > physical, objective properties and subjective properties.
> >>>>    > JP
> >>>> What do you mean by subjective property?
> >>> Tall, big, small, etc.
> >>> Whenever these simple physical properties are not expressed as quantities, they become subjective, different from one observer to another.
> >>> JP
> >> Simply store them at the precision that they specify.
> >> {House} [Individual]
> >> {House}.{Size} [Property]
> >> {House}.{Size}.{Value} [Property]
> >> {House}.{Size}.{Value}.{Large} [Property]
> > What is Value?
> > JP
>
> A [Property] of {Size}.


It does not explain anything; it just becomes recursive, as size is a property of... which in turn is a property of... and so on.
It has to be simple, but simple is a relative property.
There are 7 billion people and "simple" is different for each of them; it is subject to their own frame of reference.
Just labeling something as property does not work for me.
Good luck with your project.
JP
0
JP
12/5/2013 8:37:44 PM
On 12/5/2013 2:37 PM, JP wrote:
> On Thursday, December 5, 2013 9:41:14 PM UTC+2, Peter Olcott wrote:
>> On 12/5/2013 1:06 PM, JP wrote:
>>
>>> On Thursday, December 5, 2013 9:00:57 PM UTC+2, Peter Olcott wrote:
>>>> On 12/5/2013 12:26 PM, JP wrote:
>>>>> On Thursday, December 5, 2013 7:18:00 PM UTC+2, Peter Olcott wrote:
>>>>>>     > You will find soon that you have problems starting
>>>>>>     > with b), when you will have to differentiate between
>>>>>>     > physical, objective properties and subjective properties.
>>>>>>     > JP
>>>>>> What do you mean by subjective property?
>>>>> Tall, big, small, etc.
>>>>> Whenever these simple physical properties are not expressed as quantities, they become subjective, different from one observer to another.
>>>>> JP
>>>> Simply store them at the precision that they specify.
>>>> {House} [Individual]
>>>> {House}.{Size} [Property]
>>>> {House}.{Size}.{Value} [Property]
>>>> {House}.{Size}.{Value}.{Large} [Property]
>>> What is Value?
>>> JP
>>
>> A [Property] of {Size}.
> It does not explain anything; it just becomes recursive, as size is a property of... which in turn is a property of... and so on.
I am only trying to model the set of conceptual knowledge the way that 
it is already inherently organized. Thanks to Montague Grammar I have a 
language to express these preexisting relationships.

> It has to be simple, but simple is a relative property.
> There are 7 billion people and "simple" is different for each of them; it is subject to their own frame of reference.
Knowledge already has its own natural order of connections; we only have
to express what is already there.
> Just labeling something as property does not work for me.
> Good luck with your project.
> JP
> JP
>   
There are four semantic atoms; properties are only one of the four:
1) Individuals
2) Properties
3) Relations
4) Properties of Relations
0
Peter
12/6/2013 1:20:28 AM
On Friday, December 6, 2013 3:20:28 AM UTC+2, Peter Olcott wrote:
> On 12/5/2013 2:37 PM, JP wrote:
> > On Thursday, December 5, 2013 9:41:14 PM UTC+2, Peter Olcott wrote:
> >> On 12/5/2013 1:06 PM, JP wrote:
> >>> On Thursday, December 5, 2013 9:00:57 PM UTC+2, Peter Olcott wrote:
> >>>> On 12/5/2013 12:26 PM, JP wrote:
> >>>>> On Thursday, December 5, 2013 7:18:00 PM UTC+2, Peter Olcott wrote:
> >>>>>>     > You will find soon that you have problems starting
> >>>>>>     > with b), when you will have to differentiate between
> >>>>>>     > physical, objective properties and subjective properties.
> >>>>>>     > JP
> >>>>>> What do you mean by subjective property?
> >>>>> Tall, big, small, etc.
> >>>>> Whenever these simple physical properties are not expressed as quantities, they become subjective, different from one observer to another.
> >>>>> JP
> >>>> Simply store them at the precision that they specify.
> >>>> {House} [Individual]
> >>>> {House}.{Size} [Property]
> >>>> {House}.{Size}.{Value} [Property]
> >>>> {House}.{Size}.{Value}.{Large} [Property]
> >>> What is Value?
> >>> JP
> >>
> >> A [Property] of {Size}.
> >
> > It does not explain anything; it just becomes recursive, as size is a property of... which in turn is a property of... and so on.
>
> I am only trying to model the set of conceptual knowledge the way that
> it is already inherently organized. Thanks to Montague Grammar I have a
> language to express these preexisting relationships.
>
> > It has to be simple, but simple is a relative property.
> > There are 7 billion people and "simple" is different for each of them; it is subject to their own frame of reference.
>
> Knowledge already has its own natural order of connections; we only have
> to express what is already there.
>
> > Just labeling something as property does not work for me.
> > Good luck with your project.
> > JP
>
> There are four semantic atoms; properties are only one of the four:
> 1) Individuals
> 2) Properties
> 3) Relations
> 4) Properties of Relations

Did you ever try to take a step on this road?

If a Value can be true or false, and it is part of another property that can have true or false value, and which is part of a huge number of properties, and all can have true or false values, and these true or false values are assigned independently by 7 billion people, how many combinations will you have?
And these values change all the time, as people learn, grow, die, etc.
A big house can be big to somebody accustomed only to a specific size, but seem small to somebody accustomed to another size.
All these values are individual, and they change continuously.
Please try to think it thru on your own as I do not want to do the work for you. Good luck.
JP
0
JP
12/6/2013 7:10:58 AM
On 12/6/2013 1:22 AM, Franz Gnaedinger wrote:
>
>> In these cases a [property] is the {Part-Whole}[relation] between
>> [individuals].
>>
>> I (and KG) propose that the four elements listed above are the {Atoms of
>> Meaning} (KG call them "Objects of Thought").
>>
>> There is much more to meanings than the atoms. The atoms are used to
>> construct meaning postulates, and these meaning postulates must be
>> connected together. Meaning is derived from the connections between
>> meaning postulates. The meaning postulates have atoms of meaning at
>> their foundation.
>
> You are an individual. Your hairs are your property.
> What if you loose a hair, does it remain your property?
The term [Property] is to be taken to mean [Attribute] from object 
oriented programming.

> or does it become someone else's property? or an individual?
> What is the sun? an individual? our property? the property
Every element of the set of all things is an [Individual].

> of the unknown ruler of the Milky Way? or the property of the
> divine powers? the property of God or the Great Spirit or Allah
> who created the world and every star in the sky and every grain
> of sand on Earth and every hair you lost? If so, every alleged
> individual becomes the property of God. You are no individual
> anymore but the property of God or the Great Spirit or Allah,
> your hair dito, everything you own and are.
Montague Grammar combined with Semantic Atoms (KG's objects of thought) 
is merely the means to specify the set of all conceptual knowledge. 
Together they form the language of thought.

>
> As far as I know Goedel tried to prove the existence of God.
> He could have done so along the above lines: the only real
> individual is God. However, Goedel did not prove that the
> four categories 'individuals' and 'their properties' and
> 'their relations' and the 'properties of their relations'
> can map the world entirely and correctly, whereas his two

He mostly did not pursue this idea at all, and he only wrote it 
in a footnote.

> theorems (indecideability and incompleteness) are proved.
> You confound a working model with a proved theorem while
> you are dismissing two proved theorems - a double mistake.
>

Actually I am beginning to use Montague Grammar to show that the
Natural Language form of the Incompleteness Theorem is incorrect.
At least one reader on these forums seems to agree that my reasoning is 
correct.

The Liar Paradox: "I am lying"
The English form of the Incompleteness Theorem: "I am not provable"
both err in the same way. The error is that they each are attempting to 
relate to something (that I call an object of truth) in a possible 
world, and they both fail to form this mapping.

What is the Liar Paradox lying about?
It is lying about lying.
What is it lying about lying about?
It is lying about lying about lying.
(infinitely recursive structure that never gets to its truth object).


We negate the KG sentence to make analysis simpler.
Not("I am not provable") becomes "I am provable"
What are you trying to prove?
I am trying to prove that I am provable.
What are you trying to prove that you are provable about?
(infinitely recursive structure that never gets to its truth object).
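
The regress can be made concrete with a tiny sketch (purely illustrative; the depth bound is artificial, added only so the program halts):

    # Python sketch: the sentence's only subject matter is itself, so the
    # search for an independent truth object never bottoms out.
    def truth_object(claim, depth=0, bound=5):
        if depth == bound:
            return None              # no ground reached at any finite depth
        return truth_object(claim, depth + 1, bound)

    print(truth_object("I am provable"))   # -> None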


0
Peter
12/6/2013 11:26:14 AM
On 12/6/2013 1:10 AM, JP wrote:
> On Friday, December 6, 2013 3:20:28 AM UTC+2, Peter Olcott wrote:
>> On 12/5/2013 2:37 PM, JP wrote:
>>
>>> On Thursday, December 5, 2013 9:41:14 PM UTC+2, Peter Olcott wrote:
>>
>>>> On 12/5/2013 1:06 PM, JP wrote:
>>
>>>>
>>
>>>>> On Thursday, December 5, 2013 9:00:57 PM UTC+2, Peter Olcott wrote:
>>
>>>>>> On 12/5/2013 12:26 PM, JP wrote:
>>
>>>>>>> On Thursday, December 5, 2013 7:18:00 PM UTC+2, Peter Olcott wrote:
>>
>>>>>>>>      > You will find soon that you have problems starting
>>
>>>>>>>>      > with b), when you will have to differentiate between
>>
>>>>>>>>      > physical, objective properties and subjective properties.
>>
>>>>>>>>      > JP
>>
>>>>>>>> What do you mean by subjective property?
>>
>>>>>>> Tall, big, small, etc.
>>
>>>>>>> Whenever these simple physical properties are not expressed as quantities, they become subjective, different from one observer to another.
>>
>>>>>>> JP
>>
>>>>>> Simply store them at the precision that they specify.
>>
>>>>>> {House} [Individual]
>>
>>>>>> {House}.{Size} [Property]
>>
>>>>>> {House}.{Size}.{Value} [Property]
>>
>>>>>> {House}.{Size}.{Value}.{Large} [Property]
>>
>>>>> What is Value?
>>
>>>>> JP
>>
>>>>
>>
>>>> A [Property] of {Size}.
>>
>>> It does not explain anything it just becomes recursive, as size is a property of... which in turn is a property of...and so on.
>>
>> I am only trying to model the set of conceptual knowledge the way that
>>
>> it is already inherently organized. Thanks to Montague Grammar I have a
>>
>> language to express these preexisting relationships.
>>
>>
>>
>>> It has to be simple, but simple is a a relative property.
>>
>>> There are 7 billion people and "simple" is different for each of them, it is subject to their own frame of reference.
>>
>> Knowledge already has its own natural order of connections, we only have
>>
>> to express what is already there.
>>
>>> Just labeling something as property does not work for me.
>>
>>> Good luck with your project.
>>
>>> JP
>>
>>> JP
>>
>>>
>>
>> There are four semantic atoms, properties are only one of the four:
>>
>> 1) Individuals
>>
>> 2) Properties
>>
>> 3) Relations
>>
>> 4) Properties of Relations
>
> Did you ever try to take a step on this road?
>
> If a Value can be true or false, and it is part of another property that can have true or false value, and which is part of a huge number of properties, and all can have true or false values, and these true or false values are assigned independently by 7 billion people, how many combinations will you have?

It makes no difference.

> And these values change all the time, as people learn, grow, die, etc.
The base ontology only has to store a finite amount of information about 
how the fundamental elements from the set of all things fit together 
and relate to each other. This ontology is updated as new things are 
created.

The dynamic ontology uses the base ontology such that it can comprehend 
specific instances of events. An abstract form of the dynamic ontology 
(a summary) can be stored as the history of these events.

> A big house can be big to somebody accustomed only to a specific size, but seem small to somebody accustomed to another size.
> All these values are individual, and they change continuously.

The aspect that changes is stored in the dynamic ontology as it is 
needed. The base ontology stays the same. Although differing people's 
opinion of the size of your house (large or small) will vary across 
individuals and time, the actual dimensions of the house itself remain 
static until they are physically changed.

> Please try to think it thru on your own as I do not want to do the work for you. Good luck.
> JP
>

0
Peter
12/6/2013 11:40:44 AM
In article <e9udnTtlbOtHWj3PnZ2dnUVZ_qmdnZ2d@giganews.com>,
 Peter Olcott <OCR4Screen> wrote:

> > Perhaps you do not understand it as well as you think you do.  I myself
> > have already pointed out that to "extend" it essentially results in an
> > infinite specification.  The fundamental problem is implicit vs.
> > explicit meaning.
> >
> 
> Could you provide a specific concrete example?

You have yourself!  The very use of "you" requires a person to draw on 
an internal context to give meaning to the word.  Natural languages are 
*full* of such implicit constructs, which really have no explicit 
meaning unless you're dealing with a like-minded individual.  The 
mistake you're making is in thinking that language is a data store 
rather than a communication channel.


> > Billy thinks Sally said the ball was red.
> 
> I am just throwing this together from imperfect memory, but, it is 
> something like this:
> 
> BILLY {believes} (SALLY {said} "The ball was red")

I'm sorry, but I fail to see any "atoms of meaning" in that rudimentary 
addition of notation.  The word "said" alone remains entirely ambiguous, 
because in common usage that can refer to all manner of spoken or 
written statements.  I also don't understand why you decided not to 
notate ball and red, instead treating the information as if it were a 
quotation.  There is a real incompleteness in the ideas you put forth.

-- 
iPhone apps that matter:    http://appstore.subsume.com/
My personal UDP list: 127.0.0.1, localhost, googlegroups.com, theremailer.net,
    and probably your server, too.
0
Doc
12/6/2013 5:52:25 PM
In article <ee6dnXGQ6uX0UD3PnZ2dnUVZ_jKdnZ2d@giganews.com>,
 Peter Olcott <OCR4Screen> wrote:

> On 12/5/2013 12:26 PM, JP wrote:
> >
> > Tall, big, small, etc.
> > Whenever these simple physical properties are not expressed as quantities, 
> > they become subjective, different from one observer to another.
> > JP
> >
> 
> Simply store them at the precision that they specify.

This is not sufficient.  You must also relate them to the thing they are 
compared with, and likely the thing doing the comparison.  As I have 
stated before but you refuse to see, that necessarily brings in the 
universe of connected "atoms" for every atom you think you can specify.  
Your approach is a non-starter.

-- 
iPhone apps that matter:    http://appstore.subsume.com/
My personal UDP list: 127.0.0.1, localhost, googlegroups.com, theremailer.net,
    and probably your server, too.
0
Doc
12/6/2013 6:03:07 PM
On 12/6/2013 11:52 AM, Doc O'Leary wrote:
> In article <e9udnTtlbOtHWj3PnZ2dnUVZ_qmdnZ2d@giganews.com>,
>   Peter Olcott <OCR4Screen> wrote:
>
>>> Perhaps you do not understand it as well as you think you do.  I myself
>>> have already pointed out that to "extend" it essentially results in an
>>> infinite specification.  The fundamental problem is implicit vs.
>>> explicit meaning.
>>>
>>
>> Could you provide a specific concrete example?
>
> You have yourself!  The very use of "you" requires a person to draw on
> an internal context to give meaning to the word.  Natural languages are
> *full* of such implicit constructs, which really have no explicit
> meaning unless you're dealing with a like-minded individual.  The
> mistake you're making is in thinking that language is a data store
> rather than a communication channel.
>

That is not true: very many things have common meanings that are defined 
tautologically, such that others not sharing these meanings are either 
unaware of these conceptions or have misconceptions.

The literal meaning of {human being} never literally means {a large box 
of chocolates}.

>
>>> Billy thinks Sally said the ball was red.
>>
>> I am just throwing this together from imperfect memory, but, it is
>> something like this:
>>
>> BILLY {believes} (SALLY {said} "The ball was red")
>
> I'm sorry, but I fail to see any "atoms of meaning" in that rudimentary
> addition of notation.  The word "said" alone remains entirely ambiguous,
> because in common usage that can refer to all manner of spoken or
> written statements.  I also don't understand why you decided not to
> notate ball and red, instead treating the information as if it were a
> quotation.  There is a real incompleteness in the ideas you put forth.
>

I did not bother with greatly detailed elaboration because I thought it 
would not be productive, as your question about the quotation indicates:

There is a huge difference between the ball actually being red, and what 
BILLY may believe about what SALLY said.

Sally may have actually said: "The ball is ready" meaning that the huge 
party is now open for guests to arrive.

These nuances of meaning can be explicitly encoded within Montague Grammar.

0
Peter
12/6/2013 7:12:18 PM
On 12/6/2013 12:03 PM, Doc O'Leary wrote:
> In article <ee6dnXGQ6uX0UD3PnZ2dnUVZ_jKdnZ2d@giganews.com>,
>   Peter Olcott <OCR4Screen> wrote:
>
>> On 12/5/2013 12:26 PM, JP wrote:
>>>
>>> Tall, big, small, etc.
>>> Whenever these simple physical properties are not expressed as quantities,
>>> they become subjective, different from one observer to another.
>>> JP
>>>
>>
>> Simply store them at the precision that they specify.
>
> This is not sufficient.  You must also relate them to the thing they are
> compared with, and likely the thing doing the comparison.  As I have
> stated before but you refuse to see, that necessarily brings in the
> universe of connected "atoms" for every atom you think you can specify.

Of course it does; this is the way that knowledge works. The meaning 
postulates (and thus their atoms) of Boat oars and integers are 
connected together whenever you need to count boat oars. They are not 
connected together when you merely need to row, row, row the boat.

> Your approach is a non-starter.
>

0
Peter
12/6/2013 7:38:28 PM
On 12/6/2013 12:03 PM, Doc O'Leary wrote:
> In article <ee6dnXGQ6uX0UD3PnZ2dnUVZ_jKdnZ2d@giganews.com>,
>   Peter Olcott <OCR4Screen> wrote:
>
>> On 12/5/2013 12:26 PM, JP wrote:
>>>
>>> Tall, big, small, etc.
>>> Whenever these simple physical properties are not expressed as quantities,
>>> they become subjective, different from one observer to another.
>>> JP
>>>
>>
>> Simply store them at the precision that they specify.
>
> This is not sufficient.  You must also relate them to the thing they are
> compared with, and likely the thing doing the comparison.  As I have
> stated before but you refuse to see, that necessarily brings in the
> universe of connected "atoms" for every atom you think you can specify.
> Your approach is a non-starter.
>

Of course it does; this is the way that knowledge works. The meaning 
postulates (and thus their atoms) of Boat oars and integers are 
connected together whenever you need to count boat oars. They are not 
connected together when you merely need to row, row, row the boat.

There are two different ontologies:
1) The base ontology that defines each conception (specified as meaning 
postulates) and how these conceptions can connect to other conceptions.

2) A dynamic (discourse) ontology that connects elements from the base 
ontology together for a specific purpose. (A single specific instance of 
a situation).

By explicitly dividing knowledge representation into these two parts, 
knowledge representation becomes finite.
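
One hedged reading of this two-ontology division, sketched as data (all names and structure here are guesses at what is meant, not a published design):

    # Python sketch: a finite base ontology of concepts and their allowed
    # connections, plus a per-situation discourse ontology.
    base = {
        "Oar":     {"connects_to": {"Integer", "Boat"}},
        "Integer": {"connects_to": {"Oar"}},
        "Boat":    {"connects_to": {"Oar"}},
    }

    def connect(discourse, a, b):
        """Link two base concepts for one specific situation."""
        if b in base[a]["connects_to"]:
            discourse.setdefault(a, set()).add(b)
        return discourse

    counting_oars = connect({}, "Oar", "Integer")  # counting needs Integer
    rowing = {}                                    # rowing needs no such link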
0
Peter
12/6/2013 8:25:29 PM
On 12/6/2013 12:52 PM, Doc O'Leary wrote:
> In article <e9udnTtlbOtHWj3PnZ2dnUVZ_qmdnZ2d@giganews.com>,
>  Peter Olcott <OCR4Screen> wrote:
> 
>>> Perhaps you do not understand it as well as you think you do.  I myself
>>> have already pointed out that to "extend" it essentially results in an
>>> infinite specification.  The fundamental problem is implicit vs.
>>> explicit meaning.
>>>
>>
>> Could you provide a specific concrete example?
> 
> You have yourself!  The very use of "you" requires a person to draw on 
> an internal context to give meaning to the word.  Natural languages are 
> *full* of such implicit constructs, which really have no explicit 
> meaning unless you're dealing with a like-minded individual.  The 
> mistake you're making is in thinking that language is a data store 
> rather than a communication channel.

Well put.

Another way of looking at it is that language
contains "hints" for finding "meaning" (i.e.,
for understanding) rather than "meaning" itself.

>>> Billy thinks Sally said the ball was red.
>>
>> I am just throwing this together from imperfect memory, but, it is 
>> something like this:
>>
>> BILLY {believes} (SALLY {said} "The ball was red")
> 
> I'm sorry, but I fail to see any "atoms of meaning" in that rudimentary 
> addition of notation.  The word "said" alone remains entirely ambiguous, 
> because in common usage that can refer to all manner of spoken or 
> written statements.  I also don't understand why you decided not to 
> notate ball and red, instead treating the information as if it were a 
> quotation.  There is a real incompleteness in the ideas you put forth.

Tak
--
----------------------------------------------------------------+-----
Tak To                                            takto@alum.mit.eduxx
--------------------------------------------------------------------^^
 [taode takto ~{LU5B~}]      NB: trim the xx to get my real email addr


0
Tak
12/6/2013 9:36:52 PM
On 12/6/2013 3:36 PM, Tak To wrote:
> On 12/6/2013 12:52 PM, Doc O'Leary wrote:
>> In article <e9udnTtlbOtHWj3PnZ2dnUVZ_qmdnZ2d@giganews.com>,
>>   Peter Olcott <OCR4Screen> wrote:
>>
>>>> Perhaps you do not understand it as well as you think you do.  I myself
>>>> have already pointed out that to "extend" it essentially results in an
>>>> infinite specification.  The fundamental problem is implicit vs.
>>>> explicit meaning.
>>>>
>>> Could you provide a specific concrete example?
>> You have yourself!  The very use of "you" requires a person to draw on
>> an internal context to give meaning to the word.  Natural languages are
>> *full* of such implicit constructs, which really have no explicit
>> meaning unless you're dealing with a like-minded individual.  The
>> mistake you're making is in thinking that language is a data store
>> rather than a communication channel.
> Well put.
>
> Another way of looking at it is that language
> contains "hints" for finding "meaning" (i.e.,
> for understanding) rather than "meaning" itself.
For the meaning of direct physical sensations it may not be as much as 
hints, such as explaining exactly what a rainbow looks like to one whom 
has always been blind.  It seems to me that the meaning of all 
conceptual knowledge can be exhaustively encoded within Montague Grammar 
and its enhancements.

>
>>>> Billy thinks Sally said the ball was red.
>>> I am just throwing this together from imperfect memory, but, it is
>>> something like this:
>>>
>>> BILLY {believes} (SALLY {said} "The ball was red")
>> I'm sorry, but I fail to see any "atoms of meaning" in that rudimentary
>> addition of notation.  The word "said" alone remains entirely ambiguous,
>> because in common usage that can refer to all manner of spoken or
>> written statements.  I also don't understand why you decided not to
>> notate ball and red, instead treating the information as if it were a
>> quotation.  There is a real incompleteness in the ideas you put forth.
> Tak
> --
> ----------------------------------------------------------------+-----
> Tak To                                            takto@alum.mit.eduxx
> --------------------------------------------------------------------^^
>   [taode takto ~{LU5B~}]      NB: trim the xx to get my real email addr
>
>

0
Peter
12/7/2013 3:24:16 AM
"Peter Olcott" <OCR4Screen> wrote in message news:gZidnW8JTqp_CT_PnZ2dnUVZ_rWdnZ2d@giganews.com...
> On 12/6/2013 3:36 PM, Tak To wrote:
>> On 12/6/2013 12:52 PM, Doc O'Leary wrote:
>>> In article <e9udnTtlbOtHWj3PnZ2dnUVZ_qmdnZ2d@giganews.com>,
>>>   Peter Olcott <OCR4Screen> wrote:
>>>
>>>>> Perhaps you do not understand it as well as you think you do.  I myself
>>>>> have already pointed out that to "extend" it essentially results in an
>>>>> infinite specification.  The fundamental problem is implicit vs.
>>>>> explicit meaning.
>>>>>
>>>> Could you provide a specific concrete example?
>>> You have yourself!  The very use of "you" requires a person to draw on
>>> an internal context to give meaning to the word.  Natural languages are
>>> *full* of such implicit constructs, which really have no explicit
>>> meaning unless you're dealing with a like-minded individual.  The
>>> mistake you're making is in thinking that language is a data store
>>> rather than a communication channel.
>> Well put.
>>
>> Another way of looking at it is that language
>> contains "hints" for finding "meaning" (i.e.,
>> for understanding) rather than "meaning" itself.
> For the meaning of direct physical sensations it may not be as much as 
> hints, such as explaining exactly what a rainbow looks like to one whom 

Oy! This really hurts, man, it really does!

> has always been blind.  It seems to me that the meaning of all 
> conceptual knowledge can be exhaustively encoded within Montague Grammar 
> and its enhancements.
> 

[...]

pjk 
0
pauljk
12/7/2013 1:29:01 PM
On 12/7/2013 7:29 AM, pauljk wrote:
>
> "Peter Olcott" <OCR4Screen> wrote in message 
> news:gZidnW8JTqp_CT_PnZ2dnUVZ_rWdnZ2d@giganews.com...
>> On 12/6/2013 3:36 PM, Tak To wrote:
>>> On 12/6/2013 12:52 PM, Doc O'Leary wrote:
>>>> In article <e9udnTtlbOtHWj3PnZ2dnUVZ_qmdnZ2d@giganews.com>,
>>>>   Peter Olcott <OCR4Screen> wrote:
>>>>
>>>>>> Perhaps you do not understand it as well as you think you do.  I 
>>>>>> myself
>>>>>> have already pointed out that to "extend" it essentially results 
>>>>>> in an
>>>>>> infinite specification.  The fundamental problem is implicit vs.
>>>>>> explicit meaning.
>>>>>>
>>>>> Could you provide a specific concrete example?
>>>> You have yourself!  The very use of "you" requires a person to 
>>>> draw on
>>>> an internal context to give meaning to the word.  Natural languages 
>>>> are
>>>> *full* of such implicit constructs, which really have no explicit
>>>> meaning unless you're dealing with a like-minded individual.  The
>>>> mistake you're making is in thinking that language is a data store
>>>> rather than a communication channel.
>>> Well put.
>>>
>>> Another way of looking at it is that language
>>> contains "hints" for finding "meaning" (i.e.,
>>> for understanding) rather than "meaning" itself.
>> For the meaning of direct physical sensations it may not be as much 
>> as hints, such as explaining exactly what a rainbow looks like to one 
>> whom 
>
> Oy! This really hurts, man, it really does!
If you are serious, I am sorry.

>
>> has always been blind.  It seems to me that the meaning of all 
>> conceptual knowledge can be exhaustively encoded within Montague 
>> Grammar and its enhancements.
>>
>
> [...]
>
> pjk 

0
Peter
12/7/2013 1:44:00 PM
On Friday, December 6, 2013 1:40:44 PM UTC+2, Peter Olcott wrote:
> On 12/6/2013 1:10 AM, JP wrote:
> > On Friday, December 6, 2013 3:20:28 AM UTC+2, Peter Olcott wrote:
> >> There are four semantic atoms; properties are only one of the four:
> >> 1) Individuals
> >> 2) Properties
> >> 3) Relations
> >> 4) Properties of Relations
> >
> > Did you ever try to take a step on this road?
> >
> > If a Value can be true or false, and it is part of another property that can have true or false value, and which is part of a huge number of properties, and all can have true or false values, and these true or false values are assigned independently by 7 billion people, how many combinations will you have?
>
> It makes no difference.
>
> > And these values change all the time, as people learn, grow, die, etc.
>
> The base ontology only has to store a finite amount of information about
> how the fundamental elements from the set of all things fit together
> and relate to each other. This ontology is updated as new things are
> created.
>
> The dynamic ontology uses the base ontology such that it can comprehend
> specific instances of events. An abstract form of the dynamic ontology
> (a summary) can be stored as the history of these events.
>
> > A big house can be big to somebody accustomed only to a specific size, but seem small to somebody accustomed to another size.
> > All these values are individual, and they change continuously.
>
> The aspect that changes is stored in the dynamic ontology as it is
> needed. The base ontology stays the same. Although differing people's
> opinion of the size of your house (large or small) will vary across
> individuals and time, the actual dimensions of the house itself remain
> static until they are physically changed.
>
> > Please try to think it thru on your own as I do not want to do the work for you. Good luck.
> > JP

I am not sure if you are not a chatbot, because your replies are mostly a repetition of the same old stuff, showing no comprehension of my questions.
Anyway, the stuff you are talking about is learned by 5th graders in the schools of most non-English speaking countries.
I already gave you the hints previously when I wrote the questions that apply to each of them.
A. The individual is called a noun.
B. The property is called an adjective.
C. The relationship between properties is called a verb.
D. The properties of the relationships are called adverbs.
I would be surprised if you understand even this elementary stuff, but it being a slow time of the year, and you being the only circus in town...
JP
0
JP
12/7/2013 4:36:19 PM
On 12/7/2013 8:29 AM, pauljk wrote:
> 
> "Peter Olcott" <OCR4Screen> wrote in message news:gZidnW8JTqp_CT_PnZ2dnUVZ_rWdnZ2d@giganews.com...
>> On 12/6/2013 3:36 PM, Tak To wrote:
>>> On 12/6/2013 12:52 PM, Doc O'Leary wrote:
>>>> In article <e9udnTtlbOtHWj3PnZ2dnUVZ_qmdnZ2d@giganews.com>,
>>>>   Peter Olcott <OCR4Screen> wrote:
>>>>
>>>>>> Perhaps you do not understand it as well as you think you do.  I myself
>>>>>> have already pointed out that to "extend" it essentially results in an
>>>>>> infinite specification.  The fundamental problem is implicit vs.
>>>>>> explicit meaning.
>>>>>>
>>>>> Could you provide a specific concrete example?
>>>> You have yourself!  The very use of "you" requires a person to draw on
>>>> an internal context to give meaning to the word.  Natural languages are
>>>> *full* of such implicit constructs, which really have no explicit
>>>> meaning unless you're dealing with a like-minded individual.  The
>>>> mistake you're making is in thinking that language is a data store
>>>> rather than a communication channel.
>>> Well put.
>>>
>>> Another way of looking at it is that language
>>> contains "hints" for finding "meaning" (i.e.,
>>> for understanding) rather than "meaning" itself.
>> For the meaning of direct physical sensations it may not be as much as 
>> hints, such as explaining exactly what a rainbow looks like to one whom 
> 
> Oy! This really hurts, man, it really does!

Yo! This is not AUE, no "Oy!" allowed.

No emoticon Nazis either. :-)

Tak
--
----------------------------------------------------------------+-----
Tak To                                            takto@alum.mit.eduxx
--------------------------------------------------------------------^^
 [taode takto ~{LU5B~}]      NB: trim the xx to get my real email addr




0
Tak
12/7/2013 5:00:12 PM
In article <FLSdnfz6_7kMvD_PnZ2dnUVZ_j2dnZ2d@giganews.com>,
 Peter Olcott <OCR4Screen> wrote:

> The literal meaning of {human being} never literally means {a large box 
> of chocolates}.

You clearly have no idea about the realities of language history or 
linguistics in general.  To me, you sound essentially like a fool from 
the 1800's saying "The literal meaning of {gay} never literally means {a 
homosexual male}."

> I did not bother with greatly detailed elaboration because I thought it 
> would not be productive, as your question about the quotation indicates:

Oh, it's not that it wouldn't be "productive", but rather that it 
supports my point and highlights the mistakes you're making.  You didn't 
bother because, for some reason, you would rather cling to your 
misguided efforts.  You'd be better served writing your work off as a 
sunk cost, and then start following scientific principles if you 
actually want to have any hopes of making progress.

> There is a huge difference between the ball actually being red, and what 
> BILLY may believe about what SALLY said.

Indeed.  So why can you not make the connection between that and the 
greater problems in your approach?

> Sally may have actually said: "The ball is ready" meaning that the huge 
> party is now open for guests to arrive.
> 
> These nuances of meaning can be explicitly encoded within Montague Grammar.

Not if the information wasn't readily available to encode in the first 
place.

At this point, though, I think I've done all I can to show you your most 
obvious errors.  That you insist on refusing to see them is something 
beyond me.  I'm done with you.

-- 
iPhone apps that matter:    http://appstore.subsume.com/
My personal UDP list: 127.0.0.1, localhost, googlegroups.com, theremailer.net,
    and probably your server, too.
0
Doc
12/7/2013 5:40:45 PM
On 12/7/2013 11:40 AM, Doc O'Leary wrote:
> In article <FLSdnfz6_7kMvD_PnZ2dnUVZ_j2dnZ2d@giganews.com>,
>   Peter Olcott <OCR4Screen> wrote:
>
>> The literal meaning of {human being} never literally means {a large box
>> of chocolates}.
> You clearly have no idea about the realities of language history or
> linguistics in general.  To me, you sound essentially like a fool from
> the 1800's saying "The literal meaning of {gay} never literally means {a
> homosexual male}."
>
>> I did not bother with greatly detailed elaboration because I thought it
>> would not be productive, as your question about the quotation indicates:
> Oh, it's not that it wouldn't be "productive", but rather that it
> supports my point and highlights the mistakes you're making.  You didn't
> bother because, for some reason, you would rather cling to your
> misguided efforts.  You'd be better served writing your work off as a
> sunk cost, and then start following scientific principles if you
> actually want to have any hopes of making progress.
>
>> There is a huge difference between the ball actually being red, and what
>> BILLY may believe about what SALLY said.
> Indeed.  So why can you not make the connection between that and the
> greater problems in your approach?

Because I can see solutions to all of the specific instances of greater 
problems that I have encountered. I have been pondering these things for 
thirty years. Having read about Montague Grammar last year, I now have 
the terminology and framework to express these ideas.

Montague Grammar already has the means to distinguish between objective 
fact and a belief that could possibly be incorrect. It would be 
incorrect to encode {the ball is red}, as you suggested. It is only true 
that BILLY {believes} that SALLY {said}: "the ball is red".

>
>> Sally may have actually said: "The ball is ready" meaning that the huge
>> party is now open for guests to arrive.
>>
>> These nuances of meaning can be explicitly encoded within Montague Grammar.
> Not if the information wasn't readily available to encode in the first
> place.
>
> At this point, though, I think I've done all I can to show you your most
> obvious errors.  That you insist on refusing to see them is something
> beyond me.  I'm done with you.
>

0
Peter
12/7/2013 8:31:12 PM
On 12/6/2013 10:24 PM, Peter Olcott wrote:
> On 12/6/2013 3:36 PM, Tak To wrote:
>> On 12/6/2013 12:52 PM, Doc O'Leary wrote:
>>> In article <e9udnTtlbOtHWj3PnZ2dnUVZ_qmdnZ2d@giganews.com>,
>>>   Peter Olcott <OCR4Screen> wrote:
>>>
>>>>> Perhaps you do not understand it as well as you think you do.  I myself
>>>>> have already pointed out that to "extend" it essentially results in an
>>>>> infinite specification.  The fundamental problem is implicit vs.
>>>>> explicit meaning.
>>>>>
>>>> Could you provide a specific concrete example?
>>> You have yourself!  The very use of "you" requires a person to draw on
>>> an internal context to give meaning to the word.  Natural languages are
>>> *full* of such implicit constructs, which really have no explicit
>>> meaning unless you're dealing with a like-minded individual.  The
>>> mistake you're making is in thinking that language is a data store
>>> rather than a communication channel.

>> Well put.
>>
>> Another way of looking at it is that language
>> contains "hints" for finding "meaning" (i.e.,
>> for understanding) rather than "meaning" itself.

> For the meaning of direct physical sensations it may not be as much as 
> hints, such as explaining exactly what a rainbow looks like to one whom 
> has always been blind.  It seems to me that the meaning of all 
> conceptual knowledge can be exhaustively encoded within Montague Grammar 
> and its enhancements.

Before tackling the abstract notion of "meaning",
let's backtrack a bit and focus on what "understanding
(a meaning)" is.  Borrowing from Turing's Test, to
have understood X (or to have knowledge of X) is to be
able to answer questions about X (to the extent of one's
understanding/knowledge).  Thus, one needs a model or
representation of knowledge about X, as well as a way
to draw inferences from the representation.  This is
precisely what the field of "Knowledge Representation"
(KR) in Cognitive Science is about.

As two people communicate with each other through
language, each will add new constructions to his
own internal representation of knowledge based on
the input from the other.  Note that each person
would build his internal representation in his
own way, and the summation of all the representations
(let's call it one's Model Of Everything -- MOE) is
the mind itself.

Note also that the form of one's internal MOE has
no relationship to the internal structure of the
language at all.  In this light, one can say that
language carries no meaning by itself but will create
meaning (and understanding) when interpreted by a mind.

And if one insists on calling the information carried
in the language constructs (words, phrases, sentences,
etc) "meaning", then one must remember that language-
meaning and MOE-meaning are at two distinct levels
of abstraction, with totally different epistemological
"primitives" (including axioms).

In any case, no matter what one is trying to model
-- language-meaning or MOE-meaning -- in a tractable
scale, one is immediately faced with the fact that
the selection of "primitives" is almost entirely
arbitrary.  With this, very few claims of universality
can be justified.  In other words, what works in
a micro-reality such as SHRDLU cannot be generalized
to the real world at all.  In short, why bother?

I also don't see why one would choose to use the
notation of a language/grammar to "encode" meaning.
Why not use, say, some form of lambda calculus
instead? At the very least, one can easily build
rules/relationships and entities out of existing
rules/relationships and entities.
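
For instance, in the lambda-calculus spirit suggested here (a minimal sketch of the idea only; none of this is an established formalism):

    # Python sketch: meanings as functions, composed from existing meanings.
    red   = lambda x: f"red({x})"
    said  = lambda speaker: lambda prop: f"said({speaker}, {prop})"
    think = lambda agent: lambda prop: f"thinks({agent}, {prop})"

    meaning = think("Billy")(said("Sally")(red("the-ball")))
    print(meaning)   # thinks(Billy, said(Sally, red(the-ball)))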

>>>>> Billy thinks Sally said the ball was red.
>>>> I am just throwing this together from imperfect memory, but, it is
>>>> something like this:
>>>>
>>>> BILLY {believes} (SALLY {said} "The ball was red")
>>> I'm sorry, but I fail to see any "atoms of meaning" in that rudimentary
>>> addition of notation.  The word "said" alone remains entirely ambiguous,
>>> because in common usage that can refer to all manner of spoken or
>>> written statements.  I also don't understand why you decided not to
>>> notate ball and red, instead treating the information as if it were a
>>> quotation.  There is a real incompleteness in the ideas you put forth.

Tak
--
----------------------------------------------------------------+-----
Tak To                                            takto@alum.mit.eduxx
--------------------------------------------------------------------^^
 [taode takto ~{LU5B~}]      NB: trim the xx to get my real email addr

0
Tak
12/7/2013 9:02:13 PM
On 12/7/2013 3:02 PM, Tak To wrote:
> On 12/6/2013 10:24 PM, Peter Olcott wrote:
>> On 12/6/2013 3:36 PM, Tak To wrote:
>>> On 12/6/2013 12:52 PM, Doc O'Leary wrote:
>>>> In article <e9udnTtlbOtHWj3PnZ2dnUVZ_qmdnZ2d@giganews.com>,
>>>>    Peter Olcott <OCR4Screen> wrote:
>>>>
>>>>>> Perhaps you do not understand it as well as you think you do.  I myself
>>>>>> have already pointed out that to "extend" it essentially results in an
>>>>>> infinite specification.  The fundamental problem is implicit vs.
>>>>>> explicit meaning.
>>>>>>
>>>>> Could you provide a specific concrete example?
>>>> You have yourself!  The very use of "you" requires a person to draw on
>>>> an internal context to give meaning to the word.  Natural languages are
>>>> *full* of such implicit constructs, which really have no explicit
>>>> meaning unless you're dealing with a like-minded individual.  The
>>>> mistake you're making is in thinking that language is a data store
>>>> rather than a communication channel.
>>> Well put.
>>>
>>> Another way of looking at it is that language
>>> contains "hints" for finding "meaning" (i.e.,
>>> for understanding) rather than "meaning" itself.
>> For the meaning of direct physical sensations it may not be as much as
>> hints, such as explaining exactly what a rainbow looks like to one whom
>> has always been blind.  It seems to me that the meaning of all
>> conceptual knowledge can be exhaustively encoded within Montague Grammar
>> and its enhancements.
> Before tackling the abstraction notion of "meaning",
> let's backtrack a bit and focus on what "understanding
> (a meaning)" is.  Borrowing from Turing's Test, to
> have understood X (or to have knowledge of X) is to be
> able to answer questions about X (to the extend of one's
> understanding/knowledge).  Thus, one needs a model or
> representation of knowledge about X, as well as a way
> to draw inferences from the representation.  This is
> precisely what the field of "Knowledge Representation"
> (KR) in Cognitive Science is about.

Yes. This is the first thing that I read about in my quest.
What I am about to say may seem quite terse, yet it may also be apt:
The key to (KR) is to fully elaborate the inherent rules of the 
compositionality of natural language.

> As two people communicating with each other through
> language, each will add new constructions to his
> own internal representation of knowledge based on
> the input from the other.  Note that each person
> would build his internal representation in his
> own way,

I would call this the discourse ontology.

> and the summation of all the representations
> (let's call it one's Model Of Everything -- MOE) is
> the mind itself.
>
> Note also that the form of one's internal MOE has
> no relationship to the internal structure of the
> language at all.  In this light, one can say that
> language carries no meaning by itself but will create
> meaning (and understanding) when interpreted by a mind.
>
> And if one insists on calling the information carried
> in the language constructs (words, phrases, sentences,
> etc) "meaning", then one must remember that language-
> meaning and MOE-meaning are at two distinct levels
> of abstraction, with totally different epistemological
> "primitives" (including axioms).
If I understand you correctly, this would be the natural language of the 
discourse mapping to the discourse ontology, which in turn maps to the 
base ontology. The base ontology is something like your MOE.

> In any case, no matter what one is trying to model
> -- language-meaning or MOE-meaning -- in a tractable
> scale, one is immediately faced with that fact that
> the selection of "primitives" is almost entirely
> arbitrary.  With this, very few claims of universality
> can be justified.  In other words, what works in
> a micro-reality such as SHRDLU cannot be generalized
> to the real world at all.  In short, why bother?
There is already an inherent natural structure that fully elaborates 
every detail of all of the rules of natural language compositionality 
(NLC) that only needs to be discovered. Once these rules have been fully 
elaborated, the (KR) problem would seem to be fully addressed.

> I also don't see why one would choose to use the
> notation of a language/grammar to "encode" meaning.
> Why not use, say, some form of lambda calculus
> instead? At the very least, one can easily build
> rules/relationships and entities out of existing
> rules/relationships and entities.

I think that Montague proposed this, and this encoding may form the 
basis for his semantic grammar. Ultimately these meaning postulates must 
map to the equivalent of the human mind's representation.

>
>>>>>> Billy thinks Sally said the ball was red.
>>>>> I am just throwing this together from imperfect memory, but, it is
>>>>> something like this:
>>>>>
>>>>> BILLY {believes} (SALLY {said} "The ball was red")
>>>> I'm sorry, but I fail to see any "atoms of meaning" in that rudimentary
>>>> addition of notation.  The word "said" alone remains entirely ambiguous,
>>>> because in common usage that can refer to all manner of spoken or
>>>> written statements.  I also don't understand why you decided not to
>>>> notate ball and red, instead treating the information as if it were a
>>>> quotation.  There is a real incompleteness in the ideas you put forth.
> Tak
> --
> ----------------------------------------------------------------+-----
> Tak To                                            takto@alum.mit.eduxx
> --------------------------------------------------------------------^^
>   [taode takto ~{LU5B~}]      NB: trim the xx to get my real email addr
>

0
Peter
12/8/2013 2:29:50 PM
On 12/04/13 02:26, Peter Olcott wrote:
> My hypothesis is that all utterances that can be neither true nor false
> are not semantically well formed propositions.

An utterance that talks about a non-existing object may simultaneously
be well-formed and yield a contradiction. This is often used to prove
that a certain object does not exist. It is a form of proof by
contradiction.

What you call "paradox of self-reference" arises because the reasoning
starts with a non-existing object, the halting tester. It is neither an
error of reasoning nor confusion caused by ill-defined concepts.
Instead, it is a correct demonstration that the halting tester does not
exist.

Your position seems to be that the halting function is an ill-defined
concept. If so, then the halting tester would not exist, because the
function that it is supposed to compute would not exist. So both your
position and the established scientific one yield that the halting
tester does not exist. However, after admitting that it does not exist,
no paradox of self-reference remains.

--- Antti Valmari ---

0
Antti
12/9/2013 8:08:08 AM
On 12/9/2013 2:08 AM, Antti Valmari wrote:
> On 12/04/13 02:26, Peter Olcott wrote:
>> My hypothesis is that all utterances that can be neither true nor false
>> are not semantically well formed propositions.
> An utterance that talks about a non-existing object may simultaneously
> be well-formed and yield a contradiction. This is often used to prove
> that a certain object does not exist. It is a form of proof by
> contradiction.
>
> What you call "paradox of self-reference" arises because the reasoning
> starts with a non-existing object, the halting tester. It is neither an
> error of reasoning nor confusion caused by ill-defined concepts.
> Instead, it is a correct demonstration that the halting tester does not
> exist.
>
> Your position seems to be that the halting function is an ill-defined
> concept. If so, then the halting tester would not exist, because the
> function that it is supposed to compute would not exist. So both your
> position and the established scientific one yield that the halting
> tester does not exist. However, after admitting that it does not exist,
> no paradox of self-reference remains.
>
> --- Antti Valmari ---
>
It is like asking whether anyone is smart enough to correctly answer 
this question (referring to itself), and then inferring from the 
negative answer a limitation of human intelligence.
0
Peter
12/9/2013 9:59:44 AM
On 09/12/13 08:08, Antti Valmari wrote:
> An utterance that talks about a non-existing object may simultaneously
> be well-formed and yield a contradiction. This is often used to prove
> that a certain object does not exist. It is a form of proof by
> contradiction.

	Indeed so.  But many people [not just Peter!] find such proofs
not entirely convincing, and in many [all?] such cases, there is also
a more direct proof.  For example, many of the usual proofs that sqrt(2)
is irrational run something like:  "Suppose to the contrary that sqrt(2)
== a/b where [...].  Therefore [...], which is a contradiction, and so
sqrt(2) is irrational."  But it is just as easy to prove that if a and
b are integers, then a^2 == 2b^2 iff a == b == 0.  Corollary:  if b /= 0
then a/b /= sqrt(2).  Corollary:  sqrt(2) is irrational.  No non-existent
objects, no contradictions.
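
	[Spelling out the direct claim for anyone who wants it -- this
is my gloss, counting factors of 2 on each side:  if a and b are both
non-zero and a^2 == 2b^2, then a^2 contains an even number of factors
of 2 while 2b^2 contains an odd number, contradicting unique
factorisation;  and the cases where exactly one of a, b is zero fail
immediately.  Hence a^2 == 2b^2 iff a == b == 0.]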

> What you call "paradox of self-reference" arises because the reasoning
> starts with a non-existing object, the halting tester.  [...]

	Right.  But it doesn't need to.  It's just as easy to show that
any given program is not a halting tester, with no self-reference [even
by Peter's standards] and no contradiction, just by looking at the
possible behaviours of related programs.  But if every program is not
a halting tester, then no program is a halting tester, QED.

	[There is, of course, no *actual* self-reference in the usual
proofs.  It's just that when we talk about Turing machines, the
distinction between a running program and its source tends to get
somewhat blurred.  It's a distinction that everyone who writes a
compiler, and even more so everyone who writes an operating system,
has to get crystal clear.  But it's not that hard to see that editing
the source of an editor (compiling the source of a compiler, printing
the source of a print facility, ...) is not a self-reference, the
reference being "protected" by the "source".]

-- 
Andy Walker,
Nottingham.
0
Andy
12/9/2013 12:39:08 PM
On 12/9/2013 2:08 AM, Antti Valmari wrote:
> On 12/04/13 02:26, Peter Olcott wrote:
>> My hypothesis is that all utterances that can be neither true nor false
>> are not semantically well formed propositions.
>
> An utterance that talks about a non-existing object may simultaneously
> be well-formed and yield a contradiction. This is often used to prove
> that a certain object does not exist. It is a form of proof by
> contradiction.
>
Sure, [Square Circles], requiring the mutually exclusive attributes of 
being entirely round and having flat sides, is my favorite example.

> What you call "paradox of self-reference" arises because the reasoning
> starts with a non-existing object, the halting tester. It is neither an
> error of reasoning nor confusion caused by ill-defined concepts.
> Instead, it is a correct demonstration that the halting tester does not
> exist.
>

In exactly the same way that the following question cannot be correctly 
answered:

Referring to itself:
"What is the correct answer to this question?"


> Your position seems to be that the halting function is an ill-defined
> concept. If so, then the halting tester would not exist, because the
> function that it is supposed to compute would not exist. So both your
> position and the established scientific one yield that the halting
> tester does not exist. However, after admitting that it does not exist,
> no paradox of self-reference remains.
>
> --- Antti Valmari ---
>

That ill-formed questions have no possible correct answer places no 
actual limit on the maximum capabilities of human intelligence. In this 
exact same way the lack of a solution to the Halting Problem places no 
actual limit on computability.

The verifiable truth of the preceding paragraph provides a complete 
summation of my entire position regarding the Halting Problem.
0
Peter
12/9/2013 1:15:59 PM
Peter Olcott <OCR4Screen> writes:

> On 12/9/2013 2:08 AM, Antti Valmari wrote:
<snip>
>> What you call "paradox of self-reference" arises because the reasoning
>> starts with a non-existing object, the halting tester. It is neither an
>> error of reasoning nor confusion caused by ill-defined concepts.
>> Instead, it is a correct demonstration that the halting tester does not
>> exist.
>>
>
> In exactly the same way that the following question cannot be
> correctly answered:
>
> Referring to itself:
> "What is the correct answer to this question?"
<snip>
> That ill-formed questions have no possible correct answer places no
> actual limit on the maximum capabilities of human intelligence. In
> this exact same way the lack of a solution to the Halting Problem
> places no actual limit on computability.

You keep giving analogous questions that are not analogous.  Here are
some questions about Turing machines:

  Does machine M enter any of its states more than once?
  Does machine M enter all of its states at least once?
  Does machine M ever write a non-blank symbol to the tape?
  Does machine M's state transition function contain a loop?

(If you are unclear about what I mean by any of these, please ask; a
sketch for the last one follows below).

Do any of these contain self-reference of the sort that makes the
question meaningless to you?  Whatever your answer, can you explain it?
I.e. how can I learn to detect the presence or absence of this
mysterious self-reference you think exists in the halting problem?
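
As an aside, the last of these is a purely syntactic property: it can
be decided just by looking for a cycle in the digraph of states, with
no run of the machine involved.  A sketch in Python (the representation
of the transition function is my own assumption; nothing above fixes
it):

  from collections import defaultdict

  def has_state_cycle(delta):
      """delta maps (state, symbol) -> (new_state, written, move);
      do a depth-first search for a back edge in the state digraph."""
      graph = defaultdict(set)
      for (state, _sym), (new_state, _w, _m) in delta.items():
          graph[state].add(new_state)
      WHITE, GREY, BLACK = 0, 1, 2
      colour = defaultdict(int)             # defaults to WHITE
      def visit(u):
          colour[u] = GREY
          for v in graph[u]:
              if colour[v] == GREY:         # back edge: cycle found
                  return True
              if colour[v] == WHITE and visit(v):
                  return True
          colour[u] = BLACK
          return False
      return any(colour[u] == WHITE and visit(u) for u in list(graph))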

> The verifiable truth of the preceding paragraph provides a complete
> summation of my entire position regarding the Halting Problem.

If the preceding paragraph is to have any useful meaning (i.e. if it is
not trivial in the sense that all questions are meaningful or all are
meaningless) then we need a way to tell the difference and you have only
given examples that don't have any connection to halting.

If you are tempted to simply post some more examples, please don't.
It's a waste of everyone's time; I can come up with thousands on my own.
Just say that you don't yet know how to tell if a question is (in your
sense) ill-formed or not and we can take a break on that topic.

-- 
Ben.
0
Ben
12/9/2013 2:37:22 PM
On 12/9/2013 8:37 AM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 12/9/2013 2:08 AM, Antti Valmari wrote:
> <snip>
>>> What you call "paradox of self-reference" arises because the reasoning
>>> starts with a non-existing object, the halting tester. It is neither an
>>> error of reasoning nor confusion caused by ill-defined concepts.
>>> Instead, it is a correct demonstration that the halting tester does not
>>> exist.
>>>
>>
>> In exactly the same way that the following question cannot be
>> correctly answered:
>>
>> Referring to itself:
>> "What is the correct answer to this question?"
> <snip>
>> That ill-formed questions have no possible correct answer places no
>> actual limit on the maximum capabilities of human intelligence. In
>> this exact same way the lack of a solution to the Halting Problem
>> places no actual limit on computability.
>
> You keep giving analogous questions that are not analogous.  Here are
> some questions about Turing machines:
>
>    Does machine M enter any of its states more than once?
>    Does machine M enter all of its states at least once?
>    Does machine M ever write a non-blank symbol to the tape?
>    Does machine M's state transition function contain a loop?
>
> (If you are unclear about what I mean by any of these, please ask).
>
> Do any of these contain self-reference of the sort that makes the
> question meaningless to you?  Whatever your answer, can you explain it?
> I.e. how can I learn to detect the presence or absence of this
> mysterious self-reference you think exists in the halting problem?
>
>> The verifiable truth of the preceding paragraph provides a complete
>> summation of my entire position regarding the Halting Problem.
>
> If the preceding paragraph is to have any useful meaning (i.e. if it is
> not trivial in the sense that all questions are meaningful or all are
> meaningless) then we need a way to tell the difference and you have only
> given examples that don't have any connection to halting.

The propositional logic that I provided last year shows this.
Feel free to continue to dispute this within your own mind.

>
> If you are tempted to simply post some more examples, please don't.
> It's a waste of everyone's time; I can come up with thousands on my own.
> Just say that you don't yet know how to tell if a question is (in your
> sense) ill-formed or not and we can take a break on that topic.
>

0
Peter
12/9/2013 5:23:12 PM
Peter Olcott <OCR4Screen> writes:

> On 12/9/2013 8:37 AM, Ben Bacarisse wrote:
>> Peter Olcott <OCR4Screen> writes:
>>
>>> On 12/9/2013 2:08 AM, Antti Valmari wrote:
>> <snip>
>>>> What you call "paradox of self-reference" arises because the reasoning
>>>> starts with a non-existing object, the halting tester. It is neither an
>>>> error of reasoning nor confusion caused by ill-defined concepts.
>>>> Instead, it is a correct demonstration that the halting tester does not
>>>> exist.
>>>>
>>>
>>> In exactly the same way that the following question cannot be
>>> correctly answered:
>>>
>>> Referring to itself:
>>> "What is the correct answer to this question?"
>> <snip>
>>> That ill-formed questions have no possible correct answer places no
>>> actual limit on the maximum capabilities of human intelligence. In
>>> this exact same way the lack of a solution to the Halting Problem
>>> places no actual limit on computability.
>>
>> You keep giving analogous questions that are not analogous.  Here are
>> some questions about Turing machines:
>>
>>    Does machine M enter any of its states more than once?
>>    Does machine M enter all of its states at least once?
>>    Does machine M ever write a non-blank symbol to the tape?
>>    Does machine M's state transition function contain a loop?
>>
>> (If you are unclear about what I mean by any of these, please ask).
>>
>> Do any of these contain self-reference of the sort that makes the
>> question meaningless to you?  Whatever your answer, can you explain it?
>> I.e. how can I learn to detect the presence or absence of this
>> mysterious self-reference you think exists in the halting problem?

You don't want to have a go?  Is "does M halt" the only "bad" question
you are sure about?  Can some, none or all of these be correctly
answered?

>>> The verifiable truth of the preceding paragraph provides a complete
>>> summation of my entire position regarding the Halting Problem.
>>
>> If the preceding paragraph is to have any useful meaning (i.e. if it is
>> not trivial in the sense that all questions are meaningful or all are
>> meaningless) then we need a way to tell the difference and you have only
>> given examples that don't have any connection to halting.
>
> The propositional logic that I provided last year shows this.

Last year you tried to answer this question and the result was a
train-wreck so I can see why you don't want to try that again.  But
regardless of whether you try or not, it remains a key question.  Until
you can answer it, the "bad" questions are just the ones you decree are
bad, and that will be of no interest to anyone but you.

> Feel free to continue to dispute this within your own mind.

And I feel free to do so here as well, but thanks for your permission to
do so in my mind!  (I don't really think you meant to say that -- you
were probably trying to find a polite way to say that you don't want me
to reply to your posts.)

<snip>
-- 
Ben.
0
Ben
12/9/2013 5:54:39 PM
On 12/9/2013 2:08 AM, Antti Valmari wrote:
> On 12/04/13 02:26, Peter Olcott wrote:
>> My hypothesis is that all utterances that can be neither true nor false
>> are not semantically well formed propositions.
>
> An utterance that talks about a non-existing object may simultaneously
> be well-formed and yield a contradiction. This is often used to prove
> that a certain object does not exist. It is a form of proof by
> contradiction.
>
The reason that I am not being understood is that my readers and I 
are exactly one level of indirection out of sync.

1) Can a halt tester be built that can correctly determine whether or 
not any arbitrary program will halt? No

2) Can questions be asked that are impossible to correctly answer? Yes

3) Can you correctly answer this question? (referring to itself)

4) Is the problem posed to the potential halt tester analogous 
to (3)?

> What you call "paradox of self-reference" arises because the reasoning
> starts with a non-existing object, the halting tester. It is neither an
> error of reasoning nor confusion caused by ill-defined concepts.
> Instead, it is a correct demonstration that the halting tester does not
> exist.

We are one level of indirection out-of-sync here.
I am not talking about the typical proof by contradiction, I am talking 
about the reason behind why the contradiction occurs.

The Halting Problem asks a question like (2) and provides a question 
like (3) as the evidence for its answer to (2).

>
> Your position seems to be that the halting function is an ill-defined
> concept. If so, then the halting tester would not exist, because the
> function that it is supposed to compute would not exist. So both your
> position and the established scientific one yield that the halting
> tester does not exist. However, after admitting that it does not exist,
> no paradox of self-reference remains.
>
> --- Antti Valmari ---
>

0
Peter
12/9/2013 8:52:58 PM
On 12/09/13 22:52, Peter Olcott wrote:
> We are one level of indirection out-of-sync here.
> I am not talking about the typical proof by contradiction, I am talking
> about the reason behind why the contradiction occurs.

I and many others say that the reason is that the halting tester does
not exist. You seem to say that there is more to it.

Why should there be more to it? This quotation from you strongly
suggests that you admit that the halting tester does not exist:

> 1) Can a halt tester be built that can correctly determine whether or
> not any arbitrary program will halt? No

The non-existence of the halting tester suffices to resolve the paradox,
so no additional explanation is needed. Do you disagree here?

You seem to claim that somewhere in the theorem or its proof, there is
something ill-defined like "the length of colour red" or "this statement
is false". I and many others testify that we are familiar with issues of
that kind, and there is no such thing involved. Like Ben pointed out,
presenting analogies does not show that something is wrong with the
theorem or its proof, because it is not obvious (and not true) that the
analogies apply.

> 2) Can questions be asked that are impossible to correctly answer? Yes
>
> 3) Can you correctly answer this question? (referring to itself)

> The Halting Problem asks a question like (2) and provides a question
> like (3) as the evidence for its answer to (2).

You seem to say that the question "does a given program halt on a given
input" is impossible to correctly answer. It is impossible in the
restricted sense that there is no algorithm for always finding the
answer. This is different from non-existence of the answer. For any
program and input, either "yes" or "no" is the correct answer; what is
lacking is a way of finding the answer, not the answer itself.

--- Antti Valmari ---

0
Antti
12/10/2013 9:37:15 AM
On 12/10/2013 3:37 AM, Antti Valmari wrote:
> On 12/09/13 22:52, Peter Olcott wrote:
>> We are one level of indirection out-of-sync here.
>> I am not talking about the typical proof by contradiction, I am talking
>> about the reason behind why the contradiction occurs.
>
> I and many others say that the reason is that the halting tester does
> not exist. You seem to say that there is more to it.

Do square circles not exist because they don't exist, or do they not 
exist because they require the simultaneous presence of mutually 
exclusive properties?

>
> Why should there be more to it? This quotation from you strongly
> suggests that you admit that the halting tester does not exist:
>
>> 1) Can a halt tester be built that can correctly determine whether or
>> not any arbitrary program will halt? No
>
> The non-existence of the halting tester suffices to resolve the paradox,
> so no additional explanation is needed. Do you disagree here?

It utterly ignores the next level of indirection: the reason why the 
answer is no.

>
> You seem to claim that somewhere in the theorem or its proof, there is
> something ill-defined like "the length of colour red" or "this statement
> is false". I and many others testify that we are familiar with issues of
> that kind, and there is no such thing involved. Like Ben pointed out,
> presenting analogies does not show that something is wrong with the
> theorem or its proof, because it is not obvious (and not true) that the
> analogies apply.

http://www.cprogramming.com/tutorial/computersciencetheory/halting.html
SELF-HALT(program)
{
   if(DOES-HALT(program, program))
     infinite loop
   else
     halt
}

>
>> 2) Can questions be asked that are impossible to correctly answer? Yes
>>
>> 3) Can you correctly answer this question? (referring to itself)
>
>> The Halting Problem asks a question like (2) and provides a question
>> like (3) as the evidence for its answer to (2).
>
> You seem to say that the question "does a given program halt on a given
> input" is impossible to correctly answer. It is impossible in the
> restricted sense that there is no algorithm for always finding the
> answer. This is different from non-existence of the answer. For any
> program and input, either "yes" or "no" is the correct answer; what is
> lacking is a way of finding the answer, not the answer itself.
>
> --- Antti Valmari ---
>

The two different levels of indirection come from two different 
perspectives. The perspective of the outside observer, and the 
perspective of the potential halt decider.

When the outside observer is asked:
Does Program P Halt on input I?

This is an entirely different question than when the potential Halt 
Decider is asked:
Does Program P Halt on input I?

1) If I ask you: "Can you correctly answer this question?"
(neither "yes" nor "no" is correct)

This is an entirely different question than:

2) If I ask someone else whether Antti Valmari can answer this question:
"Can you correctly answer this question?"
("no" is the correct answer).

The different level of indirection changes the underlying semantics of 
the question.

Possibly the difficulty here is that few mathematicians have much skill 
within the field of the formal semantics of natural language, and few 
linguists have much skill in mathematical proofs. The rare individual 
that has a sufficient degree of both of these skills may be able to see 
what I am saying now that I have made it more concrete.

0
Peter
12/10/2013 12:28:57 PM
On 12/09/13 14:39, Andy Walker wrote:
>     Indeed so.  But many people [not just Peter!] find such proofs
> not entirely convincing, and in many [all?] such cases, there is also
> a more direct proof.

I like the idea of using direct proofs instead of contradiction proofs
whenever easy. I tried that with my students, when proving that some
language is not regular. I showed that each finite automaton that
accepts at least every string in the language also accepts something
extra. I did not notice any change in understanding. However, the change
would have had to be rather big to be visible in my student sample.
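
A typical instance of that direct argument, reconstructed here rather
than quoted from my course material: take L = { a^n b^n : n >= 0 } and
any automaton with k states that accepts every string in L. Among the
k+1 strings a^0, a^1, ..., a^k, two must lead to the same state, say
a^i and a^j with i < j. Since the automaton accepts a^i b^i, it must
also accept a^j b^i, which is not in L. So it accepts something extra.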


>     Right.  But it doesn't need to.  It's just as easy to show that
> any given program is not a halting tester, with no self-reference [even
> by Peter's standards] and no contradiction, just by looking at the
> possible behaviours of related programs.  But if every program is not
> a halting tester, then no program is a halting tester, QED.

Let me try. Let P(X,Y) be any program that, for any two finite byte
strings X and Y, terminates replying "yes" or "no". Let Q(X) be the
following program:

  if P(X,X) then enter eternal loop

Q(X) terminates if and only if P(X,X) replies "no". In particular, Q(Q)
terminates if and only if P(Q,Q) replies "no". So P(X,Y) does not reply
correctly to the question "does program X halt on input Y", when X = Y =
Q. Thus P is not a halting tester.
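
In runnable form the construction is something like this (a Python
sketch of my own; note that it passes the function object Q where a
real tester would receive Q's source text, which is exactly the
blurring discussed below):

  def make_Q(P):
      # P(X, Y) is any claimed tester that always returns "yes" or "no"
      def Q(X):
          if P(X, X) == "yes":
              while True:          # enter eternal loop
                  pass
          # otherwise fall through and halt
      return Q

  # For any such P:  Q = make_Q(P), and Q(Q) terminates iff
  # P(Q, Q) == "no", so P answers the instance (Q, Q) incorrectly.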

To me Q(Q) seems like self-reference in some sense. Of course, it is what you
call distinction between a running program and its source. But you say
that it tends to get somewhat blurred. That is the (or a) problem! So
can you get rid of Q(Q)?

As I and others have written earlier, self-reference is avoided by first
proving that the Busy Beaver function is not computable [1], and then
pointing out that if a halting tester existed, then the Busy Beaver
function could be computed. But the latter part is proof by
contradiction, and I do not see how it could be easily converted to a
direct proof.

1. Radó, Tibor (May 1962) "On non-computable functions"
http://www.alcatel-lucent.com/bstj/vol41-1962/articles/bstj41-3-877.pdf

--- Antti Valmari ---

0
Antti
12/10/2013 1:55:42 PM
Antti Valmari <Antti.Valmari@c.s.t.u.t.f.i.invalid> writes:

> On 12/09/13 22:52, Peter Olcott wrote:
>> We are one level of indirection out-of-sync here.
>> I am not talking about the typical proof by contradiction, I am talking
>> about the reason behind why the contradiction occurs.
>
> I and many others say that the reason is that the halting tester does
> not exist. You seem to say that there is more to it.
>
> Why should there be more to it? This quotation from you strongly
> suggests that you admit that the halting tester does not exist:
>
>> 1) Can a halt tester be built that can correctly determine whether or
>> not any arbitrary program will halt? No
>
> The non-existence of the halting tester suffices to resolve the paradox,
> so no additional explanation is needed. Do you disagree here?

I think I remember your name from the big Peter Olcott threads of summer
2012 so maybe you already know this, but there may still be value in
pointing it out.  Peter does accept that there is no halting tester, the
problem is that he does not like this fact.  Similarly he does not like
the incompleteness of certain formal systems (though I imagine he is
less sure that the proofs of these stand up).  His purpose is to use
language to find some way of talking about things so that these results
no longer upset his view of the world.  This view is, quite explicitly,
a theological one.

He has to be able to view the perfectly well defined questions involved
in halting as "invalid" or "ill-formed" or some such term in order to
resolve the cognitive dissonance he's created in his head.  The
non-computable functions and the un-provable theorems must be
categorised as "merely ill-formed" or, at best, "pathological".

There's a lot of evidence that he does not want to get at the truth of the
matter.  He has repeatedly refused to look at other un-computability
arguments such as the one commonly used to show that the "busy beaver"
function is not TM computable.  It's hard to see that argument in the
same light, so it will cause all sorts of problems if he ever peeks into
that corner.

And of course there is the elephant in the room.  The set of functions
from Sigma* to Sigma* is uncountable, whereas the set of TMs is not.
Peter has solved this problem (in his head) by deciding that there is
only one infinity.  Out of the unholy trinity of incompleteness, halting
and un-countability, the last is the only one that Peter rejects at the
mathematical level (though this may have changed).
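
(The counting argument, for the record -- entirely standard, not
something from Peter's posts: every TM has a finite description over a
finite alphabet, so there are only countably many TMs, while Cantor's
diagonal argument gives uncountably many functions from Sigma* to
Sigma*; hence almost every such function is computed by no TM at all.)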

<snip>
> You seem to say that the question "does a given program halt on a given
> input" is impossible to correctly answer. It is impossible in the
> restricted sense that there is no algorithm for always finding the
> answer. This is different from non-existence of the answer. For any
> program and input, either "yes" or "no" is the correct answer; what is
> lacking is a way of finding the answer, not the answer itself.

I've lost count of how many times this has been said to him.  In the
last big thread his usual reply was just to post a nonsense question
again and again as if that addressed the issue.  At one point he tried
to say that a TM might do something other than halt or not halt, but that
did not go well.

Of course, in one way Peter is absolutely correct!  The decision
problems of arithmetic and halting and so on are, in some sense,
inherently problematic.  They contain, in their very definitions, the
reasons why they are impossible.  Unfortunately it is not because they
are ill-formed or paradoxical, it is simply that both arithmetic and
Turing machines can talk about themselves (and each other of course).

-- 
Ben.
0
Ben
12/10/2013 2:14:39 PM
On 12/10/13 14:28, Peter Olcott wrote:
> The two different levels of indirection come from two different
> perspectives. The perspective of the outside observer, and the
> perspective of the potential halt decider.
> 
> When the outside observer is asked:
> Does Program P Halt on input I?
> 
> This is an entirely different question than when the potential Halt
> Decider is asked:
> Does Program P Halt on input I?

I believe I understand what you are aiming at: sometimes two apparently
identical questions are actually different questions because of a
difference in the context. However, this is not the case with the
theorem and proof of the non-existence of halting testers. The outside
observer and the non-existing halting tester mean the very same thing
with P, they mean the very same thing with I, and they mean the very
same thing with "halts on".

--- Antti Valmari ---

0
Antti
12/10/2013 3:23:21 PM
On 12/10/2013 8:14 AM, Ben Bacarisse wrote:
> Antti Valmari <Antti.Valmari@c.s.t.u.t.f.i.invalid> writes:
>
>> On 12/09/13 22:52, Peter Olcott wrote:
>>> We are one level of indirection out-of-sync here.
>>> I am not talking about the typical proof by contradiction, I am talking
>>> about the reason behind why the contradiction occurs.
>>
>> I and many others say that the reason is that the halting tester does
>> not exist. You seem to say that there is more to it.
>>
>> Why should there be more to it? This quotation from you strongly
>> suggests that you admit that the halting tester does not exist:
>>
>>> 1) Can a halt tester be built that can correctly determine whether or
>>> not any arbitrary program will halt? No
>>
>> The non-existence of the halting tester suffices to resolve the paradox,
>> so no additional explanation is needed. Do you disagree here?
>
> I think I remember your name from the big Peter Olcott threads of summer
> 2012 so maybe you already know this, but there may still be value in
> pointing it out.  Peter does accept that there is no halting tester, the
> problem is that he does not like this fact.  Similarly he does not like
> the incompleteness of certain formal systems (though I imagine he is
> less sure that the proofs of these stand up).  His purpose is to use
> language to find some way of talking about things so that these results
> no longer upset his view of the world.  This view is, quite explicitly,
> a theological one.
>
> He has to be able to view the perfectly well defined questions involved
> in halting as "invalid" or "ill-formed" or some such term in order to
> resolve the cognitive dissonance he's created in his head.  The
> non-computable functions and the un-provable theorems must be
> categorised as "merely ill-formed" or, at best, "pathological".
>
> There's a lot of evidence that he does not want to get at the truth of the
> matter.  He has repeatedly refused to look at other un-computability
> arguments such as the one commonly used to show that the "busy beaver"
> function is not TM computable.  It's hard to see that argument in the
> same light, so it will cause all sorts of problems if he ever peeks into
> that corner.
>
> And of course there is the elephant in the room.  The set of functions
> from Sigma* to Sigma* is uncountable, whereas the set of TMs is not.
> Peter has solved this problem (in his head) by deciding that there is
> only one infinity.  Out of the unholy trinity of incompleteness, halting
> and un-countability, the last is the only one that Peter rejects at the
> mathematical level (though this may have changed).
>
> <snip>
>> You seem to say that the question "does a given program halt on a given
>> input" is impossible to correctly answer. It is impossible in the
>> restricted sense that there is no algorithm for always finding the
>> answer. This is different from non-existence of the answer. For any
>> program and input, either "yes" or "no" is the correct answer; what is
>> lacking is a way of finding the answer, not the answer itself.
>
> I've lost count of how many times this has been said to him.  In the
> last big thread his usual reply was just to post a nonsense question
> again and again as if that addressed the issue.  At one point he tried
> to say that a TM might do something other than halt or not halt, but that
> did not go well.
>


> Of course, in one way Peter is absolutely correct!  The decision
> problems of arithmetic and halting and so on are, in some sense,
> inherently problematic.  They contain, in their very definitions, the
> reasons why they are impossible.

This is the essence of my whole point.

> Unfortunately it is not because they are ill-formed or paradoxical,

This is where you seem to fail to fully understand what I am saying.
I am allocating the name "ill-formed" to describe any question that 
inherently defines itself as having no possible correct answer.

I see this as apt because to look at these things otherwise would seem 
to indicate that there is an actual limitation to computation. If the 
only limitation to computation is that algorithms can not accomplish 
what has been defined as impossible, this is no actual limitation at 
all. Because of this I describe these impossible questions as ill-formed.

> it is simply that both arithmetic and
> Turing machines can talk about themselves (and each other of course).
>

0
Peter
12/10/2013 3:34:22 PM
On 12/10/2013 9:23 AM, Antti Valmari wrote:
> On 12/10/13 14:28, Peter Olcott wrote:
>> The two different levels of indirection come from two different
>> perspectives. The perspective of the outside observer, and the
>> perspective of the potential halt decider.
>>
>> When the outside observer is asked:
>> Does Program P Halt on input I?
>>
>> This is an entirely different question than when the potential Halt
>> Decider is asked:
>> Does Program P Halt on input I?
>
> I believe I understand what you are aiming at: sometimes two apparently
> identical questions are actually different questions because of a
> difference in the context. However, this is not the case with the
> theorem and proof of the non-existence of halting testers. The outside
> observer and the non-existing halting tester mean the very same thing
> with P, they mean the very same thing with I, and they mean the very
> same thing with "halts on".
>
> --- Antti Valmari ---
>

It is not possible to fully appreciate my point without carefully 
reading all that I said about this point. The specific concrete example 
that you did not respond to is crucial to fully appreciating my position.
0
Peter
12/10/2013 3:58:58 PM
On 12/8/2013 9:29 AM, Peter Olcott wrote:
> On 12/7/2013 3:02 PM, Tak To wrote:
>> On 12/6/2013 10:24 PM, Peter Olcott wrote:
>>> On 12/6/2013 3:36 PM, Tak To wrote:
>>>> On 12/6/2013 12:52 PM, Doc O'Leary wrote:
>>>>> In article <e9udnTtlbOtHWj3PnZ2dnUVZ_qmdnZ2d@giganews.com>,
>>>>>    Peter Olcott <OCR4Screen> wrote:
>>>>>
>>>>>>> Perhaps you do not understand it as well as you think you do.  I myself
>>>>>>> have already pointed out that to "extend" it essentially results in an
>>>>>>> infinite specification.  The fundamental problem is implicit vs.
>>>>>>> explicit meaning.
>>>>>>>
>>>>>> Could you provide a specific concrete example?
>>>>> You have yourself!  The very use of "you" requires a person to draw on
>>>>> an internal context to give meaning to the word.  Natural languages are
>>>>> *full* of such implicit constructs, which really have no explicit
>>>>> meaning unless you're dealing with a like-minded individual.  The
>>>>> mistake you're making is in thinking that language is a data store
>>>>> rather than a communication channel.
>>>> Well put.
>>>>
>>>> Another way of looking at it is that language
>>>> contains "hints" for finding "meaning" (i.e.,
>>>> for understanding) rather than "meaning" itself.
>>> For the meaning of direct physical sensations it may not even amount to
>>> hints, such as explaining exactly what a rainbow looks like to one who
>>> has always been blind.  It seems to me that the meaning of all
>>> conceptual knowledge can be exhaustively encoded within Montague Grammar
>>> and its enhancements.

>> Before tackling the abstract notion of "meaning",
>> let's backtrack a bit and focus on what "understanding
>> (a meaning)" is.  Borrowing from Turing's Test, to
>> have understood X (or to have knowledge of X) is to be
>> able to answer questions about X (to the extent of one's
>> understanding/knowledge).  Thus, one needs a model or
>> representation of knowledge about X, as well as a way
>> to draw inferences from the representation.  This is
>> precisely what the field of "Knowledge Representation"
>> (KR) in Cognitive Science is about.
> 
> Yes. This is the first thing that I read about in my quest.
> What I am about to say may seem quite terse, yet it may also be apt:
> The key to (KR) is to fully elaborate the inherent rules of the 
> compositionality of natural language.

No. See below.

>> As two people communicating with each other through
>> language, each will add new constructions to his
>> own internal representation of knowledge based on
>> the input from the other.  Note that each person
>> would build his internal representation in his
>> own way,
> 
> I would call this the discourse ontology.

I don't know what "ontology" means here.  Do you
mean something like a rule system?

>> and the summation of all the representations
>> (let's call it one's Model Of Everything -- MOE) is
>> the mind itself.
>>
>> Note also that the form of one's internal MOE has
>> no relationship to the internal structure of the
>> language at all.  In this light, one can say that
>> language carries no meaning by itself but will create
>> meaning (and understanding) when interpreted by a mind.
>>
>> And if one insists on calling the information carried
>> in the language constructs (words, phrases, sentences,
>> etc) "meaning", then one must remember that language-
>> meaning and MOE-meaning are at two distinct levels
>> of abstraction, with totally different epistemological
>> "primitives" (including axioms).

> If I understand you correctly, this would be the natural language of the
> discourse mapping to the discourse ontology, which in turn maps to the
> base ontology. The base ontology is something like your MOE.

I don't know what your "ontology" is, much less
"base ontology".

In any case, the term "mapping" seems to imply that
one is at a level of abstraction comparable to that of
the other, which is contrary to my view.

Note that one's MOE is built entirely from empirical
experiences.  All the language rules are induced from
past experiences of discourse.  Each unit (word,
phrase, etc) carries a summary of past usage.  Likewise
for each abstract concept.  One can't point to a
sub-part of the MOE and say, here is the equivalent
of the concept "I", or "if", or "mother".  There
is no "mapping".

Consider a computer executing a BASIC interpreter
which is executing a BASIC program.  There is the
logic of the BASIC program, the logic of the BASIC
interpreter, the logic of machine instruction,
the logic of the microcode, the logic of the digital
circuitry, as well as the logic (i.e., physical laws)
of the electronics.  Each is at a different level of
abstraction.  There is no "mapping" across the levels.

There is a word -- innumeracy -- that describes,
among other things, the inability to grasp the
magnitude of a number in a context.  I wish there
were a word that would describe the inability to
grasp complexity and level of abstraction.
That would have saved me a lot of time when
discussing cognitive science issues.

>> In any case, no matter what one is trying to model
>> -- language-meaning or MOE-meaning -- in a tractable
>> scale, one is immediately faced with that fact that
>> the selection of "primitives" is almost entirely
>> arbitrary.  With this, very few claims of universality
>> can be justified.  In other words, what works in
>> a micro-reality such as SHRDLU cannot be generalized
>> to the real world at all.  In short, why bother?

> There is already an inherent natural structure that fully elaborates 
> every detail of all of the rules of natural language compositionality 
> (NLC) that only needs to be discovered. Once these rules have been fully
> elaborated, the (KR) problem would seem to be fully addressed.

Again, these "rules of NLC" (if definable at all)
are at a disparate level of abstraction and are thus
ill-suited for knowledge representation.

>> I also don't see why one would choose to use the
>> notation of a language/grammar to "encode" meaning.
>> Why not use, say, some form of lambda calculus
>> instead? At the very least, one can easily build
>> rules/relationships and entities out of existing
>> rules/relationships and entities.
> 
> I think that Montague proposed this, and this encoding may form the 
> basis for his semantic grammar. Ultimately these meaning postulates must 
>> map to the equivalent of the human mind's representation.

Why repeat his mistakes?

>>>>>>> Billy thinks Sally said the ball was red.
>>>>>> I am just throwing this together from imperfect memory, but, it is
>>>>>> something like this:
>>>>>>
>>>>>> BILLY {believes} (SALLY {said} "The ball was red")
>>>>> I'm sorry, but I fail to see any "atoms of meaning" in that rudimentary
>>>>> addition of notation.  The word "said" alone remains entirely ambiguous,
>>>>> because in common usage that can refer to all manner of spoken or
>>>>> written statements.  I also don't understand why you decided not to
>>>>> notate ball and red, instead treating the information as if it were a
>>>>> quotation.  There is a real incompleteness in the ideas you put forth.

Tak
--
----------------------------------------------------------------+-----
Tak To                                            takto@alum.mit.eduxx
--------------------------------------------------------------------^^
 [taode takto ~{LU5B~}]      NB: trim the xx to get my real email addr

0
Tak
12/10/2013 6:50:03 PM
On 12/10/2013 12:50 PM, Tak To wrote:
> On 12/8/2013 9:29 AM, Peter Olcott wrote:
>> On 12/7/2013 3:02 PM, Tak To wrote:
>>> On 12/6/2013 10:24 PM, Peter Olcott wrote:
>>>> On 12/6/2013 3:36 PM, Tak To wrote:
>>>>> On 12/6/2013 12:52 PM, Doc O'Leary wrote:
>>>>>> In article <e9udnTtlbOtHWj3PnZ2dnUVZ_qmdnZ2d@giganews.com>,
>>>>>>     Peter Olcott <OCR4Screen> wrote:
>>>>>>
>>>>>>>> Perhaps you do not understand it as well as you think you do.  I myself
>>>>>>>> have already pointed out that to "extend" it essentially results in an
>>>>>>>> infinite specification.  The fundamental problem is implicit vs.
>>>>>>>> explicit meaning.
>>>>>>>>
>>>>>>> Could you provide a specific concrete example?
>>>>>> You have yourself!  The very use of "you" requires a person to draw on
>>>>>> an internal context to give meaning to the word.  Natural languages are
>>>>>> *full* of such implicit constructs, which really have no explicit
>>>>>> meaning unless you're dealing with a like-minded individual.  The
>>>>>> mistake you're making is in thinking that language is a data store
>>>>>> rather than a communication channel.
>>>>> Well put.
>>>>>
>>>>> Another way of looking at it is that language
>>>>> contains "hints" for finding "meaning" (i.e.,
>>>>> for understanding) rather than "meaning" itself.
>>>> For the meaning of direct physical sensations it may not even amount to
>>>> hints, such as explaining exactly what a rainbow looks like to one who
>>>> has always been blind.  It seems to me that the meaning of all
>>>> conceptual knowledge can be exhaustively encoded within Montague Grammar
>>>> and its enhancements.
>
>>> Before tackling the abstract notion of "meaning",
>>> let's backtrack a bit and focus on what "understanding
>>> (a meaning)" is.  Borrowing from Turing's Test, to
>>> have understood X (or to have knowledge of X) is to be
>>> able to answer questions about X (to the extent of one's
>>> understanding/knowledge).  Thus, one needs a model or
>>> representation of knowledge about X, as well as a way
>>> to draw inferences from the representation.  This is
>>> precisely what the field of "Knowledge Representation"
>>> (KR) in Cognitive Science is about.
>>
>> Yes. This is the first thing that I read about in my quest.
>> What I am about to say may seem quite terse, yet it may also be apt:
>> The key to (KR) is to fully elaborate the inherent rules of the
>> compositionality of natural language.
>
> No. See below.
>
>>> As two people communicating with each other through
>>> language, each will add new constructions to his
>>> own internal representation of knowledge based on
>>> the input from the other.  Note that each person
>>> would build his internal representation in his
>>> own way,
>>
>> I would call this the discourse ontology.
>
> I don't know what "ontology" means here.  Do you
> mean something like a rule system?

http://en.wikipedia.org/wiki/Cyc

>
>>> and the summation of all the representations
>>> (let's call it one's Model Of Everything -- MOE) is
>>> the mind itself.
>>>
>>> Note also that the form of one's internal MOE has
>>> no relationship to the internal structure of the
>>> language at all.  In this light, one can say that
>>> language carries no meaning by itself but will create
>>> meaning (and understanding) when interpreted by a mind.
>>>
>>> And if one insists on calling the information carried
>>> in the language constructs (words, phrases, sentences,
>>> etc) "meaning", then one must remember that language-
>>> meaning and MOE-meaning are at two distinct levels
>>> of abstraction, with totally different epistemological
>>> "primitives" (including axioms).
>
>> If I understand you correctly, this would be the natural language of the
>> discourse mapping to the discourse ontology, which in turn maps to the
>> base ontology. The base ontology is something like your MOE.
>
> I don't know what your "ontology" is, much less
> "base ontology".

A base ontology would be like a dictionary that explicitly defines every 
detail of every word. For example, the concept of {sphere} would be 
directly linked to almost all of analytical geometry.

(see above)
It all logically follows from the fundamental basis that Richard 
Montague provided, typically referred to as the Montague Grammar of semantics.

>
> In any case, the term "mapping" seems to imply that
> one is at a level of abstraction comparable to that of
> the other, which is contrary to my view.

All this mapping is fundamentally based on the correspondence theory of 
truth. Montague added several more aspects to this conception:
a model-theoretic interpretation and possible worlds.

>
> Note that one's MOE is built entirely from empirical
> experiences.  All the language rules are induced from
> past experiences of discourse.  Each unit (word,
> phrase, etc) carries a summary of past usage.  Likewise
> for each abstract concept.  One can't point to a
> sub-part of the MOE and say, here is the equivalent
> of the concept "I", or "if", or "mother".  There
> is no "mapping".

The mapping from the term to its meaning.
This is not your mother ---> "mother".

It is only a set of symbols that are a stand-in
for the general concept of a {mother}.

Sure there are two linguistic mappings:
extension and intension.

Montague added at least two others:
1) Model of a possible world
2) A Possible world itself

>
> Consider a computer executing a BASIC interpreter
> which is executing a BASIC program.  There is the
> logic of the BASIC program, the logic of the BASIC
> interpreter, the logic of machine instruction,
> the logic of the microcode, the logic of the digital
> circuitry, as well as the logic (i.e., physical laws)
> of the electronics.  Each is at a different level of
> abstraction.  There is no "mapping" across the levels.

{mapping} as in the abstraction of a mathematical mapping, in other 
words a precise one-way correspondence between one thing and another.

In this case your example perfectly showed the concept of mapping.

>
> There is a word -- innumeracy -- that describes,
> among other things, the inability to grasp the
> magnitude of a number in a context.  I wish there
> were a word that would describe the inability to
> grasp complexity and level of abstraction.
> That would have saved me a lot of time when
> discussing cognitive science issues.
>
>>> In any case, no matter what one is trying to model
>>> -- language-meaning or MOE-meaning -- in a tractable
>>> scale, one is immediately faced with that fact that
>>> the selection of "primitives" is almost entirely
>>> arbitrary.  With this, very few claims of universality
>>> can be justified.  In other words, what works in
>>> a micro-reality such as SHRDLU cannot be generalized
>>> to the real world at all.  In short, why bother?
>
>> There is already an inherent natural structure that fully elaborates
>> every detail of all of the rules of natural language compositionality
>> (NLC) that only needs to be discovered. Once these rules have been fully
>> elaborated, the (KR) problem would seem to be fully addressed.
>
> Again, these "rules of NLC" (if definable at all)
> are at a disparate level of abstraction and are thus
> ill-suited for knowledge representation.

It may seem that way. I am not referring to how these things are 
generally represented within natural language. I am referring to a 
mathematical formalism that could represent the pure conceptions that 
the natural language is referring to. Montague Grammar would be the basis.

>
>>> I also don't see why one would choose to use the
>>> notation of a language/grammar to "encode" meaning.
>>> Why not use, say, some form of lambda calculus
>>> instead? At the very least, one can easily build
>>> rules/relationships and entities out of existing
>>> rules/relationships and entities.
>>
>> I think that Montague proposed this, and this encoding may form the
>> basis for his semantic grammar. Ultimately these meaning postulates must
>> map to the equivalent of the human mind's representation.
>
> Why repeat his mistakes?

Perhaps he made some mistakes; I did not notice any. I had already 
thought up the gist of his ideas before I ever read anything about his 
work.

<not credible (but true)>
One of these things that I thought up on my own was the correspondence 
theory of truth. Years later I read that it already had a name.
</not credible (but true)>

It was just last year that I decided to start reading about what others 
had done, so I read through a book that I bought back in 1995: Formal 
Semantics: An Introduction by Ronnie Cann.

I also bought Knowledge Representation and Reasoning by Brachman and 
Levesque because this is my real interest. The problem with the KR 
approach is that it starts with a simple understanding and attempts to 
extend it to larger problems, so the KR system has to be continually 
redesigned to accommodate these enhancements as greater understanding is 
achieved.

The advantage of first exhaustively solving the compositionality problem 
within linguistics is that the final design of the knowledge 
representation (KR) system is fully robust. No more little toy systems 
that can provide tiny increments of progress. Solve the big problem 
first, and then everything else falls right into place.

Although the initial problem is larger, one gets to an optimal complete 
solution in minimal time.

>
>>>>>>>> Billy thinks Sally said the ball was red.
>>>>>>> I am just throwing this together from imperfect memory, but, it is
>>>>>>> something like this:
>>>>>>>
>>>>>>> BILLY {believes} (SALLY {said} "The ball was red")
>>>>>> I'm sorry, but I fail to see any "atoms of meaning" in that rudimentary
>>>>>> addition of notation.  The word "said" alone remains entirely ambiguous,
>>>>>> because in common usage that can refer to all manner of spoken or
>>>>>> written statements.  I also don't understand why you decided not to
>>>>>> notate ball and red, instead treating the information as if it were a
>>>>>> quotation.  There is a real incompleteness in the ideas you put forth.
>
> Tak
> --
> ----------------------------------------------------------------+-----
> Tak To                                            takto@alum.mit.eduxx
> --------------------------------------------------------------------^^
>   [taode takto ~{LU5B~}]      NB: trim the xx to get my real email addr
>

0
Peter
12/10/2013 8:23:28 PM
Peter Olcott <OCR4Screen> writes:

> On 12/10/2013 8:14 AM, Ben Bacarisse wrote:
<snip>
>> Of course, in one way Peter is absolutely correct!  The decision
>> problems of arithmetic and halting and so on are, in some sense,
>> inherently problematic.  They contain, in their very definitions, the
>> reasons why they are impossible.
>
> This is the essence of my whole point.
>
>> Unfortunately it is not because they are ill-formed or paradoxical,
>
> This is where you seem to fail to fully understand what I am saying.

I understand you perfectly.  There is nothing unclear about the words
you use (in this instance at least) and you express yourself with
admirable clarity.  Unfortunately you are wrong.

> I am allocating the name "ill-formed" to describe any question that
> inherently defines itself as having no possible correct answer.

But, as has been said so many times before, the question "is there a
halting tester" has a correct answer: no.  It is in the very nature of
what I said (to which you seemed to agree) that existence questions
about impossible things have well-defined answers -- always no.

> I see this as apt because to look at these things otherwise would seem
> to indicate that there is an actual limitation to computation.

You repeatedly refuse to look at other limits of computation.  There is
no TM that can say if two TMs compute the same function, there is no TM
that can say if a TM will accept a given string, there is no TM that can
compute the "busy beaver" function, and there is no TM that can decide
halting.  You've latched onto this last one, but there are many, many
others.

> If the
> only limitation to computation is that algorithms can not accomplish
> what has been defined as impossible, this is no actual limitation at
> all. Because of this I describe these impossible questions as
> ill-formed.

All the non-computable functions are not computable for some reason
inherent in their specification.  What other kind of reason could there
possibly be?  Does that mean we can all go home now?

The only thing that separates

  A) is there an even prime greater than 2?
  B) is there a halting decider?
  C) is there an odd perfect number?

is the degree of obviousness.  Almost everyone can answer A, B took some
time and ingenuity, and C is still open, although almost everyone thinks
that there is something about being perfect that precludes such a number
from being odd.  The impossibility of existence can reside nowhere other
than in the definition of the things involved, but it sounds rather
dismissive to say these things are "defined as impossible" because it
suggests that one might have defined them otherwise.  I would be
perfectly happy with "the definition renders them impossible" or "the
definition precludes their existence".  If that quiets your demons we can,
indeed, all go home.

One thing that you can't do, however, is to say that any of A, B or C is
ill-formed.  You can't take language that is in common use and
appropriate it for your own purposes.  All are perfectly well-formed
questions with correct yes/no answers.  Even C has a correct yes/no
answer, though we don't yet know (and may never know) what it is.

-- 
Ben.
0
Ben
12/10/2013 8:56:12 PM
On 12/10/2013 2:56 PM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 12/10/2013 8:14 AM, Ben Bacarisse wrote:
> <snip>
>>> Of course, in one way Peter is absolutely correct!  The decision
>>> problems of arithmetic and halting and so on are, in some sense,
>>> inherently problematic.  They contain, in their very definitions, the
>>> reasons why they are impossible.
>> This is the essence of my whole point.
>>
>>> Unfortunately it is not because they are ill-formed or paradoxical,
>> This is where you seem to fail to fully understand what I am saying.
> I understand you perfectly.  There is nothing unclear about the words
> you use (in this instance at least) and you express yourself with
> admirable clarity.  Unfortunately you are wrong.
Thanks for the comment on clarity.
When a person is defining a new term using existing meanings this new 
term is correct by tautology.
If I say Let X = Y, no one is free to correctly say that X != Y.

>
>> I am allocating the name "ill-formed" to describe any question that
>> inherently defines itself as having no possible correct answer.
> But, as has been said so many times before, the question "is there a
> halting tester" has a correct answer: no.  It is in the very nature of
> what I said (to which you seemed to agree) that existence questions
> about impossible things have well-defined answers -- always no.

We already agree on the essence of my whole point:

Your words:
"They contain, in their very definitions,
the reasons why they are impossible."

everything else is of much less consequence.

>
>> I see this as apt because to look at these things otherwise would seem
>> to indicate that there is an actual limitation to computation.
> You repeatedly refuse to look at other limits of computation.  There is
Yes, I compulsively insist on staying focused on the point
without changing the subject until that point is made.

I have now finally made my point with at least one person.

> no TM that can say if two TMs compute the same function, there is no TM
> that can say if TM will accept a given string, there is no TM that can
> compute the "busy beaver" function, and there is no TM that can decide
> halting.  You've latched onto this last one, but there are many, many
> others.
>
>> If the
>> only limitation to computation is that algorithms can not accomplish
>> what has been defined as impossible, this is no actual limitation at
>> all. Because of this I describe these impossible questions as
>> ill-formed.
> All the non-computable functions are not computable for some reason
> inherent in their specification.  What other kind of reason could there
> possibly be?  Does that mean we can all go home now?
As soon as we reach a consensus with many other readers.

> The only thing that separates
>
>    A) is there an even prime greater than 2?
>    B) is there a halting decider?
>    C) is there an odd perfect number?
>
> is the degree of obviousness.  Almost everyone can answer A, B took some
> time and ingenuity, and C is still open, although almost everyone thinks
> that there is something about being perfect that precludes such a number
> from being odd.  The impossibility of existence can reside nowhere other
> than in the definition of the things involved, but it sounds rather
> dismissive to say these things are "defined as impossible" because it
> suggests that one might have defined them otherwise.  I would be
> perfectly happy with "the definition renders them impossible" or "the
> definition precludes their existence".  If that quiets your demons we can,
> indeed, all go home.
>
> One thing that you can't do, however, is to say that any of A, B or C is
> ill-formed.  You can't take language that is in common use and
> appropriate it for your own purposes.  All are perfectly well-formed
> questions with correct yes/no answers.  Even C has a correct yes/no
> answer, though we don't yet know (and may never know) what it is.

When I define a term using tautology such as:
Let X = Y
Then no one else is free to correctly conclude that X != Y

Whether the combination of terms such as {ill-formed} and {question} 
corresponds to the common meanings of those terms when applied together 
to questions lacking a possible correct answer may be debatable.

The primary reason why I consider the term ill-formed question apt when 
applied to the Halting Problem, is that the inability to accomplish the 
impossible does not form any actual limit on computation.

Even an otherwise all-powerful deity is absolutely helpless to achieve 
the goal of creating an actual square circle.
0
Peter
12/11/2013 1:16:22 AM
Peter Olcott <OCR4Screen> writes:

> On 12/10/2013 2:56 PM, Ben Bacarisse wrote:
>> Peter Olcott <OCR4Screen> writes:
>>
>>> On 12/10/2013 8:14 AM, Ben Bacarisse wrote:
>> <snip>
>>>> Of course, in one way Peter is absolutely correct!  The decision
>>>> problems of arithmetic and halting and so on are, in some sense,
>>>> inherently problematic.  They contain, in their very definitions, the
>>>> reasons why they are impossible.
>>> This is the essence of my whole point.
>>>
>>>> Unfortunately it is not because they are ill-formed or paradoxical,
>>> This is where you seem to fail to fully understand what I am saying.
>> I understand you perfectly.  There is nothing unclear about the words
>> you use (in this instance at least) and you express yourself with
>> admirable clarity.  Unfortunately you are wrong.

> Thanks for the comment on clarity.
> When a person is defining a new term using existing meanings this new
> term is correct by tautology.
> If I say Let X = Y, no one is free to correctly say that X != Y.

If the term is well-known with an established meaning, you can't do that
and have any hope of communicating clearly.  You can't do it at all in a
medium like Usenet where context is not preserved.

>>> I am allocating the name "ill-formed" to describe any question that
>>> inherently defines itself as having no possible correct answer.

>> But, as has been said so many times before, the question "is there a
>> halting tester" has a correct answer: no.  It is in the very nature of
>> what I said (to which you seemed to agree) that existence questions
>> about impossible things have well-defined answers -- always no.
>
> We already agree on the essence of my whole point:

Can I take it, then, that you agree that according to your definition of
"ill-formed" the question "is there a halting tester" is not ill-formed?

> Your words:
> "They contain, in their very definitions,
> the reasons why they are impossible."
>
> everything else is of much less consequence.

Then why have you come here?  Everyone here knows that the properties of
TMs -- in particular what can and can not be computed by them -- follows
inexorably from the definitions involved.  Proofs are, after all,
syntactic things.  What is it that you think you have just discovered
that was worth posting about?

>>> I see this as apt because to look at these things otherwise would seem
>>> to indicate that there is an actual limitation to computation.
>> You repeatedly refuse to look at other limits of computation.  There is

> Yes, I compulsively insist on staying focused on the point
> without changing the subject until that point is made.

There is no way to sugar coat this: you are blinkered, not focused.
Repeating the same errors again and again, some of which you made years
ago is not staying focused, it's staying ignorant.

> I have now finally made my point with at least one person.

You've made your point with lots of people.  I see lots of replies that
make it clear that the poster has got your point exactly.

>> no TM that can say if two TMs compute the same function, there is no TM
>> that can say if TM will accept a given string, there is no TM that can
>> compute the "busy beaver" function, and there is no TM that can decide
>> halting.  You've latched onto this last one, but there are many, many
>> others.
>>
>>> If the
>>> only limitation to computation is that algorithms can not accomplish
>>> what has been defined as impossible, this is no actual limitation at
>>> all. Because of this I describe these impossible questions as
>>> ill-formed.
>> All the non-computable functions are not computable for some reason
>> inherent in their specification.  What other kind of reason could there
>> possibly be?  Does that mean we can all go home now?

> As soon as we reach a consensus with many other readers.

I suspect most people here will consider what I've said as obvious to
the point of banality, but I very much doubt that you agree with me.

>> The only thing that separates
>>
>>    A) is there an even prime greater than 2?
>>    B) is there a halting decider?
>>    C) is there an odd perfect number?
>>
>> is the degree of obviousness.  Almost everyone can answer A, B took some
>> time and ingenuity, and C is still open, although almost everyone thinks
>> that there is something about being perfect that precludes such a number
>> from being odd.  The impossibility of existence can reside nowhere other
>> than in the definition of the things involved, but it sounds rather
>> dismissive to say these things are "defined as impossible" because it
>> suggests that one might have defined them otherwise.  I would be
>> perfectly happy with "the definition renders them impossible" or "the
>> definition precludes their existence".  If that quiets your demons we can,
>> indeed, all go home.
>>
>> One thing that you can't do, however, is to say that any of A, B or C is
>> ill-formed.  You can't take language that is in common use and
>> appropriate it for your own purposes.  All are perfectly well-formed
>> questions with correct yes/no answers.  Even C has a correct yes/no
>> answer, though we don't yet know (and may never know) what it is.
>
> When I define a term using tautology such as:
> Let X = Y
> Then no one else is free to correctly conclude that X != Y

That's a child's view of language, and it won't work -- especially here.
But if you really believe that words can be defined like this, here's a
test: why not call all such problems "undecidable" rather than
"ill-formed"?  It is, after all, just a word and any word is a good as
any other, provided the definition is clear.

> Whether the combination of terms such as {ill-formed} and {question}
> corresponds to the common meanings of those terms when applied together
> to questions lacking a possible correct answer may be debatable.

Oh dear, there it is again.  The question "is there a halting decider"
has a correct answer: no.  It is therefore not ill-formed, even by your
misguided attempt to redefine the term.  Is there any point in asking if
you agree with this -- you seem to duck the question every time anyone
asks it?

> The primary reason why I consider the term ill-formed question apt
> when applied to the Halting Problem, is that the inability to
> accomplish the impossible does not form any actual limit on
> computation.

What you've just said is literally nonsense.  The *only* limits to what
can be computed are those things that are impossible to compute.  That's
what "the limits of computation" means.

> Even an otherwise all-powerful deity is absolutely helpless to achieve
> the goal of creating an actual square circle.

I don't understand a high enough proportion of the words in that remark
to comment on it.

-- 
Ben.
0
Ben
12/11/2013 3:30:52 AM
On 12/10/2013 9:30 PM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 12/10/2013 2:56 PM, Ben Bacarisse wrote:
>>> Peter Olcott <OCR4Screen> writes:
>>>
>>>> On 12/10/2013 8:14 AM, Ben Bacarisse wrote:
>>> <snip>
>>>>> Of course, in one way Peter is absolutely correct!  The decision
>>>>> problems of arithmetic and halting and so on are, in some sense,
>>>>> inherently problematic.  They contain, in their very definitions, the
>>>>> reasons why they are impossible.
>>>> This is the essence of my whole point.
>>>>
>>>>> Unfortunately it is not because they are ill-formed or paradoxical,
>>>> This is where you seem to fail to fully understand what I am saying.
>>> I understand you perfectly.  There is nothing unclear about the words
>>> you use (in this instance at least) and you express yourself with
>>> admirable clarity.  Unfortunately you are wrong.
>> Thanks for the comment on clarity.
>> When a person is defining a new term using existing meanings this new
>> term is correct by tautology.
>> If I say Let X = Y, no one is free to correctly say that X != Y.
> If the term is well-known with an established meaning, you can't do that
> and have any hope of communicating clearly.  You can't do it at all in a
> medium like Usenet where context is not preserved.
>
>>>> I am allocating the name "ill-formed" to describe any question that
>>>> inherently defines itself as having no possible correct answer.
>>> But, as has been said so many times before, the question "is there a
>>> halting tester" has a correct answer: no.  It is in the very nature of
>>> what I said (to which you seemed to agree) that existence questions
>>> about impossible things have well-defined answers -- always no.
>> We already agree on the essence of my whole point:
> Can I take it, then, that you agree that according to your definition of
> "ill-formed" the question "is there a halting tester" is not ill-formed?

Your words:
"They contain, in their very definitions,
the reasons why they are impossible."

It never was the halt tester itself that was ill-formed, it was elements 
from the set of all possible inputs that was ill-formed. So in the same 
way that a C++ compiler would fail to correctly compile an English Poem, 
some input to halt testers will not result in correct output.

>> Your words:
>> "They contain, in their very definitions,
>> the reasons why they are impossible."
>>
>> everything else is of much less consequence.
> Then why have you come here?  Everyone here knows that the properties of
> TMs -- in particular what can and can not be computed by them -- follows
> inexorably from the definitions involved.  Proofs are, after all,
> syntactic things.  What is it that you think you have just discovered
> that was worth posting about?
The Halting Problem shows no actual limit to computation. That an 
impossible thing can not be accomplished does not form any actual limit. 
To say that the halting problem places a limit on computation would be 
like saying that square circles place a limit on geometry.

>>>> I see this as apt because to look at these things otherwise would seem
>>>> to indicate that there is an actual limitation to computation.
>>> You repeatedly refuse to look at other limits of computation.  There is
>> Yes, I compulsively insist on staying focused on the point
>> without changing the subject until that point is made.
> There is no way to sugar coat this: you are blinkered, not focused.
> Repeating the same errors again and again, some of which you made years
> ago is not staying focused, it's staying ignorant.
>
>> I have now finally made my point with at least one person.
> You've made your point with lots of people.  I see lots of replies that
> make it clear that the poster has got your point exactly.
>
>>> no TM that can say if two TMs compute the same function, there is no TM
>>> that can say if TM will accept a given string, there is no TM that can
>>> compute the "busy beaver" function, and there is no TM that can decide
>>> halting.  You've latched onto this last one, but there are many, many
>>> others.
>>>
>>>> If the
>>>> only limitation to computation is that algorithms can not accomplish
>>>> what has been defined as impossible, this is no actual limitation at
>>>> all. Because of this I describe these impossible questions as
>>>> ill-formed.
>>> All the non-computable functions are not computable for some reason
>>> inherent in their specification.  What other kind of reason could there
>>> possibly be?  Does that mean we can all go home now?
>> As soon as we reach a consensus with many other readers.
> I suspect most people here will consider what I've said as obvious to
> the point of banality, but I very much doubt that you agree with me.
>
>>> The only thing that separates
>>>
>>>     A) is there an even prime greater than 2?
>>>     B) is there a halting decider?
>>>     C) is there an odd perfect number?
>>>
>>> is the degree of obviousness.  Almost everyone can answer A, B took some
>>> time and ingenuity, and C is still open, although almost everyone thinks
>>> that there is something about being perfect that precludes such a number
>>> from being odd.  The impossibility of existence can reside nowhere other
>>> than in the definition of the things involved, but it sounds rather
>>> dismissive to say these things are "defined as impossible" because it
>>> suggests that one might have defined them otherwise.  I would be
>>> perfectly happy with "the definition renders them impossible" or "the
>>> definition precludes their existence".  If that quiets your demons we can,
>>> indeed, all go home.
>>>
>>> One thing that you can't do, however, is to say that any of A, B or C is
>>> ill-formed.  You can't take language that is in common use and
>>> appropriate it for your own purposes.  All are perfectly well-formed
>>> questions with correct yes/no answers.  Even C has a correct yes/no
>>> answer, though we don't yet know (and may never know) what it is.
>> When I define a term using tautology such as:
>> Let X = Y
>> Then no one else is free to correctly conclude that X != Y
> That's a child's view of language, and it won't work -- especially here.
> But if you really believe that words can be defined like this, here's a
> test: why not call all such problems "undecidable" rather than
> "ill-formed"?  It is, after all, just a word and any word is a good as
> any other, provided the definition is clear.
>
>> Whether the combination of terms such as {ill-formed} and {question}
>> corresponds to the common meanings of those terms when applied together
>> to questions lacking a possible correct answer may be debatable.
> Oh dear, there it is again.  The question "is there a halting decider"
> has a correct answer: no.  It is therefore not ill-formed, even by your
> misguided attempt to redefine the term.  Is there any point in asking if
> you agree with this -- you seem to duck the question every time anyone
> asks it?
>
>> The primary reason why I consider the term ill-formed question apt
>> when applied to the Halting Problem, is that the inability to
>> accomplish the impossible does not form any actual limit on
>> computation.
> What you've just said is literally nonsense.  The *only* limits to what
> can be computed are those things that are impossible to compute.  That's
> what "the limits of computation" means.
>
>> Even an otherwise all-powerful deity is absolutely helpless to achieve
>> the goal of creating an actual square circle.
> I don't understand a high enough proportion of the words in that remark
> to comment on it.
>

0
Peter
12/11/2013 9:53:04 AM
On 12/10/2013 9:48 PM, George Greene wrote:
> On Tuesday, December 10, 2013 8:16:22 PM UTC-5, Peter Olcott wrote:
>> When a person is defining a new term using existing meanings this new
>> term is correct by tautology.
>
> I wish I could type a macro that expands to this:
> "We may always quote 'The Princess Bride':
>   'I don't think that word means what you think it means.'
> A tautology is a proposition that is true irrespective of the
> truth-values of its component/atomic propositional-"variables".
> It's a proposition whose truth-table has a T in every row of its
> payoff column.  In particular it is NOT a DEFINITION.
>
>

Tautology in the sense that it cannot possibly be false.
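
For instance, a minimal sketch in Python of George's truth-table
criterion (the function names here are just illustrative):

    from itertools import product

    def is_tautology(formula, nvars):
        # True iff the payoff column has a T in every row
        return all(formula(*row) for row in product([False, True], repeat=nvars))

    print(is_tautology(lambda p: p or not p, 1))   # True: a tautology
    print(is_tautology(lambda p, q: p or q, 2))    # False: fails the F,F row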

>> If I say Let X = Y, no one is free to correctly say that X != Y.
>
> Well, sure, if X is a NEW term.  For better or for worse, though,
> you are not fond of thinking up new names, since, if no one had ever
> seen them before, they would be deprecated as nonsensical.  You prefer
> to use words that HAVE been seen before, and unfortunately for you,
> these ARE NOT "new" -- they ALREADY HAVE definitions IN THE DICTIONARY.
> And if your proposed definition does SUFFICIENT VIOLENCE to THE PRIOR
> KNOWN definition, THEN, YES, WE DO get to say that your X does not equal
> your Y, because if you're going to call your X "X", WE REQUIRE that it
> have at LEAST A LITTLE in COMMON with OUR X.
>

Another apt label for a question lacking a possible answer would be 
simply an impossible question.

It is obvious that the following question is syntactically ill-formed:
When are you going to go to the?

This question seems semantically ill-formed:
1) How many feet long is the color of your car?

(color refers to the visual sensory perception and not a wavelength of a 
spectrum of light).

It seems clear that the next question is semantically ill-formed:
2) What is the value of an integer that is less than 3 and greater than 
5? (The answer must be numeric).

The distinction between an impossible question and an ill-formed 
question (if there is any) might be the difference between question (1) 
and question (2).

Question (1) seems clearly semantically ill-formed, whereas question (2) 
may only be impossible. When we add that the answer must be numeric to 
question (2), then this question may transition from impossible to 
ill-formed.

Distinction between impossible and ill-formed questions:
http://en.wikipedia.org/wiki/History_of_type_theory#G.C3.B6del_1944
sentences of the form: " a has the property φ ", " b bears the relation 
R to c ", etc. are meaningless, if a, b, c, R, φ are not of types 
fitting together.
0
Peter
12/11/2013 11:27:18 AM
On 12/11/2013 1:43 AM, Franz Gnaedinger wrote:
> On Tuesday, December 10, 2013 2:17:45 PM UTC+1, Peter Olcott wrote:
>>
>> Maybe this may seem a more apt description of atomic units of meaning:
>> The set of all conceptions can be further divided into:
>> a) Conceptions (elements of the set)
>> b) Connections between conceptions.
>
> Examples please. If you want to clarify language,
> you can't use ambiguous terms and avoid giving examples,

http://dictionary.reference.com/browse/concept
By {concept} I mean something like the above noun when this {concept} is 
represented using words. When I use terms I try to stick with their 
common meaning, or point out the alternative sense meaning that I am 
referring to.

By connections between conceptions I am referring to every way that two 
or more conceptions might be related to each other. Examples of 
connections between conceptions:

(curly braces indicate exhaustively elaborated meaning postulates)

a) {Human} is a {Type_Of} {Living_Being}.
b) {False}   = {True} + {Negation}
c) {Walked}  = {Walk} + {Past_Tense}
d) {Boats}   = {Boat} + {Plural}
e) {Type_Of} = {Derived_Type_Relation} of {Inheritance_Hierarchy}
f) "+" {Combined_With} Operator

Computerese:
1) Conceptions are represented as the nodes of an acyclic directed graph.

2) Connections between conceptions are represented as the edges of this 
acyclic directed graph.
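
A minimal sketch of this representation in Python (the node and edge
names are just the examples above; the cycle check mirrors Lemma01's
acyclicity requirement):

    # Conceptions are the nodes; connections between conceptions are the
    # edges of a directed graph.
    edges = {
        "Human":  ["Living_Being"],        # a) {Human} Type_Of {Living_Being}
        "False":  ["True", "Negation"],    # b) {False}  = {True} + {Negation}
        "Walked": ["Walk", "Past_Tense"],  # c) {Walked} = {Walk} + {Past_Tense}
        "Boats":  ["Boat", "Plural"],      # d) {Boats}  = {Boat} + {Plural}
    }

    def is_acyclic(graph):
        # Depth-first search; an edge back to a node on the current path
        # would be a cycle.
        WHITE, GRAY, BLACK = 0, 1, 2
        color = {}
        def visit(node):
            color[node] = GRAY
            for succ in graph.get(node, []):
                state = color.get(succ, WHITE)
                if state == GRAY or (state == WHITE and not visit(succ)):
                    return False
            color[node] = BLACK
            return True
        return all(color.get(n, WHITE) != WHITE or visit(n) for n in list(graph))

    print(is_acyclic(edges))   # True: these meaning postulates form a DAG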

> hiding behind computerese. You give me the impression
> that you despise real language for its ambiguities.
> However, these very ambiguities do a fine job for us.
> We live uncertain lives in an unsteady world, yet
> language balances our lives, makes us feel at home
> and safe. Language can deal with uncertainties and
> evens them out more or less. We ride on a log in
> a mountain river, yet we believe to sail across
> a peaceful lake. You fail to appreciate what real
> language does for us. You can't obtain absolute
> certainty from language, nor complete knowledge.
> Be thankful what language does and don't complain
> about what it doesn't and can't.
>

0
Peter
12/11/2013 12:22:12 PM
On 11/12/13 09:53, Peter Olcott wrote:
> [...] So
> in the same way that a C++ compiler would fail to correctly compile
> an English Poem,

	Whoa!  The correct behaviour for a C++ compiler fed with an
English Poem [assuming that this is not also accidentally a correct
C++ program] is to print an appropriate error message and then halt
without producing object code.  A correct compiler should not, for
any source code, either fail to run to completion or produce object
code that does not conform to the language specification.  But I
don't know of any compiler [for C++ or any other language] on my
current machine which fails this test for "an English Poem".  If
one does, I would describe this as a serious bug.

>		  some input to halt testers will not result in
> correct output.

	There can be no correct halt testers, whereas there is no
reason to suppose that there can be no correct C++ compiler.  There
is an intrinsic difference between a purported halt tester [which
demonstrably produces incorrect output on some inputs] and a
purported C++ compiler which happens, by reason of a bug, to produce
incorrect object code.

-- 
Andy Walker,
Nottingham.
0
Andy
12/11/2013 3:20:52 PM
On 12/11/2013 9:20 AM, Andy Walker wrote:
> On 11/12/13 09:53, Peter Olcott wrote:
>> [...] So
>> in the same way that a C++ compiler would fail to correctly compile
>> an English Poem,
>
>      Whoa!  The correct behaviour for a C++ compiler fed with an
> English Poem [assuming that this is not also accidentally a correct
> C++ program] is to print an appropriate error message and then halt
> without producing object code.  A correct compiler should not, for
> any source code, either fail to run to completion or produce object
> code that does not conform to the language specification.  But I
> don't know of any compiler [for C++ or any other language] on my
> current machine which fails this test for "an English Poem".  If
> one does, I would describe this as a serious bug.
>
>>           some input to halt testers will not result in
>> correct output.
>
>      There can be no correct halt testers, whereas there is no
> reason to suppose that there can be no correct C++ compiler.  There
> is an intrinsic difference between a purported halt tester [which
> demonstrably produces incorrect output on some inputs] and a
> purported C++ compiler which happens, by reason of a bug, to produce
> incorrect object code.
>

You removed so much of what I said that what you are responding to no 
longer corresponds to what I said.
0
Peter
12/11/2013 3:42:39 PM
On 10/12/13 13:55, Antti Valmari wrote:
>> [...]  It's just as easy to show that
>> any given program is not a halting tester, with no self-reference [even
>> by Peter's standards] and no contradiction, just by looking at the
>> possible behaviours of related programs.  But if every program is not
>> a halting tester, then no program is a halting tester, QED.
> Let me try. Let P(X,Y) be any program that, for any two finite byte
> strings X and Y, terminates replying "yes" or "no".

	We can't in general know that P terminates, so I'd prefer to
add non-termination as a potential outcome.
>						  Let Q(X) be the
> following program:
>
>    if P(X,X) then enter eternal loop
>
> Q(X) terminates if and only if P(X,X) replies "no". In particular, Q(Q)
> terminates if and only if P(Q,Q) replies "no". So P(X,Y) does not reply
> correctly to the question "does program X halt on input Y", when X = Y =
> Q. Thus P is not a halting tester.

	OK [modulo the above].

> To me Q(Q) seems self-reference in some sense. Of course, it is what you
> call distinction between a running program and its source. But you say
> that it tends to get somewhat blurred. That is the (or a) problem! So
> can you get rid of Q(Q)?

	Yes, though the ice is thin.  For very thin ice, we can let R be a
  copy of Q, and run Q(R).  For ice that may be thick enough to stand on,
we can let S be a "Q" derived in a different way from P;  for example, S
could be the result of running an editor on P and replacing every instance
of "exit (success)" by "label: goto label" and every instance of "exit
(failure)" by "exit (success)".  Now we can run S(Q) or Q(S).

> As I and others have written earlier, self-reference is avoided by first
> proving that the Busy Beaver function is not computable [...].

	Yes;  I was one of the "others"!

-- 
Andy Walker,
Nottingham.
0
Andy
12/11/2013 4:11:27 PM
On 12/11/13 11:53, Peter Olcott wrote:
> It never was the halt tester itself that was ill-formed, it was elements
> from the set of all possible inputs that was ill-formed.

Precisely the opposite. The halting function is well-defined, exists,
and yields no paradoxes. The paradox only emerges when we assume that
there is a program that computes the halting function. Try to formulate
the paradox without mentioning the halting tester!


> So in the same
> way that a C++ compiler would fail to correctly compile an English Poem,
> some input to halt testers will not result in correct output.

An English Poem is a byte string that exists but breaks the C++ language
rules. The paradoxical input in the halting proof is not like that. The
paradoxical input does not break the language rules. The problem with
the paradoxical input is that part of it does not exist, and thus the
paradoxical input does not exist.


> The Halting Problem shows no actual limit to computation. That an
> impossible thing can not be accomplished does not form any actual limit.
> To say that the halting problem places a limit on computation would be
> like saying that square circles place a limit on geometry.

Precisely the opposite. We have a well-defined function that lacks a
program that computes it. That is an actual limit to computation. And,
like many have said, there are others. The Busy Beaver function is a
good example.


(From another posting)
> I am allocating the name "ill-formed" to describe any question that
> inherently defines itself as having no possible correct answer.

First, that allocation is different from the common one. The question
"find the x such that 2x+3 = 11" is very common. It asks a student to
solve an equation. The student is expected to reply "x = 4". The
question "find the x such that x = x+1" also asks the student to solve
an equation. The question has no possible correct answer, but the
student is not expected to reply "this is an ill-formed question" but
"this equation does not have a solution".

Second, what is the ill-formed question in the halting tester issue? The
question "is there a halting tester" has been answered "no" by all of
us, so it does have a possible correct answer. The question "what would
a halting tester reply, if it were given as both arguments the code that
consists of itself surrounded by a piece of code that does the opposite
of the reply" has no possible correct answer. The reason for that is
that the halting tester does not exist. There is no x that satisfies x =
x+1, and there is no halting tester.


> I see this as apt because to look at these things otherwise would
> seem to indicate that there is an actual limitation to computation.
> If the only limitation to computation is that algorithms can not
> accomplish what has been defined as impossible, this is no actual
> limitation at all. Because of this I describe these impossible
> questions as ill-formed.

This argumentation yields that any result in mathematics is no result at
all, because it was built in the definitions. For instance, it yields
the following obviously bad conclusion: the fact that the speed of light
cannot be exceeded is no actual limitation at all, because it was built
in the axioms of the theory of relativity.

The impossibility of the halting tester was built in the definitions
only in the same sense as every mathematical result is built in the
definitions. The definitions were built to model our intuition on
computation. It so happened that the definitions imply uncomputability.
To get out of this, we would have to find where our intuition about
computation has been wrong and change the definitions accordingly.
However, this approach is likely to fail, because computation has been
formalized in a number of radically different ways, but it later turned
out that they all yield the same notion (and all have uncomputability).


> The specific concrete example that you did not respond to is crucial
> to fully appreciating my position.

Do you mean this one?

> Do square circles not exist because they don't exist, or do they not
> exist because they require the simultaneous presence of mutually
> exclusive properties?

I am happy enough with "they require the simultaneous presence of
mutually exclusive properties". And I am happy with the idea that
halting testers do not exist for a similar reason. But the "mutually
exclusive properties" are properties of the halting tester, not of the
halting function. It never was elements from the set of all possible
inputs that was ill-formed, it was the halt tester itself that was
ill-formed.


--- Antti Valmari ---

0
Antti
12/11/2013 4:14:44 PM
On 12/11/2013 9:42 AM, Peter Olcott wrote:
> On 12/11/2013 9:20 AM, Andy Walker wrote:
>> On 11/12/13 09:53, Peter Olcott wrote:
>>> [...] So
>>> in the same way that a C++ compiler would fail to correctly compile
>>> an English Poem,
>>
>>      Whoa!  The correct behaviour for a C++ compiler fed with an
>> English Poem [assuming that this is not also accidentally a correct
>> C++ program] is to print an appropriate error message and then halt
>> without producing object code.  A correct compiler should not, for
>> any source code, either fail to run to completion or produce object
>> code that does not conform to the language specification.  But I
>> don't know of any compiler [for C++ or any other language] on my
>> current machine which fails this test for "an English Poem".  If
>> one does, I would describe this as a serious bug.
>>
>>>           some input to halt testers will not result in
>>> correct output.
>>
>>      There can be no correct halt testers, whereas there is no
>> reason to suppose that there can be no correct C++ compiler.  There
>> is an intrinsic difference between a purported halt tester [which
>> demonstrably produces incorrect output on some inputs] and a
>> purported C++ compiler which happens, by reason of a bug, to produce
>> incorrect object code.
>>
>
> You removed so much of what I said that what you are responding to no
> longer corresponds to what I said.
http://en.wikipedia.org/wiki/Cherry_picking_(fallacy)
0
Peter
12/11/2013 4:49:23 PM
On 12/11/2013 10:14 AM, Antti Valmari wrote:
> On 12/11/13 11:53, Peter Olcott wrote:
>> It never was the halt tester itself that was ill-formed, it was elements
>> from the set of all possible inputs that was ill-formed.
>
> Precisely the opposite. The halting function is well-defined, exists,
> and yields no paradoxes. The paradox only emerges when we assume that
> there is a program that computes the halting function. Try to formulate
> the paradox without mentioning the halting tester!
>
I say that the halt tester is not ill-formed and you refute this by 
agreeing with me.

In software engineering the halting tester is another word for the halt 
function. You are probably speaking from a math perspective that I am 
not sufficiently familiar with.

>
>> So in the same
>> way that a C++ compiler would fail to correctly compile an English Poem,
>> some input to halt testers will not result in correct output.
>
> An English Poem is a byte string that exists but breaks the C++ language
> rules. The paradoxical input in the halting proof is not like that. The
> paradoxical input does not break the language rules.

Yet you essentially agree with what I said above that the input is the 
issue.

> The problem with
> the paradoxical input is that part of it does not exist, and thus the
> paradoxical input does not exist.
>
>
>> The Halting Problem shows no actual limit to computation. That an
>> impossible thing can not be accomplished does not form any actual limit.
>> To say that the halting problem places a limit on computation would be
>> like saying that square circles place a limit on geometry.
>
> Precisely the opposite. We have a well-defined function that lacks a
> program that computes it. That is an actual limit to computation. And,
> like many have said, there are others. The Busy Beaver function is a
> good example.
>
>
> (From another posting)
>> I am allocating the name "ill-formed" to describe any question that
>> inherently defines itself as having no possible correct answer.
>
> First, that allocation is different from the common one.

What is the common meaning of ill-formed question?

> The question
> "find the x such that 2x+3 = 11" is very common. It asks a student to
> solve an equation. The student is expected to reply "x = 4". The
> question "find the x such that x = x+1" also asks the student to solve
> an equation. The question has no possible correct answer, but the
> student is not expected to reply "this is an ill-formed question" but
> "this equation does not have a solution".

Yes I agree, unless the answer is restricted to be numeric. At this 
point it transforms from a question with no possible answer (an 
impossible question) to an ill-formed question.

http://en.wikipedia.org/wiki/History_of_type_theory#G.C3.B6del_1944
sentences of the form: " a has the property φ ", " b bears the relation 
R to c ", etc. are meaningless, if a, b, c, R, φ are not of types 
fitting together.

What is the length of the color** of your car?
**color defined as sensory visual perception and not a wavelength of a 
spectrum of light.

>
> Second, what is the ill-formed question in the halting tester issue? The
> question "is there a halting tester" has been answered "no" by all of
> us, so it does have a possible correct answer. The question "what would
> a halting tester reply, if it were given as both arguments the code that
> consists of itself surrounded by a piece of code that does the opposite
> of the reply" has no possible correct answer. The reason for that is
> that the halting tester does not exist. There is no x that satisfies x =
> x+1, and there is no halting tester.
>
>
>> I see this as apt because to look at these things otherwise would
>> seem to indicate that there is an actual limitation to computation.
>> If the only limitation to computation is that algorithms can not
>> accomplish what has been defined as impossible, this is no actual
>> limitation at all. Because of this I describe these impossible
>> questions as ill-formed.
>
> This argumentation yields that any result in mathematics is no result at
> all, because it was built in the definitions. For instance, it yields
> the following obviously bad conclusion: the fact that the speed of light
> cannot be exceeded is no actual limitation at all, because it was built
> in the axioms of the theory of relativity.
>
> The impossibility of the halting tester was built in the definitions
> only in the same sense as every mathematical result is built in the
> definitions. The definitions were built to model our intuition on
> computation. It so happened that the definitions imply uncomputability.
> To get out of this, we would have to find where our intuition about
> computation has been wrong and change the definitions accordingly.
> However, this approach is likely to fail, because computation has been
> formalized in a number of radically different ways, but it later turned
> out that they all yield the same notion (and all have uncomputability).
>
>
>> The specific concrete example that you did not respond to is crucial
>> to fully appreciating my position.
>
> Do you mean this one?
>
>> Do square circles not exist because they don't exist, or do they not
>> exist because they require the simultaneous presence of mutually
>> exclusive properties?

No, I mean this pair:

Asking you:
1) Can you correctly answer this question? (referring to itself)
(neither answer is correct)

Asking someone else:
2) Can Antti Valmari correctly answer this question:
    Can you correctly answer this question? (referring to itself)
(The correct answer is "no")

The analog to the Halting Problem is that the second level of 
indirection of (2) has the clear answer of "no" when someone outside of 
the potential halt tester's perspective is asked "Can the Halting 
Problem be solved?"

The first level of indirection corresponds to question (1), where the 
potential halt tester is asked "does this program P halt on input I?"

>
> I am happy enough with "they require the simultaneous presence of
> mutually exclusive properties". And I am happy with the idea that
> halting testers do not exist for a similar reason.

It is great that we got this far.

> But the "mutually
> exclusive properties" are properties of the halting tester, not of the
> halting function.
> It never was elements from the set of all possible
> inputs that was ill-formed, it was the halt tester itself that was
> ill-formed.

It seems that you just contradicted yourself here.
What did you mean by:
 > The paradoxical input in the halting proof...


>
>
> --- Antti Valmari ---
>

0
Peter
12/11/2013 6:13:35 PM
On 12/11/2013 12:01 PM, DKleinecke wrote:
> On Wednesday, December 11, 2013 4:22:12 AM UTC-8, Peter Olcott wrote:
>
>> By {concept} I mean something like the above noun when this {concept} is
>> represented using words. When I use terms I try to stick with their
>> common meaning, or point out the alternative sense meaning that I am
>> referring to.
>
>> By connections between conceptions I am referring to every way that two
>> or more conceptions might be related to each other. Examples of
>> connections between conceptions:
>
>> (curly braces indicate exhaustively elaborated meaning postulates)
>
>> a) {Human} is a {Type_Of} {Living_Being}.
>> b) {False}   = {True} + {Negation}
>> c) {Walked}  = {Walk} + {Past_Tense}
>> d) {Boats}   = {Boat} + {Plural}
>> e) {Type_Of} = {Derived_Type_Relation} of {Inheritance_Hierarchy}
>> f) "+" {Combined_With} Operator
>
> How do you choose which term is the head? Why not
>       {true} = {false} + {anti-negation} ?

The one of minimal complexity is chosen.

>
> There are languages where what translates as past-tense is morphologically simpler than whatever translates as present-tense. As an aside there are languages (Arabic for example) that do not have clear time-based tense systems.
> Do you believe that you can base your ontology studies on English alone?
>
> There are languages (Arabic again) where collectives (which are semantically plural regardless of the formal concord) form singulars by obvious morphological changes. From South American examples (about the animals)
>       {bat} = {bats} + {individual}
>

These language level distinctions are abstracted out of the pure 
conceptions within the ontology. Every language must have some way of 
representing that something occurred prior to now, and every language 
would have to have some way of saying I have more than one of these 
things. I used English for my examples because that is the language that 
I know.

> Chomsky once took the position that since all human language stemmed from the same human linguistic facility it was only necessary to study one human
> language. Very few other people agreed and I feel sure Chomsky no longer holds such a position.
>
>

0
Peter
12/11/2013 6:26:12 PM
On 12/11/2013 10:14 AM, Antti Valmari wrote:
> On 12/11/13 11:53, Peter Olcott wrote:
>> It never was the halt tester itself that was ill-formed, it was elements
>> from the set of all possible inputs that was ill-formed.
>
> Precisely the opposite. The halting function is well-defined, exists,
> and yields no paradoxes. The paradox only emerges when we assume that
> there is a program that computes the halting function. Try to formulate
> the paradox without mentioning the halting tester!
>
>
>> So in the same
>> way that a C++ compiler would fail to correctly compile an English Poem,
>> some input to halt testers will not result in correct output.
>
> An English Poem is a byte string that exists but breaks the C++ language
> rules. The paradoxical input in the halting proof is not like that. The
> paradoxical input does not break the language rules. The problem with
> the paradoxical input is that part of it does not exist, and thus the
> paradoxical input does not exist.
>
>
>> The Halting Problem shows no actual limit to computation. That an
>> impossible thing can not be accomplished does not form any actual limit.
>> To say that the halting problem places a limit on computation would be
>> like saying that square circles place a limit on geometry.
>
> Precisely the opposite. We have a well-defined function that lacks a
> program that computes it. That is an actual limit to computation. And,
> like many have said, there are others. The Busy Beaver function is a
> good example.
>
>
> (From another posting)
>> I am allocating the name "ill-formed" to describe any question that
>> inherently defines itself as having no possible correct answer.
>
> First, that allocation is different from the common one. The question
> "find the x such that 2x+3 = 11" is very common. It asks a student to
> solve an equation. The student is expected to reply "x = 4". The
> question "find the x such that x = x+1" also asks the student to solve
> an equation. The question has no possible correct answer, but the
> student is not expected to reply "this is an ill-formed question" but
> "this equation does not have a solution".
>

This is a very well-construed distinction.  To go just
a step "deeper" I would say that

"The solution set for this equation is empty"

One philosophical difference between set theory and
mereology is the "existence" of an empty set.  This
distinction reflects certain historical trends.

When Frege constructed his ideas, his semantics had
been essentially negative free logic.  But, empty
statements -- that is, those that are fictional or
those that are self-contradictory -- are secondarily
mapped to false statements in order to make the
semantics bivalent.

This disappeared, in large part, because of Russell's
objections.  It reappears with Tarski with the added
provision that the language be *formalized*.  The
question of "ill-formed statements" is addressed before
the logic is applied.

In mereology, the empty set is rejected before the
philosophical system is formalized.  Questions of what
is and what is not well-formed constitute methodological
practice.

In order to describe abstract structures generally,
one needs variables or schematic terms.  So, one
obtains well-formed statements with no answer such
as

"find the x such that x = x+1"

The general description of a Turing machine is well-formed
in its specification.  So, one must expect situations
for which a solution set is empty.

> Second, what is the ill-formed question in the halting tester issue? The
> question "is there a halting tester" has been answered "no" by all of
> us, so it does have a possible correct answer. The question "what would
> a halting tester reply, if it were given as both arguments the code that
> consists of itself surrounded by a piece of code that does the opposite
> of the reply" has no possible correct answer. The reason for that is
> that the halting tester does not exist. There is no x that satisfies x =
> x+1, and there is no halting tester.
>
>
>> I see this as apt because to look at these things otherwise would
>> seem to indicate that there is an actual limitation to computation.
>> If the only limitation to computation is that algorithms can not
>> accomplish what has been defined as impossible, this is no actual
>> limitation at all. Because of this I describe these impossible
>> questions as ill-formed.
>
> This argumentation yields that any result in mathematics is no result at
> all, because it was built in the definitions. For instance, it yields
> the following obviously bad conclusion: the fact that the speed of light
> cannot be exceeded is no actual limitation at all, because it was built
> in the axioms of the theory of relativity.
>
> The impossibility of the halting tester was built in the definitions
> only in the same sense as every mathematical result is built in the
> definitions. The definitions were built to model our intuition on
> computation. It so happened that the definitions imply uncomputability.
>> To get out of this, we would have to find where our intuition about
> computation has been wrong and change the definitions accordingly.
> However, this approach is likely to fail, because computation has been
> formalized in a number of radically different ways, but it later turned
> out that they all yield the same notion (and all have uncomputability).
>
>
>> The specific concrete example that you did not respond to is crucial
>> to fully appreciating my position.
>
> Do you mean this one?
>
>> Do square circles not exist because they don't exist, or do they not
>> exist because they require the simultaneous presence of mutually
>> exclusive properties?
>
> I am happy enough with "they require the simultaneous presence of
> mutually exclusive properties". And I am happy with the idea that
> halting testers do not exist for a similar reason. But the "mutually
> exclusive properties" are properties of the halting tester, not of the
> halting function. It never was elements from the set of all possible
> inputs that was ill-formed, it was the halt tester itself that was
> ill-formed.
>

Ah. So, you ended up in the same place with comparison
to self-contradictory descriptions.

:-)




0
fom
12/11/2013 6:39:16 PM
On 11/12/13 16:49, Peter Olcott wrote:
> On 12/11/2013 9:42 AM, Peter Olcott wrote:
>> You removed so much of what I said that what you are responding to no
>> longer corresponds to what I said.

	I remind you that all recent articles are nearby, and that netiquette
requires that replies be trimmed.  I removed only parts of your article that
I had no interest in responding to.

> http://en.wikipedia.org/wiki/Cherry_picking_(fallacy)

	Then here is your complete paragraph, re-formatted and with lines
numbered:

  1  " It never was the halt tester itself that was ill-formed,
  2  " it was elements from the set of all possible inputs that was ill-formed.
  3  "  So in the same way that a C++ compiler would fail to correctly compile
  4  " an English Poem,
  5  " some input to halt testers will not result in correct output. "

Line 1:  I see no point debating whether something that does not exist is
   ill-formed or not.
Line 2:  Antti's function "P(X,Y)" [and I agree with him] accepts X and Y
   as arbitrary strings of bytes, and a correct halt tester [if only such
   a thing existed] would be required to do likewise.  I don't see how an
   arbitrary string can be ill-formed;  but again, I see no point debating
   what a non-existent program should do with such input.
Lines 3&4:  As per my previous article, a C++ compiler that does not
   correctly compile an English Poem [by halting after emitting a diagnostic
   message] has a bug which can, and should, be corrected;  no C++ compiler
   known to me has this bug.
Line 5:  Again, you are claiming things about something that does not
   exist, and it is not "the same way";  as a buggy purported compiler can
   be corrected but a purported halt tester cannot.

-- 
Andy Walker,
Nottingham.
0
Andy
12/11/2013 8:21:01 PM
On 12/11/2013 2:21 PM, Andy Walker wrote:
> On 11/12/13 16:49, Peter Olcott wrote:
>> On 12/11/2013 9:42 AM, Peter Olcott wrote:
>>> You removed too much of what I said such that what you are responding to
>>> does not correspond to what I said.
>
>      I remind you that all recent articles are nearby, and that netiquette
> requires that replies be trimmed.  I removed only parts of your article
> that
> I had no interest in responding to.
>
>> http://en.wikipedia.org/wiki/Cherry_picking_(fallacy)
>
>      Then here is your complete paragraph, re-formatted and with lines
> numbered:
>
>   1  " It never was the halt tester itself that was ill-formed,
>   2  " it was elements from the set of all possible inputs that was
> ill-formed.
>   3  "  So in the same way that a C++ compiler would fail to correctly
> compile
>   4  " an English Poem,
>   5  " some input to halt testers will not result in correct output. "
>
> Line 1:  I see no point debating whether something that does not exist is
>    ill-formed or not.
> Line 2:  Antti's function "P(X,Y)" [and I agree with him] accepts X and Y
>    as arbitrary strings of bytes, and a correct halt tester [if only such
>    a thing existed] would be required to do likewise.  I don't see how an
>    arbitrary string can be ill-formed;  but again, I see no point debating
>    what a non-existent program should do with such input.
> Lines 3&4:  As per my previous article, a C++ compiler that does not
>    correctly compile an English Poem [by halting after emitting a
> diagnostic
>    message] has a bug which can, and should, be corrected;  no C++ compiler
>    known to me has this bug.
> Line 5:  Again, you are claiming things about something that does not
>    exist, and it is not "the same way";  as a buggy purported compiler can
>    be corrected but a purported halt tester cannot.
>

These are the words that you missed:

On 12/10/2013 8:14 AM, Ben Bacarisse wrote:
"They contain, in their very definitions,
the reasons why they are impossible."

The Halting Problem shows no actual limit to computation. That an 
impossible thing can not be accomplished does not form any actual limit. 
To say that the halting problem places a limit on computation would be 
like saying that square circles place a limit on geometry.


0
Peter
12/11/2013 8:37:44 PM
Peter Olcott <OCR4Screen> writes:

> On 12/10/2013 9:30 PM, Ben Bacarisse wrote:
>> Peter Olcott <OCR4Screen> writes:
>>
>>> On 12/10/2013 2:56 PM, Ben Bacarisse wrote:
>>>> Peter Olcott <OCR4Screen> writes:
>>>>
>>>>> On 12/10/2013 8:14 AM, Ben Bacarisse wrote:
>>>> <snip>
>>>>>> Of course, in one way Peter is absolutely correct!  The decision
>>>>>> problems of arithmetic and halting and so on are, in some sense,
>>>>>> inherently problematic.  They contain, in their very definitions, the
>>>>>> reasons why they are impossible.
>>>>> This is the essence of my whole point.
>>>>>
>>>>>> Unfortunately it is not because they are ill-formed or paradoxical,
>>>>> This is where you seem to fail to fully understand what I am saying.
>>>> I understand you perfectly.  There is nothing unclear about the words
>>>> you use (in this instance at least) and you express yourself with
>>>> admirable clarity.  Unfortunately you are wrong.
>>> Thanks for the comment on clarity.
>>> When a person is defining a new term using existing meanings this new
>>> term is correct by tautology.
>>> If I say Let X = Y, no one is free to correctly say that X != Y.
>> If the term is well-known with an established meaning, you can't do that
>> and have any hope of communicating clearly.  You can't do it at all in a
>> medium like Usenet where context is not preserved.
>>
>>>>> I am allocating the name "ill-formed" to describe any question that
>>>>> inherently defines itself as having no possible correct answer.
>>>> But, as has been said so many times before, the question "is there a
>>>> halting tester" has a correct answer: no.  It is in the very nature of
>>>> what I said (to which you seemed to agree) that existence questions
>>>> about impossible things have well-defined answers -- always no.

>>> We already agree on the essence of my whole point:

>> Can I take it, then, that you agree that according to your definition of
>> "ill-formed" the question "is there a halting tester" is not
>> ill-formed?

This question hangs in the air, as yet unanswered.

> Your words:
> "They contain, in their very definitions,
> the reasons why they are impossible."

Is this supposed to answer my question?  If it is, I can't work out your
answer from it.  Is the question "is there a halting tester"
"ill-formed" in your sense of not having a "possible correct answer"?

> It never was the halt tester itself that was ill-formed, it was
> elements from the set of all possible inputs that was ill-formed.

No.  This is the same old stuff from last year and my reply has to be
the same: for all possible inputs, for every single one, there is a
correct answer.  You have never been able to give an example of a
question that is "ill-formed" in your sense that applies to halting.
You give examples from other domains, but not halting.  The reason is
simple -- there are none for you to give.  Every input represents a
simple question: does the input encode a machine/input pair such that
the machine will halt when given that input?  Every such question has a
correct yes/no answer.  The bigger question: "does a halting decider
exist?" also has a correct yes/no answer: no.

Last time you kept coming back with "what time is your hat" or "what
colour is Monday?" but eventually you have to say what is "ill-formed"
about halting, not hats.  Has your argument changed at all in the last
18 months?  It does not look like it.

> So
> in the same way that a C++ compiler would fail to correctly compile an
> English Poem, some input to halt testers will not result in correct
> output.

There is nothing impossible about the existence of a C++ compiler so I
wonder why you think there is any analogy.  C++ compilers exist but
halting testers don't, so saying that a halting tester will not result in
the correct output is like saying what the fairy at the bottom of your
garden says about some Turing machine.  You can't argue sensibly about
the properties of nonexistent things.

>>> Your words:
>>> "They contain, in their very definitions,
>>> the reasons why they are impossible."
>>>
>>> everything else is of much less consequence.
>> Then why have you come here?  Everyone here knows that the properties of
TMs -- in particular what can and cannot be computed by them -- follow
inexorably from the definitions involved.  Proofs are, after all,
>> syntactic things.  What is it that you think you have just discovered
>> that was worth posting about?

> The Halting Problem shows no actual limit to computation. That an
> impossible thing can not be accomplished does not form any actual
> limit. To say that the halting problem places a limit on computation
> would be like saying that square circles place a limit on geometry.

I think you should write this in big letters and pin it up in your house
in some prominent position.  Maybe it will make you feel better eventually
so that in a few years' time you no longer need to keep saying it here.

Your dogma is that a logical impossibility is not a limit.  That's a
re-definition of language on a massive scale, because as far as abstract
models of computation go, there is no other possible meaning of the term
"limit on computation".  The model is designed to remove all impediments
to computation other than those that follow logically from the notion of
computation itself.

What you are saying is that there is no limit on computation because the
only functions that can't be computed are the functions that can't be
computed.  Of course that does not sound right, and it won't quiet your
anxiety, so you have to label one half of that tautology with a label
like "ill-formed" which reassuringly carries meaning.

<snip>
You did not respond to this:

>> But if you really believe that words can be defined like this, here's a
>> test: why not call all such problems "undecidable" rather than
>> "ill-formed"?  It is, after all, just a word and any word is a good as
>> any other, provided the definition is clear.

You could prove that your choice of "ill-formed" is simply an arbitrary
definition by agreeing to use some other term.  Would some other term do
the job, or must you use a term that carries meaning beyond the
definition you give it?

<snip>
-- 
Ben.
0
Ben
12/11/2013 9:40:43 PM
On 11/12/13 20:37, Peter Olcott wrote:
> These are the words that you missed:
> On 12/10/2013 8:14 AM, Ben Bacarisse wrote:
> "They contain, in their very definitions,
> the reasons why they are impossible."

	I didn't miss them.  They represent a point of philosophy, of
little or no interest to me personally.

> The Halting Problem shows no actual limit to computation. That an
> impossible thing can not be accomplished does not form any actual
> limit.

	The demonstration that the HP is impossible to solve does
however show a limit to actual computation.  Moreover, this result
can be used to demonstrate a whole raft of related impossibilities.
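
	One standard way that raft is built is by reduction.  A rough
Python sketch, assuming a purely hypothetical decider prints_hello(p, a)
for "p(a) eventually prints hello":

def halts(prints_hello, prog, arg):
    # Wrap the machine under test so that "hello" is printed
    # exactly when prog(arg) halts.  (A fuller version would
    # also suppress any output from prog itself.)
    def wrapper(_):
        prog(arg)
        print("hello")
    return prints_hello(wrapper, None)

# A prints_hello decider would thus yield a halting decider;
# since the latter cannot exist, neither can the former.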

>	  To say that the halting problem places a limit on computation
> would be like saying that square circles place a limit on geometry.

	This is a category error.  Square circles, like halting
testers, do not exist.  The HP, like the question of whether a
circle can be made square, does exist as a well-formed problem.
The more important point is that many people would like to write
a halting tester [or, even more so, a program to solve one of the
related problems], many apparently-intelligent people have devoted
much time and expense to this quest [often even after being shown
a proof of its inevitable failure], and there is serious interest
in knowing how far along this quest progress can be made.  Square
circles have no such devotees.

-- 
Andy Walker,
Nottingham.
0
Andy
12/11/2013 9:40:55 PM
On 12/11/2013 3:40 PM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 12/10/2013 9:30 PM, Ben Bacarisse wrote:
>>> Peter Olcott <OCR4Screen> writes:
>>>
>>>> On 12/10/2013 2:56 PM, Ben Bacarisse wrote:
>>>>> Peter Olcott <OCR4Screen> writes:
>>>>>
>>>>>> On 12/10/2013 8:14 AM, Ben Bacarisse wrote:
>>>>> <snip>
>>>>>>> Of course, in one way Peter is absolutely correct!  The decision
>>>>>>> problems of arithmetic and halting and so on are, in some sense,
>>>>>>> inherently problematic.  They contain, in their very definitions, the
>>>>>>> reasons why they are impossible.
>>>>>> This is the essence of my whole point.
>>>>>>
>>>>>>> Unfortunately it is not because they are ill-formed or paradoxical,
>>>>>> This is where you seem to fail to fully understand what I am saying.
>>>>> I understand you perfectly.  There is nothing unclear about the words
>>>>> you use (in this instance at least) and you express yourself with
>>>>> admirable clarity.  Unfortunately you are wrong.
>>>> Thanks for the comment on clarity.
>>>> When a person is defining a new term using existing meanings this new
>>>> term is correct by tautology.
>>>> If I say Let X = Y, no one is free to correctly say that X != Y.
>>> If the term is well-known with an established meaning, you can't do that
>>> and have any hope of communicating clearly.  You can't do it at all in a
>>> medium like Usenet where context is not preserved.
>>>
>>>>>> I am allocating the name "ill-formed" to describe any question that
>>>>>> inherently defines itself as having no possible correct answer.
>>>>> But, as has been said so many times before, the question "is there a
>>>>> halting tester" has a correct answer: no.  It is in the very nature of
>>>>> what I said (to which you seemed to agree) that existence questions
>>>>> about impossible things have well-defined answers -- always no.
>>>> We already agree on the essence of my whole point:
>>> Can I take it, then, that you agree that according to your definition of
>>> "ill-formed" the question "is there a halting tester" is not
>>> ill-formed?
> This question hangs in the air, as yet unanswered.

That question would be very substantially insufficiently precise.

>
>> Your words:
>> "They contain, in their very definitions,
>> the reasons why they are impossible."
> Is this supposed to answer my question?  If it is, I can't work out your
> answer from it.  Is the question "is there a halting tester"
> "ill-formed" in your sense of not having a "possible correct answer"?
>
>> It never was the halt tester itself that was ill-formed, it was
>> elements from the set of all possible inputs that was ill-formed.
> No.  This is the same old stuff from last year and my reply has to be
> the same: for all possible inputs, for every single one, there is a
> correct answer.
You are still failing to explicitly acknowledge the two different levels of
indirection that change the fundamental meaning of the question:

Asking you:
1) Can you correctly answer this question? (referring to itself)
(neither answer is correct)

Asking someone else:
2) Can  Ben Bacarisse correctly answer this question:
    Can you correctly answer this question? (referring to itself)
(The correct answer is "no")

The analog to the Halting Problem is that the second level of 
indirection of (2) has the clear answer of "no" when someone outside of 
the potential halt tester's perspective is asked: Can the Halting 
Problem be solved?

The first level of indirection corresponds to question (1), where the 
potential halt tester is asked does this program P halt on input I?

<rest snipped>
0
Peter
12/11/2013 10:48:20 PM
Peter Olcott wrote:

> 1) Can you correctly answer this question? (referring to itself)
> (neither answer is correct)

I take it that 'neither answer' refers to 'yes' and 'no'.  Why is 
neither of them correct?


-- 
Madam Life's a piece in bloom,
Death goes dogging everywhere:
She's the tenant of the room,
He's the ruffian on the stair.
0
Peter
12/11/2013 10:57:59 PM
On 12/11/2013 3:40 PM, Andy Walker wrote:
> On 11/12/13 20:37, Peter Olcott wrote:
>> These are the words that you missed:
>> On 12/10/2013 8:14 AM, Ben Bacarisse wrote:
>> "They contain, in their very definitions,
>> the reasons why they are impossible."
>
>     I didn't miss them.  They represent a point of philosophy, of
> little or no interest to me personally.
>
>> The Halting Problem shows no actual limit to computation. That an
>> impossible thing can not be accomplished does not form any actual
>> limit.
>
>     The demonstration that the HP is impossible to solve does
> however show a limit to actual computation.  Moreover, this result
> can be used to demonstrate a whole raft of related impossibilities.
>
>>       To say that the halting problem places a limit on computation
>> would be like saying that square circles place a limit on geometry.
>
>     This is a category error.  Square circles, like halting
> testers, do not exist.  The HP, like the question of whether a
> circle can be made square, does exist as a well-formed problem.
So you agree that they are analogous?

> The more important point is that many people would like to write
> a halting tester [or, even more so, a program to solve one of the
> related problems], many apparently-intelligent people have devoted
> much time and expense to this quest [often even after being shown
> a proof of its inevitable failure], and there is serious interest
> in knowing how far along this quest progress can be made.  Square
> circles have no such devotees.
>
My devotion is to grok knowledge representation (KR) sufficiently that 
all the remaining gaps of the comprehension of comprehension can be filled.

Within this goal it does seem that the field of formal semantics within 
linguistics is enormously further along than the knowledge 
representation (KR) field of computer science.

One is starting from reality and trying to increasingly understand how 
the set of all concepts connect together and the other is constantly 
redesigning toy systems to achieve tiny little increments of insight.
0
Peter
12/11/2013 11:00:55 PM
On 11/12/13 23:00, Peter Olcott wrote:
>>>	To say that the halting problem places a limit on computation
>>> would be like saying that square circles place a limit on geometry.
>>	This is a category error.  Square circles, like halting
>> testers, do not exist.  The HP, like the question of whether a
>> circle can be made square, does exist as a well-formed problem.
> So you agree that they are analogous?

	If "they" means the HP and square circles, then no;  as I
said, that would be a category error.  Square circles are analogous
to halting testers;  and the problem of the existence of square
circles is analogous to the HP.  But problems of the form "Does X
exist?" for various "X" are not analogous to the various "X".

-- 
Andy Walker,
Nottingham.
0
Andy
12/11/2013 11:45:21 PM
On 12/11/2013 4:57 PM, Peter Percival wrote:
> Peter Olcott wrote:
>
>> 1) Can you correctly answer this question? (referring to itself)
>> (neither answer is correct)
>
> I take it that 'neither answer' refers to 'yes' and 'no'.  Why is 
> neither of them correct?
>
>
If you say "no" you just correctly answered the question so "no" is 
incorrect.
If you say "yes" you answered the question, yet this answer is not correct.
0
Peter
12/11/2013 11:55:30 PM
On 12/11/2013 5:45 PM, Andy Walker wrote:
> On 11/12/13 23:00, Peter Olcott wrote:
>>>>     To say that the halting problem places a limit on computation
>>>> would be like saying that square circles place a limit on geometry.
>>>     This is a category error.  Square circles, like halting
>>> testers, do not exist.  The HP, like the question of whether a
>>> circle can be made square, does exist as a well-formed problem.
>> So you agree that they are analogous?
>
>     If "they" means the HP and square circles, then no;  as I
> said, that would be a category error. 

> Square circles are analogous
> to halting testers;  and the problem of the existence of square
> circles is analogous to the HP. 

So good we agree on this crucial point:
The halting decider is defined in such a way as to prevent it from 
possibly existing.
0
Peter
12/12/2013 12:01:20 AM
Peter Olcott <OCR4Screen> writes:

> On 12/11/2013 3:40 PM, Ben Bacarisse wrote:
>> Peter Olcott <OCR4Screen> writes:
>>
>>> On 12/10/2013 9:30 PM, Ben Bacarisse wrote:
>>>> Peter Olcott <OCR4Screen> writes:
>>>>
>>>>> On 12/10/2013 2:56 PM, Ben Bacarisse wrote:
<snip>
>>>> Can I take it, then, that you agree that according to your definition of
>>>> "ill-formed" the question "is there a halting tester" is not
>>>> ill-formed?
>> This question hangs in the air, as yet unanswered.
>
> That question would be very substantially insufficiently precise.

This means that after at least 18 months of focusing on this matter you
can't tell if your own definition applies to this question or not.  And
it's not a tangential question, it's the question at the heart of the
topic you've been focused on.

>>> Your words:
>>> "They contain, in their very definitions,
>>> the reasons why they are impossible."
>> Is this supposed to answer my question?  If it is, I can't work out your
>> answer from it.  Is the question "is there a halting tester"
>> "ill-formed" in your sense of not having a "possible correct answer"?
>>
>>> It never was the halt tester itself that was ill-formed, it was
>>> elements from the set of all possible inputs that was ill-formed.
>> No.  This is the same old stuff from last year and my reply has to be
>> the same: for all possible inputs, for every single one, there is a
>> correct answer.

> You are still failing to explicitly acknowledge the two different levels of
> indirection that change the fundamental meaning of the question:
>
> Asking you:
> 1) Can you correctly answer this question? (referring to itself)
> (neither answer is correct)
>
> Asking someone else:
> 2) Can  Ben Bacarisse correctly answer this question:
>    Can you correctly answer this question? (referring to itself)
> (The correct answer is "no")
>
> The analog to the Halting Problem is that the second level of
> indirection of (2) has the clear answer of "no" when someone outside
> of the potential halt tester's perspective is asked: Can the Halting
> Problem be solved?
>
> The first level of indirection corresponds to question (1), where the
> potential halt tester is asked does this program P halt on input I?

Same as 18 months ago, I see.  There are so many things wrong with this
analogy that it would take hours to unravel it.

It would be simpler if you just stated the (or a) question that is
"ill-formed" by your definition.  The one (or one of the ones) that is
actually involved in halting -- not in some pretend situation about
people.  It can't be the one your analogy suggests -- "does this program
P halt on input I?" because that has a correct answer.  You are trying
to link this question to a particular machine with the phrase "where the
potential halt tester is asked..." but you fall just short of posing it
as a question.

Every question I try to extract from this analogy fails to be
"ill-formed" (by your definition) for the simple reason that halting
testers don't exist.  Therefore you can't find one to ask the supposedly
paradoxical question (this is pretty much the same point made by Antti
Valmari).  There is nothing paradoxical about presenting a TM that can't
decide halting with some input -- even about itself.  If the input is
intended to represent a halting instance, the random TM might get the
answer right, or it might get it wrong, but since no one can claim that
it is always right about halting instances, there can't be a paradox.

-- 
Ben.
0
Ben
12/12/2013 1:31:30 AM
On 12/11/2013 7:31 PM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 12/11/2013 3:40 PM, Ben Bacarisse wrote:
>>> Peter Olcott <OCR4Screen> writes:
>>>
>>>> On 12/10/2013 9:30 PM, Ben Bacarisse wrote:
>>>>> Peter Olcott <OCR4Screen> writes:
>>>>>
>>>>>> On 12/10/2013 2:56 PM, Ben Bacarisse wrote:
> <snip>
>>>>> Can I take it, then, that you agree that according to your definition of
>>>>> "ill-formed" the question "is there a halting tester" is not
>>>>> ill-formed?
>>> This question hangs in the air, as yet unanswered.
>> That question would be very substantially insufficiently precise.
> This means that after at least 18 months of focusing on this matter you
> can't tell if your own definition applies to this question or not.  And
> it's not a tangential question, it's the question at the heart of the
> topic you've been focused on.
>
>>>> Your words:
>>>> "They contain, in their very definitions,
>>>> the reasons why they are impossible."
>>> Is this supposed to answer my question?  If it is, I can't work out your
>>> answer from it.  Is the question "is there a halting tester"
>>> "ill-formed" in your sense of not having a "possible correct answer"?
>>>
>>>> It never was the halt tester itself that was ill-formed, it was
>>>> elements from the set of all possible inputs that was ill-formed.
>>> No.  This is the same old stuff from last year and my reply has to be
>>> the same: for all possible inputs, for every single one, there is a
>>> correct answer.
>> You are still failing to explicitly acknowledge the two different levels of
>> indirection that change the fundamental meaning of the question:
>>
>> Asking you:
>> 1) Can you correctly answer this question? (referring to itself)
>> (neither answer is correct)
>>
>> Asking someone else:
>> 2) Can  Ben Bacarisse correctly answer this question:
>>     Can you correctly answer this question? (referring to itself)
>> (The correct answer is "no")
>>
>> The analog to the Halting Problem is that the second level of
>> indirection of (2) has the clear answer of "no" when someone outside
>> of the potential halt tester's perspective is asked: Can the Halting
>> Problem be solved?
>>
>> The first level of indirection corresponds to question (1), where the
>> potential halt tester is asked does this program P halt on input I?
> Same as 18 months ago, I see.  There are so many things wrong with this
> analogy that it would take hours to unravel it.
Let's keep it simple:
Can you correctly answer this question? (referring to itself)
0
Peter
12/12/2013 1:56:51 AM
Peter Olcott <OCR4Screen> writes:

> On 12/11/2013 5:45 PM, Andy Walker wrote:
>> On 11/12/13 23:00, Peter Olcott wrote:
>>>>>     To say that the halting problem places a limit on computation
>>>>> would be like saying that square circles place a limit on geometry.
>>>>     This is a category error.  Square circles, like halting
>>>> testers, do not exist.  The HP, like the question of whether a
>>>> circle can be made square, does exist as a well-formed problem.
>>> So you agree that they are analogous?
>>
>>     If "they" means the HP and square circles, then no;  as I
>> said, that would be a category error. 
>
>> Square circles are analogous
>> to halting testers;  and the problem of the existence of square
>> circles is analogous to the HP. 
>
> So good we agree on this crucial point:
> The halting decider is defined in such a way as to prevent it from
> possibly existing.

It's interesting how readily you reveal your motivation through your
choice of words.  There is implied intent in the word "prevent".  There
is indignation in the force of "possibly existing".  There is a
suggestion that it's all arbitrary with "defined in such a way".  And
there is an attempt to deflect the reader from the problem (halting) and
onto some presumably unrealistic solution to it ("the halting decider").
What you care about is not that halting can't be decided, but that it
should look like something deliberately done to spoil the party.

Halting is a natural question to ask, once you have a model of
computation where it is not an obvious property of all computations.  How
else could halting be defined?

Anyway, we are not all meanies.  There are models of computation in
which halting is decidable, so you could always assume that your God
uses one of those.  Then you would not need to persuade yourself that
something being impossible does not constitute a limit on what can be
done.
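
For instance, in any model whose machines have only finitely many
possible configurations (fixed, bounded memory), halting is decidable
by cycle detection.  A rough Python sketch, with step standing in for
a machine's transition function:

def halts_bounded(step, config):
    # step(config) returns the next configuration, or None on halt.
    # With finitely many configurations, every run either halts or
    # revisits a configuration, so this loop always terminates.
    seen = set()
    while config is not None:
        if config in seen:
            return False      # repeated configuration: it loops forever
        seen.add(config)
        config = step(config)
    return True               # step returned None: the machine halted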

-- 
Ben.
0
Ben
12/12/2013 2:02:02 AM
Peter Olcott <OCR4Screen> writes:

> On 12/11/2013 7:31 PM, Ben Bacarisse wrote:
>> Peter Olcott <OCR4Screen> writes:
<snip>
>>> You are still failing to explicitly acknowledge the two different levels of
>>> indirection that change the fundamental meaning of the question:
>>>
>>> Asking you:
>>> 1) Can you correctly answer this question? (referring to itself)
>>> (neither answer is correct)
>>>
>>> Asking someone else:
>>> 2) Can  Ben Bacarisse correctly answer this question:
>>>     Can you correctly answer this question? (referring to itself)
>>> (The correct answer is "no")
>>>
>>> The analog to the Halting Problem is that the second level of
>>> indirection of (2) has the clear answer of "no" when someone outside
>>> of the potential halt tester's perspective is asked: Can the Halting
>>> Problem be solved?
>>>
>>> The first level of indirection corresponds to question (1), where the
>>> potential halt tester is asked does this program P halt on input I?

>> Same as 18 months ago, I see.  There are so many things wrong with this
>> analogy that it would take hours to unravel it.

> Let's keep it simple:
> Can you correctly answer this question? (referring to itself)

Yes.

-- 
Ben.
0
Ben
12/12/2013 2:09:06 AM
Peter Olcott <OCR4Screen> writes:

> On 12/11/2013 7:31 PM, Ben Bacarisse wrote:
>> Peter Olcott <OCR4Screen> writes:
<snip>
>>> You are still failing to explicitly acknowledge the two different levels of
>>> indirection that change the fundamental meaning of the question:
>>>
>>> Asking you:
>>> 1) Can you correctly answer this question? (referring to itself)
>>> (neither answer is correct)
>>>
>>> Asking someone else:
>>> 2) Can  Ben Bacarisse correctly answer this question:
>>>     Can you correctly answer this question? (referring to itself)
>>> (The correct answer is "no")
>>>
>>> The analog to the Halting Problem is that the second level of
>>> indirection of (2) has the clear answer of "no" when someone outside
>>> of the potential halt tester's perspective is asked: Can the Halting
>>> Problem be solved?
>>>
>>> The first level of indirection corresponds to question (1), where the
>>> potential halt tester is asked does this program P halt on input I?

>> Same as 18 months ago, I see.  There are so many things wrong with this
>> analogy that it would take hours to unravel it.

> Let's keep it simple:
> Can you correctly answer this question? (referring to itself)

42.

-- 
Ben.
0
Ben
12/12/2013 2:10:44 AM
On 12/11/2013 8:10 PM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 12/11/2013 7:31 PM, Ben Bacarisse wrote:
>>> Peter Olcott <OCR4Screen> writes:
> <snip>
>>>> You are still failing to explicitly acknowledge the two different levels of
>>>> indirection that change the fundamental meaning of the question:
>>>>
>>>> Asking you:
>>>> 1) Can you correctly answer this question? (referring to itself)
>>>> (neither answer is correct)
>>>>
>>>> Asking someone else:
>>>> 2) Can  Ben Bacarisse correctly answer this question:
>>>>      Can you correctly answer this question? (referring to itself)
>>>> (The correct answer is "no")
>>>>
>>>> The analog to the Halting Problem is that the second level of
>>>> indirection of (2) has the clear answer of "no" when someone outside
>>>> of the potential halt tester's perspective is asked: Can the Halting
>>>> Problem be solved?
>>>>
>>>> The first level of indirection corresponds to question (1), where the
>>>> potential halt tester is asked does this program P halt on input I?
>
>>> Same as 18 months ago, I see.  There are so many things wrong with this
>>> analogy that it would take hours to unravel it.
>
>> Let's keep it simple:
>> Can you correctly answer this question? (referring to itself)
>
> 42.
>

42 is always the answer.

chuckle




0
fom
12/12/2013 3:28:26 AM
On 12/12/2013 4:02 AM, Peter Olcott wrote:
> On 12/12/2013 1:52 AM, Franz Gnaedinger wrote:
>> On Wednesday, December 11, 2013 12:56:40 PM UTC+1, Peter Olcott wrote:
>>> 1) [Individuals]
>>>
>>> 2) [Relations]
>>>
>>> [Properties of relations] and [properties of individuals] would both be
>>> a subtype of [relations].
>>>
>>> As Franz pointed out, the term [Individual] is not the most apt. It is
>>> applying a term with a preexisting meaning to a concept that does not
>>> quite fit.
>> Why not use the right words instead of their uncles,
>> as Mark Twain said?
>>
>>    1) entities
>>    2) attributes
>>    3) relations
>>
>> This would be a valid triple of grammatical categories.
>> However, the treasure of knowledge and information
>> contained in and conveyed by language can't be retrieved
>> with one grammar alone. There are various grammars,
>> each one shedding light on special aspects the others miss.
>> For example, standard grammar has no way to judge the
>> position of words within a sentence, provided they are
>> free, as in Latin or Greek. Pater Ludwig Ruhstaller
>> OSB developed a revolutionary grammar based on functors
>> and arguments which can be represented as budding circles
>> - the functors being centers of circles and the arguments
>> points on the circumference that can become the centers
>> of further circles - and which allows one to draw up tension
>> diagrams, as explained on an old page of mine
>>
>>    http://www.seshat.ch/home/grammar.htm
>>
>> In my opinion, only many grammars combined can do language
>> justice, comparable to the several brain areas that achieve
>> language in the mind. I'd advise you to develop
>> one such grammar and really begin doing something, writing
>> a program, freeing yourself from the bondage of completeness.
>> You can do both at the same time: write a modest program
>> and look out for completeness, like Einstein was doing
>> physics based on natural constants while dreaming of another
>> physics based on numbers like 2 and e and pi, a pure physics,
>> as it were, no longer depending on seemingly arbitrary numbers.
> The key to exhaustively solving the compositionality problem within
> linguistics in minimal time is to make the representational system as
> simple as it can possibly be.
> These are my new semantic atoms:
> 1) Conceptions
> 2) Connections between conceptions
>
> The internal representation of semantic meanings will be enormously
> simpler than its surface structure.
> Every kind of grammar that could ever be created could be encoded as
> connections between concepts.
>
> Although there may be numerous ways to say: "Bob is going to the store."
> The underlying semantics will always be something like:
> {Physical_Transportation} (
> {Specific_Person}(Bob)  {From_Current_Physical_Location}
> {To_Physical_Location}(Retail_Establishment)
> )  {TimeFrame}(Present_Or_Future)
>
>
>
>

Although there may be numerous ways to say:
"Bob is going to the store."
The underlying semantics will always be something like:

{Event} (
   {Time_Frame} ( {Present_Or_Future} )
   {Physical_Transportation} (
     {Specific_Person}(BOB)
     {From_Current_Physical_Location}
     {To_Physical_Location}( {Retail_Establishment}(?) )
   )
)

{Curly_Braces} Indicate fully elaborated meaning postulates within a 
base ontology.

Base Ontology: Like a dictionary that exhaustively defines the complete 
meaning of every subtle nuance of every word. It does this using 
mathematical formalisms based on a directed acyclic graph of conceptions 
(nodes) and connections between conceptions (edges).

These {Conceptions} and {Connections between Conceptions} form the 
semantic atoms. (Thanks Franz Gnaedinger for helping me to simplify this).

CAPITALIZATION Indicates a specific value for a variable.

? Question_Mark Indicates a missing value for a variable.
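
For what it is worth, this notation maps directly onto an ordinary
node-and-edge encoding.  A minimal Python sketch (all names invented)
of the "Bob is going to the store" example above:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Conception:
    name: str                    # a node in the base ontology
    value: Optional[str] = None  # CAPITALIZED specific value; None means '?'
    parts: List["Conception"] = field(default_factory=list)  # directed edges

bob_goes_to_store = Conception("Event", parts=[
    Conception("Time_Frame", "PRESENT_OR_FUTURE"),
    Conception("Physical_Transportation", parts=[
        Conception("Specific_Person", "BOB"),
        Conception("From_Current_Physical_Location"),
        Conception("To_Physical_Location",
                   parts=[Conception("Retail_Establishment")]),  # value is '?'
    ]),
])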

0
Peter
12/12/2013 11:32:50 AM
On 12/12/2013 5:42 AM, George Greene wrote:
> On Wednesday, December 11, 2013 6:55:30 PM UTC-5, Peter Olcott wrote:
>> If you say "yes" you answered the question, yet this answer is not correct.
>
> It IS SO TOO correct.
> How exactly could YOU hope to prove otherwise?!?
>
>
By fully elaborating the specific details of the underlying meaning 
postulates using Montague Grammar and its logically entailed enhancements.

This depends upon something that [fom] understood, that others here may 
not have understood: the [object of truth].

We have to step outside of the frame_of_reference of the question itself 
and look at it from the perspective of an outsider.

Can anyone else correctly answer the question:
Can you correctly answer this question?

The correct answer is clearly "no" because this question, like the 
Liar_Paradox, lacks an object of truth.

<analysis>
  Can you correctly answer this question?
  Q) What are you asking me about?

  A) I am asking you about whether or not you can correctly answer this 
question.

  Q) What question?

  A) The question that is asking you about whether or not you can 
correctly answer this question.
</analysis>
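
A rough Python rendering (invented names) of that regress: asking what
the question is about only reproduces the question, so the chase after
a truth object cycles instead of bottoming out in a fact:

def referent(question):
    # What is "Can you correctly answer this question?" about?
    # The question itself.
    return question

q = "Can you correctly answer this question?"
assert referent(q) is q            # the "about" edge points back to q
assert referent(referent(q)) is q  # ...and keeps doing so: a cycle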
0
Peter
12/12/2013 12:02:23 PM
On 12/11/2013 8:10 PM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 12/11/2013 7:31 PM, Ben Bacarisse wrote:
>>> Peter Olcott <OCR4Screen> writes:
> <snip>
>>>> You are still failing to explicitly acknowledge the two different levels of
>>>> indirection that change the fundamental meaning of the question:
>>>>
>>>> Asking you:
>>>> 1) Can you correctly answer this question? (referring to itself)
>>>> (neither answer is correct)
>>>>
>>>> Asking someone else:
>>>> 2) Can  Ben Bacarisse correctly answer this question:
>>>>      Can you correctly answer this question? (referring to itself)
>>>> (The correct answer is "no")
>>>>
>>>> The analog to the Halting Problem is that the second level of
>>>> indirection of (2) has the clear answer of "no" when someone outside
>>>> of the potential halt tester's perspective is asked: Can the Halting
>>>> Problem be solved?
>>>>
>>>> The first level of indirection corresponds to question (1), where the
>>>> potential halt tester is asked does this program P halt on input I?
>
>>> Same as 18 months ago, I see.  There are so many things wrong with this
>>> analogy that it would take hours to unravel it.
>
>> Let's keep it simple:
>> Can you correctly answer this question? (referring to itself)
>
> 42.
>

Yes, that is funny:
http://en.wikipedia.org/wiki/42_(number)
0
Peter
12/12/2013 12:05:33 PM
On 12/11/2013 8:09 PM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 12/11/2013 7:31 PM, Ben Bacarisse wrote:
>>> Peter Olcott <OCR4Screen> writes:
> <snip>
>>>> You are still failing to explicitly acknowledge the two different levels of
>>>> indirection that change the fundamental meaning of the question:
>>>>
>>>> Asking you:
>>>> 1) Can you correctly answer this question? (referring to itself)
>>>> (neither answer is correct)
>>>>
>>>> Asking someone else:
>>>> 2) Can  Ben Bacarisse correctly answer this question:
>>>>      Can you correctly answer this question? (referring to itself)
>>>> (The correct answer is "no")
>>>>
>>>> The analog to the Halting Problem is that the second level of
>>>> indirection of (2) has the clear answer of "no" when someone outside
>>>> of the potential halt tester's perspective is asked: Can the Halting
>>>> Problem be solved?
>>>>
>>>> The first level of indirection corresponds to question (1), where the
>>>> potential halt tester is asked does this program P halt on input I?
>
>>> Same as 18 months ago, I see.  There are so many things wrong with this
>>> analogy that it would take hours to unravel it.
>
>> Let's keep it simple:
>> Can you correctly answer this question? (referring to itself)
>
> Yes.
>

See my reply to George.
0
Peter
12/12/2013 12:06:07 PM
On 12/11/2013 8:02 PM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 12/11/2013 5:45 PM, Andy Walker wrote:
>>> On 11/12/13 23:00, Peter Olcott wrote:
>>>>>>      To say that the halting problem places a limit on computation
>>>>>> would be like saying that square circles place a limit on geometry.
>>>>>      This is a category error.  Square circles, like halting
>>>>> testers, do not exist.  The HP, like the question of whether a
>>>>> circle can be made square, does exist as a well-formed problem.
>>>> So you agree that they are analogous?
>>>
>>>      If "they" means the HP and square circles, then no;  as I
>>> said, that would be a category error.
>>
>>> Square circles are analogous
>>> to halting testers;  and the problem of the existence of square
>>> circles is analogous to the HP.
>>
>> So good we agree on this crucial point:
>> The halting decider is defined in such a way as to prevent it from
>> possibly existing.
>
> It's interesting how readily you reveal your motivation through your
> choice of words.

My ultimate motivation is to 100% completely comprehend exactly and 
precisely what comprehension is such that the compositionality problem 
within linguistics can be made fully elaborated and exhaustively 
complete such that endeavors such as the CYC project can use this design 
to complete a base ontology that includes the complete meaning of every 
conception (far more than mere common sense conceptions).

The reason that I am here is to fill in the crucial little gaps of the 
comprehension of comprehension that pertain to paradoxes.

> There is implied intent in the word "prevent".  There
> is indignation in the force of "possibly existing".  There is a
> suggestion that it's all arbitrary with "defined in such a way".  And
> there is an attempt to deflect the reader from the problem (halting) and
> onto some presumably unrealistic solution to it ("the halting decider").
> What you care about is not that halting can't be decided, but that it
> should look like something deliberately done to spoil the party.
>
> Halting is a natural question to ask, once you have a model of
> computation where it is not obvious property of all computations.  How
> else could halting be defined?
>
> Anyway, we are not all meanies.  There are models of computation in
> which halting is decidable, so you could always assume that your God
> uses one of those.  Then you would not need to persuade yourself that
> something being impossible does not constitute a limit on what can be
> done.
>

0
Peter
12/12/2013 12:13:43 PM
Peter Olcott <OCR4Screen> writes:

> On 12/11/2013 8:09 PM, Ben Bacarisse wrote:
>> Peter Olcott <OCR4Screen> writes:
>>
>>> On 12/11/2013 7:31 PM, Ben Bacarisse wrote:
>>>> Peter Olcott <OCR4Screen> writes:
>> <snip>
>>>>> You are still failing to explicitly acknowledge the two different levels of
>>>>> indirection that change the fundamental meaning of the question:
>>>>>
>>>>> Asking you:
>>>>> 1) Can you correctly answer this question? (referring to itself)
>>>>> (neither answer is correct)
>>>>>
>>>>> Asking someone else:
>>>>> 2) Can  Ben Bacarisse correctly answer this question:
>>>>>      Can you correctly answer this question? (referring to itself)
>>>>> (The correct answer is "no")
>>>>>
>>>>> The analog to the Halting Problem is that the second level of
>>>>> indirection of (2) has the clear answer of "no" when someone outside
>>>>> of the potential halt tester's perspective is asked: Can the Halting
>>>>> Problem be solved?
>>>>>
>>>>> The first level of indirection corresponds to question (1), where the
>>>>> potential halt tester is asked does this program P halt on input I?
>>
>>>> Same as 18 months ago, I see.  There are so many things wrong with this
>>>> analogy that it would take hours to unravel it.
>>
>>> Let's keep it simple:
>>> Can you correctly answer this question? (referring to itself)
>>
>> Yes.
>
> See my reply to George.

That post says nothing about the analogy with halting.  I thought you
were going to take me through some didactic process that would reveal
how apt your analogy was.  You did, after all, post your "let's keep it
simple" in direct reply to my saying that it was wrong.

-- 
Ben.
0
Ben
12/12/2013 12:51:49 PM
Peter Olcott <OCR4Screen> writes:

> On 12/11/2013 8:10 PM, Ben Bacarisse wrote:
>> Peter Olcott <OCR4Screen> writes:
>>
>>> On 12/11/2013 7:31 PM, Ben Bacarisse wrote:
>>>> Peter Olcott <OCR4Screen> writes:
>> <snip>
>>>>> You are still failing to explicitly acknowledge the two different levels of
>>>>> indirection that change the fundamental meaning of the question:
>>>>>
>>>>> Asking you:
>>>>> 1) Can you correctly answer this question? (referring to itself)
>>>>> (neither answer is correct)
>>>>>
>>>>> Asking someone else:
>>>>> 2) Can  Ben Bacarisse correctly answer this question:
>>>>>      Can you correctly answer this question? (referring to itself)
>>>>> (The correct answer is "no")
>>>>>
>>>>> The analog to the Halting Problem is that the second level of
>>>>> indirection of (2) has the clear answer of "no" when someone outside
>>>>> of the potential halt tester's perspective is asked: Can the Halting
>>>>> Problem be solved?
>>>>>
>>>>> The first level of indirection corresponds to question (1), where the
>>>>> potential halt tester is asked does this program P halt on input I?
>>
>>>> Same as 18 months ago, I see.  There are so many things wrong with this
>>>> analogy that it would take hours to unravel it.
>>
>>> Let's keep it simple:
>>> Can you correctly answer this question? (referring to itself)
>>
>> 42.
>
> Yes that is funny
> http://en.wikipedia.org/wiki/42_(number)

No, it was one of two answers to your question.  I've answered your
question with "42", where do we go from here?

-- 
Ben.
0
Ben
12/12/2013 12:54:07 PM
Peter Olcott <OCR4Screen> writes:

> On 12/11/2013 8:02 PM, Ben Bacarisse wrote:
>> Peter Olcott <OCR4Screen> writes:
>>
>>> On 12/11/2013 5:45 PM, Andy Walker wrote:
>>>> On 11/12/13 23:00, Peter Olcott wrote:
>>>>>>>      To say that the halting problem places a limit on computation
>>>>>>> would be like saying that square circles place a limit on geometry.
>>>>>>      This is a category error.  Square circles, like halting
>>>>>> testers, do not exist.  The HP, like the question of whether a
>>>>>> circle can be made square, does exist as a well-formed problem.
>>>>> So you agree that they are analogous?
>>>>
>>>>      If "they" means the HP and square circles, then no;  as I
>>>> said, that would be a category error.
>>>
>>>> Square circles are analogous
>>>> to halting testers;  and the problem of the existence of square
>>>> circles is analogous to the HP.
>>>
>>> So good we agree on this crucial point:
>>> The halting decider is defined in such a way as to prevent it from
>>> possibly existing.
>>
>> It's interesting how readily you reveal your motivation through your
>> choice of words.
>
> My ultimate motivation is to 100% completely comprehend exactly and
> precisely what comprehension is such that the compositionality problem
> within linguistics can be made fully elaborated and exhaustively
> complete such that endeavors such as the CYC project can use this
> design to complete a base ontology that includes the complete meaning
> of every conception (far more than mere common sense conceptions).

Good luck with that.

> The reason that I am here is to fill in the crucial little gaps of the
> comprehension of comprehension that pertain to paradoxes.

There is no sign of any change in your comprehension since the last
time you brought all this up.  This is probably because you refuse to
read about the subject, preferring instead to remain "focused".

<snip>
-- 
Ben.
0
Ben
12/12/2013 1:00:55 PM
On 12/12/2013 6:51 AM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 12/11/2013 8:09 PM, Ben Bacarisse wrote:
>>> Peter Olcott <OCR4Screen> writes:
>>>
>>>> On 12/11/2013 7:31 PM, Ben Bacarisse wrote:
>>>>> Peter Olcott <OCR4Screen> writes:
>>> <snip>
>>>>>> You are still failing to explicitly acknowledge the two different levels of
>>>>>> indirection that change the fundamental meaning of the question:
>>>>>>
>>>>>> Asking you:
>>>>>> 1) Can you correctly answer this question? (referring to itself)
>>>>>> (neither answer is correct)
>>>>>>
>>>>>> Asking someone else:
>>>>>> 2) Can  Ben Bacarisse correctly answer this question:
>>>>>>       Can you correctly answer this question? (referring to itself)
>>>>>> (The correct answer is "no")
>>>>>>
>>>>>> The analog to the Halting Problem is that the second level of
>>>>>> indirection of (2) has the clear answer of "no" when someone outside
>>>>>> of the potential halt tester's perspective is asked: Can the Halting
>>>>>> Problem be solved?
>>>>>>
>>>>>> The first level of indirection corresponds to question (1), where the
>>>>>> potential halt tester is asked does this program P halt on input I?
>>>
>>>>> Same as 18 months ago, I see.  There are so many things wrong with this
>>>>> analogy that it would take hours to unravel it.
>>>
>>>> Let's keep it simple:
>>>> Can you correctly answer this question? (referring to itself)
>>>
>>> Yes.
>>
>> See my reply to George.
>
> That post says nothing about the analogy with halting.  I thought you
> were going to take me through some didactic process that would reveal
> how apt your analogy was.  You did, after all, post your "let's keep it
> simple" in direct reply to my saying that it was wrong.
>

I must construct comprehension step by step, element by element, with 
zero gaps. The complete comprehension of the impossibility of correctly 
answering the question: Can you correctly answer this question?
is an absolutely mandatory prerequisite to constructing the analogy.

I did not expect people to fail to grasp this foundation, so I have to 
backtrack and fill in these gaps.
0
Peter
12/12/2013 1:02:58 PM
On 12/12/2013 6:54 AM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 12/11/2013 8:10 PM, Ben Bacarisse wrote:
>>> Peter Olcott <OCR4Screen> writes:
>>>
>>>> On 12/11/2013 7:31 PM, Ben Bacarisse wrote:
>>>>> Peter Olcott <OCR4Screen> writes:
>>> <snip>
>>>>>> You are still failing to explicitly acknowledge the two different levels of
>>>>>> indirection that change the fundamental meaning of the question:
>>>>>>
>>>>>> Asking you:
>>>>>> 1) Can you correctly answer this question? (referring to itself)
>>>>>> (neither answer is correct)
>>>>>>
>>>>>> Asking someone else:
>>>>>> 2) Can  Ben Bacarisse correctly answer this question:
>>>>>>       Can you correctly answer this question? (referring to itself)
>>>>>> (The correct answer is "no")
>>>>>>
>>>>>> The analog to the Halting Problem is that the second level of
>>>>>> indirection of (2) has the clear answer of "no" when someone outside
>>>>>> of the potential halt tester's perspective is asked: Can the Halting
>>>>>> Problem be solved?
>>>>>>
>>>>>> The first level of indirection corresponds to question (1), where the
>>>>>> potential halt tester is asked does this program P halt on input I?
>>>
>>>>> Same as 18 months ago, I see.  There are so many things wrong with this
>>>>> analogy that it would take hours to unravel it.
>>>
>>>> Let's keep it simple:
>>>> Can you correctly answer this question? (referring to itself)
>>>
>>> 42.
>>
>> Yes that is funny
>> http://en.wikipedia.org/wiki/42_(number)
>
> No, it was one of two answers to your question.  I've answered your
> question with "42", where do we go from here?
>

42 is an incorrect answer.
The answer is required to be from the set of {yes, no}.
0
Peter
12/12/2013 1:04:31 PM
On 12/12/2013 7:00 AM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 12/11/2013 8:02 PM, Ben Bacarisse wrote:
>>> Peter Olcott <OCR4Screen> writes:
>>>
>>>> On 12/11/2013 5:45 PM, Andy Walker wrote:
>>>>> On 11/12/13 23:00, Peter Olcott wrote:
>>>>>>>>       To say that the halting problem places a limit on computation
>>>>>>>> would be like saying that square circles place a limit on geometry.
>>>>>>>       This is a category error.  Square circles, like halting
>>>>>>> testers, do not exist.  The HP, like the question of whether a
>>>>>>> circle can be made square, does exist as a well-formed problem.
>>>>>> So you agree that they are analogous?
>>>>>
>>>>>       If "they" means the HP and square circles, then no;  as I
>>>>> said, that would be a category error.
>>>>
>>>>> Square circles are analogous
>>>>> to halting testers;  and the problem of the existence of square
>>>>> circles is analogous to the HP.
>>>>
>>>> So good we agree on this crucial point:
>>>> The halting decider is defined in such a way as to prevent it from
>>>> possibly existing.
>>>
>>> It's interesting how readily you reveal your motivation through your
>>> choice of words.
>>
>> My ultimate motivation is to 100% completely comprehend exactly and
>> precisely what comprehension is such that the compositionality problem
>> within linguistics can be made fully elaborated and exhaustively
>> complete such that endeavors such as the CYC project can use this
>> design to complete a base ontology that includes the complete meaning
>> of every conception (far more than mere common sense conceptions).
>
> Good luck with that.

Whatever progress I make along this path will be very pleasing.
If there is not a goal, then there can be no path. Forming an ultimate 
goal optimizes progress along the path.

>
>> The reason that I am here is to fill in the crucial little gaps of the
>> comprehension of comprehension that pertain to paradoxes.
>
> There is no sign of any change in your comprehension from that last
> time you brought all this up.  This is probably because you refuse to
> read about the subject, preferring instead to remain "focused".
>
> <snip>
>

I have not yet built my analogical proof based on the impossibility of 
answering the question:
What is the correct answer to this question?
This aspect is entirely new material.

I cannot build this proof merely by presenting it, because it depends 
upon new ideas that must first be sufficiently elaborated. The only 
way that I can tell that these ideas have been sufficiently elaborated 
is when comprehension occurs within the minds of my readers. This 
comprehension is occurring this time much more than at any other time.

You and [fom] were the first to explicitly acknowledge comprehension of 
and agreement with key elements of my position.

The concept that [fom] fully understood, which I refer to as an [object 
of truth], is required for the next step of understanding exactly why it 
is impossible to provide a correct (yes or no) answer to the following 
question:

Can you correctly answer this question? (referring to itself)

Until this is completely understood, forming a precise analogy to the 
Halting Problem is not possible.

0
Peter
12/12/2013 1:20:06 PM
Peter Olcott wrote:
> On 12/11/2013 4:57 PM, Peter Percival wrote:
>> Peter Olcott wrote:
>>
>>> 1) Can you correctly answer this question? (referring to itself)
>>> (neither answer is correct)
>>
>> I take it that 'neither answer' refers to 'yes' and 'no'.  Why is
>> neither of them correct?
>>
>>
> If you say "no" you just correctly answered the question so "no" is
> incorrect.
> If you say "yes" you answered the question, yet this answer is not correct.

Why is "yes" not the correct answer?

-- 
Madam Life's a piece in bloom,
Death goes dogging everywhere:
She's the tenant of the room,
He's the ruffian on the stair.
0
Peter
12/12/2013 2:25:17 PM
On 12/12/2013 8:25 AM, Peter Percival wrote:
> Peter Olcott wrote:
>> On 12/11/2013 4:57 PM, Peter Percival wrote:
>>> Peter Olcott wrote:
>>>
>>>> 1) Can you correctly answer this question? (referring to itself)
>>>> (neither answer is correct)
>>>
>>> I take it that 'neither answer' refers to 'yes' and 'no'.  Why is
>>> neither of them correct?
>>>
>>>
>> If you say "no" you just correctly answered the question so "no" is
>> incorrect.
>> If you say "yes" you answered the question, yet this answer is not
>> correct.
>
> Why is "yes" not the correct answer?
>

This depends upon something that [fom] understood, that others here may 
not have understood: the [object of truth].

We have to step outside of the frame_of_reference of the question itself 
and look at it from the perspective of an outsider.

Can anyone else correctly answer the question:
Can you correctly answer this question?

The correct answer is clearly "no" because this question, like the 
Liar_Paradox, lacks an object of truth.

<analysis>
  Can you correctly answer this question?
  Q) What are you asking me about?

  A) I am asking you about whether or not you can correctly answer this 
question.

  Q) What question?

  A) The question that is asking you about whether or not you can 
correctly answer this question.
</analysis>
0
Peter
12/12/2013 2:31:17 PM
Peter Olcott wrote:

> [...] The complete comprehension of the impossibility of correctly
> answering the question: Can you correctly answer this question?
> is an absolutely mandatory prerequisite to constructing the analogy.

You sound like Nam and his insistence that people agree with something 
before he can continue.  In his case it's an unconvincing delaying 
tactic.  Unfortunately Nam is mentally inadequate and so he can't see 
that it's unconvincing.

-- 
Madam Life's a piece in bloom,
Death goes dogging everywhere:
She's the tenant of the room,
He's the ruffian on the stair.
0
Peter
12/12/2013 2:37:15 PM
Peter Olcott wrote:

> Until this is completely understood, forming a precise analogy to the
> Halting Problem is not possible.

Precise analogy?  No such thing.

-- 
Madam Life's a piece in bloom,
Death goes dogging everywhere:
She's the tenant of the room,
He's the ruffian on the stair.
0
Peter
12/12/2013 2:39:39 PM
Peter Olcott wrote:
>
> The reason that I am here

Quite Messianic.  Perhaps it's the time of year.

> is to fill in the crucial little gaps of the
> comprehension of comprehension that pertain to paradoxes.




-- 
Madam Life's a piece in bloom,
Death goes dogging everywhere:
She's the tenant of the room,
He's the ruffian on the stair.
0
Peter
12/12/2013 2:41:38 PM
Peter Olcott wrote:
> On 12/12/2013 5:42 AM, George Greene wrote:
>> On Wednesday, December 11, 2013 6:55:30 PM UTC-5, Peter Olcott wrote:
>>> If you say "yes" you answered the question, yet this answer is not
>>> correct.
>>
>> It IS SO TOO correct.
>> How exactly could YOU hope to prove otherwise?!?
>>
>>
> By fully elaborating the specific details of the underlying meaning
> postulates using Montague Grammar and its logically entailed enhancements.
>
> This depends upon something that [fom] understood, that others here may
> not have understood: the [object of truth].
>
> We have to step outside of the frame_of_reference of the question itself
> and look at it from the perspective of an outsider.
>
> Can anyone else correctly answer the question:
> Can you correctly answer this question?
>
> The correct answer is clearly "no" because this question, like the
> Liar_Paradox lacks an object of truth.
>
> <analysis>
>   Can you correctly answer this question?
>   Q) What are you asking me about?

Straw man.  Who says any Q (except the Q in Peter Olcott's script) will 
ask that?  Why doesn't Q just say "yes"?

>   A) I am asking you about whether or not you can correctly answer this
> question.
>
>   Q) What question?
>
>   A) The question that is asking you about whether or not you can
> correctly answer this question.
> </analysis>

A poor analysis.  A is Peter Olcott, but Q is also Peter Olcott.  Real 
Qs (me, George) have answered "yes".  That is what you should be dealing 
with.


-- 
Madam Life's a piece in bloom,
Death goes dogging everywhere:
She's the tenant of the room,
He's the ruffian on the stair.
0
Peter
12/12/2013 2:46:38 PM
Peter Olcott wrote:
> On 12/12/2013 8:25 AM, Peter Percival wrote:
>> Peter Olcott wrote:
>>> On 12/11/2013 4:57 PM, Peter Percival wrote:
>>>> Peter Olcott wrote:
>>>>
>>>>> 1) Can you correctly answer this question? (referring to itself)
>>>>> (neither answer is correct)
>>>>
>>>> I take it that 'neither answer' refers to 'yes' and 'no'.  Why is
>>>> neither of them correct?
>>>>
>>>>
>>> If you say "no" you just correctly answered the question so "no" is
>>> incorrect.
>>> If you say "yes" you answered the question, yet this answer is not
>>> correct.
>>
>> Why is "yes" not the correct answer?
>>
>
> This depends upon something that [fom] understood, that others here may
> not have understood: the [object of truth].
>
> We have to step outside of the frame_of_reference of the question itself
> and look at it from the perspective of an outsider.
>
> Can anyone else correctly answer the question:
> Can you correctly answer this question?
>
> The correct answer is clearly "no"

If you wish.  But earlier (it's still visible above) you said that "no" 
was incorrect.  You seem to lack reliability.

> because this question, like the
> Liar_Paradox lacks an object of truth.
>
> <analysis>
>   Can you correctly answer this question?
>   Q) What are you asking me about?
>
>   A) I am asking you about whether or not you can correctly answer this
> question.
>
>   Q) What question?
>
>   A) The question that is asking you about whether or not you can
> correctly answer this question.
> </analysis>

I reject your analysis.  If I were asked "Can you correctly answer this 
question?" I would reply "Yes."  I wouldn't ask, as your Q does, "What 
are you asking me about?"  So your Q is a red herring.

-- 
Madam Life's a piece in bloom,
Death goes dogging everywhere:
She's the tenant of the room,
He's the ruffian on the stair.
0
Peter
12/12/2013 3:20:15 PM
On 12/12/2013 8:46 AM, Peter Percival wrote:
> Peter Olcott wrote:
>> On 12/12/2013 5:42 AM, George Greene wrote:
>>> On Wednesday, December 11, 2013 6:55:30 PM UTC-5, Peter Olcott wrote:
>>>> If you say "yes" you answered the question, yet this answer is not
>>>> correct.
>>>
>>> It IS SO TOO correct.
>>> How exactly could YOU hope to prove otherwise?!?
>>>
>>>
>> By fully elaborating the specific details of the underlying meaning
>> postulates using Montague Grammar and its logically entailed
>> enhancements.
>>
>> This depends upon something that [fom] understood, that others here may
>> not have understood: the [object of truth].
>>
>> We have to step outside of the frame_of_reference of the question itself
>> and look at it from the perspective of an outsider.
>>
>> Can anyone else correctly answer the question:
>> Can you correctly answer this question?
>>
>> The correct answer is clearly "no" because this question, like the
>> Liar_Paradox lacks an object of truth.
>>
>> <analysis>
>>   Can you correctly answer this question?
>>   Q) What are you asking me about?
>
> Straw man.  Who says any Q (except the Q in Peter Olcott's script) will
> ask that.  Why doesn't Q just say "yes"?

For exactly the same sort of (analogous) reason that the answer to a 
variation of the Liar_Paradox is not simply yes:

Is this sentence false? (referring to itself)

It requires a full understanding of the concept that I have referred to 
as [truth object], and fom has explained as:

On 12/5/2013 12:30 PM, fom wrote:
 > A truth bearer is the referent of a term that
 > instantiates its intension.
 >
 > In model theory or computational contexts, it
 > might be called a witness.
 >

This seems to capture the object_of_truth concept aptly:
http://plato.stanford.edu/entries/truth-identity/

Paraphrase of the above:
If a proposition is true, then it must be true about something in a 
possible world.

I define Truth as simply the unidirectional mathematical mapping between 
abstract representations of actuality and actuality itself.

This would seem to succinctly capture the essence of both the 
correspondence theory of truth, and the identity theory of truth.
0
Peter
12/12/2013 3:22:55 PM
On 12/12/2013 8:37 AM, Peter Percival wrote:
> Peter Olcott wrote:
>
>> [...] The complete comprehension of the impossibility of correctly
>> answering the question: Can you correctly answer this question?
>> is an absolutely mandatory prerequisite to constructing the analogy.
>
> You sound like Nam and his insistence that people agree with something
> before he can continue.  In his case it's an unconvincing delaying
> tactic.  Unfortunately Nam is mentally inadequate and so he can't see
> that it's unconvincing.
>

That seems common for some people.  I had
Mr. Epstein doing that to me when I had tried
to assist him with a proof.  He had no apparent
agendas.  But, because being challenged at every
turn is counter to how the abilities I might
have could contribute, the exchange could not
proceed.

At a different level, I once had a co-worker
who argued with me all of the time.  Since there
had been no personal animosity between us, I
asked him one day why he did these things.  He
smiled and explained that it was like being
married; the tit-for-tat of disputes helped him
to be a better husband.  The motivation behind
all of his arguing had been his sincere way of
trying to improve himself and his relationship
with his scaffold partner.

Naturally, I laughed and explained to him that
I was not his wife!

chuckle


0
fom
12/12/2013 4:08:23 PM
On 12/12/2013 9:20 AM, Peter Percival wrote:
> Peter Olcott wrote:
>> On 12/12/2013 8:25 AM, Peter Percival wrote:
>>> Peter Olcott wrote:
>>>> On 12/11/2013 4:57 PM, Peter Percival wrote:
>>>>> Peter Olcott wrote:
>>>>>
>>>>>> 1) Can you correctly answer this question? (referring to itself)
>>>>>> (neither answer is correct)
>>>>>
>>>>> I take it that 'neither answer' refers to 'yes' and 'no'.  Why is
>>>>> neither of them correct?
>>>>>
>>>>>
>>>> If you say "no" you just correctly answered the question so "no" is
>>>> incorrect.
>>>> If you say "yes" you answered the question, yet this answer is not
>>>> correct.
>>>
>>> Why is "yes" not the correct answer?
>>>
>>
>> This depends upon something that [fom] understood, that others here may
>> not have understood: the [object of truth].
>>
>> We have to step outside of the frame_of_reference of the question itself
>> and look at it from the perspective of an outsider.
>>
>> Can anyone else correctly answer the question:
>> Can you correctly answer this question?
>>
>> The correct answer is clearly "no"
>
> If you wish.  But earlier (it's still visible above) you said that "no"
> was incorrect.  You seem to lack reliability.

There are two different frame-of-reference levels of indirection that 
you did not account for in your above assessment:

1) Can you correctly answer this question? (referring to itself)
(neither answer is correct)

Asking someone else:
2) Can Peter Percival correctly answer this question:
    Can you correctly answer this question? (referring to itself)
(The correct answer is "no")

The single question:
Can you correctly answer this question? (referring to itself)

has completely different semantic meaning depending upon each of the two 
different levels of frame-of-reference indirection shown in (1) and (2) 
above.


0
Peter
12/12/2013 4:18:02 PM
On 12/12/2013 8:46 AM, Peter Percival wrote:
> Peter Olcott wrote:
>> On 12/12/2013 5:42 AM, George Greene wrote:
>>> On Wednesday, December 11, 2013 6:55:30 PM UTC-5, Peter Olcott wrote:
>>>> If you say "yes" you answered the question, yet this answer is not
>>>> correct.
>>>
>>> It IS SO TOO correct.
>>> How exactly could YOU hope to prove otherwise?!?
>>>
>>>
>> By fully elaborating the specific details of the underlying meaning
>> postulates using Montague Grammar and its logically entailed
>> enhancements.
>>
>> This depends upon something that [fom] understood, that others here may
>> not have understood: the [object of truth].
>>

Well, Mr. Olcott should not be citing
me in this forum.  I have about as much
credibility as Mr. Plutonium.

But, there is an intensional presupposition
of truth in logic and there is an extensional
semantic conception of truth which depends on
truth bearers.

Logic cannot proceed as a study of inference
without the intensional presupposition.  These
epistemological limitations may be thought of as
relating to the intensional presupposition.

Hence, they are not necessarily resolved by
an appeal to some different relation with
truth bearers.  And, meaning postulates are
analytical conceptions of truth.  They constitute
constraints on an existing logical system and
are unlikely to be able to influence the intensional
presupposition of truth.

A deductive calculus makes no sense if its
purpose is not to hold truth constant across
transformations of statements.  This kind of
truth has no truth bearer.




0
fom
12/12/2013 4:20:22 PM
On 12/12/2013 8:39 AM, Peter Percival wrote:
> Peter Olcott wrote:
>
>> Until this is completely understood, forming a precise analogy to the
>> Halting Problem is not possible.
>
> Precise analogy?  No such thing.
>

3 is to 6 as 2 is to 4 as 9 is to 18, precisely.
0
Peter
12/12/2013 4:20:42 PM
Peter Olcott wrote:
> On 12/12/2013 8:46 AM, Peter Percival wrote:
>> Peter Olcott wrote:
>>> On 12/12/2013 5:42 AM, George Greene wrote:
>>>> On Wednesday, December 11, 2013 6:55:30 PM UTC-5, Peter Olcott wrote:
>>>>> If you say "yes" you answered the question, yet this answer is not
>>>>> correct.
>>>>
>>>> It IS SO TOO correct.
>>>> How exactly could YOU hope to prove otherwise?!?
>>>>
>>>>
>>> By fully elaborating the specific details of the underlying meaning
>>> postulates using Montague Grammar and its logically entailed
>>> enhancements.
>>>
>>> This depends upon something that [fom] understood, that others here may
>>> not have understood: the [object of truth].
>>>
>>> We have to step outside of the frame_of_reference of the question itself
>>> and look at it from the perspective of an outsider.
>>>
>>> Can anyone else correctly answer the question:
>>> Can you correctly answer this question?
>>>
>>> The correct answer is clearly "no" because this question, like the
>>> Liar_Paradox lacks an object of truth.
>>>
>>> <analysis>
>>>   Can you correctly answer this question?
>>>   Q) What are you asking me about?
>>
>> Straw man.  Who says any Q (except the Q in Peter Olcott's script) will
>> ask that.  Why doesn't Q just say "yes"?
>
> For exactly the same sort of (analogous) reason that the answer to a
> variation of the Liar_Paradox is not simply yes:

Is it exact or is it analogous?
Is it exact or is it "sort of"?

[...]
> I define Truth as simply the unidirectional mathematical mapping between
> abstract representations of actuality and actuality itself.

Good for you.  If you talk a private language no one will understand you.

>
> This would seem to succinctly capture the essence of both the
> correspondence theory of truth, and the identity theory of truth.


-- 
Madam Life's a piece in bloom,
Death goes dogging everywhere:
She's the tenant of the room,
He's the ruffian on the stair.
0
Peter
12/12/2013 4:55:56 PM
Peter Olcott wrote:
> On 12/12/2013 8:39 AM, Peter Percival wrote:
>> Peter Olcott wrote:
>>
>>> Until this is completely understood, forming a precise analogy to the
>>> Halting Problem is not possible.
>>
>> Precise analogy?  No such thing.
>>
>
> 3 is to 6 as 2 is to 4 as 9 is to 18, precisely.

Those are ratios.  Ratios are not analogies.

Yes, I am aware that the word "analogy" comes from the Greek for 
"equality of ratios"; but I doubt that the word is used thus nowadays.

-- 
Madam Life's a piece in bloom,
Death goes dogging everywhere:
She's the tenant of the room,
He's the ruffian on the stair.
0
Peter
12/12/2013 5:05:02 PM
On 12/12/2013 10:20 AM, fom wrote:
> On 12/12/2013 8:46 AM, Peter Percival wrote:
>> Peter Olcott wrote:
>>> On 12/12/2013 5:42 AM, George Greene wrote:
>>>> On Wednesday, December 11, 2013 6:55:30 PM UTC-5, Peter Olcott wrote:
>>>>> If you say "yes" you answered the question, yet this answer is not
>>>>> correct.
>>>>
>>>> It IS SO TOO correct.
>>>> How exactly could YOU hope to prove otherwise?!?
>>>>
>>>>
>>> By fully elaborating the specific details of the underlying meaning
>>> postulates using Montague Grammar and its logically entailed
>>> enhancements.
>>>
>>> This depends upon something that [fom] understood, that others here may
>>> not have understood: the [object of truth].
>>>
>
> Well, Mr. Olcott should not be citing
> me in this forum.  I have about as much
> credibility as Mr. Plutonium.
>
I try, as much as possible, to ignore credibility completely and go 
directly to validity and soundness.

> But, there is an intensional presupposition
> of truth in logic and there is an extensional
> semantic conception of truth which depends on
> truth bearers.
>
> Logic cannot proceed as a study of inference
> without the intensional presupposition.  These
> epistemological limitations may be thought of as
> relating to the intensional presupposition.
>
> Hence, they are not necessarily resolved by
> an appeal to some different relation with
> truth bearers.  And, meaning postulates are
> analytical conceptions of truth.  They constitute
> constraints on an existing logical system and
> are unlikely to be able to influence the intensional
> presupposition of truth.
>
> A deductive calculus makes no sense if its
> purpose is not to hold truth constant across
> transformations of statements.  This kind of
> truth has no truth bearer.
>
>
>

https://webspace.utexas.edu/rms9/www/Publications/RMSPresupposition%20and%20intensionality(Critica).pdf 

I could not be sure how well I understood what you are saying because of 
a possible gap in my understanding of the term 
[intensional presupposition]; the intended meaning may be 
somewhat idiomatic.

The above link is what I found. Perhaps you have a link to a more 
concise meaning of your intended use of this term?
0
Peter
12/12/2013 5:09:01 PM
On 12/12/2013 10:55 AM, Peter Percival wrote:
> Peter Olcott wrote:
>> On 12/12/2013 8:46 AM, Peter Percival wrote:
>>> Peter Olcott wrote:
>>>> On 12/12/2013 5:42 AM, George Greene wrote:
>>>>> On Wednesday, December 11, 2013 6:55:30 PM UTC-5, Peter Olcott wrote:
>>>>>> If you say "yes" you answered the question, yet this answer is not
>>>>>> correct.
>>>>>
>>>>> It IS SO TOO correct.
>>>>> How exactly could YOU hope to prove otherwise?!?
>>>>>
>>>>>
>>>> By fully elaborating the specific details of the underlying meaning
>>>> postulates using Montague Grammar and its logically entailed
>>>> enhancements.
>>>>
>>>> This depends upon something that [fom] understood, that others here may
>>>> not have understood: the [object of truth].
>>>>
>>>> We have to step outside of the frame_of_reference of the question
>>>> itself
>>>> and look at it from the perspective of an outsider.
>>>>
>>>> Can anyone else correctly answer the question:
>>>> Can you correctly answer this question?
>>>>
>>>> The correct answer is clearly "no" because this question, like the
>>>> Liar_Paradox lacks an object of truth.
>>>>
>>>> <analysis>
>>>>   Can you correctly answer this question?
>>>>   Q) What are you asking me about?
>>>
>>> Straw man.  Who says any Q (except the Q in Peter Olcott's script) will
>>> ask that.  Why doesn't Q just say "yes"?
>>
>> For exactly the same sort of (analogous) reason that the answer to a
>> variation of the Liar_Paradox is not simply yes:
>
> Is it exact or is it analogous?
> Is it exact or is it "sort of"?

Yes, my wording may have been a little inconsistent.

I consider an analogy precise when the two things compared are exactly 
the same along at least one dimension. I have not yet formalized the 
meaning of the term analogy, so the prior statement may still be 
slightly incorrect.

>
> [...]
>> I define Truth as simply the unidirectional mathematical mapping between
>> abstract representations of actuality and actuality itself.
>
> Good for you.  If you talk a private language no one will understand you.

How does your statement apply to my statement? (to me all the terms are 
self-evident, that might not be the case with others).

>
>>
>> This would seem to succinctly capture the essence of both the
>> correspondence theory of truth, and the identity theory of truth.
>
>

0
Peter
12/12/2013 5:23:44 PM
Peter Olcott wrote:
> On 12/12/2013 10:55 AM, Peter Percival wrote:
>> Peter Olcott wrote:
>>> On 12/12/2013 8:46 AM, Peter Percival wrote:
>>>> Peter Olcott wrote:
>>>>> On 12/12/2013 5:42 AM, George Greene wrote:
>>>>>> On Wednesday, December 11, 2013 6:55:30 PM UTC-5, Peter Olcott wrote:
>>>>>>> If you say "yes" you answered the question, yet this answer is not
>>>>>>> correct.
>>>>>>
>>>>>> It IS SO TOO correct.
>>>>>> How exactly could YOU hope to prove otherwise?!?
>>>>>>
>>>>>>
>>>>> By fully elaborating the specific details of the underlying meaning
>>>>> postulates using Montague Grammar and its logically entailed
>>>>> enhancements.
>>>>>
>>>>> This depends upon something that [fom] understood, that others here
>>>>> may
>>>>> not have understood: the [object of truth].
>>>>>
>>>>> We have to step outside of the frame_of_reference of the question
>>>>> itself
>>>>> and look at it from the perspective of an outsider.
>>>>>
>>>>> Can anyone else correctly answer the question:
>>>>> Can you correctly answer this question?
>>>>>
>>>>> The correct answer is clearly "no" because this question, like the
>>>>> Liar_Paradox lacks an object of truth.
>>>>>
>>>>> <analysis>
>>>>>   Can you correctly answer this question?
>>>>>   Q) What are you asking me about?
>>>>
>>>> Straw man.  Who says any Q (except the Q in Peter Olcott's script) will
>>>> ask that.  Why doesn't Q just say "yes"?
>>>
>>> For exactly the same sort of (analogous) reason that the answer to a
>>> variation of the Liar_Paradox is not simply yes:
>>
>> Is it exact or is it analogous?
>> Is it exact or is it "sort of"?
>
> Yes, my wording may have been a little inconsistent.
>
> I consider an analogy precise when the two things compared are exactly
> the same along at least one dimension. I have not yet formalized the
> meaning of the term analogy, so the prior statement may still be
> slightly incorrect.
>
>>
>> [...]
>>> I define Truth as simply the unidirectional mathematical mapping between
>>> abstract representations of actuality and actuality itself.
>>
>> Good for you.  If you talk a private language no one will understand you.
>
> How does your statement apply to my statement? (to me all the terms are
> self-evident, that might not be the case with others).

"I define truth..." and, above, "I have not yet formalized the meaning 
of the term analogy..."  How about not defining and formalizing the 
meanings of familiar words?  (Here "analogy" and "truth".)  Instead, why 
not use them as others use them?  Won't that make you more easily 
understood?

>>> This would seem to succinctly capture the essence of both the
>>> correspondence theory of truth, and the identity theory of truth.

-- 
Madam Life's a piece in bloom,
Death goes dogging everywhere:
She's the tenant of the room,
He's the ruffian on the stair.
0
Peter
12/12/2013 5:31:56 PM
On 12/12/2013 11:05 AM, Peter Percival wrote:
> Peter Olcott wrote:
>> On 12/12/2013 8:39 AM, Peter Percival wrote:
>>> Peter Olcott wrote:
>>>
>>>> Until this is completely understood, forming a precise analogy to the
>>>> Halting Problem is not possible.
>>>
>>> Precise analogy?  No such thing.
>>>
>>
>> 3 is to 6 as 2 is to 4 as 9 is to 18, precisely.
>
> Those are ratios.  Ratios are not analogies.

The precise analogy is: divided by two.

>
> Yes, I am aware that the word "analogy" comes from the Greek for
> "equality of ratios"; but I doubt that the word is used thus nowadays.
>

http://dictionary.reference.com/browse/analogy
Water is to ice as lava is to rock.

The precise dimension of correspondence (analogy) is:
liquid state to solid state.
0
Peter
12/12/2013 5:34:56 PM
On 12/12/2013 11:31 AM, Peter Percival wrote:
> Peter Olcott wrote:
>> On 12/12/2013 10:55 AM, Peter Percival wrote:
>>> Peter Olcott wrote:
>>>> On 12/12/2013 8:46 AM, Peter Percival wrote:
>>>>> Peter Olcott wrote:
>>>>>> On 12/12/2013 5:42 AM, George Greene wrote:
>>>>>>> On Wednesday, December 11, 2013 6:55:30 PM UTC-5, Peter Olcott
>>>>>>> wrote:
>>>>>>>> If you say "yes" you answered the question, yet this answer is not
>>>>>>>> correct.
>>>>>>>
>>>>>>> It IS SO TOO correct.
>>>>>>> How exactly could YOU hope to prove otherwise?!?
>>>>>>>
>>>>>>>
>>>>>> By fully elaborating the specific details of the underlying meaning
>>>>>> postulates using Montague Grammar and its logically entailed
>>>>>> enhancements.
>>>>>>
>>>>>> This depends upon something that [fom] understood, that others here
>>>>>> may
>>>>>> not have understood: the [object of truth].
>>>>>>
>>>>>> We have to step outside of the frame_of_reference of the question
>>>>>> itself
>>>>>> and look at it from the perspective of an outsider.
>>>>>>
>>>>>> Can anyone else correctly answer the question:
>>>>>> Can you correctly answer this question?
>>>>>>
>>>>>> The correct answer is clearly "no" because this question, like the
>>>>>> Liar_Paradox lacks an object of truth.
>>>>>>
>>>>>> <analysis>
>>>>>>   Can you correctly answer this question?
>>>>>>   Q) What are you asking me about?
>>>>>
>>>>> Straw man.  Who says any Q (except the Q in Peter Olcott's script)
>>>>> will
>>>>> ask that.  Why doesn't Q just say "yes"?
>>>>
>>>> For exactly the same sort of (analogous) reason that the answer to a
>>>> variation of the Liar_Paradox is not simply yes:
>>>
>>> Is it exact or is it analogous?
>>> Is it exact or is it "sort of"?
>>
>> Yes, my wording may have been a little inconsistent.
>>
>> I consider an analogy precise when the two things compared are exactly
>> the same along at least one dimension. I have not yet formalized the
>> meaning of the term analogy, so the prior statement may still be
>> slightly incorrect.
>>
>>>
>>> [...]
>>>> I define Truth as simply the unidirectional mathematical mapping
>>>> between
>>>> abstract representations of actuality and actuality itself.
>>>
>>> Good for you.  If you talk a private language no one will understand
>>> you.
>>
>> How does your statement apply to my statement? (to me all the terms are
>> self-evident, that might not be the case with others).
>
> "I define truth..." and, above, "I have not yet formalized the meaning
> of the term analogy..."  How about not defining and formalizing the
> meanings of familiar words?  (Here "analogy" and "truth".)  Instead, why
> not use them as others use them?  Won't that make you more easily
> understood?

I want to eliminate the wiggle room of imprecision that is otherwise 
inherent in natural language.  I do this so that some of the very subtle 
nuances that I am trying to communicate can be fully understood. 
Although the common meanings of the terms are my starting point, these 
common meanings let things slip through the cracks.

>
>>>> This would seem to succinctly capture the essence of both the
>>>> correspondence theory of truth, and the identity theory of truth.
>

0
Peter
12/12/2013 5:40:46 PM
Peter Olcott <OCR4Screen> writes:

> On 12/12/2013 6:54 AM, Ben Bacarisse wrote:
>> Peter Olcott <OCR4Screen> writes:
>>
>>> On 12/11/2013 8:10 PM, Ben Bacarisse wrote:
>>>> Peter Olcott <OCR4Screen> writes:
>>>>
>>>>> On 12/11/2013 7:31 PM, Ben Bacarisse wrote:
>>>>>> Peter Olcott <OCR4Screen> writes:
>>>> <snip>
>>>>>>> You are still failing to explicitly acknowledge the two different levels of
>>>>>>> indirection that change the fundamental meaning of the question:
>>>>>>>
>>>>>>> Asking you:
>>>>>>> 1) Can you correctly answer this question? (referring to itself)
>>>>>>> (neither answer is correct)
>>>>>>>
>>>>>>> Asking someone else:
>>>>>>> 2) Can  Ben Bacarisse correctly answer this question:
>>>>>>>       Can you correctly answer this question? (referring to itself)
>>>>>>> (The correct answer is "no")
>>>>>>>
>>>>>>> The analog to the Halting Problem is that the second level of
>>>>>>> indirection of (2) has the clear answer of "no" when someone outside
>>>>>>> of the potential halt tester's perspective is asked: Can the Halting
>>>>>>> Problem be solved?
>>>>>>>
>>>>>>> The first level of indirection corresponds to question (1), where the
>>>>>>> potential halt tester is asked does this program P halt on input I?
>>>>
>>>>>> Same as 18 months ago, I see.  There are so many things wrong with this
>>>>>> analogy that it would take hours to unravel it.
>>>>
>>>>> Let's keep it simple:
>>>>> Can you correctly answer this question? (referring to itself)
>>>>
>>>> 42.
>>>
>>> Yes that is funny
>>> http://en.wikipedia.org/wiki/42_(number)
>>
>> No, it was one of two answers to your question.  I've answered your
>> question with "42", where do we go from here?
>
> 42 is an incorrect answer.
> The answer is required to be from the set of {yes, no}.

So you say, but I was born free, which was (in part) my point.  I
thought we were going to have a Socratic dialogue, so I chose to play
devil's advocate.  Human beings are willful, capricious and often
misunderstand things.  That's why I answered twice, once with a silly
answer.  TMs can't do that sort of thing, so what sort of human being
should I pretend to be to make the analogy work?  I must reply from a
fixed set, it seems.  OK.  Am I allowed to think and understand the
question?  TMs can't do either.  If I am not allowed to think, how can I
answer?

-- 
Ben.
0
Ben
12/12/2013 5:51:13 PM
Peter Olcott <OCR4Screen> writes:
<snip>
> I did not expect people to fail to grasp this foundation so I have to
> backtrack and fill in these gaps.

You might also consider arguing about the actual subject (halting)
rather than about asking people contrived questions.  Arguing by analogy
in a computer science forum is bringing a knife to a gun fight -- it may
get people talking, but you won't convince anyone with it.  You will
eventually have to return to talking about halting.

-- 
Ben.
0
Ben
12/12/2013 5:56:34 PM
On 12/12/2013 11:56 AM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
> <snip>
>> I did not expect people to fail to grasp this foundation so I have to
>> backtrack and fill in these gaps.
>
> You might also consider arguing about the actual subject (halting)
> rather than about asking people contrived questions.  Arguing by analogy
> in a computer science forum is bringing a knife to a gun fight -- it may
> get people talking, but you won't convince anyone with it.  You will
> eventually have to return to talking about halting.
>

Nonetheless, it is quite often the case that analogies bring much 
greater insight into the analysis of problems by transforming the 
problem into another problem that is enormously easier to understand.
0
Peter
12/12/2013 6:28:03 PM
Peter Olcott <OCR4Screen> writes:
<snip>
> http://dictionary.reference.com/browse/analogy
> Water is to ice as lava is to rock.
>
> The precise dimension of correspondence (analogy) is:
> liquid state to solid state.

This has been missing from your analogies, which is partly what I wanted
to illustrate with my answers to your "keep it simple" question.  Note
that when you explain the "dimension" -- the way the things are
analogous -- you end up with more information than the analogy
provides.  The explanation alone is better -- all the analogy does is
reinforce it.

Without the explanation, an analogy can't provide new meaning.  The
dictionary example is clear because we already understand it, and if we
don't you've provided the explanation.  But without one or the other it
would be useless.  If I tell you that a heffalump is to an elephant as a
wedding cake is to a biscuit, what have I been able to convey?

-- 
Ben.
0
Ben
12/12/2013 8:05:18 PM
Peter Olcott <OCR4Screen> writes:

> On 12/12/2013 11:56 AM, Ben Bacarisse wrote:
>> Peter Olcott <OCR4Screen> writes:
>> <snip>
>>> I did not expect people to fail to grasp this foundation so I have to
>>> backtrack and fill in these gaps.
>>
>> You might also consider arguing about the actual subject (halting)
>> rather than about asking people contrived questions.  Arguing by analogy
>> in a computer science forum is bringing a knife to a gun fight -- it may
>> get people talking, but you won't convince anyone with it.  You will
>> eventually have to return to talking about halting.
>
> Nonetheless, it is quite often the case that analogies bring much
> greater insight into the analysis of problems by transforming the
> problem into another problem that is enormously easier to understand.

So far I think you have only been able to confuse yourself with your
analogies.  Since you don't understand computability very well, the
chance that you can come up with an analogous problem, so closely
mirroring halting that analysis of one will carry over to the other is
virtually nil.  You certainly haven't got close yet.

-- 
Ben.
0
Ben
12/12/2013 8:09:53 PM
On 12/12/2013 12:17 PM, DKleinecke wrote:
> On Wednesday, December 11, 2013 10:26:12 AM UTC-8, Peter Olcott wrote:
>> On 12/11/2013 12:01 PM, DKleinecke wrote:
>
>>> How do you choose which term is the head? Why not
>>>        {true} = {false} + {anti-negation} ?
>
>> The one of minimal complexity is chosen.
>
> In this discussion you cannot just say that - you would need to define "complexity" and in order to use it the way you did you would have to define it as a linear measure (in order to recognize "minimal").

Of course I agree with this correct reasoning. Within the context of my 
proposal I would at least initially envision that complexity would be 
minimized by minimizing the number of connections between concepts.

Another way to look at what would seem to be the same thing is that 
every trace of ambiguity would be eliminated from the inheritance 
hierarchy of semantic meaning.

>
> This is not a trivial problem with your ontological proposal. It seems to me that one must assume two different concepts - {true} and {false} with a two-way connection - one from {true} to {false} and one in the opposite direction. Then, of course, the graph is no longer acyclic. As nearly as I can tell there are a great many cases where the composition of features can go in either direction and therefore must be taken to go in both directions.
>
http://plato.stanford.edu/entries/truth-correspondence/

It has seemed to me that Truth itself is essentially the unidirectional 
mathematical mapping between abstract representations of actuality and 
actuality itself.

The concept of {true} is derived from the concept of {truth} and 
contains all of this mapping and other details. The concept of {false} 
would be the {negation} of {true}.

I am not aware of any case where there would be a need for any cycles in 
the graph. Could you cite some examples? Then maybe I can see whether I 
can resolve them, to test my theory.

>>> Do you believe that you can base your ontology studies on English alone?
>
>> These language level distinctions are abstracted out of the pure
>> conceptions within the ontology. Every language must have some way of
>> representing that something occurred prior to now, and every language
>> would have to have some way of saying I have more than one of these
>> things. I used English for my examples because that is the language that
>> I know.
>
> Even in English there are problems. For example the word "put" cannot make the present-past distinction. But the real difficulty is that the time aspect of the verb is often made syntactically rather than at the word level. The English future is an example of a time difference made syntactically. And it is worth mentioning that there are languages which do not make a simple past-present distinction but rather have several different flavors of the past - like recent, distant and in mythological times.
>
I may not have the best terminology; maybe you can help me 
here. The meaning that I intend is that all of these natural language 
related complexities (is there a single term for this?) would be 
essentially ignored within the actual knowledge representation (KR) 
ontology itself. I am pretty sure that this can be accomplished because 
I can see many of the details of how this would be accomplished.

> I am arguing that your program for a universal ontology cannot succeed because it does not include many linguistic phenomenon that are not present in English.
>
We would not be representing (within the ontology) English words and 
their meanings per se. Instead we would be directly representing the 
essential meanings themselves as pure conceptions that are (as much as 
possible) independent of their (specific human language) form of 
representation.

Each of these pure conceptions will at least closely approximate a word 
or phrase within natural language.

> PS: I used phenomenon as a collective.
>

0
Peter
12/12/2013 8:15:32 PM
On 12/12/2013 2:09 PM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 12/12/2013 11:56 AM, Ben Bacarisse wrote:
>>> Peter Olcott <OCR4Screen> writes:
>>> <snip>
>>>> I did not expect people to fail to grasp this foundation so I have to
>>>> backtrack and fill in these gaps.
>>>
>>> You might also consider arguing about the actual subject (halting)
>>> rather than about asking people contrived questions.  Arguing by analogy
>>> in a computer science forum is bringing a knife to a gun fight -- it may
>>> get people talking, but you won't convince anyone with it.  You will
>>> eventually have to return to talking about halting.
>>
>> Nonetheless, it is quite often the case that analogies bring much
>> greater insight into the analysis of problems by transforming the
>> problem into another problem that is enormously easier to understand.
>
> So far I think you have only been able to confuse yourself with your
> analogies.  Since you don't understand computability very well, the
> chance that you can come up with an analogous problem, so closely
> mirroring halting that analysis of one will carry over to the other is
> virtually nil.  You certainly haven't got close yet.
>

http://en.wikipedia.org/wiki/Lingua_franca
You already acknowledged the key element of my position. The remaining 
details are difficult to understand because they have not been 
understood previously; thus we lack a lingua franca to discuss them.
0
Peter
12/12/2013 8:21:53 PM
Peter Olcott <OCR4Screen> writes:
<snip>
> You already acknowledged the key element of my position.

For at least the last 18 months, the key element of your position has
been that halting involves some question to which there is no correct
answer.  You've given this property of a question several different
names, but you've never given an example that applies to halting.
That's because all halting questions (does M halt on input I), and the
bigger question of decidability, all have correct answers.

If this is not the key element of your position, you've done your best
to make everyone think it is.

I am not sure what "acknowledged" means here.  I certainly acknowledge
that this is your position.  I've understood it for some time.  Are you
hinting that I agree with it?  I don't.

> The remaining
> details are difficult to understand because they have not been
> understood previously; thus we lack a lingua franca to discuss them.

What is the point of details when the key element of your position is
wrong?  Do you no longer believe that halting is "wrong", "invalid",
"pathological" or "ill-formed" (whatever the word du jour is) because it
asks a question to which there is no correct answer?  Have you come to
accept that the undecidability of halting is simply a logical
consequence of TM computation?  If you have made such a radical shift,
you've kept it to yourself.

-- 
Ben.
0
Ben
12/12/2013 9:11:57 PM
> Lemma01: Meaning can only be correctly specified within an acyclic
>
> directed graph:

I disagree with this assumption. In fact I think the converse is true.

However, there is something to the larger argument of properly understanding context.

"true" is not well defined.

A plausible mathematical definition of "true" is a statement reachable from a set of axioms.

That is, a proof is true if it starts at some "true" starting point and through a series of "true" steps is arrived at.

But Gödel's first incompleteness theorem says there are true statements that cannot be proved.

How can we assert that the statement is true? If we can prove it... then incompleteness doesn't apply. If we can't prove it then we can't know that it is true.

Okay - we can use some other system to prove that the statement is true. However, there are infinitely many other systems - in some the statement will be true, in some it will be false (and some it won't be defined).

For a given set of axioms there are:

i) a number of statements that are directly reachable from those axioms and/or any true statement within those axioms.
ii) a number of statements that are not reachable from the axioms, but are reachable by some extension of the axioms.
iii) a number of statements that can never be reached from any "true" state of the system.

The first incompleteness theorem refers to the iii) statements.

Which begs the question of what a system is...?

If there exists a state within a system that is completely disconnected from every other state of the system... that cannot be reached from any other "true" state in the system... In what way is that state part of the system?

Without changing the results of the incompleteness theorem, we could equally view it as a statement that there are languages that describe systems such that a statement can be made that is not truly part of the system being described.

That is, languages used to describe systems may be redundant to the point that statements that are irrelevant to the system can be constructed.

This interpretation radically changes the meaning of Gödel's first incompleteness theorem without challenging its results.
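
The three classes above can be pictured as plain graph reachability. A
minimal sketch in Python (the statement names and one-step derivation
edges are an invented toy system, purely for illustration):

   # Nodes are statements; an edge s -> t means t is derivable from s
   # in one step.  The axioms are the designated starting points.
   derives = {
       "axiom1": ["s1"], "axiom2": ["s2"],
       "s1": ["s3"], "s2": [], "s3": [],
       "island": ["island2"],   # no path from the given axioms
       "island2": [],
   }

   def reachable(starts, graph):
       seen, stack = set(), list(starts)
       while stack:
           node = stack.pop()
           if node not in seen:
               seen.add(node)
               stack.extend(graph.get(node, []))
       return seen

   provable = reachable({"axiom1", "axiom2"}, derives)
   print(sorted(set(derives) - provable))   # ['island', 'island2']

Extending the axiom set (case ii) can make more statements reachable;
statements that stay unreachable under any admissible extension would be
case iii.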
0
Mark
12/13/2013 6:41:25 AM
On 12/12/2013 12:00 AM, Peter Olcott wrote:
> On 12/11/2013 3:40 PM, Andy Walker wrote:

[snip]

>>     This is a category error.  Square circles, like halting
>> testers, do not exist.  The HP, like the question of whether a
>> circle can be made square, does exist as a well-formed problem.
> So you agree that they are analogous?

No, they are not analogous. Let me try to present Andy's argument
using a bit more "computer sciency" terms.

You confuse specification and program.

As an example, take sorting. A specification for a sorting function
would look something like this:

   If T is a list, and T' = sort(T), then T' is a list
   containing exactly the elements of T, such that, if
   i, j are indices and i <= j, then T'[i] <= T'[j].

This specification is not a program. It does not specify how this
sorted list T' is constructed. It only defines what properties the
returned list must have. There are several programs which adhere to
this specification. They can use several algorithms, such as bubble
sort, quicksort, merge sort, etc.
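
To underline the point, a minimal sketch in Python (the two
implementations are arbitrary illustrative choices): two different
programs, one specification, and a check that both meet it.

   def insertion_sort(xs):
       out = []
       for x in xs:
           i = 0
           while i < len(out) and out[i] <= x:
               i += 1
           out.insert(i, x)
       return out

   def merge_sort(xs):
       if len(xs) <= 1:
           return list(xs)
       mid = len(xs) // 2
       left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
       merged = []
       while left and right:
           merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
       return merged + left + right

   def meets_spec(sort, xs):
       # Same elements, in non-decreasing order.
       return sort(xs) == sorted(xs)

   print(meets_spec(insertion_sort, [3, 1, 2]),
         meets_spec(merge_sort, [3, 1, 2]))       # True True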

Now, consider halting. The specification for the program "halt" is:

   If P is (the source code of) a program and x is an input value,
   then halt(P,x)=true if program P halts on x and halt(P,x)=false
   if P doesn't halt on x.

This specification is not a program, because it doesn't say how
the program operates. It only gives conditions on the output. Since
the specification doesn't talk about specifications (only about
programs) there is no self-reference in the specification. Not if
you use mathematical notation, and not if you want to talk about it
in vague {Sentences} in some {Natural Language}.

(On a side note: This specification is what theoretical computer
scientists call the Halting Problem. The word "problem" is another
word for, informally said, a "yes-no question".)

The specification is well-defined. In fact, a program which adheres
to the specification would be very useful. However, such a program
does not exist, because if one assumes one exists, one reaches a
contradiction. The contradiction, however, is not in the
specification. It is used in the proof in a way that is completely
standard: if from the assumption of the existence of Q we can
obtain a contradiction (something "ill-formed" in your terminology),
then Q does not exist.
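
The shape of that standard argument, as a short sketch in
Python-flavoured pseudocode (`halt` here is the assumed decider from the
specification above, deliberately left undefined):

   def D(P):
       # Do the opposite of whatever the assumed decider predicts
       # about running P on its own source.
       if halt(P, P):        # `halt` is the assumed decider
           while True:       # ... then loop forever
               pass
       # ... otherwise return immediately, i.e. halt

   # Does D halt when run on its own source?
   #   halt(D, D) == True   ->  D loops forever  ->  halt answered wrongly
   #   halt(D, D) == False  ->  D halts          ->  halt answered wrongly
   # Either way the assumed decider is wrong about the input (D, D),
   # so no program meeting the specification can exist.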

I hope this helps.

[snip]

regards,
Sander
0
H
12/13/2013 10:30:59 AM
On 12/13/2013 12:41 AM, Mark Lawson wrote:
>> Lemma01: Meaning can only be correctly specified within an acyclic
>>
>> directed graph:
> I disagree with this assumption. In fact I think the converse is true.
I have another discussion in another set of forums that is much
more deeply elaborating this. Since this is most apt in the
sci.lang forum, that is where I am discussing this. The actual
specific subject is the compositionality problem of linguistics.

 From a computer science point of view, try to imagine an
inheritance hierarchy that allows goto statements.

> However, there is something to the larger argument of properly understanding context.
>
> "true" is not well defined.

Yes, that seems to be a brilliant insight. That insight forms a much
clearer basis for the reasoning that I am providing.

Notational convention:
I use {curly_braces} to indicate fully elaborated meaning postulates
from Montague Semantics.

http://plato.stanford.edu/entries/montague-semantics/

The definitions below are mere ballpark approximations of these
exhaustively complete meaning postulates.

The actual meaning postulates would link to other meaning postulates
within a tree (acyclic directed graph) until every meaning of every word
is exhaustively specified. Inheritance of meanings from other more general
meanings would be crucial.

Let's try this to see if it works:
{Truth} is the unidirectional mathematical mapping between abstract
representations of actuality and actuality itself.

{true} is when an abstract representation does map to its actuality.
{false} is when an abstract representation fails to map to its actuality.

Actualities can also include other abstract representations thus forming
a mapping between representations.
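
A minimal sketch of the acyclicity requirement in Python, using the toy
entries just defined (the dependency lists are my own rough guesses,
purely illustrative): each concept maps to the concepts its meaning
postulate depends on, and a depth-first walk flags any cycle.

   postulates = {
       "{truth}": [],
       "{negation}": [],
       "{true}": ["{truth}"],
       "{false}": ["{negation}", "{true}"],
       "{this_sentence_is_false}": ["{false}", "{this_sentence_is_false}"],
   }

   def has_cycle(graph):
       WHITE, GRAY, BLACK = 0, 1, 2        # unvisited / in progress / done
       color = {n: WHITE for n in graph}
       def visit(n):
           color[n] = GRAY
           for dep in graph.get(n, []):
               if color.get(dep, WHITE) == GRAY:
                   return True             # back edge: a cycle
               if color.get(dep, WHITE) == WHITE and visit(dep):
                   return True
           color[n] = BLACK
           return False
       return any(color[n] == WHITE and visit(n) for n in list(graph))

   print(has_cycle(postulates))            # True: the Liar-style entry
   del postulates["{this_sentence_is_false}"]
   print(has_cycle(postulates))            # False: the rest are acyclic

Only the self-referential entry closes a loop; the ordinary entries stay
tree-like, which is the pattern Lemma01 and Lemma02 describe.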

> A plausible mathematical definition of "true" is a statement reachable from a set of axioms.
>
> That is, a proof is true if it starts at some "true" starting point and through a series of "true" steps is arrived at.

Sound conclusions are formed when  reasoning begins with true premises
and applies valid reasoning.

> But Gödel's first incompleteness theorem says there are true statements that cannot be proved.
>
> How can we assert that the statement is true? If we can prove it... then incompleteness doesn't apply. If we can't prove it then we can't know that it is true.
>
> Okay - we can use some other system to prove that the statement is true. However, there are infinitely many other systems - in some the statement will be true, in some it will be false (and some it won't be defined).
I am trying to use the definitions that I provided above for {true} and 
{truth}.
This is essentially the same as the correspondence theory of truth.
http://plato.stanford.edu/entries/truth-correspondence/
This is the basis of Montague Semantics.

> For a given set of axioms there are:
>
> i) a number of statements that are directly reachable from those axioms and/or any true statement within those axioms.
> ii) a number of statements that are not reachable from the axioms, but are reachable by some extension of the axioms.
> iii) a number of statements that can never be reached from any "true" state of the system.
My current position is that the language of mathematics is insufficiently
expressive (very shallow semantics) and natural language is insufficiently
precise to fully understand things such as self-reference paradoxes.

The key insight that you seem to have provided is that the conception
of {true} has not yet been sufficiently formalized. I think that this 
insight is quite aptly put. It is much more succinct than the way that 
I was saying it.

> The first incompleteness theorem refers to the iii) statements.
>
> Which begs the question of what a system is...?
>
> If there exists a state within a system that is completely disconnected from every other state of the system... that cannot be reached from any other "true" state in the system... In what way is that state part of the system?
>
> Without changing the results of the incompleteness theorem, we could equally view it as a statement that there are languages that describe systems such that a statement can be made that is not truly part of the system being described.
>
> That is, languages used to describe systems may be redundant to the point that statements that are irrelevant to the system can be constructed.
>
> This interpretation radically changes the meaning of Gödel's first incompleteness theorem without challenging its results.

0
Peter
12/13/2013 1:37:30 PM
"H.J. Sander Bruggink" <sander.bruggink@uni-due.de> writes:

> On 12/12/2013 12:00 AM, Peter Olcott wrote:
>> On 12/11/2013 3:40 PM, Andy Walker wrote:
>
> [snip]
>
>>>     This is a category error.  Square circles, like halting
>>> testers, do not exist.  The HP, like the question of whether a
>>> circle can be made square, does exist as a well-formed problem.
>> So you agree that they are analogous?
>
> No, they are not analogous. Let me try to present Andy's argument
> using a bit more "computer sciency" terms.
>
> You confuse specification and program.
>
> As an example, take sorting. A specification for a sorting function
> would look something like this:
>
>   If T is a list, and T' = sort(T), then T' is a list
>   containing exactly the elements of T, such that, if
>   i, j are indices and i <= j, then T'[i] <= T'[j].
>
> This specification is not a program. It does not specify how this
> sorted list T' is constructed. It only defines what properties the
> returned list must have. There are several programs which adhere to
> this specification. They can use several algorithms, such as bubble
> sort, quicksort, merge sort, etc.
>
> Now, consider halting. The specification for the program "halt" is:
>
>   If P is (the source code of) a program and x is an input value,
>   then halt(P,x)=true if program P halts on x and halt(P,x)=false
>   if P doesn't halt on x.
>
> This specification is not a program, because it doesn't say how
> the program operates. It only gives conditions on the output. Since
> the specification doesn't talk about specifications (only about
> programs) there is no self-reference in the specification. Not if
> you use mathematical notation, and not if you want to talk about it
> in vague {Sentences} in some {Natural Language}.

What bothers people, though, is that this is the specification of a
program about programs, so there is always the possibility that you
could apply such a program to itself.  Peter mistakes this for some
dangerous kind of logical self-reference, but it is no more illogical
than turning the bathroom scales upside down to see what they weigh.
There are lots of "programs about programs" that do exist -- they give
correct answers in all cases, even about themselves.
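
For instance (a sketch of my own, assuming a purely syntactic property;
the function is invented for illustration):

   # A total "program about programs": it decides whether a program's
   # source text is longer than 100 characters, and halts on every
   # input -- including its own source.
   def source_is_long(program_text):
       return len(program_text) > 100

   if __name__ == "__main__":
       with open(__file__) as f:
           print(source_is_long(f.read()))   # applied to itself: no paradox

Rice's theorem, mentioned below, marks the dividing line: non-trivial
properties of what a program computes are undecidable, while properties
of the program text, like this one, can be decided.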

Last year, Peter spent weeks trying to say what was different about
halting and these other "programs about programs".  He did not know
about Rice's theorem in those days.

<snip>
> The specification is well-defined. In fact, a program which adheres
> to the specification would be very useful. However, such a program
> does not exist, because if one assumes one exists, one reaches a
> contradiction.

The same specification/implementation distinction applies to proofs.
The theorem "no TM decides halting" has many different proofs and not
all rely on assuming that a decider exists.  For obvious reasons, Peter
has refused to look at these other proofs.

I should stress that I agree wholeheartedly that the specification is
well-defined.  There absolutely is a mathematical function from the set
of TMs to the set {true, false} that determines halting, and it is
perfectly valid to ask if there is a TM that implements this function.

> The contradiction, however, is not in the
> specification.

But here I am tempted to disagree, though it may be just a different use
of words.  I would argue that the specification of all non-existent
objects contains a contradiction, though it is sometimes hard to draw it
out.  For example, when talking about N:

  n such that n < 0
  n with n > 3 and n < 2
  n > 2 and prime(n)
  prime(n) such that for all m > n not(prime(m))
  n > 2 such that there are a, b and c with a^n + b^n = c^n
  n such that odd(n) and perfect(n)

do any of these specifications contain a contradiction in your sense?
As far as I am concerned they all do (though my hat is ready to be baked
if the last one exists).
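
A bounded search makes the flavour of these specifications concrete (a
quick sketch; a finite search naturally settles nothing about the open
last case):

   def is_perfect(n):
       # n equals the sum of its proper divisors, e.g. 6 = 1 + 2 + 3.
       return n > 1 and sum(d for d in range(1, n) if n % d == 0) == n

   specs = {
       "n < 0":           lambda n: n < 0,
       "n > 3 and n < 2": lambda n: n > 3 and n < 2,
       "odd perfect n":   lambda n: n % 2 == 1 and is_perfect(n),
   }
   for name, pred in specs.items():
       found = [n for n in range(5000) if pred(n)]
       print(name, "->", found if found else "no witness below 5000")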

If any of the things involved were different (the system of arithmetic,
the meaning of N, the definition of prime, and so on) then some of these
might specify a number.  That they don't, follows logically from these
definitions.

In the physical world, things can be non-existent either because their
specification is contradictory (dry water, carbon-free CO2) or because
they simply don't exist (a taxidermist named Cyril currently living in
Ayacucho).  This distinction is missing in mathematics.

<snip>
-- 
Ben.
0
Ben
12/13/2013 1:47:37 PM
On 12/12/2013 3:11 PM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
> <snip>
>> You already acknowledged the key element of my position.
> For at least the last 18 months, the key element of your position has
> been that halting involves some question to which there is no correct
> answer.

Meaning exactly this:
12/10/2013 8:14 AM, Ben Bacarisse wrote:
decision problems of arithmetic and halting and so on ...
contain, in their very definitions, the reasons why they are impossible.

https://groups.google.com/forum/#!original/sci.logic/pDQHZ5_zEp4/4QXHZpTXfOUJ

> You've given this property of a question several different
> names, but you've never given an example that applies to halting.
> That's because all halting questions (does M halt on input I), and the
> bigger question of decidability, all have correct answers.
>
> If this is not the key element of your position, you've done your best
> to make everyone think it is.
>
> I am not sure what "acknowledged" means here.  I certainly acknowledge
> that this is your position.  I've understood it for some time.  Are you
> hinting that I agree with it?  I don't.
>
>> The remaining
>> details are difficult to understand because they have not been
>> understood previously; thus we lack a lingua franca to discuss them.
> What is the point of details when the key element of your position is
> wrong?  Do you no longer believe that halting is "wrong", "invalid",
> "pathological" or "ill-formed" (whatever the word du jour is) because it
> asks a question to which there is no correct answer?  Have you come to
> accept that the undecidability of halting is simply a logical
> consequence of TM computation?  If you have made such a radical shift,
> you've kept it to yourself.
>

0
Peter
12/13/2013 4:14:59 PM
Peter Olcott <OCR4Screen> writes:

> On 12/12/2013 3:11 PM, Ben Bacarisse wrote:
>> Peter Olcott <OCR4Screen> writes:
>> <snip>
>>> You already acknowledged the key element of my position.
>> For at least the last 18 months, the key element of your position has
>> been that halting involves some question to which there is no correct
>> answer.
>
> Meaning exactly this:
> 12/10/2013 8:14 AM, Ben Bacarisse wrote:
> decision problems of arithmetic and halting and so on ...
> contain, in their very definitions, the reasons why they are
> impossible.

You have misunderstood me.  I don't see the connection between what you
claim (that there is no correct answer) and the fact that certain
decision problems are unsolvable.  One of the defining characteristics
of a decision problem is that there must always be a correct
yes/no answer.  (As usual, it would help if you knew the terms used in
the subject, it might avoid simple misunderstanding like this.)

-- 
Ben.
0
Ben
12/13/2013 4:35:17 PM
On 12/13/2013 10:35 AM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 12/12/2013 3:11 PM, Ben Bacarisse wrote:
>>> Peter Olcott <OCR4Screen> writes:
>>> <snip>
>>>> You already acknowledged the key element of my position.
>>> For at least the last 18 months, the key element of your position has
>>> been that halting involves some question to which there is no correct
>>> answer.
>> Meaning exactly this:
>> 12/10/2013 8:14 AM, Ben Bacarisse wrote:
>> decision problems of arithmetic and halting and so on ...
>> contain, in their very definitions, the reasons why they are
>> impossible.
> You have misunderstood me.  I don't see the connection between what you
> claim (that there is no correct answer) and the fact that certain
> decision problems are unsolvable.
It is not merely that they are unsolvable, it is as you said they are 
defined to be impossible to solve.

> One of the defining characteristics
> of a decision problem is that there must always be a correct
> yes/no answer.  (As usual, it would help if you knew the terms used in
> the subject, it might avoid simple misunderstanding like this.)

0
Peter
12/13/2013 5:06:07 PM
On 13/12/13 13:47, Ben Bacarisse wrote:
[...]
>    n > 2 and prime(n)
[...]
> do any of these specifications contain a contradiction in your sense?
> As far as I am concerned they all do [...]

	I confess to a certain surprise that you seem to think there
are no primes greater than 2.

-- 
Andy Walker,
Nottingham.
0
Andy
12/13/2013 7:20:28 PM
Peter Olcott <OCR4Screen> writes:

> On 12/13/2013 10:35 AM, Ben Bacarisse wrote:
>> Peter Olcott <OCR4Screen> writes:
>>
>>> On 12/12/2013 3:11 PM, Ben Bacarisse wrote:
>>>> Peter Olcott <OCR4Screen> writes:
>>>> <snip>
>>>>> You already acknowledged the key element of my position.
>>>> For at least the last 18 months, the key element of your position has
>>>> been that halting involves some question to which there is no correct
>>>> answer.
>>> Meaning exactly this:
>>> 12/10/2013 8:14 AM, Ben Bacarisse wrote:
>>> decision problems of arithmetic and halting and so on ...
>>> contain, in their very definitions, the reasons why they are
>>> impossible.
>> You have misunderstood me.  I don't see the connection between what you
>> claim (that there is no correct answer) and the fact that certain
>> decision problems are unsolvable.

> It is not merely that they are unsolvable, it is as you said they are
> defined to be impossible to solve.

That's a historical statement about motivation.  You'd have to study the
history, and maybe even the psychology, of the various players involved
in developing the subject before you could be sure of the intent behind
all the various definitions.  And even then you might never be able to
find out.

To take one example, the completeness of arithmetic was defined long
before it was shown to be incomplete.  Completeness was not defined to
be impossible, it was defined in a spirit of hope.  Incompleteness came
as a deep disappointment to many.

>> One of the defining characteristics
>> of a decision problem is that there must always be a correct
>> yes/no answer.  (As usual, it would help if you knew the terms used in
>> the subject, it might avoid simple misunderstanding like this.)

You didn't comment on this, nor did you try to point out a connection
between what I said and your claim about "correct answers".  Would it be
true to say that you've abandoned your "no correct answer" criterion
and want to use mine instead?

-- 
Ben.
0
Ben
12/13/2013 7:31:15 PM
Andy Walker <news@cuboid.co.uk> writes:

> On 13/12/13 13:47, Ben Bacarisse wrote:
> [...]
>>    n > 2 and prime(n)
> [...]
>> do any of these specifications contain a contradiction in your sense?
>> As far as I am concerned they all do [...]
>
> 	I confess to a certain surprise that you seem to think there
> are no primes greater than 2.

Good catch, thanks.  I meant even primes greater than 2.

-- 
Ben.
0
Ben
12/13/2013 7:33:17 PM
On 12/13/2013 12:45 PM, DKleinecke wrote:
> On Thursday, December 12, 2013 12:15:32 PM UTC-8, Peter Olcott wrote:
>> On 12/12/2013 12:17 PM, DKleinecke wrote:
>>> you need to define "complexity" and in order to use it the way you did you would have to define it as a linear measure (in order to recognize "minimal").
>> Within the context of my proposal I would at least initially envision that complexity would be minimized by minimizing the number of connections between concepts.
>   
> I cannot help hearing in this idea echoes of Chomsky's speculations about how to define the evaluative measure of grammars which would pick out the "best" grammar. A fair amount was written on the subject but it became apparent that no satisfactory conclusion was in sight and the matter was dropped.
In software engineering there is something called a layered architecture:
http://en.wikipedia.org/wiki/Multilayered_architecture

There is also the concept of abstract design (the functional results 
that must be achieved) and implementation details (exactly step-by-step 
how to implement the design). The natural language layer of the design, 
including natural language syntax, is an implementation detail that 
must be ignored when designing the ontology.

>> It has seemed to me that Truth itself is essentially the unidirectional
>> mathematical mapping between abstract representations of actuality and
>> actuality itself.
>   
>> The concept of {true} is derived from the concept of {truth} and
>> contains all of this mapping and other details. The concept of {false}
>> would be the {negation} of {true}.
>   
> It is hard (impossible?) to offer a principled reason why true has any priority over false. It seems to me that there are many more false propositions than true ones.
You just answered your own question.

>   (Since both sets are countably infinite I need to define "many more". I mean that if you take a random statement in proposition form the probability it is false will be much greater than 50%.)
The set of every possible conception that is presently known will always 
be a finite set.

>
>> I am not aware of any case where there would be a need for any cycles in
>> the graph. Could you cite some examples, then maybe I can see if I can
>> resolve them to test my theory.
>   
> I believe the example of true-false is such an example. Any number of other pairs - like lucky-unlucky - would also serve. I think kinship systems provide more elaborate examples of situations where concepts form a cyclic network.
Merely define one, and define the other in terms of the negation of the 
one, no cycles required.

>
>> I may not have the best terminology; maybe you can help me
>> here. The meaning that I intend is that all of these natural language
>> related complexities (is there a single term for this?) would be
>> essentially ignored within the actual knowledge representation (KR)
>> ontology itself. I am pretty sure that this can be accomplished because
>> I can see many of the details of how this would be accomplished.
>   
> I think you don't mean "ignored". I hope you mean "provided for". I would have little respect for an ontology that ignores the difference between, say, "go" and "went".
At the ontological layer of the design, other layers really must be 
ignored. By making a layered design in which each layer can safely 
ignore the others, otherwise infeasible complexity is unraveled into 
feasibility.

1) "He went"
2) "He will go"

The essential meaning of the words "go" and "went" would be unified within
the {Event}: {Transportation_Of_Self} {From_Location} {To_Location} 
{Within_Time_Frame}.

This is an extremely rough example of how the essential meaning of these
two words could be unified. It is only intended to show that unification is
possible.

Since these words have different shades of meaning, only those shades
that are identical would be unified. Although you can command someone
to "go", you cannot command someone to "went".
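
Here is a minimal sketch of how such a unification might be stored
(the slot names are invented for illustration): both surface words
resolve to the same underlying event, and the tense difference
survives only in the time-frame slot.

# Hypothetical sketch: "go" and "went" share one event schema;
# only the {Within_Time_Frame} slot differs.
TRANSPORT_SELF = {
    "event":         "Transportation_Of_Self",
    "from_location": None,    # filled in from discourse context
    "to_location":   None,    # filled in from discourse context
}

def meaning(word):
    schema = dict(TRANSPORT_SELF)
    if word == "go":
        schema["within_time_frame"] = "non-past"
    elif word == "went":
        schema["within_time_frame"] = "past"
    else:
        raise ValueError("word not modeled in this sketch")
    return schema

# Both words carry the same essential meaning:
assert meaning("go")["event"] == meaning("went")["event"]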

>
>> We would not be representing (within the ontology) English words and
>> their meanings per se. Instead we would be directly representing the
>> essential meanings themselves as pure conceptions that are (as much as
>> possible) independent of their (specific human language) form of
>> representation.
>>
>> Each of these pure conceptions will at least closely approximate a word
>> or phrase within natural language.
> Doesn't what you are proposing correspond in a fashion to the Chomskian innate language capability?
No, the ideas that I presented above are directly from the Knowledge
Representation (KR) aspect of Artificial Intelligence (AI) within
computer science.

> Couldn't it be described as a model of how that innate human capability might work? I think you would be better off referring to Chomsky on linguistic matters than Montague.

Montague's work provides the basis on which all conceptions can be
represented as mathematical formalisms. This degree of specificity is
required to provide human-like comprehension to machines. Ultimately,
when humans comprehend things, ambiguity is resolved.
0
Peter
12/13/2013 7:36:19 PM
On 12/13/2013 1:31 PM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 12/13/2013 10:35 AM, Ben Bacarisse wrote:
>>> Peter Olcott <OCR4Screen> writes:
>>>
>>>> On 12/12/2013 3:11 PM, Ben Bacarisse wrote:
>>>>> Peter Olcott <OCR4Screen> writes:
>>>>> <snip>
>>>>>> You already acknowledged the key element of my position.
>>>>> For at least the last 18 months, the key element of your position has
>>>>> been that halting involves some question to which there is no correct
>>>>> answer.
>>>> Meaning exactly this:
>>>> 12/10/2013 8:14 AM, Ben Bacarisse wrote:
>>>> decision problems of arithmetic and halting and so on ...
>>>> contain, in their very definitions, the reasons why they are
>>>> impossible.
>>> You have misunderstood me.  I don't see the connection between what you
>>> claim (that there is no correct answer) and the fact that certain
>>> decision problems are unsolvable.
>> It is not merely that they are unsolvable, it is as you said they are
>> defined to be impossible to solve.
> That's a historical statement about motivation.
"contain, in their very definitions" is not a historical statement about 
motivation.

> You'd have to study the
> history, and maybe even the psychology, of the various players involved
> in developing the subject before you could be sure of the intent behind
> all the various definitions.  And even then you might never be able to
> find out.
When the definition of a problem (the formalized specification of the
problem) contains its own impossibility, these "decision problems...
are inherently problematic".

> To take one example, the completeness of arithmetic was defined long
> before it was shown to be incomplete.  Completeness was not defined to
> be impossible, it was defined in a spirit of hope.  Incompleteness came
> as a deep disappointment to many.
>
>>> One of the defining characteristics
>>> of a decision problem is that there must always be a correct
>>> yes/no answer.  (As usual, it would help if you knew the terms used in
>>> the subject, it might avoid simple misunderstanding like this.)
> You didn't comment on this, nor did you try to point out a connection
> between what I said and your claim about "correct answers".  Would it be
> true to say that you've abandoned your "no correct answer" criterion
> and want to use mine instead?
>
We already went through this too many times.
I provide a very simple clear example that proves my point and you 
ignore it.
0
Peter
12/13/2013 8:00:39 PM
Peter Olcott <OCR4Screen> writes:

> On 12/13/2013 1:31 PM, Ben Bacarisse wrote:
>> Peter Olcott <OCR4Screen> writes:
>>
>>> On 12/13/2013 10:35 AM, Ben Bacarisse wrote:
>>>> Peter Olcott <OCR4Screen> writes:
>>>>
>>>>> On 12/12/2013 3:11 PM, Ben Bacarisse wrote:
>>>>>> Peter Olcott <OCR4Screen> writes:
>>>>>> <snip>
>>>>>>> You already acknowledged the key element of my position.
>>>>>> For at least the last 18 months, the key element of your position has
>>>>>> been that halting involves some question to which there is no correct
>>>>>> answer.
>>>>> Meaning exactly this:
>>>>> 12/10/2013 8:14 AM, Ben Bacarisse wrote:
>>>>> decision problems of arithmetic and halting and so on ...
>>>>> contain, in their very definitions, the reasons why they are
>>>>> impossible.
>>>> You have misunderstood me.  I don't see the connection between what you
>>>> claim (that there is no correct answer) and the fact that certain
>>>> decision problems are unsolvable.
>>> It is not merely that they are unsolvable, it is as you said they are
>>> defined to be impossible to solve.
>> That's a historical statement about motivation.

> "contain, in their very definitions" is not a historical statement
> about motivation.

No, of course -- those are my words.  It's yours I am objecting to.  I am
very happy for you to switch to using my interpretation of halting, but
you should be sure you know what you are signing up to first!

>> You'd have to study the
>> history, and maybe even the psychology, of the various players involved
>> in developing the subject before you could be sure of the intent behind
>> all the various definitions.  And even then you might never be able to
>> find out.

> When the definition of a problem (the formalized specification of the
> problem) contains its own impossibility, these "decision problems...
> are inherently problematic".

Yes.

>> To take one example, the completeness of arithmetic was defined long
>> before it was shown to be incomplete.  Completeness was not defined to
>> be impossible, it was defined in a spirit of hope.  Incompleteness came
>> as a deep disappointment to many.
>>
>>>> One of the defining characteristics
>>>> of a decision problem is that there must always be a correct
>>>> yes/no answer.  (As usual, it would help if you knew the terms used in
>>>> the subject, it might avoid simple misunderstanding like this.)
>> You didn't comment on this, nor did you try to point out a connection
>> between what I said and your claim about "correct answers".  Would it be
>> true to say that you've abandoned your "no correct answer" criterion
>> and want to use mine instead?
>>
> We already went through this too many times.
> I provide a very simple clear example that proves my point and you
> ignore it.

I don't believe you.

-- 
Ben.
0
Ben
12/13/2013 8:36:04 PM
On 12/10/2013 3:23 PM, Peter Olcott wrote:
> On 12/10/2013 12:50 PM, Tak To wrote:
>> On 12/8/2013 9:29 AM, Peter Olcott wrote:
>>> On 12/7/2013 3:02 PM, Tak To wrote:
>>>> On 12/6/2013 10:24 PM, Peter Olcott wrote:
>>>>> On 12/6/2013 3:36 PM, Tak To wrote:
>>>>>> On 12/6/2013 12:52 PM, Doc O'Leary wrote:
>>>>>>> In article <e9udnTtlbOtHWj3PnZ2dnUVZ_qmdnZ2d@giganews.com>,
>>>>>>>     Peter Olcott <OCR4Screen> wrote:
>>>>>>>
>>>>>>>>> Perhaps you do not understand it as well as you think you do.  I myself
>>>>>>>>> have already pointed out that to "extend" it essentially results in an
>>>>>>>>> infinite specification.  The fundamental problem is implicit vs.
>>>>>>>>> explicit meaning.
>>>>>>>>>
>>>>>>>> Could you provide a specific concrete example?

>>>>>>> You have yourself!  The very use of "you" requires a person to draw on
>>>>>>> an internal context to give meaning to the word.  Natural languages are
>>>>>>> *full* of such implicit constructs, which really have no explicit
>>>>>>> meaning unless you're dealing with a like-minded individual.  The
>>>>>>> mistake you're making is in thinking that language is a data store
>>>>>>> rather than a communication channel.

>>>>>> Well put.
>>>>>>
>>>>>> Another way of looking at it is that language
>>>>>> contains "hints" for finding "meaning" (i.e.,
>>>>>> for understanding) rather than "meaning" itself.
>>>>> For the meaning of direct physical sensations it may not be as much as
>>>>> hints, such as explaining exactly what a rainbow looks like to one who
>>>>> has always been blind.  It seems to me that the meaning of all
>>>>> conceptual knowledge can be exhaustively encoded within Montague Grammar
>>>>> and its enhancements.
>>
>>>> Before tackling the abstract notion of "meaning",
>>>> let's backtrack a bit and focus on what "understanding
>>>> (a meaning)" is.  Borrowing from Turing's Test, to
>>>> have understood X (or to have knowledge of X) is to be
>>>> able to answer questions about X (to the extent of one's
>>>> understanding/knowledge).  Thus, one needs a model or
>>>> representation of knowledge about X, as well as a way
>>>> to draw inferences from the representation.  This is
>>>> precisely what the field of "Knowledge Representation"
>>>> (KR) in Cognitive Science is about.
>>>
>>> Yes. This is the first thing that I read about in my quest.
>>> What I am about to say may seem quite terse, yet it may also be apt:
>>> The key to (KR) is to fully elaborate the inherent rules of the
>>> compositionality of natural language.
>>
>> No. See below.
>>
>>>> As two people communicating with each other through
>>>> language, each will add new constructions to his
>>>> own internal representation of knowledge based on
>>>> the input from the other.  Note that each person
>>>> would build his internal representation in his
>>>> own way,
>>>
>>> I would call this the discourse ontology.
>>
>> I don't know what "ontology" means here.  Do you
>> mean something like a rule system?
> 
> http://en.wikipedia.org/wiki/Cyc

OK.

>>>> and the summation of all the representations
>>>> (let's call it one's Model Of Everything -- MOE) is
>>>> the mind itself.
>>>>
>>>> Note also that the form of one's internal MOE has
>>>> no relationship to the internal structure of the
>>>> language at all.  In this light, one can say that
>>>> language carries no meaning by itself but will create
>>>> meaning (and understanding) when interpreted by a mind.
>>>>
>>>> And if one insists on calling the information carried
>>>> in the language constructs (words, phrases, sentences,
>>>> etc) "meaning", then one must remember that language-
>>>> meaning and MOE-meaning are at two distinct levels
>>>> of abstraction, with totally different epistemological
>>>> "primitives" (including axioms).
>>
>>> If I understand you correctly this would be the natural language of the
>>> discourse mapping to the discourse ontology mapping to the base
>>> ontology. The base ontology is something like your MOE.
>>
>> I don't know what your "ontology" is, much less
>> "base ontology".
> 
> A base ontology would be like a dictionary that explicitly defines every 
> detail of every word. For example the concept of {sphere} will be 
> directly linked to most all of analytical geometry.
> 
> (see above)

OK.

I still don't agree that there is a "discourse ontology".

> It all logically follows from the fundamental basis that Richard 
> Montague provided, typically referred to the Montague Grammar of semantics.

I don't see how a system like Cyc (which looks to me
to be Prolog-like) "follows from" Montague Grammar.  At
best we can say they are different attempts to
conceptualize the same things.

One can express recursive functions in terms of
Turing machines, but it does not make the latter
unique in terms of computational power.  Turing
machines can be expressed in terms of recursive
functions as well.  Montague Grammar is not unique
in being able to form constructs that bear
meaning in a fundamental way.

>> In any case, the term "mapping" seems to imply that
>> one is at a level of abstract comparable to that of
>> the other, which is contrary to my view.
> 
> All this mapping is fundamentally based on the correspondence theory of 
> truth. Montague added several more aspects to this conception:
> model theoretic, possible worlds.

What is "the correspondence theory of truth"?

>> Note that one's MOE is built entirely from empirical
>> experiences.  All the language rules are induced from
>> past experiences of discourse.  Each unit (word,
>> phrase, etc) carries a summary of past usage.  Likewise
>> for each abstract concept.  One can't point to a
>> sub-part of the MOE and say, here is the equivalent
>> of the concept "I", or "if", or "mother".  There
>> is no "mapping".
> 
> The mapping from the term to its meaning.
> This is not your mother--->"mother"
> 
> It is only a set of symbols that are a stand-in
> for the general concept of a {mother}.

I don't understand the "only" part.  Perhaps a
concrete set of symbols (primitives?) would help.

> Sure there are two linguistic mappings:
> extension and intension.

Is there a difference between extension and intension
at the MOE level?

> Montague added at least two others:
> 1) Model of a possible world
> 2) A Possible world itself

Sorry, I don't understand these terms in the present
context.

>> Consider a computer executing a BASIC interpreter
>> which is executing a BASIC program.  There is the
>> logic of the BASIC program, the logic of the BASIC
>> interpreter, the logic of machine instruction,
>> the logic of the microcode, the logic of the digital
>> circuitry, as well as the logic (i.e., physical laws)
>> of the electronics.  Each is at a different level of
>> abstraction.  There is no "mapping" across the levels.
> 
> {mapping} as in the abstraction of a mathematical mapping, in other 
> words a precise one-way correspondence between one thing and another.
> 
> In this case your example perfectly showed the concept of mapping.

? If you are in fact agreeing with my view of things, then
you would have to agree that language-meaning has to
be built on top of MOE-meaning.  You can't start with
discourse and work back.

>> There is a word -- innumeracy -- that describes,
>> among other things, the inability to grasp the
>> magnitude of a number in a context.  I wish there
>> were a word that would describe the inability to
>> grasp complexity and level of abstraction.
>> That would have saved me a lot of time when
>> discussing cognitive science issues.
>>
>>>> In any case, no matter what one is trying to model
>>>> -- language-meaning or MOE-meaning -- in a tractable
>>>> scale, one is immediately faced with the fact that
>>>> the selection of "primitives" is almost entirely
>>>> arbitrary.  With this, very few claims of universality
>>>> can be justified.  In other words, what works in
>>>> a micro-reality such as SHRDLU cannot be generalized
>>>> to the real world at all.  In short, why bother?
>>
>>> There is already an inherent natural structure that fully elaborates
>>> every detail of all of the rules of natural language compositionality
>>> (NLC) that only need be discovered. Once these rules have been fully
>>> elaborated the (KR) problem would seem to be fully addressed.
>>
>> Again, these "rules of NLC" (if definable at all)
>> are at a disparate level of abstraction and are thus
>>   ill-suited for knowledge representation.
> 
> It may seem that way. I am not referring to how these things are 
> generally represented within natural language. I am referring to a 
> mathematical formalism that could represent the pure conceptions that 
> the natural language is referring to. Montague Grammar would be the basis.

Which "pure concepts"?  At times it seems that you
are focusing too much on the mathematical formalism
and ignoring the importance of choosing the primitives.

>>>> I also don't see why one would choose to use the
>>>> notation of a language/grammar to "encode" meaning.
>>>> Why not use, say, some form of lambda calculus
>>>> instead? At the very least, one can easily build
>>>> rules/relationships and entities out of existing
>>>> rules/relationships and entities.
>>>
>>> I think that Montague proposed this, and this encoding may form the
>>> basis for his semantic grammar. Ultimately these meaning postulates must
>>> map to the equivalent of the human mind's representation. I
>>
>> Why repeat his mistakes?
> 
> Perhaps he made some mistakes, I did not notice any. I had already 
> thought up this gist of his ideas before I ever read anything about his 
> work.
> 
> <not credible (but true)>
> One of these things that I thought up on my own was the correspondence 
> theory of truth. Years later I read that it already had a name.
> </not credible (but true)>
> 
> It was just last year that I decided to start reading about what others 
> had done, so I read through a book that I bought back in 1995: Formal 
> Semantics An Introduction by Ronnie Cann.
> 
> I also bought Knowledge Representation and Reasoning by Brachman and 
> Levesque because this is my real interest. The problem with the KR 
> approach is that it starts with a simple understanding and attempts to 
> extend it to larger problems, so the KR system has to be continually 
> redesigned to accommodate these enhancements as greater understanding is 
> achieved.

Indeed.  As I said, any tractable system requires
a set of primitives[*] that are basically arbitrary.

[*] rules of composition

> The advantage of first exhaustively solving the compositionality problem 
> within linguistics is that the final design of the knowledge 
> representation (KR) system is fully robust. No more little toy systems 
> that can provide tiny increments of progress. Solve the big problem 
> first, and then everything else falls right into place.

The project will then be too complex to be tractable.
And being intractable in turn means that one can't
demonstrate the validity of one's choice of
primitives.  Everything will just become hand waving.

I don't see how using Montague Grammar or any other formalism
would reduce the complexity involved.  However, I would
recommend choosing a formalism in which parametrization
can be expressed easily (like the Prolog like constructs
of Cyc).

> Although the initial problem is larger, one gets to an optimal complete 
> solution in minimal time.

Good luck.

>>>>>>>>> Billy thinks Sally said the ball was red.
>>>>>>>> I am just throwing this together from imperfect memory, but, it is
>>>>>>>> something like this:
>>>>>>>>
>>>>>>>> BILLY {believes} (SALLY {said} "The ball was red")
>>>>>>> I'm sorry, but I fail to see any "atoms of meaning" in that rudimentary
>>>>>>> addition of notation.  The word "said" alone remains entirely ambiguous,
>>>>>>> because in common usage that can refer to all manner of spoken or
>>>>>>> written statements.  I also don't understand why you decided not to
>>>>>>> notate ball and red, instead treating the information as if it were a
>>>>>>>> quotation.  There is a real incompleteness in the ideas you put forth.

Tak
--
----------------------------------------------------------------+-----
Tak To                                            takto@alum.mit.eduxx
--------------------------------------------------------------------^^
 [taode takto ~{LU5B~}]      NB: trim the xx to get my real email addr


0
Tak
12/13/2013 9:13:21 PM
On 12/13/2013 2:36 PM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 12/13/2013 1:31 PM, Ben Bacarisse wrote:
>>> Peter Olcott <OCR4Screen> writes:
>>>
>>>> On 12/13/2013 10:35 AM, Ben Bacarisse wrote:
>>>>> Peter Olcott <OCR4Screen> writes:
>>>>>
>>>>>> On 12/12/2013 3:11 PM, Ben Bacarisse wrote:
>>>>>>> Peter Olcott <OCR4Screen> writes:
>>>>>>> <snip>
>>>>>>>> You already acknowledged the key element of my position.
>>>>>>> For at least the last 18 months, the key element of your position has
>>>>>>> been that halting involves some question to which there is no correct
>>>>>>> answer.
>>>>>> Meaning exactly this:
>>>>>> 12/10/2013 8:14 AM, Ben Bacarisse wrote:
>>>>>> decision problems of arithmetic and halting and so on ...
>>>>>> contain, in their very definitions, the reasons why they are
>>>>>> impossible.
>>>>> You have misunderstood me.  I don't see the connection between what you
>>>>> claim (that there is no correct answer) and the fact that certain
>>>>> decision problems are unsolvable.
>>>> It is not merely that they are unsolvable, it is as you said they are
>>>> defined to be impossible to solve.
>>> That's a historical statement about motivation.
>> "contain, in their very definitions" is not a historical statement
>> about motivation.
> No of course -- those are my words.  It's yours I am objecting to.  I am
> very happy for you to switch to using my interpretation of halting, but
> you should be sure you know what you are signing up to first!
>
>>> You'd have to study the
>>> history, and maybe even the psychology, of the various players involved
>>> in developing the subject before you could be sure of the intent behind
>>> all the various definitions.  And even then you might never be able to
>>> find out.
>> When the definition of a problem (the formalized specification of the
>> problem) contains its own impossibility, these "decision problems...
>> are inherently problematic".
> Yes.

So we agree then. We just use different terminology to describe these 
identical views.

When a problem is defined such that the specification of the problem
itself causes this problem to be impossible to solve, this is exactly
what I mean by an ill-formed question.

The simple example that I provided was the question:
What time is it (yes or no)?

In this case the specification of the question causes this question to 
have no possible correct answer.

>
>>> To take one example, the completeness of arithmetic was defined long
>>> before it was shown to be incomplete.  Completeness was not defined to
>>> be impossible, it was defined in a spirit of hope.  Incompleteness came
>>> as a deep disappointment to many.
>>>
>>>>> One of the defining characteristics
>>>>> of a decision problem is that there must always be a correct
>>>>> yes/no answer.  (As usual, it would help if you knew the terms used in
>>>>> the subject, it might avoid simple misunderstanding like this.)
>>> You didn't comment on this, nor did you try to point out a connection
>>> between what I said and your claim about "correct answers".  Would it be
>>> true to say that you've abandoned your "no correct answer" criterion
>>> and want to use mine instead?
>>>
>> We already went through this too many times.
>> I provide a very simple clear example that proves my point and you
>> ignore it.
> I don't believe you.
>

0
Peter
12/13/2013 9:36:57 PM
On 12/13/2013 3:13 PM, Tak To wrote:
> On 12/10/2013 3:23 PM, Peter Olcott wrote:
>> On 12/10/2013 12:50 PM, Tak To wrote:
>>> On 12/8/2013 9:29 AM, Peter Olcott wrote:
>>>> On 12/7/2013 3:02 PM, Tak To wrote:
>>>>> On 12/6/2013 10:24 PM, Peter Olcott wrote:
>>>>>> On 12/6/2013 3:36 PM, Tak To wrote:
>>>>>>> On 12/6/2013 12:52 PM, Doc O'Leary wrote:
>>>>>>>> In article <e9udnTtlbOtHWj3PnZ2dnUVZ_qmdnZ2d@giganews.com>,
>>>>>>>>      Peter Olcott <OCR4Screen> wrote:
>>>>>>>>
>>>>>>>>>> Perhaps you do not understand it as well as you think you do.  I myself
>>>>>>>>>> have already pointed out that to "extend" it essentially results in an
>>>>>>>>>> infinite specification.  The fundamental problem is implicit vs.
>>>>>>>>>> explicit meaning.
>>>>>>>>>>
>>>>>>>>> Could you provide a specific concrete example?
>>>>>>>> You have yourself!  The very use of "you" requires a person to draw on
>>>>>>>> an internal context to give meaning to the word.  Natural languages are
>>>>>>>> *full* of such implicit constructs, which really have no explicit
>>>>>>>> meaning unless you're dealing with a like-minded individual.  The
>>>>>>>> mistake you're making is in thinking that language is a data store
>>>>>>>> rather than a communication channel.
>>>>>>> Well put.
>>>>>>>
>>>>>>> Another way of looking at it is that language
>>>>>>> contains "hints" for finding "meaning" (i.e.,
>>>>>>> for understanding) rather than "meaning" itself.
>>>>>> For the meaning of direct physical sensations it may not be as much as
>>>>>> hints, such as explaining exactly what a rainbow looks like to one who
>>>>>> has always been blind.  It seems to me that the meaning of all
>>>>>> conceptual knowledge can be exhaustively encoded within Montague Grammar
>>>>>> and its enhancements.
>>>>> Before tackling the abstract notion of "meaning",
>>>>> let's backtrack a bit and focus on what "understanding
>>>>> (a meaning)" is.  Borrowing from Turing's Test, to
>>>>> have understood X (or to have knowledge of X) is to be
>>>>> able to answer questions about X (to the extent of one's
>>>>> understanding/knowledge).  Thus, one needs a model or
>>>>> representation of knowledge about X, as well as a way
>>>>> to draw inferences from the representation.  This is
>>>>> precisely what the field of "Knowledge Representation"
>>>>> (KR) in Cognitive Science is about.
>>>> Yes. This is the first thing that I read about in my quest.
>>>> What I am about to say may seem quite terse, yet it may also be apt:
>>>> The key to (KR) is to fully elaborate the inherent rules of the
>>>> compositionality of natural language.
>>> No. See below.
>>>
>>>>> As two people communicating with each other through
>>>>> language, each will add new constructions to his
>>>>> own internal representation of knowledge based on
>>>>> the input from the other.  Note that each person
>>>>> would build his internal representation in his
>>>>> own way,
>>>> I would call this the discourse ontology.
>>> I don't know what "ontology" means here.  Do you
>>> mean something like a rule system?
>> http://en.wikipedia.org/wiki/Cyc
> OK.
>
>>>>> and the summation of all the representations
>>>>> (let's call it one's Model Of Everything -- MOE) is
>>>>> the mind itself.
>>>>>
>>>>> Note also that the form of one's internal MOE has
>>>>> no relationship to the internal structure of the
>>>>> language at all.  In this light, one can say that
>>>>> language carries no meaning by itself but will create
>>>>> meaning (and understanding) when interpreted by a mind.
>>>>>
>>>>> And if one insists on calling the information carried
>>>>> in the language constructs (words, phrases, sentences,
>>>>> etc) "meaning", then one must remember that language-
>>>>> meaning and MOE-meaning are at two distinct levels
>>>>> of abstraction, with totally different epistemological
>>>>> "primitives" (including axioms).
>>>> If I understand you correctly this would be the natural language of the
>>>> discourse mapping to the discourse ontology mapping to the base
>>>> ontology. The base ontology is something like your MOE.
>>> I don't know what your "ontology" is, much less
>>> "base ontology".
>> A base ontology would be like a dictionary that explicitly defines every
>> detail of every word. For example the concept of {sphere} will be
>> directly linked to most all of analytical geometry.
>>
>> (see above)
> OK.
>
> I still don't agree that there is a "discourse ontology".
This is my own idea. It is required because one must have some way of 
storing the pragmatic details of the specific situation's context.

>
>> It all logically follows from the fundamental basis that Richard
>> Montague provided, typically referred to the Montague Grammar of semantics.
> I don't see how a system like Cyc (which looks to me
> to be Prolog-like) "follows from" Montague Grammar.  At
> best we can say they are different attempts to
> conceptualize the same things.
I am not saying CYC exactly as they are doing it; I am saying
something like CYC that meets a similar goal.

> One can express recursive functions in terms of
> Turing machines, but it does not make the latter
> unique in terms of computational power.  Turing
> machines can be expressed in terms of recursive
> functions as well.  Montague Grammar is not unique
> in being able to form constructs that bear
> meaning in a fundamental way.
I am talking about representing a machine's comprehension of language 
that becomes at least equivalent to that of a human. The structure of 
the knowledge is crucial to this goal; the underlying machine 
architecture is irrelevant.

To build an ontology capable of representing every conception that can 
be conceived, we must totally understand how these conceptions connect 
together.

>>> In any case, the term "mapping" seems to imply that
>>> one is at a level of abstract comparable to that of
>>> the other, which is contrary to my view.
>> All this mapping is fundamentally based on the correspondence theory of
>> truth. Montague added several more aspects to this conception:
>> model theoretic, possible worlds.
> What is "the correspondence theory of truth"?
http://plato.stanford.edu/entries/truth-correspondence/
My version, which I derived long before becoming aware of the above, is simply:
Truth is the unidirectional mathematical mapping from abstract 
representations of actuality to actuality itself.


>
>>> Note that one's MOE is built entirely from empirical
>>> experiences.
What is the empirical experience of infinity?

>>> All the language rules are induced from
>>> past experiences of discourse.  Each unit (word,
>>> phrase, etc) carries a summary of past usage.  Likewise
>>> for each abstract concept.  One can't point to a
>>> sub-part of the MOE and say, here is the equivalent
>>> of the concept "I", or "if", or "mother".  There
>>> is no "mapping".
>> The mapping from the term to its meaning.
>> This is not your mother--->"mother"
>>
>> It is only a set of symbols that are a stand-in
>> for the general concept of a {mother}.
> I don't understand the "only" part.  Perhaps a
> concrete set of symbols (primitives?) would help.

Words map to the things themselves in a possible world.
Words are not these things; they only represent these things symbolically.

>
>> Sure there are two linguistic mappings:
>> extension and intension.
> Is there a difference between extension and intension
> at the MOE level?
Yes.

>> Montague added at least two others:
>> 1) Model of a possible world
>> 2) A Possible world itself
> Sorry, I don't understand these terms in the present
> context.

http://plato.stanford.edu/entries/montague-semantics/

>
>>> Consider a computer executing a BASIC interpreter
>>> which is executing a BASIC program.  There is the
>>> logic of the BASIC program, the logic of the BASIC
>>> interpreter, the logic of machine instruction,
>>> the logic of the microcode, the logic of the digital
>>> circuitry, as well as the logic (i.e., physical laws)
>>> of the electronics.  Each is at a different level of
>>> abstraction.  There is no "mapping" across the levels.
>> {mapping} as in the abstraction of a mathematical mapping, in other
>> words a precise one-way correspondence between one thing and another.
>>
>> In this case your example perfectly showed the concept of mapping.
> ? If you are in fact agreeing with my view of things, then
> you would have to agree that language-meaning has to
> be built on top of MOE-meaning.  You can't start with
> discourse and work back.
There would be at least two layers to the system (sketched below):
1) An ontology of pure conceptions
2) A bidirectional mathematical mapping between natural language and 
this ontology.
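
A minimal sketch of those two layers, with invented names; note that
the mapping runs in both directions, from language to the ontology
and back, and that two languages can share one conception.

# Hypothetical sketch: an ontology of pure conceptions plus a
# bidirectional word <-> conception mapping.
ontology = {"Mother", "Sphere"}           # pure conceptions

to_concept = {                            # surface word -> conception
    "mother": "Mother",                   # English
    "madre":  "Mother",                   # Spanish, same conception
    "sphere": "Sphere",
}

to_words = {}                             # conception -> surface words
for word, concept in to_concept.items():
    to_words.setdefault(concept, []).append(word)

print(to_concept["madre"])                # Mother
print(to_words["Mother"])                 # ['mother', 'madre']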

>>> There is a word -- innumeracy -- that describes,
>>> among other things, the inability to grasp the
>>> magnitude of a number in a context.  I wish there
>>> were a word that would describe the inability to
>>> grasp complexity and level of abstraction.
>>> That would have saved me a lot of time when
>>> discussing cognitive science issues.
>>>
>>>>> In any case, no matter what one is trying to model
>>>>> -- language-meaning or MOE-meaning -- in a tractable
>>>>> scale, one is immediately faced with the fact that
>>>>> the selection of "primitives" is almost entirely
>>>>> arbitrary.  With this, very few claims of universality
>>>>> can be justified.  In other words, what works in
>>>>> a micro-reality such as SHRDLU cannot be generalized
>>>>> to the real world at all.  In short, why bother?
>>>> There is already an inherent natural structure that fully elaborates
>>>> every detail of all of the rules of natural language compositionality
>>>> (NLC) that only need be discovered. Once these rules have been fully
>>>> elaborated the (KR) problem would seem to be fully addressed.
>>> Again, these "rules of NLC" (if definable at all)
>>> are at a disparate level of abstraction and are thus
>>>    ill-suited for knowledge representation.
>> It may seem that way. I am not referring to how these things are
>> generally represented within natural language. I am referring to a
>> mathematical formalism that could represent the pure conceptions that
>> the natural language is referring to. Montague Grammar would be the basis.
> Which "pure concepts"?  At times it seems that you
> are focusing too much on the mathematical formalism
> and ignoring the importance of choosing the primitives.
There are only two semantic atoms in my current proposal (sketched below):
1) Conceptions (nodes in a di-graph)
2) Connections between conceptions (edges in a di-graph)
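
A minimal sketch of just these two atoms (the concept and relation
names are invented for illustration, echoing the earlier {sphere}
example); everything else would be built on top of them:

# Hypothetical sketch: conceptions are nodes, connections are
# directed, labeled edges -- nothing else.
conceptions = {"sphere", "radius", "center", "point", "distance"}

connections = {                  # (source, relation, target) triples
    ("sphere", "has_property", "radius"),
    ("sphere", "has_property", "center"),
    ("radius", "is_a",         "distance"),
    ("center", "is_a",         "point"),
}

def outgoing(conception):
    # All edges leaving one conception.
    return sorted((rel, tgt) for (src, rel, tgt) in connections
                  if src == conception)

print(outgoing("sphere"))   # [('has_property', 'center'), ('has_property', 'radius')]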

I spent a very long time creating examples in my replies to others;
please look at these.

>>>>> I also don't see why one would choose to use the
>>>>> notation of a language/grammar to "encode" meaning.
>>>>> Why not use, say, some form of lambda calculus
>>>>> instead? At the very least, one can easily build
>>>>> rules/relationships and entities out of existing
>>>>> rules/relationships and entities.
>>>> I think that Montague proposed this, and this encoding may form the
>>>> basis for his semantic grammar. Ultimately these meaning postulates must
>>>> map to the equivalent of the human mind's representation. I
>>> Why repeat his mistakes?
>> Perhaps he made some mistakes, I did not notice any. I had already
>> thought up this gist of his ideas before I ever read anything about his
>> work.
>>
>> <not credible (but true)>
>> One of these things that I thought up on my own was the correspondence
>> theory of truth. Years later I read that it already had a name.
>> </not credible (but true)>
>>
>> It was just last year that I decided to start reading about what others
>> had done, so I read through a book that I bought back in 1995: Formal
>> Semantics An Introduction by Ronnie Cann.
>>
>> I also bought Knowledge Representation and Reasoning by Brachman and
>> Levesque because this is my real interest. The problem with the KR
>> approach is that it starts with a simple understanding and attempts to
>> extend it to larger problems, so the KR system has to be continually
>> redesigned to accommodate these enhancements as greater understanding is
>> achieved.
> Indeed.  As I said, any tractable system requires
> a set of primitives[*] that are basically arbitrary.
In order to make the system of minimal complexity, one must begin by 
choosing the primitives that would provide this.

The design of a system as complex as the complete representation of the 
set of all conceptions must have its complexity minimized from the 
beginning, or the system remains infeasible to design.

Populating this system with conceptions must be automated, because no
one person or group will have enough time to essentially write a book
about all the details of everything.

The focus should be on a bootstrap level of understanding of language,
such that the system can learn a greater level of comprehension on its
own.

> [*] rules of composition
>
>> The advantage of first exhaustively solving the compositionality problem
>> within linguistics is that the final design of the knowledge
>> representation (KR) system is fully robust. No more little toy systems
>> that can provide tiny increments of progress. Solve the big problem
>> first, and then everything else falls right into place.
> The project will then be too complex to be tractable.
> And being intractable in turn means that one can't
> demonstrate the validity of one's choice of
> primitives.  Everything will just become hand waving.
The set of conceptions already has a pre-existing and optimal natural 
order that only need be discovered.

> I don't see how using Montague Grammar or any other formalism
> would reduce the complexity involved.
It does not reduce the complexity; it provides the means of representation.

> However, I would
> recommend choosing a formalism in which parametrization
> can be expressed easily (like the Prolog like constructs
> of Cyc).
CYC is in the ballpark of the right idea. It might be better for them
to focus more directly on the bootstrap level of natural language
understanding rather than simply the common-sense level of
understanding. A bootstrap level of understanding would be able to
learn common sense by reading about it.

>
>> Although the initial problem is larger, one gets to an optimal complete
>> solution in minimal time.
> Good luck.
>
>>>>>>>>>> Billy thinks Sally said the ball was red.
>>>>>>>>> I am just throwing this together from imperfect memory, but, it is
>>>>>>>>> something like this:
>>>>>>>>>
>>>>>>>>> BILLY {believes} (SALLY {said} "The ball was red")
>>>>>>>> I'm sorry, but I fail to see any "atoms of meaning" in that rudimentary
>>>>>>>> addition of notation.  The word "said" alone remains entirely ambiguous,
>>>>>>>> because in common usage that can refer to all manner of spoken or
>>>>>>>> written statements.  I also don't understand why you decided not to
>>>>>>>> notate ball and red, instead treating the information as if it were a
>>>>>>>> quotation.  There is a real incompleteness in the ideas you put forth.
> Tak
> --
> ----------------------------------------------------------------+-----
> Tak To                                            takto@alum.mit.eduxx
> --------------------------------------------------------------------^^
>   [taode takto ~{LU5B~}]      NB: trim the xx to get my real email addr
>
>

0
Peter
12/13/2013 10:20:42 PM
Peter Olcott <OCR4Screen> writes:

> On 12/13/2013 2:36 PM, Ben Bacarisse wrote:
>> Peter Olcott <OCR4Screen> writes:
>>
>>> On 12/13/2013 1:31 PM, Ben Bacarisse wrote:
>>>> Peter Olcott <OCR4Screen> writes:
>>>>
>>>>> On 12/13/2013 10:35 AM, Ben Bacarisse wrote:
>>>>>> Peter Olcott <OCR4Screen> writes:
>>>>>>
>>>>>>> On 12/12/2013 3:11 PM, Ben Bacarisse wrote:
>>>>>>>> Peter Olcott <OCR4Screen> writes:
>>>>>>>> <snip>
>>>>>>>>> You already acknowledged the key element of my position.
>>>>>>>> For at least the last 18 months, the key element of your position has
>>>>>>>> been that halting involves some question to which there is no correct
>>>>>>>> answer.
>>>>>>> Meaning exactly this:
>>>>>>> 12/10/2013 8:14 AM, Ben Bacarisse wrote:
>>>>>>> decision problems of arithmetic and halting and so on ...
>>>>>>> contain, in their very definitions, the reasons why they are
>>>>>>> impossible.
>>>>>> You have misunderstood me.  I don't see the connection between what you
>>>>>> claim (that there is no correct answer) and the fact that certain
>>>>>> decision problems are unsolvable.
>>>>> It is not merely that they are unsolvable, it is as you said they are
>>>>> defined to be impossible to solve.
>>>> That's a historical statement about motivation.
>>> "contain, in their very definitions" is not a historical statement
>>> about motivation.
>> No of course -- those are my words.  It's yours I am objecting to.  I am
>> very happy for you to switch to using my interpretation of halting, but
>> you should be sure you know what you are signing up to first!
>>
>>>> You'd have to study the
>>>> history, and maybe even the psychology, of the various players involved
>>>> in developing the subject before you could be sure of the intent behind
>>>> all the various definitions.  And even then you might never be able to
>>>> find out.
>>> When the definition of a problem (the formalized specification of the
>>> problem) contains its own impossibility, these "decision problems...
>>> are inherently problematic".
>> Yes.
>
> So we agree then. We just use different terminology to describe these
> identical views.

Maybe.  I'm not sure because you've not retracted anything yet, but all
credit to you if you've seen that you were wrong before.  That takes
some courage.

> When a problem is defined such that the specification of the problem
> itself causes this problem to be impossible to solve this is exactly
> the same thing that I mean by ill-formed question.

No, you can't call these problems that.  They are well-formed problems
whose definition is natural.  You know full well that the choice you've
made ("ill-formed") is not just some arbitrary word used to label this
property.  You've chosen it for a reason, and everyone else is permitted
to reject it for the same reason -- it already means something that can't be
forgotten just because you decree it.

> The simple example that I provided was the question:
> What time is it (yes or no) ?

That's an English sentence.  Nothing I've said applies to natural
language questions about the world.

> In this case the specification of the question causes this question to
> have no possible correct answer.

No, that's not what I'm talking about.  It did seem unlikely that you would
have decided to agree with me.  I am not talking about natural language.
I am not talking about questions that have no correct answer.  I am
talking about well-defined questions in and about formal systems.

<snip>
-- 
Ben.
0
Ben
12/13/2013 10:30:25 PM
On 12/13/2013 4:30 PM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 12/13/2013 2:36 PM, Ben Bacarisse wrote:
>>> Peter Olcott <OCR4Screen> writes:
>>>
>>>> On 12/13/2013 1:31 PM, Ben Bacarisse wrote:
>>>>> Peter Olcott <OCR4Screen> writes:
>>>>>
>>>>>> On 12/13/2013 10:35 AM, Ben Bacarisse wrote:
>>>>>>> Peter Olcott <OCR4Screen> writes:
>>>>>>>
>>>>>>>> On 12/12/2013 3:11 PM, Ben Bacarisse wrote:
>>>>>>>>> Peter Olcott <OCR4Screen> writes:
>>>>>>>>> <snip>
>>>>>>>>>> You already acknowledged the key element of my position.
>>>>>>>>> For at least the last 18 months, the key element of your position has
>>>>>>>>> been that halting involves some question to which there is no correct
>>>>>>>>> answer.
>>>>>>>> Meaning exactly this:
>>>>>>>> 12/10/2013 8:14 AM, Ben Bacarisse wrote:
>>>>>>>> decision problems of arithmetic and halting and so on ...
>>>>>>>> contain, in their very definitions, the reasons why they are
>>>>>>>> impossible.
>>>>>>> You have misunderstood me.  I don't see the connection between what you
>>>>>>> claim (that there is no correct answer) and the fact that certain
>>>>>>> decision problems are unsolvable.
>>>>>> It is not merely that they are unsolvable, it is as you said they are
>>>>>> defined to be impossible to solve.
>>>>> That's a historical statement about motivation.
>>>> "contain, in their very definitions" is not a historical statement
>>>> about motivation.
>>> No of course -- those are my words.  It's yours I am objecting to.  I am
>>> very happy for you to switch to using my interpretation of halting, but
>>> you should be sure you know what you are signing up to first!
>>>
>>>>> You'd have to study the
>>>>> history, and maybe even the psychology, of the various players involved
>>>>> in developing the subject before you could be sure of the intent behind
>>>>> all the various definitions.  And even then you might never be able to
>>>>> find out.
>>>> When the definition of a problem (the formalized specification of the
>>>> problem) contains its own impossibility, these "decision problems...
>>>> are inherently problematic".
>>> Yes.
>> So we agree then. We just use different terminology to describe these
>> identical views.
> Maybe.  I'm not sure because you've not retracted anything yet, but all
> credit to you if you've seen that you were wrong before.  That takes
> some courage.
>
>> When a problem is defined such that the specification of the problem
>> itself causes this problem to be impossible to solve this is exactly
>> the same thing that I mean by ill-formed question.
> No, you can't call these problems that.  They are well-formed problems
> whose definition is natural.  You know full well that the choice you've
> made ("ill-formed") is not just some arbitrary word used to label this
> property.  You've chosen it for a reason, and everyone else is permitted
> to reject it for the same reason -- it already means something that can't be
> forgotten just because you decree it.
>
>> The simple example that I provided was the question:
>> What time is it (yes or no) ?
> That's an English sentence.  Nothing I've said applies to natural
> language questions about the world.
>
>> In this case the specification of the question causes this question to
>> have no possible correct answer.
> No, that's not what I'm talking about.  It did seem unlikely that you would
> have decided to agree with me.  I am not talking about natural language.
> I am not talking about questions that have no correct answer.  I am
> talking about well-defined questions in and about formal systems.
>
> <snip>
When you say well-formed, you mean unambiguously specified.

When I say ill-formed I mean:
decision problems... contain, in their very definitions, the reasons why 
they are impossible.

0
Peter
12/13/2013 10:45:47 PM
Peter Olcott <OCR4Screen> writes:

> When I say ill-formed I mean:
> decision problems... contain, in their very definitions, the reasons
> why they are impossible.

This is just weird;
there *is* a decision procedure for whether a given propositional
formula is a tautology or not.

How can that be, if decision procedures *in general* contain, in their
very definitions, the reasons why they are impossible.

That's what it looks like you are claiming.
I realise this is not your wording, but you have signed up to it,
so I ask you to justify your understanding of it, given the example
just mentioned (and many others).
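
For the record, here is a minimal sketch of such a decision procedure
(brute-force truth tables; the encoding of formulas as Python
functions is my own illustration):

# Minimal sketch: decide tautology-hood by enumerating every
# truth assignment (a truth table).
from itertools import product

def is_tautology(formula, variables):
    # formula: a function from {variable: bool} to bool.
    return all(formula(dict(zip(variables, values)))
               for values in product([True, False],
                                     repeat=len(variables)))

# p or not p -- the law of the excluded middle -- is a tautology:
print(is_tautology(lambda v: v["p"] or not v["p"], ["p"]))    # True
# p and q is not:
print(is_tautology(lambda v: v["p"] and v["q"], ["p", "q"]))  # False

It terminates on every input, so this is a solvable decision problem;
nothing in its definition makes it impossible.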


-- 
Alan Smaill
0
Alan
12/13/2013 10:54:26 PM
On 12/13/2013 4:54 PM, Alan Smaill wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> When I say ill-formed I mean:
>> decision problems... contain, in their very definitions, the reasons
>> why they are impossible.
> This is just weird;
> there *is* a decision procedure for whether a given propositional
> formula is a tautology or not.
>
> How can that be, if decision procedures *in general* contain, in their
> very definitions, the reasons why they are impossible.

As can be seen from the context that you quoted, the term "ill-formed"
applies to:
decision problems... contain, in their very definitions, the reasons
why they are impossible.

Did you think that I meant that all decision problems *in general* are 
ill-formed?

> That's what it looks like you are claiming.
> I realise this is not your wording, but you have signed up to it,
> so I ask you to justify your understanding of it, given the example
> just mentioned (and many others).
>
>


0
Peter
12/13/2013 11:07:04 PM
Alan Smaill <smaill@SPAMinf.ed.ac.uk> writes:

> Peter Olcott <OCR4Screen> writes:
>
>> When I say ill-formed I mean:
>> decision problems... contain, in their very definitions, the reasons
>> why they are impossible.
>
> This is just weird;
> there *is* a decision procedure for whether a given propositional
> formula is a tautology or not.
>
> How can that be, if decision procedures *in general* contain, in their
> very definitions, the reasons why they are impossible.
>
> That's what it looks like you are claiming.
> I realise this is not your wording, but you have signed up to it,
> so I ask you to justify your understanding of it, given the example
> just mentioned (and many others).

And, for the record, it's not my wording any more either.  For one thing,
all the context has been lost.

-- 
Ben.
0
Ben
12/13/2013 11:29:02 PM
On 12/13/2013 5:29 PM, Ben Bacarisse wrote:
> Alan Smaill <smaill@SPAMinf.ed.ac.uk> writes:
>
>> Peter Olcott <OCR4Screen> writes:
>>
>>> When I say ill-formed I mean:
>>> decision problems... contain, in their very definitions, the reasons
>>> why they are impossible.
>> This is just weird;
>> there *is* a decision procedure for whether a given propositional
>> formula is a tautology or not.
>>
>> How can that be, if decision procedures *in general* contain, in their
>> very definitions, the reasons why they are impossible.
>>
>> That's what it looks like you are claiming.
>> I realise this is not your wording, but you have signed up to it,
>> so I ask you to justify your understanding of it, given the example
>> just mentioned (and many others).
> And, for the record, it's not my wording any more either.  For one thing
> all the context has been lost.
>

https://groups.google.com/forum/#!original/sci.logic/pDQHZ5_zEp4/4QXHZpTXfOUJ

12/10/2013 8:14 AM, Ben Bacarisse wrote:
Of course, in one way Peter is absolutely correct!  The decision
problems of arithmetic and halting and so on are, in some sense,
inherently problematic.  They contain, in their very definitions, the
reasons why they are impossible.  Unfortunately it is not because they
are ill-formed or paradoxical, it is simply that both arithmetic and
Turing machines can talk about themselves (and each other of course).

0
Peter
12/13/2013 11:34:33 PM
Peter Olcott <OCR4Screen> writes:
<snip>
> When I say ill-formed I mean:
> decision problems... contain, in their very definitions, the reasons
> why they are impossible.

I think you mean something more like "unsolvable decision problems
contain, in their very definitions, the reasons why they are
impossible".  (Though it could be better worded.  By cherry picking
phrases you omit the fact the other definitions matter too -- for
example the definition of a TM).

Anyway, you should not do that.  Don't use words with one technical
meaning for some other technical meaning.

-- 
Ben.
0
Ben
12/13/2013 11:43:06 PM
On 12/13/2013 5:43 PM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
> <snip>
>> When I say ill-formed I mean:
>> decision problems... contain, in their very definitions, the reasons
>> why they are impossible.
> I think you mean something more like "unsolvable decision problems
> contain, in their very definitions, the reasons why they are
> impossible".  (Though it could be better worded.  By cherry picking
> phrases you omit the fact the other definitions matter too -- for
> example the definition of a TM).
>
> Anyway, you should not do that.  Don't use words with one technical
> meaning for some other technical meaning.
>
That way of saying it might not so clearly specify that decision problems
such as the Halting Problem are analogous to questions such as:
a) What time is it (yes or no)?
b) How many feet long is the color** of your car?

In that both the above questions and the decision problems that
you referred to:

"contain, in their very definitions, the reasons why they are impossible."

** Visual sensory perception, not the wavelength of a spectrum of light.
0
Peter
12/14/2013 12:07:29 AM
Peter Olcott <OCR4Screen> writes:

> On 12/13/2013 5:43 PM, Ben Bacarisse wrote:
>> Peter Olcott <OCR4Screen> writes:
>> <snip>
>>> When I say ill-formed I mean:
>>> decision problems... contain, in their very definitions, the reasons
>>> why they are impossible.
>> I think you mean something more like "unsolvable decision problems
>> contain, in their very definitions, the reasons why they are
>> impossible".  (Though it could be better worded.  By cherry picking
>> phrases you omit the fact the other definitions matter too -- for
>> example the definition of a TM).
>>
>> Anyway, you should not do that.  Don't use words with one technical
>> meaning for some other technical meaning.
>>
> That way of saying it might not so clearly specify that decision problems
> such as the Halting Problem are analogous to questions such as:
> a) What time is it (yes or no)?
> b) How many feet long is the color** of your car?

Being able to lump a whole bunch of other statements into the term
"ill-formed" does not make redefining it any more acceptable.  It makes
it, if anything, worse.  It already has a useful meaning in a technical
forum like this, and if you want to communicate with others you should
leave that meaning alone.

I'm not sure there's any mileage in going back and forth about this
term.  I am sure you will continue to use it, and people will continue
to object.  That was always about finding words that make you feel
comfortable with theorems that you accept.

-- 
Ben.
0
Ben
12/14/2013 1:34:09 AM
On 12/13/2013 7:34 PM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 12/13/2013 5:43 PM, Ben Bacarisse wrote:
>>> Peter Olcott <OCR4Screen> writes:
>>> <snip>
>>>> When I say ill-formed I mean:
>>>> decision problems... contain, in their very definitions, the reasons
>>>> why they are impossible.
>>> I think you mean something more like "unsolvable decision problems
>>> contain, in their very definitions, the reasons why they are
>>> impossible".  (Though it could be better worded.  By cherry picking
>>> phrases you omit the fact the other definitions matter too -- for
>>> example the definition of a TM).
>>>
>>> Anyway, you should not do that.  Don't use words with one technical
>>> meaning for some other technical meaning.
>>>
>> That way of saying it might not so clearly specify that decision problems
>> such as the Halting Problem are analogous to questions such as:
>> a) What time is it (yes or no)?
>> b) How many feet long is the color** of your car?
> Being able to lump a whole bunch of other statements into the term
> "ill-formed" does not make redefining it any more acceptable.  It makes
> it, if anything, worse.  It already has a useful meaning in a technical
> forum like this, and if you want to communicate with others you should
> leave that meaning alone.
>
> I'm not sure there's any mileage in going back and forth about this
> term.  I am sure you will continue to use it, and people will continue
> to object.  That was always about finding words that make you feel
> comfortable with theorems that you accept.
>
That the Halting Problem and the questions:
a) What time is it (yes or no)?
b) How many feet long is the color** of your car?

are exactly analogous, in that both of these
"contain, in their very definitions, the reasons
why they are impossible."

is probably sufficient by itself, no matter what this is called.

It seems pretty obvious that there is something wrong
with the above questions. It also seems pretty obvious
that what is wrong with the questions is that they:
"contain, in their very definitions, the reasons
why they are impossible."

Therefore my original point seems completely proven:
That the Halting Problem is like asking the question:
What time is it (yes or no)?
0
Peter
12/14/2013 2:00:32 AM
On 12/13/2013 4:54 PM, Alan Smaill wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> When I say ill-formed I mean:
>> decision problems... contain, in their very definitions, the reasons
>> why they are impossible.
>
> This is just weird;
> there *is* a decision procedure for whether a given propositional
> formula is a tautology or not.
>
> How can that be, if decision procedures *in general* contain, in their
> very