b83503104@yahoo.com wrote: > Tukey (inventor of FFT) thinks that an approximate solution of the > exact problem is often more useful than the exact solution of an > approximate problem. > > I find it hard to argue which one is more important or useful. Once > you believe in one of them, your belief will lead your research style > to either algorithm-centered or model-construction-centered. > > Anybody wants to elaborate on either of these two views? > Interesting background on FFTs http://mathworld.wolfram.com/FastFourierTransform.html http://en.wikipedia.org/wiki/Fast_Fourier_transform


5/15/2006 5:56:55 AM

b83503104@yahoo.com wrote: > Tukey (inventor of FFT) thinks that an approximate solution of the > exact problem is often more useful than the exact solution of an > approximate problem. > > I find it hard to argue which one is more important or useful. Once > you believe in one of them, your belief will lead your research style > to either algorithm-centered or model-construction-centered. Your question is of central importance to the philosophy of mathematics, one which can be appreciated only by those who actually _do_ the mathematics, not just claim they do. I would categorize the two styles: 1. Symbolic mathematics 2. Computational mathematics In my experience, I cannot decide between the two. The 'most important' or 'most fundamental' choice depends on the areas of mathematics you are working in. If you are messing around in number theory, the exact symbolic solution is usually _the only solution_ that matters (i.e. "what looks nicest is better"). You can approximate an exact symbolic solution so many other ways, and the symbolisms can be, seemingly, _irreconcilable_. However, if you are dealing in computational mathematics (things like neural networks, self-organizing maps, cellular automata), then accurate but computationally simpler approximations seem to be more fundamental (i.e., the least complex algorithms). > Anybody wants to elaborate on either of these two views?


5/15/2006 7:03:47 AM

In article <1147676627.082933.272060@j33g2000cwa.googlegroups.com>, schoenfeld1@gmail.com says... > > b83503104@yahoo.com wrote: > > Tukey (inventor of FFT) thinks that an approximate solution of the > > exact problem is often more useful than the exact solution of an > > approximate problem. > > > > I find it hard to argue which one is more important or useful. Once > > you believe in one of them, your belief will lead your research style > > to either algorithm-centered or model-construction-centered. > > Your question is of central importance to the philosophy of > mathematics, one which can be appreciated only by those who actually > _do_ the mathematics, not just claim they do. > > I would categorize the two styles: > 1. Symbolic mathematics > 2. Computational mathematics > > In my experience, I cannot decide between the two. The 'most important' > or 'most fundamental' choice depends on the areas of mathematics you > are working in. If you are messing around in number theory, the exact > symbolic solution is usually _the only solution_ that matters (i.e. > "what looks nicest is better"). You can approximate an exact symbolic > solution so many other ways and the symbolisms can be, seamingly, > _irreconcilable_. > > > However, if you are dealing in computational mathematics, things like > Neural networks, self-organizing maps, cellular automata, then accurate > but computationally simpler approximations seem to be more fundamenta > (i.e the least complex algorithms) > > > > > > > Anybody wants to elaborate on either of these two views? > > A complex calculation tends to be very confusing, and it's very hard for people not completely briefed on the subject to accept unless it's 100% accurate. So in my experience, all else being equal, the simplest calculation method is preferred. -- Be careful, what you predict with the theory of human-caused global warming as it will be tested soon enough as we aren't going to reduce carbon dioxide emissions.
Observations of Bernard - No 99


5/15/2006 9:46:32 AM

BernardZ wrote: > In article <1147676627.082933.272060@j33g2000cwa.googlegroups.com>, > schoenfeld1@gmail.com says... > > > > b83503104@yahoo.com wrote: > > > Tukey (inventor of FFT) thinks that an approximate solution of the > > > exact problem is often more useful than the exact solution of an > > > approximate problem. > > > > > > I find it hard to argue which one is more important or useful. Once > > > you believe in one of them, your belief will lead your research style > > > to either algorithm-centered or model-construction-centered. > > > > Your question is of central importance to the philosophy of > > mathematics, one which can be appreciated only by those who actually > > _do_ the mathematics, not just claim they do. > > > > I would categorize the two styles: > > 1. Symbolic mathematics > > 2. Computational mathematics > > > > In my experience, I cannot decide between the two. The 'most important' > > or 'most fundamental' choice depends on the areas of mathematics you > > are working in. If you are messing around in number theory, the exact > > symbolic solution is usually _the only solution_ that matters (i.e. > > "what looks nicest is better"). You can approximate an exact symbolic > > solution so many other ways and the symbolisms can be, seamingly, > > _irreconcilable_. > > > > > > However, if you are dealing in computational mathematics, things like > > Neural networks, self-organizing maps, cellular automata, then accurate > > but computationally simpler approximations seem to be more fundamenta > > (i.e the least complex algorithms) > > > > > > > > > > > > > Anybody wants to elaborate on either of these two views? > > > > > > A complex calculation tend to be very confusing and its very hard for > people not completely briefed on the subject to accept unless its 100% > accurate. > > So in my experience, all else being equal, the simplest calculation > method is preferred. Yes, but you are neglecting the other aspect - the _symbolic_ mathematics. 
Symbolic mathematics is an approach which emphasizes the use of symbols to describe the underlying 'structure' of the mathematics - something not possible with computational mathematics. For example, the symbol 'pi' represents something essentially incalculable. Yet, without actually calculating it, it can be proven that whatever this symbol represents, it carries some logic that allows it to interact with other things in very strange ways - those other things can be given symbols too. This to me is the classical approach to mathematics - the actual 'structure' being described by these strings of symbols seems to transcend any logic an algorithm could possibly describe. However, were you to actually bring these symbols into reality, that is, to make sense of them in a meaningful way, you would need to evaluate via an algorithm, and when you do, you lose the infinite (seemingly 'divine') precision and structure originally described. So the question is, was the original string of symbols independently relevant, or was it merely as relevant as the algorithms used to evaluate it? I would like to know the answer to this question. > > -- > Be careful, what you predict with the theory of human-caused global > warming as it will be tested soon enough as we aren't going to reduce > carbon dioxide emissions. > > Observations of Bernard - No 99
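The tension described above (an exact symbolic identity versus the truncating algorithm that evaluates it) can be made concrete with pi itself. A minimal sketch in Python, assuming nothing beyond the standard library; the function names are illustrative. Machin's 1706 identity pi/4 = 4 arctan(1/5) - arctan(1/239) is exact as a string of symbols, but any numerical evaluation must truncate the arctan series:

```python
from fractions import Fraction

# Machin's identity is exact symbolically; evaluating it forces us to
# truncate the arctan Taylor series, trading exactness for a concrete
# finite-precision number.  (Function names here are illustrative.)

def arctan_series(x: Fraction, terms: int) -> Fraction:
    """Truncated Taylor series for arctan(x), in exact rational arithmetic."""
    total = Fraction(0)
    power = x
    for n in range(terms):
        term = power / (2 * n + 1)
        total += term if n % 2 == 0 else -term
        power *= x * x
    return total

def pi_approx(terms: int) -> Fraction:
    # pi = 16*arctan(1/5) - 4*arctan(1/239)
    return 16 * arctan_series(Fraction(1, 5), terms) \
         - 4 * arctan_series(Fraction(1, 239), terms)

for k in (1, 3, 5, 10):
    print(k, float(pi_approx(k)))
```

Each output is an exact rational number, yet none of them *is* pi; the symbol carries structure that no finite evaluation reproduces.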


5/15/2006 1:28:00 PM

In article <1147672097.414277.240280@j33g2000cwa.googlegroups.com>, b83503104@yahoo.com <b83503104@yahoo.com> wrote: >Tukey (inventor of FFT) thinks that an approximate solution of the >exact problem is often more useful than the exact solution of an >approximate problem. Tukey and Cooley were rediscoverers of the FFT. Both methods are somewhat dangerous, and Tukey was, alas, an expositor of both types of excesses. One has to be somewhat careful of both, and use the best mathematics one can manage at both stages, and not just consider simple alternatives. It is also likely that both will have to be done in the same problem. >I find it hard to argue which one is more important or useful. Once >you believe in one of them, your belief will lead your research style >to either algorithm-centered or model-construction-centered. NEVER construct a model of a type just because one has an algorithm for that type of model. Beware of using transformations for simplification. Structural models are not good regression models, and vice versa. Know what you are doing and why, and why it may not be a good idea at all. Model building and numerical analysis are both arts; treat them as such. >Anybody wants to elaborate on either of these two views? See my "Commandments" on my web page, http://www.stat.purdue.edu/~hrubin/ . -- This address is for information only. I do not claim that these views are those of the Statistics Department or of Purdue University. Herman Rubin, Department of Statistics, Purdue University hrubin@stat.purdue.edu Phone: (765)494-6054 FAX: (765)494-0558


5/15/2006 7:00:54 PM

b83503104@yahoo.com wrote: > Tukey (inventor of FFT) thinks that an approximate solution of the > exact problem is often more useful than the exact solution of an > approximate problem. > > I find it hard to argue which one is more important or useful. Once > you believe in one of them, your belief will lead your research style > to either algorithm-centered or model-construction-centered. > > Anybody wants to elaborate on either of these two views? I'll bite. Tukey's solution, the Exact solution to the Exact Problem = E(P) = 0. Ken


5/15/2006 7:54:39 PM

b83503104@yahoo.com wrote: > Tukey (inventor of FFT) Tukey did not invent the FFT. Oscar Buneman, my PhD advisor at Stanford, used the FFT in 1939 during his pioneering WWII research that explains how the radar klystron works. His PhD students had been using it for years before I came in 1964. Imagine our research group's surprise when Cooley and Tukey published their paper in 1965. No, Oscar did not invent the FFT either. According to him, it was a well known technique in Germany during the early 1930s and was probably discovered well before 1900. > exact problem is often more useful than the exact solution of an > approximate problem. > > I find it hard to argue which one is more important or useful. Once > you believe in one of them, your belief will lead your research style > to either algorithm-centered or model-construction-centered. > > Anybody wants to elaborate on either of these two views? I always start with the exact problem. If I can't solve it exactly, I state, clearly, the assertions and assumptions that lead to the approximate problem. Next, I obtain an exact or approximate solution to the approximate problem. Finally, I go back to see if the solution is consistent with the assertions and assumptions. So, you see, I consider both as just being part of the same solution. I attended a physics/engineering job applicant seminar at MIT Lincoln Laboratory. The applicant began his presentation with an approximate electromagnetic wave equation. I asked him where the equation came from and he said it was "well known". Then all hell broke loose. My colleagues forced him to derive the equation from first principles (i.e., Maxwell's Equations). That took him ~ 1/2 hr out of a 50 minute seminar (it should have taken him 3 minutes). Needless to say, he was not offered a job. Hope this helps. Greg


5/15/2006 8:58:31 PM

Can you elaborate further? I believe that when you refer to model building as an "art", the artist would clearly use one of those two tools, or both. Otherwise, if the person were creating an exact solution for an exact problem formulation, we'd call her a scientist.


5/15/2006 9:22:42 PM

"Greg Heath" <heath@alumni.brown.edu> wrote in message news:1147726711.432827.16440@i39g2000cwa.googlegroups.com... > > b83503104@yahoo.com wrote: > > Tukey (inventor of FFT) > > Tukey did not invent the FFT. > > Oscar Buneman, my PhD advisor at Stanford, used the FFT in > 1939 during his pioneering WWII research that explains how the > radar klystron works. His PhD students had been using it for years > before I came in 1964. I am curious to know what type of Pre-WWII electronics could have used an FFT.


5/15/2006 10:00:47 PM

"Richard Henry" <rphenry@home.com> writes: > "Greg Heath" <heath@alumni.brown.edu> wrote in message > news:1147726711.432827.16440@i39g2000cwa.googlegroups.com... > > > > b83503104@yahoo.com wrote: > > > Tukey (inventor of FFT) > > > > Tukey did not invent the FFT. > > > > Oscar Buneman, my PhD advisor at Stanford, used the FFT in > > 1939 during his pioneering WWII research that explains how the > > radar klystron works. His PhD students had been using it for years > > before I came in 1964. > > I am curious to know what type of Pre-WWII electronics could have used an > FFT. Didn't either Euler or Gauss invent the FFT? Or at least the DFT computed recursively. Phil -- The man who is always worrying about whether or not his soul would be damned generally has a soul that isn't worth a damn. -- Oliver Wendell Holmes, Sr. (1809-1894), American physician and writer


5/15/2006 10:48:57 PM

In article <1147728162.754529.145990@v46g2000cwv.googlegroups.com>, datamatter@gmail.com writes: >Can you elaborate further? I believe when you refer to model building >as an "art", then clearly the artist would use one of those two tools, >or both. Otherwise, if the person is creating an exact solution for an >exact problem formulation then we'd call her a scientist. > It is exceedingly rare for a scientist to be in a position to create an exact solution for an exact problem formulation. Mati Meron | "When you argue with a fool, meron@cars.uchicago.edu | chances are he is doing just the same"


5/15/2006 10:57:38 PM

"Greg Heath" <heath@alumni.brown.edu> wrote in news:1147726711.432827.16440 @i39g2000cwa.googlegroups.com: > > b83503104@yahoo.com wrote: >> Tukey (inventor of FFT) > > Tukey did not invent the FFT. > > Oscar Buneman, my PhD advisor at Stanford, used the FFT in > 1939 during his pioneering WWII research that explains how the > radar klystron works. His PhD students had been using it for years > before I came in 1964. > > Imagine our research group's surprise when Cooley and Tukey > published their paper in 1965. > > No, Oscar did not invent the FFT either. According to him, it was > a well known technique in Germany during the early 1930s and > was probably discovered well before 1900. The FFT is not much use without electronic calculators. Do you have a cite for this "well known technique" prior to 1900 or even the 1930's. Klazmon. >SNIP>


5/15/2006 11:10:04 PM

Llanzlan Klazmon <Klazmon@llurdiaxorb.govt> wrote in news:Xns97C5719B522A6Klazmonllurdiaxorbgo@203.97.37.6: > "Greg Heath" <heath@alumni.brown.edu> wrote in > news:1147726711.432827.16440 @i39g2000cwa.googlegroups.com: > >> >> b83503104@yahoo.com wrote: >>> Tukey (inventor of FFT) >> >> Tukey did not invent the FFT. >> >> Oscar Buneman, my PhD advisor at Stanford, used the FFT in >> 1939 during his pioneering WWII research that explains how the >> radar klystron works. His PhD students had been using it for years >> before I came in 1964. >> >> Imagine our research group's surprise when Cooley and Tukey >> published their paper in 1965. >> >> No, Oscar did not invent the FFT either. According to him, it was >> a well known technique in Germany during the early 1930s and >> was probably discovered well before 1900. > > The FFT is not much use without electronic calculators. Do you have a > cite for this "well known technique" prior to 1900 or even the 1930's. > > Klazmon. OK. I looked into this myself. It appears that none other than Carl Gauss figured out the key result in 1805. Klazmon. > > >>SNIP> >


5/15/2006 11:18:32 PM

In article <oS6ag.3086$KB.904@fed1read08>, Richard Henry <rphenry@home.com> wrote: >"Greg Heath" <heath@alumni.brown.edu> wrote in message >news:1147726711.432827.16440@i39g2000cwa.googlegroups.com... >> b83503104@yahoo.com wrote: >> > Tukey (inventor of FFT) >> Tukey did not invent the FFT. >> Oscar Buneman, my PhD advisor at Stanford, used the FFT in >> 1939 during his pioneering WWII research that explains how the >> radar klystron works. His PhD students had been using it for years >> before I came in 1964. >I am curious to know what type of Pre-WWII electronics could have used an >FFT. I do not see that electronics are necessary. By 1939, there were mechanical and electro-mechanical desk calculators; I have used such to good advantage. Fourier did his work in the early 1800s. Many papers before WWII give computations of Fourier transforms of numerical series. As far as we can tell, Gauss invented the FFT, but published it in an obscure place, as he could not see much value in it. -- This address is for information only. I do not claim that these views are those of the Statistics Department or of Purdue University. Herman Rubin, Department of Statistics, Purdue University hrubin@stat.purdue.edu Phone: (765)494-6054 FAX: (765)494-0558


5/16/2006 12:20:47 AM

Richard Henry wrote: > "Greg Heath" <heath@alumni.brown.edu> wrote in message > news:1147726711.432827.16440@i39g2000cwa.googlegroups.com... > > > > b83503104@yahoo.com wrote: > > > Tukey (inventor of FFT) > > > > Tukey did not invent the FFT. > > > > Oscar Buneman, my PhD advisor at Stanford, used the FFT in > > 1939 during his pioneering WWII research that explains how the > > radar klystron works. His PhD students had been using it for years > > before I came in 1964. > > I am curious to know what type of Pre-WWII electronics could have used an > FFT. Oscar was of Jewish heritage and left Germany for England in 1933. When England declared war on Germany, he was interred and worked as an applied mathematician in an English research lab. In an effort to understand the physics of the radar klystron, he simulated, on a mechanical computer, the trajectories of electrons in a cylindrical geometry under the influence of combined applied and self electric fields. I don't remember if an applied axial magnetic field was present. If it was, it was easily incorporated. The self electric fields were computed in the quasistatic approximation using Poisson's equation. Since the geometry was cylindrical, Fourier transforming in the axial and azimuthal directions reduced the partial differential equation in (r,theta,z) to an ordinary differential equation in radius. After the war he spent 5 years in nuclear physics research in Canada before finding his niche in computational plasma physics at Stanford. As a grad student in the late 60's I used the azimuthal FFT to simulate plasma and electron beams in cylindrical geometry. The simulations led to the rediscovery of the breakup of hollow beams into rotating vortex patterns (analogous to Kármán vortex streets in hydrodynamics). My PhD thesis was a theoretical analysis of the instability of the hollow beam and the resulting stability of the rotating vortex patterns.
Roger Hockney, another student of Oscar's, pioneered the use of our simulation techniques in astronomical applications by simulating the formation of spiral galaxies. He has written a book on computational simulation which covers a lot of the details. Hope this helps. Greg
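The field-solve trick Heath describes (Fourier transforming along a periodic coordinate so derivatives become multiplications, reducing a PDE to independent mode equations) can be sketched in one dimension. This is a toy periodic Poisson problem in Python with NumPy, not a reconstruction of Buneman's actual code:

```python
import numpy as np

# Along a periodic direction (e.g. azimuthal), a Fourier transform turns
# d^2/dx^2 into multiplication by -k^2, so Poisson's equation
# phi'' = -rho becomes algebraic, mode by mode.

n = 256
L = 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
rho = np.cos(3 * x)                          # source with zero mean

k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi   # angular wavenumbers
rho_hat = np.fft.fft(rho)
phi_hat = np.zeros_like(rho_hat)
nonzero = k != 0                             # skip the k=0 (mean) mode
phi_hat[nonzero] = rho_hat[nonzero] / k[nonzero] ** 2
phi = np.fft.ifft(phi_hat).real

# Exact solution of phi'' = -cos(3x) is cos(3x)/9 (up to a constant).
print(np.max(np.abs(phi - np.cos(3 * x) / 9)))
```

In the cylindrical case the same move leaves only an ordinary differential equation in radius for each (axial, azimuthal) mode pair, which is exactly the reduction described above.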


5/16/2006 12:40:13 AM

> So the question > is, was the original string of symbols independently relevant or they > merely as relevant as the algorithms used to evaluate them? I would > like to know the answer to this question. The original string of symbols and algorithms can create a framework where further systematic discussion is possible, measurements are possible, and predictions can be made. For example, in political science models you often look at a few selected factors to assess the likelihood of an event occurring.


5/16/2006 11:36:18 AM

In article <1147740013.573048.310170@v46g2000cwv.googlegroups.com>, Greg Heath <heath@alumni.brown.edu> wrote: >Oscar was of Jewish heritage and left Germany for England in 1933. >When England declared war on Germany, he was interred and worked >as an applied mathematician in an English research lab. You mean "interned", I hope. Robert Israel israel@math.ubc.ca Department of Mathematics http://www.math.ubc.ca/~israel University of British Columbia Vancouver, BC, Canada


5/16/2006 8:31:02 PM

In article <e4dcq6$il5$1@nntp.itservices.ubc.ca>, israel@math.ubc.ca (Robert Israel) wrote: > In article <1147740013.573048.310170@v46g2000cwv.googlegroups.com>, > Greg Heath <heath@alumni.brown.edu> wrote: > > >Oscar was of Jewish heritage and left Germany for England in 1933. > >When England declared war on Germany, he was interred and worked > >as an applied mathematician in an English research lab. > > You mean "interned", I hope. > > Robert Israel israel@math.ubc.ca > Department of Mathematics http://www.math.ubc.ca/~israel > University of British Columbia Vancouver, BC, Canada How many others were interred in English research labs, I wonder?


5/16/2006 8:53:51 PM

b83503104@yahoo.com wrote: > Tukey (inventor of FFT) thinks that an approximate solution of the > exact problem is often more useful than the exact solution of an > approximate problem. > > I find it hard to argue which one is more important or useful. Once > you believe in one of them, your belief will lead your research style > to either algorithm-centered or model-construction-centered. > > Anybody wants to elaborate on either of these two views? If you don't specify a problem or a context, "which is better" discussions are pointless. Example: If you want to prove certain properties about a system, it is often very difficult without a closed form solution. As for example, convergence or stability. Example: If you want to build some hardware, it is often only required to demonstrate that certain values are within some range. As for example, you might be interested in pressure or stress on components that will fail if the value gets too large. You may be able to prove this with an approximate solution. Example: If your goal is to demonstrate that an approximate solution method is "close enough" then you need something authoritative. One such authoritative thing is an exact solution. And a method of getting one is called "solution generation." You work backwards from an exact solution that you know isn't too massively unphysical, and work out what the various system parameters and conditions would have to be in order for that solution to be right. Then you compare your method with that solution. For example, if you wanted to know the temperature near some heat source, and you knew it was very roughly parabolic, you could then work backwards from a parabolic temperature profile, and work out what the heat diffusion would have to be for that profile. Then you could apply your numerical method to that system, and then compare the answers to your exact solution. So, you need to say what your problem is, and what your goals w.r.t. 
that problem are, in order to know whether one method or another is preferable. Socks
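The "solution generation" recipe above can be sketched end to end. The setup below is hypothetical (a steady 1-D heat equation with fixed endpoints, rather than the heat-diffusion example in the post): pick an exact profile, work backwards to the source term that makes it exact, then check a finite-difference solver against the known answer.

```python
import numpy as np

# Solution generation: choose the exact answer T(x) = x*(1 - x) on [0,1],
# then work backwards through  -T''(x) = q(x)  to find the source that
# makes it exact (here q = 2), and compare a numerical method against it.

n = 51
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
q = np.full(n, 2.0)                  # manufactured source: -T'' = 2

# Standard second-order finite differences with T(0) = T(1) = 0.
A = np.zeros((n, n))
b = q.copy()
A[0, 0] = A[-1, -1] = 1.0            # boundary rows: T = 0
b[0] = b[-1] = 0.0
for i in range(1, n - 1):
    A[i, i - 1] = A[i, i + 1] = -1.0 / h**2
    A[i, i] = 2.0 / h**2
T_num = np.linalg.solve(A, b)

T_exact = x * (1.0 - x)
print(np.max(np.abs(T_num - T_exact)))
```

Because the chosen exact solution is quadratic, the second-order scheme reproduces it to roundoff; for a harder manufactured solution the same comparison would expose the method's truncation error, which is the whole point of the technique.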


5/16/2006 9:13:57 PM

Robert Israel wrote: > In article <1147740013.573048.310170@v46g2000cwv.googlegroups.com>, > Greg Heath <heath@alumni.brown.edu> wrote: > > >Oscar was of Jewish heritage and left Germany for England in 1933. > >When England declared war on Germany, he was interred and worked > >as an applied mathematician in an English research lab. > > You mean "interned", I hope. Yes. It is very difficult to do anything, except decay, when you are interred. Greg P.S. I did a web search to double check. Lo and behold, there are many out there who make that mistake.


5/17/2006 12:47:58 AM

Virgil wrote: > In article <e4dcq6$il5$1@nntp.itservices.ubc.ca>, > israel@math.ubc.ca (Robert Israel) wrote: > > > In article <1147740013.573048.310170@v46g2000cwv.googlegroups.com>, > > Greg Heath <heath@alumni.brown.edu> wrote: > > > > >Oscar was of Jewish heritage and left Germany for England in 1933. > > >When England declared war on Germany, he was interred and worked > > >as an applied mathematician in an English research lab. > > > > You mean "interned", I hope. > > > > Robert Israel israel@math.ubc.ca > > Department of Mathematics http://www.math.ubc.ca/~israel > > University of British Columbia Vancouver, BC, Canada > > How many others were interred in English research labs, I wonder? While surfing for unambiguous definitions of interred and interned, I ran into Usenet postings discussing US, Canadian, German, and Japanese relocation, internment and concentration camps. I don't think it would be too difficult to home in on sources which would give statistics for the British camps. However, I'm not sure what categories would have been chosen for statistical summarization. Hope this helps. Greg


5/17/2006 1:00:10 AM

Llanzlan Klazmon wrote: > OK. I looked into this myself. It appears that none other than Carl Gauss > figured out the key result in 1805. Wikipedia cites the following: Carl Friedrich Gauss, "Nachlass: Theoria interpolationis methodo nova tractata," Werke band 3, 265-327 (Königliche Gesellschaft der Wissenschaften, Göttingen, 1866). See also M. T. Heideman, D. H. Johnson, and C. S. Burrus, "Gauss and the history of the fast Fourier transform," IEEE ASSP Magazine 1 (4), 14-21 (1984). Well, color me surprised.
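The "key result" credited to Gauss, that a length-n DFT splits into two length-n/2 DFTs over the even- and odd-indexed samples, is the same recursion Cooley and Tukey published in 1965. A didactic sketch (n restricted to powers of two), checked against the direct O(n^2) definition:

```python
import cmath

# Recursive radix-2 Cooley-Tukey FFT: a length-n DFT is combined from
# two length-n/2 DFTs over the even- and odd-indexed samples.
# Didactic sketch; len(a) must be a power of two.

def fft(a):
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2])
    odd = fft(a[1::2])
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out

def dft(a):
    """Direct O(n^2) definition, for comparison."""
    n = len(a)
    return [sum(a[j] * cmath.exp(-2j * cmath.pi * j * k / n)
                for j in range(n)) for k in range(n)]

xs = [1.0, 2.0, 0.0, -1.0, 3.0, 1.0, -2.0, 0.5]
print(max(abs(u - v) for u, v in zip(fft(xs), dft(xs))))
```

Nothing here requires electronics, which is Rubin's point upthread: the recursion merely cuts the count of multiplications, a saving just as real on an 1805 writing desk or a 1939 desk calculator.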


5/17/2006 3:07:56 AM

Greg Heath wrote: > Robert Israel wrote: > > In article <1147740013.573048.310170@v46g2000cwv.googlegroups.com>, > > Greg Heath <heath@alumni.brown.edu> wrote: > > > > >Oscar was of Jewish heritage and left Germany for England in 1933. > > >When England declared war on Germany, he was interred and worked > > >as an applied mathematician in an English research lab. > > > > You mean "interned", I hope. > > Yes. > > It is very difficult to do anything, except decay, when you > are interred. On the other hand, Jean Leray came up with the Leray spectral sequence and other neat stuff in a POW camp in Austria. Many people have been interred since then trying to understand it all. It's worked out well in other fields also; Olivier Messiaen wrote Quatuor pour la fin du temps while a guest of the German government during World War II.


5/17/2006 3:18:07 AM

"Schoenfeld" <schoenfeld1@gmail.com> wrote in message news:1147676627.082933.272060@j33g2000cwa.googlegroups.com... > > b83503104@yahoo.com wrote: >> Tukey (inventor of FFT) thinks that an approximate solution of the >> exact problem is often more useful than the exact solution of an >> approximate problem. >> >> I find it hard to argue which one is more important or useful. Once >> you believe in one of them, your belief will lead your research style >> to either algorithm-centered or model-construction-centered. > > Your question is of central importance to the philosophy of > mathematics, one which can be appreciated only by those who actually > _do_ the mathematics, not just claim they do. > > I would categorize the two styles: > 1. Symbolic mathematics > 2. Computational mathematics > > In my experience, I cannot decide between the two. The 'most important' > or 'most fundamental' choice depends on the areas of mathematics you > are working in. If you are messing around in number theory, the exact > symbolic solution is usually _the only solution_ that matters (i.e. > "what looks nicest is better"). You can approximate an exact symbolic > solution so many other ways and the symbolisms can be, seamingly, > _irreconcilable_. > > > However, if you are dealing in computational mathematics, things like > Neural networks, self-organizing maps, cellular automata, then accurate > but computationally simpler approximations seem to be more fundamenta > (i.e the least complex algorithms) > > > > > >> Anybody wants to elaborate on either of these two views? > ++++++++++++++++++++++++++++++++++++++++++++++++++++ This has been a very interesting thread. Now where does the process of reducing data by statistical methods fit in? Is it 1 or 2? As Bob L and others have pointed out, statistics uses approximations of conceptual reality. This conceptual reality (i.e., randomness exists as a mathematical model) is not clear.
How can randomness be constructed? Does it only exist in computational mathematics? What is an exact mathematical model of randomness? How do the mathematicians deal with the randomness of "quarks", and the possibility of objects moving from a finite mathematical structure to an unknown, incomplete entity with undefined boundaries? Are our statements about the interval that a population mean lies within strictly computational mathematics? What is the meaning of an exact solution in computational mathematics, given the fact that the realized computational process is finite and limited? Gentile clearly said that computers do not do exact mathematics. Is a computer output then an approximate solution to an exact problem? The more I think about this, the more I understand the inability to frame inquiries such that they are "well structured" for logical combinations and results (e.g. The Oxford school). Other than the above, I thought this thread was excellent, and all participants are to be thanked for their contribution. DAH


5/17/2006 3:30:20 AM

Gene Ward Smith wrote: > Greg Heath wrote: > > Robert Israel wrote: > > > In article <1147740013.573048.310170@v46g2000cwv.googlegroups.com>, > > > Greg Heath <heath@alumni.brown.edu> wrote: > > > > > > >Oscar was of Jewish heritage and left Germany for England in 1933. > > > >When England declared war on Germany, he was interred and worked > > > >as an applied mathematician in an English research lab. > > > > > > You mean "interned", I hope. > > > > Yes. > > > > It is very difficult to do anything, except decay, when you > > are interred. > > On the other hand, Jean Leray came up with the Leray spectral sequence > and other neat stuff in a POW camp in Austria. Many people have been > interred since then trying to understand it all. It's worked out well > in other fields also; Olivier Messiaen wrote Quatuor pour la fin du > temps while a guest of the German government during World War II. Sorry, my reply was intended to be clever. I was so clever that I failed to clarify the main point: "Internment" is synonymous with "confinement" whereas "Interment" is synonymous with "burial"! Hope this helps. Greg


5/17/2006 8:27:04 AM

In article <lNwag.131$Fw1.184417@news.sisna.com>, David A. Heiser <daheiser@gvn.net> wrote: >"Schoenfeld" <schoenfeld1@gmail.com> wrote in message >news:1147676627.082933.272060@j33g2000cwa.googlegroups.com... >> b83503104@yahoo.com wrote: ...................... >> However, if you are dealing in computational mathematics, things like >> Neural networks, self-organizing maps, cellular automata, then accurate >> but computationally simpler approximations seem to be more fundamenta >> (i.e the least complex algorithms) >>> Anybody wants to elaborate on either of these two views? >++++++++++++++++++++++++++++++++++++++++++++++++++++ >This has been a very interesting thread. >Now where does the process of reducing data by statistical methods fit in? >Is it 1 or 2.? >As Bob L and others have pointed out, statistics uses approximations of >conceptual reality. This conceptual reality,(i.e randomness exists as a >mathematical model) is not clear. How can randomness be constructed, Does it >only exist in computational mathematics? What is an exact mathematical model >of randomness? Randomness cannot be "constructed". While the ideas of probability and randomness might have originated from repeated events under "identical" conditions, the fundamental concepts in probability are those of the unrepeatable event, and related ideas such as random variable. These can be REPRESENTED as subsets of a measure space and measurable functions on such a space, but this is a representation, not the concept itself. >How do the mathematicians deal with the randomness of "quarks"? and the >possibility of objects moving from a finite mathematical structure to an >unknown, incomplete entity with undefined boundaries? The representation here is in terms of quantum processes, which are far worse than random processes. However, the observations form a random process; it is what goes on between the observations which is not too well understood. 
>Are our statements about the interval that a population mean lies within, >strictly computational mathematics? >What is the meaning of an exact solution in computational mathematics, given >the fact that the realized computational process is finite and limited? >Gentile clearly said that computers do not do exact mathematics. Is a >computer output then an approximate solution to an exact problem? Usually by "exact" solution we mean within acceptable computational error. >The more I think about this, the more I understand the inability to frame >inquiries such that they are "well structured" for logical combinations and >results (e.g. The Oxford school). >Other than the above, I thought this thread was excellent, and all >participants are to be thanked for their contribution. >DAH -- This address is for information only. I do not claim that these views are those of the Statistics Department or of Purdue University. Herman Rubin, Department of Statistics, Purdue University hrubin@stat.purdue.edu Phone: (765)494-6054 FAX: (765)494-0558
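[Editor's aside, not part of the original thread: Rubin's remark that an "exact" computational solution really means exact within acceptable computational error can be made concrete with a small sketch. The `dft`/`idft` helper names below are invented for the illustration; a transform followed by its mathematically exact inverse recovers the input only up to floating-point rounding, and that tiny residual is what we agree to call "exact" in practice.]

```python
# Illustration of "exact within acceptable computational error":
# the identity idft(dft(x)) == x holds exactly in symbolic mathematics,
# but the computed round trip carries a small floating-point residual.
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n))
            for j in range(n)]

def idft(X):
    """Inverse transform; mathematically recovers x exactly."""
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n)
                for j in range(n)) / n
            for k in range(n)]

x = [1.0, 2.0, 3.0, 4.0]
roundtrip = idft(dft(x))
err = max(abs(a - b) for a, b in zip(x, roundtrip))
print(err)  # a tiny rounding residual, far below any tolerance of interest
```

The gap between the symbolic identity and the computed round trip is precisely the sense in which "computers do not do exact mathematics", yet the answer is "exact" by any working standard.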

0 |

5/18/2006 7:57:30 PM

b83503104@yahoo.com wrote: > Tukey (inventor of FFT) thinks that an approximate solution of the > exact problem is often more useful than the exact solution of an > approximate problem. In most cases the problem is what is the problem. > > I find it hard to argue which one is more important or useful. Once > you believe in one of them, your belief will lead your research style > to either algorithm-centered or model-construction-centered. I wonder if you think there are algorithmic solutions to problems that do not require a model of the problem to be solved. > > Anybody wants to elaborate on either of these two views? Your views are the outcome of a totally distorted and wrong view of how science (and engineering) works. Mike

0 |

5/18/2006 8:32:29 PM

Most models, if carefully structured, are by definition, 'exact' (see class A, below), with the qualifications of Godel's 1931 "Incompleteness Theorem". On the other hand, all empirical statements are approximations, at best. (See class B.) David Hume, "An Enquiry Concerning Human Understanding" (1777 edition) <http://www.etext.leeds.ac.uk/hume/ehu/ehupbsb.htm> first addressed this problem with clarity: He divided all statements of language into two classes: A. Statements that concern relations only among words, ideas or symbols: [Those relations are the propositions, definitions and rules of language, mathematics and deductive logic; they're social contracts of convenience, designed to keep us on the same page.] Such statements are absolutely true or false (or undecidable, if some critical definition or premise is missing), only as a result of prior agreement about how they're to be used. And, B. empirical statements, which are the province of inductive 'logic', concern "matters of fact". [Inductive reasoning is defined as the process of extrapolating or interpolating from observation.] Empirical statements depend upon sets of data that are always incomplete, partial samplings of an as yet unobserved whole. Hume was the first to note that there's no logical way to guarantee that future observations will track those of the past. This was his 'incompleteness theorem'. The overall task of science can be modeled as the problem of determining which class A statements (idealistic hypothetical models) best fit well-established class B statements (the realistic facts). This sub-set of class A statements provides our most confident description of physical and biological reality. Most remaining class A statements may have other values (e.g., aesthetic, emotional, logical, mathematical, ideological, religious) but generally fall outside the realm of science.
Within this model, there will always be some class A statements that lie in limbo; theoretical constructions, seemingly empirically verifiable, but so far neither supported nor refuted by 'direct' observation. Based on what may be deduced from the 'rest of reality', these currently seem either quite possible (Higgs boson, gravity waves, String Theory) or improbable (impenetrable shields against ballistic missiles, caches of Iraqi WMD, extraterrestrial intelligence, etc.). They're science fiction today, but perhaps science fact tomorrow. Hope this helps. Len Ornstein Mike wrote: > b83503104@yahoo.com wrote: >> Tukey (inventor of FFT) thinks that an approximate solution of the >> exact problem is often more useful than the exact solution of an >> approximate problem. > > In most cases the problem is what is the problem. > >> I find it hard to argue which one is more important or useful. Once >> you believe in one of them, your belief will lead your research style >> to either algorithm-centered or model-construction-centered. > > I wonder if you think there are algorithmic solutions to problems that > do not require a model of the problem to be solved. > >> Anybody wants to elaborate on either of these two views? > > Your views are the outcome of a totally distorted and wrong view of how > science (and engineering) works. > > Mike >

0 |

5/19/2006 5:28:42 PM

Leonard Ornstein wrote: > Most models, if carefully structured, are by definition, 'exact' (see > class A, below), with the qualifications of Godel's 1931 "Incompleteness > Theorem". > > On the other hand, all empirical statements are approximations, at best. > (See class B.) > > David Hume, "An Enquiry Concerning Human Understanding" (1777 edition) > <http://www.etext.leeds.ac.uk/hume/ehu/ehupbsb.htm> first addressed this > problem with clarity: Philosophy has progressed a lot since then. Hume's analysis was very simplistic, although he was right about the problem of induction from a philosophical perspective only. > > He divided all statements of language into two classes: > > A. Statements that concern relations only among words, ideas or symbols: > [Those relations are the propositions, definitions and rules of > language, mathematics and deductive logic; they're social contracts of > convenience, designed to keep us on the same page.] Such statements are > absolutely true or false (or undecidable, if some critical definition or > premise is missing), only as a result of prior agreement about how > they're to be used. And, of course, he said nothing new, nothing that Aristotle had not already said with his categorical logic. > > B. empirical statements, which are the province of inductive 'logic', > concern "matters of fact". [Inductive reasoning is defined as the > process of extrapolating or interpolating from observation.] Empirical > statements depend upon sets of data that are always incomplete, partial > samplings of an as yet unobserved whole. Hume was the first to note that > there's no logical way to guarantee that future observations will track > those of the past. This was his 'incompleteness theorem'. The problem of induction was known since antiquity. But there is what is called "crying evidence".
> The overall task of science can be modeled as the problem of determining > which class A statements (idealistic hypothetical models) best fit > well-established class B statements (the realistic facts). This sub-set > of class A statements provides our most confident description of > physical and biological reality. Most remaining class A statements may > have other values (e.g., aesthetic, emotional, logical, mathematical, > ideological, religious) but generally fall outside the realm of science. > No. The problem of science today is to come up with class A statements that generate new class B statements which in turn corroborate those class A statements. > Within this model, there will always be some class A statements that lie > in limbo; theoretical constructions, seemingly empirically verifiable, > but so far neither supported nor refuted by 'direct' observation. Based > on what may be deduced from the 'rest of reality', these currently seem > either quite possible (Higgs boson, gravity waves, String Theory) or > improbable (impenetrable shields against ballistic missiles, caches of > Iraqi WMD, extraterrestrial intelligence, etc.). They're science fiction > today, but perhaps science fact tomorrow. > > Hope this helps. It is too simplistic and antiquated to add anything of value. Mike > > Len Ornstein > > > Mike wrote: > > b83503104@yahoo.com wrote: > >> Tukey (inventor of FFT) thinks that an approximate solution of the > >> exact problem is often more useful than the exact solution of an > >> approximate problem. > > > > In most cases the problem is what is the problem. > > > >> I find it hard to argue which one is more important or useful. Once > >> you believe in one of them, your belief will lead your research style > >> to either algorithm-centered or model-construction-centered. > > > > I wonder if you think there are algorithmic solutions to problems that > > do not require a model of the problem to be solved. 
> > >> Anybody wants to elaborate on either of these two views? > > > > Your views are the outcome of a totally distorted and wrong view of how > > science (and engineering) works. > > > > Mike > >

0 |

5/19/2006 11:53:44 PM

Mike: Mike wrote: > Leonard Ornstein wrote: >> Most models, if carefully structured, are by definition, 'exact' (see >> class A, below), with the qualifications of Godel's 1931 "Incompleteness >> Theorem". >> >> On the other hand, all empirical statements are approximations, at best. >> (See class B.) >> >> David Hume, "An Enquiry Concerning Human Understanding" (1777 edition) >> <http://www.etext.leeds.ac.uk/hume/ehu/ehupbsb.htm> first addressed this >> problem with clarity: > > Philosophy has progressed a lot since then. Hume's analysis was very > simplistic although he was right about the problem of induction from a > philosophical perspective only. Could you elaborate on "from a philosophical perspective ONLY"? > >> He divided all statements of language into two classes: >> >> A. Statements that concern relations only among words, ideas or symbols: >> [Those relations are the propositions, definitions and rules of >> language, mathematics and deductive logic; they're social contracts of >> convenience, designed to keep us on the same page.] Such statements are >> absolutely true or false (or undecidable, if some critical definition or >> premise is missing), only as a result of prior agreement about how >> they're to be used. And, > > > of course, he said nothing new, nothing that Aristotle has not said > already with his categorical logic. > >> B. empirical statements, which are the province of inductive 'logic', >> concern "matters of fact". [Inductive reasoning is defined as the >> process of extrapolating or interpolating from observation.] Empirical >> statements depend upon sets of data that are always incomplete, partial >> samplings of an as yet unobserved whole. Hume was the first to note that >> there's no logical way to guarantee that future observations will track >> those of the past. This was his 'incompleteness theorem'. > > The problem of induction was known since antiquity. But there is what > is called "crying evidence".
> Do you mean that the evidence for the incompleteness of induction was crying out to be recognized? But, by raising the case from the implicit to explicit level, Hume changed the world for the rest of us. > >> The overall task of science can be modeled as the problem of determining >> which class A statements (idealistic hypothetical models) best fit >> well-established class B statements (the realistic facts). This sub-set >> of class A statements provides our most confident description of >> physical and biological reality. Most remaining class A statements may >> have other values (e.g., aesthetic, emotional, logical, mathematical, >> ideological, religious) but generally fall outside the realm of science. >> > > No. The problem of science today is to come up with class A statements > that generate new class B statements which in turn corroborate those > class A statements. Certainly you're right; for science to keep moving ahead, new theoretical science needs to be generated to stimulate new experimental science. That small correction doesn't undo the model. > >> Within this model, there will always be some class A statements that lie >> in limbo; theoretical constructions, seemingly empirically verifiable, >> but so far neither supported nor refuted by 'direct' observation. Based >> on what may be deduced from the 'rest of reality', these currently seem >> either quite possible (Higgs boson, gravity waves, String Theory) or >> improbable (impenetrable shields against ballistic missiles, caches of >> Iraqi WMD, extraterrestrial intelligence, etc.). They're science fiction >> today, but perhaps science fact tomorrow. >> > >> Hope this helps. > > It is too simplistic and antiquated to add anything of value. > The aim was to remind that theory can be exact, but our knowledge of reality never can be; and in the end, it's confident understanding of reality and an ability to 'predict' outcomes that's science's main job. 
Algorithm-centered and model-construction-centered theories should both lead to observations and experiments. How productive the experiments turn out to be is really what counts. Sometimes the first will be more productive; sometimes the second. And sometimes we have to wait an awfully long time to find out whether a particular model isn't just science fiction. Is that too simplistic? Len > Mike > > > >> Len Ornstein >> >> >> Mike wrote: >>> b83503104@yahoo.com wrote: >>>> Tukey (inventor of FFT) thinks that an approximate solution of the >>>> exact problem is often more useful than the exact solution of an >>>> approximate problem. >>> In most cases the problem is what is the problem. >>> >>>> I find it hard to argue which one is more important or useful. Once >>>> you believe in one of them, your belief will lead your research style >>>> to either algorithm-centered or model-construction-centered. >>> I wonder if you think there are algorithmic solutions to problems that >>> do not require a model of the problem to be solved. >>> >>>> Anybody wants to elaborate on either of these two views? >>> Your views are the outcome of a totally distorted and wrong view of how >>> science (and engineering) works. >>> >>> Mike >>> >

0 |

5/20/2006 1:21:01 AM

Greg Heath wrote: > Gene Ward Smith wrote: > > Greg Heath wrote: > > > Robert Israel wrote: > > > > In article <1147740013.573048.310170@v46g2000cwv.googlegroups.com>, > > > > Greg Heath <heath@alumni.brown.edu> wrote: > > > > > > > > >Oscar was of Jewish heritage and left Germany for England in 1933. > > > > >When England declared war on Germany, he was interred and worked > > > > >as an applied mathematician in an English research lab. > > > > > > > > You mean "interned", I hope. > > > > > > Yes. > > > > > > It is very difficult to do anything, except decay, when you > > > are interred. > > > > On the other hand, Jean Leray came up with the Leray spectral sequence > > and other neat stuff in a POW camp in Austria. Many people have been > > interred since then trying to understand it all. It's worked out well > > in other fields also; Olivier Messiaen wrote Quatuor pour la fin du > > temps while a guest of the German government during World War II. > > Sorry, my reply was intended to be clever. I was so clever that I > failed > to clarify the main point: > > "Internment" is synonymous with "confinement" > > whereas > > "Interment" is synonymous with "burial"! > > Hope this helps. Hoary joke has Beethoven continuing to work while interred. Researcher opening tomb finds great man sitting up at sarcophagus, erasing sheet after sheet of music manuscript. "What are you doing, sir!", the startled investigator asks. "Decomposing".

0 |

5/20/2006 7:28:45 PM

In article <1148153325.061905.224880@u72g2000cwu.googlegroups.com>, "Edward Green" <spamspamspam3@netzero.com> wrote: <snip setup> >Hoary joke has Beethoven continuing to work while interred. > >Researcher opening tomb finds great man sitting up at sarcophagus, >erasing sheet after sheet of music manuscript. "What are you doing >sir!", the startled investigator asks. "Decomposing". <GROAN> /BAH

0 |

5/21/2006 10:59:10 AM