The difference between programmable and non-programmable logic--if there is one.

A recent Real World Technology article makes a big deal about the 
difference between programmable and non-programmable logic in talking 
about Intel's latest GPUs.

So far as I know, every electronic circuit derives its usefulness from 
its ability to behave differently depending on state that comes from 
somewhere else.  What's the difference between that and "programmable?"

The Von Neumann-Turing computing model is fairly rigid.  It imagines an 
instruction stream driving the processing of a data stream.  This model 
is already broken in numerous ways: interrupts, microcode, network 
inputs, peripheral cards, and all kinds of stuff outside the instruction 
stream that describe how a computer actually behaves.

Hacking exploits the fact that "instructions" don't have to come labeled 
as such and that nominally fixed-function logic can be made to do all 
kinds of strange things if you punch the right buttons in the right order.

Will we ever get past the current hack (instructions+data=computation)? 
  Or will we just keep adding fuzz to an arbitrarily rigid model of 
computation?

Robert.
rbmyersusa (542)
10/22/2011 11:14:26 PM

On 10/22/2011 6:14 PM, Robert Myers wrote:
> A recent Real World Technology article makes a big deal about the
> difference between programmable and non-programmable logic in talking
> about Intel's latest GPUs.
>
> So far as I know, every electronic circuit derives its usefulness from
> its ability to behave differently depending on state that comes from
> somewhere else. What's the difference between that and "programmable?"

I would beg to differ with that.  Electronic circuits are useful because 
they behave predictably, given known inputs.  A nand gate calculates a 
boolean result based on its inputs.  A latch stores the state of its 
input at some instant in time as defined by a clock.  The behavior of 
the logic is deterministic.
>
> The Von Neumann-Turing computing model is fairly rigid. It imagines an
> instruction stream driving the processing of a data stream. This model
> is already broken in numerous ways: interrupts, microcode, network
> inputs, peripheral cards, and all kinds of stuff outside the instruction
> stream that describe how a computer actually behaves.

In one model of computing, instructions and data are stored in the same 
space and are distinguishable only by context.  This arrangement makes 
the instructions vulnerable, since there is no a priori difference 
between the instructions and the data; in fact, in some cases the two 
are deliberately intermixed, as in self-modifying code.
>
> Hacking exploits the fact that "instructions" don't have to come labeled
> as such and that nominally fixed-function logic can be made to do all
> kinds of strange things if you punch the right buttons in the right order.

Hacking in one form takes advantage of the fact that sometimes the 
behavior of a complex mechanism is not completely specified in the 
presence of unanticipated inputs.  The logic does not change its 
function.  Other times hacking takes advantage of human failings to 
provide the necessary input to achieve the desired end.

Check out "crashme", "ping of death", and several other examples.  The 
various buffer overflow exploits are a result of both hardware and 
software making assumptions about each other's behavior without doing 
anything to assure it.  "Trust but verify" is a good motto.
>
> Will we ever get past the current hack (instructions+data=computation)?
> Or will we just keep adding fuzz to an arbitrarily rigid model of
> computation?
>
> Robert.

The current model is not the only model.  However, it is quite useful. 
It will endure until it is replaced by something more useful.  It would 
be quite easy (Harvard architecture) to separate instructions from data. 
  I think the truth is that, to my surprise, few give much of a hoot 
about security and reliability.

delcecchi (122)
10/23/2011 1:52:28 AM
On Sat, 22 Oct 2011 20:52:28 -0500, Del Cecchi wrote:

> The current model is not the only model.  However, it is quite useful.
> It will endure until it is replaced by something more useful.  It would
> be quite easy (Harvard architecture) to separate instructions from data.
>   I think the truth is that, to my surprise, few give much of a hoot
> about security and reliability.

Separating instructions from data doesn't necessarily contribute 
significantly to security *or* reliability.  Now that stacks are not-
executable and code segments are not-writeable, the cool kids are making 
exploits do what they want by chaining little threaded interpreters 
together out of the selection of instructions available just before 
return statements in the attacked programs.  All you need then is a 
buffer overrun that lets you set up a "program" composed of return 
addresses on the stack.  This tactic would, of course, work just as well 
if the code were in a separate memory space, rather than just visible but 
not writeable.
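
A toy sketch in C of the control-flow idea (not a real exploit; the 
invented function pointers stand in for the return addresses an overrun 
would plant on the stack, and the "gadgets" for code fragments that 
already exist in the attacked program):

#include <stdio.h>

static long acc;

/* Stand-ins for instruction sequences that happen to sit just before
   return statements somewhere in the attacked program. */
static void gadget_load5(void)  { acc = 5; }
static void gadget_double(void) { acc *= 2; }
static void gadget_print(void)  { printf("%ld\n", acc); }

int main(void)
{
    /* The "program" is nothing but a list of addresses; the code they
       point at can be read-only, or even in a separate memory space. */
    void (*chain[])(void) = { gadget_load5, gadget_double, gadget_print };
    for (size_t i = 0; i < sizeof chain / sizeof chain[0]; i++)
        chain[i]();             /* each "return" dispatches the next */
    return 0;
}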

If you want secure and reliable, using this theory, you probably have to 
abandon the whole notion of dynamically reprogrammable (i.e., go 
embedded: turn all computers into "devices").  At that point it doesn't 
matter much if your logic is programmable or not...

You could also restrict yourself (not a large restriction, it seems to 
me) to one of the many programming environments that don't admit that 
mode of failure.  There will still be failures though, just as there are 
failures in non-reprogrammable, Harvard-architected embedded systems: 
people make mistakes.  Most of the recent web (in)security news seems to 
center around protocol errors, which is a design-level problem, not 
especially related to the implementation technology.

Cheers,

-- 
Andrew
areilly--- (96)
10/23/2011 3:06:46 AM
In article <9ghem6F1tdU1@mid.individual.net>,
Andrew Reilly  <areilly---@bigpond.net.au> wrote:
>On Sat, 22 Oct 2011 20:52:28 -0500, Del Cecchi wrote:
>
>> The current model is not the only model.  However, it is quite useful.
>> It will endure until it is replaced by something more useful.  It would
>> be quite easy (Harvard architecture) to separate instructions from data.
>>   I think the truth is that, to my surprise, few give much of a hoot
>> about security and reliability.
>
>Separating instructions from data doesn't necessarily contribute 
>significantly to security *or* reliability.  ...

While that is true, protecting instructions against being overwritten
helps significantly with reliability in unchecked languages like C,
C++ and (as normally implemented) Fortran.  How much?  At a wild
guess, a 30% improvement - significant but not major.  It doesn't
help at all in checked or interpreted languages, of course.

However, Del's point is good, and there are a LOT of things that
could help a great deal with both for negligible extra complexity.
One ancient one is the concept of UNPRIVILEGED levels of protection,
where pages are always readable but have a settable level, code can
always lower its level, but only code in a level N instruction
segment can raise its level to N or change the level of a level N
page.
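
A toy model in C of the rule as I read it (all names invented; the real 
mechanism would of course live in the MMU, not in library code):

#include <stdio.h>

static int thread_level;        /* current privilege level of the thread */

/* Lowering your own level is always permitted. */
static void lower(int to)
{
    if (to < thread_level) thread_level = to;
}

/* Raising to level N requires executing from a level-N (or higher)
   instruction segment. */
static int raise_from(int seg_level, int to)
{
    if (to <= seg_level) { thread_level = to; return 1; }
    return 0;
}

/* Changing the level of a level-N page likewise requires level-N code. */
static int retag(int seg_level, int *page_level, int new_level)
{
    if (*page_level <= seg_level) { *page_level = new_level; return 1; }
    return 0;
}

int main(void)
{
    int rt_page = 3;            /* run-time system's data page, level 3 */
    thread_level = 3;
    lower(0);                                 /* drop to user level     */
    printf("user retag: %d\n", retag(0, &rt_page, 0));     /* denied    */
    printf("rt raise:   %d\n", raise_from(3, 3));          /* allowed   */
    printf("rt retag:   %d\n", retag(3, &rt_page, 0));     /* allowed   */
    return 0;
}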

That would enable a massive improvement in the reliability of
run-time systems and debuggers, by protecting their data from being
trashed by the program, by accident or simplistic attacks.  Yes,
I know that debuggers shouldn't rely on anything in the debugged
process (and especially not execute code there), but the ghastly
hardware and operating system facilities often make that unavoidable.

The point about Turing completeness is true but irrelevant.
Theoretical security is possible but, in practice, it is a game
between the hacker and spook, whether in computing, war, or the
suppression of dissent.  And better tools for one side obviously
help that side.


Regards,
Nick Maclaren.
nmm12 (1380)
10/23/2011 8:50:39 AM
In article <y7Ioq.5255$ZO7.5060@newsfe05.iad>, rbmyersusa@gmail.com 
(Robert Myers) wrote:

>  What's the difference between that and "programmable?"

 The truth table for something like an AND gate is deterministic. You 
know what you are going to get out given an input. It is possible 
(though not always practical) to write a truth table for any piece of 
fixed logic, however complex. Of course, once you have the table you may 
be able to implement it some other way. One trick in the 8-bit days was 
to use a PROM.
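
A minimal sketch of the PROM trick in C (the function burned in is an 
arbitrary choice for illustration): any fixed combinational function of 
n input bits becomes a 2^n-entry table, and "computing" it is a single 
lookup.

#include <stdio.h>

static unsigned char prom[256];         /* one byte per 8-bit input */

static void burn_prom(void)
{
    for (int in = 0; in < 256; in++) {
        unsigned char p = 0;
        for (int b = 0; b < 8; b++)
            p ^= (in >> b) & 1;         /* 8-input parity "circuit"       */
        prom[in] = p;                   /* truth table, one row per input */
    }
}

int main(void)
{
    burn_prom();
    printf("parity(0xB6) = %u\n", prom[0xB6]);  /* lookup replaces logic */
    return 0;
}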

 Ken Young
kenney (238)
10/23/2011 4:20:59 PM
On Oct 22, 7:52 pm, Del Cecchi <delcec...@gmail.com> wrote:

> The current model is not the only model.  However, it is quite useful.
> It will endure until it is replaced by something more useful.  It would
> be quite easy (Harvard architecture) to separate instructions from data.

I remember reading about how the Bell System ESS computers worked in a
book by James Martin. I keep wondering when someone will build a
computer like that: a general-purpose PC that loads a program into
memory on a secondary computer, memory that is physically RAM, but
which the secondary computer can only read - and the secondary
computer behaves like a WebTV device.

So you can update its program easily, and you can *explicitly*
download programs with it that you run on your general-purpose PC (so
there is still danger from bad programs getting in thanks to DNS
spoofing) - but malicious code injection and the like becomes well-
nigh impossible.

John Savard
jsavard (654)
10/23/2011 4:22:33 PM
Robert Myers wrote:
> A recent Real World Technology article makes a big deal about the
> difference between programmable and non-programmable logic in talking
> about Intel's latest GPUs.

This is probably because Intel (in Larrabee) worked from the assumption 
that an almost pure sw approach could be sufficient, with very few 
hardwired logic units to help out (basically the specialized 
load/unpack/swizzle unit and the scatter/gather unit).

Performance/throughput was from having a _lot_ of cores, and very wide 
vector registers (64-byte cache lines).

In this model a programmer is free to basically program any core to do 
any kind of task.

Currently they are shipping much more of a traditional GPU, where all 
the currently standard stages of a graphics pipeline have dedicated hw.

This is of course more energy-efficient, but also much harder to use for 
GPGPU style HPC programming.

Terje
-- 
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"
Terje
10/23/2011 9:28:33 PM
On 10/23/2011 12:20 PM, kenney@cix.compulink.co.uk wrote:
> In article<y7Ioq.5255$ZO7.5060@newsfe05.iad>, rbmyersusa@gmail.com
> (Robert Myers) wrote:
>
>>   What's the difference between that and "programmable?"
>
>   The truth table for something like an And gate is deterministic. You
> know what you are going to get out given an input. It is possible
> (though not always practical) to write a truth table for any piece of
> fixed logic, however complex. Of course, once you have the table you may
> be able to implement it some other way. One trick in the 8 bit days was
> to use a PROM.
>

That's more along the lines of the kind of question I was asking.

If you take a *really* abstract view of digital devices, programmable or 
otherwise, they are devices that map the integers into the integers.

Viewing logic that way got Gödel to his famous result.  I assume that 
Turing must have followed a similar path, although I'm less familiar 
with even the outlines of the proof.  In such an abstract view, the 
distinction between instructions and data disappears, as I x I is 
isomorphic to I.

One way to describe either hardware or software is to write out the map 
from I to I that it defines (the equivalent of your truth table).  In 
practice, that is done in only the simplest of cases, as you have 
already described.

Maybe you need to be a Gödel or a Turing even to formulate the question 
I'd like to ask.  Out of all the gigantic space of maps from I to I, we 
have chosen almost stereotyped procedures for describing those maps. 
The stereotyping necessarily limits the class of maps we can describe 
and has a profound effect on the complexity of the description.

By putting "fixed-function" circuitry into a GPU, you limit the class of 
maps it can span, but you also reduce the complexity of the description 
of the maps and, in practical terms, you can usually do whatever it is 
much faster, in part inevitably because the space you are wandering 
through as you compute is much smaller (you have many fewer choices to 
make and thus many fewer gate delays).

I suppose that I am trying to imagine a less restrictive, more abstract, 
way of doing business--something in the color space bounded by the truth 
table of a nand gate, an assembly language program, and the set of all 
arbitrary maps from the integers to the integers.

My apologies for sounding like another frequent poster here, but I'm 
doing the best I can.

Robert.

rbmyersusa (542)
10/23/2011 10:23:59 PM
On 10/23/2011 5:23 PM, Robert Myers wrote:
> On 10/23/2011 12:20 PM, kenney@cix.compulink.co.uk wrote:
>> In article<y7Ioq.5255$ZO7.5060@newsfe05.iad>, rbmyersusa@gmail.com
>> (Robert Myers) wrote:
>>
>>> What's the difference between that and "programmable?"
>>
>> The truth table for something like an And gate is deterministic. You
>> know what you are going to get out given an input. It is possible
>> (though not always practical) to write a truth table for any piece of
>> fixed logic, however complex. Of course, once you have the table you may
>> be able to implement it some other way. One trick in the 8 bit days was
>> to use a PROM.
>>
>
> That's more along the lines of the kind of question I was asking.
>
> If you take a *really* abstract view of digital devices, programmable or
> otherwise, they are devices that map the integers into the integers.
>
> Viewing logic that way got Gödel to his famous result. I assume that
> Turing must have followed a similar path, although I'm less familiar
> with even the outlines of the proof. In such an abstract view, the
> distinction between instructions and data disappears, as I x I is
> isomorphic to I.
>
> One way to describe either hardware or software is to write out the map
> from I to I that it defines (the equivalent of your truth table). In
> practice, that is done in only the simplest of cases, as you have
> already described.
>
> Maybe you need to be a Gödel or a Turing even to formulate the question
> I'd like to ask. Out of all the gigantic space of maps from I to I, we
> have chosen almost stereotyped procedures for describing those maps. The
> stereotyping necessarily limits the class of maps we can describe and
> has a profound effect on the complexity of the description.
>
> By putting "fixed-function" circuitry into a GPU, you limit the class of
> maps it can span, but you also reduce the complexity of the description
> of the maps and, in practical terms, you can usually do whatever it is
> much faster, in part inevitably because the space you are wandering
> through as you compute is much smaller (you have many fewer choices to
> make and thus many fewer gate delays).
>
> I suppose that I am trying to imagine a less restrictive, more abstract,
> way of doing business--something in the color space bounded by the truth
> table of a nand gate, an assembly language program, and the set of all
> arbitrary maps from the integers to the integers.
>
> My apologies for sounding like another frequent poster here, but I'm
> doing the best I can.
>
> Robert.
>

How does time or sequentiality enter into this abstraction?  Is there a 
difference between a shift register and a wire?
delcecchi (122)
10/24/2011 12:46:41 AM
On 10/23/2011 8:46 PM, Del Cecchi wrote:

>
> How does time or sequentiality enter into this abstraction? Is there a
> difference between a shift register and a wire?

I think you are asking me to answer a deep question about the nature of 
concurrency that I am not qualified to answer.  I don't think I can even 
formulate the question properly.

Even circuitry that would fit my abstraction uses concurrency 
internally, but we need not care.  In the end, the device maps input to 
output.

Let's assume, for the moment, that we are willing to limit ourselves to 
algorithms that can be serialized.  I think that all algorithms that can 
be done on a Turing machine fit into that category.

Robert.

rbmyersusa (542)
10/24/2011 1:27:32 AM
Robert Myers <rbmyersusa@gmail.com> writes:


> If you take a *really* abstract view of digital devices, programmable
> or otherwise, they are devices that map the integers into the
> integers.

That basically takes you back to the Turing model of computing, where a
program takes all of its inputs when it starts and produces all of its
output when it ends.  Whether they are integers or tapes containing
symbols is of little consequence.

Modern computers interact, so you need a potentially infinite sequence
of inputs (events) and a potentially infinite sequence of outputs
(responses).  In addition to being potentially infinite, there is also a
possibility that future events may be influenced by past responses, so
you can't assume all inputs are available from the beginning: Some
inputs _are_ only available some time after certain outputs are
produced.

But you can see interactive computation as a function that maps a pair
of integers to a pair of integers and which is iterated in the following
way: One of the output integers (the state) is fed back as one of the
input integers and the other acts as input and output.  A zero input or
output could mean that no value is read or written, so you don't need to
have a strict interleaving of input-output-input-output ... .

Interrupts and so on can be modelled as inputs.  All that is required is
that there is a bound on the amount of computation between each i/o
step.
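
A minimal sketch of that iteration in C (the step function is a made-up 
example; zero stands for "no value", as above):

#include <stdio.h>

typedef struct { long state; long output; } step_result;

/* One bounded step: map (state, input) to (state, output). */
static step_result step(long state, long input)
{
    step_result r;
    r.state  = state + 1;       /* fed back as input next time around */
    r.output = input * state;   /* response to this event (maybe 0)   */
    return r;
}

int main(void)
{
    long state = 1, input;
    while (scanf("%ld", &input) == 1) {   /* potentially infinite events */
        step_result r = step(state, input);
        state = r.state;
        if (r.output != 0)
            printf("%ld\n", r.output);    /* ... and responses */
    }
    return 0;
}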

	Torben
torbenm265 (288)
10/24/2011 8:36:06 AM
On 10/24/2011 4:36 AM, Torben Ægidius Mogensen wrote:
> Robert Myers<rbmyersusa@gmail.com>  writes:
>
>
>> If you take a *really* abstract view of digital devices, programmable
>> or otherwise, they are devices that map the integers into the
>> integers.
>
> That basically takes you back to the Turing model of computing, where a
> program takes all of its inputs when it starts and produces all of its
> output when it ends.  Whether they are integers or tapes containing
> symbols is of little consequence.
>
> Modern computers interact,...

That is a subject computer architecture has been stuck on, and one that 
doesn't interest me.  I wrote a post trying to explain why I was unenthusiastic 
about IBM building "supercomputers."  My objection is that, 
fundamentally, IBM has to be obsessed with interaction, because that's 
what machines designed for business have to do.

My willingness to discard interaction as an issue may have big 
consequences or no consequences.

Most who study mathematics are at some point put off by the bag of 
special tricks you are constantly being hit with.  I reached that point 
when I took integral calculus.  I could learn the special tricks well 
enough. Some of them I even remember.  What was completely missing was 
any kind of unifying principle.  Sometimes, if you pursue a subject in 
mathematics with sufficient doggedness, you will discover that the 
special tricks are only clues to something much more general and powerful.

Computer architecture is stuck at the bag of special tricks stage.  If I 
were lucky, I assume that the abstraction that has occurred to me would 
eventually lead me down some familiar paths.  The equivalent of an 
uncomputable number is an algorithm whose shortest expression is of 
roughly the same size as writing out the input-output table--hardly a 
useful discovery for either hardware or software.

Robert.
rbmyersusa (542)
10/24/2011 11:01:45 PM
On 10/24/2011 7:01 PM, Robert Myers wrote:

> The equivalent of an
> uncomputable number is an algorithm whose shortest expression is of
> roughly the same size as writing out the input-output table--hardly a
> useful discovery for either hardware or software.
>

But I suspect that an answer to the question Eugene Miya asked long ago 
about complexity is lurking in there.  An algorithm becomes more and 
more complex as the number of symbols required to write it out 
approaches the size (by some measure, possibly with the appearance of a 
logarithm) of just writing out the input-output table.
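
One standard way to make that precise is Kolmogorov complexity: the 
complexity of f is the length of the shortest program that computes it. 
A boolean function f of n input bits takes about 2^n bits to tabulate, 
so K(f) <= 2^n + O(1) for every such f, and a simple counting argument 
shows that almost all functions sit within a constant of that bound 
(programs shorter than 2^n - c bits can cover at most a 2^-c fraction 
of the 2^(2^n) possible tables).  The logarithm one expects shows up 
because an index into a table of N entries costs log2(N) bits.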

Robert.


rbmyersusa (542)
10/25/2011 2:56:19 AM
On 10/24/2011 4:01 PM, Robert Myers wrote:

> Computer architecture is stuck at the bag of special tricks stage.

Which is exactly why I want to collect as many of the special tricks as 
I can in one place, and, I hope, start seeing the patterns.
andy (402)
10/25/2011 4:08:56 AM
Robert Myers wrote:
> On 10/24/2011 7:01 PM, Robert Myers wrote:
>
>> The equivalent of an
>> uncomputable number is an algorithm whose shortest expression is of
>> roughly the same size as writing out the input-output table--hardly a
>> useful discovery for either hardware or software.
>>
>
> But I suspect that an answer to the question Eugene Miya asked long ago
> about complexity is lurking in there. An algorithm becomes more and more
> complex as the number of symbols required to write it out approaches the
> size (by some measure, possibly with the appearance of a logarithm), of
> just writing out the input-output table.

There are lots and lots of problems where the best solution is indeed a 
full or partial lookup table. :-)

It is particularly when such a lookup table can simultaneously handle 
all corner cases, turning them into straight-line code, that it really 
helps out.

I.e. you can generate a very fast case-insensitive Boyer-Moore search 
this way, even when the input characters can be 16 or 32 bits or consist 
of variable-length UTF-8 encodings.

Solving it procedurally can be orders of magnitude slower.

In the same BM algorithm you can unroll the search without checking the 
current answer before the end, if the table lookup keeps the current 
search position constant for a matching value.
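
A sketch in C of the single-byte case (a Boyer-Moore-Horspool variant 
rather than full BM, with case folded into the skip table at build time; 
the wide-character and UTF-8 cases need bigger tables, but the same idea):

#include <ctype.h>
#include <stdio.h>
#include <string.h>

static const char *search_ci(const char *hay, size_t n,
                             const char *pat, size_t m)
{
    size_t skip[256];
    for (size_t i = 0; i < 256; i++)
        skip[i] = m;                       /* default shift: whole pattern */
    for (size_t i = 0; i + 1 < m; i++) {   /* both cases folded up front   */
        skip[tolower((unsigned char)pat[i])] = m - 1 - i;
        skip[toupper((unsigned char)pat[i])] = m - 1 - i;
    }
    for (size_t pos = 0; pos + m <= n;
         pos += skip[(unsigned char)hay[pos + m - 1]]) {
        size_t i = m;
        while (i > 0 && tolower((unsigned char)hay[pos + i - 1]) ==
                        tolower((unsigned char)pat[i - 1]))
            i--;
        if (i == 0)
            return hay + pos;              /* match found */
    }
    return NULL;
}

int main(void)
{
    const char *hay = "Searching With Boyer-Moore";
    const char *hit = search_ci(hay, strlen(hay), "boyer", 5);
    printf("%s\n", hit ? hit : "not found");
    return 0;
}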

Terje
-- 
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"
Terje
10/25/2011 7:18:09 AM
On 10/24/2011 6:01 PM, Robert Myers wrote:
> On 10/24/2011 4:36 AM, Torben Ægidius Mogensen wrote:
>> Robert Myers<rbmyersusa@gmail.com> writes:
>>
>>
>>> If you take a *really* abstract view of digital devices, programmable
>>> or otherwise, they are devices that map the integers into the
>>> integers.
>>
>> That basically takes you back to the Turing model of computing, where a
>> program takes all of its inputs when it starts and produces all of its
>> output when it ends. Whether they are integers or tapes containing
>> symbols is of little consequence.
>>
>> Modern computers interact,...
>
> A subject that computer architecture has been stuck on that doesn't
> interest me. I wrote a post trying to explain why I was unenthusiastic
> about IBM building "supercomputers." My objection is that,
> fundamentally, IBM has to be obsessed with interaction, because that's
> what machines designed for business have to do.
>
Wow, you snuck that one right by me.

> My willingness to discard interaction as an issue may have big
> consequences or no consequences.
>
> Most who study mathematics are at some point put off by the bag of
> special tricks you are constantly being hit with. I reached that point
> when I took integral calculus. I could learn the special tricks well
> enough. Some of them I even remember. What was completely missing was
> any kind of unifying principle. Sometimes, if you pursue a subject in
> mathematics with sufficient doggedness, you will discover that the
> special tricks are only clues to something much more general and powerful.

In the case of integrals I don't think there is anything but a bag of 
tricks.  That is why the CRC book had a table of integrals.  But then I am 
not "the mathematician that others all quote, the greatest that ever got 
chalk on his coat"
>
> Computer architecture is stuck at the bag of special tricks stage. If I
> were lucky, I assume that the abstraction that has occurred to me would
> eventually lead me down some familiar paths. The equivalent of an
> uncomputable number is an algorithm whose shortest expression is of
> roughly the same size as writing out the input-output table--hardly a
> useful discovery for either hardware or software.
>
> Robert.

delcecchi (122)
10/26/2011 12:53:06 AM
On 10/25/2011 8:53 PM, Del Cecchi wrote:

>>
>> Most who study mathematics are at some point put off by the bag of
>> special tricks you are constantly being hit with. I reached that point
>> when I took integral calculus. I could learn the special tricks well
>> enough. Some of them I even remember. What was completely missing was
>> any kind of unifying principle. Sometimes, if you pursue a subject in
>> mathematics with sufficient doggedness, you will discover that the
>> special tricks are only clues to something much more general and
>> powerful.
>
> In the case of integrals I don't think there is anything but a bag of
> tricks. That is why the CRC book had a table of integrals. But then I am
> not "the mathematician that others all quote, the greatest that ever got
> chalk on his coat"

Well, Del, today is your lucky day.  Just as with many things in EE, if 
you take integrals over into the complex plane, things look much less 
arbitrary and much more systematic.

These mysteries I learned from Norman Levinson, the last to make any 
really significant progress on the zeroes of the Riemann zeta function. 
  He had been Norbert Wiener's student.  Not the greatest mathematicians 
ever to have gotten chalk on their coats, perhaps, but a far step from 
most first year calculus instructors.  As for me, I got one of the two 
C's in the class.  The other student to receive a C is famous.

Robert.
rbmyersusa (542)
10/26/2011 1:29:56 AM
Andy \"Krazy\" Glew wrote:
> 
> On 10/24/2011 4:01 PM, Robert Myers wrote:
> 
> > Computer architecture is stuck at the bag of special tricks stage.
> 
> Which is exactly why I want to collect as many of the special tricks as
> I can in one place, and, I hope, start seeing the patterns.

Hmmm.... sort of like the evolution of electric motors.
There were motors going back to the 1820's but good ones
didn't appear until the 1870's.  I guess it took a while
to really understand how to design a magnetic circuit.
nospam121 (105)
10/26/2011 2:25:19 AM
On 10/25/2011 7:25 PM, Mark Thorson wrote:
> Andy \"Krazy\" Glew wrote:
>>
>> On 10/24/2011 4:01 PM, Robert Myers wrote:
>>
>>> Computer architecture is stuck at the bag of special tricks stage.
>>
>> Which is exactly why I want to collect as many of the special tricks as
>> I can in one place, and, I hope, start seeing the patterns.
>
> Hmmm.... sort of like the evolution of electric motors.
> There were motors going back to the 1820's but good ones
> didn't appear until the 1870's.  I guess it took a while
> to really understand how to design a magnetic circuit.

and, something I heard before (from someone IRL, unverified), was that 
it wasn't until around the 1940s or so that someone had the idea of 
replacing the cloth- and paper-wrapped windings with lacquered windings.

cr88192355 (1928)
10/26/2011 2:45:25 AM
In article <j87lli$9le$1@dont-email.me>,
Del Cecchi  <delcecchi@gmail.com> wrote:
>>
>> Most who study mathematics are at some point put off by the bag of
>> special tricks you are constantly being hit with. I reached that point
>> when I took integral calculus. I could learn the special tricks well
>> enough. Some of them I even remember. What was completely missing was
>> any kind of unifying principle. Sometimes, if you pursue a subject in
>> mathematics with sufficient doggedness, you will discover that the
>> special tricks are only clues to something much more general and powerful.
>
>In the case of integrals I don't think there is anything but a bag of 
>tricks.  That is why the CRC book had a table of integrals.  But then I am 
>not "the mathematician that others all quote, the greatest that ever got 
>chalk on his coat"

Nor am I, but I used to be a fairly decent one.  This is a bit tricky,
because there are some very simple underlying principles - at the
theoretical end - but the gap between that and the formulae that
engineers need is immense, with some systematic links for some
classes of integral but only ad hoc ones for others.


Regards,
Nick Maclaren.
nmm12 (1380)
10/26/2011 6:55:08 AM
On Oct 25, 7:29 pm, Robert Myers <rbmyers...@gmail.com> wrote:

> Well, Del, today is your lucky day.  Just as with many things in EE, if
> you take integrals over into the complex plane, things look much less
> arbitrary and much more systematic.

The complex plane certainly is significant to analysis.

Thus, each differentiable function becomes a conformal mapping.

Doing integration without special tricks... well, one can use _series
methods_. But if you want an analytical expression of your integral,
yes, it's tricky, because that's the kind of mapping that the mapping
from a function to its derivative happens to be, because of how the
chain rule and the product rule happen to work.

No magic is going to make it go away, and integration is so vitally
necessary to applied mathematics that we can't afford to just throw up
our hands and say we don't like its inelegance.

John Savard
jsavard (654)
10/26/2011 3:57:10 PM
On 26 Oct, 11:57, Quadibloc <jsav...@ecn.ab.ca> wrote:
> On Oct 25, 7:29 pm, Robert Myers <rbmyers...@gmail.com> wrote:
>
> > Well, Del, today is your lucky day.  Just as with many things in EE, if
> > you take integrals over into the complex plane, things look much less
> > arbitrary and much more systematic.
>
> The complex plane certainly is significant to analysis.
>
> Thus, each differentiable function becomes a conformal mapping.
>
> Doing integration without special tricks... well, one can use _series
> methods_. But if you want an analytical expression of your integral,
> yes, it's tricky, because that's the kind of mapping that the mapping
> from a function to its derivative happens to be, because of how the
> chain rule and the product rule happen to work.
>
> No magic is going to make it go away, and integration is so vitally
> necessary to applied mathematics that we can't afford to just throw up
> our hands and say we don't like its inelegance.
>

Many important integrals can be done straightforwardly using contour
integration and Cauchy's formula.

I would venture to say that the integrals that can be done that way 
come up in practice far more commonly than most of the integrals taught 
in elementary calculus.
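
The standard first example: to evaluate the integral of dx/(1 + x^2) 
from -infinity to +infinity, close the contour with a large semicircle 
in the upper half-plane (whose contribution vanishes), pick up the one 
enclosed pole at z = i, and the residue theorem gives

   2*pi*i * Res[z=i] 1/(1 + z^2) = 2*pi*i * (1/(2i)) = pi

with no substitution tricks anywhere.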

A few years back, I tutored a student taking advanced-placement
calculus as a senior in high school.  One of the homework problems
could be done (so far as I could see) only by using a non-obvious 
trigonometric substitution.  Even the instructor in class couldn't do 
the problem, and when he pressed my student about how he had managed to 
do it, my student caved and admitted that he had had help.  There is 
probably some pedagogic value in trying to crack such nuts, but, as to 
their usefulness in everyday practice, I never saw it.

Robert.

rbmyersusa (542)
10/26/2011 6:11:25 PM
On 10/26/2011 1:11 PM, Robert Myers wrote:
> On 26 oct, 11:57, Quadibloc<jsav...@ecn.ab.ca>  wrote:
>> On Oct 25, 7:29 pm, Robert Myers<rbmyers...@gmail.com>  wrote:
>>
>>> Well, Del, today is your lucky day.  Just as with many things in EE, if
>>> you take integrals over into the complex plane, things look much less
>>> arbitrary and much more systematic.
>>
>> The complex plane certainly is significant to analysis.
>>
>> Thus, each differentiable function becomes a conformal mapping.
>>
>> Doing integration without special tricks... well, one can use _series
>> methods_. But if you want an analytical expression of your integral,
>> yes, it's tricky, because that's the kind of mapping that the mapping
>> from a function to its derivative happens to be, because of how the
>> chain rule and the product rule happen to work.
>>
>> No magic is going to make it go away, and integration is so vitally
>> necessary to applied mathematics that we can't afford to just throw up
>> our hands and say we don't like its inelegance.
>>
>
> Many important integrals can be done straightforwardly using contour
> integration and Cauchy's formula.
>
> I would venture to say that the integrals that come up that can be
> done that way come up in practice far more commonly than most of the
> integrals taught in elementary calculus.
>
> A few years back, I tutored a student taking advanced-placement
> calculus as a senior in high school.  One of the homework problems
> could be done (so far as I could see) only by using a non-obvious
> trigonometric substitution.  Even the instructor in class couldn't do
> the problem, and when he pressed my student about how he had managed to
> do it, my student caved and admitted that he had had help.  There is
> probably some pedagogic value in trying to crack such nuts, but, as to
> their usefulness in everyday practice, I never saw it.
>
> Robert.
>
Brings back memories of taking "vector calculus" from the math 
department as a senior EE.  Huge mistake, since we had already learned 
all that material in EM fields class.  But it was an easy 3 credit A.  I 
could have done the midterm in my sleep.

As for integral calculus, it was a collection of tricks.  "Try letting 
x = cos u and ...."  No turning the crank at all.  You had to be able to 
recognize that it looked like something you would get if ....

If there was some underlying principle or technique, the math professors 
in the late 60's didn't tell us about it.


delcecchi (122)
10/26/2011 9:43:15 PM
ummm...

f(n,m)(x) = Sum(x^K * ln(f(n)(x))^L / f(m)(x))

Which would include all functions f(nest)(var) being analytical. A generalization of polynomials. The 1/x <-> ln(x) on the L @ 0 thing is perhaps the focus of this research.

Cheers Jacko
jackokring (1001)
10/26/2011 9:45:53 PM
On Monday, October 24, 2011 6:01:45 PM UTC-5, Robert Myers wrote:
> Computer architecture is stuck at the bag of special tricks stage.

And has been since about 1973. Back then they had caches, reservation stations, pipelined function units, virtual memory, multi-banked memory systems, and even rudimentary branch predictors.

About the only developments seen since are in predictors and decode width.

Mitch
MitchAlsup (270)
10/27/2011 1:03:15 AM
> About the only developments seen are in predictors, and decode-width.

Don't forget memory consistency,


        Stefan
monnier (242)
10/27/2011 1:43:13 AM
Andy \"Krazy\" Glew wrote:
> 
> On 10/24/2011 4:01 PM, Robert Myers wrote:
> 
> > Computer architecture is stuck at the bag of special tricks stage.
> 
> Which is exactly why I want to collect as many of the special tricks as
> I can in one place, and, I hope, start seeing the patterns.

I just remembered there is sort of a model for that.
_The_Machine_Gun_ by Chinn is a series of 5 volumes
(later a sixth was added) published in the 1950's
which is a comprehensive review of machine gun technology.
Volume 1 is mostly history, and I forget what the other
volumes specifically cover, but one volume was sort of a
classification of different approaches to each facet
of design.  For example, most machine guns have a way
to quickly change barrels.  There's only a few different
ways to attach the barrel, and an example of each is shown
as drawings that highlight the essential features.
It's really a great example of an expert laying out
all the choices facing a machine gun designer.
nospam121 (105)
10/30/2011 12:39:38 AM
On 10/29/2011 5:39 PM, Mark Thorson wrote:
> Andy \"Krazy\" Glew wrote:
>>
>> On 10/24/2011 4:01 PM, Robert Myers wrote:
>>
>>> Computer architecture is stuck at the bag of special tricks stage.
>>
>> Which is exactly why I want to collect as many of the special tricks as
>> I can in one place, and, I hope, start seeing the patterns.
>
> I just remembered there is sort of a model for that.
> _The_Machine_Gun_ by Chinn is a series of 5 volumes
> (later a sixth was added) published in the 1950's
> which is a comprehensive review of machine gun technology.
> Volume 1 is mostly history, and I forget what the other
> volumes specifically cover, but one volume was sort of a
> classification of different approaches to each facet
> of design.  For example, most machine guns have a way
> to quickly change barrels.  There's only a few different
> ways to attach the barrel, and an example of each is shown
> as drawings that highlight the essential features.
> It's really a great example of an expert laying out
> all the choices facing a machine gun designer.


yep.

I remember writing, a while back, about ideas for various ways to 
handle dynamic type-checking.


meanwhile, a lot of other information one can find on the internet only 
naively references a single possibility, such as "tagged unions" or 
"tagged pointers".

common C++ ABIs also have a scheme for doing RTTI (generally entry 0 in 
the vtable pointing to a class-info structure or similar).

so:
object-layout:
	vtable pointer;
	data...
vtable-layout:
	class-info pointer;
	virtual methods...
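
a rough sketch of that layout in plain C (names invented; real ABIs 
differ in details--e.g. the Itanium C++ ABI actually puts the RTTI 
pointer just *before* the address the vtable pointer refers to):

#include <stdio.h>

typedef struct { const char *class_name; } class_info;

typedef struct {
    const class_info *info;        /* "entry 0": type information */
    void (*speak)(void *self);     /* virtual methods follow      */
} vtable;

typedef struct {
    const vtable *vt;              /* every object starts with this */
    int data;
} object;

static const class_info dog_info = { "Dog" };
static void dog_speak(void *self) { printf("woof %d\n", ((object *)self)->data); }
static const vtable dog_vt = { &dog_info, dog_speak };

int main(void)
{
    object d = { &dog_vt, 42 };
    printf("type: %s\n", d.vt->info->class_name);   /* RTTI lookup  */
    d.vt->speak(&d);                                /* virtual call */
    return 0;
}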


meanwhile, I use a scheme which keeps a lot of this more hidden 
(technically the info is either stored in the MM/GC's object header, or 
inferred from the memory address), but just from what I can gather, as 
far as most of the internet is concerned, this strategy does not exist.

granted, the system as it exists is partly an outgrowth of the 
tradeoffs of working with dynamic types and garbage-collection in C code 
(it emerged gradually as various strategies competed in terms of 
convenience vs. performance vs. memory overhead).


but, yeah, there are many things like this...
cr88192355 (1928)
10/30/2011 4:54:05 PM
> I remember writing, a while back, about ideas for various ways to handle
> dynamic type-checking.

There's "Representing Type Information in Dynamically Typed Languages"
(ftp://ftp.cs.indiana.edu/pub/scheme-repository/doc/pubs/typeinfo.ps.gz)


        Stefan
monnier (242)
10/30/2011 5:50:29 PM
On 10/30/2011 10:50 AM, Stefan Monnier wrote:
>> I remember writing, a while back, about ideas for various ways to handle
>> dynamic type-checking.
>
> There's "Representing Type Information in Dynamically Typed Languages"
> (ftp://ftp.cs.indiana.edu/pub/scheme-repository/doc/pubs/typeinfo.ps.gz)
>

seems I already had a copy of this paper...
however, lacking a PostScript viewer, I invoked Ghostscript to convert 
it into a PDF so I could view it in Acrobat Reader.

but, anyways, by this paper, my scheme is a hybrid of object-pointers 
and typed-locations.

in the case of my system, the GC stores the type in the memory-block 
header (along with the object size, some GC-related bits, ...). this is 
mostly hidden from any client code though (it sees an interface which 
looks/behaves much like "malloc", but with an optional type-name field).

generally, I represent most dynamic types as string-based names 
(internally to the GC though, they are represented as type-numbers which 
are keyed to a hash table, with a number of special-case types having 
been micro-optimized).
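
a minimal sketch of the header trick (hypothetical API; the real thing 
also carries GC bits, size rounding, and the type-number table):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    size_t size;
    const char *type_name;      /* interned name or type number */
} gc_header;

static void *gc_alloc(size_t size, const char *type_name)
{
    gc_header *h = malloc(sizeof *h + size);
    if (!h) return NULL;
    h->size = size;
    h->type_name = type_name;
    return h + 1;               /* client sees only the payload */
}

static const char *gc_typeof(void *p)
{
    return ((gc_header *)p - 1)->type_name;   /* header sits just below */
}

int main(void)
{
    char *s = gc_alloc(16, "string");
    strcpy(s, "hello");
    printf("%s has type %s\n", s, gc_typeof(s));
    return 0;
}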


locations outside the address-space proper are used for encoding some 
special cases, such as the space from 0xC0000000..0xFFFFFFFF on 32-bit 
x86 and ARM (except in certain cases).

on x86-64, a 56-bit space starting at 0x7F000000_00000000 is used 
instead (it is divided up into 48-bit regions).

these regions encode "fixnum" and "flonum" types, among others (allowing 
for 28 bit fixnum and flonum types on x86 and ARM, and 48 on x86-64).

an edge case is if using Linux with a full 4GB addressable space, in 
which case it will resort to using "mmap" to reserve 24-bit regions for 
fixnum and flonum.
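
a sketch of the 32-bit encoding (constants illustrative only; the real 
range is shared among several immediate types, not just fixnums):

#include <stdint.h>
#include <stdio.h>

#define TAG_BASE    0xC0000000u     /* start of the never-mapped range */
#define VALUE_MASK  0x0FFFFFFFu     /* low 28 bits carry the payload   */

static uintptr_t box_fixnum(int32_t v)
{
    return TAG_BASE | ((uint32_t)v & VALUE_MASK);
}

static int is_boxed(uintptr_t w)    /* can never be a real pointer */
{
    return w >= TAG_BASE && w <= 0xFFFFFFFFu;
}

static int32_t unbox_fixnum(uintptr_t w)
{
    uint32_t v = (uint32_t)w & VALUE_MASK;
    if (v & 0x08000000u)
        v |= 0xF0000000u;           /* sign-extend the 28-bit value */
    return (int32_t)v;
}

int main(void)
{
    uintptr_t w = box_fixnum(-42);
    if (is_boxed(w))
        printf("fixnum = %d\n", unbox_fixnum(w));
    return 0;
}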

"cons cells" are also handled specially by the GC, as my scripting 
language (unlike a "proper" JavaScript variant) internally makes fairly 
heavy use of cons-cells and lists (the VM more-or-less descended from a 
Scheme implementation...).


some other parts of the type-system use a JVM-style "signature string" 
system (though the notation is a bit more complex, and is based more 
closely on the IA64 C++ ABI, but is technically a hybrid notation).

there is also a database-system thrown into the mix (albeit a 
system-registry-like hierarchical system), ...

a lot of this is for things like "hey, here is a pointer to some random 
C struct, give me the value of the field named N" and similar (so the 
system will try to figure out the type of struct pointed to by the 
pointer, look up the field, and fetch the value).


or such...
cr88192355 (1928)
10/30/2011 10:12:42 PM