


Errors when cross-compiling the kernel

I'm trying to cross-compile the Raspberry Pi kernel on a Debian 7 virtual 
PC.  I've got quite a way through the process, and it seems to start 
compiling, but I'm getting the following error:

   HOSTLD  scripts/genksyms/genksyms
   CC      scripts/mod/empty.o
/home/david/kern/tools/arm-bcm2708/arm-bcm2708-linux-gnueabi/bin/../lib/gcc/arm-bcm2708-linux-gnueabi/4.7.1/../../../../arm-bcm2708-linux-gnueabi/bin/as: 
error while loading shared libraries: libz.so.1: cannot open shared 
object file: No such file or directory

Any ideas?  I can't find libz.so anywhere....

Thanks,
David
-- 
Cheers,
David
Web: http://www.satsignal.eu
David Taylor, 12/14/2013 2:46:11 PM

David Taylor wrote:

> Any ideas?  I can't find libz.so anywhere....

Maybe this page can help:
http://packages.debian.org/search?searchon=contents&keywords=libz.so


Paul Berger, 12/14/2013 2:58:21 PM
On Sat, 14 Dec 2013 14:46:11 +0000, David Taylor wrote:

>    HOSTLD  scripts/genksyms/genksyms
>    CC      scripts/mod/empty.o
> /home/david/kern/tools/arm-bcm2708/arm-bcm2708-linux-gnueabi/bin/../lib/gcc/arm-bcm2708-linux-gnueabi/4.7.1/../../../../arm-bcm2708-linux-gnueabi/bin/as: 
> error while loading shared libraries: libz.so.1: cannot open shared 
> object file: No such file or directory
>
> Any ideas?  I can't find libz.so anywhere...

It's in the zlib1g package, maybe this one isn't installed?

gregor
-- 
 .''`.  Homepage: http://info.comodo.priv.at/ - OpenPGP key 0xBB3A68018649AA06
 : :' : Debian GNU/Linux user, admin, and developer  -  http://www.debian.org/
 `. `'  Member of VIBE!AT & SPI, fellow of the Free Software Foundation Europe
   `-   NP: Bettina Wegner: Auf der Wiese
gregor herrmann, 12/14/2013 3:20:08 PM
On 14/12/2013 14:58, Paul Berger wrote:
> David Taylor wrote:
>
>> Any ideas?  I can't find libz.so anywhere....
>
> Maybe this page can help:
> http://packages.debian.org/search?searchon=contents&keywords=libz.so

Thanks, Paul.  I wouldn't have known about that page, as I'm more of a 
beginner with Linux.  Now to see why it's not in the RPi download.  The 
git fetch/checkout "download" was faulty, so I had to use the .tar 
download.  But, the exact same .tar download /did/ compile on the RPi.

Right, more progress.  From searching with Google (yes, I should have 
done this first, but I thought it was just me being ham-fisted) it seems 
that the problem is that libz.so is actually a host library, and not a 
Raspberry Pi one.  Further, as I'm using 64-bit Linux on a virtual PC, I 
need to install the 32-bit version of certain libraries, so the next 
part of the magic spell (it seems like that at times!) is:

sudo  dpkg  --add-architecture  i386 # enable multi-arch
sudo  apt-get  update

Then run:
sudo  apt-get  install  ia32-libs

It's taken many days to get this far!
-- 
Cheers,
David
Web: http://www.satsignal.eu
David Taylor, 12/14/2013 4:10:39 PM
On 14/12/2013 15:20, gregor herrmann wrote:
> On Sat, 14 Dec 2013 14:46:11 +0000, David Taylor wrote:
>
>>     HOSTLD  scripts/genksyms/genksyms
>>     CC      scripts/mod/empty.o
>> /home/david/kern/tools/arm-bcm2708/arm-bcm2708-linux-gnueabi/bin/../lib/gcc/arm-bcm2708-linux-gnueabi/4.7.1/../../../../arm-bcm2708-linux-gnueabi/bin/as:
>> error while loading shared libraries: libz.so.1: cannot open shared
>> object file: No such file or directory
>>
>> Any ideas?  I can't find libz.so anywhere...
>
> It's in the zlib1g package, maybe this one isn't installed?
>
> gregor

Thanks, Gregor.  As I mentioned to Paul, installing the 32-bit host 
libraries on the 64-bit Linux I was using fixed the compile problem.  It 
now remains to be seen whether I am brave enough to try my own 
cross-compiled kernel on a real system.  Yes, I will be using a spare SD 
card imaged from the existing working one!  Very nice to be able to 
"backup" and "restore" cards on a house PC.
-- 
Cheers,
David
Web: http://www.satsignal.eu
David Taylor, 12/14/2013 4:52:32 PM
On 14/12/13 16:10, David Taylor wrote:

>
> It's taken many days to get this far!

When I were a lad you had to write the compiler...

-- 
Ineptocracy

(in-ep-toc’-ra-cy) – a system of government where the least capable to 
lead are elected by the least capable of producing, and where the 
members of society least likely to sustain themselves or succeed, are 
rewarded with goods and services paid for by the confiscated wealth of a 
diminishing number of producers.

The Natural Philosopher, 12/14/2013 6:18:16 PM
On 14/12/2013 18:18, The Natural Philosopher wrote:
> On 14/12/13 16:10, David Taylor wrote:
>
>>
>> It's taken many days to get this far!
>
> When I were a lad you had to write the compiler...
>
Didn't we already do all that a couple of months ago?
Guesser, 12/14/2013 6:59:27 PM
On 14/12/2013 18:18, The Natural Philosopher wrote:
> On 14/12/13 16:10, David Taylor wrote:
>
>>
>> It's taken many days to get this far!
>
> When I were a lad you had to write the compiler...

My first programming task was updating the Assembler on an IBM 1130 to 
accept free-format input, to suit the paper tape the department was 
using rather than punched cards.

It was, IIRC, easier than dealing with Linux!
-- 
Cheers,
David
Web: http://www.satsignal.eu
David Taylor, 12/14/2013 7:17:51 PM
David Taylor wrote:

> On 14/12/2013 18:18, The Natural Philosopher wrote:
>> On 14/12/13 16:10, David Taylor wrote:
>>
>>>
>>> It's taken many days to get this far!
>>
>> When I were a lad you had to write the compiler...
> 
> My first programming task was updating the Assembler on an IBM 1130 to
> accept free-format input, to suit the paper tape the department was
> using rather than punched cards.
> 
> It was, IIRC, easier than dealing with Linux!

Well yeah.  IBM 1130.  In those days machines were too simple and stupid to 
look out for themselves.  GE-415 the same.  Do whatever it was told.  Had no 
choice.

	Mel.

Mel Wilson, 12/14/2013 9:14:32 PM
Mel Wilson <mwilson@the-wire.com> wrote:
> David Taylor wrote:
> 
>> On 14/12/2013 18:18, The Natural Philosopher wrote:
>>> On 14/12/13 16:10, David Taylor wrote:
>>> 
>>>> 
>>>> It's taken many days to get this far!
>>> 
>>> When I were a lad you had to write the compiler...
>> 
>> My first programming task was updating the Assembler on an IBM 1130 to
>> accept free-format input, to suit the paper tape the department was
>> using rather than punched cards.
>> 
>> It was, IIRC, easier than dealing with Linux!
> 
> Well yeah.  IBM 1130.  In those days machines were too simple and stupid to 
> look out for themselves.  GE-415 the same.  Do whatever it was told.  Had no 
> choice.
> 
> 	Mel.

Anything you're used to is easy. Anything you're not used to is hard. ;-)
-- 
-michael - NadaNet 3.1 and AppleCrate II: http://home.comcast.net/~mjmahon
Michael J. Mahon, 12/15/2013 4:23:42 AM
On 15/12/2013 04:23, Michael J. Mahon wrote:
> Mel Wilson <mwilson@the-wire.com> wrote:
[]
>> Well yeah.  IBM 1130.  In those days machines were too simple and stupid to
>> look out for themselves.  GE-415 the same.  Do whatever it was told.  Had no
>> choice.
>>
>> 	Mel.
>
> Anything you're used to is easy. Anything you're not used to is hard. ;-)

Yes, there's an element (or 14) of truth in that.  But waiting 15 
minutes (or 5 hours if you compile on the RPi) to find that something is 
wrong is definitely not as productive as using e.g. Delphi on the PC. 
Virtually instant, and you can check out each change step by step.

I wrote up what I've found so far:
   http://www.satsignal.eu/raspberry-pi/kernel-cross-compile.html

BTW: On the 1130 the only error message you got back was "Error".  Not 
even a line number....
-- 
Cheers,
David
Web: http://www.satsignal.eu
David Taylor, 12/15/2013 2:30:31 PM
On 15/12/13 14:30, David Taylor wrote:
> On 15/12/2013 04:23, Michael J. Mahon wrote:
>> Mel Wilson <mwilson@the-wire.com> wrote:
> []
>>> Well yeah.  IBM 1130.  In those days machines were too simple and
>>> stupid to
>>> look out for themselves.  GE-415 the same.  Do whatever it was told.
>>> Had no
>>> choice.
>>>
>>>     Mel.
>>
>> Anything you're used to is easy. Anything you're not used to is hard. ;-)
>
> Yes, there's an element (or 14) of truth in that.  But waiting 15
> minutes (or 5 hours if you compile on the RPi) to find that something is
> wrong is definitely not as productive as using e.g. Delphi on the PC.
> Virtually instant, and you can check out each change step by step.
>
> I wrote up what I've found so far:
>    http://www.satsignal.eu/raspberry-pi/kernel-cross-compile.html
>
> BTW: On the 1130 the only error message you got back was "Error".  Not
> even a line number....

Always compile first on a native target, if only to check the code for 
syntax errors...

Always use Make to ensure that you don't recompile more than you need 
to at any given stage...

Then, when you have the code working in a simulator/emulator, burn 
your ROM or Flash...



-- 
Ineptocracy

(in-ep-toc’-ra-cy) – a system of government where the least capable to 
lead are elected by the least capable of producing, and where the 
members of society least likely to sustain themselves or succeed, are 
rewarded with goods and services paid for by the confiscated wealth of a 
diminishing number of producers.

The Natural Philosopher, 12/15/2013 3:31:29 PM
On 15/12/2013 15:31, The Natural Philosopher wrote:
[]
> Always compile first on a native target, if only to check the code for
> syntax errors...
>
> Always use Make to ensure that you don't recompile more than you need
> to at any given stage...
>
> Then, when you have the code working in a simulator/emulator, burn
> your ROM or Flash...

Good in theory, but....

When a compile takes a significant part of the day (as with compiling 
the kernel on the RPi), making multiple runs is extremely time 
consuming!  Unfortunately, even if you want to change just one option, 
if it's your first compile it still takes almost all the working day.

What simulator would you recommend for the Raspberry Pi kernel?

BTW: the problem arises because the supplied kernel was compiled with 
tickless, which makes the kernel-mode GPIO/PPS work very poorly. 
Changing this one flag makes a worthwhile improvement, bringing the 
averaged NTP jitter down from 3.9 microseconds to 1.2 microseconds, with 
similar improvements in offset, and correcting an NTP reporting error.

-- 
Cheers,
David
Web: http://www.satsignal.eu
David Taylor, 12/15/2013 5:43:47 PM
On Sun, 15 Dec 2013 17:43:47 +0000, David Taylor
<david-taylor@blueyonder.co.uk.invalid> declaimed the following:

>
>When a compile takes a significant part of the day (as with compiling 
>the kernel on the RPi), making multiple runs is extremely time 
>consuming!  Unfortunately, even if you want to change just one option, 
>if it's your first compile it still takes almost all the working day.
>
	I spent nearly 6 months in the early 80s in an environment where two
builds a day (for a single application) was a good day. Worse was having to
message the sysop to "kill the rabble" (RABL, for ReenABLe -- a batch job
I'd written designed to clear out "connection" state from a "database"); it
meant I'd concluded the entire database needed to be rebuilt (not an
operational database -- though the application itself wasn't considered a
database app; it was a requirements traceability tool, lacking dynamic
table creation -- a few added capabilities would have given it relational
algebra).

	We were porting a FORTRAN-IV application to something I call
FORTRAN-minus-2. The ported code ended up filled with variables named: inx,
linx, jinx, minx, etc. as

	call xyz(a-1, b+2, a+b) 
had to be converted to

	inx = a-1
	linx = b+2
	jinx = a+b
	call xyz(inx, linx, jinx)

as the compiler could not handle expressions as arguments in a subroutine
(or function) call.
-- 
	Wulfraed                 Dennis Lee Bieber         AF6VN
    wlfraed@ix.netcom.com    HTTP://wlfraed.home.netcom.com/
Dennis Lee Bieber, 12/16/2013 1:03:44 AM
On 16/12/13 01:03, Dennis Lee Bieber wrote:
> On Sun, 15 Dec 2013 17:43:47 +0000, David Taylor
> <david-taylor@blueyonder.co.uk.invalid> declaimed the following:
>
>>
>> When a compile takes a significant part of the day (as with compiling
>> the kernel on the RPi), making multiple runs is extremely time
>> consuming!  Unfortunately, even if you want to change just one option,
>> if it's your first compile it still takes almost all the working day.
>>
> 	I spent nearly 6 months in the early 80s in an environment where two
> builds a day (for a single application) was a good day. Worse was having to
> message the sysop to "kill the rabble" (RABL, for ReenABLe -- a batch job
> I'd written designed to clear out "connection" state from a "database"); it
> meant I'd concluded the entire database needed to be rebuilt (not an
> operational database -- though the application itself wasn't considered a
> database app; it was a requirements traceability tool, lacking dynamic
> table creation -- a few added capabilities would have given it relational
> algebra).
>
> 	We were porting a FORTRAN-IV application to something I call
> FORTRAN-minus-2. The ported code ended up filled with variables named: inx,
> linx, jinx, minx, etc. as
>
> 	call xyz(a-1, b+2, a+b)
> had to be converted to
>
> 	inx = a-1
> 	linx = b+2
> 	jinx = a+b
> 	call xyz(inx, linx, jinx)
>
> as the compiler could not handle expressions as arguments in a subroutine
> (or function) call.
>
try coding in C for a 6809 then ...with 256k of memory in paged 
banks... all the library code was 'select which ROM bank to use, call the 
function, get something back in the registers, restore the ROM bank that 
called you, and return;'

We had a DSP co-processor too.  A 400 MHz digital scope, that was.  I'd 
have KILLED for a Pi.


-- 
Ineptocracy

(in-ep-toc’-ra-cy) – a system of government where the least capable to 
lead are elected by the least capable of producing, and where the 
members of society least likely to sustain themselves or succeed, are 
rewarded with goods and services paid for by the confiscated wealth of a 
diminishing number of producers.

The Natural Philosopher, 12/16/2013 2:37:25 AM
Dennis Lee Bieber <wlfraed@ix.netcom.com> wrote:
> On Sun, 15 Dec 2013 17:43:47 +0000, David Taylor
> <david-taylor@blueyonder.co.uk.invalid> declaimed the following:
>
>>
>>When a compile takes a significant part of the day (as with compiling 
>>the kernel on the RPi), making multiple runs is extremely time 
>>consuming!  Unfortunately, even if you want to change just one option, 
>>if it's your first compile it still takes almost all the working day.
>>
> 	I spent nearly 6 months in the early 80s in an environment where two
> builds a day (for a single application) was a good day.

I agree with you.  The kids today can only develop in an IDE where they
can just compile&run with a keypress and have results in a second.
We used to have to wait for hours before the project was compiled and
new tests could be done.

In fact my first experience with programming was in an RJE environment
where you had to submit your source (on cards) and have them back with
a listing (with run results or a listing with syntax errors) the next
working day.

I can tell you this makes you think twice before you code something.
My first program (of course a trivial one) in fact compiled OK on the
first try!  But that was after spending most of the afternoon to check
and double-check (and more) to make sure it was OK, and after the
teacher assured me that it would be impossible to get it OK the first
time.

Having quick turnaround for compile&run IMHO leads to poor software
quality, because the tendency is to get functionality OK by trial and
error (running it until it no longer fails with the test cases at hand)
instead of by carefully looking at the algorithm and its implementation.
Rob, 12/16/2013 9:15:05 AM
Rob <nomail@example.com> wrote:
> Dennis Lee Bieber <wlfraed@ix.netcom.com> wrote:
>> On Sun, 15 Dec 2013 17:43:47 +0000, David Taylor
>> <david-taylor@blueyonder.co.uk.invalid> declaimed the following:
>> 
>>> 
>>> When a compile takes a significant part of the day (as with compiling 
>>> the kernel on the RPi), making multiple runs is extremely time 
>>> consuming!  Unfortunately, even if you want to change just one option, 
>>> if it's your first compile it still takes almost all the working day.
>>> 
>> 	I spent nearly 6 months in the early 80s in an environment where two
>> builds a day (for a single application) was a good day.
> 
> I agree with you.  The kids today can only develop in an IDE where they
> can just compile&run with a keypress and have results in a second.
> We used to have to wait for hours before the project was compiled and
> new tests could be done.
> 
> In fact my first experience with programming was in an RJE environment
> where you had to submit your source (on cards) and have them back with
> a listing (with run results or a listing with syntax errors) the next
> working day.
> 
> I can tell you this makes you think twice before you code something.
> My first program (of course a trivial one) in fact compiled OK on the
> first try!  But that was after spending most of the afternoon to check
> and double-check (and more) to make sure it was OK, and after the
> teacher assured me that it would be impossible to get it OK the first
> time.
> 
> Having quick turnaround for compile&run IMHO leads to poor software
> quality, because the tendency is to get functionality OK by trial and
> error (running it until it no longer fails with the test cases at hand)
> instead of by carefully looking at the algorithm and its implementation.

Glad you said this, because I was about to. ;-)

With one turnaround per day, plus a core dump (yes, it was core memory) on
execution errors, *every* data structure in memory was painstakingly
examined to find multiple problems per compile-execute cycle. Of course the
stack--and the stack "residue" beyond the current top-of-stack--was one of
the first data structures examined forensically. 

Any detail that was not exactly as expected resulted in either finding a
latent error or revising my understanding of the program's behavior, or
(often) both. 

The result was that after several cycles, my understanding of the
implications of the code I had written and its interactions with the
hardware/software environment was richly improved. My confidence in the
code that worked was substantiated and my corrections to code that failed
were well thought out. 

I sometimes found both compiler and OS bugs as well as my own, many of
which did not actually prevent my code from getting correct answers!

When computer cycles are precious, brain cycles are required to wring the
maximum amount of information from each trial run. The effects on both code
quality and programmer confidence (and humility) are remarkable. 

My experience managing today's programmers is that they frequently have no
idea what their code actually does during execution. They are often amazed
when they discover that their use of dynamic storage allocation is wasting
90% of the allocated memory, or that a procedure is being executed two
orders of magnitude more frequently than they expected!  And their tools
and tests, combined with their inaccurate understanding of their code's
behavior, prevent them from finding out. 

They are very poorly prepared to program for performance, since, for
example, they have no practical grasp that a cache miss costs an order of
magnitude more than a hit, and a page miss, perhaps four orders of
magnitude. 

Interactive programming does not preclude the development of craft, but it
apparently significantly impedes it. 

All this becomes practically hopeless in modern application environments
where one's code constantly invokes libraries, that call libraries, etc.,
etc., until "Hello, world" requires thirty million instructions and has a
working set of a hundred megabytes!  

Such progress in the name of eye candy...
-- 
-michael - NadaNet 3.1 and AppleCrate II: http://home.comcast.net/~mjmahon
Michael J. Mahon, 12/16/2013 5:07:47 PM
On 16/12/2013 09:15, Rob wrote:
[]
> Having quick turnaround for compile&run IMHO leads to poor software
> quality, because the tendency is to get functionality OK by trial and
> error (running it until it no longer fails with the test cases at hand)
> instead of by carefully looking at the algorithm and its implementation.

Yes, I also remember the days of queuing, or waiting overnight for a 
run's output to be returned.

I disagree with you about today's development, though.  My experience 
with C/C++ suggests that it's too slow.  Having to wait a few minutes to 
see the effect of a change encourages developers to change too much at 
once, rather than a line at a time.  I find that with Delphi - where it 
really is the instant compile and run you criticise - I make much 
smaller changes and can be sure that each change has worked before 
introducing the next.

I hope the Raspberry Pi encourages similar developments.

(And I think that algorithms are very important.  Many people seem to 
want to do (or to get the compiler to do) minor optimisations of code 
which may work well only on one processor family, whereas my own 
experience suggests that using a profiler to find out where the delays 
are /really/ happening has most often pointed to regions of the program 
where I was not expecting there to be delays, pointing either to less 
than optimum algorithm design or, in one case, some debug code which had 
been left in.)
-- 
Cheers,
David
Web: http://www.satsignal.eu
David Taylor, 12/16/2013 5:15:41 PM
On Sun, 15 Dec 2013 14:30:31 +0000, David Taylor wrote:

> I wrote up what I've found so far:
>    http://www.satsignal.eu/raspberry-pi/kernel-cross-compile.html

Short addition to your script:

With something like

#v+
# arch/arm/configs/bcmrpi_defconfig
export PLATFORM=bcmrpi 
ARCH=arm CROSS_COMPILE=${CCPREFIX} make ${PLATFORM}_defconfig
#v-

you can use the default config instead of an existing one or going
through menuconfig manually.

(Useful if you want to switch to e.g. the rpi-3.10.y branch and don't
have an existing config as a starting point.)

gregor
-- 
 .''`.  Homepage: http://info.comodo.priv.at/ - OpenPGP key 0xBB3A68018649AA06
 : :' : Debian GNU/Linux user, admin, and developer  -  http://www.debian.org/
 `. `'  Member of VIBE!AT & SPI, fellow of the Free Software Foundation Europe
   `-   NP: Nick Cave And The Bad Seeds: Fable Of The Brown Ape
gregor herrmann, 12/16/2013 9:46:54 PM
On Mon, 16 Dec 2013 02:37:25 +0000, The Natural Philosopher wrote:

> try coding in C for a 6809 then ...with 256k of memory in paged
> banks...all the library code was 'select which ROM bank to use, call the
> function, get something back in the registers restore ROM bank that
> called you and return;'
>
Out of curiosity, which OS were you using?

I've used uniFlex on SWTPc boxes but don't remember jumping through those 
hoops (though we were writing in the Sculptor 4GL, which compiled to an 
intermediate interpreted form - and bloody fast too - rather than all the 
way to binary).

I've also got considerable time with OS-9, though on a 68000 rather than 
as level 1 or 2 on a 6809, but am certain that, as level 2 managed memory 
in 4K chunks, it was nothing like as convoluted as the stuff you're 
describing. In fact, once I'd replaced the Microware shell with the EFFO 
one, it was a real pleasure to use.


-- 
martin@   | Martin Gregorie
gregorie. | Essex, UK
org       |
Martin Gregorie, 12/16/2013 10:49:45 PM
On Mon, 16 Dec 2013 17:15:41 +0000, David Taylor wrote:

> I disagree with you about today's development, though.  My experience
> with C/C++ suggests that it's too slow.  Having to wait a few minutes to
> see the effect of a change encourages developers to change too much at
> once, rather than a line at a time.  I find that with Delphi - where it
> really is the instant compile and run you criticise - I make much
> smaller changes and can be sure that each change has worked before
> introducing the next
>
What are you running on? 

My fairly average rig (dual core 3.2 GHz Athlon, 4GB RAM, running Fedora 
18 and using the GNU C compiler) is compiling and linking 2100 statements, 
600k of code, in 1.1 seconds. A complete regression test suite (so far 
amounting to 21 test scripts) runs in 0.38 seconds. All run from a 
console with make for the compile and bash handling regression tests, 
natch, natch.

Put it this way: the build runs way too fast to see what's happening 
while it's running. The regression tests are the same, though, as you 
might hope, they only display script names and any deviations from 
expected results.

> I hope the Raspberry Pi encourages similar developments.
>
It does since it has the same toolset. Just don't expect it to be quite 
as nippy, though intelligent use of make to minimise the amount of work 
involved in a build makes a heap of difference. However, it's quite a bit 
faster than my old OS-9/68000 system ever was, but then again that was 
cranked by a 25MHz 68020 rather than a 800MHz ARM.

I really cut my teeth on an ICL 1902S running a UDAS exec or George 2 and 
like others have said, never expected more than one test shot per day per 
project: the machine was running customers' work during the day, so we 
basically had an overnight development slot and, if we were dead lucky, 
sometimes a second lunchtime slot while the ops had lunch - if we were 
prepared to run the beast ourselves. 

You haven't really programmed unless you've punched your own cards and 
corrected them on a 12 key manual card punch....
 
but tell that to the kids of today....

> (And I think that algorithms are very important.
>
Yes.

> some debug code which had been left in.)
>
I always leave that in, controlled by a command-line option or the 
program's configuration file. Properly managed, the run-time overheads 
are small but the payoff over the years from having well thought-out 
debugging code in production programs is immense.


-- 
martin@   | Martin Gregorie
gregorie. | Essex, UK
org       |
Martin Gregorie, 12/16/2013 11:27:46 PM
On 16/12/2013 23:27, Martin Gregorie wrote:
> On Mon, 16 Dec 2013 17:15:41 +0000, David Taylor wrote:
>
>> I disagree with you about today's development, though.  My experience
>> with C/C++ suggests that it's too slow.  Having to wait a few minutes to
>> see the effect of a change encourages developers to change too much at
>> once, rather than a line at a time.  I find that with Delphi - where it
>> really is the instant compile and run you criticise - I make much
>> smaller changes and can be sure that each change has worked before
>> introducing the next
>>
> What are you running on?
>
> My fairly average rig (dual core 3.2 GHz Athlon, 4GB RAM, running Fedora
> 18 and using the GNU C compiler) is compiling and linking 2100 statements,
> 600k of code, in 1.1 seconds. A complete regression test suite (so far
> amounting to 21 test scripts) runs in 0.38 seconds. All run from a
> console with make for the compile and bash handling regression tests,
> natch, natch.
>
> Put it this way: the build runs way too fast to see what's happening
> while it's running. The regression tests are the same, though, as you
> might hope, they only display script names and any deviations from
> expected results.
>
>> I hope the Raspberry Pi encourages similar developments.
>>
> It does since it has the same toolset. Just don't expect it to be quite
> as nippy, though intelligent use of make to minimise the amount of work
> involved in a build makes a heap of difference. However, it's quite a bit
> faster than my old OS-9/68000 system ever was, but then again that was
> cranked by a 25MHz 68020 rather than a 800MHz ARM.
>
> I really cut my teeth on an ICL 1902S running a UDAS exec or George 2 and
> like others have said, never expected more than one test shot per day per
> project: the machine was running customers' work during the day, so we
> basically had an overnight development slot and, if we were dead lucky,
> sometimes a second lunchtime slot while the ops had lunch - if we were
> prepared to run the beast ourselves.
>
> You haven't really programmed unless you've punched your own cards and
> corrected them on a 12 key manual card punch....
>
> but tell that to the kids of today....
>
>> (And I think that algorithms are very important.
>>
> Yes.
>
>> some debug code which had been left in.)
>>
> I always leave that in, controlled by a command-line option or the
> program's configuration file. Properly managed, the run-time overheads
> are small but the payoff over the years from having well thought-out
> debugging code in production programs is immense.

Martin,

I'm running on a quad-core Windows 7/64 system, and judging the time 
taken to compile the 9 programs in the NTP suite using Visual Studio 
2010.  These are almost always a compile from scratch, and not a 
recompile where little will have changed.  Your 1.1 second figure would 
be more than acceptable, and very similar to what I see when using 
Embarcadero's Delphi which is my prime development environment.

On the RPi I have used Lazarus which is similar, and allows almost 
common code between Windows and Linux programs.

Cards were used by the Computer Department at university when they 
bought an IBM 360, and a room full of card punches was rather noisy!  I 
can't recall now whether it was noisier than the room full of 8-track 
paper tape Flexowriters we at the Engineering Department were using, and 
yes, we did patch those by hand at times.  Almost all of the access to 
the IBM 1130 we had was hands-on by the researchers and some undergraduates.

Leaving debug code is a good idea, except when it accounts for 90% of 
the program's execution time as seen by a real-time profiler.  I do 
still try and make my own code as compact as possible, but particularly 
as fast as possible, and the profiler has been a big help there.  I 
haven't done any serious debugging on the RPi, though - it's been more 
struggling with things like GNU radio build taking 19 hours and then 
failing!
-- 
Cheers,
David
Web: http://www.satsignal.eu
David Taylor, 12/17/2013 8:54:19 AM
In article <l8ncft$4p2$1@dont-email.me>, david-
taylor@blueyonder.co.uk.invalid says...
> 
> On 16/12/2013 09:15, Rob wrote:
> []
> > Having quick turnaround for compile&run IMHO leads to poor software
> > quality, because the tendency is to get functionality OK by trial and
> > error (running it until it no longer fails with the test cases at hand)
> > instead of by carefully looking at the algorithm and its implementation.
> 
> Yes, I also remember the days of queuing, or waiting overnight for a 
> run's output to be returned.
> 
> I disagree with you about today's development, though.  My experience 
> with C/C++ suggests that it's too slow.  Having to wait a few minutes to 
> see the effect of a change encourages developers to change too much at 
> once, rather than a line at a time.  I find that with Delphi - where it 
> really is the instant compile and run you criticise - I make much 
> smaller changes and can be sure that each change has worked before 
> introducing the next

I find that the instant graphical make-a-change, compile and run 
interface encourages the youngsters to try ANYTHING to fix a problem and 
not use any form of version control. Then they go off fixing everything 
else they have now broken, because they did not acquire data first to 
find out where the problem may be, then use debugs or other data to 
prove the area of fault, then prove what the fault is, if necessary 
using pencil, paper and a bit of grey matter.

> I hope the Raspberry Pi encourages similar developments.
> 
> (And I think that algorithms are very important.  Many people seem to 
> want to do (or to get the compiler to do) minor optimisations of code 
> which may work well only on one processor family, whereas my own 
> experience suggests that using a profiler to find out where the delays 
> are /really/ happening has most often pointed to regions of the program 
> where I was not expecting there to be delays, pointing either to less 
> than optimum algorithm design or, in one case, some debug code which had 
> been left in.)

Most people want to put any old code down first, not interested in 
algorithm or design etc..


-- 
Paul Carpenter          | paul@pcserviceselectronics.co.uk
<http://www.pcserviceselectronics.co.uk/>    PC Services
<http://www.pcserviceselectronics.co.uk/pi/>  Raspberry Pi Add-ons
<http://www.pcserviceselectronics.co.uk/fonts/> Timing Diagram Font
<http://www.gnuh8.org.uk/>  GNU H8 - compiler & Renesas H8/H8S/H8 Tiny
<http://www.badweb.org.uk/> For those web sites you hate
Paul Carpenter, 12/17/2013 10:01:33 AM
On 17/12/2013 10:01, Paul wrote:
[]
> I find that the instant graphical make-a-change, compile and run
> interface encourages the youngsters to try ANYTHING to fix a problem and
> not use any form of version control. Then they go off fixing everything
> else they have now broken, because they did not acquire data first to
> find out where the problem may be, then use debugs or other data to
> prove the area of fault, then prove what the fault is, if necessary
> using pencil, paper and a bit of grey matter.
[]

If that's the case, surely they should be better trained in using the 
tools, rather than deliberately making the tools slower and more 
difficult to use?  Give points for algorithm design!

(That originally came out as "give pints" - might be something in that!)
-- 
Cheers,
David
Web: http://www.satsignal.eu
David Taylor, 12/17/2013 11:31:24 AM
David Taylor <david-taylor@blueyonder.co.uk.invalid> wrote:
> On 17/12/2013 10:01, Paul wrote:
> []
>> I find that the instant graphical make-a-change, compile and run
>> interface encourages the youngsters to try ANYTHING to fix a problem and
>> not use any form of version control. Then they go off fixing everything
>> else they have now broken, because they did not acquire data first to
>> find out where the problem may be, then use debugs or other data to
>> prove the area of fault, then prove what the fault is, if necessary
>> using pencil, paper and a bit of grey matter.
> []
> 
> If that's the case, surely they should be better trained in using the
> tools, rather than deliberately making the tools slower and more
> difficult to use?  Give points for algorithm design!
> 
> (That originally came out as "give pints" - might be something in that!)

Certainly training would help, but the critical missing
ingredient--necessitated by cumbersome tools--is the development of
engineering discipline...and that is always in short supply. 
-- 
-michael - NadaNet 3.1 and AppleCrate II: http://home.comcast.net/~mjmahon
Michael J. Mahon, 12/17/2013 3:59:58 PM
On Tue, 17 Dec 2013 08:54:19 +0000, David Taylor wrote:

> On the RPi I have used Lazarus which is similar, and allows almost
> common code between Windows and Linux programs.
>
I don't know about Lazarus, but the C source is identical on the RPi 
since it uses the same GNU C compiler and make that all Linux systems use.
  
> I can't recall now whether it was noisier than the room full of 8-track
> paper tape Flexowriters we at the Engineering Department were using, and
> yes, we did patch those by hand at times.
>
I used those at Uni, but they were feeding an Elliott 503, a set of huge 
grey boxes housing solid state electronics but made entirely with 
discrete transistors. It compiled Algol 60 direct from paper tape and, 
embarrassingly, no matter what I tried on the 1902S, I was never able to 
come near the Elliott's compile times: just shows the inherent superiority 
of 50 microsecond core backing store over 2800 rpm disk drives. 

> Leaving debug code is a good idea, except when it accounts for 90% of
> the program's execution time as seen by a real-time profiler.
>
In that case it was done very badly. The trick of minimising overhead is 
to be able to use something like:

  if (debug)
  {
    /* debug tests and displays */
  }

rather than leaving, e.g. assertions, inline in live code or, worse, 
having debugging code so interwoven with the logic that it can't be 
disabled during normal operation. I agree that the overheads of that 
approach are high, whereas the overheads of several "if (debug)..." 
statements are about as low as it's possible to get.
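
For concreteness, here is a minimal self-contained sketch of that pattern 
in C; the names (the debug flag and a --debug option) are illustrative 
assumptions, not taken from any of the programs discussed here:

  #include <stdio.h>
  #include <string.h>

  static int debug = 0;   /* run-time flag: no rebuild needed for diagnostics */

  static long sum(const long *v, int n)
  {
      long total = 0;
      for (int i = 0; i < n; i++) {
          total += v[i];
          if (debug)   /* the cheap guard described above */
              fprintf(stderr, "sum: i=%d v=%ld total=%ld\n", i, v[i], total);
      }
      return total;
  }

  int main(int argc, char **argv)
  {
      for (int i = 1; i < argc; i++)   /* set the flag from the command line */
          if (strcmp(argv[i], "--debug") == 0)
              debug = 1;

      long data[] = {3, 1, 4, 1, 5};
      printf("%ld\n", sum(data, 5));
      return 0;
  }

Run normally, the guard costs one test per call; run with --debug, the 
same binary traces every step.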


-- 
martin@   | Martin Gregorie
gregorie. | Essex, UK
org       |
Martin Gregorie, 12/17/2013 10:55:04 PM
On 17/12/2013 22:55, Martin Gregorie wrote:
>  The trick of minimising overhead is
> to be able to use something like:
>
>    if (debug)
>    {
>      /* debug tests and displays */
>    }
>

I think you mean
	if (unlikely(debug))
	{
		debug stuff
	}

If you want low impact, then tell the compiler it isn't likely so it can 
twiddle the branch prediction stuff.
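
For reference, unlikely() is not standard C: it is the Linux kernel's 
macro over GCC's __builtin_expect, roughly

	/* as in the kernel's include/linux/compiler.h */
	#define likely(x)    __builtin_expect(!!(x), 1)
	#define unlikely(x)  __builtin_expect(!!(x), 0)

and in user code you would define something like it yourself, or fall 
back to a plain if (debug) on compilers without the builtin.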

> rather than leaving, e.g. assertions, inline in live code or

I don't know which compiler you use, but in mine assert is only compiled 
into code in debug builds. There's nothing left in a non-debug build.

Andy


mm0fmf, 12/18/2013 12:03:10 AM
Martin Gregorie <martin@address-in-sig.invalid> wrote:
> In that case it was done very badly. The trick of minimising overhead is 
> to be able to use something like:
>
>   if (debug)
>   {
>     /* debug tests and displays */
>   }
>
> rather than leaving, e.g. assertions, inline in live code or, worse, 
> having debugging code so interwoven with the logic that it can't be 
> disabled during normal operation.

Normally in C you use the preprocessor to eliminate all debug code at
compile time when it is no longer required, so even the overhead of
the if (debug) and the size of the code in the if statement is no
longer there.
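
A minimal sketch of that approach, with a hypothetical DBG macro (the 
name is an illustrative assumption): built with -DDEBUG the diagnostics 
are present; without it the preprocessor removes them entirely, so there 
is no run-time test and no code-size cost:

  #include <stdio.h>

  #ifdef DEBUG
  #define DBG(...) fprintf(stderr, __VA_ARGS__)   /* kept in -DDEBUG builds */
  #else
  #define DBG(...) ((void)0)                      /* vanishes otherwise */
  #endif

  int main(void)
  {
      int x = 42;
      DBG("x = %d\n", x);   /* no trace of this line in a normal build */
      printf("done\n");
      return 0;
  }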
Rob, 12/18/2013 8:38:29 AM
On 17/12/2013 22:55, Martin Gregorie wrote:
> On Tue, 17 Dec 2013 08:54:19 +0000, David Taylor wrote:
[]
>> Leaving debug code is a good idea, except when it accounts for 90% of
>> the program's execution time as seen by a real-time profiler.
>>
> In that case it was done very badly. The trick of minimising overhead is
> to be able to use something like:
>
>    if (debug)
>    {
>      /* debug tests and displays */
>    }
>
> rather than leaving, e.g. assertions, inline in live code or, worse,
> having debugging code so interwoven with the logic that it can't be
> disabled during normal operation. I agree that the overheads of that
> approach are high, whereas the overheads of several "if (debug)..."
> statements are about as low as it's possible to get.

Not necessarily bad, just doing a lot of stuff not necessary to the 
production version.  But now it's as you recommend - optional - using 
conditional compile or boolean variables as you show.
-- 
Cheers,
David
Web: http://www.satsignal.eu
David Taylor, 12/18/2013 3:07:38 PM
In comp.sys.raspberry-pi message <l8qko8$8do$2@dont-email.me>, Tue, 17
Dec 2013 22:55:04, Martin Gregorie <martin@address-in-sig.invalid>
posted:

>I used those at Uni, but they were feeding an Elliott 503, a set of huge
>grey boxes housing solid state electronics but made entirely with
>discrete transistors. It compiled Algol 60 direct from paper tape and,
>embarrassingly, no matter what I tried on the 1902S, I was never able to
>come near the Elliott's compile times: just shows the inherent superiority
>of 50 microsecond core backing store over 2800 rpm disk drives.

At one stage, I used an Elliott 905, with only paper tape - a 250
char/sec reader, and a punch (and console TTY, maybe?).

By sticking a short program on the end of the Algol compiler, the
compiler could be persuaded to read from a BS4421 interface, initially
with a 1000 char/sec reader.  By instead connecting the BS4421 to the
site Network, a speed of (IIRC) about 6000 char/sec could be obtained.


Earlier, I used an ICT/ICL 1905.  Its CPU had two features not commonly
found in modern machines :

    (1) A machine-code instruction "OBEY",
    (2) A compartment which in ours stored the site engineer's lunch.

-- 
 (c) John Stockton, nr London, UK.  Mail via homepage.  Turnpike v6.05  MIME.
  Web  <http://www.merlyn.demon.co.uk/> - FAQqish topics, acronyms and links;
  Astro stuff via astron-1.htm, gravity0.htm ; quotings.htm, pascal.htm, etc.
Dr John Stockton, 12/18/2013 7:36:47 PM
On Wed, 18 Dec 2013 00:03:10 +0000, mm0fmf wrote:

> I don't know which compiler you use, but in mine assert is only compiled
> into code in debug builds. There's nothing left in a non-debug build.
>
This build of which you speak is the problem with that approach: you'll 
have to recompile the program before you can start debugging the problem, 
while I can simply ask the user to set the debug flag, do it again 
and unset the debug flag.

Your recompile to turn assertions back on can take days in a real life 
situation because you may need to do full release tests and get 
management buy-in before you can let your user run it on live data. 
Alternatively,  it can take at least as long to work out what combo of 
data and user action is needed to duplicate the bug and then make it 
happen on a testing system. Bear in mind that Murphy will make sure this 
happens on sensitive data and that as a consequence you'll have hell's 
delight getting enough access to the live system to work out what 
happened, let alone being able to get hold of sufficient relevant data to 
reproduce the problem.

Two real world examples. In both cases we left debugging code in the 
production system:

(1) The BBC's Orpheus system dealt with very complex musical data and was 
used by extremely bright music planning people. I provided a debug 
control screen for them so they could instantly turn on debug, repeat the 
action and turn debug off: probably took 15-20 seconds to do and I'd get 
the diagnostic output the next day. A significant number of times the 
problem was finger trouble, easy to spot because I had their input and 
easy to talk them through it too. If it was a genuine bug or something 
that needed enhancement, such as searching for classical music works by 
name, I had absolutely all the information we needed to design and 
implement the change: input, program flow, DB access, and output.

(2) We also left debugging in a very high volume system that handled call 
detail records for a UK telco. This used exactly the debug enabling 
method I showed earlier and yet it still managed to process 8000 CDRs/sec 
(or 35,000 phone number lookups/sec if you prefer) and that was back in 
2001 running on a DEC Alpha box. As I said, the overheads of even a few 
tens of "if (debug)" tests per program cycle where invisible in the 
actual scheme of things.

My conclusion is that recompiling to remove well designed debugging code, 
without measuring the effectiveness of doing it, is yet another example of 
premature optimization.
 

-- 
martin@   | Martin Gregorie
gregorie. | Essex, UK
org       |
Martin Gregorie, 12/18/2013 10:17:56 PM
On Wed, 18 Dec 2013 08:38:29 +0000, Rob wrote:

> Martin Gregorie <martin@address-in-sig.invalid> wrote:
>> In that case it was done very badly. The trick of minimising overhead is
>> to be able to use something like:
>>
>>   if (debug)
>>   {
>>     /* debug tests and displays */
>>   }
>>
>> rather than leaving, e.g. assertions, inline in live code or, worse,
>> having debugging code so interwoven with the logic that it can't be
>> disabled during normal operation.
> 
> Normally in C you use the preprocessor to eliminate all debug code at
> compile time when it is no longer required, so even the overhead of the
> if (debug) and the size of the code in the if statement is no longer
> there.
>
Indeed, but why bother unless you have actual measurements that let you 
quantify the trade-off between the performance increase of removing it 
and improved problem resolution in the live environment?
  

-- 
martin@   | Martin Gregorie
gregorie. | Essex, UK
org       |
Martin Gregorie, 12/18/2013 10:21:50 PM
On 18/12/2013 22:17, Martin Gregorie wrote:
> On Wed, 18 Dec 2013 00:03:10 +0000, mm0fmf wrote:
>
>> I don't know which compiler you use, but in mine assert is only compiled
>> into code in debug builds. There's nothing left in a non-debug build.
>>
> you'll
> have to recompile the program before you can start debugging

You may have bugs, I don't! :-)
mm0fmf, 12/18/2013 11:45:02 PM
On Wed, 18 Dec 2013 23:45:02 +0000, mm0fmf <none@mailinator.com> declaimed
the following:

>On 18/12/2013 22:17, Martin Gregorie wrote:
>> On Wed, 18 Dec 2013 00:03:10 +0000, mm0fmf wrote:
>>
>>> I don't know which compiler you use, but in mine assert is only compiled
>>> into code in debug builds. There's nothing left in a non-debug build.
>>>
>> you'll
>> have to recompile the program before you can start debugging
>
>You may have bugs, I don't! :-)

It's an undocumented feature...
-- 
	Wulfraed                 Dennis Lee Bieber         AF6VN
    wlfraed@ix.netcom.com    HTTP://wlfraed.home.netcom.com/
Dennis Lee Bieber, 12/19/2013 12:21:18 AM
In article <l8pcmc$2kv$1@dont-email.me>, david-
taylor@blueyonder.co.uk.invalid says...
> 
> On 17/12/2013 10:01, Paul wrote:
> []
> > I find that the instant graphical make-a-change, compile and run
> > interface encourages the youngsters to try ANYTHING to fix a problem and
> > not use any form of version control. Then they go off fixing everything
> > else they have now broken, because they did not acquire data first to
> > find out where the problem may be, then use debugs or other data to
> > prove the area of fault, then prove what the fault is, if necessary
> > using pencil, paper and a bit of grey matter.
> []
> 
> If that's the case, surely they should be better trained in using the 
> tools, rather than deliberately making the tools slower and more 
> difficult to use?  Give points for algorithm design!

In exams they do, and for documentation, but most coders and the like, 
especially students, are lazy with that and want to play with code, not 
writing things down.

It is not the tools but the tool using them, no matter what training.

> (That originally came out as "give pints" - might be something in 
> that!)



-- 
Paul Carpenter          | paul@pcserviceselectronics.co.uk
<http://www.pcserviceselectronics.co.uk/>    PC Services
<http://www.pcserviceselectronics.co.uk/pi/>  Raspberry Pi Add-ons
<http://www.pcserviceselectronics.co.uk/fonts/> Timing Diagram Font
<http://www.gnuh8.org.uk/>  GNU H8 - compiler & Renesas H8/H8S/H8 Tiny
<http://www.badweb.org.uk/> For those web sites you hate
Paul Carpenter, 12/19/2013 12:22:53 AM
David Taylor <david-taylor@blueyonder.co.uk.invalid> wrote:
> On 17/12/2013 10:01, Paul wrote:
> []
>> I find that the instant graphical make-a-change, compile and run
>> interface encourages the youngsters to try ANYTHING to fix a problem and
>> not use any form of version control. Then they go off fixing everything
>> else they have now broken, because they did not acquire data first to
>> find out where the problem may be, then use debugs or other data to
>> prove the area of fault, then prove what the fault is, if necessary
>> using pencil, paper and a bit of grey matter.
> []
>
> If that's the case, surely they should be better trained in using the 
> tools, rather than deliberately making the tools slower and more 
> difficult to use?  Give points for algorithm design!

I don't propose to make tools slower, maybe a bit more difficult to
use yes.  What I don't like is singlestepping etc.  That encourages
fixing boundary errors by just adding a check or an offset, and also
makes developers believe that they can get a correct algorithm by
just trying test cases until it looks ok.
Rob, 12/19/2013 8:56:25 AM
On 19/12/13 08:56, Rob wrote:
> David Taylor <david-taylor@blueyonder.co.uk.invalid> wrote:
>> On 17/12/2013 10:01, Paul wrote:
>> []
>>> I find that the instant graphical make-a-change, compile and run
>>> interface encourages the youngsters to try ANYTHING to fix a problem and
>>> not use any form of version control. Then they go off fixing everything
>>> else they have now broken, because they did not acquire data first to
>>> find out where the problem may be, then use debugs or other data to
>>> prove the area of fault, then prove what the fault is, if necessary
>>> using pencil, paper and a bit of grey matter.
>> []
>>
>> If that's the case, surely they should be better trained in using the
>> tools, rather than deliberately making the tools slower and more
>> difficult to use?  Give points for algorithm design!
>
> I don't propose to make tools slower, maybe a bit more difficult to
> use yes.  What I don't like is singlestepping etc.  That encourages
> fixing boundary errors by just adding a check or an offset, and also
> makes developers believe that they can get a correct algorithm by
> just trying test cases until it looks ok.
>
The 'IPCC' approach to coding...
...I'll get my coat..

-- 
Ineptocracy

(in-ep-toc’-ra-cy) – a system of government where the least capable to 
lead are elected by the least capable of producing, and where the 
members of society least likely to sustain themselves or succeed, are 
rewarded with goods and services paid for by the confiscated wealth of a 
diminishing number of producers.

The Natural Philosopher, 12/19/2013 9:43:32 AM
On 16/12/2013 21:46, gregor herrmann wrote:
> #v+
> # arch/arm/configs/bcmrpi_defconfig
> export PLATFORM=bcmrpi
> ARCH=arm CROSS_COMPILE=${CCPREFIX} make ${PLATFORM}_defconfig
> #v-

Many thanks for that, Gregor.  I'll have a play.  I did see that 
3.10.23+ was now the current version - and that it has drivers for DVB-T 
sticks.  Apart from that, anything worthwhile in 3.10?  Would I need to 
recompile my customised NTP?

-- 
Cheers,
David
Web: http://www.satsignal.eu
David Taylor, 12/19/2013 7:40:54 PM
On Thu, 19 Dec 2013 08:56:25 +0000, Rob wrote:

> I don't propose to make tools slower, maybe a bit more difficult to use
> yes.  What I don't like is singlestepping etc. 
>
Why not insist on them writing proper test cases before writing or 
compiling any code?  'Proper' involves specifying both inputs and outputs 
(if textual output, to the letter) and including corner cases and 
erroneous inputs as well as straightforward clean-path tests.

I routinely do that for my own code: write a test harness and scripts for 
it. These scripts include expected results either as comments or as 
expected results fields which the test harness checks.
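
A minimal sketch of such a table-driven harness in C; square_clamped() is 
a hypothetical stand-in for the code under test, and each case pairs an 
input with its expected result, corner cases and erroneous inputs included:

  #include <stdio.h>
  #include <stddef.h>

  /* stand-in for the code under test: x*x, or -1 if it would overflow an int */
  static int square_clamped(int x)
  {
      return (x > 46340 || x < -46340) ? -1 : x * x;
  }

  struct test { int input; int expected; const char *name; };

  int main(void)
  {
      static const struct test cases[] = {
          { 0,      0, "zero"           },   /* corner case */
          { 3,      9, "small positive" },
          { -3,     9, "small negative" },
          { 50000, -1, "overflow input" },   /* erroneous input */
      };
      int failures = 0;
      for (size_t i = 0; i < sizeof cases / sizeof cases[0]; i++) {
          int got = square_clamped(cases[i].input);
          if (got != cases[i].expected) {
              printf("FAIL %s: got %d, expected %d\n",
                     cases[i].name, got, cases[i].expected);
              failures++;
          }
      }
      printf("%d failure(s)\n", failures);
      return failures != 0;
  }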


-- 
martin@   | Martin Gregorie
gregorie. | Essex, UK
org       |
Martin Gregorie, 12/19/2013 8:56:52 PM
On Wed, 18 Dec 2013 23:45:02 +0000, mm0fmf wrote:

> On 18/12/2013 22:17, Martin Gregorie wrote:
>> On Wed, 18 Dec 2013 00:03:10 +0000, mm0fmf wrote:
>>
>>> I don't know which compiler you use, but in mine assert is only
>>> compiled into code in debug builds. There's nothing left in a
>>> non-debug build.
>>>
>> you'll have to recompile the program before you can start debugging
> 
> You may have bugs, I don't! :-)

Either that's pure bullshit or you don't test your code properly.


-- 
martin@   | Martin Gregorie
gregorie. | Essex, UK
org       |
Martin Gregorie, 12/19/2013 8:58:35 PM
Martin Gregorie <martin@address-in-sig.invalid> wrote:
> On Thu, 19 Dec 2013 08:56:25 +0000, Rob wrote:
>
>> I don't propose to make tools slower, maybe a bit more difficult to use
>> yes.  What I don't like is singlestepping etc. 
>>
> Why not insist on them writing proper test cases before writing or 
> compiling any code. 'Proper' involves specifying both inputs and outputs
> (if trextual output, to the letter) and including corner cases and 
> erroneous inputs as well as straight forward clean path tests.

Those that cannot devise a properly working algorithm and write the
code that implements it usually cannot write proper testcases either.

Clear examples are code to sort an array or to search a value in a
sorted array using binary search.  Remember "sorting and searching"
by Donald Knuth?

It will take the typical singlestep-modify-test-again programmer many
many iterations before he will be satisfied that the code works OK,
and it will fail within an hour of first release.

The more theoretical approach will require some study but will pay
off in reliability.
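
Binary search is a good example: even the textbook version hides traps. A 
minimal C sketch follows (find_sorted is a hypothetical helper, not from 
the thread); note that computing the midpoint as (lo + hi) / 2 can 
overflow when the indices are plain ints and the array is very large, 
whereas lo + (hi - lo) / 2 cannot:

  #include <stddef.h>

  /* search for key in sorted a[0..n); return its index, or -1 */
  static ptrdiff_t find_sorted(const int *a, size_t n, int key)
  {
      size_t lo = 0, hi = n;             /* half-open interval [lo, hi) */
      while (lo < hi) {
          size_t mid = lo + (hi - lo) / 2;
          if (a[mid] < key)
              lo = mid + 1;              /* key, if present, is above mid */
          else if (a[mid] > key)
              hi = mid;                  /* key, if present, is below mid */
          else
              return (ptrdiff_t)mid;
      }
      return -1;                         /* not found */
  }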
Rob, 12/19/2013 9:09:01 PM
On Thu, 19 Dec 2013 19:40:54 +0000, David Taylor wrote:

> On 16/12/2013 21:46, gregor herrmann wrote:
>> #v+
>> # arch/arm/configs/bcmrpi_defconfig
>> export PLATFORM=bcmrpi
>> ARCH=arm CROSS_COMPILE=${CCPREFIX} make ${PLATFORM}_defconfig
>> #v-
>
> Many thanks for that, Gregor.  I'll have a play.

you're welcome, and I hope you're successful as well.

> I did see that 
> 3.10.23+ was now the current version - and that it has drivers for DVB-T 
> sticks.  Apart from that, anything worthwhile in 3.10?  Would I need to 
> recompile my customised NTP?

that's something I can't answer; it's just that I prefer more recent
kernel versions out of principle :)


gregor
-- 
 .''`.  Homepage: http://info.comodo.priv.at/ - OpenPGP key 0xBB3A68018649AA06
 : :' : Debian GNU/Linux user, admin, and developer  -  http://www.debian.org/
 `. `'  Member of VIBE!AT & SPI, fellow of the Free Software Foundation Europe
   `-   NP: Misha Alperin: Psalm No.1
gregor herrmann, 12/19/2013 9:30:21 PM
On 19/12/2013 20:58, Martin Gregorie wrote:
> On Wed, 18 Dec 2013 23:45:02 +0000, mm0fmf wrote:
>
>> On 18/12/2013 22:17, Martin Gregorie wrote:
>>> On Wed, 18 Dec 2013 00:03:10 +0000, mm0fmf wrote:
>>>
>>>> I don't know which compiler you use, but in mine assert is only
>>>> compiled into code in debug builds. There's nothing left in a
>>>> non-debug build.
>>>>
>>> you'll have to recompile the program before you can start debugging
>>
>> You may have bugs, I don't! :-)
>
> Either that's pure bullshit or you don't test your code properly.
>
>
Mmmmm.... maybe it's time you considered drinking decaf!

;-)
mm0fmf, 12/19/2013 11:10:11 PM
On Thu, 19 Dec 2013 21:09:01 +0000, Rob wrote:

> Those that cannot devise a properly working algorithm and write the code
> that implements it usually cannot write proper testcases either.
>
Probably true, but I'd strongly suggest it is a skill that can be taught, 
though nobody ever bothers to teach it. Evidence: I've *NEVER* seen a 
hint of trying to teach it in any course I've been on or in any 
programming book I've read.

If you've seen this approach to testing taught, then please tell us about 
it.

> Clear examples are code to sort an array or to search a value in a
> sorted array using binary search.  Remember "Sorting and Searching" by
> Donald Knuth?
>
I've not read Knuth, but I own and have read copies of Sedgewick's 
"Algorithms" and Wirth's "Algorithms + Data Structures = Programs", which 
I suspect is a fair approximation to having all four volumes of Knuth, 
and with the added advantage that these use Pascal rather than idealized 
assembler as example code. Sedgewick's code is particularly easy to 
transcribe directly into C. Been there, done that.

> It will take the typical singlestep-modify-test-again programmer many
> many iterations before he will be satisfied that the code works OK, and
> it will fail within an hour of first release.
> 
Very true.

> The more theoretical approach will require some study but will pay off
> in reliability.
>
Dunno about 'theoretical', but if you start cutting code before thinking 
through what you must achieve, preferably by iterating it on paper or at 
least as a test file, until you understand what you're doing and can 
explain why it is the best approach to another programmer, then you're 
heading up a blind alley at full throttle.

On top of that there are probably issues with structuring the code that 
you didn't think of and that will bite your bum unless dealt with. IME 
Wirth's "top-down incremental development" approach helps a lot here. 
Look it up if you've not heard of it. 

This approach solves many of the code structuring problems that bottom-up 
development can cause. Use it and be prepared to redesign/restructure/
replace existing code as soon as you realize that the code organization 
you started with is becoming harder to work with. These difficulties are 
only highlighting issues you should have fixed before starting to cut 
code. The only good way out is to admit that the code you've ended up 
with is crap and do something about it, i.e. rewrite/refactor the ugly 
bits and try to never make that mistake again.


-- 
martin@   | Martin Gregorie
gregorie. | Essex, UK
org       |
Martin Gregorie, 12/20/2013 1:20:19 AM
On Thu, 19 Dec 2013 23:10:11 +0000, mm0fmf wrote:

> On 19/12/2013 20:58, Martin Gregorie wrote:
>> On Wed, 18 Dec 2013 23:45:02 +0000, mm0fmf wrote:
>>
>>> On 18/12/2013 22:17, Martin Gregorie wrote:
>>>> On Wed, 18 Dec 2013 00:03:10 +0000, mm0fmf wrote:
>>>>
>>>>> I don't know which compiler you use, but in mine assert is only
>>>>> compiled into code in debug builds. There's nothing left in a
>>>>> non-debug build.
>>>>>
>>>> you'll have to recompile the program before you can start debugging
>>>
>>> You may have bugs, I don't! :-)
>>
>> Either that's pure bullshit or you don't test your code properly.
>>
>>
> Mmmmm.... maybe it's time you considered drinking decaf!
>
Pure experience over a few decades, dear boy. 

Anybody who claims to have written bugfree code that is more complex than 
"Hello World" is talking out his arse.

-- 
martin@   | Martin Gregorie
gregorie. | Essex, UK
org       |
0
Martin
12/20/2013 1:24:52 AM
On Fri, 20 Dec 2013 01:24:52 +0000, Martin Gregorie wrote:

> On Thu, 19 Dec 2013 23:10:11 +0000, mm0fmf wrote:
> 
>> On 19/12/2013 20:58, Martin Gregorie wrote:
>>> On Wed, 18 Dec 2013 23:45:02 +0000, mm0fmf wrote:
>>>
>>>> On 18/12/2013 22:17, Martin Gregorie wrote:
>>>>> On Wed, 18 Dec 2013 00:03:10 +0000, mm0fmf wrote:
>>>>>
>>>>>> I don't know which compiler you use, but in mine assert is only
>>>>>> compiled into code in debug builds. There's nothing left in a
>>>>>> non-debug build.
>>>>>>
>>>>> you'll have to recompile the program before you can start debugging
>>>>
>>>> You may have bugs, I don't! :-)
>>>
>>> Either that's pure bullshit or you don't test your code properly.
>>>
>>>
>> Mmmmm.... maybe it's time you considered drinking decaf!
>>
> Pure experience over a few decades, dear boy.
> 
> Anybody who claims to have written bugfree code that is more complex
> than "Hello World" is talking out his arse.

I should have added that I've met so-called programmers[*] who couldn't 
write even that without introducing bugs.

* One particularly memorable example cut COBOL I had to fix on the 
infamous GNS Naval Dockyard project. This clown didn't know that COBOL 
code drops through from one paragraph to the next by default and 
consequently wrote code like this:

PARA-1.
   NOTE sentences doing stuff.
   GO TO PARA-2.
PARA-2.
   NOTE more sentences doing stuff.
   ...

Other contractors knew him from previous projects and said that they'd 
never seen him write a working program. He always managed to leave with 
his last paycheck just before the deadline for his program to be 
delivered. He was always known for turning up late, doing sod all during 
the day, staying late and claiming overtime. 


-- 
martin@   | Martin Gregorie
gregorie. | Essex, UK
org       |
0
Martin
12/20/2013 2:39:33 AM
On 20/12/2013 02:39, Martin Gregorie wrote:
> Other contractors knew him from previous projects and said that they'd
> never seen him write a working program. He always managed to leave with
> his last paycheck just before the deadline for his program to be
> delivered. He was always known for turning up late, doing sod all during
> the day, staying late and claiming overtime.
>

Sounds like the guy was a genius to me!

0
Guesser
12/20/2013 2:42:22 AM
On 20/12/13 01:20, Martin Gregorie wrote:
> On Thu, 19 Dec 2013 21:09:01 +0000, Rob wrote:
>
>> Those that cannot devise a properly working algorithm and write the code
>> that implements it usually cannot write proper testcases either.
>>
> Probably true, but I'd strongly suggest it is a skill that can be taught
> but that nobody ever bothers to teach it. Evidence: I've *NEVER* seen a
> hint of trying to teach it in any course I've been on or in any
> programming book I've read.
>

That's probably because you only read books or attended courses on 
'programming' or 'computer science'.

Try reading books on 'software engineering', which cover all of this in 
far more detail.



> If you've seen this approach to testing taught, then please tell us about
> it.
>
>> Clear examples are code to sort an array or to search a value in a
>> sorted array using binary search.  Remember "sorting and searching" by
>> Donald Knuth?
>>
> I've not read Knuth, but I own and have read copies of Sedgewick's
> "Algorithms" and Wirth's "Algorithms + Data Structures = Programs", which
> I suspect is a fair approximation to having all four volumes of Knuth,
> and with the added advantage that these use Pascal rather than idealized
> assembler as example code. Sedgewick's code is particularly easy to
> transcribe directly into C. Been there, done that.
>
>> It will take the typical singlestep-modify-test-again programmer many
>> many iterations before he will be satisfied that the code works OK, and
>> it will fail within an hour of first release.
>>
> Very true.
>
>> The more theoretical approach will require some study but will pay off
>> in reliability.
>>
> Dunno about 'theoretical', but if you start cutting code before thinking
> through what you must achieve, preferably by iterating it on paper or at
> least as a test file, until you understand what you're doing and can
> explain why it is the best approach to another programmer, then you're
> heading up a blind alley at full throttle.
>

indeed.

Been on projects run exactly like that.

> On top of that there are probably issues with structuring the code that
> you didn't think of and that will bite your bum unless dealt with. IME
> Wirth's "top-down incremental development" approach helps a lot here.
> Look it up if you've not heard of it.
>

or bottom up...

> This approach solves many of the code structuring problems that bottom-up
> development can cause. Use it and be prepared to redesign/restructure/
> replace existing code as soon as you realize that the code organization
> you started with is becoming harder to work with. These difficulties are
> only highlighting issues you should have fixed before starting to cut
> code. The only good way out is to admit that the code you've ended up
> with is crap and do something about it, i.e. rewrite/refactor the ugly
> bits and try to never make that mistake again.
>
>
You have to do both. At the bottom end you have to build the sort of 
library of useful objects to deal with the hardware or operating system 
interface. At the top you need a structured approach to map the needs of 
the design into one or more user interfaces, and in between is an unholy 
mess that is not perfectly addressed by either method. In essence 
you have to think about it until you see a way to do it.

In general this takes about three iterations, because that's how long it 
takes to actually fully understand the problem.

Whether those iterations are on paper or in code is scarcely germane, 
the work is the same.

What is not possible is to arrive at a result that is problem free 
without actually understanding the problem fully. That is the mistake we 
are talking about. Top down or bottom up are just places to start. In 
the end you need top to bottom and all places in between.


-- 
Ineptocracy

(in-ep-toc’-ra-cy) – a system of government where the least capable to 
lead are elected by the least capable of producing, and where the 
members of society least likely to sustain themselves or succeed, are 
rewarded with goods and services paid for by the confiscated wealth of a 
diminishing number of producers.

0
The
12/20/2013 7:51:19 AM
On 20/12/13 01:24, Martin Gregorie wrote:
> On Thu, 19 Dec 2013 23:10:11 +0000, mm0fmf wrote:
>
>> On 19/12/2013 20:58, Martin Gregorie wrote:
>>> On Wed, 18 Dec 2013 23:45:02 +0000, mm0fmf wrote:
>>>
>>>> On 18/12/2013 22:17, Martin Gregorie wrote:
>>>>> On Wed, 18 Dec 2013 00:03:10 +0000, mm0fmf wrote:
>>>>>
>>>>>> I don't know which compiler you use, but in mine assert is only
>>>>>> compiled into code in debug builds. There's nothing left in a
>>>>>> non-debug build.
>>>>>>
>>>>> you'll have to recompile the program before you can start debugging
>>>>
>>>> You may have bugs, I don't! :-)
>>>
>>> Either that's pure bullshit or you don't test your code properly.
>>>
>>>
>> Mmmmm.... maybe it's time you considered drinking decaf!
>>
> Pure experience over a few decades, dear boy.
>
> Anybody who claims to have written bugfree code that is more complex than
> "Hello World" is talking out his arse.
>

No, it can be done, just not at the first pass.

It's not hard to write and test bug-free code for all the 
eventualities you thought of; it's what happens when an eventuality you 
didn't think of comes along....


-- 
Ineptocracy

(in-ep-toc’-ra-cy) – a system of government where the least capable to 
lead are elected by the least capable of producing, and where the 
members of society least likely to sustain themselves or succeed, are 
rewarded with goods and services paid for by the confiscated wealth of a 
diminishing number of producers.

0
The
12/20/2013 7:53:45 AM
On 20/12/13 02:39, Martin Gregorie wrote:
> On Fri, 20 Dec 2013 01:24:52 +0000, Martin Gregorie wrote:
>
>> On Thu, 19 Dec 2013 23:10:11 +0000, mm0fmf wrote:
>>
>>> On 19/12/2013 20:58, Martin Gregorie wrote:
>>>> On Wed, 18 Dec 2013 23:45:02 +0000, mm0fmf wrote:
>>>>
>>>>> On 18/12/2013 22:17, Martin Gregorie wrote:
>>>>>> On Wed, 18 Dec 2013 00:03:10 +0000, mm0fmf wrote:
>>>>>>
>>>>>>> I don't know which compiler you use, but in mine assert is only
>>>>>>> compiled into code in debug builds. There's nothing left in a
>>>>>>> non-debug build.
>>>>>>>
>>>>>> you'll have to recompile the program before you can start debugging
>>>>>
>>>>> You may have bugs, I don't! :-)
>>>>
>>>> Either that's pure bullshit or you don't test your code properly.
>>>>
>>>>
>>> Mmmmm.... maybe it's time you considered drinking decaf!
>>>
>> Pure experience over a few decades, dear boy.
>>
>> Anybody who claims to have written bugfree code that is more complex
>> than "Hello World" is talking out his arse.
>
> I should have added that I've met so-called programmers[*] who couldn't
> write even that without introducing bugs.
>
> * One particularly memorable example cut COBOL I had to fix on the
> infamous GNS Naval Dockyard project. This clown didn't know that COBOL
> code drops through from one paragraph to the next by default and
> consequently wrote code like this:
>
> PARA-1.
>     NOTE sentences doing stuff.
>     GO TO PARA-2.
> PARA-2.
>     NOTE more sentences doing stuff.
>     ...
>
> Other contractors knew him from previous projects and said that they'd
> never seen him write a working program. He always managed to leave with
> his last paycheck just before the deadline for his program to be
> delivered. He was always known for turning up late, doing sod all during
> the day, staying late and claiming overtime.
>

And then he went into politics?

>


-- 
Ineptocracy

(in-ep-toc’-ra-cy) – a system of government where the least capable to 
lead are elected by the least capable of producing, and where the 
members of society least likely to sustain themselves or succeed, are 
rewarded with goods and services paid for by the confiscated wealth of a 
diminishing number of producers.

0
The
12/20/2013 7:54:35 AM
Martin Gregorie <martin@address-in-sig.invalid> wrote:
> On Thu, 19 Dec 2013 21:09:01 +0000, Rob wrote:
>
>> Those that cannot devise a properly working algorithm and write the code
>> that implements it usually cannot write proper testcases either.
>>
> Probably true, but I'd strongly suggest it is a skill that can be taught 
> but that nobody ever bothers to teach it. Evidence: I've *NEVER* seen a 
> hint of trying to teach it in any course I've been on or in any 
> programming book I've read.
>
> If you've seen this approach to testing taught, then please tell us about 
> it.

My informatics teacher was focussing a lot on algorithms and proof
of correctness, and explained a lot about the types of errors you have
to watch out for.
However, that is over 30 years ago.  We also learned generic principles
of compilers, operating systems, machine code, etc.
I hear that today they only train you how to work in specific MS tools
and if you are lucky present some info about Linux.

About books: what I found is that many books that explain programming
do not cover the topic of error handling.  It is left as an exercise
for the reader, or as a more complicated topic not covered now.

In practice it is quite important to think about error handling before
starting to write code.  When it is added as an afterthought it will
be quite tricky to get it right.
(especially when it involves recovery, not only bombing out when
something unexpected happens, and when some kind of configurable logging
of problems that does not overflow during normal operation is desired)

My experience with larger projects is that a lot of time is spent
discussing an error handling strategy and the result still is not
satisfactory and often has a lot of variation depending on who wrote
the specific module.
0
Rob
12/20/2013 9:14:31 AM
Martin Gregorie <martin@address-in-sig.invalid> wrote:
> On Thu, 19 Dec 2013 21:09:01 +0000, Rob wrote:
> 
>> Those that cannot devise a properly working algorithm and write the code
>> that implements it usually cannot write proper testcases either.
>> 
> Probably true, but I'd strongly suggest it is a skill that can be taught 
> but that nobody ever bothers to teach it. Evidence: I've *NEVER* seen a 
> hint of trying to teach it in any course I've been on or in any 
> programming book I've read.
> 
> If you've seen this approach to testing taught, then please tell us about 
> it.
> 
>> Clear examples are code to sort an array or to search a value in a
>> sorted array using binary search.  Remember "sorting and searching" by
>> Donald Knuth?
>> 
> I've not read Knuth, but I own and have read copies of Sedgewick's 
> "Algorithms" and Wirth's "Algorithms + Data Structures = Programs", which 
> I suspect is a fair approximation to having all four volumes of Knuth, 
> and with the added advantage that these use Pascal rather than idealized 
> assembler as example code. Sedgewick's code is particularly easy to 
> transcribe directly into C. Been there, done that.
> 
>> It will take the typical singlestep-modify-test-again programmer many
>> many iterations before he will be satisfied that the code works OK, and
>> it will fail within an hour of first release.
>> 
> Very true.
> 
>> The more theoretical approach will require some study but will pay off
>> in reliability.
>> 
> Dunno about 'theoretical', but if you start cutting code before thinking 
> through what you must achieve, preferably by iterating it on paper or at 
> least as a test file, until you understand what you're doing and can 
> explain why it is the best approach to another programmer, then you're 
> heading up a blind alley at full throttle.
> 
> On top of that there are probably issues with structuring the code that 
> you didn't think of and that will bite your bum unless dealt with. IME 
> Wirth's "top-down incremental development" approach helps a lot here. 
> Look it up if you've not heard of it. 
> 
> This approach solves many of the code structuring problems that bottom-up 
> development can cause. Use it and be prepared to redesign/restructure/
> replace existing code as soon as you realize that the code organization 
> you started with is becoming harder to work with. These difficulties are 
> only highlighting issues you should have fixed before starting to cut 
> code. The only good way out is to admit that the code you've ended up 
> with is crap and do something about it, i.e. rewrite/refactor the ugly 
> bits and try to never make that mistake again.

One of my maxims is:  "Our most important design tool is the wastebasket,
and it is much underused."

Until you have considered several different approaches to writing a program
(in enough detail to see the advantages and disadvantages of each), you
have no idea whether you are proceeding appropriately. 

An empty wastebasket is a sign of trouble unless you've done it before and
know exactly how to proceed. (And yes, pencil and paper are the right tools
at the outset. ;-)

One should never get too wrapped around the axle on the issue of bottom-up
vs. top-down. Virtually every real programming effort will involve both. 

Proper high-level structure is a result of top-down thinking, while
efficient use of machines and libraries requires bottom-up thinking. When
insightful top-down design and careful bottom-up design meet elegantly in
the middle, a beautiful and efficient program is the result. 
-- 
-michael - NadaNet 3.1 and AppleCrate II: http://home.comcast.net/~mjmahon
0
Michael
12/20/2013 9:19:34 AM
The Natural Philosopher <tnp@invalid.invalid> wrote:
> On 20/12/13 01:20, Martin Gregorie wrote:
>> On Thu, 19 Dec 2013 21:09:01 +0000, Rob wrote:
>> 
>>> Those that cannot devise a properly working algorithm and write the code
>>> that implements it usually cannot write proper testcases either.
>>> 
>> Probably true, but I'd strongly suggest it is a skill that can be taught
>> but that nobody ever bothers to teach it. Evidence: I've *NEVER* seen a
>> hint of trying to teach it in any course I've been on or in any
>> programming book I've read.
>> 
> 
> That's probably because you only read books or attended courses on
> 'programming' or 'computer science'.
> 
> Try reading books on 'software engineering' which cover all of this in far more detail.
> 
> 
> 
>> If you've seen this approach to testing taught, then please tell us about
>> it.
>> 
>>> Clear examples are code to sort an array or to search a value in a
>>> sorted array using binary search.  Remember "sorting and searching" by
>>> Donald Knuth?
>>> 
>> I've not read Knuth, but I own and have read copies of Sedgewick's
>> "Algorithms" and Wirth's "Algorithms + Data Structures = Programs", which
>> I suspect is a fair approximation to having all four volumes of Knuth,
>> and with the added advantage that these use Pascal rather than idealized
>> assembler as example code. Sedgewick's code is particularly easy to
>> transcribe directly into C. Been there, done that.
>> 
>>> It will take the typical singlestep-modify-test-again programmer many
>>> many iterations before he will be satisfied that the code works OK, and
>>> it will fail within an hour of first release.
>>> 
>> Very true.
>> 
>>> The more theoretical approach will require some study but will pay off
>>> in reliability.
>>> 
>> Dunno about 'theoretical', but if you start cutting code before thinking
>> through what you must achieve, preferably by iterating it on paper or at
>> least as a test file, until you understand what you're doing and can
>> explain why it is the best approach to another programmer, then you're
>> heading up a blind alley at full throttle.
>> 
> 
> indeed.
> 
> Been on projects run exactly like that.
> 
>> On top of that there are probably issues with structuring the code that
>> you didn't think of and that will bite your bum unless dealt with. IME
>> Wirth's "top-down incremental development" approach helps a lot here.
>> Look it up if you've not heard of it.
>> 
> 
> or bottom up...
> 
>> This approach solves many of the code structuring problems that bottom-up
>> development can cause. Use it and be prepared to redesign/restructure/
>> replace existing code as soon as you realize that the code organization
>> you started with is becoming harder to work with. These difficulties are
>> only highlighting issues you should have fixed before starting to cut
>> code. The only good way out is to admit that the code you've ended up
>> with is crap and do something about it, i.e. rewrite/refactor the ugly
>> bits and try to never make that mistake again.
>> 
>> 
> you have to do both. At the bottom end you have to build the sort of
> library of useful objects to deal with the hardware or operating system
> interface. At the top you need a structured approach to map the needs of
> the design into one or more user interfaces, and in between is an unholy
> mess that is not perfectly addressed by either method. In essence you
> have to think about it until you see a way to do it.
> 
> In general this takes about three iterations, because that's how long it
> takes to actually fully understand the problem.
> 
> Whether those iterations are on paper or in code is scarcely germane, the work is the same.
> 
> What is not possible is to arrive at a result that is problem free
> without actually understanding the problem fully. That is the mistake we
> are talking about. Top down or bottom up are just places to start. In the
> end you need top to bottom and all places in between.

Hear, hear!

I'm in complete agreement. That's what I get for reading and responding in
order of posting. ;-)
-- 
-michael - NadaNet 3.1 and AppleCrate II: http://home.comcast.net/~mjmahon
0
Michael
12/20/2013 9:21:37 AM
On 20/12/2013 09:14, Rob wrote:
> In practice it is quite important to think about error handling before
> starting to write code.  When it is added as an afterthought it will
> be quite tricky to get it right.
> (especially when it involves recovery, not only bombing out when
> something unexpected happens, and when some kind of configurable logging
> of problems that does not overflow during normal operation is desired)
>

My functions always check that functions they called completed 
correctly[1] and return appropriate error codes if they didn't.

That's where I get stuck though, the actual main program loop tends to 
just execute some "print error message and terminate" code or if I'm 
really feeling generous to users "indicate that operation failed and go 
back to main input loop" ;)


[1] of course the exception is the functions that "can never fail" [2]. 
No need to check those ;)
[2] unless they do of course, but you'll never know because I didn't 
return any status.
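
For what it's worth, a minimal C sketch of that style, with made-up 
names: every call is checked, codes propagate upwards, and the top 
level just prints the error and stops:

#include <stdio.h>

enum { OK = 0, ERR_OPEN = 1, ERR_READ = 2 };   /* hypothetical codes */

static int read_header(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (f == NULL)
        return ERR_OPEN;               /* caller decides what to do */
    char buf[16];
    if (fread(buf, 1, sizeof buf, f) != sizeof buf) {
        fclose(f);
        return ERR_READ;
    }
    fclose(f);
    return OK;
}

int main(void)
{
    int rc = read_header("data.bin");
    if (rc != OK) {
        fprintf(stderr, "operation failed (code %d)\n", rc);
        return 1;        /* "print error message and terminate" */
    }
    return 0;
}
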
0
Guesser
12/20/2013 9:43:05 AM
Guesser <alistair@alistairsserver.no-ip.org> wrote:
> On 20/12/2013 09:14, Rob wrote:
>> In practice it is quite important to think about error handling before
>> starting to write code.  When it is added as an afterthought it will
>> be quite tricky to get it right.
>> (especially when it involves recovery, not only bombing out when
>> something unexpected happens, and when some kind of configurable logging
>> of problems that does not overflow during normal operation is desired)
>>
>
> My functions always check that functions they called completed 
> correctly[1] and return appropriate error codes if they didn't.
>
> That's where I get stuck though, the actual main program loop tends to 
> just execute some "print error message and terminate" code or if I'm 
> really feeling generous to users "indicate that operation failed and go 
> back to main input loop" ;)

That suffices for simple programs.  I normally use that method as well.
But in a more complicated environment you may want to have logging
of the complete stack of error reasons.  Your function fails because
it received an error from a lower level function, attempted some
recovery using another function but that also failed.  The two lower
level functions each returned errors because they got errors from
even lower levels.
But you don't want every function to log any error it encounters.
Sometimes errors are expected and you are prepared to handle them
using an alternative method or other recovery.  So the logging has
to be deferred until the upper level decides there is a problem,
yet you want the details of the errors occurring in the lower level
functions.

This makes it more complicated than just returning error numbers.
0
Rob
12/20/2013 10:24:33 AM
On 20/12/2013 10:24, Rob wrote:
> Guesser <alistair@alistairsserver.no-ip.org> wrote:
>> On 20/12/2013 09:14, Rob wrote:
>>> In practice it is quite important to think about error handling before
>>> starting to write code.  When it is added as an afterthought it will
>>> be quite tricky to get it right.
>>> (especially when it involves recovery, not only bombing out when
>>> something unexpected happens, and when some kind of configurable logging
>>> of problems that does not overflow during normal operation is desired)
>>>
>>
>> My functions always check that functions they called completed
>> correctly[1] and return appropriate error codes if they didn't.
>>
>> That's where I get stuck though, the actual main program loop tends to
>> just execute some "print error message and terminate" code or if I'm
>> really feeling generous to users "indicate that operation failed and go
>> back to main input loop" ;)
>
> That suffices for simple programs.  I normally use that method as well.
> But in a more complicated environment you may want to have logging
> of the complete stack of error reasons.  Your function fails because
> it received an error from a lower level function, attempted some
> recovery using another function but that also failed.  The two lower
> level functions each returned errors because they got errors from
> even lower levels.
> But you don't want every function to log any error it encounters.
> Sometimes errors are expected and you are prepared to handle them
> using an alternative method or other recovery.  So the logging has
> to be deferred until the upper level decides there is a problem,
> yet you want the details of the errors occurring in the lower level
> functions.
>
> This makes it more complicated than just returning error numbers.
>

All my code tends to be for the Sinclair Spectrum so logging anything is 
a bit of a problem - my main project at the moment is in fact a 
filesystem implementation so if the function that's failing is OPEN or 
WRITE dumping a log is not an option :D
0
Guesser
12/20/2013 10:31:21 AM
On 20/12/13 10:24, Rob wrote:
> Guesser <alistair@alistairsserver.no-ip.org> wrote:
>> On 20/12/2013 09:14, Rob wrote:
>>> In practice it is quite important to think about error handling before
>>> starting to write code.  When it is added as an afterthought it will
>>> be quite tricky to get it right.
>>> (especially when it involves recovery, not only bombing out when
>>> something unexpected happens, and when some kind of configurable logging
>>> of problems that does not overflow during normal operation is desired)
>>>
>>
>> My functions always check that functions they called completed
>> correctly[1] and return appropriate error codes if they didn't.
>>
>> That's where I get stuck though, the actual main program loop tends to
>> just execute some "print error message and terminate" code or if I'm
>> really feeling generous to users "indicate that operation failed and go
>> back to main input loop" ;)
>
> That suffices for simple programs.  I normally use that method as well.
> But in a more complicated environment you may want to have logging
> of the complete stack of error reasons.  Your function fails because
> it received an error from a lower level function, attempted some
> recovery using another function but that also failed.  The two lower
> level functions each returned errors because they got errors from
> even lower levels.
> But you don't want every function to log any error it encounters.
> Sometimes errors are expected and you are prepared to handle them
> using an alternative method or other recovery.  So the logging has
> to be deferred until the upper level decides there is a problem,
> yet you want the details of the errors occurring in the lower level
> functions.
>
> This makes it more complicated than just returning error numbers.
>
One comms program I wrote  was the prime example of when you really do 
NOT want to do that.

At the bottom was a PCI card that was essentially a 50 baud current loop 
telex interface.

On top of that were ten layers of routines that handled valid or invalid 
codes sent and received from the other end, each one a different level.

If the 'wire broke' - a not infrequent event when telexing darkest 
Africa - you really didn't want to pass all that lot up the stack and 
handle it at a high level.

The solution was simple: an area of memory with an error code.

Then before even attempting a connection, or answering a call, the error 
was cleared and  setjmp was performed. If the return from that showed an 
error in the code, then a diagnostic was printed and the program 
returned to its main loop, or tried again.

Any error anywhere down the stack wrote a unique code in the error 
memory and called longjmp.

So error handling at every level was of the form:

if (error)
    abort(my_unique_error_code);

and that was all you had to do.

Upstairs at the exit from the longjmp, there was a switch statement, 
each of whose cases corresponded to a unique error code and then 
performed whatever response was appropriate for that error code, up to 
and including resetting the hardware completely, setting retry counters 
and so on.

This made handling new errors a cinch:
- add a new entry to myerrors.h,
- add a new case to the error handler,
- check for that error wherever most appropriate and call abort.


By having error handling as a completely separate module, the program 
flow for normal operations was not obscured by error handling and vice 
versa.

By breaking all the rules of 'structured programming' I achieved a 
cleaner, neater and more structured program.

And it was a lot easier to debug.

I mention this to illustrate that, like the pirates' code, structured 
programming techniques are 'only a sort of guideline'.
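
For concreteness, here is a minimal C sketch of the scheme just 
described. The error codes and handler cases are hypothetical, and the 
bail-out routine is called fail() here rather than abort() so it 
doesn't clash with the C library's abort(3):

#include <setjmp.h>
#include <stdio.h>

static jmp_buf recover;   /* set in the main loop */
static int err_code;      /* the "area of memory with an error code" */

enum { ERR_WIRE_BROKE = 1, ERR_BAD_CODE = 2 };

/* Called from any depth: record the code and unwind to the main loop. */
static void fail(int code)
{
    err_code = code;
    longjmp(recover, 1);
}

static void low_level_io(void)
{
    int wire_ok = 0;                /* pretend the line just dropped */
    if (!wire_ok)
        fail(ERR_WIRE_BROKE);
}

int main(void)
{
    err_code = 0;
    if (setjmp(recover) != 0) {
        switch (err_code) {         /* one case per unique error code */
        case ERR_WIRE_BROKE:
            fprintf(stderr, "line dropped - reset and retry\n");
            break;
        case ERR_BAD_CODE:
            fprintf(stderr, "bad code received\n");
            break;
        }
        return 1;                   /* or reset the hardware and loop */
    }
    low_level_io();                 /* normal flow, free of error clutter */
    return 0;
}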



-- 
Ineptocracy

(in-ep-toc’-ra-cy) – a system of government where the least capable to 
lead are elected by the least capable of producing, and where the 
members of society least likely to sustain themselves or succeed, are 
rewarded with goods and services paid for by the confiscated wealth of a 
diminishing number of producers.

0
The
12/20/2013 11:05:08 AM
On Fri, 20 Dec 2013 11:05:08 +0000
The Natural Philosopher <tnp@invalid.invalid> wrote:

> By breaking all the rules of 'structured programming' I achieved a 
> cleaner neater and more structured program.

But didn't using GOTO make you feel dirty??  :-)

0
Rob
12/20/2013 12:25:09 PM
Guesser <alistair@alistairsserver.no-ip.org> wrote:
> On 20/12/2013 10:24, Rob wrote:
>> Guesser <alistair@alistairsserver.no-ip.org> wrote:
>>> On 20/12/2013 09:14, Rob wrote:
>>>> In practice it is quite important to think about error handling before
>>>> starting to write code.  When it is added as an afterthought it will
>>>> be quite tricky to get it right.
>>>> (especially when it involves recovery, not only bombing out when
>>>> something unexpected happens, and when some kind of configurable logging
>>>> of problems that does not overflow during normal operation is desired)
>>>>
>>>
>>> My functions always check that functions they called completed
>>> correctly[1] and return appropriate error codes if they didn't.
>>>
>>> That's where I get stuck though, the actual main program loop tends to
>>> just execute some "print error message and terminate" code or if I'm
>>> really feeling generous to users "indicate that operation failed and go
>>> back to main input loop" ;)
>>
>> That suffices for simple programs.  I normally use that method as well.
>> But in a more complicated environment you may want to have logging
>> of the complete stack of error reasons.  Your function fails because
>> it received an error from a lower level function, attempted some
>> recovery using another function but that also failed.  The two lower
>> level functions each returned errors because they got errors from
>> even lower levels.
>> But you don't want every function to log any error it encounters.
>> Sometimes errors are expected and you are prepared to handle them
>> using an alternative method or other recovery.  So the logging has
>> to be deferred until the upper level decides there is a problem,
>> yet you want the details of the errors occurring in the lower level
>> functions.
>>
>> This makes it more complicated then just returning error numbers.
>>
>
> All my code tends to be for the Sinclair Spectrum so logging anything is 
> a bit of a problem - my main project at the moment is in fact a 
> filesystem implementation so if the function that's failing is OPEN or 
> WRITE dumping a log is not an option :D

Sure, but even in an environment where you can only display errors
to the user this ugly problem shows up all the time.

E.g. you have a function to read configuration, it calls other functions
that finally open some files.  One of the files cannot be opened and
a "cannot open file" error is returned to the higher level.
The upper level gets a "cannot open file" error as the reason for the
whole function block to fail.
But you don't (unless you are Microsoft) want to display useless
alerts like "Cannot open file [OK]" or revert to "internal error [OK]",
you want to display a helpful message that tells the user (or admin)
WHICH file cannot be opened.  Yet you don't want to alert the user
about every file that cannot be opened; there may be files in the system
that are optional or there may be alternatives for the same file.

To solve this, a slightly more sophisticated error handling philosophy
is required.
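
One way to sketch such a philosophy in C: lower levels push context 
onto a deferred-error stack, and the top level either discards it (an 
alternative worked) or prints the whole chain. All names and the config 
paths below are invented for illustration:

#include <stdio.h>

static char err_stack[8][128];      /* deferred error messages */
static int  err_depth = 0;

static void err_push(const char *msg)
{
    if (err_depth < 8)
        snprintf(err_stack[err_depth++], sizeof err_stack[0], "%s", msg);
}

static void err_discard(void) { err_depth = 0; }

static void err_flush(const char *summary)
{
    fprintf(stderr, "%s\n", summary);
    for (int i = 0; i < err_depth; i++)
        fprintf(stderr, "  because: %s\n", err_stack[i]);
    err_depth = 0;
}

static int open_config(const char *path)
{
    FILE *f = fopen(path, "r");
    if (f == NULL) {
        char msg[128];
        snprintf(msg, sizeof msg, "cannot open %s", path);
        err_push(msg);              /* record WHICH file failed */
        return -1;
    }
    fclose(f);
    return 0;
}

int main(void)
{
    if (open_config("/etc/app.conf") == 0 ||
        open_config("./app.conf") == 0) {
        err_discard();          /* an alternative worked: keep quiet */
        return 0;
    }
    err_flush("no usable configuration found");
    return 1;
}
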
0
Rob
12/20/2013 12:57:02 PM
On 20/12/2013 12:57, Rob wrote:
> To solve this, a slightly more sophisticated error handling philosophy
> is required.
>

To implement that, a slightly more sophisticated programmer is required :D
0
Guesser
12/20/2013 1:34:29 PM
On Fri, 20 Dec 2013 01:24:52 +0000 (UTC), Martin Gregorie
<martin@address-in-sig.invalid> declaimed the following:

>
>Anybody who claims to have written bugfree code that is more complex than 
>"Hello World" is talking out his arse.

	It would take me half an hour of reading to even do that much in Java
(and likely C#), since so much of those languages is buried deep in hierarchical module
trees.
-- 
	Wulfraed                 Dennis Lee Bieber         AF6VN
    wlfraed@ix.netcom.com    HTTP://wlfraed.home.netcom.com/
0
Dennis
12/20/2013 4:36:29 PM
On 20/12/13 12:25, Rob Morley wrote:
> On Fri, 20 Dec 2013 11:05:08 +0000
> The Natural Philosopher <tnp@invalid.invalid> wrote:
>
>> By breaking all the rules of 'structured programming' I achieved a
>> cleaner neater and more structured program.
>
> But didn't using GOTO make you feel dirty??  :-)
>

No, immensely relieved, like when you have a bloody great crap and say 
'there, I did it'...

It enabled me to pull the project forward at least two weeks and deliver 
on time and on budget, and it was a lot easier to understand.

Sometimes 'go to jail, go directly to jail, do not pass go, do not 
collect £200' is actually a simpler way to get the job done.


-- 
Ineptocracy

(in-ep-toc’-ra-cy) – a system of government where the least capable to 
lead are elected by the least capable of producing, and where the 
members of society least likely to sustain themselves or succeed, are 
rewarded with goods and services paid for by the confiscated wealth of a 
diminishing number of producers.

0
The
12/20/2013 6:01:48 PM
On 20/12/13 13:34, Guesser wrote:
> On 20/12/2013 12:57, Rob wrote:
>> To solve this, a slightly more sophisticated error handling philosophy
>> is required.
>>
>
> To implement that, a slightly more sophisticated programmer is required :D
+1

and ROFL


-- 
Ineptocracy

(in-ep-toc’-ra-cy) – a system of government where the least capable to 
lead are elected by the least capable of producing, and where the 
members of society least likely to sustain themselves or succeed, are 
rewarded with goods and services paid for by the confiscated wealth of a 
diminishing number of producers.

0
The
12/20/2013 6:02:31 PM
On Fri, 20 Dec 2013 07:51:19 +0000, The Natural Philosopher wrote:

>> you have to do both. At the bottom end you have to build the sort of
> library of useful objects to deal with the hardware or operating system
> interface. At the top you need a structured approach to map the needs of
> the design into one or more user interfaces, and in between is an unholy
>> mess that is not perfectly addressed by either method. In essence
> you have to think about it until you see a way to do it.
> 
>> In general this takes about three iterations, because that's how long it
> takes to actually fully understand the problem.
> 
> Whether those iterations are on paper or in code is scarcely germane,
> the work is the same.
> 
> What is not possible is to arrive at a result that is problem free
> without actually understanding the problem fully. That is the mistake we
> are talking about. Top down or bottom up are just places to start. In
> the end you need top to bottom and all places in between.
>
I'll drink to that. What I was really objecting to with bottom-up is the 
idea that you can simply keep on building upwards until you magically top 
it out and that this will give a good result. In practice I think you 
need to start with top-down and design far enough down to know what 
support modules you're going to need and how they'll fit together. *Then* 
you can do a bit of bottom-up design to build that layer.


-- 
martin@   | Martin Gregorie
gregorie. | Essex, UK
org       |
0
Martin
12/20/2013 8:36:34 PM
On Fri, 20 Dec 2013 10:31:21 +0000, Guesser wrote:

> All my code tends to be for the Sinclair Spectrum so logging anything is
> a bit of a problem - my main project at the moment is in fact a
> filesystem implementation so if the function that's failing is OPEN or
> WRITE dumping a log is not an option :D
>
Well, now you've got a system with excellent event logging. The 
Linux logging subsystem classifies the events that get logged and can 
even give you control over which logged details get displayed and which 
log files they get written to. It even has configurable mechanisms to 
automatically discard old log entries. Take a look in /var/log to see 
what gets logged, at 'man logrotate' for how log files are managed and at 
'man syslog' to see how to write log entries. 

Use less and grep to search through the logs and display their contents.
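
For instance, a C program writes such entries through syslog(3); a 
minimal sketch, with a placeholder program name and messages:

#include <syslog.h>

int main(void)
{
    openlog("myprog", LOG_PID, LOG_USER);  /* tag entries, add the PID */
    syslog(LOG_INFO, "started");
    syslog(LOG_ERR, "cannot open %s", "/etc/myprog.conf");
    closelog();
    return 0;
}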


-- 
martin@   | Martin Gregorie
gregorie. | Essex, UK
org       |
0
Martin
12/20/2013 8:53:48 PM
Martin Gregorie <martin@address-in-sig.invalid> wrote:
> Well, now you've got a system with an excellent event logging system. The 
> Linux logging subsystem classifies the events that get logged and can 
> even give you control over which logged details get displayed and which 
> log files they get written to. It even has configurable mechanisms to 
> automatically discard old log entries.

AFAIK it has no mechanism to control logging of earlier messages based
on later events. E.g. you send all results of DNS lookups to the log
daemon with some thread identifier and only when a "system unreachable"
event occurs all the related DNS lookup results plus the event are written
to the log.  When the event does not occur the DNS lookup results are
simply discarded.

This kind of functionality is what you require when you want detailed
logs of events without logging loads and loads of information that is
not required (in this example: all the lookup results that do not end
in a failure at top level).

Usually programs in Linux resort to some kind of "debugging mode" where
the program logs more verbosely when required during testing.  However,
in practice the debugging mode is not on when the failure occurs, or
it is on all the time and the log gets flooded with useless information,
possibly causing the disk to fill or the performance to worsen.
0
Rob
12/20/2013 9:04:55 PM
On Fri, 20 Dec 2013 21:04:55 +0000, Rob wrote:

> AFAIK it has no mechanism to control logging of earlier messages based
> on later events.  e.g. you send all results of DNS lookups to the log
> daemon with some thread identifier and only when a "system unreachable"
> event occurs all the related DNS lookup results plus the event are
> written to the log.  When the event does not occur the DNS lookup
> results are simply discarded.
>
Sure, unless you've written the program to handle that. However, what I 
wished to do was to show 'guesser' what's available out of the box and 
that even that's better than simply stopping with an "Error: the sky has 
fallen" message, unless, of course, it's pointing out that an essential 
file is either missing or unreadable, when simply stopping and saying so 
is about all you can do and will tell the user all he needs to know to 
fix the problem. 

About the best semi-automatic, no-brainer approach to that is Java's 
stack dump. Anything better will need a bit of thought.

FWIW, one of the best solutions I've seen was in the guts of ICL's George 
3 OS. That used a circular buffer that held about 100 messages. Tracing 
info was written to it during normal operation and it was dumped to the 
printer if a fatal error occurred. In fact, George used two buffers, one 
quite fine grained and the other was much coarser: the fine-grained 
buffer content corresponded to the last 3 or so entries in the coarse 
buffer. The benefit of this approach is that you get the history leading 
to the error while a stack dump shows where you were but not really what 
led to the crash.

I've used this approach with just a single buffer in long running 
programs. In them I dumped the buffer when a serious error occurred, 
followed by said serious error message. Then, depending on the severity 
of the error, the program might go on running or terminate at that point.
  
> This kind of functionality is what you require when you want detailed
> logs of events without logging loads and loads of information that is
> not required (in this example: all the lookup results that do not end in
> a failure at top level).
> 
And this is precisely what the circular buffer gives you: the immediate 
history leading to the error without the preceding megabytes of log file 
to wade through and chew up disk space.
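
A sketch of that circular trace buffer in C; the slot count, message 
size and messages are all illustrative:

#include <stdio.h>

#define TRACE_SLOTS 100

static char trace_buf[TRACE_SLOTS][80];
static unsigned trace_next = 0;

static void trace(const char *msg)      /* cheap in normal operation */
{
    snprintf(trace_buf[trace_next % TRACE_SLOTS],
             sizeof trace_buf[0], "%s", msg);
    trace_next++;
}

static void trace_dump(void)            /* called on a serious error */
{
    unsigned n = trace_next < TRACE_SLOTS ? trace_next : TRACE_SLOTS;
    unsigned first = trace_next - n;    /* oldest surviving entry */
    for (unsigned i = 0; i < n; i++)
        fprintf(stderr, "trace: %s\n",
                trace_buf[(first + i) % TRACE_SLOTS]);
}

int main(void)
{
    for (int i = 0; i < 250; i++) {     /* only the last 100 survive */
        char msg[80];
        snprintf(msg, sizeof msg, "step %d", i);
        trace(msg);
    }
    trace_dump();
    fprintf(stderr, "fatal: the sky has fallen\n");
    return 1;
}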


-- 
martin@   | Martin Gregorie
gregorie. | Essex, UK
org       |
0
Martin
12/20/2013 9:35:39 PM
On 20/12/13 20:36, Martin Gregorie wrote:
> On Fri, 20 Dec 2013 07:51:19 +0000, The Natural Philosopher wrote:
>
>> you have to do both. At the bottom end you have to build the sort of
>> library of useful objects to deal with the hardware or operating system
>> interface. At the top you need a structured approach to map the needs of
>> the design into one or more user interfaces, and in between is an unholy
>> mess that is not perfectly addressed by either method. In essence
>> you have to think about it until you see a way to do it.
>>
>> In general this takes about three iterations, because that's how long it
>> takes to actually fully understand the problem.
>>
>> Whether those iterations are on paper or in code is scarcely germane,
>> the work is the same.
>>
>> What is not possible is to arrive at a result that is problem free
>> without actually understanding the problem fully. That is the mistake we
>> are talking about. Top down or bottom up are just places to start. In
>> the end you need top to bottom and all places in between.
>>
> I'll drink to that. What I was really objecting to with bottom-up is the
> idea that you can simply keep on building upwards until you magically top
> it out and that this will give a good result. In practice I think you
> need to start with top-down and design far enough down to know what
> support modules you're going to need and how they'll fit together. *Then*
> you can do a bit of bottom-up design to build that layer.
>
>
Actually it's more of a boundary problem. The 'bottom boundary' is 
defined by either the hardware or the operating system and its libraries, 
and the 'top boundary' is defined by the user or other interface. Those 
are the bits that HAVE to conform to spec, otherwise it doesn't work.

It's the middleware where all the choices are, and where the hard work is.


-- 
Ineptocracy

(in-ep-toc’-ra-cy) – a system of government where the least capable to 
lead are elected by the least capable of producing, and where the 
members of society least likely to sustain themselves or succeed, are 
rewarded with goods and services paid for by the confiscated wealth of a 
diminishing number of producers.

0
The
12/21/2013 1:18:20 AM
On 2013-12-20, Rob Morley <nospam@ntlworld.com> wrote:
> On Fri, 20 Dec 2013 11:05:08 +0000
> The Natural Philosopher <tnp@invalid.invalid> wrote:
>
>> By breaking all the rules of 'structured programming' I achieved a 
>> cleaner neater and more structured program.
>
> But didn't using GOTO make you feel dirty??  :-)

longjmp() is not goto: it takes you somewhere old (not somewhere new),
and it's dynamically targeted (not a destination fixed at compile time).

Structured programming doesn't work for real-life situations unless
you can have exception handling; longjmp() is a good way to implement
that in C.

-- 
For a good time: install ntp
0
Jasen
12/21/2013 1:49:34 AM
On 21/12/13 01:49, Jasen Betts wrote:
> On 2013-12-20, Rob Morley <nospam@ntlworld.com> wrote:
>> On Fri, 20 Dec 2013 11:05:08 +0000
>> The Natural Philosopher <tnp@invalid.invalid> wrote:
>>
>>> By breaking all the rules of 'structured programming' I achieved a
>>> cleaner neater and more structured program.
>>
>> But didn't using GOTO make you feel dirty??  :-)
>
> longjmp() is not goto, it takes you somewhere old, (not somewhere new)
> and it's dynamically targeted (not a destination fixed at compile time)
>
> structured programming doesn't work for real life situations unless
> you can have exception handling, longjmp() is a good way to implement
> that in C.
>
And it's fearfully efficient: simply load the stack pointer and RET.

If your intention is to carry an error code up through a deep nest of 
subroutines, it achieves that result very efficiently.


-- 
Ineptocracy

(in-ep-toc’-ra-cy) – a system of government where the least capable to 
lead are elected by the least capable of producing, and where the 
members of society least likely to sustain themselves or succeed, are 
rewarded with goods and services paid for by the confiscated wealth of a 
diminishing number of producers.

0
The
12/21/2013 2:33:32 AM
On 21 Dec 2013 01:49:34 GMT
Jasen Betts <jasen@xnet.co.nz> wrote:

> On 2013-12-20, Rob Morley <nospam@ntlworld.com> wrote:
> > On Fri, 20 Dec 2013 11:05:08 +0000
> > The Natural Philosopher <tnp@invalid.invalid> wrote:
> >
> >> By breaking all the rules of 'structured programming' I achieved a 
> >> cleaner neater and more structured program.
> >
> > But didn't using GOTO make you feel dirty??  :-)
> 
> longjmp() is not goto,

I know, I was being facetious.

0
Rob
12/21/2013 2:41:38 AM
Martin Gregorie <martin@address-in-sig.invalid> wrote:
> FWIW, one of the best solutions I've seen was in the guts of ICL's George 
> 3 OS. That used a circular buffer that held about 100 messages. Tracing 
> info was written to it during normal operation and it was dumped to the 
> printer if a fatal error ocurred. In fact, George used two buffers, one 
> quite fine grained and the other was much coarser: the fine-grained 
> buffer content corresponded to the last 3 or so entries in the coarse 
> buffer. The benefit of this approach is that you get the history leading 
> to the error while a stack dump shows where you were but not really what 
> led to the crash.

Yes, that looks good!
It should be possible to integrate that in syslogd or some pre-processor
that captures and holds items for later optional release to syslogd.
0
Rob
12/21/2013 9:15:52 AM
The Natural Philosopher <tnp@invalid.invalid> wrote:
> On 21/12/13 01:49, Jasen Betts wrote:
>> On 2013-12-20, Rob Morley <nospam@ntlworld.com> wrote:
>>> On Fri, 20 Dec 2013 11:05:08 +0000
>>> The Natural Philosopher <tnp@invalid.invalid> wrote:
>>>
>>>> By breaking all the rules of 'structured programming' I achieved a
>>>> cleaner neater and more structured program.
>>>
>>> But didn't using GOTO make you feel dirty??  :-)
>>
>> longjmp() is not goto, it takes you somewhere old, (not somewhere new)
>> and it's dynamically targeted (not a destination fixed at compile time)
>>
>> structured programming doesn't work for real life situations unless
>> you can have exception handling, longjmp() is a good way to implement
>> that in C.
>>
> and it's fearfully efficient: simply load the stack pointer and RET.
>
> If your intention is to carry an error code up through a deep nest of 
> subroutines it achieves that result very efficiently

Unfortunately it makes resource management a mess.
It is customary in C to allocate dynamic memory and allocate other
resources (like file handles) in functions and release them before return.
Using setjmp/longjmp causes memory and descriptor leaks in that case.

In languages like Perl, where resources are automatically released when
they go out of scope, there is no such concern.
0
Rob
12/21/2013 9:18:47 AM
On 2013-12-21, Rob <nomail@example.com> wrote:
> The Natural Philosopher <tnp@invalid.invalid> wrote:
>> On 21/12/13 01:49, Jasen Betts wrote:
>>>
>>> longjmp() is not goto, it takes you somewhere old, (not somewhere new)
>>> and it's dynamically targeted (not a destination fixed at compile time)
>>>
>>> structured programming doesn't work for real life situations unless
>>> you can have exception handling, longjmp() is a good way to implement
>>> that in C.
>>>
>> and it's fearfully efficient: simply load the stack pointer and RET.
>>
>> If your intention is to carry an error code up through a deep nest of 
>> subroutines it achieves that result very efficiently
>
> Unfortunately it makes resource management a mess.
> It is customary in C to allocate dynamic memory and allocate other
> resources (like file handles) in functions and release them before return.
> Using setjmp/longjmp causes memory and descriptor leaks in that case.

If it does, you're doing it wrong.  The right way is to use longjmp to
intercept the exception and release your resources, or to use stack
storage for temporary allocations (alloca()); another option is a
one-way ticket out of the application, letting the operating system
clean up the hanging resources.

-- 
For a good time: install ntp
0
Jasen
12/21/2013 10:41:40 AM
On 21/12/13 09:18, Rob wrote:
> The Natural Philosopher <tnp@invalid.invalid> wrote:
>> On 21/12/13 01:49, Jasen Betts wrote:
>>> On 2013-12-20, Rob Morley <nospam@ntlworld.com> wrote:
>>>> On Fri, 20 Dec 2013 11:05:08 +0000
>>>> The Natural Philosopher <tnp@invalid.invalid> wrote:
>>>>
>>>>> By breaking all the rules of 'structured programming' I achieved a
>>>>> cleaner neater and more structured program.
>>>>
>>>> But didn't using GOTO make you feel dirty??  :-)
>>>
>>> longjmp() is not goto, it takes you somewhere old, (not somewhere new)
>>> and it's dynamically targeted (not a destination fixed at compile time)
>>>
>>> structured programming doesn't work for real life situations unless
>>> you can have exception handling, longjmp() is a good way to implement
>>> that in C.
>>>
>> and it's fearfully efficient: simply load the stack pointer and RET.
>>
>> If your intention is to carry an error code up through a deep nest of
>> subroutines it achieves that result very efficiently
>
> Unfortunately it makes resource management a mess.
> It is customary in C to allocate dynamic memory and allocate other
> resources (like file handles) in functions and release them before return.
> Using setjmp/longjmp causes memory and descriptor leaks in that case.
>
> In languages like perl, where resources are automatically released when
> they go out of scope, there is no such concern.
>

Yes. I didn't use dynamic allocation in the example given, just 
stack/static.

You can of course register each memory block and its size in a 
structure, and free them as a linked list.


Agreed, at a certain point the complexity of doing that negates the 
whole purpose of the exercise!
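
A sketch of that register-and-free idea in C, with invented names; note 
the returned region is only pointer-aligned, which is fine for this 
illustration but not for every type:

#include <stdlib.h>

struct tracked { struct tracked *next; };  /* header before each block */

static struct tracked *alloc_list = NULL;

static void *xmalloc(size_t n)
{
    struct tracked *t = malloc(sizeof *t + n);
    if (t == NULL)
        exit(1);                    /* out of memory: bail out */
    t->next = alloc_list;           /* register the block */
    alloc_list = t;
    return t + 1;                   /* caller's usable region */
}

static void free_all(void)          /* call at the setjmp recovery point */
{
    while (alloc_list != NULL) {
        struct tracked *t = alloc_list;
        alloc_list = t->next;
        free(t);
    }
}

int main(void)
{
    char *buf = xmalloc(64);        /* would otherwise leak on longjmp */
    (void)buf;
    free_all();
    return 0;
}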

-- 
Ineptocracy

(in-ep-toc’-ra-cy) – a system of government where the least capable to 
lead are elected by the least capable of producing, and where the 
members of society least likely to sustain themselves or succeed, are 
rewarded with goods and services paid for by the confiscated wealth of a 
diminishing number of producers.

0
The
12/21/2013 2:08:37 PM
On 21/12/13 10:41, Jasen Betts wrote:
> On 2013-12-21, Rob <nomail@example.com> wrote:
>> The Natural Philosopher <tnp@invalid.invalid> wrote:
>>> On 21/12/13 01:49, Jasen Betts wrote:
>>>>
>>>> longjmp() is not goto, it takes you somewhere old, (not somewhere new)
>>>> and it's dynamically targeted (not a destination fixed at compile time)
>>>>
>>>> structured programming doesn't work for real life situations unless
>>>> you can have exception handling, longjmp() is a good way to implement
>>>> that in C.
>>>>
>>> and it's fearfully efficient: simply load the stack pointer and RET.
>>>
>>> If your intention is to carry an error code up through a deep nest of
>>> subroutines it achieves that result very efficiently
>>
>> Unfortunately it makes resource management a mess.
>> It is customary in C to allocate dynamic memory and allocate other
>> resources (like file handles) in functions and release them before return.
>> Using setjmp/longjmp causes memory and descriptor leaks in that case.
>
> If it does, you're doing it wrong.  The right way is to use longjmp to
> intercept the exception and release your resources, or to use stack
> storage for temporary allocations (alloca()); another option is a
> one-way ticket out of the application, letting the operating system
> clean up the hanging resources.
>
Basically making the point that it's a tool like any other, and needs 
to be thought out like any other.


-- 
Ineptocracy

(in-ep-toc’-ra-cy) – a system of government where the least capable to 
lead are elected by the least capable of producing, and where the 
members of society least likely to sustain themselves or succeed, are 
rewarded with goods and services paid for by the confiscated wealth of a 
diminishing number of producers.

0
The
12/21/2013 2:09:41 PM
Finally, it's compiled, and I've been able to get the new kernel across 
to the RPi via its SD card.  My notes:

   http://www.satsignal.eu/raspberry-pi/kernel-cross-compile.html

Many thanks for all the help!

-- 
Cheers,
David
Web: http://www.satsignal.eu
0
David
12/22/2013 5:54:17 PM
On Sun, 22 Dec 2013 17:54:17 +0000, David Taylor wrote:

> Finally, it's compiled, and I've been able to get the new kernel across 
> the the RPi via its SD card.  My notes:
>    http://www.satsignal.eu/raspberry-pi/kernel-cross-compile.html

I'm glad it works for you.
And thanks for documenting everything publicly and thus providing a
great resource for others.

gregor
-- 
 .''`.  Homepage: http://info.comodo.priv.at/ - OpenPGP key 0xBB3A68018649AA06
 : :' : Debian GNU/Linux user, admin, and developer  -  http://www.debian.org/
 `. `'  Member of VIBE!AT & SPI, fellow of the Free Software Foundation Europe
   `-   NP: The Mamas & The Papas: I Call Your Name
0
gregor
12/23/2013 4:15:48 AM
On Mon, 16 Dec 2013 11:07:47 -0600, Michael J. Mahon wrote:

> Rob <nomail@example.com> wrote:
>> Dennis Lee Bieber <wlfraed@ix.netcom.com> wrote:
>>> On Sun, 15 Dec 2013 17:43:47 +0000, David Taylor
>>> <david-taylor@blueyonder.co.uk.invalid> declaimed the following:
>>> 
>>> 
>>>> When a compile takes a significant part of the day (as with compiling
>>>> the kernel on the RPi), making multiple runs is extremely time
>>>> consuming!  Unfortunately, even if you want to change just one
>>>> option, if it's your first compile it still takes almost all the
>>>> working day.
--
Old timers may know about the 70s duel between:
 European: Algol -> Pascal
and 
 US: C [actually originated from UK B]

Many C users said "Oberon [Pascal's descendant] doesn't work",
because when they test-compiled e.g. the compiler, there was no
grinding-wheels-and-smoke. It just wrote <Compiling ?? 75..9 Bytes>,
because it was DONE!
> 
> Interactive programming does not preclude the development of craft, but
> it apparently significantly impedes it.
> 
No. Interactive programming is like when you walk into your darkened room
and you know how to reach out and toggle the light-switch, for immediate
feedback; vs. filling in forms to send to HQ.
Also the writing-near-English-notes-to-man-in-box, instead of a
multi-switch point of view, impedes understanding.

> All this becomes practically hopeless in modern application environments
> where one's code constantly invokes libraries, that call libraries,
> etc., etc., until "Hello, world" requires thirty million instructions
> and has a working set of a hundred megabytes!
> 
This is caused by *evolving* the systems, by just adding layers, instead
of re-starting from fundamentals, which IMO are the 'human attributes',
e.g. short-term memory limited to 3 items, recognition being cheaper than
memory, etc. Therefore menu-based systems are very efficient.

> Such progress in the name of eye candy...

Correct.

===> PS. here's a further example confirming your point of
layers-of-office-boys concealing the underlying physical reality:-
I want the actual byte-sequence of e.g.
#1234 -> r0 [loadImmediate 1234 into register0]
r0 -> memN [store contents of reg0 to memoryN]
br Relative:Here [branch to here; to prevent running into next bytes]

Apparently ARM asm stores such constants in a literal pool at the end of
the code-area, for optimisation: an ARM immediate must be an 8-bit value
rotated by an even amount, and 1234 doesn't fit, so `#1234 -> r0` doesn't
exist as a single instruction.
I just want the actual byte-strings for a few such pseudo-instructions
to be able to debug using only the shell & `dd`.

I refuse to open another can-o'-worms (the asm tool-chain); but googling
for ARM+asm+listing gives me the same old office-boys-filling-out-forms.
They don't want to EXPOSE the actual bytes.

Where's the forum for people who came up via hardware, registers, hex-code
instead of came down from english-literature to filling-out-forms?


0
Unknown
12/23/2013 5:52:11 PM
On 23/12/2013 17:52, Unknown wrote:

>
> Where's the forum for people who came up via hardware, registers, hex-code
> instead of came down from english-literature to filling-out-forms?
>
>


Please do not feed the troll.

0
mm0fmf
12/23/2013 6:50:06 PM
On 2013-12-23, Unknown <dog@gmail.com> wrote:
> Where's the forum for people who came up via hardware, registers, hex-code
> instead of came down from english-literature to filling-out-forms?

Oh, we're here. But some of us have better things to do than be a free
keypunch service for someone who refuses to learn anything; I had first
boot of my Cortex-M3 port of CP/M-68K this weekend.

We've shown you where to find the documents you need and how to get
assembly listings. You'd be better served by cracking the books than
whining about how no one will help you muddle through in your quest
to remain ignorant.
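
One last freebie, then. With any ARM binutils on your path it takes two
commands to see every byte (arm-linux-gnueabi- is just my example
prefix; substitute whatever triplet your toolchain uses):

cat > t.s <<'EOF'
        ldr     r0, =1234       @ 1234 gets parked in a literal pool after the code
        str     r0, [r1]        @ store r0 at the address held in r1
1:      b       1b              @ branch-to-self, so nothing runs off the end
EOF
arm-linux-gnueabi-as -al -o t.o t.s   # -al prints a listing: address, bytes, source
arm-linux-gnueabi-objdump -d t.o      # or disassemble: raw bytes beside each mnemonic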
-- 
roger ivie
rivie@ridgenet.net
0
Roger
12/23/2013 7:05:31 PM
On 23/12/13 19:05, Roger Ivie wrote:
> On 2013-12-23, Unknown<dog@gmail.com>  wrote:
>> Where's the forum for people who came up via hardware, registers, hex-code
>> instead of came down from english-literature to filling-out-forms?

Hex code? What's wrong with good old Octal?
>
> Oh, we're here. But some of us have better things to do than be a free
> keypunch service for someone who refuses to learn anything; I had first
> boot of my Cortex-M3 port of CP/M-68K this weekend.
>
> We've shown you where to find the documents you need and how to get
> assembly listings. You'd be better served by cracking the books than
> whining about how no one will help you muddle through in your quest
> to remain ignorant.

Indeed. "Dog" or whatever made up nomenclature that poster is currently 
using doesn't appear the slightest bit interested in learning anything 
about how the ARM chips work or how to program them.

Instead it seems to be obsessed with with it's own muddled code 
structure and demanding that people give it details on how to map that 
to its out-dated 8-bit world of confusion.

I would pity them, but... really "Read the PDF. It gives you everything 
you need to know in an easy to understand format." isn't that difficult 
to comprehend, right?

Each bit is laid out and detailed. The ARM has the neatest instruction 
set of any processor that I have had to work with.

I seriously doubt this "Dog" (or whatever) can comprehend the simplicity 
of "one word per instruction", or the simplicity of the code itself.
0
Dom
12/23/2013 7:55:38 PM
On Mon, 16 Dec 2013 23:27:46 +0000 (UTC), Martin Gregorie <martin@address-in-sig.invalid> wrote:

  [...]

> You haven't really programmed unless you've punched your own cards
> and corrected them on a 12 key manual card punch....
>  
> but tell that to the kids of today....

You had _cards_??

The first machine I tried to program used a flat panel with holes that
was programmed with jumper wires.  Oh, the machine _read_ cards, and
it punched cards, and it printed from cards, but you programmed it
with what looked like an oversized prototyping board.  See the "Unit
record equipment" section of this Wikipedia entry:

  http://en.wikipedia.org/wiki/Plugboard

Heck, we had to program it walking six miles through the snow... and
it was uphill both ways!

( I like today's embedded systems a _lot_ better. <grin!> )


Frank McKenney
-- 
  A college degree has become the most reliable way to get ahead in
  life. ...  This is partly the result of genuine changes in the
  economy. ...  A lot of it, however, has rather little to do with
  economic "need". ...there is, in fact precious little evidence that
  producing more and more graduates does anything for a country's
  growth rate.  It does, however, change the job market.

  The more graduates there are, the more employers hire them in
  preference to non-graduates. ...  Many of the jobs now taken by
  graduates have not changed their skill requirements since the days
  when they were done perfectly well by high-school graduates who had
  never gone to college.  But now you need a degree to get hired.  All
  of this generates a self-propelling inflation of degree
  requirements, with more jobs demanding graduates and more people
  heading into higher education.

                -- Alison Wolf / The XX Factor
-- 
Frank McKenney, McKenney Associates
Richmond, Virginia / (804) 320-4887
Munged E-mail: frank uscore mckenney aatt mindspring ddoott com

0
Frnak
12/23/2013 8:11:43 PM
On Mon, 23 Dec 2013 17:52:11 +0000, Unknown wrote:

> On Mon, 16 Dec 2013 11:07:47 -0600, Michael J. Mahon wrote:
> 
>> Rob <nomail@example.com> wrote:
>>> Dennis Lee Bieber <wlfraed@ix.netcom.com> wrote:
>>>> On Sun, 15 Dec 2013 17:43:47 +0000, David Taylor
>>>> <david-taylor@blueyonder.co.uk.invalid> declaimed the following:
>>>> 
>>>> 
>>>>> When a compile takes a significant part of the day (as with
>>>>> compiling the kernel on the RPi), making multiple runs is extremely
>>>>> time consuming!  Unfortunately, even if you want to change just one
>>>>> option, if it's your first compile it still takes almost all the
>>>>> working day.
> --
> Old timers may know about the '70s dual between:
>  European: Algol -> Pascal
>
There was no duel.
             --^-
Algol 60 was in widespread use for scientific and engineering work by 1967, 
which was precisely what it was intended for. Along with FORTRAN it had 
no real concept of strings or string handling apart from the ability to 
print string literals. Its other disadvantage was that the Algol 60 
Report, which specified the language in 1960, didn't say anything about 
how i/o was to be implemented. Elliott Algol, which was released in 1962 
and which I used in 1967 as part of my thesis material, used reserved 
words with their own statement syntax to implement i/o but almost every 
other implementation provided an i/o library which was accessed via 
procedure calls.  This was not entirely surprising since its authors 
thought one of its main uses was in allowing people to describe 
algorithms to each other without necessarily involving a computer.

Algol 60 was the first major block-structured language, and so is best 
thought of as the common ancestor of Pascal, Simula, BCPL, B and C.

Pascal, OTOH, is largely the result of Niklaus Wirth throwing teddy out 
of the pram when, at the end of the 1960s, the Algol Committee decided 
that it preferred Algol-68 to Wirth's Algol-W as the Algol-60 successor. 

Pascal was written in 1968/9 and differed quite a lot in that it did 
specify how i/o was to be carried out and introduced well-thought-out string 
handling and records, similar to C's structs. However, there's at least a 
hint that Wirth thought of it as a teaching language and a means of 
communicating algorithms between people. I used Algol 68 much more than I 
did Pascal and am fairly certain that the Pascal language was specified 
with little or no thought of support for people developing separately 
compiled procedure libraries (though I could be wrong here), which is 
quite unlike Algol-68 (or certainly its R version), which made explicit 
library support a part of the language. Pascal, OTOH, was agnostic: if 
the compilation system you were using allowed for the creation and 
linking of procedure libraries then you could use them.
 
> and
>  US: C [actually originated from UK B]
> 
Nope. BCPL was developed in the Cambridge Computer Labs. Thompson and 
Ritchie at Bell Labs saw it and produced a close derivative called B. 
However, both were quite limited, 
though block structured, since they only supported a single variable type 
which could hold either an integer or a character. C was developed from B 
by adding, among other things, more elementary data types (longs, shorts, 
floats and doubles) as well as composite data types (struct and its 
extension typedef).

> This is caused by *evolving* the systems, by just adding layers; instead
> of re-starting from fundamentals, which IMO are the 'human attributes'.
>
Well, that statement alone makes it quite obvious that you've never 
written programs of any size or complexity.
 

-- 
martin@   | Martin Gregorie
gregorie. | Essex, UK
org       |
0
Martin
12/23/2013 8:12:15 PM
On Mon, 23 Dec 2013 14:11:43 -0600, Frnak McKenney wrote:

> On Mon, 16 Dec 2013 23:27:46 +0000 (UTC), Martin Gregorie
> <martin@address-in-sig.invalid> wrote:
> 
>   [...]
> 
>> You haven't really programmed unless you've punched your own cards and
>> corrected them on a 12 key manual card punch....
>>  
>> but tell that to the kids of today....
> 
> You had _cards_??
> 
> The first machine I tried to program used a flat panel with holes that
> was programmed with jumper wires.  Oh, the machine _read_ cards, and it
> punched cards, and it printed from cards, but you programmed it with
> what looked like an oversized prototyping board.  See the "Unit record
> equipment" section of this Wikipedia entry:
> 
>   http://en.wikipedia.org/wiki/Plugboard
>
Taking a flying guess: you're not, by any chance, talking about a 1004, 
are you? I never met one, but a guy I used to work with had cut his 
programming teeth on one.

I started out using paper tape and a Flexowriter (the Elliott 503 that 
fed on the tape demanded upper and lower case code, so no teletypes in 
its kitchen...). I discovered the joys of cards when I got my first job, 
which was with ICL.
 

-- 
martin@   | Martin Gregorie
gregorie. | Essex, UK
org       |
0
Martin
12/23/2013 8:20:17 PM
On 23/12/2013 19:55, Dom wrote:
> On 23/12/13 19:05, Roger Ivie wrote:
>> On 2013-12-23, Unknown<dog@gmail.com>  wrote:
>>> Where's the forum for people who came up via hardware, registers,
>>> hex-code
>>> instead of came down from english-literature to filling-out-forms?
>
> Hex code? What's wrong with good old Octal?
>>
>> Oh, we're here. But some of us have better things to do than be a free
>> keypunch service for someone who refuses to learn anything; I had first
>> boot of my Cortex-M3 port of CP/M-68K this weekend.
>>
>> We've shown you where to find the documents you need and how to get
>> assembly listings. You'd be better served by cracking the books than
>> whining about how no one will help you muddle through in your quest
>> to remain ignorant.
>
> Indeed. "Dog" or whatever made up nomenclature that poster is currently
> using doesn't appear the slightest bit interested in learning anything
> about how the ARM chips work or how to program them.
>
> Instead it seems to be obsessed with with it's own muddled code
> structure and demanding that people give it details on how to map that
> to its out-dated 8-bit world of confusion.
>
> I would pity them, but... really "Read the PDF. It gives you everything
> you need to know in an easy to understand format." isn't that difficult
> to comprehend, right?
>
> Each bit is laid out and detailed. The Arm has the neatest instruction
> set of any processor that I have had to work with.
>
> I seriously doubt this "Dog" (or whatever) can comprehend the simplicity
> of the "one word per instruction" and simplicity of the code.

Guenter has been trolling like this for some time in many newsgroups and 
forums. A Google search for avoid9pdf@gmail.com is most enlightening!

Look at the software he's using to post. That's not something you use if 
you don't have much clue about computers, processors, software and 
computer science.

Walks like a troll, talks like a troll, acts like a troll. It's a troll.

0
mm0fmf
12/23/2013 8:37:37 PM
Frnak McKenney <frnak@far.from.the.madding.crowd.com> wrote:
> You had _cards_??
>
> The first machine I tried to program used a flat panel with holes that
> was programmed with jumper wires.  Oh, the machine _read_ cards, and
> it punched cards, and it printed from cards, but you programmed it
> with what looked like an oversized prototyping board.

We had a machine at school that looked like that but did not even
read cards...  it was an analog computer.  Basically a collection
of amplifiers, integrators and other analog circuits that you could
interconnect just like shown on those pictures.
0
Rob
12/23/2013 9:33:17 PM
On Mon, 23 Dec 2013 19:55:38 +0000, Dom <domafp@blueyonder.co.uk> declaimed
the following:

>On 23/12/13 19:05, Roger Ivie wrote:
>> On 2013-12-23, Unknown<dog@gmail.com>  wrote:
>>> Where's the forum for people who came up via hardware, registers, hex-code
>>> instead of came down from english-literature to filling-out-forms?
>
>Hex code? What's wrong with good old Octal?

	Hex is rather unambiguous... But Octal could be pure octal or Split
Octal (177777 vs 377377).

	Though octal does fit well when interpreting the 8080 instruction set
-- 3 bit register fields.
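
	E.g. MOV A,B is binary 01 111 000 = 0x78, so the octal digits read
off as the fields themselves (quick check, with any printf that takes
C-style constants):

printf '%03o\n' 0x78   # the 8080 MOV A,B opcode
                       # prints 170: 1 = MOV group, 7 = dest A, 0 = src B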
-- 
	Wulfraed                 Dennis Lee Bieber         AF6VN
    wlfraed@ix.netcom.com    HTTP://wlfraed.home.netcom.com/
0
Dennis
12/24/2013 1:06:11 AM
On 2013-12-24, Dennis Lee Bieber <wlfraed@ix.netcom.com> wrote:
> 	Though octal does fit well when interpreting the 8080 instruction set
> -- 3 bit register fields.

Same reason it works well for the PDP-11.
-- 
roger ivie
rivie@ridgenet.net
0
Roger
12/24/2013 2:50:35 AM
On 24/12/13 01:06, Dennis Lee Bieber wrote:
> On Mon, 23 Dec 2013 19:55:38 +0000, Dom<domafp@blueyonder.co.uk>  declaimed
> the following:
>
>> On 23/12/13 19:05, Roger Ivie wrote:
>>> On 2013-12-23, Unknown<dog@gmail.com>   wrote:
>>>> Where's the forum for people who came up via hardware, registers, hex-code
>>>> instead of came down from english-literature to filling-out-forms?
>>
>> Hex code? What's wrong with good old Octal?
>
> 	Hex is rather unambiguous... But Octal could be pure octal or Split
> Octal (177777 vs 377377).
>
> 	Though octal does fit well when interpreting the 8080 instruction set
> -- 3 bit register fields.

It works very well on a 6-bit character set (GBCD) and 9-bit ASCII in a 
36-bit word.
0
Dom
12/24/2013 5:50:11 AM
In article <l90t3r$u9f$3@news.albasani.net>,
 The Natural Philosopher <tnp@invalid.invalid> wrote:

> On 20/12/13 02:39, Martin Gregorie wrote:
> > On Fri, 20 Dec 2013 01:24:52 +0000, Martin Gregorie wrote:
> >
> >> On Thu, 19 Dec 2013 23:10:11 +0000, mm0fmf wrote:
> >>
> >>> On 19/12/2013 20:58, Martin Gregorie wrote:
> >>>> On Wed, 18 Dec 2013 23:45:02 +0000, mm0fmf wrote:
> >>>>
> >>>>> On 18/12/2013 22:17, Martin Gregorie wrote:
> >>>>>> On Wed, 18 Dec 2013 00:03:10 +0000, mm0fmf wrote:
> >>>>>>
> >>>>>>> I don't know which compiler you use, but in mine assert is only
> >>>>>>> compiled into code in debug builds. There's nothing left in a
> >>>>>>> non-debug build.
> >>>>>>>
> >>>>>> you'll have to recompile the program before you can start debugging
> >>>>>
> >>>>> You may have bugs, I don't! :-)
> >>>>
> >>>> Either thats pure bullshit or you don't test your code properly.
> >>>>
> >>>>
> >>> Mmmmm.... maybe it's time you considered drinking decaf!
> >>>
> >> Pure experience over a few decades, dear boy.
> >>
> >> Anybody who claims to have written bugfree code that is more complex
> >> than "Hello World" is talking out his arse.
> >
> > I should have added that I've met so-called programmers[*] who couldn't
> > write even that without introducing bugs.
> >
> > * One particularly memorable example cut COBOL I had to fix on the
> > infamous GNS Naval Dockyard project. This clown didn't know that COBOL
> > code drops through from one paragraph to the next by default and
> > consequently wrote code like this:
> >
> > PARA-1.
> >     NOTE sentences doing stuff.
> >     GO TO PARA-2.
> > PARA-2.
> >     NOTE more sentences doing stuff.
> >     ...
> >
> > Other contractors knew him from previous projects and said that they'd
> > never seen him write a working program. He always managed to leave with
> > his last paycheck just before the deadline for his program to be
> > delivered. He was always known for turning up late, doing sod all during
> > the day, staying late and claiming overtime.
> >
> 
> And then he went into politics?
> 
> >

More likely a conslutant or lobbyist.

-- 
Never attribute to stupidity that which can be explained by greed. Me.
0
Walter
2/16/2014 11:24:52 PM
In article <MB%tu.1$mK7.0@fx32.am4>, mm0fmf <none@mailinator.com> 
wrote:

> >
> > Where's the forum for people who came up via hardware, registers, hex-code
> > instead of came down from english-literature to filling-out-forms?
> >
> >

Hex? You had hex? We used octal and we liked it.

-- 
Never attribute to stupidity that which can be explained by greed. Me.
0
Walter
2/16/2014 11:35:48 PM
On 16/02/2014 23:35, Walter Bushell wrote:
> In article <MB%tu.1$mK7.0@fx32.am4>, mm0fmf <none@mailinator.com>
> wrote:
>
>>>
>>> Where's the forum for people who came up via hardware, registers, hex-code
>>> instead of came down from english-literature to filling-out-forms?
>>>
>>>
>
> Hex? You had hex? We used octal and we liked it.
>

If you really grew up using octal you'd know how to quote properly!


0
mm0fmf
2/16/2014 11:43:17 PM
In article <q2cMu.2398$rO3.761@fx02.am4>,
 mm0fmf <none@mailinator.com> wrote:

> On 16/02/2014 23:35, Walter Bushell wrote:
> > In article <MB%tu.1$mK7.0@fx32.am4>, mm0fmf <none@mailinator.com>
> > wrote:
> >
> >>>
> >>> Where's the forum for people who came up via hardware, registers, hex-code
> >>> instead of came down from english-literature to filling-out-forms?
> >>>
> >>>
> >
> > Hex? You had hex? We used octal and we liked it.
> >
> 
> If you really grew up using octal you'd know how to quote properly!

Hey, I was born before the old stone age, in the vacuum age as far as 
confusors were concerned.

-- 
Never attribute to stupidity that which can be explained by greed. Me.
0
Walter
2/19/2014 3:10:41 AM