GNU GCC compiler optimization question

Hello,

I have a question regarding compiler optimization with a GCC cross
compiler (m68k). We are using version 3.4.0.
When compiler optimization is activated with the options "-O1", "-O2",
"-O3", a number of individual optimization flags are set in the
background. With "-O1" there are, as described in the GNU documentation,
at least 10 optimization flags (-funit-at-a-time, -fomit-frame-pointer,
-fdefer-pop, -fmerge-constants, -fthread-jumps, -floop-optimize,
-fif-conversion, -fif-conversion2, -fdelayed-branch,
-fguess-branch-probability, -fcprop-registers).

The following sentence in the GNU documentation makes us very
curious: chapter "3.10 Options That Control Optimization" -> "Not all
optimizations are controlled directly by a flag. Only optimizations
that have a flag are listed."

Does anyone have experience with what the additional optimizations are?
Is there a description of them anywhere? Why is it not possible to
control them via flags?

Thanks for your help.
12/4/2008 9:17:25 AM

Karl-Heinz Rossmann wrote:
> Hello,
> 
> I have a question regarding compiler optimization with a GCC cross
> compiler (m68k). We are using version 3.4.0.
> When compiler optimization is activated with the options "-O1", "-O2",
> "-O3", a number of individual optimization flags are set in the
> background. With "-O1" there are, as described in the GNU documentation,
> at least 10 optimization flags (-funit-at-a-time, -fomit-frame-pointer,
> -fdefer-pop, -fmerge-constants, -fthread-jumps, -floop-optimize,
> -fif-conversion, -fif-conversion2, -fdelayed-branch,
> -fguess-branch-probability, -fcprop-registers).
> 
> The following sentence in the GNU documentation makes us very
> curious: chapter "3.10 Options That Control Optimization" -> "Not all
> optimizations are controlled directly by a flag. Only optimizations
> that have a flag are listed."
> 
> Does anyone have experience with what the additional optimizations are?
> Is there a description of them anywhere? Why is it not possible to
> control them via flags?
> 
> Thanks for your help.

gcc has a very large number of flags (especially later versions - 3.4.0 
is quite old now) for enabling and disabling optimisations, and for 
tuning parameters.  But there will be plenty of small optimisations 
where it is simply not worth having a flag (including all the 
documentation that must go with it) since no one is likely to want to 
enable or disable them individually, and very few people will be 
interested in the details.  For most users, the -Ox flags give the 
easiest way to pick optimisation levels.  Sometimes it can be useful to 
explicitly specify other flags (such as for more or less loop 
unrolling).  But flags like "-fcprop-registers" are normally only of 
interest to gcc developers and testers - details of non-flagged 
optimisations are even less relevant to normal users.

So if you really want to know the fine details, you'll probably want to 
look at the gcc source code.  There may also be some information in the 
"gcc internals" documentation on the gcc web site, and the gcc 
developers' mailing lists might be of some help (search the archives before 
asking!).
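
As a rough sketch of how to see which named flags are actually in effect 
(note: the "-Q --help=optimizers" form only exists in newer gcc releases, 
roughly 4.3 and later, so it will not work on 3.4.0, while -fverbose-asm is 
available on the older versions too):

$ gcc -Q -O1 --help=optimizers
  # newer gcc only: lists every optimisation flag and whether -O1 enables it
$ <your-gcc-here> -O1 -S -fverbose-asm test.c -o test.s
  # any gcc: the top of test.s lists the options passed and enabled as comments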



david2384 (2168)
12/4/2008 9:50:37 AM
Hello:

Given that the "-Ox" options are grab bags of optimizations, it makes a lot
of sense that not all optimizations are included, especially those which may
be unsafe in some circumstances. The "use it only if you know what you're
doing" flags, in particular, probably don't belong in a grab bag.

Example:

(from http://gcc.gnu.org/onlinedocs/gcc-4.3.2/gcc/Optimize-Options.html#Optimize-Options)

--start of quote--

-funsafe-math-optimizations
    Allow optimizations for floating-point arithmetic that (a) assume
that arguments and results are valid and (b) may violate IEEE or ANSI
standards. When used at link-time, it may include libraries or startup
files that change the default FPU control word or other similar
optimizations.

    This option is not turned on by any -O option since it can result
in incorrect output for programs which depend on an exact
implementation of IEEE or ISO rules/specifications for math functions.
It may, however, yield faster code for programs that do not require
the guarantees of these specifications. Enables -fno-signed-zeros,
-fno-trapping-math, -fassociative-math and -freciprocal-math.

--end of quote--

Now, most of us write code which does not depend on an exact
implementation of IEEE floating point arithmetic, and in addition
(;-)) we run well-debugged code on well-conditioned data. Such
flags, then, are possibly useful.
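
As a small illustration (a sketch only: the file name is made up, and
whether the generated code actually changes depends on the gcc version and
target):

$ cat > fptest.c <<'EOF'
double scaled(double x)
{
    /* Strict IEEE semantics require a genuine division here; with
       -funsafe-math-optimizations gcc is allowed to multiply by the
       (slightly inexact) reciprocal of the constant instead. */
    return x / 3.0;
}
EOF
$ gcc -O2 -S fptest.c -o fptest-strict.s
$ gcc -O2 -funsafe-math-optimizations -S fptest.c -o fptest-unsafe.s
$ diff fptest-strict.s fptest-unsafe.s

If the two listings differ, the unsafe version typically trades the division
for a multiplication by the reciprocal, which is exactly the kind of
IEEE-visible change the documentation warns about.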

Other flags are useful for people with a priori knowledge of the
behavior of the code when used on the target data and chip.

(I hope this was useful.)

Nicolas Robidoux
Universite Laurentienne

On Dec 4, 4:17 am, Karl-Heinz Rossmann <karl-heinz.rossm...@liebherr.com> wrote:
> Hello,
>
> I have a question regarding compiler optimization with a GCC cross
> compiler (m68k). We are using version 3.4.0.
> When compiler optimization is activated with the options "-O1", "-O2",
> "-O3", a number of individual optimization flags are set in the
> background. With "-O1" there are, as described in the GNU documentation,
> at least 10 optimization flags (-funit-at-a-time, -fomit-frame-pointer,
> -fdefer-pop, -fmerge-constants, -fthread-jumps, -floop-optimize,
> -fif-conversion, -fif-conversion2, -fdelayed-branch,
> -fguess-branch-probability, -fcprop-registers).
>
> The following sentence in the GNU documentation makes us very
> curious: chapter "3.10 Options That Control Optimization" -> "Not all
> optimizations are controlled directly by a flag. Only optimizations
> that have a flag are listed."
>
> Does anyone have experience with what the additional optimizations are?
> Is there a description of them anywhere? Why is it not possible to
> control them via flags?
>
> Thanks for your help.

12/4/2008 5:36:21 PM
Karl-Heinz Rossmann wrote:
> Hello,
> 
> I have a question regarding compiler optimization with a GCC cross
> compiler (m68k). We are using version 3.4.0.
> When compiler optimization is activated with the options "-O1", "-O2",
> "-O3", a number of individual optimization flags are set in the
> background. With "-O1" there are, as described in the GNU documentation,
> at least 10 optimization flags (-funit-at-a-time, -fomit-frame-pointer,
> -fdefer-pop, -fmerge-constants, -fthread-jumps, -floop-optimize,
> -fif-conversion, -fif-conversion2, -fdelayed-branch,
> -fguess-branch-probability, -fcprop-registers).
> 
> The following sentence in the GNU documentation makes us very
> curious: chapter "3.10 Options That Control Optimization" -> "Not all
> optimizations are controlled directly by a flag. Only optimizations
> that have a flag are listed."
> 
> Does anyone have experience with what the additional optimizations are?
> Is there a description of them anywhere? Why is it not possible to
> control them via flags?
> 

For embedded code, use -Os or -O2 and be happy.

The remaining tweaks may produce marginally better
code, but you have to get the assembly listings and
compare the results with the particular flag on and off.
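
Roughly like this (a sketch only; the file name is a placeholder, and
-fgcse is just one example of a flag that -O2 normally turns on):

$ <your-gcc-here> -O2 -S module.c -o module-O2.s
$ <your-gcc-here> -O2 -fno-gcse -S module.c -o module-O2-nogcse.s
$ diff module-O2.s module-O2-nogcse.s
  # shows what that single flag contributed at -O2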

A much better optimization was changing from GCC 3.xx
to GCC 4.xx. My ARM code shrank on average
by 10% (several hundred kilobytes of raw code).

-- 

Tauno Voipio
tauno voipio (at) iki fi

tauno.voipio (652)
12/4/2008 7:12:10 PM
Karl-Heinz Rossmann wrote:
> I have a question regarding compiler optimization with a GCC cross
> compiler (m68k). We are using version 3.4.0.
> When compiler optimization is activated with the options "-O1", "-O2",
> "-O3", a number of individual optimization flags are set in the
> background. With "-O1" there are, as described in the GNU documentation,
> at least 10 optimization flags (-funit-at-a-time, -fomit-frame-pointer,
> -fdefer-pop, -fmerge-constants, -fthread-jumps, -floop-optimize,
> -fif-conversion, -fif-conversion2, -fdelayed-branch,
> -fguess-branch-probability, -fcprop-registers).

If you really want to see which optimization flags are actually in effect, simply run:

$ touch test.c
$ <your-gcc-here> test.c -Os -S -fverbose-asm -o test-Os.s
$ <your-gcc-here> test.c -O0 -S -fverbose-asm -o test-O0.s
$ <your-gcc-here> test.c -O1 -S -fverbose-asm -o test-O1.s
$ <your-gcc-here> test.c -O2 -S -fverbose-asm -o test-O2.s

Now you can diff the *.s files and search for flags you don't want and
manipulate them on the command line.
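
The interesting part for this is the comment block that -fverbose-asm puts
at the top of each .s file, listing the options passed and the options
enabled. A sketch of pulling just those comments out (the assembler comment
character is target dependent, '#' on many targets but '|' on m68k, hence
the character class):

$ grep '^[#|]' test-O1.s > opts-O1.txt
$ grep '^[#|]' test-O2.s > opts-O2.txt
$ diff opts-O1.txt opts-O2.txt
  # the difference in the "options enabled" lists between the two levels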

jbe
jbeisert (17)
12/4/2008 7:28:34 PM
On 4 Dec., 20:28, Juergen Beisert <jbeis...@netscape.net> wrote:
> Karl-Heinz Rossmann wrote:
> > I have a question regarding compiler optimization with a GCC cross
> > compiler (m68k). We are using version 3.4.0.
> > When compiler optimization is activated with the options "-O1", "-O2",
> > "-O3", a number of individual optimization flags are set in the
> > background. With "-O1" there are, as described in the GNU documentation,
> > at least 10 optimization flags (-funit-at-a-time, -fomit-frame-pointer,
> > -fdefer-pop, -fmerge-constants, -fthread-jumps, -floop-optimize,
> > -fif-conversion, -fif-conversion2, -fdelayed-branch,
> > -fguess-branch-probability, -fcprop-registers).
>
> If you really want to see which optimization flags are actually in effect, simply run:
>
> $ touch test.c
> $ <your-gcc-here> test.c -Os -S -fverbose-asm -o test-Os.s
> $ <your-gcc-here> test.c -O0 -S -fverbose-asm -o test-O0.s
> $ <your-gcc-here> test.c -O1 -S -fverbose-asm -o test-O1.s
> $ <your-gcc-here> test.c -O2 -S -fverbose-asm -o test-O2.s
>
> Now you can diff the *.s files and search for flags you don't want and
> manipulate them on the command line.
>
> jbe

Thank you for all the answers.
Maybe I should add some details to my first question.
I am working in the aerospace domain (DO-178B). If we want to use
optimizations, it is necessary for us to know exactly what kind of
optimization is done. Therefore we also need to know _every_
optimization in detail. Otherwise we cannot guarantee that the compiler
doesn't introduce something into the assembler code that is, e.g.,
not deterministic. Is there any possibility to get a list of _all_
available optimizations without digging in the source code of GCC?
12/5/2008 10:25:04 AM
On 5 Dec, 11:25, Karl-Heinz Rossmann <karl-
heinz.rossm...@liebherr.com> wrote:
>
> Thank you for all the answers.
> Maybe I should add some details to my first question.
> I am working in the aerospace domain (DO-178B). If we want to use
> optimizations, it is necessary for us to know exactly what kind of
> optimization is done. Therefore we also need to know _every_
> optimization in detail. Otherwise we cannot guarantee that the compiler
> doesn't introduce something into the assembler code that is, e.g.,
> not deterministic. Is there any possibility to get a list of _all_
> available optimizations without digging in the source code of GCC?

Have you tried asking the vendor you purchased your DO-178B-certified
version of gcc from?
12/5/2008 11:10:21 AM
Karl-Heinz Rossmann wrote:
> On 4 Dec., 20:28, Juergen Beisert <jbeis...@netscape.net> wrote:
>> Karl-Heinz Rossmann wrote:
>>> I have a question regarding compiler optimization with a GCC cross
>>> compiler (m68k). We are using version 3.4.0.
>>> When compiler optimization is activated with the options "-O1", "-O2",
>>> "-O3", a number of individual optimization flags are set in the
>>> background. With "-O1" there are, as described in the GNU documentation,
>>> at least 10 optimization flags (-funit-at-a-time, -fomit-frame-pointer,
>>> -fdefer-pop, -fmerge-constants, -fthread-jumps, -floop-optimize,
>>> -fif-conversion, -fif-conversion2, -fdelayed-branch,
>>> -fguess-branch-probability, -fcprop-registers).
>> If you really want to see which optimization flags are actually in effect, simply run:
>>
>> $ touch test.c
>> $ <your-gcc-here> test.c -Os -S -fverbose-asm -o test-Os.s
>> $ <your-gcc-here> test.c -O0 -S -fverbose-asm -o test-O0.s
>> $ <your-gcc-here> test.c -O1 -S -fverbose-asm -o test-O1.s
>> $ <your-gcc-here> test.c -O2 -S -fverbose-asm -o test-O2.s
>>
>> Now you can diff the *.s files and search for flags you don't want and
>> manipulate them on the command line.
>>
>> jbe
> 
> Thank you for all the answers.
> Maybe I should add some details to my first question.
> I am working in the aerospace domain (DO-178B). If we want to use
> optimizations, it is necessary for us to know exactly what kind of
> optimization is done. Therefore we also need to know _every_
> optimization in detail. Otherwise we cannot guarantee that the compiler
> doesn't introduce something into the assembler code that is, e.g.,
> not deterministic. Is there any possibility to get a list of _all_
> available optimizations without digging in the source code of GCC?

I don't think such information makes sense for *any* compiler.  It is 
meaningless to try to say what is an "optimisation" - there is no line 
that can be drawn between "code generation" and "optimisation".

The gcc source code will let you see *exactly* how the object code is 
generated, if you are willing to spend enough time studying it.
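
If you do go that way, gcc's own dump options at least let you watch what
each pass does to your code. A rough sketch (the file name is a placeholder;
-da exists in the 3.4-era compilers and writes one RTL dump file per pass,
though the exact dump file names vary between versions):

$ <your-gcc-here> -O1 -da -S yourfile.c
$ ls yourfile.c.*
  # one numbered dump per RTL pass (yourfile.c.00.rtl, then .jump, .cse, .loop, ...)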

Otherwise, I'd recommend a macro assembler.

Of course, you could just find a compiler (gcc or otherwise) supplier 
with appropriate certification, or a third-party that can do such 
certification.  Or you could do appropriate testing of the compiler, 
tools, and application yourself.
david.brown (580)
12/5/2008 1:33:51 PM