Why I detest configure.

I'm currently trying to compile a package. configure in one shell results in
'cc ... -o file.o', which is fine, and another shell gives 'cc ... -o file.lo',
which fails. Now I have to go through a few dozen arcane shell scripts to
figure out why.

If all the complications caused by autogen, autoconf, automake, configure, .lo,
.la, autoseppuku, libfool, etc. were invariably handled by these scripts like
proper black boxes, I would have no complaint. But they aren't. They're gray
boxes that work sometimes, and when they don't, they're too bleeding opaque to
fix quickly. Instead there's a whole subindustry trying to get them to work
again. It seems which system I do the tar xf on can break some packages. Ah,
yes, robust portability.

I see cmake in the package, and I toss it immediately. Life's too short to get 
cmake projects to work.

Brick wall. Head. Bash until wall collapses.

-- 
:-<> Siri Seal of Disavowal #000-001. Disavowed. Denied. Deleted.
'I desire mercy, not sacrifice.'
If you assume the final scene is a dying delusion as Tom Cruise drowns below
the Louvre, then Edge of Tomorrow has a happy ending. Kill Tom repeat..
Siri
8/7/2016 7:28:31 PM
comp.unix.programmer

Siri Cruise <chine.bleu@yahoo.com> wrote:
> I'm currently trying to compile a package. configure in one shell results in 'cc
> ... -o file.o', which is fine, and another shell gives 'cc ... -o file.lo', which
> fails. Now I have to go through a few dozen arcane shell scripts to figure out
> why.

The building of .lo objects suggests automake and libtool.

While autoconf configure scripts are imperfect, there's nothing better out
there for feature discovery, IMO.[1] You can use autoconf separately from
automake (though they don't make it easy), and write portable Makefiles that
can be optionally used standalone. The GNU coding standards (the closest thing
to a real standard, at least) for Makefile macro names and _usage_ are useful
here, and I wish more people applied them

  CFLAGS, CPPFLAGS, LDFLAGS, and LIBS.

so human and automated installers could reliably circumvent the automated
bits and invoke the Makefiles directly. I also usually use SOFLAGS as
sometimes neither CFLAGS nor LDFLAGS is suitable, depending how object files
are built.

Those are meant to be user specifiable and capable of overriding the
configure script-generated values. The Makefile rules are supposed to use
ALL_-prefixed macros. So ALL_CFLAGS is usually defined like

  ALL_CFLAGS = -DPROJECT_VAR=1 $(CFLAGS)

If a project's default configuration is just totally broken in an
environment, ideally you should also be able to specify ALL_CFLAGS, et al,
directly so long as you understand you may need to define project-specific
macros.
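
A minimal sketch of the convention (PROJECT_VAR, the file names, and the -lm
are placeholders, not from any particular project):

```make
# User-settable macros; configure may seed these with defaults, but
# values given on the make command line must win.
CFLAGS =
CPPFLAGS =
LDFLAGS =
LIBS =

# Rules use only the ALL_-prefixed macros, which append the user's
# values after the project's required flags.
ALL_CPPFLAGS = -DPROJECT_VAR=1 $(CPPFLAGS)
ALL_CFLAGS = $(CFLAGS)
ALL_LDFLAGS = $(LDFLAGS)
ALL_LIBS = -lm $(LIBS)

file.o: file.c
	$(CC) $(ALL_CPPFLAGS) $(ALL_CFLAGS) -c file.c

prog: file.o
	$(CC) $(ALL_LDFLAGS) -o prog file.o $(ALL_LIBS)
```

Invoked as `make CFLAGS="-O2 -g"`, the user's flags land after the project's
required macros without clobbering them.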

Alas, it's rare that Makefile writers are that conscientious. I try to be,
but it's difficult for non-trivial projects.

I also change the way that autoconf generates config.h. autoconf-generated
HAVE macros are intended to be tested for definedness, not arithmetically.
But that precludes overriding them from the command line. I redefine
AH_TEMPLATE so that the autoconf-generated config.h defines HAVE macros with
boolean values (0 or 1), but only if the HAVE macro isn't already defined.
This permits overriding feature detection through CPPFLAGS.


[1] I came up with my own project, autoguess
(https://github.com/wahern/autoguess), which is a single header file relying
exclusively on preprocessor tests for feature detection, intended as a
substitute for an autoconf-generated config.h. It was inspired by the
Pre-defined Compiler Macros project
(https://sourceforge.net/p/predef/wiki/Home/), which the authors have kept
up-to-date for over a decade, now.

However, some projects like musl libc intentionally hide vendor-specific
macros and explicitly prefer use of autoconf or similar dynamic feature
detection. For that and other reasons autoconf is strictly a better
solution, though my config.h.guess is a decent alternative, especially for
modern systems. (I use over half-a-dozen VMs plus a Polarhome AIX account to
test feature detection.)

william
8/9/2016 7:08:42 PM
On 2016-08-09, <william@wilbur.25thandClement.com> <william@wilbur.25thandClement.com> wrote:
> Siri Cruise <chine.bleu@yahoo.com> wrote:
>> I'm currently trying to compile a package. configure in one shell results in 'cc
>> ... -o file.o', which is fine, and another shell gives 'cc ... -o file.lo', which
>> fails. Now I have to go through a few dozen arcane shell scripts to figure out
>> why.
>
> The building of .lo objects suggests automake and libtool.
>
> While autoconf configure scripts are imperfect, there's nothing better out
> there for feature discovery, IMO.[1] You can use autoconf separately from

There absolutely is: a hand-crafted script which detects features
relevant to your software, on all the platforms you care about.

Autoconf is so bad that it's better to just distribute a cleanly
nonportable program than to use it.

It's more productive for a user to patch things to get the software to
build and work than to dick around with Autoconf.

I.e. Autoconf is demonstrably worse than nothing; it's like a fucking
cure for acne that produces skin cancer.
Kaz
8/9/2016 7:21:06 PM
In article <qamp7d-dq6.ln1@wilbur.25thandClement.com>,
 <william@wilbur.25thandClement.com> wrote:

> Siri Cruise <chine.bleu@yahoo.com> wrote:
> > I'm currently trying to compile a package. configure in one shell results
> > in 'cc ... -o file.o', which is fine, and another shell gives 'cc ... -o
> > file.lo', which fails. Now I have to go through a few dozen arcane shell
> > scripts to figure out why.
> 
> The building of .lo objects suggests automake and libtool.
> 
> While autoconf configure scripts are imperfect, there's nothing better out
> there for feature discovery, IMO.[1] You can use autoconf separately from

Actually there is. Either make configure etc. bulletproof, or figure out a way
to write scripts transparent enough that others can debug them. I wasted a
weekend trying to figure out why freetype builds in one shell but not another 
with identical environments. I ended up junking it. Even after I extracted the 
clang statements from the make output, I suddenly got a bunch of '#include 
SOME_CRAP' failing even though the clang calls were identical and the sources 
unchanged.

Now I got something about ranlib puking.

Apparently somewhere the mtimes got mooshed so then it started complaining about 
autoconf-1.N not available even though autoconf-1.(N+1) was in the PATH. Rather 
than having a dozen autoconf versions compiled, I sed the hardcoded version 
strings. Expecting people to screw around with a dozen autoconfs is not a 
bulletproof configure. autoconf-1.N should cope with previous versions.

I had a broken convert command in the PATH. The simplest fix was to ed the 
configure.ac and configure to force it not to use convert. Of course a 
bulletproof makefile could have done something like
    if ! convert icon.svg icon.png; then cp prebuilt-icon.png icon.png; fi
instead of
    @HAVE_CONVERSION_TOOL_TRUE@ convert icon.svg icon.png
    @HAVE_CONVERSION_TOOL_FALSE@ cp prebuilt-icon.png icon.png
and aborting if the convert fails even though an alternative command is already 
known.

> automake (though they don't make it easy), and write portable Makefiles that

So don't write portable Makefiles if they have to be unreliable,
incomprehensible spaghetti code. I'd much rather deal with system-specific
makefiles if how a makefile is selected is done in a comprehensible fashion.

And use tcl or python or something else if that's the only way to avoid quote
hell in sh.

>   CFLAGS, CPPFLAGS, LDFLAGS, and LIBS.

If only it were that simple for anti-debuggable scripts that are broken in
some truly unique way. I have to use CFLAGS and LDFLAGS to build cpu (-arch) and
system (-isysroot) specific versions. I then use my merge script to combine the 
install directories into fat binaries and cpu conditional headers.

> Alas, it's rare that Makefile writers are that conscientious. I try to be,
> but it's difficult for non-trivial projects.

It doesn't matter how shiny the C is if the build procedure is broken. 
Uncompilable code is as worthless as broken code.

> [1] I came up with my own project, autoguess
> (https://github.com/wahern/autoguess), which is a single header file relying
> exclusively on preprocessor tests for feature detection, intended as a

I don't mind configuration programs that generate header files as long as the
programs work. For some of my projects I use a short C program that #defines
sizeof macros for various types when I need the type sizes in preprocessor #ifs.
I can use it for things like letting the preprocessor select the correct printf
type letter for my various integer and real types.
    #if sizeof_Int <= sizeof_int
        #define IntFormat "d"
    #elif sizeof_Int <= sizeof_long
        #define IntFormat "ld"
    #else
        #define IntFormat "lld"
    #endif

    Int x;
    printf("%5" IntFormat "\n", x);

Siri
8/9/2016 9:25:32 PM
Kaz Kylheku <418-837-1574@kylheku.com> wrote:
> On 2016-08-09, <william@wilbur.25thandClement.com> <william@wilbur.25thandClement.com> wrote:
>> Siri Cruise <chine.bleu@yahoo.com> wrote:
>>> I'm currently trying to compile a package. configure in one shell results in 'cc
>>> ... -o file.o', which is fine, and another shell gives 'cc ... -o file.lo', which
>>> fails. Now I have to go through a few dozen arcane shell scripts to figure out
>>> why.
>>
>> The building of .lo objects suggests automake and libtool.
>>
>> While autoconf configure scripts are imperfect, there's nothing better out
>> there for feature discovery, IMO.[1] You can use autoconf separately from
>
> There absolutely is: a hand-crafted script which detects features
> relevant to your software, on all the platforms you care about.
>
> Autoconf is so bad, that it's better to just distribute a cleanly
> nonportable program than to use it.
>
> It's more productive for a user to patch things to get the software to
> build and work than to dick around with Autoconf.

The installer doesn't have to dick around with autoconf as long as you
preserve the ability to use the Makefile directly and define all flags
explicitly. At the same time, the installer doesn't have to dick around with
your homegrown solution when he's expecting the typical ./configure && make
&& make install to work. For better or worse, for issues like
cross-compilation packagers rely on autoconf semantics and contrivances.
This is a _process_ issue where technical elegance matters not one iota.
What matters is consistency; simply having a script named "configure" does
not provide the expected consistency.
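
For example, something like this, with a throwaway Makefile standing in for a
conscientiously written project (the EXAMPLE macro is a placeholder):

```shell
dir=$(mktemp -d)
# Minimal Makefile following the convention: CFLAGS belongs to the
# user, ALL_CFLAGS is what the rules actually use.
printf 'CFLAGS = -O0\nALL_CFLAGS = -DEXAMPLE=1 $(CFLAGS)\nshow:\n\t@echo $(ALL_CFLAGS)\n' > "$dir/Makefile"
# Command-line macro assignments override the file's defaults.
out=$(make -C "$dir" -s show CFLAGS="-O2 -g")
echo "$out"   # -DEXAMPLE=1 -O2 -g
rm -rf "$dir"
```

No configure involved; the installer states every flag explicitly.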

The author doesn't have to dick around recreating a framework that autoconf
already implements. I also once disliked autoconf. I still do. I _have_
implemented my own scripts for feature detection. Once I realized I was
needlessly recreating the wheel as well as frustrating packagers, I changed
my mind. That happens when your project is tested by real-world portability
issues and you discover the inadequacies of your homegrown solution. Most of
my projects work out-of-the-box on recent versions of AIX, FreeBSD,
Linux/glibc, Linux/musl, NetBSD, Minix, OpenBSD, OS X, and Solaris. And that
includes using O(1) polling APIs and other non-trivial interfaces.

In terms of helping to find bugs (real bugs, not just portability bugs),
maintaining support for AIX, OS X, and Solaris is most important because the
implementations are so distinct and thus more likely to expose accidental
assumptions. And those are precisely the platforms where the years of hacks
that have gone into autoconf to address corner cases are most beneficial;
where recreating that level of comprehensiveness becomes much too costly.

Also, over the years I've learned that compile-time feature detection is
often inadequate. I'd rather spend my time implementing the necessary
run-time feature detection than recreating autoconf out of sheer aesthetic
anxiety.

If you only care about POSIX APIs on relatively modern open source systems,
then, yes, it's trivial to roll your own solution or none at all.

> I.e. Autoconf is demonstrably worse than nothing; it's like a fucking
> cure for acne that produces skin cancer.

Setting aside the size of the configure script and other litter, what
precisely are your problems with it? Be careful not to conflate autoconf
with automake, libtool, etc. Here are my problems with autoconf

1) Version compatibility. That's only relevant during development and
release time, but that's also when it's the most costly. I'm almost on the
verge of deciding to commit the generated configure script to the
repository. It's a giant PITA to maintain a suitable environment on every
system you test on. Some new OS release comes out every few months and it's
a waste of my time to recreate a development environment each time.

2) The authors no longer officially support autoconf without also using
automake. But they aren't trying to break that mode of use, either. It's
only a headache when bootstrapping, generating required auxiliary files like
install.sh.

3) The use of definedness tests instead of arithmetic tests for HAVE feature
macros. It precludes overriding config.h from the command line. But I fix
that in a 6-line preamble at the top of configure.ac.

4) The tests for file-scoped variables and for enums are poor. They
basically rely on the same logic as the function tests, but that's not
always adequate.

william
8/9/2016 9:29:57 PM
<william@wilbur.25thandClement.com> writes:
>Siri Cruise <chine.bleu@yahoo.com> wrote:
>> I'm currently trying to compile a package. configure in one shell results in 'cc
>> ... -o file.o', which is fine, and another shell gives 'cc ... -o file.lo', which
>> fails. Now I have to go through a few dozen arcane shell scripts to figure out
>> why.

>While autoconf configure scripts are imperfect, there's nothing better out
>there for feature discovery, ..

What features on modern OS's is anybody expecting to find? Since 99%
of software shipping with configure is pretty much hard coded to
something linux flavored or at least something that is modern POSIX,
do we really need automated checks for string.h vs. strings.h
recursing into dozens of directories over and over again? Only for the
implementor to ignore all those findings because they can't figure
out how to properly use the tool anyway?


-- 
Doug McIntyre
doug@themcintyres.us
Doug
8/9/2016 11:06:41 PM
Doug McIntyre <merlyn@dork.geeks.org> wrote:
> <william@wilbur.25thandClement.com> writes:
>>Siri Cruise <chine.bleu@yahoo.com> wrote:
>>> I'm currently trying to compile a package. configure in one shell results in 'cc
>>> ... -o file.o', which is fine, and another shell gives 'cc ... -o file.lo', which
>>> fails. Now I have to go through a few dozen arcane shell scripts to figure out
>>> why.
> 
>>While autoconf configure scripts are imperfect, there's nothing better out
>>there for feature discovery, ..
> 
> What features on modern OS's is anybody expecting to find? Since 99%
> of software shipping with configure is pretty much hard coded to
> something linux flavored or at least something that is modern POSIX,
> do we really need automated checks for string.h vs. strings.h
> recursing into dozens of directories over and over again? Only for the
> implementor to ignore all those findings because they can't figure
> out how to properly use the tool anyway?

No, you don't. And while that's a legitimate gripe about autoconf, that's one
of only about 10 headers that autoconf will, by default, search for.

While many configure scripts chew through many dozens and perhaps a hundred
or more arguably useless tests, the vast majority of those were chosen by
the author. Only the first 20 or so lines you see from a typical configure
script are boilerplate. As a practical matter, those aren't the things
that feed disdain for autoconf.

My point is merely that autoconf is useful; that as a practical matter
there's no better tool existing out there; and that it's not worth the time
or effort to reproduce those parts of autoconf that work decently.

I'm not making the claim that autoconf is ideal, or that it's not almost
universally abused. But I doubt switching to another tool would provide
improved portability in such cases, or that the few proper feature tests
would be implemented correctly. Clearly people aren't actually _testing_
their build adequately.

I fully support the notion that people should be writing simple and clear
makefiles from the get-go. Habitually starting a project using autotools is
a bad habit, IMO. But I'll take a poorly written autoconf project over,
e.g., one that relies on cmake or ninja.

I'd love to see a better autoconf--something that produces or helps you to
produce a dependency-less feature detection script. I'd love to see a better
automake--something that produces or helps you to produce a portable and
comprehensible makefile for non-trivial builds. I'd love to see a better
libtool--something that eases the task of invoking compilers, linkers, and
their various options. But they don't exist.

The next best alternative is often nothing at all, but if you need
compile-time feature detection, and especially if the tests are numerous or
cannot be preprocessor based, then I think judicious application of autoconf
is not only warranted but preferable, all things considered. (And there's
much to consider.) By contrast, IME, for automake and libtool the costs
outweigh the benefits.

william
8/10/2016 12:22:53 AM
Siri Cruise <chine.bleu@yahoo.com> wrote:
> In article <qamp7d-dq6.ln1@wilbur.25thandClement.com>,
>  <william@wilbur.25thandClement.com> wrote:
> 
>> Siri Cruise <chine.bleu@yahoo.com> wrote:
>> > I'm currently trying to compile a package. configure in one shell results
>> > in 'cc ... -o file.o', which is fine, and another shell gives 'cc ... -o
>> > file.lo', which fails. Now I have to go through a few dozen arcane shell
>> > scripts to figure out why.
>> 
>> The building of .lo objects suggests automake and libtool.
>> 
>> While autoconf configure scripts are imperfect, there's nothing better out
>> there for feature discovery, IMO.[1] You can use autoconf separately from
> 
> Actually there is. Either make the configure etc bulletproof or figure out a way 
> to write transparent enough scripts that others can debug them. I wasted a 
> weekend trying to figure out why freetype builds in one shell but not another 
> with identical environments. I ended up junking it. Even after I extracted the 
> clang statements from the make output, I suddenly got a bunch of '#include 
> SOME_CRAP' failing even though the clang calls were identical and the sources 
> unchanged.
> 
> Now I got something about ranlib puking.

And you think problems similar to these would just disappear without
autoconf?

I mean, yes, literally in the case of ranlib. None of the systems I
regularly port to require using ranlib anymore. (And that's not something
autoconf checks for in its boilerplate, FWIW.) And, yes, if you just
abandoned portability as a criterion (strict or loose), required a modern
Linux/glibc environment, and perhaps didn't care about cross-compilation.

But even across various Linux environments (e.g. Linux/musl, ancient
Linux/glibc environments still floating around, or cross-compilation
environments where autoconf's tests work hard to avoid depending on the
runtime), IME once you begin approaching the same level of portability
autoconf can easily provide out-of-the-box you'll find a trail of even
_more_ headaches behind you.

autoconf doesn't alleviate the burden of figuring out the minimum and
simplest requirements. No tool does. There's no avoiding struggling with
that except by abandoning hope altogether.

It's very tempting to just require a modern Linux environment, but the Linux
ecosystem changes rapidly. APIs change. Compilers change. Exigencies change.
If you expect to be supporting the same software 10, 5, or even 3 years
hence, IME prudence dictates taking portability into account now by
providing the framework necessary for basic feature detection.

At the same time, it's not something you want to spend too much time on. IME
you want to provide for the general capability ahead of time, but only rely
on a particular test or dependency based on proven need. If you roll your
own you're much more likely to overshoot or undershoot the mark. If you rely
on the framework that autoconf provides but are judicious wrt the tests and
functionality you rely upon, you're more likely to reach the optimal
cost+benefit.

william
8/10/2016 1:01:28 AM
In article <80bq7d-kbv.ln1@wilbur.25thandClement.com>,
 <william@wilbur.25thandClement.com> wrote:

> > Now I got something about ranlib puking.
> 
> And you think problems similar to these would just disappear without
> autoconf?

Unreadable spaghetti code is no longer accepted in C because it cannot be
debugged and maintained. It doesn't matter how good the code is; if the code
can't be maintained, it is abandoned.

Apply the same standards to build procedures. Write them readable, 
comprehensible, and debuggable.

> I mean, yes, literally in the case of ranlib. None of the systems I
> regularly port to require using ranlib anymore. (And that's not something
> autoconf checks for in its boilerplate, FWIW.) And, yes, if you just
> abandoned portability as a criterion (strict or loose), required a modern

I actually build fat binaries on MacOSX, which means I get to hit all the
portability problems. That may be the source of the ranlib problem, so I get to
rewrite my makefiles to avoid fat static libraries to see if that fixes it.

The linux version comes after. I still haven't figured which is a bigger pain in 
the nethers, building imagemagick or interfacing to CoreImage. Once I learn AV 
Foundation I can find out if it's the same API nightmare as ffmpeg.

> It's very tempting to just require a modern Linux environment, but the Linux

I still haven't got the MacOSX 10.10 and 10.7 builds to even begin, while I
debug the 10.9 build.

Siri
8/10/2016 6:38:31 AM
In article <C6idndT0IuWc_zfKnZ2dnUU7-cnNnZ2d@giganews.com>,
 Doug McIntyre <merlyn@dork.geeks.org> wrote:

> <william@wilbur.25thandClement.com> writes:
> >Siri Cruise <chine.bleu@yahoo.com> wrote:
> >> I'm currently trying to compile a package. configure in one shell results
> >> in 'cc ... -o file.o', which is fine, and another shell gives 'cc ... -o
> >> file.lo', which fails. Now I have to go through a few dozen arcane shell
> >> scripts to figure out why.
> 
> >While autoconf configure scripts are imperfect, there's nothing better out
> >there for feature discovery, ..
> 
> What features on modern OS's is anybody expecting to find? Since 99%
> of software shipping with configure is pretty much hard coded to
> something linux flavored or at least something that is modern POSIX,
> do we really need automated checks for string.h vs. strings.h
> recursing into dozens of directories over and over again? Only for the
> implementor to ignore all those findings because they can't figure
> out how to properly use the tool anyway?

I really don't mind automated checks for anything as long as they work 
flawlessly in all environments, or more realistically they can be debugged when 
they are broken.

Siri
8/10/2016 6:40:42 AM
In article <tn8q7d-1fb.ln1@wilbur.25thandClement.com>,
 <william@wilbur.25thandClement.com> wrote:

> I fully support the notion that people should be writing simple and clear
> makefiles from the get-go. Habitually starting a project using autotools is
> a bad habit, IMO. But I'll take a poorly written autoconf project over,
> e.g., one that relies on cmake or ninja.

I could never reliably build cmake on MacOSX. I toss all cmake packages 
immediately.

Siri
8/10/2016 6:46:54 AM