COMPGROUPS.NET

### Sampling: What Nyquist Didn't Say, and What to Do About It


I know there's a few people out there who actually read the papers that I
post on my web site.

I also know that the papers have gotten a bit ragged, and that I haven't
been maintaining them.

So here: I've made a start.

http://www.wescottdesign.com/articles/Sampling/sampling.pdf

My intent (with apologies to all of you with dial-up), is to convert the
ratty HTML documents to pdf as time permits, and in a way that leaves the
documents easily maintainable and in a form that is easy to look at from
the web or to print out, as you desire.

--
http://www.wescottdesign.com


On 20-12-2010 at 08:34:44 Tim Wescott <tim@seemywebsite.com> wrote:

> I know there's a few people out there who actually read the papers that I
> post on my web site.
>
> I also know that the papers have gotten a bit ragged, and that I haven't
> been maintaining them.
>
> So here: I've made a start.
>
> http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>
> My intent (with apologies to all of you with dial-up), is to convert the
> ratty HTML documents to pdf as time permits, and in a way that leaves the
> documents easily maintainable and in a form that is easy to look at from
> the web or to print out, as you desire.
>
>

My first thought was that the fonts look a little bit too thin and bright.
I use Acrobat Reader 9.4.1, preferences/rendering: LCD, all options checked.

--

Mikolaj


Hi Tim,

On 12/20/2010 12:34 AM, Tim Wescott wrote:
> I also know that the papers have gotten a bit ragged, and that I haven't
> been maintaining them.
>
> My intent (with apologies to all of you with dial-up), is to convert the
> ratty HTML documents to pdf as time permits, and in a way that leaves the
> documents easily maintainable and in a form that is easy to look at from
> the web or to print out, as you desire.

I've not looked at the document(s).  But, if you think
carefully about how you build a PDF (e.g., which fonts
you embed, what resolution you use for images, etc.)
you can exert a great deal of control over the finished
size of the document.  (You also need to consider which readers
are forced to upgrade just to view someone else's "new"
document.)

For example, when I include detailed photos, I deliberately
choose resolutions high enough to allow the user (reader)
to "zoom" to examine high levels of detail without
the image being rendered with jaggies, etc.

Also, note that cropping an image in the PDF doesn't discard
the "invisible" portion of the image.  This can be embarrassing
if you think you've hidden (not included) a portion of the
image that isn't "visible"  :>

Finally, note that the PDF is tagged with several items
from your "writing environment" (user name, etc.).  Just
be sure you know what's embedded "behind the scenes".

HTH


Mikolaj wrote:
> On 20-12-2010 at 08:34:44 Tim Wescott <tim@seemywebsite.com> wrote:
>
>> I know there's a few people out there who actually read the papers that I
>> post on my web site.
>>
>> I also know that the papers have gotten a bit ragged, and that I haven't
>> been maintaining them.
>>
>> So here: I've made a start.
>>
>> http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>>
>> My intent (with apologies to all of you with dial-up), is to convert the
>> ratty HTML documents to pdf as time permits, and in a way that leaves the
>> documents easily maintainable and in a form that is easy to look at from
>> the web or to print out, as you desire.
>>
>
> My first thought was that the fonts look a little bit too thin and bright.
> I use Acrobat Reader 9.4.1, preferences/rendering: LCD, all options checked.
>
I agree; the font makes it very difficult to read, and is not
conducive to sustained reading, namely anything longer than one
page.


On 20/12/10 10:16, D Yuniskis wrote:
> Hi Tim,
>
> On 12/20/2010 12:34 AM, Tim Wescott wrote:
>> I also know that the papers have gotten a bit ragged, and that I haven't
>> been maintaining them.
>>
>> My intent (with apologies to all of you with dial-up), is to convert the
>> ratty HTML documents to pdf as time permits, and in a way that leaves the
>> documents easily maintainable and in a form that is easy to look at from
>> the web or to print out, as you desire.
>

Good idea, it looks good.

> I've not looked at the document(s). But, if you think

I have looked at the document - but not read it.  I'll do that when I
have time to do it justice.

> carefully about how you build a PDF (e.g., which fonts
> you embed, what resolution you use for images, etc.)
> you can exert a great deal of control over the finished

Tim has used pdfLaTeX - the usual tool of choice for publishing
technical or academic papers.  So the CMR fonts are embedded as needed
to make the document self-supporting.  The images appear to be vector
format rather than bitmap, so they scale nicely.

> size of the document. (You also need to consider which readers
> are forced to upgrade just to view someone else's "new"
> document.)
>

No sane person /chooses/ to use Acrobat Reader any more.  It is annoying
and insecure bloatware, and spreads itself over far too much of your
system (hint to Adobe - it's been a decade since it was acceptable for a
simple application to require a reboot of windows during installation).
It is /far/ slower, and takes orders of magnitude more memory than
common alternatives like Foxit on windows or Evince on Linux.  And it
has such a bad security record that I am considering banning it from our
company.  Unfortunately, there are occasions when someone has made a
document that will only work with the latest Acrobat Reader - and I
fully agree with you that it is annoying.  But Tim's documents are pdf
1.4, a very well-supported standard.

> For example, when I include detailed photos, I deliberately
> choose resolutions high enough to allow the user (reader)
> to "zoom" to examine high levels of detail without
> the image being rendered with jaggies, etc.
>

That's a good plan, and something people often forget about - the result
being documents that look good on-screen, but poor in printout.  In this
particular case, however, it seems the graphics are in a vector format
(pdf files support eps), which is the best choice for drawings.

> Also, note that cropping an image in the PDF doesn't discard
> the "invisible" portion of the image. This can be embarrassing
> if you think you've hidden (not included) a portion of the
> image that isn't "visible" :>
>

This is seldom an issue with pdf files (though it can be, depending on
the tools used to create it) - it is commonly found in MS Word files.  But
Tim has used pdfLaTeX - the pdf file contains exactly what he wants it
to contain.

> Finally, note that the PDF is tagged with several items
> from your "writing environment" (user name, etc.). Just
> be sure you know what's embedded "behind the scenes".
>

Again, pdfLaTeX adds the tags /you/ want it to add, and nothing else.
But since Tim has his name on the front page, and every page's footer,
I'm guessing he won't mind if the pdf file is also tagged with his user
name!



On Mon, 20 Dec 2010 01:34:44 -0600, Tim Wescott <tim@seemywebsite.com>
wrote:

>I know there's a few people out there who actually read the papers that I
>post on my web site.
>
>I also know that the papers have gotten a bit ragged, and that I haven't
>been maintaining them.
>
>So here: I've made a start.
>
>http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>
>My intent (with apologies to all of you with dial-up), is to convert the
>ratty HTML documents to pdf as time permits, and in a way that leaves the
>documents easily maintainable and in a form that is easy to look at from
>the web or to print out, as you desire.

The fonts are terrible. They seem to be bitmap fonts and not vector.
It looks like you used TeX to generate the document.  Go to
http://www.truetex.com/ for links to quite a number of articles on
how to use truetype fonts in TeX.

Regards
Anton


On a sunny day (Mon, 20 Dec 2010 01:40:32 -0800) it happened Robert Baer
<robertbaer@localnet.com> wrote in
<UO-dnUZyBpEGuZLQnZ2dnUVZ_gidnZ2d@posted.localnet>:

>Mikolaj wrote:
>> On 20-12-2010 at 08:34:44 Tim Wescott <tim@seemywebsite.com> wrote:
>>
>>> I know there's a few people out there who actually read the papers that I
>>> post on my web site.
>>>
>>> I also know that the papers have gotten a bit ragged, and that I haven't
>>> been maintaining them.
>>>
>>> So here: I've made a start.
>>>
>>> http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>>>
>>> My intent (with apologies to all of you with dial-up), is to convert the
>>> ratty HTML documents to pdf as time permits, and in a way that leaves the
>>> documents easily maintainable and in a form that is easy to look at from
>>> the web or to print out, as you desire.
>>>
>>
>> My first thought was that the fonts look a little bit too thin and bright.
>> I use Acrobat Reader 9.4.1, preferences/rendering: LCD, all options checked.
>>
>   I agree; the font makes it very difficult to read, and is not
>conducive to sustained reading, namely anything longer than one
>page.

I think the fonts look great, watching full screen on a 1680x1050 LCD with
xpdf in Linux.
wget http://www.wescottdesign.com/articles/Sampling/sampling.pdf
xpdf sampling.pdf



On 20-12-2010 at 11:19:55 David Brown
<david.brown@removethis.hesbynett.no> wrote:

(...)
>
> No sane person /chooses/ to use Acrobat Reader any more.  It is annoying
> and insecure bloatware, and spreads itself over far too much of your
> system (hint to Adobe - it's been a decade since it was acceptable for a
> simple application to require a reboot of windows during installation).
>   It is /far/ slower, and takes orders of magnitude more memory than
> common alternatives like Foxit on windows or Evince on Linux.  And it
> has such a bad security record that I am considering banning it from our
> company.  Unfortunately, there are occasions when someone has made a
> document that will only work with the latest Acrobat Reader - and I
> fully agree with you that it is annoying.  But Tim's documents are pdf
> 1.4, a very well-supported standard.
(...)

Thank you, I found Foxit a fast and suitable pdf viewer.
I like it.  (And Acrobat is even incompatible with its own previous
versions.)  Now Tim's document looks better.

--
Mikolaj


Jan Panteltje <pNaonStpealmtje@yahoo.com> writes:

> On a sunny day (Mon, 20 Dec 2010 01:40:32 -0800) it happened Robert Baer
> <robertbaer@localnet.com> wrote in
> <UO-dnUZyBpEGuZLQnZ2dnUVZ_gidnZ2d@posted.localnet>:
>
>>Mikolaj wrote:
>>> On 20-12-2010 at 08:34:44 Tim Wescott <tim@seemywebsite.com> wrote:
>>>
>>>> I know there's a few people out there who actually read the papers that I
>>>> post on my web site.
>>>>
>>>> I also know that the papers have gotten a bit ragged, and that I haven't
>>>> been maintaining them.
>>>>
>>>> So here: I've made a start.
>>>>
>>>> http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>>>>
>>>> My intent (with apologies to all of you with dial-up), is to convert the
>>>> ratty HTML documents to pdf as time permits, and in a way that leaves the
>>>> documents easily maintainable and in a form that is easy to look at from
>>>> the web or to print out, as you desire.
>>>>
>>>
>>> My first thought was that the fonts look a little bit too thin and bright.
>>> I use Acrobat Reader 9.4.1, preferences/rendering: LCD, all options checked.
>>>
>>   I agree; the font makes it very difficult to read, and is not
>>conducive to sustained reading, namely anything longer than one
>>page.
>
> I think the fonts look great, watching full screen on a 1680x1050 LCD with
> xpdf in Linux.
> wget http://www.wescottdesign.com/articles/Sampling/sampling.pdf
> xpdf sampling.pdf

No, they do look a bit "bitmapped", I'm afraid. I am also using xpdf in
Linux. A minor detail though, still quite readable IMO.

--

John Devereux


On Mon, 20 Dec 2010 01:34:44 -0600, Tim Wescott <tim@seemywebsite.com>
wrote:

>I know there's a few people out there who actually read the papers that I
>post on my web site.
>
>I also know that the papers have gotten a bit ragged, and that I haven't
>been maintaining them.
>
>So here: I've made a start.
>
>http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>
>My intent (with apologies to all of you with dial-up), is to convert the
>ratty HTML documents to pdf as time permits, and in a way that leaves the
>documents easily maintainable and in a form that is easy to look at from
>the web or to print out, as you desire.

Dang, it's like people haven't seen Computer Modern before. Maybe you
need to use Times New Roman, turn off kerning, and ditch the ligatures
so that it "looks right." Maybe it's the em-dashes. Jeez...

I, for one, salute your leet Latex skilz. And thank you for taking the
time to make these available.

--
Rich Webb     Norfolk, VA


On 20/12/10 13:47, John Devereux wrote:
> Jan Panteltje<pNaonStpealmtje@yahoo.com>  writes:
>
>> On a sunny day (Mon, 20 Dec 2010 01:40:32 -0800) it happened Robert Baer
>> <robertbaer@localnet.com>  wrote in
>> <UO-dnUZyBpEGuZLQnZ2dnUVZ_gidnZ2d@posted.localnet>:
>>
>>> Mikolaj wrote:
>>>> On 20-12-2010 at 08:34:44 Tim Wescott<tim@seemywebsite.com>  wrote:
>>>>
>>>>> I know there's a few people out there who actually read the papers that I
>>>>> post on my web site.
>>>>>
>>>>> I also know that the papers have gotten a bit ragged, and that I haven't
>>>>> been maintaining them.
>>>>>
>>>>> So here: I've made a start.
>>>>>
>>>>> http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>>>>>
>>>>> My intent (with apologies to all of you with dial-up), is to convert the
>>>>> ratty HTML documents to pdf as time permits, and in a way that leaves the
>>>>> documents easily maintainable and in a form that is easy to look at from
>>>>> the web or to print out, as you desire.
>>>>>
>>>>
>>>> My first thought was that the fonts look a little bit too thin and bright.
>>>> I use Acrobat Reader 9.4.1, preferences/rendering: LCD, all options checked.
>>>>
>>>    I agree; the font makes it very difficult to read, and is not
>>> conducive to sustained reading, namely anything longer than one
>>> page.
>>
>> I think the fonts look great, watching full screen on a 1680x1050 LCD with
>> xpdf in Linux.
>> wget http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>> xpdf sampling.pdf
>
> No, they do look a bit "bitmapped" I'm afraid. I am also using xpdf in
> linux. A minor detail though, still quite readable IMO.
>

Getting /almost/ on-topic again, the issue is, I think, that xpdf
doesn't do anti-aliasing very well and so the fonts look a bit poor at
low resolution.  Evince does better.  But in general, CMR fonts are
better on high-resolution devices - they were designed for use on laser
printers, not to look nice on screens.



On 12/20/2010 01:40 AM, Robert Baer wrote:
> Mikolaj wrote:
>> On 20-12-2010 at 08:34:44 Tim Wescott <tim@seemywebsite.com> wrote:
>>
>>> I know there's a few people out there who actually read the papers
>>> that I
>>> post on my web site.
>>>
>>> I also know that the papers have gotten a bit ragged, and that I haven't
>>> been maintaining them.
>>>
>>> So here: I've made a start.
>>>
>>> http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>>>
>>> My intent (with apologies to all of you with dial-up), is to convert the
>>> ratty HTML documents to pdf as time permits, and in a way that leaves
>>> the
>>> documents easily maintainable and in a form that is easy to look at from
>>> the web or to print out, as you desire.
>>>
>>
>> My first thought was that the fonts look a little bit too thin and bright.
>> I use Acrobat Reader 9.4.1, preferences/rendering: LCD, all options
>> checked.
>>
> I agree; the font makes it very difficult to read, and is not conducive
> to sustained reading, namely anything longer than one page.

What reader are you using?  I'm getting a two-valued distribution here:
"looks great!", and "looks nasty!".  If it's a reader issue --
particularly if you're using Adobe -- then I'd like to test on the 'bad'
ones.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" was written for you.
See details at http://www.wescottdesign.com/actfes/actfes.html


On 12/20/2010 03:30 AM, Anton Erasmus wrote:
> On Mon, 20 Dec 2010 01:34:44 -0600, Tim Wescott<tim@seemywebsite.com>
> wrote:
>
>> I know there's a few people out there who actually read the papers that I
>> post on my web site.
>>
>> I also know that the papers have gotten a bit ragged, and that I haven't
>> been maintaining them.
>>
>> So here: I've made a start.
>>
>> http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>>
>> My intent (with apologies to all of you with dial-up), is to convert the
>> ratty HTML documents to pdf as time permits, and in a way that leaves the
>> documents easily maintainable and in a form that is easy to look at from
>> the web or to print out, as you desire.
>
> The fonts are terrible. They seem to be bitmap fonts and not vector.
> It looks like you used TeX to generate the document.  Go to
>> http://www.truetex.com/ for links to quite a number of articles on
> how to use truetype fonts in TeX.

What reader were you using?  I'm trying to figure out (a) why some
people think it looks peachy and some think it looks terrible (it looks
great on Evince), and (b) make sure I test it on enough different
readers that I get a true picture of what it looks like to the world at
large.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" was written for you.
See details at http://www.wescottdesign.com/actfes/actfes.html


On 12/20/2010 11:14 AM, Tim Wescott wrote:
> On 12/20/2010 03:30 AM, Anton Erasmus wrote:
>> On Mon, 20 Dec 2010 01:34:44 -0600, Tim Wescott<tim@seemywebsite.com>
>> wrote:
>>
>>> I know there's a few people out there who actually read the papers that I
>>> post on my web site.
>>>
>>> I also know that the papers have gotten a bit ragged, and that I haven't
>>> been maintaining them.
>>>
>>> So here: I've made a start.
>>>
>>> http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>>>
>>> My intent (with apologies to all of you with dial-up), is to convert the
>>> ratty HTML documents to pdf as time permits, and in a way that leaves the
>>> documents easily maintainable and in a form that is easy to look at from
>>> the web or to print out, as you desire.
>>
>> The fonts are terrible. They seem to be bitmap fonts and not vector.
>> It looks like you used TeX to generate the document. Go to
> http://www.truetex.com/ for links to quite a number of articles on
>> how to use truetype fonts in TeX.
>
> What reader were you using? I'm trying to figure out (a) why some people think it looks peachy and some think it looks terrible (it
> looks great on Evince), and (b) make sure I test it on enough different readers that I get a true picture of what it looks like to
> the world at large.

Tim,

I just looked at it using Adobe acroread under Fedora 13, with TeXLive 2010
installed (so that even if you hadn't embedded the fonts I should still have
them available without substitution), and I agree with Anton - the fonts are
bit-mapped and not vector. I use the TeX->ps->pdf route (using dvips and
\special commands) to embed my fonts and ensure they are vector.
--
Randy Yates                      % "My Shangri-la has gone away, fading like
Digital Signal Labs              %  the Beatles on 'Hey Jude'"
yates@digitalsignallabs.com      %
http://www.digitalsignallabs.com % 'Shangri-La', *A New World Record*, ELO


On 20/12/2010 05:34, Tim Wescott wrote:
> I know there's a few people out there who actually read the papers that I
> post on my web site.
>
> I also know that the papers have gotten a bit ragged, and that I haven't
> been maintaining them.
>
> So here: I've made a start.
>
> http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>
> My intent (with apologies to all of you with dial-up), is to convert the
> ratty HTML documents to pdf as time permits, and in a way that leaves the
> documents easily maintainable and in a form that is easy to look at from
> the web or to print out, as you desire.
>
Tim,

I took a quick, diagonal look at the paper, as I got curious about the
complaints about the font.  The fonts look OK to me :-) I'm used to
reading math articles set in CMR fonts, so perhaps I'm not a good judge
of this.

However, my attention was caught by a link you quote in footnote
number 13, and I discovered it is no longer available at that address :-(

Maybe you can get the paper updated at some time!?

Regards,

--
Cesar Rabak
GNU/Linux User 52247.
Get counted: http://counter.li.org/


On 12/20/2010 08:37 AM, Cesar Rabak wrote:
> On 20/12/2010 05:34, Tim Wescott wrote:
>> I know there's a few people out there who actually read the papers that I
>> post on my web site.
>>
>> I also know that the papers have gotten a bit ragged, and that I haven't
>> been maintaining them.
>>
>> So here: I've made a start.
>>
>> http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>>
>> My intent (with apologies to all of you with dial-up), is to convert the
>> ratty HTML documents to pdf as time permits, and in a way that leaves the
>> documents easily maintainable and in a form that is easy to look at from
>> the web or to print out, as you desire.
>>
> Tim,
>
> I took a quick, diagonal look at the paper, as I got curious about the
> complaints about the font. The fonts look OK to me :-) I'm used to
> reading math articles set in CMR fonts, so perhaps I'm not a good judge
> of this.
>
> However, my attention was caught by a link you quote in footnote
> number 13, and I discovered it is no longer available at that address :-(
>
> Maybe you can get the paper updated at some time!?
>
> Regards,
>
It was there last week when I double-checked it!  I'll see if I can
chase it down.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" was written for you.
See details at http://www.wescottdesign.com/actfes/actfes.html


On Mon, 20 Dec 2010 01:34:44 -0600, Tim Wescott <tim@seemywebsite.com>
wrote:

>I know there's a few people out there who actually read the papers that I
>post on my web site.
>
>I also know that the papers have gotten a bit ragged, and that I haven't
>been maintaining them.
>
>So here: I've made a start.
>
>http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>
>My intent (with apologies to all of you with dial-up), is to convert the
>ratty HTML documents to pdf as time permits, and in a way that leaves the
>documents easily maintainable and in a form that is easy to look at from
>the web or to print out, as you desire.

I sold a couple hundred thousand channels of an AC power meter, used
for utility end-use surveys, that sampled the power line voltage and
current signals at 27 Hz. I had a hell of a time arguing with
"Nyquist" theorists who claimed I should be sampling at twice the
frequency of the highest line harmonic, like the 15th maybe.

John



On 12/20/2010 08:46 AM, John Larkin wrote:
> On Mon, 20 Dec 2010 01:34:44 -0600, Tim Wescott<tim@seemywebsite.com>
> wrote:
>
>> I know there's a few people out there who actually read the papers that I
>> post on my web site.
>>
>> I also know that the papers have gotten a bit ragged, and that I haven't
>> been maintaining them.
>>
>> So here: I've made a start.
>>
>> http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>>
>> My intent (with apologies to all of you with dial-up), is to convert the
>> ratty HTML documents to pdf as time permits, and in a way that leaves the
>> documents easily maintainable and in a form that is easy to look at from
>> the web or to print out, as you desire.
>
> I sold a couple hundred thousand channels of an AC power meter, used
> for utility end-use surveys, that sampled the power line voltage and
> current signals at 27 Hz. I had a hell of a time arguing with
> "Nyquist" theorists who claimed I should be sampling at twice the
> frequency of the highest line harmonic, like the 15th maybe.

If you've got something like an SCR spike that lands on a different spot
in the cycle each time then subsampling isn't going to build up a true
picture.  But for truly repetitive signals, it's got a lot going for it
(it's how really really fast sampling scopes work -- even today you can
build a sampler that'll work a lot faster than an ADC, fill in the blanks).
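The equivalent-time idea is easy to demonstrate numerically. A toy sketch (the frequencies here are made up for illustration, not taken from the paper): sample a 1 MHz repetitive signal at 0.99 MHz, so each sample lands 1/99 of a cycle further along the waveform than the last, and the record traces out the same shape slowed down to the 10 kHz beat frequency.

```python
import numpy as np

f_sig = 1.00e6    # repetitive signal frequency, Hz (hypothetical)
f_samp = 0.99e6   # sampler rate, deliberately just below f_sig

n = np.arange(1000)
t = n / f_samp
x = np.sin(2 * np.pi * f_sig * t)   # what the slow sampler records

# Each sample advances 1/99 of a cycle through the waveform, so the
# record equals the same sine slowed to the beat frequency f_sig - f_samp.
replica = np.sin(2 * np.pi * (f_sig - f_samp) * t)
assert np.allclose(x, replica, atol=1e-6)
print("slowed-down replica at", f_sig - f_samp, "Hz")
```

The same arithmetic is why a sampling scope only needs a fast sample-hold, not a fast ADC: the conversion happens at the leisurely beat rate.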

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" was written for you.
See details at http://www.wescottdesign.com/actfes/actfes.html


On 2010-12-20, Cesar Rabak <csrabak@bol.com.br> wrote:
> On 20/12/2010 05:34, Tim Wescott wrote:
>> I know there's a few people out there who actually read the papers that I
>> post on my web site.
>>
>> I also know that the papers have gotten a bit ragged, and that I haven't
>> been maintaining them.
>>
>> So here: I've made a start.
>>
>> http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>>
>> My intent (with apologies to all of you with dial-up), is to convert the
>> ratty HTML documents to pdf as time permits, and in a way that leaves the
>> documents easily maintainable and in a form that is easy to look at from
>> the web or to print out, as you desire.
>
> I took a quick, diagonal look at the paper, as I got curious about the
> complaints about the font.  The fonts look OK to me :-) I'm used to
> reading math articles set in CMR fonts, so perhaps I'm not a good judge
> of this.

I don't see anything at all wrong with the font.  The one thing that I
would change is the line length.  It looks like a typical line is
upwards of 110 characters.  That's a bit too much to read comfortably.
If you want to use a font that small, and don't want wide margins, I'd
recommend going to a two-column format.

--
Grant Edwards               grant.b.edwards        Yow! I'm definitely not
                            at                     in Omaha!
                            gmail.com


On 12/20/2010 11:46 AM, John Larkin wrote:
> [...]
> I sold a couple hundred thousand channels of an AC power meter, used
> for utility end-use surveys, that sampled the power line voltage and
> current signals at 27 Hz. I had a hell of a time arguing with
> "Nyquist" theorists who claimed I should be sampling at twice the
> frequency of the highest line harmonic, like the 15th maybe.

John,

If your AC signal had more than 13.5 Hz of bandwidth, how were you
able to accurately sample them at 27 Hz? As far as I know, even
subsampling assumes the _bandwidth_ is less than half the sample rate
(for real sampling).
--
Randy Yates                      % "My Shangri-la has gone away, fading like
Digital Signal Labs              %  the Beatles on 'Hey Jude'"
yates@digitalsignallabs.com      %
http://www.digitalsignallabs.com % 'Shangri-La', *A New World Record*, ELO


On Mon, 20 Dec 2010 08:14:31 -0800, Tim Wescott <tim@seemywebsite.com>
wrote:

>On 12/20/2010 03:30 AM, Anton Erasmus wrote:
>> On Mon, 20 Dec 2010 01:34:44 -0600, Tim Wescott<tim@seemywebsite.com>
>> wrote:
>>
>>> I know there's a few people out there who actually read the papers that I
>>> post on my web site.
>>>
>>> I also know that the papers have gotten a bit ragged, and that I haven't
>>> been maintaining them.
>>>
>>> So here: I've made a start.
>>>
>>> http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>>>
>>> My intent (with apologies to all of you with dial-up), is to convert the
>>> ratty HTML documents to pdf as time permits, and in a way that leaves the
>>> documents easily maintainable and in a form that is easy to look at from
>>> the web or to print out, as you desire.
>>
>> The fonts are terrible. They seem to be bitmap fonts and not vector.
>> It looks like you used TeX to generate the document.  Go to
>> http://www.truetex.com/ for links to quite a number of articles on
>> how to use truetype fonts in TeX.
>
>What reader were you using?  I'm trying to figure out (a) why some
>people think it looks peachy and some think it looks terrible (it looks
>great on Evince), and (b) make sure I test it on enough different
>readers that I get a true picture of what it looks like to the world at
>large.

Looks peachy.
Reader: Tracker Software's PDF-XChange viewer, ver 2.5. Edit |
Preferences | Rendering set to smooth line art, text, and images; and
gamma is set to 1.0.

Text is readable down to 50% scaling (about 4.5" apparent page height on
a laptop LCD with 1400x1050 pixels) and quite good at 100% and up.
Paragraph density (overall 'grayness' on the page) looks even.

Looks so-so.
Reader: Foxit PDF Reader, ver 4.2. Tools | Preferences | Page Display
set to "display texts optimized for LCD screen."

Text is discernible but not really readable at 50%. So-so at 100%, with
stroke widths not well hinted; vertical strokes on capitals, for
example, are much darker than neighboring lower-case letters. Paragraph
density looks thin, with the capital letters standing out.

You might try Bitstream's Charter, which should be included with most
LaTeX distros and avoids the (sometimes painful) chore of using
arbitrary typeface packages.

--
Rich Webb     Norfolk, VA


On 12/20/2010 11:57 AM, Grant Edwards wrote:
> [...]
> I don't see anything at all wrong with the font.

Did you zoom way in, e.g., on a single letter?
--
Randy Yates                      % "My Shangri-la has gone away, fading like
Digital Signal Labs              %  the Beatles on 'Hey Jude'"
yates@digitalsignallabs.com      %
http://www.digitalsignallabs.com % 'Shangri-La', *A New World Record*, ELO


On Mon, 20 Dec 2010 12:03:37 -0500, Randy Yates <yates@ieee.org>
wrote:

>On 12/20/2010 11:46 AM, John Larkin wrote:
>> [...]
>> I sold a couple hundred thousand channels of an AC power meter, used
>> for utility end-use surveys, that sampled the power line voltage and
>> current signals at 27 Hz. I had a hell of a time arguing with
>> "Nyquist" theorists who claimed I should be sampling at twice the
>> frequency of the highest line harmonic, like the 15th maybe.
>
>John,
>
>If your AC signal had more than 13.5 Hz of bandwidth, how were you
>able to accurately sample them at 27 Hz? As far as I know, even
>subsampling assumes the _bandwidth_ is less than half the sample rate
>(for real sampling).

The thing about an electric meter is that you're not trying to
reconstruct the waveform, you're only gathering statistics on it.  The
27.xxx Hz sample rate was chosen so that its harmonics would dance
between the line harmonics up to some highish harmonic of 60 Hz, so as
to not create any slow-wobble aliases in the reported values (trms
volts, amps, power, PF) that would uglify the local realtime display
or the archived time-series records.
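That rate-selection constraint can be sketched in a few lines. A toy search, assuming a 60 Hz line and harmonics up to the 15th (per the earlier post); the candidate grid and the 27.1-ish values are made up, since the actual 27.xxx figure is elided here. A tone at k*60 Hz aliases to its distance from the nearest multiple of fs, and a good rate makes the slowest such beat as fast as possible:

```python
def alias(f, fs):
    """Apparent frequency of a tone at f when sampled at fs, folded into [0, fs/2]."""
    r = f % fs
    return min(r, fs - r)

def slowest_beat(fs, n_harm=15, line=60.0):
    """Slowest wobble produced by any line harmonic up to n_harm."""
    return min(alias(line * k, fs) for k in range(1, n_harm + 1))

# Exactly 27 Hz is a bad choice: the 9th harmonic (540 Hz) is an exact
# multiple of it, so that component aliases to DC and drifts.
assert slowest_beat(27.0) == 0.0

# Search a made-up grid near 27 Hz for the rate whose harmonics "dance
# between" the line harmonics, i.e. whose slowest beat is fastest.
candidates = [27.0 + i / 1000.0 for i in range(1, 1000)]
best = max(candidates, key=slowest_beat)
print(best, slowest_beat(best))
```

In practice the winner also has to be realizable from an available crystal, which is presumably where the "considerable nuisance" came in.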

From a signal-theory standpoint, the bandwidth of the signal is in
fact narrow, so the sample rate can be low. The "signal bandwidth" is
actually the sum of the bandwidths of the various spectral harmonic
lines, multiples of 60 Hz, mostly of the ugly current waveforms, which
is pretty weird when you think of it. The sample-hold is
simultaneously undersampling a bunch of narrow but disjoint spectral
zones, still following the Shannon rules for each one.

Given that, it was a considerable nuisance to come up with that 27.xxx
Hz sample rate using available crystals.

I also used a 7-bit single-slope ADC, which I didn't reveal to the
customers because they would have argued over that, too.

I did waveform acquisition on demand, in a burst of samples, at some
other goofy sample rate, some hundreds of Hz. I sampled over many line
cycles, stuck the samples into RAM, and then reordered them to make
them equivalent-time sequential. That was fun.
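The reordering trick John describes can be sketched in a few lines of Python (a toy model with made-up rates, not his MC6803 firmware): burst-sample a repetitive waveform at a rate incommensurate with the line frequency, tag each sample with its phase inside one cycle, and sort.

```python
# Toy model of equivalent-time reordering (my own sketch, not John's code):
# burst-sample a 60 Hz waveform at a rate whose ratio to 60 Hz is not a
# small integer, then sort the samples by their phase within one cycle.
import math

F_LINE = 60.0      # repetition rate of the waveform, Hz
FS = 487.0         # burst sample rate, Hz (487 is prime, so no lockstep)
N_SAMPLES = 256

T = 1.0 / F_LINE
samples = []
for k in range(N_SAMPLES):
    t = k / FS
    v = math.sin(2 * math.pi * F_LINE * t)   # stand-in for the measured signal
    phase = math.fmod(t, T) / T              # position within the cycle, [0, 1)
    samples.append((phase, v))

# Reorder to an equivalent-time sequence: ascending phase within one cycle.
samples.sort(key=lambda pv: pv[0])

# The reordered record traces one clean cycle of the waveform, even though
# consecutive raw samples were taken whole cycles apart.
assert all(abs(v - math.sin(2 * math.pi * p)) < 1e-6 for p, v in samples)
```

Sorting by phase is what turns the slow scattered burst into one densely sampled equivalent cycle.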

12K lines of MC6803 code!

John


 0

On Mon, 20 Dec 2010 01:34:44 -0600, Tim Wescott <tim@seemywebsite.com>
wrote:

>I know there's a few people out there who actually read the papers that I
>post on my web site.
>
>I also know that the papers have gotten a bit ragged, and that I haven't
>been maintaining them.
>
>So here: I've made a start.
>
>http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>
>My intent (with apologies to all of you with dial-up), is to convert the
>ratty HTML documents to pdf as time permits, and in a way that leaves the
>documents easily maintainable and in a form that is easy to look at from
>the web or to print out, as you desire.
>
>--
>http://www.wescottdesign.com

I'm using Adobe Reader 9 and it looks fine here, even blown way up.

Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.abineau.com

 0

On 12/20/2010 02:34 AM, Tim Wescott wrote:
> [...]
> http://www.wescottdesign.com/articles/Sampling/sampling.pdf

Tim,

First let me say that overall the new paper looks really great! I
am pleased that you've chosen to utilize (La)TeX - it has served
me well for over two decades. There are a few rough edges such
as bitmapped fonts (which aren't necessary) and errors in spacing,
but I'm sure you'll get those worked out.

What does concern me, however, is some of the theory you've presented.
Specifically, this section on p.11:

Sampling at some frequency that is equal to the repetition rate
divided by a prime number will automatically stack these narrow bits
of signal spectrum right up in the same order that they were in the
original signal, only jammed much closer together in frequency which
is the roundabout frequency-domain way of saying that you can sample
at just the right rate, and interpret the resulting signal as a
slowed-down replica of the input waveform.

There are two points in which I challenge the veracity of your assertions:

1. Sampling at a rate of F/N when N is an integer will never help
subsample a signal, since the sampling period, N/F, is always a
multiple of the repetition period 1/F.

2. It seems that to truly, completely sample a repetitive signal in
such a way, you would need a sampling period that will never be a
multiple of the repetition period. For example, for the 60 Hz example
you could use a sample rate of 60 Hz / sqrt(2). But then, even if you
sample at such a rate, it would take an INFINITE amount of time to
fully sample this signal. It's equivalent to sampling an interval on
the real line a point at a time; real analysis tells us that there are
an uncountably infinite number of points in such an interval!

So, I'm afraid I cannot agree that an accurate sampling of a repetitive
waveform can be made in this manner. If you disagree, please show me
where my reasoning is wrong.
--
Randy Yates                      % "My Shangri-la has gone away, fading like
Digital Signal Labs              %  the Beatles on 'Hey Jude'"
yates@digitalsignallabs.com      %
http://www.digitalsignallabs.com % 'Shangri-La', *A New World Record*, ELO


 0

On 12/20/2010 12:48 PM, Eric Jacobsen wrote:
> [...]
> I'm using Adobe Reader 9 and it looks fine here, even blown way up.

Ha! Interesting...

What may be happening is that some font is not embedded, and that if
you don't have the TeX fonts installed, the reader is substituting a
postscript font, which is vector, so it looks fine. But if you do
have TeX fonts on your system, you get them rendered as bitmaps.
--
Randy Yates                      % "My Shangri-la has gone away, fading like
Digital Signal Labs              %  the Beatles on 'Hey Jude'"
yates@digitalsignallabs.com      %
http://www.digitalsignallabs.com % 'Shangri-La', *A New World Record*, ELO

 0

Jan Panteltje wrote:
> On a sunny day (Mon, 20 Dec 2010 01:40:32 -0800) it happened Robert Baer
> <robertbaer@localnet.com>  wrote in
> <UO-dnUZyBpEGuZLQnZ2dnUVZ_gidnZ2d@posted.localnet>:
>
>> Mikolaj wrote:
>>> Dnia 20-12-2010 o 08:34:44 Tim Wescott<tim@seemywebsite.com>  napisał(a):
>>>
>>>> I know there's a few people out there who actually read the papers that I
>>>> post on my web site.
>>>>
>>>> I also know that the papers have gotten a bit ragged, and that I haven't
>>>> been maintaining them.
>>>>
>>>> So here: I've made a start.
>>>>
>>>> http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>>>>
>>>> My intent (with apologies to all of you with dial-up), is to convert the
>>>> ratty HTML documents to pdf as time permits, and in a way that leaves the
>>>> documents easily maintainable and in a form that is easy to look at from
>>>> the web or to print out, as you desire.
>>>>
>>>
>>> My first thought was that fonts look a little bit to thin and bright.
>>> I use AcrobatReader 9.4.1, preferences/rendering: LCD,all options checked.
>>>
>>    I agree, the font makes it very difficult to read, and is not
>> conducive to enhancing reading over a long term, namely longer than one
>> page..
>
> I think the fonts look great, watching full screen on a 1680x1050 LCD with
> xpdf in Linux.
> wget http://www.wescottdesign.com/articles/Sampling/sampling.pdf
> xpdf sampling.pdf
>

Interesting; I am looking on a 1680x1050 as well (although on Windows)
and they look great. Just an observation.

--
Les Cargill

 0

"John Larkin" <jjlarkin@highNOTlandTHIStechnologyPART.com> wrote in message
news:0v1vg6p1838qiv06daj3cftnt4vtggr00s@4ax.com...
> I sold a couple hundred thousand channels of an AC power meter, used
> for utility end-use surveys, that sampled the power line voltage and
> current signals at 27 Hz. I had a hell of a time arguing with
> "Nyquist" theorists who claimed I should be sampling at twice the
> frequency of the highest line harmonic, like the 15th maybe.

This is another one of those things where I think university courses have done
most of the damage -- not emphasizing that Nyquist only cares about the
bandwidth of your signals, not at all about which particular frequencies
you're using.

2nd most common mis-interpretation (or perhaps, "non-optimal use") of Nyquist
I've seen: Figuring that, if you were initially sampling at Fs and needed a
brick-wall filter at Fs/2, if you go to, say, 8x oversampling you now need a
filter with negligible response by 8*Fs/2 (an easier filter to build)... not
realizing that actually all you really need is a filter with negligible
response by 8*Fs-Fs/2 (even easier still!).
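Joel's arithmetic is easy to check with a short calculation (a sketch with names I made up; B = Fs/2 is the protected band edge): when sampling at ratio*Fs, only energy above ratio*Fs - B can fold back into [0, B], so that is where the anti-alias filter's stopband really needs to start.

```python
# Quick check of the anti-alias requirement (assumed names, not from any
# textbook): protect the band [0, B] with B = Fs/2.  When sampling at
# ratio*Fs, only energy above ratio*Fs - B can alias back into [0, B],
# so that is where the stopband actually has to start.

def antialias_stopband_edge(fs: float, ratio: int) -> float:
    """Lowest frequency needing full attenuation when sampling at ratio*fs
    while protecting the band [0, fs/2]."""
    b = fs / 2.0
    return ratio * fs - b

fs = 48000.0
assert antialias_stopband_edge(fs, 1) == 24000.0   # brick wall at the band edge
assert antialias_stopband_edge(fs, 8) == 360000.0  # 8*48k - 24k, not 8*24k = 192k
# The transition band grows from zero width to 336 kHz: a far easier filter.
```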

---Joel


 0

John Larkin wrote:
> On Mon, 20 Dec 2010 12:03:37 -0500, Randy Yates<yates@ieee.org>
> wrote:
>
>> On 12/20/2010 11:46 AM, John Larkin wrote:
>>> [...]
>>> I sold a couple hundred thousand channels of an AC power meter, used
>>> for utility end-use surveys, that sampled the power line voltage and
>>> current signals at 27 Hz. I had a hell of a time arguing with
>>> "Nyquist" theorists who claimed I should be sampling at twice the
>>> frequency of the highest line harmonic, like the 15th maybe.
>>
>> John,
>>
>> If your AC signal had more than 13.5 Hz of bandwidth, how were you
>> able to accurately sample them at 27 Hz? As far as I know, even
>> subsampling assumes the _bandwidth_ is less than half the sample rate
>> (for real sampling).
>
>
> The thing about an electric meter is that you're not trying to
> reconstruct the waveform, you're only gathering statistics on it.  The
> 27.xxx Hz sample rate was chosen so that its harmonics would dance
> between the line harmonics up to some highish harmonic of 60 Hz, so as
> to not create any slow-wobble aliases in the reported values (trms
> volts, amps, power, PF) that would uglify the local realtime display
> or the archived time-series records.
>

Is this something like heterodyning, then? You're building a detector,
not a ... recorder. Right?

>  From a signal-theory standpoint, the bandwidth of the signal is in
> fact narrow, so the sample rate can be low. The "signal bandwidth" is
> actually the sum of the bandwidths of the various spectral harmonic
> lines, multiples of 60 Hz, mostly of the ugly current waveforms, which
> is pretty weird when you think of it. The sample-hold is
> simultaneously undersampling a bunch of narrow but disjoint spectral
> zones, still following the Shannon rules for each one.
>
> Given that, it was a considerable nuisance to come up with that 27.xxx
> Hz sample rate. Using available crystals.
>
> I also used a 7-bit single-slope ADC, which I didn't reveal to the
> customers because they would have argued over that, too.
>
> I did waveform acquisition on demand, in a burst of samples, at some
> other goofy sample rate, some hundreds of Hz. I sampled over many line
> cycles, stuck the samples into RAM, and then reordered them to make
> them equivalent-time sequential. That was fun.
>
> 12K lines of MC6803 code!
>
> John
>
>

--
Les Cargill

 0

David Brown <david.brown@removethis.hesbynett.no> writes:

> On 20/12/10 13:47, John Devereux wrote:
>> Jan Panteltje<pNaonStpealmtje@yahoo.com>  writes:
>>
>>> On a sunny day (Mon, 20 Dec 2010 01:40:32 -0800) it happened Robert Baer
>>> <robertbaer@localnet.com>  wrote in
>>> <UO-dnUZyBpEGuZLQnZ2dnUVZ_gidnZ2d@posted.localnet>:
>>>
>>>> Mikolaj wrote:
>>>>> Dnia 20-12-2010 o 08:34:44 Tim Wescott<tim@seemywebsite.com>  napisał(a):
>>>>>
>>>>>> I know there's a few people out there who actually read the papers that I
>>>>>> post on my web site.
>>>>>>
>>>>>> I also know that the papers have gotten a bit ragged, and that I haven't
>>>>>> been maintaining them.
>>>>>>
>>>>>> So here: I've made a start.
>>>>>>
>>>>>> http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>>>>>>
>>>>>> My intent (with apologies to all of you with dial-up), is to convert the
>>>>>> ratty HTML documents to pdf as time permits, and in a way that leaves the
>>>>>> documents easily maintainable and in a form that is easy to look at from
>>>>>> the web or to print out, as you desire.
>>>>>>
>>>>>
>>>>> My first thought was that fonts look a little bit to thin and bright.
>>>>> I use AcrobatReader 9.4.1, preferences/rendering: LCD,all options checked.
>>>>>
>>>>    I agree, the font makes it very difficult to read, and is not
>>>> conducive to enhancing reading over a long term, namely longer than one
>>>> page..
>>>
>>> I think the fonts look great, watching full screen on a 1680x1050 LCD with
>>> xpdf in Linux.
>>> wget http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>>> xpdf sampling.pdf
>>
>> No, they do look a bit "bitmapped" I'm afraid. I am also using xpdf in
>> linux. A minor detail though, still quite readable IMO.
>>
>
> Getting /almost/ on-topic again, the issue is, I think, that xpdf
> doesn't do anti-aliasing very well and so the fonts look a bit poor at
> low resolution.  Evince does better.  But in general, CMR fonts are
> better on high-resolution devices - they were designed for use on
> laser printers, not to look nice on screens.

You're right. Acrobat does better still. I guess I'm not used to this
since I don't see many bitmapped fonts. (Even with xpdf it is not at all
"terrible" by the way, and thanks Tim for posting it).

--

John Devereux

 0

On 12/20/2010 10:30 AM, Les Cargill wrote:
> John Larkin wrote:
>> On Mon, 20 Dec 2010 12:03:37 -0500, Randy Yates<yates@ieee.org>
>> wrote:
>>
>>> On 12/20/2010 11:46 AM, John Larkin wrote:
>>>> [...]
>>>> I sold a couple hundred thousand channels of an AC power meter, used
>>>> for utility end-use surveys, that sampled the power line voltage and
>>>> current signals at 27 Hz. I had a hell of a time arguing with
>>>> "Nyquist" theorists who claimed I should be sampling at twice the
>>>> frequency of the highest line harmonic, like the 15th maybe.
>>>
>>> John,
>>>
>>> If your AC signal had more than 13.5 Hz of bandwidth, how were you
>>> able to accurately sample them at 27 Hz? As far as I know, even
>>> subsampling assumes the _bandwidth_ is less than half the sample rate
>>> (for real sampling).
>>
>>
>> The thing about an electric meter is that you're not trying to
>> reconstruct the waveform, you're only gathering statistics on it. The
>> 27.xxx Hz sample rate was chosen so that its harmonics would dance
>> between the line harmonics up to some highish harmonic of 60 Hz, so as
>> to not create any slow-wobble aliases in the reported values (trms
>> volts, amps, power, PF) that would uglify the local realtime display
>> or the archived time-series records.
>>
>
> Is this something like heterodyning, then? You're building a detector,
> not a ... recorder. Right?

Pretty much -- read my paper!

You're taking advantage of the fact that the signal you're acquiring is
very cyclic in character.  So (for instance), instead of taking samples
every 1/600 seconds, you could take samples every 1/60 + 1/600 seconds,
and get the _effect_ of taking samples faster.

John chose a frequency that would let him get decent statistics faster
and more reliably, but he's just building on the basic idea that I present.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" was written for you.
See details at http://www.wescottdesign.com/actfes/actfes.html

 0

On 12/20/2010 10:00 AM, Randy Yates wrote:
> On 12/20/2010 02:34 AM, Tim Wescott wrote:
>> [...]
>> http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>
> Tim,
>
> First let me say that overall the new paper looks really great! I
> am pleased that you've chosen to utilize (La)TeX - it has served
> me well for over two decades. There are a few rough edges such
> as bitmapped fonts (which aren't necessary) and errors in spacing,
> but I'm sure you'll get those worked out.
>
> What does concern me, however, is some of the theory you've presented.
> Specifically, this section on p.11:
>
> Sampling at some frequency that is equal to the repetition rate
> divided by a prime number will automatically stack these narrow bits
> of signal spectrum right up in the same order that they were in the
> original signal, only jammed much closer together in frequency which
> is the roundabout frequency-domain way of saying that you can sample
> at just the right rate, and interpret the resulting signal as a
> slowed-down replica of the input waveform.
>
> There are two points in which I challenge the veracity of your assertions:
>
> 1. Sampling at a rate of F/N when N is integer will never help
> subsample a signal since the period of the sampling, N/F, is always a
> multiple of the repetition rate period 1/F.
>
> 2. It seems that to truly, completely sample a repetitive signal in
> such a way, you would need a sampling period that will never be a
> multiple of the repetition period. For example, for the 60 Hz example
> you could use a sample rate of 60 Hz / sqrt(2). But then, even if you
> sample at such a rate, it would take an INFINITE amount of time to
> fully sample this signal. It's equivalent to sampling an interval on
> the real line a point at a time; real analysis tells us that there are
> an uncountably infinite number of points in such an interval!
>
> So, I'm afraid I cannot agree that an accurate sampling of a repetitive
> waveform can be made in this manner. If you disagree, please show me
> where my reasoning is wrong.

0: thanks for the kind words.  I wrote my Master's thesis in LaTeX, and
have been living in a continual state of disappointment since.  I'm
actually using Lyx, because I'm lazy, but it's still LaTeX underneath.

1 & 2: I felt that my arguments were not well stated in the paper.
Since I have to re-post it _anyway_, I'll spend a bit of time with the
math.  I just replied to another post, and in the process realized,
tentatively, a relationship: if you have a cycle interval T = 1/F and
you want to capture N samples of a cycle, then sampling at Ts = (M +
P/N) * T will do the job as long as M, N and P are integers, and P and N
are relatively prime and both non-zero.  Reordering things for P != 1 is
a challenge, but not impossible.
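A quick numerical check of that tentative relationship (my own sketch, not anything from the paper): with Ts = (M + P/N) * T, sample k lands at cycle phase (k*P/N) mod 1, so all N phases get hit exactly when gcd(P, N) = 1.

```python
# Sketch checking the tentative relationship above: sampling a T-periodic
# signal every Ts = (M + P/N) * T hits all N distinct cycle phases exactly
# when gcd(P, N) = 1 (P, N nonzero).  Exact rational arithmetic avoids
# any floating-point fog.
from fractions import Fraction
from math import gcd

def visited_phases(M: int, P: int, N: int):
    """Distinct phases (as fractions of T) hit by samples spaced (M + P/N)*T."""
    step = Fraction(M) + Fraction(P, N)        # Ts / T
    return sorted({(k * step) % 1 for k in range(N)})

# M=1, P=1, N=10 is the 1/60 s cycle sampled every 1/60 + 1/600 s:
# ten samples walk through ten evenly spaced phases, in order.
assert visited_phases(1, 1, 10) == [Fraction(i, 10) for i in range(10)]

# P=3 is coprime with 10: all ten phases still appear, just shuffled --
# the "reordering for P != 1" challenge mentioned above.
assert visited_phases(1, 3, 10) == [Fraction(i, 10) for i in range(10)]

# P=4 shares a factor with N=10, so only 5 distinct phases ever show up.
assert len(visited_phases(1, 4, 10)) == 5 and gcd(4, 10) != 1
```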

Whether I'm right and didn't argue my case well, or I'm just wrong, I
need to change things there.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" was written for you.
See details at http://www.wescottdesign.com/actfes/actfes.html

 0

Hi Grant,

On 12/20/2010 9:57 AM, Grant Edwards wrote:
> On 2010-12-20, Cesar Rabak<csrabak@bol.com.br>  wrote:
>
>> I gave a diagonal look at the paper, as I got curious about the

[Cesar:  what is the intent of "diagonal look"?  Sorry, I can't
fathom what you mean, here  :<  Too early in the day...]

>> complaints on the font.  They look OK to me :-) I'm used to read math's
>> articles written in CMR fonts so perhaps I'm not a good judge on this.
>
> I don't see anything at all wrong with the font.  The one thing that I
> would change is the line length.  It looks like a typical line is
> upwards of 110 characters.  That's a bit too much to read comfortably.
> If you want to use a font that small, and don't want wide margins, I'd
> recommend going to a two-column format.

<frown>  I agree with your point re: line length.  But, I adopted
a two column format (3/8" gutter) years ago when I started my
"notes" series.  It *really* complicates layout.  You end up
having to create lots of "page width" boxes anchored to your
text.  This ends up breaking up the text columns A LOT.
Especially if you have lots of illustrations, tables, etc.

For example, putting code snippets in-line constrains the length
of each code line severely (unless you go to the page wide
boxes).

<shrug>  So far, I've not had to resort to "rotated pages"
but that's only because I've been aggressive at keeping
tables, illustrations, etc. tightly bound.  :-/

 0

In comp.dsp Randy Yates <yates@ieee.org> wrote:

(snip)

> What does concern me, however, is some of the theory you've presented.
> Specifically, this section on p.11:

>   Sampling at some frequency that is equal to the repetition rate
>   divided by a prime number will automatically stack these narrow bits
>   of signal spectrum right up in the same order that they were in the
>   original signal, only jammed much closer together in frequency which
>   is the roundabout frequency-domain way of saying that you can sample
>   at just the right rate, and interpret the resulting signal as a
>   slowed-down replica of the input waveform.

> There are two points in which I challenge the veracity of your assertions:

> 1. Sampling at a rate of F/N when N is integer will never help
> subsample a signal since the period of the sampling, N/F, is always a
> multiple of the repetition rate period 1/F.

One has to choose carefully.

> 2. It seems that to truly, completely sample a repetitive signal in
> such a way, you would need a sampling period that will never be a
> multiple of the repetition period. For example, for the 60 Hz example
> you could use a sample rate of 60 Hz / sqrt(2). But then, even if you
> sample at such a rate, it would take an INFINITE amount of time to
> fully sample this signal. It's equivalent to sampling an interval on
> the real line a point at a time; real analysis tells us that there are
> an uncountably infinite number of points in such an interval!

That would be true for signals with infinite bandwidth.  At least
for the AC power meter, you won't have that.  Harmonics from SCR
(or triac) based light dimmers likely get into the MHz range, so
one should be able to see that far.   The usual computer power
supply is a voltage doubler off the AC line, which shouldn't be
as bad as the SCR, but still has significant harmonics.

But as was previously said, the goal is not to sample the 60Hz
waveform, but, as used in describing modulated signals, the envelope.

> So, I'm afraid I cannot agree that an accurate sampling of a repetitive
> waveform can be made in this manner. If you disagree, please show me
> where my reasoning is wrong.

If one samples 60Hz power usage at 60Hz, one would lose much important
information.  At 27Hz, where do the aliases end up?

60Hz   --> 6Hz
120Hz  --> 12Hz
180Hz  --> -9Hz
240Hz  --> -3Hz
300Hz  --> 3Hz
360Hz  --> 9Hz
420Hz  --> -12Hz
480Hz  --> -6Hz
540Hz  --> 0Hz
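The table above can be reproduced mechanically (a small sketch; `alias` is my own helper name) by folding each harmonic into the first Nyquist zone of the 27Hz sampler:

```python
# Fold each 60 Hz harmonic into the first Nyquist zone of a 27 Hz
# sampler, i.e. into (-13.5, +13.5] Hz.  `alias` is my own helper name.

def alias(f: float, fs: float) -> float:
    """Apparent (signed) frequency of f after sampling at fs."""
    a = f % fs
    if a > fs / 2:
        a -= fs
    return a

fs = 27.0
folded = [alias(h * 60.0, fs) for h in range(1, 10)]
# Matches the table: 6, 12, -9, -3, 3, 9, -12, -6, 0
assert folded == [6.0, 12.0, -9.0, -3.0, 3.0, 9.0, -12.0, -6.0, 0.0]
```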

It seems that you don't want exactly 27Hz; maybe that is
what he said previously.

What you want to measure, though, is the RMS power over some
period of time, taking into account the significant harmonics.

Now, say you have a signal with harmonics up to a few MHz,
and say, for example, that one of those aliases to 0Hz, and
so you don't see it.  How much of a problem is that?  If you have
all the floor(1000000/60) harmonics up to that point, then you
are likely pretty close.

Floor(1000000/60) is 16666, so you could sample at 1000000/16666 Hz,
a sampling rate of 60.0024... Hz.  If you want something
near 27Hz that doesn't have harmonics that are multiples of 60
until 1000020, then it looks like 27.000027Hz  is about right.

It seems to me that you pick the harmonic that you can afford
not to see, and plan the sampling rate accordingly.

However, as that is getting close to crystal tolerance, I might
suggest that phase locking to a multiple of 60Hz, and then dividing
down would be a good way to generate the sampling clock.

-- glen

 0

Hi David,

On 12/20/2010 3:19 AM, David Brown wrote:
> On 20/12/10 10:16, D Yuniskis wrote:
>> For example, when I include detailed photos, I deliberately
>> chose high enough resolutions that allow the user (reader)
>> to "zoom" to examine high levels of detail without
>> the image being rendered with jaggies, etc.
>
> That's a good plan, and something people often forget about - the result
> being documents that look good on-screen, but poor in printout. In this
> particular case, however, it seems the graphics are in a vector format
> (pdf files support eps), which is the best choice for drawings.

Actually, I've found the opposite to be the case, more
often than not.  Printers are pretty much what they are.
OTOH, on screen, you can choose to zoom to arbitrary
levels into an image to see greater bits of detail.
With images resampled at lower resolutions, you quickly end
up with jaggies that wouldn't have been obvious to the
unaided eye in paper form.

But, very high resolution photographs quickly eat up lots
of bytes.  So, you have to come to a balance, somewhere.

>> Also, note that cropping an image in the PDF doesn't discard
>> the "invisible" portion of the image. This can be embarassing
>> if you think you've hidden (not included) a portion of the
>> image that isn't "visible" :>
>
> This is seldom an issue with pdf files (though it can be, depending on the
> tools used to create it) - it is commonly found in MS Word files. But
> Tim has used pdfLaTeX - the pdf file contains exactly what he wants it
> to contain.

Dunno, I don't use word or pdfLaTeX.  I use FrameMaker for all my
DTP as it's "quickest" to merge sources into a presentable form
(and ~20 years of experience with it has a significant bit of inertia).

One typical technique I use is to include a photo of <something>.
Then, create another "window" (not in the GUI sense) overlapping the
original photo's "window".  In this smaller, overlapping window,
I paste yet another copy of the photo -- but zoomed to much higher
magnification.  Then, pan that image to the part of the underlying
photo that is "of detailed interest".  I.e., I end up with a
"closeup" of some portion of the basic photo to which I want to
draw attention.  It's more economical on real estate than a
separate "closeup photo" would be.  And, gives viewers of the print
edition the detail that would otherwise only be visible "on screen"
in an interactive environment.

Since FrameMaker writes PS, it relies on PS's innate abilities
to do this cropping on its behalf.  As a result, you end up with
the whole image *in* the document, layered *under* a viewport
built in PS.  :-/

My point was to understand what your tool is doing to your
"input"/data so that you aren't "leaking" anything that you
don't want to leak (nor adding to the size of the resulting
file, needlessly).

Next, I want to try embedding audio in some documents (e.g., it's
easier to *hear* phonetic sounds than to *see* visual symbols thereof).

 0

On 20/12/10 19:49, John Devereux wrote:
> David Brown<david.brown@removethis.hesbynett.no>  writes:
>
>> On 20/12/10 13:47, John Devereux wrote:
>>> Jan Panteltje<pNaonStpealmtje@yahoo.com>   writes:
>>>
>>>> On a sunny day (Mon, 20 Dec 2010 01:40:32 -0800) it happened Robert Baer
>>>> <robertbaer@localnet.com>   wrote in
>>>> <UO-dnUZyBpEGuZLQnZ2dnUVZ_gidnZ2d@posted.localnet>:
>>>>
>>>>> Mikolaj wrote:
>>>>>> Dnia 20-12-2010 o 08:34:44 Tim Wescott<tim@seemywebsite.com>   napisał(a):
>>>>>>
>>>>>>> I know there's a few people out there who actually read the papers that I
>>>>>>> post on my web site.
>>>>>>>
>>>>>>> I also know that the papers have gotten a bit ragged, and that I haven't
>>>>>>> been maintaining them.
>>>>>>>
>>>>>>> So here: I've made a start.
>>>>>>>
>>>>>>> http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>>>>>>>
>>>>>>> My intent (with apologies to all of you with dial-up), is to convert the
>>>>>>> ratty HTML documents to pdf as time permits, and in a way that leaves the
>>>>>>> documents easily maintainable and in a form that is easy to look at from
>>>>>>> the web or to print out, as you desire.
>>>>>>>
>>>>>>
>>>>>> My first thought was that fonts look a little bit to thin and bright.
>>>>>> I use AcrobatReader 9.4.1, preferences/rendering: LCD,all options checked.
>>>>>>
>>>>>     I agree, the font makes it very difficult to read, and is not
>>>>> conducive to enhancing reading over a long term, namely longer than one
>>>>> page..
>>>>
>>>> I think the fonts look great, watching full screen on a 1680x1050 LCD with
>>>> xpdf in Linux.
>>>> wget http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>>>> xpdf sampling.pdf
>>>
>>> No, they do look a bit "bitmapped" I'm afraid. I am also using xpdf in
>>> linux. A minor detail though, still quite readable IMO.
>>>
>>
>> Getting /almost/ on-topic again, the issue is, I think, that xpdf
>> doesn't do anti-aliasing very well and so the fonts look a bit poor at
>> low resolution.  Evince does better.  But in general, CMR fonts are
>> better on high-resolution devices - they were designed for use on
>> laser printers, not to look nice on screens.
>
> You're right. Acrobat does better still. I guess I'm not used to this
> since I don't see many bitmapped fonts. (Even with xpdf it is not at all
> "terrible" by the way, and thanks Tim for posting it).
>

CMR fonts are not actually bitmapped fonts, but they are by the time
they end up in the pdf file.  They are metafont fonts, described by a
metafont program.  But pdf format does not support metafont fonts - so
pdfLaTeX uses a bitmapped CMR font built for something like a 300 dpi
laser printer, and this is not optimal for screen usage.

When used as intended - using dvi files on a system with the metafont
sources and metafont program available - metafont fonts have much more
flexibility than truetype, postscript or type 1 fonts, and will give you
results that are fine-tuned to the exact printer you are using.  But
that information is lost with pdf files.

The easiest way to improve the pdfs generated by pdfLaTeX is to add some
usepackage lines:

\usepackage{times}
\usepackage{mathpazo}
\usepackage{courier}
\usepackage{helvet}

This will result in the common fonts Times, Helvetica (Arial), and
Courier being used as the serif, sans serif and typewriter fonts, which
work well on all systems.  Of course, you still get the better font
handling of LaTeX - things like kerning and ligatures work as you would
want.

And it's always possible to use any one of a gazillion other font
packages that are common in TeX installations - or to build the required
metric files from any other fonts you might have.


 0

On Mon, 20 Dec 2010 13:30:00 -0500, Les Cargill
<lcargill99@comcast.net> wrote:

>John Larkin wrote:
>> On Mon, 20 Dec 2010 12:03:37 -0500, Randy Yates<yates@ieee.org>
>> wrote:
>>
>>> On 12/20/2010 11:46 AM, John Larkin wrote:
>>>> [...]
>>>> I sold a couple hundred thousand channels of an AC power meter, used
>>>> for utility end-use surveys, that sampled the power line voltage and
>>>> current signals at 27 Hz. I had a hell of a time arguing with
>>>> "Nyquist" theorists who claimed I should be sampling at twice the
>>>> frequency of the highest line harmonic, like the 15th maybe.
>>>
>>> John,
>>>
>>> If your AC signal had more than 13.5 Hz of bandwidth, how were you
>>> able to accurately sample them at 27 Hz? As far as I know, even
>>> subsampling assumes the _bandwidth_ is less than half the sample rate
>>> (for real sampling).
>>
>>
>> The thing about an electric meter is that you're not trying to
>> reconstruct the waveform, you're only gathering statistics on it.  The
>> 27.xxx Hz sample rate was chosen so that its harmonics would dance
>> between the line harmonics up to some highish harmonic of 60 Hz, so as
>> to not create any slow-wobble aliases in the reported values (trms
>> volts, amps, power, PF) that would uglify the local realtime display
>> or the archived time-series records.
>>
>
>Is this something like heterodyning, then? You're building a detector,
>not a ... recorder. Right?

It records rms volts, amps, power, but doesn't try to reconstruct the
raw waveforms; so the Sampling Theorem doesn't apply. That didn't stop
all sorts of people from arguing that the sample rate had to be twice
that of the highest reasonable AC line harmonic. As Tim says, lots of
people fling "Nyquist Rate" around without really thinking about it.

If the voltage waveform is a sine wave (which it pretty much is) then
the current harmonics contribute no real power anyhow.

John


 0

On 20/12/10 17:57, Grant Edwards wrote:
> On 2010-12-20, Cesar Rabak<csrabak@bol.com.br>  wrote:
>> Em 20/12/2010 05:34, Tim Wescott escreveu:
>>> I know there's a few people out there who actually read the papers that I
>>> post on my web site.
>>>
>>> I also know that the papers have gotten a bit ragged, and that I haven't
>>> been maintaining them.
>>>
>>> So here: I've made a start.
>>>
>>> http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>>>
>>> My intent (with apologies to all of you with dial-up), is to convert the
>>> ratty HTML documents to pdf as time permits, and in a way that leaves the
>>> documents easily maintainable and in a form that is easy to look at from
>>> the web or to print out, as you desire.
>>
>> I gave a diagonal look at the paper, as I got curious about the
>> complaints on the font.  They look OK to me :-) I'm used to read math's
>> articles written in CMR fonts so perhaps I'm not a good judge on this.
>
> I don't see anything at all wrong with the font.  The one thing that I
> would change is the line length.  It looks like a typical line is
> upwards of 110 characters.  That's a bit too much to read comfortably.
> If you want to use a font that small, and don't want wide margins, I'd
> recommend going to a two-column format.
>

I agree that the line width is /slightly/ too wide for comfort, but I'd
avoid two-column format unless I were trying to save the last few trees
on the planet.  For papers with maths and figures, it makes things a lot
more complicated, and it's always a pain to read if your screen is not
large enough to fit a whole page comfortably on-screen.  It's better to
add a bit wider margins - they can be useful for notes or for binding on
a printed version.

If we are going to nit-pick on the typography (which seems a little
unfair, given that it is vastly better than in most papers), I'd like to
see a little more vertical space before the footnote delimiter line.
There are occasional mistakes in the spacing (such as an extra space
after "2f_0" in line three of page 1, and occasionally after figure
references).  I prefer not to have spaces around an em dash, but that's
perhaps just a personal preference.  It is considered poor style to
start a line with a number, such as on page 3.  I also think a small
space before a unit (such as "100\!kHz") looks nice.

It is clearer if you count your title page as page 1 - that makes the
document page numbers and the pdf page numbers consistent.

Try the "varioref" package for generating references - then you avoid
things like "Figure 6 on page 7" appearing on page 7.


I still haven't got round to reading the document itself - I hope the
contents are worth the effort in the presentation!



On Mon, 20 Dec 2010 10:59:14 -0800, Tim Wescott <tim@seemywebsite.com>
wrote:

>On 12/20/2010 10:30 AM, Les Cargill wrote:
>> John Larkin wrote:
>>> On Mon, 20 Dec 2010 12:03:37 -0500, Randy Yates<yates@ieee.org>
>>> wrote:
>>>
>>>> On 12/20/2010 11:46 AM, John Larkin wrote:
>>>>> [...]
>>>>> I sold a couple hundred thousand channels of an AC power meter, used
>>>>> for utility end-use surveys, that sampled the power line voltage and
>>>>> current signals at 27 Hz. I had a hell of a time arguing with
>>>>> "Nyquist" theorists who claimed I should be sampling at twice the
>>>>> frequency of the highest line harmonic, like the 15th maybe.
>>>>
>>>> John,
>>>>
>>>> If your AC signal had more than 13.5 Hz of bandwidth, how were you
>>>> able to accurately sample them at 27 Hz? As far as I know, even
>>>> subsampling assumes the _bandwidth_ is less than half the sample rate
>>>> (for real sampling).
>>>
>>>
>>> The thing about an electric meter is that you're not trying to
>>> reconstruct the waveform, you're only gathering statistics on it. The
>>> 27.xxx Hz sample rate was chosen so that its harmonics would dance
>>> between the line harmonics up to some highish harmonic of 60 Hz, so as
>>> to not create any slow-wobble aliases in the reported values (trms
>>> volts, amps, power, PF) that would uglify the local realtime display
>>> or the archived time-series records.
>>>
>>
>> Is this something like heterodyning, then? You're building a detector,
>> not a ... recorder. Right?
>
>Pretty much -- read my paper!
>
>You're taking advantage of the fact that the signal you're acquiring is
>very cyclic in character.  So (for instance), instead of taking samples
>every 1/600 seconds, you could take samples every 1/60 + 1/600 seconds,
>and get the _effect_ of taking samples faster.
>
>John chose a frequency that would let him get decent statistics faster
>and more reliably, but he's just building on the basic idea that I present.

I thought about sampling close to 60 Hz. I could have taken a block of
256 samples at, say, 60+1/256 Hz, and walked the whole sine wave in a
few seconds at equivalent steps of 1.406 degrees. But that had ugly
side effects for sampled harmonics, specifically reporting the RMS
value of ratty current waveforms. And I didn't have enough compute
power anyhow. So I sampled at 26.9947, which is 800.156 degrees at 60
Hz, which still gives 256 evenly-spaced samples but the harmonic
aliasing behavior is entirely different.

Messy stuff.

John
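The equivalent-time trick Tim describes above can be sketched numerically. This is a minimal illustration, assuming an ideal 60 Hz sine and the 1/60 + 1/600 s example period from his post (not John's actual meter code):

```python
import numpy as np

f_sig = 60.0                   # signal frequency, Hz
Ts = 1/60 + 1/600              # sample period: one full cycle plus 1/10 cycle

n = np.arange(10)              # ten samples walk one whole cycle
samples = np.sin(2 * np.pi * f_sig * n * Ts)

# Each sample lands 1/600 s (36 degrees) further along the 60 Hz cycle,
# so the sequence is a slowed-down replica of one cycle of the input:
replica = np.sin(2 * np.pi * f_sig * n / 600)
print(np.allclose(samples, replica))   # True
```

The whole-cycle part of the period drops out because the signal repeats; only the 1/600 s offset per sample matters.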



Op 20-Dec-10 17:12, Tim Wescott schreef:
> On 12/20/2010 01:40 AM, Robert Baer wrote:
>> Mikolaj wrote:
>>> On 20-12-2010 at 08:34:44, Tim Wescott <tim@seemywebsite.com>
>>> wrote:
>>>
>>>> I know there's a few people out there who actually read the papers
>>>> that I
>>>> post on my web site.
>>>>
>>>> I also know that the papers have gotten a bit ragged, and that I
>>>> haven't
>>>> been maintaining them.
>>>>
>>>> So here: I've made a start.
>>>>
>>>> http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>>>>
>>>> My intent (with apologies to all of you with dial-up), is to convert
>>>> the
>>>> ratty HTML documents to pdf as time permits, and in a way that leaves
>>>> the
>>>> documents easily maintainable and in a form that is easy to look at
>>>> from
>>>> the web or to print out, as you desire.
>>>>
>>>
>>> My first thought was that the fonts look a little bit too thin and bright.
>>> I use AcrobatReader 9.4.1, preferences/rendering: LCD,all options
>>> checked.
>>>
>> I agree, the font makes it very difficult to read, and is not conducive
>> to comfortable reading over the long term, namely longer than one page.
>
> What reader are you using? I'm getting a two-valued distribution here:
> "looks great!", and "looks nasty!". If it's a reader issue --
> particularly if you're using Adobe -- then I'd like to test on the 'bad'

I agree with the people who don't like the font, though I wouldn't go as
far as "looks nasty". On a 1920x1200 screen with Acrobat Reader the text
isn't as comfortable to read as in most PDFs. When viewing it with
pages side-by-side (as I do with most documents) or at 100%, the fonts
are too thin/light. When I zoom in, the fonts do indeed look bitmapped;
the jaggies get worse as the zoom increases.

The fonts used in the graphs look perfectly fine though, even when
zoomed in.


Tim Wescott wrote:
> On 12/20/2010 10:30 AM, Les Cargill wrote:
>> John Larkin wrote:
>>> On Mon, 20 Dec 2010 12:03:37 -0500, Randy Yates<yates@ieee.org>
>>> wrote:
>>>
>>>> On 12/20/2010 11:46 AM, John Larkin wrote:
>>>>> [...]
>>>>> I sold a couple hundred thousand channels of an AC power meter, used
>>>>> for utility end-use surveys, that sampled the power line voltage and
>>>>> current signals at 27 Hz. I had a hell of a time arguing with
>>>>> "Nyquist" theorists who claimed I should be sampling at twice the
>>>>> frequency of the highest line harmonic, like the 15th maybe.
>>>>
>>>> John,
>>>>
>>>> If your AC signal had more than 13.5 Hz of bandwidth, how were you
>>>> able to accurately sample them at 27 Hz? As far as I know, even
>>>> subsampling assumes the _bandwidth_ is less than half the sample rate
>>>> (for real sampling).
>>>
>>>
>>> The thing about an electric meter is that you're not trying to
>>> reconstruct the waveform, you're only gathering statistics on it. The
>>> 27.xxx Hz sample rate was chosen so that its harmonics would dance
>>> between the line harmonics up to some highish harmonic of 60 Hz, so as
>>> to not create any slow-wobble aliases in the reported values (trms
>>> volts, amps, power, PF) that would uglify the local realtime display
>>> or the archived time-series records.
>>>
>>
>> Is this something like heterodyning, then? You're building a detector,
>> not a ... recorder. Right?
>
> Pretty much -- read my paper!
>

No, I gotcha - I just wasn't thinking in terms of conversion to an...
"IF regime" for line voltage measurements!

> You're taking advantage of the fact that the signal you're acquiring is
> very cyclic in character. So (for instance), instead of taking samples
> every 1/600 seconds, you could take samples every 1/60 + 1/600 seconds,
> and get the _effect_ of taking samples faster.
>
> John chose a frequency that would let him get decent statistics faster
> and more reliably, but he's just building on the basic idea that I present.
>

Nice paper, BTW.

--
Les Cargill


On Mon, 20 Dec 2010 08:14:31 -0800, Tim Wescott <tim@seemywebsite.com>
wrote:

>On 12/20/2010 03:30 AM, Anton Erasmus wrote:
>> On Mon, 20 Dec 2010 01:34:44 -0600, Tim Wescott<tim@seemywebsite.com>
>> wrote:
>>
>>> I know there's a few people out there who actually read the papers that I
>>> post on my web site.
>>>
>>> I also know that the papers have gotten a bit ragged, and that I haven't
>>> been maintaining them.
>>>
>>> So here: I've made a start.
>>>
>>> http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>>>
>>> My intent (with apologies to all of you with dial-up), is to convert the
>>> ratty HTML documents to pdf as time permits, and in a way that leaves the
>>> documents easily maintainable and in a form that is easy to look at from
>>> the web or to print out, as you desire.
>>
>> The fonts are terrible. They seem to be bitmap fonts and not vector.
>> It looks like you used TeX to generate the document.  Go to
>> http://www.truetex.com/ for links to quite a number of articles on
>> how to use truetype fonts in TeX.
>
>What reader were you using?  I'm trying to figure out (a) why some
>people think it looks peachy and some think it looks terrible (it looks
>great on Evince), and (b) make sure I test it on enough different
>readers that I get a true picture of what it looks like to the world at
>large.

I used Acrobat Pro 9 on Windows XP. Acroreader 9 on OpenSuse
also looks terrible. Looks fine in Evince on the same machine.
So it looks like it is Acrobat. Unfortunately that is one PDF Viewer
it has to look nice on.

Regards
Anton Erasmus


On Mon, 20 Dec 2010 19:38:01 +0000 (UTC), glen herrmannsfeldt
<gah@ugcs.caltech.edu> wrote:

>In comp.dsp Randy Yates <yates@ieee.org> wrote:
>
>(snip)
>
>> What does concern me, however, is some of the theory you've presented.
>> Specifically, this section on p.11:
>
>>   Sampling at some frequency that is equal to the repetition rate
>>   divided by a prime number will automatically stack these narrow bits
>>   of signal spectrum right up in the same order that they were in the
>>   original signal, only jammed much closer together in frequency which
>>   is the roundabout frequency-domain way of saying that you can sample
>>   at just the right rate, and interpret the resulting signal as a
>>   slowed-down replica of the input waveform.
>
>> There are two points in which I challenge the veracity of your assertions:
>
>> 1. Sampling at a rate of F/N when N is an integer will never help
>> subsample a signal, since the sampling period, N/F, is always a
>> multiple of the repetition period 1/F.
>
>One has to choose carefully.
>
>> 2. It seems that to truly, completely sample a repetitive signal in
>> such a way, you would need a sampling period that will never be a
>> multiple of the repetition period. For example, for the 60 Hz example
>> you could use a sample rate of 60 Hz / sqrt(2). But then, even if you
>> sample at such a rate, it would take an INFINITE amount of time to
>> fully sample this signal. It's equivalent to sampling an interval on
>> the real line a point at a time; real analysis tells us that there are
>> an uncountably infinite number of points in such an interval!
>
>That would be true for signals with infinite bandwidth.  At least
>for the AC power meter, you won't have that.  Harmonics from SCR
>(or triac) based light dimmers likely get into the MHz range, so
>one should be able to see that far.   The usual computer power
>supply is a voltage double off the AC line, which shouldn't be
>as bad as the SCR, but still has significant harmonics.
>
>But as was previously said, the goal is not to sample the 60Hz
>waveform, but, as used in describing modulated signals, the envelope.
>
>> So, I'm afraid I cannot agree that an accurate sampling of a repetitive
>> waveform can be made in this manner. If you disagree, please show me
>> where my reasoning is wrong.
>
>If one samples 60Hz power usage at 60Hz, one would lose much important
>information.  At 27Hz, where do the aliases end up?
>
>60Hz   --> 6Hz
>120Hz  --> 12Hz
>180Hz  --> -9Hz
>240Hz  --> -3Hz
>300Hz  --> 3Hz
>360Hz  --> 9Hz
>420Hz  --> -12Hz
>480Hz  --> -6Hz
>540Hz  --> 0Hz
>
>It seems that you don't want exactly 27Hz, maybe that is
>what he said previously.

I used 26.99947. I wrote a horrible Basic program that explored the
possible selections of available crystals, divided IRQ rates, divided
channel rates (16 channels), and possible aliases of the sample rate
against line harmonics. All against a guess about available compute
power. It was one of those ill-posed problems with no hard quality
metric, just a guess as to which solution felt better.
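The alias bookkeeping that program had to do can be sketched in a few lines. Here's a rough equivalent (a sketch assuming ideal sampling, folding aliases into (-fs/2, fs/2] with the sign convention glen uses in the table above):

```python
def alias(f, fs):
    """Fold frequency f (Hz) into the first Nyquist zone of sample rate fs."""
    a = f % fs
    if a > fs / 2:
        a -= fs          # fold the upper half of the zone down, negating the sign
    return a

# Reproduces glen's table for fs = 27 Hz:
# 60 -> 6, 120 -> 12, 180 -> -9, 240 -> -3, 300 -> 3,
# 360 -> 9, 420 -> -12, 480 -> -6, 540 -> 0
for h in range(1, 10):
    print(h * 60, '->', alias(h * 60, 27))
```

Sweeping candidate sample rates through a loop like this, and scoring how close the line-harmonic aliases come to 0 Hz or to each other, is the kind of search the Basic program presumably performed.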

>
>What you want to measure, though, is the RMS power over some
>period of time, taking into account the significant harmonics.

The current harmonics have no real power, at least as long as the
voltage waveform is sinusoidal, which it usually sort of is.

>
>Now, say you have a signal with harmonics up to a few MHz,
>and say, for example, that one of those aliases to 0Hz, and
>so you don't see.  How much of a problem is that?  If you have
>all the floor(1000000/60) harmonics up to that point, then you
>are likely pretty close.

Most power people are only concerned with the 10th or maybe 15th
harmonic.

>
>Floor(1000000/60) is 16666, so if you sample at 1000000/16666,
>for a sampling rate of 60.0024... Hz.  If you want something
>near 27Hz that doesn't have harmonics that are multiples of 60
>until 1000020, then it looks like 27.000027Hz  is about right.
>
>It seems to me that you pick the harmonic that you can afford
>not to see, and plan the sampling rate accordingly.
>
>However, as that is getting close to crystal tolerance, I might
>suggest that phase locking to a multiple of 60Hz, and then dividing
>down would be a good way to generate the sampling clock.

Fun, but overkill. The line voltage and currents are constantly
jumping around anyhow.

John



In comp.dsp John Larkin <jjlarkin@highnotlandthistechnologypart.com> wrote:
(snip)

> It records rms volts, amps, power, but doesn't try to reconstruct the
> raw waveforms; so the Sampling Theorem doesn't apply. That didn't stop
> all sorts of people from arguing that the sample rate had to be twice
> that of the highest reasonable AC line harmonic. As Tim says, lots of
> people fling "Nyquist Rate" around without really thinking about it.

There are also some radar designers who believe that Heisenberg uncertainty applies.

> If the voltage waveform is a sine wave (which it pretty much is) then
> there's no energy in the current harmonics anyhow.

Unless the source impedance is too high, such as it might be
at the end of a long extension cord with small wire.

If you have 1/n harmonic distribution, as from a square wave or
from an SCR light dimmer, then you can figure how many harmonics
you need from the error tolerance.  I might have wanted to go
to 1MHz, but 256*60 isn't so far off.

-- glen
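glen's rule of thumb above - size the harmonic count from the error tolerance - can be checked quickly for the 1/n case he mentions. A sketch, assuming an ideal square wave's odd-harmonic, 1/n-amplitude spectrum:

```python
def missed_power_fraction(n_max, n_total=99999):
    """Fraction of a square wave's total harmonic power lost by keeping
    only harmonics up to n_max (odd harmonics, amplitudes ~ 1/n)."""
    total = sum(1 / n**2 for n in range(1, n_total + 1, 2))  # ~ pi^2 / 8
    kept = sum(1 / n**2 for n in range(1, n_max + 1, 2))
    return 1 - kept / total

# Keeping through the 15th harmonic misses only about 2.5% of the power,
# which is consistent with stopping around the 10th or 15th harmonic.
print(missed_power_fraction(15))
```

Power goes as amplitude squared, so the 1/n amplitudes become a rapidly converging 1/n^2 power series and the truncation error falls off quickly.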


In comp.dsp John Larkin <jjlarkin@highnotlandthistechnologypart.com> wrote:
(snip, I wrote harmonics up to...)

>>480Hz  --> -6Hz
>>540Hz  --> 0Hz

>>It seems that you don't want exactly 27Hz, maybe that is
>>what he said previously.

> I used 26.99947. I wrote a horrible Basic program that explored the
> possible selections of available crystals, divided IRQ rates, divided
> channel rates (16 channels), and possible aliases of the sample rate
> against line harmonics. All against a guess about avaliable compute
> power. It was one of those ill-posed problems with no hard quality
> metric, just a guess as to which solution felt better.

It is supposed to be that the most popular crystal is 3579545Hz,
as needed for NTSC color TV demodulation.  That isn't likely
to be a nice multiple of other common frequencies.  (Maybe now
that analog broadcasting has ceased, the crystals will be harder
to find.)

>>What you want to measure, though, is the RMS power over some
>>period of time, taking into account the significant harmonics.

> The current harmonics have no real power, at least as long as the
> voltage waveform is sinusoidal, which it usually sort of is.

That sounds right.  Though I do remember an undergrad physics
lab where we looked at the power line with an oscilloscope.
It seems that they had a voltage regulating transformer with
a large third harmonic.  Not so common, though.

>>Now, say you have a signal with harmonics up to a few MHz,
>>and say, for example, that one of those aliases to 0Hz, and
>>so you don't see.  How much of a problem is that?  If you have
>>all the floor(1000000/60) harmonics up to that point, then you
>>are likely pretty close.

> Most power people are only concerned with the 10th or maybe 15th
> harmonic.

I suppose that sounds right.  The reason to be concerned with
the higher ones is the effect on AM radios, but the actual
power is pretty low.

(snip)

>>However, as that is getting close to crystal tolerance, I might
>>suggest that phase locking to a multiple of 60Hz, and then dividing
>>down would be a good way to generate the sampling clock.

> Fun, but overkill. The line voltage and currents are constantly
> jumping around anyhow.

I was thinking that the PLL might be cheaper than a crystal,
but maybe not, and maybe it doesn't matter much.  27Hz is
far enough that you don't have to worry about low harmonics
even if the crystal is a little off.

-- glen


On Mon, 20 Dec 2010 13:02:30 -0500, Randy Yates <yates@ieee.org> wrote:

>On 12/20/2010 12:48 PM, Eric Jacobsen wrote:
>> [...]
>> I'm using Adobe Reader 9 and it looks fine here, even blown way up.
>
>Ha! Interesting...
>
>What may be happening is that some font is not embedded, and that if
>you don't have the TeX fonts installed, the reader is substituting a
>postscript font, which is vector, so it looks fine. But if you do
>have TeX fonts on your system, you get them rendered as bitmaps.

Nope, looks like it's a LyX problem (1.6.7 here).

A quickie foo.tex with a minimal header (just \documentclass{article},
\usepackage{amsmath}, and \begin{document}) produces peachy (vector)
output when run from the command line through "latex foo.tex" then
"dvipdfm foo.dvi" or straight to pdf with "pdflatex foo.tex".

However, the LyX output, when exported with any of its three pdf
options, produces the blocky bitmap-style typefaces.

TeXnic Center's GUI did okay (more peachy output). Didn't try any of the
other GUI wrappers.

--
Rich Webb     Norfolk, VA


On 2010-12-20, D Yuniskis <not.going.to.be@seen.com> wrote:

>> I don't see anything at all wrong with the font.  The one thing that I
>> would change is the line length.  It looks like a typical line is
>> upwards of 110 characters.  That's a bit too much to read comfortably.
>> If you want to use a font that small, and don't want wide margins, I'd
>> recommend going to a two-column format.
>
><frown>  I agree with your point re: line length.  But, I adopted
> a two column format (3/8" gutter) years ago when I started my
> "notes" series.  It *really* complicates layout.

Unfortunately, that's true.  If you have a lot of full-width diagrams
or listings, you sort of end up picking your pain.  Either the layout
is choppy and difficult to manage, lines are too long, or you end up
wasting a lot of paper using wider margins and a larger font.

> You end up having to create lots of "page width" boxes anchored to
> your text.  This ends up breaking up the text columns A LOT.
> Especially if you have lots of illustrations, tables, etc.
>
> For example, putting code snippets in-line constrains the length of
> each code line severely (unless you go to the page wide boxes).
>
><shrug> So far, I've not had to resort to "rotated pages" but that's
> only because I've been aggressive at keeping tables, illustrations,
> etc. tightly bound.  :-/

I know what you mean.

--
Grant


On 2010-12-20, David Brown <david.brown@removethis.hesbynett.no> wrote:
> On 20/12/10 17:57, Grant Edwards wrote:
>> On 2010-12-20, Cesar Rabak<csrabak@bol.com.br>  wrote:
>>> Em 20/12/2010 05:34, Tim Wescott escreveu:
>>>> I know there's a few people out there who actually read the papers that I
>>>> post on my web site.
>>>>
>>>> I also know that the papers have gotten a bit ragged, and that I haven't
>>>> been maintaining them.
>>>>
>>>> So here: I've made a start.
>>>>
>>>> http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>>>>
>>>> My intent (with apologies to all of you with dial-up), is to convert the
>>>> ratty HTML documents to pdf as time permits, and in a way that leaves the
>>>> documents easily maintainable and in a form that is easy to look at from
>>>> the web or to print out, as you desire.
>>>
>>> I gave a diagonal look at the paper, as I got curious about the
>>> complaints on the font.  They look OK to me :-) I'm used to reading math
>>> articles written in CMR fonts, so perhaps I'm not a good judge on this.
>>
>> I don't see anything at all wrong with the font.  The one thing that I
>> would change is the line length.  It looks like a typical line is
>> upwards of 110 characters.  That's a bit too much to read comfortably.
>> If you want to use a font that small, and don't want wide margins, I'd
>> recommend going to a two-column format.
>
> I agree that the line width is /slightly/ too wide for comfort, but I'd
> avoid two-column format unless I were trying to save the last few trees
> on the planet.

[...]

> If we are going to nit-pick on the typography (which seems a little
> unfair, given that it is vastly better than in most papers),

Oh, I agree. The typography is certainly better than 99+ percent of
what's out there, so we are indeed picking nits.

> I still haven't got round to reading the document itself - I hope the
> contents are worth the effort in the presentation!

I've read parts of it, but the only thing I felt qualified to comment
on was the typesetting.  :)



Wow, you got quite an audience for this paper so far...

Using Acrobat 6.0 the fonts look fine to me.  I will comment on the
initial state of the doc on opening.  The bookmarks panel is open even
though there is nothing in it, and the document is sized to fit the
remaining screen space.  I suggest that you turn off the bookmarks panel
in the initial view since it is empty.  Also, I recommend that both the
page layout and magnification be set to default for the initial view.
That leaves things up to the viewer (the person, not the program).

Rick


Hi Grant,

On 12/20/2010 7:25 PM, Grant Edwards wrote:
> On 2010-12-20, D Yuniskis<not.going.to.be@seen.com>  wrote:
>
>>> I don't see anything at all wrong with the font.  The one thing that I
>>> would change is the line length.  It looks like a typical line is
>>> upwards of 110 characters.  That's a bit too much to read comfortably.
>>> If you want to use a font that small, and don't want wide margins, I'd
>>> recommend going to a two-column format.
>>
>> <frown>   I agree with your point re: line length.  But, I adopted
>> a two column format (3/8" gutter) years ago when I started my
>> "notes" series.  It *really* complicates layout.
>
> Unfortunately, that's true.  If you have a lot of full-width diagrams
> or listings, you sort of end up picking your pain.  Either the layout
> is choppy and difficult to manage, lines are too long, or you end up
> wasting a lot of paper using wider margins and a larger font.

I think there is some magic ratio of text to graphics (treating
tables as graphics) that you must exceed in order for there to
be enough text to flow around the graphics.

I think you can probably cheat -- a little -- by using 3/4 wide
tables/graphics.  I.e., break one column and let the other column
flow around it.  But, that presumes you won't have a graphic
sitting in that "other" column, too.  :<

Or, resort to tiny text, etc. in any inserts.  <frown>


On Mon, 20 Dec 2010 22:41:25 +0200, Anton Erasmus
<nobody@spam.prevent.net> wrote:

>On Mon, 20 Dec 2010 08:14:31 -0800, Tim Wescott <tim@seemywebsite.com>
>wrote:
>
>>On 12/20/2010 03:30 AM, Anton Erasmus wrote:
>>> On Mon, 20 Dec 2010 01:34:44 -0600, Tim Wescott<tim@seemywebsite.com>
>>> wrote:
>>>
>>>> I know there's a few people out there who actually read the papers that I
>>>> post on my web site.
>>>>
>>>> I also know that the papers have gotten a bit ragged, and that I haven't
>>>> been maintaining them.
>>>>
>>>> So here: I've made a start.
>>>>
>>>> http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>>>>
>>>> My intent (with apologies to all of you with dial-up), is to convert the
>>>> ratty HTML documents to pdf as time permits, and in a way that leaves the
>>>> documents easily maintainable and in a form that is easy to look at from
>>>> the web or to print out, as you desire.
>>>
>>> The fonts are terrible. They seem to be bitmap fonts and not vector.
>>> It looks like you used TeX to generate the document.  Go to
>>> http://www.truetex.com/ for links to quite a number of articles on
>>> how to use truetype fonts in TeX.
>>
>>What reader were you using?  I'm trying to figure out (a) why some
>>people think it looks peachy and some think it looks terrible (it looks
>>great on Evince), and (b) make sure I test it on enough different
>>readers that I get a true picture of what it looks like to the world at
>>large.
>
>I used Acrobat Pro 9 on Windows XP. Acroreader 9 on OpenSuse
>also looks terrible. Looks fine in Evince on the same machine.
>So it looks like it is Acrobat. Unfortunately that is one PDF Viewer
>it has to look nice on.
>

Hi,

Your new updated version looks MUCH better now. Fonts are all vector,
and it displays correctly in all the viewers I tried.

Regards
Anton Erasmus



Dombo wrote:

> [..]. On a 1920x1200 screen with Acrobat Reader the text
> isn't as comfortable to read as in most PDFs. When watching it with
> pages side-by-side (as I do with most documents) or at 100% the fonts
> are too thin/light.

Same here on Linux with Acrobat Reader 9.4.1 (1600x1200 display).
xpdf looks best (as good as Acrobat, but not that thin), evince looks
horrible if not viewed at 300% or more.

bye
Andreas
--
Andreas Hünnebeck | email: acmh@gmx.de
----- privat ---- | www  : http://www.huennebeck-online.de
Fax/Anrufbeantworter: 0721/151-284301
GPG-Key: http://www.huennebeck-online.de/public_keys/andreas.asc
PGP-Key: http://www.huennebeck-online.de/public_keys/pgp_andreas.asc



Anton Erasmus wrote:

> Your new updated version looks MUCH better now. Fonts are all vector,
> and it displays correctly in all the viewers I tried.

Same here (Linux), acrobat reader, evince and xpdf all look perfect.

bye
Andreas
--
Andreas Hünnebeck | email: acmh@gmx.de
----- privat ---- | www  : http://www.huennebeck-online.de
Fax/Anrufbeantworter: 0721/151-284301
GPG-Key: http://www.huennebeck-online.de/public_keys/andreas.asc
PGP-Key: http://www.huennebeck-online.de/public_keys/pgp_andreas.asc



On 21/12/2010 03:32, Grant Edwards wrote:
> On 2010-12-20, David Brown<david.brown@removethis.hesbynett.no>  wrote:
>> On 20/12/10 17:57, Grant Edwards wrote:
>>> On 2010-12-20, Cesar Rabak<csrabak@bol.com.br>   wrote:
>>>> Em 20/12/2010 05:34, Tim Wescott escreveu:
>>>>> I know there's a few people out there who actually read the papers that I
>>>>> post on my web site.
>>>>>
>>>>> I also know that the papers have gotten a bit ragged, and that I haven't
>>>>> been maintaining them.
>>>>>
>>>>> So here: I've made a start.
>>>>>
>>>>> http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>>>>>
>>>>> My intent (with apologies to all of you with dial-up), is to convert the
>>>>> ratty HTML documents to pdf as time permits, and in a way that leaves the
>>>>> documents easily maintainable and in a form that is easy to look at from
>>>>> the web or to print out, as you desire.
>>>>
>>>> I gave a diagonal look at the paper, as I got curious about the
>>>> complaints on the font.  They look OK to me :-) I'm used to reading math
>>>> articles written in CMR fonts, so perhaps I'm not a good judge on this.
>>>
>>> I don't see anything at all wrong with the font.  The one thing that I
>>> would change is the line length.  It looks like a typical line is
>>> upwards of 110 characters.  That's a bit too much to read comfortably.
>>> If you want to use a font that small, and don't want wide margins, I'd
>>> recommend going to a two-column format.
>>
>> I agree that the line width is /slightly/ too wide for comfort, but I'd
>> avoid two-column format unless I were trying to save the last few trees
>> on the planet.
>
> [...]
>
>> If we are going to nit-pick on the typography (which seems a little
>> unfair, given that it is vastly better than in most papers),
>
> Oh, I agree. The typography is certainly better than 99+ percent of
> what's out there, so we are indeed picking nits.
>
>> I still haven't got round to reading the document itself - I hope the
>> contents are worth the effort in the presentation!
>
> I've read parts of it, but the only thing I felt qualified to comment
> on was the typesetting.  :)
>

I'm in the same position.  I not only own copies of the TeXbook, the
METAFONTbook, and LaTeX: A Document Preparation System, but I've read
them too :-)  On the other hand, when Tim writes something on DSP, I
consider it gospel until proven otherwise by the other smart folks in
comp.dsp.



In article <QLWdne913NWJmpLQnZ2dnUVZ_v-dnZ2d@web-ster.com>,
Tim Wescott  <tim@seemywebsite.com> wrote:
>I know there's a few people out there who actually read the papers that I
>post on my web site.
>
>I also know that the papers have gotten a bit ragged, and that I haven't
>been maintaining them.
>
>So here: I've made a start.
>
>http://www.wescottdesign.com/articles/Sampling/sampling.pdf
>
>My intent (with apologies to all of you with dial-up), is to convert the
>ratty HTML documents to pdf as time permits, and in a way that leaves the
>documents easily maintainable and in a form that is easy to look at from
>the web or to print out, as you desire.

I'd really hate to see documents with a low content/volume ratio
appear on the net. Like user manuals that are bitmaps of
commercial flyers.
If html is replaced by bitmaps, the visually
impaired can no longer enlarge the fonts and braille terminals
are totally out of the question. And the net becomes unusable for
people in Ivory Coast, slowly but surely.
I hear that there are lines with 110 characters. You know that
professional typesetters aim at 60-68 characters per line.
(Look at newspapers. Look at a novel that was printed before
the computer era.) So regardless of whether it looks good, it doesn't
read well.

Please find a way to separate textual content and formatting.
Postscript can do the job. (You can add fonts, but you can also
rely on built-in fonts.) The preoccupation with how it looks
on this year's screens and this year's printers is not good.
The content is probably good twenty years from now.
Character-based content has brought forth our civilisation. Are we
going back to hieroglyphs?

I generate all my documents for the ciforth project via texinfo
in html, postscript and pdf format and make all three available
(and the source as well).
I try to keep documentation content-oriented.
Acrobat's left content window, and clickable indices are great
as is the "back" key.
I generally use Acrobat version 5 to view acrobat documents,
the last good acrobat. (Maybe we should have a clone of that program
and a downgrader of "modern" acrobat formats to be acceptable
for the acrobat 5 clone.)

These are general concerns. I appreciate that you cannot solve
these kinds of problems just on behalf of *your* documentation.

>
>--
>http://www.wescottdesign.com

Groetjes Albert

--
--
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst



On a sunny day (21 Dec 2010 15:34:45 GMT) it happened Albert van der Horst
<albert@spenarnc.xs4all.nl> wrote in <ldsb9x.ds1@spenarnc.xs4all.nl>:

>I hear that there are lines with 110 char's. You know that
>professional type setters aim at 60/68 characters/line.

I do disagree.
My current Linux rxvt terminal setting is let's see: 235 columns.
That is the window for the editor 'joe' that I type my source code in, AND this reply.
Even in Usenet there is no line size limit, although exceeding 78 columns would give the usual complainers something to write about.

Especially in C code, things move to the right with a lot of tabs,
and printf also likes space:
fprintf(stderr, "xscpc: detected a line length of more than %d characters, looks like you are not connected to a sc_pic device, check if the correct serial device is specified!\n", RX_BUFFER_SIZE - 1);

Of course you can break lines with '\', but why make things less readable.

So I'd say, especially these days with 16:9 screens, the wider the better.
Of course all bets are off if you use MS windows and small little squares to type code in.

>(Look at news papers.

Something I try to avoid, except online.
Jobs has this idea that newspapers can be replaced by ipads...

>Look at a novel that has been printed before
>the computer era.) So regardless whether it looks good, it doesn't

I prefer long lines, reads much better.
Sometimes I wonder, when I read all these complaints about people not being able to read something or other...
The times of 12 inch CRTs viewed in half-dark rooms are long past.
The time of crap products that force little windows is long past.

This is how *I* see Tim's document:
ftp://panteltje.com/pub/sampling.gif

Now, I am aware that if you display that gif in a crap browser on a crap monitor,
with crap lighting, crap settings, and crap glasses, the picture may not help much.
But if it looks bad, check any of the things marked 'crap' above, and your eyes too.

 0

On 2010-12-21, Jan Panteltje <pNaonStpealmtje@yahoo.com> wrote:
> On a sunny day (21 Dec 2010 15:34:45 GMT) it happened Albert van der Horst
><albert@spenarnc.xs4all.nl> wrote in <ldsb9x.ds1@spenarnc.xs4all.nl>:
>
>>I hear that there are lines with 110 chars. You know that
>>professional typesetters aim at 60-68 characters per line.
>
> I do disagree.

You're allowed. :)

> My current Linux rxvt terminal setting is let's see: 235 columns.
> That is the window for the editor 'joe' that I type my source code
> in, AND this reply. Even in Usenet there is no line size limit,
> although exceeding 78 columns would give the usual complainers

> Especially in C code, things move to the right with a lot of tabs,
> and printf also likes space:
>                         fprintf(stderr, "xscpc: detected a line length of more than %d characters, looks like you are not connected to a sc_pic device, check if the correct serial device is specified!\n", RX_BUFFER_SIZE - 1);
>
> Of course you can break lines with '\', but why make things less readable.

> So I'd say, especially these days with 16:9 screens, the wider the
> better.

Well, years of research seem to prove you wrong.

> Of course all bets are off if you use MS windows and small little
> squares to type code in.

Huh?

>>(Look at news papers.
>
> Something I try to avoid, except online. Jobs has this idea that
> newspapers can be replaced by ipads...

>
>>Look at a novel that has been printed before
>>the computer era.) So regardless whether it looks good, it doesn't
>
> I prefer long lines, reads much better.

Then you're in a small minority.  Cognitive research shows that long
lines are harder to read for pretty much everybody else.

--
Grant

 0

Hi Grant,

On 12/21/2010 11:48 AM, Grant Edwards wrote:

>>> Look at a novel that has been printed before
>>> the computer era.) So regardless whether it looks good, it doesn't
>>
>> I prefer long lines, reads much better.
>
> Then you're in a small minority.  Cognitive research shows that long
> lines are harder to read for pretty much everybody else.

Agreed -- though I may have some "inherent" issue with
"minimizing head/eye motion" (e.g., my workstations are
designed so I don't have to move my head to see different
monitors).

Short lines (e.g., 2 or 3 column format) read *much* quicker
(IMO) than long ones.  Just the difference between a paperback
and a "hard-bound" tome alone is significant (though I wonder
if that isn't because the actual *physical* width of the
text is smaller?).

With short line lengths, you never lose sight of the left edge
of the column.  So, moving to the next line requires less
visual (subconscious) "hunting".  E.g., when reading Braille
(two-fisted), you read the left half of the line with your
left hand and meet up with your right hand "mid-line"; the
right hand finishes the line while the left hand retraces it
and then seeks the start of the next line (row).  [If you read
one-handed -- typically right -- the other hand marks the start
of the line so the "active" hand can find it more quickly.]

Reading full-width lines slows me down -- by almost a third.
I attribute this to the fact that my eyes must "walk" across
the page, then retrace their path as they hunt for the next line.

OTOH, when reading a paperback (or columnar text), it seems like
my eyes just *dart* right and left and spend most of their time
moving smoothly down the page.

Writing code is a different thing, entirely.  You have *lots*
of visual cues to help you move through the "text" -- all that
indentation and decorative punctuation.  You subconsciously
know whether to seek an indented line, an outdented line, a
closing brace, etc.  (though I still constrain my code to ~80
columns so that it fits nicely on damn near *any* display
device -- print or otherwise)

 0

On a sunny day (Tue, 21 Dec 2010 12:11:41 -0700) it happened D Yuniskis
<not.going.to.be@seen.com> wrote in <ieqtk7$rel$1@speranza.aioe.org>:

>OTOH, when reading a paperback (or columnar text), it seems like
>my eyes just *dart* right and left and spend most of their time
>moving smoothly down the page.

What I cannot stand is text with embedded pictures, where the text
'flows' around the picture.
I mean what those DTP programs sometimes produce; and then they use
variable spacing between the WORDS to get even line lengths... terrible.

_____
In this  picture you | pic |
see    the       cat | of  |
jump  on  the  table | cat |
on  the  next   page  _____
the cat is shown eating the mouse


 0

On 2010-12-21, D Yuniskis <not.going.to.be@seen.com> wrote:

>> I prefer long lines, reads much better.
>>
>> Then you're in a small minority.  Cognitive research shows that long
>> lines are harder to read for pretty much everybody else.
>
> Agreed -- though I may have some "inherent" issue with "minimizing
> head/eye motion" (e.g., my workstations are designed so I don't have
> to move my head to see different monitors).
>
> Short lines (e.g., 2 or 3 column format) read *much* quicker (IMO)
> than long ones.  Just the difference between a paperback and a
> "hard-bound" tome alone is significant (though I wonder if that isn't
> because the actual *physical* width of the text is smaller?).

The research that I've read over the years indicates that both the
physical width (specifically the angle subtended by the line) and the
number of characters have an effect.

> With short line lengths, you never lose sight of the left edge
> of the column. So, moving to the next line requires less
> visual (subconscious) "hunting".

Right.  The longer the line, the harder it is to jump to the beginning
of the next line without getting "lost" or making false starts on the
wrong line.

--
Grant

 0

On a sunny day (Tue, 21 Dec 2010 19:12:35 GMT) it happened Jan Panteltje
<pNaOnStPeAlMtje@yahoo.com> wrote in <iequ7b$m90$1@news.datemas.de>:

PS
I prefer it this way:
http://panteltje.com/panteltje/pic/gm_pic/


 0

Hi Grant,

On 12/21/2010 12:13 PM, Grant Edwards wrote:
> On 2010-12-21, D Yuniskis<not.going.to.be@seen.com>  wrote:
>
>>> I prefer long lines, reads much better.
>>>
>>> Then you're in a small minority.  Cognitive research shows that long
>>> lines are harder to read for pretty much everybody else.
>>
>> Agreed -- though I may have some "inherent" issue with "minimizing
>> head/eye motion" (e.g., my workstations are designed so I don't have
>> to move my head to see different monitors).
>>
>> Short lines (e.g., 2 or 3 column format) read *much* quicker (IMO)
>> than long ones.  Just the difference between a paperback and a
>> "hard-bound" tome alone is significant (though I wonder if that isn't
>> because the actual *physical* width of the text is smaller?).
>
> The research that I've read over the years indicates that both
> physical width (specifically the angle subtended by the line) and the
> number of characters have an effect.

The former seems to coincide with my experience.  I had
assumed it "trumped" the latter.  (?)  I would have to
think about how/why "character count" would be significant
(barring the obvious correlation that, for a given "angle",
more characters means *smaller* characters).

>> With short line lengths, you never lose sight of the left edge
>> of the column. So, moving to the next line requires less
>> visual (subconscious) "hunting".
>
> Right.  The longer the line, the harder it is to jump to the beginning
> of the next line without getting "lost" or making false starts on the
> wrong line.

Setting the type "ragged right" actually helps with readability.
But, it doesn't "look pretty".
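In (La)TeX terms, switching a paragraph to ragged-right setting is a one-liner; a minimal sketch (the 33 pc measure is my own guess at a comfortable ~66-character column, not anything prescribed here):

```latex
\documentclass{article}
\usepackage[textwidth=33pc]{geometry} % roughly 66 chars of 10pt type
\begin{document}

{\raggedright
Set ragged right, the inter-word spaces stay fixed and the right
margin absorbs the variation, instead of the justifier stretching
spaces (or hyphenating) to force every line out to full measure.\par}

\end{document}
```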

DTP is a lot more "art" than one would think.  You *assume* it's
just a matter of getting everything to fit on the page and
making it "look pretty".  In fact, making something that is
"easy to read" (i.e., so that it doesn't interfere with the
*content*) can be very difficult.

Knuth (?) commented that once you start designing fonts, it is
hard NOT to look at EVERY font that you encounter with a
critical eye.  I.e., you stop seeing things as "words on paper"
but, instead, start inspecting individual glyphs, kerning, etc.
The same is true of DTP -- you look at why someone opted for
"big caps", or outdents, or...

Always fun to see newbies using dozens of "fonts", colors, etc.
Starts to read like:  "LeAvE $1,o0o,00o iN a PapuR bAg uNDer
tHe BRidgE at 5tH & MaIn iF yOu EVeR wANt tO sEe yOuR..." :>

 0

On 12/21/2010 12:12 PM, Jan Panteltje wrote:
> On a sunny day (Tue, 21 Dec 2010 12:11:41 -0700) it happened D Yuniskis
> <not.going.to.be@seen.com> wrote in <ieqtk7$rel$1@speranza.aioe.org>:
>
>> OTOH, when reading a paperback (or columnar text), it seems like
>> my eyes just *dart* right and left and spend most of their time
>> moving smoothly down the page.
>
> What I cannot stand is text with embedded pictures, where the text
> 'flows' around the picture.

Too often, this is done for "artistic effect" -- without good
reason.  It's as if the person designing the publication just
"discovered" some feature and is now playing with it (the same
holds true of abusing typefaces, etc.)

OTOH, there are times when it can be very advantageous to
flow text around a picture -- if the alternative is
*breaking* the text for an insignificant (small) picture.

E.g., in a newsletter I put together for a local library,
I was describing a gizmo gifted to the library that would
allow them to hang/display works of art from local artists.
The "hanger" onto which individual items are hung is
insignificant.  But, to describe how the "system" worked,
it was necessary to include a photo of it.  I included it
*exactly* like your example below as this gave it *only*
the space it "deserved" yet allowed it to be included.

For an example of this, see the center of page 4 at
http://www.fkbcl.org/f/Spring_2009_Newsletter.pdf

Also note the fancy photos!  The software that does this
is BFM as far as I am concerned!  I'm sure there is a
"simple" explanation for how it works but it always
amazes me how "one click simple" it is to use!  Though
I need to learn how better to *take* the photos that
it pieces together -- i.e., the room depicted in the
cover photo is rectangular!  :<

For an example of the advantage of including photos
at high resolution, zoom in on the individual pictures
of the artworks on page 4 -- you can even see the texture
of the "stucco" walls!

(For an example of NOT using high resolution images,
repeat the exercise for the book covers on page 5!)

[note to Tim (OP): decide if it is worthwhile for your
articles to open with a ToC (bookmarks) on the left,
as in this example, or not.  E.g., I've found that
opening a document in "facing pages" mode is usually
a *bad* idea -- as the document "always" starts with
a recto page ... *wasting* half of the screen!  You can
also see that *only* the fonts used in the publication
are embedded in it -- *AND* that the "special" fonts
(e.g., the "display" font used for article titles)
are *deliberately* included -- since I didn't expect
most systems to have those available]

Note that the way you lay out a technical publication differs
from the way you lay out something like this newsletter.
Different audiences, different emotional connections
(in this case, trying to create interest *in* the library).

> I mean what those DTP programs sometimes
> produce, and then they use variable spacing between the WORDS
> to get even line length... terrible.

Things are much better with modern DTP tools.  In years past,
to pad a line to full length, you had to insert *whole*
spaces.  For fixed width fonts (e.g., courier), this can be
"like fingernails scraping on a chalkboard".

Note that most tools will also play games balancing column
lengths (if you so choose).  Some will even use *variable*
line SPACING to achieve this!

> In this  picture you | pic |
> see    the       cat | of  |
> jump  on  the  table | cat |
> on  the  next   page  _____
> the cat is shown eating the mouse

 0

On a sunny day (Tue, 21 Dec 2010 13:21:46 -0700) it happened D Yuniskis
<not.going.to.be@seen.com> wrote in <ier1nk$5b1$1@speranza.aioe.org>:

>OTOH, there are times when it can be very advantageous to
>flow text around a picture -- if the alternative is
>*breaking* the text for an insignificant (small) picture.
>
>E.g., in a newsletter I put together for a local library,
>I was describing a gizmo gifted to the library that would
>allow them to hang/display works of art from local artists.
>The "hanger" onto which individual items are hung is
>insignificant.  But, to describe how the "system" worked,
>it was necessary to include a photo of it.  I included it
>*exactly* like your example below as this gave it *only*
>the space it "deserved" yet allowed it to be included.
>
>For an example of this, see the center of page 4 at
>http://www.fkbcl.org/f/Spring_2009_Newsletter.pdf

Yes, but this is done correctly, it even emphasises the word 'hangers'.

My compliments for this newsletter, best I have seen in a very long
time, if not ever.

>Also note the fancy photos!  The software that does this
>is BFM as far as I am concerned!  I'm sure there is a
>"simple" explanation for how it works but it always
>amazes me how "one click simple" it is to use!  Though
>I need to learn how better to *take* the photos that
>it pieces together -- i.e., the room depicted in the
>cover photo is rectangular!  :<
>
>For an example of the advantage of including photos
>at high resolution, zoom in on the individual pictures
>of the artworks on page 4 -- you can even see the texture
>of the "stucco" walls!

xpdf goes to 400%, I can zoom some more by changing to a lower X
resolution :-)
Yes, I see the texture.

>(For an example of NOT using high resolution images,
>repeat the exercise for the book covers on page 5!)

Yes, I see the jpeg artefacts in Mark Twain's

>[note to Tim (OP): decide if it is worthwhile for your
>articles to open with a ToC (bookmarks) on the left,
>as in this example, or not.  E.g., I've found that
>opening a document in "facing pages" mode is usually
>a *bad* idea -- as the document "always" starts with
>a recto page ... *wasting* half of the screen!  You can
>also see that *only* the fonts used in the publication
>are embedded in it -- *AND* that the "special" fonts
>(e.g., the "display" font used for article titles)
>are *deliberately* included -- since I didn't expect
>most systems to have those available]
>
>Note that the way you lay out a technical publication differs
>from the way you lay out something like this newsletter.
>Different audiences, different emotional connections
>(in this case, trying to create interest *in* the library)
>
>> I mean what those DTP programs sometimes
>> produce, and then they use variable spacing between the WORDS
>> to get even line length... terrible.
>
>Things are much better with modern DTP tools.  In years past,
>to pad a line to full length, you had to insert *whole*
>spaces.  For fixed width fonts (e.g., courier), this can be
>"like fingernails scraping on a chalkboard".
>
>Note that most tools will also play games balancing column
>lengths (if you so choose).  Some will even use *variable*
>line SPACING to achieve this!
>
>> In this  picture you | pic |
>> see    the       cat | of  |
>> jump  on  the  table | cat |
>> on  the  next   page  _____
>> the cat is shown eating the mouse

Yes, when I ever have the time I may try to find a good DTP program
for Linux that actually installs (tried some in the past without
much luck).

At very old age, if I get that far, when all I can do is press a key,
maybe I should write a book.
Although these days it is probably 'twitter'.
:-)

 0

Hi Jan,

On 12/21/2010 1:25 PM, Jan Panteltje wrote:
> On a sunny day (Tue, 21 Dec 2010 13:21:46 -0700) it happened D Yuniskis
> <not.going.to.be@seen.com> wrote in <ier1nk$5b1$1@speranza.aioe.org>:
>
>> OTOH, there are times when it can be very advantageous to
>> flow text around a picture -- if the alternative is
>> *breaking* the text for an insignificant (small) picture.
>>
>> E.g., in a newsletter I put together for a local library,
>> I was describing a gizmo gifted to the library that would
>> allow them to hang/display works of art from local artists.
>> The "hanger" onto which individual items are hung is
>> insignificant.  But, to describe how the "system" worked,
>> it was necessary to include a photo of it.  I included it
>> *exactly* like your example below as this gave it *only*
>> the space it "deserved" yet allowed it to be included.
>>
>> For an example of this, see the center of page 4 at
>> http://www.fkbcl.org/f/Spring_2009_Newsletter.pdf
>
> Yes, but this is done correctly, it even emphasises the word 'hangers'.
>
> My compliments for this newsletter, best I have seen in a very long
> time, if not ever.

<grin>  This is what I was "competing" with:

Before:
http://www.fkbcl.org/f/Fall_2009_Newsletter.pdf

And "since":
http://www.fkbcl.org/f/Spring_2010_Newsletter.pdf

If you actually *read* them (with an eye towards spelling,
punctuation, grammar and content), you'll see big "differences".
<wink>

>> Also note the fancy photos!  The software that does this
>> is BFM as far as I am concerned!  I'm sure there is a
>> "simple" explanation for how it works but it always
>> amazes me how "one click simple" it is to use!  Though
>> I need to learn how better to *take* the photos that
>> it pieces together -- i.e., the room depicted in the
>> cover photo is rectangular!  :<
>>
>> For an example of the advantage of including photos
>> at high resolution, zoom in on the individual pictures
>> of the artworks on page 4 -- you can even see the texture
>> of the "stucco" walls!
>
> xpdf goes to 400%, I can zoom some more by changing to a lower X
> resolution :-)
> Yes, I see the texture.

It seemed "appropriate" to give the reader/viewer that ability
wrt the photos.  E.g., you can almost read the label on the
"hanger"!

I had argued that making the on-line version of the newsletter
MORE APPEALING than the "print" version could help migrate
people away from the print version (which would cut costs).
This is one of the things that gave the "reader" more value
in this electronic form than its print counterpart.

>> (For an example of NOT using high resolution images,
>> repeat the exercise for the book covers on page 5!)
>
> Yes, I see the jpeg artefacts in Mark Twain's

I didn't hunt down the individual titles and take photos of
their covers.  Instead, I lifted the images from the library's
on-line catalogue.  There was enough detail to *suggest* the
title of each book (note they are explicitly listed in the
article) which I thought was enough for that "application".
(I assembled the document remotely -- while I was attending
to some family problems "back east" -- so I didn't have access
to any of these physical items.)

>> [note to Tim (OP): decide if it is worthwhile for your
>> articles to open with a ToC (bookmarks) on the left,
>> as in this example, or not.  E.g., I've found that
>> opening a document in "facing pages" mode is usually
>> a *bad* idea -- as the document "always" starts with
>> a recto page ... *wasting* half of the screen!  You can
>> also see that *only* the fonts used in the publication
>> are embedded in it -- *AND* that the "special" fonts
>> (e.g., the "display" font used for article titles)
>> are *deliberately* included -- since I didn't expect
>> most systems to have those available]
>>
>> Note that the way you lay out a technical publication differs
>> from the way you lay out something like this newsletter.
>> Different audiences, different emotional connections
>> (in this case, trying to create interest *in* the library)
>>
>>> I mean what those DTP programs sometimes
>>> produce, and then they use variable spacing between the WORDS
>>> to get even line length... terrible.
>>
>> Things are much better with modern DTP tools.  In years past,
>> to pad a line to full length, you had to insert *whole*
>> spaces.  For fixed width fonts (e.g., courier), this can be
>> "like fingernails scraping on a chalkboard".
>>
>> Note that most tools will also play games balancing column
>> lengths (if you so choose).  Some will even use *variable*
>> line SPACING to achieve this!
>>
>>> In this  picture you | pic |
>>> see    the       cat | of  |
>>> jump  on  the  table | cat |
>>> on  the  next   page  _____
>>> the cat is shown eating the mouse
>
> Yes, when I ever have the time I may try to find a good DTP program
> for Linux that actually installs (tried some in the past without
> much luck).

While I am an advocate of FOSS (though don't run Linux), I am
not a zealot.  I *gladly* avail myself of the tools that are
available under (e.g.) Windows.  Especially if they make my life
easier or my "product" better!

> At very old age, if I get that far, when all I can do is press a key,
> maybe I should write a book.
> Although these days it is probably 'twitter'.
> :-)

<frown>

 0

On a sunny day (Tue, 21 Dec 2010 13:50:30 -0700) it happened D Yuniskis
<not.going.to.be@seen.com> wrote in <ier3dg$96o$1@speranza.aioe.org>:

><grin>  This is what I was "competing" with:
>
>Before:
>http://www.fkbcl.org/f/Fall_2009_Newsletter.pdf

# wget http://www.fkbcl.org/f/Fall_2009_Newsletter.pdf
--21:44:41--  http://www.fkbcl.org/f/Fall_2009_Newsletter.pdf
           => `Fall_2009_Newsletter.pdf'
Resolving www.fkbcl.org... 69.90.45.41
Connecting to www.fkbcl.org|69.90.45.41|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
21:44:42 ERROR 404: Not Found.

>While I am an advocate of FOSS (though don't run Linux), I am
>not a zealot.  I *gladly* avail myself of the tools that are
>available under (e.g.) Windows.  Especially if they make my life
>easier or my "product" better!

Yea, but I burned my XP disk, as it wasted too much time.
So far Linux has always helped me out; I did not want to put too much
time into that DTP stuff back then, I write my web pages in html with
a text editor.

>> At very old age, if I get that far, when all I can do is press a key,
>> maybe I should write a book.
>> Although these days it is probably 'twitter'.
>> :-)
>
><frown>

Yes, I frown on that too, but the younger generation seems to use it
a lot.

 0

Hi Jan,

On 12/21/2010 12:18 PM, Jan Panteltje wrote:
> On a sunny day (Tue, 21 Dec 2010 19:12:35 GMT) it happened Jan Panteltje
> <pNaOnStPeAlMtje@yahoo.com> wrote in <iequ7b$m90$1@news.datemas.de>:
>
> PS
> I prefer it this way:
> http://panteltje.com/panteltje/pic/gm_pic/

That forces you to look *past* a photo for the next bit of text.
I don't "interrupt" the reader unless the interruption is
"worthwhile".  I.e., a reader should be able to keep reading
and ignore photos, illustrations, tables, etc. if they aren't
"significant enough".

For example, I like to try to arrange photos/tables/etc.
at the top or bottom of a column so the user can conveniently
ignore them.  OTOH, doing this rigidly results in a boring
(visually) presentation.

In the newsletter I mentioned (elsewhere), for example, I
took liberties with which "articles" were on each page.
And, did some significant editing to those articles to
cause images to fall where I wanted them.

When I do technical presentations, I try to silently impose
a structure on each page so the reader knows where to look
for things.  E.g., if I have "screenshots" in the document,
then I might opt to "always" put them in the same relative
position on a page -- so the user's "muscle memory" directs
him to the text or image, as appropriate (without having to
"hunt").

 0

On 21/12/10 21:21, D Yuniskis wrote:
> On 12/21/2010 12:12 PM, Jan Panteltje wrote:
>> I mean what those DTP programs sometimes
>> produce, and then they use variable spacing between the WORDS
>> to get even line length... terrible.
>
> Things are much better with modern DTP tools.  In years past,
> to pad a line to full length, you had to insert *whole*
> spaces.  For fixed width fonts (e.g., courier), this can be
> "like fingernails scraping on a chalkboard".
>
> Note that most tools will also play games balancing column
> lengths (if you so choose).  Some will even use *variable*
> line SPACING to achieve this!

I can see the point of that - by reducing the vertical line spacing,
you are reducing the area of the large space and thus its visual
effect.  I am not sure I like it, however - I think the line spacing
change is distracting and the space is still too big.

Sometimes there is nothing that can be done to make the typesetting
look good.  The answer in a case like this is to slightly re-write
the text until you get a good fit - not to massage the spacing to
give a slightly less bad fit.

The tool in question (FrameMaker) seems to do a better job than word
processors, and does a reasonable job of the hyphenation, but it has
a lot to learn from TeX.  The typesetter (person rather than program)
has missed a few points too - though again, it is typeset far better
than most publications these days, and it looks very nice.

 0

On 21/12/10 20:55, D Yuniskis wrote:
> Hi Grant,
>
> On 12/21/2010 12:13 PM, Grant Edwards wrote:
>> On 2010-12-21, D Yuniskis<not.going.to.be@seen.com>  wrote:
>>
>>>> I prefer long lines, reads much better.
>>>>
>>>> Then you're in a small minority.  Cognitive research shows that long
>>>> lines are harder to read for pretty much everybody else.
>>>
>>> Agreed -- though I may have some "inherent" issue with "minimizing
>>> head/eye motion" (e.g., my workstations are designed so I don't have
>>> to move my head to see different monitors).
>>>
>>> Short lines (e.g., 2 or 3 column format) read *much* quicker (IMO)
>>> than long ones.  Just the difference between a paperback and a
>>> "hard-bound" tome alone is significant (though I wonder if that isn't
>>> because the actual *physical* width of the text is smaller?).
>>
>> The research that I've read over the years indicates that both
>> physical width (specifically the angle subtended by the line) and the
>> number of characters have an effect.
>
> The former seems to coincide with my experience.  I had
> assumed it "trumped" the latter.  (?)  I would have to
> think about how/why "character count" would be significant
> (barring the obvious correlation that, for a given "angle",
> more characters means *smaller* characters).

The basic physiological issue here is the time taken to read the line
- if there is too much in it, then you have "forgotten" where the
line started by the time you want to move to the next line, even if
you can still "see" the start of the line.  So both the subtended
angle (which is obviously dependent on the reading distance) and the
line's contents matter.

It also depends somewhat on the type of text.  If there are a lot of
long and technical words, it's important that the line is long enough
that these are not often broken.  But if the text is slow to read, or
boring, lines should be short or you will again lose track of the
line starts.

>>> With short line lengths, you never lose sight of the left edge
>>> of the column. So, moving to the next line requires less
>>> visual (subconscious) "hunting".
>>
>> Right.  The longer the line, the harder it is to jump to the beginning
>> of the next line without getting "lost" or making false starts on the
>> wrong line.
>
> Setting the type "ragged right" actually helps with readability.
> But, it doesn't "look pretty".

Ragged right is certainly better than rampant hyphenation or wildly
varying spacing to justify the text.  Good typesetters for short-line
publications like newspapers will work with the editors to change
text to fit, so that it can be readable /and/ pretty.

> DTP is a lot more "art" than one would think.  You *assume* it's
> just a matter of getting everything to fit on the page and
> making it "look pretty".  In fact, making something that is
> "easy to read" (i.e., so that it doesn't interfere with the
> *content*) can be very difficult.

There is also a lot more "science" to typesetting than most people
think.  Even if you don't want to use TeX or its friends, I'd
thoroughly recommend reading the TeXbook to see how Knuth thinks
about typesetting.

> Knuth (?) commented that once you start designing fonts, it is
> hard NOT to look at EVERY font that you encounter with a
> critical eye.  I.e., you stop seeing things as "words on paper"
> but, instead, start inspecting individual glyphs, kerning, etc.
> The same is true of DTP -- you look at why someone opted for
> "big caps", or outdents, or...

I know the feeling.  When you've used TeX or LaTeX, and especially
if you've read the books explaining the background, it is hard not
to think about it.  I can usually spot a (La)TeX'ed document
immediately - there is just far more attention to the small details
than you get with other programs.

It is often easy to see when someone has used a professional program
like Frame Maker rather than an amateurish word processor, but the
difference is not as great unless it is done by a very skilled
typesetter.

I am often asked to proof-read documents at work - it is sometimes
hard to concentrate on the relevant issues rather than glaring
double-space errors or font issues.

> Always fun to see newbies using dozens of "fonts", colors, etc.
> Starts to read like:  "LeAvE $1,o0o,00o iN a PapuR bAg uNDer
> tHe BRidgE at 5tH & MaIn iF yOu EVeR wANt tO sEe yOuR..." :>


 0

On 2010-12-21, D Yuniskis <not.going.to.be@seen.com> wrote:

> Always fun to see newbies using dozens of "fonts", colors, etc.
> Starts to read like:  "LeAvE $1,o0o,00o iN a PapuR bAg uNDer > tHe BRidgE at 5tH & MaIn iF yOu EVeR wANt tO sEe yOuR..." :> And then along came HTML and they could put all that mess on top of a nice busy background image! -- Grant   0 Reply Grant 12/21/2010 9:46:24 PM On 2010-12-21, David Brown <david.brown@removethis.hesbynett.no> wrote: > There is also a lot more "science" to typesetting than most people > think. Even if you don't want to use TeX or its friends, I'd thoroughly > recommend reading the TeXbook to see how Knuth thinks about typesetting. > >> Knuth (?) commented that once you start designing fonts, it is >> hard NOT to look at EVERY font that you encounter with a >> critical eye. I.e., you stop seeing things as "words on paper" >> but, instead, start inspecting individual glyphs, kerning, etc. >> The same is true of DTP -- you look at why someone opted for >> "big caps", or outdents, or... > > I know the feeling. When you've used TeX or LaTeX, and especially if > you've read the books explaining the background, it is hard not to think > about it. I can usually spot a (La)TeX'ed document immediately - there > is just far more attention to the small details than you get with other > programs. With (La)TeX, the defaults pretty much always produce a good-looking document. And, if you're not happy with the results, you can always tweak things to make it worse. [At least that's usually what happens when I decide I want to start changing things.] 
-- Grant   0 Reply Grant 12/21/2010 9:53:24 PM On a sunny day (Tue, 21 Dec 2010 13:57:09 -0700) it happened D Yuniskis <not.going.to.be@seen.com> wrote in <ier3pv$a0q$1@speranza.aioe.org>: >Hi Jan, > >On 12/21/2010 12:18 PM, Jan Panteltje wrote: >> On a sunny day (Tue, 21 Dec 2010 19:12:35 GMT) it happened Jan Panteltje >> <pNaOnStPeAlMtje@yahoo.com> wrote in<iequ7b$m90$1@news.datemas.de>: >> >> PS >> I prefer it this way: >> http://panteltje.com/panteltje/pic/gm_pic/ > >That forces you to look *past* a photo for the next bit of text. >I don't "interrupt" the reader unless the interruption is >"worthwhile". I.e., a reader should be able to keep reading >and ignore photos, illustrations, tables, etc. if they aren't >"significant enough". Yes, I should perhaps just have provided a link to the big pictures, and put the text next to it. Something to consider for the next project (that is already progressing in a nice way): ftp://panteltje.com/pub/sc_pic/xscpc.gif Uses a photo multiplier tube and scintillation crystal: ftp://panteltje.com/pub/PMT/PMT_1_img_2435.jpg Once the project is finished, I will use your suggestions to make that web page. Then there is already the next project materialising.. but that is a VERY cold project, super conducting cold actually. >For example, I like to try to arrange photos/tables/etc. >at the top or bottom of a column so they user can conveniently >ignore them. OTOH, doing this rigidly, results in a boring >(visually) presentation. > >In the newsletter I mentioned (elsewhere), for example, I >took liberties with which "articles" were on each page. >And, did some significant editing to those articles to >cause images to fall where I wanted them. > >When I do technical presentations, I try to silently impose >a structure on each page so the reader knows where to look >for things. 
E.g., if I have "screenshots" in the document, >then I might opt to "always" put them in the same relative >position on a page -- so the user's "muscle memory" directs >him to the text or image, as appropriate (without having to >"hunt") Yes, on my website I sort of start from the point of view that if somebody is REALLY interested they will need all data they can get. So I try to provide the essential info. And I assume some real knowledge of how to use it. It is not a training or electronics education, something like that. Although I just got a letter from the chamber of commerce that I was registered as a business training centre, can you imagine. Wonder what gave them that idea :-)   0 Reply Jan 12/21/2010 10:13:31 PM Hi Jan, On 12/21/2010 3:13 PM, Jan Panteltje wrote: > On a sunny day (Tue, 21 Dec 2010 13:57:09 -0700) it happened D Yuniskis > <not.going.to.be@seen.com> wrote in<ier3pv$a0q$1@speranza.aioe.org>: > >> On 12/21/2010 12:18 PM, Jan Panteltje wrote: >>> On a sunny day (Tue, 21 Dec 2010 19:12:35 GMT) it happened Jan Panteltje >>> <pNaOnStPeAlMtje@yahoo.com> wrote in<iequ7b$m90$1@news.datemas.de>: >>> >>> PS >>> I prefer it this way: >>> http://panteltje.com/panteltje/pic/gm_pic/ >> >> That forces you to look *past* a photo for the next bit of text. >> I don't "interrupt" the reader unless the interruption is >> "worthwhile". I.e., a reader should be able to keep reading >> and ignore photos, illustrations, tables, etc. if they aren't >> "significant enough". > > Yes, I should perhaps just have provided a link to the big pictures, > and put the text next to it. For HTML, I'd put a reduced resolution/size image "in-line" with the associated text. And, a link *on* the image so you could "click for bigger picture". [I consider this too much work -- which is why I don't create HTML documents! :> ] While I despise PDF's, they are an effective way of controlling content *and* presentation.
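The thumbnail-plus-link pattern described above is only a few lines of markup. A minimal sketch of what gets generated (filenames here are invented for illustration, not taken from the thread):

```python
def thumbnail_link(full_image: str, thumb_image: str, alt: str) -> str:
    """Return an HTML fragment: a small inline image that links to the
    full-resolution one -- the "click for bigger picture" idiom."""
    return (f'<a href="{full_image}">'
            f'<img src="{thumb_image}" alt="{alt}"></a>')

# Hypothetical filenames, just to show the shape of the output:
print(thumbnail_link("gm_pic_big.jpg", "gm_pic_small.jpg", "Geiger counter"))
```

The full-size image is fetched only when the reader asks for it, which is the whole point for dial-up visitors.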
["No matter how much you dislike pickles, they are, after all, the only thing you can do with CUcumbers!" (a bizarre reference that I suspect few will recognize)]   0 Reply D 12/21/2010 10:38:53 PM On 12/21/2010 03:21 PM, D Yuniskis wrote: > [...] > For an example of this, see the center of page 4 at > http://www.fkbcl.org/f/Spring_2009_Newsletter.pdf > also note the fancy photos! Wow, that looks fantastic! I admire a piece of good typesetting, at least as far as my eye can see, and this close to the top of my scale. > The software that does this is BFM as far as I am concerned! BFM? -- Randy Yates % "My Shangri-la has gone away, fading like Digital Signal Labs % the Beatles on 'Hey Jude'" yates@digitalsignallabs.com % http://www.digitalsignallabs.com % 'Shangri-La', *A New World Record*, ELO   0 Reply Randy 12/21/2010 10:52:31 PM Hi Jan, On 12/21/2010 1:49 PM, Jan Panteltje wrote: > On a sunny day (Tue, 21 Dec 2010 13:50:30 -0700) it happened D Yuniskis > <not.going.to.be@seen.com> wrote in<ier3dg$96o$1@speranza.aioe.org>: > >> <grin> This is what I was "competing" with: >> >> Before: >> http://www.fkbcl.org/f/Fall_2009_Newsletter.pdf > > # wget http://www.fkbcl.org/f/Fall_2009_Newsletter.pdf > --21:44:41-- http://www.fkbcl.org/f/Fall_2009_Newsletter.pdf > => all_2009_Newsletter.pdf' > Resolving www.fkbcl.org... 69.90.45.41 > Connecting to www.fkbcl.org|69.90.45.41|:80... connected. > HTTP request sent, awaiting response... 404 Not Found > 21:44:42 ERROR 404: Not Found. <Grrrr> My fault cutting and pasting. http://www.fkbcl.org/f/Fall_2008_Newsletter.pdf (Fall 200*8* precedes Spring 2009 :< I was no longer with the group in Fall 2009 -- which is why the *next* newsletter didn't show up until Spring 2010!) >> While I am an advocate of FOSS (though don't run Linux), I am >> not a zealot. I *gladly* avail myself of the tools that are >> available under (e.g.) Windows. Especially if they make my life >> easier or my "product" better! 
> > Yea, but I burned my XP disk, as it wasted too much time. > So far Linux has always helped me out, did not want to put > too much time into that DTP stuff back then, > I write my web pages in html with a text editor. The only "big" DTP (i.e., hundreds of pages) I've done were with Ventura (starting in the "GEM" days). I could coerce VP to do a lot more "clever" things -- but, it required more hand-holding (though I suspect some of that had to do with the state-of-the-art at the time). When Corel started mucking with VP (in particular, when they replaced the TEXT files that VP used with "wacko proprietary format" files -- which were impossible to "patch"), I went looking for a replacement tool (e.g., Quark, Frame, etc.). FrameMaker doesn't let me play all the layout tricks that VP could be coerced into doing. But, it adds some other capabilities that VP didn't have (e.g., a much nicer equation editor). For small publications (e.g., newsletter, my "notes" series, etc.) it gets out of my way and lets me get the job done very quickly. E.g., most of the time spent on that newsletter (discounting taking pictures, rounding up articles, etc.) was spent editing other people's prose (it is amazing how uninspired some folks are as writers :< ) and gluing together the panoramic photos. The actual "publishing" was probably just a few hours. >>> At very old age, if I get that far, when all I can do is press a key, >>> maybe I should write a book. >>> Although these days it is probably 'twitter'. >>> :-) >> >> <frown> > > Yes, I frown on that too, but the younger generation seems to use it a lot. <shrug> We'll see what they say later in life (I take privacy issues considerably more seriously than the young-uns)   0 Reply D 12/21/2010 10:52:40 PM On a sunny day (Tue, 21 Dec 2010 15:52:40 -0700) it happened D Yuniskis <not.going.to.be@seen.com> wrote in <ieraii$p7j$1@speranza.aioe.org>: ><Grrrr> My fault cutting and pasting.
> >http://www.fkbcl.org/f/Fall_2008_Newsletter.pdf > >(Fall 200*8* precedes Spring 2009 :< I was no longer with the >group in Fall 2009 -- which is why the *next* newsletter didn't >show up until Spring 2010!) Looks like a scan to me, page 2 text is not horizontal, grey background, no colors. Yours is a zillion times nicer. >The only "big" DTP (i.e., hundreds of pages) I've done were with >Ventura (starting in the "GEM" days). I could coerce VP to do >a lot more "clever" things -- but, it required more hand-holding >(though I suspect some of that had to do with the state-of-the-art >at the time). > >When Corel started mucking with VP (in particular, when they >replaced the TEXT files that VP used with "wacko proprietary >format" files -- which were impossible to "patch"), I went >looking for a replacement tool (e.g., Quark, Frame, etc.). Corel did very strange things; they once made a Linux distro, and I bought it. In that distro they redirected all error messages to /dev/zero, so if something did not work you would not know about it. It was on my system for a VERY short time (hours), before it was replaced by Suze IIRC.
>>>> :-) >>> >>> <frown> >> >> Yes, I frown on that too, but the younger generation seems to use it a lot. > ><shrug> We'll see what they say later in life (I take privacy >issues considerably more seriously than the young-uns) These days we are all an open book to gov :-)   0 Reply Jan 12/21/2010 11:00:01 PM Hi David, On 12/21/2010 2:06 PM, David Brown wrote: > On 21/12/10 21:21, D Yuniskis wrote: >> On 12/21/2010 12:12 PM, Jan Panteltje wrote: >>> On a sunny day (Tue, 21 Dec 2010 12:11:41 -0700) it happened D Yuniskis >>> I mean what those DTP programs sometimes >>> produce, and then they use variable spacing between the WORDS >>> to get even line length... terrible. >> >> Things are much better with modern DTP tools. In years past, >> to pad a line to full length, you had to insert *whole* >> spaces. For fixed-width fonts (e.g., courier), this can be >> "like fingernails scraping on a chalkboard". >> >> Note that most tools will also play games balancing column >> lengths (if you so choose). Some will even use *variable* >> line SPACING to achieve this! > > I can see the point of that - by reducing the vertical line spacing, you > are reducing the area of the large space and thus its visual effect. I > am not sure I like it, however - I think the line spacing change is > distracting and the space is still too big. I think feathering is intended for more "artistic" presentations. E.g., "I have a two column-inch box that I want 'THIS BIT OF TEXT' to *fill* -- stretch/compress it as necessary to achieve that goal". So, in multicolumn layouts, it just looks silly -- you don't want to (usually) "spread out" (vertically) one column just so it is a certain vertical size (there are other tricks that FM will employ to balance the columns without feathering). Instead, you usually want to "synchronize" the text baselines of adjacent columns (otherwise, the visual effect is very unsettling).
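The crude whole-space padding described a few posts back is easy to demonstrate. This is NOT how TeX does it (TeX distributes fractional "glue" and optimizes breaks over the whole paragraph); it is a sketch of the naive line-at-a-time approach the posters are criticizing:

```python
def justify(words, width):
    """Greedy word wrap, then pad each line (except the last) to full
    width by inserting *whole* extra spaces between words -- the crude
    approach criticized in the thread."""
    lines, line = [], []
    for w in words:
        if line and len(" ".join(line + [w])) > width:
            lines.append(line)
            line = []
        line.append(w)
    lines.append(line)

    out = []
    for ln in lines[:-1]:
        gaps = len(ln) - 1
        if gaps == 0:               # a single long word can't be padded
            out.append(ln[0])
            continue
        pad = width - len(" ".join(ln))      # spaces still to distribute
        extra, rem = divmod(pad, gaps)       # whole spaces per gap
        out.append("".join(
            w + " " * (1 + extra + (1 if i < rem else 0))
            for i, w in enumerate(ln[:-1])) + ln[-1])
    out.append(" ".join(lines[-1]))          # last line stays ragged
    return out

for line in justify("the quick brown fox jumps over the lazy dog".split(), 16):
    print(repr(line))   # note the uneven double spaces in justified lines
```

With a 16-character measure, the first two lines come out exactly 16 wide but with visibly unequal gaps -- the "fingernails on a chalkboard" effect, which gets worse the narrower the column.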
Recall that your document is not one long string of characters (with images/tables interspersed). Rather, it may be composed of many *discrete* "flows" (FM-speak). E.g., when I want to paste a lengthy bit of code into an article, I create a separate "flow" for that code -- with its own layout rules. Then, I tell FM which "frames" the flow should occupy. So, I can start the flow in a 3"x4" box on page 1 and continue it in a 3"x6" box on page 7 (if that made sense). With this in mind, you can see how some publications (e.g., advertisements!) might want to use things like feathering in particular "frames". > Sometimes there is nothing that can be done to make the typesetting look > good. The answer in a case like this is to slightly re-write the text > until you get a good fit - not to massage the spacing to give a slightly > less bad fit. You use both techniques. You can adjust the kerning and spacing within lines (and character sequences). E.g., highlight a section of text and then interactively squeeze or stretch the characters within that highlighted region. The downside of short line lengths -- especially in technical publications -- is that you tend to end up with "big words" (and/or words that are glued together like "and/or" :> ). This doesn't give you many places to "insert whitespace". So, if you *don't* set it "ragged right", you frequently end up with things like: |the counterclockwise rotation| Your only remedy is to let some other word(s) onto the line and hyphenate them (I *really* dislike hyphenation! With narrow columns, it becomes VERY frequent) > The tool in question (FrameMaker) seems to do a better job than word Word processors are crap. I have yet to find a need for one *other* than writing single-page correspondence (I do even that in FrameMaker since I don't want to deal with yet another program that does "something similar"). > processors, and does a reasonable job of the hyphenation, but it has a > lot to learn from TeX.
The typesetter (person rather than program) has Ah, well... I'll gladly refund the money that I (wasn't) paid! :> > missed a few points too - though again, it is typeset far better than > most publications these days, and it looks very nice.   0 Reply D 12/21/2010 11:15:40 PM Hi David, On 12/21/2010 2:19 PM, David Brown wrote: [attributions elided] >>>> With short line lengths, you never lose sight of the left edge >>>> of the column. So, moving to the next line requires less >>>> visual (subconscious) "hunting". >>> >>> Right. The longer the line, the harder it is to jump to the beginning >>> of the next line without getting "lost" or making false starts on the >>> wrong line. >> >> Setting the type "ragged right" actually helps with readability. >> But, it doesn't "look pretty". > > Ragged right is certainly better than rampant hyphenation or wildly > varying spacing to justify the text. Good typesetters for short-line > publications like newspapers will work with the editors to change text > to fit, so that it can be readable /and/ pretty. The problem is that rewriting takes a disproportional amount of time. When I write technical documentation, I often DELIBERATELY use boilerplate prose. The point being to let the user ignore the repetitious aspects of the document and concentrate on that which *differs* from one "topic" to the next. E.g., like consistently-written man(1) pages (you don't even see the cruft parts of the sentences that just serve to provide "proper grammar" for the *MEAT* of the sentence). >> DTP is a lot more "art" than one would think. You *assume* it's >> just a matter of getting everything to fit on the page and >> making it "look pretty". In fact, making something that is >> "easy to read" (i.e., so that it doesn't interfere with the >> *content*) can be very difficult. > > There is also a lot more "science" to typesetting than most people > think. 
Even if you don't want to use TeX or its friends, I'd thoroughly > recommend reading the TeXbook to see how Knuth thinks about typesetting. When I did my first DTP project, I canvassed the state of the art for tools, texts, etc. In addition to _The TeXbook_ (volumes A through E, hard copy), I have an assortment of texts on the (legacy) printing process, composition, style guides, etc. >> Knuth (?) commented that once you start designing fonts, it is >> hard NOT to look at EVERY font that you encounter with a >> critical eye. I.e., you stop seeing things as "words on paper" >> but, instead, start inspecting individual glyphs, kerning, etc. >> The same is true of DTP -- you look at why someone opted for >> "big caps", or outdents, or... > > I know the feeling. When you've used TeX or LaTeX, and especially if > you've read the books explaining the background, it is hard not to think > about it. I can usually spot a (La)TeX'ed document immediately - there > is just far more attention to the small details than you get with other > programs. It is often easy to see when someone has used a professional > program like Frame Maker rather than an amateurish word processor, but > the difference is not as great unless it is done by a very skilled Chase down the "before" and "since" links I mentioned in one of these posts to see how "others" have tackled the same newsletter. <frown> It's disturbing (in any field) when people *think* something is "easy" just because it *looks* like it SHOULD be easy! I am always amazed at the various parts of my brain that are "missing". E.g., I can't draw a living creature. But, I can draw landscapes, buildings, plants, etc. in very good detail (proper multipoint perspective, etc.). I.e., that portion of a human brain that can draw people is MISSING in my case! :> OTOH, if you were to show me a drawing of a person, I could immediately tell you what was wrong with the drawing and how to correct it!
(I suspect a similar phenomenon explains why so many people can't "design from scratch" -- but can patch till the cows come home!) > typesetter. I am often asked to proof-read documents at work - it is > sometimes hard to concentrate on the relevant issues rather than glaring > double-space errors or font issues. For me, it is spelling (though I don't use a spell-checker -- to force myself to be *better* at it!). And, particularly, getting names correct (e.g., all of the names of the "sponsors" in that newsletter). Thankfully (regrettably?), I only obsess about it in formal documents... >> Always fun to see newbies using dozens of "fonts", colors, etc. >> Starts to read like: "LeAvE $1,o0o,00o iN a PapuR bAg uNDer
>> tHe BRidgE at 5tH & MaIn iF yOu EVeR wANt tO sEe yOuR..." :>



Hi Grant,

On 12/21/2010 2:46 PM, Grant Edwards wrote:
> On 2010-12-21, D Yuniskis<not.going.to.be@seen.com>  wrote:
>
>> Always fun to see newbies using dozens of "fonts", colors, etc.
>> Starts to read like:  "LeAvE $1,o0o,00o iN a PapuR bAg uNDer >> tHe BRidgE at 5tH& MaIn iF yOu EVeR wANt tO sEe yOuR..." :> > > And then along came HTML and they could put all that mess on top of a > nice busy background image! .... that flashes, cycles through the color map *and* plays annoying music!! :-/ Sometimes, these tricks can be used very effectively (e.g., one of my "letters of reference" has NO letterhead; the firm's logo is VERY subtly embossed in the paper. Really classy!). But, too often they are abused. Like wearing a plaid shirt with plaid pants -- and a bright yellow BELT! :-/ (OK, which one of you guys have I just coincidentally described?? :> )   0 Reply D 12/21/2010 11:23:39 PM Hi Randy, On 12/21/2010 3:52 PM, Randy Yates wrote: > On 12/21/2010 03:21 PM, D Yuniskis wrote: >> The software that does this is BFM as far as I am concerned! > > BFM? Um, er, "Black F***ing Magic". <grin> You *really* would have to play with the software to see just how mindblowing it is! Take N slightly overlapping photos. Drag them into the program. Click and you're done! I climbed on the roof one day and took a set of photos while slowly rotating. It has a "mode" where it will glue them together in a 360 degree presentation (that you can later "scroll" left or right... amazing -- and dizzying!). As I said, it's probably "no big deal" under the hood. But, to see how much leeway it gives you in *taking* the photos and how well it stitches them together... BFM is all that comes to mind! I realize there are cameras that will do this for you. Hence my thought that there must be a "trick" that is easily exploited in analyzing the images.   0 Reply D 12/21/2010 11:38:13 PM D Yuniskis wrote: > I realize there are cameras that will do this for you. > Hence my thought that there must be a "trick" that is > easily exploited in analyzing the images. I think you'll find that these cameras have commercial licenses for the same autostitch software. 
Certainly true with a Canon I used.   0 Reply Clifford 12/22/2010 12:13:10 AM Hi Jan, On 12/21/2010 4:00 PM, Jan Panteltje wrote: > On a sunny day (Tue, 21 Dec 2010 15:52:40 -0700) it happened D Yuniskis > <not.going.to.be@seen.com> wrote in<ieraii$p7j$1@speranza.aioe.org>: > >> <Grrrr> My fault cutting and pasting. >> >> http://www.fkbcl.org/f/Fall_2008_Newsletter.pdf >> >> (Fall 200*8* precedes Spring 2009 :< I was no longer with the >> group in Fall 2009 -- which is why the *next* newsletter didn't >> show up until Spring 2010!) > > Looks like a scan to me, page 2 text is not horizontal, grey > background, no colors. Yours is a zillion times nicer. I don't know the person(s) who did the "before" and "since" editions. I suspect someone just scanned the *print* copy and posted it on their site. >> The only "big" DTP (i.e., hundreds of pages) I've done were with >> Ventura (starting in the "GEM" days). I could coerce VP to do >> a lot more "clever" things -- but, it required more hand-holding >> (though I suspect some of that had to do with the state-of-the-art >> at the time). >> >> When Corel started mucking with VP (in particular, when they >> replaced the TEXT files that VP used with "wacko proprietary >> format" files -- which were impossible to "patch"), I went >> looking for a replacement tool (e.g., Quark, Frame, etc.). > > Corel did very strange things; they once made a Linux distro, > and I bought it. > In that distro they redirected all error messages to /dev/zero, > so if something did not work you would not know about it. > It was on my system for a VERY short time (hours), > before it was replaced by Suze IIRC. Yes, Corel seems to have had a lot of "swing-and-a-miss" in the software world. DR-DOS, their Linux, purchasing WP, purchasing VP, etc. And, losing the "DRAW!" market to Adobe... OTOH, I think they now own WinZIP -- despite the fact that its functionality is already present in most desktop OS's!
:-/   0 Reply D 12/22/2010 12:18:16 AM Hi Clifford, On 12/21/2010 5:13 PM, Clifford Heath wrote: > D Yuniskis wrote: >> I realize there are cameras that will do this for you. >> Hence my thought that there must be a "trick" that is >> easily exploited in analyzing the images. > > I think you'll find that these cameras have commercial licenses > for the same autostitch software. Certainly true with a Canon I used. AFAIK, there are several software products out there with this sort of ability. I.e., it seems like someone came up with the idea and the *method* was "obvious" to the (different) people who developed these tools. I just am clueless as to the magic involved... Do you have any "controls" to influence how your camera stitches things together? E.g., I can *elect* to place three (IIRC) markers in "picture 1" and "picture 2" identifying the points that *should* coincide. I've only had to do this once. I think it was a consequence of the camera "re-setting" itself to different (optical) parameters from one photo to the next. I know there are some guidelines that you're (I'm) supposed to use to ensure the adjacent images line up "effortlessly" but I don't really understand optics and the consequences of different f-stops, etc. to know how to relate those guidelines to the underlying "science".   0 Reply D 12/22/2010 12:58:00 AM "D Yuniskis" <not.going.to.be@seen.com> wrote in message news:ierfj5$4rm$1@speranza.aioe.org... > OTOH, I think they now own WinZIP -- despite the fact that > its functionality is already present in most desktop OS's! :-/ If you want to *create* Zip packages, the built-in support on Windows is pretty basic -- the 3rd party packages add a lot more features that some people find useful. It's surprising just how many such packages there are (e.g., see http://en.wikipedia.org/wiki/Comparison_of_file_archivers ). I purchased a copy of WinAce some years ago now and have been quite happy with it...
even if .ACE never did take over the world like I was hoping it would. (It tends to compress noticeably better than Zip...) Unfortunately Phil Katz drank himself to death at the age of only 37. ---Joel   0 Reply Joel 12/22/2010 1:32:04 AM Hi Joel, On 12/21/2010 6:32 PM, Joel Koltner wrote: > "D Yuniskis" <not.going.to.be@seen.com> wrote in message > news:ierfj5$4rm$1@speranza.aioe.org... >> OTOH, I think they now own WinZIP -- despite the fact that >> its functionality is already present in most desktop OS's! :-/ > > If you want to *create* Zip packages, the built-in support on Windows is > pretty basic -- the 3rd party packages add a lot more features that > some people find useful. Ah, I didn't know that. I usually just unzip things (using gzip on my UN*X boxen to *zip* them) > It's surprising just how many such packages there are (e.g., see > http://en.wikipedia.org/wiki/Comparison_of_file_archivers ). I purchased > a copy of WinAce some years ago now and have been quite happy with it... > even if .ACE never did take over the world like I was hoping it would. > (It tends to compress noticeably better than Zip...) Yeah, I recall ARJ, ACE, ZIP, RAR, etc. Now I see BZ2 and 7Z (?) coming along to further muddy the waters... And, of course, the StuffIt crowd from the land of apples... > Unfortunately Phil Katz drank himself to death at the age of only 37. Wow! Pretty young. An acquaintance, here, just passed away. When I inquired into the reason why, I was given the answer "Well, you know he was a 'drunk'..." Guess I'd never considered the health consequences of drinking (since I don't drink). I gather that most "alcohol-related" morbidity is from complications of drinking and not "alcohol toxemia" (?)
I doubt 7Z will catch on. BZ2 likely will in the *NIX world... >> Unfortunately Phil Katz drank himself to death at the age of only 37. > > Wow! Pretty young. An acquaintance, here, just passed away. > When I inquired into the reason why, I was given the answer > "Well, you know he was a 'drunk'..." Guess I'd never considered > the health consequences of drinking (since I don't drink). > I gather that most "alcohol-related" morbidity is from > complications of drinking and not "alcohol toxemia" (?) Yes, I believe so. Bob Widlar had already become sober and was apparently doing a pretty good job of getting his life back under control when he died while out jogging from a heart attack... it's been suggested that it was all the cumulative damage his drinking had done that made him so susceptible to dying at age 53. ---Joel   0 Reply Joel 12/22/2010 2:52:43 AM Hi Joel, On 12/21/2010 7:52 PM, Joel Koltner wrote: > "D Yuniskis" <not.going.to.be@seen.com> wrote in message > news:iermf5$hjl$1@speranza.aioe.org... >> Yeah, I recall ARJ, ACE, ZIP, RAR, etc. Now I see BZ2 and 7Z (?) >> coming along to further muddy the waters... > > I doubt 7Z will catch on. BZ2 likely will in the *NIX world... I had to unpack *something* with 7Z recently. I know I was annoyed as it was Yet Another Stupid Compressor. Sort of like writing Yet Another RTOS! :> >>> Unfortunately Phil Katz drank himself to death at the age of only 37. >> >> Wow! Pretty young. An acquaintance, here, just passed away. >> When I inquired into the reason why, I was given the answer >> "Well, you know he was a 'drunk'..." Guess I'd never considered >> the health consequences of drinking (since I don't drink). >> I gather that most "alcohol-related" morbidity is from >> complications of drinking and not "alcohol toxemia" (?) > > Yes, I believe so. 
Bob Widlar had already become sober and was > apparently doing a pretty good job of getting his life back under > control when he died while out jogging from a heart attack... it's been > suggested that it was all the cumulative damage his drinking had done > that made him so susceptible to dying at age 53. <frown> I can't relate to addictions. I've had lots of *habits* over the years but none proved to be "addictions" in the sense that I couldn't just walk away from them. Sad. Of course, no guarantee that "clean living" won't also find you dead at 53 :-/ I just (3 minutes ago) was commenting (while reading his "Tea Time") about Adams' premature (from *my* viewpoint!) death. Disappointing when you consider the things that *could* have come into the world had things been otherwise (his posthumous "Salmon" is really frustrating as it looks like it could have been another winner)   0 Reply D 12/22/2010 3:24:54 AM D Yuniskis wrote: > On 12/21/2010 5:13 PM, Clifford Heath wrote: >> I think you'll find that these cameras have commercial licenses >> for the same autostitch software. Certainly true with a Canon I used. > AFAIK, there are several software products out there with > this sort of ability. There have been many different stitching methods. The one used by autostitch is the only automatic one AFAIK, and is widely (though not always visibly) licensed. > I.e., it seems like someone came up > with the idea and the *method* was "obvious" to the (different) > people who developed these tools. I just am clueless as to > the magic involved... I've read a bit about it. It involves feature extraction after the fashion used for object recognition in machine vision, correlation of matching features in the various images, followed by analysis of the lens distortion implied by the measurable curvature in the correlated points. 
Then some adjustments can be made to the actual exposure levels to reduce discontinuity, and finally a new image is constructed with blending in the overlap region. Straightforward enough technique now, but a lot of work to implement effectively. I doubt there are competitive implementations anywhere, but perhaps there are some inferior ones. Autostitch has a number of tweakable options in how loosely the features must match to be considered matching, and things like that, but no capability (nor need! as older techniques did) of requiring the user to manually identify features. > I think it was a consequence > of the camera "re-setting" itself to different (optical) > parameters from one photo to the next. You should set your camera to "Exposure lock", and take the first frame at a part of the scene with median lighting. Clifford Heath.   0 Reply Clifford 12/22/2010 5:02:59 AM On Wed, 22 Dec 2010 16:02:59 +1100, Clifford Heath <no@spam.please.net> wrote: >><snip> >There have been many different stitching methods. The one used >by autostitch is the only automatic one AFAIK, and is widely >(though not always visibly) licensed. ><snip> >Clifford Heath. Some interesting links worth reviewing related to Autostitch: http://www.cs.ubc.ca/~lowe/papers/brown02.pdf http://www.cs.ubc.ca/~lowe/papers/brown03.pdf http://www.cs.ubc.ca/~lowe/papers/07brown.pdf http://cvlab.epfl.ch/~brown/papers/iccv2003.ppt Jon   0 Reply Jon 12/22/2010 9:27:36 AM On 21/12/2010 21:50, D Yuniskis wrote: > Hi Jan, > > On 12/21/2010 1:25 PM, Jan Panteltje wrote: >> Yes, when I ever have the time I may try to find a good DTP program >> for Linux >> that actually installs (tried some in the past without much luck). > > While I am an advocate of FOSS (though don't run Linux), I am > not a zealot. I *gladly* avail myself of the tools that are > available under (e.g.) Windows. Especially if they make my life > easier or my "product" better! 
> That would be scribus <http://www.scribus.net/> - it is, AFAIK, /the/ open source DTP program. I have barely tried it myself, but it is apparently very popular. And as with many such tools, it is cross-platform - try it with Linux, Windows or MacOS as you will. >> At very old age, if I get that far, when all I can do is press a key, >> maybe I should write a book. >> Although these days it is probably 'twitter'. >> :-) > > <frown>   0 Reply David 12/22/2010 9:58:32 AM On 22/12/2010 03:52, Joel Koltner wrote: > "D Yuniskis" <not.going.to.be@seen.com> wrote in message > news:iermf5$hjl$1@speranza.aioe.org... >> Yeah, I recall ARJ, ACE, ZIP, RAR, etc. Now I see BZ2 and 7Z (?) >> coming along to further muddy the waters... > > I doubt 7Z will catch on. BZ2 likely will in the *NIX world... > 7z has never really caught on. There has to be a good reason for a new compression/archive format to take hold - 7z doesn't really have one. It is perhaps the tightest compressed archive format with a totally open, patent-free specification, but it is not so much tighter than zip that it is worth the effort. rar took off because WinRAR made it easy to handle large files and split them up - it quickly became popular in the warez / file-sharing arena. bz2 has been an established standard for many years, and is extremely common in the Linux world. With Linux, there is a clear distinction between archiving and compressing - you use "tar" for the archive, and can then compress it with gzip or bzip2. gzip is faster, bzip2 gives better compression. bzip2 is not "catching on in the *nix world" - it caught on about a decade ago. The newcomer in the Linux world is LZMA, which gives even better compression. It is quite slow to compress, taking twice the time bzip2 takes. But it is much faster in decompression.   0 Reply David 12/22/2010 10:16:36 AM On 22/12/2010 02:32, Joel Koltner wrote: > "D Yuniskis" <not.going.to.be@seen.com> wrote in message > news:ierfj5$4rm$1@speranza.aioe.org...
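The deflate/bzip2/LZMA trade-off described above can be checked directly, since Python's standard library happens to ship all three codecs. Sizes vary with the input, so this is only an illustration of the idea, not a benchmark:

```python
import bz2
import lzma
import zlib

# Some compressible sample data (made up for illustration).
data = b"the quick brown fox jumps over the lazy dog\n" * 2000

# Compare the three codec families discussed in the thread:
# zlib/deflate (the algorithm inside zip and gzip), bzip2, and
# LZMA (the algorithm behind 7z and xz).
for name, compress in [("deflate (zip/gzip)", zlib.compress),
                       ("bzip2", bz2.compress),
                       ("lzma (7z/xz)", lzma.compress)]:
    print(f"{name}: {len(data)} -> {len(compress(data))} bytes")

# Round-trip sanity check: decompression recovers the original exactly.
assert lzma.decompress(lzma.compress(data)) == data
```

On typical text, deflate is the fastest and loosest of the three and LZMA the tightest; exact numbers (and whether bzip2 or LZMA wins on a given input) depend entirely on the data.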
>> OTOH, I think they now own WinZIP -- despite the fact that >> its functionality is already present in most desktop OS's! :-/ > > If you want to *create* Zip packages, the built-in support on Windows is > pretty basic -- the 3rd party packages add a lot more features that > some people find useful. > > It's surprising just how many such packages there are (e.g., see > http://en.wikipedia.org/wiki/Comparison_of_file_archivers ). I purchased > a copy of WinAce some years ago now and have been quite happy with it... > even if .ACE never did take over the world like I was hoping it would. > (It tends to compress noticeably better than Zip...) > I find it even more surprising how many people /pay/ for packages like this - when there is free software such as 7zip that does everything you need. I also find it depressing how many people abuse WinZip and WinRAR by using them without paying for them. Shareware is commercial software sold for money - it is free to test and try out, but people should either pay for it (as you did for WinAce), or use something else. And I find it depressing how companies can take other people's hard work, put a pretty face on it, call it their own and sell it. That's what WinZip did - the guts of the program were originally InfoZip's BSD-licensed zip libraries. I know that this is perfectly legal under the BSD license, but I still feel something is morally wrong somewhere. > Unfortunately Phil Katz drank himself to death at the age of only 37. >
> The one used
> by autostitch is the only automatic one AFAIK, and is widely
> (though not always visibly) licensed.

Dunno.  I first stumbled on the "tool" at a client's shop.  Only
needed it for one photo so never thought much more about it.

Some time later, needed to do something similar so bought a copy of
it figuring it *might* work for me -- assumed the ease of my first
success with it was just a pleasant coincidence.  Figured it would
probably take "a bit of work" to make regular use of it but considered
the potential gain as well worth the effort (e.g., NOT having to use
wide-angle lenses, 360 views, etc.).

So, when I subsequently found it to be literally a "no-brainer" to use
*regardless* (almost) of the "input" photos, I was really blown away.
BFM without a doubt!

I've since found several cases where it was unable to stitch things
together without significant visual artifacts.  But, in each case, I
almost *knew* I was going to have a problem *while* I was taking the
photos (but couldn't articulate "why" since I didn't understand what
the algorithm was doing).

Now, it's as if I go *looking* for opportunities to use it!  :>

>> I.e., it seems like someone came up
>> with the idea and the *method* was "obvious" to the (different)
>> people who developed these tools.  I just am clueless as to
>> the magic involved...
>
> I've read a bit about it.  It involves feature extraction after
> the fashion used for object recognition in machine vision,
> correlation of matching features in the various images,
> followed by analysis of the lens distortion implied by the
> measurable curvature in the correlated points.  Then some

Hmmm... I had assumed it was just trying to convolve the DCT's from
the individual images looking for a peak or a null.  But, that was
just an ignorant guess as to how the magic might work.  Anything
else that I could think of seemed like it would use more resources
than this apparently was using.
> adjustments can be made to the actual exposure levels to
> reduce discontinuity, and finally a new image is constructed
> with blending in the overlap region.  Straightforward enough

OK, the latter makes sense -- once you know where to line things up.

> technique now, but a lot of work to implement effectively.
>
> I doubt there are competitive implementations anywhere, but
> perhaps there are some inferior ones.
>
> Autostitch has a number of tweakable options in how loosely
> the features must match to be considered matching, and things
> like that, but no capability (nor need! as older techniques
> did) of requiring the user to manually identify features.

My tool does everything by itself -- unless you have taken bad photos
to begin with.  E.g., in one case, I had taken overlapping photos of a
line of small trees.  The pattern was probably too regular for it and
it matched up the "wrong" trees in successive images.

In another (360) case, most of the photos were at near infinite
distance (the horizon) but a couple had very close content (~10 feet).
The resulting image was pretty significantly "bent".  E.g., the cover
photo in the newsletter shows a similar curvature though not anywhere
near as severe!

>> I think it was a consequence
>> of the camera "re-setting" itself to different (optical)
>> parameters from one photo to the next.
>
> You should set your camera to "Exposure lock", and take the
> first frame at a part of the scene with median lighting.

Hmmm... I'll see what the various controls are.
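
[The "convolve looking for a peak" guess above can be illustrated in
miniature.  This is only an illustrative toy -- pure Python, 1-D
"strips" instead of images, and a plain sliding-window correlation --
not the feature-based method Clifford describes autostitch as using:]

```python
# Toy illustration of offset-by-correlation: find where strip_b
# overlaps strip_a by sliding it and scoring the match at each shift.
def best_offset(a, b):
    """Return the shift of b (relative to a) whose overlapping samples
    give the highest mean product -- i.e., the correlation peak."""
    best, best_score = 0, float("-inf")
    for shift in range(-len(b) + 1, len(a)):
        score, n = 0.0, 0
        for i, bv in enumerate(b):
            j = shift + i
            if 0 <= j < len(a):
                score += a[j] * bv
                n += 1
        if n and score / n > best_score:
            best_score = score / n
            best = shift
    return best

# strip_b is a copy of strip_a's bright feature, shifted right by 3:
strip_a = [0, 0, 0, 5, 9, 5, 0, 0, 0, 0]
strip_b = [5, 9, 5, 0, 0, 0, 0]
print(best_offset(strip_a, strip_b))  # correlation peaks at shift 3
```

[Real stitchers work on 2-D images, normalize the correlation, and --
per the description above -- match sparse features rather than raw
pixels, but the "slide until the match peaks" intuition is the same.]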
For the most part, I just snap photos until something looks "about
right" (i.e., without really thinking about what I am doing -- though
I've learned that "macro" is almost essential for anything within
arm's reach)

 0

Reply D 12/22/2010 10:43:12 AM

Hi David,

On 12/22/2010 2:58 AM, David Brown wrote:
> On 21/12/2010 21:50, D Yuniskis wrote:
>> Hi Jan,
>>
>> On 12/21/2010 1:25 PM, Jan Panteltje wrote:
>
>>> Yes, when I ever have the time I may try to find a good DTP program
>>> for Linux
>>> that actually installs (tried some in the past without much luck).
>>
>> While I am an advocate of FOSS (though don't run Linux), I am
>> not a zealot.  I *gladly* avail myself of the tools that are
>> available under (e.g.) Windows.  Especially if they make my life
>> easier or my "product" better!
>
> That would be scribus <http://www.scribus.net/> - it is, AFAIK, /the/
> open source DTP program.  I have barely tried it myself, but it is
> apparently very popular.  And as with many such tools, it is
> cross-platform - try it with Linux, Windows or MacOS as you will.

Unless it does something that FrameMaker *doesn't* (which would be
a stretch), I wouldn't be interested.  I think Frame was only
about $500 when I bought my first "version".  So, I've maybe
laid out ~$1K considering upgrades -- well worth the amount of use I
get out of it!  And, the number of bugs/workarounds that I've had to
deal with is surprisingly few.  (I don't like having to debug someone
else's tools...)

 0

Reply D 12/22/2010 11:03:32 AM

On 22/12/2010 00:15, D Yuniskis wrote:
> Hi David,
>
> On 12/21/2010 2:06 PM, David Brown wrote:
>> On 21/12/10 21:21, D Yuniskis wrote:
>>> On 12/21/2010 12:12 PM, Jan Panteltje wrote:
>>>> On a sunny day (Tue, 21 Dec 2010 12:11:41 -0700) it happened D Yuniskis
>>>> I mean what those DTP programs sometimes
>>>> produce, and then they use variable spacing between the WORDS
>>>> to get even line length... terrible.
>>>
>>> Things are much better with modern DTP tools.  In years past,
>>> to pad a line to full length, you had to insert *whole*
>>> spaces.  For fixed width fonts (e.g., courier), this can be
>>> "like fingernails scraping on a chalkboard".
>>>
>>> Note that most tools will also play games balancing column
>>> lengths (if you so choose).  Some will even use *variable*
>>> line SPACING to achieve this!
>>
>> I can see the point of that - by reducing the vertical line spacing, you
>> are reducing the area of the large space and thus its visual effect.  I
>> am not sure I like it, however - I think the line spacing change is
>> distracting and the space is still too big.
>
> I think feathering is intended for more "artistic" presentations.
> E.g., "I have a two column-inch box that I want 'THIS BIT OF TEXT'
> to *fill* -- stretch/compress it as necessary to achieve that goal".
>
> So, in multicolumn layouts, it just looks silly -- you don't want to
> (usually) "spread out" (vertically) one column just so it is a
> certain vertical size (there are other tricks that FM will employ
> to balance the columns without feathering).  Instead, you usually
> want to "synchronize" the text baselines of adjacent columns
> (otherwise, the visual effect is very unsettling).
> Yes - multi-column layouts need synchronising like this.

For single-column layouts, it can be useful to stretch or shrink the
vertical space a little to improve the layout, such as to avoid widows
or orphans.  I don't know how FrameMaker handles this - TeX will do it
automatically within certain limits, and let you manually specify it
outside that.  And as usual with spacing, it applies more of the
stretch to larger spaces such as inter-paragraph spaces than to
inter-line spaces.

> Recall that your document is not one long string of characters
> (with images/tables interspersed).  Rather, it may be composed
> of many *discrete* "flows" (FM-speak).  E.g., when I want to paste
> a lengthy bit of code into an article, I create a separate "flow"
> for that code -- with its own layout rules.  Then, I tell FM
> which "frames" the flow should occupy.
>
> So, I can start the flow in a 3"x4" box on page 1 and continue it
> in a 3"x6" box on page 7 (if that made sense).  With this in mind,
> you can see how some publications (e.g., advertisements!) might
> want to use things like feathering in particular "frames".

That's a little different from what I am used to with TeX / LaTeX,
which treats everything as a whole.  (You /can/ make framed boxes like
this - sometimes that's very useful - but it's not the usual method).
But then, FrameMaker and TeX are designed to work in very different
ways - FM is much more of a visual layout tool, while TeX has a strong
separation between visual appearance and textual content, and runs as
a batch process.

>> Sometimes there is nothing that can be done to make the typesetting look
>> good.  The answer in a case like this is to slightly re-write the text
>> until you get a good fit - not to massage the spacing to give a slightly
>> less bad fit.
>
> You use both techniques.  You can adjust the kerning and spacing within
> lines (and character sequences).  E.g., highlight a section of text and
> then interactively squeeze or stretch the characters within that
> highlighted region.

Manual modifications like that are always a pain, especially if the
document may be changed later.  But sometimes they are unavoidable.

> The downside of short line lengths -- especially in technical
> publications -- is that you tend to end up with "big words"
> (and/or words that are glued together like "and/or" :> ).
> This doesn't give you many places to "insert whitespace".
> So, if you *don't* set it "ragged right", you frequently
> end up with things like:
> |the      counterclockwise      rotation|
> Your only remedy is to let some other word(s) onto the line
> and hyphenate them (I *really* dislike hyphenation!  With
> narrow columns, it becomes VERY frequent)
>
>> The tool in question (FrameMaker) seems to do a better job than word
>
> Word processors are crap.  I have yet to find a need for one
> *other* than writing single page correspondence (I do even
> that in FrameMaker since I don't want to deal with yet
> another program that does "something similar").

I managed to get by for years without using a word processor at all -
LaTeX handled everything I needed.  But unfortunately that poses
certain challenges for working with colleagues and customers who don't
use it.

>> processors, and does a reasonable job of the hyphenation, but it has a
>> lot to learn from TeX.  The typesetter (person rather than program) has
>
> Ah, well... I'll gladly refund the money that I (wasn't) paid!  :>

I had missed the point in an earlier post where you said /you/ were
the typesetter...

Here's a few tips - and I'm aware that these might be considered
"style choices" rather than "good typesetting rules", and that you
might also have knowingly broken them because of the short line-length
constraints.

Use non-breaking spaces in cases like "Mr. Bill", "N. Sixth", and
"3:00 pm".

Be /very/ careful with a name like "Kirk - Bear".
I can see that you might want to write it that way, with spaces around
the hyphen, as a sort of logo.  But avoid line breaks that have the
hyphen at the beginning of a line, and avoid ending a line with the
hyphen if a neighbouring line is also (automatically) hyphenated.

Your vertical spacing around the picture on page 2 is unbalanced, and
I think the "Upcoming Activities" needs more vertical space.

Other than that, it is - as I said before - very well done.

>> missed a few points too - though again, it is typeset far better than
>> most publications these days, and it looks very nice.

 0

Reply David 12/22/2010 11:52:06 AM

On 22/12/2010 00:28, D Yuniskis wrote:
> Hi David,
>
> On 12/21/2010 2:19 PM, David Brown wrote:
>
> [attributions elided]
>
>>>>> With short line lengths, you never lose sight of the left edge
>>>>> of the column.  So, moving to the next line requires less
>>>>> visual (subconscious) "hunting".
>>>>
>>>> Right.  The longer the line, the harder it is to jump to the beginning
>>>> of the next line without getting "lost" or making false starts on the
>>>> wrong line.
>>>
>>> Setting the type "ragged right" actually helps with readability.
>>> But, it doesn't "look pretty".
>>
>> Ragged right is certainly better than rampant hyphenation or wildly
>> varying spacing to justify the text.  Good typesetters for short-line
>> publications like newspapers will work with the editors to change text
>> to fit, so that it can be readable /and/ pretty.
>
> The problem is that rewriting takes a disproportionate amount of
> time.  When I write technical documentation, I often DELIBERATELY
> use boilerplate prose.  The point being to let the user ignore
> the repetitious aspects of the document and concentrate on that
> which *differs* from one "topic" to the next.

There is a big difference between technical writing and publications
with more specific layout requirements.
Technical papers need good typesetting so that they are easy to read -
newspapers (and library newsletters) need to look good to attract
attention and readers.  So for technical writing, you want as much to
be automated as possible, and you use a software-friendly layout (such
as long enough lines so that bad line breaks are rare).  For
newspapers, it's okay to spend more time on "artistic tweaking" to
make it look good.

> E.g., like consistently-written man(1) pages (you don't even see
> the cruft parts of the sentences that just serve to provide
> "proper grammar" for the *MEAT* of the sentence).
>
>>> DTP is a lot more "art" than one would think.  You *assume* it's
>>> just a matter of getting everything to fit on the page and
>>> making it "look pretty".  In fact, making something that is
>>> "easy to read" (i.e., so that it doesn't interfere with the
>>> *content*) can be very difficult.
>>
>> There is also a lot more "science" to typesetting than most people
>> think.  Even if you don't want to use TeX or its friends, I'd thoroughly
>> recommend reading the TeXbook to see how Knuth thinks about typesetting.
>
> When I did my first DTP project, I canvassed the state of the
> art for tools, texts, etc.  In addition to _The TeXbook_ (volumes
> A through E, hard copy), I have an assortment of texts on the
> (legacy) printing process, composition, style guides, etc.
>
>>> Knuth (?) commented that once you start designing fonts, it is
>>> hard NOT to look at EVERY font that you encounter with a
>>> critical eye.  I.e., you stop seeing things as "words on paper"
>>> but, instead, start inspecting individual glyphs, kerning, etc.
>>> The same is true of DTP -- you look at why someone opted for
>>> "big caps", or outdents, or...
>>
>> I know the feeling.  When you've used TeX or LaTeX, and especially if
>> you've read the books explaining the background, it is hard not to think
>> about it.  I can usually spot a (La)TeX'ed document immediately - there
>> is just far more attention to the small details than you get with other
>> programs.  It is often easy to see when someone has used a professional
>> program like Frame Maker rather than an amateurish word processor, but
>> the difference is not as great unless it is done by a very skilled

> Chase down the "before" and "since" links I mentioned in one of these
> posts to see how "others" have tackled the same newsletter.  <frown>
> It's disturbing (in any field) when people *think* something is
> "easy" just because it *looks* like it SHOULD be easy!
>
> I am always amazed at the various parts of my brain that are "missing".
> E.g., I can't draw a living creature.  But, I can draw landscapes,
> buildings, plants, etc. in very good detail (proper multipoint
> perspective, etc.).  I.e., that portion of a human brain that
> can draw people is MISSING in my case!  :>  OTOH, if you were
> to show me a drawing of a person, I could immediately tell you
> what was wrong with the drawing and how to correct it!

I am a well-balanced person - I can't draw landscapes /or/ living
things :)

> (I suspect a similar phenomenon explains why so many people can't
> "design from scratch" -- but can patch 'till the cows come home!)
>
>> typesetter.  I am often asked to proof-read documents at work - it is
>> sometimes hard to concentrate on the relevant issues rather than glaring
>> double-space errors or font issues.
>
> For me, it is spelling (though I don't use a spell-checker -- to
> force myself to be *better* at it!).  And, particularly, getting
> names correct (e.g., all of the names of the "sponsors" in that
> newsletter).  Thankfully (regrettably?), I only obsess about it
> in formal documents...
>
>>> Always fun to see newbies using dozens of "fonts", colors, etc.
>>> Starts to read like:  "LeAvE $1,o0o,00o iN a PapuR bAg uNDer
>>> tHe BRidgE at 5tH & MaIn iF yOu EVeR wANt tO sEe yOuR..." :>
>


 0

On a sunny day (Wed, 22 Dec 2010 10:58:32 +0100) it happened David Brown
<david@westcontrol.removethisbit.com> wrote in
<oLKdnUw5M4dWVozQnZ2dnUVZ7r2dnZ2d@lyse.net>:

>On 21/12/2010 21:50, D Yuniskis wrote:
>> Hi Jan,
>>
>> On 12/21/2010 1:25 PM, Jan Panteltje wrote:
>
>>> Yes, when I ever have the time I may try to find a good DTP program
>>> for Linux
>>> that actually installs (tried some in the past without much luck).
>>
>> While I am an advocate of FOSS (though don't run Linux), I am
>> not a zealot. I *gladly* avail myself of the tools that are
>> available under (e.g.) Windows. Especially if they make my life
>> easier or my "product" better!
>>
>
>That would be scribus <http://www.scribus.net/> - it is, AFAIK, /the/
>open source DTP program.  I have barely tried it myself, but it is
>apparently very popular.  And as with many such tools, it is
>cross-platform - try it with Linux, Windows or MacOS as you will.

Yep,
~/compile/scribus/scribus-1.3.3.7/scribus
still on my system, don't remember why it did not work, maybe it did not compile
because of 'other things' it needed.
carried from disk to disk :-)
Probably left it there for 'one of these days'.
TB size harddisks make you do that.

It seems written in C++, maybe that is why I left it....
:-)

Bloat by default.
:-)

 0

On a sunny day (Wed, 22 Dec 2010 13:05:39 +0100) it happened David Brown
<david@westcontrol.removethisbit.com> wrote in
<IuednQNDkYYLdIzQnZ2dnUVZ7vidnZ2d@lyse.net>:

>I am a well-balanced person - I can't draw landscapes /or/ living things :)

It is a gift, look at this site:
http://www.artakiane.com/home.html

 0

On 22/12/2010 12:05, D Yuniskis wrote:
> Hi David,
>
> On 12/22/2010 2:58 AM, David Brown wrote:
>> On 21/12/2010 21:50, D Yuniskis wrote:
>>> Hi Jan,
>>>
>>> On 12/21/2010 1:25 PM, Jan Panteltje wrote:
>>
>>>> Yes, when I ever have the time I may try to find a good DTP program
>>>> for Linux
>>>> that actually installs (tried some in the past without much luck).
>>>
>>> While I am an advocate of FOSS (though don't run Linux), I am
>>> not a zealot. I *gladly* avail myself of the tools that are
>>> available under (e.g.) Windows. Especially if they make my life
>>> easier or my "product" better!
>>
>> That would be scribus <http://www.scribus.net/> - it is, AFAIK, /the/
>> open source DTP program. I have barely tried it myself, but it is
>> apparently very popular. And as with many such tools, it is
>> cross-platform - try it with Linux, Windows or MacOS as you will.
>
> Unless it does something that FrameMaker *doesn't* (which would
> be a stretch), I wouldn't be interested. I think Frame was only
> about $500 when I bought my first "version". So, I've maybe > laid out ~$1K considering upgrades -- well worth the amount
> of use I get out of it! And, the number of bugs/workarounds
> that I've had to deal with is surprisingly few. (I don't like
> having to debug someone else's tools...)

I would be surprised if a FrameMaker owner would switch to Scribus (not
that I've used either, so my comparison here is based on third-hand
knowledge and web sites).  There are cases where free, zero-cost open
source software is much better than expensive commercial equivalents,
but I don't think this is one of them.  This is especially true in your
case, where money is not an object (since you've already paid it), and
long
where money is not an object (since you've already paid it), and long
experience is a big point in FrameMaker's favour.

I am merely making the suggestion of Scribus for anyone wanting to try
it.  While I doubt that you would switch to using it, you may be
interested in trying it for comparison.


 0

On Dec 21, 5:38 pm, D Yuniskis <not.going.to...@seen.com> wrote:

...

> ["No matter how much you dislike pickles, they are,
> after all, the only thing you can do with CUcumbers!"
> (a bizarre reference that I suspect few will recognize)]

I believe that what Ernie Lundquist actually said is, "No matter how
much you dislike pickles, it is, after all, the only thing you can do
with cucumbers." Your version is more logical.

Jerry

 0

Hi Jerry,

On 12/22/2010 9:57 AM, Jerry Avins wrote:
> On Dec 21, 5:38 pm, D Yuniskis<not.going.to...@seen.com>  wrote:
>
>> ["No matter how much you dislike pickles, they are,
>> after all, the only thing you can do with CUcumbers!"
>> (a bizarre reference that I suspect few will recognize)]
>
> I believe that what Ernie Lundquist actually said is, "No matter how
> much you dislike pickles, it is, after all, the only thing you can do
> with cucumbers." Your version is more logical.

I stand corrected.  :>  Though the "audio" version's stress
on "CUEcumbers" [sic] is what always sticks in my mind...

That and:

"OOON... yellimahn"

"Help!  I'm locked in the refrigerator!"

and, of course, the whole:

"get-TING hung UP. GETing HUNG up. HUNG up, getTING...." shtick.

<grin>  I'll have to rummage through my stuph and see if I
can find either of those...

 0

Hi David,

On 12/22/2010 5:48 AM, David Brown wrote:
>>> That would be scribus <http://www.scribus.net/> - it is, AFAIK, /the/
>>> open source DTP program. I have barely tried it myself, but it is
>>> apparently very popular. And as with many such tools, it is
>>> cross-platform - try it with Linux, Windows or MacOS as you will.
>>
>> Unless it does something that FrameMaker *doesn't* (which would
>> be a stretch), I wouldn't be interested. I think Frame was only
>> about $500 when I bought my first "version". So, I've maybe >> laid out ~$1K considering upgrades -- well worth the amount
>> of use I get out of it! And, the number of bugs/workarounds
>> that I've had to deal with is surprisingly few. (I don't like
>> having to debug someone else's tools...)
>
> I would be surprised if a FrameMaker owner would switch to Scribus (not
> that I've used either, so my comparison here is based on third-hand
> knowledge and web sites). There are cases where free, zero-cost open
> source software is much better than expensive commercial equivalents,
> but I don't think this is one of them. This is especially in your case,
> where money is not an object (since you've already paid it), and long
> experience is a big point in FrameMaker's favour.

I'm not a zealot.  I don't want to make a career out of maintaining
my tools (unless I have to) but, rather, want to use a tool to get
a job done.  *Now* -- not when they get around to fixing some
bug that is standing in my way (note that this critique applies
equally to commercial products -- perhaps even MORESO as you are
entirely at THEIR mercy as to when and IF they will fix the problem!).

As I said, there are many things that FrameMaker *can't* do that
VP could.  I have publications that I did under VP that are *stuck*
there because of many layout tricks I could coerce VP to do for
me that FM's design precludes.

> I am merely making the suggestion of Scribus for anyone wanting to try
> it. While I doubt that you would switch to using it, you may be
> interested in trying it for comparison.

The problem I find with many "packages" is getting all the right
cruft in place that the package depends on.  In the UN*X world,
you have to make a commitment towards a particular direction
as everything BUILDS on everything else.  By contrast, the Windows
world rebundles (usually needlessly reinventing!) all the cruft that
a particular application is likely to need *with* the application
(hence the bloat -- in both cases).

If you just install a prebuilt package, you are putting your
faith in the "builder" that he understood the various (inevitable)
what to do in each case (too often, people just blindly build things
and as long as make ends "successfully" they figure "all is well").

[I build every application on my UN*X boxen from scratch so I
have a better feel for what *might* go wrong "in use"]

I'm doing my end of year "swap in/out" of hardware so maybe I'll
just install a prebuilt version and play with it long enough to
at least see what it does/doesn't (that way I don't have to
make a commitment to the software beyond reformatting the disk)

 0

On Tue, 21 Dec 2010 15:38:53 -0700, D Yuniskis <not.going.to.be@seen.com>
wrote:

>Hi Jan,
>
>On 12/21/2010 3:13 PM, Jan Panteltje wrote:
>> On a sunny day (Tue, 21 Dec 2010 13:57:09 -0700) it happened D Yuniskis
>> <not.going.to.be@seen.com>  wrote in<ier3pv$a0q$1@speranza.aioe.org>:
>>
>>> On 12/21/2010 12:18 PM, Jan Panteltje wrote:
>>>> On a sunny day (Tue, 21 Dec 2010 19:12:35 GMT) it happened Jan Panteltje
>>>> <pNaOnStPeAlMtje@yahoo.com>   wrote in<iequ7b$m90$1@news.datemas.de>:
>>>>
>>>> PS
>>>> I prefer it this way:
>>>>    http://panteltje.com/panteltje/pic/gm_pic/
>>>
>>> That forces you to look *past* a photo for the next bit of text.
>>> I don't "interrupt" the reader unless the interruption is
>>> and ignore photos, illustrations, tables, etc. if they aren't
>>> "significant enough".
>>
>> Yes, I should perhaps just have provided a link to the big pictures,
>> and put the text next to it.
>
>For HTML, I'd put a reduced resolution/size image "in-line"
>with the associated text.  And, a link *on* the image so you
>could "click for bigger picture".
>
>[I consider this too much work -- which is why I don't
>create HTML documents!  :> ]
>
>While I despise PDF's, they are an effective way
>of controlling content *and* presentation.
>
>["No matter how much you dislike pickles, they are,
>after all, the only thing you can do with CUcumbers!"
>(a bizarre reference that I suspect few will recognize)]

A Child's Garden of Grass.  Hadn't thought of that one in a while.

 0

Hi David,

On 12/22/2010 4:52 AM, David Brown wrote:

>>>> Note that most tools will also play games balancing column
>>>> lengths (if you so choose). Some will even use *variable*
>>>> line SPACING to achieve this!
>>>
>>> I can see the point of that - by reducing the vertical line spacing, you
>>> are reducing the area of the large space and thus its visual effect. I
>>> am not sure I like it, however - I think the line spacing change is
>>> distracting and the space is still too big.
>>
>> I think feathering is intended for more "artistic" presentations.
>> E.g., "I have a two column-inch box that I want 'THIS BIT OF TEXT'
>> to *fill* -- stretch/compress it as necessary to achieve that goal".
>>
>> So, in multicolumn layouts, it just looks silly -- you don't want to
>> (usually) "spread out" (vertically) one column just so it is a
>> certain vertical size (there are other tricks that FM will employ
>> to balance the columns without feathering). Instead, you usually
>> want to "synchronize" the text baselines of adjacent columns
>> (otherwise, the visual effect is very unsettling).
>
> Yes - multi-column layouts need synchronising like this.

OTOH, it also imposes constraints on what you can *do* in
those "other columns".  E.g., if you use a different size typeface
you get weird side-effects as one tries to force the other to
"comply".

> For single-column layouts, it can be useful to stretch or shrink the
> vertical space a little to improve the layout, such as to avoid widows
> or orphans. I don't know how Frame Maker handles this - TeX will do it
> automatically within certain limits, and let you manually specify it
> outside that. And as usual with spacing, it applies more of the stretch
> to larger spaces such as inter-paragraph spaces than to inter-line spaces.
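
[That proportional-stretch behaviour is TeX's "glue" model in
miniature.  A hedged sketch -- the function name and the numbers are
invented for illustration, and real TeX glue also has shrink limits,
infinite-stretch classes, and penalties:]

```python
# Minimal sketch of glue-style feathering: distribute extra vertical
# space across gaps in proportion to each gap's stretchability, so
# stretchy gaps (inter-paragraph skips) absorb more of the surplus
# than stiff gaps (inter-line skips).
def feather(natural_sizes, stretchabilities, extra):
    """Return adjusted gap sizes after absorbing `extra` units."""
    total_stretch = sum(stretchabilities)
    if total_stretch == 0:
        return list(natural_sizes)  # nothing stretchy; leave as-is
    return [size + extra * st / total_stretch
            for size, st in zip(natural_sizes, stretchabilities)]

# Two stiff inter-line gaps plus one stretchy inter-paragraph gap
# must absorb 6 extra points between them:
print(feather([12, 12, 18], [1, 1, 4], 6))  # -> [13.0, 13.0, 22.0]
```

[The paragraph skip takes 4 of the 6 extra points; each line gap takes
only 1 -- which is why the feathering is less visually jarring than
spreading the slack evenly.]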

FM gives you control over widows/orphans on a "per type" basis.
So, you can change how "Heading Text" is handled differently
from "body text", etc.

Likewise, you have constraints on the extents to which text
within a line is stretched/compressed.  But, you typically need
to *see* how things layout before you can decide if those are
appropriate "for this publication".  (as I said, you can do

>> Recall that your document is not one long string of characters
>> (with images/tables interspersed). Rather, it may be composed
>> of many *discrete* "flows" (FM-speak). E.g., when I want to paste
>> a lengthy bit of code into an article, I create a separate "flow"
>> for that code -- with its own layout rules. Then, I tell FM
>> which "frames" the flow should occupy.
>>
>> So, I can start the flow in a 3"x4" box on page 1 and continue it
>> in a 3"x6" box on page 7 (if that made sense). With this in mind,
>> want to use things like feathering in particular "frames".
>
> That's a little different from what I am used to with TeX / LaTeX, which
> treats everything as a whole. (You /can/ make framed boxes like this -
> sometimes that's very useful - but it's not the usual method). But then,

FM (and most other WYSIWYG DTP packages that I've played with) *loves*
to break things into little "units" whenever it can do so without
looking like it is being obsessive.  :>  E.g., each "cell" in a
table has its own specific formatting.

Nominally, the "table heading" (e.g., top row) cells have one
format ("paragraph type") while the cells in the *body* of the
table have another.  E.g., how far to indent from the edge
of the "cell", how to hyphenate, what typeface, tabstops, etc.
These can be overridden for individual cells, the table full
of cells, *all* cells (i.e., the "cell body" paragraph type),
etc.

The "title" for the table (e.g., "Table 3-9: Common Insect Larvae
found in Foodstuffs") is a separate type and "thing".  *References*
to that title are different things, etc.

This is hard to get used to until you think in terms of *other*
publications for which it could be used (e.g., imagine laying
out a "weekly sale brochure" for your local grocery store...).

Different DTP tools give you different ways of *using* this
"meta-information".  Commonly, you can build a ToC that just
includes all "Chapter Heading"s.  A list of figures that

FM, for example, will let you create cross references to instances
of specific paragraph types.  An obvious candidate is "Figure
in Figure 23.5 on page 94" automatically.

> Frame Maker and TeX are designed to work in very different ways - FM is
> much more of a visual layout tool, while TeX has a strong separation
> between visual appearance and textual content, and runs as a batch process.

Exactly.  Though I think there are packages (lyx?) that wrap
an interactive user interface around it.

I've just found DTP to be an inherently interactive process.
You tweak one thing and see what else changes as a result.
E.g., if I put a non-breaking space here to keep this phrase
intact IN THIS INSTANCE, what gets munged in the *next* paragraph
that I will need to "fix"?

>>> Sometimes there is nothing that can be done to make the typesetting look
>>> good. The answer in a case like this is to slightly re-write the text
>>> until you get a good fit - not to massage the spacing to give a slightly
>>
>> You use both techniques. You can adjust the kerning and spacing within
>> lines (and character sequences). E.g., highlight a section of text and
>> then interactively squeeze or stretch the characters within that
>> highlighted region.
>
> Manual modifications like that are always a pain, especially if the
> document may be changed later. But sometimes they are unavoidable.

I've disciplined myself not to use them.  This may be a bit of a
Draconian stand, but it's too easy to forget where those
"tweaks" are in a document.  So, you could end up tweaking
something right next to it as a result of that previous
tweak (that you have forgotten about).

When things need that level of fine tweaking, I resort to
rewriting the text to cause the "natural" (un-tweaked)
appearance to be more acceptable (sometimes at the expense
of more stilted prose :-/ )

>> The downside of short line lengths -- especially in technical
>> publications -- is that you tend to end up with "big words"
>> (and/or words that are glued together like "and/or" :> ).
>> This doesn't give you many places to "insert whitespace".
>> So, if you *don't* set it "ragged right", you frequently
>> end up with things like:
>> |the counterclockwise rotation|
>> Your only remedy is to let some other word(s) onto the line
>> and hyphenate them (I *really* dislike hyphenation! With
>> narrow columns, it becomes VERY frequent)
>>
>>> The tool in question (FrameMaker) seems to do a better job than word
>>
>> Word processors are crap. I have yet to find a need for one
>> *other* than writing single page correspondence (I do even
>> that in FrameMaker since I don't want to deal with yet
>> another program that does "something similar").
>
> I managed to get by for years without using a word processor at all -
> LaTeX handled everything I needed. But unfortunately that poses certain
> challenges for working with colleagues and customers who don't use it.

That's why I opted to use PDFs for publications and notes.
It's too hard to prepare things that others will be able to
review AS YOU INTENDED THEM using other presentation formats.

>>> processors, and does a reasonable job of the hyphenation, but it has a
>>> lot to learn from TeX. The typesetter (person rather than program) has
>>
>> Ah, well... I'll gladly refund the money that I (wasn't) paid! :>
>
> I had missed the point in an earlier post where you said /you/ were the
> typesetter...
>
> Here's a few tips - and I'm aware that these might be considered "style
> choices" rather than "good typesetting rules", and that you might also
> have knowingly broken them because of the short line-length constraints.
>
> Use non-breaking spaces in cases like "Mr. Bill", "N. Sixth", and "3:00
> pm".

The problem with non-breaking spaces (besides the fact that you
have to *insert* them, i.e., replace the normal space with a
non-breaking one) is that they make that unbreakable "phrase"
physically longer (intentionally).  I typically only do this
selectively -- a global "search and replace" (possible in
FrameMaker, even with formatting stuff) will often end up causing
a problem elsewhere in the document.  (short column widths really
make things tough!)
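In (La)TeX, at least, the non-breaking space is just the tie
character `~`; it behaves as ordinary interword glue apart from
forbidding the break.  A small sketch, reusing the examples quoted
above:

```latex
% "~" is a normal interword space that simply cannot be broken,
% so "Mr. Bill", "N. Sixth" and "3:00 pm" each stay on one line.
Mr.~Bill arrived at 3:00~pm and parked on N.~Sixth.

% \, gives a thin space, commonly used between a number and its
% unit; as a kern it also does not permit a line break.
A 10\,kHz tone sampled at 44.1\,kHz.
```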

> Be /very/ careful with a name like "Kirk - Bear". I can see that you
> might want to write it that way, with spaces around the hyphen, as a
> sort of logo. But avoid line breaks that have the hyphen on the
> beginning of a line, and avoid ending a line with the hyphen if a
> neighbouring line is also (automatically) hyphenated.

The bigger "problem", IMO, is my not treating "Kirk - Bear Canyon"
(KBCN) more formally.  E.g., the names of each of the sponsors are
typeset in italics -- yet KBCN isn't.  <shrug>

On the one hand, you want to litter the text with "Kirk - Bear Canyon"
to drive that home to the reader (this is, in a sense, a "sales pitch"!).
OTOH, you end up with that much more italics in the document (which
is already seeing lots of pressure as a result of book titles, etc.).

> Your vertical spacing around the picture on page 2 is unbalanced, and I

This is probably a fault in the spacing built into the "font"'s
design.  "Display fonts" (i.e., the wacky, decorative fonts like
the one I used for the "article titles") are often poorly built.
I.e., these are the types of things you get
in those "5,000 fonts for $9.95" packages.

There are "space above" and "space below" parameters for each
paragraph type (e.g., "Article Title", "body text", etc.).
These are conditionally applied and interact with their
counterpart parameters in the "paragraph types" above and below.

There is an invisible "column top" created for the two leftmost
columns on that page -- a consequence of the *bottom* edge of
that photo *plus* any "space below" parameter that may be
associated with the photo.  This interacts differently (but
consistently!) with the "Article Title" font at the top of
column 1 and the "body text" font at the top of column 2.
(notice how the "Article Title" font lines up "correctly" at
the top of column 3)

All I can do is play with parameters looking for a sweet spot
that results in FM putting the text where you would like it in
these two instances.  Note that if I change the definition of
the "Article Title" paragraph type, then it has consequences
elsewhere in the document.  (notice how "Summer Reading Program"
does/doesn't appear to line up with a "baseline" *above* in the
adjoining column)

> think the "Upcoming Activities" needs more vertical space.

That "article" was tough to format.  Too many conflicting
"things" in each of the items within (it is essentially a list
of "mini-articles").  I.e., each "mini-article" (event) has a
date, time (sometimes timeSPAN), title and "responsible party".

I felt people wanted to see them expressed chronologically (and
NOT "by topic").  So, date was the key item, with the time being
secondary -- and closely associated with date.  Left justifying
the date and right justifying the time ON THE SAME LINE seemed
pretty intuitive and, for the most part, worked great!  The
reader knows what to expect, where.

The "title" of each tends to be long.  Too long to tolerate
anything else on the same line.  So, the "responsible party"
had to fall onto yet another line.
I opted to right justify it as it would, otherwise, NOT have
"set the title off" to draw attention to it *as* a "title".

So, we've got three lines of text before we get to the
description (body) of the event.  And, you would need at least
*one* line between events.  If I had opted for more, it ran the
risk of looking too "airy" (remember, the baselines of the text
in the adjoining column force things to line up in nice clean
multiples of the line spacing... so, put one, two or three blank
lines between events -- not 1.5! :-/ )  Unless I also diddled
with the fonts used for the title/date-time, etc.

If I had chosen to start that whole "article" at the bottom of
the preceding column (i.e., "86" the little blurb about
"Subordinate Clauses"), then I would have had more space to play
with -- but, still had to fight the "swiss cheese" look that
would result (everything else in the document tends to have a
"tightness" to it).

The biggest single problem with the layout is that it isn't
very "senior friendly".  The typefaces are too small, columns
too tight, etc.  OTOH, stepping up to a two-column layout
(almost essential if you use a larger typeface for the body
of the text) would have made the document a lot longer
(and printing and mailing costs disproportionately so).
E.g., examine the "before" and "since" editions -- much looser
layouts but a lot less *content*.  (consider that the last page
is devoid of *real* content as it acts as a mailing label -- the
newsletter is folded in half on the solid line)

> Other than that, it is - as I said before - very well done.

<shrug>  It was my first "non-technical" publication.  I try
to make each new project a "learning experience".  So, in that
sense, it was a success.  Just figuring out how to merge *my*
writing (and editing) with that of others without having it
be "obvious" was an interesting experience!

From the Friends' point of view, it was a *great* success as
it attracted a lot of attention -- which, after all, was its
intent!
>>> missed a few points too - though again, it is typeset far better than
>>> most publications these days, and it looks very nice.

 0  Reply  D  12/23/2010 2:23:34 AM

Hi David,

On 12/22/2010 5:05 AM, David Brown wrote:
>>> Ragged right is certainly better than rampant hyphenation or wildly
>>> varying spacing to justify the text. Good typesetters for short-line
>>> publications like newspapers will work with the editors to change text
>>> to fit, so that it can be readable /and/ pretty.
>>
>> The problem is that rewriting takes a disproportional amount of
>> time.  When I write technical documentation, I often DELIBERATELY
>> use boilerplate prose.  The point being to let the user ignore
>> the repetitious aspects of the document and concentrate on that
>> which *differs* from one "topic" to the next.
>
> There is a big difference between technical writing and publications
> with more specific layout requirements.
>
> Technical papers need good typesetting so that they are easy to read -
> newspapers (and library newsletters) need to look good to attract
> attention and readers. So for technical writing, you want as much to be
> automated as possible, and you use a software-friendly layout (such as
> long enough lines so that bad line breaks are rare). For newspapers,
> it's okay to spend more time on "artistic tweaking" to make it look good.

Yes.  There is also the *intent* and "emotion" to consider.
E.g., technical papers are intended to be (largely) "cool"
presentations, dispassionate, etc.  Informative.  Rational.

Newspapers should (theoretically) be likewise -- though there
tends to be more of a sensational aspect as they are in the
business of selling newspapers -- making you *want* to read them.

Sales brochures, flyers, etc. want to be terse and hard-hitting.
They want to generate excitement and/or irrational responses in
the reader to "trick" (do I sound too cynical? :> ) the reader
into parting with his money.
I recall hearing that grocery ads tend to emphasize *red* as red
is psychologically associated with hunger (though I would be
hard pressed to find that reference :< ).

>>>> Knuth (?) commented that once you start designing fonts, it is
>>>> hard NOT to look at EVERY font that you encounter with a
>>>> critical eye.  I.e., you stop seeing things as "words on paper"
>>>> but, instead, start inspecting individual glyphs, kerning, etc.
>>>> The same is true of DTP -- you look at why someone opted for
>>>> "big caps", or outdents, or...
>>>
>>> I know the feeling. When you've used TeX or LaTeX, and especially if
>>> you've read the books explaining the background, it is hard not to think
>>> about it. I can usually spot a (La)TeX'ed document immediately - there
>>> is just far more attention to the small details than you get with other
>>> programs. It is often easy to see when someone has used a professional
>>> program like Frame Maker rather than an amateurish word processor, but
>>> the difference is not as great unless it is done by a very skilled
>>
>> Chase down the "before" and "since" links I mentioned in one of these
>> posts to see how "others" have tackled the same newsletter.  <frown>
>> It's disturbing (in any field) when people *think* something is
>> "easy" just because it *looks* like it SHOULD be easy!
>>
>> I am always amazed at the various parts of my brain that are "missing".
>> E.g., I can't draw a living creature.  But, I can draw landscapes,
>> buildings, plants, etc. in very good detail (proper multipoint
>> perspective, etc.).  I.e., that portion of a human brain that
>> can draw people is MISSING in my case!  :>  OTOH, if you were
>> to show me a drawing of a person, I could immediately tell you
>> what was wrong with the drawing and how to correct it!
>
> I am a well-balanced person - I can't draw landscapes /or/ living things :)

Then you have no problem!  :>

I am puzzled at the inconsistency (and lack of rational
explanation for it) in *my* skill set in this regard.  One would
assume that being able to draw a particular shape would be
invariant of the thing *owning* that shape...  <shrug>

>> (I suspect a similar phenomenon explains why so many people can't
>> "design from scratch" -- but can patch 'till the cows come home!)

 0  Reply  D  12/23/2010 2:36:07 AM

Hi Jan,

On 12/22/2010 5:28 AM, Jan Panteltje wrote:
> On a sunny day (Wed, 22 Dec 2010 13:05:39 +0100) it happened David Brown
> <david@westcontrol.removethisbit.com> wrote in
> <IuednQNDkYYLdIzQnZ2dnUVZ7vidnZ2d@lyse.net>:
>
>> I am a well-balanced person - I can't draw landscapes /or/ living things :)
>
> It is a gift, look at this site:
> http://www.artakiane.com/home.html

*Far* beyond my goals!  I'd be thrilled being able to draw a
good *outline* of a person...  :-/

I went to an art exhibit last year and spent a full 30 minutes
examining an oil painting of a native american *convinced* that
it was a photograph.  Short of *touching* the work (frowned
upon!), I tried everything I could imagine to convince myself
that the few brush strokes that *were* visible were not,
actually, just a "clear, textured overlay" atop a traditional
photograph.

Absolutely amazing to see people with "alternative skills"...
especially when those are excellent!

[I want to meet the guy who invented SEX and see what he's
working on, now!]

 0  Reply  D  12/23/2010 2:41:48 AM

D Yuniskis wrote:
> *Far* beyond my goals!  I'd be thrilled being able to draw
> a good *outline* of a person...  :-/

Get a copy of "Drawing on the right side of the brain", read
it, and do the exercises.  You'll be surprised what you can
learn to do when you learn to suppress the left brain.

> [I want to meet the guy who invented SEX and see what he's
> working on, now!]

He's trying to figure out how to get a woman interested in the idea.

Clifford Heath.
 0  Reply  Clifford  12/23/2010 2:57:43 AM

Hi Clifford,

On 12/22/2010 7:57 PM, Clifford Heath wrote:
> D Yuniskis wrote:
>> *Far* beyond my goals!  I'd be thrilled being able to draw
>> a good *outline* of a person...  :-/
>
> Get a copy of "Drawing on the right side of the brain", read
> it, and do the exercises. You'll be surprised what you can
> learn to do when you learn to suppress the left brain.

As I said, I can draw other things quite well -- including other
"organic" things (still lifes, landscapes, etc.).  *And*, can
tell you what is wrong with *your* drawing of a person.  So, one
would think these two things would *imply* the ability to do so
myself!  :<

<shrug>  I'll put that "inability" in the same category as
"whistling 'wrong'", etc.

[if I'm going to invest that much time learning something, I'd
rather learn how to ride a unicycle -- but I fear my bones are
too old for the experience!]

>> [I want to meet the guy who invented SEX and see what he's
>> working on, now!]
>
> He's trying to figure out how to get a woman interested in the idea.

ROTFPMP!  I will *have* to remember THAT one!

 0  Reply  D  12/23/2010 3:22:47 AM

Hi Jon,

On 12/22/2010 2:27 AM, Jon Kirwan wrote:
> On Wed, 22 Dec 2010 16:02:59 +1100, Clifford Heath
>> There have been many different stitching methods. The one used
>> by autostitch is the only automatic one AFAIK, and is widely
>> (though not always visibly) licensed.
>
> Some interesting links worth reviewing related to Autostitch:
>
> http://www.cs.ubc.ca/~lowe/papers/brown02.pdf
> http://www.cs.ubc.ca/~lowe/papers/brown03.pdf
> http://www.cs.ubc.ca/~lowe/papers/07brown.pdf
> http://cvlab.epfl.ch/~brown/papers/iccv2003.ppt

Thanks!  Two of the PDFs didn't open for me (I'll try on a
different system).  And, the math is far too much for me to
digest at the moment.
But, it looks like this describes the process completely (though
probably in less detail than I can comprehend :< )

 0  Reply  D  12/23/2010 5:19:31 AM

On 22/12/2010 23:51, D Yuniskis wrote:
> Hi David,
>
> On 12/22/2010 5:48 AM, David Brown wrote:
>>>> That would be scribus <http://www.scribus.net/> - it is, AFAIK, /the/
>>>> open source DTP program. I have barely tried it myself, but it is
>>>> apparently very popular. And as with many such tools, it is
>>>> cross-platform - try it with Linux, Windows or MacOS as you will.
>>>
>>> Unless it does something that FrameMaker *doesn't* (which would
>>> be a stretch), I wouldn't be interested.  I think Frame was only
>>> about $500 when I bought my first "version".  So, I've maybe
>>> laid out ~$1K considering upgrades -- well worth the amount
>>> of use I get out of it!  And, the number of bugs/workarounds
>>> that I've had to deal with is surprisingly few.  (I don't like
>>> having to debug someone else's tools...)
>>
>> I would be surprised if a FrameMaker owner would switch to Scribus (not
>> that I've used either, so my comparison here is based on third-hand
>> knowledge and web sites). There are cases where free, zero-cost open
>> source software is much better than expensive commercial equivalents,
>> but I don't think this is one of them. This is especially so in your
>> case, where money is not an object (since you've already paid it), and
>> long experience is a big point in FrameMaker's favour.
>
> I'm not a zealot.  I don't want to make a career out of maintaining
> my tools (unless I have to) but, rather, want to use a tool to get
> a job done.  *Now* -- not when they get around to fixing some
> bug that is standing in my way (note that this critique applies
> equally to commercial products -- perhaps even MORESO as you are
> entirely at THEIR mercy as to when and IF they will fix the problem!).

I understand entirely.  Sometimes it is fun to play with
software, try it out, and see how it works (or doesn't work).
But most of the time, software is just a tool, and we expect it
to work as it should.

> As I said, there are many things that FrameMaker *can't* do that
> VP could.  I have publications that I did under VP that are *stuck*
> there because of many layout tricks I could coerce VP to do for
> me that FM's design precludes.
>
>> I am merely making the suggestion of Scribus for anyone wanting to try
>> it. While I doubt that you would switch to using it, you may be
>> interested in trying it for comparison.
>
> The problem I find with many "packages" is getting all the right
> cruft in place that the package depends on.
> In the UN*X world, you have to make a commitment towards a
> particular direction as everything BUILDS on everything else.
> By contrast, the Windows world rebundles (usually needlessly
> reinventing!) all the cruft that a particular application is
> likely to need *with* the application (hence the bloat -- in
> both cases).

Actually, Linux distributions pretty much solved this problem
years ago - tools like "apt" and "yum", or their gui front-ends,
are excellent at finding and automatically installing all the
required packages, libraries, etc., that are needed.  So on
Ubuntu, I just type "apt-get install scribus", and apt pulls in
python, cups libraries, and whatever else scribus needs.  One
can make many accusations about this system, but not bloat - it
is very much about re-use and sharing of packages and libraries.

In fact, I find one of the biggest problems with the Linux way
of handling packages is that it is hard to get the bloat you
sometimes want.  For example, if you want to install two
different versions of the same program, it is easy in Windows -
just pick different paths during installation (assuming it's not
a program that claws its way deep into the system and the
registry).  With Linux, this needs a lot more thought and work -
the standard installation procedure is so easy, and so
automatic, that it is hard to do non-standard installations.

> If you just install a prebuilt package, you are putting your
> faith in the "builder" that he understood the various (inevitable)
> compiler warnings, dependencies, etc. and made smart choices about
> what to do in each case (too often, people just blindly build things
> and as long as make ends "successfully" they figure "all is well").

How is that any different from anything else you install on your
machine, Windows or Linux?  You are also putting your faith in
the programmer that wrote the software in the first place.
All I can say is that it is worth getting the main parts of your
system from a source that has good build and test procedures -
when your distro comes from major players like Red Hat or
Ubuntu, you can be reasonably confident.  As you stretch out,
getting packages from smaller groups or additional repositories,
you might get more problems - in particular, the testing will
not have covered the same range of systems.

> [I build every application on my UN*X boxen from scratch so I
> have a better feel for what *might* go wrong "in use"]

Building from scratch is sometimes the best answer - I have
typed the "./configure && make && make install" mantra many
times myself.  It can also be educational.  But it is a lot more
time and effort.  Normally I expect software to just work
straight out of the box, whether it be a "setup.exe" in the
Windows world, or an "apt-get" or "yum install" in Linux.  And
most of the time, my expectations are justified (or they can be
lowered until they are justified...)

> I'm doing my end of year "swap in/out" of hardware so maybe I'll
> just install a prebuilt version and play with it long enough to
> at least see what it does/doesn't (that way I don't have to
> make a commitment to the software beyond reformatting the disk)

 0  Reply  David  12/23/2010 9:45:03 AM

On 23/12/2010 03:25, D Yuniskis wrote:
> Hi David,
>
> On 12/22/2010 4:52 AM, David Brown wrote:

Lots of snipping to save electrons - your post was an
interesting read, but I don't think I could add anything by
commenting on much of it.

>>> So, I can start the flow in a 3"x4" box on page 1 and continue it
>>> in a 3"x6" box on page 7 (if that made sense).  With this in mind,
>>> you can see how some publications (e.g., advertisements!) might
>>> want to use things like feathering in particular "frames".
>>
>> That's a little different from what I am used to with TeX / LaTeX, which
>> treats everything as a whole.
>> (You /can/ make framed boxes like this -
>> sometimes that's very useful - but it's not the usual method).
>> But then,
>
> FM (and most other WYSIWYG DTP packages that I've played with) *loves*
> to break things into little "units" whenever it can do so without
> looking like it is being obsessive.  :>  E.g., each "cell" in a
> table has its own specific formatting.

TeX also breaks things up into little boxes (mostly hbox's and
vbox's), but most of that is handled behind the scenes.

>> Frame Maker and TeX are designed to work in very different ways - FM is
>> much more of a visual layout tool, while TeX has a strong separation
>> between visual appearance and textual content, and runs as a batch
>> process.
>
> Exactly.  Though I think there are packages (lyx?) that wrap
> an interactive user interface around it.
>
> I've just found DTP to be an inherently interactive process.
> You tweak one thing and see what else changes as a result.
> E.g., if I put a non-breaking space here to keep this phrase
> intact IN THIS INSTANCE, what gets munged in the *next* paragraph
> that I will need to "fix"?

DTP for a document like this is more of a visual art, and thus
an interactive program is essential.

lyx is more of a glorified TeX-specific editor - it doesn't
attempt to give you a true "live" view of your output document.
I tried it a bit, but I didn't like it - since it can't handle
any of the more interesting features of (La)TeX, it gives you
very little.  The real fun with LaTeX comes when you make macros
and "program" your document - you can't see any of that with
lyx.  And because lyx is not quite standard LaTeX, you don't
have such easy access to the enormous numbers of existing
packages, styles, and add-ons.

>> I managed to get by for years without using a word processor at all -
>> LaTeX handled everything I needed. But unfortunately that poses certain
>> challenges for working with colleagues and customers who don't use it.
> That's why I opted to use PDFs for publications and notes.
> It's too hard to prepare things that others will be able to
> review AS YOU INTENDED THEM using other presentation formats.

pdf is ideal for delivering documentation to others.  But
sometimes it is necessary to work together on a document - I
write some, someone else writes other parts.  And that means
using the lowest common denominator tool, which is typically a
word processor.  If I can at least get other people to use
styles properly, then I can usually cope without tears.

>>>> processors, and does a reasonable job of the hyphenation, but it has a
>>>> lot to learn from TeX. The typesetter (person rather than program) has
>>>
>>> Ah, well... I'll gladly refund the money that I (wasn't) paid!  :>
>>
>> I had missed the point in an earlier post where you said /you/ were the
>> typesetter...
>>
>> Here's a few tips - and I'm aware that these might be considered "style
>> choices" rather than "good typesetting rules", and that you might also
>> have knowingly broken them because of the short line-length constraints.
>>
>> Use non-breaking spaces in cases like "Mr. Bill", "N. Sixth", and "3:00
>> pm".
>
> The problem with non-breaking spaces (besides the fact that you have
> to *insert* them, i.e., replace the normal space with a non-breaking
> one) is that they make that "phrase" physically longer (intentionally).
> I typically only do this selectively (e.g., a global "search and
> replace" -- possible in FrameMaker even with formatting stuff -- will
> often end up causing a problem elsewhere in the document).  (short
> column widths really make things tough!)

I never claimed that following these typesetting rules would be
easy!

> The biggest single problem with the layout is that it isn't
> very "senior friendly".  The typefaces are too small, columns
> too tight, etc.
> OTOH, stepping up to a two-column layout
> (almost essential if you use a larger typeface for the body
> of the text) would have made the document a lot longer.
> (and printing and mailing costs disproportionately so)
> E.g., examine the "before" and "since" editions -- much
> looser layouts but a lot less *content*.  (consider the
> last page is devoid of *real* content as it acts as a
> mailing label -- the newsletter is folded in half on the
> solid line)

Conflicting requirements are always a challenge - it's what
makes the day job fun.  Hands up all the engineers in these
newsgroups who have been asked to make something small, good
/and/ fast!

>> Other than that, it is - as I said before - very well done.
>
> <shrug>  It was my first "non-technical" publication.  I try
> to make each new project a "learning experience".  So, in
> that sense, it was a success.  Just figuring out how to merge
> *my* writing (and editing) with that of others without having
> it "obvious" was an interesting experience!
>
> From the Friends' point of view, it was a *great* success as
> it attracted a lot of attention -- which, after all, was its
> intent!

Be careful - you'll end up with a job for life.

>>>> missed a few points too - though again, it is typeset far better than
>>>> most publications these days, and it looks very nice.

 0  Reply  David  12/23/2010 10:11:12 AM

Hi David,

On 12/23/2010 3:11 AM, David Brown wrote:
>>>> So, I can start the flow in a 3"x4" box on page 1 and continue it
>>>> in a 3"x6" box on page 7 (if that made sense).  With this in mind,
>>>> you can see how some publications (e.g., advertisements!) might
>>>> want to use things like feathering in particular "frames".
>>>
>>> That's a little different from what I am used to with TeX / LaTeX, which
>>> treats everything as a whole. (You /can/ make framed boxes like this -
>>> sometimes that's very useful - but it's not the usual method).
>>> But then,
>>
>> FM (and most other WYSIWYG DTP packages that I've played with) *loves*
>> to break things into little "units" whenever it can do so without
>> looking like it is being obsessive.  :>  E.g., each "cell" in a
>> table has its own specific formatting.
>
> TeX also breaks things up into little boxes (mostly hbox's and vbox's),
> but most of that is handled behind the scenes.

My recollection of that is different.  FM tries to treat every
bit of *text* as a little "unit" (not every bit of page
real-estate).

[the term "paragraph" is approximately correct -- though it also
applies to more than *just* (classical) paragraphs...]

E.g., the caption on an illustration, the text in each "cell" in
a table, the page number at the bottom of the page, etc.

You can apply "styles" to almost everything in FM --
"paragraphs", *characters*, tables, pages, etc.  So, you might
apply the "chapter title" PARAGRAPH style to the text:
"Earthworm Mating Habits".  This might cause the text to appear
in a large decorative typeface, right justified on the line,
2.7 inches down from the top of the page, etc.

Within that "string", you could apply the "draw attention"
CHARACTER style (all these names are user defined) to the
substring 'Mating'.  This might, for example, cause them to be
displayed in a different typeface, as small caps, bold, italic
and in red ink (thereby "drawing attention" to them!  :> )

When the table of contents is built, that string will appear (by
virtue of a cross-reference) yet will have a different
"paragraph" style applied to it -- perhaps "TOC chapter".  This
would undoubtedly use a *smaller* typeface with different
margins (so it appears in the right spot in the ToC), etc.

I think Word (et al.) have similar capabilities wrt "styles".

>>> Frame Maker and TeX are designed to work in very different ways - FM is
>>> much more of a visual layout tool, while TeX has a strong separation
>>> between visual appearance and textual content, and runs as a batch
>>> process.
>> Exactly.  Though I think there are packages (lyx?) that wrap
>> an interactive user interface around it.
>>
>> I've just found DTP to be an inherently interactive process.
>> You tweak one thing and see what else changes as a result.
>> E.g., if I put a non-breaking space here to keep this phrase
>> intact IN THIS INSTANCE, what gets munged in the *next* paragraph
>> that I will need to "fix"?
>
> DTP for a document like this is more of a visual art, and thus an
> interactive program is essential.

I've found that to be the case for almost everything I
"typeset".  I use *lots* of illustrations, tables, cross
references, etc.  Trying to insert them in the text directly (as
if writing HTML in vi(1)) is too tedious (FM supports a
"visible" file encoding that you technically *could* "edit"
directly).

It's much more expedient to just race through the document
"tagging" paragraphs with the "right" styles, etc.  Likewise,
inserting a table or a photo is much more intuitive (<Insert
Table>, style "Short summary"; <Import Photo>; etc.)  And, you
can then scribble on things directly (e.g., adding callouts to
an illustration).

> lyx is more of a glorified TeX-specific editor - it doesn't attempt to
> give you a true "live" view of your output document. I tried it a bit,

Oh.  :<  I thought it was a WYSIWYG layer atop TeX.  :-/

> but I didn't like it - since it can't handle any of the more interesting
> features of (La)TeX, it gives you very little. The real fun with LaTeX
> comes when you make macros and "program" your document - you can't see
> any of that with lyx. And because lyx is not quite standard LaTeX, you
> don't have such easy access to the enormous numbers of existing
> packages, styles, and add-ons.
>
>>> I managed to get by for years without using a word processor at all -
>>> LaTeX handled everything I needed. But unfortunately that poses certain
>>> challenges for working with colleagues and customers who don't use it.
>> That's why I opted to use PDFs for publications and notes.
>> It's too hard to prepare things that others will be able to
>> review AS YOU INTENDED THEM using other presentation formats.
>
> pdf is ideal for delivering documentation to others. But sometimes it is
> necessary to work together on a document - I write some, someone else
> writes other parts. And that means using the lowest common denominator
> tool, which is typically a word processor. If I can at least get other
> people to use styles properly, then I can usually cope without tears.

You might find it easier to just let folks "feed" straight ASCII
to someone who does the editing, etc.  This helps ensure a
consistent "style" imposed on the results.  Usually, getting the
*content* right is where most of the work lies.  E.g., if
someone explains/describes something, I can completely rewrite
it rather quickly (much faster than if I had to come up with the
content myself).  Then, it "reads" consistently with the other
parts of the document (also, many people are terrible writers
and are happy to have someone else "dress up" their prose.)

>> The biggest single problem with the layout is that it isn't
>> very "senior friendly".  The typefaces are too small, columns
>> too tight, etc.  OTOH, stepping up to a two-column layout
>> (almost essential if you use a larger typeface for the body
>> of the text) would have made the document a lot longer.
>> (and printing and mailing costs disproportionately so)
>> E.g., examine the "before" and "since" editions -- much
>> looser layouts but a lot less *content*.  (consider the
>> last page is devoid of *real* content as it acts as a
>> mailing label -- the newsletter is folded in half on the
>> solid line)
>
> Conflicting requirements are always a challenge - it's what makes the
> day job fun. Hands up all the engineers in these newsgroups who have
> been asked to make something small, good /and/ fast!

Yes.
Now imagine marketing, manufacturing and engineering all telling
you *different* goals -- and none actually doing any of the
*work*!  :>

>>> Other than that, it is - as I said before - very well done.
>>
>> <shrug>  It was my first "non-technical" publication.  I try
>> to make each new project a "learning experience".  So, in
>> that sense, it was a success.  Just figuring out how to merge
>> *my* writing (and editing) with that of others without having
>> it "obvious" was an interesting experience!
>>
>> From the Friends' point of view, it was a *great* success as
>> it attracted a lot of attention -- which, after all, was its
>> intent!
>
> Be careful - you'll end up with a job for life.

No, some wanted a "boss-worker" relationship.  If I'm *giving* my
time, I sure don't *want* (nor NEED) a "boss" -- especially when
I'm the one with the DTP experience!  :>  (no hard feelings; I
just brought them a plate of cookies last week!)

It appears they found someone to take over the task (though it
looks like it took them a full year to do so :-/ ).  This is good
as I believe public libraries to be a real asset (though I am
deeply disappointed at how *loud* they have become!)

 0 Reply D 12/23/2010 6:04:02 PM

Hi David,

On 12/23/2010 2:45 AM, David Brown wrote:
>>>>> That would be scribus <http://www.scribus.net/> - it is, AFAIK, /the/
>>>>> open source DTP program.  I have barely tried it myself, but it is
>>>>> apparently very popular.  And as with many such tools, it is
>>>>> cross-platform - try it with Linux, Windows or MacOS as you will.

I played with the Windows version for an hour or so last night.
(sigh)  It's got a *long* way to go.  I would consider it:
"Wordpad with Tassels" (does more than Wordpad but not enough to
make you want to adopt it IN PLACE of Wordpad).

You might consider looking at the FM "tryout" (I think Adobe
still offers one?).  I don't recall how it is crippled (features,
time, etc.).
But, you would be able to see the sorts of things you can do and
the effort required to do so.

(E.g., for scribus, I tried to create a little document with a
table, a photo, an illustration, etc. -- just to see what it felt
like)

>> I'm not a zealot.  I don't want to make a career out of maintaining
>> my tools (unless I have to) but, rather, want to use a tool to get
>> a job done.  *Now* -- not when they get around to fixing some
>> bug that is standing in my way (note that this critique applies
>> equally to commercial products -- perhaps even MORESO as you are
>> entirely at THEIR mercy as to when and IF they will fix the problem!).
>
> I understand entirely.  Sometimes it is fun to play with software, try it
> out, and see how it works (or doesn't work).  But most of the time,
> software is just a tool, and we expect it to work as it should.

Exactly.  I doubt many carpenters go home after work and play
with their hammers!  There are enough things that I *must* (or
*choose* to) do with a computer so spending extra time is not
high on my list.  Sort of like "window shopping" (do you REALLY
have that much free time that you can waste it passively looking
at merchandise without an *intent* to purchase??)

>>> I am merely making the suggestion of Scribus for anyone wanting to try
>>> it.  While I doubt that you would switch to using it, you may be
>>> interested in trying it for comparison.
>>
>> The problem I find with many "packages" is getting all the right
>> cruft in place that the package depends on.  In the UN*X world,
>> you have to make a commitment towards a particular direction
>> as everything BUILDS on everything else.  By contrast, the Windows
>> world rebundles (usually needlessly reinventing!) all the cruft that
>> a particular application is likely to need *with* the application
>> (hence the bloat -- in both cases).
> Actually, Linux distributions have pretty much solved this problem years
> ago - tools like "apt" and "yum", or their gui front-ends, are excellent
> at finding and automatically installing all the required packages,
> libraries, etc., that are needed.  So on Ubuntu, I just type "apt-get
> install scribus", and apt pulls in python, cups libraries, and whatever
> else scribus needs.  One can make many accusations about this system, but
> not bloat - it is very much about re-use and sharing of packages and
> libraries.

The *BSD's have a "package" system with the same functionality.
As with any "true" UN*X tool, it just pieces together actions
performed by other (existing) tools.  E.g., consult the "database"
for the package (to identify dependencies), "install" the
dependencies (which requires examining the database for each of
*those* packages' dependencies, etc.), etc.

My "bloat" comment is that everything calls other "packages" in.
And, that those packages tend to have been created independent of
the packages that rely on them.

So, for example, if a "program" (getting away from the notion of
"package") needs to be able to *display* a PNG, it drags in the
*entire* png package (which has far more capabilities) instead of
*just* that "display PNG" capability.

> In fact, I find one of the biggest problems with the Linux way of
> handling packages is that it is hard to get the bloat you sometimes
> want.  For example, if you want to install two different versions of the
> same program, it is easy in windows - just pick different paths during
> installation (assuming it's not a program that claws its way deep into
> the system and the registry).  With Linux, this needs a lot more thought
> and work - the standard installation procedure is so easy, and so
> automatic, that it is hard to do non-standard installations.

If you use the *BSD "package" system, you suffer the same fate.
Things end up (in the file system) wherever the package "author"
decided to put them.
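[The "consult the database, then install each dependency's own dependencies first" walk described above is a depth-first recursion.  A toy sketch -- the package names and the database are invented for illustration:]

```python
# Toy recursive dependency resolver in the spirit of apt/pkgsrc:
# consult the "database", install dependencies before dependents.
# (No cycle detection -- real package tools handle that too.)
DEPENDS = {
    "scribus":   ["python", "cups-libs", "libpng"],
    "cups-libs": ["zlib"],
    "libpng":    ["zlib"],
    "python":    [],
    "zlib":      [],
}

def install_order(pkg, done=None):
    """Return the packages in the order they must be installed,
    each listed only once."""
    if done is None:
        done = []
    for dep in DEPENDS[pkg]:      # dependencies first...
        install_order(dep, done)
    if pkg not in done:           # ...then the package itself
        done.append(pkg)
    return done

print(install_order("scribus"))
# → ['python', 'zlib', 'cups-libs', 'libpng', 'scribus']
```

[apt, yum and pkgsrc all do essentially this walk, plus version constraints, conflict handling, and the actual fetching and installing.]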
And, you get exactly the "features" that he decided upon when he
created the package.

>> If you just install a prebuilt package, you are putting your
>> faith in the "builder" that he understood the various (inevitable)
>> compiler warnings, dependencies, etc. and made smart choices about
>> what to do in each case (too often, people just blindly build things
>> and as long as make ends "successfully" they figure "all is well").
>
> How is that any different from anything else you install on your
> machine, Windows or Linux?  You are also putting your faith in the
> programmer that wrote the software in the first place.  All I can say is

The *programmer* is different from the "package maintainer"!
E.g., I wouldn't question Wolfram's abilities.  OTOH, when
"John Doe" *packages* his creation for my use, I don't have
anywhere near the confidence in John Doe (how familiar is he with
the actual "product"?  How familiar is he with the package
system?  How clever is he at getting a package to support
flexible configuration?  Is this something he just does "in his
spare time" -- or, is he passionate about it?).

> that it is worth getting the main parts of your system from a source
> that has good build and test procedures - when your distro comes from

<grin>  Me!  ;-)

> major players like Red Hat or Ubuntu, you can be reasonably confident.
> As you stretch out, getting packages from smaller groups or additional
> repositories, you might get more problems - in particular, the testing
> will not have covered the same range of systems.
>
>> [I build every application on my UN*X boxen from scratch so I
>> have a better feel for what *might* go wrong "in use"]
>
> Building from scratch is sometimes the best answer - I have typed the
> "./configure && make && make install" mantra many times myself.  It can
> also be educational.  But it is a lot more time and effort.  Normally I

Exactly ---------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is why I am slow to upgrade...
"chase the bleeding edge".  If I have something (tool) that works
-- at least "well enough for my current needs" -- then why waste
time "upgrading"?  (you can spend every waking minute "upgrading"
*something*)

When I build from scratch, I see what goes into a "package"; what
options are present in that particular "configure"; etc.  I also
get a chance to look at what decisions/conclusions configure (as
well as the rest of the make) comes to ("Um, I have libfoo
installed!  Why didn't it *find* it?").

The pkgsrc system in *BSD lets me explore packages without
embracing them.  For example:

"make fetch-list" gives me a list of everything that has to be
downloaded (not currently on my system) to build the package.
The process recursively examines each dependency so I can get a
feel for how much "work" is involved -- and how much potential
there is for "fixups".

"make fetch" obviously *gets* whatever is needed (using URLs
listed in the package's definition file).

"make extract" unpacks things and sets up a "work" directory in
which to build the package.

"make patch" applies any particular patches (including those that
are specific to the package system itself -- like rewriting the
original *fetched* distribution's makefiles to site the results
in specific places).

"make configure" runs configure et al.

At this point, I can rummage through the "work" hierarchy to see
what's there and what *will* happen when the actual "make" is
invoked.

I typically run "make > dgy 2>&1" so I can examine the messages
emitted during the build.  If that looks right, then proceed to
"make install" (again capturing stderr/out) and "make clean".

The "make install" updates the database of *installed* packages
on the system so anything that subsequently "requires" this
package sees that it is there.

I have my own conventions for "what goes where".  And, I notice
some packages are inconsistent (there are guidelines for package
creation but adherence is optional :> ).
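[Scripted, the staged session above might look like the sketch below.  The runner itself is generic Python; the `make` targets are the pkgsrc stages just described, and the log name `dgy` is the one used above (the whole sequence is hypothetical -- it only does anything useful inside a pkgsrc package directory):]

```python
import shlex
import subprocess

def run_stage(command, logfile):
    """Run one build stage with stdout and stderr captured to a log --
    the scripted equivalent of `make configure > dgy 2>&1`."""
    with open(logfile, "w") as log:
        result = subprocess.run(shlex.split(command),
                                stdout=log, stderr=subprocess.STDOUT)
    return result.returncode == 0

# Hypothetical pkgsrc-style sequence; stop and inspect the log (and
# the work/ hierarchy) between stages rather than running it blind.
stages = ["make fetch-list", "make fetch", "make extract",
          "make patch", "make configure", "make", "make install"]
# for cmd in stages:
#     if not run_stage(cmd, "dgy"):
#         break
```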
So, while most of these add-on packages go in the /usr/pkg
hierarchy, some I will pull into /usr/local or other places
(e.g., I mount /usr/pkg late and some of these are things I might
want to use even with *just* the root filesystem mounted R/O)

> expect software to just work straight out the box, whether it be a
> "setup.exe" in the windows world, or an "apt-get" or "yum install" in
> Linux.  And most of the time, my expectations are justified (or they can
> be lowered until they are justified...)

 0 Reply D 12/23/2010 8:08:38 PM

D Yuniskis wrote:

<snip>

What do you think of this PDF?

http://home.earthlink.net/~mike.terrell/CM-RC-index.pdf

Any idea how I did it?

-- 
For the last time:  I am not a mad scientist, I'm just a very
ticked off scientist!!!

 0 Reply Michael 12/23/2010 8:36:50 PM

On 23/12/10 21:36, Michael A. Terrell wrote:
<snip>
> What do you think of this PDF?
>
It's not the most exciting reading...
> http://home.earthlink.net/~mike.terrell/CM-RC-index.pdf
>
> Any idea how I did it?
>
The "producer" stamp just says ghostscript.  So I'm guessing that
you first generated a postscript file, then used ps2pdf to
convert it to a pdf.

As for the postscript file, it was perhaps generated
programmatically from an existing database or table of
information.  If that's the case, then you might want to consider
generating the pdf file directly in the future.  There are a
number of pdf toolkits around - I have used reportlab with python
a number of times, and it makes it easy to generate some pretty
reasonable pdf's.

An alternative is to have a program that generates a LaTeX file,
and use pdfLaTeX to generate the pdf itself.  This is an easy way
to separate the data content from the style - change the style in
the LaTeX part, and define macros for displaying the different
parts of the document.  Then the program part just generates a
list of macro calls from the data, and you can easily experiment
with different styles and effects.  It would be much easier to
get things like leaders, alternative fonts, clickable links,
etc., in this way.

 0 Reply David 12/23/2010 9:11:14 PM

On 12/23/2010 03:36 PM, Michael A. Terrell wrote:
>
> D Yuniskis wrote:
>>
>> Hi David,
>>
>> On 12/23/2010 2:45 AM, David Brown wrote:
>>>>>>> That would be scribus <http://www.scribus.net/> - it is, AFAIK, /the/
>>>>>>> open source DTP program.  I have barely tried it myself, but it is
>>>>>>> apparently very popular.
<snip>
>
> What do you think of this PDF?
>
> http://home.earthlink.net/~mike.terrell/CM-RC-index.pdf
>
> Any idea how I did it?

A database query dump into a2ps followed by pstopdf?
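[David's "generate a LaTeX file, then run pdfLaTeX" suggestion earlier in the thread separates data from style.  A minimal sketch of the data-to-macro-calls half; the macro name `\indexentry` and the sample rows are invented, and the macro itself would be defined in the LaTeX preamble:]

```python
# Emit one macro call per data row; the macro's definition (and hence
# the entire visual style -- leaders, fonts, links) lives in LaTeX.
def latex_escape(text):
    """Escape the LaTeX special characters likely to appear in data.
    (A real escaper must also handle backslash itself.)"""
    for ch in "&%$#_":
        text = text.replace(ch, "\\" + ch)
    return text

def rows_to_latex(rows):
    return "\n".join(
        "\\indexentry{%s}{%s}" % (latex_escape(a), latex_escape(b))
        for a, b in rows)

print(rows_to_latex([("C&M tape", "p. 12"), ("RC filter", "p. 7")]))
# → \indexentry{C\&M tape}{p. 12}
#   \indexentry{RC filter}{p. 7}
```

[Changing the look of the whole index then means editing one LaTeX macro, not regenerating the data.]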
-- Randy Yates % "My Shangri-la has gone away, fading like Digital Signal Labs % the Beatles on 'Hey Jude'" yates@digitalsignallabs.com % http://www.digitalsignallabs.com % 'Shangri-La', *A New World Record*, ELO   0 Reply Randy 12/23/2010 9:14:39 PM On 23/12/10 21:13, D Yuniskis wrote: > Hi David, > Here in Norway, Christmas is celebrated mainly on the 24th rather than the 25th. So if my replies in this thread are short or non-existent, it's not because the posts are no longer interesting - it's just I don't have time to read them or reply to them. > On 12/23/2010 2:45 AM, David Brown wrote: >>>>>> That would be scribus <http://www.scribus.net/> - it is, AFAIK, /the/ >>>>>> open source DTP program. I have barely tried it myself, but it is >>>>>> apparently very popular. And as with many such tools, it is >>>>>> cross-platform - try it with Linux, Windows or MacOS as you will. > > I played with the Windows version for an hour or so last night. > (sigh) It's got a *long* way to go. I would consider it: > "Wordpad with Tassles" (does more than Wordpad but not enough > to make you want to adopt it IN PLACE of Wordpad). > > You might consider looking at the FM "tryout" (I think Adobe still > offers one?). I don't recall how it is crippled (features, time, > etc.). But, you would be able to see the sorts of things you > can do and the effort required to do so. > > (E.g., for scribus, I tried to create a little document with > a table, a photo, an illustration, etc. -- just to see what it > felt like) > There is no real chance of me ever buying FrameMaker (either at home or at the office), so I'm not going to bother testing it. I will give Scribus a shot some time for interest, but I doubt if I'll make much use of it. Most of my writings are technical, and I use either LaTeX if I can, or OOO if I have to. Interactive DTP is just for fun in my case. >>> I'm not a zealot. 
I don't want to make a career out of maintaining >>> my tools (unless I have to) but, rather, want to use a tool to get >>> a job done. *Now* -- not when they get around to fixing some >>> bug that is standing in my way (note that this critique applies >>> equally to commercial products -- perhaps even MORESO as you are >>> entirely at THEIR mercy as to when and IF they will fix the problem!). >> >> I understand entirely. Sometimes it is fun to play with software, try it >> out, and see how it works (or doesn't work). But most of the time, >> software is just a tool, and we expect it to work as it should. > > Exactly. I doubt many carpenters go home after work and play with > their hammers! There are enough things that I *must* (or *choose* to) > do with a computer so spending extra time is not high on my list. > Sort of like "window shopping" (do you REALLY have that much free > time that you can waste it passively looking at merchandise without > an *intent* to purchase??) > >>>> I am merely making the suggestion of Scribus for anyone wanting to try >>>> it. While I doubt that you would switch to using it, you may be >>>> interested in trying it for comparison. >>> >>> The problem I find with many "packages" is getting all the right >>> cruft in place that the package depends on. In the UN*X world, >>> you have to make a commitment towards a particular direction >>> as everything BUILDS on everything else. By contrast, the Windows >>> world rebundles (usually needlessly reinventing!) all the cruft that >>> a particular application is likely to need *with* the application >>> (hence the bloat -- in both cases). >> >> Actually, Linux distributions have pretty much solved this problem years >> ago - tools like "apt" and "yum", or their gui front-ends, are excellent >> at finding and automatically installing all the required packages, >> libraries, etc., that are needed. 
>> So on a Ubuntu, I just type "apt-get install scribus", and apt pulls in
>> python, cups libraries, and whatever else scribus needs. One can make
>> many accusations about this system, but not bloat - it is very much
>> about re-use and sharing of packages and libraries.
>
> The *BSD's have a "package" system with the same functionality.
> As with any "true" UN*X tool, it just pieces together actions
> performed by other (existing) tools. E.g., consult "database"
> for package (to identify dependencies), "install" dependencies
> (which requires examining database for that package's dependencies,
> etc.), etc.

The BSD "ports" system has a lot in common with the various Linux package
managers, such as apt, yum, portage, etc. There are differences in the
details and the functionality, but all are designed to make it easy to get
hold of a package and any other packages that it depends on, and to keep
everything updated (if you want it to).

> My "bloat" comment is that everything calls other "packages" in.
> And, that those packages tend to have been created independent of
> the packages that rely on them.
>
> So, for example, if a "program" (getting away from notion of
> "package") needs to be able to *display* a PNG, it drags in
> the *entire* png package (which has far more capabilities)
> instead of *just* that "display PNG" capability.

I think you are exaggerating here. In a great many such cases, these
libraries are shared by a lot of programs on the system. No "apt-get" is
going to install "libpng", because the basic installation already has a
dozen other programs that use the library. The same thing applies to other
common libraries. And for rarer libraries, they are often made to work
with the main package (and thus there is little extra). I'll not claim
there is no wastage, just that there isn't much extra. It is certainly an
order of magnitude better than the Windows style of including copies of
every library with every program.
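The "consult the database for dependencies, install the dependencies first, recursing as needed" scheme described above can be sketched in a few lines of shell. This is a toy illustration only: the deps.txt format and all the package names are invented for the example, and real pkg_add/apt/yum databases look nothing like this.

```shell
#!/usr/bin/env bash
# Toy dependency resolver. deps.txt is a made-up "database":
# each line is "package: dependency dependency ...".
cat > deps.txt <<'EOF'
scribus: qt libpng
qt: libpng zlib
libpng: zlib
zlib:
EOF

SEEN=""
resolve() {   # print $1's dependencies in install order, each only once
    local d
    for d in $(grep "^$1:" deps.txt | cut -d: -f2); do
        case " $SEEN " in
            *" $d "*) ;;                          # already scheduled
            *) SEEN="$SEEN $d"; resolve "$d"; echo "$d" ;;
        esac
    done
}

resolve scribus
echo scribus    # the requested package itself installs last
```

Against the toy database this prints zlib, libpng, qt, scribus: every dependency before anything that needs it, with the shared libpng/zlib pulled in only once -- which is the whole "re-use, not bloat" point being made above.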
>> In fact, I find one of the biggest problems with the Linux way of >> handling packages is that it is hard to get the bloat you sometimes >> want. For example, if you want to install two different versions of the >> same program, it is easy in windows - just pick different paths during >> installation (assuming it's not a program that claws its way deep into >> the system and the registry). With Linux, this needs a lot more thought >> and work - the standard installation procedure is so easy, and so >> automatic, that it is hard to do non-standard installations. > > If you use the *BSD "package" system, you suffer the same fate. > Things end up (in the file system) wherever the package "author" > decided to put them. And, you get exactly the "features" that > he decided upon when he created the package. > >>> If you just install a prebuilt package, you are putting your >>> faith in the "builder" that he understood the various (inevitable) >>> compiler warnings, dependancies, etc. and made smart choices about >>> what to do in each case (too often, people just blindly build things >>> and as long as make ends "successfully" they figure "all is well"). >> >> How is that any different from anything else you install on your >> machine, Windows or Linux? You are also putting your faith in the >> programmer that wrote the software in the first place. All I can say is > > The *programmer* is different from the "package maintainer"! True, but who is to say that the programmer is better at this job than the package maintainer? As an example, the typical programmer has tested his code on one or two machines, probably with the same cpu architecture. The package maintainer will test on dozens, and do builds on multiple cpu architectures, and will integrate with system testing on hundreds or thousands of systems. > E.g., I wouldn't question Wolfram's abilities. 
OTOH, when > "John Doe" *packages* his creation for my use, I don't have > anywhere near the confidence in John Doe (how familiar is > he with the actual "product"? How familiar is he with the > package system? How clever is he at getting a package to > support flexible configuration? Is this something he just > does "in his spare time" -- or, is he passionate about it?). > >> that it is worth getting the main parts of your system from a source >> that has good build and test procedures - when your distro comes from > > <grin> Me! ;-) > >> major players like Red Hat or Ubuntu, you can be reasonably confident. >> As you stretch out, getting packages from smaller groups or additional >> repositories, you might get more problems - in particular, the testing >> will not have covered the same range of systems. >> >>> [I build every application on my UN*X boxen from scratch so I >>> have a better feel for what *might* go wrong "in use"] >> >> Building from scratch is sometimes the best answer - I have typed the >> "./configure && make && make install" mantra many times myself. It can >> also be educational. But it is a lot more time and effort. Normally I > > Exactly ---------------^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > > This is why I am slow to upgrade... "chase the bleeding edge". > If I have something (tool) that works -- at least "well enough > for my current needs" -- then why waste time "upgrading"? > (you can spend every waking minute "upgrading" *something*) > > When I build from scratch, I see what goes into a "package"; > what options are present in that particular "configure"; etc. > I also get a chance to look at what decisions/conclusions > configure (as well as the rest of the make) comes to > ("Um, I have libfoo installed! Why didn't it *find* it?"). > > The pkgsrc system in *BSD lets me explore packages without > embracing them. 
For example: > > "make fetch-list" gives me a list of everything that has > to be downloaded (not currently on my system) to build the > package. The process recursively examines each dependency > so I can get a feel for how much "work" is involved -- and > how much potential there is for "fixups". > > "make fetch" obviously *gets* whatever is needed (using > URLs listed in the package's definition file) > > "make extract" unpacks things and sets up a "work" directory > in which to build the package. > > "make patch" applies any particular patches (including those that > are specific to the package system itself -- like rewriting > the original *fetched* distribution's makefiles to site the > results in specific places). > > "make configure" runs configure et al. > > At this point, I can rummage through the "work" hierarchy to > see what's there and what *will* happen when the actual "make" > is invoked. > You can do pretty much the same things with apt (and its underlying dpkg tools) and yum (and rpm tools). The details are different, and sometimes you need to install extra utilities, but the functionality is all there for those that want it. You can be sure that the *BSD and various Linux distribution developers learn about each others tools, and take inspiration from them. > I typically run "make > dgy 2>&1" so I can examine the messages > emitted during the build. If that looks right, then proceed > to "make install" (again capturing stderr/out) and "make clean". > > The "make install" updates the database of *installed* packages > on the system so anything that subsequently "requires" this > package sees that it is there. > > I have my own conventions for "what goes where". And, I > notice some packages are inconsistent (there are guidelines > for package creation but adherence is optional :> ). 
So, > while most of these add-on packages go in the /usr/pkg > hierarchy, some I will pull into /usr/local or other places > (e.g., I mount /usr/pkg late and some of these are things > I might want to use even with *just* the root filesystem > mounted R/O) > >> expect software to just work straight out the box, whether it be a >> "setup.exe" in the windows world, or an "apt-get" or "yum install" in >> Linux. And most of the time, my expectations are justified (or they can >> be lowered until they are justified...)   0 Reply David 12/23/2010 9:29:47 PM On 23/12/10 19:04, D Yuniskis wrote: > Hi David, > > On 12/23/2010 3:11 AM, David Brown wrote: > >>>>> So, I can start the flow in a 3"x4" box on page 1 and continue it >>>>> in a 3"x6" box on page 7 (if that made sense). With this in mind, >>>>> you can see how some publications (e.g., advertisements!) might >>>>> want to use things like feathering in particular "frames". >>>> >>>> That's a little different from what I am used to with TeX / LaTeX, >>>> which >>>> treats everything as a whole. (You /can/ make framed boxes like this - >>>> sometimes that's very useful - but it's not the usual method). But >>>> then, >>> >>> FM (and most other WYSIWYG DTP packages that I've played with) *loves* >>> to break things into little "units" whenever it can do so without >>> looking like it is being obsessive. :> E.g., each "cell" in a >>> table has it's own specific formatting. >> >> TeX also breaks things up into little boxes (mostly hbox's and vbox's), >> but most of that is handled behind the scenes. > > My recollection of that is different. FM tries to treat every > bit of *text* as a little "unit" (not every bit of page real-estate). > [the term "paragraph" is approximately correct -- though it also > applies to more than *just* (classical) paragraphs...] > > E.g., the caption on an illustration, the text in each "cell" > in a table, the page number at the bottom of the page, etc. 
> You can apply "styles" to almost everything in FM -- "paragraphs",
> *characters*, tables, pages, etc. So, you might apply the
> "chapter title" PARAGRAPH style to the text: "Earthworm Mating Habits".
> This might cause the text to appear in a large decorative typeface,
> right justified on the line, 2.7 inches down from the top of the page,
> etc.
>
> Within that "string", you could apply the "draw attention" CHARACTER
> style (all these names are user defined) to the substring 'Mating'.
> This might, for example, cause them to be displayed in a different
> typeface, as small caps, bold, italic and in red ink (thereby
> "drawing attention" to them! :> )
>
> When the table of contents is built, that string will appear (by
> virtue of a cross-reference) yet will have a different "paragraph"
> style applied to it -- perhaps "TOC chapter". This would undoubtedly
> use a *smaller* typeface with different margins (so it appears in
> the right spot in the ToC), etc.
>
> I think Word (et al.) have similar capabilities wrt "styles".

Word is poor at styles. Its biggest problem, however, is not lack of
style functionality - but that users are encouraged to manually format
everything by selecting fonts, sizes, etc., for text using toolbar
buttons rather than styles. It's a guaranteed way to make the document
inconsistent. OOO at least makes it easier to use styles and harder to
use manual formatting, which helps document layout.

>>>> Frame Maker and TeX are designed to work in very different ways - FM is
>>>> much more of a visual layout tool, while TeX has a strong separation
>>>> between visual appearance and textual content, and runs as a batch
>>>> process.
>>>
>>> Exactly. Though I think there are packages (lyx?) that wrap
>>> an interactive user interface around it.
>>>
>>> I've just found DTP to be an inherently interactive process.
>>> You tweak one thing and see what else changes as a result.
>>> E.g., if I put a non-breaking space here to keep this phrase >>> intact IN THIS INSTANCE, what gets munged in the *next* paragraph >>> that I will need to "fix"? >> >> DTP for a document like this is more of a visual art, and thus an >> interactive program is essential. > > I've found that to be the case for almost everything I "typeset". > I use *lots* of illustrations, tables, cross references, etc. > Trying to insert them in the text directly (as if writing HTML > in vi(1)) is too tedious (FM supports a "visible" file encoding > that you technically *could* "edit" directly). > > It's much more expedient to just race through the document > "tagging" paragraphs with the "right" styles, etc. > > Likewise, inserting a table or a photo is much more intuitive > (<Insert Table>, style "Short summary"; <Import Photo>; etc.) > And, you can then scribble on things directly (e.g., adding > callouts to an illustration). > >> lyx is more of a glorified TeX-specific editor - it doesn't attempt to >> give you a true "live" view of your output document. I tried it a bit, > > Oh. :< I thought it was a WYSIWYG layer atop TeX. :-/ > No, lyx is more a "what you see is a bit like what you get" layer. While editing, you see results that are closer to the output than you would with a simple text editor, but not /that/ close. It will cope with things like bold and italic, some fonts and sizes, and a bit of symbols. But it won't get line and page breaks right, it won't handle macros other than predefined ones (and even then, it assumes they have the standard definitions, while (La)TeX lets you re-define everything). If you use (La)TeX in a relatively simple way, sticking strictly to the basic standard styles, then it is maybe useful. But it never suited me. Mind you, it was /many/ years ago that I tried it. >> but I didn't like it - since it can't handle any of the more interesting >> features of (La)TeX, it gives you very little. 
The real fun with LaTeX >> comes when you make macros and "program" your document - you can't see >> any of that with lyx. And because lyx is not quite standard LaTeX, you >> don't have such easy access to the enormous numbers of existing >> packages, styles, and add-ons. >> >>>> I managed to get by for years without using a word processor at all - >>>> LaTeX handled everything I needed. But unfortunately that poses certain >>>> challenges for working with colleagues and customers who don't use it. >>> >>> That's why I opted on using PDF's for publications and notes. >>> It's too hard to prepare things that others will be able to >>> review AS YOU INTENDED THEM using other presentation formats. >> >> pdf is ideal for delivering documentation to others. But sometimes it is >> necessary to work together on a document - I write some, someone else >> writes other parts. And that means using the lowest common denominator >> tool, which is typically a word processor. If I can at least get other >> people to use styles properly, then I can usually cope without tears. > > You might find it easier to just let folks "feed" straight ASCII > to someone who does the editting, etc. This helps ensure a > consistent "style" imposed on the results. Usually, getting > the *content* right is where most of the work lies. E.g., if > someone explains/describes something, I can completely rewrite it > rather quickly (much faster than if I had to come up with the > content myself). Then, it "reads" consistently with the other > parts of the document (also, many people are terrible writers and > are happy to have someone else "dress up" their prose.) > That's okay if I am accepting work from others, and am happy to do all the layout, sectioning, etc. It loses a lot if there is non-ASCII data (tables, pictures, etc.). >>> The biggest single problem with the layout is that it isn't >>> very "senior friendly". The typefaces are too small, columns >>> too tight, etc. 
OTOH, stepping up to a two-column layout >>> (almost essential if you use a larger typeface for the body >>> of the text) would have made the document a lot longer. >>> (and printing and mailing costs disproportionately so) >>> E.g., examine the "before" and "since" editions -- much >>> looser layouts but a lot less *content*. (consider the >>> last page is devoid of *real* content as it acts as a >>> mailing label -- the newsletter is folded in half on the >>> solid line) >> >> Conflicting requirements are always a challenge - it's what makes the >> day job fun. Hands up all the engineers in these newsgroups who have >> been asked to make something small, good /and/ fast! > > Yes. Now imagine marketing, manufacturing and engineering all > telling you *different* goals -- and none actually doing any > of the *work*! :> > >>>> Other than that, it is - as I said before - very well done. >>> >>> <shrug> It was my first "non-technical" publication. I try >>> to make each new project a "learning experience". So, in >>> that sense, it was a success. Just figuring out how to merge >>> *my* writing (and editing) with that of others without having >>> it "obvious" was an interesting experience! >>> >>> From the Friends' point of view, it was a *great* success as >>> it attracted a lot of attention -- which, after all, was it's >>> intent! >> >> Be careful - you'll end up with a job for life. > > No, some wanted an "boss-worker" relationship. If I'm *giving* > my time, I sure don't *want* (nor NEED) a "boss" -- especially > when I'm the one with the DTP experience! :> (no hard feelings; > I just brought them a plate of cookies last week!) > > It appears they found someone to take over the task (though > it looks like it took them a full year to do so :-/ ). This is > good as I believe public libraries to be a real asset (though > I am deeply disappointed at how *loud* they have become!)   0 Reply David 12/23/2010 9:51:48 PM On 12/23/2010 1:36 PM, Michael A. 
Terrell wrote:

[full re-quote of the earlier pkgsrc/apt/Scribus discussion snipped]

> What do you think of this PDF?
> http://home.earthlink.net/~mike.terrell/CM-RC-index.pdf

If you want a *critique*...
:> I like a "fence" on the sides of tables (doesn't have to be between
columns) -- it helps constrain your eye so that it knows where the edges
of the table are located (see below).

For *long* tables (also see below), I like to add shading to help
differentiate one row from another. E.g., much like old computer "green
fan fold". This helps your eyes walk across a line without losing track
of *which* line they should be following. For *wide* tables (which this
is NOT!), I would opt for perhaps "every other" line being shaded. For
longer tables, perhaps groups of *4* lines (i.e., 4 lines shaded followed
by 4 unshaded lines). If there is some strategic value associated with
some other "grouping" (e.g., every 5 lines for a table listing the
integers from 1 to 100), then that would influence my choice. You could
do similar with "lines" under every N'th table line. [FM lets you specify
these things in the "table's format"]

The same sort of thing applies to wide tables -- using lines between
*select* columns (of differing thicknesses to set apart "groups" of
related columns).

I would also have shaded the background of the "header" line to set it
apart from the table's "body".

The page numbers would be either "center justified" or "right/decimal
justified". The "Page" heading would then be centered above that.

The column contents aren't "synchronized" with each other. I.e., if you
were to draw a line under any *single* entry (in any column) and extend
that line to the page's borders, you'd find the text on neighboring lines
floating above or below this "reference". With *no* shaded backgrounds or
lines between rows (as in your example), it is less obvious. By the same
token, it makes it harder to do things like count how many entries are in
your table; or find the 14th one in column 3; etc. (note that this is
also made more difficult by the variable *height* of each entry in the
table).
In the absence of a fence around the table, I would opt for a thin line
in each gutter (see below) to reinforce the visual structure of the table
(i.e., it is really one *long* table "folded" onto the page).

The "See <foo>" references I would write as "(/see/ <foo>)" (why
capitalize "See"?). This makes the *reference* subordinate to the actual
"datum" and helps differentiate the cross reference from parenthetical
cases wherein you are expanding on an abbreviation (Automatic Musical
Instruments).

> Any idea how I did it?

Lots of ideas as to how you *could* have done it. :> How *I* would do it
in FrameMaker:

Create a 4 column "master page" layout (or, just change the current
"frame" on "this page" to be 4 columns... depends on whether or not you
want to reuse this stuff). Pick an appropriate gutter size (if you put
sides on the table, you can minimize the gutter; otherwise, I would go
for a "noticeable" size gutter -- ideally with a 1 point line down the
center).

"Insert Table", 2 columns, ~100 (?) rows, 1 heading row. Specify the
shading/lines that I mentioned above. Fill in the Table Title ("Index to
SAMS CM & RC Manuals"). Fill in the two "header cells" ("Brand", "Page").

Select right column ("Page"). Specify "right" justification (or "decimal"
if you want to go that route). Pick typeface, etc.

Select the header row. Change to desired typeface, bold, etc. Specify
"center" (justification).

Fill in table contents... FrameMaker will fold the table into the next
column once it reaches the bottom of the current column. Then the next
column. And next. etc. Continuing onto the next *page*, as necessary.
Yet, it will still exist as "table 1" (conceptually). If you don't have
enough rows, click "add row above/below" (after selecting an approximate
number of rows to add -- so you don't have to keep adding one at a
time!). When done, delete any unused rows.

I think MSWord has a "convert to table" capability (wraps a table around
a "delimited" set of lines of text).
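As an aside, the "one long table folded onto the page" layout that FrameMaker produces automatically can be previewed from a shell with pr (GNU coreutils): it fills each column top-to-bottom before moving right, exactly the fold being described. The brand names here are invented for the example, not taken from the actual index.

```shell
# Fold a one-entry-per-line list into 3 columns that read down, then
# across (pr's default multi-column mode; -t suppresses page headers).
printf '%s\n' Admiral Airline Andrea Arvin Bendix Capehart > brands.txt
pr -t -3 -w 45 brands.txt
```

With six entries this yields two rows of three columns: Admiral/Airline fill the first column, Andrea/Arvin the second, Bendix/Capehart the third -- the same vertical fill a folded table uses.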
I think it also has provisions to add lines between rows/columns,
etc.  I'm not sure how/if MSWord lets you 'fold' tables, though.

You can also build brute force with tabs, etc.  But, that gets a lot
harder to maintain (e.g., what happens if you wanted to insert
"Victor Corporation of Japan" in there??)

 0
Reply D 12/23/2010 10:01:46 PM

On 12/23/2010 3:01 PM, D Yuniskis wrote:

<grrr>  Apologies for not eliding all that cruft -- I got distracted
looking at the PDF  <:-(

Ah, well... I guess we're all entitled to ONE mistake each YEAR... ;-)

 0
Reply D 12/23/2010 10:05:58 PM

Hi Randy,

On 12/23/2010 2:14 PM, Randy Yates wrote:
> On 12/23/2010 03:36 PM, Michael A. Terrell wrote:
>> What do you think of this PDF?
>>
>> http://home.earthlink.net/~mike.terrell/CM-RC-index.pdf
>>
>> Any idea how I did it?
>
> A database query dump into a2ps followed by pstopdf?

Ah!  (smacks head)  I had interpreted "how" to mean "how I *typeset*
it"  <:-/

Crap, that's *two*... and I had such high hopes for 2010...

 0
Reply D 12/23/2010 10:24:26 PM

Hi David,

On 12/23/2010 2:51 PM, David Brown wrote:
>> E.g., the caption on an illustration, the text in each "cell"
>> in a table, the page number at the bottom of the page, etc.
>>
>> You can apply "styles" to almost everything in FM -- "paragraphs",
>> *characters*, tables, pages, etc.  So, you might apply the
>> "chapter title" PARAGRAPH style to the text: "Earthworm Mating Habits".
>> This might cause the text to appear in a large decorative typeface,
>> right justified on the line, 2.7 inches down from the top of the page,
>> etc.
>>
>> Within that "string", you could apply the "draw attention" CHARACTER
>> style (all these names are user defined) to the substring 'Mating'.
>> This might, for example, cause them to be displayed in a different
>> typeface, as small caps, bold, italic and in red ink (thereby
>> "drawing attention" to them!
:> )
>>
>> When the table of contents is built, that string will appear (by
>> virtue of a cross-reference) yet will have a different "paragraph"
>> style applied to it -- perhaps "TOC chapter".  This would undoubtedly
>> use a *smaller* typeface with different margins (so it appears in
>> the right spot in the ToC), etc.
>>
>> I think Word (et al.) have similar capabilities wrt "styles".
>
> Word is poor at styles.  Its biggest problem, however, is not lack of
> style functionality - but that users are encouraged to manually format
> everything by selecting fonts, sizes, etc., for text using toolbar

<frown>  I think that is more a consequence of "lack of
understanding", "lack of training", "expediency", etc.

E.g., FM has all those same buttons in the toolbar (even has a set
of toolbar buttons to quickly cycle you *through* the various
toolbars!).  So, I can turn italic, bold, small caps, etc. on/off at
will from the toolbar.  However, I know *not* to.

Instead, a set of 4 (?) buttons above the right scroll bar turn
on/off the "important things".  E.g., one calls up the "paragraph
style" menu (a floating window that lists all of the user defined
paragraph styles) while another calls up the "character style" menu,
etc.

With experience, you know to use the styles to tag "strings" with
metadata (effectively).  So, instead of applying bold or italic, you
might apply an "emphasis" character style.  Or, an "article title"
style vs. a "book title" style, etc.

One of the tricks is learning what purposes you will need to justify
each particular "style" -- the various roles "strings of characters"
can take on in a document.

> buttons rather than styles.  It's a guaranteed way to make the document
> inconsistent.  OOO at least makes it easier to use styles and harder to
> use manual formatting, which helps document layout.

This, BTW, was something I disliked about scribus...
>>> lyx is more of a glorified TeX-specific editor - it doesn't attempt to
>>> give you a true "live" view of your output document.  I tried it a bit,
>>
>> Oh.  :<  I thought it was a WYSIWYG layer atop TeX.  :-/
>
> No, lyx is more a "what you see is a bit like what you get" layer.  While
> editing, you see results that are closer to the output than you would
> with a simple text editor, but not /that/ close.  It will cope with
> things like bold and italic, some fonts and sizes, and a bit of symbols.
> But it won't get line and page breaks right, it won't handle macros
> other than predefined ones (and even then, it assumes they have the
> standard definitions, while (La)TeX lets you re-define everything).  If
> you use (La)TeX in a relatively simple way, sticking strictly to the
> basic standard styles, then it is maybe useful.  But it never suited me.
> Mind you, it was /many/ years ago that I tried it.

<frown>  So what does it *buy* you?  I.e., why bother with it?

>>> pdf is ideal for delivering documentation to others.  But sometimes it is
>>> necessary to work together on a document - I write some, someone else
>>> writes other parts.  And that means using the lowest common denominator
>>> tool, which is typically a word processor.  If I can at least get other
>>> people to use styles properly, then I can usually cope without tears.
>>
>> You might find it easier to just let folks "feed" straight ASCII
>> to someone who does the editing, etc.  This helps ensure a
>> consistent "style" imposed on the results.  Usually, getting
>> the *content* right is where most of the work lies.  E.g., if
>> someone explains/describes something, I can completely rewrite it
>> rather quickly (much faster than if I had to come up with the
>> content myself).  Then, it "reads" consistently with the other
>> parts of the document (also, many people are terrible writers and
>> are happy to have someone else "dress up" their prose.)
>
> That's okay if I am accepting work from others, and am happy to do all
> the layout, sectioning, etc.  It loses a lot if there is non-ASCII data
> (tables, pictures, etc.).

Tables are tough to pass around in a "portable" form.  But, pictures
and illustrations can be reasonably portable.

Finding someone willing to take on the "cleanup" is a bit more
challenging.  People inevitably skimp on what they *should* have
done (material presented to the "editor") so the editor/typesetter
ends up with the "dirty" end of the stick...

 0
Reply D 12/24/2010 3:26:20 AM

D Yuniskis wrote:
>
> Hi Randy,
>
> On 12/23/2010 2:14 PM, Randy Yates wrote:
>> On 12/23/2010 03:36 PM, Michael A. Terrell wrote:
>>> What do you think of this PDF?
>>>
>>> http://home.earthlink.net/~mike.terrell/CM-RC-index.pdf
>>>
>>> Any idea how I did it?
>>
>> A database query dump into a2ps followed by pstopdf?
>
> Ah!  (smacks head)  I had interpreted "how" to mean "how
> I *typeset* it"  <:-/

   That is what I mean. :)

> Crap, that's *two*... and I had such high hopes for 2010...

--
For the last time:  I am not a mad scientist, I'm just a very ticked
off scientist!!!

 0
Reply Michael 12/24/2010 4:59:17 AM

David Brown wrote:
>
> On 23/12/10 21:36, Michael A. Terrell wrote:
> <snip>
>
>> What do you think of this PDF?
>
> It's not the most exciting reading...
>
>> http://home.earthlink.net/~mike.terrell/CM-RC-index.pdf
>>
>> Any idea how I did it?
>
> The "producer" stamp just says ghostscript.  So I'm guessing that you
> first generated a postscript file, then used ps2pdf to convert it to
> a pdf.
>
> As for the postscript file, it was perhaps generated programmatically
> from an existing database or table of information.
>
> If that's the case, then you might want to consider generating the
> pdf file directly in the future.  There are a number of pdf toolkits
> around - I have used reportlab with python a number of times, and it
> makes it easy to generate some pretty reasonable pdf's.
>
> An alternative is to have a program that generates a LaTeX file, and use
> pdfLaTeX to generate the pdf itself.  This is an easy way to separate
> the data content from the style - change the style in the LaTeX part,
> and define macros for displaying the different parts of the document.
> Then the program part just generates a list of macro calls from the
> data, and you can easily experiment with different styles and effects.
> It would be much easier to get things like leaders, alternative fonts,
> clickable links, etc., in this way.

   It was an HTML page, created by importing a comma delimited
database file.  I used search & replace to break the data into cells
and lines, then added a header and footer to the raw table.  Then it
was printed to PDF995, which is a Ghostscript shell.  It took me
about five minutes to convert the raw data into the PDF.

--
For the last time:  I am not a mad scientist, I'm just a very ticked
off scientist!!!

 0
Reply Michael 12/24/2010 5:02:42 AM

D Yuniskis wrote:
>
> Lots of ideas as to how you *could* have done it.  :>
> How *I* would do it in FrameMaker:
>
> Create a 4 column "master page" layout (or, just change the
> current "frame" on "this page" to be 4 columns... depends on
> whether or not you want to reuse this stuff).  Pick an
> appropriate gutter size (if you put sides on the table, you
> can minimize the gutter; otherwise, I would go for a "noticeable"
> size gutter -- ideally with a 1 point line down the center)
>
> "Insert Table", 2 columns, ~100 (?) rows, 1 heading row.
> Specify the shading/lines that I mentioned above.
>
> Fill in the Table Title ("Index to SAMS CM & RC Manuals").
>
> Fill in the two "header cells" ("Brand", "Page").
>
> Select right column ("Page").  Specify "right" justification
> (or "decimal" if you want to go that route).  Pick typeface, etc.
>
> Select the header row.  Change to desired typeface, bold, etc.
> Specify "center" (justification)
>
> Fill in table contents...
>
> FrameMaker will fold the table into the next column once it
> reaches the bottom of the current column.  Then the next column.
> And next.  etc.  Continuing onto the next *page*, as necessary.
> Yet, it will still exist as "table 1" (conceptually).  If you
> don't have enough rows, click "add row above/below" (after
> selecting an approximate number of rows to add -- so you don't
> have to keep adding one at a time!).  When done, delete any
> unused rows.
>
> I think MSWord has a "convert to table" capability (wraps a
> table around a "delimited" set of lines of text).  I think it
> also has provisions to add lines between rows/columns, etc.
> I'm not sure how/if MSWord lets you 'fold' tables, though.
>
> You can also build brute force with tabs, etc.  But, that
> gets a lot harder to maintain (e.g., what happens if you
> wanted to insert "Victor Corporation of Japan" in there??)

   By using the HTML, it would adjust itself to fit the page. :)

   I chose the layout to look similar to the old printed index for
the manuals.  The entire document was a single HTML table.

--
For the last time:  I am not a mad scientist, I'm just a very ticked
off scientist!!!

 0
Reply Michael 12/24/2010 5:08:22 AM

On 24/12/10 04:34, D Yuniskis wrote:
> Hi David,
>
> On 12/23/2010 2:51 PM, David Brown wrote:
>>> E.g., the caption on an illustration, the text in each "cell"
>>> in a table, the page number at the bottom of the page, etc.
>>>
>>> You can apply "styles" to almost everything in FM -- "paragraphs",
>>> *characters*, tables, pages, etc.  So, you might apply the
>>> "chapter title" PARAGRAPH style to the text: "Earthworm Mating Habits".
>>> This might cause the text to appear in a large decorative typeface,
>>> right justified on the line, 2.7 inches down from the top of the page,
>>> etc.
>>>
>>> Within that "string", you could apply the "draw attention" CHARACTER
>>> style (all these names are user defined) to the substring 'Mating'.
>>> This might, for example, cause them to be displayed in a different
>>> typeface, as small caps, bold, italic and in red ink (thereby
>>> "drawing attention" to them!  :> )
>>>
>>> When the table of contents is built, that string will appear (by
>>> virtue of a cross-reference) yet will have a different "paragraph"
>>> style applied to it -- perhaps "TOC chapter".  This would undoubtedly
>>> use a *smaller* typeface with different margins (so it appears in
>>> the right spot in the ToC), etc.
>>>
>>> I think Word (et al.) have similar capabilities wrt "styles".
>>
>> Word is poor at styles.  Its biggest problem, however, is not lack of
>> style functionality - but that users are encouraged to manually format
>> everything by selecting fonts, sizes, etc., for text using toolbar
>
> <frown>  I think that is more a consequence of "lack of understanding",
> "lack of training", "expediency", etc.
>

MS are good at making software that is easy to get started with,
without considering the long-term effects.  So the great majority of
Word users use the quick-to-learn manual formatting, rather than
learning to use the tool properly.  Even the few that get proper
training rarely use styles effectively - they use manual formatting
because they are lazy and it gets the job done fastest, even if the
end result is poorer and the long-term efficiency is worse.

Word /does/ have usable tools for proper structured and consistent
document layout.  They are not great, and the program is often
unstable when documents get large, but they do exist - if people
would use them.  MS software is like a reasonable supermarket,
selling fairly healthy food that people can take home and cook.  And
then they've put a burger bar by the entrance.

> E.g., FM has all those same buttons in the toolbar (even has a
> set of toolbar buttons to quickly cycle you *through* the
> various toolbars!).  So, I can turn italic, bold, small caps,
> etc. on/off at will from the toolbar.  However, I know *not* to.
>
> Instead, a set of 4 (?) buttons above the right scroll bar turn on/off
> the "important things".  E.g., one calls up the "paragraph style"
> menu (a floating window that lists all of the user defined paragraph
> styles) while another calls up the "character style" menu, etc.
>
> With experience, you know to use the styles to tag "strings" with
> metadata (effectively).  So, instead of applying bold or italic,
> you might apply an "emphasis" character style.  Or, an "article
> title" style vs. a "book title" style, etc.
>
> One of the tricks is learning what purposes you will need to
> justify each particular "style" -- the various roles "strings
> of characters" can take on in a document.
>
>> buttons rather than styles.  It's a guaranteed way to make the document
>> inconsistent.  OOO at least makes it easier to use styles and harder to
>> use manual formatting, which helps document layout.
>
> This, BTW, was something I disliked about scribus...
>
>>>> lyx is more of a glorified TeX-specific editor - it doesn't attempt to
>>>> give you a true "live" view of your output document.  I tried it a bit,
>>>
>>> Oh.  :<  I thought it was a WYSIWYG layer atop TeX.  :-/
>>
>> No, lyx is more a "what you see is a bit like what you get" layer.  While
>> editing, you see results that are closer to the output than you would
>> with a simple text editor, but not /that/ close.  It will cope with
>> things like bold and italic, some fonts and sizes, and a bit of symbols.
>> But it won't get line and page breaks right, it won't handle macros
>> other than predefined ones (and even then, it assumes they have the
>> standard definitions, while (La)TeX lets you re-define everything).  If
>> you use (La)TeX in a relatively simple way, sticking strictly to the
>> basic standard styles, then it is maybe useful.  But it never suited me.
>> Mind you, it was /many/ years ago that I tried it.
>
> <frown>  So what does it *buy* you?  I.e., why bother with it?
>

As you can guess, I /don't/ bother with it.  However, if you are
happy with the restricted set of (La)TeX that it works with, then it
is much less intimidating and has a much easier learning curve than
"proper" (La)TeX.

>>>> pdf is ideal for delivering documentation to others.  But sometimes
>>>> it is
>>>> necessary to work together on a document - I write some, someone else
>>>> writes other parts.  And that means using the lowest common denominator
>>>> tool, which is typically a word processor.  If I can at least get other
>>>> people to use styles properly, then I can usually cope without tears.
>>>
>>> You might find it easier to just let folks "feed" straight ASCII
>>> to someone who does the editing, etc.  This helps ensure a
>>> consistent "style" imposed on the results.  Usually, getting
>>> the *content* right is where most of the work lies.  E.g., if
>>> someone explains/describes something, I can completely rewrite it
>>> rather quickly (much faster than if I had to come up with the
>>> content myself).  Then, it "reads" consistently with the other
>>> parts of the document (also, many people are terrible writers and
>>> are happy to have someone else "dress up" their prose.)
>>
>> That's okay if I am accepting work from others, and am happy to do all
>> the layout, sectioning, etc.  It loses a lot if there is non-ASCII data
>> (tables, pictures, etc.).
>
> Tables are tough to pass around in a "portable" form.  But,
> pictures and illustrations can be reasonably portable.
>
> Finding someone willing to take on the "cleanup" is a bit
> more challenging.  People inevitably skimp on what they
> *should* have done (material presented to the "editor")
> so the editor/typesetter ends up with the "dirty" end of
> the stick...

 0
Reply David 12/24/2010 1:51:20 PM

On 12/23/2010 9:59 PM, Michael A. Terrell wrote:
>
> D Yuniskis wrote:
>>
>> Hi Randy,
>>
>> On 12/23/2010 2:14 PM, Randy Yates wrote:
>>> On 12/23/2010 03:36 PM, Michael A. Terrell wrote:
>>>> What do you think of this PDF?
>>>>
>>>> http://home.earthlink.net/~mike.terrell/CM-RC-index.pdf
>>>>
>>>> Any idea how I did it?
>>>
>>> A database query dump into a2ps followed by pstopdf?
>>
>> Ah!  (smacks head)  I had interpreted "how" to mean "how
>> I *typeset* it"  <:-/
>
> That is what I mean. :)
>
>> Crap, that's *two*... and I had such high hopes for 2010...

Trivial with  < http://www.tinaja.com/glib/gonzotut.pdf >

--
Many thanks,

Don Lancaster  voice phone: (928)428-4073
Synergetics  3860 West First Street  Box 809  Thatcher, AZ 85552
rss: http://www.tinaja.com/whtnu.xml  email: don@tinaja.com

Please visit my GURU's LAIR web site at http://www.tinaja.com

 0
Reply Don 12/24/2010 4:24:32 PM

Hi David,

On 12/23/2010 2:29 PM, David Brown wrote:
> Here in Norway, Christmas is celebrated mainly on the 24th rather than

Then "Merry Christmas" (let's see... I guess it's still the 24th,
there, as I type this)

> the 25th.  So if my replies in this thread are short or non-existent,
> it's not because the posts are no longer interesting - it's just I don't
> have time to read them or reply to them.

Understood.  You'd rather be lying flat on your face, hung over from
too much eggnog...  ;-)

>> You might consider looking at the FM "tryout" (I think Adobe still
>> offers one?).  I don't recall how it is crippled (features, time,
>> etc.).  But, you would be able to see the sorts of things you
>> can do and the effort required to do so.
>>
>> (E.g., for scribus, I tried to create a little document with
>> a table, a photo, an illustration, etc. -- just to see what it
>> felt like)
>
> There is no real chance of me ever buying FrameMaker (either at home or
> at the office), so I'm not going to bother testing it.  I will give
> Scribus a shot some time for interest, but I doubt if I'll make much use
> of it.  Most of my writings are technical, and I use either LaTeX if I
> can, or OOO if I have to.  Interactive DTP is just for fun in my case.

Understood.
It is, however, interesting to see what can be done (and done
differently) in an interactive environment.

>>>> The problem I find with many "packages" is getting all the right
>>>> cruft in place that the package depends on.  In the UN*X world,
>>>> you have to make a commitment towards a particular direction
>>>> as everything BUILDS on everything else.  By contrast, the Windows
>>>> world rebundles (usually needlessly reinventing!) all the cruft that
>>>> a particular application is likely to need *with* the application
>>>> (hence the bloat -- in both cases).
>>>
>>> Actually, Linux distributions have pretty much solved this problem years
>>> ago - tools like "apt" and "yum", or their gui front-ends, are excellent
>>> at finding and automatically installing all the required packages,
>>> libraries, etc., that are needed.  So on a Ubuntu, I just type "apt-get
>>> install scribus", and apt pulls in python, cups libraries, and whatever
>>> else scribus needs.  One can make many accusations about this system, but
>>> not bloat - it is very much about re-use and sharing of packages and
>>> libraries.
>>
>> The *BSD's have a "package" system with the same functionality.
>> As with any "true" UN*X tool, it just pieces together actions
>> performed by other (existing) tools.  E.g., consult "database"
>> for package (to identify dependencies), "install" dependencies
>> (which requires examining database for that package's dependencies,
>> etc.), etc.
>
> The BSD "ports" system has a lot in common with the various Linux

*BSD's tend to treat "ports" as things you build and "packages" as
those same things, prebuilt.  Then, you also have to deal with the
various *ports* to different CPU's...  :-/  Sort of like the
terminology overloading that comes to play wrt "partitions", etc.
<frown>

> package managers, such as apt, yum, portage, etc.
> There are differences
> in the details and the functionality, but all are designed to make it
> easy to get hold of a package and any other packages that it depends on,
> and to keep everything updated (if you want it to).

I don't think the *BSD approach automates "updates".  IIRC, there
are hooks that let you periodically check to see if any of the
packages on *your* machine have (new) security advisories that are
applicable.  But, I think you still have to explicitly decide that
you want to go find a new version, etc.

Most "packages" are only supported in one or two "releases" (at a
time).  So, going back to a particular release may be impossible
once the system "marches on" (i.e., I keep snapshots of pkgsrc).

>> My "bloat" comment is that everything calls other "packages" in.
>> And, that those packages tend to have been created independent of
>> the packages that rely on them.
>>
>> So, for example, if a "program" (getting away from notion of
>> "package") needs to be able to *display* a PNG, it drags in
>> the *entire* png package (which has far more capabilities)
>> instead of *just* that "display PNG" capability.
>
> I think you are exaggerating here.  In a great many such cases, these
> libraries are shared by a lot of programs on the system.  No "apt-get" is
> going to install "libpng", because the basic installation already has a
> dozen other programs that use the library.  The same thing applies to
> other common libraries.  And for rarer libraries, they are often made to

This is the difference between the Linux camps and the BSD's.

First, Linux is *just* the "kernel".  You can't do anything without
the rest of the distro (at least, that's my understanding).  With
the BSD's, you end up with a functional system -- the typical "core
services and utilities" (X, inetd, ntpd, ftpd, nfsd/c, etc.) right
out of the box.  And, there is no (real) change in them between
BSD's.
OTOH, the "rest of the distro" in the Linux world drags in things
that BSD treats as "applications" -- relegated to the packages/ports
system.  For example, starting X leaves me with a "rootweave" screen
and an xterm.  I *think* twm is probably there.  But, kde, gnome,
etc. -- none of that cruft is even present in the filesystem!  Want
to browse the web?  Sorry.  Want to view a PNG?  Sorry (i.e., libpng
doesn't exist!).

Where folks will tout "damn small linux", this is relatively *easy*
to get to in the BSD camps -- because none of the "other crap" is
there to begin with!  So, each time you drag a "package/port" onto
your machine, you start pulling in all this extra cruft.

When I build ports, I keep an ordered list of what I build and when.
This allows me to tackle the "common prerequisites" early so that I
am sure I have them configured the way *I* want them -- without
relying on the system building them 'automatically' to satisfy a
prerequisite for some other "port".

> work with the main package (and thus there is little extra).  I'll not
> claim there is no wastage, just that there isn't much extra.  It is
> certainly an order of magnitude better than the windows style of
> including copies of every library with every program.

>>>> If you just install a prebuilt package, you are putting your
>>>> faith in the "builder" that he understood the various (inevitable)
>>>> compiler warnings, dependencies, etc. and made smart choices about
>>>> what to do in each case (too often, people just blindly build things
>>>> and as long as make ends "successfully" they figure "all is well").
>>>
>>> How is that any different from anything else you install on your
>>> machine, Windows or Linux?  You are also putting your faith in the
>>> programmer that wrote the software in the first place.  All I can say is
>>
>> The *programmer* is different from the "package maintainer"!
>
> True, but who is to say that the programmer is better at this job than
> the package maintainer?
> As an example, the typical programmer has tested
> his code on one or two machines, probably with the same cpu

Granted.  In one case, you rely on the programmer to have written a
portable piece of code.  OTOH, you rely on the "package author" to
know how to ensure things are portable AS WELL AS understand the
code he is "porting" (packaging?).

My experience has been that people fixated on "porting" the code
often are just trying to get it to compile -- hopefully "clean".
But, they don't often look *into* the code to see if that "clean
build" *should* have been clean or if there isn't an error lurking
behind this seemingly "perfect" build.

[and, of course, more often than not, the build *isn't* clean]

> architecture.  The package maintainer will test on dozens, and do builds
> on multiple cpu architectures, and will integrate with system testing on
> hundreds or thousands of systems.

 0
Reply D 12/24/2010 8:28:01 PM

Hi David,

On 12/24/2010 6:51 AM, David Brown wrote:
>>> Word is poor at styles.  Its biggest problem, however, is not lack of
>>> style functionality - but that users are encouraged to manually format
>>> everything by selecting fonts, sizes, etc., for text using toolbar
>>
>> <frown>  I think that is more a consequence of "lack of understanding",
>> "lack of training", "expediency", etc.
>
> MS are good at making software that is easy to get started with, without
> considering the long-term effects.  So the great majority of Word users
> use the quick-to-learn manual formatting, rather than learning to use
> the tool properly.  Even the few that get proper training rarely use
> styles effectively - they use manual formatting because they are lazy
> and it gets the job done fastest, even if the end result is poorer and
> the

I suspect the "difference" that sets FM users apart is that they are
concerned with "publishing" and not just "getting a document out the
door".
I.e., if forced to lay out a manual in MSWord, *I* would probably
(grimace) make an effort to "do things right" *assuming* I was going
to be STUCK with MSWord thereafter.

I.e., if you are willing to shell out $$ for a DTP tool then,
chances are, you are aware of the DTP issues vs. YetAnotherWordUser.

> long-term efficiency is worse. Word /does/ have usable tools for proper
> structured and consistent document layout. They are not great, and the
> program is often unstable when documents get large, but they do exist -
> if people would use them. MS software is like a reasonable supermarket,
> selling fairly healthy food that people can take home and cook. And then
> they've put a burger bar by the entrance.

Well, I'd replace "fairly healthy" with "greasy hamburgers" and
"burger bar" with "cotton candy dispenser" -- but your point is made.

>>>>> lyx is more of a glorified TeX-specific editor - it doesn't attempt to
>>>>> give you a true "live" view of your output document. I tried it a bit,
>>>>
>>>> Oh. :< I thought it was a WYSIWYG layer atop TeX. :-/
>>>
>>> No, lyx is more a "what you see is a bit like what you get" layer. While
>>> editing, you see results that are closer to the output than you would
>>> with a simple text editor, but not /that/ close. It will cope with
>>> things like bold and italic, some fonts and sizes, and a bit of symbols.
>>> But it won't get line and page breaks right, it won't handle macros
>>> other than predefined ones (and even then, it assumes they have the
>>> standard definitions, while (La)TeX lets you re-define everything). If
>>> you use (La)TeX in a relatively simple way, sticking strictly to the
>>> basic standard styles, then it is maybe useful. But it never suited me.
>>> Mind you, it was /many/ years ago that I tried it.
>>
>> <frown> So what does it *buy* you? I.e., why bother with it?
>
> As you can guess, I /don't/ bother with it. However, if you are happy
> with the restricted set of (La)TeX that it works with, then it is much
> less intimidating and has a much easier learning curve than "proper"
> (La)TeX.

<frown>  Sorry, let me rephrase.  "Why would SOMEONE bother with it?"
I.e., what does it offer that justifies its presence/bulk/potential
for bugs/etc.?

E.g., adding a preprocessor to C *gives* you something -- for relatively
little cost (it doesn't really get in your way, doesn't dramatically
impact the reliability of the tool, etc.).

 0

Hi Michael,

On 12/23/2010 10:08 PM, Michael A. Terrell wrote:
>     By using the HTML, it would adjust itself to fit the page. :)
>
>     I chose the layout to look similar to the old printed index for the
> manuals.  The entire document was a single HTML table.

But apparently had *8* columns (instead of 2)?  I.e., how quickly could
you insert "JVC" (alphabetically) into the list?

 0
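[The single-HTML-table workflow described above -- comma-delimited
data split into cells and rows, wrapped in a table with a header,
then printed to PDF -- can be sketched in a few lines of Python.
This is only an illustration with invented sample entries, not
Michael's actual search-and-replace procedure:]

```python
import csv
import html
import io

def csv_to_html_table(csv_text, caption):
    """Turn comma-delimited index data into one HTML table; the first
    CSV row becomes the header row (e.g., "Brand", "Page")."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    out = ["<table border='1'>",
           "<caption>%s</caption>" % html.escape(caption)]
    # Header row: <th> cells
    out.append("<tr>" + "".join("<th>%s</th>" % html.escape(c)
                                for c in rows[0]) + "</tr>")
    # Body rows: <td> cells
    for row in rows[1:]:
        out.append("<tr>" + "".join("<td>%s</td>" % html.escape(c)
                                    for c in row) + "</tr>")
    out.append("</table>")
    return "\n".join(out)

# Invented sample data -- the real index had hundreds of entries.
sample = "Brand,Page\nAdmiral,1\nAirline,4\nZenith,93"
print(csv_to_html_table(sample, "Index to SAMS CM & RC Manuals"))
```

[Printing the resulting page through any print-to-PDF driver (such
as the PDF995/Ghostscript shell mentioned above) completes the
pipeline; the browser handles fitting the table to the page.]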

D Yuniskis wrote:
>
> Hi Michael,
>
> On 12/23/2010 10:08 PM, Michael A. Terrell wrote:
> >     By using the HTML, it would adjust itself to fit the page. :)
> >
> >     I chose the layout to look similar to the old printed index for the
> > manuals.  The entire document was a single HTML table.
>
> But apparently had *8* columns (instead of 2)?  I.e., how quickly could
> you insert "JVC" (alphabetically) into the list?

It would depend on how much data, other than just a single line with
a title.  I would have to edit the following pages, but that is simple
cut & paste to move the header data down to the proper page breaks.
Maybe ten minutes.  BTW, JVC didn't exist when those service manuals
were written. :)

--
For the last time:  I am not a mad scientist, I'm just a very ticked off
scientist!!!

 0

Hi Michael,

On 12/24/2010 4:13 PM, Michael A. Terrell wrote:
>>>      By using the HTML, it would adjust itself to fit the page. :)
>>>
>>>      I chose the layout to look similar to the old printed index for the
>>> manuals.  The entire document was a single HTML table.
>>
>> But apparently had *8* columns (instead of 2)?  I.e., how quickly could
>> you insert "JVC" (alphabetically) into the list?
>
>     It would depend on how much data, other than just a single line with
> a title.  I would have to edit the following pages, but that is simple
> cut & paste to move the header data down to the proper page breaks.
> Maybe ten minutes.  BTW, JVC didn't exist when those service manuals
> were written. :)

My point was that your table has the "fold" inherent in the
location of the individual cells *in* the table.

E.g., the scheme I outlined lets me make a table like:

COLUMN    ANOTHER
   A        AAAAA
   B        BBBBB
   C        CCCCC
   D        DDDDD
   E        EEEEE
   F        FFFFF

(i.e., a table that is conceptually a TWO COLUMN table)

and then have the layout tool AUTOMAGICALLY display it
as:

COLUMN  ANOTHER       COLUMN  ANOTHER
   A      AAAAA           D     DDDDD
   B      BBBBB           E     EEEEE
   C      CCCCC           F     FFFFF

*or*

COLUMN  ANOTHER     COLUMN  ANOTHER     COLUMN  ANOTHER
   A      AAAAA         C     CCCCC         E     EEEEE
   B      BBBBB         D     DDDDD         F     FFFFF

etc. as space permits.

Since this is *still* the same "two column" table, I can
add a line anywhere and the lines after will move around
as necessary:

COLUMN  ANOTHER       COLUMN  ANOTHER
   A      AAAAA           E     EEEEE
   B      BBBBB           1     11111
   C      CCCCC           F     FFFFF
   D      DDDDD
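The reflow described here (a logical two-column table folded into as many
visual column-pairs as space permits, each filled top to bottom) can be
sketched in a few lines of Python.  The `reflow` function and its fixed
column widths are illustrative assumptions, not any real layout tool's API:

```python
# Fold a logical two-column table into N visual column-pairs, filling
# each visual column top-to-bottom (as the layout tool would).
import math

def reflow(rows, visual_cols):
    """rows: list of (key, value) pairs; returns the display lines."""
    per_col = math.ceil(len(rows) / visual_cols)
    # Slice the logical table into top-to-bottom runs, one per visual column.
    runs = [rows[i * per_col:(i + 1) * per_col] for i in range(visual_cols)]
    lines = ["    ".join("COLUMN  ANOTHER" for _ in runs)]
    for r in range(per_col):
        cells = []
        for run in runs:
            if r < len(run):
                key, val = run[r]
                cells.append(f"{key:<7} {val:<7}")
            else:
                cells.append(" " * 15)      # short final column: pad
        lines.append("    ".join(cells).rstrip())
    return lines

rows = [(c, c * 5) for c in "ABCDEF"]       # the A..F example above
for line in reflow(rows, 2):
    print(line)
```

Inserting a row into `rows` and calling `reflow` again moves everything
after it automatically, which is the whole point of keeping the table
logically two-column.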


D Yuniskis wrote:
>
> Hi Michael,
>
> On 12/24/2010 4:13 PM, Michael A. Terrell wrote:
> >>>      By using the HTML, it would adjust itself to fit the page. :)
> >>>
> >>>      I chose the layout to look similar to the old printed index for the
> >>> manuals.  The entire document was a single HTML table.
> >>
> >> But apparently had *8* columns (instead of 2)?  I.e., how quickly could
> >> you insert "JVC" (alphabetically) into the list?
> >
> >     It would depend on how much data, other than just a single line with
> > a title.  I would have to edit the following pages, but that is simple
> > cut&  paste to move the header data down to the proper page breaks.
> > Maybe ten minutes.  BTW, JVC didn't exist when those service manuals
> > were written. :)
>
> My point was that your table has the "fold" inherent in the
> location of the individual cells *in* the table.
>
> E.g., the scheme I outlined lets me make a table like:
>
> COLUMN    ANOTHER
>    A        AAAAA
>    B        BBBBB
>    C        CCCCC
>    D        DDDDD
>    E        EEEEE
>    F        FFFFF
>
> (i.e., a table that is conceptually a TWO COLUMN table)
>
> and then have the layout tool AUTOMAGICALLY display it
> as:
>
> COLUMN  ANOTHER       COLUMN  ANOTHER
>    A      AAAAA           D     DDDDD
>    B      BBBBB           E     EEEEE
>    C      CCCCC           F     FFFFF
>
> *or*
>
> COLUMN  ANOTHER     COLUMN  ANOTHER     COLUMN  ANOTHER
>    A      AAAAA         C     CCCCC         E     EEEEE
>    B      BBBBB         D     DDDDD         F     FFFFF
>
> etc. as space permits.
>
> Since this is *still* the same "two column" table, I can
> add a line anywhere and the lines after will move around
> as necessary:
>
> COLUMN  ANOTHER       COLUMN  ANOTHER
>    A      AAAAA           E     EEEEE
>    B      BBBBB           1     11111
>    C      CCCCC           F     FFFFF
>    D      DDDDD

Yes, but it cost me nothing for the software and takes only minutes
to get an acceptable file.  It isn't pretty print, but it has a uniform
style that is easy to edit.  What is real fun is creating web pages that
scale to different resolutions and still look OK.  Like the title bar on
one I did.  A solid bar that matches the background in the company logo
covers the entire width, and the logo stays centered.  The alternative
is a lot of different sized pages with Javascript to select multiple
versions for different screen widths.  That was easy, when there were
only a few.

--
For the last time:  I am not a mad scientist, I'm just a very ticked off
scientist!!!


Hi Michael,

On 12/24/2010 8:34 PM, Michael A. Terrell wrote:
>     Yes, but it cost me nothing for the software and takes only minutes
> to get an acceptable file.  It isn't pretty print, but it has a uniform
> style that is easy to edit.  What is real fun is creating web pages that
> scale to different resolutions and still look OK.  Like the title bar on
> one I did.  A solid bar that matches the background in the company logo
> covers the entire width, and the logo stays centered.  The alternative
> is a lot of different sized pages with Javascript to select multiple
> versions for different screen widths.  That was easy, when there were
> only a few.

I think newer browsers now can resize JPEG's (?) so this gives you
an option for a "device independent" graphic.


D Yuniskis wrote:
>
> Hi Michael,
>
> On 12/24/2010 8:34 PM, Michael A. Terrell wrote:
> >     Yes, but it cost me nothing for the software and takes only minutes
> > to get an acceptable file.  It isn't pretty print, but it has a uniform
> > style that is easy to edit.  What is real fun is creating web pages that
> > scale to different resolutions and still look OK.  Like the title bar on
> > one I did.  A solid bar that matches the background in the company logo
> > covers the entire width, and the logo stays centered.  The alternative
> > is a lot of different sized pages with Javascript to select multiple
> > versions for different screen widths.  That was easy, when there were
> > only a few.
>
> I think newer browsers now can resize JPEG's (?) so this gives you
> an option for a "device independent" graphic.

Yes but it doesn't keep the same shape, and can look like crap. The
neat thing is you can grab the corner of the window, and watch it
resize, yet the logo is unchanged.  According to the server logs a lot
of visitors are using older browsers that would crash, or look like
crap.  For a new business, you can't afford to tick off any potential
customers.

--
For the last time:  I am not a mad scientist, I'm just a very ticked off
scientist!!!


On 24/12/10 21:28, D Yuniskis wrote:
> Hi David,
>
> On 12/23/2010 2:29 PM, David Brown wrote:
>
>> Here in Norway, Christmas is celebrated mainly on the 24th rather than
>
> Then "Merry Christmas" (let's see... I guess it's still the 24th, there,
> as I type this)
>

It's now the 25th, and a day for relaxing here (in my house, anyway -
traditions vary).  Merry Christmas to everyone else, or "Happy Holidays"
for those that prefer that (I can't understand the reasoning behind that
phrase, but each to his own).

>> the 25th. So if my replies in this thread are short or non-existent,
>> it's not because the posts are no longer interesting - it's just I don't
>
> Understood. You'd rather be lying flat on your face, hung over
> from too much eggnog... ;-)
>

Eh, something like that...

>>> You might consider looking at the FM "tryout" (I think Adobe still
>>> offers one?). I don't recall how it is crippled (features, time,
>>> etc.). But, you would be able to see the sorts of things you
>>> can do and the effort required to do so.
>>>
>>> (E.g., for scribus, I tried to create a little document with
>>> a table, a photo, an illustration, etc. -- just to see what it
>>> felt like)
>>
>> There is no real chance of me ever buying FrameMaker (either at home or
>> at the office), so I'm not going to bother testing it. I will give
>> Scribus a shot some time for interest, but I doubt if I'll make much use
>> of it. Most of my writings are technical, and I use either LaTeX if I
>> can, or OOO if I have to. Interactive DTP is just for fun in my case.
>
> Understood. It is, however, interesting to see what can be done
> (and done differently) in an interactive environment.
>

Absolutely.

>>>>> The problem I find with many "packages" is getting all the right
>>>>> cruft in place that the package depends on. In the UN*X world,
>>>>> you have to make a commitment towards a particular direction
>>>>> as everything BUILDS on everything else. By contrast, the Windows
>>>>> world rebundles (usually needlessly reinventing!) all the cruft that
>>>>> a particular application is likely to need *with* the application
>>>>> (hence the bloat -- in both cases).
>>>>
>>>> Actually, Linux distributions have pretty much solved this problem
>>>> years
>>>> ago - tools like "apt" and "yum", or their gui front-ends, are
>>>> excellent
>>>> at finding and automatically installing all the required packages,
>>>> libraries, etc., that are needed. So on a Ubuntu, I just type "apt-get
>>>> install scribus", and apt pulls in python, cups libraries, and whatever
>>>> but
>>>> not bloat - it is very much about re-use and sharing of packages and
>>>> libraries.
>>>
>>> The *BSD's have a "package" system with the same functionality.
>>> As with any "true" UN*X tool, it just pieces together actions
>>> performed by other (existing) tools. E.g., consult "database"
>>> for package (to identify dependencies), "install" dependencies
>>> (which requires examining database for that package's dependencies,
>>> etc.), etc.
>>
>> The BSD "ports" system has a lot in common with the various Linux
>
> *BSD's tend to treat "ports" as things you build and "packages"
> as those same things, prebuilt.
>
> Then, you also have to deal with the various *ports* to different
> architectures, and how each chooses to play wrt "partitions", etc. <frown>
>

I thought "ports" was the name of FreeBSD's package manager, but I could
be wrong.

>> package managers, such as apt, yum, portage, etc. There are differences
>> in the details and the functionality, but all are designed to make it
>> easy to get hold of a package and any other packages that it depends on,
>> and to keep everything updated (if you want it to).
>
> I don't think the *BSD approach automates "updates". IIRC, there
> are hooks that let you periodically check to see if any of the
> packages on *your* machine have (new) security advisories that
> are applicable. But, I think you still have to explicitly decide
> that you want to go find a new version, etc. Most "packages"
> are only supported in one or two "releases" (at a time). So,
> going back to a particular release may be impossible once the
> system "marches on" (i.e., I keep snapshots of pkgsrc)
>

I don't know of any Linux distributions that try to force new updates on
you, or even make it the default.  Many /check/ for updates by default,
but it's your choice to install them or not.

>>> My "bloat" comment is that everything calls other "packages" in.
>>> And, that those packages tend to have been created independant of
>>> the packages that rely on them.
>>>
>>> So, for example, if a "program" (getting away from notion of
>>> "package") needs to be able to *display* a PNG, it drags in
>>> the *entire* png package (which has far more capabilities)
>>> instead of *just* that "display PNG" capability.
>>
>> I think you are exaggerating here. In a great many such cases, these
>> libraries are shared by a lot of programs on the system. No "apt-get" is
>> going to install "libpng", because the basic installation already has a
>> dozen other programs that use the library. The same thing applies to
>> other common libraries. And for rarer libraries, they are often made to
>
> This is the difference between the Linux camps and the BSD's.
> First, Linux is *just* the "kernel". You can't do anything without
> the rest of the distro (at least, that's my understanding). With

Correct.

> the BSD's, you end up with a functional system -- the typical
> "core services and utilities" (X, inetd, ntpd, ftpd, nfsd/c, etc.)
> right out of the box. And, there is no (real) change in them
> between BSD's.
>

The huge majority of these parts are common between Linux and *BSD.

It's just a slight difference in the terminology.  Technically, "Linux"
is just the kernel.  What people actually use, is a Linux distribution
(or a "GNU/Linux" distribution if you prefer, since many important parts
of the system are GNU).  But people typically refer to the whole system
as "Linux", or by distribution name.

In the *BSD world, there are three BSD's - OpenBSD, NetBSD and FreeBSD.
I think they all operate on much the same principle.  These are all
roughly equivalent to Linux distributions in that they cover the whole
system packaging, but differ in that it is the same group that develops
the kernel and the packaging.

Then there are a few odd-bod systems - Debian packages with a BSD
kernel, non-standard BSD distributions, etc.

> OTOH, the "rest of the distro" in the Linux world drags in things that
> BSD treats as "applications" -- relegated to the packages/ports
> system.
>
> For example, starting X leaves me with a "rootweave" screen and
> an xterm. I *think* twm is probably there.
>
> But, kde, gnome, etc. -- none of that cruft is even present in the
> filesystem! Want to browse the web? Sorry. Want to view a PNG?
> Sorry (i.e., libpng doesn't exist!).
>

It's all there in the BSD package manager system.  You install it if you
want.

All you are really saying here is that the BSD system /you/ installed
was fairly minimal.  You can do that with Linux too.  Many distributions
will include the browser and Gnome in the base install.  Others have
very little, while most of the "big" distros (Debian, Fedora, Ubuntu)
have something like Gnome by default, but you can choose not to install
it if you don't want it.

> Where folks will tout "damn small linux", this is relatively *easy*
> to get to in the BSD camps -- because none of the "other crap"
> is there to begin with!
>

Oh, it's all there, because it is not "crap" but is useful software.
These things are personal choice - for some people, "twm" is all the
window manager they need.  Other people - the great majority of users -
prefer something more advanced, such as Gnome or KDE.  BSD makes it easy
to install Gnome too.

> So, each time you drag a "package/port" onto your machine, you start
> pulling in all this extra cruft. When I build ports, I keep an
> ordered list of what I build and when. This allows me to tackle
> the "common prerequisites" early so that I am sure I have them
> configured the way *I* want them -- without relying on the
> system building them 'automatically' to satisfy a prerequisite
> for some other "port"
>

You seem to go through an awful lot of effort here - effort that the
very clever folks at FreeBSD (or NetBSD or OpenBSD, if that's what you
use) have already gone through, so that users like yourself will not
have to bother.  Of course, clever
though they are, they are not perfect, and may make mistakes or have not
tested or considered some particular special situations.  And you
certainly have the /choice/ to do this work yourself.  But it's not
necessary - only a tiny proportion of BSD or Linux users micro-manage
their systems in this way.

>> work with the main package (and thus there is little extra). I'll not
>> claim there is no wastage, just that there isn't much extra. It is
>> certainly an order of magnitude better than the windows style of
>> including copies of every library with every program.
>
>>>>> If you just install a prebuilt package, you are putting your
>>>>> faith in the "builder" that he understood the various (inevitable)
>>>>> build issues and knew what to do in each case (too often, people
>>>>> just blindly build things
>>>>> and as long as make ends "successfully" they figure "all is well").
>>>>
>>>> How is that any different from anything else you install on your
>>>> machine, Windows or Linux? You are also putting your faith in the
>>>> programmer that wrote the software in the first place. All I can say is
>>>
>>> The *programmer* is different from the "package maintainer"!
>>
>> True, but who is to say that the programmer is better at this job than
>> the package maintainer? As an example, the typical programmer has tested
>> his code on one or two machines, probably with the same cpu
>
> Granted. In one case, you rely on the programmer to have written
> a portable piece of code. OTOH, you rely on the "package author"
> to know how to ensure things are portable AS WELL AS understand
> the code he is "porting" (packaging?). My experience has been
> that people fixated on "porting" the code often are just trying to
> get it to compile -- hopefully "clean". But, they don't often
> look *into* the code to see if that "clean build" *should* have
> been clean or if there isn't an error lurking behind this
> seemingly "perfect" build.
>
> [and, of course, more often than not, the build *isn't* clean]
>
>> architecture. The package maintainer will test on dozens, and do builds
>> on multiple cpu architectures, and will integrate with system testing on
>> hundreds or thousands of systems.



Hi David,

On 12/25/2010 9:06 AM, David Brown wrote:
>>> Here in Norway, Christmas is celebrated mainly on the 24th rather than
>>
>> Then "Merry Christmas" (let's see... I guess it's still the 24th, there,
>> as I type this)
>
> It's now the 25th, and a day for relaxing here (in my house, anyway -
> traditions vary). Merry Christmas to everyone else, or "Happy Holidays"
> for those that prefer that (I can't understand the reasoning behind that
> phrase, but each to his own).

I believe it is intended to be sanitized of religious references.
E.g., "safe to use in mixed company"

>>>> The *BSD's have a "package" system with the same functionality.
>>>> As with any "true" UN*X tool, it just pieces together actions
>>>> performed by other (existing) tools. E.g., consult "database"
>>>> for package (to identify dependencies), "install" dependencies
>>>> (which requires examining database for that package's dependencies,
>>>> etc.), etc.
>>>
>>> The BSD "ports" system has a lot in common with the various Linux
>>
>> *BSD's tend to treat "ports" as things you build and "packages"
>> as those same things, prebuilt.
>>
>> Then, you also have to deal with the various *ports* to different
>> architectures, and how each chooses to play wrt "partitions", etc. <frown>
>
> I thought "ports" was the name of FreeBSD's package manager, but I could
> be wrong.

<frown>  It's hard to be pedantic, here -- hence my use of quotation
marks -- as the terms are heavily overloaded.  And, as I mentioned,
the introduction of support for different architectures adds yet
another dimension to the "mix".

First, the distinction: all *BSD's contain more than just
a kernel.  I.e., "NetBSD" is *a* kernel (usually several different
configurations are shipped in each release -- PER ARCHITECTURE)
*plus* a set of core tools/services.  So, with *a* BSD, you have
a complete working system (whatever that means).  You can write
code, compile it, run X (and a core set of x apps), support a
full set of network services (httpd is NOT included but things
like tftpd, ftpd, ntp, etc. *are*), build new kernels, etc.
(you can opt NOT to install large portions of *BSD -- e.g.,
all the X related stuff, fonts, sources, etc. -- if you really
want to.)

Beyond that, all of the "applications" are handled by the "ports
collection" (port being a bad choice of word as it is also used for
a "port" to another architecture -- I suspect the origin of the
term in this context was in light of "porting an application *to*
*BSD").  So, emacs is NOT present in any of the BSD's but is
added as an "application".  Ditto for apache, KDE, GIMP, etc.

The mechanism that allows for this application support exists
in two different forms.  You can chose to download and install a
PREBUILT "package".  Or, you can build that package yourself
from "pkgsrc" (this being little more than a hierarchy of patches,
makefiles, scripts and "relationships" between "packages").
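
The dependency chasing described earlier -- consult the database for a
package's prerequisites, install each of those first (each of which may
have prerequisites of its own), then install the package -- is essentially
a depth-first walk.  A minimal sketch, with a toy dict standing in for the
real pkgsrc/apt database:

```python
# Toy dependency "database"; the names and relationships are made up
# for illustration (echoing the apt-get scribus example above).
DEPENDS = {
    "scribus": ["python", "cups-libs"],
    "python": ["zlib"],
    "cups-libs": ["zlib"],
    "zlib": [],
}

def install(pkg, installed=None, order=None):
    """Return the install order, prerequisites before dependents."""
    if installed is None:
        installed, order = set(), []
    if pkg in installed:
        return order                  # already present: nothing to do
    for dep in DEPENDS.get(pkg, []):
        install(dep, installed, order)  # recurse into prerequisites first
    installed.add(pkg)
    order.append(pkg)                 # all prerequisites are in place now
    return order

print(install("scribus"))             # zlib first, scribus last
```

Note this sketch has no cycle detection or version/conflict handling,
which the real tools of course need.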

The "packages" are created by "someone" sitting down and building
an app *in* the pkgsrc distribution.  E.g.,

# cd /usr/pkgsrc   (typically)
# cd archivers/zip

edit system-wide pkg configuration -- or specify pertinent options
for this particular application

# make  (build the application)
# make (I forget what the name of the target is to bundle the
installable files into a "pkg")
# make clean

I.e., the "packager" saves you the trouble of having your machine
crunch through the compile (I've run NetBSD on 40MHz SPARCs;
FreeBSD on 25MHz 386's; etc.  You used to be able to run FBSD
on a 5MB 16MHz 386sx -- can you imagine trying to make kde
on such a box??  Just watching the disk thrash would feel like
inhuman punishment!)

>>> package managers, such as apt, yum, portage, etc. There are differences
>>> in the details and the functionality, but all are designed to make it
>>> easy to get hold of a package and any other packages that it depends on,
>>> and to keep everything updated (if you want it to).
>>
>> I don't think the *BSD approach automates "updates". IIRC, there
>> are hooks that let you periodically check to see if any of the
>> packages on *your* machine have (new) security advisories that
>> are applicable. But, I think you still have to explicitly decide
>> that you want to go find a new version, etc. Most "packages"
>> are only supported in one or two "releases" (at a time). So,
>> going back to a particular release may be impossible once the
>> system "marches on" (i.e., I keep snapshots of pkgsrc)
>
> I don't know of any Linux distributions that try to force new updates on
> you, or even make it the default. Many /check/ for updates by default,
> but it's your choice to install them or not.

I don't think that capability exists in the pkgsrc or ports collection.
I know you can have pkgsrc fetch an updated "vulnerabilities" file
and then examine the pkgs installed on your system to inform you of
any "vulnerabilities" that may have been discovered since you
built/installed them.
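
The check just described -- fetch a current "vulnerabilities" list, then
compare it against the packages installed on the box -- amounts to a set
intersection.  A sketch with a made-up entry format (pkgsrc's real
pkg-vulnerabilities file differs):

```python
# Fetched "vulnerabilities" list; package names and advisory IDs here
# are invented for illustration.
VULNERABILITIES = [
    ("libpng", "SA-2010-001"),
    ("zip", "SA-2010-007"),
]

def audit(installed_pkgs):
    """Return the advisories that apply to any installed package."""
    installed = set(installed_pkgs)
    return [(pkg, adv) for pkg, adv in VULNERABILITIES if pkg in installed]

print(audit(["zip", "emacs"]))   # only the zip advisory applies
```

As noted above, reporting is as far as the tool goes: deciding whether to
chase down a newer version remains the administrator's call.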

>> the BSD's, you end up with a functional system -- the typical
>> "core services and utilities" (X, inetd, ntpd, ftpd, nfsd/c, etc.)
>> right out of the box. And, there is no (real) change in them
>> between BSD's.
>
> The huge majority of these parts are common between Linux and *BSD.
>
> It's just a slight difference in the terminology. Technically, "Linux"
> is just the kernel. What people actually use, is a Linux distribution
> (or a "GNU/Linux" distribution if you prefer, since many important parts
> of the system are GNU). But people typically refer to the whole system
> as "Linux", or by distribution name.

My point was that "Linux" doesn't mean anything -- you have to
reference which distribution you are talking about.

OTOH, OpenBSD/NetBSD/picoBSD/FreeBSD/etc. each define a particular
"core system" (which varies from release to release).  So, configuring
"ntpd" on OpenBSD has definite connotations.  (in fact, it is
probably the same for all of the BSD's -- though there are some
personality induced differences  :> )  Saying the same for
"Linux" begs the followup questions, "Which distro?  Which kernel
version?"

> In the *BSD world, there are three BSD's - OpenBSD, NetBSD and FreeBSD.
> I think they all operate on much the same principle. These are all
> roughly equivalent to Linux distributions in that they cover the whole
> system packaging, but differ in that it is the same group that develops
> the kernel and the packaging.

But a BSD system is typically more narrowly defined than a Linux
distro.  E.g., you *won't* find kde in any of the BSD's (well,
I may be wrong here as I am two full releases out of date and
haven't watched FreeBSD since 2.6 or so).

There are three groups associated with BSD maintenance.
Kernel hackers maintain the three different (where "different"
means "incredibly similar" :> ) kernels; others provide the
assorted "core utilities" that flesh out the "system".
Then, there are a group of "packagers" that work on porting
the various "applications" -- usually, WITHOUT regard to
a particular BSD.  I.e., the files in the pkgsrc hierarchy
have a lot *literally* shared between all of the BSD's and
some tweaks for each *particular* BSD.  So, if one camp
wants "packages" in /usr/pkg while another wants them
in /usr/local...

> Then there are a few odd-bod systems - Debian packages with a BSD
> kernel, non-standard BSD distributions, etc.
>
>> OTOH, the "rest of the distro" in the Linux world drags in things that
>> BSD treats as "applications" -- relegated to the packages/ports
>> system.
>>
>> For example, starting X leaves me with a "rootweave" screen and
>> an xterm. I *think* twm is probably there.
>>
>> But, kde, gnome, etc. -- none of that cruft is even present in the
>> filesystem! Want to browse the web? Sorry. Want to view a PNG?
>> Sorry (i.e., libpng doesn't exist!).
>
> It's all there in the BSD package manager system. You install it if you
> want.

> All you are really saying here is that the BSD system /you/ installed

No, you're missing the point.  If I download *BSD, I *don't* get
any of the PNG sources.  There is nothing *in* the BSD's that
needs them so the philosophy is (or at least "had been") to
not deal with them in the various BSD's but, instead, deal
with them as "application" issues.

IIRC, the options I have(had) for installing NetBSD:
- base
- comp
- etc
- games
- kernel(s)
- man
- misc
- text
- source
- xbase
- xfonts
- xcomp
- xserver

So, if I don't want X, I don't bother unpacking the x* groups.
If I don't want man pages, I don't install the man group.  If
I don't want to build/compile anything, then I omit the comp
group.  I think the bare minimum system can be run with etc,
base and *a* kernel from the kernel group.  A "full install"
would include all of the above (this is the *binary* distribution).

Note that this exists on a "per architecture" basis.  I.e., the
SPARC "base" is different from the i386 "base" (some are
shared -- like man)

Then, there are the "source" groups:
- gnusrc
- sharesrc
- src
- pkgsrc
- syssrc
- xsrc

The syssrc is typically the kernel sources.  Xsrc is obviously
the X sources.  The src group is the bulk of the system.  There
is an active attempt to quarantine all GNU/GPL software in the
gnusrc group (UNLIKE Linux, you can distribute the BSD's
*without* source code -- including the changes that YOU make
to the kernel, applications, etc.).

The pkgsrc is the issue I was talking about earlier.

So, there is no libpng, libjpeg, etc. on my system AT THAT POINT.  It is only
when I delve into applications that I build out of the pkgsrc
hierarchy that these things get drawn into the picture -- but,
*separate* from NetBSD itself!

> was fairly minimal. You can do that with Linux too. Many distributions
> will include the browser and Gnome in the base install. Others have very
> little, while most of the "big" distros (Debian, Fedora, Ubuntu) have
> something like Gnome by default, but you can choose not to install it if
> you don't want it.
>
>> Where folks will tout "damn small linux", this is relatively *easy*
>> to get to in the BSD camps -- because none of the "other crap"
>> is there to begin with!
>
> Oh, it's all there, because it is not "crap" but is useful software.
> These things are personal choice - for some people, "twm" is all the
> window manager they need. Other people - the great majority of users -
> prefer something more advanced, such as Gnome or KDE. BSD makes it easy
> to install Gnome too.
>
>> So, each time you drag a "package/port" onto your machine, you start
>> pulling in all this extra cruft. When I build ports, I keep an
>> ordered list of what I build and when. This allows me to tackle
>> the "common prerequisites" early so that I am sure I have them
>> configured the way *I* want them -- without relying on the
>> system building them 'automatically' to satisfy a prerequisite
>> for some other "port"
>
> You seem to go through an awful lot of effort here - effort that the
> very clever folks at FreeBSD (or NetBSD or OpenBSD, if that's what you
> use) have already gone through, so that users like yourself will not
> have to bother. Of course, clever
> though they are, they are not perfect, and may make mistakes or have not
> tested or considered some particular special situations. And you
> certainly have the /choice/ to do this work yourself. But it's not
> necessary - only a tiny proportion of BSD or Linux users micro-manage
> their systems in this way.

The reason for taking on the responsibility myself hails back to the
"three groups" that maintain the sources.  Look at them as concentric
rings -- kernel maintainers in the center, pkgsrc authors at the
periphery.  The skill set required *tends* to increase as one
moves inward -- the repercussions of mis-steps increase.

Given the sheer number of applications (thousands!), the number of
folks authoring and maintaining "packages" exceeds those maintaining
the kernel.  And, the commitment of folks in package land is usually
much less than that of the core team (kernel + system).

As it is a volunteer effort, there is no one riding herd over
these people verifying the quality of their work, enforcing
conventions, etc.  (keep in mind, I got on the NetBSD bandwagon
at 0.8 -- early 1990's -- I think they are now at 5.mumble)
Since I don't take a kitchen sink approach with my UN*X boxen
(I pick, carefully, what I will use on each), I *can* invest
the time to carefully  examine *how* a package is being built.
I can make changes that are more in favor with the way I want
to use a package.  I can chase down bugs that are important to
me (with that number of packages, some are obviously NOT going
to see regular usage) without waiting/hoping for the "package
author" or "port maintainer" to stumble on the same problem.
(for years, my primary contribution to FreeBSD was in this
role -- just feeding patches to friends who would dress them
up and submit them formally to "whomever".  I no longer do that
as I don't chase the current releases unless I am researching
a particular bug/problem)

>>> work with the main package (and thus there is little extra). I'll not
>>> claim there is no wastage, just that there isn't much extra. It is
>>> certainly an order of magnitude better than the windows style of
>>> including copies of every library with every program.
>>
>>>>>> If you just install a prebuilt package, you are putting your
>>>>>> faith in the "builder" that he understood the various (inevitable)
>>>>>> build issues and knew what to do in each case (too often, people
>>>>>> just blindly build things
>>>>>> and as long as make ends "successfully" they figure "all is well").
>>>>>
>>>>> How is that any different from anything else you install on your
>>>>> machine, Windows or Linux? You are also putting your faith in the
>>>>> programmer that wrote the software in the first place. All I can
>>>>> say is
>>>>
>>>> The *programmer* is different from the "package maintainer"!
>>>
>>> True, but who is to say that the programmer is better at this job than
>>> the package maintainer? As an example, the typical programmer has tested
>>> his code on one or two machines, probably with the same cpu
>>
>> Granted. In one case, you rely on the programmer to have written
>> a portable piece of code. OTOH, you rely on the "package author"
>> to know how to ensure things are portable AS WELL AS understand
>> the code he is "porting" (packaging?). My experience has been
>> that people fixated on "porting" the code often are just trying to
>> get it to compile -- hopefully "clean". But, they don't often
>> look *into* the code to see if that "clean build" *should* have
>> been clean or if there isn't an error lurking behind this
>> seemingly "perfect" build.
>>
>> [and, of course, more often than not, the build *isn't* clean]


136 Replies
1727 Views
