Path: utzoo!attcan!utgpu!jarvis.csri.toronto.edu!mailrus!uflorida!haven!decuac!shlump.nac.dec.com!hiatus.dec.com!grue.dec.com!daniels
From: daniels@grue.dec.com (Bradford R. Daniels)
Newsgroups: comp.std.c
Subject: Re: %g format in printf
Message-ID: <1447@hiatus.dec.com>
Date: 9 Sep 89 18:59:31 GMT
References: <1441@hiatus.dec.com>
Sender: news@hiatus.dec.com
Lines: 36
Distribution: world
Organization: Digital Equipment Corporation

In article <1441@hiatus.dec.com>, daniels@grue.dec.com (Bradford R. Daniels) writes:
> In article,
> mcgrath@saffron.Berkeley.EDU (Roland McGrath) writes:
> > Yes. The ANSI standard does specify that the default precision is 6.
>
> Huh? Where? I am working from document X3J11/88-159, which says
> under %g:
>
>     "The double argument is converted in the style f or e (or in
>     style E in the case of a G conversion specifier), with the
>     precision specifying the number of digits. If the precision
>     is zero, it is taken as 1. The style used depends on the
>     value converted..."
>
> It then goes on to describe when each style is used and to say
> that trailing zeroes, etc., should be removed. What do you see
> that I don't?

I really would like a definitive answer (or at least some kind of
consensus) on this issue. I appreciate all the input on what
"significant digits" should mean in the context of %g, but now that
I'm pretty sure we handle that correctly, the default precision
issue is the more pressing one...

Thanks again,

- Brad

-----------------------------------------------------------------
Brad Daniels                 | Digital Equipment Corp. almost
DEC Software Devo            | definitely wouldn't approve of
"VAX C RTL Whipping Boy"     | anything I say here...
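
P.S. For anyone who wants to experiment with their own library, here's
a quick test program. The expected outputs in the comments assume the
"default precision is 6" reading, which is of course exactly the point
in question; the %.0g case follows directly from the wording quoted
above.

#include <stdio.h>

int main(void)
{
    double x = 3.14159265358979;

    /* If the default precision is 6, these two should print the
       same thing: 3.14159 (six significant digits). */
    printf("%g\n", x);
    printf("%.6g\n", x);

    /* A precision of zero is taken as 1 under %g, so both of
       these should print just 3. */
    printf("%.0g\n", x);
    printf("%.1g\n", x);

    /* Trailing zeroes are removed: 1.50000 comes out as 1.5. */
    printf("%g\n", 1.5);

    return 0;
}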