Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10.1a 12/4/83; site rlgvax.UUCP
Path: utzoo!watmath!clyde!floyd!harpo!seismo!rlgvax!guy
From: guy@rlgvax.UUCP (Guy Harris)
Newsgroups: net.unix-wizards
Subject: Re: Please use NULL instead of 0 whenever you have a pointer!
Message-ID: <1649@rlgvax.UUCP>
Date: Fri, 3-Feb-84 15:35:40 EST
Article-I.D.: rlgvax.1649
Posted: Fri Feb  3 15:35:40 1984
Date-Received: Thu, 9-Feb-84 06:42:01 EST
References: <16022@sri-arpa.UUCP> <164@ttds.UUCP>
Organization: CCI Office Systems Group, Reston, VA
Lines: 41

> Since I started this NULL vs. 0 discussion, I perhaps should repeat
> my reasons for wanting people to use NULL instead of 0.
> + 0 is 16 bits on my machine.
> + NULL is declared as (char *)0 on my machine, which makes it 32 bits.
> + Programs that use 0 as a pointer *crash*.
> + Programs that use NULL *don't crash*.
> The reasons (although very obvious) have been very well elaborated
> earlier, so I will not repeat them.

What fixes the problems caused by your first point (as elaborated by
stating that pointers are 32 bits on your machine), your third point,
and your fourth point is not using NULL instead of 0; it's using
(whatever *)0 instead of 0.  NULL solves this problem *only* if it is
declared as (char *)0.  It is *NOT* so declared on our (16-bit int,
32-bit pointer) machine, and it will not be.

The correct way to fix the problem is to run your programs through
"lint" and fix the type disagreements between formal and actual
arguments.  After all, is the point of this exercise to produce
programs which don't break in the immediate environment, or is it to
produce programs that are as type-correct as possible?  Doing the
latter brings benefits in terms of code correctness that the former
doesn't (I can attest to bugs that have been uncovered as a result of
running a program through "lint" - or even noticing the warnings that
PCC gives you!).
> While hoping to end this NULL discussion soon, I still have a question:
> Is there really a C compiler that makes a difference between a "(char *)0"
> and "(int *)0", so a program can *crash* if it uses the wrong one?

I'd be willing to bet that C implementations on Data General machines
and on some Honeywell Level 6 minis make that distinction.  There is
*NO* guarantee whatsoever that there will not be such implementations
of C; the language permits them, so if you want to write portable code
you *must* assume that (char *)0 and (int *)0 may be completely
incompatible beasts.

Just declaring NULL to be (char *)0 and using it universally is sloppy
coding practice.  Period.  It may patch over the problem on machines
with 16-bit ints and 32-bit pointers, but the correct solution
(casting 0 or NULL to the proper pointer type) costs no more than
minor programmer inconvenience and is far more likely to work in
general.

	Guy Harris
	{seismo,ihnp4,allegra}!rlgvax!guy