Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10.1 6/24/83; site decwrl.UUCP
Path: utzoo!linus!decvax!decwrl!powell
From: powell@decwrl.UUCP
Newsgroups: net.lang.mod2
Subject: cardinal
Message-ID: <5821@decwrl.UUCP>
Date: Fri, 24-Feb-84 22:21:05 EST
Article-I.D.: decwrl.5821
Posted: Fri Feb 24 22:21:05 1984
Date-Received: Sat, 25-Feb-84 04:40:13 EST
Sender: powell@decwrl.UUCP
Organization: DEC Western Research Lab, Los Altos, CA
Lines: 55

From: powell (Mike Powell)
Is the cardinal data type in Modula-2 a mistake?  What was wrong with subranges
of integer, anyhow?  If you believe your machine implements values in the range
0..65535 efficiently, then by all means, define a type and use it in your
programs.  There is no reason a subrange can't be 0..4294967295 either, if
that's to your liking.  At some point, the compiler will complain about your
using numbers it can't handle.  E.g., mine won't like -1..4294967295, but at
least you'll get a compile error if you try, not a runtime error (assuming your
compiler/hardware is nice enough to do range/overflow checking).
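
Concretely, the declarations I have in mind look something like this (the
type names are invented, and the exact limits are up to your compiler):

    TYPE
        Card16 = [0..65535];          (* fine if the machine does 16-bit unsigned *)
        Card32 = [0..4294967295];     (* fine if it does 32-bit unsigned *)
        (* Bad = [-1..4294967295]; *) (* needs 33 bits -- a compile-time error *)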

The compiler (writer) can tell by looking at the subrange declarations and the
instruction manual which instructions to generate.  And you won't build
unnecessary machine dependencies into your programs.

Defining cardinal and integer to be something as small as 16 bits means many
programs have no hope of working unless there are longcardinal and longinteger.
No one writing a program on a real computer (one with >= 32-bit words) is going
to expect an integer to be <= 32767, and such programs will never port to a
microprocessor that thinks integers are so small.  Why do we need to repeat the
mistakes of C?  If the micro Modula-2 programmers think that 16 bits are enough
for a number, they are free to define types with smaller ranges, e.g.:

    TYPE
        int  = [-32768..32767];
        uint = [0..65535];
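
A made-up fragment to illustrate the portability point:

    VAR
        i : [0..100000];    (* explicit range: fine on a 32-bit machine, and a
                               16-bit compiler can reject it at compile time
                               instead of overflowing at run time *)
        n : INTEGER;        (* only 16 bits on a micro: anything past 32767
                               overflows, silently if you are unlucky *)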

I propose that cardinal be eliminated as a primitive type, and that it simply
be an alias for 0..MAXINT.  No reasonable implementation should define MAXINT
less than, say, a billion.  The range -MAXINT..MAXINT should always be legal.
Ranges with higher or lower bounds may be permitted, but need not be.  The
compiler is free to store subranges as smaller values, and may use unsigned
arithmetic when the values are all non-negative.  If the result of an operation
is stored into a subrange, then the compiler may use shorter arithmetic, since
range checking or overflow detection would catch any errors.  There is no need
to make different subranges incompatible.  If the compiler cannot generate
code for a particular statement because it exceeds the capacity of the machine,
then that's an implementation restriction.  E.g., my compiler wouldn't let you
add i : [-2147483648..2147483647] to j : [0..4294967295], but would allow
k : [0..65535] times i (doing it as signed integer arithmetic) or k times j
(doing it as unsigned).
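
Spelled out as a (hypothetical) fragment, with the behavior I just described
in comments:

    MODULE Mix;
    VAR
        i : [-2147483648..2147483647];
        j : [0..4294967295];
        k : [0..65535];
    BEGIN
        (* i := i + j; *)   (* rejected: would need 33-bit arithmetic *)
        i := k * i;         (* allowed, done as signed 32-bit arithmetic *)
        j := k * j          (* allowed, done as unsigned 32-bit arithmetic *)
    END Mix.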

This solution also eliminates the problems with addresses.  An address is
compatible with (subranges of) integers.  Because addresses may be segmented,
different range checks may be necessary, but for the most part, an address is
just another subrange of integer.  My compiler would likely define them to be
[0..4294967295], but other compilers might make them be [0..16777215] or
whatever is appropriate.
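
That is, an implementation would simply declare something like this (the
exact range is the implementation's business):

    TYPE
        ADDRESS = [0..4294967295];    (* flat 32-bit addresses, as on my machine *)
        (* or ADDRESS = [0..16777215] on a machine with 24-bit addresses *)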

Is there any reason why this won't work?  Are there hidden advantages to
cardinal that I don't know about?  Does anyone like the current situation with
cardinal, integer, and address (and probably longcard and longint)?

					Michael L. Powell
					Digital Equipment Corporation
					Western Research Laboratory
					Los Altos, California
					{decvax,ucbvax}!decwrl!powell
					powell@berkeley