Megalextoria
Retro computing and gaming, sci-fi books, tv and movies and other geeky stuff.

Multics vs Unix [message #425868] Tue, 17 December 2024 15:41
Originally posted by: Lawrence D'Oliveiro

I was reading the introductory “Multics Concepts and Utilization” book
<http://bitsavers.trailing-edge.com/pdf/honeywell/large_systems/multics/F01_multicsIntroCourseOct78.pdf>
over at Bitsavers. Multics (the “MULTiplexed Information and Computing
Service”) was, for its time, an extremely ambitious operating system
project. It was first introduced in 1965, in the form of a series of
papers at the AFIPS Fall Joint Computer Conference of that year
<https://www.computer.org/csdl/proceedings/1965/afips/12OmNzSh1au>.

Of course, it took far too long (7 years) to reach production quality.
In that time, a small group of researchers at AT&T Bell Labs grew
tired of waiting, and decided to create their own, less ambitious
system, which they called “UNIX” as a tongue-in-cheek homage to
“Multics”. And the rest, as they say, is history.

But, nevertheless, Multics remained an influential system. There are
even some present-day fans of it gathered at
<https://multicians.org/>. Apparently they have got the OS booting on
an emulator of the original GE 645 hardware. Though it was mostly
written in a high-level language (PL/I), Multics was never a portable
OS; to support its advanced virtual-memory and security features, it
required special processor hardware support which was not common in
those days.

Even today, Multics has some features which can be considered
innovative and uncommon. It may be true that, for example, SELinux can
match all of its security capabilities and more. But some aspects of
its file-protection system seem, to me, to make sharing of data
between users a bit easier than your typical Linux/POSIX-type system.

For a start, there seems to be no concept of file “ownership” as such.
Or even of POSIX-style file protection modes (read/write/execute for
owner/group/world). Instead, all file and directory access is
controlled via access-control lists (ACLs). Directories have a
permission called “modify”, which effectively gives a matching entity
(user, group, process) owner-type rights over that directory; except
that more than one entity can have that permission at once. Thus, a
group of users working on a common project can all be given this
“modify” access to a shared directory for that project, allowing them
all to put data there, read it back again, control access to it,
delete it, and so on, all on a completely equal basis. Contrast this
with POSIX/Linux, where every file must have exactly one owner; even
when a user creates a file in a shared directory, that user retains a
special status over the file which others with write access to the
containing directory do not have.

(Multics also offers a separate “append” permission, that allows the
possessor to create an item in a directory, without having the ability
to remove an item once it’s there.)
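The POSIX single-owner model being contrasted here is directly visible through stat(2). A minimal Python sketch (nothing Multics-specific, just illustrating that every POSIX file carries exactly one owning UID plus rwx mode bits):

```python
import os
import tempfile

# Create a file in a (hypothetical) shared project directory and inspect it.
with tempfile.TemporaryDirectory() as shared:
    path = os.path.join(shared, "report.txt")
    with open(path, "w") as f:
        f.write("data\n")
    st = os.stat(path)
    # POSIX records exactly one owning UID per file: the creator's.
    creator_owns_it = (st.st_uid == os.getuid())
    # Classic permissions are just the rwx bits for owner/group/other;
    # anything richer needs POSIX.1e ACLs (setfacl/getfacl) where supported.
    mode_bits = st.st_mode & 0o777
```

Modern Linux filesystems do offer ACLs as an extension, but the single owner remains, which is the asymmetry described above.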

One radical idea introduced in Unix was its profligate use of multiple
processes. Every new command you executed (except for the ones built
into the shell) required the creation of a new process, often several
processes. Other OSes tended to look askance at this; it seemed
somehow wasteful, perhaps even sinful to spawn so many processes so
readily and discard them so casually. The more conventional approach
was to create a single process at user login, and execute nearly all
commands within the context of that. There were special commands for
explicitly creating additional processes (e.g. for background command
execution), but such process creation did not simply happen as a
matter of course.

Gradually, over time, the limitations of the single-process approach
became too much to ignore, and the versatility of the Unix approach
won over (nearly) everybody. Multics, however, is of the old school.
More than that, a process even preserves global state, including
static storage, between runs of programs, and this applies across
different programs, not just reruns of the same one. For example, in
FORTRAN, there is the concept of a “common block”. If you run two
different programs that both refer to the same common block, then the
second one will see values left in the block by the first one. To
completely reinitialize everything, you need to invoke the “new_proc”
command, which effectively deletes your process and gives you a fresh
one.
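A loose analogy in a modern scripting language (my own illustration of the idea, not Multics behaviour): state held at module level survives across calls to different "programs" run inside one process, but a fresh process starts clean, which is roughly what new_proc buys you:

```python
import subprocess
import sys

# Module-level "static storage", shared by both "programs" below,
# the way a FORTRAN common block is shared within one Multics process.
common_block = {}

def program_one():
    common_block["x"] = 42        # leaves a value behind

def program_two():
    return common_block.get("x")  # sees what program_one left there

program_one()
seen = program_two()              # state persisted across "programs"

# A brand-new process (new_proc, roughly) inherits none of that state.
fresh = subprocess.run(
    [sys.executable, "-c", "common_block = {}; print(common_block.get('x'))"],
    capture_output=True, text=True,
).stdout.strip()                  # nothing carried over
```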

One common irritation I find on POSIX/Linux systems is the convention
that every directory has to have an entry called “.”, pointing to
itself, and one called “..”, pointing to its parent. This way these
names can be used in relative pathnames to reach any point in the
directory hierarchy. But surely it is unnecessary to have explicit
entries for these names cluttering up every directory; why not just
build their recognition as a special case into the pathname-parsing
logic in the kernel, once and for all? That way, directory-traversal
routines in user programs don’t have to be specially coded to look
for, and skip these entries, every single time.
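On POSIX the raw readdir(3) stream really does contain both entries, which is why low-level directory walkers must skip them; higher-level APIs usually filter them out for you. A small Python check (os.listdir is documented to omit the two special entries):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "real_file"), "w").close()
    # os.listdir omits "." and ".." by contract, unlike C readdir(3),
    # so callers need no special-case skipping.
    entries = os.listdir(d)
    has_dot = "." in entries or ".." in entries
```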

Multics doesn’t seem to have this problem. An absolute pathname begins
with “>” (which is the separator for pathname components, equivalent
to POSIX “/”), while a relative pathname doesn’t. Furthermore, a
relative pathname can begin with one or more “<” characters,
indicating the corresponding number of steps up from the current
working directory. Unlike POSIX “..”, you can’t have “<” characters in
the middle of the pathname, which is probably not a big loss.
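The mapping to POSIX syntax is mechanical. A hypothetical converter, written only to illustrate the rules just described (`>` as separator and root marker, each leading `<` as one step up); the function is my own invention, not any Multics utility:

```python
def multics_to_posix(path: str) -> str:
    """Translate a Multics-style pathname into POSIX form (illustrative only)."""
    if path.startswith(">"):
        # Absolute: ">udd>proj>file" becomes "/udd/proj/file".
        return "/" + "/".join(p for p in path.split(">") if p)
    # Relative: each leading "<" is one step up from the working directory.
    ups = 0
    while path.startswith("<"):
        ups += 1
        path = path[1:]
    parts = [".."] * ups + [p for p in path.split(">") if p]
    return "/".join(parts)
```

So ">udd>proj>file" maps to "/udd/proj/file", and "<<other>f" maps to "../../other/f"; the one thing that cannot be expressed is "up" in the middle of a path, as noted above.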

It is interesting to see other features which are nearly, but not
quite, the same as corresponding features in Unix. For example, there
is a search path for executables, to save you typing the entire
pathname to run the program. However, this does not seem as flexible
as the $PATH environment-variable convention observed by Unix/POSIX
shells. In particular, it does not seem possible to remove the current
directory from the search path, which we now know can be a security
risk.
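On Unix the risk shows up as a "." entry, or an empty entry (which historical shells also treat as the current directory), in $PATH. A quick check one might run, sketched in Python:

```python
def path_includes_cwd(path_string: str) -> bool:
    """True if a $PATH-style string would search the current directory.

    An empty component ("::", or a leading/trailing ":") is treated as
    "." by historical Unix shells, so it counts as risky too.
    """
    return any(entry in ("", ".") for entry in path_string.split(":"))

risky = path_includes_cwd("/usr/local/bin:.:/usr/bin")   # searches cwd
safe = path_includes_cwd("/usr/local/bin:/usr/bin")      # does not
```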

Another one is the concept of “active functions” and “active strings”.
These allow you to perform substitutions of dynamically-computed
values into a command line. However, they are not as general as the
Unix/POSIX concept of “command substitution”, where an entire shell
command can supply its output to be interpolated into another command.
Instead of having a completely separate vocabulary of “active
functions” which can only be used for such substitutions, Unix/POSIX
unifies this with the standard set of commands, any of which can be
used in this way.
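The POSIX form referred to here is `$(command)`: any program's standard output can be spliced into another command line. The same capture-and-splice effect from Python, using echo purely as a stand-in for an arbitrary command:

```python
import subprocess

# Shell equivalent:  cp important.txt "backup-$(echo hello)"
def command_substitute(argv):
    """Run a command and return its stdout with the trailing newline
    stripped, the way $(...) does in a POSIX shell."""
    result = subprocess.run(argv, capture_output=True, text=True, check=True)
    return result.stdout.rstrip("\n")

word = command_substitute(["echo", "hello"])
new_name = f"backup-{word}"
```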

There are other features of Multics that others more familiar with it
might want to see mentioned (the single-level store concept, where
“everything is a memory segment”, versus Unix “everything is a file”?
I/O redirection based on “switches”—symbolic references to files,
versus Unix integer “file descriptors”?). But then, this long-winded
essay would become even longer-winded :). So if you are interested in
this particular piece of computing history, feel free to follow up the
links above.

In summary, Multics is very much a museum piece, not something you
would want to use today for regular work—not in its original form. But
I think there are still one or two ideas there that we could usefully
copy and adapt to a present-day OS, particularly a versatile one like
Linux.
Re: Multics vs Unix [message #425869 is a reply to message #425868] Tue, 17 December 2024 20:13
Originally posted by: Grant Taylor

On 12/17/24 14:41, Lawrence D'Oliveiro wrote:
> But surely it is unnecessary to have explicit entries for these names
> cluttering up every directory; why not just build their recognition
> as a special case into the pathname-parsing logic in the kernel,
> once and for all?

Based on my limited understanding, the "in the kernel" bit is where your
question runs off the rails.

My understanding is that the kernel doesn't know or care what the
path to a file is. Instead it cares about the file identifier, or
inode. Remember, you can have the same file / inode appear in multiple
directories a la hard links.

The directory path is largely a user-space construct, with only
minimal kernel support.

As such, I don't think you can special case "." and ".." into the
pathname-parsing logic in the kernel.

> That way, directory-traversal routines in user
> programs don’t have to be specially coded to look for, and skip
> these entries, every single time.

I question why you are wanting to treat "." and ".." special when you
are working with dot files. It seems to me like if you are explicitly
looking for dot files, then you'd want to see "." and "..".



--
Grant. . . .
Re: Multics vs Unix [message #425871 is a reply to message #425869] Tue, 17 December 2024 20:20
Originally posted by: Lawrence D'Oliveiro

On Tue, 17 Dec 2024 19:13:42 -0600, Grant Taylor wrote:

> My understanding is that the kernel doesn't know / care where what the
> path to the file is.

Oh, but it does.

> Instead it cares about the file identifier, or
> inode. Remember, you can have the same file / inode appear in multiple
> directories a la hard links.

Yes you can. But there is no userland API in POSIX/*nix to let you
identify a file directly by inode. You always have to specify some path
that gets to it. This is by design.

> As such, I don't think you can special case "." and ".." into the
> pathname-parsing logic in the kernel.

You already have to, for “..” at least. Think about what happens when “..”
would take you to a different filesystem.

> I question why you are wanting to treat "." and ".." special when you
> are working with dot files. It seems to me like if you are explicitly
> looking for dot files, then you'd want to see "." and "..".

Typically, no.
Re: Multics vs Unix [message #425875 is a reply to message #425868] Wed, 18 December 2024 08:04
Originally posted by: antispam

>
> Of course, it took far too long (7 years) to reach production quality.
> In that time, a small group of researchers at AT&T Bell Labs grew
> tired of waiting, and decided to create their own, less ambitious
> system, which they called “UNIX” as a tongue-in-cheek homage to

What I read is a bit different: AT&T management got tired and
decided to quit the project. Researchers at Bell Labs liked
Multics features, did not want to lose them, so decided to
do their own simpler variant.

--
Waldek Hebisch
Re: Multics vs Unix [message #425897 is a reply to message #425875] Wed, 18 December 2024 18:26
Rich Alderson
antispam@fricas.org (Waldek Hebisch) writes:

>> Of course, it took far too long (7 years) to reach production quality.
>> In that time, a small group of researchers at AT&T Bell Labs grew
>> tired of waiting, and decided to create their own, less ambitious
>> system, which they called "UNIX" as a tongue-in-cheek homage to

> What I read is a bit different: AT&T management got tired and
> decided to quit the project. Researchers at Bell Labs liked
> Multics features, did not want to lose them, so decided to
> do their own simpler variant.

That is the usual story; the poster to whom you responded often gets this kind
of thing wrong...

--
Rich Alderson news@alderson.users.panix.com
Audendum est, et veritas investiganda; quam etiamsi non assequamur,
omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
--Galen
Re: Multics vs Unix [message #425902 is a reply to message #425871] Thu, 19 December 2024 00:22
Niklas Karlsson
On 2024-12-18, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
> On Tue, 17 Dec 2024 19:13:42 -0600, Grant Taylor wrote:
>
>> Instead it cares about the file identifier, or
>> inode. Remember, you can have the same file / inode appear in multiple
>> directories a la hard links.
>
> Yes you can. But there is no userland API in POSIX/*nix to let you
> identify a file directly by inode. You always have to specify some path
> that gets to it. This is by design.

Hmm. I haven't gone grovelling through the API documentation, but I note
that find(1) on this machine (Ubuntu 18.04.6 LTS) has the -inum option
to find files by inode number. So unless find(1) does some serious
acrobatics for that, I do think there's a userland API to identify a
file by inode.

Niklas
--
All software sucks. Everybody is considered a jerk by somebody. The sun
rises, the sun sets, the Sun crashes, lusers are LARTed, BOFHs get drunk.
It is the way of things. -- sconley@summit.bor.ohio.gov (Steve Conley)
Re: Multics vs Unix [message #425903 is a reply to message #425902] Thu, 19 December 2024 02:11
Originally posted by: Lawrence D'Oliveiro

On 19 Dec 2024 05:22:37 GMT, Niklas Karlsson wrote:

> Hmm. I haven't gone grovelling through the API documentation, but I note
> that find(1) on this machine (Ubuntu 18.04.6 LTS) has the -inum option
> to find files by inode number. So unless find(1) does some serious
> acrobatics for that ...

It calls stat(2) or lstat(2) and checks the st_ino field in the returned
data.

<https://savannah.gnu.org/projects/findutils/>
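What find -inum does can be sketched in a few lines: walk the tree, lstat(2) every entry, compare st_ino. This is an illustration of the brute-force approach being described, not GNU find's actual code:

```python
import os

def find_by_inum(root: str, inum: int) -> list[str]:
    """Return paths under root whose inode number matches inum
    (a brute-force walk, the way find -inum works in spirit)."""
    matches = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            full = os.path.join(dirpath, name)
            # lstat so symlinks report their own inode, not the target's.
            if os.lstat(full).st_ino == inum:
                matches.append(full)
    return matches
```

Note that nothing here names a file to the kernel by inode: every lstat still goes through a path, which is the point being made.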
Re: Multics vs Unix [message #425904 is a reply to message #425903] Thu, 19 December 2024 04:41
Niklas Karlsson
On 2024-12-19, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
> On 19 Dec 2024 05:22:37 GMT, Niklas Karlsson wrote:
>
>> Hmm. I haven't gone grovelling through the API documentation, but I note
>> that find(1) on this machine (Ubuntu 18.04.6 LTS) has the -inum option
>> to find files by inode number. So unless find(1) does some serious
>> acrobatics for that ...
>
> It calls stat(2) or lstat(2) and checks the st_ino field in the returned
> data.

Looks like you're right; the only way to directly manipulate a file by
its inode number is through direct manipulation of the filesystem via
some FS-dependent tool (debugfs in the case of ext*).

Niklas
--
You know what the chain of command is? It's the chain I go get and beat
you with 'til you understand who's in ruttin' command here!
-- Jayne Cobb, _Firefly_
Re: Multics vs Unix [message #425906 is a reply to message #425897] Thu, 19 December 2024 07:28 Go to previous messageGo to next message
cross is currently offline  cross
Messages: 55
Registered: May 2013
Karma: 0
Member
In article <mddbjx8wp2a.fsf@panix5.panix.com>,
Rich Alderson <news@alderson.users.panix.com> wrote:
> antispam@fricas.org (Waldek Hebisch) writes:
>
>>> Of course, it took far too long (7 years) to reach production quality.
>>> In that time, a small group of researchers at AT&T Bell Labs grew
>>> tired of waiting, and decided to create their own, less ambitious
>>> system, which they called "UNIX" as a tongue-in-cheek homage to
>
>> What I read is a bit different: AT&T management got tired and
>> decided to quit the project. Researchers at Bell Labs liked
>> Multics features, did not want to lose them, so decided to
>> do their own simpler variant.
>
> That is the usual story; the poster to whom you responded often gets this kind
> of thing wrong...

Indeed. It's honestly best not to engage with him; he's
a known troll.

- Dan C.
Re: Multics vs Unix [message #425919 is a reply to message #425902] Thu, 19 December 2024 08:47
scott
Niklas Karlsson <nikke.karlsson@gmail.com> writes:
> On 2024-12-18, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
>> On Tue, 17 Dec 2024 19:13:42 -0600, Grant Taylor wrote:
>>
>>> Instead it cares about the file identifier, or
>>> inode. Remember, you can have the same file / inode appear in multiple
>>> directories a la hard links.
>>
>> Yes you can. But there is no userland API in POSIX/*nix to let you
>> identify a file directly by inode. You always have to specify some path
>> that gets to it. This is by design.
>
> Hmm. I haven't gone grovelling through the API documentation, but I note
> that find(1) on this machine (Ubuntu 18.04.6 LTS) has the -inum option
> to find files by inode number. So unless find(1) does some serious
> acrobatics for that, I do think there's a userland API to identify a
> file by inode.

Admins used to use ncheck(8) to map inode numbers to path name(s)
in Bell Labs versions of Unix.

find(1) uses 'stat(2)' to get the inode number when walking the
filesystem tree, so it's more a brute force method than an API.
Re: Multics vs Unix [message #425920 is a reply to message #425919] Thu, 19 December 2024 09:13
Niklas Karlsson
On 2024-12-19, Scott Lurndal <scott@slp53.sl.home> wrote:
> Niklas Karlsson <nikke.karlsson@gmail.com> writes:
>>
>> Hmm. I haven't gone grovelling through the API documentation, but I note
>> that find(1) on this machine (Ubuntu 18.04.6 LTS) has the -inum option
>> to find files by inode number. So unless find(1) does some serious
>> acrobatics for that, I do think there's a userland API to identify a
>> file by inode.
>
> Admins used to use ncheck(8) to map inode numbers to path name(s)
> in bell labs versions of Unix.

Oh, hello. That's an interesting fact. I like it when things like this
come up in discussions here. Thank you!

I found https://illumos.org/man/8/ncheck - interesting!

A quick web search suggests that at least some commercial UNIXes still
have it; Illumos is of course a Solaris derivative, and I found a
reference to it existing on AIX as well. On Linux, assuming ext*, you
apparently have to use debugfs to do something similar.

Niklas
--
> I've wondered recently why it's not feasible to make large passenger planes
> capable of water landing.
Well, they keep having to replace all the seat cushions, for one
thing... -- Mike Sphar and Mark Hughes in asr
Re: Multics vs Unix [message #425921 is a reply to message #425920] Thu, 19 December 2024 09:47
scott
Niklas Karlsson <nikke.karlsson@gmail.com> writes:
> On 2024-12-19, Scott Lurndal <scott@slp53.sl.home> wrote:
>> Niklas Karlsson <nikke.karlsson@gmail.com> writes:
>>>
>>> Hmm. I haven't gone grovelling through the API documentation, but I note
>>> that find(1) on this machine (Ubuntu 18.04.6 LTS) has the -inum option
>>> to find files by inode number. So unless find(1) does some serious
>>> acrobatics for that, I do think there's a userland API to identify a
>>> file by inode.
>>
>> Admins used to use ncheck(8) to map inode numbers to path name(s)
>> in bell labs versions of Unix.
>
> Oh, hello. That's an interesting fact. I like it when things like this
> come up in discussions here. Thank you!
>
> I found https://illumos.org/man/8/ncheck - interesting!

Unix V7 version:

$ PAGER= man /reference/usl/unix/v7/usr/man/man1/ncheck.1m
NCHECK(1M) NCHECK(1M)



NAME
ncheck - generate names from i-numbers

SYNOPSIS
ncheck [ -i numbers ] [ -a ] [ -s ] [ filesystem ]

DESCRIPTION
Ncheck with no argument generates a pathname vs. i-number list of all
files on a set of default file systems. Names of directory files are
followed by `/.'. The -i option reduces the report to only those files
whose i-numbers follow. The -a option allows printing of the names `.'
and `..', which are ordinarily suppressed. The -s option
reduces the report to special files and files with set-user-ID mode; it
is intended to discover concealed violations of security policy.

A file system may be specified.

The report is in no useful order, and probably should be sorted.

SEE ALSO
dcheck(1), icheck(1), sort(1)

DIAGNOSTICS
When the filesystem structure is improper, `??' denotes the `parent' of
a parentless file and a pathname beginning with `...' denotes a loop.
Re: Multics vs Unix [message #425922 is a reply to message #425919] Thu, 19 December 2024 10:03
Originally posted by: Bob Eager

On Thu, 19 Dec 2024 13:47:14 +0000, Scott Lurndal wrote:

> Admins used to use ncheck(8) to map inode numbers to path name(s)
> in bell labs versions of Unix.
>
> find(1) uses 'stat(2)' to get the inode number when walking the
> filesystem tree, so it's more a brute force method than an API.

Yes, I remember that on Sixth Edition.

Not to mention icheck.




--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org
Re: Multics vs Unix [message #425923 is a reply to message #425920] Thu, 19 December 2024 10:04 Go to previous messageGo to next message
cross is currently offline  cross
Messages: 55
Registered: May 2013
Karma: 0
Member
In article <lsinvdFrpltU1@mid.individual.net>,
Niklas Karlsson <nikke.karlsson@gmail.com> wrote:
> On 2024-12-19, Scott Lurndal <scott@slp53.sl.home> wrote:
>> Niklas Karlsson <nikke.karlsson@gmail.com> writes:
>>>
>>> Hmm. I haven't gone grovelling through the API documentation, but I note
>>> that find(1) on this machine (Ubuntu 18.04.6 LTS) has the -inum option
>>> to find files by inode number. So unless find(1) does some serious
>>> acrobatics for that, I do think there's a userland API to identify a
>>> file by inode.
>>
>> Admins used to use ncheck(8) to map inode numbers to path name(s)
>> in bell labs versions of Unix.
>
> Oh, hello. That's an interesting fact. I like it when things like this
> come up in discussions here. Thank you!
>
> I found https://illumos.org/man/8/ncheck - interesting!
>
> A quick web search suggests that at least some commercial UNIXes still
> have it; Illumos is of course a Solaris derivative, and I found a
> reference to it existing on AIX as well. On Linux, assuming ext*, you
> apparently have to use debugfs to do something similar.

Various systems have had extensions to address files by inum
over time. From memory, systems that supported AFS used to add
an `openi` system call that allowed a user to open a file by
inode number (one presumes it also took some kind of reference
to the filesystem that the inode was relative to, since the name
space of inodes is per-fs, and not globally unique).

Hmm. Maybe that was Coda, and not AFS.

- Dan C.
Re: Multics vs Unix [message #425927 is a reply to message #425904] Thu, 19 December 2024 14:48
Originally posted by: Lawrence D'Oliveiro

On 19 Dec 2024 09:41:37 GMT, Niklas Karlsson wrote:

> On 2024-12-19, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
>
>> On 19 Dec 2024 05:22:37 GMT, Niklas Karlsson wrote:
>>
>>> Hmm. I haven't gone grovelling through the API documentation, but I
>>> note that find(1) on this machine (Ubuntu 18.04.6 LTS) has the -inum
>>> option to find files by inode number. So unless find(1) does some
>>> serious acrobatics for that ...
>>
>> It calls stat(2) or lstat(2) and checks the st_ino field in the
>> returned data.
>
> Looks like you're right; the only way to directly manipulate a file by
> its inode number is through direct manipulation of the filesystem via
> some FS-dependent tool (debugfs in the case of ext*).

Having said that, Linux does offer “handle” calls
<https://manpages.debian.org/2/open_by_handle_at.2.en.html>, which
very likely include inode info somewhere in that opaque structure.

But note that accessing files in this way can only be done by
suitably-privileged processes. Otherwise it would break the POSIX
security model.
Re: Multics vs Unix [message #425930 is a reply to message #425920] Thu, 19 December 2024 16:56
Originally posted by: OrangeFish

On 2024-12-19 09:13, Niklas Karlsson wrote:
> On 2024-12-19, Scott Lurndal <scott@slp53.sl.home> wrote:
>> Niklas Karlsson <nikke.karlsson@gmail.com> writes:
>>>
>>> Hmm. I haven't gone grovelling through the API documentation, but I note
>>> that find(1) on this machine (Ubuntu 18.04.6 LTS) has the -inum option
>>> to find files by inode number. So unless find(1) does some serious
>>> acrobatics for that, I do think there's a userland API to identify a
>>> file by inode.
>>
>> Admins used to use ncheck(8) to map inode numbers to path name(s)
>> in bell labs versions of Unix.
>
> Oh, hello. That's an interesting fact. I like it when things like this
> come up in discussions here. Thank you!
>
> I found https://illumos.org/man/8/ncheck - interesting!
>
> A quick web search suggests that at least some commercial UNIXes still
> have it; Illumos is of course a Solaris derivative, and I found a
> reference to it existing on AIX as well. On Linux, assuming ext*, you
> apparently have to use debugfs to do something similar.
>
> Niklas

Solaris 11 still has it in /usr/sbin:

System Administration Commands ncheck(1M)

NAME
ncheck - generate a list of path names versus i-numbers

SYNOPSIS
ncheck [-F FSType] [-V] [generic_options]
[-o FSType-specific_options] [special]...

DESCRIPTION
ncheck with no options generates a path-name versus
i-number list of all files on special. If special is not
specified on the command line the list is generated for
all specials in /etc/vfstab which have a numeric
fsckpass. special is the raw device on which the file
system exists.

OPTIONS
-F Specify the FSType on which to operate. The FSType
should either be specified here or be determinable
from /etc/vfstab by finding an entry in the table
that has a numeric fsckpass field and an fsckdev
that matches special.

-V Echo the complete command line, but do not execute
the command. The command line is generated by using
the options and arguments provided by the user and
adding to them information derived from
/etc/vfstab. This option may be used to verify and
validate the command line.

[and so on]

OF
Re: Multics vs Unix [message #425969 is a reply to message #425868] Mon, 23 December 2024 13:20
Originally posted by: Sarr Blumson

Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
> I was reading the introductory “Multics Concepts and Utilization” book
> <http://bitsavers.trailing-edge.com/pdf/honeywell/large_systems/multics/F01_multicsIntroCourseOct78.pdf>
> over at Bitsavers. Multics (the “MULTiplexed Information and Computing
> Service”) was, for its time, an extremely ambitious operating system
> project. It was first introduced in 1965, in the form of a series of
> papers at the AFIPS Fall Joint Computer Conference of that year
> <https://www.computer.org/csdl/proceedings/1965/afips/12OmNzSh1au>.
>
> Of course, it took far too long (7 years) to reach production quality.
> In that time, a small group of researchers at AT&T Bell Labs grew
> tired of waiting, and decided to create their own, less ambitious
> system, which they called “UNIX” as a tongue-in-cheek homage to
> “Multics”. And the rest, as they say, is history.

Of course it was a decade or two before UNIX could support an AT&T-sized
organization.

IBM got pissed when Bell Labs chose GE, and cut a similar deal with the
University of Michigan. IBM tried to build TSS/360 on their own; UM
gave up after a similar delay, took the simple-system route, and
built their simple system on the 360/67. And ran it on IBM hardware for
40 years. An IBM Labs group built CP67/CMS->VM370 with good commercial
success.

Multics put a whole series of hardware vendors out of business (GE, Honeywell,
Bull) but IBM had infinite resources.

--
sarr@sdf.org
SDF Public Access UNIX System - http://sdf.org
Re: Multics vs Unix [message #425976 is a reply to message #425969] Mon, 23 December 2024 15:16
Originally posted by: Lawrence D'Oliveiro

On Mon, 23 Dec 2024 18:20:46 -0000 (UTC), Sarr Blumson wrote:

> An IBM Labs group built CP67/CMS->VM370 with good commercial
> success.

That was a pretty crummy way of building a timesharing system, though; CMS
was “interactive” (insofar as IBM understood the term), but it was not
multiuser. So CP (later VM) was tacked on as an extra layer underneath to
allow each user to run their own instance of CMS.

This is normally hailed as “IBM pioneered virtualization”. But it was just
a less flexible, higher-overhead way of supporting multiple users than
other vendors, like DEC, were able to do far more efficiently.
Re: Multics vs Unix [message #425979 is a reply to message #425969] Mon, 23 December 2024 15:21 Go to previous messageGo to next message
Peter Flass is currently offline  Peter Flass
Messages: 8608
Registered: December 2011
Karma: 0
Senior Member
Sarr Blumson <sarr@sdf.org> wrote:
> Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
>> I was reading the introductory “Multics Concepts and Utilization” book
>> <http://bitsavers.trailing-edge.com/pdf/honeywell/large_systems/multics/F01_multicsIntroCourseOct78.pdf>
>> over at Bitsavers. Multics (the “MULTiplexed Information and Computing
>> Service”) was, for its time, an extremely ambitious operating system
>> project. It was first introduced in 1965, in the form of a series of
>> papers at the AFIPS Fall Joint Computer Conference of that year
>> <https://www.computer.org/csdl/proceedings/1965/afips/12OmNzSh1au>.
>>
>> Of course, it took far too long (7 years) to reach production quality.
>> In that time, a small group of researchers at AT&T Bell Labs grew
>> tired of waiting, and decided to create their own, less ambitious
>> system, which they called “UNIX” as a tongue-in-cheek homage to
>> “Multics”. And the rest, as they say, is history.
>
> Of course it was a decade or two before UNIX could support an AT&T-sized
> organization.
>
> IBM got pissed when Bell Labs chose GE, and cut a similar deal with the
> University of Michigan. IBM tried to build TSS/360 on their own; UM
> gave up after a similar delay, took the simple-system route, and
> built their simple system on the 360/67. And ran it on IBM hardware for
> 40 years. An IBM Labs group built CP67/CMS->VM370 with good commercial
> success.
>
> Multics put a whole series of hardware vendors out of business (GE, Honeywell,
> Bull) but IBM had infinite resources.
>

You can still run MTS today.

--
Pete
Re: Multics vs Unix [message #425983 is a reply to message #425976] Mon, 23 December 2024 16:54
Originally posted by: Grant Taylor

On 12/23/24 14:16, Lawrence D'Oliveiro wrote:
> This is normally hailed as “IBM pioneered virtualization”. But
> it was just a less flexible, higher-overhead way of supporting
> multiple users than other vendors, like DEC, were able to do far
> more efficiently.

I don't see either of those statements as being incompatible with each
other.

I'm not aware of any prior efforts that could be described as
virtualization in the sense that VM / VMware / KVM mean it.

I agree that separate (virtual) systems for people is an inefficient way
to support multiple users.



--
Grant. . . .
Re: Multics vs Unix [message #425992 is a reply to message #425983] Mon, 23 December 2024 19:35 Go to previous messageGo to next message
Peter Flass is currently offline  Peter Flass
Messages: 8608
Registered: December 2011
Karma: 0
Senior Member
Grant Taylor <gtaylor@tnetconsulting.net> wrote:
> On 12/23/24 14:16, Lawrence D'Oliveiro wrote:
>> This is normally hailed as “IBM pioneered virtualization”. But
>> it was just a less flexible, higher-overhead way of supporting
>> multiple users than other vendors, like DEC, were able to do far
>> more efficiently.
>
> I don't see either of those statements as being incompatible with each
> other.
>
> I'm not aware of any prior efforts that could be described as
> virtualization in the sense that VM / VMware / KVM mean it.
>
> I agree that separate (virtual) systems for people is an inefficient way
> to support multiple users.
>

Great for security, though. Service bureaus loved CP and VM to make sure
customer data stayed secure.

--
Pete
Re: Multics vs Unix [message #425996 is a reply to message #425992] Mon, 23 December 2024 21:22
Anonymous
Karma:
Originally posted by: Lawrence D'Oliveiro

On Mon, 23 Dec 2024 17:35:15 -0700, Peter Flass wrote:

> Service bureaus loved CP and VM to make sure customer data stayed
> secure.

Service bureaus commonly used multiuser timeshared systems, relying on OS
protections to keep users out of each other’s data.
Re: Multics vs Unix [message #426003 is a reply to message #425992] Mon, 23 December 2024 21:57
Anonymous
Karma:
Originally posted by: Grant Taylor

On 12/23/24 18:35, Peter Flass wrote:
> Great for security, though. Service bureaus loved CP and VM to make
> sure customer data stayed secure.

Was it better for security than physically separate machines?

Or was it a compromise that behaved like separate machines without
paying for multiple machines? ;-)



--
Grant. . . .
Re: Multics vs Unix [message #426004 is a reply to message #426003] Mon, 23 December 2024 22:44
Anne & Lynn Wheel is currently offline  Anne & Lynn Wheel
Messages: 3254
Registered: January 2012
Karma: 0
Senior Member
Grant Taylor <gtaylor@tnetconsulting.net> writes:
> Was it better for security than physically separate machines?
>
> Or was it a compromise that behaved like separate machines without
> paying for multiple machines? ;-)

CP(VM) kernel was relatively small amount of source code with well
defined interface and relatively simple to modify and audit ... which
govs & service bureaus tended to further restrict (user group
presentations about "padded cells" for general users ... only allowing
full virtual machine capability for specific purposes). Because of the
clean separation it tended to reduce the amount and complexity of code
... so it was easier to address both performance and security issues.

In the 80s, as mainframes got larger, there appeared CP/VM subset
functions implemented directly in hardware&microcode to partition
machines ... LPAR & PR/SM
https://en.wikipedia.org/wiki/Logical_partition
which now can be found on many platforms, not just IBM mainframes ...
heavily leveraged by large cloud datacenter operations
https://aws.amazon.com/what-is/virtualization/
How is virtualization different from cloud computing?

Cloud computing is the on-demand delivery of computing resources over
the internet with pay-as-you-go pricing. Instead of buying, owning, and
maintaining a physical data center, you can access technology services,
such as computing power, storage, and databases, as you need them from a
cloud provider.

Virtualization technology makes cloud computing possible. Cloud
providers set up and maintain their own data centers. They create
different virtual environments that use the underlying hardware
resources. You can then program your system to access these cloud
resources by using APIs. Your infrastructure needs can be met as a fully
managed service.



--
virtualization experience starting Jan1968, online at home since Mar1970
Re: CP/67 Multics vs Unix [message #426005 is a reply to message #425983] Mon, 23 December 2024 23:07
John Levine is currently offline  John Levine
Messages: 1487
Registered: December 2011
Karma: 0
Senior Member
According to Grant Taylor <gtaylor@tnetconsulting.net>:
> I'm not aware of any prior efforts that could be described as
> virtualization in the sense that VM / VMware / KVM mean it.

I am fairly sure that CP/40 and CP/67 were the first virtual machine operating systems.

It was apparently a stroke of luck that S/360 had a clean enough separation between
system and user modes that it was possible to virtualize. The PDP-6/10 were designed
at about the same time but couldn't be virtualized in the same way.

> I agree that separate (virtual) systems for people is an inefficient way
> to support multiple users.

True, but it was an extremely cost efficient way to do system program development.

I think Lynn will confirm that the CP system was so well designed that it got
good performance even without the memory sharing other systems might do. The IBM
channel architecture made the overhead of simulating I/O fairly cheap, since
each I/O operation did a lot of work, read an entire card, print an entire line,
seek and read or write a disk block.
--
Regards,
John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Re: CP/67 Multics vs Unix [message #426009 is a reply to message #426005] Tue, 24 December 2024 00:48
Anonymous
Karma:
Originally posted by: Lawrence D'Oliveiro

On Tue, 24 Dec 2024 04:07:10 -0000 (UTC), John Levine wrote:

> According to Grant Taylor <gtaylor@tnetconsulting.net>:
>
>> I agree that separate (virtual) systems for people is an inefficient
>> way to support multiple users.
>
> True, but it was an extremely cost efficient way to do system program
> development.

There is an even more cost-efficient way: containers.

> The IBM channel architecture made the overhead of simulating
> I/O fairly cheap, since each I/O operation did a lot of work, read an
> entire card, print an entire line, seek and read or write a disk block.

I understand there were security holes in that: the hypervisor tended to
trust that the channel programs submitted by the individual VMs were well-
behaved, even while VM users had full control over their particular VMs.
Re: CP/67 Multics vs Unix [message #426010 is a reply to message #426005] Tue, 24 December 2024 03:06
Anne & Lynn Wheel is currently offline  Anne & Lynn Wheel
Messages: 3254
Registered: January 2012
Karma: 0
Senior Member
John Levine <johnl@taugh.com> writes:
> I think Lynn will confirm that the CP system was so well designed that it got
> good performance even without the memory sharing other systems might do. The IBM
> channel architecture made the overhead of simulating I/O fairly cheap, since
> each I/O operation did a lot of work, read an entire card, print an entire line,
> seek and read or write a disk block.

Melinda's virtual machine history info
https://www.leeandmelindavarian.com/Melinda#VMHist

Univ. had got a 360/67 to replace the 709/1401 but ran it as a 360/65 with OS/360;
I was still an undergraduate but hired fulltime, responsible for
OS/360. Univ. shutdown the datacenter on weekends and I would have it
dedicated, although 48hrs w/o sleep made Monday classes hard.

CSC then came out and installed CP67 (3rd install after CSC itself and MIT
Lincoln Labs) and I mostly played with it in my weekend dedicated
time. Initially I concentrated on pathlengths for improving
running of OS/360 in a virtual machine. My OS/360 job stream ran 322secs on
the real machine; initially it ran 856secs virtually (534secs CP67 CPU). After
a couple months I had CP67 CPU down to 113secs (from 534). I then started
redoing other parts of CP67: page replacement, dynamic adaptive resource
management, scheduling and page-thrashing controls, ordered arm seek
queueing (from FIFO), multiple chained page transfers maximizing
transfers/revolution (2301 paging drum improved from 80/sec to a
270/sec peak), etc. Most of this CSC picked up for distribution in
standard CP67.
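As a rough check on those numbers (a hedged sketch; it assumes virtual elapsed time is approximately native run time plus CP67 CPU time):

```python
# Rough arithmetic on the CP67 pathlength numbers quoted above
# (assumption: virtual run time ~= native run time + CP67 CPU time).
native = 322           # OS/360 job stream on the bare machine, seconds
virtual_initial = 856  # same job stream under CP67, initially
cp67_initial = 534     # CP67 CPU time within that initial run
cp67_tuned = 113       # CP67 CPU time after a couple months of tuning

# CP67 overhead expressed as a fraction of the native run time
print(round(cp67_initial / native, 2))  # 1.66 -- overhead initially exceeded the job itself
print(round(cp67_tuned / native, 2))    # 0.35 -- after tuning
```

So the tuning cut the virtualization overhead from about 166% of the native run time to about 35%.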

After graduation, I joined CSC, and one of my hobbies was enhanced
production operating systems for internal datacenters (the world-wide,
online, sales&marketing support HONE systems were an early and long-time
customer). After the decision to add virtual memory to all 370s, the morph
of CP67->VM370 dropped or simplified a lot of stuff. During 1974 and
early 1975, I was able to get most of it back into VM370R2 and then
VM370R3.

In the wake of the Future System implosion, Endicott roped me into helping
with the VM/370 ECPS microcode assist for the 370 138/148 ... basically identifying
the 6kbytes of highest-executed VM370 kernel paths for moving into
microcode. The 138/148 averaged 10 native instructions per emulated 370
instruction, and kernel 370 instructions would translate
approx. one-for-one into native ... getting a 10 times speedup. Old
archived a.f.c post with the initial analysis:
https://www.garlic.com/~lynn/94.html#21

The 6kbytes of instructions accounted for 79.55% of kernel execution (moved to
native, running ten times faster) ... a lot of it involved simulated I/O
(it had to make a copy of the virtual channel programs, substituting real
addresses for virtual; the corresponding virtual pages also had to be
"fixed" in real storage until the VM I/O had completed).

The Science Center was on the 4th flr and Multics was on the 5th ... looking at
some amount of Multics, I figured I could do a page-mapped filesystem with
lots of sharing features (which was faster and used much less CPU than the
standard requiring I/O emulation). Note that "Future System" did
single-level-store ala Multics and (IBM) TSS/360 ... I joked that I had
learned what not to do from TSS/360. However, when FS imploded it gave
anything that even slightly related to single-level-store a bad
reputation (and I had trouble even getting my CMS page-mapped
filesystem used internally inside IBM).

--
virtualization experience starting Jan1968, online at home since Mar1970
Re: CP/67 Multics vs Unix [message #426030 is a reply to message #426009] Tue, 24 December 2024 08:50
Peter Flass is currently offline  Peter Flass
Messages: 8608
Registered: December 2011
Karma: 0
Senior Member
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
> On Tue, 24 Dec 2024 04:07:10 -0000 (UTC), John Levine wrote:
>
>> According to Grant Taylor <gtaylor@tnetconsulting.net>:
>>
>>> I agree that separate (virtual) systems for people is an inefficient
>>> way to support multiple users.
>>
>> True, but it was an extremely cost efficient way to do system program
>> development.
>
> There is an even more cost-efficient way: containers.
>
>> The IBM channel architecture made the overhead of simulating
>> I/O fairly cheap, since each I/O operation did a lot of work, read an
>> entire card, print an entire line, seek and read or write a disk block.
>
> I understand there were security holes in that: the hypervisor tended to
> trust that the channel programs submitted by the individual VMs were well-
> behaved, even while VM users had full control over their particular VMs.
>
>

All disks are either minidisks or dedicated packs, so it’s easy for VM to
ensure that users stay within their limits. Cards and printers are spooled,
and tapes are attached to one user at a time, so it’s very difficult to
break security.

Channels have a “file mask” that blocks seeks to another cylinder. The
first thing a DASD channel program does is seek to a cylinder and then set the file
mask. After that, any attempt to go elsewhere is trapped so the OS can
validate the new cylinder address before continuing.
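The extent check behind this can be sketched as follows (illustrative only, not actual VM/370 code): a minidisk is a contiguous range of cylinders on a real pack, and the hypervisor rejects any seek outside that range before letting the channel program continue.

```python
# Hedged sketch of minidisk extent checking (illustrative, not VM/370 code).
class Minidisk:
    def __init__(self, real_start, num_cyls):
        self.real_start = real_start  # first real cylinder backing the minidisk
        self.num_cyls = num_cyls      # extent size, in cylinders

    def translate_seek(self, virtual_cyl):
        """Validate a guest seek address and map it to a real cylinder."""
        if not 0 <= virtual_cyl < self.num_cyls:
            raise PermissionError("seek outside minidisk extent")
        return self.real_start + virtual_cyl

# a 50-cylinder minidisk starting at real cylinder 100 (real cyls 100-149)
md = Minidisk(real_start=100, num_cyls=50)
print(md.translate_seek(10))  # 110
```

A guest that seeks to cylinder 50 or beyond is trapped, so it can never address another user's data.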

--
Pete
Re: CP/67 Multics vs Unix [message #426040 is a reply to message #426010] Tue, 24 December 2024 13:40
Anne & Lynn Wheel is currently offline  Anne & Lynn Wheel
Messages: 3254
Registered: January 2012
Karma: 0
Senior Member
trivia: CSC CP67 had 1052&2741 support, but the univ. had some number of
TTY/ASCII terminals, so I added TTY/ASCII support ... which CSC picked up
and distributed with standard CP67 (along with lots of my other
stuff). I had done a hack with one-byte values for TTY line
input/output lengths. Tale of MIT Urban Lab having CP/67 (in the tech sq bldg across
the quad from 545, multics & science center): somebody down at Harvard got
an ascii device with 1200(?) char length ... they modified the CP67 field
for max. lengths ... but didn't adjust my one-byte hack.
https://www.multicians.org/thvv/360-67.html

A user at Harvard School of Public Health had connected a plotter to a
TTY line and was sending graphics to it, and every time he did, the
whole system crashed. (It is a tribute to the CP/CMS recovery system
that we could get 27 crashes in in a single day; recovery was fast and
automatic, on the order of 4-5 minutes. Multics was also crashing quite
often at that time, but each crash took an hour to recover because we
salvaged the entire file system. This unfavorable comparison was one
reason that the Multics team began development of the New Storage
System.)

--
virtualization experience starting Jan1968, online at home since Mar1970
Re: CP/67 Multics vs Unix [message #426050 is a reply to message #426040] Tue, 24 December 2024 15:52
Anonymous
Karma:
Originally posted by: Lawrence D'Oliveiro

On Tue, 24 Dec 2024 08:40:50 -1000, Lynn Wheeler wrote:

> Multics was also crashing quite
> often at that time, but each crash took an hour to recover because we
> salvaged the entire file system. This unfavorable comparison was one
> reason that the Multics team began development of the New Storage
> System.)

Was the difference to do with storing allocation bitmaps on disk, instead
of only in the OS memory? Or just fixing up inconsistencies between the
two?

I recall our main campus PDP-11/70 system, back in 1979/1980 or so, would
take about a quarter of an hour to recover from a crash. The error you got
from trying to mount an improperly dismounted volume was “?Disk pack needs
CLEANing”.

I think journalling filesystems were being developed through the late 1980s
and into the 1990s. Then DEC took the idea to its logical conclusion in the
mid-1990s with something called “Spiralog”, which was a filesystem that was
all journal and no actual (conventional) filesystem.
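The "all journal" idea can be illustrated with a toy log-structured store (a generic sketch of the concept, not Spiralog's actual on-disk format): every write is an append to the log, an in-memory index maps each key to its latest log position, and crash recovery is just a replay of the log.

```python
# Toy log-structured key-value store illustrating the "all journal" idea.
class LogStore:
    def __init__(self):
        self.log = []     # append-only list of (key, value) records
        self.index = {}   # key -> position of the latest record for that key

    def put(self, key, value):
        self.index[key] = len(self.log)  # point index at the new record...
        self.log.append((key, value))    # ...which is simply appended

    def get(self, key):
        return self.log[self.index[key]][1]

    def recover(self):
        """Rebuild the index by replaying the log, as after a crash."""
        self.index = {k: i for i, (k, _) in enumerate(self.log)}

s = LogStore()
s.put("a", 1)
s.put("a", 2)          # overwrite is just another append
s.index.clear()        # simulate losing the in-memory index in a crash
s.recover()
print(s.get("a"))      # 2
```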
Re: CP/67 Multics vs Unix [message #426058 is a reply to message #426040] Tue, 24 December 2024 18:49
Anne & Lynn Wheel is currently offline  Anne & Lynn Wheel
Messages: 3254
Registered: January 2012
Karma: 0
Senior Member
The Science Center and a couple of the commercial online CP67 service
spin-offs in the 60s did a lot of work for 7x24, dark-room, unattended
operation. Also, the 60s was when IBM leased/rented machines with charges
based on the "system meter" that ran whenever any cpu or any channel
(I/O) was busy ... and a lot of work was done allowing the system meter to
stop when the system was otherwise idle (there had to be no activity at all
for at least 400ms before the system meter would stop). One piece was special
terminal I/O channel programs that would go idle (allowing the system meter
to stop) but immediately start up whenever characters were arriving.

trivia: long after IBM had switched to selling machines, the (IBM batch) MVS
system still had a 400ms timer event that guaranteed the system meter
never stopped.

Late 80s, for the IBM Austin RS/6000, the AIX filesystem was modified to journal
filesystem metadata changes using transaction memory (RIOS hardware that
tracked changed memory) ... in part claiming it was more efficient.

Got the HA/6000 project in the late 80s, originally for NYTimes to move
their newspaper system (ATEX) off a DEC VAXCluster to RS/6000. AIX JFS
enabled hot-standby unix filesystem take-over ... and some of the RDBMS
vendors supported (raw unix) concurrent shared disks.

Then IBM Palo Alto was porting the journaled filesystem to machines that
didn't have transaction memory and found that transaction-journaling
calls outperformed transaction memory (even when ported back to the
RS/6000).

--
virtualization experience starting Jan1968, online at home since Mar1970
Re: CP/67 Multics vs Unix [message #426075 is a reply to message #426050] Wed, 25 December 2024 20:14
Peter Flass is currently offline  Peter Flass
Messages: 8608
Registered: December 2011
Karma: 0
Senior Member
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
> On Tue, 24 Dec 2024 08:40:50 -1000, Lynn Wheeler wrote:
>
>> Multics was also crashing quite
>> often at that time, but each crash took an hour to recover because we
>> salvaged the entire file system. This unfavorable comparison was one
>> reason that the Multics team began development of the New Storage
>> System.)
>
> Was the difference to do with storing allocation bitmaps on disk, instead
> of only in the OS memory? Or just fixing up inconsistencies between the
> two?
>
> I recall our main campus PDP-11/70 system, back in 1979/1980 or so, would
> take about a quarter of an hour to recover from a crash. The error you got
> from trying to mount an improperly dismounted volume was “?Disk pack needs
> CLEANing”.
>
> I think journalling filesystems were being developed through the 1990s.
> Then DEC took the idea to its logical conclusion in the early 1990s with
> something called “Spiralog”, which was a filesystem that was all journal
> and no actual (conventional) filesystem.
>

Obviously with minidisks no recovery was needed. I think the CMS filesystem
kept everything on disk, so no recovery of user data either, though I could
be wrong here. (I wrote the Wikipedia article about the CMS filesystem, but
I don’t recall the details at present.)

--
Pete
Re: CP/67 Multics vs Unix [message #426085 is a reply to message #426075] Thu, 26 December 2024 14:55
Anne & Lynn Wheel is currently offline  Anne & Lynn Wheel
Messages: 3254
Registered: January 2012
Karma: 0
Senior Member
Peter Flass <peter_flass@yahoo.com> writes:
> Obviously with minidisks no recovery was needed. I think the CMS
> filesystem kept everything on disk, so no recovery of user data
> either, though I could be wrong here. (I wrote the Wikipedia article
> about the CMS filesystem, but I don’t recall the details at the
> present)

When CMS was updating filesystem metadata (allocated blocks, allocated
files, locations of files and associated records), it was always to new
disk record locations ... and the last thing was to rewrite the master record,
which switched from the old metadata to the new metadata in a single record
write.

Around the time of the transition from CP67/CMS to VM370/CMS, it was found
that IBM 360&370 (CKD) disk I/O had a particular failure mode during
power failure: system memory could have lost all power, but the CKD
disk and channel could have enough power to finish a write operation in
progress ... since there was no power to memory, it would finish the
write with all zeros and then write the record error check based on the
propagated zeros. CMS was enhanced to have a pair of master records;
updates would alternate between the two, with a version number basically
appended at the end (so a partial zeroed write wouldn't identify itself as
most recent & valid).

This was later fixed for fixed-block disks ... where a write wouldn't
start until the device had all the data from memory (i.e. a countermeasure to
partial record writes with trailing zeros) ... but CKD disks and other
IBM operating systems (those without FBA disk support) tended to
still be vulnerable to this particular power-failure problem.
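The update discipline described above can be sketched like this (illustrative, not CMS's actual record layout): metadata goes to fresh disk locations, a single master-record write commits the switch, and the two alternating master records carry a version number at the END of the record, so a torn write that finished with zeros can never look like the newest copy.

```python
# Hedged sketch of the CMS shadow-update + alternating-master-record idea.
class Disk:
    def __init__(self):
        self.blocks = {}              # simulated disk records
        self.masters = [None, None]   # the alternating pair of master records

    def commit(self, slot, metadata_ptr, version):
        # single-record write: pointer to the new metadata, with the
        # version appended last so a zero-filled partial write cannot
        # masquerade as the most recent copy
        self.masters[slot] = (metadata_ptr, version)

    def current(self):
        live = [m for m in self.masters if m is not None]
        return max(live, key=lambda m: m[1])  # highest valid version wins

d = Disk()
d.blocks["meta@1"] = {"files": ["A"]}        # write new metadata first...
d.commit(0, "meta@1", version=1)             # ...then commit in one write
d.blocks["meta@2"] = {"files": ["A", "B"]}   # next update, new location
d.commit(1, "meta@2", version=2)             # alternate master-record slot
print(d.current())  # ('meta@2', 2)
```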

other trivia: in the 60s, IBM rented/leased computers, with charges based on the "system
meter" that ran whenever any cpu or channel was busy. CSC and a couple
of the CSC CP67 commercial online spinoffs did a lot of work for 7x24,
dark-room, unattended operation, optimizing processing and channel
programs so the "system meter" could stop during idle periods (including
special terminal channel programs that would release the channel when there was no
activity, but come instantly on with arriving characters). The "system meter"
needed 400ms of complete idle before it would stop ... long after IBM
had switched to selling computers, MVS still had a 400ms timer event
that would guarantee the system meter never stopped.

--
virtualization experience starting Jan1968, online at home since Mar1970
Re: CP/67 Multics vs Unix [message #426094 is a reply to message #426075] Fri, 27 December 2024 12:35
scott is currently offline  scott
Messages: 4380
Registered: February 2012
Karma: 0
Senior Member
Peter Flass <peter_flass@yahoo.com> writes:
> Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
>> On Tue, 24 Dec 2024 08:40:50 -1000, Lynn Wheeler wrote:
>>
>>> Multics was also crashing quite
>>> often at that time, but each crash took an hour to recover because we
>>> salvaged the entire file system. This unfavorable comparison was one
>>> reason that the Multics team began development of the New Storage
>>> System.)
>>
>> Was the difference to do with storing allocation bitmaps on disk, instead
>> of only in the OS memory? Or just fixing up inconsistencies between the
>> two?
>>
>> I recall our main campus PDP-11/70 system, back in 1979/1980 or so, would
>> take about a quarter of an hour to recover from a crash. The error you got
>> from trying to mount an improperly dismounted volume was “?Disk pack needs
>> CLEANing”.
>>
>> I think journalling filesystems were being developed through the 1990s.
>> Then DEC took the idea to its logical conclusion in the early 1990s with
>> something called “Spiralog”, which was a filesystem that was all journal
>> and no actual (conventional) filesystem.
>>
>
> Obviously with minidisks no recovery was needed. I think the CMS filesystem
> kept everything on disk, so no recovery of user data either, though I could
> be wrong here. (I wrote the Wikipedia article about the CMS filesystem, but
> I don’t recall the details at the present)

Veritas was an early player in the journaled filesystem space.


https://en.wikipedia.org/wiki/Veritas_File_System
Re: CP/67 Multics vs Unix [message #426095 is a reply to message #426058] Fri, 27 December 2024 14:35
Anne & Lynn Wheel is currently offline  Anne & Lynn Wheel
Messages: 3254
Registered: January 2012
Karma: 0
Senior Member
Lynn Wheeler <lynn@garlic.com> writes:
> Got the HA/6000 project in the late 80s, originally for NYTimes to move
> their newspaper system (ATEX) off DEC VAXCluster to RS/6000. AIX JFS
> enabled hot-standby unix filesystem take-over ... and some of RDBMS
> vendors supported (raw unix) concurrent shared disks.
>
> Then IBM Palo Alto was porting journaled filesystem to machines that
> didn't have transaction memory and found that transaction journaling
> calls outperformed transaction memory (even when ported back to
> RS/6000).

RS/6000 AIX with Journal Filesystem released in 1990
https://en.wikipedia.org/wiki/IBM_AIX

AIX was the first operating system to implement a journaling file
system. IBM has continuously enhanced the software with features such as
processor, disk, and network virtualization, dynamic hardware resource
allocation (including fractional processor units), and reliability
engineering concepts derived from its mainframe designs.[8]

In 1990, AIX Version 3 was released for the POWER-based RS/6000
platform.[16] It became the primary operating system for the RS/6000
series, which was later renamed IBM eServer pSeries, IBM System p, and
finally IBM Power Systems.

.....

Nick Donofrio approved HA/6000 in 1988 (it required the journal filesystem
that would be part of the RS/6000 1990 release) ... and it started at the IBM
Los Gatos lab Jan1989 (I renamed it HA/CMP when I started doing
technical/scientific cluster scaleup with national labs, LLNL, LANL,
NCAR, etc, and commercial cluster scaleup with RDBMS vendors, Oracle,
Sybase, Ingres, Informix).

27 Years of IBM RISC
http://ps-2.kev009.com/rootvg/column_risc.htm
1990 POWER

IBM announces its new RISC-based computer line, the RISC System/6000
(later named RS/6000, nowadays eServer pSeries), running AIX Version
3. The architecture of the systems is given the name POWER (now commonly
referred to as POWER1), standing for Performance Optimization With
Enhanced RISC. They were based on a multiple-chip implementation of the
32-bit POWER architecture. The models introduced included an 8 KB
instruction cache (I-cache) and either a 32 KB or 64 KB data cache
(D-cache). They had a single floating-point unit capable of issuing one
compound floating-point multiply-add (FMA) operation each cycle, with a
latency of only two cycles and optimized 3-D graphics capabilities.

The model 7013-540 (30 MHz) processed 30 million instructions per
second. Its electronic logic circuitry had up to 800,000 transistors per
silicon chip. The maximum memory size was 256 Mbytes and its internal
disk storage capacity was 2.5 GBytes.

Links: (for URLs see web page)
RISC System/6000 POWERstation/POWERserver 320
RISC System/6000 POWERstations/POWERservers 520 AND 530
RISC System/6000 POWERserver 540
RISC System/6000 POWERstation 730
RISC System/6000 POWERserver 930

AIX Version 3

AIX Version 3 is announced.

Links: (for URLs see web page)

AIX Version 3 (February 1990)
Overview: IBM RISC System/6000 and related announcements

--
virtualization experience starting Jan1968, online at home since Mar1970
Re: CP/67 Multics vs Unix [message #426101 is a reply to message #426095] Sat, 28 December 2024 18:25
Anonymous
Karma:
Originally posted by: Grant Taylor

On 12/27/24 13:35, Lynn Wheeler wrote:
> In 1990, AIX Version 3 was released for the POWER-based RS/6000
> platform.[16] It became the primary operating system for the RS/6000
> series, which was later renamed IBM eServer pSeries, IBM System p,
> and finally IBM Power Systems.

What other OSs ran on the RS/6000 in 1990?

I'm fairly confident that NetBSD, and know that Linux, eventually made it to the
RS/6000. But I have no idea what else would run on it in '90.



--
Grant. . . .
Re: CP/67 Multics vs Unix [message #426103 is a reply to message #426101] Sat, 28 December 2024 22:04
Anne & Lynn Wheel is currently offline  Anne & Lynn Wheel
Messages: 3254
Registered: January 2012
Karma: 0
Senior Member
Grant Taylor <gtaylor@tnetconsulting.net> writes:
> I'm confident that NetBSD and know that Linux eventually made it to
> the RS/6000. But I have no idea what else would run on it in '90.

(AT&T unix port) AIXV2 and (UCB BSD port) AOS ran on the PC/RT. They then
added a bunch of BSD'isms for AIXV3 for the 1990 RS/6000 (RIOS power chipset).
Then came AIM (Apple, IBM, Motorola) & Somerset, the single-chip
power/pc.
https://en.wikipedia.org/wiki/IBM_RS/6000
https://www.ibm.com/docs/en/power4?topic=rs6000-systems

so most of the non-AIX systems are going to be for power/pc ... and then
power & power/pc eventually merged.
https://en.wikipedia.org/wiki/IBM_Power_microprocessors

--
virtualization experience starting Jan1968, online at home since Mar1970
Re: CP/67 Multics vs Unix [message #426104 is a reply to message #426101] Sun, 29 December 2024 05:28
cb is currently offline  cb
Messages: 302
Registered: March 2012
Karma: 0
Senior Member
In article <vkq1cj$rur$1@tncsrv09.home.tnetconsulting.net>,
Grant Taylor <gtaylor@tnetconsulting.net> wrote:
> On 12/27/24 13:35, Lynn Wheeler wrote:
>> In 1990, AIX Version 3 was released for the POWER-based RS/6000
>> platform.[16] It became the primary operating system for the RS/6000
>> series, which was later renamed IBM eServer pSeries, IBM System p,
>> and finally IBM Power Systems.
>
> What other OSs ran on the RS/6000 in 1990?

Well, technically ... IBM licensed the NeXTStep operating system from
NeXT to put on the RS/6000 line, though it never made it to market:

https://www.techmonitor.ai/technology/ibms_rs6000_announcements

"There are no fewer than three interfaces offered on the operating
system licensed separately. First is Steve Jobs’ NeXTStep – dubbed
the AIX Graphic User Environment/6000."

https://simson.net/ref/NeXT/nextworld/NextWorld_Extra/92.08.Aug.NWE/92.08.Aug.NWExtra05.html

"In 1988, NeXT licensed NeXTSTEP 1.0 to IBM for use on the
RS/6000 workstation."

// Christian
Re: CP/67 Multics vs Unix [message #426106 is a reply to message #426101] Sun, 29 December 2024 10:56
Anonymous
Karma:
Originally posted by: Grant Taylor

On 12/28/24 17:25, Grant Taylor wrote:
> What other OSs ran on the RS/6000 in 1990?

Thank you Lynn and Christian. Today I learned something new to me. :-)



--
Grant. . . .
Re: CP/67 Multics vs Unix [message #426107 is a reply to message #426104] Sun, 29 December 2024 16:17
Anonymous
Karma:
Originally posted by: Lawrence D'Oliveiro

On Sun, 29 Dec 2024 10:28:03 -0000 (UTC), Christian Brunschen wrote:

> ... IBM licensed the NeXTStep operating system ...
>
> "There are no fewer than three interfaces offered on the operating
> system licensed separately. First is Steve Jobs’ NeXTStep – dubbed the
> AIX Graphic User Environment/6000."

So it was just an alternative GUI on top of AIX, not an OS in itself.