Re: old pharts, Multics vs Unix [message #426757 is a reply to message #426754] Tue, 04 February 2025 09:08
Originally posted by: Colin Macleod

Peter Flass <peter_flass@yahoo.com> posted:

> Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
>> On Mon, 03 Feb 2025 15:03:10 GMT, Colin Macleod wrote:
>>
>>> I couldn't even interrupt my own program. They had to reboot the machine
>>> to fix it and I was told in no uncertain terms to never do that again!
>>
>> Fragile things, mainframes. They were not battle-hardened by exposure to
>> inquisitive students, the way interactive timesharing systems were.
>>
>
> Might the problem have been the controller for the 2260s? I recall they
> were somewhat kludgy, using delay-lines as display buffers.
>


--
Colin Macleod ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ https://cmacleod.me.uk

Fermented grapes count towards my five-a-day, right?
Re: old pharts, Multics vs Unix vs mainframes [message #426758 is a reply to message #426739] Tue, 04 February 2025 11:08
Dan Espen
Charlie Gibbs <cgibbs@kltpzyxm.invalid> writes:

> On 2025-02-03, Dan Espen <dan1espen@gmail.com> wrote:
>
>> scott@slp53.sl.home (Scott Lurndal) writes:
>>
>>> Dan Espen <dan1espen@gmail.com> writes:
>>>
>>>> Meanwhile, fixes using 4 digit years are setting us up for
>>>> the Y9K bug.
>>>
>>> A signed 64-bit integer can represent any time value (in seconds)
>>> for a few hundred million years BCE and CE.
>>
>> Hopefully, those 64-bit dates will remain an internal representation.
>> Anything converting a 4-digit year to a 64-bit date still has
>> a Y9K problem.
>
> s/Y9K/Y10K/

I was going to go with Y9999 but it just didn't seem right.

> At the year 9000 we'll still have a millennium left to fix it.

Y2K proved it's no fun unless you wait until you have 2 years left.

--
Dan Espen
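
A minimal C sketch of the sliding-window fix discussed above; the 80/20
window split and the function name are illustrative assumptions, not
anything from the posts. (For scale on the 64-bit point: 2^63 seconds
comes to roughly 2.9 x 10^11 years on either side of the epoch.)

#include <stdio.h>
#include <time.h>

/* Expand a two-digit year through a window derived from the current
   year at run time, so the fix never goes stale ("fixed forever").
   Here years up to 80 back and 19 ahead are accepted; the split is
   an illustrative assumption. */
static int expand_year(int yy)
{
    time_t now = time(NULL);
    int this_year = localtime(&now)->tm_year + 1900;
    int oldest = this_year - 80;           /* start of the window */
    int y = (oldest / 100) * 100 + yy;     /* same century as 'oldest' */
    if (y < oldest)
        y += 100;                          /* wrap into the next century */
    return y;
}

int main(void)
{
    /* Run in 2025: 99 -> 1999, 31 -> 2031. */
    printf("%d %d\n", expand_year(99), expand_year(31));
    return 0;
}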
Re: old pharts, Multics vs Unix [message #426759 is a reply to message #426754] Tue, 04 February 2025 11:15
Dan Espen
Peter Flass <peter_flass@yahoo.com> writes:

> Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
>> On Mon, 03 Feb 2025 15:03:10 GMT, Colin Macleod wrote:
>>
>>> I couldn't even interrupt my own program. They had to reboot the machine
>>> to fix it and I was told in no uncertain terms to never do that again!
>>
>> Fragile things, mainframes. They were not battle-hardened by exposure to
>> inquisitive students, the way interactive timesharing systems were.
>
> Might the problem have been the controller for the 2260s? I recall they
> were somewhat kludgy, using delay-lines as display buffers.

I read about those delay lines long after my 2260 project.
They sure sound like they would be easy to overload, but I don't know
if that would slow the whole system down.

My thought was that it could put a heavy load on the multiplexor
channel, which the printers and card readers also needed. Any 2260
program was likely to be running at a high priority, and locally
attached 2260s used the attention interrupt, possibly interfering
with attempts to hit attention on another terminal.



--
Dan Espen
Re: old pharts, Multics vs Unix [message #426760 is a reply to message #426748] Tue, 04 February 2025 11:21
Originally posted by: John Ames

On Tue, 4 Feb 2025 01:45:32 -0000 (UTC)
cross@spitfire.i.gajendra.net (Dan Cross) wrote:

> Lol. One of the major compute engines used by undergrads when I
> was young was an ES/3090-600S running VM/ESA. It certainly got
> tested by inquisitive students.
>
> The troll continues to show his ignorance.

Lawrence's ability to make blanket assertions on things outside his own
window of experience with complete authority - in spite of the testimony
of those with firsthand knowledge of the domain in question - is fairly
breathtaking.
Re: old pharts, Multics vs Unix [message #426761 is a reply to message #426750] Tue, 04 February 2025 12:33
Bill Findlay
On 4 Feb 2025, Lawrence D'Oliveiro wrote
(in article <vns3n2$1n890$2@dont-email.me>):

> On Tue, 4 Feb 2025 01:15:08 +0000, moi wrote:
>
>> On 04/02/2025 01:05, Lawrence D'Oliveiro wrote:
>>
>>> On Mon, 03 Feb 2025 15:03:10 GMT, Colin Macleod wrote:
>>>
>>>> I couldn't even interrupt my own program. They had to reboot the machine
>>>> to fix it and I was told in no uncertain terms to never do that again!
>>>
>>> Fragile things, mainframes.
>>
>> Nonsense.
>
> The very post I was replying to gives the lie to your denial.

I restore the context you omitted in bad faith:

> They were not battle-hardened by exposure to inquisitive students, the way
> interactive timesharing systems were.

I say again: nonsense.

--
Bill Findlay
Re: old pharts, Multics vs Unix vs mainframes [message #426762 is a reply to message #426752] Tue, 04 February 2025 12:36
Bill Findlay
On 4 Feb 2025, Bob Martin wrote
(in article <m0dt5gFpgnqU1@mid.individual.net>):

> On 4 Feb 2025 at 01:15:08, moi<findlaybill@blueyonder.co.uk> wrote:
>> On 04/02/2025 01:05, Lawrence D'Oliveiro wrote:
>>> On Mon, 03 Feb 2025 15:03:10 GMT, Colin Macleod wrote:
>>>
>>>> I couldn't even interrupt my own program. They had to reboot the machine
>>>> to fix it and I was told in no uncertain terms to never do that again!
>>>
>>> Fragile things, mainframes. They were not battle-hardened by exposure to
>>> inquisitive students, the way interactive timesharing systems were.
>>
>> Nonsense.
>
> Everything Lawrence says is nonsense.
> If only people would stop responding to him.

Untruths need to be challenged.

--
Bill Findlay
Stress-testing of Mainframes (the HASP story) [message #426763 is a reply to message #426728] Tue, 04 February 2025 15:26
Originally posted by: Lars Poulsen

Lynn Wheeler <lynn@garlic.com> writes:
>>> however, all 3270s were half-duplex and if you were unfortunate enough
>>> to hit a key at the same time the system went to write to the screen,
>>> it would lock the keyboard and you would have to stop and hit the reset
>>> key. YKT developed a FIFO box for the 3277: unplug the keyboard from
>>> the 3277 head, plug the FIFO box into the head, and plug the 3277
>>> keyboard into the FIFO box ... eliminating the unfortunate keyboard lock.

On 2025-02-03, Colin Macleod <user7@newsgrouper.org.uk.invalid> wrote:
> This reminds me of something when I was a student around 1975. My university's
> only computer was an IBM 360/44 and we could use some 2260 terminals. Digging
> through some manuals I got the idea that by munging the "carriage control
> character" from a Fortran program I might be able to break out of the
> restriction of block-mode operation and persuade a 2260 to do animated
> graphics.
>
> When I tried my proof-of-concept program I did get the terminal to keep
> redrawing a very flickery but changing few lines. But while I was watching
> this, the other users in the lab started complaining that their terminals were
> frozen. Then the operators started running around trying to find out why
> the whole mainframe had hung. It appeared that my hack had somehow elevated
> the priority of my animation such that nothing else was getting run at all.
> I couldn't even interrupt my own program. They had to reboot the machine to
> fix it and I was told in no uncertain terms to never do that again!

My own "NEVER do that again" stories:
1) 1971, I think:

I was one of three part-time operators of an IBM 1130 that had been
pressed into service as a HASP RJE terminal for the new IBM/360-65 MVT
system at NEUCC (DTU, Lyngby), but we still had a tray full of
pre-punched IBM 1130 DOS control cards:
// JOB T
// FOR
// XQT

One of my coworkers pondered what would happen if you submitted a small
deck with an 1130 JOB card instead of an OS/360 JOB card. I explained
that the spool system would take it as a job with a sort-of valid job
card, but with invalid values in most fields, such as the billing
account, the job queue (class=) field etc. Then when the job scheduler
got to that point in the queue, it would fail all sorts of syntax and
validity checks, and a printout would be returned with all sorts of
error messages. He did not believe me, so I proved it by submitting it.
The one solitary JOB card came back as 4 or so pages of print:
- front separator page
- HASP console message log
- JCL processing log
- back separator page

He then pondered what would happen if we read in a whole stack of JOB
cards, and I said "same thing for each card in the stack". So we did
it.

It read the first couple of dozen cards quite fast, then slowed down a
lot, reading one card at a time, a couple of seconds apart. And then it
went REALLY slow, reading each card only after printing the 4 pages of
output for the previous job.

That was because the HASP job queue had about 100 slots, with a
pre-allocated cylinder for the input cards for the job and I think also
the log files. Once all the job queue slots were taken up by the BAD
jobs, no new jobs could be read in FROM ANYWHERE in the RJE network,
until a job had finished and its slot was made available for a new
job.

But it was worse. The OS/360 job card had a job ID field at the
beginning of the card (before the word JOB), and all the BAD jobs had
the same invalid ID. So when a job came in with a blank ID field, it was
a duplicate of the preceding BAD jobs; each new job had to be assigned a
unique new ID, and this generated a console message announcing a
duplicate ID, followed by a console message announcing the new ID,
followed by a message that the new job had been placed in HOLD status,
so that only one of these jobs with the same ID could be in the OS job
queue. After the first duplicate, the newly generated ID was also a
duplicate, triggering more HASP console messages as the queue was
searched trying to find a new unique name. As each job finished, HASP
would release the jobs that had been HELD, triggering more messages.
Soon, the 1050 console was running 15 minutes behind, and the operator
was unable to get a command in.

In the end, I think they had to re-IPL the system and FORMAT the HASP
SPOOL disk to recover.

I was lucky to keep my job.

I think someone wrote a PTF for HASP to make sure that could not happen
again!
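
A toy model of that duplicate-ID cascade, not HASP code: the slot count
is taken from the story, but the one-message-per-candidate renaming
search is an assumption made for illustration. It shows why console
traffic grows quadratically with the number of identical jobs.

#include <stdio.h>
#include <stdbool.h>

#define SLOTS 100   /* "the HASP job queue had about 100 slots" */

int main(void)
{
    bool used[SLOTS] = { false };
    long messages = 0;

    for (int job = 0; job < SLOTS; job++) {
        messages++;                     /* "duplicate ID" message */
        for (int id = 0; id < SLOTS; id++) {
            messages++;                 /* one message per candidate
                                           tried while renaming */
            if (!used[id]) { used[id] = true; break; }
        }
        messages++;                     /* "placed in HOLD" message */
    }
    printf("%d identical jobs -> %ld console messages\n", SLOTS, messages);
    return 0;
}

With 100 jobs this toy version already emits 5,250 messages, before
counting the messages generated as finished jobs release the HELD ones.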
Re: old pharts, Multics vs Unix [message #426764 is a reply to message #426761] Tue, 04 February 2025 17:58
Originally posted by: Lawrence D'Oliveiro

On Tue, 04 Feb 2025 17:33:17 +0000, Bill Findlay wrote:

> I restore the context you omitted ...

So you admit that your claim only applied to the subsidiary point, not the
main one.

> ... in bad faith

Does not making your point clear count as “bad faith”?
Re: old pharts, Multics vs Unix [message #426765 is a reply to message #426760] Tue, 04 February 2025 17:58
Originally posted by: Lawrence D'Oliveiro

On Tue, 4 Feb 2025 08:21:46 -0800, John Ames wrote:

> Lawrence's ability to make blanket assertions on things outside his own
> window of experience with complete authority - in spite of the testimony
> of those with firsthand knowledge of the domain in question ...

In this case, it was very clearly because of it, not in spite of it.
Re: old pharts, Multics vs Unix [message #426766 is a reply to message #426754] Tue, 04 February 2025 18:01
Originally posted by: Lawrence D'Oliveiro

On Tue, 4 Feb 2025 06:09:27 -0700, Peter Flass wrote:

> Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
>
>> On Mon, 03 Feb 2025 15:03:10 GMT, Colin Macleod wrote:
>>
>>> I couldn't even interrupt my own program. They had to reboot the
>>> machine to fix it and I was told in no uncertain terms to never do
>>> that again!
>>
>> Fragile things, mainframes. They were not battle-hardened by exposure
>> to inquisitive students, the way interactive timesharing systems were.
>>
> Might the problem have been the controller for the 2260s? I recall they
> were somewhat kludge, using delay-lines as display buffers.

Remember, the whole point of a mainframe was to devolve as much I/O
processing load as possible to the peripheral controllers, bothering the
CPU as little as possible.

Obviously there was a loophole in this, if certain sequences of user
actions could cause the controller to overload the CPU with interrupts.

Sometimes these bugs are not specifically in a particular component, but
in the way that different components interact.
Re: Stress-testing of Mainframes (the HASP story) [message #426767 is a reply to message #426763] Tue, 04 February 2025 18:02
Originally posted by: Lawrence D'Oliveiro

On Tue, 4 Feb 2025 20:26:30 -0000 (UTC), Lars Poulsen wrote:

> In the end, I think they had to re-IPL the system and FORMAT the HASP
> SPOOL disk to recover.
>
> I was lucky to keep my job.

IBM was, of course, blameless. It was easier to sack a hapless employee
than to switch to a supplier of better-quality products.
Re: Wang Terminals (Re: old pharts, Multics vs Unix) [message #426768 is a reply to message #426749] Tue, 04 February 2025 18:05
Originally posted by: Lawrence D'Oliveiro

On Tue, 4 Feb 2025 02:55:08 -0000 (UTC), Lars Poulsen wrote:

> Keyboards were the big problem. The Norwegian Wang subsidiary had copied
> the DECwriter Norwegian keyboards, which were really badly screwed up.
> (And DEC in Denmark had accepted those Norwegian keyboards.) I had some
> fun researching /standards/ for office keyboards to get something that
> would work for office typists.

But wouldn’t they have been based on official national standards? Hard to
think of computer companies (ones smaller than IBM, anyway) making up
their own specs, if there is already one established in the target market.
Re: old pharts, Multics vs Unix vs mainframes [message #426769 is a reply to message #426703] Tue, 04 February 2025 18:09
Originally posted by: Lawrence D'Oliveiro

On Sun, 2 Feb 2025 15:17 +0000 (GMT Standard Time), John Dallman wrote:

> In article <vnmh9e$butt$6@dont-email.me>, ldo@nz.invalid (Lawrence
> D'Oliveiro) wrote:
>>
>> You think a modern company like Facebook runs
>> its main system on a mainframe, using some proprietary mainframe DBMS?
>> No, it uses MySQL/MariaDB with other pieces like memcached, plus code
>> written in its home-grown PHP engine (open-sourced as HHVM), and also
>> some back-end Python (that we know of).
>
> Facebook has quite different synchronisation requirements from a credit
> card provider. It doesn't matter to FB if updates to the page someone is
> looking at take a few seconds to arrive.
>
> Speed of synchronisation matters a lot to a credit card provider who is
> trying to enforce customer credit limits and avoid double-spends. They
> still use mainframes with z/TPF for that. z/TPF is a curious OS; it
> essentially makes a mainframe into a single real-time transaction
> processing system.

Look at it this way: the article I got the info about Facebook from was
from some years ago, back when Facebook only had about a billion users who
were active at least once a month.

So that’s a bare minimum of about 380 real-time transactions per second
(10^9 logins spread over 30 days is 10^9 / (30 × 86,400 s) ≈ 386/s), on
average, 24 hours a day, day in and day out, most likely much higher at
peak times.

Where do you have any IBM mainframe that can cope with that?
Re: old pharts, Multics vs Unix [message #426770 is a reply to message #426764] Tue, 04 February 2025 18:35
Originally posted by: moi

On 04/02/2025 22:58, Lawrence D'Oliveiro wrote:
> On Tue, 04 Feb 2025 17:33:17 +0000, Bill Findlay wrote:
>
>> I restore the context you omitted ...
>
> So you admit that your claim only applied to the subsidiary point, not the
> main one.
>
>> ... in bad faith
>
> Does not making your point clear count as “bad faith”?

I do not accept that you are as stupid as you seem to be.

Into the kill file with you.

--
Bill F.
Re: old pharts, Multics vs Unix vs mainframes [message #426771 is a reply to message #426756] Wed, 05 February 2025 00:55
Bob Martin
On 4 Feb 2025 at 13:47:59, "Kerr-Mudd, John" <admin@127.0.0.1> wrote:
> On 4 Feb 2025 07:13:37 GMT
> Bob Martin <bob.martin@excite.com> wrote:
>
>> On 3 Feb 2025 at 21:40:05, Rich Alderson <news@alderson.users.panix.com> wrote:
>>> scott@slp53.sl.home (Scott Lurndal) writes:
>>>
>>>> Peter Flass <peter_flass@yahoo.com> writes:
>>>> > Dan Espen <dan1espen@gmail.com> wrote:
>>>> >> scott@slp53.sl.home (Scott Lurndal) writes:
>>>
>>>> <Y2K mitigations>
>>>
>>>> >> I fixed one of my applications by looking at the current year, then
>>>> >> setting the window accordingly. Fixed forever.
>>>
>>>> > Depends on the application. For something like Social Security you may have
>>>> > records on someone born this year(parents applied for SSN) to this year
>>>> > minus 100 or more. For a payments system this works fine, since all you
>>>> > usually need is last year, this year, and next year.
>>>
>>>> For SS, even in the 1960's, they'd have to be able to store dates
>>>> from the 19th century; two-digit years were never useful in that context.
>>>
>>> Indeed. My greatgrandfather Alderson was born in 1876, and died in 1962; my
>>> greatgrandmother was born in 1885, and died 6 weeks short of her 90th birthday
>>> in 1975.
>>
>> My grandfather was born in 1863.
>>
> Can we presume that your other grandfather is no longer around?

Sorry, full info:
Paternal grandfather : 1863 to 1952
Maternal grandfather : 1877 to 1940

I'm 83
Re: old pharts, Multics vs Unix vs mainframes [message #426772 is a reply to message #426762] Wed, 05 February 2025 11:58
Dan Cross
In article <0001HW.2D528791002FB78C30DA3538F@news.individual.net>,
Bill Findlay <findlaybill@blueyonder.co.uk> wrote:
> On 4 Feb 2025, Bob Martin wrote
> (in article <m0dt5gFpgnqU1@mid.individual.net>):
>
>> On 4 Feb 2025 at 01:15:08, moi<findlaybill@blueyonder.co.uk> wrote:
>>> On 04/02/2025 01:05, Lawrence D'Oliveiro wrote:
>>>> On Mon, 03 Feb 2025 15:03:10 GMT, Colin Macleod wrote:
>>>>
>>>> > I couldn't even interrupt my own program. They had to reboot the machine
>>>> > to fix it and I was told in no uncertain terms to never do that again!
>>>>
>>>> Fragile things, mainframes. They were not battle-hardened by exposure to
>>>> inquisitive students, the way interactive timesharing systems were.
>>>
>>> Nonsense.
>>
>> Everything Lawrence says is nonsense.
>> If only people would stop responding to him.
>
> Untruths need to be challenged.

My suggestion for handling this would be to have a periodically
posted FAQ that includes a section on cranks and bad-faith
posters that mentions Lawrence.

- Dan C.
Re: old pharts, Multics vs Unix vs mainframes [message #426773 is a reply to message #426771] Wed, 05 February 2025 13:56
Harry Vaderchi
On 5 Feb 2025 05:55:51 GMT
Bob Martin <bob.martin@excite.com> wrote:

> On 4 Feb 2025 at 13:47:59, "Kerr-Mudd, John" <admin@127.0.0.1> wrote:
>> On 4 Feb 2025 07:13:37 GMT
>> Bob Martin <bob.martin@excite.com> wrote:
>>
>>> On 3 Feb 2025 at 21:40:05, Rich Alderson <news@alderson.users.panix.com> wrote:
>>>> scott@slp53.sl.home (Scott Lurndal) writes:
>>>>
>>>> > Peter Flass <peter_flass@yahoo.com> writes:
>>>> >> Dan Espen <dan1espen@gmail.com> wrote:
>>>> >>> scott@slp53.sl.home (Scott Lurndal) writes:
>>>>
>>>> > <Y2K mitigations>
>>>>
>>>> >>> I fixed one of my applications by looking at the current year, then
>>>> >>> setting the window accordingly. Fixed forever.
>>>>
>>>> >> Depends on the application. For something like Social Security you may have
>>>> >> records on someone born this year(parents applied for SSN) to this year
>>>> >> minus 100 or more. For a payments system this works fine, since all you
>>>> >> usually need is last year, this year, and next year.
>>>>
>>>> > For SS, even in the 1960's, they'd have to be able to store dates
>>>> > from the 19th century; two-digit years were never useful in that context.
>>>>
>>>> Indeed. My greatgrandfather Alderson was born in 1876, and died in 1962; my
>>>> greatgrandmother was born in 1885, and died 6 weeks short of her 90th birthday
>>>> in 1975.
>>>
>>> My grandfather was born in 1863.
>>>
>> Can we presume that your other grandfather is no longer around?
>
> Sorry, full info:
> Paternal grandfather : 1863 to 1952
> Maternal grandfather : 1877 to 1940
>
> I'm 83
>
Wow! Sorry, I was just being a bit pernickety about only 1 grandfather.

I was lucky enough to have some IBM mainframe experience prior to being an
early PC adopter. I'm still reliving it - see my 8086 PC asm code in
a.l.a/c.o.m.p

--
Bah, and indeed Humbug.
Re: old pharts, Multics vs Unix vs mainframes [message #426774 is a reply to message #426772] Wed, 05 February 2025 13:58
Harry Vaderchi
On Wed, 5 Feb 2025 16:58:46 -0000 (UTC)
cross@spitfire.i.gajendra.net (Dan Cross) wrote:

> In article <0001HW.2D528791002FB78C30DA3538F@news.individual.net>,
> Bill Findlay <findlaybill@blueyonder.co.uk> wrote:
>> On 4 Feb 2025, Bob Martin wrote
>> (in article <m0dt5gFpgnqU1@mid.individual.net>):
>>
>>> On 4 Feb 2025 at 01:15:08, moi<findlaybill@blueyonder.co.uk> wrote:
>>>> On 04/02/2025 01:05, Lawrence D'Oliveiro wrote:
>>>> > On Mon, 03 Feb 2025 15:03:10 GMT, Colin Macleod wrote:
>>>> >
>>>> > > I couldn't even interrupt my own program. They had to reboot the machine
>>>> > > to fix it and I was told in no uncertain terms to never do that again!
>>>> >
>>>> > Fragile things, mainframes. They were not battle-hardened by exposure to
>>>> > inquisitive students, the way interactive timesharing systems were.
>>>>
>>>> Nonsense.
>>>
>>> Everything Lawrence says is nonsense.
>>> If only people would stop responding to him.
>>
>> Untruths need to be challenged.
>
> My suggestion for handling this would be to have a periodically
> posted FAQ that includes a section on cranks and bad-faith
> posters that mentions Lawrence.
>

Challenging idi^w uninformed presid^w posters rarely gets you anywhere
good.


--
Bah, and indeed Humbug.
Re: old pharts, Multics vs Unix vs mainframes [message #426775 is a reply to message #426774] Wed, 05 February 2025 14:32
Dan Cross
In article <20250205185807.78ec4cec6c4b6d1f447cdd9a@127.0.0.1>,
Kerr-Mudd, John <admin@127.0.0.1> wrote:
> On Wed, 5 Feb 2025 16:58:46 -0000 (UTC)
> cross@spitfire.i.gajendra.net (Dan Cross) wrote:
>
>> In article <0001HW.2D528791002FB78C30DA3538F@news.individual.net>,
>> Bill Findlay <findlaybill@blueyonder.co.uk> wrote:
>>> On 4 Feb 2025, Bob Martin wrote
>>> (in article <m0dt5gFpgnqU1@mid.individual.net>):
>>>
>>>> On 4 Feb 2025 at 01:15:08, moi<findlaybill@blueyonder.co.uk> wrote:
>>>> > On 04/02/2025 01:05, Lawrence D'Oliveiro wrote:
>>>> > > On Mon, 03 Feb 2025 15:03:10 GMT, Colin Macleod wrote:
>>>> > >
>>>> > > > I couldn't even interrupt my own program. They had to reboot the machine
>>>> > > > to fix it and I was told in no uncertain terms to never do that again!
>>>> > >
>>>> > > Fragile things, mainframes. They were not battle-hardened by exposure to
>>>> > > inquisitive students, the way interactive timesharing systems were.
>>>> >
>>>> > Nonsense.
>>>>
>>>> Everything Lawrence says is nonsense.
>>>> If only people would stop responding to him.
>>>
>>> Untruths need to be challenged.
>>
>> My suggestion for handling this would be to have a periodically
>> posted FAQ that includes a section on cranks and bad-faith
>> posters that mentions Lawrence.
>
> Challenging idi^w uninformed presid^w posters rarely gets you anywhere
> good.

Truth.

Though in the case of this particular uninformed poster, he need
not be challenged; one can simply put a blurb about him in the FAQ,
and post it on some regular cadence. Readers are free to use
that information or not, but it takes off any pressure to
respond directly to him.

- Dan C.
Re: Stress-testing of Mainframes (the HASP story) [message #426776 is a reply to message #426763] Wed, 05 February 2025 17:59
Peter Flass
Lars Poulsen <lars@cleo.beagle-ears.com> wrote:
> [snip; the HASP story, quoted in full above]
>
> ... Soon, the 1050 console was running 15 minutes behind, and the
> operator was unable to get a command in.
>

There were only so many WTO (console) buffers, too, so writing lots of
stuff to the console would also bog the system down.

--
Pete
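
For scale, a back-of-envelope figure (the printer speed is my
assumption, not from the posts): a 1050-class console printed on the
order of 15 characters per second, so a 60-character message takes
about 4 seconds, and a backlog of only a couple of hundred messages is
already well over 10 minutes of output -- consistent with Lars's
description of the console running 15 minutes behind.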
Re: Stress-testing of Mainframes (the HASP story) [message #426777 is a reply to message #426776] Wed, 05 February 2025 19:07 Go to previous messageGo to next message
Anonymous
Karma:
Originally posted by: Lars Poulsen

Lars Poulsen <lars@cleo.beagle-ears.com> wrote:
>> [snip; the duplicate-ID story, quoted in full above]

On 2025-02-05, Peter Flass <peter_flass@yahoo.com> wrote:
> There were only so many WTO (console) buffers, too, so writing lots of
> stuff to the console would also bog the system down.

The more I have learned since then, the more I understand just how bad
it was. It was an impressive denial-of-service attack for its time. As the
phone calls flew from the machine room operator to the operations
manager, to the head systems programmer, to the IBM field support, and
on and on, red faces of embarrassment must have triggered explosive
anger.

And like any other system vulnerability then or later, it was a simple
case of insufficient input validation. In retrospect, it was bound to
happen sooner or later. ;-)
Re: Stress-testing of Mainframes (the HASP story) [message #426778 is a reply to message #426777] Wed, 05 February 2025 22:16
Dan Cross
In article <slrnvq7v9f.kpad.lars@cleo.beagle-ears.com>,
Lars Poulsen <lars@cleo.beagle-ears.com> wrote:
> [snip; great story]
> On 2025-02-05, Peter Flass <peter_flass@yahoo.com> wrote:
>> There were only so many WTO (console) buffers, too, so writing lots of
>> stuff to the console would also bog the system down.
>
> The more I have learned later, the more I understand just how bad it
> was. It was an impressive denial-of-service attack for its time. As the
> phone calls flew from the machine room operator to the operations
> manager, to the head systems programmer, to the IBM field support, and
> on and on, red faces of embarrassment must have triggered explosive
> anger.
>
> And like any other system vulnerability then or later, it was a simple
> case of insufficient input validation. In retrospect, it was bound to
> happen sooner or later. ;-)

Ah! So, what you're saying is that you added value by identifying
the issue and its cause early on, allowing it to be corrected
before it appeared elsewhere. Well done! :-D

- Dan C.
Re: Stress-testing of Mainframes (the HASP story) [message #426779 is a reply to message #426763] Wed, 05 February 2025 22:36
John Levine
It appears that Lars Poulsen <lars@cleo.beagle-ears.com> said:
> The more I have learned later, the more I understand just how bad it
> was. It was an impressive denial-of-service attack for its time. As the
> phone calls flew from the machine room operator to the operations
> manager, to the head systems programmer, to the IBM field support, and
> on and on, red faces of embarrassment must have triggered explosive
> anger.

As I've mentioned before, I crashed Princeton's 360/91 with this two-line
Fortran program:

CALL MAIN
END

MAIN was the default name for a Fortran main program, so it recursively called
itself. The Fortran initialization code told the system to save the existing
floating point trap handlers, set up its own, and then restored them when the
program exited.

Except that the space where it saved the existing trap handlers wasn't very big.
Oops. Kaboom!
--
Regards,
John Levine, johnl@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
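
A C-flavored sketch of that failure mode: each entry to the runtime
prologue saves the previous trap handler into a fixed-size area with no
bounds check. The names and the size are illustrative assumptions, not
the actual OS/360 Fortran library, and this toy version checks the
bound so it can report the overflow instead of corrupting memory.

#include <stdio.h>
#include <stdlib.h>

#define SAVE_SLOTS 4            /* the real save area "wasn't very big" */

static long save_area[SAVE_SLOTS];
static int  depth = 0;

/* Mimic the Fortran prologue: save the old trap handler, install our
   own, then run the program body -- which is CALL MAIN again. */
static void fortran_main(void)
{
    if (depth >= SAVE_SLOTS) {  /* the real library had no such check */
        printf("save area overflow at depth %d -- kaboom\n", depth);
        exit(1);
    }
    save_area[depth] = depth;   /* stand-in for the old trap handler */
    depth++;
    fortran_main();             /* MAIN recursively calls itself */
}

int main(void)
{
    fortran_main();
    return 0;
}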
Re: Stress-testing of Mainframes (the HASP story) [message #426781 is a reply to message #426777] Thu, 06 February 2025 14:06
Anne & Lynn Wheeler
Lars Poulsen <lars@cleo.beagle-ears.com> writes:
> The more I have learned later, the more I understand just how bad it
> was. It was an impressive denial-of-service attack for its time. As the
> phone calls flew from the machine room operator to the operations
> manager, to the head systems programmer, to the IBM field support, and
> on and on, red faces of embarrassment must have triggered explosive
> anger.
>
> And like any other system vulnerability then or later, it was a simple
> case of insufficient input validation. In retrospect, it was bound to
> happen sooner or later. ;-)

As an undergraduate in the 60s, I had rewritten lots of CP67
.... including doing dynamic adaptive resource management and scheduling.
After graduation I joined the science center and one of my hobbies was
enhanced operating systems for internal datacenters.

With the decision to add virtual memory to all 370s came the creation
of the VM370 group, and some of the people in the science center split
off from CSC, taking over the IBM Boston Programming Center on the 3rd
flr (Multics was on the 5th flr, CSC was on the 4th flr, and the CSC
machine room was on the 2nd flr). In the morph of CP67->VM370, lots of
stuff was simplified and/or dropped: no more multiprocessor support,
in-queue scheduling time was based only on virtual problem CPU, kernel
integrity was really simplified, among other things.

Now a virtual machine could get into the top, interactive Q1 queue and
execute code that was almost all supervisor CPU (and very little
virtual problem CPU) ... resulting in run-away CPU use ... locking out
much of the rest of the users. The simplification in kernel integrity
resulted in "zombie" users. In 1974, I started migrating lots of CP67
to a VM370R2 base for my internal CSC/VM ... which included curing the
run-away CPU use and zombie users.

Another problem: CP67 would determine long-wait-state drop from queue
and interactive Q1 based on the real terminal type ... VM370 changed it
to the virtual terminal type. That worked OK as long as the virtual
terminal type was similar to the real terminal type ... which broke
when the CMS virtual terminal type was 3215 but the real terminal was a
3270. CMS would put up a READ for the 3215 and go into virtual wait
(waiting for the enter interrupt indicating end of typed input) and
would be dropped from queue. With a 3270, typing is saved in the local
buffer; the user hits enter and presents an ATTN to the system, CMS
does a read and goes into wait state and is dropped from queue, but the
end of the read comes almost immediately (rather than waiting for
somebody to finish typing).

CP67 kept a count of a virtual machine's active "high-speed"
real-device channel programs and, at entry to virtual wait state,
checked that count ... if it was zero, the virtual machine was dropped
from queue. VM370, at virtual machine entry to wait, would instead scan
the complete virtual device configuration looking for an active
"high-speed" device channel program, and a virtual 3215 didn't qualify.
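
A schematic contrast of the two queue-drop checks, in C; the data
structures and values are invented for illustration and are not
CP67/VM370 internals.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct device { bool high_speed; bool active_ccw; };

struct vm {
    int active_hispeed_ccws;    /* CP67: count maintained at I/O start/end */
    struct device *devices;     /* VM370: whole configuration, scanned */
    size_t ndevices;
};

/* CP67 style: O(1) check of the maintained count at entry to wait. */
static bool cp67_drop_from_queue(const struct vm *vm)
{
    return vm->active_hispeed_ccws == 0;
}

/* VM370 style: O(n) scan of the virtual configuration.  A virtual
   3215 console is not "high-speed", so a CMS terminal read never
   keeps the machine in queue, even though on a real 3270 the read
   completes almost immediately after the ATTN. */
static bool vm370_drop_from_queue(const struct vm *vm)
{
    for (size_t i = 0; i < vm->ndevices; i++)
        if (vm->devices[i].high_speed && vm->devices[i].active_ccw)
            return false;
    return true;
}

int main(void)
{
    struct device console3215 = { .high_speed = false, .active_ccw = true };
    struct vm cms = { .active_hispeed_ccws = 0,
                      .devices = &console3215, .ndevices = 1 };
    printf("CP67 drops: %d, VM370 drops: %d\n",
           cp67_drop_from_queue(&cms), vm370_drop_from_queue(&cms));
    return 0;
}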

After transferring out to SJR on the west coast, I ran into a different
problem. SJR replaced its 370/195 MVT system with a 370/168 running MVS
and a 370/158 running VM370. The installation included an MVS 3830 disk
controller with an MVS 3330 string, and a VM370 3830 disk controller
with a VM370 3330 string. Both the MVS & VM370 3830 disk controllers
had dual channel connections to both systems; however, there was a
strict rule that no MVS 3330 would ever be mounted on the VM370 string
.... but one morning an operator mounted an MVS 3330 on a VM370 drive
.... and almost immediately operations started getting irate phone calls
from all over the bldg.

The issue was that OS/360 and its descendants make extensive use of the
multi-track search CCW ... which can take 1/3rd sec elapsed time ...
which locks up the controller ... and locks out all devices on that
controller ... interfering with trivial interactive response involving
any disk I/O (MVS/TSO users are used to it, but the CMS users were used
to better than .25 sec interactive response).
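
For scale (my arithmetic, not from the post): a 3330 spins at 3,600
RPM, about 16.7 ms per revolution, and a cylinder holds 19 tracks, so a
search that examines a full cylinder ties up the controller for roughly
19 x 16.7 ms, about 0.32 seconds, the 1/3rd sec figure above.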

Demands that operations move the offending MVS disk to the MVS string
were met with the reply that it would be done off-shift. So we got a
VM370-tuned VS1 system, mounted its system pack on an MVS 3330 drive,
and started a program. The highly tuned VS1, even running on the loaded
VM370 158 ... could bring the MVS 168 nearly to a halt ... minimizing
its interference with CMS workloads. Operations immediately agreed to
move the offending MVS 3330 if we moved the VS1 3330.


--
virtualization experience starting Jan1968, online at home since Mar1970
Re: old pharts, Multics vs Unix [message #426782 is a reply to message #426770] Thu, 06 February 2025 15:55
Originally posted by: Lawrence D'Oliveiro

On Tue, 4 Feb 2025 23:35:43 +0000, moi wrote:

> I do not accept that ...

.... you quoted a part of my posting that you were not responding to, and
when I quoted the same part in my reply, you accused me of “bad faith”.
Re: Wang Terminals (Re: old pharts, Multics vs Unix) [message #426784 is a reply to message #426723] Sun, 09 February 2025 13:04
Originally posted by: David Lesher

I recall having to use Wang word processing. Besides its many
disadvantages vs. WordStar &/or WordPerfect, the terminals
needed two coax cables, one BNC, another TNC.

Yet at other times they needed only one. What was the story with
needing two, sometimes?



--
A host is a host from coast to coast...............wb8foz@panix.com
& no one will talk to a host that's close..........................
Unless the host (that isn't close).........................pob 1433
is busy, hung or dead....................................20915-1433
Re: Wang Terminals (Re: old pharts, Multics vs Unix) [message #426785 is a reply to message #426784] Sun, 09 February 2025 17:32
Originally posted by: Lars Poulsen

On 2025-02-09, David Lesher <wb8foz@panix.com> wrote:
> I recall having to use Wang wordprocessing. Besides many
> disadvantages vs. Wordstar &/or WordPerfect, the terminals
> needed two coax cables, one BNC, another TNC.
>
> Yet other times they only needed one. What was the story with
> the two needed, sometimes?

Gee, it's been 45 years, so my memory is just a bit shaky.

First, the systems you are comparing them to required that you have a PC
in the first place. The Wang 1200 WPS came out in 1976, the IBM 5150 did
not come out until 1981. WordStar for CP/M came out in 1978. WordPerfect
first came out in 1980 (under the name SSI*WP) and became WordPerfect
when it moved to MS-DOS in 1982.

When I was introduced to the WPS in 1978, I had been using documentation
scripting languages for about 8 years, mostly in the form of the Univac
Exec-8 @DOC program, which was great for programmers, but much more
suitable for technical documentation than for business correspondence.
WPS was mostly WYSIWYG, which made it much easier to use for office
people.

When Wang arrived, it owned this market by virtue of being first!
Wikipedia says: "WordPerfect 1.0 represented a significant departure
from the previous Wang standard for word processing."

As for the terminal wiring: I found a "maintenance manual"
http://bitsavers.informatik.uni-stuttgart.de/pdf/wang/ois/742-0664-1_OIS140-145_Maintenance_19871120.pdf
... but that seems to cover only installation procedures. But I think
the coax wiring is described in
http://bitsavers.informatik.uni-stuttgart.de/pdf/wang/vs/comms/742-1102_WangNet_Backbone_19850724.pdf

This describes an RF coax "loop" with branches. One cable is "outbound",
the other is "inbound".
Re: Wang Terminals (Re: old pharts, Multics vs Unix) [message #426786 is a reply to message #426785] Sun, 09 February 2025 18:49
Originally posted by: Lawrence D'Oliveiro

On Sun, 9 Feb 2025 22:32:32 -0000 (UTC), Lars Poulsen wrote:

> When I was introduced to the WPS in 1978, I had been using documentation
> scripting languages for about 8 years, mostly in the form of the Univac
> Exec-8 @DOC program, which was great for programmers, but much more
> suitable for technical documentation than for business correspondence.
> WPS was mostly WYSIWYG, which made it much easier to use for office
> people.

I wonder how office people collaborate on a document, though. How do you
merge contributions from two or more contributors using a WYSIWYG app,
without something equivalent to patch/diff?

> When Wang arrived, it owned this market by virtue of being first!
> Wikipedia says: "WordPerfect 1.0 represented a significant departure
> from the previous Wang standard for word processing."

Notice they say “Wang standard”, not “standard”. The world’s most popular
word processor app, right into at least the late 1980s, was IBM’s
DisplayWrite. This was because it was a close emulation of the
DisplayWriter word-processing machine, which was very popular in IBM shops
(i.e. most of the mainframe computing world). That was still a big enough
market to outweigh the consumer/SME PC market at the time.

This didn’t make it a good word processor; reviews regularly found it
pretty horrible to use. But people brainw^H^H^H^H^H^Hindoctrinated into
the IBM culture/ecosystem seemed to think it was great.