@8FF7.ADF RISC coprocessor card
EDSRPC.DGS RISC Processor Card Diagnostics
15F0059
6152 Reference Diskette (contains *.IMD image AND extracted files)
6152 "Crossbow" Adapter
6152 Memory Daughter Card
Multiple 6152 CPU Cards
6152 "Crossbow" Adapter
Base images courtesy of William R. Walsh.

J??,18 DIP headers for ???
P1,2 Headers for memory card
U3 90X0781(ESD)
U4 6298252
U5 05F3551(ESD)
U7 83X2761(ESD)
U8 83X2791(ESD)
U9 58X4276(ESD)
U17 23F7481(ESD)
U22 MC68881RC20A FPU
Y1 29.4912 MHz osc
Y2 20.000 MHz osc
The system was housed in a PS/2 Model 60 chassis, with 1 MB standard on the 6152 planar.
6152 Memory Daughter Card

U1 05F3130(ESD)
U6 BELFUSE 0447-0015-A3
There are three memory sizes available: 2, 4, and 8 MB. From this, I posit
that the Crossbow supports 256 KB, 512 KB, and 1 MB 30-pin SIMMs. Further,
note that there are five SIMMs per bank; I believe that one SIMM per bank
provides ECC, like that on the 7568 Gearbox systems.
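As a rough sanity check of that guess (the two-bank arrangement is my
assumption, not something confirmed on the card itself):
2 banks x 4 data SIMMs x 256 KB = 2 MB
2 banks x 4 data SIMMs x 512 KB = 4 MB
2 banks x 4 data SIMMs x 1 MB   = 8 MB
with the fifth SIMM in each bank holding the check bits.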
The SIMMs in William R. Walsh's machine were TI TM4256OU-12L modules with 9x
TMS4256FML-12 chips per SIMM.
Multiple 6152 CPU Cards
> Question from comp.sys.ibm.pc.rt article <F?.1.n91@cs.psu.edu>, ehrlich@cs.psu.edu (Dan Ehrlich):
The Dec. '88 release of AOS 4.3 supports 2 CPUs in an RT 6152. I do not
know if IBM ever released any instructions, although I vaguely remember seeing
them drift by somewhere. If memory serves, one also needs a modified version
of the 6152 configuration diskette (the one with the diagnostics) so the bus
addresses of the second CPU card can be set. One CPU could be used to run the X
server leaving the other for more useful computations.
Is there any other configuration possible? I'm somewhat more interested in
distributed computing applications, and doubling your processing power is
always nice. :-)
> Answer from Bill Webb:
Aha, somebody finally noticed that! I had intended to post this quite a
while back but as usual, until somebody brought it up I had forgotten about it.
I had sent out a beta-test of these instructions which was probably what Dan
had seen, but hadn't heard back anything so it just slipped until now.
Please note that the multiple CPU stuff is completely UNSUPPORTED by IBM -
it was a personal project that was sufficiently far advanced to get code into
the product but was much too late for the documentation and testing required
for a supported feature. This feature is not exactly secret as it was demo'ed
at the fall 1988 COMDEX as a technology demo with the two RISC bus masters
running BSD 4.3 on top of OS/2 (sigh).
There is also a paper that I'm going to present at the IBM internal Unix
conference next week; as it is not IBM CONFIDENTIAL, I hope that I can post
that here as well. In any case, if anybody does attempt to get a multiple-CPU
system going, please send me email with the results.
In any case here are some notes on how it is implemented followed by the
instructions for how one builds a multi-CPU IBM 6152 system. Enjoy!
Disclaimer: This is NOT a supported feature - use
at your own risk!
Multiple CPU Architecture for 6152
- processors run independently, each runs its own copy of IBM 4.3/6152.
- primary CPU (CPU 0) owns real devices (lan, printer, tape, asy, etc)
- all CPUs share disks (HD, FD, optical) but only 1 CPU writes to a given HD
partition.
- supports 1 or 2 additional processor cards, easiest case is 2 (total) cards
each of which has its "own" disk.
- access to "other" processor's disk is via read-only mount, or via NFS.
- an I/O support program (unix.exe), which handles disk and most other I/O,
runs under DOS or OS/2.
- based on PS/2 model 60 and runs on model 80.
- a software "microchannel" device implements a network connection between
CPUs. The primary CPU acts as a gateway to connect the secondary CPUs to
other machines.
- Unix drivers exist to allow other processor's memory to be accessed and the
CPUs to be controlled.
- minimal Unix kernel changes from latest ship kernel.
Performance
- CPU bound jobs run with no appreciable degradation compared to conventional
6152's (each processor has 2, 4, or 8 meg of private memory). Processors MAY
have different memory sizes.
- I/O bound jobs compete for the same resources such as disk and are
degraded, though total I/O throughput is higher. A Model 80 helps, as the
Model 60's 286 CPU otherwise becomes a bottleneck.
- a kernel compile that took 53 minutes with 1 CPU took 30 minutes with 2 and
30 minutes with 3; it appeared to be disk bound with 3 CPUs.
- sharing of work between processors was done at a high level with a tool
"mc" that one specifies to make as the C compiler (e.g.: make "CC=mc cc")
- my setup is to use X11 window manager and have a window for each secondary
CPU.
- many X benchmarks give almost 2x performance when the benchmark and server
run on different CPUs.
DOS implementation details
- one copy of unix.exe runs, with various structures changed to have one per
CPU. It implements a hot key (to switch the keyboard, screens, mouse, and
speaker from one CPU to another).
- halt/reboot requests are no-ops from any CPU but CPU 0.
- code provided to allow secondary CPUs to be automatically started from
primary CPU as part of normal boot sequence.
- "main loop" looks for requests from both CPUs; generally services each in
turn.
- about 1 week's effort to get first version working; about 1 month's effort
after that to fix bugs and add necessary features.
- code is in shippable state.
- runs on DOS 3.3 and has been run (once) on DOS 4.0
OS/2 implementation details
- one instance of unix.exe runs per CPU; each runs in its own full-screen session
- uses standard OS/2 hot-key to switch between CPUs and other OS/2
sessions
- runs on OS/2 1.1 (PM) and 1.0 (disk performance much better with 1.1)
- microchannel driver implemented using OS/2 shared memory segment.
- disk performance is good; network performance and "microchannel driver"
performance need work (about 10x slower than the DOS version).
- needs additional work to be a "proper" OS/2 program (totally event driven);
it currently polls for requests from the RISC processor that would best be
handled by event-driven threads. This would help performance and reduce the
drag on background tasks.
Other
A second CPU is very handy for kernel development and performance measurements,
as one can be working on code on the main CPU while running tests or debugging
on the other CPU. I've also found it nice to be able to recover, from home,
from installing a kernel with a fatal bug without needing someone on site.
Building a Multiple-CPU 6152
This assumes that you already have a 6152 with one CPU.
- obtain an additional CPU (possibly by removing it from another 6152)
- install the December release of the 6152 system (or at least the kernel,
boot, and unix.exe from December); you will also need the December
/usr/bin/X11/Xibm as it knows how to save the screen on hot-key events.
- install the new @8ff7.adf file onto the reference disk working copy (e.g.
via doswrite -va @8ff7.adf):
AdapterId 8ff7h
AdapterName "RISC coprocessor card"
NumBytes 4
NamedItem Prompt "I/O port"
  Choice "01e0-01ef"
    pos[3]=00011110b
    pos[0]=00000001b
    pos[1]=00011110b
    io 01e0h-01efh
    int 7
    arb 14
  Choice "01f0-01ff"
    pos[3]=00011111b
    pos[0]=00000001b
    pos[1]=00011100b
    io 01f0h-01ffh
    int 7
    arb 12
  Choice "200-20f"
    pos[3]=00100000b
    pos[0]=00000001b
    pos[1]=00011010b
    io 200h-20fh
    int 7
    arb 10
  Choice "Disabled"
    pos[3]=11111111b
    pos[0]=00000000b
    pos[1]=00000000b
  Help
    "Default I/O address is 1e0. <Disabled> disables the adapter."
- install the additional processor card, and use the reference diskette to
autoconfigure the system. If the two processors have different amounts of
memory, the first one (port 0x1e0) should have the larger amount, as that is
where one usually runs the X server.
- the simplest installation will have two processors, each with an HD for its
own use (you will need a root and a swap for each processor, but /usr can be
shared, either via mounting it read-only, or via nfs).
Note: For simplicity I will assume that each disk has the normal root, swap, and
/usr partitions.
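For concreteness, one possible layout (the partition letters follow the usual
4.3BSD convention and match the commands below; sizes are up to you):
hd0a  root for CPU 0        hd1a  root for CPU 1
hd0b  swap for CPU 0        hd1b  swap for CPU 1
hd0g  /usr for CPU 0        hd1g  /usr for CPU 1 (or share CPU 0's via NFS)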
- create two new host names, e.g. if the original system was 'frodo' then
create frodo-mc0 and frodo-mc1. frodo-mc0 will be the gateway machine for cpu1
to the rest of the world. (You should use your local naming conventions if
they are different from what we use.)
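For example, with made-up addresses (use whatever network number you actually
allocate for the mc0 link; frodo keeps its existing address on the real LAN),
the new /etc/hosts entries might look like:
192.9.200.1   frodo-mc0   # primary CPU's end of the software microchannel net
192.9.200.2   frodo-mc1   # secondary CPU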
- the hd1 disk can be created by cloning the hd0 disk, e.g. use fdisk and
minidisk to create the DOS/BIOS partition and the unix partitions respectively
and then copy the unix partitions via the normal newfs/dump/restore mechanism.
E.g.:
# build and populate the root filesystem for the second CPU
newfs -v hd1a
mount /dev/hd1a /mnt
cd /mnt
dump 0f - / | restore rvf -
cd /
umount /dev/hd1a
# build and populate its /usr filesystem
newfs -v hd1g
mount /dev/hd1g /mnt
cd /mnt
dump 0f - /usr | restore rvf -
cd /
umount /dev/hd1g
- change /etc/rc.config on hd1 to reflect the new hostname and network
address, e.g. change the network and hostname entries to:
network='mc0'
hostname='frodo-mc1'
- on the hd0 disk, add the following lines to rc.config, so that we make
frodo into a gateway (this may require allocating a new network number for
'frodo'):
network_2='mc0'
hostname_2='frodo-mc0'
net_flags_2="$net_flags"
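If you do allocate a new network number for the mc0 link, an /etc/networks
entry along these lines (the name and number here are only examples) lets
tools such as netstat print it by name:
frodo-mc    192.9.200    # software microchannel net between the CPUs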
- now you can boot up the system. Note that when unix.exe starts it will tell
you that you have 2 processors. The first processor should now be able to come
up normally. Once it gets to the login state you can hot-key to the second
processor (control-alt-esc), and boot it. It should come up on the second disk
by default (e.g. boot hd(1,0)vmunix.)
- if everything has worked properly you can add a line to /etc/rc.config so
that the second CPU is automatically started when the first is about to go
multiuser. This is done by specifying the following in /etc/rc.config:
ipl_cpu=1
- if one wants to reboot the second CPU, one can do so by first halting or
rebooting it (e.g. /etc/halt or /etc/reboot), and then issuing the following
commands (on CPU 0):
/etc/por 1
/etc/load /boot
/etc/ipl 1
Note: You must put the 6152 version of boot into /boot (rather than the RT
version).
Note: Messages about the state of the second CPU are displayed on the console
of the master CPU, so that one can determine whether it has halted or
attempted to reboot.
Note: If your kernel configuration file doesn't have the line
device mc0 at iocc0 csr 0xffffffff priority 13
then it will need to be added in order to send packets between the two RISC
CPUs.
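If you do need to add that line, the kernel is rebuilt with the usual 4.3BSD
config procedure; the paths and the FRODO configuration name below are only
examples and may differ in the 6152/AOS source tree:
# usual 4.3BSD kernel rebuild; /usr/sys and the name FRODO are examples
cd /usr/sys/conf
# ... add the "device mc0 ..." line to the FRODO configuration file ...
config FRODO
cd ../FRODO
make depend
make vmunix
# install the new vmunix on the disk(s) that need it, then reboot that CPU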
--
The above views are my own, not those of my employer.
Bill Webb (IBM AWD Palo Alto), (415) 855-4457.
UUCP: ...!uunet!ibmsupt!webb