I read an interview with Randy Katz, one of the authors of the original RAID paper, in which he said the academics initially thought RAID would be used as a performance solution, aggregating the performance of many spindles. He went on to say they were surprised that it was the file server crowd, who couldn’t afford more reliable drives costing 10X or more per GB than the 5 ¼” HDDs we were using, that adopted RAID for resiliency.
Of course, 10 years later we were short-stroking 15K RPM drives to tweak a few more IOPS out of our Clariions and Syms, so RAID ended up taking over the whole storage world for a time, but the 21st century is a story for another blog post.
Now that we’ve reviewed the basic taxonomy of RAID it’s time for another look at computer history, or at least my personal journey through computer history.
The mainframes, minicomputers, and VAXen of my school days each had a string of 14” disk drives, from SMD drives that looked like they came out of the Maytag factory to 6U Priam drives. My first real business, ProComp Systems, turned those Priam, and later 5 ¼”, HDDs into subsystems with SASI (Shugart Associates System Interface, the predecessor to SCSI) controllers and BIOS code for S-100 bus MP/M and TurboDOS systems, and PC desktops for NetWare servers.
The NetWare systems we built in the ‘80s used software mirroring until RAID controllers from Compaq, Mylex, StorageDimensions and TriCord hit the market in the mid ‘90s. Once they did, we shifted to RAID pretty much whole hog, from internal RAID with Mylex cards and SmartArrays to external SCSI-to-SCSI RAID systems, including Data General’s Clariion.
By the time I refocused from NetWare to Windows NT 3.51 as my server platform, the built-in volume manager included striped and concatenated volumes along with RAID 5-ish striped volumes with parity, but Microsoft never optimized their software RAID, making hardware RAID controllers a requirement for most applications.
It turns out the ‘386 through Pentium Pro processors of the day didn’t implement XOR as a native microcode instruction, so it took many clock cycles to calculate, or check, the parity for each I/O. Intel had another processor in the parts bin, the i960 RISC processor, which could perform XORs in just a few clock cycles, and that i960 was the brains behind pretty much every RAID controller well into the 2000s.
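To see why XOR speed mattered so much, here’s a minimal sketch (in Python, purely for illustration; real controllers did this in firmware over much larger blocks) of the byte-wise XOR parity a RAID 5 controller computes, and how the very same XOR rebuilds a lost block:

```python
def xor_parity(blocks):
    """Byte-wise XOR of equal-sized blocks -- the RAID 5 parity calculation."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

# Three data strips plus one parity strip, as on a 4-drive RAID 5 set.
data = [b"ABCD", b"1234", b"wxyz"]
parity = xor_parity(data)

# Lose any one strip and XORing the survivors with the parity rebuilds it.
rebuilt = xor_parity([data[0], data[2], parity])
assert rebuilt == data[1]
```

Every write means re-running that XOR across the stripe (or XORing old data, new data, and old parity), which is why offloading the work to an i960 beat burning host CPU cycles on it.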
As a Windows guy I made the transition to SAN technology a bit after some of my Unix sysadmin brethren, so we’ll end this chapter around the turn of the century, with Windows storage technology at SCSI arrays with up to a dozen SCSI ports for host connections. Windows Server Clustering, codenamed Wolfpack, was brand new and relied on these shared SCSI systems.
vDisks, Volumes, and LUNs, Oh My
The basic function of a RAID controller, whether that’s a hardware device like a Compaq SmartArray or a software module in a host operating system’s volume manager, is to aggregate the drives in a RAIDset and present that capacity as a virtualized disk drive.
Over the years the storage community has settled on calling the virtual disks an array presents to a host LUNs. The acronym LUN stands for Logical Unit Number, which in SCSI-speak is the bus address of a specific drive, a concept that dates back to when SCSI was a 50- or 68-line parallel bus and each device on the bus needed a unique address.
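As a rough illustration of that virtualization (a simplified RAID 0-style striped layout; real controllers add parity rotation, caching, and plenty more), here’s a sketch of how a controller might translate a block address on the virtual disk it presents into a member drive and an offset on that drive:

```python
def map_virtual_block(vblock, num_drives, stripe_blocks):
    """Map a block number on the virtual disk to (drive, block-on-drive)
    for a simple striped layout with stripe units of stripe_blocks blocks."""
    stripe = vblock // stripe_blocks                # which stripe unit overall
    drive = stripe % num_drives                     # stripe units rotate across drives
    drive_block = (stripe // num_drives) * stripe_blocks + vblock % stripe_blocks
    return drive, drive_block

# 3 drives, 4-block stripe units: virtual blocks 0-3 land on drive 0,
# 4-7 on drive 1, 8-11 on drive 2, then 12-15 wrap back to drive 0.
print(map_virtual_block(13, num_drives=3, stripe_blocks=4))  # (0, 5)
```

The host just sees one big disk and reads or writes virtual block numbers; the controller quietly fans those I/Os out across the spindles.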
Next time we’ll start looking at how smart engineers have managed to extend the concepts of RAID to build ever better products by combining them.