Friday, June 10, 2005

OK, so one of the new complaints about the attitude of the Mac community is the apparent change of heart about Intel and the Pentium versus PowerPC. As a long-standing hater of the Intel architecture, I can sum it up briefly: I still don't like it, but it really doesn't matter any more.

For the first 5 years of my career, I coded almost exclusively in assembly language, mostly for PDP-11s. I could fairly readily translate octal into assembler in my head, could tell you how many milliseconds(!) a sequence of code would take to execute, and knew all sorts of gnarly tricks for packing the most functionality into the fewest bytes or milliseconds. I also coded on other, more primitive processors (one of which lacked a subtract instruction, for example).
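(For the curious: the standard trick on a machine without a subtract instruction was to synthesize subtraction from two's-complement negation and addition. Here's a minimal illustration in C; the real thing was hand-written assembly, naturally, so this is just to show the trick.)

    #include <stdio.h>
    #include <stdint.h>

    /* a - b == a + (~b + 1): negate b by complement-and-increment, then add.
       That's the whole trick; no subtract instruction required. */
    static uint16_t sub_via_add(uint16_t a, uint16_t b)
    {
        return a + (uint16_t)(~b + 1);
    }

    int main(void)
    {
        printf("%d\n", sub_via_add(1000, 42));   /* prints 958 */
        return 0;
    }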

In 1979, I started coding for the Motorola 68000 (only pre-production evaluation chips were available to us at the time), and developed a love-hate relationship with the 68K: richer instruction set, more registers, but "lop-sided": special-purpose registers, addressing modes available only in certain contexts, etc. In 1983, I started working for a PC software company, and was introduced to the original 68K Mac. I immediately liked the idea of a 68K-based personal computer, but had my suspicions about the mouse as an input device. (See? We can learn to adapt...)

In 1985 I was assigned to work on a DOS-based TSR ("Terminate and Stay Resident") product, and was thus thrown into the deep end of the Intel processor world. TSR applications worked by hooking many of the interrupts used by MS-DOS (BIOS calls, keyboard interrupts, etc.) and inserting their own functionality to "enhance" MS-DOS; that meant working directly with the 808x instruction set. After working on the 68K, I hated the 808x: limited registers, primitive instructions, weird addressing modes and modalities (like the "direction bit"), and don't get me started on the addressing model! A physical address was computed as segment times 16 plus offset, so most bytes in the address space could be addressed by up to 4096 different segment/offset pairs. What a frigging waste; it made simple things like address comparison unnecessarily tedious.
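To make the aliasing concrete, here's a tiny sketch in modern, portable C (the linear() helper is mine, purely for illustration) showing three different-looking seg:off pairs landing on the same physical byte:

    #include <stdio.h>
    #include <stdint.h>

    /* Real-mode 8086 address translation: physical = segment * 16 + offset,
       yielding a 20-bit address. */
    static uint32_t linear(uint16_t seg, uint16_t off)
    {
        return ((uint32_t)seg << 4) + off;
    }

    int main(void)
    {
        /* Three different seg:off pairs, one physical address (0x12345): */
        printf("%05lX\n", (unsigned long)linear(0x1234, 0x0005));
        printf("%05lX\n", (unsigned long)linear(0x1230, 0x0045));
        printf("%05lX\n", (unsigned long)linear(0x1000, 0x2345));
        /* Compare the raw seg:off pairs bitwise and they look unequal;
           only after normalizing to a linear address do they match. */
        return 0;
    }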

The 80286 was itself a marvelous example of why I believe Intel couldn't design decent processors. The 80286 introduced "protected mode", which allowed for 24-bit addressing and a 16 megabyte address space instead of the one meg address space of the 8086 (shortened to the infamous 640K by the needs of the PC architecture). The 80286 needed to offer continued support for "real (8086) mode" for legacy software, but Intel left something out: the chip booted into real mode, and could be switched into protected mode by software, but could not be switched back! This led to a laughable hack by IBM for the 80286-based PC "AT" model (the BIOS would command the keyboard controller to reset the CPU, then use a flag stashed in CMOS to skip the power-on self-test and resume where it left off), and severely limited the extent to which protected mode could be used.

Later in the 80's I started working with C, which at least papered over the instruction set somewhat. Still, given the state of the tools at the time, it was helpful during debugging to know assembler and watch how your code was executing, regardless of the processor. But the 808x was still tedious to work with because it required unnatural acts with the compiler to handle the various memory models (near/far code, near/far data, aarrggh!).
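If you never had the pleasure, here's roughly what that looked like, using the non-standard near/far keywords of the Borland and Microsoft compilers of the era (a period illustration only; no modern compiler will touch this):

    /* Memory-model tedium, 16-bit DOS style. */
    char near *scratch;                          /* 16-bit offset into the data segment */
    char far  *video = (char far *)0xB8000000L;  /* 32-bit seg:off, B800:0000 text RAM  */

    void copy_far(char far *dst, const char far *src, unsigned n)
    {
        while (n--)
            *dst++ = *src++;   /* far pointer arithmetic wraps the offset within
                                  its segment; crossing 64K needed "huge" pointers */
    }

Pick the wrong model, or mix pointer flavors, and the compiler (or worse, the running program) would let you know.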

[During this time I got a look at the NeXT machine and interviewed for a position on a team developing a NeXT app; although the system was cool (and, yeah, because it was Steve's), I wasn't too sure about NeXTSTEP and Objective-C... Damn; missed another call!]

Still later in the 80's I got back to Mac development, finally. The app we were building stretched the Mac memory model (which divided code into segments for various reasons), which required some cleverness (i.e., trap patching) on our part to handle our large code requirements, but it still felt like I was spending more time getting my app written and less time fighting the architecture. I was so glad to be away from the 8088 and 80286.

During my work on that application, I got to see a few nifty things: a demo of the now-infamous "Star Trek", where Apple and Novell got the Mac OS running on Wintel hardware. (More thoughts on that project in a later entry.) I also saw a demo of my own Mac software running under emulation on a Motorola 88000, a RISC chip that was Apple's early choice to succeed the 680x0. Later, I also participated in investigations of how to port that application to PowerPC, after Apple shifted its sights to that architecture. We did extensive work trying to employ a binary translation technology offered by an AT&T spinoff called Echo Logic (yet another future blog entry), but that approach turned out not to be viable for us.

In 1993 I went to work for a company developing a C++ IDE for the Apple PowerPC platform. (If anybody cares who "RetiredMidn" is, there's one of the best clues you're going to get.) I didn't work on the compiler, but it was important for all of us on the project to become familiar with the PowerPC; I got somewhat familiar with the instruction set for debugging, but with the availability of source-level debugging, it was less important than general knowledge of the runtime architecture and format of the application binaries.

[While working for that employer I had a chance to look at some of Apple's future OS efforts, specifically Copland. My impression at the time: they were trying too hard. Yet another future blog entry.]

After a brief and disastrous experience maintaining code built around Microsoft's COM/OLE, I started writing Java code for a living. At this point, my assembly language skills and instincts are mostly an anachronism and sometimes a hindrance. Although Java developers sincerely worry about efficiency, it's at a level that's a joke to your average assembly-level programmer; most Java developers have no clue how much work the processor has to do to execute a particular sequence of code, and anyone who tells you they do know is probably lying, once you take into consideration the optimizations that can occur in the runtime by JIT compilation, and in the processor by pipelining and multiple execution units. Processor performance and the nuances of architecture and instruction sets are now the domain of a small group of compiler developers and their counterparts writing runtime engines for intermediate representations like Java's and the C#/.NET runtime; those of us writing applications above that layer are thoroughly insulated from it.
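The same holds even in plain C these days; cycle-counting instincts die at the optimizer's doorstep. A quick illustration (the closed-form reduction described in the comment is typical of GCC and Clang at -O2, but that's an observation about common compilers, not a guarantee):

    #include <stdio.h>

    /* This looks like n additions. A modern optimizer will often replace
       the entire loop with the closed form n * (n + 1) / 2, making any
       instruction-level cost estimate of the source text meaningless. */
    static long long triangular(long long n)
    {
        long long sum = 0;
        for (long long i = 1; i <= n; i++)
            sum += i;
        return sum;
    }

    int main(void)
    {
        printf("%lld\n", triangular(100000));   /* 5000050000 */
        return 0;
    }

Compare the generated assembly at -O0 and -O2 and you'll see two very different programs for the same source.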

And that's why Intel's processor architectures and instruction sets, while not appealing to me, are not offensive, either; give me a decent compiler and runtime, and the differences are lost on me. In fact, conventional wisdom among those who pay more attention to the matter than I do suggests that, while it would be nicer to hand-code assembler for the PowerPC architecture, the commonly available compilers (gcc and Intel's) do a better job optimizing high-level code for the Intel platform, so, all other things being equal, that's where my code will perform better. (Excepting AltiVec, but I don't work in that niche.)

For the specific universe of OS X applications, I'm an informed bystander. I was not responsible for any applications that needed to be ported to OS X, but I've kept up to date with OS X development on my own, and I am intimately familiar with two highly complex classic Mac applications and can appreciate the challenge involved in porting them forward through Apple's major transitions. I won't go into the details here, but the conclusions are simple: applications that are more deeply rooted in past architectures and runtimes are going to have the hardest time moving forward; applications written to current standards will have the least trouble. And guess what? More often than not, the older applications are probably nearing the end of their useful life anyway; the "tremor" caused by Apple's transition will possibly hasten their end, but the outcome was inevitable regardless. The Mac user community will emerge, again, into a world where the available apps meet their needs, and one less cluttered by obsolescence.

Looking even further forward, Apple is putting itself on a foundation where applications can be easily developed and deployed for an arbitrary number of processor architectures, not just two; an application installed on a server will be simultaneously launchable from a PowerPC Mac and an Intel Mac; and an AMD Mac; and a Cell-based Mac; and a hypothetical IBM mainframe Mac running OS XI Server, should that particular scenario make sense in the future.

It's always good to have options.
