Tuesday, June 14, 2005

FUD: OS X on generic PCs

John Dvorak, in PC Magazine, and Jason Brooks, in eWeek (or at least on their respective sites), seem to think it's inevitable that OS X is going to run on generic PCs, sooner rather than later. Maybe, just maybe, it'll happen later, but I don't see it happening any time soon, in spite of their "reasoning".

Follow the Money

Jason at least anticipates the counter-argument that "Apple is a hardware company," but he dismisses it on the basis that Apple's innovative strength lies in the OS X software. That's mostly true, as far as it goes, but the inescapable fact is that the big bucks are in the hardware, and the move to Intel, done correctly, could lead to huge gains in hardware-generated revenue with no erosion of OS and application software revenue.

No matter how you slice it, if OS X were to run on generic PCs, Apple might sell more copies of OS X, but it would lose hardware sales to people looking for discounted hardware. As Jason does point out, Apple's advantage in industrial design would retain some customers (like me), but some significant percentage of sales would shift; even I would consider a generic machine as a kids' computer in the house, or a laptop headed off to college, for example. I can't imagine software sales making up for this.

On the other hand, if Apple ships boxes that can run OS X and Windows XP, but generic PCs are precluded from running OS X, Apple keeps (virtually) all of its existing hardware customer base, and adds to that Windows users who either a) are attracted to Apple's better hardware designs, or b) want the option of migrating to OS X later. All upside, virtually no downside: the only loss is the Mac users who defect to Windows, which I believe to be a very small number, and even some of them will hedge their bets with dual-purpose hardware.

"Try and stop me!"

...quoting John Siracusa in one of his many excellent Ars Technica columns. Dvorak believes that the hackers who would want to crack Apple's hardware protection won't be denied. Siracusa's coverage of this topic is, in my opinion, much more realistic.

Someday, however...

Apple does retain the ability to release OS X "into the wild" if and when that option becomes attractive. With a little help from third-party software, I have OS X 10.3.9 running on a nine-year-old PowerMac 7500. The machine has been upgraded with a G3 processor, but otherwise it doesn't match the specs of an OS X-capable Mac: ADB keyboard and mouse, SCSI disk drives, and (at the time OS X was installed) no FireWire or USB at all. It would not be a reach for OS X to run on generic Intel hardware, considering Darwin already does.

So Apple could throw the switch on opening OS X to non-Apple hardware, but I expect it will choose that moment very carefully, and delay it as long as possible while it milks the benefits of building and selling hardware with broader appeal than ever.

What's this BS about spyware and viruses?

Dvorak also seems to think it's inevitable that the new Macs and OS X will attract, and be unable to fend off, a new rush of malware. Maybe I'm missing something, but I don't think he gets it.

It's probably the case that malware authors will be more attracted to OS X if it is installed in vastly larger numbers. (On the whole, I suspect that's a problem Apple wouldn't mind having.) But wanting to attack and being able to carry it off are two different things. Just because the new machines run the same instruction set as Wintel PCs doesn't mean the virus code is now portable; most malware needs to use system APIs to do the damage, and OS X and Windows remain worlds apart as far as the APIs go. Between the need to write portable malware, and the fact that OS X is locked down tighter than Windows (or is less vulnerable, if you're a glass-is-half-empty type), I expect OS X to remain unappealing to malware authors.
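
To make the "worlds apart" point concrete, here's a minimal sketch (my illustration, not anything from Dvorak's column) of why a shared instruction set doesn't buy malware portability: even something as mundane as deleting a file goes through a completely different API on each platform, so x86 code compiled against one set of system entry points has nothing to call on the other.

    #include <stdio.h>

    #ifdef _WIN32
    #include <windows.h>   /* Win32 system API: DeleteFileA, etc. */
    #else
    #include <unistd.h>    /* POSIX API, which is what OS X exposes: unlink, etc. */
    #endif

    /* Remove a file using whichever system API is actually present.
       The x86 instructions are the same on both platforms; the OS
       entry points the code has to call are not. */
    static int remove_file(const char *path)
    {
    #ifdef _WIN32
        return DeleteFileA(path) ? 0 : -1;
    #else
        return unlink(path);
    #endif
    }

    int main(void)
    {
        /* "scratch.tmp" is just a placeholder name for this sketch. */
        if (remove_file("scratch.tmp") != 0)
            fprintf(stderr, "couldn't remove scratch.tmp\n");
        return 0;
    }

(Real malware, of course, leans on far more OS-specific machinery than this: services, registry keys, mail clients, and so on, which only widens the gap.)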

Bottom line

I still see way more upside than downside in this move for Apple, especially with the potential for increased hardware revenue and a lower threshold for Windows users to explore, and ultimately adopt, OS X without a precipitous move away from their investment in Wintel. The big question mark in my mind is to what extent people will hold off on buying Mac hardware until the switch occurs; I'm sure there will be some of that.

Friday, June 10, 2005

OK, so one of the new complaints about the attitude of the Mac community is its apparent change of heart about Intel and the Pentium versus the PowerPC. As a long-standing hater of the Intel architecture, I can sum it up briefly: I still don't like it, but it really doesn't matter any more.

For the first five years of my career, I coded almost exclusively in assembly language, mostly for PDP-11s. I could fairly readily translate octal into assembler in my head, could tell you how many milliseconds(!) a sequence of code would take to execute, and knew all sorts of gnarly tricks for packing the most functionality into the fewest bytes or milliseconds. I also coded on other, more primitive processors (one of which lacked a subtract instruction, for example).
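
As an aside, here's a tiny, hypothetical illustration (in C, for readability; it's not code from any of those old machines) of the kind of trick a missing subtract instruction forces on you: in two's-complement arithmetic, a - b is just a + ~b + 1, so an adder and a complement operation are all you really need.

    #include <stdio.h>
    #include <stdint.h>

    /* Subtraction synthesized from add and complement:
       a - b == a + (~b) + 1 in two's-complement arithmetic. */
    static uint16_t sub_via_add(uint16_t a, uint16_t b)
    {
        return (uint16_t)(a + (uint16_t)~b + 1u);
    }

    int main(void)
    {
        printf("100 - 42 = %u\n", (unsigned)sub_via_add(100u, 42u));  /* prints 58 */
        return 0;
    }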

In 1979, I started coding for the Motorola 68000 (only pre-production evaluation chips were available to us at the time), and developed a love-hate relationship with the 68K: a richer instruction set and more registers, but "lop-sided", with special-purpose registers, addressing modes available only in certain contexts, and so on. In 1983, I started working for a PC software company and was introduced to the original 68K Mac. I immediately liked the idea of a 68K-based personal computer, but had my suspicions about the mouse as an input device. (See? We can learn to adapt...)

In 1985 I was assigned to work on a DOS-based TSR ("Terminate and Stay Resident") product, and was thus thrown into the deep end of the Intel processor world. TSR applications worked by hooking many of the interrupts used by MS-DOS (BIOS calls, keyboard interrupts, etc.) and inserting their own functionality to "enhance" MS-DOS, which necessarily meant working directly with the 808x instruction set. After working on the 68K, I hated the 808x: limited registers, primitive instructions, weird addressing modes and modalities (like the "direction bit"), and don't get me started on the addressing model! What a frigging waste that most bytes in the address space could be addressed by thousands of different segment/offset pairs (up to 4,096 of them), which made simple things like address comparison unnecessarily tedious.
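
To see why address comparison was such a chore, here's a quick sketch of the arithmetic (just the math, not code from that product): a real-mode physical address is segment * 16 + offset, so two far pointers that look nothing alike can refer to exactly the same byte, and you had to normalize them before comparing.

    #include <stdio.h>
    #include <stdint.h>

    /* Real-mode 8086 address arithmetic: physical = segment * 16 + offset.
       (Modeled with plain integers; no actual far pointers involved.) */
    static uint32_t physical(uint16_t seg, uint16_t off)
    {
        return ((uint32_t)seg << 4) + off;
    }

    int main(void)
    {
        /* Two very different-looking segment/offset pairs... */
        uint32_t a = physical(0x1234, 0x0005);  /* 0x12340 + 0x0005 = 0x12345 */
        uint32_t b = physical(0x1000, 0x2345);  /* 0x10000 + 0x2345 = 0x12345 */

        /* ...resolve to exactly the same byte, which is why comparing
           raw segment:offset values told you nothing useful. */
        printf("0x1234:0x0005 -> 0x%05lX\n", (unsigned long)a);
        printf("0x1000:0x2345 -> 0x%05lX\n", (unsigned long)b);
        printf("same byte? %s\n", (a == b) ? "yes" : "no");
        return 0;
    }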

The 80286 was itself a marvelous example of why I believe Intel couldn't design decent processors. The 80286 introduced "protected mode", which allowed for 24-bit addressing and a 16MB address space, far larger than the one meg address space of the 8086 (shortened to the infamous 640K by the needs of the PC architecture). The 80286 needed to offer continued support for "real (8086) mode" for legacy software, but Intel left something out: the chip booted into real mode, and could be switched into protected mode by software, but could not be switched back! This led to a laughable hack by IBM for the 80286-based PC "AT" model, and severely limited the extent to which protected mode could be used.

Later in the 80's I started working with C, which at least papered over the instruction set somewhat. Still, given the state of the tools at the time, it was helpful during debugging to know assembler and to watch how your code was executing, regardless of the processor. But the 808x was still tedious to work with, because it required unnatural acts with the compiler to handle the various memory models (near/far code, near/far data, aarrggh!).

[During this time I got a look at the NeXT machine and interviewed for a position on a team developing a NeXT app; although the system was cool (and, yeah, because it was Steve's), I wasn't too sure about NeXTSTEP and Objective-C... Damn; missed another call!]

Still later in the 80's I got back to Mac development, finally. The app we were building stretched the Mac memory model (which divided code into segments for various reasons), which required some cleverness (i.e., trap patching) on our part to handle our large code requirements, but it still felt like I was spending more time getting my app written and less time fighting the architecture. I was so glad to be away from the 8088 and 80286.

During my work on that application, I got to see a few nifty things: a demo of the now-infamous "Star Trek", where Apple and Novell got the Mac OS running on Wintel hardware. (More thoughts on that project in a later entry.) I also saw a demo of my own Mac software running under emulation on a Motorola 88000, a RISC chip that was Apple's early choice to succeed the 680x0. Later, I also participated in investigations of how to port that application to PowerPC, after Apple shifted its sights to that architecture. We did extensive work trying to employ a binary translation technology offered by an AT&T spinoff called Echo Logic (yet another future blog entry), but that approach turned out not to be viable for us.

In 1993 I went to work for a company developing a C++ IDE for the Apple PowerPC platform. (If anybody cares who "RetiredMidn" is, there's one of the best clues you're going to get.) I didn't work on the compiler, but it was important for all of us on the project to become familiar with the PowerPC; I got somewhat familiar with the instruction set for debugging, but with the availability of source-level debugging, it was less important than general knowledge of the runtime architecture and format of the application binaries.

[While working for that employer I had a chance to look at some of Apple's future OS efforts, specifically Copland. My impression at the time: they were trying too hard. Yet another future blog entry.]

After a brief and disastrous experience maintaining code built around Microsoft's COM/OLE, I started writing Java code for a living. At this point, my assembly language skills and instincts are mostly an anachronism and sometimes a hindrance. Although Java developers sincerely worry about efficiency, it's at a level that's a joke to your average assembly-level programmer; most Java developers have no clue how much work the processor has to do to execute a particular sequence of code, and anyone who tells you they do know is probably lying, once you take into consideration the optimizations that can occur in the runtime through JIT compilation, and in the processor through pipelining and multiple execution units. Processor performance and the nuances of architecture and instruction sets are now the domain of a small group of compiler developers and their counterparts writing runtime engines for intermediate representations like Java's and the C#/.NET runtime; those of us writing applications above that layer are thoroughly insulated from them.

And that's why Intel's processor architectures and instruction sets, while not appealing to me, are not offensive, either; give me a decent compiler and runtime, and the differences are lost on me. In fact, conventional wisdom among those who pay more attention to the matter than I do suggests that, while it would be nicer to hand-code assembler for the PowerPC architecture, the commonly available compilers (gcc and Intel's) do a better job of optimizing high-level code for the Intel platform, so, all other things being equal, that's where my code will perform better. (Excepting AltiVec, but I don't work in that niche.)

For the specific universe of OS X applications, I'm an informed bystander. I was not responsible for any applications that needed to be ported to OS X, but I've kept up to date with OS X development on my own, and I am intimately familiar with two highly complex classic Mac applications and can appreciate the challenge involved in porting them forward through Apple's major transitions. I won't go into the details here, but the conclusions are simple: applications that are more deeply rooted in past architectures and runtimes are going to have the hardest time moving forward; applications written to current standards will have the easiest. And guess what? More often than not, the older applications are probably nearing the end of their useful life anyway; the "tremor" caused by Apple's transition may hasten their end, but the outcome was inevitable. The Mac user community will emerge, again, into a world where the available apps meet its needs, a world less cluttered by obsolescence.

Looking even further forward, Apple is putting itself on a foundation where applications can be easily developed and deployed for an arbitrary number of processor architectures, not just two; an application installed on a server will be simultaneously launchable from a PowerPC Mac and an Intel Mac; and an AMD Mac; and a Cell-based Mac; and a hypothetical IBM mainframe Mac running OS XI Server, should that particular scenario make sense in the future.

It's always good to have options.

Entering the blogosphere

In anticipation of a whole bunch of FUD to be stirred up around the Apple/Intel announcement this week, I've decided to establish a foothold in the blogosphere and add to the noise.

My initial take: It's a great move by Apple, nicely timed, and some of the most intriguing aspects of the story haven't even been broached yet. It's going to be a fun couple of years.

My background: I've been a software developer for just about 30 years, switching around among languages and platforms. I'm a Mac developer (and user) by choice, but I'm currently working with Java for web application development to pay the bills. I post to Slashdot and other sites under the name "RetiredMidn"; hence the blog title.

My nom de plume derives from the fact that I was a Midshipman at the U.S. Naval Academy for a short time, before the Navy introduced me to computer programming and I realized what I really wanted to do with my career. While I was there, I was spared a serious ass-chewing by a prank attributed to the ghost of "Philo McGiffin", who is said to haunt the academy; there's an excellent writeup of the career of the real Philo McGiffin here.

More to come, as soon as I figure out how to hook up all those blogging tools I bought on spec a while ago...