Monday, February 20, 2006

State of the Next-Gen Consoles - Part I: A Brief History of the Xbox 360

Microsoft screwed everyone up. By everyone, I mean their competitors… the juggernaut Sony, and the tenacious Nintendo. They released the Xbox 360 a full year earlier than Nintendo had hoped, and possibly two years earlier than Sony really would have liked.

The original Xbox was a complete failure… financially. The Xbox division lost money in nearly every quarter since launch. Microsoft had no experience in the highly volatile console industry and cobbled together a thinly veiled PC in (enormous) console clothing. To make matters worse, the contracts they signed with some component suppliers left little room for the declining-cost model that has become standard in console life cycles. At one point Microsoft actually took NVIDIA to arbitration over the Xbox GPU pricing, and lost. It’s rumored that Microsoft wasn’t too happy with their Intel deal either. Although they negotiated fantastic pricing for the technology at the time, as the generation wore on they lacked both the IP ownership and the cost-reducing manufacturing flexibility enjoyed by both the PS2 and the GameCube. In spite of the errors Microsoft made, they lost much less money than they could have.

The Xbox was unarguably the most powerful of its peers (that’s right, don’t argue!) and it had some gems in its library. The most significant was the system-selling Halo, which greatly aided the console’s launch. However, it wasn’t just Halo or the Xbox’s hardware superiority that saved it; the main reason the Xbox fared better than it might have was its online strategy. The comprehensive ‘Xbox Live’ played to the strengths of the console’s designers. With years of high-load networking experience, Microsoft developed the first true top-to-bottom console network, robust enough to convince people to actually pay for it. As big a success as Xbox Live was, the Xbox business was a sinking ship in terms of its financial prospects. Sony’s installed base of PS2s was running away with the lead, and Nintendo’s GameCube was hugely undercutting both consoles in price. The anchor was the Xbox hardware itself; bleeding cash, Microsoft couldn’t wait to replace it.

Although some would argue that the Xbox was merely a planned foot in the door from Microsoft’s perspective, history will remember differently. If Microsoft had truly been forward-looking, they would have secured their hardware’s intellectual property, at least insofar as guaranteeing complete backwards compatibility with its successor. When Microsoft began developing the Xbox 360 they took a long hard look at the successes and errors they made with the Xbox, and then contrasted those with their competitors’. They knew the Xbox hardware model was broken, but how to fix it? Sony’s model for the PS2 (and the CPU of the PS3) was one of expensive long-term hardware co-development. Although Microsoft had the financial resources, they lacked Sony’s hardware experience and, more so, they lacked the time. Nintendo’s GameCube model, on the other hand, was much more hands-off. The contracted IBM “Gekko” CPU and ATI “Flipper” GPU of the GameCube ended up being quite competitive hardware-wise in spite of its bargain price. The difference was that where Microsoft contracted to buy complete chips from NVIDIA and Intel at a set price, Nintendo licensed only the chip designs, so they could reduce costs more easily by contracting out fabrication and taking full advantage of successive die shrinks. Microsoft loved the model. They loved that the GameCube was fast and, for much of its life, only $99US. In fact they loved it so much they not only stole the hardware model… they also contracted the exact same chip designers of the GameCube for the Xbox Next! And so it began: IBM would design the CPU of the Xbox 360 while ATI would develop the GPU. In many ways the Xbox 360 would become the spiritual successor of Nintendo’s GameCube, at least hardware-wise.

Although Microsoft dodged (most of) the bullet in their freshman attempt with the Xbox, they were very leery of making similar mistakes as a sophomore. Chief among the reasons for the high cost of the original Xbox was the inclusion of a hard drive in every machine. Fortunately for Microsoft, Sony never saw the need to cut the PS2’s retail price too deeply. Sony could have afforded to come much closer to the $99 GameCube price tag, but the PS2 was selling well enough that it wasn’t necessary. Mind you, had they felt the need to cut the PS2 MSRP more dramatically and earlier on, the results would have been disastrous for Microsoft. Because of this lesson almost learned, Microsoft was fearful of price cutting in a next-gen race where Sony might end up having to be much more competitive. Leading up to the Xbox 360 release, Sony dropped several hints at a much higher PS3 cost. Some speculated that Sony was attempting to coax Microsoft into launching the Xbox 360 at a higher price, but I believe Sony was rather trying to get Microsoft to lock into including an expensive hard drive with every system. Once Microsoft committed to that, Sony could then have launched a tiered-model PS3 (an SKU with and without a hard drive) and come significantly closer to the 360’s price point with the hard-drive-less SKU. Microsoft wisely didn’t bite and instead opted to offer a tiered model out of the gate. Although the tiered model has its drawbacks, chief among them being that game developers can’t assume a hard drive’s inclusion, it will help consumers compare apples to apples.

The final belabored decision was all about the media. Microsoft’s HD Era pitch was muted somewhat by not supporting next-generation hi-def media. Microsoft decided early on that beating Sony or Nintendo to market by any significant degree meant going with regular DVD media. Many think this was a critical error on Microsoft’s part, but I’m not so sure. There are two real advantages and two real detriments to going the DVD route. The first advantage is timing. Choosing DVD over HD-DVD allowed Microsoft a healthy head start in shipping the 360… at least six or so months, and I predict significantly closer to a year before the PS3 launches. The other main advantage is cost. The DVD drive will cost a pittance compared to what it will cost Sony to include a Blu-Ray drive. Current estimates put the Xbox DVD drive as low as $15 US while the Blu-Ray drive is thought to cost Sony at least $100. This wide cost margin isn’t expected to narrow significantly until well into 2008, and you can add to that the increased cost of the media per game. The main disadvantage, this being a game console, is the media size. Certainly some games will exceed the roughly 8.5GB limit of a dual-layer DVD in the near future, but that is a healthy amount of space. Even some monstrously long and detailed games today use less than half that. (Resident Evil 4 for the GameCube comes to mind, which clocked in under 3.6GB total.) It’s true that HD-resolution textures take up more space, but it’s also true that it’s financially insignificant to include two or more DVDs per game. It may be an issue for gamers who hate to swap discs, but I don’t think this alone will be terribly relevant to the Xbox 360’s success (or lack thereof). What is slightly more important from the more casual gamer’s perspective is the inability to play high-definition movies. I personally know people who, early on, elected to purchase the PS2 over the GameCube based on DVD playback support alone.
In this respect Sony may find favour with the same demographic through the PS3’s Blu-Ray support. Microsoft has announced that it will launch an add-on HD-DVD player, which is alright for HD movies I guess (as long as it’s much cheaper than a stand-alone HD-DVD player), but this add-on is mostly moot from a gaming perspective as Microsoft can’t, and won’t, ever release any games on HD-DVD discs. If they did, early Xbox 360 adopters would be up in arms, and rightfully so, over being forced to buy an expensive add-on to play the latest games.

- Xenon

In-depth technical discussions of the Xbox 360’s IBM-designed CPU and ATI-designed GPU are beyond the scope of this article, but many excellent articles are available. (For the Xenos GPU I’d suggest Beyond 3D's in-depth analysis, and for the Xenon CPU take a look at ArsTechnica's coverage.) I would, however, like to touch upon the consensus on these key components.

The IBM-designed “Xenon” CPU, although quite advanced, is nowhere near as radical a design as its “Cell” competitor. We’ll delve more into the Cell in the second part of this trilogy, but the Cell is a radical design indeed. Having said that, the Xenon is complex enough that its potential has barely been scratched by its launch titles. A three-core, six-thread beast with little cache and less in the way of branch prediction means the power is there, but silicon wasn’t “wasted” in making it easy to tap. Retail game development is still very much single-core and single-threaded. No games to my knowledge, on the 360 or otherwise, have been designed from the ground up to take efficient advantage of a multi-core CPU. There have been a few hacks and patches released after the fact to improve performance on dual-core PCs in specific games, but nothing that takes significant advantage. Moving forward, the Xenon CPU’s advantage, over the Cell at least, is that although advanced, it is still very much the type of design developers had been anticipating. Couple that familiarity with Microsoft’s software support and I expect developers to come to grips much more quickly with the Xenon than the Cell. The Cell may be more powerful, perhaps significantly so, but it may be quite a while before that theoretical delta translates to the practical. The graphics core, however, is a very different scenario.
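To make that multi-threading gap concrete, here is a minimal sketch (in Python, purely illustrative; the subsystem names and workloads are my own invention, not from any real 360 title) of the fork-join pattern a three-core, six-thread CPU like Xenon rewards: run the independent per-frame subsystems concurrently, then join before rendering.

```python
import concurrent.futures

# Hypothetical per-frame subsystems; the bodies are trivial
# stand-ins for real AI, physics and audio work.
def update_ai(state):      return state + 1
def update_physics(state): return state * 2
def update_audio(state):   return state - 1

def run_frame(state):
    # Fan the independent subsystems out across worker threads,
    # then join before rendering -- the pattern a three-core,
    # six-thread design is built for.
    with concurrent.futures.ThreadPoolExecutor(max_workers=3) as pool:
        ai, phys, audio = pool.map(lambda f: f(state),
                                   (update_ai, update_physics, update_audio))
    return ai, phys, audio

print(run_frame(10))  # -> (11, 20, 9)
```

The hard part, and the reason launch titles stayed effectively single-threaded, is making those subsystems genuinely independent so they can run without stepping on shared game state.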

With the aforementioned bridge-burning with NVIDIA over the original Xbox GPU, there was only one choice of GPU partner for the Xbox 360. ATI Technologies, NVIDIA’s arch-rival in all things graphics, was not exactly a second choice for the software giant. With proven custom graphics experience from Nintendo’s GameCube, and as an ever-present PC performance threat to NVIDIA, the Canadian tech firm was a perfect fit for Microsoft’s lofty ambitions. The chip firm, which is used to launching successively faster PC cards every ten or so months, was tasked with designing a chip to last several of those generations. My guess as to their approach was something like this… Look at designs ~3 generations ahead in the PC space and then cut the logic in half to fit into the maximum ~300 million transistors that present-day design processes allow.

- Xenos

If this was the approach, ATI nailed it when they created the 360 GPU dubbed “Xenos.” The Xenos chip, or chips rather, is quite a success both in design and, more so, in ATI’s prediction of graphical direction. Without getting into too much detail, ATI saw a growing bottleneck in the shading efficiency of the graphics pipeline. This bottleneck is only now becoming perceptible in the PC space as ATI releases GPUs with increasingly powerful shader architectures alongside the latest games, which make ever-increasing use of those shaders. Rather than lock the chip’s ability to process vertex and pixel data into a rigid ratio, ATI took the truly next-generation approach and made the chip’s 48 processors dynamic in their ability to process either. The result is that Xenos, at a mere 232 million transistors, should in many cases perform similarly to a much larger traditional design. The other advancement of the Xenos GPU is a daughter die: a smaller chip right next to the main shader chip, but on the same package. The daughter die is mostly comprised of 10MB of very fast RAM and offers important logic such as Z-culling (eliminating the drawing/colouring of polygons hidden behind other polygons), basic colour operations and, most significantly, up to 4x anti-aliasing with a negligible performance hit.
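A toy utilization model (the numbers are mine, not ATI’s) shows why a unified pool of 48 processors can keep up with a larger fixed-function design: with a rigid vertex/pixel split, a frame is gated by whichever side is overloaded, while a unified design only cares about the total amount of work.

```python
def fixed_split_time(vertex_work, pixel_work, vertex_units, pixel_units):
    # Rigid design: each workload is confined to its own units,
    # so the frame takes as long as the slower side.
    return max(vertex_work / vertex_units, pixel_work / pixel_units)

def unified_time(vertex_work, pixel_work, total_units):
    # Unified design: every processor chews through whichever
    # work exists, so only the total matters.
    return (vertex_work + pixel_work) / total_units

# A pixel-heavy frame: 8 units of vertex work, 88 of pixel work.
print(fixed_split_time(8, 88, vertex_units=16, pixel_units=32))  # 2.75
print(unified_time(8, 88, total_units=48))                       # 2.0
```

In this made-up frame the fixed split leaves the 16 vertex units mostly idle while the pixel units become the bottleneck; the unified pool finishes the same work noticeably sooner.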

The last interesting aspect I’d like to discuss about the Xbox 360 is by no means a secret, but I feel it hasn’t been touched upon nearly enough: the potential for physics processing. Physics processing is the hot new area of gaming that the next generation of consoles seems to have missed. Or has it? None of the next-generation consoles is expected to have a dedicated physics chip, as will soon be the case in the PC space. However, there is some functionality. Half-Life 2, which was made more realistic in part by physics middleware provider Havok, showed us how much fun it is to defeat your enemy by, say, destroying the high wooden ledge he’s standing on and letting gravity do the dirty work, as opposed to shooting everyone in the head. It was an immersive step in the right direction. It has traditionally been the host CPU’s job to process physics, as is the case with Half-Life 2, and CPUs can do it, albeit not terribly efficiently. Physics calculations are very parallel in nature, not at all unlike… graphics calculations.

Back in October of 2005 ATI came out and publicly stated what many already knew, and some were already doing: that graphics cards are not only capable of physics calculations, they’re an order of magnitude faster at them than same-generation CPUs due to their extreme parallelism. But isn’t the GPU busy processing the graphics? It’s true that most previous instances of physics processing on graphics chips required near-exclusive use of the GPU, but Xenos is different in two big ways. The first is an added function called MEMEXPORT, which basically allows writing and reading of floating-point data between Xenos and the 360’s main RAM. This single command effectively turns Xenos into a massive FPU co-processor, but still there’s the nagging problem of physics calculations hogging the shading resources. The other big difference, as stated earlier, is that Xenos is the first programmable unified shading design! In theory there’d be no difference to the GPU whether it’s processing shading or physics. I believe it’s entirely possible, and probable, that some developers may choose to lock down 33% of the shading ability of the Xenos and use that third as a virtual dedicated physics processor. This would reduce the shading ability of the 360 by the same 33%, but it would also be theoretically capable of nearly equaling the entire physics processing performance of the main CPU! The reason for the specific 33% example lies in the Xenos’s 48 shading processors being organized into 3 arrays of 16. Although Xenos is programmable between the arrays, it may prove more efficient to dedicate a complete array. For reference, the Xenon CPU is capable of ~9 billion scalar ops/sec, while the Xenos GPU is capable of ~24.6 billion scalar ops/sec, or 8.2 billion for each of the three arrays of 16 shading processors.
In short, all else being equal, developers may have the option of taking a 33% hit in the shading capability of the 360 in order to gain a physics co-processor capable of ~8.2 billion scalar operations per second. Again, this is speculation and there are a lot of variables, not the least of which is the latency of the MEMEXPORT function, so I contacted ATI to clarify.
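The back-of-envelope arithmetic behind that 33% trade-off, using the article’s own throughput figures:

```python
XENOS_SCALAR_GOPS = 24.6   # quoted figure for the whole 48-processor core
XENON_SCALAR_GOPS = 9.0    # quoted figure for the three-core CPU
ARRAYS = 3                 # Xenos shaders: 3 arrays of 16 processors

per_array = XENOS_SCALAR_GOPS / ARRAYS        # ~8.2 Gops per locked array
shading_left = XENOS_SCALAR_GOPS - per_array  # ~16.4 Gops left for graphics
cpu_fraction = per_array / XENON_SCALAR_GOPS  # ~0.91 of the whole CPU

print(per_array, shading_left, cpu_fraction)
```

So dedicating one of the three arrays costs a third of the shading throughput but buys a physics engine roughly 90% as fast as the entire Xenon CPU, which is the core of the argument above.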

Just prior to the release of this article, ATI senior architect Clay Taylor replied, confirming the physics processing potential of Xenos. Although specific workloads (i.e. physics-specific instructions) are not assignable to ALU arrays in a discrete manner, he confirmed it is entirely possible that: “Physics processing could be interleaved into the command stream and would use the percentage of the ALU core that the work required.” The ability to effectively scale the use of Xenos as both a PPU and GPU opens many creative doors for developers. I’m rather surprised that Microsoft isn’t touting this capability, as it was obviously intended by ATI. The physics processing ability will come at the cost of shading ability but the proportion is entirely at the developer’s discretion. I can think of many instances (e.g. indoor scenes) or even game style choices in which the powerful shading unit will have plenty of extra cycles to act as a PPU.

- Final thoughts

Regardless of its physics processing ability, Xenos is the real deal in next-generation graphics. The implementation of a truly unified shading architecture alone may have been enough to qualify the GPU as next-generation, but add the daughter die with its embedded frame buffer and few would argue. Many have been quick to criticize the graphics quality of some launch titles, and in most instances I agree; however, lackluster graphics is certainly not for lack of hardware ability, it’s for lack of experience programming beta development hardware. The potential is barely hinted at in a couple of titles such as Call of Duty 2 and Project Gotham Racing 3, which both look impressive on an HDTV. On non-HD sets the graphics do look much more current-generation, but that is because much of the horsepower developers could muster at this early stage went into driving a higher resolution instead of higher-quality shading. In other words, an unfortunate side effect of developing for a fixed high resolution (i.e. 720p) is that running at a lower resolution (480p) generally won’t yield the higher-quality graphics the system is capable of, even though it’s only pushing a third of the pixels. At best you’ll get a faster frame rate, though I would hope some developers might at least boost FSAA quality at lower resolutions. We’ll delve more into HD relevance in the Revolution part of this trilogy. Suffice it to say, if significantly higher-quality visuals are all you consider in deeming a console “next generation” or not, then the Xbox 360 is clearly a next-gen console. If everything I’m hearing about the competing consoles’ graphics parts is true, the Xbox 360’s graphics, if not the fastest, will certainly be close enough that the difference should be largely irrelevant.

Clearly my impression of the Xbox 360 is that it is positioned to compete significantly better in the next-gen console race than its predecessor did. The difference this time around is that although Microsoft will no longer have the decidedly most powerful console, they also won’t have the most expensive one, and believe me, they will compete on price. The Xbox 360’s media (DVD) and input device (gamepad) are safe choices and the CPU may be merely adequate, but the GPU is quite potent and should go far in keeping Microsoft’s box in the same league as Sony’s overall, despite the disparity in time to market. You may also assume from my comments that, like many analysts, I’m discounting Nintendo’s entry into the next-gen race; this really couldn’t be further from the truth. The Revolution is so unique it isn’t really directly comparable to the 360 or the PS3, but more on that in the final section of this trilogy.

I would like to invite discussion on this and future articles; I will be following the discussion thread for this editorial and will reply as necessary. Until then, expect Part II: A Brief History of the PS3 in the coming weeks.

source:http://www.elitebastards.com/cms/index.php?option=com_content&task=view&id=20&Itemid=28

