Tuesday, May 02, 2006
Learning from failure
Learning from failure is a hallmark of the technology business. Nick Baker, a 37-year-old system architect at Microsoft, knows that well. A British transplant at the software giant's Silicon Valley campus, he went from failed project to failed project in his career. He worked on such dogs as Apple Computer's defunct video card business, 3DO's failed game consoles, a chip startup that screwed up a deal with Nintendo, the never successful WebTV and Microsoft's canceled Ultimate TV satellite TV recorder.
But Baker finally has a hot seller with the Xbox 360, Microsoft's video game console launched worldwide last holiday season. The adventure on which he embarked four years ago would ultimately prove that failure is often the best teacher. His new gig would once again provide copious evidence that flexibility and understanding of detailed customer needs will beat a rigid business model every time. And so far the score is Xbox 360 one and the delayed PlayStation 3 nothing.
The Xbox 360 console is Microsoft's living room Trojan horse, purchased as a game box but capable of so much more in the realm of digital entertainment. Since the day after Microsoft terminated the Ultimate TV box, in February 2002, Baker has been working on the Xbox 360 silicon architecture team at Microsoft's campus in Mountain View, Calif. He is one of the 3DO survivors who now gets a shot at revenge against the Japanese companies that vanquished his old firm.
"It feels good," says Baker. "I can play it at home with the kids. It's family-friendly, and I don't have to play on the Nintendo anymore."
Baker is one of the people behind the scenes who pulled together the Xbox 360 console by engineering some of the most complicated chips ever designed for a consumer entertainment device. The team labored for years and made critical decisions that enabled Microsoft to beat Sony and Nintendo to market with a new box, despite a late start with the Xbox in the previous product cycle. Their story, captured here and in a forthcoming book by the author of this article, illustrates the ups and downs in any big project.
When Baker and his pal Jeff Andrews joined games programmer Mike Abrash in early 2002, they had clear marching orders. Their bosses all dictated what Microsoft needed this time around: Microsoft CEO Steve Ballmer; Robbie Bach, who ran the Xbox division; Xbox hardware chief Todd Holmdahl; Greg Gibson, in charge of Xbox 360 system architecture; and silicon chief Larry Yang.
They couldn't be late. They had to make hardware that could become much cheaper over time and had to pack as much performance into a game console as they could without overheating the box.
Trinity taken
The group of silicon engineers started first among the 2,000 people in the Xbox division on a project that Baker had code-named Trinity. But they couldn't use that name, because someone else at Microsoft had taken it. So they named it Xenon, for the colorless and odorless gas, because it sounded cool enough. Their first order of business was to study computing architectures, from those of the best supercomputers to those of the most power-efficient portable gadgets. Although Microsoft had chosen Intel and Nvidia to make the chips for the original Xbox, the engineers now talked to a broad spectrum of semiconductor makers.
"For us, 2002 was about understanding what the technology could do," says Greg Gibson, system designer.
Sony had teamed up with IBM and Toshiba to create a full-custom microprocessor from the ground up. They planned to spend $400 million developing the Cell architecture and even more fabricating the chips. Microsoft didn't have the time or the chip engineers to match the effort on that scale, but Todd Holmdahl and Larry Yang saw a chance to beat Sony. They could marshal a host of virtual resources and create a semicustom design that combined both off-the-shelf technology and their own ideas for game hardware. Microsoft would lead the integration of the hardware, own the intellectual property, set the cost-reduction schedules, and manage its vendors closely.
They believed that this approach would get them to market by 2005, which was when they estimated Sony would be ready with the PlayStation 3. (As it turned out, Microsoft's dreams were answered when Sony, in March, postponed the PlayStation 3 launch until November.)
More important, an IP ownership strategy for the chips could dramatically cut costs compared with the original Xbox, on which Microsoft had lost an estimated $3.7 billion over four years, or roughly $168 per box. By cutting costs, Microsoft could erase a lot of red ink.
Balanced design
Baker and Andrews quickly decided they wanted to create a balanced design, trading off power efficiency and performance. So they envisioned a multicore microprocessor, one with as many as 16 cores, or miniprocessors, on one chip. They wanted a graphics chip with 60 shaders, or parallel processors for rendering distinct features in graphics animations.
Laura Fryer, manager of the Xbox Advanced Technology Group in Redmond, Wash., solicited feedback on the new microprocessor. She said that game developers were wary of managing multiple software threads associated with multiple cores, because the switch created a juggling task they didn't have to do on the original Xbox or the PC. But they appreciated the power efficiency and added performance they could get.
Microsoft's current vendors, Intel and Nvidia, didn't like the idea that Microsoft would own the IP they created. For Intel, allowing Microsoft to take the x86 design to another manufacturer was as troubling as signing away the rights to Windows would be to Microsoft. Nvidia was willing to do the work, but if it had to deviate from its road map for PC graphics chips in order to tailor a chip for a game box, then it wanted to get paid for it. Microsoft didn't want to pay that high a price. "It wasn't a good deal," says Jen-Hsun Huang, CEO of Nvidia. Microsoft had also been through a painful arbitration on pricing for the original Xbox graphics chips.
IBM, on the other hand, had started a chip engineering services business and was perfectly willing to customize a PowerPC design for Microsoft, says Jim Comfort, an IBM vice president. At first IBM didn't believe that Microsoft wanted to work together, given a history of rancor dating back to the DOS and OS/2 operating systems in the 1980s. Moreover, IBM was working for Microsoft rivals Sony and Nintendo. But Microsoft pressed IBM for its views on multicore chips and discovered that Big Blue was ahead of Intel in thinking about these kinds of designs.
When Bill Adamec, a Microsoft program manager, traveled to IBM's chip design campus, in Rochester, N.Y., he did a double take when he arrived at the meeting room where 26 engineers were waiting for him. Although IBM had reservations about Microsoft's schedule, the company was clearly serious.
"For us, 2002 was about understanding what the technology could do." —Greg Gibson, Microsoft system designer
Meanwhile, ATI Technologies assigned a small team to conceive a proposal for a game console graphics chip. Instead of pulling out a derivative of a PC graphics chip, ATI's engineers decided to design a brand-new console graphics chip that relied on embedded memory to feed a lot of data to the graphics chip while keeping the main data pathway clear of traffic, which was critical for avoiding bottlenecks that would slow down the system.
By the fall of 2002, Microsoft's chip architects had decided that they favored the IBM and ATI solutions. They met with Ballmer and Gates, who wanted to be involved in the critical design decisions at an early juncture. Larry Yang recalls, "We asked them if they could stomach a relationship with IBM." Their affirmative answer pleased the team.
By early 2003, the list of potential chip suppliers had been narrowed down. At that point, Robbie Bach, the chief Xbox officer, took his team to a retreat at the Salish Lodge, on the edge of Washington's beautiful Snoqualmie Falls, made famous by the Twin Peaks TV show. The team hashed out a battle plan. They would own the IP for silicon that could take the costs of the box down quickly. They would launch the box in 2005 at the same time as Sony would launch its box, or even earlier. The last time, Sony had had a 20-month head start with the PlayStation 2. By the time Microsoft sold its first 1.4 million Xboxes, Sony had sold more than 25 million PlayStation 2s.
Those goals fit well with the choice of IBM and ATI for the two pieces of silicon that would account for more than half the cost of the box. Each chip supplier moved forward based on a "statement of work," but Gibson kept his options open, and it would be months before the team finalized a contract. Both IBM and ATI could pull blocks of IP from their existing products and reuse them in the Microsoft chips. Engineering teams from both companies began working on joint projects such as the data pathway that connected the chips. ATI had to make contingency plans, in case Microsoft chose Intel over IBM, and IBM also had to consider the possibility that Microsoft might choose Nvidia.
Hacking embarrassment
Through the summer, Microsoft executives and marketers created detailed plans for the console launch. They decided to build security into the microprocessor to prevent hacking, which had proved to be a major embarrassment on the original Xbox. Marketers such as David Reid all but demanded that Microsoft try to develop the new machine in a way that would allow the games for the original Xbox to run on it. So-called backward compatibility wasn't necessarily exploited by customers, but it was a big factor in deciding which box to buy. And Bach insisted that Microsoft had to make gains in Japan and Europe by launching in those regions at the same time as in North America.
For a period in July 2003, Bob Feldstein, the ATI vice president in charge of the Xenon graphics chip, thought Nvidia had won the deal, but in August Microsoft signed a deal with ATI and announced it to the world. The ATI chip would have 48 shaders, or processors that would handle the nuances of color shading and surface features on graphics objects, and would come with 10 megabytes of embedded memory.
IBM followed with a contract signing a month later. The deal was more complicated than ATI's, because Microsoft had negotiated the right to take the IBM design and have it manufactured in an IBM-licensed foundry being built by contract chip maker Chartered Semiconductor. The chip would have three cores and run at 3.2 gigahertz. It was a little short of the 3.5 GHz that IBM had originally pitched, but it wasn't off by much.
By October 2003, the entire Xenon team had made its pitch to Gates and Ballmer. They faced some tough questions. Gates wanted to know if there was any chance the box would run the complete Windows operating system. The top executives ended up giving the green light to Xenon without a Windows version.
"They were on the highest wire with the shortest net." —J Allard, Corporate Vice President, Microsoft
The ranks of Microsoft's hardware team swelled to more than 200, with half of the team members working on silicon integration. Many of these people were like Baker and Andrews, stragglers who had come from failed projects such as 3DO and WebTV. About 10 engineers worked on "Ana," a Microsoft video encoder chip, while others managed the schedule and cost reduction with IBM and ATI. Others supported suppliers, such as Silicon Integrated Systems, the supplier of the "south bridge," the communications and input/output chip. The rest of the team helped handle relationships with vendors for the other 1,700 parts in the game console.
Ilan Spillinger headed the IBM chip program, which carried the code name Waternoose, after the spiderlike creature from the film Monsters, Inc. He supervised IBM's chief engineer, Dave Shippy, and worked closely with Microsoft's Andrews on every aspect of the design program.
Games at center
Everything happened in parallel. For much of 2003, a team of industrial designers created the look and feel of the box. They tested the design on gamers, and the feedback suggested that the design seemed like something that either Apple or Sony had created. The marketing team decided to call the machine the Xbox 360, because it put the gamer at the center. A small software team led by Tracy Sharp developed the operating system in Redmond. Microsoft started investing heavily in games. By February 2004, Microsoft sent out the first kits to game developers for making games on Apple Macintosh G5 computers. And in early 2004, Greg Gibson's evaluation team began testing subsystems to make sure they would all work together when the final design came together.
IBM assigned 421 engineers from six or seven sites to the project, which was a proving ground for its design services business. The effort paid off, with an early test chip that came out in August 2004. With that chip, Microsoft was able to begin debugging the operating system. ATI taped out its first design in September 2004, and IBM taped out its full chip in October 2004. Both chips ran game code early on, which was good, considering that it's very hard to get chips working at all when they first come out of the factory.
IBM executed without many setbacks. As it revised the chip, it fixed bugs with two revisions of the chip's layers. The company was able to debug the design in the factory quickly, because IBM's fab engineers could work on one part while the Chartered engineers could debug a different part of the chip. They fed the information to each other, speeding the cycle of revisions. By January 30, 2005, IBM taped out the final version of the microprocessor.
ATI, meanwhile, had a more difficult time. The company had assigned 180 engineers to the project. Although games ran on the chip early, problems came up in the lab. Feldstein said that in one game, one frame of animation would freeze as every other frame went by. It took six weeks to uncover the bug and find a fix. Delays in debugging threatened to throw the beta-development-kit program off schedule. That meant that thousands of game developers might not get the systems they needed on time. If that happened, the Xbox 360 might launch without enough games, a disaster in the making.
The pressure was intense. But Neil McCarthy, a Microsoft engineer in Mountain View, designed a modification of the metal layers of the graphics chip. By doing so, he enabled Microsoft to get working chips from the interim design. ATI's foundry, Taiwan Semiconductor Manufacturing Co., churned out enough chips to seed the developer systems. The beta kits went out in the spring of 2005.
Meanwhile, Microsoft's brass was worried that Sony would trump the Xbox 360 by coming out with more memory in the PlayStation 3. So in the spring of 2005, Microsoft made what would become a fateful decision. It decided to double the amount of memory in the box, from 256 megabytes to 512 megabytes of graphics double-data-rate 3 (GDDR3) chips. The decision would cost Microsoft $900 million over five years, so the company had to pare back spending in other areas to stay on its profit targets.
Microsoft started tying up all the loose ends. It rehired Seagate Technology, which it had hired for the original Xbox, to make hard disk drives for the box, but this time Microsoft decided to have two SKUs, one with a hard drive, for the enthusiasts, and one without, for the budget-conscious. It brought aboard both Flextronics and Wistron, the current makers of the Xbox, as contract manufacturers. But it also laid plans to have Celestica build a third factory for building the Xbox 360.
Just as everyone started to worry about the schedule going off course, ATI spun out the final graphics chip design in mid-July 2005. Everyone breathed a sigh of relief, and they moved on to the tough work of ramping up manufacturing. There was enough time for both ATI and IBM to build a stockpile of chips for the launch, which was set for November 22 in North America, December 2 in Europe and December 10 in Japan.
Flextronics debugged the assembly process first. Nick Baker traveled to China to debug the initial boxes as they came off the line. Although assembly was scheduled to start in August, it didn't get started until September. Because the machines were being built in southern China, they had to be shipped over a period of six weeks by boat to the regions. Each factory could build only as many as 120,000 machines a week, running at full tilt. The slow start, combined with the multiregion launch, created big risks for Microsoft.
Pins and needles
The hardware team was on pins and needles. The most-complicated chips came in on time and were remarkable achievements. Typically, it took more than two years to do the initial designs of complicated chip projects, but both companies were actually manufacturing inside that time window.
Then something unexpected hit. Both Samsung and Infineon Technologies had committed to making the GDDR3 memory for Microsoft. But some of Infineon's chips fell short of the 700 megahertz specified by Microsoft. Using such chips could have slowed games down noticeably. Microsoft's engineers consulted and decided to start sorting the chips, not using the subpar ones. Because GDDR3 700-MHz chips were just ramping up, there was no way to get more chips. Each system used eight chips. The shortage constrained the supply of Xbox 360s.
Microsoft blamed the resulting shortfall of Xbox 360s on a variety of component shortages. Some users complained of overheating systems. But overall, the company said, the launch was still a great achievement. In its first holiday season, Microsoft sold 1.5 million Xbox 360s, compared to 1.4 million original Xboxes in the holiday season of 2001. But the shortage continued past the holidays.
Leslie Leland, the hardware evaluation director, says she felt "terrible" about the shortage and that Microsoft would strive to get a box into the hands of every consumer who wanted one. But Greg Gibson, the system designer, says that Microsoft could have worse problems on its hands than a shortage. The IBM and ATI teams had outdone themselves.
The project was by far the most successful that Nick Baker had ever worked on. One night, hoisting a beer and looking at a finished console, he said that it felt good.
J Allard, the head of the Xbox platform business, praised the chip engineers such as Baker: "They were on the highest wire with the shortest net."
source:http://www.reed-electronics.com/eb-mag/article/CA6328378?pubdate=5%2F1%2F2006
First Neutron Pulse from SNS
source:http://science.slashdot.org/article.pl?sid=06/05/02/1336222
The Bad In Email (or Why We Need Collaboration Software)
To quell any speculation that we are ditching collaboration software in favor of email, realize that this was just an exercise. We embarked on a close examination of email to see what we could learn from this killer app, in an attempt to improve our customers' experience. Our examination included an inquiry into all sides of the medium, with the intention of extracting the good and throwing away the bad.
In spite of email’s universal success (as a collaboration tool), and in spite of its many good traits, email contains deep, inherent flaws that force users and markets to seek alternatives to collaborating via email.
After all, if email is so “good” at its job, then how do we explain the popular resurgence of Collaboration Software (masked as Web 2.0)? And how else do you explain Ray Ozzie as the CTO of Microsoft?
To reiterate our stance from the previous article, the facts speak for themselves. Email is here to stay. But while ubiquity might define adoption, ubiquity does not define 'correctness,' 'rightness,' 'goodness' or even 'efficiency.' This is yet another example where the 'wisdom of crowds' does not apply. Just because 'everyone is doing it' does not mean that everyone should be doing it.
Therefore, we’d like to present The Bad In Email, or Why Ray Ozzie is the CTO of Microsoft.
Email is Silo'ed
The single worst trait of email is that it’s silo’ed.
What I mean by silo'ed is that email traps information in personalized, unsharable, unsearchable vacuums where no one else can access it: the Email Inbox. Think of your Email Inbox as a heavily fortified walled garden. Setting aside the difficulties many have accessing their Email Inbox outside the corporate firewall, the Email Inbox contains a hodgepodge of business, personal and private information that most people do not want to share with others.
For many folks, the Email Inbox contains their most intimate secrets all mashed together into a single location: business correspondence, contracts, proposals, reminders, tasks, love letters, indiscreet online purchases, dirty jokes, pictures of your spouse (and kids), time-wasting games, inappropriate messages from co-workers and friends and, let's not forget, spam.
I think it's obvious that silo'ed data is devastating to team productivity. The snowballing effects of silo'ed data can debilitate even the strongest of project managers.
Here is the progressive snowballing effect of silo’ed data:
1. The data and content types are mixed and mashed (see list above).
2. The data is often ‘NSFW’ (Not Safe For Work).
3. The data is unintelligent (untagged, lacks taxonomy, unfiled).
5. The data is therefore unsharable (both by personal choice and by lack of technology).
5. The data is therefore unsearchable (by others).
6. The data is therefore inaccessible (by others).
7. Your Email Inbox is therefore *useless* to the rest of the Team (In spite of the goldmine of data that probably resides in your Inbox).
Email *Perpetuates* Many Walled Gardens
The only thing worse than one walled garden is many walled gardens.
As soon as you introduce two or more people into a collaborative environment, you now have multiple 'my inboxes,' each of which is a walled garden.
There are some hack fixes to this problem: Yahoo Groups, Google Groups, Newsgroups, listservs, Forums, Carbon Copy and Email Aliases (a la Exchange); but in the end, each of these solutions still relies on the Email Inbox to send and receive data, thus reinforcing the Walled Garden.
I can’t tell you how many times I’ve posted to a Public Forum / Group expecting to continue the exchange online…only to receive an email IN MY PERSONAL EMAIL INBOX.
Unfortunately, the Walled Gardens of our Email Inboxes are deceivingly warm and cozy. This feigned comfort of safety whispers into our ears like a wily devil to "just email the document to me" or "just email that document to yourself," with the false belief that it will remain safe, secure and locked away. But that is just it: it's locked away so that NO ONE ELSE CAN ACCESS IT. This runs counter to the culture of team collaboration.
This false pretense of comfort in email only reinforces and perpetuates the temptation to build and protect your own walled garden.
Email is NOT Secure (Part 1)
We’ve been lulled into believing that email is safe and secure.
If you are using SMTP (the universal pipe, remember?), you need to know that it doesn’t encrypt data/messages.
If you are using POP or IMAP, you need to know that they both require you to send unencrypted authentication (username/password).
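To make this concrete, here is a minimal sketch using Python's standard library (the host name and credentials are placeholders, not a real server). With plain POP3, the login commands cross the network as cleartext; the SSL variant runs the same commands inside an encrypted channel. The same distinction applies to SMTP and IMAP.

    import poplib

    # Plain POP3 on port 110: the USER/PASS commands below travel as cleartext,
    # so anyone sniffing the wire can read the password.
    plain = poplib.POP3("mail.example.com", 110)
    plain.user("alice")
    plain.pass_("secret")        # sent unencrypted
    plain.quit()

    # POP3 over SSL on port 995: the same commands are wrapped in an encrypted
    # channel, so the credentials are not visible on the network.
    secure = poplib.POP3_SSL("mail.example.com", 995)
    secure.user("alice")
    secure.pass_("secret")       # sent inside the encrypted tunnel
    secure.quit()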
Unless both the email Sender (you) and the Recipient are using Digital Keys/Signatures, the contents of your email are about as secure as Imelda Marcos in a shoe store. While the idea of using Digital Keys/Signatures sounds neat, it is not practical.
Outside of fictional characters in Cryptonomicon, I’m not aware of anyone else using encrypted email and digital signatures.
(Anyone using cryptographic e-mail is in the minority and the exception to the rule.)
I’m aware of services such as Hushmail, and certificates offered by Thawte and Verisign; but I’ve never received a Hushmail nor have I ever encountered an email that I couldn’t read because I lacked a Digital Signature.
If you still don’t believe me, and if you are a user of Outlook, try this:
In Outlook, click on Tools | Options and select the Security Tab.
Now, select either of the first two check boxes that ask you to “Encrypt contents and attachments for outgoing messages” and/or “Add digital signature to outgoing messages.” Now, send an email.
If you did this correctly, you will be prompted. If you are brave, click on the "Get Digital ID" button. If you are like me (and I venture that most of my audience is comprised of technical and/or business users), you don't have the time, patience or desire to venture down the path of buying certificates and keys and configuring them on all six of the machines you work from on any given day.
This is a non-starter. No one uses this feature. Thus, my point: Email is not secure.
[I will concede that webmail is semi-secure in that if you are using SSL (HTTPS), the transmission of data from your computer to the email server is secure. But the moment your message leaves your email host, it's a free-for-all for anyone to sniff and hack. The contents of your message are not encrypted or secure unless the recipient of your email is also within the confines of the secure environment (for example, if you and your recipient are both sending and receiving email through Gmail's web interface and both using SSL, then the message should be encrypted from point to point). But we are not all using Gmail either (at least not yet).]
[Eudora Security Flashback: I still don’t know what the hell Kerberos is and what it has to do with a dog much less my email?]
Email is NOT Secure (Part 2)
I argue that email is the single most vulnerable point in any organization’s security policy. It takes two seconds to send a confidential document to anyone or any group in the world. And, unless you are using Novell Groupwise (gulp) there is no way to ‘retract’ your email.
Most companies spend a fortune locking down their IT infrastructure. This results in either Total Lockdown, also known as Paralysis, whereby no one can do anything without a password, passkey, keycard, signature and sign-in sheet; or No Lockdown, also known as Free-Love-Utopia, whereby everyone is equal because everyone is an Administrator.
Security measures are very important for organizations at all levels, but they shouldn't prevent the free flow of information amongst a team. Unfortunately, with email in the mix, confidential data is only as secure as the least careful person using email.
Group Email is Really Complicated
While personal email is easy to set up, configure and administer, group email is a complete nightmare. The rise of spam, phishing and viruses makes group email administration a full-time job (a full department, in many cases).
For many enterprise users the infrastructure is already there. But for the remaining 25 million businesses in the United States that do not have an established group email infrastructure, the cost of administration is daunting.
Email is Not a Document Manager
Every company, department, workgroup and team has fallen prey to Document Hot Potato. This is when team members call each other (or even worse, email each other) looking for the latest revision of the proposal/contract/document.
There are some interesting solutions emerging over at Nextpage and Echosign, but these solutions are supplementary to email. They do a good job of integrating email into the workflow of contract and signature management, but appear to ignore the fundamental requirements of teams to just share, search and access documents and files.
Email Communications Do Not Convey Priority
If everyone used Outlook (70% of Central Desktop users use Outlook), then the ability to assign priority to each message would actually work. But we don't live in a Microsoft world (in spite of what many of you might think), and instead we usually measure and weigh the importance of an email message by the number of people included in the carbon copy. This is highly subjective and fails to address the need to order and sort messages and tasks by importance.
One alternative is to use ALL CAPS IN YOUR MESSAGE TO IMPLY PRIORITY.
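To see why priority breaks down outside a single client, here is a minimal sketch (the addresses and SMTP host are placeholders): a message's "priority" is nothing more than advisory headers such as X-Priority and Importance, which each recipient's mail client is free to display, reinterpret or ignore.

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@example.com"
    msg["To"] = "team@example.com"
    msg["Subject"] = "Contract review"
    msg["X-Priority"] = "1"        # Outlook-style "High" priority
    msg["Importance"] = "High"     # honored by some clients, ignored by others
    msg.set_content("Please review the latest revision of the proposal today.")

    # Hand the message to a mail server; nothing guarantees the recipient's
    # client will sort or flag it any differently from ordinary mail.
    with smtplib.SMTP("mail.example.com") as server:
        server.send_message(msg)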
Email is inconsistent
In spite of email's universality, there is still inconsistency in how HTML and rich text formats are read. Email clients require users to determine whether or not to 'download images' or 'convert to text,' and these are options most users do not know how to set or configure.
How many times have you read email through a webmail interface that reformatted the HTML into unintelligible garbage? How many hours have you spent trying to open up a MIME-ENCAPSULATED MESSAGE?
Email works most of the time, but when it doesn’t it’s usually the result of a Client Configuration problem, not a connection problem.
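For anyone who has never peeked under the hood, here is a minimal sketch of what a MIME-encapsulated message actually is: one message carrying a plain-text part and an HTML part, separated by boundary markers. A client or webmail interface that cannot parse the multipart structure ends up displaying those boundaries and encoded parts as the unintelligible garbage described above.

    from email.message import EmailMessage

    msg = EmailMessage()
    msg["Subject"] = "Status update"
    msg["From"] = "alice@example.com"
    msg["To"] = "team@example.com"

    # Plain-text body for simple clients...
    msg.set_content("Plain-text version of the status update.")
    # ...plus an HTML alternative for clients that render rich text.
    msg.add_alternative("<p>HTML version of the <b>status update</b>.</p>",
                        subtype="html")

    # Printing the raw message shows the multipart/alternative structure, the
    # boundary strings and the encoded parts that a confused client displays.
    print(msg.as_string())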
Email is not permission based
You either have rights to use email, or you don’t. There is no viable middle-ground here.
Spam Filtering is better, but still not good enough
I still find very important messages in my Spam/Junk Folder. While I'm glad my Spam Filter (Gmail) is working most of the time, it's not perfect and requires frequent 'gardening.'
Email does not work well for multi-users
It's still challenging for multiple people to share business email accounts (e.g., support, bugs and sales messages). IMAP sort of works, but presents its fair share of limitations.
Companies such as Sproutit are working on solving this problem. I wish them luck and admire their ambition.
Email is Prone to Viruses
There is no need to elaborate here.
Email makes us lazy
Let's face it, we all like being whispered to in our ear. We enjoy listening to that wily devil of compromise that tempts us to "just email the document to myself." This is purely the seduction of sloth.
In the end, the strength of the collaboration tool is only as strong as its weakest link. It only takes one person to break the entire system.
Yes, there is good in email; but it’s mixed with a bunch of bad.
In spite of our comfort with email, we must unlearn what we have learned, open our eyes and acknowledge The Bad In Email. Email IS the most adopted collaboration tool, but it isn't the BEST collaboration tool. There are more efficient ways to work, and it's time for us to let go, create and adopt collaboration tools. I know this. You know this. And Bill Gates knows this (which is why Ray Ozzie is the CTO at Microsoft).
source:http://blog.centraldesktop.com/comments.php?y=06&m=05&entry=entry060501-194015
Freedom to Run Means Freedom from Complexity: An Argument for Running FOSS on Windows
Free and open source software (FOSS) is founded on the four software freedoms: (a) freedom to run; (b) freedom to study; (c) freedom to modify; and (d) freedom to redistribute a program. However, it seems that wider adoption of FOSS can be achieved if greater development effort is focused on the first freedom: the freedom to run. More importantly, this freedom should be understood in the sense of "freedom from complexity". It is often forgotten that, from the standpoint of an ordinary user, freedom to run a program means the program itself must be user-friendly and it must be easy to download, install and use. This freedom means nothing if the exercise of such a right excludes people who do not possess high technical knowledge or advanced skill sets. Without the guarantee of "ease of use", the freedom to run FOSS is, for most users, a hollow promise.
Current FOSS operating systems (OS) are targeted mainly at geeks, hackers and other technically skilled developers and users. While there has been some progress in making the installation and use of FOSS OSes like Ubuntu easier and simpler, they still do not have the "click-click-click" ease of installation of popular proprietary OSes like Windows XP or Mac OS X. In addition, even after one successfully installs a FOSS OS on a computer, a user will typically have to deal with issues like lack of drivers, incompatibility with third-party devices or difficulty in installing new programs or software packages. A normal user wants everything to work out of the box. This is especially true in developing countries, where a computer costs more than a month's salary. Since a computer is a major purchase, its usefulness and usability should be evident from the moment a user turns it on. People are not interested in (in fact, most are averse to) messing around with, tinkering with or hacking a program, which is where the second, third and fourth software freedoms come in.
The simplest and most effective way to increase FOSS use and adoption now is to push for the adoption by ordinary users, not of FOSS OSes, but of FOSS programs running on Windows XP. Installing and running a program like OpenOffice.org (OpenOffice) on a Windows system in lieu of the proprietary Microsoft Office is indeed truly free – it is free from complexity (OpenOffice looks and works like Microsoft Office) and it is free as in free beer (there is no cost to the user). Users who use and run FOSS programs on Windows do not have to concern themselves with driver issues and other technical mumbo-jumbo. The Windows OS is well-supported by hardware and device manufacturers and other service providers. For an ordinary user, the cost of purchasing and using the Windows OS is a small price to pay for his or her freedom to actually and productively use his or her computer. It should be remembered that true software freedom is not “free as in free beer” but “free as in freedom”. Freedom from complexity is an essential and inherent part of running a computer program.
Aside from OpenOffice, there are a lot of other FOSS programs that run on Windows XP – Gimp (photo or image editing), Firefox (web browser), Thunderbird (email), Audacity (audio recording and mixing), and Gaim (instant-messaging). Projects such as TheOpenCD, which advocate the distribution and use of FOSS programs on Windows, help bring FOSS to ordinary users. Initially pushing users to run FOSS programs on Windows also has a long-term benefit. When the FOSS community finally releases a FOSS OS that is as easy to install and use as any proprietary OS, users will have no trouble moving to this FOSS OS since the programs they know and love will run on it.
Freedom to run a program means guaranteeing to an ordinary user that he or she will be able to run and use a program productively and free from complexity. What is the worth of freedom if it cannot be enjoyed by everyone?
source:http://lawnormscode.sync.ph/?p=15
Will Sun Open-Source Java?
According to sources inside Sun, an ongoing debate over whether to open-source Java is coming to a head with the JavaOne conference looming May 16. Jonathan Schwartz, who led the open-sourcing of Solaris, could not be reached for comment on the matter.
Nevertheless, opponents of the idea are trying "to get time with Schwartz now that he is CEO so they can get their point of view across before the JavaOne conference in May, where some speculate he may announce the open-sourcing of Java," said a source close to Sun who requested anonymity.
What Schwartz will ultimately decide on Java remains to be seen, but it's another item on his long to-do list. Schwartz, who took the reins from Scott McNealy April 24, has to keep Wall Street happy and structure Sun so it will be consistently profitable. Sun hasn't reported an annual profit since 2001 and had a loss of $217 million for the fiscal third quarter of 2006, which ended March 26.
Meanwhile, skeptics of Schwartz abound. Financial services company JP Morgan, of New York, issued a research note April 25 that said it is "concerned that Jonathan Schwartz may bring less change to Sun than an outside candidate could have."
For his part, Schwartz remains confident. "First, we're in an industry that is only going to grow. For the rest of our lives, the network is only going to expand, as is the demand for the products which Sun builds. Sun is in a great position today to capitalize on this network growth," he told eWEEK in an e-mail interview. "We're ready to deliver."
Against that backdrop, Schwartz will have to weigh the future of Java. Schwartz has not balked at making some big decisions in his previous roles at Sun, most notably getting the Santa Clara, Calif., company to reverse course and commit to a version of Solaris for x86 hardware, and later open-sourcing the company's flagship operating system.
So far, Sun has resisted many calls to open-source Java. The reason: Sun fears doing so will open the doors for competitors to grab and change Java, resulting in forked code and compatibility problems.
John Loiacono, Sun's former executive vice president of software, who recently took an executive position at Adobe Systems, of San Jose, Calif., admitted as much in an exclusive interview with eWEEK. "One of the projects we were working on was how far we should go with opening Java, to the point of absolutely open-sourcing it. But we always came back to the question of who we were ultimately appeasing with the move and how such a move benefits Sun customers and shareholders," Loiacono said.
Other former Sun executives have a different take. Peter Yared, a developer who was Sun's chief technologist for network identity before leaving in 2003 to become the CEO of San Francisco-based ActiveGrid, said the big question is how Java benefits Sun's shareholders today, especially since "Sun doesn't make any money on it."
"It is losing momentum against open-source up-and-comers like LAMP [Linux, Apache, MySQL and PHP/ Python/Perl]. They can continue to get the same certification revenue by licensing the Java trademark," Yared said.
Yared has long called for Sun to open Java, which, he said, is "great on the back end, but LAMP is great on the Web tier, as Google, Amazon, Yahoo, Flickr, MySpace and Friendster have shown. Sun should endorse PHP and go one step forward and make sure the 'P' languages run great on the JVM [Java virtual machine] by open-sourcing Java."
The proof for this? IBM and Oracle both have strongly endorsed PHP in their architectures, and it has not cannibalized their Java middleware sales.
eWEEK also has learned that there are ongoing discussions within Sun about possible changes to the licensing terms for Java, while negotiations are under way for strategic partnerships around Java and the products and services associated with that.
As CEO, Schwartz is now in a position to make the call to open-source Java, unfettered. But some of the concerns that have prevented Sun from truly open-sourcing Java in the past linger. One issue cited by insiders: If Sun open-sources Java, Microsoft could take it and slap it into Windows Vista. Microsoft's licensing agreements with Sun to use Java source code and compatibility test suites generate revenue for the company and could be altered or voided if open-sourced, sources said.
There are worries inside Sun that an open Java could allow Microsoft and IBM to outmuscle the company on the marketing side, a source said. "It's a two-edged sword: The more freedom you give people because it's good and you get more usage, the more people decide they don't want to live by the rules of compatibility and they break away," Loiacono said.
But ActiveGrid's Yared disagreed, saying that all this talk of Java getting fractured is overblown. "Open-sourcing Java does not mean that Sun relinquishes the Java trademark. If you pass the Java compatibility test, you will get the right to call it Java. If not, you call it something else. Microsoft has already done that, first with J++ and then with C#, and no one thinks either of these are Java," he said.
source:http://www.eweek.com/article2/0,1895,1955448,00.asp
Higher ed fears wiretapping law
source:http://it.slashdot.org/article.pl?sid=06/05/01/1736230