Friday, January 27, 2006

The Future is XHTML 2.0

"As with its past, the future of HTML will be varied, some might say messy, but I believe XHTML 2.0 will ultimately receive widespread acceptance and adoption. A big move in this direction will be in Embedded devices such as phones and digital TVs, which will have no need to support the Web's legacy of messy HTML, and are free to take unburdened advantage of XHTML 2.0. This Developer Works article examines the work of the World Wide Web Consortium (W3C) in creating the next-generation version of their XHTML specification, and also their response to the demand for 'rich client" behavior exemplified by Ajax applications.'

source:http://developers.slashdot.org/developers/06/01/27/1548218.shtml

The future of HTML, Part 2: XHTML 2.0

25 Jan 2006

In this two-part series, Edd Dumbill examines the various ways forward for HTML that Web authors, browser developers, and standards bodies propose. This series covers the incremental approach embodied by the WHATWG specifications and the radical cleanup of XHTML proposed by the W3C. Additionally, the author gives an overview of the W3C's new Rich Client Activity. Here in Part 2, Edd focuses on the work in progress at the W3C to specify the future of Web markup.

In the previous article in this series, I described why HTML is due for an update, both to fix past problems and to meet the growing requirements of the tasks to which Web pages and applications are put. I explained the work of the Web Hypertext Application Technology Working Group (WHATWG), a loose collaboration of browser vendors, in creating their Web Applications 1.0 and Web Forms 2.0 specifications.

In this article, I'll examine the work of the World Wide Web Consortium (W3C) in creating the next-generation version of their XHTML specification, and also their response to the demand for "rich client" behavior exemplified by Ajax applications.

The W3C has four Working Groups that are creating specifications of particular interest: the HTML Working Group itself, and the groups responsible for XForms, Web APIs, and Web Application Formats.

You can find links to each of these in Resources. This article mainly focuses on the work of the HTML Working Group, but it is worth discussing each of the others to give some context as to how their work will shape the future of the Web.

XForms

XForms are the W3C's successor to today's HTML forms. They are designed to have richer functionality, and they pass their results as an XML document to the processing application. XForms are modularized, so you can use them in any host language, not just XHTML. Key differences from HTML forms include the separation of the form's data model from its presentation, declarative validation and calculation without scripting, and device independence.

As it is a modularized language, XHTML 2.0 imports XForms as a module for its forms functionality.
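To make this concrete, here is a minimal sketch of an XForms form hosted in an XHTML 2.0 document. The element names come from XForms 1.0 and its namespace; the contact fields, the submission URL, and the exact way the module slots into an XHTML 2.0 host are illustrative assumptions rather than a definitive recipe.

<html xmlns="http://www.w3.org/2002/06/xhtml2/"
      xmlns:xf="http://www.w3.org/2002/xforms">
  <head>
    <title>Contact form</title>
    <!-- The model holds the form's data and submission details,
         separately from the presentation in the body below -->
    <xf:model>
      <xf:instance>
        <contact xmlns="">
          <name/>
          <email/>
        </contact>
      </xf:instance>
      <xf:submission id="send" method="post"
                     action="http://example.org/contact"/>
    </xf:model>
  </head>
  <body>
    <!-- Form controls bind to the instance data by reference -->
    <xf:input ref="name"><xf:label>Name</xf:label></xf:input>
    <xf:input ref="email"><xf:label>Email</xf:label></xf:input>
    <xf:submit submission="send"><xf:label>Send</xf:label></xf:submit>
  </body>
</html>

On submission, the filled-in contact instance is sent to the processing application as an XML document rather than as name-value pairs.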

Web APIs

The W3C's Web APIs Working Group is charged with specifying standard APIs for client-side Web application development. The first and most familiar among these is the XMLHttpRequest functionality at the core of Ajax (which is also a technology that the WHATWG has described). These APIs will be available to programmers through ECMAScript and any other languages supported by a browser environment.

Additional APIs being specified are likely to include the Window object, DOM Level 3 Events, and further facilities for networked communication beyond XMLHttpRequest.

While these APIs do not need to be implemented in tandem with XHTML 2.0, browsers four years in the future will likely integrate them both to provide a rich platform for Web applications.

Web Application Formats

XHTML 2.0 is one part of the Web application user interface question, but not the totality. Technologies such as Mozilla's XUL and Microsoft's XAML have pushed toward a rich XML vocabulary for user interfaces.

The Web Application Formats Working Group is charged with the development of a declarative format for specifying user interfaces, in the manner of XUL or XAML, as well as the development of XBL2, a declarative language that provides a binding between custom markup and existing technologies. XBL2 essentially gives programmers a way to write new widgets for Web applications.

Why XHTML 2.0?

The purpose of XHTML 1.0 was to transition HTML into an XML vocabulary. It introduced the constraints of XML syntax into HTML: case-sensitivity, compulsory quoted attribute values, and balanced tags. That done, XHTML 2.0 seeks to address the problems of HTML as a language for marking up Web pages.
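As a small illustration of those constraints, and assuming nothing beyond the rules just listed, compare a fragment of tolerated legacy HTML with its XHTML 1.0 equivalent:

<!-- Legacy HTML that browsers tolerate: upper-case tags, an unquoted
     attribute value, and an unclosed paragraph -->
<P CLASS=intro>Welcome to the site

<!-- The same content under XML rules: lower-case, quoted, balanced -->
<p class="intro">Welcome to the site</p>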

In his presentation at the XTech 2005 conference in Amsterdam (see Resources), the W3C's Steven Pemberton expressed the design aims of XHTML 2.0: as generic XML as possible, less presentation and more structure, more usability and accessibility, better internationalization, more device independence, less scripting, and better integration with the Semantic Web.

These aims certainly appear pretty laudable to anybody who has worked with HTML for a while. I'll now take a deeper look at some ways in which they were achieved in XHTML 2.0.

Sections and paragraphs

When I was a newcomer to HTML many years ago, I remember experiencing a certain amount of bemusement at the textual structural elements in the language. Why were there six levels of heading, and when was it appropriate to use each of them? Also, why didn't the headings somehow contain the sections they denoted? XHTML 2.0 has an answer to this, with the new <section> and <h> (heading) elements:


<section>
  <h>Level 1 heading</h>
  ...
  <section>
    <h>Level 2 heading</h>
    ...
  </section>
</section>


This is a much more logical arrangement than in XHTML 1.0, and will be familiar to users of many other markup vocabularies. One big advantage for programmers is that they can include sections of content in a document without the need to renumber heading levels.

You can then use CSS styling for these headings. While it is to be expected that browsers' default implementations of XHTML 2.0 will have predefined some of these, written explicitly they might look like this (abstracted from the XHTML 2.0 specification):

h {font-family: sans-serif; font-weight: bold; font-size: 200%}
section h {font-size: 150%} /* A second-level heading */
section section h {font-size: 120%} /* A third-level heading */

Another logical anomaly in XHTML 1.0 is that you must close a paragraph in order to use a list. In fact, you must close it to use any block-level element (blockquotes, preformatted sections, tables, etc.). This is often an illogical thing to do when such content can justly be used as part of the same paragraph flow. XHTML 2.0 removes this restriction. The only thing you can't do is put one paragraph inside another.
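For example, the following sketch (with invented content) is the kind of markup the relaxed content model allows: the list sits inside the paragraph that introduces it, rather than forcing the paragraph to close.

<p>Before the release, the team still needs to:
  <ul>
    <li>freeze the feature set</li>
    <li>update the documentation</li>
  </ul>
  and then tag the final build.
</p>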

Images

The <img> tag in HTML is actually pretty inflexible. As Pemberton points out, it does not include any fallback mechanism except alt text (hindering adoption of new image formats), the alt text can't be marked up, and the longdesc attribute never caught on due to its awkwardness. (longdesc is used to give a URI that points to a fuller description of the image than given in the alt attribute.)

XHTML 2.0 introduces an elegant solution to this problem: Allow any element to have a src attribute. A browser will then replace the element's content with that of the content at the URI. In the simple case, this is an image. But nothing says it can't be SVG, XHTML, or any other content type that the browser is able to render.
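A sketch of what that looks like, with an invented image name: the paragraph's text is the fallback, and a browser that can fetch and render the image replaces the paragraph's content with it.

<p src="sales-chart.png">
  Sales rose steadily from January to June, then levelled off.
</p>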

The <img> tag itself remains, but now it can contain content. The new operation of the src attribute means that the alt text is now the element's content, as in this example markup:

<img src="...">H<sub>2</sub>O</img>


This is especially good news for languages such as Japanese, whose Ruby annotations (see Resources) require inline markup that was previously impossible in attribute values.

XHTML 2.0 offers a more generic form of image inclusion in the <object> element, which you can use to include any kind of object -- from images and movies to executable code like Flash or Java technology. This allows for a neat technique to handle graceful degradation according to browser capability; you can embed multiple <object> elements inside each other. For instance, you might have a Flash movie at the outermost layer, an AVI video file inside that, a static image inside that, and finally a piece of text content at the center of the nested objects. See the XHTML Object Module (linked in Resources) for more information.
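Here is a hedged sketch of that nesting, with invented file names. The src attribute follows the generalized mechanism described above, and the type attribute is shown only as a hint of how a media type might be declared; consult the Object Module itself for the definitive attribute set.

<object src="promo.swf" type="application/x-shockwave-flash">
  <object src="promo.avi" type="video/x-msvideo">
    <object src="promo.png" type="image/png">
      A short text description of the promotional clip, for clients
      that can render none of the media above.
    </object>
  </object>
</object>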

Extensible semantics

HTML has long had some elements with semantic associations, such as <cite> and <code>. The problem with these is that they are few and not extensible. In the meantime, some have attempted to use the class attribute to give semantics to HTML elements. This is stretching the purpose of class further than it was designed for, and it can't be applied very cleanly due to the predominant use of the attribute for applying CSS styling. (Some argue about this assertion of the purpose of class, but the latter point is undeniable.)

Moving beyond these ad-hoc methods, XHTML 2.0 introduces a method for the specification of RDF-like metadata within a document. RDF statements are made up of triples (subject, property, object). For instance, in English you might have the triple: "my car", "is painted", "red".

The about attribute acts like rdf:about, specifying the subject of an RDF triple -- it can be missing, in which case the document itself will be the subject. The property attribute is the URI of the property referred to (which can use a namespace abbreviation given a suitable declaration of the prefix; more detail is available in the XHTML 2.0 Metainformation Attributes Module, see Resources).

Finally, the third value in the triple is given by the content of the element to which the about and property attributes are applied -- or, if it's empty, by the value of the content attribute. Here's a simple usage that will be familiar from existing uses of the HTML <meta> tag, specifying a creator in the page header:

<html xmlns="http://www.w3.org/2002/06/xhtml2/" lang="en">
  <head>
    <title>Edd Dumbill's Home Page</title>
    <meta property="dc:creator">Edd Dumbill</meta>
    ...
  </head>
  ...
</html>

Now look at this example given by Pemberton, which shows how to use metadata in the actual body of the document:

<h property="title">Welcome to my home page</h>

This denotes the heading's content as the XHTML 2.0 title of the document, while the element itself still serves as the inline heading. Finally, an end to writing the title out twice in every document!
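Returning to the "my car is painted red" triple, here is a sketch of how it could be expressed inline. The URIs are invented for illustration; a real document would declare a prefix for the property, as described above.

<p>My car is painted
  <span about="http://example.org/cars/mine"
        property="http://example.org/terms/paintColour">red</span>.
</p>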

Thanks to a simple transforming technology called GRDDL (Gleaning Resource Descriptions from Dialects of Languages -- see Resources), you now have a single standard for extracting RDF metadata from XHTML 2.0 documents.

XHTML 2.0 has plenty of other changes, many of which are linked in with the parallel development of specifications such as XForms. I don't have room to cover them all here. Regardless, it's certainly a marked leap from XHTML 1.0.

A few other new toys in XHTML 2.0

Fed up with writing <pre><code> ... </code></pre>? Now you can use the new <blockcode> element.

To help with accessibility requirements, XHTML 2.0 now has a role attribute, which can be specified on any element in the body. For instance, purely navigational parts of the page can be given role="navigation" so that text-to-speech engines can process them intelligently.
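A minimal sketch, with invented links, of how a navigation area might be labelled for assistive technology:

<!-- The role attribute identifies this section as pure navigation -->
<section role="navigation">
  <p>
    <a href="/">Home</a> |
    <a href="/articles">Articles</a> |
    <a href="/contact">Contact</a>
  </p>
</section>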

Browsers currently support some navigation of focus through the Tab key, but it can be arbitrary. The new nextfocus and prevfocus attributes allow you to control the order in which focus moves among the screen elements; this can be a vital feature for creating navigable user interfaces.
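As a sketch (the ids and targets are invented), these attributes are understood here as naming the id of the element that should receive focus next or previously; check the current Working Draft for the exact definition.

<p>
  <a id="save"   href="/save"   nextfocus="cancel">Save</a>
  <a id="cancel" href="/cancel" prevfocus="save" nextfocus="help">Cancel</a>
  <a id="help"   href="/help"   prevfocus="cancel">Help</a>
</p>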

Preparing for XHTML 2.0

However deep the changes in advanced features, XHTML 2.0 is still recognisably HTML. Although it has new elements, a lot of XHTML 2.0 will work as-is. The <h1> to <h6> heading elements were carried through as a compatibility measure, as was <br />.

However, the mission of XHTML 2.0 was not to preserve strict syntactic backwards compatibility, so HTML renderers in today's browsers won't be able to cope with the full expressivity of XHTML 2.0 documents. Nevertheless, most Web browsers today do a good job of rendering arbitrary XML-plus-CSS, and a lot of XHTML 2.0 can be rendered in this way -- even though you won't get the semantic enhancements.

Some of the differences in XHTML 2.0 are very significant -- the transition to XForms being one of the most notable, as well as the complete break from the non-XML heritage of HTML. So, you can't switch your sites over to serving XHTML 2.0 right now, but you can make preparations for the future:

  • Get serious about using CSS, and try to remove all presentational markup.
  • Think about how you can deploy microformats in your pages. Microformats allow you to present metadata in your HTML using existing standards (see Resources); a small sketch follows this list.
  • If you've not done so already, get experience with XHTML 1.0. Serving XHTML 1.0 pages today as if they were regular HTML is possible if they are crafted according to the XHTML 1.0 HTML Compatibility Guidelines, but can create complications. XHTML 2.0 can't be served in this way. For more on this, see Resources.
  • Experiment with the X-Smiles browser (see Resources), which offers XHTML 2.0 support, along with SVG, XForms, and SMIL 2.0 Basic capabilities.
  • If you create new client systems based on XHTML-like functionality, seriously consider using XHTML 2.0 as your starting point.
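As promised above, here is a minimal hCard microformat sketch. The person and URL are invented; only the class names (vcard, fn, url, org) come from the hCard convention.

<div class="vcard">
  <a class="url fn" href="http://example.org/~jane">Jane Example</a>
  works at <span class="org">Example Consulting</span>.
</div>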

Finally, note that XHTML 2.0 is not a finalized specification. At the time of this writing, it is still in the Working Draft stage at the W3C, which means it has some way to go yet before it becomes a Recommendation. Importantly, it must still go through the Candidate Recommendation phase, which is used to gather implementation experience.

XHTML 2.0 is not likely to become a W3C Recommendation until 2007, according to the W3C HTML Working Group Roadmap. This means that 2006 will be a year to gain important deployment experience.

Comparing W3C XHTML 2.0 with WHATWG HTML 5

In these two articles, I've presented the salient points of both WHATWG's HTML 5 and the W3C's XHTML 2.0. The two initiatives are quite different: The grassroots-organised WHATWG aims for a gently incremental enhancement of HTML 4 and XHTML 1.0, whereas the consortium-sponsored XHTML 2.0 is a comprehensive refactoring of the HTML language.

While different, the two approaches are not incompatible. Some of the lower-hanging fruit from the WHATWG specifications is already finding implementation in browsers, and some of WHATWG's work is a description of de facto extensions to HTML. Significant portions of this, such as XMLHttpRequest, will find their way into the W3C's Rich Client Activity specifications. WHATWG also acts as a useful catalyst in the Web standards world.

Looking further out, the XHTML 2.0 approach offers a cleaned-up vocabulary for the Web where modular processing of XML, CSS, and ECMAScript is rapidly becoming the norm. Embedded devices such as phones and digital TVs have no need to support the Web's legacy of messy HTML, and are free to take unburdened advantage of XHTML 2.0 as a pure XML vocabulary. Additionally, the new features for accessibility and internationalization make XHTML 2.0 the first XML document vocabulary that one can reasonably describe as universal, and thus a sound and economic starting point for many markup-based endeavors.

As with its past, the future of HTML will be varied -- some might say messy -- but I believe XHTML 2.0 will ultimately receive widespread acceptance and adoption. If it were the only XML vocabulary on the Web, there might be some question, but as browsers gear up to deal with SVG, XForms, and other technologies, XHTML 2.0 starts to look like just another one of those XML-based vocabularies.





Resources

Learn

Get products and technologies
  • Take a look at the X-Smiles browser, an experimental platform with early (and sometimes only partial) support for many of the W3C's new client technologies, including XHTML 2.0, SVG, XForms, and SMIL.




About the author


Edd Dumbill is chair of the XTech conference on Web and XML technologies, and is an established commentator and open source developer working with Web and XML technologies.



source:http://www-128.ibm.com/developerworks/xml/library/x-futhtml2.html?ca=dgr-lnxw01XHTML2


"The X Prize Foundation, the group behind the $10 million prize for human space flight, 'plans to offer a $5 million to $20 million prize to the first team that completely decodes the DNA of 100 or more people in a matter of weeks, according to foundation officials and others involved,' the Wall Street Journal reports. 'Such speedy gene sequencing would represent a technology breakthrough for medical research. It could launch an era of "personal" genomics in which ordinary people can learn their complete DNA code for less than the cost of a wide-screen television.' But don't set aside that TV purchase just yet: Foundation officials don't expect the prize money to be claimed for five to 10 years."

source:http://science.slashdot.org/science/06/01/27/1337228.shtml

The New Boom

Silicon Valley is roaring back to life, as startups mint millionaires and Web dreams take flight. But, no, this is not another bubble. Here's why.
By Chris Anderson

Be careful what you wish for, all of you with the "Please, God, just one more bubble!" bumper stickers. It's getting wild again in Silicon Valley. In recent months, the breathtaking ascent of Google has lit a fire under its competitors, which include practically everyone in the online world. The result is all too familiar: seven-figure recruiting packages, snarled traffic on Highway 101, and a general sense that the boom is back.

A boom perhaps, but not (phew!) a bubble. There's a difference. Bubbles are inflated with hot air and speculation. They end with a wet pop, leaving behind messy splatters. Booms, on the other hand, tend to have strong foundations and gentle conclusions. Bubbles can be good: They spark a huge amount of investment that can make things easier for the next generation, even as they bankrupt the current one. But booms - with their more rational allocation of capital - are better. The problem is that exuberance can make it hard to tell one from the other.

Six years ago, people were likewise making the case that the dotcom frenzy was more boom than bubble, built as it was on the legitimate ground of the Internet revolution. And until late 1999 or so, maybe that was true. Then the Wall Street speculators gained the upper hand, and growth became malignant.

It's hard to know what "normal" prosperity looks like in Silicon Valley. This is, after all, the land of boom and bust - it's been alternating between greed and grief ever since the gold rush. But if there is such a thing as a healthy boom, we're living it now. Google may be trading above $400, but the Nasdaq as a whole has hardly budged in five years. Companies are once again minting millionaires, but venture capitalists are investing less than a fifth of what they were at the 2000 peak. About 50 technology companies went public last year, but more than 300 went public in 1999.

Of course, abundant venture capital and plentiful IPOs were once seen as evidence of vitality. Now, however, we know their true cost: The promise of heady valuations encourages venture capitalists to shower startups with money. And having placed such large bets, the VCs naturally want to fatten those startups for market. Fast cash and accelerated growth make a company lose touch with reality, the simplest explanation for the bubble's most notorious flameouts.

So why is the froth missing from the wave this time? Because the underlying economics are so much healthier, in three main ways.

First, technology adoption has continued at a torrid pace (and even accelerated at times) despite the bust. The dotcom business models of the 1990s may have been based on wild projections of broadband, advertising, and ecommerce trends. But the funny thing is, even after the bubble burst, those trends continued. These days, it's hard to find a technology-adoption projection from 1999 that hasn't come true. Meanwhile, the digital-media boom sparked by the iPod and iTunes has blown through even the most aggressive forecasts.

Today, broadband is mainstream, online shopping is commonplace, everyone has a wireless device or two, and Apple's latest music player was - for the fifth season in a row - the must-have holiday gift. The Internet and digital media are clearly not fads. Over the past decade, we've started to live a life only imagined in mid-'90s business plans. As a result, some silly bubble-era ideas are starting to actually make sense - perhaps a lot of sense.

Free phone calls over the Internet? That's Skype, which eBay just bought for nearly $4 billion. Online virtual communities? Now a global phenomenon in the form of massively multiplayer online games. Free music sites? MySpace, which rivals Google in traffic. (The boom's ultimate echo: The owner of Dog.com just paid $1 million for Fish.com, in hopes of starting what amounts to a new Pets.com. Just so long as it doesn't ship 50-pound bags of chow.)

The second reason that this boom is so different from the last is that the sunk costs of the dotcom era make the economics of entrepreneurship more favorable. In the bad old days, companies bankrupted themselves building out their fiber-optic networks. Bad for investors, good for everyone else: We're now enjoying supercheap bandwidth. So, too, for storage, screens, and a host of other technologies that are benefiting from profligate '90s-era investment and research.

Meanwhile, open source software has come of age, and computer hardware will soon cost less than the electricity it takes to run it. The result: industrial-strength servers that are cheaper than desktop PCs (sorry, Sun). Or, if you prefer, you can buy hardware and software even more cheaply as a hosted service (there's that inexpensive bandwidth again).

The result is that you can start a company today for a tiny fraction of what people spent five years ago. Joe Kraus, cofounder of the bubble-era search engine Excite, estimates that his new company, JotSpot, will make it to first revenues with a total investment of about $100,000 - less than 5 percent of what Excite burned through a decade earlier. Today companies are starting small and lean and staying that way - no more blowing all the first-round funding on PR stunts and rooftop parties. As a result, they're hitting break-even sooner.

In this new environment, startups can grow organically. That means less venture capital is needed - and that's the third reason this boom is different. Less venture capital leads to fewer venture capitalists hustling for early exits at high valuations. That, in turn, reduces the pressure to go public and translates to fewer undercooked companies launching IPOs on hype alone.

So there you have the recipe for a healthy boom, not a fragile bubble: a more receptive marketplace, lower costs, and lighter pressure from investors. Today, the typical exit strategy is to sell your startup to Yahoo! for a few million, not to maneuver for a rowdy IPO and an appearance on CNBC. Highway 101 is jammed with Prius-driving engineers, not biz-dev guys in Beemers. And most New York cab drivers are happily ignorant of what's hot in the Valley, just as they should be.

Chris Anderson (canderson@wiredmag.com) is Wired's editor in chief.

source:http://www.wired.com/wired/archive/14.02/boom_pr.html

7 myths about the Challenger shuttle disaster

It didn't explode, the crew didn't die instantly and it wasn't inevitable

Photo caption: Sequential photos show a fiery plume escaping from the right solid rocket booster as the space shuttle Challenger ascends to the sky on Jan. 28, 1986.


HOUSTON - Twenty years ago, millions of television viewers were horrified to witness the live broadcast of the space shuttle Challenger exploding 73 seconds into flight, ending the lives of the seven astronauts on board. And they were equally horrified to learn in the aftermath of the disaster that the faulty design had been chosen by NASA to satisfy powerful politicians who had demanded the mission be launched, even under unsafe conditions. Meanwhile, a major factor in the disaster was that NASA had been ordered to use a weaker sealant for environmental reasons. Finally, NASA consoled itself and the nation with the realization that all frontiers are dangerous and to a certain extent, such a disaster should be accepted as inevitable.

At least, that seems to be how many people remember it, in whole or in part. That’s how the story of the Challenger is often retold, in oral tradition and broadcast news, in public speeches and in private conversations and all around the Internet. But spaceflight historians believe that each element of the opening paragraph is factually untrue or at best extremely dubious. They are myths, undeserving of popular belief and unworthy of being repeated at every anniversary of the disaster.

The flight, and the lost crewmembers, deserve proper recognition and authentic commemoration. Historians, reporters, and every citizen need to take the time this week to remember what really happened, and especially to make sure their memories are as close as humanly possible to what really did happen.

If that happens, here's the way the mission may be remembered:

  1. Few people actually saw the Challenger tragedy unfold live on television.
  2. The shuttle did not explode in the common definition of that word.
  3. The flight, and the astronauts’ lives, did not end at that point, 73 seconds after launch.
  4. The design of the booster, while possessing flaws subject to improvement, was neither especially dangerous if operated properly, nor the result of political interference.
  5. Replacement of the original asbestos-bearing putty in the booster seals was unrelated to the failure.
  6. There were pressures on the flight schedule, but none of any recognizable political origin.
  7. Claims that the disaster was the unavoidable price to be paid for pioneering a new frontier were self-serving rationalizations on the part of those responsible for incompetent engineering management — the disaster should have been avoidable.

Myth #1: A nation watched as tragedy unfolded
Few people actually saw what happened live on television. The flight occurred during the early years of cable news, and although CNN was indeed carrying the launch when the shuttle was destroyed, all major broadcast stations had cut away — only to quickly return with taped relays. With Christa McAuliffe set to be the first teacher in space, NASA had arranged a satellite broadcast of the full mission into television sets in many schools, but the general public did not have access to this unless they were one of the then-few people with satellite dishes. What most people recall as a "live broadcast" was actually the taped replay broadcast soon after the event.

Myth #2: Challenger exploded
The shuttle did not explode in the common definition of that word. There was no shock wave, no detonation, no "bang" — viewers on the ground just heard the roar of the engines stop as the shuttle’s fuel tank tore apart, spilling liquid oxygen and hydrogen which formed a huge fireball at an altitude of 46,000 ft. (Some television documentaries later added the sound of an explosion to these images.) But both solid-fuel strap-on boosters climbed up out of the cloud, still firing and unharmed by any explosion. Challenger itself was torn apart as it was flung free of the other rocket components and turned broadside into the Mach 2 airstream. Individual propellant tanks were seen exploding — but by then, the spacecraft was already in pieces.

Myth #3: The crew died instantly
The flight, and the astronauts’ lives, did not end at that point, 73 seconds after launch. After Challenger was torn apart, the pieces continued upward from their own momentum, reaching a peak altitude of 65,000 ft before arching back down into the water. The cabin hit the surface 2 minutes and 45 seconds after breakup, and all investigations indicate the crew was still alive until then.

What's less clear is whether they were conscious. If the cabin depressurized (as seems likely), the crew would have had difficulty breathing. In the words of the final report by fellow astronauts, the crew “possibly but not certainly lost consciousness”, even though a few of the emergency air bottles (designed for escape from a smoking vehicle on the ground) had been activated.

The cabin hit the water at a speed greater than 200 mph, resulting in a force of about 200 G’s — crushing the structure and destroying everything inside. If the crew did lose consciousness (and the cabin may have been sufficiently intact to hold enough air long enough to prevent this), it’s unknown if they would have regained it as the air thickened during the last seconds of the fall. Official NASA commemorations of “Challenger’s 73-second flight” subtly deflect attention from what happened in the almost three minutes of flight (and life) remaining AFTER the breakup.

Myth #4: Dangerous booster flaws result of meddling
The side-mounted booster rockets, which help propel the shuttle at launch then drop off during ascent, did possess flaws subject to improvement. But these flaws were neither especially dangerous if operated properly, nor the result of political interference.

Each of the pair of solid-fuel boosters was made from four separate segments that bolted end-to-end-to-end together, and flame escaping from one of the interfaces was what destroyed the shuttle. Although the obvious solution of making the boosters of one long segment (instead of four short ones) was later suggested, long solid fuel boosters have problems with safe propellant loading, with transport, and with stacking for launch — and multi-segment solids had had a good track record with the Titan-3 military satellite program. The winning contractor was located in Utah, the home state of a powerful Republican senator, but the company also had the strengths the NASA selection board was looking for. The segment interface was tricky and engineers kept tweaking the design to respond to flight anomalies, but when operated within tested environmental conditions, the equipment had been performing adequately.

Myth #5: Environmental ban led to weaker sealant
A favorite of the Internet, this myth states that a major factor in the disaster was that NASA had been ordered by regulatory agencies to abandon a working pressure sealant because it contained too much asbestos, and use a weaker replacement. But the replacement of the seal was unrelated to the disaster — and occurred prior to any environmental ban.

Even the original putty had persistent sealing problems, and after it was replaced by another putty that also contained asbestos, the higher level of breaches was connected not to the putty itself, but to a new test procedure being used. “We discovered that it was this leak check which was a likely cause of the dangerous bubbles in the putty that I had heard about," wrote physicist Richard Feynman, a member of the Challenger investigation board.

And the bubble effect was unconnected with the actual seal violation that would ultimately doom Challenger and its crew. The cause was an inadequate low-temperature performance of the O-ring seal itself, which had not been replaced.

Myth #6: Political pressure forced the launch
There were pressures on the flight schedule, but none of any recognizable political origin. Launch officials clearly felt pressure to get the mission off after repeated delays, and they were embarrassed by repeated mockery on the television news of previous scrubs, but the driving factor in their minds seems to have been two shuttle-launched planetary probes. The first ever probes of this kind, they had an unmovable launch window just four months in the future. The persistent rumor that the White House had ordered the flight to proceed in order to spice up President Reagan’s scheduled State of the Union address seems based on political motivations, not any direct testimony or other first-hand evidence. Feynman personally checked out the rumor and never found any substantiation. If Challenger's flight had gone according to plan, the crew would have been asleep at the time of Reagan's speech, and no communications links had been set up.

Myth #7: An unavoidable price for progress
Claims that the disaster was the unavoidable price to be paid for pioneering a new frontier were self-serving rationalizations on the part of those responsible for incompetent engineering management — the disaster should have been avoidable. NASA managers made a bad call for the launch decision, and engineers who had qualms about the O-rings were bullied or bamboozled into acquiescence. The skeptics’ argument against launching in record cold temperatures was valid, but it probably was not made as persuasively as it might have been, in hindsight. If launched on a warmer day, with gentler high-altitude winds, there’s every reason to suppose the flight would have been successful and the troublesome seal design (which already had the attention of designers) would have been modified at a pace that turned out to have been far too leisurely. The disaster need never have happened if managers and workers had clung to known principles of safely operating on the edge of extreme hazards — nothing was learned by the disaster that hadn’t already been learned, and then forgotten.

source:http://www.msnbc.msn.com/id/11031097/page/2/

