Friday, May 26, 2006

Tech Titans Take Sides

Yahoo partners with eBay to battle with Google, which is teaming with Dell to put the heat on Microsoft


In the war for dominance of the Net, May 25 turned out to be a big day for alliance making. First there was news of an ad revenue-sharing deal between Yahoo! and eBay. Then came the announcement that Google would put its tools on millions of new personal computers made by Dell.


The pairings highlight the importance of the fast-growing, $12.5 billion Internet ad market and the race to get in front of as many Web surfers as possible. The alliance with eBay (EBAY) gives Yahoo (YHOO) a way to narrow Google's (GOOG) lead in generating advertising sales. Pairing with Dell (DELL), meantime, helps Google muscle in on Microsoft's dominance of the desktop. "These alliances are predicated as a response to a looming threat" from others, says Standard & Poor's analyst Scott Kessler. "Companies are inclined to make these moves so they can solidify and enhance their competitive positioning."

TWO-WAY TRAFFIC. Under the Yahoo-eBay partnership, which covers only the U.S., Yahoo will supply eBay's site with ads, and the two will split revenue generated from them. eBay's auction listings will be included in Yahoo search results, driving users from that site to eBay's listings. Yahoo will also let users pay for Yahoo services using PayPal, eBay's online payment-processing company, and Yahoo links will be included on the eBay toolbar.

"We have the largest collection of content on the Web in one place, so as they index more and more of our listings...that's very relevant content that Yahoo would like to embrace into their Web search." says John Donahoe, president of eBay Marketplace.

Analysts say the Yahoo-eBay deal is a win for both companies. For Yahoo, it means a chance to get ads in front of eBay's estimated 76 million U.S. users. The deal gives eBay a bigger slice of the red-hot online ad market. It will also be a "big boost" for PayPal, says Scot Wingo, CEO of ChannelAdvisor, a North Carolina company that provides services to eBay sellers and other merchants. Because Yahoo has many paid services, Wingo says, the partnership will give users the opportunity to use PayPal for many more of their online transactions.

On its face, the move would also appear to buttress Yahoo against Google. For instance, Google would miss out on the portion of ads on eBay's site that will be earmarked for Yahoo. But analysts say in practice Google won't be hurt much at all. eBay will continue to be one of Google's biggest customers.

Currently eBay and its subsidiary, Shopping.com, are the two largest buyers of Google ads. According to Nielsen NetRatings, ads purchased through Google by the two companies were viewed a total of 1.4 billion times -- 38% more than the ads the two purchased through Yahoo. That huge amount of spending only furthers Google's dominance in search and, indirectly, its ability to build new businesses that can compete with eBay and Yahoo.

NO THREAT TO GOOGLE. Merrill Lynch (MER) analysts in a research note called the arrangement a "good strategic fit" that could mean as much as $200 million in additional revenue for the Yahoo affiliate network in 2007. But it doesn't change the securities firm's outlook on Google at all. "We do not expect Google to lose traffic based on this announcement and are not changing estimates at this time," writes Merrill Lynch's Justin Post.

The Dell deal, on the other hand, gives Google prime real estate on desktops -- a space dominated by Microsoft (MSFT). Under the alliance, first reported by The Wall Street Journal, Dell computers purchased by consumers, small- and midsize businesses, and select enterprise customers will come loaded with a suite of Google software, including the Google Toolbar and the search application called Google Desktop. The default home page of Internet Explorer will be a page cobranded by Google and Dell.

That not only puts Google's software in front of millions of users who might not otherwise have downloaded it, but also gives those users an express route to searching with Google's engine and, potentially, clicking on ads placed alongside search results. Dell is the world's largest PC maker and sold 4.9 million PCs in the U.S. in the first quarter, for a 29.8% market share, according to Gartner Dataquest.

DESKTOP WARS. The offensive could limit Microsoft's ability to use its prominence on PCs to direct users to its MSN Internet tools and services. In recent months, Microsoft has attempted to use its popular desktop applications like Internet Explorer to drive traffic to its Web sites and search engines. That drew the ire of Google, which in April complained to the Justice Dept. about unfair competitive practices.

Google didn't get much satisfaction. Less than a month later, on May 12, the Justice Dept. dismissed the complaint. It seems Google now is taking matters into its own hands.

source:http://www.businessweek.com/technology/content/may2006/tc20060526_212374.htm?campaign_id=bier_tcm

Symantec AntiVirus Worm Hole Puts Millions at Risk

A gaping security flaw in the latest versions of Symantec's anti-virus software suite could put millions of users at risk of a debilitating worm attack, Internet security experts warned May 25.

Researchers at eEye Digital Security, the company that discovered the flaw, said it could be exploited by remote hackers to take complete control of the target machine "without any user action."


"This is definitely wormable. Once exploited, you get a command shell that gives you complete access to the machine. You can remove, edit or destroy files at will," said eEye Digital Security spokesperson Mike Puterbaugh.


"We have confirmed that an attacker can execute code without the user clicking or opening anything," Puterbaugh said.

eEye, based in Aliso Viejo, Calif., posted a brief advisory to raise the alarm about the bug, which can allow the execution of malicious code with system-level access. The flaw carries a "high risk" rating because of the potential for serious damage, Puterbaugh said.

Symantec, of Cupertino, Calif., confirmed receipt of eEye's warning and said an investigation was underway.

"[Our] product security team has been notified of a suspected issue in Symantec AntiVirus 10.x. [We] are evaluating the issue now and, if necessary, will provide a prompt response and solution," a Symantec spokesperson said in a statement sent to eWEEK.

Symantec's anti-virus software is deployed on more than 200 million systems in both the enterprise and consumer markets, and the threat of a network worm attack is very real. However, eEye's Puterbaugh said there are no publicly shared proof-of-concept exploits or other information to suggest an attack is imminent.

But, he said, "there's nothing to say that someone hasn't found this and is already using it for nefarious activities. … It's quite possible that we weren't the only ones to find this. Who knows if it's already being used in targeted attacks that we'll never hear about."

Internet security experts have long warned that flaws in anti-virus products will become a big target for malicious hackers. During the last 18 months, some of the biggest names in the anti-virus business have shipped critical software updates to cover code execution holes, prompting speculation among industry watchers that it's only a matter of time before a malicious hacker is motivated to create a devastating network worm using security software flaws as the attack vector.

"The big surprise is we haven't seen one yet," said Johannes Ullrich, chief technology officer at the SANS ISC (Internet Storm Center), of Bethesda, Md., in a recent eWEEK interview.

In March 2004, the fast-moving Witty worm exploited a zero-day buffer overflow in security products sold by Internet Security Systems. Unlike most self-propagating worms, Witty was capable of corrupting the hard drives of infected machines, preventing normal operation of the PC and eventually causing it to crash.

"This could be Symantec's Witty," Puterbaugh warned.

The vulnerable Symantec 10.x application promises real-time detection and repairs for spyware, adware, viruses and other malicious intrusions. It is used by many of the world's largest corporate customers and U.S. government agencies.

source:http://www.eweek.com/article2/0,1895,1967941,00.asp


Nokia liberates the S60 browser source code

Nokia has released the source code of its S60 WebKit, the core rendering component utilized by the company's web browser for mobile devices. The S60 WebKit is based on the WebCore HTML rendering framework used in Apple's Safari web browser, which is in turn based on the KHTML component used by the Linux-based Konqueror web browser. The S60 WebKit provides a number of unique features that improve the browsing experience on portable devices with small screens, particularly mobile phones. The most notable unique feature of the S60 WebKit is the page overview mode, which enables users to select which part of a page is shown on the screen using a miniature view of the page. The S60 WebKit provides extensive support for CSS, XHTML, SVG, and Javascript, making it one of the few mobile phone web browsers capable of supporting AJAX web applications. The S60 WebKit is also modular and extensible, with support for plug-ins that provide additional functionality like Flash playback.

Although it is already available for a variety of Nokia phones, the company hopes that source code availability will enable other companies to adopt the S60 WebKit as well. Released under the highly permissive BSD license, the S60 WebKit can now be extended and redistributed in almost any context, and companies are now free to build their own custom browsers with S60 WebKit components. By making the S60 WebKit the industry standard for mobile phone web browsing, Nokia could significantly reduce the barriers associated with making web pages usable on mobile devices. According to Nokia representative Lee Epting, the project will increase consistency and minimize market fragmentation:

"This initiative will attract a critical mass of open source software developers to build a consistent, web browser engine as the clearest path to minimize fragmentation in the mobile browser market," said Lee Epting, vice president of Nokia's global software developer support program, Forum Nokia. "With nearly 100 million smartphones deployed worldwide, a common open source solution driving mobile web browser consistency will deliver on the long-awaited promise of full-web browsing and a true web experience for smartphone users around the globe."

Nokia is no stranger to open source, as the company's Linux-based Nokia 770 web tablet uses open source extensively. The power and scalability of WebKit-based browsers and the highly permissive license under which the S60 WebKit source code is available make it a good choice for companies that want to add mobile web browsing to their devices. I think it will be particularly interesting to see how this affects Opera, whose revenue primarily comes from distribution of its own virtually ubiquitous embedded browser. At present, Opera's offerings are available for a much wider selection of platforms and hardware, ranging from mobile phones to Nintendo's handheld gaming system. Opera is even available on Nokia's own 770 handheld. Opera recently stopped including ad banners in the free version of their web browser for desktop computers. The availability of S60 WebKit source code could compel Opera to make additional changes to their business model at some point in the future.

Available from the S60 WebKit version control system, the source code includes the core S60 WebKit system as well as a special memory management component designed for mobile phones, and a user interface reference implementation called Reindeer. Additional information is available from the official WebKit site.

source:http://arstechnica.com/news.ars/post/20060525-6918.html


Plan for cloaking device unveiled

Cloaking devices are a staple of science fiction stories
Researchers in the US and Britain have unveiled their blueprints for building a cloaking device.

So far, cloaking has been confined to science fiction; in Star Trek it is used to render spacecraft invisible.

Professor Sir John Pendry says a simple demonstration model that could work for radar might be possible within 18 months' time.

In the journal Science two separate teams, including Professor Pendry's, have outlined ways to cloak objects.

These research papers present the maths required to verify that the concept could work. But developing an invisibility cloak is likely to pose significant challenges.

Both groups propose methods using the unusual properties of so-called "metamaterials" to build a cloak.

These metamaterials can be designed to induce a desired change in the direction of electromagnetic waves, such as light. This is done by tinkering with the nano-scale structure of the metamaterial, not by altering its chemistry.

Light flow

John Pendry's team suggests that by enveloping an object in a metamaterial cloak, light waves can be made to flow around the object in the same way that water flows around it.

"Water behaves a little differently to light. If you put a pencil in water that's moving, the water naturally flows around the pencil. When it gets to the other side, the water closes up," Professor Pendry told the BBC.

"A little way downstream, you'd never know that you'd put a pencil in the water - it's flowing smoothly again.

"Light doesn't do that of course, it hits the pencil and scatters. So you want to put a coating around the pencil that allows light to flow around it like water, in a nice, curved way."

The work provides a mathematical "recipe" for bending light waves in such a way as to achieve a desired cloaking effect.
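For readers who want a feel for what that recipe involves, the commonly quoted coordinate map for an idealized spherical cloak (a sketch of the transformation-optics idea, not a reproduction of either paper's derivation) compresses everything inside an outer radius R2 into a shell between an inner radius R1 and R2:

r' = R_1 + r\,\frac{R_2 - R_1}{R_2}, \qquad \theta' = \theta, \qquad \phi' = \phi, \qquad 0 \le r \le R_2

The cloak's material properties are then chosen so that light experiences this distorted space: rays that would have passed through the hidden region are guided through the shell instead and emerge on the far side as if nothing were there.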

John Pendry, along with colleagues David Smith and David Schurig at Duke University in North Carolina, US, has been testing suitable metamaterials for the device they plan to build.

This, Sir John explained, would consist of a sphere or cylinder wrapped in a sheath of metamaterial which could cloak it from radio waves.

"It's not tremendously fancy, but that for us would be quite an achievement," he told the BBC News website.

Professor Ulf Leonhardt, author of another cloaking paper in Science, described the effect for light as a "mirage".

"What you're trying to do is guide light around an object, but the art is to bend it such that it leaves the object in precisely the same way that it initially hits it. You have the illusion that there is nothing there," he told the BBC's Science in Action programme.

The work could have uses in military stealth technology - but engineers have not yet created the materials that could be used to cloak an aircraft or a tank, John Pendry explains. Professor Pendry's research has been supported by the US Defense Advanced Research Projects Agency (Darpa).

Several other scientific teams have proposed ideas for cloaking devices. One theoretical paper proposed using a material known as a superlens to cancel out light being scattered from an object.

source:http://news.bbc.co.uk/1/hi/sci/tech/5016068.stm


Interactive display system knows users by touch

An interactive computer display that keeps track of multiple users by differentiating between their touch could lead to safer vehicle controls and smarter video games, its makers claim.

The DiamondTouch (DT) system, developed by Mitsubishi Electric Research Laboratories (MERL) in Massachusetts, US, consists of a touch-sensitive screen that can be operated by several users simultaneously.

The system transmits distinct electrical signals to different areas on the surface of the screen. When a user makes contact with the screen the relevant signal is sent through their body and picked up by a receiver located in their chair.

This harmless connection tells a connected computer precisely where the screen was touched and by whom, allowing it to respond accordingly. It also means the screen can correctly pinpoint and distinguish between many different touches at once.
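As a rough illustration of that identification scheme, the Python sketch below (not MERL's code; the row-and-column antenna layout, signal IDs and chair names are all invented for the example) shows how per-chair receiver readings could be turned back into "this user touched here":

from typing import Dict, List, Tuple

# Hypothetical antenna layout: each row/column strip of the screen is driven
# with its own signal ID, e.g. "r3" = row 3, "c7" = column 7.
ANTENNAS: Dict[str, Tuple[str, int]] = {
    **{f"r{i}": ("row", i) for i in range(8)},
    **{f"c{j}": ("col", j) for j in range(8)},
}

def decode_touches(chair_readings: Dict[str, List[str]]) -> Dict[str, Tuple[int, int]]:
    """For each user (chair receiver), estimate a (row, col) touch point from the
    antenna signals that coupled through that user's body to their chair."""
    touches = {}
    for user, antenna_ids in chair_readings.items():
        known = [ANTENNAS[a] for a in antenna_ids if a in ANTENNAS]
        rows = [idx for axis, idx in known if axis == "row"]
        cols = [idx for axis, idx in known if axis == "col"]
        if rows and cols:
            # Average the detected strips; a real system would weight by signal
            # strength and can resolve several simultaneous touches per user.
            touches[user] = (round(sum(rows) / len(rows)), round(sum(cols) / len(cols)))
    return touches

# Each chair receiver only hears the signals routed through its own user's
# body, so Alice's and Bob's touches can never be confused with each other.
print(decode_touches({"alice": ["r2", "c5"], "bob": ["r6", "c1"]}))  # {'alice': (2, 5), 'bob': (6, 1)}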

"Most touch screens only permit one touch at a time," explains Paul Dietz, a researcher at MERL. "A much smaller number allow multiple, simultaneous touches, but none of these can tell you who is touching where."

Access control

By distinguishing between different users, the system can track a person's input and control their access to certain functions. A video produced by the laboratory shows the system in action (4.1MB mov). "The key point of DiamondTouch controls is identity," Dietz says. "If the controls know who is operating them, they can behave appropriately."

The MERL researchers say the identification technology could be inexpensively added to many types of physical interface. It could, for example, be used in a vehicle dashboard to let a passenger, but not a driver, access its navigation system when the car is in motion, preventing driver distraction.

The system could also be used to help limit access to certain controls at a power station or in the cockpit of an airplane, the researchers say.

Whack-a-Mole

Jeff Han, a user interface researcher at New York University in the US, says DT could have useful applications in certain environments. "The main drawback is that the user must be situated on a receiver antenna," he told New Scientist.

"However, the application scenarios envisioned [by the system's designers] are wonderfully practical and appropriate, because they side-step this limitation beautifully."

But the technology could also be used to create smarter games, says Clifton Forlines, a researcher at MERL who is testing simple gaming applications. "Consider the Whack-a-Mole game," he says. "If it were augmented with DT, two or more players trying to whack the same moles would have to race one another and a whacked mole would know who whacked it."

source:http://www.newscientisttech.com/article/dn9222-interactive-display-system-knows-users-by-touch.html


Coming soon: The Web toll

New laws may transform cyberspace and the way you surf it


What if the Internet were like cable television, with Web sites grouped like channels into either basic or premium offerings? What if a few big companies decided which sites loaded quickly and which ones slowly, or not at all, on your computer?

Welcome to the brave new Web, brought to you by Verizon, BellSouth, AT&T and the other telecommunications giants (including PopSci and CNN.com's parent company, Time Warner) that are now lobbying Congress to block laws that would prevent a two-tiered Internet, with a fast lane for Web sites able to afford it and a slow lane for everyone else.

Specifically, such companies want to charge Web sites for the speedy delivery of streaming video, television, movies and other high-bandwidth data to their customers. If they get their way (Congress may vote on the matter before the year is out), the days of wide-open cyberspace are numbered.

As things stand now, the telecoms provide the lines -- copper, cable or fiber-optic -- and the other hardware that connects Web sites to consumers.

But they don't influence, or profit from, the content that flows to you from, say, cinemanow.com; they simply supply the pipelines. In effect, they are impartial middlemen, leaving you free to browse the entire Internet without worrying about connection speeds to your favorite sites.

That looks set to change. In April a House subcommittee rejected a measure by Rep. Edward Markey of Massachusetts (D) that would have prevented telecoms from charging Web sites extra fees based on bandwidth usage.

The telecom industry sees such remuneration as fair compensation for the substantial cost of maintaining and upgrading the infrastructure that makes high-bandwidth services, such as streaming video, possible.

Christopher Yoo, a professor at Vanderbilt University Law School, argues that consumers should be willing to pay for faster delivery of content on the Internet, just as many FedEx customers willingly shell out extra for overnight delivery. "A regulatory approach that allows companies to pursue a strategy like FedEx's makes sense," he says.

On a technical level, creating this so-called Internet fast lane is easy. In the current system, network devices called differentiated service routers prioritize data, assigning more bandwidth to, for example, an Internet telephone call or streaming video than to an e-mail message.

With a tiered Internet, such routing technology could be used preferentially to deliver either the telecoms' own services or those of companies who had paid the requisite fees.
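To make that concrete, here is a small Python sketch of class-based prioritization in the spirit of a differentiated-services router; the traffic classes, their ranking, and the "paid_partner" tier are hypothetical illustrations, not any carrier's actual policy:

import heapq
from itertools import count
from typing import List, Tuple

# Lower number = served first. A tiered Internet would amount to inserting a
# "paid_partner" class above ordinary "best_effort" web traffic.
PRIORITY = {"voice": 0, "video": 1, "paid_partner": 2, "best_effort": 3}

class TieredScheduler:
    def __init__(self) -> None:
        self._queue: List[Tuple[int, int, str]] = []
        self._seq = count()  # preserves first-in, first-out order within a class

    def enqueue(self, payload: str, traffic_class: str) -> None:
        rank = PRIORITY.get(traffic_class, PRIORITY["best_effort"])
        heapq.heappush(self._queue, (rank, next(self._seq), payload))

    def dequeue(self) -> str:
        # Under congestion, best-effort packets wait while higher-ranked
        # classes drain first -- that waiting is the "slow lane".
        return heapq.heappop(self._queue)[2]

s = TieredScheduler()
s.enqueue("blog page", "best_effort")
s.enqueue("movie chunk", "paid_partner")
s.enqueue("phone call frame", "voice")
print([s.dequeue() for _ in range(3)])  # ['phone call frame', 'movie chunk', 'blog page']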

What does this mean for the rest of us? A stealth Web tax, for one thing.

"Google and Amazon and Yahoo are not going to slice those payments out of their profit margins and eat them," says Ben Scott, policy director for Free Press, a nonprofit group that monitors media-related legislation. "They're going to pass them on to the consumer. So I'll end up paying twice. I'm going to pay my $29.99 a month for access, and then I'm going to pay higher prices for consumer goods all across the economy because these Internet companies will charge more for online advertising."

Worse still, Scott argues, the plan stands to sour your Web experience. If, for instance, your favorite blogger refused to ante up, her pages would load more slowly on your computer than would content from Web sites that had paid the fees.

Which brings up another sticking point: A tiered system would give established companies with deep pockets a huge competitive edge over cash-strapped start-ups consigned to slow lanes.

"We have to remember that some of the companies that we now consider to be titans of the Internet started literally as guys in a garage," Scott says."That's the beauty and the brilliance of the Internet, yet we're cavalierly talking about tossing it out the window."

source:http://www.cnn.com/2006/TECH/internet/05/25/the.web.toll/index.html


Fils-Aime: Nintendo Has Learned from its Mistakes with GCN

Nintendo of America's executive vp of sales and marketing Reggie Fils-Aime talks about what went wrong with GameCube and how Nintendo is applying the lessons learned from GCN to its launch of the Wii.

While by no means a failure, clearly the GameCube has not been the stellar success that Nintendo had hoped it would be. The company has learned its lesson with the GameCube experience, however, and moving forward with the Wii Nintendo is very confident.

Nintendo of America's executive vice president of sales and marketing Reggie Fils-Aime, in an interview found in the latest issue of Nintendo Power, admitted that the initial software lineup for the GameCube was simply not "diverse and strong enough from a first and third-party perspective." And compounding that problem, he said, was that the next wave of titles was far too slow in arriving.

In order to avoid repeating this mistake with the Wii, Nintendo said that it has already changed its strategy and has been far more open with its partners from much earlier on.

"We have been sharing information and development tools with publishers since very early on in the process. We have communicated to them why it makes sense to develop for the platform, and why it makes business sense to bring their best current franchises and brand-new concepts to the platform," Fils-Aime said. "Those have been our key tools and tactics to make sure that publishers are on board with our strategy."

Because of this new approach, Fils-Aime believes it will be a very different story with the Wii. The number of titles on display at the Wii booth at this year's E3 Expo is already a positive indication of what's to come. Fils-Aime said that the Wii games at E3 represented "a very broad range that will meet gamers' needs."

"From Metroid Prime 3, Mario Galaxy, and Ubisoft's Red Steel, the core gamers will be thrilled. With Tennis and WarioWare, we have titles that will reach the masses," he continued.

"And, based on the sheer range of titles we've shown, we're confident that the entire first year of Wii's launch will be strong. So, I believe we are well on our way to addressing the key lessons coming out of the GameCube launch," Fils-Aime concluded.

source:http://biz.gamedaily.com/industry/feature/?id=12788&rp=49

Vista Beta 2, up close and personal

Up in Redmond, Microsoft developers proudly talk of dogfooding the software they write. Running beta software is the only way to learn what works and what doesn’t. A copy of Windows Vista running on a test machine in the corner isn’t likely to get a serious workout. To find the pain points – another popular Microsoft expression – you have to run that beta code on the machine you use every day.

Start menu (screenshot)

In that same spirit, I’ve spent the last three months running beta versions of Windows Vista on the PCs I use for everyday work. February and March were exasperating. April’s release was noticeably better, and the Beta 2 preview – Build 5381, released to testers in early May – has been running flawlessly on my notebook for nearly three weeks.

Yesterday, at WinHEC, Bill Gates officially unveiled Windows Vista Beta 2, which means you’ll get a chance to see for yourself what all the fuss is about. (The public download should be available within a few weeks – sign up here to reserve your copy.)

In the comments to posts I’ve written over the past few weeks, one question comes up again and again: What’s really in Windows Vista? Why should I care?

To help answer that question, I’ve put together a gallery of 30 screen shots digging deep into Vista Beta 2. You’ve probably seen Vista screenshot galleries on other sites, most shot in a hurry by someone sprinting to meet a deadline. I took a little longer to assemble this collection so you can get a closer look at Vista’s workings instead of just a series of setup screens and wallpaper shots.

The gallery is divided into six sections:

The Vista interface

You’ve heard all about the Aero interface and Glass effects, but did you know you can select from eight colors and vary the transparency of the see-through window elements? There’s no denying that the Vista interface is better looking than the bright blue XP Luna look, but after working with it for a few months I’ve grown to appreciate how it works, too. The redesigned Start menu, taskbar, and Control Panel – all featured in the image gallery – are easier to use than their XP predecessors.

File management

In earlier Vista builds, Windows Explorer was, to put it charitably, a mess. The worst offender was the misguided attempt to make all folders virtual – and a series of bugs and missing features made those builds painful to work with. In Beta 2, Windows Explorer appears to be working as it was designed. It’s radically different from XP’s Explorer, which means you can expect some confusion when you first sit down with it. A Search box is embedded in the top right corner of every Explorer window, powered by a fully customizable index. Oh, and Vista has a Backup program you might actually use.

Security

Security is one of the big selling points of Vista. One look at the new Security Center and you’ll see why. Where XP has three entries in its Security Center, Vista has six, including the most controversial feature in the OS: User Account Control. (See my series on UAC – Part 1, Part 2, and Part 3, for more details.) There’s no question that the new security features work as intended. The real test of Vista will be whether Windows users can be persuaded to keep UAC and other potentially disruptive features enabled.

Performance and reliability

Vista is packed with a bunch of features that have hardly received any publicity. You’ve probably seen the hokey Performance Rating dialog box, which measures your PC’s resources and assigns a 1–5 rating. But I’ll bet you haven’t seen the Performance Diagnostic Console, which is like Task Manager’s Performance tab on steroids, or the new Reliability Monitor, which sifts through event logs and helps you track down the cause of crashes and slowdowns.

On the web

If you’ve used Internet Explorer 7 on XP, you already know about tabbed browsing and IE’s support for RSS feeds. What’s different in Vista? IE7 runs in Protected Mode, a low-rights security scheme that lets your standard user account browse as usual without giving spyware and malware access to the rest of the system. There’s also an update to Outlook Express called Windows Mail, which is still a work in progress, and a surprisingly useful Calendar program.

Networking

And then there’s networking. In Vista, Microsoft has basically replaced every bit of network plumbing and built a whole new set of interfaces. The new Network Center can be confusing, especially if you already know your way around XP’s networking model.

What’s not in this image gallery? You won’t find any of the features aimed at portable computers until my notebook gets an upgrade later this week. And the extensive array of digital media features, including Windows Media Player 11 and Media Center, deserves its own gallery. Finally, next week I’ll look at some of the advanced features that IT pros will find intriguing.

Meanwhile, if you have any Vista-related questions, post them in the Talkback section here. I'll try to get to them in a follow-up post.

source:http://blogs.zdnet.com/Bott/?p=66


Data center networks often exclude Ethernet

It’s not often that Ethernet is on the outside of an emerging network technology market. But in linking some data center equipment to high-speed pipes, Ethernet sometimes has its nose pressed to the glass door, looking in.

For data center network managers, server interconnect technology falls into two distinct camps. For most, Ethernet, the world standard for networked computers, is how Windows, Linux, Unix and mainframe boxes are plugged in and accessed. But in the rarefied realm of high-performance data center clustering, technologies such as InfiniBand and some niche, proprietary interconnect technologies, such as Myricom’s Myrinet, still have a strong hold.

Over the past several years, InfiniBand switches have emerged as an alternative for some users. Makers such as Voltaire and Infinicon came to market with high-speed clustering switches that connect servers with specialized host bus adapters (HBA). These systems can provide as much as 30Gbps of throughput, with latency as low as the sub-200-nanosec range. (By comparison, latency in standard Ethernet gear is sometimes measured in microseconds -- one millionth of a second -- rather than nanoseconds, which are one-billionth of a second). This server-to-switch technology was so attractive that Cisco purchased InfiniBand switch start-up TopSpin a little more than a year ago for $250 million.

A need for speed, and more

“Ethernet is a good, versatile technology that can handle almost anything,” says Patrick Guay, vice president of marketing for Voltaire. “But Ethernet never had to address the levels of [traffic] efficiency and latency” required in clustered computer systems, storage networking and high-speed server interconnects, he adds.

“It’s not that there is no place for 10G Ethernet in data centers,” Guay says. “There is just a certain subset of customers who need more than what Ethernet and IP offer.”

This was the case at Mississippi State University’s Engineering Research Center (ERC), which runs several large Linux clusters used in engineering simulations for defense, medical and automotive industry research, among other areas. The ERC’s Maverick is a 384-processor Linux cluster connected by Voltaire InfiniBand products. Voltaire’s Intros 96-port InfiniBand switch is used to connect the diskless processor nodes, which access storage — and even operating system boot images — over the InfiniBand links.

This lets Roger Smith, network manager at the ERC, set up cluster configurations on the fly: however many processors are needed for a task can be called up quickly.

Smith says the communication between nodes in the cluster is "very chatty with very small messages" being passed back and forth. This requires extremely low latency among the nodes, so that no messages are missed, which could disrupt a job running on the cluster.
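A back-of-the-envelope Python sketch shows why such chatty workloads live or die by latency rather than raw bandwidth; the latency and message-size figures below are round illustrative numbers, not measurements from the ERC cluster:

def exchange_time(messages: int, msg_bytes: int, latency_s: float, gbps: float) -> float:
    """Seconds to complete `messages` serial request/reply exchanges."""
    wire_time = msg_bytes * 8 / (gbps * 1e9)           # serialization time per message
    return messages * 2 * (latency_s + wire_time)      # request + reply per exchange

n, size = 1_000_000, 64  # one million 64-byte messages
for label, latency, gbps in [("Ethernet path, ~10 microsec", 10e-6, 10.0),
                             ("InfiniBand-class link, ~200 nanosec", 200e-9, 30.0)]:
    print(f"{label}: {exchange_time(n, size, latency, gbps):.2f} s")

# Putting 64 bytes on a 10Gbps wire takes only about 51 nanoseconds, so the
# microseconds of per-message latency dominate the total -- which is why
# clustered applications chase nanosecond-scale interconnects.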

“Ethernet was just not ready for prime time, to get to the low-latency needs in some data centers” over the last few years, says Steve Garrison, marketing director for Force10 Networks, which makes high-speed Ethernet data center switches.

The latency even on older Ethernet networks is probably imperceptible for most LAN users — in the 1 to 50-millisec range. But in data centers, where CPUs may be sharing data in memory across different connected machines, the smallest hiccups can fail a process or botch data results.

“When you get into application-layer clustering, milliseconds of latency can have an impact on performance,” Garrison says. This forced many data center network designers to look beyond Ethernet for connectivity options.

“The good thing about InfiniBand is that it has gotten Ethernet off its butt and forced the Ethernet market to rethink itself and make itself better,” Garrison says.

Ethernet’s virtual twins

It’s become harder to tell standard Ethernet and high-speed interconnect technologies apart when comparing such metrics as data throughput, latency and fault tolerance, industry experts say. Several recent developments have led to this. One is the emergence of 10G Ethernet in server adapters; network interface card (NIC) makers such as Solarflare, Neterion and Chelsio have cards that can pump as much as 8G to 10Gbps of data into and out of a box with the latest PCI-X server bus technology.

Recent advancements in Ethernet switching components and chipsets have narrowed the gap between Ethernet and InfiniBand, with some LAN gear getting latency down to as little as 300 nanosec. Also, development of Ethernet-based Remote Direct Memory Access — most notably, the iWARP effort — has developed Ethernet gear that can bypass network stacks and bus hardware and push data directly into server memory. Improvements in basic Gigabit Ethernet and 10G chipsets have also brought latency down to the microsecond level — nearly matching InfiniBand and other proprietary HBA interconnect technologies.

The type of high-performance computing applications used at Mississippi State also has been the purview of specialized interconnects for a long time. One brand synonymous with clustering — at least in the high-performance computing world — is Myricom. The company’s proprietary fiber and copper interconnects have been in large supercomputers for years, connecting processors directly over the company’s own protocols. This allows for around 2 microsec of latency in node-to-node communications, and up to 20Gbps — more than 16 times faster than Gigabit Ethernet — of bandwidth. But even Myricom says Ethernet’s move into the high-performance data center is irrepressible.

“A great majority of even HPC applications are not sensitive to the differences in latency between Myrinet connections on one side and Ethernet on the other side,” says Charles Seitz, CEO of Myricom. The company is in the 10G Ethernet NIC market, having released Fiber-based adapters, which can run both 10G Ethernet and Myrinet protocols. This evolution was caused by customer demand for more interoperability with the Myricom gear.

“What do people mean when they say interoperability? They mean its interoperability with Ethernet,” Seitz says.

Universal acceptance

The widespread expertise in building and troubleshooting Ethernet networks, along with its universal interoperability, makes it the better data center connectivity technology in the long run, others say.

“Ethernet does not require a certification,” says Douglas Gourlay, director of product management for Cisco’s Data Center Business Unit, which includes products such as the Fibre Channel MDS storage switch, InfiniBand (from the TopSpin acquisition), and Gigabit and 10G Ethernet products.

“With Fibre Channel, you had multiple vendors building multiple products, so the storage vendors took it upon themselves to create a certification standard for interoperability,” Gourlay says. “Now users won’t deploy products that are not certified by that specification.”

Gourlay adds that the industry is not at the point where InfiniBand technology, or other high-performance computing interconnect gear, has a common certification standard, or has proven interoperability among multivendor products.

“You can probably bet that you can’t build a network today with one of the narrowly focused high-performance computing [networking] technologies from multiple vendors.”

source:http://www.networkworld.com/news/2006/052506-data-center-ethernet.html


Honda says brain waves control robot

TOKYO - In a step toward linking a person's thoughts to machines, Japanese automaker Honda said it has developed a technology that uses brain signals to control a robot's very simple moves.

In the future, the technology that Honda Motor Co. developed with ATR Computational Neuroscience Laboratories could be used to replace keyboards or cell phones, researchers said Wednesday. It also could have applications in helping people with spinal cord injuries, they said.

In a video demonstration in Tokyo, brain signals detected by a magnetic resonance imaging scanner were relayed to a robotic hand. A person in the MRI machine made a fist, spread his fingers and then made a V-sign. Several seconds later, a robotic hand mimicked the movements.
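To give a sense of what "decoding" such signals involves, here is a deliberately simplified Python sketch; it is not ATR or Honda's actual method, and the data is synthetic. Each scan is treated as a vector of voxel activations and matched to the nearest gesture template averaged from training scans:

import numpy as np

GESTURES = ["fist", "spread_fingers", "v_sign"]

def train_centroids(scans: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """scans: (n_samples, n_voxels); labels: gesture index per sample."""
    return np.stack([scans[labels == g].mean(axis=0) for g in range(len(GESTURES))])

def decode(scan: np.ndarray, centroids: np.ndarray) -> str:
    """Assign a new scan to the gesture whose training average it most resembles."""
    distances = np.linalg.norm(centroids - scan, axis=1)
    return GESTURES[int(distances.argmin())]

# Toy demo with made-up "voxel" data: three gesture patterns plus noise.
rng = np.random.default_rng(0)
patterns = rng.normal(size=(3, 50))                    # one 50-voxel pattern per gesture
scans = np.vstack([patterns[g] + 0.1 * rng.normal(size=(20, 50)) for g in range(3)])
labels = np.repeat(np.arange(3), 20)
centroids = train_centroids(scans, labels)
print(decode(patterns[2] + 0.1 * rng.normal(size=50), centroids))  # prints "v_sign"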

Further research would be needed to decode more complex movements.

The machine for reading the brain patterns also would have to become smaller and lighter — like a cap that people can wear as they move about, said ATR researcher Yukiyasu Kamitani.

What Honda calls a "brain-machine interface" is an improvement over past approaches, such as those that required surgery to connect wires. Other methods still had to train people in ways to send brain signals or weren't very accurate in reading the signals, Kamitani said.

Honda officials said the latest research was important not only for developing intelligence for the company's walking bubble-headed robot, Asimo, but also for future auto technology.

"There is a lot of potential for application to autos such as safety measures," said Tomohiko Kawanabe, president of Honda Research Institute Japan Co.

Asimo, about 50 inches tall, can talk, walk and dance. It's available only for rental but is important for Honda's image and has appeared at events and in TV ads.

At least another five years are probably needed before Asimo starts moving according to its owner's mental orders, according to Honda.

Right now, Asimo's metallic hand can't even make a V-sign.

source:http://news.yahoo.com/s/ap/20060525/ap_on_hi_te/honda_robot
