Monday, February 20, 2006

Microsoft free internet voice service challenges Vodafone

MICROSOFT has developed a Skype-style free internet voice service for mobile phones that City analysts believe could wipe billions off the market value of operators such as Vodafone.

The service is included in a mobile version of Microsoft Office Communicator due to be released this year. It will take the form of a voice-over internet protocol (VoIP) application that allows Office users to make free voice calls over wi-fi enabled phones running Windows Mobile software. The application uses the internet as a virtual phone network while also giving access to e-mail, PowerPoint and other Office applications.

Microsoft chief executive Steve Ballmer dropped his bombshell at the mobile operators’ annual 3GSM show in Barcelona last week. The significance of his remarks was missed because of his effusive and eccentric delivery.

Trying to downplay the havoc Microsoft Office Communicator Mobile will wreak on the mobile telecoms industry, Ballmer chose a topical Valentine’s Day theme for his announcement. “I love the mobile industry and I love our operator partners, and I want to have that message precede all we’re about to show,” Ballmer said in Barcelona. He went on to demonstrate how a mobile phone running Windows Mobile can be used to make a free voice call over the internet. Ballmer told the audience: “That was a VoIP call.”

But Ballmer’s announcement may be closer to a St Valentine’s Day massacre than a love letter for the mobile operators concerned.

Cyrus Mewawalla, an analyst at Westhall Capital, believes VoIP, when backed by Microsoft, will have a more devastating effect on mobile operators than it did on the fixed-line operators, which saw their voice revenues slashed after the introduction of VoIP services such as Skype.

“Internet voice does not even have to take market share to force traditional operators to cut their prices. The mere thought of free voice is enough to make customers push for price cuts,” said Mewawalla, predicting a bloodbath for mobile operator stocks.

Operators such as Vodafone and O2 believe they will be able to fight off the threat from Microsoft’s entry into the mobile voice market.

Peter Erskine, chief executive of O2, told The Business: “This is not the first time Microsoft has tried to enter the mobile market and they still have a very long way to go.”

Erskine said that because Microsoft's service runs on a mobile version of Office, its initial appeal will be to business users rather than private consumers.

But Ballmer last week said: “Most people have a personal life and they have a professional life. And they want the device that goes in their pocket to give them one glimpse of their information, whether it happens to be part of their private life or part of their professional life.” It is this formula that won Microsoft domination of the desktop.

source:http://www.thebusinessonline.com/Stories.aspx?Microsoft%20free%20internet%20voice%20service%20challenges%20Vodafone&StoryID=8C6C0A89-A9B7-4BD0-9CE2-11830FA31E15&SectionID=F3B76EF0-7991-4389-B72E-D07EB5AA1CEE


Beware the 'pod slurping' employee

A U.S. security expert who devised an application that can fill an iPod with business-critical data in a matter of minutes is urging companies to address the very real threat of data theft.

Abe Usher, a 10-year veteran of the security industry, created an application that runs on an iPod and can search corporate networks for files likely to contain business-critical data. At a rate of about 100MB every couple of minutes, it can scan and download those files onto the portable player in a process dubbed "pod slurping."

To the naked eye, somebody doing this would look like any other employee listening to their iPod at their desk. Alternatively, the person stealing data need not even have access to a keyboard but can simply plug into a USB port on any active machine.

Usher denies that his creation is an irresponsible call to arms for malicious employees and would-be data thieves, and instead insists that his scare tactics are intended to stir companies into action to protect themselves against the threat.

"This is a growing area of concern, and there's not a lot of awareness about it," he said. "And yet in 2 minutes, it's possible to extract about 100MB of Word, Excel, PDF files--basically anything which might contain business data--and with a 60GB iPod, you could probably have every business document in a medium-size firm."

Andy Burton, CEO of device management firm Centennial Software, said Usher walks a fine line but believes that he is acting with the best intentions and agrees that companies that still haven't recognized the threat need to be given a wake-up call.

"Nobody wakes up in the morning worrying about antivirus or their firewall because we all know we need those things, and we all have them in place," Burton said. "Now the greatest threat is very much inside the organization, but I'm not sure there are that many businesses (that) have realized it's possible to plug in an iPod and just walk away with the whole business in a matter of minutes."

Usher said companies shouldn't expect any help from their operating system, the most popular of which lacks the granularity to manage this threat effectively without impairing other functions.

"(Microsoft Windows) Vista looks like it's going to include some capability for better managing USB devices, but with the time it's going to take to test it and roll it out, we're probably two years away from seeing a Microsoft operating system with the functionality built in," Usher said. "So companies have to ask themselves, 'Can we really wait two years?'"

Citing FBI figures that put the average cost of data theft at $350,000, Usher argues that they can't.

"The cost of being proactive is less than the cost of reacting to an incident," Usher said.

source:http://news.com.com/Beware+the+pod+slurping+employee/2100-1029_3-6039926.html


Quantum Telecloning Demonstrated?

"According to Physorg eavesdropping on a quantum encrypted link can now be done without detection. From the article: 'The scientists have succeeded in making the first remote copies of beams of laser light, by combining quantum cloning with quantum teleportation into a single experimental step. Telecloning is more efficient than any combination of teleportation and local cloning because it relies on a new form of quantum entanglement - multipartite entanglement.' There is also a PDF of a related paper available here for background material."

source:http://it.slashdot.org/article.pl?sid=06/02/20/0111231

Interview on Xen with Manuel Bouyer

Virtualization is about running an operating system (the guest OS) on top of another OS (the host OS). This technique makes it possible to run several virtual machines with different OSes at the same time on the same hardware. VMware, MacOnLinux, and Xen are examples of virtualizer software.

Virtualization requires guest OSes to be built for the host machine's processor. It should not be confused with emulation, which does not have this requirement: when an OS runs on top of a virtualizer, its code runs unchanged on the processor, whereas an emulator has to interpret the guest OS's code. MAME and Basilisk are examples of emulators.

Binary compatibility is yet another feature: it is the ability of an OS to run applications built for another OS. For instance, BSD systems are able to run Linux binaries.

Manuel Bouyer is a NetBSD developer who has been involved in kernel hacking for many years. He recently added support for Xen to NetBSD, based on Christian Limpach's work for the Xen team.

In this interview, Manuel tells us what is so good about Xen, and what work was required to get it running on NetBSD.

ED: Hello Manuel, and thank you for accepting this interview. First, could you tell our readers how virtualization actually works? It is easy to understand that the guest OS code will run unchanged, but what happens when it tries to access the hardware? One could expect conflicts if both the host and the guest OS try to use a disk controller at the same time...

MB: The most common approach is that the virtualizer traps accesses to devices and schedules accesses to the hardware. For example, for a shared disk device, the virtualizer will trap each guest's access to the device and issue the command to the device itself after checking the request against the guest's privileges. Once the command completes, it will give the result back to the guest. The virtualizer will also apply scheduling policies to device accesses to ensure a proper share of resources, as defined by the administrator (for example, to make sure one guest won't use all the disk or network bandwidth).

ED: The host OS also has to control the guest OS memory accesses, to make sure that it does not screw up the host OS memory. How is it done?

MB: The answer to this is highly dependent on the hardware. It won't work the same way on Sun or IBM systems, which have hardware support for virtualization, as on ordinary x86 hardware, where this has to be emulated. In the latter case, the virtualizer won't let the guest's kernel run in privileged mode. When a privileged instruction occurs, a trap is generated and the virtualizer emulates the instruction after checking that it's within the domain's bounds (for example, that the domain doesn't try to map a page which doesn't belong to its memory space).

ED: I am a bit confused about the virtualizer not allowing the guest kernel to run in privileged mode (also known as kernel mode). When an OS boots, it runs the processor in kernel mode. If the guest OS kernel is started in user mode, how does it avoid a panic on the first privileged instruction?

MB: When the guest kernel executes a privileged instruction or tries to write to protected memory (e.g. a page table) a fault is generated, which is handled by the virtualizer (not by the guest's kernel). The virtualizer will then decode the faulting instruction, and take appropriate action (really execute the privileged instruction if it's OK, do the memory operation if it's OK, or forward the fault to the guest's kernel if it's a "real" fault).

ED: Let us talk about Xen, now. This virtualizer claims an unprecedented level of performance. How is it achieved?

MB: I can talk about the differences between Xen and VMware because it's what I know best. The Xen papers also present benchmarks against User Mode Linux, but I don't know how User Mode Linux works so I can't really comment on this one.

VMware will emulate real hardware so that an unmodified OS can run on top of the virtualizer. Just run an OS in VMware, and this OS will see some kind of VGA adapter for the console along with the usual PS/2 keyboard and mouse, some kind of network interface (a DEC Tulip, if my memory is right), and some kind of IDE or SCSI adapter for mass storage. To achieve this, the virtualizer has to emulate these devices. This means emulating their registers and interrupts, quite often with MMU faults when the fake registers are accessed. As a result, a single I/O operation will most likely require several context switches. The same goes for the MMU: accesses to privileged CPU registers and page tables, and the use of privileged instructions, will all generate context switches.

Xen's virtualizer (the hypervisor) doesn't emulate real hardware but instead offers an interface to the guest OS for these operations. This means that the guest OS has to be modified to use these interfaces. But these interfaces can then be optimized for the task, allowing the guest to batch large numbers of operations in a single communication with the hypervisor, and so limiting the number of context switches required for I/O or MMU operations.

ED: Your work was not just about making NetBSD a Xen host but also a Xen guest. What were the required changes for running as a guest? Were they limited to fake device drivers and machine-dependent parts of memory management (the kernel part known as pmap)? Or did you have other things to change?

MB: To make it clear: I didn't start from scratch. My first task was to take the NetBSD patches provided by the Xen team (more specifically, by Christian Limpach) and include them in NetBSD's sources. This allowed NetBSD to run as a non-privileged guest. Then I added support for domain0 (in Xen, domain0 is a "privileged" guest OS which has access to the hardware, controls the hypervisor and starts other guest OSes. It also handles the driver backends, to provide block and network devices to other guests). Now, to answer your question: no changes were required in the existing NetBSD sources; all the needed bits were added in NetBSD/Xen. Then some changes were made in the i386-specific sources to allow more code sharing between NetBSD/i386 and NetBSD/Xen (but this was mostly cosmetic, like splitting a file into parts that can be shared and parts that can't).

ED: You introduced a new NetBSD port for running as a Xen guest, called NetBSD/xen. Would it have been possible to merge Xen guest support into NetBSD/i386 instead of making a new port? What would have been the pros and cons of such an approach?

MB: Well, Christian introduced it, but I think it was the right approach. i386 is the name of the processor (MACHINE_ARCH in NetBSD kernel terminology) and the name of the architecture (MACHINE). NetBSD/xen is different from NetBSD/i386 in the same way NetBSD/mac68k and NetBSD/sun3 are different, even if they run on the same processor and can share binaries. Note that, even though Xen is a different port, there's no separate xen distribution; it's merged with i386: the i386 binary sets contain the Xen kernels.

ED: What about portability? For now NetBSD supports Xen as host or guest on i386. Will we see support for other architectures, for instance amd64 or powerpc?

MB: There's no reason it can't happen (but then NetBSD/xen will have to be renamed to NetBSD/xeni386 first :). I'm interested in working on xenamd64 myself.

ED: Since you helped port it, I imagine you were interested in using Xen. Can you tell us how you use it?

MB: The primary use was to consolidate several different servers in an overloaded machine room onto a single piece of hardware. These servers are managed by different groups of people, running different services and/or different OSes, and for all of them the hardware was underused. Thanks to Xen I've been able to merge 10 servers onto 2 physical boxes. Another use is to build virtual network platforms for network experiments: each node in the testbed can be a Xen guest instead of a physical system.

And finally, Xen is a wonderful platform for kernel development. Instead of a separate test machine, you can use a Xen guest. A Xen guest boots much faster, and when something goes bad you don't have to reboot to an older kernel just to get a working system back in which to install a new kernel. And, as the hypervisor can export PCI devices to guest OSes, you can even use a Xen guest for driver development.

ED: From the security point of view, having different physical servers is a good thing, since an attacker compromising one machine may not be able to compromise another one. One could imagine an attack against the Xen virtualizer, enabling the superuser of a guest OS to get root on the host OS. Is it likely to happen?

MB: Of course this is something that could happen, and anyone using virtualization to host different systems has to keep this in mind. As usual it's a matter of risk versus benefit. The hypervisor is in fact quite small - only 240 source files for Xen 2, about 77,000 lines of code - so it's easy to audit and maintain. Also, the interactions between a guest kernel and the hypervisor are mostly limited to events (which are just a bitmask); there are only a few places where a guest kernel can pass a pointer to arbitrary data to the hypervisor. So I think the chances of compromising the hypervisor are very small - much smaller than the chances of taking control of a regular Unix kernel. Then there is the risk of a guest compromising domain0's kernel through the driver backends. Again, these drivers are small, so the risk should be low. I think the main risk is getting domain0 compromised by a bug in some network tool, through the virtual network shared with the guests.

ED: Let us talk about the configuration. How complex is it to set up a Linux running as a guest of NetBSD, for instance?

MB: There are two steps: first install NetBSD on top of Xen, then add more guests. This is documented on the Xen port page on NetBSD's web site. The first step is easy: do a regular NetBSD/i386 install, install grub and the xen packages, and create the xen devices in /dev (just a MAKEDEV xen is needed). Add a grub config entry to load the Xen kernel, with the netbsd-XEN0 kernel as a module, and you're done: you now have a Xen system running with NetBSD as domain0.

Adding a guest isn't much more difficult: you need a free partition or a large file for the virtual disk (you can create the file using dd(1), for example) and a bridge interface pre-configured on your real Ethernet. Then create a Xen config file (you just need to change a few lines in the provided sample) pointing to the kernel you want to load, the file or partition to use as the virtual disk, and the bridge to use for the virtual network.

A guest won't run properly without a root filesystem on its virtual disk (note that it's also possible to use an NFS root, as you've got networking). For a NetBSD guest it's easy: create your domain with the netbsd-INSTALL_XENU kernel and the guest will boot into sysinst, just like a regular NetBSD install. Once the install is complete, replace netbsd-INSTALL_XENU with netbsd-XENU and you've got your guest running NetBSD. This is really fast; creating a new NetBSD guest only takes a few minutes, the most time-consuming part being creating the big file for the virtual disk with dd :)

For Linux it's a bit more tricky, because of the different distributions and install procedures available. It should be possible to create a linux-XENU kernel that launches an install procedure just like NetBSD's INSTALL_XENU does, but I'm not aware of such a kernel. The documented way is to format the virtual disk as ext2 from domain0 and populate it with Linux binaries. The easiest way is to copy them from an existing, running Linux system. But once you've got your virtual disk populated, you can easily create as many guests as you want: just copy the file backing the virtual disk.
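The config file Manuel describes is just a short file of Python-style assignments read by the Xen tools. Purely as an illustration, a minimal guest config might look like the sketch below; the paths, names and device strings are hypothetical, so the sample file shipped with the xentools package remains the authoritative reference for the exact keys and syntax on a given Xen version.

    # Hypothetical Xen guest config (illustrative values only).
    kernel = "/home/xen/netbsd-XENU"            # guest kernel to boot
    memory = 64                                 # RAM given to the guest, in MB
    name   = "nbsd-guest"                       # domain name shown by the management tools
    disk   = ['file:/home/xen/nbsd.img,0x1,w']  # file-backed virtual disk, writable
    vif    = ['bridge=bridge0']                 # virtual NIC attached to the bridge
    root   = "/dev/xbd0a"                       # root device as seen by the guest

With a file like this in place, the guest is typically started with the xm tool (for instance, xm create -c nbsd-guest), the -c flag attaching the console so you can watch it boot.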

ED: As far as I understand, Xen also offers virtual machine migration, where you freeze a Xen guest, move it to another machine and resume it there. Could you give us an insight into how this is done from the administrator's point of view?

MB: Yes, it offers migration and suspend/resume, which, from the virtual machine kernel's point of view, are the same thing. NetBSD doesn't support this yet and I've not played with it under Linux, so I don't know the exact details. Suspend/resume is easy: the current state of the guest is saved to a large file at suspend time and reloaded at resume time. Migration works in a similar way, but instead of saving the state of the guest to a file, it's transferred over the network to the remote Xen machine. This also means that a similar environment for the guest has to exist on the remote system. For example, the same network segment has to be available on both systems so that the guest's virtual interface can be connected back to the same network on the new host. The backend storage for the virtual block devices also has to be present on the new host. To achieve this, you could use a SAN, or maybe just a file on an NFS server.

ED: Manuel, thank you for answering my questions. Do you have something to add about Xen on NetBSD?

MB: Xen combined with the performance and features of NetBSD is a wonderful tool for system administrators and developers. Try it, you'll like it!

source:http://ezine.daemonnews.org/200602/xen.html


Greenland Glaciers Melting Much Faster

"NASA's Jet Propulsion Laboratory says that satellite observations indicate that Greenland's glaciers have been dumping ice into the Atlantic Ocean at a rate that's doubled over the past five years. Greenland Ice Sheet's annual loss has risen from 21.6 cubic miles in 1996 to 36 cubic miles in 2005 and it now contributes about 0.5 millimeters out of 3 millimeters to global sea level increases. One theory as to why this is happening is that the meltwater, caused by increasing temperatures in Greenland, serves as a lubricant for the moving ice, hastening its push to the sea. Another study has estimated that the warming rate in Greenland was 2.2 times faster than the global norm -- which is in line with U.N. climate models."

source:http://science.slashdot.org/article.pl?sid=06/02/19/1931207


Literacy Limps Into the Kill Zone

Years ago, the night news editor at the newspaper where I worked got upset when his frivolous story judgment was questioned by another editor.

"We can either put out a history book or a comic book," he said, taking a defensive swipe at a rebellious strand of hair. "I know which one I'm putting out."

He was clearly a man ahead of his time.

Welcome to the comic-book generation, the post-literate society. The stories that excited my news editor's imagination then -- the ones packed with lurid sex, vapid celebrity shenanigans, fallen idols -- are merely the plat du jour of journalism these days.

It doesn't matter whether you're reading your local rag, surfing the net or trying to make heads or tails of someone's inane blog -- the quality bar is set lower than ever, which is saying a lot considering it was never set very high to begin with.

But I'll save the critical examination of my profession for another column. Today, I want to talk about one of the byproducts of all this mediocrity. Today I want to talk about the all-out assault on the English language and the role technology plays in that unprovoked and dastardly attack. I especially want to talk about the ways dumbing down the language is not only seen as acceptable, but is tacitly encouraged as the status quo.

Any number of my acquaintances excuse the bad writing and atrocious punctuation that proliferates in e-mail by saying, in essence, "Well, at least people are writing again." Horse droppings. People have never stopped writing, although it's reaching a point where you wish a lot of them would.

The very nature of e-mail (which, along with first cousins IM and text messaging, is an undeniably handy means of chatting) encourages sloppy "penmanship," as it were. Its speed and informality sing a siren song of incompetent communication, a virtual hooker beckoning to the drunken sailor as he staggers along the wharf.

But it's not enough to simply vomit out of your fingers. It's important to say what you mean clearly, correctly and well. It's important to maintain high standards. It's important to think before you write.

The technology of instant -- or near-instant -- communication works against that. But it's not as if you can't surmount this obstacle. You only have to be willing to try.

Technology conspires against language in another, more insidious way: The sheer speed with which things move these days has given us the five-second attention span, the 10-second sound bite and the splashy infographic that tells you very little, if anything, while fooling you into thinking that you are now somehow informed. (Of course, if you need more than 10 seconds to "get" Mariah Carey, well, shame on you.)

Sadly, this devalues the thoughtful essayist and the sheer linguistic joy of the exposition. And the language dies a little more each day.

Then there's the havoc wrought on spelling and punctuation by all this casual communication. You can't lay all that at the feet of technology, of course. Grammar skills have been eroding in this country for years and that has a lot more to do with lax instruction than it does with e-mail or instant messaging. (Math is a different matter. No student should be allowed to bring a calculator into a math class. Ever.)

But couple those deficient grammar skills with the shorthand that's become prevalent in fast communication (not to mention all those irritating acronyms: LOL, WYSIWYG, IMHO, etc.) and you've just struck a match next to a can of gasoline. And people wonder why the tone of e-mail is so easily misunderstood.

On another front, we live in a world where the creep of tech and business jargon threatens to make the language indecipherable to everyone, even the spewer of this bilge water. It's amazing that any business gets done at all, with junk like this clotting our communications arteries.

Business jargon is simply the art of saying nothing while appearing to say a lot.

As a result, we have CEOs of major corporations who lack the basic writing skills to pen a simple, in-house memo in plain-spoken English. We see marketing swine get paid princely sums to lie about their products in language so bloated with jargon that their lies -- and even their half-truths -- are unintelligible. We see company flacks churning out impenetrable press releases that no editor in his right mind would consider reading, let alone using. We see business reporting reduced to the trite and formulaic because the reporter is either too uncritical or too lazy to take a hard look at what lies behind the smoke and mirrors.

Technical jargon, thankfully, is less invasive because most of it is so geeky that it has no practical application in the real world. Still, a few words and phrases manage to slip out now and then (help yourself here), especially when the business and tech boys overlap, which they frequently do. Eternal vigilance is required to nip these encroachments in the bud. While tech jargon might be very useful among engineers and programmers, it should stay among engineers and programmers so as not to frighten the children and the horses.

Apologists will argue that language isn't static, that it's ever-changing and evolving. That's true. Language does change. Idiomatic English is the product of centuries of social and cultural infusion, a fact that gives modern-day English much of its color and flair.

But when change does violence to the accepted standards of the king's English and takes the mother tongue into the realm of the unfathomable, as does almost all jargon coming out of the technology and business worlds, it's our job as keepers of the grail to drive it back into the dark little hole from whence it came.

If you've got some time to kill, you're welcome to join us, the happy few.

source:http://www.wired.com/news/columns/0,70214-1.html?tw=wn_story_page_next1

Astronomers get shortlist of possible ET addresses

ST. LOUIS (Reuters) - Astronomers looking for extraterrestrial life now have a short list of places to point their telescopes.

They include nearby stars of the right size, age and composition to have Earth-like planets circling them, scientists said on Saturday.

But cuts in federal funding mean that private philanthropists who pay for the bulk of their work may find out first when and if extraterrestrial life is discovered, the astronomers told a meeting of the American Association for the Advancement of Science.

Margaret Turnbull of the Carnegie Institution of Washington released her "top 10" list of potential stars to the meeting. They will be the first targets of NASA's Terrestrial Planet Finder, a system of two orbiting observatories scheduled for launch by 2020.

"There are 400 billion stars in the galaxy, and obviously we're not going to point the Terrestrial Planet Finder ... at every one of them," said Turnbull.

So, on behalf of the space agency NASA and the now independently funded Search for Extraterrestrial Intelligence, or SETI, she narrowed down the list to stars that could have planets with liquid water orbiting them.

"We want to see these habitable planets with our own eyes," she added. So the star cannot be too bright, or it will obscure the planet.

Variable stars, which grow hotter and cooler, probably would not be conducive to life, so they were thrown out, as were stars that are too young or too old. Some are too gassy to have spawned planets like Earth, which contains a lot of metal. Others have massive companions whose gravity could interfere with the steady conditions needed for life to evolve.

TOP 10 LIST

Turnbull's top 10 list includes 51 Pegasi, where in 1995 Swiss astronomers spotted the first known planet orbiting a sun-like star outside our solar system, a Jupiter-like giant.

Others include 18 Sco in the Scorpio constellation, which is very similar to our own sun; epsilon Indi A, a star one-tenth as bright as the sun; and alpha Centauri B, part of the closest solar system to our own.

"The truth is when looking at these so-called 'habstars,' habitable solar systems, it is hard to really rank them. I don't know enough about every star to say which one is the absolute best one," Turnbull said.

Jill Tarter of the SETI Institute, set up after U.S. government funding for the program was cut, said the current budget threatens other astronomical programs.

She said research and analysis budgets were cut by 15 percent in the fiscal year 2007 budget proposed this year by U.S. President George W. Bush.

"In the case of astrobiology, the cut is to 50 percent of what it was in 2005," Tarter told a news conference. "We are facing what we consider an extraordinarily difficult financial threat."

She said NASA once had a policy of what to do, whom to call, and how to announce the news if someone detected a signal of intelligent life from space.

"Today it is in fact a group of very generous philanthropists who will get the call before we get a press conference," Tarter said. They include Microsoft co-founder Paul Allen and Microsoft chief technology officer Nathan Myhrvold.

Carol Cleland of the University of Colorado argued that astronomers are limiting themselves by looking for planets that closely resemble Earth.

"I actually think we ought to be looking for life as we don't know it," Cleland told a news conference.

She said life on Earth is all so similar -- based on DNA made up of specific building blocks -- that it is likely to have had a single origin. Life elsewhere may be built from different ingredients, or structured very differently, she said.

source:http://news.yahoo.com/s/nm/20060218/sc_nm/space_life_dc


Archaeologists unearth Alexander the Great era wall

ATHENS (AFP) - Greek archaeologists excavating an ancient Macedonian city in the foothills of Mount Olympus have uncovered a 2,600-metre defensive wall whose design was "inspired by the glories of Alexander the Great," the site supervisor said Thursday.

Built into the wall were dozens of fragments from statues honouring ancient Greek gods, including Zeus, Hephaestus and possibly Dionysus, archaeologist Dimitrios Pantermalis told a conference in the northern port city of Salonika, according to the Athens News Agency.

Early work on the fortification is believed to have begun under Cassander, the fourth-century BC king of Macedon who succeeded Alexander the Great. Cassander is believed to have ordered the murders of Alexander's mother, wife and infant son, Pantermalis said.

The wall's design suggests that it was "inspired by the glory of Alexander the Great in the East," as the young king sought to emulate grandiose structures encountered during his campaigns, Pantermalis told the conference.

Bronze coins from the period of Theodosius, the 4th-century AD Byzantine Emperor who abolished the ancient Olympic Games, were also found hidden inside the wall.

The discovery was made in the archaeological site of Dion, an ancient fortified city and key religious sanctuary of the Macedonian civilisation, which ruled much of Greece until Roman times.

Prior excavations at Dion have already revealed two theatres, a stadium, and shrines to a variety of gods, including Egyptian deities Sarapis, Isis and Anubis, whose influence in the Greek world grew in the wake of Alexander's conquest of Egypt.

source:http://news.yahoo.com/s/afp/20060216/sc_afp/greecearchaeology


Space Adventures Announces $265 Million Global Spaceport Development Project

Space Adventures, Ltd., the world's leading space experiences company, announced today its plans to develop a commercial spaceport in Ras Al-Khaimah (the UAE), with plans to expand globally. Other potential spaceport locations include Asia, specifically Singapore, and North America. The total estimated cost of the global spaceport development project is at least $265 million (USD) and will be funded by various parties, along with shared investments by Space Adventures and the government of Ras Al-Khaimah. The company, which has organized orbital flights for all of the world's private space explorers, also announced that His Highness Sheikh Saud Bin Saqr Al Qasimi of Ras Al-Khaimah, along with the UAE Department of Civil Aviation, has granted clearance to operate suborbital spaceflights in their air space.

The UAE spaceport, planned to be located less than an hour's drive from Dubai, already has commitments for $30 million (USD).

"I am proud to announce Ras Al-Khaimah as the site where suborbital commercial space travel will begin and flourish. Space Adventures is the pioneer of space tourism, which is why we signed an agreement with them nearly a year ago and began this project. After we initiate operations here, we look forward to expanding operations outside of the United Arab Emirates. In this regard, I would also like to announce that we have committed an additional $30 million (USD) of funding for global spaceport development. We are most excited about spearheading this multi-billion dollar industry," said His Highness Sheikh Saud Bin Saqr Al Qasimi, the Crown Prince of Ras Al-Khaimah, United Arab Emirates.

"Because of Ras Al-Khaimah’s unique airport and spaceport support facilities, His Highness' commitment to space tourism, and the close proximity to Dubai, one of the world's leading luxury tourist destinations, makes it a choice location for spaceflight operations," said Mr. Anderson. "As a global leader of tourism, the United Arab Emirates is an ideal location for a spaceport. Suborbital flights will offer millions of people the opportunity to experience the greatest adventure available, space travel. We are honored to partner with His Highness Sheikh Saud."

The suborbital space transportation system has been designed by the Myasishchev Design Bureau, a leading Russian aerospace organization which has developed a wide array of high-performance aircraft and space systems. Explorer, as it has been named, will have the capacity to transport up to five people to space and is designed to optimize the customer experience of space travel while maintaining the highest degree of safety.

The system consists of a flight-operational carrier aircraft, the M-55X, and a rocket spacecraft. "We've designed the Explorer with several exciting features, to be announced in the near future, that will make the customer's experience fantastic. Additionally, the safety of the passengers is our chief aim, and the Explorer will make use of several multi-redundant safety systems that we have unique experience in designing and implementing over the last 40 years," said Valery Novikov, head designer at the Myasishchev Design Bureau.

"Explorer design plans have been perfected over the years and it will be a truly remarkable system. Yesterday, we announced our fully-funded vehicle development joint venture with Prodea, a private investment firm founded by the Ansari family. Now, the manufacturing process can be completed to build a fleet of these vehicles in the near future," said Mr. Anderson. "We will not disclose the development schedule until it has been finalized but we, at Space Adventures, along with Prodea, have the utmost confidence that through our global vehicle and spaceport development projects, we will enable operations of the world's first commercial suborbital flights."

"The Department of Civil Aviation has examined and evaluated all the technical aspects and we, along with relevant federal authorities, are supportive of the operation of commercial suborbital spaceflight in UAE air space," said Eng. Salem Bin Sultan Al Qasimi, chairman of the Department of Civil Aviation. "We support Space Adventures’ development of a spaceport at the Ras Al-Khaimah International Airport, and we view this project as a technological enhancement to the region that will bring visitors to the Emirates from around the world."

Space Adventures, the only company to have successfully launched private explorers to space, is headquartered in Arlington, Va. with offices in Cape Canaveral, Fla., Moscow and Tokyo. It offers a variety of programs, including orbital spaceflight missions to the International Space Station (available today), commercial missions around the Moon, Zero-Gravity and MiG flights, cosmonaut training, spaceflight qualification programs and reservations on future suborbital spacecraft. The company's advisory board comprises Apollo 11 moonwalker Buzz Aldrin; shuttle astronauts Kathy Thornton, Robert (Hoot) Gibson, Charles Walker, Norm Thagard, Sam Durrance, Byron Lichtenberg and Pierre Thuot; and Skylab astronaut Owen Garriott.

Ras Al-Khaimah is the northernmost of the seven emirates that form the United Arab Emirates, which also include Abu Dhabi, Dubai, Sharjah, Ajman, Umm Al Quwain and Fujairah. It borders Oman on its northern and eastern limits. It covers 1,684 sq. km and has 64 km of pristine natural coastline and beautiful mountains on its northern border. Ras Al-Khaimah has witnessed massive development in recent years and now boasts one of the largest pharmaceutical firms in the region, a world-leading ceramics industry, and a burgeoning tourist sector with world-class hotels and resorts. It has embarked on an ambitious development program including investments in infrastructure improvement, tourism, shopping, and efforts to attract industrial and commercial enterprises. Among the most important of these endeavors is the establishment of the Ras Al-Khaimah Free Trade Zone. The Ras Al-Khaimah International Airport is rapidly expanding, and offers excellent services and facilities for all types of flight operations.

source:http://www.spaceadventures.com/media/releases/2006-02/347


Web program simplifies artificial gene design

A web-based program that simplifies many tricky steps involved in designing artificial DNA has been released by US microbiologists.

The software suite, called GeneDesign, should make it easier for researchers to modify and study DNA. The cost of gene synthesis is rapidly falling with dozens of companies around the world now offering to create genes to order from the chemical components of DNA.

GeneDesign was created by researchers led by Jef Boeke at Johns Hopkins University School of Medicine in Baltimore, US. It simplifies and automates several key steps of DNA design.

These key steps include translating proteins and amino acids – the building blocks which make up proteins – backwards into a DNA sequence. The software can also manipulate the DNA "codons" that code for each amino acid. Codons are sets of three nucleotides – the fundamental molecules which link together to form a DNA chain.

In addition, the software can identify DNA restriction sites – sections of DNA that can be spliced or cut in order to mix synthetic and natural DNA.
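To make those steps concrete, here is a toy Python sketch of the same two ideas; it is not GeneDesign's actual code. It reverse-translates a short peptide using a single preferred codon per amino acid and then scans the result for restriction sites. The codon table and the two enzymes listed are illustrative only; a real tool would draw on organism-specific codon-usage tables and a full enzyme database.

    # Toy reverse translation and restriction-site scan (illustrative only).
    PREFERRED_CODONS = {          # one common codon per amino acid (tiny toy table)
        'M': 'ATG', 'K': 'AAA', 'E': 'GAA', 'F': 'TTC', 'L': 'CTG',
        'S': 'AGC', 'G': 'GGC', 'A': 'GCC', 'N': 'AAC', '*': 'TAA',
    }

    RESTRICTION_SITES = {'EcoRI': 'GAATTC', 'BamHI': 'GGATCC'}

    def reverse_translate(peptide):
        """Map each amino acid to a preferred codon and join them into a DNA string."""
        return ''.join(PREFERRED_CODONS[aa] for aa in peptide)

    def find_sites(dna):
        """Return {enzyme: [positions]} for every recognition site found in the sequence."""
        hits = {}
        for enzyme, site in RESTRICTION_SITES.items():
            positions = [i for i in range(len(dna) - len(site) + 1)
                         if dna[i:i + len(site)] == site]
            if positions:
                hits[enzyme] = positions
        return hits

    if __name__ == "__main__":
        dna = reverse_translate("MEFKNS*")
        print(dna)               # ATGGAATTCAAAAACAGCTAA with this toy table
        print(find_sites(dna))   # {'EcoRI': [3]}: the Glu-Phe junction spells GAATTC

The example also shows why the restriction-site pass matters: an innocent-looking glutamate-phenylalanine pair (GAA followed by TTC) happens to create an EcoRI site, which a designer may want to keep or eliminate depending on how the synthetic fragment will be spliced into natural DNA.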

"The ability to order up any piece of DNA you want is empowering, and the design process itself is quite interesting and gives a totally new perspective," Boeke told New Scientist.

Safety checks

"It's a really nice tool," says Drew Endy, a bioengineer the Massachusetts Institute of Technology, US. "But you should expect it to be outdated in 18 months."

Endy says the development of more and more advanced gene-design software reflects the increasing technological ease with which genes can be made to order. "We're at this interesting stage where it's becoming easy to synthesise DNA," Endy says. "It is important to have software environments to support this."

But this is also a source of concern. An investigation conducted by New Scientist in November 2005 revealed that few gene synthesis companies check that the genes they are being asked to make are safe, or perform customer background checks after receiving an order.

Entire chromosomes

"Potential for misuse comes with the territory of any powerful new technology," Boeke says. "The synthetic biology community has prided itself on envisioning the darker side of the technology and building in safeguards wherever possible, to minimise the risks associated with gene synthesis technology."

In the face of such worries, Boeke’s team would like to introduce safeguards to prevent anyone from using their software to design genes that could be used as a bioweapon. However, Boeke says it is crucial to have access to a regularly updated database of suspect genes.

He believes gene design software will become more and more powerful. "The next scale will be the synthesis of entire chromosomes and genomes," he says.

source:http://www.newscientist.com/channel/health/dn8737.html


US and Canadian skiers get smart armour

A futuristic flexible material that instantly hardens into armour upon impact will protect US and Canadian skiers from injury on the slalom runs at this year's Winter Olympics.

The lightweight bendable material, known as d3o, can be worn under normal ski clothing. It will provide protection for US and Canadian skiers taking part in slalom and giant slalom races in Turin, Italy. Skiers normally have to wear bulky arm and leg guards to protect themselves from poles placed along the slalom run.

Skiwear company Spyder, based in Colorado, US, developed racing suits incorporating d3o along the shins and forearms and offered members of the US and Canadian Olympic alpine ski teams the chance to try them out several months ago. "Now they love it and won't ski without it," claims Richard Palmer, CEO of UK-based d3o Labs, which developed the material.

Although the exact chemical ingredients of d3o are a commercial secret, Palmer says the material is synthesised by mixing together a viscous fluid and a polymer. Following synthesis, liquid d3o is poured into a mould that matches the shape of the body part it will protect.

Brief impact

The resulting material exhibits a property called "strain rate sensitivity". Under normal conditions the molecules within the material are weakly bound and can move past each other with ease, making the material flexible. But the shock of sudden deformation causes the chemical bonds to strengthen and the moving molecules to lock, turning the material into a more solid, protective shield.

In laboratory testing, d3o guards provided as much protection as most conventional protective materials, its makers claim. But Phil Green, research director at d3o Labs, says it is difficult to precisely measure the material's properties because the hardening effect lasts only as long as the impact itself.

However, Green believes it may be possible to alter the properties of d3o for new impact-protection and anti-trauma applications. "There are certainly opportunities to dabble with the chemistry and enhance the effect," he told New Scientist.

Another potential application may be soundproofing. The propagation of sound waves should generate a similar strain to an impact, so it may be feasible to create a material that becomes more soundproof in response to increasing noise. "It could have some very interesting, unexplored properties," Green says.

source:http://www.newscientist.com/article.ns?id=dn8721&feedId=online-news_rss20


Ten Reasons to Buy Windows Vista

Unless you've been living under a rock for the past few months, you probably know that the latest version of Windows--called Vista--is due to hit store shelves later this year (in time for the holidays, Microsoft tells us). The successor to Windows XP offers a little something for everyone, from eye-catching graphics and new bundled applications to more-rigorous security. In fact, there is so much in the new operating system that it can be tough to get a handle on it all.

I've been noodling around with a recent beta version of Windows Vista (Build 5270) and had a chance to make some observations. While the sleek new look and polished interface caught my eye, it's what's under the covers that impressed me most. Microsoft's done a great job of improving security across the board. Things like Windows and spyware library updates are streamlined, and I definitely appreciate the more robust Backup software.

Still, there's plenty of unfinished work left to do. Internet Explorer 7 struggled to properly render some Web pages, and I found local network connectivity to be a hit-or-miss affair. And then there's the stuff that isn't even in there yet--like the intriguing Windows Sidebar, which will put real-time weather info, stock quotes, system status, RSS feeds, and other information on the display.

So during my time with Windows Vista, I kept an eye out for the reasons I--and you--might ultimately want to get our hands on the new OS when it's available. And frankly, if you buy a new Windows-based PC at the end of this year or any time in, say, the next five years, you'll probably end up with Vista by default.

Keep in mind, this is based solely on my experience with prerelease software (and a whole new beta could be out by the time you read this). Features get tweaked, they come and go, but from what we can tell, Vista is now starting to harden into the product that will be running many, many desktops for the foreseeable future. And by and large, that's a good thing.

Here's what to be excited about:

1. Security, security, security: Windows XP Service Pack 2 patched a lot of holes, but Vista takes security to the next level. There are literally too many changes to list here, from the bidirectional software firewall that monitors inbound and outbound traffic to Windows Services Hardening, which prevents obscure background processes from being hijacked and changing your system. There's also full-disk encryption, which prevents thieves from accessing your data, even if they steal the PC out from under your nose.

Perhaps most crucial (and least sexy) is the long-overdue User Account Protection, which invokes administrator privileges as needed, such as during driver updates or software installations. UAP makes it much more convenient for users to operate Vista with limited rights (meaning the system won't let them do certain things, like load software, without clearance from an administrator). This in turn limits the ability of malware to hose your system.

2. Internet Explorer 7: IE gets a much-needed, Firefox-inspired makeover, complete with tabbed pages and better privacy management. There's also the color-coded Address Bar that lets you know if a page is secured by a digital key, or, thanks to new antiphishing features, if it's a phony Web site just looking to steal information about you.

These features will all be available for Windows XP users who download IE7. But Vista users get an important extra level of protection: IE7 on Vista will run in what Microsoft calls "protected mode"--a limited-rights mode that prevents third-party code from reaching your system. It's about darn time.

3. Righteous eye candy: For the first time, Microsoft is building high-end graphics effects into Windows. The touted Aero Glass interface features visually engaging 3D rendering, animation, and transparencies. Translucent icons, program windows, and other elements not only look cool, they add depth and context to the interface. For example, hover your cursor over minimized programs that rest on the taskbar and you'll be able to see real-time previews of what's running in each window without opening them full-screen. Now you can see what's going on behind the scenes, albeit at a cost: You need powerful graphics hardware and a robust system to manage all the effects.

4. Desktop search: Microsoft has been getting its lunch handed to it by Google and Yahoo on the desktop, but Vista could change all that. The new OS tightly integrates instant desktop search, doing away with the glacially slow and inadequate search function in XP. Powerful indexing and user-assignable metadata make searching for all kinds of data--including files, e-mails, and Web content--a lot easier. And if you're running Vista on a Windows Longhorn network, you can perform searches across the network to other PCs.

5. Better updates: Vista does away with using Internet Explorer to access Windows Update, instead utilizing a new application to handle the chore of keeping your system patched and up-to-date. The result is quicker response and a more tightly streamlined process. The update-tracking mechanism, for instance, is much quicker to display information about your installation. And now key components, such as the Windows Defender antispyware module, get their updates through this central point. Like other housekeeping features, a better Windows Update isn't a gee-whiz upgrade, but it should make it easier--and more pleasant--to keep your PC secure.

6. More media: Over the years, one of the key reasons to upgrade versions of Windows has been the free stuff Gates and Company toss into the new OS, and Vista is no exception. Windows Media Player (perhaps my least favorite application of all time) gets a welcome update that turns the once-bloated player into an effective MP3 library. The Windows Photo Gallery finally adds competent photo-library-management functionality to Windows, so you can organize photos; apply metatags, titles, and ratings; and do things like light editing and printing. The DVD Maker application, which was still very rough when I looked at it, promises to add moviemaking capabilities--along the lines of Movie Maker--to the operating system. There are even some nice new games tucked into the bundle.

7. Parental controls: Families, schools, and libraries will appreciate the tuned-up parental controls, which let you limit access in a variety of ways. Web filtering can block specific sites, screen out objectionable content by selected type, and lock out file downloads. You can also restrict each account's access by time of day or day of the week. As a dad, I can tell you this will be great for keeping kids off the PC while you're at work, for instance. You can even block access to games based on their Entertainment Software Rating Board ratings.

8. Better backups: When Windows 95 first came out, the typical hard disk was, maybe, 300MB in size. Today, desktops routinely ship with 300GB or 400GB hard drives. And yet, the built-in data-backup software in Windows has changed little in the past decade. Windows Vista boasts a much-improved backup program that should help users avoid wholesale digital meltdowns. Microsoft also tweaked the useful System Restore feature--which takes snapshots of your system state so you can recover from a nasty infection or botched software installation.

9. Peer-to-peer collaboration: The Windows Collaboration module uses peer-to-peer technology to let Vista users work together in a shared workspace. You can form ad hoc workgroups and then jointly work on documents, present applications, and pass messages. You can even post "handouts" for others to review.

10. Quick setup: Beta code alert: There are some Vista features I hope dearly for even though they haven't been built yet. This is one of them. Jim Allchin, Microsoft's co-president, says that Windows Vista boasts a re-engineered install routine, which will slash setup times from about an hour to as little as 15 minutes. Hurray! The new code wasn't in the beta version of Vista that Microsoft sent to me--my aging rig took well over an hour to set up--so I'll believe it when I see it. Still, any improvement in this area is welcome.

Five Things That Will Give You Pause

All this is not to say that Vista is a slam-dunk and everyone should be running out to buy it as soon as Microsoft takes the wraps off. Heck, Windows XP has developed into a fairly stable, increasingly secure OS. Why mess with that?

Yes, during my time with Vista, I've found more than enough features to get excited about--features that will make a sizable chunk of Windows users want to upgrade. So why would anyone in their right mind stick with what they've got? Here are a few reasons:

Pay that piper: Vista is an operating system. It's the stuff your applications run on. But it'll cost $100 or more to make the switch. Unless you're buying a new PC and starting from scratch, you may be better off saving the money for something else.

Where's my antivirus?: For all the hype about security in Windows Vista, users may be disappointed to learn that antivirus software will not be part of the package. There's every indication that an online subscription service--possibly under the OneCare rubric--will offer antivirus protection to Vista users down the road. But for the time being, you'll need to turn to third-party companies like Symantec, McAfee, Grisoft, and others for virus protection.

Watch that hourglass: Vista is a power hog. Unless you have a top-end PC with high-end graphics hardware, for instance, you won't see one of the coolest parts of the new OS--the Aero Glass interface. Microsoft did the smart thing by offering Aero Basic and Windows Classic looks as well, which will let older and slower PCs run Vista. It just won't look as pretty.

Curse the learning curve: Microsoft has already ditched some aggressive ideas--such as the whole "virtual folders" thing--because the concepts proved too confusing for users. Even so, you'll find that the new Windows changes a lot of old tricks, and not always for the better. Heck, it took me almost five minutes to find the Run command, which used to show up right in the Start menu. And many users may struggle with the new power scheme, which defaults to putting the PC into hibernation rather than shutting down. I know it frustrated me the first time I wanted to power down the system to swap out a disk drive.

Meet the old boss, same as the new boss: Microsoft has added lots of new stuff to Vista, but some features are just warmed-over fare. Windows Mail is nothing more than a rebranded Outlook Express, and Windows Defender is simply an updated version of Microsoft AntiSpyware.

So keep your eyes peeled for future previews of Vista. It may not be perfect (what software is?), but in a lot of ways, it's a giant leap forward.

source:http://news.yahoo.com/news?tmpl=story&u=/ttpcworld/20060210/tc_techtues_pcworld/124642&cid=1740&ncid=1729


Google's Response to the DoJ Motion

"Google Inc. on Friday formally rejected the U.S. Justice Department's subpoena of data from the Web search leader, arguing the demand violated the privacy of users' Web searches and its own trade secrets. Responding to a motion by U.S. Attorney General Alberto Gonzales, Google also said in a filing in U.S. District Court for the Northern District of California the government demand to disclose Web search data was impractical."

source:http://yro.slashdot.org/article.pl?sid=06/02/18/1356230

Outsourcing Is Climbing Skills Ladder

The globalization of work tends to start from the bottom up. The first jobs to be moved abroad are typically simple assembly tasks, followed by manufacturing, and later, skilled work like computer programming. At the end of this progression is the work done by scientists and engineers in research and development laboratories.

A new study that will be presented today to the National Academies, the nation's leading advisory groups on science and technology, suggests that more and more research work at corporations will be sent to fast-growing economies with strong education systems, like China and India.

In a survey of more than 200 multinational corporations on their research center decisions, 38 percent said they planned to "change substantially" the worldwide distribution of their research and development work over the next three years — with the booming markets of China and India, and their world-class scientists, attracting the greatest increase in projects.

Whether placing research centers in their home countries or overseas, the study said, companies often use similar criteria. The quality of scientists and engineers and their proximity to research centers are crucial.

The study contended that lower labor costs in emerging markets are not the major reason for hiring researchers overseas, though they are a consideration. Tax incentives do not matter much, it said.

Instead, the report found that multinational corporations were global shoppers for talent. The companies want to nurture close links with leading universities in emerging markets to work with professors and to hire promising graduates.

"The story comes through loud and clear in the data," said Marie Thursby, an author of the study and a professor at Georgia Tech's college of management. "You have to have an environment that fosters the development of a high-quality work force and productive collaboration between corporations and universities if America wants to maintain a competitive advantage in research and development."

The multinationals, representing 15 industries, were from the United States and Western Europe. The authors said there was no statistically significant difference between the American and European companies.

Dow Chemical is one company that plans to invest heavily in new research and development centers in China and India. It is building a research center in Shanghai, which will employ 600 technical workers when it is completed next year. Dow is also finishing plans for a large installation in India, said William F. Banholzer, Dow's chief technology officer.

Today, the company employs 5,700 scientists worldwide, about 4,000 of them in the United States and Canada, and most of the rest in Europe. But the moves overseas will alter that. "There will be a major shift for us," Mr. Banholzer said.

The swift economic growth in China and India, he said, is part of the appeal because products and processes often have to be tailored for local conditions. The rising skill of the scientists abroad is another reason. "There are so many smart people over there," Mr. Banholzer said. "There is no monopoly on brains, and none on education either."

Such views were echoed by other senior technology executives, whose companies are increasing their research employment abroad. "We go with the flow, to find the best minds we can anywhere in the world," said Nicholas M. Donofrio, executive vice president for technology and innovation at I.B.M., which first set up research labs in India and China in the 1990's. The company is announcing today that it is opening a software and services lab in Bangalore, India.

At Hewlett-Packard, which opened an Indian lab in 2002 and is starting one in China, Richard H. Lampman, senior vice president for research, points to the spread of innovation around the world. "If your company is going to be a global leader, you have to understand what's going on in the rest of the world," he said.

The globalization of research investment, industry executives and academics argued, need not harm the United States. In research, as in economics, they said, growth abroad does not mean stagnation at home — and typically the benefits outweigh the costs.

Still, more companies in the survey said they planned to decrease research and development employment in the United States and Europe than planned to increase employment.

In numerical terms, scientists and engineers in research labs represent a relatively small part of the national work force. As with the broader debate about offshore outsourcing, the trend matters less for the number of jobs involved than for what it may signal about a loss of competitiveness.

The American executives who are planning to send work abroad express concern about what they regard as an incipient erosion of scientific prowess in this country, pointing to the lagging math and science proficiency of American high school students and the reluctance of some college graduates to pursue careers in science and engineering.

"For a company, the reality is that we have a lot of options," Mr. Banholzer of Dow Chemical said. "But my personal worry is that an educated, innovative science and engineering work force is vital to the economy. If that slips, it is going to hurt the United States in the long run."

Some university administrators see the same trend. "This is part of an incredible tectonic shift that is occurring," said A. Richard Newton, dean of the college of engineering at the University of California, Berkeley, "and we've got to think about this more profoundly than we have in the past." Berkeley and other leading American universities, he said, are now competing in a global market for talent. His strategy is to become an aggressive acquirer. He is trying to get Tsinghua University in Beijing and some leading technical universities in India to set up satellite schools linked to Berkeley. The university has 90 acres in Richmond, Calif., that he thinks would be an ideal site.

"I want to get them here, make Berkeley the intellectual hub of the planet, and they won't leave," said Mr. Newton, who emigrated from Australia 25 years ago.

The corporate research survey was financed by the Ewing Marion Kauffman Foundation, which supports studies on innovation. It was designed and written by Ms. Thursby, who is also a research associate of the National Bureau of Economic Research, and her husband, Jerry Thursby, who is chairman of the economics department at Emory University in Atlanta.


source:http://www.nytimes.com/2006/02/16/business/16outsource.html?_r=1&ex=1140325200&en=528257ea09a183cb&ei=5070&oref=slogin

NIH-Created Ebola Vaccine Passes 1st Test

WASHINGTON -- The first vaccine designed to prevent infection with the lethal Ebola virus has passed initial safety tests in people and has shown promising signs that it may indeed protect people from contracting the disease, government scientists reported Friday.

Just 21 people received the experimental vaccine in this early stage testing. Much more research is necessary to prove whether the vaccine will pan out, cautioned lead researcher Dr. Gary Nabel of the National Institutes of Health.

But the results are encouraging for U.S. scientists who worry not only that the horrific virus might be used as a terrorist weapon, but also note that natural outbreaks in Africa seem to be on the rise.

Ebola hemorrhagic fever kills within days by causing massive internal bleeding. There is no cure. Ebola is highly contagious, and between half and 90 percent of people who catch it die. The virus was first recognized in 1976, and scientists still don't know where it incubates between outbreaks, which so far have occurred only in Africa, apparently when people come into contact with infected apes or bushmeat.

A vaccine would be useful not only to quell a bad outbreak, but as advance protection for doctors, nurses and animal-care workers.

Nabel and colleagues at the NIH's Vaccine Research Center developed a vaccine made of DNA strands that encode three Ebola proteins. They boosted that vaccine with a weakened cold-related virus, and the combination protected monkeys exposed to Ebola.

The first human testing looked just at the vaccine's DNA portion; the full combination will be tested later.

At a microbiology meeting in Washington on Friday, Nabel and colleagues reported seeing no worrisome side effects when comparing six people given dummy shots with 21 volunteers given increasing doses of the DNA vaccine.

Moreover, the vaccine recipients produced Ebola-specific antibodies, giving "us some confidence that the vaccine is having an effect on the immune system," Nabel said.

If the complete vaccine passes additional safety testing, the question is how to prove that it will protect people. NIH plans to test whether people have the same immune-system reactions to the vaccine as do monkeys that are protected by it.

source:http://www.washingtonpost.com/wp-dyn/content/article/2006/02/17/AR2006021701666.html


Creating a Backboneless Internet?

"The Internet is the best thing to happen to the free exchange of ideas since... well... maybe ever. But it can also be used as a tool for media control and universal surveillance, perhaps turning that benefit into a liability. Imagine, for instance, if Senator McCarthy had been able to steam open every letter in the United States. In the age of ubiquitous e-mail and filtering software, budding McCarthys are able and willing to do so. I Am Not A Network Professional, but it seems like all this potential for abuse depends upon bottlenecks at the level of ISPs and backbone providers. Is it possible to create an internet that relies instead on peer-to-peer connectivity? How would the hardware work? How would the information be passed? What would be the incentive for average people to buy into it if it meant they'd have to host someone else's packets on their hard drive? In short, what would have to be done to ensure that at least one internet remains completely free, anonymous, and democratized?"

source:http://ask.slashdot.org/article.pl?sid=06/02/17/2355231

State of the Next-Gen Consoles - Part I: A Brief History of the Xbox 360

Microsoft screwed everyone up. By everyone, I mean their competitors… the juggernaut Sony, and the tenacious Nintendo. They released the Xbox 360 a full year earlier than Nintendo had hoped, and possibly two years earlier than Sony really would have liked.

The original Xbox was a complete failure… financially. The Xbox division lost money nearly every quarter after launch. Microsoft had no experience in the highly volatile console industry and cobbled together a thinly veiled PC in (enormous) console clothing. To make matters worse, the contracts they signed with some component suppliers left little room for the declining-cost model that has become standard in console life cycles. At one point Microsoft actually took NVIDIA to arbitration over the Xbox GPU pricing, and lost. It's rumored that Microsoft wasn't too happy with their Intel deal either: although they negotiated fantastic pricing for the technology they received at the time, as the generation wore on they lacked both the IP ownership and the cost-reducing manufacturing abilities enjoyed by both the PS2 and the GameCube. In spite of the errors Microsoft made, they lost much less money than they could have.

The Xbox was unarguably the most powerful of its peers (that's right, don't argue!) and it had some gems in its library. The most significant gem was the system-selling Halo, which aided greatly with the system's launch. However, it wasn't just Halo or the Xbox's hardware superiority that saved it; the main reason the Xbox fared better than it might have was its online strategy. The comprehensive 'Xbox Live' played to the strengths of the console's designers. With years of high-load networking experience, Microsoft developed the first true top-to-bottom console network, robust enough to convince people to actually pay for it. As big a success as Xbox Live was, the Xbox business was a sinking ship in terms of its financial prospects. Sony's installed base of PS2s was running away with the lead, and Nintendo's GameCube was hugely undercutting both consoles in price. The anchor was the Xbox hardware cost; bleeding cash, Microsoft couldn't wait to replace it.

Although some would argue that the Xbox was merely a planned foot in the door from Microsoft's perspective, history will remember it differently. Had Microsoft truly been that forward-looking, they would have secured their hardware's intellectual property, at least far enough to guarantee complete backwards compatibility with its successor. When Microsoft began developing the Xbox 360 they took a long hard look at the successes and errors they made with the Xbox, and then contrasted them with those of their competitors. They knew the Xbox hardware model was broken, but how to fix it? Sony's model for the PS2 (and for the CPU of the PS3) was one of expensive, long-term hardware co-development. Although Microsoft had the financial resources, they lacked Sony's hardware experience and, more so, they lacked the time. Nintendo's GameCube model, on the other hand, was much more hands-off. The contracted IBM "Gekko" CPU and ATI "Flipper" GPU of the GameCube ended up being quite competitive hardware-wise in spite of the console's bargain price. The difference was that where Microsoft contracted to buy complete chips from NVIDIA and Intel at a set price, Nintendo licensed only the chip designs, so it could reduce costs more easily by contracting out manufacturing and taking full advantage of successive die shrinks. Microsoft loved the model. They loved that the GameCube was fast, and for much of its life, only $99US. In fact they loved it so much they not only stole the hardware model… they also contracted the exact same chip designers of the GameCube for the Xbox Next! And so it began: IBM would design the CPU of the Xbox 360 while ATI would develop the GPU. In many ways the Xbox 360 would become the spiritual successor of Nintendo's GameCube, at least hardware-wise.

Although Microsoft dodged (most of) the bullet in their freshman attempt with the Xbox, they were very leery of making similar mistakes as a sophomore. Chief among the reasons for the high cost of the original Xbox was the inclusion of a hard drive in every machine. Fortunately for Microsoft, Sony never saw the need to cut the PS2's retail price too deeply. Sony could have afforded to get much closer to the $99 GameCube price tag, but the PS2 was selling well enough that it wasn't necessary. Mind you, had Sony felt the need to cut the PS2's MSRP more dramatically and earlier on, the results would have been disastrous for Microsoft. Because of this lesson almost learned, Microsoft was fearful of price cutting in a next-gen race where Sony might end up having to be much more competitive. Leading up to the Xbox 360 release, Sony dropped several hints at a much higher PS3 cost. Some speculated that Sony was attempting to coax Microsoft into launching the Xbox 360 at a higher price, but I believe Sony was rather trying to get Microsoft to lock into including an expensive hard drive with every system. Once Microsoft committed to that, Sony could then have launched a tiered-model PS3 (one SKU with a hard drive and one without) and been significantly closer to the 360's price point with the drive-less SKU. Microsoft wisely didn't bite and instead opted to offer a tiered model out of the gate. Although the tiered model has its drawbacks, chief among them that game developers can't assume a hard drive's inclusion, it will help consumers compare apples to apples.

The final belabored decision was all about the media. Microsoft's HD Era pitch was muted somewhat by not supporting next-generation hi-def media. Microsoft decided early on that beating Sony or Nintendo to market by any significant margin meant going with regular DVD media. Many think this was a critical error on Microsoft's part, but I'm not so sure. There are two real advantages and two real detriments to going the DVD route. The first advantage is timing. Choosing DVD over HD-DVD allowed Microsoft a healthy head start in shipping the 360… at least six or so months, and I predict significantly closer to a year before the PS3 launches. The other main advantage is cost. The DVD drive will cost a pittance compared to what it will cost Sony to include a Blu-ray drive. Current estimates put the Xbox DVD drive as low as $15 US, while the Blu-ray drive is thought to cost Sony at least $100. This wide cost margin isn't expected to narrow significantly until well into 2008, and you can add to that the increased cost of the media per game. The main disadvantage, this being a game console, is the media size. Certainly some games will exceed the maximum 9GB limit per DVD in the near future, but 9GB is a healthy amount of space. Even some monstrously long and detailed games today use less than half that space. (Resident Evil 4 for the GameCube comes to mind, which clocked in under 3.6GB total.) It's true that HD-resolution textures take up more space, but it's also true that it's financially insignificant to include two or more DVDs per game. Disc swapping may be an issue for gamers who hate it, but I don't think this alone will be terribly relevant to the Xbox 360's success (or lack thereof). What is slightly more important from the more casual gamer's perspective is the inability to play high-definition movies. I personally know people who early on elected to purchase the PS2 over the GameCube based on DVD playback support alone; in this respect Sony may find favour with the same demographic thanks to the PS3's Blu-ray support. Microsoft has announced that it will launch an add-on HD-DVD player, which is alright for HD movies I guess (as long as it's much cheaper than a stand-alone HD-DVD player), but this add-on is mostly moot from a gaming perspective because Microsoft can't, and won't, ever release any games on HD-DVD discs. If they did, early Xbox 360 adopters would be up in arms, and rightfully so, over being forced to buy an expensive add-on to play the latest games.
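To make the capacity argument concrete, here is a minimal back-of-the-envelope sketch in Python. The 9GB-per-DVD figure and the ~3.6GB Resident Evil 4 data point come from the paragraph above; the other game sizes are purely hypothetical examples, not real titles.

```python
# Back-of-the-envelope disc-count math for the DVD-capacity argument above.
# DVD_CAPACITY_GB is the per-disc figure cited in the article; game sizes
# other than the Resident Evil 4 data point are hypothetical.

import math

DVD_CAPACITY_GB = 9.0

def discs_needed(game_size_gb: float) -> int:
    """Number of DVDs required to hold a game of the given size."""
    return max(1, math.ceil(game_size_gb / DVD_CAPACITY_GB))

for title, size_gb in [("Resident Evil 4 (GC)", 3.6),
                       ("hypothetical HD title", 15.0),
                       ("hypothetical HD epic", 25.0)]:
    print(f"{title}: {size_gb} GB -> {discs_needed(size_gb)} disc(s)")
```

Even the largest hypothetical example above fits on three discs, which is why I consider the media-size drawback relatively minor compared with the cost and timing advantages.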

- Xenon

In-depth technical discussions of the Xbox 360's IBM-designed CPU and ATI-designed GPU are beyond the scope of this article, but many excellent articles are available. (For the Xenos GPU I'd suggest Beyond 3D's in-depth analysis, and for the Xenon CPU take a look at ArsTechnica's coverage.) I would, however, like to touch upon the consensus view of these key components.

The IBM-designed "Xenon" CPU, although quite advanced, is nowhere near as radical a design as its "Cell" competitor. We'll delve more into the Cell in the second part of this trilogy, but the Cell is a radical design indeed. Having said that, the Xenon is complex enough that its potential has barely been scratched by its launch titles. A three-core, six-thread beast with little cache and less in the way of branch prediction, it has plenty of power, but no silicon was "wasted" on making that power easy to tap. The current development level of retail games is still very much single-core and single-threaded. No games to my knowledge, on the 360 or otherwise, have been designed from the ground up to take efficient advantage of a multi-core CPU. There have been a few hacks and patches released after the fact to improve performance of specific games on dual-core PCs, but nothing that takes significant, efficient advantage. Moving forward, the Xenon CPU's advantage, over the Cell at least, is that although advanced, it is still very much the type of design that developers had been anticipating. Couple that expectation with Microsoft's software support and I expect developers to come to grips much more quickly with the Xenon than with the Cell. The Cell may be more powerful, perhaps significantly so, but it may be quite a while before that theoretical delta translates into anything practical. The graphics core, however, is a very different scenario.
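To illustrate the kind of restructuring a chip like Xenon asks of developers, here is a toy Python sketch (my own illustration, nothing to do with the actual Xbox 360 SDK): per-frame work is split into independent chunks and farmed out to a pool of worker threads. The entity-update function and thread count are assumptions chosen only to mirror the three-core, six-thread figure above.

```python
# Illustrative only: splitting per-frame game work across worker threads,
# the general pattern multi-core CPUs such as Xenon reward. Not real
# console code; the update function and data are made up.

from concurrent.futures import ThreadPoolExecutor

NUM_HARDWARE_THREADS = 6  # Xenon: 3 cores x 2 threads, per the article

def update_entity(entity, dt):
    """Toy per-entity update: integrate position from velocity."""
    entity["x"] += entity["vx"] * dt
    return entity

def update_world(entities, dt):
    # Each entity update is independent, so the work can run concurrently.
    with ThreadPoolExecutor(max_workers=NUM_HARDWARE_THREADS) as pool:
        return list(pool.map(lambda e: update_entity(e, dt), entities))

world = [{"x": 0.0, "vx": float(i)} for i in range(10)]
world = update_world(world, dt=1.0 / 60.0)
print(world[3])
```

The hard part, as the paragraph above suggests, isn't the thread pool itself but redesigning game systems so their work actually decomposes into independent chunks like this.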

With the aforementioned alienation of, and bridge-burning with, NVIDIA over the original Xbox GPU, there was only one choice of GPU partner for the Xbox 360. ATI Technologies, NVIDIA's arch-rival in all things graphics, was not exactly a second choice for the software giant. With proven custom-graphics experience from Nintendo's GameCube, and an ever-present threat to NVIDIA in PC performance, the Canadian tech firm was a perfect fit for Microsoft's lofty ambitions. The chip firm, which is used to launching successively faster PC cards every ten or so months, was tasked with designing a chip to last several of those generations. My guess as to their approach was something like this… look at designs roughly three generations ahead in the PC space, then cut the logic in half to fit into the maximum ~300 million transistors that present-day design processes allow.

- Xenos

If this was the approach, ATI nailed it when they created the 360 GPU dubbed "Xenos." The Xenos chip, or chips rather, is quite a success both in its design and, even more so, in ATI's prediction of where graphics workloads are heading. Without getting into too much detail, ATI saw a growing bottleneck in the shading efficiency of the graphics pipeline. This bottleneck is only now becoming perceptible in the PC space as ATI releases GPUs with increasingly powerful shader architectures alongside the latest games, which make ever-increasing use of those shaders. Rather than lock the chip's ability to process vertex and pixel data into a rigid ratio, ATI took the truly next-generation approach and made the chip's 48 processors able to dynamically process either. The result is that Xenos, at a mere 232 million transistors, should in many cases perform similarly to a much larger traditional design. The other advancement of the Xenos GPU is a daughter die: a smaller chip right next to the main shader chip, but on the same package. The daughter die consists mostly of 10MB of very fast RAM and provides important logic such as Z-culling (eliminating the drawing/colouring of polygons hidden behind other polygons), basic colour operations and, most significantly, up to 4x anti-aliasing with a negligible performance hit.
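To show why a dynamically shared pool of shader processors helps, here is a toy Python model (my own simplification, not ATI documentation): a fixed vertex/pixel split is gated by whichever partition finishes last, while a unified pool of 48 ALUs absorbs whatever mix of work the frame happens to contain. The 16/32 fixed split and the per-frame work units are assumptions for illustration only.

```python
# Toy comparison of a fixed vertex/pixel ALU split versus a unified pool.
# Work units and the 16/32 split are hypothetical; only the 48-ALU total
# comes from the article.

TOTAL_ALUS = 48

def fixed_split_time(vertex_work, pixel_work, vertex_alus=16, pixel_alus=32):
    # Frame time is gated by whichever fixed partition finishes last.
    return max(vertex_work / vertex_alus, pixel_work / pixel_alus)

def unified_time(vertex_work, pixel_work):
    # All 48 ALUs chew through the combined workload together.
    return (vertex_work + pixel_work) / TOTAL_ALUS

for v, p in [(100, 500), (400, 200)]:  # arbitrary work units per frame
    print(f"vertex={v} pixel={p}: fixed={fixed_split_time(v, p):.1f} "
          f"unified={unified_time(v, p):.1f}")
```

In both lopsided cases the unified pool finishes sooner because no ALUs sit idle, which is the efficiency argument behind Xenos's design.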

The last interesting aspect I'd like to discuss about the Xbox 360 is by no means a secret, but I feel it hasn't been touched upon nearly enough: the potential for physics processing. Physics processing is the hot new area of gaming that the next generation of consoles seems to have missed. Or has it? None of the next-generation consoles is expected to have a dedicated physics chip, as will soon be the case in the PC space. However, there is some capability there. Half-Life 2, which was made more realistic in part by physics middleware provider Havok, showed us how much fun it is to defeat your enemy by, say, destroying the high wooden ledge he's standing on and letting gravity do the dirty work, as opposed to shooting everyone in the head. It was an immersive step in the right direction. It's traditionally been the host CPU's job to process physics, as is the case with Half-Life 2, and CPUs can do it, albeit not terribly efficiently. Physics calculations are highly parallel in nature, not at all unlike… graphics calculations.

Back in October of 2005 ATI came out and publicly stated what many already knew, and some were already doing: graphics cards are not only capable of physics calculations, they're an order of magnitude faster at them than same-generation CPUs thanks to their extreme parallelism. But isn't the GPU busy processing the graphics? It's true that most previous instances of physics processing on graphics chips required near-exclusive use of the GPU, but Xenos is different in two big ways. The first is an added function called MEMEXPORT, which basically allows writing and reading of floating-point data between Xenos and the 360's main RAM. This single command effectively turns Xenos into a massive FPU co-processor, but there's still the nagging problem of physics calculations hogging the shading resources. The other big difference, as stated earlier, is that Xenos is the first programmable unified shading design! In theory it makes no difference to the GPU whether it's processing shading or physics. I believe it's entirely possible, and probable, that some developers may choose to lock down 33% of Xenos's shading ability and use that third as a virtual dedicated physics processor. This would reduce the shading ability of the 360 by the same 33%, but that third alone would theoretically come close to equaling the entire physics-processing performance of the main CPU! The reason for the specific 33%, or one-third, lockdown example lies in Xenos's 48 shading processors being organized as three arrays of 16. Although Xenos is programmable across the arrays, it may prove more efficient to dedicate a complete array. For reference, the Xenon CPU is capable of roughly 9 billion scalar ops/sec, while the Xenos GPU is capable of roughly 24.6 billion scalar ops/sec, or 8.2 billion for each of the three arrays of 16 shading processors. In short, all else being equal, developers may have the option to take a 33% hit in the shading capability of the 360 in order to gain a physics co-processor capable of ~8.2 billion scalar operations per second. Again, this is speculation and there are a lot of variables, not the least of which is the latency of the MEMEXPORT function, so I contacted ATI to clarify.
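For readers who want the arithmetic spelled out, here is the same trade-off worked through in a few lines of Python. The scalar-rate figures and the three-arrays-of-16 layout are the ones quoted above; everything else is just division.

```python
# Reproducing the article's arithmetic: dedicating one of Xenos's three
# 16-ALU arrays to physics trades a third of the shading throughput for a
# co-processor that nearly matches the Xenon CPU's scalar rate.

XENON_SCALAR_OPS = 9.0e9    # ~9 billion scalar ops/sec (CPU, per article)
XENOS_SCALAR_OPS = 24.6e9   # ~24.6 billion scalar ops/sec (GPU, per article)
NUM_ARRAYS = 3              # 48 shader ALUs organized as 3 arrays of 16

per_array = XENOS_SCALAR_OPS / NUM_ARRAYS      # ~8.2e9 ops/sec
shading_left = XENOS_SCALAR_OPS - per_array    # ~16.4e9 ops/sec remain

print(f"one array as physics co-processor: {per_array / 1e9:.1f} Gops/s "
      f"({per_array / XENON_SCALAR_OPS:.0%} of the CPU's scalar rate)")
print(f"shading throughput remaining: {shading_left / 1e9:.1f} Gops/s "
      f"({shading_left / XENOS_SCALAR_OPS:.0%} of the full GPU)")
```

One array works out to about 8.2 billion scalar ops/sec, roughly 91% of the CPU's quoted rate, while the remaining two arrays still provide about two-thirds of the GPU's full shading throughput.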

Just prior to the release of this article, ATI senior architect Clay Taylor replied, confirming the physics-processing potential of Xenos. Although specific workloads (i.e. physics-specific instructions) are not assignable to ALU arrays in a discrete manner, he confirmed it is entirely possible that "Physics processing could be interleaved into the command stream and would use the percentage of the ALU core that the work required." The ability to scale the use of Xenos between PPU and GPU duties opens many creative doors for developers. I'm rather surprised that Microsoft isn't touting this capability, as it was obviously intended by ATI. The physics-processing ability will come at the cost of shading ability, but the proportion is entirely at the developer's discretion. I can think of many instances (e.g. indoor scenes), or even game-style choices, in which the powerful shading unit will have plenty of spare cycles to act as a PPU.

- Final thoughts

Regardless of the physics-processing ability of Xenos, it is the real deal in next-generation graphics. The implementation of a truly unified shading architecture may have been enough on its own to qualify the GPU as next-generation, but add the daughter die with its embedded frame buffer and few would argue. Many have been quick to criticize the graphics quality of some launch titles, and in most instances I agree; however, the lackluster graphics are certainly not for lack of hardware ability, but for lack of experience programming on beta development hardware. The potential is barely hinted at in a couple of titles such as Call of Duty 2 and Project Gotham Racing 3, which both look impressive on an HDTV. On non-HD sets the graphics do look much more current-generation, but that is because much of the horsepower developers could muster at this early stage with new hardware went into driving a higher resolution instead of higher-quality shading. In other words, one drawback of developing for a set high resolution (i.e. 720p) is that running at a lower resolution (480p) generally won't yield the higher-quality graphics the system is capable of, even though it's pushing only a third of the pixels. At best you'll get a faster frame rate, though I would hope some developers will at least boost FSAA quality at lower resolutions. We'll delve more into HD relevance in the Revolution part of this trilogy. Suffice it to say, if significantly higher-quality visuals are all you consider when deeming a console "next generation" or not, then the Xbox 360 is clearly a next-gen console. If everything I'm hearing about the competing consoles' graphics parts is true, the Xbox 360's graphics, if not the fastest, will certainly be close enough that the difference should be largely irrelevant.

Clearly my impression of the Xbox 360 is that it is positioned to compete significantly better in the next-gen console race than its predecessor did. The difference this time around is that although Microsoft will no longer have decidedly the most powerful console, they also won't have the most expensive console, and believe me, they will compete on price. The Xbox 360's media (DVD) and input device (gamepad) are safe choices and the CPU may be merely adequate, but the GPU is quite potent and should go far in keeping Microsoft's box in the same league as Sony's overall, despite the disparity in time to market. You may also assume from my comments that, like many analysts, I'm discounting Nintendo's entry into the next-gen race; however, this couldn't be further from the truth. The Revolution is so unique it isn't really directly comparable to the 360 or the PS3, but more on that in the final section of this trilogy of articles.

I would like to invite discussion on this and future articles; I will be following the discussion thread for this editorial and will reply as necessary. Until then, expect Part II: A Brief History of the PS3 in the coming weeks.

source:http://www.elitebastards.com/cms/index.php?option=com_content&task=view&id=20&Itemid=28


Frequency of Profanity in Halo 2

In picture format to help the “gifted”

Thanks for visiting. Please realize that I'm not writing this to advocate censorship, or because I'm surprised by swearing or don't like it; I was just curious to see what the frequency was.

Master Chief

When you log on to Xbox Live, more often than not you will be greeted by a 14-year-old who learned a new word on the playground that day, or maybe by the drunken 24-year-old who hates black people, gays and anyone who isn't in his frat. No matter who you are, if you have played on Live you have run into cursing and lewdness. If you look at the rating for the game you can see that it is intended for ages 17+, but parents don't care/understand/listen, so lots of underage kids have this game. Another important thing to note is that the rating includes a warning that the game experience changes with use of online features. Please do not confuse this with a plea for the government to crack down and tighten control on the gaming industry. If anything, this information should be used by parents to educate themselves about what their child is involved with and then make an informed decision about whether to let them play.

Last December I started recording the frequency of profanity that I was able to hear while playing Halo 2. The results of the study, which covered 33.9 hours, were surprising.

Disclaimers:

1. The curse words counted were those I could hear; more may have occurred, and other players in a given game may not have heard the same number of curse words I did. The ability to hear other players depends on your proximity to them.

2. The times recorded are the times I was signed into Xbox Live and either engaged in a game, in the process of joining a game, or viewing the results and listening to the discourse of other players. The times do not reflect solely "in game" time. You can view my games at bungie.net under the gamertag Ca1vin. Note the ONE in the gamertag.

3. Do not proceed if you do not want to view words which may offend some.

More after the Jump

The Results….

The Words and categories used are as follows.

Fuck: all forms (-ing, -ed, -er)
Ass: -hole, -bag, -hat, etc.
Shit: self-explanatory
Racial: any derogatory term which focused on a player's race
Sexual: sexual phrases which were not homosexual; "fuck" was not counted here unless it was directed at a person (i.e. "your mom")
Homosexual: comments which called someone gay, used any slang term with the same meaning, or referred to two people of the same gender performing sexual acts on each other
Damn: self-explanatory
Bitch: self-explanatory

Overall percentage use of curse words or lewd comments
[chart: Halo 2 cursing frequency]

Hourly breakdown of curse words or lewd comments in Halo 2
[chart: Halo 2 hourly curse frequency]

Overall usage of profanity
[chart: Halo 2 overall curse frequency]
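For anyone curious how tallies like those in the charts above can be aggregated, here is a minimal Python sketch. The per-session numbers below are made up purely for illustration; the real data lives in the Excel spreadsheet linked beneath.

```python
# Sketch only: aggregating hypothetical per-session profanity tallies into
# per-hour and overall-percentage figures like those charted above.

from collections import Counter

# (hours played, {category: count heard}) per session -- hypothetical values
sessions = [
    (2.0, {"fuck": 40, "shit": 25, "homosexual": 10}),
    (1.5, {"fuck": 30, "racial": 5, "bitch": 8}),
]

totals = Counter()
hours = 0.0
for session_hours, counts in sessions:
    hours += session_hours
    totals.update(counts)

grand_total = sum(totals.values())
print(f"{hours:.1f} hours, {grand_total} utterances, "
      f"{grand_total / hours:.1f} per hour")
for word, n in totals.most_common():
    print(f"{word:12s} {n:4d}  ({n / grand_total:.0%} of all profanity)")
```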

You can download the Excel spreadsheet with my data in it here (right click, Save As) FIXED

If you have any questions about the study please leave a comment.


source:http://www.imjosh.com/?p=244


Search Engine For Coders to Launch

"According to Wired, 'Krugle' is set to next month. The search engine indexes programming code and documentation from open-source repositories like SourceForge, and includes corporate sites for programmers like the Sun Developer Network. The index will contain between 3 and 5 terabytes of code by the time the engine launches in March. According to article, Krugle also contains intelligence to help it parse code and to differentiate programming languages, so a PHP developer could search for a website-registration system written in PHP simply by typing 'PHP registration system.'" Update: 02/17 21:04 GMT by Z : Summary edited for accuracy.

source:http://developers.slashdot.org/article.pl?sid=06/02/17/2027211
