Monday, July 04, 2005

Man convicted for chipping Xbox

The Xbox was fitted with a 200GB hard drive packed with games
A 22-year-old man has become the first person in the UK to be convicted for modifying a video games console.

The Cambridge graduate was sentenced at Caerphilly Magistrates Court in Wales to 140 hours of community service.

The man had been selling modified Xbox consoles, which he fitted with a 200GB hard drive containing 80 games.

"This case sets a major precedent which marks a milestone in the fight against piracy," said games industry spokesman Michael Rawlinson.

Bypass controls

The conviction is the first of its kind in the UK, where the modification of video games consoles has been an illegal practice since October 2003, when the UK enacted the EU Copyright Directive.

Under that directive, it is illegal to circumvent copy protection systems.

Consoles such as the Xbox and PlayStation 2 can be modified by chips that are soldered to a console's main circuit board to bypass copyright controls.

The chips allow people to play games purchased legitimately in other countries, as well as running backup copies or bootleg discs.

In this case, the man was tracked down by an investigator working for the UK games industry trade body, the Entertainment & Leisure Software Publishers Association (Elspa).

The man, who has not yet been named, was selling modified Xbox consoles, fitted with a 200GB hard drive and 80 pre-installed games, via his website for £380.

Elspa informed Caerphilly County Borough Council Trading Standards and Gwent Constabulary, as well as helping to collect forensic evidence used by the prosecution.

The man was sentenced to 140 hours of community service and ordered to pay £750 in costs. The court also seized his equipment - three PCs, two printers, three Xbox consoles and 38 hard drives.

"It sends a clear message to anyone tempted to become involved in chipping consoles that this is a criminal offence and will be dealt with accordingly," said Mr Rawlinson, deputy director general of Elspa.

"The modification of consoles is an activity that Elspa's anti-piracy team is prioritising. It is encouraging to see the UK courts do the same."

In July last year, Sony won a legal battle to ban the selling of mod chips in the UK.

source:http://news.bbc.co.uk/2/hi/technology/4650225.stm


Harvesting and reusing idle compute cycles

How United Devices Grid MP helps this happen at the UT Grid project

Level: Introductory

Ashok Adiga (adiga@tacc.utexas.edu), Research Scientist, Texas Advanced Computing Center
Nina Wilner (new_nina@tacc.utexas.edu), Grid IT Architect, IBM

28 Jun 2005

More on the University of Texas grid project's mission to integrate numerous, diverse resources into a comprehensive campus cyber-infrastructure for research and education. In this article, the authors examine the idea of harvesting unused cycles from compute resources to provide this aggregate power for compute-intensive work. They will also place this concept in context by offering an overview of a popular commercial software package designed to help achieve this task: the United Devices Grid MP platform.

Several early grid computing projects were focused on the idea of harvesting unused cycles from compute resources and providing this aggregated computing power for work that comprised lots of tasks -- from hundreds to millions -- that could be executed individually.

Today, there are several commercial and open source grid computing software packages that support this form of distributed computing on the desktop or other nondedicated computing resources. In this article, we will take a look at a popular commercial software package designed to help execute this function: the United Devices Grid MP platform.

Grid MP has several interesting and unique features. We will provide an overview of those designed for harvesting idle cycles from nondedicated resources, and we'll describe the types of applications that can effectively use the type of "desktop grid" we're discussing.

Introducing Grid MP
The United Devices Grid MP platform is a commercially available package that can be used to access cycles on large numbers of desktop PCs and workstations, thereby providing a large-scale computing platform suitable for certain classes of high-performance computations. The motivations for using a desktop grid are twofold: to get more value from computing hardware an organization already owns, and to aggregate computing power on a scale that would otherwise be too expensive to provision.

The latter motivation was the driving force behind many successful projects, such as SETI@home and Grid.org, which solved very large compute problems that would not have been feasible on existing cluster resources due to the cost of the required compute cycles.

Currently, Grid MP is used as a component of the UT Grid campus grid project at the University of Texas at Austin. Although the Grid MP platform supports dedicated resources, such as high-end servers and clusters, the UT Grid deployment currently includes only desktop and workstation resources, typically located in student laboratories or in faculty/staff offices. In this article, we will focus only on these nondedicated types of resources and on features in the platform that make it suitable for this purpose.

The Grid MP platform has several key features relevant to this purpose, which we describe in the sections that follow.

Grid MP provides users with several options for running applications. To run a serial batch job, users can use a command-line submission utility that executes on the local desktop and submits the job to the Grid MP server for execution on a remote desktop. Grid MP also provides support for batch parallel jobs (MPI jobs, for example), which can be run across machines on the desktop grid. (MPI, the Message Passing Interface, is designed for high performance on massively parallel machines and on workstation clusters.) Since desktops are usually not interconnected with high-speed switches, as is the case with typical clusters, only latency-tolerant MPI jobs can effectively be run on the platform.

In Part 1 of the series
Part 1 of this series, Grid in action, titled "Developing a wide grid," introduced the vision of the University of Texas grid project: to integrate the numerous and diverse computational, visualization, storage, data, and instrument and device resources at the University of Texas into a comprehensive campus cyber-infrastructure for research and education.

It covered the goals of the project, outlined the deployment layout, and detailed key conceptual requirements and software components. It also explained the choice of the first target discipline and target services to integrate into the grid, showed that timely access to resources is the prime benefit grid confers on users, and laid out a roadmap for integrating more disciplines and services into the campus-wide grid.

The most effective use of the Grid MP system is obtained when running data parallel applications across thousands of desktops on the grid. (Data-parallelism can be generally defined as the concurrent application of a computation on all items in a collection of data, yielding a degree of parallelism that typically scales with problem size.) The platform supports coarse-grained parallelism for large jobs that can be decomposed into several independent pieces, which are then scheduled collectively by the Grid MP server to run on desktops that meet the minimum specified resource requirements. Enabling data-parallel applications on Grid MP typically requires the development of application scripts to manage the decomposition of the problem into independently schedulable jobs, then managing the subsequent merging of the independent results to create a single result. Application developers can register applications to be hosted on Grid MP and create application services that can then be used repeatedly by application users.
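
To make this decompose-and-merge pattern concrete, the following minimal Java sketch shows the shape of the logic an application script implements. It is purely illustrative: it does not call the Grid MP API, the "remote" execution of each work unit is simulated by a local method call, and all names are our own.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the decompose/run/merge pattern used by
// data-parallel application scripts. A real script would submit each
// work unit to the Grid MP server through MGSI instead of running it
// in-process.
public class SplitMergeSketch {

    // Decompose a large input range into independently runnable work units.
    static List<int[]> decompose(int totalItems, int unitSize) {
        List<int[]> workUnits = new ArrayList<>();
        for (int start = 0; start < totalItems; start += unitSize) {
            workUnits.add(new int[] {start, Math.min(start + unitSize, totalItems)});
        }
        return workUnits;
    }

    // Stand-in for the remote execution of one work unit (here: a sum).
    static long runWorkUnit(int[] unit) {
        long sum = 0;
        for (int i = unit[0]; i < unit[1]; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        List<Long> partials = new ArrayList<>();
        for (int[] unit : decompose(1000000, 100000)) {
            partials.add(runWorkUnit(unit)); // would run on a remote desktop
        }
        long total = 0;
        for (long p : partials) {
            total += p;                      // merge step: combine partial results
        }
        System.out.println("Merged result: " + total);
    }
}

Because the work units share no state, they can be dispatched to any mix of desktops and completed in any order, which is exactly what makes this class of application a good fit for a desktop grid.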

A full description of the installation and configuration of Grid MP is beyond the scope of this article; this is typically done by United Devices as part of its services. However, we will offer some configuration notes that can be useful when setting up an installation.

Let's look at the Grid MP architecture.

Grid MP architecture
The Grid MP system consists of a set of servers providing grid services for administrators, developers, and users, and a collection of compute resources consisting of desktops, workstations, high-end servers, and clusters. Figure 1 provides an overview of the Grid MP architecture.

Figure 1. The United Devices Grid MP architecture

The Grid MP servers provide services for managing resources like compute devices, data, applications, and user workload. Access to these services is provided through a Web services interface called the MP Grid Services Interface (MGSI). Complete documentation of this interface can be found in the MGSI reference guide distributed with the Grid MP Software Developer's Kit. The Grid MP servers are responsible for managing these resources and for scheduling and dispatching user workload onto them.

Grid MP resources run a lightweight MP Agent that provides controlled access to the desktop while enforcing administrator-specified usage policies. The agent runs applications received from the Grid MP dispatch services in a nonintrusive, secure sandbox environment. The current version of Grid MP supports Linux®, AIX, Windows®, Mac, and Solaris clients.

The platform provides several options for accessing and managing Grid MP resources. Grid MP has a browser-based Web console that can be used by administrators, developers, and application users to manage all Grid MP resources. Additionally, the architecture supports the deployment of application services and command-line tools that can provide customized functionality for grid users.

Desktop resources can be grouped, and individual groups can be managed by local administrators, who can set usage policies governing when and how the MP Agent may use each machine's resources.

Application services and command-line tools can also be provided to enable application users to submit jobs.

Grid MP has several features to support the efficient execution of large data-parallel applications hosted on the Grid MP servers, including application and data caching at desktop nodes, and affinity scheduling to utilize cached files and reduce network traffic overhead. (In this context, affinity scheduling is the process by which the scheduler uses information about previous executions of a computation on a resource and attempts to schedule new computations on the resource that can reuse some of the data or executable code.) Hosted applications can also include the specification of executables for heterogeneous architectures (like Windows, Linux, or Mac). If multiple executables are registered, it is possible that different components of a data-parallel application could be executed on different architectures.
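
To illustrate what affinity scheduling buys, here is a toy Java sketch. It is our own illustration, not the Grid MP scheduler's actual logic: it simply picks, from a set of candidate devices, the one whose cache already holds the most files that a work unit needs, so that the least data has to move.

import java.util.Map;
import java.util.Set;

// Toy affinity scheduler: prefer the device that already caches the
// most files required by a work unit. All names are illustrative.
public class AffinitySketch {

    static String pickDevice(Set<String> requiredFiles,
                             Map<String, Set<String>> deviceCaches) {
        String best = null;
        int bestHits = -1;
        for (Map.Entry<String, Set<String>> entry : deviceCaches.entrySet()) {
            int hits = 0;
            for (String file : requiredFiles) {
                if (entry.getValue().contains(file)) {
                    hits++; // one less file to transfer to this device
                }
            }
            if (hits > bestHits) {
                bestHits = hits;
                best = entry.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> caches = Map.of(
                "lab-pc-01", Set.of("input.dat", "app.exe"),
                "lab-pc-02", Set.of("app.exe"));
        // Prints lab-pc-01: it already holds both required files.
        System.out.println(pickDevice(Set.of("input.dat", "app.exe"), caches));
    }
}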

Grid MP client resources are nondedicated, unreliable resources since they can be shut off by the desktop owner at any time. The system, therefore, has the ability to detect client failures and reschedule jobs running on a resource when a failure occurs.

Running an application on Grid MP
To submit applications from a user's desktop, the Grid MP Software Developer's Kit (SDK) must be downloaded and installed, preferably in a location that can be shared by all users on the desktop machine. The SDK is a package containing documentation, libraries, and tools to help develop and test applications to run on the Grid MP platform.

The user must create a file called .uduserconf in their home directory containing information about the local Grid MP installation and the SDK. A sample of this file is distributed with the SDK as uduserconf.sample (shown in Listing 1).

Listing 1. Sample user configuration file .uduserconf


MGSI_FILESVR_URL = https://frio-file-svc.tacc.utexas.edu:443/mgsi/filesvr.cgi
MGSI_XMLRPC_URL = https://frio-web-svc.tacc.utexas.edu:443/mgsi/rpc_xmlrpc.fcgi
MGSI_SOAP_URL = https://frio-web-svc.tacc.utexas.edu:443/mgsi/rpc_soap.fcgi
MGSI_USERNAME =
MGSI_PASSWORD =

BUILDMODULE_PATH = /usr/local/UDsdk_v4.1/tools/build/buildmodule
BUILDPACKAGE_PATH = /usr/local/UDsdk_v4.1/tools/build/buildpkg
LOADER_PATH = /usr/local/UDsdk_v4.1/tools/build/loader
MPBATCH_PATH = /usr/local/UDsdk_v4.1/tools/mpbatch

MPI_LISTEN_PORT = 12345-12360

The MGSI_URL parameters specify the Grid MP services that will be used to submit jobs; the specific URLs can be obtained from the local Grid MP administrator. The configuration file also contains login and password information, as well as path variables pointing to utilities in the SDK package. The user can choose to leave the password unspecified in the configuration file and instead supply it each time they use an application utility.

Application support
The platform supports three types of user jobs: serial batch jobs, parallel MPI jobs, and data-parallel applications.

Batch and MPI jobs can be run on the Grid MP platform without requiring code changes. Data-parallel applications require the creation of application scripts to manage the division of the large problem into independent parts that can be submitted to the Grid MP system for parallel execution, as well as to retrieve and merge the results from the parallel executions to obtain a single result.

As with any batch system, the overhead involved in moving data and executables to and from a remote resource must be considered when evaluating the suitability of an application for the platform. Especially in the case of data-parallel applications -- which typically consist of thousands of independently schedulable pieces of work, or work units -- the benefit of running the work units concurrently must be compared with the overhead needed to move required data to the remote desktops. The ideal data-parallel applications are those that are compute-intensive, but have relatively small input and output data files.
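
A back-of-envelope model makes this trade-off concrete. The numbers in the Java sketch below are assumptions of our own, not measurements; the point is simply that a work unit pays off only to the extent that compute time dominates transfer time.

// Simple distribution-efficiency model (our own simplification):
// a work unit is worth distributing only if compute time dominates
// the time spent moving its data.
public class OverheadSketch {
    public static void main(String[] args) {
        double computeSeconds = 3600;  // assumed CPU time per work unit
        double dataMegabytes = 10;     // assumed input + output data size
        double linkMBps = 1.0;         // assumed effective network rate
        double transferSeconds = dataMegabytes / linkMBps;
        double efficiency = computeSeconds / (computeSeconds + transferSeconds);
        // Prints about 99.7%; a 10-second work unit with the same data
        // would drop to 50%, making distribution hardly worthwhile.
        System.out.printf("Efficiency: %.1f%%%n", efficiency * 100);
    }
}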

Running batch jobs
To run a batch job, you use the mpsub command, which can be found in the SDK in the tools/mpbatch directory. Several flavors of mpsub are distributed in the SDK for the supported platforms, along with a Perl version that can be used on any desktop that has Perl installed.

Documentation on the usage of mpsub is available in the Applications Users QuickGuide distributed with the SDK or by using the -help option included with mpsub. This is an example of a batch submission:

mpsub -input file1 -output stdout -block myprogname

The Grid MP system would forward the executable file myprogname and the input file file1 to a remote machine, where the executable would be invoked with the supplied command-line arguments. The mpsub command would block and wait until the remote execution completes, and the resulting standard output (stdout) would then be returned to the submitting desktop. If the mpsub command is run asynchronously (without the -block option), it returns a job ID which can then be used with the mpresult utility to retrieve the results once the remote execution has completed.
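
For example, an asynchronous session might look something like this. Only the mpsub options shown above are taken from the SDK documentation; the job ID format and the mpresult invocation syntax here are our assumptions, so check the Applications Users QuickGuide for the exact usage:

mpsub -input file1 -output stdout myprogname   (prints a job ID, say 12345)
mpresult 12345                                 (retrieves the results later)

Here mpsub returns immediately with a job ID, and mpresult later retrieves the job's standard output once the remote execution has finished.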

Users can submit MPI jobs using ud_mpirun or mpsub. ud_mpirun is a customized mpirun created for Grid MP; it currently supports MPICH v1.2.5. For MPI applications that have been compiled against other MPI implementations (such as LAM/MPI), users should use mpsub.

Running data-parallel jobs
To set up an application as a data-parallel application, the application developer needs to create application scripts that split the problem into independent work units and merge their results, and to register the application with the Grid MP servers so that it can be hosted for repeated use.

The typical usage model for data-parallel applications is for an application developer to develop the scripts and create a "hosted" application on Grid MP that can then be made available to application users. Since the MGSI interface is implemented as a Web service interface, the application scripts can be developed using any language for which a SOAP client library exists.

For an application to be a suitable candidate for data-parallel execution on the Grid MP platform, it should have certain properties. It should be decomposable into parts that can be executed independently of each other, with each part having a high compute-to-communication ratio. The overhead involved in moving the executable and the input and output data to a remote desktop machine should be offset by a relatively long computation time.

Several common application types have these properties and are well suited to this data-parallel approach, notably those that apply the same computation independently across many inputs.

The Java™ code snippet in Listing 2 is part of an application script that creates a job on the Grid MP system. The job descriptor is first updated with values for the application ID, state, and priority. This is followed by an MGSI call, createJob, which causes a new job to be created on the Grid MP system and returns the ID of the new job.

Listing 2. Sample Java code to submit a job to Grid MP


// Create a job from the job descriptor
job.setApplication_Gid(appGID); // ID of the registered (hosted) application
job.setState_Id(1);             // Set job state to active
job.setPriority(10);            // Low priority
// createJob submits the descriptor via MGSI and returns the new job's ID
String jID = udmSession.getUdMgsi().createJob(udmSession.getAuthkey(), job);

The SDK contains application script examples in C++, Perl, and the Java language. In general, these sample scripts can be reused with modifications to create application scripts for other applications.

In conclusion
In this article, we've provided an overview of the United Devices Grid MP platform as a tool that supports computing efforts using otherwise idle cycles on nondedicated resources. Although the platform supports batch serial jobs and MPI jobs through a simple command-line interface, the most suitable applications are data-parallel ones that can be decomposed into many related but independent tasks to be run concurrently on the client machines.

We've also offered criteria for choosing applications that will benefit the most from using this type of setup, since this usage model requires the creation of application scripts to split the job into the independent parts and to recombine the results of the parts. It is beneficial to select applications that will be reused frequently by application users once set up for the Grid MP environment. Once the application scripts have been developed, the data-parallel applications can be hosted in the Grid MP system, making usage much easier.

Grid MP provides administration tools that make it easy for individual organizations or departments within an enterprise to configure and set policies for local desktop machines. The security and sandbox features, combined with the minimal resources required to support the platform, make Grid MP an attractive platform for enterprises and universities alike, helping both get more out of their IT investments.

In our next installment, we'll provide a basic overview of grid meta-schedulers - systems that balance workload across a collection of resource managers, effectively creating a cluster of clusters or a hierarchy of schedulers. The grid meta-scheduler coordinates communication among multiple resource managers, applies business policies, and enforces service-level agreements, while each resource manager efficiently manages jobs on the local resources under its control. We'll also introduce some considerations for selecting an appropriate meta-scheduler for your project.

About the authors
Ashok Adiga is a research scientist in the Distributed and Grid Computing Group at the Texas Advanced Computing Center at the University of Texas at Austin. He is a developer for the UT Grid campus cyber-infrastructure project, and has been responsible for deployment and support of a campus desktop grid at the university. He is involved in several software projects that develop and deploy grid middleware.


Nina Wilner is a grid IT architect at IBM in Austin. She is currently working with the University of Texas at Austin to develop a campus cyber-infrastructure project, the UT Grid. She has been with IBM since 1987, when she started working for IBM in Munich, Germany. She has a post-graduate degree in mathematics. Other focus areas are 3-D graphics, 2-D graphics and GUIs, networking and distributed computing, pSeries® and AIX®, as well as life sciences.


source:http://www-128.ibm.com/developerworks/grid/library/gr-harvest/?ca=dgr-lnxw01HarvestingGrid


Graphics in Science

Slashdot | Graphics in Science: "Monday July 04, @09:30AM
'Nature has an interesting nugget about the second meeting of the Image and Meaning Initiative which was held at the Getty Museum in Los Angeles. It is about the use of graphics in presenting scientific data. I am also a big advocate of using nice graphics in scientific presentations, but I also agree with Felice Frankel, the founder of I-M, that not all images are meaningful scientifically. In fact, one encounters (and I am ashamed to admit that I have published) images that look nice but have no scientific import at all. One very cool Harvard physics professor, Eric Heller, produces wickedly beautiful (and meaningful) images of quantum mechanical models. These images have made the covers of Science and Nature, and are featured in his online art gallery, which was reviewed in the New York Times in 2002.' And of course, any mention of graphic information should not go by without a big shout out to Edward Tufte"

In Silicon Valley: Profits up. Employment Down.

Slashdot | In Silicon Valley: Profits up. Employment Down.: "Monday July 04, @08:45AM
'The New York Times (free yada yada) has an interesting report on the changing landscape of Silicon Valley tech companies: Profits are soaring but employment figures are not. This dynamic points to significant future shifts down the road for Silicon Valley companies like Electronic Arts and Cisco. Interestingly, the culprit isn't just outsourcing. Huge leaps in worker productivity and automated processes are also responsible for the decreased need for new labor.'"

Owner of the Word Stealth 'Protecting' Rights

Slashdot | Owner of the Word Stealth 'Protecting' Rights: "Monday July 04, @12:48AM
'Just when you thought ownership of intellectual property couldn't get any more absurd: The New York Times is reporting that the word 'Stealth' is being vigorously protected *in all uses* by a man who claims to exclusively own its rights. Not only has he gone head to head with Northrop Grumman, he has pursued it vigorously in the courts and has even managed to shut down 'stealthisemail.com' (Steal This Email.com) because the URL coincidentally contains the word 'stealth'. What's terrifying is that he's gotten as far as he has.'"

Japan monitors 'volcanic' steam

An undersea volcano last erupted in the area in 1986
An underwater volcano is thought to be behind a huge column of steam above the Pacific Ocean, Japanese officials say.

The coast guard sent helicopters to monitor the 1,000m (3,280ft) cloud, 1,120km (700 miles) south-east of Tokyo, and warned ships to stay away.

The team said the area around the site appeared to be red.

"It's highly likely that it's caused by an eruption of an underwater volcano," coast guard spokesman Shigeyuki Sato said, adding it had happened before.

Japanese troops stationed on the island of Iwo Jima first noticed the cloud of steam on Saturday.

Television footage showed white smoke billowing into the sky from the brick-red water.

"We suspect the undersea volcanic moves are becoming active," said another coast guard official.

Further investigations will continue on Monday.

An undersea volcano in the area last erupted in 1986, for three days.

source:http://news.bbc.co.uk/2/hi/asia-pacific/4646185.stm


Forget cameras - spy device will cut drivers’ speed by satellite


IT IS the ultimate back seat driver. Motorists face having their cars fitted with a “spy” device that stops speeding.

The satellite-based system will monitor the speed limit and apply the brakes or cut out the accelerator if the driver tries to exceed it. A government-funded trial has concluded that the scheme promotes safer driving.



Drivers in London could be among the first to have the “speed spy” devices fitted. They would be offered a discount on the congestion charge if they use the system.

The move follows a six-month trial in Leeds using 20 modified Skoda Fabias, which found that volunteer drivers paid more attention as well as keeping to the speed limit. More than 1,000 lives a year could be saved if the system was fitted to all Britain’s cars, say academics at Leeds University, who ran the trial on behalf of the Department for Transport (DfT).

It is part of a two-year research project into “intelligent speed adaptation” (ISA), which the department is funding at a cost of £2m. Results of the initial trial will be presented to ministers this week.

A study commissioned by London’s transport planners has recommended that motorists who install it should be rewarded with a discount on the congestion charge, which tomorrow rises to £8 a day.

The trial Skodas were fitted with a black box containing a digital map identifying the speed limits of every stretch of road in Leeds. A satellite positioning system tracked the cars’ locations.

The device compared the car’s speed with the local limit — displayed on the dashboard — and sent a signal to the accelerator or brake pedal to slow if it was too fast. The system can be overridden to avoid a hazard.

“The trials have been incredibly successful,” said Oliver Carsten, project leader and professor of transport safety at Leeds University.

The DfT says it has no plans to make speed limiters mandatory but admits that it is considering creating a digital map of all Britain’s roads which would pave the way for a national ISA system.

Edmund King, of the RAC Foundation, said limiters might make motorists less alert: “If you take too much control away the driver could switch on to autopilot.”

source: http://www.timesonline.co.uk/article/0,,2087-1678707,00.html


Windows Software: Ugly, Boring & Uninspired

Dialogue Box by Chris Pirillo
Chris Pirillo (lockergnome.com) is stuck on dial-up this week, so his byline might be ad(A*Y!~ NO CARRIER / ATDT 8675309 / CONNECT 2400 / Sorry about that. There's a lot of line noise around here, so please bear with us while we work on the pP(*R#@ NO CARRIER / ATDT 8675309 / CONNECT 1200 / Dangit! While we work on the problem, you might just read Chris's ramblings online at chris.pirillo.com,or listen to his show at TheChrisPirilloShow.com. By the time our techni*(!#DJ NO CARRIER

Thanks to my new PSP, I don’t think I’ll ever finish this column. If you own Sony’s new gaming device, you understand why; it’s that addictive. A few days ago, I sold three of my game consoles just so I could purchase a PlayStation Portable. I don’t believe in gadget overkill, so I try to keep my shelves lean. As a general rule, I dislike Sony’s empire; it is all about creating unnecessarily proprietary formats for no reason other than to force users to stay within its brand. I don’t swallow that tripe, and you shouldn’t either. However, Sony broke the mold with the PSP, making everything else seem antiquated in the blink of a cursor.

I was willing to trade in my GameCube, PS2, Nintendo DS, and a handful of games for a single handheld experience. Why bother sticking with systems that seemed to be only collecting dust? Seldom do I find myself sitting in front of the television for any reason, so I’ve never been too hip on being tethered to a TV. The DS was a wonderful diversion, but I found myself unimpressed, bored rather, with its dated graphics engine. The PSP (and its growing set of titles) has been a breath of fresh air, drawing me into pixel seclusion at any given moment.

Too bad I can’t say the same about the Windows platform, eh?

Software for Windows is generally uninspired, generically cloned, and overwhelmingly riddled with lackluster (read: lousy) user interfaces. There’s too much coal and not enough diamonds within the sphere of downloads. The greatest pieces of software are plagued by unintelligent design, and very few rise to the level of ubiquity. Windows users don’t have a strong sense of belonging; there’s no user community rallying around the platform. We use the computer, certainly, or is the computer using us?

A few months ago, Apple made a move to win over the minds, hearts, and fingers of the world’s most curious users. Since iPod mania has stretched well beyond the confines of OS X, getting geeks to spend another $500 or so on a completely different experience was wise. It’s not likely to attract a noticeable number of folks, but enough to make a difference in mindshare. I already see it happening. The “coolest” software today seems only to have been developed for OS X. Prove me wrong, CPU readers.

One application that typifies the creative elegance that you can find on systems outside of Windows is Comic Life from Plasq (plasq.com). Be forewarned: It’s likely to drive even the most die-hard Windows user to switch to OS X. It runs well, looks amazing, and does something so incredibly unique you’ll find yourself wanting to take more digital pictures just to make another comic strip out of ’em. Yes, Comic Life turns your images into comics, and anybody can use it, as there’s virtually no learning curve. Plasq is planning on developing a Win32 binary but is hard-pressed for an able developer at the moment (no surprise). That hasn’t led me to keep it a secret, though; you need to know what you’re missing because you’re missing a lot.

With Apple’s release of Tiger, widgets--desktop applets that each serve one purpose--have jumped to the forefront of everybody’s imagination. Why? Because they look slicker than snot! Windows users might argue that Stardock (stardock.com) has had DesktopX for years. That’s true, but 99% of its existing widgets look absolutely horrendous. At that point, what’s the point? Again, we come back to the concept that Windows software developers rarely develop any kind of pleasant UI. There may be hope with Kapsules (kapsules.shellscape.org), although it suffers from a lack of usable widgets. Konfabulator (www.konfabulator.com) has an OS X and Windows version of its rendering engine with an extensive collection of sweet-smelling widgets, but each one sucks up an insane amount of system resources (making the utility completely unusable for an extended period of time). At least it’s cross-platform.

Getting back to the PSP, I can only find one computer-based utility worth registering and using regularly. It’s PSPWare (www.nullriver.com/index/products/pspware). It’s not from Sony, and it only runs on OS X, though Nullriver plans on porting its fantastic front-end to Windows eventually. Wait a second. Wasn’t the shoe on the other foot just a few years ago? All the cool developers have moved to OS X, it seems. Hell must’ve frozen over?

You can dialogue with Chris at chris@cpumag.com.


source: http://www.computerpoweruser.com/editorial/article.asp?article=articles/archive/c0508/44c08/44c08.asp&did=844&aid=27197

Cloning In The Animal Kingdom

The New Scientist is carrying an interesting article on cloning in nature. From the article: "The ant Wasmannia auropunctata, which is native to Central and South America but has spread into the US and beyond, has opted for a unique stand-off in the battle of the sexes. Both queens and males reproduce by making genetically identical copies of themselves - so males and females seem to have entirely separate gene pools. Conventional reproduction happens only to produce workers. This is the first instance in the animal kingdom where males reproduce exclusively by cloning, though male honeybees do it occasionally." National Geographic is also carrying the story.

source:http://science.slashdot.org/article.pl?sid=05/07/03/1711204&tid=14

Gates Says Technology Will One Day Allow Computer Implants -- But Hardwiring's Not For Him

SINGAPORE (AP) -- Technological advances will one day allow computers to be implanted in the human body -- and could help the blind see and the deaf hear -- Bill Gates said Friday. But the Microsoft chairman says he's not ready to be hardwired.

"One of the guys that works at Microsoft ... always says to me 'I'm ready, plug me in,"' Gates said at a Microsoft seminar in Singapore. "I don't feel quite the same way. I'm happy to have the computer over there and I'm over here."

Meshing people directly with computers has been a science fiction subject for years, from downloading memories onto computer chips to replacement robotic limbs controlled by brain waves.

The fantasy is coming closer to reality as advances in technology mean computers are learning to interact with human characteristics such as voices, touch -- even smell.

Gates, whose Redmond, Washington-based company is spending more than US$6 billion (euro4.95 billion) on research and development this year to stay a world leader in software development, was asked at the seminar whether he thought computers would ever be implanted in the human brain.

He noted that cochlear implants and other medical implants were already being used to treat hearing problems and some conditions that cause constant pain, and were changing some people's lives dramatically.

Cochlear implants, which employ digital pulses that the brain interprets as sound, can help profoundly deaf people hear.

Advances were also being made on implants that can help fix eyesight problems, Gates said.

These types of technologies would continue to be improved and expanded, especially in areas where they would be "correcting deficiencies," he said.

"We will have those capabilities," Gates said.

He cited author Ray Kurzweil, whom he called the best at predicting the future of artificial intelligence, as believing that such computer-human links would become mainstream -- though probably not for several generations.

Gates also predicted that the keyboard won't be replaced by voice recognition software, and that the pen will make a comeback -- although without ink. The three would form the basic ways people will interact with their computers in the future, he said.

He said when computer pen technology -- scratching words onto a screen that a computer tries to read -- gets more sophisticated it will do things like let people draw musical notes and chemical signs, as well as recognize handwriting.

"Some people today underestimate the pen, because that recognition software is at an early stage," Gates said. "But it's on a very fast learning curve."

Speech would probably become the main way to input information on mobile devices, though Gates noted the huge popularity of mobile phone short message services - used almost fanatically across Asia.

"In some cases -- mobile phones -- speech will be the primary input (because) either the pen or the keyboard is a bit tough -- although a lot of young people are awfully good with that little keyboard," Gates said.

source:http://www.technologyreview.com/articles/05/07/ap/ap_070105.asp


The Grinch Who Patented Christmas

Slashdot | The Grinch Who Patented Christmas: "Sunday July 03, @10:32AM
from the in-the-future-everything-will-be-patented dept.
theodp writes 'The USPTO has reversed its earlier rejection and notified Amazon that the patent application for CEO Jeff Bezos' invention, Coordinating Delivery of a Gift, has been examined and is allowed for issuance as a patent. BTW, Amazon was represented before the USPTO by Perkins Coie, who also supplied Bezos with legal muscle in his personal fight against zoning laws that threatened to curb the size of his Medina mansion (reg.) before the City of Medina eventually gave up on regulating the size of homes (reg.).'"

Deep Impact on Comet Theory

July 4, 2005

MEDIA RELATIONS OFFICE
JET PROPULSION LABORATORY
CALIFORNIA INSTITUTE OF TECHNOLOGY
NATIONAL AERONAUTICS AND SPACE ADMINISTRATION
PASADENA, CALIF. 91109 TELEPHONE (818) 354-5011
http://www.jpl.nasa.gov

DC Agle (818) 393-9011
Jet Propulsion Laboratory, Pasadena, Calif.

Dolores Beasley (202) 358-1753
NASA Headquarters, Washington

Lee Tune (301) 405-4679
University of Maryland, College Park

RELEASE: 2005-109

DEEP IMPACT KICKS OFF FOURTH OF JULY WITH DEEP SPACE FIREWORKS

After 172 days and 431 million kilometers (268 million miles) of deep space stalking, Deep Impact successfully reached out and touched comet Tempel 1. The collision between the coffee table-sized impactor and city-sized comet occurred at 1:52 a.m. EDT.

"What a way to kick off America's Independence Day," said Deep Impact Project Manager Rick Grammier of NASA's Jet Propulsion Laboratory, Pasadena, Calif. "The challenges of this mission and teamwork that went into making it a success, should make all of us very proud."

"This mission is truly a smashing success," said Andy Dantzler, director of NASA's Solar System Division. "Tomorrow and in the days ahead we will know a lot more about the origins of our solar system."

Official word came five minutes after impact. At 1:57 a.m. EDT, an image from the spacecraft's medium-resolution camera, downlinked to the computer screens of the mission's science team, showed the tell-tale signs of a high-speed impact.

"The image clearly shows a spectacular impact," said Deep Impact principal investigator Dr. Michael A'Hearn of the University of Maryland, College Park. "With this much data we have a long night ahead of us, but that is what we were hoping for. There is so much here it is difficult to know where to begin."

The celestial collision and the ensuing data collection by the nearby Deep Impact mothership were the climax of a very active 24-hour period for the mission, which began with impactor release at 2:07 a.m. EDT on July 3. Deep-space maneuvers by the flyby spacecraft, final checkout of both spacecraft, and comet imaging took up most of the next 22 hours. Then the impactor got down to its last two hours of life.

"The impactor kicked into its autonomous navigation mode right on time," said Deep Impact navigator Shyam Bhaskaran, of JPL. "Our preliminary analysis indicates the three impactor targeting maneuvers occurred on time at 90, 35 and 12.5 minutes before impact."

At the moment the impactor was vaporizing itself in its 10 kilometers per second (6.3 miles per second) collision with comet Tempel 1, the Deep Impact flyby spacecraft was monitoring events from nearby. For the following 14 minutes, the flyby collected and downlinked data as the comet loomed ever closer. Then, as expected, at 2:05 a.m. EDT the flyby stopped collecting data and entered a defensive posture called shield mode, in which its dust shields protect the spacecraft's vital components during its closest passage through the comet's inner coma. Shield mode ended at 2:32 a.m. EDT when mission control re-established the link with the flyby spacecraft.

"The flyby surviving closest approach and shield mode has put the cap on an outstanding day," said Grammier. "Soon, we will begin the process of downlinking all the encounter information in one batch and hand it to the science team."

The goal of the Deep Impact mission is to provide a glimpse beneath the surface of a comet, where material from the solar system's formation remains relatively unchanged. Mission scientists expect the project will answer basic questions about the formation of the solar system, by offering a better look at the nature and composition of the frozen celestial travelers known as comets.

The University of Maryland is responsible for overall Deep Impact mission science, and project management is handled by JPL. The spacecraft was built for NASA by Ball Aerospace & Technologies Corporation, Boulder, Colo.

For information about Deep Impact on the Internet, visit:
http://www.nasa.gov/deepimpact



source: http://deepimpact.jpl.nasa.gov/press/050704jpl.html

Lost Newton manuscript rediscovered at Royal Society

2 Jul 2005

A collection of notes by Sir Isaac Newton, thought by experts to be lost forever, has recently been rediscovered during cataloguing at the Royal Society and goes on display to the public for the first time next week at the Royal Society's Summer Science Exhibition.

The notes concern alchemy, which some scientists in Newton's time believed to hold the secret of transforming base metals, such as lead, into the more precious metals gold or silver. Much of the text consists of Newton's notes on the work of another seventeenth-century alchemist, the Frenchman Pierre Jean Fabre. But one page of the notes presents a more intriguing prospect: it offers what may be Newton's own thoughts on alchemy, written almost entirely in English and in his own handwriting.

Although the notes were originally uncovered following Newton's death in 1727, they were never properly documented and were thought to be lost following their sale for £15 at an auction at Sotheby's in July 1936. During the cataloguing of the Royal Society's Miscellaneous Manuscripts Collection the notes were discovered and, with the help of Imperial College's Newton Project, were identified as being the papers which had disappeared nearly 70 years before.

The notes reflect a part of Newton's life which he kept hidden from public scrutiny during his lifetime, in part because the making of gold or silver was a felony and had been since a law was passed by Henry IV in 1404. Newton is famous for his revolutionary work in many areas including mathematics and the fields of optics, gravity and the laws of motion. However, throughout his career he, and other scientists of the time, many of whom were Fellows of the Royal Society, carried out extensive research into alchemy.

The text is written in English, but it is not easy to work out what Newton is actually saying. Alchemists were notorious for recording their methods and theories in symbolic language or code in order that others could not understand it. An excerpt demonstrates the elusive style of the writings:

"It is therefore no wonder that - in their advice lay before us the rule of nature in obtaining the great secret both for medicine & transmutation. And if I may have the liberty of expression give me leave to assert as my opinion that it is effectual in all the three kingdoms & from every species may be produced when the modus is rightly understood: only mineralls produce minerals & sic de calmis. But the hidden secret modus is Clissus Paracelsi wch is nothing else but the separation of the principles thris purification & reunion in a fusible & penetrating fixity."

Stephen Cox, Executive Secretary of the Royal Society, said: "Such an intriguing find highlights the sheer volume of fascinating materials contained in the Royal Society's library and archive. Our ongoing task is to ensure that the materials we hold are all identified and catalogued. This will allow historians and the public to fully access our great wealth of papers and artefacts from some of the most famous scientists in history. At the Summer Science Exhibition, alongside the many exhibits featuring the cutting-edge science of today, people can find displays throughout the building of the legacy that past Fellows have left behind, including these papers from Isaac Newton."

Dr John Young from the Newton Project said: "This is a hugely exciting find for Newton scholars and for historians of science in general. It provides vital evidence about the alchemical authors Newton was reading, and the alchemical theories he was investigating, in the last decades of the seventeenth century. The whereabouts of this document have been unknown since 1936 and it was a real thrill to see it preserved in the Royal Society's archives."


source:http://www.royalsociety.org/news.asp?id=3252

Entering a dark age of innovation

SURFING the web and making free internet phone calls on your Wi-Fi laptop, listening to your iPod on the way home, it often seems that, technologically speaking, we are enjoying a golden age. Human inventiveness is so finely honed, and the globalised technology industries so productive, that there appears to be an invention to cater for every modern whim.

But according to a new analysis, this view couldn't be more wrong: far from being in technological nirvana, we are fast approaching a new dark age. That, at least, is the conclusion of Jonathan Huebner, a physicist working at the Pentagon's Naval Air Warfare Center in China Lake, California. He says the rate of technological innovation reached a peak a century ago and has been declining ever since. And like the lookout on the Titanic who spotted the fateful iceberg, Huebner sees the end of innovation looming dead ahead. His study will be published in Technological Forecasting and Social Change.

It's an unfashionable view. Most futurologists say technology is developing at exponential rates. Moore's law, for example, foresaw chip densities (for which read speed and memory capacity) doubling every 18 months. And the chip makers have lived up to its predictions. Building on this, the less well-known Kurzweil's law says that these faster, smarter chips are leading to even faster growth in the power of computers. Developments in genome sequencing and nanoscale machinery are racing ahead too, and internet connectivity and telecommunications bandwidth are growing even faster than computer power, catalysing still further waves of innovation.

But Huebner is confident of his facts. He has long been struck by the fact that promised advances were not appearing as quickly as predicted. "I wondered if there was a reason for this," he says. "Perhaps there is a limit to what technology can achieve."

In an effort to find out, he plotted major innovations and scientific advances over time compared to world population, using the 7200 key innovations listed in a recently published book, The History of Science and Technology (Houghton Mifflin, 2004). The results surprised him.

Rather than growing exponentially, or even keeping pace with population growth, they peaked in 1873 and have been declining ever since. Next, he examined the number of patents granted in the US from 1790 to the present. When he plotted the number of US patents granted per decade divided by the country's population, he found the graph peaked in 1915.

The period between 1873 and 1915 was certainly an innovative one. For instance, it included the major patent-producing years of America's greatest inventor, Thomas Edison (1847-1931). Edison patented more than 1000 inventions, including the incandescent bulb, electricity generation and distribution grids, movie cameras and the phonograph.

Medieval future

Huebner draws some stark lessons from his analysis. The global rate of innovation today, which is running at seven "important technological developments" per billion people per year, matches the rate in 1600. Despite far higher standards of education and massive R&D funding, "it is more difficult now for people to develop new technology," Huebner says.

Extrapolating Huebner's global innovation curve just two decades into the future, the innovation rate plummets to medieval levels. "We are approaching the 'dark ages point', when the rate of innovation is the same as it was during the Dark Ages," Huebner says. "We'll reach that in 2024."

But today's much larger population means that the number of innovations per year will still be far higher than in medieval times. "I'm certainly not predicting that the dark ages will reoccur in 2024, if at all," he says. Nevertheless, the point at which an extrapolation of his global innovation curve hits zero suggests we have already made 85 per cent of the technologies that are economically feasible.

But why does he think this has happened? He likens the way technologies develop to a tree. "You have the trunk and major branches, covering major fields like transportation or the generation of energy," he says. "Right now we are filling out the minor branches and twigs and leaves. The major question is, are there any major branches left to discover? My feeling is we've discovered most of the major branches on the tree of technology."

But artificial intelligence expert Ray Kurzweil - who formulated the aforementioned law - thinks Huebner has got it all wrong. "He uses an arbitrary list of about 7000 events that have no basis as a measure of innovation. If one uses arbitrary measures, the results will not be meaningful."

Eric Drexler, who dreamed up some of the key ideas underlying nanotechnology, agrees. "A more direct and detailed way to quantify technology history is to track various capabilities, such as speed of transport, data-channel bandwidth, cost of computation," he says. "Some have followed exponential trends, some have not."

Drexler says nanotechnology alone will smash the barriers Huebner foresees, never mind other branches of technology. It's only a matter of time, he says, before nanoengineers will surpass what cells do, making possible atom-by-atom desktop manufacturing. "Although this result will require many years of research and development, no physical or economic obstacle blocks its achievement," he says. "The resulting advances seem well above the curve that Dr Huebner projects."

At the Acceleration Studies Foundation, a non-profit think tank in San Pedro, California, John Smart examines why technological change is progressing so fast. Looking at the growth of nanotechnology and artificial intelligence, Smart agrees with Kurzweil that we are rocketing toward a technological "singularity" - a point sometime between 2040 and 2080 where change is so blindingly fast that we just can't predict where it will go.

Smart also accepts Huebner's findings, but with a reservation. Innovation may seem to be slowing even as its real pace accelerates, he says, because it's slipping from human hands and so fading from human view. More and more, he says, progress takes place "under the hood" in the form of abstract computing processes. Huebner's analysis misses this entirely.

Take a modern car. "Think of the amount of computation - design, supply chain and process automation - that went into building it," Smart says. "Computations have become so incremental and abstract that we no longer see them as innovations. People are heading for a comfortable cocoon where the machines are doing the work and the innovating," he says. "But we're not measuring that very well."

Huebner disagrees. "It doesn't matter if it is humans or machines that are the source of innovation. If it isn't noticeable to the people who chronicle technological history then it is probably a minor event."

A middle path between Huebner's warning of an imminent end to tech progress, and Kurzweil and Smart's equally imminent encounter with a silicon singularity, has been staked out by Ted Modis, a Swiss physicist and futurologist.

Modis agrees with Huebner that an exponential rate of change cannot be sustained and his findings, like Huebner's, suggest that technological change will not increase forever. But rather than expecting innovation to plummet, Modis foresees a long, slow decline that mirrors technology's climb.

At the peak

"I see the world being presently at the peak of its rate of change and that there is ahead of us as much change as there is behind us," Modis says. "I don't subscribe to the continually exponential rate of growth, nor to an imminent drying up of innovation."

So who is right? The high-tech gurus who predict exponentially increasing change up to and through a blinding event horizon? Huebner, who foresees a looming collision with technology's limits? Or Modis, who expects a long, slow decline?

The impasse has parallels with cosmology during much of the 20th century, when theorists debated endlessly whether the universe would keep expanding, creep toward a steady state, or collapse. It took new and better measurements to break the log jam, leading to the surprising discovery that the rate of expansion is actually accelerating.

Perhaps it is significant that all the mutually exclusive techno-projections focus on exponential technological growth. Innovation theorist Ilkka Tuomi at the Institute for Prospective Technological Studies in Seville, Spain, says: "Exponential growth is very uncommon in the real world. It usually ends when it starts to matter." And it looks like it is starting to matter.

source:http://www.newscientist.com/article.ns?id=dn7616


Cassini's Got Pictures And Data

Slashdot | Cassini's Got Pictures And Data: "Saturday July 02, @08:46PM
'To celebrate the anniversary of the Cassini-Huygens probe's orbital insertion, NASA's JPL has a set of fifteen amazing photos from the past year. Meanwhile, the BBC reports that some of the latest science data from the mission reveals that Saturn's ring system has its own (thin) O2 atmosphere, and that the planet's rotation seems to be slowing!'"

Feds Won't Let Go of Internet DNS

Updated: The U.S. Department of Commerce says the agency will keep final control of the Internet's domain naming system, rather than handing it over to ICANN as previously planned.

Citing national security and business concerns, officials of the U.S. Department of Commerce Thursday said the federal agency will retain control of the Internet's domain naming system, rather than hand over complete responsibility to the international nonprofit currently running the operation.

The move could roil governments and international businesses, analysts warned, as well as bring potential Internet chaos from dueling domain-name systems.

In an address covering the agency's latest policies on broadband, wireless spectrum allocation and other national infrastructure matters, Assistant Secretary of the National Telecommunications and Information Administration Michael Gallagher brought up the matter of the Internet's DNS (Domain Name System) and the administration's relationship with ICANN, the Internet Corporation for Assigned Names and Numbers.

Gallagher offered a set of principles that are to guide the administration's actions, including which body or country would retain control of the DNS system.

"Given the Internet's importance to the world's economy, it is essential that the underlying DNS of the Internet remain stable and secure," Gallagher said.

"As such, the United States is committed to taking no action that would have the potential to adversely impact the effective and efficient operation of the DNS, and will therefore maintain its historic role in authorizing changes or modifications to the authoritative root zone file," he said.

ICANN offered few details about how the Commerce Department's decision will impact its role with the DNS.

"We're reviewing the statement," an ICANN spokesperson said in a statement. "We will continue to successfully fulfill the requirements of the [Memorandum of Understanding with the Commerce Department]."

The United States will continue to work with ICANN, Gallagher said. However, ICANN is the "appropriate technical manager of the Internet DNS," not the final word. The United States will "continue to provide oversight so that ICANN maintains its focus and meets its core technical mission," he said.

While the agency recognizes that other governments have "public policy and sovereignty concerns" relating to the Internet and domain services, Gallagher said, those interests should be focused on ccTLDs (country code top-level domains). He said that the United States is "committed to working with the international community to address these concerns, bearing in mind the fundamental need to ensure stability and security of the Internet's DNS."

This statement flies in the face of an MOU (memorandum of understanding) between the Department of Commerce and ICANN, which would have transferred control over to the international body in September 2006.

In an address to ICANN's Working Group for Internet Governance in mid-June, CEO Paul Twomey said the organization was on track for the handoff of control.

"To date we have completed all milestones on or before the time stipulated. We are confident that not only will the MOU be completed, but that by doing so ICANN will have passed important tests related to its independence, its democratic and transparent functioning, efficient management, effective decision-making process, and having well-described roles and relationships with all its stakeholders," he said.

"As to what will be the relationship between the U.S. Department of Commerce and ICANN after the completion of the MOU, let me be clear that ICANN does not speak on behalf of the United States Government," Twomey said in June. "That said, the roles of all governments, including that of the U.S. Government, are important, as they share the same interest as all ICANN's stakeholders, namely a stable and secure Internet."

Michael Froomkin, a law professor at the University of Miami School of Law and a critic of ICANN, views the Commerce Department's move more as an attempt to influence the upcoming meeting of the United Nations' World Summit on the Information Society (WSIS) than as a specific snub against ICANN. Through WSIS, the UN has been trying to play a larger role in Internet governance.

The federal government's decision also comes as ICANN prepares to open its next major meeting on July 11 in Luxembourg.

In a discussion with Ziff Davis Internet News, Froomkin said that whether the Commerce Department's move will cause an international uproar remains an open question. He views the decision as one that will maintain the status quo, where the U.S. government continues to exert limited oversight of ICANN.

"The real big question right now is how is the EU going to deal with this?" Froomkin said. "The EU has supported ICANN through thick and thin from the sense that it was the best way to get the U.S. [government] out of this."

Editor's Note: This story was updated to include information and comments from analysts. Matt Hicks contributed to this story.

source:http://www.eweek.com/article2/0,1895,1833928,00.asp


Attack of the $1 DVD's

Published: July 3, 2005

THE scientist in the 1959 horror film "The Killer Shrews" is not only mad but also cheap. Monstrously cheap. To solve the problem of world hunger, he tries to breed humans down to half their normal size. Rather than increase the food supply, he reasons, he will decrease demand. But his penny-pinching plans go awry, naturally, or unnaturally, creating a pack of giant, munchies-afflicted shrews.

"The shrews were actually hound dogs with fangs stuck to their heads and hairy rugs on their backs," recalled James Best, who portrayed the hero, Thorne Sherman. Mr. Best's love interest was played by Ingrid Goude, a former Miss Universe who was, he said "very well-endowed but not very well-paid; she got about 15 cents." Mr. Best, now 78, reckoned that that was about 35 cents less than the budget of the entire movie.

"The Killer Shrews," the masterwork of Ray Kellogg, is one of hundreds of cheap old films now available as ridiculously cheap new DVD's. Because of lapsed or improperly registered copyrights, even some very watchable movies - among them, Howard Hawks's "His Girl Friday," Marlon Brando's "One-Eyed Jacks" and Francis Ford Coppola's "Dementia 13" - are now in the public domain and can be sold by anyone.

While overall DVD sales are robust - last year retailers sold $15.5 billion in discs - the low-end market is positively booming. Recently, 19 of the 50 top sellers on the Nielsen VideoScan national sales charts were budget DVD's. "The prices are irresistible," said Gary Delfiner, whose Global Multimedia Corporation offers 60 film, cartoon and television titles with prices ranging from 99 cents to $1.99.

Global, based in Philadelphia, is one of a half-dozen major players in what's called the dollar DVD industry. Since starting up in September, the company said, it has shipped more than two million discs.

Sheathed in cardboard slipcases, they are distributed to some 15,000 99-cent stores around the country, as well as thousands of supermarkets, drugstore chains and, soon, lingerie shops. "An intimate apparel store is a great place to sell old romances," said Mr. Delfiner, whose catalog includes the 1939 Irene Dunne-Charles Boyer weepie "Love Affair" and the 1954 tearjerker "The Last Time I Saw Paris," with Elizabeth Taylor and Van Johnson.

How does Mr. Delfiner define his audience? "Anybody who's breathing and owns a DVD player," he said. "Nobody ever walked into a store looking to buy my product. It's the ultimate impulse buy."

Still, he has his standards. "I won't produce any title that's too obscure," Mr. Delfiner said. "Or any title that's not family-friendly." Or any title without sound. "Silent movies are for aficionados," he added. "They don't appeal to the masses, and I'm in a mass business."

The chief attraction of cheap DVD's is that they're, well, cheap. "On average, a family of four spends around $40 to see a movie at the neighborhood multiplex," said Don Rosenberg, publisher of Home Media Retailing magazine. "For that, you could buy 40 budget DVD's."

The very term "budget DVD" makes Mike Omansky bristle. "It brings up the image of schlock, which our product is not," said Mr. Omansky, the chief executive of Digiview Productions, a New Jersey company that supplies Wal-Mart with classics like "Bucket of Blood" and "The Beast of Yucca Flats." "McDonald's puts out a high-quality, low-priced hamburger. Our burgers are high quality, too, without the frills."

Of course, for a buck you don't expect frills. And mostly you don't get any: the vast majority of dollar DVD's start playing the moment they're loaded. Only the best-made low-end discs have cast biographies, on-screen menus and chapter stops. And only Global's have an option for Spanish subtitles.

"We commissioned the translations," Mr. Delfiner said. "There's a huge Hispanic market for this stuff."

In the cutthroat world of cut-rate DVD's, different labels often release the same titles. "Print and sound quality varies according to the source material," said Bill Lee, division manager for Westlake Entertainment in Los Angeles. Mr. Lee stocks 13 early Alfred Hitchcock films, from "Easy Virtue" (1928) to "Jamaica Inn" (1939). "We look for pristine masters," he said.

Global ensures that its masters are pristine by buying them from companies that specialize in film preservation. The digital videotapes are downloaded into a computer, where the data are compressed and encoded onto digital linear tapes; those tapes then go to a manufacturing plant called a replicator. Dollar DVD's vary according to "the quality of the master and the quality control of the replication house," Mr. Delfiner said.

"Replication house" sounds like something from one of Global's horror flicks, and there are indeed horrors to be found among the replicants. Some reissues of the original "House on Haunted Hill" are haunted by ghost images as well as ghosts. A dollar edition of "Fangs of the Living Dead" - the 1969 picture that hammered the final nail into the cinematic coffin of the bomb-shelter-era bombshell Anita Ekberg - is so murky that the film seems to have been shot through the bottom of an inkwell.

"You see a lot of quick-buck artists," said Brian Austin, president of PC Treasures of Detroit.

With so many companies happily cranking out the same old stuff, Digiview is phasing out of public domain and phasing into public humiliation; it recently licensed some of the mangiest mutts ever to have escaped Hollywood's kennels.

Digiview actually paid for the rights to "American Vampire," a kind of "Beach Blanket Beowulf" starring Carmen Electra of "Baywatch." Whether anyone will pay for the DVD is unclear. "Just because it's a dollar doesn't mean people want it," said Mr. Rosenberg, of Home Media Retailing. Indeed, a nickel might be too much for a DVD of Raymond Burr in "Bride of the Gorilla."

Mr. Rosenberg said the novelty of dollar DVD's would soon fade. "There's a great danger of overdistribution," he said. "This is a business without much room for profit - either in the making or the selling. A year from now, most cheap DVD's will be gone from stores."

One "Killer Shrews" alumnus hopes to cash in while the cheapness lasts. "This craze could build an audience for the sequel I'm writing," said Mr. Best, who played the stuttering Sheriff Rosco P. Coltrane on "The Dukes of Hazzard."

He has nearly finished a first draft of "Killer Shrews II." The plot is fiendishly simple. "I return to Shrew Island to rescue a bunch of teenagers," he reported. "A new mad scientist has turned herself into a human shrew that not only chews, but swims."

So what's the projected budget?

"This one's a little more expensive," Mr. Best said. "I could make it for, say, 75 cents."

source:http://www.nytimes.com/2005/07/03/movies/03lidz.html?ex=1278043200&en=277d25b3fc0fb730&ei=5088&partner=rssnyt&emc=rss


Deep Impact prepares for comet crash

Astronomers as far back as Aristotle have speculated about the sudden appearance of comets that blaze through the night sky.

Aristotle believed that comets erupted from the Earth into the heavens, while American astronomer Fred Whipple suggested in 1950 that comets resembled dirty snowballs composed of rocks, frozen water, and other compounds made up of carbon and hydrogen.

On Sunday evening, observers finally may have answers to some of these celestial questions. At 10:52 p.m. PDT, a NASA spacecraft the size of a wine barrel is scheduled to slam into the Tempel 1 comet at roughly 23,000 mph.

The impact, projected to leave a crater that could swallow a football stadium, is intended to scrape away part of Tempel 1's surface and expose what's underneath.

"We know that the crust--the outside shell of a comet and the stuff that comes off a comet--is changed by the solar wind," said Randii Wessen, a scientist at NASA's Jet Propulsion Laboratory. "One of the things that we're curious about is, some people will tell you that comets actually produce organic compounds...We want to see if that's inside."

A NASA spacecraft called Deep Impact, which launched in January, is scheduled to release an "impactor" module at 11:07 p.m. PDT on Friday. Barring technical glitches, the 820-pound module will collide with Tempel 1 while Deep Impact stays at a safe distance to take photographs and analyze the plume of dust and rocks.

Dozens of observatories will have their telescopes trained on the patch of sky where the rendezvous should happen--about 83 million miles away from Earth--and the impact should even be visible to some lucky earthbound observers.
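The figures quoted above are easy to sanity-check. The sketch below converts the article's numbers - an 820-pound impactor at roughly 23,000 mph, colliding about 83 million miles from Earth - into SI units to estimate the collision energy and the one-way radio delay. The TNT equivalence is a standard conversion, not a number from the article.

    # Sanity-checking the mission figures quoted above.
    POUND_KG = 0.4536        # kilograms per pound
    MPH_TO_MS = 0.44704      # (m/s) per mph
    MILE_M = 1609.34         # metres per mile
    LIGHT_SPEED = 2.998e8    # m/s
    TNT_TON_J = 4.184e9      # joules per ton of TNT (standard conversion)

    mass = 820 * POUND_KG        # ~372 kg impactor
    speed = 23_000 * MPH_TO_MS   # ~10.3 km/s closing speed
    distance = 83e6 * MILE_M     # ~1.3e11 m from Earth

    energy = 0.5 * mass * speed ** 2
    print(f"Impact energy: {energy:.2e} J "
          f"(~{energy / TNT_TON_J:.1f} tons of TNT)")
    print(f"One-way radio delay: {distance / LIGHT_SPEED / 60:.1f} minutes")

That works out to roughly 2 x 10^10 joules - a few tons of TNT - and a radio delay of about seven and a half minutes each way, which is why the impactor's final targeting had to run autonomously rather than under real-time control from the ground.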

The University of Maryland is offering tips on spotting the event, which will appear to take place in the Virgo constellation. U.S. watchers on the West Coast should be able to witness the impact, which can be seen with a telescope and perhaps glimpsed with the naked eye, at about 25 degrees above the horizon.

As Deep Impact closes in on the comet, its cameras have been sending photographs back to Earth that show the bulk of Tempel 1 drawing nearer. (A photo library is available on NASA's Web site.)

The cost of the mission to taxpayers is expected to be at least $267 million, not counting the price of the launch vehicle.

Government scientists say the price tag is worth it. "One, we'll learn about comets," said NASA's Wessen. "Two, we'll learn about how that applies to the Earth, whether it brought organic material to the Earth...We can even learn, if a comet was coming our way, what it would take to deflect one of those things."

If successful, the Deep Impact mission could help answer some of the questions that have been nagging astronomers ever since Edmond Halley used Newtonian mechanics to predict cometary orbits--including the once-every-75-years comet that bears his name today.

One popular theory is that comets, suspected to be about 50 percent water, were responsible for delivering much of Earth's water, which in turn led to the emergence of life on this planet. On the other hand, a comet is also believed to be the most likely source for the mass extinction of dinosaurs about 65 million years ago.

As Deep Impact's impactor prepares to head to its final destination, NASA is moving to allay fears about the mission resulting in the comet breaking apart and a chunk spiraling into the planet. In a statement, NASA mission scientist Don Yeomans said: "In the world of science, this is the astronomical equivalent of a 767 airliner running into a mosquito. It simply will not appreciably modify the comet's orbital path. Comet Tempel 1 poses no threat to Earth now or in the foreseeable future."
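Yeomans's airliner-and-mosquito analogy can be checked with the same numbers. The comet's mass is not given in the article; the sketch below assumes roughly 7 x 10^13 kg, in line with published estimates for Tempel 1, and generously supposes that all of the impactor's momentum transfers to the comet.

    # How much can a ~372 kg impactor deflect a comet? A sketch using
    # an assumed comet mass of about 7e13 kg (published estimates vary).
    IMPACTOR_MASS = 372.0      # kg, from the article's 820 lb
    IMPACT_SPEED = 10_282.0    # m/s, from the article's 23,000 mph
    COMET_MASS = 7e13          # kg, assumed -- not from the article

    # Best case: perfectly inelastic, all momentum transferred.
    delta_v = IMPACTOR_MASS * IMPACT_SPEED / COMET_MASS
    print(f"Comet velocity change: {delta_v:.1e} m/s")
    # Roughly 5e-8 m/s - tens of nanometres per second - which is why
    # the orbital path is not appreciably modified.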

source:http://news.com.com/Deep+Impact+prepares+for+comet+crash/2100-7337_3-5771841.html


Slashdot | Microsoft Serious About VoIP

Slashdot | Microsoft Serious About VoIP: "Saturday July 02, @04:21PM
'Microsoft, is quietly turning into a voice-over-IP powerhouse. It all started with the launch of its Microsoft Live Communication Server. Bill Gates says, 'Communicating in a better way has a huge impact for business,' and he states that he wants Microsoft to marry the PC, the cell phone and the desk phone. Recently, Microsoft teamed up with VoIP companies like Sylantro to offer hosted IP-PBX services, and now is rumored to have bought Teleo, a small VoIP company based in San Francisco. Microsoft's dominance on the desktop is helping the company extend its reach into the fast growing VoIP business, thus putting it in direct competition with the likes of Cisco. Teleo, for instance could help the company compete more effectively with the likes of Yahoo and Skype.'"

NASA Plans to Build Two New Shuttle-derived Launch Vehicles

Keith Cowing
Friday, July 1, 2005

According to a new NASA study, when America goes back to the moon and on to Mars it will do so with hardware that looks very familiar.

NASA has decided to build two new launch systems - both of which will draw upon existing Space Shuttle hardware. One vehicle will be a cargo-only heavy lifter, the other will be used to launch the Crew Exploration Vehicle.

The Plan

NASA has essentially completed its Exploration Systems Architecture Study - also known as the "60 day study". Briefings of the study’s conclusions and recommendations will be conducted by Doug Stanley. Stanley led this study team and will begin his briefings next Tuesday on Capitol Hill and with representatives from industry. While the final report will be released in mid-July, its conclusions are already making the rounds in Washington.

According to an internal memo, the study team focused on four primary areas:

  1. Complete assessment of the top-level Crew Exploration Vehicle (CEV) requirements and plans to enable the CEV to provide crew transport to the ISS, and to accelerate the development of the CEV and crew launch system to reduce the gap between Shuttle retirement and CEV IOC (initial operational capability).
  2. Definition of top-level requirements and configurations for crew and cargo launch systems to support the lunar and Mars exploration programs.
  3. Development of a reference lunar exploration architecture concept to support sustained human and robotic lunar exploration operations.
  4. Identification of key technologies required to enable and significantly enhance these reference exploration systems and reprioritization of near-term and far-term technology investments.

Something Old, Something New

According to sources familiar with the study's final recommendations, the heavy lifter will be a "stacked" or "in line" configuration (one stage atop another) and not a "side-mounted" configuration as is currently used to launch the space shuttle. The first stage will be a modified shuttle external tank with rocket engines mounted underneath. The first configuration will use six existing Space Shuttle Main Engines (SSME Block II).

A growth version for lifting heavier cargos will use three RS-68 engines. The RS-68 engines, manufactured by Boeing, are currently used in its Delta IV family of launch vehicles. Additional engines would be clustered for launching heavier loads such as those needed for Mars missions.

The second stage will have a liquid engine capable of restarting multiple times. The payload will sit atop this second stage inside a large aerodynamic payload shroud.

During the study, several shuttle-derived heavy launch vehicle options were considered. An old favorite, based on the so-called Shuttle-C design NASA developed in the late 1980s, would have replaced the shuttle orbiter with a payload canister that more or less replicated the existing orbiter's payload interfaces - sans the orbiter. Existing launch infrastructure would stay mostly the same. This configuration has its limitations in terms of the size of payload that could be launched, and it was rejected in favor of the in-line design, which has greater capacity for growth and performance.

The in-line option resembles the "Magnum booster" that was designed by NASA in the mid-1990s. This will be a rather immense vehicle more on the scale of a Saturn-V. It will require substantial modifications to the existing launch pads and payload handling facilities at the VAB.

The second vehicle to be pursued is based on a five-segment Solid Rocket Booster (SRB). Atop the SRB will be a new liquid-fueled upper stage and the CEV. While this vehicle is being developed for CEV launching, NASA Administrator Mike Griffin has spoken of a cargo version of the CEV as well - one on a scale somewhat greater than Russia's Progress cargo carrier and more in line with that offered by Europe's ATV and Japan's HTV. See this website sponsored by Alliant Techsystems for more information on how SRBs might be used in shuttle-derived launch systems for heavy lift, cargo, and CEV launching.

Looming Consequences

The long-term implications of this decision are not insignificant. The heavy lifter will be designed to streamline payload processing. As such, while much of what is done by the existing infrastructure and workforce at KSC will be similar to what is done for the Space Shuttle system, it will likely require a much smaller workforce. While members of Congress from the space states will be happy to hear of a new launch system - one that retains some existing infrastructure - they will not be happy to hear that jobs will be lost.

Soon after the announcement of the Vision for Space Exploration (VSE) by President Bush in early 2004, much speculation centered on the possible use of EELVs (Evolved Expendable Launch Vehicles) such as the Boeing Delta IV and the Lockheed Martin Atlas V to loft the CEV and perhaps other payloads associated with the VSE. With potential business shrinking for the two EELV launch systems, Boeing and Lockheed Martin formed a joint marketing endeavor, the United Launch Alliance, which would "combine the production, engineering, test and launch operations associated with U.S. government launches of Boeing Delta and Lockheed Martin Atlas rockets". With the decision to go with a shuttle-derived launch system, it would seem that a substantial market for EELVs has disappeared.

Of course, once this study is formally released, the next task facing NASA will be to demonstrate how it is going to pay for these new systems and accelerate the delivery of the CEV. Existing plans called for its availability no earlier than 2014 - four years after the Space Shuttle fleet is due to be retired - and Griffin has never spoken of any significant overlap between CEV and Shuttle operations. To accelerate the CEV, he has streamlined the so-called "spiral development" process that had been in place. But it will take more than streamlining to bring these new launch systems into operation.

Griffin has also found himself facing a Congressional constraint - from the Senate - which wants him to keep the Shuttle fleet operational until the CEV is online and flying. The concern is that the U.S. not have any gap in its independent ability to launch humans into space. Griffin is working toward a very firm date - 30 September 2010 - after which the space shuttle system will no longer be flying. He has stated his intention to narrow the gap between CEV availability and shuttle retirement considerably, but he has yet to claim that he will eliminate that gap. Whether his accelerated plans can produce an operational vehicle a scant five years from now, so as to placate Congress, remains to be seen.

Being smart about how NASA does things will only get Griffin so far. He is going to have to find more money to make all of this happen sooner than was the case when the plan was initially presented to the White House and then to Congress.

To make all the books balance, there will be considerable pressure to reprogram funds from other NASA programs. Griffin made it clear that he saw the development of these new launch systems as more important than the science that NASA had promised to do aboard the International Space Station for the past decade. In hearings earlier this week before the House Science Committee, Griffin said, "But I cannot responsibly prioritize microbiology and fundamental life science research higher than the need for the United States to have its own strategic access to space." Sen. Hutchison, who chairs the Senate subcommittee on Science and Space, has a somewhat different view and has introduced legislation that would bar Griffin from making such large cuts.

Griffin is also faced with congressional demands that he not cut aeronautics (as planned) and pledges he has made that he is not going to lay people off (as had been planned).

The only other option for Griffin is to go back to the White House with a request for additional funds. Given that NASA's increases in the past several years have been rather unprecedented, it is rather unlikely that any such request would result in additional funds - at least from this White House.

With the announcement of this new space architecture, no one can ever say that Mike Griffin is not serious about conceiving and building the systems needed to get humans back to the Moon and on to Mars ASAP. What remains to be seen is if everyone else in the approval chain agrees with the difficult choices that must be made in order for Griffin's plans to work.

source: http://www.spaceref.com/news/viewnews.html?id=1040

