Wednesday, March 15, 2006
Unconfirmed: Dell buys Alienware
An Alienware PR representative did not deny the claim but instead forwarded us a prewritten statement from the company that said: "At this time, Alienware will not comment on any speculative stories or rumors concerning Dell and Alienware's association. While we do believe that news stories like this are ultimately a strong positive reinforcement of the Alienware brand and the company's success, we will not comment on speculation or potential future events. As always, Alienware is committed to offering consumers and businesses with the best high-performance, innovative PC products on the market and we remain manically focused on that goal."
Dell did not return calls asking for comment.
Dell has taken a lot of flak in high-end system reviews over the past year or so due to its exclusive use of Intel processors. Alienware, however, currently sells AMD-based systems through its Aurora line of desktops. Owning a subsidiary that sells AMD-based systems could be an easier path for Dell toward reclaiming the performance crown, rather than incorporating Athlons or Semprons into its current Intel-exclusive assembly line. We'll presumably know more when the deal is officially announced, or whenever we learn the specifics of upcoming products.
source:http://reviews.cnet.com/4531-10921_7-6464030.html
Pa. seizes paper's computer hard disks
In an unusual and little-known case, the Pennsylvania Attorney General's Office has seized four computer hard drives from a Lancaster newspaper as part of a statewide grand-jury investigation into leaks to reporters.
The dispute pits the government's desire to solve an alleged felony - computer hacking - against the news media's fear that taking the computers circumvents the First Amendment and the state Shield Law.
The state Supreme Court declined last week to take the case, allowing agents to begin analyzing the data.
"This is horrifying, an editor's worst nightmare," said Lucy Dalglish, executive director of the Reporters Committee for Freedom of the Press in Washington. "For the government to actually physically have those hard drives from a newsroom is amazing. I'm just flabbergasted to hear of this."
The grand jury is investigating whether the Lancaster County coroner gave reporters for the Lancaster Intelligencer Journal his password to a restricted law enforcement Web site. The site contained nonpublic details of local crimes. The newspaper allegedly used some of those details in articles.
If the reporters used the Web site without authorization, officials say, they may have committed a crime.
In interviews yesterday, the reporters' lawyer, William DeStefano, and the coroner, Gary Kirchner, disagreed over whether Kirchner had given them permission to access the site.
DeStefano said that although he didn't know whether any of the reporters used the Web site, "evidence has been presented to the attorney general which makes it clear that the county coroner, an elected official, invited and authorized the paper or reporters access to the restricted portion of the Web site... . If somebody is authorized to give me a password and does, it's not hacking."
The coroner said yesterday that he had not "to my knowledge" provided the password or permission to the reporters.
"Why would I do that?" Kirchner said yesterday. "I'm not sure how I got drawn into something as goofy as this."
State agents raided Kirchner's home outside Lancaster last month and took computers, he said. He said he had had no other contact with authorities since.
The morning Intelligencer Journal is owned by Lancaster Newspapers Inc., which also publishes the afternoon Lancaster New Era and the Sunday News.
The Intelligencer Journal's editor, Raymond Shaw, was compelled last month to testify before the grand jury, which is based in Harrisburg. Yesterday, he declined to comment on the case.
Grand-jury investigations are secret. But some details trickled out when a lower-court judge in Harrisburg, Barry Feudale, held hearings last month to consider the newspaper's motion to stop the state from enforcing its subpoena for the hard drives.
Officials said the Internet histories and cached Web-page content retained on the newspaper's computer hard drives could contain evidence of a crime - unauthorized use of a computer. To properly search the computers, state lawyers argued, they needed to haul them to a government lab in Harrisburg.
Senior Deputy Attorney General Jonelle Eshbach argued that this was not a case of a journalist's right to protect a source but an attempt to use the First Amendment to shield a crime.
"We know the source," she said. It is a password-protected Web site, she said, essentially "a bulletin board in a locked room, and it is getting into that locked room and seeing the bulletin board that makes this a crime."
At the hearing, another lawyer for the newspaper, Jayson Wolfgang, said the search was illegal, and troubling.
"The government simply doesn't have the ability or the right, nor should it, in a free democracy, to seize the work-product materials, source information, computer hard drives, folders with paper, cabinet drawers of a newspaper," he argued.
Feudale ruled Feb. 23 that the state could seize the computers but view only Internet data relevant to the case. The judge also ordered the agent who withdraws the data to show them to him first - before passing them to prosecutors - to ensure that the journalists' other confidential files are not compromised. The ruling was stayed pending appeal to the State Supreme Court.
In the newspaper's appeal, DeStefano argued that the ramifications of allowing government officials to have control over a newspaper's computers, no matter the restrictions imposed, are frightening.
"Permitting the attorney general to seize and search unfettered the workstations will result in the very chilling of information," DeStefano wrote. "Confidential tips, leads, and other forms of information will undoubtedly dry up once sources and potential sources learn that Lancaster Newspapers' workstations were taken out of its possession and turned over to investigations."
In response, the state argued that "the newspaper has not produced one shred of evidence that the computer hard drives contain information protected from disclosure."
In a one-page order dated Wednesday, the Supreme Court declined to hear the case on procedural grounds, freeing the state to examine the hard drives.
source:http://www.philly.com/mld/philly/14084455.htm
Seven-ounce "wrist PC" runs Linux

Eurotech's WWPC
According to Eurotech, the WWPC integrates everything users expect of a PC in a versatile, ergonomic form factor that supports a variety of wrist sizes. It can be worn over or under work clothes, and has flexible left- or right-handed straps that enclose dual 2-cell Li-polymer rechargeable batteries. Claimed battery life is six hours in "fully operational" mode, or eight hours under normal circumstances.
The WWPC weighs seven ounces (200 grams) without straps/batteries, Eurotech says.
The WWPC offers several wearable-specific innovations, according to the company, including a patented orientation sensor that can be configured to induce standby when the user's arm drops. Additionally, the device's tilt sensor can be used to detect motionless operator states, while a built-in GPS receiver and "dead reckoning" technology enable the device to serve as a location-transmitting beacon.
The WWPC is based on an unspecified low-power embedded processor. It boots from 32MB of flash, and has 64MB of SDRAM. Storage can be expanded through an SD-card slot supporting cards up to 1GB.
Standard PC interfaces include WiFi, Bluetooth, and fast infrared networking, USB host and device ports, sound, built-in speakers, and a headphone jack. The device has a "daylight-readable" 2.8 x 2.2-inch touchscreen LCD, and also supports human interface devices such as microphones and headsets connected via USB or Bluetooth, the company says. Other features listed for the device include:

- Direct-access keypad
- L1 16-channel GPS receiver with active helix antenna
- IrDA (up to 4Mbps)
- Bluetooth v1.1 (up to 721 Kbps)
- LAN 802.11b (up to 11Mbps) with "hardware coexistence handshake"
- Specific internal antennas
- Supports "different configurable audio/video user interfaces"
- Supports Linux or Windows CE
Eurotech describes the WWPC as a "user-centric, ubiquitous computing" concept, suggesting that the device is not yet available in product form. The company did not respond to availability enquiries by publication time.
According to its website, Eurotech's corporate strategy is to "define and penetrate new and emerging markets." The company also offers rugged surveillance systems for public transportation that run Linux. The company is publicly traded on the Italian stock market; it merged with US SBC (single-board computer) vendor Parvus in 2003.
source:http://linuxdevices.com/news/NS5812502455.html
Sony Delays PlayStation 3
Sony has been anything but talkative these past few months regarding its plans for PlayStation 3. No one except Sony seemed to believe the console would make its projected spring release date in any territory, even Japan.
Today, Sony officially conceded defeat to the recent flurry of rumors and speculation, with Japanese newspaper Nihon Keizai Shimbun reporting the machine has been pushed back until November.
There aren't many details out right now, but Sony says issues with finalizing the copy-protection technology for its Blu-ray disc drive are the cause of the delay.
When asked for comment, a Sony Computer Entertainment America spokesperson would only go on record saying that SCEI itself has not yet issued any official statement.
Because the news is coming out of Japan, it creates a worrisome scenario for America and Europe. There were already rumblings Sony wouldn't be able to launch in all territories before the end of the year, but missing the Christmas season over here could prove a deadly blow to Sony's next-generation plans.
Somewhere in Microsoft's offices, Bill Gates just opened a bottle of champagne. With more details expected at tomorrow's PlayStation conference and the Game Developers Conference just around the corner, it appears the floodgates (for good or ill) are about to open.
source:http://www.1up.com/do/newsStory?cId=3148763
Deploying Linux: Should you pre-compile binaries or roll your own?
When introduced to Linux and the option of compiling a favorite software package from source, most Windows administrators wonder why they'd ever want to do that. This tip explains why there are some compelling reasons to roll your own binary.
First, let's look at the back story. Traditionally, Windows has practiced a method of package distribution that provides good ease of use and reliability through its use of precompiled binaries. As a best-effort attempt to reach a wide variety of software configurations and hardware implementations, precompiled binary packages represent a consistent and efficient model for delivering software packages, updates, service packs and so forth.
Even Linux offers its own forms of precompiled distribution -- typically handled by some native package manager such as RPM or Yum -- to meet average users' needs. This practice aligns closely with the Windows model, but depends on the distribution provider or some interested third party to precompile and distribute Linux-compatible software goods.
Another difference between the way Linux handles third-party packages and additions versus Windows is in package management utilities, such as Yum and Yumex for Fedora Core, apt-get for Debian, and so on. Linux provides users with a single general-purpose utility to help them acquire and install software applications from any third-party vendor. Best of all, such applications are compiled specifically for whatever platform and revision a given package manager recognizes or supports. Such Linux utilities do a phenomenal job of resolving platform-dependent issues transparently on behalf of their users, and can save considerable time and effort (or wasted effort, if the desired end result is not attained) that might otherwise go to identifying and resolving installation issues by hand.
Now we're back to where we started, when the Windows admin is introduced to the option in Linux of compiling a favorite software package from source. Any sane admin, at this point, would wonder why anyone would take on that task. In most cases, there are only a few, if any, truly compelling reasons for compiling packages from source. It is time-consuming and requires meticulous attention to detail for large code bases. It also assumes an intimate knowledge of Linux compiler tools, not a fair or safe assumption to make of every user.
As we said before, however, there are also some equally compelling reasons to roll your own binary from raw source code, particularly when building custom-tailored server installs. Let's look at two very good reasons:
- Missing features: Perhaps your version of LAME (a popular MP3 encoder) lacks support for a certain necessary audio format, as is the case for many pre-packaged distributions. The only way to get it is by compiling from source, and the same is true for many other applications and services as well.
- Performance: Precompiled binaries are often built upon whatever parameters are considered the bare minimum at compile time. If a development lab specifies a 500MHz processor with 128MB of memory as the bare minimum, a software package will be tailored to meet those specifications. The only surefire way to get the absolute best performance out of a Linux application is to compile it on the target machine where it will run.
For admins new to Linux or end users of Linux-based solutions, precompiled packaging has the same appeal as it does to the Windows segment. For Linux professionals and expert developers, build-your-own is part of the basic equation. Only in those cases where a binary maps to a true one-size-fits-all situation does it make sense to implement a generic code library compiled against generic standards, as is often the case when coding or deploying network process images in your environment. It's a key test of a Linux professional's discrimination to know when to use a precompiled binary, which is usually when basic building blocks are involved, and when to compile, which is most of the time.
source:http://searchopensource.techtarget.com/tip/1,289483,sid39_gci1171130,00.html
Build your own profiling tool
14 Mar 2006
Profiling is a technique for measuring where software programs consume resources, including CPU time and memory. In this article, software architect Andrew Wilcox explains the benefits of profiling and some current profiling options and their shortcomings. He then shows you how to use the new Java™ 5 agent interface and simple aspect-oriented programming techniques to build your own profiler.
Whether you're using System.out.println() or a profiling tool such as hprof or OptimizeIt, code profiling should be an essential component of your software development practice. This article discusses the most common approaches to code profiling and explains their downsides. It provides a list of best-of-breed features you might look for in an ideal profiler and explains why aspect-oriented techniques are well suited to achieving some of those features. It also introduces you to the JDK 5.0 agent interface and walks you through the steps of using it to build your own aspect-oriented profiler.
Note that the example profiler and complete source code for this article are based on the Java Interactive Profiler (JIP) -- an open-source profiler built using aspect-oriented techniques and the Java 5 agent interface. See Resources to learn more about JIP and other tools discussed in this article.
Profiling tools and techniques
Most Java developers start out measuring application performance using System.currentTimeMillis() and System.out.println(). System.currentTimeMillis() is easy to use: you just measure the time at the beginning of a method and again at the end, and print the difference (a sketch of this manual approach follows the list). But it has two big downsides:
- It's a manual process, requiring you to determine what code to measure; instrument the code; recompile, redeploy, run, and analyze the results; back out the instrumentation code when done; and go through all the same steps again the next time there's a problem.
- It doesn't give a comprehensive view of how all the parts of the application are performing.
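For example, the manual approach looks something like this (AdHocTiming and doWork() are illustrative names, not from the article):
public class AdHocTiming {
    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        doWork();
        System.out.println("doWork took "
                + (System.currentTimeMillis() - start) + " ms");
    }

    // Stand-in for the application logic being measured
    private static void doWork() {
        for (int i = 0; i < 1000000; i++) {
            Math.sqrt(i);
        }
    }
}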
To get around these issues, some developers turn to code profilers like hprof, JProbe, or OptimizeIt. Profilers avoid the problems associated with ad-hoc measurement because you don't have to modify your program to use them. They also give you a more comprehensive view of program performance because they gather timing information for every method call, not just a particular section of the code. Unfortunately, profiling tools also have some downsides.
Profilers offer a nice alternative to manual solutions like System.currentTimeMillis(), but they're far from ideal. For one thing, running a program with hprof can slow it down by a factor of 20. That means an ETL (extract, transform, and load) operation that normally takes an hour could take a full day to profile! Not only is waiting inconvenient, but changing the timescale of the application can actually skew the results. Take a program that does a lot of I/O. Because the I/O is performed by the operating system, the profiler doesn't slow it down, so your I/O could appear to run 20 times faster than it actually does! As a result, you can't always count on hprof to give you an accurate picture of your application's performance.
Another problem with hprof has to do with how Java programs are loaded and run. Unlike programs in statically linked languages like C or C++, a Java program is linked at run time rather than compile time. Classes aren't loaded by the JVM until the first time they're referenced, and code isn't compiled from bytecode to machine code until it has been executed a number of times. If you're trying to measure the performance of a method but its class hasn't yet been loaded, your measurement will include the class loading time and compilation time in addition to the run time. Because these things happen only at the beginning of an application's life, you usually don't want to include them when measuring the performance of long-lived applications.
Things can get even more complicated when your code is running in an application server or servlet engine. Profilers like hprof profile the entire application, servlet container and all. The trouble is, you usually don't want to profile the servlet engine; you just want to profile your application.
Like selecting any other tool, selecting a profiler involves trade-offs. Free tools like hprof are easy to use, but they have limitations, such as the inability to filter out classes or packages from the profile. Commercial tools offer more features but can be expensive and have restrictive licensing terms. Some profilers require that you launch the application through the profiler, which means reconstructing your execution environment in terms of an unfamiliar tool. Picking a profiler involves compromises, so what should an ideal profiler look like? Here's a short list of the features you might look for:
- Speed: Profiling can be painfully slow. But you can speed things up by using a profiler that doesn't automatically profile every class.
- Interactivity: The more interaction your profiler allows, the more you can fine-tune the information you get from it. For example, being able to turn the profiler on and off at run time helps you avoid measuring class loading, compilation, and interpreted execution (pre-JIT) times.
- Filtering: Filtering by class or package lets you focus on the problem at hand, rather than being overwhelmed by too much information.
- 100% Pure Java code: Most profilers require native libraries, which limits the number of platforms that you can use them on. An ideal profiler wouldn't require the use of native libraries.
- Open source: Open source tools typically let you get up and running quickly, while avoiding the restrictions of commercial licenses.
The problem with using System.currentTimeMillis() to generate timing information is that it's a manual process. If you could automate the instrumentation of the code, many of its disadvantages would go away. This type of problem is a perfect candidate for an aspect-oriented solution. The agent interface introduced in Java 5 is ideal for building an aspect-oriented profiler because it gives you an easy way to hook into the classloader and modify classes as they're loaded.
The remainder of the article focuses on the BYOP (build your own profiler). I'll introduce the agent interface and show you how to create a simple agent. You'll learn the code for a basic profiling aspect, as well as the steps you can take to modify it for more advanced profiling.
The -javaagent JVM option is, unfortunately, very sparsely documented. You won't find many books on the topic (no Java Agents for Dummies or Java Agents in 21 Days), but you will find some good sources in the Resources section, along with the overview here.
The basic idea behind agents is that, as the JVM loads a class, the agent can modify the class's bytecode. You can create an agent in three steps:
- Implement the java.lang.instrument.ClassFileTransformer interface:
public interface ClassFileTransformer {
public byte[] transform(ClassLoader loader, String className,
Class classBeingRedefined, ProtectionDomain protectionDomain,
byte[] classfileBuffer) throws IllegalClassFormatException;
}
- Create a "premain" method. This method is called before the application's main() method and looks like this:
package sample.verboseclass;
public class Main {
public static void premain(String args, Instrumentation inst) {
...
}
}
- In the agent JAR file, include a manifest entry identifying the class containing the premain() method:
Manifest-Version: 1.0
Premain-Class: sample.verboseclass.Main
Your first step toward building a profiler is to create a simple agent that prints out the name of each class as it's loaded, similar to the behavior of the -verbose:class JVM option. As you can see in Listing 1, this requires only a few lines of code:
Listing 1. A simple agent
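A minimal sketch of such an agent (the package and class names match the manifest entry shown in step 3; returning null from transform() leaves each class's bytecode unchanged):
package sample.verboseclass;

import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

public class Main implements ClassFileTransformer {

    public static void premain(String args, Instrumentation inst) {
        // Register a transformer that sees each class as it is loaded
        inst.addTransformer(new Main());
    }

    public byte[] transform(ClassLoader loader, String className,
            Class classBeingRedefined, ProtectionDomain protectionDomain,
            byte[] classfileBuffer) {
        System.out.println("Loading: " + className);
        return null;  // null means the class is not modified
    }
}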
If the agent is packaged in a JAR file called vc.jar, the JVM is started with the -javaagent option, as follows:
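Something like the following, where MyApp stands in for your application's main class:
java -javaagent:vc.jar MyApp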
With the basic elements of an agent in place, your next step is to add a simple profiling aspect to your application classes as they're loaded. Fortunately, you don't need to master the details of the JVM instruction set to modify bytecode. Instead, you can use a toolkit like the ASM library (from the ObjectWeb consortium; see Resources) to handle the details of the class file format. ASM is a Java bytecode manipulation framework that uses the Visitor pattern to enable you to transform class files, in much the same way that SAX events can be used to traverse and transform an XML document.
Listing 2 is a profiling aspect that can be used to output the class name, method name, and a timestamp every time the JVM enters or leaves a method. (For a more sophisticated profiler, you would probably want to use a high-resolution timer like Java 5's System.nanoTime().)
Listing 2. A simple profiling aspect
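A minimal sketch of such an aspect (the package name is an assumption here; JIP's real implementation gathers far more detail):
package sample.profiler;

public class Profile {

    // Called on method entry by the injected bytecode
    public static void start(String className, String methodName) {
        System.out.println("start " + className + "." + methodName
                + " " + System.currentTimeMillis());
    }

    // Called on method exit by the injected bytecode
    public static void end(String className, String methodName) {
        System.out.println("end " + className + "." + methodName
                + " " + System.currentTimeMillis());
    }
}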
If you were profiling by hand, your next step would be to modify every method to look something like this:
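Roughly like this, for a hypothetical method doWork() in a class MyClass:
public void doWork() {
    Profile.start("MyClass", "doWork");
    // ... the original method body ...
    Profile.end("MyClass", "doWork");
}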
Now you need to figure out what the bytecode for the Profile.start() and Profile.end() calls looks like -- which is where the ASM library comes in. ASM has a Bytecode Outline plugin for Eclipse (see Resources) that allows you to view the bytecode of any class or method. Figure 1 shows the bytecode for the method above. (You could also use a disassembler like javap, which is part of the JDK.)
Figure 1. Viewing bytecode using the ASM plugin

The ASM plugin even produces the ASM code that can be used to generate the corresponding bytecode, as shown in Figure 2:
Figure 2. The ASM plugin generates code

You can cut and paste the highlighted code shown in Figure 2 into your agent to call a generalized version of the Profile.start() method, as shown in Listing 3:
Listing 3. The ASM code to inject a call to the profiler
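A sketch of the injected call, using the ASM 2.x API of the era; mv is assumed to be the org.objectweb.asm.MethodVisitor for the method being rewritten, and className/methodName are the strings the agent knows for that method:
// Push the two String arguments, then invoke the static profiler method
mv.visitLdcInsn(className);
mv.visitLdcInsn(methodName);
mv.visitMethodInsn(Opcodes.INVOKESTATIC, "sample/profiler/Profile",
        "start", "(Ljava/lang/String;Ljava/lang/String;)V");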
To inject the start and end calls, subclass ASM's MethodAdapter, as shown in Listing 4:
Listing 4. The ASM code to inject a call to the profiler
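A sketch of such an adapter (ProfileMethodAdapter is a name chosen here, and this uses the ASM 2.x API current when the article was written). Profile.start() is injected when the method body begins, and Profile.end() is injected before each return instruction:
package sample.profiler;

import org.objectweb.asm.MethodAdapter;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

public class ProfileMethodAdapter extends MethodAdapter {

    private final String className;
    private final String methodName;

    public ProfileMethodAdapter(MethodVisitor visitor,
            String className, String methodName) {
        super(visitor);
        this.className = className;
        this.methodName = methodName;
    }

    // The method body is about to begin: inject Profile.start(...)
    public void visitCode() {
        injectCall("start");
        super.visitCode();
    }

    // Inject Profile.end(...) immediately before every return instruction
    public void visitInsn(int opcode) {
        if (opcode >= Opcodes.IRETURN && opcode <= Opcodes.RETURN) {
            injectCall("end");
        }
        super.visitInsn(opcode);
    }

    private void injectCall(String profileMethod) {
        mv.visitLdcInsn(className);
        mv.visitLdcInsn(methodName);
        mv.visitMethodInsn(Opcodes.INVOKESTATIC, "sample/profiler/Profile",
                profileMethod, "(Ljava/lang/String;Ljava/lang/String;)V");
    }
}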
The code to hook this into your agent is very simple and is part of the source download for this article.
Because your agent uses ASM, you need to ensure that the ASM classes are loaded for everything to work. There are many class paths in a Java application: the application classpath, the extensions classpath, and the bootstrap classpath. Surprisingly, the ASM JAR doesn't go in any of these; instead, you'll use the manifest to tell the JVM which JAR files the agent needs, as shown in Listing 5. In this case, the JAR files must be in the same directory as the agent's JAR.
Listing 5. The manifest file for the profiler
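Something like the following (Boot-Class-Path is the standard agent-manifest attribute for this; the class and JAR file names here are assumptions):
Manifest-Version: 1.0
Premain-Class: sample.profiler.Main
Boot-Class-Path: asm-2.0.jar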
Once everything is compiled and packaged, you can run your profiler against any Java application. Listing 6 is part of the output from a profile of Ant running the build.xml that compiles the agent:
Listing 6. A sample of the output from the profiler
So far you've seen how to build a simple aspect-oriented profiler with just a few lines of code. While a good start, the example profiler doesn't gather thread and call stack data. Call stack information is necessary to determine the gross and net method execution times. In addition, each call stack is associated with a thread, so if you want to track call stack data, you'll need thread information as well. Most profilers use a two-pass design for this kind of analysis: first gather data, then analyze it. I'll show you how to adopt this approach rather than just printing out the data as it is gathered.
You can easily enhance your Profile class to capture call stack and thread information. For starters, instead of printing out times at the start and end of each method, you can store the information using the data structures shown in Figure 3:
Figure 3. Data structures for tracking call stack and thread information

There are a number of ways to gather information about the call stack. One is to instantiate an Exception, but doing this at the beginning and end of each method would be far too slow. A simpler way is for the profiler to manage its own internal call stack. This is easy because start() is called for every method; the only tricky part is unwinding the internal call stack when an exception is thrown. You can detect when an exception has been thrown by checking the expected class and method name when Profile.end() is called.
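A sketch of that bookkeeping, using one stack per thread (Java 5 syntax; JIP's real structures also aggregate per-thread totals, as Figure 3 suggests):
package sample.profiler;

import java.util.Stack;

public class Profile {

    private static class Frame {
        final String className;
        final String methodName;
        final long startTime;

        Frame(String className, String methodName, long startTime) {
            this.className = className;
            this.methodName = methodName;
            this.startTime = startTime;
        }
    }

    // One internal call stack per application thread
    private static final ThreadLocal<Stack<Frame>> STACKS =
            new ThreadLocal<Stack<Frame>>() {
                protected Stack<Frame> initialValue() {
                    return new Stack<Frame>();
                }
            };

    public static void start(String className, String methodName) {
        STACKS.get().push(new Frame(className, methodName, System.nanoTime()));
    }

    public static void end(String className, String methodName) {
        Stack<Frame> stack = STACKS.get();
        // Pop until we reach the frame for the method that is actually
        // exiting; frames above it were abandoned by a thrown exception
        while (!stack.isEmpty()) {
            Frame f = stack.pop();
            long elapsed = System.nanoTime() - f.startTime;
            record(f, elapsed);
            if (f.className.equals(className)
                    && f.methodName.equals(methodName)) {
                break;
            }
        }
    }

    private static void record(Frame f, long elapsed) {
        // A real profiler would store this for the shutdown-time report;
        // printing keeps the sketch short
        System.out.println(f.className + "." + f.methodName + " " + elapsed + " ns");
    }
}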
Printing is similarly easy to set up. You can create a shutdown hook using Runtime.addShutdownHook() to register a Thread that runs at shutdown time and prints a profiling report to the console.
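For instance (a sketch; report() stands in for whatever report-printing method your Profile class provides):
// Register a JVM shutdown hook that prints the profiling report
Runtime.getRuntime().addShutdownHook(new Thread() {
    public void run() {
        Profile.report();  // hypothetical report-printing method
    }
});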
This article introduced you to the current tools and technologies most commonly used for profiling and discussed some of their limitations. You now have a list of features you might expect from an ideal profiler. Finally, you learned how to use aspect-oriented programming and the Java 5 agent interface to build your own profiler incorporating some of these ideal features.
The example code in this article is based on the Java Interactive Profiler, an open-source profiler built using the techniques I've discussed here. In addition to the basic features found in the example profiler, JIP incorporates the following:
- Interactive profiling
- The ability to exclude classes or packages
- The ability to include only classes loaded by a particular classloader
- A facility for tracking object allocations
- Performance measurement in addition to code profiling
JIP is distributed under a BSD-style license. See Resources for download information.
Description | Name | Size | Download method
---|---|---|---
Source code for the simple profiler developed here | j-jipsimple-profiler-src.zip | 51KB | HTTP
Source code for the verbose-class example | j-jipverbose-class-src.zip | 2KB | HTTP
Learn
- java.lang.instrument: The Javadoc for the agent interface.
- "Instrumentation: Modify Applications with Java 5 Class File Transformations" (R. J. Lorimer, JavaLobby, June 2005): One of the few sources of information about the Java 5 agent interface.
- "AOP@Work: Performance monitoring with AspectJ, Part 1" and Part 2 (Ron Bodkin, developerWorks, September/November 2005): Ron Bodkin uses the open source project Glassbox Inspector to demonstrate that AOP is a natural fit for solving the problems of system monitoring.
- "Using HPROF to Tune Performance" (JDC Tech Tips, January 24, 2000): Gets you started with hprof and provides a code example showing improved performance.
- "Eye on performance: Profiling on the edge" (Jack Shirazi and Kirk Pepperdine, developerWorks, September 2004): A day in the life of two Java developers and their many profiling tools.
- Classworking toolkit: Dennis Sosnoski explores ASM in his regular series on developerWorks.
- The Java technology zone: Hundreds of articles about every aspect of Java programming.
Get products and technologies
- Java Interactive Profiler: The author's high-performance, low-overhead profiler that is written entirely in Java code.
- ASM Bytecode Outline plugin for Eclipse: Shows dis-assembled bytecode of current Java editor or class file, allows bytecode comparison for Java/class files, and shows ASMifier code for current bytecode.
- ASM bytecode manipulation framework: Can be used to dynamically generate stub classes or other proxy classes directly in binary form, or to dynamically modify classes at load time.
Discuss
- JVM and Bytecode discussion forum: This new forum, hosted by Dennis Sosnoski, covers issues related to Java bytecode, the Java binary class format, and general JVM issues.
- developerWorks Forums: Get involved in the developerWorks community.
Andrew Wilcox is a software architect at MentorGen LLC in Columbus, Ohio. He has over 15 years of industry experience, nine of them using the Java platform. Andrew specializes in frameworks, performance tuning, and metaprogramming. His focus is on tools and techniques to increase developer productivity and software reliability. He is also the creator of the Java Interactive Profiler.
source:http://www-128.ibm.com/developerworks/java/library/j-jip/?ca=dgr-lnxw01JavaProfiling
Immersion Wins Latest Round Of Sony 'Rumble' Suit
This comes after Microsoft settled with the company back in July 2003 over similar Xbox-related patent violations, and after continuing settlements with smaller companies, most recently peripheral maker Electro Source, all stemming from Immersion's wide-ranging patent on the concept.
In the last ruling against Sony, made in early 2005, Judge Claudia Wilken of the U.S. District Court awarded Immersion Corp. $82 million, or 1.37% of Sony's sales of PlayStations and PlayStation-related paraphernalia. The $82 million is less than the $299 million originally sought by Immersion Corp., but the court ruled that Sony's infringement of the vibration patents was not willful and therefore not deserving of the full penalties.
However, in this latest ruling on Sony's appeal of that $82 million award, according to a Wall Street Journal report, Sony's defence rested on the alleged nondisclosure of some of the inventions of key employee Craig Thorner, who has been a consultant both for Immersion and subsequently for Sony. But, according to the report, U.S. District Judge Claudia Wilken was unhappy with Thorner's testimony supporting Sony, given that he had also been paid by Sony, and so dismissed this line of defence.
According to the reports, another appeal in the U.S. Court of Appeals for the Federal Circuit is expected to be heard this year, and the matter is likely to be finally resolved one way or the other in the next few months.
source:http://www.gamasutra.com/php-bin/news_index.php?story=8499