Wednesday, March 15, 2006

Unconfirmed: Dell buys Alienware

While no word has been officially released from either Dell or Alienware, we heard from a reliable source this morning that the purchase has indeed gone down. Speculation began on March 5 after a blog post by Voodoo PC CEO Rahul Sood. The new source, a contact at a high-end vendor who requested anonymity, claims that two of his company's suppliers confirmed that the deal has been done, and he also claims that his company has received an influx of resumes from Alienware employees.

An Alienware PR representative did not deny the claim but instead forwarded us a prewritten statement from the company that said: "At this time, Alienware will not comment on any speculative stories or rumors concerning Dell and Alienware's association. While we do believe that news stories like this are ultimately a strong positive reinforcement of the Alienware brand and the company's success, we will not comment on speculation or potential future events. As always, Alienware is committed to offering consumers and businesses with the best high-performance, innovative PC products on the market and we remain manically focused on that goal."

Dell did not return calls asking for comment.

Dell has taken a lot of flak in high-end system reviews over the past year or so due to its exclusive use of Intel processors. Alienware, however, currently sells AMD-based systems through its Aurora line of desktops. Owning a subsidiary that sells AMD-based systems could be an easier path for Dell toward reclaiming the performance crown, rather than incorporating Athlons or Semprons into its current Intel-exclusive assembly line. We'll presumably know more when the deal is officially announced, or whenever we learn the specifics of upcoming products.

source:http://reviews.cnet.com/4531-10921_7-6464030.html

Pa. seizes paper's computer hard disks

The Attorney General's Office says they may show evidence of a felony: unauthorized use of a restricted Web site.

In an unusual and little-known case, the Pennsylvania Attorney General's Office has seized four computer hard drives from a Lancaster newspaper as part of a statewide grand-jury investigation into leaks to reporters.

The dispute pits the government's desire to solve an alleged felony - computer hacking - against the news media's fear that taking the computers circumvents the First Amendment and the state Shield Law.

The state Supreme Court declined last week to take the case, allowing agents to begin analyzing the data.

"This is horrifying, an editor's worst nightmare," said Lucy Dalglish, executive director of the Reporters Committee for Freedom of the Press in Washington. "For the government to actually physically have those hard drives from a newsroom is amazing. I'm just flabbergasted to hear of this."

The grand jury is investigating whether the Lancaster County coroner gave reporters for the Lancaster Intelligencer Journal his password to a restricted law enforcement Web site. The site contained nonpublic details of local crimes. The newspaper allegedly used some of those details in articles.

If the reporters used the Web site without authorization, officials say, they may have committed a crime.

In interviews yesterday, the reporters' lawyer, William DeStefano, and the coroner, Gary Kirchner, disagreed over whether Kirchner had given them permission to access the site.

DeStefano said that although he didn't know whether any of the reporters used the Web site, "evidence has been presented to the attorney general which makes it clear that the county coroner, an elected official, invited and authorized the paper or reporters access to the restricted portion of the Web site... . If somebody is authorized to give me a password and does, it's not hacking."

The coroner said yesterday that he had not "to my knowledge" provided the password or permission to the reporters.

"Why would I do that?" Kirchner said yesterday. "I'm not sure how I got drawn into something as goofy as this."

State agents raided Kirchner's home outside Lancaster last month and took computers, he said. He said he had had no other contact with authorities since.

The morning Intelligencer Journal is owned by Lancaster Newspapers Inc., which also publishes the afternoon Lancaster New Era and the Sunday News.

The Intelligencer Journal's editor, Raymond Shaw, was compelled last month to testify before the grand jury, which is based in Harrisburg. Yesterday, he declined to comment on the case.

Grand-jury investigations are secret. But some details trickled out when a lower-court judge in Harrisburg, Barry Feudale, held hearings last month to consider the newspaper's motion to stop the state from enforcing its subpoena for the hard drives.

Officials said the Internet histories and cached Web-page content retained on the newspaper's computer hard drives could contain evidence of a crime - unauthorized use of a computer. To properly search the computers, state lawyers argued, they needed to haul them to a government lab in Harrisburg.

Senior Deputy Attorney General Jonelle Eshbach argued that this was not a case of a journalist's right to protect a source but an attempt to use the First Amendment to shield a crime.

"We know the source," she said. It is a password-protected Web site, she said, essentially "a bulletin board in a locked room, and it is getting into that locked room and seeing the bulletin board that makes this a crime."

At the hearing, another lawyer for the newspaper, Jayson Wolfgang, said the search was illegal, and troubling.

"The government simply doesn't have the ability or the right, nor should it, in a free democracy, to seize the work-product materials, source information, computer hard drives, folders with paper, cabinet drawers of a newspaper," he argued.

Feudale ruled Feb. 23 that the state could seize the computers but view only Internet data relevant to the case. The judge also ordered the agent who withdraws the data to show them to him first - before passing them to prosecutors - to ensure that the journalists' other confidential files are not compromised. The ruling was stayed pending appeal to the State Supreme Court.

In the newspaper's appeal, DeStefano argued that the ramifications of allowing government officials to have control over a newspaper's computers, no matter the restrictions imposed, are frightening.

"Permitting the attorney general to seize and search unfettered the workstations will result in the very chilling of information," DeStefano wrote. "Confidential tips, leads, and other forms of information will undoubtedly dry up once sources and potential sources learn that Lancaster Newspapers' workstations were taken out of its possession and turned over to investigations."

In response, the state argued that "the newspaper has not produced one shred of evidence that the computer hard drives contain information protected from disclosure."

In a one-page order dated Wednesday, the Supreme Court declined to hear the case on procedural grounds, freeing the state to examine the hard drives.

source:http://www.philly.com/mld/philly/14084455.htm


Seven-ounce "wrist PC" runs Linux

A European embedded computing specialist has announced a wrist-worn wearable computer that runs embedded Linux or Windows CE. Eurotech's WWPC ("wrist-worn PC") offers a wealth of standard PC interfaces, along with several innovative wearable-specific features, the company claims. It targets emergency rescue, security, healthcare, maintenance, logistics, and "many other" applications.




Eurotech's WWPC


According to Eurotech, the WWPC integrates everything users expect of a PC, in a versatile, ergonomic form factor that supports a variety of wrist sizes. It can be worn over or under work clothes, and has flexible left- or right-handed straps that enclose dual 2-cell Li-polymer rechargeable batteries. Claimed battery life is six hours in "fully operational" mode, or eight hours under normal circumstances.

The WWPC weighs seven ounces (200 grams) without straps/batteries, Eurotech says.

The WWPC offers several wearable-specific innovations, according to the company, including a patented orientation sensor that can be configured to induce standby when the user's arm drops. Additionally, the device's tilt sensor can be used to detect motionless operator states, while a built-in GPS receiver and "dead reckoning" technology enable the device to serve as a location-transmitting beacon.

The WWPC is based on an unspecified low-power embedded processor. It boots from 32MB of flash, and has 64MB of SDRAM. Storage can be expanded through an SD-card slot supporting cards up to 1GB.

Standard PC interfaces include WiFi, Bluetooth, and fast infrared networking, USB host and device ports, sound, built-in speakers, and a headphone jack. The device has a "daylight-readable" 2.8 x 2.2-inch touchscreen LCD, and also supports human interface devices such as microphones and headsets connected via USB or Bluetooth, the company says.

Additional claimed features include:

Availability

Eurotech describes the WWPC as a "user-centric, ubiquitous computing" concept, suggesting that the device is not yet available in product form. The company did not respond to availability enquiries by publication time.

According to its website, Eurotech's corporate strategy is to "define and penetrate new and emerging markets." The company also offers rugged surveillance systems for public transportation that run Linux. The company is publicly traded on the Italian stock market; it merged with US SBC (single-board computer) vendor Parvus in 2003.

source:http://linuxdevices.com/news/NS5812502455.html

Sony Delays PlayStation 3

Blu-ray forcing the machine into hiding until November.

Sony has been anything but talkative these past few months regarding its plans for PlayStation 3. No one except Sony seemed to believe the company would meet its projected spring release date in any territory, even Japan.

Today, Sony officially confirmed the recent flurry of rumors and speculation, with Japanese newspaper Nihon Keizai Shimbun reporting that the machine has been pushed back until November.

There aren't many details out right now, but Sony says issues over the finalization of copy-protection technology for its Blu-ray disc drive are the cause of the delay.

When asked for comment, a Sony Computer Entertainment America spokesperson would say only that SCEI itself has not yet issued any official statement.

As the news is coming out of Japan, that creates a worrisome scenario for America and Europe. There were already rumblings Sony wouldn't be able to launch in all territories before the end of the year, but missing out on the Christmas season over here could prove a deadly blow to Sony's next-generation plans.

Somewhere in Microsoft's offices, Bill Gates just opened a bottle of champagne. With more details expected at tomorrow's PlayStation conference and the Game Developers Conference just around the corner, it appears the floodgates (for good or ill) are about to open.

source:http://www.1up.com/do/newsStory?cId=3148763


Deploying Linux: Should you pre-compile binaries or roll your own?

When introduced to Linux and the option of compiling a favorite software package from source, most Windows administrators wonder why they'd ever want to do that. This tip explains why there are some compelling reasons to roll your own binary.

First, let's look at the back story. Traditionally, Windows has practiced a method of package distribution that provides good ease-of-use and reliability through its use of precompiled binaries. As a best-effort attempt to reach a wide variety of software configurations and hardware implementations, precompiled binary packages represent a consistent and efficient model for delivering software packages, updates, service packs and so forth.

Even Linux offers its own forms of precompiled distribution -- typically handled by some native package manager such as RPM or Yum -- to meet average users' needs. This practice aligns perfectly with the Windows model, but depends on the distribution provider or some interested third party to precompile and distribute Linux-compatible software.

Another difference between the way Linux handles third-party packages and additions versus Windows is in package management utilities, such as Yum and Yumex for Fedora Core, apt-get for Debian, and so on. Each distribution provides users with a single general-purpose utility to help them acquire and install software applications from third-party sources. Best of all, such applications are compiled specifically for whatever platform and revision a given package manager recognizes or supports.

Such Linux utilities do a phenomenal job of resolving platform-dependent issues transparently on behalf of their users, and can save considerable time and effort spent (or wasted, if the desired end-result is not attained) that might otherwise go to identifying and resolving installation issues by hand.

Now we're back to where we started, when the Windows admin is introduced to the option in Linux of compiling a favorite software package from source. Any sane admin, at this point, would wonder why anyone would bother.

In most cases, there are few, if any, truly compelling reasons for compiling packages from source. It is time-consuming and requires meticulous attention to detail for large code bases. It also assumes an intimate knowledge of Linux compiler tools, which is not a fair or safe assumption to make of every user.

As we said before, however, there are also some equally compelling reasons to roll your own binary from raw source code, particularly when building custom-tailored server installs. Let's look at two very good reasons:

  • Not every precompiled package is complete.

    Perhaps your version of LAME (a popular MP3 encoder) lacks support for a certain necessary audio format, as is the case for many pre-packaged distributions. The only way to get it is by compiling from source, and the same thing is true for many other applications and services as well.

  • You've got a situation in which the same software runs slower on machine X than on machine Y.

    Again, precompiled binaries are often built upon whatever parameters are considered bare-minimum at compile time. If a development lab specifies a 500MHz processor with 128MB of memory as the bare minimum, a software package will be tailored to meet those specifications. The only surefire way to get the absolute best performance out of a Linux application is to compile it on the target machine where it will run.

For admins new to Linux or end-users of Linux-based solutions, precompiled packaging has the same appeal as it does for the Windows segment. For Linux professionals and expert developers, build-your-own is part of the basic equation.

Only in those cases where a binary maps to a true one-size-fits-all situation does it make sense to implement a generic code library compiled against generic standards, as is often the case when coding or deploying network process images in your environment.

It's a key test of a Linux professional's discrimination to know when to use a precompiled binary, which is usually when basic building blocks are involved, and when to compile, which is most of the time.

source:http://searchopensource.techtarget.com/tip/1,289483,sid39_gci1171130,00.html


Build your own profiling tool

14 Mar 2006

Profiling is a technique for measuring where software programs consume resources, including CPU time and memory. In this article, software architect Andrew Wilcox explains the benefits of profiling and some current profiling options and their shortcomings. He then shows you how to use the new Java™ 5 agent interface and simple aspect-oriented programming techniques to build your own profiler.

Whether you're using System.out.println() or a profiling tool such as hprof or OptimizeIt, code profiling should be an essential component of your software development practice. This article discusses the most common approaches to code profiling and explains their downsides. It provides a list of best-of-breed features you might look for in an ideal profiler and explains why aspect-oriented techniques are well suited to achieving some of those features. It also introduces you to the JDK 5.0 agent interface and walks you through the steps of using it to build your own aspect-oriented profiler.

Note that the example profiler and complete source code for this article are based on the Java Interactive Profiler (JIP) -- an open-source profiler built using aspect-oriented techniques and the Java 5 agent interface. See Resources to learn more about JIP and other tools discussed in this article.

Profiling tools and techniques

Most Java developers start out measuring application performance using System.currentTimeMillis() and System.out.println(). System.currentTimeMillis() is easy to use: you just measure the time at the beginning of a method and again at the end, and print the difference. But it has two big downsides.
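In code, that ad-hoc approach looks something like this (a self-contained sketch; the workload method is invented purely for illustration):

```java
// Ad-hoc timing with System.currentTimeMillis(), as described above.
public class ManualTiming {

    // Stand-in workload; any method you want to measure goes here.
    static long doWork() {
        long sum = 0;
        for (int i = 0; i < 1000000; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        doWork();
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("doWork took " + elapsed + " ms");
    }
}
```

Every method you want to measure needs the same boilerplate around it, which is exactly the manual burden the rest of this article sets out to eliminate.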

To get around these issues, some developers turn to code profilers like hprof, JProbe, or OptimizeIt. Profilers avoid the problems associated with ad-hoc measurement because you don't have to modify your program to use them. They also give you a more comprehensive view of program performance because they gather timing information for every method call, not just a particular section of the code. Unfortunately, profiling tools also have some downsides.




Limitations of profilers

Profilers offer a nice alternative to manual solutions like System.currentTimeMillis(), but they're far from ideal. For one thing, running a program with hprof can slow it down by a factor of 20. That means an ETL (extract, transform, and load) operation that normally takes an hour could take a full day to profile! Not only is waiting inconvenient, but changing the timescale of the application can actually skew the results. Take a program that does a lot of I/O. Because the I/O is performed by the operating system, the profiler doesn't slow it down, so your I/O could appear to run 20 times faster than it actually does! As a result, you can't always count on hprof to give you an accurate picture of your application's performance.

Another problem with hprof has to do with how Java programs are loaded and run. Unlike programs in statically linked languages like C or C++, a Java program is linked at run time rather than compile time. Classes aren't loaded by the JVM until the first time they're referenced, and code isn't compiled from bytecode to machine code until it has been executed a number of times. If you're trying to measure the performance of a method but its class hasn't yet been loaded, your measurement will include the class loading time and compilation time in addition to the run time. Because these things happen only at the beginning of an application's life, you usually don't want to include them when measuring the performance of long-lived applications.
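One common mitigation (a suggestion on my part, not something hprof does for you) is to warm the code up before taking measurements, so class loading and JIT compilation are paid for outside the timed region:

```java
// Timing after a warm-up phase, so class loading and JIT compilation
// don't pollute the measurement of a long-lived application's steady state.
public class WarmedTiming {

    static long fib(int n) {                 // stand-in workload
        return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            fib(20);                         // warm-up: triggers loading + JIT
        }
        long start = System.currentTimeMillis();
        fib(20);                             // the measured run
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("warm run: " + elapsed + " ms");
    }
}
```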

Things can get even more complicated when your code is running in an application server or servlet engine. Profilers like hprof profile the entire application, servlet container and all. The trouble is, you usually don't want to profile the servlet engine; you just want to profile your application.




The ideal profiler

Like selecting any other tool, selecting a profiler involves trade-offs. Free tools like hprof are easy to use, but they have limitations, such as the inability to filter out classes or packages from the profile. Commercial tools offer more features but can be expensive and have restrictive licensing terms. Some profilers require that you launch the application through the profiler, which means reconstructing your execution environment in terms of an unfamiliar tool. Picking a profiler involves compromises, so what should an ideal profiler look like? Here's a short list of the features you might look for:




Build it yourself!

The problem with using System.currentTimeMillis() to generate timing information is that it's a manual process. If you could automate the instrumentation of the code, many of its disadvantages would go away. This type of problem is a perfect candidate for an aspect-oriented solution. The agent interface introduced in Java 5 is ideal for building an aspect-oriented profiler because it gives you an easy way to hook into the classloader and modify classes as they're loaded.

The remainder of the article focuses on the BYOP (build your own profiler). I'll introduce the agent interface and show you how to create a simple agent. You'll learn the code for a basic profiling aspect, as well as the steps you can take to modify it for more advanced profiling.




Creating an agent

The -javaagent JVM option is, unfortunately, very sparsely documented. You won't find many books on the topic (no Java Agents for Dummies or Java Agents in 21 days), but you will find some good sources in the Resources section, along with the overview here.

The basic idea behind agents is that, as the JVM loads a class, the agent can modify the class's bytecode. You can create an agent in three steps:

  1. Implement the java.lang.instrument.ClassFileTransformer interface:


    public interface ClassFileTransformer {
        public byte[] transform(ClassLoader loader, String className,
                Class classBeingRedefined, ProtectionDomain protectionDomain,
                byte[] classfileBuffer) throws IllegalClassFormatException;
    }



  2. Create a "premain" method. This method is called before the application's main() method and looks like this:


    package sample.verboseclass;

    public class Main {
        public static void premain(String args, Instrumentation inst) {
            ...
        }
    }



  3. In the agent JAR file, include a manifest entry identifying the class containing the premain() method:


    Manifest-Version: 1.0
    Premain-Class: sample.verboseclass.Main

A simple agent

Your first step toward building a profiler is to create a simple agent that prints out the name of each class as it's loaded, similar to the behavior of the -verbose:class JVM option. As you can see in Listing 1, this requires only a few lines of code:


Listing 1. A simple agent

package sample.verboseclass;

import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.IllegalClassFormatException;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

public class Main {

    public static void premain(String args, Instrumentation inst) {
        inst.addTransformer(new Transformer());
    }
}

class Transformer implements ClassFileTransformer {

    public byte[] transform(ClassLoader l, String className, Class c,
            ProtectionDomain pd, byte[] b) throws IllegalClassFormatException {
        System.out.print("Loading class: ");
        System.out.println(className);
        return b;
    }
}

If the agent was packaged in a JAR file called vc.jar, the JVM would be started with the -javaagent option, as follows:


java -javaagent:vc.jar MyApplicationClass




A profiling aspect

With the basic elements of an agent in place, your next step is to add a simple profiling aspect to your application classes as they're loaded. Fortunately, you don't need to master the details of the JVM instruction set to modify bytecode. Instead, you can use a toolkit like the ASM library (from the ObjectWeb consortium; see Resources) to handle the details of the class file format. ASM is a Java bytecode manipulation framework that uses the Visitor pattern to enable you to transform class files, in much the same way that SAX events can be used to traverse and transform an XML document.

Listing 2 is a profiling aspect that can be used to output the class name, method name, and a timestamp every time the JVM enters or leaves a method. (For a more sophisticated profiler, you would probably want to use a high-resolution timer like Java 5's System.nanoTime().)


Listing 2. A simple profiling aspect

package sample.profiler;

public class Profile {

    public static void start(String className, String methodName) {
        System.out.println(new StringBuilder(className)
            .append('\t')
            .append(methodName)
            .append("\tstart\t")
            .append(System.currentTimeMillis()));
    }

    public static void end(String className, String methodName) {
        System.out.println(new StringBuilder(className)
            .append('\t')
            .append(methodName)
            .append("\tend\t")
            .append(System.currentTimeMillis()));
    }
}

If you were profiling by hand, your next step would be to modify every method to look something like this:


void myMethod() {
    Profile.start("MyClass", "myMethod");
    ...
    Profile.end("MyClass", "myMethod");
}




Using the ASM plugin

Now you need to figure out what the bytecode for the Profile.start() and Profile.end() calls looks like -- which is where the ASM library comes in. ASM has a Bytecode Outline plugin for Eclipse (see Resources) that allows you to view the bytecode of any class or method. Figure 1 shows the bytecode for the method above. (You could also use a disassembler like javap, which is part of the JDK.)


Figure 1. Viewing bytecode using the ASM plugin

The ASM plugin even produces the ASM code that can be used to generate the corresponding bytecode, as shown in Figure 2:


Figure 2. The ASM plugin generates code

You can cut and paste the highlighted code shown in Figure 2 into your agent to call a generalized version of the Profile.start() method, as shown in Listing 3:


Listing 3. The ASM code to inject a call to the profiler

visitLdcInsn(className);
visitLdcInsn(methodName);
visitMethodInsn(INVOKESTATIC,
    "sample/profiler/Profile",
    "start",
    "(Ljava/lang/String;Ljava/lang/String;)V");

To inject the start and end calls, subclass ASM's MethodAdapter, as shown in Listing 4:


Listing 4. A MethodAdapter that injects the profiling calls

package sample.profiler;

import org.objectweb.asm.MethodAdapter;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

import static org.objectweb.asm.Opcodes.INVOKESTATIC;

public class PerfMethodAdapter extends MethodAdapter {

    private String className, methodName;

    public PerfMethodAdapter(MethodVisitor visitor, String className,
            String methodName) {
        super(visitor);
        this.className = className;
        this.methodName = methodName;
    }

    public void visitCode() {
        // Inject the Profile.start() call at the top of the method.
        this.visitLdcInsn(className);
        this.visitLdcInsn(methodName);
        this.visitMethodInsn(INVOKESTATIC,
            "sample/profiler/Profile",
            "start",
            "(Ljava/lang/String;Ljava/lang/String;)V");
        super.visitCode();
    }

    public void visitInsn(int inst) {
        // Inject the Profile.end() call before every return or throw.
        switch (inst) {
        case Opcodes.ARETURN:
        case Opcodes.DRETURN:
        case Opcodes.FRETURN:
        case Opcodes.IRETURN:
        case Opcodes.LRETURN:
        case Opcodes.RETURN:
        case Opcodes.ATHROW:
            this.visitLdcInsn(className);
            this.visitLdcInsn(methodName);
            this.visitMethodInsn(INVOKESTATIC,
                "sample/profiler/Profile",
                "end",
                "(Ljava/lang/String;Ljava/lang/String;)V");
            break;
        default:
            break;
        }
        super.visitInsn(inst);
    }
}

The code to hook this into your agent is very simple and is part of the source download for this article.

Loading the ASM classes

Because your agent uses ASM, you need to ensure that the ASM classes are loaded for everything to work. There are many class paths in a Java application: the application classpath, the extensions classpath, and the bootstrap classpath. Surprisingly, the ASM JAR doesn't go in any of these; instead, you'll use the manifest to tell the JVM which JAR files the agent needs, as shown in Listing 5. In this case, the JAR files must be in the same directory as the agent's JAR.


Listing 5. The manifest file for the profiler

Manifest-Version: 1.0
Premain-Class: sample.profiler.Main
Boot-Class-Path: asm-2.0.jar asm-attrs-2.0.jar asm-commons-2.0.jar




Running the profiler

Once everything is compiled and packaged, you can run your profiler against any Java application. Listing 6 is part of the output from a profile of Ant running the build.xml that compiles the agent:


Listing 6. A sample of the output from the profiler

org/apache/tools/ant/Main runBuild start 1138565072002
org/apache/tools/ant/Project start 1138565072029
org/apache/tools/ant/Project$AntRefTable start 1138565072031
org/apache/tools/ant/Project$AntRefTable end 1138565072033
org/apache/tools/ant/types/FilterSet start 1138565072054
org/apache/tools/ant/types/DataType start 1138565072055
org/apache/tools/ant/ProjectComponent start 1138565072055
org/apache/tools/ant/ProjectComponent end 1138565072055
org/apache/tools/ant/types/DataType end 1138565072055
org/apache/tools/ant/types/FilterSet end 1138565072055
org/apache/tools/ant/ProjectComponent setProject start 1138565072055
org/apache/tools/ant/ProjectComponent setProject end 1138565072055
org/apache/tools/ant/types/FilterSetCollection start 1138565072057
org/apache/tools/ant/types/FilterSetCollection addFilterSet start 1138565072057
org/apache/tools/ant/types/FilterSetCollection addFilterSet end 1138565072057
org/apache/tools/ant/types/FilterSetCollection end 1138565072057
org/apache/tools/ant/util/FileUtils start 1138565072075
org/apache/tools/ant/util/FileUtils end 1138565072076
org/apache/tools/ant/util/FileUtils newFileUtils start 1138565072076
org/apache/tools/ant/util/FileUtils start 1138565072076
org/apache/tools/ant/taskdefs/condition/Os start 1138565072080
org/apache/tools/ant/taskdefs/condition/Os end 1138565072081
org/apache/tools/ant/taskdefs/condition/Os isFamily start 1138565072082
org/apache/tools/ant/taskdefs/condition/Os isOs start 1138565072082
org/apache/tools/ant/taskdefs/condition/Os isOs end 1138565072082
org/apache/tools/ant/taskdefs/condition/Os isFamily end 1138565072082
org/apache/tools/ant/util/FileUtils end 1138565072082
org/apache/tools/ant/util/FileUtils newFileUtils end 1138565072082
org/apache/tools/ant/input/DefaultInputHandler start 1138565072084
org/apache/tools/ant/input/DefaultInputHandler end 1138565072085
org/apache/tools/ant/Project end 1138565072085
org/apache/tools/ant/Project setCoreLoader start 1138565072085
org/apache/tools/ant/Project setCoreLoader end 1138565072085
org/apache/tools/ant/Main addBuildListener start 1138565072085
org/apache/tools/ant/Main createLogger start 1138565072085
org/apache/tools/ant/DefaultLogger start 1138565072092
org/apache/tools/ant/util/StringUtils start 1138565072096
org/apache/tools/ant/util/StringUtils end 1138565072096




Tracking the call stack

So far you've seen how to build a simple aspect-oriented profiler with just a few lines of code. While a good start, the example profiler doesn't gather thread and call stack data. Call stack information is necessary to determine the gross and net method execution times. In addition, each call stack is associated with a thread, so if you want to track call stack data, you'll need thread information as well. Most profilers use a two-pass design for this kind of analysis: first gather data, then analyze it. I'll show you how to adopt this approach rather than just printing out the data as it is gathered.

Modifying the Profile class

You can easily enhance your Profile class to capture call stack and thread information. For starters, instead of printing out times at the start and end of each method, you can store the information using the data structures shown in Figure 3:


Figure 3. Data structures for tracking call stack and thread information

There are a number of ways to gather information about the call stack. One is to instantiate an Exception, but doing this at the beginning and end of each method would be far too slow. A simpler way is for the profiler to manage its own internal call stack. This is easy because start() is called for every method; the only tricky part is unwinding the internal call stack when an exception is thrown. You can detect when an exception has been thrown by checking the expected class and method name when Profile.end() is called.
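That bookkeeping can be sketched with Java 5-era APIs as follows. This is purely illustrative, not JIP's actual implementation; the class name and the depth() helper are invented for the example:

```java
import java.util.LinkedList;

// Per-thread call stack maintained by start()/end(). If an exception
// caused some end() calls to be skipped, end() unwinds until it finds
// the frame it expected on top.
public class CallStackProfile {

    private static final ThreadLocal<LinkedList<String>> STACK =
        new ThreadLocal<LinkedList<String>>() {
            protected LinkedList<String> initialValue() {
                return new LinkedList<String>();
            }
        };

    public static void start(String className, String methodName) {
        STACK.get().addLast(className + "." + methodName);
    }

    public static void end(String className, String methodName) {
        LinkedList<String> stack = STACK.get();
        String expected = className + "." + methodName;
        // Unwind frames abandoned by a thrown exception.
        while (!stack.isEmpty() && !stack.getLast().equals(expected)) {
            stack.removeLast();
        }
        if (!stack.isEmpty()) {
            stack.removeLast();              // pop the matching frame
        }
    }

    public static int depth() {
        return STACK.get().size();
    }
}
```

If B.b() throws and its injected end() call never runs, the next end("A", "a") still leaves the stack consistent, because the loop discards the abandoned B.b frame first.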

Printing is similarly easy to set up. You can create a shutdown hook using Runtime.addShutdownHook() to register a Thread that runs at shutdown time and prints a profiling report to the console.
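A minimal sketch of that shutdown-hook mechanism (the class name and report text are invented for illustration):

```java
// Registering a shutdown hook that runs when the JVM exits -- the same
// mechanism a profiler can use to print its report at shutdown.
public class ReportOnExit {

    public static void main(String[] args) {
        Runtime.getRuntime().addShutdownHook(new Thread() {
            public void run() {
                System.out.println("=== profiling report would print here ===");
            }
        });
        System.out.println("application running");
        // main() returns, the JVM begins an orderly shutdown, and the hook runs.
    }
}
```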




In conclusion

This article introduced you to the current tools and technologies most commonly used for profiling and discussed some of their limitations. You now have a list of features you might expect from an ideal profiler. Finally, you learned how to use aspect-oriented programming and the Java 5 agent interface to build your own profiler incorporating some of these ideal features.

The example code in this article is based on the Java Interactive Profiler, an open-source profiler built using the techniques I've discussed here. In addition to the basic features found in the example profiler, JIP incorporates the following:

JIP is distributed under a BSD-style license. See Resources for download information.





Downloads

Source code for the simple profiler developed here: j-jipsimple-profiler-src.zip (51KB, HTTP)
Source code for the verbose-class example: j-jipverbose-class-src.zip (2KB, HTTP)



Resources




About the author

Andrew Wilcox is a software architect at MentorGen LLC in Columbus, Ohio. He has over 15 years of industry experience, nine of them using the Java platform. Andrew specializes in frameworks, performance tuning, and metaprogramming. His focus is on tools and techniques to increase developer productivity and software reliability. He is also the creator of the Java Interactive Profiler.


source:http://www-128.ibm.com/developerworks/java/library/j-jip/?ca=dgr-lnxw01JavaProfiling


Immersion Wins Latest Round Of Sony 'Rumble' Suit

Immersion Corp., which develops and licenses touch-feedback technology sometimes used in game controllers, has won the latest round in its long-running legal battle against PlayStation 2 manufacturer Sony, regarding the DualShock rumble technology used in all PlayStation 2 game controllers.

This comes after Microsoft settled with the company back in July 2003 over similar Xbox-related patent claims, followed by continuing settlements with smaller companies, most recently peripheral maker Electro Source, thanks to Immersion's wide-ranging patent on the concept.

In the last ruling against Sony, made in early 2005, Judge Claudia Wilken of the U.S. District Court awarded Immersion Corp. $82 million, or 1.37% of Sony's sales of PlayStations and PlayStation-related paraphernalia. The $82 million is less than the $299 million originally sought by Immersion Corp., but the court ruled that Sony's infringement of the vibration patents was not willful and therefore not deserving of the full penalties.

However, in this latest ruling, on Sony's appeal of that $82 million award, Sony's defence rested on the alleged nondisclosure of some of the inventions of Craig Thorner, who has been a consultant both for Immersion and subsequently for Sony, according to a Wall Street Journal report. But, according to the report, U.S. District Judge Claudia Wilken was unhappy with Thorner's testimony supporting Sony, given that he had also been paid by Sony, and so dismissed this line of defence.

According to the reports, another appeal in the U.S. Court of Appeals for the Federal Circuit is expected to be heard this year, but it's expected that the matter will be finally resolved one way or the other in the next few months.

source:http://www.gamasutra.com/php-bin/news_index.php?story=8499
