Tuesday, November 15, 2005
First Xbox 360 Reviews Hitting the Web
source:http://games.slashdot.org/games/05/11/15/1636243.shtml?tid=211&tid=10
'Spyware' vendor bangs copyright shield
RetroCoder, developers of the SpyMon remote monitoring program, is brandishing copyright law in a bid to protect its software from being detected by anti-spyware or anti-virus products.
SpyMon is marketed as a means for the paranoid to surreptitiously monitor the activities of their partners or kids online - behaviour that has brought it to the attention of security vendors.
RetroCoder has countered by confronting visitors to SpyMon's download page with a 'copyright notice' which states that it cannot be examined by security researchers.
"If you do produce a program that will affect this softwares ability to perform its function then you may have to prove in criminal court that you have not infringed this warning. Infringement of a copyright licence is a criminal offence," RetroCoder's End User Licensing Agreement (EULA) states.
It's questionable whether this agreement would withstand legal challenge but RetroCoder is making good on its threat to take security vendors to task for detecting its product. Anti-spyware maker Sunbelt Software has been sent a nastygram threatening legal action against it for labelling SpyMon as spyware.
"If you read the copyright agreement when you downloaded or ran our program you will see that anti-spyware publishers / software houses are NOT allowed to download, run or examine the software in any way. By doing so you are breaking EU copyright law, this is a criminal offence. Please remove our program from your detection list or we will be forced to take action against you," RetroCoder said.
Sunbelt Software is standing firm in its decision to label SpyMon as malware. It's far from alone in labelling SpyMon as potentially harmful. CA, for example, designates the software as a keystroke logger.
Red Herring
RetroCoder's effort to cow security vendors is far from unique. Simon Perry, CA's VP of security strategy in EMEA, said security vendors are getting hit by such legal threats on a regular basis. "I'm not aware of any developer successfully using this tactic in order to get a security vendor to back off," he said.
Perry said that the copyright threat tactic was something of a red herring. "A copyright license has nothing to do with looking at software. You don't need to decompile code to look at its behaviour. By looking at whether software 'does what it says on the tin' you can say whether it's potentially harmful or not," he said. ®
Related stories
ID theft automated using keylogger Trojan (9 August 2005)
Dell rejects spyware charge (15 July 2005)
Symantec ask court to rule Hotbar.com as adware (9 June 2005)
Sophos in porn dialler row with UK developer (30 September 2004)
source:http://www.theregister.co.uk/2005/11/14/spymon/
Meet the man who will save the internet
WSIS Tunis: It’s been four years since the issue of how the internet should be run, and by whom, became an official United Nations topic.
And yet despite hundreds of hours of talks, three preparatory meetings and a world summit, there is only one thing that the world’s governments can agree on: Masood Khan, Pakistan’s ambassador.
If a certain US senator and a certain EU commissioner are to be believed, the internet is five days away from total collapse as governments are finally forced into a corner and told to agree on a framework for future internet governance.
Both are wrong, but there is a very real risk that an enormous political argument centred on the internet, resulting in lifelong ill-will, could develop unchecked at the WSIS Summit.
The fact that it hasn’t already is effectively down to one man: Mr Khan. He was chosen as chair of Sub-Committee A during the WSIS process, and his remit includes all the most difficult and contentious elements - not just internet governance but also how the world will deal with issues such as spam and cybercrime.
Even though press attention has focussed on the undecided question of control of the internet, at the start of the process there were widely varying views on just about every aspect of the internet.
And yet through a mixture of careful, respectful and open dialogue, occasional prodding and a dry sense of humour, Masood Khan has turned what could easily have become a bar-room brawl into a gradual formation of agreement.
Respect
Such is the level of respect and trust he has built up with all parties that at the first restart of the sub-committee this Sunday, every speaker without exception (and that includes countries as diverse as China, Iran, Brazil, Ghana, Argentina, the US and UK) went out of their way to stress how useful Mr Khan’s contribution as chairman was.
In an extraordinary statement, the UK/EU then deferred its entire contribution to the net governance debate to Mr Khan's stewardship. "We will co-operate in any way you choose," the representative told Mr Khan. This was the same UK/EU team that stunned the self-same room in September by producing a radical blueprint for a new form of internet control.
It may seem incredible that something of such importance rests on the careful judgements made by one man, but as it became clear that different governments were going to be unable to find a solution among themselves, each in turn has ceded more control to Mr Khan.
Having chaired dozens of meetings as a careful and unthreatening facilitator, Mr Khan saw his chance and went for it.
"I would encourage you all not to focus on general themes of internet governance but instead go to the heart of the matter,” were his opening words. And then he listed them. “The question of a future mechanism, the question of oversight, and the paradigm of co-operation amongst all stakeholders."
But government representatives only really feel comfortable when talking in gross generalisations or disagreeing with other delegations. Mr Khan summarised the positions and threw them back at delegates. "We have been discussing this issue for four years and people will want some sort of result. We won’t have any voting here, we will work by consensus. If there is a split, it will not make the final agreement. Where there is no agreement, the effort will have to be to convince each other."
Criticism
He criticised those reiterating the same points and the same broad principles, outlined the problems, pushed what he saw as the emerging trends and opened it out to the floor. It’s a measure of his standing that the room did not collapse under the eternal nay-saying that has come to represent Net governance discussions.
When the countries failed to heed his instructions, he told all the main arguing delegates to sit in a room that afternoon and come up with a list of points on which they agreed. Four hours later they came back to the official meetings with nothing. Mr Khan suspended the meeting and told them to go back and do it again.
In a boiling hot and cramped drafting room, the early discussions suffered from the self-same problem of woolly jargon. But when the delegates finished at 10pm, three of their ten points had finally hit upon the hundred-pound gorillas in the room that everyone had been ignoring.
This morning, with the list in front of delegates, Mr Khan again pushed the agenda. The way such meetings work is that each delegation raises its token, is added to the list of speakers and is called upon in turn to speak. It is a non-combative approach proven to help governments gradually reach consensus, but it is painfully slow. Mr Khan upped the pace. In response to one delegation’s comments, he ignored the pretence that the country being referred to is not named, and asked that country outright to respond.
And he did it time and time again, until, eventually, the real points at the heart of internet governance started forming. "Would reform of the GAC [the Governmental Advisory Committee, part of ICANN] answer your points?" he asked Brazil. The Brazilian delegation demurred. "You did not answer the question," Mr Khan came back.
It wasn’t just the Brazilians. The US wasn’t allowed to hide either. Would the US please say whether the word “oversight” is ever going to be acceptable to them? Could the US answer the assertion that other countries do not have adequate control over their own domain?
Tricks
It required some very fast and not entirely persuasive thinking on the part of delegates to avoid making mistakes. Twice, governments tried to stall the whole approach by asking what official standing the document they were creating would have - an age-old diplomatic trick. Mr Khan brushed it aside: "Just wait."
When a letter from ICANN chairman Vint Cerf was mentioned and argued over, Mr Khan found a copy and read the whole thing out. When one delegation suggested a useful compromise or pulled back the diplomatic curtains to produce straightforward language, he signalled his approval. If it got too heated, he made a joke and left the issue alone for the time being.
In such a way, Ambassador Khan has expertly moved a room full of governments that have been unable to get past the same topic for four years onto a path that even the most pessimistic can now see stretching ahead of them.
It is far from over, but when the agreed text on how the internet should be run, and by whom, appears in front of the World Summit and is approved on Friday, it most certainly won't be perfect. That it exists at all will be in no small measure thanks to the remarkable abilities of the unassuming ambassador from Pakistan. ®
Related stories
World Summit blog: Heat, taxis and cous-cous (14 November 2005)
http://www.theregister.co.uk/2005/11/14/wsis_blog_three/
World Summit blog: internet, freedom of speech and the UN (14 November 2005)
http://www.theregister.co.uk/2005/11/14/wsis_blog_two/
World Summit blog: Hotels and women (13 November 2005)
http://www.theregister.co.uk/2005/11/13/wsis_blog_one/
Nations squabble over internet management (11 October 2005)
http://www.theregister.co.uk/2005/10/11/enews_net_governance/
EU outlines future net governance (30 September 2005)
http://www.theregister.co.uk/2005/09/30/eu_net_governance/
Tunis World Summit ‘in great danger’ (28 September 2005)
http://www.theregister.co.uk/2005/09/28/wsis_summit_danger/
WSIS: Who gets to run the internet? (28 September 2005)
http://www.theregister.co.uk/2005/09/28/wsis_geneva/
source:http://www.theregister.com/2005/11/14/masood_khan_wsis/print.html
Unit test your aspects
01 Nov 2005
AOP makes it easier than it's ever been to write tests specific to your application's crosscutting concerns. Find out why and how to do it, as Nicholas Lesiecki introduces you to the benefits of testing aspect-oriented code and presents a catalog of patterns for testing crosscutting behavior in AspectJ.
The widespread adoption of programmer testing over the past five years has been driven by the demonstrable productivity and quality of the resulting code. Prior to the advent of aspect-oriented programming (AOP), however, it was difficult to write certain kinds of tests for crosscutting behavior such as security, transaction management, or persistence. Why? Because this behavior was not well modularized. It's difficult to write a unit test if there's no unit to test. With the popularization of AOP, it has become both possible and desirable to write tests that check crosscutting concerns independent of their realization in a target system.
In this article, I introduce a catalog of techniques for testing crosscutting behavior implemented with aspects. I focus on unit tests for aspects, but I also present other patterns that can help you to build confidence in your aspect-oriented applications. As you'll quickly discover, testing aspects involves many of the same skills and concepts as testing objects, with many of the same practical and design benefits.
I've written this article based on my experiences developing in AspectJ. Many of the concepts should be portable to other AOP implementations, but some are language specific. See Download to download the source code for the article; see Resources to download AspectJ and the AJDT, which you will need to follow the examples.
A good automated test suite for an application should look like the diagram in Figure 1: Isolated tests for individual classes form a broad base that gives lots of test coverage and rapid failure isolation. On top of those sit integration and end-to-end system tests, which verify that the units work in concert. Together, these layers (if they're well constructed and frequently run) can boost your confidence in the behavior of an application.
The unit tests at the base of the pyramid are important for several reasons. First, they help you to stimulate corner cases that may be difficult or tedious to reproduce in an integration test. Second, because they involve less code, they often run faster (and thus you're likely to run them more frequently). Third, they help you think through the interface and requirements of each unit. Good unit tests encourage loose coupling between units, a requirement to get the test running in a test harness.
Figure 1. Layered tests

But what about crosscutting behavior? Imagine a customer requirement: "Check the caller's security credentials before executing any operation on the ATM class." Certainly you could (and should) write an integration test for that requirement. However, non-aspect-oriented development environments make it difficult to write a unit test or otherwise isolate the behavior of "checking security before an operation." This is because the behavior diffuses into the target system and is difficult both for humans to pin down and for tools to analyze. If you develop with aspects, however, you could represent such behavior as advice, applied to any operation that matches a certain pointcut. Now the behavior has first-class representation as a unit, and you can test it in isolation or visualize it using your IDE.
Before diving into a catalog of techniques for unit testing aspects, I should briefly discuss types of failure. Crosscutting behavior breaks down into two major components: what the behavior does (I'll call this the crosscutting functionality) and where the behavior applies (I'll call this the crosscutting specification). To return to the ATM example, the crosscutting functionality checks the caller's security credentials. The crosscutting specification applies that check at every public method on the ATM class.
For real confidence in your implementation, you need to check both the functionality and the specification (or, loosely speaking, the advice and the pointcut). As I proceed with the examples, I'll highlight whether a given test pattern verifies the crosscutting functionality, the specification, or both.
Note that I will focus on testing pointcuts, advice, and the code that supports them. Intertype declarations (and other aspect features) are certainly testable. Some of the techniques I present in this article could be applied to them with minor changes. They also have their own family of techniques, many of which are straightforward. In the interest of saving space, however, I decided not to cover them explicitly in this article.
I've structured this article as a catalog of patterns for testing aspect-oriented code. For each pattern, I describe which failure types it applies to, summarize the pattern, provide an example, and discuss the benefits and drawbacks of the pattern. The catalog is divided into four sections:
- Testing integrated units: This section presents a pattern for testing a piece of an integrated system (in other words, testing both your aspects and non-aspect classes together). This technique is the only way to gain confidence in crosscutting behavior if you don't use aspects and remains a critical tool when you do use them.
- Using visual tools: The two patterns described here leverage AspectJ's IDE support for Eclipse, also known as AJDT. Using visual tools to inspect your application's crosscutting structure is not a testing technique, strictly speaking. However, it will help you to understand and gain confidence in your application's crosscutting concerns.
- Using delegation: This section demonstrates two patterns that help you tease apart the two failure types previously mentioned. By factoring some logic out of your advice and into a helper class (or method), you can write tests that check your application's crosscutting behavior independent of its crosscutting specification.
- Using mock targets: This final section includes three patterns introducing "mock targets": classes that mimic real advice targets and allow you to test both join point matching and advice behavior without integrating your aspect into a real target.
To demonstrate the patterns in the catalog, I use an aspect that implements search-term highlighting (that is, highlighting a user's query terms in the search results). I implemented an aspect very similar to the one I present here at a previous job. Our system had to highlight terms on the results summary page, the detail page, and a number of other places in the application. The fact that it affected so many places made the behavior an ideal candidate for an aspect. The one I present in this article only crosscuts one class, but the principles are the same. Listing 1 contains one implementation of the `Highlighter` aspect:
Listing 1. Highlighter defines highlighting behavior
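Sketched from the surrounding description, the aspect might look roughly like this; the `SearchResult` getter names, the inline regex, and the field name are assumptions rather than the article's exact code:

```java
import java.util.Collection;

public aspect Highlighter {

    // Marker interface plus intertype state: each Highlightable carries
    // the collection of words that should be highlighted.
    public interface Highlightable {}

    private Collection<String> Highlightable.highlightedWords;

    public Collection<String> Highlightable.getHighlightedWords() {
        return highlightedWords;
    }

    public void Highlightable.setHighlightedWords(Collection<String> words) {
        this.highlightedWords = words;
    }

    // Classes can implement Highlightable directly, or be signed up here:
    // declare parents : SearchResult implements Highlightable;

    // Deliberately simple first cut: enumerate each property getter.
    pointcut highlightedTextProperties(Highlightable highlightable) :
        this(highlightable) &&
        (execution(public String SearchResult.getTitle()) ||
         execution(public String SearchResult.getProduct()) ||
         execution(public String SearchResult.getSummary()));

    // Capture the return value of the join point and replace it with a
    // highlighted version of the same.
    String around(Highlightable highlightable) :
            highlightedTextProperties(highlightable) {
        String result = proceed(highlightable);
        Collection<String> words = highlightable.getHighlightedWords();
        if (words == null) {
            return result;
        }
        for (String word : words) {
            // Naive regex: assumes the word contains no metacharacters.
            result = result.replaceAll("(?i)(" + word + ")", "<b>$1</b>");
        }
        return result;
    }
}
```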
The `Highlighter` aspect captures the return value of a join point and replaces it with a highlighted version of the same. It chooses which words to highlight based on a collection of highlighted words stored in an intertype field aboard the `Highlightable` interface. You can apply the `Highlightable` interface to any classes that need to participate in the highlighting behavior, either in the class declaration or using a `declare parents` statement.
I chose a very simple pointcut for the initial version of the example. Later in the article, I rewrite the pointcut as I demonstrate some of the testing patterns.
Testing integrated units
Addresses: Crosscutting functionality and specification
Summary: As I explained in the introduction, aspects submit easily to integration tests. This pattern is very simple: write a test against your system as you would if the behavior were not implemented with aspects. In other words, put objects together, set up state, call methods, and verify the results. The key is to write a test that will fail if the aspect misbehaves or does not apply to the join points you intend it to. If you want the aspect to affect many join points, pick a few representative examples.
Example: An integration test for the Highlighter
In Listing 2, the thing to note is that this test operates just like a test for an application without aspects would. It puts objects together, sets up state, calls methods, and verifies the results.
Listing 2. An integration test for the Highlighter
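A sketch of such a test, reusing the names from the Listing 1 sketch (the `SearchResult` setters and the `<b>` markup are assumptions):

```java
import java.util.Arrays;
import junit.framework.TestCase;

public class HighlightSearchResultsIntegrationTest extends TestCase {

    public void testTitleAndSummaryAreHighlighted() {
        // Ordinary object setup: nothing aspect-specific in sight.
        SearchResult result = new SearchResult();
        result.setTitle("JUnit in Action");
        result.setSummary("A practical guide to unit testing with JUnit.");
        result.setHighlightedWords(Arrays.asList("junit"));

        // If the aspect weaves and behaves correctly, the getters
        // return marked-up text.
        assertEquals("<b>JUnit</b> in Action", result.getTitle());
        assertEquals("A practical guide to unit testing with <b>JUnit</b>.",
                result.getSummary());
    }
}
```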
Integration tests have similar costs and benefits whether or not you are using AOP. In either case, the key benefit is that you are verifying the high-level intent of your code (in other words, that the title and summary are highlighted appropriately). This helps when you perform major refactoring. It also drives out bugs that only show up when components interact.
Relying only on integration tests does lead to a number of problems, however. If the `HighlightSearchResultsIntegrationTest` failed, it could be because the aspect failed to run at all, because the advice logic had a bug, or because of the other involved classes (like the `SearchResult`). In fact, I encountered this exact situation while developing the code for the integration test example. I spent 20 minutes trying to understand why my aspect wasn't running, only to discover that I had an obscure problem with my regular expression!
Integration tests also require more complicated set up and assertions. This makes them harder to write than tests that isolate a single aspect. It's also hard to use integration tests to stimulate all of the edge cases that your code should handle properly.
Behavior that crosscuts a number of classes poses a particular problem for integration tests. Let's say that you wanted consistent exception handling for all of the classes in your application. You wouldn't want to test every class for this new behavior. Rather, you would want to select a representative example. But if you picked a specific domain class (say the `Customer` class) and tested the error handling aspect against it, you would risk muddying the intent of your test. Would the test verify `Customer`'s behavior, or the application's error handling?
Using visual tools
One of the hard things about testing a widespread crosscutting concern is that it can advise so many join points. Executing and checking all the matches can be a real pain. (And testing for the reverse -- the accidental inclusion of an unintended join point -- is even harder.) Accordingly, the next two patterns show the benefits of supplementing normal tests with manual inspection of the crosscutting views available in tools such as AJDT. (The combination of AspectJ and AJDT provides the most visualization support as of this writing; however, other combinations, such as JBoss AOP and the JBoss IDE, provide good visualization tools as well.)
Pattern 1. Inspect crosscutting visually
Addresses: Crosscutting specification
Summary: Use the AJDT's cross-references view as you develop your aspect to see which join points it is likely to advise. Verify manually that the list is complete and does not include join points that should be omitted.
Example: Identifying an unwanted match
Let's say you want to highlight the title, product, and summary of your search results. Rather than enumerating each method as I did in Listing 1, you write what you hope will be a more robust pointcut. (For more on the art of the robust pointcut, see Adrian Colyer's blog entry in Resources.) The following pointcut seems to capture the intent of the original:
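One plausible shape for that pointcut, reusing the names from the Listing 1 sketch:

```java
// Every public String getter on SearchResult, instead of an
// enumerated list of methods.
pointcut highlightedTextProperties(Highlightable highlightable) :
    this(highlightable) && execution(public String SearchResult.get*());
```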
When you inspect the pointcut using the AJDT's cross-references view, however, you see what's shown in Figure 2:
Figure 2. Four advised join points in the AJDT cross-references view

Notice that there is an extra match: `SearchResult.getWebsite()`. You know that the website is not supposed to be highlighted, so you rewrite the pointcut to exclude that unintended match.
Using AJDT's cross-references view to inspect crosscutting specifications has three major advantages. First, the cross-references view gives you instant feedback as you develop your aspects. Second, it lets you easily detect consequences that would be difficult to test for. (To write a test that verified that `getWebsite()` was not highlighted, you would need to either guess that `getWebsite()` was a likely source of error or check every `String` getter on `SearchResult`. The more unlikely the error, the harder it is to test against it preemptively.) Third, the automatically generated view can verify positive cases that would be tedious to verify in code. For example, if the search highlighter were to affect 20 join points by design, inspecting the cross-references view would be easier than writing a test for each join point.
The main drawback of using views for verification is that inspection cannot be automated. It requires programmer discipline. A hurried programmer could inspect Figure 2 and not catch the bug. (The next pattern presents a partial solution to this problem.) Another problem is that the crosscutting views only show matches based on static join point shadows. In other words, if you have a pointcut that relies on runtime checks, such as `cflow()` or `if()`, the cross-references view cannot say for sure that the join point will match at run time, only that it is likely to.
Pattern 2. Inspect changes with crosscutting comparison tools
Addresses: Crosscutting specification
Summary: Use the crosscutting comparison feature of AJDT to save a crosscutting map of your project before a refactoring or another code change. Save another map after you complete the change. (You could also save a map nightly to compare against.) Compare the maps in the crosscutting comparison tool to detect any unwanted changes to the join points affected by your aspects. Note that as of this writing, only AJDT provides a crosscutting comparison tool.
Let's say that, to correct the problem shown in the previous example, you've decided to change the pointcut to use Java 5 annotations, as shown here:
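A sketch of that change; the annotation's name matches the later examples, but its retention policy and the revised pointcut are assumptions:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marks exactly the getters whose results should be highlighted.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Highlighted {}
```

```java
// The pointcut now keys off the metadata rather than a name pattern.
pointcut highlightedTextProperties(Highlightable highlightable) :
    this(highlightable) && execution(@Highlighted String get*());
```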
You then add the annotation to the source at appropriate places, for example:
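For example (a hypothetical `SearchResult` fragment):

```java
public class SearchResult implements Highlightable {
    private String title;
    private String summary;

    @Highlighted
    public String getTitle() {
        return title;
    }

    // Forgetting the annotation here silently drops the match:
    // exactly the regression that Figure 3 reveals.
    public String getSummary() {
        return summary;
    }
}
```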
Your next step is to compare the snapshot of the project taken before the change with the one after the change and get the result shown in Figure 3. As you can see, the refactoring removed the advice match on `getWebsite()`, but also the match on `getSummary()`. (It looks as if you failed to add an annotation.)
Figure 3. Results of a change shown in the crosscutting comparison tool

This technique is really a refinement of the previous technique. By only showing the changes, the crosscutting comparison tool can help prevent information blindness. Also, whereas the cross-references view requires that you select advice or a class that you wish to analyze, the crosscutting comparison tool lets you inspect changes from your entire project.
On the downside, the crosscutting comparison view can degrade if an aspect affects many join points. Consider an aspect that logs all public methods. Such an aspect would add dozens of new changes to the crosscutting view after even a day's worth of development, making it difficult to see other, more important changes. In an ideal world, the crosscutting comparison tool would be highly configurable, issuing warnings for changes to certain aspects and ignoring changes related to other aspects.
Using delegation
Aspects can and often do implement their crosscutting behavior using ordinary objects. You can leverage this separation of concerns to test their behavior separately from the crosscutting specification. The next two patterns illustrate how to employ delegation and mock objects to check both aspects of your aspect (pun intended).
Pattern 1. Test delegated advice logic
Addresses: Crosscutting functionality
Summary: If you have not already done so, delegate some or all of your advice logic to another class that you can test directly. (You can also delegate the behavior to a public method on the aspect if you choose.)
Example: Move the highlighting logic to another class
To better test the highlighting logic in isolation, you move it into a dedicated utility class:
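A sketch of the extracted helper, along with the slimmed-down advice; the injectable `highlightUtil` field and its accessors are assumptions that the mock-injection pattern below relies on:

```java
import java.util.Collection;

// Extracted helper: pure string manipulation, testable with no weaving.
public class HighlightUtil {
    public String highlight(String text, Collection<String> words) {
        String result = text;
        for (String word : words) {
            result = result.replaceAll("(?i)(" + word + ")", "<b>$1</b>");
        }
        return result;
    }
}
```

```java
// Inside the Highlighter aspect, the advice now only delegates:
private HighlightUtil highlightUtil = new HighlightUtil();

public HighlightUtil getHighlightUtil() { return highlightUtil; }
public void setHighlightUtil(HighlightUtil util) { this.highlightUtil = util; }

String around(Highlightable highlightable) :
        highlightedTextProperties(highlightable) {
    return highlightUtil.highlight(
            proceed(highlightable), highlightable.getHighlightedWords());
}
```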
By extracting the highlighting logic, you can write unit tests for it by calling methods on the `HighlightUtil` class.
This technique makes it easier to stimulate edge cases in your domain logic. It also helps to isolate bugs; if the test for the helper fails, you know that it, not the aspect, is to blame. Finally, delegating logic often leads to a cleaner separation of concerns. In the example, by extracting it to another class, text highlighting becomes an operation that other parts of the system can use independently of this aspect. In turn, the aspect gains the flexibility to use alternate highlighting strategies (CSS highlighting for HTML, all-caps highlighting for plain text).
On the negative side, this technique doesn't work when the logic is difficult to extract. For example, it may be best to leave simple logic inlined. Also, some aspects store state, either locally or in ITDs on the classes they advise. State storage often forms a significant part of the logic of the aspect, and it can't always be moved cleanly into a helper.
Pattern 2. Use mock objects to record advice triggering
Addresses: Crosscutting specification and functionality
Summary: This technique naturally complements the previous one. If you have extracted advice behavior to another class, you can substitute a mock object for your helper object and verify that the advice triggers at the right join points. You can also verify that the advice passes the correct context to the helper, either directly from the advice parameters or from previously stored state.
Note: If you need an introduction to mock objects, see Resources.
Example: Using a mock HighlightUtil to test the Highlighter aspect
You've already seen how the aspect delegates to another class to handle the actual text highlighting. This paves the way for injecting a different implementation of the highlighter into the aspect during the test. The code in Listing 3 does this by leveraging the JMock library. (See Resources.)
Listing 3. Using JMock to test calls from the aspect
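A test in the spirit of Listing 3 might look like this; it assumes the accessors sketched above, and it extends jMock 1's CGLIB-backed `MockObjectTestCase` so that the concrete `HighlightUtil` class can be mocked:

```java
import java.util.Arrays;
import java.util.Collection;
import org.jmock.Mock;
import org.jmock.cglib.MockObjectTestCase;

public class HighlighterTest extends MockObjectTestCase {
    private Mock mockUtil;
    private HighlightUtil originalUtil;
    private SearchResult result;

    protected void setUp() throws Exception {
        super.setUp();
        // Remember the real helper so tearDown() can restore it.
        originalUtil = Highlighter.aspectOf().getHighlightUtil();
        mockUtil = mock(HighlightUtil.class);
        Highlighter.aspectOf().setHighlightUtil((HighlightUtil) mockUtil.proxy());

        result = new SearchResult();
        result.setTitle("Unit test your aspects");
        result.setHighlightedWords(Arrays.asList("aspects"));
    }

    public void testAdviceTriggersAndPassesContext() {
        Collection<String> words = result.getHighlightedWords();
        mockUtil.expects(once()).method("highlight")
                .with(eq("Unit test your aspects"), eq(words))
                .will(returnValue("Unit test your <b>aspects</b>"));

        // Calling the advised getter should route through the mock.
        assertEquals("Unit test your <b>aspects</b>", result.getTitle());
    }
}
```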
The `setUp()` method instantiates the mock object and injects it into the aspect. The test method tells the mock to expect a call to a method with the name "highlight" taking two arguments: the return value from `getTitle()` and the words list stored on the `SearchResult`. Once the expectation is set, the test calls the `getTitle()` method, which should trigger the aspect and result in the expected call to the mock. If the mock does not receive the call, it will fail the test automatically during tear down.

Note that the `setUp()` method stores a reference to the original `HighlightUtil`. That's because the aspect, like most, is a singleton. Because of this, it's important to undo the effects of the mock injection during tear down; otherwise, the mock could persist in the aspect and affect other tests. The correct tear down for this aspect is shown here:
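A sketch of that tear down, matching the `setUp()` above:

```java
// In HighlighterTest:
protected void tearDown() throws Exception {
    // Undo the injection: the singleton aspect would otherwise carry
    // the mock into unrelated tests.
    Highlighter.aspectOf().setHighlightUtil(originalUtil);
    super.tearDown(); // also lets jMock verify its expectations
}
```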
This pattern complements the previous one, except that it tests the crosscutting specification and context-handling of the aspect rather than the crosscutting behavior. Because you are not burdened by checking for indirect side effects in the outcome of the aspect, you can more easily stimulate corner cases in the join-point matching and context-passing behavior.
It's important to note that the benefits and drawbacks of delegating logic and then testing using mocks are similar whether you're applying the technique to objects or aspects. In both cases, you separate concerns and then validate each concern in a more isolated way.
There's one problem unique to aspects when it comes to injecting mocks. If you use singleton aspects (the default), any change you make to an aspect's fields, such as replacing one with a mock, must be undone at the end of the test. (Otherwise, the mock will hang around and may affect the rest of the system.) This tear-down logic is a pain to implement and remember. Writing a test-cleanup aspect to automatically reset aspects like the one in the example after each test is conceptually simple, but the details are beyond the scope of this article.
Using mock targets
In this final section, I introduce a term I invented to describe a type of test helper that is useful in writing aspect tests: mock targets. In the pre-aspect world, a mock object denoted a class (handwritten or dynamically generated) that imitated a collaborator for some class you were attempting to test. Similarly, a mock target is a class that imitates a legitimate advice target for some aspect you are attempting to test.
To create a mock target, write a class that has some structure or behavior similar to what you would like to advise in production. For example, if you are interested in the highlighting of text returned by a getter, you could write a mock target like this one:
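For example (the class and method names are illustrative):

```java
// Mimics a legitimate advice target, purely for test purposes.
public class HighlightMockTarget implements Highlightable {
    public String getSomeString() {
        return "some highlightable text";
    }
}
```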
Then, you would write your test case to verify that the aspect correctly interacts with the target, as shown in Listing 4:
Listing 4. Interacting with a mock target to test advice
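A test along these lines might read as follows (the tear-down restore is omitted for brevity). Note that, as the next pattern explains, it can only pass once the aspect is generalized enough to reach the mock target:

```java
import java.util.Arrays;
import org.jmock.Mock;
import org.jmock.cglib.MockObjectTestCase;

public class HighlightMockTargetTest extends MockObjectTestCase {

    public void testAdviceAppliesToMockTarget() {
        Mock mockUtil = mock(HighlightUtil.class);
        Highlighter.aspectOf().setHighlightUtil((HighlightUtil) mockUtil.proxy());

        HighlightMockTarget target = new HighlightMockTarget();
        target.setHighlightedWords(Arrays.asList("text"));

        mockUtil.expects(once()).method("highlight")
                .with(eq("some highlightable text"), eq(target.getHighlightedWords()))
                .will(returnValue("some highlightable <b>text</b>"));

        assertEquals("some highlightable <b>text</b>", target.getSomeString());
    }
}
```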
Note that in this example, I combine mock targets with mock objects (as described in Section III, Pattern 2). Mock targets underpin the next three techniques.
Pattern 1. Test advice by extending an abstract aspect and providing a pointcut
Addresses: Crosscutting functionality
Summary: Prework: If necessary, rewrite your aspect to split it into an abstract aspect and a concrete aspect which extends it and concretizes one or more pointcuts.
Once you have an abstract aspect, create a mock target inside your test class. Create a test aspect that extends your abstract aspect. Have the test aspect supply a pointcut that targets your mock target explicitly. The test verifies that the advice in the aspect succeeds by either looking for a known side-effect of the advice or by using a mock object.
Example: Extending AbstractHighlighter
Assume that you've already written the test code from the previous section. To make the test pass, you would have to split the `Highlighter` aspect into an abstract aspect and a subaspect, as shown here:
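A sketch of the split, carrying over the names used so far:

```java
// The crosscutting functionality moves up; only the "where" question
// is left open for subaspects to answer.
public abstract aspect AbstractHighlighter {

    protected abstract pointcut highlightedTextProperties(Highlightable highlightable);

    // (The Highlightable interface and its ITDs from Listing 1 would
    // move up here as well.)
    private HighlightUtil highlightUtil = new HighlightUtil();

    public HighlightUtil getHighlightUtil() { return highlightUtil; }
    public void setHighlightUtil(HighlightUtil util) { this.highlightUtil = util; }

    String around(Highlightable highlightable) :
            highlightedTextProperties(highlightable) {
        return highlightUtil.highlight(
                proceed(highlightable), highlightable.getHighlightedWords());
    }
}

// The production subaspect now contributes nothing but the pointcut.
public aspect Highlighter extends AbstractHighlighter {
    protected pointcut highlightedTextProperties(Highlightable highlightable) :
        this(highlightable) && execution(public String SearchResult.get*());
}
```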
Next, you would extend the `AbstractHighlighter` aspect again with an aspect just for your test case. Here I show it as a static inner aspect of the test case:
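Sketched below; restricting the pointcut to `String` getters is an assumption that keeps the `around` advice's return type compatible with every matched join point:

```java
public class AbstractHighlighterTest extends MockObjectTestCase {

    // Test-only subaspect: pins the abstract behavior to the mock target.
    static aspect TestHighlighter extends AbstractHighlighter {
        protected pointcut highlightedTextProperties(Highlightable highlightable) :
            this(highlightable) && execution(String HighlightMockTarget.get*());
    }

    // The Listing 4 test now injects its mock into
    // TestHighlighter.aspectOf() instead of Highlighter.aspectOf().
}
```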
This aspect concretizes the `highlightedTextProperties` pointcut by selecting all method executions on the mock target.
Clearly, the test exercises an artificial situation. You're testing a fake aspect against a fake object. However, this simply means that you are not testing the real pointcut. You can still verify the advice and ITD code specified by the abstract aspect. In the example, the test verifies that the advice correctly marshals data from ITDs as well as the return value of the original join point, passes it to a utility class, and returns the new result. That's a non-trivial amount of behavior. Using a mock target also makes the test clearer because test readers will not have to reason about the behavior of a real target as well as the behavior of the aspect. This sort of test is particularly useful if you're writing unit tests for an aspect library, because there will be no real targets until the aspect is woven into a separate application.
If you split your aspect to take advantage of this pattern, you may also be making it more extensible. If new parts of the system need to participate in the highlighting behavior, for instance, they can simply extend the now-abstract aspect and define a pointcut that covers the new situation. In effect, the abstract aspect is decoupled from the system it advises.
Pattern 2. Test pointcut matching with mock targets
Addresses: Crosscutting specification and functionality
Summary: This technique relates closely to the previous one. Instead of extending an abstract aspect, this time you write your mock target so that it matches a pointcut on the aspect to be tested. You can test that the pointcut is correct by checking whether the aspect advises the mock target. If the pointcut you wish to test is overly specific, you may need to rewrite it so that the mock target can more easily "subscribe" to the advice.
Example: Testing pointcuts based on a marker interface
Instead of making the highlighting aspect abstract, you could rewrite your pointcut so that it matches method executions on the `Highlightable` interface:
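One plausible version of that pointcut:

```java
// Any String getter on any Highlightable; no concrete classes named.
pointcut highlightedTextProperties(Highlightable highlightable) :
    this(highlightable) && execution(public String Highlightable+.get*());
```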
This broad pointcut matches any `String` getter on a `Highlightable`. Because the pointcut does not enumerate specific classes, it already matches the `getSomeString()` method on the mock target. The rest of the test stays the same.
Variation: Using an annotation
You could also write your pointcut to match partially based on Java 5.0 metadata. For example, the following revised pointcut matches method executions that are decorated with the `@Highlighted` annotation:
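For example (reusing the assumed `@Highlighted` annotation from earlier):

```java
// Match on metadata rather than on a marker interface's type hierarchy.
pointcut highlightedTextProperties(Highlightable highlightable) :
    this(highlightable) && execution(@Highlighted String get*());
```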
You can make the mock target match your new pointcut by adding the annotation to its `getSomeString()` method:
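For instance:

```java
public class HighlightMockTarget implements Highlightable {

    @Highlighted
    public String getSomeString() {
        return "some highlightable text";
    }
}
```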
This technique also clearly separates the testing of aspect behavior from the behavior of the target application, allowing the tests to be more self-contained. If your pointcuts were not already written to accommodate your mock targets, you could end up with a more decoupled aspect by rewriting them. By making your aspect general enough to affect a mock target inside a test class, you ensure that it's also easy for a real class to participate in the aspect's behavior.
Pattern 3. Verifying more complex pointcuts (a special case)
Addresses: Crosscutting specification and functionality
Summary: The previous mock target was simple, but you can also write mock targets to simulate complex join points (such as `cflow()`) or sequences of join points that you wish to affect.
Let's say you wanted to turn off highlighting for downloaded reports. You could add a `highlightExceptions` pointcut to exclude any getters called by the `ReportGenerator`, as shown here:
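A sketch of that exclusion; the `ReportGenerator+` subtype pattern is an assumption, chosen so that test subclasses fall inside the excluded control flow as well:

```java
// Any join point that occurs within report generation is off-limits.
pointcut highlightExceptions() :
    cflow(execution(* ReportGenerator+.*(..)));

pointcut highlightedTextProperties(Highlightable highlightable) :
    this(highlightable) && execution(@Highlighted String get*())
        && !highlightExceptions();
```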
Then you could write a mock `ReportGenerator` that called the `HighlightMockTarget` to test that no highlighting had occurred:
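A matching sketch; it assumes `ReportGenerator` can be subclassed, with JUnit imports as in the earlier examples:

```java
import java.util.Arrays;
import junit.framework.TestCase;

public class HighlightExceptionsTest extends TestCase {

    // A fake ReportGenerator that exercises the excluded control flow.
    static class MockReportGenerator extends ReportGenerator {
        public String generate(HighlightMockTarget target) {
            return target.getSomeString();
        }
    }

    public void testNoHighlightingDuringReportGeneration() {
        HighlightMockTarget target = new HighlightMockTarget();
        target.setHighlightedWords(Arrays.asList("text"));

        // Inside the ReportGenerator cflow, the getter should come
        // back un-highlighted.
        assertEquals("some highlightable text",
                new MockReportGenerator().generate(target));
    }
}
```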
However, you can imagine creating similar mock targets for more complex matching situations (for example, `somePointcut() && !cflowbelow(somePointcut())`). Visualization tools do not give good information about matching for pointcuts that use runtime checks such as `cflow()`. Checking such pointcuts with a few representative mock targets is worthwhile.
When I see untested code, I get the jibblies. Code without a good test suite is typically buggy, hard to change with confidence, and poorly factored. If you implement your crosscutting behavior with aspects, however, you gain new ways to test (and understand) your application's crosscutting concerns.
Testing aspects is a lot like testing objects. In both cases, you need to break the behavior into components that you can test independently. A key concept to grasp is that crosscutting concerns divide into two different areas. First, there is the crosscutting specification, where you should ask yourself what parts of the program the concern affects. Second, there is the functionality, where you should ask what happens at those points. If you are only using objects, these two areas intertwine as your concern tangles itself throughout your application. However, with aspects, you can target one or both of these areas in isolation.
Writing aspects to be testable yields design benefits parallel to those achieved by factoring object-oriented code for testability. For instance, if I move my advice body into an independently testable class, I can analyze the behavior without necessarily needing to understand the way it crosscuts the application. If I modify my pointcuts to make them more accessible to mock targets, I also make them more accessible to non-test parts of the system. In both cases, I increase the flexibility and pluggability of the system as a whole.
A while ago, I heard a rumor circulating that aspect-oriented programs couldn't be tested. Although that rumor has mostly died out, I still think of it as a challenge. I hope that this article has demonstrated that not only can you test aspects, but that when it comes to testing crosscutting, you're a lot better off if you've used aspects in the first place.
This article owes much to Ron Bodkin, Wes Isberg, Gregor Kiczales, and Patrick Chanezon who reviewed earlier drafts and provided helpful insights and corrections.
Download
| Description | Name | Size | Download method |
| --- | --- | --- | --- |
| Article source: Eclipse 3.1/AJDT 1.3 project | j-aopwork11-source.zip | 337 KB | FTP |
Resources
Learn
- Unit testing with mock objects (Alex Chaffee and William Pietri, developerWorks, November 2002): Learn more about mock objects.
- Hacking with Harrop...: Adrian Colyer explains how to combine aspects and dependency injection.
- I don't want to know that ...: Adrian's primer on the art of writing robust pointcuts.
- Enhance design patterns with AspectJ (Nicholas Lesiecki, developerWorks, April 2005): AOP makes patterns lighter, more flexible, and easier to reuse (two parts).
- Design with pointcuts to avoid pattern density (Wes Isberg, developerWorks, June 2005): Revisits JUnit: A Cook's Tour using aspect-oriented rather than object-oriented designs.
- Virtual Mocking ... with jMock (May 28, 2005): Ron Bodkin describes his work on ajMock.
- AOP@Work: Programming tips for aspect-oriented developers.
- The Java technology zone: Hundreds of articles about every aspect of Java programming.
Get products and technologies
- AspectJ home page: Download AspectJ and the AJDT.
Discuss
- Participate in the discussion forum.
- developerWorks blogs: Get involved in the developerWorks community.
Nicholas Lesiecki is a recognized expert on AOP in the Java language. In addition to coauthoring Mastering AspectJ (Wiley, 2003), Nick is a member of AspectMentor, a consortium of experts in aspect-oriented software development. He has spoken about applying AspectJ to testing, design patterns, and real-world business problems in such venues as SD West, OOPSLA, AOSD, and the No Fluff Just Stuff symposium series. He currently serves Google as a Software Engineer and Programming Instructor.
source:http://www-128.ibm.com/developerworks/java/library/j-aopwork11/index.html?ca=dgr-lnxw01AOPtesting