Monday, May 15, 2006

There's one born every minute: spam and phishing

It's been a little while since I launched SpamOrHam.org, where people perform a spam-filtering task and their results are compared against best-of-breed spam filters. I set out to check that the spam filters were doing a good job, on the assumption that people would be able to spot errors the filter was making.

Bad assumption. It turns out, based on preliminary data, that people suck at spam filtering. Here are some initial figures: people agree with 89.1% of the classifications they've examined. Now, that could mean the original spam filter sucked, but guess again!

Ignoring all the emails that have been voted on only once, and looking at the emails seen by multiple people (who've agreed on whether the message is a ham or a spam), there are some really surprising results:

Here's one that people think is a spam:

[screenshot of the message in question]

and this one too:

[screenshot of the second message]

and many people think this US Airways message is spam:

[screenshot of the US Airways message]

Now for the prize-winning classification. To the people who thought the following phish was a genuine message: could you please forward your bank account details and PIN to me so that I can deposit your prize in your account:

[screenshot of the phishing message]

Happily, people are finding genuine errors that the spam filter made. For example, this really is a genuine message from Travelocity and not a spam:

[screenshot of the Travelocity message]

source:http://www.jgc.org/blog/2006/05/theres-one-born-every-minute-spam-and.html

MegaTexture technology

One of the most respected and well-known game developers in the world, John Carmack hardly needs any introduction. Having mastered the skill of game programming, Carmack co-founded developer id Software, and has also worked on such classic series as Doom, Quake and Wolfenstein 3D.

In this Q&A, Carmack discusses the new MegaTexture technology, which will be used in the upcoming Enemy Territory: Quake Wars for PC. Definitely a worthy read for any programming, design or general development enthusiast, as well as any gamer even slightly interested in the development process behind games.


Q1: What is MegaTexturing technology?

Answer: MegaTexture technology is something that addresses resource limitations in one particular aspect of graphics. The core idea of it is that when you start looking at outdoor rendering and how you want to do terrain and things in general, people almost always wind up with some kind of cross-fade blended approach, where you tile your textures over it and blend between them and add little bits of detail here and there. A really important thing to realize about tiling textures generally, which we're so used to accepting in games, is that when you have one repeated pattern over a bunch of geometry, the texture tiling and repeating is really just a very, very specialized form of data compression: it allows you to take a smaller amount of data and have it replicated over multiple surfaces, or multiple parts of the same surface, since in a game you generally don't have enough memory to have the exact texture that you'd like everywhere.

The key point is that what you really want is to be able to have as much texture as you want to use, where you have something unique everywhere. Now normally, you just can't get away with doing that, because if you allocate a 32,000 by 32,000 texture, the graphics card can't render directly from that. There's not enough memory in the system to do that, and even when you have normal-sized textures, games are always up against the limits of the graphics card memory, and system memory, and eventually you've got hard drive or DVD storage beyond that, so you wind up with a lot of different swapping schemes, where you'll have a little low-res version of a texture, and then high-res versions that you bring in at different times, and a lot of effort goes into trying to manage this one way or the other.

So when Splash Damage was starting, really early, on Enemy Territory: QUAKE Wars, they were looking at some of these different ways to render the outdoor scenes with different blends and things like that. And one of my early suggestions to them was that they consider an approach where you just use one monumentally large texture, and that turned out to be 32,000 by 32,000. And rather than doing it the conventional way that you would approach something like this (i.e., chopping up the geometry into different pieces, mapping different textures onto them, and incrementally swapping low-res for high-res versions), I just let them treat it as one uniform geometry mesh with this effectively unbounded texture size on it, and use a more complicated fragment program to go ahead and pick out exactly what should be there, just as if the graphics hardware and the system really did support such a huge texture.
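
To make that concrete, here is a toy sketch in Python (rather than an actual fragment program) of the address translation Carmack is describing. Every name and number in it is an illustrative assumption, not id Software's implementation: the huge virtual texture is carved into pages, only a few of which are resident in memory at once, and every sample is redirected through a page table, with misses queued for streaming.

# Toy model of virtual-texture lookup: sample a notional 32k x 32k texture
# even though only a handful of small pages are actually resident.
VIRTUAL_SIZE = 32768          # the "monumentally large" virtual texture
PAGE_SIZE = 128               # carve it into 128x128-texel pages (assumed)
pending_requests = set()      # pages the streaming system should load
page_table = {}               # (page_x, page_y) -> resident page texels

def sample(u, v):
    """Sample virtual UV coordinates in [0, 1) as if the hardware really
    supported the full-size texture."""
    x, y = int(u * VIRTUAL_SIZE), int(v * VIRTUAL_SIZE)
    page = (x // PAGE_SIZE, y // PAGE_SIZE)
    resident = page_table.get(page)
    if resident is None:
        # Miss: queue the page for streaming and return a placeholder
        # (a real renderer would fall back to a lower-resolution mip).
        pending_requests.add(page)
        return (128, 128, 128)
    # Hit: index into the resident page with the in-page offset.
    return resident[y % PAGE_SIZE][x % PAGE_SIZE]

# Usage: make one page resident, then sample inside and outside it.
page_table[(0, 0)] = [[(255, 0, 0)] * PAGE_SIZE for _ in range(PAGE_SIZE)]
print(sample(0.001, 0.001))   # hits the resident page -> (255, 0, 0)
print(sample(0.9, 0.9))       # miss -> placeholder, page (230, 230) queued

The mesh and its texture coordinates never know any of this is happening; the indirection lives entirely in the per-sample lookup, which is why the geometry can stay one uniform mesh.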

In the end, what this winds up getting us is the ability to create a great outdoor terrain texture that has far more complex interactions than anything you would get with any kind of conventional rendering, where you've built it up out of pieces of lots of smaller textures, with some sophisticated things like growing grass up between bump maps. And then you can go back and do hand touch-ups in a lot of different places to accent around features that are coming out of the surface. And this type of thing is, I'm very sure, going to become critically important as we go forward into the next generation of technologies. We've seen this over and over as we've gone through graphical technology improvements over the years: there will be certain key elements that make games look really dated because they don't have the capabilities that people are seeing in the cutting-edge titles. And this type of unique texturing, I think, is going to be one of those over the coming generation of games; when people look back at a game that's predominantly tiled and doesn't have that unique, artist-touched sense about all of its scenes, it's going to look very previous-generation.

Q2: What’s the benefit on the top most level just for gamers, of the MegaTexture. And the second part of that is what’s the benefit as the developer?

Answer: Well, for the user the bottom line is just that it looks better. You wind up with something that has a diversity that you don't get with more conventional terrain-generation systems. For the developer, looks are still important for games. If you look at a game and you make it look better, it's a better game, so long as you don't impact the gameplay negatively. So it's nothing profound and fundamental; it's just one tiny little aspect of graphics rendering that's just better now.



Q3: Aside from the visual aspect of the terrain looking better, do you think there will be any other foreseeable differences to us gamers that are playing MegaTexture games?

Answer: It’s just the variety and the diversity of it. Like I said at the very beginning, this is only a very small aspect of graphics, let alone of games in the larger sense. It’s a specific little piece of technology that addresses texture resource limitations, and this entire technology would not need to exist if you had four gigabyte graphics cards, and lots more RAM. In fact, so much of programming and graphics programming in particular is just trying to pretend that we’ve got hardware that’s five or 10 years more advanced than what we’ve got right now by making various algorithmic trade offs.

Q4: How is the MegaTexture a major step forward for game graphics?

Answer: My core comment here is that any repeating use of a texture is just very specialized data compression. Any time you have one set of texture data present in more than one place on the screen, it's really an approximation to what an ideal, infinite-resource video game would provide. Because in the real world there aren't any repeats: even things that look like they repeat, like bricks or drywall, are uniquely different. The subtle differences that you get are the things that distinguish a rendering, especially a game rendering, from something that's very realistic.

The MegaTexture allows us to have terrain in QUAKE Wars that does not require any repeated textures for resource-limitation reasons. There may still be some areas where a texture is repeated just because they didn't feel like doing anything better, but there was no resource limitation that encouraged or required them to do that. They are perfectly capable of having an artist go in and add 10 million little tiny touches to the level if they chose to do so. It's taken it from being a resource constraint to something that is a design trade-off.

Q5: Does MegaTexturing technology bring any specific limitations with it?

Answer: No. There’s no limit to dynamically changing it. That’s one of the neat things about it – to the graphics engine, it looks like you’re just texturing on top of arbitrary geometry. You can move it around and all of that. With the technology in Enemy Territory: QUAKE Wars, there are some issues with deforming the texture coordinates too much. You’ll get areas that are blurred more than you would expect with a conventional texturing, and that’s something that’s fixed in my newer rev of technology.

There are some minor things you have to worry a little bit about. If you stretched it up too steep a cliff side, there would be some blurring involved, even if you adjusted the texture coordinates somewhat. And you can crutch around that a little bit. That's also a problem that's been fixed in the newer rev of the technology that we've got right now.



Q6: So would you consider the fact that the MegaTexture paints all of the terrain with one enormous texture an advantage to level of detail or a limitation?

Answer: Level-of-detail-wise, the terrain does not render with any sophisticated geometry-morphing scheme. That's one of those areas where I think most of the research that's gone into it over the years has been wasted. Geometry level of detail on terrain...there have been thousands of papers written about it, and I honestly don't think it's all that important. The way the hardware works, you're so much better off setting down a static mesh that's all in vertex and index buffers and just letting the hardware plow through it, rather than having the CPU attempt some really clever cross-blended interpolation of vertices.

In an infinite-sized world, you would have to include some degree of level of detail. The Quake Wars levels are not infinite in size. They're bounded. And it generally turns out to be best to just have the geometry at a reasonable level of detail and very efficiently rendered.

But the MegaTexture would work just fine if you wanted to use it on something where you were dynamically level-of-detailing the terrain. That is one of the nice aspects of it: to the application, it just looks like you can texture with an infinite-sized texture. You don't have to worry about breaking it up on particular boundaries or anything special like that.

Q7: How do you see the MegaTexture developing in the next few years?

Answer: The particular version that's in the Splash Damage code is essentially already abandoned; the newer version of the stuff that I've got is a superset that allows us to use it for arbitrary textures and has a few other nice benefits. It was one of those things where, if I had thought about it at the beginning, then I probably would have done it back then.

But from a technology-development standpoint, content-wise, the technologies that Splash Damage developed for creating these terrains, and some of the stuff that I was working on to modify MegaTextures artistically, are the cornerstones of what we're using going forward for content creation.

Q8: Do you think that, since it's a solution that's working in Enemy Territory: QUAKE Wars, it's eventually going to be used in other software?

Answer: Correct. What's exciting is that I did this stuff a long time ago, when I first did the initial MegaTexture work for Splash Damage, which is specialized for terrains. The MegaTexture works for things that are topologically a deformed plane, like an outdoor surface, and it has certain particular limitations on how much you can deform the texture mapping. For the better part of a year after that initial creation, I was sort of struggling to find a way to take a similar technology that creates this unique mapping of everything and use it in a more general sense, so that we could have it on architectural models, and arbitrary characters, and things like that.

Finally, I found a solution that lets us do everything that we want in a more general sense, which is what we’re using in our current title that’s under development. That was one of those really happy programmer moments, where I knew that this sense of unique texturing was a really positive step forward for what we could do artistically with the game. I just hadn’t hit on the right thing for a long time, and then, finally, when I did settle down and come up with a technology that works for all of that, it was a good moment.

Q9: Do you think it is inevitable that this would be a wheel that the other guys are going to have to reinvent, too?

Answer: Yes. Although most graphics rendering stuff is not that incredibly mysterious and difficult. It used to be that people were always looking for the black magic in the code someplace, but it's not that big of a deal. And especially now, there are hundreds and hundreds of graphics programmers out there who, as soon as they see this type of stuff and read an article about it, can go out and start implementing some of the same things. I expect that pretty much will happen.

I would say that the greater differentiation will be in the two ((inaudible)) that go into allowing people to make effective use of this, because the core technology to do this is tiny. There's one file of source code that manages the individual blocks, and then the fragment program stuff for this is like a page. It's not that big of a deal. It's an architectural and mindset change that you have to make to decide to actually build a project that's going to leverage this type of technology.

Q10: Why do you think other developers haven’t done anything like this before?

Answer: One aspect of it is certainly the fear of unbounded development time. That's something that you can look at and say, "Oh my gosh, we make this many megabytes of textures. If we uniquely texture the entire world, it's going to be 50 times that. How are we going to get that done?" Generally that's a bad way to look at things, because while you now have the ability to uniquely texture everything, nothing is forcing you to. You can still set up and use the technology just like any old system where you repeat pictures; it's just that now you have the ability to do it everywhere you want to, anywhere your fancy strikes you or your artist wants to go in and touch everything up to make an area look better. But the worry about development time certainly is an issue, and has been an issue for many years now. Specifically, there's a significant concern that it's not such a good idea to develop a technology that is only going to make a game finish later and later. Anything that you include that allows more capabilities will take longer to optimize. There are very, very few things that you can do that automatically take the same effort but produce something drastically better.

Q11: Did you create the MegaTexture technology with PC hardware in mind? Or were you also planning for next gen consoles when you started coming up with it?

Answer: It was done on the PC. But we know that next-gen consoles are essentially PC graphics renderers.

Q12: Would the consoles having less memory than a PC pose a problem for the MegaTexture? Or is something that you guys have already started to work around?

Answer: If anything, it works out better for the next-generation consoles, because on the PC you could often get away with not doing texture management if you were targeting the fairly high end, while on the consoles you've always had to do it. And especially my newer paged virtual texturing, which applies to everything instead of just terrain, allows you to have uniform management of all texture resources, as well as allowing infinitely sized texture dimensions. So this is actually working out very nicely on the Xbox 360.

Q13: Do you think the MegaTexture is a technology that will push hardware forward, in terms of gamers having to buy new upgrades for PCs, or not?

Answer: Interestingly, this isn't as performance-demanding as a lot of things we've done before. While the exact implementation that I've done for ETQW wouldn't have been possible until the modern generation of cards, the fundamental idea of unique texturing is something that could have been done at any point, all the way back to the 3DFX cards. And when I was originally starting the DOOM 3 development five to six years ago, unique texturing was something that I looked at as a viable direction for a next-generation step, but I instead chose to go with the bump mapping and the dynamic lighting and shadowing because I thought, for gameplay reasons, that they were going to work out better. It's a technology that I'm surprised no one else wound up pursuing, because I picked my direction way back in the DOOM 3 days and I kind of saw this other viable path that people could be pursuing. I was kind of surprised that five, six years later, nobody else had really taken that path, because it always looked good to me.

Q14: Do you think that the MegaTexture technology will be accessible to mod teams? I’m making the connection there in terms of thinking of some of the smaller teams out there.

Answer: It doesn’t help them. In general, all the technology progress has been essentially reducing the ability of a mod team to do something significant and competitive. We’ve certainly seen this over the last 10 years, where, in the early days of somebody messing with DOOM or QUAKE, you could take essentially a pure concept idea, put it in, and see how the game play evolved there. But doing a mod now, if you’re making new models, new animation, you essentially need to be a game studio doing something for free to do something that’s going to be the significant equivalent. And almost nobody even considers doing a total conversion anymore. Anything like this that allows more media effort to be spent, probably does not help the mods.



Q15: Has the MegaTexture been a really rewarding breakthrough for you in the scope of some of your other accomplishments?

Answer: It’s hard to put everything in comparison against all the different things I’ve done. Certainly in this generation of technologies that I’m working on I’ve done dozens and dozens of little experiments with lots of different graphics technologies. I do think that the unique texturing technologies are the most important of all of the things that I’ve done and are going to have the most significant impact.

There’s a ton of little graphics technologies that you can experiment with, different rendering technologies, and ways of drawing things with silhouette lighting or deformation maps — just all sorts of things that are interesting when you look at them in a particular light, and may have some great use in a game. But any texturing technology is something that applies to everything, and I’ve always wanted to do technologies that have a more general application, rather than things that I always considered artifact effects that you put on a particular object. I’m probably more accepting of eye candy like that on particular objects now than I used to be, because people do find things like that catchy, and it will make an impact on people when they see something special. But I’ve generally preferred to set up technologies that effect everything uniformly across the entire game world and this is one of those.

Q16: Public perception of you sometimes centers on your love of the technology in making games, maybe more so, for right or wrong, than on the finished product. What do you think of that assessment?

Answer: Well, the gameplay really is intertwined with the presentation. I’ve never pursued a technology that I thought would negatively impact gameplay. It’s always in the context of “how will this technology improve the game?” And it is true that I’m not the final arbiter of what’s necessarily going to make our games fun, gameplay-wise. I don’t necessarily consider myself representative of our target market. And the game play decisions are generally now made by Tim.

But I do still care about making sure that the technology I help provide, which is sort of the canvas that everything is painted on, is something that will only have positive effects on the whole gameplay experience. So I am focused more narrowly now than I used to be, on the graphics technology and my little aspect of this. It's true that I used to write essentially all of the code for everything. But as the demands of the technology have grown, we have to have more and more people, and the work gets more and more specialized. So I've sort of retrenched into the area where I have the most to offer, and I put in the time that I can.

Q17: Is there anything else that you’d like to add?

Answer: It’s still very exciting the capabilities that are continuously being added to our arsenal here. I am having a really good time working on the Xbox 360 right now, graphic technology-wise. As for the MegaTexture stuff, it is kind of funny that it’s not super demanding of the hardware. As I mentioned, I was kind of surprised that something like this hadn’t been pushed before we got around to it. There are lots more exciting possibilities for the graphics research and we’re still toying around with some fairly fundamental architectural design issues on the Xbox 360.

And the PC space is going to be moving even faster than the consoles. The graphics technology is still exciting, and there are still going to be significant things that we can show people that will make them look at it and say, "wow, this is a lot better than the previous generation." I do think unique texturing is the key for the coming generation.

There are lots and lots of graphics technologies that we can look at. And maybe you add five or six of them up and they wind up being something that really gives it a next-generation wow. But just by itself, even with no newer presentation technologies, allowing unique texturing on lots and lots of surfaces is, I think, the key enabler for this generation.

source:http://www.gamerwithin.com/?view=article&article=1319&p=2&PHPSESSID=bc7df2eac93361d0b5e29e5fb27bb75a

Are Linux operating systems as easy as promised? We test them out.

Can the ordinary computer user ditch Windows for Linux?

The question came up when I decided that my six-year-old version of Microsoft Corp.'s Windows operating system had to be replaced.

My Sony Vaio computer was still too young for the trash heap. And I was hesitant to spend $200 on the Windows XP operating system, especially with Microsoft planning to launch XP's replacement, Vista, in January.

So, I decided to give the operating systems that run on Linux technology a try. The Linux-based operating systems are said to be faster and more secure than Windows -- and are usually available free. They also are said to work well with older computers. And the publishers of the most popular systems say they can now be installed and used by anyone.

What I found was that for some people, Linux systems may do just fine. But they still are largely more appealing to computer hobbyists who would like to see Microsoft face more competition. Specifically, while the installation and simple functions worked well enough, the systems couldn't handle all the multimedia applications I needed. And getting some of the systems to work required more time and effort than I was willing to exert.

Adding and Sharing

Linux was started in 1991 by a Finnish student, Linus Torvalds, who wanted to modify the Unix operating system to work on his PC. (Unix was a text-driven operating system running on big mainframe computers that could handle various tasks and users simultaneously.) The task proved too much for one person, so Mr. Torvalds asked for help from programmers around the world in a posting on a Web bulletin board -- and the Linux movement was born.

The Linux systems and related applications are open source, meaning users can modify the programs as long as they make their changes available to others. Mr. Torvalds, however, is still in charge of maintaining central Linux standards. Users are encouraged to pass their copies of open-source software on to others.

What has resulted is an array of operating systems -- or distributions, as they are called -- based on Linux technology, almost all of them available for downloading on the Web. Linux's primary success, however, has been as an operator of servers, not desktop computers.

Positive reviews of the latest Linux systems were what first piqued my interest. I searched the Web to see which distributions of Linux were compatible with my PC, its components and peripheral devices like my printer, digital camera and external DVD drive.

Compatibility with hardware can be a big problem for Linux. My computer's Pentium III processor from Intel Corp. appeared to be compatible, though I found no assurances about my computer's sound and graphics components, cordless mouse or peripherals.

Still, I purchased a copy of the book "Linux for Dummies," which comes with a DVD containing six of the most popular Linux distributions for home-PC use. The DVD also contains other software applications, including OpenOffice.org, a competitor to Microsoft Office supported by Sun Microsystems Inc. of Santa Clara, Calif., and the Mozilla Internet browser. All for $30. (Most of the software can be downloaded free at various Web sites, starting at www.Linux.org.)

I decided to give the six Linux systems a spin to see how they got along with my PC. But there was a minor setback: The current edition of "Linux for Dummies" comes with older editions of the Linux distributions. I was able to download the latest editions of all but one free of charge; the exception was the Linspire operating system from San Diego-based Linspire Inc. The new, seventh edition of "Linux for Dummies" is due out this month, and its DVD has the updated Linux distributions.

Installation went quickly and, for the most part, smoothly. All six systems recognized my disk drives, cable modem and wireless mouse. There's no need to dump Windows when putting in any of the Linux distributions, as long as there's enough room on the computer's hard drive. After installation, you simply select whether to launch Windows or Linux each time you start the computer.

Basic tasks like printing, email and Internet browsing worked easily. Even though none of the Linux versions recognized my particular model of Epson color printer, the device worked fine after I designated it as a similar Epson model. Setting up email to use my account with an Internet service provider required some configuration, just as it does with Microsoft Outlook.

I was able to book an airline ticket online, reply to an invitation and look at satellite maps in the Google search engine. I also did some online banking, even though the bank's site sensed my PC was operating with a foreign system. The home page warned me that the site's full functionality required Windows or a Macintosh operating system, but my electronic bill payments went through just the same.

But most of the operating systems had problems with either my computer's graphics or sound or both. And the problems became more pronounced with multimedia applications, like viewing movie trailers and operating my digital camera and iPod. What's more, I couldn't transfer, via email or a disk, some complicated word-processor and spreadsheet files between my Linux system at home and Microsoft Windows on my work PC.

Two of the Linux operating systems -- Linspire and Fedora, which is run by Red Hat Inc. of Raleigh, N.C. -- didn't appear to be compatible with my graphics hardware. Suse, the system published by Novell Inc., of Waltham, Mass., had the same problem and didn't generate any sound despite attempts to fix it.

After the tests, representatives of Fedora, Linspire and Novell told me that Sony Vaios are known to have compatibility problems with Linux.

Greg Mancusi-Ungaro, Novell's director of marketing for Linux and open source, says other users of Linux on Sony desktops appeared to have developed solutions. But he agrees that chasing down and installing them would likely go beyond the abilities of a Linux novice.

Tom Welch, Linspire's chief technology officer, says the company's primary focus in selling its system is on new computers that come with Linspire already installed. He added that my inability to play Microsoft- and Apple-produced videos likely was due to hardware incompatibility. (Smaller manufacturers, like Microtel Computer Systems and Systemax Manufacturing Inc., make computers with Linspire installed.)

Keep It Simple

The OpenOffice.org suite of word-processor, spreadsheet and presentation software was included with each Linux system I tested. The programs worked well in and of themselves -- similar to Office's programs. They opened and saved files more quickly and didn't get hung up processing the way Office does from time to time. I was able to send files back and forth between Word on my work computer and OpenOffice's word processor, Writer, on my home PC.

But OpenOffice's spreadsheet, Calc, didn't handle a Microsoft Excel file with lots of graphs. Even a moderately complicated Word document -- a ballot for an Academy Awards office pool -- lost page breaks and other formatting. Surprisingly, though, Writer was able to track changes made during the editing of a Word file and translate the file back into Word with the changes still tracked.

Louis Suarez-Potts, a spokesman for OpenOffice.org, says the more complicated files don't always transport well back to Office. "We readily confess that there are occasional problems," he says, adding that "these are much fewer in number with [the newest] version."

One possible solution for multimedia and file-transfer problems is CrossOver. The $40 commercial program from CodeWeavers Inc., a software developer in St. Paul, Minn., is meant to run Windows applications from Linux. CrossOver can be downloaded for a free 30-day trial. It's also included in the current edition of the Xandros operating system from New York-based Xandros Inc.

CrossOver was able to operate Office software within Linux. But I couldn't get it to work with iTunes, Microsoft Media Player or Apple QuickTime.

Jeremy White, chief executive of CodeWeavers, says he wasn't surprised that I couldn't get CrossOver to run Media Player, but added that his program does work well with QuickTime. Had I purchased the program and then accessed the customer support, he says, I would have been able to get QuickTime to work.

As for iTunes? "Apple changes iTunes so frequently, we can't keep up with them," he says. "But my hope is that with our version 6.0, we'll support iTunes 6.0 and that we'll be able to more properly support the iPod."

Of the six Linux operating systems I tested, Xandros worked best on my PC. It installed most easily, and I didn't have the graphics or sound problems. While in Xandros, I was able to look at -- and in some cases open -- files on the Windows side of the computer more easily than in the other Linux systems.

Like the other Linux distributions, though, Xandros had problems viewing some online video files, playing DVDs and downloading pictures from my digital camera. I was able to see, delete and copy to my computer some of the songs on my iPod using Xandros's file manager, but I couldn't copy songs to the iPod.

The current version of Xandros doesn't work with iPods, says Kelly Fraiser, the product development manager for the new version of Xandros due out this summer. The new version, he adds, might work better.

Generally, open-source software can't legally play encrypted DVDs in the U.S., and most commercially produced DVDs these days are encrypted. Linspire, however, sells a program for $40 that plays encrypted DVDs.

So after all my trials and errors, what have I concluded?

The Linux systems could make sense for users who just want to send and receive email and surf the Web without the need for multimedia programs, or to perform home-office tasks without a lot of interaction with Microsoft systems. But users should be prepared to spend a lot of time configuring their PCs. Also, people who are really bothered by viruses, spyware and hacking might want to take a look at Linux, since most viruses and spyware are aimed specifically at the widely used Windows systems.

For me, though, using the Linux systems didn't make sense. I often send documents and spreadsheets between my home PC and the one at work, which uses Microsoft Office. And the files are sometimes complex. Meanwhile, for both personal and professional computer use, I want access to all multimedia functions.

While solutions may exist to almost every problem I encountered, I was willing to invest only a limited amount of time as a system administrator. Claims by some Linux publishers that anybody can easily switch to Linux from Windows seem totally oversold.

In the end, I decided to buy an upgrade copy of Windows XP for $100. That normally wouldn't be a good idea since it doesn't upgrade the file system. But it's a good solution until Vista arrives.

Meantime, I'll continue to toy with Xandros, and look at upgrades of other distributions to see if I can overcome the hurdles. In exchange for a reasonable amount of time, I'd jump at the chance to gain the speed, security and savings promised by Linux -- and to feel that Microsoft has a bit more competition.

source:http://online.wsj.com/public/article/SB114727136610348924-Et3a0yO82d_xJdMWN_y8xKXLl7c_20060521.html?mod=blogs


A Traffic Control System For Molecules

"Our cells contain small protein factories which have to deliver materials inside the cell via a network of microtubules. And the transportation is carried out by biomolecular motors. Now, researchers from Delft University of Technology in the Netherlands have built a traffic control system able to force individual molecules to choose between 'roads' by applying strong electrical fields locally at Y-junctions. This traffic control system can potentially lead to new nano-fabrication techniques. Read more for additional references and pictures showing how this traffic system works."

source:http://science.slashdot.org/science/06/05/15/0535203.shtml

High-Definition Video Could Choke Internet

NEW YORK - Every day, it seems, a new service pops up offering to send you video over the Internet. "Desperate Housewives," Stephen Colbert heckling the president, clips of bad dancers at wedding parties: It's all there.

You may be up for it, but is the Internet?

The answer from the major Internet service providers, the telephone and cable companies, is "no." Small clips are fine, but TV-quality and especially high-definition programming could make the Internet choke.

Most home Internet use is in brief bursts — an e-mail here, a Web page there. If people start watching streaming video like they watch TV — for hours at a time — that puts a strain on the Internet that it wasn't designed for, ISPs say, and beefing up the Internet's capacity to prevent that will be expensive.

To offset that cost, ISPs want to start charging content providers to ensure delivery of large video files, for example.

Internet activists and consumer groups are vehemently against those plans, saying they amount to tilting the Internet's level playing field, one of the things that encourages innovation. They want legislation to guarantee a "neutral" Internet, but prospects appear slim.

At the heart of the debate is a key question: How much would it really cost the Internet carriers to provide a couple of hours of prime-time TV over their networks every day?

The carriers are playing their cards fairly close to their chest, but there are ways to get close to an answer.

One data point: As a rough estimate, an always-on, 1 megabit-per-second tap into the Internet backbone in downtown Atlanta, bought wholesale, costs an ISP $10 to $20 a month, according to the research firm TeleGeography Inc. An ISP's business is carrying data from that tap to the customer.

One megabit per second doesn't sound like that much, but ISPs spread that bandwidth out over their subscribers. Analysts estimate that ISPs sell around 30 times more bandwidth to their end users than they can connect simultaneously to the Internet (the figure probably varies widely from provider to provider).

In this sense, broadband is like old-fashioned telephone service, where there are always more lines leading from homes to the local switching station than there are going from the station out of the neighborhood. If everyone in a neighborhood picks up the phone at once, some calls won't go through because there aren't enough outgoing lines. But that rarely happens, so the system works.

On the broadband network, this oversubscription means that a one megabit-per-second connection to the Internet is enough to serve 40 DSL accounts, each at a maximum speed of 768 kilobits per second, typical for low-end DSL. So the cost of providing data to each DSL customer is about 25 cents to 50 cents a month.
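
The arithmetic behind those figures is simple enough to check. Here is a back-of-envelope sketch in Python using only the numbers quoted above; it reproduces both the roughly 30-to-1 oversubscription ratio and the 25-to-50-cent monthly cost per subscriber.

# Back-of-envelope check of the article's oversubscription math.
backbone_mbps = 1.0                     # wholesale backbone tap
cost_low, cost_high = 10.0, 20.0        # $/month for that tap (TeleGeography)
dsl_speed_mbps = 0.768                  # low-end DSL, 768 kbps
subscribers = 40                        # accounts served by the 1 Mbps tap

sold_bandwidth = subscribers * dsl_speed_mbps       # ~30.7 Mbps promised
oversubscription = sold_bandwidth / backbone_mbps   # ~30x, matching analysts
per_sub_low = cost_low / subscribers                # $0.25 per customer
per_sub_high = cost_high / subscribers              # $0.50 per customer
print(f"{oversubscription:.1f}x oversubscribed; "
      f"${per_sub_low:.2f} to ${per_sub_high:.2f} per subscriber per month")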

Of course, the carrier also needs to pay for the equipment that brings data from the Internet connection point to the subscriber, first through fiber-optic lines and then through DSL or cable.

Oversubscription doesn't present a problem as long as people are using the Internet for Web surfing, e-mail and the occasional file download. But if everyone in a neighborhood is trying to download the evening news at the same time, it's not going to work.

"The plain truth is that today's access and backbone networks simply do not have the capacity to deliver all that customers expect," according to Tom Tauke, Verizon Communications Inc.'s top lobbyist.

The solution, of course, is to make the pipes connecting to the Internet fatter. To illustrate what that would mean, BellSouth Corp.'s chief architect, Henry Kafka, works from the assumption that providing a month's worth of data to the average user, about 2 gigabytes, costs the company $1. That's a fairly small amount compared with the $25 to $47 a month BellSouth charges for DSL, but then the company also has to pay for sales, support, maintenance and a host of other costs.

If that same user were to start downloading five TV-quality movies per month, BellSouth's data cost, not including the cost of maintaining the DSL line, would go up to $4.50 a month. Higher, but perhaps not high enough to break BellSouth's business model.

But if the customer starts watching Internet TV like the average household watches regular TV, 8 hours a day, BellSouth's cost would go up to $112 a month, according to Kafka.

"We don't expect to get to the point where we're charging anyone those kinds of prices for Internet service, but it does reflect the kind of impact that high-quality video could have on the network and business models for providing the Internet," Kafka said.

To deal with that, Kafka said BellSouth might put caps on the amount of data that a residential user gets for free, and charge extra if the user goes over, much like cell phone users pay overage charges. Other options include charging content providers extra for guaranteed delivery, the kind of model that has raised the hackles of Internet content providers and activists.

However, Kafka's cost estimates aren't really BellSouth's: like other telephone companies, BellSouth doesn't disclose its actual costs. Instead, Kafka's base figure of $1 for 2 gigabytes of data per month is based on an estimate by Dave Burstein, editor of the DSL Prime newsletter, and Burstein thinks Kafka has it wrong.

"Traffic just isn't moving up that fast," Burstein said. "It will go up and it will go up faster, but not fast enough to be dollars and cents that really matter."

Internet video is still just a small fraction of the total amount of video people watch, and that's unlikely to change overnight, in Burstein's opinion.

In fact, he said, Internet traffic has increased much more slowly than the prices of Internet-carrying equipment like switches and routers have fallen, and that trend is likely to continue.

Burstein believes the danger of letting the carriers charge extra for guaranteed delivery is that they'll put the spending for upgrades into creating that extra "toll lane," and won't reduce oversubscription in the rest of the network even though it would be cheap to do so.

Both Verizon and AT&T Inc. have said they won't degrade or block anyone's Internet traffic. But it's impossible to tell what goes on inside their networks.

The message: Stay tuned, and watch your download speeds.

source:http://news.yahoo.com/s/ap/20060514/ap_on_hi_te/net_neutrality


Google Doesn't Have to Try Nearly as Hard as Microsoft, Yet to Maximize Its Success Google Ought to Try Even Less

Google, Intel, Microsoft, and Yahoo represent the four pillars of personal computing circa 2006, I explained in last week's column. "What about Apple and IBM?" a few dozen of you asked. And it is true that IBM in terms of sales is about the size of these other companies combined. And Apple, well, didn't I just finish a string of 3-4 columns in a row about nothing but Apple? How can I ignore them after that?

Easy.

Just as I wrote last week that Intel was a proxy for AMD, so Microsoft is a proxy for Apple. Go back and read those recent Apple columns and you'll notice the whole point is how Cupertino is planning to steal the market from Redmond. The only way to steal a market is by competing head-to-head, and the only way to compete head-to-head is by being in the same business. Microsoft, you'll recall, is in the platform business. Microsoft will do whatever it takes to defend the Windows platform (by which I mean to defend the Windows monopoly). And just in case that doesn't work, Microsoft wants to monopolize any other platforms that come along.

Well, Apple is no different. Apple is just Microsoft with a sense of style. If Microsoft's business theory is antiquated, then Apple's -- which is for the most part derived from Microsoft's -- ought to be antiquated, too.

The sad truth is that if Microsoft falters and Apple grabs command of the PC standard, Steve Jobs will defend his new standard using precisely the same brutal tools that Bill Gates has been using for the last 20 years. Apple already controls one important platform in iPod/iTunes and their reluctance to license that platform to date or to tolerate any flexibility on pricing shows this rigid -- even monopolistic -- need to control.

Remember what Macintosh programmer Andy Hertzfeld said about this on NerdTV: "Let's say Apple was able to get into the place Microsoft is in; they might do a better job of it, but we still haven't accomplished something if you just have a different toll keeper. The King is dead, long live the King. I liken it more to the change from a monarchy to a democracy, where every man is king. That's where I'd like to see things go."

If Apple is just a pimped-out Microsoft, then what is IBM? IBM is a disaster-in-the-making. Big Blue as a total enterprise is running primarily on customer inertia and clever advertising, which definitely isn't enough.

Of course they have their Power5 and Cell processors, AIX and DB2, but IBM's customers are now all in big business, which doesn't touch my readers or the PC market very much. And IBM is in trouble. I have lots of friends at IBM and none of them are happy. The company is not going anywhere, but it is also going nowhere, if you know what I mean.

One aspect of IBM's malaise is the disconnect between the traditional public image of the company (basic research, advanced R&D, patents, patents, patents) and the fact that most of their revenue-generating businesses aren't about hardware or software products at all, but services. Why continue to spend all that money if you're mainly just a business/IT consulting company made up of IGS and Price Waterhouse? Why, indeed.

Here's what's happening with IBM. The heart of a company's culture can be discovered by looking at its compensation system. IBM's major incentives right now are for signing business and cutting costs. In many IT firms, IBM included, billable hours are important. This results in a system where little is done to improve service efficiency, because doing so would lead to fewer hours and less revenue. Efficiency kills, so at today's IBM it is generally avoided.

Of course, the laws of both science and business continue to apply: something has to give, costs have to be driven down, so at today's IBM what gives is generally quality.

The result is that an increasing number of customers are unhappy with IBM, signings are harder, so there is less return business. To get that signing incentive, IBM's sales folks are now under-pricing deals. The people who do the actual work are still expected to show a profit though, even if one wasn't designed into the contract in the first place. So to still be profitable, they under-deliver on the contract, and this leads to an even lower quality of service. What I am describing is a death spiral that top IBM management either doesn't see or simply doesn't want to admit.

If IBM had invested in improving services, rather than cutting them to the bone, by now they could own their market. But they didn't. IBM's primary innovation has been to move as many jobs offshore as possible, cutting costs for now, but at a horrible long-term cost to the company.

IBM CEO Sam Palmisano, should he choose to reply to this column, might point to the IBM development center in Austin, Texas, which specifically targets the services business, as an example of what IBM is doing right. It's called the Global Technology Center. Alas, the GTC has spent millions of the company's money, is masterful at promoting itself, but has delivered very little of real value back to the services organization.

Losers.

For IBM maybe it isn't the business theory that's wrong, but just the execution. If I were running the place I'd probably split it into several separate businesses under a holding company. Then I'd invest in the parts of the company that really make money and sell off or kill most of the rest.

Now back to where I thought I was going this week, which is to Google -- that fourth pillar of modern Internet/PC technology. Where Microsoft's theory of business is built around the platform and its domination, Google has built a theory of business that is independent of the platform, and therefore its software runs (or can run) on any platform. The issue around "advertising-based revenue" isn't really the key differentiator. What counts is that for Microsoft the platform is the PC, while for Google the platform is the Internet, and nobody can hope to control the Internet -- not Microsoft OR Google.

Given that Google can't practically aspire to control the Internet and Microsoft can't NOT aspire to control it, Google already has a vastly lighter load to carry.

So Microsoft can build software for a handheld or tablet computer, a mobile phone or a TV set-top box and even though the wrapper is different, the feel is always very much the same -- that of a fat PC client. Microsoft can't allow a phone to be a phone because they can't dominate and control a plain old phone unless it is more Windows than phone. That's a problem.

But not for Google, which couldn't care less about the phone OR the content (that's back to Yahoo, again). Google cares about the DATA. There is a key difference here between data and content. Content is stored and retrieved while data is generated. Google is about generating custom data based on applying proprietary algorithms. THAT's their theory of business, no matter how that data is ultimately paid for or by whom.

While Microsoft is trying as hard as it can to avoid the commoditization of its operating system and office suite, that job is getting harder and harder. Like many companies in this sector, Redmond is struggling to convert itself from a company that is vertically integrated to one that is integrated horizontally.

Huh?

Vertical integration was perfected at Ford's River Rouge Plant, where they built every part of a Model A, including making their own steel from iron ore. The idea was simply that if you controlled the entire production process from top to bottom you could claim all the profit that might have gone to outside suppliers as well as have total control of your business. But what worked for Ford in the 1920s doesn't work as well for Ford today and barely works at all in high tech.

Remember my doom-and-gloom prediction last week for Sun Microsystems? That's based almost entirely on the company's inability to see itself moving from being vertically integrated (doing its own proprietary hardware and software) to competing on a level (that is horizontal) playing field. While that might make them just another PC vendor, don't worry about that happening because Sun would rather die first. And will.

How is Apple any different from Sun? Apple has volume, for one thing, spreading their investment over a far greater number of widgets. Apple has always had vastly higher sales-per-employee than Sun for another thing. This is what allows Apple to succeed in a consumer market Sun could never afford to even enter, at least not seriously. These days much of Apple's system software -- and even some of their applications like Safari -- are based on Open Source foundations that come for free, lowering costs and headcount that much further.

And yet, while Apple is different from Sun in these ways, look where Apple is going for growth: toward supporting horizontal PC (that is, Windows) applications. It's the exception that makes the point.

Google is Microsoft's nightmare, true, but maybe that will all end when Vista ships. Not! PC sales are down overall and Microsoft doesn't usually sell many upgrade licenses. Even if Vista is great, Microsoft isn't going to sell enough licenses to change the company's direction.

Maybe Microsoft's new Internet ad business will turn the tide. Wrong! How many web sites do you visit primarily for the advertising? None from Microsoft, to be sure. Yet that's essentially what people do with Google -- going TO the ads.

And look at those Google ads. Here's the most important key to Google's success: most Google advertisers don't advertise ANYWHERE else. It's mainly small and medium-sized companies whose advertisements the average person would NEVER have seen before the Internet. Google is making a ton of money from people who never advertised before. Heck, Google is making a ton of money from people who were never even in business before. This is not only a fundamental change in how advertising is done; it is a fundamental change in how BUSINESS is done.

I'm counting on Google and eBay to save America.

If Microsoft really wanted to compete -- if they really wanted (or even knew how) to truly defend their turf, here is what they would do. They would throw away Vista and develop a new operating system, one that is simpler, lighter, and more secure -- an OS that would run on any machine now running Windows 2000 or XP. They would price it right, which is to say cheap ($49.95). The associated and trimmed-down version of Office would be priced the same ($49.95). The upgrade market is probably five times bigger than the OEM PC market, so Microsoft needs (but probably doesn't realize it) an OS that will run well on most of the PC installed base. It needs to set the pricing of the OS so that we'll run to the store to get it. Instead of designing products exclusively for new equipment, now it's time for Microsoft to focus on the installed base.

Remember, $49.95 is more than Microsoft gets for an OEM Windows license, and OEM sales will continue simply because computers die and need to be replaced. Microsoft COULD win both ways, but probably won't risk it.

That's a survival strategy for Microsoft. Now here's a failure strategy for Google. While not intending so much to create a platform, Google has done just that. And once you control a platform, the best way to leverage that control is by sharing the platform generously. Google is right now the basis of much Web 2.0 creativity from third-party firms -- every one of which is afraid that it will be put out of business next month by Google rolling out its own version of whatever that ISV has built and proved. That's the Microsoft domination model, so why not? Because it poisons the well, that's why not.

It is great for Google to buy up these little firms, making millionaires along the way, but Google's obsession with reinventing the wheel might hurt them over time. I hope they are smart about this, but I fear that they aren't, and that Google's own vertical obsession might hamper their growth.


source:http://www.pbs.org/cringely/pulpit/pulpit20060511.html

What's the Secret Sauce in Ruby on Rails?

"Ruby on Rails seems to be a lightning rod for controversy. At the heart of most of the controversy lies amazing productivity claims. Rails isn't a better hammer; it's a different kind of tool. This article explores the compromises and design decisions that went into making Rails so productive within its niche."

source:http://developers.slashdot.org/article.pl?sid=06/05/14/1456224

CCTV channel beamed to your home

Big Brother, the reality television show that attracts up to seven million viewers, is old hat.

In the world of boundary-pushing television, it was surpassed yesterday by a group of Eastenders who have become the first to monitor their own neighbourhood via a home CCTV channel.


Shoreditch TV is an experiment in beaming live footage from the street into people's homes and promises to be every bit as fascinating as the courtship rituals of Celebrity Big Brother contestants Chantelle and Preston.

Viewers can watch the dog walkers on the street below, monitor the appearance of new graffiti and keep an eye on the local pub.

This summer 22,000 Londoners will be tuning in and homes across Britain are getting their own version next year. But despite being a curtain-twitcher's paradise, the channel is about "fighting crime from the sofa", not entertainment.

In return for a package that includes footage from 12 security cameras, a police advice channel and an array of standard cable fare, the residents of Haberdasher Estate are expected to shop any yobs that they catch on camera.

They can alert the council and police through a CCTV hotline and an anonymous e-mail tip-off service. Or they can just watch the world go by.

Jan Ashby, 57, a resident who previewed the scheme before yesterday's launch, said: "I wouldn't say it was spying, but it is nice to see what's going on. Look, there's my local pub."

Mrs Ashby is a "huge fan" of Channel 4's Big Brother, but is the real deal just as addictive?

"I must admit I have watched it everyday since I have had it - but I wouldn't sit down to it for hours."

One of the stars of the show is Ken Hodkinson, whose pub, The Marie Lloyd, sits directly under the gaze of Mrs Ashby and Camera South.

"I can't say I ever fancied being on television, but I don't mind a bit if it keeps the area safe," he said.

Digital Bridge, which set up the scheme for the regeneration agency Shoreditch Trust, hopes it will reduce fear of crime. It is also in talks with police about including an Asbo channel, featuring the faces of youths to avoid because they have broken the terms of their order.

Civil liberties groups are concerned, with Mark Crossman of Liberty predicting the emergence of vigilante groups and an epidemic of old ladies crying wolf over young people in hoodies.

But James Morris, the chief executive of Digital Bridge, said: "This is not naming and shaming or spying, it is getting the community engaged with their services."

After a free three-month trial, residents will pay £3.50 a month for the TV on-demand service, which also comes with a wireless keyboard that can turn the television into a PC with broadband internet.

Police will also be able to interrupt regular programming with alerts about incidents.

source:http://www.telegraph.co.uk/news/main.jhtml?xml=/news/2006/05/09/ncctv09.xml&sSheet=/news/2006/05/09/ixuknewsnew.html


In Tokyo, the New Trend Is 'Media Immersion Pods'



TOKYO

At the Bagus Gran Cyber Café, customers rent so-called media immersion pods. It's not just a solo pursuit; pods for couples are available too.

ALL cities take a toll, and at times all city dwellers have to take their leave. When life in Istanbul gets too stressful, people can head to the baths. In Rio there's the beach. In Tokyo, though, the antidote to urban overload is more of the same. In the world's most media-saturated city, people take a break by checking themselves into media immersion pods: warrens cluttered with computers, TV's, video games and every other entertainment of the electronic age.

The Bagus Gran Cyber Cafés are Tokyo's grand temples of infomania. Situated well above retail level, on the odd floor number where in Manhattan you might find tarot readers or nail salons, these establishments contain row after row of anonymous cubicles. At first glance the spread looks officelike, but be warned: these places are drug dens for Internet addicts.

The first Gran Cyber Café opened in 1999. Today there are 10, serving some 5,000 people a day. Each has a slightly different orientation — some are geared to teenagers, some to salarymen — but the atmosphere is the same throughout the franchise: equal parts lending library, newsstand, arcade, Kinko's and youth hostel. An inspired extension of the basic Internet cafe, the Gran Cyber Cafés shift their meaning the more you study them, as if by a trick of their trademark low light.

Sometimes they look like nothing special, only marginally cooler than carrels you might find at a college library. But at other times, especially late at night, they seem visionary, an architectural realization of the social and personal life of the future.

"The Japanese love liminal spaces and gray zones," explained Con Isshow, a writer who has published widely on youth culture, including a collection of letters by abused children called "Letters to Japan's Worst Parents."

"In both the anonymity and role-playing games on offer at the Gran Cyber Café, you don't have to exist in tight social norms," Mr. Isshow told me. "Your identity can be in flux. You go to these places not to present yourself, but to lose yourself. Lose your name, your position, your pride."

Mr. Isshow spoke through a translator, but here he introduced some English: "No-face-man, no-ID-man, no-pride-man."

Although the services offered by Bagus, a company that also runs billiard halls, karaoke dens and spas, are aboveboard, the Gran Cyber Cafés are enshrouded in the urgent, furtive atmosphere of a hot-sheet motel. Eyes averted, customers sign in, head to the library of entertainment options, and load up on fashion magazines, video games and DVD's of "24" as if stocking up on Jim Beam. Then they beetle-brow it to their solitary pods.

What they do there is up to them. Some people channel-surf. Others trade stocks. You can download music, read novels, watch pornography, play video games, have sex, go to sleep.

According to Mr. Isshow, Japan's "petit iede," or little runaways, come for downtime, free lattes and smoothies, and, at some branches, showers. They use the places as trial separations from home — staying a few hours, overnight or a few days, long enough to scare their parents. (A "night pack" allows use of the pod from 11 p.m. to 8 a.m. for about $10; some places sell toothbrushes and underwear too.) Periodically the management will remind a customer that the cafe is not a hotel, but above all Bagus respects people's privacy.

ON a recent afternoon, at around 5:30, I visited the Gran Cyber Café in the Shinjuku neighborhood for the first time, to read e-mail and visit a news site or two. Checking in, I was assigned to pod 16-A.

I loved 16-A the instant I saw it. I closed the door, slipped into a low-slung leatherette seat and surveyed the all-you-can-eat tech feast, which includes VHS and DVD players, satellite and regular television on a Toshiba set, PlayStation 2, Lineage II and a Compaq computer loaded with software, all the relevant downloads and hyperspeedy Internet. In the nearby library were thousands of comic books, magazines and novels. On the desk was a menu of oddball snacks, like boiled egg curry and hot sandwich tuna.

The atmosphere is airless and hot, with a permanent cloud of cigarette smoke. Over all the effect is of a low-wattage, low-oxygen casino.

When I spoke to Japanese cultural critics about the Gran Cyber Cafés, most gave high-flown theoretical accounts of their appeal. But Takami Yasuda, a professor at the School of Informatics and Sciences at Nagoya University who writes about virtual reality, shrugged. "I do not know exactly why people, young guys in particular, love to stay in such a dark place," he said.

I don't know exactly why I stayed either. But 10 books, two DVD's, seven magazines, two newspapers and a video game later, I found that eight hours had elapsed.

On my second visit I brought Shizu Yuasa, a married 31-year-old Japanese friend who stays overnight at Gran Cyber Cafés whenever she wants time to herself.

Shizu, the director of 2DK, an arts and media production company, is an avid reader of the Japanese graphic novels known as manga. But because she can read one in about 15 minutes, she doesn't believe in buying them. So she heads to the Bagus shelves and picks out 20 or so.

Around 8 p.m. the place filled up with a reticent and largely male crowd of loners. One nameless man told me he comes on breaks from work, to read the sports news. Naomi Iwasaki, a 28-year-old manager of an Internet portal site, said he was there to read manga. Two boys with hair dyed strawberry blond companionably watched their screens: one was tuned to a cooking show, the other to Yahoo.

Back in the stacks I met Reiko Ishii, a 25-year-old student at Hosei University who lives with her parents in Tokyo. She had tea-colored hair, wore a horn-shaped amulet around her neck, and dressed in the clingy style of early Nicole Richie. She told me she comes to Bagus often, but I was the only other customer she had ever spoken to. There are so few places, she said, where a woman can go out alone, late at night, without having to be sociable. I asked if she'd ever spent the night.

"Sure," she said, looking unfazed. "My parents know I stay here, and it's fine with them." She retreated to her pod. I went to mine too, hit the button that changed the keyboard from Chinese characters to QWERTY, and answered some e-mail.

Shizu was catching up on manga. One was "The Monetary System of Osaka," a left-wing chronicle of graft and usury among the suits of Japan's second city. Another was "Inu," or "Dog," by Haruko Kashiwagi. It's considered clever, fairly high-toned and mainstream, which is surprising because, in part, it's about a woman who has sex with her dog.

The extensive manga library also includes pornography for every taste. But sex at the Gran Cyber Café is not just in the fiction. All around me, couples were making out. Some were watching sex videos. They seemed blasé. Still, in the cubicles that seat two, the walls are a little lower, and the seats don't have a massage option. Meanwhile other customers have taken a more professional approach. The Japanese Web site Tanteifile.com published an article about a freelance prostitute — a "delivery health" girl — who moved into a Gran Cyber Café after her workplace was raided.

Shizu and I got tea and calpis, a sweet, summery drink, and returned to our pod. I leafed through teenage fashion magazines while a Japanese movie about gay samurai, "Mayonaka no Yaji-san Kita-san," played. Shizu, in the meantime, checked out Mixi, the Japanese Friendster. Some people, I had been told, use the site to communicate with other customers who might be just a few pods away, without having to introduce themselves in person.

Finally an attractive 30-something couple, Kaori Karasawa and Naoya Ohada, settled in the pod across from us. "Will this article be on the Internet?" Ms. Karasawa asked me. "People at the office don't really know we're dating."

"But now they will," Mr. Ohada said, laughing.

He appeared eager to impress her; he held forth about manga, while she listened. They Googled subjects that came up in conversation, showing each other favorite sites, using the Internet as a kind of third party in their relationship: chaperon, entertainment, common ground. Over their pod the light at the Gran Cyber Café seemed not dim but soft, flattering, romantic.

CHECKING out wasn't going to be easy. I had come to appreciate the shared solitude the Gran Cyber Café provides, as well as the fast, infallible Internet connection.

Hidenori Kimura, a sociologist who writes about intercultural encounters, said he believes the Gran Cyber Cafés fulfill a deep and persistent cultural longing. The Japanese system of competition for education, career and social esteem, Dr. Kimura explained, forces young people to obsess over self-presentation, which costs them both fantasy and anonymity, the privileges of childhood. What Japanese young people want, in his view, are opportunities to be free of their social status.

"Traditionally," he explained, "tea ceremonies and festivals have been fulfilling this role of depriving people of their social status and thus help them become 'nobody.' Tea ceremonies deprived the feudal elites of their status and made them just a person enjoying tea ceremony and tea, while festivals among farmers offered an enclave of anarchy during the festivals where they were free of norms and rules of feudal eras."

The Gran Cyber Cafés now serve this purpose, he said. "Nobody cares what you do, which enables you to be absorbed in whatever fantasy you want to indulge in through Net surfing, Web games or manga. Yet you can satisfy your timid desire to belong." Staying in the Gran Cyber Cafés, he concluded, is now part of jibun-sagashi, or the search for the true self.

Nevertheless there's something a little shameful about spending a solo hour, or two, or seven, on a wanton media bender. It was in Japan that I first heard the word "infomania," a 2005 coinage by Hewlett-Packard, whose study last May showed that compulsive e-mailing and text-messaging do more damage to the I.Q. than regular marijuana use. But, as I read about the study in my pod, I came to doubt that such warnings would ever make people temper their infomaniac ways; maybe these are the I.Q.'s we're stuck with now.

And, really, what's so wrong with getting lost on the Internet; watching soccer or baseball on satellite television; devouring Us Weekly or Time Asia; and organizing solo marathons of Tim Burton or Kurosawa movies? The craving for media sprees runs deep, and, like so many Internet-era developments, Gran Cyber Cafés seem to answer an almost carnal need for uninterrupted access to pixels and screens and Web sites and instant-messaging and iTunes. And when that need is satisfied, you can always return to life in the city, at least for a while.

source:http://www.nytimes.com/2006/05/14/arts/14heff.html?ei=5090&en=3941ff903992e111&ex=1305259200&partner=rssuserland&emc=rss&pagewanted=all



Vendor Ready to Test 'Smart' Shirt

(May 05, 2006) Sensatex Inc. is looking for beta testers for its SmartShirt System. Nanotechnology is used to weave tiny conductive fibers into the cotton fabric of a shirt to enable monitoring of physiological activity.

A device smaller than a PDA snaps onto the side of the shirt to collect data and transmit it to a computer. The data is then sent, wired or wirelessly, to clinicians or researchers.

The conductive fiber collects data on a wearer's movement, heart rate and respiration rate in real time. The shirt, minus the snap-on device, is washable, according to the Bethesda, Md.-based vendor.
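Sensatex has not published its data format, but the pipeline the article describes - fabric sensors, a snap-on collector, relay to clinicians - is simple to sketch in outline. Below is a hypothetical Python illustration; every field name and value in it is invented for the example.

import json
import time

def read_sensors() -> dict:
    """Stand-in for sampling the shirt's conductive-fiber sensors."""
    return {
        "timestamp": time.time(),
        "heart_rate_bpm": 72,               # invented example reading
        "respiration_per_min": 16,          # invented example reading
        "movement_g": [0.01, -0.02, 0.98],  # invented example reading
    }

# The snap-on device would batch readings like this and transmit them,
# wired or wirelessly, to a clinician's or researcher's server.
payload = json.dumps(read_sensors())
print(payload)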

The company envisions the shirt being used to remotely monitor home-based patients, first responders, hazardous materials workers and soldiers. It also can be used to monitor signs of fatigue in truck drivers and to support athletic training.

The vendor plans field testing later this year and also is developing a one-lead EKG band embedded in a "smart" bra. Organizations interested in testing the technology can send an e-mail to Robert Kalik, CEO at Sensatex, at rkalik@sensetex.com. More information is available at sensatex.com.

source:http://www.healthdatamanagement.com/html/news/SmartShirt.System.Nanotechnology.13399.cfm


The Net's not-so-secret economy of crime

The people who want to rip you off are very polite with each other when they're buying and selling credit card numbers.

NEW YORK (FORTUNE) - Raze Software offers a product called CC2Bank 1.3, available in freeware form - if you like it, please pay for it. Raze's attractively designed Web site, registered in Belarus, may suggest a shaky command of English -"I shall pleased any estimation in respect of my programs and this page," it reads - but it displays the classic characteristics of web commerce, like visitor statistics, advertising, and links to Web sites of partners.

But CC2Bank's purpose is the management of stolen credit cards. Release 1.3 enables you to type in any credit card number and learn the type of card, name of the issuing bank, the bank's phone number and the country where the card was issued, among other info.
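How can a tool identify a card from the number alone? The first digits of a card number form the Issuer Identification Number (IIN, often called the BIN), which identifies the network and issuing bank, and the final digit is a Luhn checksum that catches mistyped numbers. Here is a minimal Python sketch of both ideas; the tiny prefix table is a toy illustration, not a real BIN database, and has nothing to do with CC2Bank's actual code.

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:       # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Toy mapping of well-known prefixes to card networks (illustrative only).
NETWORK_PREFIXES = {
    "4": "Visa",
    "51": "MasterCard", "52": "MasterCard", "53": "MasterCard",
    "34": "American Express", "37": "American Express",
    "6011": "Discover",
}

def card_network(number: str) -> str:
    for prefix in sorted(NETWORK_PREFIXES, key=len, reverse=True):
        if number.startswith(prefix):
            return NETWORK_PREFIXES[prefix]
    return "unknown"

num = "4111111111111111"   # a well-known test number, not a real account
print(card_network(num), luhn_valid(num))   # Visa True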

The ad on Raze's site, in Russian, leads to another Belarus address that appears to be a market for stolen products.

The Internet economy, as we journalists like to write, has a lot to do with sharing. And commerce, at sites like eBay, is based largely on trust. But until recently I didn't realize that these same principles govern online dealmaking among criminals.

My naiveté was dispelled by an eye-popping tour of underground Web sites, conducted by two executives from RSA Cyota, an online security firm that works for banks like Barclays and Washington Mutual. They showed me a variety of sites frequented by people who steal and trade credit card numbers and then use them to steal money.

This infrastructure for online crime is far more multi-layered and sophisticated than I ever imagined.

Says Marc Gaffan, a marketer at RSA: "There's an organized industry out there with defined roles and specialties. There are means of communications, rules of engagement, and even ethics. It's a whole value chain of facilitating fraud, and only the last steps of the chain are actually dedicated to translating activity into money."

This ecosystem of support for crime includes services and tools to make theft simpler, harder to detect, and more lucrative.

Gaffan and his colleague Yohai Einav showed me, for example, a site called TalkCash.net. It's a members-only forum, for both verified and non-verified members. To verify a new member, the administrators of the site must do due diligence, for example by requiring the applicant to turn over a few credit card numbers to demonstrate that they work.

It's an honorable exchange for dishonorable information. "I'm proud to be a vendor here," writes one seller.

'A very nice person'

"Have a good carding day and good luck," writes another seller, who notes "I do replace new cards in case any died." In response, a different poster comments "He delivers fast and he is a very nice person to deal with!" It's as if he was talking about a local florist.

These sleazeballs don't just deal in card numbers, but also in so-called "CVV" numbers. That's the Card Verification Value - an extra three- or four-digit number on the front or back of a card that's supposed to prove the user has physical possession of the card.

On TalkCash.net you can buy CVVs for card numbers you already have, or you can buy card numbers with CVVs included. (That costs more, of course.)

"All CVV are guaranteed: fresh and valid," writes one dealer, who charges $3 per CVV, or $20 for a card number with CVV and the user's date of birth. "Meet me at ICQ: 264535650," he writes, referring to the instant message service (owned by AOL) where he conducts business.

Other discussants on the TalkCash forums politely request login IDs and passwords for accounts at HSBC and National Bank of Canada.

Gaffan says these credit card numbers and data are almost never obtained by criminals as a result of legitimate online card use. More often the fraudsters get them through offline credit card number thefts in places like restaurants, when computer tapes are stolen or lost, or through "pharming" sites, which quietly redirect victims to a replica of a genuine bank site and dupe cardholders into entering precious private information. Another common source of credit card data is the "phishing" scam, in which an e-mail that looks like it's from a bank prompts someone to hand over personal data.

Also available on TalkCash is access to hijacked home broadband computers - many of them in the United States - which can be used to host various kinds of criminal exploits, including phishing e-mails and pharming sites.

RSA's Einav says there are about a dozen marketplace sites like TalkCash in operation at any given time. Unfortunately, he and Gaffan suggest it's unlikely this nefarious activity will end anytime soon (though of course that's good for their business).

"When the FBI shuts down a site they just move to another site," says Einav, "The URL changes but the community stays intact."

RSA doesn't even bother trying to shut down such sites, because by monitoring them it can help banks protect themselves. Says Einav: "If you see abnormal demand for accounts from a specific bank, you can assume an exploit is underway."

That's when it goes into action. RSA Cyota claims to have shut down 10,000 phishing and other schemes since Cyota was formed in 1999. (RSA Security bought Cyota last December.) The company maintains a blacklist of sites, which partners use to warn customers.

Microsoft's new Internet Explorer 7 browser, for example, uses the blacklist data to warn users that a site they have requested is likely to be fraudulent. RSA also works with ISPs to get them to shut down fraudulent sites.
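The mechanics behind such a browser warning are straightforward to sketch: extract the hostname from the requested URL and look it up, along with its parent domains, in a blocklist before the page loads. A minimal Python illustration with made-up list entries follows.

from urllib.parse import urlparse

# Made-up entries standing in for a vendor-supplied blacklist feed.
BLACKLIST = {"evil-bank-login.example", "phish.example.net"}

def is_blacklisted(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower().rstrip(".")
    parts = host.split(".")
    # Check the full host and each parent domain (a.b.c -> b.c -> c).
    return any(".".join(parts[i:]) in BLACKLIST for i in range(len(parts)))

print(is_blacklisted("http://evil-bank-login.example/account"))  # True
print(is_blacklisted("http://www.example.com/"))                 # False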

Don't visit any of these sites. Tapping into them could lead to unpleasant consequences. I only looked at them via the safety of RSA's computers.

But it's worth knowing this ecosystem exists, if only as a cautionary reminder of how woefully unprotected our financial systems remain in the age of the Internet.

source:http://money.cnn.com/2006/05/11/technology/fastforward_fortune/


Nokia to Put Google Talk on its Linux Tablet

"The next version of Nokia 770 Linux-based Internet tablet with WiFi support will feature Google Talk with VOIP in its next release, MSNBC reports. The device is priced to sell at $390, and both Google and Nokia agree that right now it might appeal only to niche markets. In related news, however, it means Google's GTalk client will be ported to Linux, even if it's Nokia 770-specific software architecture."

source:http://hardware.slashdot.org/article.pl?sid=06/05/13/1735241

Ships' logs give clues to Earth's magnetic decline

The voyages of Captain Cook have just yielded a new discovery: the gradual weakening of Earth’s magnetic field is a relatively recent phenomenon. The discovery has led experts to question whether the Earth is on track towards a polarity reversal.

By sifting through ships’ logs recorded by Cook and other mariners dating back to 1590, researchers have greatly extended the period over which the behaviour of the magnetic field can be studied. The data show that the current decline in Earth's magnetism was virtually negligible before 1860, but has accelerated since then.

Until now, scientists had only been able to trace the magnetic field’s behaviour back to 1837, when Carl Friedrich Gauss invented the first device for measuring the field directly.

The field’s strength is now declining at a rate that suggests it could virtually disappear in about 2000 years. Researchers have speculated that this ongoing change may be the prelude to a magnetic reversal, during which the north and south magnetic pole swap places.

But the weakening trend could also be explained by a growing magnetic anomaly in the southern Atlantic Ocean, and may not be the sign of a large-scale polarity reversal, the researchers suggest.

Crucial measurements

David Gubbins, an expert in geomagnetism at the University of Leeds, UK, led the study; his team began scouring old ships' logs in the 1980s, gathering entries that recorded the direction of Earth's magnetic field.

It was common practice for captains in the 17th and 18th centuries to calibrate their ship's compasses relative to true north and, less often, to measure the steepness at which magnetic field lines entered the Earth's surface.

Even as far back as 1590, these measurements were typically very accurate – to within half a degree. "Their lives depended on it," Gubbins explains.

Such ship-log records may not be adequate for reconstructing the planet's past magnetic fields in fine detail, but the data can estimate large-scale features quite well. "In that regard, I think it's a very solid result," says Catherine Constable, an expert in palaeomagnetism at the University of California, San Diego, US, who was not involved in the study.

Mineral evidence

Using the locations of the ships at the time of measurement, these records allowed Gubbins to construct a map of the relative strength of Earth's magnetic field between 1590 and 1840, which was published in 2003.

The data was combined with 315 estimates of the field's overall strength during that period, based on indirect clues, such as mineral evidence in bricks from old human settlements or volcanic rock.

Gubbins showed that the overall strength of the planet's magnetic field was virtually unchanged between 1590 and 1840. Since then, the field has declined at a rate of roughly 5% per 100 years.
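As a sanity check on those figures (my own arithmetic, not part of the study): losing 5% of today's field strength per century, extrapolated linearly, reaches zero after 100/5 = 20 centuries, which is where the roughly 2,000-year estimate above comes from. If the field instead lost 5% of its remaining strength each century, it would weaken but never vanish.

rate = 0.05                      # fraction of today's field lost per century

centuries_to_zero = 1.0 / rate   # linear extrapolation: 20 centuries
print(f"linear decline reaches zero in {centuries_to_zero * 100:.0f} years")

strength = 1.0
for _ in range(20):              # 20 centuries = 2000 years
    strength *= 1 - rate         # 5% of the *remaining* field per century
print(f"exponential decline leaves {strength:.0%} after 2000 years")  # ~36%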

Every 300,000 years on average, the north and south poles of the Earth's magnetic field swap places. The field must weaken and go to zero before it can reverse itself. The last such reversal occurred roughly 780,000 years ago, so we are long overdue for another magnetic flip. Once it begins, the process of reversing takes less than 5000 years, experts believe.

Growing anomaly

A large-scale reversal might indeed be underway, Gubbins says, but the acceleration of the magnetic decline since the mid-1800s is probably due to a local aberration of the magnetic field called the South Atlantic Anomaly. "It looks like that's responsible for most of the fall we're seeing," he says.

This patch of reversed magnetic field lines covering much of South America first appeared in about 1800, according to the ship-log data. It slowly grew in strength, and by about 1860 it was large enough to affect the overall strength of the planet's magnetic field, Gubbins says.

If the field does flip 2000 years from now, the Northern Lights will be visible all over the planet during the transition, and solar radiation at ground level will be much more intense, with no field to deflect it.

There is no need to worry, though, argues Gubbins, as our ancestors have lived through quite a few of these transitions already.

Journal reference: Science (vol 312, p 900)

source:http://www.newscientist.com/article/dn9148-ships-logs-give-clues-to-earths-magnetic-decline.html


Convicted Hacker Adrian Lamo Refuses to Give Blood

"Hopefully everyone here remembers the case of Adrian Lamo, a so-called 'gray hat' hacker who plead guilty to one count of computer crimes against Microsoft, Nexis-Lexis and the New York Times in 2004. He got a felony conviction, six months detention in his parents' home, and two years of probation. Today, as a condition of his probation, he must provide a sample of his DNA in the form of a blood sample, something which he has refused to do. Should convicted felons on probation have privacy rights over their DNA? Or is a blood sample like a fingerprint, something that everyone should provide to their government?"

source:http://yro.slashdot.org/article.pl?sid=06/05/13/1331217

The week in technology: Yahoo says ya boo to Microsoft

The fight is on between the three internet search titans, after Yahoo’s Terry Semel threw down the gauntlet to Microsoft, saying the software giant’s recently elevated ambitions in the search arena were a lost cause.

“My impartial advice to Microsoft is that you have no chance. The search business has been formed,” he said in an interview with the New Yorker’s Ken Auletta.

“Very cool” said one blogger, adding “Microsoft just does not seem to be getting much love these days”.

“That crashing sound you hear, is a chair being thrown across an office in Redmond,” said another.

And if Bill Gates and co were smarting at that taunt, the hurt must have been exacerbated by the fact that Microsoft had been hoping to get Mr Semel’s company onside in its greater fight to win market share from Google, the dominant force in the lucrative sponsored search market.

Mr Semel, who spoke about his own regrets at not buying Google, flatly denied there had been any talks about Microsoft buying Yahoo and said he had rebuffed an offer to buy a stake in the search business.

“I will not sell a piece of search - it is like selling your right arm while keeping your left; it does not make any sense.”

Last week Microsoft raised its ambitions in the internet search arena when it announced that all advertising on MSN in the US, France and Singapore would be powered by its own AdCenter system, thus dispensing with ads generated by Yahoo.

But Microsoft had been hoping that its partnership with Yahoo might lead to something altogether more meaningful. It will be interesting to see how the spurned suitor reacts.

Google gets seriously systematic

Meanwhile, Google denied it was responding to Microsoft’s challenge when it said it would divert more resources into developing search technology.

In another sign that Google is growing up, the technology group formerly known as fun said it had also introduced a more “systematic” approach to management and announced a string of search-related products to prove “yes, we are still all about search”.

After a long period of diversification, which has seen Google branch into a range of activities including instant messaging, wi-fi and classified ads, the company said that the proportion of R&D dedicated to search had fallen below its long-running 70 per cent target.

“In a world of Google Earth, Blogger, Writely, Picasa, Hello, Gmail, and Google Talk, it’s easy to forget that Google is ultimately a search engine,” commented chad78 on digg.com.

The company, renowned for allowing engineers time to pursue new ideas, said it would now “encourage” its development staff to readjust the balance towards its core search business.

Game consoles square up on price

The big news at the E3 video games show was the Sony PlayStation 3 price - $499 with a 20GB hard drive or $599 with 60GB; or about $100 more than the comparable Xbox 360 models. Sony says the fact that it’s “the best” and has a Blu-ray high-definition DVD player makes it all worthwhile. Nintendo, meanwhile, is expected to price its Wii at between $200 and $249 in its mission to reach the highly sought-after market of “casual gamers” and even “non-gamers”. This elusive category of potential gamers - women, small children, older people - is probably less likely to fork out quite so much of their hard-earned cash than the traditional gaming market of young men.

E3 also saw the debut of the Wii, with its ambitious controller that can be moved around in 3D space. Engadget interviewed Shigeru Miyamoto, the legendary creator of Donkey Kong, Mario and Legend of Zelda, and asked the question that had to be asked: what if gamers are too lazy to go along with these active shenanigans?

Joystiq tried an early version of the controller and gave it a mixed review, along the lines that it seems better with some games than with others. They also had high resolution shots of the PlayStation 3.

Nintendo fans are nothing if not devoted, as this FT poll indicates: the Wii/Revolution got the fewest votes but was not short of plaudits such as “watch and see Nintendo pulls up from nowhere and beat down Microsoft”. And this was before they’d unveiled the new name...

Porn domain name ICANNed

ICANN, the US-based administrator of the “root” system that lets a user connect to information on a server held anywhere else in the world, has voted not to set up a new .xxx top-level domain for internet porn sites.

But the decision has re-ignited a row over the neutrality of the international body, with the European Union fearing that intense lobbying from the US government - under influence from religious and family groups incensed at the “legitimisation” of pornography - may have pressurised ICANN into voting against the move.

But .xxx has not only split nations - even the sex industry can’t decide, with some websites objecting for fear that it might make censorship easier.

Unsurprisingly there was a lot of comment on this story, much of it not reaching me through the company firewall, but Michael Bloch on tamingthebeast.net thought .xxx would have been a step in the right direction:

“I really can’t see how, properly enforced, there would have been any losers in a compulsory .xxx strategy in the long term:

- parents would have less to worry about knowing that anything from .xxx domains could be easily blocked

- system administrators would have had an easier job in filtering

- the adult industry would have their own playpen

- those wanting to access adult content would know where to find it

- those operating outside the guidelines would be constantly playing a “dodge the cops” game.

The only possible problem I could envision is determining what constitutes ‘adult content’ - that could be a particularly contentious issue”.

Steven Fettig described ICANN’s decision as “just plain stupid”:

“You aren’t going to rid the net of pornography, period. You would have made it a lot easier for administrators like myself to be able to at least block a whole TLD that we don’t want people to have access to, instead of having to depend on poor software AI to figure out which domains have pornographic content and which don’t. Regardless of my opinion of the state of pornography on the net, this is a lose lose for everyone involved.”

“Shame on ICANN for bowing to what can only be conceived as political pressure,” he added.

source:http://news.ft.com/cms/s/11eadcd4-e1a3-11da-bf4c-0000779e2340.html

