Tuesday, February 07, 2006
Got a Question for Wikipedia Founder Jimmy Wales?
source:http://interviews.slashdot.org/interviews/06/02/06/2223223.shtml
Zoep Goes Open Source
source:http://slashdot.org/articles/06/02/07/0420249.shtml
Brain changes significantly after age 18, says Dartmouth research
Two Dartmouth researchers are one step closer to defining exactly when human maturity sets in. In a study aimed at identifying how and when a person's brain reaches adulthood, the scientists have learned that, anatomically, significant changes in brain structure continue after age 18.
The study, called "Anatomical Changes in the Emerging Adult Brain," appeared in the Nov. 29, 2005, on-line issue of the journal Human Brain Mapping. It will appear in a forthcoming issue of the journal's print edition. Abigail Baird, Assistant Professor of Psychological and Brain Sciences and co-author of the study, explains that their finding is fascinating because the study closely tracked a group of freshman students throughout their first year of college. She says that this research contributes to the growing body of literature devoted to the period of human development between adolescence and adulthood. "During the first year of college, especially at a residential college, students have many new experiences," says Baird. "They are faced with new cognitive, social, and emotional challenges. We thought it was important to document and learn from the changes taking place in their brains."
For the study, Baird and graduate student Craig Bennett looked at the brains of nineteen 18-year-old Dartmouth students who had moved more than 100 miles to attend college. A control group of 17 older students, ranging in age from 25 to 35, was also studied for comparison. The results indicate that significant changes took place in the brains of these individuals. The changes were localized to regions of the brain known to integrate emotion and cognition. Specifically, these are areas that take information from our current body state and apply it for use in navigating the world. "The brain of an 18-year-old college freshman is still far from resembling the brain of someone in their mid-twenties," says Bennett. "When do we reach adulthood? It might be much later than we traditionally think." The study was funded by a grant from the National Institute of Child Health and Human Development.
source:http://www.dartmouth.edu/~news/releases/2006/02/06.html
Revolution in a Box: the Center for Responsible Nanotechnology
Founded in December 2002, the Center for Responsible Nanotechnology has a modest goal: to ensure that the planet navigates the emerging nanotech era safely. That's a lot for a couple of volunteers to shoulder, but Mike Treder and Chris Phoenix have carried their burden well, and done much to raise awareness of the potential risks and benefits of molecular manufacturing, including a major presentation at the US Environmental Protection Agency on the impacts of nanotechnology. We first linked to CRN back in October of 2003, and have long considered them a real WorldChanging ally.
We conducted this interview as a series of email exchanges over the last few months. This post captures (and organizes) the highlights of that conversation.
Mike, Chris -- thank you. Your work is one of the reasons we have optimism for the future.
WorldChanging: So, to start -- what is the Center for Responsible Nanotechnology hoping to make happen?
Center for Responsible Nanotechnology: We want to help create a world in which advanced nanotechnology -- molecular manufacturing -- is widely used for beneficial purposes, and in which the risks are responsibly managed. The ability to manufacture highly advanced nanotech products at an exponentially accelerating pace will have profound and perilous implications for all of society, and our goal is to lay a foundation for handling them wisely.
WC: So you set up a non-profit. How is that going?
CRN: CRN is a volunteer organization. We have no paid positions. Our co-founders have dedicated time to this cause in lieu of professional paying careers. But the thing is, technical progress toward nanotechnology is really accelerating, and it's become more urgent than ever for us to examine the global implications of this technology and begin designing wise and effective solutions.
It won't be easy. CRN needs to grow, quickly, to meet the expanding challenge. We're asking people who share the belief that our research must keep moving ahead to support us with small or large donations.
WC: One of the unusual aspects of CRN is that you're neither a nanotech advocacy group nor unmoving nanotech critics. Your focus is on the responsible development and deployment of next-generation nanotechnologies. Tell me a bit about what "responsible nanotechnology" looks like.
CRN: You’re right that we have tried hard to stay in a “middle” place. We sometimes refer to it as between resignation (forsaking attempts to manage the technology) and relinquishment (forsaking the technology altogether). Our view is that advanced nanotechnology—molecular manufacturing—should be developed as fast as it can be done safely and responsibly. We’re promoting responsible rapid development of the technology—not because we believe it is safe, but because we believe it is risky—and because the only realistic alternative to responsible development is irresponsible development.
CRN: So, what does ‘responsible’ mean? First, that we take effective precautions to forestall a new arms race. Second, that we do what is necessary to prevent a monopoly on the technology by one nation, one bloc of nations, or one multinational corporation. Third, that we seek appropriate ways to share the tremendous benefits of the technology as widely as possible; we should not allow a ‘nano-divide’. Fourth, that we recognize the possibilities for both positive and negative impacts on the environment from molecular manufacturing, and that we adopt sensible global regulations on its use. And fifth, that we understand and take precautions to avert the risk of severe economic disruption, social chaos, and consequent human suffering.
WC: How does the "responsible" approach differ from something like the "Precautionary Principle?" What's your take on the concept of "precaution" applied to emerging technologies?
CRN: One of our earliest published papers was on that very topic. It’s called “Applying the Precautionary Principle to Nanotechnology.” CRN’s analysis shows that there are actually two different forms of the Precautionary Principle, something that many people don’t realize. We call them the ‘strict form’ and the ‘active form’.
The strict form of the Precautionary Principle requires inaction when action might pose a risk. In contrast, the active form calls for choosing less risky alternatives when they are available, and for taking responsibility for potential risks. Because the strict form of the Precautionary Principle does not allow consideration of the risks of inaction, CRN believes that it is not appropriate as a test of molecular manufacturing policy.
The active form of the Precautionary Principle, however, seems quite appropriate as a guide for developing molecular manufacturing policy. Given the extreme risks presented by misuse of nanotechnology, it appears imperative to find and implement the least risky plan that is realistically feasible. Although we cannot agree with the strict form of the Precautionary Principle, we do support the active form.
WC: What is the CRN Task Force, and what do you hope to have it accomplish? [Disclaimer: I am a member of the CRN Task Force.]
CRN: Without mutual understanding and cooperation on a global level, the hazardous potentials of advanced nanotechnology could spiral out of control and deny any hope of realizing the benefits to society. We’re not willing to leave the outcome to chance.
So, last August we announced the formation of a new Task Force, convened to study the societal implications of this rapidly emerging technology. We’ve brought together a diverse group of more than 60 world-class experts from multiple disciplines to assist us in developing comprehensive recommendations for the safe and responsible use of nanotechnology.
Our first project is just nearing completion. Members of the task force have written a series of essays describing their greatest concerns about the potential impacts of molecular manufacturing. We have completed editing approximately 20 excellent articles that range from discussion of economic issues and security issues, to the implications of human enhancement and artificial intelligence. They will be published in the March 2006 issue of Nanotechnology Perceptions, an academic journal maintained by a couple of European universities. We will simultaneously publish the essays at the Wise-Nano.org website, where anyone can read and comment on them.
WC: We've discussed the different kinds of nanotechnology on WorldChanging, and you folks posted a very useful follow-up to one of our pieces on that subject. To be clear, when we talk about "nanotechnology" in this context, we're talking about "nanofactories." So let's drill down a bit on that particular subject. What kinds of things could an early version of a nanofactory make? Are we just talking desktop printing of simple physical objects (like a cup), items embedding diverse materials & electronics (like a laptop), or organic and biochemical materials (like medicines or food)?
CRN: The first, tiny nanofactory will be built by intricate laboratory techniques; then that nanofactory will have to build a bigger one, and so on, many times over. This means that even the earliest usable nanofactory will necessarily work extremely fast and be capable of making highly functional products with moving parts. So, in addition to laptops and phones, an early nanofactory should be able to make cars, home appliances, and a wide array of other products.
Medicines and food will not be early products. A large number of reactions will be required to make the vast variety of organic molecules. Some molecules will be synthesized more easily than others. It may work better first to build (using a nanofactory) an advanced fluidic system that can do traditional chemistry.
Food will be especially difficult because it contains water. Water is a small molecule that would float around and gum up the factory. Also, food contains a number of large and intricate molecules for taste and smell; furthermore, nourishing food requires mineral elements that would require extra research to handle with nanofactory-type processes.
WC: It seems to me that manufacturing via nanofactories will require some different concepts of the manufacturing process than the automated assembly-line model most of us probably have in mind when we think of "factories." Parallel to early design work on the hardware end, has there been much work done on the software/design end of how nanofactories would work?
CRN: We have thought about how nanofactories would be controlled, and it seems probable that it's just not a very difficult problem, at least for the kind of nanofactory that can include lots of integrated computers. (This should include almost any diamond-building nanofactory, and a lot of nanofactories based on other technologies as well.)
Until automated design capabilities are developed, products will be limited largely by our product design skills. A simple product-description language, roughly analogous to PostScript, would be able to build an enormous range of products, but would not even require fancy networking in the nanofactory. (Drexler discusses product-description languages in section 14.6 of Nanosystems.)
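To make the PostScript analogy slightly more concrete, here is a purely hypothetical sketch of what a declarative product description might look like: the product is specified as parts and dimensions rather than as step-by-step machine instructions, and the factory's control software would be responsible for translating it into molecular operations. Every name and field below is invented for illustration; none of it comes from CRN or from Nanosystems.

```python
# Hypothetical sketch of a declarative product-description format,
# loosely analogous to the way PostScript describes a page rather than
# the printer's motor movements. All names here are invented for
# illustration; nothing below reflects any actual nanofactory design.
import math

product = {
    "name": "drinking_cup",
    "material": "diamondoid",  # assumed bulk material, purely illustrative
    "parts": [
        {"shape": "tube", "radius_mm": 40, "height_mm": 100, "wall_mm": 2},
        {"shape": "disc", "radius_mm": 40, "thickness_mm": 2},
    ],
}

def estimate_volume_mm3(part):
    """Rough volume per part -- enough to show that a description like
    this can be checked and costed before anything is built."""
    if part["shape"] == "tube":
        outer = math.pi * part["radius_mm"] ** 2 * part["height_mm"]
        inner = math.pi * (part["radius_mm"] - part["wall_mm"]) ** 2 * part["height_mm"]
        return outer - inner
    if part["shape"] == "disc":
        return math.pi * part["radius_mm"] ** 2 * part["thickness_mm"]
    raise ValueError("unknown shape")

total = sum(estimate_volume_mm3(p) for p in product["parts"])
print(f"{product['name']}: ~{total:,.0f} mm^3 of {product['material']}")
```

The point of the analogy is that, as with PostScript, the person writing the description never specifies how the output device does its job.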
WC: What makes nanofactories so different from traditional production methods?
CRN: It's important to understand that molecular manufacturing implies exponential manufacturing--the ability to rapidly build as many desktop nanofactories (sometimes called personal fabricators) as you have the resources for. Starting with one nanofactory, someone could build thousands of additional nanofactories in a day or less, at very low cost. This means that projects of almost any size can be accomplished quickly.
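The "thousands of additional nanofactories in a day" figure is just doubling arithmetic. A minimal sketch, assuming -- purely for illustration, since no replication time is given here -- that every existing nanofactory can build one copy of itself in a fixed number of hours:

```python
# Doubling arithmetic behind "exponential manufacturing".
# The hours-per-copy values are assumed for illustration only.

def nanofactory_count(hours_elapsed: float, hours_per_copy: float) -> int:
    """Factories after `hours_elapsed`, starting from one, if every
    existing factory produces one copy every `hours_per_copy` hours."""
    doublings = int(hours_elapsed // hours_per_copy)
    return 2 ** doublings

for hours_per_copy in (1.0, 2.0, 4.0):
    count = nanofactory_count(24, hours_per_copy)
    print(f"copy time {hours_per_copy:>3} h -> {count:,} factories after one day")
```

With a two-hour copy time the count passes four thousand within a day; with a one-hour copy time it passes sixteen million. That is the sense in which the manufacturing capacity is exponential rather than merely fast.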
Those who have access to the technology could use it to build a surveillance system to track six billion people, weapons systems far more powerful than the world's combined conventional forces, construction on a planetary scale, or spaceflight as easy as airplane flight is today.
Massive construction isn't always bad. Rapid construction could allow us to build environmental remediation technologies on a huge scale. Researchers at Los Alamos National Laboratory are suggesting that equipment could be built to remove significant quantities of carbon dioxide directly from the atmosphere. With molecular manufacturing, this could be done far more quickly, easily, and inexpensively.
In addition to being powerful, the technology will also be deft and exquisite. Medical research and treatment will advance rapidly, given access to nearly unlimited numbers of medical robots and sensors that are smaller than a cell.
This only scratches the surface of the implications. Molecular manufacturing has as many implications as electricity, computers, and gasoline engines.
WC: In other words, nanotechnology is both an engineering process and (for lack of a less jargony phrase) an "enabling paradigm" -- it doesn't just make it possible to do what we now do, but better/faster/cheaper; it also makes it possible (in time) to do some things that we can't now do.
CRN: Yes, exactly. Another good way to look at it is as a general-purpose technology: enhancing and enabling a wide range of applications. It will be similar in effect to, say, electricity or computers.
WC: Back up a sec. The complexities of surveillance systems, planetary engineering, and cheap & easy space flight come from much more than not being able to make enough or sufficiently-precise gear. There are also questions of design, of power, of scale, and so forth. These seem likely to take substantial effort and time.
CRN: The speed of development will differ for each project. But by today's standards, almost any project could be done quite quickly. A lot of hardware development time today is spent in compensating for the high cost and large delay associated with building each prototype. If you could build a prototype in a few hours at low cost, a lot of engineering could be bypassed. Of course, this is less true for safety-critical systems. But imagine how quickly space flight could be developed if Elon Musk (SpaceX), John Carmack (Armadillo), and Burt Rutan could each build and fly a new (unmanned) spacecraft every day instead of waiting three months or more.
Power will of course have to be supplied to any project. But one of the first projects may be a massive solar-gathering array that could supply power for planet-scale engineering. A nanofactory-built solar array should be able to repay the energy cost of its construction in just a few days, so scaling up the solar array itself would not take too long.
A comparable advantage can be seen today in computer chip design. FPGAs and ASICs are two kinds of customizable computer chips. They differ in that an ASIC's design is fixed when the chip is manufactured, while an FPGA can have a new design downloaded to it in seconds, even after it is integrated into a circuit. An FPGA can be designed by a person in a week or two. An ASIC requires a team of people working for several months--largely to make absolutely sure that they have not made even a single mistake, which could cost the company millions of dollars and months of delay. The difference between today's development cycle and nanofactory-enabled product R&D is the difference between ASICs and FPGAs.
WC: The degree to which research is largely corporate, academic or governmental will obviously vary from country to country. Who are some of the organizations doing innovative work in nanotech?
CRN: There are only a few companies that are explicitly working on molecular manufacturing. Many more are doing work that is relevant, but not aiming at that goal--or at least not admitting to it.
Zyvex LLC is working on enabling technologies, with the stated goal of providing "tools, products, and services that enable adaptable, affordable, and molecularly precise manufacturing."
In Japan, individual silicon atoms have been moved and bonded into place since 1994, first by the Aono group and then by Oyabu. Because this used a much larger scanning probe microscope to move the atoms, it is not a large-scale manufacturing technique.
Researchers at Rice University have developed a "nano-car" with single-molecule wheels that roll on molecular bearings, and reportedly are aiming toward "nano-trucks" that could transport molecules in miniature factories.
WC: To what degree is nanotechnology research a province of the big industrial countries, and to what degree is it accessible to forward-looking developing countries (what we term on WorldChanging the "leapfrog nations")?
CRN: In the broad sense of nanoscale technologies, some kinds of nanotech research are quite accessible to leapfrog nations. Molecular manufacturing research may be accessible as well. Atom-level simulations can now be run on desktop PC's. Some of the development pathways, such as biopolymer approaches, require only a small lab's worth of equipment.
We don't yet know exactly how difficult it will be to develop a nanofactory. Several approaches are on the table, but there could be a much easier approach waiting to be discovered. It's probably safe to say that any nation that can support a space program could also engage in substantial research toward molecular manufacturing. Note that several individuals are now supporting space programs, including Elon Musk of SpaceX and Paul Allen who funded SpaceShipOne.
WC: Do you expect home "hobbyist" designers -- perhaps using home-made nanotools -- to have any role in the nanotechnology revolution, as "garage hackers" did in the early days of personal computing?
CRN: We have been aware of some of the scanning probe microscope efforts. If advanced molecular manufacturing requires a vacuum scanning-probe system cooled by liquid helium, it's doubtful you could do that in your garage. On the other hand, if all it requires is an inert-gas environment at liquid nitrogen temperatures, then some work might be doable by a very competent hobbyist.
Design of nanomachines (as opposed to construction) is already accessible to hobbyists. Without the ability to test their designs in the lab, many of the designs will have bugs, of course. However, at least in the early stages, the development of new design approaches and the demonstration that we've learned even approximately how to implement mechanisms will be important contributions.
WC: A big concern in a world of easy fabrication is what to do with broken or obsolete stuff. In what ways could a nanofactory-type system use "waste" materials, with an eye towards the "cradle-to- cradle" concept?
CRN: If the stuff is made of light atoms, such as carbon and nitrogen, it should be straightforward to burn it in an enclosed system. The resulting gases could be cooled and then sorted at a molecular level, and the molecules could be stored for re-use.
It seems likely that products will be designed and built using modules that would be somewhat smaller than a human cell. If these modules are standardized and re-usable, then it might be possible to pull apart a product and rearrange the modules into a different product. However, there are practical problems: the modules themselves may be obsolete, and they would need to be carefully cleaned before they could be reassembled. It would probably be easier to reduce them to atoms and start over, since every atom could be contained and re-used.
WC: That seems likely to take a serious amount of energy to accomplish thoroughly, am I right? That is, if I toss my cell phone into an incinerator, different parts will cook at different temperatures, and there are some components that would require some fairly high temps to break down. In addition, the nano-incinerator will need to be able to sort out the various atoms that are emitted by the burning object. Sounds complex.
This becomes an important issue, because a world where it's really easy to make stuff but much harder to get rid of it starts to accelerate some already-serious problems around garbage, especially hazardous wastes.
CRN: Breaking down a carbon-based product just requires heating it a bit, then exposing it to oxygen or hydrogen--something that can combine with the carbon to produce small gas molecules. This process will likely be exothermic--in other words, being high in carbon, nano-built products would burn very nicely when you wanted them to. (Adding small integrated water tanks that were drained before recycling would prevent premature combustion.)
Constructing a nano-built product requires not only rearranging a lot of molecular bonds, but computing how to do that, and moving around a lot of nanoscale machinery. A nanofactory might require several times the bond energy to accomplish all that. The energy required to break down a nano-built product should be less than the energy it took to make it in the first place. And in terms of product strength per energy invested, nano-built diamond would probably be many times better than aluminum--a cheap, energy-intensive commodity.
WC: We've occasionally written on WC about the increasing "digitization" of physical objects, whether through embedded computer chips and sensors or even the introduction of DRM-style use controls. On the flip side, futurists have for a few years talked about the possibility of "napster fabbing" -- swapping design files, legally or otherwise, and/or the development of an open source culture around next-generation fabrication tools like nanofactories.
What do you see as the key intellectual property issues emerging from the rise of nanomanufacturing?
CRN: Because molecular manufacturing will be a general-purpose technology, we can expect that it will raise many of the issues that exist today in many different domains. Many issues will be the same as for software and entertainment, but the stakes will be far higher. The issues we see in medicine, with controversies over whether affordable pharmaceuticals should be provided to developing nations, will also apply to humanitarian applications of nanofactory products.
WC: To tease that point out for a minute, you're suggesting that the issue won't be the difficulty or expense of making the materials, but the expense of the time necessary to come up with the design in the first place. Big pharma argues that the majority of their work is actually in dead ends, and that the high fees they charge for the drugs that do work are to make up for the time they spend on the stuff that doesn't work. Would the nanofactory world -- at least the early days of it -- parallel this?
CRN: It's not an exact parallel. Some percentage of pharmaceutical development costs go to preliminary testing, another percentage to clinical trials--which are hugely expensive due to regulation and liability--and a third percentage to advertising and incentives for doctors to prescribe the new medicine. Of these three, probably only the first will apply to early nanofactory products.
We do expect design time to be a large component of the cost of a product. But the Open Source software movement shows that significant design time can be contributed without adding to product price.
WC: So you see Open Source as an aspect of the nanofactory future?
CRN: Whether or not open source approaches will be allowed to develop nanofactory products is the single biggest intellectual property question. Open source software has been astonishingly creative and innovative, and open source products could be a rich source of innovation as well as humanitarian designs. Even businesses could benefit: since open source usually doesn't put a final polish on its products, commercial interests can repackage them and sell at a good profit.
However, the business interests that will want a monopoly, and the security institutions that will be uncomfortable with unrestricted fabbing, will probably oppose open source products. It would be easy to criminalize unrestricted fabbing, though far more difficult to prevent it. Prevention of private innovation, through simply not allowing private ownership of nanofactories, would have to be rigorously enforced worldwide--likely impossible, and certainly oppressive. Criminalization without prevention would almost certainly be bad policy, but it will probably be tried.
WC: We see early parallels to this in the issue of open source and "digital rights management" routines. The idea of outlawing Open Source (because it can't be locked down) even gets kicked around from time to time. It seems likely that an open source approach that could result in new weapons might be even more likely to trigger this kind of response.
CRN: Historically, Open Source has been a huge source of innovation. Open source applied to molecular manufacturing could result in new weapons, but also in new defenses. Shutting down Open Source might not reduce the weapons much, but it probably would reduce the development of defenses. We should think very carefully before we reduce our capacity to design new defenses. That said, you may well be right that a combination of government and corporate interests would work together to successfully eliminate Open Source-type development.
WC: What would you say are your top concerns about how nanofactory technology might develop?
CRN: Our biggest concern is that molecular manufacturing will be a source of immense military power. A medium-sized or larger nation that was the sole possessor of the technology would be a superpower, with a strong likelihood of becoming the superpower if they were sufficiently ruthless. This implies geopolitical instability in the form of accelerating arms races and preemptive strikes. For several reasons, a nanofactory-based arms race looks less stable than the nuclear arms race was.
Related to the military concern is a tangle of security concerns. If molecular manufacturing proliferates, it will become relatively easy to build a wide range of high-tech automated weaponry. Accountability may decrease even as destructive power increases. The Internet, with its viruses, spam, spyware, and phishing, provides a partial preview of what we might expect. It could be very difficult to police such a society without substantial weakening of civil rights and even human rights.
Economic disruption is a likely consequence of widespread use of molecular manufacturing. On the one hand, we would have an abundance of production capacity able to build high-performance products at minimal expense. On the other hand, this could threaten a lot of today's jobs, from manufacturing to transportation to mineral extraction.
Environmental damage could result from widespread use of inexpensive products. Although products filling today's purposes could be made more efficient with molecular manufacturing, future applications such as supersonic and ballistic transport may demand far more energy than we use today.
Another major risk associated with molecular manufacturing comes from not using it for positive purposes. Artificial scarcities—legal restrictions—have been applied to lifesaving medicines. Similar restrictions on molecular manufacturing, whether in the form of military classification, unnecessary safety regulations, or explicit intellectual property regulation, could allow millions of people to die unnecessarily.
WC: We know from the digital restrictions/"piracy" debate that technical limitations on copying, etc., do an adequate job of preventing regular folks from duplicating movies, software and such, whether for illicit reasons (passing a copy to a friend) or otherwise (making a backup or other "fair use"), while doing little to prevent real IP pirates from duping off thousands of copies to sell on the street in Shanghai or the like.
In short, there's every reason to believe that top-down efforts to stymie the illegal/illicit/irresponsible use of nanofactories will be only marginally effective, at best, while driving the worst stuff deep underground and preventing regular citizens from using their nanofactories in ways that would be beneficial and not significantly harmful.
CRN: It would be premature to dismiss all top-down regulation as ineffective. At the same time, the reduction in humanitarian and other benefits from excessive regulation is one of CRN's primary concerns. It is certainly true that regulation will impose a significant cost in lost opportunities. However, because there are so many different types of harm that could be done with a nanofactory, we are not ready to say that all regulation would be undesirable.
It will be difficult to apply "fine-grained relinquishment" (Kurzweil's term) to a general-purpose technology like nanofactories. However, we will probably have to achieve this, because both blanket permissiveness and blanket restrictions will impose extremely high costs and risks.
As we have said before, there will be no simple solutions. We will need a combination of both top-down and emergent approaches.
WC: I've been a pretty vocal advocate of openness as a tool for countering dangerous uses. It's a bit counter-intuitive, I admit, but there's real precedent for its value. Most experts see free/open source software, for example, as being more secure than closed, proprietary code. And the treatment for SARS (to cite a non-computer example) emerged directly from open global access to the virus genome.
In both cases, the key is the widespread availability of the underlying "code" to both professional and interested amateurs. The potential increase in possible harmful use of that knowledge is, at least so far, demonstrably outweighed by the preventative use.
What do you think of an open approach to nanotechnology as a means of heading off disasters?
CRN: In a false dichotomy between totally closed and totally open, the open approach would seem to increase the dangers posed by hobbyists and criminals. A totally closed approach, assuming no one in power was insanely stupid, probably would not lead to certain kinds of danger such as hobbyist-built free-range self-replicators, the so-called grey goo.
I don't think we can count on no one in power being insanely stupid, however. Realistically, even a totally closed, locked-down, planet-wide dictator approach would not be safe.
A partially closed approach, where Open Source was criminalized but bootleg or independent nanofactories were available, would be prone to danger from criminals and rebellious hobbyists--and by the way, the world still needs a lot more research to determine just how extreme that danger is. An open approach probably would not increase the danger much versus a partially closed approach, and would certainly increase our ability to deal with the danger.
Remember Ben Franklin's adage: Three can keep a secret, if two are dead. There would be a substantial danger of disastrous abuse even with a mere one thousand people or groups having access to the technology (and the rest of the six billion at their mercy). It's not certain that the danger would be very much worse with a million or even a billion people empowered.
WC: Closing on a more positive note, what would you say are your biggest hopes about how this kind of technology might be applied? In other words, what does a world of responsible nanotechnology look like?
CRN: We would like to see a world in which security and geopolitical concerns are addressed proactively and skillfully, in order to maximize liberty without allowing any devastating uses.
We would like to see a world in which the ubiquity of tradeoffs is recognized, and where consequences are neither dismissed nor exaggerated. Regulation should be appropriate to the extent of the various risks. The drawbacks of inaction should be considered along with the risks and problems of action.
We would like to see a world in which everyone has access to at least a minimal molecular manufacturing capacity. The computer revolution has shown that inventiveness is maximized by a combination of commercial and open source development, and open source is a good generator of free basic products when the cost of production is tiny.
source:http://www.worldchanging.com/archives/004078.html
Scientists hail discovery of hundreds of new species in remote New Guinea
An astonishing mist-shrouded "lost world" of previously unknown and rare animals and plants high in the mountain rainforests of New Guinea has been uncovered by an international team of scientists.
Among the new species of birds, frogs, butterflies and palms discovered in the expedition through this pristine environment, untouched by man, was the spectacular Berlepsch's six-wired bird of paradise. The scientists are the first outsiders to see it. They could only reach the remote mountainous area by helicopter, and they described the trip as akin to finding a "Garden of Eden".
In a jungle camp site, surrounded by giant flowers and unknown plants, the researchers watched rare bowerbirds perform elaborate courtship rituals. The surrounding forest was full of strange mammals, such as tree kangaroos and spiny anteaters, which appeared totally unafraid, suggesting no previous contact with humans.
Bruce Beehler, of the American group Conservation International, who led the month-long expedition last November and December, said: "It is as close to the Garden of Eden as you're going to find on Earth. We found dozens, if not hundreds, of new species in what is probably the most pristine ecosystem in the whole Asian-Pacific region. There were so many new things it was almost overwhelming. And we have only scratched the surface of what is there." The scientists hope to return this year.
The area, about 300,000 hectares, lies on the upper slopes of the Foja Mountains, in the easternmost and least explored province of western New Guinea, which is part of Indonesia. The discoveries by the team from Conservation International and the Indonesian Institute of Sciences will enhance the island's reputation as one of the most biodiverse on earth. The mountainous terrain has caused hundreds of distinct species to evolve, often specific to small areas.
The Foja Mountains, which reach heights of 2,200 metres, have not been colonised by local tribes, which live closer to sea level. Game is abundant close to villages, so there is little incentive for hunters to penetrate up the slopes. A further 750,000 hectares of ancient forest is also only lightly visited.
One previous scientific trip has been made to the uplands - the evolutionary biologist and ornithologist Professor Jared Diamond visited 25 years ago - but last year's mission was the first full scientific expedition.
The first discovery made by the team, within hours of arrival, was of a bizarre, red-faced, wattled honeyeater that proved to be the first new species of bird discovered in New Guinea - which has a higher number of bird species for its size than anywhere else in the world - since 1939. The scientists also found the rare golden-fronted bowerbird, first identified from skins in 1825. Although Professor Diamond located their homeland in 1981, the expedition was able to photograph the bird in its metre-high "maypole" dance grounds, which the birds construct to attract mates. Male bowerbirds, believed to be the most highly evolved of all birds, build large and extravagant nests to attract females.
The most remarkable find was of a creature called Berlepsch's six-wired bird of paradise, named after the six spines on the top of its head, and thought "lost" to science. It had been previously identified only from the feathers of dead birds.
Dr Beehler, an expert on birds of paradise, which only live in northern Australia and New Guinea, said: "It was very exciting, when two of these birds, a male and a female, which no one has seen alive before ... came into the camp and the male displayed its plumage to the female in full view of the scientists."
Scientists also found more than 20 new species of frogs, four new butterflies, five new species of palm and many other plants yet to be classified, including what may be the world's largest rhododendron flower. Botanists on the team said many plants were completely unlike anything they had encountered before.
Tree kangaroos, which are endangered elsewhere in New Guinea, were numerous and the team found one species entirely new to the island. The golden-mantled tree kangaroo is considered the most beautiful but also the rarest of the jungle-dwelling marsupials. There were also other marsupials, such as wallabies, and mammals that have been hunted almost to extinction elsewhere. And a rare spiny anteater, the long-beaked echidna, about which little is known, allowed itself to be picked up by hand. Dr Beehler said: "What was amazing was the lack of wariness of all the animals. In the wild, all species tend to be shy of humans, but that is learnt behaviour because they have encountered mankind. In Foja they did not appear to mind our presence at all.
"This is a place with no roads or trails and never, so far as we know, visited by man ... This proves there are still places to be discovered that man has not touched."
Inhabitants of New Guinea
Birds
The scientists discovered a new species - the red-faced, wattled honeyeater - and found the breeding grounds of two birds of almost mythical status - the golden-fronted bowerbird and Berlepsch's six-wired bird of paradise, long believed to have disappeared as a separate species. The expedition also came across exotic giant-crowned pigeons and giant cassowaries - a huge flightless bird - which are among more than 225 species that breed in the area, including 13 species of birds of paradise. One scientist said that the dawn chorus was the most fantastic he had ever heard.
Mammals
Forty species of mammals were recorded. Six species of tree kangaroos, rare elsewhere in New Guinea, were abundant and the scientists also found a species which is new to Indonesia, the golden-mantled tree kangaroo. The rare and almost unknown long-beaked echidna, or spiny anteater, a member of a primitive group of egg-laying mammals called monotremes, was also encountered. Like all the mammals found in the area, it was completely unafraid of humans and could be easily picked up, suggesting its previous contact with man was negligible.
Plants
A total area of about one million hectares of pristine, ancient, tropical, humid forest contains at least 550 plant species, many of them previously unknown, including five new species of palm. One of the most spectacular discoveries was a so far unidentified species of rhododendron, which has a white scented flower almost six inches across, equalling the largest recorded rhododendron flower.
Butterflies
Entomologists among the scientists identified more than 150 different species of butterfly, including four completely new species and several new sub-species, some of which are related to the common English "cabbage white" butterfly. Other butterflies observed included the rare giant birdwing, which is the world's largest butterfly, with a wingspan that stretches up to seven inches.
Frogs
The Foja range is one of the richest sites for frogs in the entire Asia-Pacific region, and the team identified 60 separate species, including 20 previously unknown to science, one of which is only 14mm long. Among their discoveries were healthy populations of the rare and little-known lace-eyed frog and a new population of another frog, Xenorhina arboricola, which had previously only been known to exist in Papua New Guinea.
source:http://news.independent.co.uk/environment/article343740.ece
ActiveState Returns to Open-Source Roots
Financial terms of the transaction were not disclosed. The deal is expected to close within the next 30 days, the companies announced Feb. 6.
For Sophos, the decision to spin out ActiveState is hardly a surprise. The acquisition was meant to give the UK-based company the technology to add anti-spam capabilities to its enterprise-class anti-virus group, much like rivals Network Associates, Symantec and McAfee.
But, ActiveState's bread and butter was in the open-source dynamic languages space. ActiveState makes free distributions and commercial programming tools for programming languages like Perl, Python, PHP, Tcl (Tool Command Language) and Ruby, while Sophos sells integrated threat management services.
Once the anti-spam element of ActiveState was integrated with Sophos' anti-virus products, the marriage no longer made much sense, industry watchers said.
David Ascher, ActiveState CTO and vice president of engineering, said as much in a blog post announcing the Pender Financial acquisition.
"Sophos is focused on ensuring their success in the space of threat management, and there was broad agreement that ActiveState needs to stand on its own feet if it is to continue to do good things," Ascher said. "This deal results in two independent organizations, each focused on its own needs."
"[Being] independent will allow us to react faster, and make bigger bets," he added.
Bart Copeland, a well-known figure in Canadian technology circles, will join ActiveState as CEO.
Ascher said the company will maintain offices in Vancouver and continue development of several free language distributions, including ActivePerl, ActivePython and ActiveTcl.
The company also plans to soon release a full set of its downloads for OS X on Intel.
ActiveState is also sticking to current plans around tools such as Komodo and the Perl Dev Kit and Ascher said there are "significant upgrades" planned for next year.
source:http://www.eweek.com/article2/0,1895,1920555,00.asp
Would You Take A Paycut for More Interesting Work?
However, the work I actually do seems to be a waste of my CS education. My current project involves hooking up Excel and Access with a little VBA and some macros. The other day I was asked to export a Lotus Notes database into an Excel file and format it. The most programming-intensive project that I've done here was an ASP.NET webapp for the company intranet.
Am I selling out by continuing to work in my current firm? Should I take the pay-cut to work at a startup where I can make more use of my talents? I'm a recent grad with no loans or credit cards to pay, so I have a low cost of living aside from a girlfriend. Which would you prefer: fun at work, or fun outside of work?"
source:http://ask.slashdot.org/article.pl?sid=06/02/07/0150256
Early Puberty Often More Hazardous
source:http://science.slashdot.org/article.pl?sid=06/02/06/2239251
Mobile Cooking
To do this you will need two mobile phones - they do not have to be on the same network, but you will need to know the number of one of them. The only other items you will need are:
- An egg cup (make sure that the egg cup is made of an insulating material such as china, wood or glass - plastic will do. DO NOT use stainless steel or other metal).
- A radio, AM or FM - you can also use your hifi.
- A table or other flat surface on which to place the phones and egg cup. You can place the radio anywhere in the room but you might as well put it on the table.
How To Do It:
- Take an egg from the fridge and place it in the egg cup in the centre of the table.
- Switch on the radio or hifi and turn it up to a comfortable volume.
- Switch on phone A and place it on the table such that the antenna (the pokey thing at the top) is about half an inch from the egg (you may need to experiment to get the relative heights correct - paperbacks are good if you have any - if not you may be able to get some wood offcuts from your local hardware shop).
- Switch on phone B and ring phone A then place phone B on the table in a similar but complementary position to Phone A.
- Answer phone A - you should be able to do this without removing it from the table. If not, don't panic, just return the phone to where you originally placed it on the table.
- Phone A will now be talking to Phone B whilst Phone B will be talking to Phone A.
- Cooking time: This very much depends on the power output of your mobile phone. For instance, a pair of mobiles each with 2 Watts of transmitter output will take three minutes to boil a large free range egg. Check your user manual and remember that cooking time will be proportional to the inverse square of the output power for a given distance from egg to phone (a short sketch of this claimed scaling follows the diagram below).
- Cut out these instructions for future reference.
Phone A > > > > > > Egg < < < < < < Phone B
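Taking the article's own figures at face value (a pair of 2 W handsets boiling an egg in three minutes, with cooking time proportional to the inverse square of the output power), the claimed scaling can be restated as a short sketch. This simply encodes the relationship asserted above; it is not an endorsement of the method.

```python
# Restating the article's claimed scaling law: cooking time is said to be
# proportional to the inverse square of transmitter power, anchored at
# 2 W per phone -> 3 minutes. These figures come from the article itself.

REFERENCE_POWER_W = 2.0
REFERENCE_TIME_MIN = 3.0

def claimed_cooking_time_min(power_w: float) -> float:
    """Cooking time in minutes under the article's inverse-square claim."""
    return REFERENCE_TIME_MIN * (REFERENCE_POWER_W / power_w) ** 2

for watts in (0.25, 1.0, 2.0):
    print(f"{watts:>4} W per phone -> {claimed_cooking_time_min(watts):6.1f} minutes")
```

By the article's own formula, a typical 0.25 W handset would need over three hours - worth checking against your user manual before sacrificing an egg.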
BitTorrent and End to End Encryption
source:http://yro.slashdot.org/article.pl?sid=06/02/06/2039241
The Inside Track on Firefox Development.
Where Did Firefox Come From?
The story of Mozilla is long and rich in detail. There are many perspectives. This is mine.
Getting Involved
I got involved with Mozilla because I loved the idea of working on something that had the potential to make an impact on millions of people. My friends and I lived in our browsers, so there was also a tangible payoff for contributions that made it into a shipping Netscape release. After Netscape switched gears on the layout engine, it looked like the company needed all the help it could get. In early 1999 only the most basic elements of the old Communicator suite were in place in the new browser; you could barely browse or read mail as Netscape's engineers worked furiously to erect the framework of the application.
I thought about what I could offer. It was difficult — through my website I had taught myself JavaScript and HTML, but I did not know C++. Perhaps more importantly, my AMD K6/166 was not really capable of compiling a codebase as large as Mozilla (even when it could almost fit on a floppy!). What I did notice though was that the user interface was being developed using something similar to HTML and JavaScript. Realizing that my fussiness and attention to detail could be useful, I began working on improving the UI. I became involved both in polishing the developing browser front end and in the design of the XUL toolkit that rendered it. After a few months I was offered a job on the Browser team.
It was mid January, 2000, and I stood in the dimly lit international arrival lounge at SFO waiting for my ride. After a few moments a smiling man with a burst of reddish hair approached. It was gramps, and I had arrived.
An Awkward Alliance
The relationship between Netscape and the Mozilla open source project was uneasy. Mozilla wanted an independent identity, to be known as the community hub in which contributors could make investments of code and trust, while companies like Netscape productized the output. Netscape was not satisfied to let Mozilla turn the crank however; building and shipping a product with as many constraints as the Netscape browser was — and remains — a mighty challenge. Netscape was convinced it was the only one that knew what needed to be done. At the time, I think it was true.
While there were many community volunteers, more essential features were missing than present, and more critical bugs were unfixed than fixed. The only organization that had a strategy to drive the ominously lengthy bug lists to zero was Netscape.
Netscape's Communication Problem
Netscape made two mistakes. They did not publish enough product management information to enable the community to help them achieve their goals. They did not even consistently communicate what these goals were. The cohabitation of engineers and PMs did not contribute to open dissemination of information as it was easier to communicate with your immediate team than those on the outside. Without understanding the importance of publicly available documentation to effective community development, the extra steps to publish the results of internal discussions were not always taken.
The other mistake was not having a clear vision for product development. For the folks working in Client Product Development (CPD — the browser/mail software division at Netscape), the idea was to rebuild and improve upon the Communicator suite using the newly selected layout engine. The goal was to make a better browser and mail reader.[1]
Netcenter's Agenda
More importantly however, Netscape had to make money to survive. Across campus, the Netcenter division management had browser ideas of its own. Once the most popular site on the web, Netcenter's fortunes were waning as users switched to other portals like Yahoo and MSN. Netscape had to replace revenue from declining traffic and it looked at what it could do to differentiate itself. The one thing Netcenter had was close affiliation with a browser. Netcenter sought to monetize the browser in two ways: by linking the browser very closely to the portal itself in design and content and by linking the browser to features supplied by business deals.
Netcenter passed various requirements on to CPD. Over time these included things like a browser “theme” that was supposed to merge seamlessly into a particular iteration of the Netcenter web site. Users could be forgiven for thinking, when using the freshly themed browser, that their monitor was broken. Adding to the injury was an overload of links in the browser UI to Netcenter and partner properties, and a large eye-catching but mostly useless sidebar panel with additional partner content vying with web content for the user's attention.
Eventually, Mozilla.org forced Netscape to push most of its business specific functionality into its own private branding cvs server, where features like Netscape Instant Messenger were also developed. The controversial “Modern” theme, panned in the Netscape 6 Preview Release 1, was eventually removed and replaced by a more stylish but similarly non-native theme.
Mozilla's UI Dysfunction
Since most of the user interface design for the Netscape products was done by Netscape staff working to Netcenter requirements, the Mozilla user interface suffered. Instead of being a clean core upon which Netscape could build a product to suit its needs, the Mozilla suite never felt quite right; it was replete with awkward UI constructs that existed only to be filled in by overlays in the Netscape's private source repository — the “commercial tree.”
Compounding this dysfunction, at the time the project was being developed by over a hundred engineers in different, sometimes poorly connected departments within CPD. Netscape had grown rapidly in previous years, and with an uneven hiring bar, engineers whose abilities suggested they needed more assistance from others had far too much autonomy in feature design and implementation. User experience assistance was sparse, and as a result the application quickly bloated.
Fighting For Change
Engineers in the Browser and Toolkit groups were not happy with the way things were going. For my part, I began working on what would become known as the “Classic theme” — a visual appearance that respected the system configuration and would later form the foundation of what would be Firefox's theme. Since I had little graphic design talent I used icons from Netscape 4. This became the default theme for Mozilla distributed releases, Netscape ultimately choosing to use its “New Modern” theme for Netscape 6. Several of us argued strenuously against this decision but were unsuccessful. In the eyes of reviewers, the alien appearance of Netscape 6 would be the straw that broke the camel's back. Suck.com, an early pop-culture review, wrote an article decrying the themed interface, using Netscape 6 as its prime example of the failure of theming.
Despite the failure of Netscape 6, engineers were still enthusiastic about continuing development. Now that Netscape could include its partner customizations[2] — engineers called them “whore bars” — in its commercial tree, the engineering teams focused on improving the Open Source releases instead, hoping Mozilla would be the suite they had dreamt of building. Netscape branded products were largely ignored.
Many contributors, myself included, pushed for further improvements to the user interface. We got extensive pushback from people within the company. On more than one occasion I tried to flex my muscle as "user interface module owner" (Mozilla parlance, then something of a novelty — Mozilla had granted me this role in an attempt to show autonomy in project development after the disaster that was the original Modern theme). It did not go well. Weak management stressed the importance of seniority over logic when it came to feature design. I was told I could not expect to use Open Source tricks against folk who were employed by the Company (all hail!). I held true to my beliefs and refused to review low quality patches. I was almost fired. Others weren't so lucky. It became a source of great frustration and disillusionment for me. I lost motivation. I realized Netscape's Byzantine stranglehold still permeated the design of the Mozilla product, and that now as a Netscape employee I was expected to use my "module ownership" to support its whims. I was to be a puppet.
I disengaged. I no longer treated every patch I thought was low quality as a battle to be fought and won at all costs. Netscape and Mozilla continued to ship releases. The application improved in terms of stability and performance, but the user interface remained baroque.
New Beginnings
One night in mid 2001 I went to Denny's late at night with David Hyatt. Dave is the sort of person with an enthusiasm that is magnetic. You cannot help but become caught up in the excitement of the ideas generated during a discussion with him. We discussed the rot within Mozilla, which we blamed on Netscape and Mozilla's inability to assert independence. He suggested it'd be perhaps preferable to start again on the user interface – much of the code in the front end was so bloated and bad that it was better off starting from a fresh perspective. We talked about using C# and .NET, and Manticore was born. As is often the case with ideas and prototypes, the fun quickly deteriorates into tedium as the magnitude of the task becomes clearer. A couple of weeks after it was begun, Manticore died. Dave tried again though, first with Camino, and finally with Firefox.
These browser efforts were reactions to the rot we had seen in the Mozilla application suite. As Netscape began to lose favor within AOL, Netcenter's grip on CPD loosened, and the strength of the community grew. However, even after Netscape cleaned up a lot of its engineering operations, the UI did not improve. Rapid improvement to the user interface was now restrained by the same processes established to preserve code quality when weak engineers had free rein to check in.
There was no organized vision for the browser UI, and the ability to make changes was too widely distributed: anyone could make any change or addition to the UI as long as they had two other reviews. There was no clear plan for improving the resulting clutter. There was no vision.
Firefox was different. After 0.6 I laid out a plan for reaching 1.0. After a few cuts and sanity checks, a year and a half of engineering work by a motivated core of engineers on the front end and the continuing development of Gecko beneath, Firefox 1.0 shipped.
During this time Mozilla finally gained the independence it had long sought, establishing a non-profit foundation on the same day many of the remaining client engineers at Netscape were laid off in a mass termination that can only be described as a bloodbath.
There was and remains much resentment towards Firefox and its development model. At its creation, there was much shouting about how the many were not always smarter than the few, the merits of small development teams with strong centralized direction, the need to adhere strictly to Mozilla's module ownership policy[3]. In practice, these statements resulted in effectively locking everyone but the Firefox team out of the Firefox source code. We railed against the inefficiencies of past UIs. We were unnecessarily harsh, and polarized opinions. We had been badly wounded by the Netscape experience and the disorganization that had followed. I don't think a lot of people understood that. It wasn't something we could easily communicate.
To many, it looked like we were breaking ranks. We were claiming their work had no value. It was said that what we were doing went against the principles of community development. That wasn't true: most open source projects are centrally managed by a small few. Many have well defined release plans and maintain tight control over what contributions make it in. We had hurt our case though by being so dogmatic up front. We did not do a good job of PR.
Vindication
We were determined, and we brought the product through to a 1.0 release. It was a long, difficult, all consuming road. I may eventually write more about it. But for now I'll just say that despite the rocky start, and the criticisms faced along the way, the model worked. It worked better than it ever had before for Mozilla or Netscape. Firefox 1.0 shipped to over a million downloads on the first day alone, 10 million downloads in ten days, and over a hundred million downloads before it was replaced by Firefox 1.5 just over a year later.
Today Firefox is one of the brightest stars among up-and-coming tech brands. BrandChannel readers voted it the number 7 global brand across all segments — ahead of eBay and Sony. It is the first browser to reverse the loss of market share to Internet Explorer, reclaiming a healthy 10-25% share depending on the country. It is still growing. Firefox's mail counterpart, Thunderbird, is a successful and comprehensively featured mail application. The dream that team of engineers had back in the darkest days of Netscape 6 product hell has been delivered at last.
Here & Now
I am writing this because I want to provide a historical perspective on where we are today with Firefox. A lot has been told about the development of the Firefox browser since Firefox 1.0. The reality is that the story is bigger than just Firefox 1.0. It goes back years, spans continents, and includes a cast of thousands. It's a fantastic story, with all of your standard themes — greed, rage, turmoil, love lost. But mostly it's a story of dedicated people laboring to create something they truly believe in. That's something I think everyone should be able to relate to - no matter what their walk of life. That's why Mozilla is so powerful and extends well beyond just Firefox.
For me, the story included the realization that I had never believed in something this much before, and discovering how easily and arbitrarily your dreams could be snatched away. Ultimately though I realized that with some patience and good old-fashioned hard work, anything is possible.
Over the years, Mozilla finally gained the ability to be that crossroads where people could come together and share their thoughts on the internet and where it is going. Different people have different ideas, and these are borne out in the different projects that exist: Camino, SeaMonkey, Thunderbird, Sunbird, Chatzilla, Bugzilla and so on. These projects create the ecosystem that is Mozilla. While related projects may not always agree on approach, the work that is done is inevitably beneficial — one project feeding off the ideas of another, and vice versa. Whatever project scratches the itch of any particular person, having their contributions and ideas around is beneficial for all projects. Generic tools to support many instances provide a backbone to support today's demands and tomorrow's as well.
Firefox is so successful today that it is gaining attention from many quarters. Many new contributors are finding the project and new ways to help out. This sort of thing is essential to keep the project vibrant and maintain the flow of innovation. It is important that those of us who've been round the block a few times share what came before, what did and did not work. The struggles that were fought, the price that was paid. This project has not been successful by accident. Its success represents the sum total of the energy expended by thousands of people around the world for more than half a decade.
No contributor, no matter how new they are or what their motivation, should let the story of Mozilla stray too far from their mind.
February 4, 2006.
Footnotes
- Some people claimed building a browser and a mail client at the same time was muddying the waters. I don't think the two are necessarily mutually exclusive, however, and many of the needs of one helped influence improvements which benefited the other.
- “How to Monetize™ your browser”, “The IE Advantage”
- “Module Owners”
- “Moving Target”
Solar Energy Becoming More Pervasive
source:http://hardware.slashdot.org/article.pl?sid=06/02/06/1729211
UNIX Security: Don't Believe the Truth
UNIX is geared towards server use, and so is its security system. As we all know, 'normal' users do not have permanent root access (well, shouldn't have, in any case). As such, all important system files are protected from whatever stupid things the user might do. The user does not have full access rights to all files. The user only has full access rights to his or her own personal files.
And that is where the problem lies.
I believe that desktop Linux/OSX/etc. users all over the world have a false sense of security, and are actively promoting that false sense of security on the internet, in magazines, and at conferences all over the world. No, they are not doing this on purpose. However, that does not negate the fact that it does happen.
What am I blabbering about?
A hypothetical virus or other malware on a UNIX-like system can only, when it is activated by a normal user, wreak havoc inside that user's /home directory (or whatever other files the user might have access rights to). Say it deletes all those files. That sucks, but: UNIX rocks, the system keeps on running, the server-oriented security has done its work, no system files were affected, uptime is not affected. Great, hallelujah, triumph for UNIX.
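The asymmetry described here is easy to demonstrate on any UNIX-like desktop. A minimal sketch, run as an ordinary (non-root) user; the specific paths are only illustrative examples:

```python
# Illustrates the asymmetry described above: run as an ordinary user,
# a process can write inside $HOME but is refused write access to
# system locations. The paths are illustrative examples.
import os

targets = [
    os.path.expanduser("~"),  # the user's own files
    "/etc/passwd",            # a system file
    "/usr/bin",               # system binaries
]

for path in targets:
    writable = os.access(path, os.W_OK)
    print(f"{path:<20} writable by this user: {writable}")

# Typical output for a normal desktop user: only the home directory is
# writable -- which is exactly why malware running as that user can still
# destroy everything the user actually cares about.
```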
But what is more important to a home user? His or her own personal files, or a bunch of system files? I can answer that question for you: the pictures of little Johnny's first day of school mean a whole lot more to a user than the system files that keep the system running. Of course, they should make backups-- but wasn't Linux supposed to be secure? So why should they backup? Isn't Linux immune to viruses and what not? Isn't that what the Linux world has been telling them?
This is the false sense of security I am talking about. UNIX might be more secure than Windows, but that only goes for the system itself. The actual content that matters to normal people is not a single bit safer on any UNIX-like system than it is on any Windows system. In the end, the result of a devastating virus or other malware program can be just as devastating on a UNIX-like system as it can be on a Windows system-- without the creator having to circumvent any extra (UNIX-specific) security measures.
To blatantly copy Oasis: don't believe the truth. Yes, UNIX-like systems might be more secure than Windows systems, but not in the places where it matters to average home users.
--Thom Holwerda
source:http://www.osnews.com/story.php?news_id=13568