Thursday, July 07, 2005
How schools can get free software
(Image caption: Schools' computer costs have been rising)
In open source software (OSS), the underlying computer code is freely available so users can alter it and publish new versions, to benefit the community.
Leslie Fletcher, chair of governors at Parrs Wood High School in south Manchester and campaigns manager for the UK's Unix and Open Systems User Group, offers a personal perspective on how schools can benefit.
Schools using open source software can develop their information and communication technology (ICT) as they think best, without worrying about software costs and licensing because OSS is usually free.
The software a school needs - to keep its computer network running and secure, send and receive e-mail, access the internet, protect users from viruses, spam and unsuitable content, and carry out office tasks such as word processing - is all available free as OSS.
It can be downloaded from the internet - free as in "free beer" - and has very liberal licensing terms - free as in "free speech".
Parrs Wood High School has more than 2,000 students and more than 200 staff.
When it moved into new buildings at Easter 2000 spending had to be tightly controlled.
Capital
One of the technical staff, Tim Fletcher, had experience with OSS and convinced the head teacher and governors that it could deliver their vision for ICT in the new school extremely cost-effectively.
Capital was spent on high-speed network equipment and the best available servers, the computers running the system.
Because OSS runs well on old hardware, computers from the old school and cast-offs from local businesses could be deployed in ICT rooms and other classrooms, requiring little additional capital expenditure.
Now Parrs Wood has more than 1,000 computers in school and more than 100 school laptops are on free loan to students who would not otherwise have a computer at home.
All staff, students and governors can, and many do, log in to the school network from home - a facility soon to be extended to parents and carers.
Microsoft desktops
The OSS enabling this does not cost anything and can be given away by the school without any concern about violating licence terms.
The majority of Parrs Wood's servers run OSS and use OSS to communicate with desktop computers in classrooms and offices.
What appears on screen - the so-called desktop - for ordinary users is the familiar, paid-for Microsoft Windows.
The software used by staff and students includes the open source course management system Moodle, alongside Microsoft's Word, Excel and PowerPoint.
Software licences cost Parrs Wood about £30,000 each year - less than half of what they would cost if no OSS were deployed, according to figures in the recent Becta report.
Only recently has the school become satisfied that OSS is sufficiently well developed to meet classroom and office needs and to provide a viable alternative to licensed software.
With governors' support and encouragement, the school is adopting OSS more completely over the next three years, including the eventual replacement of Windows by an OSS desktop, which will be a significant change.
Effectiveness
The gradual transition ratified by governors will enable the school community, including parents, to appreciate the value the school places on the freedom to innovate that OSS provides.
Schemes of work will be revised so that students gain an appreciation of the uses and value of ICT which goes beyond competency with a few of today's computer applications.
Staff training will be provided and all those involved kept abreast of developments in OSS and its increasingly widespread use.
At Parrs Wood OSS is seen not as merely a way of saving money, but rather of spending it more effectively.
Paying for capable technical support staff is an essential first step to effective ICT; providing career opportunities maintains the momentum.
Tim is now a member of the school's leadership team, with responsibility for the strategic development of ICT, and manages an experienced team of six technical staff.
Reaching out
Parrs Wood's commitment to OSS has other implications. Its behaviour management system, developed in-house, is to be made available to local high schools by means of an open source licence.
Staff needed to be convinced that this accorded with the school's philosophy of open dissemination of knowledge and information, and the local education authority's historic reluctance to endorse free software had to be overcome.
The OSS business model, in which software is free but support is paid for, has to be explained to other schools.
OSS is so trouble-free and reliable that the team has time to look after the ICT networks in a dozen or so local primary schools.
A service-level agreement provides an initial health check, after which a Parrs Wood technician spends half a day each week giving on-site support.
A built-in capability of OSS allows the networks to be managed remotely from Parrs Wood for the rest of the week.
The schools get a service second to none, at a price they can afford. Expertise is shared without any interference from software vendors.
When Manchester needed a new school e-mail system and many of the city's schools needed improved access to the internet, experience at Parrs Wood proved invaluable.
OSS licences allow software to be modified to meet users' requirements - so the software powering the school system was scaled up to a city-wide system.
source:http://news.bbc.co.uk/1/hi/education/4642461.stm
Google invests in power-line broadband
Current Communications Group, which offers broadband Internet service over power lines, said Thursday that it has received investment money from Google, Hearst and Goldman Sachs.
Although Current did not specify the amount it received from the search king, the media giant and the investment banker, The Wall Street Journal reported that the three companies have invested roughly $100 million in the start-up. Last year, Current and Cinergy Broadband, a subsidiary of energy company Cinergy, announced that their two joint ventures had received $70 million from Cinergy, EnerTech Capital and Liberty Associated Partners.
As part of last year's announcement, Current and Cinergy Broadband said they would create one joint venture to bundle broadband and voice services for Cinergy's 1.5 million customers in Ohio, Indiana and Kentucky. A second joint venture would deploy broadband over power lines to smaller municipal and cooperatively owned power companies, which reach about 24 million customers across the United States.
Current said Thursday that it plans to use the new investment money to expand its broadband-over-power-line deployments in the United States and overseas. EnerTech Capital and Liberty Associated, a partnership between Liberty Media and the Berkman family, also contributed to the new financing round.
"These investments provide us with both capital and operating assistance as we continue to roll out broadband-over-power-line services to provide voice, video and data services," William Berkman, chairman of Current, said in a statement.
Current Communications has begun offering its broadband service over Cinergy's power grid to customers in Cincinnati and is rolling it out in Indiana and Kentucky, as well, said Scott Bruce, managing director for Current.
The company's second joint venture with Cinergy is beginning to gain some traction and is in talks with power companies in rural and semirural areas, but deployments of the service remain somewhat limited.
"It takes some time to introduce a concept and overlay our equipment on their power grid," Bruce said.
While cable companies were initially slow adopters of broadband services due to the heavy capital investments they would have to make, they had one advantage over power companies.
"The cable guys were already in the communications business, so it wasn't that much of a leap," Bruce said. "The power companies are on the more conservative side."
Under its relationship with most power companies, Current runs the service, bills the customers and collects 100 percent of the revenue. Power companies, in return for use of their grid, receive payments from Current.
While the start-up has yet to generate a heavy flow of revenue from a large customer base, the company is able to live off the millions of dollars it has raised in two back-to-back financing rounds over the last two years.
"We have continued to optimize our equipment, and part of the money we raised was used to develop commercial broadband," Bruce said.
Current puts small boxes on power transformers, and those boxes piggyback on the power grid to transmit signals via the power wires that go into homes.
source: http://news.zdnet.com/2100-1035_22-5777917.html
Computer snooping a growing problem
One person "lost everything." For someone else, everything "just shut down."
These people were not reciting the impact of a hurricane or tornado. Rather, they were telling what happened when their computers became infected by programs known as spyware.
The comments, part of a study released Wednesday by the Pew Internet & American Life Project, show just how big a problem spyware has become to the nation's estimated 135 million Internet users. The project surveyed 2,000 people by phone in May and June.
The study's authors defined spyware as tracking software that is secretly placed on a computer. The programs can significantly slow a computer, route it to Web sites you don't want to visit or cause an annoying stream of ads to pop up.
The study found that spyware has disrupted the computer lives of 43 percent of surfers. That means an estimated 59 million people have spyware or adware on their computers, the study found. Adware is defined as tracking programs that come bundled with other software and that users knowingly download, although they don't necessarily want the adware.
But the problem could be even bigger. A study released last year found that 80 percent of users actually had such spyware or adware on their computers.
"There is a trust gap," said Douglas Sabo, a member of the board of directors for the National Cyber Security Alliance, which did that study. Consumers believe they are safer than they actually are, he said.
Whatever the number, the threat has caused more than nine in 10 users to alter their online behavior - by not visiting certain Web sites, not downloading music or video files, or not opening e-mail attachments, the Pew survey found.
How to fight back
"They scale back on what they are doing online," said Susannah Fox, who authored the study.
But many surfers could do even more to protect themselves, such as using anti-spyware software, virus-protection programs and firewalls.
And few surfers actually read the user agreements that appear before they download free software from the net. Those agreements often spell out in fine print that adware is part of the deal.
To demonstrate how few read the agreements, one Web site offered $1,000 to the first person who read the agreement in full and wrote in. Some 3,000 people downloaded the agreement before anyone claimed the money, the Pew study said.
Averaged $129 to fix it
Fox said 90 percent of users want better notice of adware. Sixty percent said they would have paid for the software if they knew it came with adware.
Those whose computers have been slowed down or even hijacked by spyware spent an average of $129 to fix a problem, she said.
Bob Bulmash, founder of Private Citizen, a privacy advocacy group in Naperville, said the federal government needs to do more to stop the purveyors of spyware. "It spies on who we are," he said. "It's the most grievous type of theft."
source: http://www.suntimes.com/output/news/cst-nws-spy07.html
Dell, Napster Target College Downloads
Dell (Nasdaq: DELL) says that its college and university customers have complained that excessive illegal downloading of music is slowing the performance of their networks.
Campuses were "shrinking the [available] bandwidth on the network to discourage" illegal downloading, says John Mullen, vice president of Dell's higher education business. He says schools want a way to minimize the impact of music downloads on their networks and encourage students to shift toward legal downloads.
Napster (Nasdaq: NAPS) will make its entire music library available to cache, or store, on Dell servers at colleges and universities that participate in the program. The songs will be stored locally, on systems managed by Dell, so there will be minimal impact on bandwidth.
It remains to be seen, however, if students will go for the promotion. Many already have Apple Computer (Nasdaq: AAPL) iPods and are wedded to that platform, or have already invested time downloading songs from file-sharing services.
Indeed, in March 2005 the number of songs downloaded from peer-to-peer services totaled nearly 275 million, more than ten times the number of songs downloaded legally, according to NPD.
Mullen says that increasing Dell's market share in portable music players isn't the point of the project. He stressed Dell's position as the leading provider of technology to higher education and said the company is simply giving its customers what they want.
Still, Dell would surely like to put a dent in Apple's overwhelming market share for hard drive-based players, which stands at more than 80%. Napster, too, is looking for whatever leverage it can get as à la carte services like iTunes continue growing and new subscription services hit the market from companies like Yahoo! (Nasdaq: YHOO) and even Target (NYSE: TGT) (see: "Target Aims At Music Subscriptions").
The University of Washington is the first school to sign up and will market the service and Dell's portable players to students. Napster will offer discounted rates on its subscriptions, as will Dell for PCs and players.
source: http://www.forbes.com/business/services/2005/07/06/dell-napster-partnership-cx_ld_0706music.html
Florida Man Charged For Stealing Wi-Fi
source:http://yro.slashdot.org/article.pl?sid=05/07/07/1351258&tid=123&tid=193&tid=158
Windows AntiSpyware Downgrades Claria Detections
source:http://yro.slashdot.org/article.pl?sid=05/07/07/1234217&tid=158&tid=172&tid=201
AMD Subpoenas to Stop Document Destruction
AMD sent notices to 32 computer companies, microprocessor distributors and computer retailers requesting that they suspend their normal document destruction and take steps to prevent evidence from being lost, according to an AMD filing with the court.
Of these, 14 companies have responded, and nine of those indicated they would work with AMD to preserve documents, AMD said. Seven of the nine are computer makers Acer, Gateway, Lenovo, NEC, Rackable Systems, Sony and Sun; the other two are the distributor Tech Data and the retailer Circuit City Stores.
Best Buy Co. has agreed to comply with AMD's request "without limitation", while Dell and Hitachi acknowledged AMD's letters of request and promised to respond. CompUSA has acknowledged AMD's request.
Toshiba is the only company to have acknowledged receipt of AMD's notice and "refused to negotiate at all," according to the filing. Toshiba declined to comment.
So far, 18 companies have not responded, including firms ranging from HP and IBM to Dixons. AMD now believes a court order will help ensure that the relevant documents are preserved.
source: http://www.techspot.com/news/17981-AMD-wins-antishred-request.html
Six Bomb Blasts Around Central London
source:http://politics.slashdot.org/article.pl?sid=05/07/07/121258&tid=99&tid=219
Intel Invests in Movie Distribution Co.
By SETH SUTEL, AP Business Writer Wed Jul 6, 2:53 PM ET
SUN VALLEY, Idaho - Actor Morgan Freeman and chipmaking giant Intel Corp. are teaming up on a new venture to distribute premium movies to consumers over the Internet before the films become available on DVD.
Freeman and Intel executives announced the new digital entertainment company Wednesday at an annual retreat for chief executives of top media companies in this mountain resort.
Intel is investing an unspecified amount of money in the new venture. Called ClickStar Inc., it was formed by Revelations Entertainment, a company Freeman created in 1996 with producer Lori McCreary.
Hollywood has been reluctant to offer digitized movies directly to consumers over the Internet, fearful of suffering a similar fate as the music industry, which has been hit hard by piracy enabled by file-swapping services.
Freeman said his deal with Intel should avoid those pitfalls by giving customers a "simple, easy and attractive" alternative to piracy.
"We're going to bypass what the music industry had to come up with, and that's to get ahead of the whole piracy thing," Freeman told reporters at Sun Valley after making his presentation, which was closed to the press.
Few other concrete details were provided by Freeman and Intel officials about the company. However, they did say that ClickStar will be led by former Sony Pictures executive Nizar Allibhoy.
ClickStar has had discussions with major studios and producers about its plans, but no studios have yet agreed to distribute films over the new service.
Hollywood studios have offered major films over the Internet for more than a year now through Movielink, a joint venture of five studios, and CinemaNow.
Those services have yet to catch fire with the public, in part because of the time it takes to deliver films over the Internet. The services also offer a limited choice of back titles.
Studios such as Warner Bros., Sony and others are planning their own Internet and video on demand offerings, some of which may debut by the end of this year.
Intel spokesman Bill Calder said Intel had been working for several years with Freeman, setting up "digital home" technology in his studio and doing a long-range wireless demo at the Sundance film festival.
"It fits into our whole digital home strategy," Calder said of the investment. "One of the things we've always said is content is key."
In order for people to want multimedia PCs connected to TVs through home entertainment centers, film producers must provide high quality films for download over the Internet — sometimes even on the day of their theatrical release.
The annual Sun Valley conference draws an A-list group of media CEOs, investors and key technology figures. This year's attendees include Rupert Murdoch, Michael Dell, Walt Disney Co. CEO Michael Eisner and investor Warren Buffett.
source:http://news.yahoo.com/s/ap/20050706/ap_on_en_mo/intel_movie_downloads;_ylt=ArSDbPND9FjaK7ib5nxlODkjtBAF;_ylu=X3oDMTBiMW04NW9mBHNlYwMlJVRPUCUl
Examining ICMP Flaws
One new attendee at this year's OpenBSD hackathon was Fernando Gont, a versatile individual from Argentina whose current job titles include teacher, technical writer, system administrator and network researcher. His presence at the hackathon was the result of an internet-draft he wrote about some flaws in the ICMP protocol, flaws he discovered while writing the "Security Considerations" section of a different internet-draft, "TCP's reaction to soft errors", for the IPv6 Operations working group. In researching that earlier draft, he considered various attacks against TCP using ICMP error messages, and proposed some extra validation that could be done as prevention. Following up, Fernando reviewed the IETF specifications for ICMP and TCP and was surprised to discover that they didn't propose similar validation checks, ultimately deciding to write his latest internet-draft highlighting the security impact.
Fernando was interested in discussing the ideas with his peers, but was concerned about vendors trying to patent his suggested fixes. He'd read some comments by OpenBSD creator Theo de Raadt [interview] which led him to believe that he could safely talk with Theo about his ICMP discoveries. Theo was impressed by the ideas, and as Fernando was already heading to BSDCan, Theo helped arrange for him to stay in Canada longer to attend CanSecWest and the OpenBSD hackathon. At the hackathon, Fernando worked around the clock to implement some of his suggested fixes into the OpenBSD networking stack, during which time I spoke with him.
The ICMP flaw is in the design of the protocol, not in any specific implementation. Theo explains, "here we have a 20 year old protocol, a part of the Internet infrastructure that hasn't been touched in 10 years and we were all sure was right, and now is cast in doubt." He went on to add, "these things have to be done carefully. We can't ignore the problem, which is what the IETF and the other vendors are telling us to do."
Three Blind ICMP Attacks:
Fernando stressed that the issues in ICMP are with the specification itself: "this makes the problem more important because it affects everyone, not just one implementation from a programmer mistake." He goes on to point out that the problem won't truly be fixed until the IETF specifications themselves are fixed, as it is from these specifications that vendors implement their systems. "Most vendors have, are, or will be implementing the recommended counter-measures in the near future," Fernando acknowledges, "however, vendors have not bothered to participate in the relevant IETF working group to update the existing specifications." Thus Fernando is concerned that future implementations will continue to follow these outdated and now known-to-be-flawed specifications.
All three ICMP flaws can be exploited without sniffing network traffic, and do not require a "man in the middle". Unlike the earlier "slipping in the window" TCP reset attack [story], these ICMP-based TCP attacks don't require an attacker to guess a correct TCP sequence number, making it simpler to disrupt network traffic. As a brief overview, the three flaws are:
- Blind connection reset attack: an attacker can generate a "hard" ICMP error to remotely tear down an existing connection.
- Blind throughput reduction: an attacker can generate ICMP errors that repeatedly trigger source quenching, thereby reducing the throughput of the connection.
- Blind performance degrading attack: an attacker can use ICMP packets to trick Path MTU discovery into reducing the size of each sent packet down to only 68 bytes.
"Hard" ICMP Errors:
The ICMP protocol was first defined in RFC 792, published in September of 1981. Referring to TCP connections, ICMP errors are classified as either "hard" or "soft". A "hard" error results in the TCP connection being torn down, much the same as if a RST packet was received. There are three ICMP type 3 'destination unreachable' errors that are defined in RFC 1122 as hard errors. Code 2, 'protocol unreachable', code 3, 'port unreachable', and possibly code 4, 'fragmentation needed and don't fragment bit set' are all hard errors that if received can cause a TCP stack to tear down an existing connection. (Code 4 is only a 'hard' error if Path MTU discovery is not implemented.)
Other ICMP errors are considered "soft" errors. "Soft" errors are reported to the affected application, but the connection continues. Fernando's solution for the "hard" ICMP error flaw is simply to treat them like "soft" ICMP errors. "If treated that way," he said, "the stack becomes immune to the problem." As to why the protocol was designed this way in the first place, "the basic idea of hard errors was to keep TCP connections from retrying and retrying lots of times," Fernando explained. "Maybe it made sense many years ago when you didn't have the processing power you have now, but these days there is no problem with just letting the TCP connection eventually time out when there is a legitimate network problem."
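The RFC 1122 classification and the treat-hard-as-soft countermeasure can be sketched in a few lines of Python. This is a toy model, not code from any real TCP stack; the function names and the dictionary representing a connection are illustrative:

```python
# Classification of ICMP "destination unreachable" (type 3) codes for TCP,
# following RFC 1122, plus the proposed fix of treating hard errors as soft.

ICMP_DEST_UNREACH = 3

HARD_CODES = {2, 3}   # protocol unreachable, port unreachable
MAYBE_HARD = {4}      # frag needed + DF set: hard only without PMTU discovery

def is_hard_error(icmp_type, icmp_code, pmtud_enabled=True):
    """Return True if RFC 1122 classifies this ICMP error as 'hard'."""
    if icmp_type != ICMP_DEST_UNREACH:
        return False
    if icmp_code in HARD_CODES:
        return True
    return icmp_code in MAYBE_HARD and not pmtud_enabled

def handle_icmp_error(conn, icmp_type, icmp_code, treat_hard_as_soft=True):
    """Classic behavior tears the connection down on a hard error, as if a
    RST had arrived; the countermeasure reports it to the application and
    keeps the connection alive."""
    if is_hard_error(icmp_type, icmp_code) and not treat_hard_as_soft:
        conn["state"] = "closed"
    else:
        conn["last_soft_error"] = (icmp_type, icmp_code)
    return conn
```

With `treat_hard_as_soft=True`, a spoofed 'port unreachable' error no longer aborts the connection; a genuine network problem still ends it eventually via TCP's own timeout.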
Source Quenching:
ICMP type 4 code 0 packets are defined as "source quench" messages. When a router between two endpoints or the remote endpoint itself begins to run out of buffer space for processing incoming packets, it can send a source quench ICMP packet to the endpoint from where the traffic originated. As defined in RFC 792, when an endpoint receives a source quench packet it should slow the rate at which it is sending out packets. After ten minutes, the endpoint should gradually increase the rate at which it's sending packets up to the original rate.
Fernando's paper points out that source quench messages can also be abused. If the messages are spoofed at a high enough rate, a TCP connection can be slowed to a crawl. "While this would not reset the connection," Fernando explained, "it would certainly degrade the performance of the data transfer taking place." Fortunately, he goes on to explain, the solution is simple: "you can just completely disable ICMP source quenching for TCP because the TCP protocol has its own handling for these conditions, and routers, as specified by RFC 1812, should not be sending source quench packets either."
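Because the fix is to ignore source quench entirely for TCP, the countermeasure is almost trivial to express. The sketch below is illustrative, assuming a connection represented as a dictionary with a send rate:

```python
# Sketch of the source-quench countermeasure: a TCP stack silently drops
# ICMP type 4 code 0 ("source quench") messages. TCP's own congestion
# control already handles overload, and RFC 1812 routers should not be
# sending source quench in the first place.

ICMP_SOURCE_QUENCH = 4

def process_icmp_for_tcp(conn, icmp_type, icmp_code):
    if icmp_type == ICMP_SOURCE_QUENCH:
        return conn  # drop silently: no rate reduction, no state change
    # ... handling for other ICMP messages would go here ...
    return conn
```

A stack following RFC 792 instead would cut its sending rate on every such message, which is exactly what the spoofing attack exploits.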
Path MTU Discovery
IP sessions are composed of many packets. The largest size of each of these packets is known as the maximum transmission unit, or MTU, and ideally it's sized for maximum throughput. If packets are too large, there's extra overhead for routers in between the endpoints that have to break the large packets into smaller fragments, and again overhead for the final endpoint that has to reassemble the fragments back into the original packets. If packets are too small, there's extra overhead creating and processing all the additional smaller packets. Additional research into the potential problems of fragmentation can be found in the 1987 paper "Fragmentation considered harmful" and the more recent "Fragmentation considered very harmful" from 2004. Thus, it's important to configure your endpoints to use an appropriate MTU, usually the maximum packet size that doesn't require fragmentation.
Path MTU Discovery is defined in RFC 1191, and is a technique using ICMP packets to dynamically discover the maximum transmission unit of an arbitrary internet path. Essentially PMTU works by beginning with sending large packets with the "don't fragment" bit set in the IP header. The "don't fragment" bit tells routers along the way that the data payload of the packet shouldn't be broken into smaller pieces. If a router receives the packet and finds it is too big to forward, it will drop the packet and reply to the original host with an ICMP error stating "packet too large and don't fragment bit set". Additionally, RFC 1191 defines the use of a header field to specify the MTU of the hop that generated the ICMP error. The originating host lowers the size of the packet to this MTU and tries again. The process continues until the packet successfully reaches the destination endpoint. In this way, the host is able to discover the best possible MTU for the current internet path.
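The discovery loop described above can be modeled in a few lines. This toy simulation stands in for the real network exchange: the list of hop MTUs plays the role of the routers along the path, and each "retry" represents receiving a "packet too large" ICMP error that carries the hop's MTU:

```python
# Toy model of Path MTU Discovery (RFC 1191): the sender starts at its
# local MTU and lowers the packet size each time a hop reports
# "fragmentation needed and DF set", until the packet fits every hop.

def discover_path_mtu(local_mtu, path_hop_mtus):
    """path_hop_mtus: the MTU of each router hop between the endpoints."""
    mtu = local_mtu
    while True:
        # First hop whose MTU is too small for our current packet size.
        bottleneck = next((m for m in path_hop_mtus if m < mtu), None)
        if bottleneck is None:
            return mtu      # packet reached the destination: path MTU found
        mtu = bottleneck    # the ICMP error reported this hop's MTU; retry
```

For example, a 1500-byte sender crossing hops of 1500, 1492 and 1006 bytes converges on a path MTU of 1006.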
In Fernando's third ICMP attack, ICMP error packets are spoofed saying "packet too large and don't fragment bit set", causing the endpoint to reduce the size of its packets to smaller than optimal - as small as 68 bytes. RFC 1812 specifies that once a system has reduced the Path MTU, it will leave it at the reduced size for ten minutes before trying to increase it again, so a sustained attack only requires sending one packet every ten minutes. With the increased number of smaller packets, the interrupt rate rises on both the client and the server, degrading the performance of both systems. Among the systems most susceptible to this type of attack are BGP routers, which maintain long-lived TCP sessions with high data throughput. As this attack doesn't cause the session to abort, it's much more difficult to detect and can result in very slow data transmissions.
The solution for this third attack is more complex than for the earlier types of attacks. Essentially, Fernando's solution is to delay the processing of the ICMP error messages. Instead of immediately reducing the MTU when a "packet too large and don't fragment bit" ICMP error is received, the system can simply remember that it received the packet and wait for an appropriate amount of time before acting on it. The appropriate amount of time depends on the network and is thus dynamically calculated, but essentially it is the average amount of time taken for a packet to make a round trip between the two endpoints, multiplied by a factor. If during that time you receive a delivery acknowledgment for the same packet that you also received an ICMP error, you know that the ICMP error wasn't real and thus can safely be ignored. Alternatively, if after that amount of time no acknowledgment is received then you can act appropriately on the ICMP error, reducing the MTU.
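The delayed-reaction logic can be reduced to a small decision function. This is a simplified sketch of the idea rather than Fernando's actual implementation; the names and the round-trip scaling factor are illustrative:

```python
# Sketch of the delayed-processing countermeasure for the PMTU attack:
# rather than acting on a "packet too large" error immediately, remember
# which packet it claims was dropped and act only if that packet is still
# unacknowledged after roughly one (scaled) round-trip time.

def decide_pmtu_action(error_seq, acked_seqs, elapsed, rtt, factor=2):
    """error_seq: sequence number quoted in the held ICMP error.
    acked_seqs: sequence numbers already acknowledged by the peer.
    Returns 'ignore', 'reduce-mtu', or 'wait'."""
    if error_seq in acked_seqs:
        return "ignore"        # an ack proves the ICMP error was spoofed
    if elapsed >= rtt * factor:
        return "reduce-mtu"    # plausibly genuine: honor the error now
    return "wait"              # hold the error a little longer
```

The key observation is that a packet cannot both be acknowledged and have been dropped by a router, so an ack arriving during the waiting period exposes the error as fake.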
Additional generic countermeasure:
In addition to the first two countermeasures mentioned above, and inherently part of the third countermeasure, it is also possible to generically defend against ICMP attacks on TCP sessions by verifying the TCP sequence number of the packet contained within an ICMP error. This works because all ICMP error packets are required to contain the IP header and at least 8 more bytes of the packet that caused the error in the first place. In the case of TCP packets, these 8 bytes include the TCP sequence number, and thus this sequence number can be compared against the active session that generated the packet. If the sequence number is not within the sequence number window [story], the ICMP error is obviously not real and can be safely ignored. Evidently many vendors did not provide even this amount of prevention, which is why the ICMP issues described in Fernando's paper are so easy to exploit. While sequence number validation is a useful preventative measure, it is not enough by itself. Fernando notes, "it may serve as a counter-measure nowadays, but if in the future we begin to use larger windows, we will be facing the same problem again." He points to the earlier discussed counter-measures as the appropriate complete solution to the problem.
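The sequence-number check amounts to one comparison. The sketch below is a simplified version that, as the article notes for real stacks too, is only a partial defense; sequence-number wraparound handling is omitted for clarity:

```python
# Generic countermeasure: an ICMP error must quote the IP header plus at
# least 8 bytes of the offending packet, which for TCP includes the
# sequence number. The stack checks that quoted number against the
# connection's current send window before honoring the error.

def icmp_error_plausible(quoted_seq, snd_una, snd_nxt):
    """Accept the ICMP error only if the quoted sequence number lies in
    [snd_una, snd_nxt) -- i.e. it refers to data actually in flight.
    (Modulo-2**32 wraparound is ignored in this simplified check.)"""
    return snd_una <= quoted_seq < snd_nxt
```

An off-path attacker who cannot sniff traffic must guess a sequence number inside this window, which is what makes the check useful today and, as Fernando notes, weaker as windows grow larger.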
The politics of vulnerabilities:
Once Fernando understood the vulnerabilities he'd found in the ICMP protocol, he began to try and safely report the problem so that it could be fixed in the many ICMP implementations that comprise the Internet. To begin, he wrote an internet draft which he submitted to the IETF in August of 2004. At that time he contacted CERT/CC and NISCC, and privately notified several open source projects including OpenBSD, NetBSD, FreeBSD and Linux, as well as larger vendors such as Microsoft, Cisco, and Sun Microsystems. He described to each the vulnerabilities to give them an opportunity to address the issues before they became public.
Around this same time, Fernando began receiving emails from Cisco, which had numerous technical questions about his solutions to the problems. He continued to reply thoroughly to all their questions, until two months later when he received an email from Cisco's lawyer claiming that Cisco held a patent on his work. He asked the lawyer for specifics, but Cisco refused to reveal any details. This continued for two more months, until Fernando was cc'd on an email thread between Cisco, Linus Torvalds, and David Miller. Reading back through the thread, Fernando found where David Miller had asked Cisco how they could possibly patent sequence tracking when Linux had been doing it for many years, and later in the same thread Cisco noted that they had withdrawn their patent. Fernando found the experience frustrating: "a third party knew what it was all about before I did. One would expect the person who discovered the vulnerability to be the most involved, but that didn't happen. To this day I still don't know exactly what the patent was about, I'm only guessing it was about TCP sequence tracking based on the email thread I read."
While the patent issue was happening with Cisco, CERT/CC created a mailing list to allow vendors to communicate amongst themselves about the newly discovered vulnerability. "They blamed me for submitting my work," Fernando said in exasperation. "One of Cisco's managers of PSIRT said I was cooperating with terrorists, because a terrorist could have gotten the information in the paper I wrote!" Fernando was familiar with intellectual property disputes from last year's Slipping In The Window paper, so he had intentionally published his findings publicly to prevent them from being patented. "Then they accused me of working with terrorists, and even still tried to patent my work!" He noted that he now suspected that had he worked exclusively with Cisco as they had requested, they probably would have managed to patent all of his ideas. "I decided to work this issue with NISCC, as they were much more responsive. But Cisco wanted me to work with Cert/CC. And as I didn't, maybe our relationship was harder than it should have been."
Fernando also found Microsoft difficult to work with. "Microsoft's acknowledgment policy says that you must report the issues to them 'confidentially'," he explained. Because he chose to contact CERT and various open source projects as well, he said, Microsoft refused to give him credit for the discovery. Only with much effort did he finally get them to acknowledge that he had discovered the issue.
The actual disclosure of the ICMP issue was an adventure in itself, delayed multiple times. It was originally planned for January of 2005, but was repeatedly pushed back until April 12th because many of the bigger vendors weren't yet ready with fixes. Fernando acknowledged, "CERTs don't have many choices here. They get paid for providing a so-called 'responsible disclosure' process. Suppose that they disclosed a security issue while Cisco or Microsoft were still vulnerable. Do you think they would keep their jobs if the bad guys began to attack the Cisco and Microsoft systems based on the information published by CERTs?" He went on to note, "I don't know what the community could do to educate vendors," suggesting that perhaps public disclosure should happen no matter what after a couple of months of private notification. "Maybe after being hit by the media several times, then the big vendors would learn that they must become more responsive."
Fernando went on to point out that, in his experience, vendors seem more concerned about who gets credit for finding a flaw than about actually fixing it. Fernando explained, "Cisco was worried about not giving me credit because they claimed to have been working on the problem for four years. They offered to set up a meeting with some people from Cisco Argentina to show me documentation that would prove they had been working on the Path MTU Discovery attack for more than a year. It didn't happen. Of course, that's ironic, as in the same way I could fake a document and say that I have been working on my draft privately for ten years. On the other hand, if it were true, then it would mean that Cisco takes about two years to address these issues. I would be concerned about this if I were one of their customers."
One week prior to the eventual disclosure, Fernando received a call from the CTO of Cisco Argentina, who asked him for a copy of his resume. "He said he wanted to have a meeting with me, telling me they might have a job for me," Fernando shrugged. "The meeting was delayed a few times, then I never heard from him again. I wouldn't have thought much of it, but I mentioned it to other people and it turns out they'd had similar experiences. It seems it is a common practice for Cisco to offer someone work in the hope that they won't talk to the media when the security issues are disclosed."
Conclusion:
Following the public disclosure of Fernando's findings, the media began to discuss the flaws in ICMP. "Instead of contacting at least NISCC, or myself, they contacted the affected parties," Fernando explained. "For example, ZDNet contacted Microsoft and thus came to the conclusion that there was no problem, that the only way for the attack to work was for the attacker to sniff traffic on the network. This isn't true! It seems the reporters hadn't even read the draft and thus didn't understand what is wrong with ICMP. What's even worse, it seems that neither they nor the contact at Microsoft realized what the word 'blind' means in each of the attacks." As discussed earlier, because most vendors didn't even check the TCP sequence number of packets within ICMP errors, an attacker could blindly spoof ICMP errors and thus trivially exploit these vulnerabilities.
The issues affecting the ICMP protocol are legitimate, and will need to be dealt with by all vendors and open source groups. Fernando told me that Linux quickly implemented the countermeasure for ICMP Source Quench upon receiving his internet draft, and had already been working on prevention for blind connection reset attacks and on TCP sequence checking. He reported that FreeBSD had also been working on a countermeasure for blind connection reset attacks, and that they removed Source Quench processing and added additional TCP sequence checking upon receiving his internet draft. As far as Fernando is aware, NetBSD has not acted on his internet draft, and is thus still vulnerable.
As for OpenBSD, they were already working on implementing the countermeasure for blind connection reset attacks. "In August of 2004," Fernando said, "Markus Friedl implemented the TCP sequence check in OpenBSD following my report." He went on to discuss efforts at the recent Hackathon in Calgary: "at the hackathon, Chad Loder and I worked with Markus to implement the countermeasure for ICMP Source Quench attacks, then we began to work on the countermeasure for the Path MTU Discovery attack." The PMTUD fix was the last to be merged, on June 30th, so now all ICMP fixes are in the OpenBSD -current source tree and will be part of OpenBSD 3.8. Regarding the PMTUD fix, Fernando notes, "other projects said they liked the idea, but wanted to hear about experiments with it. OpenBSD decided to implement it because it was clear to the project that it was the only definitive solution to the problem. This is a clear example of what being proactive is about: fixing problems before you really face them."
source: http://kerneltrap.org/node/5382
Blowing the lid off of TiVo
Level: Intermediate
Peter Seebach (dwlinux@plethora.net)
Writer, Freelance
06 Jul 2005
Everyone's heard that the TiVo "runs Linux™". In this installment of Linux on board, Peter takes a look at the Linux system installed on the TiVo. Examining the TiVo system reveals how one company made the transition from desktop operating system to embedded system.
There are a lot of sites about "hacking" the TiVo, to do this to it and that to it (and there's always the other thing too). After all, half the fun of owning something that runs Linux is to make it do something more (or different) than it was intended to do. But most of us only need so many Web servers (off the top of my head, I think I have 10 or 15 Web servers in my house already, including the embedded systems).
A couple of bits of advice before any planned TiVo hack attack. You should probably assume that you will void any warranty it ever had and that it won't work as a video recorder again. It's not that you're likely to botch things that badly, but rather that any aggravation you face trying to get the machine fixed after even a little change will be compounded by the realization that you're the one who busted it.
If you're going to try to upgrade the hard drive, online instructions tell you to start by getting T10 and T15 Torx screwdriver bits from a hardware store. This is partially good advice: the TiVo does indeed use T10 and T15 Torx screws. However, if you are not the sort of person who already has a reasonable collection of Torx bits lying around, you may not want to mess with this. Thanks to the Kuro box and the Mac Mini, you have other options for reasonably affordable PowerPC® hardware.
My selection for this experiment was the 40-hour TiVo. It's a Series2 machine, which means it's much less flexible or open than the Series1 was. Unfortunately, that's all they sell now, but it is moderately inexpensive.
Making backups
As always, when about to do something that may alter the electronic landscape, start by making backups. Before even powering up my TiVo, I made a backup of its disk. This involves opening the machine (using the T10 Torx bit), removing the drive sled (T10 again), and removing the drive from the sled (T15). Now you have a 40-GB disk. Put this drive in any old x86 Linux box, and you'll find that it has an unknown partition table. So, there are no partitions on TiVo (such as /dev/hde1), just a whole disk. No problem.
Figure 1. TiVo's innards
On my system, I put the drive into a drive bay, making it /dev/hde. If you use an external drive bay, it might show up as /dev/sdX in which "X" is some letter; it might be "a" if you have no other SCSI or pseudo-SCSI devices, or possibly something from later on in the alphabet. Be sure you know which disk you're working on!
Listing 1. Makin' copies
# bzip2 -1c < /dev/hde > tivo.img.bz2
Note that at first it may appear that something's gone horribly wrong. No output will appear for quite a while. This disk contains lots of empty sectors full of zeroes at the start, and bzip2 is dutifully doing a very good job of compressing them -- it was a minute or so before the file reached 4,096 bytes and a few minutes more before it reached 8,192. The final file size was around 560 MB, not bad for an image of a 40-GB disk. If you want to restore from that backup, just do it the other way:
Listing 2. For a reversal of fortunes
# bzip2 -dc < tivo.img.bz2 > /dev/hde
The image will be a lot more than 560 MB once you've got some data on the drive. Assume you'll need to have about as much free space as the size of your TiVo drive.
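Before trusting that backup, it's worth rehearsing the round trip. The sketch below uses a small scratch file in place of /dev/hde (all file names here are illustrative), so the compress-and-restore cycle from Listings 1 and 2 can be verified without going near the real drive:

```shell
# A scratch file stands in for the TiVo disk (/dev/hde).
dd if=/dev/zero of=scratch.img bs=1024 count=64 2>/dev/null

bzip2 -1c < scratch.img > tivo.img.bz2      # backup, as in Listing 1
bzip2 -dc < tivo.img.bz2 > restored.img     # restore, as in Listing 2

# The restored image should be byte-identical to the source.
cmp scratch.img restored.img && echo "images match"
```

If cmp stays silent and "images match" prints, the backup survived the round trip intact.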
So, what's on the disk?
Since the disk doesn't show up as being partitioned, it's reasonable to suspect that it might be in some proprietary format. Time to figure out what that format is.
Of course, the first thing to do is look at the disk as raw bytes. It starts out with something that looks a little like configuration for a boot loader:
Listing 3. Seen this boot before?
root=/dev/hda7
runfinaltest=2 contigmem8=16M brev=0x10
After that, however, is the explanation of all mysteries -- this disk contains an Apple partition map:
Listing 4. An Apple for the brightest
0x0200 50 4d 00 00 00 00 00 0d 00 00 00 01 00 00 00 3f 'PM.............?'
0x0210 41 70 70 6c 65 00 00 00 00 00 00 00 00 00 00 00 'Apple...........'
0x0220 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 '................'
0x0230 41 70 70 6c 65 5f 70 61 72 74 69 74 69 6f 6e 5f 'Apple_partition_'
0x0240 6d 61 70 00 00 00 00 00 00 00 00 00 00 00 00 00 'map.............'
0x0250 00 00 00 00 00 00 00 3f 00 00 00 33 00 00 00 00 '.......?...3....'
Did you suspect this?
In retrospect, an Apple partition map is a very reasonable thing for a PowerPC Linux box to hold. Support for Apple partition maps isn't unusual, and they're reasonably well documented. It's a little less typical for a MIPS system, but they presumably opted for compatibility with the Series1 systems, which ran on PowerPC.
Strictly speaking, this is still a proprietary format; it's just a well-known one. Unfortunately, none of the partitions are in formats that are familiar to OS X, but my Mac Mini could read the partition table. Here's the partition map:
Listing 5. Familiar format, eh?
Partition map (with 512 byte blocks) on '/dev/rdisk1'
#: type name length base ( size )
1: Apple_partition_map Apple 63 @ 1
2: Image Bootstrap 1 1 @ 44161324
3: Image Kernel 1 8192 @ 44161325 ( 4.0M)
4: Ext2 Root 1 524288 @ 44169517 (256.0M)
5: Image Bootstrap 2 1 @ 44693805
6: Image Kernel 2 8192 @ 44693806 ( 4.0M)
7: Ext2 Root 2 524288 @ 44701998 (256.0M)
8: Swap Linux swap 262144 @ 45226286 (128.0M)
9: Ext2 /var 262144 @ 45488430 (128.0M)
10: MFS MFS application region 524288 @ 45750574 (256.0M)
11: MFS MFS media region 33494098 @ 46799150 ( 16.0G)
12: MFS MFS application region 2 524288 @ 46274862 (256.0M)
13: MFS MFS media region 2 44161260 @ 64 ( 21.1G)
Device block size=512, Number of Blocks=80293248 (38.3G)
DeviceType=0x0, DeviceId=0x0
This gives us a good idea of what to expect. For one thing, it looks like it is probably designed to update one file system while running from the other, to make updates safe. And, wonder of wonders: this gives exact block offsets and sizes for some file systems to look at. That means it's time to connect the drive back up to a Linux box and have a look at those file systems.
Listing 6. Perusing the file systems
# dd if=/dev/hde bs=512 count=524288 skip=44169517 of=root1.img
# dd if=/dev/hde bs=512 count=524288 skip=44701998 of=root2.img
# dd if=/dev/hde bs=512 count=262144 skip=45488430 of=var.img
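The skip and count values passed to dd come straight from the partition map, which counts 512-byte blocks, so a little shell arithmetic confirms the sizes shown in Listing 5 (the helper function name is mine, not anything on the TiVo):

```shell
# Convert a 512-byte block count into megabytes.
blocks_to_mb() { echo $(( $1 * 512 / 1024 / 1024 )); }

blocks_to_mb 524288    # Root 1 / Root 2: prints 256
blocks_to_mb 262144    # swap and /var:   prints 128
blocks_to_mb 8192      # kernel images:   prints 4
```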
It turns out that the Root 1 file system isn't even formatted; it's just 256 MB of null bytes. But the Root 2 file system is golden:
Listing 7. Eureka!
# file root2.img
root2.img: Linux rev 0.0 ext2 filesystem data
# mount -o loop root2.img /mnt
# ls /mnt
. bin diag etc initrd lib mnt proc sbin tvbin var
.. dev dist etccombo install lost+found opt res tmp tvlib
# mount -o loop var.img /mnt/var
# ls /mnt/var
. a dev etc lost+found mnt packages run tmp
.. bin dist log mess mtab persist state utils
Security
One of the disadvantages of this being essentially a proprietary system is that the TiVo as shipped has some "security" features that are intended to prevent you from modifying it. This is a simple question of economics -- anyone selling general-purpose MIPS systems with hard drives and TV tuners in them for $100 would go broke quickly. So for now, I'm just going to look at how Linux runs on this system, not how to change it.
Note that the convenient compatibility of the ext2 file system from one system to another means that you have the option of playing around under a regular Linux box. You could even set up a cross-compiler and related tools and poke around in a little more detail. Don't expect to be able to change things easily, though; while it has been done, it is not intended to be easy or convenient. Remember, this is a proprietary piece of hardware that has the capability to record video. Needless to say, a large number of corporations want very badly for it to be hard to change it too much.
On this particular model of TiVo, there is a hardware security check before the kernel even gets loaded. Then, the kernel itself has a RAMdisk built into it that contains some of the security features; compare with the code found on the hard drive, such as the /var/utils/checkkernel.tcl script.
It is worth noting that in the initial install, there's a very large amount of extra space. The root file system has 54 MB in use and 182 MB free; /var has 3 MB in use and 116 MB free. Of course, the intention is to store a lot of data, such as the programs you like to watch.
Looking at the software
One of the more interesting things about the TiVo is the variety of special-purpose applications it has. Although in theory it has a display available, in fact it can't just scroll text up the screen. There's an executable named /tvbin/text2osd, which sounds suspiciously like an application to write words on the output video as an on-screen display. There's also an interesting collection of PNG files, all in typical video sizes, containing messages that might need to be displayed. In a nod to Douglas Adams, my favorite says "Don't Panic."
The software is a little disorganized, but then again there's really no need to have things in "intuitive" locations -- only the development team needs to be able to find the bits. Some things make less sense than others -- all of the TiVo software is in /tvbin and /tvlib, but the configuration files for a lot of programs are in /opt/tivo. A more traditional Linux file system layout would have put the files in /opt/tivo/bin, /opt/tivo/lib, and /opt/tivo/etc.
Looking at a Tcl script, I noticed that it was interpreted by /tvbin/tivosh, which is presumably a Tcl interpreter. But wait: it's actually a symbolic link to a program called tivoapp. Many different programs are linked to tivoapp. It looks like a unified binary containing bits of a number of different programs. It is not immediately obvious why this application would be built this way; it may reduce memory footprint, or it may just make the system harder to crack.
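One plausible way such a unified binary works (busybox uses the same trick) is to dispatch on the name it was invoked under. The following is only a sketch with a hypothetical stand-in script, not TiVo's actual mechanism:

```shell
# app.sh is a hypothetical stand-in for tivoapp: one program that
# checks argv[0] and behaves differently per name.
cat > app.sh <<'EOF'
#!/bin/sh
case "$(basename "$0")" in
    tivosh) echo "acting as the Tcl interpreter" ;;
    *)      echo "acting as the main application" ;;
esac
EOF
chmod +x app.sh
ln -sf app.sh tivosh     # same file, second name

./app.sh     # prints "acting as the main application"
./tivosh     # prints "acting as the Tcl interpreter"
```

Packing everything into one image this way shares one copy of the code and libraries among all the "programs", which matters on a memory-constrained box.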
Mix and match
One thing that's very noticeable is that the TiVo has a fairly loose combination of shell scripts, Tcl programs, and binaries. You can learn a lot about what a given program does by looking at it. For instance, the installNFS script, written in bash, calls text2osd to display messages. It even has an inlined Tcl script!
One of the big strengths of Linux for development is the freedom to mix and match development tools like this -- TiVo shows this off to good advantage.
System boot
Like any Linux system, the TiVo spawns /sbin/init, which in turn looks at /etc/inittab to decide what to do. The first thing it does is run /etc/rc.d/rc.sysinit, which in turn runs files from directories with names like StageA_PreKickstart and StageG_PostApplication. They are run in order.
Each of these directories contains a number of scripts with names like rc.Sequence_150.CheckForDebug.sh. These are analogous to files like /etc/rc.d/rc3.d/S12sshd on a more conventional Linux system. Note that the shell's habit of collating expansions (such as rc.Sequence_*.sh) is used to provide ordering. If a script's name has the string .Platform in it, it will be run only on matching hardware.
This is a good design for a commodity vendor -- they don't need to build the disk differently for each machine. The same goes for .Implementation and .Implementer flags, which identify scripts to run only on some systems. The Stage directories replace the rcN.d directories, which are absent.
This organization makes it comparatively easy to see what's going on at each stage of boot. It's interesting to note that the shell scripts are sourced into the parent shell so that early scripts can set environment variables for later ones to use.
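A miniature version of that scheme shows both properties, glob ordering and sourcing, in a few lines. The directory and script names below are illustrative, not TiVo's actual ones:

```shell
# Scripts named rc.Sequence_NNN.*.sh run in glob (collation) order
# and are sourced, so earlier scripts can hand state to later ones.
mkdir -p StageA_Demo
printf 'BOOT_MSG="set at sequence 100"\n' \
    > StageA_Demo/rc.Sequence_100.SetVar.sh
printf 'echo "sequence 200 sees: $BOOT_MSG"\n' \
    > StageA_Demo/rc.Sequence_200.UseVar.sh

for script in StageA_Demo/rc.Sequence_*.sh; do
    . "$script"     # sourced, not forked: variables persist
done
# prints: sequence 200 sees: set at sequence 100
```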
Summary
The TiVo is a fascinating example of a number of different Linux philosophies, in action and contrasted starkly with the philosophy of a company making money selling services and providing a simple, robust application for users. Many of the features that would-be hackers find most frustrating are probably there to keep people from calling customer support with an entirely customized box and wondering why it won't work.
On the other hand, it does appear that a real effort has gone into making the machine harder and harder to tweak. Early TiVo systems were often modified to be Web servers. The one I'm looking at, so far as anyone can tell, cannot be made to run a new kernel or otherwise be changed in any significant way without an actual hardware hack to the PROM that checks for unauthorized modifications to the software.
It's particularly interesting to note that the GPL, while it does compel TiVo to produce source code for their kernels, doesn't really keep them from building a box that runs Linux but won't let you change anything. The interesting aspects of this box are mostly in studying how it does what it does and how various open source tools and technologies are used to build an embedded application.
Resources
* Read the other installments in Peter's Linux on board series.
* The Series2 TiVo is based on the Broadcom BCM7317 chip, a MIPS core with some handy support for set-top box features.
* The Atmel AT90SC6464C chip is a low-power, high-performance, 8-/16-bit secure microcontroller with 64KB/64KB ROM EEPROM and such security features as MMU, MED, OTP (One Time Programmable) EEPROM area, RNG (Random Number Generator), "out of bounds" detectors, and side channel attack countermeasures.
* If you want to make the TiVo into a slightly friendlier hard disk recorder, there are instructions for hacking the TiVo Series 2.
* You can find a whole lot of discussion on TiVo hacking at the DealDatabase Forum. And no, they won't help you get free DirecTV.
* Peter's TiVo is nicknamed a Series 2.5; it has a non-reflashable chip designed to provide extra security. Read about it in this thread.
* You can check out The Hitchhiker's Guide to the Galaxy (movie version) on this site. And remember, "Don't Panic!"
* The "Three reasons why Linux will trounce the embedded market" (developerWorks, May 2001) are royalty fees, device features, and the promise of a single platform (TiVo gets a mention here).
* "Software Development for Consumer Electronics: Bigger, Better, Faster, More -- Or Bust" (The Rational Edge, 2004) details Linux as a best choice for consumer electronics where the profit margin can be razor thin.
* "Emulation and cross-development for PowerPC" (developerWorks, January 2005) introduces PowerPC emulation and cross-compiling.
* "A developer's guide to the POWER architecture" (developerWorks, March 2004) introduces the PowerPC application-level programming model to provide an overview of the instruction set and important registers.
* Find more resources for Linux developers in the developerWorks Linux zone.
* Get involved in the developerWorks community by participating in developerWorks blogs.
* Order the no-charge SEK for Linux, a two-DVD set containing the latest IBM trial software for Linux from DB2®, Lotus®, Rational®, Tivoli®, and WebSphere®.
* Build your next development project on Linux with IBM trial software, available for download directly from developerWorks.
About the author
Author photoPeter Seebach loves to take things apart and find out how they would have worked. He feels that the discovery that a device "runs Linux" can reasonably be considered a challenge to poke around in its root file system. You can contact Peter at dwlinux@plethora.net.
source: http://www-128.ibm.com/developerworks/linux/library/l-lobtivo/?ca=dgr-lnxw01LidOffTivo
LoJack for Your Computer
July 6, 2005
Last week, LoJack (Nasdaq: LOJN) announced the dawning of a new era in data recovery.
What? Is the groundbreaking gorilla of stolen vehicle recovery committing Peter Lynch's cardinal sin of "diworsification" into the unrelated field of hard-drive hacking? Not really.
LoJack is licensing its brand name to Absolute Software, which provides Computrace -- soon to be known as the "LoJack for Laptops" line of computer theft recovery systems. When a stolen Computrace-equipped system is connected to the Internet, it automatically and silently sends locating data to Absolute Software, which then calls out the law. In some cases, Absolute Software customers are eligible for a $1,000 guarantee payment when a stolen system is not recovered within 60 days.
In my opinion, LoJack investors should be pleased for at least two reasons. First, without committing any capital or assets, LoJack is collecting a licensing fee, as well as warrants to purchase 500,000 shares of Absolute Software, with a $2 per-share exercise price. Assuming that LoJack can capitalize on its option to buy shares profitably (Absolute Software shares are trading at around $2 each), LoJack investors might be looking at the elusive free lunch. As long as Absolute Software delivers on quality control and customer service, thereby maintaining its reputation, downside risk is relatively limited.
Second, and more importantly, the LoJack brand name is gaining free exposure in the laptop market, catering to a higher-middle-income individual and business population, which happens to be a major segment of LoJack's automotive target customer base. Ostensibly, LoJack's status as a recognized brand and market leader in its field stands to be confirmed and enhanced. If companies take note (and mass appeal exists), there might be more licensing revenue to come.
To be sure, in a business that depends on brand awareness and customer confidence, a deal like this is not without risk, because a company's brand equity rises or falls with the success or failure of any product bearing its name. That said, successful licensing also offers the possibility of even greater rewards.
Want valuable nuggets on small-cap investing with a potential for mythic returns? Spend your magic bean money on a subscription to the Motley Fool Hidden Gems newsletter.
source: http://www.fool.com/News/mft/2005/mft05070623.htm