Monday, April 24, 2006
WebOS market review
My post last week about XIN, a new contender in the Web OS space, provoked some skeptical comments from ZDNet readers. So in this post I explain what a Web OS is and why it's of use. I also take a look at the main WebOS vendors.
The OS of course stands for 'Operating System' and here's how Wikipedia defines WebOS:
"More generally, WebOS refers to a software platform that interacts with the user through a web browser and does not depend on any particular local operating system. Such predictions date to the mid-1990s, when Marc Andreessen predicted that Microsoft Windows was destined to become "a poorly debugged set of device drivers running Netscape Navigator." More recently attention has focused on rumors that Google might produce a software platform."
(emphasis mine)
WebOS also happens to be the specific name of a computing research project, which started at the University of California, Berkeley in 1996 and is continuing at other American universities such as Duke. Here's how it's described:
"WebOS provides basic operating systems services needed to build applications that are geographically distributed, highly available, incrementally scalable, and dynamically reconfiguring."
GoogleOS
The WebOS I'm talking about here is the general one. As Wikipedia noted, Google is the most obvious candidate nowadays to build a WebOS. Jason Kottke wrote a famous (in the blogosphere at least) post on GoogleOS back in August 2005. Kottke saw the WebOS as having three parts to it: the web browser as the primary application interface, web apps (like Gmail, etc), and a local server. The third part seems to be the most crucial and the piece largely missing today. Kottke went on to say:
"Aside from the browser and the Web server, applications will be written for the WebOS and won't be specific to Windows, OS X, or Linux. This is also completely feasible, I think, for organizations like Google, Yahoo, Apple, Microsoft, or the Mozilla Foundation to make happen…"
Kottke's post was visionary, but as yet there's no sign of a Google WebOS - or one from Yahoo, Apple, Microsoft and Mozilla for that matter.
Those that are building a WebOS
But there are a number of small startups trying their luck. I've already covered XIN. Others include YouOS, EyeOS, Orca, Goowy and Fold. YouOS got a lot of interest last month, making it to the front page of Digg.
There's also a bit of crossover with Ajax homepages like Netvibes, Pageflakes, Microsoft's Live.com and Google's start page. The key difference from Ajax homepages is that a WebOS is a full-on development platform. The likes of XIN and YouOS are application development platforms that also offer things like file storage. Services like Netvibes and Live.com are more of an interface for web content and 'mini apps' like gadgets (some, like Netvibes and Pageflakes, also offer APIs).
YouOS - a virtual computer
So what is a WebOS again? The developers behind YouOS wrote a manifesto about their work, describing it as an attempt to "bring the web and traditional operating systems together to form a shared virtual computer." They're at pains to point out that a WebOS is different from a traditional computer OS, which is concerned with integrating hardware and software. A WebOS, according to YouOS, is "a liberation of software from hardware". I think this statement gets to the heart of what a WebOS does:
"YouOS is a shared computer that houses your data and applications, but you are the owner of this data and applications."
From a user point of view, of course, you still need a traditional OS (like Windows or Linux) on whatever machine you use to access YouOS or another WebOS. But as a user, the OS is no longer your primary concern - your data and your apps are.
What's the best WebOS currently?
To be honest I don't know, but I asked the question in a Digg forum last week and got a great reply from 'automan':
"A webOS that wants to make it should be able to adapt to an open source style of environment. Why would I want to be tied into another "proprietary" image editor or word processor? I think that the webOS that supports containers that you can put your own code into and run will be the ones to survive. […] I believe that XIN and YouOS have the better model for future development and expansion… YouOS in particular. While it is in no way visually appealing at this point, I believe it has plenty of room to build upon itself to grow in a very good direction."
An open source style makes perfect sense for a WebOS, particularly for the small players wanting to stand a chance against Google and Microsoft. I'll be investigating the above WebOS contenders myself over the next few weeks, so will be in a better position to judge then.
The skeptics
As for developers, a big benefit is that a WebOS theoretically makes it easier to develop apps that work cross-platform. DHTML and Javascript are the main tools to do that, which is where a lot of the skepticism comes from. Take this comment from a ZDNet reader:
"Oh, I wish I wish I wish we could just create a new, standard, simple, clean, cross-platform/write-once/run anywhere, open, programmatic, efficient, robust GUI language that provided the above advantages: 0 administration, 0 risk. Java could've been a contender, but it's a complete mess now; DHTML+Javascript is just evil."
So it seems the jury is out among many people as to how viable a WebOS is. Also a lot of people don't consider a WebOS to be a real operating system, but I think that's semantics and not something worth debating. If you imagine a future when you're accessing your data and apps from multiple devices, the need for a WebOS will become clearer.
The optimists (futurists?)
The reason I'm interested in a WebOS is of course the same reason I'm obsessed with the Web Office - there are so many more opportunities for applications and data running in a networked space, rather than on a single computer or other device. I think we're in the very early stages of WebOS development, but it wouldn't surprise me if one of the small startups I've mentioned here goes on to become the next Linux. A big call perhaps, but we're living and working on the Web more and more every year.
source:http://blogs.zdnet.com/web2explorer/?p=166
Social Networking From Your Cell
source:http://hardware.slashdot.org/hardware/06/04/23/1954252.shtml
Beating Traffic
Woe Is Traffic
Traffic: the commuter's bane. It plagues major city drivers around the globe and shows no sign of letting up.1 In fact, the average U.S. commuter spends about 100 hours a year driving just to work - 20 hours more than a typical year's supply of vacation.2 This personal "daily grind" covers more than 15,000 miles and burns 1,000 gallons of gas every year, which might not be so bad if much of it weren't waste: 1.6 million hours and 800 million gallons of gas are wasted every day in traffic jams across the nation. Traffic even affects your health, raising blood pressure, increasing stress, and producing more Type-A personalities.3
Of course, some places are much worse than others. New York tops the list, with Chicago, Newark and Riverside following, albeit at a distance. L.A. comes in at #6 and Houston, where I reside and commute, is #15.4 Other cities, such as Nashville, TN and Kansas City, MO, show up much further down the list, but something tells me that even commuters in those relative traffic havens dedicate significant effort and conversation to 'beating traffic.'
Resources are sometimes available to help in this quest. Houston Transtar provides up-to-the-minute traffic information for all major Houston highways.5 Average traveling speed, construction and accident information are all available at the click of a mouse, but how to avoid the perpetual web of red during the morning and evening rush hours is nowhere to be found. Obvious answers such as public transportation and carpooling are legitimate, but trends show that Americans are meeting the increase in traffic by using such transportation methods less, not more.6 Also, if the online traffic-reporting graphic warns of potential issues, there is no indication of how long they might persist, leaving the traffic-conscious commuter right where he started: guessing.
Tired of the typically inefficient and contradictory workplace chatter on the subject and feeling the pull of a slight worksheet obsession, I set out to statistically analyze my commute in order to determine how I might minimize my time behind the wheel. If there was a way to figure out how to give myself an advantage over the almost 900,000 other Houstonian workers out there (who average a 26.1 minute commute),7 math and a smidgeon of obsessive compulsive disorder had to be essential ingredients. At the very least, I would be able to ascertain just how much of my commute time was up to me - and how much depended on a "higher power" (e.g., weather, school districts, wrecks, etc.).
Gathering Data
From March of 2004 to March of 2005, I recorded my departure and arrival times both to and from work, along with whether school was in or out. Other factors, although most likely important, were excluded to keep the scope of the experiment narrow and measurable.
Driving Data
Every morning, I took note of the time on my car clock as I pulled out of my driveway at the Riata Ranch subdivision of northwest Houston8 and then again as I pulled into the parking garage at my office building close to the north-bound frontage road of Sam Houston Pky and Clay Rd.9 In the evening, I followed the same process in reverse. The morning route 10 and evening route11 differed slightly in length, but data was only recorded when the planned course was followed, allowing for only slight variations.12
School District & Government Data
Being suspicious of the influence of the school session, I collected official 2004-2005 and 2005-2006 calendar data from Cypress Fairbanks Independent School District,13 which covers almost all of my commute route,14 and took note of all full student holidays (i.e., teacher in-service days, but not student early release days).15 I also collected official 2005 and 2006 government holiday information from the city of Houston16 and the US Federal Government,17 but this proved next to useless as I only commuted to work on one city and two federal government holidays.
Analysis
To set up the gathered information, I first organized the variables into inputs and outputs as shown in Table 1.
To determine which variables had a statistically significant effect on my commute times, I ran one-way ANOVAs18 on the discrete variables and plotted smoothed graphs of means for the continuous variables.19
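As a sketch of what those ANOVAs compute - my actual analysis used Minitab on the logged data, so the numbers below are made up purely for illustration - the F-statistic behind a one-way ANOVA can be worked out by hand:

```python
# Minimal one-way ANOVA F-statistic, of the kind used to test whether a
# discrete variable (e.g. day of the work week) affects commute duration.
# The commute times below are hypothetical, not from my actual log.

def one_way_anova_f(groups):
    """Return (F, df_between, df_within) for a list of sample groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n

    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

    df_between, df_within = k - 1, n - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

f, df_b, df_w = one_way_anova_f([
    [22, 25, 19, 23, 21],   # e.g. Monday commutes, in minutes
    [24, 26, 23, 22, 25],   # e.g. Wednesday commutes
    [18, 17, 19, 20, 18],   # e.g. Friday commutes
])
print(f"F({df_b}, {df_w}) = {f:.2f}")
```

Minitab then turns that F value into the reported P-value via the F(df_between, df_within) distribution; an F well above the critical value (about 3.89 for F(2, 12) at α = 0.05) indicates a statistically significant group effect.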
Morning Commute ANOVAs
Day of the Work Week
The one-way ANOVA of the morning commute duration versus the day of work week (y1 vs. x1) showed a statistically significant effect.20 The table in the ANOVA output21 and the boxplot below confirm that this effect comes on Fridays, on which there is a significantly shorter commute time:
Source DF SS MS F P
Day of Week 4 544.4 136.1 3.87 0.005
Error 202 7103.2 35.2
Total 206 7647.6
S = 5.930 R-Sq = 7.12% R-Sq(adj) = 5.28%
Individual 95% CIs For Mean Based on
Pooled StDev
Level N Mean StDev ----+---------+---------+---------+-----
1 43 22.209 5.726 (------*------)
2 44 22.886 5.891 (-------*------)
3 47 23.447 6.382 (------*------)
4 39 22.462 7.014 (-------*------)
5 34 18.559 3.855 (-------*-------)
----+---------+---------+---------+-----
17.5 20.0 22.5 25.0
Pooled StDev = 5.930
Week of the Month
The results from an ANOVA of the week of the month versus the morning commute duration (y1 vs. x2) showed no statistically significant impact, although week 5 has the highest average commute time:
Source DF SS MS F P
Week of Month 4 226.5 56.6 1.54 0.192
Error 202 7421.1 36.7
Total 206 7647.6
S = 6.061 R-Sq = 2.96% R-Sq(adj) = 1.04%
Individual 95% CIs For Mean Based on
Pooled StDev
Level N Mean StDev ---+---------+---------+---------+------
1 52 22.673 7.040 (------*-----)
2 44 21.636 4.760 (-------*------)
3 51 21.706 6.090 (------*------)
4 42 21.048 5.441 (------*-------)
5 18 24.944 7.075 (----------*----------)
---+---------+---------+---------+------
20.0 22.5 25.0 27.5
Pooled StDev = 6.061
Month of the Year
The month of the year versus morning commute time (y1 vs. x3) ANOVA results showed even less of an effect:
Source DF SS MS F P
Month of Year 11 496.5 45.1 1.23 0.269
Error 195 7151.1 36.7
Total 206 7647.6
S = 6.056 R-Sq = 6.49% R-Sq(adj) = 1.22%
Individual 95% CIs For Mean Based on
Pooled StDev
Level N Mean StDev ------+---------+---------+---------+---
1 21 22.476 5.793 (--------*--------)
2 8 22.875 4.121 (-------------*-------------)
3 19 24.053 7.764 (--------*--------)
4 19 22.737 4.053 (--------*--------)
5 19 23.842 5.650 (--------*---------)
6 18 21.722 5.278 (--------*---------)
7 19 18.947 4.441 (--------*--------)
8 23 20.652 5.556 (-------*-------)
9 17 21.824 7.502 (---------*--------)
10 17 21.353 4.182 (--------*---------)
11 16 24.250 9.774 (---------*---------)
12 11 20.545 5.336 (-----------*-----------)
------+---------+---------+---------+---
18.0 21.0 24.0 27.0
Pooled StDev = 6.056
Cypress-Fairbanks ISD
Whether or not the local school district was in session proved to be the measured variable that explained the most variation in morning commute time (y1 vs. x6):
Source DF SS MS F P
CyFair 1 774.0 774.0 23.08 0.000
Error 205 6873.6 33.5
Total 206 7647.6
S = 5.791 R-Sq = 10.12% R-Sq(adj) = 9.68%
Individual 95% CIs For Mean Based on
Pooled StDev
Level N Mean StDev -+---------+---------+---------+--------
0 63 19.159 4.646 (------*------)
1 144 23.361 6.222 (----*----)
-+---------+---------+---------+--------
18.0 20.0 22.0 24.0
Pooled StDev = 5.791
Evening Commute ANOVAs
Day of the Work Week
While the day of the week proved to have a significant impact on the morning commute, the evening commute showed no such relationship (y2 vs. x1):
Source DF SS MS F P
Day of Week 4 68.5 17.1 0.82 0.516
Error 158 3312.1 21.0
Total 162 3380.7
S = 4.579 R-Sq = 2.03% R-Sq(adj) = 0.00%
Individual 95% CIs For Mean Based on Pooled
StDev
Level N Mean StDev +---------+---------+---------+---------
1 40 22.125 4.333 (--------*--------)
2 40 21.275 5.002 (--------*--------)
3 34 21.706 5.190 (---------*--------)
4 33 20.697 4.149 (--------*---------)
5 16 22.875 3.304 (-------------*-------------)
+---------+---------+---------+---------
19.2 20.8 22.4 24.0
Pooled StDev = 4.579
Week of the Month
Again, the week of the month did not explain the commute time variation (y2 vs. x2):
Source DF SS MS F P
Week of Month 4 86.4 21.6 1.04 0.390
Error 158 3294.2 20.8
Total 162 3380.7
S = 4.566 R-Sq = 2.56% R-Sq(adj) = 0.09%
Individual 95% CIs For Mean Based on
Pooled StDev
Level N Mean StDev ---+---------+---------+---------+------
1 34 21.176 4.496 (-------*-------)
2 39 20.769 4.782 (------*------)
3 42 21.857 4.176 (------*------)
4 35 22.000 4.583 (-------*-------)
5 13 23.462 5.238 (-----------*------------)
---+---------+---------+---------+------
20.0 22.0 24.0 26.0
Pooled StDev = 4.566
Month of the Year
In another change from the morning results, the month of the year proved to have a significant effect, with February, April and November showing the longest evening commute times (y2 vs. x3):
Source DF SS MS F P
Month of Year 11 541.2 49.2 2.62 0.004
Error 151 2839.4 18.8
Total 162 3380.7
S = 4.336 R-Sq = 16.01% R-Sq(adj) = 9.89%
Individual 95% CIs For Mean Based on
Pooled StDev
Level N Mean StDev ----+---------+---------+---------+-----
1 15 21.400 3.418 (------*-------)
2 9 24.222 3.833 (---------*--------)
3 17 20.529 3.319 (-----*------)
4 10 23.700 6.325 (--------*--------)
5 14 20.143 3.416 (------*-------)
6 14 21.357 4.584 (------*-------)
7 14 19.143 4.400 (-------*------)
8 21 21.905 5.078 (-----*-----)
9 14 20.929 4.811 (-------*------)
10 11 20.091 3.590 (--------*--------)
11 16 25.625 4.731 (------*-------)
12 8 20.625 3.021 (---------*---------)
----+---------+---------+---------+-----
18.0 21.0 24.0 27.0
Pooled StDev = 4.336
Cypress-Fairbanks ISD
The school session again showed significant influence, but it was not as strong in the evening as in the morning (y2 vs. x6):
Source DF SS MS F P
CyFair 1 106.2 106.2 5.22 0.024
Error 161 3274.4 20.3
Total 162 3380.7
S = 4.510 R-Sq = 3.14% R-Sq(adj) = 2.54%
Individual 95% CIs For Mean Based on
Pooled StDev
Level N Mean StDev ---------+---------+---------+---------+
0 50 20.400 4.677 (------------*------------)
1 113 22.150 4.434 (--------*-------)
---------+---------+---------+---------+
20.0 21.0 22.0 23.0
Pooled StDev = 4.510
Departure Time Analysis
For the continuous variable of departure time, I plotted smoothed curves of the mean commute time at each minute.
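The idea behind those smoothed curves can be sketched in a few lines: group durations by departure minute, average each group, then apply a centered moving average. The data points below are hypothetical, not my actual log:

```python
# Sketch of a "smoothed plot of means": average the commute duration at each
# departure minute, then smooth with a centered 3-point moving average.
# Records are hypothetical (minutes past 7:00AM, commute duration in minutes).
from collections import defaultdict

records = [(40, 24), (40, 26), (45, 23), (50, 22), (55, 21), (60, 19), (65, 18)]

# Mean commute time at each recorded departure minute.
by_minute = defaultdict(list)
for minute, duration in records:
    by_minute[minute].append(duration)
means = sorted((m, sum(v) / len(v)) for m, v in by_minute.items())

# The moving average smooths out minute-to-minute noise before plotting.
smoothed = [
    (means[i][0], (means[i - 1][1] + means[i][1] + means[i + 1][1]) / 3)
    for i in range(1, len(means) - 1)
]
print(smoothed)
```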
The morning departure time plot shows relatively long commute times until about 7:40AM, at which time a gradual decrease starts that continues in an overall linear fashion for the next hour. After 8:40AM, traffic appears to have only minimal impact. (y1 vs. x4):
Figure 9. A smoothed plot of the mean of the recorded morning commute durations versus the home departure time.
The evening departure time plot shows a peak commute time at about 5:10PM, tapering off linearly through the next two or so hours. Departure times prior to 5:00PM showed erratic results, but it is obvious that traffic played a decreasing role in evening commute duration moving back through 4:00PM, before which its influence is noticeable but slight. (y2 vs. x5):
Figure 10. A smoothed plot of the mean of the recorded evening commute durations versus the work departure time.
I usually leave home at 8:00AM and leave work at 5:30PM, but a 30-minute delay of each looks like it would shave five minutes off the morning commute and about 2.5 minutes off the evening. Additional half-hour delays bring 2.5 minutes of commute time savings in the evening, but little to no savings in the morning. Slightly earlier departure times appear to result in commute time increases for both trips. Moving back past 4:30PM brings slight improvement in the evening commute, but savings in the morning would most likely require leaving before 6:30AM.
Conclusions
Given the above data and analysis, what can be done to improve my commute times? Changing my morning or evening departure time looks promising. The best bet appears to be moving my schedule out a half-hour to 8:30AM and 6:00PM, bringing significant savings (about 7.5 minutes of commute time per day) without getting too far from normal business hours. Spread out over 50 work weeks, that results in a total savings of over 31 hours a year - the equivalent of about a 39% boost to my existing 80 hours of vacation.
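A quick back-of-the-envelope check of that estimate (the 7.5 minutes per day and 50 work weeks are the figures used above):

```python
# Annual savings from shifting the schedule out a half-hour each way.
daily_savings_min = 5 + 2.5          # morning + evening savings, minutes per day
work_days = 5 * 50                   # 50 work weeks

annual_savings_hours = daily_savings_min * work_days / 60
vacation_boost = annual_savings_hours / 80   # relative to 80 hours of vacation

print(f"{annual_savings_hours:.1f} hours saved per year "
      f"(~{vacation_boost:.0%} of an 80-hour vacation allowance)")
```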
Departure time isn't the be-all and end-all, however, and making this shift won't always result in a smooth and fast commute. The day of the week in the morning and the month of the year in the evening both have significant impacts, and whether or not school is in session affects both. I could possibly squeeze out a few more minutes of savings by scheduling my vacation days to align with the potentially longest commutes (e.g., non-Friday school days in the months of November, February and April), but the data shows significant variation over and above that described by the measured variables - much of it likely due to factors outside of the control of the commuter (e.g., weather, wrecks, breakdowns, response to traffic predictions, etc.).22
The commuter may have more control than it appears, however. Adjusting your commute times and rearranging your vacation schedule will probably help in the meantime, but getting cars off the road is the only sure solution - one that is within commuters' sphere of influence.23 It might require punching your "free rein" in the gut, but getting involved in your community by writing your Congressperson or attending city council meetings in promotion/defense of improved mass transit could be the most effective way to "curb" your drive times in the long run.24
Notes
- "Beating Traffic." Mathematical Moments. American Mathematical Society. 2005. Accessed April 2006 from http://www.ams.org/ams/mm31-traffic.pdf. According to the publication, "In the last 30 years while the number of vehicle-miles traveled has more than doubled, physical road space has increased only six percent." ↑
- "Americans Spend More Than 100 Hours Commuting to Work Each Year, Census Bureau Reports." US Census Press Release. March 20, 2005. Accessed April 2006 from http://www.census.gov/Press-Release/www/releases/archives/american_community_survey_acs/004489.html. ↑
- "Understanding Traffic." Discovery Channel Features. January 30, 2006. Accessed April 2006 from http://www.odeo.com/audio/674920/view. ↑
- "Average Travel Time to Work of Workers 16 Years and Over Who Did Not Work at Home." U.S. Census Bureau: American Community Survey 2003. Accessed April 2006 from http://www.census.gov/acs/www/Products/Ranking/2003/pdf/R04T160.pdf. ↑
- Houston Real-Time Traffic Map. HoustonTranstar.org. Accessed April 2006 from http://traffic.houstontranstar.org/layers/. ↑
- Reschovsky, Clara. "Journey to Work 2000." US Census Bureau. Accessed April 2006 from http://www.census.gov/prod/2004pubs/c2kbr-33.pdf. According to Table 1: Means of Transportation to Work: 1990 and 2000, 2.5% more Americans drove to work alone in 2000 when compared with ten years earlier. All public transportation modes saw at least a minor decline. ↑
- "Houston city, Texas: Selected Economic Characteristics: 2004." U.S. Census Bureau: American Fact Finder. Accessed April 2006 from http://factfinder.census.gov/servlet/ADPTable?_bm=y&-geo_id=16000US4835000&-qr_name=ACS_2004_EST_G00_DP3&-ds_name=ACS_2004_EST_G00_&-_lang=en&-_sse=on. ↑
- Google Local - Cypress N Houston Rd & Riata Ranch Blvd, Houston, TX 77095. Google Maps. Accessed April 2006 from http://maps.google.com/maps?f=q&hl=en&q=Cypress+N+Houston+Rd+%26+Riata+Ranch+Blvd,+Houston,+TX+77095&om=1. My exact home address is withheld purposely. ↑
- Google Local - W Sam Houston Pky N & Clay Rd, Houston, TX 77041. Google Maps. Accessed April 2006 from http://maps.google.com/maps?f=q&hl=en&q=W+Sam+Houston+Pky+N+%26+Clay+Rd,+Houston,+TX+77041&om=1. Again, the exact details of my office location are purposely omitted. ↑
- My 12.7 mile route to work consists of the following:
a. Proceed .1 miles from home to Riata Ranch Blvd & Cypress N Houston Rd.
b. Proceed west .2 miles on Cypress N Houston Rd.
c. Turn right on Barker Cypress Rd. Proceed .8 miles.
d. Turn right on US-290 E. Proceed 1 mile.
e. Take US-290 ramp. Proceed 6.7 miles.
f. Take Frontage Road Exit to Beltway 8 / FM-529 / Senate Ave. Proceed .7 miles. (I exit here instead of taking the shorter - and most likely faster - freeway to avoid the toll. Yes, I'm cheap and I like spreadsheets.)
g. Turn right on Senate Ave. Proceed 3.1 miles to Clay Rd.
h. Proceed .1 miles to office. ↑
- My 13.0 mile route home consists of the following:
a. From the office, proceed north on Sam Houston Parkway frontage road for 3.1 miles.
b. Turn left on US-290 frontage road. Proceed 1.0 mile.
c. Take US-290 ramp. Proceed 6.8 miles.
d. Take Barker Cypress Rd Exit. Proceed .9 miles, veering right at split.
e. Turn left on Barker Cypress Rd. Proceed .9 miles.
f. Turn left on Cypress N Houston Rd. Proceed .2 miles to Riata Ranch Blvd.
g. Proceed .1 miles to home. ↑
- I occasionally took two variations, one on the way to work and one on the way home. In the morning, I sometimes drove around the south side of the fast food restaurants on the southwest-bound frontage road of US-290 to avoid the backup at the light at Senate Ave. In the evening, heading north on Senate Ave, I occasionally continued straight under US-290 to avoid the backup in the left-hand turn lanes. Although the road is not shown on the map, the first left after crossing the US-290 frontage road proceeds about .2 miles, then makes a left turn and dead-ends back into the frontage road. A detail of the US-290 and Senate Ave intersection, which contains both variations, is available from Google Maps: http://maps.google.com/maps?f=q&hl=en&q=US-290+W+%26+Senate+Ave,+Houston,+TX+77040&ll=29.877341,-95.564607&spn=0.006549,0.013561&t=h&om=1 ↑
- Cypress-Fairbanks ISD Home Page. CFISD.net. Accessed April 2006 from http://www.cfisd.net/. ↑
- Harris County Appraisal District: Index Map: By School District. HCAD: I-Map Publication Service. Accessed April 2006 from http://www.hcad.org/maps/default.asp. ↑
- As an interesting aside, information was also gathered for surrounding school districts:
a. Houston (http://www.houstonisd.org)
b. Katy (http://www.katyisd.org)
c. Klein (http://www.kleinisd.net)
d. Spring Branch (http://www.springbranchisd.com)
e. Tomball (http://www.tomballisd.net)
f. Waller (http://www.waller.isd.esc4.net)
Analysis indicated that these schedules had no statistically significant impact on my commute, confirming that the effect of the school district schedule is limited to within its own boundaries. ↑
- "Official City Holidays." HoustonTX.gov. 2006. Accessed April 2006 from http://www.houstontx.gov/abouthouston/cityholidays.html. 2005 city holidays confirmed via Mrs. Wilkerson of Houston City's 3-1-1 Helpline, accessible per: "Contact Us." HoustonTX.gov. 2006. Accessed April 2006 from http://www.houstontx.gov/contactus/index.html. ↑
- "2005 Federal Holidays." OPM.gov. Accessed April 2006 from http://www.opm.gov/Fedhol/2005.asp. & 2006 Federal Holidays. OPM.gov. Accessed April 2006 from http://www.opm.gov/Fedhol/2006.asp. ↑
- "ANOVA" stands for ANalysis Of VAriance. For more details on ANOVAs and how/when they are used: "Chapter 12: Introduction to ANOVA." HyperStat Online Textbook. Accessed April 2006 from http://davidmlane.com/hyperstat/intro_ANOVA.html. ↑
- Discrete variables are those whose values are represented in a limited set. For example, the "day of the work week" variable consists of five values ("Monday" through "Friday") and a one-way ANOVA analyzes each to determine if it has a significant impact on the result variation. On the other hand, the "departure time" variable is practically continuous, with as many "categories" as there are minutes, and doesn't lend itself well to ANOVA analysis. ↑
- For each of the ANOVA analyses, the significance level (α) is .05 and the null hypothesis (H0) is that the input variable has no statistically significant influence on the output. When the Pvalue < α, H0 is thrown out. For example, in the case of the day of the work week vs. the morning commute, the Pvalue is .005, which is less than .05. Thus, it is statistically improbable that the results could have occurred at random and, therefore, the day of the week is shown to exert a significant effect on the morning commute duration. ↑
- I used Minitab to run the ANOVAs. The top table of the output lists the output variable (Source), the degrees of freedom (DF), the sum of the squares (SS), the mean of the squares (MS), the Fvalue (F) and the Pvalue (P). The lower table lists the input variables (Level), the number of inputs for each (N), the mean of the inputs (Mean), the standard deviation of the inputs (StDev), and then these mean values graphed with a 95% confidence interval (CI) based on the pooled standard deviation. For more information on interpreting the output of one-way ANOVAs: "How to Read the Output From One Way Analysis of Variance." Jerry Dallal's Tufts Home Page. Accessed April 2006 from http://www.tufts.edu/~gdallal/aov1out.htm. ↑
- Some have even suggested chaos theory and driver psychology as ways to best model traffic behavior. More information on chaos theory and traffic: "Chaos and your everyday Traffic Jam." FailedSuccess.com. Accessed April 2006 from http://www.failedsuccess.com/index.php?/weblog/comments/traffic_jam_causes/. More information on driver psychology: Groeger, J. A. and Rothengatter, J. A. "Traffic psychology and behaviour." Transportation Research Part F: Traffic Psychology and Behaviour. Volume 1, Issue 1, August 1998, Pages 1-9. Accessed April 2006 from http://dx.doi.org/10.1016/S1369-8478(98)00007-2. ↑
- "Understanding Traffic." Discovery Channel Features. January 30, 2006. Accessed April 2006 from http://www.odeo.com/audio/674920/view. Every subway train takes 1,000 cars off the road. Every bus, 40 cars. ↑
- "Critical Relief for Traffic Congestion." PublicTransportation.org. Accessed April 2006 from http://www.publictransportation.org/pdf/reports/congestion.pdf. Public transportation stands to improve commute times more than departure time adjustment. "The Benefits of Public Transportation: An Overview." PublicTransportation.org. Accessed April 2006 from http://www.publictransportation.org/reports/asp/pub_benefits.asp. Public transportation brings unparalleled reliability and consistency. ↑
Scientists find brain cells linked to choice
LONDON (Reuters) - If choosing the right outfit or whether to invest in stocks or bonds is difficult, it may not be just indecisiveness but how brain cells assign values to different items, scientists said on Sunday.
Researchers at Harvard Medical School in Boston have identified neurons, or brain cells, that seem to play a role in how a person selects different items or goods.
Scientists have known that cells in different parts of the brain react to attributes such as colour, taste or quantity. Dr Camillo Padoa-Schioppa and John Assad, an associate professor of neurobiology, found neurons involved in assigning values that help people to make choices.
"The neurons we have identified encode the value individuals assign to the available items when they make choices based on subjective preferences, a behaviour called economic choice," Padoa-Schioppa said in a statement.
The scientists, who reported the findings in the journal Nature, located the neurons in an area of the brain known as the orbitofrontal cortex (OFC) while studying macaque monkeys which had to choose between different flavours and quantities of juices.
They correlated the activity of neurons in the OFC with the value the animals assigned to the different types of juice. Some neurons would be highly active when the monkeys selected three drops of grape juice, for example, or 10 drops of apple juice.
Other neurons encoded the value of only the orange juice or grape juice.
"The monkey's choice may be based on the activity of these neurons," said Padoa-Schioppa.
Earlier research involving the OFC showed that lesions in the area seem to have an association with eating disorders, compulsive gambling and unusual social behaviour.
The new findings show an association between the activity of the OFC and the mental valuation process underlying choice behaviour, according to the scientists.
"A concrete possibility is that various choice deficits may result from an impaired or dysfunctional activity of this population (of neurons), though this hypothesis remains to be tested," Padoa-Schioppa said.
source:http://news.scotsman.com/latest.cfm?id=610912006
How To Set Up A Load-Balanced MySQL Cluster
source:http://developers.slashdot.org/article.pl?sid=06/04/23/1333219
Google Violates Miro's Copyright?
source:http://yro.slashdot.org/article.pl?sid=06/04/23/1331246
Abandoned Games
source:http://games.slashdot.org/article.pl?sid=06/04/23/139228
High DPI Web Sites
One area of Web design that is going to become more important in the coming years is high DPI. For those of us working on WebKit, this will also become an issue for WebKit applications and for Dashboard widgets.
What is DPI?
DPI stands for “dots per inch” and refers to the number of pixels of your display that can fit within an inch. For example a MacBook Pro has a 1440×900 resolution on a 15 inch screen. Screens exist for laptops, however, that have the same physical size (15 inches) but that cram many more pixels into the same amount of space. For example my Dell XPS laptop has a 1920×1200 resolution.
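As a rough illustration (assuming both screens measure exactly 15 inches diagonally, as described above), pixel density can be computed from resolution and diagonal size:

```python
# Pixels per inch (PPI) from screen resolution and diagonal size.
import math

def ppi(width_px, height_px, diagonal_in):
    # Diagonal in pixels divided by diagonal in inches.
    return math.hypot(width_px, height_px) / diagonal_in

print(f"MacBook Pro (1440x900, 15in):  {ppi(1440, 900, 15):.0f} PPI")
print(f"Dell XPS    (1920x1200, 15in): {ppi(1920, 1200, 15):.0f} PPI")
```

The same physical inch on the Dell screen holds roughly a third more pixels than on the MacBook Pro.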
Why does this matter?
Consider a Web page that is designed for an 800×600 resolution. Let’s say we render this Web page such that the pixels specified in CSS (and in img tags and such on the page) map to one pixel on your screen.
On a screen with 1920×1200 resolution the Web site is going to be tiny, taking up less than half of the screen in each dimension (800 of 1920 pixels horizontally and 600 of 1200 pixels vertically).
Now this may not be a huge problem yet, but as displays cram more and more pixels into the same amount of space, if a Web browser (or any other application for that matter) naively continues to say that one pixel according to the app’s concept of pixels is the same as one pixel on the screen, then eventually you have text and images so small that they’re impossible to view easily.
How do you solve this problem?
The natural way to solve this “high DPI” problem is to automatically magnify content so that it remains readable and easily viewable by the user. It’s not enough of course to simply pick a pleasing default, since the preferences of individuals may vary widely. An eagle-eyed developer may enjoy being able to have many open windows crammed into the same amount of space, but many of us would like our apps to remain more or less the same size and don’t want to have to squint to read text.
The full solution to this problem therefore is to allow your user interface to scale, with the scale factor being configurable by the user. This means that Web content has to be zoomable, with the entire page properly scaling based off the magnification chosen by the user.
What the heck is a CSS px anyway?
Most Web site authors have traditionally thought of a CSS pixel as a device pixel. However as we enter this new high DPI world where the entire UI may be magnified, a CSS pixel can end up being multiple pixels on screen.
For example, if I set a zoom magnification of 2x, then 1 CSS pixel would actually be represented by a 2×2 square of device pixels.
This is why a pixel in CSS is referred to as a relative unit, because it is a unit whose value is relative to the viewing device (e.g., your screen).
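To make the relationship concrete, here's a small JavaScript sketch of how CSS pixels map to device pixels under a zoom factor. The function name is my own invention for illustration, not a browser API:

```javascript
// Illustrative helper: at a given zoom factor, one CSS pixel spans
// zoomFactor device pixels along each axis.
function cssToDevicePixels(cssPixels, zoomFactor) {
  return cssPixels * zoomFactor;
}

// A 1 CSS-pixel square at 2x zoom covers a 2x2 block of device pixels:
var side = cssToDevicePixels(1, 2); // 2 device pixels per axis
var area = side * side;             // 4 device pixels total
```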
CSS 2.1 describes how the px unit should be rescaled as needed.
http://www.w3.org/TR/CSS21/syndata.html#length-units
What’s wrong with zooming?
Zooming an existing Web page so that it can be more easily viewed has a number of immediate benefits. Text remains readable. Images don’t become so tiny that they can’t be viewed.
Doing naive zooming, however, will result in a Web site that - when scaled - looks much worse. (Try looking at what happens to images in Internet Explorer for Windows when you change the OS DPI setting from 96 to 120 for example.) Several factors come into play here.
For example, with text you don’t want or need to “zoom” it. In other words, you aren’t going to take the actual pixels for each character and scale them like you’d scale an image. Instead you simply use a larger font size. This will allow text to have a higher level of detail on high DPI displays and ultimately look more and more like the text you might see in a printed book.
For images, you first and foremost need a good scaling algorithm. You’d like for the image to look about as good as it did on a lower DPI display when rendered at the same physical size. However, the problem with scaling of existing images is that all you’ve done is maintained the status quo, when instead you could be designing a Web site that looks *even better* on these higher DPI displays.
How can I make images look better?
Consider a common Web site example: the use of images to do UI elements like buttons with rounded corners and fancy backgrounds. Let’s say the Web designer uses a 50×50 pixel image for the button. The rounded corners and background may look reasonably nice on a lower DPI display and even continue to look nice when the image is scaled by 2x but rendered at the same physical size on a higher DPI display.
What if you could use a 200×200 image instead? Or, even better, what if you used an image format that hadn’t hard-coded all of its pixel information in the first place? The use of either a higher resolution image (with more detail) or of a scalable image format allows for the creation of images that would look *better* when rendered on the higher DPI display.
Enter SVG
Safari actually supports PDF as an image format (the hands of the clock Dashboard widget are an example of this). However other browsers do not support this format. The agreed-upon standard for scalable graphics on the Web is SVG.
SVG stands for Scalable Vector Graphics and is an XML language for describing two-dimensional images as vector graphics. Describing graphics in this fashion allows for the creation of images that will look better on high DPI displays when rendered at the same physical size.
Our goal with WebKit is to make SVG a first-class image format, so that it can be used anywhere you might use a PNG, a GIF or a JPG. In other words, all of the following should be possible:
div {
background-image: url(tiger.svg)
}
li {
list-style-image: url(bullet.svg)
}
Our current thinking regarding SVG images used this way is that they would be non-interactive (in other words you can’t hit test elements inside the SVG’s DOM). It’s debatable whether or not script execution should be allowed when SVG is used this way.
These are some issues we’d like to hammer out, since we view this use of SVG as being very different from SVG included explicitly in a compound XHTML document or included via the use of an `object` element.
Size Matters
In addition to supporting scalable image formats like SVG, we want to make it possible for Web designers to continue to use image formats they are familiar with (like PNG, JPG and GIF), but give them the capability to conditionally include higher resolution artwork.
The idea behind this approach is that a much higher-resolution image can be specified and then either used only if the resolution is detected to be high enough, or downscaled on lower DPI displays.
In order for this approach to be viable, every place where images can be used today must support being able to specify a size in CSS pixels so that the higher resolution artwork can render with more detail in the same amount of space. (This will become clear with the examples that follow.)
In addition we would like these approaches to degrade gracefully in browsers that don’t support high DPI Web sites yet.
Let’s go over each of the places images can be used today.
The img Element
The img element already supports specifying explicit sizes, and so today you can specify a width and height and if an image is larger it will be downscaled. In a high DPI system where 1 CSS pixel != 1 device pixel, more detail can then be rendered.
In other words how you scale an image is based off comparing *device pixels* and not CSS pixels. For example, if you have a zoom factor of 2, an img tag that specifies a width and height of 100 CSS pixels, and an image whose intrinsic size is 200×200
device pixels, then that image is not scaled, since 100×2 = 200. However on a lower DPI display that might not have any zoom factor set, the 200×200 image would be properly downscaled to 100×100.
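The comparison described above can be sketched in a few lines of JavaScript. The names are illustrative, not actual WebKit code:

```javascript
// Illustrative sketch of the scaling rule: compare the image's intrinsic
// size in device pixels against the CSS size multiplied by the zoom factor.
// A result of 1 means the image is used as-is (no scaling).
function imageScaleFactor(intrinsicDevicePixels, cssSize, zoomFactor) {
  var targetDevicePixels = cssSize * zoomFactor;
  return targetDevicePixels / intrinsicDevicePixels;
}

imageScaleFactor(200, 100, 2); // 1: the 200x200 image renders unscaled
imageScaleFactor(200, 100, 1); // 0.5: downscaled to 100x100 device pixels
```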
If no CSS size is specified for the element, then the size of the element in CSS pixels is simply the image’s size in device pixels. This will result in the image obeying the zoom.
This approach degrades gracefully, with the only tradeoff being that the higher resolution artwork would be slower to load on low DPI displays that couldn’t render all the detail anyway.
Backgrounds
For backgrounds the problem is that you need to be able to specify the size of a background tile in CSS pixels. CSS3 has a new property that we now support in WebKit called background-size. This property allows you to specify the size of a tile in CSS pixels, and thus enables backgrounds to support higher DPI artwork as well.
If no tile size is specified (as is the case on the Web today), then the size used is the image’s intrinsic size in CSS pixels. Existing background images on the Web will then obey the zoom automatically.
However, once you can specify a tile size in CSS pixels, the image can be scaled using device pixels. Using the background-size property inside the background shorthand allows for a degradable approach that won’t break in other browsers.
For example, let’s say you have an image tiger-low.png that is 100×100 and an image tiger-high.png that is 200×200. Here’s an example of how you might make a CSS declaration that can use the low-res image for browsers that don’t understand background-size and the higher-resolution image for browsers that do.
div {
background: url(tiger-low.png);
background: url(tiger-high.png) (100px 100px);
}
In the above example, both declarations result in a tile that is the same size in CSS pixels, but on a high DPI machine with a zoom factor of 2, you will be able to see all of the additional detail of the higher resolution image.
Browsers that don’t understand background-size specified in the shorthand will throw out the entire second declaration. Browsers that do understand it will overwrite the previous background declaration.
List Bullets
As with backgrounds the trouble with list bullets using images is that you have no way of specifying the size of the list bullet in CSS2. Luckily CSS3 has a solution for this problem as well.
The ::marker pseudo-element can be used to style a list bullet. We plan to add support for this pseudo-element to provide much more control over the images used by bullets.
Once you can specify the size of the marker, then the same rules apply as in the previous examples.
li {
list-style-image: url(bullet-low.png);
}
li::marker {
content: url(bullet-high.png);
width:10px;
height:10px;
}
In the above example, let’s say that bullet-low.png is 10×10 pixels and bullet-high.png is 20×20 pixels. Only browsers that understand the CSS3 marker pseudo-element will replace the image with the higher-resolution version, and thus more detail will be shown when zooming on a high DPI display.
Border Images
Safari supports the CSS3 border-image property. This property essentially already works, since in the places where tiling is used, the tiles get scaled to match the widths of the borders.
The only open issue right now is tiling in the center, since right now the spec states that the center tile is not scaled. Hopefully some heuristic will be chosen that will scale the center tiles based off the border widths (e.g., using the left/top border widths). This issue can be worked around by using border-image only to render the border and using background with background-size to tile high-resolution artwork in the center.
Conditional Inclusion
The above approaches allow you to go ahead and mingle low-res and high-res rules, but this approach can get somewhat cluttered. In addition the approach only works for two different images. What if you want to offer more than 2 versions of your artwork, e.g., low/medium/high images?
Our proposed solution for this problem is to extend CSS Media Queries with a new media query feature, the CSS pixel scaling factor.
Media queries allow a Web site author to write rules that should only be matched conditionally based off features of the device (like the viewport width/height, the screen dimensions, the screen’s DPI, etc.). Unfortunately media queries do not include the ability to query based off the zoom factor. This feature is necessary in order to really understand what’s going to happen with images.
We plan to add a new feature, device-pixel-ratio, that can be queried to find out how a CSS pixel relates to a device pixel. Min and max versions of the feature can be supported as well.
You can then construct queries like so:
<link rel="stylesheet" media="screen and (min-device-pixel-ratio: 2)" href="highres.css"/>
With CSS3 media queries you can then build Web sites with completely different CSS files based off the pixel-ratio of CSS pixels to device pixels, including higher res artwork as necessary.
This approach also degrades gracefully, since you can specify the lowres CSS file and then higher res CSS files inside media queries that will be ignored by browsers that don’t understand them.
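To illustrate the idea, here's a hypothetical JavaScript sketch of the selection logic: pick the highest-resolution variant whose minimum ratio the device satisfies. The variant list and function are my own invention, not part of the proposal:

```javascript
// Hypothetical selection logic for the proposed device-pixel-ratio feature.
// variants must be sorted by ascending minRatio; we keep the last variant
// whose threshold the device meets.
function pickArtwork(devicePixelRatio, variants) {
  var chosen = variants[0];
  for (var i = 0; i < variants.length; i++) {
    if (devicePixelRatio >= variants[i].minRatio) {
      chosen = variants[i];
    }
  }
  return chosen.url;
}

var variants = [
  { minRatio: 1, url: "lowres.css" },
  { minRatio: 2, url: "highres.css" }
];
pickArtwork(1, variants); // "lowres.css"
pickArtwork(2, variants); // "highres.css"
```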
source:http://webkit.opendarwin.org/blog/?p=55
Ajax and the Ken Burns Effect
source:http://developers.slashdot.org/article.pl?sid=06/04/22/1536217
Rich Ajax slide shows with DHTML and XML
18 Apr 2006
Learn to create an Asynchronous JavaScript and XML (Ajax) client-side slide show that's animated using "Ken Burns Effects." Here, you discover how to build XML data sources for Ajax, request XML data from the client, and then dynamically create and animate HTML elements with that XML.
If the Web 2.0 revolution has one buzzword, it's Asynchronous JavaScript and XML (Ajax). The client-side interactivity in applications such as Google Maps™ mapping service and Gmail™ webmail service make Ajax both exciting and useful. The technologies of Ajax, including Hypertext Markup Language (HTML), JavaScript coding, Cascading Style Sheets (CSS), XML, and asynchronous Web requests, can create far more compelling Web interactions than those we saw in Web V1.0. Of course, these technologies have been around since Microsoft® Internet Explorer® V4, but only recently have other high-profile applications displayed the benefits.
How difficult is Ajax to implement? Each element of the Ajax model is relatively easy to learn. But the trick is blending all the elements into a seamless experience. Often that problem is compounded, because different individuals do the client-side and server-side coding. This article shows how just one person can write a small Ajax-based slide viewing application in a couple of hours.
Personal image-management applications such as Apple® iPhoto® on the Macintosh® have popularized the slide show view. In a slide show, the images appear in a timed sequence, with images fading in and out. In addition, the images are moved and zoomed in what has become known as the "Ken Burns Effect."
In this example, I have the browser download a list of images from the server. Then, I use that list of images to compose a slide show using Dynamic HTML (DHTML). I animate the images with random slow moves, zooms, and fades to give a pleasing version of the Ken Burns Effect without having to download Macromedia® Flash or any other heavyweight animation tools.
To understand what's different about Ajax, you must first understand the current model of Web programming. The simple interaction between client and server is shown in Figure 1.
Figure 1. The Web V1.0 model of client-server interaction

The Web browser, or client, makes a GET or POST request of the Web server. The server formats an HTML response. The client parses the HTML and displays it to the user. If the user clicks another link or button, another request is made to the server, and the current page is replaced with the new page that the server returns.
The new model is more asynchronous, as shown in Figure 2.
Figure 2. The Ajax model of client-server interaction

In this new model, the server returns an HTML page, just as before. But now this page has some JavaScript code on it. That code calls back to the server for more information as needed. Those requests can be made as simple GET requests for a Representational State Transfer (REST) service, or as the POST requests required for SOAP.
The JavaScript code then parses the response, often encoded as XML, and updates the HTML on the page dynamically to reflect the new data. In addition to XML, engineers are returning data encoded in the JavaScript Serialized Object Notation (JSON) format. This data is easier for a browser to understand but not for other client types. The value of returning XML is that clients other than browsers can interpret the data. The choice is up to you and depends on the application.
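As a quick illustration of the JSON side of that trade-off, here's a sketch that parses a hypothetical slide list with JSON.parse (built into modern browsers; in 2006-era browsers a small library provided the same call). The payload shape is my own example, not the article's actual data:

```javascript
// JSON decodes directly into JavaScript objects, with no DOM traversal
// needed -- which is exactly why it's easier for a browser to consume.
var jsonPayload = '[{"src":"oso1.jpg","width":768,"height":700}]';
var slides = JSON.parse(jsonPayload);

slides.length;     // 1
slides[0].src;     // "oso1.jpg"
slides[0].width;   // 768
```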
The first step in developing the Ajax slide show is to put together the REST data service. In this example, I use a PHP page that returns all the available slide show images and their sizes (width and height). All the images reside in a directory named images. The names of the files are name_width_height.jpg -- for example, oso1_768_700.jpg, which means that the file is a picture of Oso, one of my dogs, and is 768 pixels in width and 700 pixels in height. I use this kind of naming all the time, because it makes it easy to see what the width and height of an image are without cracking open Adobe® PhotoShop® or Macromedia Fireworks.
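The naming convention is easy to pick apart mechanically. Here's a hypothetical JavaScript version of the parsing the PHP service performs on the server (the function name is mine, for illustration only):

```javascript
// Parse the name_width_height.jpg convention described above into
// its parts. Returns null for filenames that don't match.
function parseSlideName(filename) {
  var match = /^(.+)_(\d+)_(\d+)\.jpg$/.exec(filename);
  if (!match) return null;
  return {
    name: match[1],
    width: parseInt(match[2], 10),
    height: parseInt(match[3], 10)
  };
}

parseSlideName("oso1_768_700.jpg"); // { name: "oso1", width: 768, height: 700 }
```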
To serve up the list, I use the PHP server code shown in Listing 1.
Listing 1. The slides.php server page
The code is relatively simple. To start, it sets the content type to XML. It's critical to get the browser to recognize the document as XML and to create a document object model (DOM) for it. The code writes the opening root tag, then reads the images directory to create a child tag for each image it sees. Finally, the script writes the closing root tag.
If you navigate the Mozilla® Firefox® browser to the page, hosted (in my case) on my localhost in a directory called kenburns, you see the result shown in Figure 3.
Figure 3. The output of the slides.php server script

There are three images: one of my daughter and two of my dogs. Obviously, you can add whatever detail and multimedia you want here, but I've tried to keep it simple for this example.
The next step is to write an HTML page (shown in Listing 2) that will read the data from the service and verify that the Ajax connection between the browser and the server works. This HTML code, with embedded JavaScript code, retrieves the XML and brings up an alert shown in the text that the server returns.
Listing 2. A simple Ajax fetch page
The code grabs the XML content from a specified URL. The loadXMLDoc function starts the Ajax request, which goes off asynchronously to retrieve the page and return the result. When the request is complete, the processReqChange function is called with the result. In this case, processReqChange displays the value of the responseText property in an alert window. The result of firing up this page in my Firefox browser is shown in Figure 4.
Figure 4. The XML shown in an alert window

That's a good start. I'm definitely getting the XML data back from the server. But let me point out a few things. First, notice that the URL is an absolute path, domain name and all. That's the only valid URL style for Ajax. The server code that writes the Ajax JavaScript code always creates valid, fully formed URLs.
Another thing that isn't evident here is the Ajax security precautions. The JavaScript code can't ask for just any URL. The URL must have the same domain name as the page. In this case, that's localhost. But it's important to note that you can't render HTML from www.mycompany.com, and then have the script retrieve data from data.mycompany.com. Both domains must match exactly, including the sub-domains.
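The exact-match rule can be expressed as a trivial check. This JavaScript sketch (names mine) makes the sub-domain restriction explicit:

```javascript
// The Ajax same-origin restriction: the request host must exactly match
// the host that served the page, sub-domains included.
function sameOrigin(pageHost, requestHost) {
  return pageHost === requestHost;
}

sameOrigin("www.mycompany.com", "www.mycompany.com");  // true: allowed
sameOrigin("www.mycompany.com", "data.mycompany.com"); // false: blocked
```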
Another item of interest is the code in loadXMLDoc, which seems to do back flips to create a request object. Why so much hassle? Pre-version 7 Internet Explorer doesn't have the XMLHttpRequest object type built in, so I must use a Microsoft ActiveX® control instead.
Finally, in the processReqChange function, you see that I look for readyState to be equal to 4 and status to be set to 200. The readyState value of 4 means that the transaction is complete. The status value of 200 means that the page is valid. You might also get error code 404 if a page isn't found, just like you see in the browser. I don't handle exception cases here, because it's just example code, but the Ajax code you ship should handle requests that return errors.
Before I show you how to create the slide show, I will extend the current example by having the processReqChange function create an HTML table from the results of the XML request to the server. In that way, I can test two things: that I can read the XML and that I can create HTML from it dynamically.
Listing 3 shows the updated code that creates a table from the returned XML.
Listing 3. The enhanced test page
It's tough to show what this looks like in a browser without a movie. So, I took a single snapshot of the show and present it in Figure 6.
Figure 6. A snapshot from the slide show

This page starts by bringing in the base classes through the src attributes on the script tags. After those classes are installed, new functions are added to bring the whole mechanism together: load_slides and start_slides. The load_slides function takes an array of image src, width, and height specifications, and then creates the image tags and the animations. The start_slides function starts the slide show with the first item.
Another function attached to the animation manager, on_finished, is called whenever an animation is complete. I use that notification to move on to the next slide, or to return to the first slide in the list if I've completed the animation of all the slides.
Getting back to load_slides, notice that it references an array called g_directions. This array contains a set of random ranges that the slide loader uses to specify where the image should start and end its movement. The most appealing effects go from corner to corner. As you can see from the comments, these ranges specify movement of the slide from each combination of northeast, southeast, northwest, and southwest. The last script tag defines an array of images, and then uses the load_slides and start_slides functions to start the slide show.
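The per-frame movement behind the effect is simple linear interpolation between the chosen start and end positions. Here's a hypothetical sketch of that step (the names are mine, not the article's animation manager):

```javascript
// Linear interpolation: t runs from 0 to 1 over the lifetime of a slide.
function interpolate(start, end, t) {
  return start + (end - start) * t;
}

// Compute the slide's position at time t, moving between two corners
// picked from g_directions-style ranges.
function kenBurnsFrame(startPos, endPos, t) {
  return {
    x: interpolate(startPos.x, endPos.x, t),
    y: interpolate(startPos.y, endPos.y, t)
  };
}

// Halfway through a northwest-to-southeast move:
kenBurnsFrame({ x: 0, y: 0 }, { x: 100, y: 80 }, 0.5); // { x: 50, y: 40 }
```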
The final step in this process is to create the Ajax version of the slide show. This means replacing the hard-coded image list with something retrieved from the slides.php service.
The Ajax version of the slide show code is shown in Listing 8.
Listing 8. The Ajax slide show code
Adapting Ajax slide shows to your needs
In this article, I used object-oriented JavaScript code whenever possible. JavaScript is a fully object-oriented language, even though it might not use the class-based syntax familiar from other languages. In addition to what you've seen in this article, I have the following recommendations for your Ajax slide shows:
It used to be that you needed Flash or a similar application to make dynamic slide shows like the one in this article. With modern browsers, which include excellent support for DHTML with rich effects like opacity (or even rotation, blur, and so on, in Internet Explorer) and Ajax, you can do amazing things right in the browser. That means that your customers don't have to download fancy extensions or run potentially unsafe applications. They can just surf to your pages and get stunning graphic effects that will keep them coming back for more.