
John Allsopp

John Allsopp has spent more than 15 years developing for the web, developing software like the acclaimed CSS editor Style Master, and writing and publishing training for web developers. John frequently speaks at conferences and delivers workshops around the world. He is a co-founder of the Web Directions conferences for web designers and developers, held on several continents. In 1999, John wrote the still highly regarded Dao of Web Design and his Microformats: Empowering Your Markup for Web 2.0 was the first book published on microformats. He is also the author of Developing with Web Standards. When not bathed in the glow of various computer screens, he’s a volunteer surf lifesaver and lives at the southern edge of Sydney with his wife and young daughters, who are the light of his life.

John tweets @johnallsopp.

Published Thoughts

Looking back on the 11 months of Pastry Box posts, around 5,000 words, I’ve been trying to find some sort of pattern to what were short essays on whatever idea was foremost in my mind as that month’s deadline loomed.

A lot of the ideas I’d been thinking about for a long time. A number I’d presented on, while one or two, particularly August’s thoughts on digital artefacts, more or less popped into my head, though they were obviously influenced by “The New Aesthetic” (which has been something of an obsession of mine since James Bridle’s “Waving at the Machines” presentation at Web Directions in 2011). Curiously, just the other day I found reference to an essay by Brian Eno from 1996 that discussed this same idea. So, I’m only 16 years behind the times on that one.

So, is there a pattern? I think almost all of these pieces are about the future, and what it might look like. Even my piece from October, “Ancient History”, is really about the future, and how, in the famous formulation by George Santayana, we are doomed to a future that is essentially like our past, unless we learn from that past. Which prompts me to ask: why do I seem so concerned with the future?

Well, for one, who isn’t?

But what I think makes the future more pressing for me, someone who in previous generations (and I’m sure by at least some of you out there) would be considered “middle aged”, is that I have a young family, and a daughter due to be born within a few weeks of this being written. This daughter, if she lives the average life expectancy for a child born now, will live to be 100. When her grandparents were born, that expectancy was much closer to 70 years.

So, my daughters, and their generation, have a lot of the future in front of them.

When I was growing up, the future looked like colonies on Mars, and holidays in space, and flying cars, and jetpacks, and video phones. We kind of got video phones, and no one really cares all that much about those.

The future we didn’t know about, or didn’t think much about, was AIDS, and massive population growth, and global climate change, and the fall of the Soviet Union (many of you won’t even have known a time when the Soviet Union existed, let alone posed (supposedly) the greatest existential threat to humanity).

The future I talk about in these Pastry Box pieces is pretty trivial by comparison. If there’s anything among them that aspires to be more than just a few thoughts about the next few years, it’s the idea that the technologies of the web provide us with the opportunity to be more honest with ourselves, as individuals, about our actions (what we eat, drink, buy, how we act, where we travel) and their consequences, for ourselves, and for the billions of others on our planet, and as a civilisation, as a planet as a whole.

We face monumental challenges to our very existence as a species. So far we’ve been pretty good at sticking our heads in the sand. That time is over.

If we want a future at all, let alone one with jetpacks and holidays in space, we need to start accepting reality, then talking about how to fix the problems that are essentially of our own making.

And thank goodness we have the web, and the ability to communicate across borders, and language and cultural barriers at this time. Because without that I honestly believe we’d have no hope at all.

Before and after Apps

In some (very small, for the most part insignificant) circles, I’m known as the anti-native-apps guy.

Which is in many ways fair. For years, I’ve argued that the supposed commercial benefits to developers are largely illusory (something I continue to believe, and which the last several years largely demonstrate).

But my principal concern with apps was actually best expressed last year by Scott Jenson, legendary HCI expert at Apple, Symbian, Google and now Frog Design, in his provocatively titled “Native Apps Must Die”.

Native apps are a remnant of the Jurassic period of computer history, a local maximum that is holding us back. The combination of a discovery service and just-in-time interaction is a powerful interaction model that native apps can’t begin to offer.

Apps are a model of functionality that has shaped both what developers and designers do, and how users conceive of what’s possible, for more or less the entire history of computing. But I think they are a dead end.

One interesting detour from the app-centric model of computing was the Lisa operating system, Apple’s not-quite forerunner to the Macintosh. The Lisa differed markedly from the Macintosh, and other graphical operating systems, in that it was document centric. You didn’t open applications to edit documents, you opened documents to work on them. Users focussed on the task at hand, not the tool.

Changing this conception of what a user experience might be is very difficult, so ingrained is the app-centric model with users, and among ourselves as designers and developers. But what might a more user-centric model look like?

Let’s take a very popular consumer application/service right now: photo hosting. Well, it’s been popular for quite some time, with services like Flickr now several years old, and then Instagram and numerous similar services emerging, particularly with the rise of smartphones.

But all of these services are monolithic—indeed are becoming more so. Most if not all of what a user can do with them takes place inside one application (this is particularly true of services like Instagram). You shoot, apply effects, crop, tag, annotate and upload all within the application. In the case of Instagram you also sign up, search, follow people, in short, access the entire Instagram experience from inside the Instagram app.

While we think of apps and services like these as photo-sharing services, they aren’t really. Instagram is an Instagram app. Facebook is a Facebook app. Twitter increasingly is a Twitter app (as soon as they can get rid of all those other pesky client apps). The application, the service, is an end in itself.

So, what might a user-focussed, post-app photo sharing service (experience? thing?) (let’s call it phUto) look like?

At its core, it would be a place to upload photos which people could view. Some users might want to apply filters and effects to their photos. Now, if I were building phUto, I might build that functionality into it. Or, I might make it possible for other developers to provide the user this functionality, ideally in a way that felt seamlessly part of phUto.

Users might want to search the photos stored at phUto. I could build a search engine, and results experience. Or, I could make it easy for developers to provide this functionality, and a way for my users to easily discover these search services, and use them in a way that feels like it’s a seamless experience.

You get the idea.

There have been complex attempts to enable this in the past. OpenDoc, an enormous undertaking by Apple in the mid 1990s, was among the most sophisticated of such attempts. Microsoft’s ActiveX was similarly designed to enable unrelated pieces of functionality to work together. But more recently, lighter-weight approaches have emerged, among them Android Intents and Web Intents.
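To give a feel for the shape of this, here’s a rough sketch in the spirit of the Web Intents draft: phUto asks the platform for “something that can edit this photo”, and whichever service the user has chosen does the work. The names below follow the draft spec rather than any shipping implementation, and photoBlob and displayPhoto are assumed phUto-side stand-ins, so treat the whole thing as illustrative.

  // Illustrative sketch only: Web Intents was experimental, and these names
  // (Intent, navigator.startActivity) come from the draft spec, not a stable API.
  var intent = new Intent(
    'http://webintents.org/edit', // the verb: "edit this for me"
    'image/jpeg',                 // the kind of data being handed over
    photoBlob                     // the photo itself (assumed to be in scope)
  );

  // The platform offers the user whichever editing or filter services they
  // have registered; phUto never needs to know which one they pick.
  navigator.startActivity(intent, function (editedPhoto) {
    displayPhoto(editedPhoto);    // hypothetical phUto function showing the result
  }, function (error) {
    console.error('No editing service available, or the user cancelled', error);
  });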

But as much as the technology for enabling this is important, the underlying concept is crucial.

The app model essentially creates silos of functionality, carefully created for the user. It gives rise to feature creep, and rewards large teams with lots of engineering resources over smaller teams and individuals who focus on one thing they can do really well.

Perhaps in a decade, or a century, we’ll still be working with monolithic apps. But I suspect (and certainly hope) not.

Ancient History

Progress, far from consisting in change, depends on retentiveness. When change is absolute there remains no being to improve and no direction is set for possible improvement: and when experience is not retained, as among savages, infancy is perpetual. Those who cannot remember the past are condemned to repeat it.

George Santayana

One of the advantages of being (relatively) old in a relatively young field is you’ve witnessed firsthand important milestones, the emergence of ideas, significant debates and other formative aspects of that field. In many other fields, these events are ancient history, their protagonists long gone, their importance fading.

Recently, I’ve been taking something of a walk down memory lane, looking at some of the history of the web (things like browser releases, publication dates for specifications and so on).

What it has emphasised for me is not so much how far we have come, but in many ways how little. Here’s a simple example:

ViolaWWW screenshot

That’s ViolaWWW, one of the earliest graphical web browsers, originally released March 9, 1992, so over 20 years ago. At first glance it looks so primitive. But take a closer look. There are the home, back and forward buttons. There’s a URL field. And inside the window, a page full of text with links to click.

And how about this:

old Google screenshot

Google’s home page from 1998. While visually quite different, again, essentially the same user experience as Google today. And it doesn’t stop there. Think about search engine results. In essence, a list of around a dozen potential matches, with a short description and a link for users to follow. This really hasn’t changed at all since the very beginning of the web. (Results have become more relevant, but when was the last time you went beyond the first page of results? Beyond the first two or three results? Surely with billions of documents, for any search there’s likely to be more than a handful of relevant results?)

So what’s the point?

To some extent we live in an ever-present now, of exciting cool new stuff. To put it harshly, we live in George Santayana’s perpetual infancy. We should step back once in a while, from the sense of excitement mixed with panic, that we are missing out, that it will all pass us by, and get a sense of perspective, of where we have come from, and maybe even think a little about where we are going.

At the beginning of, and indeed well into, the 19th century, only a small percentage of people, even in the then-developed world, were literate in any modern sense.

And yet, the percentage of people in Western Europe and North America at that time who could read and write probably exceeds the percentage in those places now who can program in any non-trivial sense.

The rise of literacy in the 19th Century was largely a side effect of the need for clerical skills in a rapidly industrialising world, but gave rise to the modern world of medicine, science, and literature as a critical mass of the population gained the capacity to improve themselves.

I believe, and hope, that the next great wave of literacy, with a similarly unpredictable but extraordinary impact on civilisation for the better, will be programmatic literacy: the near-ubiquitous ability to program computers.

This doesn’t mean everyone in the world using JavaScript. In fact, part of the challenge is for us to find ways to lower the barriers to programming that tend to hold people back from something which humans are actually typically quite good at.

The web is an enormous step toward this possibility, but not until we start seeing these core skills in primary schools, taught alongside reading, writing, and mathematics, will we start seeing what sort of revolution an explosion of programming literacy might bring.

Artefacts

Photographs make for very big image files. Luckily, they can also compress significantly, with very little noticeable decrease in image quality (well, at least on lower resolution screens; we’ll see what impact higher resolution screens have on just how much we can compress images in future).

But compress too far, and suddenly artefacts appear, the unavoidable fingerprint of a particular technology. We see (and hear) this elsewhere, with 3D rendering engines, audio formats, everywhere we emulate the organic world, the analogue world with digital technology.

I sometimes wonder though whether these artefacts are bugs, or should we treat them as features? Why do we see the organic as the ideal, and the synthetic, the digital as less pure, less real?

Increasingly, our lives are lived through digital experiences, film (often not simply a compression of the real world, but a purely digital emulation of it), and music (which often is digitally generated and reproduced).

Years ago, Hip Hop DJs embraced the scratch, an artefact of mixing using turntables and vinyl, and created an entirely new sound.

I wonder ultimately what artefacts of the digital age will transcend their status as bugs, and become features of digital media? If any?

It can take less than a decade for a prescient, profound, clever observation to become hackneyed (well worth considering in itself), as, it must be said, has William Gibson’s observation from just 2003 that “the future is already here—it’s just not evenly distributed”.

I think we as early adopters, tinkerers, hackers, too often recite this with a kind of smugness—yes, the future has arrived, and I’m living it—you plebs just haven’t grokked it yet.

But just because the future is not evenly distributed, indeed precisely because it is not evenly distributed, it’s not just ours to have and to hold.

Take the “future of money”. We geeks with our squares and stripes and NFCs might think we are living in the future of money. But these are just incremental improvements, making it easy to use government-issued currency, backed by banks and credit card companies and the entire traditional economic edifice. It’s not the future, it’s just a slightly more convenient present.

But when we pull our heads out of our developed-world backsides, we’ll see systems like M-Pesa, a truly transformative payment system with profound, often unforeseen consequences from sub-Saharan Africa to the subcontinent.

M-Pesa reminded me of the story of Keralan fishermen, and the profound and positive transformation in their lives wrought by the introduction of the mobile phone, first brought to my attention by the fantastic Mark Pesce some years ago, and covered in the Economist.

While we too often use the extraordinary capabilities of the technologies that are emerging around us all the time for trivial purposes, elsewhere, these are transforming lives, countries, whole regions.

So maybe instead of focussing on what is happening in the Valley, or the Bay Area, we should be focussing on what is happening in Kenya, Cambodia, and the other places where the future has also unevenly arrived?

When I say web…

As might be apparent by now, I try to think a lot about where the web is now, but also where it is going (which really boils down to the question “what is the web anyway?” in a lot of ways).

So I'm always interested in exploring new devices, ways of interacting, both in practice and in theory.

What's interesting in these conversations is that often very experienced technologists think of the web and the browser as more or less interchangeable. Which constrains the possibilities of the web as a medium tremendously.

In a similar vein, just as it has been for the last decade or more, indeed since the web transitioned from an academic tool to a more popular medium in the mid 1990s, our focus on what the web is, and might be, is typically almost entirely visual.

Of course this is a fundamental aspect of the web, but the web is not exclusively a visual medium. Nor is it exclusively a human-driven medium. Increasingly our interactions with the web are, and will be, passive (as I discussed in my first Pastry Box entry back in January). Indeed, increasingly it won't be humans, but our built environment, our buildings and vehicles, the environment itself, that drives the web.

So when you think of the web, when you talk about the web, train yourself to go beyond fonts and colors and responsiveness.

Play with some of the emerging low-cost, easy-to-program sensor devices like Ninja Blocks, Twine, and others. Investigate the APIs that are in mobile devices, accessible to web developers via PhoneGap, or on platforms like Windows 8, and RIM's Tablet OS and upcoming BB10 phone OS.
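You don't even need new hardware to start: most modern smartphone browsers already expose motion sensors to JavaScript. A minimal sketch, with an arbitrary threshold chosen purely for illustration:

  // Minimal sketch: the browser reports how the device is moving, with no
  // clicks or taps involved. Requires a browser that fires DeviceMotionEvent;
  // the 12 m/s² threshold is an arbitrary example, not a meaningful constant.
  window.addEventListener('devicemotion', function (event) {
    var acc = event.accelerationIncludingGravity;
    if (!acc) { return; }

    var magnitude = Math.sqrt(acc.x * acc.x + acc.y * acc.y + acc.z * acc.z);

    // Gravity alone is roughly 9.8 m/s², so a spike above 12 suggests the
    // device was shaken or bumped: the kind of passive signal discussed above.
    if (magnitude > 12) {
      console.log('Movement detected:', magnitude.toFixed(1), 'm/s²');
    }
  });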

When I say web, I’m not 100% sure what I mean, but I know it goes far beyond the browser.

In the last 12 months or so, I've had a number of quite specific online conversations about the performance of web technologies. Indeed, some of these conversations go back quite a bit before that.

Most recently, I critiqued a spate of articles arguing that localStorage suffers from performance issues (well, might in theory, though evidence suggests otherwise).

Nine months or so ago, for reasons best known to myself, I posted a point-by-point critique of one of John Gruber's typically fact-free pieces asserting that native apps are inherently better than apps developed with web technologies. At the heart of these arguments is the issue of performance.

Developers care a lot about performance. The mark of a quality application is responsiveness. Not in the sense we've come to use that word recently, but in the sense of minimal perceived lag between a user's action and its outcome.

Typically, these conversations are about performance in the absolute—for example, it's frequently asserted that any "native" app will always be faster than a web app, because native apps are compiled, and web apps use interpreted JavaScript.

But this is essentially a meaningless observation, because performance is just one small, although of course very important, piece of the puzzle. Lower level, more highly performant languages may well impact time to market, cost of development, platform reach, and more.

Practical developers typically care not about overall performance, but about the bottlenecks where performance becomes an issue, often because it impacts the user experience. Developers optimise for those bottlenecks, and it's only when, after all optimisation, performance remains below a certain threshold that we have a real issue.

In essence, at present most conversations about performance take place at an ideological level. Where we need them to take place is at the practical level.

When someone observes that something lacks performance, is "slow", or however this is characterised, hand-waving like "compiled is always faster than interpreted" or "synchronous is always slower than asynchronous" misses the point. What we need to do is:

  1. Identify the specific, measurable performance issue
  2. Benchmark this, so as to meaningfully compare it with alternatives, and also to derive a measure of what is adequate (in performance, as elsewhere, the perfect is often the enemy of the good)
  3. Look for optimisations which can help provide adequate solutions in the given circumstances

We have a genuine issue when we can't do the third step.
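To make the second step concrete, here's a minimal benchmarking sketch for the localStorage case above. The iteration count and per-read budget are arbitrary assumptions for illustration, and a real benchmark would also need to worry about warm-up and caching.

  // Rough sketch: how expensive is a synchronous localStorage read, measured
  // rather than asserted? The 10,000 iterations and 0.1ms-per-read budget are
  // arbitrary illustrative numbers, not recommended thresholds.
  var ITERATIONS = 10000;
  localStorage.setItem('profile', JSON.stringify({ theme: 'dark', lang: 'en' }));

  var start = performance.now();
  for (var i = 0; i < ITERATIONS; i++) {
    JSON.parse(localStorage.getItem('profile'));
  }
  var elapsed = performance.now() - start;
  var perRead = elapsed / ITERATIONS;

  console.log(ITERATIONS + ' reads in ' + elapsed.toFixed(1) + 'ms (' +
              perRead.toFixed(4) + 'ms per read)');

  // Only if the measured cost blows a meaningful budget do we have a real
  // problem to optimise, rather than a theoretical one.
  if (perRead > 0.1) {
    console.warn('localStorage reads are a measurable bottleneck here');
  }

Numbers like these, gathered in the browsers and on the devices that matter to your users, are what move the conversation from ideology to engineering.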

Too often we fall into black and white thinking, which is anathema to engineering, the art of making stuff work. In engineering being right isn't as important as solving the problem. Sure, rhetorical discussions may be engaging, but they don't really solve much.

When it comes to development, we have so much to learn. Glibness won't help us do that.

That synching feeling.

How many screens do you routinely use to access email, messages, web content? Two? Three? More?

On what devices do you store your music, photos, movies?

On what devices do you view movies or photos? Listen to music?

As users (and most likely as developers and designers too) we are used to thinking of movies, photos, and so on as discrete things, located in a given place. 

This in no small part comes from the pre-networked days of computing, where we associate a document, an image, a song, and so on with an individual file on our computer's hard disk.

But even that analogy is inaccurate. A file on a disk is not a single thing. It's not even a contiguous block of data on a hard drive. Rather, it's a whole lot of little fragments, scattered all over the drive, kept track of by the filesystem.

For us as humans, knowing this, let alone keeping track of the implementation details, is a waste of time (even most developers can safely think of a file as a discrete object).

So, the notion of a file as a thing, located in a given place, distinct from the identical information located elsewhere, even on the same drive, is a useful metaphor. But a metaphor none the less.

And while metaphors are useful, they are also dangerous.

Because we are burdened by this metaphor, when it comes to managing our information across multiple devices, we imagine that this information is on each of these devices. Which it may be. But that's purely an implementation detail.

When I play a song on my laptop, does it really matter whether it's really stored on a server at Apple, or Google, or Amazon? Or on a NAS on my local network? Or on my phone?

When I open a file to be edited, do I care where it is located? 

Those are implementation details, for developers to worry about.

Yet our mental models of what is happening when we play a song, or edit a document, or watch a YouTube video on Apple TV are encumbered, burdened with the metaphor of a file as a discrete thing, located somewhere specific.

Hence our obsession with "cloud computing" (as if we ever cared about "magnetic computing"). And our obsession with synching. Something which should simply be an implementation detail for developers to concern themselves with, not something we as users should have to worry about, or even know about.

Which is all a roundabout way of saying that our mental models of computing are still essentially rooted in the 1940s and 1950s. 

The device I hold in my hand, or which sits on my lap, or on my desk, is still a store of data, connected to a whole lot of other such devices, some big, some small.

But that metaphor has outlived its usefulness. 

Devices are portals to information, and it really shouldn't matter where that information "really" is. Whether it's cached locally to my device, or always refreshed from a server, shouldn't matter to our users, or to us as users.

When I start editing a document, then open that document in another device, I should be seeing the same document. When I start watching a movie on AppleTV, then take my phone to bed to finish watching the movie, it should be the same movie.

Gmail gets closest to this model, and indeed does it quite well. But I think it is a mental model of computing that we should strive for as we design and develop for any connected devices.

Commodification

Remember not too long ago, when people cared a lot about what was under the hood of their computer? The manufacturer (Intel versus AMD versus Motorola), the architecture (RISC versus CISC), the size of the L1 cache, the chip's clock speed, the bus speed, and on and on and on?

Sure, a lot of this was folks with too much time on their hands, but it was also the technology press. "Intel Inside" was something folks cared a lot about. Chips had brand names, and billion dollar marketing budgets (Intel, after all, sold not directly to the end user but to the companies that built the systems, yet still spent hundreds of millions branding and marketing Celerons and Pentiums, and Core Duos and so on).

But a funny thing happened on the way to the future. People simply stopped caring. Few people know, or care, about the hardware, chips, or memory speed of their tablet or phone (or even their laptop). People certainly care about the results of the hardware, particularly battery life and perceived performance, but not the hardware itself, or its intrinsic characteristics.

Hardware has been almost entirely commodified.

A particularly strong indication of this is the ARM chip. Unlike most widely used chip architectures of the past (exemplified by Intel's x86 instruction set), ARM chips are licensed and manufactured by a wide array of companies, rather than purchased wholesale from a single manufacturer.

Well into 2005, when Mac OS computers were all powered by PowerPC chips, debates about the real-world benefits of RISC versus CISC architectures (did you have to look the acronyms up?), and about measuring raw CPU speed in clock frequency versus measures such as instructions per second, were "serious" topics of conversation in the tech press, and among technologists with, as I hinted, a little too much time on their hands.

If, as little as half a decade ago, you had suggested the almost complete commodification of the hardware layer of computing, I doubt you would have got too many takers.

I believe this process of commodification is a long-term trend, one that didn't start with PC hardware, and won't stop there.

In the 1990s, the Internet commodified networking, once the domain of huge companies like Novell. Once upon a time, companies paid hundreds of dollars per seat to license networking technologies for their PCs. No longer.

I think the commodification of the operating system layer is already well under way (during a 3 or 4 year period which has seen an explosion in new operating systems like iOS, Android, webOS, Bada, as well as the major upgrade of Windows Phone 7, this might seem like a ludicrous thing to say, so first let me outline what I mean by commodification).

With hardware, end user decisions were once often made based on the characteristics of CPUs, busses, motherboards, and the like. This is essentially no longer true. Hardware performance matters, but hardware characteristics simply don't (yes, companies do on occasion try to market their Android phones based on their clock speed, but effectively no one cares). When users no longer care about the characteristics of a layer of a system, that layer has been commodified.

So, what would it mean for the OS layer to be commodified? In its most extreme form, end users would cease to care about the OS as part of their platform choice, which right now would be a ludicrous assertion. At least among affluent people in the "developed" world, a very significant percentage of users choose iPhones in no small part because of the unique user experience of the OS (there are of course other factors at play, which we'll return to shortly).

But let's turn to the Android platform, the other significant smartphone platform (again, at least in the developed world).

Unlike iOS, Android is a far more fragmented ecosystem, even before we get to the array of OS versions. The Android user experience, unlike the rigidly controlled iOS user experience, is endlessly customized by device manufacturers and carriers. This is the first step toward the commodification of the Android platform, which is ironic, because these customizations exist largely because of the commodification of the hardware layer.

Unlike iOS, Android form factors, screen sizes and resolutions vary significantly, from 128px x 128px watch-like devices to Android big-screen TVs.

Amazon's recently launched Kindle Fire, based on Android but with an entirely reworked user experience, effectively heralds the commodification of the Android OS. A vanishingly small number of people will purchase a Kindle Fire because it is an Android-based device (and few if any will decide against one they would otherwise have bought because it is an Android device).

Like power and plumbing, Android (and in time all operating systems) will simply become a utility - vital, but largely unnoticed (until, as with water and electricity, those times when it's suddenly not available).

But I don't think the commodification of IT will end there. It will move further up the stack.

So, what sits above operating systems? Applications. Now, how in blue blazes, where the number of apps on a platform is a key measure of the health of that ecosystem, and where platform owners work incredibly hard to entice developers to their platforms, could we possibly see the application layer commodified? What might this even mean?

Why would anyone actually build an application if it were just going to be a commodity?

Well, it's already happening. In many cases, an application is simply a means of providing part of a broader service (be that banking, or ordering a pizza for home delivery). These services, these businesses, existed before apps, before smart phones, before the web. Apps (like web sites, and people answering plain old telephones) are simply a small part of the existing business.

Twitter effectively commodified Twitter client apps: by releasing their own free native clients across a wide range of platforms, they severely undermined the market for Twitter client applications. It's Twitter the service that matters.

Then there are services like Netflix, where the application in many ways is the service (you order, pay for and consume the service inside the application - which of course glosses over the enormous effort behind the scenes that makes it all work, but from the user's perspective, their engagement with the service is their engagement with the app).

But in the case of banks, home delivery, and services like Netflix, the application is already a commodity (while it's important that the user experience doesn't suck, that's part of the broader service design; it's the overall service, not the application itself, which drives users to choose it).

Of course, this is not always true, particularly in the game category. People play Angry Birds because it is Angry Birds. But even here, games are becoming less and less a stand-alone product, and more part of a broader service. Angry Birds becomes Angry Birds Rio, tied into, and enabled by, not application sales but a broader entertainment experience.

The rise of game networks, with a continuous stream of new, engaging games is gaming as a service, which threatens to turn gaming itself into a commodity.

So, where is the business opportunity, when everything is turning into a utility?

Hardware, Operating Systems, applications, networks don't exist for their own sake. People use them to get things done. They use them for the outcome, not the process.

So, focus on the outcome, what the user wants to achieve - kill time, get food, communicate, learn something - and provide services which help them do that. You'll build sites and apps that run on browsers and operating systems, and ultimately hardware and networks to do so, but what you're really building is the service. Everything else is (and really always has been) but a means to that end.

In my early to mid teens, like many in the technology world in my experience, I voraciously read the classics of 20th century science fiction, in particular giants like Asimov and Clarke. It's decades since I've read them, but many of the stories and ideas stay with me.

One expression you've almost certainly read or heard is Clarke's third law (science fiction writers of that era rather liked laws; Asimov's laws of robotics have long since passed into mainstream consciousness).

"Any sufficiently advanced technology is indistinguishable from magic": http://en.wikipedia.org/wiki/Clarke's_three_laws.

When I was younger, I thought this was a very astute observation about, a description of, indeed a definition of, technology.

But many years later, it occurred to me it's more than that. It's a challenge to technologists. Make your technology magical. Make it disappear, make it just work.

A recent, in many ways mundane example comes to mind.

Some months ago we got a new iMac for home. For the first time in years, I set it up from scratch, rather than transferring an account over from an older computer.

I created an account, gave it a password, and when the setup process completed, it connected to the web. Over our secure wireless network.

Most people would likely give this no extra thought. But I was perplexed. How could it possibly connect to our password protected wireless network? Choosing a network and connecting to it was not part of the setup process.

Then I realized the password I gave the new account was the same as that for our wireless network. Whoever designed the setup process decided to try the user's password on the networks it could see. If none of these share the same password, there's no additional impact on the user; they simply need to connect to the network with a password, as they would otherwise. But. If (as I suspect is common) the user's password and network password are the same, the computer magically connects to the network, and so to the web.

We are definitely not talking quantum levitation here, but this is magical technology.

And, there are opportunities everywhere to weave a little magic in what you build.

Something simple that comes to mind is auto-filling form fields based on the user's location using the geolocation API, but there are ways to make almost any interaction less frustrating, more intuitive, more magical.
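For instance, a minimal sketch of that geolocation idea, assuming a hypothetical reverse-geocoding endpoint and #city/#country fields (swap in whatever service and markup you actually use):

  // Sketch: quietly pre-fill location fields if the user grants permission.
  // The reverse-geocode URL and the #city / #country inputs are hypothetical
  // placeholders, not a real service or required markup.
  if ('geolocation' in navigator) {
    navigator.geolocation.getCurrentPosition(function (position) {
      var lat = position.coords.latitude;
      var lon = position.coords.longitude;

      fetch('https://example.com/reverse-geocode?lat=' + lat + '&lon=' + lon)
        .then(function (response) { return response.json(); })
        .then(function (place) {
          document.querySelector('#city').value = place.city || '';
          document.querySelector('#country').value = place.country || '';
        })
        .catch(function () {
          // Lookup failed: the form simply stays empty, nothing is lost.
        });
    }, function () {
      // Permission denied: again, fall back to the ordinary empty form.
    });
  }

If the user declines, or the lookup fails, the form behaves exactly as it always did; the magic costs nothing when it doesn't work.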

Perhaps your user will stop and wonder "how did they do that?". But even better, they may not notice at all. A vanishing act, the greatest magic trick of them all.

Everyday, every hour, every minute, every second, awake or asleep, we humans generate information. Not just consciously, by speaking, writing, drawing, painting, but unconsciously too.

Every time we move.

Every time we breathe.

Every time our heart beats.

And with countless vital statistics: blood pressure, temperature, our immune response to myriad foreign agents, the lean muscle content of our bodies.

It's only relatively recently that we've been able to gather much of this information at all - via blood tests, sphygmomanometers and the like.

Increasingly, such information can be gathered silently, non-invasively, continuously, often with little more than the mobile phone we carry around with us.

I believe this creates an opportunity to revolutionize human health, and happiness, by helping us know ourselves better.

The challenge for designers and developers is how to take what will be an enormous amount of information, often traditionally meaningful only to experts (120 over 90, is this good or bad?), and help anyone know themselves better, and make better decisions about their choices.
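As a tiny sketch of what that translation layer might look like for the "120 over 90" example, here's one way to phrase a reading in plain language; the thresholds below are simplified illustrations, not medical guidance.

  // Illustrative only: turn a raw blood pressure reading into plain language.
  // The thresholds are simplified, rounded examples, not clinical advice.
  function describeBloodPressure(systolic, diastolic) {
    if (systolic < 120 && diastolic < 80) {
      return 'In the range generally considered normal';
    }
    if (systolic < 140 && diastolic < 90) {
      return 'A little high: worth keeping an eye on';
    }
    return 'High: worth discussing with a doctor';
  }

  console.log('120 over 90: ' + describeBloodPressure(120, 90));
  // -> "High: worth discussing with a doctor"

The hard design problem, of course, is doing this honestly and humanely across thousands of such signals, not just one.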

Human-computer interactions to date have for the most part, indeed overwhelmingly, been about active human interaction - people typing, clicking, tapping, and otherwise directly, consciously providing information.

I see the next revolution in HCI being where software and systems take the vast amounts of information we generate simply by being, and present that back to us, in ways that make us healthier, fitter, happier.

The rudimentary pieces are there, with services as simple as Foursquare or Gowalla, Nike Plus and RunKeeper, and with devices like Withings wireless scales and blood pressure monitors.

But it's what's next that interests me.