At some point in your career, this will become relevant:
At the very least:
- Don’t pull up the ladder.
- Acknowledge that a ladder existed.
- Don’t mock those on the ladder after you.
When you can:
- Point out ladders to others.
- Study some ladders.
- Make better and more ladders.
I can’t listen to music when I’m designing. It’s true! Lorde, Tom Waits, The Beatles, electronica, classical, even movie soundtracks — none of them work for me.
I just can’t think.
In our industry, it’s a fixture of people working — designing, programming, drawing, reading — that they’re plugged in, jamming along. But I relish my quiet. I don’t want to drown it, throttle it, or bludgeon it with music.
(I have, on occasion, stuck headphones over my ears without anything playing. If you work in a noisy environment, it’s a different calculus.)
And if you’re like me but haven’t wanted to admit it to those around you? Take heart, fellow whisperer. I’m with you. We will work in silence.
Assisted digital is the help government will give to ensure that we do not exclude any users of government services (whether citizens or businesses) who are not online. The Digital Landscape Research shows that 18% of UK adults are offline (defined as rarely or never being online). 82% of people are online (defined as regularly or occasionally using the internet) but some have lower digital skills and may need help, at least initially, to use digital services.
This kind of service is increasingly needed not only when people struggle to access the online services we build, but also if (well, when) the services themselves go down. Over in the U.S., when the health insurance exchanges launched this month, the websites faced a host of issues. The application counselors who staffed physical locations across the country suddenly became the only means for people to apply.
Who worked on designing the user experience for that?
This month, I couldn’t come up with anything worth listening to more than this fantastic set of tweets by Kathy Sierra:
In the well-intended, crucial quest for conference speaker diversity, are we also sending the (wrong) message that Presenting = Success?— Seriouspony (@seriouspony) September 21, 2013
Just that some of us (e.g. me) were drawn to programming *because* social networking, presentations, etc. were NOT career requirements.— Seriouspony (@seriouspony) September 21, 2013
There are so many paths available for making contributions to our domain. Presenting should be just one possibility, not a key milestone— Seriouspony (@seriouspony) September 21, 2013
We want more women in STEM; role models are crucial, but I'd have NOT chosen this path if I'd thought you need high visibility to "grow".— Seriouspony (@seriouspony) September 21, 2013
I would never, NEVER become a developer had I felt that one must eventually *present* or even network/interact/"hallway convos", etc.— Seriouspony (@seriouspony) September 21, 2013
I struggle today telling women in tech that High Visiblity is Super Awesome, though we need the women (like @Geek_Manager!) who risk it.— Seriouspony (@seriouspony) September 21, 2013
Also hilarious that the very people who've made the need to be IN PERSON LIVE *not necessary*, put so much emphasis on live/in-person events— Seriouspony (@seriouspony) September 21, 2013
Half-kidding: maybe we need good/willing presenters who are tasked w/ presenting the work of *others* that prefer behind-the-scenes.— Seriouspony (@seriouspony) September 21, 2013
Why can't we decouple the stuff-to-be-presented from the person-doing-the-presenting?— Seriouspony (@seriouspony) September 21, 2013
Or perhaps hybrid presentations where one person is LIVE on stage while the "co-presenter" is on video, non-live. (I've done this)— Seriouspony (@seriouspony) September 21, 2013
@seriouspony see Jony Ive, helps to have very public figures that don't present live— Gavin Carothers (@gcarothers) September 21, 2013
Last thing: we agree speaker diversity matters for both our domain/profession *and* attendees. So let's be more creative on how that happens— Seriouspony (@seriouspony) September 21, 2013
You’ve heard it all before — success comes after failure, so fail often. Fail quickly. Fail hard. Failure hurts for everyone, so do it enough times, it’s said, and the teeth will fall out of the bite.
It’s a common enough theme in people’s talks — by smart, wonderful, well-intentioned people — to talk about setbacks (you get to call them setbacks, once you succeed), and how they learned from or pushed past them, to reach where they stand today. And this is more-or-less fine, because they are speaking to their own experience, in the hope that something in there is illuminating and useful for those in the audience.
Where this kind of framing crosses over into shaky ground is when we begin asking for, demanding, requiring others to be burned by mistakes. To glorify the act of failing so much that it becomes a badge you wear, so you can say to those not doing as well, “Well, they just haven’t failed enough times yet.”
Our own tolerance for risk is a gift. Not everyone gets the same wiggle-room we have, or the same ability to bounce back from a bad turn of events. There are plenty of plausible reasons why someone is more risk-averse than you:
You: “So-and-so won’t quit this job they hate.” Perhaps they are living month-to-month, and can’t afford to take any chances.
You: “So-and-so doesn’t want to form a startup, despite having great ideas.” Perhaps they have to care for kids, or a sick pet, or elderly parents, that they have to plan their days around.
You: “So-and-so doesn’t know how to code.” Perhaps they don’t feel comfortable (say, if they’re a woman in a male-dominated office) and want to be around people who won’t make sexist characterizations.
You: “So-and-so doesn’t know how to use the internet. I’ve been using it to talk to other people online since AOL.” Perhaps they didn’t have the disposable income that you or your parents had. Maybe they spent their time after school helping out at their parents’ restaurant.
Since when did this become weakness? Maybe these people’s biggest problem is needing more courage or less self-doubt, or maybe they’re just playing for entirely different stakes. If you don’t know for sure, then this is a failure of empathy.
It’s a kind of mistake you should not get used to making.
Don’t Judge a Drink by Its Container
The newest form of media — the stream — already takes its name from a form of water. What if we extended this metaphor to other media? Lake Superior and Lake Michigan might be titanic newspapers, calm bodies whose surfaces are burned off by the sun every morning. Books could be liquids bound by bottles in all shapes and sizes. (People will say, don’t judge a drink by its container.) Imagine Captain Nemo, sealed off in his submarine library, floating through tons and tons of water.
Browsing online would be like cloud-watching — fleeting and boundless, perhaps, but also fantastically associative and unpredictable. And maybe Google is the ocean, into which all water eventually flows.
Marking an article for Read-It-Later would be like freezing ice cubes. How many of us would get angry at people rebottling their drinks in the containers of their convenience?
This exercise is a bit indulgent and silly, but gets interesting when you start working backwards. Come up with an element of water — now what’s the media form equivalent?
- bathtub, shower, sink
- swimming pools
- raindrops / rainstorms / hail
- ice sculpture
For the new graduates, but also for the rest of us:
Learning about imposter syndrome is comforting, at first.
Everyone seems to have it, and when anyone talks about feeling lost, the first reassurance seems to be: It’s ok! So do I!
If someone you admire, who is highly competent, can feel like a fraud or simply lucky, then you’re fine. Whew. You’re just the same as them! (And maybe you’re just as talented as they are, too.)
But I think it’s dangerous to tell that to people, despite how tempting it is to believe. Because the feelings that come from actual incompetence and the feelings that come from just being blind to your talent — well, it’s hard to tell them apart. From your limited perspective, it may be impossible. And if they’re actually incompetent, telling them that nothing’s wrong and to ignore their disquiet is the worst mistake to make.
Instead, perhaps consider:
Identifying blind spots. Is there anything obvious and foundational that is missing? Gaps in knowledge can be papered over or avoided for a while, but they should be identified first, before they can be slowly filled in.
Reminding them what they already know. It’s common, when feeling clueless, to imagine everything you know to be just a subset of what everyone else knows (or should know). But that’s not how knowledge works: it’s not one large pool, but many scattered puddles of varying sizes. The more related two bodies of knowledge are, the more adjacent the puddles will be. It may be valuable to bridge puddles together, to let them mix and feed into one another. And if a blind spot has been identified, reaching it by connecting what you already know will be easier and more interesting than starting from scratch.
Seeing if there’s a mismatch between taste and skill. This Ira Glass quote is well-known: but it cuts both ways. Just as your skill may not yet meet your high standards, you may also be quite skilled but not have developed taste — not having a strong enough sense of what’s great and what can be dropped. If one far exceeds the other, it’s time to work on bolstering what’s fallen short.
Guiding them towards doing the above by themselves. This is the hardest one. But they’re not always going to have someone else they trust who knows them better than they know themselves. It’ll require being honest and vulnerable, and will feel like digging around in an open wound. It’s not pleasant, but it’s necessary.
Form-making gets a lot of attention today.
The new forms are what people talk about. They win awards, clients, the praise of your peers, and money. They start to get reused, adapted, and become a shorthand for kinds of storytelling. Our collective attention privileges the thing.
But it’s worth remembering that they’re the substrate of a process. What you see rests on experiments with framing word and image in certain ways, dividing and managing readers’ attention and rhythm and flow, and a whole mess of technological superglue that bonds them together.
More often than not, it’s the form that gets copied, not the process that it came from. Maybe it’s because it’s easier to copy the thing. But to mimic something without understanding why it works is to become a cargo cult, unlikely to reap the benefits you’re hoping for.
The thing doesn’t matter. It — along with the assumptions, gambles, and affordances inherent in it — is simply a stake in the ground.
This worked here.
It allows the adjacent possible, the next set of forms, to be uncovered.
Maybe it’s also because the thinking behind form-making is hard to decipher, and that we’re rarely comfortable with talking about this stuff. Not in the open, anyway, and not nearly enough.
Apps are greedy. They demand to be constantly played with, refreshed, updated — and buzz at you with glaring red eyes when ignored.
Just as a thought experiment: what would an app without any interactions be like? Something you put on the wall, or on your desk. You glance at it a couple times a day. Its face isn’t lit up, and it doesn’t flash with bright colors and sound.
What if an app acted more like a clock?
I always keep two books on my nightstand. Right now, I’m trundling through Anna Karenina (my first Tolstoy!) and rereading a 2011 issue of Lapham’s Quarterly titled The Future. It’s a way to make accidental juxtapositions and see connections that I wouldn’t have come up with on my own.
Timo Arnall recently wrote about the problems with ‘invisible design’. It not only questions the myth of the intuitive, but also argues eloquently for the legible. (It doesn’t hurt that it draws upon lots of prior design literature, which we could all stand to do more of.) The piece got shared widely online and was met with surprise and controversy.
Now, here’s some juxtaposition (emphasis mine):
It turns out that changing behaviour is a way to subsequently change attitudes; this is entirely counter the thinking behind many smart systems, which are predicated on feedback loops delivering information to people, whose attitudes then change, and who then choose to change their behaviour accordingly. Instead, behaviour change happens through changing behaviour, and then attitudes.
It is not enough to simply “make the invisible, visible”, to use the already well-worn phrase in urban informatics. But change might happen through creating convenient, accessible ways to try something different, and then multiplying that through social proof and network effects, reinforcing through feedback. (This means all those smart meters are a complete waste of time and money, and will eventually have to be uninstalled.)
That was from Dan Hill’s piece about ‘smart cities’ and active citizenship. It takes Timo’s arguments for legibility and extrapolates them to more smartness—but guess what? Dan posted it before Timo’s piece. I was lucky to make the connection between them only because I’d read Dan’s entry a week prior.
This particular kind of serendipity—remembering and seeing connections between unrelated works—ignores many distinctions. It doesn’t matter whether something is new or old, whether it was published in the New Yorker or on a blog, whether it was written by a designer or scientist or urban planner. Your act of reading—jumping from point A to point B—creates a wormhole between the two. It lets you time travel.
Sometimes you have to step away from your work to see what’s not there.
This video is most-probably-definitely-maybe staged, but I don’t love it any less for that. It takes an everyday situation and flips it into absurdity. It’s hilarious and a little painful to watch, as it builds and builds up to…well, keep watching to the end.
There’s so much there you can laugh at. You don’t need to have lived in Italy, you don’t need to drive or park — I showed this to a 7-year-old and she laughed as hard as I did. I wish more of our work could be like this. To have low barriers. To be liked by people of all kinds. To have a bit more range.
Maybe I’m noticing it more with the end-of-the-year roundups, but time is such a slippery thing on the web. In fact, it’s often little more than a timestamp.
Stacking a list of things by reverse chronology is easy for computers, but my squishy human brain misses having more discrete units of time. Printed publications can be categorized by how they bracket time — dailies, weeklies, monthlies, quarterlies, annuals — and are as much tied to their pace of production as to their worldview of what the most interesting span of time is.
We haven’t yet formed good answers to this online. Streams reflect our ability to instantly put things out there, but what is the right amount of time to talk about an idea? To deep-dive into a subject? To have an argument? To celebrate, or to mourn?
What unit of time will distill this piece of information to its most potent?