The company called [a] special town hall after head of policy Joel Kaplan caused an internal uproar for appearing at the congressional hearing for Judge Brett Kavanaugh. A young female employee was among those who got up to speak, addressing her comments directly to COO Sheryl Sandberg.
“I was reticent to speak, Sheryl, because the pressure for us to act as though everything is fine and that we love working here is so great that it hurts,” she said, according to multiple former Facebook employees who witnessed the event.
“There shouldn’t be this pressure to pretend to love something when I don’t feel this way,” said the employee, setting off a wave of applause from her colleagues at the emotional town hall in Menlo Park, California.
The episode speaks to an atmosphere at Facebook in which employees feel pressure to place the company above all else in their lives, fall in line with their manager’s orders and force cordiality with their colleagues so they can advance. Several former employees likened the culture to a “cult.”
What does it mean for a workplace to be a fucking mess?
Two CEOs once came to my previous consultancy at around the same time with the same problem: their workplace cultures were, in the words of one of them, “like Game of Thrones”, and this was compromising their ability to actually deliver.
The contexts were strikingly different: one was a tech startup trying to save the environment, the other a religious charity tackling social inequality. But the similarities were just as radical: team members would prove their worthiness to change the world at the expense of their colleagues, and in both organisations this zeal was amplified by fealty to a charismatic paternal figure. For the tech startup, it was the CEO; for the charity, it was literally God.
We might roll our eyes from a position of enlightened HR superiority, but after reading the above article about Facebook, I’m beginning to think that I’d prefer to work in a bitterly divided organisation that was tripping over itself in the service of a misplaced loyalty to a higher mission — even when reinforced by intense daddy issues — than one with such a hollow consensus commitment to… nothing besides the mindless amplification of a company’s positivity about itself. And isn’t that what Facebook actually is? A company that amplifies (our) collective feelings for nothing other than the surplus value it can extract from them?
It’s easy to point to the obvious aggression of our tech startup and charity examples as “workplace toxicity”, and these are good problems to address through the lens of org design, but I’m glad that people are beginning to frame the tyranny of the uncritical positivity at places like Facebook as a kind of toxicity, too. It makes expressions like “sheep-like” sound deeply unfair to sheep.
And if we’re going to do the hipster thing of enquiring about the ethical production and provenance of our food, as so many of us do these days, we should be doing more than focusing on various individual Facebook privacy scandals, as important as they are, and start enquiring about how each lapse is related to the baseline reality of how our digital lives get produced.
Is the digital substrate of our lives created by battery-hen engineers and designers in cheerful gilded cages? If they had the freedom to be at each other’s throats instead of having to use friendship as a currency, perhaps we wouldn’t be in such a social media quagmire.
As a strategic design consultant I spend a large amount of my time thinking about the future. My work ranges everywhere from anticipating how social services will need to be deployed to an ageing population, to identifying the kinds of supple skills non-profits need to foster right now in order to be future-ready. How are we going to be, as a society?
It’s always been this way with me: as a child, my bookshelf was full of wide-eyed educational extrapolations into the distant 21st Century, epic science fiction novels, and tomes about artificial intelligence. Good times!
Indeed, I often look to the past when I think about the very idea of the future, not just so we can avoid repeating “the mistakes of history” (as important as that might be), but because as designers trying to make the world a better place, we really should honour the creative friction that happens when the weird fragments of the past we continue to live with rub against the potentials of the present moment. (For a future-oriented person, I do an amusing amount of hoarding! In my view, forgetting to deal with legacy systems, even if “dealing with them” involves actively destroying them, is tantamount to vapourware dreaming.)
So it’s not any one future as an end that I’m looking forward to, but rather a critical orientation towards futurity that we as a society will need in order to prosper alongside the other things with which we share this planet. You could say that it’s the most crucial line of inquiry of all: as a society, what’s the difference between what’s currently deemed important and what should be important? And how do we help bring the required change into being? Big questions. Political questions. Questions of vision.
What metaphors should we use as guides to navigate such questions? What’s our style of future thinking?
I write this on my birthday, which in 2017 also happens to fall on the day of the Chinese Moon Festival. To mark this harvest festival, we Chinese traditionally eat mooncakes: delicious packages of lotus seed paste wrapped in crumbly pastry. And for me, mooncakes are a great way to think about our orientation to futurity, especially if you contrast them with their tackier cousin in the “Chinese food” universe: the fortune cookie. Fortune cookies are those crispy things that contain cheesy aphorisms about your future on a slip of paper. You crack open the cookie and read your fortune in a terrible suburban Chinese restaurant.
I’m sorry, but fortune cookies are bullshit. I’d argue that if you want to prepare for the future, a mooncake is a much more useful model than a fortune cookie.
Why is this so? Firstly, fortune cookies aren’t even Chinese, and are probably a relatively recent Japanese invention that only got retconned into Chinese cultural history during World War II. But more importantly, they’re kind of evil.
In this week’s episode of the excellent Star Trek Discovery, we were introduced to Captain Lorca, the first captain in the main cast of a Star Trek TV show that you could imagine committing war crimes. In his first scene, the secretive Lorca offers our protagonist a bowl of fortune cookies, telling her that his family was once in the fortune cookie business. Like his ancestors, Captain Lorca is still in the business of the future: the Federation is now waging a war against the Klingon Empire, and he wants to be one step ahead of his enemies, to the point of co-opting dangerous experimental research on interdimensional travel for military purposes. Essentially, he has his own Manhattan Project on the boil.
It’s fitting that Lorca is obsessed with fortune cookies. His orientation to the future is all about sublimating his need for military supremacy into predictive certainty. The research he’s co-opted is fantastically creative, but Lorca has subordinated it to the logic of monarchs and generals: he seeks a shortcut to the future in order to stay one step ahead of the Klingons, and develops it in secret like an elite 23rd Century Bond villain. For Lorca, this is who the future belongs to. Indeed, he says (contrary to the need for ethical standards and in favour of the ends justifying the means) that “context is for kings”.
What, then, of mooncakes? Fortune cookies are actually just a pale contemporary echo of the more powerful story of mooncakes, and the role they apparently played in the Chinese overthrow of the Yuan Dynasty in the 14th Century.
At that time, China had been occupied by Mongolian forces for a hundred years, and as they planned their revolution, Chinese insurgents used mooncakes as vehicles to collectively coordinate the uprising against their occupiers. Fortune cookies contain bullshit messages that purport to predict the future, whereas 14th Century mooncakes were designed to carry messages that were a collective call to action. Rather than turning to prediction in order to stay on top, the population worked together to create a future in which they were free.
In some accounts, a slip of paper was hidden in each cake, stating the time of the uprising as a kind of open secret amongst the entire Chinese population: “Revolt on the 15th day of the eighth lunar month”. In other versions of the legend, this message was encrypted into the patterns on the top of the cake, and could be decrypted by slicing the cake into quarters and rearranging the pieces. So cool.
But regardless of exactly how the system worked, I find the contrast in disposition between fortune cookies and mooncakes useful in how we can think about the future.
When we think about the thorny problems that organisations and networks of people and other beings are going to face in the coming years, like coping with the now-inevitable consequences of climate change, or finding new systems to ensure food security, will we hanker after the hubris of kings, for whom the future is the secret to an endless reign, or can we instead see the future as an open secret, a common conspiracy of hope, that’s less about making grand predictions and more about coordinating our current creativity as an act of grassroots rebellion?
If you’re part of an organisation that wants to be future-ready, are you going to plot like a Bond villain or king who wants to live forever? Or are you going to work with your communities to co-design a surprising future?
On this birthday, I feel like a mooncake future. 😀
(Unfortunately, the mooncake emoji does not yet exist in the wild, and is scheduled for release in 2018.)
When I’m doing deep research as part of the design process, I’m always tempted to try to create the most exhaustive model of the situation in which I’m immersed. I want to see the whole system, down to the details. What makes it tick? I start to get drawn into an obsessive spiral of curiosity. At a certain point I have to remind myself to stop.
I once designed an online platform that delivered mental health advice to very young parents who were vulnerable to stress, depression and other issues. I ran workshops that involved these parents in an exciting co-design process, giving them real agency in what we made. I held their babies and even fed them while facilitating. It was awesome.
So far, so good. But then we found ourselves creating detailed models of what it was like to be pregnant, give birth, etc. We created tables that included ludicrous detail about potential incidents in each trimester. “Week 12: nuchal translucency scan.”
Um. No. We stopped, and got the project back on track.
As a designer, I exist to solve problems or invent new ways to address opportunities, so I have a natural urge to make a map of the situation I’m addressing. This is obviously a good thing. And within those landscapes we designers map, I’m also the first person to champion empathy — putting myself in the shoes of the people who live these situations — as a hugely important way to ensure that I’m designing things that fit their context.
Meanwhile, as my remit has expanded over the years from brand and digital design to fields like service design and the realm of how organisations work, it becomes even more important to understand the breadth and depth of complex ecosystems and the motivations of the actors within them.
But how much “research” is “too much research”? And what is “research”? Collecting facts, feeling what our users feel and creating detailed models are great, but they’re only part of research. Letting these things stand for “research” can be incredibly dangerous. Remember: testing is research. Experimentation is research.
One morning a few weeks ago, I had a long, hot shower. I watched my bathroom slowly fill with steam. Drawing parallels with my design work, I started wondering about how I could “truly understand” the steam. I could maybe build a complex model of the steam’s turbulence using dynamical systems theory. I could try and predict the position of each droplet of steam in the room. Obviously, that approach is valid for certain situations. But what if the true design task in this situation was to see what would happen to the steam if I opened the window?
Since that morning, this is the question I constantly ask myself and my fellow designers:
“Do we need to ‘truly understand’ this thing, or do we need to open the window and see what happens?”
I love making models and channelling the emotions of others. But sometimes you have to just open the window.
One of the things I’ve learned from my recent Pinterest obsession is that, just as with their capacity to create party invitations, everyday people are truly excellent at naming the buckets in which they sort their stuff.
The names people give their Pinterest boards are often really elegant and appropriate. A couple of patterns I’ve seen:
Use an imperative or exhortation. So rather than “Casual Fashion”, people use things like “Dress Down!”. Instead of “Winter”, there’s “Keep Me Warm”.
Use a phrase pattern across your IA. Rather than “Yves Saint Laurent”, “Dior”, etc., people use “I ❤️ YSL”, “I ❤️ Dior”, etc.
We should never underestimate the power of lists and the artful variations they can suggest. Obviously, the most mundane labels are often the best, but sometimes it might make sense to think a bit laterally and take inspiration from these kinds of sources.
A designer from the Netherlands once gave a talk at our studio over lunch. Jeroen made beautiful objects in a past life as an industrial designer, but after encountering experience and service design, he gave up industrial design and ended up travelling the world, investigating how to apply user-centred design and social entrepreneurship to address poverty. I’m not particularly sympathetic to the idea that entrepreneurialism can solve structural economic inequality, but I welcomed Jeroen’s candour about the lessons he learned on his journey.
What did puzzle me, though, was his attitude to methods that predate our stereotypical user-centred design world of Post-it notes, lo-fi prototypes and co-design workshops. Bemoaning the state of design education, he declared with disgust that “in Amsterdam, there are students who still think you can design with mood boards!”
Okay. When did mood boards become beyond the pale? When did a fetish for sticky notes succeed in displacing aesthetics? This feels like user-centred design puritanism. And as the design director of a user-centred design studio, I find that mood boards are not only useful, but that mood itself is a key element that needs defending in the design process.
From method to heuristics to method
We didn’t always explicitly use user-centred design methods at our studio. Originally cast in a more classic visual communications mould, we art-directed material for progressive causes. We found that this wasn’t enough; lately, everyone’s understanding of media has shifted from a transmission-oriented model of communication to one that’s concerned with how complex ecosystems interact, which demands new conceptual tools and methods. But these new tools and methods were also very much in tune with our own take on our work: user-centred design fit our commitment to doing justice to the issues we cared about.
Suffice to say that our embrace of user-centred design a decade ago (yes, before it was cool, haha) was both a sea change and a kind of continuity. While our new media universe does bring certain issues into relief, thinking about stakeholders and their situations is not terribly new: just as bespoke tailors have always known how to ask their clients where and when they’ll be wearing certain garments before they’re made, graphic designers have always taken context into account. Across the existence of our discipline, this understanding of context slowly crystallised into the heuristic conventions of print design: this is how tables of contents work; that is best practice in typographic content hierarchy, etc.
Montage and argument
It’s true that many kinds of graphic design wisdom have ossified, or been taken as gospel rather than seen as the outcomes of real processes of understanding. But at their best, graphic design methods might surprise UCD practitioners.
In my design practice, mood boards aren’t a superficial or arbitrary grab bag of visuals that happen to appeal to me. If you’re doing that, you’re simply being a lazy designer. Neither are they necessarily the best visual matches for how I think my designs will ultimately turn out, as an early replacement for mockups. This limits your possibilities, and renders your process too literal.
Rather, mood boards are a way of assembling material in a montage to make an aesthetico-conceptual argument. How should things feel for our users, and why? What references allow us to think through a range of feelings, from the intention behind one’s graphical choices to actual examples thereof?
For example, if I’m designing a system that empowers people to more easily increase their apartment blocks’ energy efficiency, I might assemble a bunch of references: the playfully urban, graph-like geometry of Blue Note jazz album covers and Saul Bass graphics; DIY hardware manuals; and the stationery-fetishism of the “productivity hack” subculture.
The argument: when we get together to reduce our collective carbon footprint, we can see it as our improvisatory but ever-improving action around the rhythms of self-measurement, which is rewardingly practical and efficient. We can become our own productivity porn. This is what my mood board argues about what the experience of the entire project might be, and it also supplies a blend of aesthetics that we can put into action in the project’s actual art direction.
Through montage, mood boards are good at synthesising new combinations of meaning in suggestive ways. Other things — affinity mapping the outputs of contextual inquiries, creating personas, storyboarding archetypal use cases and mapping user flows — are good at finding and modelling users’ needs and behaviours, seeing the opportunities within those, and then bringing them to life in ways that can be prodded, tested and revised. Each has advantages and limitations.
The pervasiveness of mood
But mood isn’t just the domain of visual design. Now that we’re in an era of complex media ecologies rather than the centralised messages of broadcast models of communication, the way we shape the spaces in which we make meanings with each other has become paramount. Rather than shaping and colouring “the message”, we’re more invested as designers in making spaces that are conducive to various things happening.
Our speciality as a studio is designing to enable positive social and environmental change, and I’ve always felt that a good way to approach this is allowing people to get on the same wavelength in a mutual negotiation, rather than an evangelical model, in which we convert people to a cause. It’s a way to create resonance, rather than converts. It’s a way to make spaces that vibrate in the right way. For us, creating great experience architecture is a way to make spaces for people to get in tune with each other.
This isn’t something anyone should take lightly. As Wilson Miner put it in his justifiably influential talk at Build 2011, shaping the digital spaces in which people are going to spend a majority of their time is a great responsibility.
For me, the key word here is “feel”: “What do we want that environment to feel like?” Miner asks. “What do we want to feel like?”
Isn’t this one of the most important design questions of our time? And I think mood boards, amongst other things, continue to be very relevant to this.
In our world of “don’t make me think”-style usability, sometimes people like to pretend that everything can be reduced to arranging elements of information in the most logical and seamless manner so that everything we do as designers becomes invisible. That’s simply bullshit. I fart in your general direction, colourless usability rationalists. Sure, not all systems or places should be in your face, but all spaces should have a mood.
So as I continue to do user-centred design for social change, there’s a great coda for me that ties most of this together. Let’s return to resonance and getting in tune as a model for communicating change in the spaces we create. The philosopher Martin Heidegger had a really interesting take on mood; he used the German word “Stimmung” to talk about it. For Heidegger, mood was not simply something we experience internally, but something that happens between people and attaches itself to certain spaces. It’s great, then, that “Stimmung” also means “attunement” in German. Mood and attunement. Two sides of the same coin.
Over the last few years, I’ve noticed a romantic tide of content authenticity developing across web culture. You can find it in confessional commentaries that deal with our investments in life and creative work (Merlin Mann’s Cranking being a common touchstone), in podcasts that celebrate doing what you love, and on the stages of design conferences where charismatic storytelling has become paramount.
I’m generally a fan of this “get excited and make stuff” scene, but we need to approach its romanticism with a critical eye, especially when it tends to enshrine a particular model of storytelling, of creativity, and of “good content” — one that privileges confessional narratives, emotional catharsis and an authentic personal voice.
A common trait of this new web romanticism has been to dismiss the “listicle” (a list-based article, like “30 Ways Your Smartphone Isn’t As Cool As You Think”) as a form of inauthentic, SEO-driven linkbait which, along with paginated articles that chase page-impressions, tends to crowd out a more unitary reading experience. Lists are inauthentic. They have no voice. They lack emotional gravity. They have no personality. They’re glommed together. They have obvious seams. They don’t contribute to a narrative. They’re examples of bad writing.
A lot of this criticism can be true, but there’s an important slippage happening here. What this romanticism rallies around is not “good writing” as such, but something much more like verbal charisma, which is not specific to writing at all. In fact, it is the nonlinear list form itself that is more specific to writing. As Walter Ong reminds us in Orality and Literacy, writing and list making are technologically intertwined (and yes, writing itself is an “unnatural” technology):
Orality knows no lists or charts or figures. Goody (1977) has examined in detail the noetic significance of tables and lists, of which the calendar is one example. Writing makes such apparatus possible. Indeed, writing was in a sense invented largely to make something like lists: by far most of the earliest writing we know, that in the cuneiform script of the Sumerians beginning around 3500 BC, is account-keeping. (Ong 2013:93)
Not all lists are going to be interesting, but we need to be open to the possibility that you don’t have to write a confessional, cathartic inspiration piece to be considered compelling. In fact, the assumption that this should be the default setting for “successful content” on the web saddens me, and perhaps betrays the corrosive influence of TED culture, in which charisma and a certain type of narrative outweigh the hard work of opening critical questions and making vital connections. Connections that might be better made with the constellative properties of lists, perhaps.
I flew back to Sydney last night from an out-of-town client workshop. As our plane banked over Botany Bay, I noticed for the first time that while the waves on the water’s surface were clearly visible, they looked perfectly still. The ocean looked like a piece of textured glass, frozen in time. It took a moment for me to realise that from our height and orientation, I could see the waveforms themselves, but without the right combination of magnification and reference points (e.g. landmarks), I couldn’t see that those very obvious waveforms were propagating across the water’s surface, largely intact. So from a particular perspective, a very obvious kind of movement had been rendered invisible.
This got me thinking about how we do social research (whether it be “design research” or any other kind): without the right framing that emerges from a combination of control points and our sense of scale, we might totally miss a vital kind of activity. Something that we think is without movement might actually be very active. Maybe a quick “zoom calibration” (push in and pull out, just to see whether you’re at the right social magnification) might be all it takes, or angling your vision slightly to find the nearest landmark. Or maybe just wait until those things naturally happen — as my plane banked over Brighton Le Sands and I saw the sand of the beach, a control point was revealed: I could see the waves ripple and break as they approached the sand. So if you keep observing just a bit longer than you’d originally intended, you might experience something like an anamorphic moment, and have your whole system of mental coordinates rearranged.
What are the frozen waves in your research landscape?
While working on a client presentation over the weekend, a fellow designer and I angsted a bit over the relative merits of presenting work in progress to clients in a descriptive form that left our thought process in the open, versus showing a more streamlined, emblematic package. We didn’t have the luxury of deciding which to go with, but in any case, our angst reminded me of a great passage from Martin Nicolaus’ introduction to Karl Marx’s legendary Grundrisse (literally, “Outlines”), which was basically Marx’s rough draft of Capital:
The inner structure [of Capital] is identical in the main lines to the Grundrisse, except that in the Grundrisse the structure lies on the surface, like a scaffolding, while in Capital it is built in; and this inner structure is nothing other than the materialist dialectic method. In the Grundrisse the method is visible; in Capital it is deliberately, consciously hidden, for the sake of more graphic, concrete, vivid and therefore more materialist-dialectical presentation. (Source.)
It’s a cool observation that lets us honour both stages, and yet also get somewhere different. I think this transition — from obvious scaffolding to a more implicit structure via a “more graphic, concrete, vivid” presentation — also represents our awesome challenge as designers. To be true to each step in the design process, but end up with something that’s more palpable to people than a piece of “mental sausage” in a process diagram (no matter how sexy that might be at the time!).
In the Hegelian philosophy that Marx “inherited”, the term for such acts of transition is Aufheben, a German word which has no English equivalent, but that can simultaneously mean something like “abolish”, “preserve” and “transcend”. It’s easy to fold this kind of terminology into a predetermined idea of “conservative destiny” — Hegel himself did this, by all accounts, and my sympathies have usually been with philosophies that emphasise horizontal guerilla tactics instead of some grand, upward motion. But these days I’m getting a vibe from “Aufheben” that’s far more alive, less predictable and full of friction and transformation — faithful to our earlier steps, but not as some kind of veneration. Moving on, but not necessarily in an obvious direction. (This was no doubt Marx’s intention: to use Hegel to move on from Hegel’s own limits, rather than be properly “Hegelian”.) In terms of my own practice, this means having more arguments, more productive friction, and to avoid making the design process one long line of steps towards a predestined end point. Hebt auf, designers!
I saw this boarding pass in a recent episode of Fringe:
Even though the shot only lasted a second, I couldn’t help but notice such a lovely piece of design fiction, which was then doubly emphasised when the next person in line gave their much crappier-looking boarding pass to the homicidal-TSA-staffperson-of-the-week:
(Yes, that crappy boarding pass really is there to make a statement about how readable and elegant the previous one is.)
Designers who work in film and television have a great opportunity to create an alternate, redesigned universe in which things work slightly differently, and the fact that Fringe is set in a gaggle of literally alternate universes, each with telltale differences in their everyday environments, makes this all the more delicious. Perhaps it’s design/science fiction squared.
However, it didn’t take me long to remember where I’d seen a boarding pass like this before:
Yep, it’s former Apple designer Louie Mantia’s contribution to Tyler Thompson’s excellent Boarding Pass/Fail exercise, which in turn is part of a recent trend in speculative, non-commissioned redesigns that can be found on the web. I’ll read this charitably as an homage, or a vision of a utopia where Louie Mantia really does design our boarding passes (which I much prefer to the current iTunes icon, which he also designed).
Of course, this isn’t the first time that Fringe has snuck in a boarding pass Easter egg. Back in Season One, Lost’s Oceanic Air made a sneaky appearance.
After Flight 815, I think Oceanic are due for a rebrand.
conspiracy theory (and its garish narrative manifestations) must be seen as a degraded attempt — through the figuration of advanced technology — to think the impossible totality of the world system. [My emphasis.]
That is, our culture currently lacks the clarity to map the arcane workings of the global economy, but some of our tall stories still feature some wayward, fetishistic glimmers of that impulse. This figure of degradation reappears throughout Jameson’s work, always referring to an echo of a lost or unattainable whole: our critical sense of history, or an image of Utopia, etc.
Web designers have our own take on degradation. Graceful degradation is a way to deal with a world where different web browsers support web standards and sexy new technologies to uh, varying degrees<cough />. We design for the most complete experience, and build pages in a way that might preserve an echo of that experience in older or less capable browsers. In those dodgy browsers, the page gracefully degrades, and we experience a still-worthwhile remnant of the lost whole — a bit like the way Jameson likes to see our radical impulses.
Meanwhile, a different design approach has emerged over the last few years, turning graceful degradation on its head: progressive enhancement is a way of designing outwards from the core content of the page. It keeps the design open to possibilities of sexiness in opportune contexts, rather than starting with a “whole” experience that must be compromised. While it might simply seem like another way to achieve graceful degradation’s exact goal from the opposite direction, this newer approach is qualitatively different: because progressive enhancement doesn’t presume a single, ideal state to fall back from, it deals much better with emerging landscapes and multiple contexts. For example, developing an integrated design that provides an equally “full” and contextually appropriate experience for both mobile and desktop browsers is easier with progressive enhancement.
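The difference in disposition can be sketched in code. Here’s a toy model (the names and structure are entirely hypothetical, not any real framework or API): progressive enhancement starts from a core experience that works everywhere, then layers on only what the current context supports, so there is no single ideal state to “fall back” from.

```javascript
// A toy sketch of progressive enhancement (hypothetical names, no real
// framework): a core experience that works everywhere, plus a list of
// enhancements that each declare the capability they need.

const coreExperience = {
  layout: "single-column",
  fonts: "system",
  interaction: "plain links",
};

// Each enhancement is additive: it names a required capability and
// describes what it layers onto the core.
const enhancements = [
  { needs: "flexbox", adds: { layout: "multi-column" } },
  { needs: "webfonts", adds: { fonts: "custom" } },
  { needs: "js", adds: { interaction: "inline editing" } },
];

function enhance(core, supportedCapabilities) {
  // The core is never compromised, only built upon: apply just the
  // enhancements this context actually supports.
  return enhancements
    .filter((e) => supportedCapabilities.includes(e.needs))
    .reduce((experience, e) => ({ ...experience, ...e.adds }), { ...core });
}

// An old or constrained browser still gets the complete core experience…
const basic = enhance(coreExperience, []);
// …while a capable browser gets every layer.
const full = enhance(coreExperience, ["flexbox", "webfonts", "js"]);
```

In real front-end work the same disposition shows up as feature detection — CSS `@supports` blocks, or checking that an API exists on `window` before wiring up an enhancement — rather than designing for the newest browser and patching holes afterwards.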
So, if our degraded attempts at Utopia remind me of design’s graceful degradation, design should return the favour: what might progressive enhancement suggest in the world of culture and politics? As a designer who hungers for progressive political change, this question intrigues me. At the very least: rather than groping for a Lost Symbol of freedom, with plenty of us being left with a “graceful”, less-than-ideal experience as a fallback position from a fetishised Utopia, progressive enhancement suggests instead that a well designed experience of freedom can be built outwards from a core structure of meaning, in multiple ways, and in uneven terrain.
We should be careful not to reduce progressive enhancement in the real world to something politically unambitious, like simply “working within the system”; in web design, this would’ve been like sticking with table layouts and font tags in your markup “because that’s what we have”. That’s how ridiculous the idea of social democratic politics seems to me: you can’t redesign the world with spacer GIFs. Web designers wouldn’t enjoy our remarkably coherent landscape of enhancement options unless standardistas had advocated a clear break with the status quo, and so it is with redesigning society. (And yet this break wasn’t a spectacular event, but rather a massive sea-change that occurred over several years of bitter struggle.)
Meanwhile, you can find graceful degradation’s ambition — assuming a maximum specification, and then making do in less than ideal circumstances — in the experience of Stalinism, and that really wasn’t so graceful, was it? In the absence of a worldwide socialist revolution in the wake of World War I, Stalin’s defensive pragmatism of “socialism in one country” was clearly the wrong kind of pragmatism. (It’s no accident that orthodox Trotskyists, who utterly opposed Stalinism, still defended the Soviet Union as a “deformed workers’ state”, i.e. a degradation of a canonical design.)
On the web, progressive enhancement suggests a different kind of pragmatism — one that avoids both the conservatism of continuing to use the corrupt institutions of embedded font tags, and the defensive contortions of trying to preserve a canonical design that was specified for only the most advanced browsers. By shrewdly taking different opportunities to enhance a core structure of freedom in different contexts, an ethic of progressive social enhancement could avoid both the increasing conservatism of social democracy on one hand, and the development of regimes that try and fail to defend a canonical idea of revolution that was only really aimed at the most industrially advanced countries.
Anyway, if you take anything from this, “you can’t redesign the world with spacer GIFs” is my favourite phrase from the above.