Deliberate Rest

A blog about getting more done by working less

Month: April 2011 (page 1 of 2)

Another reason I wish the BBC player worked in the US

Unfortunately I didn’t see “The Big Silence,” a documentary about five people on a week-long silent retreat, which aired on BBC Two last fall. Serena Davis’ piece about it has a nice description of the challenges of spending time in the company of others without communicating, and of the benefits:

Living in silence takes away the surface distractions of life so that, paradoxically, you become much more aware of what is going on. The primary focus of this awareness is internal, in terms of your state of mind, but you also start to take in your surroundings better. As they do for the volunteers in The Big Silence, walks in the countryside, for example, became an incredibly pleasurable experience for me on my retreats as, seemingly for the first time, I took in the detail of the wildlife around me. Normally, worrying about the present or the future means we are impervious to the present, right down to the trees and flowers around us, which on a sunny afternoon in the British countryside can bring a delight all of their own.

The BBC iPlayer doesn’t work in the U.S., but fortunately you can watch the three episodes on Top Documentary Films. (The popunders are not very contemplative.)

The documentary’s organizer, Father Christopher Jamison, was the center of an earlier documentary, The Monastery, about a month in Worth Abbey. His book Finding Sanctuary is really great (this excerpt about silence gives a good sense of his writing).

The iPhone tracking controversy, and why I gave up Foursquare and Last.fm

As an iPhone user, I’ve been interested in the iPhone tracking controversy, but I haven’t been that surprised that the phone was creating such a file, nor that the company has been churlish in its response. I have a lot of respect for Apple, and every computer I’ve bought myself has been a Mac, but I don’t think of myself as a big fan of the company: my feelings about it follow Lionel Mandrake’s feelings about the Japanese: “The thing is, they make such bloody good cameras.” (If you haven’t seen Dr. Strangelove, it’s time.) I know people who work there, and they’re great folks, but the organization itself seems to love the concept of users, but not necessarily the reality of actual customers.

Anyway, this post on Baud Attitude summed up my feelings about the tracking files:

My iPhone is tracking me (and it turns out I’m really boring)… all the data I have on me says that I spend a lot of time in Eugene and Portland and have gone to the coast once.

Seriously, I haven’t even been to freakin’ Seattle in the last 8 months, much less anywhere exotic.

I’m so depressed.

This is why I gave up Foursquare and Last.fm, and other services that automatically broadcast where you are, what you’re listening to, etc.: not because I was worried about Big Brother, but because I didn’t feel like other people would get anything out of such a fine-grained yet decontextualized view of my life. I’ll be imposing on friends to read book chapters before too long; I don’t want to waste their patience on reading that I’m at the gym, listening to the Moulin Rouge soundtrack. I can be pretty obsessive about tracking my attention, concentration level, what I eat and drink, hunger level, and other things when I feel the need, and even I couldn’t figure out what I would do with this information. Stash it in a file, maybe; but share it with everyone, no.

Stefan Sagmeister: “It’s much easier to return emails than to actually think about something new”

Artist Stefan Sagmeister is famous for taking a year-long sabbatical every seven years (what is he, an academic or something?). In this brief interview, he talks about that practice, as well as the idea that you have to know rules in order to break them (a concept that a professor who was educated by Jesuits first suggested to me):

For those too impatient to watch it, here’s the transcript:

It doesn’t really matter if you do a year every seven years as I do… or a day a week, or half an hour every day. I do think, though, that it’s very important that these times are scheduled. If I have somewhere in the back of my mind, “Oh, I would like to experiment,” then it’s not going to happen. And I think it’s not going to happen because it actually is difficult. It’s much easier to return emails than to actually think about something new. Much easier.

So if I don’t really have that completely scheduled, I’ll always return emails. Or I’ll pour coffee, or I’ll have to go to the loo, I’ll use every excuse to not have to think. It’s a strange tension I find in myself… that on one hand I’m complaining that I don’t have enough time to think and explore… but on the other side, when I do have the time, I squander it and just do nothing with it. So the scheduled part is really important.

Duke University researchers discover that Botox dulls empathy

A fascinating follow-on to Paula Niedenthal’s work on mirroring emotions: Duke University researchers have found that people who get Botox injections have a harder time reading other people’s emotions.

A few well-placed Botox injections can erase your hard-won character lines. But that may also make you less likely to pick up on other people’s emotions.

That’s because the botulinum toxin, which reduces wrinkles by temporarily paralyzing small muscles in the face, can make it hard to furrow the brow or make other expressions that convey emotion. And our own facial expressions, researchers now show, may be essential to recognizing the feelings of others.

This unexpected Botox effect is a fascinating window on how we understand what other people are feeling. A good part of that process requires unconscious mimicry of the other person’s facial expression….

“The tendency to mimic facial expressions is rapid, automatic and highly emotion-specific,” write David Neal and Tanya Chartrand in an intriguing paper just published online by Social Psychological and Personality Science.

Neal and Chartrand say the subtle contraction of our facial muscles when we mirror a friend’s happiness or woe generates a feedback signal to our brains. Those incoming signals from facial nerves help the brain interpret how the other person is feeling….

In one experiment, the researchers recruited 31 women who were already having either Botox treatments or injections of a dermal filler, which plumps up wrinkles but doesn’t paralyze muscles. After the treatment, the women were shown a series of images that showed people’s eyes embodying different emotional states. Study subjects were asked to judge, as quickly as possible, what emotion the eyes conveyed.

The Botox patients scored significantly worse than those who got a dermal filler. That meant the Botox patients’ ability to make fast judgments about another person’s emotions was blunted. (The Botox didn’t eliminate their ability to judge emotion. They still were about 70 percent accurate.)…

The cognitive implications go well beyond Botox users. But the findings do make Neal and Chartrand wonder if prolonged use of Botox would hobble people’s ability to perceive others’ emotions and give others empathetic facial feedback.

“Mimicry promotes liking and emotional sharing,” the researchers say, “and may contribute to long-term relationship satisfaction.”

Ambient religious communication

Arizona State professor Pauline Hope Cheong has an interesting article about the use of Twitter and microblogging in “ambient religious communication”:

What would Jesus tweet? Historically, the quest for sacred connections has relied on the mediation of faith communication via technological implements, from the use of the drum to mediate the Divine, to the use of the mechanical clock by monks as reminders to observe the canonical hours of prayer (Mumford). Today, religious communication practices increasingly implicate Web 2.0, or interactive, user-generated content like blogs (Cheong, Halavis & Kwon), and microblogs like “tweets” of no more than 140 characters sent via Web-based applications like text messaging, instant messaging, e-mail, or on the Web. According to the Pew Internet and American Life Project’s latest report in October 2009, 19% of online adults said that they used a microblogging service to send messages from a computer or mobile device to family and friends who have signed up to receive them (Fox, Zickuhr & Smith). The ascendency of microblogging leads to interesting questions of how new media use alters spatio-temporal dynamics in peoples’ everyday consciousness, including ways in which tweeting facilitates ambient religious interactions.

The notion of ambient strikes a particularly resonant chord for religious communication: many faith traditions advocate the practice of sacred mindfulness, and a consistent piety in light of holy devotion to an omnipresent and omniscient Divine being. This paper examines how faith believers appropriate the emergent microblogging practices to create an encompassing cultural surround to include microblogging rituals which promote regular, heightened prayer awareness. Faith tweets help constitute epiphany and a persistent sense of sacred connected presence, which in turn rouses an identification of a higher moral purpose and solidarity with other local and global believers. Amidst ongoing tensions about microblogging, religious organisations and their leadership have also begun to incorporate Twitter into their communication practices and outreach, to encourage the extension of presence beyond the church walls.

Also interesting is this piece from the United Methodist Church’s Interpreter Online on Twitter use by congregants during services.

Monica Smith enters the “[insert ability here] is what makes us human” sweepstakes

In his book, Daniel Gilbert quipped that the sentence “[WHATEVER I STUDY] is what makes us human” is one that scientists can’t avoid; they know they should, but they just can’t help themselves. I just came across another entry in that sweepstakes: UCLA professor Monica Smith’s 2010 book. From the UCLA press room:

The abundance of contemporary distractions offers many reasons to curse multitasking.

But… Monica L. Smith maintains that human beings should appreciate their ability to sequence many activities and to remember to return to a task once it has been interrupted, possibly even with new ideas on how to improve the activity…. In fact, Smith, an associate professor of anthropology, contends that multitasking is the ability that separates human beings from animals: “Multitasking is what makes us human….

“People seem to think that the past was this simpler time with fewer interruptions because so many of the modern gadgets we have today had yet to be invented…. But we’ve been multitasking from the beginning. Every object that we have from the past is the result of a dynamic process where people were being interrupted all the time.”…

Smith finds support for her theory by combining research from two fields. From archaeology, she takes the calculations extracted from archaeological digs to determine the number of people who occupied prehistoric sites and the kinds of human activities that were undertaken there — such as making tools, pots and beads. From anthropological studies of traditional people today, she takes estimates of how long it takes to make similar objects using similar approaches.

“We can calculate how much prehistoric people needed to eat, how long it takes to do a particular kind of task, and any seasonal restrictions on different tasks,” Smith said. “We find that there’s no way that you could sit down and do any of these things from start to finish. Multitasking had to be involved.”

Multitasking also makes sense from a biological perspective, Smith argues, citing recent research by economists, folklorists, neurologists and archaeologists. Researchers have noted that the type of cognitive shortcuts involved in multitasking extends the number of activities humans can accomplish without having to tap higher-order cognitive abilities such as reasoning.

Again, this is an example of “multitasking” meaning something rather different when used by archaeologists than by, say, experimental psychologists or Nicholas Carr: the cases she’s talking about do involve switching attention between tasks, but at a temporal scale quite different from the one we deal with in digital environments.

Stone Age multitasking, and how modern “multitasking” is really switch-tasking

While I was in Cambridge, one of the things I got interested in was cognitive archaeology– a sub-branch of archaeology that uses physical and geographical artifacts to reconstruct the mental worlds of ancient peoples. One of the big figures in the field, Colin Renfrew, is a professor at Cambridge, and I was lucky enough to spend some time with him; when I visited Oxford I spent a little time with one of his former students, a brilliant guy named Lambros Malafouris. The field interests me for two reasons: first, it might be able to provide additional evidence to test arguments about the relationship between cognition and new technology; and second, it might offer some design clues for creating contemplative objects. It strikes me that there are certain pieces of material culture that tend to reappear in contemplative and meditative practices– bells, for example– and qualities that are valued in contemplative spaces, and I want to get a better sense of how old these might be.

So I was interested to see an article by Lyn Wadley and colleagues at the University of the Witwatersrand in South Africa, on multitasking and the making of stone tools:

Compound adhesives made from red ochre mixed with plant gum were used in the Middle Stone Age (MSA), South Africa. Replications reported here suggest that early artisans did not merely color their glues red; they deliberately effected physical transformations involving chemical changes from acidic to less acidic pH, dehydration of the adhesive near wood fires, and changes to mechanical workability and electrostatic forces. Some of the steps required for making compound adhesive seem impossible without multitasking and abstract thought. This ability suggests overlap between the cognitive abilities of modern people and people in the MSA. Our multidisciplinary analysis provides a new way to recognize complex cognition in the MSA without necessarily invoking the concept of symbolism….

People today have a capacity for novel, sustained multilevel operations; this ability may have arisen from neural connectivity in part of the prefrontal cortex (1). The capacity may be recognizable in some technologies, and we use compound adhesive manufacture as our example.


“All self-help is Buddhism with a service mark”

So said Merlin Mann in a very entertaining New York Magazine article on attention, distraction and overstimulation:

There’s no shell script, there’s no fancy pen, there’s no notebook or nap or Firefox extension or hack that’s gonna help you figure out why the fuck you’re here…. That’s on you. This makes me sound like one of those people who swindled the Beatles, but if you are having attention problems, the best way to deal with it is by admitting it and then saying, ‘From now on, I’m gonna be in the moment and more cognizant.’ I said not long ago, I think on Twitter—God, I quote myself a lot, what an asshole—that really all self-help is Buddhism with a service mark.

Where you allow your attention to go ultimately says more about you as a human being than anything that you put in your mission statement…. It’s an indisputable receipt for your existence. And if you allow that to be squandered by other people who are as bored as you are, it’s gonna say a lot about who you are as a person.

Great Clifford Nass interview on multitasking

This 2010 Frontline interview with Clifford Nass is a great explanation of the state of research on multitasking. The bottom line is, we’re terrible at it, we’re not really designed to do it, and people who think they can successfully multitask really can’t.

So what gets lost?

Some things that we know get lost are, first of all, anytime you switch from one task to another, there’s something called the “task switch cost,” which basically, imagine, is I’ve got to turn off this part of the brain and turn on this part of the brain. And it’s not free; it takes time. So one thing that you lose is time.

A second thing you lose is when you’re looking at unrelated things, our brains are built to relate things, so we have to work very, very hard when we go from one thing to another, going: “No, not the same! Not the same! Stop it! Stop it!”….

At the end of the day, it seems like it’s affecting things like ability to remember long term, ability to handle analytic reasoning, ability to switch properly, etc., if this stuff is, again, … trained rather than inborn. If it’s inborn, what we’re losing is the ability to do a lot of things that we’re doing. We’re doing things much, much poorer and less efficiently in time. So it’s actually costing us time….

You’re confident of that?

The demonstration that when you ask people to do two things at once they’re less efficient has been demonstrated over and over and over. No one talks about it — I don’t know why — but in fact there’s no contradictory evidence to this for about the last 15, 20 years. Everything [as] simple as the little feed at the bottom of a news show, the little text, studies have shown that that distracts people. They remember both less. Studies on asking people to read something and at the same time listen to something show those effects. So there’s really, in some sense, no surprise there. There’s denial, but there’s no surprise.

The surprise here is that what happens when you chronically multitask, you’re multitasking all the time, and then you don’t multitask, what we’re finding is people are not turning off the multitasking switch in their [brain] — we think there’s a switch in the brain; we don’t know for sure — that says: “Stop using the things I do with multitasking. Focus. Be organized. Don’t switch. Don’t waste energy switching.” And that doesn’t seem to be turned off in people who multitask all the time.

I think in a sense my contemplative computing work is about how you switch the multitasking part of your brain off, and start concentrating again.

Ironically I’ll be posting about effortful thinking research on Twitter and Facebook

Three researchers have published an article (a publicly available abstract is here) in Computers in Human Behavior titled “Less effortful thinking leads to more social networking? The associations between the use of social network sites and personality traits,” which examines the relationship between “need for cognition” and participation in social networking sites. “Need for cognition” is one of those wonderfully self-explanatory terms that social scientists occasionally come up with: first articulated in the 1950s, it refers to a person’s preference for cognitively challenging activities, and is sort of the nerd version of the need for speed. If you prefer to spend your time reading Physical Review Letters B, you have a high NFC; if you watch Jersey Shore with the sound off because it’s too tiring to follow the dialogue, you have low NFC. Or as Nick Carr explains,

NFC, as Professor Zhong explained in an email to me, “is a recognized indicator for deep or shallow thinking.” People who like to challenge their minds have high NFC, while those who avoid deep thinking have low NFC. Whereas, according to the authors, “high NFC individuals possess an intrinsic motivation to think, having a natural motivation to seek knowledge,” those with low NFC don’t like to grapple with complexity and tend to content themselves with superficial assessments, particularly when faced with difficult intellectual challenges.

The article’s conclusion:

As media multitasking is increasingly becoming part of the media routine in the lives of Internet users, the variable of media multitasking was incorporated into the analysis of SNS [social network sites] use, which, to our knowledge, is the first study that did this. For a better understanding of social networking, this research also investigated other online activities that may predict the time spent on SNS, which included total online time, online time for study/work and time in surfing with no specific purposes.

The key finding is that NFC [need for cognition] played an important role in SNS use. Specifically, high NFC individuals tended to use SNS less often than low NFC people, suggesting that effortful thinking may be associated with less social networking among young people. It is possible that those with a higher NFC are more likely to seek mental stimulation through other cognitively challenging tasks, such as doing puzzles and searching for product information. Those with a lower NFC may be more comfortable with the rich peripheral cues available on SNS and feel more at home with the communication tool.

High NFC individuals were also significantly less likely to add anyone to their SNS accounts than low NFC individuals. But there was no difference between them in terms of maintaining several SNS accounts or sharing information through these sites. There was little difference among low, medium and high NFC groups regarding media multitasking, which might suggest that media multitasking is pervasively performed among young adults. This may also add evidence for the observation that media multitasking has become such a critical part of the media routine in the lives of Internet users that it has little to do with one’s tendency of engaging in and enjoying effortful cognitive activities.

This has been folded neatly into the “Facebook makes you stoopid” meme, though it may be that Facebook appeals to dull people more than to thoughtful people. As Carr puts it, “if you want to be a big success on Facebook, it helps to be a dullard.” It doesn’t require deep thought to hit the Like button.

While it’s interesting, I find it problematic that the study doesn’t differentiate between social media platforms. Facebook, Mendeley (a social media platform for scientists), and LinkedIn are different things, and you use them for quite different purposes: I don’t post pictures on Mendeley, I don’t use Facebook to look for a job, and I don’t use LinkedIn for scientific research. Another question is whether some high-NFC users treat services like Facebook or MySpace as the equivalent of Jerry Bruckheimer movies– sites you visit when you don’t really want to think. When I use Facebook, I limit my interactions with it: I do relatively little browsing of the news feed, tend to hop on and off quickly, and don’t turn on the online chat. For me the issue isn’t the absence or presence of peripheral cues, but rather the volume of content and the amount of time that responding to it– or rather, making its creators feel good by responding to their posts– would take. I don’t worry particularly that Facebook will make me stupid; I worry that it will absorb time I could better spend doing other things (even very low-NFC things like laundry).

More broadly, I think arguments along the “Facebook makes you stupid by unwinding neural pathways” lines assign too much influence to technologies. Ultimately they become another form of technological determinism, a kind of neuroscience-spiced version of high-tech marketing (use this and be smart / more connected / more productive / happier!) or of leftist critiques of technology. Such arguments obscure the fact that users always have choices– sometimes very limited, but still real– about whether and how to use technologies. And neuroscience-backed claims about technologies changing us tend to emphasize the possibility that novel technologies rewire our brains, while overlooking that the same neuroplasticity means we can get back whatever cognitive abilities we worry about losing to the World Wide Web.


© 2018 Deliberate Rest
