On Tribal Rationality, the Impossibility of Consciousness and Our Chance to Find a Common Ground

◇ ◇ ◇


A tribe often refers to itself using its own language’s word for “people”, and refers to other, neighboring tribes with various epithets. For example, the term “Inuit” translates to “people”.


Tribalism has a very adaptive effect in human evolution. Humans are social animals and ill-equipped to live on their own. Tribalism and social bonding help to keep individuals committed to the group, even when personal relations may fray. That keeps individuals from wandering off or joining other groups. It also leads to bullying when a tribal member is unwilling to conform to the politics of the collective.


The tendencies of members to unite against an outside tribe and the ability to act violently and prejudicially against that outside tribe likely boosted the chances of survival in genocidal conflicts.

— Tribalism
via wikipedia

◇ ◇ ◇

Humans, like other primates, are tribal animals. We need to belong to groups, which is why we love clubs and teams. Once people connect with a group, their identities can become powerfully bound to it. They will seek to benefit members of their group even when they gain nothing personally. They will penalize outsiders, seemingly gratuitously. They will sacrifice, and even kill and die, for their group.


The human instinct to identify with a group is almost certainly hard-wired, and experimental evidence has repeatedly confirmed how early in life it presents itself. In one recent study, a team of psychology researchers randomly assigned a group of children between the ages of four and six to either a red group or a blue one and asked them to put on T-shirts of the corresponding color. They were then shown edited computer images of other children, half of whom appeared to be wearing red T-shirts and half of whom appeared to be wearing blue, and asked for their reactions. Even though they knew absolutely nothing about the children in the photos, the subjects consistently reported that they liked the children who appeared to be members of their own group better, chose to hypothetically allocate more resources to them, and displayed strong subconscious preferences for them. In addition, when told stories about the children in the photos, these boys and girls exhibited systematic memory distortion, tending to remember the positive actions of in-group members and the negative actions of out-group members. Without “any supporting social information whatsoever,” the researchers concluded, the children’s perception of other kids was “pervasively distorted by mere membership in a social group.”

Neurological studies confirm that group identity can even produce physical sensations of satisfaction. Seeing group members prosper seems to activate our brains’ “reward centers” even if we receive no benefit ourselves. Under certain circumstances, our reward centers can also be activated when we see members of an out-group failing or suffering. Mina Cikara, a psychologist who runs Harvard’s Intergroup Neuroscience Lab, has noted that this is especially true when one group fears or envies another—when, for example, “there’s a long history of rivalry and not liking each other.”

This is the dark side of the tribal instinct. Group bonding, the neuroscientist Ian Robertson has written, increases oxytocin levels, which spurs “a greater tendency to demonize and de-humanize the out-group” and which physiologically “anesthetizes” the empathy one might otherwise feel for a suffering person. Such effects appear early in life. Consider two recent studies about the in-group and out-group attitudes of Arab and Jewish children in Israel. In the first, Jewish children were asked to draw both a “typical Jewish” man and a “typical Arab” man. The researchers found that even among Jewish preschoolers, Arabs were portrayed more negatively and as “significantly more aggressive” than Jews. In the second study, Arab high school students in Israel were asked for their reactions to fictitious incidents involving the accidental death (unrelated to war or intercommunal violence) of either an Arab or a Jewish child—for example, a death caused by electrocution or a biking accident. More than 60 percent of the subjects expressed sadness about the death of the Arab child, whereas only five percent expressed sadness about the death of the Jewish child. Indeed, almost 70 percent said they felt “happy” or “very happy” about the Jewish child’s death.


Amy Chua
Tribal World
via foreignaffairs.com

◇ ◇ ◇


Our brains distinguish between in-group members and outsiders in a fraction of a second, and they encourage us to be kind to the former but hostile to the latter. These biases are automatic and unconscious and emerge at astonishingly young ages. They are, of course, arbitrary and often fluid. Today’s “them” can become tomorrow’s “us.” But this is small consolation. Humans can rein in their instincts and build societies that divert group competition to arenas less destructive than warfare, yet the psychological bases for tribalism persist, even when people understand that their loyalty to their nation, skin color, god, or sports team is as random as the toss of a coin. At the level of the human mind, little prevents new teammates from once again becoming tomorrow’s enemies.


The human mind’s propensity for us-versus-them thinking runs deep. Numerous careful studies have shown that the brain makes such distinctions automatically and with mind-boggling speed. Stick a volunteer in a brain scanner and quickly flash pictures of faces. Among typical white subjects in the scanner, the sight of a black man’s face activates the amygdala, a brain region central to emotions of fear and aggression, in under one-tenth of a second. In most cases, the prefrontal cortex, a region crucial for impulse control and emotional regulation, springs into action a second or two later and silences the amygdala: “Don’t think that way, that’s not who I am.” Still, the initial reaction is usually one of fear, even among those who know better.

This finding is no outlier. Looking at the face of someone of the same race activates the fusiform cortex, a specialized part of the primate brain that recognizes faces; it activates less when the face in question is that of someone of another race. Watching the hand of someone of the same race being poked with a needle activates the anterior cingulate cortex, a region implicated in feelings of empathy; being shown the same with the hand of a person of another race produces less activation. Not everyone’s face or pain counts equally.


Sometimes the very foundations of affection and cooperation are also at the root of humankind’s darker impulses. Consider oxytocin, a compound whose reputation as a fuzzy “cuddle hormone” has recently taken a bit of a hit. In mammals, oxytocin is central to mother-infant bonding and helps create close ties in monogamous couples. In humans, it promotes a whole set of pro-social behaviors. Subjects given oxytocin become more generous, trusting, empathic, and expressive. Yet recent findings suggest that oxytocin prompts people to act this way only toward in-group members—their teammates in a game, for instance. Toward outsiders, it makes them aggressive and xenophobic. Hormones rarely affect behavior this way; the norm is an effect whose strength simply varies in different settings. Oxytocin, however, deepens the fault line in our brains between “us” and “them.”


That our group identities—national and otherwise—are random makes them no less consequential in practice, for better and for worse. At their best, nationalism and patriotism can prompt people to pay their taxes and care for their nation’s have-nots, including unrelated people they have never met and will never meet. But because this solidarity has historically been built on strong cultural markers of pseudo-kinship, it is easily destabilized, particularly by the forces of globalization, which can make people who were once the archetypes of their culture feel irrelevant and bring them into contact with very different sorts of neighbors than their grandparents had. Confronted with such a disruption, tax-paying civic nationalism can quickly devolve into something much darker: a dehumanizing hatred that turns Jews into “vermin,” Tutsis into “cockroaches,” or Muslims into “terrorists.” Today, this toxic brand of nationalism is making a comeback across the globe, spurred on by political leaders eager to exploit it for electoral advantage.

In the face of this resurgence, the temptation is strong to appeal to people’s sense of reason. Surely, if people were to understand how arbitrary nationalism is, the concept would appear ludicrous. Nationalism is a product of human cognition, so cognition should be able to dismantle it, too.

Yet this is wishful thinking. In reality, knowing that our various social bonds are essentially random does little to weaken them. Working in the 1970s, the psychologist Henri Tajfel called this “the minimal group paradigm.” Take a bunch of strangers and randomly split them into two groups by tossing a coin. The participants know the meaninglessness of the division. And yet within minutes, they are more generous toward and trusting of members of their in-group. Tails prefer not to be in the company of Heads, and vice versa. The pull of us-versus-them thinking is strong even when the arbitrariness of social boundaries is utterly transparent, to say nothing of when it is woven into a complex narrative about loyalty to the fatherland. You can’t reason people out of a stance they weren’t reasoned into in the first place.

Robert Sapolsky
This Is Your Brain on Nationalism: The Biology of Us and Them
via foreignaffairs.com

◇ ◇ ◇


One of my all-time favorite studies, conducted by Stanley Schachter in the late 1950s, examined whether fear might bring people together. Schachter convinced college-aged women that they would receive a series of electric shocks about 15 minutes later. Some were told that these shocks would barely tickle, and others were told they would be very painful. Participants were then asked whether they wanted to wait for their shocks in a room alone, or with other people. People who believed that shocks would be painful strongly preferred being near others, whereas those who believed the shocks would be mild generally did not care whether or not they had neighbors in their waiting room. On Schachter’s logic, this exposed a powerful rule about social behavior: in times of anxiety, people seek each other out. Like penguins in February, we tend to face adversity by huddling up.

Schachter refined this rule through a follow-up study. Some participants were given the option to be alone or with others who would also be shocked; others could wait alone or with people who would receive no shocks. When individuals could no longer wait with fellow shock-fearers, their preferences for company disappeared. This suggests that the benefit of crowds depends on our belief that others share our experiences. Schachter put this more poetically, claiming that his finding “removes one shred of ambiguity from the old saw ‘Misery loves company.’ Misery doesn’t love just any kind of company, it loves only miserable company.”


What about like-minded crowds appeals to us? Schachter speculated that they provide a freeing mask of anonymity, allowing individuals to “lose themselves” in the masses. And some music festivals certainly appear to provide such opportunities. Others suggest that similar crowds, such as partisan echo chambers, reinforce our judgments and make us more confident in our opinions. I think the power of groups might also go a bit deeper. One of our most fundamental human needs is that of belonging to a social group, and the right crowd can offer us a taste of such belonging.

Jamil Zaki
Crowds versus company: When are we drawn to groups?
via sciam.com

◇ ◇ ◇


…moderates, less encumbered by bias, are more open to new ideas. Moderates are more likely to move toward the extremes than partisans are to move toward the middle. Given their flexibility, moderates tend to adopt the preferences of the intransigent minority.

But how does the moderate majority come to accept the preferences of an extreme minority? Researchers at Rensselaer Polytechnic Institute provide an answer. The researchers, using mathematical modeling, found that there is a tipping point at which opinions held by a committed minority spread to the rest of the population. The tipping point is 10%. “When the number of committed opinion holders is below 10%, there is no visible progress in the spread of ideas… once that number grows above 10%, the idea spreads like a flame.”

In short, how we behave often depends on how many people are behaving in that manner.

For a viewpoint to become popular, a minimum number of group members must first adopt it. Once this threshold is reached, the viewpoint becomes self-sustaining with more and more adopting it. Thus, the preferences of an intransigent minority are mainstreamed once enough moderates adopt them.
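The threshold dynamic described above can be sketched in a toy simulation. The code below is a deliberately simplified, listener-only variant of the naming-game dynamics the Rensselaer team modeled; their actual model also updates the speaker, and the precise 10% figure is specific to their setup, so the function name, parameters, and rules here are illustrative rather than a reproduction of the study.

```python
import random

def simulate(n=1000, committed_frac=0.12, steps=200_000, seed=1):
    """Listener-only binary-agreement sketch with a committed minority.

    Opinions: 'A' (held by the committed minority), 'B', or 'AB'
    (undecided). Committed agents always hold and speak 'A'.
    Returns the final fraction of agents holding 'A' alone.
    """
    rng = random.Random(seed)
    n_committed = int(n * committed_frac)
    # Committed agents occupy indices 0 .. n_committed - 1.
    opinions = ['A'] * n_committed + ['B'] * (n - n_committed)

    for _ in range(steps):
        spk, lis = rng.sample(range(n), 2)  # pick a speaker and a listener
        # An undecided speaker voices one of its two opinions at random.
        said = rng.choice(opinions[spk]) if opinions[spk] == 'AB' else opinions[spk]
        if lis < n_committed:
            continue  # committed listeners never budge
        if said in opinions[lis]:
            opinions[lis] = said  # agreement: collapse to the spoken opinion
        else:
            opinions[lis] = 'AB'  # disagreement: become undecided

    return opinions.count('A') / n
```

Sweeping `committed_frac` from a few percent upward shows the qualitative point: below some threshold the majority’s view persists for a very long time, while above it the committed minority’s opinion takes over comparatively quickly. How a given view spreads depends less on its merits than on how many people are already committed to it.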

Following Taleb’s “minority rule,” political culture is driven by a small group of charismatic individuals. This makes intuitive sense as the actions of those in Washington drive the news cycle. Whether they be party leaders, activists, or intellectuals, these individuals are the heart and soul of a political movement. They draw the “party lines.” They create the tribe’s preferences and the tribe is willing to follow along.

The “minority rule” manifests rather mundanely in American politics. There is a lot of talk about hope, change, and making America great again. Slogans are indicators of what a party and political movement might become under a new leader. Often, a party comes to resemble the person who leads it. The leader determines what policies to support, what enemies to hate, what morals to espouse, and what hills to die on.

As a result, group preferences can change at the drop of a hat. When Barack Obama openly supported gay marriage in 2012, his view not only became the de facto position of the Democratic Party but also produced preference changes across the US. Group preferences can also change radically over time. The Obama presidency significantly changed the political alignment of the Democratic voter base. According to Pew Research, between 2008 and 2016, Democrats moved significantly leftward on a number of issues, including racial discrimination, immigration, and the role of government.

This is not unique to Democrats, however, as Republicans’ political preferences have been “Trumpified.” YouGov polling indicates that between August 2014 and August 2017, Republicans’ view of Russia as an ally increased from 9% to 30%. Furthermore, Vladimir Putin’s favorability rating among Republicans increased from 12% to 32% between 2015 and 2017. And it’s not just Russia. Once supporters of free trade, Republicans are now increasingly against it, favoring steel tariffs.

This is not to suggest that political leaders are solely responsible for determining group preferences. Tribal preferences can be generated by self-righteous late night hosts, potty-mouthed actors, and this guy.


Imagine a group of people trying to make dinner plans. One person suggests driving to a restaurant in a distant city called Abilene. Another person, not wanting to travel very far but dreading an argument, says “sure.” A third individual, now thinking that her two peers want to go to Abilene, doesn’t want to be the odd person out. She agrees that Abilene is a good idea. This domino effect leads to everyone thinking everyone else wants to go to Abilene when in fact a consensus does not exist.

This is called The Abilene Paradox, described by management expert Jerry B. Harvey. It resembles the aforementioned echo chamber. But the Abilene Paradox is stranger. It consists of individuals who do not agree with an idea yet acquiesce because of their mistaken belief that a consensus has been reached.


Vincent Harinam, Rob Henderson
Political Moderates Are Lying
via quillette.com

◇ ◇ ◇

In order to study a wide range of biases, and to avoid confusion, the researchers defined prejudice as “a negative evaluation of a group or of an individual on the basis of group membership” for the purposes of this study.

The study took 5,914 individuals and tested their “cognitive ability”, determined by their score on the Wordsum test of verbal ability. The subjects were asked for their opinions of certain groups of people, such as Christians, Hispanics, or the poor. Those answers were later converted to a zero-to-100 scale, with 100 being the most negatively viewed.

The study showed that individuals of higher and lower intelligence showed similar levels of prejudice, but not towards the same people. Persons of lower cognitive ability tended to be prejudiced towards “low choice” groups, persons who have little control over the fact that they happen to be a member of that group. More intelligent persons were more prejudiced against “high choice” groups, where the members of that group, hypothetically, had greater ability to opt in or out of membership in that group.


Previous research had focused primarily on persons of lower intelligence and biases towards groups with low choice, such as racial minorities. Those studies have suggested that the prejudice of less intelligent persons is rooted in fear, with a rationalization of the need to identify a potential threat. This research suggests that we all have at least a slight distaste for those who are different from us or whom we perceive as being opposed to our worldview. They put it bluntly: “Writ large, prejudice does not appear related to cognitive ability”.

Scotty Hendricks
How Prejudice Differs in People of Higher and Lower Intelligence
via bigthink.com

◇ ◇ ◇

Science can, for all intents and purposes, tell us that it is a “fact” that water is made of hydrogen and oxygen, and there are many other claims that can be considered facts as well. However, in contemporary discussions of contentious issues, we generally imagine that science can provide us with facts that are just as authoritative as ‘water is made of hydrogen and oxygen’. This is not the case.


As an example of this simple-world-philic aspect of human nature, consider the relationship between people’s perception of the severity of human-caused climate change and their perception of economic benefits of green energy policies. In principle, these should be independent issues and a person’s opinion on one should not necessarily predict their opinion on the other.

Many people argue that human-caused climate change will have a catastrophic negative impact on the earth and thus human well-being and many people also argue that green energy policies (e.g., taxing carbon emissions) will have a catastrophic impact on the economy and thus human well-being. However, these arguments are hardly ever heard coming out of the same person’s mouth […]. Similarly, many people argue that the concern over climate change is overblown and many people argue that green energy policies will be a boon to the economy by creating clean-tech jobs. But again, these are hardly ever the same people. Why is this? I think it is just a manifestation of people wanting the world to be simple. We don’t want conflicting information, nuance or shades of gray. We want nice neat conclusions, in other words, we want “facts”.


Since people desire facts and since most important contemporary issues rely to some degree on scientific conclusions it was perhaps inevitable that a notion of “Science” (proper noun with a capital S) would emerge where Science is thought of as the official arbiter of facts. Many people imagine that Science is an authoritative body that can swoop in, perform its magic, and definitively deem some controversial statement to be fact or fiction, true or false. I think of this view of Science as being somewhat Vatican-like in that it conceptualizes science as a hierarchical, centralized authority that should not be questioned.

This Vatican notion of Science not only comes from our innate simple-world-philia but is also taught to us in our education system. People tend to be taught science in school as if science is simply a catalog of conclusions that have been deemed to be true rather than a way of looking at the world and asking questions.


So, it may be the case that science has an over-rated ability to produce unquestionable facts, but this hardly means that science can’t tell us anything. Many ideas have been shown to be robust to so many attacks, that they can be considered to be true beyond any reasonable doubt. For climate science these include the fundamentals that the greenhouse effect exists, humans are increasing greenhouse gasses which are warming the planet substantially, and there are substantial negative impacts associated with this warming.

I find that the Vatican notion of Science is commonly held on both the political left as well as the political right in the United States and that this model frames how the politics of climate change get discussed. The primary difference between the right and the left is not on how they conceptualize science but how much legitimacy they give Science. The far left tends to articulate that imminent catastrophe from human-caused climate change is a Scientific fact. Given the perceived legitimacy of Science on the left, it is thought to be either insane or evil to question this […].

The political far-right, however, does not grant that Science has legitimately earned its authority. Rather, they tend to think of Science as a corrupt organization, built and populated by their ideological opponents […]. In this view, it is quite noble to push back against fraudulent, ideologically-driven Science and doing so makes one comparable to Galileo.

The conversation regarding climate change could be defused quite a bit if people realized how flawed the Vatican notion of Science is. The left would have to discontinue the strategy of using “facts” as a bludgeon to end debate and the right would have to admit that conspiracies and collusion are simply not possible in such a decentralized process […].


Having said that, scientists are humans and humans are social beings who are influenced by the zeitgeist of their proximate culture. Unfortunately, as political polarization has increased in the United States, many of us are becoming more and more hermetically sealed into our ideological bubbles where our ideas are not challenged and we only hear from other people who agree with us. I do worry that this phenomenon has the power to influence the collective research output and communication of climate science.

Patrick T. Brown
The fact illusion: Objective truth is elusive in (climate) science
via patricktbrown.org

◇ ◇ ◇


In a recent experiment, we showed it is possible to trick people into changing their political views. In fact, we could get some people to adopt opinions that were directly opposite of their original ones. Our findings imply that we should rethink some of the ways we think about our own attitudes, and how they relate to the currently polarized political climate. When it comes to the actual political attitudes we hold, we are considerably more flexible than we think.

A powerful shaping factor in our social and political worlds is how they are structured by group belonging and identities. For instance, researchers have found that moral and emotional messages on contentious political topics, such as gun control and climate change, spread more rapidly within than between ideologically like-minded networks. This echo-chamber problem seems to be made worse by the algorithms of social media companies, which send us increasingly extreme content to fit our political preferences.

We are also far more motivated to reason and argue to protect our own or our group’s views. Indeed, some researchers argue that our reasoning capabilities evolved to serve that very function. A recent study illustrates this well: participants were assigned to follow Twitter accounts that retweeted political views opposed to their own, in the hope of exposing them to new perspectives. But the exposure backfired, increasing polarization among the participants. Simply tuning Republicans into MSNBC, or Democrats into Fox News, might only amplify conflict. What can we do to make people open their minds?

The trick, as strange as it may sound, is to make people believe the opposite opinion was their own to begin with.

The experiment relies on a phenomenon known as choice blindness, discovered in 2005 by a team of Swedish researchers. They presented participants with two photos of faces, asked them to choose the photo they thought was more attractive, and then handed them that photo. Using a clever trick inspired by stage magic, the researchers switched the photo as it changed hands, so that participants received the face they had not chosen, the less attractive one. Remarkably, most participants accepted this photo as their own choice and then proceeded to give arguments for why they had chosen that face in the first place. This revealed a striking mismatch between our choices and our ability to rationalize them. The finding has since been replicated in various domains, including taste for jam, financial decisions, and eyewitness testimony.

While it is remarkable that people can be fooled into picking an attractive photo or a sweet jam in the moment, we wondered whether it would be possible to use this false-feedback to alter political beliefs in a way that would stand the test of time.

In our experiment, we first gave participants false feedback about their choices, but this time concerning actual political questions (e.g., climate taxes on consumer goods). Participants were then asked to state their views a second time that same day, and again one week later. The results were striking. Participants’ responses shifted considerably in the direction of the manipulation. For instance, those who had originally favoured higher taxes were more likely to be undecided or even opposed to them.

These effects lasted up to a week later. The changes in their opinions were also larger when they were asked to give an argument—or rationalization—for their new opinion. It seems that giving people the opportunity to reason reinforced the false-feedback and led them further away from their initial attitude.

Why do attitudes shift in our experiment? When faced with the false feedback, people are freed from the motives that normally lead them to defend themselves or their ideas from external criticism. Instead they can consider the benefits of the alternative position.

To understand this, imagine that you have picked out a pair of pants to wear later in the evening. Your partner comes in and criticizes your choice, saying you should have picked the blue ones rather than the red ones. You will likely become defensive about your choice and defend it—maybe even becoming more entrenched in your choice of hot red pants.

Now imagine instead that your partner switches the pants while you are distracted, instead of arguing with you. You turn around and discover that you had picked the blue pants. In this case, you need to reconcile the physical evidence of your preference (the pants on your bed) with whatever inside your brain normally makes you choose the red pants. Perhaps you made a mistake, or had a shift in opinion that slipped your mind. But now that the pants are in front of you, it would be easy to slip them on and continue getting ready for the party. As you catch yourself in the mirror, you decide that these pants are quite flattering after all.

Philip Pärnamets, Jay Van Bavel
How Political Opinions Change
via scientificamerican.com

◇ ◇ ◇


In a subsequent experiment, published in 2013, Hall and Johansson set out to change political attitudes during a general election in Sweden using choice blindness techniques. Study participants stated their voting intentions for the upcoming election and filled out a survey, revealing their attitudes toward hot-topic issues under debate by Sweden’s left- and right-wing coalitions. Then the researchers once more used sleight of hand to alter the respondents’ replies, presenting them with the opposite political views to their own. When invited to explain these doctored survey responses, 92 percent of the participants accepted and endorsed them! Moreover, almost half of the respondents indicated a willingness to consider changing their vote, based on the manipulated results—a much higher percentage than the 10 percent of voters who were prone to swing according to established political polls.

Susana Martinez-Conde, Stephen L. Macknik
When Free Choice Is an Illusion
via scientificamerican.com

◇ ◇ ◇

[…] what is the relationship between our minds and the physical world? Here, we don’t have a settled answer. We know something about the body and brain, but what about the subjective life inside?


Many theories have been proposed, but none has passed scientific muster. I believe a major change in our perspective on consciousness may be necessary, a shift from a credulous and egocentric viewpoint to a skeptical and slightly disconcerting one: namely, that we don’t actually have inner feelings in the way most of us think we do.

Imagine a group of scholars in the early 17th century, debating the process that purifies white light and rids it of all colors. They’ll never arrive at a scientific answer. Why? Because despite appearances, white is not pure. It’s a mixture of colors of the visible spectrum, as Newton later discovered. The scholars are working with a faulty assumption that comes courtesy of the brain’s visual system. The scientific truth about white (i.e., that it is not pure) differs from how the brain reconstructs it.

The brain builds models (or complex bundles of information) about items in the world, and those models are often not accurate. From that realization, a new perspective on consciousness has emerged in the work of philosophers like Patricia S. Churchland and Daniel C. Dennett. Here’s my way of putting it:

How does the brain go beyond processing information to become subjectively aware of information? The answer is: It doesn’t. The brain has arrived at a conclusion that is not correct. When we introspect and seem to find that ghostly thing — awareness, consciousness, the way green looks or pain feels — our cognitive machinery is accessing internal models and those models are providing information that is wrong. The machinery is computing an elaborate story about a magical-seeming property. And there is no way for the brain to determine through introspection that the story is wrong, because introspection always accesses the same incorrect information.

You might object that this is a paradox. If awareness is an erroneous impression, isn’t it still an impression? And isn’t an impression a form of awareness?

But the argument here is that there is no subjective impression; there is only information in a data-processing device. When we look at a red apple, the brain computes information about color. It also computes information about the self and about a (physically incoherent) property of subjective experience. The brain’s cognitive machinery accesses that interlinked information and derives several conclusions: There is a self, a me; there is a red thing nearby; there is such a thing as subjective experience; and I have an experience of that red thing. Cognition is captive to those internal models. Such a brain would inescapably conclude it has subjective experience.

I concede that this approach is counterintuitive. One reason is that it seems to leave a gap in the logic: Why would the brain waste energy computing information about subjective awareness and attributing that property to itself, if the brain doesn’t in fact have this property?

This is where my own work comes in. In my lab at Princeton, my colleagues and I have been developing the “attention schema” theory of consciousness, which may explain why that computation is useful and would evolve in any complex brain. Here’s the gist of it:

Take again the case of color and wavelength. Wavelength is a real, physical phenomenon; color is the brain’s approximate, slightly incorrect model of it. In the attention schema theory, attention is the physical phenomenon and awareness is the brain’s approximate, slightly incorrect model of it. In neuroscience, attention is a process of enhancing some signals at the expense of others. It’s a way of focusing resources. Attention: a real, mechanistic phenomenon that can be programmed into a computer chip. Awareness: a cartoonish reconstruction of attention that is as physically inaccurate as the brain’s internal model of color.
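Graziano's claim that attention "can be programmed into a computer chip" is easy to make concrete. Here is a minimal sketch, purely my own illustration (the function name, the softmax weighting, and the `focus_strength` parameter are assumptions, not anything from the article), of attention as enhancing some signals at the expense of others under a fixed resource budget:

```python
import math

def attend(signals, focus_strength=2.0):
    """Return normalized weights that boost strong signals and damp weak ones."""
    exps = [math.exp(focus_strength * s) for s in signals]
    total = sum(exps)
    return [e / total for e in exps]

weights = attend([0.1, 0.9, 0.3])
# The weights always sum to 1: enhancing one signal necessarily comes
# at the expense of the others -- a fixed "resource" being focused.
```

The zero-sum normalization is the point: this tiny mechanism is entirely physical and inspectable, with nothing ghostly about it.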

In this theory, awareness is not an illusion. It’s a caricature. Something — attention — really does exist, and awareness is a distorted accounting of it.

One reason that the brain needs an approximate model of attention is that to be able to control something efficiently, a system needs at least a rough model of the thing to be controlled. Another reason is that to predict the behavior of other creatures, the brain needs to model their brain states, including their attention. This theory pulls together evidence from social neuroscience, attention research, control theory and elsewhere.
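The control-theory point, that a merely rough model of the controlled thing is good enough, can also be sketched in a few lines. This is an invented toy example (the plant, its gain, and the drift term are all my assumptions), in which a controller steers a one-dimensional system using a simplified internal model that gets the plant's dynamics wrong:

```python
def plant_step(x, u):
    """The real system: true gain 0.8 plus a small drift the model omits."""
    return x + 0.8 * u - 0.01 * x

def control(x, target):
    """The controller's rough model: assumes gain 1.0 and no drift at all."""
    return 0.5 * (target - x)  # simple proportional correction

x, target = 0.0, 10.0
for _ in range(50):
    x = plant_step(x, control(x, target))
# x settles near (though not exactly at) the target, despite the mismatch.
```

The approximate model still gets the system close to where it needs to be, which is the analogy: awareness can be a useful control model of attention without being an accurate one.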

Almost all other theories of consciousness are rooted in our intuitions about awareness. Like the intuition that white light is pure, our intuitions about awareness come from information computed deep in the brain. But the brain computes models that are caricatures of real things. And as with color, so with consciousness: It’s best to be skeptical of intuition.

Michael S. A. Graziano
Are We Really Conscious?
via nytimes.com

◇ ◇ ◇

What is the New Tribalism?

It is the growth of a politics based upon narrow concerns, rooted in the exploitation of divisions of class, cash, gender, region, religion, ethnicity, morality and ideology: a give-no-quarter and take-no-prisoners activism that demands satisfaction and accepts no compromise.

It is a raw permissiveness that escalates rhetorical excess, sometimes even to physical violence.

And it is an environment where our political system of limited government is asked to take on social and religious disputes that the system cannot possibly resolve.


First, we must distrust the language of moral righteousness employed by those who ask for our support, our votes, our money.

Distrust those who bathe in the purity of their motives.

Judge people not by their passion for one cause, but by their capacity to calculate the consequences, long and short term, of their actions in its pursuit.

A second approach is to find our common ground.

Dave Frohnmayer
The New Tribalism
via uoregon.edu

◇ ◇ ◇

We’re all guilty of not listening at one point or another in our lives. We tune others out while we’re watching TV or trying to concentrate on something we’re reading. Nowadays we try hard to multitask between Twitter and texting, but inevitably that means we’re not always listening to someone who’s trying to talk to us.

Believe it or not, listening is a skill, just like writing or playing football. That’s good news, because it means you can learn to listen and to be present with the person who’s talking to you. In the meantime, it helps to understand some of the reasons we don’t listen. By identifying the reasons that ring true for you, you can work on your listening skills, staying aware of those tendencies the next time you catch yourself not listening.


1. Truth
You take a dualistic position that you are right and the other person is wrong. Dualism supports a preoccupation with proving your point of view. Directly expressing your feelings and thoughts without needing to be “right” allows you to express yourself, and listen to and understand others (without binding your communication to a right/wrong mindset).

2. Blame
You believe that the problem is the other person’s fault. “Owning” your problem (taking responsibility for it, based on identifying your own needs) is a functional alternative to the blame game, in which you attribute to others motives that may not reflect their reality.

3. Need to be a Victim
You feel sorry for yourself and think that other people are treating you unfairly because they are insensitive and selfish. Listening minimizes the tendency to become a voluntary victim or martyr, a position commonly taken by someone who performs tasks for others without their explicit request or approval.

4. Self-Deception
An individual’s behavior can contribute to an interpersonal problem even when he or she does not “own” the problem. A “blind spot” prevents the individual from seeing how his or her behavior affects others. Such a person may be judged dogmatic or stubborn; yet the person doing the judging may be equally unaware of his or her own tendency to oppose that person’s thoughts and ideas.

5. Defensiveness
You are so fearful of criticism that you cannot listen when someone shares anything negative or unacceptable. Instead of listening and evaluating the perceptions of an individual, you prefer to defend yourself.

6. Coercion Sensitivity
You are uncomfortable with being supervised or given task-related instructions. Without concrete evidence, you take the position that certain people, or people in general, are controlling and domineering, and therefore that you must defend yourself.

7. Being Demanding
You feel entitled to better treatment from others, and you get frustrated when they do not treat you in a manner that is consistent with your entitlement. An insistence that they are unreasonable, and should not behave the way they do, negates your ability to understand the probable needs that are met through the other person’s behavior.

8. Selfishness
You want what you want when you want it, and you become confrontational or defiant when you do not get it. The absence of an interest in what others are probably thinking and feeling is a barrier to listening.

9. Mistrust
The position of mistrust includes a fundamental belief that others will manipulate you if you listen to them. An absence of empathic understanding prevents you from listening to others.

10. Help Addiction
You feel the need to fix things for people when what they need is someone to listen to them and understand them. The tendency to offer solutions when others are hurt, frustrated, or angry is framed as being helpful, even though the speaker never asked for your recommendations or intervention.

John M. Grohol
10 Reasons You Don’t Listen
via psychcentral.com

See also

On Herds and Leaders
