Communication

We can get a better understanding of current cultural identities by unpacking how they came to be. By looking at history, we can see how cultural identities that seem to have existed forever actually came to be constructed for various political and social reasons and how they have changed over time. Communication plays a central role in this construction. As we have already discussed, our identities are relational and communicative; they are also constructed. Social constructionism is a view that argues the self is formed through our interactions with others and in relationship to social, cultural, and political contexts (Allen, 2011). In this section, we’ll explore how the cultural identities of race, gender, sexual orientation, and ability have been constructed in the United States and how communication relates to those identities. There are other important identities that could be discussed, like religion, age, nationality, and class. Although they are not given their own section, consider how those identities may intersect with the identities discussed next.

Race

Would it surprise you to know that human beings, regardless of how they are racially classified, share 99.9 percent of their DNA? This finding from the Human Genome Project supports the argument that race is a social construct, not a biological one. The American Anthropological Association agrees, stating that race is the product of “historical and contemporary social, economic, educational, and political circumstances” (Allen, 2011). Therefore, we’ll define race as a socially constructed category based on differences in appearance that has been used to create hierarchies that privilege some and disadvantage others.

Figure 8.2.1: There is actually no biological basis for racial classification among humans, as we share 99.9 percent of our DNA. (Evelyn, “friends” – CC BY-NC-ND 2.0.)

Race didn’t become a socially and culturally recognized marker until European colonial expansion in the 1500s. As Western Europeans traveled to parts of the world previously unknown to them and encountered people who were different from them, a hierarchy of races began to develop that placed lighter-skinned Europeans above darker-skinned people. At the time, newly developing fields in the natural and biological sciences took interest in examining the new locales, including the plant and animal life, natural resources, and native populations. Over the next three hundred years, science that we would now undoubtedly recognize as flawed, biased, and racist legitimated notions that native populations were less evolved than white Europeans, often calling them savages. In fact, there were scientific debates as to whether some of the native populations should be considered human or animal. Racial distinctions have been based largely on phenotypes, or physiological features such as skin color, hair texture, and body/facial features. Western “scientists” used these differences as “proof” that native populations were less evolved than the Europeans, which helped justify colonial expansion, enslavement, genocide, and exploitation on massive scales (Allen, 2011). Even though there is a consensus among experts that race is social rather than biological, we can’t deny that race still has meaning in our society and affects people as if it were “real.”

Given that race is one of the first things we notice about someone, it’s important to know how race and communication relate (Allen, 2011). Discussing race in the United States is difficult for many reasons. One is due to uncertainty about language use. People may be frustrated by their perception that labels change too often or be afraid of using an “improper” term and being viewed as racially insensitive. It is important, however, that we not let political correctness get in the way of meaningful dialogues and learning opportunities related to difference. Learning some of the communicative history of race can make us more competent communicators and open us up to more learning experiences.

Racial classifications used by the government and our regular communication about race in the United States have changed frequently, which further points to the social construction of race. Currently, the primary racial groups in the United States are African American, Asian American, European American, Latino/a, and Native American, but a brief look at changes in how the US Census Bureau has defined race clearly shows that this hasn’t always been the case (see Table 8.2 “Racial Classifications in the US Census”). In the 1900s alone, there were twenty-six different ways that race was categorized on census forms (Allen, 2011). The way we communicate about race in our regular interactions has also changed, and many people are still hesitant to discuss race for fear of using “the wrong” vocabulary.

Table 8.2 Racial Classifications in the US Census

1790: No category for race.
1800s: Race was defined by the percentage of African “blood”: a mulatto had one black and one white parent, a quadroon was one-quarter African, and an octoroon one-eighth.
1830–1940: The term color was used instead of race.
1900: Racial categories included white, black, Chinese, Japanese, and Indian. Census takers were required to check one of these boxes based on visual cues; individuals did not get to select a racial classification on their own until 1970.
1950: The term color was dropped and replaced by race.
1960, 1970: Both race and color were used on census forms.
1980–2010: Race again became the only term.
2000: Individuals were allowed to choose more than one racial category for the first time in census history.
2010: The census included fifteen racial categories and an option to write in races not listed on the form.

Source: Adapted from Brenda J. Allen, Difference Matters: Communicating Social Identity (Long Grove, IL: Waveland Press, 2011), 71–72.

The five primary racial groups noted previously can still be broken down further to specify a particular region, country, or nation. For example, Asian Americans are diverse in terms of country and language of origin and cultural practices. While the category of Asian Americans can be useful when discussing broad trends, it can also flatten differences among groups, which can lead to stereotypes. You may find that someone identifies as Chinese American or Korean American instead of Asian American. In this case, the label further highlights a person’s cultural lineage. We should not assume, however, that someone identifies with his or her cultural lineage, as many people have more in common with their US American peers than with a culture that may be one or more generations removed.

History and personal preference also influence how we communicate about race. Culture and communication scholar Brenda Allen notes that when she was born in 1950, her birth certificate included an N for Negro. Later she referred to herself as colored because that was the term people in her community used for themselves. During and before this time, the term black had negative connotations and would likely have offended someone. There was a movement in the 1960s to reclaim the word black, and the slogan “black is beautiful” was commonly used. Brenda Allen acknowledges the newer label of African American but notes that she still prefers black. The terms colored and Negro are no longer considered appropriate because they were commonly used during a time when black people were blatantly discriminated against. Even though that history may seem far removed to some, it is not to others. Currently, the terms African American and black are frequently used, and both are considered acceptable. The phrase people of color is acceptable for most and is used to be inclusive of other racial minorities. If you are unsure what to use, you could always observe how a person refers to himself or herself, or you could ask for his or her preference. In any case, a competent communicator defers to and respects the preference of the individual.

The label Latin American generally refers to people who live in or trace their lineage to Central America, South America, and parts of the Caribbean. Although Spain colonized much of what is now South and Central America and parts of the Caribbean, the inhabitants of these areas are now much more diverse. Depending on the region or country, some people primarily trace their lineage to the indigenous people who lived in these areas before colonization, or to a Spanish and indigenous lineage, or to other combinations that may include European, African, and/or indigenous heritage. Latina and Latino are labels that are preferable to Hispanic for many who live in the United States and trace their lineage to South and/or Central America and/or parts of the Caribbean. Scholars who study Latina/o identity often use the label Latina/o in their writing to acknowledge women who avow that identity label (Calafell, 2007). In verbal communication you might say “Latina” when referring to a particular female or “Latino” when referring to a particular male of Latin American heritage. When referring to the group as a whole, you could say “Latinas and Latinos” instead of just “Latinos,” which would be more gender inclusive. While Hispanic is used by the US Census, it refers primarily to people of Spanish origin, which doesn’t account for the diversity of backgrounds of many Latinos/as. The term Hispanic also highlights the colonizer’s influence over the indigenous, which erases a history that is important to many. Additionally, there are people who claim Spanish origins and identify culturally as Hispanic but racially as white. Labels such as Puerto Rican or Mexican American, which further specify region or country of origin, may also be used. Just as with other cultural groups, if you are unsure of how to refer to someone, you can always ask for and honor someone’s preference.

The history of immigration in the United States also ties to the way that race has been constructed. The metaphor of the melting pot has been used to describe the immigration history of the United States but doesn’t capture the experiences of many immigrant groups (Allen, 2011). Generally, immigrant groups who were white, or light skinned, and spoke English were better able to assimilate, or melt into the melting pot. But immigrant groups that we might think of as white today were not always considered so. Irish immigrants were discriminated against and even portrayed as black in cartoons that appeared in newspapers. In some Southern states, Italian immigrants were forced to go to black schools, and it wasn’t until 1952 that Asian immigrants were allowed to become citizens of the United States. All this history is important, because it continues to influence communication among races today.

Interracial Communication

Race and communication are related in various ways. Racism influences our communication about race and is not an easy topic for most people to discuss. Today, people tend to view racism as overt acts such as calling someone a derogatory name or discriminating against someone in thought or action. However, there is a difference between racist acts, which we can attach to an individual, and institutional racism, which is not as easily identifiable. It is much easier for people to recognize and decry racist actions than it is to realize that racist patterns and practices run through societal institutions, which means that racism can exist without being committed by any one person. As competent communicators and critical thinkers, we must challenge ourselves to be aware of how racism influences our communication at individual and societal levels.

We tend to make assumptions about people’s race based on how they talk, and often these assumptions are based on stereotypes. Dominant groups tend to define what is correct or incorrect usage of a language, and since language is so closely tied to identity, labeling a group’s use of a language as incorrect or deviant challenges or negates part of their identity (Yancy, 2011). We know there isn’t only one way to speak English, but there have been movements to identify a standard. This becomes problematic when we realize that “standard English” refers to a way of speaking English that is based on white, middle-class ideals that do not match up with the experiences of many. Once we create a standard for English, we can label anything that deviates from it “nonstandard English.” Differences between standard English and what has been called “Black English” have gotten national attention through debates about whether or not instruction in classrooms should accommodate students who do not speak standard English. Education plays an important role in language acquisition, and class relates to access to education. In general, people tend to judge negatively those whose speech deviates from the standard, whether or not they speak standard English themselves.

Another national controversy has revolved around the inclusion of Spanish in common language use, such as Spanish as an option at ATMs, or other automated services, and Spanish language instruction in school for students who don’t speak or are learning to speak English. As was noted earlier, the Latino/a population in the United States is growing fast, which has necessitated inclusion of Spanish in many areas of public life. This has also created a backlash, which some scholars argue is tied more to the race of the immigrants than the language they speak and a fear that white America could be engulfed by other languages and cultures (Speicher, 2002). This backlash has led to a revived movement to make English the official language of the United States.

Image: The “English only” movement of recent years is largely a backlash targeted at immigrants from Spanish-speaking countries. (Wikimedia Commons – public domain; courtesy of www.CGPGrey.com.)

The US Constitution does not stipulate a national language, and Congress has not designated one either. While nearly thirty states have passed English-language legislation, it has mostly been symbolic, and court rulings have limited any enforceability (Zuckerman, 2010). The Linguistic Society of America points out that immigrants are very aware of the social and economic advantages of learning English and do not need to be forced. They also point out that the United States has always had many languages represented, that national unity hasn’t rested on a single language, and that there are actually benefits to having a population that is multilingual (Linguistic Society of America, 2011). Interracial communication presents some additional verbal challenges.

Code-switching involves changing from one way of speaking to another between or within interactions. Some people of color may engage in code-switching when communicating with dominant group members because they fear they will be negatively judged. Adopting the language practices of the dominant group may minimize perceived differences. This code-switching creates a linguistic dual consciousness in which people are able to maintain their linguistic identities with their in-group peers but can still acquire tools and gain access needed to function in dominant society (Yancy, 2011). White people may also feel anxious about communicating with people of color out of fear of being perceived as racist. In other situations, people in dominant groups may spotlight nondominant members by asking them to comment on or educate others about their race (Allen, 2011). For example, I once taught at a private university that was predominantly white. Students of color talked to me about being asked by professors to weigh in on an issue when discussions of race came up in the classroom. While a professor may have been well-intentioned, spotlighting can make a student feel conspicuous, frustrated, or defensive. Additionally, I bet the professors wouldn’t think about asking a white, male, or heterosexual student to give the perspective of their whole group.

Gender

When we first meet a newborn baby, we ask whether it’s a boy or a girl. This question illustrates the importance of gender in organizing our social lives and our interpersonal relationships. A Canadian family became aware of the deep emotions people attach to gender, and of the discomfort people feel when they cannot determine it, when they announced to the world that they were not going to tell anyone the gender of their baby, aside from the baby’s siblings. Their desire for their child, named Storm, to be able to experience early life without the boundaries and categories of gender brought criticism from many (Davis & James, 2011). Conversely, many parents consciously or unconsciously “code” their newborns in gendered ways based on our society’s associations of pink clothing and accessories with girls and blue with boys. While it’s obvious to most people that colors aren’t gendered, they take on new meaning when we assign gendered characteristics of masculinity and femininity to them. Just like race, gender is a socially constructed category. While it is true that there are biological differences between those we label male and female, the meaning our society places on those differences is what actually matters in our day-to-day lives. And the biological differences are interpreted differently around the world, which further shows that although we think gender is a natural, normal, stable way of classifying things, it is actually not. There is a long history of appreciation for people who cross gender lines in Native American and South Central Asian cultures, to name just two.

You may have noticed I use the word gender instead of sex. That’s because gender is an identity based on internalized cultural notions of masculinity and femininity that is constructed through communication and interaction. There are two important parts of this definition to unpack. First, we internalize notions of gender based on socializing institutions, which helps us form our gender identity. Then we attempt to construct that gendered identity through our interactions with others, which is our gender expression. Sex is based on biological characteristics, including external genitalia, internal sex organs, chromosomes, and hormones (Wood, 2005). While the biological characteristics of men and women obviously differ, it’s the meaning that we create and attach to those characteristics that makes them significant. The cultural differences in how that significance is ascribed are proof that “our way of doing things” is arbitrary. For example, cross-cultural research has found that boys and girls in most cultures show both aggressive and nurturing tendencies, but cultures vary in terms of how they encourage these characteristics across genders. In one African group, young boys are responsible for taking care of babies and are encouraged to be nurturing (Wood, 2005).

Gender has been constructed over the past few centuries in political and deliberate ways that have tended to favor men in terms of power. And various academic fields joined in the quest to “prove” there are “natural” differences between men and women. While the “proof” they presented was credible to many at the time, it seems blatantly sexist and inaccurate today. In the late 1800s and early 1900s, scientists who measured skulls, known as craniometrists, claimed that men were more intelligent than women because they had larger brains. Leaders in the fast-growing fields of sociology and psychology argued that women were less evolved than men and had more in common with “children and savages” than with adult (white) males (Allen, 2011). Doctors and other decision makers like politicians also used women’s menstrual cycles as evidence that they were irrational, or hysterical, and therefore couldn’t be trusted to vote, pursue higher education, or be in a leadership position. These are just a few of the many instances in which knowledge was created by seemingly legitimate scientific disciplines that we can now clearly see served to empower men and disempower women. This system is based on the ideology of patriarchy, which is a system of social structures and practices that maintains the values, priorities, and interests of men as a group (Wood, 2005). One of the ways patriarchy is maintained is by its relative invisibility. While women have been the focus of much research on gender differences, males have been largely unexamined. Men have been treated as the “generic” human being to which others are compared. But that ignores the fact that men have a gender, too. Masculinities studies have challenged that notion by examining how masculinities are performed.

There have been challenges to the construction of gender in recent decades. Since the 1960s, scholars and activists have challenged established notions of what it means to be a man or a woman. The women’s rights movement in the United States dates back to the 1800s, when the first women’s rights convention was held in Seneca Falls, New York, in 1848 (Wood, 2005). Although most women’s rights movements have been led by white, middle-class women, there was overlap between those involved in the abolitionist movement to end slavery and the beginnings of the women’s rights movement. Although some of the leaders of the early women’s rights movement had class and education privilege, they were still taking a risk by organizing and protesting. Black women were even more at risk, and Sojourner Truth, an emancipated slave, faced those risks often and gave a much-noted extemporaneous speech at a women’s rights gathering in Akron, Ohio, in 1851, which came to be called “Ain’t I a Woman?” (Wood, 2005). Her speech highlighted the multiple layers of oppression faced by black women. You can watch actress Alfre Woodard deliver an interpretation of the speech in Video Clip 8.1.

Video Clip 8.1

Alfre Woodard Interprets Sojourner Truth’s Speech “Ain’t I a Woman?”



Feminism as an intellectual and social movement advanced women’s rights and our overall understanding of gender. Feminism has gotten a bad reputation based on how it has been portrayed in the media and by some politicians. When I teach courses about gender, I often ask my students to raise their hand if they consider themselves feminists. I usually only have a few, if any, who do. I’ve found that students I teach are hesitant to identify as a feminist because of connotations of the word. However, when I ask students to raise their hand if they believe women have been treated unfairly and that there should be more equity, most students raise their hand. Gender and communication scholar Julia Wood has found the same trend and explains that a desire to make a more equitable society for everyone is at the root of feminism. She shares comments from a student that capture this disconnect (Wood, 2005):

I would never call myself a feminist, because that word has so many negative connotations. I don’t hate men or anything, and I’m not interested in protesting. I don’t want to go around with hacked-off hair and no makeup and sit around bashing men. I do think women should have the same kinds of rights, including equal pay for equal work. But I wouldn’t call myself a feminist.

It’s important to remember that there are many ways to be a feminist and to realize that some of the stereotypes about feminism are rooted in sexism and homophobia, in that feminists are reduced to “men haters” and often presumed to be lesbians. The feminist movement also gave some momentum to the transgender rights movement. Transgender is an umbrella term for people whose gender identity and/or expression do not match the gender they were assigned at birth. Transgender people may or may not seek medical intervention like surgery or hormone treatments to help match their physiology with their gender identity. The term transgender includes other labels such as transsexual, transvestite, cross-dresser, and intersex, among others. Terms like hermaphrodite and she-male are not considered appropriate. As with other groups, it is best to allow someone to self-identify first and then honor their preferred label. If you are unsure of which pronouns to use when addressing someone, you can use gender-neutral language or you can use the pronoun that matches with how they are presenting. If someone has long hair, make-up, and a dress on, but you think their biological sex is male due to other cues, it would be polite to address them with female pronouns, since that is the gender identity they are expressing.

Gender as a cultural identity has implications for many aspects of our lives, including real-world contexts like education and work. Schools are primary grounds for socialization, and the educational experience for males and females is different in many ways from preschool through college. Although not always intentional, schools tend to recreate the hierarchies and inequalities that exist in society. Given that we live in a patriarchal society, there are communicative elements present in school that support this (Allen, 2011). For example, teachers are more likely to call on and pay attention to boys in a classroom, giving them more feedback in the form of criticism, praise, and help. This sends an implicit message that boys are more worthy of attention and more valuable than girls. Teachers are also more likely to lead girls to focus on feelings and appearance and boys to focus on competition and achievement. The focus on appearance for girls can lead to anxieties about body image. Gender inequalities are also evident in the administrative structure of schools, which puts males in positions of authority more than females. While females make up 75 percent of the educational workforce, only 22 percent of superintendents and 8 percent of high school principals are women. Similar trends exist in colleges and universities, with women accounting for only 26 percent of full professors. These inequalities in schools correspond to larger inequalities in the general workforce. While there are more women in the workforce now than ever before, they still face a glass ceiling, a barrier to promotion into upper management. Many of my students have been surprised at the continuing pay gap that exists between men and women. In 2010, women earned about seventy-seven cents for every dollar earned by men (National Committee on Pay Equity, 2011). To put this into perspective, the National Committee on Pay Equity started an event called Equal Pay Day. In 2011, Equal Pay Day was on April 11. This signifies that for a woman to earn the same amount of money a man earned in a year, she would have to work more than three months extra, until April 11, to make up for the difference (National Committee on Pay Equity, 2011).
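The Equal Pay Day arithmetic is easy to check. Below is a minimal back-of-the-envelope sketch in Python; it assumes a flat 77-cents-per-dollar ratio applied uniformly across the year, so it only approximates the announced date, which the National Committee on Pay Equity sets by its own conventions.

from datetime import date, timedelta

# Back-of-the-envelope Equal Pay Day estimate. This is an illustrative model,
# not the National Committee on Pay Equity's official method.
ratio = 0.77  # women's earnings per dollar earned by men (2010 figure)

# Working at the same daily rate, matching a man's annual pay takes
# 1/ratio years, i.e. an extra (1 - ratio) / ratio of a year into the next.
extra_fraction = (1 - ratio) / ratio      # about 0.30 of a year
extra_days = round(extra_fraction * 365)  # about 109 days

equal_pay_day = date(2011, 1, 1) + timedelta(days=extra_days)
print(extra_days, equal_pay_day)          # prints: 109 2011-04-20

This flat-ratio model lands in mid-April, close to the announced April 11; the takeaway is simply that a 23-cent-per-dollar gap translates into roughly three and a half extra months of work.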

Sexuality

While race and gender are two of the first things we notice about others, sexuality is often something we view as personal and private. Although many people hold the view that a person’s sexuality should be kept private, this isn’t a reality for our society. One only needs to observe popular culture and media for a short time to see that sexuality permeates much of our public discourse.

Sexuality relates to culture and identity in important ways that extend beyond sexual orientation, just as race is more than the color of one’s skin and gender is more than one’s biological and physiological manifestations of masculinity and femininity. Sexuality isn’t just physical; it is social in that we communicate with others about sexuality (Allen, 2011). Sexuality is also biological in that it connects to physiological functions that carry significant social and political meaning like puberty, menstruation, and pregnancy. Sexuality connects to public health issues like sexually transmitted infections (STIs), sexual assault, sexual abuse, sexual harassment, and teen pregnancy. Sexuality is at the center of political issues like abortion, sex education, and gay and lesbian rights. While all these contribute to sexuality as a cultural identity, the focus in this section is on sexual orientation.

The most obvious way sexuality relates to identity is through sexual orientation. Sexual orientation refers to a person’s primary physical and emotional sexual attraction and activity. The terms we most often use to categorize sexual orientation are heterosexual, gay, lesbian, and bisexual. Gays, lesbians, and bisexuals are sometimes referred to as sexual minorities. While the term sexual preference was used previously, sexual orientation is more appropriate, since preference implies a simple choice. Although someone’s preference for a restaurant or actor may change frequently, sexuality is not as simple. The term homosexual can be appropriate in some instances, but it carries with it a clinical and medicalized tone. As you will see in the timeline that follows, the medical community has a recent history of “treating homosexuality” with means that most would view as inhumane today. Many people therefore prefer a term like gay, which was chosen and embraced by gay people, rather than homosexual, which was imposed by a then discriminatory medical system.

The gay and lesbian rights movement became widely recognizable in the United States in the 1950s and continues today, as evidenced by prominent issues regarding sexual orientation in national news and politics. National and international groups like the Human Rights Campaign advocate for rights for gay, lesbian, bisexual, transgender, and queer (GLBTQ) communities. While these communities are often grouped together within one acronym (GLBTQ), they are different. Gays and lesbians constitute the most visible of the groups and receive the most attention and funding. Bisexuals are rarely visible or included in popular cultural discourses or in social and political movements. Transgender issues have received much more attention in recent years, but transgender identity connects to gender more than it does to sexuality. Last, queer is a term used to describe a group that is diverse in terms of identities but usually takes a more activist and at times radical stance that critiques sexual categories. While queer was long considered a derogatory label, and still is by some, the queer activist movement that emerged in the 1980s and early 1990s reclaimed the word and embraced it as a positive. As you can see, there is a diversity of identities among sexual minorities.

Communiations

We can get a better understanding of current cultural identities by unpacking how they came to be. By looking at history, we can see how cultural identities that seem to have existed forever actually came to be constructed for various political and social reasons and how they have changed over time. Communication plays a central role in this construction. As we have already discussed, our identities are relational and communicative; they are also constructed. Social constructionism is a view that argues the self is formed through our interactions with others and in relationship to social, cultural, and political contexts (Allen, 2011). In this section, we’ll explore how the cultural identities of race, gender, sexual orientation, and ability have been constructed in the United States and how communication relates to those identities. There are other important identities that could be discussed, like religion, age, nationality, and class. Although they are not given their own section, consider how those identities may intersect with the identities discussed next.

Race

Would it surprise you to know that human beings, regardless of how they are racially classified, share 99.9 percent of their DNA? This finding by the Human Genome Project asserts that race is a social construct, not a biological one. The American Anthropological Association agrees, stating that race is the product of “historical and contemporary social, economic, educational, and political circumstances” (Allen, 2011). Therefore, we’ll define race as a socially constructed category based on differences in appearance that has been used to create hierarchies that privilege some and disadvantage others.

8.2.1N

There is actually no biological basis for racial classification among humans, as we share 99.9 percent of our DNA.

Evelyn – 
friends
 – CC BY-NC-ND 2.0.

Race didn’t become a socially and culturally recognized marker until European colonial expansion in the 1500s. As Western Europeans traveled to parts of the world previously unknown to them and encountered people who were different from them, a hierarchy of races began to develop that placed lighter skinned Europeans above darker skinned people. At the time, newly developing fields in natural and biological sciences took interest in examining the new locales, including the plant and animal life, natural resources, and native populations. Over the next three hundred years, science that we would now undoubtedly recognize as flawed, biased, and racist legitimated notions that native populations were less evolved than white Europeans, often calling them savages. In fact, there were scientific debates as to whether some of the native populations should be considered human or animal. Racial distinctions have been based largely on phenotypes, or physiological features such as skin color, hair texture, and body/facial features. Western “scientists” used these differences as “proof” that native populations were less evolved than the Europeans, which helped justify colonial expansion, enslavement, genocide, and exploitation on massive scales (Allen, 2011). Even though there is a consensus among experts that race is social rather than biological, we can’t deny that race still has meaning in our society and affects people as if it were “real.”

Given that race is one of the first things we notice about someone, it’s important to know how race and communication relate (Allen, 2011). Discussing race in the United States is difficult for many reasons. One is due to uncertainty about language use. People may be frustrated by their perception that labels change too often or be afraid of using an “improper” term and being viewed as racially insensitive. It is important, however, that we not let political correctness get in the way of meaningful dialogues and learning opportunities related to difference. Learning some of the communicative history of race can make us more competent communicators and open us up to more learning experiences.

Racial classifications used by the government and our regular communication about race in the United States have changed frequently, which further points to the social construction of race. Currently, the primary racial groups in the United States are African American, Asian American, European American, Latino/a, and Native American, but a brief look at changes in how the US Census Bureau has defined race clearly shows that this hasn’t always been the case (see 
Table 8.2 “Racial Classifications in the US Census”
). In the 1900s alone, there were twenty-six different ways that race was categorized on census forms (Allen, 2011). The way we communicate about race in our regular interactions has also changed, and many people are still hesitant to discuss race for fear of using “the wrong” vocabulary.

Table 8.2 Racial Classifications in the US Census

Year(s)

Development

1790

No category for race

1800s

Race was defined by the percentage of African “blood.” Mulatto was one black and one white parent, quadroon was one-quarter African blood, and octoroon was one-eighth.

1830–1940

The term color was used instead of race.

1900

Racial categories included white, black, Chinese, Japanese, and Indian. Census takers were required to check one of these boxes based on visual cues. Individuals did not get to select a racial classification on their own until 1970.

1950

The term color was dropped and replaced by race.

1960, 1970

Both race and color were used on census forms.

1980–2010

Race again became the only term.

2000

Individuals were allowed to choose more than one racial category for the first time in census history.

2010

The census included fifteen racial categories and an option to write in races not listed on the form.

Source: Adapted from Brenda J. Allen, Difference Matters: Communicating Social Identity (Long Grove, IL: Waveland Press, 2011), 71–72.

The five primary racial groups noted previously can still be broken down further to specify a particular region, country, or nation. For example, Asian Americans are diverse in terms of country and language of origin and cultural practices. While the category of Asian Americans can be useful when discussing broad trends, it can also generalize among groups, which can lead to stereotypes. You may find that someone identifies as Chinese American or Korean American instead of Asian American. In this case, the label further highlights a person’s cultural lineage. We should not assume, however, that someone identifies with his or her cultural lineage, as many people have more in common with their US American peers than a culture that may be one or more generations removed.

History and personal preference also influence how we communicate about race. Culture and communication scholar Brenda Allen notes that when she was born in 1950, her birth certificate included an N for Negro. Later she referred to herself as colored because that’s what people in her community referred to themselves as. During and before this time, the term black had negative connotations and would likely have offended someone. There was a movement in the 1960s to reclaim the word black, and the slogan “black is beautiful” was commonly used. Brenda Allen acknowledges the newer label of African American but notes that she still prefers black. The terms colored and Negro are no longer considered appropriate because they were commonly used during a time when black people were blatantly discriminated against. Even though that history may seem far removed to some, it is not to others. Currently, the terms African American and black are frequently used, and both are considered acceptable. The phrase people of color is acceptable for most and is used to be inclusive of other racial minorities. If you are unsure what to use, you could always observe how a person refers to himself or herself, or you could ask for his or her preference. In any case, a competent communicator defers to and respects the preference of the individual.

The label Latin American generally refers to people who live in Central American countries. Although Spain colonized much of what is now South and Central America and parts of the Caribbean, the inhabitants of these areas are now much more diverse. Depending on the region or country, some people primarily trace their lineage to the indigenous people who lived in these areas before colonization, or to a Spanish and indigenous lineage, or to other combinations that may include European, African, and/or indigenous heritage. Latina and Latino are labels that are preferable to Hispanic for many who live in the United States and trace their lineage to South and/or Central America and/or parts of the Caribbean. Scholars who study Latina/o identity often use the label Latina/o in their writing to acknowledge women who avow that identity label (Calafell, 2007). In verbal communication you might say “Latina” when referring to a particular female or “Latino” when referring to a particular male of Latin American heritage. When referring to the group as a whole, you could say “Latinas and Latinos” instead of just “Latinos,” which would be more gender inclusive. While Hispanic is used by the US Census, it refers primarily to people of Spanish origin, which doesn’t account for the diversity of background of many Latinos/as. The term Hispanic also highlights the colonizer’s influence over the indigenous, which erases a history that is important to many. Additionally, there are people who claim Spanish origins and identify culturally as Hispanic but racially as white. Labels such as Puerto Rican or Mexican American, which further specify region or country of origin, may also be used. Just as with other cultural groups, if you are unsure of how to refer to someone, you can always ask for and honor someone’s preference.

The history of immigration in the United States also ties to the way that race has been constructed. The metaphor of the melting pot has been used to describe the immigration history of the United States but doesn’t capture the experiences of many immigrant groups (Allen, 2011). Generally, immigrant groups who were white, or light skinned, and spoke English were better able to assimilate, or melt into the melting pot. But immigrant groups that we might think of as white today were not always considered so. Irish immigrants were discriminated against and even portrayed as black in cartoons that appeared in newspapers. In some Southern states, Italian immigrants were forced to go to black schools, and it wasn’t until 1952 that Asian immigrants were allowed to become citizens of the United States. All this history is important, because it continues to influence communication among races today.

Interracial Communication

Race and communication are related in various ways. Racism influences our communication about race and is not an easy topic for most people to discuss. Today, people tend to view racism as overt acts such as calling someone a derogatory name or discriminating against someone in thought or action. However, there is a difference between racist acts, which we can attach to an individual, and institutional racism, which is not as easily identifiable. It is much easier for people to recognize and decry racist actions than it is to realize that racist patterns and practices go through societal institutions, which means that racism exists and doesn’t have to be committed by any one person. As competent communicators and critical thinkers, we must challenge ourselves to be aware of how racism influences our communication at individual and societal levels.

We tend to make assumptions about people’s race based on how they talk, and often these assumptions are based on stereotypes. Dominant groups tend to define what is correct or incorrect usage of a language, and since language is so closely tied to identity, labeling a group’s use of a language as incorrect or deviant challenges or negates part of their identity (Yancy, 2011). We know there isn’t only one way to speak English, but there have been movements to identify a standard. This becomes problematic when we realize that “standard English” refers to a way of speaking English that is based on white, middle-class ideals that do not match up with the experiences of many. When we create a standard for English, we can label anything that deviates from that “nonstandard English.” Differences between standard English and what has been called “Black English” have gotten national attention through debates about whether or not instruction in classrooms should accommodate students who do not speak standard English. Education plays an important role in language acquisition, and class relates to access to education. In general, whether someone speaks standard English themselves or not, they tend to negatively judge people whose speech deviates from the standard.

Another national controversy has revolved around the inclusion of Spanish in common language use, such as Spanish as an option at ATMs, or other automated services, and Spanish language instruction in school for students who don’t speak or are learning to speak English. As was noted earlier, the Latino/a population in the United States is growing fast, which has necessitated inclusion of Spanish in many areas of public life. This has also created a backlash, which some scholars argue is tied more to the race of the immigrants than the language they speak and a fear that white America could be engulfed by other languages and cultures (Speicher, 2002). This backlash has led to a revived movement to make English the official language of the United States.

image

The “English only” movement of recent years is largely a backlash targeted at immigrants from Spanish-speaking countries.


Wikimedia Commons
 – public domain.
Courtesy of 
www.CGPGrey.com.

The US Constitution does not stipulate a national language, and Congress has not designated one either. While nearly thirty states have passed English-language legislation, it has mostly been symbolic, and court rulings have limited any enforceability (Zuckerman, 2010). The Linguistic Society of America points out that immigrants are very aware of the social and economic advantages of learning English and do not need to be forced. They also point out that the United States has always had many languages represented, that national unity hasn’t rested on a single language, and that there are actually benefits to having a population that is multilingual (Linguistic Society of America, 2011). Interracial communication presents some additional verbal challenges.

Code-switching involves changing from one way of speaking to another between or within interactions. Some people of color may engage in code-switching when communicating with dominant group members because they fear they will be negatively judged. Adopting the language practices of the dominant group may minimize perceived differences. This code-switching creates a linguistic dual consciousness in which people are able to maintain their linguistic identities with their in-group peers but can still acquire tools and gain access needed to function in dominant society (Yancy, 2011). White people may also feel anxious about communicating with people of color out of fear of being perceived as racist. In other situations, people in dominant groups may spotlight nondominant members by asking them to comment on or educate others about their race (Allen, 2011). For example, I once taught at a private university that was predominantly white. Students of color talked to me about being asked by professors to weigh in on an issue when discussions of race came up in the classroom. While a professor may have been well-intentioned, spotlighting can make a student feel conspicuous, frustrated, or defensive. Additionally, I bet the professors wouldn’t think about asking a white, male, or heterosexual student to give the perspective of their whole group.

Gender

When we first meet a newborn baby, we ask whether it’s a boy or a girl. This question illustrates the importance of gender in organizing our social lives and our interpersonal relationships. A Canadian family became aware of the deep emotions people feel about gender and the great discomfort people feel when they can’t determine gender when they announced to the world that they were not going to tell anyone the gender of their baby, aside from the baby’s siblings. Their desire for their child, named Storm, to be able to experience early life without the boundaries and categories of gender brought criticism from many (Davis & James, 2011). Conversely, many parents consciously or unconsciously “code” their newborns in gendered ways based on our society’s associations of pink clothing and accessories with girls and blue with boys. While it’s obvious to most people that colors aren’t gendered, they take on new meaning when we assign gendered characteristics of masculinity and femininity to them. Just like race, gender is a socially constructed category. While it is true that there are biological differences between who we label male and female, the meaning our society places on those differences is what actually matters in our day-to-day lives. And the biological differences are interpreted differently around the world, which further shows that although we think gender is a natural, normal, stable way of classifying things, it is actually not. There is a long history of appreciation for people who cross gender lines in Native American and South Central Asian cultures, to name just two.

You may have noticed I use the word gender instead of sex. That’s because gender is an identity based on internalized cultural notions of masculinity and femininity that is constructed through communication and interaction. There are two important parts of this definition to unpack. First, we internalize notions of gender based on socializing institutions, which helps us form our gender identity. Then we attempt to construct that gendered identity through our interactions with others, which is our gender expression. Sex is based on biological characteristics, including external genitalia, internal sex organs, chromosomes, and hormones (Wood, 2005). While the biological characteristics between men and women are obviously different, it’s the meaning that we create and attach to those characteristics that makes them significant. The cultural differences in how that significance is ascribed are proof that “our way of doing things” is arbitrary. For example, cross-cultural research has found that boys and girls in most cultures show both aggressive and nurturing tendencies, but cultures vary in terms of how they encourage these characteristics between genders. In a group in Africa, young boys are responsible for taking care of babies and are encouraged to be nurturing (Wood, 2005).

Gender has been constructed over the past few centuries in political and deliberate ways that have tended to favor men in terms of power. And various academic fields joined in the quest to “prove” there are “natural” differences between men and women. While the “proof” they presented was credible to many at the time, it seems blatantly sexist and inaccurate today. In the late 1800s and early 1900s, scientists who measure skulls, also known as craniometrists, claimed that men were more intelligent than women because they had larger brains. Leaders in the fast-growing fields of sociology and psychology argued that women were less evolved than men and had more in common with “children and savages” than an adult (white) males (Allen, 2011). Doctors and other decision makers like politicians also used women’s menstrual cycles as evidence that they were irrational, or hysterical, and therefore couldn’t be trusted to vote, pursue higher education, or be in a leadership position. These are just a few of the many instances of how knowledge was created by seemingly legitimate scientific disciplines that we can now clearly see served to empower men and disempower women. This system is based on the ideology of patriarchy, which is a system of social structures and practices that maintains the values, priorities, and interests of men as a group (Wood, 2005). One of the ways patriarchy is maintained is by its relative invisibility. While women have been the focus of much research on gender differences, males have been largely unexamined. Men have been treated as the “generic” human being to which others are compared. But that ignores that fact that men have a gender, too. Masculinities studies have challenged that notion by examining how masculinities are performed.

There have been challenges to the construction of gender in recent decades. Since the 1960s, scholars and activists have challenged established notions of what it means to be a man or a woman. The women’s rights movement in the United States dates back to the 1800s, when the first women’s rights convention was held in Seneca Falls, New York, in 1848 (Wood, 2005). Although most women’s rights movements have been led by white, middle-class women, there was overlap between those involved in the abolitionist movement to end slavery and the beginnings of the women’s rights movement. Although some of the leaders of the early women’s rights movement had class and education privilege, they were still taking a risk by organizing and protesting. Black women were even more at risk, and Sojourner Truth, an emancipated slave, faced those risks often and gave a much noted extemporaneous speech at a women’s rights gathering in Akron, Ohio, in 1851, which came to be called “Ain’t I a Woman?” (Wood, 2005) Her speech highlighted the multiple layers of oppression faced by black women. You can watch actress Alfre Woodard deliver an interpretation of the speech in Video Clip 8.1.

Video Clip 8.1

Alfre Woodard Interprets Sojourner Truth’s Speech “Ain’t I a Woman?”


(click to see video)

Feminism as an intellectual and social movement advanced women’s rights and our overall understanding of gender. Feminism has gotten a bad reputation based on how it has been portrayed in the media and by some politicians. When I teach courses about gender, I often ask my students to raise their hand if they consider themselves feminists. I usually only have a few, if any, who do. I’ve found that students I teach are hesitant to identify as a feminist because of connotations of the word. However, when I ask students to raise their hand if they believe women have been treated unfairly and that there should be more equity, most students raise their hand. Gender and communication scholar Julia Wood has found the same trend and explains that a desire to make a more equitable society for everyone is at the root of feminism. She shares comments from a student that capture this disconnect: (Wood, 2005)

I would never call myself a feminist, because that word has so many negative connotations. I don’t hate men or anything, and I’m not interested in protesting. I don’t want to go around with hacked-off hair and no makeup and sit around bashing men. I do think women should have the same kinds of rights, including equal pay for equal work. But I wouldn’t call myself a feminist.

It’s important to remember that there are many ways to be a feminist and to realize that some of the stereotypes about feminism are rooted in sexism and homophobia, in that feminists are reduced to “men haters” and often presumed to be lesbians. The feminist movement also gave some momentum to the transgender rights movement. Transgender is an umbrella term for people whose gender identity and/or expression do not match the gender they were assigned by birth. Transgender people may or may not seek medical intervention like surgery or hormone treatments to help match their physiology with their gender identity. The term transgender includes other labels such as transsexualtransvestitecross-dresser, and intersex, among others. Terms like hermaphrodite and she-male are not considered appropriate. As with other groups, it is best to allow someone to self-identify first and then honor their preferred label. If you are unsure of which pronouns to use when addressing someone, you can use gender-neutral language or you can use the pronoun that matches with how they are presenting. If someone has long hair, make-up, and a dress on, but you think their biological sex is male due to other cues, it would be polite to address them with female pronouns, since that is the gender identity they are expressing.

Gender as a cultural identity has implications for many aspects of our lives, including real-world contexts like education and work. Schools are primary grounds for socialization, and the educational experience for males and females is different in many ways from preschool through college. Although not always intentional, schools tend to recreate the hierarchies and inequalities that exist in society. Given that we live in a patriarchal society, there are communicative elements present in school that support this (Allen, 2011). For example, teachers are more likely to call on and pay attention to boys in a classroom, giving them more feedback in the form of criticism, praise, and help. This sends an implicit message that boys are more worthy of attention and valuable than girls. Teachers are also more likely to lead girls to focus on feelings and appearance and boys to focus on competition and achievement. The focus on appearance for girls can lead to anxieties about body image. Gender inequalities are also evident in the administrative structure of schools, which puts males in positions of authority more than females. While females make up 75 percent of the educational workforce, only 22 percent of superintendents and 8 percent of high school principals are women. Similar trends exist in colleges and universities, with women only accounting for 26 percent of full professors. These inequalities in schools correspond to larger inequalities in the general workforce. While there are more women in the workforce now than ever before, they still face a glass ceiling, which is a barrier for promotion to upper management. Many of my students have been surprised at the continuing pay gap that exists between men and women. In 2010, women earned about seventy-seven cents to every dollar earned by men (National Committee on Pay Equity, 2011). To put this into perspective, the National Committee on Pay Equity started an event called Equal Pay Day. In 2011, Equal Pay Day was on April 11. This signifies that for a woman to earn the same amount of money a man earned in a year, she would have to work more than three months extra, until April 11, to make up for the difference (National Committee on Pay Equity, 2011).

Sexuality

While race and gender are two of the first things we notice about others, sexuality is often something we view as personal and private. Although many people hold a view that a person’s sexuality should be kept private, this isn’t a reality for our society. One only needs to observe popular culture and media for a short time to see that sexuality permeates much of our public discourse.

Sexuality relates to culture and identity in important ways that extend beyond sexual orientation, just as race is more than the color of one’s skin and gender is more than one’s biological and physiological manifestations of masculinity and femininity. Sexuality isn’t just physical; it is social in that we communicate with others about sexuality (Allen, 2011). Sexuality is also biological in that it connects to physiological functions that carry significant social and political meaning like puberty, menstruation, and pregnancy. Sexuality connects to public health issues like sexually transmitted infections (STIs), sexual assault, sexual abuse, sexual harassment, and teen pregnancy. Sexuality is at the center of political issues like abortion, sex education, and gay and lesbian rights. While all these contribute to sexuality as a cultural identity, the focus in this section is on sexual orientation.

The most obvious way sexuality relates to identity is through sexual orientation. Sexual orientation refers to a person’s primary physical and emotional sexual attraction and activity. The terms we most often use to categorize sexual orientation are heterosexual, gay, lesbian, and bisexual. Gays, lesbians, and bisexuals are sometimes referred to as sexual minorities. While the term sexual preference has been used previously, sexual orientation is more appropriate, since preference implies a simple choice. Although someone’s preference for a restaurant or actor may change frequently, sexuality is not as simple. The term homosexual can be appropriate in some instances, but it carries with it a clinical and medicalized tone. As you will see in the timeline that follows, the medical community has a recent history of “treating homosexuality” with means that most would view as inhumane today. So many people prefer a term like gay, which was chosen and embraced by gay people, rather than homosexual, which was imposed by a then-discriminatory medical system.

The gay and lesbian rights movement became widely recognizable in the United States in the 1950s and continues today, as evidenced by prominent issues regarding sexual orientation in national news and politics. National and international groups like the Human Rights Campaign advocate for rights for gay, lesbian, bisexual, transgender, and queer (GLBTQ) communities. While these communities are often grouped together within one acronym (GLBTQ), they are different. Gays and lesbians constitute the most visible of the groups and receive the most attention and funding. Bisexuals are rarely visible or included in popular cultural discourses or in social and political movements. Transgender issues have received much more attention in recent years, but transgender identity connects to gender more than it does to sexuality. Last, queer is a term used to describe a group that is diverse in terms of identities but usually takes a more activist and at times radical stance that critiques sexual categories. While queer was long considered a derogatory label, and still is by some, the queer activist movement that emerged in the 1980s and early 1990s reclaimed the word and embraced it as a positive label. As you can see, there is a diversity of identities among sexual minorities.

Why blockbusters are taking over the arts

Harvard’s Anita Elberse on why the ‘long tail’ is not where the money is

By Craig Fehrman, Globe Correspondent, October 13, 2013, 12:33 a.m.

“Iron Man 3” grossed $1.2 billion worldwide. (Walt Disney Pictures / Globe staff photo illustration)

EARLY THIS SUMMER at the University of Southern California, Steven Spielberg sat before an audience and worried aloud that Hollywood had become too dependent on blockbuster movies. The director, hardly a stranger to big summer hits, was concerned that studios were fixating on franchises and sequels to the point that they no longer wanted anything else. “There’s going to be an implosion,” Spielberg warned, “where three or four or maybe even a half-dozen megabudget movies are going to go crashing into the ground.”

The next few weeks seemed to bear out his prediction, as “The Lone Ranger” (reported cost: $215 million), “R.I.P.D.” ($130 million), and other big titles flopped. But then a funny thing happened. When summer came to an end, Hollywood had brought in more money than ever: a domestic box office of $4.76 billion. For every “Lone Ranger” there had been an “Iron Man 3” and a “Fast & Furious 6.” Hollywood wasn’t collapsing under the weight of its blockbusters. It was enjoying its best summer ever.

Harvard Business School professor Anita Elberse recently published a book on how the entertainment industry is obsessed with producing big blockbusters. (Juliette Lynch for the Boston Globe)

That might have surprised Spielberg, but it’s exactly what Anita Elberse expected. Elberse, a professor at Harvard Business School, has spent a decade studying the entertainment industry and how it’s changing in the online economy. Many observers had predicted the Web would revolutionize our culture and wildly expand our choices—and in some ways, it has. But in her new book “Blockbusters,” Elberse argues that for entertainment companies at least, the digital shift has only amplified the star system already in place. Movie studios now succeed by sinking extra resources into a handful of super-hits, and the public responds by flocking to them. “Blockbusters” shows that this strategy has also worked for book publishers, music labels, TV networks, and video game companies.

Elberse analyzes the realm of culture with a rigorous, numbers-driven approach. One of her central findings has been that Chris Anderson’s influential “long tail” theory, which imagined a digital future in which we would happily browse a niche-filled utopia, hasn’t quite worked out as promised. In the pages of the Harvard Business Review, and now in her new book, Elberse has mounted a forceful argument against it, showing that instead of producing a “long tail” of modest successes, consumers respond to an overwhelming mass of products by drifting back to the biggest brands. “Blockbusters,” she writes, “will become more—not less—relevant in the future.”

The notion that blockbusters are doing better than ever has been a big relief for entertainment companies worried that digital content would gut their business. For the wider culture, however, it might not sound so encouraging. Who wants to live in a world where there’s “Fast & Furious 12” and little else?

It’s easy to blame movie studios and publishers for crassly chasing the easy money. But Elberse’s book shows the reasons lie with us, as well. We may think we’ll use the Internet as a gateway to marvelous and obscure new music, books, movies, and so on—but to a significant extent, we’re really using it for a mass discussion of Miley Cyrus’s new number one hit. A blockbuster economy, it seems, is what happens when people get what they really want.

***

ELBERSE’S OFFICE on the Harvard campus isn’t that of a typical business professor. “Most of my colleagues have research awards on the shelf,” she jokes. “I have party invites.” In one corner sits a guitar autographed by the guys in Maroon 5; on the wall hangs an invitation to LeBron James and Jay-Z’s Two Kings’ Dinner.

Before she was an expert on the entertainment industry, the Dutch-born Elberse was a fan. “I spend way too much time watching television, going to sports games, going to movies,” she says. But for all the cultural chatter about those events, Elberse noticed that very few scholars were studying them empirically. “It struck me that there’s an awful lot of data in the public domain for these sectors,” she says. “The movie industry publishes weekly sales numbers—not many industries do.”

While a graduate student at the London Business School, Elberse decided to quantify the best entertainment business strategies, building complex models that controlled for all kinds of factors. Subrata Sen, a professor in Yale’s School of Management, still remembers the novelty of Elberse’s 2002 dissertation on the film industry. “She doesn’t just wave her hands and make some general statement,” Sen says. “She actually works with the numbers. She does the math.” Once she got to Harvard in 2003, Elberse began mixing in more qualitative research as well, including interviews with book publishers, music executives, and movie producers.

While doing this work, Elberse kept bumping up against a popular new idea: the long tail. According to Chris Anderson, who developed the theory in a 2004 Wired article, then in a 2006 book, the Internet makes it easier than ever to produce, distribute, and buy products—and this freedom would transform customer behavior. With evangelical fervor, he wrote of an end to the era of bland, one-size-entertains-all popular culture. A typical mass-market “demand curve” slopes from left to right, graphing the fall-off in popularity from the megahits in the “head” to the less trendy “tail,” which represents the many products with a relatively small audience. The Web, Anderson predicted, would empower us to reach beyond the high-volume head of the curve to the long and ever-expanding tail, where people would increasingly create and consume products better suited to their personal tastes.
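Anderson’s “head” and “tail” are often modeled with a power-law (Zipf-like) demand curve, in which the k-th most popular item sells in proportion to 1/k^s. The sketch below is purely illustrative; the catalog size and exponent are hypothetical values chosen for the demonstration, not Anderson’s or Elberse’s figures.

    # Illustrative head-vs.-tail arithmetic under a Zipf-like demand curve.
    # Assumption: sales of the k-th most popular item scale as 1 / k**s.
    N = 1_000_000   # hypothetical catalog size
    s = 1.1         # steepness; larger s concentrates demand in the head

    sales = [1 / k**s for k in range(1, N + 1)]
    total = sum(sales)

    print(f"Top 100 items:      {sum(sales[:100]) / total:.0%} of sales")
    print(f"Least popular half: {sum(sales[N // 2:]) / total:.0%} of sales")
    # With s = 1.1, the 100 biggest hits capture roughly half of all sales,
    # while the bottom 500,000 titles together capture only a few percent.

Whether real demand is this concentrated is exactly the empirical question Elberse’s data address.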

Anderson’s book itself became a blockbuster, and his theory became a key framework for understanding the cultural marketplace. But Elberse was skeptical from the start. “I remember thinking, this just does not jibe with the underlying data I’ve seen for the industry,” she says.

No one disputes that the Internet gives consumers many more choices—just compare your local bookstore’s selection to Amazon’s. But when it comes to what most of us actually buy and read, Elberse argues, there’s little evidence that we’re taking advantage of the immense variety out in that tail. Perhaps the best example comes from digital music. From 2007 to 2011, the number of unique songs that sold at least one copy, largely through iTunes, exploded from 3.9 million to 8 million. But in 2011 nearly a third of those songs sold only one copy—a percentage that keeps increasing every year. And 94 percent of the songs sold fewer than 100 copies.

The long tail, it turns out, is a pretty lonely place. Instead, more and more fans are moving to the head, where the blockbusters reside. In 2007, 36 songs sold at least a million copies. But by 2011, more than a hundred songs sold that many. Put another way, a mere 0.001 percent of the available songs was responsible for 15 percent of all sales. “Every time new data come out,” Elberse says, “we see more demand shifting to the head.”
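A quick back-of-the-envelope check makes the head’s dominance concrete; this assumes the 8-million-song catalog cited above.

    # How many songs is "0.001 percent of the available songs"?
    catalog = 8_000_000          # unique songs sold in 2011, per the article
    head_fraction = 0.00001      # 0.001 percent expressed as a fraction
    print(f"{catalog * head_fraction:.0f} songs")   # 80 songs
    # About 80 titles out of 8 million accounted for 15 percent of all sales.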

In “Blockbusters,” Elberse shows how Warner Bros. has capitalized on this trend. After a strategy shift in 1999, the studio began committing an unprecedented chunk of resources to a mere handful of movies. In 2010, for example, Warner Bros. put a third of its production budget and nearly a quarter of its marketing budget into just three of its 22 movies: “Harry Potter and the Deathly Hallows, Part 1”; “Inception”; and “Clash of the Titans.” It worked: Those three generated more than 50 percent of the studio’s worldwide box office.

The blockbuster strategy doesn’t always work—for Warner Bros., it’s led to disasters like “Green Lantern” and “Speed Racer”—but over time, Elberse demonstrates, the approach consistently produces the highest returns. Last year, Warner Bros. became the first studio in history to earn $1 billion or more for 12 straight years, and Elberse has uncovered the same pattern in other fields. When a music executive described one of Lady Gaga’s albums, he invoked the Hollywood model. The release, he said, had been orchestrated like “a movie blockbuster in the summer months, like ‘Avatar.’”
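The concentration is easy to restate as shares of the slate; the percentages below are derived from the figures reported above and are approximate.

    # Warner Bros.' 2010 slate, as reported above (approximate shares).
    films_big, films_total = 3, 22
    print(f"Big bets: {films_big / films_total:.0%} of the slate")  # about 14%
    # Roughly 14 percent of the slate received about a third of the production
    # budget and a quarter of the marketing budget, and returned more than
    # half of the studio's worldwide box office.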

***

FOR ENTERTAINMENT EXECUTIVES, the blockbuster strategy makes a lot of sense. But what about for the rest of us? Do big movies succeed because they’re what we want, or because the studios invest lots of money in pushing them on us? Elberse wrote her dissertation on that question, creating models that accounted for a movie’s budget, its stars, the number of theaters, the quality of the reviews, and more. She found that both factors were at work. “Success is a combination of supply and demand forces,” she says.

In other words, a big part of any blockbuster’s appeal is that we simply like blockbusters. And here we’ve changed less than you might think. In fact, Elberse says, the work that best explains today’s consumers isn’t Anderson’s “Long Tail” but the far less seductively titled “Formal Theories of Mass Behavior,” a 1963 book by sociologist William McPhee.

Elberse first learned about McPhee’s book when an emeritus professor at Harvard mentioned it during one of her presentations. When she checked the title out at the campus library, she saw it hadn’t been borrowed since 1973. McPhee constructed a series of experiments where people evaluated 12 different entertainment options. What he found was that most fans of pop culture were fairly light consumers—they didn’t consume many products, but when they did they preferred the biggest hits. The heavier consumers (the film buffs, the music junkies) were more likely to dip into what we now call the long tail. But McPhee also found that they were less likely to enjoy those obscure items. Even movie buffs liked blockbusters, he observed—and most of the long tail just wasn’t that good.

When Elberse read McPhee’s findings, she recognized them instantly. “I still remember the feeling of, ‘Oh my God, he described it back then,’” she says. She’s replicated his model in all sorts of modern settings—for example, in the user queues of Quickflix, an Australian version of Netflix. But Elberse also believes it makes intuitive sense. “It’s really not fun to have seen a movie that you want to talk about and you can’t find anyone else who’s seen it,” she points out. “It’s much better to say, ‘Did you watch yesterday’s “Scandal” episode? Oh my God, can you believe…’”

Elberse’s findings about the profitability of big hits have reassured those inside the industry who had feared the long tail would end their businesses. “Throughout the 2000s, there was a lot of questioning and concern,” says James Diener, the president of Maroon 5’s record label (and a subject for one of Elberse’s early case studies). Elberse’s models “demystified, even within the music industry, what was often mysterious to us,” Diener says.

Elberse expects the strategy to keep working: “I don’t really see a saturation point anytime soon,” she says. But to some that sounds worrisome. At USC, Spielberg didn’t just question the durability of the blockbuster strategy—he questioned its impact on quality, too. “You’re at the point right now,” the director said, “where a studio would rather invest $250 million in one film for a real shot at the brass ring than make a whole bunch of really interesting [projects].”

One hears echoes of Spielberg’s concern across the entertainment industry. Sub-blockbuster commercial products—what book publishers call the “midlist”—are where a lot of the best popular art has traditionally emerged. That space has also nurtured and supported artists before they started producing hits. Jonathan Franzen, to take only one example, wrote two slow-selling novels before his breakthrough “The Corrections.” Yet with bigger profits coming from blockbusters, today’s companies now have less incentive to invest in music that’s not obviously Top 40, or in TV shows that try something new. As a Warner Bros. executive told Elberse, “because technology is shrinking the pie, at least in the foreseeable future, we’ll have to make fewer smaller movies.”

Elberse can point to a few sound business reasons for making smaller movies, even in a blockbuster age. Smaller movies help movie studios preserve their relationships with movie theaters and maintain a flexible schedule. They’re the best place to try out new concepts or actors. “You don’t want to do your R&D in a blockbuster,” Elberse says.

But she also notes that, in the end, the cultural products that thrive are up to us. “In a way it’s our fault for not going to the movies more often,” she says. “Would I prefer to see ‘Lincoln’ over ‘Iron Man 5’? Yes. But is that representative of the general population? No. There’s clearly an enormous group of people out there who find tremendous value in these blockbuster movies.”


Intercultural Communication

Michael W. Robinson

COMM240

April 1, 2022

My first experience as a traveler was a trip to Hawaii, and my first impression was that it is a very beautiful place that I loved very much. Hawaii is beautiful because of the experience of interacting with nature and with its beaches. The islands are green and endowed with beautiful waterfalls and vegetation, which gave me wonderful views for taking pictures. It is a serene place, and the beaches I visited were clean and lined with beautiful hotels offering good accommodation and service. After arriving and visiting a few places, I felt that Hawaii is a wonderful destination for tourists and that its people are friendly and hospitable, which makes it a good place to visit.

As indicated, my immediate impression was that the people were good and hospitable. When I interacted with people in Hawaii, I tried to show kindness and made sure not to offend anyone in any way. Most of my communication there was verbal, and before the trip I worried that there would be a language barrier between me and the local people (Dasih et al., 2019). I was delighted to find that most people spoke English, which made it much easier to communicate during my stay. When I spoke with people and asked questions, I gave as much detail as I could, which helped me be clear in my communication and made it easy for people to understand me.

If I ever have the opportunity to visit Hawaii again, there are a few changes I would make. The first is that I would book a longer stay. I stayed for only a couple of days, and the tour guides told me about beautiful sites I could not visit because my time was limited. The second change is that I would make sure I had enough money set aside. This would be important because I had not budgeted enough for the trip, and I felt that more money was needed to enjoy the place, make more beautiful memories, and shop around.

I would describe my entire experience in Hawaii as wonderful, and this course taught me a great deal that will be helpful in the future. I have learned much about intercultural communication, and one lesson from the course is that cultural differences can create communication barriers between individuals. I learned that during intercultural communication it is very important to be culturally aware so that you do not use gestures or other nonverbal language that could be offensive. I also learned the importance of paying attention to nonverbal communication, because much is communicated through body language and facial expressions that is never put into words (Dasih et al., 2019). Finally, I learned about stereotypes and prejudices and how they can negatively affect intercultural communication. Stereotypes are opinions we hold about others and how they behave; in most cases they are not true, and they shape the reception we give people.

References

Dasih, I. G. A. R. P., Triguna, I. B. G. Y., & Winaja, I. W. (2019). Intercultural communication based on ideology, theology and sociology. International Journal of Linguistics, Literature and Culture, 5(5), 29–35.

Chapter 7 Verbal communication: How can I reduce cultural misunderstandings in my verbal communication?

Communication accommodation theory

Often people with different speaking styles communicate with each other, even from within the same nation. Basil Bernstein (1966) stated that the social situation, including communicative context (for example, a job interview versus a party) and social relationships (for example, peers versus status unequals), dictates the forms of speaking used in a particular situation. Bernstein suggested that in all cultures, there are different types of codes. A restricted code is a code used by people who know each other well, such as jargon or argot. Jargon refers to a vocabulary used by people within a specific profession or area (such as rugby players or mine workers), while argot refers to language used by those in a particular underclass, often to differentiate themselves from a dominant culture (e.g., prostitutes, prisoners). However, as people get to know each other better, even good friends can develop this sort of linguistic shorthand, speaking in terms or references that others do not understand. In an elaborated code, people spell out the details of meaning in the words in a way that those outside of the group can understand them. This switching back and forth between codes is called code-switching. Effective communicators should be able to speak in restricted codes appropriate to their context, but also know how to switch to elaborated code (for example, to include outsiders)—to change their vocabulary, level of formality, and so on, to match the audience and social occasion.

Based on the notions of different codes within a community, as well as code-switching and other theoretical ideas, Howard Giles and his colleagues introduced communication accommodation theory (Giles & Noels, 2002; Gallois et al., 2005). This theory predicts how people adjust their communication in certain situations, the factors that lead to such changes, and the outcomes of different types of changes.

In the U.S. television series, Lost, through a series of flashbacks and present communication, we observe the speech of Jin Kwon (Daniel Dae Kim), a Korean man, the son of a fisherman, but hired by a wealthy restaurant owner. In some cases, his communication is respectful, indirect, deferential; in others, it is direct, friendly or aggressive, and nonverbally more expressive. In some cases, he might change his behavior to be more like that of the person with whom he is speaking (convergence), and in others, he might make no changes in his behavior (maintenance) or even highlight his own style to mark it as different from that of the other group (divergence). Jin can change his behavior in terms of nonverbal behavior (distance, posture, touch, etc.), paralinguistic behavior (tone of voice, rate of speech, volume, etc.), and verbal behavior (word choice, complexity of grammar, topic of conversation, turn-taking, etc.). Many things influence shifts in his speech, such as the status and power of the other communicator, the situation, who is present, communication goals (for example, to seem friendly, or to show status or threat), the strength of his own language in the community, and his communication abilities.

Break it down

Tell about a time that you moved back and forth between an elaborated and a restricted code. This might have happened at a workplace, if your work has a specific jargon, or even as you move between slang your friends use and the talk you use with parents or teachers. What are some ways that “code-switching” can be effective or ineffective in communication? How can we use an awareness of others around us (such as international students) to use code-switching appropriately to make their communication adjustment easier and to make them feel more accepted?

Communication and sites of dominance

Convergence can often go wrong. Giles and Noels (2002) explain that, although converging is usually well received, we can overaccommodate, or converge too much or in ineffective ways, by adjusting in ways we might think are appropriate, but are based on stereotypes of the other. People often speak louder and more slowly to a foreigner, thinking that they will thus be more understandable. Overaccommodation also works in situations of dominance. For example, younger people often inappropriately adjust their communication when talking with elderly people. Often called secondary baby talk, this includes a higher pitch in voice, simpler vocabulary, and use of plural first-person (“we”—“Would we like to put our coat on? It’s very cold outside”). While some older people find this type of communication comforting, especially from health workers, some feel it speaks down to them and treats them as no longer competent. A similar feeling might be experienced by Blacks in the United States when Whites use hyperexplanation. This inappropriate form of adjustment also includes use of simpler grammar, repetition, and clearer enunciation. But Harry Waters (1992) suggests that it is a behavior some Whites engage in while talking with Blacks (or other minority members)—perhaps based on real communication differences or perhaps based on stereotypes, but certainly leaving hurt feelings or resentment on the part of the Black listeners.

Writers have outlined the ways in which word choice, turn-taking and length, or topic selection may also serve to exclude others, often without us even being aware of it (Fairclough, 2001; Tannen, 1994). Don Zimmerman and Candace West (1975) found that while women “overlapped” speech turns in talking to men, often with “continuers” (“mm hmm,” “yes”) that continued the turn of the male, men were more often likely to interrupt women, often taking the turn away from them. And when women did interrupt men, the men did not yield the turn to women, while women did yield the turn to men. Jennifer Coates (2003), observing storytelling, found that men and boys often framed themselves as heroes, as being rebels or rule-breakers. In analysis of family communication, she found that there is “systematic” work done by all family members in many families to frame the father as either the primary storyteller or the one to whom children tell their stories. Coates concludes, “Family talk can be seen to construct and maintain political order within families. . . . to conform roles and power structures within families” (p. 158), giving men more power in most mixed-gender storytelling over women. We can see that each aspect of verbal communication could be used in ways to impose power over others, often based on group identity, cultural difference, maintenance of group power, or, simply put, prejudice.

Baldwin, J. R., Coleman, R. R. M., González, A., Shenoy-Packer, S., & González, A. (2014). Intercultural communication for everyday life. John Wiley & Sons, Incorporated.

 1. Reference: Complete an APA reference for each article.

2. Annotation: Complete an annotation for each article.

  • The annotation is not a summary or paraphrase. In this paragraph, you will interpret and evaluate the contents of the article itself. It is a narrative paragraph of about 100 words providing information and assessment about the article.

3. Reflection: Write a reflection for each article. The reflection should connect your intercultural communication experience(s) with information from the article as it applies to you personally. (Only this section may be written in the first person.)

  • In this paragraph of about 100 words, relate the information that you have evaluated in the article to your own cultural identity and intercultural communication. This is a reflective piece where you are able to connect the information in theory to an understanding of your own identity.