Political Party Affiliation Class Assignment

Abstract

People who belong to the same group often behave alike. Is this because people with similar preferences naturally associate with each other or because group dynamics cause individual preferences and/or the information that they have to converge? We address this question with a natural experiment. We find no evidence that peer political identification affects individual identification. But we do find that peer engagement affects political identification: a more politically engaged peer group encourages individual political affiliation to move from the extremes to the centre.

1. Introduction

People often behave alike when they know each other well. Friends, for example, frequently vote for the same party, send their children to similar schools, choose the same types of vacations or enjoy eating at certain restaurants and not at others. Groups are formed by such commonalities and they pose a fundamental question for social science. Do such commonalities arise because people with prior preferences for ‘x’ naturally associate with fellow ‘x’ seekers and share information, or does membership of the group encourage conformity because the psychological dynamics within a group are such that individual preferences become more alike? This is the question that we address in this paper with a natural experiment, focusing on political behaviour.

The question matters because much in economics and some versions of liberal political theory turns on taking individual preferences as given. The appeal, for instance, of the Pareto criterion in welfare economics and the ‘will of the people’ as a justification for democratic decision-making depends on being able to identify individuals with their preference and this becomes problematic if an individual’s preferences change with those of their peers.1 The question, however, is difficult to answer. To control adequately for possible prior commonalities, common shocks and the role of information transmission within a group, and so identify whether there is a distinct peer effect on individual preferences, is not easy. This is why experiments, where the scope for such control is often greater, are attractive. The laboratory experimental evidence, however, is mixed on this general question. For example, Hung and Plott (2001) interpret the evidence from their information cascade experiment as telling in favour of information transmission and against preference change in the explanation of behavioural conformity. But, the evidence on the unpredictability of music bandwagons in Salganik et al.’s (2006) experiment is difficult to reconcile with information transmission alone. In this paper, we report on a natural experiment where we attempt to disentangle the contribution of prior commonalities and the possible information transmission effect within a group from the possible influence that peers have on other individuals’ specific political preferences.

We consider whether there is evidence of peer effects on two types of individual political behaviour. One is an individual’s substantive political identification on a left-right spectrum; the other is an individual’s engagement with the process of politics, as revealed by their acquisition of information on candidates in an election and their willingness to vote in an election.2 Where there is evidence that a peer’s political identification and/or engagement affects individual political identification and/or engagement, we exploit aspects of the data to consider whether it arises from a peer influence on the political information that individuals have or on their preferences.

There is a large literature on peer effects in politics.3 The specific evidence on peer effects on political identification is mixed. Some studies find evidence consistent with the claim that people follow their peers’ political affiliations (Kenny, 1994; Beck, 2002; Sinclair, 2009); others find no association (MacKuen and Brown, 1987). But much of this evidence is based on correlations that are subject to selection biases: that is, the correlations could arise from people with shared prior commonalities naturally being drawn together. We address this difficulty in the natural experiment by exploiting the fact that our data consist of freshman students who have been randomly divided between different class groups for the introductory courses in their chosen major subject. This means that the characteristics of the peers in a person’s class group should be independent of his or her own characteristics. We interview students twice in an election year (before the presidential campaign and after the election). To test for peer effects, we examine whether and how their identification and engagement in the second survey correlate with their classmates’ initial political orientations and engagement.

There are other studies that use an experimental or quasi-experimental framework for the same reason. For example, Sacerdote (2001), Lyle (2009), and Carrell et al. (2011) use data on randomly assigned networks to identify peer effects on student performance, physical fitness and workers’ productivity, respectively. The closest to our study are the natural and field experiments that have examined peer effects on voter turnout (Gerber et al., 2008; Nickerson, 2008; Klofstad, 2009, 2010; Funk, 2010; Panagopoulos, 2010). Their findings are consistent with voting being contagious within social circles. But little is known about the mechanism producing conformity in this instance. Does it arise because individuals become better informed about political choices through interaction with peers and so become more inclined to vote? Or do peer preferences for political engagement strengthen what would otherwise be weak individual preferences for political engagement?4 The difference matters for the reason discussed above, and our natural experiment is useful in distinguishing between these possible explanations of peer effects on the likelihood of voting, as well as on other aspects of political behaviour.

We find no evidence that peer political identification influences individual political identification. Interestingly, when we relax the controls for prior commonalities among the members of a group, we find an apparent peer political identification effect on individual political identification. This suggests that the failure to control fully for prior commonalities can, in practice, be a serious problem: it can lead to misleading inferences over the sensitivity of individual behaviour to peers.

We do find evidence, however, of a peer engagement effect on individual political identification and possibly on the willingness to vote. This might seem troubling for those who take preferences as given, especially as there is no evidence that this effect arises because individuals acquire more information through the media and only weak evidence that their political knowledge of the candidates improves. But, the fact that peer engagement appears to encourage individual political identification to move towards the Centre suggests a different and less troubling interpretation, especially as there is no evidence that this effect comes from the mere existence of differences in view within the peer group (as in Mutz 2002a, 2002b). It is an effect that is associated with an engaged peer group and if initial political affiliations are held with some uncertainty then discussion within an engaged group can help clarify an individual’s own affiliation on the left-right-wing scale with the result that there is regression to the mean in the form of a movement to the Centre.

In the next section, we explain the data and describe the natural experiment on freshman students at Brazil’s largest university. We set out the model that we use for identifying peer effects in Section 3. Section 4 presents the estimates of peer effects. Section 5 discusses these results and we conclude in Section 6.

2. Data and identification of peer effects

2.1 Overview

The data are based on freshman students at the Universidade de São Paulo (USP). The move from high school to university marks a natural transition to adulthood in which new networks are formed. USP is the largest university in Brazil and its freshman students are randomly allocated to classrooms. As a result, these classes plausibly represent new, randomly created peer groups for the incoming students. Our strategy was to sample these freshman students early in the academic year, before the commencement of a presidential campaign, to establish prior values of the individual variables relating to preferences for political affiliation and engagement. For each individual we calculate peer variables for two key measures, relating to political engagement and political affiliation. We then re-survey the sample at the conclusion of the presidential election and test whether individual political affiliation, engagement and knowledge at this later date correlate with the peer variables.
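
The construction of the peer variables can be illustrated with a minimal sketch. This section does not spell out the exact formula, so the leave-one-out classroom mean below (each student's peers summarized by the average of classmates' first-wave answers, excluding the student) is an illustrative assumption, not the authors' confirmed procedure:

```python
# Hypothetical sketch of a peer variable: the leave-one-out classroom mean.
# The exact construction is an assumption here, not taken from the paper.

def leave_one_out_means(values):
    """For each student i, the mean of classmates' values, excluding i."""
    total, n = sum(values), len(values)
    return [(total - v) / (n - 1) for v in values]

# Example: first-wave indicator 'intends to watch the campaign on TV'
# for one tiny, made-up classroom.
watch_tv = [1, 0, 1, 1, 0]
peer_watch_tv = leave_one_out_means(watch_tv)  # [0.5, 0.75, 0.5, 0.5, 0.75]
```

Because the student's own answer is excluded, the peer variable is not mechanically correlated with the student's own first-wave response.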

The choice of surveying freshman students who enter during a presidential election year is important for the identification of peer effects. The election makes the transition to adulthood particularly salient because voting is compulsory for everyone aged 18 or above in Brazil. The campaign, which occurs between the first and final samples of the individual variables, is also a natural political event that might cause individuals to think about politics and so become exposed to peer effects, if there are any.

There are strong grounds for supposing that the social life in classrooms is an appropriate environment to measure peer effects. USP freshmen have all their introductory lectures with the same group of classmates during their first term in university (when we first interview them). They have at least two lectures together per day5 and they interact outside the class with each other through academic activities such as study groups and joint course projects. In addition, there are fewer alternative university peer groups than is typically the case at UK and US universities because most students are local and live at home (74%). Classmates are the first group of students they meet in college and it is a relatively large pool of possible friends (the average size of a classroom is 33 students). In short, between our surveys, students became friends, interacted in classes, and were exposed to a presidential campaign that made politics salient for discussions within social circles.

2.2 The sample and method of data collection

USP has 86,187 students enrolled and offers 229 undergraduate and graduate courses. To be enrolled, undergraduate students must complete secondary education and pass an entrance exam (‘Vestibular’), which is specific to each USP major and runs once a year. USP is a public, tuition-free university and one of the most prestigious universities in Latin America. For these reasons, the USP entrance exam is highly competitive: in 2011, for instance, there were 138,888 applicants for an enrolment of only 10,202.6

Our data come from the 2010 cohort of freshman students enrolled in specific subject majors: architecture, business administration, economics, history, law, literature, mathematics, physics, and sociology. For these majors, USP admits more than 180 students per year and divides the freshmen into at least two classes for the introductory courses. While students obviously choose their subject major, they cannot choose their class assignment: it is based either on alphabetical order or on a university algorithm. Since the initial process of allocating students to classes is random, our classes and the peer variables should be free from the more obvious sources of selection bias.7

The same survey procedure was used in all classes. An interviewer entered the classroom about 15 minutes before the end of a lecture, read an introductory script aloud, and distributed the questionnaires to all students. Lecturers also contributed by asking that attention and consideration be given to the survey. Students then had 10–12 minutes to complete the questionnaires individually. The survey was titled ‘Young Adults’ Political Behaviour’ and the contact details of the authors were given for further information. The instructions made it clear that students should answer the survey individually. In every class, four types of questionnaires, containing the same questions but in a different order, were randomly distributed to students (to encourage individual answering). Practically all students agreed to answer the survey (in a few classrooms, one or two students failed to return the completed survey), and 95.54% of the respondents declared that they had answered the questions in a serious manner. The questions cover individual demographics, political knowledge, political identification, media consumption and parents’ political commitments.

The first wave, pre-election, was administered during April 2010 (henceforth, t-1). The questionnaires were collected before all candidates had formally entered the race or named their running mates (in June), before the beginning of the TV presidential campaign (in July) and before any of the three TV debates (in August and September). There is also evidence that media interest in the election picked up notably after the first wave.8 So it is likely that people tended to form opinions and to discuss politics more enthusiastically from July on, after the first wave.

Nevertheless, it is possible that some peer effects had already occurred by the time of the first wave because it was conducted roughly one month after the beginning of classes. To test for this possibility, we conduct several tests for random assignment at this stage. First, following Sacerdote (2001), we use a standard test for random selection. This is discussed in Section 4.1 and does not cause any concern. Second, we test whether the variance of the peer variables across classrooms within a major class (the randomization level) is consistent with students’ random assignment to classrooms. When peer variables differ too much or too little (relative to those generated by a lottery), one explanation is that peer effects have occurred. The data largely reject the hypothesis that the variances are unusual.9 Based on this evidence and the early stage of the election year, we take the first wave of the survey as supplying information on pre-determined characteristics.
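
The second check can be sketched as a permutation test. Students are repeatedly reshuffled across the classrooms of a major to obtain the lottery distribution of the across-classroom variance of a peer variable; an observed variance in either tail would then be the warning sign described above. The function names and the two-sided p-value below are illustrative assumptions, not the paper's exact implementation:

```python
import random

def lottery_variance_dist(values, class_sizes, n_sims=10000, seed=0):
    """Across-classroom variance of class means under a lottery:
    shuffle the students, cut them into classes of the observed sizes,
    and record the variance of the class means, n_sims times."""
    rng = random.Random(seed)
    dist = []
    for _ in range(n_sims):
        shuffled = values[:]
        rng.shuffle(shuffled)
        means, start = [], 0
        for size in class_sizes:
            means.append(sum(shuffled[start:start + size]) / size)
            start += size
        mu = sum(means) / len(means)
        dist.append(sum((m - mu) ** 2 for m in means) / len(means))
    return dist

def two_sided_p(observed_variance, dist):
    """Two-sided permutation p-value: small when the observed variance
    sits in either tail of the lottery distribution."""
    lo = sum(d <= observed_variance for d in dist) / len(dist)
    hi = sum(d >= observed_variance for d in dist) / len(dist)
    return min(1.0, 2.0 * min(lo, hi))
```

A small p-value for a major would flag classrooms whose peer variables are more alike, or more different, than random assignment can explain.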

The second wave of the survey was administered just after the first round of the presidential elections, during October 2010 (henceforth, t). Students were asked the same questions as in the first wave, and they took a political quiz (piloted beforehand to ensure all questions were clear). The first survey yielded 1,593 student responses from 48 classes; the second wave, 1,103 student responses from 39 classes. Our panel sample consists of the students who responded to both surveys, a total of 635 students.10,11 This is the main sample used in the analysis. It represents 39.8% of the initial sample. Two things should be noted about this. First, the peer variables for these individuals are calculated from the larger initial survey of relevant individuals. Second, the panel sample has many similarities with the USP student population.12 We test whether the attrition is in any sense unbalanced or non-random so as to bias the results. We do this in three ways.

First, we investigate whether there is any correlation between abstention from the second survey and our peer variables. We investigate this association across students within a major class (e.g. comparing the behaviour of students enrolled in Economics-evening but assigned to different classrooms). We find no association.13 In other words, variations in the proportions of classmates who self-declare as right-wing, or who have a partisan parent, in t-1 are unlikely to cause abstention from the survey in t.

Second, following Good (2006), we simulate random groups to calculate an empirical confidence interval for panels generated randomly, and test the null hypothesis that the observed classroom panels were formed ‘as if randomly’. For each classroom, we randomly drew, from the group observed in the first survey, a sample without replacement of the same size as the observed panel. We calculated the average characteristic of those selected for the simulated panel. We repeated this process 10,000 times to obtain an empirical 90% confidence interval of the panel characteristics for each classroom. To summarize individuals’ many characteristics (demographics and political preferences) in a single number, we considered the conditional probability of an individual (observed in t-1) belonging to his/her own classroom.14 The results are presented in Table A3 in the online Appendix. They show the values for the observed classrooms, compared to the confidence intervals generated by the simulated groups. Out of 47 classrooms, the null hypothesis of a ‘random panel’ is rejected, at the 10% level, for only six. In an alternative check for whether attrition introduces selection in the data, we replicated the tests of selection (explained in Section 4.1) for all students observed in the first survey and restricting the data to students in the panel, and we find null selection effects.15
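
The simulation just described can be sketched as follows. Each student's many characteristics are first collapsed to a single number (the paper uses the conditional probability of belonging to one's own classroom); the function below takes that number as given and builds the empirical interval. Names are illustrative:

```python
import random

def random_panel_ci(first_wave, panel_size, n_sims=10000, level=0.90, seed=0):
    """Empirical confidence interval for the mean characteristic of a
    randomly formed panel: draw panel_size students without replacement
    from the classroom's first-wave respondents, average their
    characteristic, and repeat n_sims times."""
    rng = random.Random(seed)
    sims = sorted(
        sum(rng.sample(first_wave, panel_size)) / panel_size
        for _ in range(n_sims)
    )
    alpha = (1.0 - level) / 2.0
    return sims[int(alpha * n_sims)], sims[int((1.0 - alpha) * n_sims) - 1]

# A classroom's panel counts as 'formed as if randomly' when its observed
# panel mean falls inside (lo, hi); the test is run classroom by classroom.
```

Rejections at the 10% level then correspond to observed panel means falling outside the 90% interval, as in Table A3.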

Finally, we estimated the main regressions weighting each observation by the inverse probability of being observed in the panel.16 The magnitudes and significance levels of the peer effect coefficients, reported in the online Appendix, remain largely the same.
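
The reweighting in this last check can be sketched as follows. The paper's first-stage model for panel membership is not spelled out in this section, so estimating the probability as the within-cell (here, within-classroom) panel share is an illustrative assumption:

```python
# Hypothetical sketch of inverse probability weighting for attrition.
# Assumption: Pr(observed in panel) is estimated as the panel share
# within a cell (say, the classroom); the paper's first stage may differ.

def ipw_weights(in_panel, cells):
    """Weights 1 / Pr(in panel | cell) for the panel members, in order."""
    totals, members = {}, {}
    for obs, cell in zip(in_panel, cells):
        totals[cell] = totals.get(cell, 0) + 1
        members[cell] = members.get(cell, 0) + obs
    # totals[c] / members[c] is 1 / (within-cell panel share)
    return [totals[c] / members[c]
            for obs, c in zip(in_panel, cells) if obs]

def weighted_mean(values, weights):
    """Outcome mean among panel members, reweighted to the full sample."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)
```

Students from cells with heavy attrition receive larger weights, so the surviving panel is reweighted to resemble the full first-wave sample.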

We conclude that the attrition does not alter the sample in any visibly worrying respect. Nevertheless, to be sure that there is no biasing effect, we also control for students’ observable characteristics in the main regressions.

2.3 Variables

Table 1 gives descriptive statistics on the individual variables at t-1 and t and on the peer variables. The pre-determined individual characteristics at t-1 are set out in Panel A. They relate to the usual demographics (gender, race, income, mother’s education and age), declared political affiliation (left-wing, centre, right-wing), whether the student has a partisan parent, whether he or she intends to cast an invalid vote and whether he or she intends to watch the political campaign on TV.

Table 1. Summary statistics, individual and classroom level

                                               Mean     Std Dev   Min     Max     Obs
Panel A: Pre-determined characteristics and preferences
Female                                         0.490    0.490                     622
White                                          0.781    0.414                     622
Mother has a college degree                    0.620    0.486                     625
Have a partisan parent                         0.448    0.498                     620
Age                                            20.880   5.008     17      60      626
Right-wing                                     0.229    0.421                     623
Centre                                         0.401    0.490                     623
Intends to cast an invalid vote                0.168    0.374                     622
Intends to watch political campaign on TV      0.451    0.498                     627
Panel B: Outcomes
Cast an invalid vote                           0.058    0.233                     625
% correct answers in the quiz                  0.711    0.188                     635
Asymmetric mistakes                            1.704    1.367                     635
Mistakes on own intended candidate             0.267    0.105     0.143   0.571   486
Mistakes on remaining candidates               0.285    0.096     0.143   0.714   622
Right-wing                                     0.240    0.427                     622
Centre                                         0.363    0.481                     622
No. of days follows politics on TV             3.492    2.126                     632
No. of days follows politics in newspapers     2.590    2.168                     630
No. of days follows politics on the internet   4.623    2.152                     631
Panel C: Classroom composition (peer variables)
Have a partisan parent                         0.432    0.080     0.25    0.63    48
Intend to watch political campaign             0.426    0.119     0.20    0.75    48
Right-wing oriented                            0.189    0.126     0.03    0.50    48
Number of respondents (in t-1)                 33.188   11.164    12      65      48

The individual outcomes in the second survey at time t are given in Panel B. Our measure of individual political identification is positioning on the left-right scale. A potential concern is that this is a stable concept and so may be less susceptible to peer influence (forcing a null result). However, in our sample of young adults, this turns out not to be the case: 29.3% of individuals changed their identification on this scale between the two surveys, suggesting that, as young voters, their political preferences were in some degree still in formation (Sears and Funk, 1999; Franklin, 2004; Prior, 2010). Nevertheless, to check the robustness of our findings, we replicate the analysis for party preference. We also asked students to cite the three most relevant socioeconomic problems among 13 alternatives and we use these answers to identify student political identification. We report these robustness results in the online Appendix.

Our measures of individual political engagement are casting an invalid vote and the number of days following politics in the media. ‘Following politics’ needs no explanation, but ‘casting an invalid vote’ may. The natural measure of a lack of engagement (not voting) is not available in Brazil because voting is compulsory. However, the ballot paper offers the option of voting for no one, and this is what counts as an invalid vote. Of course, this could still be construed as a protest vote, but it does not, as it often does when voting is voluntary, suggest that the person was at least sufficiently engaged with politics to make the effort to go to the polling booth (e.g. see Maringoni, 2010). There is no choice over making such an effort when voting is compulsory. Indeed, we find that those who cast an invalid vote are less informed and consume less political information (see

The Confederate flag. The Supreme Court ruling on gay marriage. Policing minority communities. Nuclear weapons and Iran. Summer often brings a lull in the news, but not this year. And, come September, students are going to want to talk about these headlines.

But how should teachers navigate our nation's thorny politics?

The Political Classroom

Evidence and Ethics in Democratic Education

by Diana E. Hess and Paula McAvoy

Do politics belong in the classroom at all, or should schools be safe havens from never-ending partisan battles? Can teachers use controversial issues as learning opportunities, and, if so, to teach what? And then, the really sticky question: Should teachers share with students their own political viewpoints and opinions?

In their book, The Political Classroom: Evidence and Ethics in Democratic Education, Diana E. Hess and Paula McAvoy offer guidelines to these and other questions, using a study they conducted from 2005 to 2009. It involved 35 teachers in 21 schools and their 1,001 students. Hess is the dean of the school of education at the University of Wisconsin-Madison, and McAvoy is the program director at UW-Madison's Center for Ethics and Education.

Schools, they conclude, are and ought to be political places — but not partisan ones. I talked with them recently about how, in today's highly polarized society, teachers can walk that very fine line.

Sometimes it seems there's a belief that schools should be political ... sort of. With students taking on issues – like smoking – that are political but not too political. Did you find that in your study?

Hess: You're absolutely right, there are a number of schools that encourage students to get involved in political campaigns, but they tend to be political campaigns that really aren't very controversial. They'll encourage kids to form a campaign about something that everyone agrees should be done. For example, that we should clean up the litter that's around our school, or that it's important for people to eat healthy food...

We have evidence that kids learn a lot from doing that. It's not necessarily a terrible thing. My view is that if you're going to have students involved in authentic politics, then it's really important to make sure you have issues for which there are multiple and competing views, and you don't give students the impression that there's a political view that they should be working toward.

McAvoy: How political do we want students to be? That's really a question that a lot of communities struggle with and a lot of teachers struggle with. And the point of the book is to say that, in general, to be able to talk about politics is a skill that people need to learn. And it would be great if it were learned in school because these are great moments in which you bring a group of young people together who are forming their political views. They can really learn to engage across their differences and to start to see that political conflict is a normal part of democratic life.

A key point in your book is that, while teachers are teaching about the issues – immigration or same-sex marriage — they're also teaching students how to have these discussions. They're teaching the process of democracy.

McAvoy: Right. The "political classroom" is a classroom in which young people are learning to deliberate about political questions. It really is the process of deliberation that is the major skill being taught. And then, through deliberation, students are learning about the issues. They're learning how to form arguments, how to weigh evidence. So there's social studies content that is being learned in a process that is, at its heart, democratic.

Are there issues that are, or should be, completely off the table?

Hess: One of the things we talk about in the book is the distinction between issues that we called "settled issues" and issues that are "open"...

It's a little complicated, but, in a nutshell, we suggest that there are some issues that are settled and should be taught as settled and to not do that is being dishonest with young people. For example, the question about whether climate change is occurring — that's a settled issue. The question is, What to do about climate change? That's an open issue.

We wouldn't suggest that teachers engage kids in talking about whether climate change is occurring, but we strongly encourage teachers to engage in discussion about what should be done about climate change.

You mention in your book policies that might allow students to opt out [of a controversial topic or discussion]. Which raises questions about whether that's a good thing, to just allow students to sit out.

McAvoy: The philosopher in me thinks there's not a really good way to defend the view that students should always be able to opt out. We don't allow students to opt out of writing essays because they don't like writing essays.

At the same time, democracies allow us, when we're in the public sphere, to walk out of a discussion if we don't like what's happening or if we're being offended. Classrooms are unusual in that we're compelling students to be there. Teachers do need to weigh [whether] there might be times when a particular student has a good reason for wanting to pass on a comment...

Opting out because I feel uncomfortable sharing my views or talking out loud in class is a skill that can be taught and overcome. Opting out because this discussion is really hard for me given my religious background — that might be a reason that you let a student pass on a discussion.

You note the challenges and dangers of teaching both in mixed classrooms – with students of varied racial and economic backgrounds — and homogenous classrooms. How should teachers adapt to these different scenarios?

Hess: In many ways the more difference you have within a classroom the better. We want to make sure that we have as many multiple competing views as we possibly can ... So difference is a good thing, something that can be used and primed as opposed to something to be feared and quelled.

One of the challenges of lots of difference is, difference often causes high emotions and often can cause breaches of civility. So teachers who are in classrooms that have lots of naturally occurring difference often have to go to great lengths to make sure that students understand what it looks like to participate in a civil manner...

In classes where there's a lot of sameness, the first thing we learned is that, though it might appear that there's a lot of sameness, there's always some difference. So when teachers say, "Well all these kids think alike," we're almost sure — all the time — that the teachers are wrong, that in fact not all the kids do think alike.

That being said, there are classes that are more similar than they are different, and teachers have to use a lot of strategies to bring differences into the discussion. Those strategies might include bringing in guest speakers or making sure the materials the kids are using to prepare for discussion are full of multiple and competing ideas.

Students really seem to like this stuff – to engage in issues that are current and relevant to their own lives.

Hess: Absolutely. There are two things going on here: In many schools, students still spend most of the day listening to teachers talk. One reason we think kids like [these issues] is they finally get a chance to talk themselves. More than that, we did find that the content of these political issues was really interesting to kids. Especially when they were hearing multiple and competing views. Students would report that in discussions where there was a lot of shared opinion, those were not as interesting as in discussions where there were differing views ... They were really responding to the fact that it's quite interesting to hear what your peers think about things. And not just that they have different points of view but what they're supporting those points of view with.

What advice do you have or does your study have for teachers considering how to talk about [breaking events such as] Baltimore or Ferguson, Missouri?

Hess: One of the problems with discussing events that just happened is that often we don't know enough about what happened. There's a distinction between current events ... and discussions about controversial political issues where kids are preparing in advance and being deliberative. In the best-case scenario, teachers are able to take advantage of current events and use them as opportunities to get kids to talk about controversial political issues. There's a big difference in talking about, "What do you think happened?" and talking about a policy issue like "Should police officers be required to wear video cameras?"

McAvoy: Young people need to see these as moments within their historical context – need to understand some of the history. It's difficult to have those materials at the ready when things sort of erupt as they have in the last year or so with Baltimore and Ferguson. Good teachers start building curriculum about the history of redlining in cities or how cities become segregated. [To] put these moments within the context is much better than having young people just reacting to "What do you think about what you're seeing on television today?" Young people really need to study these issues in depth.

OK, the big elephant in the room: the question of whether teachers should talk about their own personal beliefs to their students. Should they?

Hess: What we found is that there were teachers who were doing an excellent job who shared their own views with students, and there were teachers doing an excellent job who didn't share their views. So we don't believe that there is one right answer to this. And we think empirically we can show that there's not.

That being said, we think that there are times when it's probably better for teachers to share than other times when it's better for them not to share. That depends in large part on the context — on who's in their class and what their goals are.

One thing we were really intrigued by was that a lot of the teachers we interviewed talked about changing their minds on that question over time. Some of them would say, "Well, it used to be that I would never share, but now I do only in these circumstances." Other teachers would say, "I used to be really prone to share a lot, and now I don't and here's why." We think it's all a matter of professional judgment. Teachers need to think about this very carefully...

I sometimes worry that, even though there can be really good ethical reasons for teachers to share, in a very polarized time that sharing can be misinterpreted. And if it's misinterpreted by the public or by parents as teachers trying to get kids to adopt their beliefs, then I think we could have a big problem.

That being said, we have no evidence from the study of teachers who were actively and purposely trying to indoctrinate kids to a particular point of view...

We think that this feeling that the public seems to have that teachers by definition are trying to push their political views on students is just false.

You were critical of the notion – that teachers would do that.

Hess: What we learned from students when we interviewed and surveyed them is that they make a really clear distinction between a teacher sharing his or her own view and a teacher trying to push his or her own view. Students not surprisingly report that they don't like being pushed.

You seem to draw a pretty firm line that teachers should not be advocating for their own beliefs.

McAvoy: What we argue in the book is that what's most important is that teachers create a culture of fairness in the classroom. That means being fair and reasonable to all the competing views that are in the classroom and that are being represented in the public. The practice that we found most troubling, from the study, is what we referred to in the book as political seepage: teachers who make sarcastic comments, who use partisan humor. It's these offhanded comments that are sort of biting and mean-spirited about the political climate that I think is problematic. Because it creates a climate not of fairness, but it creates a kind of insider/outsider feeling. If you get the humor that I just said or, "Do you agree with me that that politician's a big idiot?" That invites the most divisive parts of the partisan climate into the classroom.
