Lokman Tsui (Ph.D. '10) Interviews Oscar Gandy on Personal Data Protection, Privacy, and Surveillance

Gandy studies privacy and surveillance and is the author of The Panoptic Sort.

In a recent issue of Communication and Society, Annenberg alumnus Lokman Tsui (Ph.D. '10) interviewed Professor Emeritus Oscar Gandy on his views of surveillance, privacy, and personal data protection; whether we can protect ourselves; and the role of scholars and policy makers in safeguarding our rights. With the permission of the journal, which holds the copyright, the full interview has been reproduced below.

To access the original (in both English and Chinese), entitled "On Personal Data Protection, Privacy, and Surveillance," click here.

Lokman Tsui: In your view, what have been some of the most pertinent or important developments in the past two decades, since The Panoptic Sort (PS) came out, on the issue of privacy and surveillance?

Oscar Gandy: The most important developments after the publication of PS relate to advancements in the technologies available to gather, process, share, and take action on the basis of transaction-generated information (TGI). This of course also includes the tremendous increases in the amounts of TGI that we make available as we make our way through a digital environment that leaves traces and records of our interactions with people, places, and things; traces that are being collected and processed in the pursuit of actionable intelligence.

While we are currently focused on big, or even massive, data in corporate and government files, we are just barely coming to understand what the future will look like when nearly every interaction with a device, or with an environment equipped with sensors, will make this information available to some actor, human or not, as an aid to consequential decision-making. This “internet of things,” then, is emerging as an important new source of data that will be used in ways that have implications for social control or autonomy. The changes in the capacity to capture and “make sense” of all this TGI, with the aid or perhaps at the direction of automatic/autonomous devices/systems, raise important questions about privacy and surveillance that we have not really begun to pay enough attention to.

By reference to autonomous intelligence, I intend to place the role of non-human actors on our agenda of concern. The number and variety of consequential “decisions” that will be made without our knowledge and consent will expand dramatically, and the laws governing liability and responsibility for the consequences are simply not up to the task of managing automated decision-making. Of course, this includes concerns about the extent to which the setting of goals and the identification of problems are tasks that are diverted/allocated to relatively autonomous intelligent systems.

LT: In the preamble of PS, you argue that privacy would be the defining issue for the 1990s. Two decades later, why do you think this continues to be such an enduring issue, and can you imagine a point at which it will stop being defining?

OG: At the time when I was beginning to write about privacy, scholarly and political attention was focused primarily on the government and its surveillance of citizens and others. While considerable attention has since shifted to corporate surveillance and government/corporate partnerships, we have still not come to terms with the many ways in which information about individuals and groups can and will be used to produce influence over their behavior.

So, let me suggest that what we will next have to figure out is how to turn our attention away from the collection and processing of information and, instead, to pay more attention to how this information is being used. What I am saying here is that neither privacy nor surveillance will continue to be the focus of attention that it has become; instead, the focus will be on the misuse of information, inviting a return to Habermas and the purposes of communication: enlightenment, not manipulation or strategic intervention.

This raises all sorts of questions about how we regulate those who routinely and purposively use surveillance and analytical technology to gain knowledge that generates social harms through its use. I am talking about the kinds of regulation we have developed to attempt to limit pollution of air or water, such as financial and other sorts of sanctions, but also criminal punishment.

This focus on harms is tied to my long-term interest in technology assessment that would help us to identify some of the “unintended consequences” that flow from the use of technology, hence the need for the revival of the US Office of Technology Assessment (OTA) or something like it. This, of course, also raises a concern about the undesirable consequences that result from the use of technology for what we might consider to be good or socially useful purposes: consequences that occur unintentionally but still generate harmful effects, especially when those effects are unevenly or unequally distributed. Nothing is easy!

Naturally, as you are well aware, this raises all sorts of concerns about “free speech” and its important role within democracies and those nation states moving in that direction, however slowly. Since I believe that the primary actors/agents involved in the misuse of information are corporations and their employees, they should become the primary targets of this regulatory activity.

Thus, the struggle that is being pursued with regard to the Citizens United decision, and a host of other decisions that have provided support for treating corporations as though they were natural persons, has to be the focus of a national and then global movement to make it clear that “corporations are not the people,” and that they have been given special privileges only for the purpose of improving the wellbeing of “the people.”

LT: You mention that the primary abuse is being done by corporations. To what extent should we also guard ourselves against abuse by state actors?

OG: Perhaps I misspoke when I suggested that corporate actors engage in more abuse of TGI than state actors do. My emphasis on corporate actors has been an attempt to convince my audiences that we need to pay more attention to the corporate actor than we have in the past.

We generally tend to characterize abuse in terms of the power being exercised by the actor. There is no question that state actors have far more power than corporate actors do. The state has the power of life and death. And as we are almost daily being reminded, with regard to the concerns of the Black Lives Matter movement, agents of the state exercise this power in ways that many of us consider to be illegitimate. This official misbehavior by agents of the state is clearly more deserving of attention than is the behavior of the marketers of subprime loans. But the misuse of concern in one case is the misuse of deadly force, while in the other case it is the misuse of information.

While Edward Snowden has certainly helped us to understand a bit more about the extent of information gathering by the federal government in the US, my sense is that the information gathering by Google and Facebook is comparable in size and scope. The challenge we face is one of developing a sense of the sociopolitical impact of its use by corporate actors within society at a global level.

Again, I don’t mean to minimize the importance of governmental activity within the sphere of communication and the production of influence through strategic use of TGI. State censorship and propaganda are powerful forces affecting our wellbeing. But again, I want to emphasize the role being played by corporate actors and their agents in using TGI to shape the laws and regulations that enable them to pursue profits, while damaging the social, economic and political environments in which they operate. We continue to underestimate the power of the corporate sector, and we do so at our own peril.

LT: Speaking of the power of the corporate sector, Baidu was caught selling personal information from people frequenting health forums to fake medical institutions and practitioners in 2016. What can we do to make sure these technologies are used in a socially responsible manner, especially in environments where the legal or regulatory protections are not very strong?

OG: So, this question is really about the concerns I introduced into my response to the first question, although you extend it a bit to include those places where legal and regulatory protections are not that strong. I say that because my career in this area has been in a nation in which regulation is substantial, but still limited in its effectiveness because of the power and influence of corporate actors. This problem is a bit different in places where it is the state that is the “bad actor.”

This needs to be made a focal point for democracy-oriented movements that understand the importance of socially responsible but largely autonomous individual decision-making about a whole range of concerns. This is a movement that has to mobilize to educate the public about the myriad ways through which TGI is being used to limit the possibilities for autonomous self-development and collective democratic action. Imagine the equivalent of a “civil rights movement” focused not on a single population segment, like black people in the US, or other minorities around the world, but on the people themselves, who are ready to claim their freedom to decide and act once they understand the limitations on their ability to make informed choices.

As I suggested earlier, this movement needs to shift its attention away from the gathering of information: that is a lost cause. Besides, there are countless benefits to be derived from learning more about how the world, including its people, works. The problem is how that information or “knowledge” is used. Also, as I suggested earlier, there is a need for us to revisit the question of who the “rights holders” in our societies are. Clearly, rights are primarily for “the people.” But the state and the legal systems are also critically important to the development of laws and regulatory structures that establish limits on the exercise of those rights, especially where that exercise limits or harms the exercise by others, especially those others who may have been burdened in the past in ways that limit their capabilities.

Here again, I think it is important to emphasize the distinction between the people and the institutions that have been created, theoretically, to enhance the quality of life for the people. Corporations and the institutions of the state have to have limits on their exercise of “rights” with regard to the activities and interests of “the people.” The movements we need are those that will act to severely limit the ability of institutional actors, especially state and corporate actors, to use information/knowledge in ways that harm or limit individual autonomy.

It is important to note, however, that the variety of ways through which the “sharing” of TGI can result in harm, especially to the vulnerable, is increasing rather dramatically. There is a need for the development of an “institutional actor” whose entire reason for being is to engage in technology assessments that inform us about those harms and their distribution. In the same way that we have Centers for Disease Control in the health area, and new agencies in the financial area, we need centers of expertise to help us evaluate the emergent uses of TGI in terms of their consequences, and perhaps to recommend limitations on, or compensation for the harms that are generated.

LT: You make the case for regulatory intervention to address the harms of surveillance. Professor Zuboff last year argued for understanding the logic of accumulation as a system of surveillance capitalism. To what extent do you think that the surveillance system is married to the capitalist system? To what extent do we need to go beyond reform of privacy regulation, and is reform of surveillance only possible if we reform the underlying system of capitalism?

OG: As I think most of my responses so far suggest, I agree wholeheartedly with Zuboff’s assessment and identification of our present status as “surveillance capitalism.” My comments about the difficulties that regulators face are based on my assessment of the nature of corporate influence over public policy formation and implementation. Reform of this system is a fundamental necessity, even though this is not primarily a problem of capitalism, but a problem of how capitalist relations have been allowed to develop, especially in the US. I count myself among those who are concerned about the near-term future that will be shaped by the contradictions within capitalism, not the least of which involve the worsening conditions of the laboring classes struggling to be able to acquire/consume the products of capitalist firms. The push to reduce the costs of producing and delivering goods and services, which now looks more and more to automation, means that more and more good-paying jobs will evaporate. Marxists talk about overproduction/underconsumption crises, and it looks like we are heading for yet another one.

LT: The array of services in the past decade that are now available to people to explore via the internet — and now their smartphones — has continued to explode. Life without a smartphone seems almost unimaginable, but, of course, the degree to which the internet and the smartphone facilitate surveillance is also almost unimaginable. What do you think about this “trade-off” between surveillance and digital participation? Is opting-out a feasible solution?

OG: This is an important and very challenging question. While I would like to say that I have managed to survive without a smartphone, I have to admit that my wife has one, and like most users she replaced her “old” one with a newer model this year. The access to information that is provided with the aid of this device is something that one can become addicted to quite easily. And while I don’t actually use the device, when we are travelling together, it is used quite frequently, and the information clearly influences the decisions we make about where we will go and what we will do.

There is no denying the benefit of being able to ask questions of some resource anywhere, at any time. Clearly, this is in the nature of a trade-off, and concerns about surveillance are readily placed “out of sight / out of mind,” because only the immediate benefits are likely to be seen/felt. Similarly, my wife organizes our shopping — she is quite skilled at gathering coupons and discounts. She also relies heavily on the social utility of accumulated reviews of places we would like to visit, including campsites, hotels, restaurants, etc. The fact that I am not “personally” participating in this informational trade-off, or exchange does not take me out of the equation. There is no doubt that I am included as part of a family unit/household that is very well known as a result of our reliance on the device and its services.

So, clearly, my opting out is really only a minimally successful strategy. The same is true for other parts of the social web that I try to avoid. While I am not a Facebook user, I did sign up for ResearchGate as a way, I thought, of sharing my research more easily. I rely heavily on Google Scholar for my research, and increasingly the documents identified there are accessible via a link to ResearchGate. However, it has become something of a burden, not so much for what we might be concerned about with regard to surveillance in general, but because it is becoming more and more of a pest in attempting to get me to become a fan, or friend, or whatever, of anyone who has looked me up or found an article indirectly. How do I get out of this when I am constantly reminded about how I am not being a good neighbor by reciprocating interest?

I have suggested that we really need to do some research that would characterize the kinds of active socialization that the providers of these social network resources engage in. Social change doesn’t “just happen”; powerful actors shape it through their constant “nudges,” not exactly like the overseers in Foucault’s prisons, and schools, and hospital wards, but close enough to warrant our attention.

LT: Lessig once made the distinction between legal code and software code as different modes of regulation. What kind of role do you see for software code in protecting privacy or limiting the abuse of personal data?

OG: Of course, code will play a very important role in protecting privacy, or limiting the unauthorized use of TGI. I say “of course” because we recognize, as Lessig does, that code enables the capture and evaluative assessment of the information derived from this data. If you allow your Google searches to include patents, in addition to the conference papers that address these concerns, you will see that considerable talent and energy is being applied to problems related to anonymization, or the identification of people, places, and things. The problems we will not easily solve relate to the vast inequality between those who seek this knowledge and those who want to protect their ability to make informed choices about things that matter, or should matter, to them.

As we’ve already noted with regard to concerns being expressed about “digital capitalism,” the deck is pretty seriously stacked in support of corporate interests in “knowing” the citizen/consumer to the fullest extent possible. And while Cass Sunstein carries on about “behavioral market failures” as justification for government-supplied nudges and other “default” rules designed to help consumers make the appropriately rational decisions, the kinds of support we need to provide coders like Howe and Nissenbaum to develop counterweights like “TrackMeNot” do not seem likely to emerge in the near term. In addition, there is the continuing problem in the context of digital capitalism that only a small segment of the population will be able to afford, and effectively install, update, and operate, these defenses across their devices, while the rest of us will not be the beneficiaries of this code. Privacy by Design is a nice idea, and while there are signs that governments are taking note of some of the concerns of consumers, their long-term interests lead them to pay more attention to the concerns of the marketers.
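
[Editor's note: For readers who want a concrete picture of the “obfuscation” strategy that counterweights like TrackMeNot pursue, the following is a minimal, illustrative sketch in Python, not a description of the actual tool, which Howe and Nissenbaum built as a browser extension. The decoy vocabulary and the search endpoint below are placeholders assumed for the example; the underlying idea is simply to mix a real query into a stream of innocuous decoy queries so that an observer of the request log has a harder time profiling genuine interests.]

```python
import random
import time
from urllib.parse import urlencode

# Placeholder decoy vocabulary; a real obfuscation tool would draw on a much
# larger, periodically refreshed pool of innocuous terms.
DECOY_TERMS = [
    "weather forecast", "banana bread recipe", "local library hours",
    "how to tie a bowline", "history of jazz", "train schedule",
]

# Placeholder endpoint; any search engine's query URL could stand in here.
SEARCH_ENDPOINT = "https://search.example.com/search"

def obfuscated_query_urls(real_query, n_decoys=4):
    """Mix the real query with n_decoys decoy queries and shuffle them,
    so the genuine query is harder to single out in a request log."""
    queries = [real_query] + random.sample(DECOY_TERMS, k=n_decoys)
    random.shuffle(queries)
    return [f"{SEARCH_ENDPOINT}?{urlencode({'q': q})}" for q in queries]

if __name__ == "__main__":
    # Print the mixed query stream; an actual tool would issue these
    # requests over time, at randomized intervals, rather than print them.
    for url in obfuscated_query_urls("symptoms of measles"):
        print(url)
        time.sleep(random.uniform(0.1, 0.5))
```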

LT: Related, what are your thoughts on the ethical implications of code that block ads and/or tracking? Do you use them / should we use them? Or do you have reservations about this?

OG: Yes, I use ad blockers. Naturally, I have a story about that. I was an early paid subscriber to the New York Times. I would say my scholarly life truly depends upon that resource, especially the hyperlinks to articles and reports that I could never find on my own. Of course, I was outraged when the Times started sending me notes, asking me to “whitelist” the paper, and I consistently sent back letters indicating that I was a paid subscriber, and didn’t want those ads. They never responded specifically to my argument, and recently, they have stopped sending the appeals. So, as is implied in my response to the Times, I understand the need to pay for the production and distribution of my digital newspaper. Because I do pay, I don’t feel any ethical burden for refusing to pay more. I think I follow that logic with other digital information sources that I use more than routinely, but not enough to pay for a subscription. I go ahead, upon request, and whitelist those sources — probably not more than a dozen. I understand, and don’t object to sources that set a limit on the number of items I can access within a month (and download as .pdf). I would understand if sources established a paywall for the delivery of those files.

I am a bit more uncertain about the use of anti-tracking software to the extent that “obfuscation” or other strategies do more than block access to TGI, and actually affect the performance of platforms. To the extent that I had agreed, by opting in, to the tracking of my data as the cost of access to informational resources, I would consider using code to block or obfuscate that tracking to be an ethical failure on my part, because agreements should mean something on both sides of the table. In contrast, with those so-called agreements where the powerful actor assumes consent, without an agreement, and then offers me the option to opt out, I don’t feel bound not to block, since I made no such agreement in the first place.

LT: Sunstein makes a compelling argument that through “nudging” people, there are productive ways to influence and regulate behavior. To what extent do you see potential for big data to “do good” for society, for example in combination with credit scores where certain activities are classified as “desirable” or “undesirable,” and what kind of conditions would have to be in place for this to happen?

OG: This is an area where there is much work to do. My first response is what I would hope you would expect from a social scientist. We are made to seek knowledge. Not all of the applications of this knowledge have been for the best. And not all of the applications that we have only recently come to understand as harmful were implemented with the intention of doing harm. Those harms were accidents, or externalities, or unintended consequences. As your question suggests, there is also a problem in how these outcomes and their distributions come to be evaluated. Experts will always disagree, so the classification of these outcomes will depend upon “trusted agents” to help us understand and evaluate those outcomes.

This notion of “trusted agent” is, and to some degree has always been, important within society and the public sphere where we talk about such things. As a privacy scholar, I always used the example of doctor/patient relationships, where it is in the patient’s best interest to be fully disclosing to their health care provider, although there are always complications, as when the provider actually works for, or reports to, one’s employer.

That said, a trusted agent should certainly be allowed to observe, track, monitor, evaluate, report on, and nudge their “client,” consistent with a treatment plan that both agent and client have agreed upon; not with a 100-page contract set in 6-point type, but with a validated consent form, demonstrably understood by the client, as assessed by another trusted agent with the responsibility for determining that such a level of informed consent has actually been achieved.

So, to be clear, I am emphasizing the role of informed consent for exposure to nudging. There will certainly be cases in which persons who have been determined to be incapable of providing that consent may come to be nudged on the basis of a decision by a court, or by some other trusted agency that has determined that an intervention is required in the best interest of the individual and society at large.

Again, as suggested earlier, limiting nudging to trusted agents raises all sorts of questions and challenges with regard to the free speech “rights” of corporations. They don’t have such rights, or at least should not be treated as though they do, as these are the rights of persons or citizens. I see “nudging” as being what advertisers do routinely. For the most part advertising, especially more and more narrowly targeted advertising, is more manipulative than informative, and the right of corporate actors to engage in manipulative communication, based on algorithmic assessments of individuals, needs to be strictly limited. I would much prefer the development of “personal shoppers” — intelligent systems that scan the market(s) in the interest of their clients to identify options, and inform their clients about the risks and benefits associated with purchase and use, including assessments of the risks and benefits of acquisition from particular vendors. This kind of system would “force” producers to emphasize quality, rather than marketing. This is more of a technological response than the kinds of advice that my wife and others gather from social networks, but I am enough of a techie to believe that trusted agents can be developed within competitive markets for their services.

LT: You once warned that we as researchers tend to have biases, and a strong bias is to do research “where the light is.” I remind myself this is a particularly important bias to be aware of in studies of institutional surveillance and privacy. What suggestions or advice would you give to scholars interested in pursuing research where the data is perhaps not so clearly in the light, but where instead we might have to grope in the dark? And is there anything academia as an institution can do to encourage this type of research more?

OG: Way back when we were developing the Union for Democratic Communications, we thought that it would be important to develop “critical communications research” as a disciplinary focus, as well as an institutional umbrella for our work. I have a sense that the disciplinary focus has developed to a remarkable extent. We haven’t moved quite so far in developing the kinds of institutions we had in mind.

I continually refer to the former US Office of Technology Assessment (OTA) because I believe that we need to bring it back. Of course, such an entity would once again become the target of strategic opposition by those whose interests are dependent upon their ability to produce and market goods and services that harm us.

While there are smart people working actively to understand the myriad ways through which algorithmic assessment affects different segments of the population, these independent scholars don’t have the resources or the necessary access to corporate information in order to engage in the kinds of evaluative assessments we need. When I talked about looking where the light is brightest, I only emphasized part of the problem. I ignored consideration of who it is that places the lights there in the first place.

Ignorance is not randomly distributed; it reflects the exercise of power. An OTA with resources and authority can determine where we need lights and which questions we ought to be asking, and can help to develop better lights to help us see in those places, including corporate decision-making, that have been difficult to access in the past. As I write this, I am reminded of a related project I am working on having to do with cognitive science and neuromarketing. We are developing new technologies to see within the working brain, to understand more about how we respond to stimuli. Again, the problem is not primarily one of gathering information, but of managing its uses and identifying inappropriate uses.

LT: You suggest it is important to raise awareness of the importance of surveillance. You also suggest it is particularly important to be aware of the harms, and that such awareness would (hopefully) lead to a change in attitude or behavior. At the same time, Professor Turow’s research suggests that people care about privacy but also feel resigned: they feel there is not much they can do to protect their privacy. What role is there for researchers and scholars on the one hand, and advocates and activists on the other hand, in making people more aware of possible actions they can take?

OG: This is also an important question. I think that I have tried to respond to it in a recent paper about strategies for putting inequality on the public agenda. This is a piece about an educational campaign designed to help people recognize the nature of inequality as a social problem that needs to be addressed through public policy. It includes unusual (for me) praise of a non-profit, foundation-supported organization of communications researchers, The FrameWorks Institute, that has a well-developed strategy for understanding the nature of public understanding of issues, identifying the kinds of triggers that lead people to select/support regressive policy options, as well as the kinds of problem and solution frames that seem to lead people to move more progressive policy options forward. Theirs is very interesting work, but my paper also identifies some of the problems involved in mobilizing the public. This difficulty in mobilizing the public to act politically is part of the sense that Turow implies in his use of “resigned”: we think that there is nothing we can do. This sense of powerlessness has to be overcome more generally, not just with regard to privacy and surveillance.

It also takes note of something of a contradiction that I face. I have long been a critic of segmentation and targeting, and this kind of interventionist strategy pursues the understandable, if troubling, logic of choosing which messages to deliver to which population segments. At the moment, I find myself in something of a box around this issue of strategy. As suggested earlier with regard to “trusted agents,” we would expect that different people would get different recommendations on the basis of the agent’s best assessment of what was in the client’s best interests. In the policy realm, where the best interests are those of the collective, the nature of trusted agents will be difficult to define. The collective can’t readily and routinely be asked to provide informed consent. In addition, individual preferences also might include considerations of the collective, the global, and the future.

Scholar activists — I don’t separate them in the way your question suggests — can engage in research designed to help them understand the nature of public understanding of the problems linked to surveillance and privacy. As FrameWorks suggests, a variety of research strategies can also be used to identify the arguments and information strategies that are more likely to lead to increased support for public policies limiting the use of surveillance for problematic applications.

LT: You mention in your talk at the LSE that we have failed to come up with compelling examples of harm. Are the examples we have come up with so far not compelling to the larger public, to the regulators and decision-makers, or both? More importantly, why have we failed to come up with compelling examples, and what can we do to address this lacuna?

OG: Again, another interesting, and challenging, question. As you suggest, there are two different audiences, the general public and the policy makers. The general public will be primarily interested in the risks/harms they face, while they may be generally uninterested in the risks/harms that are faced by others. The regulators have more complicated information needs. They have to assess these harms in terms of members and segments of the public, but they also have very different sets of tradeoffs to consider, such as the economic impact, including employment, tax revenues, etc., that would result from limitations on the use of TGI for marketing.

Both of these audiences are being continually bombarded with messages praising the social and economic benefits of segmentation and targeted marketing informed by the collection and analysis of TGI. While there are a good number of journalists and academics who provide the public with examples of the harms that flow from some uses, the number and prominence of these articles is a fraction of that devoted to cheerleading. And although people recognize, or can easily be challenged to think about, the ways in which market discrimination affects some segments of the population, this is apparently not enough to mobilize public opposition, especially given the craftily captured right of free speech that the marketers now exercise.

My answer is pretty much the same here as for other questions about what can be done. We need to engage in an information campaign focused on the myriad harms to global society associated with the promotion of mindless consumption on the one hand, and with the assortment of public policies that have worsened the nature and extent of social and economic inequality around the globe on the other.

Clearly, I am suggesting that our focus needs to underscore the link between surveillance and marketing, and strategic communications designed to shape public policy (yes, I hear the contradictions!). I have been working with an organization, the Center for Digital Democracy, to develop a partnership/coalition with environmental organizations to generate greater public awareness of the ways that surveillance-aided marketing is contributing to global warming. After all, surveillance is a technology that gathers information in order to generate actionable intelligence, in this case, strategies for segmenting and targeting consumers. And marketing is about mobilizing targets toward increased consumption, including the replacement of perfectly useful technology by “the next better thing.” The consequences of this waste, including increased consumption of energy and other resources, are part of an environmental concern about sustainability. We are not making much progress yet, but I really think that this is an obvious connection.

LT: Can you elaborate a bit more on how to get the grassroots movement started, so that the general public will care more about privacy and surveillance and bottom-up energies can come together for a renewed OTA or similar public authority? Can you maybe shed some light on the general principles by comparing or reflecting on past attempts, both successes and failures, at broad-based movements (not identity politics) by various groups such as the UDC, the FrameWorks Institute, and maybe Gerbner’s Cultural Environment Movement?

OG: I am not much of a historian, so I have nothing in the way of comparisons of past attempts by entities like the Union for Democratic Communications (UDC) or Gerbner’s Cultural Environment Movement, other than to suggest that none of those mentioned could be considered to have risen to the level of a social movement as they are generally understood. The UDC, which was supposed to include activists and professionals as well as academics, became dominated by academic interests. As far as I know, Gerbner’s effort did not really reach a take-off point, and the FrameWorks Institute is a research-oriented “think tank” that has been recognized as being at the leading edge of a kind of strategic communications research focused on what we might recognize as progressive policy outcomes. There is little doubt in my mind that they are making important contributions to the development of communications campaigns, some of which they have identified as being quite successful. They provide a number of guidebooks in specific policy areas that might be of interest.

While I have written more generally about the kinds of approaches that have been taken toward movement development with regard to inequality in my article on “The political economy of framing” (The Political Economy of Communication, 3(2), 88–112), I have also explored the role of framing with regard to imprisonment, or “Hyperincarceration” (“Choosing the points of entry”). But neither of these has a history that I could supply as evidence of what works.

LT: What advice would you give (young) scholars aspiring to be agents of change? Academia has changed, in particular in its rationalization of scholarly output as a way of measuring performance. At the same time, there are perhaps also more avenues than ever before to have your voice heard.

OG: As I have been outside the classroom, as well as away from direct interaction with academic administrations, I don’t have much sense of how the pressures are being delivered within the academy. I don’t know what the relationship in career terms is between traditionally measured scholarly output and activism as we understand it. Clearly there are scholar/activists who have become “public intellectuals,” and who have as a result done well for themselves and for their political projects.

My sense is that there is still a tremendous amount of freedom and autonomy for young scholars to develop their identities as public intellectuals, although there may not be the same amount of resources from government agencies or foundations to support their research efforts. Depending upon their institution, and its sense of itself, there may be support for professors who involve students in research initiatives that also serve public purposes if they can be framed in ways that are seen as legitimate within the academy and the legislatures that might be relied upon for funding of the institution.

I suspect that these professors will face even more of a careerist shift in the student population than I observed in my later years in the academy. The concerns I raised about the coming constraints on meaningful employment are likely to influence the kinds of choices that these students are willing to make about how they spend their time. Those professors who can demonstrate that their courses/projects with social/public purposes can also provide much-valued skills not easily available elsewhere might attract students who are willing to make that kind of “tradeoff.”