A Sociology of Dragons: danah boyd on AI, Trump, and critique

by danah boyd and Elif Buse Doyuran June 25, 2025

On 18 November 2024, danah boyd gave a talk on the challenges of pursuing responsible AI at the Edinburgh Futures Institute. In an old concert hall, she spoke to an audience awash with emotion, just days after the U.S. elections. The talk was scheduled for an hour, followed by another hour of Q&A. The indefatigable boyd spent a third hour with JCE editor Elif Buse Doyuran, answering our questions on American politics, artificial intelligence, and, inevitably, the academy.[1] As with most projects involving humans, the interview came about through a mix of contingency and scheming. At JCE, we have long been interested in developing formats that respond to current affairs, especially those that demand quicker reactions than the traditional academic publishing cycle allows.[2] President Trump’s second term was one of those affairs, as was 2024’s dramatic boom in generative AI. And we just happened to encounter the perfect respondent on both counts.

It is telling, both of academic publishing structures and of danah boyd’s unique talents, that by the time this interview reached publication, many of her predictions had already materialized into the political realities Americans, and the rest of the world, now face. With the U.S. in the throes of federal dismantling, comical tariffs, and the ‘MAGA makeover’ of its tech elite, nearly every point boyd foresaw when asked about the future of tech under Trump has come true. The otherwise slower-moving trends she identified have been drastically accelerated by the ongoing wave of political and financial attacks on universities. Meanwhile, AI continues to solidify as a social and political-economic fact, and critical researchers must find new ways to understand, explain, and even intervene in the world now being remade in its image.

Our current methods of critique are no longer fit for purpose, says boyd, a founder of the field of critical data studies. True critique demands that we find new ways of seeing the world, even more so when we confront the unknown, when we stand at the edge of uncharted waters. Here be dragons, as her AI talk is titled, referencing the phrase used by medieval mapmakers to mark territories as potentially dangerous but ultimately unknown – places yet to be explored. It’s no surprise that a scholar of STS (the science of monsters and hybrids) would find fantasy metaphors appealing. Yet the word choice does more than signal aesthetic affinity.

Classical STS teaches us to refuse habits of thought and of scholarship, to suspend judgment and just describe, in the simple but not at all easy challenge Bruno Latour set for the field. boyd, in turn, raises the stakes of that challenge with her metaphor, subtly shifting the emphasis to the object of the instruction and asking: how do we describe something we’re not certain is there? AI, the glaring object of our collective attention, is not inevitable. Neither its ‘risks,’ its ‘harms,’ nor any of its other effects on society are predetermined. If critique is to be meaningful, it must remain faithful to what AI presently is: a potentially dangerous but ultimately unexplored territory. To be bold is to mark AI as unmapped, not to emulate the false certainty of tech evangelists who insist that it is our future, and our present. So boyd reminds her fellow travelers, both seasoned and newly joined, in the exchange we present below, that this strange field of sorts might as well be called a sociology of dragons.

[1] The event was organized by the Critical Data Studies Cluster at EFI. We are grateful to Karen Gregory for making the connection, and to Alex Taylor for generously offering his office, and patiently waiting while we used it.

[2] See Caliskan’s (2016) commentary on the July 15 coup attempt in Turkey, and more recently, Mazzucato, O’Connor, and Bennett’s (2025) analysis of UK cultural policy.


 

Why give a talk about AI now?

For over a decade, I’ve been thinking about different aspects of what I call “data and society” because the terms keep shape-shifting. When I created Data & Society, “big data” was the term, and now it’s “AI.” At the moment, some people are talking about generative AI, while others mean any algorithmic system. So depending on how you cut it, I’ve been talking about AI for a while.

In 2016, I embarked on a project that turned into a research endeavor to understand what makes data legitimate. I was thinking about how AI-esque systems require data. I spent four years embedded inside the US Census Bureau trying to understand one of the oldest and most politically powerful projects to make data. The details of what is involved are fascinating, but the stark reality is that people are not that interested in the esoteric details of the census. They really want a hot take on the latest technical development. So I wanted to give a talk that was derived from some of what I’ve learned but not census-specific.

Still, the census work really forced me to think about all that goes into the making of systems, all of the invisible layers. Likewise, when “algorithmic fairness” was the hot topic, I had worked with a group of peers to identify and map different “traps” that technologists tend to fall into. All of us have engaged with the AI conversation in different ways. (Two of them went on to work for the Office of Science and Technology Policy in the White House in the US to do AI policy!) So when I was asked to give this talk, I decided to reflect on lessons derived from the census work and insights generated from the fairness work to speak directly to the AI moment we’re having.

In the process of putting this together, a third dimension also started to form. I’ve always loathed hype and fear cycles with a passion. Of course, it’s easy to be upset with technologists who go all hype on things. But I’m concerned that many of my peers, dear friends even, have embraced the same logic of determinism and inevitability that I’ve seen in the tech industry for decades. And so, in thinking about the AI spectacle, I also thought it would be constructive to intervene in the pervasive determinist thinking in academia as well.

Those ideas blended together to form the basis of this talk. More generally, I am hoping that scholars can step back and be critical without being deterministic in their orientation. I also want academics to recognize how we too are being enrolled in the AI inevitability narrative. To resist this means to not simply be reactive, but to understand the phenomenon in a broader context. And that’s what I attempted to do with this talk.

A critical eye does not just mean: tear down the current system. A critical eye offers a new way of looking at a problem. Consider the Rubik’s cube as a metaphor. You have to put the pieces together in a different dimension and look at it from a different angle. You can’t just line up one face of the cube. That to me is what it means to really critique – to see the interconnections. To appreciate the political and economic arrangements behind the sociotechnical system that is in front of us.

Let’s also be honest – there’s a lot in my talk today that is also very STS 101. For example, I made a snarky joke about bicycles in part to see who was familiar with that lineage of scholarship. The social construction of a bicycle is about as canonical as it gets for critiquing determinism in STS. And yet, we also have the movie The Social Dilemma which argues that a bicycle could only be designed one way. This is literally the worst example that the producers could’ve used for talking about determinism. Cracking a joke about bicycles in this room allowed me to simultaneously identify who saw the STS connections while also signaling to that crowd that I completely understood that some parts of my talk are pretty basic.

 

Before we dive into critique and where it’s headed, I’d like to ask: how do you see the future of tech and its regulation under ‘Trump 2.0’?

Oh boy. The first thing an international audience should understand is the politics around the US’s First Amendment. This issue is ideological and political, not simply legal. Legally, the First Amendment concerns what happens when the government suppresses speech acts. And, to be clear, US law has lots of distinctions here. For example, the right to speak is not the same as the right to be amplified. And some speech acts are actually crimes. But politically, the First Amendment stands for something else. There are many Americans who believe that they should face no consequences for anything that they say. So when people scream “First Amendment!”, they often mean that they should have the right to say anything to anyone, regardless of who it might hurt.

This gets really messy when we start talking about toxic speech, content moderation, harassment, racism, misogyny, homophobia, ways of doing violence or asserting power. In our current political moment, efforts to create conditions where anybody can be comfortable in a public space, free of those kinds of oppressive environments, have been relabeled by zealots as “wokeism”. The ability to critique somebody for their hurtful speech acts has been re-narrated as “cancel culture”. The idea of trying to let somebody know that content that they’re about to encounter might be psychologically harmful or “triggering” has been reframed as indicative of someone’s weak constitution. These are political fights, often discussed as part of our “culture war.”

At the same time, this has significant implications for technology policy. Consider the battles that surround Section 230. To explain Section 230 quickly, the Communications Decency Act (CDA) in the 1990s was a proposal to say that websites needed to address pornography. It was really about pornography at the time, and it was really about kids, and it was struck down as unconstitutional in light of the First Amendment. In short, restricting adults’ access to pornography in order to theoretically protect youth curtailed adults’ First Amendment rights to consume porn. Ironically, there was one little carve-out in the CDA that was placed in there to appease technology companies: Section 230. The idea was that a company would not be liable for the kinds of speech acts that took place across its wires. The goal was to create a parallel with other policies. For example, AT&T, the telephone company, is not liable when people arrange a crime via the phone. After the court case, Section 230 was the only remaining bit of that original law that was not struck down as unconstitutional. Ever since, it has created space for companies to simultaneously be free of liability for the content on their platforms and make choices based on their own business interests to moderate as they will.

Everyone now expects the new Trump administration to reopen CDA 230, but not in a way that will make sense to Europeans. Instead of holding companies more accountable for the hurtful speech acts on their platforms, we expect to see that any act of content moderation by a company will make it ineligible for CDA 230. This means that if companies do any content moderation at all, they won’t retain legal protections. In practice, this will disincentivize companies from curbing hurtful speech or preventing disinformation. The Trump Administration believes that conservatives are harmed by content moderation. So what we’ll see is a new era of “anything goes.”

The other promise that the incoming Trump administration has made is that they will prevent all communication between the Federal Government and the tech companies. At present, the agencies of the US federal government provide information to the tech companies on issues related to disinformation, illegal activity, and acts of violence. Conservative states sued the federal government a few years ago, arguing that when the government informs tech companies of problematic content, the federal government is violating citizens’ First Amendment rights. Even though the Supreme Court ruled that the government has the right to speak to companies if it does not force them to take action, conservative politicians have vowed to take action at the legislative level.

Politicians argue that the ‘censorship industrial complex’ is being leveraged to suppress contrarian views. Much of this has to do with vaccines, but there are plenty of conservatives who are up in arms about other topics. Ironically, these same conservatives do want to suppress other types of speech. For example, they want to ensure that young people have no right to access LGBT content without parental permission – and, if parents do permit their children to access such content, that the state should intervene on grounds of child abuse.

That’s just the First Amendment fight that’s coming. Then, there’s the supposed tariff fight. There will be a tariff fight, but the reality will be different from the rhetoric. If the incoming administration implemented all the tariffs it says it will, it would destroy the economy. Instead, tariffs will be used as a political threat to force governments around the world to get in line or lose access to key technologies. But let’s also be clear that this is already happening. Right now, China, for example, cannot get Nvidia chips; they’re not allowed to.

Another complication concerns technology manufacturing. Under the CHIPS Act, the Biden administration gave Intel significant funding to build domestic chip-manufacturing capacity. And yet, that manufacturing effort has not gone according to plan. One problem is that there are not enough domestic skilled workers. Trump has made many promises about bringing home manufacturing while curbing immigration, but it’s not clear how he’s going to pull this off.

We can also certainly expect to see a massive deregulation effort, especially when it comes to environmental regulations. Given the rapid increase in data centers, this is bound to be devastating in terms of climate change.

Keep in mind that the new Administration’s deregulation goals are going to be enhanced because of a Supreme Court ruling that took place last year. This is a bit esoteric, but in 1984, the Supreme Court ruled in a case known as Chevron that if Congress wrote a law that is unclear, government agencies had the authority to interpret it in light of their expertise. Ever since, Congress has written vague laws and battles have been waged over the interpretation of those laws. Last summer, in a case known as Loper Bright, the Supreme Court threw out its previous ruling, saying that the courts had a right to decide if the agencies’ interpretation was justified. This has opened the floodgates for litigation. So brace yourself to see all sorts of previous efforts by the government to rein in bad behavior by companies be thrown out the door. In the tech space, this will affect all sorts of Biden-era AI policy. But it will also turn back the clock on other things like net neutrality. It’ll be a free-for-all as litigation flies.

Meanwhile, individual states are going to step in and create new laws. We’re already seeing that in the area of kids’ safety. So you can expect to see conflicting laws across the US as a stark reminder that the states are only united on occasion.

 

How do you keep on top of all of that when there’s so much going on at the state level as well?

When I was leading Data & Society, I had a team who kept me informed. That team is still doing fantastic work, but they no longer have any reason to ensure that I know anything! <laugh> But in all seriousness, I believe in the power of networks. I am the kind of person who likes to move between and bridge different communities.

I have the privilege of often being able to travel to give talks and speak with different audiences. I regularly engage with people who I know are at the center of different communities.  Heck, the reason that I am here talking with you is because I decided that I needed to spend some time in Europe to get outside of the US bubble.

Of course, I don’t get to pay attention to everything, but I’m voraciously curious. And so when I encounter something that I don’t know anything about, I try to find the person who does and learn what they know. This means that I know a little about a lot, but can often identify who knows much more.

When I invest in an area as an ethnographer, I attempt to go both deep and wide. The going-deep is probably methodologically obvious to most scholars. When it comes to going wide to get broader context, I have a lot of small tricks. I try to get myself into a wide range of rooms where I can get a sense of different perspectives. When I was younger, one of my favorite games was to volunteer at conferences. For example, I would show up to a random tech conference and walk around the event serving water to people. Because it turns out if you walk up to people serving water, they don’t shut up. They just keep talking to whoever they’re talking to. It’s a great way to listen in and get context even if I couldn’t use that material directly.

Sadly, it’s harder to be an invisible lurker these days.  Like many scholars, I have really felt the absence of Twitter. Twitter was never a representative sample, but it sure did let me gauge the temperature of so many different rooms.

I should also note that I have a bad habit of building networks. In many ways, that was what Data & Society was for me. But I also host book clubs and gather random people for conversations. One of the things that excites me about joining Cornell is that it will allow me new opportunities to build communities and networks.

 

Any specific people or publications you would recommend?

Every publication is going to have partial, incomplete, and biased information. It doesn’t matter which one – or whether it’s academic or public in nature. The point is to triangulate; it’s not to assume you’re going to get one answer.

For example, when it comes to tracking the tech industry and tech policy, I subscribe to mailing lists like The Information and Platformer. They give me a sense of what’s on the mind of the tech industry. Then I track Tech Policy Press to get different perspectives on policy issues. I also subscribe to newsletters put out by different civil society organizations and sit on Slack channels filled with tech people. This gives me a broad level of awareness of US issues. I find it harder to track the European debates, most likely because I’m not as connected to those networks.

Scholarship is in many ways more messy. I mean, sure, I get various updates from different journals, but given how many fields I’m in, that feels more like a flood. It’s hard to sort out what to pay attention to. Truthfully, here’s where I count on having networks who will share information with me. I relish recommendations.

 

But how do you build new networks?

In my opinion, relationship development is an ongoing thing. And I think you can be very intentional about it. Not in some creepy way. For example, whenever I go to a conference, I tell myself that I need to make 2 or 3 connections that I will intentionally follow up on afterwards. And I aim for connections that offer new perspectives. As I tell my students, if you do that over a lifetime, you build a network that’s pretty robust.

One of the things that makes me very sad about academia these days is that it keeps moving towards transactional exchanges. I keep thinking about Marcel Mauss’ famous work on “the gift.” Giving people gifts – even of time – helps build a social fabric. So when I’m talking about networking, I’m not talking about exchanging cards just to check some box or ask some favor. I often follow up by trying to offer someone something. Small things. Opportunities, readings they might like, introductions to someone that they want to know, etc. These are gifts that can start a relationship.

It actually saddens me just how neoliberal academia has become. And not just at the administrative level. Many of my peers are super critical of capitalism, but we’re also so complicit in its upkeep. There’s no doubt that many academics are underpaid, but the response to this has been to create boundaries and focus on what is contractually obliged. In the process, academics often treat their colleagues and students transactionally. But the political fight is not with each other or those who we get to teach. It’s with the system. And so we’re aiding and abetting the dismantlement of what is most wonderful about academia under an illusion that this will make the work more fair.

Consider something like peer review. We all rely on it. And we all know that some people do more of it than others. Increasingly, scholars are refusing to help out. As such, journal editors are having a helluva time getting people to do reviews and so now it takes forever for our submissions to be reviewed. And of course, who gets screwed in this downward spiral? Junior scholars, of course. So what’s the answer? I’m shocked by how many scholars think the answer is to pay people for reviewing. In other words, completely shift the process from being tethered to social capital and gifting and turn it into a capitalist system. I cannot imagine that will improve the process, but it will almost certainly degrade the social fabric of connections in academia further.

We all know that senior scholars need to do more to help out. But I also think it’s important for junior scholars to think about how to create an academic climate that they want to be a part of rather than just waiting for senior scholars or administrators to fix it. There are so many small steps that folks can take. Send a note of gratitude to a scholar whose work you appreciate. Intentionally cite your peers and those who are more junior than you. Intentionally pay it forward.

 

This was great. A practical guide for junior researchers.

Have you ever read Sharon Traweek’s Beamtimes and Lifetimes? The book is supposedly about high-energy particle physics, but it has nothing to do with physics. It’s all about learning to see the rules of a professional discipline. Academia is not unique in this way. Every profession is shaped by implicit and unstated rules. Success in any context is about learning to see the games being played and choosing which games to play and how. And at what levels.

As academics, we often attend to the politics of our discipline or our department, but this leaves us oblivious to the broader politics of our institutions and academia more generally. We were talking about President Trump earlier, but we didn’t talk about why the MAGA movement hates academia. It has nothing to do with substance; that’s just posturing. What upsets conservatives in the United States is the role that elite universities play in shaping people’s social networks. And the ways in which those networks tend to stand in opposition to the MAGA movement.

Unlike students at most European universities – except, perhaps, at Oxbridge – students at most elite universities in the US live in dorms. They leave their friends and family to go to school and they make new connections. These connections have historically formed the elite class in the US. This is why progressive activists have long pushed to diversify universities – to make them more accessible to communities who have been structurally disenfranchised. Meanwhile, we’re living through a backlash to this as our political context is getting reconstructed. This is because the networks that are formed in this manner tend to value progress, privilege scientific knowledge and expertise, and relish global connectivity. This is antithetical to nationalism, conservatism, and values rooted in faith and family. And so it’s very threatening. Of course, most students don’t develop these orientations from professors. Truthfully, we’re just not that persuasive. Transformations happen through the development of new networks. And that’s threatening.

I wish more scholars were aware of how much of a role we play in supporting the development of networks. What students really get out of classes has less to do with what the professor teaches and more to do with the connections made during the class. If we were smart about it, we’d be more thoughtful in encouraging those connections. I’m a huge fan of Priya Parker’s pop book The Art of Gathering. It’s all about being attentive to how you bring people together. I try to bring this mindset to any place where people come together. Truthfully, I think that the thing that we at Data & Society did best was not the publications we put out there, but the gatherings we hosted and the communities we created. We gathered people and remixed people and built networks of people over and over and over again. That network building, social fabric building, that to me is the legacy of that project.

Think about the role of journals in light of this. The reason journals came into being is because they were dialogues. They were conversations between scholars. They didn’t use to be credentialing check boxes, which is what they’ve become. Peer review doesn’t have to be a zero-sum game; we could use it to collectively strengthen each other’s work. I wish we paid more attention to the ways in which the project of intellectual development is a conversation. We should be thoughtfully building on top of each other rather than tearing each other down. I feel like we’ve forgotten a lot of that. We’ve gotten stuck in processes without remembering their purpose.

What would it take for us to go back to basics? What are we actually trying to do in this scholarly endeavor? How do we go back to purpose?

 

Many young, talented researchers are choosing to work for organizations like Data & Society over the university. How has this shift come about? Is it unique to tech critique?

I think what’s happening right now is that the universities are a mess. The incentives are screwed up. Keep in mind that, in the sciences, it’s quite common for PhD students to not seek out an academic job after graduating. And faculty members want students because they need research assistants en masse. This is different in the humanities and social sciences. For decades, we have been producing way too many PhDs compared to possible faculty positions. We do this in part because we need teaching assistants.

I don’t think it’s responsible of us to encourage everyone to get a PhD simply because they’re interested in research. Too many students think that getting a PhD is just like doing more school, but it’s not. It’s training for a profession. But it’s a profession in which there are not enough jobs, where many of the jobs are poorly paid, where the jobs are scattered in locations that might not be compatible with other life goals. Etc.

So lots of students go through this process thinking that they’re getting trained to be researchers and that there are interesting jobs on the other end. And then they get there and are like, wait a minute, I don’t want to move to a remote location. Or I don’t want to adjunct for minimum wage. Or I don’t want to be in a toxic environment. And so they start to look for alternative paths. But their advisors are not well situated to help them find those jobs or even think about alternatives. At least there’s now a name for the phenomenon: “alt-ac.”

Now, keep in mind that there have always been alt-ac paths. In the technical and quantitative fields, it’s been extraordinarily common for people to go to industry or government. In the fields that are closer to policy, it’s been common for people to seek out jobs at think tanks. And there have always been a range of historic anomalies. Consider the Santa Fe Institute, which has been ground zero for complexity theory, from physics all the way through to philosophy of science, for decades.

When I started Data & Society, I wanted to create an alt-ac space for types of research that do not normally appear outside of academia. In particular, I wanted to support scholars who wanted to do in-depth qualitative research. When we created Data & Society as a project, we changed the rules of how to do high quality scholarship, and it was fun. But let’s also be clear, there has never been a funding model for it. And that is a huge challenge both for Data & Society and for similar projects.

It saddens me deeply that Data & Society isn’t able to fund the level of in-depth research we were able to fund in the beginning when it was still kosher to take corporate funding. As much as people like to complain about corporate funding (and for very good reasons), the corporate funding apparatus was so much more flexible than the traditional philanthropy structures when it came to taking chances on in-depth research.

Organizations like Data & Society are “soft money” organizations – which means that they have to constantly be begging for funding. It’s hustle culture. I want to commend the leadership of Data & Society – they’ve done a phenomenal job of navigating this precarity for so many junior researchers. But I know it’s exhausting – it burnt me out. So I worry when people tell me that they want to go to places like D&S as a career path. I’m like: “do you understand what it’s like to constantly fundraise?”

Most researchers want the stability of academia without the headaches. I get it. But most alt-ac paths are not the solution to that. Just a different set of problems. My advisor used to tell me that he got paid to attend committee meetings so he could keep up his hobby of teaching. There are taxes in any institution. The question is, are they tolerable for you? And what’s the equation? And what does it look like under constriction? I do wish we helped junior scholars think more critically about their career paths because too many folks finish their PhD without having thought much about this as part of a career journey. They aren’t encouraged to develop marketable skills that give them options. They aren’t invited to reflect on whether or not fighting for this career is for them. It’s really unfortunate.

You are able to see things as messy, complicated realities, and yet still make definitive statements about them. How do you go from ‘it’s complicated’ to ‘here’s what we should do’?

For me analytically, it’s just constantly looking at puzzles from different directions and then experimenting with interventions. And encouraging others around me to do the same. “Okay, given your positionality and the things that matter to you, how can I match how you look at the system, and how you can take an intervention, and move it just slightly here?” There’s no one fix, it’s all moving together.

Don’t stabilize your point of view, find a different way, and every time you think you have an explanation for something, try to tear it apart, try to look at it from a different angle. Try to understand why it’s logical from its own point of view.

When I read Karen Barad’s approach to methodology through the lens of quantum physics, I was like “aha!” I’m not interested in finding a neutral position or a stable position. I’m interested in moving between worlds, perspectives.

That’s part of why I think in systems, I think in networks. I am obsessed with building networks, and part of it is because those are the structures that allow me to work, to see and to recommend to others.

So I never have an “answer” to what should be done, although I have countless ideas of next steps, next interventions. And those may backfire – they often do. So then you regroup and try again. Anyhow, there are many different paths, but that’s the one that has helped me think.


 

danah boyd is a Partner Researcher at Microsoft Research and a Distinguished Visiting Professor at Georgetown University. Her research focuses on the intersection of technology and society, with an eye to how structural inequities shape and are shaped by technologies. She is currently conducting a multi-year ethnographic study of the US census to understand how data are made legitimate. Her previous studies have focused on media manipulation, algorithmic bias, privacy practices, social media, and teen culture. Her monograph “It’s Complicated: The Social Lives of Networked Teens” has received widespread praise. She founded the research institute Data & Society, where she currently serves as an advisor. She is also a trustee of the Computer History Museum, a member of the Council on Foreign Relations, and on the advisory board of the Electronic Privacy Information Center. She received a bachelor’s degree in computer science from Brown University, a master’s degree from the MIT Media Lab, and a Ph.D. in Information from the University of California, Berkeley.

Elif Buse Doyuran is a Postdoctoral Research Fellow at the ARC Centre of Excellence for Automated Decision-Making and Society, based at Queensland University of Technology. At ADM+S, she works on the Generative Authenticity project, examining the politics of industry responses to the authenticity challenges posed by generative AI. She holds a PhD in Sociology from the University of Edinburgh and an MSc from the London School of Economics and Political Science. Her doctoral thesis on the movement of behavioural science techniques into software development received the SPS Outstanding Dissertation Award. During her PhD, she was a research affiliate at the Data Civics Observatory within the Edinburgh Futures Institute (EFI), where she co-founded and led The Platform Social, a heterodox research community on platforms, economies, and societies. Elif currently serves as the Reviews and Commentaries Editor at the Journal of Cultural Economy.
