Some Words To Not

Longer-form Writing by @hoffman@hci.social

My Commencement Address to our 2025 Graduating Class

This is what I said to this year's graduating class of Mechanical and Aerospace Engineers at Cornell.


2025 MAE Commencement - image from Cornell MAE IG account

As you are certainly aware, you are graduating at a technological inflection point, brought about by high-performance probabilistic content generators, sometimes called “AI”.

You are one of the first college classes graduating with this technology at your fingertips, and one of the first wave of people entering a workforce that is increasingly embracing this technology across the board. We have never had a technology that is so adept at writing high-quality text, summarizing large amounts of written data, and performing as well as, or better than, humans at many content-related tasks.

Where does this leave you, the young engineer just entering the workforce?

As someone who has studied AI and the interaction between technology and society for decades, I feel confident in saying that nobody has any idea where this generative content technology will take us.

At times like this, I like to invoke the philosopher and media theorist Marshall McLuhan, who said, and I paraphrase, that, at the outset, the content of every new medium is simply an older medium. He suggested that, at first, film was just used to capture and replay theater performances, and television just broadcast films. Similarly, YouTube initially just posted content from television, and podcasts just made radio available on-demand.

It is only after people experiment with a new medium that they discover new ways to use it that don’t just encapsulate an old medium. This then changes the landscape forever, and we get MTV music videos, TikTok reaction videos, and CGI-generated Marvel movies that have almost nothing in common with stage theater.

Similarly, we are now in a transition period where we use “AI” to generate content from the previous era. We use it to write essays, summarize papers, and do homework, all while waiting to understand and discover the real use of this technology, which might make problem sets, CVs, and even PowerPoint presentations a complete thing of the past.

As some of the brightest people facing this change, it is important to remember, though, that no technology is inevitable. ChatGPT is not a natural phenomenon or a divine gift that we just have to accept as is. On the contrary: It is up to you to shape how it operates, in what contexts it should be allowed, how and when you use it, and even whether you want to use it at all.

I like to tell young people to remember one rule: things are never the way they are for a good reason. Always be skeptical and challenge the status quo, because you are the ones who are going to determine the shape of things to come.

In particular, if I were you, I would be highly suspicious of a tool whose explicit goal is to replace your skills and ability to think, the very things you have worked so hard over the last 20-odd years to hone and improve. Yes, you might be better off in the short term, as you design your next slideshow and craft your next CV, but after you're done submitting everything, I encourage you to debate and discuss with your friends what the end-game of this technology is, and what you want it to be.

It is also worth remembering that any technology is ideological, and the ideology of the tech industry is individualism. This is what Silicon Valley believes in: the individual person. For example, we always hear the story of boy geniuses (yes, they are almost always boys), who took big chances on their own, and went on to invent crazy things out of the brilliance of their minds, to make society better.

Here, I’d like to quote from a book written by a friend of mine, Noam Cohen, entitled “The Know-it-Alls”:

[Silicon Valley] taps into our yearning for a better life that technology can bring, a utopia made real, yet one cannot escape the suspicion that these entrepreneurs may not fully appreciate what it means to be human. That is, not just to be a human individual […] but to be part of a family, a community, a society.

The feminist political theorist Susan Moller Okin argued convincingly that [in the individualistic] fantasy, men magically arrive at adulthood ready to remake the world: How? Raised by whom? If advocates for extreme individualism actually had to acknowledge the work and sacrifice of women to bear and nurture children, Okin contended, as well as the assistance of society in children's upbringing, their arguments would lose all force. No one would then be able to say with a straight face that whatever he has is the product of his own hard work and should be his alone to control. 'Behind the individualist façade […],’ she concluded, 'the family is assumed but ignored.'

And this is a great moment, on this day of your really amazing individual achievement, to not ignore the family and society, who were there with you all the way from Day One. The parents, grandparents, and caregivers who woke up in the middle of the night to hug you back to sleep when you were little, the uncountable people who worked hard, really hard, to make sure you had healthy food on your plate and a clean place to sleep – not just at home, but also here at Cornell. Some of these invaluable individuals sit right here with you…

(Let’s stand up, turn, and thank those here right now)

…others are sitting, tear-filled, on WhatsApp and Zoom across the world, watching you, and yet others are standing outside waiting to clean up after this event.

They are all proud of you. We are all proud of you. Now go out there and change the world. Spend lots of time with your friends and family. Watch movies in theaters. Leave your phone at home when you go out. Be the best version of yourself you can be. Forgive yourself and others. We hope you learned some of the things you need to know, right here.

Congratulations to the MAE class of 2025!

They can go ahead, use it to cheat on their essay. It won't do them much good.

An Experiment

Here's a recent experience I've had with GPT-3.5: I was editing a paper by one of my graduate students, and right from the start, the abstract was very difficult to understand. It started with a slow introductory sentence (a pet peeve of mine) and was riddled with repetition and passive, vague language. By the time I read through it, I wasn't even sure what the main point of the paper was.

So I asked GPT-3.5 to improve the writing.


import openai
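# the pre-1.0 openai SDK reads the API key from the OPENAI_API_KEY environment variable (or set openai.api_key directly)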

prompt = 'How would you improve the following academic paper abstract?\n\n'
abstract = '...'  # <imagine some text here>

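# send prompt + abstract to the (legacy) Completions endpoint; temperature=0 keeps the output essentially deterministic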
response = openai.Completion.create(
                  model='text-davinci-003',
                  prompt=prompt+abstract, 
                  temperature=0, 
                  max_tokens=350)

print(response['choices'][0]['text'])

I am posting the full code above to clarify how easy GPT-3.5 is to use, and what a low barrier already exists for any person or company to offer services based on this technology, assuming, of course, that OpenAI continues to share its models through an API.
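To make that point concrete, here is a minimal sketch, not from the original post, of what a hypothetical "abstract improver" web service could look like. It reuses the same pre-1.0 openai SDK call as above and assumes Flask for the endpoint; the route name and request format are made up for illustration.

import openai
from flask import Flask, request, jsonify

app = Flask(__name__)
PROMPT = 'How would you improve the following academic paper abstract?\n\n'

@app.route('/improve', methods=['POST'])   # hypothetical endpoint name
def improve():
    # the caller POSTs JSON like {"abstract": "..."} (made-up request format)
    abstract = request.get_json()['abstract']
    response = openai.Completion.create(
        model='text-davinci-003',
        prompt=PROMPT + abstract,
        temperature=0,
        max_tokens=350)
    return jsonify({'improved': response['choices'][0]['text']})

if __name__ == '__main__':
    app.run()

A dozen or so lines, a route, and an API key are more or less all it takes, which is exactly the point.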

How Did it Do?

The resulting abstract was surprisingly clear and accurate. It retained all the crucial information from the original text, but started with a strong summary and then clearly laid out the argument of the paper. And that's not all. Reading GPT's version of our abstract, I finally understood what the main point of the student's paper was.

I was impressed. It was as if GPT crystallized the student's ideas that were buried in the badly written text, and carved them out of the raw marble with amazing rhetorical clarity.

Good news everyone, right?

Good News Everyone!

Not so fast.

Bans, Regulation, and Embrace

Anecdotes like this one, which are flooding blog posts, news articles, and social media accounts, have sparked a swift reaction from pundits to policymakers. The main sentiment seems to agree with the recent Atlantic headline “The College Essay is Dead”, meaning we can never give out writing assignments to students again. Others took the opposite side and imagined a brave new world where everyone can write brilliantly.

Policy reactions in the education space were quick to follow, and can be divided into three categories: outright bans, partial regulation, and, in contrast, wholehearted embrace of the technology.

Reportedly, the New York City education department is one of several organizations that blocked access to the technology in schools, to prevent students from cheating on their writing assignments.

Others, like my own college at Cornell University, have forgone outright bans and instead guide professors to add a note to their syllabus that “submitting work created by ChatGPT, or copied from a bot or a website, as your own work violates Academic Integrity.”

The Cornell approach is good in the sense that it shifts the responsibility to the user of the technology, in this case the student, while setting policy boundaries on the organization's part. That said, it is also problematic because it sets up the use of GPT as a kind of probabilistic gamble for the student. The vagueness of “submitting work [...] as your own” may trigger boundary questions like: Can I use the technology just a little without outright submitting its outcome? How much text can I use from GPT without getting caught? What use case can I defend in an academic integrity hearing? And so on.

In response to these bans and warnings, a whole movement of opinion makers and instructors counters that the GPT ship has sailed. We can't prevent students from using it, and therefore, we should not only accept this fact, but embrace it. Instructors are designing assignments where students are not only allowed but required to use ChatGPT, and many are saying that its potential to teach outweighs the potential downsides.

There is even a trending hype around “prompt engineering” for ChatGPT, as if there were some sophisticated theory and practice behind trial-and-error chatting with an AI bot. Call me back when they open a “Prompt Engineering” department at a major university.

New Proposal: Shoulder-Shrugging

Amidst this flurry of banning, regulating, and embracing, not many are proposing the most appropriate reaction: shoulder-shrugging along with a good dose of mentorship for students.

First of all, I agree with those who claim that bans and regulations are not effective. There is no point in starting an arms race between students, professors, cheating-detectors, cheating-detector-circumventors, digital watermarking, watermark-erasers, and so forth. It is perhaps a good jobs program and can be a boost for the economy, but I see it as a huge waste of energy on everyone's part.

But that doesn't mean that I am excited about students using it for writing assignments.

Instead, I propose a radical approach: Teachers shouldn't worry very much about algorithms that can write, and instead spend a significant amount of time explaining to students why they do not actually want to use GPT to write their essay for them.

One reason is that there are many ethical issues with large language models.

But there is also WestWorld.

WestWorld narrative start

The WestWorld Metaphor

Since WestWorld made a resurgence as an HBO series, I tell my students that one of the most important things they need to understand in college is that they are guests in a WestWorld theme park. It will change the way they study.

For those who have not seen the film or series, here is the basic premise: Guests pay a lot of money to spend a week in a huge artificially constructed world populated by hyperrealistic robot “hosts”. Gradually, guests are exposed to a number of carefully crafted challenges, called “narratives”, which they have to conquer with lots of hard work and cunning—not to mention a good amount of sexual and physical violence. Once their week is over, the robot hosts do a movie-set-style scene reset and wait frozen at their narrative starting point, awaiting the next round of visitors. These, in turn, are put on the same quests, with the same challenges, and the same reanimated robot hosts.

You can see where this is going.

Students spend their years in school under a false perception that we professors ask them for pieces of work, which they provide to us. At the day-to-day micro level, it seems that instructors are the customers and students the service providers. We (the faculty) give students specifications for things we want and then they work hard to solve problems we have posed for them in the best way possible. It is as if we are renovating our kitchen and our students are cabinet makers.

This causes a host of confusing situations. First of all, if students are doing all the work, why are they paying us for it? Second, if we are the clients of student homework, why do they get to complain about us not being happy with the results? And so on.

All of this is less confusing once students realize that they are the clients of this strange reverse interaction. In fact, they are guests in an academic version of WestWorld.

Every semester, I spend part of my last lecture letting students in on the secret that we professors are nothing more than very realistic-looking robots who pretend to need them to solve problems we secretly already have the solution for. We pose questions and ask for their best work, but in reality we already have the answers and have done the work ourselves. And the whole thing is constructed in semester-long narrative arcs that do a complete scene reset once the quests are completed, and a new train full of guests arrives at the same exact narrative point where the last one was a year ago.

What Students Really Want

WestWorld sells the enactment of violent fantasies. Given that, for the most part, we do not allow students to play out their violent fantasies on the faculty, what is the WestWorld experience that students pay for, with their money, their effort, and their time?

The answer is perhaps obvious: learning. Amidst the daily discussion about assignments, projects, grades, and credits, students can easily forget that the only useful outcome of all the hard work that goes into a semester (theirs and ours) is knowledge gained by students.

At WestWorld, guests really feel like they are solving life-or-death problems. In reality, they are merely collecting high-end vacation experiences. Similarly, and this is hard to remember when you are stuck on a problem set: You are not trying to solve a differential equation; you are trying to learn.

This gap is even more painful when it comes to creative projects and writing assignments. The elaborate student design project that I, as a professor, have been obsessively dissecting, critiquing, and guiding for weeks, is not important to me at all. It doesn't matter to me or, in fact, to anyone else. I don't care if it works or not, how accurate it is, or how clever the solution was, even though that's all we seem to talk about during the semester. Once the result is perfect, it can be thrown in the trash. The only thing that matters is the difference in student knowledge before and after having done the project. It is a somewhat paradoxical situation, because both instructors and students need to care deeply about something that is essentially worthless.

Destruction of a Sand Mandala

The Return of Craft Writing

Which brings me back to GPT and other AI systems that write for you.

After reading my introductory story, one might come to the conclusion that we should embrace the support that GPT-3.5 gave the authors of the badly written abstract. In this view, the ideas were already in the student's head, and they just had a technical problem of translating these ideas into readable text. “It's like a calculator” is a common quote I hear about ChatGPT. As if the idea is what matters and writing it down is just a necessary evil or technical chore that needs to be done by someone or somecode.

But anyone who writes for a living knows that in many ways writing is thinking. The process of translating vague ideas into a coherent text helps structure ideas and make connections. The time spent editing and re-editing weeds out important ideas from marginal ones. The effort to address an imaginary reader, to clarify things to them, helps eliminate unnecessary style decisions. Finding your own voice helps you understand yourself and your contribution to the world better.

Letting an AI system do this work for you means giving up all of that. It's like sending a robot to do your WestWorld vacation for you, and just sharing the photos it took on your Instagram feed. Behaving in this way is not at all about cheating; it is about missing the whole point. If you care about having clear ideas and becoming better at what you do, you want to be writing.

That's why I think the appearance of hyper-realistic text generation not only does not imply the death of the essay, but may actually usher in a renaissance of appreciation for good, slow writing. The existence of computer-generated writing can take the focus off the end result and re-center students on the fact that writing is a craft with value in its process, and not just a means to get an outcome. Students will finally realize that nobody needs their homework essays but they themselves.

It reminds me of the appearance of perfect mechanical image reproduction by means of photography in the 19th century. This invention did not spell the end of painting, but rather the start of an explosion of painting genres. I recently speculated that, similarly, AI-generated art that can look like any existing visual genre might bring on a renaissance of detailed realistic hand painting. Perhaps the next generation of artists will rediscover the usefulness of learning to capture light with brush strokes. GPT and the like could similarly bring on a return of craft writing. Writing without any computational help, perhaps even without spell checkers and grammar suggestion engines, might make a comeback as a tool for sharpening one's thinking.

But I Just Want to Get a Good Job

Cynics will accuse me of naïveté. What students really want is to get good grades, be done, pad their CVs, have a nice transcript, and get the best job they can. Agreed, some students will do just that. I realize everyone is trying to optimize their own function, and I have nothing against anyone's priorities.

Let them use GPT. It doesn't bother me.

Our role as educators is to remind our students of WestWorld; to ask them to step out of the client-supplier narrative; to remind them that they are doing this only for themselves, and that even the best job in the world won't make them feel accomplished or give them the same satisfaction as being able to have clear thoughts, ideas, and opinions. At the end of the day, they still have to go to sleep with their own thoughts, ideas, and opinions. Unless, of course, they fall asleep in the Metaverse.

Sleeping in the Metaverse

Google has been showing a lot of AI technology recently, garnering the Oohs and Aahs of audiences and tech blogs. For example: Gmail will now auto-complete your emails. You type: “I had” and Google types “a really great time at your party last night”. Enter. Done.

Email auto-completion

This may be a cool application, but it is misguided. Isn't it enough that we don't know how to spell the endings of long words anymore thanks to Autocomplete? Do we really want humans to not even have to think about the second half of their sentences? As a faculty member, I encounter students daily who have a hard time putting together sentences, chaining those sentences into paragraphs, and then combining those into high-school-essay-level arguments. Off campus I see baristas open the calculator app to figure out the change from a $5 bill for a $3.50 purchase.

Yeah, yeah, yeah. I know. I'm such a luddite roboticist. I should realize that technology only helps us be more effective. But how little thinking do we want to be left doing for ourselves? With each such feature, Google's AI is just taking a tiny annoying bit of thinking out of our lives and adding just a tiny bit of automation. But in the process we lose sight of the fact that thinking is actually good for you!

But the problem is not just that with every such feature Google is making us ever so slightly dumber. An important side effect of Google's AI doing more of the thinking for us is that it is also taking away another sliver of thought diversity, and replacing it with more uniformity. What if I wanted to say something slightly different than what Google's machine learning found that 200 million other people wrote when they were in a similar situation? Will I really insist on my phrasing? The text is already there, a mere Enter away. It's so easy! Do I really care that much about those subtleties? Who has time for that? I have ads to click on! Enter. Done.

The result is that we increasingly sound the same. And that's good. For machine learning algorithms at least. The more predictable we are, the easier it is to predict us, the better the AI that is predicting us seems to be! Everyone wins! 🎉

At the same I/O event that announced Smart Compose, we saw the new Google conversation agent booking a restaurant slot and a haircut, much to the amazement of the Internet (“Turing Test: Solved!”). This demo is both impressive and troubling. Impressive because they did something that seems really hard. So, kudos for the great tech work!

But it is also troubling, because one of the reasons it works so well is also that people have themselves become so much more predictable. As with auto-complete and Smart Compose, the technology-centric, efficiency-driven culture that Silicon Valley promotes makes us type less, think less, look around less, navigate our spaces less consciously (more below), and as a result it drives people to be more similar and more predictable. This predictability includes the human on the other end of the phone.

Conversation agent setting up a haircut

Here's the full irony. We get daily news stories about how AI is getting smarter and more human-like. But at the same time we are also getting dumber and more robot-like. And that is, at least in part, because we communicate predominantly through technology, so our communication becomes more technologically-adapted. A true meeting in the middle.

I increasingly feel that with its latest technology, Google just wants to promote more of this, making sure we don't even have to call the hairdresser. God forbid we should hold a human-to-human conversation. With voice! And pauses! The humanity! That may only screw up the next AI down the road.

Finally, Google announced AR overlays for Google Maps. Why do we need this? Just listen to the Google representative on stage, telling us how frustrating it is when it sometimes takes you 20 seconds to realize you were walking in the wrong direction. Yes, that annoying sliver of opportunity left in our world to get lost for a brief moment despite GPS maps on our bodies 24/7.

When I hear “walking in the wrong direction”, I think of it as an opportunity to look around; to see things we were not explicitly looking for; a last vestige of serendipity. In efficiency culture, we don't want any of that. All of these “problems” are now solved! Again, we are one step more predictable, and as a result AI can get better at predicting us, without that annoying wandering-around data noise that messes up the machine learning algorithms.

AI Overlay for Google Maps


Zeynep Tufekci is correct to call Silicon Valley “ethically lost”, and the latest Best-of-Show features emphasize that. Others, too, say it better than I do, in long form. One thing is sure: As the tech industry marches blindly along, there is an increasing need for academic and activist writers to keep an ideological check on this procession.