By Catherine Brewster
Dedicated readers might recall that, only a few years ago, we covered “Adventures in Artificial Intelligence,” interviewing alumni/ae harnessing AI for projects like tracking populations of endangered zebras and translating movement into sounds blind people can use to “see” the world around them. Simpler times.
Since then, ChatGPT and similar large language models have prompted more existential questions: What is the purpose of writing, of education, of thinking for oneself? What can we outsource, and what should we? Of course, versions of these come up with any technological advance, but none has registered as such a body blow to English teachers since I became one. Every September, I’ve told my seniors, “After high school, you can run away from anything else you’ve had to learn here—math, history, science—but not writing.” Is that still true? Will writing your own sentences become akin to making your own shoes, the domain of eccentrics, preppers, and multinational corporations? Is our prose diet about to be flooded with the equivalent of, as my history colleague Melissa Glenn Haber ’87 put it, high-fructose corn syrup? This simile was perversely comforting, as an example of the kind of thing I don’t think ChatGPT could (yet) come up with.
At class meetings, several juniors made a case for thinking of large language models the way math teachers think of calculators: “We still teach first graders arithmetic, because if they can’t do it themselves, they won’t be able to understand math when they’re sixteen.” Another, noting that LLMs can only draw on the material they’re trained on, asked indignantly, “Why would you want to write an essay if you’re not going to say something that hasn’t been said before?” A classmate bravely and gently pointed out that, under pressure, even a person who believes in learning for its own sake can still be tempted by a free, fast, and tirelessly on-demand source of passable academic prose. Unsurprisingly, I continue to look for affirmation from people who already believe that writing is inextricable from thinking—a group that’s probably overrepresented at Commonwealth. But I also asked the following questions of a few avowed technologists among our graduates. Here are their answers, along with ChatGPT’s for good measure.
First, Meet Our Technologists...
Mark Towfiq ’85 credits Bruce Molay’s courses at Commonwealth, along with the aunt who gave him a Commodore 64 in eighth grade, for his gradual realization that computer science wasn’t just a hobby but what he enjoyed “more than anything.” With a degree in CS from Princeton, he “drifted into e-commerce and two-sided marketplaces” and led the team that built and launched Walmart.com. Since then, he has become an expert in “modernizing systems to use a digitally native way of thinking,” at various startups, PepsiCo, and now Estée Lauder. He worries about the “Frankenstein’s monster” possibilities of generative AI: “We haven’t built the necessary safeguards to insulate ourselves from potentially disastrous consequences, from criminal uses to the ability to generate misinformation and disinformation at scale. It’s not hard to imagine that leading to a world unmoored from facts and reality, and the collapse of governments, persecution, and mob rule.”
Jamie Matheson ’08 earned his undergraduate degree at Hampshire College, where he also launched his first startup, Trext. In 2017, he co-founded Carta Healthcare, which uses AI in conjunction with a team of nurses. “The AI system analyzes clinical notes, lab results, device data, and outcome information to suggest answers. Our nurses then verify and complete the process,” he explains. “Our approach is a ‘glassbox’ model, which means all data we present has a clear and traceable source. In contrast, ChatGPT’s suggestions can be difficult to understand due to its occasional hallucinations, making it unsuitable for data abstraction. We also employ a statistical methodology, inter-rater reliability, to assess the quality of AI-generated suggestions compared to nurses working without the software. The AI ultimately helps us build better datasets and, as a result, enhance the quality and consistency of healthcare while reducing its cost.”
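For readers curious what inter-rater reliability looks like in practice, here is a minimal sketch using Cohen's kappa, one common agreement statistic; the labels and data are invented, and the article does not say which statistic Carta Healthcare actually uses.

```python
# Illustrative only: Cohen's kappa, one common inter-rater reliability statistic.
# Data and labels below are hypothetical; this is not Carta Healthcare's method.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for agreement expected by chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum(counts_a[l] * counts_b[l] for l in labels) / (n * n)
    return (observed - expected) / (1 - expected)

# e.g., AI-suggested answers vs. a nurse's independent answers for the same records
ai    = ["yes", "no", "yes", "yes", "no", "no"]
nurse = ["yes", "no", "no",  "yes", "no", "yes"]
print(round(cohens_kappa(ai, nurse), 2))  # 0.33: modest agreement beyond chance
```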
Tahmid Rahman ’13 didn’t take his first programming course until he was a sophomore at Swarthmore College. He then majored in math and computer science alongside DJ’ing a radio show, playing a lot of Ultimate Frisbee, and coordinating the college writing center. Hired as a software engineer by Google in 2017, he now works for Google Travel in Cambridge. At a Commonwealth assembly in March, he laid out some of the puzzles and tensions around large datasets and privacy, citing Latanya Sweeney’s 1997 success in extracting Governor William Weld’s health records from publicly released state-employee medical data that was, the Governor had assured everyone, anonymized.
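The attack Tahmid described works by joining a "de-identified" dataset against a public one on quasi-identifiers such as ZIP code, birth date, and sex (Sweeney used voter-registration lists). A minimal sketch with invented records, not the actual datasets:

```python
# Illustrative only: a linkage (re-identification) attack on invented data.
# "Anonymized" records keep quasi-identifiers that also appear in a public list.
medical_records = [  # no names attached, yet...
    {"zip": "02139", "birth_date": "1950-01-15", "sex": "M", "diagnosis": "hypertension"},
    {"zip": "02140", "birth_date": "1972-03-02", "sex": "F", "diagnosis": "asthma"},
]
voter_roll = [  # public record with names and the same quasi-identifiers
    {"name": "A. Smith", "zip": "02139", "birth_date": "1950-01-15", "sex": "M"},
    {"name": "B. Jones", "zip": "02140", "birth_date": "1972-03-02", "sex": "F"},
]

def reidentify(records, public):
    """Join the two datasets on (zip, birth_date, sex) to recover names."""
    index = {(p["zip"], p["birth_date"], p["sex"]): p["name"] for p in public}
    matches = []
    for r in records:
        key = (r["zip"], r["birth_date"], r["sex"])
        if key in index:
            matches.append({"name": index[key], **r})
    return matches

for match in reidentify(medical_records, voter_roll):
    print(match["name"], "->", match["diagnosis"])
```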
Arthur Mateos ’15 devoted himself to learning (among many other things) both French and Mandarin at Commonwealth. Though he majored in math at Princeton, he wrote his senior thesis in linguistics, the field in which he is now working toward a Ph.D. at UCLA. While he says his knowledge of AI, from a summer in a biology lab modeling animal behavior, is outdated—“a couple of generations back from what large language models can do now”—he’s able to explain, as someone who studies human language, what ChatGPT doesn’t understand if it doesn’t recognize as nonsensical a sentence like “What language did they hire someone who speaks?” (that is, a muddled attempt at asking “They hired someone who speaks what language?”). Like Steven Pinker, he’s fascinated by the capacity of ordinary children learning human language to infer such rules from a tiny fraction of the input ChatGPT was trained on. For him, the largest worry about LLMs, “which is not a language question,” is that they’re overseen by for-profit companies who aren’t bound to act in the interest of society: “Being the administrator of the model gives you a lot of power over how it’s trained, what biases are included, and who can use it.”
Alexandra Bates ’21 (known as Sasha at Commonwealth, where she maxed out our STEM offerings from differential equations to relativity) is an undergraduate who spent a gap year working at the Tenenbaum Lab at MIT and with a number of student groups focused on AI safety. Coming out of her freshman year studying math and physics at Columbia, she found herself increasingly preoccupied with the imminence of “human-level AI”—that is to say, both interested in and frightened by what’s known as “the alignment problem.” As Alexandra explained it to me, if we have no plan to shape future AI so that it shares human values, there’s no reason to believe that it will. In that case, “we could all end up enslaved by robot masters. That sounds hyperbolic, but if we don’t program them to care about human flourishing, why would they?”
In Discussion
What’s your reaction to Vanderbilt’s ChatGPT-generated “message of unity” to students after the Michigan State shootings? Students were upset, but will they be five years from now? Should they be?
Mark: “They were right to be upset. It goes to authenticity, in the same way that a handwritten card signals that you really are thinking about someone.”
Jamie: “Using an auto-generated response for such a serious incident seems inappropriate. It’s crucial to ensure that AI-generated content is contextually relevant and sensitive, especially when dealing with emotionally charged situations.”
Arthur: “Statements like that tend to follow a template anyway, so I’m not sure their use of ChatGPT makes all that much of a difference here. I think what really matters is whether the person whose name is on the statement believes it and will stand behind it.”
Tahmid: “I can see that if Vanderbilt becomes dependent on such a tool, that’s a problem. I think a lot depends on what Vanderbilt says to students about using ChatGPT for academic work.” [The Vanderbilt Hustler quotes one student as saying it would be considered cheating, which indeed it would at Commonwealth, but reports no confirmation from the university that it would.]
Alexandra: “Writing something yourself is one signal that you really care about it, so using ChatGPT reads as evidence to the contrary. So I can understand the students’ anger, even if there are lots of other ways writing can be dishonest. Right now, human emotions are out of reach as a heuristic for large language models, but they’ll get there.”
ChatGPT: "As an AI language model, I do not have emotions, but I can provide an objective analysis. The ChatGPT-generated ‘message of unity’ issued by Vanderbilt in response to the Michigan State shootings was an attempt to provide comfort and support to the student body during a difficult time. While some students may have been upset with the impersonal and automated nature of the message, it is important to recognize that the purpose of such a message is to acknowledge the tragedy and express solidarity with the affected community.”
Assuming chatbots will only get better at writing, should students still learn to do it themselves?
Mark: “Yes. The act of writing is developing the thought muscle, arranging thoughts in a convincing way that exposes a reader to a new idea. The principles I learned at Commonwealth—economy, sensitivity to diction—are real skills that, once you develop them, never go away. Yesterday, working with my team on a budget, I spent an hour changing words on every page of a slideshow, because part of a budget is convincing, ultimately, the CEO that your team deserves its cut.”
Jamie: “Good writing is about clearly expressing and communicating ideas. Even as chatbots improve, students should continue learning to write themselves, as it helps them develop critical-thinking skills, articulate their thoughts effectively, and convey complex ideas to others. Writing is not just a mechanical process; it is also a way to enhance understanding and foster creativity.”
Tahmid: “Well, how else would you communicate ideas?! A distinctive writer’s voice is something worth developing. Maybe LLMs will end up raising the quality of ‘average’ writing. Maybe the norm will become using an LLM to generate a first draft, then improving it. But there are so many questions to think through before we can use LLMs in these ways responsibly. For example, I wonder what LLMs will do to originality; already it seems as if a lot of new films and games are just variations on the ones that are already out there. And I wonder if anyone developing LLMs consulted with schools about what they were going to do to education.”
Arthur: “The act of turning jumbled words and incoherent thoughts into sentences will be doable by a computer, but when you’re putting something out there, you want it to reflect what you think. Language models do not think; schools still need to teach people to do that. Your first pass at expressing an idea won’t be the best one. Writing isn’t just making sentences. The writing process includes thinking and editing and rearranging until you’ve got the most compelling version.”
Alexandra: “It makes sense for education to concentrate on skills that can’t be automated—and also that, eventually, everything will be automated. At that point, I think writing will become something you won’t have to do but can do if you enjoy it, which I always have, as part of a productive and happy life.” [When I took the position you might expect from an English teacher—that until you can write clearly, you can’t think clearly—she pointed out, “You phrased that in a very binary way. How about thinking of writing as a tool that can help people think better? In the very long term, I think human happiness matters more than the capacity to think and write.”]
ChatGPT: “Yes, students should still learn to write themselves even as chatbots and other AI technologies continue to improve their writing abilities. Writing is a fundamental skill that is essential for communication, critical thinking, and self-expression. While AI tools can help to automate certain tasks and improve efficiency, they cannot replace the creativity, nuance, and personal touch that comes from writing oneself. Learning to write also helps to develop important cognitive skills, such as attention to detail, logical thinking, and effective communication.”
Computer science is thriving at Commonwealth. What do you hope those students aspire to and pay attention to if they’re considering careers in the field?
Mark: “Now that my transformation to full-time manager is complete, I don’t write a line of code. If that’s not where you want to end up, you want the ‘tech track’; some companies are good at leaving people alone to just code. But software engineering actually is about people, which a lot of people don’t understand when they start, or sometimes ever. You are on a team no matter what; you can write great code, but there’s also a user-experience person and a product manager who’s trying to align what you’re building with the needs of the business and make sure that what you’re building is actually useful.”
Jamie: “Many students enter computer science classes expecting to learn the skills required to be a good programmer. However, computer science is an academic discipline, while programming is primarily an exercise in applied problem solving. The most interesting projects in computer science often involve interdisciplinary ideas from fields like neurology, genetics, or game theory. And good management and collaboration with colleagues, physical tasks, and deep expertise in areas with little or no data are currently beyond AI’s reach. Humans will continue to play a vital role in these aspects of work.”
Tahmid addressed some of these questions at his recent Commonwealth assembly entitled “Take Your Time”: enjoy the people and experiences you discover as you grow up, and be ready to be surprised by your own interests. Despite this, as both he and I noticed, a lot of the students who were eager to talk to him wanted to know the best path for becoming a software engineer at Google. When we talked, he reflected, “I guess something I didn’t understand until I was maybe a senior at Commonwealth was that when you write essays and lab reports, you’re participating in a discourse with other people, and knowing how to do that well, how to pay attention to the bigger picture and when to zoom in on the granular stuff, is a very underrated skill. The technical part of what I do—writing designs and code—can be automated, but not that.”
Arthur, who after college worked as a software engineer at Capital One, says the best aspects of that job were the problem solving and the people on his team. “Working as a software engineer, most of the coding stuff I did, I learned on the job. The first few computer-science courses I took in college were enough to get me started, but after that a lot of it you kind of just pick up as you go. Being able to think through and solve problems—skills that come up in a wide range of disciplines—is really important.” He reflected, “One thing Commonwealth conveys really well is that no matter what you do, you’re part of a society, and your actions affect other people. So whether you end up in computer science or any other field, I hope you use your skills for good.”
Alexandra: “If any current Commonwealth CS students don’t think it’s possible that we could create AI with values other than our own, I hope they’ll get in touch with me!”
ChatGPT: “As an AI language model, I would hope that students who are interested in pursuing a career in computer science would consider the following: As an AI language model, I don’t have hopes, but I can provide an objective answer. For students considering careers in the field of computer science, it is important to aspire to a career that not only provides financial stability but also allows for personal and professional growth. A strong foundation in computer science fundamentals, coupled with the ability to think creatively and problem-solve, will serve them well in any career path they choose. They should also pay attention to the constantly evolving technology landscape and stay up-to-date with the latest developments and trends.”
Catherine Brewster teaches English at Commonwealth.