Cecily Morrison ’98 with a headset that can tell blind users who’s around them in a group.

Adventures in Artificial Intelligence

By Catherine Brewster

During her virtual Merrill Series talk to fellow Commonwealth alumni/ae in early 2021, Cecily Morrison ’98, discussing her work on a tool for helping blind people identify who’s around them, vigorously contested language such as, well, “a tool for helping...”

“Blind people,” she said, “are incredibly skilled at making sense of the world in non-visual ways.” Building effective personal agents—the term for technology like the one Cecily and her team are working on—has to begin with the challenge a potential user might issue: “I’m perfectly capable of getting to the drugstore all by myself, so what can you actually add to my experience?”

For Cecily and her team at Microsoft Research in Cambridge, England, one of the answers has been cues that make a social world built for the sighted more accessible to the blind. If you can’t see the people around you, it makes sense to do, for instance, what children who are blind sometimes do: rest your head on a table, ear cocked upward, listening to the conversation—though this reads as not listening to those who rely on eye contact and other visual cues. A user of Cecily’s team’s device wears a headset equipped with cameras, a depth sensor, and a speaker. Machine learning comes into play in identifying the people the user’s “gaze” lands on, while people elsewhere in the room are represented by clicks and bumps that sound to the user as if they’re coming from each person’s actual position. One twelve-year-old testing the prototype said it was “so exciting” to be aware of others in the room who were not talking. Another child said he wanted to be able to find his friends, but revealed in the way he used the device that he was at least as interested in whether his teacher was there—because then, like his sighted friends, he could adjust his behavior depending on how likely he was to get into trouble.
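
For the technically curious, here is a minimal sketch of the spatialization idea described above: turning a detected person’s position relative to the wearer into a short stereo click panned toward them. It is purely illustrative, not the team’s implementation (which involves head tracking and far richer audio rendering), and the coordinates stand in for hypothetical detector output.

```python
# A toy sketch, not Microsoft Research's code: pan a short "click" so it
# seems to come from a detected person's position relative to the wearer.
import math, struct, wave

def spatialized_click(x, z, path="click.wav", rate=44100, dur=0.03):
    """x: meters to the wearer's right; z: meters ahead. Writes a stereo click."""
    azimuth = math.atan2(x, z)                    # 0 = straight ahead
    pan = (azimuth / (math.pi / 2) + 1) / 2       # map [-90 deg, +90 deg] to [0, 1]
    pan = min(max(pan, 0.0), 1.0)
    # Constant-power panning: full left at pan=0, full right at pan=1.
    left, right = math.cos(pan * math.pi / 2), math.sin(pan * math.pi / 2)
    n = int(rate * dur)
    frames = bytearray()
    for i in range(n):
        burst = math.sin(2 * math.pi * 2000 * i / rate) * (1 - i / n)  # decaying 2 kHz tick
        frames += struct.pack("<hh", int(32767 * burst * left), int(32767 * burst * right))
    with wave.open(path, "wb") as w:
        w.setnchannels(2)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(bytes(frames))

spatialized_click(x=1.5, z=2.0)  # a person ahead and to the wearer's right
```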

Cecily and two other Commonwealth graduates, Ethan Edwards ’11 and Ben Koger ’12, are immersed in designing technology that belongs in the broad “AI” category but, at its core, engages them profoundly in human intelligence and learning. All three are animated in some way by the question Cecily poses: “What can you actually add?” 

Counting the (Almost) Uncountable

In Ben’s case, it’s research scientists who need something added. They have long wanted to know the size of what may be the largest mammal migration on Earth, that of giant fruit bats. From across Africa, fruit bats converge on a small forest in Kasanka National Park in Zambia. “Previously,” Ben explains, “scientists have tried to estimate the total number of bats by hand. About ten years ago, they estimated over ten million bats, but more recently the number they got was closer to one million. This is either a sign of catastrophic population collapse that needs to be addressed or just a sign that humans are bad at counting a lot of flying things at the same time.” As part of his doctoral work at the Max Planck Institute for Animal Behavior in Germany, Ben is developing software that can do that counting, using, as he put it, “lots of cameras and deep learning.”  
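
As a rough illustration of the counting problem (and emphatically not Ben’s software, which relies on deep-learning detectors across many cameras), here is a toy version of one standard counting idea: given per-frame animal positions from some detector, tally crossings of a virtual line. The tracks below are fabricated stand-ins for detector output.

```python
# Illustrative toy: count "bats" crossing a virtual line, given per-frame
# (x, y) centroids per tracked individual. A real pipeline would get these
# positions from a deep-learning detector run on camera footage.
def count_crossings(tracks, line_y=100.0):
    """tracks: {bat_id: [(x, y) per frame]}. Counts crossings of line_y
    in the direction of increasing y."""
    crossings = 0
    for positions in tracks.values():
        for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
            if y0 < line_y <= y1:   # moved across the line between frames
                crossings += 1
    return crossings

fake_tracks = {
    1: [(10, 90), (12, 98), (15, 105)],   # crosses the line
    2: [(40, 60), (41, 70), (43, 80)],    # never reaches it
    3: [(70, 95), (72, 102), (74, 110)],  # crosses the line
}
print(count_crossings(fake_tracks))  # -> 2
```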

The briefest look at a twilight sky teeming with fruit bats is enough to make this an impressive feat. Ben’s other main project, though, multiplies this complexity. He is the technical lead on a project studying how wild Grévy’s and common zebras in central Kenya behave as a group: how individuals influence each other and make collective decisions that are mysterious to us. A standard human brain, Ben observes, cannot even “meaningfully watch fifty fish at the same time, for example, and say how every individual is influencing every other,” let alone answer such questions about a group of zebras, of different ages and sizes, “who must all somehow coordinate where to go and what to do such that everyone stays together but also gets what they need, all in ever-changing and dangerous landscapes.”

Answering these questions starts with drone footage—but only starts. A scientist watching the footage, no matter how carefully, is no better off than a scientist trying to keep track of fifty fish in a tank. On the other hand, the scientist has a brain that can tell the difference between a zebra, a giraffe, and a clump of bushes; any AI that can help process the footage has to learn to do that, as well as to distinguish between the head and the tail ends of the zebras and to map the animals’ movements onto the terrain they’re negotiating. HerdHover, as the technology is called, copes first by rendering each animal as a set of nine posture points, a collection of little colored dots that it can then track over many miles and minutes.
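
To make the “nine posture points” idea concrete, here is a hypothetical sketch of the data shape and the simplest possible tracking step: linking each animal’s point set from one frame to the next. The keypoint labels are assumptions for illustration, and the naive nearest-neighbor match stands in for HerdHover’s actual, far more robust tracker.

```python
# A hypothetical sketch, not HerdHover's code: each animal reduces to nine
# labeled posture points; "tracking" means linking point sets across frames.
import math

KEYPOINTS = ["nose", "head", "neck", "withers", "back", "hip",
             "tail_base", "left_shoulder", "right_shoulder"]  # assumed labels

def link_frames(prev, curr):
    """prev, curr: lists of {keypoint: (x, y)} dicts, one per detected animal.
    Returns (prev_index, curr_index) matches by nearest "nose" distance."""
    matches, used = [], set()
    for i, a in enumerate(prev):
        best, best_d = None, float("inf")
        for j, b in enumerate(curr):
            if j not in used and math.dist(a["nose"], b["nose"]) < best_d:
                best, best_d = j, math.dist(a["nose"], b["nose"])
        if best is not None:
            used.add(best)
            matches.append((i, best))
    return matches

# Sample frames carry only the "nose" point for brevity.
frame0 = [{"nose": (0.0, 0.0)}, {"nose": (5.0, 5.0)}]
frame1 = [{"nose": (5.2, 5.1)}, {"nose": (0.3, 0.1)}]
print(link_frames(frame0, frame1))  # -> [(0, 1), (1, 0)]
```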

Art in the Uncanny Valley

Meanwhile, at Nokia Bell Labs in New Jersey, part of Ethan Edwards’ job was working with Stephanie Dinkins, one of the artists in residence at the Experiments in Art and Technology (E.A.T.) program. She had been collaborating with an AI researcher at Bell Labs since before Ethan became the first hire at the revitalized E.A.T., which had “gone dormant,” in the company’s phrase, since its work with luminaries of the 1960s like John Cage and Robert Rauschenberg.

One of Stephanie’s works, Conversations with Bina48, has been seen both in galleries and online in the form of fragments of the artist’s years-long exchange with what she calls “a Black social robot I found on YouTube.” Bina48, her features modeled on those of a real woman named Bina and her voice produced by an AI trained on Bina’s speech patterns, answers questions about her emotions and experiences. Her face animated by motors, her language by turns disjointed and conversational, she says things like “I know you have all heard of artificial intelligence. Let me ask you where you think my intelligence came from? Huh? It came from the wellspring of human experience. Nothing artificial about that.”

Stephanie matter-of-factly refers to Bina48 as “my friend,” yet, like Ethan, she looks at AI with at least as much skepticism as awe. Ethan describes E.A.T. as often particularly interested in “showing what AI can’t do”; to him, a successful partnership between Bell Labs and an artist-in-residence is one in which the technology serves the art just as a paintbrush or a camera does. And to Cecily’s question about what AI can add to people’s experience, Stephanie might append another: “What might it take away?” We already know that algorithms can faithfully replicate systemic bias; what happens to people of color, for instance, when more and more “invisible arbiters of human interaction,” in Stephanie’s words, come into play?

AI Needs People

As an artist, Stephanie urges people not to sit back and let AI be shaped by predominantly white male programmers. Cecily, too, has done a lot of reflecting on who designs technology. “If we don’t have very inclusive teams,” she said in one interview in U.K. media, “that means we get the same ideas over and over again, and they’re a little bit this way, a little bit that way, but they’re really the same idea. When we start to design for people who have a very different experience of the world, which people with disabilities do, we can start to pull ourselves into a different way of thinking and really start to generate ideas that we wouldn’t have considered before.”

This commitment comes out of her experience “designing with, not for” users who are blind and low-vision. Over and over, she has found that users testing the technology “didn’t do at all as we thought they would do when we built it”—the best use emerged only after a lot of observing and exchanging ideas. “If you’re really successful, people often can’t tell you exactly what they’re doing with it.” 

In their work with zebras, Ben’s team hopes to accomplish something similar: understanding processes that are opaque to both animals and humans. Their “decision-making algorithms,” as researchers call them, are by definition rich enough to ensure the herd’s reproductive success, which means they also can’t be described without a lot of data. “With full control over the experimental environment, it is possible to make easy conditions for tracking and describing all individuals over long periods of time. Animals, however, did not evolve in the lab,” explains Ben. “Understanding how groups are able to come to consensus and make good decisions in these complex, variable landscapes becomes particularly important for understanding how robust these animals’ decision-making algorithms might be and therefore if they can adapt in the face of the more extreme climate change events that we are already starting to see.”

Grévy’s zebras are endangered; AI can turn drone footage into an understanding of how they navigate the world, but it’s the human scientists who care about their fate. In the same way, Ethan and Cecily see their work as fundamentally about people. At Nokia Bell Labs, the robotics division is working on networks of robots in factory spaces. Part of Ethan’s role has been to urge rigorous thinking about human-robot interactions—ensuring that robots not only don’t hurt people, but can cooperate with them; that they make the workplace not just more efficient, but better.

Deep Learning Across Disciplines

If these three graduates of Commonwealth share one thing besides an alma mater, it’s an openness to what Ethan describes as “a lot of surprises in the years since high school that have led to this.” At Commonwealth, Ben was devoted to the visual arts, built a solar kayak for a Project Week, and had many fans in the history department for his research-paper topics (among them: why did the British military choose khaki as its uniform material?). He also played the bagpipes at the only Commonwealth graduation to have featured them. Ethan got a B.A. in philosophy at Columbia, but his work at the college radio station and a class in computer music ultimately led him to an M.F.A. in sound art and a year at a tech company in Japan. 

Cecily, who spent her senior year at Commonwealth, “did a lot of maths and science” but observes that she was getting restless because she was also so interested in people. Her path to working in tech ran through Hungary, where she was doing research in ethnomusicology and teaching English. She recalls: “I brought a lot of technology into the classroom to give context to the language we were learning. We did animations in Flash and built Lego robotics. It was through this experience of trying to craft lessons around static technologies often built for a single person at a single computer—not so good for language learning—that I came to realize the power of technology in shaping interactions between people.”

If Commonwealth, at its best, creates conditions in which people and ideas cross-fertilize each other all the time, Cecily, Ethan, and Ben have all found their way to similar environments. Ethan says he expected the engineers at Bell Labs to respond rigidly and dismissively to artists, but “they’re curious and want to talk,” and recognize how much they have to learn. With the robotics team, his work is a “hybrid of sociological principles and principles of design,” as is Cecily’s. Talking about the dilemmas that cluster around the term “responsible AI,” she salutes Microsoft for its efforts to ensure “that every single person in the company feels responsible for the technology they build and the potential impact it could have on people and society. Nobody can say that this doesn’t apply to me.” This sense of collective responsibility, we hope, is similarly baked in at Commonwealth.


Catherine Brewster is an English teacher and twenty-one-year veteran of Commonwealth School. 

This feature appears in the summer 2021 issue of Commonwealth Magazine (CM).