AI Rules in Two Popular MBL Courses

Matteo De Bernardo of the Whitehead Institute, a student in Deep Learning@MBL, gets advice from course teaching assistant Diane Adjavon of HHMI Janelia. Credit: Diana Kenney

Everybody is talking about some form of artificial intelligence, or AI, these days. But at MBL, two courses have been taking students into real-world, scientific uses of AI for years. Both courses have changed as the explosive growth in the AI field has made ever-more sophisticated tools available, and researchers are racing to add them to their own toolbox.

One course, called “Brains, Minds and Machines,” explores how new work in AI can provide insights into how the brain works, which itself can lead to new approaches to develop better AI algorithms. While AI can perform many of the same kinds of tasks as human brains, such as interpreting spoken language or identifying the content of images, it does so in very different ways than the brain does, and both are still poorly understood. The course explores the ways that each discipline can provide a deeper understanding of the other.

The other course, called “DL@MBL: Deep Learning for Microscopy Image Analysis,” introduces students to the latest techniques for interpreting huge volumes of biological imaging data, such as of cells and tissues, in some cases allowing them to extract information that would be impossible to glean by human analysis alone.

The Deep Learning Curve

Deep Learning (DL) is a type of Machine Learning (ML), a branch of AI in which vast numbers of examples are fed into a computer model to train it to identify some specific characteristic. For example, thousands of images with cats, and thousands without, are fed into the model so it will learn to identify cats in images it has never seen before. In Deep Learning, the process more closely emulates the way the brain works by using many processing stages, or layers (sometimes thousands), in models called deep neural networks.
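The layered structure described above can be sketched in a few lines of code. The network below is a hypothetical toy (random, untrained weights; made-up sizes), meant only to show how stacking layers is what makes a network “deep”:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A simple nonlinearity applied between layers.
    return np.maximum(x, 0.0)

def forward(image_vec, layers):
    """Pass a flattened image through a stack of layers.

    Each layer is a (weights, bias) pair; chaining many such
    layers is what makes the network "deep".
    """
    h = image_vec
    for W, b in layers:
        h = relu(h @ W + b)
    return h

# A toy deep network: three layers mapping a 64-pixel image
# down to 2 scores ("cat" vs. "no cat").
sizes = [64, 32, 16, 2]
layers = [(rng.normal(size=(m, n)) * 0.1, np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

scores = forward(rng.normal(size=64), layers)
print(scores.shape)  # (2,)
```

In practice the weights are not random: training adjusts them, layer by layer, until the final scores reliably separate the cat images from the rest.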

Shalin Mehta of Chan Zuckerberg Biohub, right, co-director of Deep Learning@MBL, assists student Gloria Lau of University of Illinois, Urbana-Champaign. At left is course teaching assistant Larissa Heinrich of HHMI Janelia. Credit: Diana Kenney

The first week of DL@MBL is “very intense,” said Jan Funke of HHMI Janelia Research Campus, who co-created the course in 2019 and has taught in it ever since. “There’s a lot of exercises and people get hands-on experience using the most current Deep Learning methods.”

“Students really appreciate the hands-on exercises that we have refined over multiple years and their close interactions with teaching assistants and faculty. It’s gratifying to see them training their models within the first three days of the course,” said Shalin Mehta of Chan Zuckerberg Biohub, who has taught in the course since its inception and now co-directs it.

Deep Learning@MBL student Sammy Hansali of Tufts University shows his data (on 2D cell tracking over time) to student Justin Paek of Cornell University. Credit: Diana Kenney

In the second week, students apply these methods to their own imaging data that they have brought from their home labs. “For me, this is the most exciting part of the course,” Funke said.

“For many students, it means that in five days, they accomplish what would have otherwise taken six months or a year,” Funke said. They learn how to format their data correctly to use as inputs for Deep Learning and how to obtain desired outputs, such as delineating the boundaries of cells, sorting images into categories, or tracking the sequential motion of cells, such as in a developing embryo.
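The data-formatting step can be made concrete with a minimal sketch. The helper below is hypothetical (assuming grayscale microscope frames and the batch-channel-height-width layout common in deep learning frameworks):

```python
import numpy as np

def prepare_batch(frames):
    """Format raw microscope frames for input to a deep-learning model.

    Stacks 2D grayscale frames, rescales intensities to [0, 1], and
    adds a channel axis, giving shape (batch, channel, height, width).
    """
    stack = np.stack(frames).astype(np.float32)
    stack -= stack.min()
    peak = stack.max()
    if peak > 0:  # avoid dividing by zero on blank input
        stack /= peak
    return stack[:, None, :, :]

# Eight fake 12-bit camera frames, 128x128 pixels each.
rng = np.random.default_rng(1)
frames = [rng.integers(0, 4096, size=(128, 128)) for _ in range(8)]
batch = prepare_batch(frames)
print(batch.shape)  # (8, 1, 128, 128)
```

Real course pipelines also handle normalization choices, augmentation, and tiling of large images; this shows only the shape and scaling conventions a model expects.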

Compared with the early years of the course, he said, “Now we have much better tools, and some of them work right out of the box” for operations such as segmentation, which is delineating the boundaries of distinct structures, such as cells. Steps that originally took a couple of days to teach can now be covered in just a day, he said, thanks to these new tools.
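To make “segmentation” concrete: a model typically outputs a per-pixel score, and post-processing turns those scores into labeled objects. The sketch below is a deliberately simplified stand-in for what the out-of-the-box tools do (threshold the scores, then group connected pixels); real tools use far more robust methods:

```python
import numpy as np

def label_instances(prob_map, threshold=0.5):
    """Turn a per-pixel probability map into instance labels.

    Thresholds the map, then assigns each connected blob of
    foreground pixels its own integer label via depth-first search.
    """
    mask = prob_map > threshold
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue  # pixel already belongs to a labeled blob
        current += 1
        stack, labels[start] = [start], current
        while stack:
            y, x = stack.pop()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    stack.append((ny, nx))
    return labels

# Two well-separated fake "cells" in a 6x6 probability map.
probs = np.zeros((6, 6))
probs[1:3, 1:3] = 0.9
probs[4:6, 4:6] = 0.8
labs = label_instances(probs)
print(labs.max())  # 2 distinct cells found
```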

Maria Theiss of Harvard Medical School, a student in Deep Learning@MBL, classifies an image into three categories of tissues to check if the computer model is doing so correctly. Credit: Diana Kenney

Among the kinds of datasets that students have brought to the class are images containing different categories of cells, such as some that have a certain disease and some that don’t. They are looking for markers that will enable software to distinguish between the different cell types.

“Machines can look at hundreds of thousands of images and tell you the difference between the diseased and the healthy state,” as long as the differences do exist in the data, Funke said. That is something that could never be accomplished without AI tools.

While AI tools can provide impressive results, such as removing “noise” from an image to produce a clean, beautiful version, the class also emphasizes the importance of choosing when not to use them.

Twenty-four students and 15 teaching assistants participated in Deep Learning@MBL over the course’s two weeks. Here, the “TA table” prepares for the students’ data presentations. Credit: Diana Kenney

“We don’t just want something that looks impressive. We want something that is as true as possible to the data you’re dealing with… We want to be able to verify the results,” Funke said.

“We emphasize that our lectures and exercises show both sides of the coin: the exciting capabilities of new AI tools, and their failure modes as well,” Mehta added.

Intelligence, Artificial and Otherwise

“Brains, Minds and Machines” is now in its tenth year. Held each August at MBL, the course is organized by the joint MIT/Harvard Center for Brains, Minds and Machines.

“It’s been an exciting journey from the beginning, seeing and contributing to the rise of some of the emergent algorithms and technologies that have been blooming in AI,” said course co-director Gabriel Kreiman of Harvard Medical School.

Class photo for the 2024 Brains, Minds, and Machines course. Credit: Kris Brewer

“We are trying to create and train a new generation of scholars who can converse in neuroscience, cognitive science, and computer science, and build bridges between AI and the study of brain function,” Kreiman said.

The course has nine teaching assistants who help scour the latest developments in these fields and who “have kept us on our toes,” Kreiman said. “Every summer we encounter new things, and we’re doing things that are different from the previous year.”

Students bring their own projects to pursue within the course’s broad subject areas: filling in the blanks in an image where some objects are partially blocked from view, for example, or building computational models of the visual cortex to understand how the brain processes visual information. “The projects are often really at the cutting edge of research,” Kreiman said.

Students in Brains, Minds, and Machines race their hand-built boats at Stoney Beach. A boat-building competition is an annual team-building exercise and highlight of the course. Credit: Kris Brewer

The course’s other co-director, from MIT, is an expert on language, Kreiman said, “so a lot of projects are related to language, and there’s been a lot of exciting developments.” Large language models such as OpenAI’s ChatGPT have most recently changed the landscape, he said, “but there are many other projects that have to do with language as well.”

Experts come to the course to share their most recent and exciting findings, he said, with a roster that changes from year to year as the fields advance.

From left, Rachel Lee, Meghan Collins, and Guillaume Bellec in the Brains, Minds, and Machines course. Credit: Kris Brewer

When they started the course 10 years ago, the idea was “We are starting a new field,” Kreiman said. Since then, the intersection between brain science and computer science has indeed proliferated. MIT has introduced a new major that combines the two fields, for example, and institutes at Harvard and MIT “aim to build bridges between neuroscience and AI,” he said.

Given the accelerating pace of AI, these courses are bound to keep pushing innovation at the intersection of biology and machines.