His Artificial Intelligence Sees Inside Living Cells

Source: Quanta Magazine

The computer vision scientist Greg Johnson is building systems that can recognize organelles on sight and show the dynamics of living cells more clearly than conventional microscopy alone can.

Your high school biology textbook was wrong about cells. The prototypical human cell — say, a pluripotent stem cell, capable of differentiating into anything from muscle to nerve to skin — isn’t a neat translucent sphere. Nor are its internal parts sitting still and conveniently far apart, like chunks of pineapple suspended in gelatin. In reality, a cell looks more like a pound of half-melted jelly beans stuffed into a too-small sandwich bag. And its contents are all constantly moving, following a choreography more precise and complex than that of a computer chip.

In short, understanding what cells look like on the inside — much less the myriad interactions among their parts — is hard even in the 21st century. “Think of a cell as a sophisticated machine like a car — except every 24 hours, you’ll have two cars in your driveway, and then four cars in your driveway,” said Greg Johnson, a computer vision and machine learning researcher at the Allen Institute for Cell Science. “If you found the smartest engineers in the world and said, ‘Make me a machine that does that,’ they would be totally stumped. That’s what I think of when I think of how little we know about how cells work.”

To view the inner workings of living cells, biologists currently use a combination of genetic engineering and advanced optical microscopy. (Electron microscopes can image cell interiors in great detail, but not with live samples.) Typically, a cell is genetically modified to produce a fluorescent protein that attaches itself to specific subcellular structures, like mitochondria or microtubules. The fluorescent protein glows when the cell is illuminated by a certain wavelength of light, which visually labels the associated structure. However, this technique is expensive and time-consuming, and it allows only a few structural features of the cell to be observed at a time.

But with his background in software engineering, Johnson wondered: What if researchers could teach artificial intelligence to recognize the interior features of cells and label them automatically? In 2018, he and his collaborators at the Allen Institute did just that. Using fluorescence imaging samples, they trained a deep learning system to recognize over a dozen kinds of subcellular structures, until it could spot them in cells that the software hadn’t seen before. Even better, once trained, Johnson’s system also worked with “brightfield images” of cells — images easily obtained with ordinary light microscopes through a process “like shining a flashlight through the cells,” he said.

Instead of performing expensive fluorescence imaging experiments, scientists can use this “label-free determination” to efficiently assemble high-fidelity, three-dimensional movies of the interiors of living cells.
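
The Allen Institute's actual code isn't shown in the article, but the core idea can be sketched in a few lines: a convolutional network is trained on paired microscope images, taking a brightfield z-stack as input and regressing the matching fluorescence channel for one labeled structure. The toy model, tensor shapes, and training settings below are illustrative assumptions written in PyTorch, not the team's implementation.

    # Illustrative sketch of label-free prediction: learn a mapping from a
    # brightfield z-stack to one fluorescence channel. Architecture, shapes,
    # and hyperparameters are placeholder assumptions, not the Allen
    # Institute's published model.
    import torch
    import torch.nn as nn

    class TinyLabelFreeNet(nn.Module):
        """Toy 3D encoder-decoder; real systems use deeper U-Net-style models."""
        def __init__(self):
            super().__init__()
            self.encode = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            )
            self.decode = nn.Sequential(
                nn.Conv3d(32, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv3d(16, 1, kernel_size=3, padding=1),  # predicted fluorescence
            )

        def forward(self, brightfield):
            return self.decode(self.encode(brightfield))

    model = TinyLabelFreeNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()  # regress per-voxel fluorescence intensity

    # Stand-in training pair: a brightfield stack and its matching fluorescence
    # image for one labeled structure (batch, channel, depth, height, width).
    brightfield = torch.randn(1, 1, 16, 64, 64)
    fluorescence = torch.randn(1, 1, 16, 64, 64)

    for step in range(5):  # real training iterates over many paired stacks
        prediction = model(brightfield)
        loss = loss_fn(prediction, fluorescence)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Once trained, the network is applied to brightfield stacks alone, yielding
    # a predicted fluorescence channel with no genetic labeling required.

Training one such network per structure, then applying them frame by frame to an ordinary brightfield time series, is what makes the inexpensive three-dimensional movies described above possible.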

The data can also be used to build a biologically accurate model of an idealized cell — something like the neatly labeled diagram in a high school textbook but with greater scientific accuracy. That’s the goal of the institute’s Allen Integrated Cell project.
