[Image: The Scotch tape camera and Raspberry Pi setup (left); the image obtained by shining a flashlight at the tape camera and taking a picture (center); a reconstruction of an illuminated hand captured by the Scotch tape camera (right). This homework assignment was inspired by DiffuserCam work at the University of California, Berkeley.]
For the first class she would teach as a member of the Caltech faculty, Katie Bouman wanted to help her students unleash the power of computational cameras. That's because Bouman, an assistant professor of computing and mathematical sciences and electrical engineering and a Rosenberg Scholar, is an expert in using computation and mathematics to "see" our universe in new ways, as evidenced by her work as a member of the Event Horizon Telescope team, which used just such a computational camera to capture the first image of a black hole in 2019.
So Bouman created CS 101, a computer science course that she knew would give her students a deeper understanding of how hardware and software can work together to change how we view our world. But in the weeks before the students could gather for their first class, Bouman and her TAs, Mason McGill and Brendan Hollaway (BS '20), suddenly found themselves having to pivot as a result of the coronavirus pandemic and rethink the course's curriculum in order to give their 28 students the same hands-on experience, however physically remote they were.
To make that happen, Bouman, McGill, and Hollaway knew that their best option was to provide each student with their own Raspberry Pi, a simple open-source computer that can be equipped with a camera. Enter the Innovation in Education Fund, an initiative of the Caltech Center for Teaching, Learning, and Outreach that supports inventive teaching methods and that provided the funds needed.
With their Raspberry Pis and cameras in hand, the students were ready to tackle their first challenge: to create high dynamic range (HDR) images. This is something many smartphones do, and you may have noticed the letters HDR in small print at the bottom of your photos. It means your phone has combined elements from multiple exposures to create a single image. For example, if exposing for a person's face would blow out the background sky, the phone can combine the sky from one exposure with the rest of the picture from another.
HDR is one example of how hardware and software work in tandem to create images beyond what traditional cameras and lenses can see. Bouman had her students take multiple images with different properties and use software to merge them. "Computational photography is all about this interplay between modifying hardware at the same time as software," she says.
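As a rough illustration of the idea, here is a minimal sketch of merging an exposure stack into a single high-dynamic-range image. It assumes the exposures are already aligned, that exposure times are known, and that the sensor response is roughly linear; the function name and weighting scheme are illustrative, not the course's actual assignment code.

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge aligned exposures (float arrays scaled to [0, 1]) into a radiance map."""
    radiance_sum = np.zeros_like(images[0], dtype=np.float64)
    weight_sum = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        # Trust pixels near the middle of the range; distrust pixels that are
        # nearly black (noisy) or nearly white (clipped).
        w = 1.0 - np.abs(2.0 * img - 1.0)
        radiance_sum += w * (img / t)   # divide by exposure time, assuming a linear sensor
        weight_sum += w
    return radiance_sum / np.maximum(weight_sum, 1e-8)
```

The merged radiance map still has to be tone-mapped (for example, compressed back into a displayable range) before it looks like the HDR photos a phone produces.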
In another assignment, her students used the camera to capture a light field. Where a conventional camera captures only the intensity of incoming light to create a two-dimensional image, light field cameras also capture the directions from which the light rays arrive—the so-called "light field." Because the capture contains this directional information, the image can be refocused at different depths even after the picture is taken.
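One common way to refocus a light field is shift-and-add: shift each viewpoint in proportion to its position in the aperture, then average. The sketch below assumes the light field has already been resampled into a grid of sub-aperture views; the array layout and the `alpha` refocusing parameter are assumptions for illustration.

```python
import numpy as np

def refocus(views, alpha):
    """Synthesize an image focused at the depth selected by `alpha`.

    views: array of shape (U, V, H, W), one 2-D image per aperture position (u, v).
    alpha: pixel shift per unit of aperture offset (0 keeps the original focal plane).
    """
    U, V, H, W = views.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each view according to its offset from the aperture center,
            # then average: objects at the chosen depth line up, everything else blurs.
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(views[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```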
A final assignment challenged students to capture an image in an unconventional manner. With Bouman's blessing, the students took apart their Raspberry Pi cameras to remove the lens and replace it with a piece of double-sided Scotch tape. The light that shines through the tape produces an image that looks like noise, but students can use math to reveal the subject of the photo.
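One way that math can work: if the blur pattern the tape produces for a single point of light (its point spread function) has been measured, the scene can be estimated by deconvolution. The Fourier-domain Tikhonov inverse below is only a simple illustration of that idea—DiffuserCam-style pipelines use more sophisticated solvers—and the `reg` parameter is an assumed tuning constant.

```python
import numpy as np

def lensless_reconstruct(measurement, psf, reg=1e-3):
    """Recover a scene estimate from a lensless (tape-camera) measurement.

    measurement: the noise-like raw image; psf: the measured, centered
    point spread function of the tape, same shape as the measurement.
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))
    M = np.fft.fft2(measurement)
    # Tikhonov-regularized inverse filter: conj(H) / (|H|^2 + reg).
    scene_hat = np.conj(H) * M / (np.abs(H) ** 2 + reg)
    return np.real(np.fft.ifft2(scene_hat))
```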
The point of the exercise was to understand that lenses, which scientists have used for centuries to study the astronomical and the microscopic, are only one way to see. This is why scientists are turning to imaging technologies that go beyond simply building bigger lenses, says Bouman. In fields ranging from astronomy to biology, lensless cameras can be used to capture imagery where it is not possible to use a standard lens. "You have to think creatively about how we can break the traditional mold of how a camera collects and records light; we can fill in the missing gaps of knowledge with computation," Bouman says.
Bouman's students in this spring's course did exactly that as they applied the imaging lessons to their fields of study in their final projects. In the lab of Elizabeth J. Hong, Clare Boothe Luce Assistant Professor of Neuroscience, graduate student Yue Hu is working on imaging the fruit fly's olfactory system, which serves as a model for its human counterpart. Researchers in the Hong lab had made microscope video recordings of neurons from the parts of the fly brain responsible for olfaction and memory. "To extract the neurons' activities into numbers to investigate, we have to track the outline of each neuron throughout the recording video, which is hard because of the motion seen in the videos," Hu says. She used techniques she learned from Bouman's class—a steerable-pyramid image decomposition and an understanding of signal processing—to address the problem.
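The general idea behind such decompositions is to split an image into separate frequency bands that can each be analyzed or stabilized on its own. The Laplacian-pyramid sketch below is a much simpler relative of the steerable pyramid Hu used, included only to illustrate that idea; it is not the decomposition from the Hong lab's analysis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def laplacian_pyramid(image, levels=4, sigma=1.0):
    """Split a 2-D image into band-pass levels plus a low-pass residual."""
    pyramid, current = [], image.astype(np.float64)
    for _ in range(levels):
        blurred = gaussian_filter(current, sigma)
        pyramid.append(current - blurred)   # band-pass detail at this scale
        current = blurred[::2, ::2]         # downsample for the next, coarser level
    pyramid.append(current)                 # low-pass residual
    return pyramid
```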
Junior Irene Crowell researched the "Sea-Thru" algorithm, which aims to take underwater imagery and correct its colors to what they would be if the scene were viewed through air. Such work is crucial for swimming robots, and Crowell, who is the software lead for the Caltech Robotics Team, says, "We continually struggle with the problem of processing the underwater photos our submarine captures to extract useful information." In Bouman's class, she succeeded in getting a prototype algorithm working and plans to continue developing it.
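The kind of correction Sea-Thru performs can be sketched with the standard underwater image formation model: water adds depth-dependent backscatter and attenuates each color channel at a different rate. In the simplified version below, the attenuation coefficients and veiling light are treated as known constants for illustration; the real algorithm estimates them from the image and its depth map.

```python
import numpy as np

def correct_underwater(image, depth, beta_direct, beta_backscatter, veiling_light):
    """Undo depth-dependent attenuation and backscatter, per color channel.

    image: (H, W, 3) floats in [0, 1]; depth: (H, W) range to the scene in meters;
    beta_*: per-channel attenuation coefficients (1/m);
    veiling_light: per-channel color of the water at infinite depth.
    """
    z = depth[..., None]
    # Remove the backscatter added by the water column ...
    backscatter = veiling_light * (1.0 - np.exp(-beta_backscatter * z))
    direct = image - backscatter
    # ... then undo the exponential attenuation of the direct signal.
    restored = direct * np.exp(beta_direct * z)
    return np.clip(restored, 0.0, 1.0)
```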
"It was a great experience for me because I was able to apply the image processing techniques we learned throughout the class, including color spaces, depth information, and filtering techniques, and apply them to a project I had a personal interest in," Crowell says. "Katie's emphasis on teaching us the physical basis for image formation really came in handy."