Methods

In order to take our Proustian walk through the forest, I compared the same forest scenes from a human perspective and an approximation of each animal's perspective. For each animal, I estimated field-of-view, depth-of-field, focal field, and spectral sensitivity from the literature and represented the comparison in side-by-side images. Here's how I created the images:

Simulating animal vision is no easy task. For starters, capturing super-wide panoramas of up to 360 degrees required shooting the initial photos with wide-angle and specialized hemispherical lenses. This gave me a stack of overlapping photos that I then digitally stitched into a single panorama. Typical cameras are designed to capture light only in the human visible range, so for the tetrachromat birds and deer I repeated this process, shooting with a specialized camera that records UV light outside the human visible spectrum. For the frog and fish images, I used underwater housings to take photos below the waterline. For the frog image, I then composited the underwater image with the above-water image to create a split-vision effect.
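The split-vision composite amounts to joining the two photos at the waterline. A minimal sketch of that step, assuming both frames are already aligned and the waterline is a known row index (the function name and demo values here are illustrative, not the actual production workflow):

```python
import numpy as np

def split_vision(above: np.ndarray, below: np.ndarray, waterline: int) -> np.ndarray:
    """Composite an above-water and an underwater image of the same scene
    at a horizontal waterline (row index), as in the frog image.
    Both inputs are H x W x 3 uint8 arrays of the same shape."""
    assert above.shape == below.shape
    out = above.copy()
    out[waterline:] = below[waterline:]  # rows below the line come from the underwater frame
    return out

# Tiny demonstration with synthetic 4x4 frames: sky-blue above, murky green below.
above = np.full((4, 4, 3), (135, 206, 235), dtype=np.uint8)
below = np.full((4, 4, 3), (20, 120, 60), dtype=np.uint8)
composite = split_vision(above, below, waterline=2)
```

In practice the two exposures would also need registration and color matching before the join; this sketch shows only the compositing itself.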

The process for approximating animal color vision. Multiple images were taken with both color and UV cameras. The images were composited into panoramas. The color image was split into red, green, and blue channels and the monochrome UV image was converted to a false-color image. The four channels were mixed to represent the animal’s spectral sensitivity.


Once the panoramas had been created, I split the color image into red, green, and blue color channels. Since UV is recorded in monochrome by the camera sensor, I created a false-color image for this channel. In order to approximate the color vision of the animal, I adjusted the levels of the color channels to match the animal’s spectral sensitivity range. I scaled the blue and UV curves to fit the whole range into the human visible spectrum.
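The channel-mixing step can be sketched as a small matrix multiply: stack the three color channels with the false-color UV channel, then weight them by the animal's spectral sensitivities. The mixing matrix below is a hypothetical placeholder, not measured cone data:

```python
import numpy as np

# Hypothetical matrix mapping camera (R, G, B, UV) channels to an animal's
# four cone responses; these weights are illustrative, not measured.
MIX = np.array([
    [0.9, 0.1, 0.0, 0.0],  # long-wavelength cone
    [0.1, 0.8, 0.1, 0.0],  # medium-wavelength cone
    [0.0, 0.1, 0.7, 0.2],  # short-wavelength cone
    [0.0, 0.0, 0.2, 0.8],  # UV cone
])

def mix_channels(rgb: np.ndarray, uv: np.ndarray, mix: np.ndarray = MIX) -> np.ndarray:
    """Stack the RGB panorama with the monochrome UV panorama into four
    channels and weight them by the spectral-sensitivity matrix.
    `rgb` is H x W x 3 uint8, `uv` is H x W uint8; returns H x W x 4 floats in [0, 1]."""
    stack = np.dstack([rgb, uv]).astype(float) / 255.0
    return np.clip(stack @ mix.T, 0.0, 1.0)

# Demonstration: a pure-UV pixel excites mostly the UV cone.
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
uv = np.full((2, 2), 255, dtype=np.uint8)
mixed = mix_channels(rgb, uv)
```

Rescaling the blue and UV curves into the human visible range, as described above, would then be a further remapping of the resulting channels before display.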

To further approximate animal vision, a black vignette, a desaturation vignette, and a Gaussian blur vignette were added to represent the field-of-view and focal field.


While taking the images, I used camera settings to limit the depth-of-field. In post-processing, I represented an animal's field-of-view and focal field by layering a solid black vignette, a desaturation vignette, and an unfocused vignette on the color panorama. Finally, I added other manipulations to represent special features such as the dual foveae of hawks.
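The vignette layers can each be expressed as a radial mask that ramps up toward the image edges. A minimal sketch of the black and desaturation vignettes, assuming float RGB input (the radii and ramp values are illustrative; a blur vignette would blend a blurred copy using the same kind of mask):

```python
import numpy as np

def radial_mask(h: int, w: int, inner: float, outer: float) -> np.ndarray:
    """Mask that is 0 inside `inner` and ramps to 1 at `outer`, with radii
    given as fractions of the half-diagonal. Weights a vignette toward the edges."""
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    r = np.hypot(yy - cy, xx - cx) / np.hypot(cy, cx)
    return np.clip((r - inner) / max(outer - inner, 1e-6), 0.0, 1.0)

def apply_vignettes(img: np.ndarray) -> np.ndarray:
    """Layer a desaturation vignette, then a black vignette, on an
    H x W x 3 float image with values in [0, 1]."""
    h, w = img.shape[:2]
    desat = radial_mask(h, w, 0.4, 0.8)[..., None]
    gray = img.mean(axis=2, keepdims=True)
    out = img * (1 - desat) + gray * desat   # desaturate toward the edges
    black = radial_mask(h, w, 0.7, 1.0)[..., None]
    return out * (1 - black)                 # darken the outermost ring to black

# Demonstration on a constant-color 9x9 image.
img = np.full((9, 9, 3), (0.2, 0.5, 0.8))
out = apply_vignettes(img)
```

The center pixel is left untouched while the corners fall to black, matching the layered-vignette effect described above.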

This project was made possible through a Digital Education Innovation Grant from the Yale Center for Teaching and Learning, which provided funds for photo equipment and advising for the project.