forest – A.Z. Andis Arietta (Ecology, Evolution & Conservation), https://www.azandisresearch.com

UPDATE: Smartphone Hemispherical Image Analysis
https://www.azandisresearch.com/2023/05/20/update-smartphone-hemispherical-image-analysis/
Sat, 20 May 2023

When I first developed the method to estimate canopy metrics from smartphone spherical panoramas, I used a somewhat convoluted workflow involving command-line image manipulation, an ImageJ plugin for binarization, and an AutoHotkey script to automate mouse clicks on a GUI for canopy measures. Admittedly, it is a difficult pipeline for others to easily replicate.

In an effort to make life easier, I spent some time building out the pipeline entirely in R, including a sourceable function for converting spherical panos to hemispherical images (all available in this repo).

The easiest way to convert all of your spherical panos to hemispherical projections is to source the function from my GitHub:

source("https://raw.githubusercontent.com/andisa01/Spherical-Pano-UPDATE/main/Spheres_to_Hemis.R")

When you source the script, it will install and load all necessary packages. It also downloads the masking file that we will use to black out the periphery of the images.

The script contains the function convert_spheres_to_hemis, which does exactly what it says. You’ll need to put all of your raw spherical panos into a subdirectory within your working directory. We then pass the path to that directory as an argument to the function.

convert_spheres_to_hemis(focal_path = "./raw_panos/")

This function will loop through all of your raw panos, convert them to masked, north-oriented, upward-facing hemispherical images, and put them all in a folder called “masked_hemispheres” in your working directory. It will also output a CSV file called “canopy_output.csv” that contains metadata about each image.

Below, I will walk through the steps that happen inside the convert_spheres_to_hemis function. If you just want to use the function, you can skip to the analysis. I’ve also written a script in the repo, titled “SphericalCanopyPanoProcessing.R”, that does all of the conversion AND analysis in batch.

library(tidyverse) # For data manipulation
library(exifr) # For extracting metadata

R is not really the best tool for working with image data. So, we’ll use the magick package to call the ImageMagick program from within R.

library(magick) # For image manipulation
# Check to ensure that ImageMagick was installed.
magick_config()$version

You should see a numeric version code like ‘6.9.12.3’ if ImageMagick was properly installed.

# imager also requires ImageMagick
library(imager) # For image display

For binarizing the images and calculating some canopy metrics, we will use Chianucci’s hemispheR package, which we need to install from the development version.

library(devtools)
devtools::install_git("https://gitlab.com/fchianucci/hemispheR")
library(hemispheR) # For binarization and estimating canopy measures

To get started, you’ll need to have all of your raw equirectangular panoramas in a folder. I like to keep my raw images in a subdirectory called ‘raw_panos’ within my working directory. Regardless of where you store your files, set the directory path as the focal_path variable. For this tutorial we’ll process a single image, but I’ve included scripts for batch processing in the repo. Set the name of the image to process as the focal_image variable.

focal_path <- "./raw_panos/"
focal_image <- "PXL_20230519_164804198.PHOTOSPHERE_small.jpg"

focal_image_path <- paste0(focal_path, focal_image)
focal_image_name <- sub("\\.[^.]+$", "", basename(focal_image_path))

Let’s take a look at the equirectangular image.

pano <- image_read(focal_image_path)
pano # Visualize the pano
Raw equirectangular projection of a spherical panorama.

Note: One advantage of spherical panos is that they are large and, therefore, high resolution. The images from my Google Pixel 4a are about 38 megapixels. For this tutorial, I downsized the example pano to 10% resolution to make processing and visualizing easier. For your own analysis, I’d recommend using full-resolution images.
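As a quick sanity check on that downsizing (dimensions taken from this example pano's metadata, shown later in the output report), scaling each dimension to 10% leaves about 1% of the pixels:

```r
# Pixel-count arithmetic for the downsizing above. full_w and full_h are the
# FullPanoWidthPixels/FullPanoHeightPixels values reported for this pano.
full_w <- 8704
full_h <- 4352
full_mp  <- full_w * full_h / 1e6                      # ~37.9 MP
small_mp <- (full_w * 0.10) * (full_h * 0.10) / 1e6    # ~0.38 MP at 10% linear scale
c(full = full_mp, small = small_mp)
# With magick loaded, the downsizing itself would be: image_resize(pano, "10%")
```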

Spherical panoramas contain far more metadata than an average image. We can take a look at all of this additional information with the read_exif function.

read_exif(focal_image_path) %>%
  glimpse()

We’ll extract some of this information (the capture date, georeference, altitude, etc.) to output alongside our canopy metrics. You can decide which elements are most important or useful for you.

xmp_data <- read_exif(focal_image_path) %>%
  select(
    SourceFile,
    Make,
    Model,
    FullPanoWidthPixels,
    FullPanoHeightPixels,
    SourcePhotosCount,
    Megapixels,
    LastPhotoDate,
    GPSLatitude,
    GPSLongitude,
    GPSAltitude,
    PoseHeadingDegrees
  )

The first image-processing step is to convert the equirectangular panorama into a hemispherical image. We’ll need to store the image width and the heading for processing. The pose heading is a particularly important feature and a unique advantage of spherical panoramas. Since the camera automatically stores the compass heading of the first image of the panorama, we can use that information to automatically orient all of our hemispherical images so that true north is at the top of the image. This is critical for analyses of understory light, which require plotting the sun path onto the hemisphere.

# Store the pano width to use in scaling and cropping the image
pano_width <- image_info(pano)$width
image_heading <- read_exif(focal_image_path)$PoseHeadingDegrees

The steps to reproject the spherical panorama into an upward-looking hemispherical image go like this:

1. Crop the upper hemisphere (this is easy with smartphone spheres because the phone’s gyro ensures that the horizon line is always the midpoint of the y-axis).

Cropped upper hemisphere (top half of the image) from an equirectangular projection of a spherical panorama.

2. Rescale the cropped image into a square to retain the correct scaling when reprojected into polar coordinate space.

Rescaled upper hemisphere from an equirectangular projection of a spherical panorama.

3. Project into polar coordinate space and flip the perspective so that it is upward-looking.

Polar projection of the upper half of a spherical panorama.

4. Rotate the image so that the top points true north and crop the image so that the diameter of the circle fills the frame.

Polar projection of the upper half of a spherical panorama, rotated to orient to true north and cropped to size.

We can accomplish all of those steps with the code below.

pano_hemisphere <- pano %>%
  # Crop to retain the upper hemisphere
  image_crop(geometry_size_percent(100, 50)) %>%
  # Rescale into a square to keep correct scale when projecting into polar coordinate space
  image_resize(geometry_size_percent(100, 400)) %>%
  # Remap the pixels into polar projection
  image_distort("Polar",
                c(0),
                bestfit = TRUE) %>%
  image_flip() %>%
  # Rotate the image to orient true north to the top of the image
  image_rotate(image_heading) %>%
  # Rotating expands the canvas, so we crop back to the dimensions of the hemisphere's diameter
  image_crop(paste0(pano_width, "x", pano_width, "-", pano_width/2, "-", pano_width/2))

The resulting image looks funny because the outer pixels are extended by interpolation, and the rotation leaves white space at the corners. Most analyses define a bounding perimeter to exclude any pixels outside of the circular hemisphere, so the weird border shouldn’t matter. But we can add a black mask to make the images look better.

I’ve included a vector file for a black mask to lay over the image in the repo.

# Get the image mask vector file
image_mask <- image_read("./HemiPhotoMask.svg") %>%
  image_transparent("white") %>%
  image_resize(geometry_size_pixels(width = pano_width, height = pano_width)) %>%
  image_convert("png")

masked_hemisphere <- image_mosaic(c(pano_hemisphere, image_mask))

masked_hemisphere
Masked hemispherical canopy image.

We’ll store the masked hemispheres in their own subdirectory. This script makes that directory, if it doesn’t already exist, and writes our file into it.

if(!dir.exists("./masked_hemispheres/")){
  dir.create("./masked_hemispheres/")
} # If the subdirectory doesn't exist, we create it.

masked_hemisphere_path <- paste0("./masked_hemispheres/", focal_image_name, "hemi_masked.jpg") # Set the filepath for the new image

image_write(masked_hemisphere, masked_hemisphere_path) # Save the masked hemispherical image
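If you want to batch this yourself rather than relying on convert_spheres_to_hemis(), the output path for each raw pano can be derived the same way. Here is a small helper that mirrors the naming convention above (the function name is mine, not from the repo):

```r
# Hypothetical helper mirroring the naming used above: strip the raw pano's
# extension and append "hemi_masked.jpg" in the output subdirectory.
hemi_path <- function(raw_pano_path) {
  name <- sub("\\.[^.]+$", "", basename(raw_pano_path))
  paste0("./masked_hemispheres/", name, "hemi_masked.jpg")
}

hemi_path("./raw_panos/PXL_20230519_164804198.PHOTOSPHERE_small.jpg")
# "./masked_hemispheres/PXL_20230519_164804198.PHOTOSPHERE_smallhemi_masked.jpg"
```

Looping this helper over list.files("./raw_panos/") gives you the input/output pairs for a batch run.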

At this point, you can use the hemispherical image in any program you’d like, either in R or other software. For this example, I’m going to use Chianucci’s hemispheR package in order to keep this entire pipeline in R.

The next step is to import the image. hemispheR allows for lots of fine-tuning. Check out the docs to learn what all of the options are. These settings most closely replicate the processing I used in my 2021 paper.

fisheye <- import_fisheye(masked_hemisphere_path,
                          channel = '2BG',
                          circ.mask = list(xc = pano_width/2, yc = pano_width/2, rc = pano_width/2),
                          gamma = 2.2,
                          stretch = FALSE,
                          display = TRUE,
                          message = TRUE)
Circular hemispheric plot output by hemispheR’s ‘import_fisheye’ function.

Now, we need to binarize the image, converting all sky pixels to white and everything else to black (at least as close as possible). Again, there are lots of options available in hemispheR, and you can decide which settings are right for you. However, I would suggest keeping the zonal argument set to FALSE. The documentation describes this argument as:

zonal: if set to TRUE, it divides the image in four sectors
(NE,SE,SW,NW directions) and applies an automated classification
separatedly to each region; useful in case of uneven light conditions in
the image

Because spherical panoramas expose each of the 36 component images separately, there is no need for this correction.

I also suggest keeping the export argument set to TRUE so that the binarized images will be automatically saved into a subdirectory named ‘results’.

binimage <- binarize_fisheye(fisheye,
                 method = 'Otsu',
                 # We do NOT want to use zonal threshold estimation since this is done by the camera
                 zonal = FALSE,
                 manual = NULL,
                 display = TRUE,
                 export = TRUE)
Binarized circular hemispheric plot output by hemispheR’s ‘binarize_fisheye’ function.

Unfortunately, hemispheR does not allow for estimation of understory light metrics like through-canopy radiation or Global Site Factors. If you need light estimates, you’ll have to take the binarized images and follow my instructions and code for implementing Gap Light Analyzer.

Assuming all you need is canopy metrics, we can continue with hemispheR and finalize the whole pipeline in R. We estimate canopy metrics with the gapfrac_fisheye() function.

gapfrac <- gapfrac_fisheye(
  binimage,
  maxVZA = 90,
  # Spherical panoramas are equidistant perforce
  lens = "equidistant",
  startVZA = 0,
  endVZA = 90,
  nrings = 5,
  nseg = 8,
  display = TRUE,
  message = TRUE
)
Binarized circular hemispheric plot with azimuth rings and segments output by hemispheR’s ‘gapfrac_fisheye’ function.

Finally, we can estimate the canopy metrics with the canopy_fisheye() function, join those to the metadata from our image, and output our report.

canopy_report <- canopy_fisheye(
  gapfrac
)

output_report <- xmp_data %>%
  bind_cols(
    canopy_report
  ) %>%
  rename(
    GF = x,
    HemiFile = id
  )

glimpse(output_report)
Rows: 1
Columns: 32
$ SourceFile            "./raw_panos/PXL_20230519_164804198.PHOTOSPHERE_small.jpg"
$ Make                  "Google"
$ Model                 "Pixel 4a"
$ FullPanoWidthPixels   8704
$ FullPanoHeightPixels  4352
$ SourcePhotosCount     36
$ Megapixels            0.37845
$ LastPhotoDate         "2023:05:19 16:49:57.671Z"
$ GPSLatitude           41.33512
$ GPSLongitude          -72.91103
$ GPSAltitude           -23.1
$ PoseHeadingDegrees    86
$ HemiFile              "PXL_20230519_164804198.PHOTOSPHERE_smallhemi_masked.jpg"
$ Le                    2.44
$ L                     3.37
$ LX                    0.72
$ LXG1                  0.67
$ LXG2                  0.55
$ DIFN                  9.981
$ MTA.ell               19
$ GF                    4.15
$ VZA                   "9_27_45_63_81"
$ rings                 5
$ azimuths              8
$ mask                  "435_435_434"
$ lens                  "equidistant"
$ channel               "2BG"
$ stretch               FALSE
$ gamma                 2.2
$ zonal                 FALSE
$ method                "Otsu"
$ thd                   116
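To persist the report, write it out to CSV ("canopy_output.csv" is the filename that convert_spheres_to_hemis() uses). The one-row data frame below is a stand-in so the snippet runs on its own; in the pipeline you would pass output_report itself:

```r
# Minimal sketch of writing the report to CSV. The data frame here is a
# stand-in for output_report; the filename matches convert_spheres_to_hemis().
report <- data.frame(HemiFile = "example_hemi_masked.jpg", GF = 4.15, DIFN = 9.981)
out_csv <- file.path(tempdir(), "canopy_output.csv")
write.csv(report, out_csv, row.names = FALSE)
read.csv(out_csv)  # round-trips the one-row report
```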

Be sure to check out my prior posts on working with hemispherical images.

Tips for taking spherical panoramas
https://www.azandisresearch.com/2021/07/16/tips-for-taking-spherical-panoramas/
Fri, 16 Jul 2021

Spherical panoramas can be tricky to stitch properly in dense forests, especially when canopy elements are very close to the camera (which exaggerates the parallax). Fortunately, the issue has attenuated with every update of the phone software, but it still pays to be careful when capturing your panoramas. So, if you are taking spherical panos in order to estimate canopy metrics, here are my suggestions for improving your captures:

Rotate around the camera

Perfecting panoramas took some practice for me. The first key is to avoid rotating the camera around your body; rather, rotate your body around the camera. I found this post helpful when I was learning. The principle is to keep the entrance pupil of the camera lens at the exact center of the sphere (as much as possible).

Calibrate your compass and gyro

Another tip is to calibrate your compass and gyro regularly (here’s how for Android users). If the compass is off, the phone won’t always know when you’ve made a full rotation.

Take your time

It also helps to go slowly. The camera is constantly processing and mapping the scene to guide the next image, which takes a few seconds.

Be consistent

Even though you can technically take images in any order, I always rotate the same direction around the horizon and then the upper two circles. The lower sphere doesn’t really matter, so I don’t spend much time on it. The key here is that consistency helps, both for you and the camera.

Use your body as a tripod

You can also try to use your body as a kind of tripod. When I was first learning to take spherical images, I would hold my right elbow to my hip bone directly over my right knee and pinch the phone at the top, in line with the camera. Then I would keep my right foot planted and walk my left foot around the right, using my left hand to tip the phone as needed. After a few dozen spheres, I don’t really need to do that anymore. I usually know when a sphere will turn out badly and just restart.

Review your images

It is important to remember that you can (and should) review your images in the field so that you can retake any with issues. You can also get a Google Cardboard viewer for $13, which lets you review the sphere in VR in the field to more clearly see the images.

Over time, you get the hang of capturing spheres to minimize problems. Now that I have practiced, it takes me much less time to get a good photosphere than it does to level and set my exposure with my DSLR.

Nevertheless, there are always some artifacts. After looking at lots of images side by side, my sense is that the artifactual errors in the panos, even when fairly large, contribute less error than the combined errors of incorrect orientation, an unleveled camera, poor lens calibration, exposure misspecification, and the much lower pixel counts of other methods.

Be sure to check out my other posts on this topic:

Smartphone hemispherical photography

Analyzing hemispherical photos

Taking hemispherical canopy photos

Hardware for hemispherical photos

Hemispherical light estimates

Hot competition and tadpole Olympics
https://www.azandisresearch.com/2020/12/24/hot-competition-and-tadpole-olympics/
Thu, 24 Dec 2020

Our newest paper (PDF available on my publications page), led by Kaija Gahm, is just out in the Journal of Experimental Zoology as part of a special issue on herp physiology that came out of the World Congress of Herpetology last January.

The study:

One of the most consistent findings arising from 20 years of study in our lab is that wood frogs seem to adapt to life in cold, dark ponds. In general, cold-blooded animals like reptiles and amphibians are not suited for the cold and function much better in warmer conditions. So, wood frogs that live in colder ponds should have a harder time competing against their neighbors in warmer ponds.

In response, cold-pond wood frogs seem to have developed adaptations that level the playing field. In separate experiments, we’ve found that wood frog tadpoles in cold ponds tend to seek out warmer water (like in sunflecks) and have lower tolerance to extremely warm temperatures. Most importantly, they can mature faster as eggs and larvae.

But I’ve always struggled with a lingering question: if cold-pond frogs have evolved these beneficial adaptations to compete with warm-pond frogs, what is keeping those genes out of the warm ponds? Shouldn’t cold-pond genes in a warm pond mean double the benefits? One would expect the extrinsic environmental influence plus the intrinsically elevated growth rates to produce super tadpoles that metamorphose and leave the ponds long before all the others.

Kaija, who was an undergrad in the lab at the time, decided to tackle that question for her senior thesis.

We hypothesized that there might be a cost to developing too quickly. Studies in fish suggested that the trade-off could be between development and performance. The idea is that, like building Ikea furniture, if you build the tissue of a tadpole too quickly, the price is loss of performance.

Much like assembling Ikea furniture too quickly, we hypothesized that when tadpoles develop too quickly, there might be a functional cost.

So we collected eggs from 10 frog ponds that spanned the gradient from sunny and warm to dark and cold. We split clutches across two incubators that we set to bracket the warmest and coldest of the ponds.

Then we played parents to 400 tadpoles, feeding and changing water in 400 jars two to three times a week.

Half of the 400 tadpoles we reared in temperature-controlled incubators.

We reared the tadpoles to an appropriate age (Gosner stage 35ish). Those in the warm incubator developed about 68% faster than those in the cold incubator. In addition to our lab-reared tadpoles, we also captured tadpoles from the same ponds in the wild as a comparison. Development rates in the lab neatly bracketed those in the wild.

Fig. 1. from the paper: (a,b) Temperatures in incubators and natal ponds during the 2019 season. ‘High’ and ‘Low’ refer to the corresponding temperature treatments in the lab. Two‐letter codes are abbreviations for the names of individual ponds. (c,d) Development rates of warm treatment, cold treatment, and wild tadpoles

Once they reached an appropriate size, we put them to the test. We simulated a predator attack by a dragonfly naiad by poking them in the tail. Dragonfly naiads are fast, fierce, tadpole-eating machines, and a tadpole’s fast-twitch flight response is a good indicator of its chance of evading these insect hunters. It’s a measure of performance that directly relates to a tadpole’s fitness.

Above the test arenas, we positioned high-speed cameras to capture the tadpoles’ burst responses. We recorded 1245 trials, to be exact: way more than we ever wanted to track by hand. Fortunately, Kaija is a whiz at coding, and with a bit of help, she was able to write a MATLAB script that could identify the centroid of a tadpole and record its position 60 times per second.

Kaija wrote a script to automatically identify tadpoles and track their movement from the high-speed videos.

We measured the tadpoles’ speed during the first half second of their burst response and looked for an association with their developmental rates. One complicating factor is that a tadpole’s fin and body shape can influence burst speed, so a weak tadpole with a giant fin might have a similar burst speed to a super-fit tadpole with a small fin. To account for this, we took photos of each tadpole, ran a separate analysis of their morphometry, and included body shape in our models.

Figure 2 from the paper. Lab reared tadpoles showed very similar shapes with long, narrow tails, large tail muscles, and small bodies. Wild tadpoles had much deeper tails and larger bodies. Other folks have done extensive research on the many factors like water chemistry, food quality, and even the scent of different predators that induce different body shapes, so it is not surprising that we saw so much diversity between ponds and between lab and wild tadpoles that originated from the same pond. And props to Bayla for the painting of the tadpole!

As we had hypothesized, tadpoles reared at warmer temperatures showed much slower burst speeds than their genetic half-siblings reared in the cold incubator. We even saw a similar, but weaker, effect for the tadpoles that were allowed to develop in their natal ponds. It seems that developing too fast reduces performance.

Fig. 3 from the paper: Relationship between development rate and burst speed for (a) lab tadpoles and (b) wild tadpoles. Dots represent pond‐wise means, and in (a), lines connect means from the same pond. Marginal density plots are based on individual tadpoles rather than pond‐wise means. Orange and blue represent tadpoles reared in the high‐ and low‐temperature incubators, respectively

Thus, it certainly seems that the counter-gradient pattern we see of faster development in cold-pond populations, but not in warm-pond populations, is at least partially driven by the trade-off between development rate and performance.

In fact, it may even be the case that we’ve been viewing the pattern backwards all along. Perhaps instead we should consider whether warm-pond populations have evolved adaptively slower development rates to avoid the performance cost. This especially makes sense given the range of wood frogs. Our populations are at the warm, southern end of the range. Maybe this trade-off is also a factor constraining wood frogs to the cold north of the continent?

Range map of wood frogs (Rana sylvatica).

If warm weather and faster development are a real liability for wood frogs, it is only going to get worse in the future. We know from another recent study that our ponds have been warming quickly, especially during the late spring and early summer months. But climate change is also causing snow to fall later in the winter, forcing frogs to breed later. The net result is that wood frogs may be forced to evolve faster intrinsic developmental rates in response to a contracting developmental window while, at the same time, extrinsic forces drive development even faster. That’s a double whammy in the trade-off with performance, and it might lead to too many “Ikea furniture mistakes” at the cellular level.

As a separate part of this study, we also measured metabolic rates in our tadpoles in hopes of understanding the relationship between developmental rates, performance, and cellular respiration. I’m in the process of analyzing those data, so stay tuned for more!

Smartphone hemispherical photography
https://www.azandisresearch.com/2020/12/16/smartphone-hemispherical-photography/
Wed, 16 Dec 2020

Hemispherical photography is one of those tasks often prefaced by the statement, “How hard could it be?” I’m pretty sure I said something like this at the beginning of my PhD when we wanted to ask how the canopy over wood frog ponds influences their larval ecology.

Now, five years, four blog posts, and uncountable hours later, I can say that measuring canopy structure with hemispherical photos is surprisingly difficult.

One of the biggest hurdles is understanding the equipment and deploying it properly in the field. For me, nothing is more tedious than standing waist-deep in a mucky, mosquito-infested pond while I fiddle around with camera exposure settings and fine-tune my leveling device. Add to that the constant fear of dropping a couple thousand dollars of camera and specialized lens into the water, and you get a good sense of my summers.

So, it is with great pleasure that I offer an alternative method for capturing canopy photos that requires nothing but a cell phone, yet produces higher-quality images than a traditional DSLR setup. This new method exploits the spherical panorama function available on most smartphone cameras (or in the free Google Street View app). Hemispherical photos can then be easily extracted and remapped from the spheres. You can check out the manuscript at Forestry here (a PDF is available on my Publications page) or continue reading while I walk through the paper below.

From figure 2 of the paper: Comparisons of smartphone spherical panorama hemispherical photographs (SSP HP) (right B and C) to traditional DSLR hemispherical photographs (DSLR HP) (left B and C) captured at the same site. Details of the same subsection of the canopy, indicated by orange boxes, are expanded in C. Binarized images are shown below color images in B and C.

The old way

The most common way to measure canopy structure these days is with hemispherical photographs. These images capture the entire canopy and sky from horizon to horizon. Assuming proper exposure, we can categorize individual pixels as either sky or canopy and run simple statistics to count the number of sky pixels versus canopy pixels, or the number and size of the gaps between canopy pixels. We can also plot a sun path onto the image and estimate the amount of direct and indirect light that penetrates through the canopy. (You can follow my analysis pipeline in this post.)
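The pixel-counting idea is simple at its core. Here is a toy sketch (values are purely illustrative; real workflows also mask out pixels beyond the circular hemisphere before counting):

```r
# Toy version of the sky-vs-canopy statistic: a binarized image represented
# as a 0/1 matrix (1 = sky, 0 = canopy). Canopy openness is then just the
# fraction of sky pixels.
binary <- matrix(c(1, 0, 1,
                   0, 0, 1,
                   1, 1, 0), nrow = 3, byrow = TRUE)
openness <- mean(binary)  # 5 sky pixels out of 9
openness
```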

All of this analysis relies on good hemispherical images. But the problem is that there are many things that can go wrong when taking canopy photos, including poor lighting conditions, bad exposure settings, an improperly oriented camera, etc. Another problem is that capturing images of high enough quality requires a camera with a large sensor (typically a DSLR), a specialized lens, and a leveling device, which can cost a lot of money. Most labs only have one hemispherical photography setup (if any), which means that we sacrifice the number of photos we can take in favor of high-quality images.

The new way

In the past few years, researchers have tried to figure out ways to get around this equipment barrier. Folks have tried eschewing the leveling device, using clip-on hemispherical lenses for smartphones, or using non-hemispherical smartphone images. I even tested using a hemispherical lens attachment on a GoPro.

But, none of these methods really produce images that are comparable to the images from DSLRs, for three reasons:

  1. Smartphone sensors are tiny compared to DSLR sensors, so there is a huge reduction in quality.
  2. Clip-on smartphone lenses are tiny compared to DSLR lenses, so again, there is a huge reduction in optical quality.
  3. Canopy estimates are sensitive to exposure settings, and DSLRs allow for more control over exposure.

The method I developed gets around all of these issues by using multiple, individual cell phone images to stitch together a single hemispherical image. Thus, instead of relying on one tiny cell phone sensor, we are effectively using many tiny cell phone sensors to make up the difference.

Another advantage of creating a hemispherical image out of many images is that each individual image only has to be exposed for a portion of the sky. This avoids the problems of glare and variable sky conditions that plague traditional systems. An added benefit is that smartphone cameras operate in a completely different way than DSLRs, so they are much less sensitive to exposure issues in general.

Smartphones are less sensitive to exposure issues because, unlike DSLRs that capture a single instance on the sensor when you hit the shutter button, smartphone cameras use computational photography techniques that blend the best parts of many images taken in short succession. You may not realize it, but your smartphone is constantly taking photos as soon as you turn it on (which makes sense since you can see the scene from the camera on your screen). The phone stores about 15 images at a time, constantly dumping the older versions out of temporary memory as updated images pour in. When you hit the button to take a picture, your phone then automatically blends the last few images with the next few images. The phone’s software selects the sharpest pixels with the most even contrast and color from each image and then composites those into the picture presented to you. With every new software update, the algorithms for processing images get better and better. That’s why modern cell phones are able to take photos that can compete with mid-range DSLRs despite the limitations of their tiny sensors.

So, if each phone photo is essentially a composite of 15 frames, and we then take 18 of those composite images and stitch them into a hemispherical image, we are effectively comparing a sensor the size of 270 individual phone camera sensors to the DSLR sensor.

The best part is that there is already software that can do this for us via the spherical panorama feature included with most contemporary smartphone cameras. This feature was introduced in the Google Camera app back in 2012, and iOS users can access it via the Google Street View app. It is incredibly simple to use.

Update: Check out my post on tips for taking spherical panoramas

Once you’ve taken a spherical panorama, it is stored in your phone as a 2D JPEG in equirectangular format. Another advantage of the photo sphere software is that it utilizes your phone’s gyroscope and spatial-mapping abilities to automatically level the horizon. This helps in two ways. First, it means we can ditch the tedious leveling devices. Second, it means that the equirectangular image can be perfectly split between the upper and lower hemispheres. We simply have to crop the top half of the rectangular image and remap it to polar coordinates to get a circular hemispherical image.

Figure 1 from Arietta (2020)
Figure 1 from the paper: Spherical panoramas (A) are stored and output from smartphones as 2D images with equirectangular projection (B). Because spherical panoramas are automatically leveled using the phone gyroscope, the top half of the equirectangular image corresponds to the upper hemisphere of the spherical panorama. The top portion of the equirectangular image (B) can then be remapped onto the polar coordinate plane to create a circular hemispherical photo (C). In all images, zenith and azimuth are indicated by Θ and Φ, respectively.

How to extract hemispherical images from spherical panoramas

UPDATE: Please see my latest post to process spherical images with R.

Command line instructions

If you are proficient with the command line, the easiest way to extract hemispherical images from photo spheres is to use ImageMagick. After you download and install the program you can run the script below to convert all of your images with just a couple lines of code.

cd "YOUR_IMAGE_DIR"

magick mogrify -level 2%,98% -crop 8704x2176-0-0 -resize "8704x8704!" -virtual-pixel horizontal-tile -background black +distort Polar 0 -flop -flip *jpg

You may need to make a few modifications to the script for your own images. The -crop 8704x2176-0-0 flag crops the top half of the image (i.e. the upper hemisphere); adjust it to the full width of your panorama by one quarter of that width (for a standard 2:1 equirectangular pano, that is the top half). The -resize "8704x8704!" flag stretches the image into a square in order to apply the polar transformation; adjust it to the full width of your panorama in both dimensions.
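Because these geometry strings depend on your phone's output resolution, it can be less error-prone to compute them from the pano width rather than hard-coding them. Here is a tiny helper (hypothetical, assuming the standard 2:1 equirectangular aspect ratio):

```r
# Build the ImageMagick crop and resize geometry strings for a spherical
# pano of a given pixel width (assumes 2:1 equirectangular aspect).
# This helper is hypothetical, just to avoid hard-coded dimensions.
hemi_geometry <- function(pano_width) {
  list(
    crop   = sprintf("%dx%d+0+0", pano_width, pano_width %/% 4),  # top half of the pano
    resize = sprintf("%dx%d!", pano_width, pano_width)            # square for the polar remap
  )
}

hemi_geometry(8704)
# crop: "8704x2176+0+0", resize: "8704x8704!"
```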

Note that the code above will convert and overwrite all of the .jpg files in your folder to hemispherical images. I suggest that you practice on a folder of test images or a folder of duplicates to avoid any mishaps.

GUI instructions

If you are intimidated by the command line, extracting hemispherical images from photo spheres is also easy with GIMP (I used GIMP because it is free, but you can follow the same steps in Photoshop).

Update: You can also try out this cool web app developed by researchers in Helsinki, which allows you to upload spherical panoramas from your computer or phone and automatically converts them to hemispherical images that you can download. However, I would not suggest using this tool for research purposes because the app fixes the output resolution at 1000 pixels, so you lose all of the benefits of high-resolution spherical images.

Spherical panoramas are stored as 2D equirectangular projections from which hemispherical images can be extracted in GIMP.

First, crop the top half of the rectangular photo sphere.

Crop the top half of the panorama.

Second, scale the image into a square. I do this by stretching the image so that the height is the same size as the width. I go into why I do this below.

Scale the image into a square.

Third, remap the image to a polar projection. Go to Filters > Distorts > Polar Coordinates.

Settings for remapping the panorama into a polar projection.
Once mapped onto polar coordinates, the image is now a circular hemispherical image.

Fourth, I found that increasing the contrast slightly helps the binarization algorithms find the correct threshold.

All of these steps can be automated in batch with the BIMP plugin (a BIMP recipe is available in the supplemental files of the paper). This can also be automated from the command line with ImageMagick (see the scripts above and in the supplemental materials of the paper).

The result is a large image with a diameter equal to the width of the equirectangular sphere. Because we are essentially taking columns of pixels from the rectangular image and mapping them into “wedges” of the circular image, we will always need to downsample pixels toward the center of the circular image. Remember that each step out from the center of the image is the same as each step down the rows of the rectangular image. So, the circumference of every ring of the circular image is generated from a row of pixels that is the width of the rectangular image.

With a bit of geometry, we can see that the circumference matches the width of our rectangular image (i.e. 1:1 resolution) at a zenith angle of 57.3 degrees. Zenith rings below 57.3° will be downsampled, and those above will be scaled up, with new pixels interpolated into the gaps. Conveniently, 57.3 degrees is 1 radian. The area within 1 rad, from zenith 0° to 57°, is important for canopy estimates because gap fraction measurements in this portion of the hemisphere are insensitive to leaf inclination angle, allowing leaf area index to be estimated without accounting for leaf orientation.

Thus, we retain most of our original pixel information within this critical portion of the image, but it does mean that we are expanding the pixels (interpolating new ones) closer to the horizon. I tested the impact of resolution directly in my paper and found almost no difference in canopy estimates, so it is probably okay to downscale images for ease of processing if high resolution is not needed.

The hemispherical image produced can now be analyzed in any pipeline used to analyze DSLR hemispherical images. You can see the pipeline I use in this post.

How do images from smartphone panoramas compare to a DSLR?

In my paper, I compared hemispherical photos taken with a DSLR against those extracted from a spherical panorama. I took consecutive photos at 72 sites. Overall, I found close concordance between measures of canopy structure (canopy openness) and light transmittance (global site factor) between the methods (R2 > 0.9). However, the smartphone images were of much greater clarity and therefore retained more detailed canopy structure that was lost in the DSLR images.

Figure 4 from the paper: Difference in canopy structure and light environment estimates between reference (standard DSLR HP) and full resolution SSP HP (dark orange), low resolution SSP HP downsampled to match the standard DSLR resolution (light orange), fisheye HP (blue), and DSLR HP with exposure adjusted from +5 to -5 (light to dark). SSP HP images were generated from spherical panoramas taken with Google Pixel 4a and Google Camera. Fisheye HP images were simulated from smartphone HP for two intersecting 150° FOV images from a Pixel 4a. DSLR HP were captured with Canon 60D and Sigma 4.5mm f2.8 hemispherical lens.

Although the stitching process occasionally produces artifacts in the image, the benefits of this method far outweigh the minor problems. Care when taking the panorama images, as well as ever-improving stitching software, will help to minimize imperfect stitching.

Figure 2 from the paper: Comparisons of smartphone spherical panorama hemispherical photographs (SSP HP) (right B and C) to traditional DSLR hemispherical photographs (DSLR HP) (left B and C) captured at the same site. Details of the same subsection of the canopy, indicated by orange boxes, are expanded in C. Binarized images are shown below color images in B and C. Image histograms differ in the distribution of luminance values in the blue color plane (A). In panel E, a section of the canopy from full resolution SSP HP (left), downsampled SSP HP (middle), and DSLR HP (right) is further expanded to demonstrate the effect of image clarity on pixel classification. An example of an incongruous artifact resulting from misalignment in the spherical panorama is outlined in blue in A and expanded in D.

Overall, this method is not only a good alternative; it is probably even more accurate than traditional methods because of the greater clarity and robustness to variable exposure. My hope is that this paper will help drive more studies in the use of smartphone spheres for forest research. For instance, 360° horizontal panoramas could be extracted for basal area measurement, or entire spheres could be used to spatially map tree stands. The lower hemisphere could also be extracted and used to assess understory plant communities or leaf litter composition. Researchers could even enter the sphere with a virtual reality headset in order to identify tree species at their field sites from the comfort of their home.

Mostly, I’m hopeful that the ease of this method will allow more citizen scientists and non-experts to collect data for large-scale projects. After all, this method requires no exposure settings, no additional lenses, and is automatically leveled. The geolocation and compass heading can even be extracted from the image metadata to automatically orient the hemispherical image and set the location parameters in analysis software. Really, anyone with a cell phone can capture research-grade spherical images!


Be sure to check out my other posts about canopy research that cover the theory, hardware, field sampling, and analysis pipelines for hemispherical photos, and my tips for taking spherical panoramas.

]]>
1809
Evolution of Intrinsic Rates at the Evolution Conference 2019 https://www.azandisresearch.com/2019/09/03/evolution-of-intrinsic-rates-at-the-evolution-conference-2019/ Tue, 03 Sep 2019 13:13:38 +0000 http://www.azandisresearch.com/?p=1548 At this year’s Evolution Conference in Providence, Rhode Island, the organizers managed to recruit volunteers to film most of the talks. This is such a great opportunity for folks who cannot attend the meeting in person to stay up to date in the field. It’s also a useful chance for those of us who presented to critically review our talks.

Here’s my talk from the conference, “Evolution of Intrinsic Rates: Can adaptation counteract environmental change?“:

]]>
1548
Create a radial, mirrored barplot with GGplot https://www.azandisresearch.com/2019/07/19/create-a-radial-mirrored-barplot-with-ggplot/ Fri, 19 Jul 2019 12:21:45 +0000 http://www.azandisresearch.com/?p=1477


Among other topics, my lab studies the relationship between forest-obligate frogs and urbanization. During a seminar, I once heard my advisor mention that Connecticut is the perfect state for us because it sits at the top of the rankings for both the greatest percentage of tree cover and the highest population density.

I’ve been meaning to dig into that statement for a while, so when Storytelling With Data encouraged folks to submit radial graphs for their July #SWDchallenge, I took the opportunity.

I pulled the population data from the US Census Bureau and used “People per sq. mile” from the 2010 census for density estimates. The tree cover data came from Nowak and Greenfield (2012, Tree and impervious cover in the United States. Landscape and Urban Planning).

Here’s how I made the graphic:

library(tidyverse)

TP <- read.csv("C:\\Users\\Andis\\Google Drive\\2019_Summer\\TreePop\\CanopyVsPopDensity.csv", header = TRUE) %>% select(State, Perc.Tree.Cov, Pop.Den.) %>% rename(Trees = Perc.Tree.Cov, Pop = Pop.Den.) %>% filter(State != "AK")

> head(TP)
  State Trees   Pop
1    AL    70  94.4
2    AZ    19  56.3
3    AR    57  56.0
4    CA    36 239.1
5    CO    24  48.5
6    CT    73 738.1
> 

Here is the dataset. First, we rename the variables and remove Alaska since it is not included in the tree cover dataset.

ggplot(TP, aes(x = State, y = Trees)) +
  geom_col()

This gives us a column or bar plot of the percent tree cover for each state. Note that we could also use geom_bar(), but geom_col() will be easier to deal with once we start adding more elements to the plot.

Mirrored bar charts are a great way to compare two variables for the same observation point, especially when the variables are in different units. However, we still want to make sure that the scales are at least pretty similar for aesthetic symmetry. In our case, we will actually be asking ggplot to use the same unit scale for both tree cover and population density, so we need to make sure that they are very similar in scale.

The best option would be to standardize both values of tree cover and population density to a common scale by dividing by the respective standard deviation. The problem comes in interpreting the axis because the scale is now in standard deviations and not real-world units.
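For reference, that standardization is a one-liner in base R. Here is a sketch using a toy stand-in built from a few rows of the dataset shown above; as noted, the catch is that the resulting axis is in standard-deviation units.

```r
# Toy stand-in for a few rows of the tree cover / population dataset
TP <- data.frame(State = c("CT", "CA", "CO"),
                 Trees = c(73, 36, 24),
                 Pop   = c(738.1, 239.1, 48.5))

# Divide each variable by its standard deviation so both share a
# common, unitless scale (both now have SD = 1)
TP <- transform(TP,
                Trees_sd = Trees / sd(Trees),
                Pop_sd   = Pop / sd(Pop))
```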

For this graphic, I’m going to cheat a little. Since we will eventually be removing our y-axis completely, we can get away with our values being approximately congruent. Since tree cover is a percentage up to 100, I decided to simply scale population density to a similar magnitude.

The greatest population density is in Rhode Island, with 1018.1 people per square mile. We can divide by 10 to convert to units of “10 people per square mile,” which scales our range of density values down to 0.6–101.8, on par with the 0 to 100 range of the tree cover scale.

> max(TP$Pop)
[1] 1018.1
> 

TP <- mutate(TP, Pop.10 = Pop/10)

> range(TP$Pop.10)
[1]   0.58 101.81
>

Now we can add the population density to the figure. To make this a mirror plot, we just need to make the values for population density negative. Also, I gave each variable a different fill color so we could tell them apart.

TP <- mutate(TP, Pop.10 = -Pop.10)

ggplot(TP, aes(x = State)) +
  geom_col(aes(y = Trees), fill = "#5d8402") +
  geom_col(aes(y = Pop.10), fill = "#817d79") 

I’ve always loved the vertically oriented mirrored bar plots used so often by FiveThirtyEight. The problem is that these tall charts can’t fit on a wide-format presentation slide. And it is hard to read the horizontally oriented plot above. I realized that if I could wrap a mirrored bar chart into a circle, it can fit in any format. All that we need to do to make this into a circular plot is to add the coord_polar() element.

ggplot(TP, aes(x = State)) +
  geom_col(aes(y = Trees), fill = "#5d8402") +
  geom_col(aes(y = Pop.10), fill = "#817d79") +
  coord_polar()

Now we have a mirrored, radial bar plot. But this is super ugly and not very intuitive to read. The first useful adjustment we can make is to order the states to highlight the comparison we are interested in. In this case, we are trying to highlight the states that simultaneously have the greatest tree cover and highest population densities. One easy solution would be to rank order the states by either of those variables. For instance, we can order by tree cover rank.

ggplot(TP, aes(x = reorder(State, Trees))) +
  geom_col(aes(y = Trees), fill = "#5d8402") +
  geom_col(aes(y = Pop.10), fill = "#817d79") +
  coord_polar()

But that doesn’t really highlight the comparison because having lots of trees doesn’t really correlate with having lots of people.

ggplot(TP, aes(x = Pop, y = Trees)) +
  geom_point(size = 4, alpha = 0.7) +
  theme_minimal()

Instead, we can directly highlight the comparison by computing a new variable that simultaneously accounts for tree cover rank and population density rank. We cannot simply average rankings because that will produce a lot of ties. Also, if a state has a really low rank in one variable, it can discount the higher rank of the other variable. We can deal with this by using the mean of the squared rank orders of each variable (similar to mean squared error in regression). Also, note that since we want the largest values to be rank 1, we need to find the rank of the negative values.

TP <- TP %>% mutate(TreeRank = rank(-Trees), PopRank = rank(-Pop)) %>% mutate(SqRank = ((TreeRank^2) + (PopRank^2))/2) %>% mutate(RankOrder = rank(SqRank))

ggplot(TP, aes(x = reorder(State, RankOrder))) +
  geom_col(aes(y = Trees), fill = "#5d8402") +
  geom_col(aes(y = Pop.10), fill = "#817d79") +
  coord_polar()

Next, we can improve the readability of the plot. Since our y-axis isn’t technically comparable, we can get rid of the axis label and ticks altogether using theme_void(), then we tell ggplot to label all of the states for us and to place the labels at position y = 100.

ggplot(TP, aes(x = reorder(State, RankOrder))) +
  geom_col(aes(y = Trees), fill = "#5d8402") +
  geom_col(aes(y = Pop.10), fill = "#817d79") +
  geom_text(aes(y = 100, label = State)) +
  coord_polar() +
  theme_void()

This plot is pretty worthless without some numbers to help us interpret what the bar heights represent. We can add those just as we added the state labels. In order to keep the character lengths short, we need to round the values. Also, now that the scale units are independent, I decided to further scale the population density values to units of 100 people per square mile simply by dividing the density by 100. The values tend to overlap, so we also need to make the font smaller.

ggplot(TP, aes(x = reorder(State, RankOrder))) +
  geom_col(aes(y = Trees), fill = "#5d8402") +
  geom_text(aes(y = 10, label = round(Trees, 2)), size = 3)+
  geom_col(aes(y = Pop.10), fill = "#817d79") +
  geom_text(aes(y = -10, label = round(Pop/100, 1)), size = 3)+
  geom_text(aes(y = 100, label = State)) +
  coord_polar() +
  theme_void()

This looks okay, but we can make it look even better. First we can adjust the limits of the y-axis. We can use the negative limit to create a white circle in the center, essentially pushing all of the data towards the outer ring instead of dipping down to the very central point.

Also, it bothers me that the bars cut into the value labels. We can adjust the position of the labels conditionally so that labels too big to fit in the bar are set outside of the bar using an ifelse() statement.

We can use the same type of ifelse() statement to conditionally color the labels so that those inside of the bars are white while those outside of the bars match the colors of the bars. We just need to include scale_color_identity() to let ggplot know that we are directly providing the name of the color.

ggplot(TP, aes(x = reorder(State, RankOrder))) +
  geom_col(aes(y = Trees), fill = "#5d8402") +
  geom_text(aes(y = ifelse(Trees >= 15, 8, (Trees + 10)), color = ifelse(Trees >= 15, 'white', '#5d8402'), label = round(Trees, 2)), size = 3)+
  geom_col(aes(y = Pop.10), fill = "#817d79") +
  geom_text(aes(y = ifelse(Pop.10 <= -15, -8, (Pop.10 - 10)), color = ifelse(Pop.10 <= -15, 'white', '#817d79'), label = round(Pop/100, 1)), size = 3)+
  geom_text(aes(y = 100, label = State)) +
  coord_polar() +
  scale_y_continuous(limits = c(-150, 130)) +
  scale_color_identity() +
  theme_void()


The way the state labels are so crowded on the right but not on the left bugs me. We can set a standard distance, like y = 50, but then conditionally bump out the values if they would interfere with the bar.

And finally, ggplot builds in an obnoxious amount of white space around circular plots. We can manually reduce the white area by adjusting the plot margins.

ggplot(TP, aes(x = reorder(State, RankOrder))) +
  geom_col(aes(y = Trees), fill = "#5d8402") +
  geom_text(aes(y = ifelse(Trees >= 15, 8, (Trees + 10)), color = ifelse(Trees >= 15, 'white', '#5d8402'), label = round(Trees, 2)), size = 3)+
  geom_col(aes(y = Pop.10), fill = "#817d79") +
  geom_text(aes(y = ifelse(Pop.10 <= -15, -8, (Pop.10 - 10)), color = ifelse(Pop.10 <= -15, 'white', '#817d79'), label = round(Pop/100, 1)), size = 3)+
  geom_text(aes(y = ifelse(Trees <= 50 , 60, Trees + 15), label = State)) +
  coord_polar() +
  scale_y_continuous(limits = c(-150, 130)) +
  scale_color_identity() +
  theme_void() +
  theme(plot.margin=grid::unit(c(-20,-20,-20,-20), "mm"))

And there we have it. I made some final touches like changing the font and adding a legend in Illustrator.


]]>
1477
Analyzing Hemispherical Photos https://www.azandisresearch.com/2019/02/03/analyzing-hemispherical-photos/ Mon, 04 Feb 2019 02:06:15 +0000 http://www.azandisresearch.com/?p=1205 Now that you’ve got the theory, equipment, and your own hemispherical photos, the final steps are processing the photos and generating estimates of canopy and light values.

We’ll be correcting photos and then converting to a binary image for analysis just like those in the slider above.

There are many paths to get light estimates from photos. Below, I’ve detailed my own pipeline which consists of these main steps:

  • Standardizing exposure and correcting photos
  • Binarization
  • File conversion
  • Model configuration
  • Analyzing images

For this pipeline, I use Adobe Lightroom 5 for standardizing the photos and masking, ImageJ with Hemispherical 2.0 plugin for binarization and image conversion, Gap Light Analyzer (GLA) for estimating light values, and AutoHotKey for automating the GLA GUI. All of these programs, with the exception of Lightroom, are freely available. You could easily replicate all of the steps I show for Lightroom in GIMP, an open-source graphics program.

There are probably stand-alone proprietary programs out there that can analyze hemispherical photos in a more streamlined manner within a single program, but personally, I’m willing to sacrifice some efficiency for free and open-source methods.

1. Standardizing photos:

With the photos on your computer, the first step is to standardize the exposure. In theory, exposure should be standardized in the field by calibrating reference exposure to the open sky (as outlined in my previous post). However, when taking many photos throughout the day, or on days with variable cloud cover, there will be variation in the photos. Similarly, snowy days or highly reflective vegetation can contribute anomalies in the photos. Fortunately, you can reduce much of this variability in post-processing adjustments. We will apply some of these adjustments across the entire image and others we will need to correct targeted elements within the image.

Across the entire image, we can right-shift the exposure histogram so that the brightest pixels in the sky are always fully white. This standardizes the sky to the upper end of the brightness distribution across the image. So, even if we capture one photo with relatively dark clouds and another with relatively bright clouds over the course of a day, after standardizing, the difference between the white sky and dark canopy will be consistent.
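If you'd rather script this global adjustment than do it photo-by-photo in Lightroom, a levels-style stretch approximates the right shift. This is a sketch using the magick package; the file names are placeholders and the 2%/98% cut-offs are an assumption you should tune against your own photos.

```r
library(magick)

# Clip the darkest 2% and brightest 2% of the tonal range and stretch
# the rest, pushing the brightest sky pixels toward full white
img <- image_read("hemi_photo.jpg")  # placeholder file name
img <- image_level(img, black_point = 2, white_point = 98)
image_write(img, "hemi_photo_leveled.jpg")
```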

Example using a local adjustment brush to recover the effect of sun flare.

If elements of the image are highly reflective, an additional step to mask these areas is required to avoid underestimating canopy closure. For instance, extremely bright tree bark or snow on the ground should be masked. I do this by painting a -2EV exposure brush over the area to ensure it will be converted to black pixels in the binarization step. Unfortunately, this must be done by hand on each photo.

Example using a local adjustment brush to mask reflective areas in the image.

Sunspots can also be partially corrected, but again, it requires manipulation by hand on each photo. I find that a local exposure brush that depresses the highlights and increases the contrast will recover pixels that have been overexposed due to sun flare, but only to an extent. It is best to avoid flare in the first place (see my last post for tips for avoiding flare).

2. Binarization

Binarization is the process of converting all color pixels in an image to black (canopy) and white (sky). The most basic method of conversion is to set a defined threshold for the tone of each pixel, above which a pixel is converted to white and below which it is converted to black. This type of simple thresholding suffers from problems in cases where pixels are ambiguous, especially at the edges of solid canopy objects. To avoid these problems, a number of algorithms have been developed that take a more complex approach. I won’t go into details here, but I suggest reading Glatthorn and Beckshafer (2014).

Examples of pixels that pose significant problems for thresholding and binarizing images. From Glatthorn & Beckshafer 2014.

I explored a number of binarization programs while developing my project, including SideLook 1.1  (Nobis, 2005) and the in-program thresholding capability in Gap Light Analyzer (Frazer et al. 2000). However, both of these programs fell short of my needs. In the end, I found the Hemispherical 2.0 plugin (Beckshafer 2015) for ImageJ to be most useful because it is easily able to handle batch processing.

You can download the Hemispherical 2.0 plugin and manual here (scroll to the bottom of the page under “Forest Software”). This plugin uses a “Minimum” thresholding algorithm on the blue color plane of the image (skies are blue, so the blue plane yields the highest contrast). The threshold is based on a histogram populated from a moving window of neighboring pixel values to ensure continuity of canopy edges (see Glatthorn & Beckshafer 2014 for details).
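To make the idea of binarization concrete, here is the simplest possible version, a single global threshold on blue-channel values, as a toy sketch in base R. Note this is deliberately not the “Minimum” method the plugin uses; that one derives its threshold locally from a moving-window histogram.

```r
# Binarize a matrix of blue-channel values (0-255) with one global
# threshold: pixels above it become sky (1), the rest canopy (0)
binarize <- function(blue, threshold = 128) {
  ifelse(blue > threshold, 1L, 0L)
}

# Toy 3x3 blue-channel patch: bright sky, dark canopy, ambiguous edges
patch <- matrix(c(250, 240, 130,
                  245, 120,  40,
                  135,  35,  20), nrow = 3, byrow = TRUE)

bin <- binarize(patch)
canopy_openness <- mean(bin)  # proportion of sky pixels, here 5/9
```

The ambiguous values near the threshold (130, 120, 135) show exactly why a fixed global cut-off struggles at canopy edges.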

Once the plugin is installed, you can run the binarization process in batch on a folder containing all of your adjusted hemispherical photo files. I’ve found the program lags if I try to batch process more than 100 photos at a time; so, I suggest splitting datasets into multiple folders with fewer than 100 files if you are processing lots of photos.

Example of a standard registration image. Naming this file starting with “000” or “AAA” will ensure it is the first image processed in the folder.

Before you begin the binarization and analysis process, I suggest that you make a registration image. The first step in both of the binarization and analysis programs is defining the area of interest in the image that excludes the black perimeter. This can be hard to do depending on which photo is first in the stack (for instance, if the ground is very dark and blends into the black perimeter). A registration image allows you to easily and consistently define your area of interest. I like to make a standardized registration image for all of my camera/lens combinations.

The easiest way to make a registration image is to simply take a photo of a white wall and increase the contrast in an editing program until you have hard edges. Or, you can overlay a solid color circle that perfectly matches the bounds of the hemisphere (as I’ve done in the photo above). I also like to include a demarcated point that indicates a Northward orientation (this is at the top of the circle, in my case). I use this mark as a starting point for manually dragging the area-of-interest selection tool in both GLA and Hemispherical 2.0.

3. File Conversion

Hemispherical 2.0 outputs binarized images as TIFF files, but GLA can only accept bitmap images. So, we need to convert the binary images to bitmap files. This can easily be accomplished in ImageJ (from the Menu go to Process > Batch > Convert).
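If you have R handy, the same conversion can also be scripted with the magick package instead of ImageJ. This is a sketch; the folder name is a placeholder.

```r
library(magick)

# Convert every binarized TIFF in a folder to BMP for GLA
# ("binarized" is a placeholder folder name)
tiffs <- list.files("binarized", pattern = "\\.tiff?$",
                    full.names = TRUE, ignore.case = TRUE)
for (f in tiffs) {
  out <- sub("\\.tiff?$", ".bmp", f, ignore.case = TRUE)
  image_write(image_read(f), out, format = "bmp")
}
```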

4. Model configuration

Now that our files are processed, we can finally move to GLA and compute our estimates of interest.

The first step is to establish the site specific parameters in the “Configure” menu. I’ll walk through these parameters tab-by-tab where they pertain to this pipeline. You can read more about each setting in the GLA User Manual.

“Image” tab:

In the “Registration” box, define the orientation of the photo. If you used a registration image with a defined North point, select “North” for the “Initial Cursor Point.” Select if your North point in the photo is geographic or magnetic. If the latter, input the declination.

In the “Projection distortion” box, select a lens distortion or input custom values (see my previous post for instructions on computing these for your lens).

“Site” tab:

Input the coordinates of the image in the “Location” box. Note that we will use a common coordinate for all photos in the batch. This only works if there is not much distance between photos (I use 10km as a rule of thumb, but realistically, even 100km probably makes very little difference). If your sites are far apart, remember to set new coordinates for each cluster of neighboring sites.

If you ensure that your images point directly up as I suggest in my previous post, you should select “Horizontal” in the “Orientation” box. Note that in this case, “Horizontal” means “horizontal with respect to gravity” not “horizontal to the ground” which might not be the same thing if you are standing on a slope.

The “Topographic Shading” is not important if our only interest is estimating light. If you are interested in the canopy itself, this tool can be used to exclude a situation where, for instance, a tall mountain blocks light behind a canopy and would be considered a canopy element by the program.

“Resolution” tab:

In the “Suntrack” box, it is important to carefully think about and set your season start and end dates. If you decide to consider other seasonal timespans later, you’ll need to rerun all of these analyses. Remember that if you are estimating light during spring leaf-out or fall senescence, you’ll need to run separate analyses for each season with corresponding images.

“Radiation” tab:

Assuming that you did not directly measure radiation values alongside your images, we need to model these parameters. These are probably the most difficult parameters to set because they will require some prior analysis from other data on radiation for your site.

The NSRDB has many observation locations across the U.S. Most of these have long-term data with low uncertainty in daily estimates.

You can download radiation values from local observation points from the National Solar Radiation Dataset website. From these data, you can calculate parameter estimates for Cloudiness index, Beam fraction, and Spectral fraction. Remember only to use data for the seasonal window that matches the seasonal window of interest in your photos.

Cloudiness index (Kt):

This is essentially a measure of the percentage of the total radiation entering the atmosphere (Ho) that makes it to the ground surface (H) (i.e. not reflected by clouds):

Kt = (H + 0.01) / (Ho + 0.01)

where H is the Modeled Global Horizontal radiation (Glo Mod (W/m^2)) and Ho is the Hourly Extraterrestrial radiation (ETRN (W/m^2)), with 0.01 added to each to avoid dividing by zero.
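Given hourly NSRDB columns, that ratio is straightforward to compute in base R. Here is a small sketch with made-up hourly values; in practice you would average over the hours in your seasonal window.

```r
# Cloudiness index: Kt = (H + 0.01) / (Ho + 0.01), where H is modeled
# global horizontal radiation and Ho is extraterrestrial radiation;
# the 0.01 avoids dividing by zero in nighttime hours
cloudiness_index <- function(H, Ho) {
  (H + 0.01) / (Ho + 0.01)
}

# Toy daytime hourly values in W/m^2 (not real NSRDB data)
H  <- c(150, 420, 610)
Ho <- c(310, 820, 1050)
Kt <- mean(cloudiness_index(H, Ho))
```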

Beam fraction (Hb/H):

Beam fraction is the ratio of direct-beam radiation (Hb) to the total radiation that reaches the ground (H). As you can imagine, as cloudiness increases, direct-beam radiation decreases because it is diffused by the clouds; thus, we can estimate beam fraction as a function of cloudiness (Kt).

Where the constants -3.044 and 2.436 are empirically derived by Frazer et al. (1999) (see their paper for details and extensions).

Spectral fraction (Rp/Rs):

Spectral fraction is a ratio of photosynthetically active radiation wavelengths (Rp) to all broadband shortwave radiation (Rs). Frazer et al. (1999) provide the following equation for calculating Spectral fraction as a function of Cloudiness index (Kt).

Frazer et al. (1999) provide guidance on computing this as a monthly mean, but for most analyses, I think this overall average is probably fine.

The Solar constant is not region-specific, so the default value should be used.

The Transmission coefficient basically describes how much solar transmittance is expected on a perfectly clear day and is mainly a function of elevation and particulate matter in the air (i.e. dusty places have lower clear sky transmission). Frazer et al. (1999) suggest that this coefficient ranges from 0.4 to 0.8 globally and between 0.6 and 0.7 for North America. Therefore, they use a default of 0.65. Unless you have good reason to estimate your own transmission values, this default should be fine.

It can be very intimidating to pick values for these parameters. To be honest, I suspect that most practitioners never even open up the configuration menu, and instead, simply use the default parameters of the program. In the end, no matter what values you use to initialize your model, the most critical step is recording the values you chose and justifying your choice. As long as you are transparent with your analysis, others can repeat it and adjust these parameters in order to make direct comparisons with their own study system. Note: you can easily save this configuration file within GLA, too.

 5. Analyzing images

Once GLA is configured, you will need to register the first image. If you’ve named your registration image something like “AA000” as I have, it will be the first in your stack. Place the cursor on the known north point and drag to fit the circumference. Be sure to select the “Fix Registration for Next Image” button and also record the coordinate values for the circle.

Now that the first image is registered, we can automate the analysis process for the rest of the images. Unfortunately, GLA does not have a command line version and the GUI does not offer a method of batch processing. So, I wrote a script for the AutoHotKey program that replicates keyboard and mouse inputs. You can download AutoHotKey here. The free version is fine. For best operation, I recommend the 32-bit version.

I’ve included my AutoHotKey script below. You will need to use a text editor to edit the file path in the macro’s Loop, Files line (the ENTER_YOUR_FILEPATH_HERE placeholder) so that it points to your image folder. Save the file as an AutoHotKey file (.ahk) and run it as administrator. Now you can pull up the GLA window and use the F6 shortcut to start the macro.

You should see the program move your cursor to open windows and perform operations, like in the video above. Since AutoHotKey is replicating mouse and keyboard movements, if you use your computer, it will stop the program. I suggest not touching your computer until the program is done (go get a cup of coffee while your new robot does all of your analysis for you!). If you must use your computer concurrently, you will need to first set up GLA and the AutoHotKey to run in a virtual machine.


#NoEnv
SetWorkingDir %A_ScriptDir%
CoordMode, Mouse, Window
SendMode Input
#SingleInstance Force
SetTitleMatchMode 2
#WinActivateForce
SetControlDelay 1
SetWinDelay 0
SetKeyDelay -1
SetMouseDelay -1
SetBatchLines -1

F6::
Macro1:
Loop, Files, ENTER_YOUR_FILEPATH_HERE*.*, F
{
WinMenuSelectItem, Gap Light Analyzer, , File, Open Image..
WinWaitActive, Gap Light Analyzer ahk_class #32770
Sleep, 333
ControlClick, Button1, Gap Light Analyzer ahk_class #32770,, Left, 1,  NA
Sleep, 10
WinWaitActive, Open file... ahk_class #32770
Sleep, 333
ControlSetText, Edit1, %A_LoopFileName%, Open file... ahk_class #32770
ControlClick, Button2, Open file... ahk_class #32770,, Left, 1,  NA
Sleep, 1000
Sleep, 1000
WinWaitActive, Gap Light Analyzer  ; Start of new try
Sleep, 333
WinMenuSelectItem, Gap Light Analyzer, , Configure, Register Image
WinWaitActive, Gap Light Analyzer  ahk_class ThunderRT5MDIForm
Sleep, 333
ControlClick, ThunderRT5CommandButton1, Gap Light Analyzer  ahk_class ThunderRT5MDIForm,, Left, 1,  NA
Sleep, 10
WinMenuSelectItem, Gap Light Analyzer, , View, OverLay Sky-Region Grid
WinMenuSelectItem, Gap Light Analyzer, , View, OverLay Mask
WinMenuSelectItem, Gap Light Analyzer, , Image, Threshold..
WinWaitActive, Threshold ahk_class ThunderRT5Form
Sleep, 333
ControlClick, ThunderRT5CommandButton1, Threshold ahk_class ThunderRT5Form,, Left, 1,  x34 y13 NA
Sleep, 10
WinMenuSelectItem, Gap Light Analyzer, , Calculate, Run Calculations..
WinWaitActive, Calculations ahk_class ThunderRT5Form
Sleep, 333
ControlClick, ThunderRT5CommandButton1, Calculations ahk_class ThunderRT5Form,, Left, 1,  NA
Sleep, 10
WinWaitActive, Calculation Summary Results ahk_class ThunderRT5Form
Sleep, 333
ControlSetText, ThunderRT5TextBox4, %A_LoopFileName%, Calculation Summary Results ahk_class ThunderRT5Form
ControlClick, ThunderRT5CommandButton2, Calculation Summary Results ahk_class ThunderRT5Form,, Left, 1,  NA
Sleep, 10
Sleep, 5000
}
MsgBox, 0, , Done!
Return

F8::ExitApp

F12::Pause

When the program has looped through all of the photos, a dialogue box will pop up with “Done!”

The data output is in a separate sub-window within GLA. It is usually minimized and hidden behind the registration image window. You can save the data, but note that GLA saves the output as a semicolon-delimited text file by default. This is easily converted to a CSV or Excel worksheet.
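If you would rather skip the spreadsheet round-trip entirely, the re-delimiting is trivial to script. Here is a minimal sketch in Python (the file names and column headers below are placeholders for illustration, not GLA’s actual output schema):

```python
import csv

def semicolons_to_csv(in_path, out_path):
    """Convert a semicolon-delimited text file to a standard CSV."""
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        writer = csv.writer(dst)
        for row in csv.reader(src, delimiter=";"):
            writer.writerow(row)

# Demo with a made-up two-line file standing in for a real GLA export.
with open("gla_output.txt", "w") as f:
    f.write("Image;Canopy Openness;LAI\nAA001.jpg;12.5;3.2\n")

semicolons_to_csv("gla_output.txt", "gla_output.csv")
print(open("gla_output.csv").read())
```

The same one-to-one re-delimiting could be done in R or any spreadsheet program; only the delimiter argument changes.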

And you’re done!

Be sure to check out my other posts about canopy research that cover the theory, hardware, field sampling, and analysis pipelines for hemispherical photos.

Also, check out my new method of canopy photography with smartphones and my tips for taking spherical panoramas.


References

Beckschäfer, P. (2015). Hemispherical_2.0: Batch processing hemispherical and canopy photographs with ImageJ – User manual. doi:10.13140/RG.2.1.3059.4088.

Frazer, G. W., Canham, C. D., and Lertzman, K. P. (2000). Technology Tools: Gap Light Analyzer, version 2.0. The Bulletin of the Ecological Society of America 81, 191–197.

Frazer, G. W., Canham, C. D., and Lertzman, K. P. (1999). Gap Light Analyzer (GLA): Imaging software to extract canopy structure and gap light transmission indices from true-color fisheye photographs, users manual and documentation. Burnaby, British Columbia: Simon Fraser University Available at: http://rem-main.rem.sfu.ca/downloads/Forestry/GLAV2UsersManual.pdf.

Glatthorn, J., and Beckschäfer, P. (2014). Standardizing the protocol for hemispherical photographs: accuracy assessment of binarization algorithms. PLoS One 9, e111924.

Nobis, M. (2005). Program documentation for SideLook 1.1 – Imaging software for the analysis of vegetation structure with true-colour photographs. Available at: http://www.appleco.ch/sidelook.pdf.

Wilcox, S. (2012). National Solar Radiation Database 1991–2010 Update: User’s Manual. National Renewable Energy Laboratory Available at: https://www.nrel.gov/docs/fy12osti/54824.pdf.

A Wake in Space-time https://www.azandisresearch.com/2018/07/26/a-wake-in-space-time/ Thu, 26 Jul 2018 18:49:11 +0000 http://www.azandisresearch.com/?p=660

I’m currently on my way up to Alaska for another supremely short season of guiding (just two trips this year). I was going through some old photos and came across this image of the Milky Way from a trip back in 2014. It evoked a memory of the last time I paddled with Ken Leghorn in Windfall Harbor.

Ken Leghorn was a hero of the Alaskan conservation movement and a friend and mentor of mine who passed away a little over a year ago. I wrote down this recollection on the airline napkin:

The night was dead still under the stars as we scraped the final bites from our dishes and made the slippery pilgrimage over the popweed to wash plates at the waterline. As we cast our rinse water out, the splash excited thousands of tiny green sparks in the wake. Bioluminescent algae had flooded into Windfall Harbor with the rising tide and now the bay was dense with the tiny flashing organisms. Ken and I decided it was definitely worth the effort of pulling a tandem down from the woods. We slid off into the black indefinite water. Every paddle stroke lit up like an aquatic Christmas tree. We stopped paddling not far from shore and floated. As the hull lost momentum it ceased to perturb the algae. Now the water was a black mirror of the star-full sky. Between the silence, Ken and I traded similes: Our kayak was like a space ship floating in space. Our wake was like a ripple in space-time. The Alexander Archipelago was like a solar system hurtling through the universe and we were a satellite in orbit around a tiny island planet.

We paddled back and pulled the kayak up into the treeline. Knotting the bowline, we agreed it was the best bioluminescence we’d ever seen in Southeast.

Since I shuttered my photoblog a few months ago, I realized that my original post from that trip to Windfall Harbor had been lost to the ether. So, I resurrected the photos and lightly edited that post below.

From August 2014:

There are only a few Wilderness areas in Southeast Alaska that I have not been to. Surprisingly, Admiralty Island/Kootznoowoo Wilderness, one of the larger Wildernesses in the Tongass, is one that I had never visited. Along with Baranof Island and Chichagof Island, Admiralty Island has one of the highest concentrations of brown bears in the world. The average is one bear per square mile. That means the bears outnumber the people on these large islands. In fact, Admiralty itself has more bears on it than all of the lower 48 states combined.

Pack Creek is a special place for bears. It is a wildlife sanctuary in addition to its Wilderness designation. That means there is no hunting of bears at Pack, and viewing at the Creek is strictly regulated. This is a great setup for bear viewing, as bears get much closer than would normally be comfortable. We arrived late in the season, well after the tourists, so we basically had the place to ourselves.

Many thanks to my friend Ken Leghorn and Pack Creek Bear Tours for loaning us a kayak, sharing salmon dinner, and providing super helpful, detailed info about Pack. If you ever want to make the trip yourself, Pack Creek Bear Tours are the folks to call.

The inspiration for this trip was a visit from one of my best friends from middle school, Jordan, who came up to visit Alaska for the first time. After years of hearing about the incredible bear viewing at Pack Creek, this seemed like the best excuse to spend a few days there. We boarded the float plane in Juneau and made the short flight to Windfall Harbor, where the Forest Service maintains a small seasonal camp for their rangers on an island just a stone’s throw from the Creek. This is also where Pack Creek Outfitters store their kayaks. It was the end of the season, so Ken offered to let us use a kayak for a few days if we would help him move his fleet to the winter storage area.

The operation at Pack Creek is nothing like any other bear viewing site. There is no platform, no fences, no barriers. The viewing area is a 5 by 10 meter area of mown grass with a driftlog to sit on. The Forest Service and Fish and Game rangers are on-site at all times that people are present. They are trained to let the bears move about freely up to the edge of the mown grass line.

The unique situation at Pack Creek is a stamp of its history. In the 1930s, a major conservation campaign sprouted with the intent of designating all of Admiralty Island a bear refuge, but succeeded only in protecting the Pack Creek drainage from hunting. In 1935, the Forest Service designated it an official bear-viewing area. Despite the restrictions, poaching was regular in the remote watershed. In 1956, a local miner and logger, Stan Price, rowed his floating cabin onshore at the mouth of Pack Creek and established a homestead with his wife, Edna. Rather than fearing the local bruins, they treated them as neighbors. Their presence helped to curtail poaching and also attracted new visitors. For almost four decades, the Prices lived with the bears. Over that time, new generations of cubs were born and reared with the Prices as a normal fixture of life. By the time Price died in 1989, just about every local bear was habituated to constant human presence.

In 1984, the tiny sliver of bear sanctuary was expanded to a no-hunting zone encompassing Pack Creek and the adjacent watersheds, as well as the islands in Windfall Harbor. As the 80s progressed, visitation increased to the point that the agencies decided to actively manage the area. Viewing times were limited, rangers were installed on-site, and visitation was capped at just 24 people per day.

As a result, generations of bears have come to associate Pack Creek as a safe haven from hunting and to ignore the small groups of human onlookers.

The Swan River estuary looking south across Windfall Harbor.
The dark silhouettes of salmon in the clear waters of an Admiralty Island stream.

Bear trail through the grass, making a straight line from one salmon stream to the next.

Sitting on a log, surrounded by Alaskan brown bears playing, snoozing, bathing, and snapping at salmon is a mesmerizing experience. We spent most of our time sitting on the log at the Creek mouth or walking up the trail to the viewing platform. But we managed a couple paddles around the Harbor, including a visit to the most impressive Sitka Spruce tree I’ve ever met.

Both photos are the same tree from different aspects. Daven is easy to spot in his bright blue jacket (left), but you have to look a little more closely to see me lounging on the branch in the right image.

On our second night, we sat under the clear night sky and discovered bioluminescent algae in the water. It is rare to see stars in Southeast Alaska. And it is a pleasure to see them reflected in the still waters. It is utterly, chest-caving, breathtaking to paddle the myth-like firmament of water sandwiched between a sky of stars above and swirls of bioluminescence below. Ken and I paddled out in a tandem just to sit and float. I can’t describe it. It was one of those utterly unique experiences that will forever bound my conception of hyperbole.

On the final evening of our visit, Jordan and I sat on the log with my friend Daven, who happened to be the Forest Service Ranger on staff for the day.

The three of us sat in silence for most of the evening, occasionally swatting mosquitos, surveying the moldering ruins of Stan Price’s cabin, and spotting bears across the river. With the sun dropping behind the mountains, we were contemplating packing up for the evening when a medium-sized, rich-chocolate-colored bear sauntered out of the trees. Daven recognized her immediately as Chino (her mother, a creamy brown bear, was named Mocha… get it?). Chino ambled across the streamlets and, with no attention to us, came to rest in the tall grass at the edge of the viewing area. We were stunned into silence. I frantically switched lenses, since she was closer than the minimum focusing distance of my long lens, and filled cards with her portrait.

As Chino ambled toward us, casually munching sedge, we sat quiet. You can see Daven’s official USFS hat as he crouches in front of me.

After grazing on the grass before us, Chino walked a couple meters past and sat down with her back to us, ears unalert and pointed away, in a posture of complete indifference to our presence.

I’ve seen many, many bears at very close range. But the general protocol for bear encounters is to make your presence known, with the goal of making it clear to the bear that you want your space. At Pack Creek, the tone is completely different: the intent is to discharge any discomfort, to let Chino forget we were even there. I learned that nonchalance is a powerful emotion when seen in the eyes of a bear.

We flew out on a clear day with Ken. Upon takeoff, we circled over the Swan River estuary, which was expansive at low tide. The afternoon sun fluoresced the rivulets like veins under an X-ray. Out on the flats, we passed over a sow and two cubs. It takes a big landscape to make a 900 lb animal look like a speck, and it takes an even larger Wilderness area to ensure that such a landscape remains truly wild.

Songbird Vision https://www.azandisresearch.com/2018/07/21/songbird-vision/ Sat, 21 Jul 2018 05:21:32 +0000 https://www.azandisresearch.com/?p=589 I’m getting really excited about a project I’m working on that will visually compare the way various animals see their habitat. This is part of a Digital Education Innovation grant I received from Yale’s Center for Teaching and Learning.

Below is an example of a comparison between songbird vision and human vision of the same forest habitat. Songbirds have an extra cone type that allows them to see in ultraviolet. They also have about a 240-degree field of view compared to our 190. But our range of binocular vision is about 40 degrees, fully twice that of songbirds.

These images were made by stacking two separate panoramas composited from about 75 images each. I took one pano in UV and one in visible spectrum, then layered them for a false-color bird’s-eye-view of the world.

Oh, and here is a sweet gif showing UV versus visible light images.

Hardware for hemispherical photos https://www.azandisresearch.com/2018/03/01/hardware-for-hemispherical-photos/ Fri, 02 Mar 2018 03:09:18 +0000 https://www.azandisresearch.com/?p=330 This is my second technical post on using hemispherical photography for light estimates. My previous post considered the rationale behind hemispherical photography estimation methods. This post will cover hardware, specifically:

  1. Lens
  2. Camera
  3. Stabilizer

One of the reasons for the lack of standardization in hemispherical photo techniques is the rapid pace of improvements in camera technology. In the earliest days of hemispherical canopy photos, researchers shot on film, developed the images, and scanned them into digital images for processing (Rich 1989). Now that digital camera sensors have improved, we can skip the development and scanning steps, and the accuracy of these techniques has also increased. However, substantial bias can creep into light analyses if the data are captured without a solid understanding of hardware (covered in this post) and settings (covered in the next post).

At the most basic level, hemispherical photography requires just three pieces of hardware: a camera, a lens, and something stable to mount it to.

Lens:

Arguably, the lens is the most critical piece of equipment, because without a true hemispherical lens with an appropriate projection, all photos, regardless of the camera, will be biased.

Fisheye versus hemispherical:

Hemispherical lenses are relatively rare in the camera world. In your hunt for the perfect lens, you will more likely come across many variants of fisheye lenses, which are more commonly used in traditional photography. Hemispherical and fisheye lenses are similar in that they capture a wide angle of view and project it down onto the flat and rectangular camera sensor. In fact, both types of lenses can be built with the same glass. The main difference is that hemispherical lenses focus a circular image ON the sensor, whereas fisheye lenses project a circle AROUND the sensor, so that the resulting image is rectangular (see figure below).

Example of the same image projected by the same lens onto different camera sensor sizes. On a full-frame sensor, this lens is considered hemispherical, while on a cropped sensor, this lens is considered a fisheye. Illustration by me, AZ Andis.

Hemispherical lenses are extremely uncommon. Lens manufacturers produce these primarily for research purposes. Fisheye lenses, on the other hand, are common—almost every manufacturer makes a few focal lengths.

So how do you find a true hemi-lens? The most certain method is to buy a tried and true model (I use the Sigma 4.5mm f2.8). Otherwise, you can try to find a lens that captures a 180 degree angle of view and results in a circle within the photo frame. However, be aware that not all hemispherical lenses are created equal. Different lenses produce different angular distortion, and this distortion can radically alter light estimates.
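Since the fisheye/hemispherical distinction boils down to whether the full 180 degree image circle lands inside the sensor, you can sanity-check a candidate lens with a little arithmetic. Here is a sketch (in Python, purely for illustration), assuming an equisolid projection, where the radius at zenith angle θ is 2f·sin(θ/2); the sensor dimensions used below are nominal Canon APS-C dimensions:

```python
import math

def image_circle_mm(focal_mm):
    """Diameter of the 180-degree image circle for an equisolid lens:
    r(theta) = 2 f sin(theta / 2), so the full circle spans 2 * 2 f sin(45 deg)."""
    return 2 * 2 * focal_mm * math.sin(math.radians(45))

def is_hemispherical_on(focal_mm, sensor_w_mm, sensor_h_mm):
    """True if the full image circle fits inside the sensor frame."""
    return image_circle_mm(focal_mm) <= min(sensor_w_mm, sensor_h_mm)

# A 4.5 mm equisolid lens has an image circle about 12.7 mm across...
print(round(image_circle_mm(4.5), 1))        # ~12.7
# ...which fits within a 22.3 x 14.9 mm APS-C frame.
print(is_hemispherical_on(4.5, 22.3, 14.9))
```

A lens whose circle overflows the short side of the sensor behaves as a fisheye on that body, which is exactly the full-frame/cropped-sensor contrast in the figure above.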

Lens projection:

The ideal lens for hemispherical canopy measurements will retain the same area over any viewing angle (i.e. “equal solid angle” in Herbert 1987; or “equisolid” in Fleck 1994).

Figure adapted from Herbert (1987) showing the ideal projection of a hemispherical lens wherein objects at equal distance and equal size from the camera are represented by equal area on the image plane.

In other words, an object should take up the same number of pixels if it is in the center of the frame (zenith angle = 0) or the very edge of the frame (zenith angle = 90), although the shape may be distorted. But, this is not the case for most lenses, which look like a funhouse mirror depending on the zenith angle of view. The difference originates in the intended application of the lens. Most traditional photographers prefer fidelity of shape over fidelity of area. For instance, landscape photographers don’t care if a field of flowers maintains constant shape across an image, but they certainly want all of the flowers to retain a flower-like shape. For this reason, most common fisheye lenses employ another common radial projection called “equidistant”, in which each pixel represents the same angular value at all zenith angles and maintains shape at all viewing angles (see the difference in the image below). (Fleck (1994) does a great job explaining the math behind angle projections, why some are better for different applications, and why some of them look so strange.)

Fleck (1994) demonstrates the “funhouse mirror” effect of different projections that either maintain consistent area (right) or consistent angle and shape (left) across the image frame.

Michael Thoby’s post is one of the most comprehensive and easy-to-follow explanations I’ve found of the many radial distortions lenses can accommodate. He even has some neat, interactive examples of various projections in this post.

Figure by Michael Thoby. Relationship between viewing angle, and position on an image plane.
Figure by Michael Thoby. Equations for calculating the relationship between viewing angle and position on an image plane for common radial projections (color of equations in the figure on the left correspond to the color of the projection curves in the figure on the right).
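The textbook forms of these radial projections are simple enough to write out and compare directly. Here is a sketch (in Python, for illustration; zenith angle in radians, f the focal length, and 4.5 mm used only as an example value):

```python
import math

# Radial distance r on the image plane as a function of zenith angle theta
# (radians) and focal length f, for the common fisheye/hemispherical projections.
def equidistant(theta, f):   return f * theta
def equisolid(theta, f):     return 2 * f * math.sin(theta / 2)
def stereographic(theta, f): return 2 * f * math.tan(theta / 2)
def orthographic(theta, f):  return f * math.sin(theta)

# At the rim of a 180-degree lens (theta = 90 deg), each projection places
# the image circle at a different radius for the same focal length:
f = 4.5  # mm
for name, fn in [("equidistant", equidistant), ("equisolid", equisolid),
                 ("stereographic", stereographic), ("orthographic", orthographic)]:
    print(name, round(fn(math.pi / 2, f), 2))
```

Plotting these four curves over 0 to 90 degrees reproduces the family of projection curves in Thoby’s figure.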

Calibrating lens projections:

Not all lens manufacturers will explicitly indicate which radial distortion they use. Nevertheless, you can easily calculate the projection of any hemispherical lens empirically with a couple of sheets of grid paper and a round frame (I used a sieve from the soil-microbe lab). Simply affix the grid sheets to the inside rim of the frame, place the camera inside with the edges of the lens lined up along the chord across the diameter, then snap a shot (see image below).

Calibration test for a 180 degree Lensbaby GoPro attachment.

In ImageJ (or similar program), measure the pixel count across the diameter of the image frame. Next, count the number of cells across the diameter of the photo. Dividing the number of cells across the diameter by 18 (or 36) will tell you how many cells represent each 10 (or 5) degree angle of view. Now, starting at the center of the image (i.e. the pixel count across the diameter divided by 2) and moving outward, count the cells (which each represent a known degree angle) and measure the total pixels. Continue doing this from the center (zenith angle = 0) out to the outer edge of the image (zenith angle = 90).

For example, here are the empirically derived projection estimates for the GoPro/Lensbaby180+ lens setup I tested:

You can also calculate these values for other image projections to compare the distortion of your lens. Dividing the total pixels across by the total cells across gives the pixel count in each cell that would be expected from an equidistant angle projection. For equisolid projections, calculate the radial distance from the center by taking the sine of half the zenith angle and multiplying by the total pixel diameter divided by √2 (so that a zenith angle of 90 degrees lands on the image rim). I did this for a 180 degree Lensbaby GoPro attachment and plotted the actual pixel count for each viewing angle against equisolid and equidistant projections (see figure below). As you can see in the figure, for the Lensbaby lens, objects in the periphery of the frame are compressed (i.e. fewer pixels represent a given angle of view).

Figure by me, A.Z. Andis, plotting equidistant (black), equisolid (red), and non-standard (blue) lens projection curves. The non-standard estimates were empirically measured projections from the LensBaby 180+ lens on a GoPro Hero4.
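The comparison in the figure can be reproduced numerically. Here is a sketch (in Python, for illustration), normalizing both theoretical curves to the pixel radius measured at the 90 degree rim; the “measured” radii below are made-up illustration values, not my Lensbaby measurements:

```python
import math

def equidistant_radius(theta_deg, image_radius_px):
    """Equidistant projection: radius grows linearly with zenith angle."""
    return image_radius_px * theta_deg / 90

def equisolid_radius(theta_deg, image_radius_px):
    """Equisolid projection: r ~ sin(theta/2), normalized so theta = 90 hits the rim."""
    return (image_radius_px * math.sin(math.radians(theta_deg / 2))
            / math.sin(math.radians(45)))

# Hypothetical measured radii (pixels) every 10 degrees from a grid-paper shot.
measured = [0, 160, 320, 470, 610, 740, 850, 940, 1000, 1040]
R = measured[-1]  # pixel radius at the 90-degree rim

for i, theta in enumerate(range(0, 91, 10)):
    print(theta, measured[i],
          round(equidistant_radius(theta, R)),
          round(equisolid_radius(theta, R)))
```

Where the measured column falls below both theoretical columns at high zenith angles, the lens is compressing the periphery, which is the systematic bias described above.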

Downstream analysis applications can correct for lens distortion by weighting estimates from different radii in the image, but this means that errors are not continuous across the angle of view. In canopy measurements where the largest gaps will occur in the center of the image, this results in systematic bias of estimates.

Although not a direct comparison of lenses, I took identical photos with a GoPro/Lensbaby system and a Canon DSLR/Sigma 4.5mm f2.8 system (equisolid) in locations with increasing canopy gaps. In more closed canopies, the lens distortion of the Lensbaby lens systematically overestimates light values (see figure below), even when using an automatic thresholding algorithm (we’ll go into all of this analysis in post 4).

Camera:

Which camera should you use? In theory, any camera with a 180 degree lens can be used for canopy estimates. But for accurate and replicable measurements, a camera with a high quality sensor and fully-programmable settings is required.

Sensors:

High quality is not synonymous with ‘a bazillion megapixels.’ A camera’s sensor is actually an array of many tiny sensors packed together. When the camera takes a photo, light enters through the lens and “collects” on each individual sensor, kind of like rain falling on a field collecting in an array of buckets. If we want to collect as much light (or rainwater) as possible, we want the largest sensor area (or bucket area) possible. To use the rain-catching analogy, it won’t matter if we put out a bunch of tiny buckets or fewer large buckets, we’ll still collect the same amount of water if we cover the same area. The same goes for light. One can pack lots of pixels onto a tiny sensor, but at the end of the day, a bigger sensor will (pretty much) always produce higher quality image information, even if the total megapixel count is lower.
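To put rough numbers on the bucket analogy: at the same megapixel count, the light-collecting area per pixel scales directly with sensor area. A sketch (in Python, for illustration; the 36 × 24 mm full-frame and 6.17 × 4.55 mm 1/2.3-inch dimensions are standard nominal sensor sizes):

```python
def photosite_area_um2(sensor_w_mm, sensor_h_mm, megapixels):
    """Approximate area of one photosite (pixel), in square microns."""
    sensor_area_um2 = (sensor_w_mm * 1000) * (sensor_h_mm * 1000)
    return sensor_area_um2 / (megapixels * 1e6)

# Same 20 MP pixel count, very different collecting area per pixel:
full_frame = photosite_area_um2(36.0, 24.0, 20)   # ~43 um^2 per pixel
compact    = photosite_area_um2(6.17, 4.55, 20)   # ~1.4 um^2 per pixel
print(round(full_frame, 1), round(compact, 2))
```

Roughly a thirty-fold difference in per-pixel area, despite identical megapixel counts, which is why sensor size matters more than the megapixel number on the box.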

Camera control:

As digital cameras have become more popular in general, camera manufacturers have developed more powerful on-board computers that can do most of the thinking for the user. This is helpful for selfies, but not as useful for accurate research, wherein we need to carefully select the camera’s settings. For accurate photos, you should be able to independently set the shutter speed, aperture, ISO, and light metering mode, at the very least (I’ll go over these settings in the next post).

Camera type:

DSLRs (digital single-lens reflex cameras like Canon EOS series cameras), especially full-frame DSLRs, house large sensors and are ideal for canopy photos because they are fully programmable. The downside is that they are heavy, expensive, and rarely weatherproof. (There’s nothing more nerve-wracking than sinking off-balance into pond muck trying to keep a couple thousand dollars’ worth of camera hardware above the surface!) In contrast, a cellphone with a clip-on hemispherical lens is light, relatively cheap, and can be waterproofed, but the image quality is too low and the parameters for image capture are difficult or impossible to set. A middle-ground option could be a point-and-shoot camera (like the Nikon Coolpix); the primary disadvantage being the lack of standardization in lenses.

Hemispherical photography systems, L to R: GoPro Hero4 with LensBaby 180+ lens, Nikon Coolpix with hemispherical lens, Canon 60D with Sigma 4.5mm f2.8 lens.

Capture format:

A final consideration is that your camera should be able to capture RAW images. RAW file formats retain all information for each pixel. Most consumer cameras, on the other hand, utilize image compression formats (like JPEG) that reduce file size by averaging pixel values and reducing information complexity. For most canopy measurement purposes, the first step of processing is binarization (which is basically just a radical compression scheme), so pre-analysis compression should have little to no effect. If, however, one intends to spot-edit photos prior to processing (for instance, applying exposure corrections to sun spots), RAW format will be required.

My setup:

I’ve tried many camera systems and, in the end, I think a DSLR with a hemi-lens is really the best option if it is in your budget. I use a Canon 60D with the Sigma 4.5mm f2.8. You could also use a lower-end Canon (like a T-series Rebel) or Nikon camera body with the same lens. This setup offers a large sensor, full control over capture settings, and is well represented in the literature.

Stabilization

Finally, canopy photos require the camera to be pointed directly upward and perfectly level. The simplest option to achieve this orientation is to mount the camera to a tripod with a fully articulated head. A dual-axis bubble level placed on the lens cover should facilitate a perfectly level lens orientation.

Repositioning and leveling a tripod for multiple sites gets very tedious, very quickly. To speed up the process, an auto-leveling device can be attached to the tripod. Once calibrated to the weight and balance of the camera, these devices ensure that the lens points opposite of gravity (i.e. level) at all times. Unfortunately, I’ve only seen these devices offered along with (expensive) proprietary software like HemiView, and the units are a bit heavy.

An auto-leveling device that comes with the Hemiview software package. While this unit is nice, cheaper and lighter DIY options are available.

For most terrestrial applications, a tripod system will work well. However, if you need to take many photos at different points, a tripod is a pain to redeploy. Or, if you need photos over water (as in my case) tripods may be infeasible. To overcome this problem, I created a handheld stabilizer from a cheap steadycam (a free-rotating, dual-axis handle attached to a post with the camera mounted to the top and counterweights at the bottom) and an L-bracket (see the SECOND video in the IG post below). This rig allows me to move freely to sampling points and immediately take photos without an arduous setup process, while still ensuring that the lens is perfectly level.

A post shared by Andis A. (@azandis)

In the next post, I cover how to take hemispherical canopy photos in the field, including sampling strategies and camera settings.

Be sure to check out my other posts about canopy research that cover the theory, hardware, field sampling, and analysis pipelines for hemispherical photos.

Also, check out my new method of canopy photography with smartphones and my tips for taking spherical panoramas.


References:

Fleck, M. M. (1994). Perspective Projection: the Wrong Imaging Model. Computer Science, University of Iowa.

Herbert, T. J. (1987). Area projections of fisheye photographic lenses. Agric. For. Meteorol. 39, 215–223. doi:10.1016/0168-1923(87)90039-6.

Rich, P. M. (1989). A Manual for Analysis of Hemispherical Canopy Photography. Los Alamos National Laboratory.
