hemispherical photography – A.Z. Andis Arietta
https://www.azandisresearch.com
Ecology, Evolution & Conservation

UPDATE: Smartphone Hemispherical Image Analysis
https://www.azandisresearch.com/2023/05/20/update-smartphone-hemispherical-image-analysis/
Sat, 20 May 2023 19:27:45 +0000

When I first developed the method to estimate canopy metrics from smartphone spherical panoramas, I used a somewhat convoluted workflow: command-line image manipulation, an ImageJ plugin for binarization, and a script I wrote in AutoHotKey to automate mouse clicks on a GUI for canopy measures. Admittedly, that is a difficult pipeline for others to replicate.

In an effort to make life easier, I spent some time building out the pipeline entirely in R, including a sourceable function for converting spherical panos to hemispherical images (all available on this repo).

The easiest way to convert all of your spherical panos to hemispherical projections is to source the function from my GitHub:

source("https://raw.githubusercontent.com/andisa01/Spherical-Pano-UPDATE/main/Spheres_to_Hemis.R")

When you source the script, it will install and load all necessary packages. It also downloads the masking file that we will use to black out the periphery of the images.

The script contains the function convert_spheres_to_hemis, which does exactly what it says. You’ll need to put all of your raw spherical panos into a subdirectory within your working directory. We can then pass the path to that directory as an argument to the function.

convert_spheres_to_hemis(focal_path = "./raw_panos/")

This function will loop through all of your raw panos, convert them to masked, north-oriented, upward-facing hemispherical images, and put them all in a folder called “masked_hemispheres” in your working directory. It will also output a csv file called “canopy_output.csv” that contains information about each image.
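To sanity-check a run, you can pull that report straight into your session. A minimal sketch, assuming the default “canopy_output.csv” filename described above (the exact columns will depend on your images):

```r
library(tidyverse)

# Read the per-image report written by convert_spheres_to_hemis()
# ("canopy_output.csv" is the default filename described above).
canopy_output <- read_csv("./canopy_output.csv")

# One row per processed pano; inspect the columns.
glimpse(canopy_output)
```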

Below, I will walk through the steps of the workflow that happen inside the convert_spheres_to_hemis function. If you just want to use the function, you can skip to the analysis. I’ve also written a script, “SphericalCanopyPanoProcessing.R”, in the repo that does all of the conversion AND analysis in batch.

library(tidyverse) # For data manipulation
library(exifr) # For extracting metadata

R is not really the best tool for working with image data. So, we’ll use the magick package to call the ImageMagick program from within R.

library(magick) # For image manipulation
# Check to ensure that ImageMagick was installed.
magick_config()$version

You should see a numeric version code like ‘6.9.12.3’ if ImageMagick was properly installed.

# imager also requires ImageMagick
library(imager) # For image display

For binarizing and calculating some canopy metrics, we will use Chianucci’s hemispheR package, which we need to install from its development repository on GitLab.

library(devtools)
devtools::install_git("https://gitlab.com/fchianucci/hemispheR")
library(hemispheR) # For binarization and estimating canopy measures

To get started, you’ll need to have all of your raw equirectangular panoramas in a folder. I like to keep my raw images in a subdirectory called ‘raw_panos’ within my working directory. Regardless of where you store your files, set the directory path as the focal_path variable. For this tutorial we’ll process a single image, but I’ve included scripts for batch processing in the repo. Set the name of the image to process as the focal_image variable.

focal_path <- "./raw_panos/"
focal_image <- "PXL_20230519_164804198.PHOTOSPHERE_small.jpg"

focal_image_path <- paste0(focal_path, focal_image)
focal_image_name <- sub("\\.[^.]+$", "", basename(focal_image_path))
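As a quick check of that filename handling: basename() drops the directory, and the regex strips only the final extension, so earlier dots in the name survive:

```r
# basename() removes "./raw_panos/"; sub("\\.[^.]+$", "", ...) removes the
# last dot and everything after it, leaving internal dots untouched.
path <- "./raw_panos/PXL_20230519_164804198.PHOTOSPHERE_small.jpg"
sub("\\.[^.]+$", "", basename(path))
# "PXL_20230519_164804198.PHOTOSPHERE_small"
```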

Let’s take a look at the equirectangular image.

pano <- image_read(focal_image_path)
pano # Visualize the pano
Raw equirectangular projection of a spherical panorama.

Note: One advantage of spherical panos is that they are large and therefore high resolution. The images from my Google Pixel 4a are 38 megapixels. For this tutorial, I downsized the example pano to 10% resolution to make processing and visualizing easier. For your analysis, I’d recommend using full-resolution images.
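If you want to make a similarly downsized copy for experimenting, magick accepts percentage geometries (a sketch; the input and output paths here are illustrative):

```r
library(magick)

# Create a 10%-scale working copy of a full-resolution pano
# (paths are illustrative placeholders).
pano_small <- image_read("./raw_panos/full_res_pano.jpg") %>%
  image_resize("10%")

image_write(pano_small, "./raw_panos/full_res_pano_small.jpg")
```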

Spherical panoramas contain far more metadata than an average image. We can take a look at all of this additional information with the read_exif function.

read_exif(focal_image_path) %>%
  glimpse()

We’ll extract some of this information (camera details, creation date, georeference, altitude, etc.) to output alongside our canopy metrics. You can decide which elements are most important or useful for you.

xmp_data <- read_exif(focal_image_path) %>%
  select(
    SourceFile,
    Make,
    Model,
    FullPanoWidthPixels,
    FullPanoHeightPixels,
    SourcePhotosCount,
    Megapixels,
    LastPhotoDate,
    GPSLatitude,
    GPSLongitude,
    GPSAltitude,
    PoseHeadingDegrees
  )

The first image-processing step is to convert the equirectangular panorama into a hemispherical image. We’ll need to store the image width and the heading for processing. The pose heading is a particularly important feature and a unique advantage of spherical panoramas. Since the camera automatically stores the compass heading of the first image of the panorama, we can use that information to automatically orient all of our hemispherical images so that true north is at the top of the image. This is critical for analyses of understory light, which require plotting the sun path onto the hemisphere.

# Store the pano width to use in scaling and cropping the image
pano_width <- image_info(pano)$width
image_heading <- read_exif(focal_image_path)$PoseHeadingDegrees

The steps to reproject the spherical panorama into an upward-looking hemispherical image go like this:

Crop the upper hemisphere (this is easy with smartphone spheres because the phone’s gyro ensures that the horizon line is always the midpoint of the y-axis).

Cropped upper hemisphere (top half of the image) from an equirectangular projection of a spherical panorama.

Rescale the cropped image into a square to retain the correct scaling when reprojected into polar coordinate space.

Rescaled upper hemisphere from an equirectangular projection of a spherical panorama.

Project into polar coordinate space and flip the perspective so that it is upward-looking.

Polar projection of the upper half of a spherical panorama.

Rotate the image so that the top of the image points true north and crop the image so that the diameter of the circle fills the frame.

Polar projection of the upper half of a spherical panorama rotated to orient to true north.

We can accomplish all of those steps with the code below.

pano_hemisphere <- pano %>%
  # Crop to retain the upper hemisphere
  image_crop(geometry_size_percent(100, 50)) %>%
  # Rescale into a square to keep the correct scale when projecting into polar coordinate space
  image_resize(geometry_size_percent(100, 400)) %>%
  # Remap the pixels into polar projection
  image_distort("Polar",
                c(0),
                bestfit = TRUE) %>%
  image_flip() %>%
  # Rotate the image to orient true north to the top of the image
  image_rotate(image_heading) %>%
  # Rotating expands the canvas, so we crop back to the dimensions of the hemisphere's diameter
  image_crop(paste0(pano_width, "x", pano_width, "-", pano_width/2, "-", pano_width/2))

The resulting image looks funny because the outer pixels are extended by interpolation, and the rotation leaves white space at the corners. Most analyses define a bounding perimeter to exclude any pixels outside of the circular hemisphere, so the weird border shouldn’t matter. But we can add a black mask to make the images look better.

I’ve included a vector file for a black mask to lay over the image in the repo.

# Get the image mask vector file
image_mask <- image_read("./HemiPhotoMask.svg") %>%
  image_transparent("white") %>%
  image_resize(geometry_size_pixels(width = pano_width, height = pano_width)) %>%
  image_convert("png")

masked_hemisphere <- image_mosaic(c(pano_hemisphere, image_mask))

masked_hemisphere
Masked hemispherical canopy image.

We’ll store the masked hemispheres in their own subdirectory. This script makes that directory if it doesn’t already exist and writes our file into it.

if(!dir.exists("./masked_hemispheres/")){
  dir.create("./masked_hemispheres/") # If the subdirectory doesn't exist, we create it.
}

masked_hemisphere_path <- paste0("./masked_hemispheres/", focal_image_name, "hemi_masked.jpg") # Set the filepath for the new image

image_write(masked_hemisphere, masked_hemisphere_path) # Save the masked hemispherical image
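For many images, the single-image steps above can be wrapped in a loop. A minimal sketch, assuming you’ve bundled the crop/resize/distort/rotate/mask steps shown above into a hypothetical helper spherical_to_hemi() that returns the masked image:

```r
library(magick)

# List every raw pano in the input folder.
raw_panos <- list.files("./raw_panos/", pattern = "\\.jpg$", full.names = TRUE)

for (pano_path in raw_panos) {
  # Build the output name with the same convention used above.
  hemi_name <- paste0(sub("\\.[^.]+$", "", basename(pano_path)), "hemi_masked.jpg")

  # spherical_to_hemi() is a hypothetical wrapper around the steps shown above.
  hemi <- spherical_to_hemi(pano_path)

  image_write(hemi, file.path("./masked_hemispheres", hemi_name))
}
```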

At this point, you can use the hemispherical image in any program you’d like, either in R or other software. For this example, I’m going to use Chianucci’s hemispheR package in order to keep this entire pipeline in R.

The next step is to import the image. hemispheR allows for lots of fine-tuning. Check out the docs to learn what all of the options are. These settings most closely replicate the processing I used in my 2021 paper.

fisheye <- import_fisheye(masked_hemisphere_path,
                          channel = '2BG',
                          circ.mask = list(xc = pano_width/2, yc = pano_width/2, rc = pano_width/2),
                          gamma = 2.2,
                          stretch = FALSE,
                          display = TRUE,
                          message = TRUE)
Circular hemispheric plot output by hemispheR’s ‘import_fisheye’ function.

Now, we need to binarize the images, converting all sky pixels to white and everything else to black (at least as close as possible). Again, there are lots of options available in hemispheR. You can decide which settings are right for you. However, I would suggest keeping the zonal argument set to FALSE. The documentation describes this argument as:

zonal: if set to TRUE, it divides the image in four sectors (NE, SE, SW, NW directions) and applies an automated classification separately to each region; useful in case of uneven light conditions in the image

Because spherical panoramas expose each of the 36 component images separately, there is no need for this correction.

I also suggest keeping the export argument set to TRUE so that the binarized images will be automatically saved into a subdirectory named ‘results’.

binimage <- binarize_fisheye(fisheye,
                 method = 'Otsu',
                 # We do NOT want to use zonal threshold estimation since this is done by the camera
                 zonal = FALSE,
                 manual = NULL,
                 display = TRUE,
                 export = TRUE)
Binarized circular hemispheric plot output by hemispheR’s ‘binarize_fisheye’ function.

Unfortunately, hemispheR does not allow for estimation of understory light metrics like through-canopy radiation or Global Site Factors. If you need light estimates, you’ll have to take the binarized images and follow my instructions and code for implementing Gap Light Analyzer.

Assuming all you need is canopy metrics, we can continue with hemispheR and finalize the whole pipeline in R. We estimate canopy metrics with the gapfrac_fisheye() function.

gapfrac <- gapfrac_fisheye(
  binimage,
  maxVZA = 90,
  # Spherical panoramas are equidistant perforce
  lens = "equidistant",
  startVZA = 0,
  endVZA = 90,
  nrings = 5,
  nseg = 8,
  display = TRUE,
  message = TRUE
)
Binarized circular hemispheric plot with azimuth rings and segments output by hemispheR’s ‘gapfrac_fisheye’ function.

Finally, we can estimate the canopy metrics with the canopy_fisheye() function, join those to the metadata from our image, and output our report.

canopy_report <- canopy_fisheye(
  gapfrac
)

output_report <- xmp_data %>%
  bind_cols(
    canopy_report
  ) %>%
  rename(
    GF = x,
    HemiFile = id
  )

glimpse(output_report)
Rows: 1
Columns: 32
$ SourceFile            "./raw_panos/PXL_20230519_164804198.PHOTOSPHERE_small.jpg"
$ Make                  "Google"
$ Model                 "Pixel 4a"
$ FullPanoWidthPixels   8704
$ FullPanoHeightPixels  4352
$ SourcePhotosCount     36
$ Megapixels            0.37845
$ LastPhotoDate         "2023:05:19 16:49:57.671Z"
$ GPSLatitude           41.33512
$ GPSLongitude          -72.91103
$ GPSAltitude           -23.1
$ PoseHeadingDegrees    86
$ HemiFile              "PXL_20230519_164804198.PHOTOSPHERE_smallhemi_masked.jpg"
$ Le                    2.44
$ L                     3.37
$ LX                    0.72
$ LXG1                  0.67
$ LXG2                  0.55
$ DIFN                  9.981
$ MTA.ell               19
$ GF                    4.15
$ VZA                   "9_27_45_63_81"
$ rings                 5
$ azimuths              8
$ mask                  "435_435_434"
$ lens                  "equidistant"
$ channel               "2BG"
$ stretch               FALSE
$ gamma                 2.2
$ zonal                 FALSE
$ method                "Otsu"
$ thd                   116
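To accumulate these one-row reports across images into a single file (mirroring the “canopy_output.csv” convention described above), you can append as you go. A sketch; the append logic is my own:

```r
library(readr)

# Append this image's report to a running CSV. Headers are written only
# the first time, when the file doesn't exist yet.
report_path <- "./canopy_output.csv"
write_csv(output_report, report_path, append = file.exists(report_path))
```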

Be sure to check out my prior posts on working with hemispherical images.

Tips for taking spherical panoramas
https://www.azandisresearch.com/2021/07/16/tips-for-taking-spherical-panoramas/
Fri, 16 Jul 2021 13:44:59 +0000

Spherical panoramas can be tricky to stitch properly in dense forests, especially when canopy elements are very close to the camera (which exaggerates the parallax). Fortunately, the issue has attenuated with every update of the phone software, but it still pays to be careful when capturing your panoramas. So, if you are taking spherical panos in order to estimate canopy metrics, here are my suggestions for improving your captures:

Rotate around the camera

Perfecting panoramas took some practice for me. The first key is to avoid rotating the camera around your body; rather, rotate your body around the camera. I found this post helpful when I was learning. The principle is to keep the entrance pupil of the camera lens at the exact center of the sphere (as much as possible).

Calibrate your compass and gyro

Another tip is to calibrate your compass and gyro regularly (here’s how for Android users). If the compass is off, the phone won’t always know when you’ve made a full rotation.

Take your time

It also helps to go slowly. The camera is constantly processing and mapping the scene to guide the next image, which takes a few seconds.

Be consistent

Even though you can technically take images in any order, I always rotate the same direction around the horizon and then the upper two circles. The lower sphere doesn’t really matter, so I don’t spend much time on it. The key here is that consistency, both for you and the camera, helps.

Use your body as a tripod

You can also try to use your body as a kind of tripod. I used a specific method when I was first learning to take spherical images: I hold my right elbow to my hip bone directly over my right knee and pinch the phone at the top in line with the camera. Then I keep my right foot planted and walk my left foot around the right, using my left hand to tip the phone as needed. After a few dozen spheres, I don’t really need to do that any more. I usually know if a sphere will turn out badly and just restart.

Review your images

It is important to remember that you can (and should) review your images in the field so that you can retake any with issues. You can also get a Google Cardboard viewer for $13, which lets you review the sphere in VR in the field to see the images more clearly.


Over time, you get the hang of capturing spheres to minimize problems. It takes me much less time to get a good photosphere than it does to level and set my exposure with my DSLR, now that I have practiced.

Nevertheless, there are always some artifacts. After looking at lots of images side-by-side, my sense is that the stitching artifacts in the panos, even when fairly large, contribute less error than the combined errors of incorrect orientation, an unleveled camera, poor lens calibration, exposure misspecification, and the much lower pixel count of other methods.


Be sure to check out my other posts on this topic:

Smartphone hemispherical photography

Analyzing hemispherical photos

Taking hemispherical canopy photos

Hardware for hemispherical photos

Hemispherical light estimates

Smartphone hemispherical photography
https://www.azandisresearch.com/2020/12/16/smartphone-hemispherical-photography/
Wed, 16 Dec 2020 23:41:13 +0000

Hemispherical photography is one of those tasks often prefaced by the statement, “How hard could it be?” I’m pretty sure I said something like this at the beginning of my PhD when we wanted to ask how the canopy over wood frog ponds influences their larval ecology.

Now, five years, four blog posts, and uncountable hours later, I can say that measuring canopy structure with hemispherical photos is surprisingly difficult.

One of the biggest hurdles is understanding the equipment and deploying it properly in the field. For me, nothing is more tedious than standing waist-deep in a mucky, mosquito-infested pond while I fiddle with camera exposure settings and fine-tune my leveling device. Add to that the constant fear of dropping a couple thousand dollars of camera and specialized lens into the water, and you get a good sense of my summers.

So, it is with great pleasure that I offer an alternative method for capturing canopy photos that requires nothing but a cell phone, yet produces higher-quality images than a traditional DSLR setup. This new method exploits the spherical panorama function available on most cameras (or in the free Google Street View app). Hemispherical photos can then be easily extracted and remapped from the spheres. You can check out the manuscript at Forestry here (a PDF is available on my Publications page) or continue reading while I walk through the paper below.

From figure 2 of the paper: Comparisons of smartphone spherical panorama hemispherical photographs (SSP HP) (right B and C) to traditional DSLR hemispherical photographs (DSLR HP) (left B and C) captured at the same site. Details of the same subsection of the canopy, indicated by orange boxes, are expanded in C. Binarized images are shown below color images in B and C.

The old way

The most common way to measure canopy structure these days is with hemispherical photographs. These images capture the entire canopy and sky from horizon to horizon. Assuming proper exposure, we can categorize individual pixels as either sky or canopy and run simple statistics to count the number of sky pixels versus canopy pixels, or the number and size of the gaps between canopy pixels. We can also plot a sun path onto the image and estimate the amount of direct and indirect light that penetrates through the canopy. (You can follow my analysis pipeline in this post.)
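The pixel-counting idea can be sketched directly in R. This is an illustration, not the paper's pipeline: the file path is hypothetical, and a real analysis would restrict the count to pixels inside the circular hemisphere mask:

```r
library(magick)

# Read a binarized hemisphere (sky = white, canopy = black; hypothetical path).
bin <- image_read("./results/hemi_binarized.png")

# Extract pixel intensities and take the share of bright (sky) pixels.
# NOTE: this counts the whole frame; a real analysis would first exclude
# pixels outside the circular hemisphere.
px <- as.integer(image_data(bin, channels = "gray"))
gap_fraction <- mean(px > 128)
```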

All of this analysis relies on good hemispherical images. But the problem is that there are many things that can go wrong when taking canopy photos, including poor lighting conditions, bad exposure settings, improperly oriented camera, etc. Another problem is that capturing images of high-enough quality requires a camera with a large sensor, typically a DSLR, a specialized lens, and a leveling device, which can cost a lot of money. Most labs only have one hemispherical photography setup (if any), which means that we sacrifice the number of photos we can take in favor of high-quality images.

The new way

In the past few years, researchers have tried to figure out ways to get around this equipment barrier. Folks have tried eschewing the leveling device, using clip-on hemispherical lenses for smartphones, or using non-hemispherical smartphone images. I even tested using a hemispherical lens attachment on a GoPro.

But, none of these methods really produce images that are comparable to the images from DSLRs, for three reasons:

  1. Smartphone sensors are tiny compared to DSLR sensors, so there is a huge reduction in quality.
  2. Clip-on smartphone lenses are tiny compared to DSLR lenses, so again, there is a huge reduction in optical quality.
  3. Canopy estimates are sensitive to exposure settings, and DSLRs allow for more control over exposure.

The method I developed gets around all of these issues by using multiple, individual cell phone images to stitch together a single hemispherical image. Thus, instead of relying on one tiny cell phone sensor, we are effectively using many tiny cell phone sensors to make up the difference.

Another advantage of creating a hemispherical image out of many images is that each individual image only has to be exposed for a portion of the sky. This avoids the problems of glare and variable sky conditions that plague traditional systems. An added benefit is that smartphone cameras operate in a completely different way than DSLRs, so they are much less sensitive to exposure issues in general.

Smartphones are less sensitive to exposure issues because, unlike DSLRs that capture a single instance on the sensor when you hit the shutter button, smartphone cameras use computational photography techniques that blend the best parts of many images taken in short succession. You may not realize it, but your smartphone is constantly taking photos as soon as you turn it on (which makes sense since you can see the scene from the camera on your screen). The phone stores about 15 images at a time, constantly dumping the older versions out of temporary memory as updated images pour in. When you hit the button to take a picture, your phone then automatically blends the last few images with the next few images. The phone’s software selects the sharpest pixels with the most even contrast and color from each image and then composites those into the picture presented to you. With every new software update, the algorithms for processing images get better and better. That’s why modern cell phones are able to take photos that can compete with mid-range DSLRs despite the limitations of their tiny sensors.

So, if each phone photo is essentially a composite of 15 images, and we then stitch 18 of those composite images into a hemispherical image, we are effectively comparing a sensor the size of 270 individual phone camera sensors to the DSLR sensor.

The best part is that there is already software that can do this for us via the spherical panorama feature included with most contemporary smartphone cameras. This feature was introduced in the Google Camera app back in 2012 and iOS users can access the feature via the Google StreetView app. It is incredibly simple to use.

Update: Check out my post on tips for taking spherical panoramas

Once you’ve taken a spherical panorama, it is stored on your phone as a 2D JPEG in equirectangular format. A key feature of the photo sphere software is that it utilizes your phone’s gyroscope and spatial-mapping abilities to automatically level the horizon. This is advantageous for two reasons. First, it means we can ditch the tedious leveling devices. Second, it means that the equirectangular image can be perfectly split between the upper and lower hemispheres. We simply have to crop the top half of the rectangular image and remap it to polar coordinates to get a circular hemispherical image.

Figure 1 from Arietta (2020): Spherical panoramas (A) are stored and output from smartphones as 2D images with equirectangular projection (B). Because spherical panoramas are automatically leveled using the phone gyroscope, the top half of the equirectangular image corresponds to the upper hemisphere of the spherical panorama. The top portion of the equirectangular image (B) can then be remapped onto the polar coordinate plane to create a circular hemispherical photo (C). In all images, zenith and azimuth are indicated by Θ and Φ, respectively.

How to extract hemispherical images from spherical panoramas

UPDATE: Please see my latest post to process spherical images with R.

Command line instructions

If you are proficient with the command line, the easiest way to extract hemispherical images from photo spheres is to use ImageMagick. After you download and install the program you can run the script below to convert all of your images with just a couple lines of code.

cd "YOUR_IMAGE_DIR"

magick mogrify -level 2%,98% -crop 8704x2176-0-0 -resize "8704x8704!" -virtual-pixel horizontal-tile -background black +distort Polar 0 -flop -flip *jpg

You may need to make a few modifications to the script for your own images. The -crop 8704x2176-0-0 flag crops the top half of the image (i.e., the upper hemisphere); adjust it to the full width of your panorama by one quarter of that width (for a standard 2:1 equirectangular pano, that is half its height). The -resize "8704x8704!" flag resizes the image into a square in order to apply the polar transformation; adjust it to the full width of your panorama in both dimensions.

Note that the code above will convert and overwrite all of the .jpg files in your folder to hemispherical images. I suggest that you practice on a folder of test images or a folder of duplicates to avoid any mishaps.
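Those crop and resize geometries can be derived from the pano width. A small helper of my own (a sketch, assuming a standard 2:1 equirectangular pano where height = width / 2):

```r
# Build the -crop and -resize geometry strings for a 2:1 equirectangular pano.
pano_geometries <- function(width) {
  w <- as.integer(width)
  list(
    crop   = sprintf("%dx%d-0-0", w, w %/% 4L),  # top half = upper hemisphere
    resize = sprintf("%dx%d!", w, w)             # square, for the polar remap
  )
}

pano_geometries(8704)
# crop: "8704x2176-0-0", resize: "8704x8704!"
```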

GUI instructions

If you are intimidated by the command line, extracting hemispherical images from photo spheres is also easy with GIMP (I used GIMP because it is free, but you can follow the same steps in Photoshop).

Update: You can also try out this cool web app developed by researchers in Helsinki which allows you to upload spherical panoramas from your computer or phone and automatically converts them to hemispherical images that you can download. However, I would not suggest using this tool for research purposes because the app fixes the output resolution at 1000p, so you lose all of the benefits of high-resolution spherical images.

Spherical panoramas are stored as 2D equirectangular projections from which hemispherical images can be extracted in GIMP.

First, crop the top half of the rectangular photo sphere.

Crop the top half of the panorama.

Second, scale the image into a square. I do this by stretching the image so that the height is the same size as the width. I go into why I do this below.

Scale the image into a square.

Third, remap the image to a polar projection. Go to Filter > Distort > Polar Coordinates

Settings for remapping the panorama into a polar projection.
Once, mapped onto polar coordinates, the image is now a circular hemispherical image.

Fourth, I found that increasing the contrast slightly helps the binarization algorithms find the correct threshold.

All of these steps can be automated in batch with the BIMP plugin (a BIMP recipe is available in the supplemental files of the paper). This can also be automated from the command line with ImageMagick (see the scripts above and in the supplemental materials of the paper).

The result is a large image with a diameter equal to the width of the equirectangular sphere. Because we are essentially taking columns of pixels from the rectangular image and mapping them into “wedges” of the circular image, we will always need to downsample pixels toward the center of the circular image. Remember that each step out from the center of the image is the same as each step down the rows of the rectangular image. So, the circumference of every ring of the circular image is generated from a row of pixels that is the width of the rectangular image.

With a bit of geometry, we can see that the circumference matches the width of our rectangular image (i.e., 1:1 resolution) at a zenith angle of 57.3 degrees. Zenith rings below 57.3° will be downsampled, and those above will be scaled up, with new pixels interpolated into the gaps. Conveniently, 57.3 degrees is 1 radian. The area within 1 rad, from zenith 0° to 57°, is important for canopy estimates, as gap fraction measurements in this portion of the hemisphere are insensitive to leaf inclination angle, allowing for estimates of leaf area index without accounting for leaf orientation.

Thus, we retain most of our original pixel information within this critical portion of the image, but it does mean that we are expanding the pixels (inflating the nominal resolution) closer to the horizon. I tested the impact of resolution directly in my paper and found almost no difference in canopy estimates, so it is probably okay to downscale images for ease of processing if high resolution is not needed.

The hemispherical image produced can now be analyzed in any pipeline used for DSLR hemispherical images. You can see the pipeline I use in this post.

How do images from smartphone panoramas compare to DSLR?

In my paper, I compared hemispherical photos taken with a DSLR against those extracted from a spherical panorama, taking consecutive photos at 72 sites. Overall, I found close concordance between the two methods in measures of canopy structure (canopy openness) and light transmittance (global site factor) (R2 > 0.9). However, the smartphone images were of much greater clarity and therefore retained more detailed canopy structure that was lost in the DSLR images.

Figure 4 from the paper: Difference in canopy structure and light environment estimates between reference (standard DSLR HP) and full resolution SSP HP (dark orange), low resolution SSP HP downsampled to match the standard DSLR resolution (light orange), fisheye HP (blue), and DSLR HP with exposure adjusted from +5 to -5 (light to dark). SSP HP images were generated from spherical panoramas taken with Google Pixel 4a and Google Camera. Fisheye HP images were simulated from smartphone HP for two intersecting 150° FOV images from a Pixel 4a. DSLR HP were captured with Canon 60D and Sigma 4.5mm f2.8 hemispherical lens.

Although the stitching process occasionally produces artifacts in the image, the benefits of this method far outweigh the minor problems. Care when taking the panorama images, as well as ever-improving software, will help to minimize imperfect stitching.

Figure 2 from the paper: Comparisons of smartphone spherical panorama hemispherical photographs (SSP HP) (right B and C) to traditional DSLR hemispherical photographs (DSLR HP) (left B and C) captured at the same site. Details of the same subsection of the canopy, indicated by orange boxes, are expanded in C. Binarized images are shown below color images in B and C. Image histograms differ in the distribution of luminance values in the blue color plane (A). In panel E, a section of the canopy from full resolution SSP HP (left), downsampled SSP HP (middle), and DSLR HP (right) is further expanded to demonstrate the effect of image clarity on pixel classification. An example of an incongruous artifact resulting from misalignment in the spherical panorama is outlined in blue in A and expanded in D.

Overall, this method is not only a good alternative, it is probably even more accurate than traditional methods because of the greater clarity and robustness to variable exposure. My hope is that this paper will help drive more studies in the use of smartphone spheres for forest research. For instance, 360 horizontal panoramas could be extracted for basal measurement or entire spheres could be used to spatially map tree stands. The lower hemisphere could also be extracted and used to assess understory plant communities or leaf litter composition. Researchers could even enter the sphere with a virtual reality headset in order to identify tree species at their field sites from the comfort of their home.
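For example, grabbing the lower hemisphere instead is just a change to the crop. A sketch using the magick package (the input path is illustrative; gravity picks the bottom half):

```r
library(magick)

pano <- image_read("./raw_panos/example_pano.jpg")  # illustrative path

# Keep the bottom half of the equirectangular image (the lower hemisphere);
# the same resize and polar-distort steps used for the upper hemisphere
# then apply unchanged.
lower_half <- image_crop(pano, geometry_size_percent(100, 50), gravity = "south")
```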

Mostly, I’m hopeful that the ease of this method will allow more citizen scientists and non-experts to collect data for large-scale projects. After all, this method requires no exposure settings, no additional lenses, and is automatically levelled. The geolocation and compass heading can even be extracted from the image metadata to automatically orient the hemispherical image and set the location parameters in analysis software. Really, anyone with a cell phone can capture research-grade spherical images!

 

Be sure to check out my other posts about canopy research that cover the theory, hardware, field sampling, and analysis pipelines for hemispherical photos, and my tips for taking spherical panoramas.

Analyzing Hemispherical Photos https://www.azandisresearch.com/2019/02/03/analyzing-hemispherical-photos/ Mon, 04 Feb 2019 02:06:15 +0000 http://www.azandisresearch.com/?p=1205 Now that you’ve got the theory, equipment, and your own hemispherical photos, the final steps are processing the photos and generating estimates of canopy and light values.

We’ll be correcting photos and then converting to a binary image for analysis just like those in the slider above.

There are many paths to get light estimates from photos. Below, I’ve detailed my own pipeline which consists of these main steps:

  • Standardizing exposure and correcting photos
  • Binarization
  • File conversion
  • Model configuration
  • Analyzing images

For this pipeline, I use Adobe Lightroom 5 for standardizing the photos and masking, ImageJ with Hemispherical 2.0 plugin for binarization and image conversion, Gap Light Analyzer (GLA) for estimating light values, and AutoHotKey for automating the GLA GUI. All of these programs, with the exception of Lightroom, are freely available. You could easily replicate all of the steps I show for Lightroom in GIMP, an open-source graphics program.

There are probably stand-alone proprietary programs out there that can analyze hemispherical photos in a more streamlined manner within a single program, but personally, I’m willing to sacrifice some efficiency for free and open-source methods.

1. Standardizing photos:

With the photos on your computer, the first step is to standardize the exposure. In theory, exposure should be standardized in the field by calibrating reference exposure to the open sky (as outlined in my previous post). However, when taking many photos throughout the day, or on days with variable cloud cover, there will be variation in the photos. Similarly, snowy days or highly reflective vegetation can contribute anomalies in the photos. Fortunately, you can reduce much of this variability in post-processing adjustments. We will apply some of these adjustments across the entire image and others we will need to correct targeted elements within the image.

Across the entire image, we can right-shift the exposure histogram so that the brightest pixels in the sky are always fully white. This standardizes the sky to the upper end of the brightness distribution in every image. So, even if we capture one photo with relatively dark clouds and another with relatively bright clouds over the course of a day, after standardizing, the difference between the white sky and dark canopy will be consistent.
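If you would rather script this step than click through Lightroom or GIMP, the right-shift amounts to a linear rescale of each image so its brightest pixel lands at pure white. Here is a minimal numpy sketch (illustrative only, and not part of my actual pipeline; a real image would be a 2-D channel rather than these toy arrays):

```python
import numpy as np

def right_shift_exposure(channel: np.ndarray) -> np.ndarray:
    """Linearly rescale an 8-bit channel so its brightest pixel is 255.

    This mimics right-shifting the exposure histogram: the sky is pushed
    to the top of the brightness range regardless of how bright the
    clouds happened to be on a given day.
    """
    vals = channel.astype(np.float64)
    peak = vals.max()
    if peak == 0:
        return channel  # fully black input; nothing to rescale
    return np.clip(vals * (255.0 / peak), 0, 255).astype(np.uint8)

# A "dark clouds" capture and a "bright clouds" capture of the same scene
# (toy pixel values: canopy, mid-tone, sky)...
dark = np.array([40, 80, 180], dtype=np.uint8)
bright = np.array([51, 102, 229], dtype=np.uint8)
# ...both end up with the sky pixel at 255 after standardization.
print(right_shift_exposure(dark))    # sky pixel -> 255
print(right_shift_exposure(bright))  # sky pixel -> 255
```

After rescaling, the two captures produce the same standardized values, which is exactly the consistency the binarization step relies on.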

Example using a local adjustment brush to recover the effect of sun flare.

If elements of the image are highly reflective, an additional step to mask these areas is required to avoid underestimating canopy closure. For instance, extremely bright tree bark or snow on the ground should be masked. I do this by painting a -2EV exposure brush over the area to ensure it will be converted to black pixels in the binarization step. Unfortunately, this must be done by hand on each photo.

Example using a local adjustment brush to mask reflective areas in the image.

Sunspots can also be partially corrected, but again, this requires hand manipulation of each photo. I find that a local exposure brush that depresses the highlights and increases the contrast will recover pixels that have been overexposed by sun flare, but only to an extent. It is best to avoid flare in the first place (see my last post for tips on avoiding flare).

2. Binarization

Binarization is the process of converting all color pixels in an image to black (canopy) or white (sky). The most basic method of conversion is to set a defined brightness threshold above which a pixel is converted to white and below which it is converted to black. This type of simple thresholding suffers when pixels are ambiguous, especially at the edges of solid canopy objects. To avoid these problems, a number of algorithms have been developed that take a more sophisticated approach. I won't go into details here, but I suggest reading Glatthorn and Beckschäfer (2014).
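To make the thresholding idea concrete, here is a minimal sketch of naive, fixed-threshold binarization in Python with numpy (illustrative only; the adaptive algorithms discussed here exist precisely because this simple approach mishandles ambiguous edge pixels):

```python
import numpy as np

def binarize_fixed(blue_plane: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Naive global thresholding: pixels brighter than `threshold` become
    white (sky = 255), all others become black (canopy = 0).

    An edge pixel at value 127 vs 129 flips between classes, which is why
    adaptive algorithms are preferred for canopy edges.
    """
    return np.where(blue_plane > threshold, 255, 0).astype(np.uint8)

# A toy 2x3 blue plane: dark canopy on the left, bright sky on the right.
plane = np.array([[30, 120, 200],
                  [10, 129, 250]], dtype=np.uint8)
print(binarize_fixed(plane))
# Canopy closure estimate = share of black pixels:
print((binarize_fixed(plane) == 0).mean())
```

Note how the ambiguous pixels at 120 and 129 straddle the threshold and land in opposite classes even though they are nearly identical in tone.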

Examples of pixels that pose significant problems for thresholding and binarizing images. From Glatthorn & Beckschäfer 2014.

I explored a number of binarization programs while developing my project, including SideLook 1.1 (Nobis, 2005) and the in-program thresholding capability of Gap Light Analyzer (Frazer et al. 2000). However, both of these programs fell short of my needs. In the end, I found the Hemispherical 2.0 plugin (Beckschäfer 2015) for ImageJ to be most useful because it easily handles batch processing.

You can download the Hemispherical 2.0 plugin and manual here (scroll to the bottom of the page under “Forest Software”). This plugin uses a “Minimum” thresholding algorithm on the blue color plane of the image (skies are blue, so the blue plane yields the highest contrast). The threshold is based on a histogram populated from a moving window of neighboring pixel values to ensure continuity of canopy edges (see Glatthorn & Beckschäfer 2014 for details).

Once the plugin is installed, you can run the binarization process in batch on a folder containing all of your adjusted hemispherical photo files. I've found that the program lags if I try to batch process more than 100 photos at a time, so if you are processing many photos, I suggest splitting the dataset into multiple folders of fewer than 100 files each.

Example of a standard registration image. Naming this file starting with “000” or “AAA” will ensure it is the first image processed in the folder.

Before you begin the binarization and analysis process, I suggest that you make a registration image. The first step in both of the binarization and analysis programs is defining the area of interest in the image that excludes the black perimeter. This can be hard to do depending on which photo is first in the stack (for instance, if the ground is very dark and blends into the black perimeter). A registration image allows you to easily and consistently define your area of interest. I like to make a standardized registration image for all of my camera/lens combinations.

The easiest way to make a registration image is to simply take a photo of a white wall and increase the contrast in an editing program until you have hard edges. Or, you can overlay a solid color circle that perfectly matches the bounds of the hemisphere (as I’ve done in the photo above). I also like to include a demarcated point that indicates a Northward orientation (this is at the top of the circle, in my case). I use this mark as a starting point for manually dragging the area-of-interest selection tool in both GLA and Hemispherical 2.0.

3. File Conversion

Hemispherical 2.0 outputs binarized images as TIFF files, but GLA can only accept bitmap images. So, we need to convert the binary images to bitmap files. This can easily be accomplished in ImageJ (from the Menu go to Process > Batch > Convert).
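If you prefer scripting the conversion, the same batch TIFF-to-BMP step can be sketched in Python with the Pillow imaging library (an illustrative alternative that assumes Pillow is installed; the ImageJ menu route above works just as well):

```python
from pathlib import Path

from PIL import Image  # Pillow; install with `pip install Pillow`

def convert_tiffs_to_bmps(src_dir: str, dst_dir: str) -> int:
    """Convert every .tif/.tiff in src_dir to a .bmp in dst_dir.

    Returns the number of files converted.
    """
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    count = 0
    for tif in sorted(Path(src_dir).glob("*.tif*")):
        # Pillow infers the output format from the .bmp extension.
        Image.open(tif).save(out / (tif.stem + ".bmp"))
        count += 1
    return count
```

Point it at the folder of binarized TIFFs from Hemispherical 2.0 and feed the resulting BMP folder to GLA.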

4. Model configuration

Now that our files are processed, we can finally move to GLA and compute our estimates of interest.

The first step is to establish the site specific parameters in the “Configure” menu. I’ll walk through these parameters tab-by-tab where they pertain to this pipeline. You can read more about each setting in the GLA User Manual.

“Image” tab:

In the “Registration” box, define the orientation of the photo. If you used a registration image with a defined North point, select “North” for the “Initial Cursor Point.” Select if your North point in the photo is geographic or magnetic. If the latter, input the declination.

In the “Projection distortion” box, select a lens distortion or input custom values (see my previous post for instructions on computing these for your lens).

“Site” tab:

Input the coordinates of the image in the “Location” box. Note that we will use a common coordinate for all photos in the batch. This only works if there is not much distance between photos (I use 10km as a rule of thumb, but realistically, even 100km probably makes very little difference). If your sites are far apart, remember to set new coordinates for each cluster of neighboring sites.

If you ensure that your images point directly up as I suggest in my previous post, you should select “Horizontal” in the “Orientation” box. Note that in this case, “Horizontal” means “horizontal with respect to gravity” not “horizontal to the ground” which might not be the same thing if you are standing on a slope.

The “Topographic Shading” setting is not important if our only interest is estimating light. If you are interested in the canopy itself, this tool can be used to exclude topography that would otherwise be misclassified: for instance, a tall mountain visible behind the canopy that the program would count as a canopy element.

“Resolution” tab:

In the “Suntrack” box, it is important to carefully think about and set your season start and end dates. If you decide to consider other seasonal timespans later, you’ll need to rerun all of these analyses again. Remember that if you are estimating light during spring leaf-out or fall senescence, you’ll need to run separate analyses for each season with corresponding images.

“Radiation” tab:

Assuming that you did not directly measure radiation values alongside your images, we need to model these parameters. These are probably the most difficult parameters to set because they will require some prior analysis from other data on radiation for your site.

The National Solar Radiation Database (NSRDB) has many observation locations across the U.S. Most of these have long-term data with low uncertainty in daily estimates.

You can download radiation values from local observation points from the National Solar Radiation Database website. From these data, you can calculate parameter estimates for the Cloudiness index, Beam fraction, and Spectral fraction. Remember to use only data from the seasonal window that matches the seasonal window of interest in your photos.

Cloudiness index (Kt):

This is essentially a measure of the fraction of the total radiation entering the atmosphere (Ho) that makes it to the ground surface (H) (i.e. is not reflected by clouds):

Kt = (H + 0.01) / (Ho + 0.01)

where H is the Modeled Global Horizontal radiation (Glo Mod (W/m^2)) and Ho is the Hourly Extraterrestrial radiation (ETRN (W/m^2)), with 0.01 added to each to avoid dividing by zero.
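As a rough sketch of the Kt calculation in plain Python (the radiation values below are hypothetical; H corresponds to the NSRDB "Glo Mod" column and Ho to the "ETRN" column of the hourly data):

```python
def cloudiness_index(glo_mod_wm2: float, etrn_wm2: float) -> float:
    """Cloudiness index Kt = (H + 0.01) / (Ho + 0.01).

    H  = modeled global horizontal radiation at the surface (W/m^2)
    Ho = hourly extraterrestrial radiation (W/m^2)
    The 0.01 offsets avoid division by zero at night, when ETRN = 0.
    """
    return (glo_mod_wm2 + 0.01) / (etrn_wm2 + 0.01)

# One reasonable summary for a seasonal window is the mean Kt over its
# daytime hours, e.g. with hypothetical (H, Ho) pairs:
hours = [(450.0, 900.0), (300.0, 850.0), (600.0, 1000.0)]
kt = sum(cloudiness_index(h, ho) for h, ho in hours) / len(hours)
print(round(kt, 3))
```

Whatever summary you choose (hourly mean, daily mean), apply it consistently across sites and report it alongside your other model parameters.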

Beam fraction (Hb/H):

Beam fraction is the ratio of radiation reaching the ground as direct beam (Hb) to the total radiation reaching the ground (H). As you can imagine, as cloudiness increases, direct beam radiation decreases because it is diffused by the clouds; thus, we can estimate beam fraction as a function of cloudiness (Kt).

The constants in this relationship, -3.044 and 2.436, were empirically derived by Frazer et al. (1999) (see their paper for details and extensions).

Spectral fraction (Rp/Rs):

Spectral fraction is the ratio of radiation in photosynthetically active wavelengths (Rp) to all broadband shortwave radiation (Rs). Frazer et al. (1999) provide an equation for calculating the spectral fraction as a function of the Cloudiness index (Kt).

Frazer et al. (1999) provide guidance on computing this as a monthly mean, but for most analyses, I think this overall average is probably fine.

The Solar constant is not region-specific, so the default value should be used.

The Transmission coefficient basically describes how much solar transmittance is expected on a perfectly clear day and is mainly a function of elevation and particulate matter in the air (i.e. dusty places have lower clear sky transmission). Frazer et al. (1999) suggest that this coefficient ranges from 0.4 to 0.8 globally and between 0.6 and 0.7 for North America. Therefore, they use a default of 0.65. Unless you have good reason to estimate your own transmission values, this default should be fine.

It can be very intimidating to pick values for these parameters. To be honest, I suspect that most practitioners never even open up the configuration menu, and instead, simply use the default parameters of the program. In the end, no matter what values you use to initialize your model, the most critical step is recording the values you chose and justifying your choice. As long as you are transparent with your analysis, others can repeat it and adjust these parameters in order to make direct comparisons with their own study system. Note: you can easily save this configuration file within GLA, too.

5. Analyzing images

Once GLA is configured, you will need to register the first image. If you’ve named your registration image something like “AA000” as I have, it will be the first in your stack. Place the cursor on the known north point and drag to fit the circumference. Be sure to select the “Fix Registration for Next Image” button and also record the coordinate values for the circle.

Now that the first image is registered, we can automate the analysis process for the rest of the images. Unfortunately, GLA does not have a command line version and the GUI does not offer a method of batch processing. So, I wrote a script for the AutoHotKey program that replicates keyboard and mouse inputs. You can download AutoHotKey here. The free version is fine. For best operation, I recommend the 32-bit version.

I’ve included my AutoHotKey script below. You will need to use a text editor to edit the first line of the macro (line 17) with the file path to your image folder. Save the file as an AutoHotKey file (.ahk) and run it as administrator. Now you can pull up the GLA window and use the F6 shortcut to start the macro.

You should see the program move your cursor to open windows and perform operations, like in the video above. Since AutoHotKey is replicating mouse and keyboard movements, any input from you will interrupt the macro. I suggest not touching your computer until the program is done (go get a cup of coffee while your new robot does all of your analysis for you!). If you must use your computer concurrently, you will need to run GLA and AutoHotKey inside a virtual machine.


#NoEnv
SetWorkingDir %A_ScriptDir%
CoordMode, Mouse, Window
SendMode Input
#SingleInstance Force
SetTitleMatchMode 2
#WinActivateForce
SetControlDelay 1
SetWinDelay 0
SetKeyDelay -1
SetMouseDelay -1
SetBatchLines -1

F6::
Macro1:
Loop, Files, ENTER_YOUR_FILEPATH_HERE*.*, F
{
WinMenuSelectItem, Gap Light Analyzer, , File, Open Image..
WinWaitActive, Gap Light Analyzer ahk_class #32770
Sleep, 333
ControlClick, Button1, Gap Light Analyzer ahk_class #32770,, Left, 1,  NA
Sleep, 10
WinWaitActive, Open file... ahk_class #32770
Sleep, 333
ControlSetText, Edit1, %A_LoopFileName%, Open file... ahk_class #32770
ControlClick, Button2, Open file... ahk_class #32770,, Left, 1,  NA
Sleep, 1000
Sleep, 1000
WinWaitActive, Gap Light Analyzer  ; Start of new try
Sleep, 333
WinMenuSelectItem, Gap Light Analyzer, , Configure, Register Image
WinWaitActive, Gap Light Analyzer  ahk_class ThunderRT5MDIForm
Sleep, 333
ControlClick, ThunderRT5CommandButton1, Gap Light Analyzer  ahk_class ThunderRT5MDIForm,, Left, 1,  NA
Sleep, 10
WinMenuSelectItem, Gap Light Analyzer, , View, OverLay Sky-Region Grid
WinMenuSelectItem, Gap Light Analyzer, , View, OverLay Mask
WinMenuSelectItem, Gap Light Analyzer, , Image, Threshold..
WinWaitActive, Threshold ahk_class ThunderRT5Form
Sleep, 333
ControlClick, ThunderRT5CommandButton1, Threshold ahk_class ThunderRT5Form,, Left, 1,  x34 y13 NA
Sleep, 10
WinMenuSelectItem, Gap Light Analyzer, , Calculate, Run Calculations..
WinWaitActive, Calculations ahk_class ThunderRT5Form
Sleep, 333
ControlClick, ThunderRT5CommandButton1, Calculations ahk_class ThunderRT5Form,, Left, 1,  NA
Sleep, 10
WinWaitActive, Calculation Summary Results ahk_class ThunderRT5Form
Sleep, 333
ControlSetText, ThunderRT5TextBox4, %A_LoopFileName%, Calculation Summary Results ahk_class ThunderRT5Form
ControlClick, ThunderRT5CommandButton2, Calculation Summary Results ahk_class ThunderRT5Form,, Left, 1,  NA
Sleep, 10
Sleep, 5000
}
MsgBox, 0, , Done!
Return

F8::ExitApp

F12::Pause

When the program has looped through all of the photos, a dialogue box will pop up with “Done!”

The data output appears in a separate sub-window within GLA, usually minimized and hidden behind the registration image window. You can save the data, but GLA defaults to saving the output as a semicolon-delimited text file. This is easily converted to a CSV or Excel worksheet.
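If you would rather not open Excel for the re-delimiting, it can be scripted with Python's standard csv module (an illustrative sketch, not part of my pipeline; the column names in the example are made up):

```python
import csv

def gla_txt_to_csv(src_path: str, dst_path: str) -> None:
    """Re-delimit a semicolon-separated text file as a standard CSV.

    Using the csv module (rather than str.replace) keeps any quoted
    fields containing semicolons or commas intact.
    """
    with open(src_path, newline="") as fin, \
         open(dst_path, "w", newline="") as fout:
        writer = csv.writer(fout)
        for row in csv.reader(fin, delimiter=";"):
            writer.writerow(row)
```

Run it once on the saved GLA output file and the result opens cleanly in Excel, R, or pandas.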

And you’re done!

Be sure to check out my other posts about canopy research that cover the theory, hardware, field sampling, and analysis pipelines for hemispherical photos.

Also, check out my new method of canopy photography with smartphones and my tips for taking spherical panoramas.


References

Beckschäfer, P. (2015). Hemispherical_2.0: Batch processing hemispherical and canopy photographs with ImageJ – User manual. doi:10.13140/RG.2.1.3059.4088.

Frazer, G. W., Canham, C. D., and Lertzman, K. P. (2000). Technology Tools: Gap Light Analyzer, version 2.0. The Bulletin of the Ecological Society of America 81, 191–197.

Frazer, G. W., Canham, C. D., and Lertzman, K. P. (1999). Gap Light Analyzer (GLA): Imaging software to extract canopy structure and gap light transmission indices from true-color fisheye photographs, users manual and documentation. Burnaby, British Columbia: Simon Fraser University Available at: http://rem-main.rem.sfu.ca/downloads/Forestry/GLAV2UsersManual.pdf.

Glatthorn, J., and Beckschäfer, P. (2014). Standardizing the protocol for hemispherical photographs: accuracy assessment of binarization algorithms. PLoS One 9, e111924.

Nobis, M. (2005). Program documentation for SideLook 1.1 – Imaging software for the analysis of vegetation structure with true-colour photographs. Available at: http://www.appleco.ch/sidelook.pdf.

Wilcox, S. (2012). National Solar Radiation Database 1991–2010 Update: User’s Manual. National Renewable Energy Laboratory Available at: https://www.nrel.gov/docs/fy12osti/54824.pdf.

Taking hemispherical canopy photos https://www.azandisresearch.com/2018/07/24/taking-hemispherical-canopy-photos/ Tue, 24 Jul 2018 19:30:29 +0000 http://www.azandisresearch.com/?p=640 Be sure to check out the previous two posts:

1. Hemispherical Light Estimates

2. Hardware for Hemispherical Photos

Once the theory and equipment are taken care of, you are ready to go out into the field and collect data. This post will cover the when, where, and how of shooting hemispherical photos. The next and final post deals with analyzing the photos you’ve captured.

When:

Hemispherical photos are very sensitive to lighting conditions. Because each photo measures an entire hemisphere of sky at once, it is important that the background (i.e. the sky) is as standardized as possible so that the canopy can be accurately distinguished without bias.

To imagine the wild exposure gradient across a hemispherical lens, wait until a clear sunset and walk out into an open field. Stare toward the setting sun for a few seconds, then turn 180 degrees and try to make out the horizon. Even though our eyes are extremely well-equipped to adjust quickly to different lighting, you'll probably have a hard time making out the dark horizon after looking at the bright sunset. Unlike your eye, which continuously adjusts to light conditions even at these extremes, a camera is stuck in one setting. Even the best camera sensors are limited in their dynamic range, so we need to take photos at times when lighting across the entire hemisphere fits within a narrow range.

The best time to shoot is on uniformly overcast days. Overcast cloud cover yields a nice homogeneous white background that makes strong contrast to the edges of leaves and branches.

Overcast days provide a nice standard background on which the canopy structure can be easily distinguished.

If cloudy days are not in the forecast, another option is to wait until dawn or dusk, when the sun is just below the horizon and the sky is evenly lit. The problem is that this allows only a very short window for shooting.

Sometimes, we can’t help but take photos on sunny days. Before I talk about strategies for dealing with difficult exposures, I want to explain some of the problems direct sunlight can precipitate in light estimation methods: blown-out highlights, flare, and reflection.

Overexposed highlights

Overexposed or “blown-out” highlights are an issue of exposure settings and sensor limitations. When shooting into light, individual sensor cells can only record light up to a threshold. In my last post, I compared light collecting on camera sensor cells to an array of buckets in a field collecting raindrops. Using that analogy, think of the sensor threshold as the rim of the bucket. At some point, enough rain collects that a bucket overfills; beyond that point, you can record no more information about relative rainfall between buckets. In the camera, eventually too much light falls incident on a portion of the sensor, and those pixels max out and are recorded as fully white (i.e. no information is coded in those cells). This leads to an underestimate of canopy closure because some pixels occupied by canopy will look like white sky after binarization. We can try to correct our exposure to ensure we do not truncate those bright pixels, but compared to direct sun, blue sky directly overhead is relatively dark even on sunny days, so this will almost certainly lead to a loss of information on the dark end of the spectrum, and we will then overestimate canopy closure.
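The bucket analogy can be made concrete with a tiny simulation (the luminance values are illustrative, not measurements):

```python
import numpy as np

# True scene luminance in arbitrary linear units: dark canopy, open shade,
# a sunlit branch, and two very bright sky regions near the sun.
scene = np.array([50, 200, 240, 900, 1500], dtype=np.float64)

# An 8-bit sensor clips everything above its full-well threshold.
sensor_max = 255
recorded = np.minimum(scene, sensor_max)
print(recorded)  # the 900 and 1500 regions both record as 255

# After clipping, no binarization threshold can separate a bright,
# sunlit canopy element (true luminance 900) from open sky (1500):
# both were recorded as 255 and will be classified as sky.
assert recorded[3] == recorded[4]
```

Everything above the sensor ceiling collapses to a single value, which is exactly the information loss that biases the canopy closure estimate.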

Overexposure will blow out the brightest parts of the image and the result will be that solid canopy elements will be considered open sky in downstream analysis. This photo is uniformly overexposed, but this can also occur if the range of exposure across the hemisphere is greater than the dynamic range of the sensor.

Unfortunately, there are no easy ways to deal with wide exposure ranges (other than avoiding sunny days). One solution is to shoot in RAW format which retains more information per pixel. With RAW photos one can attempt to recover highlights and boost shadows to some extent, but because the next processing step is binarization, this will only be effective if you can sufficiently expand the pixel value range at the binarization threshold. This also entails hand-calibrating each image, which reduces standardization and replicability, and may take a lot of time if you need to process many photos.

Flare

Flare is another problem and it emerges for a couple of reasons.

Hazy-looking “veiling flare” occurs when light scatters inside the glass optics of your lens. In normal photography, it is most prevalent when the sun is oblique to the front lens surface and light gets “trapped” bouncing around inside of the glass. It can also look like a halo of light bleeding into a solid object or streaks of light radiating from a point of light depending on your aperture settings (this is called Fraunhofer diffraction and can make for very cool effects…just not in hemispherical photos!). When these streaks appear to overlay solid canopy objects in our photos it will lead to underestimation of closure.

Flare from a light source can lead binarization algorithms to mischaracterize solid canopy pixels as open sky, as seen in the two images on the right.

Ghosting flare looks like random orbs of light in photos. It occurs when light reflecting off the interior surfaces of the optics lands close enough to focus on the sensor. Hemispherical lenses are incredibly prone to this type of flare because of the short focal distance of wide-angle lenses.

Ghost flare, when sufficiently bright, will be considered a canopy gap in downstream estimation.

If photos must be taken in sunlight, one alternative is to at least block the direct beams by positioning the sun behind a natural sunblock or fashioning a shield. I've never tried the shield option myself, but I've seen photos of folks who have affixed a small disk on a semi-rigid wire on their tripod. The disk can be positioned so that it blocks only the sun. There are problems with this option, however: the sunshield will be counted as canopy in the light calculations, which will bias the estimates, and if you instead mask the blocked region out of the area of interest, that portion of the sky is simply lost from the analysis. Another option is to spot-correct flare in each photo. This is most effective with RAW photos and can be accomplished in photo-editing applications by using a weak adjustment brush to reduce highlights, boost shadows, and increase contrast directly over the flare. Again, I don't recommend editing individual photos, but sometimes this is the only option.

Reflection

Finally, direct sun reflecting off solid surfaces can lead to mischaracterization of pixels and an overestimate of openness. In this case, objects opposite the sun can be lit so brightly that they outshine the sky background. This is very common in forests dominated by light-barked trees like aspen and birch, and it also occurs just about any time there is snow in the frame, regardless of the lighting conditions. The only solution is darkening the solid objects in a photo-editing program. It is a painstaking task and increases the error in the final estimate, but it is sometimes necessary.

Reflection can make solid objects in the canopy appear lighter than the sky. Although this image is properly exposed, the light trunk of the tree reflects enough light to be characterized as sky by the binarization algorithm.

Where:

Next you must decide where you want to take photos. The answer to this will depend on your research question. We can break it down into spatial/temporal sampling scale and relative position.

Hemispherical photos can fit into any standard spatial sampling scheme (e.g. transects, grid, opportunistic, clustered, etc.) depending on your downstream statistical analysis. However, because hemispherical photos capture such a wide angle of view, you must be careful of any analysis that assumes independence of observations if the viewshed overlaps between photos.

Generally, when we think about spatial sampling, we think in two dimensions (like in the figure below). However, it is also important to remember that canopy closure estimates integrate sun angle, so it is critical that an even sampling scheme considers the third spatial dimension and includes a representative sample of topographic aspect.

Example spatial sampling schemes from Collender et al 2015.

There is no perfect sampling strategy for any given project. To illustrate some considerations, I will outline the sampling for my own work. For my project, I need to characterize the canopy closure above forested ponds that range in size from a few meters to a hundred meters across. The most obvious strategy was to sample the intersections of a tightly spaced Cartesian grid over the entire surface of the pond. A previous research student tried this method on a handful of ponds. Using that information, we were able to subsample those data to determine the grid spacing that would yield the most accurate estimates with the fewest photos. In this case, it turned out that every pond, regardless of size, could be characterized with 5 photos: 4 photos along the most distal shoreline in each cardinal direction and one photo in the center of the pond where the east-west and north-south lines intersect. An ancillary benefit of taking the same number of photos at each pond is that I can also calculate the variance within each pond, which gives me a sense of the homogeneity of the habitat. Keep in mind that higher moments of the distribution of light values across habitats (like variance or kurtosis) may be extremely ecologically relevant and can be incorporated for more meaningful statistical analysis.

One final consideration in spatial sampling is the height at which photos are captured. By virtue of practicality, the most common capture height is about 1m from the ground since this is the height of most tripods. However, the study question might dictate taking photos above understory plants or at ground level. Regardless, the height of photos should be consistent, or recorded for each photo, and explicitly stated in published methods.

You will also need to consider the frequency of your sampling to ensure that you capture any relevant variation in the study system over time. In temperate forests, this usually means, at the very least, taking the same photos with deciduous leaves on and again after the leaves have fallen. On the other hand, phenological studies might need photos from many timepoints over shorter durations.

Example of a sunpath projected onto a hemispherical photo from Gap Light Analyzer.

It is important to remember that canopy closure estimation integrates the sun’s path over a specific yearly window. We will define that window explicitly in the model, so it is important to ensure that the canopy structure in the photos accurately represents the sunpath window you define.

How:

In this section I will get into the nuts and bolts of taking photos in the field. I’ll cover camera settings and then camera operation.

Camera settings.

Most modern cameras are designed for ease of use and offer a variant of “automatic” settings. Automatic settings are great for snapping selfies and family photos, but awful for data collection. Manually adjusting the camera increases replicability and increases the accuracy of light estimates. Fortunately, there are only 4 parameters that we need to adjust for hemispherical photography: ISO, aperture, shutter speed, and focal distance.

ISO measures the sensitivity of a camera’s sensor to light. Higher ISO settings ( = greater sensitivity) allow for faster capture in lower light. However, high ISO leads to lots of noise and mischaracterization of pixels. In general, you should aim for the lowest ISO setting possible to produce better quality photos. More expensive cameras have better sensors and interpolation algorithms, so you can get away with higher ISO settings.

The aperture is the iris of a lens and controls the amount of light entering the camera. Aperture settings are given in f-numbers (the ratio of the lens focal length to the physical aperture diameter). Counterintuitively, a larger f-number (e.g. f/22) means a smaller physical pupil, and therefore less light, than a smaller f-number (e.g. f/2.8). Your aperture setting will be a balance between letting in enough light and getting crisp focus across the focal range (see focal distance below).
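A quick worked example of the f-number arithmetic (the lens and pupil dimensions are illustrative, not specs of any particular lens):

```python
def f_number(focal_length_mm: float, aperture_diameter_mm: float) -> float:
    """f-number N = focal length / physical aperture diameter."""
    return focal_length_mm / aperture_diameter_mm

# A 50 mm lens with a ~17.9 mm pupil is about f/2.8; stopped down to a
# ~2.3 mm pupil it is about f/22.
print(round(f_number(50, 17.9), 1))   # ~2.8
print(round(f_number(50, 2.27), 1))   # ~22.0

# Light gathered scales with pupil area, i.e. with 1/N^2, so f/2.8
# admits roughly 60x more light than f/22:
n_wide, n_narrow = 2.8, 22.0
print(round((n_narrow / n_wide) ** 2, 1))
```

This is why the counterintuitive numbering works out: doubling the f-number quarters the light.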

The shutter speed determines the duration of time that the sensor is exposed to light. A longer shutter duration means more light and a brighter photo. However, the longer the exposure, the more any movement of the camera or the canopy will blur the image. If you are using a handheld system, I suggest a shutter of at least 1/60 sec. With a tripod, shutter durations can be longer, but only if the canopy is completely still. If there is ANY wind, I suggest at least 1/500 sec.

Focal distance is the simplest—just adjust the lens focus so that canopy edges are in sharp focus. This is easy when the canopy is a consistent distance from the lens, but can be difficult when capturing multi-layer structure. Lenses resolve greater depth of field (range of focal distances simultaneously in focus) when the aperture is smallest.

Since each of the four settings interacts with the others, configuring the camera is a balancing act. The goal is the best balance between overly white and overly black pixels, which you can check with your camera’s internal light meter. The big catch is that we are not actually interested in the exposure of the sky; in fact, we want the sky to be entirely white.

The most common exposure standardization technique is to first determine the exposure settings for open sky, then overexpose the image by 2–3 exposure values (EV) (Beckschäfer et al. 2013, Brown et al. 2000, Chianucci and Cutini 2012). In theory, this ensures that a uniformly overcast sky is entirely white without blowing out the canopy. The primary benefit is that this method uses the sky as a relative reference standard, which makes it replicable.

It is easy to employ this method using your camera’s internal light meter. First, set your camera to meter from a single central point (you will probably need to look in your manual to figure out how to do this). If there is a large enough gap in the canopy overhead, you can point the meter spot there, take a reading, and adjust your settings to get 0 EV. (If the canopy has no gaps, you can set this in the open before going into the forest; just remember to take another reading if the conditions change.) Now, reduce the shutter speed by 2 full stops (e.g. if 1/500 is 0 EV for the sky, set your shutter speed to 1/125; if 0 EV is 1/1600, set your speed to 1/400).

[Note: other authors (e.g. Beckschäfer et al. 2013) suggest adjusting for 2 EV overexposure. I don’t like using EV for anything other than the spot reference because different cameras use different methods of evaluating exposure. In contrast, shutter speed is invariant across all camera platforms.]

This may all seem like a confusing juggling act, but it is not that difficult in practice. Here is my general strategy:

  • I set my ISO at 200 and my aperture at around f11.
  • With the camera set to evaluate the central point of the image, I take a light meter reading of open sky.
  • I adjust my shutter speed to an exposure value of 0 for open sky.
  • Now, I re-adjust my shutter speed slower by 2 full stops.
  • If my shutter speed is now too slow, I will increase my ISO one level or decrease my f-stop (aperture) by one stop and go back to step #2. I repeat until I find the balance.
  • With the settings dialed in, I take all of my images using these same settings; however, I must re-calibrate if the sky conditions change.
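The arithmetic behind the two-stop adjustment can be sketched in a few lines. This is a hypothetical helper for checking your math in the field, not part of any camera software: each full stop of overexposure doubles the exposure time, which halves the shutter-speed denominator.

```python
# Hypothetical helper (not camera firmware): convert the shutter speed
# metered at 0 EV for open sky into the speed that overexposes by a given
# number of full stops. Each stop doubles exposure time, halving the
# shutter-speed denominator.
def overexposed_shutter(metered_denom, stops=2):
    """metered_denom: denominator of the 0 EV speed, e.g. 500 for 1/500 s."""
    return metered_denom / 2 ** stops

print(overexposed_shutter(500))   # -> 125.0, i.e. set 1/125 s
print(overexposed_shutter(1600))  # -> 400.0, i.e. set 1/400 s
```

These match the worked examples above: 1/500 at 0 EV becomes 1/125, and 1/1600 becomes 1/400.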

Taking photos:

At this point, shooting the photos is the easy part! A couple of helpful tips will make life easier.

Before shooting, you will need to orient the camera so that the sensor is perpendicular to the zenith direction (i.e. the camera lens points directly up, opposite gravity). In my previous post covering hardware, I mentioned that there are pre-fabricated leveling systems available, or you can make a DIY version. With a tripod, you can level the camera manually.

For later analysis, you will need to know the compass orientation of the photos. Some pre-fab systems have LEDs around the perimeter that are controlled by an internal compass and always indicate north. Otherwise, you can place a compass on your stabilizer or tripod and point the top of the image frame in a consistent direction (magnetic or true north is fine, just make sure you are consistent and write down which one you use).

It can be hard to take a hemispherical photo without unintentionally making self-portraits. With a tripod, you can use a remote to release the shutter from a distance or from behind a tree. Camera manufacturers make dedicated remotes, or if your camera has wifi capabilities, you can use an app from your phone. Most cameras also have a timer setting which can give you enough time to duck for cover.

 

Be sure to check out my other posts about canopy research that cover the theory, hardware, field sampling, and analysis pipelines for hemispherical photos.

Also, check out my new method of canopy photography with smartphones and my tips for taking spherical panoramas.

 

References

Beckschäfer, P., Seidel, D., Kleinn, C., and Xu, J. (2013). On the exposure of hemispherical photographs in forests. iForest 6, 228–237.

Brown, P. L., Doley, D., and Keenan, R. J. (2000). Estimating tree crown dimensions using digital analysis of vertical photographs. Agric. For. Meteorol. 100, 199–212.

Chianucci, F., and Cutini, A. (2012). Digital hemispherical photography for estimating forest canopy properties: Current controversies and opportunities. iForest – Biogeosciences and Forestry 5.

Hardware for hemispherical photos https://www.azandisresearch.com/2018/03/01/hardware-for-hemispherical-photos/ Fri, 02 Mar 2018 03:09:18 +0000 https://www.azandisresearch.com/?p=330 This is my second technical post on using hemispherical photography for light estimates. My previous post considered the rationale behind hemispherical photography estimation methods. This post will cover hardware, specifically:

  1. Lens
  2. Camera
  3. Stabilizer

One of the reasons for the lack of standardization in hemispherical photo techniques is the rapid pace of improvements in camera technology. In the earliest days of hemispherical canopy photos, researchers shot on film, developed the images, and scanned them into digital form for processing (Rich 1989). Now that digital camera sensors have improved, we can skip the development and scanning steps, and the accuracy of these techniques has also increased. However, substantial bias can creep into light analyses if the data are captured without a solid understanding of hardware (covered in this post) and settings (covered in the next post).

At the most basic level, hemispherical photography requires just three pieces of hardware: a camera, a lens, and something stable to mount it to.

Lens:

Arguably, the lens is the most critical piece of equipment, because without a true hemispherical lens with an appropriate projection, all photos, regardless of the camera, will be biased.

Fisheye versus hemispherical:

Hemispherical lenses are relatively rare in the camera world. In your hunt for the perfect lens, you will more likely come across many variants of fisheye lenses, which are more commonly used in traditional photography. Hemispherical and fisheye lenses are similar in that they capture a wide angle of view and project it down onto the flat, rectangular camera sensor. In fact, both types of lenses can be built with the same glass. The main difference is that hemispherical lenses focus a circular image ON the sensor, whereas fisheye lenses project a circle AROUND the sensor, so that the resulting image is rectangular (see figure below).

Example of the same image projected by the same lens onto different camera sensor sizes. On a full-frame sensor, this lens is considered hemispherical, while on a cropped sensor, this lens is considered a fisheye. Illustration by me, AZ Andis.

True hemispherical lenses are extremely uncommon; lens manufacturers produce them primarily for research purposes. Fisheye lenses, on the other hand, are common: almost every manufacturer makes a few focal lengths.

So how do you find a true hemi-lens? The most certain method is to buy a tried-and-true model (I use the Sigma 4.5mm f2.8). Otherwise, you can try to find a lens that captures a 180-degree angle of view and renders a circle within the photo frame. However, be aware that not all hemispherical lenses are created equal. Different lenses produce different angular distortion, and this distortion can radically alter light estimates.

Lens projection:

The ideal lens for hemispherical canopy measurements will retain the same area over any viewing angle (i.e. “equal solid angle” in Herbert 1987; or “equisolid” in Fleck 1994).

Figure adapted from Herbert (1987) showing the ideal projection of a hemispherical lens wherein objects at equal distance and equal size from the camera are represented by equal area on the image plane.

In other words, an object should take up the same number of pixels whether it is in the center of the frame (zenith angle = 0) or at the very edge of the frame (zenith angle = 90), although its shape may be distorted. This is not the case for most lenses, which distort like a funhouse mirror depending on the zenith angle of view. The difference originates in the intended application of the lens. Most traditional photographers prefer fidelity of shape over fidelity of area. For instance, landscape photographers don’t care whether a field of flowers maintains constant area across an image, but they certainly want all of the flowers to retain a flower-like shape. For this reason, most common fisheye lenses employ another radial projection called “equidistant”, in which each pixel represents the same angular value at all zenith angles and shape is maintained at all viewing angles (see the difference in the image below). (Fleck (1994) does a great job explaining the math behind angle projections, why some are better for different applications, and why some of them look so strange.)

Fleck (1994) demonstrates the “funhouse mirror” effect of different projections that either maintain consistent area (right) or consistent angle and shape (left) across the image frame.

Michael Thoby’s post is one of the most comprehensive and easy-to-follow explanations I’ve found of the many radial distortions lenses can accommodate. He even has some neat, interactive examples of various projections in the post.

Figure by Michael Thoby. Relationship between viewing angle and position on an image plane.
Figure by Michael Thoby. Equations for calculating the relationship between viewing angle and position on an image plane for common radial projections (the color of each equation in the left figure corresponds to the color of the projection curve in the right figure).
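The projection curves in Thoby’s figures boil down to simple radial mapping functions. Here is a sketch of the common ones (normalized so that every projection reaches the full image radius at a 90-degree zenith angle; the function names are mine, chosen for illustration):

```python
import math

R = 1.0  # image radius at zenith angle 90 degrees (normalized)

def equidistant(theta):   # equal angular increment per pixel
    return R * theta / 90

def equisolid(theta):     # equal solid angle: equal area per viewing angle
    return R * math.sin(math.radians(theta / 2)) / math.sin(math.radians(45))

def orthographic(theta):
    return R * math.sin(math.radians(theta))

def stereographic(theta): # shape-preserving (conformal)
    return R * math.tan(math.radians(theta / 2)) / math.tan(math.radians(45))

# Compare radial positions at a few zenith angles:
for theta in (0, 30, 60, 90):
    print(theta, round(equidistant(theta), 3), round(equisolid(theta), 3))
```

Note how the equisolid mapping places mid-angle objects slightly farther from the center than the equidistant mapping does; that difference is exactly the area-versus-shape trade-off described above.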

Calibrating lens projections:

Not all lens manufacturers will explicitly indicate which radial distortion they use. Nevertheless, you can easily estimate the projection of any hemispherical lens empirically with a couple of sheets of grid paper and a round frame (I used a sieve from the soil-microbe lab). Simply affix the grid sheets to the inside rim of the frame, place the camera inside with the edges of the lens lined up along a chord across the diameter, then snap a shot (see image below).

Calibration test for a 180 degree Lensbaby GoPro attachment.

In ImageJ (or a similar program), measure the pixel count across the diameter of the image frame. Next, count the number of grid cells across the diameter of the photo. Dividing the number of cells across the diameter by 18 (or 36) tells you how many cells represent each 10- (or 5-) degree angle of view. Now, starting at the center of the image (i.e. the pixel count across the diameter divided by 2) and moving outward, count the cells (each of which represents a known angle) and measure the cumulative pixels. Continue from the center (zenith angle = 0) out to the outer edge of the image (zenith angle = 90).

For example, here are the empirically derived projection estimates for the GoPro/Lensbaby180+ lens setup I tested:

You can also calculate these values for other image projections to compare the distortion of your lens. Dividing the total pixels across the diameter by the total cells across gives the pixel count per cell that would be expected from an equidistant projection. For an equisolid projection, calculate the radial distance from the center by taking the sine of half the zenith angle, dividing by sin(45°), and multiplying by the image radius (half the pixel diameter). I did this for a 180 degree Lensbaby GoPro attachment and plotted the actual pixel count for each viewing angle against the equisolid and equidistant projections (see figure below). As the figure shows, for the Lensbaby lens, objects in the periphery of the frame are compressed (i.e. fewer pixels represent a given angle of view).
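The comparison is easy to script once you have the cumulative pixel radii. Here is a sketch with hypothetical measurements (substitute the counts you actually measured in ImageJ):

```python
import math

# Hypothetical calibration measurements: cumulative pixel radius from the
# image center at each 10-degree zenith step (one grid cell per 10 degrees).
# Substitute your own ImageJ counts here.
measured = {10: 130, 20: 255, 30: 370, 40: 475, 50: 565,
            60: 640, 70: 700, 80: 745, 90: 775}
R = measured[90]  # pixel radius of the full 90-degree field of view

for theta, r_obs in measured.items():
    r_equidistant = R * theta / 90
    r_equisolid = R * math.sin(math.radians(theta / 2)) / math.sin(math.radians(45))
    print(f"{theta:3d} deg  measured {r_obs:4d}  "
          f"equidistant {r_equidistant:6.1f}  equisolid {r_equisolid:6.1f}")
```

For these made-up numbers, the per-cell pixel increments shrink toward the edge (130 px for the first 10 degrees, only 30 px for the last), the same peripheral compression seen with the Lensbaby.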

Figure by me, A.Z. Andis, plotting equidistant (black), equisolid (red), and non-standard (blue) lens projection curves. The non-standard estimates were empirically measured projections from the LensBaby 180+ lens on a GoPro Hero4.

Downstream analysis applications can correct for lens distortion by weighting estimates from different radii in the image, but this means that errors are not distributed evenly across the angle of view. In canopy measurements, where the largest gaps occur in the center of the image, this results in systematically biased estimates.

Although not a direct comparison of lenses, I took identical photos with a GoPro/Lensbaby system and a Canon DSLR/Sigma 4.5mm f2.8 system (equisolid) in locations with increasing canopy gaps. In more closed canopies, the lens distortion of the Lensbaby systematically overestimates light values (see figure below), even when using an automatic thresholding algorithm (we’ll go into all of this analysis in post 4).

Camera:

Which camera should you use? In theory, any camera with a 180 degree lens can be used for canopy estimates. But for accurate and replicable measurements, a camera with a high quality sensor and fully-programmable settings is required.

Sensors:

High quality is not synonymous with ‘a bazillion megapixels.’ A camera’s sensor is actually an array of many tiny sensors packed together. When the camera takes a photo, light enters through the lens and “collects” on each individual sensor, like rain falling on a field of buckets. If we want to collect as much light (or rainwater) as possible, we want as large a sensor area (or bucket area) as possible. It doesn’t matter whether we put out many tiny buckets or fewer large buckets; we’ll collect the same amount of water if we cover the same area. The same goes for light. One can pack lots of pixels onto a tiny sensor, but at the end of the day, a bigger sensor will (pretty much) always produce higher quality image information, even if the total megapixel count is lower.

Camera control:

As digital cameras have become more popular in general, camera manufacturers have developed more powerful on-board computers that can do most of the thinking for the user. This is helpful for selfies, but not as useful for accurate research, where we need to carefully select the camera’s settings. For accurate photos, you should be able to independently set the shutter speed, aperture, ISO, and light metering mode, at the very least (I’ll go over these settings in the next post).

Camera type:

DSLRs (digital single-lens reflex cameras like Canon EOS series cameras), especially full-frame DSLRs, house large sensors and are ideal for canopy photos because they are fully programmable. The downside is that they are heavy, expensive, and rarely weatherproof. (There’s nothing more nerve-wracking than sinking off-balance into pond muck trying to keep a couple thousand dollars’ worth of camera hardware above the surface!) In contrast, a cellphone with a clip-on hemispherical lens is light, relatively cheap, and can be waterproofed, but the image quality is too low and the parameters for image capture are difficult or impossible to set. A middle-ground option could be a point-and-shoot camera (like the Nikon Coolpix); the primary disadvantage is the lack of standardization in lenses.

Hemispherical photography systems, L to R: GoPro Hero4 with LensBaby 180+ lens, Nikon Coolpix with hemispherical lens, Canon 60D with Sigma 4.5mm f2.8 lens.

Capture format:

A final consideration is that your camera should be able to capture RAW images. RAW file formats retain all information for each pixel. Most consumer cameras, on the other hand, default to compressed image formats (like JPEG) that reduce file size by averaging pixel values and reducing information complexity. For most canopy measurement purposes, the first step of processing is binarization (which is essentially a radical compression scheme itself), so pre-analysis compression should have little to no effect. If, however, you intend to spot-edit photos prior to processing (for instance, applying exposure corrections to sun spots), RAW format will be required.

My setup:

I’ve tried many camera systems, and in the end, I think that a DSLR with a hemi-lens is really the best option if it is in your budget. I use a Canon 60D with the Sigma 4.5mm f2.8. You could also use a lower-end Canon (like a T-series Rebel) or Nikon camera body with the same lens. This setup offers a large sensor, full control over capture settings, and is well represented in the literature.

Stabilization

Finally, canopy photos require the camera to be pointed directly upward and perfectly level. The simplest option to achieve this orientation is to mount the camera to a tripod with a fully articulated head. A dual-axis bubble level placed on the lens cover should facilitate a perfectly level lens orientation.

Repositioning and leveling a tripod for multiple sites gets very tedious, very quickly. To speed up the process, an auto-leveling device can be attached to the tripod. Once calibrated to the weight and balance of the camera, these devices ensure that the lens points opposite of gravity (i.e. level) at all times. Unfortunately, I’ve only seen these devices offered along with (expensive) proprietary software like HemiView, and the units are a bit heavy.

An auto-leveling device that comes with the Hemiview software package. While this unit is nice, cheaper and lighter DIY options are available.

For most terrestrial applications, a tripod system will work well. However, if you need to take many photos at different points, a tripod is a pain to redeploy. Or, if you need photos over water (as in my case), tripods may be infeasible. To overcome this problem, I created a handheld stabilizer from a cheap steadycam (a free-rotating, dual-axis handle attached to a post, with the camera mounted on top and counterweights at the bottom) and an L-bracket. This rig allows me to move freely between sampling points and immediately take photos without an arduous setup process, while still ensuring that the lens is perfectly level.


In the next post, I cover how to take hemispherical canopy photos in the field, including sampling strategies and camera settings.



References:

Fleck, M. M. (1994). Perspective Projection: the Wrong Imaging Model. Computer Science, University of Iowa.

Herbert, T. J. (1987). Area projections of fisheye photographic lenses. Agric. For. Meteorol. 39, 215–223. doi:10.1016/0168-1923(87)90039-6.

Rich, P. M. (1989). A Manual for Analysis of Hemispherical Canopy Photography. Los Alamos National Laboratory.

Hemispherical light estimates https://www.azandisresearch.com/2018/02/16/hemispherical-light-estimates/ Fri, 16 Feb 2018 03:13:59 +0000 https://www.azandisresearch.com/?p=297
Hemispherical canopy photo used to estimate light profiles of vernal pools. Note that I lightened this image for aesthetics–this is not what your photos should look like for canopy estimates.

Despite the fact that foresters have been estimating forest canopy characteristics for over a century, the techniques and interpretation of these measurements are surprisingly inconsistent. As part of my dissertation research, I wanted to ask what I thought was a simple question: “How much light reaches different vernal pools over the season?” It took a lot of literature searching, a lot of emails, and a lot of trial and error to discover that this is not as simple a question as I originally thought.

But in the end, I’ve developed what I think is a very sound workflow. In the hopes of saving other researchers a journey through the Wild West of canopy-based light estimates, I decided to publish my notes in a series of blog posts.

  • In this first post, I’ll cover the rationale behind hemispherical photo measurements.
  • The second post will compare hardware and address measuring/calibrating lens projections.
  • The third post will focus on capturing images in the field.
  • Finally, the fourth post will be a detailed walk-through of my software and analysis pipeline, including automation for batch processing multiple images.

Why measure light with hemispherical photos?

Light is an important environmental variable. Obviously the light profile in the forest drives photosynthesis in understory plants, but it is also important for ectothermic wildlife and can impact abiotic factors like snow retention. An ecologist interested in any of these processes should be interested in the light environment at specific points in the habitat. Although light intensity at a point can be measured directly with photocells (Getz 1968), this requires continuous measurement at each point of interest over the entire seasonal window of interest. For most applications, this would be intractable. A more common method involves measuring the canopy above a point and estimating the amount of sky that is obstructed in order to infer the amount of light incident at that location.

Fortunately, since both foresters and ecologists are interested in the canopy, the field has produced a multitude of resources for measuring it. However, foresters are more often interested in estimating the character of the canopy itself and the canopy metrics they use are not always directly relatable to questions in ecology.

For instance, while foresters are generally interested in canopy COVER, ecologists might be more interested in canopy CLOSURE. These terms are similar enough that they are often (incorrectly) used interchangeably. According to Korhonen et al. (2006) and Jennings et al. (1999), canopy cover is “the proportion of the forest floor covered by the vertical projection of the tree crowns,” while canopy closure is “the proportion of sky hemisphere obscured by vegetation when viewed from a single point” (see the figure below).

From Korhonen et al. (2006)

The choice between measuring cover versus closure depends on both the scale and perspective of your research. For instance, forest-wide measurements of photosynthetic surface will probably be estimated from remote sensing data, which give a vertical projection (i.e. canopy cover). However, if you are interested in the amount of light reaching a specific understory plant or a vernal pool, canopy closure is more relevant.

The disparity in perspective can be an issue in downstream analysis too. For instance, some analysis procedures consider only the outer edge of a tree crown or contiguous canopy and ignore gaps within this envelope. This kind of analysis is much less useful if your interest is in the light filtering through canopy gaps.

Canopy estimation:

There are three main ways to estimate canopy characteristics: direct field measurements, indirect modelling from other parameters, and remote sensing. The choice of method will largely be determined by the scope of the question and the scale and accuracy required. For the purpose of my research, I am interested in estimating light environments for a specific set of vernal pools, so direct measurements are most useful. However, if I wanted to know how different tree compositions influence the light environments of ponds on average across forest habitats, I might model that relationship and then ground-truth it with direct measurements.

The rest of this post will focus on directly measuring canopy closure to estimate light environments.

Measurement methods:

The most popular methods for directly measuring closure are hemispherical photography and spherical densiometer. A spherical densiometer (Lemmon 1956) is basically a concave or convex mirror that reflects the entire canopy. A grid is drawn over the mirror and researchers can count the number of quadrants of sky versus canopy.
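The densiometer arithmetic is simple enough to sketch. This follows the Lemmon (1956) convention of reading each of the 24 grid squares as 4 quarter-square dots (96 in total); the function name is mine, for illustration:

```python
# Sketch of the densiometer arithmetic, following the Lemmon (1956)
# convention: the mirror grid has 24 squares, each read as 4 quarter-square
# dots, giving 96 dots in total. Hypothetical helper for illustration.
def densiometer_closure(open_sky_dots, total_dots=96):
    """Percent canopy closure from the number of dots showing open sky."""
    openness = open_sky_dots * 100 / total_dots
    return 100 - openness

print(densiometer_closure(24))  # 24 of 96 dots open -> 75.0 percent closure
```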

Hemispherical photos use a similar principle. A true hemispherical lens projects a 180 degree picture of the canopy that can then be processed to determine percentage of sky versus canopy.

This is a great example of both a densiometer (top left) and a hemispherical photo (top right). Bottom left is an example grid for densiometer estimates, and bottom right is an example view from a densiometer with the grid overlaid on the photo. Borrowed from Eric Nordberg.

The advantages of densiometers are that they are cheap, easy to operate, and function in any light conditions. The exact converse is true of hemispherical photography, which can be expensive, difficult to operate, and requires very narrow light conditions (I’ll get into the particulars in post 3).

So, why would anyone use hemispherical photos over densiometers?

The main disadvantage of a densiometer is that its estimates are instantaneous, whereas hemispherical photos can be integrated over time to estimate cumulative light. It is easiest to explain this with an example.

Imagine you want to know how much light is received by a specific plant on the forest floor at the very edge of a clearcut (see figure below). A closure estimate from a densiometer would indicate 50% closure no matter the orientation of the tree line. However, if that little plant is in the northern hemisphere, we know that it will receive much less light if the clearcut lies to the north than if the clearcut lies to the south due to the angle and orientation of the sun.

Figure by me, A.Z. Andis.

The advantage of hemispherical photos (if taken with known orientation and location) is that they can be used to integrate light values over time with respect to the direction of the sun relative to gaps in the canopy. This means that with a single photo (or two photos for deciduous canopies) one can calculate the path of the sun and estimate the total light received by the plant in our example at any point in time or cumulatively over a range of time.
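To see why orientation matters, consider how the sun’s position can be approximated with standard solar-geometry formulas. This is an illustrative sketch using a crude declination approximation (not an ephemeris; real canopy analysis software computes this internally, and the function name here is mine):

```python
import math

def solar_position(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation and azimuth (degrees) for a latitude,
    day of year, and local solar time. Uses the common cosine approximation
    for declination: fine for illustration, not for precise work."""
    decl = -23.44 * math.cos(math.radians(360 / 365 * (day_of_year + 10)))
    hour_angle = math.radians(15 * (solar_hour - 12))
    lat, dec = math.radians(lat_deg), math.radians(decl)
    sin_elev = (math.sin(lat) * math.sin(dec)
                + math.cos(lat) * math.cos(dec) * math.cos(hour_angle))
    elev = math.degrees(math.asin(sin_elev))
    # Azimuth clockwise from north; mirror for afternoon hours.
    cos_az = ((math.sin(dec) - math.sin(lat) * sin_elev)
              / (math.cos(lat) * math.cos(math.radians(elev))))
    az = math.degrees(math.acos(max(-1.0, min(1.0, cos_az))))
    return elev, (az if solar_hour <= 12 else 360 - az)

# At 45 N on the June solstice, the noon sun stands roughly 68 degrees high
# in the SOUTHERN sky (azimuth near 180), so a canopy gap to the south
# admits far more direct light than an identical gap to the north.
elev, az = solar_position(45, 172, 12)
print(round(elev, 1), round(az, 1))
```

In the northern hemisphere the sun never crosses into the northern sky at these latitudes, which is exactly why the densiometer’s orientation-blind 50% closure estimate misleads in the clearcut example.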

An ancillary advantage of photos is that they can be archived along with the scripts used for processing, which makes the entire analysis easily reproducible.

I’ll go much further in depth in later posts, but as a preview, here is a general overview of how light estimates are calculated from hemispherical photos:

  1. Hemispherical photos are captured in the field at a specific location and known compass orientation.
  2. Images are converted to binary using thresholding algorithms, such that each pixel is either black (canopy) or white (sky).
  3. Software can then be used to project a sun path onto the image.
  4. Models parameterized with average direct radiation, indirect radiation, and cloudiness can then estimate the total radiation filtering through the gaps represented as white pixels in the photo at any point in time or averaged over time ranges.
  5. The result is a remarkably accurate point estimate of incident light at any time in the season.
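Steps 2 and 4 above can be sketched on a toy example. This is pure Python on a synthetic 8×8 “image”; real pipelines use an image library and an automatic threshold (such as isodata) rather than the fixed cutoff assumed here:

```python
# Toy sketch of binarization and gap-fraction estimation: threshold a
# synthetic 8x8 grayscale array to binary, then compute the fraction of
# sky (white) pixels inside the circular image area.
SIZE, THRESH = 8, 128
center = (SIZE - 1) / 2
radius = SIZE / 2

# Synthetic pixel values: bright sky in the top half, dark canopy below.
image = [[220 if row < SIZE // 2 else 40 for _ in range(SIZE)]
         for row in range(SIZE)]

inside = gaps = 0
for r in range(SIZE):
    for c in range(SIZE):
        if (r - center) ** 2 + (c - center) ** 2 <= radius ** 2:  # circular mask
            inside += 1
            if image[r][c] >= THRESH:  # white pixel = canopy gap
                gaps += 1

gap_fraction = gaps / inside
print(f"gap fraction: {gap_fraction:.2f}")  # 0.50 for this symmetric scene
```

A real light estimate then weights this gap map by the sun path and diffuse-radiation model rather than treating all gap pixels equally.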

I’ll be drafting subsequent posts in the coming weeks, so be sure to check back in!



References:

Getz, L. L. (1968). A Method for Measuring Light Intensity Under Dense Vegetation. Ecology 49, 1168–1169. doi:10.2307/1934505.

Jennings, S. B., Brown, N. D., and Sheil, D. (1999). Assessing forest canopies and understorey illumination: canopy closure, canopy cover and other measures. Forestry 72, 59–74. doi:10.1093/forestry/72.1.59.

Korhonen, L., Korhonen, K. T., Rautiainen, M., and Stenberg, P. (2006). Estimation of forest canopy cover: a comparison of field measurement techniques. Available at: http://www.metla.fi/silvafennica/full/sf40/sf404577.pdf.

Lemmon, P. E. (1956). A Spherical Densiometer For Estimating Forest Overstory Density. For. Sci. 2, 314–320. doi:10.1093/forestscience/2.4.314.

 
