canopy estimates – A.Z. Andis Arietta, https://www.azandisresearch.com (Ecology, Evolution & Conservation)

UPDATE: Smartphone Hemispherical Image Analysis
https://www.azandisresearch.com/2023/05/20/update-smartphone-hemispherical-image-analysis/
Sat, 20 May 2023

When I first developed the method to estimate canopy metrics from smartphone spherical panoramas, I used a somewhat convoluted workflow involving command-line image manipulation, an ImageJ plugin for binarization, and an AutoHotKey script to automate mouse clicks on a GUI for canopy measures. Admittedly, that is a difficult pipeline for others to replicate.

In an effort to make life easier, I spent some time building out the pipeline entirely in R, including a sourceable function for converting spherical panos to hemispherical images (all available in this repo).

The easiest way to convert all of your spherical panos to hemispherical projections is to source the function from my github:

source("https://raw.githubusercontent.com/andisa01/Spherical-Pano-UPDATE/main/Spheres_to_Hemis.R")

When you source the script, it will install and load all necessary packages. It also downloads the masking file that we will use to black out the periphery of the images.

The script contains the function convert_spheres_to_hemis, which does exactly what it says. You’ll need to put all of your raw spherical panos into a subdirectory within your working directory. We can then pass the path to that directory as an argument to the function.

convert_spheres_to_hemis(focal_path = "./raw_panos/")

This function will loop through all of your raw panos, convert them to masked, north-oriented, upward-facing hemispherical images, and put them all in a folder called “masked_hemispheres” in your working directory. It will also output a csv file called “canopy_output.csv” that contains information about each image.

Below, I will walk through the steps that happen inside the convert_spheres_to_hemis function. If you just want to use the function, you can skip ahead to the analysis. I’ve also written a script in the repo, “SphericalCanopyPanoProcessing.R”, that does all of the conversion AND analysis in batch.

library(tidyverse) # For data manipulation
library(exifr) # For extracting metadata

R is not really the best tool for working with image data, so we’ll use the magick package to call the ImageMagick program from within R.

library(magick) # For image manipulation
# Check to ensure that ImageMagick was installed.
magick_config()$version

You should see a numeric version code like ‘6.9.12.3’ if ImageMagick was properly installed.

# The imager package also requires ImageMagick
library(imager) # For image display

For binarizing and calculating some canopy metrics, we will use Chianucci’s hemispheR package, which we need to install from the development version on GitLab.

library(devtools)
devtools::install_git("https://gitlab.com/fchianucci/hemispheR")
library(hemispheR) # For binarization and estimating canopy measures

To get started, you’ll need to have all of your raw equirectangular panoramas in a folder. I like to keep my raw images in a subdirectory called ‘raw_panos’ within my working directory. Regardless of where you store your files, set the directory path as the focal_path variable. For this tutorial we’ll process a single image, but I’ve included scripts for batch processing in the repo. Set the name of the image to process as the focal_image variable.

focal_path <- "./raw_panos/"
focal_image <- "PXL_20230519_164804198.PHOTOSPHERE_small.jpg"

focal_image_path <- paste0(focal_path, focal_image)
focal_image_name <- sub("\\.[^.]+$", "", basename(focal_image_path))

Let’s take a look at the equirectangular image.

pano <- image_read(focal_image_path)
pano # Visualize the pano
Raw equirectangular projection of a spherical panorama within a forest.
Raw equirectangular projection of a spherical panorama.

Note: One advantage of spherical panos is that they are large and, therefore, high resolution. The images from my Google Pixel 4a are 38 megapixels. For this tutorial, I downsized the example pano to 10% resolution to make processing and visualizing easier. For your analysis, I’d recommend using full-resolution images.

Spherical panoramas contain far more metadata than an average image. We can take a look at all of this additional information with the read_exif function.

read_exif(focal_image_path) %>%
  glimpse()

We’ll extract some of this information about the image (capture date, georeference, altitude, etc.) to output alongside our canopy metrics. You can decide which elements are most important or useful for you.

xmp_data <- read_exif(focal_image_path) %>%
  select(
    SourceFile,
    Make,
    Model,
    FullPanoWidthPixels,
    FullPanoHeightPixels,
    SourcePhotosCount,
    Megapixels,
    LastPhotoDate,
    GPSLatitude,
    GPSLongitude,
    GPSAltitude,
    PoseHeadingDegrees
  )

The first image processing step is to convert the equirectangular panorama into a hemispherical image. We’ll need to store the image width and the heading for processing. The pose heading is a particularly important feature and a unique advantage of spherical panoramas. Since the camera automatically stores the compass heading of the first image of the panorama, we can use that information to automatically orient all of our hemispherical images so that true north is at the top of the image. This is critical for analyses of understory light, which require plotting the sunpath onto the hemisphere.

# Store the pano width to use in scaling and cropping the image
pano_width <- image_info(pano)$width
image_heading <- read_exif(focal_image_path)$PoseHeadingDegrees

The steps to reproject the spherical panorama into an upward-looking hemispherical image go like this:

Crop the upper hemisphere (this is easy with smartphone spheres because the phone’s gyro ensures that the horizon line is always the midpoint of the y-axis).

Cropped upper hemisphere (top half of the image) from an equirectangular projection of a spherical panorama within a forest.
Cropped upper hemisphere (top half of the image) from an equirectangular projection of a spherical panorama.

Rescale the cropped image into a square to retain the correct scaling when reprojected into polar coordinate space.

Rescaled upper hemisphere from an equirectangular projection of a spherical panorama within a forest.
Rescaled upper hemisphere from an equirectangular projection of a spherical panorama.

Project into polar coordinate space and flip the perspective so that it is upward-looking.

Polar projection of the upper half of a spherical panorama of a forest.
Polar projection of the upper half of a spherical panorama.

Rotate the image so that the top of the image points true north and crop the image so that the diameter of the circle fills the frame.

Polar projection of the upper half of a spherical panorama rotated to orient north and cropped to size of a forest.
Polar projection of the upper half of a spherical panorama rotated to orient to true north.

We can accomplish all of those steps with the code below.

pano_hemisphere <- pano %>%
  # Crop to retain the upper hemisphere
  image_crop(geometry_size_percent(100, 50)) %>%
  # Rescale into a square to keep correct scale when projecting into polar coordinate space
  image_resize(geometry_size_percent(100, 400)) %>%
  # Remap the pixels into polar projection
  image_distort("Polar",
                c(0),
                bestfit = TRUE) %>%
  image_flip() %>%
  # Rotate the image to orient true north to the top of the image
  image_rotate(image_heading) %>%
  # Rotating expands the canvas, so we crop back to the dimensions of the hemisphere's diameter
  image_crop(paste0(pano_width, "x", pano_width, "-", pano_width/2, "-", pano_width/2))
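
As an illustrative aside, the geometry passed to the final image_crop() call above is just a plain ImageMagick geometry string ("WxH-x-y": width, height, and offsets). For a full-resolution pano 8704 px wide (the width reported in this example's metadata), it evaluates to:

```r
# Illustrative only (uses its own variable so it won't clobber pano_width):
example_width <- 8704
paste0(example_width, "x", example_width, "-", example_width/2, "-", example_width/2)
# "8704x8704-4352-4352": a square crop, shifted left and up by the hemisphere's radius
```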

The resulting image looks funny because the outer pixels are extended by interpolation, and the rotation leaves white space at the corners. Most analyses define a bounding perimeter to exclude any pixels outside the circular hemisphere, so the weird border shouldn’t matter. But we can add a black mask to make the images look better.
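
To make the bounding-perimeter idea concrete, here is a minimal base-R sketch (the function name is mine, not from any package) of the test such analyses apply: a pixel only counts if it falls inside the hemisphere’s circle.

```r
# Is pixel (x, y) inside the circular hemisphere of a width-by-width image?
inside_hemisphere <- function(x, y, width) {
  r <- width / 2
  (x - r)^2 + (y - r)^2 <= r^2  # within one radius of the image center
}
inside_hemisphere(50, 50, 100)  # image center: TRUE
inside_hemisphere(1, 1, 100)    # corner left white by the rotation: FALSE
```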

I’ve included a vector file for a black mask to lay over the image in the repo.

# Get the image mask vector file
image_mask <- image_read("./HemiPhotoMask.svg") %>%
  image_transparent("white") %>%
  image_resize(geometry_size_pixels(width = pano_width, height = pano_width)) %>%
  image_convert("png")

masked_hemisphere <- image_mosaic(c(pano_hemisphere, image_mask))

masked_hemisphere
Polar projection of the upper half of a spherical panorama rotated to orient north and cropped to size of a forest with a black mask around the outside of the hemisphere.
Masked hemispherical canopy image.

We’ll store the masked hemispheres in their own subdirectory. This script makes that directory if it doesn’t already exist and writes our file into it.

if(!dir.exists("./masked_hemispheres/")){
  dir.create("./masked_hemispheres/") # Create the subdirectory if it doesn't exist
}

masked_hemisphere_path <- paste0("./masked_hemispheres/", focal_image_name, "hemi_masked.jpg") # Set the filepath for the new image

image_write(masked_hemisphere, masked_hemisphere_path) # Save the masked hemispherical image

At this point, you can use the hemispherical image in any program you’d like, either in R or in other software. For this example, I’m going to use Chianucci’s hemispheR package in order to keep the entire pipeline in R.

The next step is to import the image. hemispheR allows for lots of fine-tuning. Check out the docs to learn what all of the options are. These settings most closely replicate the processing I used in my 2021 paper.

fisheye <- import_fisheye(masked_hemisphere_path,
                          channel = '2BG',
                          circ.mask = list(xc = pano_width/2, yc = pano_width/2, rc = pano_width/2),
                          gamma = 2.2,
                          stretch = FALSE,
                          display = TRUE,
                          message = TRUE)
Circular hemispheric plot output by hemispheR's 'import_fisheye' function.
Circular hemispheric plot output by hemispheR’s ‘import_fisheye’ function.

Now, we need to binarize the images, converting all sky pixels to white and everything else to black (at least as close as possible). Again, there are lots of options available in hemispheR. You can decide which settings are right for you. However, I would suggest keeping the zonal argument set to FALSE. The documentation describes this argument as:

zonal: if set to TRUE, it divides the image in four sectors
(NE,SE,SW,NW directions) and applies an automated classification
separatedly to each region; useful in case of uneven light conditions in
the image

Because a spherical panorama exposes each of its 36 component images separately, there is no need for this correction.

I also suggest keeping the export argument set to TRUE so that the binarized images will be automatically saved into a subdirectory named ‘results’.

binimage <- binarize_fisheye(fisheye,
                 method = 'Otsu',
                 # We do NOT want to use zonal threshold estimation since this is done by the camera
                 zonal = FALSE,
                 manual = NULL,
                 display = TRUE,
                 export = TRUE)
Binarized circular hemispheric plot output by hemispheR's 'binarize_fisheye' function.
Binarized circular hemispheric plot output by hemispheR’s ‘binarize_fisheye’ function.

Unfortunately, hemispheR does not allow for estimation of understory light metrics like through-canopy radiation or Global Site Factors. If you need light estimates, you’ll have to take the binarized images and follow my instructions and code for implementing Gap Light Analyzer.

Assuming all you need is canopy metrics, we can continue with hemispheR and finalize the whole pipeline in R. We estimate canopy metrics with the gapfrac_fisheye() function.

gapfrac <- gapfrac_fisheye(
  binimage,
  maxVZA = 90,
  # Spherical panoramas use an equidistant projection by construction
  lens = "equidistant",
  startVZA = 0,
  endVZA = 90,
  nrings = 5,
  nseg = 8,
  display = TRUE,
  message = TRUE
)
Binarized circular hemispheric plot with azimuth rings and segments output by hemispheR's 'gapfrac_fisheye' function.
Binarized circular hemispheric plot with azimuth rings and segments output by hemispheR’s ‘gapfrac_fisheye’ function.

Finally, we can estimate the canopy metrics with the canopy_fisheye() function, join those to the metadata from our image, and output our report.

canopy_report <- canopy_fisheye(
  gapfrac
)

output_report <- xmp_data %>%
  bind_cols(
    canopy_report
  ) %>%
  rename(
    GF = x,
    HemiFile = id
  )

glimpse(output_report)
Rows: 1
Columns: 32
$ SourceFile            "./raw_panos/PXL_20230519_164804198.PHOTOSPHERE_small.jpg"
$ Make                  "Google"
$ Model                 "Pixel 4a"
$ FullPanoWidthPixels   8704
$ FullPanoHeightPixels  4352
$ SourcePhotosCount     36
$ Megapixels            0.37845
$ LastPhotoDate         "2023:05:19 16:49:57.671Z"
$ GPSLatitude           41.33512
$ GPSLongitude          -72.91103
$ GPSAltitude           -23.1
$ PoseHeadingDegrees    86
$ HemiFile              "PXL_20230519_164804198.PHOTOSPHERE_smallhemi_masked.jpg"
$ Le                    2.44
$ L                     3.37
$ LX                    0.72
$ LXG1                  0.67
$ LXG2                  0.55
$ DIFN                  9.981
$ MTA.ell               19
$ GF                    4.15
$ VZA                   "9_27_45_63_81"
$ rings                 5
$ azimuths              8
$ mask                  "435_435_434"
$ lens                  "equidistant"
$ channel               "2BG"
$ stretch               FALSE
$ gamma                 2.2
$ zonal                 FALSE
$ method                "Otsu"
$ thd                   116

Be sure to check out my prior posts on working with hemispherical images.

Tips for taking spherical panoramas
https://www.azandisresearch.com/2021/07/16/tips-for-taking-spherical-panoramas/
Fri, 16 Jul 2021

Spherical panoramas can be tricky to stitch properly in dense forests, especially when canopy elements are very close to the camera (which exaggerates parallax). Fortunately, the issue has attenuated with every update of the phone software, but it still pays to be careful when capturing your panoramas. So, if you are taking spherical panos in order to estimate canopy metrics, here are my suggestions for improving your captures:

Rotate around the camera

Perfecting panoramas took some practice for me. The first key is to avoid rotating the camera around your body; instead, rotate your body around the camera. I found this post helpful when I was learning. The principle is to keep the entrance pupil of the camera lens at the exact center of the sphere (as much as possible).

Calibrate your compass and gyro

Another tip is to calibrate your compass and gyro regularly (here’s how for Android users). If the compass is off, the phone won’t always know when you’ve made a full rotation.

Take your time

It also helps to go slowly. The camera is constantly processing and mapping the scene to guide the next image, which takes a few seconds.

Be consistent

Even though you can technically take images in any order, I always rotate in the same direction around the horizon and then through the upper two circles. The lower sphere doesn’t really matter, so I don’t spend much time on it. The key here is that consistency, both for you and the camera, helps.

Use your body as a tripod

You can also try to use your body as a kind of tripod. When I was first learning to take spherical images, I used a specific method: I hold my right elbow against my hip bone directly over my right knee and pinch the phone at the top, in line with the camera. Then I keep my right foot planted and walk my left foot around the right, using my left hand to tip the phone as needed. After a few dozen spheres, I don’t really need to do that anymore. I usually know when a sphere will turn out badly and just restart.

Review your images

It is important to remember that you can (and should) review your images in the field so that you can retake any with issues. You can also get a Google Cardboard viewer for $13, which lets you review the sphere in VR in the field to see problems more clearly.


Over time, you get the hang of capturing spheres in ways that minimize problems. Now that I have practiced, it takes me much less time to get a good photosphere than it does to level and set my exposure with my DSLR.

Nevertheless, there are always some artifacts. After looking at lots of images side by side, my sense is that stitching artifacts in the panos, even fairly large ones, contribute less error than the combined errors of other methods: incorrect orientation, an unleveled camera, poor lens calibration, exposure misspecification, and much lower pixel counts.


Be sure to check out my other posts on this topic:

Smartphone hemispherical photography

Analyzing hemispherical photos

Taking hemispherical canopy photos

Hardware for hemispherical photos

Hemispherical light estimates

Taking hemispherical canopy photos
https://www.azandisresearch.com/2018/07/24/taking-hemispherical-canopy-photos/
Tue, 24 Jul 2018

Be sure to check out the previous two posts:

1. Hemispherical Light Estimates

2. Hardware for Hemispherical Photos

Once the theory and equipment are taken care of, you are ready to go out into the field and collect data. This post will cover the when, where, and how of shooting hemispherical photos. The next and final post deals with analyzing the photos you’ve captured.

When:

Hemispherical photos are very sensitive to lighting conditions. Because each photo measures an entire hemisphere of sky at once, it is important that the background (i.e. the sky) is as standardized as possible so that the canopy can be accurately distinguished without bias.

To appreciate the wild exposure gradient across a hemispherical lens, wait for a clear sunset and walk out into an open field. Glance toward the setting sun for a few seconds, then turn 180 degrees and try to make out the horizon. Even though our eyes are extremely well-equipped to adjust quickly to different lighting, you’ll probably have a hard time making out the dark horizon after looking at the bright sunset. Unlike your eye, which continuously adjusts to light conditions even at these extremes, a camera is stuck with one setting. Even the best camera sensors have limited dynamic range, so we need to take photos at times when the lighting across the entire hemisphere fits within a narrow range.

The best time to shoot is on uniformly overcast days. Overcast cloud cover yields a nice homogeneous white background that makes strong contrast to the edges of leaves and branches.

Overcast days provide a nice standard background on which the canopy structure can be easily distinguished.

If cloudy days are not in the forecast, another option is to wait until dawn or dusk, when the sun is just below the horizon and the sky is evenly lit. The problem is that this allows only a very short window for shooting.

Sometimes, we can’t help but take photos on sunny days. Before I talk about strategies for dealing with difficult exposures, I want to explain some of the problems direct sunlight can cause in light estimation methods: blown-out highlights, flare, and reflection.

Overexposed highlights

Overexposed or “blown-out” highlights are an issue of exposure settings and sensor limitations. When shooting into light, individual sensor cells can only record light up to a threshold. In my last post, I compared light collecting on camera sensor cells to an array of buckets in a field collecting raindrops. In that analogy, the sensor threshold is the rim of the bucket: once enough rain collects that a bucket overflows, it records no further information about relative rainfall between buckets. In the camera, too much light falling on a portion of the sensor maxes out those pixels, which are recorded as fully white (i.e. no information is coded in those cells). This leads to an underestimate of canopy closure because some pixels occupied by canopy will look like white sky after binarization. We can try to correct our exposure to ensure we do not truncate those bright pixels, but compared to direct sun, blue sky directly overhead is relatively dark, so this will almost certainly lose information on the dark end of the spectrum, and we will then be overestimating canopy closure.
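
The clipping problem can be sketched numerically (the values below are arbitrary illustrative units, not real sensor data):

```r
# An 8-bit sensor clips everything above 255 to pure white, so two canopy
# pixels with very different true brightness become indistinguishable.
scene    <- c(40, 120, 200, 300, 450)  # "true" brightness of five pixels
recorded <- pmin(scene, 255)           # clipped sensor values
recorded                               # 40 120 200 255 255
sum(recorded == 255)                   # 2 pixels lost to clipping
```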

Overexposure will blow out the brightest parts of the image and the result will be that solid canopy elements will be considered open sky in downstream analysis. This photo is uniformly overexposed, but this can also occur if the range of exposure across the hemisphere is greater than the dynamic range of the sensor.

Unfortunately, there are no easy ways to deal with wide exposure ranges (other than avoiding sunny days). One solution is to shoot in RAW format, which retains more information per pixel. With RAW photos one can attempt to recover highlights and boost shadows to some extent, but because the next processing step is binarization, this will only be effective if you can sufficiently expand the pixel-value range around the binarization threshold. It also entails hand-calibrating each image, which reduces standardization and replicability, and may take a lot of time if you need to process many photos.

Flare

Flare is another problem and it emerges for a couple of reasons.

Hazy-looking “veiling flare” occurs when light scatters inside the glass optics of your lens. In normal photography, it is most prevalent when the sun is oblique to the front lens surface and light gets “trapped” bouncing around inside the glass. It can also look like a halo of light bleeding into a solid object or, depending on your aperture settings, streaks of light radiating from a point source (this is Fraunhofer diffraction and can make for very cool effects… just not in hemispherical photos!). When these streaks overlay solid canopy objects in our photos, they lead to underestimation of closure.

Flare from a light source can lead binarization algorithms to mischaracterize solid canopy pixels as open sky, as seen in the two images on the right.

Ghosting flare looks like random orbs of light and occurs when light entering the lens reflects off the inside of the optics close enough to come into focus on the sensor. Hemispherical lenses are incredibly prone to this type of flare because of the short focal distance of wide-angle lenses.

Ghost flare, when sufficiently bright, will be considered a canopy gap in downstream estimation.

If photos must be taken in sunlight, one alternative is to at least block the direct beams by positioning the sun behind a natural sunblock or by fashioning a shield. I’ve never tried the shield option myself, but I’ve seen photos of setups with a small disk affixed to a semi-rigid wire on the tripod, positioned so that it blocks only the sun. There are problems with this option: the sunshield will be counted as canopy in the light calculations, which biases the estimates, and masking the shield out of the analysis simply excludes that area instead. Another option is to spot-correct flare in each photo. This is most effective with RAW photos and can be accomplished in photo-editing applications by using a weak adjustment brush directly over the flare to reduce highlights, boost shadows, and increase contrast. Again, I don’t recommend editing individual photos, but sometimes this is the only option.

Reflection

Finally, direct sun reflecting off solid surfaces can lead to mischaracterized pixels and overestimated openness. In this case, objects opposite the sun can be lit so brightly that they are brighter than the background sky. This is very common in forests dominated by light-barked trees like aspen and birch, and it also occurs just about any time there is snow in the frame, regardless of the lighting conditions. The only solution is darkening the solid objects in a photo-editing program. It is a painstaking task and increases the error in the final estimate, but it is sometimes necessary.

Reflection can make solid objects in the canopy appear lighter than the sky. Although this image is properly exposed, the light trunk of the tree reflects enough light to be characterized as sky by the binarization algorithm.

Where:

Next you must decide where you want to take photos. The answer to this will depend on your research question. We can break it down into spatial/temporal sampling scale and relative position.

Hemispherical photos can fit into any standard spatial sampling scheme (e.g. transects, grid, opportunistic, clustered, etc.) depending on your downstream statistical analysis. However, because hemispherical photos capture such a wide angle of view, you must be careful of any analysis that assumes independence of observations if the viewshed overlaps between photos.

Generally, when we think about spatial sampling, we think in two dimensions (like in the figure below). However, it is also important to consider that canopy closure estimates integrate sun angle, and so it is critical that an even sampling scheme considers the third spatial dimension and include a representative sample of topographical aspect.

Example spatial sampling schemes from Collender et al 2015.

There is no perfect sampling strategy for any given project. To illustrate some considerations, I will outline the sampling for my own work. For my project, I need to characterize the canopy closure above forested ponds that range in size from a few meters to a hundred meters across. The most obvious strategy was to sample the intersections of a tightly spaced Cartesian grid over the entire surface of each pond. A previous research student tried this method on a handful of ponds. Using that information, we were able to subsample those data to determine the grid spacing that would yield the most accurate estimates with the fewest photos. In this case, it turned out that every pond, regardless of size, could be characterized with 5 photos: 4 along the most distal shoreline in each cardinal direction and one in the center of the pond where the east-west and north-south lines intersect. An ancillary benefit of taking the same number of photos at each pond is that I can also calculate the variance within each pond, which gives me a sense of the homogeneity of the habitat. Keep in mind that higher moments of the distribution of light values across habitats (like variance or kurtosis) may be extremely ecologically relevant and can be incorporated for more meaningful statistical analysis.
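
As a base-R sketch of the within-pond variance idea (the numbers below are invented for illustration, not data from the study):

```r
# Five openness values per pond: the mean summarizes light availability,
# the variance summarizes habitat homogeneity.
openness <- data.frame(
  pond = rep(c("A", "B"), each = 5),
  GF   = c(4.1, 3.8, 4.4, 4.0, 4.2,   # pond A: homogeneous canopy
           9.5, 2.1, 7.8, 3.3, 6.0)   # pond B: patchy canopy
)
aggregate(GF ~ pond, data = openness, FUN = var)
```

Pond B’s much larger variance flags a patchier canopy even if the two ponds had similar mean openness.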

One final consideration in spatial sampling is the height at which photos are captured. For practical reasons, the most common capture height is about 1 m from the ground, since this is the height of most tripods. However, the study question might dictate taking photos above understory plants or at ground level. Regardless, the height of photos should be consistent, or recorded for each photo, and explicitly stated in published methods.

You will also need to consider the frequency of your sampling to ensure that you capture any relevant variation in the study system over time. In temperate forests, this usually means, at the very least, taking the same photos with deciduous leaves on and again after the leaves have fallen. On the other hand, phenological studies might need photos from many timepoints over shorter durations.

Example of a sunpath projected onto a hemispherical photo from Gap Light Analyzer.

It is important to remember that canopy closure estimation integrates the sun’s path over a specific yearly window. We will define that window explicitly in the model, so it is important to ensure that the canopy structure in the photos accurately represents the sunpath window you define.

How:

In this section I will get into the nuts and bolts of taking photos in the field. I’ll cover camera settings and then camera operation.

Camera settings.

Most modern cameras are designed for ease of use and offer a variant of “automatic” settings. Automatic settings are great for snapping selfies and family photos, but awful for data collection. Manually adjusting the camera increases replicability and increases the accuracy of light estimates. Fortunately, there are only 4 parameters that we need to adjust for hemispherical photography: ISO, aperture, shutter speed, and focal distance.

ISO measures the sensitivity of a camera’s sensor to light. Higher ISO settings (greater sensitivity) allow for faster capture in lower light. However, high ISO leads to lots of noise and mischaracterized pixels. In general, you should aim for the lowest ISO setting possible to produce better-quality photos. More expensive cameras have better sensors and interpolation algorithms, so you can get away with higher ISO settings.

The aperture is the iris of a lens and controls the amount of light entering the camera. Aperture settings are given in f-numbers (the ratio of the lens focal length to the physical aperture width). Counterintuitively, a larger f-number (e.g. f/22) means a smaller physical pupil, and therefore less light, than a smaller f-number (e.g. f/2.8). Your aperture setting will be a balance between letting in enough light and getting crisp focus across the focal range (see focal distance below).
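
Because the light gathered scales roughly with aperture area, relative exposure goes as 1/N². A quick base-R sketch of the magnitudes involved (my own illustrative helper):

```r
# Relative light admitted scales ~ 1/N^2, so small f-numbers admit far more light.
relative_light <- function(N) 1 / N^2
relative_light(2.8) / relative_light(22)  # f/2.8 admits ~62x the light of f/22
```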

The shutter speed determines how long the sensor is exposed to light. Longer shutter speeds mean more light and brighter photos. However, the longer the shutter speed, the more any movement of the camera or the canopy will blur the image. If you are using a handheld system, I suggest a shutter of at least 1/60 sec. With a tripod, shutter speeds can be longer, but only if the canopy is completely still. If there is ANY wind, I suggest at least 1/500 sec.

Focal distance is the simplest: just adjust the lens focus so that canopy edges are sharp. This is easy when the canopy is a consistent distance from the lens, but can be difficult when capturing multi-layer structure. Lenses resolve greater depth of field (the range of focal distances simultaneously in focus) when the aperture is smallest.

Since each of these four settings interacts with the others, configuring the camera is a balancing act. The end goal is the best balance between overly white and overly black pixels, which you can check with your camera’s internal light meter. The big catch is that we are not actually that interested in the exposure of the sky; in fact, we would like the sky to be entirely white.

The most common exposure standardization technique is to first determine the exposure settings for open sky, then overexpose the image by 2-3 exposure values (EV) (Beckshäfer et al. 2013, Brown et al. 2000, Chianucci and Cutini 2012). In theory, this will ensure that a uniformly overcast sky is entirely white without blowing-out the canopy. The primary benefit is that this method uses the sky as a relative reference standard which is replicable.

It is easy to employ this method using your camera’s internal light meter. First, set your camera to meter from a single central point (you may need to check your manual to figure out how). If there is a large enough gap in the canopy overhead, point the meter spot there, take a reading, and adjust your settings to get 0 EV. (If the canopy has no gaps, you can set this in the open before going into the forest; just take another reading if the conditions change.) Now, slow the shutter speed by 2 full stops (i.e. if 1/500 gives 0 EV for the sky, set your shutter speed to 1/125; if 0 EV is at 1/1600, set your speed to 1/400).

[Note, other authors (e.g. Beckschäfer et al. 2013) suggest adjusting for 2 EV of overexposure. I don’t like using EV for anything other than the spot reference because different cameras evaluate exposure differently. In contrast, shutter speed is invariant across all camera platforms.]
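If it helps, the stop arithmetic is trivial to script. Here is a minimal sketch (the function name is my own invention, not from any camera library); each full stop slower doubles the exposure time, i.e. halves the denominator of a fractional shutter speed:

```python
def slower_by_stops(denominator, stops=2):
    """Shutter-speed denominator after slowing the shutter by N full stops.

    Each full stop slower doubles the exposure time, which halves the
    denominator of a fractional shutter speed like 1/500 sec.
    """
    return denominator / 2 ** stops

# 0 EV metered at 1/500 sec -> 2 stops of overexposure -> 1/125 sec
print(slower_by_stops(500))   # 125.0
# 0 EV metered at 1/1600 sec -> 1/400 sec
print(slower_by_stops(1600))  # 400.0
```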

This may all seem like a confusing juggling act, but it is not that difficult in practice. Here is my general strategy:

  • I set my ISO at 200 and my aperture at around f11.
  • With the camera set to evaluate the central point of the image, I take a light meter reading of open sky.
  • I adjust my shutter speed to an exposure value of 0 for open sky.
  • Now, I re-adjust my shutter speed slower by 2 full stops.
  • If my shutter speed is now too slow, I increase my ISO one level or open my aperture one stop (lower f-number) and return to the metering step, repeating until I find the balance.
  • With the camera set, I take all of my images using these same settings; however, I must re-calibrate if the sky conditions change.

Taking photos.

At this point, shooting the photos is the easy part! A couple of helpful tips will make life easier.

Before shooting, you will need to orient the camera so that the sensor plane is perpendicular to the zenith direction (i.e. the lens points directly up, opposite gravity). In my previous post covering hardware I mentioned that there are pre-fabricated leveling systems available, or you can make a DIY version. With a tripod, you can manually level the camera.

For later analysis, you will need to know the compass orientation of the photos. Some pre-fab systems have LEDs around the perimeter that are controlled by an internal compass and light up to indicate north. Otherwise, you can place a compass on your stabilizer or tripod and point the top of the image frame in a consistent direction (magnetic or true north is fine, just make sure you are consistent and write down which one you use).

It can be hard to take a hemispherical photo without unintentionally making self-portraits. With a tripod, you can use a remote to release the shutter from a distance or from behind a tree. Camera manufacturers make dedicated remotes, or if your camera has wifi capabilities, you can use an app from your phone. Most cameras also have a timer setting which can give you enough time to duck for cover.

 

Be sure to check out my other posts about canopy research that cover the theory, hardware, field sampling, and analysis pipelines for hemispherical photos.

Also, check out my new method of canopy photography with smartphones and my tips for taking spherical panoramas.

 

References

Beckschäfer, P., Seidel, D., Kleinn, C., and Xu, J. (2013). On the exposure of hemispherical photographs in forests. iForest 6, 228–237.

Brown, P. L., Doley, D., and Keenan, R. J. (2000). Estimating tree crown dimensions using digital analysis of vertical photographs. Agric. For. Meteorol. 100, 199–212.

Chianucci, F., and Cutini, A. (2012). Digital hemispherical photography for estimating forest canopy properties: Current controversies and opportunities. iForest – Biogeosciences and Forestry 5.

Hemispherical light estimates https://www.azandisresearch.com/2018/02/16/hemispherical-light-estimates/ Fri, 16 Feb 2018 03:13:59 +0000 https://www.azandisresearch.com/?p=297
Hemispherical canopy photo used to estimate light profiles of vernal pools. Note that I lightened this image for aesthetics–this is not what your photos should look like for canopy estimates.

Despite the fact that foresters have been estimating forest canopy characteristics for more than a century, the techniques and interpretations of these measurements are surprisingly inconsistent. As part of my dissertation research, I wanted to ask what I thought was a simple question: “How much light reaches different vernal pools over the season?” It took a lot of literature searching, a lot of emails, and a lot of trial and error to discover that this is not as simple a question as I originally thought.

But in the end, I’ve developed what I think is a very sound workflow. In the hopes of saving other researchers a journey through the Wild West of canopy-based light estimates, I decided to publish my notes in a series of blog posts.

  • In this first post, I’ll cover the rationale behind hemispherical photo measurements.
  • The second post will compare hardware and address measuring/calibrating lens projections.
  • The third post will focus on capturing images in the field.
  • Finally, the fourth post will be a detailed walk-through of my software and analysis pipeline, including automation for batch processing multiple images.

Why measure light with hemispherical photos?

Light is an important environmental variable. Obviously the light profile in the forest drives photosynthesis in understory plants, but it also matters for ectothermic wildlife and can impact abiotic factors like snow retention. An ecologist interested in any of these processes should be interested in the light environment at specific points in the habitat. Although light intensity at a point can be measured directly with photocells (Getz 1968), this requires continuous measurement at each point of interest over the entire seasonal window of interest. For most applications, this would be intractable. A more common method involves measuring the canopy above a point and estimating the amount of sky that is obstructed in order to infer the amount of light incident at that location.

Fortunately, since both foresters and ecologists are interested in the canopy, the field has produced a multitude of resources for measuring it. However, foresters are more often interested in estimating the character of the canopy itself and the canopy metrics they use are not always directly relatable to questions in ecology.

For instance, while foresters are generally interested in canopy COVER, ecologists might be more interested in canopy CLOSURE. These terms are similar enough that they are often (incorrectly) used interchangeably. According to Korhonen et al. (2006) and Jennings et al. (1999), canopy cover is “the proportion of the forest floor covered by the vertical projection of the tree crowns” while canopy closure is “the proportion of sky hemisphere obscured by vegetation when viewed from a single point” (see the figure below).

From Korhonen et al. (2006)

The choice between measuring cover versus closure depends on both the scale and perspective of your research. For instance, forest-wide measurements of photosynthetic surface will probably be estimated from remote sensing data, which is a vertical projection (i.e. canopy cover). However, if you are interested in the amount of light reaching a specific understory plant or a vernal pool, canopy closure is more relevant.

The disparity in perspective can be an issue in downstream analysis too. For instance, some analysis procedures consider only the outer edge of a tree crown or contiguous canopy and ignore gaps within this envelope. This kind of analysis is much less useful if your interest is in the light filtering through canopy gaps.

Canopy estimation:

There are three main ways to estimate canopy characteristics: direct field measurements, indirect modelling from other parameters, and remote sensing. The choice of measurement method will largely be determined by the scope of the question and the scale and accuracy required. For the purpose of my research, I am interested in estimating light environments for a specific set of vernal pools, so direct measurements are most useful. However, if I wanted to know how different tree composition influences light environments of ponds on average across forest habitats, I might want to try modelling that relationship, followed by ground-truthing with direct measurements.

The rest of this post will focus on directly measuring canopy closure to estimate light environments.

Measurement methods:

The most popular methods for directly measuring closure are hemispherical photography and the spherical densiometer. A spherical densiometer (Lemmon 1956) is basically a concave or convex mirror that reflects the entire canopy. A grid is drawn over the mirror, and researchers count the number of grid squares filled by sky versus canopy.

Hemispherical photos use a similar principle. A true hemispherical lens projects a 180 degree picture of the canopy that can then be processed to determine percentage of sky versus canopy.

This is a great example of both a densiometer (top left) and a hemispherical photo (top right). Bottom left is an example grid for densiometer estimates, and bottom right is an example view from a densiometer with the grid overlaid on the photo. Borrowed from Eric Nordberg.

The advantages of a densiometer are that it is cheap, easy to operate, and functional in any light conditions. The exact converse is true of hemispherical photography, which can be expensive, difficult to operate, and requires very narrow light conditions (I’ll get into the particulars in post 3).

So, why would anyone use hemispherical photos over densiometers?

The main disadvantage of a densiometer is that its estimates are instantaneous snapshots, whereas hemispherical photos can be integrated over time to estimate light continuously. It is easiest to explain this with an example.

Imagine you want to know how much light is received by a specific plant on the forest floor at the very edge of a clearcut (see figure below). A closure estimate from a densiometer would indicate 50% closure no matter the orientation of the tree line. However, if that little plant is in the northern hemisphere, we know that it will receive much less light if the clearcut lies to the north than if the clearcut lies to the south due to the angle and orientation of the sun.

Figure by me, A.Z. Andis.

The advantage of hemispherical photos (if taken with known orientation and location) is that they can be used to integrate light values over time with respect to the direction of the sun relative to gaps in the canopy. This means that with a single photo (or two photos for deciduous canopies) one can calculate the path of the sun and estimate the total light received by the plant in our example at any point in time or cumulatively over a range of time.
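To make the geometry concrete, here is a rough sketch of the solar elevation calculation that underlies this kind of integration. It uses a first-order approximation of solar declination and solar (not clock) time, and the function names are my own, not part of any canopy software:

```python
from math import asin, cos, degrees, pi, radians, sin

def solar_declination(day_of_year):
    """Approximate solar declination (radians) for a given day of year."""
    return -radians(23.44) * cos(2 * pi / 365 * (day_of_year + 10))

def solar_elevation(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation angle (degrees) at a given solar time."""
    lat = radians(lat_deg)
    dec = solar_declination(day_of_year)
    hour_angle = radians(15 * (solar_hour - 12))  # 15 degrees per hour from solar noon
    return degrees(asin(
        sin(lat) * sin(dec) + cos(lat) * cos(dec) * cos(hour_angle)
    ))

# At 45 N on the June solstice (day 172), the sun peaks at roughly 68 degrees
# elevation at solar noon, due south of the zenith.
print(round(solar_elevation(45, 172, 12), 1))  # 68.4
```

Because the sun tops out south of the zenith at northern latitudes, a canopy gap to the south intersects the sun path for hours, while an identical gap to the north may never see direct sun at all.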

An ancillary advantage of photos is that they can be archived along with the scripts used for processing, which makes the entire analysis easily reproducible.

I’ll go much further in depth in later posts, but as a preview, here is a general overview of how light estimates are calculated from hemispherical photos:

  1. Hemispherical photos are captured in the field at a specific location and known compass orientation.
  2. Images are converted to binary using thresholding algorithms, such that each pixel is either black (canopy) or white (sky).
  3. Software can then be used to project a sun path onto the image.
  4. Models parameterized with average direct radiation, indirect radiation, and cloudiness can then estimate the total radiation filtering through the gaps (the white pixels in the photo) at any point in time or averaged over a time range.
  5. The result is a remarkably accurate point estimate of incident light at any time in the season.
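The binarization and gap-fraction idea at the heart of step 2 can be illustrated with a toy example. A fixed cutoff stands in here for the adaptive thresholding algorithms real canopy software uses, and the function names are my own:

```python
def binarize(gray, threshold=128):
    """Threshold a grayscale image (rows of 0-255 ints) to 0/1 pixels.

    1 = sky (bright pixel), 0 = canopy. Real pipelines use adaptive
    thresholding algorithms rather than a fixed cutoff.
    """
    return [[1 if px >= threshold else 0 for px in row] for row in gray]

def gap_fraction(binary):
    """Proportion of sky (white) pixels in a binarized canopy image."""
    total = sum(len(row) for row in binary)
    sky = sum(sum(row) for row in binary)
    return sky / total

# Toy 2x4 "image": half bright sky, half dark canopy.
img = [[250, 240, 30, 20],
       [230, 245, 10, 40]]
print(gap_fraction(binarize(img)))  # 0.5
```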

I’ll be drafting the subsequent posts in the coming weeks, so be sure to check back in!

Be sure to check out my other posts about canopy research that cover the theory, hardware, field sampling, and analysis pipelines for hemispherical photos.

Also, check out my new method of canopy photography with smartphones and my tips for taking spherical panoramas.


References:

Getz, L. L. (1968). A Method for Measuring Light Intensity Under Dense Vegetation. Ecology 49, 1168–1169. doi:10.2307/1934505.

Jennings, S. B., Brown, N. D., and Sheil, D. (1999). Assessing forest canopies and understorey illumination: canopy closure, canopy cover and other measures. Forestry 72, 59–74. doi:10.1093/forestry/72.1.59.

Korhonen, L., Korhonen, K. T., Rautiainen, M., and Stenberg, P. (2006). Estimation of forest canopy cover: a comparison of field measurement techniques. Silva Fennica 40, 577–588. Available at: http://www.metla.fi/silvafennica/full/sf40/sf404577.pdf.

Lemmon, P. E. (1956). A Spherical Densiometer For Estimating Forest Overstory Density. For. Sci. 2, 314–320. doi:10.1093/forestscience/2.4.314.

 
