Wild Idea Podcast

I recently joined my dear friend Bill Hodge on The Wild Idea Podcast for a conversation about ecological resilience, climate adaptation, and how we think about wilderness in a changing world. We covered topics such as road ecology, species adaptation, and the sometimes counterintuitive lessons that emerge when humans step back from the landscape. From wood frogs that freeze solid in winter to the 22-mile rule showing how few truly remote places remain, we explored how human systems, even unintended ones, shape the trajectories of natural systems.

Drawing on my work in evolutionary ecology, wilderness ethics, and machine learning, I reflected on the tension between our desire to intervene and our limited ability to forecast long-term ecological outcomes. Using examples like the Chernobyl exclusion zone—where many species are thriving in the absence of people despite nuclear contamination—I argued that ecological recovery is often less about precision intervention and more about restraint. We discussed how machine learning can help us simulate alternative futures and understand potential tradeoffs, but that ultimately, the most powerful conservation tool may be humility. More wilderness, not more control, might be the best way to meet the uncertainties ahead.

Listen to the episode here or wherever you get your podcasts.

Structural Equation Modeling in R

Introduction

Today we are talking about structural equation models (SEM). There are lots of synonyms, sub-categories, and adjacent techniques that you may have heard before–covariance structure analysis, linear structural relations, path analysis, latent variable modeling, causal modeling, confirmatory factor analysis, exploratory factor analysis, latent growth modeling, mixture/multigroup/hierarchical/multilevel structural modeling, construct analysis, etc., etc…

So many names exist because there are a LOT of things that you can do with these types of models. It turns out that allowing for and testing structure in modeling can help solve lots of problems in research.

I think that the best way to understand SEMs is to start with simple regressions. Let’s consider a regression predicting a rooster crowing from the rising of the sun:

y = b0 + b1x + ε

y ~ Rooster Crowing,
x ~ Sun Rising,
b0 = 0,
ε ~ N(0, 1)

This is a silly regression because we all understand the relationship: the rooster goes cock-a-doodle-do when the sun crests the horizon. But in the language of mathematics, there is no reason I can’t rewrite this equation as:

x = ( y − ε ) / b1

This formulation makes no sense. Basically, we are saying that the rising of the sun is a function of the rooster crowing! Even though this is totally mathematically viable, it defies our common sense of causation.

The language of structural equation modeling gives us a way to impose directional structure on mathematical equations. We usually visualize that language as directed graphical models like the one below.

In a graphical model, the observed variables are displayed as boxes. Unobserved or latent variables are displayed as circles. Constants are triangles. The functional relationship between variables is displayed as directional arrows. Non-directional or double-headed arrows indicate a variance or covariance.

The graphical model above is the same model as the regression in the equation above. Our independent variable x has a linear relationship with the dependent variable y with the slope parameter b1. y has a constant intercept b0 of 0. Finally, the residual variation in y not caused by x is assumed to come from some other unobserved cause with its own variance. Rather than thinking of variables as independent or dependent, we use the terms exogenous and endogenous. Exogenous variables are those (like x) that have no incoming paths; these are the most ‘upstream’ variables in the causal paths. Endogenous variables are those that receive causal paths. We’ll see later that some endogenous variables can also be causes of other endogenous variables, but they are still considered endogenous.

In practice, we generally ignore the peripherals and intercepts in structural models, yielding a simplified graph:
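As an aside, you can draw these graphs in R. Here is a minimal sketch of the simplified x-to-y graph using the tidygraph and ggraph packages (which we load in the next section); the layout and styling choices here are mine, not a convention:

library(tidygraph)
library(ggraph)

# Simplified graph: one directed path from x to y, labeled with its slope
sem_graph <-
  tbl_graph(
    nodes = data.frame(name = c("x", "y")),
    edges = data.frame(from = 1, to = 2, label = "b1")
  )

ggraph(sem_graph, layout = "linear") +
  geom_edge_link(
    aes(label = label),
    angle_calc = "along",
    label_dodge = grid::unit(3, "mm"),
    arrow = arrow(length = grid::unit(4, "mm")),
    end_cap = circle(8, "mm")
  ) +
  geom_node_label(aes(label = name)) +
  theme_void()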

Now that we have a shared language, let’s take a look at a toy example to understand why SEMs can be so useful in research.

Motivating example

You will need a handful of packages for this tutorial:

packages_for_sem_workshop <-
  c(
    'tidyverse', # basic data wrangling
    'tidygraph', # graph visualization
    'ggraph', # graph visualization
    'lavaan', # sem tools
    'piecewiseSEM', # sem tools
    'mgcv', # nonlinear modeling
    'lme4', # random effect modeling
    'cvsem' # cross-validating sems
    )

install_and_load_packages <-
  function(x){
    for( i in x ){
      if( ! require( i , character.only = TRUE ) ){
        install.packages( i , dependencies = TRUE )
        library( i , character.only = TRUE )
      }
    }
  }

install_and_load_packages(packages_for_sem_workshop)

Fitting a simple regression

I find that, for most applied researchers, the language of code is more intuitive than the language of math. So, let’s simulate a toy dataset and see how we can fit the same system as a linear model or an SEM.


simple_ex <-
  data.frame(
    x = runif(n = 100, min = 0, max = 10),
    e = rnorm(n = 100, 0, 1)
  )

simple_ex <- simple_ex %>%
  mutate(
    y = 1 + 2.5*x + e
  )

Now, let’s fit a good old-fashioned linear regression model:

fit_simple_ex_lm <- 
  lm(y ~ x, data = simple_ex)

summary(fit_simple_ex_lm)
> summary(fit_simple_ex_lm)

Call:
lm(formula = y ~ x, data = simple_ex)

Residuals:
     Min       1Q   Median       3Q      Max 
-2.61653 -0.49110 -0.01622  0.51680  2.76976 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  1.08641    0.17916   6.064 2.49e-08 ***
x            2.52441    0.02871  87.913  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.8972 on 98 degrees of freedom
Multiple R-squared:  0.9875,	Adjusted R-squared:  0.9874 
F-statistic:  7729 on 1 and 98 DF,  p-value: < 2.2e-16

We can fit the same model using the lavaan package. The lavaan syntax is to pass the list of model formulas as a single character string, with formulas separated by newlines.

simple_ex_sem <-
  '
  y ~ x
  y ~ 1
'

fit_simple_ex_sem <-
  sem(model = simple_ex_sem,
      data = simple_ex)

summary(fit_simple_ex_sem)
> summary(fit_simple_ex_sem)
lavaan 0.6-18 ended normally after 1 iteration

  Estimator                                         ML
  Optimization method                           NLMINB
  Number of model parameters                         3

  Number of observations                           100

Model Test User Model:
                                                      
  Test statistic                                 0.000
  Degrees of freedom                                 0

Parameter Estimates:

  Standard errors                             Standard
  Information                                 Expected
  Information saturated (h1) model          Structured

Regressions:
                   Estimate  Std.Err  z-value  P(>|z|)
  y ~                                                 
    x                 2.524    0.028   88.805    0.000

Intercepts:
                   Estimate  Std.Err  z-value  P(>|z|)
   .y                 1.086    0.177    6.125    0.000

Variances:
                   Estimate  Std.Err  z-value  P(>|z|)
   .y                 0.789    0.112    7.071    0.000

Lavaan gives us an informative output. We see that the parameters are estimated with ‘ML’ (maximum likelihood), whereas lm estimates via ordinary least squares (OLS). This means that estimates might differ slightly. We will trust the default optimizer ‘NLMINB‘.

We see that the model is estimating 3 parameters–the regression slope, the intercept, and the residual variance. Note that we typically do not estimate the intercepts in SEMs (I’m doing so here for continuity). To fit the SEM without the intercept, simply remove the 'y ~ 1' from the model list.
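For instance, a minimal intercept-free version of the same model looks like this:

# Same regression, but without estimating the intercept
simple_ex_sem_no_intercept <-
  '
  y ~ x
'

fit_simple_ex_sem_no_intercept <-
  sem(model = simple_ex_sem_no_intercept,
      data = simple_ex)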

We will pass over the Model Tests and focus on the parameter estimates. The slope estimate of 2.524 and the intercept estimate of 1.086 perfectly match our lm estimates. We don’t get an estimate of the residual variance from lm, but we can extract the residuals from the lm model and calculate the variance ourselves.

fit_simple_ex_lm %>%
  resid() %>%
  var()
> fit_simple_ex_lm %>%
+   resid() %>%
+   var()
[1] 0.7968917

That value is slightly different because lavaan’s ML estimate of the residual variance divides the sum of squared residuals by n, while var() divides by n − 1 (0.7969 × 99/100 ≈ 0.789).
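We can confirm this by computing the ML-style variance (dividing by n) from the lm residuals ourselves:

# lavaan's ML residual variance divides by n instead of n - 1
sum(resid(fit_simple_ex_lm)^2) / nobs(fit_simple_ex_lm)
# [1] 0.7889228 -- matching lavaan's estimate of 0.789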

Fitting complex systems

Now that we’ve seen how linear regression models can be fit as a special case of SEM, let’s take a look at an example that shows where SEM surpasses linear regression.

Again, we will use simulated data, but we’ll download the data so that we can’t see the underlying structure used to generate it. Instead, we’ll use SEMs to determine the structure.

source("https://raw.githubusercontent.com/andisa01/202407_SEM_turorial/main/scripts/SEM_tutorial_example_source.R")

The traditional approach

First, let’s analyze this data using linear regression with stepwise variable selection, the technique used by almost half of all ecological and animal behavior researchers (including my past self). In this methodology, we start with a ‘full’ model including all variables and their interactions. Then, we drop non-significant parameters, starting with the least parsimonious, and refit the model until only significant parameters remain.

# Fit the full or 'global model'
mod_ex01_full <- lm(Y ~ X1 + X2 + X1:X2, data = example01_data_anon)
summary(mod_ex01_full)
> summary(mod_ex01_full)

Call:
lm(formula = Y ~ X1 + X2 + X1:X2, data = example01_data_anon)

Residuals:
     Min       1Q   Median       3Q      Max 
-15.5935  -3.7755   0.1861   3.6929  13.7593 

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.163110   7.944663  -0.146    0.884
X1          -0.264413   0.387296  -0.683    0.496
X2           0.155742   0.109397   1.424    0.156
X1:X2        0.002022   0.003950   0.512    0.609

Residual standard error: 5.066 on 196 degrees of freedom
Multiple R-squared:  0.144,	Adjusted R-squared:  0.1309 
F-statistic: 10.99 on 3 and 196 DF,  p-value: 1.06e-06

None of our parameters are significant, so we drop the interaction term.

# Drop the interaction
lm(Y ~ X1 + X2, data = example01_data_anon) %>% 
  summary()
> lm(Y ~ X1 + X2, data = example01_data_anon) %>% 
+   summary()

Call:
lm(formula = Y ~ X1 + X2, data = example01_data_anon)

Residuals:
     Min       1Q   Median       3Q      Max 
-15.5962  -3.8684   0.1564   3.6213  13.8291 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)   
(Intercept) -4.73378    3.79270  -1.248  0.21346   
X1          -0.08449    0.16216  -0.521  0.60295   
X2           0.19734    0.07308   2.701  0.00753 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 5.057 on 197 degrees of freedom
Multiple R-squared:  0.1428,	Adjusted R-squared:  0.1341 
F-statistic: 16.41 on 2 and 197 DF,  p-value: 2.555e-07

Now, x2  is significant, but x1 is not. So, we drop x1.

lm(Y ~ X2, data = example01_data_anon) %>% 
  summary()
> lm(Y ~ X2, data = example01_data_anon) %>% 
+   summary() # Drop X1

Call:
lm(formula = Y ~ X2, data = example01_data_anon)

Residuals:
     Min       1Q   Median       3Q      Max 
-15.4915  -3.8082  -0.0172   3.6272  13.5921 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -3.28441    2.57317  -1.276    0.203    
X2           0.16227    0.02839   5.716 3.97e-08 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 5.047 on 198 degrees of freedom
Multiple R-squared:  0.1416,	Adjusted R-squared:  0.1373 
F-statistic: 32.67 on 1 and 198 DF,  p-value: 3.966e-08

And, now we have a minimum adequate model to explain the system! We can explain 14% of the variation of y with x2. For every 1 unit increase in x2, we expect y to increase by 0.16. And look at that p-value! Let’s go publish a paper!!!

In fact, this method of stepwise variable selection is so common that you can do it all in one command with dredge() from the MuMIn package.

MuMIn::dredge(mod_ex01_full)
> MuMIn::dredge(mod_ex01_full)
Fixed term is "(Intercept)"
Global model call: lm(formula = Y ~ X1 + X2 + X1:X2, data = example01_data_anon)
---
Model selection table 
   (Int)       X1     X2    X1:X2 df   logLik   AICc delta weight
3 -3.284          0.1623           3 -606.551 1219.2  0.00  0.626
4 -4.734 -0.08449 0.1973           4 -606.413 1221.0  1.81  0.254
8 -1.163 -0.26440 0.1557 0.002022  5 -606.279 1222.9  3.64  0.101
2  4.870  0.31890                  3 -610.048 1226.2  6.99  0.019
1 11.280                           2 -621.824 1247.7 28.49  0.000
Models ranked by AICc(x) 

The complexity of complex systems

The problem with the linear regression approach is that there is an inherent assumption that the independent variables are, well… independent. But, in natural systems, there are almost always relationships between the independent variables. For instance, here are just a few structures that could underlie the relationship between y, x1, and x2.

The model structure we derived in our stepwise regression model is the single effect graph in the top left. The model implied by the multiple regression model is a ‘common effect’ structure where the exogenous variables are uncorrelated–in other words x1 and x2 have independent effects on y.

But other structures could exist that we cannot easily capture in regression. For instance, x1 and x2 might have independent effects on y but remain correlated with each other (common effect with correlation (bottom right)). Or x1 may have no direct effect on y, but may affect x2, which affects y in turn (this is called a chain or fully mediated effect (top middle)). Or x1 might directly affect y in addition to the indirect effect (partial mediation (top right)).

x2 may not have any effect on y at all, but may still covary because both are directly affected by x1 (common cause (bottom left)).

Using SEMs to compare structural hypotheses

Given all the possible structures, how can we ever know which governs our study system? Well, the first pass is to use common sense. If x1 is height and x2 is weight, the principle of allometry should have us exclude any model without a relationship between them.

The second pass is to use knowledge of your system from the literature. For example, if x1 is maternal genotype and x2 is the F1 (progeny) genotype, there can be no direct effect of the maternal genotype on y if the species has an annual life cycle, but the effect might be partially mediated for perennial species.

Once we have a set of plausible structural hypotheses, we can use the mechanics of SEM to ask which structure best fits the data.

For now, we will assume that all six hypotheses above are plausible. We’ll fit each in turn. To do so, I’ll introduce a new operator, ~~, which denotes the undirected covariation between two variables (i.e. covariance) or of a variable with itself (i.e. variance).

# single effect
ex01_formula_x2effect <- '
Y ~ X2
X1 ~~ X1
'

ex01_sem_x2effect <- sem(ex01_formula_x2effect, data = example01_data_anon)

summary(ex01_sem_x2effect)
> summary(ex01_sem_x2effect)
lavaan 0.6-18 ended normally after 8 iterations

  Estimator                                         ML
  Optimization method                           NLMINB
  Number of model parameters                         3

  Number of observations                           200

Model Test User Model:
                                                      
  Test statistic                               377.733
  Degrees of freedom                                 2
  P-value (Chi-square)                           0.000

Parameter Estimates:

  Standard errors                             Standard
  Information                                 Expected
  Information saturated (h1) model          Structured

Regressions:
                   Estimate  Std.Err  z-value  P(>|z|)
  Y ~                                                 
    X2                0.162    0.028    5.745    0.000

Variances:
                   Estimate  Std.Err  z-value  P(>|z|)
    X1               32.093    3.209   10.000    0.000
   .Y                25.220    2.522   10.000    0.000

We won’t use the summary output for now, so I will exclude it when fitting the rest of the models.

# chain
ex01_formula_chain <- '
X2 ~ X1
Y ~ X2
'

ex01_sem_chain <- sem(ex01_formula_chain, data = example01_data_anon)
# partially mediated
ex01_formula_mediator <- '
X2 ~ X1
Y ~ X2
Y ~ X1
'

ex01_sem_mediator <- sem(ex01_formula_mediator, data = example01_data_anon)
# common cause
ex01_formula_commoncause <- '
X2 ~ X1
Y ~ X1
'

ex01_sem_commoncause <- sem(ex01_formula_commoncause, data = example01_data_anon)
# common effect (uncorrelated)
ex01_formula_commoneffect <- '
Y ~ X1
Y ~ X2
'

ex01_sem_commoneffect <- sem(ex01_formula_commoneffect, data = example01_data_anon)
# common effect (correlated)
ex01_formula_commoneffect2 <- '
Y ~ X1
Y ~ X2
X1 ~~ X2
'

ex01_sem_commoneffect2 <- sem(ex01_formula_commoneffect2, data = example01_data_anon)

We’ll use the anova() command to get some summary statistics on all of the models.

anova(
  ex01_sem_chain, 
  ex01_sem_commoneffect, 
  ex01_sem_commoncause, 
  ex01_sem_x2effect, 
  ex01_sem_commoneffect2, 
  ex01_sem_mediator
)
Chi-Squared Difference Test

                       Df    AIC    BIC    Chisq Chisq diff RMSEA Df diff Pr(>Chisq)    
ex01_sem_commoneffect   0 1218.8 1228.7   0.0000                                        
ex01_sem_commoncause    0 2425.5 2442.0   0.0000       0.00 0.000       0               
ex01_sem_commoneffect2  0 3688.8 3708.6   0.0000       0.00 0.000       0               
ex01_sem_mediator       0 2425.5 2442.0   0.0000       0.00 0.000       0               
ex01_sem_chain          1 2423.8 2437.0   0.2754       0.28 0.000       1     0.5997    
ex01_sem_x2effect       2 2480.4 2490.3 377.7330     377.46 1.372       1     <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Warning message:
lavaan->lavTestLRT():
   some models have the same degrees of freedom

The models are not mutually nested, so we can’t interpret the chi-squared test. However, we can use the two information criteria (AIC and BIC) since those are agnostic to the particular models in comparison. Both AIC and BIC indicate that the uncorrelated common effect structure is the most likely model.
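Since lavaan fits respond to the generic AIC() and BIC() functions, a compact comparison table is easy to build. A minimal sketch using the fits above:

# Collect AIC and BIC for each candidate structure
model_list <- list(
  chain         = ex01_sem_chain,
  commoneffect  = ex01_sem_commoneffect,
  commoncause   = ex01_sem_commoncause,
  x2effect      = ex01_sem_x2effect,
  commoneffect2 = ex01_sem_commoneffect2,
  mediator      = ex01_sem_mediator
)

data.frame(
  AIC = sapply(model_list, AIC),
  BIC = sapply(model_list, BIC)
)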

That’s not the end of the story, though…

Cross-validating SEMs

As an applied data scientist, I’ve become very skeptical of parametric goodness-of-fit metrics. After all, I can fit a neural net that perfectly fits any dataset by memorizing it. What I am really interested in is how well a model can make accurate predictions.

Ideally, we would fit our model on one dataset and then collect separate data to validate its predictive accuracy. Given the limitations of field research, however, a more common approach is to split a single dataset into two groups: one for fitting and one for validation.

This simple form of validation only gives us one chance to measure accuracy (or the lack thereof, i.e. error). To make better use of our data and get more stable error estimates, it is standard practice to split the dataset into a number of evenly sized, randomly sampled subsets (i.e. folds) and repeat the fit-and-validate cycle once per fold, each time holding out a different fold. This is called k-fold cross-validation.

We can use the cvsem package to easily conduct a 10-fold cross-validation on all of our models.

# Cross validating
models_to_cv <- 
  cvgather(
    ex01_sem_chain, 
    ex01_sem_commoneffect, 
    ex01_sem_commoncause, 
    ex01_sem_x2effect, 
    ex01_sem_commoneffect2, 
    ex01_sem_mediator
    )

cvsem(
  data = example01_data_anon,
  Models = models_to_cv,
  k = 10
)
Cross-Validation Results of 6 models 
based on  k =  10 folds. 

                   Model E(KL-D)   SE
1         ex01_sem_chain    0.32 0.16
3   ex01_sem_commoncause    0.33 0.16
6      ex01_sem_mediator    0.33 0.16
4      ex01_sem_x2effect    5.90 1.32
5 ex01_sem_commoneffect2    7.16 1.23
2  ex01_sem_commoneffect    7.20 1.23

Both of the ‘common effect’ structures exhibit the worst predictive performance. They have the highest errors (by default, the error is calculated via KL-divergence: the difference between the covariance matrix of the validation set and the covariance matrix implied by the fitted model). To be fair, a very quick look at the relationship between x1 and x2 could have told us that any model that fails to account for the relationship between these two variables is not realistic.

The next highest error comes from the model suggested by our initial stepwise regression method. Thus, based on predictive performance, we can conclusively exclude these hypothesized structures.

The chain, common cause, and mediator structures all have similar predictive accuracy. How can we decide which is the true structure of our system? Unfortunately, there is no statistical way to differentiate them with our data. However, we can still use the structural model to help us design further experiments to select the final model. For example, we could envision an experiment in which we hold x2 constant in one set of plots and let it vary in another.

If the system were a common cause structure, holding x2 constant would produce no difference in the y values between plots. However, if the system were a fully mediated chain through x2, holding x2 would completely decouple the association between x1 and y, whereas controlling x2 in a partially mediated system would only attenuate the covariation between x1 and y.
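To make that intervention logic concrete, here is a toy simulation of a fully mediated chain (all parameter values are made up for illustration):

# Simulate a fully mediated chain: x1 -> x2 -> y
set.seed(42)
n  <- 1000
x1 <- rnorm(n)
x2 <- 2 * x1 + rnorm(n)
y  <- 3 * x2 + rnorm(n)
cor(x1, y)  # strong association, carried entirely through x2

# Intervention: hold x2 constant at its mean
y_held <- 3 * mean(x2) + rnorm(n)
cor(x1, y_held)  # ~0; holding x2 decouples x1 and y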

In our toy example, the true causal structure will be easy to diagnose when I tell you what the variables really are:

y ~ Shark attacks,
x1 ~ Weather,
x2 ~ Ice cream sales  

Estimating SEM coefficients

You might be tempted to take the SEM for the common cause structure and use that in your paper. I’d advise against it. Since we used all of our data in estimating the structure of the system, we don’t want to reuse the same data to estimate parameter coefficients. That would be double-dipping on our data. Having our cake and eating it too.

Instead, we need to either collect additional data (ideal) or reserve a split of our original data to use as the final fitting data. Since this is a simulated dataset, we can ‘collect’ all of the new data we want.

set.seed(666)

n <- 200

weather <- rnorm(n, mean = 20, sd = 5)

ice_cream_sales <- 50 + 2 * weather + rnorm(n, mean = 0, sd = 5)

shark_attacks <- 5 + 0.3 * weather + rnorm(n, mean = 0, sd = 5)

example01_newdata <- data.frame(shark_attacks, weather, ice_cream_sales)
# common cause
ex01_new_commoncause <- '
ice_cream_sales ~ weather
shark_attacks ~ weather
# shark_attacks ~ 1
'

ex01_sem_new_commoncause <- sem(ex01_new_commoncause, data = example01_newdata)

summary(ex01_sem_new_commoncause)
> summary(ex01_sem_new_commoncause)
lavaan 0.6-18 ended normally after 10 iterations

  Estimator                                         ML
  Optimization method                           NLMINB
  Number of model parameters                         5

  Number of observations                           200

Model Test User Model:
                                                      
  Test statistic                                 0.000
  Degrees of freedom                                 0

Parameter Estimates:

  Standard errors                             Standard
  Information                                 Expected
  Information saturated (h1) model          Structured

Regressions:
                    Estimate  Std.Err  z-value  P(>|z|)
  ice_cream_sales ~                                    
    weather            2.131    0.065   32.853    0.000
  shark_attacks ~                                      
    weather            0.419    0.061    6.852    0.000

Covariances:
                     Estimate  Std.Err  z-value  P(>|z|)
 .ice_cream_sales ~~                                    
   .shark_attacks       3.026    1.644    1.841    0.066

Variances:
                   Estimate  Std.Err  z-value  P(>|z|)
   .ice_cream_sals   24.452    2.445   10.000    0.000
   .shark_attacks    21.727    2.173   10.000    0.000

Our coefficient estimates are close to, but not spot on, the true parameter values we simulated. The reason is that the simulated errors are fairly large (sd = 5). Rather than fitting the final model once, we can borrow another technique from machine learning and repeatedly fit bootstrap replicates. This will both stabilize our estimates and provide a convenient, non-parametric confidence interval.
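As a preview, here is a minimal sketch of that idea: resample rows with replacement, refit, and summarize the replicate coefficients. The 500-replicate count is arbitrary, and lavaan can also bootstrap internally via sem(..., se = "bootstrap").

# Refit the common cause model on bootstrap resamples of the data
boot_coefs <- replicate(500, {
  boot_rows <- sample(nrow(example01_newdata), replace = TRUE)
  coef(sem(ex01_new_commoncause, data = example01_newdata[boot_rows, ]))
})

# Median estimates with non-parametric 95% confidence intervals
apply(boot_coefs, 1, quantile, probs = c(0.025, 0.5, 0.975))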

Bootstrapping SEM coefficients

Content to come!

Extensions to SEM

Content to come!

Latent variables

Content to come!

Composite variables

Content to come!

Non-linear relationships

Content to come!

Hierarchical relationships

Content to come!

Background

This tutorial was originally developed for a graduate workshop at Yale University on 25 July 2024.

Text analysis using AI in R

Introduction

Analyzing qualitative data is challenging. Such analyses are even more difficult when the topic is controversial and the results will drive important policy decisions. This post explores AI methods for qualitative research: chatGPT for categorization, embeddings to find hidden topics, and long-context summarization with Claude2, applied to a case study analyzing free-text public comments on a controversial Environmental Impact Statement decision.

Background

Quite a while ago, I detailed why replacing wolves on Isle Royale National Park was a bad policy decision backed by even worse science. Since then, the National Park Service (NPS) decided to commit to wolf replacement anyway, dropping 19 new wolves on the island in 2018 and 2019. The results were as expected. The new wolves killed the last original male wolf in 2019, almost certainly ensuring that the new wolf population will be genetically disconnected from the prior population. Of the 20 wolves that NPS attempted to relocate, one died before making it to the island, one voluntarily crossed the ice back to the mainland*, and four others died by the end of 2019. The surviving 14 wolves successfully bred and the population now stands at 31. So, in the end, we have a new, synthetic wolf population that is entirely disjunct from a genetic and ecological perspective. As I predicted in my original post: “in reality, this is not a genetic rescue project, it is a genetic replacement project,” which violates both the scientific and management purpose of the Park.

* This contradicts one of the primary justifications for replacing the wolves. Proponents argued that the lack of ice due to climate change would make natural repopulation impossible.

But neither science nor policy drove NPS’s decision. Management of charismatic mammals, especially in a well-known National Park, is largely a matter of public sentiment. In fact, it is a codified part of the decision process. Federal managers are required to seek public comments as part of the NEPA process.

In general, I am a huge supporter of public voices in important conservation decisions (I’ve even written papers advocating for it). But, sometimes I worry about how advocacy groups can skew the perception of organic public sentiment. That’s what I’d like to analyze in this post.

All of the public comments submitted to NPS on the Isle Royale wolf-moose management plan are public record. You can download and read all 1117 pages of comments.

But 1117 pages is a lot of text to read and digest. In this post, I want to show how you can easily process lots of text using AI (both generative large-language models (LLM), like chatGPT, and LLM embeddings) to make quantitative (or semi-quantitative) analyses.

Basic analyses

Visit my GitHub repo for this project for a fully reproducible analysis.

First, we’ll set up the environment and load in necessary packages.

# Load libraries
library(pdftools) # We will use 'pdftools' to convert the pdf to plain text
library(tidyverse)
library(stringr)
library(RColorBrewer)

# Set up the directory structure:
make_new_dir <- 
     function(DIR_TO_MAKE){
          if(dir.exists(DIR_TO_MAKE) == FALSE){
               dir.create(DIR_TO_MAKE)
          }else{
               print("Directory exists")
          }
     }

make_new_dir("./data/")
make_new_dir("./figs/")

We can download the comments from the NPS website.

download.file(
     url = "https://parkplanning.nps.gov/showFile.cfm?projectID=59316&MIMEType=application%252Fpdf&filename=ISRO%5FMWVPlan%5FAllCorrespondence%5FPEPC%2Epdf&sfid=232552",
     destfile = "./data/ISRO_MWVPlan_AllCorrespondence_PEPC.pdf",
mode = "wb"
)

The first step in analyzing the public comments is to parse the pdf into text. This is a tedious process. I won’t show it here, but you can follow all of the steps in my GitHub repo for this project.

Example comment from the formatted PDF document.

You can download my pre-processed dataset to short-cut the PDF parsing steps.

download.file(
     url = "https://www.azandisresearch.com/wp-content/uploads/2023/09/EIS_comments.csv",
     destfile = "./data/EIS_comments.csv"
)

EIS_comments <- read.csv("./data/EIS_comments.csv")

The formatting follows the same structure for every comment. I’ve extracted the comment ID, received date-time, correspondence type, and correspondence text into a dataframe (columns ID, Received, Correspondence, and Content). I’ve also truncated the longest comments (…comment 68 looks like someone copy and pasted their term paper) to 12,000 characters. This will be important later because the context window for chatGPT is 4,000 tokens.
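The truncation step itself amounts to something like this; the use of str_trunc() here is my reconstruction, not necessarily the repo’s exact code:

# Truncate over-long comments so prompts fit in the model's context window
EIS_comments <- EIS_comments %>%
  mutate(Content = str_trunc(Content, width = 12000))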

EIS_comments %>% glimpse()
Rows: 2,776
Columns: 4
$ ID             <dbl> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29,…
$ Received       <dttm> 2015-07-12 20:45:30, 2015-07-14 23:18:34, 2015-07-15 12:03:55, 2015-07-15 13:14:52, 2015-07-15 13:35:47, …
$ Correspondence <chr> "Web Form Correspondence", "Web Form Correspondence", "Web Form Correspondence", "Web Form Correspondence"…
$ Content        <chr> "The alternatives are complete enough as a starting point. The issues will be related to the details. The …

We can do some basic summary analysis on these initial variables. Most comments were submitted in the week before the comment deadline on September 1. The vast majority of comments were received through the web form; less than 10% of comments were physical letters, and 51 of the 2,776 comments were form cards given to Park visitors.
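The area plot below can be built along these lines; this is a sketch, with the date handling and styling my own guesses, using the columns from the glimpse above:

# Cumulative comment totals over time, by correspondence type
EIS_comments %>%
  mutate(Received = as.Date(Received)) %>%
  count(Received, Correspondence) %>%
  group_by(Correspondence) %>%
  arrange(Received, .by_group = TRUE) %>%
  mutate(cumulative_total = cumsum(n)) %>%
  ggplot(aes(x = Received, y = cumulative_total, fill = Correspondence)) +
  geom_area(position = "identity", alpha = 0.7) +
  scale_fill_brewer(palette = "Set2") +
  labs(x = "Date received", y = "Cumulative comments", fill = "Correspondence type")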

Area plot of the cumulative total comments over time for each correspondence type.

Often, large influxes of web and email comments are the product of advocacy groups encouraging their members to submit pre-written comments. I’ve used this tactic myself in conservation campaigns, so I won’t cast aspersions. But I’ll also be the first to admit that a copy-and-pasted form letter is far less sincere than a uniquely crafted opinion.

After checking for matches among the comments, it is clear that there were two archetypal pre-written texts. These include 733 nearly identical comments in favor of wolf replacement (i.e. Alternative B), likely from the National Parks Conservation Association:

EIS_comments %>%
+   filter(grepl("I care about the wildlife at our national parks, including the wolves and moose at Isle Royale. Right now there are only three", Content)) %>%
+   group_by(Content) %>%
+   tally() %>%
+   arrange(desc(n)) %>%
+   ungroup() %>%
+   filter(row_number() == 1) %>%
+   .$Content %>% 
+   cat()
Dear Superintendent Green, I care about the wildlife at our national parks, including the wolves and moose at Isle Royale. Right now there are only three wolves left at the park- -the lowest number of wolves in more than 50 years- -threatening the overall ecosystem health of this iconic national park. I support management Alternative B to bring new wolves to the island, but urge the Park Service to do this as needed, rather than one time only. Without wolves, the moose population on the island will continue to increase, eating until the food sources are gone. If we bring new wolves to the island, they will help keep the moose population from rapidly expanding and minimize impacts to the native vegetation. This option is much less intrusive in this wilderness park than culling moose, removing moose from the island, or having to replant native vegetation once the moose consume it. As stewards of this park, the National Park Service should take the least intrusive action that results in the biggest benefit to the island's wildlife and ecosystem. I support the Park Service taking action to bring new wolves to the park immediately, before the population vanishes altogether. Thank you for considering my concerns. Sincerely,

And 55 nearly identical comments in favor of Wilderness (i.e. Alternative A), likely from Wilderness Watch:

EIS_comments %>%
+   filter(grepl("Isle Royale's wilderness designation requires that we protect the area's unmanipulated, untrammeled wilderness character. Wild", Content)) %>%
+   group_by(Content) %>%
+   tally() %>%
+   arrange(desc(n)) %>%
+   ungroup() %>%
+   filter(row_number() == 1) %>%
+   .$Content %>% 
+   cat()
Isle Royale's wilderness designation requires that we protect the area's unmanipulated, untrammeled wilderness character. Wilderness designation means we let Nature call the shots. Transplanting wolves from the mainland to Isle Royale is a major manipulation of the Isle Royale Wilderness and must not be done. Alternative Concept A, the No Action Alternative, is the best alternative to protect Isle Royale's unmanipulated, untrammeled wilderness character.

It is important to flag these duplicated comments because the methods that we will use later on will not behave correctly with nearly identical strings.

EIS_comments_deduplicated <- 
     EIS_comments %>%
     # Remove comments with no content
     filter(!is.na(Content)) %>%
     # Flag the web form duplicates
     mutate(form_duplicate = ifelse(grepl("I care about the wildlife at our national parks, including the wolves and moose at Isle Royale. Right now there are only three", Content), "for Alt B", NA)) %>%
     mutate(form_duplicate = ifelse(grepl("Isle Royale's wilderness designation requires that we protect the area's unmanipulated, untrammeled wilderness character. Wild", Content), "for Alt A", form_duplicate)) %>%
     # Form duplicates are not exact matches
     mutate(Content_dup = ifelse(is.na(form_duplicate), Content, form_duplicate)) %>%
     group_by(Content_dup) %>%
     # Retain one of the duplicate sets
     slice_sample(n = 1)

After removing the duplicates and cleaning the data, we are left with 1970 unique comments.

Text analysis with chatGPT

Now, we can start analyzing the content. There are many ways we could do this, depending on the question we want to answer. For instance, maybe we want to see which comments naturally group together, to find common themes. Traditionally, a common way to do this type of natural language processing would be an approach like Latent Dirichlet allocation (LDA) topic analysis, which groups comments by the tf-idf values of the word stems they contain. (I cover tf-idf in a previous post.) But one problem with this approach is that the context of words is lost.

If we want to capture the context of the text, we might try using word embeddings from an LLM like GPT. We’ll try this approach later.

In our case, maybe we just want to know how many comments support a given policy. It would be hard to answer that from the embeddings ourselves, but we could treat GPT as an agent that reads and categorizes comments by preferred policy alternative.

We’ll use two packages: httr helps us interact with the chatGPT API, which speaks in JSON format, and jsonlite helps us parse the formatted prompts and responses.

library(httr)
library(jsonlite)

Working with chatGPT is a lot like working with a new intern. Like a new intern, it has no prior contextual understanding of our specific task, so we have to be very explicit with our directions. On the bright side, our chatGPT intern has endless patience and never sleeps!

We will be interacting with chatGPT through the API, which differs from the dialectical way most people use chatGPT. We need to engineer our prompt to get a robust response in exactly the same format, every time. We can do that by passing quite a bit of context into our prompt and giving specific directions for the output, with examples. Here is the prompt we’ll use:

You are a federal employee tasked with reading the following comment submitted by a member of the public in response to the The Isle Royale National Park Moose-Wolf-Vegetation Management Plan/EIS. The Plan/EIS is a document that evaluates management alternatives for the moose and wolf populations on the island National Park land.
Management alternatives include:

- Alternative A: No Action. Continue the current management of letting nature take its course, without any intervention or manipulation of the moose or wolf populations or their habitats.
- Alternative B: Immediate Wolf Introduction. Introduce 20-30 wolves over a three-year period, starting as soon as possible to reduce the moose population and its impacts on vegetation.
- Alternative C: Wolf Introduction after Thresholds are Met. Introduce wolves if certain thresholds are met, such as the extirpation of wolves, the overabundance of moose, or the degradation of vegetation. The number and timing of wolf introductions would depend on the conditions at the time.
- Alternative D: Moose Reduction and Wolf Assessment. Reduce the moose population by lethal and non-lethal means, such as hunting, contraception, or relocation. The goal would be to lower the moose density to a level that would allow vegetation recovery and assessing introducing wolves to the island in the future.

Here is the text of the public comment: '[INSERT COMMENT TEXT]'.

State which alternative the commenter is most likely to favor (A, B, C, D).
State if the comment is 'For', 'Against', or 'Neutral' on wolf introductions.
State the strength of the commenter's opinion on a scale from 'Extremely strong', 'Very strong', 'Strong', 'Somewhat strong', or 'Mild'.

Produce the output in json format like this:
{
"favored_alternative": "",
"wolf_opinion": "",
"opinion_strength": ""
}

ChatGPT 3.5 costs $0.002 per 1,000 tokens. We can use the OpenAI tokenizer to estimate the number of tokens constituting our input prompt.

Example output from OpenAI's tokenizer for our prompt.

Our input is 420 tokens and the output should be less than 50 tokens, so we can round up and assume 500 tokens per query. That means it will cost about $1 to process 1,000 comments. Much cheaper than paying a human!
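As a quick sanity check on that arithmetic:

# ~500 tokens per query at $0.002 per 1,000 tokens
tokens_per_query <- 500
cost_per_query   <- tokens_per_query / 1000 * 0.002  # $0.001
cost_per_query * 1000                                # ~$1 per 1,000 comments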

In the old days, you could pass a list of inputs to the chatGPT ‘completions’ model all at once. This is no longer possible: the ‘chat/completions’ API requires looping through the inputs and making individual requests. Unfortunately, the API often fails or hits the request rate limit, so we need to be smart about staging and error handling in this larger loop. The structure of the loop is: define the prompt; wait 18 seconds to avoid the rate limit; run a tryCatch block to test whether the API call fails (if so, skip to the next record and log the record the error occurred on); otherwise, parse the response and store the output in a file.

After getting initial responses, I also want to rerun 500 randomly selected comments in order to check chatGPT’s consistency. This is a critical part of using a generative model in quantitative analysis. I’ll talk more about this later.

Here’s the loop. It will take quite a while depending on your rate limit, so I’d suggest either running it overnight or putting it on a remote server. Because we write each response out to file, there’s no problem if it fails: just note the number of the last successful iteration (which will be printed to the screen) and start back up there.

set.seed(7097)

# Randomly select 500 records to resample
IDs_to_resample <- sample(unique(EIS_comments_deduplicated$ID), 500, replace = FALSE)
ID_list <- c(unique(EIS_comments_deduplicated$ID), IDs_to_resample)

# Create a vector to store failed IDs
failed_ids <- c()

# On a restart, overwrite ID_list with just the IDs that still need
# processing, e.g.: ID_list <- Still_need_IDs

for (i in 1:length(ID_list)) {
  ID_number = ID_list[i]
  # Define the prompt
  prompt_content <- paste0( "Here is the text of the public comment: '", EIS_comments_deduplicated %>%
        filter(ID == ID_number) %>%
        .$Content,
      "'.
    State which alternative the commenter is most likely to favor (A, B, C, D).
State if the comment is 'For', 'Against', or 'Neutral' on wolf introductions.
State the strength of the commenter's opinion on a scale from 'Extremely strong', 'Very strong', 'Strong', 'Somewhat strong', or 'Mild'.
Produce the output in json format like this:\n{\n\"favored_alternative\": \"\",\n\"wolf_opinion\": \"\",\n\"opinion_strength\": \"\"\n}"
    )
  
  # Initialize gpt_response
  gpt_response <- NULL
  
  # With my account, I can make 3 requests per minute. To avoid denied API calls, I add an 18 second pause in each loop.
  Sys.sleep(18)
  
  tryCatch({
    # Call GPT for a response
    gpt_response <- 
      POST(
        url = "https://api.openai.com/v1/chat/completions", 
        add_headers(Authorization = paste0("Bearer ", read_lines("../credentials/openai.key"))),
        content_type_json(),
        encode = "json",
        body = list(
          model = "gpt-3.5-turbo",
          messages = list(
            list(
              "role" = "system",
              "content" = "You are a federal employee tasked with reading the following comment submitted by a member of the public in response to the The Isle Royale National Park Moose-Wolf-Vegetation Management Plan/EIS. The Plan/EIS is a document that evaluates management alternatives for the moose and wolf populations on the island National Park land.
Management alternatives include:
- Alternative A: No Action. Continue the current management of letting nature take its course, without any intervention or manipulation of the moose or wolf populations or their habitats.
- Alternative B: Immediate Wolf Introduction. Introduce 20-30 wolves over a three-year period, starting as soon as possible to reduce the moose population and its impacts on vegetation.
- Alternative C: Wolf Introduction after Thresholds are Met. Introduce wolves if certain thresholds are met, such as the extirpation of wolves, the overabundance of moose, or the degradation of vegetation. The number and timing of wolf introductions would depend on the conditions at the time.
- Alternative D: Moose Reduction and Wolf Assessment. Reduce the moose population by lethal and non-lethal means, such as hunting, contraception, or relocation. The goal would be to lower the moose density to a level that would allow vegetation recovery and assessing introducing wolves to the island in the future."
            ),
            list(
              "role" = "user",
              "content" = prompt_content
            )
          )
        )
      )
    print(paste0("API call successful for ID: ", ID_number, ", index: ", i))
  }, error = function(e) {
    # Handle API call errors
    cat("API call failed for ID: ", ID_number, ", index: ", i, "\n")
    # Use <<- so the assignment escapes the handler's local scope
    failed_ids <<- c(failed_ids, ID_number)
  })
  
  # If the API call was successful, proceed with data wrangling and output
  if (!is.null(gpt_response)) {
    # parse the response object as JSON
    content <- content(gpt_response, as = "parsed")
    
    # Assign the ID to the GPT response
    gpt_response_df <- data.frame(
      response_id = ID_number,
      gpt_response = content$choices[[1]]$message$content
    )
    
    # Convert the JSON to a dataframe and join to the record data
    output <- bind_cols( EIS_comments_deduplicated %>%
        filter(ID == ID_number),
      fromJSON(gpt_response_df$gpt_response) %>% 
        as.data.frame()
    ) %>%
      mutate(response_created_time = Sys.time())
    
    # Append the data to the extant records and write the output to a file. (Doing this within the loop is a bit less memory efficient, but it means no completed work is lost if the loop fails.)
    if (!file.exists("./EIS_GPT_responses.csv")) {
      write.csv(output, "./EIS_GPT_responses.csv", row.names = FALSE)
    } else {
      read.csv("./EIS_GPT_responses.csv") %>%
        mutate(across(everything(), as.character)) %>%
        bind_rows(output %>%
                    mutate(across(everything(), as.character))
        ) %>%
        write.csv("./EIS_GPT_responses.csv", row.names = FALSE)
    }
    
    print(paste0("Completed response ", i))
  }
}

# Log the failed IDs to a file
if (length(failed_ids) > 0) {
  write.csv(data.frame(ID = failed_ids), "./failed_ids.csv", row.names = FALSE)
  cat("Failed IDs logged to 'failed_ids.csv'\n")
}
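One knob worth knowing about: the chat/completions request body accepts a temperature parameter, which reduces (but does not eliminate) run-to-run variation. A sketch of the modified body from the loop above, where system_prompt stands in for the long inline system message:

# Request body with temperature pinned at 0 for more repeatable output
request_body <- list(
  model = "gpt-3.5-turbo",
  temperature = 0,  # sampling temperature; the API default is 1
  messages = list(
    list("role" = "system", "content" = system_prompt),  # stand-in for the inline system message
    list("role" = "user", "content" = prompt_content)
  )
)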

ChatGPT is nondeterministic, so your responses will differ. You can download the responses I got to follow along.

download.file(
     url = "https://www.azandisresearch.com/wp-content/uploads/2023/09/Final_GPT_Responses.csv",
     destfile = "./data/GPT_output.csv"
)

GPT_output <- read.csv("./data/GPT_output.csv")
GPT_output %>% glimpse()
Rows: 2,470
Columns: 13
$ ID                    <int> 93, 440, 2164, 636, 839, 2335, 36, 487, 1268, 2303, 1781, 60, 1033, 1948, 1826, 1538, 1685, 308, 22…
$ Received              <chr> "7/29/2015 9:09", "8/9/2015 5:14", "8/27/2015 14:36", "8/18/2015", "8/25/2015", "8/28/2015 12:30", …
$ Correspondence        <chr> "Web Form Correspondence", "Web Form Correspondence", "Web Form Correspondence", "Web Form Correspo…
$ Content               <chr> "\"100% o wolves examined since 1994...have spinal anomalies.\"- -Of the six alternatives put forth…
$ form_duplicate        <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,…
$ Content_dup           <chr> "\"100% o wolves examined since 1994...have spinal anomalies.\"- -Of the six alternatives put forth…
$ favored_alternative   <chr> "C", "C", "Alternative D", "C", "C", "B", "C", "C", "D", "C", "Unknown", "C", "B", "A", "B", "A", "…
$ wolf_opinion          <chr> "For", "Against", "Neutral", "For", "Neutral", "For", "For", "For", "Against", "For", "Neutral", "F…
$ opinion_strength      <chr> "Very strong", "Very strong", "Strong", "Strong", "Somewhat strong", "Very strong", "Strong", "Stro…
$ response_created_time <chr> "32:19.2", "33:11.7", "33:16.9", "33:19.5", "34:35.2", "34:54.2", "34:55.4", "36:15.1", "36:16.3", …
$ Favored_alternative   <lgl> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,…
$ Wolf_opinion          <lgl> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,…
$ Opinion_strength      <lgl> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA,…

A couple of interesting things to note here. First, I apparently was not specific enough in my instructions for classifying the favored alternative, because chatGPT sometimes returns “Alternative B” instead of just “B”. This is one of the struggles with using chatGPT, a generative model, in this way: it strays from instructions just like human survey respondents do when entering free-text results. For example, common responses to the survey question, “How are you feeling on a scale from 1 (bad) to 10 (good)?” might be “I’m good” or “Okay” or “nine” or “0”. None of those answers fit the instructions, so we have to clean them up.

In the case of chatGPT, we might be able to reduce these errors with more specific prompt engineering. For now, we’ll just clean up the responses on the backend.

# Fix erroneous column names
GPT_output <-
     GPT_output %>%
     mutate(
          favored_alternative = ifelse(is.na(favored_alternative), Favored_alternative, favored_alternative),
          wolf_opinion = ifelse(is.na(wolf_opinion), Wolf_opinion, wolf_opinion),
          opinion_strength = ifelse(is.na(opinion_strength), Opinion_strength, opinion_strength)
          ) %>%
     select(
          -Wolf_opinion,
          -Favored_alternative,
          -Opinion_strength
     )

# There are probably more elegant ways to write generalized rules to classify these responses, but this does the trick
GPT_output <-
     GPT_output %>%
     # Fix 'favored alternative' responses
     mutate(
          favored_alternative_edit = case_when(
               (grepl(" and ", favored_alternative) | grepl(" or ", favored_alternative) | grepl("/", favored_alternative) | grepl("&", favored_alternative) | favored_alternative == "B, D") & !grepl(" and Wolf ", favored_alternative) & !grepl("N/A", favored_alternative) ~ "Multiple",
               grepl("\\bAlternative A\\b", favored_alternative) | favored_alternative %in% c("A", "No Action (A)") ~ "A",
               grepl("\\bAlternative B\\b", favored_alternative) | favored_alternative == "B" ~ "B",
               grepl("\\bAlternative C\\b", favored_alternative) | favored_alternative %in% c("C", "Concept C") ~ "C",
               grepl("\\bAlternative D\\b", favored_alternative) | favored_alternative == "D" ~ "D",
               TRUE ~ "Other"
          )
     ) %>%
     # Fix 'opinion strength' responses
     mutate(opinion_strength = tolower(opinion_strength)) %>%
     mutate(
          opinion_strength_edit = case_when(
               opinion_strength %in% c("strong", "very strong", "mild", "somewhat strong", "extremely strong") ~ opinion_strength,
               TRUE ~ "other"
          )
     ) %>%
     # Fix 'wolf opinion' responses
     mutate(wolf_opinion = tolower(wolf_opinion)) %>%
     mutate(
          wolf_opinion_edit = case_when(
          wolf_opinion %in% c("for", "against", "neutral") ~ wolf_opinion,
          TRUE ~ "other"
          )
     )

Let’s take a look at the results.

Bar chart of the favored alternative expressed in comments, as assessed by chatGPT.

We can see that the majority of comments favor Alternative B: immediate wolf introduction. However, if we exclude the duplicated comments, our conclusion shifts to a majority in favor of the more moderate Alternative C: introduce wolves only after certain thresholds are met. Almost no one supports Alternative D: moose reduction and wolf assessment.

Bar chart of opinion strength by favored alternative.

Comments that favored Alternative A were proportionally stronger. Alternative B supporters had mostly strong opinions but very few extremely strong or mild opinions. Supporters of Alternatives C and D were the least opinionated.

Validating chatGPT responses

It is worth asking how reliable chatGPT is at classifying these responses. One way to test this is to rerun a subset of comments, as we did above, and check for agreement. This is called inter-rater reliability* (IRR).

* Although, maybe it should be called intra-rater reliability in this case. I guess it depends on our definition of ‘individual’ with LLM queries, but that’s a very philosophical can of worms!

First, we need to subset our dataset to the responses that we scored twice.

IRR_comparisons <- 
     GPT_output %>%
     group_by(ID) %>%
     arrange(response_created_time) %>%
     mutate(ID_row_count = row_number()) %>%
     filter(ID_row_count <= 2) %>%
     mutate(n = n()) %>%
     filter(n > 1) %>%
     ungroup()

Then we can see how reliably the favored alternative was scored:

IRR_comparisons %>%
     select(ID, favored_alternative_edit, ID_row_count) %>%
     pivot_wider(
          id_cols = "ID",
          names_from = "ID_row_count",
          values_from = "favored_alternative_edit",
          names_prefix = "val"
     ) %>%
     group_by(val1 == val2) %>%
     tally() %>%
     mutate(
          total = sum(n),
          prop = n/total
     )
# A tibble: 2 × 4
  `val1 == val2`     n total  prop
  <lgl>          <int> <int> <dbl>
1 FALSE              2   500 0.004
2 TRUE             498   500 0.996

ChatGPT gave consistent responses in 498 out of 500 cases. That’s pretty good! Let’s look at the comments where it disagreed with itself.

IRR_comparisons %>%
     select(ID, favored_alternative_edit, ID_row_count) %>%
     pivot_wider(id_cols = "ID", names_from = "ID_row_count", values_from = "favored_alternative_edit", names_prefix = "val") %>%
     filter(val1 != val2)
# A tibble: 2 × 3
     ID val1  val2 
1   288 C     B    
2  1160 B     C    
 
EIS_comments_deduplicated %>%
     filter(ID == 288) %>%
     .$Content %>%
     cat()
There should be a balance between the wolf population and moose. When it is not balanced there is more harm than good done to the environment. Please introduce more wolves on this island instead of decreasing their population and this will keep the moose in check. Please add more wolves to contain the moose population. So many wolves are under attack in other states and decreasing their population is NOT the answer. It only creates more problems to the environment. There should be intense management of the wolf population to help it thrive and return the land back to it's natural state where there are enough moose and wolves. I think the public should be consulted as far as future plans for any culling. There should be intense management to monitor the effects of climate change as this will affect all aspects of wildlife and plant life on the island. I do not like the idea of a moose cull. I like the idea of introducing more wolves to the island so long as there is harmony with the existing wolves on the island. Maybe possibly try to introduce another type of animal that would be a good balance with the wolves and moose but only if it does not disrupt the balance and create new problems. Other states have adopted disastrous wolf culling plans that are only in the interests of farmers and ranchers. As the wolf population is dwindling, other problems will begin to develop as there is not a proper balance. Please keep wolves in mind and do your best to increase their population before it is too late and more animals will be needlessly killed without the proper balance of mother nature.
 
EIS_comments_deduplicated %>%
     filter(ID == 1160) %>%
     .$Content %>%
     cat()
I have heard both sides of this situation and I believe that new wolves should be introduced on Isle Royale. Climate change has made a large impact on the amount of ice that freezes in the Isle Royale region. Previously wolves from the mainland could cross the ice that formed and take up residence on the Isle. The ice hasn't been stable enough for these crossings in the last few years and the wolves are becoming inbred and dying off. If you will check a video that I have watched about the wolves being reintroduced to Yellowstone, you will see that the ecology of the region is benefited by the wolves being there. If enough wolves are transported to Isle Royale, the wolves will keep the moose in check and the ecology will improve. Allowing the pack to die off is really not a positive move. Introducing a new bloodline to the pack will help. I believe the wilderness designation of Isle Royale is a positive thing and that the wolves help to keep the ecosystem there in good order. Thank you for taking comments from the public.

In both cases, chatGPT vacillated between classifying the comment as favoring Alternative B or C. The difference between those alternatives is admittedly nuanced. Both alternatives propose replacing wolves; the only difference is in the timing. In Alternative B, wolves would be introduced immediately; in Alternative C, wolves would be introduced “if certain thresholds are met, such as the extirpation of wolves, the overabundance of moose, or the degradation of vegetation. The number and timing of wolf introductions would depend on the conditions at the time.”

Both of the comments that made chatGPT disagree with itself focus on the environmental conditions that wolf introductions might remedy. However, these comments seem to presuppose that those conditions have already been met and suggest that immediate introduction is necessary. So, I can see where chatGPT might have a hard time solidly classifying these comments.

Let’s also check the IRR for chatGPT’s classification of ‘opinion strength.’ Unlike the favored alternative, where most folks explicitly stated their preference, classifying the strength of an opinion is a far more subjective task.

IRR_comparisons %>%
     select(ID, opinion_strength_edit, ID_row_count) %>%
     pivot_wider(
          id_cols = "ID",
          names_from = "ID_row_count",
          values_from = "opinion_strength_edit",
          names_prefix = "val") %>%
     group_by(val1 == val2) %>%
     tally() %>%
     mutate(
          total = sum(n),
          prop = n/total
     )
# A tibble: 2 × 4
  `val1 == val2`     n total  prop
  <lgl>          <int> <int> <dbl>
1 FALSE              5   500  0.01
2 TRUE             495   500  0.99

ChatGPT disagreed with itself in 5 cases, but gave reliable classifications 99% of the time. That’s pretty good! However, simple binary agreement or disagreement isn’t a strong metric for this variable. A switch from “extremely strong” to “very strong” is less of an issue than vacillating from “extremely strong” to “mild”.

Instead, we can use Krippendorff’s Alpha. This metric provides a formal way to assess the amount of inter-rater disagreement. There are multiple metrics that we could use, but Krippendorff’s Alpha is nice because it generalizes to any number of reviewers and can handle many types of data (i.e. binary, ordinal, interval, categorical, etc.). Here’s a great post for understanding Krippendorff’s Alpha. We’ll use the irr package to estimate it.

library(irr)

The irr package needs the dataset as a wide-format matrix with one row per reviewer and one column per record (the package calls records ‘subjects’ because this metric is traditionally used in social science research). For this analysis, we’ll treat the first and second responses from chatGPT as individual reviewers. We also need to enforce the order of our opinion strength levels; otherwise, R will order them alphabetically.

IRR_comparisons %>%
     mutate(opinion_strength_edit = fct_relevel(
          opinion_strength_edit,
          c(
               "other",
               "mild",
               "somewhat strong",
               "strong",
               "very strong",
               "extremely strong"
           )
     )) %>%
     select(
          ID,
          opinion_strength_edit,
          ID_row_count
     ) %>%
     pivot_wider(
          id_cols = "ID_row_count",
          names_from = "ID",
          values_from = "opinion_strength_edit",
          names_prefix = "ID_"
     ) %>%
     select(-ID_row_count) %>%
     as.matrix() %>%
     kripp.alpha(method = "ordinal")
  
Krippendorff's alpha

 Subjects = 500 
   Raters = 2 
    alpha = 0.996 

Krippendorff’s Alpha ranges from -1 to 1, where 1 means perfect concordance, 0 means agreement no better than random guessing, and -1 means perfect negative concordance. At 0.996, we are very near perfect reliability.

For many datasets, there will be a lower degree of IRR. But it is important to interpret the alpha value in context. Perfect concordance may not be realistic, especially in highly subjective classifications. In most cases our goal is not perfect concordance, but simply greater reliability than we’d get if we hired a bunch of humans to do the annotating. Preliminary evidence seems to indicate that even version 3.5 of chatGPT is more reliable than humans (even domain experts!) in subjective classification tasks.

In most cases, you won’t have the resources to get human annotations for an entire dataset for comparison. Instead, you could 1.) get human annotations for a small subset, 2.) use a similar benchmark dataset, or 3.) spot-check responses yourself. If you choose to spot check, I’d suggest rerunning chatGPT multiple times (> 3) in order to estimate the variance in responses. High variance responses indicate especially difficult classifications that you should target for spot-checks. Another tip is to ask chatGPT to return its justification with each response. Ultimately, this process will help you diagnose problematic types of responses and enable you to engineer better prompts to deal with those edge cases.
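As a minimal sketch of that variance-flagging step, assuming the GPT_output dataframe from this analysis holds one row per comment ID per rerun (as it does for the reliability test above), we could surface the IDs where repeated runs disagree:

GPT_output %>%
     group_by(ID) %>%
     summarise(
          n_runs = n(),
          n_unique = n_distinct(favored_alternative_edit) # Number of distinct classifications across reruns
     ) %>%
     filter(n_unique > 1) # Any ID where reruns disagreed is a spot-check candidate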

The bottom line is that working with chatGPT is less like working with a model and more like working with human raters–and all of the validation tasks that entails.

Analysis with token embeddings

Up to this point, we’ve presupposed the classifications we wanted ChatGPT to identify in our data. But, what if we wanted to uncover hidden categories in the responses? Folks could advocate for the same Alternative but for different reasons. For example, among those who favor Alternative C, some might argue from the perspective of climate change and some from the perspective of moose populations.

We can use token embeddings to uncover hidden clusters of topics in our responses. Embeddings are the way that LLMs encode free text into numeric form. Each token, or ‘unit of language’, is numerically described as a position in multidimensional language space. This is a huge advantage over more traditional language clustering methods that simply count the occurrence of certain words. Embeddings retain the context of each token as it exists in the document.

Embeddings allow us to retain the context of text by expressing tokens in multidimensional language space.

As a toy example, the word “train” in the sentences “I train a model”, “I train for a marathon”, “I rode the train”, and “I’m on the Soul Train” could be described along two dimensions: how metaphorical the usage is and whether “train” acts as a noun or a verb. If we do this for all of the words in a document or chunk of text, we can then think of all the embeddings as a point cloud. Documents with highly overlapping point clouds are more similar than those that don’t overlap at all.
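Here is a minimal sketch of that idea with made-up two-dimensional coordinates (real embeddings have far more dimensions, but the math is the same). Cosine similarity is one common way to quantify how close two embeddings are:

# Hypothetical 2-D embeddings for two usages of 'train'
train_verb <- c(0.9, 0.1) # 'I train a model'
train_noun <- c(0.1, 0.8) # 'I rode the train'

# Cosine similarity: 1 = same direction, 0 = unrelated
cosine_sim <- function(a, b) sum(a * b) / (sqrt(sum(a^2)) * sqrt(sum(b^2)))
cosine_sim(train_verb, train_noun)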

We call a different OpenAI model, text-embedding-ada-002, to return the embeddings. Unlike the chat model, we can pass all of the responses as a list in a single call, instead of looping through each response. This makes embeddings much faster and cheaper than using the chatGPT API.

Prior to embedding, I like to remove non-alphanumeric characters from the text.

# Clean up the text to remove non-alpha numeric characters
input_to_embed <- 
     EIS_comments_deduplicated %>%
     mutate(Content_cleaned = str_replace_all(Content, "[^[:alnum:]]", " "))

# Call OpenAI for the embeddings
embeddings_return <- 
     POST(
          "https://api.openai.com/v1/embeddings",
          add_headers(Authorization = paste0(
               "Bearer ", read_lines("../credentials/openai.key"))
          ),
          body = list(
               model = "text-embedding-ada-002",
               input = input_to_embed$Content_cleaned
               ),
          encode = "json"
     )

The returned object is a bit convoluted. We can use a bit of purrr and jsonlite to extract the embeddings.

# Extract the embeddings from the API return
embeddings_list <-
     embeddings_return %>%
     content(as = "text", encoding = "UTF-8") %>%
     fromJSON(flatten = TRUE) %>%
     pluck("data", "embedding")

Then add the embeddings back into the dataframe.

# Combine the embeddings with the original data
EIS_GPT_embeddings <- 
     EIS_comments_deduplicated %>%
     as_tibble() %>%
     mutate(
          embeddings = embeddings_list,
          ID = as.character(ID)
     ) %>%
     left_join(
          # We need to get only the first instance of the GPT response data, which
          # also included the repeated reliability test responses, to know which
          # alternative the comment favors
          GPT_output %>%
               group_by(ID) %>%
               arrange(response_created_time) %>%
               mutate(ID_row_count = row_number()) %>%
               filter(ID_row_count == 1) %>%
               ungroup() %>%
               select(
                    ID,
                    favored_alternative_edit,
                    opinion_strength_edit
               )
     )

Topical clustering from text embeddings

The problem is that those point clouds exist in extremely high dimensions. OpenAI’s text-embedding-ada-002 model returns 1536 dimensions. We need a method to reduce that complexity into something useful.

As mentioned, the embeddings allow us to see how comments relate in high-dimensional language space. We want to figure out where there are denser clusters of point clouds in that space which indicate common themes in the comments.

A couple of common ways to do this are to use a clustering algorithm (e.g. K-means) or dimension reduction (e.g. PCA). For this tutorial I want to use t-SNE (t-distributed Stochastic Neighbor Embedding), a dimension reduction technique that will allow us to easily visualize clusters of common comments, which we can then explore.

We’ll use the Rtsne package, which requires that the data be in matrix form.

library(Rtsne)

# Rtsne requires the embeddings to be in matrix form, so we extract the lists of embeddings from the dataframe and convert them to a matrix.
openai_embeddings_mat <-
     matrix(
          unlist(
               EIS_GPT_embeddings %>%
               .$embeddings
               ),
          ncol = 1536,
          byrow = TRUE
     )

# Estimate tSNE coordinates
set.seed(7267158)
tsne_embeddings <-
     Rtsne(
          openai_embeddings_mat,
          pca = TRUE,
          theta = 0.5,
          perplexity = 50,
          dims = 2,
          max_iter = 10000
     )

Determining the proper theta (the Barnes-Hut speed/accuracy tradeoff) and perplexity (roughly, the effective number of neighbors each point considers when balancing local versus global structure) is more of an art than a science. This post does a great job of exploring choices for these parameters. By setting pca = TRUE, we first reduce the dimensionality to 50 principal components and then use tSNE to do the final reduction to two visual dimensions.
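If you want to explore those choices empirically, one approach is to refit the tSNE across a grid of perplexity values and compare the layouts side by side. This sketch assumes the openai_embeddings_mat matrix from above; the grid values are arbitrary:

# Refit tSNE across several perplexity values and facet the layouts
tsne_grid <-
     map_dfr(c(10, 30, 50, 100), function(p) {
          set.seed(7267158)
          fit <- Rtsne(openai_embeddings_mat, pca = TRUE, theta = 0.5, perplexity = p, dims = 2)
          tibble(perplexity = p, dim1 = fit$Y[, 1], dim2 = fit$Y[, 2])
     })

tsne_grid %>%
     ggplot(aes(x = dim1, y = dim2)) +
     geom_point(alpha = 0.3, pch = 16) +
     facet_wrap(vars(perplexity))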

# Extract the tSNE coordinates and add them to the main dataset
EIS_GPT_embeddings <- 
     EIS_GPT_embeddings %>%
     mutate(
          tsne_dim1 = tsne_embeddings$Y[,1],
          tsne_dim2 = tsne_embeddings$Y[,2]
     )

# Visualize the tSNE plot
EIS_GPT_embeddings %>%
     ggplot(aes(x = tsne_dim1, y = tsne_dim2)) +
     geom_point(alpha = 0.5, pch = 16)
The tSNE plot uncovers some weak groupings, but there is no extremely clear delineation between most comments. This is likely a symptom of low diversity among the comments and the fact that most of the comments are very short, so there is less signal in the content.

The first thing to note is that we are not seeing much discrete grouping of the points. This tells us that the comments share a lot more in common overall than within local groups. The second thing to notice is that despite the spread, we do see a handful of groups budding off along the periphery. In fact, one group in the bottom right is very distinct. It is important to remember that, unlike PCA, the axis dimensions in tSNE are meaningless. In fact, I’ll remove them from the plots for the rest of the post. Position doesn’t matter in tSNE, only relative closeness.
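For instance, a minimal way to replot the embeddings without the meaningless axes:

EIS_GPT_embeddings %>%
     ggplot(aes(x = tsne_dim1, y = tsne_dim2)) +
     geom_point(alpha = 0.5, pch = 16) +
     theme_void() # Drop axes, ticks, and labels entirely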

At this point, we might want to manually delimit groups that we want to analyze further, like pulling out all of the comments from that cluster in the top left. To make this a bit easier, I’ve opted to cluster the two-dimensional tSNE coordinates with hierarchical clustering. It is important to realize that this is purely a convenience for visualization. If we really wanted to use a clustering method (hierarchical, k-means, etc.) to directly define groups, it would make much more sense to cluster directly on the first 50 principal components.

tsne_embedding_clusters <- 
     hclust(
          dist(tsne_embeddings$Y), 
          method = "average"
     )

EIS_embeddings_clustered <-
     EIS_GPT_embeddings %>%
     mutate(
          cluster = cutree(tsne_embedding_clusters, 7)
     )

Since we are clustering on tSNE dimensions where distance doesn’t really matter, deciding where to set our breakpoint is a personal choice. I’ve decided to use 7 clusters because it seemed a natural breakpoint and recovered the obvious clusters.
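One way to sanity-check that choice is to plot the dendrogram and outline the cut:

# Visualize the hierarchical tree and box the 7-cluster cut
plot(tsne_embedding_clusters, labels = FALSE)
rect.hclust(tsne_embedding_clusters, k = 7)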

Using hierarchical clustering, we can cluster on the tSNE coordinates. Since tSNE coordinates are mostly meaningless, deciding how many clusters to split the data into is a bit arbitrary.

Text analysis of topical clusters

Now that we have putative clusters of topics, we can perform some classic natural language processing (NLP) to illuminate the themes of those topics. We’ll use tidytext for this task.

library(tidytext)

First, we need to get the data into a long, tidy format where each word in every comment is its own row. We’ll also remove common stop words that are predefined in the tidytext library. Then, we can calculate the term frequency-inverse document frequency (TF-IDF) for the clusters. TF-IDF is basically a measure of how common a word is within a cluster, after accounting for how common that word is overall.
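Those steps might look something like the sketch below. It assumes the EIS_embeddings_clustered dataframe from the previous section (with its Content column and cluster assignments) and treats each cluster as a ‘document’:

cluster_tf_idf <-
     EIS_embeddings_clustered %>%
     select(cluster, Content) %>%
     unnest_tokens(word, Content) %>% # One row per word per comment
     anti_join(stop_words, by = "word") %>% # Drop tidytext's predefined stop words
     count(cluster, word, sort = TRUE) %>% # Word counts within each cluster
     bind_tf_idf(word, cluster, n) # TF-IDF with clusters as 'documents'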

For example, if we take a look at the most common words in each cluster, it is unsurprising that “wolves”, “moose”, “isle” and “royale” dominate. (Although it is interesting that the top words for clusters 4 and 7 are “wilderness” and “management”… more on that later).

Unsurprisingly, when considering the most common words, “wolves”, “moose”, and “isle” dominate.

However, TF-IDF tells us about the relatively unique words that define a cluster of comments. Some clusters, like 1 and 2, have very even TF-IDF distributions, and their important words are mostly filler or nonsense words. This happens when clusters are saturated with common words and there is no strong theme producing uniquely important words. We could have guessed from the tSNE plot of the embeddings that the bulk of comments in the center of the plot would fall in this lexical no-man’s-land. But! Clusters 3, 4, 5, and 7 show promisingly skewed distributions.

TF-IDF is a measure of uniquely important words in a ‘document’ (or cluster, in this case) relative to common words across all documents.

Cluster 3 seems to orient toward a topic of animal welfare, with words like “contraception”, “sterilization”, “lethal”, and “culls”. I suspect these comments come from folks concerned less about the wolf population or wilderness management and more about the ethics of any proposed action involving animals. In a similar way, Cluster 7 seems more concerned with the science and measurement behind the management decision than with the decision itself, with words like “evaluating”, “approximately”, and “tools” scoring high on uniqueness and “management” as the most common word overall. These topics would have been completely lost if we had stopped at categorizing favored alternatives.

Meanwhile, cluster 4 appears to be squarely concerned with Wilderness issues. “Wilderness” and “nature” are the most common words in this cluster, and “untrammeled” and “unmanipulated” are the most uniquely important words. We might expect that most of the comments chatGPT categorized as favoring Alternative A will fall into cluster 4.

We can also take a look at how the clusters map onto the chatGPT categorizations.

The chatGPT-categorized ‘favored alternative’ mapped onto the tSNE coordinates, with bar plots showing favored alternative counts per cluster.

Mapping the chatGPT categorized ‘favored alternative’ onto the tSNE coordinates, we can see that comments roughly sort by favored alternative. Cluster 6 is almost entirely defined by support for Alternative B (immediate wolf introduction). Cluster 4, which seemed to orient toward Wilderness values, is mostly comprised of comments in support of Alternative A (no action).

Clusters 7 and 3 mostly skew toward Alternative C (more monitoring) and exhibit very similar distributions. This might be a great example of folks who agree on the same Alternative doing so for different reasons, a pattern we would have totally missed without text analysis.

The remaining clusters, which compose the bulk of the midland in the tSNE plot, favor a mix of Alternatives.

Chain-of-density summarization

We can learn a lot from looking at common and important words and using our human judgement to piece together the topical theme of each cluster. Ideally, we would read all of the comments in a cluster to develop a topical summary, but that would take a long time. As an alternative, we can pass all of the comments in a given cluster to an LLM and have it summarize the theme.

Currently, only a handful of models support context windows large enough to digest the entirety of the comments in our clusters. Anthropic’s Claude2 has a context window of up to 100k tokens (roughly 75,000 words), although it isn’t quite as good as chatGPT 4. To get the most out of Claude2, we can use a special type of prompting developed for summarization called “chain-of-density”. Chain-of-density prompting forces the model to recurrently check its own output to maximize the density and quality of its summarization. Research shows that people tend to like chain-of-density summaries even better than human-written summaries of news articles.

For demonstration, we’ll use chain-of-density prompting to summarize the theme of cluster 3. Here is the prompt that we will pass to Claude2:

"You will generate increasingly concise entity-dense summaries of the semicolon separated comments included below.

The comments were submitted by members of the public in response to the Isle Royale National Park Moose-Wolf-Vegetation Management Plan/EIS. The Plan/EIS is a document that evaluates management alternatives for the moose and wolf populations on the island National Park land.

Now that you know the context, here are the semicolon separated survey responses:

[INSERT SEMICOLON SEPARATED COMMENTS]

Instructions: You will generate increasingly concise entity-dense summaries of the above semicolon separated comments. Repeat the following 2 steps 5 times.

Step 1: Identify 1-3 informative entities (delimited) from the comments which are missing from the previously generated summary.
Step 2: Write a new denser summary of identical length which covers every entity and detail from the previous summary plus the missing entities.

A missing entity is
- Relevant: to the main themes of the comments.
- Specific: descriptive yet concise (5 words or fewer).
- Novel: not in the previous summary.
- Faithful: present in the comments.
- Anywhere: located in the comments.

Guidelines:
- The first summary should be long (5 - 7 sentences, ~100 words), yet highly non-specific, containing little information beyond the entities marked as missing. Use overly verbose language and fillers (e.g., 'these comments discuss') to reach ~100 words.
- Make every word count. Rewrite the previous summary to improve flow and make space for additional entities.
- Make space with fusion, compression, and removal of uninformative phrases like 'these comments discuss'.
- The summaries should become highly dense and concise, yet self-contained, e.g., easily understood without having read the comments.
- Missing entities can appear anywhere in the new summary.
- Never drop entities from the previous summary. If space cannot be made, add fewer new entities.

Remember: Use the exact same number of words for each summary.

Answer in JSON. The JSON should be a list (length 5) of dictionaries whose keys are 'missing_entities' and 'denser_summary'."

I don’t have API access to Claude2, but unlike chatGPT, Claude2 accepts files as input. So, it is easy to write our prompt, with the comments included, to a .txt file and submit it to Claude2 manually.
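As a sketch of that hand-off, assuming the prompt template above is stored as a string (chain_of_density_prompt is an illustrative name), we can splice in the comments from cluster 3 and write the result to a file:

cluster_3_comments <-
     EIS_embeddings_clustered %>%
     filter(cluster == 3) %>%
     pull(Content) %>%
     paste(collapse = "; ") # Semicolon separated, as the prompt expects

prompt_text <- str_replace(
     chain_of_density_prompt, # Hypothetical string holding the template above
     fixed("[INSERT SEMICOLON SEPARATED COMMENTS]"),
     cluster_3_comments
)

write_lines(prompt_text, "cluster3_chain_of_density_prompt.txt")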

Claude2 gives us five increasingly dense summaries:

Summary 1: These comments provide a range of perspectives on how the National Park Service should manage the declining wolf population and increasing moose population on Isle Royale. There is discussion of various preliminary alternative concepts, including taking no action, reintroducing wolves once, maintaining both moose and wolves, reducing the moose population, and intensively managing the moose population. Many commenters express support for alternative C, which involves reintroducing and augmenting the wolf population on Isle Royale more than once if necessary. There is also frequent mention of the possibility of relocating problem wolves from elsewhere to the island. Overall, most commenters emphasize the importance of restoring the natural wolf/moose balance on Isle Royale through wolf reintroduction rather than intensively managing the moose population.
Summary 2: These comments provide perspectives on managing the declining Isle Royale wolf and increasing moose populations. Many support alternative C to reintroduce and augment wolves more than once if needed. Wolf relocation from the mainland is also mentioned. Most emphasize restoring the wolf/moose balance through wolf reintroduction rather than intensively managing moose. Climate change reducing ice bridges for wolf migration and the wilderness designation of Isle Royale are frequently discussed. Many oppose lethal moose culling, instead supporting moose contraception/sterilization. Overall there is a focus on restoring the natural predator-prey dynamic through wolf reintroduction while minimizing intensive human management due to the wilderness designation.
Summary 3: These comments discuss managing the declining Isle Royale wolf and increasing moose populations. Many support alternative C to reintroduce and augment wolves more than once if needed to restore the wolf/moose balance. Wolf relocation and genetics are often mentioned given reduced migration from climate change. The wilderness designation and opposition to lethal moose culling are frequently discussed, with support for contraception/sterilization. There is concern about moose overbrowsing without sufficient wolves. Some mention wolf tourism benefits. Overall there is emphasis on restoring the natural predator-prey dynamic and wolf genetics through reintroduction while minimizing intensive human management due to the wilderness designation.
Summary 4: These comments discuss managing the declining Isle Royale wolf and increasing moose populations, often supporting alternative C to reintroduce and augment wolves more than once to restore balance. Wolf relocation and genetics are frequently mentioned to compensate for reduced migration from climate change. Many oppose lethal moose culling, instead supporting contraception/sterilization, expressing concerns about moose overbrowsing and ticks without wolves. Some mention wolf tourism benefits and preserving the wilderness designation. There is emphasis on restoring balance through wolf reintroduction to limit moose impacts on vegetation while minimizing intensive human management.
Summary 5: These comments frequently support alternative C to reintroduce and augment Isle Royale wolves to restore wolf/moose balance and predator-prey dynamics, concerned about moose overbrowsing, ticks, and vegetation without sufficient wolves. Wolf relocation and genetics are often proposed to compensate for reduced migration from climate change. Many oppose lethal moose culling, noting wilderness values. Some mention continuing the iconic predator-prey research, wolf tourism benefits, wolf-livestock issues, and the value of public engagement. Overall there is emphasis on reestablishing a healthy Isle Royale wolf population through reintroduction to minimize intensive human management and moose impacts.

The final two summaries start to get at the nuance particular to this cluster. Summary 4 mentions, “Many oppose lethal moose culling, instead supporting contraception/sterilization”. Summary 5 mentions: “Many oppose lethal moose culling, noting wilderness values.” As expected, much of the summary is likely common across all comments. So, a human interpretation of the summaries is still necessary.

Conclusion

In this post, we explored using chatGPT, text embeddings, and Claude2 to analyze public comments on a complex environmental management decision. We learned methods to responsibly validate chatGPT output. While not perfect, chatGPT showed promising reliability at categorizing free-form opinions. The text embeddings allowed us to uncover hidden topical clusters among comments that traditional methods would have missed. Claude2’s long context window allowed us to further interpret the topical clusters. Together, these tools enabled a nuanced quantitative analysis of subjective text data that would be infeasible for a single human analyst to perform manually.

 

UPDATE: Smartphone Hemispherical Image Analysis https://www.azandisresearch.com/2023/05/20/update-smartphone-hemispherical-image-analysis/ Sat, 20 May 2023 19:27:45 +0000 https://www.azandisresearch.com/?p=2279 When I first developed the method to estimate canopy metrics from smartphone spherical panoramas, I used a somewhat convoluted workflow involving command line image manipulation, a plugin for ImageJ to do binarization, and a script I wrote in the AutoHotKey language to automate mouse clicks on a GUI for canopy measures. Admittedly, it is a difficult pipeline for others to replicate.

In an effort to make life easier, I spent some time building out the pipeline entirely in R, including a sourceable function for converting spherical panos to hemispherical images (all available in this repo).

The easiest way to convert all of your spherical panos to hemispherical projections is to source the function from my github:

source("https://raw.githubusercontent.com/andisa01/Spherical-Pano-UPDATE/main/Spheres_to_Hemis.R")

When you source the script, it will install and load all necessary packages. It also downloads the masking file that we will use to black out the periphery of the images.

The script contains the function convert_spheres_to_hemis, which does exactly what it says. You’ll need to put all of your raw spherical panos into a subdirectory within your working directory. We can then pass the path to that directory as an argument to the function.

convert_spheres_to_hemis(focal_path = "./raw_panos/")

This function will loop through all of your raw panos, convert them to masked, north-oriented, upward-facing hemispherical images, and put them all in a folder called “masked_hemispheres” in your working directory. It will also output a csv file called “canopy_output.csv” that contains information about each image.
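Once the batch run finishes, a quick way to confirm the outputs (in base R, before loading any packages):

hemi_files <- list.files("./masked_hemispheres/", full.names = TRUE) # The converted images
canopy_info <- read.csv("./canopy_output.csv") # The per-image information
str(canopy_info)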

Below, I will walk through the steps of the workflow that happened in the convert_spheres_to_hemis function. If you just want to use the function, you can skip to the analysis. I’ve also written a script to do all of the conversion AND analysis in batch in the repo titled “SphericalCanopyPanoProcessing.R”.

library(tidyverse) # For data manipulation
library(exifr) # For extracting metadata

R is not really the best tool for working with image data. So, we’ll use the magick package to call the ImageMagick program from within R.

library(magick) # For image manipulation
# Check to ensure that ImageMagick was installed.
magick_config()$version

You should see a numeric version code like ‘6.9.12.3’ if ImageMagick was properly installed.

# imager also requires ImageMagick
library(imager) # For image display

For binarizing and calculating some canopy metrics, we will use Chianucci’s hemispheR package, which we need to install from the development version.

library(devtools)
devtools::install_git("https://gitlab.com/fchianucci/hemispheR")
library(hemispheR) # For binarization and estimating canopy measures

To get started, you’ll need to have all of your raw equirectangular panoramas in a folder. I like to keep my raw images in a subdirectory called ‘raw_panos’ within my working directory. Regardless of where you store your files, set the directory path as the focal_path variable. For this tutorial we’ll process a single image, but I’ve included scripts for batch processing in the repo. Set the name of the image to process as the focal_image variable.

focal_path <- "./raw_panos/"
focal_image <- "PXL_20230519_164804198.PHOTOSPHERE_small.jpg"

focal_image_path <- paste0(focal_path, focal_image)
focal_image_name <- sub("\\.[^.]+$", "", basename(focal_image_path))

Let’s take a look at the equirectangular image.

pano <- image_read(focal_image_path)
pano # Visualize the pano
Raw equirectangular projection of a spherical panorama.

Note: One advantage of spherical panos is that they are large and, therefore, high resolution. The images from my Google Pixel 4a are 38 megapixels. For this tutorial, I downsized the example pano to 10% resolution to make processing and visualizing easier. For your analysis, I’d recommend using full-resolution images.
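For reference, downsizing with magick is a one-liner once the pano is read in (the 10% value mirrors what I did for this example):

pano_small <- image_resize(pano, geometry_size_percent(10))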

Spherical panoramas contain far more metadata than an average image. We can take a look at all of this additional information with the read_exif function.

read_exif(focal_image_path) %>%
  glimpse()

We’ll extract some of this information about the image, the date it was created, the georeference, and altitude, etc. to output alongside our canopy metrics. You can decide which elements are most important or useful for you.

xmp_data <- read_exif(focal_image_path) %>%
  select(
    SourceFile,
    Make,
    Model,
    FullPanoWidthPixels,
    FullPanoHeightPixels,
    SourcePhotosCount,
    Megapixels,
    LastPhotoDate,
    GPSLatitude,
    GPSLongitude,
    GPSAltitude,
    PoseHeadingDegrees
  )

The first image processing step is to convert the equirectangular panorama into a hemispherical image. We’ll need to store the image width dimension and the heading for processing. The pose heading is a particularly important feature and a unique advantage of spherical panoramas. Since the camera automatically stores the compass heading of the first image in the panorama, we can use that information to automatically orient all of our hemispherical images such that true north is at the top of the image. This is critical for analyses of understory light, which require plotting the sun path onto the hemisphere.

# Store the pano width to use in scaling and cropping the image
pano_width <- image_info(pano)$width
image_heading <- read_exif(focal_image_path)$PoseHeadingDegrees

The steps to reproject the spherical panorama into an upward-looking hemispherical image go like this:

Crop the upper hemisphere (this is easy with smartphone spheres because the phone’s gyro ensures that the horizon line is always the midpoint of the y-axis).

Cropped upper hemisphere (top half of the image) from an equirectangular projection of a spherical panorama.

Rescale the cropped image into a square to retain the correct scaling when reprojected into polar coordinate space.

Rescaled upper hemisphere from an equirectangular projection of a spherical panorama.

Project into polar coordinate space and flip the perspective so that it is upward-looking.

Polar projection of the upper half of a spherical panorama.

Rotate the image so that the top of the image points true north and crop the image so that the diameter of the circle fills the frame.

Polar projection of the upper half of a spherical panorama rotated to orient to true north and cropped to size.

We can accomplish all of those steps with the code below.

pano_hemisphere <- pano %>%
  # Crop to retain the upper hemisphere
  image_crop(geometry_size_percent(100, 50)) %>%
  # Rescale into a square to keep correct scale when projecting in to polar coordinate space
  image_resize(geometry_size_percent(100, 400)) %>%
  # Remap the pixels into polar projection
  image_distort("Polar",
                c(0),
                bestfit = TRUE) %>%
  image_flip() %>%
  # Rotate the image to orient true north to the top of the image
  image_rotate(image_heading) %>%
  # Rotating expands the canvas, so we crop back to the dimensions of the hemisphere's diameter
  image_crop(paste0(pano_width, "x", pano_width, "-", pano_width/2, "-", pano_width/2))

The resulting image looks funny because the outer pixels are extended by interpolation and we’ve rotated the image which leaves white space at the corners. Most analyses define a bounding perimeter to exclude any pixels outside of the circular hemisphere; so, the weird border shouldn’t matter. But, we can add a black mask to make the images look better.

I’ve included a vector file for a black mask to lay over the image in the repo.

# Get the image mask vector file
image_mask <- image_read("./HemiPhotoMask.svg") %>%
  image_transparent("white") %>%
  image_resize(geometry_size_pixels(width = pano_width, height = pano_width)) %>%
  image_convert("png")

masked_hemisphere <- image_mosaic(c(pano_hemisphere, image_mask))

masked_hemisphere
Masked hemispherical canopy image.

We’ll store the masked hemispheres in their own subdirectory. This script makes that directory if it doesn’t already exist and writes our file into it.

if(dir.exists("./masked_hemispheres/") == FALSE){
  dir.create("./masked_hemispheres/")
} # If the subdirectory doesn't exist, we create it.

masked_hemisphere_path <- paste0("./masked_hemispheres/", focal_image_name, "hemi_masked.jpg") # Set the filepath for the new image

image_write(masked_hemisphere, masked_hemisphere_path) # Save the masked hemispherical image

At this point, you can use the hemispherical image in any program you’d like, either in R or other software. For this example, I’m going to use Chianucci’s hemispheR package in order to keep the entire pipeline in R.

The next step is to import the image. hemispheR allows for lots of fine-tuning. Check out the docs to learn what all of the options are. These settings most closely replicate the processing I used in my 2021 paper.

fisheye <- import_fisheye(masked_hemisphere_path,
                          channel = '2BG',
                          circ.mask = list(xc = pano_width/2, yc = pano_width/2, rc = pano_width/2),
                          gamma = 2.2,
                          stretch = FALSE,
                          display = TRUE,
                          message = TRUE)
Circular hemispheric plot output by hemispheR’s ‘import_fisheye’ function.

Now, we need to binarize the images, converting all sky pixels to white and everything else to black (or at least as close as possible). Again, there are lots of options available in hemispheR, and you can decide which settings are right for you. However, I would suggest keeping the zonal argument set to FALSE. The documentation describes this argument as:

zonal: if set to TRUE, it divides the image in four sectors
(NE,SE,SW,NW directions) and applies an automated classification
separatedly to each region; useful in case of uneven light conditions in
the image

Because spherical panoramas expose each of the 36 source images separately, there is no need to use this correction.

I also suggest keeping the export argument set to TRUE so that the binarized images will be automatically saved into a subdirectory named ‘results’.

binimage <- binarize_fisheye(fisheye,
                 method = 'Otsu',
                 # We do NOT want to use zonal threshold estimation since this is done by the camera
                 zonal = FALSE,
                 manual = NULL,
                 display = TRUE,
                 export = TRUE)
Binarized circular hemispheric plot output by hemispheR’s ‘binarize_fisheye’ function.

Unfortunately, hemispheR does not allow for estimation of understory light metrics like through-canopy radiation or Global Site Factors. If you need light estimates, you’ll have to take the binarized images and follow my instructions and code for implementing Gap Light Analyzer.

Assuming all you need is canopy metrics, we can continue with hemispheR and finalize the whole pipeline in R. We estimate canopy metrics with the gapfrac_fisheye() function.

gapfrac <- gapfrac_fisheye(
  binimage,
  maxVZA = 90,
  # Spherical panoramas are equidistant perforce
  lens = "equidistant",
  startVZA = 0,
  endVZA = 90,
  nrings = 5,
  nseg = 8,
  display = TRUE,
  message = TRUE
)
Binarized circular hemispheric plot with azimuth rings and segments output by hemispheR’s ‘gapfrac_fisheye’ function.

Finally, we can estimate the canopy metrics with the canopy_fisheye() function, join those to the metadata from our image, and output our report.

canopy_report <- canopy_fisheye(
  gapfrac
)

output_report <- xmp_data %>%
  bind_cols(
    canopy_report
  ) %>%
  rename(
    GF = x,
    HemiFile = id
  )

glimpse(output_report)
Rows: 1
Columns: 32
$ SourceFile            "./raw_panos/PXL_20230519_164804198.PHOTOSPHERE_small.jpg"
$ Make                  "Google"
$ Model                 "Pixel 4a"
$ FullPanoWidthPixels   8704
$ FullPanoHeightPixels  4352
$ SourcePhotosCount     36
$ Megapixels            0.37845
$ LastPhotoDate         "2023:05:19 16:49:57.671Z"
$ GPSLatitude           41.33512
$ GPSLongitude          -72.91103
$ GPSAltitude           -23.1
$ PoseHeadingDegrees    86
$ HemiFile              "PXL_20230519_164804198.PHOTOSPHERE_smallhemi_masked.jpg"
$ Le                    2.44
$ L                     3.37
$ LX                    0.72
$ LXG1                  0.67
$ LXG2                  0.55
$ DIFN                  9.981
$ MTA.ell               19
$ GF                    4.15
$ VZA                   "9_27_45_63_81"
$ rings                 5
$ azimuths              8
$ mask                  "435_435_434"
$ lens                  "equidistant"
$ channel               "2BG"
$ stretch               FALSE
$ gamma                 2.2
$ zonal                 FALSE
$ method                "Otsu"
$ thd                   116

Be sure to check out my prior posts on working with hemispherical images.

Visualize mixed effect regressions in R with GGplot2 https://www.azandisresearch.com/2022/12/31/visualize-mixed-effect-regressions-in-r-with-ggplot2/ Sat, 31 Dec 2022 18:34:16 +0000 https://www.azandisresearch.com/?p=2224
Example of the final viz.

In this post, I will show some methods of displaying mixed effect regression models and associated uncertainty using non-parametric bootstrapping. This is kind of a follow-up to my previous post on visualizing custom main effect models.

Introduction

Mixed models have quickly become the model du jour in many observation-oriented fields because these models obviate many of the issues of pseudoreplication inherent in blocked or repeated measures experiments and structured data.

They do this by treating the levels of categorical variables not as unique instances to be parameterized individually, but as random samples from an infinite distribution of levels. Instead of wasting degrees of freedom estimating parameter values for each level, we only need to estimate that global distribution (which requires only a handful of parameters) and instead focus our statistical power on the variables of interest.

Thanks to packages like nlme and lme4, mixed models are simple to implement. For all of their virtues, mixed models can also be a pain to visualize and interpret. Although linear mixed models are conceptually similar to the plain old ordinary least-squares regression we know and love, they harbor a lot more math under the hood, which can be intimidating.

One of the reasons mixed models are difficult to intuitively visualize is because they allow us to manage many levels of uncertainty. Depending on the focus of our analyses, we usually want to focus on certain aspects of the trends and associated uncertainty. For instance, an ecologist might be interested in the effect of nutrient input across many plots, but not interested in the difference between plots (i.e. traditional random effect). Or, an educator might be interested in the effect of different curricula, but not the difference between specific classes within specific schools (i.e. nested random effects). Or, a physician might be interested in the effect of a long-term treatment on a patient after accounting for baseline difference between patients (i.e. repeated measures).

In this tutorial, I’m going to focus on how to visualize the results of mixed effect models from lme4 using ggplot2. You can also clone the annotated code from my Github.

First, load in the necessary libraries.

library(tidyverse)
library(lme4)
library(ggsci)
library(see)
library(cowplot)
theme_set(theme_classic()) # This sets the default ggplot theme

To begin, I am going to simulate an experiment with 10 experimental units each containing 100 observations. These could be 10 plots with 100 random samples, or 10 schools with 100 student test scores, or the records from 100 follow-up visits for each of 10 patients. Each of the experimental units will ultimately get its own intercept and slope effect coefficient. The rand_eff data frame holds the unit-level random effects (the b vector in classic mixed model notation). For this example, I’ll assume that the intercepts come from a distribution with standard deviation of 20 and the slopes from a distribution with standard deviation of 0.5. The random effects define the variation of the experimental units around the main effect, so the mean of these distributions is necessarily 0.

set.seed(666)
rand_eff <- data.frame(unit = as.factor(seq(1:10)),
            b0 = rnorm(10, 0, 20),
            b1 = rnorm(10, 0, 0.5))

We can now join our random effect matrix to the full dataset and define our y values as yij = B0 + b0j + (B1 + b1j)xij + εij, where i indexes observations and j indexes experimental units.

X <- expand.grid(unit = as.factor(seq(1:10)), obs = as.factor(seq(1:100))) %>%
  left_join(rand_eff,
            by = "unit") %>%
  mutate(x = runif(n = nrow(.), 0, 10),
         B0 = 20,
         B1 = 2,
         E = rnorm(n = nrow(.), 0, 10)) %>%
  mutate(y = B0 + b0 + x * (B1 + b1) + E)

Here’s a look at the data.

X %>%
  ggplot(aes(x = x, y = y, col = unit)) +
  geom_point() +
  facet_wrap(vars(unit))
Scatter plots of the simulated data. Each of 10 experimental units contains 100 observations.

Random intercept model

For demonstration, let’s first assume that we are primarily interested in the overall slope of the relationship. For instance, if these are 10 field plots, we might want to know the effect of adding 1 unit of nutrient fertilizer, regardless of the baseline level of nutrients in a given plot.

We can do this by fitting a random intercept model and then looking at the summary of the resulting model.

lmer1 <- lmer(y ~ x + (1|unit), data = X)

summary(lmer1)
> summary(lmer1)
Linear mixed model fit by REML ['lmerMod']
Formula: y ~ x + (1 | unit)
   Data: X

REML criterion at convergence: 7449.7

Scaled residuals: 
     Min       1Q   Median       3Q      Max 
-2.88156 -0.68745  0.01641  0.71022  2.84532 

Random effects:
 Groups   Name        Variance Std.Dev.
 unit     (Intercept) 811.59   28.488  
 Residual              94.63    9.728  
Number of obs: 1000, groups:  unit, 10

Fixed effects:
            Estimate Std. Error t value
(Intercept)  18.2652     9.0301   2.023
x             2.0091     0.1077  18.651

Correlation of Fixed Effects:
  (Intr)
x -0.060

We can see that the fitted model does a good job estimating the fixed effect slope (B1), which we simulated as 2, estimating it as 2.0091. However, the model underestimates the fixed effect intercept (B0) as 18.3 and overestimates the standard deviation of the random intercepts (b0) as 28.5; we simulated both of those values as 20.

If we think we could live with that fit, how would we go about visualizing our model?

Here is a look at our data with a linear regression fit to each experimental unit. It is clear that there is a wide spread in the intercepts, but the slopes are similar.

X %>%
ggplot(aes(x = x, y = y, col = unit)) +
geom_point() +
geom_smooth(method = 'lm', se = F)
Scatter plot of the simulated data with an independent linear regression fit to each experimental unit.

Marginal (Fixed effect) versus Conditional (Fixed + Random effect)

We might be tempted to use this built-in regression by group from ggplot as a visualization of the mixed model. However, this would be WRONG!!! GGplot is fitting an ordinary least squares regression to each unit without accounting for the random effect structure. That means that the estimates and the confidence intervals do not reflect our model. In this case, the estimates might be pretty close since our sample sizes across units are even, but they could be wildly off, or even opposite, of the mixed model slope estimate.

In a prior post, I showed how we can use the predict function to display our custom models in ggplot. In the case of mixed effect models, you can predict both the marginal and conditional values. The marginal value reflects the fixed effects alone. The conditional value reflects the combination of the fixed and random effects.

In other words, the marginal effect is asking “What would I expect y to be for a given x without knowing which experimental unit it came from?” whereas the conditional effect is asking “What would I expect y to be for a given x from a given experimental unit?”

We can specify which prediction we want with the random effect formula argument re.form:

X <- X %>% 
  mutate(fit.m = predict(lmer1, re.form = NA),
         fit.c = predict(lmer1, re.form = NULL))

The simplest visualization would be to display the marginal fit on the raw values.

X %>%
  ggplot(aes(x = x, y = y)) +
    geom_point(pch = 16, col = "grey") +
    geom_line(aes(y = fit.m), col = 1, size = 2) +
    coord_cartesian(ylim = c(-40, 100))
Linear fit of the marginal (fixed) effects (black line) shown with a scatterplot of the raw data (grey dots).

However, this is a bit misleading because it underrepresents our confidence in the slope by making it look like the residuals are huge.

But the residuals, and our confidence in the fit, are based on the conditional residual variance, which is much tighter. We can see that easily when we look at the conditional fits. This is one option for visualization, but it highlights the wrong element if our primary interest is the overall slope trend.

X %>%
  ggplot(aes(x = x, y = y, col = unit)) +
  geom_point(pch = 16) +
  geom_line(aes(y = fit.c, col = unit), size = 2) +
  facet_wrap(vars(unit)) +
  coord_cartesian(ylim = c(-40, 100))
Conditional fits from the random effect model with random intercepts on the raw data points, facetted by experimental unit.

Displaying the conditional fits on the same facet helps. Now we can see the variation in the conditional intercepts, but as a tradeoff it makes it difficult to get a sense of the residual variance because there are too many points.

X %>%
  ggplot(aes(x = x, y = y, col = unit)) +
  geom_point(pch = 16) +
  geom_line(aes(y = fit.c, col = unit), size = 2)  +
  coord_cartesian(ylim = c(-40, 100))
Conditional fits from the random effect model with random intercepts on the raw data points. All experimental units are displayed on the same facet, differentiated by color.

Instead, I think it makes more sense to display the conditional residuals around the marginal effect. You can kind of think of this as collapsing all of the conditional fits from the previous plot into the single marginal fit. We can do this by extracting the residuals (which are the conditional residuals) and then displaying the points as the marginal fit plus the residuals.

X <- X %>%
  mutate(resid = resid(lmer1))

X %>%
  ggplot(aes(x = x, y = fit.m + resid, col = unit)) +
  geom_point(pch = 16) +
  geom_line(aes(y = fit.m), col = 1, size = 2)  +
  coord_cartesian(ylim = c(-40, 100))
Marginal fit from the random effect model with random intercepts on the conditional residuals of the experimental units, differentiated by color.

In some cases, we might also want to give the reader a sense of the variation in the conditional intercepts. For instance, the fact that the slope is so consistent across a wide range of baselines might actually increase our confidence in the relationship even further.

There are a couple of ways to simultaneously display both our confidence in the fit of the marginal trend and the variance in the conditional fits.

Depending on the number of conditional units, one option is to display the conditional fits below the scatter plot of the conditional residuals.

X %>%
  ggplot(aes(x = x, y = fit.m + resid)) +
  geom_line(aes(y = fit.c, col = unit), size = 1) +
  geom_point(pch = 16, col = "grey") +
  geom_line(aes(y = fit.m), col = 1, size = 2) +
  coord_cartesian(ylim = c(-40, 100))
Marginal fit (heavy black line) from the random effect model with random intercepts with the conditional residuals (grey dots) and conditional fits (thin lines) for each experimental unit, differentiated by color.

Another option is to display a density plot or histogram of the estimated conditional intercepts (also known as the Best Linear Unbiased Predictors or BLUPs). In a random effect framework, we are assuming that the conditional intercepts are samples of some infinite distribution of intercepts, so this density from the BLUPs of our model is essentially an empirical representation of that idealized distribution. (Alternatively, we could simply plot the idealized distribution as a normal distribution from the estimated variance of the random effect, but I like the empirical density plot because it also gives a sense of when our conditionals do NOT conform to the assumption of being samples from a normal distribution.)

We can extract the BLUPs from the model object (b0_hat) and add those to the model estimate of the marginal intercept (B0_hat) to get the estimated conditional intercepts. This is our data frame of conditional estimates.

Cond_DF <- as.data.frame(ranef(lmer1)) %>%
  transmute(unit = grp, b0_hat = condval) %>%
  mutate(Intercept_cond = b0_hat + summary(lmer1)$coef[1,1])

X %>%
  ggplot(aes(x = x, y = fit.m + resid)) +
  geom_point(pch = 16, col = "grey") +
  geom_violinhalf(data = Cond_DF, aes(x = 0, y = Intercept_cond), trim = FALSE, width = 3, fill = NA) +
  geom_line(aes(y = fit.m), col = 1, size = 2) +
  coord_cartesian(ylim = c(-40, 100))
Marginal fit (heavy black line) from the random effect model with random intercepts, with the conditional residuals (grey dots) and a density plot of the distribution of conditional intercepts.

Random slope and intercept model

Now let’s imagine that we are not satisfied with the random intercept model and also want to fit a random slope parameter. In this case, we want to estimate the distribution of slopes for all experimental units across the values of x.

lmer2 <- lmer(y ~ x + (x|unit), data = X)

We can see that the values from the model are getting much closer to the known values that we simulated.

summary(lmer2)

> summary(lmer2)
Linear mixed model fit by REML ['lmerMod']
Formula: y ~ x + (x | unit)
   Data: X

REML criterion at convergence: 7442.8

Scaled residuals: 
    Min      1Q  Median      3Q     Max 
-3.2071 -0.6957  0.0268  0.7067  2.8482 

Random effects:
 Groups   Name        Variance Std.Dev. Corr 
 unit     (Intercept) 846.3021 29.0913       
          x             0.2151  0.4638  -0.27
 Residual              93.0853  9.6481       
Number of obs: 1000, groups:  unit, 10

Fixed effects:
            Estimate Std. Error t value
(Intercept)  18.3474     9.2202    1.99
x             2.0075     0.1816   11.05

Correlation of Fixed Effects:
  (Intr)
x -0.250

Correlation of random effects

One important addition from the random intercept-only model is the estimate for the correlation between the distribution of the random slopes and random intercepts (which the model estimates as -0.268, see output below). Because we simulated these data, we know that there is no true correlation between the unit slopes and intercepts. But, because we have a small number of units, we just happened to have an emergent correlation.

summary(lmer2)$varcor

cor.test(rand_eff$b0, rand_eff$b1)
> summary(lmer2)$varcor
 Groups   Name        Std.Dev. Corr  
 unit     (Intercept) 29.09127       
          x            0.46378 -0.268
 Residual              9.64807       
> 
> cor.test(rand_eff$b0, rand_eff$b1)

	Pearson's product-moment correlation

data:  rand_eff$b0 and rand_eff$b1
t = -1.2158, df = 8, p-value = 0.2587
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
 -0.8205150  0.3123993
sample estimates:
       cor 
-0.3949022 

If you had a reason to assume NO correlation between your random effects, you could specify that as (x||unit) in this way:

lmer3 <- lmer(y ~ x + (x||unit), data = X)

summary(lmer3)

summary(lmer3)$varcor
> summary(lmer3)
Linear mixed model fit by REML ['lmerMod']
Formula: y ~ x + ((1 | unit) + (0 + x | unit))
   Data: X

REML criterion at convergence: 7443.3

Scaled residuals: 
    Min      1Q  Median      3Q     Max 
-3.1812 -0.6951  0.0223  0.7074  2.8523 

Random effects:
 Groups   Name        Variance Std.Dev.
 unit     (Intercept) 833.2681 28.8664 
 unit.1   x             0.2092  0.4574 
 Residual              93.1011  9.6489 
Number of obs: 1000, groups:  unit, 10

Fixed effects:
            Estimate Std. Error t value
(Intercept)   18.325      9.149   2.003
x              2.010      0.180  11.165

Correlation of Fixed Effects:
  (Intr)
x -0.035
> 
> summary(lmer3)$varcor
 Groups   Name        Std.Dev.
 unit     (Intercept) 28.86638
 unit.1   x            0.45738
 Residual              9.64889

As with the random intercept model, we can use the predict function to get expected values of y based on the marginal or conditional estimates. Note that re.form = NULL is the same as re.form = ~ (x|unit).

X <- X %>% 
  mutate(fit2.m = predict(lmer2, re.form = NA),
         fit2.c = predict(lmer2, re.form = NULL),
         resid2 = resid(lmer2))

As with the random intercept model, one way to visualize the model is to show the conditional intercept/slopes as fitted lines and the conditional residuals as points.

pmain_lmer2 <- X %>%
  ggplot(aes(x = x, y = fit2.m + resid2)) +
  geom_line(aes(y = fit2.c, col = unit), size = 1) +
  geom_point(pch = 16, col = "grey") +
  geom_line(aes(y = fit2.m), col = 1, size = 2) +
  coord_cartesian(ylim = c(-40, 100))
pmain_lmer2
Marginal fit (heavy black line) from the random effect model with random intercepts and slopes with the conditional residuals (grey dots) and conditional fits (thin lines) for each experimental unit, differentiated by color.

Visualizing the random effect variance gets a bit more difficult with two random parameters. One strategy I like is to include an additional plot of the correlation and distribution of the random effects.

Basically, we make a scatter plot of the BLUPs of the slopes and intercepts. Then we make density plots for each and add those to the margins of the plot. (I’m relying heavily on Claus Wilke’s post to code the marginal distributions.) Finally, we patch it all together with cowplot.

Cond_DF2 <- as.data.frame(ranef(lmer2)) %>% 
  transmute(unit = grp,
            term = case_when(term == "(Intercept)" ~ "b0_hat",
                             term == "x" ~ "b1_hat"),
            value = condval) %>%
  pivot_wider(id_cols = "unit", names_from = "term", values_from = "value") %>%
  mutate(Intercept_cond = b0_hat + summary(lmer2)$coef[1,1],
         Slope_cond = b1_hat + summary(lmer2)$coef[2,1])

pmain <- Cond_DF2 %>%
  ggplot(aes(x = Intercept_cond, y = Slope_cond)) +
    geom_point(aes(col = unit), size = 3) +
    geom_density2d(bins = 4, col = "grey", adjust = 3)

xdens <- axis_canvas(pmain, axis = "x") +
  geom_density(data = Cond_DF2, aes(x = Intercept_cond), fill = "grey", col = NA, trim = FALSE, adjust = 2)

ydens <- axis_canvas(pmain, axis = "y", coord_flip = TRUE) +
  geom_density(data = Cond_DF2, aes(x = Slope_cond), fill = "grey", col = NA, trim = FALSE, adjust = 2) +
  coord_flip()

p1 <- insert_xaxis_grob(pmain, xdens, grid::unit(.2, "null"), position = "top")
p2 <- insert_yaxis_grob(p1, ydens, grid::unit(.2, "null"), position = "right")

pinsert_lmer2 <- ggdraw(p2)

plot_grid(
  pmain_lmer2,
  pinsert_lmer2,
  nrow = 1
)
Left: Marginal fit (heavy black line) from the random effect model with random intercepts and slopes with the conditional residuals (grey dots) and conditional fits (thin lines) for each experimental unit, differentiated by color. Right: Correlation of BLUPs (Best Linear Unbiased Predictors) of random intercept and slope parameters of experimental units, differentiated by color with marginal density distributions.

Bootstrapping uncertainty

For many researchers, one of the most frustrating aspects of mixed models is that estimating confidence intervals and testing the significance of parameters is not straightforward. I highly encourage folks to take a look at Ben Bolker's thorough considerations on the topic, which walk through a range of problems and solutions depending on the structure of your model.

I think that the most generalized solution is to use non-parametric bootstrapping. This method essentially asks the question, “How would our model fit change if we could go back in time, select different samples, and then rerun our analysis?”

We can’t go back in time, but maybe we CAN assume that our original samples were representative of the population. If so, instead of resampling the actual population, we can resample our original observations WITH REPLACEMENT to approximate resamples of the population. If we do this many times, we can then make intuitive statements like, “95% of the time, if we reran this experiment, we’d expect the main effect to be between X and Y.”

It is important to stop and consider what re-doing your data generation process would look like. For instance, imagine our mock data had come from 1000 independent random observations that we then categorized into “units” to control for autocorrelation after the fact. If we re-ran the process to generate a new dataset, we may not always get the same number of observations in each “unit”.

However, if our mock data came from an experiment where we planted 100 trees in each of 10 “units”, then when we re-ran the experiment, we could control the number of individuals per unit. We would also need to consider if we would always choose to plant in the same 10 units, or if we would also choose units at random.

The structure of the data generation process can guide our bootstrap resampling strategy. In the first example, we could simply bootstrap all individual observations (although we may need to worry about non-convergence and small sample sizes). In the second example, where unit choice is constrained, we might decide to bootstrap within units. If, in the second example, we could also randomize units, we should probably take a hierarchical approach, first bootstrapping the units and then bootstrapping the observations within each unit.
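To make those strategies concrete, here is a minimal sketch of the two constrained approaches, assuming the X dataframe from earlier and the tidyverse already loaded (the object names BS_within, BS_units, and BS_hier are just illustrative):

# Strategy 2: resample observations within each unit, keeping every unit.
BS_within <- X %>%
  group_by(unit) %>%
  slice_sample(prop = 1, replace = TRUE) %>%
  ungroup()

# Strategy 3: hierarchical bootstrap -- resample the units with replacement,
# then resample observations within each sampled unit.
BS_units <- sample(unique(X$unit), replace = TRUE)
BS_hier <- purrr::map_dfr(seq_along(BS_units), function(j) {
  X %>%
    filter(unit == BS_units[j]) %>%
    slice_sample(prop = 1, replace = TRUE) %>%
    mutate(unit = paste0("boot_unit_", j)) # relabel so repeated units stay distinct
})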

NOTE: The problem with non-parametric bootstrapping of this kind is that it can be computationally expensive. One trick is to parallelize the bootstraps across all of your computer’s processors. By default, R uses one processor, so it will fit one bootstrap iteration at a time, sequentially. But the bootstraps are independent and the order doesn’t matter, so if your computer has 8 cores, there is no reason not to fit 8 models simultaneously on 8 processors in 1/8 of the time. Unfortunately, setting up for parallel processing can be an adventure of its own. I won’t detail it here, but will try to dedicate a post to it in the future. If you have a very large dataset and can’t run bootstraps in parallel, you might consider some of the other methods suggested by Dr. Bolker.
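If you want to experiment before that post materializes, here is a bare-bones sketch using the base parallel package (one option among several, and an assumption on my part rather than the approach I will cover; note that mclapply() relies on forking, so it will not parallelize on Windows):

library(parallel)

# Fit one bootstrap model per iteration, spread across all but one core.
boot_fits <- mclapply(1:1000, function(i) {
  BS_X <- slice_sample(X, prop = 1, replace = TRUE)
  fixef(lmer(formula = lmer2@call$formula, data = BS_X))
}, mc.cores = detectCores() - 1)

# Stack the fixed effect estimates into one matrix, one row per bootstrap.
BS_params_par <- do.call(rbind, boot_fits)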

Since our dataset is fairly small and simple, I’ll demonstrate how we can use bootstrapping to simultaneously estimate confidence intervals of our model parameters and visualize error bands.

Just keep in mind that if you are fitting models with complex random effect designs, you’ll have to think critically about which elements and levels of variance are most important for your data story. Hopefully, these examples will at least get you started and inspired!

The bootstrapping process begins by initializing empty dataframes to accept the parameter estimates for the fixed effect coefficients and random effect variances. The number of rows in each dataframe will be the same as the number of bootstrap iterations, so we set that first. The number of iterations is the number of times we want to simulate re-doing our data generation. The convention is 1000, but the more the merrier!

Number_of_boots <- 1000

The number of columns for the dataframes will equal the number of fixed effect coefficients and random effect variances. We can extract these from the initial model. First we extract the coefficients, then transpose the table into wide format.

# Extract the fixed effect coefficients.
FE_df <- fixef(lmer2) %>% 
  t() %>%
  as.data.frame()

# Extract the random effects variance and residual variance
RE_df <- VarCorr(lmer2) %>%
  as.data.frame() %>%
  unite("Level", -c(vcov, sdcor)) %>%
  select(-vcov) %>%
  t() %>%
  as.data.frame()

Next, we create empty dataframes to take our bootstraps.

BS_params <- data.frame(matrix(nrow = Number_of_boots, ncol = ncol(FE_df)))
colnames(BS_params) <- colnames(FE_df)

BS_var <- data.frame(matrix(nrow = Number_of_boots, ncol = ncol(RE_df)))
colnames(BS_var) <- RE_df["Level",]

In addition, we will be predicting marginal values from each model, so we need to create a prediction dataframe with an empty column to store the predicted values. For this example, we only need predictions at a handful of x values that represent the range of x. I chose to use 10 quantiles because I want to be able to fit a non-linear confidence band later. If this were a non-linear fit, we might want even more prediction values.

BS_pred <- expand.grid(x = quantile(X$x, probs = seq(0, 1, length.out = 10)),
                       iterration = 1:Number_of_boots,
                       pred = NA)

Finally, we can write a loop that creates a resampled dataset (with replacement) and fits the original model formula to the new dataset. From the new model, we can then extract the fixed effects and random effect variances and predict fitted values for the subset of x values. All of these get stored in their respective dataframes, indexed by the iteration number.

for(i in 1:Number_of_boots){
  BS_X <- slice_sample(X, prop = 1, replace = TRUE)
  BS_lmer <- lmer(formula = lmer2@call$formula,
                  data = BS_X)
  
  BS_params[i,] <- BS_lmer %>%
    fixef() %>%
    t() %>%
    as.data.frame()
  
  BS_var[i,] <- BS_lmer %>%
    VarCorr() %>%
    as.data.frame() %>%
    .$sdcor
  
  BS_pred[which(BS_pred$iterration == i),]$pred <- predict(BS_lmer,
                                                           newdata = BS_pred[which(BS_pred$iterration == i),],
                                                           re.form = ~0)
}

Now we have a dataframe of the marginal (i.e. fixed effect) intercept and slope parameter estimates from 1000 models fit to bootstraps of our original data.

head(BS_params)
> head(BS_params)
  (Intercept)        x
1    18.06942 2.096240
2    18.18005 2.070043
3    18.13506 2.093110
4    18.77862 1.928048
5    18.47963 2.013831
6    18.28875 2.005947

One way to get a sense of our confidence in these parameter estimates is to take a look at their distributions.

BS_hist_x <- BS_params %>%
  ggplot(aes(x = x)) +
  geom_histogram()

BS_hist_intercept <- BS_params %>%
  ggplot(aes(x = `(Intercept)`)) +
  geom_histogram()

BS_hists <- plot_grid(
  BS_hist_intercept,
  BS_hist_x,
  nrow = 1)

BS_hists
Histograms of marginal (i.e. fixed effect) intercept (left) and slope (right) parameters.

These histograms tell us much more than a typical confidence interval because we can see the full distribution. The baseline effect on y given x = 0 is around 18.5, and we are very confident it is between 17 and 20. A 1-unit change in x is expected to yield about a 2.0-unit change in y, and we are extremely confident that the slope is positive: greater than 1.7 but probably less than 2.3.
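If you also want hard numbers to report alongside the histograms, percentile intervals come straight from the bootstrap distributions. For example, a 90% interval plus the median for each fixed effect:

apply(BS_params, 2, quantile, probs = c(0.05, 0.5, 0.95))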

Although we won’t directly visualize the random effect variance, we can see the estimates in the BS_var dataframe.

BS_var %>% head()

> BS_var %>% head()
  unit_(Intercept)_NA unit_x_NA unit_(Intercept)_x Residual_NA_NA
1            28.30440 0.4427450         0.01009018       9.786112
2            30.31353 0.6265168        -0.42315981       9.365630
3            30.33369 0.5261389        -0.78128072       9.320844
4            28.51896 0.6079722        -0.08575830       9.739881
5            30.50690 0.5835157        -0.64068868       9.718633
6            29.17012 0.5182395        -0.28244154       9.930305

Here, the first column is the estimated standard deviation of the random intercepts, the second column is the standard deviation of the random slopes, and the third column is the correlation between the two random effects. The fourth and final column is the residual standard deviation after accounting for the random effects (i.e. the conditional residual variation). This is information you’d want to include in a table of the model output.

We also have a dataframe of 10 predicted values for each iteration.

head(BS_pred, n = 20)
> head(BS_pred, n = 20)
             x iterration     pred
1  0.008197445          1 18.08660
2  1.098954093          1 20.37309
3  2.231045775          1 22.74622
4  3.266109165          1 24.91596
5  4.323574828          1 27.13267
6  5.667717615          1 29.95031
7  6.672807375          1 32.05722
8  7.748743587          1 34.31264
9  8.793447507          1 36.50259
10 9.968969584          1 38.96677
11 0.008197445          2 18.19702
12 1.098954093          2 20.45493
13 2.231045775          2 22.79841
14 3.266109165          2 24.94104
15 4.323574828          2 27.13003
16 5.667717615          2 29.91247
17 6.672807375          2 31.99305
18 7.748743587          2 34.22028
19 8.793447507          2 36.38286
20 9.968969584          2 38.81624

Rather than using a traditional confidence band (which basically reduces the distribution of our bootstraps down to two points: high and low), I prefer to actually show all of the iterations and let the density of the lines make a kind of optical illusion of a confidence band.

Since our estimates are pretty tightly distributed, I’m using cowplot to show what this looks like while also zooming in to a section.

plot_grid(
  BS_pred %>%
    ggplot(aes(x = x, y = pred)) +
    geom_line(aes(group = iterration), alpha = 0.1, col = "grey50") +
    geom_line(data = X,
              aes(x = x, y = fit2.m)) +
    geom_rect(aes(ymin = 35, ymax = 40,
                  xmin = 8, xmax = 10),
              col = "firebrick",
              fill = NA,
              size = 2),
  BS_pred %>%
    ggplot(aes(x = x, y = pred)) +
    geom_line(aes(group = iterration), alpha = 0.1, col = "black") +
    geom_line(data = X,
              aes(x = x, y = fit2.m),
              col = "grey", 
              size = 2) +
    coord_cartesian(xlim = c(8, 10),
                    ylim = c(35, 40)) +
    geom_rect(aes(ymin = 35, ymax = 40,
                  xmin = 8, xmax = 10),
              col = "firebrick",
              fill = NA,
              size = 2) +
    theme(axis.line = element_blank(),
          axis.text = element_blank(),
          axis.title = element_blank()) +
    labs(x = "", y = ""),
  nrow = 1
)
Marginal effects of linear mixed effect regressions fit to 1000 bootstrap resamples of the original data (thin grey lines in the left panel and thin black lines in the right panel). The model estimate fit to the original data is shown as a thick black (left) or grey (right) line. The right panel is a detailed view of the region indicated with a red rectangle in the left panel.

Of course, we can also create a more traditional 90% confidence band by summarizing the 5th and 95th percentiles of the predictions at each x value.

BS_pred %>%
  group_by(x) %>%
  summarise(hi = quantile(pred, 0.95),
            lo = quantile(pred, 0.05)) %>%
  ggplot(aes(x = x)) +
    geom_ribbon(aes(ymin = lo, ymax = hi),
                fill = "grey50",
                alpha = 0.3) +
    geom_line(data = X,
              aes(x = x, y = fit2.m))
Model fit of a linear mixed effect regression (black line) and 90% confidence band (grey band) estimated from 1000 bootstrap resamples.

Putting it all together

Putting it all together, here is my preferred visualization of a mixed effect model with random intercepts and slopes, using bootstrapping to display uncertainty.

BS_ci_lines <- BS_pred %>%
  ggplot(aes(x = x, y = pred)) +
    geom_line(aes(group = iterration), alpha = 0.1, col = "grey") +
    geom_line(data = X,
              aes(x = x, y = fit2.m)) +
    geom_point(data = X,
               aes(x = x, y = fit2.m + resid2, col = unit),
               alpha = 0.3,
               pch = 16)

plot_grid(
  BS_ci_lines,
  plot_grid(
    pinsert_lmer2,
    plot_grid(
      BS_hist_intercept,
      BS_hist_x,
      nrow = 1,
      labels = c("C", "D")),
    nrow = 2,
    rel_heights = c(1, 0.7),
    labels = c("B", NA)
  ),
  nrow = 1,
  rel_widths = c(1, 0.7),
  labels = c("A", NA)
)
Linear fixed effect (A, solid black line) of a mixed effect regression fit to the original data and estimates from 1000 bootstrap resamples (A, thin grey lines) with conditional residuals for experimental units (A, points). Correlation of BLUPs (Best Linear Unbiased Predictors) of random intercept and slope parameters of experimental units with marginal density distributions (B). Histograms of marginal (i.e. fixed effect) intercept (C) and slope (D) parameters estimated from 1000 bootstraps. Colors in A and B differentiate experimental units.

Or, if we want to get really fancy with it, we could inset everything into one plot panel.

BS_ci_lines +
  coord_cartesian(ylim = c(-30, 100)) +
  annotation_custom(ggplotGrob(BS_hists),
                    xmin = 5,
                    xmax = 10,
                    ymin = -30,
                    ymax = 5
  ) +
  annotation_custom(ggplotGrob(pinsert_lmer2),
                    xmin = 0,
                    xmax = 4,
                    ymin = 50,
                    ymax = 110
  )
Arctic Genes in Alaska Magazine https://www.azandisresearch.com/2022/12/10/arctic-genes-in-alaska-magazine/ Sat, 10 Dec 2022 14:14:55 +0000 https://www.azandisresearch.com/?p=2217 An article I wrote about an expedition to collect wood frogs in the Alaska Arctic is now online at Alaska Magazine. I’ve included the teaser below, but check out the whole article here.

Screenshot of the Alaska Magazine website for the article featuring a picture of Andis and Yara doing DNA extractions in a tent. Image by Kaylyn Messer.

I am deep in the Alaskan Arctic, 300 miles from the nearest road system, attempting to conduct the kind of science that usually requires a specialized laboratory. We rowed 30 miles of meandering flatwater today, bringing our total to 200 river miles in 12 days since we landed at a lonely gravel bar on the headwaters of Ambler River in Gates of the Arctic National Park.

Mosquitoes spangle the tent canopy arching over me. Backlit by summer solstice sun, the silhouettes of the insects make an inverted night sky of shifting constellations. The sun never sets on the banks of the Kobuk River this time of year. It hangs high above the horizon even now at 11 p.m., transforming my tent into a solar oven as I, ironically, work to uncover the secrets of a frog that can turn into ice.

Read the rest of the article here.

Biomass Pyramids and Funnel Plots in R with GGplot2 https://www.azandisresearch.com/2022/11/22/biomass-pyramids-and-funnel-plots-in-r-with-ggplot2/ Tue, 22 Nov 2022 21:38:39 +0000 https://www.azandisresearch.com/?p=2209 Every once in a while, I come across a plot form that is not in my ggplot repertoire. For example, I occasionally want to make a funnel plot in my work. I’m ashamed to admit that I usually pop over to Illustrator to make those plots.

But last week, a friend asked me if I knew how to make biomass pyramids in ggplot2. Since I love a challenge and a pyramid plot is just an upside down funnel, I figured I would give it a try.

My solution was to hack the geom_bar() functionality to make a mirrored bar plot, then flip the axis.

Here’s a simple example with some simulated data:

library(tidyverse)

X <- data.frame(level = c("Small", "Medium", "Large"), value = c(25, 50, 100)) 

X %>%
  ggplot(aes(x = fct_reorder(level, value), y = value)) +
  geom_bar(
    stat = "identity", 
    width = 0.9
    ) +
  geom_bar(
    data = . %>% mutate(value = -value),
    stat = "identity",
    width = 0.9
    ) +
  geom_text(aes(label = value, y = 0),
            col = "white",
            vjust = 0) +
  coord_flip() +
  theme_minimal() +
  theme(
    axis.title = element_blank(),
    panel.grid = element_blank(),
    axis.text.x = element_blank()
    )

A basic funnel diagram.

If you want a pyramid instead of a funnel, just reverse the ordering by passing x = fct_reorder(level, -value) to aes() instead of fct_reorder(level, value).

A basic pyramid plot.

There is a lot you can do to customize this basic plot. As an example, I decided to loosely replicate the biomass pyramid example image from Wikipedia.

library(cowplot)

Biomass <- data.frame(System = c(rep("Aquatic Ecosystem", 4), rep("Terrestrial Ecosystem", 4)),
                      level = c("Sea Lion", "Herring", "Zooplankton", "Phytoplankton", "Snakes", "Mice", "Grasshoppers", "Grasses"),
                      value = c(1e3, 1e4, 1e5, 1e6, 15.2, 152, 1520, 15200))

Aqua <- Biomass %>%
  filter(System == "Aquatic Ecosystem") %>%
  mutate(value = log10(value)) %>%
  ggplot(aes(x = fct_reorder(level, -value), y = value, fill = as.numeric(fct_reorder(level, abs(value))))) +
  geom_bar(
    stat = "identity", 
    width = 1) +
  geom_bar(
    data = . %>% mutate(value = -value),
    stat = "identity",
    width = 1
  ) +
  geom_text(aes(label = paste0(10^value, " kg"), y = 0),
            col = "goldenrod",
            vjust = 3,
            cex = 4) +
  geom_text(aes(label = level, y = 0),
            col = "grey",
            vjust = 0,
            cex = 8) +
  coord_flip() +
  theme_minimal() +
  theme(
    axis.title = element_blank(),
    panel.grid = element_blank(),
    axis.text = element_blank(),
    legend.position = "none"
  ) +
  scale_fill_gradient(low = "dodgerblue", high = "navyblue") +
  labs(title = "Aquatic Ecosystem")

Terra <- Biomass %>%
  filter(System == "Terrestrial Ecosystem") %>%
  mutate(value = log10(value)) %>%
  ggplot(aes(x = fct_reorder(level, -value), y = value, fill = as.numeric(fct_reorder(level, abs(value))))) +
  geom_bar(
    stat = "identity", 
    width = 1) +
  geom_bar(
    data = . %>% mutate(value = -value),
    stat = "identity",
    width = 1
  ) +
  geom_text(aes(label = paste0(10^value, " kg"), y = 0),
            col = "goldenrod",
            vjust = 3,
            cex = 4) +
  geom_text(aes(label = level, y = 0),
            col = "grey",
            vjust = 0,
            cex = 8) +
  coord_flip() +
  theme_minimal() +
  theme(
    axis.title = element_blank(),
    panel.grid = element_blank(),
    axis.text = element_blank(),
    legend.position = "none"
  ) +
  scale_fill_gradient(low = "palegreen", high = "darkgreen") +
  labs(title = "Terrestrial Ecosystem")

plot_grid(
  Aqua,
  Terra,
  nrow = 1
)

Biomass pyramids for an aquatic and terrestrial ecosystem.

A few notes on this plot: to remove the space between the bars, use width = 1 in geom_bar().

You’ll also see that I decided to log-scale the values. You have to transform the data before piping it into the ggplot function; otherwise, when we make the values negative for the mirrored bars, the log is undefined.

I also decided to color the bars. In order to get the gradient to scale correctly, we need to order the levels factor when we pass it to the fill argument.

I tried valiantly to make side-by-side figures using facet_wrap() and scales = "free", but found it difficult to get the factor level order correct for the color fill. So, I fell back to making separate plots and paneling them together with cowplot.

Plotting custom models with ggplot2 in R https://www.azandisresearch.com/2022/11/05/plotting-custom-models-with-ggplot2-in-r/ Sat, 05 Nov 2022 15:59:37 +0000 https://www.azandisresearch.com/?p=2193 Last week, I attended a talk at our local Yale data science group by my friend Matteo in which he presented an overview of ggplot2. During the questions after the talk, someone asked how you can plot your own models with ggplot. I realized that this is a very important question that is often overlooked in ggplot tutorials. Sure, we can slap a smooth onto any ggplot, but as academics, we actually need to know the mechanics of the model. Most often, we model a relationship first, and then want to visualize that relationship for the specific instantiation of our model afterward.

Below, I will go through an easy method for simple linear models and a slightly more involved method that can be generalized to any model framework. In this post, I only address fixed-effect models, but I intend to cover mixed-effect models, which incur some extra concerns, in a future post.

We’ll begin by setting up the environment: loading in the tidyverse and the mgcv package for additive modelling. I also like to reset the default ggplot theme to something less ugly.

library(tidyverse)
library(mgcv)
theme_set(theme_classic())

We’ll simulate a fake dataset in the form of a linear relationship with an intercept of 2 and a slope of 0.2.

X <- data.frame(x = seq(0, 20, 0.1)) %>%
  mutate(e = rnorm(nrow(.), sd = 0.5),
         y.lm = 2 + 0.2*x + e)

As mentioned, it is easy to slap a linear fit onto a ggplot with geom_smooth(method = 'lm'). Under the hood, ggplot runs a linear regression and estimates the fit and confidence intervals for us.
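For reference, that one-liner looks like this:

X %>%
  ggplot(aes(x, y.lm)) +
  geom_point() +
  geom_smooth(method = "lm")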

Standard ggplot of a linear model

But what if we want to fit our own model and then visualize it with ggplot? A simple solution for a linear model is to directly pass the model coefficients to geom_abline.

lm.fit <- lm(y.lm ~ x, data = X) 

X %>%
  ggplot(aes(x, y.lm)) +
  geom_point() +
  geom_abline(intercept = coef(lm.fit)[1], slope = coef(lm.fit)[2])

However, this method is limited. It doesn’t allow us to include confidence bands, nor does it allow us to plot anything other than a linear fit.

Let’s say that we have a non-linear relationship. To simulate this, we will generate data from a polynomial (which is actually still a linear model in the sense that y is a linear combination of powers of x).

X <- X %>%
  mutate(y.nlm = 400 + 0.4 * (x - 10)^3 + e*100)
  
P1 <- X %>%
  ggplot(aes(x, y.nlm)) +
  geom_point()
P1

Scatterplot of a non-linear function
In real-world scenarios we would not know the functional form when viewing the data, and we might decide to fit an additive model or a locally smoothed regression.

gam.fit <- gam(y.nlm ~ s(x), data = X)

loess.fit <- loess(y.nlm ~ x, data = X)

The easiest way to visualize the direct results of these non-linear models is to create a prediction data frame with x values that evenly span the range of our data. We can then predict the model fit for these points and use them to plot the line. This is also the most generalizable solution that will work with any model framework.

First, we create a new dataset with a range of x values. Then, we predict the fitted y values from our models.

newX <- data.frame(x = seq(min(X$x), max(X$x), length = 100))

newX <- newX %>%
  mutate(pred.gam = predict(gam.fit, newdata = newX),
         pred.loess = predict(loess.fit, newdata = newX))

To visualize the model, we simply plot the points from the initial dataset, then pass the new dataset into the data argument of geom_line. Note that we also need to pass our new y variable to the y argument of aes.

P1 +
  geom_line(data = newX, aes(y = pred.gam), col = "dodgerblue", size = 2) +
  labs(y = "GAM fit")

P1 +
  geom_line(data = newX, aes(y = pred.loess), col = "firebrick", size = 2) +
  labs(y = "LOESS fit")

Side-by-side scatterplots with a blue generalized additive model line and a red loess model line

We can also use our model to estimate confidence intervals to use in plotting error bands.

newX <- newX %>%
  mutate(pred.gam.se = predict(gam.fit, newdata = newX, type = "link", se.fit = TRUE)$se.fit,
         pred.gam.lo = pred.gam - (1.96 * pred.gam.se),
         pred.gam.hi = pred.gam + (1.96 * pred.gam.se),
         pred.loess.se = predict(loess.fit, newdata = newX, se = TRUE)$se.fit,
         pred.loess.lo = pred.loess - (1.96 * pred.loess.se),
         pred.loess.hi = pred.loess + (1.96 * pred.loess.se))

P1 +
  geom_line(data = newX, aes(y = pred.gam), col = "dodgerblue", size = 1) +
  geom_ribbon(data = newX, aes(y = pred.gam, ymin = pred.gam.lo, ymax = pred.gam.hi), alpha = 0.3, fill = "dodgerblue") +
  labs(y = "GAM fit")

P1 +
  geom_line(data = newX, aes(y = pred.loess), col = "firebrick", size = 1) +
  geom_ribbon(data = newX, aes(y = pred.loess, ymin = pred.loess.lo, ymax = pred.loess.hi), alpha = 0.3, fill = "firebrick") +
  labs(y = "LOESS fit")

Side-by-side scatterplots with a blue generalized additive model line and a red loess model line including error bands

Web scraping and text analysis in R and GGplot2 https://www.azandisresearch.com/2022/08/30/web-scraping-and-text-analysis-in-r/ Tue, 30 Aug 2022 12:04:44 +0000 https://www.azandisresearch.com/?p=2123 I recently needed to learn text mining for a project at work. I generally learn more quickly with a real-world project, so I turned to a topic I love, Wilderness, to see how I could apply the skills of web scraping and natural language processing. You can clone my Git repo for the project or follow along in the post below. The first portion of this post will cover web scraping, then text mining, and finally analysis and visualization.

Introduction

I thought it would be interesting to see how different Wilderness Areas are described. Having spent a lot of time in Wilderness Areas across the country, I know that they are extremely diverse. We can use text mining and subsequent analysis to examine just how that regional diversity is perceived and portrayed.

Infographic depicting the most common and most unique words used to describe US Wilderness Areas
This is the final product visualizing descriptions of Wilderness Areas in the US.

Wilderness.net provides a brief description (which I assume is written by the agency Wilderness managers) for each of the 803 designated Wilderness Areas in the United States [data disclaimer]. For example, here is the page for one of my favorite Wilderness Areas: South Baranof.

As always, we first load packages and set up the environment.

library(rvest) # For web scraping
library(qdap) # For tagging parts of speech
library(tidytext) # Text mining
library(hunspell) # For word stemming
library(quanteda) # Text analysis
library(cowplot) # Composite figures
library(ggrepel) # For text labels in visualizations
library(wordcloud2) # Word cloud visualizations
library(tidyverse) # For everything.

# I find it easier to install wordcloud2 from the repo:
# library(devtools)
# devtools::install_github("lchiffon/wordcloud2")

Web scraping

I found the video tutorials here very helpful. In those tutorials, the author employs a Chrome extension, SelectorGadget, to isolate the page elements that you want to scrape. Unfortunately, Wilderness.net uses an extremely convoluted page structure without helpful CSS selectors, so I could not find a way to isolate the Wilderness descriptions using CSS tags. Instead, I scrape all of the body text and then use strsplit to cleave out the portions of the text I am interested in keeping.

Conveniently, each Wilderness Area gets its own page on Wilderness.net, and those pages are numbered consecutively in the page address. For example, South Baranof Wilderness is number 561: “wilderness.net/visit-wilderness/?ID=561”. This means that we can simply loop through each page, scrape the text, and store it for further analysis.

First, we need to set up an empty matrix that we will populate with the data we scrape from Wilderness.net. From the text, we want to isolate and store the Wilderness name, the state that contains the most acreage, the year of designation, the federal agency (or agencies) charged with its management, and the description text.

Colnames <- c("Wild", "State", "Year", "Agency", "Descrip")
WildText <- matrix(nrow = 804, ncol = length(Colnames))
colnames(WildText) <- Colnames

Now we loop through all of the pages to populate the empty matrix with our data.

for(i in 1:nrow(WildText)){
  link = paste0("https://wilderness.net/visit-wilderness/?ID=", i)
  page = read_html(link)
  content <- page %>%
    html_nodes("#maincontent , h1") %>%
    html_text()

  WildText[i, 1] <- content %>%
    strsplit("wilderness = '") %>%
    unlist() %>%
    `[`(2) %>%
    strsplit("';\nvar") %>%
    unlist() %>%
    `[`(1)
  
  WildText[i, 2] <- content %>%
    strsplit("stateAbbr = '") %>%
    unlist() %>%
    `[`(2) %>%
    strsplit("';\nvar") %>%
    unlist() %>%
    `[`(1)
  
  WildText[i, 3] <- content %>%
    strsplit("yearDesignated = '") %>%
    unlist() %>%
    `[`(2) %>%
    strsplit("';\nvar") %>%
    unlist() %>%
    `[`(1)
  
  WildText[i, 4] <- content %>%
    strsplit("agency = '") %>%
    unlist() %>%
    `[`(2) %>%
    strsplit(";\nvar") %>%
    unlist() %>%
    `[`(1) %>%
    gsub("[^a-zA-Z ]", "", .)
  
   WildText[i, 5] <- content %>%
    strsplit("<h2>Introduction</h2></div>") %>%
    unlist() %>%
    `[`(2) %>%
    strsplit("Leave No Trace\n\t\t\t") %>%
    unlist() %>%
    `[`(1) %>%
    strsplit(";\n\n\n") %>%
    unlist() %>%
    `[`(2)
}

Now we convert the matrix to a tibble and check to make sure our scraping rules captured all of the descriptions.

WildText <- as_tibble(WildText) # Convert the matrix to a tibble.

MissingDescrip <- WildText %>%
  mutate(WID = row_number()) %>%
  filter(is.na(Descrip)) %>%
  .$WID

In MissingDescrip we find that 44 Wilderness Areas are missing descriptions. So, we need to alter the rules a bit and re-scrape those pages.

for(i in MissingDescrip){
  link = paste0("https://wilderness.net/visit-wilderness/?ID=", i)
  page = read_html(link)
  
  WildText[i, 5] <- page %>%
    html_nodes("#maincontent") %>%
    html_text() %>%
    strsplit(paste0(";\nvar WID = '", i, "';\n\n\n")) %>%
    unlist() %>%
    `[`(2) %>%
    strsplit("Leave No Trace\n\t\t\t") %>%
    unlist() %>%
    `[`(1)
}

There are still a couple of Wildernesses with missing information: Wisconsin Islands Wilderness #654 and Okefenokee Wilderness #426. Each of these areas has idiosyncratic text elements, so we can write specific rules to pull the description for each.

# Wisconsin Islands Wilderness #654
link = paste0("https://wilderness.net/visit-wilderness/?ID=", 654)
page = read_html(link)

WildText[654, 5] <- page %>%
  html_nodes("#maincontent") %>%
  html_text() %>%
  strsplit(";\nvar WID = '654';\n\n\n") %>%
  unlist() %>%
  `[`(2) %>%
  strsplit("Closed Wilderness Area") %>%
  unlist() %>%
  `[`(1)

# Okefenokee Wilderness #426
link = paste0("https://wilderness.net/visit-wilderness/?ID=", 426)
page = read_html(link)

WildText[426, 5] <- page %>%
  html_nodes("#maincontent") %>%
  html_text() %>%
  strsplit("WID = '426';\n\n\n") %>%
  unlist() %>%
  `[`(2) %>%
  strsplit("Leave No TracePlan Ahead and Prepare:") %>%
  unlist() %>%
  `[`(1)

The management of many Wilderness Areas is assigned to two agencies. We need to parse those concatenated agency names.

WildText <- WildText %>%
  mutate(
    Agency = case_when(
      Agency == "Bureau of Land ManagementForest Service" ~ "Bureau of Land Management; Forest Service",
      Agency == "Bureau of Land ManagementNational Park Service" ~ "Bureau of Land Management; National Park Service",
      Agency == "Forest ServiceNational Park Service" ~ "Forest Service; National Park Service",
      Agency == "Fish and Wildlife ServiceForest Service" ~ "Fish and Wildlife Service; Forest Service",
      TRUE ~ Agency
    ),
    WID = row_number()
  ) %>%
  filter(!is.na(Wild))

It would be difficult to analyze 804-way comparisons. Instead, I want to group the Wilderness Areas by broad regions. I’m defining the Eastern Region as the states that border the Mississippi plus those states to the east. The Western Region is everything to the west. Alaska, which contains almost half of the nation’s Wilderness acreage [link to post], is its own Region. I’ve grouped Hawaii and Puerto Rico into an Island Region, but because there are only 3 Areas in those places, we won’t have enough data to analyze them.

WildText <- WildText %>%
  mutate(Region = case_when(State %in% c("MT", "CA", "NM", "WA", "NV", "AZ", "UT", "OR", "SD", "TX", "CO", "WY", "ND", "ID", "NE", "OK") ~ "West",
                            State %in% c("MN", "FL", "PA", "IL", "TN", "VA", "KY", "MO", "VT", "GA", "MI", "AR", "NJ", "MS", "WI", "LA", "NC", "SC", "ME", "IN", "AL", "WV", "NY", "NH", "MA", "OH") ~ "East",
                            State == "AK" ~ "Alaska",
                            State == "HI" | State == "PR" ~ "Island"))

At this point, I like to save these data so I don’t need to re-scrape every time I run the analysis. Next time, I can simply pick back up by redefining the WildText object.

saveRDS(WildText, "./WildernessDescriptions.rds")
WildText <- readRDS("./WildernessDescriptions.rds")

Text Mining

Now that we’ve successfully scraped our data, we can begin text mining. I found Silge & Robinson’s book, “Text Mining with R”, invaluable.

There are a number of approaches to dealing with text that fall on a spectrum from relatively simple text mining to more complex natural language processing. For this exercise we will conduct a simple analysis that compares the words and their relative frequencies. To do so, we need to decompose the descriptions from word strings to individual words.

First, we remove non-text characters.

WT <- WildText %>%
  mutate(Descrip = gsub("\\.", " ", Descrip),
         Descrip = gsub("  ", " ", Descrip),
         Descrip = gsub("[^\x01-\x7F]", "", Descrip))

Next, we tag the parts of speech for each word in the descriptions.

WT$pos <- with(WT, qdap::pos_by(Descrip))$POStagged$POStagged

POS_list <- strsplit(WT$pos, " ") # Break string down into list of tagged words

Next, we create a tidy dataframe of the parts of speech and retain only the informational words (nouns, verbs, adjectives, and adverbs). We also remove “stop words” or common words that tend not to provide a strong signal (or the wrong signal).

WT2 <- data.frame(Wild = rep(WT$Wild, sapply(POS_list, length)), words = unlist(POS_list)) %>% # Convert lists of tagged words into tidy format, one word per row
  separate(words, into = c("word", "POS"), sep = "/") %>% # Create matching columns for the word and the tag
  filter(POS %in% c("JJ", "JJR", "JJS", "NN", "NNS", "RB", "RBR", "RBS", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ")) %>% # Keep only the nouns, verbs, adjectives, and adverbs
  anti_join(stop_words) # Remove stop words

Next we lemmatize the words (i.e. extract the stem word, for instance, peaks = peak and starkly = stark).

WT2$stem <- NA
for(i in 1:nrow(WT2)){
  WT2$stem[i] <- hunspell_stem(WT2$word[i]) %>% unlist() %>% .[1]
  print(paste0(sprintf("%.2f", 100 * i / nrow(WT2)), "% completed")) # Progress indicator (fraction converted to a percentage).
}
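As an aside, hunspell_stem() also accepts a whole character vector, so if the row-by-row loop is slow on your data, a vectorized sketch of the same first-stem rule might look like this:

# hunspell_stem() returns a list with one character vector of candidate
# stems per input word; keep the first candidate, or NA if there is none.
stem_list <- hunspell_stem(WT2$word)
WT2$stem <- sapply(stem_list, function(s) if (length(s) > 0) s[1] else NA_character_)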

Finally, we can add the regions from the original dataset onto the tidy dataset and save the data object for future use.

WT2 <- WT2 %>%
  left_join(WildText %>% select(Wild, Region))

At this point, I like to save these data so we don’t need to reprocess every time we run the analysis.

# saveRDS(WT2, "./WildernessByWord.rds")
WT2 <- readRDS("./WildernessByWord.rds")

Analysis and Visualization

After all that effort of web scraping and text mining, we can finally begin to take a look at our data. One of the first questions one might ask is: do managers tend to write longer descriptions for Areas depending on the region?

WT2 %>%
  group_by(Region, Wild) %>%
  summarise(WordCount = length(word)) %>%
  ggplot(aes(x = fct_reorder(Region, WordCount, median), col = Region, y = WordCount)) +
  geom_jitter(size = 3, alpha = 0.4, pch = 16, height = 0) +
  geom_boxplot(width = 0.3, alpha = 0.7) +
  scale_y_log10() +
  scale_color_manual(values = c("#008dc0", "#8ba506", "grey", "#bb5a00"))

Boxplot of words per Area showing that Areas in Alaska have longer descriptions on average compared to the West or East regions.

Descriptions of Alaskan Wilderness Areas tend to be longer than those for other regions. While descriptions for Areas in Hawaii and Puerto Rico also have a high median, there are too few to make strong comparisons.

Another common question is: which words are used most frequently? We can answer that most simply by looking for the words with the highest use counts across all descriptions.

 WT2 %>% 
    group_by(word) %>%
    tally() %>%
    arrange(desc(n)) %>%
    top_n(20)

Unsurprisingly, “wilderness” is the most frequently used word, by far. Factual descriptors like “feet”, “miles”, “elevation”, and “boundary” are common. Features like “mountain”, “trail”, “river”, “wildlife”, “desert”, and “forest” are also common.

Interestingly, no emotive adjectives make the list. John Muir would be disappointed! We can also pull out only the adjectives.

WT2 %>% 
    filter(POS %in% c("JJ", "JJS", "JJR")) %>%
    group_by(word, Wild) %>%
    tally() %>%
    group_by(Wild) %>%
    filter(n == max(n)) %>%
    group_by(word) %>%
    tally() %>%
    arrange(desc(n)) %>%
    top_n(20)

Other than the prosaic “national” and direction words, we can see some descriptors of the sublime, like “steep”, “rugged”, “rocky”, and “wild”, that would make the transcendentalist progenitors of the wilderness concept proud.

To make the main figure for this project, I really wanted to use word clouds. Word clouds aren’t a great visualization for data interpretation, but they are a fun gimmick and make for interesting visuals!

Unfortunately, R doesn’t have the best support for generating fancy wordclouds. The package wordcloud is solidly reliable, but can only make very basic images. The package wordcloud2 allows for far more customization, but it is extremely buggy and requires some work-arounds on most systems.

I want the regional word clouds to conform to the Region’s shape. I made some simple shape images (here: East, West, Alaska) that we can pass to wordcloud2.

Depending on your system, the image may not populate in RStudio’s plotting window. If you don’t see the plot, try clicking the button to “show in new window” which will open a browser tab. Try refreshing the browser tab a few times until the wordcloud generates. Unfortunately, there is no way to constrain the aspect ratio of the cloud, so you will need to resize your browser window to a square. Then you can right-click and save the image. …like I said, wordcloud2 requires far too many work-arounds, but it is the only option for word clouds with custom shapes in R.

Below I’m showing the code for generating the Western region. It should be easy to alter for the other regions (all of the code is on github).

WT2 %>%
  filter(Region == "West") %>%
  filter(word != "wilderness") %>%
  count(word) %>%
  filter(n > 10) %>%
  wordcloud2(figPath = "./West.png",
             maxRotation = 0,
             minRotation = 0,
             color = "#bb5a00",
             fontFamily = 'serif')

Wordclouds for Alaska, West, and East regions.
I pulled the wordcloud images into Illustrator to make the final adjustments. (You could certainly do all of this in R, but I’m much faster at Illustrator and prefer to use a visual program for graphic design decisions.)

Because wordclouds are not useful for quantitative interpretation, I also want to make some histograms of the most common words associated with each region. Again, I’m only showing code for the Western region below; all of the code is on GitHub.

WT2 %>%
  filter(Region == "West") %>%
  filter(stem != "wilderness") %>%
  count(stem) %>%
  arrange(desc(n)) %>%
  top_n(40) %>%
  ggplot(aes(x = fct_reorder(stem, n), y = n)) +
  geom_bar(stat = "identity", fill = "#bb5a00", width = 0.7) +
  coord_flip() +
  labs(y = "", x = "") +
  theme(axis.text.y = element_text(color = "#bb5a00"),
        axis.line.y = element_blank(),
        panel.grid.major = element_blank(),
        axis.text = element_text(family = "serif"))

Histograms of the most common words used to describe Wilderness areas in the Alaska, West, and East regions.

Another question we could ask is: which words most distinguish regions from other regions? For example, “mountain” and “trail” are high frequency words in the Western region, but they also occur at high frequency in the Eastern region, as well. So, these terms don’t help us distinguish between regions. Instead we can estimate log ratios of word occurrence between regions. Log ratios conveniently scale symmetrically around zero. Greater absolute values indicate words particularly relevant to one region and smaller values indicate words that are equally relevant to both regions.
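In symbols, with the add-one smoothing used in the code below, each word w gets a smoothed proportion per region, and the log ratio compares them:

pRegion(w) = ( n(w) + 1 ) / sum( n(w) + 1 ),
log ratio(w) = log( pEast(w) / pWest(w) )

where n(w) is the word’s count in that region’s descriptions and the sum runs over all words in the region. The +1 keeps words that are absent from one region from producing log(0).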

wordratios <- WT2 %>%
  filter(word != "wilderness") %>%
  filter(Region == "East" | Region == "West") %>%
  count(stem, Region) %>%
  group_by(stem) %>% # group by stem so the count filter applies per word
  filter(sum(n) > 10) %>%
  ungroup() %>%
  spread(Region, n, fill = 0) %>%
  mutate_if(is.numeric, list(~(. + 1)/sum(. + 1))) %>%
  mutate(logratio = log(East/West)) %>%
  arrange(desc(logratio))

wordratios %>%
  arrange(abs(logratio)) # Small log ratios indicate terms that are equally likely to be from East or West

WT2 %>%
  count(stem, Region) %>%
  group_by(stem) %>%
  filter(sum(n) >= 10) %>%
  ungroup() %>%
  pivot_wider(names_from = Region, values_from = n, values_fill = 0) %>%
  mutate_if(is.numeric, list(~(. + 1) / (sum(.) + 1))) %>%
  mutate(logratio.EW = log(East / West)) %>%
  arrange(abs(logratio.EW)) %>%
  slice(1:20) %>%
  mutate(class = "mid") %>%
  bind_rows(
    WT2 %>%
      count(stem, Region) %>%
      group_by(stem) %>%
      filter(sum(n) >= 10) %>%
      ungroup() %>%
      pivot_wider(names_from = Region, values_from = n, values_fill = 0) %>%
      mutate_if(is.numeric, list(~(. + 1) / (sum(.) + 1))) %>%
      mutate(logratio.EW = log(East / West)) %>%
      arrange((logratio.EW)) %>%
      slice(1:20) %>%
      mutate(class = "west")
  ) %>%
  bind_rows(
    WT2 %>%
      count(stem, Region) %>%
      group_by(stem) %>%
      filter(stem != "roger") %>%
      filter(sum(n) >= 10) %>%
      ungroup() %>%
      pivot_wider(names_from = Region, values_from = n, values_fill = 0) %>%
      mutate_if(is.numeric, list(~(. + 1) / (sum(.) + 1))) %>%
      mutate(logratio.EW = log(East / West)) %>%
      arrange(desc(logratio.EW)) %>%
      slice(1:20) %>%
      mutate(class = "east")
  ) %>%
  ggplot(aes(x = fct_reorder(stem, logratio.EW), 
             y = logratio.EW, 
             col = class)) +
  geom_segment(aes(xend = fct_reorder(stem, logratio.EW), 
                   y = case_when(class == "west" ~ -0.1,
                                 class == "mid" ~ 0,
                                 class == "east" ~ 0.1),
                   yend = logratio.EW)) +
  geom_point(data = . %>% filter(class == "west"), aes(size = exp(abs(logratio.EW))), pch = 16) +
  geom_point(data = . %>% filter(class == "east"), aes(size = exp(abs(logratio.EW))), pch = 16) +
  geom_text(data = . %>% filter(class == "west"), aes(label = stem, y = 0), hjust = 0) +
  geom_text(data = . %>% filter(class == "east"), aes(label = stem, y = 0), hjust = 1) +
  geom_text(data = . %>% filter(class == "mid"), aes(label = stem, y = 0)) +
  coord_flip() +
  scale_color_manual(values = c("#8ba506", "grey70", "#bb5a00")) +
  theme(axis.text.y = element_blank(),
        axis.line.y = element_blank(),
        panel.grid.major = element_blank()) +
  labs(x = "",
       y = "Log ratio ('Uniqueness' of words for a region)",
       title = "Which words are most unique to a Wilderness?")


In the image above we can see the 20 most unique words that distinguish the Eastern from the Western regions. “Laurel”, “swamp”, “key”, and “bay” are characteristic of the East, while “desert”, “lion”, “wash”, and “alpine” are almost exclusively used to describe Western areas. Words like “safety”, “coastal”, and “glimpse” are commonly used in both regions. Interestingly, the word “west” is used commonly in Eastern descriptions. I was also surprised to see that “lynx” and “glacial” are equally common.

Log ratios aren’t as useful when we have more than two groups to compare. In our case, where we want to compare multiple groups (i.e. West, East, and Alaska), we can find the words that most distinguish one region from the others by computing the term frequency-inverse document frequency (tf-idf). This metric is computed by multiplying the number of times a word is used in a given “document” by the inverse of that word’s frequency across all “documents”. In this case, we treat all descriptions from a region as a single “document”. Similar to log ratios, tf-idf lets us know how relevant a term is to a given document, but allows us to compare across many documents.
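In symbols, for a word w and document d:

tf-idf(w, d) = tf(w, d) * idf(w) = ( n(w, d) / n(d) ) * ln( D / D(w) )

where n(w, d) is the count of w in d, n(d) is the total word count of d, D is the number of documents (three regions here), and D(w) is the number of documents containing w. You can check this against the output below: words appearing in all three regions would get idf = ln(3/3) = 0 and drop out, words in two regions get idf = ln(3/2) = 0.405, and words unique to one region get idf = ln(3/1) = 1.10.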

WT2 %>%
  filter(Region != "Island") %>%
  filter(!is.na(stem)) %>%
  filter(stem != "roger") %>%
  count(Region, stem, sort = TRUE) %>%
  bind_tf_idf(stem, Region, n) %>%
  group_by(Region) %>%
  top_n(10, tf_idf) %>%
  arrange(desc(Region), desc(tf_idf)) %>%
  print(n = 40)
# A tibble: 32 x 6
# Groups:   Region [3]
   Region stem           n       tf   idf   tf_idf
 1 West   desert       533 0.00852  0.405 0.00346 
 2 West   wash         125 0.00200  1.10  0.00220 
 3 West   bighorn       95 0.00152  1.10  0.00167 
 4 West   mesa          90 0.00144  1.10  0.00158 
 5 West   golden        79 0.00126  1.10  0.00139 
 6 West   creosote      78 0.00125  1.10  0.00137 
 7 West   fir          203 0.00325  0.405 0.00132 
 8 West   sagebrush     73 0.00117  1.10  0.00128 
 9 West   tortoise      71 0.00114  1.10  0.00125 
10 West   badlands      63 0.00101  1.10  0.00111 
11 East   laurel        38 0.00180  1.10  0.00198 
12 East   hardwood      90 0.00427  0.405 0.00173 
13 East   key           68 0.00323  0.405 0.00131 
14 East   oak           66 0.00313  0.405 0.00127 
15 East   illustrate    20 0.000949 1.10  0.00104 
16 East   swamp         53 0.00251  0.405 0.00102 
17 East   logging       37 0.00175  0.405 0.000712
18 East   maple         34 0.00161  0.405 0.000654
19 East   turkey        34 0.00161  0.405 0.000654
20 East   branch        33 0.00157  0.405 0.000635
21 Alaska fjord         17 0.00243  1.10  0.00267 
22 Alaska anchorage     11 0.00157  1.10  0.00173 
23 Alaska tundra        28 0.00400  0.405 0.00162 
24 Alaska prince         9 0.00128  1.10  0.00141 
25 Alaska wale           9 0.00128  1.10  0.00141 
26 Alaska chuck          8 0.00114  1.10  0.00125 
27 Alaska whale         20 0.00286  0.405 0.00116 
28 Alaska lion          17 0.00243  0.405 0.000984
29 Alaska alpine        16 0.00228  0.405 0.000926
30 Alaska cook           5 0.000714 1.10  0.000784
31 Alaska frigid         5 0.000714 1.10  0.000784
32 Alaska warren         5 0.000714 1.10  0.000784

Words like “desert”, “wash”, “bighorn”, and “mesa” are highly indicative of the West. The East is described most distinctly by its plant species (“laurel”, “oak”, “maple”) and by terms like “key”, which refers to the islands of the southeast. Alaska is dominated by intuitive terms like “fjord”, “tundra”, and “alpine” and sea animals like “whale” and sea “lion”. Place names also rise to high relevance for Alaska with terms like “Anchorage”, “Prince of Wales Island”, and “Cook Inlet”.

When plotting differences between East and West in log-ratios, above, it made sense to use a diverging bar graph (or lollipop graph, specifically). But with more than a two-way comparison, visualization gets more complicated.

After a few iterations, I settled on visualizing the most distinctive terms (i.e. terms with highest tf-idf) as growing larger from a common point. I accomplished this by wrapping the plot around a circular coordinate system. Terms that are larger and further from the axes of other regions are more distinctive to the focal region.

Before plotting, I also remove the words associated with proper noun place names.

WT2 %>%
  filter(Region != "Island") %>%
  filter(!is.na(stem)) %>%
  filter(!stem %in% c("roger", "anchorage", "prince", "wale", "cook", "warren", "admiralty", "coronation")) %>%
  count(Region, stem, sort = TRUE) %>%
  bind_tf_idf(stem, Region, n) %>%
  group_by(Region) %>%
  top_n(30, tf_idf) %>%
  ungroup() %>%
  mutate(ordering = as.numeric(as.factor(Region)) + (tf_idf*100),
         stem = fct_reorder(stem, ordering, .desc = FALSE)) %>%
  mutate(tf_idf = tf_idf * 100) %>%
  ggplot(aes(x = fct_relevel(Region, c("East", "West", "Alaska")), label = stem, y = tf_idf, col = Region)) +
  geom_point() +
  coord_flip() +
  scale_color_manual(values = c("#008dc0", "#8ba506", "#bb5a00")) +
  scale_y_log10(limits = c(.025, 0.35)) +
  coord_polar() +
  geom_text_repel(aes(cex = tf_idf), max.overlaps = 100, family = "serif", segment.linetype = 0) +
  theme(panel.grid.major.x = element_blank(),
        axis.line = element_blank(),
        axis.text = element_blank()) +
  labs(x = "", y = "")

Radial plot with more unique words on the outside.

With all of the components created, I pulled everything into Illustrator to design the final infographic.

Infographic depicting the most common and most unique words used to describe US Wilderness Areas
This is the final product visualizing descriptions of Wilderness Areas in the US.

There is certainly a lot more that one could learn from these data. For instance, do descriptions differ by management agency? Would we find stronger divergence in language used to describe North versus South regions in the lower 48 rather than East-West? Nevertheless, this was a useful exercise for learning a bit more about web scraping, text mining, and word analysis.

Fractured Aviary https://www.azandisresearch.com/2022/07/29/fractured-aviary/ Fri, 29 Jul 2022 10:51:25 +0000 https://www.azandisresearch.com/?p=2087 During the pandemic, my partner, Bayla, and I began taking daily walks down to Yale's campus. We often noticed dead birds at the base of the glass walls that wrap the Yale School of Management building when we passed by.

Because we both have working relationships with the Peabody Museum of Natural History, we began saving the bird specimens for the museum's collection. Through that partnership, we learned that the School of Management building is one of the most lethal pieces of architecture on Yale's campus. We also met Viveca Morris at the Yale Law Ethics and Animals Program, who had been helping to organize city-wide bird-strike data collections and spearheading a push to adopt bird-friendly building ordinances in New Haven.

One of the main barriers to enacting mitigation measures at the SOM building was the lack of a hard accounting of the total number of birds killed, which allowed the building's administrators to downplay the problem. So, along with Viveca, we began a systematic survey of bird strikes at SOM. I'll write more about that in a future post.

We also began thinking about the larger picture. How could we get more folks to recognize the magnitude of deaths due to thoughtless architecture? And how could we inspire folks to demand that businesses, architects, and municipalities adopt bird-friendly design?

[Instagram embed: a post shared by Bayla Arietta (@baylaart)]

Bayla began painting some of the specimens we found. She posted a painting of five warblers we collected on a single day at SOM. The response was huge. That image seemed to have struck a chord. We realized that art could be a way to simultaneously introduce the topic and inspire emotions toward enacting change.

[Instagram embed: a post shared by Bayla Arietta (@baylaart)]

Bayla contacted Talon and Antler galleries in Portland, Oregon, which feature some of our favorite contemporary artists and tend toward natural themes.

They agreed to let us curate a show. Over the next few months, Bayla contacted artists whose work fit the theme. In total, 62 artists contributed original pieces to the show, titled "Fractured Aviary", which hung for the month of June 2022.

If you missed the show, you can see some of my favorites below: