
R in Ecotoxicology and Environmental Sciences

Species Sensitivity Distributions (SSD) With R

Introduction

Species Sensitivity Distributions (SSD) are a central tool in ecological risk assessment (ERA). Species differ in their sensitivity to chemicals, and this variation between species can be described by a statistical distribution. From an SSD, the concentration at which x% of species are affected (the $HC_x$) can be derived. Usually the HC5 is derived (with its 95% confidence interval) and used in ERA.
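In other words, if $F$ denotes the cumulative distribution function fitted to the species' toxicity values, the $HC_x$ is simply the corresponding quantile, $HC_x = F^{-1}(x/100)$.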

Data

In this example I will generate an SSD for the insecticide Chlorpyrifos (CAS 2921-88-2). SSDs are generated from toxicity data (like EC50 or LC50 values). Such data is available e.g. from the US EPA ECOTOX database, ECHA or ETOX.

I prepared some data from the US EPA ECOTOX database for this post. I will skip the data cleaning and data quality checks here - but note that this data has not been checked thoroughly and was prepared for demonstration purposes only. However, data cleaning and checking is a very important step in every data analysis.

You can read the data into R with the following lines:

# download data from github
require(RCurl)
url <- getURL("https://raw.githubusercontent.com/EDiLD/r-ed/master/post_ssd/ssd_data.csv",
    ssl.verifypeer = FALSE)
df <- read.table(text = url, header = TRUE, sep = ',', stringsAsFactors = FALSE)

A first look at the data

SSDs are typically displayed as a plot showing the fraction of affected species on the y-axis and the concentration on the x-axis.

To calculate the fraction of affected species, we order the species by their toxicity values and then compute the cumulative fraction:

df <- df[order(df$val), ]
df$frac <- ppoints(df$val, 0.5)

Then we can take a first look at the data:

require(ggplot2)
ggplot(data = df) +
  geom_point(aes(x = val, y = frac), size = 5) +
  geom_text(aes(x = val, y = frac, label = species), hjust = 1.1, size = 4) +
  theme_bw() +
  scale_x_log10(limits=c(0.0075, max(df$val))) +
  labs(x = expression(paste('Concentration of Chlorpyrifos [ ', mu, 'g ', L^-1, ' ]')),
       y = 'Fraction of species affected')

[Figure: fraction of species affected vs. concentration of Chlorpyrifos, species labelled]

Fitting a distribution to the data

To fit a distribution to the data we can use the fitdistr() function from the MASS package or the more flexible fitdist() from the fitdistrplus package (there are also others). I will use the MASS package here to fit a lognormal distribution to this data.

require(MASS)
fit <- fitdistr(df$val, 'lognormal')
fit
##    meanlog     sdlog 
##   0.20235    1.55028 
##  (0.26204)  (0.18529)

The parameters meanlog and sdlog (the mean and standard deviation on the log scale) were estimated from the data, with their standard errors in parentheses. We could fit and compare (e.g. by AIC) different distributions, but I stick with the lognormal here.
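
As a quick sketch of how such a comparison could look (using the fitdistrplus package, which is not otherwise used in this post), one could, for example, fit a lognormal and a Weibull distribution and compare their AIC values:

require(fitdistrplus)
# fit two candidate distributions to the toxicity values
fit_ln <- fitdist(df$val, 'lnorm')
fit_wb <- fitdist(df$val, 'weibull')
# compare by AIC (lower = better)
c(lognormal = fit_ln$aic, weibull = fit_wb$aic)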

Derive HC5

From the estimated parameters of the fitted distribution we can easily extract the HC5.

(hc5 <- qlnorm(0.05, meanlog = fit$estimate[1], sdlog = fit$estimate[2]))
## [1] 0.095596

Parametric bootstrap

To be more conservative, the lower limit of the confidence interval (CI) around the HC5 is sometimes used. This lower limit can be estimated from the data using a parametric bootstrap.

The idea is:

  1. generate random values from the fitted distribution
  2. fit the distribution to these random values
  3. estimate the HC5 from this newly fitted distribution
  4. repeat steps 1-3 many times to assess the variability of the HC5 values

Alternatively, a non-parametric bootstrap could be used (resample from the raw data, not from the fitted distribution); a small sketch of this is given at the end of this section.

In R we write a function (myboot()) that does steps 1-3 for us:

myboot <- function(fit, p){
  # resample from fitted distribution
  xr <- rlnorm(fit$n, meanlog = fit$estimate[1], sdlog = fit$estimate[2])
  # fit the distribution to the resampled data
  fitr <- fitdistr(xr, 'lognormal')
  # return HCp
  hc5r <- qlnorm(p, meanlog = fitr$estimate[1], sdlog = fitr$estimate[2])
  return(hc5r)
}

We repeat this function 1000 times and get the quantiles of the bootstrapped HC5 values:

set.seed(1234)
hc5_boot <- replicate(1000, myboot(fit, p = 0.05))
quantile(hc5_boot, probs = c(0.025, 0.5, 0.975))
##     2.5%      50%    97.5% 
## 0.046027 0.102427 0.214411

So for this data and the lognormal distribution the HC5 would be 0.096, with a 95% CI of [0.046; 0.214].
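
As mentioned above, a non-parametric bootstrap is an alternative: resample the raw toxicity values with replacement instead of drawing from the fitted distribution. A minimal sketch (reusing fitdistr() from MASS) could look like this:

myboot_np <- function(x, p){
  # resample the raw data with replacement
  xr <- sample(x, replace = TRUE)
  # refit the lognormal distribution to the resample
  fitr <- fitdistr(xr, 'lognormal')
  # return HCp
  qlnorm(p, meanlog = fitr$estimate[1], sdlog = fitr$estimate[2])
}
set.seed(1234)
hc5_npboot <- replicate(1000, myboot_np(df$val, p = 0.05))
quantile(hc5_npboot, probs = c(0.025, 0.5, 0.975))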

A fancy plot

Finally, I generate a fancy SSD plot showing the fitted distribution (red), the bootstrap samples (blue), the 95% CI (dashed) and the raw data (dots):

myboot2 <- function(fit, newxs){
  # resample
  xr <- rlnorm(fit$n, meanlog = fit$estimate[1], sdlog = fit$estimate[2])
  # fit to resample
  fitr <- fitdistr(xr, 'lognormal')
  # predict for new data
  pyr <- plnorm(newxs, meanlog = fitr$estimate[1], sdlog = fitr$estimate[2])
  return(pyr)
}
# newdata to predict
newxs <- 10^(seq(log10(0.01), log10(max(df$val)), length.out = 1000))
boots <- replicate(1000, myboot2(fit, newxs))

# fancy plot
require(reshape2)
# extract bootstrap values
bootdat <- data.frame(boots)
bootdat$newxs <- newxs
bootdat <- melt(bootdat, id = 'newxs')
# extract CI
cis <- apply(boots, 1, quantile, c(0.025, 0.975))
rownames(cis) <- c('lwr', 'upr')
# add fitted values
pdat <- data.frame(newxs, py = plnorm(newxs, meanlog = fit$estimate[1], sdlog = fit$estimate[2]))
# add CI
pdat <- cbind(pdat, t(cis))
# x coordinates for species names (better would be to use the lower CI instead of the fit)
df$fit <- 10^(log10(qlnorm(df$frac, meanlog = fit$estimate[1], sdlog = fit$estimate[2])) -0.4)
# plot
ggplot()+
  geom_line(data = bootdat, aes(x = newxs, y = value, group = variable), col = 'steelblue', alpha = 0.05) +
  geom_point(data = df, aes(x = val, y = frac)) +
  geom_line(data = pdat, aes(x = newxs, y = py), col = 'red') +
  geom_line(data = pdat, aes(x = newxs, y = lwr), linetype = 'dashed') +
  geom_line(data = pdat, aes(x = newxs, y = upr), linetype = 'dashed') +
  geom_text(data = df, aes(x = fit, y = frac, label = species), hjust = 1, size = 4) +
  theme_bw() +
  scale_x_log10(breaks = c(0.1, 1, 10, 100, 1000), limits = c(0.003, max(df$val))) +
  labs(x = expression(paste('Concentration of Chlorpyrifos [ ', mu, 'g ', L^-1, ' ]')),
       y = 'Fraction of species affected')

[Figure: SSD for Chlorpyrifos - fitted lognormal (red), bootstrap samples (blue), 95% CI (dashed), raw data (points)]

Note that this bootstrap function is slightly different: for this plot I wanted a separate (blue) curve for each bootstrap sample, so the function returns the predicted fraction affected along the whole concentration gradient instead of a single HC5.

Web Scraping Pesticides Data With R

This week I had to find the CAS numbers for a bunch of pesticides. Moreover, I also needed information about the major groups of these pesticides (e.g. herbicides, fungicides, …), and some of the names were in German.

ETOX is quite useful for finding CAS numbers, even for German names, as their database also contains synonyms.

Another useful source is the Compendium of Pesticide Common Names by Allan Wood.

Since I had more than 500 compounds on my list, this was not feasible to do by hand. So I wrote two small functions (etox_to_cas() and allanwood()) to search and retrieve information from these two websites.

Both are available from my esmisc package on github.com. These are small functions using the RCurl and XML packages for scraping. They have not been tested very much and may not be very robust.
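
Just to illustrate the general RCurl + XML pattern these functions build on (the URL and XPath below are placeholders for illustration, not the ones actually used in esmisc):

require(RCurl)
require(XML)
# hypothetical sketch of a scraper: fetch a page, parse it, extract a node
scrape_example <- function(query){
  # fetch the search result page (placeholder URL)
  page <- getURL(paste0('http://example.org/search?q=', URLencode(query)))
  # parse the returned HTML
  doc <- htmlParse(page)
  # pull out the text of the node of interest (placeholder XPath)
  xpathSApply(doc, "//td[@class = 'cas']", xmlValue)
}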

Query CAS from ETOX

require(esmisc)
etox_to_cas('2,4-D')
## [1] "94-75-7"

If you have a bunch of compounds, you can use sapply() to feed them to etox_to_cas():

sapply(c('2,4-D', 'DDT', 'Diclopfop'), etox_to_cas)
##     2,4-D       DDT Diclopfop 
## "94-75-7" "50-29-3"        NA

Query CAS and pesticide group from Allan Wood

allanwood('Fluazinam')
##                                CAS                           activity 
##                       "79622-59-6" "fungicides (pyridine fungicides)"
sapply(c('Fluazinam', 'Diclofop', 'DDT'), allanwood)
##          Fluazinam                         
## CAS      "79622-59-6"                      
## activity "fungicides (pyridine fungicides)"
##          Diclofop                                         
## CAS      "40843-25-2"                                     
## activity "herbicides (aryloxyphenoxypropionic herbicides)"
##          DDT                                                                                                            
## CAS      "50-29-3"                                                                                                      
## activity "acaricides (bridged diphenyl acaricides; organochlorine acaricides)insecticides (organochlorine insecticides)"

Quantitative Ecotoxicology, Page 223, Example 5.1

This is example 5.1 on page 223 of Quantitative Ecotoxicology - reproduced with R.

Load and clean the data

Read the data into R:

df <- read.table(header = TRUE, text = 'conc A B C D
0 1 1 0.9 0.9
32 0.8 0.8 1 0.8
64 0.9 1 1 1 
128 0.9 0.9 0.8 1
256 0.7 0.9 1 0.5
512 0.4 0.3 0.4 0.2')
df
##   conc   A   B   C   D
## 1    0 1.0 1.0 0.9 0.9
## 2   32 0.8 0.8 1.0 0.8
## 3   64 0.9 1.0 1.0 1.0
## 4  128 0.9 0.9 0.8 1.0
## 5  256 0.7 0.9 1.0 0.5
## 6  512 0.4 0.3 0.4 0.2

Next we do some housekeeping: convert the data to long format and the concentration to a factor.

require(reshape2)
# to long
dfm <- melt(df, id.vars = 'conc', value.name = 'y', variable.name = 'tank')
# conc as factor
dfm$conc <- factor(dfm$conc)
head(dfm)
##   conc tank   y
## 1    0    A 1.0
## 2   32    A 0.8
## 3   64    A 0.9
## 4  128    A 0.9
## 5  256    A 0.7
## 6  512    A 0.4

Let’s have a first look at the data:

boxplot(y ~ conc, data = dfm,
        xlab = 'conc', ylab = 'Proportion surv.')

[Figure: boxplots of the proportion surviving per concentration]

Transform response

Next we apply the arcsine transformation:

dfm$y_asin <- ifelse(dfm$y == 1,
                     asin(sqrt(dfm$y)) - asin(sqrt(1/40)),
                     asin(sqrt(dfm$y))
                     )

This adds the transformed values as column y_asin to our data.frame. Survival proportions of 1 (100%) are transformed as

$y' = \arcsin(\sqrt{p}) - \arcsin\left(\sqrt{1/(4n)}\right)$

where p = 1 and n is the number of animals per replicate (here n = 10, hence the 1/40 in the code).

All other values are transformed using

$y' = \arcsin(\sqrt{p})$

If we had observations with 0% survival, these zero values should be transformed using

$y' = \arcsin\left(\sqrt{1/(4n)}\right)$

by adding an extra ifelse() (see the sketch below).
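
A sketch of the complete transformation, including the extra ifelse() for 0% survival (assuming, as above, n = 10 animals per replicate, so 1/(4n) = 1/40), could look like this:

n_animals <- 10   # assumed number of animals per replicate (from the 1/40 above)
dfm$y_asin <- ifelse(dfm$y == 1,
                     asin(sqrt(dfm$y)) - asin(sqrt(1 / (4 * n_animals))),
                     ifelse(dfm$y == 0,
                            asin(sqrt(1 / (4 * n_animals))),
                            asin(sqrt(dfm$y))))

For this particular data set the result is identical, since there are no observations with 0% survival.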

Let’s look at the transformed values:

head(dfm)
##   conc tank   y  y_asin
## 1    0    A 1.0 1.41202
## 2   32    A 0.8 1.10715
## 3   64    A 0.9 1.24905
## 4  128    A 0.9 1.24905
## 5  256    A 0.7 0.99116
## 6  512    A 0.4 0.68472
boxplot(y_asin ~ conc, data = dfm,
        xlab = 'conc', ylab = 'Transformed prop. surv.')

[Figure: boxplots of the arcsine-transformed proportions per concentration]

Doesn’t look that different…

ANOVA

To fit an ANOVA to this data we use the aov() function:

mod <- aov(y_asin ~ conc, data = dfm)

And summary() gives the anova table:

summary(mod)
##             Df Sum Sq Mean Sq F value   Pr(>F)    
## conc         5  1.575  0.3151    13.3 0.000016 ***
## Residuals   18  0.426  0.0237                     
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The within-treatment variance is termed Residuals and the between-treatment variance is named after the predictor conc. The total variance is simply the sum of the two and is not displayed.

R has already performed an F-test for us, indicated by the F value (the ratio of the two mean squares) and Pr(>F) columns.
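
As a quick cross-check, the F value and its p-value can be recomputed by hand from the ANOVA table:

aov_tab <- anova(mod)
# F = between-treatment mean square / within-treatment mean square
F_val <- aov_tab['conc', 'Mean Sq'] / aov_tab['Residuals', 'Mean Sq']
# p-value from the F distribution with 5 and 18 degrees of freedom
pf(F_val, df1 = 5, df2 = 18, lower.tail = FALSE)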

Multiple Comparisons

Now that we know there is a statistically significant treatment effect, we might be interested in which treatments differ from the control group.

The Tukey contrasts mentioned in the book (comparing every level with every other level) can easily be computed with the multcomp package:

require(multcomp)
summary(glht(mod, linfct = mcp(conc = 'Tukey')))
## 
## 	 Simultaneous Tests for General Linear Hypotheses
## 
## Multiple Comparisons of Means: Tukey Contrasts
## 
## 
## Fit: aov(formula = y_asin ~ conc, data = dfm)
## 
## Linear Hypotheses:
##                Estimate Std. Error t value Pr(>|t|)    
## 32 - 0 == 0     -0.1472     0.1088   -1.35   0.7530    
## 64 - 0 == 0      0.0407     0.1088    0.37   0.9989    
## 128 - 0 == 0    -0.0762     0.1088   -0.70   0.9795    
## 256 - 0 == 0    -0.2211     0.1088   -2.03   0.3634    
## 512 - 0 == 0    -0.7273     0.1088   -6.69   <0.001 ***
## 64 - 32 == 0     0.1879     0.1088    1.73   0.5327    
## 128 - 32 == 0    0.0709     0.1088    0.65   0.9850    
## 256 - 32 == 0   -0.0740     0.1088   -0.68   0.9820    
## 512 - 32 == 0   -0.5802     0.1088   -5.33   <0.001 ***
## 128 - 64 == 0   -0.1170     0.1088   -1.08   0.8849    
## 256 - 64 == 0   -0.2619     0.1088   -2.41   0.2055    
## 512 - 64 == 0   -0.7681     0.1088   -7.06   <0.001 ***
## 256 - 128 == 0  -0.1449     0.1088   -1.33   0.7643    
## 512 - 128 == 0  -0.6511     0.1088   -5.98   <0.001 ***
## 512 - 256 == 0  -0.5062     0.1088   -4.65   0.0023 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## (Adjusted p values reported -- single-step method)

However, this leads to 15 comparisons (and tests), and we may not be interested in all of them. Note that at $\alpha = 0.05$ we expect roughly 1 in 20 tests to be a false positive if we do not correct for multiple testing.
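
Assuming independent tests, the chance of at least one false positive among 15 unadjusted tests is already above 50%:

# family-wise error rate for 15 independent tests at alpha = 0.05
1 - (1 - 0.05)^15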

An alternative is to compare only the treatments to the control group. These are called Dunnett contrasts and lead to just 5 comparisons.

The syntax is the same, just change Tukey to Dunnett:

summary(glht(mod, linfct = mcp(conc = 'Dunnett')))
## 
## 	 Simultaneous Tests for General Linear Hypotheses
## 
## Multiple Comparisons of Means: Dunnett Contrasts
## 
## 
## Fit: aov(formula = y_asin ~ conc, data = dfm)
## 
## Linear Hypotheses:
##              Estimate Std. Error t value Pr(>|t|)    
## 32 - 0 == 0   -0.1472     0.1088   -1.35     0.54    
## 64 - 0 == 0    0.0407     0.1088    0.37     0.99    
## 128 - 0 == 0  -0.0762     0.1088   -0.70     0.93    
## 256 - 0 == 0  -0.2211     0.1088   -2.03     0.20    
## 512 - 0 == 0  -0.7273     0.1088   -6.69   <0.001 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## (Adjusted p values reported -- single-step method)

The column Estimate gives us the difference in means between the respective treatment and the control, and Std. Error gives the standard error of this estimate. Both are combined into a t value (= Estimate / Std. Error), from which the p-value (Pr(>|t|)) is derived.

Note that the p-values are already corrected for multiple testing, as indicated at the bottom of the output. If you want to change the correction method you can use:

summary(glht(mod, linfct = mcp(conc = 'Dunnett')), test = adjusted('bonferroni'))

This applies a Bonferroni correction; see ?p.adjust and ?adjusted for other methods.
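
For example, one could compare the unadjusted p-values with a Holm correction (both are available via adjusted(), see ?adjusted):

# no adjustment for multiple testing
summary(glht(mod, linfct = mcp(conc = 'Dunnett')), test = adjusted('none'))
# Holm correction
summary(glht(mod, linfct = mcp(conc = 'Dunnett')), test = adjusted('holm'))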

Outlook

Warton & Hui (2011) demonstrated that the arcsine transform should not be used for analysing proportions, whether the data are binomial or not. Similarly, O’Hara & Kotze (2010) showed that count data should not be log-transformed.

In a future post I will show how to analyse this data without transformation using Generalized Linear Models (GLM), and perhaps add some simulations showing that GLMs can lead to increased statistical power for ecotoxicological data sets.
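
Just as a teaser (this is not the full analysis), such a GLM could be specified roughly as follows, assuming 10 animals per tank (as implied by the 1/(4*10) = 1/40 term in the transformation above):

n_animals <- 10                            # assumed number of animals per tank
dfm$alive <- round(dfm$y * n_animals)      # back-calculate the number of survivors
mod_glm <- glm(cbind(alive, n_animals - alive) ~ conc,
               data = dfm, family = binomial)
summary(mod_glm)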

Note that I couldn’t find any reference to Generalized Linear Models in Newman (2012) or EPA (2002), although they have been around for roughly 40 years (Nelder & Wedderburn, 1972).

References

  • Warton, D. I., & Hui, F. K. (2011). The arcsine is asinine: the analysis of proportions in ecology. Ecology, 92(1), 3-10.

  • O’Hara, R. B., & Kotze, D. J. (2010). Do not log‐transform count data. Methods in Ecology and Evolution, 1(2), 118-122.

  • Newman, M. C. (2012). Quantitative ecotoxicology. Taylor & Francis, Boca Raton, FL.

  • EPA (2002). Methods for Measuring the Acute Toxicity of Effluents and Receiving Waters to Freshwater and Marine Organisms.

  • Nelder J.A., & Wedderburn R.W.M. (1972). Generalized Linear Models. Journal of the Royal Statistical Society Series A (General) 135:370–384.

Web Scraping Chemical Data With R

I recently came across the problem to convert CAS numbers into SMILES and retrieve additional information about the compound.

There are several sources on the web that provide chemical information, e.g. PubChem, ChemSpider and the Chemical Identifier Resolver.

I wrote some functions to interact with these servers from R. You can find them in my esmisc package:

install.packages('devtools')
require(devtools)
install_github('EDiLD/esmisc')
require(esmisc)

These functions are very crude and need further development (if you want to improve them, fork the package!). However, here’s a short summary:

Convert CAS to SMILES

Suppose we have some CAS numbers and want to convert them to SMILES:

casnr <- c("107-06-2", "107-13-1", "319-86-8")
Via Cactus
cactus(casnr, output = 'smiles')
## [1] "C(Cl)CCl"                       "C(C#N)=C"                      
## [3] "C1(Cl)C(Cl)C(Cl)C(Cl)C(Cl)C1Cl"
Via ChemSpider

Note that ChemSpider requires a security token. To obtain one, please register at ChemSpider.

csid <- get_csid(casnr, token = token)
csid_to_smiles(csid, token)
## [1] "C(CCl)Cl"                                                   
## [2] "C=CC#N"                                                     
## [3] "[C@@H]1([C@@H]([C@@H]([C@@H]([C@H]([C@@H]1Cl)Cl)Cl)Cl)Cl)Cl"
Via PubChem
cid <- get_cid(casnr)
cid_to_smiles(cid)
## [1] "C(CCl)Cl"                       "C=CC#N"                        
## [3] "C1(C(C(C(C(C1Cl)Cl)Cl)Cl)Cl)Cl"

Retrieve other data from CAS

All these web resources provide additional data. Here is an example retrieving the molecular weights:

Via Cactus
cactus(casnr, output = 'mw')
## [1] "98.9596"  "53.0634"  "290.8314"
Via ChemSpider
csid_to_ext(csid, token)$MolecularWeight
## [1] "98.95916"  "53.06262"  "290.82984"
Via PubChem
cid_to_ext(cid)$mw
## [1] "98.959160"  "53.062620"  "290.829840"

ChemSpider and PubChem return the same values; however, the results from Cactus are slightly different.

Retrieve partitioning coefficients

Partition coefficients are another useful property. LOGKOW is a databank that contains experimental data, retrieved from the literature, on over 20,000 organic compounds.

get_kow() extracts the ‘Recommended values’ for a given CAS:

get_kow(casnr)
## [1] 1.48 0.25 4.14

This function is very crude. For example, it returns only the first hit if multiple hits are found in the database - a better way would be to ask for user input, as we did in the taxize package.
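
A sketch of how such user interaction could look (a hypothetical helper, not part of esmisc):

# hypothetical helper: let the user pick one of several database hits
pick_hit <- function(hits){
  if (length(hits) == 1)
    return(hits)
  i <- menu(hits, title = 'Multiple hits found - please choose one:')
  if (i == 0) NA else hits[i]
}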

Outlook

Currently I have no time to extensively develop these functions. I would be happy if someone picks up this work - it’s fairly easy: just fork the repo and start.

In the future this could be turned into an rOpenSci package, as it is within their scope.

Collinearity

It has been quiet here over the last months - the main reason is that I’m working on my master’s thesis. I have already prepared some more examples from ‘Quantitative Ecotoxicology’, but haven’t gotten around to publishing them here.

This post is about collinearity and its implications for linear models. The best way to explore this is by simulation - creating data with known properties and seeing what happens.

Create correlated random variables

The first problem for this simulation was: How can we create correlated random variables?

This simulation is similar to the one in Legendre & Legendre (2012), where it is also mentioned that one can use a Cholesky decomposition to create correlated variables:

What the function below does:

  • Create two normal random variables (X1, X2 ~ N(0, 1))
  • Create desired correlation matrix
  • Compute the Cholesky factorization of the correlation matrix (R)
  • Apply the resulting matrix on the two variables (this rotates, shears and scales the variables so that they are correlated)
  • Create a dependent variable y after the model:

y ~ 5 + 7*X1 + 7*X2 + e with e ~ N(0, 1)

#############################################################################
# create two correlated variables and a dependent variable
# n : number of points
# p : correlation between the two variables

datagen <- function(n , p){
  # random points N(0,1)
  x1 <- rnorm(n)
  x2 <- rnorm(n)
  X <- cbind(x1, x2)

  # desired correlation matrix
  R <- matrix(c(1, p, p, 1), ncol = 2)

  # use cholesky decomposition `t(U) %*% U = R`
  U <- chol(R)
  corvars <- X %*% U

  # create dependent variable after model:
  # y ~ 5 + 7 * X1 + 7 * X2 + e with e ~ N(0, 1)
  y <- 5 + 7*corvars[,1] + 7*corvars[,2] + rnorm(n, 0, 1)
  df <- data.frame(y, corvars)
  return(df)
}

Let’s see if it works. This creates two variables with 1000 observations each, a correlation of 0.8 between them, and a dependent variable.

df1 <- datagen(n = 1000, p = 0.8)

The correlation between X1 and X2 is, as desired, close to 0.8:

cor(df1)
##          y      X1      X2
## y  1.00000 0.94043 0.94715
## X1 0.94043 1.00000 0.79139
## X2 0.94715 0.79139 1.00000

And the data follows the specified model.

mod <- lm(y ~ X1 + X2, data = df1)
summary(mod)
## 
## Call:
## lm(formula = y ~ X1 + X2, data = df1)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -2.7375 -0.6712  0.0561  0.6225  2.9057 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   5.0210     0.0311     161   <2e-16 ***
## X1            6.9209     0.0517     134   <2e-16 ***
## X2            7.0813     0.0498     142   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.984 on 997 degrees of freedom
## Multiple R-squared:  0.995,	Adjusted R-squared:  0.995 
## F-statistic: 9.14e+04 on 2 and 997 DF,  p-value: <2e-16
pairs(df1)

[Figure: pairs plot of y, X1 and X2]

Methods to spot collinearity

Dormann et al. (2013) list eight methods to spot collinearity (see their Table 1). I will only show how to calculate two of them (but see the appendix of their paper for code for all methods):

Absolute value of correlation coefficients r

(cormat <- cor(df1[, 2:3]))
##         X1      X2
## X1 1.00000 0.79139
## X2 0.79139 1.00000

Dormann et al. (2013) found that a correlation between predictor variables of |r| > 0.7 is an appropriate indicator for when collinearity begins to severely distort model estimation.
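
With more than two predictors, the pairs exceeding this threshold can be flagged programmatically, for example:

# indices of predictor pairs with |r| > 0.7
which(abs(cormat) > 0.7 & upper.tri(cormat), arr.ind = TRUE)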

Variance Inflation Factors

require(car)
vif(mod)
##     X1     X2 
## 2.6759 2.6759

Which is equivalent to (for variable X1):

sum <- summary(lm(X1 ~ X2, data = df1))
1/(1 - sum$r.squared)
## [1] 2.6759

Unfortunately there are many ‘rules of thumb’ associated with VIF: > 10, > 4, …
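
For the two-predictor case $VIF = 1/(1 - r^2)$, so these thresholds can be translated directly into correlations:

# |r| corresponding to VIF thresholds of 4 and 10 (two-predictor case)
sqrt(1 - 1 / c(4, 10))
# ~0.87 and ~0.95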

Simulation 1: How does collinearity affect the precision of estimates?

Here we simulate datasets with correlations ranging from -0.8 to 0.8:

ps <- seq(-0.8, 0.8, 0.005)
n = 1000

sim1 <- lapply(ps, function(x) datagen(n = n, p = x))

Next we fit a linear model to each of the datasets and extract the coefficients table:

# Function to fit and extract coefficients
regfun <- function(x) {
    return(summary(lm(y ~ X1 + X2, data = x))$coefficients)
}
# run regfun on every dataset
res1 <- lapply(sim1, regfun)

Finally we extract the Standard Errors and plot them against the degree of correlation:

# extract Standard Errors from results
ses <- data.frame(ps, t(sapply(res1, function(x) x[, 2])))

# plot
require(ggplot2)
require(reshape2)
ses_m <- melt(ses, id.vars = "ps")
ggplot(ses_m, aes(x = ps, y = value)) + geom_point(size = 3) + facet_wrap(~variable) +
    theme_bw() + ylab("Std. Error") + xlab("Correlation")

[Figure: standard errors of the estimates vs. correlation]

It can clearly be seen that collinearity inflates the standard errors of the correlated variables, while the intercept is not affected.
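
This matches the theoretical expectation that the standard errors scale with $\sqrt{VIF} = \sqrt{1/(1 - r^2)}$. A rough check against the simulated values (relative to the standard error at r close to 0):

# observed inflation of the standard error of X1, relative to r ~ 0
se0 <- ses$X1[which.min(abs(ses$ps))]
infl_obs  <- ses$X1 / se0
# theoretical inflation sqrt(VIF) = sqrt(1 / (1 - r^2))
infl_theo <- sqrt(1 / (1 - ses$ps^2))
head(data.frame(ps = ses$ps, infl_obs, infl_theo))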

With large standard errors, the parameter estimates also become more variable, which is demonstrated in the next simulation:

Simulation 2: Are estimated parameters stable under collinearity?

Here I simulate 100 datasets for each of several correlations, in order to see how stable the estimates are.

The code creates the datasets, fits a linear model to each dataset and extracts the estimates.

n_sims <- 100
ps <- seq(-0.8, 0.8, 0.05)

# function to generate n_sims datasets for a given p
sim2 <- function(p) {
    sim <- lapply(1:n_sims, function(x) datagen(n = 1000, p = p))
    res <- lapply(sim, regfun)
    est <- t(sapply(res, function(x) x[, 1]))
    out <- data.frame(p, est)
    return(out)
}

res2 <- lapply(ps, function(x) {
    return(sim2(x))
})

Then we create a plot of the estimates for the three coefficients; each boxplot represents 100 datasets.

ests <- lapply(res2, melt, id.vars = "p")
ests <- do.call(rbind, ests)

ggplot(ests, aes(y = value, x = factor(p))) + geom_boxplot() + facet_wrap(~variable,
    scales = "free_y") + theme_bw() + geom_hline(aes(yintercept = 7), data = subset(ests,
    variable %in% c("X1", "X2")), col = "red", lwd = 1) + ylab("Estimate") +
    xlab("Correlation")

[Figure: boxplots of the estimates for each correlation, true values in red]

The red line indicates the true coefficients used to generate the data (7 for X1 and 7 for X2). We see that the spread of the estimates increases as the correlation increases.

This is confirmed by looking at the standard deviation of the estimates:

sds <- data.frame(ps, t(sapply(res2, function(x) apply(x[, 2:4], 2, sd))))
sds_m <- melt(sds, id.vars = "ps")
ggplot(sds_m, aes(x = ps, y = value)) + geom_point(size = 3) + facet_wrap(~variable) +
    theme_bw() + ylab("SD of estimates from 100 simulations") + xlab("Correlation")

[Figure: SD of the estimates from 100 simulations vs. correlation]

If the standard errors are large enough, the parameter estimates may become so variable that even their sign changes.

References

  • Dormann, C., Elith, J., Bacher, S., Buchmann, C., Carl, G., Carré, G., Marquéz, J., Gruber, B., Lafourcade, B., Leitão, P., Münkemüller, T., McClean, C., Osborne, P., Reineking, B., Schröder, B., Skidmore, A., Zurell, D., & Lautenbach, S. (2013). Collinearity: a review of methods to deal with it and a simulation study evaluating their performance. Ecography, 36(1), 27-46. doi:10.1111/j.1600-0587.2012.07348.x

  • Legendre, P., & Legendre, L. (2012). Numerical Ecology.