

R in Ecotoxicology and Environmental Sciences

Quantitative Ecotoxicology, Page 223, Example 5.1

This is example 5.1 on page 223 of Quantitative Ecotoxicology - reproduced with R.

Load and clean the data

Read the data into R:

df <- read.table(header = TRUE, text = 'conc A B C D
0 1 1 0.9 0.9
32 0.8 0.8 1 0.8
64 0.9 1 1 1 
128 0.9 0.9 0.8 1
256 0.7 0.9 1 0.5
512 0.4 0.3 0.4 0.2')
df
##   conc   A   B   C   D
## 1    0 1.0 1.0 0.9 0.9
## 2   32 0.8 0.8 1.0 0.8
## 3   64 0.9 1.0 1.0 1.0
## 4  128 0.9 0.9 0.8 1.0
## 5  256 0.7 0.9 1.0 0.5
## 6  512 0.4 0.3 0.4 0.2

Next we do some house-keeping: convert the data to long format and the concentration to factor.

require(reshape2)
# to long
dfm <- melt(df, id.vars = 'conc', value.name = 'y', variable.name = 'tank')
# conc as factor
dfm$conc <- factor(dfm$conc)
head(dfm)
##   conc tank   y
## 1    0    A 1.0
## 2   32    A 0.8
## 3   64    A 0.9
## 4  128    A 0.9
## 5  256    A 0.7
## 6  512    A 0.4

Let’s have a first look at the data:

boxplot(y ~ conc, data = dfm,
        xlab = 'conc', ylab = 'Proportion surv.')

plot of chunk unnamed-chunk-4

Transform response

Next we apply the arcsine transformation:

dfm$y_asin <- ifelse(dfm$y == 1,
                     asin(sqrt(dfm$y)) - asin(sqrt(1/40)),
                     asin(sqrt(dfm$y))
                     )

This adds the transformed values as column y_asin to our data.frame. Survivals of 1 (100%) are transformed as

$\arcsin(\sqrt{p}) - \arcsin\left(\sqrt{\frac{1}{4n}}\right)$

where n = number of animals per replicate (here n = 10, so 1/(4n) = 1/40, matching the code above).

All other values are transformed using

$\arcsin(\sqrt{p})$

If we had had observations with 0% survival, these zero values should be transformed using

$\arcsin\left(\sqrt{\frac{1}{4n}}\right)$

by adding an extra ifelse().
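A minimal sketch of how that extra ifelse() could look (the column name y_asin2 and n = 10 animals per replicate are assumptions, consistent with the 1/40 used above):

n <- 10   # animals per replicate, so 1/(4*n) = 1/40
dfm$y_asin2 <- ifelse(dfm$y == 1, asin(sqrt(dfm$y)) - asin(sqrt(1/(4*n))),
                 ifelse(dfm$y == 0, asin(sqrt(1/(4*n))),
                        asin(sqrt(dfm$y))))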

Let’s look at the transformed values:

head(dfm)
##   conc tank   y  y_asin
## 1    0    A 1.0 1.41202
## 2   32    A 0.8 1.10715
## 3   64    A 0.9 1.24905
## 4  128    A 0.9 1.24905
## 5  256    A 0.7 0.99116
## 6  512    A 0.4 0.68472
boxplot(y_asin ~ conc, data = dfm,
        xlab = 'conc', ylab = 'Transformed prop. surv.')

plot of chunk unnamed-chunk-6

Doesn’t look that different…

ANOVA

To fit an ANOVA to these data we use the aov() function:

mod <- aov(y_asin ~ conc, data = dfm)

And summary() gives the ANOVA table:

summary(mod)
##             Df Sum Sq Mean Sq F value   Pr(>F)    
## conc         5  1.575  0.3151    13.3 0.000016 ***
## Residuals   18  0.426  0.0237                     
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The within-treatment variance is termed Residuals and the between-treatment variance is named after the predictor conc. The total variance is simply the sum of the two and is not displayed.
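As a quick check (a sketch using the rounded values from the table above), the total sum of squares equals the sum of the two components:

# total SS of the transformed response
sum((dfm$y_asin - mean(dfm$y_asin))^2)
# between + within from the ANOVA table
1.575 + 0.426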

R has already performed an F-test for us, indicated by the F value (= ratio of the Mean Sq values) and Pr(>F) columns.
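These numbers can be reproduced by hand (a sketch based on the rounded table values; 5 and 18 are the degrees of freedom from the table):

F_val <- 0.3151 / 0.0237   # Mean Sq of conc / Mean Sq of Residuals
F_val
pf(F_val, df1 = 5, df2 = 18, lower.tail = FALSE)   # Pr(>F)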

Multiple Comparisons

Now that we know there is a statistically significant treatment effect, we might be interested in which treatments differ from the control group.

The Tukey contrasts mentioned in the book (comparing every level with every other level) can easily be computed with the multcomp package:

require(multcomp)
summary(glht(mod, linfct = mcp(conc = 'Tukey')))
## 
## 	 Simultaneous Tests for General Linear Hypotheses
## 
## Multiple Comparisons of Means: Tukey Contrasts
## 
## 
## Fit: aov(formula = y_asin ~ conc, data = dfm)
## 
## Linear Hypotheses:
##                Estimate Std. Error t value Pr(>|t|)    
## 32 - 0 == 0     -0.1472     0.1088   -1.35   0.7531    
## 64 - 0 == 0      0.0407     0.1088    0.37   0.9989    
## 128 - 0 == 0    -0.0762     0.1088   -0.70   0.9795    
## 256 - 0 == 0    -0.2211     0.1088   -2.03   0.3632    
## 512 - 0 == 0    -0.7273     0.1088   -6.69   <0.001 ***
## 64 - 32 == 0     0.1879     0.1088    1.73   0.5327    
## 128 - 32 == 0    0.0709     0.1088    0.65   0.9850    
## 256 - 32 == 0   -0.0740     0.1088   -0.68   0.9820    
## 512 - 32 == 0   -0.5802     0.1088   -5.33   <0.001 ***
## 128 - 64 == 0   -0.1170     0.1088   -1.08   0.8849    
## 256 - 64 == 0   -0.2619     0.1088   -2.41   0.2054    
## 512 - 64 == 0   -0.7681     0.1088   -7.06   <0.001 ***
## 256 - 128 == 0  -0.1449     0.1088   -1.33   0.7643    
## 512 - 128 == 0  -0.6511     0.1088   -5.98   <0.001 ***
## 512 - 256 == 0  -0.5062     0.1088   -4.65   0.0023 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## (Adjusted p values reported -- single-step method)

However, this yields 15 comparisons (and tests), and we may not be interested in all of them. Note that, without correction for multiple testing, we expect to be wrong in 1 out of 20 tests ($\alpha = 0.05$).

An alternative would be to compare only the treatments to the control group. These are called Dunnett contrasts and lead to only 5 comparisons.

The syntax is the same, just change Tukey to Dunnett:

summary(glht(mod, linfct = mcp(conc = 'Dunnett')))
## 
## 	 Simultaneous Tests for General Linear Hypotheses
## 
## Multiple Comparisons of Means: Dunnett Contrasts
## 
## 
## Fit: aov(formula = y_asin ~ conc, data = dfm)
## 
## Linear Hypotheses:
##              Estimate Std. Error t value Pr(>|t|)    
## 32 - 0 == 0   -0.1472     0.1088   -1.35     0.54    
## 64 - 0 == 0    0.0407     0.1088    0.37     0.99    
## 128 - 0 == 0  -0.0762     0.1088   -0.70     0.93    
## 256 - 0 == 0  -0.2211     0.1088   -2.03     0.20    
## 512 - 0 == 0  -0.7273     0.1088   -6.69   <0.001 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## (Adjusted p values reported -- single-step method)

The column Estimate gives us the difference in means between the respective treatment and the control, and Std. Error gives the standard error of that estimate. Both are combined into a t value (= Estimate / Std. Error), from which we get a p-value (Pr(>|t|)).
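For example, for the 512 - 0 comparison (a sketch; the p-value computed this way is unadjusted, so it differs from the adjusted value in the output):

t_val <- -0.7273 / 0.1088   # Estimate / Std. Error
t_val
2 * pt(abs(t_val), df = 18, lower.tail = FALSE)   # unadjusted two-sided p-value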

Note that the p-values are already corrected for multiple testing, as indicated at the bottom of the output. If you want to change the correction method, you can use:

summary(glht(mod, linfct = mcp(conc = 'Dunnett')), test = adjusted('bonferroni'))

This applies Bonferroni-correction, see ?p.adjust and ?adjusted for other methods.
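For example (assuming the model from above), Holm correction or no correction at all can be requested in the same way:

summary(glht(mod, linfct = mcp(conc = 'Dunnett')), test = adjusted('holm'))
summary(glht(mod, linfct = mcp(conc = 'Dunnett')), test = adjusted('none'))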

Outlook

Warton & Hui (2011) demonstrated that the arcsine transform should not be used at all for proportion data, whether binomial or non-binomial. Similarly, O’Hara & Kotze (2010) showed that count data should not be log-transformed.

In a future post I will show how to analyse these data without transformation using Generalized Linear Models (GLM), and perhaps add some simulations showing that GLMs can lead to increased statistical power for ecotoxicological data sets.

Note that I couldn’t find any reference to Generalized Linear Models in Newman (2012) or EPA (2002), although they have been around for 40 years now (Nelder & Wedderburn, 1972).

References

  • Warton, D. I., & Hui, F. K. (2011). The arcsine is asinine: the analysis of proportions in ecology. Ecology, 92(1), 3-10.

  • O’Hara, R. B., & Kotze, D. J. (2010). Do not log‐transform count data. Methods in Ecology and Evolution, 1(2), 118-122.

  • Newman, M. C. (2012). Quantitative ecotoxicology. Taylor & Francis, Boca Raton, FL.

  • EPA (2002). Methods for Measuring the Acute Toxicity of Effluents and Receiving Waters to Freshwater and Marine Organisms.

  • Nelder J.A., & Wedderburn R.W.M. (1972). Generalized Linear Models. Journal of the Royal Statistical Society Series A (General) 135:370–384.

Web Scraping Chemical Data With R

I recently came across the problem to convert CAS numbers into SMILES and retrieve additional information about the compound.

There are several sources on the web that provide chemical information, e.g. PubChem, ChemSpider and the Chemical Identifier Resolver.

I wrote some functions to interact with these servers from R. You can find them in my esmisc package:

install.packages('devtools')
require(devtools)
install_github('esmisc', 'EDiLD')
require(esmisc)

These functions are very crude and need further development (if you want to improve them, fork the package!). However, here’s a short summary:

Convert CAS to SMILES

Suppose we have some CAS numbers and want to convert them to SMILES:

casnr <- c("107-06-2", "107-13-1", "319-86-8")
Via Cactus
cactus(casnr, output = 'smiles')
## [1] "C(Cl)CCl"                       "C(C#N)=C"                      
## [3] "C1(Cl)C(Cl)C(Cl)C(Cl)C(Cl)C1Cl"
Via ChemSpider

Note that ChemSpider requires a security token. To obtain a token, please register at ChemSpider.
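A minimal sketch of how the token could be stored before the calls below (the string is a placeholder, not a real token):

# personal ChemSpider security token, obtained after registration
token <- 'your-chemspider-token-here'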

csid <- get_csid(casnr, token = token)
csid_to_smiles(csid, token)
## [1] "C(CCl)Cl"                                                   
## [2] "C=CC#N"                                                     
## [3] "[C@@H]1([C@@H]([C@@H]([C@@H]([C@H]([C@@H]1Cl)Cl)Cl)Cl)Cl)Cl"
Via PubChem
cid <- get_cid(casnr)
cid_to_smiles(cid)
## [1] "C(CCl)Cl"                       "C=CC#N"                        
## [3] "C1(C(C(C(C(C1Cl)Cl)Cl)Cl)Cl)Cl"

Retrieve other data from CAS

All these web resources provide additional data. Here is an example retrieving the molecular weights:

Via Cactus
cactus(casnr, output = 'mw')
## [1] "98.9596"  "53.0634"  "290.8314"
Via ChemSpider
csid_to_ext(csid, token)$MolecularWeight
## [1] "98.95916"  "53.06262"  "290.82984"
Via PubChem
cid_to_ext(cid)$mw
## [1] "98.959160"  "53.062620"  "290.829840"

ChemSpider and PubChem return the same values; however, the results from Cactus differ slightly.
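A small sketch to put the three results side by side (reusing the functions and objects from the calls above):

data.frame(cas = casnr,
           cactus = as.numeric(cactus(casnr, output = 'mw')),
           chemspider = as.numeric(csid_to_ext(csid, token)$MolecularWeight),
           pubchem = as.numeric(cid_to_ext(cid)$mw))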

Retrieve partitioning coefficients

Partition coefficients are another useful property. LOGKOW is a databank that contains experimental data, retrieved from the literature, on over 20,000 organic compounds.

get_kow() extracts the ‘Recommended values’ for a given CAS:

get_kow(casnr)
## [1] 1.48 0.25 4.14

This function is very crude. For example, it returns only the first hit if multiple hits are found in the database - a better way would be to ask for user input, as we did in the taxize package.

Outlook

Currently I have no time to extensively develop these functions. I would be happy if someone picks up this work - it’s fairly easy: just fork the repo and start.

In the future this could be turned into an rOpenSci package, as it is within their scope.

Collinearity

It has been quiet here over the last months - the main reason is that I’m working on my master’s thesis. I have already prepared some more examples from ‘Quantitative Ecotoxicology’, but I haven’t gotten around to publishing them here.

This post is about collinearity and the implications for linear models. The best way to explore this is by simulations - where I create data with known properties and look what happens.

Create correlated random variables

The first problem for this simulation was: How can we create correlated random variables?

This simulation is similar to the one in Legendre & Legendre (2012), where it is also mentioned that one could use the Cholesky decomposition to create correlated variables:

What this function does:

  • Create two normal random variables (X1, X2 ~ N(0, 1))
  • Create desired correlation matrix
  • Compute the Cholesky factorization of the correlation matrix (R)
  • Apply the resulting matrix on the two variables (this rotates, shears and scales the variables so that they are correlated)
  • Create a dependent variable y after the model:

y ~ 5 + 7*X1 + 7*X2 + e with e ~ N(0, 1)

#############################################################################
# create two correlated variables and a dependent variable
# n : number of points
# p : correlation between x1 and x2

datagen <- function(n, p) {
  # random points N(0,1)
  x1 <- rnorm(n)
  x2 <- rnorm(n)
  X <- cbind(x1, x2)

  # desired correlation matrix
  R <- matrix(c(1, p, p, 1), ncol = 2)

  # use cholesky decomposition `t(U) %*% U = R`
  U <- chol(R)
  corvars <- X %*% U

  # create dependent variable after model:
  # y = 5 + 7 * X1 + 7 * X2 + e, with e ~ N(0, 1)
  y <- 5 + 7*corvars[,1] + 7*corvars[,2] + rnorm(n, 0, 1)
  df <- data.frame(y, corvars)
  return(df)
}

Let’s see if it works. This creates two variables with 1000 observations each, a correlation of 0.8 between them, and a dependent variable.

df1 <- datagen(n = 1000, p = 0.8)

The correlation between X1 and X2 is, as desired, close to 0.8:

cor(df1)
##          y      X1      X2
## y  1.00000 0.94043 0.94715
## X1 0.94043 1.00000 0.79139
## X2 0.94715 0.79139 1.00000

And the data follows the specified model.

mod <- lm(y ~ X1 + X2, data = df1)
summary(mod)
## 
## Call:
## lm(formula = y ~ X1 + X2, data = df1)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -2.7375 -0.6712  0.0561  0.6225  2.9057 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   5.0210     0.0311     161   <2e-16 ***
## X1            6.9209     0.0517     134   <2e-16 ***
## X2            7.0813     0.0498     142   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.984 on 997 degrees of freedom
## Multiple R-squared:  0.995,	Adjusted R-squared:  0.995 
## F-statistic: 9.14e+04 on 2 and 997 DF,  p-value: <2e-16
pairs(df1)

plot of chunk example_datagen3

Methods to spot collinearity

Dormann et al. (2013) list eight methods to detect collinearity (see their Table 1). I will only show how to calculate two of them (but see the appendix of their paper for code covering all methods):

Absolute value of correlation coefficients r

(cormat <- cor(df1[, 2:3]))
##         X1      X2
## X1 1.00000 0.79139
## X2 0.79139 1.00000

Dormann et al. (2013) found that ‘coefficients between predictor variables of r > 0.7 was an appropriate indicator for when collinearity begins to severely distort model estimation’.
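A small sketch to flag predictor pairs exceeding that threshold (using cormat from above):

# indices of predictor pairs with |r| > 0.7
which(abs(cormat) > 0.7 & upper.tri(cormat), arr.ind = TRUE)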

Variance Inflation Factors

require(car)
vif(mod)
##     X1     X2 
## 2.6759 2.6759

Which is equivalent to (for variable X1):

sum <- summary(lm(X1 ~ X2, data = df1))
1/(1 - sum$r.squared)
## [1] 2.6759

Unfortunately there are many ‘rules of thumb’ associated with VIF: > 10, > 4, …
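For two predictors the VIF is just a function of their correlation, $VIF = 1 / (1 - r^2)$, which relates these rules of thumb to the r > 0.7 rule above (a sketch):

r <- c(0.7, 0.8, 0.9, 0.95)
1 / (1 - r^2)   # VIF corresponding to these correlations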

Simulation 1: How does collinearity affect the precision of estimates?

Here we simulate datasets with correlations ranging from -0.8 to 0.8:

ps <- seq(-0.8, 0.8, 0.005)
n = 1000

sim1 <- lapply(ps, function(x) datagen(n = n, p = x))

Next we fit a linear model to each of the datasets and extract the coefficients table:

# Function to fit and extract coefficients
regfun <- function(x) {
    return(summary(lm(y ~ X1 + X2, data = x))$coefficients)
}
# run regfun on every dataset
res1 <- lapply(sim1, regfun)

Finally we extract the Standard Errors and plot them against the degree of correlation:

# extract Standard Errors from results
ses <- data.frame(ps, t(sapply(res1, function(x) x[, 2])))

# plot
require(ggplot2)
require(reshape2)
ses_m <- melt(ses, id.vars = "ps")
ggplot(ses_m, aes(x = ps, y = value)) + geom_point(size = 3) + facet_wrap(~variable) +
    theme_bw() + ylab("Std. Error") + xlab("Correlation")

plot of chunk plot1

It can clearly be seen that collinearity inflates the standard errors of the correlated variables. The intercept is not affected.

With large standard errors, parameter estimates also become variable, which is demonstrated in the next simulation:

Simulation 2: Are estimated parameters stable under collinearity?

Here I simulate 100 datasets for each of several correlations, in order to see how stable the estimates are.

The code creates the datasets, fits a linear model to each dataset and extracts the estimates.

n_sims <- 100
ps <- seq(-0.8, 0.8, 0.05)

# function to generate n_sims datasets for a given p
sim2 <- function(p) {
    sim <- lapply(1:n_sims, function(x) datagen(n = 1000, p = p))
    res <- lapply(sim, regfun)
    est <- t(sapply(res, function(x) x[, 1]))
    out <- data.frame(p, est)
    return(out)
}

res2 <- lapply(ps, function(x) {
    return(sim2(x))
})

Then we create a plot of the estimates for the three coefficients; each boxplot represents 100 datasets.

ests <- lapply(res2, melt, id.vars = "p")
ests <- do.call(rbind, ests)

ggplot(ests, aes(y = value, x = factor(p))) + geom_boxplot() + facet_wrap(~variable,
    scales = "free_y") + theme_bw() + geom_hline(aes(yintercept = 7), data = subset(ests,
    variable %in% c("X1", "X2")), col = "red", lwd = 1) + ylab("Estimate") +
    xlab("Correlation")

plot of chunk plot2

The red line indicates the true coefficients used to generate the data (7 for both X1 and X2). We see that the spread of the estimates increases as the correlation increases.

This is confirmed by looking at the standard deviation of the estimates:

sds <- data.frame(ps, t(sapply(res2, function(x) apply(x[, 2:4], 2, sd))))
sds_m <- melt(sds, id.vars = "ps")
ggplot(sds_m, aes(x = ps, y = value)) + geom_point(size = 3) + facet_wrap(~variable) +
    theme_bw() + ylab("SD of estimates from 100 simulations") + xlab("Correlation")

plot of chunk plot3

If the standard errors are large enough, the parameter estimates may become so variable that even their sign changes.
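As a small illustration (a sketch using the res2 list from above; with the strong true effects simulated here the sign never actually flips, but the spread widens markedly):

# range of the X1 estimates across the 100 simulations
range(res2[[which.min(abs(ps - 0))]]$X1)     # no correlation
range(res2[[which.min(abs(ps - 0.8))]]$X1)   # strong correlation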

References

  • Carsten Dormann, Jane Elith, Sven Bacher, Carsten Buchmann, Gudrun Carl, Gabriel Carré, Jaime Marquéz, Bernd Gruber, Bruno Lafourcade, Pedro Leitão, Tamara Münkemüller, Colin McClean, Patrick Osborne, Björn Reineking, Boris Schröder, Andrew Skidmore, Damaris Zurell, Sven Lautenbach, (2013) Collinearity: a review of methods to deal with it and a simulation study evaluating their performance. Ecography 36 (1) 027-046 10.1111/j.1600-0587.2012.07348.x
  • Pierre Legendre, Louis Legendre, (2012) Numerical ecology.

Dispersion-based Weighting - a New Function in Vegan

Intro

When analysing multivariate community data, the first step is often to transform the abundance data in order to downweight the contribution of high-abundance taxa to the dissimilarities. There are a lot of transformations available, for example:

  • log(x + 1), but see O’Hara & Kotze (2010)
  • ln(2x + 1), which is used often with mesocosm data
  • $x^{0.5}$
  • $x^{0.25}$
  • Presence/Absence
  • and many, many more

However, these are all global transformations (applied to every species in the dataset).

In an email discussion, Bob Clarke pointed me to one of his papers, Clarke et al. (2006):

Clarke, K. R., M. G. Chapman, P. J. Somerfield, and H. R. Needham. 2006. “Dispersion-based Weighting of Species Counts in Assemblage Analyses.” Marine Ecology Progress Series 320: 11–27.

I somehow missed this paper, and it seems it hadn’t been implemented in R yet. So I wrote it up for myself, and the function was merged into vegan this week (see the Changelog).

Get it

Currently the function is only available in the development version of vegan. You can easily install it from GitHub using the devtools package. You can also take a look at the implementation here.

require(devtools)
install_github("vegan", "jarioksa")
require(vegan)

The new function is named dispweight and takes three arguments:

dispweight(comm, group, nperm = 1000)

The community data matrix, a vector describing the group structure and the number of permutations to use for the permutation test.

It returns a list of four elements:

  • D: The dispersion index
  • p: p-value of the permutation test
  • weights: The weights applied to species
  • transformed: The transformed abundance matrix

Use it

Here I take the dune dataset and use Management as the grouping variable.

First we take a look at the data via NMDS:

require(vegan)
data(dune)
data(dune.env)

# NMDS on fourth-root transformed abundances
un_nmds <- metaMDS(dune^0.25, distance = "bray")

# plot it
cols = rainbow(4)
plot(un_nmds, display = "sites", type = "n")
points(un_nmds, col = cols[dune.env$Management], pch = 16, cex = 1.5)
ordihull(un_nmds, groups = dune.env$Management, lty = "dotted")
ordispider(un_nmds, groups = dune.env$Management, label = TRUE)

plot of chunk plot_raw

Let’s run the NMDS on dispersion-weighted abundances:

# calculate weights
dpw <- dispweight(dune, dune.env$Management, nperm = 100)

# NMDS on dispersion weighted community data
disp_nmds <- metaMDS(dpw$transformed, distance = "bray")

# plot
plot(disp_nmds, display = "sites", type = "n")
points(disp_nmds, col = cols[dune.env$Management], pch = 16, cex = 1.5)
ordihull(disp_nmds, groups = dune.env$Management, lty = "dotted")
ordispider(disp_nmds, groups = dune.env$Management, label = TRUE)

plot of chunk plot_disp

In this example there is not a big difference, but as they write in their paper:

[…] we should not expect a dispersion-weighting approach to radically alter ordination and test results. Indeed, the improvements shown in the real examples of this paper are sometimes small and subtle […].

I haven’t tested the function very much, so let me know if you find any bugs! Note that the code isn’t optimised for speed and largely follows the paper.

Refs

  • KR Clarke, MG Chapman, PJ Somerfield, HR Needham, (2006) Dispersion-Based Weighting of Species Counts in Assemblage Analyses. Marine Ecology Progress Series 320 11-27 10.3354/meps320011
  • Robert B. O’Hara, D. Johan Kotze, (2010) do Not Log-Transform Count Data. Methods in Ecology And Evolution 1 118-122 10.1111/j.2041-210X.2010.00021.x

Quantitative Ecotoxicology, Page 180, Example 4.10, Time-to-death

This is example 4.10 on page 180 of Quantitative Ecotoxicology. A follow-up to example 4.9.

Data

We use the data from the previous example and add a status variable (FLAG; 1 = censored at 96 h, 2 = death observed):

# download data from github
require(RCurl)
url <- getURL("https://raw.github.com/EDiLD/r-ed/master/quantitative_ecotoxicology/data/TOXICITY.csv",
    ssl.verifypeer = FALSE)
TOXICITY <- read.table(text = url, header = TRUE)
# add status
TOXICITY$FLAG <- ifelse(TOXICITY$TTD > 96, 1, 2)

Compare models

We can fit parametric survival time models using the survreg() function, where the dist argument specifies the assumed distribution. The Gamma distribution is not supported by survreg (but see below).

require(survival)
# fit different models
mod_exp <- survreg(Surv(TTD, FLAG) ~ PPT + WETWT, data = TOXICITY, dist = "exponential")
mod_wei <- survreg(Surv(TTD, FLAG) ~ PPT + WETWT, data = TOXICITY, dist = "weibull")
mod_lnorm <- survreg(Surv(TTD, FLAG) ~ PPT + WETWT, data = TOXICITY, dist = "lognorm")
mod_llog <- survreg(Surv(TTD, FLAG) ~ PPT + WETWT, data = TOXICITY, dist = "loglogistic")

From the models we can access the log-likelihood via object$loglik[2]. Note that survreg returns two log-likelihoods: one for the intercept-only model and one for the model with covariates. We are interested in the second one.

# extract loglik and save into data.frame
df <- data.frame(model = c("exp", "weibull", "lnorm", "loglog"), logLik = c(mod_exp$loglik[2],
    mod_wei$loglik[2], mod_lnorm$loglik[2], mod_llog$loglik[2]))

and extract the AIC-values (extractAIC) for model comparisons:

# add AIC as column to data.frame
df$AIC <- c(extractAIC(mod_exp)[2], extractAIC(mod_wei)[2], extractAIC(mod_lnorm)[2],
    extractAIC(mod_llog)[2])
df
##     model  logLik    AIC
## 1     exp -1309.7 2625.4
## 2 weibull -1114.3 2236.7
## 3   lnorm -1121.8 2251.5
## 4  loglog -1118.1 2244.2

As with SAS, the Weibull model has the lowest AIC. Note that the log-likelihood values differ from the SAS values by a constant of ~920; therefore the AIC also deviates by a constant of ~1840 (= 2 × 920).
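Since only differences in AIC matter for the comparison, such constants are irrelevant; a quick sketch (the column name dAIC is an assumption):

df$dAIC <- df$AIC - min(df$AIC)   # difference to the best (Weibull) model
df[order(df$dAIC), ]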

Inspect model

From the summary we get the estimates and their standard errors, as well as a Wald test for the coefficients.

summary(mod_wei)
## 
## Call:
## survreg(formula = Surv(TTD, FLAG) ~ PPT + WETWT, data = TOXICITY, 
##     dist = "weibull")
##              Value Std. Error      z         p
## (Intercept)  7.849    0.08449  92.89  0.00e+00
## PPT         -0.295    0.00512 -57.54  0.00e+00
## WETWT        1.066    0.25604   4.16  3.15e-05
## Log(scale)  -1.189    0.04455 -26.69 6.09e-157
## 
## Scale= 0.305 
## 
## Weibull distribution
## Loglik(model)= -1114.3   Loglik(intercept only)= -1606.3
## 	Chisq= 983.9 on 2 degrees of freedom, p= 0 
## Number of Newton-Raphson Iterations: 9 
## n=480 (70 observations deleted due to missingness)

Note that the estimate of $0.0602 \pm 0.2566$ for WETWT given in the book must be a typo: the standard error is far too large for such an estimate to be statistically significant, so I think the R result is correct here.

We see that the concentration has a negative effect on time to death and that bigger fish survive longer.

Plot model

From our model we can predict for any salt concentration or fish wet weight the median time to death.

To produce Figure 4.12 we first generate some new data for which we want to predict the median time to death. I use expand.grid here, which gives every combination of wet weight from 0 to 1.5 g and four salt concentrations.

Then I use the predict function to predict for all these combinations the median-time-to-death from our model (type='quantile', p = 0.5 specifies the median time).

Afterwards this is plotted using the ggplot2 plotting system.

# newdata, all combinations of WETWT and PPT
newtox <- expand.grid(WETWT = seq(0, 1.5, length.out = 100), PPT = seq(12.5,
    20, by = 2.5))
# predict ttd for newdata
newtox$preds <- predict(mod_wei, newdata = newtox, type = "quantile", p = 0.5)

# plot
require(ggplot2)
ggplot(newtox, aes(x = WETWT, y = preds, col = factor(PPT), group = PPT)) +
    geom_line() + theme_bw() + labs(x = "Wet Weight (g)", y = "Median-Time-To-Death (h)",
    col = "Concentration (g/L)") + coord_cartesian(ylim = c(0, 100))

plot of chunk plot_predicts

Notes

We can also plot the data and the model in one plot. Here I fit a model with concentration only, for demonstration. We can plot the Kaplan-Meier curves as shown in the previous example and then add the model curves using the predict function:

# plot KM
km <- survfit(Surv(TTD, FLAG) ~ PPT, data = TOXICITY)
plot(km, col = 1:7)

# Fit model only for Concentrations
mod_ppt <- survreg(Surv(TTD, FLAG) ~ PPT, data = TOXICITY, dist = "weibull")

# add model to plot
PPT_u <- sort(unique(TOXICITY$PPT))
for (i in seq_along(PPT_u)) {
    lines(predict(mod_ppt, newdata = list(PPT = PPT_u[i]), type = "quantile",
        p = 1:99/100), 99:1/100, col = i)
}

plot of chunk plot_model

To produce a plot similar to Figure 4.11 we can use the following code:

# opposite of %in% : 'not in'
`%!in%` <- Negate(`%in%`)

# remove 0 and 20.1 treatments
TOXICITY_sub <- TOXICITY[TOXICITY$PPT %!in% c(0, 20.1), , drop = TRUE]
# kaplan-meier estimates
km <- survfit(Surv(TTD, FLAG) ~ PPT, data = TOXICITY_sub)
fac <- factor(rep(sort(unique(TOXICITY_sub$PPT)), km$strata))
cols <- 1:5
# plot transformed time vs. survival
plot(log(km$time), log(-log(km$surv)), col = cols[fac], pch = 16)
# add legend
legend("bottomright", legend = levels(fac), col = cols, pch = 16, cex = 0.7)

plot of chunk unnamed-chunk-7

To fit a gamma distribution we have to use another package: flexsurv. Usage is (intentionally) similar to survival. However, the log-logistic distribution is not built in, but this can be fixed easily, see ?flexsurvreg:

#### Define loglogistic distribution for flexsurvreg
library(eha)
custom.llogis <- list(name = "llogis", pars = c("shape", "scale"), location = "scale",
    transforms = c(log, log), inv.transforms = c(exp, exp), inits = function(t) {
        c(1, median(t))
    })
require(flexsurv)
# fit all models using flexsurvreg
mod_flex_exp <- flexsurvreg(Surv(TTD, FLAG) ~ PPT + WETWT, data = TOXICITY,
    dist = "exp")
mod_flex_wei <- flexsurvreg(Surv(TTD, FLAG) ~ PPT + WETWT, data = TOXICITY,
    dist = "weibull")
mod_flex_lnorm <- flexsurvreg(Surv(TTD, FLAG) ~ PPT + WETWT, data = TOXICITY,
    dist = "lnorm")
mod_flex_llog <- flexsurvreg(Surv(TTD, FLAG) ~ PPT + WETWT, data = TOXICITY,
    dist = custom.llogis)
mod_flex_ggamma <- flexsurvreg(Surv(TTD, FLAG) ~ PPT + WETWT, data = TOXICITY,
    dist = "gengamma")

Comparing the results from flexsurvreg with those from survreg, we see that the estimates are identical for all models. Therefore I conclude that flexsurv is a viable alternative when a (generalized) gamma distribution is needed.
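As a quick consistency check (a sketch, assuming the objects fitted above), the AICs of the two Weibull fits should agree:

AIC(mod_wei)       # survreg Weibull
mod_flex_wei$AIC   # flexsurvreg Weibull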

Code and data are available in my GitHub repo under the file name ‘p180’.