2 changes: 1 addition & 1 deletion DESCRIPTION
@@ -18,7 +18,7 @@ Description: Animal abundance estimation via conventional, multiple covariate
fitting is performed via maximum likelihood. Also included are diagnostics
and plotting for fitted detection functions. Abundance estimation is via a
Horvitz-Thompson-like estimator.
Version: 3.0.0.9006
Version: 3.0.1
URL: https://github.com/DistanceDevelopment/mrds/
BugReports: https://github.com/DistanceDevelopment/mrds/issues
Depends:
18 changes: 14 additions & 4 deletions NEWS.md
@@ -3,9 +3,19 @@
Bug Fixes

* Fixed formatting issue in flnl.grad help
* Fixed output for gof test when degrees of freedom for MR models are <= 0 (Issue #96)
* Now displays a warning if the user tries to use MCDS.exe to fit a detection function with covariates that is neither a half-normal nor a hazard-rate model. (Issue #113)
* Fixed so that MCDS.exe does not try to fit a negative exponential in place of a gamma key function. (Issue #113)
* Now issues a warning when there is only a single transect and the varflag option is 1 or 2. (Issue #115)
* No longer plots data points on detection functions for binned analyses; only the interval bins are shown.
* Added a check and a warning when the object field of the obs.table is not numeric. (Distance Issue #165)


Enhancements

* Updated documentation of dht.se to show the variance calculations for the varflag options (Issue #117)
* Modified the bysample table returned from dht (Issues #60 and #100)
* Clarified documentation regarding summary table output from dht.

# mrds 3.0.0

@@ -156,7 +166,7 @@ Bug Fixes

* rescaling parameters were not correct, now fixed. Thanks to Laura Marshall for spotting this.
* coefficients are called coefficients (not a mixture of coefficients and parameters) in summary() results
* speed-up in io.fi models (thanks to Winston Chang's profvis, showing many unecessary calls to model.matrix)
* speed-up in io.fi models (thanks to Winston Chang's profvis, showing many unnecessary calls to model.matrix)
* plot.ds now has a pdf= option to plot the probability density function (for point transect models only)
* assign.par, create.ddfobj and detfct are now exported, so it can be used by dsm (though shouldn't be used by anything else!)
* fixed bug in left truncation where probability of detection was not calculated correctly. Thanks to Jason Roberts for pointing this out!
@@ -217,7 +227,7 @@ BUG FIXES

BUG FIXES

* Standardisation was being applied to detection functions (such that g(0)=1) when there were no adjustments (which is uneccesary) but also caused issues when using gamma detection functions as this should be calculated at g(apex) instead. Standardisation code has been removed for when there are no adjustments and the correct scaling used for the gamma when there are. Thanks to Thomas Doniol-Valcroze for alerting us to this bug.
* Standardisation was being applied to detection functions (such that g(0)=1) when there were no adjustments (which is unnecessary) but also caused issues when using gamma detection functions as this should be calculated at g(apex) instead. Standardisation code has been removed for when there are no adjustments and the correct scaling used for the gamma when there are. Thanks to Thomas Doniol-Valcroze for alerting us to this bug.
* Partial name-matching in dht was fixed. Produced warning but not error.

NEW FEATURES
@@ -254,7 +264,7 @@ BUG FIXES

* uniform+cosine detection functions were ignored when using monotonicity constraints, now they can be used together
* mono.strict=TRUE didn't automatically turn on mono=TRUE, extra logic to correct this
* montonicity constraints did not use standardised (g(x)/g(0) detection functions, so if g(x)>1 monotonicity constraints were voilated. Now standardised detection functions are used. Thanks to Len Thomas for noticing this bug.
* montonicity constraints did not use standardised (g(x)/g(0) detection functions, so if g(x)>1 monotonicity constraints were violated. Now standardised detection functions are used. Thanks to Len Thomas for noticing this bug.

# mrds 2.1.4-3

@@ -265,7 +275,7 @@ BUG FIX
CHANGES

* general documentation updates
* simplication and re-structuring of internals
* simplification and re-structuring of internals

# mrds 2.1.4-3

2 changes: 1 addition & 1 deletion R/check.mono.R
@@ -21,7 +21,7 @@
#' @param strict if \code{TRUE} (default) the detection function must be
#' "strictly" monotone, that is that (\code{g(x[i])<=g(x[i-1])}) over the whole
#' range (left to right truncation points).
#' @param n.pts number of equally-spaced points between left and right
#' @param n.pts number of points between left and right
#' truncation at which to evaluate the detection function (default 100)
#' @param tolerance numerical tolerance for monotonicity checks (default 1e-8)
#' @param plot plot a diagnostic highlighting the non-monotonic areas (default
36 changes: 29 additions & 7 deletions R/dht.R
@@ -152,10 +152,16 @@
#'
#' The list structure of clusters and individuals are the same:
#' \item{bysample}{\code{data.frame} giving results for each sample;
#' \code{Nchat} is the estimated abundance within the sample and \code{Nhat} is
#' scaled by surveyed area/covered area within that region}
#' \code{Sample.Area} is the covered area associated with the sampler,
#' \code{n} is the number of detections on the sampler,
#' \code{Nhat} is the estimated abundance within the sample, and
#' \code{Dhat} is \eqn{\frac{Nhat}{\sum{Sample.Area}}} so that summing
#' these values gives the overall density estimates.}
#'
#' \item{summary}{\code{data.frame} of summary statistics for each region and
#' total}
#' total. Note that the summary statistics give a general summary of the data
#' and may use more basic calculations than those used in the abundance
#' and density calculations.}
#' \item{N}{\code{data.frame} of estimates of abundance for each region and
#' total}
#' \item{D}{\code{data.frame} of estimates of density for each region and total}
@@ -223,15 +229,22 @@ dht <- function(model, region.table, sample.table, obs.table=NULL, subset=NULL,

bysample.table <- with(Nhat.by.sample,
data.frame(Region = Region.Label,
Area = Area,
Sample = Sample.Label,
Effort = Effort.x,
Sample.Area = CoveredArea,
Area = Area,
n = n,
Nhat = Nhat,
Nchat = Nhat*CoveredArea/Area))
Nhat = Nhat*CoveredArea/Area))

bysample.table$Dhat <- bysample.table$Nchat/bysample.table$Sample.Area
# Calculate density contributions (these can be summed to give overall density)
bysample.table$Dhat <- bysample.table$Nhat/bysample.table$Sample.Area
# Now update CoveredArea so it's per sample
if(point){
bysample.table$Sample.Area <- pi*bysample.table$Effort*width^2 - pi*bysample.table$Effort*left^2
}else{
bysample.table$Sample.Area <- 2*bysample.table$Effort*(width-left)
}

Nhat.by.region <- as.numeric(by(Nhat.by.sample$Nhat,
Nhat.by.sample$Region.Label, sum))
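
To make the revised bysample quantities concrete, here is a minimal sketch (not code from the package; the truncation distances, effort values and per-sample Nhat estimates are all assumed) showing why the per-sample Dhat contributions sum to the overall density estimate:

```r
# Minimal sketch (not package code) of the per-sample quantities built above,
# using made-up line-transect values.
width <- 2; left <- 0                     # truncation distances (assumed)
effort <- c(4, 6, 10)                     # effort (line length) per sample (assumed)
Nhat  <- c(12, 20, 35)                    # hypothetical per-sample abundance estimates

# Covered area per sample: 2*Effort*(width-left) for lines,
# pi*Effort*width^2 - pi*Effort*left^2 for points
sample.area <- 2 * effort * (width - left)

# Density contribution per sample, computed against the total covered area,
# so the contributions sum to the overall density estimate
Dhat <- Nhat / sum(sample.area)
sum(Dhat)                                 # overall density estimate
sum(Nhat) / sum(sample.area)              # identical value, computed directly
```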

@@ -406,6 +419,15 @@ dht <- function(model, region.table, sample.table, obs.table=NULL, subset=NULL,
ervar=ifelse(model$meta.data$point, "P2",
"R2"),
areas.supplied=FALSE)

# Input checking
# Check that the object field is numeric in the obs.table
if(!is.null(obs.table)){
if(!is.numeric(obs.table$object)){
warning("Please ensure the object field in the obs.table is numeric, cannot perform dht calculations.", call. = FALSE)
return(NULL)
}
}

# jll 18-Nov-04; the following allows for a subset statement to be added to
# create obs.table from model data rather than creating obs.table separately.
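The input check added above can be exercised on its own; the sketch below uses a made-up obs.table, with the warning text copied from the diff and everything else purely illustrative:

```r
# Sketch (not package code) of what the new input check catches: the object
# column of obs.table must be numeric, otherwise dht() warns and returns NULL.
obs.table <- data.frame(object = c("1", "2", "3"),   # stored as character, not numeric
                        Region.Label = "Region1",
                        Sample.Label = c(1, 1, 2))

if(!is.numeric(obs.table$object)){
  warning("Please ensure the object field in the obs.table is numeric, cannot perform dht calculations.",
          call. = FALSE)
}

# If the identifiers are really numeric labels stored as text, converting them
# (and the matching object column in the detection data) avoids the warning
obs.table$object <- as.numeric(obs.table$object)
```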
17 changes: 15 additions & 2 deletions R/dht.se.R
@@ -9,7 +9,7 @@
#' \itemize{
#' \item variation due to uncertainty from estimation of the detection
#' function parameters;
#' \item variation in abundance due to random sample selection;
#' \item variation in abundance due to random sample selection.
#' }
#' The first component (model parameter uncertainty) is computed using a delta
#' method estimate of variance (\insertCite{huggins1989;nobrackets}{mrds}; \insertCite{huggins1991;nobrackets}{mrds}; \insertCite{borchers1998;nobrackets}{mrds}) in
@@ -34,7 +34,7 @@
#' is used in the detection function;
#' \item \code{2} is the default and uses the encounter rate estimator
#' \eqn{\hat{N}/L} (estimated abundance per unit transect) suggested by
#' \insertCite{innes2002;textual}{mrds} and \insertCite{marques2004;textual}{mrds}
#' \insertCite{innes2002;textual}{mrds} and \insertCite{marques2004;textual}{mrds}.
#' }
#'
#' In general if any covariates are used in the models, the default
@@ -66,6 +66,19 @@
#' of the assumed log-normal distribution because the components are
#' multiplicative. For combining df for the sampling variance across regions
#' they are weighted by the variance because it is a sum across regions.
#'
#' The coefficient of variation (CV) associated with the abundance estimates is calculated based on the following formulae for the \code{varflag} options 1 and 2:
#'
#' \code{varflag=1}
#'
#' \deqn{CV(\hat{N}) = \sqrt{\left(\frac{\sqrt{n}}{n}\right)^2+CV(\hat{p})^2}}
#'
#' \code{varflag=2}
#'
#' \deqn{CV(\hat{N}) = \sqrt{\left(\frac{\sqrt{\hat{N}}}{\hat{N}}\right)^2+CV(\hat{p})^2}}
#' where n is the number of observations, \eqn{\hat{N}} is the estimated
#' abundance and \eqn{\hat{p}} is the average probability of detection for
#' an animal in the covered area.
#'
#' A non-zero correlation between regional estimates can occur from using a
#' common detection function across regions. This is reflected in the
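A short worked example of the CV formulae documented above; the number of observations, the abundance estimate and the CV of the detection probability are assumed values, not results from any dataset:

```r
# Worked illustration of the documented CV expressions; all inputs are hypothetical.
n    <- 120    # number of observations (assumed)
Nhat <- 850    # estimated abundance (assumed)
cv_p <- 0.08   # CV of the average detection probability (assumed)

# varflag = 1: encounter-rate component based on n
cv_N_1 <- sqrt((sqrt(n)/n)^2 + cv_p^2)        # equivalently sqrt(1/n + cv_p^2)

# varflag = 2: encounter-rate component based on Nhat
cv_N_2 <- sqrt((sqrt(Nhat)/Nhat)^2 + cv_p^2)  # equivalently sqrt(1/Nhat + cv_p^2)

round(c(varflag1 = cv_N_1, varflag2 = cv_N_2), 4)
```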
17 changes: 13 additions & 4 deletions R/gof.io.R
@@ -84,7 +84,17 @@ gof.io <- function(model, breaks=NULL, nc=NULL){
}else{
p.2 <- 1-pchisq(chisq.2,df.2)
}


# Calculate the pooled chi-square
df.pool <- 3*nc-length(model$par)-1

if(df.pool <= 0){
df.pool <- NA
p.pool <- NA
}else{
p.pool <- 1-pchisq(chisq.1+chisq.2, df.pool)
}

return(list(chi1=list(observed=observed.count.1,
expected=expected.1,
chisq=chisq.1,
@@ -96,7 +106,6 @@ gof.io <- function(model, breaks=NULL, nc=NULL){
p=p.2,
df=df.2),
pooled.chi=list(chisq=chisq.1+chisq.2,
df=3*nc-length(model$par)-1,
p=1-pchisq(chisq.1+chisq.2,
3*nc-length(model$par)-1))))
df=df.pool,
p=p.pool)))
}
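
The same guard appears, with the same structure, in gof.io.fi, gof.rem, gof.rem.fi, gof.trial and gof.trial.fi (those variants use 2*nc rather than 3*nc in the degrees of freedom). A standalone sketch with assumed values shows what happens when the pooled degrees of freedom are not positive:

```r
# Standalone sketch of the pooled chi-square guard; chisq.1, chisq.2, nc and
# the parameter count are assumed values, not taken from a fitted model.
chisq.1 <- 4.2
chisq.2 <- 3.7
nc      <- 2     # number of distance classes (assumed)
n.par   <- 6     # stand-in for length(model$par)

df.pool <- 3*nc - n.par - 1    # 3*nc here; the trial/rem variants use 2*nc

if(df.pool <= 0){
  df.pool <- NA                # degrees of freedom not positive:
  p.pool  <- NA                # report NA rather than a meaningless p-value
}else{
  p.pool <- 1 - pchisq(chisq.1 + chisq.2, df.pool)
}

c(df = df.pool, p = p.pool)    # df.pool is -1 here, so both entries are NA
```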
17 changes: 13 additions & 4 deletions R/gof.io.fi.R
@@ -73,7 +73,17 @@ gof.io.fi <- function(model,breaks=NULL,nc=NULL){
p.1 <- NA
df.2 <- NA
p.2 <- NA


# Calculate the pooled chi-square
df.pool <- 3*nc-length(model$par)-1

if(df.pool <= 0){
df.pool <- NA
p.pool <- NA
}else{
p.pool <- 1-pchisq(chisq.1+chisq.2, df.pool)
}

return(list(chi1=list(observed=observed.count.1,
expected=expected.1,
chisq=chisq.1,
@@ -85,7 +95,6 @@ gof.io.fi <- function(model,breaks=NULL,nc=NULL){
p=p.2,
df=df.2),
pooled.chi=list(chisq=chisq.1+chisq.2,
df=3*nc-length(model$par)-1,
p=1-pchisq(chisq.1+chisq.2,
3*nc-length(model$par)-1))))
df=df.pool,
p=p.pool)))
}
17 changes: 13 additions & 4 deletions R/gof.rem.R
@@ -70,13 +70,23 @@ gof.rem <- function(model,breaks=NULL,nc=NULL){
p.1 <- 1-pchisq(chisq.1,df.1)
}

df.2 <- 2*nc-length(model$mr$par)
df.2 <- nc-length(model$mr$par)
if(df.2<=0){
df.2 <- NA
p.2 <- NA
}else{
p.2 <- 1-pchisq(chisq.2,df.2)
}

# Calculate the pooled chi-square
df.pool <- 2*nc-length(model$par)-1

if(df.pool <= 0){
df.pool <- NA
p.pool <- NA
}else{
p.pool <- 1-pchisq(chisq.1+chisq.2, df.pool)
}

return(list(chi1=list(observed=observed.count.1,
expected=expected.1,
@@ -89,7 +99,6 @@ gof.rem <- function(model,breaks=NULL,nc=NULL){
p=p.2,
df=df.2),
pooled.chi=list(chisq=chisq.1+chisq.2,
df=2*nc-length(model$par)-1,
p=1-pchisq(chisq.1+chisq.2,
2*nc-length(model$par)-1))))
df=df.pool,
p=p.pool)))
}
15 changes: 12 additions & 3 deletions R/gof.rem.fi.R
@@ -69,6 +69,16 @@ gof.rem.fi <- function(model,breaks=NULL,nc=NULL){
p.1 <- NA
df.2 <- NA
p.2 <- NA

# Calculate the pooled chi-square
df.pool <- 2*nc-length(model$par)-1

if(df.pool <= 0){
df.pool <- NA
p.pool <- NA
}else{
p.pool <- 1-pchisq(chisq.1+chisq.2, df.pool)
}

return(list(chi1=list(observed=observed.count.1,
expected=expected.1,
@@ -81,7 +91,6 @@ gof.rem.fi <- function(model,breaks=NULL,nc=NULL){
p=p.2,
df=df.2),
pooled.chi=list(chisq=chisq.1+chisq.2,
df=2*nc-length(model$par)-1,
p=1-pchisq(chisq.1+chisq.2,
2*nc-length(model$par)-1))))
df=df.pool,
p=p.pool)))
}
15 changes: 12 additions & 3 deletions R/gof.trial.R
@@ -81,6 +81,16 @@ gof.trial <- function(model, breaks=NULL, nc=NULL){
}else{
p.2 <- 1-pchisq(chisq.2,df.2)
}

# Calculate the pooled chi-square
df.pool <- 2*nc-length(model$par)-1

if(df.pool <= 0){
df.pool <- NA
p.pool <- NA
}else{
p.pool <- 1-pchisq(chisq.1+chisq.2, df.pool)
}

return(list(chi1=list(observed=observed.count.1,
expected=expected.1,
@@ -93,7 +103,6 @@ gof.trial <- function(model, breaks=NULL, nc=NULL){
p=p.2,
df=df.2),
pooled.chi=list(chisq=chisq.1+chisq.2,
df=2*nc-length(model$par)-1,
p=1-pchisq(chisq.1+chisq.2,
2*nc-length(model$par)-1))))
df=df.pool,
p=p.pool)))
}
15 changes: 12 additions & 3 deletions R/gof.trial.fi.R
@@ -69,6 +69,16 @@ gof.trial.fi <- function(model,breaks=NULL,nc=NULL){
p.1 <- NA
df.2 <- NA
p.2 <- NA

# Calculate the pooled chi-square
df.pool <- 2*nc-length(model$par)-1

if(df.pool <= 0){
df.pool <- NA
p.pool <- NA
}else{
p.pool <- 1-pchisq(chisq.1+chisq.2, df.pool)
}

return(list(chi1=list(observed=observed.count.1,
expected=expected.1,
@@ -81,7 +91,6 @@ gof.trial.fi <- function(model,breaks=NULL,nc=NULL){
p=p.2,
df=df.2),
pooled.chi=list(chisq=chisq.1+chisq.2,
df=2*nc-length(model$par)-1,
p=1-pchisq(chisq.1+chisq.2,
2*nc-length(model$par)-1))))
df=df.pool,
p=p.pool)))
}
15 changes: 9 additions & 6 deletions R/plot.ds.R
@@ -175,13 +175,15 @@ plot.ds <- function(x, which=2, breaks=NULL, nc=NULL,
zgrid <- matrix(rep(z[1,], length(xgrid)), byrow=TRUE, ncol=sum(zdim))
}

# Check if it is a binned analysis
if(is.null(model$meta.data$binned)){
binned <- FALSE
}else{
binned <- model$meta.data$binned
}

# create intervals of distance (breaks) for the chosen number of classes (nc).
if(is.null(breaks)){
if(is.null(model$meta.data$binned)){
binned <- FALSE
}else{
binned <- model$meta.data$binned
}
if(binned){
breaks <- model$ds$aux$breaks
nc <- length(breaks)-1
@@ -382,7 +384,8 @@
ldots$y <- linevalues
do.call(lines, ldots)

if(showpoints){
# Don't show points if the data are binned
if(showpoints && !binned){
jitter.p <- rnorm(length(point_vals), 1, jitter.v[1])
pdots <- dots
pdots$x <- xmat$distance
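The two plot.ds changes work together: the binned flag is resolved once, NULL-safely, and the observed-distance points are then suppressed for binned analyses. A self-contained sketch of that logic, using stand-ins for the objects plot.ds holds internally:

```r
# Stand-ins for pieces of the fitted model and the plotting arguments (assumed)
meta.data  <- list(binned = TRUE)   # pretend model$meta.data from a binned fit
showpoints <- TRUE

# NULL-safe resolution of the binned flag, as in the diff above
binned <- if(is.null(meta.data$binned)) FALSE else meta.data$binned

# Observed-distance points are only drawn for exact-distance data
if(showpoints && !binned){
  message("exact distances: draw jittered points on the fitted detection function")
}else{
  message("binned analysis: show only the interval-bin histogram")
}
```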