Title: Kernel Estimator and Bandwidth Selection for Density and Its Derivatives
Description: Smoothing techniques and computing bandwidth selectors of the nth derivative of a probability density for one-dimensional data (described in Arsalane Chouaib Guidoum (2020) <arXiv:2012.06102> [stat.CO]).
Authors: Iago Giné-Vázquez [cre], Arsalane Chouaib Guidoum [aut]
Maintainer: Iago Giné-Vázquez <[email protected]>
License: GPL (>= 2)
Version: 1.0.4
Built: 2024-10-31 21:13:39 UTC
Source: https://gitlab.com/iagogv/kedd
Smoothing techniques and computing bandwidth selectors of the r'th derivative of a probability density for one-dimensional data.
Package: kedd
Type: Package
Version: 1.0.4
Date: 2024-01-27
License: GPL (>= 2)
There are four main types of functions in this package:
Compute the derivatives and convolutions of a kernel function (1-d).
Compute the kernel estimators for a density and its derivatives (1-d).
Compute the bandwidth selectors (1-d).
Display the kernel estimators.
A minimal tour of these four groups is sketched below.
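The following sketch touches one function from each of the four groups; it assumes the kedd package is installed and attached and uses the bundled trimodal data set.

library(kedd)
data(trimodal)

## 1. Derivative and convolution of a kernel function (1-d)
kernel.fun(x = 0, kernel = "gaussian", deriv.order = 1)
kernel.conv(x = 0, kernel = "gaussian", deriv.order = 1)

## 2. Kernel estimators of the density and of its first derivative (1-d)
fit0 <- dkde(trimodal, deriv.order = 0, kernel = "gaussian")
fit1 <- dkde(trimodal, deriv.order = 1, kernel = "gaussian")

## 3. A bandwidth selector (unbiased cross-validation, 1-d)
h.ucv(trimodal, deriv.order = 1, kernel = "gaussian")

## 4. Displaying the estimators
plot(fit0)
lines(dkde(trimodal, deriv.order = 0, kernel = "biweight"), col = "red")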
Convolutions and derivatives of kernel functions:
In non-parametric statistics, a kernel is a weighting function used in non-parametric estimation techniques.
Kernel functions are used in the derivatives of the kernel density estimator to estimate f^{(r)}, and are taken to satisfy the following three requirements:

\int K(x)\,dx = 1, \qquad \int x K(x)\,dx = 0, \qquad \mu_2(K) = \int x^2 K(x)\,dx < \infty

Several types of kernel functions are commonly used in this package: Gaussian, Epanechnikov, uniform (rectangular), triangular, triweight, tricube, biweight (quartic), and cosine.
The function kernel.fun computes the kernel derivative and kernel.conv the kernel convolution, which we write formally as:

K^{(r)}(x) = \frac{d^r}{dx^r} K(x), \qquad (K^{(r)} * K^{(r)})(x) = \int K^{(r)}(y)\, K^{(r)}(x - y)\,dy

for r = 0, 1, 2, \dots
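As a quick numerical illustration of the three requirements above, here is a sketch that checks them for the Gaussian kernel using kernel.fun together with numerical integration:

K <- function(u) kernel.fun(u, kernel = "gaussian", deriv.order = 0)$kx
integrate(K, -Inf, Inf)$value                        ## should be 1
integrate(function(u) u * K(u), -Inf, Inf)$value     ## should be 0 (symmetry)
integrate(function(u) u^2 * K(u), -Inf, Inf)$value   ## finite second moment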
Estimators of the r'th derivative of a density function:
A natural estimator of the r'th derivative of a density function is:

\hat{f}^{(r)}_h(x) = \frac{1}{n h^{r+1}} \sum_{i=1}^{n} K^{(r)}\!\left(\frac{x - X_i}{h}\right)

Here, X_1, X_2, \dots, X_n is an i.i.d. sample of size n from the distribution with density f, K is the kernel function, which we take to be a symmetric probability density with at least r non-zero derivatives when estimating f^{(r)}, and h is the bandwidth. This parameter is very important, since it controls the degree of smoothing applied to the data.
The case r = 0 is the standard kernel density estimator (e.g. Silverman 1986, Wolfgang 1991, Scott 1992, Wand and Jones 1995, Jeffrey 1996, Bowman and Azzalini 1997, Alexandre 2009); properties of such estimators are well known, see e.g. Sheather and Jones (1991), Jones and Kappenman (1991), Wolfgang (1991). The case r > 0 is the derivative of the kernel density estimator (e.g. Bhattacharya 1967, Schuster 1969, Alekseev 1972, Wolfgang et al. 1990, Jones 1992, Stoker 1993); applications which require the estimation of density derivatives can be found in Singh (1977).
For the r'th derivative of a one-dimensional kernel density estimator, the main function is dkde. For display, its plot method calls plot.dkde, and lines.dkde adds the estimate to an existing plot.
R> data(trimodal)
R> dkde(x = trimodal, deriv.order = 0, kernel = "gaussian")

Data: trimodal (200 obs.);      Kernel: gaussian
Derivative order: 0;    Bandwidth 'h' = 0.1007

  eval.points            est.fx
 Min.   :-2.91274   Min.   :0.0000066
 1st Qu.:-1.46519   1st Qu.:0.0669750
 Median :-0.01765   Median :0.1682045
 Mean   :-0.01765   Mean   :0.1723692
 3rd Qu.: 1.42989   3rd Qu.:0.2484626
 Max.   : 2.87743   Max.   :0.4157340

R> dkde(x = trimodal, deriv.order = 1, kernel = "gaussian")

Data: trimodal (200 obs.);      Kernel: gaussian
Derivative order: 1;    Bandwidth 'h' = 0.09094

  eval.points            est.fx
 Min.   :-2.87358   Min.   :-1.740447
 1st Qu.:-1.44562   1st Qu.:-0.343952
 Median :-0.01765   Median : 0.009057
 Mean   :-0.01765   Mean   : 0.000000
 3rd Qu.: 1.41031   3rd Qu.: 0.415343
 Max.   : 2.83828   Max.   : 1.256891
Bandwidth selectors:
The most important factor in the r'th derivative kernel density estimate is the choice of the bandwidth h for one-dimensional observations. Because of its role in controlling both the amount and the direction of smoothing, this choice is particularly important. The popular bandwidth selection methods implemented in this package are the following (for more details see the references):
Optimal bandwidth (AMISE); with deriv.order >= 0. The function is h.amise; its plot method calls plot.h.amise, and lines.h.amise adds the criterion curve to an existing plot.
Maximum-likelihood cross-validation (MLCV); with deriv.order = 0. The function is h.mlcv; its plot method calls plot.h.mlcv, and lines.h.mlcv adds the criterion curve to an existing plot.
Unbiased cross-validation (UCV); with deriv.order >= 0. The function is h.ucv; its plot method calls plot.h.ucv, and lines.h.ucv adds the criterion curve to an existing plot.
Biased cross-validation (BCV); with deriv.order >= 0. The function is h.bcv; its plot method calls plot.h.bcv, and lines.h.bcv adds the criterion curve to an existing plot.
Complete cross-validation (CCV); with deriv.order >= 0. The function is h.ccv; its plot method calls plot.h.ccv, and lines.h.ccv adds the criterion curve to an existing plot.
Modified cross-validation (MCV); with deriv.order >= 0. The function is h.mcv; its plot method calls plot.h.mcv, and lines.h.mcv adds the criterion curve to an existing plot.
Trimmed cross-validation (TCV); with deriv.order >= 0. The function is h.tcv; its plot method calls plot.h.tcv, and lines.h.tcv adds the criterion curve to an existing plot.
R> data(trimodal)
R> h.bcv(x = trimodal, whichbcv = 1, deriv.order = 0, kernel = "gaussian")

Call:           Biased Cross-Validation 1
Derivative order = 0
Data: trimodal (200 obs.);      Kernel: gaussian
Min BCV = 0.004511636;  Bandwidth 'h' = 0.4357812

R> h.ccv(x = trimodal, deriv.order = 1, kernel = "gaussian")

Call:           Complete Cross-Validation
Derivative order = 1
Data: trimodal (200 obs.);      Kernel: gaussian
Min CCV = 0.01985078;   Bandwidth 'h' = 0.5828336

R> h.tcv(x = trimodal, deriv.order = 2, kernel = "gaussian")

Call:           Trimmed Cross-Validation
Derivative order = 2
Data: trimodal (200 obs.);      Kernel: gaussian
Min TCV = -295.563;     Bandwidth 'h' = 0.08908582

R> h.ucv(x = trimodal, deriv.order = 3, kernel = "gaussian")

Call:           Unbiased Cross-Validation
Derivative order = 3
Data: trimodal (200 obs.);      Kernel: gaussian
Min UCV = -63165.18;    Bandwidth 'h' = 0.1067236
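A compact way to compare the selectors on the same data is to collect the selected bandwidths (a sketch for deriv.order = 0; the h component is documented for every selector, and the exact values depend on the data):

data(trimodal)
selectors <- c(AMISE = h.amise(trimodal, deriv.order = 0)$h,
               MLCV  = h.mlcv(trimodal)$h,
               UCV   = h.ucv(trimodal, deriv.order = 0)$h,
               BCV1  = h.bcv(trimodal, whichbcv = 1, deriv.order = 0)$h,
               CCV   = h.ccv(trimodal, deriv.order = 0)$h,
               MCV   = h.mcv(trimodal, deriv.order = 0)$h,
               TCV   = h.tcv(trimodal, deriv.order = 0)$h)
round(selectors, 4)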
For an overview of this package, see vignette("kedd")
.
R version >= 2.15.0
This package and its documentation are usable under the terms of the "GNU General Public License", a copy of which is distributed with the package.
Alekseev, V. G. (1972). Estimation of a probability density function and its derivatives. Mathematical notes of the Academy of Sciences of the USSR. 12(5), 808–811.
Alexandre, B. T. (2009). Introduction to Nonparametric Estimation. Springer-Verlag, New York.
Bowman, A. W. (1984). An alternative method of cross-validation for the smoothing of kernel density estimates. Biometrika, 71, 353–360.
Bowman, A. W. and Azzalini, A. (1997). Applied Smoothing Techniques for Data Analysis: the Kernel Approach with S-Plus Illustrations. Oxford University Press, Oxford.
Bowman, A.W. and Azzalini, A. (2003). Computational aspects of nonparametric smoothing with illustrations from the sm library. Computational Statistics and Data Analysis, 42, 545–560.
Bowman, A.W. and Azzalini, A. (2013). sm: Smoothing methods for nonparametric regression and density estimation. R package version 2.2-5.3. Ported to R by B. D. Ripley.
Bhattacharya, P. K. (1967). Estimation of a probability density function and Its derivatives. Sankhya: The Indian Journal of Statistics, Series A, 29, 373–382.
Duin, R. P. W. (1976). On the choice of smoothing parameters of Parzen estimators of probability density functions. IEEE Transactions on Computers, C-25, 1175–1179.
Feluch, W. and Koronacki, J. (1992). A note on modified cross-validation in density estimation. Computational Statistics and Data Analysis, 13, 143–151.
George, R. T. (1990). The maximal smoothing principle in density estimation. Journal of the American Statistical Association, 85, 470–477.
George, R. T. and Scott, D. W. (1985). Oversmoothed nonparametric density estimates. Journal of the American Statistical Association, 80, 209–214.
Habbema, J. D. F., Hermans, J., and Van den Broek, K. (1974) A stepwise discrimination analysis program using density estimation. Compstat 1974: Proceedings in Computational Statistics. Physica Verlag, Vienna.
Heidenreich, N. B., Schindler, A. and Sperlich, S. (2013). Bandwidth selection for kernel density estimation: a review of fully automatic selectors. Advances in Statistical Analysis.
Jeffrey, S. S. (1996). Smoothing Methods in Statistics. Springer-Verlag, New York.
Jones, M. C. (1992). Differences and derivatives in kernel estimation. Metrika, 39, 335–340.
Jones, M. C., Marron, J. S. and Sheather, S. J. (1996). A brief survey of bandwidth selection for density estimation. Journal of the American Statistical Association, 91, 401–407.
Jones, M. C. and Kappenman, R. F. (1991). On a class of kernel density estimate bandwidth selectors. Scandinavian Journal of Statistics, 19, 337–349.
Loader, C. (1999). Local Regression and Likelihood. Springer, New York.
Olver, F. W., Lozier, D. W., Boisvert, R. F. and Clark, C. W. (2010). NIST Handbook of Mathematical Functions. Cambridge University Press, New York, USA.
Peter, H. and Marron, J.S. (1987). Estimation of integrated squared density derivatives. Statistics and Probability Letters, 6, 109–115.
Peter, H. and Marron, J.S. (1991). Local minima in cross-validation functions. Journal of the Royal Statistical Society, Series B, 53, 245–252.
Radhey, S. S. (1987). MISE of kernel estimates of a density and its derivatives. Statistics and Probability Letters, 5, 153–159.
Rudemo, M. (1982). Empirical choice of histograms and kernel density estimators. Scandinavian Journal of Statistics, 9, 65–78.
Scott, D. W. (1992). Multivariate Density Estimation. Theory, Practice and Visualization. New York: Wiley.
Scott, D.W. and George, R. T. (1987). Biased and unbiased cross-validation in density estimation. Journal of the American Statistical Association, 82, 1131–1146.
Schuster, E. F. (1969) Estimation of a probability density function and its derivatives. The Annals of Mathematical Statistics, 40 (4), 1187–1195.
Sheather, S. J. (2004). Density estimation. Statistical Science, 19, 588–597.
Sheather, S. J. and Jones, M. C. (1991). A reliable data-based bandwidth selection method for kernel density estimation. Journal of the Royal Statistical Society, Series B, 53, 683–690.
Silverman, B. W. (1986). Density Estimation for Statistics and Data Analysis. Chapman & Hall/CRC. London.
Singh, R. S. (1977). Applications of estimators of a density and its derivatives to certain statistical problems. Journal of the Royal Statistical Society, Series B, 39(3), 357–363.
Stoker, T. M. (1993). Smoothing bias in density derivative estimation. Journal of the American Statistical Association, 88, 855–863.
Stute, W. (1992). Modified cross validation in density estimation. Journal of Statistical Planning and Inference, 30, 293–305.
Tarn, D. (2007). ks: Kernel density estimation and kernel discriminant analysis for multivariate data in R. Journal of Statistical Software, 21(7), 1–16.
Tristen, H. and Jeffrey, S. R. (2008). Nonparametric Econometrics: The np Package. Journal of Statistical Software, 27(5).
Venables, W. N. and Ripley, B. D. (2002). Modern Applied Statistics with S. New York: Springer.
Wand, M. P. and Jones, M. C. (1995). Kernel Smoothing. Chapman and Hall, London.
Wand, M.P. and Ripley, B. D. (2013). KernSmooth: Functions for Kernel Smoothing for Wand and Jones (1995). R package version 2.23-10.
Wolfgang, H. (1991). Smoothing Techniques, With Implementation in S. Springer-Verlag, New York.
Wolfgang, H., Marlene, M., Stefan, S. and Axel, W. (2004). Nonparametric and Semiparametric Models. Springer-Verlag, Berlin Heidelberg.
Wolfgang, H., Marron, J. S. and Wand, M. P. (1990). Bandwidth choice for density derivatives. Journal of the Royal Statistical Society, Series B, 223–232.
ks, KernSmooth, sm, np, locfit, feature, GenKern.
A random sample of size 200 from the claw, bimodal, kurtotic, outlier and trimodal Gaussian density.
data(claw)
data(bimodal)
data(kurtotic)
data(outlier)
data(trimodal)
Numeric vector with length 200.
Each data set consists of 200 random numbers, distributed according to a normal mixture and generated with rnorMix in package nor1mix.
## requires package nor1mix
library(nor1mix)

## Claw density
claw <- rnorMix(n = 200, MW.nm10)
plot(MW.nm10)

## Bimodal density
bimodal <- rnorMix(n = 200, MW.nm7)
plot(MW.nm7)

## Kurtotic density
kurtotic <- rnorMix(n = 200, MW.nm4)
plot(MW.nm4)

## Outlier density
outlier <- rnorMix(n = 200, MW.nm5)
plot(MW.nm5)

## Trimodal density
trimodal <- rnorMix(n = 200, MW.nm9)
plot(MW.nm9)
Randomly generated from a normal mixture with the function rnorMix in package nor1mix.
Martin, M. (2013). nor1mix: Normal (1-d) mixture models (S3 classes and methods). R package version 1.1-4.
The (S3) generic function dkde
computes the r'th
derivative of kernel density estimator for one-dimensional
data. Its default method does so with the given kernel
and bandwidth for one-dimensional observations.
dkde(x, ...)

## Default S3 method:
dkde(x, y = NULL, deriv.order = 0, h, kernel = c("gaussian", "epanechnikov",
     "uniform", "triangular", "triweight", "tricube", "biweight", "cosine"),
     ...)
x: the data from which the estimate is to be computed.
y: the points of the grid at which the density derivative is to be estimated; by default, an evenly spaced grid covering the range of the data is used.
deriv.order: derivative order (scalar).
h: the smoothing bandwidth to be used; can also be a character string giving a rule to choose the bandwidth (see Details).
kernel: a character string giving the smoothing kernel to be used, with default "gaussian".
...: further arguments for (non-default) methods.
A simple estimator for the density derivative can be obtained by taking the derivative of the kernel density estimate. If the kernel K is differentiable r times, then the r'th density derivative estimate can be written as:

\hat{f}^{(r)}_h(x) = \frac{1}{n h^{r+1}} \sum_{i=1}^{n} K^{(r)}\!\left(\frac{x - X_i}{h}\right)

where

K^{(r)}(x) = \frac{d^r}{dx^r} K(x) \qquad \textrm{for } r = 0, 1, 2, \dots

The following assumptions are made on the density f, the bandwidth h, and the kernel K:

The derivative f^{(r+2)} is continuous, square integrable and ultimately monotone.
h \to 0 and n h^{2r+1} \to \infty, i.e., as the number of samples n is increased, h approaches zero at a rate slower than n^{-1/(2r+1)}.
K(x) \geq 0 and \int K(x)\,dx = 1. The kernel function is assumed to be symmetric about the origin, i.e., \int x K(x)\,dx = 0, and to have finite second moment, i.e., \mu_2(K) = \int x^2 K(x)\,dx < \infty.
Some theoretical properties of the estimator have been investigated, among others, by Bhattacharya (1967) and Schuster (1969). Let us now turn to the statistical properties of the estimator. We are interested in the mean squared error, since it combines squared bias and variance.

The bias can be written as:

E\left[\hat{f}^{(r)}_h(x)\right] - f^{(r)}(x) = \frac{1}{2} h^{2} \mu_2(K)\, f^{(r+2)}(x) + o(h^{2})

The variance of the estimator can be written as:

Var\left[\hat{f}^{(r)}_h(x)\right] = \frac{f(x)\, R\!\left(K^{(r)}\right)}{n h^{2r+1}} + o\!\left(\frac{1}{n h^{2r+1}}\right)

with R\!\left(K^{(r)}\right) = \int \left(K^{(r)}(x)\right)^{2} dx.

The MSE (Mean Squared Error) for kernel density derivative estimators can be written as:

MSE(x, h, r) = \frac{f(x)\, R\!\left(K^{(r)}\right)}{n h^{2r+1}} + \frac{1}{4} h^{4} \mu_2(K)^{2} \left(f^{(r+2)}(x)\right)^{2} + o\!\left(\frac{1}{n h^{2r+1}} + h^{4}\right)

It follows that the MSE-optimal bandwidth for estimating f^{(r)} is of order n^{-1/(2r+5)}. Therefore, the estimation of f^{(r)} requires a bandwidth of order n^{-1/(2r+5)}, compared to the optimal n^{-1/5} for estimating f itself. This reveals the increasing difficulty of estimating higher derivatives.

The MISE (Mean Integrated Squared Error) can be written as:

MISE(h, r) = AMISE(h, r) + o\!\left(\frac{1}{n h^{2r+1}} + h^{4}\right)

where

AMISE(h, r) = \frac{R\!\left(K^{(r)}\right)}{n h^{2r+1}} + \frac{1}{4} h^{4} \mu_2(K)^{2} R\!\left(f^{(r+2)}\right)

with R\!\left(f^{(r+2)}\right) = \int \left(f^{(r+2)}(x)\right)^{2} dx.

The performance of a kernel estimate is measured by the MISE or the AMISE (Asymptotic MISE).
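A quick way to see how the selected bandwidth changes with the derivative order is to run one selector across several orders (a sketch using the package's unbiased cross-validation selector; the exact values depend on the data):

data(trimodal)
sapply(0:3, function(r) h.ucv(trimodal, deriv.order = r)$h)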
If the bandwidth h is missing from dkde, then the default bandwidth is h.ucv(x, deriv.order, kernel) (unbiased cross-validation, see h.ucv).
For more details see the references.
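A minimal check of this default-bandwidth rule (a sketch; the two values should coincide if dkde indeed falls back to h.ucv with the same kernel and derivative order):

x <- rnorm(50)
fit <- dkde(x, deriv.order = 1, kernel = "gaussian")
fit$h                                             ## bandwidth chosen by dkde
h.ucv(x, deriv.order = 1, kernel = "gaussian")$h  ## should match fit$h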
x: data points - same as input.
data.name: the deparsed name of the x argument.
n: the sample size after elimination of missing values.
kernel: name of kernel to use.
deriv.order: the derivative order to use.
h: the bandwidth value to use.
eval.points: the coordinates of the points where the density derivative is estimated.
est.fx: the estimated density derivative values.
This functionality is also available in other packages such as KernSmooth, sm, np, GenKern and locfit if deriv.order = 0, and in the ks package for the Gaussian kernel only if 0 <= deriv.order <= 10.
Arsalane Chouaib Guidoum [email protected]
Alekseev, V. G. (1972). Estimation of a probability density function and its derivatives. Mathematical notes of the Academy of Sciences of the USSR. 12 (5), 808–811.
Alexandre, B. T. (2009). Introduction to Nonparametric Estimation. Springer-Verlag, New York.
Bowman, A. W. and Azzalini, A. (1997). Applied Smoothing Techniques for Data Analysis: the Kernel Approach with S-Plus Illustrations. Oxford University Press, Oxford.
Bhattacharya, P. K. (1967). Estimation of a probability density function and Its derivatives. Sankhya: The Indian Journal of Statistics, Series A, 29, 373–382.
Jeffrey, S. S. (1996). Smoothing Methods in Statistics. Springer-Verlag, New York.
Radhey, S. S. (1987). MISE of kernel estimates of a density and its derivatives. Statistics and Probability Letters, 5, 153–159.
Scott, D. W. (1992). Multivariate Density Estimation. Theory, Practice and Visualization. New York: Wiley.
Schuster, E. F. (1969) Estimation of a probability density function and its derivatives. The Annals of Mathematical Statistics, 40 (4), 1187–1195.
Silverman, B. W. (1986). Density Estimation for Statistics and Data Analysis. Chapman & Hall/CRC. London.
Stoker, T. M. (1993). Smoothing bias in density derivative estimation. Journal of the American Statistical Association, 88, 855–863.
Venables, W. N. and Ripley, B. D. (2002). Modern Applied Statistics with S. New York: Springer.
Wand, M. P. and Jones, M. C. (1995). Kernel Smoothing. Chapman and Hall, London.
Wolfgang, H. (1991). Smoothing Techniques, With Implementation in S. Springer-Verlag, New York.
plot.dkde
, see density
in package "stats" if deriv.order = 0
, and kdde
in package ks.
## EXAMPLE 1: Simple example of a Gaussian density derivative

x <- rnorm(100)

dkde(x, deriv.order = 0)  ## KDE of f
dkde(x, deriv.order = 1)  ## KDDE of d/dx f
dkde(x, deriv.order = 2)  ## KDDE of d^2/dx^2 f
dkde(x, deriv.order = 3)  ## KDDE of d^3/dx^3 f

oldpar <- par(no.readonly = TRUE)
dev.new()
par(mfrow = c(2, 2))
plot(dkde(x, deriv.order = 0))
plot(dkde(x, deriv.order = 1))
plot(dkde(x, deriv.order = 2))
plot(dkde(x, deriv.order = 3))
par(oldpar)

## EXAMPLE 2: Bimodal Gaussian density derivative
## show the kernels in the dkde parametrization

fx  <- function(x) 0.5 * dnorm(x, -1.5, 0.5) + 0.5 * dnorm(x, 1.5, 0.5)
fx1 <- function(x) 0.5 * (-4 * x - 6) * dnorm(x, -1.5, 0.5) +
                   0.5 * (-4 * x + 6) * dnorm(x, 1.5, 0.5)

## 'h = 0.3' ; 'Derivative order = 0'
kernels <- eval(formals(dkde.default)$kernel)
dev.new()
plot(dkde(bimodal, h = 0.3), sub = paste("Derivative order = 0", ";",
     "Bandwidth = 0.3"), ylim = c(0, 0.5), main = "Bimodal Gaussian Density")
for (i in 2:length(kernels))
  lines(dkde(bimodal, h = 0.3, kernel = kernels[i]), col = i)
curve(fx, add = TRUE, lty = 8)
legend("topright", legend = c(TRUE, kernels), col = c("black", seq(kernels)),
       lty = c(8, rep(1, length(kernels))), cex = 0.7, inset = .015)

## 'h = 0.6' ; 'Derivative order = 1'
kernels <- eval(formals(dkde.default)$kernel)[-3]
dev.new()
plot(dkde(bimodal, deriv.order = 1, h = 0.6),
     main = "Bimodal Gaussian Density Derivative",
     sub = paste("Derivative order = 1", ";", "Bandwidth = 0.6"),
     ylim = c(-0.6, 0.6))
for (i in 2:length(kernels))
  lines(dkde(bimodal, deriv.order = 1, h = 0.6, kernel = kernels[i]), col = i)
curve(fx1, add = TRUE, lty = 8)
legend("topright", legend = c(TRUE, kernels), col = c("black", seq(kernels)),
       lty = c(8, rep(1, length(kernels))), cex = 0.7, inset = .015)
The (S3) generic function h.amise evaluates the asymptotic mean integrated squared error (AMISE), and the corresponding optimal smoothing parameter, for the r'th derivative of a one-dimensional kernel density estimator.
h.amise(x, ...)

## Default S3 method:
h.amise(x, deriv.order = 0, lower = 0.1 * hos, upper = 2 * hos,
        tol = 0.1 * lower, kernel = c("gaussian", "epanechnikov",
        "triweight", "tricube", "biweight", "cosine"), ...)
x: vector of data values.
deriv.order: derivative order (scalar).
lower, upper: range over which to minimize. The default is almost always satisfactory.
tol: the convergence tolerance for the optimizer.
kernel: a character string giving the smoothing kernel to be used, with default "gaussian".
...: further arguments for (non-default) methods.
h.amise implements the asymptotic mean integrated squared error for choosing the optimal bandwidth of an r'th derivative kernel density estimator.
We consider the following AMISE version of the r'th derivative of the kernel density estimate (see Scott 1992, pp 131):

AMISE(h, r) = \frac{R\!\left(K^{(r)}\right)}{n h^{2r+1}} + \frac{1}{4} h^{4} \mu_2(K)^{2} R\!\left(f^{(r+2)}\right)

The optimal bandwidth minimizing this function is:

h^{*} = \left[\frac{(2r+1)\, R\!\left(K^{(r)}\right)}{n\, \mu_2(K)^{2}\, R\!\left(f^{(r+2)}\right)}\right]^{1/(2r+5)}

whereof

\inf_{h>0} AMISE(h, r) = \frac{2r+5}{4}\, R\!\left(K^{(r)}\right)^{4/(2r+5)} \left[\frac{\mu_2(K)^{2} R\!\left(f^{(r+2)}\right)}{2r+1}\right]^{(2r+1)/(2r+5)} n^{-4/(2r+5)}

which is the smallest possible AMISE for estimation of f^{(r)} using the kernel K, where R\!\left(K^{(r)}\right) = \int \left(K^{(r)}(x)\right)^{2} dx and \mu_2(K) = \int x^{2} K(x)\,dx.
The range over which to minimize is determined by the oversmoothing bandwidth hos; the default is almost always satisfactory. See George and Scott (1985), George (1990), Scott (1992, pp 165), Wand and Jones (1995, pp 61).
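If needed, the search interval can also be narrowed explicitly via the lower and upper arguments (a sketch; the default oversmoothing-based range is usually adequate):

data(kurtotic)
h.amise(kurtotic, deriv.order = 0)
h.amise(kurtotic, deriv.order = 0, lower = 0.05, upper = 1)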
x: data points - same as input.
data.name: the deparsed name of the x argument.
n: the sample size after elimination of missing values.
kernel: name of kernel to use.
deriv.order: the derivative order to use.
h: value of bandwidth parameter.
amise: the AMISE value.
Arsalane Chouaib Guidoum [email protected]
Bowman, A. W. and Azzalini, A. (1997). Applied Smoothing Techniques for Data Analysis: the Kernel Approach with S-Plus Illustrations. Oxford University Press, Oxford.
Radhey, S. S. (1987). MISE of kernel estimates of a density and its derivatives. Statistics and Probability Letters, 5, 153–159.
Scott, D. W. (1992). Multivariate Density Estimation. Theory, Practice and Visualization. New York: Wiley.
Sheather, S. J. (2004). Density estimation. Statistical Science, 19, 588–597.
Silverman, B. W. (1986). Density Estimation for Statistics and Data Analysis. Chapman & Hall/CRC. London.
Wand, M. P. and Jones, M. C. (1995). Kernel Smoothing. Chapman and Hall, London.
plot.h.amise; see also nmise in package sm, which evaluates the mean integrated squared error of a density estimate (deriv.order = 0) constructed from data which follow a normal distribution.
## Derivative order = 0
h.amise(kurtotic, deriv.order = 0)

## Derivative order = 1
h.amise(kurtotic, deriv.order = 1)
The (S3) generic function h.bcv computes the biased cross-validation bandwidth selector for the r'th derivative of a one-dimensional kernel density estimator.
h.bcv(x, ...)

## Default S3 method:
h.bcv(x, whichbcv = 1, deriv.order = 0, lower = 0.1 * hos, upper = 2 * hos,
      tol = 0.1 * lower, kernel = c("gaussian", "epanechnikov", "triweight",
      "tricube", "biweight", "cosine"), ...)
x: vector of data values.
whichbcv: method selected: 1 for BCV1 or 2 for BCV2, see Details.
deriv.order: derivative order (scalar).
lower, upper: range over which to minimize. The default is almost always satisfactory.
tol: the convergence tolerance for the optimizer.
kernel: a character string giving the smoothing kernel to be used, with default "gaussian".
...: further arguments for (non-default) methods.
h.bcv implements biased cross-validation for choosing the bandwidth of an r'th derivative kernel density estimator. If whichbcv = 1 then BCV1 is selected (Scott and George 1987), and if whichbcv = 2 then BCV2 is used (Jones and Kappenman 1991).
Scott and George (1987) suggest a method which has as its immediate target the AMISE (e.g. Silverman 1986, section 3.3). Following Peter and Marron (1987) and Jones and Kappenman (1991), two estimators of the integrated squared density derivative R(f^{(r+2)}) are considered. Scott and George (1987) proposed plugging the first of these into the AMISE; the BCV1 bandwidth, say, is then the h that minimises the resulting criterion. The BCV2 bandwidth is defined analogously as the minimiser of the criterion based on the second estimator (Jones and Kappenman 1991), which involves the convolution of the r'th derivative of the kernel function, K^{(r)} * K^{(r)} (see kernel.conv and kernel.fun); see the references for the explicit expressions.
The range over which to minimize is determined by the oversmoothing bandwidth hos; the default is almost always satisfactory. See George and Scott (1985), George (1990), Scott (1992, pp 165), Wand and Jones (1995, pp 61).
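For deriv.order = 0 and the Gaussian kernel, the result can be compared with the BCV selector shipped with base R (a hedged check: the two implementations differ in details, so only rough agreement is expected):

x <- rnorm(100)
h.bcv(x, whichbcv = 1, deriv.order = 0, kernel = "gaussian")$h
bw.bcv(x)   ## stats::bw.bcv, Gaussian kernel, deriv.order = 0 only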
x: data points - same as input.
data.name: the deparsed name of the x argument.
n: the sample size after elimination of missing values.
kernel: name of kernel to use.
deriv.order: the derivative order to use.
whichbcv: method selected.
h: value of bandwidth parameter.
min.bcv: the minimal BCV value.
Arsalane Chouaib Guidoum [email protected]
Jones, M. C. and Kappenman, R. F. (1991). On a class of kernel density estimate bandwidth selectors. Scandinavian Journal of Statistics, 19, 337–349.
Jones, M. C., Marron, J. S. and Sheather,S. J. (1996). A brief survey of bandwidth selection for density estimation. Journal of the American Statistical Association, 91, 401–407.
Peter, H. and Marron, J.S. (1987). Estimation of integrated squared density derivatives. Statistics and Probability Letters, 6, 109–115.
Scott, D.W. and George, R. T. (1987). Biased and unbiased cross-validation in density estimation. Journal of the American Statistical Association, 82, 1131–1146.
Sheather,S. J. (2004). Density estimation. Statistical Science, 19, 588–597.
Tarn, D. (2007). ks: Kernel density estimation and kernel discriminant analysis for multivariate data in R. Journal of Statistical Software, 21(7), 1–16.
Wand, M. P. and Jones, M. C. (1995). Kernel Smoothing. Chapman and Hall, London.
Wolfgang, H. (1991). Smoothing Techniques, With Implementation in S. Springer-Verlag, New York.
plot.h.bcv
, see bw.bcv
in package "stats" and
bcv
in package MASS for Gaussian kernel only if deriv.order = 0
,
Hbcv
for bivariate data in package ks for Gaussian kernel
only if deriv.order = 0
, kdeb
in package locfit
if deriv.order = 0
.
## EXAMPLE 1:
x <- rnorm(100)
h.bcv(x, whichbcv = 1, deriv.order = 0)
h.bcv(x, whichbcv = 2, deriv.order = 0)

## EXAMPLE 2:
## Derivative order = 0
h.bcv(kurtotic, deriv.order = 0)
## Derivative order = 1
h.bcv(kurtotic, deriv.order = 1)
The (S3) generic function h.ccv computes the complete cross-validation bandwidth selector for the r'th derivative of a one-dimensional kernel density estimator.
h.ccv(x, ...)

## Default S3 method:
h.ccv(x, deriv.order = 0, lower = 0.1 * hos, upper = hos, tol = 0.1 * lower,
      kernel = c("gaussian", "triweight", "tricube", "biweight", "cosine"),
      ...)
x: vector of data values.
deriv.order: derivative order (scalar).
lower, upper: range over which to minimize. The default is almost always satisfactory.
tol: the convergence tolerance for the optimizer.
kernel: a character string giving the smoothing kernel to be used, with default "gaussian".
...: further arguments for (non-default) methods.
h.ccv implements complete cross-validation for choosing the bandwidth of an r'th derivative kernel density estimator.
Jones and Kappenman (1991) proposed so-called complete cross-validation (CCV) for the kernel density estimator. This method can be extended to the estimation of derivatives of the density by basing the estimate of the integrated squared density derivative (Peter and Marron 1987) on the leave-one-out estimators \hat{f}^{(r)}_{h,-i}(X_i). Starting from an estimate of the MISE, the CCV bandwidth, say, is the h that minimises the resulting criterion, which involves the convolution of the r'th derivative of the kernel function, K^{(r)} * K^{(r)} (see kernel.conv and kernel.fun); see the references for the explicit expression.
The range over which to minimize is determined by the oversmoothing bandwidth hos; the default is almost always satisfactory. See George and Scott (1985), George (1990), Scott (1992, pp 165), Wand and Jones (1995, pp 61).
x: data points - same as input.
data.name: the deparsed name of the x argument.
n: the sample size after elimination of missing values.
kernel: name of kernel to use.
deriv.order: the derivative order to use.
h: value of bandwidth parameter.
min.ccv: the minimal CCV value.
Arsalane Chouaib Guidoum [email protected]
Jones, M. C. and Kappenman, R. F. (1991). On a class of kernel density estimate bandwidth selectors. Scandinavian Journal of Statistics, 19, 337–349.
Peter, H. and Marron, J.S. (1987). Estimation of integrated squared density derivatives. Statistics and Probability Letters, 6, 109–115.
## Derivative order = 0
h.ccv(kurtotic, deriv.order = 0)

## Derivative order = 1
h.ccv(kurtotic, deriv.order = 1)
The (S3) generic function h.mcv computes the modified cross-validation bandwidth selector for the r'th derivative of a one-dimensional kernel density estimator.
h.mcv(x, ...)

## Default S3 method:
h.mcv(x, deriv.order = 0, lower = 0.1 * hos, upper = 2 * hos, tol = 0.1 * lower,
      kernel = c("gaussian", "epanechnikov", "triweight", "tricube",
      "biweight", "cosine"), ...)
x: vector of data values.
deriv.order: derivative order (scalar).
lower, upper: range over which to minimize. The default is almost always satisfactory.
tol: the convergence tolerance for the optimizer.
kernel: a character string giving the smoothing kernel to be used, with default "gaussian".
...: further arguments for (non-default) methods.
h.mcv implements modified cross-validation for choosing the bandwidth of an r'th derivative kernel density estimator.
Stute (1992) proposed so-called modified cross-validation (MCV) for the kernel density estimator. This method can be extended to the estimation of derivatives of a density; the essential idea is to approximate the problematic term of the cross-validation criterion with the aid of the Hajek projection (see Stute 1992). The minimization criterion involves the convolution of the r'th derivative of the kernel function, K^{(r)} * K^{(r)} (see kernel.conv and kernel.fun); see the references for the explicit expression.
The range over which to minimize is determined by the oversmoothing bandwidth hos; the default is almost always satisfactory. See George and Scott (1985), George (1990), Scott (1992, pp 165), Wand and Jones (1995, pp 61).
x: data points - same as input.
data.name: the deparsed name of the x argument.
n: the sample size after elimination of missing values.
kernel: name of kernel to use.
deriv.order: the derivative order to use.
h: value of bandwidth parameter.
min.mcv: the minimal MCV value.
Arsalane Chouaib Guidoum [email protected]
Heidenreich, N. B., Schindler, A. and Sperlich, S. (2013). Bandwidth selection for kernel density estimation: a review of fully automatic selectors. Advances in Statistical Analysis.
Stute, W. (1992). Modified cross validation in density estimation. Journal of Statistical Planning and Inference, 30, 293–305.
## Derivative order = 0
h.mcv(kurtotic, deriv.order = 0)

## Derivative order = 1
h.mcv(kurtotic, deriv.order = 1)
The (S3) generic function h.mlcv
computes the maximum
likelihood cross-validation (Kullback-Leibler information)
bandwidth selector of a one-dimensional kernel density estimate.
h.mlcv(x, ...)

## Default S3 method:
h.mlcv(x, lower = 0.1, upper = 5, tol = 0.1 * lower,
       kernel = c("gaussian", "epanechnikov", "uniform", "triangular",
       "triweight", "tricube", "biweight", "cosine"), ...)
x: vector of data values.
lower, upper: range over which to maximize. The default is almost always satisfactory.
tol: the convergence tolerance for the optimizer.
kernel: a character string giving the smoothing kernel to be used, with default "gaussian".
...: further arguments for (non-default) methods.
h.mlcv implements maximum-likelihood cross-validation for choosing the optimal bandwidth of a kernel density estimator.
This method was proposed by Habbema, Hermans, and Van den Broek (1974) and by Duin (1976). The maximum-likelihood cross-validation (MLCV) function is defined by:

MLCV(h) = \frac{1}{n} \sum_{i=1}^{n} \log \hat{f}_{h,-i}(X_i)

where \hat{f}_{h,-i}, the estimate on the subset excluding X_i (the leave-one-out estimator), can be written as:

\hat{f}_{h,-i}(X_i) = \frac{1}{(n-1)\,h} \sum_{j \neq i} K\!\left(\frac{X_i - X_j}{h}\right)

The selected bandwidth is the h that attains the finite maximum of MLCV(h):

\hat{h}_{MLCV} = \arg\max_{h > 0} MLCV(h)
x: data points - same as input.
data.name: the deparsed name of the x argument.
n: the sample size after elimination of missing values.
kernel: name of kernel to use.
h: value of bandwidth parameter.
mlcv: the maximal likelihood CV value.
Arsalane Chouaib Guidoum [email protected]
Habbema, J. D. F., Hermans, J., and Van den Broek, K. (1974) A stepwise discrimination analysis program using density estimation. Compstat 1974: Proceedings in Computational Statistics. Physica Verlag, Vienna.
Duin, R. P. W. (1976). On the choice of smoothing parameters of Parzen estimators of probability density functions. IEEE Transactions on Computers, C-25, 1175–1179.
plot.h.mlcv
, see lcv
in package locfit.
h.mlcv(bimodal)
h.mlcv(bimodal, kernel = "epanechnikov")
The (S3) generic function h.tcv computes the trimmed cross-validation bandwidth selector for the r'th derivative of a one-dimensional kernel density estimator.
h.tcv(x, ...)

## Default S3 method:
h.tcv(x, deriv.order = 0, lower = 0.1 * hos, upper = 2 * hos, tol = 0.1 * lower,
      kernel = c("gaussian", "epanechnikov", "uniform", "triangular",
      "triweight", "tricube", "biweight", "cosine"), ...)
x: vector of data values.
deriv.order: derivative order (scalar).
lower, upper: range over which to minimize. The default is almost always satisfactory.
tol: the convergence tolerance for the optimizer.
kernel: a character string giving the smoothing kernel to be used, with default "gaussian".
...: further arguments for (non-default) methods.
h.tcv implements trimmed cross-validation for choosing the bandwidth of an r'th derivative kernel density estimator.
Feluch and Koronacki (1992) proposed so-called trimmed cross-validation (TCV) for the kernel density estimator, a simple modification of the unbiased (least-squares) cross-validation criterion. In the "trimmed" version of the unbiased criterion, to be minimized with respect to h, pairs of observations that are too close together are excluded through an indicator function 1(|X_i - X_j| > c_n), where c_n is a sequence of positive constants tending to zero as n tends to infinity; a specific choice of c_n is used in the implementation to assure convergence. The resulting trimmed cross-validation function involves the convolution of the r'th derivative of the kernel function, K^{(r)} * K^{(r)} (see kernel.conv and kernel.fun); see the references for the explicit expression.
The range over which to minimize is determined by the oversmoothing bandwidth hos; the default is almost always satisfactory. See George and Scott (1985), George (1990), Scott (1992, pp 165), Wand and Jones (1995, pp 61).
x: data points - same as input.
data.name: the deparsed name of the x argument.
n: the sample size after elimination of missing values.
kernel: name of kernel to use.
deriv.order: the derivative order to use.
h: value of bandwidth parameter.
min.tcv: the minimal TCV value.
Arsalane Chouaib Guidoum [email protected]
Feluch, W. and Koronacki, J. (1992). A note on modified cross-validation in density estimation. Computational Statistics and Data Analysis, 13, 143–151.
## Derivative order = 0
h.tcv(kurtotic, deriv.order = 0)

## Derivative order = 1
h.tcv(kurtotic, deriv.order = 1)
The (S3) generic function h.ucv computes the unbiased (least-squares) cross-validation bandwidth selector for the r'th derivative of a one-dimensional kernel density estimator.
h.ucv(x, ...)

## Default S3 method:
h.ucv(x, deriv.order = 0, lower = 0.1 * hos, upper = 2 * hos, tol = 0.1 * lower,
      kernel = c("gaussian", "epanechnikov", "uniform", "triangular",
      "triweight", "tricube", "biweight", "cosine"), ...)
x: vector of data values.
deriv.order: derivative order (scalar).
lower, upper: range over which to minimize. The default is almost always satisfactory.
tol: the convergence tolerance for the optimizer.
kernel: a character string giving the smoothing kernel to be used, with default "gaussian".
...: further arguments for (non-default) methods.
h.ucv implements unbiased (least-squares) cross-validation for choosing the bandwidth of an r'th derivative kernel density estimator.
Rudemo (1982) and Bowman (1984) proposed so-called unbiased (least-squares) cross-validation (UCV) for the kernel density estimator. An adaptation of unbiased cross-validation to bandwidth choice for the r'th derivative of the kernel density estimator was proposed by Wolfgang et al. (1990). The essential idea of this method, for the estimation of f^{(r)} (r is the derivative order), is to use the bandwidth h which minimizes the function:

UCV(h, r) = \int \left(\hat{f}^{(r)}_h(x)\right)^{2} dx - \frac{2(-1)^{r}}{n} \sum_{i=1}^{n} \hat{f}^{(2r)}_{h,-i}(X_i)

for r = 0, 1, 2, \dots, where \hat{f}^{(r)}_{h,-i}, the estimate on the subset excluding X_i (the leave-one-out estimator), can be written as:

\hat{f}^{(r)}_{h,-i}(x) = \frac{1}{(n-1)\, h^{r+1}} \sum_{j \neq i} K^{(r)}\!\left(\frac{x - X_j}{h}\right)

The bandwidth minimizing this function is \hat{h}^{(r)}_{UCV} = \arg\min_{h > 0} UCV(h, r). The function is unbiased cross-validation in the sense that E[UCV(h, r)] = MISE(h, r) - R(f^{(r)}) (see Scott and George 1987). It can be simplified to the computationally convenient form:

UCV(h, r) = \frac{R\!\left(K^{(r)}\right)}{n h^{2r+1}} + \frac{(-1)^{r}}{n(n-1) h^{2r+1}} \sum_{i=1}^{n} \sum_{j \neq i} \left(K^{(r)} * K^{(r)} - 2 K^{(2r)}\right)\!\left(\frac{X_i - X_j}{h}\right)

where K^{(r)} * K^{(r)} is the convolution of the r'th derivative of the kernel function (see kernel.conv and kernel.fun) and R\!\left(K^{(r)}\right) = \int \left(K^{(r)}(x)\right)^{2} dx.
The range over which to minimize is determined by the oversmoothing bandwidth hos; the default is almost always satisfactory. See George and Scott (1985), George (1990), Scott (1992, pp 165), Wand and Jones (1995, pp 61).
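For deriv.order = 0 and the Gaussian kernel, the selected bandwidth can be compared against the UCV selector in base R (a hedged check: the implementations differ in details, so only rough agreement is expected):

x <- rnorm(100)
h.ucv(x, deriv.order = 0, kernel = "gaussian")$h
bw.ucv(x)   ## stats::bw.ucv, Gaussian kernel, deriv.order = 0 only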
x: data points - same as input.
data.name: the deparsed name of the x argument.
n: the sample size after elimination of missing values.
kernel: name of kernel to use.
deriv.order: the derivative order to use.
h: value of bandwidth parameter.
min.ucv: the minimal UCV value.
Arsalane Chouaib Guidoum [email protected]
Bowman, A. (1984). An alternative method of cross-validation for the smoothing of kernel density estimates. Biometrika, 71, 353–360.
Jones, M. C. and Kappenman, R. F. (1991). On a class of kernel density estimate bandwidth selectors. Scandinavian Journal of Statistics, 19, 337–349.
Jones, M. C., Marron, J. S. and Sheather,S. J. (1996). A brief survey of bandwidth selection for density estimation. Journal of the American Statistical Association, 91, 401–407.
Peter, H. and Marron, J.S. (1987). Estimation of integrated squared density derivatives. Statistics and Probability Letters, 6, 109–115.
Rudemo, M. (1982). Empirical choice of histograms and kernel density estimators. Scandinavian Journal of Statistics, 9, 65–78.
Scott, D.W. and George, R. T. (1987). Biased and unbiased cross-validation in density estimation. Journal of the American Statistical Association, 82, 1131–1146.
Sheather, S. J. (2004). Density estimation. Statistical Science, 19, 588–597.
Tarn, D. (2007). ks: Kernel density estimation and kernel discriminant analysis for multivariate data in R. Journal of Statistical Software, 21(7), 1–16.
Wand, M. P. and Jones, M. C. (1995). Kernel Smoothing. Chapman and Hall, London.
Wolfgang, H. (1991). Smoothing Techniques, With Implementation in S. Springer-Verlag, New York.
Wolfgang, H., Marron, J. S. and Wand, M. P. (1990). Bandwidth choice for density derivatives. Journal of the Royal Statistical Society, Series B, 223–232.
plot.h.ucv
, see bw.ucv
in package "stats" and
ucv
in package MASS for Gaussian kernel only if deriv.order = 0
,
hlscv
in package ks for Gaussian kernel only if 0 <= deriv.order <= 5
,
kdeb
in package locfit if deriv.order = 0
.
## Derivative order = 0
h.ucv(kurtotic, deriv.order = 0)

## Derivative order = 1
h.ucv(kurtotic, deriv.order = 1)
The (S3) generic function kernel.conv computes the convolution of the r'th derivative of a kernel function.
kernel.conv(x, ...)

## Default S3 method:
kernel.conv(x = NULL, deriv.order = 0, kernel = c("gaussian", "epanechnikov",
            "uniform", "triangular", "triweight", "tricube", "biweight",
            "cosine", "silverman"), ...)
x: points at which the convolution of the kernel derivative is to be evaluated.
deriv.order: derivative order (scalar).
kernel: a character string giving the smoothing kernel to be used, with default "gaussian".
...: further arguments for (non-default) methods.
The convolution of the r'th derivative of a kernel function is written K^{(r)} * K^{(r)}. It is defined as the integral of the product of the kernel derivative with a shifted copy of itself. As such, it is a particular kind of integral transform:

(K^{(r)} * K^{(r)})(x) = \int_{-\infty}^{+\infty} K^{(r)}(y)\, K^{(r)}(x - y)\, dy

where

K^{(r)}(x) = \frac{d^{r}}{dx^{r}} K(x) \qquad \textrm{for } r = 0, 1, 2, \dots
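This definition can be checked numerically with kernel.fun and integrate (a sketch for the Gaussian kernel and r = 1; the two values should agree up to numerical error):

conv1 <- function(x)
  integrate(function(y)
    kernel.fun(y, kernel = "gaussian", deriv.order = 1)$kx *
    kernel.fun(x - y, kernel = "gaussian", deriv.order = 1)$kx,
    lower = -Inf, upper = Inf)$value
conv1(0)
kernel.conv(x = 0, kernel = "gaussian", deriv.order = 1)$kx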
kernel: name of kernel to use.
deriv.order: the derivative order to use.
x: the n coordinates of the points where the convolution of the kernel derivative is evaluated.
kx: the convolution of kernel derivative values.
Arsalane Chouaib Guidoum [email protected]
Olver, F. W., Lozier, D. W., Boisvert, R. F. and Clark, C. W. (2010). NIST Handbook of Mathematical Functions. Cambridge University Press, New York, USA.
Scott, D. W. (1992). Multivariate Density Estimation. Theory, Practice and Visualization. New York: Wiley.
Silverman, B. W. (1986). Density Estimation for Statistics and Data Analysis. Chapman & Hall/CRC. London.
Wand, M. P. and Jones, M. C. (1995). Kernel Smoothing. Chapman and Hall, London.
Wolfgang, H. (1991). Smoothing Techniques, With Implementation in S. Springer-Verlag, New York.
plot.kernel.conv; see also kernapply in package "stats", which computes the convolution of an input sequence with a kernel, and convolve, which uses the Fast Fourier Transform (fft) to compute several kinds of convolutions of two sequences.
kernels <- eval(formals(kernel.conv.default)$kernel)
kernels

## gaussian
kernel.conv(x = 0, kernel = kernels[1], deriv.order = 0)
kernel.conv(x = 0, kernel = kernels[1], deriv.order = 1)

## silverman
kernel.conv(x = 0, kernel = kernels[9], deriv.order = 0)
kernel.conv(x = 0, kernel = kernels[9], deriv.order = 1)
The (S3) generic function kernel.fun computes the r'th derivative of a kernel function.
kernel.fun(x, ...)

## Default S3 method:
kernel.fun(x = NULL, deriv.order = 0, kernel = c("gaussian", "epanechnikov",
           "uniform", "triangular", "triweight", "tricube", "biweight",
           "cosine", "silverman"), ...)
x: points at which the derivative of the kernel function is to be evaluated.
deriv.order: derivative order (scalar).
kernel: a character string giving the smoothing kernel to be used, with default "gaussian".
...: further arguments for (non-default) methods.
We give a short survey of some kernel functions, where r is the derivative order:

Gaussian: K(x) = \frac{1}{\sqrt{2\pi}} e^{-x^{2}/2}, \; x \in \mathbb{R}
Epanechnikov: K(x) = \frac{3}{4}\left(1 - x^{2}\right) 1_{\{|x| \leq 1\}}
uniform (rectangular): K(x) = \frac{1}{2}\, 1_{\{|x| \leq 1\}}
triangular: K(x) = \left(1 - |x|\right) 1_{\{|x| \leq 1\}}
triweight: K(x) = \frac{35}{32}\left(1 - x^{2}\right)^{3} 1_{\{|x| \leq 1\}}
tricube: K(x) = \frac{70}{81}\left(1 - |x|^{3}\right)^{3} 1_{\{|x| \leq 1\}}
biweight: K(x) = \frac{15}{16}\left(1 - x^{2}\right)^{2} 1_{\{|x| \leq 1\}}
cosine: K(x) = \frac{\pi}{4} \cos\!\left(\frac{\pi}{2} x\right) 1_{\{|x| \leq 1\}}
Silverman: K(x) = \frac{1}{2} \exp\!\left(-\frac{|x|}{\sqrt{2}}\right) \sin\!\left(\frac{|x|}{\sqrt{2}} + \frac{\pi}{4}\right)

The r'th derivative of a kernel function is written:

K^{(r)}(x) = \frac{d^{r}}{dx^{r}} K(x) \qquad \textrm{for } r = 0, 1, 2, \dots

The r'th derivative of the Gaussian kernel is given by:

K^{(r)}(x) = (-1)^{r} H_{r}(x)\, K(x)

where H_r is the r'th (probabilists') Hermite polynomial. These polynomials form a set of orthogonal polynomials; for more details see hermite.h.polynomials in package orthopolynom.
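A minimal numerical check of this Hermite-polynomial relation for r = 2, where the second (probabilists') Hermite polynomial is H_2(x) = x^2 - 1 (a sketch; should return TRUE up to numerical tolerance):

x <- seq(-3, 3, by = 0.5)
H2 <- x^2 - 1    ## second (probabilists') Hermite polynomial
all.equal(kernel.fun(x, kernel = "gaussian", deriv.order = 2)$kx,
          (-1)^2 * H2 * dnorm(x))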
kernel: name of kernel to use.
deriv.order: the derivative order to use.
x: the n coordinates of the points where the derivative of the kernel function is evaluated.
kx: the kernel derivative values.
Arsalane Chouaib Guidoum [email protected]
Jones, M. C. (1992). Differences and derivatives in kernel estimation. Metrika, 39, 335–340.
Olver, F. W., Lozier, D. W., Boisvert, R. F. and Clark, C. W. (2010). NIST Handbook of Mathematical Functions. Cambridge University Press, New York, USA.
Silverman, B. W. (1986). Density Estimation for Statistics and Data Analysis. Chapman & Hall/CRC. London.
plot.kernel.fun
, deriv
and D
in
package "stats" for symbolic and algorithmic derivatives of simple expressions.
kernels <- eval(formals(kernel.fun.default)$kernel)
kernels

## gaussian
kernel.fun(x = 0, kernel = kernels[1], deriv.order = 0)
kernel.fun(x = 0, kernel = kernels[1], deriv.order = 1)

## silverman
kernel.fun(x = 0, kernel = kernels[9], deriv.order = 0)
kernel.fun(x = 0, kernel = kernels[9], deriv.order = 1)
The plot.dkde
function loops through calls to
the dkde
function. Plot for kernel density
derivative estimate for 1-dimensional data.
## S3 method for class 'dkde'
plot(x, fx = NULL, ...)

## S3 method for class 'dkde'
lines(x, ...)
x: object of class dkde (output from dkde).
fx: add to the graphic the true density derivative (class: function).
...: other graphics parameters, see par in package "graphics".
The 1-d plot is a standard plot of a 1-d curve. If !is.null(fx), then the true density derivative is added.
Plots of 1-d kernel density derivative estimates are sent to the graphics window.
Arsalane Chouaib Guidoum [email protected]
dkde
, plot.density
in package "stats" if deriv.order = 0
.
plot(dkde(kurtotic, deriv.order = 0, kernel = "gaussian"), sub = "")
lines(dkde(kurtotic, deriv.order = 0, kernel = "biweight"), col = "red")
The plot.h.amise
function loops through calls to
the h.amise
function. Plot for asymptotic mean integrated
squared error function for 1-dimensional data.
## S3 method for class 'h.amise'
plot(x, seq.bws = NULL, ...)

## S3 method for class 'h.amise'
lines(x, seq.bws = NULL, ...)
x: object of class h.amise (output from h.amise).
seq.bws: the sequence of bandwidths over which to compute the AMISE function. By default, the procedure defines a sequence of 50 points over a range derived from the oversmoothing bandwidth.
...: other graphics parameters, see par in package "graphics".
Plots of the 1-d AMISE function are sent to the graphics window.
kernel: name of kernel to use.
deriv.order: the derivative order to use.
seq.bws: the sequence of bandwidths.
amise: the values of the AMISE function over the bandwidth grid.
Arsalane Chouaib Guidoum [email protected]
plot(h.amise(bimodal, deriv.order = 0))
The plot.h.bcv
function loops through calls to
the h.bcv
function. Plot for biased cross-validation
function for 1-dimensional data.
## S3 method for class 'h.bcv'
plot(x, seq.bws = NULL, ...)

## S3 method for class 'h.bcv'
lines(x, seq.bws = NULL, ...)
x: object of class h.bcv (output from h.bcv).
seq.bws: the sequence of bandwidths over which to compute the biased cross-validation function. By default, the procedure defines a sequence of 50 points over a range derived from the oversmoothing bandwidth.
...: other graphics parameters, see par in package "graphics".
Plots of the 1-d biased cross-validation function are sent to the graphics window.
kernel: name of kernel to use.
deriv.order: the derivative order to use.
seq.bws: the sequence of bandwidths.
bcv: the values of the biased cross-validation function over the bandwidth grid.
Arsalane Chouaib Guidoum [email protected]
## EXAMPLE 1:
plot(h.bcv(trimodal, whichbcv = 1, deriv.order = 0), main = "", sub = "")
lines(h.bcv(trimodal, whichbcv = 2, deriv.order = 0), col = "red")
legend("topright", c("BCV1", "BCV2"), lty = 1, col = c("black", "red"),
       inset = .015)

## EXAMPLE 2:
plot(h.bcv(trimodal, whichbcv = 1, deriv.order = 1), main = "", sub = "")
lines(h.bcv(trimodal, whichbcv = 2, deriv.order = 1), col = "red")
legend("topright", c("BCV1", "BCV2"), lty = 1, col = c("black", "red"),
       inset = .015)
The plot.h.ccv
function loops through calls to
the h.ccv
function. Plot for complete cross-validation
function for 1-dimensional data.
## S3 method for class 'h.ccv'
plot(x, seq.bws = NULL, ...)

## S3 method for class 'h.ccv'
lines(x, seq.bws = NULL, ...)
x: object of class h.ccv (output from h.ccv).
seq.bws: the sequence of bandwidths over which to compute the complete cross-validation function. By default, the procedure defines a sequence of 50 points over a range derived from the oversmoothing bandwidth.
...: other graphics parameters, see par in package "graphics".
Plots of the 1-d complete cross-validation function are sent to the graphics window.
kernel: name of kernel to use.
deriv.order: the derivative order to use.
seq.bws: the sequence of bandwidths.
ccv: the values of the complete cross-validation function over the bandwidth grid.
Arsalane Chouaib Guidoum [email protected]
oldpar <- par(no.readonly = TRUE)
par(mfrow = c(2, 1))
plot(h.ccv(trimodal, deriv.order = 0), main = "")
plot(h.ccv(trimodal, deriv.order = 1), main = "")
par(oldpar)
The plot.h.mcv
function loops through calls to
the h.mcv
function. Plot for modified cross-validation
function for 1-dimensional data.
## S3 method for class 'h.mcv'
plot(x, seq.bws = NULL, ...)

## S3 method for class 'h.mcv'
lines(x, seq.bws = NULL, ...)
x: object of class h.mcv (output from h.mcv).
seq.bws: the sequence of bandwidths over which to compute the modified cross-validation function. By default, the procedure defines a sequence of 50 points over a range derived from the oversmoothing bandwidth.
...: other graphics parameters, see par in package "graphics".
Plots of the 1-d modified cross-validation function are sent to the graphics window.
kernel: name of kernel to use.
deriv.order: the derivative order to use.
seq.bws: the sequence of bandwidths.
mcv: the values of the modified cross-validation function over the bandwidth grid.
Arsalane Chouaib Guidoum [email protected]
oldpar <- par(no.readonly = TRUE)
par(mfrow = c(2, 1))
plot(h.mcv(trimodal, deriv.order = 0), main = "")
plot(h.mcv(trimodal, deriv.order = 1), main = "")
par(oldpar)
The plot.h.mlcv
function loops through calls to
the h.mlcv
function. Plot for maximum-likelihood
cross-validation function for 1-dimensional data.
## S3 method for class 'h.mlcv'
plot(x, seq.bws = NULL, ...)

## S3 method for class 'h.mlcv'
lines(x, seq.bws = NULL, ...)
x: object of class h.mlcv (output from h.mlcv).
seq.bws: the sequence of bandwidths over which to compute the maximum-likelihood cross-validation function. By default, the procedure defines a sequence of 50 points over a default bandwidth range.
...: other graphics parameters, see par in package "graphics".
Plots of the 1-d maximum-likelihood cross-validation function are sent to the graphics window.
kernel: name of kernel to use.
seq.bws: the sequence of bandwidths.
mlcv: the values of the maximum-likelihood cross-validation function over the bandwidth grid.
Arsalane Chouaib Guidoum [email protected]
plot(h.mlcv(bimodal))
The plot.h.tcv
function loops through calls to
the h.tcv
function. Plot for trimmed cross-validation
function for 1-dimensional data.
## S3 method for class 'h.tcv'
plot(x, seq.bws = NULL, ...)

## S3 method for class 'h.tcv'
lines(x, seq.bws = NULL, ...)
x: object of class h.tcv (output from h.tcv).
seq.bws: the sequence of bandwidths over which to compute the trimmed cross-validation function. By default, the procedure defines a sequence of 50 points over a range derived from the oversmoothing bandwidth.
...: other graphics parameters, see par in package "graphics".
Plots of the 1-d trimmed cross-validation function are sent to the graphics window.
kernel: name of kernel to use.
deriv.order: the derivative order to use.
seq.bws: the sequence of bandwidths.
tcv: the values of the trimmed cross-validation function over the bandwidth grid.
Arsalane Chouaib Guidoum [email protected]
oldpar <- par(no.readonly = TRUE)
par(mfrow = c(2, 1))
plot(h.tcv(trimodal, deriv.order = 0), main = "")
plot(h.tcv(trimodal, deriv.order = 1), seq.bws = seq(0.1, 0.5, length.out = 50),
     main = "")
par(oldpar)
The plot.h.ucv
function loops through calls to
the h.ucv
function. Plot for unbiased cross-validation
function for 1-dimensional data.
## S3 method for class 'h.ucv'
plot(x, seq.bws = NULL, ...)

## S3 method for class 'h.ucv'
lines(x, seq.bws = NULL, ...)
x: object of class h.ucv (output from h.ucv).
seq.bws: the sequence of bandwidths over which to compute the unbiased cross-validation function. By default, the procedure defines a sequence of 50 points over a range derived from the oversmoothing bandwidth.
...: other graphics parameters, see par in package "graphics".
Plots of the 1-d unbiased cross-validation function are sent to the graphics window.
kernel: name of kernel to use.
deriv.order: the derivative order to use.
seq.bws: the sequence of bandwidths.
ucv: the values of the unbiased cross-validation function over the bandwidth grid.
Arsalane Chouaib Guidoum [email protected]
oldpar <- par(no.readonly = TRUE)
par(mfrow = c(2, 1))
plot(h.ucv(trimodal, deriv.order = 0), seq.bws = seq(0.06, 0.2, length = 50))
plot(h.ucv(trimodal, deriv.order = 1), seq.bws = seq(0.06, 0.2, length = 50))
par(oldpar)
The plot.kernel.conv
function loops through calls to
the kernel.conv
function. Plot for convolutions of
r'th derivative kernel function one-dimensional.
## S3 method for class 'kernel.conv'
plot(x, ...)
x: object of class kernel.conv (output from kernel.conv).
...: other graphics parameters, see par in package "graphics".
Plots of the convolution of the r'th derivative kernel function (1-d) are sent to the graphics window.
Arsalane Chouaib Guidoum [email protected]
## Gaussian kernel
oldpar <- par(no.readonly = TRUE)
dev.new()
par(mfrow = c(2, 2))
plot(kernel.conv(kernel = "gaussian", deriv.order = 0))
plot(kernel.conv(kernel = "gaussian", deriv.order = 1))
plot(kernel.conv(kernel = "gaussian", deriv.order = 2))
plot(kernel.conv(kernel = "gaussian", deriv.order = 3))

## Silverman kernel
dev.new()
par(mfrow = c(2, 2))
plot(kernel.conv(kernel = "silverman", deriv.order = 0))
plot(kernel.conv(kernel = "silverman", deriv.order = 1))
plot(kernel.conv(kernel = "silverman", deriv.order = 2))
plot(kernel.conv(kernel = "silverman", deriv.order = 3))
par(oldpar)
The plot.kernel.fun
function loops through calls to
the kernel.fun
function. Plot for r'th derivative
kernel function one-dimensional.
## S3 method for class 'kernel.fun'
plot(x, ...)
x: object of class kernel.fun (output from kernel.fun).
...: other graphics parameters, see par in package "graphics".
Plots of the r'th derivative kernel function (1-d) are sent to the graphics window.
Arsalane Chouaib Guidoum [email protected]
## Gaussian kernel
oldpar <- par(no.readonly = TRUE)
dev.new()
par(mfrow = c(2, 2))
plot(kernel.fun(kernel = "gaussian", deriv.order = 0))
plot(kernel.fun(kernel = "gaussian", deriv.order = 1))
plot(kernel.fun(kernel = "gaussian", deriv.order = 2))
plot(kernel.fun(kernel = "gaussian", deriv.order = 3))

## Silverman kernel
dev.new()
par(mfrow = c(2, 2))
plot(kernel.fun(kernel = "silverman", deriv.order = 0))
plot(kernel.fun(kernel = "silverman", deriv.order = 1))
plot(kernel.fun(kernel = "silverman", deriv.order = 2))
plot(kernel.fun(kernel = "silverman", deriv.order = 3))
par(oldpar)