Binary files /tmp/tmppQ8pTw/vKQTrf71tA/r-cran-bayesm-3.1-1/build/vignette.rds and /tmp/tmppQ8pTw/tOaFVuk0PH/r-cran-bayesm-3.1-3+dfsg/build/vignette.rds differ
diff -Nru r-cran-bayesm-3.1-1/debian/changelog r-cran-bayesm-3.1-3+dfsg/debian/changelog
--- r-cran-bayesm-3.1-1/debian/changelog	2019-01-07 09:52:13.000000000 +0000
+++ r-cran-bayesm-3.1-3+dfsg/debian/changelog	2019-07-30 09:50:58.000000000 +0000
@@ -1,3 +1,22 @@
+r-cran-bayesm (3.1-3+dfsg-1) unstable; urgency=medium
+
+  * Team upload.
+  * New upstream version
+  * debhelper-compat 12
+
+ -- Andreas Tille  Tue, 30 Jul 2019 11:50:58 +0200
+
+r-cran-bayesm (3.1-2+dfsg-1) unstable; urgency=medium
+
+  * Team upload.
+  * New upstream version
+  * debhelper 12
+  * Standards-Version: 4.4.0
+  * Remove autogenerated HTML docs since it is including compressed JS
+  * Test-Depends: r-cran-rmarkdown
+
+ -- Andreas Tille  Wed, 17 Jul 2019 22:53:13 +0200
+
 r-cran-bayesm (3.1-1-1) unstable; urgency=medium
 
   * Team upload.
diff -Nru r-cran-bayesm-3.1-1/debian/compat r-cran-bayesm-3.1-3+dfsg/debian/compat
--- r-cran-bayesm-3.1-1/debian/compat	2019-01-07 09:52:13.000000000 +0000
+++ r-cran-bayesm-3.1-3+dfsg/debian/compat	1970-01-01 00:00:00.000000000 +0000
@@ -1 +0,0 @@
-11
diff -Nru r-cran-bayesm-3.1-1/debian/control r-cran-bayesm-3.1-3+dfsg/debian/control
--- r-cran-bayesm-3.1-1/debian/control	2019-01-07 09:52:13.000000000 +0000
+++ r-cran-bayesm-3.1-3+dfsg/debian/control	2019-07-30 09:50:58.000000000 +0000
@@ -3,12 +3,12 @@
 Uploaders: Chris Lawrence
 Section: gnu-r
 Priority: optional
-Build-Depends: debhelper (>= 11~),
+Build-Depends: debhelper-compat (= 12),
               dh-r,
               r-base-dev,
               r-cran-rcpp (>= 0.12.0),
               r-cran-rcpparmadillo
-Standards-Version: 4.3.0
+Standards-Version: 4.4.0
 Vcs-Browser: https://salsa.debian.org/r-pkg-team/r-cran-bayesm
 Vcs-Git: https://salsa.debian.org/r-pkg-team/r-cran-bayesm.git
 Homepage: https://cran.r-project.org/package=bayesm
diff -Nru r-cran-bayesm-3.1-1/debian/copyright
r-cran-bayesm-3.1-3+dfsg/debian/copyright
--- r-cran-bayesm-3.1-1/debian/copyright	2019-01-07 09:52:13.000000000 +0000
+++ r-cran-bayesm-3.1-3+dfsg/debian/copyright	2019-07-30 09:50:58.000000000 +0000
@@ -2,6 +2,7 @@
 Upstream-Name: bayesm
 Upstream-Contact: Peter Rossi
 Source: https://cran.r-project.org/web/packages/bayesm/index.html
+Files-Excluded: */inst/doc/*.html
 
 Files: *
 Copyright: 2003-2017 Peter Rossi
diff -Nru r-cran-bayesm-3.1-1/debian/tests/control r-cran-bayesm-3.1-3+dfsg/debian/tests/control
--- r-cran-bayesm-3.1-1/debian/tests/control	2019-01-07 09:52:13.000000000 +0000
+++ r-cran-bayesm-3.1-3+dfsg/debian/tests/control	2019-07-30 09:50:58.000000000 +0000
@@ -1,5 +1,5 @@
 Tests: vignette
-Depends: @, r-cran-knitr
+Depends: @, r-cran-knitr, r-cran-rmarkdown
 Restrictions: allow-stderr
diff -Nru r-cran-bayesm-3.1-1/debian/watch r-cran-bayesm-3.1-3+dfsg/debian/watch
--- r-cran-bayesm-3.1-1/debian/watch	2019-01-07 09:52:13.000000000 +0000
+++ r-cran-bayesm-3.1-3+dfsg/debian/watch	2019-07-30 09:50:58.000000000 +0000
@@ -1,2 +1,3 @@
 version=4
+opts="repacksuffix=+dfsg,dversionmangle=s/\+dfsg[0-9]*//g,repack,compression=xz" \
 https://cran.r-project.org/src/contrib/bayesm_@ANY_VERSION@@ARCHIVE_EXT@
diff -Nru r-cran-bayesm-3.1-1/DESCRIPTION r-cran-bayesm-3.1-3+dfsg/DESCRIPTION
--- r-cran-bayesm-3.1-1/DESCRIPTION	2018-12-21 07:20:49.000000000 +0000
+++ r-cran-bayesm-3.1-3+dfsg/DESCRIPTION	2019-07-29 22:50:02.000000000 +0000
@@ -1,9 +1,9 @@
 Package: bayesm
-Version: 3.1-1
+Version: 3.1-3
 Type: Package
 Title: Bayesian Inference for Marketing/Micro-Econometrics
 Depends: R (>= 3.2.0)
-Date: 2018-12-20
+Date: 2019-07-29
 Author: Peter Rossi
 Maintainer: Peter Rossi
 License: GPL (>= 2)
@@ -40,6 +40,6 @@
 Methods and Applications (Princeton U Press 2014).
 RoxygenNote: 6.0.1
 NeedsCompilation: yes
-Packaged: 2018-12-20 19:37:45 UTC; perossichi
+Packaged: 2019-07-29 21:26:14 UTC; perossichi
 Repository: CRAN
-Date/Publication: 2018-12-21 07:20:49 UTC
+Date/Publication: 2019-07-29 22:50:02 UTC
diff -Nru r-cran-bayesm-3.1-1/inst/doc/bayesm_Overview_Vignette.html r-cran-bayesm-3.1-3+dfsg/inst/doc/bayesm_Overview_Vignette.html
--- r-cran-bayesm-3.1-1/inst/doc/bayesm_Overview_Vignette.html	2018-12-20 19:37:45.000000000 +0000
+++ r-cran-bayesm-3.1-3+dfsg/inst/doc/bayesm_Overview_Vignette.html	1970-01-01 00:00:00.000000000 +0000
@@ -1,1103 +0,0 @@
[removed HTML vignette “bayesm Overview”; the rendered text of the deleted file follows]

1 Introduction

bayesm is an R package that facilitates statistical analysis using Bayesian methods. The package provides a set of functions for commonly used models in applied microeconomics and quantitative marketing.

The goal of this vignette is to make it easier for users to adopt bayesm by providing a comprehensive overview of the package’s contents and detailed examples of certain functions. We begin with the overview, followed by a discussion of how to work with bayesm. The discussion covers the structure of function arguments, the required input data formats, and the various output formats. The final section provides detailed examples to demonstrate Bayesian inference with the linear normal, multinomial logit, and hierarchical multinomial logit regression models.

2 Package Contents

For ease of exposition, we have grouped the package contents into:

  • Posterior sampling functions
  • Utility functions
  • S3 methods
  • Datasets

Because the first two groups contain many functions, we organize them into subgroups by purpose. Below, we display each group of functions in a table with one column per subgroup.

Posterior Sampling Functions

Linear Models   Limited Dependent Variable Models   Hierarchical Models   Density Estimation
runireg*        rbprobitGibbs**                     rhierLinearModel      rnmixGibbs*
runiregGibbs    rmnpGibbs                           rhierLinearMixture    rDPGibbs
rsurGibbs*      rmvpGibbs                           rhierMnlRwMixture*
rivGibbs        rmnlIndepMetrop                     rhierMnlDP
rivDP           rscaleUsage                         rbayesBLP
                rnegbinRw                           rhierNegbinRw
                rordprobitGibbs

*bayesm offers the utility function breg with a related but more limited set of capabilities than runireg; similarly, rmultireg for rsurGibbs, rmixGibbs for rnmixGibbs, and rhierBinLogit for rhierMnlRwMixture.

**rbiNormGibbs provides a tutorial-like example of Gibbs Sampling from a bivariate normal distribution.

Utility Functions

Log-Likelihood   Log Density    Random Draws   Mixture-of-Normals   Miscellaneous
(data vector)    (univariate)
llmnl            lndIChisq      rdirichlet     clusterMix           cgetC
llmnp            lndIWishart    rmixture       eMixMargDen          condMom
llnhlogit        lndMvn         rmvst          mixDen               createX
                 lndMvst        rtrun          mixDenBig            hkvec
                                rwishart       momMix               logMargDenNR
                                                                    mnlHess
                                                                    mnpProb
                                                                    nmat
                                                                    numEff
                                                                    simnhlogit

S3 Methods

Plot Methods        Summary Methods
plot.bayesm.mat     summary.bayesm.mat
plot.bayesm.nmix    summary.bayesm.nmix
plot.bayesm.hcoef   summary.bayesm.var

Datasets

bank     customerSat   orangeJuice
camera   detailing     Scotch
cheese   margarine     tuna

Some discussion of the naming conventions may be warranted. All functions use CamelCase but begin lowercase. Posterior sampling functions begin with r to match R’s style of naming random number generation functions, since these functions all draw from (posterior) distributions. Common abbreviations include DP for Dirichlet Process, IV for instrumental variables, MNL and MNP for multinomial logit and probit, SUR for seemingly unrelated regression, and hier for hierarchical. Utility functions that begin with ll calculate the log-likelihood of a data vector, while those that begin with lnd provide the log-density. Other abbreviations should be straightforward; please see the help file for a specific function if its name is unclear.

3 Working with bayesm

We expect most users of the package to interact primarily with the posterior sampling functions. These functions take a consistent set of arguments as inputs (Data, Prior, and Mcmc — each is a list) and they return output in a consistent format (always a list). summary and plot generic functions can then be used to facilitate analysis of the output because of the classes and methods defined in bayesm. The following subsections describe the format of the standardized function arguments as well as the required format of the data inputs and the format of the output from bayesm’s posterior sampling functions.

3.1 Input: Function Arguments

The posterior sampling functions take three arguments: Data, Prior, and Mcmc. Each argument is a list.


As a minimal example, assume you’d like to perform a linear regression and that you have in your workspace y (a vector of length \(n\)) and X (a matrix of dimension \(n \times p\)). For this example we utilize the default values for Prior, so we do not specify the Prior argument. These components (Data, Prior, and Mcmc, as well as their elements, including R and nprint) are discussed in the subsections that follow. The bayesm syntax is simply:

mydata <- list(y = y, X = X)
mymcmc <- list(R = 1e6, nprint = 0)

out <- runireg(Data = mydata, Mcmc = mymcmc)

The list elements of Data, Prior, and Mcmc must be named. For example, you could not use the following code because the Data argument mydata2 has unnamed elements.

mydata2 <- list(y, X)
out <- runireg(Data = mydata2, Mcmc = mymcmc)

3.1.1 Data Argument


bayesm’s posterior sampling functions do not accept data stored in data frames; data must be stored as vectors or matrices.


For non-hierarchical models, the list of input data simply contains the appropriate data vectors or matrices from the set {y, X, w, z} and possibly a scalar (length-one vector) from the set {k, p}.

  • For functions that require only a single data argument (such as the two density estimation functions, rnmixGibbs and rDPGibbs), y is that argument. More typically, y is used for LHS variable(s) and X for RHS variables. For estimation using instrumental variables, variables in the structural equation are separated into “exogenous” variables w and an “endogenous” variable x; z is a matrix of instruments.

  • For the scalars, p indicates the number of choice alternatives in discrete choice models and k is used as the maximum ordinate in models for ordinal data (e.g., rordprobitGibbs).


For hierarchical models, the input data has up to three components. We’ll discuss these components using the mixed logit model (rhierMnlRwMixture) as an example. For rhierMnlRwMixture, the Data argument is list(lgtdata, Z, p).

  1. The first component, required for all hierarchical models, is a list of lists named either regdata or lgtdata, depending on the function. In rhierMnlRwMixture, lgtdata is a list of lists, with each interior list containing a vector or one-column matrix of multinomial outcomes y and a design matrix of covariates X. In Example 3 below, we show how to convert data from a data frame to this required list-of-lists format.

  2. The second component, Z, is present but optional for all hierarchical models. Z is a matrix of cross-sectional unit characteristics that drive the mean responses; that is, a matrix of covariates for the individual parameters (e.g., the \(\beta_i\)’s). For example, the model (omitting the priors) for rhierMnlRwMixture is:

     \[ y_i \sim \text{MNL}(X_i'\beta_i) \hspace{1em} \text{with} \hspace{1em} \beta_i = \Delta' z_i + u_i \hspace{1em} \text{and} \hspace{1em} u_i \sim N(\mu_j, \Sigma_j) \]

     where \(i\) indexes individuals, \(z_i\) is the \(i\)th row of \(Z\), and \(j\) indexes the components of the normal mixture.

  3. The third component (if accepted) is a scalar, such as p or k, and, like the arguments by the same names in the non-hierarchical models, is used to indicate the size of the choice set or the maximum value of a scale or count variable. In rhierMnlRwMixture, the argument is p, which indicates the number of choice alternatives.


Note that rbayesBLP (the hierarchical logit model with aggregate data, as in Berry, Levinsohn, and Pakes (1995) and Jiang, Manchanda, and Rossi (2009)) deviates slightly from the standard data input. rbayesBLP uses j instead of p to be consistent with the literature, and calls the LHS variable share rather than y to emphasize that aggregate data (market shares instead of individual choices) are required.
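To make the list-of-lists format concrete, here is a minimal sketch that builds an lgtdata object from simulated data. The dimensions, names, and the commented call are illustrative assumptions, not taken from the vignette:

```r
# Sketch: build the list-of-lists that rhierMnlRwMixture expects,
# using simulated data (all names and sizes here are hypothetical).
set.seed(1)
nunits <- 5    # cross-sectional units (e.g., respondents)
nobs   <- 10   # choice occasions per unit
p      <- 3    # choice alternatives

lgtdata <- vector("list", nunits)
for (i in 1:nunits) {
  # stacked design matrix: p rows per choice occasion, 2 covariates
  X <- matrix(rnorm(nobs * p * 2), nrow = nobs * p, ncol = 2)
  # multinomial outcomes coded 1..p
  y <- sample(1:p, nobs, replace = TRUE)
  lgtdata[[i]] <- list(y = y, X = X)
}

# Data argument as it would be passed to rhierMnlRwMixture (not run here):
# out <- rhierMnlRwMixture(Data = list(lgtdata = lgtdata, p = p),
#                          Prior = list(ncomp = 1), Mcmc = list(R = 1000))
```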

3.1.2 Prior Argument


Specification of prior distributions is model-specific, so our discussion here is brief.


All posterior sampling functions offer default values for the parameters of prior distributions. These defaults were selected to yield proper yet reasonably diffuse prior distributions (assuming the data are of approximately unit scale), and they are consistent across functions. For example, normal priors have a default mean of 0 and a default value of 0.01 for the scaling factor of the prior precision matrix.


Specification of the prior is important. Significantly more detail can be found in chapters 2–5 of BSM and in the help files for the posterior sampling functions; we strongly recommend you consult them before accepting the default values.


3.1.3 Mcmc Argument


The Mcmc argument controls parameters of the sampling algorithm. The most common components of this list include:

  • R: the number of MCMC draws
  • keep: a thinning parameter indicating that every keep\(^\text{th}\) draw should be retained
  • nprint: an option to print the estimated time remaining to the console after every nprint\(^\text{th}\) draw


MCMC methods do not produce iid draws; successive draws are autocorrelated, so a large simulation size may be required to get reliable results. We recommend setting R large enough to yield an adequate effective sample size and letting keep default to 1 unless doing so imposes memory constraints. A careful reader of the bayesm help files will notice that many of the examples set R equal to 1000 or less. This low number of draws was chosen for speed, as the examples are meant to demonstrate how to run the code and do not necessarily suggest best practices for a proper statistical analysis.


nprint defaults to 100, but for large R you may want to increase it. Alternatively, you can set nprint = 0, in which case the prior parameters are still printed to the console but the estimated time remaining is not.


Additional components of the Mcmc argument are function-specific, but typically include starting values for the algorithm. For example, the Mcmc argument for runiregGibbs takes sigmasq as a scalar element of the list. The Gibbs Sampler for runiregGibbs first draws \(\beta | \sigma^2\), then draws \(\sigma^2 | \beta\), and then repeats. For the first draw of \(\beta\) in the MCMC chain, a value of \(\sigma^2\) is required. The user can specify a value using Mcmc$sigmasq, or the user can omit the argument and the function will use its default (sigmasq = var(Data$y)).
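Putting the pieces above together, a fully specified Mcmc argument might look like the following (all values are illustrative):

```r
# Illustrative Mcmc argument: 100,000 draws, keep every 10th draw,
# print progress every 1,000th draw, and supply an explicit starting
# value for sigma-squared (overriding the var(Data$y) default).
mymcmc <- list(R = 1e5, keep = 10, nprint = 1000, sigmasq = 0.5)

# Number of draws actually retained in the output:
retained <- mymcmc$R / mymcmc$keep
```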


3.2 Output: Returned Results


bayesm posterior sampling functions return a consistent set of results and output to the user. At a minimum, this output includes draws from the posterior distribution. bayesm provides a set of summary and plot methods to facilitate analysis of this output, but the user is free to analyze the results as he or she sees fit.


3.2.1 Output Formats


All bayesm posterior sampling functions return a list. The elements of that list include a set of vectors, matrices, and/or arrays (and possibly a list), the exact set of which depends on the function.

  • Vectors are returned for draws of parameters with no natural grouping. For example, runireg implements an iid sampler to draw from the posterior of a homoskedastic univariate regression with a conjugate prior (i.e., a Bayesian analog to OLS regression). One output list element is sigmasqdraw, a length R/keep vector for the scalar parameter \(\sigma^2\).

  • Matrices are returned for draws of parameters with a natural grouping. Again using runireg as the example, the output list includes betadraw, an R/keep \(\times\) ncol(X) matrix for the vector of \(\beta\) parameters.


    Contrary to the next bullet, draws for the parameters in a variance-covariance matrix are returned in matrix form. For example, rmnpGibbs implements a Gibbs Sampler for a multinomial probit model where one set of parameters is the \((p-1) \times (p-1)\) matrix \(\Sigma\). The output list for rmnpGibbs includes the list element sigmadraw, which is a matrix of dimension R/keep\(\times (p-1)*(p-1)\) with each row containing a draw (in vector form) for all the elements of the matrix \(\Sigma\). bayesm’s summary and plot methods (see below) are designed to handle this format.

  • Arrays are used when parameters have a natural matrix-grouping, such that the MCMC algorithm returns R/keep draws of the matrix. For example, rsurGibbs returns a list that includes Sigmadraw, an \(m \times m \times\)R/keep array, where \(m\) is the number of regression equations. As a second example, rhierLinearModel estimates a hierarchical linear regression model with a normal prior, and returns a list that includes betadraw, an \(n \times k \times\)R/keep array, where \(n\) signifies the number of individuals (each with their own \(\beta_i\)) and \(k\) signifies the number of covariates (ncol(X) = \(k\)).

  • For functions that use a mixture-of-normals or Dirichlet Process prior, the output list includes a list (nmix) pertaining to that prior. nmix is itself a list with 3 components: probdraw, zdraw, and compdraw. probdraw reports the probability that each draw came from a particular normal component; zdraw indicates which mixture-of-normals component each draw is assigned to; and compdraw provides draws for the mixture-of-normals components (i.e., mean vectors and Cholesky roots of covariance matrices). Note that if you specify a “mixture” with only one normal component, there will be no useful information in probdraw. Also note that zdraw is not relevant for density estimation and will be null except in rnmixGibbs and rDPGibbs.

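As a sketch of working with the matrix format described above, the snippet below reshapes one row of a sigmadraw-style matrix back into the \((p-1) \times (p-1)\) matrix \(\Sigma\). The sims object here is a simulated stand-in, not real bayesm output:

```r
# Sketch: recover a covariance-matrix draw from one row of a
# sigmadraw-style output matrix. 'sims' is simulated stand-in data.
p <- 3                                         # choice alternatives, so Sigma is 2 x 2
R <- 100                                       # retained draws
sims <- matrix(rnorm(R * (p - 1)^2), nrow = R) # each row = one vectorized Sigma draw

# Draw 100, reshaped back into matrix form:
Sigma_100 <- matrix(sims[100, ], nrow = p - 1, ncol = p - 1)

# Element-by-element posterior mean of Sigma:
Sigma_mean <- matrix(colMeans(sims), nrow = p - 1, ncol = p - 1)
```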

3.2.2 Classes and Methods


In R generally, objects can be assigned a class and then a generic function can be used to run a method on an object with that class. The list elements in the output from bayesm posterior sampling functions are assigned special bayesm classes. The bayesm package includes summary and plot methods for use with these classes (see the table in Section 2 above). This means you can call the generic function (e.g., summary) on individual list elements of bayesm output and it will return specially-formatted summary results, including the effective sample size.


To see this, the code below provides an example using runireg. Here the generic function summary dispatches the method summary.bayesm.mat because the betadraw element of runireg’s output has class bayesm.mat. This example also shows the information about the prior that is printed to the console during the call to a posterior sampling function. Notice, however, that no remaining time is printed because nprint is set to zero.

set.seed(66)
R <- 2000
n <- 200
X <- cbind(rep(1,n), runif(n))
beta <- c(1,2)
sigsq <- 0.25
y <- X %*% beta + rnorm(n, sd = sqrt(sigsq))
out <- runireg(Data = list(y = y, X = X), Mcmc = list(R = R, nprint = 0))

##  
## Starting IID Sampler for Univariate Regression Model
##   with  200  observations
##  
## Prior Parms: 
## betabar
## [1] 0 0
## A
##      [,1] [,2]
## [1,] 0.01 0.00
## [2,] 0.00 0.01
## nu =  3  ssq=  0.5721252
##  
## MCMC parms: 
## R=  2000  keep=  1  nprint=  0
## 

summary(out$betadraw, tvalues = beta)

## Summary of Posterior Marginal Distributions 
## Moments 
##   tvalues mean std dev num se rel eff sam size
## 1       1  1.0    0.07 0.0015    0.85     1800
## 2       2  2.1    0.12 0.0029    1.01      900
## 
## Quantiles 
##   tvalues 2.5%  5% 50% 95% 97.5%
## 1       1 0.88 0.9 1.0 1.1   1.2
## 2       2 1.83 1.9 2.1 2.3   2.3
##    based on 1800 valid draws (burn-in=200)

3.3 Access to Code


bayesm was originally created as a companion to BSM, at which time most functions were written in R. The package has since been expanded to include additional functionality, and most code has been converted to C++ via Rcpp for faster performance. Users interested in the original R implementation of a posterior sampling function can still access the last version of bayesm prior to the C++/Rcpp conversion (2.2-5) from the package archive on CRAN.


To access the R code in the current version of bayesm, simply call a function without parentheses, for example, bayesm::runireg. However, most posterior sampling functions only perform basic checks in R and then call an unexported C++ function to do the heavy lifting (i.e., the MCMC draws). This C++ source code is not available via the installed bayesm package because C++ code is compiled upon package installation on Linux machines and pre-compiled by CRAN for Mac and Windows. To access the source code, download the “package source” from CRAN, either by clicking the appropriate link at the bayesm package archive or by executing the R command download.packages(pkgs = "bayesm", destdir = ".", type = "source"). Either method provides a compressed file “bayesm_version.tar.gz” that can be uncompressed; the C++ code can then be found in the “src” subdirectory.


4 Examples


We begin with a brief introduction to regression and Bayesian estimation. This will help set the notation and provide background for the examples that follow. We do not claim that this will be a sufficient introduction for the reader to whom these ideas are new; we refer that reader to excellent texts on regression analysis by Cameron & Trivedi, Davidson & MacKinnon, Goldberger, Greene, Wasserman, and Wooldridge. For Bayesian methods, we recommend Gelman et al., Jackman, Marin & Robert, Rossi et al., and Zellner.


4.1 What is Regression


Suppose you believe a variable \(y\) varies with (or is caused by) a set of variables \(x_1, x_2, \ldots, x_k\). For notational convenience, we’ll collect the set of \(x\) variables into \(X\). These variables \(y\) and \(X\) have a joint distribution \(f(y, X)\). Typically, interest will not fall on this joint distribution, but rather on the conditional distribution of the “outcome” variable \(y\) given the “explanatory” variables (or “covariates”) \(x_1, x_2, \ldots, x_k\); this conditional distribution being \(f(y|X)\).


To carry out inference on the relationship between \(y\) and \(X\), the researcher then often focuses attention on one aspect of the conditional distribution, most commonly its expected value. This conditional mean is assumed to be a function \(g\) of the covariates such that \(\mathbb{E}[y|X] = g(X, \beta)\) where \(\beta\) is a vector of parameters. A function for the conditional mean is known as a “regression” function.


The canonical introductory regression model is the normal linear regression model, which assumes that \(y \sim N(X\beta, \sigma^2)\). Most students of regression will have first encountered this model as a combination of deterministic and stochastic components. There, the stochastic component is defined as deviations from the conditional mean, \(\varepsilon = y - \mathbb{E}[y|X]\), such that \(y = \mathbb{E}[y|X] + \varepsilon\) or that \(y = g(X, \beta) + \varepsilon\). The model is then augmented with the assumptions that \(g(X, \beta) = X \beta\) and \(\varepsilon \sim N(0,\sigma^2)\) so that the normal linear regression model is:


\[ y = X \beta + \varepsilon \text{ with } \varepsilon \sim N(0,\sigma^2) \hspace{1em} \text{or} \hspace{1em} y \sim N(X\beta, \sigma^2) \]


When taken to data, additional assumptions are made which include a full-rank condition on \(X\) and often that \(\varepsilon_i\) for \(i=1,\ldots,n\) are independent and identically distributed.
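The normal linear regression model above is easy to simulate from. As a quick sanity check (our own illustration, not part of the vignette's examples), the classical OLS estimate recovers the coefficients used in the simulation:

```r
# Simulate from y = X beta + eps, eps ~ iid N(0, sigma^2), then fit by OLS.
set.seed(7)
n <- 1000
X <- cbind(1, runif(n))     # intercept plus one covariate
beta <- c(1, 2)
sigma <- 0.5
y <- X %*% beta + rnorm(n, sd = sigma)

fit <- lm(y ~ X - 1)        # '-1' because X already contains the intercept
coef(fit)                   # close to c(1, 2)
```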


Our first example will demonstrate how to estimate the parameters of the normal linear regression model using Bayesian methods made available by the posterior sampling function runireg. We then provide an example to estimate the parameters of a model when \(y\) is a categorical variable. This second example is called a multinomial logit model and uses the logistic “link” function \(g(X, \beta) = [1 + \exp(-X\beta)]^{-1}\). Our third and final example will extend the multinomial logit model to permit individual-level parameters. This is known as a hierarchical model and requires panel data to perform the estimation.


Before launching into the examples, we briefly introduce Bayesian methodology and contrast it with classical methods.


4.2 What is Bayesian Inference


Under classical econometric methods, \(\beta\) is most commonly estimated by minimizing the sum of squared residuals, maximizing the likelihood, or matching sample moments to population moments. The distribution of the estimators (e.g., \(\hat{\beta}\)) and test statistics derived from these methods rely on asymptotic concepts and are based on imaginary samples not observed.


In contrast, Bayesian inference provides the benefits of (a) exact sample results, (b) integration of decision-making, estimation, testing, and model selection, and (c) a full accounting of uncertainty. These benefits of Bayesian inference rely heavily on probability theory and, in particular, distributional theory, some elements of which we now briefly review.


Recall the relationship between the joint and conditional densities for random variables \(A\) and \(B\):


\[ P_{A|B}(A=a|B=b) = \frac{P_{A,B}(A=a, B=b)}{P_B(B=b)} \]


This relationship can be used to derive Bayes’ Theorem, which we write with \(D\) for “data” and \(\theta\) as the parameters (and with implied subscripts):


\[ P(\theta|D) = \frac{P(D|\theta)P(\theta)}{P(D)} \]


Noticing that \(P(D)\) does not contain the parameters of interest (\(\theta\)) and is therefore simply a normalizing constant, we can instead write:


\[ P(\theta|D) \propto P(D|\theta)P(\theta) \]


Introducing Bayesian terminology, we have that the “Posterior” is proportional to the Likelihood times the Prior.


Thus, given (1) a dataset (\(D\)), (2) an assumption on the data generating process (the likelihood, \(P(D|\theta)\)), and (3) a specification of the prior distribution of the parameters (\(P(\theta)\)), we can find the exact (posterior) distribution of the parameters given the observed data. This is in stark contrast to classical econometric methods, which typically only provide the asymptotic distributions of estimators.
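The proportionality \(P(\theta|D) \propto P(D|\theta)P(\theta)\) can be seen in a toy grid computation (our own illustration, not from the vignette): a Bernoulli success probability \(\theta\) with a flat prior:

```r
# Toy illustration of posterior = likelihood x prior (normalized):
# 7 successes in 10 Bernoulli trials, flat prior on a grid for theta.
theta <- seq(0.01, 0.99, by = 0.01)
prior <- rep(1 / length(theta), length(theta))   # flat prior on the grid
lik   <- theta^7 * (1 - theta)^3                 # Bernoulli likelihood
post  <- lik * prior / sum(lik * prior)          # normalize over the grid

theta[which.max(post)]   # posterior mode, near the MLE 7/10
```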


However, for any problem of practical interest, the posterior distribution is a high-dimensional object. Additionally, it may not be possible to analytically calculate the posterior or its features (e.g., marginal distributions or moments such as the mean). To handle these issues, the modern approach to Bayesian inference relies on simulation methods to sample from the (high-dimensional) posterior distribution and then construct marginal distributions (or their features) from the sampled draws of the posterior. As a result, simulation and summaries of the posterior play important roles in modern Bayesian statistics.


bayesm’s posterior sampling functions (as their name suggests) sample from posterior distributions. bayesm’s summary and plot methods can be used to analyze those draws. Unlike most classical econometric methods, the MCMC methods implemented in bayesm’s posterior sampling functions provide an estimate of the entire posterior distribution, not just a few moments. Given this “rich” result from Bayesian methods, it is best to summarize posterior distributions using histograms or quantiles. We advise that you resist the temptation to simply report the posterior mean and standard deviation; for non-normal distributions, those moments may have little meaning.


In the examples that follow, we will describe the data we use, present the model, demonstrate how to estimate it using the appropriate posterior sampling function, and provide various ways to summarize the output.


4.3 Example 1: Linear Normal Regression


4.3.1 Data


For our first example, we will use the cheese dataset, which provides 5,555 observations of weekly sales volume for a package of Borden sliced cheese, as well as a measure of promotional display activity and price. The data are aggregated to the “key” account (i.e., retailer-market) level.

data(cheese)
names(cheese) <- tolower(names(cheese))
str(cheese)

## 'data.frame':    5555 obs. of  4 variables:
##  $ retailer: Factor w/ 88 levels "ALBANY,NY - PRICE CHOPPER",..: 42 43 44 19 20 21 35 36 64 31 ...
##  $ volume  : int  21374 6427 17302 13561 42774 4498 6834 3764 5112 6676 ...
##  $ disp    : num  0.162 0.1241 0.102 0.0276 0.0906 ...
##  $ price   : num  2.58 3.73 2.71 2.65 1.99 ...

Suppose we want to assess the relationship between sales volume and price and promotional display activity. For this example, we will abstract from whether these relationships vary by retailer or whether prices are set endogenously. Simple statistics show a negative correlation between volume and price, and a positive correlation between volume and promotional activity, as we would expect.

options(digits=3)
cor(cheese$volume, cheese$price)

## [1] -0.227

cor(cheese$volume, cheese$disp)

## [1] 0.173

4.3.2 Model


We model the expected log sales volume as a linear function of log(price) and promotional activity. Specifically, we assume \(y_i\) to be iid with \(p(y_i|x_i,\beta)\) normally distributed with a mean linear in \(x\) and a variance of \(\sigma^2\). We will denote observations with the index \(i = 1, \ldots, n\) and covariates with the index \(j = 1, \ldots, k\). The model can be written as:


\[ y_i = \sum_{j=1}^k \beta_j x_{ij} + \varepsilon_i = x_i'\beta + \varepsilon_i \hspace{1em} \text{with} \hspace{1em} \varepsilon_i \sim iid\ N(0,\sigma^2) \]


or equivalently but more compactly as:


\[ y \sim MVN(X\beta,\ \sigma^2I_n) \]


Here, the notation \(N(0, \sigma^2)\) indicates a univariate normal distribution with mean \(0\) and variance \(\sigma^2\), while \(MVN(X\beta,\ \sigma^2I_n)\) indicates a multivariate normal distribution with mean vector \(X\beta\) and variance-covariance matrix \(\sigma^2I_n\). In addition, \(y_i\), \(x_{ij}\), \(\varepsilon_i\), and \(\sigma^2\) are scalars while \(x_i\) and \(\beta\) are \(k \times 1\) dimensional vectors. In the more compact notation, \(y\) is an \(n \times 1\) dimensional vector, \(X\) is an \(n \times k\) dimensional matrix with row \(x_i\), and \(I_n\) is an \(n \times n\) dimensional identity matrix. With regard to the cheese dataset, \(k = 2\) and \(n = 5,555\).


When employing Bayesian methods, the model is incomplete until the prior is specified. For our example, we elect to use natural conjugate priors, meaning the family of distributions for the prior is chosen such that, when combined with the likelihood, the posterior will be of the same distributional family. Specifically, we first factor the joint prior into marginal and conditional prior distributions:


\[ p(\beta,\sigma^2) = p(\beta|\sigma^2)p(\sigma^2) \]


We then specify the prior for \(\sigma^2\) as inverse-gamma (written in terms of a chi-squared random variable) and the prior for \(\beta|\sigma^2\) as multivariate normal:


\[ \sigma^2 \sim \frac{\nu s^2}{\chi^2_{\nu}} \hspace{1em} \text{and} \hspace{1em} \beta|\sigma^2 \sim MVN(\bar{\beta},\sigma^2A^{-1}) \]


Other than convenience, we have little reason to specify priors from these distributional families; however, we will select diffuse priors so as not to impose restrictions on the model. To do so, we must pick values for \(\nu\) and \(s^2\) (the degrees of freedom and scale parameters for the inverted chi-squared prior on \(\sigma^2\)) as well as \(\bar{\beta}\) and \(A^{-1}\) (the mean vector and variance-covariance matrix for the multivariate normal prior on the \(\beta\) vector). The bayesm posterior sampling function for this model, runireg, defaults to the following values:

  • \(\nu = 3\)
  • \(s^2 =\) var(y)
  • \(\bar{\beta} = 0\)
  • \(A = 0.01*I\)

We will use these defaults, as they are chosen to be diffuse for data with a unit scale. Thus, for each \(\beta_j | \sigma^2\) we have specified a normal prior with mean 0 and variance \(100\sigma^2\), and for \(\sigma^2\) we have specified an inverse-gamma prior with \(\nu = 3\) and \(s^2 = \text{var}(y)\). We graph these prior distributions below.

par(mfrow = c(1,2))

curve(dnorm(x,0,10), xlab = "", ylab = "", xlim = c(-30,30),
      main = expression(paste("Prior for ", beta[j])),
      col = "dodgerblue4")

nu  <- 3
ssq <- var(log(cheese$volume))
curve(nu*ssq/dchisq(x,nu), xlab = "", ylab = "", xlim = c(0,1),
      main = expression(paste("Prior for ", sigma^2)),
      col = "darkred")

par(mfrow = c(1,1))
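As a cross-check on these density curves, the same priors can be visualized by simulation. The sketch below is not part of the vignette's code and uses an illustrative \(s^2 = 1\) in place of var(log(cheese$volume)):

```r
set.seed(66)
nu  <- 3
ssq <- 1      # illustrative s^2; the vignette uses var(log(cheese$volume))
R   <- 1e5

# sigma^2 ~ nu*s^2 / chisq_nu, then beta_j | sigma^2 ~ N(0, sigma^2 / 0.01)
sigmasq <- nu * ssq / rchisq(R, nu)
beta_j  <- rnorm(R, mean = 0, sd = sqrt(sigmasq / 0.01))

quantile(sigmasq, c(0.5, 0.95))   # heavy right tail: a diffuse prior on sigma^2
quantile(beta_j,  c(0.025, 0.975))
```

The wide quantiles confirm that the defaults are diffuse for data on a unit scale.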


4.3.3 Bayesian Estimation


Although this model involves nontrivial natural conjugate priors, the posterior is available in closed form:


\[ p(\beta, \sigma^2 | y, X) \propto (\sigma^2)^{-k/2} \exp \left\{ -\frac{1}{2\sigma^2}(\beta - \bar{\beta})'(X'X+A)(\beta - \bar{\beta}) \right\} \times (\sigma^2)^{-((n+\nu_0)/2+1)} \exp \left\{ -\frac{\nu_0s_0^2 + ns^2}{2\sigma^2} \right\} \]


or

\[\begin{align*}
\beta | \sigma^2, y, X &\sim N(\bar{\beta}, \sigma^2(X'X+A)^{-1}) \\
\sigma^2 | y, X &\sim \frac{\nu_1 s_1^2}{\chi^2_{\nu_1}}, \hspace{3em} \text{with} \hspace{1em} \nu_1 = \nu_0+n \hspace{1em} \text{and} \hspace{1em} s_1^2 = \frac{\nu_0 s_0^2 + n s^2}{\nu_0+n}
\end{align*}\]

The latter representation suggests a simulation strategy for making draws from the posterior. We draw a value of \(\sigma^2\) from its marginal posterior distribution, insert this value into the expression for the covariance matrix of the conditional normal distribution of \(\beta|\{\sigma^2,y\}\) and draw from this multivariate normal. This simulation strategy is implemented by runireg, using the defaults for Prior specified above. The code is quite simple.
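For intuition, the two-step draw can be sketched by hand. This is an illustrative implementation under the conjugate setup above, not runireg's actual internals; in particular, the exact form used for \(s_1^2\) below is one common textbook formulation and is an assumption:

```r
# Hypothetical one-draw sketch of the conjugate sampler (not runireg itself)
draw_conjugate <- function(y, X, betabar, A, nu, ssq) {
  n <- length(y); k <- ncol(X)
  W      <- crossprod(X) + A                          # X'X + A
  btilde <- solve(W, crossprod(X, y) + A %*% betabar) # posterior mean of beta
  nu1    <- nu + n
  s1sq   <- (nu * ssq + sum((y - X %*% btilde)^2)) / nu1
  sigmasq <- nu1 * s1sq / rchisq(1, nu1)              # sigma^2 | y draw
  # beta | sigma^2, y ~ N(btilde, sigmasq * W^{-1}) via the Cholesky root of W
  beta <- drop(btilde) + backsolve(chol(W), rnorm(k)) * sqrt(sigmasq)
  list(beta = beta, sigmasq = sigmasq)
}
```

Repeating `draw_conjugate` R times yields iid draws from the joint posterior, which is exactly why no burn-in would be needed for this model.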

dat <- list(y = log(cheese$volume), X = model.matrix( ~ price + disp, data = cheese))
out <- runireg(Data = dat, Mcmc = list(R=1e4, nprint=1e3))

Note that bayesm posterior sampling functions print out information about the prior and about MCMC progress during the function call (unless nprint is set to 0), but for presentation purposes we suppressed that output here.


runireg returns a list that we have saved in out. The list contains two elements, betadraw and sigmasqdraw, which you can verify by running str(out). betadraw is an R/keep \(\times\) ncol(X) (here \(10,000 \times 3\), with a column for each of the intercept, price, and display) matrix with class bayesm.mat. We can analyze or summarize the marginal posterior distribution of any \(\beta\) parameter or of the \(\sigma^2\) parameter. For example, we can plot histograms of the draws for the price coefficient (even though its marginal posterior is known to follow a t-distribution; see BSM Ch. 2.8) and for \(\sigma^2\). Notice how concentrated the posterior distributions are compared to their priors above.

B <- 1000+1  # burn-in draws to discard
R <- 10000

par(mfrow = c(1,2))
hist(out$betadraw[B:R,2], breaks = 30, 
     main = "Posterior Dist. of Price Coef.", 
     yaxt = "n", yaxs="i",
     xlab = "", ylab = "", 
     col = "dodgerblue4", border = "gray")
hist(out$sigmasqdraw[B:R], breaks = 30, 
     main = "Posterior Dist. of Sigma2", 
     yaxt = "n", yaxs="i",
     xlab = "", ylab = "", 
     col = "darkred", border = "gray")
par(mfrow = c(1,1))


Additionally, we can compute features of these posterior distributions. For example, the posterior means of the price and display coefficients are:

apply(out$betadraw[B:R,2:3], 2, mean)

## [1] -0.393  0.542
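Because the draws represent the joint posterior, event probabilities are simply sample proportions. For instance (a small illustrative addition, assuming out, B, and R as defined above):

```r
# Posterior probability that the price effect is negative
mean(out$betadraw[B:R, 2] < 0)

# 95% credibility interval for the display effect
quantile(out$betadraw[B:R, 3], c(0.025, 0.975))
```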

Conveniently, bayesm offers this functionality (and more) with its summary and plot methods. Notice that bayesm’s methods use a default burn-in length, calculated as trunc(0.1*nrow(X)), where X here denotes the matrix of draws passed to the method (so 1,000 draws when R = 10,000). You can override the default by specifying the burnin argument. We see that the means for the price and display coefficients in the summary information below match those calculated “by hand” above.

summary(out$betadraw)

## Summary of Posterior Marginal Distributions 
## Moments 
##    mean std dev  num se rel eff sam size
## 1  9.19   0.059 0.00057    0.85     9000
## 2 -0.39   0.020 0.00019    0.87     9000
## 3  0.54   0.063 0.00072    1.18     4500
## 
## Quantiles 
##    2.5%    5%   50%   95% 97.5%
## 1  9.08  9.09  9.19  9.29  9.31
## 2 -0.43 -0.43 -0.39 -0.36 -0.35
## 3  0.42  0.44  0.54  0.64  0.66
##    based on 9000 valid draws (burn-in=1000)

The same can be done with the plot generic function. However, we reference the method directly (plot.bayesm.mat) to plot a subset of the bayesm output, specifically the price coefficient. In the histogram, note that the green bars delimit a 95% Bayesian credibility interval, the yellow bars show +/- 2 numerical standard errors for the posterior mean, and the red bar indicates the posterior mean. Also notice that this pink histogram of the posterior distribution on price, created by calling the plot generic, matches the blue one we created “by hand” above.


The plot generic function also provides a trace plot and an ACF plot. In many applications (although not in this simple model), we cannot be certain that our draws from the posterior distribution adequately represent all areas of the posterior with nontrivial mass. This may occur, for instance, when using a “slow mixing” Markov Chain Monte Carlo (MCMC) algorithm to draw from the posterior. In such a case, we might see patterns in the trace plot and non-zero autocorrelations in the ACF plot; these will coincide with values of the Effective Sample Size less than R. (The Effective Sample Size prints out with the summary generic function, as above.) Here, however, we are able to sample from the posterior distribution by taking iid draws, and so we see large Effective Sample Sizes in the summary output above, good mixing in the trace plot below, and virtually no autocorrelation between draws in the ACF plot below.

plot.bayesm.mat(out$betadraw[,2])


4.4 Example 2: Multinomial Logistic Regression


4.4.1 Data


Linear regression models like the one in the previous example are best suited for continuous outcome variables. Different models (known as limited dependent variable models) have been developed for binary or multinomial outcome variables, the most popular of which — the multinomial logit — will be the subject of this section.


For this example, we analyze the margarine dataset, which provides panel data on purchases of margarine. The data are stored in two dataframes. The first, choicePrice, lists the outcome of 4,470 choice occasions as well as the choosing household and the prices of the 10 choice alternatives. The second, demos, provides demographic information about the choosing households, such as their income and family size. We begin by merging the information from these two dataframes:

data(margarine)
str(margarine)
marg <- merge(margarine$choicePrice, margarine$demos, by = "hhid")

## List of 2
##  $ choicePrice:'data.frame': 4470 obs. of  12 variables:
##   ..$ hhid    : int [1:4470] 2100016 2100016 2100016 2100016 2100016 2100016 2100016 2100024 2100024 2100024 ...
##   ..$ choice  : num [1:4470] 1 1 1 1 1 4 1 1 4 1 ...
##   ..$ PPk_Stk : num [1:4470] 0.66 0.63 0.29 0.62 0.5 0.58 0.29 0.66 0.66 0.66 ...
##   ..$ PBB_Stk : num [1:4470] 0.67 0.67 0.5 0.61 0.58 0.45 0.51 0.45 0.59 0.67 ...
##   ..$ PFl_Stk : num [1:4470] 1.09 0.99 0.99 0.99 0.99 0.99 0.99 1.08 1.08 1.09 ...
##   ..$ PHse_Stk: num [1:4470] 0.57 0.57 0.57 0.57 0.45 0.45 0.29 0.57 0.57 0.57 ...
##   ..$ PGen_Stk: num [1:4470] 0.36 0.36 0.36 0.36 0.33 0.33 0.33 0.36 0.36 0.36 ...
##   ..$ PImp_Stk: num [1:4470] 0.93 1.03 0.69 0.75 0.72 0.72 0.72 0.93 0.93 0.93 ...
##   ..$ PSS_Tub : num [1:4470] 0.85 0.85 0.79 0.85 0.85 0.85 0.85 0.85 0.85 0.85 ...
##   ..$ PPk_Tub : num [1:4470] 1.09 1.09 1.09 1.09 1.07 1.07 1.07 1.09 1.09 1.09 ...
##   ..$ PFl_Tub : num [1:4470] 1.19 1.19 1.19 1.19 1.19 1.19 1.19 1.19 1.34 1.19 ...
##   ..$ PHse_Tub: num [1:4470] 0.33 0.37 0.59 0.59 0.59 0.59 0.59 0.33 0.33 0.33 ...
##  $ demos      :'data.frame': 516 obs. of  8 variables:
##   ..$ hhid     : num [1:516] 2100016 2100024 2100495 2100560 2100610 ...
##   ..$ Income   : num [1:516] 32.5 17.5 37.5 17.5 87.5 12.5 17.5 17.5 27.5 67.5 ...
##   ..$ Fs3_4    : int [1:516] 0 1 0 0 0 0 0 0 0 0 ...
##   ..$ Fs5.     : int [1:516] 0 0 0 0 0 0 0 0 1 0 ...
##   ..$ Fam_Size : int [1:516] 2 3 2 1 1 2 2 2 5 2 ...
##   ..$ college  : int [1:516] 1 1 0 0 1 0 1 0 1 1 ...
##   ..$ whtcollar: int [1:516] 0 1 0 1 1 0 0 0 1 1 ...
##   ..$ retired  : int [1:516] 1 1 1 0 0 1 0 1 0 0 ...

Compared to the standard \(n \times k\) rectangular format for data to be used in a linear regression model, choice data may be stored in various formats, including a rectangular format where the \(p\) choice alternatives are allocated across columns or rows, or a list-of-lists format as used in bayesm’s hierarchical models, which we demonstrate in Example 3 below. For any function (notably those implementing multinomial logit and probit models), the data must be in the format that function expects, and bayesm’s posterior sampling functions are no exception.


In this example, we will implement a multinomial logit model using rmnlIndepMetrop. This posterior sampling function requires y to be a length-\(n\) vector (or an \(n \times 1\) matrix) of multinomial outcomes (\(1, \dots, p\)). That is, each element of y corresponds to a choice occasion \(i\) with the value of the element \(y_i\) indicating the choice that was made. So if the fourth alternative was chosen on the seventh choice occasion, then \(y_7 = 4\). The margarine data are stored in that format, and so we easily specify y with the following code:

y <- marg[,2]

rmnlIndepMetrop requires X to be an \(np \times k\) matrix. That is, each alternative is listed on its own row, with a group of \(p\) rows together corresponding to the alternatives available on one choice occasion. However, the margarine data are stored with the various choice alternatives in columns rather than rows, so reformatting is necessary. bayesm provides the utility function createX to assist with the conversion. createX requires the user to specify the number of choice alternatives p as well as the number of alternative-specific variables na and an \(n \times (p \cdot na)\) matrix of alternative-specific data Xa (“a” for alternative-specific), with one column per alternative for each such variable. Here, we have \(p=10\) choice alternatives with na \(=1\) alternative-specific variable (price). If we were only interested in using price as a covariate, we would code:

X1 <- createX(p=10, na=1, Xa=marg[,3:12], nd=NULL, Xd=NULL, base=1)
colnames(X1) <- c(names(marg[,3:11]), "price")
head(X1, n=10)

##       PPk_Stk PBB_Stk PFl_Stk PHse_Stk PGen_Stk PImp_Stk PSS_Tub PPk_Tub
##  [1,]       0       0       0        0        0        0       0       0
##  [2,]       1       0       0        0        0        0       0       0
##  [3,]       0       1       0        0        0        0       0       0
##  [4,]       0       0       1        0        0        0       0       0
##  [5,]       0       0       0        1        0        0       0       0
##  [6,]       0       0       0        0        1        0       0       0
##  [7,]       0       0       0        0        0        1       0       0
##  [8,]       0       0       0        0        0        0       1       0
##  [9,]       0       0       0        0        0        0       0       1
## [10,]       0       0       0        0        0        0       0       0
##       PFl_Tub price
##  [1,]       0  0.66
##  [2,]       0  0.67
##  [3,]       0  1.09
##  [4,]       0  0.57
##  [5,]       0  0.36
##  [6,]       0  0.93
##  [7,]       0  0.85
##  [8,]       0  1.09
##  [9,]       0  1.19
## [10,]       1  0.33

Notice that createX uses \(p-1\) dummy variables to distinguish the \(p\) choice alternatives. As with factor variables in linear regression, one factor must be the base; the coefficients on the other factors report deviations from the base. The user may specify the base alternative using the base argument (as we have done above), or let it default to the alternative with the highest index.


For our example, we might also like to include some “demographic” variables. These are variables that do not vary with the choice alternatives. For example, with the margarine data we might want to include family size. Here again we turn to createX, this time specifying the nd and Xd arguments (“d” for demographic):

X2 <- createX(p=10, na=NULL, Xa=NULL, nd=2, Xd=as.matrix(marg[,c(13,16)]), base=1)
print(X2[1:10,1:9]); cat("\n")
print(X2[1:10,10:18])

##       [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
##  [1,]    0    0    0    0    0    0    0    0    0
##  [2,]    1    0    0    0    0    0    0    0    0
##  [3,]    0    1    0    0    0    0    0    0    0
##  [4,]    0    0    1    0    0    0    0    0    0
##  [5,]    0    0    0    1    0    0    0    0    0
##  [6,]    0    0    0    0    1    0    0    0    0
##  [7,]    0    0    0    0    0    1    0    0    0
##  [8,]    0    0    0    0    0    0    1    0    0
##  [9,]    0    0    0    0    0    0    0    1    0
## [10,]    0    0    0    0    0    0    0    0    1
## 
##       [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
##  [1,]  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
##  [2,] 32.5  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
##  [3,]  0.0 32.5  0.0  0.0  0.0  0.0  0.0  0.0  0.0
##  [4,]  0.0  0.0 32.5  0.0  0.0  0.0  0.0  0.0  0.0
##  [5,]  0.0  0.0  0.0 32.5  0.0  0.0  0.0  0.0  0.0
##  [6,]  0.0  0.0  0.0  0.0 32.5  0.0  0.0  0.0  0.0
##  [7,]  0.0  0.0  0.0  0.0  0.0 32.5  0.0  0.0  0.0
##  [8,]  0.0  0.0  0.0  0.0  0.0  0.0 32.5  0.0  0.0
##  [9,]  0.0  0.0  0.0  0.0  0.0  0.0  0.0 32.5  0.0
## [10,]  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0 32.5

Notice that createX again uses \(p-1\) dummy variables to distinguish the \(p\) choice alternatives. However, for demographic variables, the value of the demographic variable is spread across \(p-1\) columns and \(p-1\) rows.
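To see the structure concretely, the block of rows for a single choice occasion can be reproduced in base R. This is a sketch under the assumptions above (base alternative 1, one demographic variable with value 32.5), not createX's internal code:

```r
p   <- 10
D   <- rbind(0, diag(p - 1))   # p-1 intercept dummies; alternative 1 is the base
inc <- 32.5                    # the first household's Income value

# 10 x 18 block: the dummies, then Income spread over p-1 columns and rows
blk <- cbind(D, D * inc)
blk[1:3, c(1:2, 10:11)]
```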


4.4.2 Model


The logit specification was originally derived by Luce (1959) from assumptions on characteristics of choice probabilities. McFadden (1974) tied the model to rational economic theory by showing how the multinomial logit specification models choices made by a utility-maximizing consumer, assuming that the unobserved utility component is distributed Type I Extreme Value. We motivate use of this model following McFadden by assuming the decision maker chooses the alternative providing him with the highest utility, where the utility \(U_{ij}\) from the choice \(y_{ij}\) made by a decision maker in choice situation \(i\) for product \(j = 1, \ldots, p\) is modeled as the sum of a deterministic and a stochastic component:


\[ U_{ij} = V_{ij} + \varepsilon_{ij} \hspace{2em} \text{with } \varepsilon_{ij}\ \sim \text{ iid T1EV} \]


Regressors (both demographic and alternative-specific) are included in the model by assuming \(V_{ij} = x_{ij}'\beta\). These assumptions result in choice probabilities of:


\[ \text{Pr}(y_i=j) = \frac{\exp \{x_{ij}'\beta\}}{\sum_{k=1}^p \exp\{x_{ik}'\beta\}} \]


Independent priors for the components of the beta vector are specified as normal distributions. Using notation for the multivariate normal distribution, we have:


\[ \beta \sim MVN(\bar{\beta},\ A^{-1}) \]


We use bayesm’s default values for the parameters of the priors: \(\bar{\beta} = 0\) and \(A = 0.01I\).


4.4.3 Bayesian Estimation


Experience with the MNL likelihood shows that the asymptotic normal approximation to it is excellent. rmnlIndepMetrop implements an independence Metropolis algorithm to sample from the normal approximation to the posterior distribution of \(\beta\):


\[ p(\beta | X, y) \overset{\cdot}{\propto} |H|^{1/2} \exp \left\{ -\frac{1}{2}(\beta - \hat{\beta})'H(\beta - \hat{\beta}) \right\} \]


where \(\hat{\beta}\) is the MLE, \(H = \sum_i x_i A_i x_i'\), and the candidate distribution used in the Metropolis algorithm is a multivariate Student t. For more detail, see Section 11 of BSM Chapter 3.
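The MLE \(\hat{\beta}\) that anchors this approximation maximizes the MNL log-likelihood, which for the stacked \(np \times k\) X format can be written compactly. The sketch below is illustrative and is not rmnlIndepMetrop's internal code:

```r
# MNL log-likelihood with X in the stacked np x k format used above:
# rows are grouped p at a time, one group per choice occasion
mnl_loglik <- function(beta, y, X, p) {
  V <- matrix(X %*% beta, ncol = p, byrow = TRUE)   # n x p matrix of utilities
  sum(V[cbind(seq_along(y), y)] - log(rowSums(exp(V))))
}
# beta-hat could then be found with, e.g.,
# optim(rep(0, ncol(X)), mnl_loglik, y = y, X = X, p = 10,
#       method = "BFGS", control = list(fnscale = -1))
```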


We sample from the normal approximation to the posterior as follows:

X <- cbind(X1, X2[,10:ncol(X2)])
out <- rmnlIndepMetrop(Data = list(y=y, X=X, p=10), 
                       Mcmc = list(R=1e4, nprint=1e3))

rmnlIndepMetrop returns a list that we have saved in out. The list contains 3 elements, betadraw, loglike, and acceptr, which you can verify by running str(out). betadraw is a \(10,000 \times 28\) dimensional matrix with class bayesm.mat. As with the linear regression of Example 1 above, we can plot or summarize features of the posterior distribution in many ways. For information on each marginal posterior distribution, call summary(out) or plot(out). Because we have 28 covariates (9 columns of intercepts, 9 columns for each of the two demographic variables, and one column for the price variable), we omit the full set of results to save space and instead present summary statistics only for the marginal posterior distribution of \(\beta_\text{price}\):

summary.bayesm.mat(out$betadraw[,10], names = "Price")

## Summary of Posterior Marginal Distributions 
## Moments 
##       mean std dev num se rel eff sam size
## Price -6.7    0.18 0.0039     4.5     1800
## 
## Quantiles 
##       2.5% 5%  50%  95% 97.5%
## Price   -7 -7 -6.7 -6.4  -6.3
##    based on 9000 valid draws (burn-in=1000)
-

In addition to summary information for a marginal posterior distribution, we can plot it. We use bayesm’s plot generic function (calling plot(out$betadraw) would provide the same plots for all 28 \(X\) variables). In the histogram, the green bars delimit a 95% Bayesian credibility interval, the yellow bars show +/- 2 numerical standard errors for the posterior mean, and the red bar indicates the posterior mean. The subsequent two plots are a trace plot and an ACF plot.

plot.bayesm.mat(out$betadraw[,10], names = "Price")


We see that the posterior is approximately normally distributed with a mean of -6.7 and a standard deviation of 0.18. The trace plot shows good mixing. The ACF plot shows a fair amount of correlation such that, even though the algorithm took R = 10,000 draws, the Effective Sample Size (as reported in the summary stats above) is only 1,800.


Because of the nonlinearity in this model, interpreting the results is more difficult than with linear regression models. We do not elaborate further on the interpretation of coefficients and methods of displaying results from multinomial logit models, but for the uninitiated reader, we refer you to excellent sources for this information authored by Kenneth Train and Gary King.7
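One simple device is to convert posterior draws into implied choice probabilities via the MNL formula above. The snippet below is a sketch assuming out and X from above, and uses the fact that rows 1 through 10 of X form the first choice occasion:

```r
bhat <- colMeans(out$betadraw[1001:10000, ])  # posterior means after burn-in
V    <- X[1:10, ] %*% bhat                    # utilities for the first occasion
round(exp(V) / sum(exp(V)), 3)                # implied MNL choice probabilities
```

Repeating this calculation draw by draw (rather than at the posterior mean) would yield a full posterior distribution over each choice probability.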


4.5 Example 3: Hierarchical Logit


4.5.1 Data


While we could add individual-specific parameters to the previous model and use the same dataset, we elect to provide the reader with greater variety. For this example, we use the camera dataset in bayesm, which contains conjoint choice data for 332 respondents who evaluated digital cameras. These data have already been processed to exclude respondents who always answered “none”, always picked the same brand, always selected the highest-priced offering, or appeared to be answering randomly.

data(camera)
length(camera)

## [1] 332

str(camera[[1]])

## List of 2
##  $ y: int [1:16] 1 2 2 4 2 2 1 1 1 2 ...
##  $ X: num [1:80, 1:10] 0 1 0 0 0 0 1 0 0 0 ...
##   ..- attr(*, "dimnames")=List of 2
##   .. ..$ : chr [1:80] "1" "2" "3" "4" ...
##   .. ..$ : chr [1:10] "canon" "sony" "nikon" "panasonic" ...

colnames(camera[[1]]$X)

##  [1] "canon"     "sony"      "nikon"     "panasonic" "pixels"   
##  [6] "zoom"      "video"     "swivel"    "wifi"      "price"

The camera data are stored in a list-of-lists format, which is the format required by bayesm’s posterior sampling functions for hierarchical models. This format has one list per individual, with each list containing a vector y of choice outcomes and a matrix X of covariates. As with the multinomial logit model of the last example, y is a length-\(n_i\) vector (or one-column matrix) and X has dimensions \(n_i \cdot j \times k\), where \(n_i\) is the number of choice occasions faced by individual \(i\), \(j\) is the number of choice alternatives, and \(k\) is the number of covariates. For the camera data, \(N=332\), \(n_i=16\) for all \(i\), \(j=5\), and \(k=10\).


If your data were not in this format, it could be easily converted with a for loop and createX. For example, we can format margarine data from Example 2 above into a list-of-lists format with the following code, which simply loops over individuals, extracting and storing their y and X data one individual at a time:

data(margarine)
chpr <- margarine$choicePrice
chpr$hhid <- as.factor(chpr$hhid)
N <- nlevels(chpr$hhid)
dat <- vector(mode = "list", length = N)
for (i in 1:N) {
  dat[[i]]$y <- chpr[chpr$hhid==levels(chpr$hhid)[i], "choice"]
  dat[[i]]$X <- createX(p=10, na=1, Xa=chpr[chpr$hhid==levels(chpr$hhid)[i],3:12], nd=NULL, Xd=NULL)
}

Returning to the camera data, the first 4 covariates are binary indicators for the brands Canon, Sony, Nikon, and Panasonic. These correspond to choice (y) values of 1, 2, 3, and 4. y can also take the value 5, indicating that the respondent chose “none”. The data also include binary indicators for two levels each of pixel count, zoom strength, video capability, swivel display, and wifi connectivity. The last covariate is price, recorded in hundreds of U.S. dollars (we leave price in these units so that we do not need to adjust the default prior settings).


When we look at overall choice outcomes, we see that the brands and the outside alternative (“none”) are chosen roughly in fifths, with specific implied market shares ranging from 17%–25%:

N <- length(camera)
dat <- matrix(NA, N*16, 2)
for (i in 1:length(camera)) {
  Ni <- length(camera[[i]]$y)
  dat[((i-1)*Ni+1):(i*Ni),1] <- i
  dat[((i-1)*Ni+1):(i*Ni),2] <- camera[[i]]$y
}
round(prop.table(table(dat[,2])), 3)

## 
##     1     2     3     4     5 
## 0.207 0.176 0.195 0.169 0.253

However, when we look at a few individuals’ choices, we see much greater variability:

round(prop.table(table(dat[,1], dat[,2])[41:50,], 1), 3)

##     
##          1     2     3     4     5
##   41 0.188 0.000 0.312 0.500 0.000
##   42 0.250 0.125 0.312 0.312 0.000
##   43 0.312 0.188 0.125 0.125 0.250
##   44 0.000 0.000 0.188 0.188 0.625
##   45 0.312 0.250 0.125 0.125 0.188
##   46 0.375 0.250 0.062 0.062 0.250
##   47 0.188 0.062 0.062 0.250 0.438
##   48 0.125 0.500 0.188 0.125 0.062
##   49 0.062 0.125 0.500 0.188 0.125
##   50 0.000 0.125 0.375 0.250 0.250

It is this heterogeneity in individual choice that motivates us to employ a hierarchical model.


4.5.2 Model


Hierarchical (also known as multi-level, random-coefficient, or mixed) models allow each respondent to have his or her own coefficients. Different people have different preferences, and models that estimate individual-level coefficients can fit data better and make more accurate predictions than single-level models. These models are quite popular in marketing, as they allow, for example, promotions to be targeted to individuals with high promotional part worths, meaning the individuals most likely to respond to the promotion. For more information, see Rossi et al. (1996).8


The model follows the multinomial logit specification given in Example 2 above, where individuals are assumed to be rational economic agents that make utility-maximizing choices. Now, however, the model includes individual-level parameters (\(\beta_i\)) assumed to be drawn from a normal distribution, with means driven by cross-sectional unit characteristics \(Z\):


\[ y_i \sim \text{MNL}(x_i'\beta_i) \hspace{1em} \text{with} \hspace{1em} \beta_i = z_i' \Delta + u_i \hspace{1em} \text{and} \hspace{1em} u_i \sim MVN(\mu, \Sigma) \]


\(x_i\) is \(n_i \times k\) and \(i = 1, \ldots, N\).


We can alternatively write the middle equation as \(B=Z\Delta + U\) where \(\beta_i\), \(z_i\), and \(u_i\) are the \(i^\text{th}\) rows of \(B\), \(Z\), and \(U\). \(B\) is \(N \times k\), \(Z\) is \(N \times m\), \(\Delta\) is \(m \times k\), and \(U\) is \(N \times k\).


Note that we do not have any cross-sectional unit characteristics in the camera dataset and thus \(Z\) will be omitted.


The priors are:


\[ \text{vec}(\Delta) = \delta \sim MVN(\bar{\delta}, A_\delta^{-1}) \hspace{2em} \mu \sim MVN(\bar{\mu}, \Sigma \otimes a^{-1}) \hspace{2em} \Sigma \sim IW(\nu, V) \]


This specification of priors assumes that, conditional on the hyperparameters (that is, the parameters of the prior distribution), the \(\beta\)’s are a priori independent. This means that inference for each unit can be conducted independently of all other units, conditional on the hyperparameters, which is the Bayesian analogue of the fixed effects approach in classical statistics.


Note also that we have assumed a normal “first-stage” prior distribution over the \(\beta\)’s. rhierMnlRwMixture permits a more-flexible mixture-of-normals first-stage prior (hence the “mixture” in the function name). However, for our example, we will not include this added flexibility (Prior$ncomp = 1 below).


4.5.3 Bayesian Estimation


Although the model is more complex than the models used in the two previous examples, the increased programming difficulty for the researcher is minimal. As before, we specify Data, Prior, and Mcmc arguments, and call the posterior sampling function:

data  <- list(lgtdata = camera, p = 5)
prior <- list(ncomp = 1)
mcmc  <- list(R = 1e4, nprint = 0)

out <- rhierMnlRwMixture(Data = data, Prior = prior, Mcmc = mcmc)

## Z not specified
## Table of Y values pooled over all units
## ypooled
##    1    2    3    4    5 
## 1100  936 1035  898 1343 
##  
## Starting MCMC Inference for Hierarchical Logit:
##    Normal Mixture with 1 components for first stage prior
##    5  alternatives;  10  variables in X
##    for  332  cross-sectional units
##  
## Prior Parms: 
## nu = 13
## V 
##       [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
##  [1,]   13    0    0    0    0    0    0    0    0     0
##  [2,]    0   13    0    0    0    0    0    0    0     0
##  [3,]    0    0   13    0    0    0    0    0    0     0
##  [4,]    0    0    0   13    0    0    0    0    0     0
##  [5,]    0    0    0    0   13    0    0    0    0     0
##  [6,]    0    0    0    0    0   13    0    0    0     0
##  [7,]    0    0    0    0    0    0   13    0    0     0
##  [8,]    0    0    0    0    0    0    0   13    0     0
##  [9,]    0    0    0    0    0    0    0    0   13     0
## [10,]    0    0    0    0    0    0    0    0    0    13
## mubar 
##      [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
## [1,]    0    0    0    0    0    0    0    0    0     0
## Amu 
##      [,1]
## [1,] 0.01
## a 
## [1] 5
##  
## MCMC Parms: 
## s= 0.753  w=  0.1  R=  10000  keep=  1  nprint=  0
## 
## initializing Metropolis candidate densities for  332  units ...
##   completed unit # 50
##   completed unit # 100
##   completed unit # 150
##   completed unit # 200
##   completed unit # 250
##   completed unit # 300

We store the results in out, which is a list of length 4. The list elements are betadraw, nmix, loglike, and SignRes.

  • betadraw is a \(332 \times 10 \times 10,000\) array. These dimensions correspond to the number of individuals, the number of covariates, and the number of MCMC draws.

  • nmix is a list with elements probdraw, zdraw, and compdraw.

      • probdraw tells us the probability that each draw came from a particular normal component. This is relevant when there is a mixture-of-normals first-stage prior. However, since our specified prior over the \(\beta\) vector is one normal distribution, probdraw is a \(10,000 \times 1\) vector of all 1’s.

      • zdraw is NULL as it is not relevant for this function.

      • compdraw provides draws for the mixture-of-normals components. Here, compdraw is a list of 10,000 lists. The \(r^\text{th}\) of the 10,000 lists contains the \(r^\text{th}\) draw of the \(\mu\) vector (dim \(1 \times 10\)) and the Cholesky root of the \(r^\text{th}\) draw of the \(10 \times 10\) covariance matrix.

  • loglike is a \(10,000 \times 1\) vector that provides the log-likelihood of each draw.

  • SignRes relates to whether any sign restrictions were placed on the model. This is discussed in detail in a separate vignette on the constrained hierarchical multinomial logit model; it is not relevant here.
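For example, draws of the upper-level mean \(\mu\) can be extracted from compdraw. This is a sketch assuming ncomp = 1 as specified above, so each of the 10,000 lists holds a single normal component:

```r
# Posterior draws of mu, the mean of the (single) normal mixture component
mu_draws <- t(sapply(out$nmix$compdraw, function(comp) comp[[1]]$mu))
dim(mu_draws)          # 10000 x 10: one row per MCMC draw
colMeans(mu_draws)     # posterior mean of the population-level coefficients
```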

We can summarize results as before. plot(out$betadraw) provides plots for each variable that summarize the distributions of the individual parameters. For brevity, we provide just a histogram of posterior means for the 332 individual coefficients on wifi capability.

hist(apply(out$betadraw, 1:2, mean)[,9], col = "dodgerblue4", 
     xlab = "", ylab = "", yaxt="n", xlim = c(-4,6), breaks = 20,
     main = "Histogram of Posterior Means For Individual Wifi Coefs")
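The same matrix of posterior means supports simple individual-level summaries, for instance the share of respondents whose wifi coefficient is positive (a sketch assuming out from above):

```r
post_means <- apply(out$betadraw, 1:2, mean)   # 332 x 10: one row per respondent
mean(post_means[, 9] > 0)                      # share with a positive wifi part worth
```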


We see that the distribution of individual posterior means is skewed, suggesting that our assumption of a normal first-stage prior may be incorrect. We could improve this model by using the more flexible mixture-of-normals prior or, if we believe all consumers value wifi connectivity positively, we could impose a sign constraint on that set of parameters — both of which are demonstrated in the vignette for sign-constrained hierarchical multinomial logit.


5 Conclusion


We hope that this vignette has provided the reader with a thorough introduction to the bayesm package and is sufficient to enable immediate use of its posterior sampling functions for Bayesian estimation. Comments or edits can be sent to the vignette authors at the email addresses below.


Authored by Dan Yavorsky and Peter Rossi. Last updated June 2017.

  1. For example, calling the generic function summary(x) on an object x with class bayesm.mat actually dispatches the method summary.bayesm.mat on x, which is equivalent to calling summary.bayesm.mat(x). For a nice introduction, see Advanced R by Hadley Wickham, available online.

  2. Functions such as rivGibbs only permit one endogenous variable and so the function argument is lowercase: x.

  3. LHS and RHS stand for left-hand side and right-hand side.

  4. Rossi, Peter, Greg Allenby, and Robert McCulloch, Bayesian Statistics and Marketing, John Wiley & Sons, 2005. Website.

  5. Cameron, Colin and Pravin Trivedi, Microeconometrics: Methods and Applications, Cambridge University Press, 2005.

     Davidson, Russell and James MacKinnon, Estimation and Inference in Econometrics, Oxford University Press, 1993.

     Goldberger, Arthur, A Course in Econometrics, Harvard University Press, 1991.

     Greene, William, Econometric Analysis, Prentice Hall, 2012 (7th edition).

     Wasserman, Larry, All of Statistics: A Concise Course in Statistical Inference, Springer, 2004.

     Wooldridge, Jeffrey, Econometric Analysis of Cross Section and Panel Data, MIT Press, 2010 (2nd edition).

  6. Gelman, Andrew, John Carlin, Hal Stern, David Dunson, Aki Vehtari, and Donald Rubin, Bayesian Data Analysis, CRC Press, 2013 (3rd edition).

     Jackman, Simon, Bayesian Analysis for the Social Sciences, Wiley, 2009.

     Marin, Jean-Michel and Christian Robert, Bayesian Essentials with R, Springer, 2014 (2nd edition).

     Rossi, Peter, Greg Allenby, and Robert McCulloch, Bayesian Statistics and Marketing, Wiley, 2005.

     Zellner, Arnold, An Introduction to Bayesian Inference in Econometrics, Wiley, 1971.

  7. Train, Kenneth, Discrete Choice Methods with Simulation, Cambridge University Press, 2009.

     King, Gary, Unifying Political Methodology: The Likelihood Theory of Statistical Inference, University of Michigan Press, 1998, p. 108.

     King, Gary, Michael Tomz, and Jason Wittenberg, “Making the Most of Statistical Analyses: Improving Interpretation and Presentation,” American Journal of Political Science, Vol. 44 (April 2000), pp. 347–361 at p. 355.

  8. Rossi, McCulloch, and Allenby, “The Value of Purchase History Data in Target Marketing,” Marketing Science (1996).
-
- - - -
-
- -
- - - - - - - - diff -Nru r-cran-bayesm-3.1-1/inst/doc/bayesm_Overview_Vignette.Rmd r-cran-bayesm-3.1-3+dfsg/inst/doc/bayesm_Overview_Vignette.Rmd --- r-cran-bayesm-3.1-1/inst/doc/bayesm_Overview_Vignette.Rmd 2017-06-12 21:48:30.000000000 +0000 +++ r-cran-bayesm-3.1-3+dfsg/inst/doc/bayesm_Overview_Vignette.Rmd 2019-07-13 20:25:14.000000000 +0000 @@ -671,9 +671,9 @@ # Conclusion -We hope that this vignette has provided the reader with a thorough introduction to the `bayesm` package and is sufficient to enable immediate use of its posterior sampling functions for bayesian estimation. Comments or edits can be sent to the vignette authors at the email addresses below. +We hope that this vignette has provided the reader with an introduction to the `bayesm` package and is sufficient to enable immediate use of its posterior sampling functions for bayesian estimation. ***** -_Authored by [Dan Yavorsky](dyavorsky@gmail.com) and [Peter Rossi](perossichi@gmail.com). Last updated June 2017._ +_ Last updated July 2019._ diff -Nru r-cran-bayesm-3.1-1/inst/doc/Constrained_MNL_Vignette.html r-cran-bayesm-3.1-3+dfsg/inst/doc/Constrained_MNL_Vignette.html --- r-cran-bayesm-3.1-1/inst/doc/Constrained_MNL_Vignette.html 2018-12-20 19:37:04.000000000 +0000 +++ r-cran-bayesm-3.1-3+dfsg/inst/doc/Constrained_MNL_Vignette.html 1970-01-01 00:00:00.000000000 +0000 @@ -1,480 +0,0 @@ - - - - - - - - - - - - - -Hierarchical Multinomial Logit with Sign Constraints - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Introduction


bayesm’s posterior sampling function rhierMnlRwMixture permits the imposition of sign constraints on the individual-specific parameters of a hierarchical multinomial logit model. This may be desired if, for example, the researcher believes there are heterogeneous effects from, say, price, but that all responses should be negative (i.e., sign-constrained). This vignette provides exposition of the model, discussion of prior specification, and an example.

Model


The model follows the hierarchical multinomial logit specification given in Example 3 of the “bayesm Overview” Vignette, but will be repeated here succinctly. Individuals are assumed to be rational economic agents who make utility-maximizing choices. Utility is modeled as the sum of deterministic and stochastic components, where the logit of the probability of choosing an alternative is linear in the parameters and the error is assumed to follow a Type I Extreme Value distribution:

\[ U_{ij} = X_{ij}\beta_i + \varepsilon_{ij} \hspace{0.8em} \text{with} \hspace{0.8em} \varepsilon_{ij} \sim \text{iid Type I EV} \]


These assumptions yield choice probabilities of:

\[ \text{Pr}(y_i=j) = \frac{\exp \{x_{ij}'\beta_i\}}{\sum_{k=1}^p\exp\{x_{ik}'\beta_i\}} \]

\(x_i\) is \(n_i \times k\) and \(i = 1, \ldots, N\). There are \(p\) alternatives, \(j = 1, \ldots, p\). An outside option, often denoted \(j=0\), can be introduced by assigning \(0\)’s to that option’s covariate (\(x\)) values.
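These choice probabilities are easy to compute directly. Below is a minimal R sketch (the function and variable names are illustrative, not from bayesm); subtracting the maximum utility before exponentiating guards against numerical overflow:

```r
# Logit choice probabilities for one choice occasion with p alternatives.
# X is p x k (one row of covariates per alternative), beta is length k.
logit_probs <- function(X, beta) {
  v  <- as.vector(X %*% beta)   # deterministic utilities x_ij' beta
  ev <- exp(v - max(v))         # shift by max(v) for numerical stability
  ev / sum(ev)
}

set.seed(1)
X <- matrix(rnorm(5 * 3), nrow = 5)  # 5 alternatives, 3 covariates
p <- logit_probs(X, c(1, -0.5, 0.2))
sum(p)                               # probabilities sum to 1
```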


We impose sign constraints by defining a \(k\)-length constraint vector SignRes that takes values from the set \(\{-1, 0, 1\}\) to define \(\beta_{ik} = f(\beta_{ik}^*)\) where \(f(\cdot)\) is as follows:

\[ \beta_{ik} = f(\beta_{ik}^*) = \left\{
  \begin{array}{lcl}
  \exp(\beta_{ik}^*) & \text{if} & \texttt{SignRes[k]} = 1 \\
  \beta_{ik}^* & \text{if} & \texttt{SignRes[k]} = 0 \\
  -\exp(\beta_{ik}^*) & \text{if} & \texttt{SignRes[k]} = -1 \\
  \end{array}
  \right. \]
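The transform \(f(\cdot)\) can be sketched in a few lines of R (an illustrative helper, not a function exported by bayesm). Because \(\exp(\cdot) > 0\), a coefficient with SignRes[k] = -1 is negative for every draw:

```r
# Map unconstrained "deep" parameters beta_star to constrained beta
# according to a SignRes vector with entries in {-1, 0, 1}.
apply_sign_constraints <- function(beta_star, SignRes) {
  stopifnot(length(beta_star) == length(SignRes),
            all(SignRes %in% c(-1, 0, 1)))
  # unconstrained entries pass through; constrained ones get +/- exp()
  ifelse(SignRes == 0, beta_star, SignRes * exp(beta_star))
}

apply_sign_constraints(c(0.5, -1, 0.5), c(0, 0, -1))
# third element is -exp(0.5), guaranteed negative
```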


The “deep” individual-specific parameters (\(\beta_i^*\)) are assumed to be drawn from a mixture of \(M\) normal distributions with mean values driven by cross-sectional unit characteristics \(Z\). That is, \(\beta_i^* = z_i' \Delta + u_i\) where \(u_i\) has a mixture-of-normals distribution.1

Considering \(\beta_i^*\) a length-\(k\) row vector, we will stack the \(N\) \(\beta_i^*\)’s vertically and write:

\[ B=Z\Delta + U \]

Thus we have \(\beta_i^*\), \(z_i\), and \(u_i\) as the \(i^\text{th}\) rows of \(B\), \(Z\), and \(U\). \(B\) is \(N \times k\), \(Z\) is \(N \times M\), \(\Delta\) is \(M \times k\), and \(U\) is \(N \times k\) where the distribution on \(U\) is such that:

\[ \Pr(\beta_i^*) = \sum_{m=1}^M \pi_m \phi(\beta_i^* \mid z_i' \Delta + \mu_m, \Sigma_m) \]

\(\phi\) is the multivariate normal pdf.
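The mixture density above can be sketched in R. For clarity this illustration is univariate with known component parameters (all names are hypothetical, and dnorm stands in for the multivariate \(\phi\)):

```r
# Density of a mixture of normals at a point x (univariate illustration).
# pvec: component weights summing to 1; mu, sigma: component means and sds.
dmix <- function(x, pvec, mu, sigma) {
  sum(pvec * dnorm(x, mean = mu, sd = sigma))
}

# two-component example: 60% mass at N(-1, 1), 40% at N(2, 0.25)
dmix(0, pvec = c(0.6, 0.4), mu = c(-1, 2), sigma = c(1, 0.5))
```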


Priors

Natural conjugate priors are specified:

\[ \pi \sim \text{Dirichlet}(a) \] \[ \text{vec}(\Delta) = \delta \sim MVN(\bar{\delta}, A_\delta^{-1}) \] \[ \mu_m \sim MVN(\bar{\mu}, \Sigma_m \otimes a_\mu^{-1}) \] \[ \Sigma_m \sim IW(\nu, V) \]

This specification of priors assumes that the \((\mu_m, \Sigma_m)\) are independent across components and that, conditional on the hyperparameters, the \(\beta_i\)’s are a priori independent.


\(a\) implements prior beliefs on the number of normal components in the mixture with a default of 5. \(\nu\) is a “tightness” parameter of the inverted-Wishart distribution and \(V\) is its location matrix. Without sign constraints, they default to \(\nu=k+3\) and \(V=\nu I\), which has the effect of centering the prior on \(I\) and making it “barely proper”. \(a_\mu\) is a tightness parameter for the priors on \(\mu\), and when no sign constraints are imposed it defaults to an extremely diffuse prior of 0.01.

These defaults assume the logit coefficients (\(\beta_{ik}\)’s) are on the order of approximately 1 and, if so, are typically reasonable hyperprior values. However, when sign constraints are imposed, say, SignRes[k]=-1 such that \(\beta_{ik} = -\exp\{\beta_{ik}^*\}\), then these hyperprior defaults pile up mass near zero, a result that follows from the nature of the exponential function and the fact that the \(\beta_{ik}^*\)’s are on the log scale. Let’s show this graphically.

# define function
drawprior <- function (mubar_betak, nvar, ncomp, a, nu, Amu, V, ndraw) {
  betakstar <- double(ndraw)
  betak     <- double(ndraw)
  otherbeta <- double(ndraw)
  mubar     <- c(rep(0, nvar-1), mubar_betak)
  
  for(i in 1:ndraw) {
    comps=list()
    for(k in 1:ncomp) {
      Sigma <- rwishart(nu,chol2inv(chol(V)))$IW
      comps[[k]] <- list(mubar + t(chol(Sigma/Amu)) %*% rnorm(nvar), 
                         backsolve(chol(Sigma), diag(1,nvar)) )
    }
    pvec         <- rdirichlet(a)
    beta         <- rmixture(1,pvec,comps)$x
    betakstar[i] <- beta[nvar]
    betak[i]     <- -exp(beta[nvar])
    otherbeta[i] <- beta[1]
  }
  
  return(list(betakstar=betakstar, betak=betak, otherbeta=otherbeta))
}
set.seed(1234)

# specify rhierMnlRwMixture defaults
mubar_betak <- 0
nvar  <- 10
ncomp <- 3
a     <- rep(5, ncomp)
nu    <- nvar + 3
Amu   <- 0.01
V     <- nu*diag(c(rep(1,nvar-1),1))
ndraw <- 10000
defaultprior <- drawprior(mubar_betak, nvar, ncomp, a, nu, Amu, V, ndraw)

# plot priors under defaults
par(mfrow=c(1,3))
trimhist <- -20
hist(defaultprior$betakstar, breaks=40, col="magenta", 
     main="Beta_k_star", xlab="", ylab="", yaxt="n")
hist(defaultprior$betak[defaultprior$betak>trimhist],
     breaks=40, col="magenta", main="Beta_k",
     xlab="", ylab="", yaxt="n", xlim=c(trimhist,0))
hist(defaultprior$otherbeta, breaks=40, col="magenta",
     main="Other Beta", xlab="", ylab="", yaxt="n")

We see that the hyperprior values for constrained logit parameters are far from uninformative. As a result, rhierMnlRwMixture implements different default priors for parameters when sign constraints are imposed. In particular, \(a_\mu=0.1\), \(\nu = k + 15\), and \(V = \nu*\text{diag}(d)\) where \(d_i=4\) if \(\beta_{ik}\) is unconstrained and \(d_i=0.1\) if \(\beta_{ik}\) is constrained. Additionally, \(\bar{\mu}_m = 0\) if unconstrained and \(\bar{\mu}_m = 2\) otherwise. As the following plots show, this yields substantially less informative hyperpriors on \(\beta_{ik}^*\) without significantly affecting the hyperpriors on \(\beta_{ik}\) or \(\beta_{ij}\) (\(j \ne k\)).

# adjust priors for constraints
mubar_betak <- 2
nvar  <- 10
ncomp <- 3 
a     <- rep(5, ncomp)
nu    <- nvar + 15
Amu   <- 0.1 
V     <- nu*diag(c(rep(4,nvar-1),0.1))
ndraw <- 10000
tightprior <- drawprior(mubar_betak, nvar, ncomp, a, nu, Amu, V, ndraw)

# plot priors under adjusted values
par(mfrow=c(1,3))
trimhist <- -20
hist(tightprior$betakstar, breaks=40, col="magenta", 
     main="Beta_k_star", xlab="", ylab="", yaxt="n")
hist(tightprior$betak[tightprior$betak>trimhist],
     breaks=40, col="magenta", main="Beta_k",
     xlab="", ylab="", yaxt="n", xlim=c(trimhist,0))
hist(tightprior$otherbeta, breaks=40, col="magenta", 
     main="Other Beta", xlab="", ylab="", yaxt="n")

Example


Here we demonstrate the implementation of the hierarchical multinomial logit model with sign-constrained parameters. We return to the camera data used in Example 3 of the “bayesm Overview” Vignette. This dataset contains conjoint choice data for 332 respondents who evaluated digital cameras. The data are stored in a list-of-lists format with one list per respondent, and each respondent’s list having two elements: a vector of choices (y) and a matrix of covariates (X). Notice the dimensions: there is one value for each choice occasion in each individual’s y vector but one row per alternative in each individual’s X matrix, making nrow(X) = 5 \(\times\) length(y) because there are 5 alternatives per choice occasion.


library(bayesm)
data(camera)
length(camera)

## [1] 332

str(camera[[1]])

## List of 2
##  $ y: int [1:16] 1 2 2 4 2 2 1 1 1 2 ...
##  $ X: num [1:80, 1:10] 0 1 0 0 0 0 1 0 0 0 ...
##   ..- attr(*, "dimnames")=List of 2
##   .. ..$ : chr [1:80] "1" "2" "3" "4" ...
##   .. ..$ : chr [1:10] "canon" "sony" "nikon" "panasonic" ...

As shown next, the first 4 covariates are binary indicators for the brands Canon, Sony, Nikon, and Panasonic. These correspond to choice (y) values of 1, 2, 3, and 4. y can also take the value 5, indicating that the respondent chose “none”. The data include binary indicators for two levels of pixel count, zoom strength, swivel video display capability, and wifi connectivity. The last covariate is price, recorded in hundreds of U.S. dollars so that the magnitude of the expected price coefficient is such that the default prior settings in rhierMnlRwMixture do not need to be adjusted.


str(camera[[1]]$y)

##  int [1:16] 1 2 2 4 2 2 1 1 1 2 ...

str(as.data.frame(camera[[1]]$X))

## 'data.frame':    80 obs. of  10 variables:
##  $ canon    : num  0 1 0 0 0 0 1 0 0 0 ...
##  $ sony     : num  0 0 0 1 0 0 0 1 0 0 ...
##  $ nikon    : num  1 0 0 0 0 0 0 0 1 0 ...
##  $ panasonic: num  0 0 1 0 0 1 0 0 0 0 ...
##  $ pixels   : num  0 1 0 0 0 1 1 1 1 0 ...
##  $ zoom     : num  1 1 0 1 0 0 0 0 0 0 ...
##  $ video    : num  0 0 0 0 0 0 1 1 1 0 ...
##  $ swivel   : num  1 1 0 1 0 1 0 0 1 0 ...
##  $ wifi     : num  0 1 1 0 0 0 1 1 1 0 ...
##  $ price    : num  0.79 2.29 1.29 2.79 0 2.79 0.79 1.79 1.29 0 ...

Let’s say we would like to estimate the part-worths of the various attributes of these digital cameras using a multinomial logit model. To incorporate individual-level heterogeneous effects, we elect to use a hierarchical (i.e., random coefficient) specification. Further, we believe that despite the heterogeneity, each consumer’s estimated price response (\(\beta_{i,\text{price}}\)) should be negative, which we will impose with a sign constraint. Following the above discussion, we use the default priors, which “adjust” automatically when sign constraints are imposed.

mubar <- c(rep(0, nvar-1), 2)   # assumed prior means: 0 for unconstrained coefs, 2 for the constrained coef, per the adjusted defaults above
SignRes <- c(rep(0,nvar-1),-1)

data  <- list(lgtdata=camera, p=5)
prior <- list(mubar=mubar, Amu=Amu, ncomp=ncomp, a=a, nu=nu, V=V, SignRes=SignRes)
mcmc  <- list(R=1e4, nprint=0)

out <- rhierMnlRwMixture(Data=data, Prior=prior, Mcmc=mcmc)

While much can be done to analyze the output, we will focus here on the constrained parameters on price. We first plot the posterior distributions of the price parameter for individuals \(i=1,2,3\). Notice that the posterior distributions for the selected individuals’ price parameters lie entirely below zero.


par(mfrow=c(1,3))
ind_hist <- function(mod, i) {
  hist(mod$betadraw[i , 10, ], breaks = seq(-14,0,0.5), 
     col = "dodgerblue3", border = "grey", yaxt = "n",
     xlim = c(-14,0), xlab = "", ylab = "", main = paste("Ind.",i))
}
ind_hist(out,1)
ind_hist(out,2)
ind_hist(out,3)


Next we plot a histogram of the posterior means for the 332 individual price parameters (\(\beta_{i,\text{price}}\)):


par(mfrow=c(1,1))
hist(apply(out$betadraw[ , 10, ], 1, mean), 
     xlim = c(-20,0), breaks = 20, 
     col = "firebrick2", border = "gray", xlab = "", ylab = "",
     main = "Posterior Means for Ind. Price Params, 
             With Sign Constraint")


As a point of comparison, we re-run the model without the sign constraint using the default priors (output omitted) and provide the same set of plots. Note now that the right tail of the posterior distribution of \(\beta_{2,\text{price}}\) extends to the right of zero.


data0  <- list(lgtdata = camera, p = 5)
prior0 <- list(ncomp = 5)
mcmc0  <- list(R = 1e4, nprint = 0)

out0 <- rhierMnlRwMixture(Data = data0, Prior = prior0, Mcmc = mcmc0)

par(mfrow=c(1,3))
ind_hist <- function(mod, i) {
  hist(mod$betadraw[i , 10, ], breaks = seq(-12,2,0.5), 
     col = "dodgerblue4", border = "grey", yaxt = "n",
     xlim = c(-12,2), xlab = "", ylab = "", main = paste("Ind.",i))
}
ind_hist(out0,1)
ind_hist(out0,2)
ind_hist(out0,3)

par(mfrow=c(1,1))
hist(apply(out0$betadraw[ , 10, ], 1, mean), 
     xlim = c(-15,5), breaks = 20, 
     col = "firebrick3", border = "gray", 
     xlab = "", ylab = "",
     main = "Posterior Means for Ind. Price Params, 
             No Sign Constraint")


Authored by Dan Yavorsky and Peter Rossi. Last updated June 2017.

  1. As documented in the helpfile for this function (accessible by ?bayesm::rhierMnlRwMixture), draws from the posterior of the constrained parameters (\(\beta\)) can be found in the output $betadraw while draws from the posterior of the unconstrained parameters (\(\beta^*\)) are available in $nmix$compdraw.

- - - - - - - - diff -Nru r-cran-bayesm-3.1-1/inst/doc/Constrained_MNL_Vignette.Rmd r-cran-bayesm-3.1-3+dfsg/inst/doc/Constrained_MNL_Vignette.Rmd --- r-cran-bayesm-3.1-1/inst/doc/Constrained_MNL_Vignette.Rmd 2017-06-12 19:50:12.000000000 +0000 +++ r-cran-bayesm-3.1-3+dfsg/inst/doc/Constrained_MNL_Vignette.Rmd 2019-07-13 20:32:45.000000000 +0000 @@ -76,7 +76,7 @@ $$ \mu_m \sim MVN(\bar{\mu}, \Sigma_m \otimes a_\mu^{-1}) $$ $$ \Sigma_m \sim IW(\nu, V) $$ -This specification of priors assumes that $\mu_m$ is independent of $\Sigma_m$ and that, conditional on the hyperparameters, the $\beta_i$'s are _a priori_ independent. +This specification of priors assumes that the $(\mu_m,\Sigma_m)$ are independent and that, conditional on the hyperparameters, the $\beta_i$'s are independent. $a$ implements prior beliefs on the number of normal components in the mixture with a default of 5. $\nu$ is a "tightness" parameter of the inverted-Wishart distribution and $V$ is its location matrix. Without sign constraints, they default to $\nu=k+3$ and $V=\nu I$, which has the effect of centering the prior on $I$ and making it "barely proper". $a_\mu$ is a tightness parameter for the priors on $\mu$, and when no sign constraints are imposed it defaults to an extremely diffuse prior of 0.01. @@ -290,7 +290,7 @@ ***** -_Authored by [Dan Yavorsky](dyavorsky@gmail.com) and [Peter Rossi](perossichi@gmail.com). 
Last updated June 2017._ +_ Last updated July 2019._ diff -Nru r-cran-bayesm-3.1-1/inst/include/bayesm.h r-cran-bayesm-3.1-3+dfsg/inst/include/bayesm.h --- r-cran-bayesm-3.1-1/inst/include/bayesm.h 2017-06-01 23:10:10.000000000 +0000 +++ r-cran-bayesm-3.1-3+dfsg/inst/include/bayesm.h 2019-07-13 20:11:02.000000000 +0000 @@ -1,155 +1,155 @@ -#ifndef __BAYESM_H__ -#define __BAYESM_H__ - -#include -#include -#include -#include - -using namespace arma; -using namespace Rcpp; - -//CUSTOM STRUCTS-------------------------------------------------------------------------------------------------- -//Used in rhierLinearMixture, rhierLinearModel, rhierMnlDP, rhierMnlRwMixture, rhierNegbinRw, and rsurGibbs -struct moments{ - vec y; - mat X; - mat XpX; - vec Xpy; - mat hess; -}; - -//Used in rhierLinearMixture, rhierLinearModel, rhierMnlRWMixture, and utilityFunctions.cpp -struct unireg{ - vec beta; - double sigmasq; - }; - -//Used in rhierMnlDP, rhierMnlRwMixture, and utilityFunctions.cpp -struct mnlMetropOnceOut{ - vec betadraw; - int stay; - double oldll; -}; - -//Used in rDPGibbs, rhierMnlDP, rivDP, and utilityFunctions.cpp -struct lambda{ - vec mubar; - double Amu; - double nu; - mat V; -}; - -//Used in rDPGibbs, rhierMnlDP, rivDP, and utilityFunctions.cpp -struct priorAlpha{ - double power; - double alphamin; - double alphamax; - int n; -}; - -//Used in rDPGibbs, rhierMnlDP, rivDP, and utilityFunctions.cpp -struct murooti{ - vec mu; - mat rooti; -}; - -//Used in rDPGibbs, rhierMnlDP, rivDP, and utilityFunctions.cpp -struct thetaStarIndex{ - ivec indic; - std::vector thetaStar_vector; -}; - -//Used in rhierMnlDP, rivDP -struct DPOut{ - ivec indic; - std::vector thetaStar_vector; - std::vector thetaNp1_vector; - double alpha; - int Istar; - lambda lambda_struct; -}; - -//EXPOSED FUNCTIONS----------------------------------------------------------------------------------------------- -List rwishart(double nu, mat const& V); - -List rmultireg(mat const& Y, mat const& X, mat 
const& Bbar, mat const& A, double nu, mat const& V); - -vec rdirichlet(vec const& alpha); - -double llmnl(vec const& beta, vec const& y, mat const& X); - -mat lndIChisq(double nu, double ssq, mat const& X); - -double lndMvst(vec const& x, double nu, vec const& mu, mat const& rooti, bool NORMC); - -double lndMvn(vec const& x, vec const& mu, mat const& rooti); - -double lndIWishart(double nu, mat const& V, mat const& IW); - -vec rmvst(double nu, vec const& mu, mat const& root); - -vec breg(vec const& y, mat const& X, vec const& betabar, mat const& A); - -vec cgetC(double e, int k); - -List rmixGibbs( mat const& y, mat const& Bbar, mat const& A, double nu, mat const& V, vec const& a, vec const& p, vec const& z); - //rmixGibbs contains the following support functions, which are called ONLY THROUGH rmixGibbs: drawCompsFromLabels, drawLabelsFromComps, and drawPFromLabels - -//SUPPORT FUNCTIONS (contained in utilityFunctions.cpp and trunNorm.cpp)----------------------------------------------------------- -//Used in rmvpGibbs and rmnpGibbs -vec condmom(vec const& x, vec const& mu, mat const& sigmai, int p, int j); - -//double rtrun1(double mu, double sigma,double trunpt, int above); <--NO LONGER USED - -double trunNorm(double mu,double sig, double trunpt, int above); - -//Used in rhierLinearModel, rhierLinearMixture and rhierMnlRWMixture -mat drawDelta(mat const& x,mat const& y,vec const& z,List const& comps,vec const& deltabar,mat const& Ad); - -unireg runiregG(vec const& y, mat const& X, mat const& XpX, vec const& Xpy, double sigmasq, mat const& A, vec const& Abetabar, double nu, double ssq); - -//Used in rnegbinRW and rhierNegbinRw -double llnegbin(vec const& y, vec const& lambda, double alpha, bool constant); - -double lpostbeta(double alpha, vec const& beta, mat const& X, vec const& y, vec const& betabar, mat const& rootA); - -double lpostalpha(double alpha, vec const& beta, mat const& X, vec const& y, double a, double b); - -//Used in rbprobitGibbs (uses breg1 and 
trunNorm_vec) and rordprobitGibbs (uses breg1 and rtrunVec) -vec breg1(mat const& root, mat const& X, vec const& y, vec const& Abetabar); - -vec rtrunVec(vec const& mu,vec const& sigma, vec const& a, vec const& b); - -vec trunNorm_vec(vec const& mu, vec const& sig, vec const& trunpt, vec const& above); - -//Used in rhierMnlDP and rhierMnlRwMixture -mnlMetropOnceOut mnlMetropOnce(vec const& y, mat const& X, vec const& oldbeta, double oldll,double s, mat const& incroot, vec const& betabar, mat const& rootpi); - -//Used in rDPGibbs, rhierMnlDP, rivDP -int rmultinomF(vec const& p); - -mat yden(std::vector const& thetaStar, mat const& y); - -ivec numcomp(ivec const& indic, int k); - -murooti thetaD(mat const& y, lambda const& lambda_struct); - -thetaStarIndex thetaStarDraw(ivec indic, std::vector thetaStar_vector, mat const& y, mat ydenmat, vec const& q0v, double alpha, lambda const& lambda_struct, int maxuniq); - -vec q0(mat const& y, lambda const& lambda_struct); - -vec seq_rcpp(double from, double to, int len); //kept _rcpp due to conflict with base seq function - -double alphaD(priorAlpha const& priorAlpha_struct, int Istar, int gridsize); - -murooti GD(lambda const& lambda_struct); - -lambda lambdaD(lambda const& lambda_struct, std::vector const& thetaStar_vector, vec const& alim, vec const& nulim, vec const& vlim, int gridsize); - -//FUNCTION TIMING (contained in functionTiming.cpp)--------------------------------------------------------------- -void startMcmcTimer(); -void infoMcmcTimer(int rep, int R); -void endMcmcTimer(); - -#endif +#ifndef __BAYESM_H__ +#define __BAYESM_H__ + +#include +#include +#include +#include + +using namespace arma; +using namespace Rcpp; + +//CUSTOM STRUCTS-------------------------------------------------------------------------------------------------- +//Used in rhierLinearMixture, rhierLinearModel, rhierMnlDP, rhierMnlRwMixture, rhierNegbinRw, and rsurGibbs +struct moments{ + vec y; + mat X; + mat XpX; + vec Xpy; + mat hess; +}; + 
+//Used in rhierLinearMixture, rhierLinearModel, rhierMnlRWMixture, and utilityFunctions.cpp +struct unireg{ + vec beta; + double sigmasq; + }; + +//Used in rhierMnlDP, rhierMnlRwMixture, and utilityFunctions.cpp +struct mnlMetropOnceOut{ + vec betadraw; + int stay; + double oldll; +}; + +//Used in rDPGibbs, rhierMnlDP, rivDP, and utilityFunctions.cpp +struct lambda{ + vec mubar; + double Amu; + double nu; + mat V; +}; + +//Used in rDPGibbs, rhierMnlDP, rivDP, and utilityFunctions.cpp +struct priorAlpha{ + double power; + double alphamin; + double alphamax; + int n; +}; + +//Used in rDPGibbs, rhierMnlDP, rivDP, and utilityFunctions.cpp +struct murooti{ + vec mu; + mat rooti; +}; + +//Used in rDPGibbs, rhierMnlDP, rivDP, and utilityFunctions.cpp +struct thetaStarIndex{ + ivec indic; + std::vector thetaStar_vector; +}; + +//Used in rhierMnlDP, rivDP +struct DPOut{ + ivec indic; + std::vector thetaStar_vector; + std::vector thetaNp1_vector; + double alpha; + int Istar; + lambda lambda_struct; +}; + +//EXPOSED FUNCTIONS----------------------------------------------------------------------------------------------- +List rwishart(double nu, mat const& V); + +List rmultireg(mat const& Y, mat const& X, mat const& Bbar, mat const& A, double nu, mat const& V); + +vec rdirichlet(vec const& alpha); + +double llmnl(vec const& beta, vec const& y, mat const& X); + +mat lndIChisq(double nu, double ssq, mat const& X); + +double lndMvst(vec const& x, double nu, vec const& mu, mat const& rooti, bool NORMC); + +double lndMvn(vec const& x, vec const& mu, mat const& rooti); + +double lndIWishart(double nu, mat const& V, mat const& IW); + +vec rmvst(double nu, vec const& mu, mat const& root); + +vec breg(vec const& y, mat const& X, vec const& betabar, mat const& A); + +vec cgetC(double e, int k); + +List rmixGibbs( mat const& y, mat const& Bbar, mat const& A, double nu, mat const& V, vec const& a, vec const& p, vec const& z); + //rmixGibbs contains the following support functions, which 
are called ONLY THROUGH rmixGibbs: drawCompsFromLabels, drawLabelsFromComps, and drawPFromLabels + +//SUPPORT FUNCTIONS (contained in utilityFunctions.cpp and trunNorm.cpp)----------------------------------------------------------- +//Used in rmvpGibbs and rmnpGibbs +vec condmom(vec const& x, vec const& mu, mat const& sigmai, int p, int j); + +//double rtrun1(double mu, double sigma,double trunpt, int above); <--NO LONGER USED + +double trunNorm(double mu,double sig, double trunpt, int above); + +//Used in rhierLinearModel, rhierLinearMixture and rhierMnlRWMixture +mat drawDelta(mat const& x,mat const& y,vec const& z,List const& comps,vec const& deltabar,mat const& Ad); + +unireg runiregG(vec const& y, mat const& X, mat const& XpX, vec const& Xpy, double sigmasq, mat const& A, vec const& Abetabar, double nu, double ssq); + +//Used in rnegbinRW and rhierNegbinRw +double llnegbin(vec const& y, vec const& lambda, double alpha, bool constant); + +double lpostbeta(double alpha, vec const& beta, mat const& X, vec const& y, vec const& betabar, mat const& rootA); + +double lpostalpha(double alpha, vec const& beta, mat const& X, vec const& y, double a, double b); + +//Used in rbprobitGibbs (uses breg1 and trunNorm_vec) and rordprobitGibbs (uses breg1 and rtrunVec) +vec breg1(mat const& root, mat const& X, vec const& y, vec const& Abetabar); + +vec rtrunVec(vec const& mu,vec const& sigma, vec const& a, vec const& b); + +vec trunNorm_vec(vec const& mu, vec const& sig, vec const& trunpt, vec const& above); + +//Used in rhierMnlDP and rhierMnlRwMixture +mnlMetropOnceOut mnlMetropOnce(vec const& y, mat const& X, vec const& oldbeta, double oldll,double s, mat const& incroot, vec const& betabar, mat const& rootpi); + +//Used in rDPGibbs, rhierMnlDP, rivDP +int rmultinomF(vec const& p); + +mat yden(std::vector const& thetaStar, mat const& y); + +ivec numcomp(ivec const& indic, int k); + +murooti thetaD(mat const& y, lambda const& lambda_struct); + +thetaStarIndex thetaStarDraw(ivec 
indic, std::vector thetaStar_vector, mat const& y, mat ydenmat, vec const& q0v, double alpha, lambda const& lambda_struct, int maxuniq); + +vec q0(mat const& y, lambda const& lambda_struct); + +vec seq_rcpp(double from, double to, int len); //kept _rcpp due to conflict with base seq function + +double alphaD(priorAlpha const& priorAlpha_struct, int Istar, int gridsize); + +murooti GD(lambda const& lambda_struct); + +lambda lambdaD(lambda const& lambda_struct, std::vector const& thetaStar_vector, vec const& alim, vec const& nulim, vec const& vlim, int gridsize); + +//FUNCTION TIMING (contained in functionTiming.cpp)--------------------------------------------------------------- +void startMcmcTimer(); +void infoMcmcTimer(int rep, int R); +void endMcmcTimer(); + +#endif diff -Nru r-cran-bayesm-3.1-1/man/rbayesBLP.Rd r-cran-bayesm-3.1-3+dfsg/man/rbayesBLP.Rd --- r-cran-bayesm-3.1-1/man/rbayesBLP.Rd 2017-06-12 16:37:37.000000000 +0000 +++ r-cran-bayesm-3.1-3+dfsg/man/rbayesBLP.Rd 2019-07-13 20:18:55.000000000 +0000 @@ -142,7 +142,7 @@ } } -\references{For further discussion, see \emph{Bayesian Analysis of Random Coefficient Logit Models Using Aggregate Data} by Jiang, Manchanda, and Rossi, \emph{Journal of Econometrics}, 2009. \cr \url{http://www.sciencedirect.com/science/article/pii/S0304407608002297} +\references{For further discussion, see \emph{Bayesian Analysis of Random Coefficient Logit Models Using Aggregate Data} by Jiang, Manchanda, and Rossi, \emph{Journal of Econometrics}, 2009. 
\cr } \author{Keunwoo Kim, Anderson School, UCLA, \email{keunwoo.kim@gmail.com}.} diff -Nru r-cran-bayesm-3.1-1/MD5 r-cran-bayesm-3.1-3+dfsg/MD5 --- r-cran-bayesm-3.1-1/MD5 2018-12-21 07:20:49.000000000 +0000 +++ r-cran-bayesm-3.1-3+dfsg/MD5 2019-07-29 22:50:02.000000000 +0000 @@ -1,4 +1,4 @@ -8a121ab1baae6641a553ad19cef9e7d7 *DESCRIPTION +1f10482492ae2ede0127e8d6bbd46248 *DESCRIPTION a7ac4733ceca5b46a0a1dfd391e6e72a *NAMESPACE 722ef18fc061d3605cf687388e4be2a1 *R/BayesmConstants.R 750237445e7fb47a0c72db9038bad9b3 *R/BayesmFunctions.R @@ -18,9 +18,9 @@ e8c68b7db1d51d1ce270e09648db3112 *R/momMix.R dd4a04551d37b77645a4cb38b3feb114 *R/nmat.R 72dbf320a442a6ee65a3de8bc4e6f238 *R/numEff.R -c459d8fd4c6fec89b2e76914813750ca *R/plot.bayesm.hcoef.R -2761d0e0e3bc3b97179790616880c5e1 *R/plot.bayesm.mat.R -d14eae20edc467e1557775af62c45eb0 *R/plot.bayesm.nmix.R +de723d422e8c22336df4ab23220e7bfe *R/plot.bayesm.hcoef.R +5953e4441411fec59bf0346af0a9dc27 *R/plot.bayesm.mat.R +ca6a8109a8bfe57cd53bd34b9772edfd *R/plot.bayesm.nmix.R 3b35e7b69751438d7c06ed92717892c2 *R/rbayesBLP_rcpp.R e7be9effe9226ccf40154f86491d3e89 *R/rbiNormGibbs.R 4fb5551547c7a4b043243381832983e6 *R/rbprobitgibbs_rcpp.r @@ -29,7 +29,7 @@ d6f886dd9321369927df7167ebd966ba *R/rhierLinearMixture_rcpp.r ca0ff52fd2ef0ea9eb6a1055e67fa37f *R/rhierLinearModel_rcpp.R d2b0844bb4cede20ba0a1880013b0c78 *R/rhierMnlDP_rcpp.r -197cd52a3d0111781e1028724f92d6ff *R/rhierMnlRwMixture_rcpp.r +cbfbe99308bae0d6ad294cddecb2390c *R/rhierMnlRwMixture_rcpp.R 185015f355711febecec70841089280a *R/rhiernegbinrw_rcpp.r ec71cae23dc6dc1a5d23bccaa4eefaa2 *R/rivDP_rcpp.R f4903ad0daae8987f84926f1aff5a034 *R/rivGibbs_rcpp.R @@ -44,10 +44,10 @@ 2c1b75af6274d8843d626d5492db68db *R/runireg_rcpp.r 55d3a319658b97ea46997f392d992fe2 *R/runireggibbs_rcpp.r 88c70d97e33d665f93f08e5c5a7e31e8 *R/simnhlogit.R -f6bbbc1dd104aa996c7ba62f17c1460b *R/summary.bayesm.mat.R -14b1a1bf0cd8784439a7c16d2408ddde *R/summary.bayesm.nmix.R -8fa4d18ae9076fb6a474d7483386f432 
*R/summary.bayesm.var.R -e43ef4743fb0a51e3203d382b643cad6 *build/vignette.rds +f6f4d9259ad3d8a75f64ee7e0adaf2a8 *R/summary.bayesm.mat.R +bfd00f530a84f72512a93261eedd3ef7 *R/summary.bayesm.nmix.R +38be681c0f7971a9a253395e8a56bb53 *R/summary.bayesm.var.R +2c721c2ddc851edbd4dc5f5357ef16fb *build/vignette.rds fa58bfb68b79925644d958bdd659b14c *data/Scotch.rda 8233d19a217afa65d03bbd5e1f95cec8 *data/bank.rda b95835dcc2899199080a62c9c5b5cd90 *data/camera.RData @@ -58,12 +58,12 @@ 506ede0b22d91973c03085ca17ba9d8e *data/orangeJuice.rda dbce99be695403a0df5c8874b8f6ddcd *data/tuna.rda 53e5c8b98d6b710c656fd969bf6d71b6 *inst/doc/Constrained_MNL_Vignette.R -7dd837d812cf03c9a5c260d5334ff525 *inst/doc/Constrained_MNL_Vignette.Rmd -ae0c78a91939d4f5b82ba495603748bf *inst/doc/Constrained_MNL_Vignette.html +9e127ae004b468c5feb3341bfe7794ac *inst/doc/Constrained_MNL_Vignette.Rmd +6dd96d3541ef61d9d8e4c4844cf7cc6f *inst/doc/Constrained_MNL_Vignette.html 6072b88a2b44bb212e221fe47a977045 *inst/doc/bayesm_Overview_Vignette.R -6b8bc3010578ff01c1fa1240035bdfcc *inst/doc/bayesm_Overview_Vignette.Rmd -9f6d8e55ad4930d8dec47474894c0ca0 *inst/doc/bayesm_Overview_Vignette.html -f6ce98a7f6aa9e853a3b4cb4d3db254f *inst/include/bayesm.h +b9488fb44d97ec4bf8d1bac0a2094c65 *inst/doc/bayesm_Overview_Vignette.Rmd +56dee5fdbb5ad784a3d5ba8d5718a9a0 *inst/doc/bayesm_Overview_Vignette.html +5aa8cada5375ea39eb2ce67359df88c3 *inst/include/bayesm.h c702dda44e13952523d39224e34928f6 *man/Scotch.Rd 73c5d32b59a0824091f6a8f104826943 *man/bank.Rd c06c7bff1ae48639df2df727f1d70861 *man/breg.Rd @@ -98,7 +98,7 @@ 378003d8aeceeb033be1855c4fc9f006 *man/plot.bayesm.mat.Rd d6194bc76d2014e7937a62e1d77d1ca3 *man/plot.bayesm.nmix.Rd ff56b11aea220dd888c6ba8c49c136b4 *man/rDPGibbs.Rd -b8658b4e2e44c44dfe4b3d23d34fe9ae *man/rbayesBLP.Rd +7dc25f80ccf1f7207ed16dd3a0ddb65f *man/rbayesBLP.Rd 8604546a510dd5fb9f1c8644fbc6c533 *man/rbiNormGibbs.Rd 5ff1b6bc2e6c0cba3c3f25cece7f70c8 *man/rbprobitGibbs.Rd b5d6cd8b9c59f80ad749ca0fc4f9b605 
*man/rdirichlet.Rd @@ -173,5 +173,5 @@ 1524fe60c6fea0029759326afe3a163c *src/rwishart_rcpp.cpp f00ee55ec4e2ec8ebf0e33ca555f8c6f *src/trunNorm.cpp 28301b73be4deb83a4955b68bfc8371f *src/utilityFunctions.cpp -7dd837d812cf03c9a5c260d5334ff525 *vignettes/Constrained_MNL_Vignette.Rmd -6b8bc3010578ff01c1fa1240035bdfcc *vignettes/bayesm_Overview_Vignette.Rmd +9e127ae004b468c5feb3341bfe7794ac *vignettes/Constrained_MNL_Vignette.Rmd +b9488fb44d97ec4bf8d1bac0a2094c65 *vignettes/bayesm_Overview_Vignette.Rmd diff -Nru r-cran-bayesm-3.1-1/R/plot.bayesm.hcoef.R r-cran-bayesm-3.1-3+dfsg/R/plot.bayesm.hcoef.R --- r-cran-bayesm-3.1-1/R/plot.bayesm.hcoef.R 2018-12-20 18:12:12.000000000 +0000 +++ r-cran-bayesm-3.1-3+dfsg/R/plot.bayesm.hcoef.R 2019-07-28 05:47:05.000000000 +0000 @@ -18,6 +18,8 @@ R=d[3] if(missing(names)) {names=as.character(1:nvar)} if(R < 100) {cat("fewer than 100 draws submitted \n"); return(invisible())} + if(burnin > R) {cat("burnin set larger than number of draws submitted (chk keep) \n"); + return(invisible())} # # plot posterior distributions of nvar coef for 30 rand units # @@ -31,7 +33,7 @@ colnames(ext)=as.character(rsam) out=boxplot(ext,plot=FALSE,...) out$stats=apply(ext,2,quantile,probs=c(0,.05,.5,.95,1)) - bxp(out,xlab="Cross-sectional Unit",main=paste("Coefficients on Var ",names[var],sep=""),boxfill="magenta",...) + bxp(out,xlab="Cross-sectional Unit",main=paste("Var ",names[var]," Coefficient",sep=""),boxfill="magenta",...) } # # plot posterior means for each var diff -Nru r-cran-bayesm-3.1-1/R/plot.bayesm.mat.R r-cran-bayesm-3.1-3+dfsg/R/plot.bayesm.mat.R --- r-cran-bayesm-3.1-1/R/plot.bayesm.mat.R 2015-08-24 21:28:45.000000000 +0000 +++ r-cran-bayesm-3.1-3+dfsg/R/plot.bayesm.mat.R 2019-07-12 22:27:30.000000000 +0000 @@ -1,59 +1,61 @@ -plot.bayesm.mat=function(x,names,burnin=trunc(.1*nrow(X)),tvalues,TRACEPLOT=TRUE,DEN=TRUE,INT=TRUE, - CHECK_NDRAWS=TRUE,...){ -# -# S3 method to print matrices of draws the object X is of class "bayesm.mat" -# -# P. 
Rossi 2/07 -# - X=x - if(mode(X) == "list") stop("list entered \n Possible Fixup: extract from list \n") - if(mode(X) !="numeric") stop("Requires numeric argument \n") - op=par(no.readonly=TRUE) - on.exit(par(op)) - on.exit(devAskNewPage(FALSE),add=TRUE) - if(is.null(attributes(X)$dim)) X=as.matrix(X) - nx=ncol(X) - if(nrow(X) < 100 & CHECK_NDRAWS) {cat("fewer than 100 draws submitted \n"); return(invisible())} - if(!missing(tvalues)){ - if(mode(tvalues) !="numeric") {stop("tvalues must be a numeric vector \n")} - else - {if(length(tvalues)!=nx) stop("tvalues are wrong length \n")} - } - if(nx==1) par(mfrow=c(1,1)) - if(nx==2) par(mfrow=c(2,1)) - if(nx==3) par(mfrow=c(3,1)) - if(nx==4) par(mfrow=c(2,2)) - if(nx>=5) par(mfrow=c(3,2)) - - if(missing(names)) {names=as.character(1:nx)} - if (DEN) ylabtxt="density" else ylabtxt="freq" - devAskNewPage(TRUE) - for(index in 1:nx){ - hist(X[(burnin+1):nrow(X),index],xlab="",ylab=ylabtxt,main=names[index],freq=!DEN,col="magenta",...) - if(!missing(tvalues)) abline(v=tvalues[index],lwd=2,col="blue") - if(INT){ - quants=quantile(X[(burnin+1):nrow(X),index],prob=c(.025,.975)) - mean=mean(X[(burnin+1):nrow(X),index]) - semean=numEff(X[(burnin+1):nrow(X),index])$stderr - text(quants[1],0,"|",cex=3.0,col="green") - text(quants[2],0,"|",cex=3.0,col="green") - text(mean,0,"|",cex=3.0,col="red") - text(mean-2*semean,0,"|",cex=2,col="yellow") - text(mean+2*semean,0,"|",cex=2,col="yellow") - } - } - if(TRACEPLOT){ - if(nx==1) par(mfrow=c(1,2)) - if(nx==2) par(mfrow=c(2,2)) - if(nx>=3) par(mfrow=c(3,2)) - for(index in 1:nx){ - plot(as.vector(X[,index]),xlab="",ylab="",main=names[index],type="l",col="red") - if(!missing(tvalues)) abline(h=tvalues[index],lwd=2,col="blue") - if(var(X[,index])>1.0e-20) {acf(as.vector(X[,index]),xlab="",ylab="",main="")} - else - {plot.default(X[,index],xlab="",ylab="",type="n",main="No ACF Produced")} - } - } - invisible() -} - 
+plot.bayesm.mat=function(x,names,burnin=trunc(.1*nrow(X)),tvalues,TRACEPLOT=TRUE,DEN=TRUE,INT=TRUE, + CHECK_NDRAWS=TRUE,...){ +# +# S3 method to print matrices of draws the object X is of class "bayesm.mat" +# +# P. Rossi 2/07 +# + X=x + if(mode(X) == "list") stop("list entered \n Possible Fixup: extract from list \n") + if(mode(X) !="numeric") stop("Requires numeric argument \n") + op=par(no.readonly=TRUE) + on.exit(par(op)) + on.exit(devAskNewPage(FALSE),add=TRUE) + if(is.null(attributes(X)$dim)) X=as.matrix(X) + nx=ncol(X) + if(nrow(X) < 100 & CHECK_NDRAWS) {cat("fewer than 100 draws submitted \n"); return(invisible())} + if(burnin > nrow(X)) {cat("burnin set larger than number of draws submitted (chk keep) \n"); + return(invisible())} + if(!missing(tvalues)){ + if(mode(tvalues) !="numeric") {stop("tvalues must be a numeric vector \n")} + else + {if(length(tvalues)!=nx) stop("tvalues are wrong length \n")} + } + if(nx==1) par(mfrow=c(1,1)) + if(nx==2) par(mfrow=c(2,1)) + if(nx==3) par(mfrow=c(3,1)) + if(nx==4) par(mfrow=c(2,2)) + if(nx>=5) par(mfrow=c(3,2)) + + if(missing(names)) {names=as.character(1:nx)} + if (DEN) ylabtxt="density" else ylabtxt="freq" + devAskNewPage(TRUE) + for(index in 1:nx){ + hist(X[(burnin+1):nrow(X),index],xlab="",ylab=ylabtxt,main=names[index],freq=!DEN,col="magenta",...) 
+ if(!missing(tvalues)) abline(v=tvalues[index],lwd=2,col="blue") + if(INT){ + quants=quantile(X[(burnin+1):nrow(X),index],prob=c(.025,.975)) + mean=mean(X[(burnin+1):nrow(X),index]) + semean=numEff(X[(burnin+1):nrow(X),index])$stderr + text(quants[1],0,"|",cex=3.0,col="green") + text(quants[2],0,"|",cex=3.0,col="green") + text(mean,0,"|",cex=3.0,col="red") + text(mean-2*semean,0,"|",cex=2,col="yellow") + text(mean+2*semean,0,"|",cex=2,col="yellow") + } + } + if(TRACEPLOT){ + if(nx==1) par(mfrow=c(1,2)) + if(nx==2) par(mfrow=c(2,2)) + if(nx>=3) par(mfrow=c(3,2)) + for(index in 1:nx){ + plot(as.vector(X[,index]),xlab="",ylab="",main=names[index],type="l",col="red") + if(!missing(tvalues)) abline(h=tvalues[index],lwd=2,col="blue") + if(var(X[,index])>1.0e-20) {acf(as.vector(X[,index]),xlab="",ylab="",main="")} + else + {plot.default(X[,index],xlab="",ylab="",type="n",main="No ACF Produced")} + } + } + invisible() +} + diff -Nru r-cran-bayesm-3.1-1/R/plot.bayesm.nmix.R r-cran-bayesm-3.1-3+dfsg/R/plot.bayesm.nmix.R --- r-cran-bayesm-3.1-1/R/plot.bayesm.nmix.R 2012-05-14 18:45:40.000000000 +0000 +++ r-cran-bayesm-3.1-3+dfsg/R/plot.bayesm.nmix.R 2019-07-12 22:27:56.000000000 +0000 @@ -1,105 +1,107 @@ -plot.bayesm.nmix=function(x,names,burnin=trunc(.1*nrow(probdraw)),Grid,bi.sel,nstd=2,marg=TRUE, - Data,ngrid=50,ndraw=200,...){ -# -# S3 method to plot normal mixture marginal and bivariate densities -# nmixlist is a list of 3 components, nmixlist[[1]]: array of mix comp prob draws, -# mmixlist[[2]] is not used, nmixlist[[3]] is list of draws of components -# P. Rossi 2/07 -# P. Rossi 3/07 fixed problem with dropping dimensions on probdraw (if ncomp=1) -# P. Rossi 2/08 added marg flag to plot marginals -# P. 
Rossi 3/08 added Data argument to paint histograms on the marginal plots -# - nmixlist=x - if(mode(nmixlist) != "list") stop(" Argument must be a list \n") - probdraw=nmixlist[[1]]; compdraw=nmixlist[[3]] - if(!is.matrix(probdraw)) stop(" First element of list (probdraw) must be a matrix \n") - if(mode(compdraw) != "list") stop(" Third element of list (compdraw) must be a list \n") - op=par(no.readonly=TRUE) - on.exit(par(op)) - R=nrow(probdraw) - if(R < 100) {cat(" fewer than 100 draws submitted \n"); return(invisible())} - datad=length(compdraw[[1]][[1]]$mu) - OneDimData=(datad==1) - if(missing(bi.sel)) bi.sel=list(c(1,2)) # default to the first pair of variables - ind=as.integer(seq(from=(burnin+1),to=R,length.out=max(ndraw,trunc(.05*R)))) - if(missing(names)) {names=as.character(1:datad)} - if(!missing(Data)){ - if(!is.matrix(Data)) stop("Data argument must be a matrix \n") - if(ncol(Data)!= datad) stop("Data matrix is of wrong dimension \n") - } - if(mode(bi.sel) != "list") stop("bi.sel must be as list, e.g. 
bi.sel=list(c(1,2),c(3,4)) \n") - - if(missing(Grid)){ - Grid=matrix(0,nrow=ngrid,ncol=datad) - if(!missing(Data)) - {for(i in 1:datad) Grid[,i]=c(seq(from=range(Data[,i])[1],to=range(Data[,i])[2],length=ngrid))} - else - { - out=momMix(probdraw[ind,,drop=FALSE],compdraw[ind]) - mu=out$mu - sd=out$sd - for(i in 1:datad ) Grid[,i]=c(seq(from=(mu[i]-nstd*sd[i]), - to=(mu[i]+nstd*sd[i]),length=ngrid)) - } - } - # - # plot posterior mean of marginal densities - # - if(marg){ - mden=eMixMargDen(Grid,probdraw[ind,,drop=FALSE],compdraw[ind]) - nx=datad - if(nx==1) par(mfrow=c(1,1)) - if(nx==2) par(mfrow=c(2,1)) - if(nx==3) par(mfrow=c(3,1)) - if(nx==4) par(mfrow=c(2,2)) - if(nx>=5) par(mfrow=c(3,2)) - - for(index in 1:nx){ - if(index == 2) par(ask=dev.interactive()) - plot(range(Grid[,index]),c(0,1.1*max(mden[,index])),type="n",xlab="",ylab="density") - title(names[index]) - if(!missing(Data)){ - deltax=(range(Grid[,index])[2]-range(Grid[,index])[1])/nrow(Grid) - hist(Data[,index],xlim=range(Grid[,index]), - freq=FALSE,col="yellow",breaks=max(20,.1*nrow(Data)),add=TRUE) - lines(Grid[,index],mden[,index]/(sum(mden[,index])*deltax),col="red",lwd=2)} - else - {lines(Grid[,index],mden[,index],col="black",lwd=2) - polygon(c(Grid[1,index],Grid[,index],Grid[nrow(Grid),index]),c(0,mden[,index],0),col="magenta")} - } - } - # - # now plot bivariates in list bi.sel - # - if(!OneDimData){ - par(ask=dev.interactive()) - nsel=length(bi.sel) - den=array(0,dim=c(ngrid,ngrid,nsel)) - lstxixj=NULL - for(sel in 1:nsel){ - i=bi.sel[[sel]][1] - j=bi.sel[[sel]][2] - xi=Grid[,i] - xj=Grid[,j] - lstxixj[[sel]]=list(xi,xj) - for(elt in ind){ - den[,,sel]=den[,,sel]+mixDenBi(i,j,xi,xj,probdraw[elt,,drop=FALSE],compdraw[[elt]]) - } - den[,,sel]=den[,,sel]/sum(den[,,sel]) - } - nx=nsel - par(mfrow=c(1,1)) - - for(index in 1:nx){ - xi=unlist(lstxixj[[index]][1]) - xj=unlist(lstxixj[[index]][2]) - xlabtxt=names[bi.sel[[index]][1]] - ylabtxt=names[bi.sel[[index]][2]] - 
image(xi,xj,den[,,index],col=terrain.colors(100),xlab=xlabtxt,ylab=ylabtxt) - contour(xi,xj,den[,,index],add=TRUE,drawlabels=FALSE) - } - } - - invisible() -} - +plot.bayesm.nmix=function(x,names,burnin=trunc(.1*nrow(probdraw)),Grid,bi.sel,nstd=2,marg=TRUE, + Data,ngrid=50,ndraw=200,...){ +# +# S3 method to plot normal mixture marginal and bivariate densities +# nmixlist is a list of 3 components, nmixlist[[1]]: array of mix comp prob draws, +# mmixlist[[2]] is not used, nmixlist[[3]] is list of draws of components +# P. Rossi 2/07 +# P. Rossi 3/07 fixed problem with dropping dimensions on probdraw (if ncomp=1) +# P. Rossi 2/08 added marg flag to plot marginals +# P. Rossi 3/08 added Data argument to paint histograms on the marginal plots +# + nmixlist=x + if(mode(nmixlist) != "list") stop(" Argument must be a list \n") + probdraw=nmixlist[[1]]; compdraw=nmixlist[[3]] + if(!is.matrix(probdraw)) stop(" First element of list (probdraw) must be a matrix \n") + if(mode(compdraw) != "list") stop(" Third element of list (compdraw) must be a list \n") + op=par(no.readonly=TRUE) + on.exit(par(op)) + R=nrow(probdraw) + if(R < 100) {cat(" fewer than 100 draws submitted \n"); return(invisible())} + if(burnin > R) {cat("burnin set larger than number of draws submitted (chk keep) \n"); + return(invisible())} + datad=length(compdraw[[1]][[1]]$mu) + OneDimData=(datad==1) + if(missing(bi.sel)) bi.sel=list(c(1,2)) # default to the first pair of variables + ind=as.integer(seq(from=(burnin+1),to=R,length.out=max(ndraw,trunc(.05*R)))) + if(missing(names)) {names=as.character(1:datad)} + if(!missing(Data)){ + if(!is.matrix(Data)) stop("Data argument must be a matrix \n") + if(ncol(Data)!= datad) stop("Data matrix is of wrong dimension \n") + } + if(mode(bi.sel) != "list") stop("bi.sel must be as list, e.g. 
bi.sel=list(c(1,2),c(3,4)) \n") + + if(missing(Grid)){ + Grid=matrix(0,nrow=ngrid,ncol=datad) + if(!missing(Data)) + {for(i in 1:datad) Grid[,i]=c(seq(from=range(Data[,i])[1],to=range(Data[,i])[2],length=ngrid))} + else + { + out=momMix(probdraw[ind,,drop=FALSE],compdraw[ind]) + mu=out$mu + sd=out$sd + for(i in 1:datad ) Grid[,i]=c(seq(from=(mu[i]-nstd*sd[i]), + to=(mu[i]+nstd*sd[i]),length=ngrid)) + } + } + # + # plot posterior mean of marginal densities + # + if(marg){ + mden=eMixMargDen(Grid,probdraw[ind,,drop=FALSE],compdraw[ind]) + nx=datad + if(nx==1) par(mfrow=c(1,1)) + if(nx==2) par(mfrow=c(2,1)) + if(nx==3) par(mfrow=c(3,1)) + if(nx==4) par(mfrow=c(2,2)) + if(nx>=5) par(mfrow=c(3,2)) + + for(index in 1:nx){ + if(index == 2) par(ask=dev.interactive()) + plot(range(Grid[,index]),c(0,1.1*max(mden[,index])),type="n",xlab="",ylab="density") + title(names[index]) + if(!missing(Data)){ + deltax=(range(Grid[,index])[2]-range(Grid[,index])[1])/nrow(Grid) + hist(Data[,index],xlim=range(Grid[,index]), + freq=FALSE,col="yellow",breaks=max(20,.1*nrow(Data)),add=TRUE) + lines(Grid[,index],mden[,index]/(sum(mden[,index])*deltax),col="red",lwd=2)} + else + {lines(Grid[,index],mden[,index],col="black",lwd=2) + polygon(c(Grid[1,index],Grid[,index],Grid[nrow(Grid),index]),c(0,mden[,index],0),col="magenta")} + } + } + # + # now plot bivariates in list bi.sel + # + if(!OneDimData){ + par(ask=dev.interactive()) + nsel=length(bi.sel) + den=array(0,dim=c(ngrid,ngrid,nsel)) + lstxixj=NULL + for(sel in 1:nsel){ + i=bi.sel[[sel]][1] + j=bi.sel[[sel]][2] + xi=Grid[,i] + xj=Grid[,j] + lstxixj[[sel]]=list(xi,xj) + for(elt in ind){ + den[,,sel]=den[,,sel]+mixDenBi(i,j,xi,xj,probdraw[elt,,drop=FALSE],compdraw[[elt]]) + } + den[,,sel]=den[,,sel]/sum(den[,,sel]) + } + nx=nsel + par(mfrow=c(1,1)) + + for(index in 1:nx){ + xi=unlist(lstxixj[[index]][1]) + xj=unlist(lstxixj[[index]][2]) + xlabtxt=names[bi.sel[[index]][1]] + ylabtxt=names[bi.sel[[index]][2]] + 
image(xi,xj,den[,,index],col=terrain.colors(100),xlab=xlabtxt,ylab=ylabtxt) + contour(xi,xj,den[,,index],add=TRUE,drawlabels=FALSE) + } + } + + invisible() +} + diff -Nru r-cran-bayesm-3.1-1/R/rhierMnlRwMixture_rcpp.r r-cran-bayesm-3.1-3+dfsg/R/rhierMnlRwMixture_rcpp.r --- r-cran-bayesm-3.1-1/R/rhierMnlRwMixture_rcpp.r 2017-06-01 23:10:10.000000000 +0000 +++ r-cran-bayesm-3.1-3+dfsg/R/rhierMnlRwMixture_rcpp.r 1970-01-01 00:00:00.000000000 +0000 @@ -1,332 +0,0 @@ -rhierMnlRwMixture=function(Data,Prior,Mcmc){ -# -# revision history: -# 12/04 changed by rossi to fix bug in drawdelta when there is zero/one unit in a mixture component -# 09/05 added loglike output, changed to reflect new argument order in llmnl, mnlHess -# 12/05 changed weighting scheme to (1-w)logl_i + w*Lbar (normalized) -# 03/07 added classes -# 09/08 changed Dirichlet a check -# 04/15 by Wayne Taylor: added nprint option to MCMC argument -# 07/16 by Wayne Taylor: added sign restrictions -# 10/10 by Dan Yavorsky: changed default priors when sign restrictions imposed -# -# purpose: run hierarchical mnl logit model with mixture of normals -# using RW and cov(RW inc) = (hess_i + Vbeta^-1)^-1 -# uses normal approximation to pooled likelihood -# -# Arguments: -# Data contains a list of (p,lgtdata, and possibly Z) -# p is number of choice alternatives -# lgtdata is a list of lists (one list per unit) -# lgtdata[[i]]=list(y,X) -# y is a vector indicating alternative chosen -# integers 1:p indicate alternative -# X is a length(y)*p x nvar matrix of values of -# X vars including intercepts -# Z is an length(lgtdata) x nz matrix of values of variables -# note: Z should NOT contain an intercept -# Prior contains a list of (deltabar,Ad,mubar,Amu,nu,V,ncomp,SignRes) -# ncomp is the number of components in normal mixture -# if elements of Prior (other than ncomp) do not exist, defaults are used -# SignRes is a vector of sign restrictions -# Mcmc contains a list of (s,c,R,keep,nprint) -# -# Output: as list 
containing -# Deltadraw R/keep x nz*nvar matrix of draws of Delta, first row is initial value -# betadraw is nlgt x nvar x R/keep array of draws of betas -# probdraw is R/keep x ncomp matrix of draws of probs of mixture components -# compdraw is a list of list of lists (length R/keep) -# compdraw[[rep]] is the repth draw of components for mixtures -# loglike log-likelikelhood at each kept draw -# -# Priors: -# beta_i = D %*% z[i,] + u_i -# u_i ~ N(mu_ind[i],Sigma_ind[i]) -# ind[i] ~multinomial(p) -# p ~ dirichlet (a) -# D is a k x nz array -# delta= vec(D) ~ N(deltabar,A_d^-1) -# mu_j ~ N(mubar,A_mu^-1(x)Sigma_j) -# Sigma_j ~ IW(nu,V^-1) -# ncomp is number of components -# -# MCMC parameters -# s is the scaling parameter for the RW inc covariance matrix; s^2 Var is inc cov -# matrix -# w is parameter for weighting function in fractional likelihood -# w is the weight on the normalized pooled likelihood -# R is number of draws -# keep is thinning parameter, keep every keepth draw -# nprint - print estimated time remaining on every nprint'th draw -# -# check arguments -# -if(missing(Data)) {pandterm("Requires Data argument -- list of p,lgtdata, and (possibly) Z")} - if(is.null(Data$p)) {pandterm("Requires Data element p (# choice alternatives)") } - p=Data$p - if(is.null(Data$lgtdata)) {pandterm("Requires Data element lgtdata (list of data for each unit)")} - lgtdata=Data$lgtdata - nlgt=length(lgtdata) - drawdelta=TRUE -if(is.null(Data$Z)) { cat("Z not specified",fill=TRUE); fsh() ; drawdelta=FALSE} - else {if (!is.matrix(Data$Z)) {pandterm("Z must be a matrix")} - else {if (nrow(Data$Z) != nlgt) {pandterm(paste("Nrow(Z) ",nrow(Z),"ne number logits ",nlgt))} - else {Z=Data$Z}}} - if(drawdelta) { - nz=ncol(Z) - colmeans=apply(Z,2,mean) - if(sum(colmeans) > .00001) - {pandterm(paste("Z does not appear to be de-meaned: colmeans= ",colmeans))} - } -# -# check lgtdata for validity -# -ypooled=NULL -Xpooled=NULL -if(!is.null(lgtdata[[1]]$X & is.matrix(lgtdata[[1]]$X))) 
{oldncol=ncol(lgtdata[[1]]$X)} -for (i in 1:nlgt) -{ - if(is.null(lgtdata[[i]]$y)) {pandterm(paste0("Requires element y of lgtdata[[",i,"]]"))} - if(is.null(lgtdata[[i]]$X)) {pandterm(paste0("Requires element X of lgtdata[[",i,"]]"))} - if(!is.matrix(lgtdata[[i]]$X)) {pandterm(paste0("lgtdata[[",i,"]]$X must be a matrix"))} - if(!is.vector(lgtdata[[i]]$y, mode = "numeric") & !is.vector(lgtdata[[i]]$y, mode = "logical") & !is.matrix(lgtdata[[i]]$y)) - {pandterm(paste0("lgtdata[[",i,"]]$y must be a numeric or logical vector or matrix"))} - if(is.matrix(lgtdata[[i]]$y)) { if(ncol(lgtdata[[i]]$y)>1) { pandterm(paste0("lgtdata[[",i,"]]$y must be a vector or one-column matrix")) } } - ypooled=c(ypooled,lgtdata[[i]]$y) - nrowX=nrow(lgtdata[[i]]$X) - if((nrowX/p) !=length(lgtdata[[i]]$y)) {pandterm(paste("nrow(X) ne p*length(yi); exception at unit",i))} - newncol=ncol(lgtdata[[i]]$X) - if(newncol != oldncol) {pandterm(paste("All X elements must have same # of cols; exception at unit",i))} - Xpooled=rbind(Xpooled,lgtdata[[i]]$X) - oldncol=newncol -} -nvar=ncol(Xpooled) -levely=as.numeric(levels(as.factor(ypooled))) -if(length(levely) != p) {pandterm(paste("y takes on ",length(levely)," values -- must be = p"))} -bady=FALSE -for (i in 1:p ) -{ - if(levely[i] != i) bady=TRUE -} -cat("Table of Y values pooled over all units",fill=TRUE) -print(table(ypooled)) -if (bady) - {pandterm("Invalid Y")} -# -# check on prior -# -if(missing(Prior)) {pandterm("Requires Prior list argument (at least ncomp)")} -if(is.null(Prior$ncomp)) {pandterm("Requires Prior element ncomp (num of mixture components)")} else {ncomp=Prior$ncomp} -if(is.null(Prior$SignRes)) {SignRes=rep(0,nvar)} else {SignRes=Prior$SignRes} -if(length(SignRes) != nvar) {pandterm("The length SignRes must be equal to the dimension of X")} -if(sum(!(SignRes %in% c(-1,0,1))>0)) {pandterm("All elements of SignRes must be equal to -1, 0, or 1")} -if(is.null(Prior$mubar) & sum(abs(SignRes))==0) { - mubar=matrix(rep(0,nvar),nrow=1) 
- } else { - if(is.null(Prior$mubar) & sum(abs(SignRes)) >0) { - mubar=matrix(rep(0,nvar)+2*abs(SignRes),nrow=1) - } else { - mubar=matrix(Prior$mubar,nrow=1) } } -if(ncol(mubar) != nvar) {pandterm(paste("mubar must have ncomp cols, ncol(mubar)= ",ncol(mubar)))} -if(is.null(Prior$Amu) & sum(abs(SignRes))==0) { - Amu=matrix(BayesmConstant.A,ncol=1) - } else { - if(is.null(Prior$Amu) & sum(abs(SignRes)) >0) { - Amu=matrix(BayesmConstant.A*10,ncol=1) - } else {Amu=matrix(Prior$Amu,ncol=1) } } -if(ncol(Amu) != 1 | nrow(Amu) != 1) {pandterm("Am must be a 1 x 1 array")} -if(is.null(Prior$nu) & sum(abs(SignRes))==0) { - nu=nvar+BayesmConstant.nuInc - } else { - if(is.null(Prior$nu) & sum(abs(SignRes)) >0) { - nu=nvar+BayesmConstant.nuInc+12 - } else { - nu=Prior$nu } } -if(nu < 1) {pandterm("invalid nu value")} -if(is.null(Prior$V) & sum(abs(SignRes))==0) { - V=nu*diag(nvar) - } else { - if(is.null(Prior$V) & sum(abs(SignRes)) >0) { - V=nu*diag(abs(SignRes)*0.1+(!abs(SignRes))*4) - } else { - V=Prior$V } } -if(sum(dim(V)==c(nvar,nvar)) !=2) pandterm("Invalid V in prior") -if(is.null(Prior$Ad) & drawdelta) {Ad=BayesmConstant.A*diag(nvar*nz)} else {Ad=Prior$Ad} -if(drawdelta) {if(ncol(Ad) != nvar*nz | nrow(Ad) != nvar*nz) {pandterm("Ad must be nvar*nz x nvar*nz")}} -if(is.null(Prior$deltabar)& drawdelta) {deltabar=rep(0,nz*nvar)} else {deltabar=Prior$deltabar} -if(drawdelta) {if(length(deltabar) != nz*nvar) {pandterm("deltabar must be of length nvar*nz")}} -if(is.null(Prior$a)) { a=rep(BayesmConstant.a,ncomp)} else {a=Prior$a} -if(length(a) != ncomp) {pandterm("Requires dim(a)= ncomp (no of components)")} -bada=FALSE -for(i in 1:ncomp) { if(a[i] < 0) bada=TRUE} -if(bada) pandterm("invalid values in a vector") - -if(is.null(Prior$nu)&sum(abs(SignRes))>0) {nu = nvar+15} -if(is.null(Prior$Amu)&sum(abs(SignRes))>0) {Amu = matrix(.1)} -if(is.null(Prior$V)&sum(abs(SignRes))>0) {V = nu*(diag(nvar)-diag(abs(SignRes)>0)*.8)} - -# -# check on Mcmc -# -if(missing(Mcmc)) - 
{pandterm("Requires Mcmc list argument")} -else - { - if(is.null(Mcmc$s)) {s=BayesmConstant.RRScaling/sqrt(nvar)} else {s=Mcmc$s} - if(is.null(Mcmc$w)) {w=BayesmConstant.w} else {w=Mcmc$w} - if(is.null(Mcmc$keep)) {keep=BayesmConstant.keep} else {keep=Mcmc$keep} - if(is.null(Mcmc$R)) {pandterm("Requires R argument in Mcmc list")} else {R=Mcmc$R} - if(is.null(Mcmc$nprint)) {nprint=BayesmConstant.nprint} else {nprint=Mcmc$nprint} - if(nprint<0) {pandterm('nprint must be an integer greater than or equal to 0')} - } -# -# print out problem -# -cat(" ",fill=TRUE) -cat("Starting MCMC Inference for Hierarchical Logit:",fill=TRUE) -cat(" Normal Mixture with",ncomp,"components for first stage prior",fill=TRUE) -cat(paste(" ",p," alternatives; ",nvar," variables in X"),fill=TRUE) -cat(paste(" for ",nlgt," cross-sectional units"),fill=TRUE) -cat(" ",fill=TRUE) -cat("Prior Parms: ",fill=TRUE) -cat("nu =",nu,fill=TRUE) -cat("V ",fill=TRUE) -print(V) -cat("mubar ",fill=TRUE) -print(mubar) -cat("Amu ", fill=TRUE) -print(Amu) -cat("a ",fill=TRUE) -print(a) -if(drawdelta) -{ - cat("deltabar",fill=TRUE) - print(deltabar) - cat("Ad",fill=TRUE) - print(Ad) -} -cat(" ",fill=TRUE) -cat("MCMC Parms: ",fill=TRUE) -cat(paste("s=",round(s,3)," w= ",w," R= ",R," keep= ",keep," nprint= ",nprint),fill=TRUE) -cat("",fill=TRUE) - -oldbetas = matrix(double(nlgt * nvar), ncol = nvar) - -#-------------------------------------------------------------------------------------------------- -# -# create functions needed -# -llmnlFract= - function(beta,y,X,betapooled,rootH,w,wgt,SignRes = rep(0,ncol(X))){ - z=as.vector(rootH%*%(beta-betapooled)) - return((1-w)*llmnl_con(beta,y,X,SignRes)+w*wgt*(-.5*(z%*%z))) - } - -mnlHess_con=function (betastar, y, X, SignRes = rep(0,ncol(X))) { - - #Reparameterize beta to betastar to allow for sign restrictions - beta = betastar - beta[SignRes!=0] = SignRes[SignRes!=0]*exp(beta[SignRes!=0]) - - n = length(y) - j = nrow(X)/n - k = ncol(X) - Xbeta = X %*% beta - Xbeta = 
matrix(Xbeta, byrow = T, ncol = j) - Xbeta = exp(Xbeta) - iota = c(rep(1, j)) - denom = Xbeta %*% iota - Prob = Xbeta/as.vector(denom) - Hess = matrix(double(k * k), ncol = k) - for (i in 1:n) { - p = as.vector(Prob[i, ]) - A = diag(p) - outer(p, p) - Xt = X[(j * (i - 1) + 1):(j * i), ] - Hess = Hess + crossprod(Xt, A) %*% Xt - } - return(Hess) -} - -#------------------------------------------------------------------------------------------------------- -# -# intialize compute quantities for Metropolis -# -cat("initializing Metropolis candidate densities for ",nlgt," units ...",fill=TRUE) -fsh() -# -# now go thru and computed fraction likelihood estimates and hessians -# -# Lbar=log(pooled likelihood^(n_i/N)) -# -# fraction loglike = (1-w)*loglike_i + w*Lbar -# -betainit=c(rep(0,nvar)) -# -# compute pooled optimum -# -out=optim(betainit,llmnl_con,method="BFGS",control=list( fnscale=-1,trace=0,reltol=1e-6), - X=Xpooled,y=ypooled,SignRes=SignRes) -betapooled=out$par -H=mnlHess_con(betapooled,ypooled,Xpooled,SignRes) -rootH=chol(H) -for (i in 1:nlgt) -{ - wgt=length(lgtdata[[i]]$y)/length(ypooled) - out=optim(betapooled,llmnlFract,method="BFGS",control=list( fnscale=-1,trace=0,reltol=1e-4), - X=lgtdata[[i]]$X,y=lgtdata[[i]]$y,betapooled=betapooled,rootH=rootH,w=w,wgt=wgt,SignRes=SignRes) - if(out$convergence == 0) { - hess=mnlHess_con(out$par,lgtdata[[i]]$y,lgtdata[[i]]$X,SignRes) - lgtdata[[i]]=c(lgtdata[[i]],list(converge=1,betafmle=out$par,hess=hess)) } - else - { lgtdata[[i]]=c(lgtdata[[i]],list(converge=0,betafmle=c(rep(0,nvar)), - hess=diag(nvar))) } - oldbetas[i,]=lgtdata[[i]]$betafmle - if(i%%50 ==0) cat(" completed unit #",i,fill=TRUE) - fsh() -} -# -# initialize values -# -# set initial values for the indicators -# ind is of length(nlgt) and indicates which mixture component this obs -# belongs to. 
-# -ind=NULL -ninc=floor(nlgt/ncomp) -for (i in 1:(ncomp-1)) {ind=c(ind,rep(i,ninc))} -if(ncomp != 1) {ind = c(ind,rep(ncomp,nlgt-length(ind)))} else {ind=rep(1,nlgt)} -# -# initialize probs -# -oldprob=rep(1/ncomp,ncomp) -# -#initialize delta -# -if (drawdelta){ - olddelta = rep(0,nz*nvar) -} else { #send placeholders to the _loop function if there is no Z matrix - olddelta = 0 - Z = matrix(0) - deltabar = 0 - Ad = matrix(0) -} - -################################################################### -# Wayne Taylor -# 09/22/2014 -################################################################### -draws = rhierMnlRwMixture_rcpp_loop(lgtdata, Z, - deltabar, Ad, mubar, Amu, - nu, V, s, - R, keep, nprint, drawdelta, - as.matrix(olddelta), a, oldprob, oldbetas, ind, SignRes) -#################################################################### - -if(drawdelta){ - attributes(draws$Deltadraw)$class=c("bayesm.mat","mcmc") - attributes(draws$Deltadraw)$mcpar=c(1,R,keep)} -attributes(draws$betadraw)$class=c("bayesm.hcoef") -attributes(draws$nmix)$class="bayesm.nmix" - -return(draws) -} \ No newline at end of file diff -Nru r-cran-bayesm-3.1-1/R/rhierMnlRwMixture_rcpp.R r-cran-bayesm-3.1-3+dfsg/R/rhierMnlRwMixture_rcpp.R --- r-cran-bayesm-3.1-1/R/rhierMnlRwMixture_rcpp.R 1970-01-01 00:00:00.000000000 +0000 +++ r-cran-bayesm-3.1-3+dfsg/R/rhierMnlRwMixture_rcpp.R 2019-07-29 20:58:33.000000000 +0000 @@ -0,0 +1,380 @@ +rhierMnlRwMixture=function(Data,Prior,Mcmc){ +# +# revision history: +# 12/04 changed by rossi to fix bug in drawdelta when there is zero/one unit in a mixture component +# 09/05 added loglike output, changed to reflect new argument order in llmnl, mnlHess +# 12/05 changed weighting scheme to (1-w)logl_i + w*Lbar (normalized) +# 03/07 added classes +# 09/08 changed Dirichlet a check +# 04/15 by Wayne Taylor: added nprint option to MCMC argument +# 07/16 by Wayne Taylor: added sign restrictions +# 10/10 by Dan Yavorsky: changed default priors when sign restrictions 
imposed +# 12/18 by Peter Rossi: print out vector of sign-restrictions +# 12/18 by Peter Rossi: changed Hessian for constrained parameters to reflect +# reparameterization +# 7/19 by Peter Rossi: further fixes for reparameterization of constrained parms +# fixed Hessian as well as problems with initial values to find +# constrained optima used to tune Metropolis. +# switched to Nelder-Mead to find constrained pooled optimum +# BFGS sometimes has trouble with reparameterized model +# +# purpose: run hierarchical mnl logit model with mixture of normals +# using RW and cov(RW inc) = (hess_i + Vbeta^-1)^-1 +# uses normal approximation to pooled likelihood +# +# Arguments: +# Data contains a list of (p,lgtdata, and possibly Z) +# p is number of choice alternatives +# lgtdata is a list of lists (one list per unit) +# lgtdata[[i]]=list(y,X) +# y is a vector indicating alternative chosen +# integers 1:p indicate alternative +# X is a length(y)*p x nvar matrix of values of +# X vars including intercepts +# Z is an length(lgtdata) x nz matrix of values of variables +# note: Z should NOT contain an intercept +# Prior contains a list of (deltabar,Ad,mubar,Amu,nu,V,ncomp,SignRes) +# ncomp is the number of components in normal mixture +# if elements of Prior (other than ncomp) do not exist, defaults are used +# SignRes is a vector of sign restrictions +# Mcmc contains a list of (s,c,R,keep,nprint) +# +# Output: as list containing +# Deltadraw R/keep x nz*nvar matrix of draws of Delta, first row is initial value +# betadraw is nlgt x nvar x R/keep array of draws of betas +# probdraw is R/keep x ncomp matrix of draws of probs of mixture components +# compdraw is a list of list of lists (length R/keep) +# compdraw[[rep]] is the repth draw of components for mixtures +# loglike log-likelikelhood at each kept draw +# +# Priors: +# beta_i = D %*% z[i,] + u_i +# u_i ~ N(mu_ind[i],Sigma_ind[i]) +# ind[i] ~multinomial(p) +# p ~ dirichlet (a) +# D is a k x nz array +# delta= vec(D) ~ 
N(deltabar,A_d^-1) +# mu_j ~ N(mubar,A_mu^-1(x)Sigma_j) +# Sigma_j ~ IW(nu,V^-1) +# ncomp is number of components +# +# MCMC parameters +# s is the scaling parameter for the RW inc covariance matrix; s^2 Var is inc cov +# matrix +# w is parameter for weighting function in fractional likelihood +# w is the weight on the normalized pooled likelihood +# R is number of draws +# keep is thinning parameter, keep every keepth draw +# nprint - print estimated time remaining on every nprint'th draw +# +# check arguments +# +if(missing(Data)) {pandterm("Requires Data argument -- list of p,lgtdata, and (possibly) Z")} + if(is.null(Data$p)) {pandterm("Requires Data element p (# choice alternatives)") } + p=Data$p + if(is.null(Data$lgtdata)) {pandterm("Requires Data element lgtdata (list of data for each unit)")} + lgtdata=Data$lgtdata + nlgt=length(lgtdata) + drawdelta=TRUE +if(is.null(Data$Z)) { cat("Z not specified",fill=TRUE); fsh() ; drawdelta=FALSE} + else {if (!is.matrix(Data$Z)) {pandterm("Z must be a matrix")} + else {if (nrow(Data$Z) != nlgt) {pandterm(paste("Nrow(Z) ",nrow(Z),"ne number logits ",nlgt))} + else {Z=Data$Z}}} + if(drawdelta) { + nz=ncol(Z) + colmeans=apply(Z,2,mean) + if(sum(colmeans) > .00001) + {pandterm(paste("Z does not appear to be de-meaned: colmeans= ",colmeans))} + } +# +# check lgtdata for validity +# +ypooled=NULL +Xpooled=NULL +if(!is.null(lgtdata[[1]]$X & is.matrix(lgtdata[[1]]$X))) {oldncol=ncol(lgtdata[[1]]$X)} +for (i in 1:nlgt) +{ + if(is.null(lgtdata[[i]]$y)) {pandterm(paste0("Requires element y of lgtdata[[",i,"]]"))} + if(is.null(lgtdata[[i]]$X)) {pandterm(paste0("Requires element X of lgtdata[[",i,"]]"))} + if(!is.matrix(lgtdata[[i]]$X)) {pandterm(paste0("lgtdata[[",i,"]]$X must be a matrix"))} + if(!is.vector(lgtdata[[i]]$y, mode = "numeric") & !is.vector(lgtdata[[i]]$y, mode = "logical") & !is.matrix(lgtdata[[i]]$y)) + {pandterm(paste0("lgtdata[[",i,"]]$y must be a numeric or logical vector or matrix"))} + 
if(is.matrix(lgtdata[[i]]$y)) { if(ncol(lgtdata[[i]]$y)>1) { pandterm(paste0("lgtdata[[",i,"]]$y must be a vector or one-column matrix")) } } + ypooled=c(ypooled,lgtdata[[i]]$y) + nrowX=nrow(lgtdata[[i]]$X) + if((nrowX/p) !=length(lgtdata[[i]]$y)) {pandterm(paste("nrow(X) ne p*length(yi); exception at unit",i))} + newncol=ncol(lgtdata[[i]]$X) + if(newncol != oldncol) {pandterm(paste("All X elements must have same # of cols; exception at unit",i))} + Xpooled=rbind(Xpooled,lgtdata[[i]]$X) + oldncol=newncol +} +nvar=ncol(Xpooled) +levely=as.numeric(levels(as.factor(ypooled))) +if(length(levely) != p) {pandterm(paste("y takes on ",length(levely)," values -- must be = p"))} +bady=FALSE +for (i in 1:p ) +{ + if(levely[i] != i) bady=TRUE +} +cat("Table of Y values pooled over all units",fill=TRUE) +print(table(ypooled)) +if (bady) + {pandterm("Invalid Y")} +# +# check on prior +# +if(missing(Prior)) {pandterm("Requires Prior list argument (at least ncomp)")} +if(is.null(Prior$ncomp)) {pandterm("Requires Prior element ncomp (num of mixture components)")} else {ncomp=Prior$ncomp} +if(is.null(Prior$SignRes)) {SignRes=rep(0,nvar)} else {SignRes=Prior$SignRes} +if(length(SignRes) != nvar) {pandterm("The length SignRes must be equal to the dimension of X")} +if(sum(!(SignRes %in% c(-1,0,1))>0)) {pandterm("All elements of SignRes must be equal to -1, 0, or 1")} +if(is.null(Prior$mubar) & sum(abs(SignRes))==0) { + mubar=matrix(rep(0,nvar),nrow=1) + } else { + if(is.null(Prior$mubar) & sum(abs(SignRes)) >0) { + mubar=matrix(rep(0,nvar)+2*abs(SignRes),nrow=1) + } else { + mubar=matrix(Prior$mubar,nrow=1) } } +if(ncol(mubar) != nvar) {pandterm(paste("mubar must have ncomp cols, ncol(mubar)= ",ncol(mubar)))} +if(is.null(Prior$Amu) & sum(abs(SignRes))==0) { + Amu=matrix(BayesmConstant.A,ncol=1) + } else { + if(is.null(Prior$Amu) & sum(abs(SignRes)) >0) { + Amu=matrix(BayesmConstant.A*10,ncol=1) + } else {Amu=matrix(Prior$Amu,ncol=1) } } +if(ncol(Amu) != 1 | nrow(Amu) != 1) 
{pandterm("Amu must be a 1 x 1 array")} +if(is.null(Prior$nu) & sum(abs(SignRes))==0) { + nu=nvar+BayesmConstant.nuInc + } else { + if(is.null(Prior$nu) & sum(abs(SignRes)) >0) { + nu=nvar+BayesmConstant.nuInc+12 + } else { + nu=Prior$nu } } +if(nu < 1) {pandterm("invalid nu value")} +if(is.null(Prior$V) & sum(abs(SignRes))==0) { + V=nu*diag(nvar) + } else { + if(is.null(Prior$V) & sum(abs(SignRes)) >0) { + V=nu*diag(abs(SignRes)*0.1+(!abs(SignRes))*4) + } else { + V=Prior$V } } +if(sum(dim(V)==c(nvar,nvar)) !=2) pandterm("Invalid V in prior") +if(is.null(Prior$Ad) & drawdelta) {Ad=BayesmConstant.A*diag(nvar*nz)} else {Ad=Prior$Ad} +if(drawdelta) {if(ncol(Ad) != nvar*nz | nrow(Ad) != nvar*nz) {pandterm("Ad must be nvar*nz x nvar*nz")}} +if(is.null(Prior$deltabar)& drawdelta) {deltabar=rep(0,nz*nvar)} else {deltabar=Prior$deltabar} +if(drawdelta) {if(length(deltabar) != nz*nvar) {pandterm("deltabar must be of length nvar*nz")}} +if(is.null(Prior$a)) { a=rep(BayesmConstant.a,ncomp)} else {a=Prior$a} +if(length(a) != ncomp) {pandterm("Requires dim(a)= ncomp (no of components)")} +bada=FALSE +for(i in 1:ncomp) { if(a[i] < 0) bada=TRUE} +if(bada) pandterm("invalid values in a vector") + +if(is.null(Prior$nu)&sum(abs(SignRes))>0) {nu = nvar+15} +if(is.null(Prior$Amu)&sum(abs(SignRes))>0) {Amu = matrix(.1)} +if(is.null(Prior$V)&sum(abs(SignRes))>0) {V = nu*(diag(nvar)-diag(abs(SignRes)>0)*.8)} + +# +# check on Mcmc +# +if(missing(Mcmc)) + {pandterm("Requires Mcmc list argument")} +else + { + if(is.null(Mcmc$s)) {s=BayesmConstant.RRScaling/sqrt(nvar)} else {s=Mcmc$s} + if(is.null(Mcmc$w)) {w=BayesmConstant.w} else {w=Mcmc$w} + if(is.null(Mcmc$keep)) {keep=BayesmConstant.keep} else {keep=Mcmc$keep} + if(is.null(Mcmc$R)) {pandterm("Requires R argument in Mcmc list")} else {R=Mcmc$R} + if(is.null(Mcmc$nprint)) {nprint=BayesmConstant.nprint} else {nprint=Mcmc$nprint} + if(nprint<0) {pandterm('nprint must be an integer greater than or equal to 0')} + } +# +# print out problem +# 
+cat(" ",fill=TRUE) +cat("Starting MCMC Inference for Hierarchical Logit:",fill=TRUE) +cat(" Normal Mixture with",ncomp,"components for first stage prior",fill=TRUE) +cat(paste(" ",p," alternatives; ",nvar," variables in X"),fill=TRUE) +cat(paste(" for ",nlgt," cross-sectional units"),fill=TRUE) +cat(" ",fill=TRUE) +cat("Prior Parms: ",fill=TRUE) +cat("nu =",nu,fill=TRUE) +cat("V ",fill=TRUE) +print(V) +cat("mubar ",fill=TRUE) +print(mubar) +cat("Amu ", fill=TRUE) +print(Amu) +cat("a ",fill=TRUE) +print(a) +if(drawdelta) +{ + cat("deltabar",fill=TRUE) + print(deltabar) + cat("Ad",fill=TRUE) + print(Ad) +} +if(sum(abs(SignRes)) != 0){ + cat("Sign Restrictions Vector (0: unconstrained, 1: positive, -1: negative)",fill=TRUE) + print(matrix(SignRes,ncol=1)) +} + +cat(" ",fill=TRUE) +cat("MCMC Parms: ",fill=TRUE) +cat(paste("s=",round(s,3)," w= ",w," R= ",R," keep= ",keep," nprint= ",nprint),fill=TRUE) +cat("",fill=TRUE) + +oldbetas = matrix(double(nlgt * nvar), ncol = nvar) + +#-------------------------------------------------------------------------------------------------- +# +# create functions needed +# +llmnlFract= + function(beta,y,X,betapooled,rootH,w,wgt,SignRes = rep(0,ncol(X))){ + z=as.vector(rootH%*%(beta-betapooled)) + return((1-w)*llmnl_con(beta,y,X,SignRes)+w*wgt*(-.5*(z%*%z))) + } + +mnlHess_con=function (betastar, y, X, SignRes = rep(0,ncol(X))) { + + #Reparameterize betastar to beta to allow for sign restrictions + beta = betastar + beta[SignRes!=0] = SignRes[SignRes!=0]*exp(betastar[SignRes!=0]) + + n = length(y) + j = nrow(X)/n + k = ncol(X) + Xbeta = X %*% beta + Xbeta = matrix(Xbeta, byrow = T, ncol = j) + Xbeta = exp(Xbeta) + iota = c(rep(1, j)) + denom = Xbeta %*% iota + Prob = Xbeta/as.vector(denom) + Hess = matrix(double(k * k), ncol = k) + for (i in 1:n) { + p = as.vector(Prob[i, ]) + A = diag(p) - outer(p, p) + Xt = X[(j * (i - 1) + 1):(j * i), ] + Hess = Hess + crossprod(Xt, A) %*% Xt + } + # modify Hessian for reparameterization + # Hess 
above is the hessian in the constrained parms (beta) + # we must obtain the hessian in betastar (unconstrained parms) + # Hess_betastar = J^t Hess_beta J + # J: jacobian of beta with respect to betastar, J = diag(lambda) -- see notes + lambda = c(rep(1,length(SignRes))) + lambda[SignRes == 1]= beta[SignRes == 1] + lambda[SignRes == -1]= - beta[SignRes == -1] + Hess=Hess * crossprod(t(lambda)) + # hess[i,j] = hess[i,j]*lambda[i]*lambda[j] + return(Hess) +} + +#------------------------------------------------------------------------------------------------------- +# +# initialize computed quantities for Metropolis +# +cat("initializing Metropolis candidate densities for ",nlgt," units ...",fill=TRUE) +fsh() +# +# now go through and compute fractional likelihood estimates and hessians +# +# Lbar=log(pooled likelihood^(n_i/N)) +# +# fractional loglike = (1-w)*loglike_i + w*Lbar +# +betainit=c(rep(0,nvar)) +noRes=c(rep(0,nvar)) +# run unconstrained opt first +out=optim(betainit,llmnl_con,method="BFGS",control=list( fnscale=-1,trace=0,reltol=1e-6), + X=Xpooled,y=ypooled,SignRes=noRes) + +betainit=out$par +betainit[SignRes!=0] = 0 # set constrained terms to zero -- implies setting "beta" to either 1 or -1 +# +# compute pooled optimum +# +# changed to default method (Nelder-Mead) for more robust optimization -- sometimes BFGS +# fails to find the optimum under the exponential reparameterization +out=optim(betainit,llmnl_con,control=list( fnscale=-1,trace=0,reltol=1e-6), + X=Xpooled,y=ypooled,SignRes=SignRes) + +betapooled=out$par +# +# warn user if the constrained pooled model has extreme coefficients +# +if(sum(abs(betapooled[SignRes!=0])>10) > 0) +{ + cat("In tuning Metropolis algorithm, constrained pooled parameter estimates contain extreme values", + fill=TRUE) + print(cbind(betapooled,SignRes)) + cat("check any constrained values with absolute value > 10 above -- implies abs(beta) > exp(10) or < exp(-10)", + fill=TRUE) +} + + 
+H=mnlHess_con(betapooled,ypooled,Xpooled,SignRes) +rootH=chol(H) +for (i in 1:nlgt) +{ + wgt=length(lgtdata[[i]]$y)/length(ypooled) + out=optim(betapooled,llmnlFract,method="BFGS",control=list( fnscale=-1,trace=0,reltol=1e-4), + X=lgtdata[[i]]$X,y=lgtdata[[i]]$y,betapooled=betapooled,rootH=rootH,w=w,wgt=wgt,SignRes=SignRes) + if(out$convergence == 0) { + hess=mnlHess_con(out$par,lgtdata[[i]]$y,lgtdata[[i]]$X,SignRes) + lgtdata[[i]]=c(lgtdata[[i]],list(converge=1,betafmle=out$par,hess=hess)) } + else + { lgtdata[[i]]=c(lgtdata[[i]],list(converge=0,betafmle=c(rep(0,nvar)), + hess=diag(nvar))) } + oldbetas[i,]=lgtdata[[i]]$betafmle + if(i%%50 ==0) cat(" completed unit #",i,fill=TRUE) + fsh() +} +# +# initialize values +# +# set initial values for the indicators +# ind is of length(nlgt) and indicates which mixture component this obs +# belongs to. +# +ind=NULL +ninc=floor(nlgt/ncomp) +for (i in 1:(ncomp-1)) {ind=c(ind,rep(i,ninc))} +if(ncomp != 1) {ind = c(ind,rep(ncomp,nlgt-length(ind)))} else {ind=rep(1,nlgt)} +# +# initialize probs +# +oldprob=rep(1/ncomp,ncomp) +# +#initialize delta +# +if (drawdelta){ + olddelta = rep(0,nz*nvar) +} else { #send placeholders to the _loop function if there is no Z matrix + olddelta = 0 + Z = matrix(0) + deltabar = 0 + Ad = matrix(0) +} + +################################################################### +# Wayne Taylor +# 09/22/2014 +################################################################### +draws = rhierMnlRwMixture_rcpp_loop(lgtdata, Z, + deltabar, Ad, mubar, Amu, + nu, V, s, + R, keep, nprint, drawdelta, + as.matrix(olddelta), a, oldprob, oldbetas, ind, SignRes) +#################################################################### + +if(drawdelta){ + attributes(draws$Deltadraw)$class=c("bayesm.mat","mcmc") + attributes(draws$Deltadraw)$mcpar=c(1,R,keep)} +attributes(draws$betadraw)$class=c("bayesm.hcoef") +attributes(draws$nmix)$class="bayesm.nmix" + +return(draws) +} \ No newline at end of file diff -Nru 
r-cran-bayesm-3.1-1/R/summary.bayesm.mat.R r-cran-bayesm-3.1-3+dfsg/R/summary.bayesm.mat.R --- r-cran-bayesm-3.1-1/R/summary.bayesm.mat.R 2012-05-15 20:42:59.000000000 +0000 +++ r-cran-bayesm-3.1-3+dfsg/R/summary.bayesm.mat.R 2019-07-12 22:26:10.000000000 +0000 @@ -1,48 +1,50 @@ -summary.bayesm.mat=function(object,names,burnin=trunc(.1*nrow(X)),tvalues,QUANTILES=TRUE,TRAILER=TRUE,...){ -# -# S3 method to compute and print posterior summaries for a matrix of draws -# P. Rossi 2/07 -# - X=object - if(mode(X) == "list") stop("list entered \n Possible Fixup: extract from list \n") - if(mode(X) !="numeric") stop("Requires numeric argument \n") - if(is.null(attributes(X)$dim)) X=as.matrix(X) - nx=ncol(X) - if(missing(names)) names=as.character(1:nx) - if(nrow(X) < 100) {cat("fewer than 100 draws submitted \n"); return(invisible())} - X=X[(burnin+1):nrow(X),,drop=FALSE] - mat=matrix(apply(X,2,mean),nrow=1) - mat=rbind(mat,sqrt(matrix(apply(X,2,var),nrow=1))) - num_se=double(nx); rel_eff=double(nx); eff_s_size=double(nx) - for(i in 1:nx) - {out=numEff(X[,i]) - if(is.nan(out$stderr)) - {num_se[i]=-9999; rel_eff[i]=-9999; eff_s_size[i]=-9999} - else - {num_se[i]=out$stderr; rel_eff[i]=out$f; eff_s_size[i]=nrow(X)/ceiling(out$f)} - } - mat=rbind(mat,num_se,rel_eff,eff_s_size) - colnames(mat)=names - rownames(mat)[1]="mean" - rownames(mat)[2]="std dev" - rownames(mat)[3]="num se" - rownames(mat)[4]="rel eff" - rownames(mat)[5]="sam size" - if(!missing(tvalues)) - {if(mode(tvalues)!="numeric") stop("true values arguments must be numeric \n") - if(length(tvalues) != nx) stop("true values argument is wrong length \n") - mat=rbind(tvalues,mat) } - cat("Summary of Posterior Marginal Distributions ") - cat("\nMoments \n") - print(t(mat),digits=2) - if(QUANTILES){ - qmat=apply(X,2,quantile,probs=c(.025,.05,.5,.95,.975)) - colnames(qmat)=names - if(!missing(tvalues)) - { qmat=rbind(tvalues,qmat)} - cat("\nQuantiles \n") - print(t(qmat),digits=2)} - if(TRAILER) cat(paste(" based on 
",nrow(X)," valid draws (burn-in=",burnin,") \n",sep="")) - invisible(t(mat)) -} - +summary.bayesm.mat=function(object,names,burnin=trunc(.1*nrow(X)),tvalues,QUANTILES=TRUE,TRAILER=TRUE,...){ +# +# S3 method to compute and print posterior summaries for a matrix of draws +# P. Rossi 2/07 +# + X=object + if(mode(X) == "list") stop("list entered \n Possible Fixup: extract from list \n") + if(mode(X) !="numeric") stop("Requires numeric argument \n") + if(is.null(attributes(X)$dim)) X=as.matrix(X) + nx=ncol(X) + if(missing(names)) names=as.character(1:nx) + if(nrow(X) < 100) {cat("fewer than 100 draws submitted \n"); return(invisible())} + if(burnin > nrow(X)) {cat("burnin set larger than number of draws submitted (chk keep) \n"); + return(invisible())} + X=X[(burnin+1):nrow(X),,drop=FALSE] + mat=matrix(apply(X,2,mean),nrow=1) + mat=rbind(mat,sqrt(matrix(apply(X,2,var),nrow=1))) + num_se=double(nx); rel_eff=double(nx); eff_s_size=double(nx) + for(i in 1:nx) + {out=numEff(X[,i]) + if(is.nan(out$stderr)) + {num_se[i]=-9999; rel_eff[i]=-9999; eff_s_size[i]=-9999} + else + {num_se[i]=out$stderr; rel_eff[i]=out$f; eff_s_size[i]=nrow(X)/ceiling(out$f)} + } + mat=rbind(mat,num_se,rel_eff,eff_s_size) + colnames(mat)=names + rownames(mat)[1]="mean" + rownames(mat)[2]="std dev" + rownames(mat)[3]="num se" + rownames(mat)[4]="rel eff" + rownames(mat)[5]="sam size" + if(!missing(tvalues)) + {if(mode(tvalues)!="numeric") stop("true values arguments must be numeric \n") + if(length(tvalues) != nx) stop("true values argument is wrong length \n") + mat=rbind(tvalues,mat) } + cat("Summary of Posterior Marginal Distributions ") + cat("\nMoments \n") + print(t(mat),digits=2) + if(QUANTILES){ + qmat=apply(X,2,quantile,probs=c(.025,.05,.5,.95,.975)) + colnames(qmat)=names + if(!missing(tvalues)) + { qmat=rbind(tvalues,qmat)} + cat("\nQuantiles \n") + print(t(qmat),digits=2)} + if(TRAILER) cat(paste(" based on ",nrow(X)," valid draws (burn-in=",burnin,") \n",sep="")) + invisible(t(mat)) +} + 
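[Note: the `summary.bayesm.mat` hunk above adds a guard returning early when `burnin > nrow(X)`. The failure it prevents comes from R's `:` operator counting downward when its left end exceeds its right end, so the "drop the burn-in" subscript turns into an out-of-bounds selection. A minimal standalone sketch of that failure mode, using a toy draws matrix rather than actual bayesm output:]

```r
# Toy matrix standing in for an R x k matrix of MCMC draws
X <- matrix(rnorm(200), nrow = 100)

burnin <- 150                  # mistakenly larger than nrow(X), e.g. after thinning with keep
idx <- (burnin + 1):nrow(X)    # 151:100 counts DOWN: 151, 150, ..., 100 -- not empty!
ok <- burnin <= nrow(X)        # the condition the new guard enforces before subscripting

# X[idx, ] would then fail with "subscript out of bounds" (idx starts at 151),
# which is why the patched method prints a message and returns invisibly instead.
```

The same guard is added, with the same rationale, to `summary.bayesm.nmix` and `summary.bayesm.var` in the hunks that follow.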
diff -Nru r-cran-bayesm-3.1-1/R/summary.bayesm.nmix.R r-cran-bayesm-3.1-3+dfsg/R/summary.bayesm.nmix.R --- r-cran-bayesm-3.1-1/R/summary.bayesm.nmix.R 2007-07-18 19:02:48.000000000 +0000 +++ r-cran-bayesm-3.1-3+dfsg/R/summary.bayesm.nmix.R 2019-07-12 22:23:53.000000000 +0000 @@ -1,35 +1,37 @@ -summary.bayesm.nmix=function(object,names,burnin=trunc(.1*nrow(probdraw)),...){ - nmixlist=object - if(mode(nmixlist) != "list") stop(" Argument must be a list \n") - probdraw=nmixlist[[1]]; compdraw=nmixlist[[3]] - if(!is.matrix(probdraw)) stop(" First Element of List (probdraw) must be a matrix \n") - if(mode(compdraw) != "list") stop(" Third Element of List (compdraw) must be a list \n") - ncomp=length(compdraw[[1]]) - if(ncol(probdraw) != ncomp) stop(" Dim of First Element of List not compatible with Dim of Second - \n") -# -# function to summarize draws of normal mixture components -# -R=nrow(probdraw) -if(R < 100) {cat("fewer than 100 draws submitted \n"); return(invisible())} -datad=length(compdraw[[1]][[1]]$mu) -mumat=matrix(0,nrow=R,ncol=datad) -sigmat=matrix(0,nrow=R,ncol=(datad*datad)) -if(missing(names)) names=as.character(1:datad) -for(i in (burnin+1):R){ - if(i%%500 ==0) cat("processing draw ",i,"\n",sep="");fsh() - out=momMix(probdraw[i,,drop=FALSE],compdraw[i]) - mumat[i,]=out$mu - sigmat[i,]=out$sigma - } -cat("\nNormal Mixture Moments\n Mean\n") -attributes(mumat)$class="bayesm.mat" -attributes(sigmat)$class="bayesm.var" -summary(mumat,names,burnin=burnin,QUANTILES=FALSE,TRAILER=FALSE) -cat(" \n") -summary(sigmat,burnin=burnin) -cat("note: 1st and 2nd Moments for a Normal Mixture \n") -cat(" may not be interpretable, consider plots\n") -invisible() -} - +summary.bayesm.nmix=function(object,names,burnin=trunc(.1*nrow(probdraw)),...){ + nmixlist=object + if(mode(nmixlist) != "list") stop(" Argument must be a list \n") + probdraw=nmixlist[[1]]; compdraw=nmixlist[[3]] + if(!is.matrix(probdraw)) stop(" First Element of List (probdraw) must be a matrix \n") + 
if(mode(compdraw) != "list") stop(" Third Element of List (compdraw) must be a list \n") + ncomp=length(compdraw[[1]]) + if(ncol(probdraw) != ncomp) stop(" Dim of First Element of List not compatible with Dim of Second + \n") +# +# function to summarize draws of normal mixture components +# +R=nrow(probdraw) +if(R < 100) {cat("fewer than 100 draws submitted \n"); return(invisible())} +if(burnin > R) {cat("burnin set larger than number of draws submitted (chk keep) \n"); + return(invisible())} +datad=length(compdraw[[1]][[1]]$mu) +mumat=matrix(0,nrow=R,ncol=datad) +sigmat=matrix(0,nrow=R,ncol=(datad*datad)) +if(missing(names)) names=as.character(1:datad) +for(i in (burnin+1):R){ + if(i%%500 ==0) cat("processing draw ",i,"\n",sep="");fsh() + out=momMix(probdraw[i,,drop=FALSE],compdraw[i]) + mumat[i,]=out$mu + sigmat[i,]=out$sigma + } +cat("\nNormal Mixture Moments\n Mean\n") +attributes(mumat)$class="bayesm.mat" +attributes(sigmat)$class="bayesm.var" +summary(mumat,names,burnin=burnin,QUANTILES=FALSE,TRAILER=FALSE) +cat(" \n") +summary(sigmat,burnin=burnin) +cat("note: 1st and 2nd Moments for a Normal Mixture \n") +cat(" may not be interpretable, consider plots\n") +invisible() +} + diff -Nru r-cran-bayesm-3.1-1/R/summary.bayesm.var.R r-cran-bayesm-3.1-3+dfsg/R/summary.bayesm.var.R --- r-cran-bayesm-3.1-1/R/summary.bayesm.var.R 2013-04-21 21:20:20.000000000 +0000 +++ r-cran-bayesm-3.1-3+dfsg/R/summary.bayesm.var.R 2019-07-12 22:27:01.000000000 +0000 @@ -1,37 +1,39 @@ -summary.bayesm.var=function(object,names,burnin=trunc(.1*nrow(Vard)),tvalues,QUANTILES=FALSE,...){ -# -# S3 method to summarize draws of var-cov matrix (stored as a vector) -# Vard is R x d**2 array of draws -# P. 
Rossi 2/07 -# - Vard=object - if(mode(Vard) == "list") stop("list entered \n Possible Fixup: extract from list \n") - if(!is.matrix(Vard)) stop("Requires matrix argument \n") - if(trunc(sqrt(ncol(Vard)))!=sqrt(ncol(Vard))) stop("Argument cannot be draws from a square matrix \n") - if(nrow(Vard) < 100) {cat("fewer than 100 draws submitted \n"); return(invisible())} - d=sqrt(ncol(Vard)) - corrd=t(apply(Vard[(burnin+1):nrow(Vard),],1,nmat)) - pmeancorr=apply(corrd,2,mean) - dim(pmeancorr)=c(d,d) - indexdiag=(0:(d-1))*d+1:d - var=Vard[(burnin+1):nrow(Vard),indexdiag] - sdd=sqrt(var) - pmeansd=apply(sdd,2,mean) - mat=cbind(pmeansd,pmeancorr) - if(missing(names)) names=as.character(1:d) - - cat("Posterior Means of Std Deviations and Correlation Matrix \n") - rownames(mat)=names - colnames(mat)=c("Std Dev",names) - print(mat,digits=2) - - cat("\nUpper Triangle of Var-Cov Matrix \n") - ind=as.vector(upper.tri(matrix(0,ncol=d,nrow=d),diag=TRUE)) - labels=cbind(rep(c(1:d),d),rep(c(1:d),each=d)) - labels=labels[ind,] - plabels=paste(labels[,1],labels[,2],sep=",") - uppertri=as.matrix(Vard[,ind]) - attributes(uppertri)$class="bayesm.mat" - summary(uppertri,names=plabels,burnin=burnin,tvalues=tvalues,QUANTILES=QUANTILES) - invisible() -} +summary.bayesm.var=function(object,names,burnin=trunc(.1*nrow(Vard)),tvalues,QUANTILES=FALSE,...){ +# +# S3 method to summarize draws of var-cov matrix (stored as a vector) +# Vard is R x d**2 array of draws +# P. 
Rossi 2/07 +# + Vard=object + if(mode(Vard) == "list") stop("list entered \n Possible Fixup: extract from list \n") + if(!is.matrix(Vard)) stop("Requires matrix argument \n") + if(trunc(sqrt(ncol(Vard)))!=sqrt(ncol(Vard))) stop("Argument cannot be draws from a square matrix \n") + if(nrow(Vard) < 100) {cat("fewer than 100 draws submitted \n"); return(invisible())} + if(burnin > nrow(Vard)) {cat("burnin set larger than number of draws submitted (chk keep) \n"); + return(invisible())} + d=sqrt(ncol(Vard)) + corrd=t(apply(Vard[(burnin+1):nrow(Vard),],1,nmat)) + pmeancorr=apply(corrd,2,mean) + dim(pmeancorr)=c(d,d) + indexdiag=(0:(d-1))*d+1:d + var=Vard[(burnin+1):nrow(Vard),indexdiag] + sdd=sqrt(var) + pmeansd=apply(sdd,2,mean) + mat=cbind(pmeansd,pmeancorr) + if(missing(names)) names=as.character(1:d) + + cat("Posterior Means of Std Deviations and Correlation Matrix \n") + rownames(mat)=names + colnames(mat)=c("Std Dev",names) + print(mat,digits=2) + + cat("\nUpper Triangle of Var-Cov Matrix \n") + ind=as.vector(upper.tri(matrix(0,ncol=d,nrow=d),diag=TRUE)) + labels=cbind(rep(c(1:d),d),rep(c(1:d),each=d)) + labels=labels[ind,] + plabels=paste(labels[,1],labels[,2],sep=",") + uppertri=as.matrix(Vard[,ind]) + attributes(uppertri)$class="bayesm.mat" + summary(uppertri,names=plabels,burnin=burnin,tvalues=tvalues,QUANTILES=QUANTILES) + invisible() +} diff -Nru r-cran-bayesm-3.1-1/vignettes/bayesm_Overview_Vignette.Rmd r-cran-bayesm-3.1-3+dfsg/vignettes/bayesm_Overview_Vignette.Rmd --- r-cran-bayesm-3.1-1/vignettes/bayesm_Overview_Vignette.Rmd 2017-06-12 21:48:30.000000000 +0000 +++ r-cran-bayesm-3.1-3+dfsg/vignettes/bayesm_Overview_Vignette.Rmd 2019-07-13 20:25:14.000000000 +0000 @@ -671,9 +671,9 @@ # Conclusion -We hope that this vignette has provided the reader with a thorough introduction to the `bayesm` package and is sufficient to enable immediate use of its posterior sampling functions for bayesian estimation. 
Comments or edits can be sent to the vignette authors at the email addresses below. +We hope that this vignette has provided the reader with an introduction to the `bayesm` package and is sufficient to enable immediate use of its posterior sampling functions for Bayesian estimation. ***** -_Authored by [Dan Yavorsky](dyavorsky@gmail.com) and [Peter Rossi](perossichi@gmail.com). Last updated June 2017._ +_Last updated July 2019._ diff -Nru r-cran-bayesm-3.1-1/vignettes/Constrained_MNL_Vignette.Rmd r-cran-bayesm-3.1-3+dfsg/vignettes/Constrained_MNL_Vignette.Rmd --- r-cran-bayesm-3.1-1/vignettes/Constrained_MNL_Vignette.Rmd 2017-06-12 19:50:12.000000000 +0000 +++ r-cran-bayesm-3.1-3+dfsg/vignettes/Constrained_MNL_Vignette.Rmd 2019-07-13 20:32:45.000000000 +0000 @@ -76,7 +76,7 @@ $$ \mu_m \sim MVN(\bar{\mu}, \Sigma_m \otimes a_\mu^{-1}) $$ $$ \Sigma_m \sim IW(\nu, V) $$ -This specification of priors assumes that $\mu_m$ is independent of $\Sigma_m$ and that, conditional on the hyperparameters, the $\beta_i$'s are _a priori_ independent. +This specification of priors assumes that the $(\mu_m,\Sigma_m)$ are independent and that, conditional on the hyperparameters, the $\beta_i$'s are independent. $a$ implements prior beliefs on the number of normal components in the mixture with a default of 5. $\nu$ is a "tightness" parameter of the inverted-Wishart distribution and $V$ is its location matrix. Without sign constraints, they default to $\nu=k+3$ and $V=\nu I$, which has the effect of centering the prior on $I$ and making it "barely proper". $a_\mu$ is a tightness parameter for the priors on $\mu$, and when no sign constraints are imposed it defaults to an extremely diffuse prior of 0.01. @@ -290,7 +290,7 @@ ***** -_Authored by [Dan Yavorsky](dyavorsky@gmail.com) and [Peter Rossi](perossichi@gmail.com). Last updated June 2017._ +_Last updated July 2019._
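[Note: the sign constraints discussed in the Constrained_MNL_Vignette hunk above are imposed in the R code earlier in this patch via an exponential reparameterization: for a constrained coefficient, the sampler works with an unconstrained betastar and maps it to beta = sign * exp(betastar). A minimal standalone sketch of that mapping (a hypothetical `to_beta` helper with toy values, mirroring the reparameterization in `mnlHess_con` rather than calling bayesm internals):]

```r
# Map unconstrained betastar to sign-constrained beta:
#   SignRes[k] =  1  ->  beta[k] =  exp(betastar[k])  (forced positive)
#   SignRes[k] = -1  ->  beta[k] = -exp(betastar[k])  (forced negative)
#   SignRes[k] =  0  ->  beta[k] =  betastar[k]       (unconstrained)
to_beta <- function(betastar, SignRes) {
  beta <- betastar
  beta[SignRes != 0] <- SignRes[SignRes != 0] * exp(betastar[SignRes != 0])
  beta
}

SignRes  <- c(0, 1, -1)
betastar <- c(0.5, 0, 0)       # betastar = 0 maps constrained terms to +1 / -1
to_beta(betastar, SignRes)     # -> 0.5  1.0 -1.0
```

This is also why the Metropolis tuning code warns when a constrained betastar has absolute value above 10: the implied beta is then more extreme than exp(10) or exp(-10).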