Spring 1999 Quantitative Methods Colloquia
All are welcome!

Friday, 1/22/99: S. Mathiyalakan, Syracuse University, “GDSS as a Vehicle For Optimal Group Decision Making And Knowledge Gathering”

Friday, 4/2/99: Scott Webster, Syracuse University, “Evolutionary Algorithm Design”

Wednesday, 4/7/99: Professor Dennis K. J. Lin, Department of Management Science and Information Systems, The Smeal College of Business Administration, Pennsylvania State University, “Designing Computer Experiments”

Friday, 4/16/99: Linda Roberge, “Health Informatics”

Friday, 4/23/99: John Walker, “Influence Diagnostics for Two Stage Least Squares Estimation”

Friday, 5/7/99: Chung Chen, “Effects of Outliers in the Analysis of Predictive Relationship: Some Issues in Empirical Finance”

Friday, 5/14/99: Fred Easton, “Optimizing Service Attributes: The Seller’s Utility Problem”

Friday, 1/22/99, 3:00-4:30, room 003 SOM: S. Mathiyalakan, Syracuse University, “GDSS as a Vehicle For Optimal Group Decision Making And Knowledge Gathering”

Summary: This presentation will focus on the use of GDSS for expert selection and knowledge acquisition. Meetings are crucial elements in the functioning of organizations. Prior research has focused on determining the optimal group size for a meeting.  It was noted that as the size of a group increases, meeting outcome measures (net value) increase until a maximum point is reached; any further increases in group size yield negative net benefits.  Using induced-value experimentation, we completed controlled experiments on the relationships between group size and group meeting outcomes. The results raise questions about the optimal group size results cited in earlier studies. Possible future research directions and strategies will also be examined.

Friday, 4/2/99, 4:00, room 101 SOM: Scott Webster, Syracuse University, “Evolutionary Algorithm Design”

Summary: The dollar value of material in various stages of supply chains in the U.S. is about the same as President Clinton’s proposed federal budget.  With advances in information technology, many companies are looking for ways to better manage this investment through software-based planning tools.  The development of these tools is still in its very early stages, and one of the great challenges in the design of a class of underlying algorithms stems from the diversity of supply chain planning problems in practice.  In this talk I will outline one possible approach for developing a flexible and adaptable planning algorithm.  The basic idea is to design a programming language and interpreter with two characteristics: (1) almost any algorithm that one might conceive for planning material flow can be encoded in the programming language, and (2) new computer programs (and consequently algorithms) can be generated by perturbing any given program.  The perturbation characteristic provides a mechanism for easily modifying existing algorithms within an evolutionary process.  Algorithms are created and exposed to a series of planning problems over the course of their “lives.”  Their performance influences how long they survive and, in turn, the degree to which they reproduce their “genes” in the creation of new algorithms.  To illustrate the concept, I will describe one simple design of an interpreter and discuss preliminary computational results.
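The survive-and-reproduce cycle the abstract describes can be sketched in Python. This is a toy illustration only: here a “program” is just a parameter vector rather than an encoded planning algorithm, and the fitness function, population size, and mutation scheme are all invented for the example.

```python
import random

def fitness(program, problems):
    # Toy stand-in for "exposing an algorithm to planning problems":
    # score a parameter vector by its (negated) squared distance to targets.
    return -sum(sum((p - t) ** 2 for p, t in zip(program, target))
                for target in problems)

def perturb(program):
    # The perturbation mechanism: any given program can be mutated
    # into a new one by a small random change.
    child = program[:]
    i = random.randrange(len(child))
    child[i] += random.gauss(0, 0.5)
    return child

def evolve(problems, pop_size=20, generations=100):
    population = [[random.uniform(-1, 1) for _ in range(3)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Performance governs survival; survivors reproduce via perturbation.
        scored = sorted(population, key=lambda p: fitness(p, problems),
                        reverse=True)
        survivors = scored[: pop_size // 2]
        population = survivors + [perturb(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=lambda p: fitness(p, problems))

random.seed(0)
problems = [[1.0, 2.0, 3.0], [1.2, 1.8, 3.1]]  # hypothetical problem set
best = evolve(problems)
```

The evolved vector should score well above an arbitrary fixed guess, illustrating how selection plus perturbation improves the population over its “lifetime.”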

Wednesday, 4/7/99, 4:15 p.m., room 313 Carnegie, refreshments at 3:45 in room 312 Carnegie: Professor Dennis K. J. Lin, Department of Management Science and Information Systems, The Smeal College of Business Administration, Pennsylvania State University, “Designing Computer Experiments”

Summary: Computer models/simulations can describe complicated physical phenomena, such as performance characteristics of integrated circuits. However, to use these models for scientific investigation, their generally long running times and mostly deterministic nature require specially designed experiments. Standard factorial designs are inadequate; in the absence of one or more main effects, their replication cannot be used to estimate error but instead produces redundancy. A number of alternative designs have been proposed, but many can be burdensome computationally.  This paper presents a new class of designs developed from the rotation of a factorial design. These rotated factorial designs are very easy to construct and preserve many of the attractive properties of standard factorial designs: they have equally spaced projections onto univariate dimensions and uncorrelated regression effect estimates (orthogonality). They also rate comparably to maximin Latin hypercube designs by the minimum interpoint distance criterion used in the latter's construction.

Sponsored by: Quantitative Methods Department, The Brethen Institute, the Interdisciplinary Statistics Program at Syracuse University, and the Syracuse Chapter of the American Statistical Association
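The two properties claimed for rotated factorial designs, equally spaced univariate projections and orthogonal effect estimates, can be checked numerically on a toy two-factor example (assuming NumPy; the rotation angle below is chosen for illustration and is not taken from the paper’s construction):

```python
import numpy as np

# 2^2 full factorial in coded units (+/-1)
D = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)

# Illustrative rotation angle: arctan(1/2) spreads the four points
# so their projections onto each axis are distinct and equally spaced.
theta = np.arctan(0.5)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
Dr = D @ R.T  # rotated design

# Orthogonality is preserved: the Gram matrix stays diagonal (4 * I).
gram = Dr.T @ Dr

# Univariate projections of the first factor are equally spaced.
proj = np.sort(Dr[:, 0])
gaps = np.diff(proj)
```

Because rotation is an orthogonal transformation, the columns of the rotated design remain uncorrelated, while the projection onto each single dimension now takes four distinct, evenly spaced values instead of the two replicated values of the unrotated design.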

Friday, 4/16/99, 4:00 p.m., room 003 SOM: Linda Roberge, “Health Informatics”

Summary: The delivery of health care services in the U.S. has been beset by myriad problems.  Despite spending ever increasing proportions of our GDP on health care ($1.1 trillion in 1997), the health of our population lags behind that of other industrialized countries.  The health care industry is currently undergoing sweeping changes in the form of “managed care” with the goals of controlling spending, improving access to and delivery of services, and possibly even improving the quality of care and the health of the nation.

Managed care, the financing mechanism that attempts to apply business principles to the delivery of health care services, has been one of the major drivers behind the increased use of information technology within the industry.  In order to survive in a managed care environment, health care organizations are using IT for knowledge management to an extent not previously seen.  For this colloquium, I will present an overview of the health care industry, the application of IT within the industry, and highlight some of the many research opportunities available.

Friday, 4/23/99, 4:00 p.m., room 323 SOM: John Walker, “Influence Diagnostics for Two Stage Least Squares Estimation”

Summary: Using results from Phillips (1977), Kuh and Welsch (1979) developed two-stage least squares estimation influence diagnostics analogous to some common ordinary least squares influence diagnostics.  I extend these results by providing 2SLS versions of other OLS influence diagnostics and interpreting important components of Phillips’ formula.  The performance of the two-stage influence diagnostics is compared to the less computationally expensive alternative of applying OLS influence diagnostics to the second stage of estimation only.  An example using Klein’s (1950) Model I data demonstrates that ignoring the first stage of estimation can lead to gross miscalculations of the influence for some observations.

Sponsored by: The Brethen Institute

Friday, 5/7/99, 3:00 p.m., room 101 SOM: Chung Chen, “Effects of Outliers in the Analysis of Predictive Relationship: Some Issues in Empirical Finance”

Summary: Current issues of empirical finance are discussed. The predictive relation between financial variables is examined under the vector ARMA structure, and the effects of outliers are investigated. It can be shown that it is not necessary to develop outlier detection under the vector ARMA framework. A procedure based on the univariate outlier detection of Chen and Liu (1993) is proposed to study the predictive relation in the presence of outliers. Several stock price series are analyzed to illustrate the procedure.

Friday, 5/14/99, 4:00-5:30 p.m., room 003 SOM: Fred Easton, “Optimizing Service Attributes: The Seller’s Utility Problem”

Summary: Normative techniques for product design problems seldom address services, and typically seek configurations that optimize market share or a surrogate.  Methods addressing the more challenging seller's profit criterion tend to suffer from one of two key limitations: either they limit their evaluation to a small number of attractive configurations with high predicted market share, or they assume that production costs are continuous and linear in output volume.

However, maximizing predicted market share for a service design does not assure maximum profit. Where services are labor intensive, direct labor costs often vary non-linearly and discontinuously over the ranges of key service attributes. Often, unit costs for a service's tangibles exhibit similar discontinuities and non-linearities, due to volume discounts or other complex interactions between service attribute choices. This paper addresses these limitations.

We present an ideal point heuristic that seeks the most profitable levels for a service's price and non-price attributes. It uses an accurate seller’s utility function, estimated by regressing expected profits on the attribute levels of a set of orthogonal service configurations, to reveal profitable attribute levels.  Our method allows linear, non-linear, discontinuous and interactive relationships between service attribute levels and service delivery costs, and employs separate cost functions for consumables, direct labor, and other key service expenses.

Using efficient fractional factorial designs, our model estimates the seller's utility function and isolates attribute ideals in seconds.  In a computational study with simulated service design problems, its recommended configurations returned an average of 98 percent of optimal.  By contrast, designs determined by a market share criterion returned less than 60 percent of the maximum profit.
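The estimation step described above, regressing profit on the attribute levels of an orthogonal design, can be sketched in Python with NumPy. Everything here is hypothetical: the fractional factorial, the profit figures, and the linear main-effects model stand in for the paper’s richer cost structure.

```python
import numpy as np

# Illustrative 2^(3-1) fractional factorial in three service attributes
# (coded -1/+1); the third column is generated as the product of the
# first two (the generator c = ab).
base = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
design = np.column_stack([base, base[:, 0] * base[:, 1]])

# Hypothetical profit observed for each configuration (e.g., simulated
# revenue minus nonlinear labor and consumables costs).
profit = np.array([10.0, 14.0, 12.0, 22.0])

# Estimate the seller's utility function by least squares: regress
# profit on an intercept plus the coded attribute levels.
X = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(X, profit, rcond=None)

# Ideal-point step: for a linear main-effects model, the most
# profitable level of each attribute is simply the sign of its
# estimated coefficient.
ideal = np.sign(coef[1:])
```

Because the design columns are orthogonal, the coefficients are recovered exactly and cheaply, which is what makes isolating attribute ideals “in seconds” plausible even for much larger fractional designs.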