Template-Type: ReDIF-Article 1.0 Author-Name: Jun Wu Author-X-Name-First: Jun Author-X-Name-Last: Wu Author-Name: Gang Du Author-X-Name-First: Gang Author-X-Name-Last: Du Author-Name: Roger J. Jiao Author-X-Name-First: Roger J. Author-X-Name-Last: Jiao Title: Dynamic postponement design for crowdsourcing in open manufacturing: A hierarchical joint optimization approach Abstract: Open manufacturing and crowdsourcing have been envisioned as a trend for industries to promote collaboration across different firms and support the sharing and exchange of knowledge and services throughout the value chain. Incorporating postponement decisions with the crowdsourcing strategy helps reduce the risk and uncertainty associated with product variety in an open manufacturing environment. The inherent coupling of product design and postponement decisions in an open manufacturing environment necessitates joint optimization of product family design and postponement planning. This article formulates a Dynamic Postponement Design (DPD) problem that considers an undefined product architecture that interacts with the postponement design according to optimal planning of open manufacturing activities. The DPD problem differs from traditional (static) postponement design models in that the latter assume that a (fixed) product architecture is available at the outset. This article develops a Hierarchical Joint Optimization (HJO) model based on Stackelberg game theory. The HJO model deploys a bi-level mixed 0-1 nonlinear programming decision structure to reveal the coupling of product design and postponement decisions. To solve the bi-level programming model, a nested genetic algorithm is developed and implemented. A case study of smart refrigerator design for postponement is reported to illustrate the DPD problem and the proposed HJO approach. Journal: IISE Transactions Pages: 255-275 Issue: 3 Volume: 52 Year: 2020 Month: 3 X-DOI: 10.1080/24725854.2019.1616858 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1616858 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:3:p:255-275 Template-Type: ReDIF-Article 1.0 Author-Name: Kan Wu Author-X-Name-First: Kan Author-X-Name-Last: Wu Author-Name: Meimei Zheng Author-X-Name-First: Meimei Author-X-Name-Last: Zheng Author-Name: Yichi Shen Author-X-Name-First: Yichi Author-X-Name-Last: Shen Title: A generalization of the Theory of Constraints: Choosing the optimal improvement option with consideration of variability and costs Abstract: The Theory of Constraints (TOC) was proposed in the mid-1980s and has significantly impacted productivity improvement in manufacturing systems. Although it is intuitive and easy to understand, its conclusions are mainly derived from deterministic settings or based on mean values. This article generalizes the concept of TOC to stochastic settings through the performance analysis of queueing systems and simulation studies. We show that, in stochastic settings, the conventional TOC may not be optimal, and a throughput bottleneck should be considered in certain types of machines at the planning stage. Incorporating system variability and improvement costs, this study develops the Generalized Process Of OnGoing Improvement (GPOOGI). It shows that improving a frontend machine in a production line can be more effective than improving the throughput bottleneck.
The findings indicate that we should consider the dependence among stations and the cost of improvement options during productivity improvement and should not simply improve the system bottleneck according to the conventional TOC. According to the GPOOGI, the managers of production systems would be able to make optimal decisions during the continuous improvement process. Journal: IISE Transactions Pages: 276-287 Issue: 3 Volume: 52 Year: 2020 Month: 3 X-DOI: 10.1080/24725854.2019.1632503 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1632503 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:3:p:276-287 Template-Type: ReDIF-Article 1.0 Author-Name: Linhan Ouyang Author-X-Name-First: Linhan Author-X-Name-Last: Ouyang Author-Name: Jianxiong Chen Author-X-Name-First: Jianxiong Author-X-Name-Last: Chen Author-Name: Yizhong Ma Author-X-Name-First: Yizhong Author-X-Name-Last: Ma Author-Name: Chanseok Park Author-X-Name-First: Chanseok Author-X-Name-Last: Park Author-Name: Jionghua (Judy) Jin Author-X-Name-First: Jionghua Author-X-Name-Last: (Judy) Jin Title: Bayesian closed-loop robust process design considering model uncertainty and data quality Abstract: Response-surface-based design optimization has been commonly used in Robust Process Design (RPD) to seek optimal process settings for minimizing the output variability around a target value. Recently, the online RPD strategy has attracted increasing research attention, as it is expected to provide a better performance than offline RPD by utilizing online process feedback to continuously adjust process settings during process operation. However, the lack of knowledge about process model parameter uncertainty and data quality in the online RPD decisions means that this superiority cannot be guaranteed. Motivated by this gap, this article presents a Bayesian approach for online RPD that provides systematic decisions on when and how to update the process model parameters for online process design optimization while accounting for data quality. The effectiveness of the proposed approach is illustrated with both simulation studies and a case study on a micro-milling process. The comparison results demonstrate that the proposed approach can achieve a better process performance than two conventional design approaches that do not consider the data quality and model parameter uncertainty. Journal: IISE Transactions Pages: 288-300 Issue: 3 Volume: 52 Year: 2020 Month: 3 X-DOI: 10.1080/24725854.2019.1636428 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1636428 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:3:p:288-300 Template-Type: ReDIF-Article 1.0 Author-Name: Cesar Ruiz Author-X-Name-First: Cesar Author-X-Name-Last: Ruiz Author-Name: Mohammadhossein Heydari Author-X-Name-First: Mohammadhossein Author-X-Name-Last: Heydari Author-Name: Kelly M. Sullivan Author-X-Name-First: Kelly M. Author-X-Name-Last: Sullivan Author-Name: Haitao Liao Author-X-Name-First: Haitao Author-X-Name-Last: Liao Author-Name: Ed Pohl Author-X-Name-First: Ed Author-X-Name-Last: Pohl Title: Data analysis and resource allocation in Bayesian selective accelerated reliability growth Abstract: The rapid pace of technology advancement has resulted in increasingly complex systems with more potential failure modes.
However, it is quite common that multiple key components of such a system may be developed, tested and improved independently during product development. Without taking a holistic approach to system reliability improvement, a significant amount of time and resources may be wasted on over-design of some components, which could otherwise be used to strengthen other under-designed components. The technical challenge is more prominent when accelerated testing is utilized in a reliability growth program in hopes of shortening the system development cycle. To overcome limitations of the traditional reliability growth method using the Crow-AMSAA model, a Bayesian selective accelerated reliability growth method is proposed in this article to accelerate potential failure modes and aggregate component testing results and prior knowledge for predicting system reliability growth and corrective actions. As one of the key steps, the method dynamically allocates limited resources for testing and correcting failures on all system levels. Numerical examples illustrate that the proposed integrated statistical and optimization method is effective in estimating and improving the overall reliability of a system. Journal: IISE Transactions Pages: 301-320 Issue: 3 Volume: 52 Year: 2020 Month: 3 X-DOI: 10.1080/24725854.2019.1567957 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1567957 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:3:p:301-320 Template-Type: ReDIF-Article 1.0 Author-Name: Yifu Li Author-X-Name-First: Yifu Author-X-Name-Last: Li Author-Name: Hongyue Sun Author-X-Name-First: Hongyue Author-X-Name-Last: Sun Author-Name: Xinwei Deng Author-X-Name-First: Xinwei Author-X-Name-Last: Deng Author-Name: Chuck Zhang Author-X-Name-First: Chuck Author-X-Name-Last: Zhang Author-Name: Hsu-Pin (Ben) Wang Author-X-Name-First: Hsu-Pin (Ben) Author-X-Name-Last: Wang Author-Name: Ran Jin Author-X-Name-First: Ran Author-X-Name-Last: Jin Title: Manufacturing quality prediction using smooth spatial variable selection estimator with applications in aerosol jet® printed electronics manufacturing Abstract: Additive manufacturing (AM) has advantages in terms of production cycle time, flexibility, and precision compared with traditional manufacturing. Spatial data, collected from optical cameras or in situ sensors, are widely used in various AM processes to quantify the product quality and reduce variability. However, it is challenging to extract useful information and features from spatial data for modeling, because of the increasing spatial resolutions and feature complexities due to the highly diversified nature of AM processes. Motivated by the aerosol jet® printing process in printed electronics, we propose a smooth spatial variable selection procedure to extract meaningful predictors from spatial contrast information in high-definition microscopic images to model the resistances of printed wires. The proposed method does not rely on extensive feature engineering, and has the generality to be applied to a variety of spatial data modeling problems. Simulations and a real case study show that the proposed method is accurate in both prediction and variable selection and yields results that are easy to interpret.
Journal: IISE Transactions Pages: 321-333 Issue: 3 Volume: 52 Year: 2020 Month: 3 X-DOI: 10.1080/24725854.2019.1593556 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1593556 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:3:p:321-333 Template-Type: ReDIF-Article 1.0 Author-Name: Babak Farmanesh Author-X-Name-First: Babak Author-X-Name-Last: Farmanesh Author-Name: Arash Pourhabib Author-X-Name-First: Arash Author-X-Name-Last: Pourhabib Title: Sparse pseudo-input local Kriging for large spatial datasets with exogenous variables Abstract: We study large-scale spatial systems that contain exogenous variables, e.g., environmental factors that are significant predictors in spatial processes. Building predictive models for such processes is challenging because the large number of observations makes it inefficient to apply full Kriging. In order to reduce computational complexity, this article proposes Sparse Pseudo-input Local Kriging (SPLK), which utilizes hyperplanes to partition a domain into smaller subdomains and then applies a sparse approximation of the full Kriging to each subdomain. We also develop an optimization procedure to find the desired hyperplanes. To alleviate the problem of discontinuity in the global predictor, we impose continuity constraints on the boundaries of the neighboring subdomains. Furthermore, partitioning the domain into smaller subdomains makes it possible to use different parameter values for the covariance function in each region and, therefore, the heterogeneity in the data structure can be effectively captured. Numerical experiments demonstrate that SPLK outperforms, or is comparable to, the algorithms commonly applied to spatial datasets. Journal: IISE Transactions Pages: 334-348 Issue: 3 Volume: 52 Year: 2020 Month: 3 X-DOI: 10.1080/24725854.2019.1624926 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1624926 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:3:p:334-348 Template-Type: ReDIF-Article 1.0 Author-Name: Xiaoyan Zhu Author-X-Name-First: Xiaoyan Author-X-Name-Last: Zhu Author-Name: Yuqiang Fu Author-X-Name-First: Yuqiang Author-X-Name-Last: Fu Author-Name: Tao Yuan Author-X-Name-First: Tao Author-X-Name-Last: Yuan Title: Optimum reassignment of degrading components for non-repairable systems Abstract: The components in a system can degrade differently because the operational loads, the environmental conditions, or both differ across their positions. Therefore, reassigning functionally exchangeable components among positions at an appropriate time can increase system reliability and extend system lifetime. In this article, a new component reassignment problem is proposed, and a mixed binary nonlinear programming model is built to determine the optimal reassignment time and optimal reassignment of degrading components with the objective of maximizing system lifetime. The model integrates continuous optimization and combinatorial optimization, and provides a framework for optimizing the component reassignment decisions for various degradation models and system structures. Furthermore, the optimization model and analytical results are derived for k-out-of-n systems of linearly degrading components or exponentially degrading components.
The analytical results reduce the complexity of solving the optimization model and are used to design an efficient solution method. Finally, numerical examples in a power supply system demonstrate applications of the optimization model and further provide managerial insights for the component reassignment problem of degrading components. Journal: IISE Transactions Pages: 349-361 Issue: 3 Volume: 52 Year: 2020 Month: 3 X-DOI: 10.1080/24725854.2019.1628373 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1628373 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:3:p:349-361 Template-Type: ReDIF-Article 1.0 Author-Name: Ge Yu Author-X-Name-First: Ge Author-X-Name-Last: Yu Author-Name: Sheldon Howard Jacobson Author-X-Name-First: Sheldon Howard Author-X-Name-Last: Jacobson Author-Name: Negar Kiyavash Author-X-Name-First: Negar Author-X-Name-Last: Kiyavash Title: A bi-criteria multiple-choice secretary problem Abstract: This article studies a Bi-criteria Multiple-choice Secretary Problem (BMSP) with full information. Candidates arrive one at a time, with a two-dimensional attribute vector revealed upon arrival. A decision maker needs to select a total of η candidates to fill η job openings, based on the attribute vectors of the candidates. The objective of the decision maker is to maximize the expected sum of attribute values of selected candidates for both dimensions of the attribute vector. An approach for generating Pareto-optimal policies for BMSP is proposed using the weighted sum method. Moreover, closed-form expressions for values of both objective functions under Pareto-optimal policies for BMSP are provided to help a decision maker in the policy planning stage. These analysis techniques can be applied directly to solve the more general class of multi-criteria multiple-choice Secretary Problems, provided the objective functions are in the form of accumulating a product-form reward for each selected candidate. Journal: IISE Transactions Pages: 577-588 Issue: 6 Volume: 51 Year: 2019 Month: 6 X-DOI: 10.1080/24725854.2018.1516054 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1516054 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:6:p:577-588 Template-Type: ReDIF-Article 1.0 Author-Name: Qing-Mi Hu Author-X-Name-First: Qing-Mi Author-X-Name-Last: Hu Author-Name: Laijun Zhao Author-X-Name-First: Laijun Author-X-Name-Last: Zhao Author-Name: Huiyong Li Author-X-Name-First: Huiyong Author-X-Name-Last: Li Author-Name: Rongbing Huang Author-X-Name-First: Rongbing Author-X-Name-Last: Huang Title: Integrated design of emergency shelter and medical networks considering diurnal population shifts in urban areas Abstract: This article addresses an emergency shelter and medical network design problem by integrating evacuation and medical service activities and considering diurnal population shifts to respond to large-scale natural disasters in urban areas. A multi-objective mixed-integer programming model that incorporates the characteristics of diurnal population shifts is developed to determine the configuration of the integrated emergency shelter and medical network. An accelerated Benders decomposition algorithm is then devised to solve large-scale problems in reasonable time.
A realistic case study on the Xuhui District of Shanghai, China, and extensive numerical experiments are presented to demonstrate the effectiveness of the proposed model and solution method. Computational results suggest that more emergency shelters and emergency medical centers should be established when accounting for diurnal population shifts than when diurnal population shifts are not considered. The accelerated Benders decomposition algorithm is significantly more time-efficient than the CPLEX solver. Journal: IISE Transactions Pages: 614-637 Issue: 6 Volume: 51 Year: 2019 Month: 6 X-DOI: 10.1080/24725854.2018.1519744 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1519744 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:6:p:614-637 Template-Type: ReDIF-Article 1.0 Author-Name: Liu Su Author-X-Name-First: Liu Author-X-Name-Last: Su Author-Name: Longsheng Sun Author-X-Name-First: Longsheng Author-X-Name-Last: Sun Author-Name: Mark Karwan Author-X-Name-First: Mark Author-X-Name-Last: Karwan Author-Name: Changhyun Kwon Author-X-Name-First: Changhyun Author-X-Name-Last: Kwon Title: Spectral risk measure minimization in hazardous materials transportation Abstract: Due to catastrophic consequences of potential accidents in hazardous materials (hazmat) transportation, a risk-averse approach for routing is necessary. In this article, we consider spectral risk measures for risk-averse hazmat routing, which overcome challenges posed by existing approaches such as conditional value-at-risk. In spectral risk measures, one can define the spectrum function precisely to reflect the decision maker’s risk preference. We show that spectral risk measures can provide a unified routing framework for popular existing hazmat routing methods based on expected risk, maximum risk, and conditional value-at-risk. We first consider a special class of spectral risk measures, for which the spectrum function is represented as a step function. We develop a mixed-integer linear programming model in hazmat routing to minimize these special spectral risk measures and propose an efficient search algorithm to solve the problem. For general classes of spectral risk measures, we suggest approximation methods and path-based approaches. We propose an optimization procedure to approximate general spectrum functions using a step function. We illustrate the usage of spectral risk measures and the proposed computational approaches using data from real road networks. Journal: IISE Transactions Pages: 638-652 Issue: 6 Volume: 51 Year: 2019 Month: 6 X-DOI: 10.1080/24725854.2018.1530488 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1530488 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:6:p:638-652 Template-Type: ReDIF-Article 1.0 Author-Name: Gino J. Lim Author-X-Name-First: Gino J. Author-X-Name-Last: Lim Author-Name: Mukesh Rungta Author-X-Name-First: Mukesh Author-X-Name-Last: Rungta Author-Name: Ayda Davishan Author-X-Name-First: Ayda Author-X-Name-Last: Davishan Title: A robust chance constraint programming approach for evacuation planning under uncertain demand distribution Abstract: This study focuses on an evacuation planning problem where the number of actual evacuees (demand) is unknown at the planning phase.
In the context of mass evacuation, we assume that only partial information about the demand distribution (i.e., moment, support, or symmetry) is known, as opposed to the exact distribution assumed in a stochastic environment. To address this issue, robust approximations of chance-constrained problems are explored to model traffic demand uncertainty in evacuation networks. Specifically, a distributionally robust chance-constrained model is proposed to ensure a reliable evacuation plan (start time, path selection, and flow assignment) in which the vehicle demand constraints are satisfied for any probability distribution consistent with the known properties of the underlying unknown evacuation demand. Using a path-based model, the minimum clearance time is found for the evacuation problem under partial information of the random demand. Numerical experiments show that the proposed approach works well in terms of solution feasibility and robustness compared with the solution provided by a chance-constrained programming model under the assumption that the demand distribution follows a known probability distribution. Journal: IISE Transactions Pages: 589-604 Issue: 6 Volume: 51 Year: 2019 Month: 6 X-DOI: 10.1080/24725854.2018.1533675 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1533675 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:6:p:589-604 Template-Type: ReDIF-Article 1.0 Author-Name: Boxiao Chen Author-X-Name-First: Boxiao Author-X-Name-Last: Chen Author-Name: Xiuli Chao Author-X-Name-First: Xiuli Author-X-Name-Last: Chao Title: Parametric demand learning with limited price explorations in a backlog stochastic inventory system Abstract: We study a multi-period stochastic inventory system with backlogs. Demand in each period is random and price sensitive, but the firm has little or no prior knowledge about the demand distribution and how each customer responds to the selling price, so the firm has to learn the demand process when making periodic pricing and inventory replenishment decisions to maximize its expected total profit. We consider the scenario where the firm is faced with a business constraint that prevents it from conducting extensive price exploration, and develop parametric data-driven algorithms for pricing and inventory decisions. We measure the performances of the algorithms by regret, which is the profit loss compared with a clairvoyant who has complete information about the demand distribution. We analyze the cases where the number of price changes is restricted to a given number or a small number relative to the planning horizon, and show that the regrets for the corresponding learning algorithms converge at the best possible rates in the sense that they reach the theoretical lower bounds. Numerical results indicate that these algorithms empirically perform very well. Supplementary materials are available for this article. Go to the publisher’s online edition of IISE Transactions for datasets, additional tables, detailed proofs, etc. Journal: IISE Transactions Pages: 605-613 Issue: 6 Volume: 51 Year: 2019 Month: 6 X-DOI: 10.1080/24725854.2018.1538594 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1538594 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:6:p:605-613 Template-Type: ReDIF-Article 1.0 Author-Name: Ye Shi Author-X-Name-First: Ye Author-X-Name-Last: Shi Author-Name: Layth C.
Alwan Author-X-Name-First: Layth C. Author-X-Name-Last: Alwan Author-Name: Christopher Tang Author-X-Name-First: Christopher Author-X-Name-Last: Tang Author-Name: Xiaohang Yue Author-X-Name-First: Xiaohang Author-X-Name-Last: Yue Title: A newsvendor model with autocorrelated demand under a time-consistent dynamic CVaR measure Abstract: As a result of autocorrelation, static risk measures such as value at risk and Conditional Value at Risk (CVaR) are time inconsistent and can thus result in inconsistent decisions over time. In this article, we present a time-consistent dynamic CVaR measure and examine it in the context of a newsvendor problem with autocorrelated demand. Due to the concavity of our CVaR measure, the dynamic program formulation associated with our dynamic newsvendor problem is not immediately separable. However, by exploring certain properties of the dynamic CVaR measure and underlying profit function, our dynamic program can be transformed into a sequence of (single-period) risk-averse newsvendor problems that depend on the observed demand history. By examining the structure of the optimal order quantities, we find both intuitive and counterintuitive results. When demands are positively correlated, the optimal order quantity is monotonically increasing in the degree of risk aversion. However, when demands are negatively correlated and the underlying cost structure satisfies certain conditions, the optimal order quantity is no longer monotonically increasing in the degree of risk aversion. Instead, the optimal order quantity is a decreasing (increasing) function of the degree of risk aversion when it is below (above) a certain threshold. We also show that these results continue to hold when demands follow a general ARMA process, and when inventory carryover is considered. Journal: IISE Transactions Pages: 653-671 Issue: 6 Volume: 51 Year: 2019 Month: 6 X-DOI: 10.1080/24725854.2018.1539888 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1539888 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:6:p:653-671 Template-Type: ReDIF-Article 1.0 Author-Name: Moutaz Khouja Author-X-Name-First: Moutaz Author-X-Name-Last: Khouja Author-Name: Jing Zhou Author-X-Name-First: Jing Author-X-Name-Last: Zhou Title: Early sale of seasonal inventory in the newsvendor problem Abstract: Off-price retailers buy excess inventory from manufacturers and retailers and offer it at discounts to consumers. Off-price retailers want a larger assortment of trendy products, which provides retailers with a way to sell excess inventory. We analyze a supply chain of a manufacturer who sells a product to a retailer. Upon realizing demand, the retailer can sell excess inventory to the off-price retailer. The retailer and off-price retailer have their exclusive consumer segments and share a dual segment. We find that adding the off-price retailer increases the manufacturer’s optimal expected profit. Interestingly, selling inventory to the off-price retailer may decrease or increase the retailer’s optimal expected profit. The retailer’s optimal expected profit increases when the off-price retailer has a large exclusive consumer segment. Also, centralization in a supply chain with an off-price retailer leads to a larger increase in the optimal order quantity and expected profit compared with centralization in a supply chain without an off-price retailer.
The off-price retailer can be worse off when it shares a large consumer segment with the retailer. Finally, treating the off-price retailer’s consumer segment demand as a random variable has only a small effect on the order quantity, the wholesale price, and profits. Journal: IISE Transactions Pages: 672-689 Issue: 6 Volume: 51 Year: 2019 Month: 6 X-DOI: 10.1080/24725854.2018.1550824 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1550824 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:6:p:672-689 Template-Type: ReDIF-Article 1.0 Author-Name: Benjamin Legros Author-X-Name-First: Benjamin Author-X-Name-Last: Legros Author-Name: Oualid Jouini Author-X-Name-First: Oualid Author-X-Name-Last: Jouini Author-Name: Ger Koole Author-X-Name-First: Ger Author-X-Name-Last: Koole Title: Blended call center with idling times during the call service Abstract: We consider a blended call center with calls arriving over time and an infinite backlog of outbound jobs. Inbound calls have a non-preemptive priority over outbound jobs. The inbound call service is characterized by three successive stages where the second one is a break; i.e., there is no required interaction between the customer and the agent for a non-negligible duration. This leads to a new opportunity to efficiently split the agent’s time between inbound calls and outbound jobs. We focus on the optimization of the outbound job routing to agents. The objective is to maximize the expected throughput of outbound jobs subject to a constraint on the inbound call waiting time. We develop a general framework with two parameters for the outbound job routing to agents. One parameter controls the routing between calls and the other performs the control inside a call. We then derive structural results with regard to the optimization problem and numerically illustrate them. Various guidelines to call center managers are provided. In particular, we prove that, under the optimal routing, at least one of the two outbound job routing parameters takes an extreme value. Journal: IISE Transactions Pages: 279-297 Issue: 4 Volume: 50 Year: 2018 Month: 4 X-DOI: 10.1080/24725854.2017.1387318 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1387318 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:4:p:279-297 Template-Type: ReDIF-Article 1.0 Author-Name: Zhengwei Sun Author-X-Name-First: Zhengwei Author-X-Name-Last: Sun Author-Name: Ali E. Abbas Author-X-Name-First: Ali E. Author-X-Name-Last: Abbas Title: Pareto optimality and risk sharing in group utility functions Abstract: The Pareto optimality condition is a widely used assumption in group decision making. The condition requires that if each individual in the group prefers one alternative to another, then the group as a whole should also prefer that alternative. This condition implies that the group utility function is an additive combination of the individual utility functions of the members of the group. We argue that Pareto optimality is a desirable property for deterministic decisions but that it need not be desirable for lotteries. We show, for example, that Pareto optimality need not be a desirable property for risk sharing or partnerships.
We then present a new condition, which we refer to as “independence of indifferent group members.” We show that it is a weaker condition than Pareto optimality and derive the corresponding functional form of the group utility function. Journal: IISE Transactions Pages: 298-306 Issue: 4 Volume: 50 Year: 2018 Month: 4 X-DOI: 10.1080/24725854.2017.1394601 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1394601 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:4:p:298-306 Template-Type: ReDIF-Article 1.0 Author-Name: Amihai Glazer Author-X-Name-First: Amihai Author-X-Name-Last: Glazer Author-Name: Refael Hassin Author-X-Name-First: Refael Author-X-Name-Last: Hassin Author-Name: Liron Ravner Author-X-Name-First: Liron Author-X-Name-Last: Ravner Title: A strategic model of job arrivals to a single machine with earliness and tardiness penalties Abstract: We consider a game of decentralized timing of jobs to a single server (machine) with a penalty for deviation from a due date, and no delay costs. The jobs’ sizes are homogeneous and deterministic. Each job belongs to a single decision maker, a customer, who aims to arrive at a time that minimizes his or her deviation penalty. If multiple customers arrive at the same time, then their order of service is determined by a uniform random draw. We show that if the cost function has a weighted absolute deviation form, then any Nash equilibrium is pure and symmetric, that is, all customers arrive together. Furthermore, we show that there exist multiple equilibrium arrival times (in fact, a continuum), and provide necessary and sufficient conditions for the socially optimal arrival time to be an equilibrium. The base model is solved explicitly, but the prevalence of a pure symmetric equilibrium is shown to be robust to several relaxations of the assumptions: restricted server availability, inclusion of small waiting costs, stochastic job sizes, randomly sized population, heterogeneous due dates, and nonlinear deviation penalties. Journal: IISE Transactions Pages: 265-278 Issue: 4 Volume: 50 Year: 2018 Month: 4 X-DOI: 10.1080/24725854.2017.1395098 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1395098 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:4:p:265-278 Template-Type: ReDIF-Article 1.0 Author-Name: Michael P. Atkinson Author-X-Name-First: Michael P. Author-X-Name-Last: Atkinson Author-Name: Moshe Kress Author-X-Name-First: Moshe Author-X-Name-Last: Kress Title: Operating with an incomplete checklist Abstract: We consider a time-critical operation that is contingent on completing a preliminary set of actions in a checklist. Aerial combat missions, emergency surgeries, launching a new product, and rescuing hostages are a few examples of such situations. The operation may be executed before the full checklist is completed but then it may fail. The failure probability depends on the uncompleted actions. The question is when to abort the checklist and initiate the operation. In this article, we study this problem and prove that in certain realistic cases a simple myopic approach is optimal. Journal: IISE Transactions Pages: 307-315 Issue: 4 Volume: 50 Year: 2018 Month: 4 X-DOI: 10.1080/24725854.2017.1395977 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1395977 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:50:y:2018:i:4:p:307-315 Template-Type: ReDIF-Article 1.0 Author-Name: Nail Orkun Baycik Author-X-Name-First: Nail Orkun Author-X-Name-Last: Baycik Author-Name: Thomas C. Sharkey Author-X-Name-First: Thomas C. Author-X-Name-Last: Sharkey Author-Name: Chase E. Rainwater Author-X-Name-First: Chase E. Author-X-Name-Last: Rainwater Title: Interdicting layered physical and information flow networks Abstract: This article focuses on the problem of interdicting layered networks that involve a physical flow network and an information flow network. There exist dependencies between these networks, since components of the physical flow network are operational only if their counterparts in the information flow network receive enough demand. This leads to a network interdiction problem over these layered networks. The objective of the defender is to send the maximum amount of flow through its physical flow network. The objective of the attacker is to interdict components within the layered networks to minimize this maximum flow. For the case where the information supply arcs are uncapacitated, we apply a novel multi-step, dual-based reformulation technique. We apply this reformulation technique to two applications in order to provide policy-driven analysis: law enforcement efforts against illegal drug trafficking networks and cyber vulnerability analysis of infrastructure and supply chain networks. The computational results show that our reformulation technique outperforms the traditional duality-based reformulation technique by orders of magnitude. This allows us to analyze instances of realistic size. Journal: IISE Transactions Pages: 316-331 Issue: 4 Volume: 50 Year: 2018 Month: 4 X-DOI: 10.1080/24725854.2017.1401754 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1401754 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:4:p:316-331 Template-Type: ReDIF-Article 1.0 Author-Name: Hadi Karimi Author-X-Name-First: Hadi Author-X-Name-Last: Karimi Author-Name: Sandra Duni Ekşioğlu Author-X-Name-First: Sandra Duni Author-X-Name-Last: Ekşioğlu Author-Name: Amin Khademi Author-X-Name-First: Amin Author-X-Name-Last: Khademi Title: Analyzing tax incentives for producing renewable energy by biomass cofiring Abstract: This article examines the impacts of governmental incentives for coal-fired power plants to generate renewable energy via biomass cofiring technology. The most common incentive is the Production Tax Credit (PTC), a flat-rate reimbursement for each unit of renewable energy generated. The work presented here proposes PTC alternatives, incentives that are functions of plant capacity and the biomass cofiring ratio. The capacity-based incentives favor plants of small capacity, whereas the ratio-based incentives favor plants that cofire larger percentages of biomass. Following a resource allocation perspective, this article evaluates the impacts of alternative PTC schemes on biomass utilization and power plants’ profit-earning potentials. The efficiency of these incentive schemes is evaluated by comparing with a reference profit optimization model that finds a distribution of credits that maximizes the total profits in the system. To evaluate the fairness of the proposed schemes, the results of the max–min fairness solution are used as a basis. A realistic case study, developed with data pertaining to the southeastern
United States, suggests how total system costs and efforts to generate renewable energy are impacted by both the existing and proposed incentives. The observations presented in this study provide helpful insights to policymakers in designing effective incentive schemes that promote biomass cofiring. Journal: IISE Transactions Pages: 332-344 Issue: 4 Volume: 50 Year: 2018 Month: 4 X-DOI: 10.1080/24725854.2017.1401755 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1401755 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:4:p:332-344 Template-Type: ReDIF-Article 1.0 Author-Name: Gökçe Palak Author-X-Name-First: Gökçe Author-X-Name-Last: Palak Author-Name: Sandra Duni Ekşioğlu Author-X-Name-First: Sandra Duni Author-X-Name-Last: Ekşioğlu Author-Name: Joseph Geunes Author-X-Name-First: Joseph Author-X-Name-Last: Geunes Title: Heuristic algorithms for inventory replenishment with perishable products and multiple transportation modes Abstract: This study extends classic economic lot-sizing problems to permit the replenishment of age-dependent perishable inventories via multiple transportation modes. Inventory replenishment costs include a multiple-setup cost function that considers order setup, purchase, and cargo container costs. The objective is to identify the timing of orders, order quantities, and a choice from among I transportation modes that minimizes the cost of replenishing perishable inventories during a planning horizon of length T. We present a mixed-integer programming formulation of this problem and characterize properties of optimal solutions. We propose a primal-dual heuristic algorithm that runs in O(IT^2) time. In addition, we provide heuristic algorithms for two special cases of the problem involving one or two replenishment modes. For the single replenishment mode problem, we propose (i) a dynamic programming algorithm that explores solutions that satisfy the Zero Inventory Ordering Policy and runs in O(T^2) time and (ii) a dynamic programming algorithm that explores solutions that satisfy the Less-than-Truckload first positioning property and runs in O(T^3) time. For the two replenishment mode problem, we present a knapsack-based algorithm that identifies the minimum number of cargo containers required to meet demand. The running time of this algorithm is O(T^2). We evaluate the quality of the solutions generated by these different approaches via extensive numerical analyses. Journal: IISE Transactions Pages: 345-365 Issue: 4 Volume: 50 Year: 2018 Month: 4 X-DOI: 10.1080/24725854.2017.1405296 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1405296 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:4:p:345-365 Template-Type: ReDIF-Article 1.0 Author-Name: Nicola Secomandi Author-X-Name-First: Nicola Author-X-Name-Last: Secomandi Title: An improved basket of spread options heuristic for merchant energy storage Abstract: Practitioners use the Basket of Spread Options (BSO) heuristic to model merchant energy storage as a portfolio of spread options and spot/forward sales. This method solves a linear program to obtain the composition of this portfolio and its associated BSO policy. Sequential reoptimization of this model yields the Rolling BSO (RBSO) policy. Although this policy performs well, typically dominating the BSO policy and often being near optimal, it can struggle when storage is fast.
To attempt to obtain an improved RBSO policy, especially for fast storage, this article proposes a BSO heuristic that modifies the objective function of the BSO linear program based on exchange option prices and a tunable parameter. On a set of known natural gas storage instances, limited optimization of this adjustable quantity leads to modestly improved RBSO policies on average but substantially so when the original RBSO policies perform poorly, which occurs on some fast storage instances. Moreover, fixing this parameter to 0.6 gives RBSO policies that virtually match the performance of the best considered RBSO policies. The proposed BSO heuristic is thus as easy to use in practice as the original BSO heuristic. Journal: IISE Transactions Pages: 645-653 Issue: 8 Volume: 50 Year: 2018 Month: 8 X-DOI: 10.1080/24725854.2017.1336685 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1336685 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:8:p:645-653 Template-Type: ReDIF-Article 1.0 Author-Name: Yongpei Guan Author-X-Name-First: Yongpei Author-X-Name-Last: Guan Author-Name: Kai Pan Author-X-Name-First: Kai Author-X-Name-Last: Pan Author-Name: Kezhuo Zhou Author-X-Name-First: Kezhuo Author-X-Name-Last: Zhou Title: Polynomial time algorithms and extended formulations for unit commitment problems Abstract: Recently, increasing penetration of renewable energy generation has created challenges for power system operators to perform efficient power generation daily scheduling, due to the intermittent nature of the renewable generation and discrete decisions of each generation unit. Among all aspects to be considered, a unit commitment polytope is fundamental and embedded in the models at different stages of power system planning and operations. In this article, we focus on deriving polynomial-time algorithms for the unit commitment problems with a general convex cost function and piecewise linear cost function, respectively. We refine an $\mathcal{O}(T^3)$-time algorithm, where T represents the number of time periods, for the deterministic single-generator unit commitment problem with a general convex cost function and accordingly develop an extended formulation in a higher-dimensional space that can provide an integral solution, in which the physical meanings of the decision variables are described. This means the original problem can be solved as a convex program instead of a mixed-integer convex program. Furthermore, for the case in which the cost function is piecewise linear, by exploring the optimality conditions, we derive more efficient algorithms for both deterministic (i.e., $\mathcal{O}(T)$ time) and stochastic (i.e., $\mathcal{O}(N)$ time, where N represents the number of nodes in the stochastic scenario tree) single-generator unit commitment problems. We also develop the corresponding extended formulations for both deterministic and stochastic single-generator unit commitment problems that solve the original mixed-integer linear programs as linear programs. Similarly, physical meanings of the decision variables are explored to show the insights of the new modeling approach. Journal: IISE Transactions Pages: 735-751 Issue: 8 Volume: 50 Year: 2018 Month: 8 X-DOI: 10.1080/24725854.2017.1397303 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1397303 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:50:y:2018:i:8:p:735-751 Template-Type: ReDIF-Article 1.0 Author-Name: Somayeh Moazeni Author-X-Name-First: Somayeh Author-X-Name-Last: Moazeni Author-Name: Boris Defourny Author-X-Name-First: Boris Author-X-Name-Last: Defourny Title: Optimal control of energy storage under random operation permissions Abstract: This article studies the optimal control of energy storage when operations are permitted only at random times. At the arrival of a permission, the storage operator has the option, but not the obligation, to transact. A nonlinear pricing structure incentivizes small transactions spread out among arrivals, instead of a single unscheduled massive transaction, which could stress the energy delivery system. The problem of optimizing storage operations to maximize the expected cumulated revenue over a finite horizon is modeled as a piecewise deterministic Markov decision process. Various properties of the value function and the optimal storage operation policy are established, first when permission times follow a Poisson process and then for permissions arriving as a self-exciting point process. The sensitivity of the value function and optimal policy to the permission arrival process parameters is studied as well. A numerical scheme to compute the optimal policy is developed and employed to illustrate the theoretical results. Current distribution systems cannot support simultaneous and identical actions by a large number of agents reacting to an identical signal. This motivates transactive market frameworks in which agents’ access to transactions is restricted; the optimal policy of an agent under such a restriction is therefore important to study. Being able to act at the random arrival of permissions and to act under a nonlinear pricing structure are salient characteristics differentiating this study from existing work on energy storage optimization. Journal: IISE Transactions Pages: 668-682 Issue: 8 Volume: 50 Year: 2018 Month: 8 X-DOI: 10.1080/24725854.2017.1401756 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1401756 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:8:p:668-682 Template-Type: ReDIF-Article 1.0 Author-Name: Arnab Bhattacharya Author-X-Name-First: Arnab Author-X-Name-Last: Bhattacharya Author-Name: Jeffrey P. Kharoufeh Author-X-Name-First: Jeffrey P. Author-X-Name-Last: Kharoufeh Author-Name: Bo Zeng Author-X-Name-First: Bo Author-X-Name-Last: Zeng Title: Structured storage policies for energy distribution networks Abstract: We consider the problem of dynamically controlling a two-bus energy distribution network with energy storage capabilities. An operator seeks to dynamically adjust the amount of energy to charge to, or discharge from, energy storage devices in response to randomly evolving demand, renewable supply, and prices. The objective is to minimize the expected total discounted costs incurred within the network over a finite planning horizon. We formulate a Markov decision process model that prescribes the optimal amount of energy to charge or discharge and transmit between the two buses during each stage of the planning horizon. We establish the multimodularity of the value function and the monotonicity of the optimal policy in the energy storage levels. We also show that the optimal operational cost is convex and monotone in the storage capacities.
Furthermore, we establish bounds on the optimal cost by analyzing comparable single-storage systems with pooled and decentralized storage configurations, respectively. These results extend to more general multi-bus network topologies. Numerical examples illustrate the main results and highlight the significance of interacting demand-side entities. Journal: IISE Transactions Pages: 683-698 Issue: 8 Volume: 50 Year: 2018 Month: 8 X-DOI: 10.1080/24725854.2018.1440670 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1440670 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:8:p:683-698 Template-Type: ReDIF-Article 1.0 Author-Name: Prajwal Khadgi Author-X-Name-First: Prajwal Author-X-Name-Last: Khadgi Author-Name: Lihui Bai Author-X-Name-First: Lihui Author-X-Name-Last: Bai Title: A simulation study for residential electricity user behavior under dynamic variable pricing with demand charge Abstract: Demand Response (DR) has long been proposed and implemented as a form of load management to increase energy efficiency and improve system load factors in electricity distribution systems. Various pricing structures incentivizing consumers to shift energy consumption from on-peak to off-peak periods are evident in this field. Most DR methods currently used in practice belong to static variable pricing (e.g., Time of Use, Critical Peak Pricing) and the impact of such tariffs has been well established. However, dynamic variable pricing in general is less studied and much less practiced in the field, due to the lack of understanding of consumer behavior in response to price uncertainty. In this article, we study a novel dynamic variable pricing scheme that uses the coincident demand charge to reduce load consumption during peak events. We employ a multi-attribute utility function and model predictive control to simulate utility-maximizing consumer behavior in home energy consumption. We use a conditional Markov chain to model and predict the system peak. The effects of the proposed residential electricity rate based on the coincident demand charge are compared with those of other pricing schemes through simulations validated with real-world residential load profiles. Finally, we extend the simulations to study the impact of integrating renewable solar production in a DR program. Journal: IISE Transactions Pages: 699-710 Issue: 8 Volume: 50 Year: 2018 Month: 8 X-DOI: 10.1080/24725854.2018.1440671 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1440671 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:8:p:699-710 Template-Type: ReDIF-Article 1.0 Author-Name: Yingjue Zhou Author-X-Name-First: Yingjue Author-X-Name-Last: Zhou Author-Name: Tieming Liu Author-X-Name-First: Tieming Author-X-Name-Last: Liu Author-Name: Chaoyue Zhao Author-X-Name-First: Chaoyue Author-X-Name-Last: Zhao Title: Backup capacity coordination with renewable energy certificates in a regional electricity market Abstract: This article studies a coordination mechanism between a renewable energy supplier and a conventional supplier in a regional electricity market. The intermittent nature of the renewable supplier results in random power shortages.
Although the renewable supplier can buy backup power from a conventional supplier who prepares backup capacity to cover the shortage, without incentives there is no guarantee that the conventional supplier will prepare enough backup capacity. We design a coordination mechanism where the renewable supplier offers the conventional supplier Renewable Energy Certificates (RECs) proportional to the backup capacity committed. We prove that this mechanism coordinates the conventional supplier’s decision on backup capacity and can arbitrarily split the system profit between the two suppliers. Our analytical results show that when the shortage cost increases, the backup capacity increases, the REC offering rate increases, the total profit decreases, and the renewable supplier’s profit decreases but the conventional supplier’s profit increases. We also show analytically that the social welfare under this mechanism is higher than in the decentralized case unless the regional environment is extremely sensitive to conventional power’s carbon footprint, and the benefit of buffering power shortage cannot compensate for the damage to the environment. Journal: IISE Transactions Pages: 711-719 Issue: 8 Volume: 50 Year: 2018 Month: 8 X-DOI: 10.1080/24725854.2018.1440672 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1440672 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:8:p:711-719 Template-Type: ReDIF-Article 1.0 Author-Name: Yiduo Zhan Author-X-Name-First: Yiduo Author-X-Name-Last: Zhan Author-Name: Qipeng P. Zheng Author-X-Name-First: Qipeng P. Author-X-Name-Last: Zheng Title: A multistage decision-dependent stochastic bilevel programming approach for power generation investment expansion planning Abstract: In this article, we study the long-term power generation investment expansion planning problem under uncertainty. We propose a bilevel optimization model that includes an upper-level multistage stochastic expansion planning problem and a collection of lower-level economic dispatch problems. This model seeks the optimal sizing and siting of both thermal and wind power units to be built, so as to maximize the expected profit for a profit-oriented power generation investor. To address the future uncertainties in the decision-making process, this article employs a decision-dependent stochastic programming approach. In the scenario tree, we calculate the non-stationary transition probabilities based on discrete choice theory and the economies of scale theory in electricity systems. The model is further reformulated as a single-level optimization problem and solved by decomposition algorithms. The investment decisions, computation times, and optimality of the decision-dependent model are evaluated by case studies on IEEE reliability test systems. The results show that the proposed decision-dependent model provides effective investment plans for long-term power generation expansion planning. Journal: IISE Transactions Pages: 720-734 Issue: 8 Volume: 50 Year: 2018 Month: 8 X-DOI: 10.1080/24725854.2018.1442032 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1442032 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:50:y:2018:i:8:p:720-734 Template-Type: ReDIF-Article 1.0 Author-Name: Pelin Cay Author-X-Name-First: Pelin Author-X-Name-Last: Cay Author-Name: Ali Esmali Author-X-Name-First: Ali Author-X-Name-Last: Esmali Author-Name: Camilo Mancilla Author-X-Name-First: Camilo Author-X-Name-Last: Mancilla Author-Name: Robert H. Storer Author-X-Name-First: Robert H. Author-X-Name-Last: Storer Author-Name: Luis F. Zuluaga Author-X-Name-First: Luis F. Author-X-Name-Last: Zuluaga Title: Solutions with performance guarantees on tactical decisions for industrial gas network problems Abstract: In the gas distribution industry, creating a tactical strategy to meet customer demand while meeting the physical constraints in a gas pipeline network leads to complex and challenging optimization problems due to the non-linearity, non-convexity, and combinatorial nature of the corresponding mathematical formulation of the problem. In this article, we study the performance of different approaches presented in the literature to solve both natural gas and industrial gas problems to either find globally optimal solutions or determine the optimality gap between a locally optimal solution and a valid lower bound for the problem’s objective. In addition to those considered in the literature, we consider alternative reformulations of the operational-level gas pipeline optimization problem. The performance of these alternative reformulations varies in terms of the optimality gap provided for a feasible solution of the problem and their solution time. In industry-sized problem instances, significant improvements are possible compared with solving the standard formulation of the problem. Journal: IISE Transactions Pages: 654-667 Issue: 8 Volume: 50 Year: 2018 Month: 8 X-DOI: 10.1080/24725854.2018.1443233 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1443233 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:8:p:654-667 Template-Type: ReDIF-Article 1.0 Author-Name: Natarajan Gautam Author-X-Name-First: Natarajan Author-X-Name-Last: Gautam Author-Name: Yongpei Guan Author-X-Name-First: Yongpei Author-X-Name-Last: Guan Title: Contributions to energy systems modeling and analytics Journal: IISE Transactions Pages: 643-644 Issue: 8 Volume: 50 Year: 2018 Month: 8 X-DOI: 10.1080/24725854.2018.1454230 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1454230 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:8:p:643-644 Template-Type: ReDIF-Article 1.0 Author-Name: Gonen Singer Author-X-Name-First: Gonen Author-X-Name-Last: Singer Author-Name: Eugene Khmelnitsky Author-X-Name-First: Eugene Author-X-Name-Last: Khmelnitsky Title: A finite-horizon, stochastic optimal control policy for a production–inventory system with backlog-dependent lost sales Abstract: This article considers a problem of optimal production control of a one-product-type production–inventory system. The demand is a discrete-time stochastic process, while production, which has limited capacity, is continuous in time. The dynamics of the inventory over a finite time horizon are investigated and an optimal feedback strategy for production control under a backlog-dependent lost sales effect is developed.
This strategy is called a target level strategy since, contrary to the existing production–inventory control strategies, it determines an optimal target inventory level as a function of the time period and the initial stock at that period. Numerical examples illustrate the applicability of the developed solutions. Journal: IIE Transactions Pages: 855-864 Issue: 12 Volume: 42 Year: 2010 X-DOI: 10.1080/0740817X.2010.491498 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.491498 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:12:p:855-864 Template-Type: ReDIF-Article 1.0 Author-Name: Yue Zhang Author-X-Name-First: Yue Author-X-Name-Last: Zhang Author-Name: Oded Berman Author-X-Name-First: Oded Author-X-Name-Last: Berman Author-Name: Patrice Marcotte Author-X-Name-First: Patrice Author-X-Name-Last: Marcotte Author-Name: Vedat Verter Author-X-Name-First: Vedat Author-X-Name-Last: Verter Title: A bilevel model for preventive healthcare facility network design with congestion Abstract: Preventive healthcare aims at reducing the likelihood and severity of potentially life-threatening illnesses by protection and early detection. The level of participation in preventive healthcare programs is a critical determinant of their effectiveness and efficiency. This article presents a methodology for designing a network of preventive healthcare facilities so as to improve its accessibility to potential clients and thus maximize participation in preventive healthcare programs. The problem is formulated as a mathematical program with equilibrium constraints; i.e., a bilevel non-linear optimization model. The lower-level problem, which determines the allocation of clients to facilities, is formulated as a variational inequality; the upper level is a facility location and capacity allocation problem. The developed solution approach is based on the location–allocation framework. The variational inequality is formulated as a convex optimization problem, which can be solved by the gradient projection method; a Tabu search procedure is developed to solve the upper-level problem. Computational experiments show that large-sized instances can be solved in a reasonable time. The model is used to analyze an illustrative case, a network of mammography centers in Montreal, and a number of interesting results and managerial insights are discussed, especially regarding capacity pooling. Journal: IIE Transactions Pages: 865-880 Issue: 12 Volume: 42 Year: 2010 X-DOI: 10.1080/0740817X.2010.491500 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.491500 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:12:p:865-880 Template-Type: ReDIF-Article 1.0 Author-Name: Young Ko Author-X-Name-First: Young Author-X-Name-Last: Ko Author-Name: Natarajan Gautam Author-X-Name-First: Natarajan Author-X-Name-Last: Gautam Title: Transient analysis of queues for peer-based multimedia content delivery Abstract: Consider a firm that sells online multimedia content. In order to manage costs and quality of service, this firm maintains a peer network that allows new users to download files from their peers who have previously downloaded the required files. The scenario can be modeled as a queueing system in which the number of servers varies over time. Analytical models are developed that are based on fluid and diffusion approximations and allow analysis of transient system performance.
The same approximations are used to analyze the steady-state behavior of this network. It is shown that the existing fluid and diffusion approximations are inaccurate for transient analysis. To address this shortcoming, a novel Gaussian-based adjustment is proposed that significantly improves the accuracy of the approximations. Furthermore, the models used in this research can be extended seamlessly to the case of time-varying system parameters (e.g., arrival rates and service rates). Several numerical examples are provided that show how the proposed adjusted models work for the analysis of transient phenomena. Journal: IIE Transactions Pages: 881-896 Issue: 12 Volume: 42 Year: 2010 X-DOI: 10.1080/0740817X.2010.491501 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.491501 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:12:p:881-896 Template-Type: ReDIF-Article 1.0 Author-Name: H. Geismar Author-X-Name-First: H. Author-X-Name-Last: Geismar Author-Name: Michael Pinedo Author-X-Name-First: Michael Author-X-Name-Last: Pinedo Title: Robotic cells with stochastic processing times Abstract: This article considers the scheduling of flow shops with a common server that performs all material handling, including the transfer of parts between machines, in the context of robotic cell flow shops. Specifically, the first analytic study is presented of the operations of a robotic cell in which one process has a stochastic processing time, a situation that is common in the microlithography portion of semiconductor manufacturing. A measure of throughput for such cells is defined, and it is then demonstrated how the proximity of the stochastic process to the bottleneck process affects throughput. The distribution function of the robot's sequence time is found and verified with simulation results. This yields formulas for the cell's expected throughput. On-line scheduling schemes to improve throughput are presented. Journal: IIE Transactions Pages: 897-914 Issue: 12 Volume: 42 Year: 2010 X-DOI: 10.1080/0740817X.2010.491505 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.491505 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:12:p:897-914 Template-Type: ReDIF-Article 1.0 Author-Name: Joseph Hartman Author-X-Name-First: Joseph Author-X-Name-Last: Hartman Author-Name: İ. Büyüktahtakin Author-X-Name-First: İ. Author-X-Name-Last: Büyüktahtakin Author-Name: J. Smith Author-X-Name-First: J. Author-X-Name-Last: Smith Title: Dynamic-programming-based inequalities for the capacitated lot-sizing problem Abstract: Iterative solutions of forward dynamic programming formulations for the capacitated lot-sizing problem are used to generate inequalities for an equivalent integer programming formulation. The inequalities capture convex and concave envelopes of intermediate-stage value functions and can be lifted by examining potential state information at future stages. Several possible implementations that employ these inequalities are tested and it is demonstrated that the proposed approach is more efficient than alternative integer programming–based algorithms. For certain datasets, the proposed algorithm also outperforms a pure dynamic programming algorithm for the problem.
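Editor's note: the inequalities in the abstract above are generated from the value functions of a forward dynamic program for capacitated lot sizing. As a minimal, self-contained sketch of that underlying recursion (not the authors' inequality-generation scheme), the following Python steps the value function forward over end-of-period inventory states for a small single-item instance; all data and the simple setup-plus-holding cost structure are invented.

    def capacitated_lot_sizing(demand, capacity, setup, unit_hold, max_inv):
        """Forward DP for single-item capacitated lot sizing.

        State: end-of-period inventory; value: minimal cost to reach it.
        """
        INF = float("inf")
        value = {0: 0.0}                              # start with zero inventory
        for d in demand:
            new_value = {}
            for inv, cost in value.items():
                for q in range(capacity + 1):         # production quantity this period
                    end_inv = inv + q - d
                    if 0 <= end_inv <= max_inv:
                        c = cost + (setup if q > 0 else 0.0) + unit_hold * end_inv
                        if c < new_value.get(end_inv, INF):
                            new_value[end_inv] = c
            value = new_value
        return min(value.values())

    print(capacitated_lot_sizing(demand=[3, 5, 2, 6], capacity=6,
                                 setup=10.0, unit_hold=1.0, max_inv=8))

The intermediate-stage dictionaries in such a recursion are exactly the value functions whose convex and concave envelopes the article turns into cutting planes.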
Journal: IIE Transactions Pages: 915-930 Issue: 12 Volume: 42 Year: 2010 X-DOI: 10.1080/0740817X.2010.504683 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.504683 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:12:p:915-930 Template-Type: ReDIF-Article 1.0 Author-Name: James Dai Author-X-Name-First: James Author-X-Name-Last: Dai Author-Name: Qi Fu Author-X-Name-First: Qi Author-X-Name-Last: Fu Author-Name: Neville Lee Author-X-Name-First: Neville Author-X-Name-Last: Lee Title: Beacon placement strategies in an ultrasonic positioning system Abstract: Ultrasonic Positioning Systems (UPSs) are widely used to detect, locate, or track targets. One of the key factors that determines the performance of a UPS is beacon placement. In this article, beacon placement strategies for a two-dimensional array in the xy-plane above the target with an adaptive height are studied and optimized as a function of the beacon’s characteristics and application requirements in terms of positioning precision and, particularly, reliability. The effect of positioning requirements on placement is also investigated. It is shown that for triangle or square placements or a hexagon placement with a low precision requirement, the optimal side length of each placement pattern is restricted by the upper bounds of the geometry and reliability, and the placement pattern is valid only when there is a gap between those upper bounds and the lower bound specified by the precision requirement. However, for a hexagon placement with a high precision requirement, the optimal side length is restricted by the upper bound imposed by the precision requirement. The use of a high beacon height with respect to the target allows the positioning requirements to significantly reduce the optimal side length. In addition to the beacon height, another important factor is the beacon placement pattern, such as triangle, square, or hexagon. A comparison of the obtained results shows that under a loose precision requirement, triangle placement is the best; when the precision requirement is moderate, either square or hexagon placement is preferred; and if the precision requirement is stringent, only hexagon placement is feasible. From the comparison of beacon placement strategies, an 18% reduction in the number of beacons is readily achievable for commonly available beacons. Journal: IIE Transactions Pages: 477-493 Issue: 5 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2011.649387 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.649387 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:5:p:477-493 Template-Type: ReDIF-Article 1.0 Author-Name: Amir Ahmadi-Javid Author-X-Name-First: Amir Author-X-Name-Last: Ahmadi-Javid Author-Name: Nasrin Ramshe Author-X-Name-First: Nasrin Author-X-Name-Last: Ramshe Title: On the block layout shortest loop design problem Abstract: This article verifies a formulation presented by Asef-Vaziri, Laporte, and Sriskandarajah (2000) for the block layout shortest loop design problem. An error in the set of connectivity constraints of the formulation is corrected and then it is shown that a new set of constraints is needed to guarantee that every solution satisfying the corrected formulation determines a complete loop; i.e., a loop covering each production cell of the block layout. Finally, a cutting-plane algorithm is developed to solve the corrected formulation.
The proposed algorithm solves instances with sizes of up to 100 departments in less than 3 seconds. This shows that the algorithm outperforms the best algorithm proposed in the literature, which can solve instances with sizes of up to 80 in less than 60 seconds. Journal: IIE Transactions Pages: 494-501 Issue: 5 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.693649 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.693649 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:5:p:494-501 Template-Type: ReDIF-Article 1.0 Author-Name: Svenja Lagershausen Author-X-Name-First: Svenja Author-X-Name-Last: Lagershausen Author-Name: Michael Manitz Author-X-Name-First: Michael Author-X-Name-Last: Manitz Author-Name: Horst Tempelmeier Author-X-Name-First: Horst Author-X-Name-Last: Tempelmeier Title: Performance analysis of closed-loop assembly lines with general processing times and finite buffer spaces Abstract: This article analyzes flow lines with converging and diverging material flow, limited buffer sizes, generally distributed processing times, and a constant number of workpieces as a closed assembly or disassembly queueing network. A decomposition approach in which each subsystem is modeled as a G/G/1/K queueing system is used. The population constraint is enforced by requiring that the sum of the expected number of customers in the subsystems is equal to the total number of workpieces. The results of a simulation experiment indicate that the proposed approximation provides accurate results and that it performs better than other approaches. Journal: IIE Transactions Pages: 502-515 Issue: 5 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.705450 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.705450 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:5:p:502-515 Template-Type: ReDIF-Article 1.0 Author-Name: Mark Hillier Author-X-Name-First: Mark Author-X-Name-Last: Hillier Title: Designing unpaced production lines to optimize throughput and work-in-process inventory Abstract: This article considers the optimal design of unpaced assembly lines. Two key decisions in designing an unpaced assembly line are the allocation of work to the stations and the allocation of buffer storage space between the stations. To the best of the author's knowledge, this is the first article to jointly optimize both the allocation of workload and the allocation of buffer spaces simultaneously when the objective is to maximize the revenue from throughput minus the cost of work-in-process inventory. Exact solutions are provided for small lines (three or four stations) with a fixed kind of processing time distribution (exponential or Erlang). Ten observations are made about the characteristics of the allocation of workload and buffer spaces. Heuristics are suggested for designing lines with more stations or different processing time distributions. A simulation study is done to test the observations and heuristics for longer lines and different processing time distributions (lognormal). Significant savings can be achieved by jointly optimizing both the workload and the buffer space allocations. Journal: IIE Transactions Pages: 516-527 Issue: 5 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.706733 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.706733 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
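Editor's note: the two line-design studies above both rest on evaluating stations with finite buffers; the decomposition approach models each subsystem as a G/G/1/K queue. As a simpler, exactly solvable illustration of that kind of per-station evaluation (the M/M/1/K special case, not the articles' G/G/1/K machinery), the Python sketch below computes steady-state probabilities, throughput, and expected work-in-process; the parameter values are arbitrary.

    def mm1k_performance(lam, mu, K):
        """Steady-state measures of an M/M/1/K queue (K = max jobs in system).

        p_n = (1 - rho) * rho**n / (1 - rho**(K+1)) for rho != 1;
        throughput is the accepted arrival rate lam * (1 - p_K).
        """
        rho = lam / mu
        if abs(rho - 1.0) < 1e-12:
            p = [1.0 / (K + 1)] * (K + 1)
        else:
            norm = 1 - rho ** (K + 1)
            p = [(1 - rho) * rho ** n / norm for n in range(K + 1)]
        throughput = lam * (1 - p[K])     # arrivals finding a full system are blocked
        wip = sum(n * p[n] for n in range(K + 1))
        return throughput, wip

    print(mm1k_performance(lam=0.9, mu=1.0, K=4))

Sweeping K (the buffer space) and the service rates in such a formula is the elementary version of the joint workload/buffer allocation trade-off the unpaced-line study optimizes.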
Handle: RePEc:taf:uiiexx:v:45:y:2013:i:5:p:516-527 Template-Type: ReDIF-Article 1.0 Author-Name: Martín Tanco Author-X-Name-First: Martín Author-X-Name-Last: Tanco Author-Name: Enrique del Castillo Author-X-Name-First: Enrique Author-X-Name-Last: del Castillo Author-Name: Elisabeth Viles Author-X-Name-First: Elisabeth Author-X-Name-Last: Viles Title: Robustness of three-level response surface designs against missing data Abstract: Experimenters should be aware of the possibility that some of their observations may be unavailable for analysis. This article considers different criteria to assess the impact that missing data can have when running three-level designs to estimate a full second-order polynomial model. Designs for three to seven factors were studied and included Box–Behnken designs, face-centered composite designs, and designs due to Morris, Mee, Block–Mee, Draper–Lin, Hoke, Katasaounis, and Notz. These designs were studied under two existing robustness criteria: (i) the maximum number of runs that can be missing and still allow the remaining runs to estimate a given model; and (ii) the loss of D-efficiency in the remaining design compared with the original design. The robustness of three-level designs was studied using a third, new criterion: the maximum number of observations that can be missing from a design and still allow the estimation of the given model with a high probability. This criterion represents a useful generalization of the first criterion, which determines the maximum number of runs that make the probability of estimating the model equal to one. The new criterion provides a better assessment of the robustness of each design than previous criteria. Journal: IIE Transactions Pages: 544-553 Issue: 5 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.712240 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.712240 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:5:p:544-553 Template-Type: ReDIF-Article 1.0 Author-Name: Liang Zhang Author-X-Name-First: Liang Author-X-Name-Last: Zhang Author-Name: Chuanfeng Wang Author-X-Name-First: Chuanfeng Author-X-Name-Last: Wang Author-Name: Jorge Arinez Author-X-Name-First: Jorge Author-X-Name-Last: Arinez Author-Name: Stephan Biller Author-X-Name-First: Stephan Author-X-Name-Last: Biller Title: Transient analysis of Bernoulli serial lines: performance evaluation and system-theoretic properties Abstract: Transient behavior of production systems has significant practical and theoretical implications. However, analytical methods for analysis and control of production systems during transients remain largely unexplored. In the framework of serial production lines with Bernoulli machines and finite buffers, this article develops a mathematical model for transient analysis and derives closed-form expressions for evaluating the production rate, consumption rate, work-in-process, and probabilities of machine starvation and blockage during transients. In addition, a computationally efficient procedure based on recursive aggregation is developed to approximate the transient performance measures with high accuracy. Finally, based on the mathematical model derived, system-theoretic properties of several important system transient characteristics are studied. 
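Editor's note: a flavor of the transient computation in the Bernoulli serial line abstract above can be had from the smallest instance, a two-machine Bernoulli line with a finite buffer, where the buffer-occupancy distribution is a Markov chain that can be stepped forward slot by slot. The Python sketch below does exactly that under one common timing convention (the downstream machine acts first within a slot); the convention and all parameters are illustrative, and this is not the authors' recursive aggregation procedure.

    def bernoulli_two_machine_transient(p1, p2, N, T):
        """Transient production rate of a two-machine Bernoulli line.

        State: buffer occupancy h in {0,...,N}.  Each slot, machine 2
        first takes a part if h > 0 (success prob p2), then machine 1
        adds a part if space remains (success prob p1).  Machine 1 is
        never starved and machine 2 is never blocked downstream.
        """
        dist = [1.0] + [0.0] * N              # start with an empty buffer
        rates = []
        for _ in range(T):
            rates.append(p2 * sum(dist[1:]))  # P(machine 2 produces this slot)
            new = [0.0] * (N + 1)
            for h, pr in enumerate(dist):
                takes = ((1, p2), (0, 1 - p2)) if h > 0 else ((0, 1.0),)
                for took, p_took in takes:
                    base = h - took
                    makes = ((1, p1), (0, 1 - p1)) if base < N else ((0, 1.0),)
                    for made, p_made in makes:
                        new[base + made] += pr * p_took * p_made
            dist = new
        return rates

    print(bernoulli_two_machine_transient(p1=0.9, p2=0.85, N=3, T=10))

The slot-by-slot rates show the ramp-up from an empty buffer toward the steady-state production rate, which is the transient behavior the article characterizes in closed form.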
Journal: IIE Transactions Pages: 528-543 Issue: 5 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.721946 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.721946 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:5:p:528-543 Template-Type: ReDIF-Article 1.0 Author-Name: The Editors Title: Erratum Journal: IIE Transactions Pages: 554-554 Issue: 5 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2013.763536 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.763536 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:5:p:554-554 Template-Type: ReDIF-Article 1.0 Author-Name: Marc Gascons Author-X-Name-First: Marc Author-X-Name-Last: Gascons Author-Name: Norbert Blanco Author-X-Name-First: Norbert Author-X-Name-Last: Blanco Author-Name: Koen Matthys Author-X-Name-First: Koen Author-X-Name-Last: Matthys Title: Evolution of manufacturing processes for fiber-reinforced thermoset tanks, vessels, and silos: a review Abstract: Since the first Fibre-Reinforced Polymer (FRP) material applications emerged in the 1950s, various industrial markets (from aerospace to consumer goods) have adopted FRP composites due to their attractive inherent mix of properties: FRP composites combine low density with elevated mechanical performance and display better environmental resistance than traditional materials such as steel and aluminum. As FRP composites gradually became more widely used in structural design, early manufacturing processes changed, were optimized, and became partly automated. The evolution in manufacturing processes was indeed necessary to keep pace with the increasing complexity of geometric designs, more demanding mechanical requirements, more stringent environmental regulations, and ever stronger market pressures concerning cost and production volume. This article aims to present a review of manufacturing processes for FRP applications through one of the most common products for the industrial environment: storage pressure tanks, vessels, and silos. Journal: IIE Transactions Pages: 476-489 Issue: 6 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.590177 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.590177 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:6:p:476-489 Template-Type: ReDIF-Article 1.0 Author-Name: Ying Rong Author-X-Name-First: Ying Author-X-Name-Last: Rong Author-Name: Zuo-Jun Shen Author-X-Name-First: Zuo-Jun Author-X-Name-Last: Shen Author-Name: Candace Yano Author-X-Name-First: Candace Author-X-Name-Last: Yano Title: Cheaper by the pallet? Multi-item procurement with standard batch sizes Abstract: This research was motivated by challenges facing inventory managers at a major retail chain in deciding how often to order each product and whether to use a standard batch size of a pallet, a half-pallet, or even less. The retailer offers thousands of different products, but the total demand for a typical product is only a few pallets per year. Manufacturers offer lower per-unit prices for larger standard batch sizes, but larger order quantities increase inventory holding costs. The inventory managers are also concerned about how the ordering strategy might affect transportation costs and material handling costs at the warehouse.
We develop a framework and solution strategy to determine the best shipment frequency, standard batch size (from a set of options), and an ordering plan for a set of products procured from a single supply location. To do so, the inventory holding and material handling costs incurred by a single product are derived for a given review interval and standard batch size. We incorporate the individual product costs into an optimization model to find, for a given transportation interval (with a limit on transport capacity for each shipment), the best procurement plan considering variable procurement, inventory, material handling, and excess transportation costs. With this, several transportation intervals can be compared and the best one selected. To the best of our knowledge, this work is the first to consider the effects of transportation capacity and standard batch sizes in a multi-item procurement problem with the goal of minimizing transportation, inventory, and material handling costs. Journal: IIE Transactions Pages: 405-418 Issue: 6 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.609527 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.609527 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:6:p:405-418 Template-Type: ReDIF-Article 1.0 Author-Name: Jianxin Jiao Author-X-Name-First: Jianxin Author-X-Name-Last: Jiao Title: Product platform flexibility planning by hybrid real options analysis Abstract: Product platforms are believed to facilitate product design flexibility; however, the value and best form of such flexibility have yet to be fully understood. This article treats the design flexibility that is inherent in product platforms as irreversible investment decisions, crafted as a portfolio of platform real options that are continuously exercised to fulfill expected returns on investment. While options thinking has been appealing to the design community, the existing design real options models suffer from poor validity of the basic assumptions about the real options. Most existing work intuitively imitates real options decision making based on modularity and configuration. The missing link lies in the lack of an explicit time horizon, in which real options theory has its roots. This article proposes to craft platform real options in line with the management flexibility that is staged along the design project life. A hybrid real options analysis framework is developed by incorporating product-related and project-related flexibility to synthesize both financial and technical analyses of product platforms within a coherent framework. A case study of vibration motor platform planning demonstrates the potential of the proposed hybrid approach for the valuation of platform flexibility under uncertainty and for supporting optimal product platform planning. Journal: IIE Transactions Pages: 431-445 Issue: 6 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.609874 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.609874 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
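Editor's note: the per-product building block in the procurement model above is a cost expression for a given review interval and standard batch size. The stylized Python comparison below illustrates the trade-off the abstract describes, with larger batches earning a lower unit price but incurring a higher holding cost; the cost structure and every number are invented and are not the paper's derivation.

    def annual_cost(demand_per_year, batch_units, unit_price, holding_rate, handling_per_batch):
        """Stylized annual cost of one product ordered in a standard batch size.

        Cost = purchase + holding on a half-batch average inventory
             + per-batch receiving/handling.  Illustrative only.
        """
        batches = demand_per_year / batch_units
        purchase = demand_per_year * unit_price
        holding = holding_rate * unit_price * batch_units / 2
        handling = handling_per_batch * batches
        return purchase + holding + handling

    # Hypothetical options: (units per batch, unit price, handling cost per batch);
    # smaller batches cost more per unit but less to hold.
    options = {"pallet": (120, 9.50, 6.0), "half_pallet": (60, 9.80, 4.0), "case": (12, 10.40, 1.5)}
    for name, (units, price, handle) in options.items():
        print(name, round(annual_cost(400, units, price,
                                      holding_rate=0.25, handling_per_batch=handle), 2))

In the paper's setting such per-product costs are then coupled across thousands of products through the shared transportation capacity, which is what turns the comparison into an optimization model.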
Handle: RePEc:taf:uiiexx:v:44:y:2012:i:6:p:431-445 Template-Type: ReDIF-Article 1.0 Author-Name: Haluk Yapicioglu Author-X-Name-First: Haluk Author-X-Name-Last: Yapicioglu Author-Name: Alice Smith Author-X-Name-First: Alice Author-X-Name-Last: Smith Title: Retail space design considering revenue and adjacencies using a racetrack aisle network Abstract: In this article, a model and solution approach for the design of the block layout of a single-story department store is presented. The approach consists of placing departments in a racetrack configuration within the store subject to area and shape constraints. The objective function considers the area allocated to each department, contiguity of the departments to the aisle network, adjacency requirements among departments, and department revenues. The revenue generated by a department is defined as a function of its area and its exposure to the aisle network. The aisle network is comprised of two components: the racetrack, which serves as the main travel path for the customers, and the entry/exit aisle. The racetrack aisle itself is treated as a department with area allocation and corresponding revenue generation. A general tabu search optimization framework for the model with variable department areas and an aisle network with non-zero area is devised and tested. Journal: IIE Transactions Pages: 446-458 Issue: 6 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.635177 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.635177 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:6:p:446-458 Template-Type: ReDIF-Article 1.0 Author-Name: Konstantin Kogan Author-X-Name-First: Konstantin Author-X-Name-Last: Kogan Title: Manufacturing under uncertainty: offsetting the inability to instantaneously adjust production with dynamic pricing Abstract: In many manufacturing systems the production rate cannot be instantaneously adjusted in response to inventory updates. This article addresses such a system under price-dependent stochastic demand. The objective of the system is to choose a time-invariant production rate and time-dependent product price that maximize the expected profit. This manufacturing system is compared to a benchmark system where both production rate and product price are adjustable. It is shown that the expected inventory level does not necessarily increase when the manufacturer can handle stock spikes only by leveling the demand with the product price. Although the inability to adjust production rate under a high level of uncertainty is difficult to offset with dynamic pricing, the non-linear components of the production and inventory costs can make a difference. For example, by reducing the non-linear component of the production cost and increasing price volatility, the manufacturer may close the profit gap between the two systems from 25% to only 5%. Journal: IIE Transactions Pages: 419-430 Issue: 6 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.635182 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.635182 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
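Editor's note: the profit gap quantified in the abstract above can be mimicked in a toy discrete-time simulation in which one system may adjust its production rate each period while the other holds a time-invariant rate and can only steer demand through price. The Python sketch below implements such a toy with a linear, noisy demand curve; the dynamics, pricing rule, and all parameters are invented for illustration and do not reproduce the article's continuous-time model.

    import random

    def simulate(adjust_rate, T=10_000, a=10.0, b=1.0, c=2.0, h=0.5, seed=1):
        """Toy comparison of adjustable vs. fixed production with pricing.

        Demand: d = a - b*p + noise.  If adjust_rate, production tracks
        demand; otherwise a fixed rate is used and the price is chosen so
        that expected demand levels the inventory toward the fixed rate.
        """
        rng = random.Random(seed)
        u_fixed = 4.0                          # time-invariant production rate
        x, profit = 0.0, 0.0                   # inventory and cumulative profit
        for _ in range(T):
            target = max(0.0, u_fixed + 0.2 * x)   # sell more when stock is high
            p = max(0.0, (a - target) / b)         # price inducing E[d] = target
            d = max(0.0, a - b * p + rng.gauss(0, 1))
            u = d if adjust_rate else u_fixed
            x += u - d
            profit += p * d - c * u - h * abs(x)
        return profit / T

    print("adjustable:", round(simulate(True), 3), " fixed-rate:", round(simulate(False), 3))

Raising the demand-noise level in this toy widens the gap between the two systems, which is the qualitative effect of uncertainty the abstract discusses.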
Handle: RePEc:taf:uiiexx:v:44:y:2012:i:6:p:419-430 Template-Type: ReDIF-Article 1.0 Author-Name: Ömer Öztürkoğlu Author-X-Name-First: Ömer Author-X-Name-Last: Öztürkoğlu Author-Name: Kevin Gue Author-X-Name-First: Kevin Author-X-Name-Last: Gue Author-Name: Russell Meller Author-X-Name-First: Russell Author-X-Name-Last: Meller Title: Optimal unit-load warehouse designs for single-command operations Abstract: We present a continuous space model for travel in a unit-load warehouse that allows cross-aisles and picking aisles to take on any angle. The model produces optimal designs for one-, two-, and three-cross-aisle warehouses, which are called chevron, leaf, and butterfly designs. We then use a more accurate discrete model to show which designs are best for a wide range of warehouse sizes. We show that the chevron design, which is new to theory and to practice, is the best design for many industrial applications. Journal: IIE Transactions Pages: 459-475 Issue: 6 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.636793 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.636793 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:6:p:459-475 Template-Type: ReDIF-Article 1.0 Author-Name: Navneet Vidyarthi Author-X-Name-First: Navneet Author-X-Name-Last: Vidyarthi Author-Name: Samir Elhedhli Author-X-Name-First: Samir Author-X-Name-Last: Elhedhli Author-Name: Elizabeth Jewkes Author-X-Name-First: Elizabeth Author-X-Name-Last: Jewkes Title: Response time reduction in make-to-order and assemble-to-order supply chain design Abstract: Make-to-order and assemble-to-order systems are successful business strategies in managing responsive supply chains, characterized by high product variety, highly variable customer demand and short product life cycles. These systems usually entail long customer response times due to congestion. Motivated by the strategic importance of response time reduction, this paper presents models for designing make-to-order and assemble-to-order supply chains under Poisson customer demand arrivals and general service time distributions. The make-to-order supply chain design model seeks to simultaneously determine the location and the capacity of distribution centers (DCs) and allocate stochastic customer demand to DCs by minimizing response time in addition to the fixed cost of opening DCs and equipping them with sufficient assembly capacity and the variable cost of serving customers. The problem is set up as a network of spatially distributed M/G/1 queues, modeled as a non-linear mixed-integer program, and linearized using a simple transformation and a piecewise linear approximation. An exact solution approach is presented that is based on the cutting plane method. Then, the problem of designing a two-echelon assemble-to-order supply chain comprising plants and DCs serving a set of customers is considered. A Lagrangean heuristic is proposed that exploits the echelon structure of the problem and uses the solution methodology for the make-to-order problem. Computational results and managerial insights are provided. It is empirically shown that a substantial reduction in response times can be achieved with a minimal increase in total costs in the design of responsive supply chains. Furthermore, a supply chain configuration that considers congestion is proposed; its response time behavior can be very different from that of a traditional configuration that ignores congestion.
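Editor's note: the congestion term in the design model above comes from a network of spatially distributed M/G/1 queues. For a single distribution center, the expected waiting time of such a queue is given by the classical Pollaczek–Khinchine formula, sketched below with invented arrival-rate and service-time moments.

    def mg1_waiting_time(lam, es, es2):
        """Pollaczek-Khinchine mean waiting time for an M/G/1 queue.

        lam: Poisson arrival rate; es, es2: first two moments of the
        service time.  Wq = lam * E[S^2] / (2 * (1 - rho)), valid only
        when rho = lam * E[S] < 1.
        """
        rho = lam * es
        if rho >= 1:
            raise ValueError("unstable: utilization must be below 1")
        wq = lam * es2 / (2 * (1 - rho))
        return wq, wq + es              # waiting time and sojourn (response) time

    # A DC facing demand rate 0.8 with E[S] = 1.0, E[S^2] = 2.5 (illustrative).
    print(mg1_waiting_time(lam=0.8, es=1.0, es2=2.5))

The nonlinearity of this expression in the allocated demand rate is precisely what forces the linearization and cutting-plane machinery described in the abstract.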
Journal: IIE Transactions Pages: 448-466 Issue: 5 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802382741 File-URL: http://hdl.handle.net/10.1080/07408170802382741 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:5:p:448-466 Template-Type: ReDIF-Article 1.0 Author-Name: Opher Baron Author-X-Name-First: Opher Author-X-Name-Last: Baron Author-Name: Oded Berman Author-X-Name-First: Oded Author-X-Name-Last: Berman Author-Name: Seokjin Kim Author-X-Name-First: Seokjin Author-X-Name-Last: Kim Author-Name: Dmitry Krass Author-X-Name-First: Dmitry Author-X-Name-Last: Krass Title: Ensuring feasibility in location problems with stochastic demands and congestion Abstract: A location problem with stochastic demand and congestion where mobile servers respond to service calls originating from nodes is considered. The problem is of the set-covering type: only servers within the coverage radius of the demand-generating node may respond to a call. The service level constraint requires that at least one server must be available to respond to an arriving call, with some prespecified probability. The objective is to minimize the total number of servers. It is shown that earlier models quite often overestimate servers' availability and thus may lead to infeasible solutions (i.e., solutions that fail to satisfy the service level constraint). System stability conditions and lower bounds on system availability are developed by analyzing the underlying partially accessible queueing system. These lead to the development of two new models for which feasibility is guaranteed. Simulation-based computational experiments show that the proposed models achieve feasibility without significantly increasing the total number of servers.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix of Tables of Computational Results for Section 7.] Journal: IIE Transactions Pages: 467-481 Issue: 5 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802382758 File-URL: http://hdl.handle.net/10.1080/07408170802382758 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:5:p:467-481 Template-Type: ReDIF-Article 1.0 Author-Name: Selçuk Karabati Author-X-Name-First: Selçuk Author-X-Name-Last: Karabati Author-Name: Bariş Tan Author-X-Name-First: Bariş Author-X-Name-Last: Tan Author-Name: Ömer Öztürk Author-X-Name-First: Ömer Author-X-Name-Last: Öztürk Title: A method for estimating stock-out-based substitution rates by using point-of-sale data Abstract: Empirical studies in retailing suggest that stock-out rates are quite high in many product categories. Stock-outs result in demand spillover, or substitution, among items within a product category. Product assortment and inventory management decisions can be improved when the substitution rates are known. In this paper, a method is presented to estimate product substitution rates by using only Point-Of-Sale (POS) data. The approach clusters POS intervals into states where each state corresponds to a specific substitution scenario. Then available POS data for each state is consolidated and the substitution rates are estimated using the consolidated information. An extensive computational analysis of the proposed substitution rate estimation method is provided. 
The computational analysis and comparisons with an estimation method from the literature show that the proposed estimation method performs satisfactorily with limited information. Journal: IIE Transactions Pages: 408-420 Issue: 5 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802512578 File-URL: http://hdl.handle.net/10.1080/07408170802512578 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:5:p:408-420 Template-Type: ReDIF-Article 1.0 Author-Name: Ada Barlatt Author-X-Name-First: Ada Author-X-Name-Last: Barlatt Author-Name: Amy Cohn Author-X-Name-First: Amy Author-X-Name-Last: Cohn Author-Name: Yakov Fradkin Author-X-Name-First: Yakov Author-X-Name-Last: Fradkin Author-Name: Oleg Gusikhin Author-X-Name-First: Oleg Author-X-Name-Last: Gusikhin Author-Name: Craig Morford Author-X-Name-First: Craig Author-X-Name-Last: Morford Title: Using composite variable modeling to achieve realism and tractability in production planning: An example from automotive stamping Abstract: Applying traditional mathematical programming techniques to problems in production planning can lead to tremendous challenges. These include non-linearities, very large numbers of constraints and weak linear programming relaxations. To ensure tractability, problems are often either simplified in scope or limited in instance size, resulting in solutions that may no longer address important real-world issues. As an alternative, this paper considers the use of models based on composite variables (variables that capture multiple decisions simultaneously) as a way to solve complex production planning problems. The scheduling of an automotive stamping facility is used as a demonstrative example, and it is shown how composite variable models and a novel corresponding algorithm can lead to high-quality, realistic solutions with acceptable run times. In the proposed approach, batch sizes, labor availability, and the sequencing of part types are not restricted, and the number of changeovers is not fixed a priori. In addition, sequence-dependent changeover times and varying due dates are allowed. Computational results are presented using data from Ford Motor Company.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix] Journal: IIE Transactions Pages: 421-436 Issue: 5 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802512594 File-URL: http://hdl.handle.net/10.1080/07408170802512594 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:5:p:421-436 Template-Type: ReDIF-Article 1.0 Author-Name: Vishv Jeet Author-X-Name-First: Vishv Author-X-Name-Last: Jeet Author-Name: Erhan Kutanoglu Author-X-Name-First: Erhan Author-X-Name-Last: Kutanoglu Author-Name: Amit Partani Author-X-Name-First: Amit Author-X-Name-Last: Partani Title: Logistics network design with inventory stocking for low-demand parts: Modeling and optimization Abstract: This paper models, analyzes, and develops solution techniques for a network design and inventory stocking problem. The proposed model captures important features of a real service part logistics system, namely time-based service level requirements, and stochastic demands satisfied by facilities operating with a one-for-one replenishment policy.
In essence, along with usual decisions of location and allocation, the model considers stock levels and fill rates as decisions, varying across facilities to achieve system-wide target service levels. A variable substitution scheme is used to develop an equivalent convex model for an originally non-convex problem. An outer-approximation scheme is used to linearize the convex model. Exact solution schemes based on the linearized model are proposed and computationally less demanding lower and upper bounding techniques for the problem are devised. Results from extensive computational experiments on a variety of problem instances based on real-life industrial data show the effectiveness of the overall approach.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resources: An Appendix consisting of proofs of the propositions, explanation and effectiveness of valid inequalities obtained via binary representation, settings of CPLEX options and further insights and observations.] Journal: IIE Transactions Pages: 389-407 Issue: 5 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802512602 File-URL: http://hdl.handle.net/10.1080/07408170802512602 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:5:p:389-407 Template-Type: ReDIF-Article 1.0 Author-Name: Jayant Rajgopal Author-X-Name-First: Jayant Author-X-Name-Last: Rajgopal Author-Name: Zhouyan Wang Author-X-Name-First: Zhouyan Author-X-Name-Last: Wang Author-Name: Andrew Schaefer Author-X-Name-First: Andrew Author-X-Name-Last: Schaefer Author-Name: Oleg Prokopyev Author-X-Name-First: Oleg Author-X-Name-Last: Prokopyev Title: Effective management policies for remnant inventory supply chains Abstract: A remnant inventory distribution system is considered where a set of geographically dispersed distribution centers meet stochastic demand for a one-dimensional product. This demand arises from some other set of geographically dispersed locations. The product is replenished at the centers in a limited number of standard sizes, while the demand is for various smaller sizes of the product and arrives over time according to a Poisson process. There are costs associated with cutting and transportation and scrap can be profitably reclaimed. The combined production (cutting) and distribution problem is modeled as a network. A linear programming formulation is solved for a deterministic version of this problem using mean demand rates and the optimal dual multipliers are used to assign inherent values to remnants of various sizes. These values are then used to develop a price-directed policy that can be used in a stochastic environment. A simulation study shows that this policy significantly outperforms heuristic policies from the literature as well as other heuristic policies that have been used in the steel industry for a similar problem. Theoretical insights into the structure of the proposed optimization problem are provided along with proofs of several important results.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix] Journal: IIE Transactions Pages: 437-447 Issue: 5 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802516298 File-URL: http://hdl.handle.net/10.1080/07408170802516298 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
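Editor's note: in the service-parts model above, each facility operates a one-for-one (base-stock) replenishment policy and its fill rate enters the design problem as a decision variable. For a single facility with Poisson demand and a fixed replenishment lead time, that fill rate has the closed form sketched below in Python; the demand rate, lead time, and service target are invented.

    import math

    def fill_rate_base_stock(lam, lead_time, S):
        """Fill rate of an (S-1, S) policy with Poisson demand and fixed lead time.

        Outstanding orders are Poisson(lam * lead_time); by PASTA, a demand
        is filled from stock exactly when fewer than S units are on order,
        so the fill rate is P(Poisson(lam * lead_time) <= S - 1).
        """
        mean = lam * lead_time
        return sum(math.exp(-mean) * mean ** k / math.factorial(k) for k in range(S))

    # Smallest base-stock level meeting a 95% fill-rate target (illustrative numbers).
    lam, L = 0.3, 10.0
    S = next(s for s in range(1, 50) if fill_rate_base_stock(lam, L, s) >= 0.95)
    print(S, round(fill_rate_base_stock(lam, L, S), 4))

The nonlinearity of this fill-rate curve in S is one source of the non-convexity that the paper's variable substitution and outer-approximation schemes are designed to handle.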
Handle: RePEc:taf:uiiexx:v:41:y:2009:i:5:p:437-447 Template-Type: ReDIF-Article 1.0 Author-Name: Zhaojun Li Author-X-Name-First: Zhaojun Author-X-Name-Last: Li Author-Name: Kailash Kapur Author-X-Name-First: Kailash Author-X-Name-Last: Kapur Title: Continuous-state reliability measures based on fuzzy sets Abstract: This article proposes to use the theory and methods of fuzzy sets to model the reliability of a component or system experiencing continuous stochastic performance degradation. The performance characteristic variable, which indicates the continuous performance levels of degradable systems, is used to fuzzify the states of a component or system. The concept of an engineering or technological performance variable is understood by both customers and system designers and can be used to represent different degrees of success. Thus, the imprecision in the meaning of success/failure is quantified through the fuzzy success/failure membership function, which is defined over the performance characteristic variable. The proposed fuzzy reliability measures provide an alternative to model the continuous state behavior for a component or system as it evolves from a binary state to a multi-state and finally to a fuzzy state. The dynamic behavior of fuzzy reliability is investigated using the concept of a fuzzy random variable under appropriate stochastic performance degradation processes. This article also develops some reliability performance metrics that are able to capture the cumulative experiences of customers with the system. In addition, the perception and utility from the customers are utilized to develop customer-centric reliability performance measures. Journal: IIE Transactions Pages: 1033-1044 Issue: 11 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.588684 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.588684 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:11:p:1033-1044 Template-Type: ReDIF-Article 1.0 Author-Name: Sharareh Taghipour Author-X-Name-First: Sharareh Author-X-Name-Last: Taghipour Author-Name: Dragan Banjevic Author-X-Name-First: Dragan Author-X-Name-Last: Banjevic Title: Optimum inspection interval for a system under periodic and opportunistic inspections Abstract: This article proposes a model to find an optimal periodic inspection interval over a finite time horizon for a multi-component system. The system’s components are subject to either hard or soft failures. Hard failures are detected and fixed instantaneously. Soft failures are unrevealed and can only be detected at inspections. Soft failures do not stop the system from operating; however, they may reduce its level of performance from its designed value. The system is inspected periodically to detect soft failures; however, a hard failure instance also provides an opportunity called opportunistic inspection to inspect and fix soft failures. Two models are discussed in this article. The first model assumes that components with soft and hard failures are minimally repaired. The second model assumes the possibility of either minimal repair or replacement of a component with soft failure, with some age-dependent probabilities. Recursive procedures are developed to calculate the expected number of minimal repairs and replacements and expected downtimes of components with soft failure. Examples of the calculation of the optimal inspection intervals are given. 
The data used in the examples are adapted from a hospital’s maintenance data for general infusion pumps. Journal: IIE Transactions Pages: 932-948 Issue: 11 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.618176 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.618176 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:11:p:932-948 Template-Type: ReDIF-Article 1.0 Author-Name: Xuemin Zi Author-X-Name-First: Xuemin Author-X-Name-Last: Zi Author-Name: Changliang Zou Author-X-Name-First: Changliang Author-X-Name-Last: Zou Author-Name: Fugee Tsung Author-X-Name-First: Fugee Author-X-Name-Last: Tsung Title: A distribution-free robust method for monitoring linear profiles using rank-based regression Abstract: Profile monitoring is a technique for checking the stability of the functional relationship between a response variable and one or more explanatory variables over time. Linear profile monitoring is particularly useful in practice due to its simplicity and flexibility. The existing monitoring methods suffer from a drawback in that they all assume the error distribution to be normal. When the underlying distribution is misspecified, the efficiency of the commonly used Least Squares Estimation (LSE) is likely to be low and, as a consequence, the detection ability of procedures based on LSE is reduced. To overcome this drawback, this article develops a non-parametric methodology for monitoring the linear profile, including the regression coefficients and profile variations. The Multivariate Sign Exponentially Weighted Moving Average (MSEWMA) control scheme is applied to the estimated profile parameters obtained using a rank-based regression approach. Benefiting from certain favorable properties of MSEWMA and the efficiency of rank-based regression estimators, the proposed chart is robust from the point of view of the in-control and out-of-control average run length, particularly when the process distribution is heavy-tailed. An example with real data from a manufacturing facility shows that it performs well in application. Journal: IIE Transactions Pages: 949-963 Issue: 11 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.649386 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.649386 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:11:p:949-963 Template-Type: ReDIF-Article 1.0 Author-Name: Rui Peng Author-X-Name-First: Rui Author-X-Name-Last: Peng Author-Name: Min Xie Author-X-Name-First: Min Author-X-Name-Last: Xie Author-Name: Szu Ng Author-X-Name-First: Szu Author-X-Name-Last: Ng Author-Name: Gregory Levitin Author-X-Name-First: Gregory Author-X-Name-Last: Levitin Title: Element maintenance and allocation for linear consecutively connected systems Abstract: This article considers optimal maintenance and allocation of elements in a Linear Multi-state Consecutively Connected System (LMCCS), which is important in signal transmission and other network systems. The system consists of N+1 linearly ordered positions (nodes) and fails if the first node (source) is not connected with the final node (sink). The reliability of an LMCCS has been studied in the past but has been restricted to the case when each system element has a constant reliability. In practice, system elements usually fail with increasing failure probability due to aging effects.
Furthermore, in order to increase system availability, resources can be devoted to the maintenance of each element to increase its availability. In this article, a framework is proposed to find the cost-optimal maintenance and allocation strategy for this type of system subject to an availability requirement. A universal generating function is used to estimate the availability of the system. A genetic algorithm is adopted for optimization. Illustrative examples are presented. Journal: IIE Transactions Pages: 964-973 Issue: 11 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.649388 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.649388 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:11:p:964-973 Template-Type: ReDIF-Article 1.0 Author-Name: Shuai Huang Author-X-Name-First: Shuai Author-X-Name-Last: Huang Author-Name: Jing Li Author-X-Name-First: Jing Author-X-Name-Last: Li Author-Name: Kewei Chen Author-X-Name-First: Kewei Author-X-Name-Last: Chen Author-Name: Teresa Wu Author-X-Name-First: Teresa Author-X-Name-Last: Wu Author-Name: Jieping Ye Author-X-Name-First: Jieping Author-X-Name-Last: Ye Author-Name: Xia Wu Author-X-Name-First: Xia Author-X-Name-Last: Wu Author-Name: Li Yao Author-X-Name-First: Li Author-X-Name-Last: Yao Title: A transfer learning approach for network modeling Abstract: Network models have been widely used in many subject areas to characterize the interactions between physical entities. A typical problem is to identify the network for multiple related tasks that share some similarities. In this case, a transfer learning approach that can leverage the knowledge gained during the modeling of one task to help better model another task is highly desirable. This article proposes a transfer learning approach that adopts a Bayesian hierarchical model framework to characterize the relatedness between tasks and additionally uses L1-regularization to ensure robust learning of the networks with limited sample sizes. A method based on the Expectation–Maximization (EM) algorithm is further developed to learn the networks from data. Simulation studies are performed that demonstrate the superiority of the proposed transfer learning approach over single-task learning that learns the network of each task in isolation. The proposed approach is also applied to identify brain connectivity networks associated with Alzheimer’s Disease (AD) from functional magnetic resonance image data. The findings are consistent with the AD literature. Journal: IIE Transactions Pages: 915-931 Issue: 11 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.649390 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.649390 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:11:p:915-931 Template-Type: ReDIF-Article 1.0 Author-Name: Linkan Bian Author-X-Name-First: Linkan Author-X-Name-Last: Bian Author-Name: Nagi Gebraeel Author-X-Name-First: Nagi Author-X-Name-Last: Gebraeel Title: Computing and updating the first-passage time distribution for randomly evolving degradation signals Abstract: This article considers systems that degrade gradually and whose degradation can be monitored using sensor technology.
Different degradation modeling techniques, such as the Brownian motion process, gamma process, and random coefficients models, have been used to model the evolution of sensor-based degradation signals with the goal of estimating lifetime distributions of various engineering systems. A parametric stochastic degradation modeling approach to estimate the Residual Life Distributions (RLDs) of systems/components that are operating in the field is presented. The proposed methodology rests on the idea of utilizing in situ degradation signals, communicated from fielded components, to update their respective RLDs in real time. Given the observed partial degradation signals, RLDs are evaluated based on a first-passage time approach. Expressions for the first-passage time for a base-case linear degradation model, in which the degradation signal evolves as a Brownian motion, are derived. The model is tested using simulated and real-world degradation signals from a rotating machinery application. Journal: IIE Transactions Pages: 974-987 Issue: 11 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.649661 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.649661 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:11:p:974-987 Template-Type: ReDIF-Article 1.0 Author-Name: Ulrike Grömping Author-X-Name-First: Ulrike Author-X-Name-Last: Grömping Title: Creating clear designs: a graph-based algorithm and a catalog of clear compromise plans Abstract: A graph-based algorithm is proposed for creating regular fractional factorial designs with two-level factors such that a pre-specified set of two-factor interactions is clear of aliasing with any main effects or two-factor interactions (clear design). The Clear Interactions Graph (CIG) used in the algorithm is unique for each design and different in nature from the well-known Taguchi linear graph. Based on published catalogs of two-level fractional factorials, enhanced by the CIG, a search algorithm finds an appropriate clear design or declares its non-existence. The approach is applied to the creation of a catalog of minimum aberration clear compromise plans, which is also of interest in its own right. [Supplementary materials are available for this article. Go to the publisher’s online edition of IIE Transactions for additional discussions on run times and implementation of the algorithm for larger designs.] Journal: IIE Transactions Pages: 988-1001 Issue: 11 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2012.654848 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.654848 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:11:p:988-1001 Template-Type: ReDIF-Article 1.0 Author-Name: Paul Goethals Author-X-Name-First: Paul Author-X-Name-Last: Goethals Author-Name: Byung Cho Author-X-Name-First: Byung Author-X-Name-Last: Cho Title: Designing the optimal process mean vector for mixed multiple quality characteristics Abstract: For the manufacturing community, determining the optimal process mean can often lead to a significant reduction in waste and increased opportunity for monetary gain. Given the process specification limits and associated rework or rejection costs, the traditional method for identifying the optimal process mean involves assuming values for each of the process distribution parameters prior to implementing an optimization scheme.
In contrast, this article proposes integrating response surface methods into the framework of the problem, thus removing the need to make assumptions on the parameters. Furthermore, whereas researchers have studied models to investigate this research problem for a single quality characteristic and multiple nominal-the-best type characteristics, this article specifically examines the mixed multiple quality characteristic problem. A non-linear programming routine with economic considerations is established to facilitate the identification of the optimal process mean vector. A sensitivity analysis with respect to the cost structure, tolerance, and quality loss settings is also provided to illustrate their effect on the solutions. Journal: IIE Transactions Pages: 1002-1021 Issue: 11 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2012.655061 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.655061 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:11:p:1002-1021 Template-Type: ReDIF-Article 1.0 Author-Name: Lirong Cui Author-X-Name-First: Lirong Author-X-Name-Last: Cui Author-Name: Shijia Du Author-X-Name-First: Shijia Author-X-Name-Last: Du Author-Name: Alan Hawkes Author-X-Name-First: Alan Author-X-Name-Last: Hawkes Title: A study on a single-unit repairable system with state aggregations Abstract: This article analyzes a single-unit repairable system consisting of an operating subsystem and a maintenance subsystem that is solely used to repair the operating subsystem in the event that it breaks down. Formulae for reliability indexes such as availability and distributions concerned with visits to certain subsets of states are presented in terms of state aggregations in which two partitions are used. The situation with exponentially distributed operational times is discussed not only as a special case of the approach used in this article but also using a Markov model. A numerical example is given to illustrate the results obtained in this article. Journal: IIE Transactions Pages: 1022-1032 Issue: 11 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2012.662309 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.662309 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:11:p:1022-1032 Template-Type: ReDIF-Article 1.0 Author-Name: Mingyang Li Author-X-Name-First: Mingyang Author-X-Name-Last: Li Author-Name: Jiali Han Author-X-Name-First: Jiali Author-X-Name-Last: Han Author-Name: Jian Liu Author-X-Name-First: Jian Author-X-Name-Last: Liu Title: Bayesian nonparametric modeling of heterogeneous time-to-event data with an unknown number of sub-populations Abstract: Time-to-event data are a broad class of data widely encountered at different stages of the product life cycle. In practice, time-to-event data often exhibit heterogeneity, due to a variety of design and manufacturing issues, such as material quality inhomogeneity, unverified design changes, and manufacturing defects. Existing time-to-event modeling approaches mainly ignore this heterogeneity or account for it by pre-determining a fixed number of sub-populations. However, neglecting heterogeneity hinders the modeling accuracy, whereas pre-determining the number of sub-populations is often subjective or unjustifiable.
In this article, a Bayesian nonparametric model is proposed to model heterogeneous time-to-event data by assuming an unknown number of sub-populations and quantifying the influence of possible covariates. An estimation algorithm is further proposed to achieve joint model estimation and selection and to deal with non-conjugate priors. Case studies demonstrate the effectiveness of the proposed work. Journal: IISE Transactions Pages: 481-492 Issue: 5 Volume: 49 Year: 2017 Month: 5 X-DOI: 10.1080/0740817X.2016.1234732 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1234732 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:5:p:481-492 Template-Type: ReDIF-Article 1.0 Author-Name: Young Myoung Ko Author-X-Name-First: Young Myoung Author-X-Name-Last: Ko Author-Name: Eunshin Byon Author-X-Name-First: Eunshin Author-X-Name-Last: Byon Title: Condition-based joint maintenance optimization for a large-scale system with homogeneous units Abstract: A joint maintenance policy that simultaneously repairs multiple units is useful for large-scale systems where the setup cost to initiate the maintenance is generally higher than the repair costs. This study proposes a new method for scheduling maintenance activities in a large-scale system with homogeneous units that degrade over time. Specifically, we consider the maintenance type that renews all units at each maintenance activity, which is practically applicable for systems where the units need to be regularly maintained. To make the analysis computationally tractable, we discretize the health condition of each unit into a finite number of states. The proposed optimization formulation triggers the maintenance activity based on the fraction of units at each degradation state. Based on relevant asymptotic theories, we analytically obtain the optimal threshold in the fraction of units at each state that minimizes the long-run average maintenance cost. Our implementation results with a wide range of parameter settings show that the proposed maintenance strategy is more cost-effective than alternative strategies. Journal: IISE Transactions Pages: 493-504 Issue: 5 Volume: 49 Year: 2017 Month: 5 X-DOI: 10.1080/0740817X.2016.1241457 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1241457 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:5:p:493-504 Template-Type: ReDIF-Article 1.0 Author-Name: Ulrike Grömping Author-X-Name-First: Ulrike Author-X-Name-Last: Grömping Title: Frequency tables for the coding-invariant quality assessment of factorial designs Abstract: Quality assessment of factorial designs, particularly mixed-level factorial designs, is a nontrivial task. Existing methods for orthogonal arrays include generalized minimum aberration, a modification thereof that was proposed by Wu and Zhang for mixed two- and four-level arrays, and minimum projection aberration. For supersaturated designs, E(s²)- or χ²-based criteria are widely used. Based on recent insights by Grömping and Xu regarding the interpretation of the projected aR values used in minimum projection aberration, this article proposes three new types of frequency tables for assessing the quality of level-balanced factorial designs. These are coding invariant, which is particularly important for designs with qualitative factors.
The proposed tables are used in the same way as those used in minimum projection aberration and behave more favorably when used for mixed-level arrays. Furthermore, they are much more manageable than the above-mentioned approach by Wu and Zhang. The article justifies the proposed tables based on their statistical information content, makes recommendations for their use, and compares them with each other and with existing criteria. As a byproduct, it is shown that generalized minimum aberration refines the established expected χ² criterion for level-balanced supersaturated designs. Journal: IISE Transactions Pages: 505-517 Issue: 5 Volume: 49 Year: 2017 Month: 5 X-DOI: 10.1080/0740817X.2016.1241458 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1241458 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:5:p:505-517 Template-Type: ReDIF-Article 1.0 Author-Name: Chen Zhang Author-X-Name-First: Chen Author-X-Name-Last: Zhang Author-Name: Nan Chen Author-X-Name-First: Nan Author-X-Name-Last: Chen Author-Name: Zhiguo Li Author-X-Name-First: Zhiguo Author-X-Name-Last: Li Title: State space modeling of autocorrelated multivariate Poisson counts Abstract: Although many applications involve autocorrelated multivariate counts, there is a scarcity of research on their statistical modeling. To fill this research gap, this article proposes a state space model to describe autocorrelated multivariate counts. The model builds upon the multivariate log-normal mixture Poisson distribution and allows for serial correlations by considering the Poisson mean vector as a latent process driven by a nonlinear autoregressive model. In this way, the model allows for flexible cross-correlation and autocorrelation structures of count data and can also capture overdispersion. The Monte Carlo Expectation Maximization algorithm, together with particle filtering and smoothing methods, provides satisfactory estimators for the model parameters and the latent process variables. Numerical studies show that, compared with other state-of-the-art models, the proposed model is more general and better describes count data generated by different counting mechanisms. Finally, we use this model to analyze counts of different types of damage collected from a power utility system as a case study. Supplementary materials are available for this article. Go to the publisher’s online edition of IISE Transactions for additional tables and figures. Journal: IISE Transactions Pages: 518-531 Issue: 5 Volume: 49 Year: 2017 Month: 5 X-DOI: 10.1080/24725854.2016.1251665 File-URL: http://hdl.handle.net/10.1080/24725854.2016.1251665 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:5:p:518-531 Template-Type: ReDIF-Article 1.0 Author-Name: Yanjun Qian Author-X-Name-First: Yanjun Author-X-Name-Last: Qian Author-Name: Jianhua Z. Huang Author-X-Name-First: Jianhua Z. Author-X-Name-Last: Huang Author-Name: Yu Ding Author-X-Name-First: Yu Author-X-Name-Last: Ding Title: Identifying multi-stage nanocrystal growth using in situ TEM video data Abstract: The in situ transmission electron microscopy technique is receiving considerable attention in material science research, as its in situ nature makes possible discoveries that ex situ instruments are unable to make and provides the capability of directly observing nanocrystal growth processes.
As increasing amounts of dynamic transmission electron microscopy (TEM) video data become available, one of the bottlenecks appears to be the lack of automated, quantitative, and dynamic analytic tools that can process the video data efficiently. The current processing is largely manual in nature and thus laborious, with existing tools focusing primarily on static TEM images. The absence of automated processing of TEM videos does not come as a surprise, as the growth of nanocrystals is highly stochastic and goes through multiple stages. We introduce a method in this article that is suitable for analyzing the in situ TEM videos in an automated and effective way. The method learns and tracks the normalized particle size distribution and identifies the phase-change points delineating the stages in nanocrystal growth. Using the outcome of the change-point detection process, we propose a hybrid multi-stage growth model and test it on an in situ TEM video, made available in 2009 by Science. Journal: IISE Transactions Pages: 532-543 Issue: 5 Volume: 49 Year: 2017 Month: 5 X-DOI: 10.1080/24725854.2016.1251666 File-URL: http://hdl.handle.net/10.1080/24725854.2016.1251666 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:5:p:532-543 Template-Type: ReDIF-Article 1.0 Author-Name: Andy Alexander Author-X-Name-First: Andy Author-X-Name-Last: Alexander Author-Name: Yanjun Li Author-X-Name-First: Yanjun Author-X-Name-Last: Li Author-Name: Robert Plante Author-X-Name-First: Robert Author-X-Name-Last: Plante Title: Sustaining system coordination in outsourcing the maintenance function of a process having a linear failure rate Abstract: An increasing trend in manufacturing is the outsourcing of maintenance and repair activities to an external contractor. An outsourced maintenance contract is presented that details the costs, timing, and possible bonuses for maintaining uptime thresholds and covers both minimal corrective repairs and regularly scheduled preventive replacements. By negotiating an incentive-based maintenance contract, the manufacturer and contractor can achieve system coordination, a mutually beneficial relationship that maximizes system profit. We study the sensitivity of system coordination to the expected cost of minimal corrective process repairs. For a manufacturing process with a linear failure rate, we develop a complete characterization of the intervals for the expected cost of repair and the corresponding contract parameters wherein system coordination is guaranteed for any expected cost of repair that occurs within the interval. Journal: IISE Transactions Pages: 544-552 Issue: 5 Volume: 49 Year: 2017 Month: 5 X-DOI: 10.1080/24725854.2016.1252074 File-URL: http://hdl.handle.net/10.1080/24725854.2016.1252074 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:5:p:544-552 Template-Type: ReDIF-Article 1.0 Author-Name: Wei Xie Author-X-Name-First: Wei Author-X-Name-Last: Xie Author-Name: Lijuan Shen Author-X-Name-First: Lijuan Author-X-Name-Last: Shen Author-Name: Yuanguang Zhong Author-X-Name-First: Yuanguang Author-X-Name-Last: Zhong Title: Two-dimensional aggregate warranty demand forecasting under sales uncertainty Abstract: Capital-intensive products, such as automobiles and heavy equipment, are often sold under a two-dimensional warranty policy considering both age and usage.
As products are sold to customers intermittently, the sales process evolves under uncertainty. This stochastic nature makes it difficult to predict the warranty demand over time. The existing literature on two-dimensional warranties focuses only on the warranty demand of a single unit, which overlooks the effect of the sales dynamics on the total warranty claims of sold units. To address this issue, from a new warranty analysis perspective, this study develops a general forecasting technique for the aggregate repair demand of all units sold up to a given time period. The stochastic sales process is modeled by a non-homogeneous Poisson process, and the product failures are captured by a two-dimensional failure process with minimal repair. When the first time-to-failure follows a bivariate exponential distribution, the expected aggregate repair demand for a given period of time is derived in an analytical form. Numerical experiments are presented to show the applicability and flexibility of our model in estimating the warranty demands with time-varying sales processes. Journal: IISE Transactions Pages: 553-565 Issue: 5 Volume: 49 Year: 2017 Month: 5 X-DOI: 10.1080/24725854.2016.1263769 File-URL: http://hdl.handle.net/10.1080/24725854.2016.1263769 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:5:p:553-565 Template-Type: ReDIF-Article 1.0 Author-Name: Nima Zaerpour Author-X-Name-First: Nima Author-X-Name-Last: Zaerpour Author-Name: Yugang Yu Author-X-Name-First: Yugang Author-X-Name-Last: Yu Author-Name: René B. M. de Koster Author-X-Name-First: René B. M. Author-X-Name-Last: de Koster Title: Response time analysis of a live-cube compact storage system with two storage classes Abstract: We study a next generation of storage systems: live-cube compact storage systems. These systems are becoming increasingly popular, due to their small physical and environmental footprint paired with a large storage space. At each level of a live-cube system, multiple shuttles take care of the movement of unit loads in the x and y directions. When multiple empty locations are available, the shuttles can cooperate to create a virtual aisle for the retrieval of a desired unit load. A lift takes care of the movement across different levels in the z-direction. Two-class-based storage, in which high turnover unit loads are stored at storage locations closer to the Input/Output point, can result in a short response time. We study two-class-based storage for a live-cube system and derive closed-form formulas for the expected retrieval time. Although the system needs to be decomposed into several cases and sub-cases, we eventually obtain simple-to-use closed-form formulas to evaluate the performance of systems with any configuration and first zone boundary. Continuous-space closed-form formulas are shown to be very close to the results obtained for discrete-space live-cube systems. The numerical results show that two-class-based storage can reduce the average response time of a live-cube system by up to 55% compared with random storage for the instances tested. Journal: IISE Transactions Pages: 461-480 Issue: 5 Volume: 49 Year: 2017 Month: 5 X-DOI: 10.1080/24725854.2016.1273563 File-URL: http://hdl.handle.net/10.1080/24725854.2016.1273563 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:49:y:2017:i:5:p:461-480 Template-Type: ReDIF-Article 1.0 Author-Name: Seung Moon Author-X-Name-First: Seung Author-X-Name-Last: Moon Author-Name: Jun Shu Author-X-Name-First: Jun Author-X-Name-Last: Shu Author-Name: Timothy Simpson Author-X-Name-First: Timothy Author-X-Name-Last: Simpson Author-Name: Soundar Kumara Author-X-Name-First: Soundar Author-X-Name-Last: Kumara Title: A module-based service model for mass customization: service family design Abstract: Service science research seeks to improve the productivity and quality of service offerings by creating innovations, facilitating business management, and developing practical applications. Recent trends seek to apply and extend principles from product family design and mass customization into new service development. Product family design is a cost-effective way to achieve mass customization by allowing highly differentiated products to be developed from a common platform while targeting individual products to distinct market segments. This article extends concepts from module-based product families to create a method for service design. The objective in this research is to develop a method for designing customized families of services using game theory to model situations involving dynamic market environments. A module-based service model is proposed to facilitate customized service design and represent the relationships between functions and processes in a service. A module selection problem for platform design is considered as a strategic module-sharing problem under a collaboration situation. A coalitional game is used to model potential module sharing and determine which modules used in the platform provide the most benefit. A case study involving a family of banking services is used to demonstrate implementation of the proposed method. Journal: IIE Transactions Pages: 153-163 Issue: 3 Volume: 43 Year: 2011 X-DOI: 10.1080/07408171003705383 File-URL: http://hdl.handle.net/10.1080/07408171003705383 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:3:p:153-163 Template-Type: ReDIF-Article 1.0 Author-Name: Refael Hassin Author-X-Name-First: Refael Author-X-Name-Last: Hassin Author-Name: Yana Kleiner Author-X-Name-First: Yana Author-X-Name-Last: Kleiner Title: Equilibrium and optimal arrival patterns to a server with opening and closing times Abstract: This article considers a first-come first-served single-server system with opening and closing times. Service durations are exponentially distributed, and the total number of arrivals is a Poisson random variable. Naturally each customer wishes to minimize his/her waiting time. The process of choosing an arrival time is presented as a (non-cooperative) multi-player game. The overall goal of this work is to find a Nash equilibrium game strategy. It is assumed in the literature that arrivals before the opening time of the system are allowed. In this work the case where early arrivals are forbidden is studied. It turns out that unless the system is very heavily loaded, the equilibrium solution with such a restriction does not reduce the expected waiting time in a significant way. The equilibrium solution is compared with the solution which maximizes social welfare. Finally, it is shown that social welfare can be increased in equilibrium by restricting arrivals to certain points of time.
Journal: IIE Transactions Pages: 164-175 Issue: 3 Volume: 43 Year: 2011 X-DOI: 10.1080/07408171003792449 File-URL: http://hdl.handle.net/10.1080/07408171003792449 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:3:p:164-175 Template-Type: ReDIF-Article 1.0 Author-Name: Andrew Johnson Author-X-Name-First: Andrew Author-X-Name-Last: Johnson Author-Name: Leon McGinnis Author-X-Name-First: Leon Author-X-Name-Last: McGinnis Title: Performance measurement in the warehousing industry Abstract: Warehouses are a substantial component of logistic operations and an important contributor to speed and cost in supply chains. While there are widely accepted benchmarks for individual warehouse functions such as order picking, little is known about the overall technical efficiency of warehouses. Lacking a general understanding of warehouse technical efficiency and the associated causal factors limits the industry's ability to identify the best opportunities for improving warehouse performance. The problem is compounded by the significant gap in the education and training of the industry's professionals. This article addresses this gap by describing both a new methodology for assessing warehouse technical efficiency, based on empirical data and integrating several statistical approaches, and new results derived from applying the method to a large sample of warehouses. The self-reported nature of attributes and performance data makes the use of statistical methods for rectifying data, validating models, and identifying key factors affecting efficient performance particularly appropriate. This article also identifies several opportunities for additional research on warehouse assessment and optimization. [Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for appendices and additional tables.] Journal: IIE Transactions Pages: 220-230 Issue: 3 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.491497 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.491497 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:3:p:220-230 Template-Type: ReDIF-Article 1.0 Author-Name: Nan Chen Author-X-Name-First: Nan Author-X-Name-Last: Chen Author-Name: Shiyu Zhou Author-X-Name-First: Shiyu Author-X-Name-Last: Zhou Title: Simulation-based estimation of cycle time using quantile regression Abstract: Production cycle time is an important performance measure in manufacturing systems, and thus it is of interest to characterize distributional properties, such as quantiles, for informative decision making. This article proposes a non-linear quantile regression model for the relationship between stationary cycle time quantiles and corresponding throughput rates of a manufacturing system. The statistical properties of the estimated cycle time quantiles are investigated and the impact of dependent data from simulation output on parameter estimations is analyzed. Extensive numerical studies are presented to demonstrate the effectiveness of the proposed methods. Journal: IIE Transactions Pages: 176-191 Issue: 3 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.521806 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.521806 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:43:y:2011:i:3:p:176-191 Template-Type: ReDIF-Article 1.0 Author-Name: Ezgi Eren Author-X-Name-First: Ezgi Author-X-Name-Last: Eren Author-Name: Natarajan Gautam Author-X-Name-First: Natarajan Author-X-Name-Last: Gautam Title: Efficient control for a multi-product quasi-batch process via stochastic dynamic programming Abstract: This article considers a quasi-batch process where items are continuously processed while they move on a conveyor belt. In addition, the products arriving into the processor require variable amounts of processing, which translate into different processor levels. Keeping the processing level constant in such a system results in severe inefficiency in terms of consumption of energy and resources with high production costs and a poor level of environmental performance. A stochastic dynamic programming model is formulated that strikes a balance between consumption of energy and material, processor performance, and product quality. The model minimizes total system-wide cost, which is essentially a unified measure across all the objectives. The structural properties of the optimal policy and value functions are analyzed, taking into account the high dimensionality of the state space. Based on some of these results, efficient heuristic methodologies are developed to solve large instances of the problem. It is shown using several numerical experiments that a significant amount of energy or material resources can be saved and total costs can be reduced considerably compared to the current practices in the process industry. Insights on the sensitivity of results with respect to the cost parameters are provided. Journal: IIE Transactions Pages: 192-206 Issue: 3 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.521808 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.521808 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:3:p:192-206 Template-Type: ReDIF-Article 1.0 Author-Name: Charu Sinha Author-X-Name-First: Charu Author-X-Name-Last: Sinha Author-Name: Matthew Sobel Author-X-Name-First: Matthew Author-X-Name-Last: Sobel Author-Name: Volodymyr Babich Author-X-Name-First: Volodymyr Author-X-Name-Last: Babich Title: Computationally simple and unified approach to finite- and infinite-horizon Clark–Scarf inventory model Abstract: In this article, it is shown that an easily computed and simply structured policy for making work-order decisions is optimal in the Clark–Scarf inventory model. This is a model of a make-to-stock multistage serial manufacturing process with convex costs of finished goods inventory, a setup cost for purchasing, linear costs of work-in-process inventories, and backlogging of excess demand. The criteria used in this article include the expected present value of costs (in the finite and infinite horizons) and the long-run average cost per period. Moreover, the same myopic policy that is optimal for the finite-horizon model is also optimal for the infinite-horizon model. This permits a unified approach to the various criteria. Journal: IIE Transactions Pages: 207-219 Issue: 3 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.523766 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.523766 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:43:y:2011:i:3:p:207-219 Template-Type: ReDIF-Article 1.0 Author-Name: The Editors Title: Corrigendum Journal: IIE Transactions Pages: 231-231 Issue: 3 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2011.547782 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.547782 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:3:p:231-231 Template-Type: ReDIF-Article 1.0 Author-Name: Devashish Das Author-X-Name-First: Devashish Author-X-Name-Last: Das Author-Name: Shiyu Zhou Author-X-Name-First: Shiyu Author-X-Name-Last: Zhou Title: Statistical process monitoring based on maximum entropy density approximation and level set principle Abstract: Most control charts are based on the idea of separating the sample space of the quantity being monitored into an in-control region and an out-of-control region. This article proposes a control chart scheme that is based on the following ideas. First, a maximum entropy density is fitted to the null distribution of the quantity being monitored. Then the in-control region is selected as the one with the minimum volume from a set of acceptable in-control regions. The proposed control chart method utilizes the fact that density level sets are minimum-volume sets, and thus a level set is selected as the optimal in-control region. The proposed control chart scheme defined by the level set is shown to be effective in detecting changes in the distribution of the quantity being monitored. Various numerical case studies are presented to illustrate the effectiveness of the proposed method. Journal: IIE Transactions Pages: 215-229 Issue: 3 Volume: 47 Year: 2015 Month: 3 X-DOI: 10.1080/0740817X.2014.916460 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.916460 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:3:p:215-229 Template-Type: ReDIF-Article 1.0 Author-Name: Ran Jin Author-X-Name-First: Ran Author-X-Name-Last: Jin Author-Name: Xinwei Deng Author-X-Name-First: Xinwei Author-X-Name-Last: Deng Title: Ensemble modeling for data fusion in manufacturing process scale-up Abstract: In modern manufacturing process scale-up, design of experiments is widely used to identify optimal process settings, followed by production runs to validate these process settings. Both experimental data and observational data are collected in the manufacturing process. However, current methodologies often use a single type of data to model the process. This work presents an innovative method to efficiently model a manufacturing process by integrating the two types of data. An ensemble modeling strategy is proposed that utilizes the constrained likelihood approach, where the constraints incorporate the sequential nature and inherent features of the two types of data. It therefore achieves better estimation and prediction than conventional methods. Simulations and a case study in wafer manufacturing are provided to illustrate the merits of the proposed method. Journal: IIE Transactions Pages: 203-214 Issue: 3 Volume: 47 Year: 2015 Month: 3 X-DOI: 10.1080/0740817X.2014.916580 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.916580 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:47:y:2015:i:3:p:203-214 Template-Type: ReDIF-Article 1.0 Author-Name: Fangyi He Author-X-Name-First: Fangyi Author-X-Name-Last: He Author-Name: Huiliang Xie Author-X-Name-First: Huiliang Author-X-Name-Last: Xie Author-Name: Kaibo Wang Author-X-Name-First: Kaibo Author-X-Name-Last: Wang Title: Optimal setup adjustment and control of a process under ARMA disturbances Abstract: Process adjustment uses information from past runs to adjust settings for the next run and bring the output to its target. The efficiency of a control algorithm depends on the nature of the disturbance and dynamics of the process. This article develops a control algorithm when the disturbance is a general ARMA(p, q) process, in the presence of measurement error and adjustment error together with a random initial bias. Its optimality property is established and the stability conditions are derived. It is shown that the popular Exponentially Weighted Moving Average (EWMA) controller is a special case of the proposed controller. In addition, Monte Carlo simulations are conducted to study the finite sample behavior of the proposed controller and compare it with the proportional–integral–derivative controller when the disturbance is an ARMA(1,1) process and with the EWMA controller when the disturbance is an IMA(1,1) process. The ARMA controller is also implemented to control an ARMA(2,1) disturbance and its performance is compared with the other two controllers. All of the results reflect the new controller’s superiority when multiple sources of uncertainty exist or a general ARMA(p, q) disturbance is incurred. Journal: IIE Transactions Pages: 230-244 Issue: 3 Volume: 47 Year: 2015 Month: 3 X-DOI: 10.1080/0740817X.2014.928959 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.928959 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:3:p:230-244 Template-Type: ReDIF-Article 1.0 Author-Name: Cheng-Hung Hu Author-X-Name-First: Cheng-Hung Author-X-Name-Last: Hu Author-Name: Robert D. Plante Author-X-Name-First: Robert D. Author-X-Name-Last: Plante Author-Name: Jen Tang Author-X-Name-First: Jen Author-X-Name-Last: Tang Title: Equivalent step-stress accelerated life tests with log-location-scale lifetime distributions under Type-I censoring Abstract: Accelerated Life Testing (ALT) is used to provide timely estimates of a product's lifetime distribution. Step-Stress ALT (SSALT) is one of the most widely adopted stress loadings and the optimum design of a SSALT plan has been extensively studied. However, few research efforts have been devoted to establishing the theoretical rationale for using SSALT in lieu of other types of stress loadings. This article proves the existence of statistically equivalent SSALT plans that can provide equally precise estimates to those derived from any continuous stress loading for the log-location-scale lifetime distributions with Type-I censoring. That is, for any optimization criterion based on the Fisher information matrix, SSALT performs identically to other continuous stress loadings. The Weibull and lognormal distributions are introduced as special cases. For these two distributions, the relationship among statistical equivalencies is investigated and it is shown that two equivalent ALT plans must be equivalent in terms of the strongest version of equivalency for many objective functions.
A numerical example for a ramp-stress ALT, using data from an existing study on miniature lamps, is used to illustrate equivalent SSALT plans. Results show that SSALT is not only equivalent to the existing ramp-stress test plans but also more cost-effective in terms of the total test cost. Journal: IIE Transactions Pages: 245-257 Issue: 3 Volume: 47 Year: 2015 Month: 3 X-DOI: 10.1080/0740817X.2014.928960 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.928960 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:3:p:245-257 Template-Type: ReDIF-Article 1.0 Author-Name: Churlzu Lim Author-X-Name-First: Churlzu Author-X-Name-Last: Lim Author-Name: Hanif D. Sherali Author-X-Name-First: Hanif D. Author-X-Name-Last: Sherali Author-Name: Theodore S. Glickman Author-X-Name-First: Theodore S. Author-X-Name-Last: Glickman Title: Cost-of-Quality Optimization via Zero-One Polynomial Programming Abstract: In this paper, we consider a Cost-of-Quality (CoQ) optimization problem that finds an optimal allocation of prevention and inspection resources to minimize the expected total quality costs under a prevention-appraisal-failure framework, where the quality costs in the proposed model are associated with prevention, inspection, and correction of internal and external failures. Commencing with a simple structure, we progressively increase the complexity of the problem by accommodating realistic scenarios regarding preventive, appraisal, and corrective actions. The resulting problem is formulated as a zero-one polynomial program, which can be solved either directly using a mixed-integer nonlinear programming solver such as BARON, or using a more conventional mixed-integer linear programming (MILP) solver such as CPLEX after performing an appropriate linearization step. We examine two case studies from the literature (related to a lamp manufacturing context and an order entry process) to illustrate how the proposed model can be utilized to find optimal inspection and prevention strategies, as well as to analyze sensitivity with respect to different cost parameters. We also provide a comparative numerical study of using the aforementioned solvers to optimize the respective model formulations. The results provide insights into the use of such quantitative methods for optimizing the CoQ, and indicate the efficacy of using the linearized MILP model for this purpose. Journal: IIE Transactions Pages: 258-273 Issue: 3 Volume: 47 Year: 2015 Month: 3 X-DOI: 10.1080/0740817X.2014.928964 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.928964 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:3:p:258-273 Template-Type: ReDIF-Article 1.0 Author-Name: Kyungmee O. Kim Author-X-Name-First: Kyungmee O. Author-X-Name-Last: Kim Author-Name: Ming J. Zuo Author-X-Name-First: Ming J. Author-X-Name-Last: Zuo Title: Effects of subsystem mission time on reliability allocation Abstract: During the early stages of system development, various factors are considered when determining an allocation weight to apportion a system’s reliability requirement to each subsystem. Previous methods have included subsystem mission time as a factor in obtaining the allocation weight in order to allocate a higher failure rate to a subsystem with a shorter mission time than the system’s mission time.
This article first shows that the results obtained from previous methods are misleading, mainly because the allocated failure rate of the subsystem is expressed in the system’s mission time rather than the subsystem’s mission time. It is further shown that if a designer intends to allocate a lower failure rate to a subsystem that has to operate longer in the system, subsystem mission time must not be included as a factor when determining the allocation weight. If a designer wants to allocate the system failure rate equally to each subsystem regardless of a subsystem’s mission time, subsystem mission time must be included as a factor. Journal: IIE Transactions Pages: 285-293 Issue: 3 Volume: 47 Year: 2015 Month: 3 X-DOI: 10.1080/0740817X.2014.929363 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.929363 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:3:p:285-293 Template-Type: ReDIF-Article 1.0 Author-Name: Lijuan Xu Author-X-Name-First: Lijuan Author-X-Name-Last: Xu Author-Name: Li Wang Author-X-Name-First: Li Author-X-Name-Last: Wang Author-Name: Qiang Huang Author-X-Name-First: Qiang Author-X-Name-Last: Huang Title: Growth process modeling of semiconductor nanowires for scale-up of nanomanufacturing: A review Abstract: To keep up with the increasing need for nanomanufacturing (NM), research on the scale-up of NM has become an emerging field of study. This review article first discusses the authors' understanding of the classification of scale-up NM research, which entails scale-up process research and scale-up methodology research. The scale-up methodology research includes establishing modeling, simulation, and control methodologies that enable and support economic production at commercial scale. Since NM process modeling provides the basis for process monitoring and control, guided inspection and sensing strategy, and more-efficient experimental design strategy for robust synthesis of nanomaterials, semiconductor nanowire growth is used as an example to review different process modeling strategies for scalable NM. The modeling strategies from the existing literature on nanowire growth studies are summarized into four categories: (i) physical modeling; (ii) statistical modeling; (iii) physical-statistical modeling; and (iv) cross-domain modeling and validation. In addition to illustrating modeling approaches in the literature, suitable domains of applications for each modeling strategy are discussed. Two potential areas worthy of future research efforts are highlighted. Journal: IIE Transactions Pages: 274-284 Issue: 3 Volume: 47 Year: 2015 Month: 3 X-DOI: 10.1080/0740817X.2014.937018 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.937018 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:3:p:274-284 Template-Type: ReDIF-Article 1.0 Author-Name: Christian H. Weiß Author-X-Name-First: Christian H. Author-X-Name-Last: Weiß Author-Name: Murat Caner Testik Author-X-Name-First: Murat Caner Author-X-Name-Last: Testik Title: On the Phase I analysis for monitoring time-dependent count processes Abstract: In designing a control chart for online process monitoring, a Phase I analysis is often conducted as a first step to estimate the unknown process parameters. It is based on historical data, where parameter estimation, chart design, and data filtering are iterated until a stable and reliable chart design is obtained.
Researchers sometimes neglect the effects of the Phase I analysis, assuming that process parameters are known and directly evaluating the performance of a control chart in Phase II (online process monitoring). In this research, the Phase I analysis of time-dependent count data stemming from Poisson INAR(1) and binomial AR(1) processes is considered. Due to data filtering, parameter estimation in Phase I analysis is challenging, especially when the data are autocorrelated. In this regard, solutions on how to modify the method of moments, least squares, and maximum likelihood estimation are presented. The performance of these estimators in Phase I, as well as the performance of the designed control charts in Phase II, is evaluated, and recommendations are provided. A real-world example on IP counts is used to illustrate the Phase I and Phase II control chart implementations. Journal: IIE Transactions Pages: 294-306 Issue: 3 Volume: 47 Year: 2015 Month: 3 X-DOI: 10.1080/0740817X.2014.952850 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.952850 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:3:p:294-306 Template-Type: ReDIF-Article 1.0 Author-Name: Haengju Lee Author-X-Name-First: Haengju Author-X-Name-Last: Lee Author-Name: Yongsoon Eun Author-X-Name-First: Yongsoon Author-X-Name-Last: Eun Title: Estimating Primary Demand for a Heterogeneous-Groups Product Category under Hierarchical Consumer Choice Model Abstract: This paper discusses the estimation of primary demand (i.e., the true demand before the stockout-based substitution effect occurs) for a heterogeneous-groups product category that is sold in the department store setting, based on historical sales data, product availability, and market share information. For such products, a hierarchical consumer choice model can better represent purchasing behavior. This means that choice occurs on multiple levels: A consumer might choose a particular product group on the first level and purchase a product within that chosen group on the second level. Hence, in the present study, we used the nested multinomial logit (NMNL) choice model for the hierarchical choice and combined it with non-homogeneous Poisson arrivals over multiple periods. The expectation-maximization (EM) algorithm was applied to estimate the primary demand while treating the observed sales data as an incomplete observation of that demand. We considered the estimation problem as an optimization problem in terms of the inter-product-group heterogeneity, and this approach relieves the revenue management system of the computational burden of using a nonlinear optimization package. We subsequently tested the procedure with simulated data sets. The results confirmed that our algorithm estimates the demand parameters effectively for data sets with a high level of inter-product-group heterogeneity. Journal: IIE Transactions Pages: 541-554 Issue: 6 Volume: 48 Year: 2016 Month: 6 X-DOI: 10.1080/0740817X.2015.1078524 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1078524 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:6:p:541-554 Template-Type: ReDIF-Article 1.0 Author-Name: Susan R. Hunter Author-X-Name-First: Susan R.
Author-X-Name-Last: Hunter Author-Name: Benjamin McClosky Author-X-Name-First: Benjamin Author-X-Name-Last: McClosky Title: Maximizing quantitative traits in the mating design problem via simulation-based Pareto estimation Abstract: Commercial plant breeders improve economically important traits by selectively mating individuals from a given breeding population. Potential pairings are evaluated before the growing season using Monte Carlo simulation, and a mating design is created to allocate a fixed breeding budget across the parent pairs to achieve desired population outcomes. We introduce a novel objective function for this mating design problem that accurately models the goals of a certain class of breeding experiments. The resulting mating design problem is a computationally burdensome simulation optimization problem on a combinatorially large set of feasible points. We propose a two-step solution to this problem: (i) simulate to estimate the performance of each parent pair and (ii) solve an estimated version of the mating design problem, which is an integer program, using the simulation output. To reduce the computational burden when implementing steps (i) and (ii), we analytically identify a Pareto set of parent pairs that will receive the entire breeding budget at optimality. Since we wish to estimate the Pareto set in step (i) as input to step (ii), we derive an asymptotically optimal simulation budget allocation to estimate the Pareto set that, in our numerical experiments, outperforms Multi-objective Optimal Computing Budget Allocation in reducing misclassifications. Given the estimated Pareto set, we provide a branch-and-bound algorithm to solve the estimated mating design problem. Our approach dramatically reduces the computational effort required to solve the mating design problem when compared with naïve methods. Journal: IIE Transactions Pages: 565-578 Issue: 6 Volume: 48 Year: 2016 Month: 6 X-DOI: 10.1080/0740817X.2015.1096430 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1096430 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:6:p:565-578 Template-Type: ReDIF-Article 1.0 Author-Name: Erick Moreno-Centeno Author-X-Name-First: Erick Author-X-Name-Last: Moreno-Centeno Author-Name: Adolfo R. Escobedo Author-X-Name-First: Adolfo R. Author-X-Name-Last: Escobedo Title: Axiomatic aggregation of incomplete rankings Abstract: In many different applications of group decision-making, individual ranking agents or judges are able to rank only a small subset of all available candidates. However, as we argue in this article, the aggregation of these incomplete ordinal rankings into a group consensus has not been adequately addressed. We propose an axiomatic method to aggregate a set of incomplete rankings into a consensus ranking; the method is a generalization of an existing approach to aggregate complete rankings. More specifically, we introduce a set of natural axioms that must be satisfied by a distance between two incomplete rankings; prove the uniqueness and existence of a distance satisfying such axioms; formulate the aggregation of incomplete rankings as an optimization problem; propose and test a specific algorithm to solve a variation of this problem where the consensus ranking does not contain ties; and show that the consensus ranking obtained by our axiomatic approach is more intuitive than the consensus ranking obtained by other approaches.
Journal: IIE Transactions Pages: 475-488 Issue: 6 Volume: 48 Year: 2016 Month: 6 X-DOI: 10.1080/0740817X.2015.1109737 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1109737 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:6:p:475-488 Template-Type: ReDIF-Article 1.0 Author-Name: David J. Eckman Author-X-Name-First: David J. Author-X-Name-Last: Eckman Author-Name: Lisa M. Maillart Author-X-Name-First: Lisa M. Author-X-Name-Last: Maillart Author-Name: Andrew J. Schaefer Author-X-Name-First: Andrew J. Author-X-Name-Last: Schaefer Title: Optimal pinging frequencies in the search for an immobile beacon Abstract: We consider a search for an immobile object that can only be detected if the searcher is within a given range of the object during one of a finite number of instantaneous detection opportunities; i.e., “pings.” More specifically, motivated by naval searches for battery-powered flight data recorders of missing aircraft, we consider the trade-off between the frequency of pings for an underwater locator beacon and the duration of the search. First, assuming that the search speed is known, we formulate a mathematical model to determine the pinging period that maximizes the probability that the searcher detects the beacon before it stops pinging. Next, we consider generalizations to discrete search speed distributions under a uniform beacon location distribution. Lastly, we present a case study based on the search for Malaysia Airlines Flight 370 that suggests that the industry-standard beacon pinging period—roughly 1 second between pings—is too short. Journal: IIE Transactions Pages: 489-500 Issue: 6 Volume: 48 Year: 2016 Month: 6 X-DOI: 10.1080/0740817X.2015.1110270 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1110270 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:6:p:489-500 Template-Type: ReDIF-Article 1.0 Author-Name: Jeremy Staum Author-X-Name-First: Jeremy Author-X-Name-Last: Staum Author-Name: Mingbin Feng Author-X-Name-First: Mingbin Author-X-Name-Last: Feng Author-Name: Ming Liu Author-X-Name-First: Ming Author-X-Name-Last: Liu Title: Systemic risk components in a network model of contagion Abstract: We show how to perform a systemic risk attribution in a network model of contagion with interlocking balance sheets, using the Shapley and Aumann–Shapley values. Along the way, we establish new results on the sensitivity analysis of the Eisenberg–Noe network model of contagion, featuring a Markov chain interpretation. We illustrate the design process for systemic risk attribution methods by developing several examples. Journal: IIE Transactions Pages: 501-510 Issue: 6 Volume: 48 Year: 2016 Month: 6 X-DOI: 10.1080/0740817X.2015.1110650 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1110650 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:6:p:501-510 Template-Type: ReDIF-Article 1.0 Author-Name: Taner Bilgiç Author-X-Name-First: Taner Author-X-Name-Last: Bilgiç Author-Name: Refik Güllü Author-X-Name-First: Refik Author-X-Name-Last: Güllü Title: Innovation race under revenue and technology uncertainty of heterogeneous firms where the winner does not take all Abstract: We analyze the competitive investment behavior in innovative products or services under revenue and technology uncertainty for heterogeneous firms.
Firms make a decision on how much to invest in research and development of an innovative technology at the beginning of the time horizon. They discover the technology at an uncertain time in the future. The time of successful discovery depends on the amount of investment and the characteristics of the firms. All firms collect revenues even though they are not winners. Although there can be positive or negative external shocks, the potential revenue rates decrease in time and the first firm to adopt the technology is less prone to negative shocks and benefits more from positive shocks. Therefore, the competition is a stochastic race, where all firms collect some revenue once they adopt. We show the existence of a pure strategy Nash equilibrium for this game in a duopoly market under general assumptions and provide more structural results when the time to successfully innovate is exponentially distributed. We show the uniqueness of the equilibrium for an arbitrary number of symmetric firms. We argue that for sufficiently efficient firms that are resilient against market shocks, consolidating racing firms will decrease their expected profits. We also provide an illustrative computational analysis for comparative statics, where we show examples of the non-monotonic behavior of equilibrium investment levels. It appears that the equilibrium investment level behavior in innovation can be highly dependent on firm characteristics. Journal: IIE Transactions Pages: 527-540 Issue: 6 Volume: 48 Year: 2016 Month: 6 X-DOI: 10.1080/0740817X.2015.1110651 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1110651 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:6:p:527-540 Template-Type: ReDIF-Article 1.0 Author-Name: Soonhui Lee Author-X-Name-First: Soonhui Author-X-Name-Last: Lee Author-Name: Barry L. Nelson Author-X-Name-First: Barry L. Author-X-Name-Last: Nelson Title: General-purpose ranking and selection for computer simulation Abstract: Many indifference-zone Ranking-and-Selection (R&S) procedures have been invented for choosing the best simulated system. To obtain the desired Probability of Correct Selection (PCS), existing procedures exploit knowledge about the particular combination of system performance measure (e.g., mean, probability, variance, quantile) and assumed output distribution (e.g., normal, exponential, Poisson). In this article, we take a step toward general-purpose R&S procedures that work for many types of performance measures and output distributions, including situations where different simulated alternatives have entirely different output distribution families. There are only two versions of our procedure: with and without the use of common random numbers. To obtain the required PCS we exploit intense computation via bootstrapping, and to mitigate the computational effort we create an adaptive sample-allocation scheme that guides the procedure to quickly reach the necessary sample size. We establish the asymptotic PCS of these procedures under very mild conditions and provide a finite-sample empirical evaluation of them as well. Journal: IIE Transactions Pages: 555-564 Issue: 6 Volume: 48 Year: 2016 Month: 6 X-DOI: 10.1080/0740817X.2015.1125043 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1125043 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:48:y:2016:i:6:p:555-564 Template-Type: ReDIF-Article 1.0 Author-Name: Xing Gao Author-X-Name-First: Xing Author-X-Name-Last: Gao Author-Name: Weijun Zhong Author-X-Name-First: Weijun Author-X-Name-Last: Zhong Title: A differential game approach to security investment and information sharing in a competitive environment Abstract: Information security economics, an emerging and thriving research topic, attempts to address the problems of distorted incentives for stakeholders in an Internet environment, including firms, hackers, the public sector, and other participants, using economic approaches. To alleviate consumer anxiety about the loss of sensitive information, and to further increase consumer demand, firms usually integrate their information security investment strategies to capture market share from competitors and their security information sharing strategies to increase consumer demand across all member firms in industry-based information sharing centers. Using differential game theory, this article investigates dynamic strategies for security investment and information sharing for two competing firms under targeted attacks, in which both firms can influence the value of their information assets through the endogenous determination of pricing rates. We analytically and numerically examine how both security investment rates and information sharing rates are affected by several key parameters in a non-cooperative scenario, including the efficiency of security investment rates, sensitivity parameters for pricing rates, coefficients of consumer demand losses, and the density of targeted attacks. Our results reveal that, confronted with a higher coefficient of consumer demand loss and a higher density of targeted attacks, both firms are reluctant to aggressively defend against hackers and would rather decrease the negative effect of hacker attacks by lowering their pricing rates. Also, we derive feedback equilibrium solutions for the situation where both firms cooperate in security investment, information sharing, or both. It is revealed that although a higher hacker attack density always decreases a firm's integral profits, both firms are not always willing to cooperate in security investment and information sharing. Specifically, the superior firm benefits most when both firms fully cooperate and benefits the least when they behave fully non-cooperatively. However, the inferior firm enjoys the highest integral profit when both firms only cooperate in information sharing and the lowest integral profit in the completely cooperative situation. Journal: IIE Transactions Pages: 511-526 Issue: 6 Volume: 48 Year: 2016 Month: 6 X-DOI: 10.1080/0740817X.2015.1125044 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1125044 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:6:p:511-526 Template-Type: ReDIF-Article 1.0 Author-Name: Min Zhang Author-X-Name-First: Min Author-X-Name-Last: Zhang Author-Name: Rajan Batta Author-X-Name-First: Rajan Author-X-Name-Last: Batta Author-Name: Rakesh Nagi Author-X-Name-First: Rakesh Author-X-Name-Last: Nagi Title: Designing manufacturing facility layouts to mitigate congestion Abstract: When workflow congestion is prevalent, minimizing total expected material handling time is more appropriate than minimizing a distance-based objective in manufacturing facility layout design. 
This article presents a model labeled the Full Assignment Problem with Congestion (FAPC), which simultaneously optimizes the layout and flow routing. FAPC is a generalization of the Quadratic Assignment Problem (QAP), a classic problem for the location of a set of indivisible economic activities. A branch-and-price algorithm is proposed and a computational study is performed to verify its effectiveness as a solution methodology for the FAPC. A numerical study confirms the benefits of simultaneous consideration of layout and routing when confronted with workflow congestion. A detailed simulation for a case problem is presented to verify the overall benefits of incorporating congestion in layout/routing. A critique of FAPC with two alternative models is also provided. Three conclusions are offered from this work. First, a combination of re-layout and re-routing is a more powerful way to mitigate the impact of workflow congestion than using just the re-layout or just the re-routing options. Second, it is important to model workflow congestion in a manufacturing facility—that is, ignoring it can result in a significantly inferior design. Third, the QAP layout is dominated by the FAPC layout for situations of medium workflow intensity. Journal: IIE Transactions Pages: 689-702 Issue: 10 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.546386 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.546386 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:10:p:689-702 Template-Type: ReDIF-Article 1.0 Author-Name: Letitia Pohl Author-X-Name-First: Letitia Author-X-Name-Last: Pohl Author-Name: Russell Meller Author-X-Name-First: Russell Author-X-Name-Last: Meller Author-Name: Kevin Gue Author-X-Name-First: Kevin Author-X-Name-Last: Gue Title: Turnover-based storage in non-traditional unit-load warehouse designs Abstract: This article investigates the effect of assigning the most-active items to the best locations in unit-load warehouses with non-traditional aisles. Specifically, the performance of flying-V and fishbone designs is investigated when products exhibit different velocity profiles. Both single- and dual-command operations are considered for a warehouse where receiving and shipping are located at the midpoint of one side of the warehouse. For dual-command operations, a fishbone design shows similar reductions in travel distances for both random and turnover-based storage policies. The fishbone designs that provide the best performance have a diagonal cross aisle that extends to the upper corners of the picking space and are approximately half as tall as they are wide. In general, warehouse design parameters that perform best under random storage also perform well under turnover-based storage. Journal: IIE Transactions Pages: 703-720 Issue: 10 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.549098 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.549098 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:10:p:703-720 Template-Type: ReDIF-Article 1.0 Author-Name: Jennifer Pazour Author-X-Name-First: Jennifer Author-X-Name-Last: Pazour Author-Name: Russell Meller Author-X-Name-First: Russell Author-X-Name-Last: Meller Title: An analytical model for A-frame system design Abstract: An A-frame system is a highly automated piece-level order-fulfillment technology.
A systematic analysis is performed in this article to understand the design decisions involved in using an A-frame system in a distribution center. A math-programming-based approach to determine the amount of A-frame infrastructure investment is presented along with the assignment and allocation of Stock Keeping Units (SKUs) to the A-frame. Throughput requirements are then explicitly addressed by developing analytical models for the throughput of an A-frame and heuristics to adjust the allocation and assignment of SKUs so that the A-frame meets a throughput constraint. The proposed heuristic approach performs well compared with exact solution approaches for small problems. Since the proposed methodology is capable of solving industrial-sized problems, it is applied to a case study from the pharmaceutical industry. Design testing indicates that A-frame systems provide the greatest potential in labor savings when a distribution center has high item commonality, small order sizes, and high skewness levels, and the greatest potential in throughput when many small orders have low item commonality and low skewness levels. Journal: IIE Transactions Pages: 739-752 Issue: 10 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.549099 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.549099 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:10:p:739-752 Template-Type: ReDIF-Article 1.0 Author-Name: Kan Wu Author-X-Name-First: Kan Author-X-Name-Last: Wu Author-Name: Leon McGinnis Author-X-Name-First: Leon Author-X-Name-Last: McGinnis Author-Name: Bert Zwart Author-X-Name-First: Bert Author-X-Name-Last: Zwart Title: Queueing models for a single machine subject to multiple types of interruptions Abstract: Queueing models are commonly applied to quantify the performance of production systems. Prior research has usually focused on deriving queueing models for a specific type of interruption. However, machines generally suffer multiple types of interruptions in practical manufacturing systems. To address this gap, an integrated model is proposed, in which multiple types of interruptions commonly seen on the shop floor are considered. Journal: IIE Transactions Pages: 753-759 Issue: 10 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.550907 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.550907 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:10:p:753-759 Template-Type: ReDIF-Article 1.0 Author-Name: Dima Nazzal Author-X-Name-First: Dima Author-X-Name-Last: Nazzal Title: A closed queueing network approach to analyzing multi-vehicle material handling systems Abstract: This article models a multi-vehicle material handling system as a closed-loop queueing network with finite buffers and general service times, where the vehicles represent the jobs in the network. This type of network differs from other queueing systems, because the vehicles’ residence times on track segments (servers) depend on the number of jobs (vehicles) in circulation. A new iterative approximation algorithm is presented that estimates throughput capacity and decomposes the network consisting of S servers into S separate G/G/1 systems. Each subsystem is analyzed separately to estimate the work-in-process via a population constraint to ensure that the summation of the average buffer sizes across all servers equals the total number of vehicles.
Numerical results show that the proposed methodology is accurate across a wide range of operating scenarios. Journal: IIE Transactions Pages: 721-738 Issue: 10 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2011.566907 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.566907 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:10:p:721-738 Template-Type: ReDIF-Article 1.0 Author-Name: Yan Jin Author-X-Name-First: Yan Author-X-Name-Last: Jin Author-Name: Shuai Huang Author-X-Name-First: Shuai Author-X-Name-Last: Huang Author-Name: Guan Wang Author-X-Name-First: Guan Author-X-Name-Last: Wang Author-Name: Houtao Deng Author-X-Name-First: Houtao Author-X-Name-Last: Deng Title: Diagnostic monitoring of high-dimensional networked systems via a LASSO-BN formulation Abstract: Quality control of multivariate processes has been extensively studied in the past decades; however, fundamental challenges remain due to process complexity and decision-making requirements that demand not only sensitive fault detection but also identification of the truly out-of-control variables. In existing approaches, fault detection and diagnosis are treated as two separate tasks. Recent developments have revealed that selective monitoring of the potentially out-of-control variables, identified by a variable selection procedure combined with the process monitoring method, could lead to promising performance. Following this line, we propose diagnostic monitoring, which takes the selective monitoring idea a step further and directs the monitoring effort toward the potentially out-of-control variables. The identification of the truly out-of-control variables can be achieved by integrating the process monitoring formulation with process cascade knowledge represented by a Bayesian Network. Computationally efficient algorithms are developed for solving the optimization formulation, whose connection to the Least Absolute Shrinkage and Selection Operator (LASSO) problem is identified. Both theoretical analysis and extensive experiments on a simulated data set and real-world applications are conducted, demonstrating the superior performance of the proposed approach. Journal: IISE Transactions Pages: 874-884 Issue: 9 Volume: 49 Year: 2017 Month: 9 X-DOI: 10.1080/24725854.2017.1301692 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1301692 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:9:p:874-884 Template-Type: ReDIF-Article 1.0 Author-Name: Kangwon Seo Author-X-Name-First: Kangwon Author-X-Name-Last: Seo Author-Name: Rong Pan Author-X-Name-First: Rong Author-X-Name-Last: Pan Title: Data analysis of step-stress accelerated life tests with heterogeneous group effects Abstract: Step-Stress Accelerated Life Testing (SSALT) is a special type of experiment that tests a product's lifetime with time-varying stress levels. Typical testing protocols deployed in SSALTs cannot implement complete randomization of experiments; instead, they often result in grouped structures of experimental units and, thus, correlated observations. In this article, we propose a Generalized Linear Mixed Model (GLMM) approach to take into account the random group effect in SSALT. Failure times are assumed to be exponentially distributed under any stress level. Two parameter estimation methods, Adaptive Gaussian Quadrature (AGQ) and Integrated Nested Laplace Approximation (INLA), are introduced.
A simulation study is conducted to compare the proposed random effect model with the traditional model, which pools data groups together, and with the fixed effect model. We also compare AGQ and INLA with different priors for parameter estimation. Results show that the proposed model can validate the existence of group-to-group variation. Lastly, the GLMM is applied to a real data set, and the results show that disregarding experimental protocols in SSALT may result in large bias in the estimation of the effect of the stress variable. Journal: IISE Transactions Pages: 885-898 Issue: 9 Volume: 49 Year: 2017 Month: 9 X-DOI: 10.1080/24725854.2017.1312038 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1312038 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:9:p:885-898 Template-Type: ReDIF-Article 1.0 Author-Name: Chao Wang Author-X-Name-First: Chao Author-X-Name-Last: Wang Author-Name: Shiyu Zhou Author-X-Name-First: Shiyu Author-X-Name-Last: Zhou Title: Contamination source identification based on sequential Bayesian approach for water distribution network with stochastic demands Abstract: Efficient identification of the source of contamination in a water distribution network is crucial to the safe operation of the system. In this article, we propose a real-time sequential Bayesian approach to deal with this problem. Simulations are conducted to generate hydraulic information and to model the propagation of contamination in the network. Sensor alarms are recorded in multiple simulations to establish the observation probability distribution function. Then this information is used to compute the posterior probability of each possible source for the observed alarm pattern in real time. Finally, the contamination source is identified based on a ranking of the posterior probability. The key contribution of this work is that the probability distributions for all possible observations are organized into a concise hierarchical tree structure and the challenge of combinatorial explosion is avoided. Furthermore, a variation analysis of the posterior probability is conducted to attach a significance probability to the obtained identification result. The effectiveness of this method is verified by a case study with a realistic water distribution network. Journal: IISE Transactions Pages: 899-910 Issue: 9 Volume: 49 Year: 2017 Month: 9 X-DOI: 10.1080/24725854.2017.1315782 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1315782 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:9:p:899-910 Template-Type: ReDIF-Article 1.0 Author-Name: Jingyuan Shen Author-X-Name-First: Jingyuan Author-X-Name-Last: Shen Author-Name: Lirong Cui Author-X-Name-First: Lirong Author-X-Name-Last: Cui Title: Reliability performance for dynamic multi-state repairable systems with regimes Abstract: It is well known that many factors can influence the rate at which a machine degrades. In this article, we study a multi-state repairable system subject to continuous degradation and dynamically evolving regimes. The degradation rate of the system depends not only on the system states but also on the regimes driven by the varying external environments. Movements between the system states are governed by continuous-time Markov processes but with different transition rate matrices due to different regimes; meanwhile, the evolution of the regime is also governed by a Markov process.
Such a system can be modeled by a Markov regime-switching model. To derive system performance measures such as the first passage time distribution, a Markov renewal process is introduced via its semi-Markov kernel. We also consider the system in the context of periodic inspections and maintenance and give the limiting average availability. Finally, some numerical examples are given to demonstrate and validate the proposed framework. Journal: IISE Transactions Pages: 911-926 Issue: 9 Volume: 49 Year: 2017 Month: 9 X-DOI: 10.1080/24725854.2017.1318228 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1318228 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:9:p:911-926 Template-Type: ReDIF-Article 1.0 Author-Name: Haobin Li Author-X-Name-First: Haobin Author-X-Name-Last: Li Author-Name: Chenhao Zhou Author-X-Name-First: Chenhao Author-X-Name-Last: Zhou Author-Name: Byung Kwon Lee Author-X-Name-First: Byung Kwon Author-X-Name-Last: Lee Author-Name: Loo Hay Lee Author-X-Name-First: Loo Hay Author-X-Name-Last: Lee Author-Name: Ek Peng Chew Author-X-Name-First: Ek Peng Author-X-Name-Last: Chew Author-Name: Rick Siow Mong Goh Author-X-Name-First: Rick Siow Mong Author-X-Name-Last: Goh Title: Capacity planning for mega container terminals with multi-objective and multi-fidelity simulation optimization Abstract: Container terminals play a significant role as representative logistics facilities for contemporary trade by handling outbound, inbound, and transshipment containers to and from the sea (shipping liners) and the hinterland (consignees). Capacity planning is a fundamental decision process when constructing, expanding, or renovating a container terminal to meet demand, and the outcome of this planning is typically represented in terms of configurations of resources (e.g., the numbers of quay cranes, yard cranes, and vehicles) that enable the container flows to satisfy a high service level for vessels (e.g., berth-on-arrivals). This study presents a decision-making process that optimizes the capacity planning of large-scale container terminals. Advanced simulation-based optimization algorithms, such as Multi-Objective Multi-Fidelity Optimization with Ordinal Transformation and Optimal Sampling (MO-MO2TOS), Multi-Objective Optimal Computing Budget Allocation (MOCBA), and Multi-Objective Convergent Optimization via Most-Promising-Area Stochastic Search (MO-COMPASS), were employed to formulate and optimally solve the large-scale multi-objective problem with multi-fidelity simulation models. Simulation results are compared in terms of the capacities of different resource configurations to understand the effect of parameter settings on optimal capacity across the algorithms. Journal: IISE Transactions Pages: 849-862 Issue: 9 Volume: 49 Year: 2017 Month: 9 X-DOI: 10.1080/24725854.2017.1318229 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1318229 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:49:y:2017:i:9:p:849-862 Template-Type: ReDIF-Article 1.0 Author-Name: Wujun Si Author-X-Name-First: Wujun Author-X-Name-Last: Si Author-Name: Qingyu Yang Author-X-Name-First: Qingyu Author-X-Name-Last: Yang Author-Name: Xin Wu Author-X-Name-First: Xin Author-X-Name-Last: Wu Title: A distribution-based functional linear model for reliability analysis of advanced high-strength dual-phase steels by utilizing material microstructure images Abstract: The microstructure of a material is known to strongly influence its macroscopic properties, such as strength, hardness, toughness, and wear resistance, which in turn affect material service lifetime. In the reliability literature, most existing research conducts reliability analysis based on either lifetime data or degradation data. However, none of these studies takes into account the information contained in images of the material's microstructure when conducting reliability analysis. In this article, considering the strong effect of microstructure on a material's reliability, we conduct a reliability analysis of an advanced high-strength dual-phase steel by utilizing information about its microstructure. Specifically, the lifetime distribution of the steel, which is assumed to belong to a log-location-scale family, is predicted by utilizing the information contained in images of its microstructure. For the prediction, we propose a novel statistical model called the distribution-based functional linear model, in which the effect of the microstructure on both the location and scale parameters of the lifetime distribution is formulated. The proposed model generalizes the existing functional linear regression model. A maximum penalized likelihood method is developed to estimate the model parameters. A simulation study is implemented to illustrate the developed methods. Physical experiments on dual-phase steel are designed and conducted to demonstrate the proposed model. The results show that the proposed model more precisely predicts the lifetime of the steel than existing methods that ignore the information contained in microstructure images. Journal: IISE Transactions Pages: 863-873 Issue: 9 Volume: 49 Year: 2017 Month: 9 X-DOI: 10.1080/24725854.2017.1320599 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1320599 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:9:p:863-873 Template-Type: ReDIF-Article 1.0 Author-Name: Pierre Brice Author-X-Name-First: Pierre Author-X-Name-Last: Brice Author-Name: Wei Jiang Author-X-Name-First: Wei Author-X-Name-Last: Jiang Title: A context tree method for multistage fault detection and isolation with applications to commercial video broadcasting systems Abstract: Many systems have functionalities distributed among several autonomous components. Such complex systems can generally be described by finite state machines whose behavior is often non-linear and context-dependent. This paper proposes a generic system model based on context trees to predict system behavior for the purpose of fault detection and isolation in a multistage serial system. The approach starts with learning multistage model structures by capturing the expected statistical distribution of the input/output at different stages.
The estimated model is then employed to detect departures by comparing the contexts of the new system output with a set of optimal contexts for each stage using the Kullback–Leibler divergence measure. Problems can then be isolated to the stage that contributes the most to these differences. The methodology is demonstrated by an application in commercial video broadcasting systems. Journal: IIE Transactions Pages: 776-789 Issue: 9 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802323018 File-URL: http://hdl.handle.net/10.1080/07408170802323018 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:9:p:776-789 Template-Type: ReDIF-Article 1.0 Author-Name: Young Chun Author-X-Name-First: Young Author-X-Name-Last: Chun Title: Improving product quality by multiple inspections: Prior and posterior planning of serial inspection procedures Abstract: In many practical situations, a complex product is inspected more than once in a sequential manner to further improve its quality. In this paper, the problem of designing a multiple inspection plan via a Bayesian method is considered. As a prior distribution in the Bayesian model, a negative binomial distribution that has many desirable properties is used. Two types of design problem are considered. In prior planning of a serial inspection procedure, the number of inspections necessary to achieve a desired level of quality must be determined prior to starting the inspection process. In posterior planning, the inspection process can be terminated if the product meets a given level of quality. In both cases, the improved level of quality is measured in this paper either by the expected number of undetected errors remaining in the product or by the probability of no undetected errors in the product. Journal: IIE Transactions Pages: 831-842 Issue: 9 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802389324 File-URL: http://hdl.handle.net/10.1080/07408170802389324 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:9:p:831-842 Template-Type: ReDIF-Article 1.0 Author-Name: Qingyu Yang Author-X-Name-First: Qingyu Author-X-Name-Last: Yang Author-Name: Yong Chen Author-X-Name-First: Yong Author-X-Name-Last: Chen Title: Sensor system reliability modeling and analysis for fault diagnosis in multistage manufacturing processes Abstract: This paper investigates the reliability of coordinate sensor systems used for process fault diagnosis in multistage manufacturing processes. When considering catastrophic sensor failure, the reliability of a coordinate sensor system is defined based on its diagnosability performance. A mathematical tool called matroid theory is applied to study the reliability of the coordinate sensor system; properties of the minimal paths and minimal cuts are derived; efficient methods are developed to evaluate the exact system reliability for two special types of systems; and an efficient algorithm to evaluate the min–max lower bound of system reliability is provided that does not require all minimal paths to be known in advance. The proposed models and the developed methods are illustrated and applied in two case studies for fault diagnosis of a multistage panel assembly process.
Journal: IIE Transactions Pages: 819-830 Issue: 9 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170902789035 File-URL: http://hdl.handle.net/10.1080/07408170902789035 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:9:p:819-830 Template-Type: ReDIF-Article 1.0 Author-Name: Joel Fenner Author-X-Name-First: Joel Author-X-Name-Last: Fenner Author-Name: Young-Seon Jeong Author-X-Name-First: Young-Seon Author-X-Name-Last: Jeong Author-Name: Myong Jeong Author-X-Name-First: Myong Author-X-Name-Last: Jeong Author-Name: Jye-Chyi Lu Author-X-Name-First: Jye-Chyi Author-X-Name-Last: Lu Title: A Bayesian parallel site methodology with an application to uniformity modeling in semiconductor manufacturing Abstract: The increasing use of standard machines in manufacturing processes provides opportunities for sharing information from similar operations to improve the accuracy of process model characterization in support of root cause analysis and process monitoring activities. This article investigates how to integrate data from parallel sites that are alike in some ways but dissimilar in others. The parallel sites could be several machines that accomplish the same process step, several industrial locations that produce the same product, or even several different time windows for the same machine. The proposed Bayesian parallel site model flexibly allows for a compromise between pooling the data completely and treating sites as completely unrelated. One of the key features of the hierarchical model is the quasi-common parameters, which differ from site to site but have some commonality between sites. The similarities between the individual quasi-common parameters are modeled through common global hyperparameters, which determine the prior distribution for the quasi-common parameters. A case study on uniformity modeling (across different dies on a silicon wafer, across slots in a furnace, etc.) illustrates how the hierarchy of a Bayesian model can be used to incorporate correlation structure. The Bayesian approach provides the flexibility for handling many other parallel data source scenarios that might be encountered in practice. Journal: IIE Transactions Pages: 754-763 Issue: 9 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170902789043 File-URL: http://hdl.handle.net/10.1080/07408170902789043 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:9:p:754-763 Template-Type: ReDIF-Article 1.0 Author-Name: Yuan Ren Author-X-Name-First: Yuan Author-X-Name-Last: Ren Author-Name: Yu Ding Author-X-Name-First: Yu Author-X-Name-Last: Ding Title: Optimal sensor distribution in multi-station assembly processes for maximal variance detection capability Abstract: Recent advances in sensor technology now allow manufacturers to distribute multiple sensors in multi-station assembly processes. A distributed sensor system enables the continual monitoring of manufactured products and greatly facilitates the determination of the underlying process variation sources that cause product quality defects. This paper addresses the problem of optimally distributing sensors in a multi-station assembly process to achieve a maximal variance detection capability.
A sensitivity index is proposed for characterizing the detection ability of process variance components, and the optimization problem for sensor distribution is formulated for a multi-station assembly process. A data-mining-guided evolutionary method is devised to solve this non-linear optimization problem. The data-mining-guided method demonstrates a considerable improvement compared with existing alternatives. Guidance on practical issues, such as the interpretation of the rules generated by the data mining method and the number of sensors required, is also provided. Journal: IIE Transactions Pages: 804-818 Issue: 9 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170902789050 File-URL: http://hdl.handle.net/10.1080/07408170902789050 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:9:p:804-818 Template-Type: ReDIF-Article 1.0 Author-Name: Ming Jin Author-X-Name-First: Ming Author-X-Name-Last: Jin Author-Name: Fugee Tsung Author-X-Name-First: Fugee Author-X-Name-Last: Tsung Title: A chart allocation strategy for multistage processes Abstract: Statistical Process Control (SPC) in multistage manufacturing has attracted a great deal of attention recently. Applying conventional SPC methods in a multistage environment may not work well because these methods do not consider the inherent structure of the process, such as the interrelationship information between stages. In this paper, a strategy is proposed to properly allocate control charts in a multistage process in order to speed up the detection of out-of-control behavior by conventional SPC. Based on the proposed chart allocation strategy, inherent structural information is incorporated into decision making to achieve quicker detection of a potential fault. Two automotive assembly examples are used to demonstrate the applications of the chart allocation strategy. The impact of uncertainty in the structural parameters is also considered, which may allow practitioners to make more realistic decisions in multistage manufacturing processes. Journal: IIE Transactions Pages: 790-803 Issue: 9 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170902789068 File-URL: http://hdl.handle.net/10.1080/07408170902789068 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:9:p:790-803 Template-Type: ReDIF-Article 1.0 Author-Name: Jie Yu Author-X-Name-First: Jie Author-X-Name-Last: Yu Author-Name: S. Qin Author-X-Name-First: S. Author-X-Name-Last: Qin Title: Variance component analysis based fault diagnosis of multi-layer overlay lithography processes Abstract: The overlay lithography process is one of the most important steps in semiconductor manufacturing. This work attempts to solve a challenging problem in this technique, namely error source identification and diagnosis for multistage overlay processes. In this paper, a multistage state space model for the misalignment errors of the lithography process is developed and a general mixed linear input–output model is then formulated to incorporate both fixed and random effects. Furthermore, the minimum norm quadratic unbiased estimation strategy is used to estimate the mean and variance components of potential fault sources, and their asymptotic distributions are used to test the hypothesis concerning the statistical significance of each potential fault.
Based on the above procedures, the root cause of misalignment errors in a multi-layer overlay process can be detected and diagnosed with physical inference. A number of simulated examples are designed and tested to verify the validity of the presented approach in fault detection and diagnosis of multi-stepper overlay processes. Journal: IIE Transactions Pages: 764-775 Issue: 9 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170902789076 File-URL: http://hdl.handle.net/10.1080/07408170902789076 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:9:p:764-775 Template-Type: ReDIF-Article 1.0 Author-Name: Thavanrath Chaipradabgiat Author-X-Name-First: Thavanrath Author-X-Name-Last: Chaipradabgiat Author-Name: Jionghua Jin Author-X-Name-First: Jionghua Author-X-Name-Last: Jin Author-Name: Jianjun Shi Author-X-Name-First: Jianjun Author-X-Name-Last: Shi Title: Optimal fixture locator adjustment strategies for multi-station assembly processes Abstract: Fixture locating errors directly impact the dimensional quality of products in assembly processes. During a production run, fixture locators may deviate from their designed positions, possibly leading to defects and quality loss in the final assembled products. Mass production in multi-station assembly processes involves multiple fixtures/stations, which leads to extreme complexity in dimensional control through locator position adjustment. This research aims to develop a systematic methodology for fixture locator adjustment to minimize total production costs in multi-station assembly processes. In this paper, a linear model is derived to describe the complex propagation effect of fixture adjustments throughout all stations in an assembly process. Bayesian estimation with iterative algorithms is used to adaptively estimate the unknown parameters of locator deviation errors during production. An optimal fixture locator adjustment strategy is obtained through dynamic programming based on the given process and product design scheme. A case study is provided to illustrate the implementation procedures and the significance of the proposed methodology. Journal: IIE Transactions Pages: 843-852 Issue: 9 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170902806870 File-URL: http://hdl.handle.net/10.1080/07408170902806870 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:9:p:843-852 Template-Type: ReDIF-Article 1.0 Author-Name: Jianjun Shi Author-X-Name-First: Jianjun Author-X-Name-Last: Shi Author-Name: Shiyu Zhou Author-X-Name-First: Shiyu Author-X-Name-Last: Zhou Title: Quality control and improvement for multistage systems: A survey Abstract: A multistage system refers to a system consisting of multiple components, stations, or stages required to finish the final product or service. Multistage systems are very common in practice and include a variety of modern manufacturing and service systems. In most cases, the quality of the final product or service produced by a multistage system is determined by complex interactions among multiple stages—the quality characteristics at one stage are not only influenced by local variations at that stage, but also by variations propagated from upstream stages. Multistage systems present significant challenges, yet also opportunities for quality engineering research.
The purpose of this paper is to provide a brief survey of emerging methodologies for tackling various issues in quality control and improvement for multistage systems, including modeling, analysis, monitoring, diagnosis, control, inspection, and design optimization. Journal: IIE Transactions Pages: 744-753 Issue: 9 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170902966344 File-URL: http://hdl.handle.net/10.1080/07408170902966344 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:9:p:744-753 Template-Type: ReDIF-Article 1.0 Author-Name: Jye-Chyi Lu Author-X-Name-First: Jye-Chyi Author-X-Name-Last: Lu Author-Name: Jianjun Shi Author-X-Name-First: Jianjun Author-X-Name-Last: Shi Author-Name: Kwok-Leung Tsui Author-X-Name-First: Kwok-Leung Author-X-Name-Last: Tsui Author-Name: Shiyu Zhou Author-X-Name-First: Shiyu Author-X-Name-Last: Zhou Title: Foreword Journal: Pages: 743-743 Issue: 9 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170902966401 File-URL: http://hdl.handle.net/10.1080/07408170902966401 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:9:p:743-743 Template-Type: ReDIF-Article 1.0 Author-Name: Pedram Sahba Author-X-Name-First: Pedram Author-X-Name-Last: Sahba Author-Name: Barış Balcıoğlu Author-X-Name-First: Barış Author-X-Name-Last: Balcıoğlu Author-Name: Dragan Banjevic Author-X-Name-First: Dragan Author-X-Name-Last: Banjevic Title: Spare parts provisioning for multiple k-out-of-n:G systems Abstract: This article considers a repair shop that fixes failed components from different k-out-of-n:G systems. It is assumed that each system consists of the same type of component; to increase availability, a certain number of critical components are stocked as spare parts. A shared inventory that serves all systems and/or reserved inventories for each system are allowed; this is called a hybrid model. Additionally, two alternative dispatching rules for the repaired component are considered. The destination for a repaired component can be chosen either on a first-come first-served basis or by following a static priority rule. The analysis gives the steady-state system size distribution of the two alternative models at the repair shop. Numerical examples are presented that minimize the number of spare parts held while requiring the availability of each system to exceed a targeted value. It is shown that a hybrid priority policy is better than a hybrid first-come first-served policy, unless the availabilities of the systems are close. Journal: IIE Transactions Pages: 953-963 Issue: 9 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.695102 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.695102 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:9:p:953-963 Template-Type: ReDIF-Article 1.0 Author-Name: Haitao Liao Author-X-Name-First: Haitao Author-X-Name-Last: Liao Author-Name: Zhigang Tian Author-X-Name-First: Zhigang Author-X-Name-Last: Tian Title: A framework for predicting the remaining useful life of a single unit under time-varying operating conditions Abstract: Product reliability in the field is important for a wide variety of critical applications such as manufacturing, transportation, power generation, and health care. In particular, the pursuit of zero downtime emphasizes the need for Remaining Useful Life (RUL) prediction for a single unit.
The task is quite challenging when the unit is subject to time-varying operating conditions. This article provides a framework for predicting the RUL of a single unit under time-varying operating conditions by incorporating the results of both accelerated degradation testing and in situ condition monitoring. For illustration purposes, the underlying degradation process is modeled as a Brownian motion evolving in response to the operating conditions. The model is combined with in situ degradation measurements of the unit and the operating conditions to predict the unit's RUL through a Bayesian technique. When the operating conditions are piecewise constant, statistical approaches using a conjugate prior distribution and a Markov chain Monte Carlo method are developed for cases involving linear and non-linear degradation–stress relationships, respectively. The proposed framework is also extended to handle a more complex case where the projected future operating conditions are stochastic. Simulation experiments and a case study for ball bearings are used to verify the prediction capability and practicality of the framework. In the case study, a quantile regression technique is proposed to handle load-dependent failure threshold values in RUL prediction. Journal: IIE Transactions Pages: 964-980 Issue: 9 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.705451 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.705451 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:9:p:964-980 Template-Type: ReDIF-Article 1.0 Author-Name: Wenpo Huang Author-X-Name-First: Wenpo Author-X-Name-Last: Huang Author-Name: Lianjie Shu Author-X-Name-First: Lianjie Author-X-Name-Last: Shu Author-Name: Wei Jiang Author-X-Name-First: Wei Author-X-Name-Last: Jiang Author-Name: Kwok-Leung Tsui Author-X-Name-First: Kwok-Leung Author-X-Name-Last: Tsui Title: Evaluation of run-length distribution for CUSUM charts under gamma distributions Abstract: Numerical evaluation of run-length distributions of CUSUM charts under normal distributions has received considerable attention. However, accurate approximation of run-length distributions under non-normal or skewed distributions is challenging and has generally been overlooked. This article provides a fast and accurate algorithm based on the piecewise collocation method for computing the run-length distribution of CUSUM charts under skewed distributions such as gamma distributions. It is shown that the piecewise collocation method can provide a more robust approximation of the run-length distribution than other existing methods such as the Gaussian quadrature-based approach, especially when the process distribution is heavily skewed. Some computational aspects, including an alternative formulation based on matrix decomposition and a geometric approximation of the run-length distribution, are discussed. Design guidelines for such a CUSUM chart are also provided. Journal: IIE Transactions Pages: 981-994 Issue: 9 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.705455 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.705455 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:45:y:2013:i:9:p:981-994 Template-Type: ReDIF-Article 1.0 Author-Name: Nan Chen Author-X-Name-First: Nan Author-X-Name-Last: Chen Author-Name: Kwok Tsui Author-X-Name-First: Kwok Author-X-Name-Last: Tsui Title: Condition monitoring and remaining useful life prediction using degradation signals: revisited Abstract: Condition monitoring is an important prognostic tool to determine the current operation status of a system/device and to estimate the distribution of the remaining useful life. This article proposes a two-phase model to characterize the degradation process of rotational bearings. A Bayesian framework is used to integrate historical data with up-to-date in situ observations of new working units to improve the degradation modeling and prediction. A new approach is developed to compute the distribution of the remaining useful life based on the degradation signals, which is more accurate than methods reported in the literature. Finally, extensive numerical results demonstrate that the proposed framework is effective and efficient. Journal: IIE Transactions Pages: 939-952 Issue: 9 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.706376 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.706376 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:9:p:939-952 Template-Type: ReDIF-Article 1.0 Author-Name: Mohamed Sallak Author-X-Name-First: Mohamed Author-X-Name-Last: Sallak Author-Name: Walter Schön Author-X-Name-First: Walter Author-X-Name-Last: Schön Author-Name: Felipe Aguirre Author-X-Name-First: Felipe Author-X-Name-Last: Aguirre Title: Reliability assessment for multi-state systems under uncertainties based on the Dempster–Shafer theory Abstract: This article presents an original method for evaluating reliability indices for Multi-State Systems (MSSs) in the presence of aleatory and epistemic uncertainties. In many real-world MSSs, an insufficiency of data makes it difficult to estimate precise values for component state probabilities. The proposed approach applies the transferable belief model interpretation of the Dempster–Shafer theory to represent component state beliefs and to evaluate the MSS reliability indices. The example of an oil transmission system is used to demonstrate the proposed approach, and it is compared with the universal generating function method. The value of the Dempster–Shafer theory lies in its ability to use several combination rules in order to evaluate reliability indices for MSSs that depend on the reliability of the experts’ opinions as well as their independence. Journal: IIE Transactions Pages: 995-1007 Issue: 9 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.706378 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.706378 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:9:p:995-1007 Template-Type: ReDIF-Article 1.0 Author-Name: Yanfen Shang Author-X-Name-First: Yanfen Author-X-Name-Last: Shang Author-Name: Fugee Tsung Author-X-Name-First: Fugee Author-X-Name-Last: Tsung Author-Name: Changliang Zou Author-X-Name-First: Changliang Author-X-Name-Last: Zou Title: Statistical process control for multistage processes with binary outputs Abstract: Statistical Process Control (SPC), including monitoring and diagnosis, is very important and challenging for multistage processes with categorical data.
This article proposes a Binary State Space Model (BSSM) for modeling multistage processes with binomial (binary) data and develops corresponding monitoring and diagnosis schemes by utilizing a hierarchical likelihood approach and directional information based on the BSSM. The proposed schemes not only provide an SPC solution that incorporates both interstage and intrastage correlations, but they also resolve the confounding issue in monitoring and diagnosis due to the cumulative effects from stage to stage. Simulation results show that the proposed schemes consistently outperform the existing χ2 scheme in monitoring and diagnosis for binomial multistage processes. An aluminum electrolytic capacitor example from the manufacturing industry is used to illustrate the implementation of the proposed approach. Journal: IIE Transactions Pages: 1008-1023 Issue: 9 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.723839 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.723839 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:9:p:1008-1023 Template-Type: ReDIF-Article 1.0 Author-Name: Mahmood Shafiee Author-X-Name-First: Mahmood Author-X-Name-Last: Shafiee Author-Name: Maxim Finkelstein Author-X-Name-First: Maxim Author-X-Name-Last: Finkelstein Author-Name: Ming Zuo Author-X-Name-First: Ming Author-X-Name-Last: Zuo Title: Optimal burn-in and preventive maintenance warranty strategies with time-dependent maintenance costs Abstract: This article considers the determination of the optimal burn-in time, the degree of preventive maintenance, and the preventive maintenance interval (or, equivalently, the number of preventive maintenance actions) for warranted products with time-dependent maintenance costs. The expected cost function is derived by adopting an appropriate age-reduction model and the determination of the optimal joint solution is discussed. Finally, the impact of providing a burn-in/preventive maintenance program is evaluated through numerical examples. Journal: IIE Transactions Pages: 1024-1033 Issue: 9 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2013.768784 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.768784 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:9:p:1024-1033 Template-Type: ReDIF-Article 1.0 Author-Name: Yonit Barron Author-X-Name-First: Yonit Author-X-Name-Last: Barron Author-Name: Opher Baron Author-X-Name-First: Opher Author-X-Name-Last: Baron Title: QMCD approach for perishability models: The (S, s) control policy with lead time Abstract: We consider cost minimization for an (S, s) continuous-review perishable inventory system with random lead times and times to perishability, and a state-dependent Poisson demand. We derive the stationary distributions for the inventory level using the Queueing and Markov Chain Decomposition (QMCD) methodology. Applying QMCD, we develop an intuitive approach to characterizing the distribution of the residual time for the next event in different states of the system. We provide comprehensive analysis of two main models. The first model assumes a general random lifetime and an exponentially distributed lead time. The second model assumes an exponentially distributed lifetime and a general lead time. Each model is analyzed under both backordering and lost sales assumptions.
We consider a fixed cost for each order, a purchase cost, a holding cost, a cost for perished items, and a penalty cost in the case of shortage. Numerical examples are provided and show that variability of lead time is more costly than that of perishability time. Therefore, after reducing lead time and increasing perishability time, managers should focus on reducing variability of lead time. Journal: IISE Transactions Pages: 133-150 Issue: 2 Volume: 52 Year: 2020 Month: 2 X-DOI: 10.1080/24725854.2019.1614697 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1614697 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:2:p:133-150 Template-Type: ReDIF-Article 1.0 Author-Name: Miao Yu Author-X-Name-First: Miao Author-X-Name-Last: Yu Author-Name: Siqian Shen Author-X-Name-First: Siqian Author-X-Name-Last: Shen Title: An integrated car-and-ride sharing system for mobilizing heterogeneous travelers with application in underserved communities Abstract: The fast-growing carsharing and ride-hailing businesses are generating economic benefits and societal impacts in modern society, while both have limitations in serving diverse users, e.g., travelers in low-income, underserved communities. In this article, we consider two types of users: Type 1 drivers who rent shared cars and Type 2 passengers who need shared rides. We propose an integrated car-and-ride sharing (CRS) system to enable community-based shared transportation. To compute solutions, we propose a two-phase approach where in Phase I we determine initial car allocation and Type 1 drivers to accept; in Phase II we solve a stochastic mixed-integer program to match the accepted Type 1 drivers with Type 2 users, and optimize their pick-up routes under a random travel time. The goal is to minimize the total travel cost plus expected penalty cost of users’ waiting and system overtime. We demonstrate the performance of a CRS system in Washtenaw County, Michigan, by testing instances generated based on census data and different demand patterns. We also demonstrate the computational efficacy of our decomposition algorithm benchmarked with the traditional Benders decomposition for solving the stochastic model in Phase II. Our results show high demand fulfillment rates and effective matching and scheduling with low risk of waiting and overtime. Journal: IISE Transactions Pages: 151-165 Issue: 2 Volume: 52 Year: 2020 Month: 2 X-DOI: 10.1080/24725854.2019.1628377 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1628377 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:2:p:151-165 Template-Type: ReDIF-Article 1.0 Author-Name: Saloumeh Sadeghzadeh Author-X-Name-First: Saloumeh Author-X-Name-Last: Sadeghzadeh Author-Name: Ebru K. Bish Author-X-Name-First: Ebru K. Author-X-Name-Last: Bish Author-Name: Douglas R. Bish Author-X-Name-First: Douglas R. Author-X-Name-Last: Bish Title: Optimal data-driven policies for disease screening under noisy biomarker measurement Abstract: Biomarker testing, where a biochemical marker is used to predict the presence or absence of a disease in a subject, is an essential tool in public health screening. For many diseases, related biomarkers may have a wide range of concentration among subjects, particularly among the disease positive subjects. Furthermore, biomarker levels may fluctuate based on external or subject-specific factors.
These sources of variability can increase the likelihood of subject misclassification based on a biomarker test. We study the minimization of the subject misclassification cost for public health screening of non-infectious diseases, considering regret- and expectation-based objectives, and derive various key structural properties of optimal screening policies. Our case study of newborn screening for cystic fibrosis, based on real data from North Carolina, indicates that substantial reductions in classification errors can be achieved through the use of the proposed optimization-based models over current practices. Journal: IISE Transactions Pages: 166-180 Issue: 2 Volume: 52 Year: 2020 Month: 2 X-DOI: 10.1080/24725854.2019.1630867 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1630867 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:2:p:166-180 Template-Type: ReDIF-Article 1.0 Author-Name: Helmut A. Sedding Author-X-Name-First: Helmut A. Author-X-Name-Last: Sedding Title: Line side placement for shorter assembly line worker paths Abstract: Placing material containers at moving assembly lines is an intriguing problem because each container position influences worker paths. This optimization is relevant in practice as worker walking time accounts for about 10–15% of total work time. Nonetheless, we find few computational approaches in the literature. We address this gap and model walking time to containers, then optimize their placement. Our findings suggest that this reduces the walking time of intuitive solutions by an average of 20%, with considerable estimated savings. To investigate the subject, we formulate a quintessential optimization model for basic sequential container placement along the line side. However, even this core problem turns out to be strongly NP-complete. Nonetheless, it possesses several polynomially solvable cases that allow the construction of a lower bound on the walking time. Moreover, we discover exact and heuristic dominance conditions between partial placements. This facilitates an exact and a truncated branch-and-bound solution algorithm. In extensive tests, they consistently deliver superior performance compared to several mixed integer programming and metaheuristic approaches. To aid practitioners in quickly recognizing instances with high optimization potential even before performing a full optimization, we provide a criterion to estimate it with just a few measurements. Journal: IISE Transactions Pages: 181-198 Issue: 2 Volume: 52 Year: 2020 Month: 2 X-DOI: 10.1080/24725854.2018.1508929 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1508929 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:2:p:181-198 Template-Type: ReDIF-Article 1.0 Author-Name: Jianjun Xu Author-X-Name-First: Jianjun Author-X-Name-Last: Xu Author-Name: Shaoxiang Chen Author-X-Name-First: Shaoxiang Author-X-Name-Last: Chen Author-Name: Gangshu (George) Cai Author-X-Name-First: Gangshu (George) Author-X-Name-Last: Cai Title: Optimal policy for production systems with two flexible resources and two products Abstract: Manufacturing companies are facing increasing volatility in demand. As a result, there has been an emerging need for a flexible multi-period manufacturing system that uses multiple resources to produce multiple products with stochastic demands.
To manage such multi-product, multi-resource systems, manufacturers need to make two decisions simultaneously: setting a production quantity for each product and allocating the limited resources dynamically among the products. Unfortunately, although flexibility design and investment have been extensively studied, the literature has remained largely silent on how to make production and allocation decisions optimally from an operational perspective. This article attempts to fill this literature gap by investigating a multi-period system using multiple flexible resources to produce two products. We identify a structural property of the cost functions, namely ρ-differential monotonicity. Based on this property, the optimal production and allocation policy can be characterized by switching curves, which divide the state space into eight or nine sub-regions based on the segmentation of decision rules. We analyze different cases in terms of production costs and resource utilization ratios, and show how they affect the optimal production and allocation decisions. Finally, we compare three heuristic policies to the optimal one to display the advantage of resource flexibility and the effectiveness of a heuristic policy. Supplementary materials are available for this article; see the publisher’s online edition of IISE Transactions for datasets, additional tables, detailed proofs, etc. Journal: IISE Transactions Pages: 199-215 Issue: 2 Volume: 52 Year: 2020 Month: 2 X-DOI: 10.1080/24725854.2019.1602747 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1602747 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:2:p:199-215 Template-Type: ReDIF-Article 1.0 Author-Name: Mohsen Varmazyar Author-X-Name-First: Mohsen Author-X-Name-Last: Varmazyar Author-Name: Raha Akhavan-Tabatabaei Author-X-Name-First: Raha Author-X-Name-Last: Akhavan-Tabatabaei Author-Name: Nasser Salmasi Author-X-Name-First: Nasser Author-X-Name-Last: Salmasi Author-Name: Mohammad Modarres Author-X-Name-First: Mohammad Author-X-Name-Last: Modarres Title: Operating room scheduling problem under uncertainty: Application of continuous phase-type distributions Abstract: This article studies the stochastic Operating Room (OR) scheduling problem integrated with a Post-Anesthesia Care Unit (PACU); the overall problem is called the Operating Theater Room (OTR) problem. Due to the inherent uncertainty in surgery duration and the subsequent PACU time, the completion time of a patient should be modeled as the sum of a number of random variables. Some researchers have proposed the use of the normal distribution for its well-known additive property, but there are questions regarding its fitting adequacy to real OTR data, which tends to be asymmetric with a long tail. We propose to estimate the surgery and PACU times with the family of Continuous PHase-type (CPH) distributions, which provides both fitting adequacy and the additive property. We first compute the completion time of each patient analytically and compare the results with normal and lognormal distributions on a series of real OTR datasets. Then, we develop a search algorithm embedding a constructive heuristic and a meta-heuristic algorithm as a sequence generator engine for the patients, and apply the CPH distribution as a chance constraint to eventually find the schedule of each sequence in the OTR problem.
The best among several tested constructive heuristic algorithms is used as the neighborhood structure of the meta-heuristic algorithms. We finally construct a numerical example of the OTR problem to illustrate the application of the proposed algorithm. Journal: IISE Transactions Pages: 216-235 Issue: 2 Volume: 52 Year: 2020 Month: 2 X-DOI: 10.1080/24725854.2019.1628372 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1628372 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:2:p:216-235 Template-Type: ReDIF-Article 1.0 Author-Name: Michael D. Sherwin Author-X-Name-First: Michael D. Author-X-Name-Last: Sherwin Author-Name: Hugh R. Medal Author-X-Name-First: Hugh R. Author-X-Name-Last: Medal Author-Name: Cameron A. MacKenzie Author-X-Name-First: Cameron A. Author-X-Name-Last: MacKenzie Author-Name: Kennedy J. Brown Author-X-Name-First: Kennedy J. Author-X-Name-Last: Brown Title: Identifying and mitigating supply chain risks using fault tree optimization Abstract: Although supply chain risk management and supply chain reliability are topics that have been studied extensively, a gap exists for solutions that take a systems approach to quantitative risk mitigation decision making, especially in industries that present unique risks. In practice, supply chain risk mitigation decisions are made in silos and are reactive. In this article, we address these gaps by representing a supply chain as a system using a fault tree based on the bill of materials of the product being sourced. Viewing the supply chain as a system provides the basis to develop an approach that considers all suppliers within the supply chain as a portfolio of potential risks to be managed. Next, we propose a set of mathematical models to proactively and quantitatively identify and mitigate at-risk suppliers using enterprise-available data, with consideration for a firm’s budgetary constraints. Two approaches are investigated and demonstrated on actual problems experienced in industry. The examples presented focus on Low-Volume High-Value (LVHV) supply chains that are characterized by long lead times and a limited number of capable suppliers, which make them especially susceptible to disruption events that may cause delays in delivered products and subsequently increase the financial risk exposure of the firm. Although LVHV supply chains are used to demonstrate the methodology, the approach is applicable to other types of supply chains as well. Results are presented as a Pareto frontier and demonstrate the practical application of the methodology. Journal: IISE Transactions Pages: 236-254 Issue: 2 Volume: 52 Year: 2020 Month: 2 X-DOI: 10.1080/24725854.2019.1630865 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1630865 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:2:p:236-254 Template-Type: ReDIF-Article 1.0 Author-Name: Kaiyue Zheng Author-X-Name-First: Kaiyue Author-X-Name-Last: Zheng Author-Name: Laura A. Albert Author-X-Name-First: Laura A. Author-X-Name-Last: Albert Author-Name: James R. Luedtke Author-X-Name-First: James R. Author-X-Name-Last: Luedtke Author-Name: Eli Towle Author-X-Name-First: Eli Author-X-Name-Last: Towle Title: A budgeted maximum multiple coverage model for cybersecurity planning and management Abstract: This article studies how to identify strategies for mitigating cyber-infrastructure vulnerabilities.
We propose an optimization framework that prioritizes the investment in security mitigations to maximize the coverage of vulnerabilities. We use multiple coverage to reflect the implementation of a layered defense, and we consider the possibility of coverage failure to address the uncertainty in the effectiveness of some mitigations. Budgeted Maximum Multiple Coverage (BMMC) problems are formulated, and we demonstrate that the problems are submodular maximization problems subject to a knapsack constraint. Other variants of the problem are formulated given different possible requirements for selecting mitigations, including unit-cost cardinality constraints and group cardinality constraints. We design greedy approximation algorithms for identifying near-optimal solutions to the models. We demonstrate an optimal (1 − 1/e)-approximation ratio for BMMC and a variation of BMMC that considers the possibility of coverage failure, and a 1/2-approximation ratio for a variation of BMMC that uses a cardinality constraint and group cardinality constraints. The computational study suggests that our models yield robust solutions that use a layered defense and provide an effective mechanism to hedge against the risk of possible coverage failure. We also find that the approximation algorithms efficiently identify near-optimal solutions, and that a Benders branch-and-cut algorithm we propose can find provably optimal solutions to the vast majority of our test instances within an hour for the variations of the proposed models that consider coverage failures. Journal: IISE Transactions Pages: 1303-1317 Issue: 12 Volume: 51 Year: 2019 Month: 12 X-DOI: 10.1080/24725854.2019.1584832 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1584832 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:12:p:1303-1317 Template-Type: ReDIF-Article 1.0 Author-Name: Onur A. Kilic Author-X-Name-First: Onur A. Author-X-Name-Last: Kilic Author-Name: Wilco van den Heuvel Author-X-Name-First: Wilco Author-X-Name-Last: van den Heuvel Title: Economic lot sizing with remanufacturing: Structural properties and polynomial-time heuristics Abstract: We consider the economic lot sizing problem with remanufacturing, an NP-hard problem that appears in integrated manufacturing and remanufacturing systems. We identify the network flow structure of the problem and derive important properties of its optimal solution. These properties are used to decompose the problem and to show that the resulting subproblems can be solved in polynomial time. However, the number of subproblems to be solved is exponential, which we overcome by evaluating a limited set of subproblems such that the overall complexity is kept polynomial. This approach leads to a class of polynomial-time heuristics, where the time complexity depends on how the set of subproblems is chosen and how individual subproblems are solved. As a result, a trade-off can be made between computation time and solution quality. A numerical study, where we compare several heuristics within our class of heuristics, shows that our heuristics provide almost optimal solutions and significantly outperform earlier heuristics. Journal: IISE Transactions Pages: 1318-1331 Issue: 12 Volume: 51 Year: 2019 Month: 12 X-DOI: 10.1080/24725854.2019.1593555 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1593555 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:51:y:2019:i:12:p:1318-1331 Template-Type: ReDIF-Article 1.0 Author-Name: N. Orkun Baycik Author-X-Name-First: N. Orkun Author-X-Name-Last: Baycik Author-Name: Kelly M. Sullivan Author-X-Name-First: Kelly M. Author-X-Name-Last: Sullivan Title: Robust location of hidden interdictions on a shortest path network Abstract: We study a version of the shortest path network interdiction problem in which the follower seeks a path of minimum length on a network and the leader seeks to maximize the follower’s path length by interdicting arcs. We consider placement of interdictions that are not visible to the follower; however, we seek to locate interdictions in a manner that is robust against the possibility that some information about the interdictions becomes known to the follower. We formulate the problem as a bilevel program and derive properties of the inner problem, which enable solving the problem optimally via a Benders decomposition approach. We derive supervalid inequalities to improve the performance of the algorithm and test the performance of the algorithm on randomly generated, varying-sized grid networks and acyclic networks. We apply our approach to investigate the tradeoffs between conservative (i.e., the follower discovers all interdiction locations) and risky (i.e., the follower discovers no interdiction locations) assumptions regarding the leader’s information advantage. Journal: IISE Transactions Pages: 1332-1347 Issue: 12 Volume: 51 Year: 2019 Month: 12 X-DOI: 10.1080/24725854.2019.1597316 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1597316 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:12:p:1332-1347 Template-Type: ReDIF-Article 1.0 Author-Name: Maichel M. Aguayo Author-X-Name-First: Maichel M. Author-X-Name-Last: Aguayo Author-Name: Subhash C. Sarin Author-X-Name-First: Subhash C. Author-X-Name-Last: Sarin Author-Name: John S. Cundiff Author-X-Name-First: John S. Author-X-Name-Last: Cundiff Title: A branch-and-price approach for a biomass feedstock logistics supply chain design problem Abstract: This article addresses a biomass feedstock logistics supply chain design problem, which comprises a multi-period facility location problem, a special case of a single-item parallel-facilities capacitated lot-sizing problem, and a network flow problem. We formulate this problem as a mixed-integer program and propose a branch-and-price-based method for its solution that relies on effective implementation strategies. Our computational investigation reveals the efficacy of the proposed method in obtaining near-optimal solutions for large problem instances, compared with solving our model formulation directly with CPLEX. Several useful managerial insights resulting from our analysis are also presented. Journal: IISE Transactions Pages: 1348-1364 Issue: 12 Volume: 51 Year: 2019 Month: 12 X-DOI: 10.1080/24725854.2019.1589656 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1589656 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:51:y:2019:i:12:p:1348-1364 Template-Type: ReDIF-Article 1.0 Author-Name: Taher Ahmadi Author-X-Name-First: Taher Author-X-Name-Last: Ahmadi Author-Name: Zumbul Atan Author-X-Name-First: Zumbul Author-X-Name-Last: Atan Author-Name: Ton de Kok Author-X-Name-First: Ton Author-X-Name-Last: de Kok Author-Name: Ivo Adan Author-X-Name-First: Ivo Author-X-Name-Last: Adan Title: Optimal control policies for assemble-to-order systems with commitment lead time Abstract: In this article, we study a preorder strategy that requires customers to place orders ahead of their actual need. We characterize the preorder strategy by a commitment lead time, defined as the time that elapses between the moment an order is communicated by the customer and the moment the order must be delivered to the customer. We investigate the value of using this preorder strategy in managing assemble-to-order systems. For this purpose, we consider a manufacturer who operates an assemble-to-order system with two components and a single end product. The manufacturer uses continuous-review base-stock policies for replenishing component inventories. Customer demand occurs for the end product only, and unsatisfied customer demands are backordered. Since customers provide advance demand information by preordering, they receive a bonus. We refer to this bonus from the manufacturer’s perspective as a commitment cost. We determine the optimal component base-stock levels and the optimal length of the commitment lead time, which minimize the sum of long-run average component inventory holding, backordering, and commitment costs. We find that the optimal commitment lead time is either zero or equals the replenishment lead time of one of the components. When the optimal commitment lead time is zero, the preorder strategy is not beneficial and the optimal control strategy for both components is buy-to-stock. When the optimal commitment lead time equals the lead time of the component with the shorter lead time, the optimal control strategy for this component is buy-to-order and it is buy-to-stock for the other component. On the other hand, when the optimal commitment lead time equals the lead time of the component with the longer lead time, the optimal control strategy is the buy-to-order strategy for both components. We find the unit commitment cost thresholds that determine the conditions under which each of these three cases holds. Journal: IISE Transactions Pages: 1365-1382 Issue: 12 Volume: 51 Year: 2019 Month: 12 X-DOI: 10.1080/24725854.2019.1589658 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1589658 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:12:p:1365-1382 Template-Type: ReDIF-Article 1.0 Author-Name: Qing Yue Author-X-Name-First: Qing Author-X-Name-Last: Yue Author-Name: Zhi-Long Chen Author-X-Name-First: Zhi-Long Author-X-Name-Last: Chen Author-Name: Guohua Wan Author-X-Name-First: Guohua Author-X-Name-Last: Wan Title: Integrated pricing and production scheduling of multiple customized products with a common base product Abstract: Make-To-Order (MTO) is a popular production strategy commonly used by manufacturers selling customized products. Dynamic pricing is a popular tactical tool commonly used by sellers to match supply with demand when there is limited capacity and high demand uncertainty over time.
In this article, we consider joint pricing and production scheduling decisions faced by a manufacturer that uses an MTO strategy to sell a number of customized products made from a common base product. At the beginning of each period in a planning horizon, the manufacturer sets the price of the base product, which in turn determines the prices of the customized products. Given the prices, orders for the products arrive. In each period, together with the pricing decision, the manufacturer needs to make a production scheduling decision for processing accepted orders on a single production line. The manufacturer’s objective is to maximize the total revenue of the processed orders minus a scheduling penalty over the planning horizon. Three specific problems with different order acceptance rules and objective functions are studied. In the first problem, the manufacturer has to accept all the incoming orders and treats the total weighted completion time of the orders as a part of the objective function. In the second problem, the manufacturer has to accept all incoming orders, but is allowed to complete some orders after their due dates with tardiness penalties. In the third problem, the manufacturer may reject some incoming orders, but must complete all the accepted orders by their due dates. We show that all these problems are NP-hard, propose optimal pseudo-polynomial-time dynamic programming algorithms and fully polynomial-time approximation schemes for solving these problems, and conduct computational experiments to show the performance of the proposed algorithms. Furthermore, we derive several managerial insights through computational experiments. Journal: IISE Transactions Pages: 1383-1401 Issue: 12 Volume: 51 Year: 2019 Month: 12 X-DOI: 10.1080/24725854.2019.1589659 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1589659 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:12:p:1383-1401 Template-Type: ReDIF-Article 1.0 Author-Name: Meimei Zheng Author-X-Name-First: Meimei Author-X-Name-Last: Zheng Author-Name: Masha Shunko Author-X-Name-First: Masha Author-X-Name-Last: Shunko Author-Name: Nagesh Gavirneni Author-X-Name-First: Nagesh Author-X-Name-Last: Gavirneni Author-Name: Yan Shu Author-X-Name-First: Yan Author-X-Name-Last: Shu Author-Name: Kan Wu Author-X-Name-First: Kan Author-X-Name-Last: Wu Title: Reactive production with preprocessing restriction in supply chains with forecast updates Abstract: We study a two-mode production system that allows the supply chain to utilize a second (reactive) production opportunity after demand information is updated before the selling season. The reactive production quantity, however, is limited by the required preprocessing of raw materials, which must be decided before the demand information is updated. We analyze the problem for two cases: perfect and imperfect demand information updates. For the case of imperfect demand updates, whether to engage in preprocessing that provides an opportunity for future reactive production depends on the relative magnitude of the resolved demand uncertainty compared with the unresolved one. In the case of perfect demand updates, however, this decision is independent of the demand characteristics. To the best of our knowledge, this is the first study to provide guidance on when and how much to invest in preprocessing under general demand forecast updating.
We also present a coordinating Pareto-improving reservation contract and show how the manufacturer can extract more profit by setting a lower reservation fee. Counterintuitively, we find that, in the case of perfect demand updates, the manufacturer benefits more from a contract without a return policy than from one with a return policy. Numerical examples demonstrate that when the preprocessing restriction exists, the benefit of two-mode production can be as large as 104% compared with single-mode production. Journal: IISE Transactions Pages: 1402-1436 Issue: 12 Volume: 51 Year: 2019 Month: 12 X-DOI: 10.1080/24725854.2019.1600080 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1600080 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:12:p:1402-1436 Template-Type: ReDIF-Article 1.0 Author-Name: Lirong Cui Author-X-Name-First: Lirong Author-X-Name-Last: Cui Author-Name: Hongda Gao Author-X-Name-First: Hongda Author-X-Name-Last: Gao Author-Name: Yuchang Mo Author-X-Name-First: Yuchang Author-X-Name-Last: Mo Title: Reliability for k-out-of-n:F balanced systems with m sectors Abstract: Rapid technological developments have led to new reliability systems, such as unmanned aerial vehicles with balanced engine systems, whose reliability analysis cannot be covered by existing techniques. Research on this topic is therefore important. In this article, a k-out-of-n:F balanced system with m sectors is introduced, motivated by real applications, and four related system reliability models are developed. Several methods, such as the order statistics technique, Markov process imbedding technique, recursive method, and convolution method, are used on the various models for different situations. The Markov process imbedding technique is considered extensively in order to obtain the smallest number of system states. The labeling of system states is presented, and the state labeling formula is given for exponential distributions and the presented models, which is convenient in practice. To the best of our knowledge, this is the first report in the literature on such a balanced system. In addition to system reliability formulas, the moments of the system lifetime are given. Finally, some numerical examples are presented to illustrate the results obtained in this article. Journal: IISE Transactions Pages: 381-393 Issue: 5 Volume: 50 Year: 2018 Month: 5 X-DOI: 10.1080/24725854.2017.1397856 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1397856 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:5:p:381-393 Template-Type: ReDIF-Article 1.0 Author-Name: Longwei Cheng Author-X-Name-First: Longwei Author-X-Name-Last: Cheng Author-Name: Andi Wang Author-X-Name-First: Andi Author-X-Name-Last: Wang Author-Name: Fugee Tsung Author-X-Name-First: Fugee Author-X-Name-Last: Tsung Title: A prediction and compensation scheme for in-plane shape deviation of additive manufacturing with information on process parameters Abstract: Shape fidelity is a critical issue that hinders the wider application of Additive Manufacturing (AM) technologies. In many AM processes, the shape of a product is usually different from its input design, and the deviation usually depends on certain process parameters.
In this article, we aim to improve the shape fidelity of AM products through compensation, using information on these parameters. To achieve this, a two-step hierarchical scheme is proposed to predict the in-plane deviation of the product shape, which relates to the process parameters and the two-dimensional input shape. Based on this prediction procedure, a shape compensation strategy is developed that greatly improves the dimensional accuracy of products. Experimental studies of fused deposition modeling processes validate the effectiveness of our proposed scheme in terms of both predicting the shape deviation and improving the shape accuracy. Journal: IISE Transactions Pages: 394-406 Issue: 5 Volume: 50 Year: 2018 Month: 5 X-DOI: 10.1080/24725854.2017.1402224 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1402224 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:5:p:394-406 Template-Type: ReDIF-Article 1.0 Author-Name: Guanghan Bai Author-X-Name-First: Guanghan Author-X-Name-Last: Bai Author-Name: Zhigang Tian Author-X-Name-First: Zhigang Author-X-Name-Last: Tian Author-Name: Ming J. Zuo Author-X-Name-First: Ming J. Author-X-Name-Last: Zuo Title: Reliability evaluation of multistate networks: An improved algorithm using state-space decomposition and experimental comparison Abstract: This article introduces an improved algorithm using State-Space Decomposition for exact reliability evaluation of multistate networks given all minimal path vectors (d-MPs for short). We make two main contributions to the area. First, during each recursive call for the decomposition process, we find that the set of d-MPs can be decomposed recursively, and only those qualified d-MPs from a previous set of unspecified states are needed. Second, an improved heuristic rule is proposed to choose an appropriate d-MP to decompose each set of unspecified states. Then, efficiency investigations of the proposed algorithm are conducted using hypothetical networks by changing one of the following network parameters while fixing the others, namely, the number of components, the number of d-MPs, and the number of states for each component. Efficiency investigations on networks with known structures are also conducted. Based on the computational experiments, it is found that (i) the proposed algorithm is more efficient than existing algorithms using the state-space decomposition method; (ii) the proposed algorithm is more efficient than existing algorithms using the Recursive Sum of Disjoint Products method when the number of d-MPs is not too small; and (iii) the indirect approach incorporating the proposed algorithm is more efficient than existing direct approaches. Guidelines for choosing the appropriate algorithm are provided. In addition, an algorithm is developed for network reliability evaluation given all minimal cut vectors (d-MCs for short). Journal: IISE Transactions Pages: 407-418 Issue: 5 Volume: 50 Year: 2018 Month: 5 X-DOI: 10.1080/24725854.2017.1410598 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1410598 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:50:y:2018:i:5:p:407-418 Template-Type: ReDIF-Article 1.0 Author-Name: Kristin McCullough Author-X-Name-First: Kristin Author-X-Name-Last: McCullough Author-Name: Nader Ebrahimi Author-X-Name-First: Nader Author-X-Name-Last: Ebrahimi Title: Approximate Bayesian computation for censored data and its application to reliability assessment Abstract: Approximate Bayesian Computation (ABC) refers to a family of algorithms that perform Bayesian inference under intractable likelihoods. It is widely used to perform statistical inference on complex models. In this article, we propose using ABC for reliability analysis, and we extend the scope of ABC to encompass problems that involve censored data. We are motivated by the need to assess the reliability of nanoscale components in devices. This type of analysis is difficult to perform due to the complex structure of nanodevices and limitations imposed by fabrication processes. A consequence is that failure data often include a high proportion of censored observations. We demonstrate that our proposed ABC algorithms perform well and produce accurate parameter estimates in this setting. Journal: IISE Transactions Pages: 419-430 Issue: 5 Volume: 50 Year: 2018 Month: 5 X-DOI: 10.1080/24725854.2017.1412091 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1412091 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:5:p:419-430 Template-Type: ReDIF-Article 1.0 Author-Name: Li Zeng Author-X-Name-First: Li Author-X-Name-Last: Zeng Author-Name: Xinwei Deng Author-X-Name-First: Xinwei Author-X-Name-Last: Deng Author-Name: Jian Yang Author-X-Name-First: Jian Author-X-Name-Last: Yang Title: Constrained Gaussian process with application in tissue-engineering scaffold biodegradation Abstract: In many biomanufacturing areas, such as tissue-engineering scaffold fabrication, the biodegradation performance of products is key to producing products with desirable properties. The prediction of biodegradation often encounters the challenge of how to incorporate expert knowledge. This article proposes a Constrained Gaussian Process (CGP) method for predictive modeling with application to scaffold biodegradation. It provides a unified framework for using appropriate constraints to accommodate various types of expert knowledge in predictive modeling, including censoring, monotonicity, and bounds requirements. Efficient Bayesian sampling procedures for prediction are also developed. The performance of the proposed method is demonstrated in a case study on a novel scaffold fabrication process. Compared with the unconstrained GP and artificial neural networks, the proposed method can provide more accurate and meaningful prediction. A simulation study is also conducted to further reveal the properties of the CGP. Journal: IISE Transactions Pages: 431-447 Issue: 5 Volume: 50 Year: 2018 Month: 5 X-DOI: 10.1080/24725854.2017.1414973 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1414973 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:5:p:431-447 Template-Type: ReDIF-Article 1.0 Author-Name: Young-Seon Jeong Author-X-Name-First: Young-Seon Author-X-Name-Last: Jeong Author-Name: Myong K.
Author-X-Name-Last: Jeong Author-Name: Jye-Chyi Lu Author-X-Name-First: Jye-Chyi Author-X-Name-Last: Lu Author-Name: Ming Yuan Author-X-Name-First: Ming Author-X-Name-Last: Yuan Author-Name: Jionghua (Judy) Jin Author-X-Name-First: Jionghua (Judy) Author-X-Name-Last: Jin Title: Statistical process control procedures for functional data with systematic local variations Abstract: Many engineering studies for manufacturing processes, such as for quality monitoring and fault detection, involve complicated functional data with sharp changes. That is, the data curves in these studies exhibit large local variations. This article proposes a wavelet-based local random-effect model that characterizes the variations within multiple curves in certain local regions. An integrated mean and variance thresholding procedure is developed to address the large number of parameters in both the mean and variance models, keeping the model simple while fitting the data curves well. Guidelines are provided to select the regularization parameters in the penalized wavelet-likelihood method used for parameter estimation. The proposed mean and variance thresholding procedure is used to develop new statistical procedures for process monitoring with complicated functional data. A real-life case study shows that the proposed procedure is much more effective in detecting local variations than existing techniques extended from methods based on a single data curve. Journal: IISE Transactions Pages: 448-462 Issue: 5 Volume: 50 Year: 2018 Month: 5 X-DOI: 10.1080/24725854.2017.1419315 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1419315 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:5:p:448-462 Template-Type: ReDIF-Article 1.0 Author-Name: Gudrun P. Kiesmüller Author-X-Name-First: Gudrun P. Author-X-Name-Last: Kiesmüller Author-Name: Julia Zimmermann Author-X-Name-First: Julia Author-X-Name-Last: Zimmermann Title: The influence of spare parts provisioning on buffer size in a production system Abstract: We consider a discrete-part production line consisting of two machines with random processing and failure times. One option that can mitigate the effect of these uncertainties is the installation of a buffer between the two machines to avoid starving and blocking of the machines. In this article, we additionally allow spare parts to be kept in stock to enable fast repair and reduce machine downtime. We introduce a new model to support the optimization of the buffer size and the spare parts inventory level simultaneously. The model is based on a continuous-time Markov chain and our aim is to minimize the average costs, which are composed of costs for work in process and the stock-keeping of spare parts, subject to a minimum target throughput. Our numerical analysis reveals that the availability of spare parts can increase throughput substantially or reduce the required buffer size for a given target throughput. Using our approach, as opposed to a sequential one, we can quantify the cost savings obtained by jointly optimizing the buffer size and the inventory level; these savings can be large. Journal: IISE Transactions Pages: 367-380 Issue: 5 Volume: 50 Year: 2018 Month: 5 X-DOI: 10.1080/24725854.2018.1426134 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1426134 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:50:y:2018:i:5:p:367-380 Template-Type: ReDIF-Article 1.0 Author-Name: Kirby Clark Author-X-Name-First: Kirby Author-X-Name-Last: Clark Author-Name: Russell Meller Author-X-Name-First: Russell Author-X-Name-Last: Meller Title: Incorporating vertical travel into non-traditional cross aisles for unit-load warehouse designs Abstract: This article proposes modifications to the travel-time models for non-traditional warehouse aisle layouts, Flying-V and Fishbone, by incorporating a vertical travel dimension. The resulting non-linear optimization models incorporate Chebychev travel within the picking aisles. The obtained shape of the aisle and the percent improvement over a traditional warehouse are compared with the results found in previous research that ignores vertical travel. It is shown that the percent improvement diminishes as the height of the rack increases, with Fishbone maintaining a higher percent improvement than Flying-V. It is also shown that while the shape of Flying-V can be considerably altered by considering vertical travel, the Fishbone layout often maintains its recommended shape regardless of the height of the rack. The article concludes with recommendations for effective implementation of these two designs. Journal: IIE Transactions Pages: 1322-1331 Issue: 12 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.724188 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.724188 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:12:p:1322-1331 Template-Type: ReDIF-Article 1.0 Author-Name: Jongsung Lee Author-X-Name-First: Jongsung Author-X-Name-Last: Lee Author-Name: Byung-in Kim Author-X-Name-First: Byung-in Author-X-Name-Last: Kim Author-Name: Andrew Johnson Author-X-Name-First: Andrew Author-X-Name-Last: Johnson Title: A two-dimensional bin packing problem with size changeable items for the production of wind turbine flanges in the open die forging industry Abstract: Efficient cutting design is essential to reduce the costs of production in the open die forging industry. This article discusses a slab cutting design problem that occurs when parallelepiped items are cut from raw material steel slabs with varying widths and lengths to meet a volume requirement. The problem is modeled as a two-dimensional cutting stock problem or bin packing problem with size-changeable items. Cut loss and guillotine cut constraints are included. A knapsack-based heuristic algorithm is proposed and it is tested by a real-world manufacturer that cuts steel for wind turbine flanges. The firm generates an annual cost reduction of approximately US$2,000,000. Journal: IIE Transactions Pages: 1332-1344 Issue: 12 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.725506 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.725506 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:12:p:1332-1344 Template-Type: ReDIF-Article 1.0 Author-Name: Yuan Yuan Author-X-Name-First: Yuan Author-X-Name-Last: Yuan Author-Name: Nan Chen Author-X-Name-First: Nan Author-X-Name-Last: Chen Author-Name: Shiyu Zhou Author-X-Name-First: Shiyu Author-X-Name-Last: Zhou Title: Adaptive B-spline knot selection using multi-resolution basis set Abstract: B-splines are commonly used to fit complicated functions in Computer Aided Design and signal processing because they are simple yet flexible.
However, how to place the knots appropriately in B-spline curve fitting remains a difficult problem. This article discusses a two-stage knot placement method that adapts knot locations to the curvature structure of the unknown function. In the first stage, a subset of basis functions is selected from the pre-specified multi-resolution basis set using a statistical variable selection method: Lasso. In the second stage, a vector space that is spanned by the selected basis functions is constructed and a concise knot vector is identified that is sufficient to characterize the vector space to fit the unknown function. The effectiveness of the proposed method is demonstrated using numerical studies on multiple representative functions. Journal: IIE Transactions Pages: 1263-1277 Issue: 12 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.726758 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.726758 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:12:p:1263-1277 Template-Type: ReDIF-Article 1.0 Author-Name: Soondo Hong Author-X-Name-First: Soondo Author-X-Name-Last: Hong Author-Name: Andrew Johnson Author-X-Name-First: Andrew Author-X-Name-Last: Johnson Author-Name: Brett Peters Author-X-Name-First: Brett Author-X-Name-Last: Peters Title: A note on picker blocking models in a parallel-aisle order picking system Abstract: This note develops analytical picker blocking models to simply and accurately assess picker blocking in parallel-aisle order picking systems when multiple picks occur at a pick point. The Markov chain-based models characterize the two bounding walking speeds for modeling picker movement: unit walk time and instantaneous walk time. The unit walk time model has a state-space transition matrix that is reduced by a factor of 16 for both narrow-aisle and wide-aisle systems. Additionally, the model improves upon the existing literature by providing a closed-form expression for the narrow-aisle system with instantaneous walk time. Experimental results are provided to demonstrate how picker blocking is influenced by pick density in a variety of scenarios under varying assumptions regarding the maximum number of picks at a pick point. These results broaden those previously presented in the literature, as well as demonstrate the improved efficiency of the proposed model. Journal: IIE Transactions Pages: 1345-1355 Issue: 12 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.745204 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.745204 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:12:p:1345-1355 Template-Type: ReDIF-Article 1.0 Author-Name: Liang Lu Author-X-Name-First: Liang Author-X-Name-Last: Lu Author-Name: Zhixin Liu Author-X-Name-First: Zhixin Author-X-Name-Last: Liu Author-Name: Xiangtong Qi Author-X-Name-First: Xiangtong Author-X-Name-Last: Qi Title: Coordinated price quotation and production scheduling for uncertain order inquiries Abstract: This article studies the joint price quotation and production scheduling problem for a manufacturer. The novelty of the work includes modeling the uncertainty in the customer’s order placement, as well as considering the detailed sequencing decision for multiple distinct orders, in a unified framework.
We derive closed-form expressions for the expected production cost, measured by the total weighted completion time, under a given set of price quotations and then design dynamic programming algorithms to find the optimal price quotations. The proposed model and algorithms are validated by computational experiments. Important managerial insights are provided. First, the manufacturer only needs to evaluate a few discrete prices to quote, rather than consider a full, continuous spectrum of prices. Second, knowing accurate information on order placement probabilities is important for the manufacturer to make profitable pricing and scheduling decisions. Third, the integrated decision on price quotation and production scheduling has a significant advantage in profit maximization compared with various alternative decision approaches. The quotation model also demonstrates efficiency in quoting dynamically arriving inquiries. [Supplementary materials are available for this article. Go to the publisher’s online edition of IIE Transactions for the proof of Theorem 3.] Journal: IIE Transactions Pages: 1293-1308 Issue: 12 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.748993 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.748993 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:12:p:1293-1308 Template-Type: ReDIF-Article 1.0 Author-Name: Ali Yassine Author-X-Name-First: Ali Author-X-Name-Last: Yassine Author-Name: Bacel Maddah Author-X-Name-First: Bacel Author-X-Name-Last: Maddah Author-Name: Nabil Nehme Author-X-Name-First: Nabil Author-X-Name-Last: Nehme Title: Optimal information exchange policies in integrated product development Abstract: This article considers information exchange in an Integrated Product Development (IPD) environment. First, a dynamic programming model is formulated that is able to capture upstream partial information flow in a two-activity IPD process. A simple threshold policy is derived that aids the downstream activity in deciding whether to consider or ignore this upstream information as a function of information quality and its associated setup and rework penalties. Then, this formulation is expanded to model analytically, for the first time, information flow in a three-activity IPD process. In this case, the focus is on aiding the midstream activity in deciding whether to consider or ignore partial upstream information, taking into consideration downstream concerns. Because it is difficult to derive threshold policies in this case, the dynamic program has to be solved directly and then an extensive Monte Carlo simulation study is performed to analyze the behavior of the optimal policy. The simulation results suggest several important insights regarding the timing and frequency of considering partial information in an IPD environment. Journal: IIE Transactions Pages: 1249-1262 Issue: 12 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.762487 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.762487 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:uiiexx:v:45:y:2013:i:12:p:1249-1262 Template-Type: ReDIF-Article 1.0 Author-Name: Shih-Fen Cheng Author-X-Name-First: Shih-Fen Author-X-Name-Last: Cheng Author-Name: Blake Nicholson Author-X-Name-First: Blake Author-X-Name-Last: Nicholson Author-Name: Marina Epelman Author-X-Name-First: Marina Author-X-Name-Last: Epelman Author-Name: Daniel Reaume Author-X-Name-First: Daniel Author-X-Name-Last: Reaume Author-Name: Robert Smith Author-X-Name-First: Robert Author-X-Name-Last: Smith Title: A dynamic programming approach to achieving an optimal end-state along a serial production line Abstract: In modern production systems, it is critical to perform maintenance, calibration, installation, and upgrade tasks during planned downtime. Otherwise, the systems become unreliable and new product introductions are delayed. For reasons of safety, testing, and access, task performance often requires the vicinity of impacted equipment to be left in a specific “end state” when production halts. Therefore, planning the shutdown of a production system to balance production goals against enabling non-production tasks yields a challenging optimization problem. This article proposes a mathematical formulation of this problem and a dynamic programming approach that efficiently finds optimal shutdown policies for deterministic serial production lines. An event-triggered re-optimization procedure that is based on the proposed deterministic dynamic programming approach is also introduced for handling uncertainties in the production line for the stochastic case. It is demonstrated numerically that, in cases with random breakdowns and repairs, the re-optimization procedure is efficient and obtains results that are optimal or nearly optimal. Journal: IIE Transactions Pages: 1278-1292 Issue: 12 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2013.770183 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.770183 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:12:p:1278-1292 Template-Type: ReDIF-Article 1.0 Author-Name: Subir Rao Author-X-Name-First: Subir Author-X-Name-Last: Rao Author-Name: Gajendra Adil Author-X-Name-First: Gajendra Author-X-Name-Last: Adil Title: Optimal class boundaries, number of aisles, and pick list size for low-level order picking systems Abstract: This article considers a two-block warehouse with low-level aisles having dedicated pickers who follow a return routing policy within the aisle. Travel distance models are formulated assuming a given product density distribution in aisles for multi-item picks. An analytical recursive procedure is developed to find optimal n-class partitions of each warehouse aisle. A range of the number of classes and the number of picks for which class-based storage is a more attractive option than the random storage policy is determined through a computational study. The article also presents an iterative hierarchical framework to simultaneously obtain pick list size, number of aisles, and class storage boundaries. A sensitivity analysis of optimal pick travel distance against important warehouse design parameters is presented. Journal: IIE Transactions Pages: 1309-1321 Issue: 12 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2013.772691 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.772691 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:45:y:2013:i:12:p:1309-1321 Template-Type: ReDIF-Article 1.0 Author-Name: The Editors Title: Editorial Board EOV Journal: IIE Transactions Pages: ebi-ebiv Issue: 12 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2013.826062 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.826062 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:12:p:ebi-ebiv Template-Type: ReDIF-Article 1.0 Author-Name: Qiaofeng Li Author-X-Name-First: Qiaofeng Author-X-Name-Last: Li Author-Name: Kanglin Liu Author-X-Name-First: Kanglin Author-X-Name-Last: Liu Author-Name: Zhi-Hai Zhang Author-X-Name-First: Zhi-Hai Author-X-Name-Last: Zhang Title: Robust design of a strategic network planning for photovoltaic module recycling considering reclaimed resource price uncertainty Abstract: PhotoVoltaic (PV) power is one of the rapidly growing solar energy technologies worldwide. The installed PV power capacity has increased considerably over the past decades. Consequently, the End-of-Life management of used PV modules is becoming increasingly urgent. This article investigates strategic network planning for recycling PV modules, considering reclaimed resource price uncertainty. Based on a real case setting, the problem is first formulated as a risk-neutral model and then extended to a risk-averse model that considers the risk preferences of investors. Moreover, robust reformulations of the risk-neutral/risk-averse models are proposed to hedge against ambiguity in the probability distribution of the uncertain resource price. An outer-approximation-based solution approach is proposed to solve the robust reformulations. Numerical experiments with test data generated from the real case are carried out to demonstrate the benefits of the resulting robust model, and managerial insights are explored. Lastly, conclusions are drawn and future research directions are outlined. Journal: IISE Transactions Pages: 691-708 Issue: 7 Volume: 51 Year: 2019 Month: 7 X-DOI: 10.1080/24725854.2018.1501169 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1501169 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:7:p:691-708 Template-Type: ReDIF-Article 1.0 Author-Name: Feifan Wang Author-X-Name-First: Feifan Author-X-Name-Last: Wang Author-Name: Feng Ju Author-X-Name-First: Feng Author-X-Name-Last: Ju Author-Name: Ningxuan Kang Author-X-Name-First: Ningxuan Author-X-Name-Last: Kang Title: Transient analysis and real-time control of geometric serial lines with residence time constraints Abstract: Residence time constraints are commonly seen in practical production systems, where the time that intermediate products spend in a buffer is limited within a certain range. Parts have to be scrapped or reworked if their maximum allowable residence time is exceeded, while they cannot be released downstream before the minimum required residence time is reached. Such dynamics impose additional complexity onto the production system analysis. In order to optimize the production performance in a timely manner, the transient behavior of the production system and a real-time control strategy need to be investigated. In this article, we develop a Markov chain model to analyze the transient behavior of a two-machine geometric serial line with constraints on both the maximum allowable residence time and the minimum required residence time.
Compared with simulation, the proposed analytical method is shown to estimate the system’s transient performance with high accuracy. Structural properties are investigated based on the model to provide insights into the effects of residence time constraints and buffer capacity on system performance. An iterative learning algorithm is proposed to perform real-time control, which improves the system performance by balancing the trade-off between the production rate and scrap rate. Specifically, a control policy derived from Markov Decision Processes is implemented as an initial control policy, and the Bayesian method is then applied to the run-time data to improve the control policy. Journal: IISE Transactions Pages: 709-728 Issue: 7 Volume: 51 Year: 2019 Month: 7 X-DOI: 10.1080/24725854.2018.1511937 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1511937 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:7:p:709-728 Template-Type: ReDIF-Article 1.0 Author-Name: Jun-Qiang Wang Author-X-Name-First: Jun-Qiang Author-X-Name-Last: Wang Author-Name: Fei-Yi Yan Author-X-Name-First: Fei-Yi Author-X-Name-Last: Yan Author-Name: Peng-Hao Cui Author-X-Name-First: Peng-Hao Author-X-Name-Last: Cui Author-Name: Chao-Bo Yan Author-X-Name-First: Chao-Bo Author-X-Name-Last: Yan Title: Bernoulli serial lines with batching machines: Performance analysis and system-theoretic properties Abstract: For Bernoulli serial lines with batching machines and finite buffers, this study develops analytical methods for performance analysis and establishes system-theoretic properties. Batching machines can process several jobs simultaneously as a batch, as long as the number of jobs in a batch does not exceed the machine’s batch capacity. The batch capacities of all machines are not necessarily equal. Batching machines bring about a new characteristic of state transitions depending on batch capacities and machine states. For two-machine lines, the joint impacts of batching machines on state transitions are analyzed, and the state transition rules are revealed theoretically. Based on the state transition rules, this study proposes a hierarchical state transition diagram, proves the ergodicity condition, and derives analytical formulas to evaluate the performance measures. Then, for multi-machine lines, this study develops a computationally efficient aggregation method with high accuracy. Furthermore, the impacts of system parameters, including machine efficiency pattern, batch capacity pattern, batch capacity mismatch, and system size, on the accuracy are qualitatively analyzed. Finally, this study investigates the reversibility and monotonicity properties. These analytical methods and results help production managers to scientifically evaluate, accurately predict, and continuously improve Bernoulli serial lines with batching machines. Journal: IISE Transactions Pages: 729-743 Issue: 7 Volume: 51 Year: 2019 Month: 7 X-DOI: 10.1080/24725854.2018.1519745 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1519745 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:51:y:2019:i:7:p:729-743 Template-Type: ReDIF-Article 1.0 Author-Name: Sudipta Chowdhury Author-X-Name-First: Sudipta Author-X-Name-Last: Chowdhury Author-Name: Omid Shahvari Author-X-Name-First: Omid Author-X-Name-Last: Shahvari Author-Name: Mohammad Marufuzzaman Author-X-Name-First: Mohammad Author-X-Name-Last: Marufuzzaman Author-Name: Jack Francis Author-X-Name-First: Jack Author-X-Name-Last: Francis Author-Name: Linkan Bian Author-X-Name-First: Linkan Author-X-Name-Last: Bian Title: Sustainable design of on-demand supply chain network for additive manufacturing Abstract: This study proposes a novel optimization framework that simultaneously considers interdependence of flow networks, resource restrictions, and process-and-system level costs under a unified decision framework for the design and management of an integrated Additive Manufacturing (AM) supply chain network. A two-stage stochastic programming model is proposed in which the facility location and capacity selection decisions are made at the first stage, prior to realizing any customer demand information. Once the demand information is revealed, a number of second-stage decisions, such as optimal layer thickness for AM products, production, post-processing, procurement, storage, and transportation decisions, are made. To solve the proposed model for realistic-size network problems, a hybrid decomposition algorithm, combining the Sample Average Approximation algorithm with an Adaptive Large Neighborhood Search algorithm, is proposed. The performance of the proposed algorithm is validated by developing a case study using data from Alabama and Mississippi. Based on a set of numerical experiments, the effects of process-and-system level factors on the design and management of an AM supply chain network are analyzed. Numerous managerial insights, particularly on the effects of layer thickness, customer demand variability, mean demand variation, powder safety stock, and wastage rate on overall system performance, are gained, which are crucial for the sustainment of this new manufacturing and supply chain paradigm. Journal: IISE Transactions Pages: 744-765 Issue: 7 Volume: 51 Year: 2019 Month: 7 X-DOI: 10.1080/24725854.2018.1532134 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1532134 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:7:p:744-765 Template-Type: ReDIF-Article 1.0 Author-Name: Nina Sundström Author-X-Name-First: Nina Author-X-Name-Last: Sundström Author-Name: Oskar Wigström Author-X-Name-First: Oskar Author-X-Name-Last: Wigström Author-Name: Bengt Lennartson Author-X-Name-First: Bengt Author-X-Name-Last: Lennartson Title: Robust and energy efficient trajectories for robots in a common workspace setting Abstract: A method incorporating robustness into trajectory planning is proposed in this article. In the presence of delays, the suggested approach guarantees collision-free scenarios for robots with predefined paths and overlapping workspaces. Traditionally, only the time at which a robot can enter a common workspace is constrained so as to avoid collisions. If the shared zone becomes available later than planned, collisions can potentially occur if the robot is unable to stop before entering the shared space. In this work, a clearance point is introduced where the occupancy of the common workspace is evaluated.
The velocity is constrained at this point such that, if necessary, the robot is able to stop at the boundary of the shared space. The closer to the boundary the evaluation is performed, the more restricted the velocity. The problem is formulated in the space domain, assuming a predefined path, with robot dynamics and robustness constraints included. Multiple objectives corresponding to final time and energy consumption are considered. The impact of the position and timing of the clearance point on system performance is analyzed. An example is presented, where the optimal clearance point position is determined, based on the time at which the shared space is assumed to become available. Journal: IISE Transactions Pages: 766-776 Issue: 7 Volume: 51 Year: 2019 Month: 7 X-DOI: 10.1080/24725854.2018.1542543 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1542543 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:7:p:766-776 Template-Type: ReDIF-Article 1.0 Author-Name: Yunyi Kang Author-X-Name-First: Yunyi Author-X-Name-Last: Kang Author-Name: Feng Ju Author-X-Name-First: Feng Author-X-Name-Last: Ju Title: Flexible preventative maintenance for serial production lines with multi-stage degrading machines and finite buffers Abstract: In production systems, machines are typically subject to degradation, which is a gradual and accumulating process that can influence the performance of the production systems. In this work, we focus on the flexible preventative maintenance problem for serial production lines with multi-stage degrading machines and finite buffers. Condition-based maintenance decisions are first investigated for a two-machine-one-buffer system, considering machine degradation stages and the buffer level. The optimal maintenance policy is obtained using Markov decision models. For longer lines, approximation methods are developed based on the results from the two-machine case. Specifically, an iterative state and machine aggregation approach is developed to find the optimal preventative maintenance policy for each machine in large systems. Numerical experiments show that the proposed method outperforms the state-of-the-art. Journal: IISE Transactions Pages: 777-791 Issue: 7 Volume: 51 Year: 2019 Month: 7 X-DOI: 10.1080/24725854.2018.1562283 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1562283 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:7:p:777-791 Template-Type: ReDIF-Article 1.0 Author-Name: Jie Song Author-X-Name-First: Jie Author-X-Name-Last: Song Author-Name: Yunzhe Qiu Author-X-Name-First: Yunzhe Author-X-Name-Last: Qiu Author-Name: Jie Xu Author-X-Name-First: Jie Author-X-Name-Last: Xu Author-Name: Feng Yang Author-X-Name-First: Feng Author-X-Name-Last: Yang Title: Multi-fidelity sampling for efficient simulation-based decision making in manufacturing management Abstract: Today’s manufacturers operate in highly dynamic and uncertain market environments. Process-level disturbances present further challenges. Consequently, it is of strategic importance for a manufacturing company to develop robust manufacturing capabilities that can quickly adapt to varying customer demands in the presence of external and internal uncertainty and stochasticity.
Discrete-event simulations have been used by manufacturing managers to conduct “look-ahead” analysis and optimize resource allocation and production plans. However, simulations of complex manufacturing systems are time-consuming. Therefore, there is a great need for a highly efficient procedure to allocate a limited number of simulations to improve a system’s performance. In this article, we propose a multi-fidelity sampling algorithm that greatly increases the efficiency of simulation-based robust manufacturing management by utilizing ordinal estimates obtained from a low-fidelity, but fast, approximate model. We show that the multi-fidelity optimal sampling policy minimizes the expected optimality gap of the selected solution, and thus optimally uses a limited simulation budget. We derive an upper bound for the multi-fidelity sampling policy and compare it with other sampling policies to illustrate the efficiency improvement. We demonstrate its computational efficiency improvement and validate the convergence results derived using both benchmark test functions and two robust manufacturing management case studies. Journal: IISE Transactions Pages: 792-805 Issue: 7 Volume: 51 Year: 2019 Month: 7 X-DOI: 10.1080/24725854.2019.1576951 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1576951 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:7:p:792-805 Template-Type: ReDIF-Article 1.0 Author-Name: Hrayer Aprahamian Author-X-Name-First: Hrayer Author-X-Name-Last: Aprahamian Author-Name: Ebru K. Bish Author-X-Name-First: Ebru K. Author-X-Name-Last: Bish Author-Name: Douglas R. Bish Author-X-Name-First: Douglas R. Author-X-Name-Last: Bish Title: Adaptive risk-based pooling in public health screening Abstract: Pooled testing is commonly used in public health screening for classifying subjects in a large population as positive or negative for an infectious or genetic disease. Pooling is especially useful when screening for low-prevalence diseases under limited resources. Although pooled testing is used in various contexts (e.g., screening donated blood or for sexually transmitted diseases), a lack of understanding of how an optimal pooling scheme should be designed to maximize classification accuracy under a budget constraint hampers screening efforts. We propose and study an adaptive risk-based pooling scheme that considers important test- and population-level characteristics often overlooked in the literature (e.g., dilution of pooling and heterogeneous subjects). We characterize important structural properties of optimal subject assignment policies (i.e., the assignment of subjects with different risks to pools) and provide key insights. Our case study on chlamydia screening demonstrates the effectiveness of the proposed pooling scheme, with the expected number of false classifications reduced substantially over policies proposed in the literature. Journal: IISE Transactions Pages: 753-766 Issue: 9 Volume: 50 Year: 2018 Month: 9 X-DOI: 10.1080/24725854.2018.1434333 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1434333 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:50:y:2018:i:9:p:753-766 Template-Type: ReDIF-Article 1.0 Author-Name: Giulia Livieri Author-X-Name-First: Giulia Author-X-Name-Last: Livieri Author-Name: Saad Mouti Author-X-Name-First: Saad Author-X-Name-Last: Mouti Author-Name: Andrea Pallavicini Author-X-Name-First: Andrea Author-X-Name-Last: Pallavicini Author-Name: Mathieu Rosenbaum Author-X-Name-First: Mathieu Author-X-Name-Last: Rosenbaum Title: Rough volatility: Evidence from option prices Abstract: It has been recently shown that spot volatilities can be closely modeled by rough stochastic volatility-type dynamics. In such models, the log-volatility follows a fractional Brownian motion with Hurst parameter smaller than half. This result has been established using high-frequency volatility estimations from historical price data. We revisit this finding by studying implied volatility-based approximations of the spot volatility. Using at-the-money options on the S&P500 index with short maturity, we are able to confirm that volatility is rough. The Hurst parameter found here, of order 0.3, is slightly larger than that usually obtained from historical data. This is easily explained from a smoothing effect due to the remaining time to maturity of the considered options. Journal: IISE Transactions Pages: 767-776 Issue: 9 Volume: 50 Year: 2018 Month: 9 X-DOI: 10.1080/24725854.2018.1444297 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1444297 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:9:p:767-776 Template-Type: ReDIF-Article 1.0 Author-Name: Shakiba Enayati Author-X-Name-First: Shakiba Author-X-Name-Last: Enayati Author-Name: Osman Y. Özaltın Author-X-Name-First: Osman Y. Author-X-Name-Last: Özaltın Author-Name: Maria E. Mayorga Author-X-Name-First: Maria E. Author-X-Name-Last: Mayorga Author-Name: Cem Saydam Author-X-Name-First: Cem Author-X-Name-Last: Saydam Title: Ambulance redeployment and dispatching under uncertainty with personnel workload limitations Abstract: Emergency Medical Services (EMS) managers are concerned with responding to emergency calls in a timely manner. Redeployment and dispatching strategies can be used to improve coverage that pertains to the proportion of calls that are responded to within a target time threshold. Dispatching refers to the choice of which ambulance to send to a call, and redeployment refers to repositioning of idle ambulances to compensate for coverage loss due to busy ambulances. Redeployment moves, however, impose additional workload on EMS personnel and must be executed with care. We propose a two-stage stochastic programming model to redeploy and dispatch ambulances to maximize the expected coverage. Our model restricts personnel workload in a shift and incorporates multiple call priority levels. We develop a Lagrangian branch-and-bound algorithm to solve realistic size instances. We evaluate the model performance based on average coverage and average ambulance workload during a shift. Our computational results indicate that the proposed Lagrangian branch-and-bound is significantly more efficient than CPLEX, especially for large problem instances. We also compare our model with benchmarks from the literature and show that it can improve the performance of an EMS system considerably, in particular with respect to mean response time to high-priority calls. 
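The rough-volatility abstract above rests on measuring a Hurst parameter below one half. A minimal sketch of one common estimation idea follows: simulate a fractional Brownian log-volatility path (here via a Cholesky factor of the fractional Gaussian noise covariance, an assumption of this illustration) and regress log mean-squared increments on log lag, whose slope is 2H. The article instead works with implied-volatility proxies from short-maturity S&P500 options; nothing below reproduces that procedure.

```python
# Sketch: estimate a Hurst parameter from a log-volatility path by regressing
# log E[(X_{t+d} - X_t)^2] on log d (slope = 2H). The synthetic path is
# fractional Brownian motion built from the fGn covariance; H = 0.3 mimics
# the "rough" regime (H < 1/2) discussed in the abstract above.
import numpy as np

def fbm(n: int, hurst: float, rng: np.random.Generator) -> np.ndarray:
    """Fractional Brownian motion of length n via Cholesky of fGn covariance."""
    k = np.arange(n)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * hurst)
                   - 2 * np.abs(k) ** (2 * hurst)
                   + np.abs(k - 1) ** (2 * hurst))
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    noise = np.linalg.cholesky(cov) @ rng.standard_normal(n)
    return np.cumsum(noise)

def estimate_hurst(x: np.ndarray, lags=range(1, 20)) -> float:
    """Slope of log mean-squared increment vs. log lag, divided by 2."""
    m = [np.mean((x[d:] - x[:-d]) ** 2) for d in lags]
    slope, _ = np.polyfit(np.log(list(lags)), np.log(m), 1)
    return slope / 2

rng = np.random.default_rng(0)
path = fbm(2000, hurst=0.3, rng=rng)   # "rough" log-volatility path
print(f"estimated H = {estimate_hurst(path):.2f}")  # close to 0.3
```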
Journal: IISE Transactions Pages: 777-788 Issue: 9 Volume: 50 Year: 2018 Month: 9 X-DOI: 10.1080/24725854.2018.1446105 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1446105 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:9:p:777-788 Template-Type: ReDIF-Article 1.0 Author-Name: Xi Chen Author-X-Name-First: Xi Author-X-Name-Last: Chen Author-Name: Enlu Zhou Author-X-Name-First: Enlu Author-X-Name-Last: Zhou Author-Name: Jiaqiao Hu Author-X-Name-First: Jiaqiao Author-X-Name-Last: Hu Title: Discrete optimization via gradient-based adaptive stochastic search methods Abstract: Gradient-based Adaptive Stochastic Search (GASS) is a new stochastic search optimization algorithm that has recently been proposed. It iteratively searches promising candidate solutions through a population of samples generated from a parameterized probabilistic model on the solution space, and updates the parameter of the probabilistic model based on a direct gradient method. Under the framework of GASS, we propose two discrete optimization algorithms: discrete Gradient-based Adaptive Stochastic Search (discrete-GASS) and annealing Gradient-based Adaptive Stochastic Search (annealing-GASS). In discrete-GASS, we transform the discrete optimization problem into a continuous optimization problem on the parameter space of a family of independent discrete distributions, and apply a gradient-based method to find the optimal parameter, such that the corresponding distribution has the best capability to generate optimal solution(s) to the original discrete problem. In annealing-GASS, we use a Boltzmann distribution as the parameterized probabilistic model, and propose a gradient-based temperature schedule that changes adaptively with respect to the current performance of the algorithm. We show convergence of both discrete-GASS and annealing-GASS under appropriate conditions. Numerical results on several benchmark optimization problems and the traveling salesman problem indicate that both algorithms perform competitively against a number of other algorithms, including model reference adaptive search, the cross-entropy method, and multi-start simulated annealing with different temperature schedules. Journal: IISE Transactions Pages: 789-805 Issue: 9 Volume: 50 Year: 2018 Month: 9 X-DOI: 10.1080/24725854.2018.1448489 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1448489 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:9:p:789-805 Template-Type: ReDIF-Article 1.0 Author-Name: Anastasia Borovykh Author-X-Name-First: Anastasia Author-X-Name-Last: Borovykh Author-Name: Andrea Pascucci Author-X-Name-First: Andrea Author-X-Name-Last: Pascucci Author-Name: Stefano La Rovere Author-X-Name-First: Stefano Author-X-Name-Last: La Rovere Title: Systemic risk in a mean-field model of interbank lending with self-exciting shocks Abstract: In this article we consider a mean-field model of interacting diffusions for the monetary reserves in which the reserves are subjected to a self- and cross-exciting shock. This is motivated by the financial acceleration and fire sales observed in the market. We derive a mean-field limit using a weak convergence analysis and find an explicit measure-valued process associated with a large interbanking system. 
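GASS, summarized above, updates a parameterized sampling distribution with a direct gradient step. The sketch below is a deliberately simplified relative on a toy binary problem: independent Bernoulli parameters are moved toward elite samples, a cross-entropy-style update rather than the article's gradient derivation. The objective onemax, the population size, and the smoothing weight alpha are all illustrative choices, not the article's settings.

```python
# Simplified model-based discrete search in the spirit of GASS: sample from a
# parameterized distribution over binary solutions, then move the parameters
# toward the best samples. This is a cross-entropy-style caricature, not the
# gradient update derived in the article.
import numpy as np

def onemax(x: np.ndarray) -> np.ndarray:
    """Toy objective: number of ones in each row (maximum = dimension)."""
    return x.sum(axis=1)

def model_based_search(dim=40, pop=200, elite=20, alpha=0.7, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.full(dim, 0.5)                     # independent Bernoulli model
    for _ in range(iters):
        samples = rng.random((pop, dim)) < theta  # draw a population
        scores = onemax(samples)
        top = samples[np.argsort(scores)[-elite:]]  # elite samples
        theta = (1 - alpha) * theta + alpha * top.mean(axis=0)
        theta = theta.clip(0.01, 0.99)            # keep the model exploratory
    return theta

theta = model_based_search()
print("P(x_i = 1) after search:", theta.round(2))  # should approach 1
```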
We define systemic risk indicators and derive, using the limiting process, several law of large numbers results and verify these numerically. We conclude that self-exciting shocks increase the systemic risk in the network and their presence in interbank networks should not be ignored. Journal: IISE Transactions Pages: 806-819 Issue: 9 Volume: 50 Year: 2018 Month: 9 X-DOI: 10.1080/24725854.2018.1448491 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1448491 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:9:p:806-819 Template-Type: ReDIF-Article 1.0 Author-Name: Su Xiu Xu Author-X-Name-First: Su Xiu Author-X-Name-Last: Xu Author-Name: Saijun Shao Author-X-Name-First: Saijun Author-X-Name-Last: Shao Author-Name: Ting Qu Author-X-Name-First: Ting Author-X-Name-Last: Qu Author-Name: Jian Chen Author-X-Name-First: Jian Author-X-Name-Last: Chen Author-Name: George Q. Huang Author-X-Name-First: George Q. Author-X-Name-Last: Huang Title: Auction-based city logistics synchronization Abstract: This article is the first to propose an efficient auction mechanism for the City Logistics Synchronization (CLS) problem, which aims to capture both logistics punctuality and simultaneity in a city or region. The main motivation of CLS is that, if a delay has already occurred or will occur, customers tend to pursue simultaneity. We develop the one-sided Vickrey-Clarke-Groves (O-VCG) auction for the CLS problem. The proposed O-VCG auction realizes incentive compatibility (on the buy side), approximate allocative efficiency, budget balance, and individual rationality. We also prove that if buyers (firms) are substitutes, the utility of the third-party logistics (3PL) company (auctioneer) will be non-negative when it sets real transportation costs in the auction. The vehicle routing problem faced by the 3PL company is formulated as the lane covering problem with CLS requirements. Three effective heuristics are developed: Merge, Exchange, and Mutate. Our computational results show that the three operators are effective but sensitive to the bid duration. A Hybrid operator significantly outperforms each individual operator. We also numerically analyze the impacts of five key factors: the strategic behavior of the 3PL company, flexible due dates, the maximum bid duration, the radius of a city or region, and the number of depots. Journal: IISE Transactions Pages: 837-851 Issue: 9 Volume: 50 Year: 2018 Month: 9 X-DOI: 10.1080/24725854.2018.1450541 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1450541 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:9:p:837-851 Template-Type: ReDIF-Article 1.0 Author-Name: Wenjing Wang Author-X-Name-First: Wenjing Author-X-Name-Last: Wang Author-Name: Xi Chen Author-X-Name-First: Xi Author-X-Name-Last: Chen Title: An adaptive two-stage dual metamodeling approach for stochastic simulation experiments Abstract: In this article we propose an adaptive two-stage dual metamodeling approach for stochastic simulation experiments, aiming at exploiting the benefits of fitting the mean and variance function models simultaneously to improve the predictive performance of Stochastic Kriging (SK).
To this end, we study the effects of replacing the sample variances with smoothed variance estimates on the predictive performance of SK, articulate the links between SK and least-squares support vector regression, and provide some useful data-driven methods for identifying important design points. We argue that efficient data-driven experimental designs for stochastic simulation metamodeling can be “learned” through a “dense and shallow” initial design (i.e., relatively many design points with relatively little effort at each), and efficient budget allocation rules can be seamlessly incorporated into the proposed approach to intelligently spend the remaining simulation budget on the important design points identified. Two numerical examples are provided to demonstrate the promise held by the proposed approach in providing highly accurate mean response surface approximations. Journal: IISE Transactions Pages: 820-836 Issue: 9 Volume: 50 Year: 2018 Month: 9 X-DOI: 10.1080/24725854.2018.1452082 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1452082 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:9:p:820-836 Template-Type: ReDIF-Article 1.0 Author-Name: Xu Chen Author-X-Name-First: Xu Author-X-Name-Last: Chen Author-Name: Zuo-Jun Shen Author-X-Name-First: Zuo-Jun Author-X-Name-Last: Shen Title: An analysis of a supply chain with options contracts and service requirements Abstract: This article studies a one-period two-party supply chain with a service requirement. At the beginning of a single retail season, the retailer can obtain goods either by ordering from a firm or by purchasing and exercising call options. The retailer’s optimal ordering policy and the supplier’s optimal production policy are derived in the presence of options contracts and a service requirement. In addition, it is shown that options contracts benefit both the retailer and supplier. Furthermore, it is shown that the retailer’s optimal expected profit is non-increasing in the service requirement and the supplier’s optimal expected profit is non-decreasing in the service requirement, either with or without options contracts. A special class of distribution-free contracts that can coordinate the supply chain with options contracts and the service requirement is derived. Furthermore, as opposed to the case of non-coordinating contracts, it is shown that there is always a Pareto contract. Finally, the retailer’s and supply chain’s optimal service levels are derived. Journal: IIE Transactions Pages: 805-819 Issue: 10 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.649383 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.649383 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:10:p:805-819 Template-Type: ReDIF-Article 1.0 Author-Name: Zhixin Liu Author-X-Name-First: Zhixin Author-X-Name-Last: Liu Author-Name: Liang Lu Author-X-Name-First: Liang Author-X-Name-Last: Lu Author-Name: Xiangtong Qi Author-X-Name-First: Xiangtong Author-X-Name-Last: Qi Title: Simultaneous and sequential price quotations for uncertain order inquiries with production scheduling cost Abstract: This article studies the coordination between pricing and production scheduling decisions of a manufacturer who quotes prices for a set of order inquiries. Each inquiry is either canceled or confirmed by its owner following a certain probability distribution that depends on the quoted price.
The manufacturer then incurs a production scheduling cost for processing each firm order. Two types of price quotation schemes, simultaneous and sequential quotations, are investigated. A simultaneous quotation quotes all order inquiries simultaneously. The problem is formulated, with a specific form of price function, as a quadratic program that can be solved efficiently. Properties of optimal quotations are provided. A sequential quotation quotes order inquiries one at a time. For this problem, optimal algorithms based on implicit enumeration are developed, and efficient heuristics are designed. Simultaneous and sequential quotations are compared in computational studies, and several managerial insights are obtained. Conditions under which the simultaneous quotation performs close to the sequential quotation, and under which the heuristic sequential quotation performs near-optimally, are highlighted. Journal: IIE Transactions Pages: 820-833 Issue: 10 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.649389 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.649389 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:10:p:820-833 Template-Type: ReDIF-Article 1.0 Author-Name: Soroush Saghafian Author-X-Name-First: Soroush Author-X-Name-Last: Saghafian Author-Name: Mark Van Oyen Author-X-Name-First: Mark Author-X-Name-Last: Van Oyen Title: The value of flexible backup suppliers and disruption risk information: newsvendor analysis with recourse Abstract: This article develops a model and analysis to provide insight into two effective remedies to increase supply chain resilience: (i) contracting with a secondary flexible backup supplier; and (ii) monitoring primary suppliers to obtain disruption risk information. To investigate the true value of these strategies, an analysis is performed under imperfect information concerning the disruption risks and considering a two-stage setting with recourse. In this setting, the firm first monitors its suppliers and then utilizes a recourse option subject to the limited capacity reserved a priori via a contract with a flexible backup supplier. The firm’s jointly optimal behavior is analytically characterized (utilizing only the information available to the firm) regarding two interconnected decisions: (i) the advance capacity investment/reservation level with a flexible backup supplier; and (ii) the inventory ordering policy of the underlying products from both primary and backup suppliers. The presented results quantify effective disruption risk mitigation strategies for firms and provide managerial insights into the value of (i) a flexible backup supplier; (ii) disruption risk information; (iii) a contracted recourse option; and (iv) flexibility in the backup system. Journal: IIE Transactions Pages: 834-867 Issue: 10 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2012.654846 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.654846 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
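To make the two-stage recourse structure of the backup-supplier abstract concrete, here is a hedged Monte Carlo sketch: choose a primary order quantity q and a reserved backup capacity K, observe whether the primary is disrupted, then exercise recourse up to the reservation. All prices, costs, the disruption probability, the demand distribution, and the assumption that the primary charges only on delivery are illustrative choices, not the article's model.

```python
# Toy two-stage version of the backup-supplier question: reserve backup
# capacity K and order q from an unreliable primary; after observing whether
# the primary is disrupted, exercise recourse up to K. All parameters below
# are illustrative, not taken from the article.
import numpy as np

rng = np.random.default_rng(1)
price, cost, reserve_fee, backup_cost = 10.0, 4.0, 0.5, 6.0
p_disrupt, n_sims = 0.2, 20_000
demand = rng.gamma(shape=4.0, scale=25.0, size=n_sims)   # mean 100
disrupted = rng.random(n_sims) < p_disrupt

def expected_profit(q: float, K: float) -> float:
    supplied = np.where(disrupted, 0.0, q)               # primary delivery
    recourse = np.minimum(K, np.maximum(demand - supplied, 0.0))
    sales = np.minimum(demand, supplied + recourse)
    # assumption: the primary charges only when it actually delivers
    profit = (price * sales - cost * q * (~disrupted)
              - reserve_fee * K - backup_cost * recourse)
    return profit.mean()

# crude grid search over order quantity and reserved backup capacity
grid = np.arange(0, 201, 10)
q_star, k_star = max(((q, K) for q in grid for K in grid),
                     key=lambda qk: expected_profit(*qk))
print(f"order q = {q_star}, reserve K = {k_star}, "
      f"profit = {expected_profit(q_star, k_star):.1f}")
```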
Handle: RePEc:taf:uiiexx:v:44:y:2012:i:10:p:834-867 Template-Type: ReDIF-Article 1.0 Author-Name: Yufen Shao Author-X-Name-First: Yufen Author-X-Name-Last: Shao Author-Name: Jonathan Bard Author-X-Name-First: Jonathan Author-X-Name-Last: Bard Author-Name: Ahmad Jarrah Author-X-Name-First: Ahmad Author-X-Name-Last: Jarrah Title: The therapist routing and scheduling problem Abstract: In a majority of settings, rehabilitative services are provided at healthcare facilities by skilled therapists who work as independent contractors. Facilities include hospitals, nursing homes, clinics, and assisted living centers and may be located throughout a wide geographic area. To date, the problem of constructing weekly schedules for the therapists has yet to be fully investigated. This article presents the first algorithm for supporting weekly planning at the agencies that do the contracting. The goal is to better match patient demand with therapist skills while minimizing treatment, travel, administrative and mileage reimbursement costs. The problem was modeled as a mixed-integer program but has several complicating components, including different patient classes, optional weekly treatment patterns and a complex payment structure that frustrated the use of exact methods. Alternatively, a parallel (two-phase) greedy randomized adaptive search procedure was developed that relies on an innovative decomposition scheme and a number of benefit measures that explicitly address the trade-off between feasibility and solution quality. In Phase I, daily routes are constructed for the therapists in parallel and then combined to form weekly schedules. In Phase II, a high-level neighborhood search is executed to converge towards a local optimum. This is facilitated by solving a series of newly formulated traveling salesman problems with side constraints. Extensive testing with both real data provided by a U.S. rehabilitation agency and associated random instances demonstrates the effectiveness of the proposed procedure. Journal: IIE Transactions Pages: 868-893 Issue: 10 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2012.665202 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.665202 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:10:p:868-893 Template-Type: ReDIF-Article 1.0 Author-Name: Hoda Parvin Author-X-Name-First: Hoda Author-X-Name-Last: Parvin Author-Name: Mark Van Oyen Author-X-Name-First: Mark Author-X-Name-Last: Van Oyen Author-Name: Dimitrios Pandelis Author-X-Name-First: Dimitrios Author-X-Name-Last: Pandelis Author-Name: Damon Williams Author-X-Name-First: Damon Author-X-Name-Last: Williams Author-Name: Junghee Lee Author-X-Name-First: Junghee Author-X-Name-Last: Lee Title: Fixed task zone chaining: worker coordination and zone design for inexpensive cross-training in serial CONWIP lines Abstract: This work introduces a new canonical model of worker cross-training, called a Fixed Task Zone Chain (FTZC), as a special type of zone-based cross-training and develops a methodology to employ it in U-shaped CONstant Work In Process (CONWIP) lines. The FTZC approach is intended to address lines with more stations than workers in environments where extensive cross-training is prohibitive. It incorporates a two-skill chain to cross-train one skill at each end of each zone; however, tasks at the interior of a zone are not cross-trained.
Given a zone structure, the dynamic control problem of maximizing the line’s throughput is studied. Some useful properties of the optimal policy are provided as a basis for a heuristic control policy that yields high throughput. The performance of an FTZC system is contingent upon the choice of zone structure; therefore, the zone assignment (ZonA) algorithm is created to design the zone structure to achieve high throughput levels. Sufficient conditions that guarantee that the line is balanceable through ZonA are derived. Benchmarking over a test suite supports the effectiveness of the proposed heuristic worker control policy as well as the ZonA algorithm, and its performance is compared with other paradigms. Journal: IIE Transactions Pages: 894-914 Issue: 10 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2012.668264 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.668264 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:10:p:894-914 Template-Type: ReDIF-Article 1.0 Author-Name: Jian Liu Author-X-Name-First: Jian Author-X-Name-Last: Liu Author-Name: Jianjun Shi Author-X-Name-First: Jianjun Author-X-Name-Last: Shi Author-Name: S. Hu Author-X-Name-First: S. Author-X-Name-Last: Hu Title: Quality-assured setup planning based on the stream-of-variation model for multi-stage machining processes Abstract: Setup planning is a set of activities used to arrange manufacturing features into an appropriate sequence for processing. It has a significant impact on product quality, which is often measured in terms of dimensional variation in key product characteristics. Current approaches to setup planning are experience-based and tend to be conservative due to the selection of unnecessarily precise machines and fixtures to ensure final product quality. This is especially true in multi-stage machining processes (MMPs) since it is difficult to predict variation propagation and its impact on the quality of the final product. In this paper, a methodology is proposed to realize cost-effective, quality-assured setup planning for MMPs. Setup planning is formulated as an optimization problem based on quantitative evaluation of variation propagations. The optimal setup plan minimizes the cost related to process precision and satisfies the quality specifications. The proposed approach can significantly improve the effectiveness as well as the efficiency of the setup planning for MMPs. Journal: IIE Transactions Pages: 323-334 Issue: 4 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802108526 File-URL: http://hdl.handle.net/10.1080/07408170802108526 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:4:p:323-334 Template-Type: ReDIF-Article 1.0 Author-Name: O. Vanli Author-X-Name-First: O. Author-X-Name-Last: Vanli Author-Name: Enrique Del Castillo Author-X-Name-First: Enrique Author-X-Name-Last: Del Castillo Title: Bayesian approaches for on-line robust parameter design Abstract: Two new Bayesian approaches to Robust Parameter Design (RPD) are presented that recompute the optimal control factor settings based on on-line measurements of the noise factors. A dual response model approach to RPD is taken. The first method uses the posterior predictive density of the responses to determine the optimal control factor settings. A second method additionally uses the predictive density of the noise factors.
The control factor settings obtained are thus robust not only against on-line variability of the noise factors but also against the uncertainty in the response model parameters. On-line controllable and off-line controllable factors are treated in a unified manner through a quadratic cost function. Both single and multiple-response processes are considered and closed-form robust control laws are provided. Two simulation examples and an example taken from the literature are used to compare the proposed methods with existing RPD approaches that are based on similar models and cost functions.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix] Journal: IIE Transactions Pages: 359-371 Issue: 4 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802108534 File-URL: http://hdl.handle.net/10.1080/07408170802108534 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:4:p:359-371 Template-Type: ReDIF-Article 1.0 Author-Name: Giovanna Capizzi Author-X-Name-First: Giovanna Author-X-Name-Last: Capizzi Author-Name: Guido Masarotto Author-X-Name-First: Guido Author-X-Name-Last: Masarotto Title: Bootstrap-based design of residual control charts Abstract: One approach to monitoring autocorrelated data consists in applying a control chart to the residuals of a time series model estimated from process observations. Recent research shows that the impact of estimation error on the run length properties of the resulting charts is not negligible. In this paper a general strategy for implementing residual-based control schemes is investigated. The designing procedure uses the AR-sieve approximation assuming that the process allows an autoregressive representation of order infinity. The run length distribution is estimated using bootstrap resampling in order to account for uncertainty in the estimated parameters. Control limits that satisfy a given constraint on the false alarm rate are computed via stochastic approximation. The proposed procedure is investigated for three residual-based control charts: generalized likelihood ratio, cumulative sum and exponentially weighted moving average. Results show that the bootstrap approach safeguards against an undesirably high rate of false alarms. In addition, the out-of-control bootstrap chart sensitivity seems to be comparable to that of charts designed under the assumption that the estimated model is equal to the true generating process.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix] Journal: IIE Transactions Pages: 275-286 Issue: 4 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802120059 File-URL: http://hdl.handle.net/10.1080/07408170802120059 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
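The bootstrap design strategy in the residual-chart abstract above can be illustrated in miniature: fit a low-order autoregression to in-control data, resample its residuals, and set an EWMA control limit as a bootstrap quantile that respects a false-alarm constraint. The AR(1) fit, the window length, and the quantile rule below are simplifications of this sketch; the article uses an AR-sieve approximation and stochastic approximation instead.

```python
# Sketch of a bootstrap-designed residual chart: fit an AR(1) to in-control
# data, then bootstrap the residuals to set an EWMA control limit that meets
# a false-alarm constraint. The article's AR-sieve / stochastic-approximation
# procedure is more general; this only illustrates the idea.
import numpy as np

rng = np.random.default_rng(7)
lam, alpha, run = 0.2, 0.01, 100      # EWMA weight, false-alarm rate, window

# in-control autocorrelated data (the true phi is unknown to the designer)
x = np.empty(500)
x[0] = 0.0
for t in range(1, 500):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()

phi = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)   # least-squares AR(1) fit
resid = x[1:] - phi * x[:-1]

def max_ewma(e: np.ndarray) -> float:
    """Largest absolute EWMA value over a residual sequence."""
    z, peak = 0.0, 0.0
    for r in e:
        z = lam * r + (1 - lam) * z
        peak = max(peak, abs(z))
    return peak

# bootstrap the monitoring statistic: resample residuals and re-run the chart
peaks = [max_ewma(rng.choice(resid, size=run, replace=True))
         for _ in range(2000)]
limit = np.quantile(peaks, 1 - alpha)
print(f"phi-hat = {phi:.2f}, EWMA limit for a {run}-run window: {limit:.3f}")
```

Because the limit is taken from the bootstrap distribution of the chart's own statistic, estimation error in phi is automatically reflected in the limit, which is the safeguard against excess false alarms that the abstract describes.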
Handle: RePEc:taf:uiiexx:v:41:y:2009:i:4:p:275-286 Template-Type: ReDIF-Article 1.0 Author-Name: Hong-Zhong Huang Author-X-Name-First: Hong-Zhong Author-X-Name-Last: Huang Author-Name: Jian Qu Author-X-Name-First: Jian Author-X-Name-Last: Qu Author-Name: Ming Zuo Author-X-Name-First: Ming Author-X-Name-Last: Zuo Title: Genetic-algorithm-based optimal apportionment of reliability and redundancy under multiple objectives Abstract: When solving multi-objective optimization problems subject to constraints in reliability-based design, it is desirable for the decision maker to have a sufficient number of solutions available for selection. However, many existing approaches either combine multiple objectives into a single objective or treat the objectives as penalties. This results in fewer optimal solutions than would be provided by a multi-objective approach. For such cases, a niched Pareto Genetic Algorithm (GA) may be a viable alternative. Unfortunately, it is often difficult to set penalty parameters that are required in these algorithms. In this paper, a multi-objective optimization algorithm is proposed that combines a niched Pareto GA with a constraint handling method that does not need penalty parameters. The proposed algorithm is based on Pareto tournament and equivalence sharing, and involves the following components: search for feasible solutions, selection of non-dominated solutions and maintenance of diversified solutions. It deals with multiple objectives by incorporating the concept of Pareto dominance in its selection operator while applying a niching pressure to spread the population along the Pareto frontier. To demonstrate the performance of the proposed algorithm, a test problem is presented and the solution distributions in three different generations of the algorithm are illustrated. The optimal solutions obtained with the proposed algorithm for a practical reliability problem are compared with those obtained by a single-objective optimization method, a multi-objective GA method, and a hybrid GA method. Journal: IIE Transactions Pages: 287-298 Issue: 4 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802322994 File-URL: http://hdl.handle.net/10.1080/07408170802322994 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:4:p:287-298 Template-Type: ReDIF-Article 1.0 Author-Name: Hang Zhang Author-X-Name-First: Hang Author-X-Name-Last: Zhang Author-Name: Susan Albin Author-X-Name-First: Susan Author-X-Name-Last: Albin Title: Detecting outliers in complex profiles using a χ2 control chart method Abstract: The quality of products or manufacturing processes is sometimes characterized by profiles or functions. A method is proposed to identify outlier profiles among a set of complex profiles which are difficult to model with explicit functions. It treats profiles as vectors in high-dimension space and applies a χ2 control chart to identify outliers. This method is useful in Statistical Process Control (SPC) in two ways: (i) identifying outliers in SPC baseline data; and (ii) the on-line monitoring of profiles. The method does not require explicit expression of the function between the response and explanatory variables or fitting regression models. It is especially useful and sometimes the only option when profiles are very complex. Given a set of profiles (high-dimension vectors), the median of these vectors is derived.
The variance among profiles is estimated by considering the pair-wise differences between profiles. A χ2 statistic is derived to compare each profile to the center vector. A simulation experiment and manufacturing data are used to illustrate applications of the method. Comparing it with the existing non-linear regression method shows that it has a better performance: it misidentifies fewer non-outlier profiles as outliers than the non-linear regression method, and misidentifies similarly small fractions of outlier profiles as non-outliers.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix] Journal: IIE Transactions Pages: 335-345 Issue: 4 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802323000 File-URL: http://hdl.handle.net/10.1080/07408170802323000 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:4:p:335-345 Template-Type: ReDIF-Article 1.0 Author-Name: Ming Jin Author-X-Name-First: Ming Author-X-Name-Last: Jin Author-Name: Fugee Tsung Author-X-Name-First: Fugee Author-X-Name-Last: Tsung Title: Smith–EWMA run-to-run control schemes for a process with measurement delay Abstract: The Exponentially Weighted Moving Average (EWMA) controller is a popular run-to-run controller in the semiconductor manufacturing industry. The controller adjusts input based on measurement information from previous runs. EWMA controllers can guarantee satisfactory results in many cases; however, when there is a measurement delay in the process, the stability properties and performance of the EWMA controller cannot be guaranteed. In order to maintain the satisfactory outcomes of EWMA controllers, a Smith predictor control scheme, created particularly for time-delay systems in control theory, is introduced into EWMA controllers. A modification of the EWMA controller, called the Smith–EWMA run-to-run controller, is proposed. The stability properties of Smith–EWMA and EWMA run-to-run controllers are compared. Moreover, a simulation-based performance comparison with the EWMA and recursive-least-squares controllers under disturbance conditions is conducted. The results show that the proposed Smith–EWMA run-to-run controllers enlarge the stability region and achieve better performance under serious metrology delay and model uncertainty in the presence of process disturbances.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix] Journal: IIE Transactions Pages: 346-358 Issue: 4 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802331243 File-URL: http://hdl.handle.net/10.1080/07408170802331243 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:4:p:346-358 Template-Type: ReDIF-Article 1.0 Author-Name: Santanu Chakraborty Author-X-Name-First: Santanu Author-X-Name-Last: Chakraborty Author-Name: Nagi Gebraeel Author-X-Name-First: Nagi Author-X-Name-Last: Gebraeel Author-Name: Mark Lawley Author-X-Name-First: Mark Author-X-Name-Last: Lawley Author-Name: Hong Wan Author-X-Name-First: Hong Author-X-Name-Last: Wan Title: Residual-life estimation for components with non-symmetric priors Abstract: Condition monitoring uses sensory signals to assess the health of engineering systems.
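A stripped-down version of the χ2 profile-monitoring idea above: treat profiles as vectors, center them at the elementwise median, estimate pointwise variances from pairwise differences, and flag profiles whose χ2 distance from the center is extreme. Assuming independent profile points, as this sketch does, and using consecutive differences are simplifications; the article's estimator is more refined. The synthetic sine profiles and the drifting outlier are invented for the illustration.

```python
# Simplified profile-outlier mechanics: elementwise median as the center,
# pointwise variance from pairwise differences (Var of a difference of two
# i.i.d. profiles is twice the pointwise variance, hence the 0.5 factor),
# and a chi-square distance per profile. Illustrative only.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 50)
profiles = np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal((30, 50))
profiles[7] += 0.4 * t                       # one drifting outlier profile

center = np.median(profiles, axis=0)
diffs = profiles[1:] - profiles[:-1]
var = 0.5 * np.mean(diffs ** 2, axis=0)

stat = np.sum((profiles - center) ** 2 / var, axis=1)
ucl = chi2.ppf(0.995, df=profiles.shape[1])
print("flagged profiles:", np.where(stat > ucl)[0])   # expect [7]
```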
A degradation model is a mathematical characterization of the evolution of a condition signal. Our recent research focuses on using degradation models to compute residual-life distributions for degrading components. Residual-life distributions are important for providing probabilistic estimates of failure time for use in maintenance planning and spare parts inventory management. To obtain residual-life distributions, our earlier work assumed the degradation model's stochastic parameters to be normally distributed. This paper investigates the performance of these residual-life distributions when the underlying normality assumptions are not satisfied. The paper also develops methods for estimating residual-life when the stochastic parameters of the degradation model follow more general distributions. Journal: IIE Transactions Pages: 372-387 Issue: 4 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802369409 File-URL: http://hdl.handle.net/10.1080/07408170802369409 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:4:p:372-387 Template-Type: ReDIF-Article 1.0 Author-Name: Yeu-Shiang Huang Author-X-Name-First: Yeu-Shiang Author-X-Name-Last: Huang Author-Name: Chia Yen Author-X-Name-First: Chia Author-X-Name-Last: Yen Title: A study of two-dimensional warranty policies with preventive maintenance Abstract: In dealing with the effects of product deterioration in the context of reliability analysis, it may not be satisfactory to consider only the effects of time or age because usage is often another essential factor that accounts for deterioration. A two-dimensional warranty with consideration of both time and usage for deteriorating products would be more advantageous for manufacturers. In this paper, a two-dimensional warranty model in which the customer is expected to perform appropriate preventive maintenance is analyzed and the warranty policy that maximizes the manufacturers' profits is determined. The proposed approach provides manufacturers with guidelines on how to offer customers two-dimensional warranty programs with proper time and usage limits. A numerical example shows the effectiveness of the proposed approach. Sensitivity analyses are conducted to investigate the robustness of the derived optimal warranty policy. Journal: IIE Transactions Pages: 299-308 Issue: 4 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802432967 File-URL: http://hdl.handle.net/10.1080/07408170802432967 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:4:p:299-308 Template-Type: ReDIF-Article 1.0 Author-Name: Wenzhen Huang Author-X-Name-First: Wenzhen Author-X-Name-Last: Huang Author-Name: Tirawat Phoomboplab Author-X-Name-First: Tirawat Author-X-Name-Last: Phoomboplab Author-Name: Dariusz Ceglarek Author-X-Name-First: Dariusz Author-X-Name-Last: Ceglarek Title: Process capability surrogate model-based tolerance synthesis for multi-station manufacturing systems Abstract: The main challenges in tolerance synthesis for complex assembly design currently are: (i) to produce a simplified deterministic model that is able to formulate general statistic models in complex assembly problems; (ii) to lower the high computation intensity required in optimization studies when the process capability (yield) model is used for key product characteristics. 
In this paper, tolerance synthesis for complex assemblies is defined as a probabilistic optimization problem which allows the modeling of assemblies with a general multivariate statistical model and complex tolerance regions. An approach is developed for yield surrogate model generation based on an assembly model in multi-station manufacturing systems, computer experiments, multivariate distribution transformation and regression analysis. Therefore, efficient gradient-based approaches can be applied to avoid the intensive computation in direct optimization. Industrial case studies are presented to illustrate and validate the proposed methodology and to compare it with existing tolerance synthesis methods.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix] Journal: IIE Transactions Pages: 309-322 Issue: 4 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802510408 File-URL: http://hdl.handle.net/10.1080/07408170802510408 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:4:p:309-322 Template-Type: ReDIF-Article 1.0 Author-Name: Chia-Han Yang Author-X-Name-First: Chia-Han Author-X-Name-Last: Yang Author-Name: Tao Yuan Author-X-Name-First: Tao Author-X-Name-Last: Yuan Author-Name: Way Kuo Author-X-Name-First: Way Author-X-Name-Last: Kuo Author-Name: Yue Kuo Author-X-Name-First: Yue Author-X-Name-Last: Kuo Title: Non-parametric Bayesian modeling of hazard rate with a change point for nanoelectronic devices Abstract: This study proposes a non-parametric Bayesian approach to the inference of the L-shaped hazard rate with a change point, which has been observed for nanoelectronic devices in experimental studies. Instead of assuming a restrictive parametric model for the hazard rate function, this article uses a flexible non-parametric model based on a stochastic jump process to describe the decreasing hazard rate in the infant mortality period. A Markov chain Monte Carlo simulation algorithm that implements a dynamic version of the Gibbs sampler is developed for posterior simulation and inference. The proposed approach is applied to analyze an experimental data set, which consists of the failure times of a novel nanoelectronic device: a metal oxide semiconductor capacitor with mixed oxide high-k gate dielectric. Results obtained from the analysis demonstrate that the proposed non-parametric Bayesian approach is capable of producing reasonable estimates of the hazard rate function and the change point. As a flexible method, the proposed approach has the potential to be applied to assess the reliability of novel nanoelectronic devices when the failure mechanisms are generally unknown, parametric reliability models are not readily available, and the availability of data is limited. Journal: IIE Transactions Pages: 496-506 Issue: 7 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.587864 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.587864 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
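The process-capability (yield) model that the surrogate in the tolerance-synthesis abstract approximates can be evaluated directly by Monte Carlo, which is exactly what makes direct optimization expensive. The toy version below uses a linear stack-up, the common 3-sigma tolerance-to-standard-deviation rule, and an illustrative spec limit; the sensitivity vector and all numbers are assumptions of this sketch, not the article's case study.

```python
# Monte Carlo yield of a key product characteristic (KPC) under a tolerance
# vector: sample part deviations implied by the tolerances and count how
# often the KPC stays in spec. Linear stack-up and 3-sigma rule assumed.
import numpy as np

rng = np.random.default_rng(11)
sensitivity = np.array([1.0, -0.8, 0.5])      # KPC sensitivity to each part

def yield_estimate(tolerances: np.ndarray, spec: float = 0.6,
                   n: int = 100_000) -> float:
    """P(|KPC| <= spec) when each deviation ~ N(0, (tol/3)^2)."""
    dev = rng.standard_normal((n, tolerances.size)) * (tolerances / 3.0)
    kpc = dev @ sensitivity
    return float(np.mean(np.abs(kpc) <= spec))

for tol in ([0.3, 0.3, 0.3], [0.6, 0.6, 0.6], [0.9, 0.3, 0.3]):
    print(tol, "->", f"{yield_estimate(np.array(tol)):.4f}")
```

Each yield evaluation here costs a full simulation, which is why a regression surrogate fitted over computer experiments, as in the abstract, is attractive inside an optimization loop.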
Handle: RePEc:taf:uiiexx:v:44:y:2012:i:7:p:496-506 Template-Type: ReDIF-Article 1.0 Author-Name: Chiwoo Park Author-X-Name-First: Chiwoo Author-X-Name-Last: Park Author-Name: Jianhua Huang Author-X-Name-First: Jianhua Author-X-Name-Last: Huang Author-Name: David Huitink Author-X-Name-First: David Author-X-Name-Last: Huitink Author-Name: Subrata Kundu Author-X-Name-First: Subrata Author-X-Name-Last: Kundu Author-Name: Bani Mallick Author-X-Name-First: Bani Author-X-Name-Last: Mallick Author-Name: Hong Liang Author-X-Name-First: Hong Author-X-Name-Last: Liang Author-Name: Yu Ding Author-X-Name-First: Yu Author-X-Name-Last: Ding Title: A multistage, semi-automated procedure for analyzing the morphology of nanoparticles Abstract: This article presents a multistage, semi-automated procedure that can expedite the morphology analysis of nanoparticles. Material scientists have long conjectured that the morphology of nanoparticles has a profound impact on the properties of the hosting material, but a bottleneck is the lack of a reliable and automated morphology analysis of the particles based on their image measurements. This article attempts to fill in this critical void. One particular challenge in nanomorphology analysis is how to analyze the overlapped nanoparticles, a problem not well addressed by the existing methods but effectively tackled by the method proposed in this article. This method entails multiple stages of operations, executed sequentially, and is considered semi-automated due to the inclusion of a semi-supervised clustering step. The proposed method is applied to several images of nanoparticles, producing the needed statistical characterization of their morphology. Journal: IIE Transactions Pages: 507-522 Issue: 7 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.587867 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.587867 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:7:p:507-522 Template-Type: ReDIF-Article 1.0 Author-Name: Chia-Jung Chang Author-X-Name-First: Chia-Jung Author-X-Name-Last: Chang Author-Name: Lijuan Xu Author-X-Name-First: Lijuan Author-X-Name-Last: Xu Author-Name: Qiang Huang Author-X-Name-First: Qiang Author-X-Name-Last: Huang Author-Name: Jianjun Shi Author-X-Name-First: Jianjun Author-X-Name-Last: Shi Title: Quantitative characterization and modeling strategy of nanoparticle dispersion in polymer composites Abstract: Nanoparticle dispersion plays a crucial role in the mechanical properties of polymer nanocomposites. Transmission Electron Microscope/Scanning Electron Microscope (TEM/SEM) images are commonly used to represent nanoparticle dispersion without further quantification of its properties. Therefore, there is a strong need to develop a quantitative measure to effectively describe nanoparticle dispersion from a TEM/SEM image. This article reports an effective modeling strategy to characterize nanoparticle dispersion states among different locations of a nanocomposite surface. An engineering-driven inhomogeneous Poisson random field is proposed to represent the nanoparticle dispersion at the nanoscale. The model parameters are estimated through the Bayesian Markov Chain Monte Carlo technique to overcome the challenge of the limited amount of accessible data due to the time-consuming sample collection process. The TEM images taken from nano-silica/epoxy composites are used to support the proposed methodology.
The research strategy and framework are generally applicable to other nanocomposite materials. Journal: IIE Transactions Pages: 523-533 Issue: 7 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.588995 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.588995 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:7:p:523-533 Template-Type: ReDIF-Article 1.0 Author-Name: Marcus Perry Author-X-Name-First: Marcus Author-X-Name-Last: Perry Author-Name: Jeffrey Kharoufeh Author-X-Name-First: Jeffrey Author-X-Name-Last: Kharoufeh Author-Name: Shashank Shekhar Author-X-Name-First: Shashank Author-X-Name-Last: Shekhar Author-Name: Jiazhao Cai Author-X-Name-First: Jiazhao Author-X-Name-Last: Cai Author-Name: M. Shankar Author-X-Name-First: M. Author-X-Name-Last: Shankar Title: Statistical characterization of nanostructured materials from severe plastic deformation in machining Abstract: Endowing conventional microcrystalline materials with nanometer-scale grains at the surfaces can offer enhanced mechanical properties, including improved wear, fatigue, and friction properties, while simultaneously enabling useful functionalizations with regard to biocompatibility, osseointegration, electrochemical performance, etc. To inherit such multifunctional properties from the surface nanograined state, existing approaches often use coatings that are created through an array of secondary processing techniques (e.g., physical or chemical vapor deposition, surface mechanical attrition treatment, etc.). Obviating the need for such surface processing, recent empirical evidence has demonstrated the introduction of integral surface nanograin structures on bulk materials as a result of severe plastic deformation during machining-based processes. Building on these observations, if empirically driven process–structure mappings can be developed, it may be possible to engineer enhanced nanoscale surface microstructures directly using machining processes while simultaneously incorporating them within existing computer-numeric-controlled manufacturing systems. Toward this end, this article provides a statistical characterization of nanograined metals created by severe plastic deformation in machining-based processes that maps machining conditions to the resulting microstructure, namely, the mean grain size. A specialized designed experiments approach is used to hypothesize and test a linear mixed-effects model of two important machining parameters. Unlike standard analysis approaches, the statistical dependence between subsets of experimental grain size observations is accounted for and it is shown that ignoring this inherent dependence can yield misleading results for the mean response function. The statistical model is applied to pure copper specimens to identify the factors that most significantly contribute to variability in the mean grain size and is shown to accurately predict the mean grain size under a few scenarios. Journal: IIE Transactions Pages: 534-550 Issue: 7 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.596509 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.596509 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
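A first diagnostic related to the nanoparticle-dispersion modeling above is the quadrat-based index of dispersion: under a homogeneous Poisson field the variance-to-mean ratio of quadrat counts is 1, and clustering pushes it higher. The sketch below applies the standard chi-square approximation to synthetic uniform and clustered patterns; the synthetic data, the quadrat grid, and the test itself are illustrations, while the article's Bayesian inhomogeneous Poisson field goes well beyond this check.

```python
# Quadrat-count dispersion check: partition the unit square into bins, count
# points per bin, and compare the variance-to-mean ratio with the Poisson
# value of 1. Under complete spatial randomness, (m-1)*index is approximately
# chi-square with m-1 degrees of freedom.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(5)

def dispersion_index(points: np.ndarray, bins: int = 8):
    counts, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                  bins=bins, range=[[0, 1], [0, 1]])
    c = counts.ravel()
    idx = c.var(ddof=1) / c.mean()
    pval = chi2.sf((c.size - 1) * idx, df=c.size - 1)
    return idx, pval

uniform = rng.random((500, 2))                       # homogeneous pattern
clustered = (rng.random((50, 2))[rng.integers(0, 50, 500)]
             + 0.02 * rng.standard_normal((500, 2))).clip(0, 1)

for name, pts in [("uniform", uniform), ("clustered", clustered)]:
    idx, p = dispersion_index(pts)
    print(f"{name}: index = {idx:.2f}, p = {p:.3g}")
```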
Handle: RePEc:taf:uiiexx:v:44:y:2012:i:7:p:534-550 Template-Type: ReDIF-Article 1.0 Author-Name: Chumpol Yuangyai Author-X-Name-First: Chumpol Author-X-Name-Last: Yuangyai Author-Name: Harriet Nembhard Author-X-Name-First: Harriet Author-X-Name-Last: Nembhard Author-Name: Gregory Hayes Author-X-Name-First: Gregory Author-X-Name-Last: Hayes Author-Name: James Adair Author-X-Name-First: James Author-X-Name-Last: Adair Title: Robust parameter design for multiple-stage nanomanufacturing Abstract: Process reproducibility is a major concern for scientists and engineers, especially when new processes or new products are transitioned from laboratory-scale to full-scale manufacturing. Robust Parameter Design (RPD) is often used to mitigate this problem. However, in multiple-stage manufacturing process environments, it is difficult to employ the RPD concept because experiments cannot strictly follow the principle of complete randomization. Furthermore, the stages can be located at different sites, leading to multiple sets of noise factors. In the existing literature, only a single set of noise factors is considered. Therefore, in this research, the foundation of using the RPD concept with multistage experiments is developed and discussed. Some optimal design catalogs are provided based on a modified minimum aberration criterion. The context for this work is the development of a medical device made of nanoscale composites using a multiple-stage manufacturing process. Journal: IIE Transactions Pages: 580-589 Issue: 7 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.635176 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.635176 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:7:p:580-589 Template-Type: ReDIF-Article 1.0 Author-Name: Li Zeng Author-X-Name-First: Li Author-X-Name-Last: Zeng Author-Name: Qiang Zhou Author-X-Name-First: Qiang Author-X-Name-Last: Zhou Author-Name: Michael De Cicco Author-X-Name-First: Michael Author-X-Name-Last: De Cicco Author-Name: Xiaochun Li Author-X-Name-First: Xiaochun Author-X-Name-Last: Li Author-Name: Shiyu Zhou Author-X-Name-First: Shiyu Author-X-Name-Last: Zhou Title: Quantifying boundary effect of nanoparticles in metal matrix nanocomposite fabrication processes Abstract: Lightweight, high-strength Metal Matrix NanoComposites (MMNCs) are promising materials for use in automotive, aerospace, and numerous other applications. A uniform distribution of nanoparticles within the metal matrix is critical to the quality of such composites. In current MMNC fabrication processes, however, a boundary effect often occurs where the nanoparticles tend to gather around the grain boundaries of the metal matrix. To realize quality control and guide process improvement efforts, this article proposes a method for quantitatively assessing the boundary effect observed in microstructure images of MMNC samples based on the theory of spatial statistics. Two indices for quantifying the degree of boundary effect in an image, called Boundary Indices (BIs), are developed and their statistical properties are provided. The performances of the BIs are shown and compared in a numerical study. They are also applied to images from a real MMNC fabrication process to validate the effectiveness of the proposed method. 
Journal: IIE Transactions Pages: 551-567 Issue: 7 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.635180 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.635180 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:7:p:551-567 Template-Type: ReDIF-Article 1.0 Author-Name: Salil Desai Author-X-Name-First: Salil Author-X-Name-Last: Desai Author-Name: Ravindra Kaware Author-X-Name-First: Ravindra Author-X-Name-Last: Kaware Title: Computational modeling of nanodroplet evaporation for scalable micro-/nano-manufacturing Abstract: This article focuses on the Molecular Dynamics (MD) modeling and simulation of a droplet-based scalable micro-/nano-manufacturing process. In order to aid precise control of the nanodroplet deposition on substrates, it is important to study its evaporation dynamics. Water and acetone are used as candidate fluids for the simulation based on the differences in their densities and volatilities. The MD simulations describe the effects of ambient conditions and fluid properties on the vaporization of the nanodroplets. Physical drop size reductions, volume slices at the cross section, and root mean square deviations are evaluated for different time scales and temperature ranges. The MD results show different evaporation rates and varied molecular dispersion patterns outside the droplet core region. These results are validated using standard molecular density values and a theoretical evaporation model for the respective fluids at given ambient conditions. This research provides a systematic understanding of droplet evaporation for predicting size variations in the nanoscale regime. These results are applicable to a direct-write droplet-based approach for depositing different nanopatterns on substrates. Journal: IIE Transactions Pages: 568-579 Issue: 7 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.635181 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.635181 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:7:p:568-579 Template-Type: ReDIF-Article 1.0 Author-Name: Chao-Hsi Tsai Author-X-Name-First: Chao-Hsi Author-X-Name-Last: Tsai Author-Name: Chia-Jung Chang Author-X-Name-First: Chia-Jung Author-X-Name-Last: Chang Author-Name: Kan Wang Author-X-Name-First: Kan Author-X-Name-Last: Wang Author-Name: Chuck Zhang Author-X-Name-First: Chuck Author-X-Name-Last: Zhang Author-Name: Zhiyong Liang Author-X-Name-First: Zhiyong Author-X-Name-Last: Liang Author-Name: Ben Wang Author-X-Name-First: Ben Author-X-Name-Last: Wang Title: Predictive model for carbon nanotube–reinforced nanocomposite modulus driven by micromechanical modeling and physical experiments Abstract: This article proposes an improved surrogate model for the prediction of the elastic modulus of carbon nanotube–reinforced-nanocomposites. By statistically combining micromechanical modeling results with limited amounts of experimental data, a better predictive surrogate model is constructed using a two-stage sequential modeling approach. A set of data for multi-walled carbon nanotube–bismaleimide nanocomposites is used in a case study to demonstrate the effectiveness of the proposed surrogate modeling procedure. In the case study, the theoretical composite modulus is computed with micromechanical models, and the experimental modulus is measured through tensile tests. 
Both theoretical and experimental composite moduli are integrated by using a statistical adjustment method to construct the surrogate model. The results demonstrate an improved predictive ability compared to the original micromechanical model. Journal: IIE Transactions Pages: 590-602 Issue: 7 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.649385 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.649385 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:7:p:590-602 Template-Type: ReDIF-Article 1.0 Author-Name: Satish Bukkapatnam Author-X-Name-First: Satish Author-X-Name-Last: Bukkapatnam Author-Name: Sagar Kamarthi Author-X-Name-First: Sagar Author-X-Name-Last: Kamarthi Author-Name: Qiang Huang Author-X-Name-First: Qiang Author-X-Name-Last: Huang Author-Name: Abe Zeid Author-X-Name-First: Abe Author-X-Name-Last: Zeid Author-Name: Ranga Komanduri Author-X-Name-First: Ranga Author-X-Name-Last: Komanduri Title: Nanomanufacturing systems: opportunities for industrial engineers Journal: IIE Transactions Pages: 492-495 Issue: 7 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2012.658315 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.658315 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:7:p:492-495 Template-Type: ReDIF-Article 1.0 Author-Name: The Editors Title: Dedication Journal: IIE Transactions Pages: 491-491 Issue: 7 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2012.658319 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.658319 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:7:p:491-491 Template-Type: ReDIF-Article 1.0 Author-Name: Noa Ruschin-Rimini Author-X-Name-First: Noa Author-X-Name-Last: Ruschin-Rimini Author-Name: Irad Ben-Gal Author-X-Name-First: Irad Author-X-Name-Last: Ben-Gal Author-Name: Oded Maimon Author-X-Name-First: Oded Author-X-Name-Last: Maimon Title: Fractal geometry statistical process control for non-linear pattern-based processes Abstract: This article suggests a new Statistical Process Control (SPC) approach for data-rich environments. The proposed approach is based on the theory of fractal geometry. In particular, a monitoring scheme is developed that is based on fractal representation of the monitored data at each stage to account for online changes in monitored processes. The proposed fractal-SPC enables a dynamic inspection of non-linear and state-dependent processes with a discrete and finite state space. It is aimed for use with both univariate and multivariate data. The SPC is accomplished by applying an iterated function system to represent a process as a fractal and exploiting the fractal dimension as an important monitoring attribute. It is shown that data patterns can be transformed into representing fractals in a manner that preserves their reference (in control) correlations and dependencies. The fractal statistics can then be used for anomaly detection, pattern analysis, and root cause analysis. Numerical examples and comparisons to conventional SPC methods are given. Journal: IIE Transactions Pages: 355-373 Issue: 4 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.662420 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.662420 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
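The fractal-SPC abstract above represents process data via an iterated function system and monitors a fractal dimension. As a minimal illustration of the underlying quantity, the chaos game below generates a Sierpinski point set from a three-map IFS and a box-counting regression recovers its dimension, log 3 / log 2, about 1.585. The article's mapping of monitored process data to fractals is more involved; this sketch only shows how a dimension is measured from a point set.

```python
# Chaos game for a three-vertex iterated function system, followed by a
# box-counting dimension estimate: count occupied eps-boxes at several
# scales and regress log N(eps) on log(1/eps).
import numpy as np

rng = np.random.default_rng(2)
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])

pts = np.empty((200_000, 2))
p = np.array([0.1, 0.1])
jumps = rng.integers(0, 3, size=pts.shape[0])   # pre-drawn vertex choices
for i in range(pts.shape[0]):
    p = (p + vertices[jumps[i]]) / 2.0          # jump halfway to a vertex
    pts[i] = p

def box_count(points: np.ndarray, eps: float) -> int:
    """Number of eps-boxes occupied by the point set."""
    return len({tuple(cell) for cell in np.floor(points / eps).astype(int)})

eps = np.array([2.0 ** -k for k in range(2, 8)])
counts = np.array([box_count(pts, e) for e in eps])
slope, _ = np.polyfit(np.log(1 / eps), np.log(counts), 1)
print(f"box-counting dimension ~ {slope:.3f}")  # ~1.585 for the Sierpinski set
```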
Handle: RePEc:taf:uiiexx:v:45:y:2013:i:4:p:355-373 Template-Type: ReDIF-Article 1.0 Author-Name: O. Vanli Author-X-Name-First: O. Author-X-Name-Last: Vanli Author-Name: Chuck Zhang Author-X-Name-First: Chuck Author-X-Name-Last: Zhang Author-Name: Ben Wang Author-X-Name-First: Ben Author-X-Name-Last: Wang Title: An adaptive Bayesian approach for robust parameter design with observable time series noise factors Abstract: In Robust Parameter Design (RPD), the means and the covariances of noise variables, commonly assumed to be known, are estimated from operating or historical data and hence can involve considerable sampling variability. In addition, for cases where noise factors are measurable or strongly autocorrelated, a more effective control strategy is to update the noise factor estimates as production takes place. This article presents a Bayesian approach to online RPD that accounts for uncertainty in the noise factor and response models and allows the user to update the model estimates with production data and achieve more effective control performance. The proposed method is compared to existing dual response and certainty equivalence control approaches from the literature. Simulation examples and a case study that uses real manufacturing data from an injection molding process are used to demonstrate the proposed method. Journal: IIE Transactions Pages: 374-390 Issue: 4 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.689123 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.689123 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:4:p:374-390 Template-Type: ReDIF-Article 1.0 Author-Name: Yisha Xiang Author-X-Name-First: Yisha Author-X-Name-Last: Xiang Author-Name: David Coit Author-X-Name-First: David Author-X-Name-Last: Coit Author-Name: Qianmei Feng Author-X-Name-First: Qianmei Author-X-Name-Last: Feng Title: Subpopulations experiencing stochastic degradation: reliability modeling, burn-in, and preventive replacement optimization Abstract: For some engineering design and manufacturing applications, particularly for evolving and new technologies, populations of manufactured components can be heterogeneous and consist of several subpopulations. The co-existence of n subpopulations is particularly common in devices when the manufacturing process is still maturing or highly variable. A new model is developed and demonstrated to simultaneously determine burn-in and age-based preventive replacement policies for populations composed of distinct subpopulations subject to stochastic degradation. Unlike traditional burn-in procedures that stress devices to failure, we present a decision rule that uses a burn-in threshold on cumulative deterioration, in addition to burn-in time, to eliminate weak subpopulations. Only devices with post-burn-in deterioration levels below the burn-in threshold are released for field operations. Inspection errors are considered when screening burned-in devices. Preventive replacement is employed to prevent failures from occurring during field operation. We examine the effectiveness of such integrated policies for non-homogeneous populations. Numerical examples are provided to illustrate the proposed procedure. Sensitivity analysis is performed to analyze the impacts of model parameters on optimal policies.
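The burn-in decision rule described above releases only units whose cumulative deterioration at the end of burn-in falls below a threshold. A toy Monte Carlo version of that rule, assuming two Wiener-degrading subpopulations with invented parameters (inspection errors and the replacement policy are ignored here):

    import numpy as np

    rng = np.random.default_rng(1)
    n, t_burn, threshold = 10_000, 5.0, 3.0

    # Mixed population: 20% weak units with higher degradation drift (invented).
    weak = rng.random(n) < 0.2
    drift, sigma = np.where(weak, 1.0, 0.3), 0.4

    # Cumulative deterioration at the end of burn-in (Wiener process).
    deg = drift * t_burn + sigma * np.sqrt(t_burn) * rng.standard_normal(n)

    released = deg < threshold  # release only units below the threshold
    print("release rate:", released.mean())
    print("weak fraction before:", weak.mean(), "after:", weak[released].mean())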
Numerical results indicate there are potential cost savings from simultaneously determining burn-in and maintenance policies as opposed to a traditional approach that makes decisions on burn-in and maintenance actions separately. Journal: IIE Transactions Pages: 391-408 Issue: 4 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.689124 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.689124 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:4:p:391-408 Template-Type: ReDIF-Article 1.0 Author-Name: Chien-Hua Lin Author-X-Name-First: Chien-Hua Author-X-Name-Last: Lin Author-Name: Sheng-Tsaing Tseng Author-X-Name-First: Sheng-Tsaing Author-X-Name-Last: Tseng Author-Name: Hsiang-Fan Wang Author-X-Name-First: Hsiang-Fan Author-X-Name-Last: Wang Title: Modified EWMA controller subject to metrology delay Abstract: The resource and capacity limitations of the metrology equipment used in the creation of integrated circuits result in delay being a common issue for the practical implementation of run-to-run control schemes. In the literature, several papers have examined the effects of using metrology delay data or Virtual Metrology (VM) prediction models on the transient behavior and asymptotic stability of Exponentially Weighted Moving Average (EWMA) controllers. However, these procedures still suffer from large bias and/or variation of the process response variable. To overcome this difficulty, a modified EWMA controller is proposed that adjusts the process by using both the metrology delay data and VM information. The analytical expression for the process output of the developed controller is derived, along with its long-term stability and short-term output performance. Furthermore, under some specific parameter settings, a more comprehensive study is presented to illustrate that the proposed controller has the capability to reduce the total mean square error of the process response variable compared with existing controllers.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the Appendices to this article.] Journal: IIE Transactions Pages: 409-421 Issue: 4 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.689242 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.689242 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:4:p:409-421 Template-Type: ReDIF-Article 1.0 Author-Name: Xiao Liu Author-X-Name-First: Xiao Author-X-Name-Last: Liu Author-Name: Jingrui Li Author-X-Name-First: Jingrui Author-X-Name-Last: Li Author-Name: Khalifa Al-Khalifa Author-X-Name-First: Khalifa Author-X-Name-Last: Al-Khalifa Author-Name: Abdelmagid Hamouda Author-X-Name-First: Abdelmagid Author-X-Name-Last: Hamouda Author-Name: David Coit Author-X-Name-First: David Author-X-Name-Last: Coit Author-Name: Elsayed Elsayed Author-X-Name-First: Elsayed Author-X-Name-Last: Elsayed Title: Condition-based maintenance for continuously monitored degrading systems with multiple failure modes Abstract: This article develops an optimum Condition-Based Maintenance (CBM) policy for continuously monitored degrading systems with multiple failure modes. The degradation of system state is described by a stochastic process, and a maintenance alarm is used to signal when the degradation reaches a threshold level.
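The run-to-run logic underlying the modified EWMA controller of Lin, Tseng, and Wang is easy to state: an EWMA estimate of the process disturbance drives the next recipe adjustment, and metrology delay means that estimate is refreshed with stale measurements. A bare-bones simulation of the plain delayed-metrology EWMA controller (not the proposed modification; the linear process and all parameter values are assumptions for illustration):

    import numpy as np

    rng = np.random.default_rng(2)
    T, d = 60, 3                          # number of runs and metrology delay
    alpha, beta, target = 2.0, 1.5, 0.0   # true process: y = alpha + beta*u + noise
    lam, a_hat = 0.3, 0.0                 # EWMA weight and offset estimate

    u, y = np.zeros(T), np.zeros(T)
    for t in range(T):
        u[t] = (target - a_hat) / beta    # recipe for run t
        y[t] = alpha + beta * u[t] + 0.2 * rng.standard_normal()
        if t >= d:                        # measurement arrives d runs late
            a_hat = lam * (y[t - d] - beta * u[t - d]) + (1 - lam) * a_hat
    print("mean |deviation| over the last 20 runs:", np.abs(y[-20:]).mean())

Increasing d in this sketch shows the transient bias that motivates the article's use of VM information alongside delayed metrology.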
Unlike existing CBM models, this article considers multiple sudden failures that can occur during a system's degradation. The failure rate corresponding to each failure mode is influenced by either the age of the system, the state degradation of the system, or both. A joint model is constructed for the statistically dependent time-to-maintenance due to system degradation and time-to-failure of different failure modes. This model is then utilized to obtain the optimum maintenance threshold level that maximizes the system's limiting availability over its life cycle or minimizes the long-run cost per unit time. A numerical example, using real-life data from a reliability test of communication systems, is provided to demonstrate the application of the proposed approach. Journal: IIE Transactions Pages: 422-435 Issue: 4 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.690930 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.690930 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:4:p:422-435 Template-Type: ReDIF-Article 1.0 Author-Name: Changsoon Park Author-X-Name-First: Changsoon Author-X-Name-Last: Park Title: Economic design of charts when signals may be misclassified and the bounded reset chart Abstract: In monitoring manufacturing processes, mistakes made by statisticians can lead to Type I and Type II errors. Process engineers also commit action errors, such as search and judgment errors, when following the out-of-control action plan. These errors arise because process engineers are not always successful in searching for special causes or judging search results correctly. Action errors committed by process engineers have not been considered in the control chart literature; however, they exist in most manufacturing processes. They degrade the process quality as well as increase the control cost. The efficiency of a Traditional Control Chart (TCC) procedure is re-evaluated in an economic context that takes these action errors into account. It is shown that the efficiency of the TCC procedure is overestimated when these errors are present. The Bounded Reset Chart (BRC) procedure is proposed as an alternative to the TCC procedure. The BRC procedure resets the process upon a signal instead of searching for special causes, so that action errors are not committed. An example and an extensive comparison of the economic design of the TCC and BRC procedures are presented and it is shown that the BRC procedure is more effective than the TCC procedure for cases where action errors are considered. Journal: IIE Transactions Pages: 436-448 Issue: 4 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.695101 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.695101 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:4:p:436-448 Template-Type: ReDIF-Article 1.0 Author-Name: Hakan Tarakci Author-X-Name-First: Hakan Author-X-Name-Last: Tarakci Author-Name: Kwei Tang Author-X-Name-First: Kwei Author-X-Name-Last: Tang Author-Name: Sunantha Teyarachakul Author-X-Name-First: Sunantha Author-X-Name-Last: Teyarachakul Title: Learning and forgetting effects on maintenance outsourcing Abstract: This article studies the effects of learning and forgetting on the design of maintenance outsourcing contracts. Consider a situation in which a manufacturer offers an outsourcing contract to an external contractor to maintain a manufacturing process.
Under the contract, the contractor schedules and performs preventive maintenance and repairs the process whenever a breakdown occurs. Two types of learning effects on the cost and time of performing preventive maintenance are considered: learning from experience (natural) and learning by a costly effort/investment. It is assumed that forgetting occurs under each learning type. A model is developed for designing an optimal outsourcing contract to maximize the manufacturer's profit. An extensive numerical analysis is carried out to empirically demonstrate the effects of learning and forgetting on the optimal maintenance contract and the manufacturer's profit. Journal: IIE Transactions Pages: 449-463 Issue: 4 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.706734 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.706734 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:4:p:449-463 Template-Type: ReDIF-Article 1.0 Author-Name: Xianghui Ning Author-X-Name-First: Xianghui Author-X-Name-Last: Ning Author-Name: Fugee Tsung Author-X-Name-First: Fugee Author-X-Name-Last: Tsung Title: Improved design of kernel distance–based charts using support vector methods Abstract: Statistical Process Control (SPC) techniques that originated in manufacturing have also been used to monitor the quality of various service processes, which can be characterized by one or several variables. In the literature, these variables are usually assumed to be either continuous or categorical. However, in reality, the quality characteristics of a service process may include both continuous and categorical variables (i.e., mixed-type variables). Direct application of conventional SPC techniques to monitor such mixed-type variables may cause increased false alarm rates and misleading conclusions. One promising solution is the kernel distance–based chart (K-chart), which makes use of Support Vector Machine (SVM) methods and requires no assumption on the variable distribution. This article provides an improved design of the SVM-based K-chart. A systematic approach to parameter selection for the considered charts is provided. An illustration and comparison are presented based on a real example from a logistics firm. The results confirm the improved performance obtained by using the proposed design scheme. Journal: IIE Transactions Pages: 464-476 Issue: 4 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.712237 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.712237 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:4:p:464-476 Template-Type: ReDIF-Article 1.0 Author-Name: Jianguo Wu Author-X-Name-First: Jianguo Author-X-Name-Last: Wu Author-Name: Yong Chen Author-X-Name-First: Yong Author-X-Name-Last: Chen Author-Name: Shiyu Zhou Author-X-Name-First: Shiyu Author-X-Name-Last: Zhou Title: Online detection of steady-state operation using a multiple-change-point model and exact Bayesian inference Abstract: The detection of steady-state operation is critical in system/process performance assessment, optimization, fault detection, and process automation and control. In this article, we propose a new robust and computationally efficient online steady-state detection method using multiple change-point models and exact Bayesian inference. An average run length approximation is derived that can provide insight and guidance in the application of the proposed algorithm.
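For orientation, a far simpler steady-state detector than the exact Bayesian multiple-change-point method above is a sliding-window trend test: declare steady state once the fitted linear slope in the window is statistically negligible. The sketch below implements that conventional baseline, not the article's algorithm (the window length and threshold are arbitrary):

    import numpy as np

    def is_steady(window, t_crit=2.0):
        # Steady if the slope of a fitted line is statistically negligible.
        x = np.arange(len(window))
        slope, intercept = np.polyfit(x, window, 1)
        resid = window - (slope * x + intercept)
        se = np.sqrt(resid.var(ddof=2) / ((x - x.mean()) ** 2).sum())
        return abs(slope / se) < t_crit

    rng = np.random.default_rng(3)
    signal = np.concatenate([np.linspace(5, 1, 200), np.ones(200)])
    signal = signal + 0.05 * rng.standard_normal(signal.size)
    for start in range(0, len(signal) - 50, 50):
        print(start, is_steady(signal[start:start + 50]))

Such window tests are exactly the kind of method the article reports outperforming in accuracy and robustness.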
An extensive numerical analysis shows that the proposed method is much more accurate and robust than currently available methods. Journal: IIE Transactions Pages: 599-613 Issue: 7 Volume: 48 Year: 2016 Month: 7 X-DOI: 10.1080/0740817X.2015.1110268 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1110268 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:7:p:599-613 Template-Type: ReDIF-Article 1.0 Author-Name: Yang Zhao Author-X-Name-First: Yang Author-X-Name-Last: Zhao Author-Name: Abhishek K. Shrivastava Author-X-Name-First: Abhishek K. Author-X-Name-Last: Shrivastava Author-Name: Kwok Leung Tsui Author-X-Name-First: Kwok Leung Author-X-Name-Last: Tsui Title: Imbalanced classification by learning hidden data structure Abstract: Approaches to solve the imbalanced classification problem usually focus on rebalancing the class sizes, neglecting the effect of the hidden structure within the majority class. The purpose of this article is to first highlight the effect of sub-clusters within the majority class on the detection of the minority instances and then handle the imbalanced classification problem by learning the structure in the data. We propose a decomposition-based approach to a two-class imbalanced classification problem. This approach works by first learning the hidden structure of the majority class using an unsupervised learning algorithm and thus transforming the classification problem into several classification sub-problems. The base classifier is constructed on each sub-problem. The ensemble is tuned to increase its sensitivity toward the minority class. We also provide a metric for selecting the clustering algorithm by comparing estimates of the stability of the decomposition, which appears necessary for good classifier performance. We demonstrate the performance of the proposed approach through various real data sets. Journal: IIE Transactions Pages: 614-628 Issue: 7 Volume: 48 Year: 2016 Month: 7 X-DOI: 10.1080/0740817X.2015.1110269 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1110269 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:7:p:614-628 Template-Type: ReDIF-Article 1.0 Author-Name: I Tang Yu Author-X-Name-First: I Tang Author-X-Name-Last: Yu Title: A Bayesian approach to the identification of active location and dispersion factors Abstract: In this article, we extend the modified Box–Meyer method and propose an approach to identify both active location and dispersion factors in a screening experiment. Since several candidate models can be simultaneously considered under the framework of Bayesian model averaging, the proposed method can overcome the problem of failing to identify some active factors because of either the alias structure or misspecification of the location model. For illustration, three practical experiments and one synthetic data set are analyzed. Journal: IIE Transactions Pages: 629-637 Issue: 7 Volume: 48 Year: 2016 Month: 7 X-DOI: 10.1080/0740817X.2015.1122252 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1122252 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:48:y:2016:i:7:p:629-637 Template-Type: ReDIF-Article 1.0 Author-Name: Erik Tryggvi Striz Bjarnason Author-X-Name-First: Erik Tryggvi Striz Author-X-Name-Last: Bjarnason Author-Name: Sharareh Taghipour Author-X-Name-First: Sharareh Author-X-Name-Last: Taghipour Title: Periodic inspection frequency and inventory policies for a k-out-of-n system Abstract: We investigate the maintenance and inventory policy for a k-out-of-n system where the components' failures are hidden and follow a non-homogeneous Poisson process. Two types of inspections are performed to find failed components: planned periodic inspections and unplanned opportunistic inspections. The latter are performed at system failure times when n − k + 1 components are simultaneously down. In all cases, the failed components are either minimally repaired or replaced with spare parts from the inventory. The inventory is replenished either periodically or when the system fails. The periodic orders have a random lead-time, but there is no lead-time for emergency orders, as these are placed at system failure times. The key objective is to develop a method to solve the joint maintenance and inventory problem for systems with a large number of components, long planning horizon, and large inventory. We construct a simulation model to jointly optimize the periodic inspection interval, the periodic reorder interval, and periodic and emergency order-up-to levels. Due to the large search space, it is infeasible to try all possible combinations of decision variables in a reasonable amount of time. Thus, the simulation model is integrated with a heuristic search algorithm to obtain the optimal solution. Journal: IIE Transactions Pages: 638-650 Issue: 7 Volume: 48 Year: 2016 Month: 7 X-DOI: 10.1080/0740817X.2015.1122253 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1122253 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:7:p:638-650 Template-Type: ReDIF-Article 1.0 Author-Name: Kaveh Bastani Author-X-Name-First: Kaveh Author-X-Name-Last: Bastani Author-Name: Prahalad K. Rao Author-X-Name-First: Prahalad K. Author-X-Name-Last: Rao Author-Name: Zhenyu (James) Kong Author-X-Name-First: Zhenyu (James) Author-X-Name-Last: Kong Title: An online sparse estimation-based classification approach for real-time monitoring in advanced manufacturing processes from heterogeneous sensor data Abstract: The objective of this work is to realize real-time monitoring of process conditions in advanced manufacturing using multiple heterogeneous sensor signals. To achieve this objective we propose an approach invoking the concept of sparse estimation called online sparse estimation-based classification (OSEC). The novelty of the OSEC approach is in representing data from sensor signals as an underdetermined linear system of equations and subsequently solving the underdetermined linear system using a newly developed greedy Bayesian estimation method. We apply the OSEC approach to two advanced manufacturing scenarios, namely, a fused filament fabrication additive manufacturing process and an ultraprecision semiconductor chemical–mechanical planarization process. Using the proposed OSEC approach, process drifts are detected and classified with higher accuracy compared with popular machine learning techniques. Process drifts were detected and classified with a fidelity approaching 90% (F-score) using OSEC.
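Sparse-estimation classification in the OSEC spirit represents a new sensor snapshot as a sparse combination of labeled training snapshots and assigns the class whose columns reconstruct it best. The sketch below substitutes plain orthogonal matching pursuit for the article's greedy Bayesian estimator, with a synthetic dictionary and labels:

    import numpy as np

    def omp(D, y, k):
        # Orthogonal matching pursuit: greedy k-sparse solution of D @ x ~ y.
        resid, support = y.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(D.T @ resid))))
            x_s, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            resid = y - D[:, support] @ x_s
        x = np.zeros(D.shape[1])
        x[support] = x_s
        return x

    rng = np.random.default_rng(4)
    D = rng.standard_normal((20, 40))             # training snapshots as columns
    D /= np.linalg.norm(D, axis=0)
    labels = np.repeat([0, 1], 20)                # two process conditions
    y = D[:, 5] + 0.05 * rng.standard_normal(20)  # new snapshot, truly class 0
    x = omp(D, y, k=3)
    errs = [np.linalg.norm(y - D[:, labels == c] @ x[labels == c]) for c in (0, 1)]
    print("assigned class:", int(np.argmin(errs)))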
In comparison, conventional signal analysis techniques—e.g., neural networks, support vector machines, quadratic discriminant analysis, naïve Bayes—were evaluated with F-scores in the range of 40% to 70%. Journal: IIE Transactions Pages: 579-598 Issue: 7 Volume: 48 Year: 2016 Month: 7 X-DOI: 10.1080/0740817X.2015.1122254 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1122254 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:7:p:579-598 Template-Type: ReDIF-Article 1.0 Author-Name: Rodrigo Pascual Author-X-Name-First: Rodrigo Author-X-Name-Last: Pascual Author-Name: Gabriel Santelices Author-X-Name-First: Gabriel Author-X-Name-Last: Santelices Author-Name: Haitao Liao Author-X-Name-First: Haitao Author-X-Name-Last: Liao Author-Name: Sergio Maturana Author-X-Name-First: Sergio Author-X-Name-Last: Maturana Title: Channel coordination on fixed-term maintenance outsourcing contracts Abstract: This article studies the positive and negative effects that fixed-term maintenance contracts may have on related decision-making. We present an original model to estimate such effects and select the optimal preventive maintenance intervals and contract terms for pieces of equipment that are serviced by an external party. In the context of the contract, the intention of each party is in general to maximize its own profit, which usually leads to unaligned interests and decisions. To resolve this issue, we propose incentive schemes to ensure the contract sustainability by achieving channel coordination between the client and its service vendor. Special focus is placed on how the net-present-value analyses performed by both parties affect decision-making regarding equipment maintenance. Our model considers a new alternative of negotiating contracts with non-constant maintenance intervals. The proposed model helps to identify conditions that justify maintenance deferrals with their associated negligence, in terms of life cycle reduction and performance deterioration, when no channel coordination is promoted. Additionally, we present a simple procedure to settle an optimal contract duration, benefiting both parties. The proposed methodology is tested using a baseline case study from the literature. It illustrates how return-on-investment analysis may significantly impact optimal maintenance intervals during the contract for both parties. Accordingly, incentives need to be re-evaluated to achieve channel coordination. The suggested approach can be easily implemented in commercial spreadsheets, facilitating sensitivity analyses. Journal: IIE Transactions Pages: 651-660 Issue: 7 Volume: 48 Year: 2016 Month: 7 X-DOI: 10.1080/0740817X.2015.1122255 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1122255 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:7:p:651-660 Template-Type: ReDIF-Article 1.0 Author-Name: Marina Vives-Mestres Author-X-Name-First: Marina Author-X-Name-Last: Vives-Mestres Author-Name: Josep Daunis-i-Estadella Author-X-Name-First: Josep Author-X-Name-Last: Daunis-i-Estadella Author-Name: Josep-Antoni Martín-Fernández Author-X-Name-First: Josep-Antoni Author-X-Name-Last: Martín-Fernández Title: Signal interpretation in Hotelling’s control chart for compositional data Abstract: Nowadays, the control of the concentrations of elements is of crucial importance in industry.
Concentrations are expressed in terms of proportions or percentages, which means that they are Compositional Data (CoDa). CoDa are defined as vectors of positive elements that represent parts of a whole and usually add to a constant sum. The classical T2 Control Chart is not appropriate for CoDa; rather, it is better to use a compositional T2 Control Chart (T2C CC). This article generalizes the interpretation of the out-of-control signals of the individual T2C CC for more than three components. We propose two methods for identifying the ratio of components that mainly contribute to the signal. The first one is suitable for low-dimensional problems and consists in finding the log ratio of the components that maximizes the univariate T2 statistic. The second approach is an optimized method for large-dimensional problems that simplifies the calculation by transforming the coordinates into a sphere. We illustrate the T2C CC signal interpretation with a practical example from the chemical and pharmaceutical industry. Journal: IIE Transactions Pages: 661-672 Issue: 7 Volume: 48 Year: 2016 Month: 7 X-DOI: 10.1080/0740817X.2015.1125042 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1125042 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:7:p:661-672 Template-Type: ReDIF-Article 1.0 Author-Name: Dong Ding Author-X-Name-First: Dong Author-X-Name-Last: Ding Author-Name: Fugee Tsung Author-X-Name-First: Fugee Author-X-Name-Last: Tsung Author-Name: Jian Li Author-X-Name-First: Jian Author-X-Name-Last: Li Title: Rank-based process control for mixed-type data Abstract: Conventional statistical process control tools target either continuous or categorical data but seldom both at the same time. However, mixed-type data consisting of both continuous and categorical observations are becoming more common in modern manufacturing processes and service management. Yet they cannot be analyzed using traditional methods. By assuming that there is a latent continuous variable that determines the attribute levels of a categorical variable, the ordinal information among the attribute levels can be exploited. This enables us to simultaneously describe and monitor continuous and categorical data in a unified framework of standardized ranks, based on which a multivariate exponentially weighted moving average control chart is proposed. This control chart specializes in detecting location shifts in continuous data and in latent continuous distributions of categorical data. Numerical simulations show that our proposed chart can efficiently detect location shifts and is robust to various distributions. Journal: IIE Transactions Pages: 673-683 Issue: 7 Volume: 48 Year: 2016 Month: 7 X-DOI: 10.1080/0740817X.2015.1126002 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1126002 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:7:p:673-683 Template-Type: ReDIF-Article 1.0 Author-Name: Yanting Li Author-X-Name-First: Yanting Author-X-Name-Last: Li Author-Name: Lianjie Shu Author-X-Name-First: Lianjie Author-X-Name-Last: Shu Author-Name: Fugee Tsung Author-X-Name-First: Fugee Author-X-Name-Last: Tsung Title: A false discovery approach for scanning spatial disease clusters with arbitrary shapes Abstract: The spatial scan statistic is one of the main tools for testing the presence of clusters in a geographical region.
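The compositional charting idea above reduces to a small recipe: map compositions through a log-ratio transform, then apply an ordinary Hotelling T2 statistic in the transformed coordinates. A minimal sketch using the centered log-ratio transform with one coordinate dropped to avoid singularity (the article works with proper isometric coordinates; the Phase I data here are simulated):

    import numpy as np

    def clr(x):
        # Centered log-ratio transform of compositions (rows sum to one).
        g = np.exp(np.log(x).mean(axis=1, keepdims=True))
        return np.log(x / g)

    rng = np.random.default_rng(5)
    raw = rng.gamma(shape=np.array([5.0, 3.0, 2.0, 1.0]), scale=1.0, size=(200, 4))
    comp = raw / raw.sum(axis=1, keepdims=True)   # Phase I compositions

    z = clr(comp)[:, :-1]   # drop one coordinate: clr covariance is singular
    mu, S = z.mean(axis=0), np.cov(z, rowvar=False)
    Sinv = np.linalg.inv(S)

    def t2(xnew):
        d = clr(xnew.reshape(1, -1))[0, :-1] - mu
        return float(d @ Sinv @ d)

    print(t2(np.array([0.4, 0.3, 0.2, 0.1])))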
The recently proposed Fast Subset Scan (FSS) method represents an important extension, as it is computationally efficient and enables detection of clusters with arbitrary shapes. Aimed at automatically and simultaneously detecting multiple clusters of any shape, this article explores the False Discovery (FD) approach originating from multiple hypothesis testing. We show that the FD approach can provide a higher detection power and better identification capability than the standard scan and FSS methods, on average. Journal: IIE Transactions Pages: 684-698 Issue: 7 Volume: 48 Year: 2016 Month: 7 X-DOI: 10.1080/0740817X.2015.1133940 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1133940 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:7:p:684-698 Template-Type: ReDIF-Article 1.0 Author-Name: Li Zeng Author-X-Name-First: Li Author-X-Name-Last: Zeng Author-Name: Nan Chen Author-X-Name-First: Nan Author-X-Name-Last: Chen Title: Bayesian hierarchical modeling for monitoring optical profiles in low-E glass manufacturing processes Abstract: Low-emittance (low-E) glass manufacturing has become an important sector of the glass industry owing to the energy efficiency of such glasses. However, the quality control scheme in the current processes is rather primitive and advanced statistical quality control methods need to be developed. As the first attempt for this purpose, this article considers monitoring of optical profiles, which are typical quality measurements in low-E glass manufacturing. A Bayesian hierarchical approach is proposed for modeling the optical profiles, which conducts model selection and estimation in an integrated framework. The effectiveness of the proposed approach is validated in a numerical study, and its use in Phase I analysis of optical profiles is demonstrated in a case study. The proposed approach will lay a foundation for quality control and variation reduction in low-E glass manufacturing. Journal: IIE Transactions Pages: 109-124 Issue: 2 Volume: 47 Year: 2015 Month: 2 X-DOI: 10.1080/0740817X.2014.892230 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.892230 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:2:p:109-124 Template-Type: ReDIF-Article 1.0 Author-Name: Bariş Tan Author-X-Name-First: Bariş Author-X-Name-Last: Tan Title: Mathematical programming representations of the dynamics of continuous-flow production systems Abstract: This study presents a mathematical programming representation of discrete-event systems with a continuous time and mixed continuous-discrete state space. In particular, continuous material flow production systems are considered. A mathematical programming representation is used to generate simulated sample realizations of the system and also to optimize control parameters. The mathematical programming approach has been used in the literature for performance evaluation and optimization of discrete material flow production systems. In order to show the applicability of the same approach to continuous material flow systems, this article focuses on optimal production flow rate control problems for a continuous material flow system with an unreliable station and deterministic demand. These problems exhibit most of the dynamics observed in various continuous flow production systems: flow dynamics, machine failures and repairs, changing flow rates due to system status, and control.
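A toy instance conveys the flavor of such a mathematical programming representation: a deterministic production-rate control problem with a known failure interval, written as a linear program over production, inventory, and backlog variables (all data below are invented, and the article's formulations also handle random failures and integer variables):

    import numpy as np
    from scipy.optimize import linprog

    # Choose production rates p_t to meet a constant demand at minimum
    # holding-plus-backlog cost, with the station down in periods 4-6.
    T, demand, h, b = 10, 3.0, 1.0, 5.0
    cap = np.full(T, 5.0)
    cap[4:7] = 0.0

    # Variables: p (T), Iplus (T), Iminus (T); balance constraint:
    # cumulative production - Iplus_t + Iminus_t = cumulative demand.
    c = np.concatenate([np.zeros(T), h * np.ones(T), b * np.ones(T)])
    A_eq, b_eq = np.zeros((T, 3 * T)), np.zeros(T)
    for t in range(T):
        A_eq[t, :t + 1] = 1.0
        A_eq[t, T + t], A_eq[t, 2 * T + t] = -1.0, 1.0
        b_eq[t] = demand * (t + 1)
    bounds = [(0, cap[t]) for t in range(T)] + [(0, None)] * (2 * T)
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print(res.x[:T].round(2))  # optimal rates build inventory before the failure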
Moreover, these problems include decision variables related to the control policies and different objective functions. By analyzing the backlog, lost sales, and production and subcontracting rate control problems, it is shown that a mixed-integer linear programming formulation with a linear objective function and linear constraints can be developed to determine the simulated performance of the system. The optimal value of the control policy that optimizes an objective function that includes the estimated expected inventory carrying and backlog cost and also the revenue through sales can also be determined by solving a quadratic integer program with a quadratic objective function and linear constraints. As a result, it is shown that the mathematical programming representation is also a viable method for performance evaluation and optimization of continuous material production systems. Journal: IIE Transactions Pages: 173-189 Issue: 2 Volume: 47 Year: 2015 Month: 2 X-DOI: 10.1080/0740817X.2014.892232 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.892232 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:2:p:173-189 Template-Type: ReDIF-Article 1.0 Author-Name: Ghulam Moeen Uddin Author-X-Name-First: Ghulam Moeen Author-X-Name-Last: Uddin Author-Name: Katherine S. Ziemer Author-X-Name-First: Katherine S. Author-X-Name-Last: Ziemer Author-Name: Abe Zeid Author-X-Name-First: Abe Author-X-Name-Last: Zeid Author-Name: Sagar Kamarthi Author-X-Name-First: Sagar Author-X-Name-Last: Kamarthi Title: Monte Carlo study of the molecular beam epitaxy process for manufacturing magnesium oxide nano-scale films Abstract: This article presents a Monte Carlo-based factor-wise sensitivity analysis conducted on the performance variables of a Molecular Beam Epitaxy (MBE) process. Using lab-scale MBE equipment, magnesium oxide (MgO 111) films are grown on a hexagonal silicon carbide 6H-SiC (0001) substrate. The thin film surface chemistry in terms of O‒Mg and OH‒Mg bonding states is examined using X-ray photoelectron spectroscopy. A multi-layer perceptron is used to model the process. Monte Carlo experiments are conducted on the process model to study the causal relationship between the critical process control variables and the key performance indicators. The sensitivity of O‒Mg and OH‒Mg bonding states in MgO films to each of the four process control variables (growth time, substrate temperature, magnesium source temperature, and percentage starting oxygen) is examined. Each control variable is varied individually while keeping other control variables constant at their mid values in one case and randomly varying in another case. The sensitivity of the performance variables to the interaction between a select set of control variable pairs is also examined. The interaction between substrate temperature and oxygen on the starting surface is found to significantly affect the dynamics of OH‒Mg bonding state. Journal: IIE Transactions Pages: 125-140 Issue: 2 Volume: 47 Year: 2015 Month: 2 X-DOI: 10.1080/0740817X.2014.905732 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.905732 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:2:p:125-140 Template-Type: ReDIF-Article 1.0 Author-Name: Arash Pourhabib Author-X-Name-First: Arash Author-X-Name-Last: Pourhabib Author-Name: Jianhua Z. Huang Author-X-Name-First: Jianhua Z. 
Author-X-Name-Last: Huang Author-Name: Kan Wang Author-X-Name-First: Kan Author-X-Name-Last: Wang Author-Name: Chuck Zhang Author-X-Name-First: Chuck Author-X-Name-Last: Zhang Author-Name: Ben Wang Author-X-Name-First: Ben Author-X-Name-Last: Wang Author-Name: Yu Ding Author-X-Name-First: Yu Author-X-Name-Last: Ding Title: Modulus prediction of buckypaper based on multi-fidelity analysis involving latent variables Abstract: Buckypapers are thin sheets produced from Carbon NanoTubes (CNTs) that effectively transfer the exceptional mechanical properties of CNTs to bulk materials. To accomplish a sensible tradeoff between effectiveness and efficiency in predicting the mechanical properties of CNT buckypapers, a multi-fidelity analysis appears necessary, combining costly but high-fidelity physical experiment outputs with affordable but low-fidelity Finite Element Analysis (FEA)-based simulation responses. Unlike the existing multi-fidelity analysis reported in the literature, not all of the input variables in the FEA simulation code are observable in the physical experiments; the unobservable ones are the latent variables in our multi-fidelity analysis. This article presents a formulation for multi-fidelity analysis problems involving latent variables and further develops a solution procedure based on nonlinear optimization. In a broad sense, this latent variable-involved multi-fidelity analysis falls under the category of non-isometric matching problems. The performance of the proposed method is compared with both a single-fidelity analysis and the existing multi-fidelity analysis without considering latent variables, and the superiority of the new method is demonstrated, especially when we perform extrapolation. Journal: IIE Transactions Pages: 141-152 Issue: 2 Volume: 47 Year: 2015 Month: 2 X-DOI: 10.1080/0740817X.2014.917777 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.917777 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:2:p:141-152 Template-Type: ReDIF-Article 1.0 Author-Name: Zhufeng Gao Author-X-Name-First: Zhufeng Author-X-Name-Last: Gao Author-Name: Jonathan F. Bard Author-X-Name-First: Jonathan F. Author-X-Name-Last: Bard Author-Name: Rodolfo Chacon Author-X-Name-First: Rodolfo Author-X-Name-Last: Chacon Author-Name: John Stuber Author-X-Name-First: John Author-X-Name-Last: Stuber Title: An assignment-sequencing methodology for scheduling assembly and test operations with multi-pass requirements Abstract: This article presents a three-phase methodology for scheduling assembly and test operations for semiconductor devices. The facility in which these operations are performed is a re-entrant flow shop consisting of several dozen to several hundred machines and up to 1000 specialized tools. The semiconductor devices are contained in lots, and each lot follows a specific route through the facility, perhaps returning to the same machine multiple times. Each step in the route is referred to as a “pass.” In the first phase of the methodology an extended assignment model is solved to simultaneously assign tooling and lots to the machines. Four prioritized objectives are considered: minimize the weighted sum of key device shortages, maximize the weighted sum of lots processed, minimize the number of machines used, and minimize the makespan. In the second phase, lots are optimally sequenced on their assigned machines using the same prioritized objectives.
Due to the precedence relations induced by the pass requirements, some lots may have to be delayed or removed from the assignment model solution to ensure that no machine runs beyond the planning horizon. In the third phase, machines are reset to allow additional lots to be processed when tooling is available. The methodology was tested using data provided by the Assembly and Test facility of a leading manufacturer. The results indicate that high-quality solutions can be obtained within 1 hour when compared with those obtained with a greedy randomized adaptive search procedure. Cost reductions were observed across all objectives and averaged 62% in the aggregate. Journal: IIE Transactions Pages: 153-172 Issue: 2 Volume: 47 Year: 2015 Month: 2 X-DOI: 10.1080/0740817X.2014.917778 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.917778 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:2:p:153-172 Template-Type: ReDIF-Article 1.0 Author-Name: Dorit S. Hochbaum Author-X-Name-First: Dorit S. Author-X-Name-Last: Hochbaum Author-Name: Michael R. Wagner Author-X-Name-First: Michael R. Author-X-Name-Last: Wagner Title: Production cost functions and demand uncertainty effects in price-only contracts Abstract: The price-only contract is the simplest and most common contract between a supplier and buyer in a supply chain. In such a contract, the supplier proposes a fixed wholesale price, and the buyer chooses a corresponding order quantity. The buyer’s optimal behavior is modeled using the Newsvendor model and the supplier’s optimal behavior is modeled as the solution to an optimization problem. This article explores, for the first time, the impact of general production costs on the supplier’s and buyer’s behavior. It is revealed that increased supplier’s production efficiency, reflected in lower marginal production costs, increases the buyer’s optimal profit. Therefore, a buyer would always prefer the more efficient supplier. A higher supplier efficiency, however, may or may not increase the supplier’s optimal profit, depending on the production function’s fixed costs. The effect of demand uncertainty, as measured by the coefficient of variation, is shown to increase the optimal order quantity. The uncertainty effect on the firms’ optimal profits is analyzed. Also, the relationship between production efficiency and the response to demand uncertainty is explored and it is shown that a higher efficiency level increases the responsiveness and volatility of the supplier’s production quantities. Thus, higher-efficiency suppliers are better positioned to respond to changes in the demand uncertainty in the supply chain. Journal: IIE Transactions Pages: 190-202 Issue: 2 Volume: 47 Year: 2015 Month: 2 X-DOI: 10.1080/0740817X.2014.938843 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.938843 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:2:p:190-202 Template-Type: ReDIF-Article 1.0 Author-Name: Milind Dawande Author-X-Name-First: Milind Author-X-Name-Last: Dawande Author-Name: H. Geismar Author-X-Name-First: H.
Author-X-Name-Last: Geismar Author-Name: Michael Pinedo Author-X-Name-First: Michael Author-X-Name-Last: Pinedo Author-Name: Chelliah Sriskandarajah Author-X-Name-First: Chelliah Author-X-Name-Last: Sriskandarajah Title: Throughput optimization in dual-gripper interval robotic cells Abstract: Interval robotic cells with several processing stages (chambers) have been increasingly used for diverse wafer fabrication processes in semiconductor manufacturing. Processes such as low-pressure chemical vapor deposition, etching, cleaning, and chemical-mechanical planarization require strict time control for each processing stage. A wafer treated in a processing chamber must leave that chamber within a specified time limit; otherwise the wafer is exposed to residual gases and heat, resulting in quality problems. Interval robotic cells are also widely used in the manufacture of printed circuit boards. The problem of scheduling operations in dual-gripper interval robotic cells that produce identical wafers (or parts) is considered in this paper. The objective is to find a 1-unit cyclic sequence of robot moves that minimizes the long-run average time to produce a part or, equivalently, maximizes the throughput. Initially two extreme cases are considered, namely no-wait cells and free-pickup cells; for no-wait cells (resp., free-pickup cells), an optimal (resp., asymptotically optimal) solution is obtained in polynomial time. It is then proved that the problem is strongly NP-hard for a general interval cell. Finally, results of an extensive computational study aimed at analyzing the improvement in throughput realized by using a dual-gripper robot instead of a single-gripper robot are presented. It is shown that employing a dual-gripper robot can lead to a significant gain in productivity. Operations managers can compare the resulting increase in revenue with the additional costs of acquiring and maintaining a dual-gripper robot to determine the circumstances under which such an investment is appropriate.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following supplemental resources: Proofs of all theoretical results, a table summarizing these results, a summary of Algorithm FindCycle, and the Levner–Kats–Levit Algorithm.] Journal: IIE Transactions Pages: 1-15 Issue: 1 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170902789092 File-URL: http://hdl.handle.net/10.1080/07408170902789092 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:1:p:1-15 Template-Type: ReDIF-Article 1.0 Author-Name: Brian Keller Author-X-Name-First: Brian Author-X-Name-Last: Keller Author-Name: GÜzİn Bayraksan Author-X-Name-First: GÜzİn Author-X-Name-Last: Bayraksan Title: Scheduling jobs sharing multiple resources under uncertainty: A stochastic programming approach Abstract: A two-stage stochastic integer program to determine an optimal schedule for jobs requiring multiple classes of resources under uncertain processing times, due dates, resource consumption and availabilities is formulated. Temporary resource capacity expansion for a penalty is allowed. Potential applications of this model include team scheduling problems that arise in service industries such as engineering consulting and operating room scheduling. An exact solution method is developed based on Benders decomposition for problems with a moderate number of scenarios.
Benders decomposition is then embedded within a sampling-based solution method for problems with a large number of scenarios. A sequential sampling procedure is modified to allow for approximate solution of integer programs and its asymptotic validity and finite stopping are proved under this modification. The solution methodologies are compared on a set of test problems. Several algorithmic enhancements are added to improve efficiency. Journal: IIE Transactions Pages: 16-30 Issue: 1 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170902942683 File-URL: http://hdl.handle.net/10.1080/07408170902942683 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:1:p:16-30 Template-Type: ReDIF-Article 1.0 Author-Name: Jennifer Bekki Author-X-Name-First: Jennifer Author-X-Name-Last: Bekki Author-Name: John Fowler Author-X-Name-First: John Author-X-Name-Last: Fowler Author-Name: Gerald Mackulak Author-X-Name-First: Gerald Author-X-Name-Last: Mackulak Author-Name: Barry Nelson Author-X-Name-First: Barry Author-X-Name-Last: Nelson Title: Indirect cycle time quantile estimation using the Cornish–Fisher expansion Abstract: This paper proposes a technique for estimating steady-state quantiles from discrete-event simulation models, with particular attention paid to cycle time quantiles of manufacturing systems. The technique is based on the Cornish–Fisher expansion, justified through an extensive empirical study, and is supported with mathematical analysis. It is shown that the technique provides precise and accurate estimates for the most commonly estimated quantiles with minimal data storage and low computational requirements. Journal: IIE Transactions Pages: 31-44 Issue: 1 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903019135 File-URL: http://hdl.handle.net/10.1080/07408170903019135 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:1:p:31-44 Template-Type: ReDIF-Article 1.0 Author-Name: Edieal Pinker Author-X-Name-First: Edieal Author-X-Name-Last: Pinker Author-Name: Hsiao-Hui Lee Author-X-Name-First: Hsiao-Hui Author-X-Name-Last: Lee Author-Name: Oded Berman Author-X-Name-First: Oded Author-X-Name-Last: Berman Title: Can flexibility be constraining? Abstract: Five common options for workforce flexibility and their robustness under uncertain demand are investigated. In the first stage, a firm makes optimal staffing decisions according to estimated demand and a given workforce flexibility policy. In the second stage, it reallocates its workforce to react to demand shocks. Numerical results are presented that show that flexibility can lead a firm to staff with too little slack to be flexible to demand shocks, thus leading to higher total costs, i.e., staffing and inventory costs. The forms of flexibility that give robust benefits are identified and an analysis on how different forms of flexibility interact with each other is performed.[Supplemental materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following supplemental resource: Appendix with additional tables of results.] Journal: IIE Transactions Pages: 45-59 Issue: 1 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903113789 File-URL: http://hdl.handle.net/10.1080/07408170903113789 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
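Returning to the Cornish–Fisher route to cycle-time quantiles described above: the expansion adjusts a normal quantile using estimated skewness and excess kurtosis. A compact version of the standard four-term expansion, applied to synthetic gamma-distributed cycle times (the article's refinements and storage scheme are not reproduced here):

    import numpy as np
    from scipy.stats import norm, skew, kurtosis

    def cornish_fisher_quantile(x, p):
        # Adjust the normal quantile z using sample skewness s and excess
        # kurtosis k (standard four-term Cornish-Fisher expansion).
        z = norm.ppf(p)
        s, k = skew(x), kurtosis(x)
        w = (z + (z**2 - 1) * s / 6 + (z**3 - 3*z) * k / 24
               - (2*z**3 - 5*z) * s**2 / 36)
        return x.mean() + x.std(ddof=1) * w

    rng = np.random.default_rng(6)
    cycle_times = rng.gamma(shape=3.0, scale=2.0, size=5000)  # skewed, synthetic
    print(cornish_fisher_quantile(cycle_times, 0.95))
    print(np.quantile(cycle_times, 0.95))                     # empirical check

Because only a few moments are needed, an estimator of this form can run with minimal data storage, which is the property the article exploits.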
Handle: RePEc:taf:uiiexx:v:42:y:2010:i:1:p:45-59 Template-Type: ReDIF-Article 1.0 Author-Name: Chun-Hung Chen Author-X-Name-First: Chun-Hung Author-X-Name-Last: Chen Author-Name: Enver Yücesan Author-X-Name-First: Enver Author-X-Name-Last: Yücesan Author-Name: Liyi Dai Author-X-Name-First: Liyi Author-X-Name-Last: Dai Author-Name: Hsiao-Chang Chen Author-X-Name-First: Hsiao-Chang Author-X-Name-Last: Chen Title: Optimal budget allocation for discrete-event simulation experiments Abstract: Simulation plays a vital role in analyzing discrete-event systems, particularly in comparing alternative system designs with a view to optimizing system performance. Using simulation to analyze complex systems, however, can be both prohibitively expensive and time-consuming. Effective algorithms to allocate intelligently a computing budget for discrete-event simulation experiments are presented in this paper. These algorithms dynamically determine the simulation lengths for all simulation experiments and thus significantly improve simulation efficiency under the constraint of a given computing budget. Numerical illustrations are provided and the algorithms are compared with traditional two-stage ranking-and-selection procedures through numerical experiments. Although the proposed approach is based on heuristics, the numerical results indicate that it is much more efficient than the compared procedures. Journal: IIE Transactions Pages: 60-70 Issue: 1 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903116360 File-URL: http://hdl.handle.net/10.1080/07408170903116360 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:1:p:60-70 Template-Type: ReDIF-Article 1.0 Author-Name: Shing Tsai Author-X-Name-First: Shing Author-X-Name-Last: Tsai Author-Name: Barry Nelson Author-X-Name-First: Barry Author-X-Name-Last: Nelson Title: Fully sequential selection procedures with control variates Abstract: Fully sequential selection procedures have been developed in the field of stochastic simulation to find the simulated system with the best expected performance when the number of alternatives is finite. Kim and Nelson proposed a procedure to allow for unknown and unequal variances and the use of common random numbers. This procedure approximates the raw sum of differences between observations from two systems as a Brownian motion process with drift and uses a triangular continuation region to decide the stopping time of the selection process. In this paper new fully sequential selection procedures are derived that employ a more effective sum of differences, which is called a controlled sum. Two provably valid procedures and an approximate procedure are described. Empirical results and a realistic illustration are provided to compare the efficiency of these procedures with other procedures that solve the same problem.[Supplemental materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following supplemental resources: Proofs and guidelines to choose appropriate parameters.] Journal: IIE Transactions Pages: 71-82 Issue: 1 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903228942 File-URL: http://hdl.handle.net/10.1080/07408170903228942 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:1:p:71-82 Template-Type: ReDIF-Article 1.0 Author-Name: E. Chen Author-X-Name-First: E. Author-X-Name-Last: Chen Author-Name: W.
Kelton Author-X-Name-First: W. Author-X-Name-Last: Kelton Title: Confidence interval estimation using quasi-independent sequences Abstract: A Quasi-Independent (QI) subsequence is a subset of time series observations obtained by systematic sampling. Because the observations appear to be independent, as determined by the runs tests, classical statistical techniques can be used on those observations directly. This paper discusses implementation of a sequential procedure to determine the simulation run length to obtain a QI subsequence, and the batch size for constructing confidence intervals for an estimator of the steady-state mean of a stochastic process. The proposed QI procedures increase the simulation run length and batch size progressively until a certain number of essentially independent and identically distributed samples are obtained. The only (mild) assumption is that the correlations of the stochastic-process output sequence eventually die off as the lag increases. An experimental performance evaluation demonstrates the validity of the QI procedure. Journal: IIE Transactions Pages: 83-93 Issue: 1 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903232266 File-URL: http://hdl.handle.net/10.1080/07408170903232266 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:1:p:83-93 Template-Type: ReDIF-Article 1.0 Author-Name: Avi Giloni Author-X-Name-First: Avi Author-X-Name-Last: Giloni Author-Name: Clifford Hurvich Author-X-Name-First: Clifford Author-X-Name-Last: Hurvich Author-Name: Sridhar Seshadri Author-X-Name-First: Sridhar Author-X-Name-Last: Seshadri Title: Forecasting and information sharing in supply chains under ARMA demand Abstract: This article considers the problem of determining the value of information sharing in a multi-stage supply chain in which the retailer faces AutoRegressive Moving Average (ARMA) demand, all players use a myopic order-up-to policy, and information sharing can only occur between adjacent players in the chain. It is shown that an upstream supply chain player can determine whether information sharing is of any value directly from the parameters of the model for the adjacent downstream player's order. This can be done by examining the location of the roots of the moving average polynomial of the model for the downstream player's order. If at least one of these roots is inside the unit circle or if the polynomial is applied to a lagged set of the downstream player's shocks, there is value of information sharing for the upstream player. It is also shown that under credible assumptions, neither player k−1's order nor player k's demand is necessarily an ARMA process with respect to the relevant shocks. It is shown that demand activity propagates in general to a process that is called quasi-ARMA, or QUARMA, in which the most recent shock(s) may be absent. It is shown that the typical player faces QUARMA demand and places orders that are also QUARMA. Thus, the demand propagation model is QUARMA in–QUARMA out. The presented analysis hence reverses and sharpens several previous results in the literature involving information sharing and also opens up many questions for future research. Journal: IIE Transactions Pages: 35-54 Issue: 1 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2012.689122 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.689122 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
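The root criterion in the information-sharing article above is mechanical to check once the moving average polynomial of the downstream player's order process is in hand: compute its roots and test whether any lies inside the unit circle. A small sketch with assumed MA(2) coefficients (the coefficients are invented for illustration):

    import numpy as np

    # Downstream order process o_t = (1 + b1*B + b2*B^2) e_t, with B the
    # backshift operator; the MA coefficients below are invented.
    b1, b2 = -2.5, 1.0
    roots = np.roots([b2, b1, 1.0])   # roots of 1 + b1*z + b2*z^2
    print(roots, np.abs(roots))
    print("information sharing valuable:", bool((np.abs(roots) < 1).any()))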
Handle: RePEc:taf:uiiexx:v:46:y:2014:i:1:p:35-54 Template-Type: ReDIF-Article 1.0 Author-Name: Amar Sapra Author-X-Name-First: Amar Author-X-Name-Last: Sapra Author-Name: Peter Jackson Author-X-Name-First: Peter Author-X-Name-Last: Jackson Title: A continuous-time analog of the Martingale model of forecast evolution Abstract: In many practical situations, a manager would like to simulate forecasts for periods whose duration (e.g., week) is not equal to the periods (e.g., month) for which past forecasting data are available. This article addresses this problem by developing a continuous-time analog of the Martingale model of forecast evolution, called the Continuous-Time Martingale Model of Forecast Evolution (CTMMFE). The CTMMFE is used to parameterize the variance–covariance matrix of forecast updates in such a way that the matrix can be scaled for any planning period length. The parameters can then be estimated from past forecasting data corresponding to a specific planning period. Once the parameters are estimated, a variance–covariance matrix can be generated for any planning period length. Numerical experiments are conducted to derive insights into how various characteristics of the variance–covariance matrix (for example, the underlying correlation structure) influence the number of parameters needed as well as the accuracy of the approximation. Journal: IIE Transactions Pages: 23-34 Issue: 1 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2012.761367 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.761367 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:1:p:23-34 Template-Type: ReDIF-Article 1.0 Author-Name: Sin-Hoon Hum Author-X-Name-First: Sin-Hoon Author-X-Name-Last: Hum Author-Name: Mahmut Parlar Author-X-Name-First: Mahmut Author-X-Name-Last: Parlar Title: Measurement and optimization of supply chain responsiveness Abstract: This article considers make-to-order supply chains with multiple stages where each stage is completed in a random length of time. An order that is placed in stage 1 is considered fulfilled when all of the stages are completed. The responsiveness of such a supply chain is defined as the probability that an order placed now will be fulfilled within t time units. The responsiveness of the supply chain is optimized by maximizing the probability that the order will be fulfilled within some promised time interval subject to a budget constraint. This is achieved by manipulating the rates of distributions representing the duration of each stage. It is assumed that the completion time of each stage is exponential (with possibly different rates) and generalized Erlang and phase-type distributed fulfillment times are both considered. This is followed by more realistic scenarios where the time to completion of a stage is non-exponential. The cases of (i) generalized beta-distributed stage durations; (ii) correlated stage durations; (iii) stages that may be completed immediately with a positive probability (possibly corresponding to the availability of inventory); and (iv) a random number of stages traversed are considered. Then an assembly-type system is analyzed for the case where the completion of a stage may depend on the availability of components to be delivered by an outside supplier and a serial system where each stage consists of a multi-server queue.
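For the exponential-stages case above, responsiveness has a closed form: with distinct stage rates, the fulfillment time is generalized Erlang (hypoexponential), and P(T <= t) follows from a partial-fraction expansion. A sketch that evaluates the closed form and checks it by simulation (the rates and promised time are invented):

    import numpy as np

    rates = np.array([2.0, 3.0, 5.0])   # distinct stage rates (invented)
    t = 1.5                             # promised fulfillment time

    # Generalized Erlang (hypoexponential) CDF via partial fractions:
    # P(T <= t) = 1 - sum_i [prod_{j != i} r_j / (r_j - r_i)] exp(-r_i t)
    p = 1.0
    for i, ri in enumerate(rates):
        Ci = np.prod([rj / (rj - ri) for j, rj in enumerate(rates) if j != i])
        p -= Ci * np.exp(-ri * t)
    print("responsiveness:", p)

    rng = np.random.default_rng(7)      # Monte Carlo check
    sims = rng.exponential(1 / rates, size=(100_000, 3)).sum(axis=1)
    print("simulated:", (sims <= t).mean())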
Also considered is a related model of a network of queues in which congestion effects are taken into account in the measurement of supply chain responsiveness. This model is analyzed using an approximation, and its results are compared to those obtained by simulation. Detailed numerical examples of measurement and optimization of supply chain responsiveness are presented for each model. Journal: IIE Transactions Pages: 1-22 Issue: 1 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.783251 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.783251 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:1:p:1-22 Template-Type: ReDIF-Article 1.0 Author-Name: Mathijn Retel Helmrich Author-X-Name-First: Mathijn Author-X-Name-Last: Retel Helmrich Author-Name: Raf Jans Author-X-Name-First: Raf Author-X-Name-Last: Jans Author-Name: Wilco van den Heuvel Author-X-Name-First: Wilco Author-X-Name-Last: van den Heuvel Author-Name: Albert Wagelmans Author-X-Name-First: Albert Author-X-Name-Last: Wagelmans Title: Economic lot-sizing with remanufacturing: complexity and efficient formulations Abstract: Within the framework of reverse logistics, the classic economic lot-sizing problem has been extended with a remanufacturing option. In this extended problem, known quantities of used products are returned from customers in each period. These returned products can be remanufactured so that they are as good as new. Customer demand can then be fulfilled from both newly produced and remanufactured items. In each period, one can choose to set up a process to remanufacture returned products or produce new items. These processes can have separate or joint setup costs. In this article, it is shown that both variants are NP-hard. Furthermore, several alternative mixed-integer programming (MIP) formulations of both problems are proposed and compared. Because “natural” lot-sizing formulations provide weak lower bounds, tighter formulations are proposed, namely, shortest path formulations, a partial shortest path formulation, and an adaptation of the (l, S, WW) inequalities used in the classic problem with Wagner–Whitin costs. Their efficiency is tested on a large number of test data sets and it is found that, for both problem variants, a (partial) shortest path–type formulation performs better than the natural formulation, in terms of both the linear programming relaxation and MIP computation times. Moreover, this improvement can be substantial. Journal: IIE Transactions Pages: 67-86 Issue: 1 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.802842 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.802842 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:1:p:67-86 Template-Type: ReDIF-Article 1.0 Author-Name: Amirhosein Norouzi Author-X-Name-First: Amirhosein Author-X-Name-Last: Norouzi Author-Name: Reha Uzsoy Author-X-Name-First: Reha Author-X-Name-Last: Uzsoy Title: Modeling the evolution of dependency between demands, with application to inventory planning Abstract: This article shows that the progressive realization of uncertain demands across successive discrete time periods through additive or multiplicative forecast updates results in the evolution of the conditional covariance of demand in addition to its conditional mean. A dynamic inventory model with forecast updates is used to illustrate the application of the proposed method. 
It is shown that the optimal inventory policy depends on conditional covariances, and a model without information updates is used to quantify the benefit of using the available forecast information in the presence of additive forecast updates. The proposed approach yields significant reductions in system costs and is applicable to a wide range of production and inventory models. It is also shown that the proposed approach can be extended to the case of multiplicative forecast updates, and directions for future work are suggested. Journal: IIE Transactions Pages: 55-66 Issue: 1 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.803637 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.803637 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:1:p:55-66 Template-Type: ReDIF-Article 1.0 Author-Name: Jiejian Feng Author-X-Name-First: Jiejian Author-X-Name-Last: Feng Author-Name: Liming Liu Author-X-Name-First: Liming Author-X-Name-Last: Liu Author-Name: Mahmut Parlar Author-X-Name-First: Mahmut Author-X-Name-Last: Parlar Title: An efficient dynamic optimization method for sequential identification of group-testable items Abstract: Group testing with variable group sizes for incomplete identification has been proposed in the literature but remains an open problem because the available solution approaches cannot handle even relatively small problems. This article proposes a general two-stage model that uses stochastic dynamic programming at stage 2 for the optimal group sizes and non-linear programming at stage 1 for the optimal number of group-testable units. By identifying tight bounds on the optimal group size for each step at stage 2 and the optimal initial purchase quantity of the group-testable units at stage 1, an efficient solution approach is developed that dramatically reduces both the number of functional evaluations and the intermediate results/data that need to be stored and retrieved. With this approach, large-scale practical problems can be solved exactly within very reasonable computation time. This makes the practical implementation of the dynamic group-testing scheme possible in manufacturing and health care settings. Journal: IIE Transactions Pages: 69-83 Issue: 2 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.504684 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.504684 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:2:p:69-83 Template-Type: ReDIF-Article 1.0 Author-Name: Jens Brunner Author-X-Name-First: Jens Author-X-Name-Last: Brunner Author-Name: Jonathan Bard Author-X-Name-First: Jonathan Author-X-Name-Last: Bard Author-Name: Rainer Kolisch Author-X-Name-First: Rainer Author-X-Name-Last: Kolisch Title: Midterm scheduling of physicians with flexible shifts using branch and price Abstract: A methodology is presented to solve the flexible shift scheduling problem of physicians when hospital administrators can exploit flexible start times, variable shift lengths, and overtime to cover demand. The objective is to minimize the total assignment cost subject to individual contracts and prevailing labor regulations. A wide range of legal restrictions, facility-specific staffing policies, individual preferences, and on-call requirements throughout the week are considered. The resulting model constructs shifts implicitly rather than starting with a predefined set of several shift types. 
To find high-quality rosters, a Branch-and-Price (B&P) algorithm is developed that uses two different branching strategies and generates new rosters as needed. The first strategy centers on the master problem variables, and the second is based on the subproblem variables. Using data provided by an anesthesia department of an 1100-bed hospital as well as an extensive set of randomly generated test instances for 15 and 18 physicians, computational results demonstrate the efficiency of the B&P algorithm for planning horizons of up to 6 weeks. Journal: IIE Transactions Pages: 84-109 Issue: 2 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.504685 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.504685 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:2:p:84-109 Template-Type: ReDIF-Article 1.0 Author-Name: Ali Tafazzoli Author-X-Name-First: Ali Author-X-Name-Last: Tafazzoli Author-Name: James Wilson Author-X-Name-First: James Author-X-Name-Last: Wilson Title: Skart: A skewness- and autoregression-adjusted batch-means procedure for simulation analysis Abstract: Skart is an automated sequential batch-means procedure for constructing a skewness- and autoregression-adjusted confidence interval (CI) for the steady-state mean of a simulation output process either in discrete time (i.e., using observation-based statistics) or in continuous time (i.e., using time-persistent statistics). Skart delivers a CI designed to satisfy user-specified requirements concerning both the CI's coverage probability and its absolute or relative precision. Skart exploits separate adjustments to the classical batch-means CI to account for the effects on the distribution of the underlying Student's t-statistic arising from skewness and autocorrelation of the batch means. The skewness adjustment is based on a Cornish–Fisher expansion for the classical batch-means t-statistic, and the autocorrelation adjustment is based on a first-order autoregressive approximation to the batch-means autocorrelation function. Skart also delivers a point estimator for the steady-state mean that is approximately free of initialization bias. The associated warm-up period is based on iteratively applying Von Neumann's randomness test to spaced batch means with increasing sizes for each batch and its preceding spacer. In extensive experimentation, Skart compared favorably with its competitors. [Supplementary material is available for this article. Go to the publisher's online edition of IIE Transactions for additional discussion, detailed proofs, etc.] Journal: IIE Transactions Pages: 110-128 Issue: 2 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.504688 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.504688 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:2:p:110-128 Template-Type: ReDIF-Article 1.0 Author-Name: Alexander Erdelyi Author-X-Name-First: Alexander Author-X-Name-Last: Erdelyi Author-Name: Huseyin Topaloglu Author-X-Name-First: Huseyin Author-X-Name-Last: Topaloglu Title: Approximate dynamic programming for dynamic capacity allocation with multiple priority levels Abstract: This article considers a quite general dynamic capacity allocation problem. There is a fixed amount of daily processing capacity. On each day, jobs of different priorities arrive randomly and a decision has to be made about which jobs should be scheduled on which days. 
Waiting jobs incur a holding cost that is a function of their priority levels. The objective is to minimize the total expected cost over a finite planning horizon. The problem is formulated as a dynamic program, but this formulation is computationally difficult as it involves a high-dimensional state vector. To address this difficulty, an approximate dynamic programming approach is used that decomposes the dynamic programming formulation by the different days in the planning horizon to construct separable approximations to the value functions. Value function approximations are used for two purposes. First, it is shown that the value function approximations can be used to obtain a lower bound on the optimal total expected cost. Second, the value function approximations can be used to make the job scheduling decisions over time. Computational experiments indicate that the job scheduling decisions made by the proposed approach perform significantly better than a variety of benchmark strategies. Journal: IIE Transactions Pages: 129-142 Issue: 2 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.504690 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.504690 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:2:p:129-142 Template-Type: ReDIF-Article 1.0 Author-Name: Hongrui Liu Author-X-Name-First: Hongrui Author-X-Name-Last: Liu Author-Name: Zelda Zabinsky Author-X-Name-First: Zelda Author-X-Name-Last: Zabinsky Author-Name: Wolf Kohn Author-X-Name-First: Wolf Author-X-Name-Last: Kohn Title: Rule-based forecasting and production control system design utilizing a feedback control architecture Abstract: Forecasting and production control systems typically rely on operational rules that have been accumulated and refined from enterprise experts. Designing a rule-based system is a challenging task. In this article, a new rule-based system design methodology for forecasting and production control is proposed. The methodology first represents the rule-based system as a finite state automaton (a Moore machine) and then formulates an optimal control problem in a feedback control architecture. The solution to the optimal control problem provides action rules for forecasting and production that minimize cost over a given time horizon. The proposed methodology provides a systematic tool for rule-based system design that gives robust and realistic solutions. Journal: IIE Transactions Pages: 143-152 Issue: 2 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.504691 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.504691 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:2:p:143-152 Template-Type: ReDIF-Article 1.0 Author-Name: Halit Üster Author-X-Name-First: Halit Author-X-Name-Last: Üster Author-Name: Jyotirmoy Dalal Author-X-Name-First: Jyotirmoy Author-X-Name-Last: Dalal Title: Strategic emergency preparedness network design integrating supply and demand sides in a multi-objective approach Abstract: We consider integration of fast evacuation and cost-effective relief distribution objectives, the two critical aspects of emergency management, to design a strategic emergency preparedness network for foreseen disasters, such as hurricanes. 
To this end, we introduce the design of a three-tier system, involving evacuation sources, shelters, and distribution centers, that integrates the relief (supply) and evacuation (demand) sides of an emergency preparedness network. This is motivated by the realization that the shelters are shared facilities at the interface of the supply and demand sides. Although primarily intended for strategic decision making, our model can also make tactical decisions, thus spanning two separate time frames before a disaster’s occurrence. To solve models for large-scale instances, we adopt a Benders Decomposition approach with an implementation that solves only one instance of the master problem. We also determine that, in this framework, tuning the master tree search parameters and strengthening the Benders cuts significantly impact convergence. We conduct an extensive computational study to examine the impact of the algorithmic improvements, and we further consider a realistic case study based on geographic information system (GIS) data from coastal Texas to examine the effects of changing problem parameters. By comparing our approach with current practice, we illustrate that a pro-active strategic integration of evacuation and distribution can relieve resource-constrained large urban areas that are traditionally considered as shelter locations. Journal: IISE Transactions Pages: 395-413 Issue: 4 Volume: 49 Year: 2017 Month: 4 X-DOI: 10.1080/0740817X.2016.1234731 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1234731 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:4:p:395-413 Template-Type: ReDIF-Article 1.0 Author-Name: Miao Bai Author-X-Name-First: Miao Author-X-Name-Last: Bai Author-Name: Robert H. Storer Author-X-Name-First: Robert H. Author-X-Name-Last: Storer Author-Name: Gregory L. Tonkay Author-X-Name-First: Gregory L. Author-X-Name-Last: Tonkay Title: A sample gradient-based algorithm for a multiple-OR and PACU surgery scheduling problem Abstract: In this article, we study a surgery scheduling problem in multiple Operating Rooms (ORs) constrained by the Post-Anesthesia Care Unit (PACU) capacity within the block-booking framework. With surgery sequences predetermined in each OR, a Discrete-Event Dynamic System (DEDS) is devised for the problem. A DEDS-based stochastic optimization model is formulated in order to minimize the cost incurred from patient waiting time, OR idle time, OR blocking time, OR overtime, and PACU overtime. A sample gradient-based algorithm is proposed for the sample average approximation of our formulation. Numerical experiments suggest that the proposed method identifies near-optimal solutions and outperforms previous methods. We also show that considerable cost savings (11.8% on average) are possible in hospitals where PACU beds are a constraint. Journal: IISE Transactions Pages: 367-380 Issue: 4 Volume: 49 Year: 2017 Month: 4 X-DOI: 10.1080/0740817X.2016.1237061 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1237061 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:4:p:367-380 Template-Type: ReDIF-Article 1.0 Author-Name: Candace A. Yano Author-X-Name-First: Candace A. Author-X-Name-Last: Yano Author-Name: Elizabeth J. 
Author-X-Name-Last: Durango-Cohen Author-Name: Liad Wagman Author-X-Name-First: Liad Author-X-Name-Last: Wagman Title: Outsourcing in place: Should a retailer sell its store-brand factory? Abstract: Several major grocery chains in the United States own factories that produce some of their store-brand products. Historically, these store-brand products have been the low-price, lower-quality alternatives to higher-priced national brands, but the quality and consumer acceptance of store brands have increased markedly in recent years. Although demand for store-brand products has grown, managing the associated factories can be costly for retailers, leading some to consider selling the factories to third parties. We study the impact of selling a retailer’s existing capacity-limited factory to a third party when a store-brand product competes with a similar national-brand product. We examine the equilibrium dynamics between two external suppliers and show how the outcome changes with respect to prices, capacity limitations, the distribution of profits, and the sequencing of pricing decisions. Among other things, we show that, surprisingly, the national brand’s equilibrium wholesale price may fall when the factory is sold. We also show that the retailer may be strictly better off if he sells the factory, with these benefits being above and beyond any savings in fixed ownership and operating costs. Taken together, these results imply that when the store-brand factory has tight capacity, the adverse effects due to double marginalization on the store-brand product from selling the factory to a third party may be partially or fully offset by a reduction in the national brand’s wholesale price. Journal: IISE Transactions Pages: 442-459 Issue: 4 Volume: 49 Year: 2017 Month: 4 X-DOI: 10.1080/0740817X.2016.1243280 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1243280 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:4:p:442-459 Template-Type: ReDIF-Article 1.0 Author-Name: Rob J. I. Basten Author-X-Name-First: Rob J. I. Author-X-Name-Last: Basten Author-Name: Joachim J. Arts Author-X-Name-First: Joachim J. Author-X-Name-Last: Arts Title: Fleet readiness: Stocking spare parts and high-tech assets Abstract: We consider a maintenance shop that is responsible for the availability of a fleet of assets; e.g., trains. Unavailability of assets may be due to active maintenance time or unavailability of spare parts. Both spare assets and spare parts may be stocked in order to ensure a certain fleet readiness, which is the probability of having sufficient assets available for the primary process (e.g., running a train schedule) at any given moment. This is different from guaranteeing a certain average availability, as is typically done in the literature on spare parts inventories. We analyze the corresponding system, assuming continuous review and base stock control. We propose an algorithm, based on a marginal analysis approach, to solve the optimization problem of minimizing holding costs for spare assets and spare parts. Since the problem is not item separable, even marginal analysis is time-consuming, but we show how to efficiently solve this problem. Using a numerical experiment, we show that our algorithm generally leads to a solution that is close to optimal and that it is much faster than an existing algorithm for a closely related problem. 
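As a rough, generic illustration of the marginal-analysis idea in the Basten and Arts abstract above, the following Python sketch greedily raises whichever base-stock level buys the most readiness per unit of holding cost until a target fleet readiness is reached. The readiness evaluator here is a stand-in toy (independent items with geometric shortage probabilities); the paper's actual model of spare assets and spare parts, and its efficient evaluation, are considerably more involved, and all names and numbers below are illustrative.

    def marginal_analysis(holding_costs, readiness, target, max_level=50):
        """Greedy marginal analysis over base-stock levels. readiness maps a
        list of base-stock levels to a fleet-readiness probability."""
        levels = [0] * len(holding_costs)
        while readiness(levels) < target:
            best_i, best_ratio = None, 0.0
            for i, cost in enumerate(holding_costs):
                if levels[i] >= max_level:
                    continue
                trial = levels[:]
                trial[i] += 1
                ratio = (readiness(trial) - readiness(levels)) / cost
                if ratio > best_ratio:
                    best_i, best_ratio = i, ratio
            if best_i is None:   # no increment improves readiness; stop
                break
            levels[best_i] += 1
        return levels

    # Toy evaluator: item i causes a shortage with probability q_i**(s_i + 1),
    # independently across items (purely for demonstration).
    def toy_readiness(levels, q=(0.6, 0.4, 0.5)):
        p = 1.0
        for s, qi in zip(levels, q):
            p *= 1.0 - qi ** (s + 1)
        return p

    print(marginal_analysis([2.0, 5.0, 1.0], toy_readiness, target=0.95))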
We further show that the additional costs that are incurred when the problem of stocking spare assets and spare parts is not solved jointly can be significant. A key managerial insight is that typically the number of spare assets to be acquired is very close to a lower bound that is determined only by the active maintenance time on the assets. It is typically not cost-effective to acquire more spare assets to cover spare parts unavailability. Journal: IISE Transactions Pages: 429-441 Issue: 4 Volume: 49 Year: 2017 Month: 4 X-DOI: 10.1080/0740817X.2016.1243281 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1243281 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:4:p:429-441 Template-Type: ReDIF-Article 1.0 Author-Name: Vera Hemmelmayr Author-X-Name-First: Vera Author-X-Name-Last: Hemmelmayr Author-Name: Karen Smilowitz Author-X-Name-First: Karen Author-X-Name-Last: Smilowitz Author-Name: Luis de la Torre Author-X-Name-First: Luis Author-X-Name-Last: de la Torre Title: A periodic location routing problem for collaborative recycling Abstract: Motivated by collaborative recycling efforts for non-profit agencies, we study a variant of the periodic location routing problem, in which one decides the set of open depots from the customer set, the capacity of open depots, and the visit frequency to nodes in an effort to design networks for collaborative pickup activities. We formulate this problem, highlighting the challenges introduced by these decisions. We examine the relative difficulty introduced with each decision through exact solutions and a heuristic approach that can incorporate extensions of model constraints and solve larger instances. The work is motivated by a project with a network of hunger relief agencies (e.g., food pantries, soup kitchens and shelters) focusing on collaborative approaches to address their cardboard recycling challenges collectively. We present a case study based on data from the network. In this novel setting, we evaluate collaboration in terms of participation levels and cost impact. These insights can be generalized to other networks of organizations that may consider pooling resources. Journal: IISE Transactions Pages: 414-428 Issue: 4 Volume: 49 Year: 2017 Month: 4 X-DOI: 10.1080/24725854.2016.1267882 File-URL: http://hdl.handle.net/10.1080/24725854.2016.1267882 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:4:p:414-428 Template-Type: ReDIF-Article 1.0 Author-Name: Hadar Amrani Author-X-Name-First: Hadar Author-X-Name-Last: Amrani Author-Name: Eugene Khmelnitsky Author-X-Name-First: Eugene Author-X-Name-Last: Khmelnitsky Title: Estimation of quantiles of non-stationary demand distributions Abstract: Many problems involve the use of quantiles of the probability distributions of the problem's parameters. A well-known example is the newsvendor problem, where the optimal order quantity equals a quantile of the demand distribution function. In real-life situations, however, the demand distribution is usually unknown and has to be estimated from past data. In these cases, quantile prediction is a complicated task, given that (i) the number of available samples is usually small and (ii) the demand distribution is not necessarily stationary. In some cases the distribution type can be meaningfully presumed, whereas the parameters of the distribution remain unknown. 
This article suggests a new method for estimating a quantile at a future time period. The method attaches weights to the available samples based on their chronological order and then, similar to the sample quantile method, it sets the estimator at the sample that reaches the desired quantile value. The method looks for the weights that minimize the expected absolute error of the estimator. A method for determining optimal weights in both stationary and non-stationary settings of the problem is developed. The applicability of the method is illustrated by solving a problem that has limited information regarding the distribution parameters and stationarity. Journal: IISE Transactions Pages: 381-394 Issue: 4 Volume: 49 Year: 2017 Month: 4 X-DOI: 10.1080/24725854.2016.1273565 File-URL: http://hdl.handle.net/10.1080/24725854.2016.1273565 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:4:p:381-394 Template-Type: ReDIF-Article 1.0 Author-Name: Cai Wen Zhang Author-X-Name-First: Cai Wen Author-X-Name-Last: Zhang Author-Name: Zhisheng Ye Author-X-Name-First: Zhisheng Author-X-Name-Last: Ye Author-Name: Min Xie Author-X-Name-First: Min Author-X-Name-Last: Xie Title: Monitoring the shape parameter of a Weibull renewal process Abstract: This research arose from a challenge faced in real practice—monitoring changes to the Weibull shape parameter. From first-hand experience, we understand that a mechanism for such a purpose is very useful. This article is primarily focused on monitoring the shape parameter of a Weibull renewal process. Making use of maximum likelihood theory, we derive a novel statistic for the Weibull shape parameter that is demonstrated to follow an approximately normal distribution. This desirable normality property makes the statistic well suited for use in monitoring the Weibull shape parameter. It also allows for a simple approach to constructing a Shewhart-type control chart, named the Beta chart. The parameter values required to design a Beta chart are provided. A self-starting procedure is also proposed for setting up the Phase I Beta chart. The Average Run Length (ARL) performance of the Beta chart is evaluated through Monte Carlo simulation. A comparison with a moving range exponentially weighted moving average (EWMA) chart from the literature shows that the Beta chart has much better ARL performance when properly designed. Application examples, using both simulated and real data, demonstrate that the Beta chart is effective and makes good sense in real practice. Journal: IISE Transactions Pages: 800-813 Issue: 8 Volume: 49 Year: 2017 Month: 8 X-DOI: 10.1080/24725854.2016.1278315 File-URL: http://hdl.handle.net/10.1080/24725854.2016.1278315 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:8:p:800-813 Template-Type: ReDIF-Article 1.0 Author-Name: Devashish Das Author-X-Name-First: Devashish Author-X-Name-Last: Das Author-Name: Shiyu Zhou Author-X-Name-First: Shiyu Author-X-Name-Last: Zhou Title: Detecting entropy increase in categorical data using maximum entropy distribution approximations Abstract: We propose a statistical monitoring method to detect the increase of entropy in categorical data. First, we propose a distribution estimation method to approximate the probability distribution of the observed categorical data. 
The problem is formulated as a convex optimization problem, which involves finding the distribution that maximizes Shannon's entropy with the constraint defined by the given confidence intervals on possible distributions. Then we use this procedure to estimate the non-parametric, maximum entropy distribution of an observed data sample and use it for statistical monitoring based on a χ2-test statistic. This monitoring scheme was found to be effective in detecting entropy increases in the observed data based on various numerical studies and a real-world case study. Journal: IISE Transactions Pages: 827-837 Issue: 8 Volume: 49 Year: 2017 Month: 8 X-DOI: 10.1080/24725854.2017.1299952 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1299952 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:8:p:827-837 Template-Type: ReDIF-Article 1.0 Author-Name: Wancheng Feng Author-X-Name-First: Wancheng Author-X-Name-Last: Feng Author-Name: Chen Wang Author-X-Name-First: Chen Author-X-Name-Last: Wang Author-Name: Zuo-Jun Max Shen Author-X-Name-First: Zuo-Jun Max Author-X-Name-Last: Shen Title: Process flexibility design in heterogeneous and unbalanced networks: A stochastic programming approach Abstract: Most studies of process flexibility design have focused on homogeneous networks, whereas production systems in practice usually differ in many aspects, such as plant efficiency and product profitability. This research investigates the impacts of two dimensions of production system heterogeneity, plant uniformity and product similarity, on process flexibility design in unbalanced networks, where the numbers of plants and products are not equal. We model the design of flexible process structures under uncertain market demand as a two-stage stochastic programming problem and solve it by applying Benders decomposition with a set of acceleration techniques. To overcome slow convergence of the exact algorithm, we also develop an efficient optimization-based heuristic capable of obtaining solutions with optimality gaps less than 6% on average for realistic-scale production systems (e.g., with five plants and 10 types of products). Numerical results using the proposed heuristic show that flexibility designs are influenced by both dimensions of system heterogeneity, though the desired level of flexibility is more sensitive to the effect of plant uniformity than that of product similarity. Journal: IISE Transactions Pages: 781-799 Issue: 8 Volume: 49 Year: 2017 Month: 8 X-DOI: 10.1080/24725854.2017.1299953 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1299953 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:8:p:781-799 Template-Type: ReDIF-Article 1.0 Author-Name: Shuluo Ning Author-X-Name-First: Shuluo Author-X-Name-Last: Ning Author-Name: Eunshin Byon Author-X-Name-First: Eunshin Author-X-Name-Last: Byon Author-Name: Teresa Wu Author-X-Name-First: Teresa Author-X-Name-Last: Wu Author-Name: Jing Li Author-X-Name-First: Jing Author-X-Name-Last: Li Title: A sparse partitioned-regression model for nonlinear system–environment interactions Abstract: This article focuses on the modeling of nonlinear interactions between the design and operational variables of a system and the multivariate outside environment in predicting the system's performance. 
We propose a Sparse Partitioned-Regression (SPR) model that automatically searches for a partition of the environmental variables and fits a sparse regression within each subdivision of the partition, in order to fulfill an optimality criterion. Two optimality criteria are proposed: a penalized criterion and a held-out criterion. We study the theoretical properties of SPR by deriving oracle inequalities to quantify the risks of the penalized and held-out criteria in both prediction and classification problems. An efficient recursive partition algorithm is developed for model estimation. Extensive simulation experiments are conducted to demonstrate the better performance of SPR compared with competing methods. Finally, we present an application that uses building design and operational variables, outdoor environmental variables, and their interactions to predict energy consumption based on the Department of Energy's EnergyPlus data sets. SPR produces a high level of prediction accuracy. The result of the application also provides insights into the design, operation, and management of energy-efficient buildings. Journal: IISE Transactions Pages: 814-826 Issue: 8 Volume: 49 Year: 2017 Month: 8 X-DOI: 10.1080/24725854.2017.1299955 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1299955 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:8:p:814-826 Template-Type: ReDIF-Article 1.0 Author-Name: Rob Goedhart Author-X-Name-First: Rob Author-X-Name-Last: Goedhart Author-Name: Michele M. da Silva Author-X-Name-First: Michele M. Author-X-Name-Last: da Silva Author-Name: Marit Schoonhoven Author-X-Name-First: Marit Author-X-Name-Last: Schoonhoven Author-Name: Eugenio K. Epprecht Author-X-Name-First: Eugenio K. Author-X-Name-Last: Epprecht Author-Name: Subha Chakraborti Author-X-Name-First: Subha Author-X-Name-Last: Chakraborti Author-Name: Ronald J. M. M. Does Author-X-Name-First: Ronald J. M. M. Author-X-Name-Last: Does Author-Name: Álvaro Veiga Author-X-Name-First: Álvaro Author-X-Name-Last: Veiga Title: Shewhart control charts for dispersion adjusted for parameter estimation Abstract: Several recent studies have shown that the number of Phase I samples required for a Phase II control chart with estimated parameters to perform properly may be prohibitively high. Looking for a more practical alternative, adjusting the control limits has been considered in the literature. We consider this problem for the classic Shewhart charts for process dispersion under normality and present an analytical method to determine the adjusted control limits. Furthermore, we examine the performance of the resulting chart at signaling increases in the process dispersion. The proposed adjustment ensures that a minimum in-control performance of the control chart is guaranteed with a specified probability. This performance is indicated in terms of the false alarm rate or, equivalently, the in-control average run length. We also discuss the tradeoff between the in-control and out-of-control performance. Since our adjustment is based on exact analytical derivations, the recently suggested bootstrap method is no longer necessary. A real-life example is provided in order to illustrate the proposed methodology. 
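The adjustment idea in the Goedhart et al. abstract above can be sketched in a few lines of Python (scipy assumed available): for an S-squared chart under normality with the variance estimated from m Phase I samples of size n, the upper-limit factor is inflated so that, with probability 1 - p over the Phase I estimation, the conditional false-alarm rate stays at or below a nominal alpha. The closed form follows from the chi-squared distributions involved; it illustrates the general exceedance-probability approach and is not claimed to be the paper's exact formula, and all names are illustrative.

    from scipy.stats import chi2

    def adjusted_s2_limit_factor(n, m, alpha=0.0027, p=0.05):
        """Factor c such that UCL = c * S_pooled^2 for an S^2 chart.
        n: Phase II sample size; m: number of Phase I samples of size n;
        alpha: nominal false-alarm rate; p: allowed probability that the
        conditional false-alarm rate exceeds alpha."""
        k = m * (n - 1)                           # pooled degrees of freedom
        unadjusted = chi2.ppf(1 - alpha, n - 1) / (n - 1)
        # S_pooled^2 / sigma^2 ~ chi2(k)/k, so inflating by k / chi2.ppf(p, k)
        # guarantees P(conditional FAR <= alpha) >= 1 - p; the factor tends
        # to 1 as the amount of Phase I data grows.
        return unadjusted * k / chi2.ppf(p, k)

    print(adjusted_s2_limit_factor(n=5, m=25))     # noticeably inflated
    print(adjusted_s2_limit_factor(n=5, m=1000))   # close to unadjusted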
Journal: IISE Transactions Pages: 838-848 Issue: 8 Volume: 49 Year: 2017 Month: 8 X-DOI: 10.1080/24725854.2017.1299956 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1299956 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:8:p:838-848 Template-Type: ReDIF-Article 1.0 Author-Name: Aakil M. Caunhye Author-X-Name-First: Aakil M. Author-X-Name-Last: Caunhye Author-Name: Michel-Alexandre Cardin Author-X-Name-First: Michel-Alexandre Author-X-Name-Last: Cardin Title: An approach based on robust optimization and decision rules for analyzing real options in engineering systems design Abstract: In this article, a novel approach to analyze flexibility and real options in engineering systems design is proposed based on robust optimization and decision rules. A semi-infinite robust counterpart is formulated for a worst-case non-flexible Generation Expansion Planning (GEP) problem taken as a demonstration application. An exact solution methodology is developed by converting the model into an explicit mixed-integer programming model. Strategic capacity expansion flexibility—also referred to as real options—is analyzed in the GEP problem formulation, and a multi-stage finite adaptability decision rule is developed to solve the resulting model. Finite adaptability relies on uncertainty set partitions, and in order to avoid arbitrary choices of partitions, a novel heuristic partitioning methodology is developed based on upper-bound paths to guide the partitioning of uncertainty sets. The modeling approach and heuristic partitioning methodology are applied to analyze a realistic GEP problem using data from the Midwestern United States. The case study provides insights into the convergence rates of the proposed heuristic partitioning methodology, decision rule performances, and the value of flexibility compared with non-flexible solutions, showing that explicit considerations of flexibility through real options can yield significant cost savings and improved system performance in the face of uncertainty. Journal: IISE Transactions Pages: 753-767 Issue: 8 Volume: 49 Year: 2017 Month: 8 X-DOI: 10.1080/24725854.2017.1299958 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1299958 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:8:p:753-767 Template-Type: ReDIF-Article 1.0 Author-Name: Xiaojie Liu Author-X-Name-First: Xiaojie Author-X-Name-Last: Liu Author-Name: Gang Du Author-X-Name-First: Gang Author-X-Name-Last: Du Author-Name: Roger J. Jiao Author-X-Name-First: Roger J. Author-X-Name-Last: Jiao Author-Name: Yi Xia Author-X-Name-First: Yi Author-X-Name-Last: Xia Title: Product line design considering competition by bilevel optimization of a Stackelberg–Nash game Abstract: Product Line Design (PLD) is one of the most critical decisions to be made by a firm for it to be successful in a competitive business environment. Existing conjoint models for PLD optimization either have not accounted for the retaliatory reactions by incumbent firms to the introduction of new products or have focused on the Nash game to model such competitive interactions, in which all firms are treated equally. However, one firm may own more information on the rivals' behavior, more resources to pre-commit, or a first-mover advantage. 
This article formulates a Stackelberg–Nash game-theoretic model for the Competitive Product Line Design (CPLD) problem, in which a new entrant wants to enter a competitive market by offering new products in a market where existing products belong to several incumbent firms. A bilevel 0–1 integer nonlinear programming model is developed based on the Stackelberg–Nash game where the new entrant is a leader and the incumbent firms are followers. Consistent with the bilevel optimization model, a nested bilevel genetic algorithm with sequential tatonnement is implemented to find the corresponding Stackelberg–Nash equilibrium for CPLD. An industrial case of a mobile phone product is also presented to illustrate the feasibility and potential of the proposed leader–followers model and algorithm. Journal: IISE Transactions Pages: 768-780 Issue: 8 Volume: 49 Year: 2017 Month: 8 X-DOI: 10.1080/24725854.2017.1303764 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1303764 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:8:p:768-780 Template-Type: ReDIF-Article 1.0 Author-Name: Christian Weiß Author-X-Name-First: Christian Author-X-Name-Last: Weiß Author-Name: Murat Testik Author-X-Name-First: Murat Author-X-Name-Last: Testik Title: The Poisson INAR(1) CUSUM chart under overdispersion and estimation error Abstract: The Poisson INAR(1) CUSUM chart has been proposed to monitor integer-valued autoregressive processes of order 1 with Poisson marginals. The effectiveness of this chart has been shown under the assumptions of Poisson marginals and known in-control process parameters, but these assumptions may not be very well satisfied in practical applications. This article investigates the practical issues concerning applications of the Poisson INAR(1) CUSUM chart, considering average run lengths obtained through a bivariate Markov chain approach. First, the effects of deviations from the assumed Poisson model are investigated when there is overdispersion. Design recommendations for achieving robustness are provided along with an extension, the Winsorized Poisson INAR(1) CUSUM chart. Next, analyzing the conditional average run length performance under some hypothetical cases of parameter estimation, it is shown that estimation errors may severely affect the chart’s performance. The marginal average run length performance is used to derive sample size recommendations. An example for monitoring the number of beds occupied at a hospital emergency department is used to illustrate the proposed approach. Journal: IIE Transactions Pages: 805-818 Issue: 11 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.550910 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.550910 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:11:p:805-818 Template-Type: ReDIF-Article 1.0 Author-Name: Alan Hawkes Author-X-Name-First: Alan Author-X-Name-Last: Hawkes Author-Name: Lirong Cui Author-X-Name-First: Lirong Author-X-Name-Last: Cui Author-Name: Zhihua Zheng Author-X-Name-First: Zhihua Author-X-Name-Last: Zheng Title: Modeling the evolution of system reliability performance under alternative environments Abstract: The dynamics of a system represented by a finite-state Markov process operating under two alternating regimes, for example, day/night, machine working/machine idling, etc., are modeled in this article. 
The transition rate matrices under the two regimes will usually be different. Also, the set of states of the system that are regarded as satisfactory may depend on the regime in operation: for example, a particular state of the system that may be regarded as satisfactory by day might not be tolerated at night (e.g., the headlights on a car not working). It is assumed that the regime durations are random variables, and results are obtained for the availability of such a system and probability distributions for uptimes. Results and numerical examples are also given for two special cases: (i) when the regimes are of fixed duration; and (ii) when the regime durations have negative exponential distributions. Journal: IIE Transactions Pages: 761-772 Issue: 11 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.551758 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.551758 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:11:p:761-772 Template-Type: ReDIF-Article 1.0 Author-Name: Kyungmee Kim Author-X-Name-First: Kyungmee Author-X-Name-Last: Kim Author-Name: Way Kuo Author-X-Name-First: Way Author-X-Name-Last: Kuo Title: Component and system burn-in for repairable systems Abstract: From the system viewpoint, there are at least two options to screen out defective components. First, components undergo burn-in for various times before they are placed together in a system, which is called component burn-in. Second, systems undergo burn-in after being assembled from components, which is called system burn-in. This article compares these two options for repairable systems. System reliability and rate of occurrence of system failures are used as the criteria for comparisons. To model successive system failures during system burn-in, two common types of repair are considered: component replacement and minimal repair. For each repair type, analytical results are obtained that show that both component and system burn-in have a positive impact on the criteria, with component burn-in outperforming system burn-in. These results are obtained under the assumption that each component in a system has a decreasing failure rate distribution. Accepted in 2005 for a special issue on Reliability co-edited by Hoang Pham, Rutgers University; Dong Ho Park, Hallym University, Korea; and Richard Cassady, University of Arkansas. Journal: IIE Transactions Pages: 773-782 Issue: 11 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2011.590432 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.590432 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:11:p:773-782 Template-Type: ReDIF-Article 1.0 Author-Name: Won Yun Author-X-Name-First: Won Author-X-Name-Last: Yun Author-Name: Jong-Woon Kim Author-X-Name-First: Jong-Woon Author-X-Name-Last: Kim Title: Estimating the mixture of a proportional hazards model with three types of failure data Abstract: Cox’s Proportional Hazards Model (PHM) has been widely applied in the analysis of lifetime data. It can be characterized in terms of covariates that influence the system lifetime, where the covariates describe the operating environment (e.g., temperature, pressure, humidity). When the covariates are assumed to be random variables, the hazards model becomes the mixed PHM. In this article, a parametric method is proposed to estimate the unknown parameters in the mixed PHM. 
Three types of data are considered: uncategorized field observations, categorized field observations, and categorized experimental observations. The expectation-maximization algorithm is used to handle the incomplete data problem. Simulation results are presented to illustrate the precision and some properties of the estimation results. Accepted in 2005 for a special issue on Reliability co-edited by Hoang Pham, Rutgers University; Dong Ho Park, Hallym University, Korea; and Richard Cassady, University of Arkansas. Journal: IIE Transactions Pages: 783-796 Issue: 11 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2011.590436 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.590436 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:11:p:783-796 Template-Type: ReDIF-Article 1.0 Author-Name: In Chang Author-X-Name-First: In Author-X-Name-Last: Chang Author-Name: Byung Kim Author-X-Name-First: Byung Author-X-Name-Last: Kim Title: Non-informative priors in the generalized gamma stress–strength systems Abstract: This article deals with non-informative priors for parameters when both stress and strength follow generalized gamma distributions. First, the orthogonal reparameterization is treated and then, using this reparameterization, Jeffreys’ prior, group ordering reference priors, and matching priors are derived. The propriety of posterior distributions is investigated, and marginal posterior distributions are provided under those non-informative priors. The question of whether or not the reference priors satisfy the probability matching criterion is addressed. Finally, the reference prior that satisfies the probability matching criterion is shown to be good in the sense of frequentist coverage probability of the posterior quantile. Accepted in 2005 for a special issue on Reliability co-edited by Hoang Pham, Rutgers University; Dong Ho Park, Hallym University, Korea; and Richard Cassady, University of Arkansas. Journal: IIE Transactions Pages: 797-804 Issue: 11 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2011.590439 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.590439 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:11:p:797-804 Template-Type: ReDIF-Article 1.0 Author-Name: Hao-Wei Chen Author-X-Name-First: Hao-Wei Author-X-Name-Last: Chen Author-Name: Diwakar Gupta Author-X-Name-First: Diwakar Author-X-Name-Last: Gupta Author-Name: Haresh Gurnani Author-X-Name-First: Haresh Author-X-Name-Last: Gurnani Title: Fast-ship commitment contracts in retail supply chains Abstract: This article analyzes three types of supply contracts between a supplier and a retailer when both agree as follows—if a customer experiences a stockout, then the purchased item can be shipped to the customer on an expedited basis at no extra cost. This practice is referred to as the fast-ship option in this article. In the first contract (Structure A), the supplier specifies a total supply commitment and allows the retailer to choose its split between the initial order and the amount left to satisfy fast-ship orders. In the other two contracts (Structures B and C), the supplier agrees to fully supply the retailer’s initial order but places a restriction on the quantity available for fast-ship commitment. 
The difference between the second and third contracts is that in contract Structure B the supplier moves first, whereas in contract Structure C the supplier determines its commitment after observing the retailer’s order. The supplier’s and the retailer’s optimal decisions and preferences are characterized. The question of how the supplier and the retailer may resolve their conflict regarding the preferred contract type is addressed. [Supplementary materials are available for this article. Go to the publisher’s online edition of IIE Transactions for proofs.] Journal: IIE Transactions Pages: 811-825 Issue: 8 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.705449 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.705449 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:8:p:811-825 Template-Type: ReDIF-Article 1.0 Author-Name: Mohammad Fazel-Zarandi Author-X-Name-First: Mohammad Author-X-Name-Last: Fazel-Zarandi Author-Name: Oded Berman Author-X-Name-First: Oded Author-X-Name-Last: Berman Author-Name: J. Beck Author-X-Name-First: J. Author-X-Name-Last: Beck Title: Solving a stochastic facility location/fleet management problem with logic-based Benders' decomposition Abstract: This article addresses a stochastic facility location and vehicle assignment problem in which customers are served by full return trips. The problem consists of simultaneously locating a set of facilities, determining the vehicle fleet size at each facility, and allocating customers to facilities and vehicles in the presence of random travel times. Such travel times can arise, for example, due to daily traffic patterns or weather-related disturbances. These various travel time conditions are considered as different scenarios with known probabilities. A stochastic programming model with bounded penalties is presented for the problem. In order to solve the problem, integer programming and two-level and three-level logic-based Benders' decomposition models are proposed. Computational experiments demonstrate that the Benders' models were able to substantially outperform the integer programming model in terms of both finding and verifying the optimal solution. Journal: IIE Transactions Pages: 896-911 Issue: 8 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.705452 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.705452 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:8:p:896-911 Template-Type: ReDIF-Article 1.0 Author-Name: Ningxiong Xu Author-X-Name-First: Ningxiong Author-X-Name-Last: Xu Title: Optimality of myopic inventory policy for a single-product, multi-period, stochastic inventory problem with batch ordering and capacity commitment Abstract: The single-product, multi-period, stochastic inventory problem with batch ordering has been studied for decades. However, most existing research focuses only on the case in which there is no capacity constraint on the ordered quantity. This article generalizes that research to the case in which the capacity is purchased at the beginning of a planning horizon and the total ordered quantity over the planning horizon is constrained by the capacity. The objective is to minimize the expected total cost (the cost of purchasing capacity plus the minimum expected sum of the ordering, storage, and shortage costs incurred over the planning horizon for the given capacity). 
The conditions that ensure that a myopic ordering policy is optimal for any given capacity commitment are obtained. The structure of the expected total cost is characterized under these conditions and an algorithm is presented that can be used to calculate the optimal capacity commitment. A simulation study is performed to better understand the impact of various parameters on the performance of the model. Journal: IIE Transactions Pages: 925-938 Issue: 8 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.721944 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.721944 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:8:p:925-938 Template-Type: ReDIF-Article 1.0 Author-Name: Alexandra Newman Author-X-Name-First: Alexandra Author-X-Name-Last: Newman Author-Name: Candace Yano Author-X-Name-First: Candace Author-X-Name-Last: Yano Author-Name: Enrique Rubio Author-X-Name-First: Enrique Author-X-Name-Last: Rubio Title: Mining above and below ground: timing the transition Abstract: Some mining operations eventually transition underground because surface mining becomes increasingly expensive as one progresses downward. Mining firms often delay this transition because large underground infrastructure costs are incurred up front, whereas underground extraction may occur over decades. When and how deep to install the underground infrastructure, as well as extraction schedules above and below ground, are decisions with a sizable impact on profits. This article addresses these questions while considering realistic factors, including choices of cutoff grades (minimum ore concentration at which the extracted material is processed to recover ore) and mining rates. We present a large longest-path representation of the problem and show that it can be solved via a series of small longest-path problems. The latter representation is not a decomposition of the original network but takes advantage of the structure of the problem. Together, the small networks require only a few seconds to solve. We illustrate our approach using data from a South African mine and provide insights regarding the effects of ore prices, discount rates, and their interactions on the characteristics of optimal solutions; we find that common wisdom is not always applicable. Our solutions have significantly higher profits than benchmark solutions, representing up to billions of dollars. Journal: IIE Transactions Pages: 865-882 Issue: 8 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.722810 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.722810 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:8:p:865-882 Template-Type: ReDIF-Article 1.0 Author-Name: Hark-Chin Hwang Author-X-Name-First: Hark-Chin Author-X-Name-Last: Hwang Author-Name: Wilco van den Heuvel Author-X-Name-First: Wilco Author-X-Name-Last: van den Heuvel Author-Name: Albert Wagelmans Author-X-Name-First: Albert Author-X-Name-Last: Wagelmans Title: The economic lot-sizing problem with lost sales and bounded inventory Abstract: This article considers an economic lot-sizing problem with lost sales and bounded inventory. The structural properties of optimal solutions under different assumptions on the cost functions are proved. Using these properties, new and improved algorithms for the problem are presented. 
Specifically, the first polynomial algorithm for the general lot-sizing problem with lost sales and bounded inventory is presented, and it is shown that the complexity can be reduced considerably in the special case of non-increasing lost sales costs. Moreover, with the additional assumption that there is no speculative motive for holding inventory, an existing result is improved by providing a linear time algorithm. Journal: IIE Transactions Pages: 912-924 Issue: 8 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.724187 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.724187 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:8:p:912-924 Template-Type: ReDIF-Article 1.0 Author-Name: Chung-Yee Lee Author-X-Name-First: Chung-Yee Author-X-Name-Last: Lee Author-Name: Xi Li Author-X-Name-First: Xi Author-X-Name-Last: Li Author-Name: Yapeng Xie Author-X-Name-First: Yapeng Author-X-Name-Last: Xie Title: Procurement risk management using capacitated option contracts with fixed ordering costs Abstract: This article considers a single-period, multiple-supplier procurement problem with capacity constraints and fixed ordering costs. The buyer can procure from suppliers by signing option contracts with them to meet future uncertain demand. It can purchase from the spot market for prompt delivery at an uncertain price. The objective is to find the optimal portfolio of option contracts with minimal total expected procurement cost. Three cases are discussed. For the case with constant capacity constraints and fixed ordering cost, a dynamic programming approach is used to build a cost function that is strong CK-convex and characterize the structure of the optimal procurement policy, which is similar to the (s, S) policy. However, there is no efficient algorithm for the calculation of the critical parameters or the optimal solution. For the remaining two more restricted cases, one with only capacity constraints (yet zero ordering cost) and the other one with positive ordering cost (yet without capacity constraint), two polynomial algorithms are provided that are able to solve each of them, respectively. Journal: IIE Transactions Pages: 845-864 Issue: 8 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.745203 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.745203 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:8:p:845-864 Template-Type: ReDIF-Article 1.0 Author-Name: Jian Huang Author-X-Name-First: Jian Author-X-Name-Last: Huang Author-Name: Mingming Leng Author-X-Name-First: Mingming Author-X-Name-Last: Leng Author-Name: Liping Liang Author-X-Name-First: Liping Author-X-Name-Last: Liang Author-Name: Jian Liu Author-X-Name-First: Jian Author-X-Name-Last: Liu Title: Promoting electric automobiles: supply chain analysis under a government’s subsidy incentive scheme Abstract: This article analyzes a Fuel Automobile (FA) supply chain and an electric-and-fuel automobile supply chain in a duopoly setting, under a government’s subsidy incentive scheme that is implemented to promote the use of Electric Automobiles (EAs) for the control of air pollution. Benefiting from such a scheme, each EA consumer can enjoy a subsidy from the government. It is shown that the incentive scheme is more effective in increasing the sales of EAs when consumers’ bargaining power is stronger. 
Among all components of social welfare, the incentive scheme has its largest impact on consumers’ net surplus. A higher subsidy may not result in a greater reduction in the environmental hazard. Moreover, a larger number of service and charging stations can reduce the negative impact of the incentive scheme on the FA market while enhancing its positive impact on the EA market. A benchmark with centralized control and no subsidy is also considered, and it is found that the incentive scheme is more effective in promoting EAs and protecting the environment.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the Appendices to this article.] Journal: IIE Transactions Pages: 826-844 Issue: 8 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.763003 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.763003 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:8:p:826-844 Template-Type: ReDIF-Article 1.0 Author-Name: Jiaming Qiu Author-X-Name-First: Jiaming Author-X-Name-Last: Qiu Author-Name: Thomas Sharkey Author-X-Name-First: Thomas Author-X-Name-Last: Sharkey Title: Integrated dynamic single-facility location and inventory planning problems Abstract: This article considers a class of dynamic single-facility location problems in which the facility must determine order and inventory levels to meet the dynamic demands of the customers over a finite horizon. The motivating application of this class of problems is in military logistics, and the decision makers in this area are not only concerned with the logistical costs of the facility but also with centering the facility among the customers in each time period in order to provide other services. Both the location plan and the inventory plan of the facility must be determined while considering the different metrics associated with the performance of these plans. Effective dynamic programming algorithms for this class of problems are provided for both of these metrics. These dynamic programming algorithms are utilized to construct the efficient frontier associated with these two metrics in polynomial time. Computational testing indicates that these algorithms can be used in planning activities for military logistics. [Supplemental materials are available for this article. Go to the publisher’s online edition of IIE Transactions for a worst-case example of constructing the efficient frontier.] Journal: IIE Transactions Pages: 883-895 Issue: 8 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2013.770184 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.770184 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:8:p:883-895 Template-Type: ReDIF-Article 1.0 Author-Name: Xuan Zhao Author-X-Name-First: Xuan Author-X-Name-Last: Zhao Author-Name: Derek Atkins Author-X-Name-First: Derek Author-X-Name-Last: Atkins Title: Transshipment between competing retailers Abstract: This paper is based on observations that competing retailers have the option of either agreeing in advance to transship excess inventory to each other or seeing unsatisfied customers switch to the competitor for a substitute. A transshipment game and a substitution game between competing retailers are studied.
After establishing the existence and uniqueness of a pure-strategy Nash equilibrium in retail prices and safety stocks for each game, it is shown that transshipment never leads to both a lower retail price and a higher safety stock, so transshipment never creates a situation that unambiguously benefits consumers. It is also shown that when the transshipment price is low and competition strong (perhaps because of low retailer differentiation), retailers should prefer consumer substitution. However, when the transshipment price is high and competition weak (with high retailer differentiation), transshipment benefits them. Transshipment becomes less attractive as competition increases or retailer differentiation decreases. Competitive retailers serving the same market need to be cautious in agreeing to transship because they face larger opportunity costs than retailers serving independent markets. The results provide guidance for management as well as for public policy regarding transshipment in a competitive market.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix with proofs of statements in Transshipment Between Competing Retailers] Journal: IIE Transactions Pages: 665-676 Issue: 8 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802702120 File-URL: http://hdl.handle.net/10.1080/07408170802702120 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:8:p:665-676 Template-Type: ReDIF-Article 1.0 Author-Name: Laura McLay Author-X-Name-First: Laura Author-X-Name-Last: McLay Title: A maximum expected covering location model with two types of servers Abstract: Designing emergency medical service systems that improve patient outcomes is a problem of national concern. This paper introduces the Maximum Expected Coverage Location Problem with Two Types of Servers (MEXCLP2) for determining how to optimally locate and use medical units (such as ambulances) in order to improve patient survivability and to provide insight into how to optimally coordinate multiple types of medical units. In MEXCLP2, there are two types of servers (medical units), and there are dependencies between the types of servers and between servers of the same type. A Hypercube queuing model is developed to quantify these dependencies when servicing multiple types of customers (patients). MEXCLP2 is formulated as an integer programming model, and the results of the Hypercube model provide its input parameters. MEXCLP2 is applied to emergency medical service systems with two types of medical units (ambulances and non-transport vehicles). Results are illustrated using real-world data collected from Hanover County, Virginia. Journal: IIE Transactions Pages: 730-741 Issue: 8 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802702138 File-URL: http://hdl.handle.net/10.1080/07408170802702138 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:8:p:730-741 Template-Type: ReDIF-Article 1.0 Author-Name: Özgür Yazlali Author-X-Name-First: Özgür Author-X-Name-Last: Yazlali Author-Name: Feryal Erhun Author-X-Name-First: Feryal Author-X-Name-Last: Erhun Title: Dual-supply inventory problem with capacity limits on order sizes and unrestricted ordering costs Abstract: This paper considers a single-product dual-supply problem under a periodically reviewed, finite planning horizon.
The downstream party, the manufacturer, is supplied by two upstream parties, local and global suppliers, with consecutive leadtimes. Both suppliers place per-period minimum and maximum capacity limits on the manufacturer's orders. It is shown that a two-level modified base stock policy is optimal without any restrictions on the ordering costs. Using various analytical results, it is illustrated how the optimal policy parameters change as a function of the problem parameters. To prove the analytical results, a new functional property, bounded increasing [decreasing] differences, is introduced; it is a subset of the increasing [decreasing] differences property commonly used in the literature. Numerical analyses are used to explain the trade-offs between complementary services in terms of prices, leadtimes and order capacity limits. For example, it is shown that the manufacturer follows different strategies for different product types: for inventory-cost-driven products, she relies on the local supplier to keep her supply chain responsive. Furthermore, the manufacturer procures from the local supplier as part of a balanced supply portfolio, i.e., orders from the local supplier are not limited to emergency situations. This role of the local supplier diminishes, however, as the leadtime increases. It is also found that increases in minimum capacity limits are generally more favorable to the local supplier.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resources: Appendix with additional proofs and further details of numerical analysis.] Journal: IIE Transactions Pages: 716-729 Issue: 8 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802705768 File-URL: http://hdl.handle.net/10.1080/07408170802705768 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:8:p:716-729 Template-Type: ReDIF-Article 1.0 Author-Name: Dimitris Kostamis Author-X-Name-First: Dimitris Author-X-Name-Last: Kostamis Author-Name: Izak Duenyas Author-X-Name-First: Izak Author-X-Name-Last: Duenyas Title: Quantity commitment, production and subcontracting with bargaining Abstract: This paper considers a firm that can make products in-house but also can purchase from a single qualified supplier. The firm contracts to buy a guaranteed minimum quantity from the supplier, and the supplier and the firm then establish their respective production quantities. Once the firm realizes its demand, it may contact the supplier for further units beyond the committed quantity, and a renegotiation ensues. Of particular interest is the role that in-house production capabilities and post-demand negotiation power play in determining how firms set prices and make production decisions. It is shown that “bargaining power” may actually hurt the buyer beyond a certain point as it may lead the supplier to underproduce. The situations in which suppliers would be willing to speculate and produce beyond the contracted quantities are explored. A four-stage, game-theoretic model is proposed to explore these issues and characterize the structure of optimal production levels, prices and commitment levels. Managerial insights on the effects of bargaining power, in-house capacity and production costs in such relationships are provided.[Supplementary materials are available for this article.
Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix.] Journal: IIE Transactions Pages: 677-686 Issue: 8 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170902736689 File-URL: http://hdl.handle.net/10.1080/07408170902736689 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:8:p:677-686 Template-Type: ReDIF-Article 1.0 Author-Name: Joseph Begnaud Author-X-Name-First: Joseph Author-X-Name-Last: Begnaud Author-Name: Saif Benjaafar Author-X-Name-First: Saif Author-X-Name-Last: Benjaafar Author-Name: Lisa Miller Author-X-Name-First: Lisa Author-X-Name-Last: Miller Title: The multi-level lot sizing problem with flexible production sequences Abstract: This paper considers a multi-level/multi-machine lot sizing problem with flexible production sequences, where the quantity and combination of items required to produce another item need not be unique. The problem is formulated as a mixed-integer linear program and the notion of echelon inventory is used to construct a new class of valid inequalities, which are called echelon cuts. Numerical results show the computational power of the echelon cuts in a branch-and-cut algorithm. These inequalities are compared to known cutting planes from the literature and it is found that, in addition to being strong and valid for the flexible production case, echelon cuts are at least as strong as certain classes of known cuts in the restricted fixed production setting. Journal: IIE Transactions Pages: 702-715 Issue: 8 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170902736697 File-URL: http://hdl.handle.net/10.1080/07408170902736697 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:8:p:702-715 Template-Type: ReDIF-Article 1.0 Author-Name: Xingchu Liu Author-X-Name-First: Xingchu Author-X-Name-Last: Liu Author-Name: Sila Çetinkaya Author-X-Name-First: Sila Author-X-Name-Last: Çetinkaya Title: Designing supply contracts in supplier vs buyer-driven channels: The impact of leadership, contract flexibility and information asymmetry Abstract: In the context of supply contract design, the more powerful party usually has the ability to assume the leadership position. Traditionally, the supplier (e.g., manufacturer) has been more powerful, and, hence, the existing literature in the area emphasizes supplier-driven contracts. However, in some current markets, such as the B2B grocery channel, the power has shifted to the buyer (e.g., retailer). In keeping with these trends, this paper considers a buyer-driven channel and two specific cases are analyzed where the buyer has: (i) full information; and (ii) incomplete information about the supplier's cost structure under three general contract types. The buyer's optimal contracts and profits for all of the corresponding six scenarios are derived. A comparison of the presented results with previous work on supplier-driven channels allows an analysis of the individual and joint impacts of leadership structure, contract flexibility and information asymmetry on supply chain performance. It is shown that, from the system's perspective, the buyer-driven channel is more efficient than the supplier-driven channel under an optimal one-part linear contract. 
The results confirm the common wisdom that assuming the leadership position is beneficial for the leader in both supplier- and buyer-driven channels and that, under full information, the value of leadership in either channel is greater under more general contract types. Further, under conditions of information asymmetry, it is demonstrated that leadership is not necessarily beneficial for either party, and, hence, the common wisdom is not valid. Interestingly, it is found that sometimes one party can forfeit the leadership and still achieve a higher profit.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix] Journal: IIE Transactions Pages: 687-701 Issue: 8 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170902789019 File-URL: http://hdl.handle.net/10.1080/07408170902789019 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:8:p:687-701 Template-Type: ReDIF-Article 1.0 Author-Name: Ashutosh Nayak Author-X-Name-First: Ashutosh Author-X-Name-Last: Nayak Author-Name: Seokcheon Lee Author-X-Name-First: Seokcheon Author-X-Name-Last: Lee Author-Name: John W. Sutherland Author-X-Name-First: John W. Author-X-Name-Last: Sutherland Title: Storage trade-offs and optimal load scheduling for cooperative consumers in a microgrid with different load types Abstract: Growing demand and aging infrastructure have put the current electricity grid under increased pressure. Microgrids (μGs) equipped with storage are believed to be the future of electricity grids, as they can achieve energy efficiency by integrating renewable energy sources. Storage can be used to mitigate the time-varying and intermittent nature of renewable energy sources. In this article, we consider optimal load scheduling in a μG for four different load types across different types of consumers: production line loads, non-moveable loads, time-moveable loads, and modifiable power loads. Consumers cooperate with the System Operator to schedule their loads to achieve overall energy efficiency in the μG. Two different options for charging the storage are considered: (i) charging from excess harvest in the μG and (ii) charging from the Macrogrid. We perform sensitivity analysis on the storage capacity for two pricing policies to understand its trade-offs with the total electricity cost and Peak-to-Average Ratio. Computational experiments with different problem instances demonstrate that: (i) charging storage from the Macrogrid allows higher flexibility in load scheduling; and (ii) load scheduling with cooperative consumers outperforms individualistic and random scheduling in terms of total electricity cost. Journal: IISE Transactions Pages: 397-405 Issue: 4 Volume: 51 Year: 2019 Month: 4 X-DOI: 10.1080/24725854.2018.1460517 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1460517 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
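A minimal sketch of the cooperative load-shifting idea in the Nayak, Lee, and Sutherland abstract above. The hourly loads, the greedy placement rule, and the run-everything-at-once baseline are invented for illustration; this is not the article's scheduling model.

base = [3, 3, 4, 6, 8, 9, 7, 5]       # assumed non-moveable load per hour
moveable = [2, 2, 1, 1, 3]             # assumed time-moveable load sizes

def par(profile):
    # Peak-to-Average Ratio of an hourly load profile
    return max(profile) / (sum(profile) / len(profile))

# Cooperative greedy schedule: place each moveable load (largest first)
# in the hour with the lowest accumulated load so far.
shifted = base[:]
for load in sorted(moveable, reverse=True):
    hour = min(range(len(shifted)), key=lambda t: shifted[t])
    shifted[hour] += load

# Individualistic baseline: every consumer runs moveable loads in hour 0.
naive = base[:]
naive[0] += sum(moveable)

print(f"PAR naive:   {par(naive):.2f}")
print(f"PAR shifted: {par(shifted):.2f}")

Even this toy shows the qualitative effect reported in the article: spreading moveable loads lowers the peak, and hence the Peak-to-Average Ratio, relative to uncoordinated scheduling.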
Handle: RePEc:taf:uiiexx:v:51:y:2019:i:4:p:397-405 Template-Type: ReDIF-Article 1.0 Author-Name: Yong Liang Author-X-Name-First: Yong Author-X-Name-Last: Liang Author-Name: Tianhu Deng Author-X-Name-First: Tianhu Author-X-Name-Last: Deng Author-Name: Zuo-Jun Max Shen Author-X-Name-First: Zuo-Jun Author-X-Name-Last: Max Shen Title: Demand-side energy management under time-varying prices Abstract: Under time-varying electricity prices, an end-user may be incentivized to delay flexible demands that can be shifted over time. In this article, we study the problem where each end-user adopts an energy management system that helps time the fulfillment of flexible demands. Discomfort costs are incurred if demand is not satisfied immediately upon arrival. Energy storage and trading decisions are also considered. We model the problem as a finite-horizon undiscounted Markov Decision Process, and outline a tractable approximate dynamic programming approach to overcome the curse of dimensionality. Specifically, we construct an approximation for the value-to-go function such that Bellman equations are converted into mixed-integer problems with structural properties. Finally, we numerically demonstrate that our approach achieves performance close to that of the exact approach, while dominating the myopic and no-control policies. Most importantly, the proposed approach can take advantage of the price differences and efficiently shift demands. Journal: IISE Transactions Pages: 422-436 Issue: 4 Volume: 51 Year: 2019 Month: 4 X-DOI: 10.1080/24725854.2018.1504357 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1504357 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:4:p:422-436 Template-Type: ReDIF-Article 1.0 Author-Name: Luis F. Cardona Author-X-Name-First: Luis F. Author-X-Name-Last: Cardona Author-Name: Kevin R. Gue Author-X-Name-First: Kevin R. Author-X-Name-Last: Gue Title: How to determine slot sizes in a unit-load warehouse Abstract: Storage racks in a unit-load warehouse typically have slots of equal height, whereas the unit-loads themselves have heights that vary significantly. The result of this mismatch is unused vertical space and storage areas larger than they otherwise could be. We propose the use of storage racks with multiple slot heights to better match the distribution of pallet heights. The slot profile design problem seeks the best set of slot heights and their corresponding quantities such that a desired service level is met, where service level is the probability that all pallets present in a period can be stored. Using data from several companies, we found that the potential space savings of using multiple slot heights are between 29% and 45%. Journal: IISE Transactions Pages: 355-367 Issue: 4 Volume: 51 Year: 2019 Month: 4 X-DOI: 10.1080/24725854.2018.1509159 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1509159 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:4:p:355-367 Template-Type: ReDIF-Article 1.0 Author-Name: Jin Xu Author-X-Name-First: Jin Author-X-Name-Last: Xu Author-Name: Hoang M. Tran Author-X-Name-First: Hoang M. Author-X-Name-Last: Tran Author-Name: Natarajan Gautam Author-X-Name-First: Natarajan Author-X-Name-Last: Gautam Author-Name: Satish T. S. Bukkapatnam Author-X-Name-First: Satish T. S.
Author-X-Name-Last: Bukkapatnam Title: Joint production and maintenance operations in smart custom-manufacturing systems Abstract: Machines in custom manufacturing environments with IoT (Internet-of-Things) capability are predicted to pervade enterprises. However, there is a need to develop new algorithms that reap the benefits of such technologies. We consider a system where jobs with stochastic workloads arrive at a machine in an arbitrary fashion and, upon arrival, their workload is revealed (enabled by IoT). The tool on the machine wears based on the speed at which the jobs are processed. Since tool replacement consumes a significant amount of time, we develop online algorithms that maximize the capacity of the machine by determining: (i) the speed at which each job is processed; and (ii) the epoch when the tool is replaced. We provide online approaches that leverage the ability to reveal workload in real-time and effectively balance future uncertainties. We derive asymptotic bounds for the online algorithm performance and show through numerical experimentation that even a little revealed information can result in a tremendous improvement in performance. Our online algorithms also work under realistic conditions of non-stationary batch arrivals and correlated workloads. Our work opens up research directions for a variety of operational settings that may benefit from revealing stochastic quantities by mining information. Journal: IISE Transactions Pages: 406-421 Issue: 4 Volume: 51 Year: 2019 Month: 4 X-DOI: 10.1080/24725854.2018.1511938 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1511938 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:4:p:406-421 Template-Type: ReDIF-Article 1.0 Author-Name: David Boywitz Author-X-Name-First: David Author-X-Name-Last: Boywitz Author-Name: Stefan Schwerdfeger Author-X-Name-First: Stefan Author-X-Name-Last: Schwerdfeger Author-Name: Nils Boysen Author-X-Name-First: Nils Author-X-Name-Last: Boysen Title: Sequencing of picking orders to facilitate the replenishment of A-Frame systems Abstract: A-Frame systems are among the most efficient order picking devices. Stock keeping units are stockpiled in vertical channels successively arranged along an A-shaped frame and a dispenser automatically flips the bottommost item(s) of each channel into totes or shipping cartons that pass by under the frame on a conveyor belt. Although order picking itself is fully automated, continuously replenishing hundreds of channels still remains a laborious task for human workers. We address the research question of whether the sequencing of picking orders on the A-Frame can facilitate the replenishment process. In many real-life applications, workers are assigned to fixed areas of successive channels, which they must replenish in a timely manner. Our sequencing approach, thus, aims to properly spread the replenishment events of each area over time, such that each worker has sufficient time to move from one replenishment event to the next. We formulate the resulting order sequencing problem, consider computational complexity, and suggest suitable heuristic solution procedures. In our computational study, we also use a simulation of the replenishment process to explore the gains of our approach.
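The spacing objective in the Boywitz, Schwerdfeger, and Boysen abstract above can be illustrated with a small greedy sketch. The order, stock, area, and refill data are invented, and the rule below (always pick the order that maximizes the gap since the last replenishment event in any area it would trigger) is a toy stand-in for the authors' heuristics, not a reproduction of them.

from itertools import count

def greedy_sequence(orders, stock, area_of):
    # orders: list of dicts mapping channel -> picked quantity
    # stock: channel -> current units; area_of: channel -> worker area
    remaining = list(range(len(orders)))
    last_event = {}                        # area -> position of last event
    sequence = []
    for pos in count():
        if not remaining:
            break
        def gap(i):
            # smallest spacing this order would create in any area it hits
            areas = {area_of[c] for c, q in orders[i].items()
                     if stock[c] - q <= 0}
            if not areas:
                return float("inf")        # triggers no replenishment
            return min(pos - last_event.get(a, -10**9) for a in areas)
        best = max(remaining, key=gap)
        sequence.append(best)
        remaining.remove(best)
        for c, q in orders[best].items():
            stock[c] -= q
            if stock[c] <= 0:              # replenishment event in this area
                last_event[area_of[c]] = pos
                stock[c] = 20              # assumed refill quantity
    return sequence

orders = [{"c1": 2, "c2": 1}, {"c1": 3}, {"c3": 2}, {"c2": 2}]
stock = {"c1": 4, "c2": 3, "c3": 2}
area = {"c1": "A", "c2": "A", "c3": "B"}
print(greedy_sequence(orders, stock, area))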
Journal: IISE Transactions Pages: 368-381 Issue: 4 Volume: 51 Year: 2019 Month: 4 X-DOI: 10.1080/24725854.2018.1513672 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1513672 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:4:p:368-381 Template-Type: ReDIF-Article 1.0 Author-Name: Gökçe Esenduran Author-X-Name-First: Gökçe Author-X-Name-Last: Esenduran Author-Name: Atalay Atasu Author-X-Name-First: Atalay Author-X-Name-Last: Atasu Author-Name: Luk N. Van Wassenhove Author-X-Name-First: Luk N. Author-X-Name-Last: Van Wassenhove Title: Valuable e-waste: Implications for extended producer responsibility Abstract: Extended Producer Responsibility (EPR)-based product take-back regulation holds OEMs (Original Equipment Manufacturers) of electronics responsible for the collection and recovery (e.g., recycling) of electronic waste (e-waste). This is because of the assumption that recycling these products has a net cost and that, unless regulated, these products end up in landfills and harm the environment. However, in the last decade, advances in product design and recycling technologies have allowed for profitable recycling. This change challenges the basic assumption behind such regulation and creates a competitive marketplace for e-waste. That is, OEMs subject to EPR have to compete with Independent Recyclers (IRs) in collecting and recycling e-waste. Then a natural question is whether EPR achieves its intended goal of increased landfill diversion amid such competition and what its welfare implications are, where welfare is the sum of OEM and IR profits, environmental benefit, and waste-holder surplus. Using an economic model, we find that EPR that focuses on producer responsibility alone may reduce the total landfill diversion and welfare amid competition. A possible remedy in the form of counting IRs' collection towards OEM obligations guarantees higher landfill diversion. However, EPR may continue to reduce the total welfare, particularly when OEM recycling replaces more cost-effective IR activity. Journal: IISE Transactions Pages: 382-396 Issue: 4 Volume: 51 Year: 2019 Month: 4 X-DOI: 10.1080/24725854.2018.1515515 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1515515 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:4:p:382-396 Template-Type: ReDIF-Article 1.0 Author-Name: Abhishek Shrivastava Author-X-Name-First: Abhishek Author-X-Name-Last: Shrivastava Title: Efficient construction of split-plot design catalogs using graphs Abstract: Fractional-factorial split-plot designs are useful variants of the traditional fractional-factorial designs. They incorporate practical constraints on the randomization of experiment runs. Catalogs of split-plot designs are useful to practitioners as they provide a means of selecting the best design suitable for their task. However, the construction of these catalogs is computationally challenging as it requires comparing designs for isomorphism, usually in a large collection. This article presents an efficient approach for constructing these catalogs by transforming the design isomorphism problem to a graph isomorphism problem. A new graph representation of split-plot designs is presented to achieve this aim. Using examples, it is shown how these graph representations can be extended to certain other classes of factorial designs for solving the (corresponding) design isomorphism problem.
The efficacy of this approach is demonstrated by presenting catalogs of two-level regular fractional factorial split-plot designs of up to 4096 runs, which is much larger than those available in the existing literature. Journal: IIE Transactions Pages: 1137-1152 Issue: 11 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.723840 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.723840 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:11:p:1137-1152 Template-Type: ReDIF-Article 1.0 Author-Name: Kalliopi Mylona Author-X-Name-First: Kalliopi Author-X-Name-Last: Mylona Author-Name: Harrison Macharia Author-X-Name-First: Harrison Author-X-Name-Last: Macharia Author-Name: Peter Goos Author-X-Name-First: Peter Author-X-Name-Last: Goos Title: Three-level equivalent-estimation split-plot designs based on subset and supplementary difference set designs Abstract: In many industrial experiments, complete randomization of the runs is impossible as they often involve factors whose levels are hard or costly to change. In such cases, the split-plot design is a cost-efficient alternative that reduces the number of independent settings of the hard-to-change factors. In general, the use of generalized least squares is required for model estimation based on data from split-plot designs. However, the ordinary least squares estimator is equivalent to the generalized least squares estimator for some split-plot designs, including some second-order split-plot response surface designs. These designs are called equivalent-estimation designs. An important consequence of the equivalence is that basic experimental design software can be used for model estimation. This article introduces two new families of equivalent-estimation split-plot designs, one based on subset designs and another based on supplementary difference set designs. The resulting designs complement existing catalogs of equivalent-estimation designs and allow for a more flexible choice of the number of hard-to-change factors, the number of easy-to-change factors, the number and size of whole plots, and the total sample size. It is shown that many of the newly proposed designs possess good predictive properties when compared to D-optimal split-plot designs. Journal: IIE Transactions Pages: 1153-1165 Issue: 11 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.723841 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.723841 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:11:p:1153-1165 Template-Type: ReDIF-Article 1.0 Author-Name: Nathaniel Stevens Author-X-Name-First: Nathaniel Author-X-Name-Last: Stevens Author-Name: Stefan Steiner Author-X-Name-First: Stefan Author-X-Name-Last: Steiner Author-Name: Ryan Browne Author-X-Name-First: Ryan Author-X-Name-Last: Browne Author-Name: R. MacKay Author-X-Name-First: R. Author-X-Name-Last: MacKay Title: Gauge R&R studies that incorporate baseline information Abstract: The standard plan for gauge reproducibility and repeatability studies is for each of r operators to measure k parts n times for a total of N = krn measurements. These studies are usually planned and conducted in isolation, ignoring available baseline data generated by the measurement system used for inspection or process control. This article has two goals. First, it quantifies the substantial benefits of incorporating baseline data into the analysis of measurement study data.
Second, it searches for good standard plans with a fixed total number of measurements that take into account available baseline data. Operator effects are considered fixed, and situations where the part-by-operator interaction is excluded from or included in the model are investigated. The analysis of the combined data is based on maximum likelihood estimation, and plans are ranked on the approximate standard errors of the estimates obtained from the Fisher information matrix. The benefit of incorporating baseline data into the analysis is significant, and most of the gains in precision can be obtained with small baseline sample sizes. In general, depending on the context and number of baseline measurements, the standard plan with either the minimum or maximum number of parts is recommended. Journal: IIE Transactions Pages: 1166-1175 Issue: 11 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.723842 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.723842 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:11:p:1166-1175 Template-Type: ReDIF-Article 1.0 Author-Name: Yada Zhu Author-X-Name-First: Yada Author-X-Name-Last: Zhu Author-Name: Elsayed Elsayed Author-X-Name-First: Elsayed Author-X-Name-Last: Elsayed Title: Optimal design of accelerated life testing plans under progressive censoring Abstract: This article investigates the design of accelerated life testing plans under progressive censoring when test units experience competing failure modes and are subjected to either single or multiple stress types. The optimal test plan results in failure data at accelerated conditions that can then be used to obtain accurate reliability prediction at normal conditions. The new test plan criterion is based on the minimization of the asymptotic variance of the mean time of first failure and meets practical constraints. Numerical examples based on parameters from a real test are presented to illustrate the application of the proposed method. Journal: IIE Transactions Pages: 1176-1187 Issue: 11 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.725504 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.725504 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:11:p:1176-1187 Template-Type: ReDIF-Article 1.0 Author-Name: Eunshin Byon Author-X-Name-First: Eunshin Author-X-Name-Last: Byon Title: Wind turbine operations and maintenance: a tractable approximation of dynamic decision making Abstract: Timely decision making for least-cost maintenance of wind turbines is a critical factor in reducing the total cost of wind energy. Current models for the wind industry, as well as other industries, often rely on computationally expensive methods such as dynamic programming. This article presents a tractable approximation of the dynamic decision-making process to alleviate the computational burden. Based upon an examination of decision rules in stationary weather conditions, a new set of decision rules is developed to incorporate dynamic weather changes. Since the decisions are made with a set of If–Then rules, the proposed approach is computationally efficient and easily integrated into the simulation framework. It can also benefit actual wind farm operations by providing implementable control.
Numerical studies using field data mainly from the literature demonstrate that the proposed method provides practical guidelines for reducing operational costs as well as enhancing the marketability of wind energy. [Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for detailed proofs.] Journal: IIE Transactions Pages: 1188-1201 Issue: 11 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.726819 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.726819 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:11:p:1188-1201 Template-Type: ReDIF-Article 1.0 Author-Name: İ. Altinel Author-X-Name-First: İ. Author-X-Name-Last: Altinel Author-Name: Bora Çekyay Author-X-Name-First: Bora Author-X-Name-Last: Çekyay Author-Name: Orhan Feyzioğlu Author-X-Name-First: Orhan Author-X-Name-Last: Feyzioğlu Author-Name: M. Keskin Author-X-Name-First: M. Author-X-Name-Last: Keskin Author-Name: Süleyman Özekici Author-X-Name-First: Süleyman Author-X-Name-Last: Özekici Title: The design of mission-based component test plans for series connection of subsystems Abstract: This article analyzes the mission-based component testing problem of devices that consist of a series connection of 1-out-of-n subsystems or a series connection of m-out-of-n subsystems. The device is designed to perform a mission that has a random sequence of phases with random durations. It is assumed that the deterioration of the components of the system is modulated by the mission process in such a way that the component failure rates depend on the phase that is performed. The objective is to find optimal component test times that yield desired levels of system reliability. An algorithmic procedure that is based on a column generation technique and d.c. programming is presented. This procedure eventually solves a semi-infinite linear program and it is illustrated by numerical examples. The existence of optimal component test times is discussed and sufficient conditions for the feasibility of the underlying semi-infinite linear programming model are determined. Journal: IIE Transactions Pages: 1202-1220 Issue: 11 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.733484 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.733484 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:11:p:1202-1220 Template-Type: ReDIF-Article 1.0 Author-Name: Mayank Pandey Author-X-Name-First: Mayank Author-X-Name-Last: Pandey Author-Name: Ming Zuo Author-X-Name-First: Ming Author-X-Name-Last: Zuo Author-Name: Ramin Moghaddass Author-X-Name-First: Ramin Author-X-Name-Last: Moghaddass Title: Selective maintenance modeling for a multistate system with multistate components under imperfect maintenance Abstract: In many industrial environments, maintenance is performed during successive mission breaks. In these conditions, it may not be feasible to perform all possible maintenance actions due to limited maintenance resources such as time, budget, repairman availability, etc. A subset of maintenance actions is then performed on selected components such that the system is able to meet the next mission requirement. Such a maintenance policy is called selective maintenance. In this article, a selective maintenance strategy is developed for a MultiState System (MSS). An MSS can operate at several finite levels of performance.
Previous studies on selective maintenance have solely focused on MSSs with binary components. However, components in an MSS may be in more than two possible states. Hence, a series-parallel MSS that consists of multistate components is considered in this article. Imperfect maintenance of a component is considered to be a maintenance option, along with the replacement and the do-nothing options. Maintenance resources need to be allocated such that maximum system reliability during the next mission is ensured. A universal generating function is used to determine system reliability. An illustrative example is presented that depicts the advantages of utilizing imperfect maintenance/repair options. Journal: IIE Transactions Pages: 1221-1234 Issue: 11 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.761371 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.761371 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:11:p:1221-1234 Template-Type: ReDIF-Article 1.0 Author-Name: Kamran Paynabar Author-X-Name-First: Kamran Author-X-Name-Last: Paynabar Author-Name: Jionghua Jin Author-X-Name-First: Jionghua Author-X-Name-Last: Jin Author-Name: Massimo Pacella Author-X-Name-First: Massimo Author-X-Name-Last: Pacella Title: Monitoring and diagnosis of multichannel nonlinear profile variations using uncorrelated multilinear principal component analysis Abstract: In modern manufacturing systems, online sensing is being increasingly used for process monitoring and fault diagnosis. In many practical situations, the output of the sensing system is represented by time-ordered data known as profiles or waveform signals. Most of the work reported in the literature has dealt with cases in which the production process is characterized by single profiles. In some industrial practices, however, the online sensing system is designed so that it records more than one profile at each operation cycle. For example, in multi-operation forging processes with transfer or progressive dies, four sensors are used to measure the tonnage force exerted on dies. To effectively analyze multichannel profiles, it is crucial to develop a method that considers the interrelationships between different profile channels. A method for analyzing multichannel profiles based on uncorrelated multilinear principal component analysis is proposed in this article for the purpose of characterizing process variations, fault detection, and fault diagnosis. The effectiveness of the proposed method is demonstrated by using simulations and a case study on a multi-operation forging process. Journal: IIE Transactions Pages: 1235-1247 Issue: 11 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2013.770187 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.770187 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:11:p:1235-1247 Template-Type: ReDIF-Article 1.0 Author-Name: Hui Zhao Author-X-Name-First: Hui Author-X-Name-Last: Zhao Author-Name: Kan Wu Author-X-Name-First: Kan Author-X-Name-Last: Wu Author-Name: Edward Huang Author-X-Name-First: Edward Author-X-Name-Last: Huang Title: Clinical trial supply chain design based on the Pareto-optimal trade-off between time and cost Abstract: Long duration and high cost are two key characteristics of clinical trials. 
Since the clinical trial duration is part of the limited patent life and determines the time to market, its reduction is of critical importance. Although the duration can be reduced by increasing the number of clinical sites in the supply chain, this also increases the total enrollment and operational costs. Hence, Pareto-optimal supply chain configurations are used to improve clinical trial efficiency. In this study, we propose a multi-objective clinical site selection model that considers the trade-off between time and cost of clinical trials. An efficiency curve representing the Pareto-optimal trade-off is provided for decision makers to design the supply chain. To identify all Pareto-optimal supply chain configurations efficiently, we develop an algorithm based on propositions that we establish. Optimality cuts are derived to improve the efficiency of solving this problem. Journal: IISE Transactions Pages: 512-524 Issue: 6 Volume: 50 Year: 2018 Month: 6 X-DOI: 10.1080/24725854.2017.1395978 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1395978 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:6:p:512-524 Template-Type: ReDIF-Article 1.0 Author-Name: Wancheng Feng Author-X-Name-First: Wancheng Author-X-Name-Last: Feng Author-Name: Zuo-Jun Max Shen Author-X-Name-First: Zuo-Jun Max Author-X-Name-Last: Shen Title: Process flexibility in homogeneous production–inventory systems with a single-period demand Abstract: Most studies of process flexibility have considered a make-to-order setting, whereas in practice, production systems usually have a make-to-stock environment. In this article, we investigate the process flexibility in homogeneous production–inventory systems with a single-period demand. We formulate the capacitated multi-product production–inventory system as a convex optimization model with an implicit objective function, which is then transformed into a network flow problem. We develop optimality conditions for the production decision problem based on concepts such as the flow position and the influencing set of each product node in the associated network for any given flexibility design. We further characterize the optimal production decision with an analytical approach for dedicated and completely flexible systems and two numerical algorithms for production systems with general flexibility designs. We then investigate the popular long chain design in homogeneous production–inventory systems with a comprehensive numerical study, showing that its performance is more sensitive to the asymmetry in initial product inventory than to the system cost structure. However, the long chain design is still capable of achieving most of the benefits gained from a completely flexible design in a make-to-stock environment unless the asymmetry in initial product inventory is too high. Journal: IISE Transactions Pages: 463-483 Issue: 6 Volume: 50 Year: 2018 Month: 6 X-DOI: 10.1080/24725854.2017.1404661 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1404661 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:50:y:2018:i:6:p:463-483 Template-Type: ReDIF-Article 1.0 Author-Name: Mouna Kchaou Boujelben Author-X-Name-First: Mouna Author-X-Name-Last: Kchaou Boujelben Author-Name: Youssef Boulaksil Author-X-Name-First: Youssef Author-X-Name-Last: Boulaksil Title: Modeling international facility location under uncertainty: A review, analysis, and insights Abstract: In this article, we focus on international facility location models. First, we conduct an extensive literature review on the subject and we propose a classification of the surveyed papers. The classification includes the modeling approach used, international factors, as well as dynamic and stochastic aspects of the approach. Based on the literature review, we find that international facility location problems have received little attention. In particular, dynamic facility location models under uncertainty have hardly been studied. Therefore, we develop a stochastic dynamic international facility location model, using a Mixed-Integer Linear Programming (MILP) formulation. Through a case study, we show that international factors, as well as the dynamic and stochastic components of the problem, might influence strategic location decisions. We also quantify the added value of using a stochastic model instead of a deterministic counterpart, and we derive insights regarding policies that governments can use to attract investments. Journal: IISE Transactions Pages: 535-551 Issue: 6 Volume: 50 Year: 2018 Month: 6 X-DOI: 10.1080/24725854.2017.1408165 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1408165 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:6:p:535-551 Template-Type: ReDIF-Article 1.0 Author-Name: Eghbal Rashidi Author-X-Name-First: Eghbal Author-X-Name-Last: Rashidi Author-Name: Hugh Richard Medal Author-X-Name-First: Hugh Richard Author-X-Name-Last: Medal Author-Name: Aaron Hoskins Author-X-Name-First: Aaron Author-X-Name-Last: Hoskins Title: Mitigating a pyro-terror attack using fuel treatment Abstract: We study a security problem in which an adversary seeks to attack a landscape by setting a wildfire in a strategic location, whereas wildfire managers wish to mitigate the damage of the attack by implementing a fuel treatment in the landscape. We model the problem as a min–max Stackelberg game with the goal of identifying an optimal fuel treatment plan that minimizes the impact of a pyro-terror attack. As the adversary's problem is discrete, we use a decomposition algorithm suitable for integer bi-level programs. We test our model on three test landscape cases located in the Western United States. The results indicate that fuel treatment can effectively mitigate the effects of an attack: implementing fuel treatment on 2, 5, and 10% of the landscape, on average, reduces the damage caused by a pyro-terror attack by 14, 27, and 43%, respectively. The resulting fuel treatment plan is also effective in mitigating natural wildfires with randomly placed ignition points. The pyro-terrorism mitigation problem studied in this article is equivalent to the b-interdiction-covering problem where the intermediate nodes are subject to interdiction. It can also be interpreted as the problem of identifying the b-most-vital nodes in a one-to-all shortest path problem.
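The min-max structure in the Rashidi, Medal, and Hoskins abstract above can be made concrete with a brute-force toy. The grid size, 4-neighbor spread rule, and budget below are invented, and the article itself uses a decomposition algorithm for integer bi-level programs rather than enumeration.

from itertools import combinations

N = 4                                       # assumed 4x4 landscape
CELLS = [(r, c) for r in range(N) for c in range(N)]

def burned(treated, ignition):
    # cells reached from the ignition point through untreated 4-neighbors
    if ignition in treated:
        return 0
    seen, stack = {ignition}, [ignition]
    while stack:
        r, c = stack.pop()
        for nb in [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]:
            if nb in CELLS and nb not in treated and nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return len(seen)

def worst_case(treated):
    # attacker's best response: ignite wherever the damage is maximal
    return max(burned(treated, ig) for ig in CELLS)

budget = 3                                  # assumed treatment budget
plan = min(combinations(CELLS, budget), key=worst_case)
print("treat:", plan, "worst-case burned cells:", worst_case(plan))

The defender (outer min) anticipates the attacker's best response (inner max), which is exactly the Stackelberg ordering described in the abstract; enumeration is only viable at this toy scale.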
Journal: IISE Transactions Pages: 499-511 Issue: 6 Volume: 50 Year: 2018 Month: 6 X-DOI: 10.1080/24725854.2017.1415490 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1415490 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:6:p:499-511 Template-Type: ReDIF-Article 1.0 Author-Name: Robert M. Curry Author-X-Name-First: Robert M. Author-X-Name-Last: Curry Author-Name: J. Cole Smith Author-X-Name-First: J. Cole Author-X-Name-Last: Smith Title: Models and algorithms for maximum flow problems having semicontinuous path flow constraints Abstract: This article considers a variation of the node- and arc-capacitated maximum flow problem having semicontinuous flow restrictions. A semicontinuous variable must either take a value of zero or belong to the interval [ℓ, u] for some 0 < ℓ ⩽ u. Of particular interest are problems in which the variables correspond to the amount of flow sent along an entire origin–destination path. We examine both static and dynamic variations of this problem. As opposed to solving a Mixed-Integer Programming (MIP) model, we propose a branch-and-price algorithm for the static version of this problem, including a specialized branching strategy that leverages the existence of cut-sets in an infeasible solution. For the dynamic version of the problem, we present an exact MIP formulation along with a set of symmetry-breaking and subtour-elimination constraints to improve its solvability. We demonstrate the efficacy of our algorithms on a set of randomly generated test instances. Journal: IISE Transactions Pages: 484-498 Issue: 6 Volume: 50 Year: 2018 Month: 6 X-DOI: 10.1080/24725854.2017.1415491 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1415491 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:6:p:484-498 Template-Type: ReDIF-Article 1.0 Author-Name: Jiejian Feng Author-X-Name-First: Jiejian Author-X-Name-Last: Feng Title: Fair no-unnecessary-waiting-time-FCFS allocation rule in multi-item inventory systems Abstract: The average customer waiting time is an important measure of service quality in systems in which customers may order multiple items. In this article, we introduce a new allocation discipline that not only eliminates the unnecessary waiting time of new customers by reallocating reserved items but also retains the fairness of the First-Come-First-Serve allocation rule. We develop analytic expressions for the average customer waiting time in the system with two items and unit demands under the new allocation rule, and our numerical experiments demonstrate a reduction in waiting time. We also provide an algorithm to evaluate the average customer waiting time when a system has multiple items and batch ordering; simulation results show that the average customer waiting time of some demands can be reduced by up to 99.97%. Journal: IISE Transactions Pages: 525-534 Issue: 6 Volume: 50 Year: 2018 Month: 6 X-DOI: 10.1080/24725854.2018.1431743 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1431743 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
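For reference, the standard mixed-integer linearization of the semicontinuous restriction defined in the Curry and Smith abstract above is textbook material (it is not their branch-and-price scheme): introduce a binary indicator y for each path-flow variable x and impose, in LaTeX notation,

\[ \ell\, y \le x \le u\, y, \qquad y \in \{0, 1\}, \]

so that y = 0 forces x = 0 while y = 1 restores the interval [\ell, u]. Branch-and-price avoids carrying one such binary per path explicitly, which is what makes it attractive when paths are generated dynamically.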
Handle: RePEc:taf:uiiexx:v:50:y:2018:i:6:p:525-534 Template-Type: ReDIF-Article 1.0 Author-Name: Hong-Bin Yan Author-X-Name-First: Hong-Bin Author-X-Name-Last: Yan Author-Name: Xiang-Sheng Meng Author-X-Name-First: Xiang-Sheng Author-X-Name-Last: Meng Author-Name: Tieju Ma Author-X-Name-First: Tieju Author-X-Name-Last: Ma Author-Name: Van-Nam Huynh Author-X-Name-First: Van-Nam Author-X-Name-Last: Huynh Title: An uncertain target-oriented QFD approach to service design based on service standardization with an application to bank window service Abstract: This article proposes an uncertain target-oriented QFD approach to service standardization-based service design with an application to bank window service, based on a probabilistic interpretation of weighting information. On the one hand, the proposed approach performs computations solely based on the order-based semantics of linguistic labels and comparisons of linguistic profiles, without needing to quantify the qualitative concepts. It can thus guarantee the robustness of QFD and ease of use in practice. On the other hand, the proposed approach sets uncertain targets for customer needs (WHATs) and service standards (HOWs) based on competitors’ uncertain service performance on WHATs and HOWs, and conducts satisfactory-oriented competitive analysis from the perspective of uncertain target-oriented decision analysis. Moreover, the proposed approach is applied to an empirical case study of window service design based on service standardization in the Shanghai Branch of Bank JT. The results show that the bank should pay more attention to “Service specifications”, “Service providing specifications”, and “Service evaluation and improvement standards”. Industry feedback shows that the results are consistent with service acceptance and provide valuable insights into service standardization in the bank. Comparisons show that the proposed approach performs comparably with existing methods. Journal: IISE Transactions Pages: 1167-1189 Issue: 11 Volume: 51 Year: 2019 Month: 11 X-DOI: 10.1080/24725854.2018.1542545 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1542545 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:11:p:1167-1189 Template-Type: ReDIF-Article 1.0 Author-Name: Hui Yang Author-X-Name-First: Hui Author-X-Name-Last: Yang Author-Name: Soundar Kumara Author-X-Name-First: Soundar Author-X-Name-Last: Kumara Author-Name: Satish T.S. Bukkapatnam Author-X-Name-First: Satish T.S. Author-X-Name-Last: Bukkapatnam Author-Name: Fugee Tsung Author-X-Name-First: Fugee Author-X-Name-Last: Tsung Title: The internet of things for smart manufacturing: A review Abstract: The modern manufacturing industry is investing in new technologies such as the Internet of Things (IoT), big data analytics, cloud computing and cybersecurity to cope with system complexity, increase information visibility, improve production performance, and gain competitive advantages in the global market. These advances are rapidly enabling a new generation of smart manufacturing, i.e., a cyber-physical system tightly integrating manufacturing enterprises in the physical world with virtual enterprises in cyberspace. To a great extent, realizing the full potential of cyber-physical systems depends on the development of new methodologies on the Internet of Manufacturing Things (IoMT) for data-enabled engineering innovations.
This article presents a review of the IoT technologies and systems that are the drivers and foundations of data-driven innovations in smart manufacturing. We discuss the evolution of the internet from computer networks to human networks to the latest era of smart and connected networks of manufacturing things (e.g., materials, sensors, equipment, people, products, and supply chains). In addition, we present a new framework that leverages IoMT and cloud computing to develop a virtual machine network. We further extend our review to IoMT cybersecurity issues that are of paramount importance to businesses and operations, as well as IoT and smart manufacturing policies that are laid out by governments around the world for the future of smart factories. Finally, we present the challenges and opportunities arising from IoMT. We hope this work will help catalyze more in-depth investigations and multi-disciplinary research efforts to advance IoMT technologies. Journal: IISE Transactions Pages: 1190-1216 Issue: 11 Volume: 51 Year: 2019 Month: 11 X-DOI: 10.1080/24725854.2018.1555383 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1555383 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:11:p:1190-1216 Template-Type: ReDIF-Article 1.0 Author-Name: Daniel Tonke Author-X-Name-First: Daniel Author-X-Name-Last: Tonke Author-Name: Martin Grunow Author-X-Name-First: Martin Author-X-Name-Last: Grunow Author-Name: Renzo Akkerman Author-X-Name-First: Renzo Author-X-Name-Last: Akkerman Title: Robotic-cell scheduling with pick-up constraints and uncertain processing times Abstract: Technological developments have propelled the deployment of robots in many applications, which has led to the trend of integrating an increasing number of uncertain processes into robotic and automated equipment. We contribute to this domain by considering the scheduling of a dual-gripper robotic cell. For systems with one potential bottleneck, we determine conditions under which the widely used swap sequence does not guarantee optimality or even feasibility and prove that optimal schedules can be derived under certain conditions when building on two types of slack we introduce. With the addition of a third type of slack and the concept of fixed partial schedules, we develop an offline-online scheduling approach that, in contrast with previous work, is able to deal with uncertainty in all process steps and robot handling tasks, even under pick-up constraints. The approach can deal with single- or multiple-bottleneck systems, and is the first approach that is not restricted to a single predefined sequence such as the swap sequence. Our approach is well suited for real-world applications, since it generates cyclic schedules and allows integration into commonly-used frameworks for robotic-cell scheduling and control. We demonstrate the applicability of our approach to cluster tools in semiconductor manufacturing, showing that our approach generates feasible results for all tested levels of uncertainty and optimal or near-optimal results for low levels of uncertainty. With additional symmetry-breaking constraints, the model can be efficiently applied to industrial-scale test instances. We show that reducing uncertainty to below 10% of the processing time would yield significantly improved cycle lengths and throughput.
We also demonstrate that the widely used swap sequence only finds solutions for less than 1% of the instances when strict pick-up constraints are enforced and processing times are heterogeneous. As our approach finds feasible solutions to all of these instances, it enables the application of robotic cells in a significantly broader range of environments. Journal: IISE Transactions Pages: 1217-1235 Issue: 11 Volume: 51 Year: 2019 Month: 11 X-DOI: 10.1080/24725854.2018.1555727 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1555727 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:11:p:1217-1235 Template-Type: ReDIF-Article 1.0 Author-Name: Weizhong Wang Author-X-Name-First: Weizhong Author-X-Name-Last: Wang Author-Name: Xinwang Liu Author-X-Name-First: Xinwang Author-X-Name-Last: Liu Author-Name: Jindong Qin Author-X-Name-First: Jindong Author-X-Name-Last: Qin Author-Name: Shuli Liu Author-X-Name-First: Shuli Author-X-Name-Last: Liu Title: An extended generalized TODIM for risk evaluation and prioritization of failure modes considering risk indicators interaction Abstract: Failure Mode and Effect Analysis (FMEA) is considered a proactive risk prevention and control technique that has been widely applied to identify, assess and eliminate the risk of failure modes in various fields. Nevertheless, the interactions between risk indicators and decision maker’s psychological behavior characteristics are seldom considered simultaneously in the current FMEA method. In this article, we develop a hybrid FMEA framework integrating the generalized TODIM (an acronym in Portuguese of Interactive and Multi-criteria Decision Making) method, the Choquet integral, and the Shapley index to remedy this gap. In the proposed FMEA framework, fuzzy measures and the Shapley index are used to model the interaction relationships among risk indicators and to determine the weights of these indicators. The extended generalized TODIM method with fuzzy measure and Shapley index is presented to simulate the psychological behavior characteristics of FMEA team members. It is also applied to calculate the risk priority of each failure mode. Trapezoidal Fuzzy Numbers (TrFNs) are adopted to depict the uncertainty in the risk evaluation process. Furthermore, a new risk evaluation information fusion with the TrFNs-WAIA (weighted arithmetic interaction averaging operator of the trapezoidal fuzzy numbers) operator based on the λ-Shapley Choquet integral is developed to aggregate the individual risk evaluation information of FMEA team members into a group risk evaluation matrix, which considers the potential correlations among these members. Finally, a practical example of an FMEA problem is presented to demonstrate the application and feasibility of the proposed hybrid FMEA framework, and comparison and sensitivity studies are also conducted to validate the effectiveness of the improved FMEA approach. Journal: IISE Transactions Pages: 1236-1250 Issue: 11 Volume: 51 Year: 2019 Month: 11 X-DOI: 10.1080/24725854.2018.1539889 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1539889 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:51:y:2019:i:11:p:1236-1250 Template-Type: ReDIF-Article 1.0 Author-Name: Mostafa Reisi Gahrooei Author-X-Name-First: Mostafa Reisi Author-X-Name-Last: Gahrooei Author-Name: Kamran Paynabar Author-X-Name-First: Kamran Author-X-Name-Last: Paynabar Author-Name: Massimo Pacella Author-X-Name-First: Massimo Author-X-Name-Last: Pacella Author-Name: Bianca Maria Colosimo Author-X-Name-First: Bianca Maria Author-X-Name-Last: Colosimo Title: An adaptive fused sampling approach of high-accuracy data in the presence of low-accuracy data Abstract: In several applications, a large amount of Low-Accuracy (LA) data can be acquired at a small cost. However, in many situations, such LA data is not sufficient for generating a high-fidelity model of a system. To adjust and improve the model constructed by LA data, a small sample of High-Accuracy (HA) data, which is expensive to obtain, is usually fused with the LA data. Unfortunately, current techniques assume that the HA data is already collected and concentrate on fusion strategies, without providing guidelines on how to sample the HA data. This work addresses the problem of collecting HA data adaptively and sequentially so that, when integrated with the LA data, a more accurate surrogate model is achieved. For this purpose, we propose an approach that takes advantage of the information provided by LA data as well as the previously selected HA data points and computes an improvement criterion over a design space to choose the next HA data point. The performance of the proposed method is evaluated using both simulation and case studies. The results show the benefits of the proposed method in generating an accurate surrogate model when compared to three other benchmarks. Journal: IISE Transactions Pages: 1251-1264 Issue: 11 Volume: 51 Year: 2019 Month: 11 X-DOI: 10.1080/24725854.2018.1540901 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1540901 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:11:p:1251-1264 Template-Type: ReDIF-Article 1.0 Author-Name: Chao Wang Author-X-Name-First: Chao Author-X-Name-Last: Wang Author-Name: Shiyu Zhou Author-X-Name-First: Shiyu Author-X-Name-Last: Zhou Title: Approximate multivariate distribution of key performance indicators through ordered block model and pair-copula construction Abstract: Key Performance Indicators (KPIs) play an important role in comprehending and improving a manufacturing system. This article proposes a novel method using Ordered Block Model and Pair-Copula Construction (OBM-PCC) to approximate the multivariate distribution of KPIs. The KPIs are treated as random variables in the OBM and studied under the stochastic queuing framework. The dependence structure of the OBM represents the influence flow from system input parameters to KPIs. Based on the OBM structure, the PCC is employed to simultaneously approximate the joint probability density function represented by KPIs and quantify the KPI values. The OBM-PCC model removes the redundant pair-copulas in traditional modeling, at the same time enjoying the flexibility and desirable analytical properties in KPI modeling, thus efficiently providing an accurate approximation. Extensive numerical studies are presented to demonstrate the effectiveness of the OBM-PCC model.
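Purely as an illustrative sketch of the pair-copula construction idea underlying the abstract above (this is not the authors' OBM-PCC model; the Gaussian pair-copulas, the three-variable D-vine structure, the simplifying assumption, and all parameter values are assumptions for illustration):

    import numpy as np
    from scipy.stats import norm

    def gauss_copula_pdf(u, v, rho):
        # Bivariate Gaussian copula density evaluated at (u, v).
        x, y = norm.ppf(u), norm.ppf(v)
        return np.exp((2*rho*x*y - rho**2*(x**2 + y**2))
                      / (2*(1 - rho**2))) / np.sqrt(1 - rho**2)

    def h(u, v, rho):
        # Conditional CDF ("h-function") of the Gaussian pair-copula.
        return norm.cdf((norm.ppf(u) - rho*norm.ppf(v)) / np.sqrt(1 - rho**2))

    def dvine3_pdf(u1, u2, u3, r12, r23, r13_2):
        # Three-variable D-vine density built from pair-copulas, under the
        # usual simplifying assumption on the conditional copula.
        return (gauss_copula_pdf(u1, u2, r12) * gauss_copula_pdf(u2, u3, r23)
                * gauss_copula_pdf(h(u1, u2, r12), h(u3, u2, r23), r13_2))

    print(dvine3_pdf(0.3, 0.6, 0.8, r12=0.5, r23=0.4, r13_2=0.2))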
Journal: IISE Transactions Pages: 1265-1278 Issue: 11 Volume: 51 Year: 2019 Month: 11 X-DOI: 10.1080/24725854.2018.1550826 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1550826 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:11:p:1265-1278 Template-Type: ReDIF-Article 1.0 Author-Name: Sheng-Tsaing Tseng Author-X-Name-First: Sheng-Tsaing Author-X-Name-Last: Tseng Author-Name: Fugee Tsung Author-X-Name-First: Fugee Author-X-Name-Last: Tsung Author-Name: Jo-Hua Wu Author-X-Name-First: Jo-Hua Author-X-Name-Last: Wu Title: Stability conditions and robustness analysis of a general MMSE run-to-run controller Abstract: Run-to-run (R2R) control plays a vital role in monitoring or adjusting the manufacturing process of integrated circuits. In this article we propose a generalized quasi-MMSE controller for a process whose Input-Output (I-O) model follows a general Transfer Function (TF) model with ARIMA disturbance and analytically derive the long-term stability conditions and their limiting distribution. Furthermore, we use a comprehensive simulation study to compare the control performances among several potential controllers when the process I-O model follows a TF model of order (2, 2, 0) with ARIMA disturbance of order (2, 1, 2). The results demonstrate that using improper controllers may seriously affect the control performance in terms of the long-term stability conditions and the short-term total mean squared error. Supplementary materials are available for this article. Go to this article’s online edition of IIE Transactions for Appendices. Journal: IISE Transactions Pages: 1279-1287 Issue: 11 Volume: 51 Year: 2019 Month: 11 X-DOI: 10.1080/24725854.2018.1554288 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1554288 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:11:p:1279-1287 Template-Type: ReDIF-Article 1.0 Author-Name: Changyue Song Author-X-Name-First: Changyue Author-X-Name-Last: Song Author-Name: Kaibo Liu Author-X-Name-First: Kaibo Author-X-Name-Last: Liu Author-Name: Xi Zhang Author-X-Name-First: Xi Author-X-Name-Last: Zhang Title: A generic framework for multisensor degradation modeling based on supervised classification and failure surface Abstract: In condition monitoring, multiple sensors are widely used to simultaneously collect measurements from the same unit to estimate the degradation status and predict the remaining useful life. In this article, we propose a generic framework for multisensor degradation modeling, which can be viewed as an extension of the degradation models from one-dimensional space to multi-dimensional space. Specifically, we model each sensor signal based on random-effect models and characterize failure events by a multi-dimensional failure surface, which is an extension of the conventional definition of the failure threshold for a single sensor signal. To overcome the challenges in estimating the failure surface, we transform the degradation modeling problem into a supervised classification problem, where a variety of classifiers can be incorporated to estimate the degradation status of the unit based on the underlying signal paths, i.e., the collected sensor signals after removing the noise. As a result, the proposed method gains great flexibility. It can also be used for sensor selection, can handle asynchronous sensor signals, and is easy to implement in practice. 
Simulation studies and a case study on the degradation of aircraft engines are conducted to evaluate the performance of the proposed framework in parameter estimation and prognosis. Journal: IISE Transactions Pages: 1288-1302 Issue: 11 Volume: 51 Year: 2019 Month: 11 X-DOI: 10.1080/24725854.2018.1555384 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1555384 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:11:p:1288-1302 Template-Type: ReDIF-Article 1.0 Author-Name: Tim Lamballais Tessensohn Author-X-Name-First: Tim Author-X-Name-Last: Lamballais Tessensohn Author-Name: Debjit Roy Author-X-Name-First: Debjit Author-X-Name-Last: Roy Author-Name: René B.M. De Koster Author-X-Name-First: René B.M. Author-X-Name-Last: De Koster Title: Inventory allocation in robotic mobile fulfillment systems Abstract: A Robotic Mobile Fulfillment System is a recently developed automated, parts-to-picker material handling system. Robots can move storage shelves, also known as inventory pods, between the storage area and the workstations and can continually reposition them during operations. This article shows how to optimize three key decision variables: (i) the number of pods per SKU; (ii) the ratio of the number of pick stations to replenishment stations; and (iii) the replenishment level per pod. Our results show that throughput performance improves substantially when inventory is spread across multiple pods, when an optimum ratio of the number of pick stations to replenishment stations is achieved, and when a pod is replenished before it is completely empty. This article contributes methodologically by introducing a new type of Semi-Open Queueing Network (SOQN): the cross-class matching multi-class SOQN, by deriving necessary stability conditions, and by introducing a novel interpretation of the classes. Journal: IISE Transactions Pages: 1-17 Issue: 1 Volume: 52 Year: 2020 Month: 1 X-DOI: 10.1080/24725854.2018.1560517 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1560517 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:1:p:1-17 Template-Type: ReDIF-Article 1.0 Author-Name: Mohammed M. Mabkhot Author-X-Name-First: Mohammed M. Author-X-Name-Last: Mabkhot Author-Name: Sana Kouki Amri Author-X-Name-First: Sana Kouki Author-X-Name-Last: Amri Author-Name: Saber Darmoul Author-X-Name-First: Saber Author-X-Name-Last: Darmoul Author-Name: Ali M. Al-Samhan Author-X-Name-First: Ali M. Author-X-Name-Last: Al-Samhan Author-Name: Sabeur Elkosantini Author-X-Name-First: Sabeur Author-X-Name-Last: Elkosantini Title: An ontology-based multi-criteria decision support system to reconfigure manufacturing systems Abstract: There is extensive literature on the reconfiguration of manufacturing systems; however, there are only a few decision support approaches that allow full advantage to be taken of the flexibilities introduced by this paradigm. Existing approaches do not consider expert knowledge to deal with new occurrences of similar, previously encountered disturbances. Most are preventive, off-line planning and scheduling approaches, and thus miss the updated, accurate data about plant activities that may trigger reconfiguration decisions and make such decisions worth considering.
In this article, we design a decision support system to suggest candidate configurations and select a suitable one using a knowledge-based multi-criteria decision-making approach. Expert knowledge is captured using an ontology, which is used both to monitor the manufacturing system and to make configuration recommendations. A multi-criteria decision-making approach based on TOPSIS relies on the recommended configurations to select a suitable configuration. An industrial case study shows how the suggested approach can be used to reconfigure the system at the execution stage to cope with disturbances in a reactive manner. Journal: IISE Transactions Pages: 18-42 Issue: 1 Volume: 52 Year: 2020 Month: 1 X-DOI: 10.1080/24725854.2019.1597317 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1597317 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:1:p:18-42 Template-Type: ReDIF-Article 1.0 Author-Name: Enrique del Castillo Author-X-Name-First: Enrique Author-X-Name-Last: del Castillo Author-Name: Adam Meyers Author-X-Name-First: Adam Author-X-Name-Last: Meyers Author-Name: Peng Chen Author-X-Name-First: Peng Author-X-Name-Last: Chen Title: Exponential random graph modeling of a faculty hiring network: The IEOR case Abstract: Faculty hiring networks consist of academic departments in a particular field (vertices) and directed edges from the departments that award Ph.D. degrees to students to the institutions that hire them as faculty. Study of these networks has been used in the past to find a hierarchy, or ranking, among departments, but they can also help reveal sociological aspects of a profession that have consequences in the dissemination of educational innovations and knowledge. In this article, we propose to use a new latent variable Exponential Random Graph Model (ERGM) to study faculty hiring networks. The model uses hierarchy information only as an input to the ERGM, where the hierarchy is obtained by modification of the Minimum Violation Ranking (MVR) method recently suggested in the literature. In contrast to single indices of ranking that can only capture partial features of a complex network, we demonstrate how our latent variable ERGM model provides a clustering of departments that does not necessarily align with the hierarchy as given by the MVR rankings, permits simplifying the network for ease of interpretation, and allows us to reproduce its main characteristics, including its otherwise difficult-to-model directed self-edges, which are common in faculty hiring networks. Throughout the paper, we illustrate our methods with application to the Industrial/Systems/Operations Research (IEOR) faculty hiring network, which has not been studied before. The IEOR network is contrasted with those previously studied for other related disciplines, such as Computer Science and Business. Journal: IISE Transactions Pages: 43-60 Issue: 1 Volume: 52 Year: 2020 Month: 1 X-DOI: 10.1080/24725854.2018.1557354 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1557354 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:52:y:2020:i:1:p:43-60 Template-Type: ReDIF-Article 1.0 Author-Name: Wendong Li Author-X-Name-First: Wendong Author-X-Name-Last: Li Author-Name: Peihua Qiu Author-X-Name-First: Peihua Author-X-Name-Last: Qiu Title: A general charting scheme for monitoring serially correlated data with short-memory dependence and nonparametric distributions Abstract: Traditional statistical process control charts are based on the assumptions that process observations are independent and identically normally distributed when the related process is In-Control (IC). In recent years, it has been demonstrated in the literature that these traditional control charts are unreliable when their model assumptions are violated. Several new research directions have been developed, in which new control charts have been proposed for handling cases when the IC process distribution is nonparametric with a reasonably large IC dataset, when the IC process distribution is unknown with only a small IC dataset, or when the process observations are serially correlated. However, existing control charts in these research directions can only handle one or two cases listed above, and they cannot handle all cases simultaneously. In most applications, it is typical that the IC process distribution is unknown and hard to describe by a parametric form, the process observations are serially correlated with a short-memory dependence, and only a small to moderate IC dataset is available. This article suggests an effective charting scheme to tackle such a challenging and general process monitoring problem. Numerical studies show that it works well in the different cases considered. Journal: IISE Transactions Pages: 61-74 Issue: 1 Volume: 52 Year: 2020 Month: 1 X-DOI: 10.1080/24725854.2018.1557794 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1557794 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:1:p:61-74 Template-Type: ReDIF-Article 1.0 Author-Name: Cesar Ruiz Author-X-Name-First: Cesar Author-X-Name-Last: Ruiz Author-Name: Hongwei Luo Author-X-Name-First: Hongwei Author-X-Name-Last: Luo Author-Name: Haitao Liao Author-X-Name-First: Haitao Author-X-Name-Last: Liao Author-Name: Wei Xie Author-X-Name-First: Wei Author-X-Name-Last: Xie Title: Component replacement and reordering policies for spare parts experiencing two-phase on-shelf deterioration Abstract: Spare parts provisioning strategies for critical components are essential for maintenance management in a wide range of industrial applications. In practice, some types of spare parts may suffer from on-shelf deterioration, which will not only affect the reliability and availability of the parts themselves but also the overall operational costs. In this context, a natural question that arises is: “Should we use new parts or degraded ones first for component replacement?” Indeed, developing mathematical models to answer this question has both theoretical and practical value. This article studies component replacement and spare parts reordering policies for a system with spare parts experiencing on-shelf deterioration. A two-phase continuous-time Markov chain is utilized to model the on-shelf deterioration process for each spare part, and two different part consumption strategies, i.e., Degraded-First (DF) and New-First (NF) strategies, are formulated.
An optimization framework and a solution algorithm are developed for the two strategies to determine the optimal order intervals and order quantities of spare parts for a fixed planning horizon. The monetary performance measures of the two strategies are studied and compared to a random selection (RS) alternative. Numerical examples show that when the overall system-wise replacement demand rate is approximately independent of the part consumption strategies, the DF strategy leads to the biggest savings compared to the RS strategy, while the NF strategy results in the highest expected cost among the three alternatives. Journal: IISE Transactions Pages: 75-90 Issue: 1 Volume: 52 Year: 2020 Month: 1 X-DOI: 10.1080/24725854.2018.1560751 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1560751 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:1:p:75-90 Template-Type: ReDIF-Article 1.0 Author-Name: Xiujie Zhao Author-X-Name-First: Xiujie Author-X-Name-Last: Zhao Author-Name: Kangzhe He Author-X-Name-First: Kangzhe Author-X-Name-Last: He Author-Name: Way Kuo Author-X-Name-First: Way Author-X-Name-Last: Kuo Author-Name: Min Xie Author-X-Name-First: Min Author-X-Name-Last: Xie Title: Planning accelerated reliability tests for mission-oriented systems subject to degradation and shocks Abstract: This article presents a novel accelerated reliability testing framework for mission-oriented systems. The system to be tested is assumed to suffer from cumulative degradation and traumatic shocks with increasing intensity. We propose a new optimality criterion that minimizes the asymptotic variance of the predicted reliability evaluated at the mission’s end time. Two usage scenarios are considered in this study: one assumes that systems are brand new at the start of the mission, and the other that systems are randomly selected from used ones under pre-determined policies. Optimal test plans for both scenarios are obtained via delta methods by utilizing the Fisher information. The global optimality of test plans is verified using general equivalence theorems. A revisited example of a carbon-film resistor is presented to illustrate the efficiency and robustness of optimal test plans for both new and randomly aged systems. The result shows that the test plan tends to explore lower stress levels more for randomly aged systems. Furthermore, we conduct simulation studies and explore compromise test plans for the example. Journal: IISE Transactions Pages: 91-103 Issue: 1 Volume: 52 Year: 2020 Month: 1 X-DOI: 10.1080/24725854.2019.1567958 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1567958 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:1:p:91-103 Template-Type: ReDIF-Article 1.0 Author-Name: Bei Wu Author-X-Name-First: Bei Author-X-Name-Last: Wu Author-Name: Lirong Cui Author-X-Name-First: Lirong Author-X-Name-Last: Cui Author-Name: Chen Fang Author-X-Name-First: Chen Author-X-Name-Last: Fang Title: Generalized phase-type distributions based on multi-state systems Abstract: By definition, the time until entering an absorbing state in a finite-state Markov process follows a phase-type distribution.
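Purely as an illustrative numerical sketch of this definition (not drawn from the article; the sub-generator S and initial distribution alpha below are hypothetical), the phase-type CDF F(t) = 1 - alpha' exp(St) 1 can be evaluated with a matrix exponential:

    import numpy as np
    from scipy.linalg import expm

    # Hypothetical sub-generator over the transient states and initial
    # distribution; the absorption time T then has CDF 1 - alpha' exp(St) 1.
    S = np.array([[-3.0, 1.0],
                  [0.5, -2.0]])
    alpha = np.array([0.7, 0.3])

    def phase_type_cdf(t):
        return 1.0 - alpha @ expm(S * t) @ np.ones(2)

    for t in (0.5, 1.0, 2.0):
        print(t, round(phase_type_cdf(t), 4))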
In this article we extend this distribution by adding two new events: one is that the number of transitions among states reaches a specified threshold; the other is that the sojourn time in a specified subset of states exceeds a given threshold. The system fails when it enters the absorbing state or either of the two new events happens, whichever occurs first. We develop three models in terms of three circumstances: (i) the two thresholds are constants; (ii) the number of transitions is random while the sojourn time in the specified states is constant; and (iii) the sojourn time in the specified states is random while the number of transitions is constant. To analyze the performance of such systems, we employ the theory of aggregated stochastic processes and obtain closed-form expressions for all reliability indexes, such as point-wise availabilities, various interval availabilities, and distributions of lifetimes. For models 2 and 3, we consider special distributions of the two thresholds, namely the exponential and geometric distributions, and present the corresponding formulas. Finally, some numerical examples are given to demonstrate the proposed formulas. Journal: IISE Transactions Pages: 104-119 Issue: 1 Volume: 52 Year: 2020 Month: 1 X-DOI: 10.1080/24725854.2019.1567959 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1567959 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:1:p:104-119 Template-Type: ReDIF-Article 1.0 Author-Name: Hongyue Sun Author-X-Name-First: Hongyue Author-X-Name-Last: Sun Author-Name: Ran Jin Author-X-Name-First: Ran Author-X-Name-Last: Jin Author-Name: Yuan Luo Author-X-Name-First: Yuan Author-X-Name-Last: Luo Title: Supervised subgraph augmented non-negative matrix factorization for interpretable manufacturing time series data analytics Abstract: Data analytics has been extensively used for manufacturing time series to reduce process variation and mitigate product defects. However, the majority of data analytics approaches are hard to understand for humans who do not have a data analysis background. Many manufacturing tasks, such as troubleshooting, need situation-dependent responses and are mainly performed by humans. Therefore, it is critical to discover insights from the time series and present those to a human operator in an interpretable format. We propose a novel Supervised Subgraph Augmented Non-negative Matrix Factorization (Super-SANMF) approach to represent and model manufacturing time series. We use a graph representation to approximate a human’s description of time series changing patterns and identify frequent subgraphs as common patterns. The appearances of the subgraphs in the time series are organized in a count matrix, in which each row corresponds to a time series and each column corresponds to a frequent subgraph. Super-SANMF then identifies groups of subgraphs as features that minimize the Kullback–Leibler divergence between measured and approximated matrices. The learned features can yield comparable prediction accuracy (normal or defective) in case studies, compared with the widely used basis expansion approaches (such as spline and wavelet), and are easy for humans to memorize and understand. Journal: IISE Transactions Pages: 120-131 Issue: 1 Volume: 52 Year: 2020 Month: 1 X-DOI: 10.1080/24725854.2019.1581389 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1581389 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:52:y:2020:i:1:p:120-131 Template-Type: ReDIF-Article 1.0 Author-Name: Ross Sparks Author-X-Name-First: Ross Author-X-Name-Last: Sparks Author-Name: Tim Keighley Author-X-Name-First: Tim Author-X-Name-Last: Keighley Author-Name: David Muscatello Author-X-Name-First: David Author-X-Name-Last: Muscatello Title: Exponentially weighted moving average plans for detecting unusual negative binomial counts Abstract: Exponentially Weighted Moving Average (EWMA) plans for negative binomial counts with a non-homogeneous (time-varying) mean are developed for monitoring disease counts. These plans are used to identify unusual disease outbreaks or unusual epidemics. Time-varying means are typical for disease counts. The recommended surveillance plan in this article differs from the traditional approach of using standardized forecast errors based on the normality assumption, which suffers from concerns about the validity of that assumption. The article demonstrates that the proposed EWMA plan has efficient detection properties for signaling unusually large outbreaks. These plans may be a useful tool for epidemiologists. Journal: IIE Transactions Pages: 721-733 Issue: 10 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903468597 File-URL: http://hdl.handle.net/10.1080/07408170903468597 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:10:p:721-733 Template-Type: ReDIF-Article 1.0 Author-Name: Yu Liu Author-X-Name-First: Yu Author-X-Name-Last: Liu Author-Name: Yanfeng Li Author-X-Name-First: Yanfeng Author-X-Name-Last: Li Author-Name: Hong-Zhong Huang Author-X-Name-First: Hong-Zhong Author-X-Name-Last: Huang Author-Name: Ming Zuo Author-X-Name-First: Ming Author-X-Name-Last: Zuo Author-Name: Zhanquan Sun Author-X-Name-First: Zhanquan Author-X-Name-Last: Sun Title: Optimal preventive maintenance policy under fuzzy Bayesian reliability assessment environments Abstract: Reliability assessment is an important issue in reliability engineering. Classical reliability-estimating methods are based on precise (also called “crisp”) lifetime data. It is usually assumed that the observed lifetime data take precise real numbers. Due to the lack, inaccuracy, and fluctuation of data, some collected lifetime data may be in the form of fuzzy values. Therefore, it is necessary to characterize estimation methods along a continuum that ranges from crisp to fuzzy. Bayesian methods have proved to be very useful for small data samples. There is limited literature on Bayesian reliability estimation based on fuzzy reliability data. Most reported studies in this area deal with single-parameter lifetime distributions. This article, however, proposes a new method for determining the membership functions of parameter estimates and the reliability functions of multi-parameter lifetime distributions. Also, a preventive maintenance policy is formulated using a fuzzy reliability framework. An artificial neural network is used for parameter estimation, reliability prediction, and evaluation of the expected maintenance cost. A genetic algorithm is used to find the boundary values for the membership function of the estimate of interest at any cut level. The long-run fuzzy expected replacement cost per unit time is calculated under different preventive maintenance policies, and the optimal preventive replacement interval is determined using fuzzy decision-making (ordering) methods.
The effectiveness of the proposed method is illustrated using the two-parameter Weibull distribution. Finally, a preventive maintenance strategy for a power generator is presented to illustrate the proposed models and algorithms. Journal: IIE Transactions Pages: 734-745 Issue: 10 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903539611 File-URL: http://hdl.handle.net/10.1080/07408170903539611 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:10:p:734-745 Template-Type: ReDIF-Article 1.0 Author-Name: Yibo Jiao Author-X-Name-First: Yibo Author-X-Name-Last: Jiao Author-Name: Dragan Djurdjanovic Author-X-Name-First: Dragan Author-X-Name-Last: Djurdjanovic Title: Joint allocation of measurement points and controllable tooling machines in multistage manufacturing processes Abstract: Stream of variations (SoV) modeling of multistage manufacturing processes has been studied for the past 15 years and has been used for identification of root causes of manufacturing errors, characterization and optimal allocation of measurements, process-oriented tolerance allocation, fixture design, and operation sequence optimization. Most recently, it was used for optimal in-process adjustments of programmable, controllable tooling (controllable fixtures, CNC machines) in order to enable autonomous minimization of errors in dimensional product quality. However, due to the time and resources needed to take the measurements and the high cost of controllable tooling, it is prudent to strategically position such measurements and controllable devices across a manufacturing system in a way that the ability to mitigate quality problems is maximized. In this article, a distributed stochastic feed-forward control method is devised to optimally (in the least square sense) reduce the variations in dimensional workpiece quality with a limited number of controllable tooling components and measurements distributed across a multistage manufacturing process. Based on this, a reactive tabu search algorithm is proposed to enable joint optimal allocation of measurement points and controllable tooling devices. Theoretical results are evaluated and demonstrated using the SoV model of an actual industrial process for automotive cylinder head machining. Journal: IIE Transactions Pages: 703-720 Issue: 10 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903544330 File-URL: http://hdl.handle.net/10.1080/07408170903544330 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:10:p:703-720 Template-Type: ReDIF-Article 1.0 Author-Name: George Runger Author-X-Name-First: George Author-X-Name-Last: Runger Author-Name: Zilong Lian Author-X-Name-First: Zilong Author-X-Name-Last: Lian Author-Name: Enrique Del Castillo Author-X-Name-First: Enrique Author-X-Name-Last: Del Castillo Title: Optimal multivariate bounded adjustment Abstract: A bounded adjustment strategy is an important link between statistical process control and engineering process control (or closed-loop feedback adjustment). The optimal bounded adjustment strategy for the case of a single variable has been reported in the literature and recently a number of publications have enhanced this relationship (but still for a single variable). The optimal bounded adjustment strategy for a multivariate process (of arbitrary dimension) is derived in this article.
The derivation uses optimization and exploits a symmetry relationship to obtain a closed-form solution for the optimal strategy. Furthermore, a numerical method is developed to analyze the adjustment strategy for an arbitrary number of dimensions with only a one-dimensional integral. This provides the link between statistical and engineering process control in the important multivariate case. Both infinite- and finite-horizon solutions are presented along with a numerical illustration. Journal: IIE Transactions Pages: 746-752 Issue: 10 Volume: 42 Year: 2010 X-DOI: 10.1080/07408171003670967 File-URL: http://hdl.handle.net/10.1080/07408171003670967 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:10:p:746-752 Template-Type: ReDIF-Article 1.0 Author-Name: Chanseok Park Author-X-Name-First: Chanseok Author-X-Name-Last: Park Title: Parameter estimation for the reliability of load-sharing systems Abstract: Consider a multi-component system connected in parallel. In this system, as components fail one by one, the total load or traffic applied to the system is redistributed among the remaining surviving components, which is commonly referred to as load-sharing. This article develops parameter estimation methods for this type of system. A closed-form Maximum Likelihood Estimator (MLE) and Best Unbiased Estimator (BUE) are provided under a general load-sharing rule when the underlying lifetime distribution of the components in the system is exponential. As an extension, it is assumed that the underlying lifetime distribution of the components is Weibull and it is shown that, after the shape parameter is estimated by solving the one-dimensional log-likelihood estimating equation, the closed-form MLE and conditional BUE of the rate parameter are easily obtained. The asymptotic distribution of the proposed MLE is also provided. Illustrative examples and Monte Carlo simulation results are also presented, and these substantiate the proposed methods. Journal: IIE Transactions Pages: 753-765 Issue: 10 Volume: 42 Year: 2010 X-DOI: 10.1080/07408171003670991 File-URL: http://hdl.handle.net/10.1080/07408171003670991 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:10:p:753-765 Template-Type: ReDIF-Article 1.0 Author-Name: Saumil Ambani Author-X-Name-First: Saumil Author-X-Name-Last: Ambani Author-Name: Semyon Meerkov Author-X-Name-First: Semyon Author-X-Name-Last: Meerkov Author-Name: Liang Zhang Author-X-Name-First: Liang Author-X-Name-Last: Zhang Title: Feasibility and optimization of preventive maintenance in exponential machines and serial lines Abstract: This article is devoted to a theoretical study of exponential machines with maintenance–reliability coupling, according to which the machine breakdown rate is inversely proportional to the rate of Preventive Maintenance (PM). For such a machine, a feasibility condition, under which PM leads to machine efficiency improvement, is derived. Under this condition, the optimal rate of PM is calculated, and it is shown that the efficiency of the machine with exponential and deterministic PM remains practically the same. In addition, a method for production rate evaluation in serial lines with PM-optimized machines is developed, and it is illustrated that the improvement due to PM may be as high as 40%. Finally, the robustness of the obtained results with respect to the nature of PM–reliability coupling is investigated.
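As an illustrative aside to the load-sharing record above (Park): a minimal sketch of the kind of closed-form exponential estimators that abstract describes, under one simple equal load-sharing rule; the rule, rate, load, and sample sizes are all assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(7)
    lam_true, total_load, k = 0.8, 2.0, 4   # hypothetical rate, load, components

    # Under an equal load-sharing rule, with j survivors each carrying load L/j,
    # the system hazard is j * lam * (L/j) = lam * L between failures, so the
    # k inter-failure gaps are i.i.d. Exp(lam * L).
    gaps = rng.exponential(1.0 / (lam_true * total_load), size=(5000, k))

    # Closed-form estimators per simulated system:
    lam_mle = k / (total_load * gaps.sum(axis=1))        # biased up by k/(k-1)
    lam_bue = (k - 1) / (total_load * gaps.sum(axis=1))  # unbiased for lam
    print(lam_mle.mean(), lam_bue.mean())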
Journal: IIE Transactions Pages: 766-777 Issue: 10 Volume: 42 Year: 2010 X-DOI: 10.1080/07408171003749209 File-URL: http://hdl.handle.net/10.1080/07408171003749209 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:10:p:766-777 Template-Type: ReDIF-Article 1.0 Author-Name: Lina Yu Author-X-Name-First: Lina Author-X-Name-Last: Yu Author-Name: Huasheng Yang Author-X-Name-First: Huasheng Author-X-Name-Last: Yang Author-Name: Lixin Miao Author-X-Name-First: Lixin Author-X-Name-Last: Miao Author-Name: Canrong Zhang Author-X-Name-First: Canrong Author-X-Name-Last: Zhang Title: Rollout algorithms for resource allocation in humanitarian logistics Abstract: Large-scale disasters and catastrophic events typically result in a significant shortage of critical resources, posing a great challenge to allocating limited resources among different affected areas to improve the quality of emergency logistics operations. This article focuses on the performance of resource allocation in terms of three metrics: efficiency, effectiveness, and equity, corresponding respectively to economic cost, service quality, and fairness. In particular, the effectiveness metric considers human suffering by depicting it as deprivation cost, an economic valuation measure that has recently been proposed, while the equity metric concerns service equality at the end of the planning horizon. A nonlinear integer model is first proposed and then an equivalent dynamic programming model is developed to avoid the nonlinear terms created by the introduction of the deprivation cost. The dynamic programming method can solve small-scale problems to optimality but meets difficulty when solving medium- and large-scale problems, due to the curse of dimensionality. Therefore, an approximate dynamic programming algorithm, called the rollout algorithm, is proposed to overcome this computational difficulty. The computational complexity of the proposed algorithm is theoretically analyzed. Furthermore, a modified version of the rollout algorithm is presented, with its computational complexity analyzed. Extensive numerical experiments are conducted to test the performance of the proposed algorithms, and the experimental results demonstrate that the initially proposed rollout algorithm yields optimal or near-optimal solutions within a reasonable amount of time. In addition, the impacts of some important parameters are investigated and managerial insights are drawn. Journal: IISE Transactions Pages: 887-909 Issue: 8 Volume: 51 Year: 2019 Month: 8 X-DOI: 10.1080/24725854.2017.1417655 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1417655 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:8:p:887-909 Template-Type: ReDIF-Article 1.0 Author-Name: Yinglei Li Author-X-Name-First: Yinglei Author-X-Name-Last: Li Author-Name: Sung Hoon Chung Author-X-Name-First: Sung Hoon Author-X-Name-Last: Chung Title: Disaster relief routing under uncertainty: A robust optimization approach Abstract: This article addresses the Capacitated Vehicle Routing Problem (CVRP) and the Split Delivery Vehicle Routing Problem (SDVRP) with uncertain travel times and demands when planning vehicle routes for delivering critical supplies to a population in need after a disaster.
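As an illustrative aside to the rollout record above (Yu, Yang, Miao, and Zhang): a minimal sketch of the rollout principle, scoring each candidate one-step allocation by completing it with a greedy base heuristic; the deprivation-cost form and instance data are hypothetical, not taken from the article.

    import numpy as np

    demand = np.array([4, 7, 2])          # hypothetical area demands

    def deprivation(alloc):
        # Convex proxy for suffering from unmet demand (assumed form).
        unmet = np.maximum(demand - alloc, 0)
        return float(np.sum(unmet ** 1.5))

    def greedy_complete(alloc, units):
        # Base heuristic: each remaining unit goes to the area with the
        # largest marginal reduction in deprivation cost.
        alloc = alloc.copy()
        for _ in range(units):
            gains = [deprivation(alloc) - deprivation(alloc + np.eye(3, dtype=int)[i])
                     for i in range(3)]
            alloc[int(np.argmax(gains))] += 1
        return alloc

    def rollout(units):
        alloc = np.zeros(3, dtype=int)
        for step in range(units):
            # Rollout step: evaluate each first action by the cost of its
            # greedy completion, then commit to the best action.
            scores = [deprivation(greedy_complete(alloc + np.eye(3, dtype=int)[i],
                                                  units - step - 1))
                      for i in range(3)]
            alloc[int(np.argmin(scores))] += 1
        return alloc

    best = rollout(8)
    print(best, deprivation(best))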
A robust optimization approach is used for CVRP and SDVRP considering five objective functions: minimization of the total number of vehicles deployed (minV), the total travel time/travel cost (minT), the summation of arrival times (minS), the summation of demand-weighted arrival times (minD), and the latest arrival time (minL), out of which we claim that minS, minD, and minL are critical for deliveries to be fast and fair for relief efforts whereas minV and minT are common cost-based objective functions in the traditional VRP. A new two-stage heuristic method that combines the extended insertion algorithm and tabu search is proposed to solve the VRP models for large-scale problems. The solutions of CVRP and SDVRP are compared for different examples using five different metrics, in which we show that the latter is not only capable of accommodating demand greater than the vehicle capacity but is also quite effective in mitigating demand and travel time uncertainty, and thereby outperforms the CVRP from the disaster relief routing perspective. Journal: IISE Transactions Pages: 869-886 Issue: 8 Volume: 51 Year: 2019 Month: 8 X-DOI: 10.1080/24725854.2018.1450540 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1450540 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:8:p:869-886 Template-Type: ReDIF-Article 1.0 Author-Name: Mahdi Mostajabdaveh Author-X-Name-First: Mahdi Author-X-Name-Last: Mostajabdaveh Author-Name: Walter J. Gutjahr Author-X-Name-First: Walter J. Author-X-Name-Last: Gutjahr Author-Name: F. Sibel Salman Author-X-Name-First: F. Author-X-Name-Last: Sibel Salman Title: Inequity-averse shelter location for disaster preparedness Abstract: We study the problem of selecting a set of shelter locations in preparation for natural disasters. Shelters provide victims of a disaster both a safe place to stay and relief necessities such as food, water and medical support. Individuals from the affected population living in a set of population points go to, or are transported to, the assigned open shelters. We aim to take both efficiency and inequity into account; thus, we minimize a linear combination of: (i) the mean distance between opened shelter locations and the locations of the individuals assigned to them; and (ii) Gini’s Mean Absolute Difference of these distances. We develop a stochastic programming model with a set of scenarios that consider uncertain demand and disruptions in the transportation network. A chance constraint is defined on the total cost of opening the shelters and their capacity expansion. In this stochastic context, a weighted mean of the so-called ex ante and ex post versions of the inequity-averse objective function under uncertainty is optimized. Since the model can be solved to optimality only for small instances, we develop a tailored Genetic Algorithm (GA) that utilizes a mixed-integer programming subproblem to solve this problem heuristically for larger instances. We compare the performance of the mathematical program and the GA via benchmark instances where the model can be solved to optimality or near optimality. It turns out that the GA yields small optimality gaps in much shorter time for these instances. We also run the GA on Istanbul data to derive insights that guide decision-makers in preparedness planning.
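The efficiency-plus-inequity objective described in the shelter-location abstract above reduces to a short computation; a minimal sketch, assuming the 1/n^2 pairwise-mean convention for Gini's Mean Absolute Difference and a hypothetical weight lam (the article's exact weighting is not specified here):

    import numpy as np

    def inequity_averse_objective(distances, lam=0.5):
        # Weighted sum of the mean assignment distance and Gini's Mean
        # Absolute Difference (GMD) of those distances; lam trades off
        # efficiency against equity (both conventions assumed).
        d = np.asarray(distances, dtype=float)
        mean_d = d.mean()
        gmd = np.abs(d[:, None] - d[None, :]).mean()
        return (1 - lam) * mean_d + lam * gmd

    # Hypothetical shelter-assignment distances for four individuals.
    print(inequity_averse_objective([2.0, 3.5, 3.5, 8.0], lam=0.4))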
Journal: IISE Transactions Pages: 809-829 Issue: 8 Volume: 51 Year: 2019 Month: 8 X-DOI: 10.1080/24725854.2018.1496372 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1496372 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:8:p:809-829 Template-Type: ReDIF-Article 1.0 Author-Name: Eko Setiawan Author-X-Name-First: Eko Author-X-Name-Last: Setiawan Author-Name: Jiyin Liu Author-X-Name-First: Jiyin Author-X-Name-Last: Liu Author-Name: Alan French Author-X-Name-First: Alan Author-X-Name-Last: French Title: Resource location for relief distribution and victim evacuation after a sudden-onset disaster Abstract: Quick responses to sudden-onset disasters and the effective allocation of rescue and relief resources are vital for saving lives and reducing the suffering of the victims. This article deals with the problem of positioning medical and relief distribution facilities after a sudden-onset disaster event. The background of this study is the situation in Padang Pariaman District after the West Sumatra earthquake. Three models are built for the resource location and deployment decisions. The first model reflects current practice where relief distribution and victim evacuation are performed separately and relief is distributed by distribution centers within administrative boundaries. The second model allows relief to be distributed across boundaries by any distribution center. The third model further breaks down functional barriers to allow the evacuation and relief distribution operations to share vehicles. These models are solved directly for small problems; for large problems, both a direct approach and heuristics are used. Test results on small problems show that resource sharing measures, both across boundaries and across different functions, improve on current practice. For large problems, the results give similar conclusions to those for small problems when each model is solved using its own best approach. Journal: IISE Transactions Pages: 830-846 Issue: 8 Volume: 51 Year: 2019 Month: 8 X-DOI: 10.1080/24725854.2018.1517284 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1517284 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:8:p:830-846 Template-Type: ReDIF-Article 1.0 Author-Name: Ece Aslan Author-X-Name-First: Ece Author-X-Name-Last: Aslan Author-Name: Melih Çelik Author-X-Name-First: Melih Author-X-Name-Last: Çelik Title: Pre-positioning of relief items under road/facility vulnerability with concurrent restoration and relief transportation Abstract: Planning for response to sudden-onset disasters such as earthquakes, hurricanes, or floods needs to take into account the inherent uncertainties regarding the disaster and its impacts on the affected people as well as the logistics network. This article focuses on the design of a multi-echelon humanitarian response network, where the pre-disaster decisions of warehouse location and item pre-positioning are subject to uncertainties in relief item demand and vulnerability of roads and facilities following the disaster. Once the disaster strikes, relief transportation is accompanied by simultaneous repair of blocked roads, which delays the transportation process, but gradually increases the connectivity of the network at the same time.
A two-stage stochastic program is formulated to model this system and a Sample Average Approximation (SAA) scheme is proposed for its heuristic solution. To enhance the efficiency of the SAA algorithm, we introduce a number of valid inequalities and bounds on the objective value. Computational experiments on a potential earthquake scenario in Istanbul, Turkey show that the SAA scheme is able to provide an accurate approximation of the objective function in reasonable time, and can help derive policy implications that may be applicable in preparation for similar potential disasters. Journal: IISE Transactions Pages: 847-868 Issue: 8 Volume: 51 Year: 2019 Month: 8 X-DOI: 10.1080/24725854.2018.1540900 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1540900 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:8:p:847-868 Template-Type: ReDIF-Article 1.0 Author-Name: Qingyi Wang Author-X-Name-First: Qingyi Author-X-Name-Last: Wang Author-Name: Xiaofeng Nie Author-X-Name-First: Xiaofeng Author-X-Name-Last: Nie Title: A stochastic programming model for emergency supply planning considering traffic congestion Abstract: Traffic congestion is one key factor that delays emergency supply logistics after disasters, but it is seldom explicitly considered in previous emergency supply planning models. To fill this gap, we incorporate traffic congestion effects and propose a two-stage location-allocation model that facilitates the planning of emergency supplies pre-positioning and post-disaster transportation. The formulated mixed-integer nonlinear programming model is solved by applying the generalized Benders decomposition algorithm, and the suggested approach outperforms the direct solving strategy. With a case study on a hurricane threat in the southeastern USA, we illustrate that our traffic-congestion-incorporated model is a meaningful generalization of a previous emergency supply planning model in the literature. Finally, managerial insights about the supplies pre-positioning plan and traffic control policy are discussed. Journal: IISE Transactions Pages: 910-920 Issue: 8 Volume: 51 Year: 2019 Month: 8 X-DOI: 10.1080/24725854.2019.1589657 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1589657 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:8:p:910-920 Template-Type: ReDIF-Article 1.0 Author-Name: Rajan Batta Author-X-Name-First: Rajan Author-X-Name-Last: Batta Author-Name: Simin Huang Author-X-Name-First: Simin Author-X-Name-Last: Huang Author-Name: Bahar Kara Author-X-Name-First: Bahar Author-X-Name-Last: Kara Title: Contributions to humanitarian logistics Journal: IISE Transactions Pages: 807-808 Issue: 8 Volume: 51 Year: 2019 Month: 8 X-DOI: 10.1080/24725854.2019.1610625 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1610625 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:8:p:807-808 Template-Type: ReDIF-Article 1.0 Author-Name: Qiang Huang Author-X-Name-First: Qiang Author-X-Name-Last: Huang Title: Physics-driven Bayesian hierarchical modeling of the nanowire growth process at each scale Abstract: Despite significant advances in nanoscience, current physical models are unable to predict nanomanufacturing processes under uncertainties. This research work aims to model the nanowire (NW) growth process at any scale of interest.
The main idea is to integrate available data and physical knowledge through a Bayesian hierarchical framework with consideration of scale effects. At each scale the NW growth model describes the time–space evolution of NWs at different sites on a substrate. The model consists of two major components: NW morphology and local variability. The morphology component represents the overall trend characterized by growth kinetics. The area-specific variability is less understood in nanophysics due to complex interactions among neighboring NWs. The local variability is therefore modeled by an intrinsic Gaussian Markov random field to separate it from the growth kinetics in the morphology component. Case studies are provided to illustrate the NW growth process model at coarse and fine scales, respectively. Journal: IIE Transactions Pages: 1-11 Issue: 1 Volume: 43 Year: 2010 X-DOI: 10.1080/07408171003795335 File-URL: http://hdl.handle.net/10.1080/07408171003795335 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2010:i:1:p:1-11 Template-Type: ReDIF-Article 1.0 Author-Name: Hao Peng Author-X-Name-First: Hao Author-X-Name-Last: Peng Author-Name: Qianmei Feng Author-X-Name-First: Qianmei Author-X-Name-Last: Feng Author-Name: David Coit Author-X-Name-First: David Author-X-Name-Last: Coit Title: Reliability and maintenance modeling for systems subject to multiple dependent competing failure processes Abstract: For complex systems that experience Multiple Dependent Competing Failure Processes (MDCFP), the dependency among the failure processes presents challenging issues in reliability modeling. This article develops reliability models and preventive maintenance policies for systems subject to MDCFP. Specifically, two dependent/correlated failure processes are considered: soft failures caused jointly by continuous smooth degradation and additional abrupt degradation damage due to a shock process, and catastrophic failures caused by an abrupt and sudden stress from the same shock process. A general reliability model is developed based on degradation and random shock modeling (i.e., extreme and cumulative shock models), which is then extended to a specific model for a linear degradation path and normally distributed shock load sizes and damage sizes. A preventive maintenance policy using periodic inspection is also developed by minimizing the average long-run maintenance cost rate. The developed reliability and maintenance models are demonstrated for a micro-electro-mechanical systems application example. These models can also be applied directly or customized for other complex systems that experience multiple dependent competing failure processes. Journal: IIE Transactions Pages: 12-22 Issue: 1 Volume: 43 Year: 2010 X-DOI: 10.1080/0740817X.2010.491502 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.491502 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:43:y:2010:i:1:p:12-22 Template-Type: ReDIF-Article 1.0 Author-Name: Michael Khoo Author-X-Name-First: Michael Author-X-Name-Last: Khoo Author-Name: How Lee Author-X-Name-First: How Author-X-Name-Last: Lee Author-Name: Zhang Wu Author-X-Name-First: Zhang Author-X-Name-Last: Wu Author-Name: Chung-Ho Chen Author-X-Name-First: Chung-Ho Author-X-Name-Last: Chen Author-Name: Philippe Castagliola Author-X-Name-First: Philippe Author-X-Name-Last: Castagliola Title: A synthetic double sampling control chart for the process mean Abstract: This article proposes a synthetic double sampling chart that integrates the Double Sampling (DS) chart and the conforming run length chart. The proposed procedure offers performance improvements in terms of the zero-state Average Run Length (ARL) and Average Number of Observations to Signal (ANOS). When the size of a mean shift δ (given in terms of the number of standard deviation units) is small (i.e., between 0.4 and 0.6) and the mean sample size n = 5, the proposed procedure reduces the out-of-control ARL and ANOS values by nearly half, compared with both the synthetic and DS charts. In terms of detection ability versus the Exponentially Weighted Moving Average (EWMA) chart, the synthetic DS chart is superior to both the synthetic and the DS charts, as it outperforms the EWMA chart over a larger range of δ values than either of those charts does. The proposed procedure generally outperforms the EWMA chart in the detection of a mean shift when δ is larger than 0.5 and n = 5 or 10. Although the proposed procedure is less sensitive than the EWMA chart when δ is smaller than 0.5, this may not be a setback as it is usually not desirable, from a practical viewpoint, to signal very small shifts in the process to avoid too frequent process interruptions. Instead, under such circumstances, it is better to leave the process undisturbed. Journal: IIE Transactions Pages: 23-38 Issue: 1 Volume: 43 Year: 2010 X-DOI: 10.1080/0740817X.2010.491503 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.491503 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2010:i:1:p:23-38 Template-Type: ReDIF-Article 1.0 Author-Name: Ray-Bing Chen Author-X-Name-First: Ray-Bing Author-X-Name-Last: Chen Author-Name: Weichung Wang Author-X-Name-First: Weichung Author-X-Name-Last: Wang Author-Name: C.F. Wu Author-X-Name-First: C.F. Author-X-Name-Last: Wu Title: Building surrogates with overcomplete bases in computer experiments with applications to bistable laser diodes Abstract: It is known that regular kriging models do not perform well in fitting and predicting computer experiments with complicated response surfaces. One such experiment arises in the dynamical system of bistable laser diodes for secure optical communications. The problem is challenging because the response surface is complicated, there are multiple solutions, and function evaluations are computationally expensive. Motivated by this problem, this article iteratively constructs surrogates for the complicated surface by using an overcomplete basis set. Application to the laser diodes problem shows that the proposed algorithms can solve the target problem by quickly capturing the trend of the response surface and efficiently guiding the search for desired solutions. Performance comparisons between the proposed algorithms and Gaussian process-based surrogate algorithms are presented to demonstrate the advantages of the proposed methods.
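As an illustrative sketch of fitting a surrogate with an overcomplete basis, related to the abstract above (orthogonal matching pursuit is used here as a generic stand-in, not the authors' own iterative construction; the dictionary and test function are hypothetical):

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    # Hypothetical 1-D response sampled at design points.
    x = np.linspace(0, 1, 60)
    y = np.sin(8 * x) + 0.3 * np.cos(23 * x)

    # Overcomplete dictionary: low-order polynomials plus sines/cosines at
    # many frequencies (more columns than needed to represent the surface).
    cols = [x**p for p in range(4)]
    cols += [f(2 * np.pi * k * x) for k in range(1, 15) for f in (np.sin, np.cos)]
    D = np.column_stack(cols)

    # Sparse selection of basis functions from the overcomplete set.
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=6).fit(D, y)
    print(np.flatnonzero(omp.coef_), omp.score(D, y))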
Journal: IIE Transactions Pages: 39-53 Issue: 1 Volume: 43 Year: 2010 X-DOI: 10.1080/0740817X.2010.504686 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.504686 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2010:i:1:p:39-53 Template-Type: ReDIF-Article 1.0 Author-Name: Enrique del Castillo Author-X-Name-First: Enrique Author-X-Name-Last: del Castillo Author-Name: Eduardo Santiago Author-X-Name-First: Eduardo Author-X-Name-Last: Santiago Title: A matrix-T approach to the sequential design of optimization experiments Abstract: A new approach to the sequential design of experiments for the rapid optimization of multiple response, multiple controllable factor processes is presented. The approach is Bayesian and is based on an approximation of the cost-to-go of the underlying dynamic programming formulation. The approximation is based on a matrix-T posterior predictive density for the predicted responses over the length of the experimental horizon that allows the responses to be cross-correlated and/or correlated over time. The case of an unknown variance is addressed; the assumed models are linear in the parameters but can be nonlinear in the factors. It is shown that the proposed approach has dual-control features, initially probing the process to reduce the parameter uncertainties and eventually converging to the desired solution. The convergence of the proposed method is numerically studied and convergence conditions are discussed. Performance comparisons are given with respect to a known-parameters controller, the efficient global optimization algorithm popular in sequential optimization of deterministic engineering metamodels, and the classical use of response surface designs followed by an optimization step. Journal: IIE Transactions Pages: 54-68 Issue: 1 Volume: 43 Year: 2010 X-DOI: 10.1080/0740817X.2010.504687 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.504687 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2010:i:1:p:54-68 Template-Type: ReDIF-Article 1.0 Author-Name: Wei Xie Author-X-Name-First: Wei Author-X-Name-Last: Xie Author-Name: Haitao Liao Author-X-Name-First: Haitao Author-X-Name-Last: Liao Author-Name: Xiaoyan Zhu Author-X-Name-First: Xiaoyan Author-X-Name-Last: Zhu Title: Estimation of gross profit for a new durable product considering warranty and post-warranty repairs Abstract: This article presents an integrated model to estimate the gross profit for a new durable product to be sold in a fixed sales period at a fixed price. It is assumed that the sales over time can be characterized by a stochastic Bass model in the form of a nonhomogeneous Poisson process and that the production system is a make-to-order type of system. An approximate yet accurate approach is developed to quantify the expected total cost of production involving a learning effect. Moreover, a non-renewable free minimal-repair warranty and non-free post-warranty service are considered for the repair service offered by the manufacturer. To quantify the related costs and profit, the fact that customers may not always request warranty and/or post-warranty repairs is explicitly addressed and modeled. 
A numerical example is provided to illustrate the effects of some key parameters, including the product reliability, price elasticity, and warranty period elasticity, on the optimal settings of the price, warranty period, and post-warranty charge. The saturation effect of the sales process on achieving the optimal gross profit is also discussed. Journal: IIE Transactions Pages: 87-105 Issue: 2 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2012.761370 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.761370 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:2:p:87-105 Template-Type: ReDIF-Article 1.0 Author-Name: Changliang Zou Author-X-Name-First: Changliang Author-X-Name-Last: Zou Author-Name: Sheng-Tsaing Tseng Author-X-Name-First: Sheng-Tsaing Author-X-Name-Last: Tseng Author-Name: Zhaojun Wang Author-X-Name-First: Zhaojun Author-X-Name-Last: Wang Title: Outlier detection in general profiles using penalized regression method Abstract: Profile monitoring is a technique for checking the stability of functional relationships between a response variable and one or more explanatory variables over time. The presence of outliers has seriously adverse effects on the modeling, monitoring, and diagnosis of profile data. This article proposes a new outlier detection procedure from the viewpoint of penalized regression, aiming at identifying any abnormal profile observations from a baseline dataset. Profiles are treated as high-dimensional vectors and the model is reformulated into a specific regression model. A group-type regularization is then applied that favors a sparse vector of mean shift parameters. Using the classic hard penalty yields a computationally efficient algorithm that is essentially equivalent to an iterative approach. Appropriately choosing a sole tuning parameter in the proposed procedure enables the Type-I error to be controlled and robust detection ability to be delivered. Simulation results show that the proposed method has an outstanding performance in identifying outliers in various situations compared with other existing approaches. This methodology is also extended to the case where within-profile correlations exist. Journal: IIE Transactions Pages: 106-117 Issue: 2 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2012.762486 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.762486 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:2:p:106-117 Template-Type: ReDIF-Article 1.0 Author-Name: Yu Qiu Author-X-Name-First: Yu Author-X-Name-Last: Qiu Author-Name: Daniel Nordman Author-X-Name-First: Daniel Author-X-Name-Last: Nordman Author-Name: Stephen Vardeman Author-X-Name-First: Stephen Author-X-Name-Last: Vardeman Title: A pseudo-likelihood analysis for incomplete warranty data with a time usage rate variable and production counts Abstract: The most direct purpose of collecting warranty data is tracking associated costs. However, they are also useful for quantifying a relationship between use rate and product time-to-first-failure and for estimating the distribution of product time-to-first-failure (which is modeled in this article as depending on use rate and a unit potential life length under continuous use). Employing warranty data for such reliability analysis purposes is typically complicated by the fact that some parts of some warranty data records are missing. 
A pseudo-likelihood methodology is introduced to deal with some kinds of incomplete warranty data (such as that available in a motivating real case from a machine manufacturer). A use rate distribution, the distribution of time to first failure, and the time associated with a cumulative probability of first failure are estimated, based on the proposed approach and available data. Journal: IIE Transactions Pages: 118-130 Issue: 2 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.770185 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.770185 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:2:p:118-130 Template-Type: ReDIF-Article 1.0 Author-Name: Ramin Moghaddass Author-X-Name-First: Ramin Author-X-Name-Last: Moghaddass Author-Name: Ming Zuo Author-X-Name-First: Ming Author-X-Name-Last: Zuo Title: Multistate degradation and supervised estimation methods for a condition-monitored device Abstract: Multistate reliability has received significant attention over the past decades, particularly its application to mechanical devices that degrade over time. This degradation can be represented by a multistate continuous-time stochastic process. This article considers a device with discrete multistate degradation, which is monitored by a condition monitoring indicator through an observation process. A general stochastic process called the nonhomogeneous continuous-time hidden semi-Markov process is employed to model the degradation and observation processes associated with this type of device. Then, supervised parametric and nonparametric estimation methods are developed to obtain maximum likelihood estimates of the main characteristics of the model. Finally, the correctness and empirical consistency of the estimators are evaluated using a simulation-based numerical experiment. Journal: IIE Transactions Pages: 131-148 Issue: 2 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.770188 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.770188 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:2:p:131-148 Template-Type: ReDIF-Article 1.0 Author-Name: Mingyang Li Author-X-Name-First: Mingyang Author-X-Name-Last: Li Author-Name: Qingpei Hu Author-X-Name-First: Qingpei Author-X-Name-Last: Hu Author-Name: Jian Liu Author-X-Name-First: Jian Author-X-Name-Last: Liu Title: Proportional hazard modeling for hierarchical systems with multi-level information aggregation Abstract: Reliability modeling of hierarchical systems is crucial for their health management in many mission-critical industries. Conventional statistical modeling methodologies are constrained by the limited availability of reliability test data, especially when the system-level reliability tests of such systems are expensive and/or time-consuming. This article presents a semi-parametric approach to modeling system-level reliability by systematically and explicitly aggregating lower-level information of system elements; i.e., components and/or subsystems. An innovative Bayesian inference framework is proposed to implement information aggregation based on the known multi-level structure of hierarchical systems and interaction relationships among their constituent elements. Numerical case study results demonstrate the effectiveness of the proposed method. 
Journal: IIE Transactions Pages: 149-163 Issue: 2 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.772692 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.772692 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:2:p:149-163 Template-Type: ReDIF-Article 1.0 Author-Name: Eugene Kagan Author-X-Name-First: Eugene Author-X-Name-Last: Kagan Author-Name: Irad Ben-gal Author-X-Name-First: Irad Author-X-Name-Last: Ben-gal Title: A group testing algorithm with online informational learning Abstract: An online group testing method to search for a hidden object in a discrete search space is proposed. A relevant example is the search for a nonconforming unit in a batch, although many other applications can be addressed similarly. A probability mass function is defined over the search space to represent the probability that an object (e.g., a nonconforming unit) is located at some point or in some subspace. The suggested method follows a stochastic local search procedure and can be viewed as a generalization of the Learning Real-Time A* (LRTA*) search algorithm, while using informational distance measures over the searched space. It is proved that the proposed Informational LRTA* (ILRTA*) algorithm converges and always terminates. Moreover, it is shown that under relevant assumptions, the proposed algorithm generalizes known optimal information-theoretic search procedures, such as the offline Huffman search or the generalized optimum testing algorithm. However, the ILRTA* algorithm can be applied to new situations, such as a search with side information or an online search where the probability distribution changes. The obtained results can help to bridge the gap between different search procedures that are related to quality control, artificial intelligence, and information theory. Journal: IIE Transactions Pages: 164-184 Issue: 2 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.803639 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.803639 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:2:p:164-184 Template-Type: ReDIF-Article 1.0 Author-Name: Sheng-tsaing Tseng Author-X-Name-First: Sheng-tsaing Author-X-Name-Last: Tseng Author-Name: Hsin-chao Mi Author-X-Name-First: Hsin-chao Author-X-Name-Last: Mi Title: Quasi-minimum mean square error run-to-run controller for dynamic models Abstract: The Exponentially Weighted Moving Average (EWMA) feedback controller is a popular model-based run-to-run feedback controller that primarily uses data from previous process runs to adjust the settings of the next run. The long-term stability conditions and the transient performance of the EWMA controller have received considerable attention in the literature. Most of these studies have assumed that the process Input–Output (I-O) relationship is static and simply considered colored noise models for the process disturbance. However, process dynamics and disturbance dynamics may occur simultaneously. Under such circumstances, using EWMA-based controllers will usually lead to an unsatisfactory performance. To overcome this weakness, this article first proposes a quasi-minimum mean square error controller. The theoretical results of the long-term stability conditions are derived under a first-order transfer function model together with a general disturbance model. 
Furthermore, comprehensive simulation studies are conducted to compare the proposed controller with existing popular controllers. The results demonstrate that the proposed controller outperforms these popular controllers in most cases, even when the process I-O model is mis-specified or the process parameters are not precisely estimated. Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following supplemental resources: Tables 5 to 7 and Appendix. Journal: IIE Transactions Pages: 185-196 Issue: 2 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.803643 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.803643 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:2:p:185-196 Template-Type: ReDIF-Article 1.0 Author-Name: Bing Si Author-X-Name-First: Bing Author-X-Name-Last: Si Author-Name: Gerri Lamb Author-X-Name-First: Gerri Author-X-Name-Last: Lamb Author-Name: Madeline H. Schmitt Author-X-Name-First: Madeline H. Author-X-Name-Last: Schmitt Author-Name: Jing Li Author-X-Name-First: Jing Author-X-Name-Last: Li Title: A multi-response multilevel model with application in nurse care coordination Abstract: Due to the aging of our society, patient care needs to be well coordinated within the health care team in order to effectively manage the overall health of each patient. Staff nurses, as the patient's “ever-present” health care team members, play a vital role in care coordination. The recently developed Nurse Care Coordination Instrument (NCCI) is the first of its kind that enables quantitative data to be collected to measure various aspects of nurse care coordination. Driven by this new development, we propose a multi-response multilevel model with joint fixed effect selection and joint random effect selection across multiple responses. This model is particularly suitable for the unique data structure of the NCCI due to its ability to jointly model multilevel predictors, including demographic and workload variables at the individual/nurse level and characteristics of the practice environment at the unit level, together with multiple response variables that measure the key components of nurse care coordination. We develop a Block Coordinate Descent algorithm integrated with an Expectation-Maximization framework for model estimation. Asymptotic properties are derived. Finally, we present an application to a data set collected across four U.S. hospitals using the NCCI and discuss implications of the findings. Journal: IISE Transactions Pages: 669-681 Issue: 7 Volume: 49 Year: 2017 Month: 7 X-DOI: 10.1080/24725854.2016.1263770 File-URL: http://hdl.handle.net/10.1080/24725854.2016.1263770 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:uiiexx:v:49:y:2017:i:7:p:669-681 Template-Type: ReDIF-Article 1.0 Author-Name: Raed Kontar Author-X-Name-First: Raed Author-X-Name-Last: Kontar Author-Name: Junbo Son Author-X-Name-First: Junbo Author-X-Name-Last: Son Author-Name: Shiyu Zhou Author-X-Name-First: Shiyu Author-X-Name-Last: Zhou Author-Name: Chaitanya Sankavaram Author-X-Name-First: Chaitanya Author-X-Name-Last: Sankavaram Author-Name: Yilu Zhang Author-X-Name-First: Yilu Author-X-Name-Last: Zhang Author-Name: Xinyu Du Author-X-Name-First: Xinyu Author-X-Name-Last: Du Title: Remaining useful life prediction based on the mixed effects model with mixture prior distribution Abstract: Modern engineering systems are gradually becoming more reliable and premature failure has become quite rare. As a result, degradation signal data used for prognosis are often imbalanced, as most units are reliable and only a few tend to fail at early stages of their life cycle. Such imbalanced data may hinder accurate Remaining Useful Life (RUL) prediction, especially in terms of detecting premature failures as early as possible. This aspect is detrimental to the development of cost-effective condition-based maintenance strategies. In this article, we propose a degradation signal–based RUL prediction method to address the imbalance issue in the data. The proposed method introduces a mixture prior distribution to capture the characteristics of different groups within the same population and provides an efficient and effective online prediction method for the in-service unit under monitoring. The advantageous features of the proposed method are demonstrated through a numerical study as well as a case study with real-world data in the application to the RUL prediction of automotive lead–acid batteries. Journal: IISE Transactions Pages: 682-697 Issue: 7 Volume: 49 Year: 2017 Month: 7 X-DOI: 10.1080/24725854.2016.1263771 File-URL: http://hdl.handle.net/10.1080/24725854.2016.1263771 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:7:p:682-697 Template-Type: ReDIF-Article 1.0 Author-Name: Xiaolei Fang Author-X-Name-First: Xiaolei Author-X-Name-Last: Fang Author-Name: Nagi Z. Gebraeel Author-X-Name-First: Nagi Z. Author-X-Name-Last: Gebraeel Author-Name: Kamran Paynabar Author-X-Name-First: Kamran Author-X-Name-Last: Paynabar Title: Scalable prognostic models for large-scale condition monitoring applications Abstract: High-value engineering assets are often embedded with numerous sensing technologies that monitor and track their performance. Capturing physical and performance degradation entails the use of various types of sensors that generate massive amounts of multivariate data. Building a prognostic model for such large-scale datasets, however, often presents two key challenges: how to effectively fuse the degradation signals from a large number of sensors and how to make the model scalable to large data sizes. To address the two challenges, this article presents a scalable semi-parametric statistical framework specifically designed for synthesizing and combining multistream sensor signals using two signal fusion algorithms developed from functional principal component analysis. Using the algorithms, we identify fused signal features and predict (in near real-time) the remaining lifetime of partially degraded systems using an adaptive functional (log)-location-scale regression modeling framework. 
We validate the proposed multi-sensor prognostic methodology using numerical and data-driven case studies. Journal: IISE Transactions Pages: 698-710 Issue: 7 Volume: 49 Year: 2017 Month: 7 X-DOI: 10.1080/24725854.2016.1264646 File-URL: http://hdl.handle.net/10.1080/24725854.2016.1264646 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:7:p:698-710 Template-Type: ReDIF-Article 1.0 Author-Name: Shaomin Wu Author-X-Name-First: Shaomin Author-X-Name-Last: Wu Author-Name: Frank P. A. Coolen Author-X-Name-First: Frank P. A. Author-X-Name-Last: Coolen Author-Name: Bin Liu Author-X-Name-First: Bin Author-X-Name-Last: Liu Title: Optimization of maintenance policy under parameter uncertainty using portfolio theory Abstract: In reliability mathematics, the optimization of a maintenance policy is derived based on reliability indexes, such as the reliability or its derivatives (e.g., the cumulative failure intensity or the renewal function) and the associated cost information. The reliability indexes, also referred to as models in this article, are normally estimated based on either failure data collected from the field or lab data. The uncertainty associated with them is sensitive to several factors, including the sparsity of data. For a company that maintains a number of different systems, developing maintenance policies for each individual system separately and then allocating the maintenance budget may not lead to optimal management of the model uncertainty and may lead to cost-ineffective decisions. To overcome this limitation, this article uses the concept of risk aggregation. It integrates the uncertainty of model parameters into the optimization of maintenance policies and then collectively optimizes maintenance policies for a set of different systems, using methods from portfolio theory. Numerical examples are given to illustrate the application of the proposed methods. Journal: IISE Transactions Pages: 711-721 Issue: 7 Volume: 49 Year: 2017 Month: 7 X-DOI: 10.1080/24725854.2016.1267881 File-URL: http://hdl.handle.net/10.1080/24725854.2016.1267881 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:7:p:711-721 Template-Type: ReDIF-Article 1.0 Author-Name: Xiaoyan Zhu Author-X-Name-First: Xiaoyan Author-X-Name-Last: Zhu Author-Name: Mahmoud Boushaba Author-X-Name-First: Mahmoud Author-X-Name-Last: Boushaba Title: A linear weighted system for non-homogeneous Markov-dependent components Abstract: We study a linear weighted (n, f, k) system, denoted as an L(n, f, k, w) system, and consider the situation where components are non-homogeneous and Markov-dependent. An L(n, f, k, w) system consists of n components ordered in a line, and each component u has a positive integer weight w_u for u = 1, 2, …, n, with w = (w_1, w_2, …, w_n). The L(n, f, k, w):F (G) system fails (works) if the total weight of failed (working) components is at least f or the total weight of consecutive failed (working) components is at least k. For the L(n, f, k, w):F system with non-homogeneous Markov-dependent components, we derive closed-form formulas for the system reliability, the marginal reliability importance measure of a single component, and the joint reliability importance measure of multiple components using a conditional probability generating function method. 
We extend these results to the L(n, f, k, w):G systems, the weighted consecutive-k-out-of-n systems, and the weighted f-out-of-n systems. Our numerical examples and a case study on a bridge system demonstrate the use of the derived formulas and provide insights into the L(n, f, k, w) systems and the importance measures. In addition, the two failure modes associated with the L(n, f, k, w):F systems are analyzed by comparison with the single failure mode associated with the weighted consecutive-k-out-of-n:F systems and the single failure mode associated with the weighted f-out-of-n:F systems. Journal: IISE Transactions Pages: 722-736 Issue: 7 Volume: 49 Year: 2017 Month: 7 X-DOI: 10.1080/24725854.2016.1269977 File-URL: http://hdl.handle.net/10.1080/24725854.2016.1269977 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:7:p:722-736 Template-Type: ReDIF-Article 1.0 Author-Name: Tongdan Jin Author-X-Name-First: Tongdan Author-X-Name-Last: Jin Author-Name: Heidi Taboada Author-X-Name-First: Heidi Author-X-Name-Last: Taboada Author-Name: Jose Espiritu Author-X-Name-First: Jose Author-X-Name-Last: Espiritu Author-Name: Haitao Liao Author-X-Name-First: Haitao Author-X-Name-Last: Liao Title: Allocation of reliability–redundancy and spares inventory under Poisson fleet expansion Abstract: This article proposes an integrated product-service model to ensure system availability by concurrently allocating reliability, redundancy, and spare parts for a variable fleet. In the literature, reliability and inventory allocation models are often developed based on a static installed base. The decision becomes particularly challenging during new product introduction, as the demand for spare parts is nonstationary due to the fleet expansion. Under the system availability criteria, our objective is to minimize the fleet costs associated with design, manufacturing, and after-sales support. We tackle this reliability–inventory allocation problem in two steps. First, to accommodate the fleet growth effects, the nonstationary spare parts demand stream is modeled as a sum of randomly delayed renewal processes. When the component's failure time is exponentially distributed, the mean and variance of the lead time inventory demand are explicitly derived. Second, we propose an adaptive base stock policy against the time-varying parts demand rate. A bisection search combined with metaheuristics is used to find the optimal solution. Numerical examples show that spare parts inventory results in a lower fleet cost under short-term performance-based contracts, whereas reliability–redundancy is preferred for long-term service programs. Journal: IISE Transactions Pages: 737-751 Issue: 7 Volume: 49 Year: 2017 Month: 7 X-DOI: 10.1080/24725854.2016.1271963 File-URL: http://hdl.handle.net/10.1080/24725854.2016.1271963 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:7:p:737-751 Template-Type: ReDIF-Article 1.0 Author-Name: Nima Zaerpour Author-X-Name-First: Nima Author-X-Name-Last: Zaerpour Author-Name: Yugang Yu Author-X-Name-First: Yugang Author-X-Name-Last: Yu Author-Name: René B.M. de Koster Author-X-Name-First: René B.M. 
Author-X-Name-Last: de Koster Title: Optimal two-class-based storage in a live-cube compact storage system Abstract: Live-cube compact storage systems realize high storage space utilization and high throughput, due to full automation and independent movements of unit loads in three-dimensional space. Applying an optimal two-class-based storage policy where high-turnover products are stored at locations closer to the Input/Output point significantly reduces the response time. Live-cube systems are used in various sectors, such as warehouses and distribution centers, parking systems, and container yards. The system stores unit loads, such as pallets, cars, or containers, multi-deep at multiple levels of storage grids. Each unit load is located on its own shuttle. Shuttles move unit loads at each level in the x and y directions, with a lift taking care of the movement in the z-direction. Movement of a requested unit load to the lift location is comparable to solving Sam Loyd's 15-puzzle, in which 15 numbered tiles move in a 4 × 4 grid. However, with multiple empty locations, a virtual aisle can be created to shorten the retrieval time for a requested unit load. In this article, we optimize the dimensions and zone boundary of a two-class live-cube compact storage system leading to a minimum response time. We propose a mixed-integer nonlinear model that consists of 36 sub-cases, each representing a specific configuration and first zone boundary. Properties of the optimal system are used to simplify the model without losing any optimality. The overall optimal solutions are then obtained by solving the remaining sub-cases. Although the solution procedure is tedious, we eventually obtain two sets of closed-form expressions for the optimal system dimensions and first zone boundary for any desired system size. In addition, we propose an algorithm to obtain the optimal first zone boundary for situations where the optimal system dimensions cannot be achieved. To test the effectiveness of optimal system dimensions and first zone boundary on the performance of a two-class-based live-cube system, we perform a sensitivity analysis by varying the ABC curve, system size, first zone size, and shape factor. The results show that for most cases an optimal two-class-based storage policy outperforms random storage, with up to 45% shorter expected retrieval time. Journal: IISE Transactions Pages: 653-668 Issue: 7 Volume: 49 Year: 2017 Month: 7 X-DOI: 10.1080/24725854.2016.1273564 File-URL: http://hdl.handle.net/10.1080/24725854.2016.1273564 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:7:p:653-668 Template-Type: ReDIF-Article 1.0 Author-Name: Wenpo Huang Author-X-Name-First: Wenpo Author-X-Name-Last: Huang Author-Name: Lianjie Shu Author-X-Name-First: Lianjie Author-X-Name-Last: Shu Author-Name: Junjie Cao Author-X-Name-First: Junjie Author-X-Name-Last: Cao Author-Name: Kwok-Leung Tsui Author-X-Name-First: Kwok-Leung Author-X-Name-Last: Tsui Title: Probability distribution of CUSUM charting statistics Abstract: In parallel with the conventional study of the distribution of the first time to signal, this paper investigates the probability distribution of the Cumulative Sum (CUSUM) charting statistic based on a recurrence relationship. 
The probability distribution of the CUSUM statistic can not only provide statistical significance of observations against the null hypothesis of being in control but also facilitate the analysis of the CUSUM chart in the steady-state scenario. Both the conditional case (CUSUM chart without restarting) and the cyclical case (CUSUM chart with restarting) are considered. It is shown that the distributions of the CUSUM statistics, both with and without restarting, approach a stationary distribution that is independent of their initial values. We also show that the null steady-state distribution of the unbounded CUSUM chart previously reported in the literature is a special case of the results in this article, obtained as the control limit approaches infinity. Journal: IIE Transactions Pages: 324-332 Issue: 4 Volume: 48 Year: 2016 Month: 4 X-DOI: 10.1080/0740817X.2015.1067736 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1067736 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:4:p:324-332 Template-Type: ReDIF-Article 1.0 Author-Name: Junbo Son Author-X-Name-First: Junbo Author-X-Name-Last: Son Author-Name: Patricia Flatley Brennan Author-X-Name-First: Patricia Flatley Author-X-Name-Last: Brennan Author-Name: Shiyu Zhou Author-X-Name-First: Shiyu Author-X-Name-Last: Zhou Title: Rescue inhaler usage prediction in smart asthma management systems using joint mixed effects logistic regression model Abstract: Asthma is a very common chronic lung disease that impacts a large portion of the population across all ethnic groups. Driven by developments in sensor and mobile communication technology, novel Smart Asthma Management (SAM) systems have been recently established. In SAM systems, patients can create a detailed temporal event log regarding their key health indicators through easy access to a website or their smartphone. Thus, this detailed event log can be obtained inexpensively and aggregated for a large number of patients to form a centralized database for SAM systems. Taking advantage of the data available in SAM systems, we propose an individualized prognostic model based on the unique rescue inhaler usage profile of each individual patient. The model jointly combines two statistical models into a unified prognostic framework. The application of the proposed model to SAM is illustrated in this article and the effectiveness of the method is shown by both a numerical study and a case study that uses real-world data. Journal: IIE Transactions Pages: 333-346 Issue: 4 Volume: 48 Year: 2016 Month: 4 X-DOI: 10.1080/0740817X.2015.1078014 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1078014 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:4:p:333-346 Template-Type: ReDIF-Article 1.0 Author-Name: Wujun Si Author-X-Name-First: Wujun Author-X-Name-Last: Si Author-Name: Qingyu Yang Author-X-Name-First: Qingyu Author-X-Name-Last: Yang Author-Name: Xin Wu Author-X-Name-First: Xin Author-X-Name-Last: Wu Title: A physical–statistical model of overload retardation for crack propagation and application in reliability estimation Abstract: Crack propagation subjected to fatigue loading has been widely studied under the assumption that loads are ideally cyclic with a constant amplitude. In the real world, loads are not exactly cyclic, due to either environmental randomness or artificial designs. Loads with amplitudes higher than a threshold limit are referred to as overloads. 
Researchers have revealed that for some materials, overloads decelerate rather than accelerate the crack propagation process. This effect is called overload retardation. Ignoring overload retardation in reliability analysis can result in a biased estimation of product life. In the literature, however, research on overload retardation mainly focuses on studying its mechanical properties without modeling the effect quantitatively; the effect therefore cannot be incorporated into the reliability analysis of fatigue failures. In this article, we propose a physical–statistical model to quantitatively describe overload retardation considering random errors. A maximum likelihood estimation approach is developed to estimate the model parameters. In addition, a likelihood ratio test is developed to determine whether the tested material has either an overload retardation effect or an overload acceleration effect. The proposed model is further applied to reliability estimation of crack failures when a material has the overload retardation effect. Specifically, two algorithms are developed to calculate the failure time cumulative distribution function and the corresponding pointwise confidence intervals. Finally, designed experiments and simulation studies are conducted to verify and illustrate the developed methods. Journal: IIE Transactions Pages: 347-358 Issue: 4 Volume: 48 Year: 2016 Month: 4 X-DOI: 10.1080/0740817X.2015.1078525 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1078525 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:4:p:347-358 Template-Type: ReDIF-Article 1.0 Author-Name: Lujia Wang Author-X-Name-First: Lujia Author-X-Name-Last: Wang Author-Name: Qingpei Hu Author-X-Name-First: Qingpei Author-X-Name-Last: Hu Author-Name: Jian Liu Author-X-Name-First: Jian Author-X-Name-Last: Liu Title: Software reliability growth modeling and analysis with dual fault detection and correction processes Abstract: Computer software is widely applied in safety-critical systems. The ever-increasing complexity of software systems makes it extremely difficult to ensure software reliability, and this problem has drawn considerable attention from both industry and academia. Most software reliability models are built on a common assumption that the detected faults are immediately corrected; thus, the fault detection and correction processes can be regarded as the same process. In this article, a comprehensive study is conducted to analyze the time dependencies between the fault detection and correction processes. The model parameters are estimated using the Maximum Likelihood Estimation (MLE) method, which is based on an explicit likelihood function combining both the fault detection and correction processes. Numerical case studies are conducted under the proposed modeling framework. The obtained results demonstrate that the proposed MLE method can be applied to more general situations and provide more accurate results. Furthermore, the predictive capability of the MLE method is compared with that of the Least Squares Estimation (LSE) method. The prediction results indicate that the proposed MLE method performs better than the LSE method when the data are not large in size or are collected in the early phase of software testing. 
Journal: IIE Transactions Pages: 359-370 Issue: 4 Volume: 48 Year: 2016 Month: 4 X-DOI: 10.1080/0740817X.2015.1096432 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1096432 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:4:p:359-370 Template-Type: ReDIF-Article 1.0 Author-Name: Dan Zhang Author-X-Name-First: Dan Author-X-Name-Last: Zhang Author-Name: Haitao Liao Author-X-Name-First: Haitao Author-X-Name-Last: Liao Title: A fully integrated double-loop approach to the design of statistically and energy efficient accelerated life tests Abstract: Accelerated Life Testing (ALT) has been widely used in reliability estimation for highly reliable products. To improve the efficiency of ALT, many optimum ALT design methods have been developed. However, most of the existing methods solely focus on the reliability estimation precision without considering the significant amounts of energy consumed by the equipment that creates the harsher-than-normal operating conditions in such experiments. In order to ensure the reliability estimation precision while reducing the total energy consumption, this article presents a fully integrated double-loop approach to the design of statistically and energy-efficient ALT experiments. The new experimental design method is formulated as a multi-objective optimization problem with three objectives: (i) minimizing the experiment's total energy consumption; (ii) maximizing the reliability estimation precision; and (iii) minimizing the tracking error between the desired and actual stress loadings used in the experiment. A controlled elitist non-dominated sorting genetic algorithm is utilized to solve such large-scale optimization problems involving computer simulation. Numerical examples are provided to demonstrate the effectiveness and possible applications of the proposed experimental design method. Compared with the traditional and sequential optimal ALT planning methods, this method further improves the energy and statistical efficiency of ALT experiments. Journal: IIE Transactions Pages: 371-388 Issue: 4 Volume: 48 Year: 2016 Month: 4 X-DOI: 10.1080/0740817X.2015.1109738 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1109738 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:4:p:371-388 Template-Type: ReDIF-Article 1.0 Author-Name: Shan Li Author-X-Name-First: Shan Author-X-Name-Last: Li Author-Name: Yong Chen Author-X-Name-First: Yong Author-X-Name-Last: Chen Title: A Bayesian variable selection method for joint diagnosis of manufacturing process and sensor faults Abstract: This article presents a Bayesian variable selection–based diagnosis approach to simultaneously identify both process mean shift faults and sensor mean shift faults in manufacturing processes. The proposed method directly models the probability of fault occurrence and can easily incorporate prior knowledge on this probability. Important concepts are introduced to understand the diagnosability of the proposed method. A guideline on how to select the values of hyper-parameters is given. A conditional maximum likelihood method is proposed as an alternative method to provide robustness to the selection of some key model parameters. 
Systematic simulation studies are used to provide insights into the relationship between the success of the diagnosis method and related system structure characteristics. A real assembly example is used to demonstrate the effectiveness of the proposed diagnosis method. Journal: IIE Transactions Pages: 313-323 Issue: 4 Volume: 48 Year: 2016 Month: 4 X-DOI: 10.1080/0740817X.2015.1109739 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1109739 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:4:p:313-323 Template-Type: ReDIF-Article 1.0 Author-Name: Jingyuan Shen Author-X-Name-First: Jingyuan Author-X-Name-Last: Shen Author-Name: Lirong Cui Author-X-Name-First: Lirong Author-X-Name-Last: Cui Title: Reliability performance for dynamic systems with cycles of regimes Abstract: The environment in which a system operates can have a crucial impact on its performance; for example, a machine operating in mild or harsh environments or the flow of a river changing between seasons. In this article, we consider a dynamic reliability system operating under a cycle of K regimes, which is modeled as a continuous-time Markov process with K different transition rate matrices being used to describe the various regimes. Results for the availability of such a system and probability distributions of the first uptime are given. Three special cases, which arise when the durations of the regimes are constant and when the numbers of up states in different regimes are identical or increasing, are considered in detail. Finally, some numerical examples are shown to validate the proposed approach. Journal: IIE Transactions Pages: 389-402 Issue: 4 Volume: 48 Year: 2016 Month: 4 X-DOI: 10.1080/0740817X.2015.1110266 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1110266 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:4:p:389-402 Template-Type: ReDIF-Article 1.0 Author-Name: Kamal Mannar Author-X-Name-First: Kamal Author-X-Name-Last: Mannar Author-Name: Darek Ceglarek Author-X-Name-First: Darek Author-X-Name-Last: Ceglarek Title: Functional capability space and optimum process adjustments for manufacturing processes with in-specs failure Abstract: This paper introduces a methodology for functional capability analysis and optimal process adjustment for products with failures that occur when design parameters and process variables are within tolerance limits (in-specs). The proposed methodology defines a multivariate functional capability space (FC-Space) using a mathematical morphology operation, the Minkowski sum, in order to represent a unified model with (i) multidimensional design tolerance space; (ii) in-specs failure region(s); and (iii) non-parametric, multivariate process measurements represented as Kernel Density Estimates (KDEs). The defined FC-Space allows the determination of a desired process fallout rate in the case of products with field failures that occur within design tolerances (in-specs). The outlined process adjustment approach identifies the optimum position of the process mean in order to minimize the overlap between the KDEs and in-specs failure regions, i.e., achieve the minimum possible process fallout rate for current process variation. 
The FC-Space-based process adjustment methodology is illustrated using a case study from the electronics industry where the in-specs failure region is identified based on warranty information analysis. Journal: IIE Transactions Pages: 95-106 Issue: 2 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170902789027 File-URL: http://hdl.handle.net/10.1080/07408170902789027 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:2:p:95-106 Template-Type: ReDIF-Article 1.0 Author-Name: Thuntee Sukchotrat Author-X-Name-First: Thuntee Author-X-Name-Last: Sukchotrat Author-Name: Seoung Kim Author-X-Name-First: Seoung Author-X-Name-Last: Kim Author-Name: Fugee Tsung Author-X-Name-First: Fugee Author-X-Name-Last: Tsung Title: One-class classification-based control charts for multivariate process monitoring Abstract: One-class classification problems have attracted a great deal of attention from various disciplines. In the present study, attempts are made to extend the scope of application of the one-class classification technique to Statistical Process Control (SPC) problems. New multivariate control charts that apply the effectiveness of one-class classification to the improvement of Phase I and Phase II analysis in SPC are proposed. These charts use a monitoring statistic that represents the degree to which an observation is an outlier, as obtained through one-class classification. The control limits of the proposed charts are established based on the empirical level of significance on the percentile, estimated by the bootstrap method. A simulation study is conducted to illustrate the limitations of current one-class classification control charts and demonstrate the effectiveness of the proposed control charts. Journal: IIE Transactions Pages: 107-120 Issue: 2 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903019150 File-URL: http://hdl.handle.net/10.1080/07408170903019150 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:2:p:107-120 Template-Type: ReDIF-Article 1.0 Author-Name: Ying Li Author-X-Name-First: Ying Author-X-Name-Last: Li Author-Name: Saijuan Zhang Author-X-Name-First: Saijuan Author-X-Name-Last: Zhang Author-Name: Reginald Baugh Author-X-Name-First: Reginald Author-X-Name-Last: Baugh Author-Name: Jianhua Huang Author-X-Name-First: Jianhua Author-X-Name-Last: Huang Title: Predicting surgical case durations using ill-conditioned CPT code matrix Abstract: Efficient utilization of existing resources is crucial for cost containment in medical institutions. Accurately predicting surgery duration will improve the utilization of indispensable surgical resources such as surgeons, nurses, and operating rooms. Prior research has identified the Current Procedural Terminology (CPT) codes as the most important factor when predicting surgical case durations. However, there have been few attempts to create a general predictive methodology that can effectively extract information from multiple CPT codes. This research proposes two regression-based predictive models: (a) linear regression and (b) log-linear regression models. To perform these regression analyses, a full-rank design matrix based on CPT code inclusions in the surgical cases needs to be constructed. However, a naively constructed design matrix is ill-conditioned (i.e., singular). 
A systematic procedure is proposed to construct a full-rank design matrix by sifting out the CPT codes without any predictive power while retaining as much useful information as possible. The proposed models can be applied in general situations where a surgery can have any number of CPT codes and any combination of CPT codes. Using real-world surgical data, the proposed models are compared with benchmark methods and significant reductions in prediction errors are shown. Journal: IIE Transactions Pages: 121-135 Issue: 2 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903019168 File-URL: http://hdl.handle.net/10.1080/07408170903019168 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:2:p:121-135 Template-Type: ReDIF-Article 1.0 Author-Name: Rodrigo Duran Author-X-Name-First: Rodrigo Author-X-Name-Last: Duran Author-Name: Susan Albin Author-X-Name-First: Susan Author-X-Name-Last: Albin Title: Monitoring and accurately interpreting service processes with transactions that are classified in multiple categories Abstract: Consider a process where transactions, such as customer service transactions, are classified into categories. With just two categories, the fraction in each can be monitored with the familiar p-chart based on the binomial distribution. This paper presents a new method for monitoring the number of transactions among K categories, the p-tree method, which provides an accurate and easy way to help pinpoint the categories where there has been a disturbance. In contrast to existing practice, the proposed method not only signals an out-of-control situation but also helps identify which categories are causing the problem. It is shown that a K-category process can be represented by a probability tree with K − 1 binary stages and hence monitored with K − 1 independent p-charts. Simulation studies show that the p-tree method is a helpful diagnostic tool and that its sensitivity is comparable to existing multinomial-based control charts. Journal: IIE Transactions Pages: 136-145 Issue: 2 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903074908 File-URL: http://hdl.handle.net/10.1080/07408170903074908 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:2:p:136-145 Template-Type: ReDIF-Article 1.0 Author-Name: Dong-Hee Lee Author-X-Name-First: Dong-Hee Author-X-Name-Last: Lee Author-Name: In-Jun Jeong Author-X-Name-First: In-Jun Author-X-Name-Last: Jeong Author-Name: Kwang-Jae Kim Author-X-Name-First: Kwang-Jae Author-X-Name-Last: Kim Title: A posterior preference articulation approach to dual-response-surface optimization Abstract: In dual-response-surface optimization, the mean and standard deviation responses are often in conflict. To obtain a satisfactory compromise, a Decision Maker (DM)'s preference information on the trade-offs between the responses should be incorporated into the problem. In most existing works, the DM expresses a subjective judgment on the responses through a preference parameter before the problem-solving process, after which a single solution is obtained. This study proposes a posterior preference articulation approach to dual-response-surface optimization. The posterior preference articulation approach initially finds a set of non-dominated solutions without the DM's preference information, and then allows the DM to select the best solution among the non-dominated solutions. 
The proposed method enables a satisfactory compromise solution to be achieved with minimum cognitive effort and gives the DM the opportunity to explore and better understand the trade-offs between the two responses. Journal: IIE Transactions Pages: 161-171 Issue: 2 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903228959 File-URL: http://hdl.handle.net/10.1080/07408170903228959 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:2:p:161-171 Template-Type: ReDIF-Article 1.0 Author-Name: Jing Li Author-X-Name-First: Jing Author-X-Name-Last: Li Author-Name: Shuai Huang Author-X-Name-First: Shuai Author-X-Name-Last: Huang Title: Regression-based process monitoring with consideration of measurement errors Abstract: Multivariate process monitoring and fault detection is an important problem in quality improvement. Most existing methods are based on a common assumption that the measured values of variables are the true values, with limited consideration of the various types of measurement errors embedded in the data. On the other hand, research on measurement errors has been conducted from a purely theoretical statistics point of view, without any linking of the modeling and analysis of measurement errors with monitoring and fault detection objectives. This paper proposes a method for multivariate process monitoring and fault detection considering four types of major measurement errors, including sensor bias, sensitivity, noise, and the dependency of the relationship between a variable and its measured value on other variables. This method includes the design of new control charts based on data with measurement errors, and identification of the maximum allowable measurement errors to fulfill certain fault detectability requirements. This method is applicable to processes where the natural ordering of the variables is known, such as for cascade or multistage processes, and processes where the causal relationships among variables are known and can be described by a Bayesian network. The method is demonstrated in two industrial processes. Journal: IIE Transactions Pages: 146-160 Issue: 2 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903232563 File-URL: http://hdl.handle.net/10.1080/07408170903232563 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:2:p:146-160 Template-Type: ReDIF-Article 1.0 Author-Name: Benjamin Armbruster Author-X-Name-First: Benjamin Author-X-Name-Last: Armbruster Author-Name: James Luedtke Author-X-Name-First: James Author-X-Name-Last: Luedtke Title: Models and formulations for multivariate dominance-constrained stochastic programs Abstract: The use of a stochastic dominance constraint to specify risk preferences in a stochastic program has been recently proposed in the literature. Such a constraint requires the random outcome resulting from one's decision to stochastically dominate a given random comparator. These ideas have been extended to problems with multiple random outcomes, using the notion of positive linear stochastic dominance. This article proposes a constraint using a different version of multivariate stochastic dominance. This version is natural, due to its connection to expected utility maximization theory, and is relatively tractable. 
In particular, it is shown that such a constraint can be formulated with linear constraints for the second-order dominance relation and with mixed-integer constraints for the first-order relation. This is in contrast with a constraint on second-order positive linear dominance, for which no efficient algorithms are known. The proposed formulations are tested in the context of two applications: budget allocation in a setting with multiple objectives and finding radiation treatment plans in the presence of organ motion. Journal: IIE Transactions Pages: 1-14 Issue: 1 Volume: 47 Year: 2015 Month: 1 X-DOI: 10.1080/0740817X.2014.889336 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.889336 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:1:p:1-14 Template-Type: ReDIF-Article 1.0 Author-Name: Arnab Bisi Author-X-Name-First: Arnab Author-X-Name-Last: Bisi Author-Name: Karanjit Kalsi Author-X-Name-First: Karanjit Author-X-Name-Last: Kalsi Author-Name: Golnaz Abdollahian Author-X-Name-First: Golnaz Author-X-Name-Last: Abdollahian Title: A non-parametric adaptive algorithm for the censored newsvendor problem Abstract: This article studies the problem of determining stocking quantities in a periodic-review inventory model when the demand distribution is unknown. Moreover, lost sales are unobservable in the system and hence inventory decisions are to be made solely based on sales data. Both the non-perishable and perishable inventory problems are addressed. Using an online convex optimization procedure, a non-parametric adaptive algorithm is developed that produces an inventory policy in each period that depends on the entire history of stocking decisions and sales observations. With the help of a convex quadratic underestimator of the cost function, it is established that the T-period average expected cost of the inventory policy converges to the optimal newsvendor cost at the rate of O(log T/T) for demands whose expected cost functions satisfy an α-exp-concavity property. It is shown that, when the demand distribution is continuous, this property holds if the probability density function over the decision set is bounded away from zero. For other continuous distributions, a “shifted” version of the density function is constructed to show an ε-consistency property of the algorithm, so that the gap between the T-period average expected cost of the proposed policy and the optimal newsvendor cost is of the order O(log T/T) + ε (for a given small ε > 0). Simulation results show that the proposed algorithm performs consistently better than two existing algorithms that are closely related to it. Journal: IIE Transactions Pages: 15-34 Issue: 1 Volume: 47 Year: 2015 Month: 1 X-DOI: 10.1080/0740817X.2014.904974 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.904974 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:1:p:15-34 Template-Type: ReDIF-Article 1.0 Author-Name: Mohammad H. Yarmand Author-X-Name-First: Mohammad H. Author-X-Name-Last: Yarmand Author-Name: Douglas G. Down Author-X-Name-First: Douglas G. Author-X-Name-Last: Down Title: Maximizing throughput in zero-buffer tandem lines with dedicated and flexible servers Abstract: For tandem queues with no buffer spaces and both dedicated and flexible servers, this article studies how flexible servers should be assigned to maximize the throughput. 
The optimal policy is completely characterized. Insights gained from applying the Policy Iteration algorithm to systems with three, four, and five stations are used to devise heuristics for systems of arbitrary size. These heuristics are verified by numerical analysis. The throughput improvement obtained when, for a given server assignment, dedicated servers are changed to flexible servers is also quantified. Journal: IIE Transactions Pages: 35-49 Issue: 1 Volume: 47 Year: 2015 Month: 1 X-DOI: 10.1080/0740817X.2014.905735 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.905735 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:1:p:35-49 Template-Type: ReDIF-Article 1.0 Author-Name: Gino J. Lim Author-X-Name-First: Gino J. Author-X-Name-Last: Lim Author-Name: Mukesh Rungta Author-X-Name-First: Mukesh Author-X-Name-Last: Rungta Author-Name: M. Reza Baharnemati Author-X-Name-First: M. Reza Author-X-Name-Last: Baharnemati Title: Reliability analysis of evacuation routes under capacity uncertainty of road links Abstract: This article presents a reliability-based evacuation route planning model that seeks to find the relationship between the clearance time, number of evacuation paths, and congestion probability during an evacuation. Most of the existing models for network evacuation assume deterministic capacity estimates for road links without taking into account the uncertainty in capacities induced by myriad external conditions. Only a handful of models exist in the literature that account for capacity uncertainty of road links. A dynamic network–based evacuation model is extended by incorporating probabilistic arc capacity constraints and a minimum-cost network flow problem is formulated that finds a lower bound on the clearance time within the framework of a chance-constrained programming technique (one standard reformulation of such constraints is sketched after this record). Network breakdown minimization principles for traffic flow in the evacuation planning problem are applied and a path-based evacuation routing and scheduling model is formulated. Given the horizon time for evacuation, the model selects the evacuation paths and finds flows on the selected paths that result in the minimum congestion in the network along with the reliability of the evacuation plan. Numerical examples are presented and the effectiveness of the stochastic models in evacuation planning is discussed. It is shown that the reliability-based evacuation plan is conservative compared with plans made using a deterministic model. Stochastic models guarantee that congestion can be avoided with a higher confidence level at the cost of an increased clearance time. Journal: IIE Transactions Pages: 50-63 Issue: 1 Volume: 47 Year: 2015 Month: 1 X-DOI: 10.1080/0740817X.2014.905736 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.905736 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:1:p:50-63 Template-Type: ReDIF-Article 1.0 Author-Name: A. Serasu Duran Author-X-Name-First: A. Serasu Author-X-Name-Last: Duran Author-Name: Sinan Gürel Author-X-Name-First: Sinan Author-X-Name-Last: Gürel Author-Name: M. Selim Aktürk Author-X-Name-First: M. Selim Author-X-Name-Last: Aktürk Title: Robust Airline Scheduling with Controllable Cruise Times and Chance Constraints Abstract: Robust airline schedules can be considered as flight schedules that are likely to minimize passenger delay.
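As a hedged illustration of how probabilistic arc capacity constraints of the kind used in the evacuation model above become tractable (assuming normally distributed capacities; the article's construction may differ): for arc flow x_e and random capacity C_e \sim N(\mu_e, \sigma_e^2),

\Pr\{x_e \le C_e\} \ge 1 - \alpha \quad\Longleftrightarrow\quad x_e \;\le\; \mu_e + \Phi^{-1}(\alpha)\,\sigma_e,

so each chance constraint has a linear deterministic equivalent in which the capacity is effectively reduced (note that \Phi^{-1}(\alpha) < 0 for small \alpha).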
Airlines usually add extra time, known as schedule padding, to scheduled gate-to-gate flight times to make their schedules less susceptible to variability and disruptions. There is a critical trade-off between any kind of buffer time and daily aircraft productivity. Aircraft speed control is a practical alternative to inserting idle times into schedules. In this study, block times are considered in two parts: cruise times that are controllable and non-cruise times that are subject to uncertainty. Cruise time controllability is used together with idle time insertion to satisfy passenger connection service levels while ensuring minimum costs. To handle the nonlinearity of the cost functions, they are represented via second-order conic inequalities. The uncertainty in non-cruise times is modeled through chance constraints on passenger connection service levels, which are then expressed using second-order conic inequalities. Overall, it is shown that a 2% increase in fuel costs cuts idle time costs by 60%. A computational study shows that exact solutions can be obtained by commercial solvers in seconds for a single-hub schedule and in minutes for a four-hub daily schedule of a major U.S. carrier. Journal: IIE Transactions Pages: 64-83 Issue: 1 Volume: 47 Year: 2015 Month: 1 X-DOI: 10.1080/0740817X.2014.916457 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.916457 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:1:p:64-83 Template-Type: ReDIF-Article 1.0 Author-Name: Ming Chen Author-X-Name-First: Ming Author-X-Name-Last: Chen Author-Name: Zhi-Long Chen Author-X-Name-First: Zhi-Long Author-X-Name-Last: Chen Author-Name: Guruprasad Pundoor Author-X-Name-First: Guruprasad Author-X-Name-Last: Pundoor Author-Name: Suresh Acharya Author-X-Name-First: Suresh Author-X-Name-Last: Acharya Author-Name: John Yi Author-X-Name-First: John Author-X-Name-Last: Yi Title: Markdown optimization at multiple stores Abstract: This article studies a markdown optimization problem commonly faced by many large retailers that involves joint decisions on inventory allocation and markdown pricing at multiple stores subject to various business rules. At the beginning of the markdown planning horizon, there is a certain amount of inventory of a product at a warehouse that needs to be allocated to many retail stores served by the warehouse over the planning horizon. At the same time, a markdown pricing scheme needs to be determined for each store over the planning horizon. A number of business rules for inventory allocation and markdown prices at the stores must be satisfied. The retailer does not have complete knowledge of the probability distribution of the demand at a given store in a given time period. The retailer’s knowledge about the demand distributions improves over time as new information becomes available. Hence, the retailer employs a rolling horizon approach where the problem is re-solved at the beginning of each period by incorporating the latest demand information. It is shown that the problem involved at the beginning of each period is NP-hard even if the demand functions are deterministic and there is only a single store or a single time period. Thus, attention is focused on heuristic solution approaches. The stochastic demand is modeled using discrete demand scenarios based on the retailer’s latest knowledge about the demand distributions.
This enables possible demand correlations to be modeled across different time periods. The problem involved at the beginning of each period is formulated as a mixed-integer program with demand scenarios and it is solved using a Lagrangian relaxation-based decomposition approach. The approach is implemented on a rolling horizon basis and it is compared with several commonly used benchmark approaches in practice. An extensive set of computational experiments is performed under various practical situations, and it is demonstrated that the proposed approach significantly outperforms the benchmark approaches. A number of managerial insights are derived about the impact of business rules and price sensitivity of individual stores on the total expected revenue and on the optimal inventory allocation and pricing decisions. Journal: IIE Transactions Pages: 84-108 Issue: 1 Volume: 47 Year: 2015 Month: 1 X-DOI: 10.1080/0740817X.2014.916459 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.916459 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:1:p:84-108 Template-Type: ReDIF-Article 1.0 Author-Name: Nils Löhndorf Author-X-Name-First: Nils Author-X-Name-Last: Löhndorf Author-Name: Stefan Minner Author-X-Name-First: Stefan Author-X-Name-Last: Minner Title: Simulation optimization for the stochastic economic lot scheduling problem Abstract: This article studies simulation optimization methods for the stochastic economic lot scheduling problem. In contrast with prior research, the focus of this work is on methods that treat this problem as a black box. Based on a large-scale numerical study, approximate dynamic programming is compared with a global search for parameters of simple control policies. Two value function approximation schemes are proposed that are based on linear combinations of piecewise-constant functions as well as control policies that can be described by a small set of parameters. While approximate value iteration worked well for small problems with three products, it was clearly outperformed by the global policy search as soon as problem size increased. The most reliable choice in this study was a globally optimized fixed-cycle policy. An additional analysis of the response surface relating model parameters to optimal average cost revealed that the cost effect of product diversity was negligible. Journal: IIE Transactions Pages: 796-810 Issue: 7 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.662310 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.662310 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:7:p:796-810 Template-Type: ReDIF-Article 1.0 Author-Name: Qing-Shan Jia Author-X-Name-First: Qing-Shan Author-X-Name-Last: Jia Author-Name: Enlu Zhou Author-X-Name-First: Enlu Author-X-Name-Last: Zhou Author-Name: Chun-Hung Chen Author-X-Name-First: Chun-Hung Author-X-Name-Last: Chen Title: Efficient computing budget allocation for finding simplest good designs Abstract: In many applications some designs are easier to implement, require less training data and shorter training time, and consume less storage than others. Such designs are called simple designs and are usually preferred over complex ones when they all have good performance. Despite the abundant existing studies on how to find good designs in simulation-based optimization, there exist few studies on finding simplest good designs.
This article considers this important problem and the following contributions are made to the subject. First, lower bounds are provided for the probabilities of correctly selecting the m simplest designs with top performance and selecting the best m such simplest good designs, respectively. Second, two efficient computing budget allocation methods are developed to find m simplest good designs and to find the best m such designs, respectively, and their asymptotic optimality is established. Third, the performance of the two methods is compared with equal allocations over six academic examples and a smoke detection problem in a wireless sensor network. Journal: IIE Transactions Pages: 736-750 Issue: 7 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.705454 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.705454 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:7:p:736-750 Template-Type: ReDIF-Article 1.0 Author-Name: Ning Quan Author-X-Name-First: Ning Author-X-Name-Last: Quan Author-Name: Jun Yin Author-X-Name-First: Jun Author-X-Name-Last: Yin Author-Name: Szu Ng Author-X-Name-First: Szu Author-X-Name-Last: Ng Author-Name: Loo Lee Author-X-Name-First: Loo Author-X-Name-Last: Lee Title: Simulation optimization via kriging: a sequential search using expected improvement with computing budget constraints Abstract: Metamodels are commonly used as fast surrogates for the objective function to facilitate the optimization of simulation models. Kriging (or the Gaussian process model) is a very popular metamodel form for deterministic and, recently, stochastic simulations. This article proposes a two-stage sequential framework for the optimization of stochastic simulations with heterogeneous variances under computing budget constraints. The proposed two-stage framework is based on the kriging model and incorporates optimal computing budget allocation techniques and the expected improvement function to drive and improve the estimation of the global optimum. (The closed-form expected improvement criterion is sketched after this record.) Empirical results indicate that it is effective in obtaining optimal solutions and is more efficient than alternative metamodel-based techniques. The framework is also applied to a complex real ocean liner bunker fuel management problem with promising results. Journal: IIE Transactions Pages: 763-780 Issue: 7 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.706377 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.706377 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:7:p:763-780 Template-Type: ReDIF-Article 1.0 Author-Name: Muer Yang Author-X-Name-First: Muer Author-X-Name-Last: Yang Author-Name: Theodore Allen Author-X-Name-First: Theodore Author-X-Name-Last: Allen Author-Name: Michael Fry Author-X-Name-First: Michael Author-X-Name-Last: Fry Author-Name: W. Kelton Author-X-Name-First: W. Author-X-Name-Last: Kelton Title: The call for equity: simulation optimization models to minimize the range of waiting times Abstract: Providing equal access to public service resources is a fundamental goal of democratic societies. Growing research interest in public services (e.g., health care, humanitarian relief, elections) has increased the importance of considering objective functions related to equity.
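The expected improvement criterion used in such kriging-based frameworks has a standard closed form under a Gaussian posterior; the following minimal sketch (for minimization; variable names are illustrative) computes it from the posterior mean and standard deviation.

import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    # EI(x) = (f_best - mu) * Phi(z) + sigma * phi(z), with z = (f_best - mu)/sigma,
    # for a Gaussian posterior with mean mu and std sigma (minimization).
    sigma = np.maximum(sigma, 1e-12)   # guard against zero predictive variance
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Points with low predicted mean or high uncertainty score higher:
print(expected_improvement(np.array([0.5, 1.2]), np.array([0.3, 0.01]), f_best=1.0))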
This article studies discrete resource allocation problems where the decision maker is concerned with maintaining equity between some defined subgroups of a customer population and where non-closed-form functions of equity are allowed. Simulation optimization techniques are used to develop rigorous algorithms to allocate resources equitably among these subgroups. The presented solutions are associated with probabilistic bounds on solution quality. A full-factorial experimental design demonstrates that the proposed algorithm outperforms competing heuristics and is robust over various inequity metrics. Additionally, the algorithm is applied to a case study of allocating voting machines to election precincts in Franklin County, Ohio. [Supplementary material is available for this article. Go to the publisher’s online edition of IIE Transactions for the Appendices to the article.] Journal: IIE Transactions Pages: 781-795 Issue: 7 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.721947 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.721947 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:7:p:781-795 Template-Type: ReDIF-Article 1.0 Author-Name: Yao Luo Author-X-Name-First: Yao Author-X-Name-Last: Luo Author-Name: Eunji Lim Author-X-Name-First: Eunji Author-X-Name-Last: Lim Title: Simulation-based optimization over discrete sets with noisy constraints Abstract: This article considers a constrained optimization problem over a discrete set where noise-corrupted observations of the objective and constraints are available. The problem is challenging because the feasibility of a solution cannot be known for certain, due to the noisy measurements of the constraints. To tackle this issue, a method is proposed that converts constrained optimization into the unconstrained optimization problem of finding a saddle point of the Lagrangian. The method applies stochastic approximation to the Lagrangian in search of the saddle point. The proposed method is shown to converge, under suitable conditions, to the optimal solution almost surely as the number of iterations grows. The effectiveness of the proposed method is demonstrated numerically in three settings: (i) inventory control in a periodic review system; (ii) staffing in a call center; and (iii) staffing in an emergency room. Journal: IIE Transactions Pages: 699-715 Issue: 7 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.733580 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.733580 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:7:p:699-715 Template-Type: ReDIF-Article 1.0 Author-Name: Zhaolin Hu Author-X-Name-First: Zhaolin Author-X-Name-Last: Hu Author-Name: L. Hong Author-X-Name-First: L. Author-X-Name-Last: Hong Author-Name: Liwei Zhang Author-X-Name-First: Liwei Author-X-Name-Last: Zhang Title: A smooth Monte Carlo approach to joint chance-constrained programs Abstract: This article studies Joint Chance-Constrained Programs (JCCPs). JCCPs are often non-convex and non-smooth and thus are generally challenging to solve. This article proposes a logarithm-sum-exponential smoothing technique to approximate a joint chance constraint by the difference of two smooth convex functions, and uses a sequential convex approximation algorithm, coupled with a Monte Carlo method, to solve the approximation. This approach is called a smooth Monte Carlo approach in this article. 
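To convey the smoothing ingredient just described, here is a minimal sketch under stated assumptions: the maximum of the constraint functions is replaced by a log-sum-exp upper bound, and, as an extra simplification not taken from the article (which instead builds a difference-of-convex approximation), the indicator is replaced by a sigmoid, yielding a smooth Monte Carlo surrogate for the joint violation probability. Smoothing parameters and names are illustrative.

import numpy as np
from scipy.special import expit

def smooth_max(c, t=50.0):
    # Log-sum-exp upper approximation of max_i c_i; smooth and convex in c,
    # approaching the true max as t grows.
    m = c.max(axis=-1)
    return m + np.log(np.exp(t * (c - m[..., None])).sum(axis=-1)) / t

def smoothed_violation_prob(constraint_values, t=50.0, s=50.0):
    # constraint_values: (n_samples, m) array of c_i(x, xi_j); returns a
    # differentiable Monte Carlo surrogate for P(max_i c_i(x, xi) > 0).
    z = smooth_max(constraint_values, t)
    return np.mean(expit(s * z))

rng = np.random.default_rng(0)
xi = rng.standard_normal((100000, 3))
print(round(smoothed_violation_prob(xi - 2.5), 4))  # ~ P(max_i xi_i > 2.5), about 0.018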
It is shown that the proposed approach is capable of handling both smooth and non-smooth JCCPs where the random variables can be continuous, discrete, or mixed. The numerical experiments further confirm these findings. Journal: IIE Transactions Pages: 716-735 Issue: 7 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.745205 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.745205 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:7:p:716-735 Template-Type: ReDIF-Article 1.0 Author-Name: Enver Yücesan Author-X-Name-First: Enver Author-X-Name-Last: Yücesan Title: An efficient ranking and selection approach to boost the effectiveness of innovation contests Abstract: Breakthrough innovation has two key prerequisites: idea generation (the collection of a large number of competing designs) and idea screening (the efficient evaluation and ranking of these designs to identify the best one(s)). Open innovation has recently been modeled and analyzed as innovation contests, where many individuals or teams submit designs or prototypes to an innovating firm. Innovation tournaments increase the capacity of idea generation by enabling access to a broad pool of solvers while avoiding exorbitant costs. To deliver on their promise, however, such tournaments must be designed to enable effective screening of proposed ideas. In particular, given the large number of designs to be evaluated, tournaments must be efficient, favoring quick judgments based on imperfect information over extensive data collection. Through a simulation study, this article shows that contests may not necessarily be the best process for ranking innovation opportunities and selecting the best ones in an efficient way. Instead, we propose a ranking and selection approach that is based on ordinal optimization, which provides both efficiency and accuracy by dynamically allocating evaluation effort away from inferior designs onto promising ones. A numerical example quantifies the benefits. The proposed approach should therefore complement innovation tournaments’ capacity for idea generation with efficient idea screening. Journal: IIE Transactions Pages: 751-762 Issue: 7 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.757679 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.757679 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:7:p:751-762 Template-Type: ReDIF-Article 1.0 Author-Name: Wendy Xu Author-X-Name-First: Wendy Author-X-Name-Last: Xu Author-Name: Barry Nelson Author-X-Name-First: Barry Author-X-Name-Last: Nelson Title: Empirical stochastic branch-and-bound for optimization via simulation Abstract: This article introduces a new method for discrete decision variable optimization via simulation that combines the nested partitions method and the stochastic branch-and-bound method in the sense that advantage is taken of the partitioning structure of stochastic branch-and-bound, but the bounds are estimated based on the performance of sampled solutions, similar to the nested partitions method. The proposed Empirical Stochastic Branch-and-Bound (ESB&B) algorithm also uses improvement bounds to guide solution sampling for better performance. A convergence proof and empirical evaluation are provided. [Supplementary materials are available for this article. Go to the publisher’s online edition of IIE Transactions for datasets, additional tables, detailed proofs, etc.]
Journal: IIE Transactions Pages: 685-698 Issue: 7 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2013.768783 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.768783 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:7:p:685-698 Template-Type: ReDIF-Article 1.0 Author-Name: Loo Lee Author-X-Name-First: Loo Author-X-Name-Last: Lee Author-Name: Ek Chew Author-X-Name-First: Ek Author-X-Name-Last: Chew Author-Name: Peter Frazier Author-X-Name-First: Peter Author-X-Name-Last: Frazier Author-Name: Qing-Shan Jia Author-X-Name-First: Qing-Shan Author-X-Name-Last: Jia Author-Name: Chun-Hung Chen Author-X-Name-First: Chun-Hung Author-X-Name-Last: Chen Title: Advances in simulation optimization and its applications Journal: IIE Transactions Pages: 683-684 Issue: 7 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2013.778709 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.778709 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:7:p:683-684 Template-Type: ReDIF-Article 1.0 Author-Name: Ran Jin Author-X-Name-First: Ran Author-X-Name-Last: Jin Author-Name: Jianjun Shi Author-X-Name-First: Jianjun Author-X-Name-Last: Shi Title: Reconfigured piecewise linear regression tree for multistage manufacturing process control Abstract: In a multistage manufacturing process, extensive amounts of observational data are obtained by the measurement of product quality features, process variables, and material properties. These data have temporal and spatial relationships and may have a non-linear data structure. It is a challenging task to model the variation and its propagation using these data and then use the model for feedforward control purposes. This article proposes a methodology for feedforward control that is based on a piecewise linear model. An engineering-driven reconfiguration method for piecewise linear regression trees is proposed. The model complexity is further reduced by merging the leaf nodes with the constraint of the control accuracy requirement. A case study on a multistage wafer manufacturing process is conducted to illustrate the procedure and effectiveness of the proposed method. Journal: IIE Transactions Pages: 249-261 Issue: 4 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.564603 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.564603 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:4:p:249-261 Template-Type: ReDIF-Article 1.0 Author-Name: Abbas Al-Refaie Author-X-Name-First: Abbas Author-X-Name-Last: Al-Refaie Title: Optimizing performance with multiple responses using cross-evaluation and aggressive formulation in data envelopment analysis Abstract: An efficient optimization procedure is proposed for improving a product/process performance with multiple responses using two Data Envelopment Analysis (DEA) techniques, including the cross-evaluation and aggressive formulation techniques. The experiments generated in a Taguchi orthogonal array are considered Decision-Making Units (DMUs). The multiple responses are set as inputs and/or outputs for all DMUs. Cross-evaluation and aggressive formulation techniques are employed to measure a DMU’s performance. The efficiency scores are then adopted to identify the combination of process factor levels that optimizes a product/process performance with multiple responses.
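Both DEA techniques build on per-DMU linear programs; as a point of reference, here is a minimal sketch of the basic input-oriented CCR multiplier program (the article's cross-evaluation and aggressive formulations are layered on top of programs like this and are not reproduced here; names are illustrative).

import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j):
    # Input-oriented CCR efficiency of DMU j (multiplier form):
    #   max u'y_j  s.t.  v'x_j = 1,  u'y_k - v'x_k <= 0 for all k,  u, v >= 0.
    n, m = X.shape          # n DMUs, m inputs
    s = Y.shape[1]          # s outputs
    c = np.concatenate([-Y[j], np.zeros(m)])            # minimize -u'y_j
    A_eq = np.concatenate([np.zeros(s), X[j]])[None, :]  # normalization v'x_j = 1
    A_ub = np.hstack([Y, -X])                            # u'y_k - v'x_k <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=(0, None))
    return -res.fun

X = np.array([[2.0, 1.0], [1.0, 2.0], [2.0, 2.0]])   # inputs of three DMUs
Y = np.array([[1.0], [1.0], [1.0]])                  # a single output
print([round(ccr_efficiency(X, Y, j), 3) for j in range(3)])  # the third DMU is dominated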
Finally, the proposed procedure is illustrated by three case studies previously reported in the literature. The computational results show that the aggressive formulation technique is the most efficient in optimizing performance compared with the cross-efficiency technique, principal components analysis, and genetic algorithm methods. In conclusion, the advantages of the proposed optimization procedure may motivate practitioners to implement it in order to optimize a product/process with multiple responses in a wide range of manufacturing applications. Journal: IIE Transactions Pages: 262-276 Issue: 4 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.566908 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.566908 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:4:p:262-276 Template-Type: ReDIF-Article 1.0 Author-Name: Xiaoyan Zhu Author-X-Name-First: Xiaoyan Author-X-Name-Last: Zhu Author-Name: Qingzhu Yao Author-X-Name-First: Qingzhu Author-X-Name-Last: Yao Author-Name: Way Kuo Author-X-Name-First: Way Author-X-Name-Last: Kuo Title: Patterns of the Birnbaum importance in linear consecutive-k-out-of-n systems Abstract: The Birnbaum importance is a well-known measure that evaluates the relative contribution of components to system reliability. There exist certain patterns of the component Birnbaum importance (i.e., the relative order of the Birnbaum importance values to the individual components) for linear consecutive-k-out-of-n (Lin/Con/k/n) systems when all components have the same reliability p. Previous research has shown that based on the Birnbaum importance, plausible patterns and conjectures exist. This article summarizes and annotates the Birnbaum importance patterns for Lin/Con/k/n systems, proves new Birnbaum importance patterns conditioned on the value of p, disproves some patterns that were conjectured or claimed in the literature, and makes new conjectures based on comprehensive computational tests and analysis. More importantly, this article defines a concept of segment in Lin/Con/k/n systems for analyzing the Birnbaum importance patterns and investigates the relationship between the Birnbaum importance and the common component reliability p and the relationship between the Birnbaum importance and the system size n. One can then use these relations to further understand the proved, disproved, and conjectured Birnbaum importance patterns. (A computational sketch of the Birnbaum importance for such systems follows this record.) Journal: IIE Transactions Pages: 277-290 Issue: 4 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.566909 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.566909 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:4:p:277-290 Template-Type: ReDIF-Article 1.0 Author-Name: Jing Lin Author-X-Name-First: Jing Author-X-Name-Last: Lin Author-Name: Kaibo Wang Author-X-Name-First: Kaibo Author-X-Name-Last: Wang Title: A Bayesian framework for online parameter estimation and process adjustment using categorical observations Abstract: In certain manufacturing processes, accurate numerical readings are difficult to collect due to time or resource constraints. Alternatively, low-resolution categorical observations can be obtained that can act as feasible and low-cost surrogates. Under such situations, all classic statistical quality control activities, such as model building, parameter estimation, and feedback adjustment, have to be done on the basis of these categorical observations.
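For concreteness, the Birnbaum importance of component i is B_i(p) = R(system | component i works) - R(system | component i fails); the sketch below computes it for a Lin/Con/k/n:F system (assuming the F, i.e., consecutive-failure, variant; the recursion and names are a standard construction, not taken from the article).

def lin_con_reliability(p, k):
    # P(no k consecutive failures) for component reliabilities p[0..n-1].
    # state[r] = P(system alive so far and trailing failure run has length r)
    state = [1.0] + [0.0] * (k - 1)
    for pi in p:
        new = [0.0] * k
        new[0] = sum(state) * pi                # component works: run resets to 0
        for r in range(k - 1):
            new[r + 1] += state[r] * (1 - pi)   # run grows; reaching k is system failure
        state = new
    return sum(state)

def birnbaum_importance(p, k, n, i):
    # B_i = R(component i perfect) - R(component i failed), others at reliability p.
    hi, lo = [p] * n, [p] * n
    hi[i], lo[i] = 1.0, 0.0
    return lin_con_reliability(hi, k) - lin_con_reliability(lo, k)

# Birnbaum importance pattern for a Lin/Con/2/5:F system at p = 0.9
# (the ordering of these values depends on p, which is the article's focus):
print([round(birnbaum_importance(0.9, 2, 5, i), 4) for i in range(5)])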
However, most existing statistical quality control methods are developed based on numerical observations and cannot be directly applied if only categorical observations are available. In this research, a new online approach for parameter estimation and run-to-run process control using categorical observations is developed. The new approach is built in the Bayesian framework; it provides a convenient way to update parameter estimates when categorical observations arrive gradually in a real production scenario. Studies of performance reveal that the new method can provide stable estimates of unknown parameters and achieve effective control performance for maintaining quality. Journal: IIE Transactions Pages: 291-300 Issue: 4 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.568039 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.568039 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:4:p:291-300 Template-Type: ReDIF-Article 1.0 Author-Name: Xianghui Ning Author-X-Name-First: Xianghui Author-X-Name-Last: Ning Author-Name: Fugee Tsung Author-X-Name-First: Fugee Author-X-Name-Last: Tsung Title: A density-based statistical process control scheme for high-dimensional and mixed-type observations Abstract: Statistical Process Control (SPC) techniques are useful tools for detecting changes in process variables. The structure of process variables has become increasingly complex as a result of increasingly complex technologies. The number of variables is usually large and categorical variables may appear alongside continuous variables. Such observations are considered to be high-dimensional and mixed-type observations. Conventional SPC techniques may lose their accuracy and efficiency in detecting changes in a process with high-dimensional and mixed-type observations. This article presents a density-based SPC approach, which is derived from a Local Outlier Factor (LOF) scheme, as a solution to this problem. The parameters in an LOF scheme are investigated and a procedure to design a corresponding control chart is presented. The good performance of the proposed control scheme is demonstrated via numerical simulation. Journal: IIE Transactions Pages: 301-311 Issue: 4 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.587863 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.587863 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:4:p:301-311 Template-Type: ReDIF-Article 1.0 Author-Name: Huairui Guo Author-X-Name-First: Huairui Author-X-Name-Last: Guo Author-Name: Kamran Paynabar Author-X-Name-First: Kamran Author-X-Name-Last: Paynabar Author-Name: Jionghua Jin Author-X-Name-First: Jionghua Author-X-Name-Last: Jin Title: Multiscale monitoring of autocorrelated processes using wavelets analysis Abstract: This article proposes a new method to develop multiscale monitoring control charts for an autocorrelated process that has an underlying unknown ARMA(2, 1) model structure. The Haar wavelet transform is used to obtain effective monitoring statistics by considering the process dynamic characteristics in both the time and frequency domains. Three control charts are developed on three selected levels of Haar wavelet coefficients in order to simultaneously detect the changes in the process mean, process variance, and measurement error variance, respectively. 
A systematic method for automatically determining the optimal monitoring level of Haar wavelet decomposition is proposed that does not require the estimation of an ARMA model. It is shown that the proposed wavelet-based Cumulative SUM (CUSUM) chart on Haar wavelet detail coefficients is only sensitive to variance changes and robust to process mean shifts. This property enables variance changes and mean shifts to be monitored separately, an advantage over the traditional CUSUM monitoring chart. For the purpose of mean shift detection, it is also shown that using the proposed wavelet-based Exponentially Weighted Moving Average (EWMA) chart to monitor Haar wavelet scale coefficients will more successfully detect small mean shifts than direct-EWMA charts. Journal: IIE Transactions Pages: 312-326 Issue: 4 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.609872 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.609872 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:4:p:312-326 Template-Type: ReDIF-Article 1.0 Author-Name: Sheng-Tsaing Tseng Author-X-Name-First: Sheng-Tsaing Author-X-Name-Last: Tseng Author-Name: Chien-Hua Lin Author-X-Name-First: Chien-Hua Author-X-Name-Last: Lin Title: Stability analysis of single EWMA controller under dynamic models Abstract: The Exponentially Weighted Moving Average (EWMA) feedback controller is a popular model-based run-to-run controller which primarily uses data from previous process runs to adjust settings for the next run. The long-term stability conditions of EWMA controllers for this closed-loop system have received considerable attention in the literature. Most of the reported results are obtained under the assumption that the process I-O (Input-Output) relationship follows a static model. Generally speaking, the effect of the input recipe on the output response can be carried over several periods. In this paper, focusing on a first-order dynamic I-O model and assuming that the process disturbance follows a general ARIMA series, a systematic approach to address this control problem is proposed. First, the long-term stability conditions of a single EWMA controller are investigated. Then, the determination of sample size to allow the design of a single EWMA controller for dynamic models is considered. Under the assumption that the process I-O variables follow a bivariate normal distribution, a formula to calculate sample sizes that allow the stability condition to be met with a minimum probability protection is derived. Finally, the effects of dynamic parameters on the determination of sample size are considered. (A sketch of the basic static-model EWMA controller follows this record.) [Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplementary resource: Appendix] Journal: IIE Transactions Pages: 654-663 Issue: 7 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802323034 File-URL: http://hdl.handle.net/10.1080/07408170802323034 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:7:p:654-663 Template-Type: ReDIF-Article 1.0 Author-Name: Avinoam Tzimerman Author-X-Name-First: Avinoam Author-X-Name-Last: Tzimerman Author-Name: Yale Herer Author-X-Name-First: Yale Author-X-Name-Last: Herer Title: Off-line inspections under inspection errors Abstract: The problem of off-line inspection is of both practical and theoretical interest.
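As a reference point for the stability discussion above, here is a minimal sketch of the single EWMA controller for the static I-O model y_t = alpha + beta*u_t + e_t (the article's dynamic-model analysis is not reproduced; parameter names and values are illustrative assumptions).

import numpy as np

def ewma_run_to_run(target, b_est, lam, alpha_true, beta_true, n_runs, noise_sd, seed=0):
    # Single EWMA controller: the intercept estimate a_t is updated by an EWMA
    # of the observed model mismatch, and the next recipe offsets it.
    rng = np.random.default_rng(seed)
    a = 0.0
    outputs = []
    for _ in range(n_runs):
        u = (target - a) / b_est                     # recipe for this run
        y = alpha_true + beta_true * u + noise_sd * rng.standard_normal()
        a = lam * (y - b_est * u) + (1 - lam) * a    # EWMA intercept update
        outputs.append(y)
    return outputs

y = ewma_run_to_run(target=50.0, b_est=1.2, lam=0.2,
                    alpha_true=5.0, beta_true=1.0, n_runs=200, noise_sd=1.0)
print(round(np.mean(y[-50:]), 2))  # output settles near the target when the loop is stable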
It has been the subject of research in somewhat simplistic scenarios. In this paper, the theoretical coverage of this problem is extended to include inspection errors. In particular, a process that is subject to random failures and sequentially produces a batch of units is investigated. After the batch is complete, off-line inspections are performed. In the proposed model it is these inspections that are subject to errors. The optimal inspection policy, i.e., which units should be inspected and the inspection order so as to minimize the expected number of inspections, is determined. The objective is to identify, with a given confidence level, the point at which the machine failed. The optimal policy is found by a dynamic programming algorithm and four different heuristic policies are investigated. An extensive computational study that examines the behavior of both the optimal and heuristic policies is presented. In particular, the effect of the model parameters on the behavior of the optimal policy is analyzed. The heuristic policies are computationally studied with the goal of comparing their quality to the optimal solution, and also to compare the heuristics themselves. Journal: IIE Transactions Pages: 626-641 Issue: 7 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802331250 File-URL: http://hdl.handle.net/10.1080/07408170802331250 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:7:p:626-641 Template-Type: ReDIF-Article 1.0 Author-Name: Shan Li Author-X-Name-First: Shan Author-X-Name-Last: Li Author-Name: Yong Chen Author-X-Name-First: Yong Author-X-Name-Last: Chen Title: Sensor fault detection for manufacturing quality control Abstract: This paper proposes a W control chart that is able to detect sensor mean shift faults and distinguish them from potential process faults in discrete-part manufacturing processes. The control chart is set up based on a linear fault quality model. The sensitivity of the W chart to the occurrence of sensor faults is studied. An index called the sensitivity ratio is used to investigate the effects of sensor fault locations and the sensor layout on the sensitivity of the W chart to sensor faults. In comparison with traditional control charts, which directly monitor the product quality characteristics, the proposed W chart can effectively separate sensor faults from process faults. An automotive body assembly process is used as an example to demonstrate the performance of the W chart. [Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix] Journal: IIE Transactions Pages: 605-614 Issue: 7 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802389290 File-URL: http://hdl.handle.net/10.1080/07408170802389290 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:7:p:605-614 Template-Type: ReDIF-Article 1.0 Author-Name: Nan Chen Author-X-Name-First: Nan Author-X-Name-Last: Chen Author-Name: Shiyu Zhou Author-X-Name-First: Shiyu Author-X-Name-Last: Zhou Title: Detectability study for statistical monitoring of multivariate dynamic processes Abstract: Fault detection and diagnosis for dynamic processes is an intensively investigated area.
However, the problem of determining whether or not system faults can be successfully detected based on the output measurements for a given dynamic process remains an open research topic. An intrinsic definition of fault detectability in multivariate dynamic processes is proposed in this paper. It defines detectability as an intrinsic system property, without reference to any specific fault detection algorithm. Furthermore, the relationships between the system structure and the detectability of mean change faults and variability change faults are investigated. Analytical criteria for checking the system detectability are established. The results presented in this paper can provide guidelines on system design improvement for process monitoring and control. A case study is presented that illustrates the effectiveness of the proposed methods. Journal: IIE Transactions Pages: 593-604 Issue: 7 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802389308 File-URL: http://hdl.handle.net/10.1080/07408170802389308 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:7:p:593-604 Template-Type: ReDIF-Article 1.0 Author-Name: Jionghua Jin Author-X-Name-First: Jionghua Author-X-Name-Last: Jin Author-Name: Jing Li Author-X-Name-First: Jing Author-X-Name-Last: Li Title: Multiscale mapping of aggregated signal features to embedded time–frequency localized operations using wavelets Abstract: Aggregated signals refer to measurements of system-level responses generated by the multiple operations embedded in a system. While an extensive literature exists on the analysis of general signal profiles, limited research has been performed on the topic of how to map features of the aggregated signals to the responses of individual operations, which is important for individual operation performance monitoring and assessment. In this paper, a two-step mapping algorithm is developed to obtain those mapping features using a multiscale wavelet analysis integrated with statistical hypothesis testing and engineering knowledge. It is shown that multiscale wavelet analysis is effective for mapping aggregated signals to the embedded individual operations that generate localized time–frequency responses. This algorithm is further demonstrated in a stamping process, in which the extracted wavelet coefficients of aggregated press tonnage signals are explicitly mapped to one or a few contributing embedded operations. The mapping allows for efficient monitoring and quality assessment of the embedded operations based on the aggregated signals, thereby avoiding the installation of additional in-die sensors in all operations. [Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix] Journal: IIE Transactions Pages: 615-625 Issue: 7 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802389316 File-URL: http://hdl.handle.net/10.1080/07408170802389316 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:41:y:2009:i:7:p:615-625 Template-Type: ReDIF-Article 1.0 Author-Name: Zhang Wu Author-X-Name-First: Zhang Author-X-Name-Last: Wu Author-Name: Jianxin Jiao Author-X-Name-First: Jianxin Author-X-Name-Last: Jiao Author-Name: Mei Yang Author-X-Name-First: Mei Author-X-Name-Last: Yang Author-Name: Ying Liu Author-X-Name-First: Ying Author-X-Name-Last: Liu Author-Name: Zhaojun Wang Author-X-Name-First: Zhaojun Author-X-Name-Last: Wang Title: An enhanced adaptive CUSUM control chart Abstract: Adaptive CUSUM charts, referred to as ACUSUM charts, have attracted considerable attention from the research community. By adjusting the reference parameter k dynamically, an ACUSUM chart may achieve a better performance over a range of mean shifts than conventional CUSUM charts that are designed for maximal detection effectiveness at a particular level of process shift. This article studies a new feature of the ACUSUM chart related to an additional charting parameter w, i.e., the exponent w applied to the sample mean shift in (xt − μ0)^w. The ACUSUM chart can be enhanced by adapting this parameter w according to the on-line estimated value of the mean shift, in conjunction with the reference parameter k. (A sketch of the basic adaptive-reference update in this family follows this record.) The testing cases reveal that this new adaptive CUSUM chart not only outperforms the earlier ACUSUM chart to a substantial degree, but also works as well as the most effective combined schemes consisting of a few CUSUM and/or other charts. Furthermore, this enhanced ACUSUM chart is easier to design and implement in a computerized environment compared with those combined schemes. In addition, a general-purpose optimization algorithm is proposed to assist the designs of various CUSUM charts. This paper demonstrates that this algorithm can significantly improve the performance of many CUSUM charts over the entire process shift range. Moreover, a systematic performance comparison of eight CUSUM charts is presented. The findings from this comparison are useful aids for SPC practitioners to select an appropriate CUSUM chart for real applications. [Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplementary resource: Appendix] Journal: IIE Transactions Pages: 642-653 Issue: 7 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802712582 File-URL: http://hdl.handle.net/10.1080/07408170802712582 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:7:p:642-653 Template-Type: ReDIF-Article 1.0 Author-Name: Camilo Mancilla Author-X-Name-First: Camilo Author-X-Name-Last: Mancilla Author-Name: Robert Storer Author-X-Name-First: Robert Author-X-Name-Last: Storer Title: A sample average approximation approach to stochastic appointment sequencing and scheduling Abstract: This article develops algorithms for a single-resource stochastic appointment sequencing and scheduling problem with waiting time, idle time, and overtime costs. This is a basic stochastic scheduling problem that has been studied in various forms by several previous authors. Applications for this problem cited previously include scheduling of surgeries in an operating room, scheduling of appointments in a clinic, scheduling ships in a port, and scheduling exams in an examination facility. In this article, the problem is formulated as a stochastic integer program using a sample average approximation.
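For orientation, the following is a minimal sketch of a basic adaptive CUSUM in the family described in the ACUSUM abstract above, in which an EWMA estimate of the current shift tunes the reference value on-line (the article's enhancement additionally adapts the exponent w, which is omitted here; all parameter values are illustrative).

import numpy as np

def adaptive_cusum(x, lam=0.1, delta_min=0.5, h=5.0):
    # Upper-sided CUSUM whose reference value k_t = shift_hat/2 tracks an
    # EWMA estimate of the current mean shift (floored at delta_min).
    shift_hat, c, alarms = delta_min, 0.0, []
    for t, xt in enumerate(x):
        shift_hat = max(delta_min, lam * xt + (1 - lam) * shift_hat)
        c = max(0.0, c + xt - shift_hat / 2.0)
        if c > h:
            alarms.append(t)
            c = 0.0   # restart after an alarm
    return alarms

rng = np.random.default_rng(3)
data = np.concatenate([rng.standard_normal(100), rng.standard_normal(100) + 1.0])
print(adaptive_cusum(data))  # alarms should cluster after the shift at index 100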
A heuristic solution approach based on Benders’ decomposition is developed and compared to exact methods and to previously proposed approaches. Extensive computational testing shows that the proposed methods produce good results compared with previous approaches. In addition, it is proved that the finite scenario sample average approximation problem is NP-complete. Journal: IIE Transactions Pages: 655-670 Issue: 8 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.635174 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.635174 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:8:p:655-670 Template-Type: ReDIF-Article 1.0 Author-Name: Ismail Capar Author-X-Name-First: Ismail Author-X-Name-Last: Capar Author-Name: Michael Kuby Author-X-Name-First: Michael Author-X-Name-Last: Kuby Title: An efficient formulation of the flow refueling location model for alternative-fuel stations Abstract: The Flow-Refueling Location Model (FRLM) locates a given number of refueling stations on a network to maximize the traffic flow among origin–destination pairs that can be refueled given the driving range of alternative-fuel vehicles. Traditionally, the FRLM has been formulated using a two-stage approach: the first stage generates combinations of locations capable of serving the round trip on each route, and then a mixed-integer programming approach is used to locate p facilities to maximize the flow refueled given the feasible combinations created in the first stage. Unfortunately, generating these combinations can be computationally burdensome and heuristics may be necessary to solve large-scale networks. This article presents a radically different mixed-binary-integer programming formulation that does not require pre-generation of feasible station combinations. Using several networks of different sizes, it is shown that the proposed model solves the FRLM to optimality as fast as or faster than currently utilized greedy and genetic heuristic algorithms. The ability to solve real-world problems in reasonable time using commercial math programming software offers flexibility for infrastructure providers to customize the FRLM to their particular fuel type and business model, which is demonstrated in the formulation of several FRLM extensions. Journal: IIE Transactions Pages: 622-636 Issue: 8 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.635175 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.635175 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:8:p:622-636 Template-Type: ReDIF-Article 1.0 Author-Name: Ho-Yin Mak Author-X-Name-First: Ho-Yin Author-X-Name-Last: Mak Author-Name: Zuo-Jun Shen Author-X-Name-First: Zuo-Jun Author-X-Name-Last: Shen Title: Risk diversification and risk pooling in supply chain design Abstract: Recent research has pointed out that the optimal strategies to mitigate supply disruptions and demand uncertainty are often mirror images of each other. In particular, risk diversification is favorable under the threat of disruptions and risk pooling is favorable under demand uncertainty. This article studies how dynamic sourcing in supply chain design provides partial benefits of both strategies. Optimization models are formulated for supply chain network design with dynamic sourcing under the risk of temporally dependent and temporally independent disruptions of facilities. 
Using computational experiments, it is shown that supply chain networks that allow small to moderate degrees of dynamic sourcing can be very robust against both disruptions and demand uncertainty. Insights are attained on the optimal degree of dynamic sourcing under different conditions. Journal: IIE Transactions Pages: 603-621 Issue: 8 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.635178 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.635178 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:8:p:603-621 Template-Type: ReDIF-Article 1.0 Author-Name: Guido Voigt Author-X-Name-First: Guido Author-X-Name-Last: Voigt Author-Name: Karl Inderfurth Author-X-Name-First: Karl Author-X-Name-Last: Inderfurth Title: Supply chain coordination with information sharing in the presence of trust and trustworthiness Abstract: The strategic use of private information can cause efficiency losses in traditional principal–agent settings, which are found, for example, in supply chain interactions. One stream of research states that these efficiency losses cannot be overcome if all agents use their private information strategically. However, another stream of research highlights the importance of communication, trust, and trustworthiness in supply chain management. In many instances, supplier–buyer relationships are found to reflect a principal–agent context where the supplier acts as the principal and the buyer behaves as the agent. Typically, here it is assumed that the supplier has an a priori distribution assumption over the buyer's private information on cost positions or market conditions. However, little is said on how the principal obtains this distribution. Moreover, it is stressed that the assessment of the a priori distribution is not influenced by communication because of the strategic extent of information sharing. The underlying concept behind this study is that there are two types of buyers (agents). The first type always reports her private information truthfully while the second type does not. In this framework, the supplier (principal) adjusts his a priori distribution conditioned on the buyer's shared information and generates the menu of contracts with respect to the adjusted probabilities. The presented model highlights that the impact of communication on the supplier's, buyer's, and supply chain's performance level is ambiguous and mainly depends on the buyer's information-sharing behavior as well as the relative extent of trust and trustworthiness. This study gives valuable insights into the situations in which communication is likely to harm the overall supply chain performance, thereby increasing the awareness that the ever-increasing claims for trust and information sharing in supply chain management have to be handled carefully. Journal: IIE Transactions Pages: 637-654 Issue: 8 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.635179 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.635179 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:44:y:2012:i:8:p:637-654 Template-Type: ReDIF-Article 1.0 Author-Name: Haldun Aytuğ Author-X-Name-First: Haldun Author-X-Name-Last: Aytuğ Author-Name: Anand Paul Author-X-Name-First: Anand Author-X-Name-Last: Paul Title: Sequencing jobs on a non-Markovian machine with random disruptions Abstract: This article considers the problem of sequencing a fixed number of jobs on a single machine subject to random disruptions. If the machine is interrupted in the course of performing a job, it has to restart the job from the beginning. It is assumed that the disruptions arrive according to a renewal process with inter-arrival times that are finite or continuous mixtures of independent exponential distributions, a class that contains the Decreasing Failure Rate (DFR) Weibull, Pareto, and DFR gamma distributions. The machine is non-Markovian in the sense that the expected completion time of a job on a machine depends partly on the history of the machine. It is shown that the shortest processing time first rule minimizes in expectation the total processing time of a batch of jobs, as well as the total waiting time of all of the jobs in the batch. These appear to be the first results in the literature for optimally sequencing an arbitrary number of jobs on a machine with a non-memoryless uptime distribution. Journal: IIE Transactions Pages: 671-680 Issue: 8 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.635183 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.635183 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:8:p:671-680 Template-Type: ReDIF-Article 1.0 Author-Name: Igor Averbakh Author-X-Name-First: Igor Author-X-Name-Last: Averbakh Author-Name: Jordi Pereira Author-X-Name-First: Jordi Author-X-Name-Last: Pereira Title: The flowtime network construction problem Abstract: Given a network whose edges need to be constructed, the problem is to find a construction schedule that minimizes the total recovery time of the vertices, where the recovery time of a vertex is the time when the vertex becomes connected to a special vertex (depot) that is the initial location of the construction crew. The construction speed is constant and is assumed to be incomparably slower than the travel speed of the construction crew in the already constructed part of the network. In this article, this new problem is introduced, its complexity is discussed, mixed-integer linear programming formulations are developed, fast and simple heuristics are proposed, and an exact branch-and-bound algorithm is presented that is based on specially designed lower bounds and dominance tests that exploit the problem’s combinatorial structure. The results of extensive computational experiments are also presented. Connections between the problem and the Traveling Repairman Problem, also known as the Delivery Man Problem, and applications in emergency restoration operations are discussed. Journal: IIE Transactions Pages: 681-694 Issue: 8 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.636792 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.636792 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:44:y:2012:i:8:p:681-694 Template-Type: ReDIF-Article 1.0 Author-Name: Arianna Alfieri Author-X-Name-First: Arianna Author-X-Name-Last: Alfieri Author-Name: Celia Glass Author-X-Name-First: Celia Author-X-Name-Last: Glass Author-Name: Steef van de Velde Author-X-Name-First: Steef Author-X-Name-Last: van de Velde Title: Two-machine lot streaming with attached setup times Abstract: Lot streaming is a fundamental production scheduling technique to compress manufacturing lead times by splitting a large lot of n identical items into sublots. This article presents a full characterization of optimal solutions for two-stage lot streaming with attached machine setup times to minimize the makespan. An O(n^3) time dynamic programming algorithm is presented for the discrete variant of the problem, in which all sublot sizes need to be integral. Since this running time can be prohibitively long for larger n, the continuous variant is also analyzed and an O(n) time algorithm for its solution is presented. Also, rounding procedures for the optimal continuous solution to obtain an approximate solution for the discrete problem are designed and analyzed. It is shown that a particular class of rounding procedures, using dynamic programming, has a compelling absolute worst-case and empirical performance. Journal: IIE Transactions Pages: 695-710 Issue: 8 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.649384 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.649384 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:8:p:695-710 Template-Type: ReDIF-Article 1.0 Author-Name: Bin Liu Author-X-Name-First: Bin Author-X-Name-Last: Liu Author-Name: Min Xie Author-X-Name-First: Min Author-X-Name-Last: Xie Author-Name: Way Kuo Author-X-Name-First: Way Author-X-Name-Last: Kuo Title: Reliability modeling and preventive maintenance of load-sharing systems with degrading components Abstract: This article presents new approaches to the reliability modeling of systems subject to shared loads. It is assumed that components in the system degrade continuously through an additive impact under load. The reliability assessment of such systems is often complicated by the fact that both the arriving load and the failure of components influence the degradation of the surviving components in a complex manner. The proposed approaches seek to ease this problem by first deriving the time to prior failures and the arrival of random loads and then determining the number of failed components. Two separate models capable of analyzing system reliability as well as arriving at system maintenance and design decisions are proposed. The first considers a constant load and the other a cumulative load. A numerical example is presented to illustrate the effectiveness of the proposed models. Journal: IIE Transactions Pages: 699-709 Issue: 8 Volume: 48 Year: 2016 Month: 8 X-DOI: 10.1080/0740817X.2015.1125041 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1125041 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:48:y:2016:i:8:p:699-709 Template-Type: ReDIF-Article 1.0 Author-Name: Sheng-Tsaing Tseng Author-X-Name-First: Sheng-Tsaing Author-X-Name-Last: Tseng Author-Name: Nan-Jung Hsu Author-X-Name-First: Nan-Jung Author-X-Name-Last: Hsu Author-Name: Yi-Chiao Lin Author-X-Name-First: Yi-Chiao Author-X-Name-Last: Lin Title: Joint modeling of laboratory and field data with application to warranty prediction for highly reliable products Abstract: To achieve a successful warranty management program, a good prediction of a product's field return rate during the warranty period is essential. This study aims to make field return rate predictions for a particular scenario, one in which multiple products have a similar design and both discrete-type laboratory data and continuous-type field data are available for each product. We build a hierarchical model to link the laboratory and field data on failure. The efficient sharing of information among products means that the proposed method generally provides a more stable laboratory summary for each individual product, especially for those cases with few or even no failures during the laboratory testing stage. Furthermore, a real case study is used to verify the proposed method. It is shown that the proposed method provides a better connection between laboratory reliability and field reliability, and this leads to a significant improvement in the estimated field return rate. Journal: IIE Transactions Pages: 710-719 Issue: 8 Volume: 48 Year: 2016 Month: 8 X-DOI: 10.1080/0740817X.2015.1133941 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1133941 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:8:p:710-719 Template-Type: ReDIF-Article 1.0 Author-Name: Sanling Song Author-X-Name-First: Sanling Author-X-Name-Last: Song Author-Name: David W. Coit Author-X-Name-First: David W. Author-X-Name-Last: Coit Author-Name: Qianmei Feng Author-X-Name-First: Qianmei Author-X-Name-Last: Feng Title: Reliability analysis of multiple-component series systems subject to hard and soft failures with dependent shock effects Abstract: New reliability models have been developed for systems subject to competing hard and soft failure processes with shocks that have dependent effects. In the new model, hard failure occurs when transmitted system shocks are large enough to cause any component in a series system to fail immediately, soft failure occurs when any component deteriorates to a certain failure threshold, and system shocks affect both failure processes for all components. Our new research extends previous reliability models that had dependent failure processes, where the dependency arose only from the shared number of shock exposures and not from the shock effects associated with individual system shocks. The dependency of transmitted shock sizes and shock damages across the specific failure processes of all components has not been sufficiently considered, yet in some practical settings it can be important. In practice, the effects of shock damages on the multiple failure processes among components are often dependent. In this article, we combine both probabilistic and physical degradation modeling concepts to develop the new system reliability model. Four different dependent patterns/scenarios of shock effects on multiple failure processes for all components are considered for series systems.
This represents a significant extension of previous research: it is more realistic, yet also more challenging for reliability modeling. The model is demonstrated by several examples. Journal: IIE Transactions Pages: 720-735 Issue: 8 Volume: 48 Year: 2016 Month: 8 X-DOI: 10.1080/0740817X.2016.1140922 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1140922 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:8:p:720-735 Template-Type: ReDIF-Article 1.0 Author-Name: Wenpo Huang Author-X-Name-First: Wenpo Author-X-Name-Last: Huang Author-Name: Lianjie Shu Author-X-Name-First: Lianjie Author-X-Name-Last: Shu Author-Name: William H. Woodall Author-X-Name-First: William H. Author-X-Name-Last: Woodall Author-Name: Kwok-Leung Tsui Author-X-Name-First: Kwok-Leung Author-X-Name-Last: Tsui Title: CUSUM procedures with probability control limits for monitoring processes with variable sample sizes Abstract: Control charts are usually designed with constant control limits. In this article, we consider the design of control charts with probability control limits aimed at controlling the conditional false alarm rate at the desired value at each time step. The resulting control limits are dynamic and thus are more general and capable of accommodating more complex situations in practice as compared with the use of a constant control limit. We consider the situation in which the sample sizes vary over time, with a primary focus on the CUmulative SUM (CUSUM)-type control charts. Unlike other methods, no assumptions about future sample sizes are required with our approach. An integral equation approach is developed to facilitate the design and analysis of the CUSUM control chart with probability control limits. The relationship between the CUSUM charts using probability control limits and the CUSUM charts with a fast initial response feature is investigated. Journal: IIE Transactions Pages: 759-771 Issue: 8 Volume: 48 Year: 2016 Month: 8 X-DOI: 10.1080/0740817X.2016.1146422 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1146422 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:8:p:759-771 Template-Type: ReDIF-Article 1.0 Author-Name: Jun Li Author-X-Name-First: Jun Author-X-Name-Last: Li Author-Name: Peihua Qiu Author-X-Name-First: Peihua Author-X-Name-Last: Qiu Title: Nonparametric dynamic screening system for monitoring correlated longitudinal data Abstract: In many applications, including the early detection and prevention of diseases and performance evaluation of airplanes and other durable products, we need to sequentially monitor the longitudinal pattern of certain performance variables of a subject. A signal should be given as soon as possible after the pattern has become abnormal. Recently, a new statistical method, called a dynamic screening system (DySS), was proposed to solve this problem. It is a combination of longitudinal data analysis and statistical process control. However, the current DySS method can only handle cases where the observations are normally distributed and within-subject observations are independent or follow a specific time series model (e.g., AR(1) model). In this article, we propose a new nonparametric DySS method that can handle cases where the observation distribution and the correlation among within-subject observations are arbitrary.
Therefore, it significantly broadens the application area of the DySS method. Numerical studies show that the new method works well in practice. Journal: IIE Transactions Pages: 772-786 Issue: 8 Volume: 48 Year: 2016 Month: 8 X-DOI: 10.1080/0740817X.2016.1146423 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1146423 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:8:p:772-786 Template-Type: ReDIF-Article 1.0 Author-Name: Rui Peng Author-X-Name-First: Rui Author-X-Name-Last: Peng Author-Name: Qingqinq Zhai Author-X-Name-First: Qingqinq Author-X-Name-Last: Zhai Author-Name: Liudong Xing Author-X-Name-First: Liudong Author-X-Name-Last: Xing Author-Name: Jun Yang Author-X-Name-First: Jun Author-X-Name-Last: Yang Title: Reliability analysis and optimal structure of series-parallel phased-mission systems subject to fault-level coverage Abstract: Many practical systems have multiple consecutive and non-overlapping phases of operations during their mission and are generally referred to as phased-mission systems (PMSs). This article considers a general type of PMS consisting of subsystems connected in series, where each subsystem contains components with different capacities. The components within the same subsystem are divided into several disjoint work-sharing groups (WSGs). The capacity of each WSG is equal to the summation of the capacities of its working components, and the capacity of each subsystem is equal to the capacity of the WSG with the maximum capacity. The system capacity is bottlenecked by the capacity of the subsystem with the minimum capacity. The system survives the mission only if its capacity meets the predetermined mission demand in all phases. Such PMSs can be commonly found in the power transmission and telecommunication industries. A universal generating function–based method is first proposed for the reliability analysis of the capacitated series-parallel PMSs with the consideration of imperfect fault coverage. As different partitions of the WSGs inside a subsystem can lead to different system reliabilities, the optimal structure that maximizes the system reliability is investigated. Examples are presented to illustrate the proposed reliability evaluation method and optimization procedure. Journal: IIE Transactions Pages: 736-746 Issue: 8 Volume: 48 Year: 2016 Month: 8 X-DOI: 10.1080/0740817X.2016.1146424 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1146424 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:8:p:736-746 Template-Type: ReDIF-Article 1.0 Author-Name: Maria Luíza Guerra de Toledo Author-X-Name-First: Maria Luíza Guerra de Author-X-Name-Last: Toledo Author-Name: Marta A. Freitas Author-X-Name-First: Marta A. Author-X-Name-Last: Freitas Author-Name: Enrico A. Colosimo Author-X-Name-First: Enrico A. Author-X-Name-Last: Colosimo Author-Name: Gustavo L. Gilardoni Author-X-Name-First: Gustavo L. Author-X-Name-Last: Gilardoni Title: Optimal periodic maintenance policy under imperfect repair: A case study on the engines of off-road vehicles Abstract: In the repairable systems literature one can find a great number of papers that propose maintenance policies under the assumption of minimal repair after each failure (such a repair leaves the system in the same condition as it was just before the failure—as bad as old). 
This article derives a statistical procedure to estimate the optimal Preventive Maintenance (PM) periodic policy, under the following two assumptions: (i) perfect repair at each PM action (i.e., the system returns to the as-good-as-new state) and (ii) imperfect system repair after each failure (the system returns to an intermediate state between as bad as old and as good as new). Models for imperfect repair have already been presented in the literature. However, an inference procedure for the quantities of interest has not yet been fully studied. In the present article, statistical methods, including the likelihood function, Monte Carlo simulation, and bootstrap resampling methods, are used in order to (i) estimate the degree of efficiency of a repair and (ii) obtain the optimal PM check points that minimize the expected total cost. This study was motivated by a real situation involving the maintenance of engines in off-road vehicles. Journal: IIE Transactions Pages: 747-758 Issue: 8 Volume: 48 Year: 2016 Month: 8 X-DOI: 10.1080/0740817X.2016.1147663 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1147663 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:8:p:747-758 Template-Type: ReDIF-Article 1.0 Author-Name: Hongyue Sun Author-X-Name-First: Hongyue Author-X-Name-Last: Sun Author-Name: Xinwei Deng Author-X-Name-First: Xinwei Author-X-Name-Last: Deng Author-Name: Kaibo Wang Author-X-Name-First: Kaibo Author-X-Name-Last: Wang Author-Name: Ran Jin Author-X-Name-First: Ran Author-X-Name-Last: Jin Title: Logistic regression for crystal growth process modeling through hierarchical nonnegative garrote-based variable selection Abstract: Single-crystal silicon ingots are produced from a complex crystal growth process. Such a process is sensitive to subtle process condition changes, which can easily cause the process to fail and lead to the growth of a polycrystalline ingot instead of the desired monocrystalline ingot. Therefore, it is important to model this polycrystalline defect in the crystal growth process and identify key process variables and their features. However, modeling the crystal growth process poses great challenges, due to complicated engineering mechanisms and a large number of functional process variables. In this article, we focus on modeling the relationship between a binary quality indicator for polycrystalline defect and functional process variables. We propose a logistic regression model with a hierarchical nonnegative garrote-based variable selection method that can accurately estimate the model, identify key process variables, and capture important features. Simulations and a case study are conducted to illustrate the merits of the proposed method in prediction and variable selection. Journal: IIE Transactions Pages: 787-796 Issue: 8 Volume: 48 Year: 2016 Month: 8 X-DOI: 10.1080/0740817X.2016.1167286 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1167286 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:8:p:787-796 Template-Type: ReDIF-Article 1.0 Author-Name: Dilip Madan Author-X-Name-First: Dilip Author-X-Name-Last: Madan Title: Joint risk-neutral laws and hedging Abstract: Complex positions on multiple underliers are hedged using the options surface of all underliers. Hedging objectives minimize ask prices for which post-hedge residual risks are acceptable at prespecified levels.
It is shown that such hedges require the use of a risk-neutral law on the set of underlying risks. A joint risk-neutral law for multiple underliers is proposed and estimated from multiple option surfaces. Under the proposed joint law, asset returns are a linear mixture of independent Lévy components. Data on the independent components are estimated by an application of independent component analysis on time series data for the underlying returns. A comparison of the risk-neutral law with the statistical law shows that risk-neutral correlations dominate their statistical counterparts. Hedges significantly reduce ask prices. Journal: IIE Transactions Pages: 840-850 Issue: 12 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.541179 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.541179 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:12:p:840-850 Template-Type: ReDIF-Article 1.0 Author-Name: Wheyming Song Author-X-Name-First: Wheyming Author-X-Name-Last: Song Author-Name: Bruce Schmeiser Author-X-Name-First: Bruce Author-X-Name-Last: Schmeiser Title: Displaying statistical point estimates using the leading-digit rule Abstract: This article proposes a display format and associated procedure for reporting statistical point estimators and their precision in statistical experiments. For each estimator, two values are reported, separated by a semicolon. The first value is the reported point estimate, which omits meaningless digits, as defined by LDR(1), the previously published standard leading-digit rule. The second value is the reported standard error, which omits all digits after its leading digit. Three sometimes-conflicting criteria—(i) minimal loss of statistical information; (ii) minimal number of reporting positions required; and (iii) maximal appeal to all levels of user sophistication—were used to guide the creation of the leading-digit format for use when reporting many point estimates in tabular form. Journal: IIE Transactions Pages: 851-862 Issue: 12 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2011.564601 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.564601 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:12:p:851-862 Template-Type: ReDIF-Article 1.0 Author-Name: Susan Martonosi Author-X-Name-First: Susan Author-X-Name-Last: Martonosi Title: Dynamic server allocation at parallel queues Abstract: This article explores whether dynamically reassigning servers to parallel queues in response to queue imbalances can reduce average waiting time in those queues. Approximate dynamic programming methods are used to determine when servers should be switched, and the performance of such dynamic allocations is compared to that of a pre-scheduled deterministic allocation. The proposed method is tested on both synthetic data and data from airport security checkpoints at Boston Logan International Airport. It is found that in situations where the uncertainty in customer arrival rates is significant, dynamically reallocating servers can substantially reduce waiting time. Moreover, it is found that intuitive switching strategies that are optimal for queues with homogeneous entry rates are not optimal in this setting.
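The leading-digit display format of Song and Schmeiser, summarized above, admits a compact sketch. The function below is one plausible reading of the rule with hypothetical numbers, not the authors' exact procedure: round the standard error to its leading digit and report the point estimate to the matching decimal position, separated by a semicolon:

    import math

    def leading_digit_report(estimate, std_error):
        # Keep only the leading digit of the standard error, then report
        # the estimate to the same decimal position.
        exp = math.floor(math.log10(abs(std_error)))
        return f"{round(estimate, -exp):g}; {round(std_error, -exp):g}"

    print(leading_digit_report(123.4567, 0.0342))   # 123.46; 0.03
    print(leading_digit_report(9876.5, 34.2))       # 9880; 30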
Journal: IIE Transactions Pages: 863-877 Issue: 12 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2011.564602 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.564602 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:12:p:863-877 Template-Type: ReDIF-Article 1.0 Author-Name: Mark Joshi Author-X-Name-First: Mark Author-X-Name-Last: Joshi Author-Name: Chao Yang Author-X-Name-First: Chao Author-X-Name-Last: Yang Title: Algorithmic Hessians and the fast computation of cross-gamma risk Abstract: This article introduces a new methodology for computing Hessians from algorithms for function evaluation using backwards methods. It is shown that the complexity of the Hessian calculation is a linear function of the number of state variables multiplied by the complexity of the original algorithm. These results are used to compute the gamma matrix of multidimensional financial derivatives including Asian baskets and cancelable swaps. In particular, the algorithm for computing gammas of Bermudan cancelable swaps is order O(n^2) per step in the number of rates. Numerical results are presented that demonstrate that computing all n(n+1)/2 gammas in the LMM takes roughly n/3 times as long as computing the price. Journal: IIE Transactions Pages: 878-892 Issue: 12 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2011.568040 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.568040 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:12:p:878-892 Template-Type: ReDIF-Article 1.0 Author-Name: Soroush Saghafian Author-X-Name-First: Soroush Author-X-Name-Last: Saghafian Author-Name: Mark Van Oyen Author-X-Name-First: Mark Author-X-Name-Last: Van Oyen Author-Name: Bora Kolfal Author-X-Name-First: Bora Author-X-Name-Last: Kolfal Title: The “W” network and the dynamic control of unreliable flexible servers Abstract: This article addresses the problem of effectively assigning partially flexible resources to various jobs in Markovian parallel queueing systems with heterogeneous and unreliable servers. Attention is focused on a structure forming a “W” and it is found that this design is highly efficient; it requires only a small amount of cross-training but often performs almost as well as a fully cross-trained system. It is shown that (even allowing disruptions) a version of the cμ rule, which prioritizes serving the “fixed task before the shared,” is optimal under some conditions. Since the optimal policy is complex in general, a powerful and yet simple control policy is developed. This policy (which is implementable in any parallel queueing system) defines a simple measure of workload costs and assigns each server to the queue with the Largest Expected Workload Cost (LEWC). Thus, it effectively combines the intuition underlying two widely used policies: (i) the load-balancing objective in serving the Longest Queue (LQ); and (ii) the greedy cost minimization emphasis of the cμ rule. Extensive numerical tests show that LEWC performs well in comparison with four key policies: optimal, LQ, cμ, and generalized cμ (Gcμ). The stability of the LEWC, LQ, and Gcμ policies is proved. [Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for additional appendices (detailed proofs, additional analyses, data sets, etc.).]
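The backwards (reverse-mode) differentiation idea behind the algorithmic-Hessian abstract above (Joshi and Yang) can be illustrated with modern automatic-differentiation tooling. The sketch below uses JAX with a stand-in pricing function; it reproduces the object being computed, a full matrix of second derivatives including all cross-gamma terms, but not the authors' algorithm or its complexity guarantees:

    import jax
    import jax.numpy as jnp

    def price(x):
        # Stand-in smooth pricing function of several underliers,
        # purely illustrative.
        return jnp.log(1.0 + jnp.sum(jnp.exp(x)))

    x0 = jnp.array([0.1, 0.2, 0.3])
    gamma = jax.hessian(price)(x0)   # all n(n+1)/2 distinct second
    print(gamma)                     # derivatives, cross-gammas included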
Journal: IIE Transactions Pages: 893-907 Issue: 12 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2011.575678 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.575678 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:12:p:893-907 Template-Type: ReDIF-Article 1.0 Author-Name: Jian Hu Author-X-Name-First: Jian Author-X-Name-Last: Hu Author-Name: Tito Homem-de-Mello Author-X-Name-First: Tito Author-X-Name-Last: Homem-de-Mello Author-Name: Sanjay Mehrotra Author-X-Name-First: Sanjay Author-X-Name-Last: Mehrotra Title: Risk-adjusted budget allocation models with application in homeland security Abstract: This article presents and studies models for multi-criteria budget allocation problems under uncertainty. The proposed models incorporate uncertainties in the decision maker's weights using a robust weighted sum approach. The risk averseness of the decision maker in satisfying random risk-related constraints is ensured by using stochastic dominance. A sample average approximation approach together with a cutting surface method is used to solve this model. An analysis for the computation of statistical lower and upper bounds is also given. The proposed models are used to study the budget allocation to ten urban areas in the United States under the Urban Areas Security Initiative. Here the decision maker considers property losses, fatalities, air departures, and average daily bridge traffic as separate criteria. The properties of the proposed modeling and solution methodology are discussed using a RAND Corporation–proposed allocation policy and the current government budget allocation as two benchmarks. The budget results are discussed under several parameter scenarios. Journal: IIE Transactions Pages: 819-839 Issue: 12 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2011.578610 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.578610 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:12:p:819-839 Template-Type: ReDIF-Article 1.0 Author-Name: The Editors Title: Editorial Board EOV Journal: IIE Transactions Pages: ebi-ebiv Issue: 12 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2011.624388 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.624388 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:12:p:ebi-ebiv Template-Type: ReDIF-Article 1.0 Author-Name: Jianguo Wu Author-X-Name-First: Jianguo Author-X-Name-Last: Wu Author-Name: Yuan Yuan Author-X-Name-First: Yuan Author-X-Name-Last: Yuan Author-Name: Haijun Gong Author-X-Name-First: Haijun Author-X-Name-Last: Gong Author-Name: Tzu-Liang (Bill) Tseng Author-X-Name-First: Tzu-Liang (Bill) Author-X-Name-Last: Tseng Title: Inferring 3D ellipsoids based on cross-sectional images with applications to porosity control of additive manufacturing Abstract: This article develops a series of statistical approaches that can be used to infer size distribution, volume number density, and volume fraction of three-dimensional (3D) ellipsoidal particles based on two-dimensional (2D) cross-sectional images. Specifically, this article first establishes an explicit linkage between the size of the ellipsoidal particles and the size of cross-sectional elliptical contours.
Then an efficient Quasi-Monte Carlo EM algorithm is developed to overcome the challenge of 3D size distribution estimation based on the established complex linkage. The relationship between the 3D and 2D particle number densities is also identified to estimate the volume number density and volume fraction. The effectiveness of the proposed method is demonstrated through simulation and case studies. Journal: IISE Transactions Pages: 570-583 Issue: 7 Volume: 50 Year: 2018 Month: 7 X-DOI: 10.1080/24725854.2017.1419316 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1419316 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:7:p:570-583 Template-Type: ReDIF-Article 1.0 Author-Name: Sixiang Zhao Author-X-Name-First: Sixiang Author-X-Name-Last: Zhao Author-Name: William Benjamin Haskell Author-X-Name-First: William Benjamin Author-X-Name-Last: Haskell Author-Name: Michel-Alexandre Cardin Author-X-Name-First: Michel-Alexandre Author-X-Name-Last: Cardin Title: Decision rule-based method for flexible multi-facility capacity expansion problem Abstract: Strategic capacity planning for multiple-facility systems with flexible designs is an important topic in the area of capacity expansion problems with random demands. The difficulties of this problem lie in the multidimensional nature of its random variables and action space. For a single-facility problem, the decision rule method has been shown to be efficient in deriving desirable solutions, but for a Multiple-facility Capacity Expansion Problem (MCEP), it has not been well studied. This article designs a novel decision rule–based method for the solution of an MCEP with multiple options, discrete capacity, and a concave capacity expansion cost. An if–then decision rule is designed and the original multi-stage problem is thus transformed into a master problem and a multi-period sub-problem. As the sub-problem contains non-binding constraints, we combine a stochastic approximation algorithm with a branch-and-cut technique so that the sub-problem can be further decomposed across scenarios and be solved efficiently. The proposed decision rule–based method is also extended to solving the MCEP with fixed costs. Numerical studies in this article illustrate that the proposed method affords not only improved performance relative to an inflexible design taken as benchmark but also time savings relative to approximate dynamic programming analysis. Journal: IISE Transactions Pages: 553-569 Issue: 7 Volume: 50 Year: 2018 Month: 7 X-DOI: 10.1080/24725854.2018.1426135 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1426135 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:7:p:553-569 Template-Type: ReDIF-Article 1.0 Author-Name: Gaofeng Da Author-X-Name-First: Gaofeng Author-X-Name-Last: Da Author-Name: Maochao Xu Author-X-Name-First: Maochao Author-X-Name-Last: Xu Author-Name: Ping Shing Chan Author-X-Name-First: Ping Shing Author-X-Name-Last: Chan Title: An efficient algorithm for computing the signatures of systems with exchangeable components and applications Abstract: Computing the system signature is an attractive but challenging problem in system reliability. In this article, we propose a novel algorithm to compute the signature of a system with exchangeable components. 
This new algorithm relies only on information about minimal cut sets or minimal path sets, which makes it intuitive and efficient. The new results in this article are used to address the aging properties of the system signature discussed in the literature. We further discuss the bounds for the system signature when only partial information is available. The application of these new results to cyberattacks is also highlighted. Journal: IISE Transactions Pages: 584-595 Issue: 7 Volume: 50 Year: 2018 Month: 7 X-DOI: 10.1080/24725854.2018.1429694 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1429694 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:7:p:584-595 Template-Type: ReDIF-Article 1.0 Author-Name: Jian Li Author-X-Name-First: Jian Author-X-Name-Last: Li Author-Name: Jiakun Xu Author-X-Name-First: Jiakun Author-X-Name-Last: Xu Author-Name: Qiang Zhou Author-X-Name-First: Qiang Author-X-Name-Last: Zhou Title: Monitoring serially dependent categorical processes with ordinal information Abstract: In many industrial applications, there is usually a natural order among the attribute levels of categorical process variables or factors, such as good, marginal, and bad. We consider monitoring a serially dependent categorical process with such ordinal information, which is driven by a latent autocorrelated continuous process. The unobservable numerical values of the underlying continuous variable determine the attribute levels of the ordinal factor. We first propose a novel ordinal log-linear model and transform the serially dependent ordinal categorical data into a multi-way contingency table that can be described by the developed model. The ordinal log-linear model can incorporate both the marginal distribution of attribute levels and the serial dependence simultaneously. A serially dependent ordinal categorical chart is proposed to monitor whether there is any shift in the location parameter or in the autocorrelation coefficient of the underlying continuous variable. Simulation results demonstrate its power under various types of latent continuous distributions. Journal: IISE Transactions Pages: 596-605 Issue: 7 Volume: 50 Year: 2018 Month: 7 X-DOI: 10.1080/24725854.2018.1429695 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1429695 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:7:p:596-605 Template-Type: ReDIF-Article 1.0 Author-Name: Dingguo Hua Author-X-Name-First: Dingguo Author-X-Name-Last: Hua Author-Name: Elsayed A. Elsayed Author-X-Name-First: Elsayed A. Author-X-Name-Last: Elsayed Title: Reliability approximation of k-out-of-n pairs: G balanced systems with spatially distributed units Abstract: Various industries are finding increasing uses for k-out-of-n pairs: G balanced systems, an example being an unmanned aerial vehicle. Reliability estimates for such systems are difficult to obtain, due to the complexity of the problem: the operation of the systems depends not only on the number of operating pairs but also on their spatial configuration. The computation becomes time-consuming when n is large and k is small, since the number of successful events increases significantly. In this article, we develop a Monte Carlo simulation-based reliability approximation for k-out-of-n pairs: G balanced systems for different scenarios.
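The structure of such a Monte Carlo estimate can be sketched in a few lines. Everything below is illustrative: pairs fail independently with a common probability, and the balance test (each working pair must face a working pair across the circle) is a toy stand-in for the article's spatial-configuration criterion:

    import random

    def estimate_reliability(n, k, p, trials=100_000, seed=1):
        # Monte Carlo estimate for a k-out-of-n pairs: G system under a
        # TOY balance rule, not the article's actual criterion.
        rng = random.Random(seed)
        ok = 0
        for _ in range(trials):
            up = [rng.random() < p for _ in range(n)]
            balanced = all(up[i] == up[(i + n // 2) % n] for i in range(n))
            ok += sum(up) >= k and balanced
        return ok / trials

    print(estimate_reliability(n=8, k=4, p=0.9))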
Numerical examples show that the approximation is accurate and computationally efficient. Journal: IISE Transactions Pages: 616-626 Issue: 7 Volume: 50 Year: 2018 Month: 7 X-DOI: 10.1080/24725854.2018.1431742 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1431742 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:7:p:616-626 Template-Type: ReDIF-Article 1.0 Author-Name: Xin Geng Author-X-Name-First: Xin Author-X-Name-Last: Geng Title: Opaque pricing over vertically differentiated servers Abstract: We study a pricing problem for a service firm facing delay-sensitive customers. Servers are quality-differentiated, and server quality can be improved based on past experience. In such cases, the commonly used pricing scheme fails to effectively facilitate quality improvement for servers and therefore impedes the growth of the firm’s future revenue. Motivated by the probabilistic selling strategy, we propose another pricing scheme where the firm does not disclose the server assignment to the customers until payment is made. With respect to the long-run total revenue, we compare the two schemes and establish the superiority of the one that we propose. Moreover, we investigate how the revenue advantage from the proposed pricing scheme is affected by the quality improvement process and the degree of vertical differentiation, both of which are important features in the setting. Finally, we discuss the generalized model with heterogeneous arrival classes. Journal: IISE Transactions Pages: 627-642 Issue: 7 Volume: 50 Year: 2018 Month: 7 X-DOI: 10.1080/24725854.2018.1434332 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1434332 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:7:p:627-642 Template-Type: ReDIF-Article 1.0 Author-Name: Joachim Arts Author-X-Name-First: Joachim Author-X-Name-Last: Arts Author-Name: Rob Basten Author-X-Name-First: Rob Author-X-Name-Last: Basten Title: Design of multi-component periodic maintenance programs with single-component models Abstract: Capital assets, such as wind turbines and ships, require maintenance throughout their long lifetimes. Assets usually need to go offline to perform maintenance, and such downs can be either scheduled or unscheduled. Since different components in an asset have different maintenance policies, it is key to have a maintenance program in place that coordinates the maintenance policies of all components, to minimize costs associated with maintenance and downtime. Single-component maintenance policies have been developed for decades, but such policies do not usually allow coordination between different components within an asset. We study a periodic maintenance policy and a condition-based maintenance policy in which the scheduled downs can be coordinated between components. In both policies, we assume that at unscheduled downs, a minimal repair is performed to keep the unscheduled downtime as short as possible. Both policies can be evaluated exactly using renewal theory, and we show how these policies can be used as building blocks to design and optimize maintenance programs for multi-component assets.
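A classical single-component building block behind such renewal-theoretic evaluations is worth recording. With perfect preventive maintenance every T time units, minimal repairs at failures in between, and a power-law (Weibull) failure intensity, the long-run cost rate is (c_p + c_m (T/scale)^shape) / T, which has a closed-form minimizer when shape > 1. This is a textbook special case with illustrative parameter values, not the article's multi-component model:

    def cost_rate(T, c_p, c_m, scale, shape):
        # Expected minimal repairs in [0, T] under a power-law intensity:
        # Lambda(T) = (T / scale) ** shape.
        return (c_p + c_m * (T / scale) ** shape) / T

    # Closed-form minimizer for shape > 1:
    #   T* = scale * (c_p / (c_m * (shape - 1))) ** (1 / shape)
    c_p, c_m, scale, shape = 100.0, 25.0, 10.0, 2.0
    T_star = scale * (c_p / (c_m * (shape - 1))) ** (1 / shape)
    print(T_star, cost_rate(T_star, c_p, c_m, scale, shape))  # 20.0 10.0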
Journal: IISE Transactions Pages: 606-615 Issue: 7 Volume: 50 Year: 2018 Month: 7 X-DOI: 10.1080/24725854.2018.1437301 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1437301 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:7:p:606-615 Template-Type: ReDIF-Article 1.0 Author-Name: Mojtaba Khanzadeh Author-X-Name-First: Mojtaba Author-X-Name-Last: Khanzadeh Author-Name: Sudipta Chowdhury Author-X-Name-First: Sudipta Author-X-Name-Last: Chowdhury Author-Name: Mark A. Tschopp Author-X-Name-First: Mark A. Author-X-Name-Last: Tschopp Author-Name: Haley R. Doude Author-X-Name-First: Haley R. Author-X-Name-Last: Doude Author-Name: Mohammad Marufuzzaman Author-X-Name-First: Mohammad Author-X-Name-Last: Marufuzzaman Author-Name: Linkan Bian Author-X-Name-First: Linkan Author-X-Name-Last: Bian Title: In-situ monitoring of melt pool images for porosity prediction in directed energy deposition processes Abstract: One major challenge of implementing Directed Energy Deposition (DED) Additive Manufacturing (AM) for production is the lack of understanding of its underlying process–structure–property relationship. Parts manufactured using the DED technologies may be too inconsistent and unreliable to meet the stringent requirements for many industrial applications. The objective of this research is to characterize the underlying thermo-physical dynamics of the DED process, captured by melt pool signals, and predict porosity during the build. Herein we propose a novel porosity prediction method based on the temperature distribution of the top surface of the melt pool as an AM part is being built. Self-Organizing Maps (SOMs) are then used to further analyze the two-dimensional melt pool image streams to identify similar and dissimilar melt pools. X-ray tomography is used to experimentally locate porosity within the Ti-6Al-4V thin-wall specimen, which is then compared with predicted porosity locations based on the melt pool analysis. Results show that the proposed method based on the temperature distribution of the melt pool is able to predict the location of porosity almost 96% of the time when the appropriate SOM model using a thermal profile is selected. Results are also compared with a previous study that focuses only on the shape and size of the melt pool. We find that the incorporation of thermal distribution significantly improves the accuracy of porosity prediction. The significance of the proposed methodology based on melt pool profiles is that it can lead the way toward in-situ monitoring and the minimization, or even elimination, of pores within AM parts. Journal: IISE Transactions Pages: 437-455 Issue: 5 Volume: 51 Year: 2019 Month: 5 X-DOI: 10.1080/24725854.2017.1417656 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1417656 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:5:p:437-455 Template-Type: ReDIF-Article 1.0 Author-Name: Sophie Weiss Author-X-Name-First: Sophie Author-X-Name-Last: Weiss Author-Name: Justus Arne Schwarz Author-X-Name-First: Justus Arne Author-X-Name-Last: Schwarz Author-Name: Raik Stolletz Author-X-Name-First: Raik Author-X-Name-Last: Stolletz Title: The buffer allocation problem in production lines: Formulations, solution methods, and instances Abstract: Flow production lines with finite buffer capacities are used in practice for mass production, e.g., in the automotive and food industries.
The decision regarding the allocation of buffer capacities to mitigate throughput losses from stochastic processing times and unreliable stations is known as the Buffer Allocation Problem (BAP). This article classifies and reviews the literature on the BAP with respect to different versions of the optimization problem. It considers the detailed characteristics of the flow lines, the objective function, and the constraints. Moreover, a new classification scheme for solution methods is presented that differentiates between explicit solutions, integrated optimization methods, and iterative optimization methods. The characteristics of test instances derived from realistic cases and test instances used in multiple references are discussed. The review reveals gaps in the literature regarding the considered optimization problems and solution methods, especially with a view toward realistic lines. In addition, a library, FlowLineLib, of realistic and already used test instances is provided. Journal: IISE Transactions Pages: 456-485 Issue: 5 Volume: 51 Year: 2019 Month: 5 X-DOI: 10.1080/24725854.2018.1442031 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1442031 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:5:p:456-485 Template-Type: ReDIF-Article 1.0 Author-Name: J. P. van der Gaast Author-X-Name-First: J. P. Author-X-Name-Last: van der Gaast Author-Name: René B. M. de Koster Author-X-Name-First: René B. M. Author-X-Name-Last: de Koster Author-Name: Ivo J. B. F. Adan Author-X-Name-First: Ivo J. B. F. Author-X-Name-Last: Adan Title: Optimizing product allocation in a polling-based milkrun picking system Abstract: E-commerce fulfillment competition revolves around cheap, speedy, and time-definite delivery. Milkrun order picking systems have proven to be very successful in providing handling speed for a large, but highly variable, number of orders. In this system, an order picker picks orders that arrive in real time during the picking process by dynamically changing the stops on the picker’s current picking route. The advantage of milkrun picking is that it reduces order picking set-up time and worker travel time compared with conventional batch picking systems. This article is the first to study order throughput times of multi-line orders in a milkrun picking system. We model this system as a cyclic polling system with simultaneous batch arrivals, and determine the mean order throughput time for three picking strategies: exhaustive, locally-gated, and globally-gated. These results allow us to study the effect of different product allocations in an optimization framework. We show that the picking strategy that achieves the shortest order throughput times depends on the ratio between pick times and travel times. In addition, for a real-world application, we show that milkrun order picking significantly reduces the order throughput time compared with conventional batch picking. Journal: IISE Transactions Pages: 486-500 Issue: 5 Volume: 51 Year: 2019 Month: 5 X-DOI: 10.1080/24725854.2018.1493758 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1493758 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
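To make the Buffer Allocation Problem reviewed above concrete, the sketch below pairs a brute-force search over buffer allocations with a throughput estimate from the standard max-plus recursion for blocking-after-service lines. The exponential service times, machine rates, and buffer budget are illustrative assumptions; the specialized evaluation and optimization methods surveyed in the review are far more efficient than this baseline:

    import random

    def line_throughput(buffers, rates, n_jobs=20_000, seed=0):
        # dep[j][m] = departure time of job j from machine m in a
        # saturated tandem line with blocking after service.
        rng = random.Random(seed)
        M = len(rates)
        dep = [[0.0] * M for _ in range(n_jobs + 1)]
        for j in range(1, n_jobs + 1):
            for m in range(M):
                start = max(dep[j - 1][m], dep[j][m - 1] if m else 0.0)
                done = start + rng.expovariate(rates[m])
                if m < M - 1:
                    k = buffers[m] + 1        # buffer slots + machine m+1
                    if j - k >= 1:            # wait for space downstream
                        done = max(done, dep[j - k][m + 1])
                dep[j][m] = done
        return n_jobs / dep[n_jobs][M - 1]

    def best_allocation(total_buffers, rates):
        # Enumerate every split of the buffer budget between stations.
        best, M = None, len(rates)
        def rec(remaining, alloc):
            nonlocal best
            if len(alloc) == M - 1:
                if remaining == 0:
                    th = line_throughput(alloc, rates)
                    if best is None or th > best[1]:
                        best = (tuple(alloc), th)
                return
            for b in range(remaining + 1):
                rec(remaining - b, alloc + [b])
        rec(total_buffers, [])
        return best

    print(best_allocation(4, [1.0, 0.8, 1.0]))   # slower middle machine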
Handle: RePEc:taf:uiiexx:v:51:y:2019:i:5:p:486-500 Template-Type: ReDIF-Article 1.0 Author-Name: Yunyi Kang Author-X-Name-First: Yunyi Author-X-Name-Last: Kang Author-Name: Feng Ju Author-X-Name-First: Feng Author-X-Name-Last: Ju Title: Integrated analysis of productivity and machine condition degradation: Performance evaluation and bottleneck identification Abstract: Machine condition degradation is widely observed in manufacturing systems. It has been shown that machines working at different operating states may break down in different probabilistic manners. In addition, machines working in a worse operating state are more likely to fail, thus causing more frequent down periods and reducing the system throughput. However, there is still a lack of analytical methods to quantify the potential impact of machine condition degradation on the overall system performance to facilitate operation decision making on the factory floor. In this article, we consider a serial production line with finite buffers and multiple machines following a Markovian degradation process. An integrated model based on the aggregation method is built to quantify the overall system performance and its interactions with the machine condition process. Moreover, system properties are investigated to analyze the influence of system parameters on system performance. In addition, three types of bottlenecks are defined and their corresponding indicators are derived to provide guidelines on improving system performance. These methods provide quantitative tools for modeling, analyzing, and improving manufacturing systems with the coupling between machine condition degradation and productivity. Journal: IISE Transactions Pages: 501-516 Issue: 5 Volume: 51 Year: 2019 Month: 5 X-DOI: 10.1080/24725854.2018.1494867 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1494867 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:5:p:501-516 Template-Type: ReDIF-Article 1.0 Author-Name: Guilin Li Author-X-Name-First: Guilin Author-X-Name-Last: Li Author-Name: Matthias Hwai-yong Tan Author-X-Name-First: Matthias Author-X-Name-Last: Hwai-yong Tan Author-Name: Szu Hui Ng Author-X-Name-First: Szu Author-X-Name-Last: Hui Ng Title: Metamodel-based optimization of stochastic computer models for engineering design under uncertain objective function Abstract: Stochastic computer models are widely used to help the design engineer to understand and optimize analytically intractable systems. A frequently encountered, but often ignored, problem is that the objective function representing system performance may contain some uncertain parameters. Due to the lack of computationally efficient tools, rational procedures for dealing with the problem such as finding multiple Pareto-optimal solutions or conducting sensitivity analysis on the uncertain parameters require the stochastic computer model to be optimized many times, which would incur extensive computational burden. In this work, we provide a computationally efficient metamodel-based solution to capture this uncertainty. This solution first constructs a Cartesian product design over the space of both design variables and uncertain parameters. Thereafter, a radial basis function metamodel is used to provide a smooth prediction surface of the objective value over the space of both design variables and uncertain parameters.
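A minimal version of such a radial basis function metamodel, fit on a Cartesian product design over one design variable and one uncertain parameter, can be sketched as follows. The Gaussian kernel, its width eps, and the stand-in simulator output are illustrative choices, not the article's fitting algorithm:

    import numpy as np

    def fit_rbf(X, y, eps=2.0):
        # Interpolating Gaussian RBF weights over the joint space of
        # design variables and uncertain parameters.
        r2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
        return np.linalg.solve(np.exp(-eps * r2), y)

    def predict_rbf(X, w, Xq, eps=2.0):
        r2 = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-eps * r2) @ w

    # Cartesian product design over design variable d and uncertain
    # parameter theta; the "simulator" below is a stand-in function.
    d = np.linspace(0.0, 1.0, 6)
    theta = np.linspace(0.5, 1.5, 5)
    X = np.array([(a, b) for a in d for b in theta])
    y = np.sin(3.0 * X[:, 0]) * X[:, 1]
    w = fit_rbf(X, y)
    print(predict_rbf(X, w, np.array([[0.35, 1.1]])))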
Based on the Cartesian product design structure, a fast algorithm is also derived for fitting the metamodel. To illustrate the effectiveness of the developed tools in solving practical problems, they are applied to seek a robust optimal solution to a drug delivery system with uncertain desirability function parameters based on a criterion that we propose. Journal: IISE Transactions Pages: 517-530 Issue: 5 Volume: 51 Year: 2019 Month: 5 X-DOI: 10.1080/24725854.2018.1504355 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1504355 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:5:p:517-530 Template-Type: ReDIF-Article 1.0 Author-Name: Di Wang Author-X-Name-First: Di Author-X-Name-Last: Wang Author-Name: Kaibo Liu Author-X-Name-First: Kaibo Author-X-Name-Last: Liu Author-Name: Xi Zhang Author-X-Name-First: Xi Author-X-Name-Last: Zhang Title: Modeling of a three-dimensional dynamic thermal field under grid-based sensor networks in grain storage Abstract: Thermal management is a major task in granaries, due to the essential role of temperature in grain storage. The accurate acquisition and updating of thermal field information generates a meaningful index for grain quality surveillance and storage maintenance actions. However, given the unknown mechanisms of local uncertainties, including local grain degradation and fungal infections that may significantly vary the thermal field in granaries, the appropriate modeling of field dynamics remains a challenging task. To address this issue, this article combines a three-dimensional (3D) nonlinear dynamics model with a stochastic spatiotemporal model to capture a 3D dynamic thermal map. To best harness the temperature data from the grid-based sensor network, we integrate the Kriging model into the Gaussian Markov random field model by introducing an anisotropic covariance function. Both simulation and real case studies are conducted to validate our proposed approach, and the results show that our approach outperforms other alternative methods for field estimation. Journal: IISE Transactions Pages: 531-546 Issue: 5 Volume: 51 Year: 2019 Month: 5 X-DOI: 10.1080/24725854.2018.1504356 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1504356 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:5:p:531-546 Template-Type: ReDIF-Article 1.0 Author-Name: Majid Forghani-elahabad Author-X-Name-First: Majid Author-X-Name-Last: Forghani-elahabad Author-Name: Nelson Kagan Author-X-Name-First: Nelson Author-X-Name-Last: Kagan Title: Reliability evaluation of a stochastic-flow network in terms of minimal paths with budget constraint Abstract: In a stochastic-flow network with budget constraint, the network reliability for level (d, b), i.e., R(d,b), where d is a given demand value and b is a budget limit, is the probability of transmitting at least d units of flow from a source node to a sink node within the budget of b. The problem of evaluating R(d,b) in terms of Minimal Paths (MPs), which is called the (d, b)-MP problem, has been of considerable interest in recent decades. Here, presenting some new results, an improved algorithm is proposed for this problem. Some numerical comparisons between our MATLAB implementation of the algorithm proposed in this article and a recently proposed one are made.
In this way, comparative computational results on several benchmarks and on thousands of random test problems are provided using the performance profiles introduced by Dolan and Moré. Moreover, complexity results are provided. The complexity and numerical results show the efficiency of our algorithm in comparison with the others. Furthermore, we state how to use the output of the algorithm in order to assess the system reliability. Finally, based on the main proposed algorithm, a simple algorithm is presented for evaluating the reliability of some smart grid communication networks. Journal: IISE Transactions Pages: 547-558 Issue: 5 Volume: 51 Year: 2019 Month: 5 X-DOI: 10.1080/24725854.2018.1504358 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1504358 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:5:p:547-558 Template-Type: ReDIF-Article 1.0 Author-Name: Adel Alaeddini Author-X-Name-First: Adel Author-X-Name-Last: Alaeddini Author-Name: Edward Craft Author-X-Name-First: Edward Author-X-Name-Last: Craft Author-Name: Rajitha Meka Author-X-Name-First: Rajitha Author-X-Name-Last: Meka Author-Name: Stanford Martinez Author-X-Name-First: Stanford Author-X-Name-Last: Martinez Title: Sequential Laplacian regularized V-optimal design of experiments for response surface modeling of expensive tests: An application in wind tunnel testing Abstract: In an increasing number of cases involving estimation of a response surface, one is often confronted with situations where there are several factors to be evaluated, but experiments are prohibitively expensive. In such scenarios, learning algorithms can actively query the user or other resources to determine the most informative settings to be tested. In this article, we propose an active learning methodology based on the fundamental idea of adding a ridge and a Laplacian penalty to the V-optimal design to shrink the weight of less significant factors, while looking for the most informative settings to be tested. To leverage the intrinsic geometry of the factor settings in highly nonlinear spaces, we generalize the proposed methodology to local regression. We also propose a simple sequential design strategy for efficient determination of subsequent experiments based on the information from previous experiments. The proposed methodology is particularly suited for problems involving expensive experiments with a high standard deviation of the error. We apply the proposed methodology to simulated wind tunnel testing and compare the results with existing practice. We also evaluate the estimation accuracy of the proposed methodology using the paper helicopter case study. Finally, through extensive simulated experiments, we demonstrate the performance of the proposed methodology against classic response surface methods in the literature. Journal: IISE Transactions Pages: 559-576 Issue: 5 Volume: 51 Year: 2019 Month: 5 X-DOI: 10.1080/24725854.2018.1508928 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1508928 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
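For the binary-state special case, network reliability in terms of minimal paths has a classical inclusion-exclusion form, which helps situate the (d, b)-MP setting above (that setting further handles multi-state arc capacities and a budget limit). The bridge-network instance and common arc reliability below are illustrative:

    from itertools import combinations

    def reliability_from_minimal_paths(paths, p):
        # P(at least one minimal path has all arcs working), computed by
        # inclusion-exclusion over the minimal path sets.
        total = 0.0
        for r in range(1, len(paths) + 1):
            for combo in combinations(paths, r):
                term = 1.0
                for arc in set().union(*combo):
                    term *= p[arc]
                total += (-1) ** (r + 1) * term
        return total

    paths = [{"e1", "e4"}, {"e2", "e5"},
             {"e1", "e3", "e5"}, {"e2", "e3", "e4"}]   # bridge network
    p = {f"e{i}": 0.9 for i in range(1, 6)}
    print(reliability_from_minimal_paths(paths, p))    # about 0.978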
Handle: RePEc:taf:uiiexx:v:51:y:2019:i:5:p:559-576 Template-Type: ReDIF-Article 1.0 Author-Name: William Millhiser Author-X-Name-First: William Author-X-Name-Last: Millhiser Author-Name: Apostolos Burnetas Author-X-Name-First: Apostolos Author-X-Name-Last: Burnetas Title: Optimal admission control in series production systems with blocking Abstract: This article studies the dynamic control of arrivals of multiple job classes in N-stage production systems with finite buffers and blocking after service. A model with multiple processing stages in series is formulated as a Markov decision process and a state definition from the queueing analysis literature is used to simplify the state-space description. This allows several fundamental admission control results from M/M/N and M/M/N/N queueing models as well as tandem models without blocking to be extended to tandem systems with blocking. Specifically, it is shown that the net benefit of admitting a job declines monotonically with the system congestion; thus the decision to admit any job class is based on threshold values of the number of jobs present in the system. Furthermore, conditions under which a job class is always or never admitted, regardless of the state, are derived. The interaction of blocking and admission control is explored by analyzing the effect of blocking on the optimal admission policy and profit. The article concludes with analyses of why extensions that include loss and abandonment cannot sustain the monotonicity properties, and with two surrogate admission rules that may be used in practice but do not account for the blocking effect. Journal: IIE Transactions Pages: 1035-1047 Issue: 10 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.706732 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.706732 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:10:p:1035-1047 Template-Type: ReDIF-Article 1.0 Author-Name: Asli Buğdaci Author-X-Name-First: Asli Author-X-Name-Last: Buğdaci Author-Name: Murat Köksalan Author-X-Name-First: Murat Author-X-Name-Last: Köksalan Author-Name: Selin Özpeynirci Author-X-Name-First: Selin Author-X-Name-Last: Özpeynirci Author-Name: Yasemin Serin Author-X-Name-First: Yasemin Author-X-Name-Last: Serin Title: An interactive probabilistic approach to multi-criteria sorting Abstract: This article addresses the problem of sorting alternatives evaluated by multiple criteria among preference-ordered classes. An interactive probabilistic sorting approach is developed in which the probability of an alternative being in each class is calculated and alternatives are assigned to classes keeping the probability of incorrect assignments below a specified small threshold value. The decision maker is occasionally required to assign alternatives to classes. The probabilities for unassigned alternatives are updated in light of the new information and the procedure is repeated until all alternatives are classified. This is the first sorting approach reported in the literature to use an explicit probability of classifying alternatives that is consistent with the underlying preference structure of the decision maker. The proposed approach is demonstrated in a problem concerning the sorting of MBA programs.
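The threshold structure established in the admission-control abstract above can be observed in a toy single-queue special case. The sketch below runs relative value iteration for average-reward admission control of an M/M/1 queue; all parameters are illustrative, and the article's N-stage model with blocking is far richer, but the same threshold shape emerges:

    # Reward R per accepted job, holding cost h per job per unit time.
    lam, mu, R, h, N = 0.8, 1.0, 5.0, 1.0, 30
    gamma = lam + mu                 # uniformization constant
    V = [0.0] * (N + 1)
    for _ in range(10_000):
        Vn = []
        for n in range(N + 1):
            serve = mu * V[max(n - 1, 0)]
            admit = R + V[n + 1] if n < N else V[n]
            arrive = lam * max(V[n], admit)
            Vn.append((-h * n + serve + arrive) / gamma)
        V = [v - Vn[0] for v in Vn]  # relative value iteration
    threshold = next((n for n in range(N) if V[n] > R + V[n + 1]), N)
    print("admit while the number in system is below", threshold)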
Journal: IIE Transactions Pages: 1048-1058 Issue: 10 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.721945 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.721945 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:10:p:1048-1058 Template-Type: ReDIF-Article 1.0 Author-Name: Julian Gallego Arrubla Author-X-Name-First: Julian Author-X-Name-Last: Gallego Arrubla Author-Name: Young Ko Author-X-Name-First: Young Author-X-Name-Last: Ko Author-Name: Ronny Polansky Author-X-Name-First: Ronny Author-X-Name-Last: Polansky Author-Name: Eduardo Pérez Author-X-Name-First: Eduardo Author-X-Name-Last: Pérez Author-Name: Lewis Ntaimo Author-X-Name-First: Lewis Author-X-Name-Last: Ntaimo Author-Name: Natarajan Gautam Author-X-Name-First: Natarajan Author-X-Name-Last: Gautam Title: Integrating virtualization, speed scaling, and powering on/off servers in data centers for energy efficiency Abstract: Data centers consume a phenomenal amount of energy, which can be significantly reduced by appropriately allocating resources using technologies such as virtualization, speed scaling, and powering off servers. This article proposes a unified methodology that combines these technologies under a single framework to efficiently operate data centers. In particular, a large-scale Mixed Integer Program (MIP) is formulated that prescribes optimal allocation of resources while incorporating inherent variability and uncertainty of workload experienced by the data center. However, it is possible to solve the MIP using commercial optimization software packages in a reasonable time only for small to medium-sized clients. Thus, for large-sized clients, an effective and fast heuristic method is developed. An extensive set of numerical experiments is performed to illustrate the methodology, obtain insights on the allocation policies, evaluate the quality of the proposed heuristic, and test the validity of the assumptions made in the literature. The results show that gains of up to 40% can be obtained by using the integrated approach rather than the traditional approach where virtualization, dynamic voltage/frequency scaling, and powering off servers are done separately. Journal: IIE Transactions Pages: 1114-1136 Issue: 10 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.762484 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.762484 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:10:p:1114-1136 Template-Type: ReDIF-Article 1.0 Author-Name: Prattana Punnakitikashem Author-X-Name-First: Prattana Author-X-Name-Last: Punnakitikashem Author-Name: Jay Rosenberber Author-X-Name-First: Jay Author-X-Name-Last: Rosenberber Author-Name: Deborah Buckley-Behan Author-X-Name-First: Deborah Author-X-Name-Last: Buckley-Behan Title: A stochastic programming approach for integrated nurse staffing and assignment Abstract: The shortage of nurses has attracted considerable attention due to its direct impact on the quality of patient care. High workloads and undesirable schedules are two major reasons for nurses to report job dissatisfaction. The focus of this article is to find non-dominated solutions to an integrated nurse staffing and assignment problem that minimizes excess workload on nurses and staffing cost. A stochastic integer programming model with an objective to minimize excess workload subject to a hard budget constraint is presented.
Three solution approaches are applied: Benders’ decomposition, Lagrangian relaxation with Benders’ decomposition, and a heuristic based on nested Benders’ decomposition. The maximum allowable staffing cost in the budget constraint is varied in the Benders’ decomposition and nested Benders’ decomposition approaches, whereas the budget constraint is relaxed and the staffing cost is penalized in the Lagrangian relaxation with Benders’ decomposition approach. Non-dominated bicriteria solutions are collected from the algorithms. The effectiveness of the model and algorithms is demonstrated in a computational study based on data from two medical-surgical units at a Northeast Texas hospital. A floating-nurse policy is also evaluated. Finally, areas of future research are discussed. Journal: IIE Transactions Pages: 1059-1076 Issue: 10 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.763002 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.763002 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:10:p:1059-1076 Template-Type: ReDIF-Article 1.0 Author-Name: Tal Raviv Author-X-Name-First: Tal Author-X-Name-Last: Raviv Author-Name: Ofer Kolka Author-X-Name-First: Ofer Author-X-Name-Last: Kolka Title: Optimal inventory management of a bike-sharing station Abstract: Bike-sharing systems allow people to rent a bicycle at one of many automatic rental stations scattered around a city, use it for a short journey, and return it at any other station in that city. A crucial factor in the success of such a system is its ability to meet the fluctuating demand for both bicycles and vacant lockers at each station. In order to meet the demand, the inventory of each station must be reviewed regularly. This article introduces an inventory model suited for the management of bike rental stations and a numerical solution method used to solve it. Moreover, a structural result about the convexity of the model is proved. The method may be applicable to other closed-loop inventory systems. An extensive numerical study based on real-life data is presented to demonstrate its effectiveness and efficiency. Journal: IIE Transactions Pages: 1077-1093 Issue: 10 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2013.770186 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.770186 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:10:p:1077-1093 Template-Type: ReDIF-Article 1.0 Author-Name: Behnam Behdani Author-X-Name-First: Behnam Author-X-Name-Last: Behdani Author-Name: J. Smith Author-X-Name-First: J. Author-X-Name-Last: Smith Author-Name: Ye Xia Author-X-Name-First: Ye Author-X-Name-Last: Xia Title: The lifetime maximization problem in wireless sensor networks with a mobile sink: mixed-integer programming formulations and algorithms Abstract: This article considers the problem of maximizing the lifetime of a wireless sensor network with a mobile sink. The sink travels at finite speed among a subset of possible sink locations to collect data from a stationary set of sensor nodes. The considered problem chooses a subset of sink locations for the sink to visit, finds a tour for the sink among the selected sink locations, and prescribes an optimal data routing scheme from the sensor nodes to each location visited by the sink. The sink’s tour is constrained by the time it spends at each location collecting data.
Two variations of this problem are examined based on assumptions regarding delay tolerance. Exact mixed-integer programming formulations to model this problem are provided, along with cutting planes, preprocessing techniques, and a Benders decomposition algorithm to improve their solvability. Computational results demonstrate the effectiveness of the proposed methods. Journal: IIE Transactions Pages: 1094-1113 Issue: 10 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2013.770189 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.770189 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:10:p:1094-1113 Template-Type: ReDIF-Article 1.0 Author-Name: Pengfei Zhang Author-X-Name-First: Pengfei Author-X-Name-Last: Zhang Author-Name: Jonathan F. Bard Author-X-Name-First: Jonathan F. Author-X-Name-Last: Bard Author-Name: Douglas J. Morrice Author-X-Name-First: Douglas J. Author-X-Name-Last: Morrice Author-Name: Karl M. Koenig Author-X-Name-First: Karl M. Author-X-Name-Last: Koenig Title: Extended open shop scheduling with resource constraints: Appointment scheduling for integrated practice units Abstract: An Integrated Practice Unit (IPU) is a new approach to outpatient care in which a co-located multidisciplinary team of clinicians, technicians, and staff provides treatment in a single patient visit. This article presents a new integer programming model for an extended open shop problem with application to clinic appointment scheduling for IPUs. The advantages of the new model are discussed and several valid inequalities are introduced to tighten the linear programming relaxation. The objective of the problem is to minimize a combination of makespan and total job processing time, or in terms of an IPU, to minimize a combination of closing time and total patient waiting time. Feasible solutions are obtained with a two-step heuristic, which also provides a lower bound that is used to judge solution quality. Next, a two-stage stochastic optimization model is presented for a joint pain IPU. The expected value solution is used to generate two different patient arrival templates. Extensive computations are performed to evaluate the solutions obtained with these templates and several others found in the literature. Comparisons with the expected value solution and the wait-and-see solution are also included. For the templates derived from the expected value solution, the results show that the average gap between the feasible solution and the lower bound provided by the two-step heuristic is 2% for 14 patients. They also show that either of the two templates derived from the expected value solution is a good candidate for assigning appointment times when either the clinic closing time or the patient waiting time is the more important consideration. Sensitivity analysis confirms that the optimality gap and clinic statistics are stable for marginal changes in key resources. Journal: IISE Transactions Pages: 1037-1060 Issue: 10 Volume: 51 Year: 2019 Month: 10 X-DOI: 10.1080/24725854.2018.1542544 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1542544 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:10:p:1037-1060 Template-Type: ReDIF-Article 1.0 Author-Name: Banu Kabakulak Author-X-Name-First: Banu Author-X-Name-Last: Kabakulak Author-Name: Z. Caner Taşkın Author-X-Name-First: Z.
Caner Author-X-Name-Last: Taşkın Author-Name: Ali Emre Pusane Author-X-Name-First: Ali Author-X-Name-Last: Emre Pusane Title: Optimization-based decoding algorithms for LDPC convolutional codes in communication systems Abstract: In a digital communication system, information is sent from one place to another over a noisy communication channel. It may be possible to detect and correct errors that occur during the transmission if one encodes the original information by adding redundant bits. Low-Density Parity-Check (LDPC) convolutional codes, a member of the LDPC code family, encode the original information to improve error correction capability. In practice these codes are used to decode very long information sequences, where the information arrives in subsequent packets over time, such as video streams. We consider the problem of decoding the received information with minimum error from an optimization point of view and investigate integer programming-based exact and heuristic decoding algorithms for its solution. In particular, we consider relax-and-fix heuristics that decode information in small windows. Computational results indicate that our approaches identify near-optimal solutions significantly faster than a commercial solver at high channel error rates. Our proposed algorithms can find higher-quality solutions than the state-of-the-art iterative decoding heuristic. Journal: IISE Transactions Pages: 1061-1074 Issue: 10 Volume: 51 Year: 2019 Month: 10 X-DOI: 10.1080/24725854.2018.1550692 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1550692 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:10:p:1061-1074 Template-Type: ReDIF-Article 1.0 Author-Name: Zheng Zhang Author-X-Name-First: Zheng Author-X-Name-Last: Zhang Author-Name: Bjorn P. Berg Author-X-Name-First: Bjorn P. Author-X-Name-Last: Berg Author-Name: Brian T. Denton Author-X-Name-First: Brian T. Author-X-Name-Last: Denton Author-Name: Xiaolan Xie Author-X-Name-First: Xiaolan Author-X-Name-Last: Xie Title: Appointment scheduling and the effects of customer congestion on service Abstract: This article addresses an appointment scheduling problem in which the server responds to congestion of the service system. Using waiting time as a proxy for how far behind schedule the server is running, we characterize the congestion-induced behavior of the server as a function of a customer’s waiting time. Decision variables are the scheduled arrival times for a specific sequence of customers. The objective of our model is to minimize a weighted cost incurred for a customer’s waiting time, server overtime, and server speedup in response to congestion. We provide alternative formulations of this problem as a Simulation Optimization (SO) model and a Stochastic Integer Programming (SIP) model, respectively. We show that the SIP model can solve moderate-sized instances exactly under certain assumptions about a server's response to congestion. We further show that the SO model achieves near-optimal solutions for moderate-sized problems while also being able to scale up to much larger problem instances. We present theoretical results for both models and we carry out a series of experiments to illustrate the characteristics of the optimal schedules and to measure the importance of accounting for a server's response to congestion when scheduling appointments, using a case study for an outpatient clinic at a large medical center.
Finally, we summarize the most important managerial insights obtained from this study. Journal: IISE Transactions Pages: 1075-1090 Issue: 10 Volume: 51 Year: 2019 Month: 10 X-DOI: 10.1080/24725854.2018.1562590 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1562590 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:10:p:1075-1090 Template-Type: ReDIF-Article 1.0 Author-Name: Aaron M. Lessin Author-X-Name-First: Aaron M. Author-X-Name-Last: Lessin Author-Name: Brian J. Lunday Author-X-Name-First: Brian J. Author-X-Name-Last: Lunday Author-Name: Raymond R. Hill Author-X-Name-First: Raymond R. Author-X-Name-Last: Hill Title: A multi-objective, bilevel sensor relocation problem for border security Abstract: Consider a set of sensors with varying capabilities that are respectively located to maximize an intruder’s minimal expected exposure when traversing a defended border region. Given two subsets of the sensors that have been respectively incapacitated or degraded, we formulate a multi-objective, bilevel optimization model to relocate surviving sensors to maximize an intruder’s minimal expected exposure to traverse a defended border region, minimize the maximum sensor relocation time, and minimize the total number of sensors requiring relocation. Our formulation also allows the defender to specify minimum preferential coverage requirements for high-value asset locations and emplaced sensors. Adopting the ε-constraint method for multi-objective optimization, we subsequently develop a single-level reformulation that enables the identification of non-inferior solutions on the Pareto frontier and, consequently, identifies trade-offs between the competing objectives. We demonstrate the aforementioned model and solution procedure for a scenario in which a defender is relocating surviving air defense assets to inhibit intrusion by a fixed-wing aircraft. Journal: IISE Transactions Pages: 1091-1109 Issue: 10 Volume: 51 Year: 2019 Month: 10 X-DOI: 10.1080/24725854.2019.1576952 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1576952 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:10:p:1091-1109 Template-Type: ReDIF-Article 1.0 Author-Name: Wei Xing Author-X-Name-First: Wei Author-X-Name-Last: Xing Author-Name: Qi Zhang Author-X-Name-First: Qi Author-X-Name-Last: Zhang Author-Name: Xuan Zhao Author-X-Name-First: Xuan Author-X-Name-Last: Zhao Author-Name: Liming Liu Author-X-Name-First: Liming Author-X-Name-Last: Liu Title: Effects of downstream entry in a supply chain with a spot market Abstract: This article contributes to research at the interface of marketing, operations, and risk management by investigating the effects of downstream entry in a two-echelon supply chain with risk-averse supply chain members and a liquid spot market for trading intermediate goods. We find that at downstream entry, the upstream supplier may decrease the contract price of the intermediate goods to balance its risk-free utility from the contract channel and its risky utility from the spot market; correspondingly, downstream manufacturers may increase their contract quantities. As a result, downstream entry may hurt the supplier and benefit the incumbent manufacturers.
We also identify situations where downstream entry leads to a “win-win-win” outcome, in which the supplier, incumbent manufacturers, and final-product consumers all benefit. Furthermore, when spot price volatility is high, the supplier may engage in speculative trading, but downstream entry discourages this speculative behavior. We conclude that these results are driven by the combined effects of spot trading and the risk attitudes of supply chain members. Specifically, the effects of downstream entry are dramatically affected by the risk attitude of the manufacturers, but only slightly affected by that of the supplier. Finally, we find that the main results hold when the supplier’s capacity level is endogenous. Journal: IISE Transactions Pages: 1110-1127 Issue: 10 Volume: 51 Year: 2019 Month: 10 X-DOI: 10.1080/24725854.2018.1550825 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1550825 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:10:p:1110-1127 Template-Type: ReDIF-Article 1.0 Author-Name: Xiangyong Li Author-X-Name-First: Xiangyong Author-X-Name-Last: Li Author-Name: Jieqi Li Author-X-Name-First: Jieqi Author-X-Name-Last: Li Author-Name: Y.P. Aneja Author-X-Name-First: Y.P. Author-X-Name-Last: Aneja Author-Name: Zhaoxia Guo Author-X-Name-First: Zhaoxia Author-X-Name-Last: Guo Author-Name: Peng Tian Author-X-Name-First: Peng Author-X-Name-Last: Tian Title: Integrated order allocation and order routing problem for e-order fulfillment Abstract: In this article, we study the order fulfillment problem, which integrates the order allocation and order routing decisions of an online retailer. Our problem is to find the best way to fulfill each customer’s order to minimize the transportation cost. We first present a mixed-integer programming formulation to help online retailers optimally fulfill customers’ orders. We then introduce an adaptive large neighborhood search-based approach for this problem. With extensive computational experiments, we demonstrate the effectiveness of the proposed approach by benchmarking its performance against a leading commercial solver and a greedy heuristic. Our approach can produce high-quality solutions in short computing times. We also show experimentally that product overlap among different fulfillment centers affects the operating expenses of e-tailers. Journal: IISE Transactions Pages: 1128-1150 Issue: 10 Volume: 51 Year: 2019 Month: 10 X-DOI: 10.1080/24725854.2018.1552820 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1552820 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:10:p:1128-1150 Template-Type: ReDIF-Article 1.0 Author-Name: Shimon Bitton Author-X-Name-First: Shimon Author-X-Name-Last: Bitton Author-Name: Izack Cohen Author-X-Name-First: Izack Author-X-Name-Last: Cohen Author-Name: Morris Cohen Author-X-Name-First: Morris Author-X-Name-Last: Cohen Title: Joint repair sourcing and stocking policies for repairables using Erlang-A and Erlang-B queueing models Abstract: This research focuses on minimizing the life cycle cost of a fleet of aircraft. We consider two categories of repairable parts; upon failure of a first-category part (No-Go part), its aircraft becomes non-operational, but when a second-category part (Go part) fails, the aircraft can still operate for a predetermined period of time before it becomes non-operational.
In either case, to minimize aircraft downtime, the failed part has to be replaced with one from the good part inventory, returned from the repair facility, or obtained through exchange from a supplier, an emergency sourcing mechanism that is common in the airline industry. Motivated by the observation that a modern aircraft contains a significant fraction of Go parts (estimated at 50% of all repairable parts), we develop a strategic model to decide on stocking and sourcing policies using Erlang-A and Erlang-B queueing models. The suggested model provides an alternative to existing models that typically consider only failed parts that immediately cause a system to be non-operational and do not consider an emergency sourcing mechanism. A realistic implementation of the model for a fleet of Boeing 737 aircraft, based on a list of 2,805 part types, demonstrates that significant cost savings may be achieved by explicitly modeling Go parts. Journal: IISE Transactions Pages: 1151-1166 Issue: 10 Volume: 51 Year: 2019 Month: 10 X-DOI: 10.1080/24725854.2018.1560752 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1560752 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:10:p:1151-1166 Template-Type: ReDIF-Article 1.0 Author-Name: Qi Fu Author-X-Name-First: Qi Author-X-Name-Last: Fu Author-Name: Chung-Yee Lee Author-X-Name-First: Chung-Yee Author-X-Name-Last: Lee Author-Name: Chung-Piaw Teo Author-X-Name-First: Chung-Piaw Author-X-Name-Last: Teo Title: Procurement management using option contracts: random spot price and the portfolio effect Abstract: This article considers the value of portfolio procurement in a supply chain, where a buyer can procure parts for future demand from sellers using either fixed-price contracts or option contracts, or can tap into the market for spot purchases. A single-period portfolio procurement problem in which both the product demand and the spot price are random (and possibly correlated) is examined and the optimal portfolio procurement strategy for the buyer is constructed. A shortest-monotone path algorithm is provided for the general problem to obtain the optimal procurement solution and the resulting expected minimum procurement cost. In the event that demand and spot price are independent, the solution algorithm simplifies considerably. More interestingly, the optimal procurement cost function in this case has an intuitive geometrical interpretation that facilitates managerial insights. The portfolio effect, i.e., the benefit of portfolio procurement over single-contract procurement, is also studied. Finally, an extension to a two-period problem to examine the impact of inventory on the portfolio procurement strategy is discussed. Journal: IIE Transactions Pages: 793-811 Issue: 11 Volume: 42 Year: 2010 X-DOI: 10.1080/07408171003670983 File-URL: http://hdl.handle.net/10.1080/07408171003670983 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:42:y:2010:i:11:p:793-811 Template-Type: ReDIF-Article 1.0 Author-Name: Jenny Chen Author-X-Name-First: Jenny Author-X-Name-Last: Chen Author-Name: Michele Pfund Author-X-Name-First: Michele Author-X-Name-Last: Pfund Author-Name: John Fowler Author-X-Name-First: John Author-X-Name-Last: Fowler Author-Name: Douglas Montgomery Author-X-Name-First: Douglas Author-X-Name-Last: Montgomery Author-Name: Thomas Callarman Author-X-Name-First: Thomas Author-X-Name-Last: Callarman Title: Robust scaling parameters for composite dispatching rules Abstract: The successful implementation of composite dispatching rules depends on the values of their scaling parameters. A unified four-phase method to determine robust scaling parameters for composite dispatching rules is proposed, with the goal of achieving reasonably good scheduling performance with the least computational effort in implementation. In phase 1, factor ranges that characterize the problem instances in each tool group (one or more machines operating in parallel) are calculated. In phase 2, a face-centered cube design is used to decide the placement of design points in the factor region. The third phase involves using mixture experiments to find good scaling parameter values at each design point. In the last phase, the central point of the area in which all of the good scaling parameters lie is identified as the robust scaling parameter. The proposed method is applied to determine the robust scaling parameter for the Apparent Tardiness Cost with Setups (ATCS) rule to solve the $Pm|s_{jk}|\sum w_j T_j$ scheduling problem in a case study. The results of this case study show that the proposed method is more efficient and effective than existing methods in the literature. It requires many fewer experiments and achieves more than a 30% improvement in the average scheduling performance (i.e., total weighted tardiness) and more than a 60% improvement in the standard deviation of the scheduling performance. Journal: IIE Transactions Pages: 842-853 Issue: 11 Volume: 42 Year: 2010 X-DOI: 10.1080/07408171003685825 File-URL: http://hdl.handle.net/10.1080/07408171003685825 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:11:p:842-853 Template-Type: ReDIF-Article 1.0 Author-Name: Yanyi Xu Author-X-Name-First: Yanyi Author-X-Name-Last: Xu Author-Name: Arnab Bisi Author-X-Name-First: Arnab Author-X-Name-Last: Bisi Author-Name: Maqbool Dada Author-X-Name-First: Maqbool Author-X-Name-Last: Dada Title: A centralized ordering and allocation system with backorders and strategic lost sales Abstract: This article considers a multi-retailer distribution system that is managed by a central decision maker who, at the start of each period, determines how much to order to replenish the system stock. The decision maker also determines how to allocate incoming pipeline inventory to maintain inventory balance among the retailers. It has been noted in the literature that balancing inventories can equalize service levels among retailers. To improve the efficacy of the allocation, this article allows some demand to be rejected to keep inventories in balance. Consequently, depending on the realized pattern of demand during the delivery lead time, the inventory is dynamically allocated to each of the retailers. For the model with two retailers, an exact representation of the infinite-horizon long-run average cost function is developed.
This exact expression is used to develop conditions for the unique solution for the two-retailer case. The presented analysis holds for a wide class of continuous and discrete demand distributions. Journal: IIE Transactions Pages: 812-824 Issue: 11 Volume: 42 Year: 2010 X-DOI: 10.1080/0740817X.2010.491499 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.491499 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:11:p:812-824 Template-Type: ReDIF-Article 1.0 Author-Name: M. Akyüz Author-X-Name-First: M. Author-X-Name-Last: Akyüz Author-Name: Temel Öncan Author-X-Name-First: Temel Author-X-Name-Last: Öncan Author-Name: İ. Altinel Author-X-Name-First: İ. Author-X-Name-Last: Altinel Title: The multi-commodity capacitated multi-facility Weber problem: heuristics and confidence intervals Abstract: The Capacitated Multi-facility Weber Problem (CMWP) is concerned with locating I capacitated facilities so as to satisfy the demand of J customers with the minimum total transportation cost of a single commodity. This is a non-convex optimization problem and is difficult to solve. This work focuses on a multi-commodity extension and considers the situation where K distinct commodities are shipped to the customers subject to capacity and demand constraints. Customer locations, demands, and capacities for each commodity are known a priori. The transportation costs, which are proportional to the distance between customers and facilities, depend on the commodity type. A mathematical programming formulation of the problem is presented, and two alternate location-allocation heuristics and a discrete approximation method are proposed and subsequently used to statistically estimate confidence intervals on the optimal objective function values. Computational experiments on standard and randomly generated test instances are also presented. Journal: IIE Transactions Pages: 825-841 Issue: 11 Volume: 42 Year: 2010 X-DOI: 10.1080/0740817X.2010.491504 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.491504 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:11:p:825-841 Template-Type: ReDIF-Article 1.0 Author-Name: Gopalakrishnan Easwaran Author-X-Name-First: Gopalakrishnan Author-X-Name-Last: Easwaran Author-Name: Halit Üster Author-X-Name-First: Halit Author-X-Name-Last: Üster Title: A closed-loop supply chain network design problem with integrated forward and reverse channel decisions Abstract: This article considers a multi-product closed-loop logistics network design problem with hybrid manufacturing/remanufacturing facilities and finite-capacity hybrid distribution/collection centers to serve a set of retail locations. First, a mixed integer linear program is presented that determines the optimal solution that characterizes facility locations, along with the integrated forward and reverse flows, such that the total cost of facility location, processing, and transportation associated with forward and reverse flows in the network is minimized. Second, a solution method based on Benders' decomposition with strengthened Benders' cuts for improved computational efficiency is devised. In addition to this method, an alternative formulation is presented and a new dual solution method for the associated Benders' decomposition is developed to obtain a different set of strengthened Benders' cuts.
In the Benders' decomposition framework, the strengthened cuts obtained from the original and alternative formulations are used simultaneously to improve efficiency. Computational results illustrating the performance of the solution algorithms in terms of both solution quality and time are presented. It is inferred that the simultaneous use of the strengthened cuts obtained using different formulations facilitates tighter bounds and improves the computational efficiency of Benders' algorithm. Journal: IIE Transactions Pages: 779-792 Issue: 11 Volume: 42 Year: 2010 X-DOI: 10.1080/0740817X.2010.504689 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.504689 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:11:p:779-792 Template-Type: ReDIF-Article 1.0 Author-Name: Peihua Qiu Author-X-Name-First: Peihua Author-X-Name-Last: Qiu Author-Name: Zhen He Author-X-Name-First: Zhen Author-X-Name-Last: He Author-Name: Zhiqiong Wang Author-X-Name-First: Zhiqiong Author-X-Name-Last: Wang Title: Nonparametric monitoring of multiple count data Abstract: Process monitoring of multiple count data has recently received considerable attention in the statistical process control literature. Most existing methods on this topic are based on parametric modeling of the observed process data. However, the assumed parametric models are often invalid in practice, leading to unreliable performance of the related control charts. In this article, we first show the consequence of using a parametric control chart in cases where the underlying parametric distribution is invalid. Then, we thoroughly investigate the performance of some parametric and nonparametric control charts in monitoring multiple count data. Our numerical results show that nonparametric methods can provide more reliable and effective process monitoring in such cases. A real-data example about the crime log of the University of Florida Police Department is used to illustrate the implementation of the related control charts. Journal: IISE Transactions Pages: 972-984 Issue: 9 Volume: 51 Year: 2019 Month: 9 X-DOI: 10.1080/24725854.2018.1530486 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1530486 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:9:p:972-984 Template-Type: ReDIF-Article 1.0 Author-Name: Miaomiao Yu Author-X-Name-First: Miaomiao Author-X-Name-Last: Yu Author-Name: Chunjie Wu Author-X-Name-First: Chunjie Author-X-Name-Last: Wu Author-Name: Fugee Tsung Author-X-Name-First: Fugee Author-X-Name-Last: Tsung Title: Monitoring the data quality of data streams using a two-step control scheme Abstract: Data-rich environments provide unprecedented opportunities for monitoring data quality. This article focuses on the quality of data streams. We use indicator variables to measure the six dimensions of data quality and a glitch index to indicate the level of poor quality. A two-step control scheme is proposed that considers two relationships: inter-correlation and intra-correlation. In the first step, the Mahalanobis distance is applied to a χ2-type control chart to monitor the quality of a data stream. In the second step, a Shewhart control chart is built based on a weighted-sum statistic, which measures the quality of the whole process. The feasibility and effectiveness of the control scheme are illustrated through detailed simulation studies and a landslide example.
The simulation results, which consider the three cases of no correlation, low correlation, and high correlation, show that the proposed approach can detect mean shifts in multi-attribute data sensitively and robustly. The example, in which sensors are used to collect data on accelerations in Taiwan, demonstrates the superiority of our design over four traditional control charts, producing the type-I error closest to the given level and the highest power under the same type-I error. Journal: IISE Transactions Pages: 985-998 Issue: 9 Volume: 51 Year: 2019 Month: 9 X-DOI: 10.1080/24725854.2018.1530487 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1530487 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:9:p:985-998 Template-Type: ReDIF-Article 1.0 Author-Name: Yue Shi Author-X-Name-First: Yue Author-X-Name-Last: Shi Author-Name: Yisha Xiang Author-X-Name-First: Yisha Author-X-Name-Last: Xiang Author-Name: Mingyang Li Author-X-Name-First: Mingyang Author-X-Name-Last: Li Title: Optimal maintenance policies for multi-level preventive maintenance with complex effects Abstract: We consider the problem of optimally maintaining a periodically inspected system with multi-level preventive maintenance whose effects are complex. At each inspection, the maintenance decision concerns whether a preventive maintenance action is needed and which level should be selected if preventive maintenance is desired. The objective is to minimize the total expected discounted cost, including inspection and maintenance costs. We formulate an infinite-horizon Markov decision process model and establish sufficient conditions to ensure the existence of an optimal monotone control-limit type policy with respect to the system’s deterioration level and age. We also numerically explore the structure of the optimal policy with respect to two additional system states, the level of the last maintenance action and the time since the last maintenance action. Real-world pavement deterioration data are used in our computational experiments, and the results show that the optimal policy is typically of monotone control-limit type. Journal: IISE Transactions Pages: 999-1011 Issue: 9 Volume: 51 Year: 2019 Month: 9 X-DOI: 10.1080/24725854.2018.1532135 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1532135 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:9:p:999-1011 Template-Type: ReDIF-Article 1.0 Author-Name: Oktay Karabağ Author-X-Name-First: Oktay Author-X-Name-Last: Karabağ Author-Name: Bariş Tan Author-X-Name-First: Bariş Author-X-Name-Last: Tan Title: Purchasing, production, and sales strategies for a production system with limited capacity, fluctuating sales and purchasing prices Abstract: In many industries, the revenue and cost structures of manufacturers are directly affected by the volatility of purchasing and sales prices in the markets. We analyze the purchasing, production, and sales policies for a continuous-review discrete material flow production/inventory system with fluctuating and correlated purchasing and sales prices, exponentially distributed raw material and demand inter-arrival times, and exponentially distributed processing times. The sales and purchasing prices are driven by random environmental changes that evolve according to a discrete state space, continuous-time Markov process.
We model the system as an infinite-horizon Markov decision process under the average reward criterion and prove that the optimal purchasing, production, and sales strategies are state-dependent threshold policies. We propose a linear programming formulation to compute the optimal threshold levels. We examine the effects of sales price variation, purchasing price variation, the correlation between sales and purchasing prices, the customer arrival rate, and limited inventory capacities on the system performance measures through a range of numerical experiments. We also examine under which circumstances the use of the optimal policy notably improves the system profit compared to the use of the naive buy-low, sell-high policy. We show that using the optimal purchasing, production, and sales policies allows manufacturers to improve their profits when the purchasing and sales prices fluctuate. Journal: IISE Transactions Pages: 921-942 Issue: 9 Volume: 51 Year: 2019 Month: 9 X-DOI: 10.1080/24725854.2018.1535217 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1535217 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:9:p:921-942 Template-Type: ReDIF-Article 1.0 Author-Name: Devashish Das Author-X-Name-First: Devashish Author-X-Name-Last: Das Author-Name: Kalyan S. Pasupathy Author-X-Name-First: Kalyan S. Author-X-Name-Last: Pasupathy Author-Name: Curtis B. Storlie Author-X-Name-First: Curtis B. Author-X-Name-Last: Storlie Author-Name: Mustafa Y. Sir Author-X-Name-First: Mustafa Y. Author-X-Name-Last: Sir Title: Functional regression-based monitoring of quality of service in hospital emergency departments Abstract: This article focuses on building a statistical monitoring scheme for service systems that experience time-varying arrivals of customers and have time-varying service rates. There is a lack of research on the systematic statistical monitoring of large-scale service systems, which is critical for maintaining a high quality of service. Motivated by the emergency department at a major academic medical center, this article intends to fill this research gap and provide a practical statistical monitoring scheme capable of detecting changes in service using readily available time stamp data. The proposed method is focused on building a functional regression model based on customer arrival and departure time instances from an in-control system. The model finds the expected departure intensity function for an observed arrival intensity on any given day of operation. The mean squared difference between the expected departure intensity function and the observed departure intensity function is used to generate an alarm indicating a significant change in service. This methodology is validated using simulation and real-data case studies. The proposed method can identify patterns of inefficiency or delay in service that are hard to detect using traditional statistical monitoring algorithms. The method offers a practical approach for monitoring service systems and determining when staffing levels need to be re-optimized. Journal: IISE Transactions Pages: 1012-1024 Issue: 9 Volume: 51 Year: 2019 Month: 9 X-DOI: 10.1080/24725854.2018.1536303 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1536303 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:51:y:2019:i:9:p:1012-1024 Template-Type: ReDIF-Article 1.0 Author-Name: Lirong Cui Author-X-Name-First: Lirong Author-X-Name-Last: Cui Author-Name: Jianhui Chen Author-X-Name-First: Jianhui Author-X-Name-Last: Chen Author-Name: Xiangchen Li Author-X-Name-First: Xiangchen Author-X-Name-Last: Li Title: Balanced reliability systems under Markov processes Abstract: Research on reliability for balanced systems has become an important issue in the reliability field; it is being intensively studied both in theory and in applications. The concept of balance has many implications, which results in various balanced systems needing to be studied in reliability modelling, analysis, optimization, and so on. In the present article, two balanced reliability systems are introduced and their reliability models are developed; formulas for system reliability, probability density functions, and moments of lifetimes under the two models are then obtained using results on aggregated stochastic processes or phase-type distributions for two-dimensional and high-dimensional situations. In addition, a matrix operator, the Kronecker product operator, is used to create concise forms for the related formulas. An algorithm for computing these reliability indexes is given to facilitate use. Some examples, including symbolic and numerical ones for the simplest case, the two-dimensional situation, are shown to illustrate the presented results. The presented work may shed light on future research on reliability and safety for balanced systems under different concepts of balance. Journal: IISE Transactions Pages: 1025-1035 Issue: 9 Volume: 51 Year: 2019 Month: 9 X-DOI: 10.1080/24725854.2018.1536304 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1536304 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:9:p:1025-1035 Template-Type: ReDIF-Article 1.0 Author-Name: Wenbo Chen Author-X-Name-First: Wenbo Author-X-Name-Last: Chen Author-Name: Huixiao Yang Author-X-Name-First: Huixiao Author-X-Name-Last: Yang Title: A heuristic based on quadratic approximation for dual sourcing problem with general lead times and supply capacity uncertainty Abstract: We study a single-product, periodic-review dual sourcing inventory system with demand and supply uncertainty, where the replenishment lead times can be arbitrary and the expedited supplier has a shorter lead time and a higher unit price than the regular supplier; unmet demand is fully backlogged. Even for the general dual sourcing problem without supply risks, the optimal stochastic policy has been unknown for over 50 years, and several simple heuristics have been proposed in the literature. Moreover, the consideration of supply uncertainty brings another challenge: the objective functions characterized by the dynamic programming recursions are not convex in the ordering quantities. Fortunately, a powerful transformation technique was recently proposed that successfully addresses this problem and shows that the value-to-go function is L♮-convex. In this article, we design a Linear Programming greedy (LP-greedy) heuristic based on a quadratic approximation of the L♮-convex value-to-go function, which converts the problem into a convex optimization problem in each period.
In an extensive simulation study, two sets of test instances from the literature are employed to compare the performance of our LP-greedy heuristic with that of well-known dual sourcing policies, including Tailored Base-Surge, Dual Index, and Best Vector Base-Stock. In addition, to assess the effectiveness of our heuristic, we construct a lower bound for the exact system. The lower bound is based on an information-relaxation approach and involves a penalty function derived from the proposed heuristic. We show that our proposed LP-greedy heuristic performs better than the other heuristics on the dual sourcing problem and is nearly optimal (within 3%) for the majority of cases. Journal: IISE Transactions Pages: 943-956 Issue: 9 Volume: 51 Year: 2019 Month: 9 X-DOI: 10.1080/24725854.2018.1537532 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1537532 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:9:p:943-956 Template-Type: ReDIF-Article 1.0 Author-Name: Shahab Derhami Author-X-Name-First: Shahab Author-X-Name-Last: Derhami Author-Name: Jeffrey S. Smith Author-X-Name-First: Jeffrey S. Author-X-Name-Last: Smith Author-Name: Kevin R. Gue Author-X-Name-First: Kevin R. Author-X-Name-Last: Gue Title: Space-efficient layouts for block stacking warehouses Abstract: In block stacking warehouses, pallets of Stock Keeping Units (SKUs) are stacked on top of one another in lanes on the warehouse floor. A conventional layout consists of multiple bays of lanes separated by aisles. The depths of the bays and the number of aisles determine the storage space utilization. Using an analytical model, we show that the traditional lane depth model underestimates accessibility waste and therefore does not provide an optimal lane depth. We propose a new model of wasted storage space and embed it in a mixed-integer program to find the optimal bay depths. The model improves space utilization by allowing multiple bay depths and allocating SKUs to appropriate bays. Our computational study shows that the proposed model is capable of solving large-scale problems with a relatively small optimality gap. We use simulation to evaluate the performance of the proposed model on small to industrial-sized warehouses. We also include a case study from the beverage industry. Journal: IISE Transactions Pages: 957-971 Issue: 9 Volume: 51 Year: 2019 Month: 9 X-DOI: 10.1080/24725854.2018.1539280 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1539280 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:9:p:957-971 Template-Type: ReDIF-Article 1.0 Author-Name: Selcuk Goren Author-X-Name-First: Selcuk Author-X-Name-Last: Goren Author-Name: Ihsan Sabuncuoglu Author-X-Name-First: Ihsan Author-X-Name-Last: Sabuncuoglu Title: Optimization of schedule robustness and stability under random machine breakdowns and processing time variability Abstract: In practice, scheduling systems are subject to considerable uncertainty in highly dynamic operating environments. The ability to cope with uncertainty in the scheduling process is becoming an increasingly important issue. This paper takes a proactive scheduling approach to study scheduling problems with two sources of uncertainty: processing time variability and machine breakdowns.
Two robustness measures (expected total flow time and expected total tardiness) and three stability measures (the sum of the squared differences of the job completion times, the sum of the absolute differences of the job completion times, and the sum of the variances of the realized completion times) are defined. Special cases for which the measures can be easily optimized are identified. A dominance rule and two lower bounds for one of the robustness measures are developed and subsequently used in a branch-and-bound algorithm to solve the problem exactly. A beam search heuristic is also proposed to solve large problems for all five measures. The computational results show that the beam search heuristic is capable of generating robust schedules with little average deviation from the optimal objective function value (obtained via the branch-and-bound algorithm) and that it performs significantly better than a number of heuristics available in the literature for all five measures. Journal: IIE Transactions Pages: 203-220 Issue: 3 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903171035 File-URL: http://hdl.handle.net/10.1080/07408170903171035 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:3:p:203-220 Template-Type: ReDIF-Article 1.0 Author-Name: Jian Yang Author-X-Name-First: Jian Author-X-Name-Last: Yang Author-Name: Xiangtong Qi Author-X-Name-First: Xiangtong Author-X-Name-Last: Qi Title: Managing partially controllable raw material acquisition and outsourcing in production planning Abstract: This paper studies a single-item production planning problem for a manufacturing firm. Besides being able to acquire raw material from an external supplier, the firm may also face an incoming stream of internally supplied raw material. In addition, outsourcing may serve as an alternative to in-house production for the firm to satisfy its demands. Attention is focused on the case where acquisition, production, and outsourcing costs are setup-linear and inventory holding costs are linear. For this case, polynomial algorithms are presented for some situations and the NP-hardness of the others is shown. A computational study is used to show the competitiveness of the proposed heuristic. [Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix] Journal: IIE Transactions Pages: 188-202 Issue: 3 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903232555 File-URL: http://hdl.handle.net/10.1080/07408170903232555 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:3:p:188-202 Template-Type: ReDIF-Article 1.0 Author-Name: Wanshan Zhu Author-X-Name-First: Wanshan Author-X-Name-Last: Zhu Author-Name: Srinagesh Gavirneni Author-X-Name-First: Srinagesh Author-X-Name-Last: Gavirneni Author-Name: Roman Kapuscinski Author-X-Name-First: Roman Author-X-Name-Last: Kapuscinski Title: Periodic flexibility, information sharing, and supply chain performance Abstract: A two-stage serial supply chain in which a retailer and its supplier operate in make-to-stock environments and the retailer faces uncertain demands from the end-customers is studied. When this supply chain is centrally managed, the optimal policy is an extension of the Clark–Scarf echelon base stock policy.
Since these supply chains are usually operated in a decentralized manner, an operational change is proposed that reduces the inefficiency associated with decentralization. The policy, which is called Periodic Flexibility (PF), provides the retailer with structural flexibility to order any amount in one period of a cycle, while requiring that the retailer receive a fixed-quantity shipment in the other periods. Optimal policies and their associated costs for the non-stationary inventory control problems faced by the retailer and the supplier under PF are characterized. A detailed computational study shows that PF improves supply chain performance by about 11% on average. This improvement is a 43% (on average) reduction in the efficiency gap between centralized and decentralized control. The improvement of PF is due to information sharing, i.e., the retailer passing its end-customer demand information to the supplier. The PF strategy is compared to the well-known quantity flexibility scheme and it is shown that the PF approach tends to be more efficient. [Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resources: Additional proofs] Journal: IIE Transactions Pages: 173-187 Issue: 3 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903394314 File-URL: http://hdl.handle.net/10.1080/07408170903394314 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:3:p:173-187 Template-Type: ReDIF-Article 1.0 Author-Name: Yaron Leyvand Author-X-Name-First: Yaron Author-X-Name-Last: Leyvand Author-Name: Dvir Shabtay Author-X-Name-First: Dvir Author-X-Name-Last: Shabtay Author-Name: George Steiner Author-X-Name-First: George Author-X-Name-Last: Steiner Title: Optimal delivery time quotation to minimize total tardiness penalties with controllable processing times Abstract: Scheduling problems with due date assignment and controllable processing times are studied in this paper. It is assumed that the job processing time is a linear function of the amount of resource allocated to the job, and that all jobs share the same due date, which is a decision variable. The problems have many applications, e.g., in optimal delivery time quotation and order sequencing when outsourcing is an option. The quality of a schedule is measured by two different criteria. The first is the total weighted number of tardy jobs plus the due date assignment cost, and the second is the total weighted resource consumption. Four different problems for treating the two criteria are considered. It is shown that three of these problems are NP-hard in the ordinary sense, although the problem of minimizing an integrated objective function can be solved in polynomial time. A pseudo-polynomial time optimization algorithm is provided for the three NP-hard versions of the problem. A fully polynomial time algorithm for approximating a Pareto-optimal solution is also presented. Finally, important polynomially solvable special cases are highlighted. Journal: IIE Transactions Pages: 221-231 Issue: 3 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903394322 File-URL: http://hdl.handle.net/10.1080/07408170903394322 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:42:y:2010:i:3:p:221-231 Template-Type: ReDIF-Article 1.0 Author-Name: Oded Berman Author-X-Name-First: Oded Author-X-Name-Last: Berman Author-Name: Zvi Drezner Author-X-Name-First: Zvi Author-X-Name-Last: Drezner Author-Name: Dmitry Krass Author-X-Name-First: Dmitry Author-X-Name-Last: Krass Title: Cooperative cover location problems: The planar case Abstract: A cooperative-covering family of location problems is proposed in this paper. Each facility emits a (possibly non-physical) “signal” which decays with distance, and each demand point observes the aggregate signal emitted by all facilities. It is assumed that a demand point is covered if its aggregate signal exceeds a given threshold; thus facilities cooperate to provide coverage, as opposed to the classical coverage location model in which coverage is provided only by the closest facility. It is shown that this cooperative assumption is appropriate in a variety of applications. Moreover, ignoring the cooperative behavior (i.e., assuming the traditional individual coverage framework) leads to solutions that are significantly worse than the optimal cooperative cover solutions; this is illustrated with a case study of locating warning sirens in North Orange County, California. The problems are formulated, analyzed, and solved in the plane for the Euclidean distance case. Optimal and heuristic algorithms are proposed and extensive computational experiments are reported. Journal: IIE Transactions Pages: 232-246 Issue: 3 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903394355 File-URL: http://hdl.handle.net/10.1080/07408170903394355 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:3:p:232-246 Template-Type: ReDIF-Article 1.0 Author-Name: Yossi Bukchin Author-X-Name-First: Yossi Author-X-Name-Last: Bukchin Author-Name: Efrat Wexler Author-X-Name-First: Efrat Author-X-Name-Last: Wexler Title: The effect of buffers and work sharing on makespan improvement of small batches in assembly lines under learning effects Abstract: The effect of workers’ learning curves on the production rate in manual assembly lines is significant when producing relatively small batches of different products. This research studies this effect and suggests applying a work-sharing mechanism among the workers to improve the makespan (the time to complete the batch). Under the proposed mechanism, adjacent cross-trained workers help each other in order to reduce idle times caused by blockage and starvation. The effect of work sharing and buffers on the makespan is studied and compared with a baseline situation, in which the line does not contain any buffers and work sharing is not applied. Several linear programming and mixed-integer linear programming formulations for makespan minimization are presented. These formulations provide optimal work allocations to stations and optimal parameters of the work-sharing mechanism. A numerical study is conducted to examine the effect of buffers and work sharing on makespan reduction in different environment settings. Numerical results are given along with some recommendations regarding system design and operation. Journal: IIE Transactions Pages: 403-414 Issue: 5 Volume: 48 Year: 2016 Month: 5 X-DOI: 10.1080/0740817X.2015.1056392 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1056392 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:48:y:2016:i:5:p:403-414 Template-Type: ReDIF-Article 1.0 Author-Name: Lixin Tang Author-X-Name-First: Lixin Author-X-Name-Last: Tang Author-Name: Defeng Sun Author-X-Name-First: Defeng Author-X-Name-Last: Sun Author-Name: Jiyin Liu Author-X-Name-First: Jiyin Author-X-Name-Last: Liu Title: Integrated storage space allocation and ship scheduling problem in bulk cargo terminals Abstract: This study is motivated by the practices of large iron and steel companies that have steady and heavy demands for bulk raw materials, such as iron ore, coal, limestone, etc. These materials are usually transported to a bulk cargo terminal by ships (or to a station by trains). Once unloaded, they are moved to and stored in a bulk material stockyard, waiting for retrieval for use in production. Efficient storage space allocation and ship scheduling are critical to achieving high space utilization, low material loss, and low transportation costs. In this article, we study the integrated storage space allocation and ship scheduling problem in bulk cargo terminals. Our problem differs from other associated problems due to the special way in which the materials are transported and stored. A novel mixed-integer programming model is developed and then solved using a Benders decomposition algorithm, which is enhanced by the use of various valid inequalities, combinatorial Benders cuts, variable reduction tests, and an iterative heuristic procedure. Computational results indicate that the proposed solution method is much more efficient than the standard commercial solver CPLEX. Journal: IIE Transactions Pages: 428-439 Issue: 5 Volume: 48 Year: 2016 Month: 5 X-DOI: 10.1080/0740817X.2015.1063791 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1063791 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:5:p:428-439 Template-Type: ReDIF-Article 1.0 Author-Name: Kaveh Bastani Author-X-Name-First: Kaveh Author-X-Name-Last: Bastani Author-Name: Zhenyu (James) Kong Author-X-Name-First: Zhenyu (James) Author-X-Name-Last: Kong Author-Name: Wenzhen Huang Author-X-Name-First: Wenzhen Author-X-Name-Last: Huang Author-Name: Yingqing Zhou Author-X-Name-First: Yingqing Author-X-Name-Last: Zhou Title: Compressive sensing–based optimal sensor placement and fault diagnosis for multi-station assembly processes Abstract: Developments in sensing technologies have created the opportunity to diagnose process faults in multi-station assembly processes by analyzing measurement data. Achieving sufficient diagnosability for process faults is a challenging issue, as sensors cannot be used in excessive numbers. A number of methods have therefore been reported in the literature that optimize the diagnosability of a diagnostic method for a given sensor cost, thus allowing the identification of process faults incurred in multi-station assembly processes. However, most of these methods assume that the number of sensors exceeds the number of process errors. Unfortunately, this assumption may not hold in many real industrial applications, so the diagnostic methods have to solve underdetermined linear equations. In order to address this issue, we propose an optimal sensor placement method by devising a new diagnosability criterion based on compressive sensing theory, which is able to handle underdetermined linear equations.
Our method seeks the optimal sensor placement by minimizing the average mutual coherence to maximize the diagnosability. The proposed method is demonstrated and validated through case studies from actual industrial applications. Journal: IIE Transactions Pages: 462-474 Issue: 5 Volume: 48 Year: 2016 Month: 5 X-DOI: 10.1080/0740817X.2015.1096431 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1096431 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:5:p:462-474 Template-Type: ReDIF-Article 1.0 Author-Name: Tugce Martagan Author-X-Name-First: Tugce Author-X-Name-Last: Martagan Author-Name: Ananth Krishnamurthy Author-X-Name-First: Ananth Author-X-Name-Last: Krishnamurthy Author-Name: Christos T. Maravelias Author-X-Name-First: Christos T. Author-X-Name-Last: Maravelias Title: Optimal condition-based harvesting policies for biomanufacturing operations with failure risks Abstract: The manufacture of biological products from live systems such as bacteria, mammalian, or insect cells is called biomanufacturing. The use of live cells introduces several operational challenges including batch-to-batch variability, parallel growth of both desired antibodies and unwanted toxic byproducts in the same batch, and random shocks leading to multiple competing failure processes. In this article, we develop a stochastic model that integrates the cell-level dynamics of biological processes with operational dynamics to identify optimal harvesting policies that balance the risks of batch failures and yield/quality tradeoffs in fermentation operations. We develop an infinite horizon, discrete-time Markov decision model to derive the structural properties of the optimal harvesting policies. We use IgG1 antibody production as an example to demonstrate the optimal harvesting policy and compare its performance against harvesting policies used in practice. We leverage insights from the optimal policy to propose smart stationary policies that are easier to implement in practice. Journal: IIE Transactions Pages: 440-461 Issue: 5 Volume: 48 Year: 2016 Month: 5 X-DOI: 10.1080/0740817X.2015.1101523 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1101523 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:5:p:440-461 Template-Type: ReDIF-Article 1.0 Author-Name: Ashesh Kumar Sinha Author-X-Name-First: Ashesh Kumar Author-X-Name-Last: Sinha Author-Name: Ananth Krishnamurthy Author-X-Name-First: Ananth Author-X-Name-Last: Krishnamurthy Title: Dual index production and subcontracting policies for assemble-to-order systems Abstract: We analyze tradeoffs related to production and subcontracting decisions in an assemble-to-order system with capacity constraints and stochastic lead times. We assume that component replenishment is carried out by orders to a subcontractor and component stock levels at the manufacturer are determined by dual index-based policies. Furthermore, customer demands for the final product are immediately satisfied if all of the required components are in stock; otherwise, they are back-ordered. In order to maintain high service levels, the manufacturer reserves the option to produce components internally. Using queuing models, we analyze the tradeoffs related to internal manufacturing versus subcontracting under different types of dual index policies. 
We use Matrix-Geometric methods to conduct an exact analysis for an assemble-to-order system with two components and develop a decomposition-based algorithm to analyze the performance of systems with more than two products. Numerical studies provide useful insights on the performance of the various dual index policies under study. Journal: IIE Transactions Pages: 415-427 Issue: 5 Volume: 48 Year: 2016 Month: 5 X-DOI: 10.1080/0740817X.2015.1110652 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1110652 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:5:p:415-427 Template-Type: ReDIF-Article 1.0 Author-Name: Yiannis Dimitrakopoulos Author-X-Name-First: Yiannis Author-X-Name-Last: Dimitrakopoulos Author-Name: Apostolos Burnetas Author-X-Name-First: Apostolos Author-X-Name-Last: Burnetas Title: The value of service rate flexibility in an M/M/1 queue with admission control Abstract: We consider a single-server queueing system with admission control and the possibility to switch dynamically between k increasing service rate values, with the service cost rate being convex in the service rate. We explore the benefit due to service rate flexibility on the optimal profit and the admission thresholds, when service payment is made upon customer's admission. We formulate a Markov Decision Process model for the problem of joint admission and service control considering both discounted and average expected profit maximization and show that the optimal policy has a threshold structure for both controls. Regarding the benefit due to flexibility, we show that it is increasing in system length and that its effect on the admission policy is to increase the admission threshold. We also derive a simple approximate condition between the admission reward and the relative cost of service rate increase, so that the service rate flexibility is beneficial. We finally show that the results extend to the corresponding model where service payment is made at the end of each service completion, and differences from the original model regarding the benefit due to service flexibility are explored numerically. Journal: IISE Transactions Pages: 603-621 Issue: 6 Volume: 49 Year: 2017 Month: 6 X-DOI: 10.1080/24725854.2016.1269976 File-URL: http://hdl.handle.net/10.1080/24725854.2016.1269976 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:6:p:603-621 Template-Type: ReDIF-Article 1.0 Author-Name: Shuaian Wang Author-X-Name-First: Shuaian Author-X-Name-Last: Wang Author-Name: Kai Wang Author-X-Name-First: Kai Author-X-Name-Last: Wang Author-Name: Lu Zhen Author-X-Name-First: Lu Author-X-Name-Last: Zhen Author-Name: Xiaobo Qu Author-X-Name-First: Xiaobo Author-X-Name-Last: Qu Title: Cruise itinerary schedule design Abstract: The Cruise Itinerary Schedule Design (CISD) problem determines the optimal sequence of a given set of ports of call (a port of call is an intermediate stop in a cruise itinerary) and the arrival and departure times at each port of call in order to maximize the monetary value of the utility at ports of call minus the fuel cost. To solve this problem, in view of the practical observations that most cruise itineraries do not have many ports of call, we first enumerate all sequences of ports of call and then optimize the arrival and departure times at each port of call by developing a dynamic programming approach.
To improve the computational efficiency, we propose effective bounds on the monetary value of each sequence of ports of call, eliminating non-optimal sequences without invoking the dynamic programming algorithm. Extensive computational experiments are conducted and the results show that, first, using the bounds on the profit of each sequence of ports of call considerably improves the computational efficiency; second, the total profit of the cruise itinerary is sensitive to the fuel price and hence an accurate estimation of the fuel price is highly desirable; third, the optimal sequence of ports of call is not necessarily the sequence with the shortest voyage distance, especially when the ports do not have a natural geographic sequence. Journal: IISE Transactions Pages: 622-641 Issue: 6 Volume: 49 Year: 2017 Month: 6 X-DOI: 10.1080/24725854.2017.1299954 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1299954 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:6:p:622-641 Template-Type: ReDIF-Article 1.0 Author-Name: Amiya K. Chakravarty Author-X-Name-First: Amiya K. Author-X-Name-Last: Chakravarty Title: Offshore outsourcing and ownership of facilities with productivity concerns Abstract: We examine how companies may create appropriate portfolios of onshore/offshore facilities. Such decisions are complex due to demand volatility, payoff uncertainty, decentralization, productivity differences, and location uncertainties. We study a Build–Operate–Transfer (BOT) scenario where an offshore supplier builds a facility and trains workers and the principal leases it for a period before purchasing it. The parties participate in BOT only if both are better off in relation to continued leasing. The decisions of the principal and supplier include the size, unit price, and productivity of the offshore facility and an appropriate experience level for managing it. We show how economic equilibrium can be restructured to overcome productivity concerns. Although productivity improvements help both parties, we find that the principal stands to gain more. Such improvements should be scheduled early if the principal is sufficiently experienced in offshore operations and delayed if the on-the-job learning is rapid. We establish that the principal should not purchase the facility if productivity cannot be maintained at a pre-purchase level. We also find that an experienced principal can benefit the supplier, not just herself. We develop simple rules for assessing the viability of offshore outsourcing and for allocating budget for productivity improvement. Journal: IISE Transactions Pages: 642-651 Issue: 6 Volume: 49 Year: 2017 Month: 6 X-DOI: 10.1080/24725854.2017.1300357 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1300357 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:uiiexx:v:49:y:2017:i:6:p:642-651 Template-Type: ReDIF-Article 1.0 Author-Name: Irem Sengul Orgut Author-X-Name-First: Irem Author-X-Name-Last: Sengul Orgut Author-Name: Julie Ivy Author-X-Name-First: Julie Author-X-Name-Last: Ivy Author-Name: Reha Uzsoy Author-X-Name-First: Reha Author-X-Name-Last: Uzsoy Title: Modeling for the equitable and effective distribution of food donations under stochastic receiving capacities Abstract: We present and analyze stochastic models developed to facilitate the equitable and effective distribution of donated food by a regional food bank among the population at risk for hunger. Since demand typically exceeds the donated food supply, the food bank must distribute donated food in an equitable manner while minimizing food waste, leading to conflicting objectives. Distribution to beneficiaries in the service area is carried out by local charitable agencies, whose receiving capacities are stochastic, since they depend on factors (such as their budget and workforce) that vary significantly over time. We develop a single-period, two-stage stochastic model that ensures equitable distribution of food donations when the distribution decisions are made prior to observing capacities at the receiving locations. Shipment decisions made at the beginning of the period can be corrected at an additional cost after the capacities are observed in the second stage. We prove that this model has a newsvendor-type closed-form optimal solution and illustrate our results using historical data from our collaborating food bank. Journal: IISE Transactions Pages: 567-578 Issue: 6 Volume: 49 Year: 2017 Month: 6 X-DOI: 10.1080/24725854.2017.1300358 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1300358 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:6:p:567-578 Template-Type: ReDIF-Article 1.0 Author-Name: Junlong Zhang Author-X-Name-First: Junlong Author-X-Name-Last: Zhang Author-Name: Osman Y. Özaltın Author-X-Name-First: Osman Y. Author-X-Name-Last: Özaltın Title: Single-ratio fractional integer programs with stochastic right-hand sides Abstract: We present an equivalent value function reformulation for a class of single-ratio Fractional Integer Programs (FIPs) with stochastic right-hand sides and propose a two-phase solution approach. The first phase constructs the value functions of FIPs in both stages. The second phase solves the reformulation using a global branch-and-bound algorithm or a level-set approach. We derive some basic properties of the value functions of FIPs and utilize them in our algorithms. We show that in certain cases our approach can solve instances whose extensive forms have the same order of magnitude as the largest stochastic quadratic integer programs solved in the literature. Journal: IISE Transactions Pages: 579-592 Issue: 6 Volume: 49 Year: 2017 Month: 6 X-DOI: 10.1080/24725854.2017.1302116 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1302116 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:uiiexx:v:49:y:2017:i:6:p:579-592 Template-Type: ReDIF-Article 1.0 Author-Name: Xiaobo Zhao Author-X-Name-First: Xiaobo Author-X-Name-Last: Zhao Author-Name: Yun Zhou Author-X-Name-First: Yun Author-X-Name-Last: Zhou Author-Name: Jinxing Xie Author-X-Name-First: Jinxing Author-X-Name-Last: Xie Title: An inventory system with quasi-hyperbolic discounting rate Abstract: In this article, we consider a periodic-review inventory system with stochastic demands for an infinite horizon, where the manager has a time-inconsistent preference with a quasi-hyperbolic discounting rate. We model the inventory system as an intra-personal sequential decision problem. It is shown that the ordering decision follows a base-stock policy but has a systematic bias in that the base-stock level is lower than the standard optimal level. We extend our analysis to a supply chain that is composed of a perfectly rational supplier and a quasi-hyperbolic retailer. The results show that the time-inconsistent preference of the retailer can cause considerable loss in system performance. We propose a contract with delay-in-payment and income-sharing terms for such a supply chain. The results show that the contract can effectively remove the bias in the retailer's ordering decision and coordinate the supply chain to improve its performance. Journal: IISE Transactions Pages: 593-602 Issue: 6 Volume: 49 Year: 2017 Month: 6 X-DOI: 10.1080/24725854.2017.1303763 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1303763 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:6:p:593-602 Template-Type: ReDIF-Article 1.0 Author-Name: Yun Jiang Author-X-Name-First: Yun Author-X-Name-Last: Jiang Author-Name: Jiyin Liu Author-X-Name-First: Jiyin Author-X-Name-Last: Liu Title: A new model and an efficient branch-and-bound solution for cyclic multi-hoist scheduling Abstract: This article studies the multi-hoist cyclic scheduling problem in electroplating lines where the processing time of parts in each tank must be within given lower and upper limits and part moves between tanks are allowed in both directions along the line. The problem arises in electroplating lines such as those used in the production of printed circuit boards and has previously been modeled as a mixed-integer linear program. The possible relative positions of any pair of moves are analyzed and a set of linear constraints is derived that expresses the no-collision requirements for hoists. Based on the analysis, a new mixed-integer linear programming model is formulated for the multi-hoist scheduling problem. An efficient branch-and-bound strategy is proposed to solve the problem. Computational results show that the new model can be solved much more quickly than an existing model in the literature and that the proposed solution method is more efficient in solving the problem than a commercial software package. Journal: IIE Transactions Pages: 249-262 Issue: 3 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2012.762485 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.762485 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:46:y:2014:i:3:p:249-262 Template-Type: ReDIF-Article 1.0 Author-Name: Melih Çelik Author-X-Name-First: Melih Author-X-Name-Last: Çelik Author-Name: Haldun Süral Author-X-Name-First: Haldun Author-X-Name-Last: Süral Title: Order picking under random and turnover-based storage policies in fishbone aisle warehouses Abstract: A recent trend in the layout design of unit-load warehouses is the application of layouts without conventional parallel pick aisles and straight middle aisles. Two examples for such designs are flying-V and fishbone designs for single- and dual-command operations. In this study, it is shown that the multi-item order picking problem can be solved in polynomial time for both fishbone and flying-V layouts. These two designs are compared with the traditional parallel-aisle design under the case of multi-item pick lists. Simple heuristics are proposed for fishbone layouts that are inspired by those put forward for parallel-aisle warehouses and it is experimentally shown that a modification of the aisle-by-aisle heuristic produces good results compared with other modified S-shape and largest gap heuristics when items have uniform demand. Computational experiments are performed in order to compare the performances of fishbone and traditional layouts under optimal routing and it is shown that a fishbone design can obtain improvements of around 20% over parallel-aisle design in single-command operations but can perform up to around 30% worse than an equivalent parallel-aisle layout as the size of the pick list increases. The sensitivity of the results to varying demand skewness levels when volume-based storage is applied is tested and it is shown that, unlike the single- and dual-command cases, a fishbone design performs better than a traditional design under highly skewed demand as opposed to uniform demand. Journal: IIE Transactions Pages: 283-300 Issue: 3 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.768871 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.768871 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:3:p:283-300 Template-Type: ReDIF-Article 1.0 Author-Name: Lixin Tang Author-X-Name-First: Lixin Author-X-Name-Last: Tang Author-Name: Xie Xie Author-X-Name-First: Xie Author-X-Name-Last: Xie Author-Name: Jiyin Liu Author-X-Name-First: Jiyin Author-X-Name-Last: Liu Title: Crane scheduling in a warehouse storing steel coils Abstract: This article studies a single-crane scheduling problem in a warehouse where steel coils are stored in two levels. A given set of coils is to be retrieved from their designated places. If a required coil at the lower level is blocked by one or two coils at the upper level, to retrieve it, the blocking coils must first be moved to other positions. The considered problem is to determine the new positions and the required crane movements so that all coils are retrieved in the shortest possible time. A mixed-integer linear programming model is formulated for the problem and a sequential solution approach is implemented. A dynamic programming algorithm is proposed for optimally solving a restricted case. Based on the analysis of a special case, a heuristic algorithm is proposed for the general case and its worst-case performance is analyzed. The average performances of the solution approaches are computationally evaluated. The results show that the proposed heuristic algorithms are capable of generating good-quality solutions.
Journal: IIE Transactions Pages: 267-282 Issue: 3 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.802841 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.802841 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:3:p:267-282 Template-Type: ReDIF-Article 1.0 Author-Name: Klaus Altendorfer Author-X-Name-First: Klaus Author-X-Name-Last: Altendorfer Author-Name: Stefan Minner Author-X-Name-First: Stefan Author-X-Name-Last: Minner Title: A comparison of make-to-stock and make-to-order in multi-product manufacturing systems with variable due dates Abstract: This article models a single-stage hybrid production system, which can be regarded as a Make To Order (MTO) production system with safety stocks or a Make To Stock (MTS) production system with advance demand information. In an environment with multiple products and variable customer due dates, optimality conditions for safety stocks (base stocks) and safety lead times (work-ahead window) that minimize inventory and backorder costs are derived. For a simplified M/M/1 system with exponentially distributed customer-required lead time, an explicit comparison between MTO and MTS is conducted. A pure MTO policy becomes relatively more favorable than a pure MTS policy if inventory holding costs increase, backorder costs decrease, the mean customer-required lead time increases, or the processing rate increases. In a numerical study, the influence of variance, the behavior of optimal parameters, and the cost reduction potential of this hybrid policy are shown. Journal: IIE Transactions Pages: 197-212 Issue: 3 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.803638 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.803638 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:3:p:197-212 Template-Type: ReDIF-Article 1.0 Author-Name: Andres Abad Author-X-Name-First: Andres Author-X-Name-Last: Abad Author-Name: Weihong Guo Author-X-Name-First: Weihong Author-X-Name-Last: Guo Author-Name: Jionghua Jin Author-X-Name-First: Jionghua Author-X-Name-Last: Jin Title: Algebraic expression of system configurations and performance metrics for mixed-model assembly systems Abstract: One of the challenges in the design and operation of a mixed-model assembly system (MMAS) is the high complexity of the station layout configuration due to the various tasks that have to be performed to produce different product variants. It is therefore desirable to have an effective way of representing complex system configurations and analyzing system performances. By overcoming the drawbacks of two widely used representation methods (block diagrams and adjacency matrix), this article proposes to use algebraic expressions to represent the configuration of an MMAS. By further extending the algebraic configuration operators, algebraic performance operators are defined for the first time to allow systematic evaluation of system performance metrics, such as quality conforming rates for individual product types at each station and process capability for handling complexity induced by product variants. Therefore, the benefits of using the proposed algebraic representation lie not only in its effectiveness in achieving a compact storage of system configurations but also in its ability to systematically implement computational algorithms for automatically evaluating various system performance metrics.
Examples are given in the article to illustrate how the proposed algebraic representation can be effectively used in assisting the design and performance analysis of an MMAS. Journal: IIE Transactions Pages: 230-248 Issue: 3 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.813093 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.813093 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:3:p:230-248 Template-Type: ReDIF-Article 1.0 Author-Name: Kuo-Hao Chang Author-X-Name-First: Kuo-Hao Author-X-Name-Last: Chang Author-Name: Yu-Hsuan Huang Author-X-Name-First: Yu-Hsuan Author-X-Name-Last: Huang Author-Name: Shih-Pang Yang Author-X-Name-First: Shih-Pang Author-X-Name-Last: Yang Title: Vehicle fleet sizing for automated material handling systems to minimize cost subject to time constraints Abstract: Vehicle fleet sizing for an Automated Material Handling System (AMHS) is an important but challenging problem due to the complexity of AMHS design and the uncertainty involved in the production process, e.g., random processing times. For a complex manufacturing system such as semiconductor manufacturing, the problem is even more complex. This article studies the vehicle fleet sizing problem in semiconductor manufacturing and proposes a formulation and solution method, called Simulation Sequential Metamodeling (SSM), to facilitate the determination of the optimal vehicle fleet size that minimizes the vehicle cost while satisfying time constraints. The proposed approach is to sequentially construct a series of metamodels, solve the approximate problem, and evaluate the quality of the resulting solution. Once the resulting solution is satisfactory, the algorithm is terminated. Compared with the existing metamodeling approaches that employ a large number of observations at one time, the sequential nature of SSM allows it to achieve much better computational efficiency. Furthermore, a newly developed estimation method enables SSM to quantify the quality of the resulting solution. Extensive numerical experiments show that SSM outperforms the existing methods and that the computational advantage of SSM increases with the problem size and the variance of the response variables. An empirical study based on real data is conducted to validate the viability of SSM in practical settings. Journal: IIE Transactions Pages: 301-312 Issue: 3 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.813095 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.813095 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:3:p:301-312 Template-Type: ReDIF-Article 1.0 Author-Name: Barış Tan Author-X-Name-First: Barış Author-X-Name-Last: Tan Author-Name: Yalçın Akçay Author-X-Name-First: Yalçın Author-X-Name-Last: Akçay Title: Assortment-based cooperation between two make-to-stock firms Abstract: Cooperation can potentially improve competitiveness and profitability of firms with limited resources and production capacities. This article presents a continuous-time Markov chain model to study an assortment-based cooperation between two independent firms with limited capacity. An assortment-based cooperation is an agreement to combine the product assortments of two firms and offer the combined assortment to each firm's customers. Both centralized and decentralized cooperation are studied.
In a centralized cooperation, firms jointly make replenishment decisions, whereas in the decentralized case, firms operate under independent base stock policies and manage product exchanges through a discount-based contract where each firm supplies its own product to the other firm at a discounted price and at an agreed-upon fill rate. Under this scheme, assortment-based cooperation also requires each firm to ration its inventory effectively, since each firm has to deal with two different demand streams. For specific values of the contract parameters, the discount-based contract yields the results of the centralized operation. It is shown that assortment-based cooperation is always beneficial for two symmetrical firms in both centralized and decentralized cooperation. Numerical experiments reveal that assortment-based cooperation is not always beneficial if the firms are not symmetrical. [Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for Proofs of all Propositions.] Journal: IIE Transactions Pages: 213-229 Issue: 3 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.814929 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.814929 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:3:p:213-229 Template-Type: ReDIF-Article 1.0 Author-Name: Stanley Gershwin Author-X-Name-First: Stanley Author-X-Name-Last: Gershwin Author-Name: Barış Tan Author-X-Name-First: Barış Author-X-Name-Last: Tan Author-Name: Michael Veatch Author-X-Name-First: Michael Author-X-Name-Last: Veatch Title: Production control with backlog-dependent demand Abstract: A manufacturing firm that builds a product to stock to meet a random demand is studied. Production time is deterministic, so that if there is a backlog, customers are quoted a lead time that is proportional to the backlog. In order to represent the customers' response to waiting, a defection function—the fraction of customers who choose not to order as a function of the quoted lead time—is introduced. Unlike models with backorder costs, the defection function is related to customer behavior. Using a continuous flow control model with linear holding cost and Markov modulated demand, it is shown that the optimal production policy has a hedging point form. The performance of the system under this policy is evaluated, allowing the optimal hedging point to be found.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix] Journal: IIE Transactions Pages: 511-523 Issue: 6 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170801975040 File-URL: http://hdl.handle.net/10.1080/07408170801975040 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:6:p:511-523 Template-Type: ReDIF-Article 1.0 Author-Name: Athanassios Avramidis Author-X-Name-First: Athanassios Author-X-Name-Last: Avramidis Author-Name: Wyean Chan Author-X-Name-First: Wyean Author-X-Name-Last: Chan Author-Name: Pierre L'Ecuyer Author-X-Name-First: Pierre Author-X-Name-Last: L'Ecuyer Title: Staffing multi-skill call centers via search methods and a performance approximation Abstract: A multi-skill staffing problem in a call center where the agent skill sets are exogenous and the call routing policy has well-specified features of overflow between different agent types is addressed.
Constraints are imposed on the service level for each call class, defined here as the steady-state fraction of calls served within a given time threshold, abandoned calls excluded. An approximation of these service levels is developed that allows an arbitrary overflow mechanism and customer abandonment. A two-stage heuristic that finds good solutions to mathematical programs with such constraints is developed. The first stage uses search methods supported by the approximation. Because service level approximation errors may be substantial, the solution is adjusted in a second stage in which performance is estimated by simulation. Realistic problems of varying size and routing policy are solved. The proposed approach is shown to be competitive with (and often better than) previously available methods.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix] Journal: IIE Transactions Pages: 483-497 Issue: 6 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802322986 File-URL: http://hdl.handle.net/10.1080/07408170802322986 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:6:p:483-497 Template-Type: ReDIF-Article 1.0 Author-Name: Mahvareh Ahghari Author-X-Name-First: Mahvareh Author-X-Name-Last: Ahghari Author-Name: Barış Balcıoğlu Author-X-Name-First: Barış Author-X-Name-Last: Balcıoğlu Title: Benefits of cross-training in a skill-based routing contact center with priority queues and impatient customers Abstract: Customer contact centers that provide different types of services to customers who place phone calls or send e-mail messages are studied. Customers calling are impatient; hence phone requests have a higher priority than e-mail messages. E-mails that are not responded to within a specified time limit can be prioritized. The goal of this paper is to assess the performance improvement achieved via cross-training the agents. The performance of contact centers operated under different strategies is compared. An extensive simulation study is presented that shows that strategies permitting pre-emptive-resume policies provide the best performance for phone calls. The results also demonstrate that limited cross-training with two skills per agent results in considerable performance improvements. However, the unbalanced traffic intensities due to different mean service times for each class necessitate more cross-training, at three skills per agent, to achieve considerable improvement.[Supplemental materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix of additional simulation results] Journal: IIE Transactions Pages: 524-536 Issue: 6 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802432975 File-URL: http://hdl.handle.net/10.1080/07408170802432975 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:6:p:524-536 Template-Type: ReDIF-Article 1.0 Author-Name: Hank Grant Author-X-Name-First: Hank Author-X-Name-Last: Grant Author-Name: Stan Settles Author-X-Name-First: Stan Author-X-Name-Last: Settles Title: A survey of issues in biomanufacturing research Abstract: Biotechnology has become one of the primary development areas of the 21st century.
For a medical device or pharmaceutical manufacturer to remain on the competitive edge, it is necessary not only to satisfy the stringent product demands of the biotechnology markets but also to develop processes that can accelerate the time frame from research and development to manufacturing and actual marketing of the medical products. Biomanufacturing is defined as the design, development, implementation and management of systems for the production of products that are integrated into or interact with human systems. This paper identifies important research issues in this relatively new manufacturing field, which is anticipated to be a significant area for industrial engineering because it involves products that are expensive and complex, contain rapidly changing technology, and are difficult to manufacture on a large scale. Journal: IIE Transactions Pages: 537-545 Issue: 6 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802495139 File-URL: http://hdl.handle.net/10.1080/07408170802495139 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:6:p:537-545 Template-Type: ReDIF-Article 1.0 Author-Name: Laura McLay Author-X-Name-First: Laura Author-X-Name-Last: McLay Author-Name: Sheldon Jacobson Author-X-Name-First: Sheldon Author-X-Name-Last: Jacobson Author-Name: Alexander Nikolaev Author-X-Name-First: Alexander Author-X-Name-Last: Nikolaev Title: A sequential stochastic passenger screening problem for aviation security Abstract: Designing effective aviation security systems has become a problem of national concern. Since September 11th, 2001, passenger screening systems have become an important component in the design and operation of aviation security systems. This paper introduces the Sequential Stochastic Passenger Screening Problem (SSPSP), which allows passengers to be optimally assigned (in real time) to aviation security resources. Passengers are classified as either selectees or non-selectees, with screening procedures in place for each such class. Passengers arrive sequentially, and a prescreening system determines each passenger's perceived risk level, which becomes known upon check-in. The objective of SSPSP is to use the passengers' perceived risk levels to determine the optimal policy for screening passengers that maximizes the expected number of true alarms, subject to capacity and assignment constraints. SSPSP is formulated as a Markov decision process, and an optimal policy is found using dynamic programming. Several structural properties of SSPSP are derived using its relationship to knapsack and sequential assignment problems. An illustrative example is provided, which indicates that a large proportion of high-risk passengers are classified as selectees.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix] Journal: IIE Transactions Pages: 575-591 Issue: 6 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802510416 File-URL: http://hdl.handle.net/10.1080/07408170802510416 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:41:y:2009:i:6:p:575-591 Template-Type: ReDIF-Article 1.0 Author-Name: Eylem Tekin Author-X-Name-First: Eylem Author-X-Name-Last: Tekin Author-Name: Wallace Hopp Author-X-Name-First: Wallace Author-X-Name-Last: Hopp Author-Name: Mark Van Oyen Author-X-Name-First: Mark Author-X-Name-Last: Van Oyen Title: Pooling strategies for call center agent cross-training Abstract: The efficiency benefits achievable via cross-training in call and service center environments where agents serve distinct customer types are investigated. This is achieved by first considering specialized agents grouped into N departments according to the customer type they serve. Then, cross-training policies that pool a set of departments into a single larger department that serves all of the pooled call types according to either a first-come-first-served or non-pre-emptive priority service discipline are examined. The impact of system parameters, such as the number of servers, mean service times and service time coefficient of variation, on the decision of which departments to pool in order to minimize the expected delay in the system is characterized by comparing the proposed queueing models via standard queueing approximations and numerical analysis. The results show that if the mean service times of the departments that will be pooled are similar, pooling the departments with the highest service time coefficient of variation reduces the expected delay the most. Sufficient conditions for the mean service times to be considered similar are also provided.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following supplemental resource: Appendix of proofs for all results developed in the paper] Journal: IIE Transactions Pages: 546-561 Issue: 6 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802512586 File-URL: http://hdl.handle.net/10.1080/07408170802512586 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:6:p:546-561 Template-Type: ReDIF-Article 1.0 Author-Name: Alexander Erdelyi Author-X-Name-First: Alexander Author-X-Name-Last: Erdelyi Author-Name: Huseyin Topaloglu Author-X-Name-First: Huseyin Author-X-Name-Last: Topaloglu Title: Computing protection level policies for dynamic capacity allocation problems by using stochastic approximation methods Abstract: A dynamic capacity allocation problem is considered in this paper. A fixed amount of processing capacity is available each day. Jobs of different priorities arrive randomly over time and a decision is required on which jobs should be scheduled on which days. The jobs that are waiting to be processed incur a holding cost depending on their priority levels. The objective is to minimize the total expected cost over a planning horizon. In this paper, the focus is on a class of policies that are characterized by a set of protection levels. The role of the protection levels is to “protect” a portion of the capacity from the lower priority jobs so as to make it available for the future higher priority jobs. A stochastic approximation method to find a good set of protection levels is developed and its convergence is proved. Computational experiments indicate that protection level policies perform especially well when the coefficient of variation for the job arrivals is high.[Supplementary materials are available for this article.
Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Technical appendix detailing the proofs of propositions.] Journal: IIE Transactions Pages: 498-510 Issue: 6 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802706543 File-URL: http://hdl.handle.net/10.1080/07408170802706543 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:6:p:498-510 Template-Type: ReDIF-Article 1.0 Author-Name: N. Maggio Author-X-Name-First: N. Author-X-Name-Last: Maggio Author-Name: A. Matta Author-X-Name-First: A. Author-X-Name-Last: Matta Author-Name: S. Gershwin Author-X-Name-First: S. Author-X-Name-Last: Gershwin Author-Name: T. Tolio Author-X-Name-First: T. Author-X-Name-Last: Tolio Title: A decomposition approximation for three-machine closed-loop production systems with unreliable machines, finite buffers and a fixed population Abstract: This paper describes an approximate analytical method for evaluating the average values of throughput and buffer levels of closed three-machine production systems with finite buffers. The method includes a new set of decomposition equations and a new building block model. The machines have deterministic processing times and geometrically distributed up- and downtimes. The numerical results of the method are close to those from simulation. The method performs well because it takes into account the correlation among the numbers of parts in the buffers. Extensions to larger systems are discussed.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix of additional numerical results] Journal: IIE Transactions Pages: 562-574 Issue: 6 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802714695 File-URL: http://hdl.handle.net/10.1080/07408170802714695 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:6:p:562-574 Template-Type: ReDIF-Article 1.0 Author-Name: Tsan Ng Author-X-Name-First: Tsan Author-X-Name-Last: Ng Author-Name: John Fowler Author-X-Name-First: John Author-X-Name-Last: Fowler Author-Name: Ivy Mok Author-X-Name-First: Ivy Author-X-Name-Last: Mok Title: Robust demand service achievement for the co-production newsvendor Abstract: The co-production newsvendor problem is motivated by two-stage production processes that simultaneously yield a set of output products of different grades from the same input stocks. Co-production is a characteristic feature of processes such as semiconductor manufacturing and crude oil distillation. In the first stage, the newsvendor executes the order quantities for the input stocks prior to learning the actual demands and grading fractions of the products. In the second stage, the available production is allocated to satisfy the realized demands. Downward substitution is allowed in the allocation; i.e., demands for lower grades can always be filled by higher grades but not vice versa. The co-production newsvendor seeks to achieve the maximum demand service level, subject to resource or budget constraints. This article proposes the use of the aspiration level approach to model the decision problem. Furthermore, it is assumed that only the means and supports of the uncertain demands and grading fractions are available, and the model is extended using robust optimization techniques.
The resulting model is a linear program and can be solved very efficiently. Computational tests show that the proposed model performs favorably compared to other stochastic optimization approaches for the same problem. Journal: IIE Transactions Pages: 327-341 Issue: 5 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.587865 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.587865 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:5:p:327-341 Template-Type: ReDIF-Article 1.0 Author-Name: Eduardo Camponogara Author-X-Name-First: Eduardo Author-X-Name-Last: Camponogara Author-Name: Luiz Nazari Author-X-Name-First: Luiz Author-X-Name-Last: Nazari Author-Name: Claudio Meneses Author-X-Name-First: Claudio Author-X-Name-Last: Meneses Title: A revised model for compressor design and scheduling in gas-lifted oil fields Abstract: The design and real-time scheduling of lift-gas compressors in oil fields entails solving a mixed-integer non-linear problem that generalizes the facility location problem. This article presents a revised formulation that represents the constraints on compressor discharge pressure as a family of linear inequalities, which is shown to be tighter than a previous formulation. Mixed-integer linear approximations for these formulations are obtained by piecewise linearizing the non-linear functions, using binary variables and specially ordered set variables. Valid inequalities for compressor capacity along with separation and lifting procedures are proposed. The article also presents computational experiments comparing the formulations and evaluating the impact of cutting plane generation on solution speed. Journal: IIE Transactions Pages: 342-351 Issue: 5 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.587866 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.587866 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:5:p:342-351 Template-Type: ReDIF-Article 1.0 Author-Name: John Shortle Author-X-Name-First: John Author-X-Name-Last: Shortle Author-Name: Chun-Hung Chen Author-X-Name-First: Chun-Hung Author-X-Name-Last: Chen Author-Name: Ben Crain Author-X-Name-First: Ben Author-X-Name-Last: Crain Author-Name: Alexander Brodsky Author-X-Name-First: Alexander Author-X-Name-Last: Brodsky Author-Name: Daniel Brod Author-X-Name-First: Daniel Author-X-Name-Last: Brod Title: Optimal splitting for rare-event simulation Abstract: Simulation is a popular tool for analyzing large, complex, stochastic engineering systems. When estimating rare-event probabilities, efficiency is a big concern, since a huge number of simulation replications may be needed in order to obtain a reasonable estimate of the rare-event probability. The idea of splitting has emerged as a promising variance reduction technique. The basic idea is to create separate copies (splits) of the simulation whenever it gets close to the rare event. Some splitting methods use an equal number of splits at all levels. This can compromise the efficiency and can even increase the estimation variance. This article formulates the problem of determining the number of splits as an optimization problem that minimizes the variance of an estimator subject to a constraint on the total computing budget. 
An optimal solution for a certain class of problems is derived and then extended to the problem of choosing the better of two designs, where each design is evaluated via rare-event simulation. Theoretical results for the improvements that are achievable using the methods are provided. Numerical experiments indicate that the proposed approaches are efficient and robust. Journal: IIE Transactions Pages: 352-367 Issue: 5 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.596507 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.596507 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:5:p:352-367 Template-Type: ReDIF-Article 1.0 Author-Name: Ajay Malaviya Author-X-Name-First: Ajay Author-X-Name-Last: Malaviya Author-Name: Chase Rainwater Author-X-Name-First: Chase Author-X-Name-Last: Rainwater Author-Name: Thomas Sharkey Author-X-Name-First: Thomas Author-X-Name-Last: Sharkey Title: Multi-period network interdiction problems with applications to city-level drug enforcement Abstract: This article considers a new class of multi-period network interdiction problems that focus on scheduling the activities of law enforcement in order to successfully interdict criminals in an illegal drug supply chain. This class of problems possesses several novel features for interdiction problems that were motivated through collaborations with city-level drug enforcement officials. These features include modeling the temporal aspects of these interdictions and the requirements associated with building interdictions in order to arrest high-ranking criminals in the drug supply chain. Based on these collaborations, a systematic procedure is developed to generate realistic test instances of the multi-period network interdiction problem. Computational analysis on these realistic test instances provides guidance on the policies that law enforcement should implement in their interdiction activities. Journal: IIE Transactions Pages: 368-380 Issue: 5 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.602659 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.602659 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:5:p:368-380 Template-Type: ReDIF-Article 1.0 Author-Name: Oben Ceryan Author-X-Name-First: Oben Author-X-Name-Last: Ceryan Author-Name: Izak Duenyas Author-X-Name-First: Izak Author-X-Name-Last: Duenyas Author-Name: Yoram Koren Author-X-Name-First: Yoram Author-X-Name-Last: Koren Title: Optimal control of an assembly system with demand for the end-product and intermediate components Abstract: This article considers the production and admission control decisions for a two-stage manufacturing system where intermediate components are produced to stock in the first stage and an end-product is assembled from these components through a second-stage assembly operation. The firm faces two types of demand. The demand for the end-product is satisfied immediately if there are available products in inventory while the firm has the option to accept the order for later delivery or to reject it when no inventory is available. Demand for intermediate components may be accepted or rejected to keep components available for assembly purposes. The structure of demand admission, component production and product assembly decisions is characterized.
The proposed model is extended to take into account multiple customer classes and a more general revenue collecting scheme where only an upfront partial payment is collected if a customer demand is accepted for future delivery, with the remaining revenue received upon delivery. Since the optimal policy structure is rather complex and defined by switching surfaces in a multidimensional space, a simple heuristic policy is proposed whose computational load grows linearly with the number of products; its performance is tested on a variety of example problems. Journal: IIE Transactions Pages: 386-403 Issue: 5 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.609525 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.609525 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:5:p:386-403 Template-Type: ReDIF-Article 1.0 Author-Name: Raghu Pasupathy Author-X-Name-First: Raghu Author-X-Name-Last: Pasupathy Author-Name: Bruce Schmeiser Author-X-Name-First: Bruce Author-X-Name-Last: Schmeiser Author-Name: Michael Taaffe Author-X-Name-First: Michael Author-X-Name-Last: Taaffe Author-Name: Jin Wang Author-X-Name-First: Jin Author-X-Name-Last: Wang Title: Control-variate estimation using estimated control means Abstract: This article studies control-variate estimation where the control mean itself is estimated. Control-variate estimation in simulation experiments can significantly increase sampling efficiency and has traditionally been restricted to cases where the control has a known mean. In a previous paper, the current authors generalized the idea of control-variate estimation to the case where the control mean is only approximated. The result is a biased but possibly useful estimator. For that case, a mean-square-error optimal estimator was provided and its properties were discussed. This article generalizes classical control-variate estimation to the case of Control Variates using Estimated Means (CVEMs). CVEMs replace the control mean with an estimated value for the control mean obtained from a prior simulation experiment. Although the resulting control-variate estimator is unbiased, it does introduce additional sampling error and so its properties are not the same as those of the standard control-variate estimator. A CVEM estimator is developed that minimizes the overall estimator variance. Both biased control variates and CVEMs can be used to improve the efficiency of stochastic simulation experiments. Their main appeal is that the restriction of having to know (deterministically) the exact value of the control mean is eliminated; thus, the space of possible controls is greatly increased. Journal: IIE Transactions Pages: 381-385 Issue: 5 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.610430 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.610430 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:5:p:381-385 Template-Type: ReDIF-Article 1.0 Author-Name: Nima Safaei Author-X-Name-First: Nima Author-X-Name-Last: Safaei Author-Name: Ali Zuashkiani Author-X-Name-First: Ali Author-X-Name-Last: Zuashkiani Title: Manufacturing system design by considering multiple machine replacements under discounted costs Abstract: This article investigates the effect of equipment replacement on the design phase of multi-machine manufacturing systems, given a finite horizon and discounted costs.
For the most part, the manufacturing system design literature has focused on the design issue, ignoring equipment replacement and its economic impact. The design phase generally consists of equipment selection, process routing, and layout decisions. The authors propose an explicit mathematical form for the operating costs of equipment and their salvage values based on their previous experience with life-cycle costing projects. The design phase of cellular manufacturing systems, the so-called cell formation problem, is considered. The problem is formulated as a non-linear mixed-integer programming model and solved using a proposed branch-and-bound algorithm. The algorithm employs a depth-first branching strategy in conjunction with a bounding procedure based on a heuristic method. Selected numerical examples demonstrate the applicability of the model and verify the performance of the proposed algorithm. The results enable the best equipment mix and product process routes to be chosen based on the given horizon and economic factors; in addition, information is obtained about which equipment should be replaced and at what time point this replacement should occur. Journal: IIE Transactions Pages: 1100-1114 Issue: 12 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2012.654845 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.654845 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:12:p:1100-1114 Template-Type: ReDIF-Article 1.0 Author-Name: Jonathan Bard Author-X-Name-First: Jonathan Author-X-Name-Last: Bard Author-Name: Zhufeng Gao Author-X-Name-First: Zhufeng Author-X-Name-Last: Gao Author-Name: Rodolfo Chacon Author-X-Name-First: Rodolfo Author-X-Name-Last: Chacon Author-Name: John Stuber Author-X-Name-First: John Author-X-Name-Last: Stuber Title: Real-time decision support for assembly and test operations in semiconductor manufacturing Abstract: This article presents an efficient procedure for prioritizing machine changeovers in a semiconductor assembly and test facility on a periodic basis. In daily planning, target machine–tooling combinations are derived based on work-in-process, due dates, and backlogs. As machines finish their current lots, they need to be reconfigured to match their target setups. The proposed algorithm is designed to achieve this objective and run in real time. It first determines which machines are set up optimally and, for those that are not, it sequentially calculates how best to reset them within a given amount of time, taking into account when the necessary tooling will become available, the importance of the lots in queue, and the given targets. Alternatively, two mixed-integer programming models are also presented that have similar objectives. Experimental results using data provided by a leading semiconductor manufacturer indicate that high-quality solutions can be obtained with the prioritizing procedure in negligible time. In most cases, these solutions are identical to those obtained with one of the two optimization models. Journal: IIE Transactions Pages: 1083-1099 Issue: 12 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2012.663519 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.663519 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:44:y:2012:i:12:p:1083-1099 Template-Type: ReDIF-Article 1.0 Author-Name: Debjit Roy Author-X-Name-First: Debjit Author-X-Name-Last: Roy Author-Name: Ananth Krishnamurthy Author-X-Name-First: Ananth Author-X-Name-Last: Krishnamurthy Author-Name: Sunderesh Heragu Author-X-Name-First: Sunderesh Author-X-Name-Last: Heragu Author-Name: Charles Malmborg Author-X-Name-First: Charles Author-X-Name-Last: Malmborg Title: Performance analysis and design trade-offs in warehouses with autonomous vehicle technology Abstract: Distribution centers have recently adopted Autonomous Vehicle-based Storage and Retrieval Systems (AVS/RSs) as a potential alternative to traditional automated storage and retrieval systems for processing unit-load operations. The autonomy of the vehicles in an AVS/RS provides a level of hardware sophistication, which can lead to the improvements in operational efficiency and flexibility that will be necessary in distribution centers of the future. However, in order to exploit the potential benefits of the technology, an AVS/RS must be designed using a detailed understanding of the underlying dynamics and performance trade-offs. Design decisions such as the configuration of aisles and columns, allocation of resources to zones, and vehicle assignment rules can have a significant impact on the performance of AVS/RSs. In this research, the performance impact of these design decisions is investigated using an analytical model. The system is modeled as a multi-class semi-open queuing network with class switching, and a decomposition-based approach is developed to evaluate the system performance and obtain insights. Numerical studies provide various insights that could be useful in the design conceptualization of AVS/RSs. Journal: IIE Transactions Pages: 1045-1060 Issue: 12 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2012.665201 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.665201 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:12:p:1045-1060 Template-Type: ReDIF-Article 1.0 Author-Name: Remco Bierbooms Author-X-Name-First: Remco Author-X-Name-Last: Bierbooms Author-Name: Ivo Adan Author-X-Name-First: Ivo Author-X-Name-Last: Adan Author-Name: Marcel van Vuuren Author-X-Name-First: Marcel Author-X-Name-Last: van Vuuren Title: Performance analysis of exponential production lines with fluid flow and finite buffers Abstract: This article presents an approximative analysis of production lines with fluid flow, consisting of a number of machines or servers in series and a finite buffer between each pair of servers. Each server suffers from operationally dependent breakdowns, characterized by exponentially distributed up- and downtimes. An iterative method is constructed that efficiently and accurately estimates performance characteristics such as throughput and mean total buffer content. The method is based on decomposition of the production line into single-buffer subsystems. Novel features of the method are (i) modeling of the aggregate servers in each subsystem; (ii) equations to iteratively determine the processing behavior of these servers; and (iii) use of matrix-analytic techniques to analyze each subsystem. The proposed method performs well on a large test set, including long and imbalanced production lines.
For production lines with imbalance in mean downtimes, it is shown that a more refined modeling of the servers in each subsystem leads to significantly better performance. Journal: IIE Transactions Pages: 1132-1144 Issue: 12 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2012.668263 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.668263 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:12:p:1132-1144 Template-Type: ReDIF-Article 1.0 Author-Name: Zheng Wang Author-X-Name-First: Zheng Author-X-Name-Last: Wang Author-Name: Yugang Yu Author-X-Name-First: Yugang Author-X-Name-Last: Yu Title: Robust production control policy for a single machine and single part-type manufacturing system with inaccurate observation of production surplus Abstract: The optimal production control policy (i.e., the hedging point policy) for a single machine and single part-type manufacturing system, for the case where the production surplus is accurately known, has been reported in the literature. However, the production surplus is often observed inaccurately in practice. Ignoring this inaccuracy in using the optimal policy leads not only to non-robust behavior but also to high production costs. To account for the effect of inaccurate observation, the production control policy must be robust. The robustness can be evaluated in terms of the difference between the production surplus trajectory under the policy with an inaccurate observation and the trajectory under the policy with an accurate production surplus (i.e., the hedging point policy). A difference of zero is the ideal case; however, this is generally impossible to achieve since the extent of the observation error is unknown. This article proposes a new robust production control policy that can give a small difference between the two trajectories. Compared with the hedging point policy, the proposed policy is less sensitive to production surplus inaccuracy in the neighborhood of the hedging point but responds more slowly to a failure of the machine. Simulation studies show that the proposed policy is more robust than the hedging point policy for cases where, as in actual industrial practice, the time the machine spends failed is much shorter than its uptime. In these cases, with an increase in the observation error, the robustness of the proposed policy becomes increasingly better than that of the hedging point policy. Journal: IIE Transactions Pages: 1061-1082 Issue: 12 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2012.669879 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.669879 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:12:p:1061-1082 Template-Type: ReDIF-Article 1.0 Author-Name: Chang Kang Author-X-Name-First: Chang Author-X-Name-Last: Kang Author-Name: Yoo Hong Author-X-Name-First: Yoo Author-X-Name-Last: Hong Author-Name: Woonghee Huh Author-X-Name-First: Woonghee Author-X-Name-Last: Huh Title: Platform replacement planning for management of product family obsolescence Abstract: In our rapidly changing world, existing technologies quickly obsolesce and new technologies continually emerge. While it is desirable that product platforms are designed for long life spans, state-of-the-art products cannot be developed on platforms with obsolete technologies.
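As an aside to the robust production control record above: the hedging point policy that serves as its benchmark has a simple closed form, sketched below in its textbook single-machine, single-part-type version. The parameter names are illustrative.

```python
def hedging_point_rate(surplus, z_star, max_rate, demand_rate, machine_up):
    """Textbook hedging point policy: produce at full capacity below the
    hedging point z_star, track demand exactly at the hedging point, and
    stop producing above it. With inaccurate surplus observations, this
    bang-bang structure is what makes the policy sensitive near z_star."""
    if not machine_up:
        return 0.0
    if surplus < z_star:
        return max_rate
    if surplus == z_star:
        return demand_rate
    return 0.0
```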
The purpose of this study is to develop a model to determine the optimal lifetime of platforms by trading off the cost efficiency of platform development against the lost sales due to obsolete technologies. In order to predict sales revenue with respect to platform lifetime, a stochastic product introduction model is developed based on a diffusion model. A computational analysis is performed to reveal the influence of platform features and the market situation on the optimal lifetime and profitability of platforms. Furthermore, the economic value of slowing the pace of obsolescence is assessed in order to glean insights into the design of a robust or flexible platform. Journal: IIE Transactions Pages: 1115-1131 Issue: 12 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2012.672791 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.672791 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:12:p:1115-1131 Template-Type: ReDIF-Article 1.0 Author-Name: The Editors Title: Editorial Board EOV Journal: IIE Transactions Pages: ebi-ebiv Issue: 12 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2012.729981 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.729981 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:12:p:ebi-ebiv Template-Type: ReDIF-Article 1.0 Author-Name: Stratos Ioannidis Author-X-Name-First: Stratos Author-X-Name-Last: Ioannidis Title: Joint production and quality control in production systems with two customer classes and lost sales Abstract: This article considers a single-product, make-to-stock manufacturing system facing random demand from two customer classes with different quality cost and profit parameters. Each outgoing product is inspected and graded on the basis of quality. Each customer class can only be served from the inventory of products of certain quality grades, if any exist, and the system may reserve a fraction of the inventory of some quality grade for customers of a certain class. The problem is one of finding a product quality grading plan and production and inventory rationing policies to maximize the mean profit rate of the system. The structures of the optimal production and order admission control policies are investigated numerically for the case when a specific quality grading plan is used. Based on this investigation, some simple and efficient threshold-type control policies are proposed. From numerical results, it appears that the proposed approach of coordinated quality, production control, and inventory control achieves higher profit than other manufacturing practices, in which there is little or no coordination between the production and quality control departments. Journal: IIE Transactions Pages: 605-616 Issue: 6 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.721948 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.721948 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:45:y:2013:i:6:p:605-616 Template-Type: ReDIF-Article 1.0 Author-Name: Kaibo Liu Author-X-Name-First: Kaibo Author-X-Name-Last: Liu Author-Name: Jianjun Shi Author-X-Name-First: Jianjun Author-X-Name-Last: Shi Title: Objective-oriented optimal sensor allocation strategy for process monitoring and diagnosis by multivariate analysis in a Bayesian network Abstract: Measurement strategy and sensor allocation have a direct impact on the product quality, productivity, and cost. This article studies the couplings or interactions between the optimal design of a sensor system and quality management in a manufacturing system, which can improve cost-effectiveness and production yield by considering sensor cost, process change detection speed, and fault diagnosis accuracy. Based on an established definition of sensor allocation in a Bayesian network, an algorithm named “Best Allocation Subsets by Intelligent Search” (BASIS) is developed in this article to obtain the optimal sensor allocation design at minimum cost under different specified Average Run Length (ARL) requirements. Unlike previous approaches reported in the literature, the BASIS algorithm is developed based on investigating a multivariate T² control chart when only partial observations are available. After implementing the derived optimal sensor solution, a diagnosis ranking method is proposed to find the root cause variables by ranking all of the identified potential faults. Two case studies are conducted on a hot forming process and a cap alignment process to illustrate and evaluate the developed methods. Journal: IIE Transactions Pages: 630-643 Issue: 6 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.725505 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.725505 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:6:p:630-643 Template-Type: ReDIF-Article 1.0 Author-Name: Fan-Tien Cheng Author-X-Name-First: Fan-Tien Author-X-Name-Last: Cheng Author-Name: Yu-Chen Chiu Author-X-Name-First: Yu-Chen Author-X-Name-Last: Chiu Title: Applying the Automatic Virtual Metrology system to obtain tube-to-tube control in a PECVD tool Abstract: A joint development project to deploy a Tube-to-Tube (T2T) control scheme for Plasma-Enhanced Chemical Vapor Deposition (PECVD) utilizing the Automatic Virtual Metrology (AVM) system is in progress. In the PECVD process utilized in solar cell manufacturing, the sampling rate is less than 10%. However, T2T control requires 100% total inspection. To meet this requirement, a large number of measurements would be needed, which increases production cycle time and cost. Virtual Metrology (VM) is proposed to resolve this problem. However, a key problem prohibiting the effective utilization of VM in T2T control is the failure to take the reliance level of the VM values into consideration in the feedback loop. In addition, adopting an unreliable VM value may lead to worse results than not utilizing VM. In this article, the proposed T2T controller utilizes the VM value of the current run, its accompanying reliance index and global similarity index, and information about the batch obtained in the first run to calculate a suggested deposition time for the following run, thereby improving the process capability index and resolving the unreliability issues.
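One plausible reading of the tube-to-tube scheme described above, sketched only to make the role of the reliance index concrete; the proportional-error update, the gain, and the threshold are assumptions for illustration, not the authors' published control law.

```python
def t2t_suggestion(prev_time, target, vm_value, reliance, gain=0.5, threshold=0.7):
    """Illustrative run-to-run adjustment of the deposition time: apply a
    correction only when the virtual-metrology (VM) value is deemed reliable,
    and scale the correction by the reliance index so that untrustworthy VM
    feedback cannot destabilize the loop."""
    if reliance < threshold:
        return prev_time                      # distrust the VM feedback entirely
    error = (target - vm_value) / target      # relative deviation from target
    return prev_time * (1.0 + gain * reliance * error)
```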
Journal: IIE Transactions Pages: 670-681 Issue: 6 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.725507 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.725507 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:6:p:670-681 Template-Type: ReDIF-Article 1.0 Author-Name: Ran Jin Author-X-Name-First: Ran Author-X-Name-Last: Jin Author-Name: Kaibo Liu Author-X-Name-First: Kaibo Author-X-Name-Last: Liu Title: Multimode variation modeling and process monitoring for serial-parallel multistage manufacturing processes Abstract: A Serial-Parallel Multistage Manufacturing Process (SP-MMP) may have multiple variation propagation modes in its production runs when process routes vary from part to part. Conventional methods that ignore such multimode variation may not be able to effectively model and monitor the variation streams. It is also very challenging to model such a process when the available engineering domain knowledge is insufficient to characterize the variation streams. This article proposes a data-driven method, piecewise linear regression trees, to interrelate the variables for an SP-MMP with multimode variation. A unified control chart system is developed to monitor the process considering modeling uncertainty. The application to a more generic multistage multimode process is discussed. Finally, the effectiveness of the proposed procedure is demonstrated in an application involving a wafer manufacturing process. Journal: IIE Transactions Pages: 617-629 Issue: 6 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.728729 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.728729 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:6:p:617-629 Template-Type: ReDIF-Article 1.0 Author-Name: Yongxin Cao Author-X-Name-First: Yongxin Author-X-Name-Last: Cao Author-Name: Velusamy Subramaniam Author-X-Name-First: Velusamy Author-X-Name-Last: Subramaniam Title: Improving the performance of manufacturing systems with continuous sampling plans Abstract: Continuous sampling plans are commonly adopted in various manufacturing industries to improve quality. Quantitative performance measures such as throughput and Work-In-Process (WIP) are highly coupled with quality and are also affected by sampling plans. However, quantitative and qualitative issues have long been studied separately in the literature. This article proposes an integrated quantity and quality model for manufacturing systems with sampling plans. A continuous-time, discrete-part-flow Markov model is first proposed for a single-stage system. Unlike previous Markov models, this model is capable of calculating various performance measures; e.g., throughput, quality, average fraction of inspection, and WIP. A method for performance analysis of two-stage systems is presented. The method’s satisfactory accuracy is demonstrated with experimental results. Additionally, quantitative analysis is performed to investigate the effect of the sampling fraction and clearance number (parameters of a sampling plan) on the various performance measures. Industrial practitioners may use these results to improve performance by varying the sampling plans. Finally, a decomposition model of multistage systems is studied, based on which a method to determine the best sampling plans for manufacturing systems is developed.
Numerical experimental results demonstrate the effectiveness of the proposed method in finding appropriate sampling parameters for profit improvement. Journal: IIE Transactions Pages: 575-590 Issue: 6 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.733485 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.733485 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:6:p:575-590 Template-Type: ReDIF-Article 1.0 Author-Name: Zhenrui Wang Author-X-Name-First: Zhenrui Author-X-Name-Last: Wang Author-Name: Sean Dessureault Author-X-Name-First: Sean Author-X-Name-Last: Dessureault Author-Name: Jian Liu Author-X-Name-First: Jian Author-X-Name-Last: Liu Title: Quality-driven workforce performance evaluation based on robust regression and ANOMR/ANOMRV chart Abstract: The integration of quality improvement and manufacturing system management has emerged as a promising research topic in recent years. Since operators’ performance variation can be reflected in product quality, workforce performance evaluation should be conducted with quality-based metrics to improve product quality as well as manufacturing system productivity. In this article, a methodology incorporating regression modeling and multiple comparisons is proposed to aid performance evaluation. The effects of other factors that contribute to operators’ performance variation are quantified with a robust zero-inflated Poisson regression model. The model coefficients are analyzed with multiple hypothesis tests to identify underperforming machines. Two statistical charts used in multiple comparisons are adopted for identifying underperforming operators. A case study with data from a real-world production system and a simulation experiment are presented to demonstrate the proposed methodology. Journal: IIE Transactions Pages: 644-657 Issue: 6 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.733486 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.733486 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:6:p:644-657 Template-Type: ReDIF-Article 1.0 Author-Name: Elisa Gebennini Author-X-Name-First: Elisa Author-X-Name-Last: Gebennini Author-Name: Stanley Gershwin Author-X-Name-First: Stanley Author-X-Name-Last: Gershwin Title: Modeling waste production into two-machine–one-buffer transfer lines Abstract: This article focuses on analytical models of two-machine one-buffer Markov lines including waste production. The aim is to compute the probability of producing good parts, referred to as effective efficiency, when waste production is related to stoppages of the first machine. This problem is common in industrial fields where parts are generated by a continuous process; e.g., in high-speed beverage packaging lines. Two innovative models including waste production are presented: the WP-Basic Model extends the model of a basic two-machine–one-buffer transfer line; the WP-RP Model extends the model of a two-machine–one-buffer transfer line with a restart policy operating on the first machine (i.e., when the first machine is blocked because the buffer is full, it is not allowed to resume production until the buffer becomes empty). The two existing models are improved by distinguishing, at any time step in which the first machine is operational, whether it is producing a good or a bad part.
The probabilities of the system being in any feasible state are analytically derived for both the WP-Basic Model and the WP-RP Model. Then, the obtained probabilities are used to determine the performance measures of interest; i.e., waste probability and effective efficiency. Finally, some numerical results are provided to illustrate the effectiveness of the WP-Basic Model and the WP-RP Model. Journal: IIE Transactions Pages: 591-604 Issue: 6 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.748994 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.748994 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:6:p:591-604 Template-Type: ReDIF-Article 1.0 Author-Name: Robert Inman Author-X-Name-First: Robert Author-X-Name-Last: Inman Author-Name: Dennis Blumenfeld Author-X-Name-First: Dennis Author-X-Name-Last: Blumenfeld Author-Name: Ningjian Huang Author-X-Name-First: Ningjian Author-X-Name-Last: Huang Author-Name: Jingshan Li Author-X-Name-First: Jingshan Author-X-Name-Last: Li Author-Name: Jing Li Author-X-Name-First: Jing Author-X-Name-Last: Li Title: Survey of recent advances on the interface between production system design and quality Abstract: Product design's impact on quality is widely recognized. Less well recognized is the impact of production system design on quality. Just as quality can be improved by integrating it with the design of the product, so too can it be improved by integrating quality with the design of the production system. This article provides evidence of the production system's influence on quality and surveys recent advances on the interface between quality and production system design, including the design of the production system's quality control process. After mapping the literature, we identify opportunities for future research. Journal: IIE Transactions Pages: 557-574 Issue: 6 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.757680 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.757680 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:6:p:557-574 Template-Type: ReDIF-Article 1.0 Author-Name: Kaibo Wang Author-X-Name-First: Kaibo Author-X-Name-Last: Wang Author-Name: Kai Han Author-X-Name-First: Kai Author-X-Name-Last: Han Title: A batch-based run-to-run process control scheme for semiconductor manufacturing Abstract: Run-to-Run (R2R) control has been widely used in semiconductor manufacturing to compensate for process disturbance and to improve quality. The traditional R2R controller only takes the process output in a previous run as its input and generates an optimal recipe for the next run. In a multistage semiconductor manufacturing process, variations in upstream stations are propagated to downstream stations. However, the information from upstream stations is not considered by existing controllers. In addition, most R2R processes have a limited capacity; the products must be processed in batches. Therefore, if the incoming materials could be grouped so that within-batch variation is small (and batch-to-batch variation correspondingly large) and the recipes were customized for each batch to drive all batch averages toward the same target value, the output variation could be reduced and quality improved. A batch Exponentially Weighted Moving Average (EWMA) controller is proposed in this article.
It employs a modified K-means algorithm to group the incoming materials into batches of fixed, equal size while minimizing the within-batch variation. The controller then generates the control settings by taking both the batch information and the feedback quality information into account. Simulation studies show that the proposed controller could significantly reduce output variation and improve quality. Journal: IIE Transactions Pages: 658-669 Issue: 6 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.757681 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.757681 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:6:p:658-669 Template-Type: ReDIF-Article 1.0 Author-Name: Jingshan Li Author-X-Name-First: Jingshan Author-X-Name-Last: Li Author-Name: Jing Li Author-X-Name-First: Jing Author-X-Name-Last: Li Title: Integration of manufacturing system design and quality management Journal: IIE Transactions Pages: 555-556 Issue: 6 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2013.774240 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.774240 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:6:p:555-556 Template-Type: ReDIF-Article 1.0 Author-Name: Lianjie Shu Author-X-Name-First: Lianjie Author-X-Name-Last: Shu Author-Name: Wei Jiang Author-X-Name-First: Wei Author-X-Name-Last: Jiang Author-Name: Zhang Wu Author-X-Name-First: Zhang Author-X-Name-Last: Wu Title: Exponentially weighted moving average control charts for monitoring increases in Poisson rate Abstract: The Exponentially Weighted Moving Average (EWMA) control chart has been widely studied as a tool for monitoring normal processes due to its simplicity and efficiency. However, relatively little attention has been paid to EWMA charts for monitoring Poisson processes. This article extends EWMA charts to Poisson processes with an emphasis on quick detection of increases in Poisson rate. Both cases with and without normalizing transformation for Poisson data are considered. A Markov chain model is established to analyze and design the proposed chart. A comparison of the results obtained indicates that the EWMA chart based on normalized data is nearly optimal. Journal: IIE Transactions Pages: 711-723 Issue: 9 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.578609 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.578609 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:9:p:711-723 Template-Type: ReDIF-Article 1.0 Author-Name: Kwok-Leung Tsui Author-X-Name-First: Kwok-Leung Author-X-Name-Last: Tsui Author-Name: Sung Han Author-X-Name-First: Sung Author-X-Name-Last: Han Author-Name: Wei Jiang Author-X-Name-First: Wei Author-X-Name-Last: Jiang Author-Name: William Woodall Author-X-Name-First: William Author-X-Name-Last: Woodall Title: A review and comparison of likelihood-based charting methods Abstract: This article presents a review of two popular methods for temporal surveillance and proposes a general framework for spatial and spatiotemporal surveillance based on likelihood ratio statistics. It is shown that the cumulative sum and Shiryayev–Roberts statistics are special cases under such a general framework.
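To make the Poisson EWMA record above concrete: a minimal one-sided chart for detecting rate increases, without the normalizing transformation. The smoothing weight and control-limit multiplier are illustrative choices; a Markov chain model, as in the abstract, would be used to calibrate them to a target average run length.

```python
import numpy as np

def poisson_ewma_alarms(counts, lam0, w=0.2, L=2.8):
    """One-sided EWMA chart for Poisson counts with in-control rate lam0.
    The asymptotic variance of the EWMA statistic is w/(2 - w) * lam0,
    since Var(X) = lam0 for Poisson data."""
    ucl = lam0 + L * np.sqrt(w / (2.0 - w) * lam0)
    z, alarms = lam0, []                      # start the EWMA at the in-control mean
    for t, x in enumerate(counts):
        z = w * x + (1.0 - w) * z
        if z > ucl:
            alarms.append(t)
    return alarms
```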
The efficiencies of some surveillance methods are compared for the detection of clusters of high incidence rates in spatial and spatiotemporal applications using both Monte Carlo simulations and a real example. Journal: IIE Transactions Pages: 724-743 Issue: 9 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.582476 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.582476 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:9:p:724-743 Template-Type: ReDIF-Article 1.0 Author-Name: Tao Yuan Author-X-Name-First: Tao Author-X-Name-Last: Yuan Author-Name: Xiaoyan Zhu Author-X-Name-First: Xiaoyan Author-X-Name-Last: Zhu Title: Reliability study of ultra-thin dielectric films with variable thickness levels Abstract: The time-dependent dielectric breakdown of ultra-thin gate oxides is one of the major reliability issues facing scaled metal-oxide semiconductor technologies. As the thickness of the gate dielectric film approaches its scaling limit, process issues such as poor wafer uniformity and oxide growth control become critical for ultra-thin gate dielectric reliability. This article investigates both the physical and statistical aspects of the reliability of the ultra-thin gate dielectric when film thickness variations are present. A physics-based Spatio-Temporal Monte Carlo Simulation (STMCS) model is developed to study the effects of thickness and thickness non-uniformity on dielectric reliability. Its use reveals the root cause of the non-linear characteristic of the Weibull time-to-breakdown distribution observed in experimental studies. In addition, a Bayesian Weibull Mixture (BWM) model is proposed to analyze the time-to-breakdown data considering the existence of thickness non-uniformity. Numerical results are presented that show that the proposed BWM model is significantly superior to the basic Weibull model for use in reliability projection. Both the STMCS and BWM models can successfully reproduce the experimentally observed non-linear characteristic of the Weibull time-to-breakdown distribution and thus can be used to predict device reliability when the dielectric films in a gate are of unequal thicknesses. Journal: IIE Transactions Pages: 744-753 Issue: 9 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.584958 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.584958 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:9:p:744-753 Template-Type: ReDIF-Article 1.0 Author-Name: Cheng-Hung Hu Author-X-Name-First: Cheng-Hung Author-X-Name-Last: Hu Author-Name: Robert Plante Author-X-Name-First: Robert Author-X-Name-Last: Plante Author-Name: Jen Tang Author-X-Name-First: Jen Author-X-Name-Last: Tang Title: Step-stress accelerated life tests: a proportional hazards–based non-parametric model Abstract: Using data from a simple step-stress accelerated life test procedure, a non-parametric proportional hazards model is proposed for obtaining upper confidence bounds for the cumulative failure probability of a product under normal use conditions. The approach is non-parametric in the sense that most of the functions involved in the model do not assume any specific forms, except for certain verifiable conditions. Test statistics are introduced to verify assumptions about the model and to test the goodness of fit of the proposed model to the data.
A numerical example, using data simulated from the lifetime distribution of an existing parametric study on metal-oxide semiconductor capacitors, is used to illustrate the proposed methods. Discussions on how to determine the optimal stress levels and sample size are also given. Journal: IIE Transactions Pages: 754-764 Issue: 9 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.596508 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.596508 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:9:p:754-764 Template-Type: ReDIF-Article 1.0 Author-Name: Michael Khoo Author-X-Name-First: Michael Author-X-Name-Last: Khoo Author-Name: V. Wong Author-X-Name-First: V. Author-X-Name-Last: Wong Author-Name: Zhang Wu Author-X-Name-First: Zhang Author-X-Name-Last: Wu Author-Name: Philippe Castagliola Author-X-Name-First: Philippe Author-X-Name-Last: Castagliola Title: Optimal design of the synthetic chart for the process mean based on median run length Abstract: Control charts are usually designed using average run length as the criterion to be optimized. The shape of the run length distribution changes according to the shift in the mean, from highly skewed for an in-control process to approximately symmetric when the mean shift is large. Since the shape of the run length distribution changes with the mean shift, the Median Run Length (MRL) provides a more meaningful interpretation of the in-control and out-of-control performance of the charts, and it is readily understood by practitioners. This article presents an optimal design procedure for a synthetic chart able to monitor the mean based on the MRL under the zero- and steady-state modes. The synthetic chart integrates the X-bar chart and the conforming run length chart. Pseudocodes and Mathematica programs are presented for the computation of the optimal parameters of the synthetic chart based on a desired in-control MRL (MRL(0)), a given sample size, and a specified mean shift for which a quick detection is needed. [Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for additional example, additional performance study, proof, tables, and figures.] Journal: IIE Transactions Pages: 765-779 Issue: 9 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.609526 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.609526 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:9:p:765-779 Template-Type: ReDIF-Article 1.0 Author-Name: Qingyu Yang Author-X-Name-First: Qingyu Author-X-Name-Last: Yang Author-Name: Jionghua Jin Author-X-Name-First: Jionghua Author-X-Name-Last: Jin Title: Separation of individual operation signals from mixed sensor measurements Abstract: Sensor system measurements are generally mixed signals measured from multiple independent/dependent operations embedded in a complex system. In this article, a novel method is developed to separate the source signals of individual operations from the mixed sensor measurements by integrating the independent component analysis method and the Sparse Component Analysis (SCA) method. The proposed method can efficiently estimate the source signals that include both independent signals and dependent signals that have some dominant components in the time domain or in some linear transform domains (e.g., frequency domain, time/frequency domain, or wavelet domain).
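A small worked illustration of why the median run length matters, complementing the synthetic-chart record above: for a plain Shewhart X-bar chart the run length is geometric, so its mean and median differ substantially. The synthetic chart itself requires the Markov chain analysis described in the abstract; this baseline only shows the skewness that motivates MRL-based design.

```python
import math
from scipy.stats import norm

def shewhart_arl_mrl(delta, n=5, L=3.0):
    """ARL and MRL of a two-sided Shewhart X-bar chart with limits at
    +/- L*sigma/sqrt(n), for a mean shift of delta sigma units. The run
    length is geometric with per-sample signal probability p."""
    p = norm.sf(L - delta * math.sqrt(n)) + norm.cdf(-L - delta * math.sqrt(n))
    arl = 1.0 / p
    mrl = math.ceil(math.log(0.5) / math.log(1.0 - p))
    return arl, mrl

print(shewhart_arl_mrl(0.0))   # in control: ARL ~ 370 but MRL ~ 257
```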
In addition, an SCA method that can automatically identify the dominant components in multiple linear transform domains is developed in this article. A case study of a forging process is conducted to demonstrate the effectiveness of the proposed methods. Journal: IIE Transactions Pages: 780-792 Issue: 9 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.609873 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.609873 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:9:p:780-792 Template-Type: ReDIF-Article 1.0 Author-Name: Rensheng Zhou Author-X-Name-First: Rensheng Author-X-Name-Last: Zhou Author-Name: Nagi Gebraeel Author-X-Name-First: Nagi Author-X-Name-Last: Gebraeel Author-Name: Nicoleta Serban Author-X-Name-First: Nicoleta Author-X-Name-Last: Serban Title: Degradation modeling and monitoring of truncated degradation signals Abstract: Advancements in condition monitoring techniques have facilitated the utilization of sensor technology for predicting failures of engineering systems. Within this context, failure is defined as the point where a sensor-based degradation signal reaches a pre-specified failure threshold. Parametric degradation models rely on complete signals to estimate the parametric functional form and do not perform well with sparse historical data. On the other hand, non-parametric models that address the challenges of data sparsity usually assume that signal observations can be made beyond the failure threshold. Unfortunately, in most applications, degradation signals can only be observed up to the failure threshold, resulting in what this article refers to as truncated degradation signals. This article combines a non-parametric degradation modeling framework with a signal transformation procedure, allowing different types of truncated degradation signals to be characterized. This article considers (i) complete signals that result from constant monitoring of a system up to its failure; (ii) sparse signals resulting from sparse observations; and (iii) fragmented signals that result from dense observations over disjoint time intervals. The goal is to estimate and update the residual life distributions of partially degraded systems using in situ signal observations. It is shown that the proposed model outperforms existing models for all three signal types. Journal: IIE Transactions Pages: 793-803 Issue: 9 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.618175 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.618175 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:9:p:793-803 Template-Type: ReDIF-Article 1.0 Author-Name: Soongeol Kwon Author-X-Name-First: Soongeol Author-X-Name-Last: Kwon Author-Name: Natarajan Gautam Author-X-Name-First: Natarajan Author-X-Name-Last: Gautam Title: Guaranteeing performance based on time-stability for energy-efficient data centers Abstract: We consider a system of multiple parallel single-server queues where servers are heterogeneous with resources of different capacities and can be powered on or off, running at different speeds when they are powered on. In addition, we assume that application requests are heterogeneous with different workload distributions and resource requirements, and that the arrival rates of requests are time-varying. Managing such a heterogeneous, transient, and non-stationary system is a tremendous challenge.
We take an unconventional approach in that we force the queue length at each powered-on server to be time-stable (i.e., stationary). This allows operators to guarantee performance and effectively monitor the system. We formulate a mixed-integer program to minimize energy costs while enforcing time-stability. Simulation results show that our suggested approach can stabilize queue length distributions and provide probabilistic performance guarantees on waiting times. Journal: IIE Transactions Pages: 812-825 Issue: 9 Volume: 48 Year: 2016 Month: 9 X-DOI: 10.1080/0740817X.2015.1126003 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1126003 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:9:p:812-825 Template-Type: ReDIF-Article 1.0 Author-Name: Jonathan F. Bard Author-X-Name-First: Jonathan F. Author-X-Name-Last: Bard Author-Name: Zhichao Shu Author-X-Name-First: Zhichao Author-X-Name-Last: Shu Author-Name: Douglas J. Morrice Author-X-Name-First: Douglas J. Author-X-Name-Last: Morrice Author-Name: Luci K. Leykum Author-X-Name-First: Luci K. Author-X-Name-Last: Leykum Author-Name: Ramin Poursani Author-X-Name-First: Ramin Author-X-Name-Last: Poursani Title: Annual block scheduling for family medicine residency programs with continuity clinic considerations Abstract: This article presents a new model for constructing annual block schedules for family medicine residents based on the rules and procedures followed by the Family Medicine Department at the University of Texas Health Science Center in San Antonio (UTHSC-SA). Such residency programs provide 3 years of specialty training for recent medical school graduates. At the beginning of each academic year, each trainee is given an annual block schedule that indicates his or her monthly assignments. These assignments are called rotations and include a variety of experiences, such as pediatric ambulatory care, the emergency room, and inpatient surgery. An important requirement associated with a subset of the rotations is that the residents spend multiple half-day sessions a week in a primary care clinic treating patients from the community. This is a key consideration when constructing the annual block schedules. In particular, one of the primary goals of most residencies is to ensure that the number of residents in clinic each day is approximately the same, so that the number of patients that can be seen each day is also the same. Uniformity provides for a more efficient use of supervisory and staff resources. The difficulty in achieving this goal is that not all rotations allow for clinic duty and that the number of patients that can be seen by a resident each session depends on his or her year of training. When constructing annual block schedules, two high-level sets of variables are available to the program coordinator. The first is the assignment of residents to rotations for each of the 12 blocks, and the second is the (partial) ability to adjust the days on which a resident has clinic duty during each rotation. In approaching the problem, our aim was to redesign the current rotations while giving all residents a 12-month schedule that concurrently (i) balances the number of patients that can be seen in the clinic during each half-day session and (ii) minimizes the number of adjustments necessary to achieve the first objective. The problem was formulated as a mixed-integer program; however, it proved too difficult to solve exactly.
As an alternative, several optimization-based heuristics were developed that yielded good feasible solutions. The model and computations are illustrated with data provided by the Family Medicine Department at UTHSC-SA for a typical academic year. Journal: IIE Transactions Pages: 797-811 Issue: 9 Volume: 48 Year: 2016 Month: 9 X-DOI: 10.1080/0740817X.2015.1133942 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1133942 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:9:p:797-811 Template-Type: ReDIF-Article 1.0 Author-Name: Mohammad Saied Dehghani Author-X-Name-First: Mohammad Saied Author-X-Name-Last: Dehghani Author-Name: Hanif D. Sherali Author-X-Name-First: Hanif D. Author-X-Name-Last: Sherali Title: A resource allocation approach for managing critical network-based infrastructure systems Abstract: In recent years, many resource allocation models have been developed to protect critical infrastructure by maximizing system resiliency or minimizing its vulnerability to disasters or disruptions. However, these are often computationally intensive and require simplifying assumptions and approximations. In this study, we develop a robust and representative, yet tractable, model for optimizing maintenance planning of generic network-structured systems (transportation, water, power, communication). The proposed modeling framework examines models that consider both linear and nonlinear objective functions and enhances their structure through suitable manipulations. Moreover, the designed models inherently capture the network topology and the stochastic nature of disruptions and can be applied to network-structured systems where performance is assessed based on network flow efficiency and mobility. The developed models are applied to the Istanbul highway system in order to assess their relative computational effectiveness and robustness using several test cases that consider single- and multiple-treatment types, and the problems are solved on the NEOS server using different available software. The results demonstrate that our models are capable of obtaining optimal solutions within a very short time. Furthermore, the linear model is shown to yield a good approximation to the nonlinear model (it determined solutions within 0.3% of optimality, on average). Managerial insights are provided in regard to the optimal policies obtained, which generally appear to favor selecting fewer links and applying a higher-quality treatment to them. Journal: IIE Transactions Pages: 826-837 Issue: 9 Volume: 48 Year: 2016 Month: 9 X-DOI: 10.1080/0740817X.2016.1147662 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1147662 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:9:p:826-837 Template-Type: ReDIF-Article 1.0 Author-Name: Siyang Gao Author-X-Name-First: Siyang Author-X-Name-Last: Gao Author-Name: Weiwei Chen Author-X-Name-First: Weiwei Author-X-Name-Last: Chen Title: A new budget allocation framework for selecting top simulated designs Abstract: In this article, the problem of selecting an optimal subset from a finite set of simulated designs is considered. Given the total simulation budget constraint, the selection problem aims to maximize the Probability of Correct Selection (PCS) of the top m designs.
To reduce the complexity of the PCS, an approximate probability measure is developed and an asymptotically optimal solution of the resulting problem is derived. A subset selection procedure, which is easy to implement in practice, is then designed. More importantly, we provide useful insights into what characterizes an efficient subset selection rule and how it can be achieved by adjusting the simulation budgets allocated to the designs. Journal: IIE Transactions Pages: 855-863 Issue: 9 Volume: 48 Year: 2016 Month: 9 X-DOI: 10.1080/0740817X.2016.1156788 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1156788 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:9:p:855-863 Template-Type: ReDIF-Article 1.0 Author-Name: Christos Alexopoulos Author-X-Name-First: Christos Author-X-Name-Last: Alexopoulos Author-Name: David Goldsman Author-X-Name-First: David Author-X-Name-Last: Goldsman Author-Name: Peng Tang Author-X-Name-First: Peng Author-X-Name-Last: Tang Author-Name: James R. Wilson Author-X-Name-First: James R. Author-X-Name-Last: Wilson Title: SPSTS: A sequential procedure for estimating the steady-state mean using standardized time series Abstract: This article presents SPSTS, an automated sequential procedure for computing point and Confidence-Interval (CI) estimators for the steady-state mean of a simulation-generated process subject to user-specified requirements for the CI coverage probability and relative half-length. SPSTS is the first sequential method based on Standardized Time Series (STS) area estimators of the steady-state variance parameter (i.e., the sum of covariances at all lags). Whereas its leading competitors rely on the method of batch means to remove bias due to the initial transient, estimate the variance parameter, and compute the CI, SPSTS relies on the signed areas corresponding to two orthonormal STS area variance estimators for these tasks. In successive stages of SPSTS, standard tests for normality and independence are applied to the signed areas to determine (i) the length of the warm-up period, and (ii) a batch size sufficient to ensure adequate convergence of the associated STS area variance estimators to their limiting chi-squared distributions. SPSTS's performance is compared experimentally with that of recent batch-means methods using selected test problems of varying degrees of difficulty. SPSTS performed comparatively well in terms of its average required sample size as well as the coverage and average half-length of the final CIs. Journal: IIE Transactions Pages: 864-880 Issue: 9 Volume: 48 Year: 2016 Month: 9 X-DOI: 10.1080/0740817X.2016.1163443 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1163443 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:9:p:864-880 Template-Type: ReDIF-Article 1.0 Author-Name: Ruiwei Jiang Author-X-Name-First: Ruiwei Author-X-Name-Last: Jiang Author-Name: Yongpei Guan Author-X-Name-First: Yongpei Author-X-Name-Last: Guan Author-Name: Jean-Paul Watson Author-X-Name-First: Jean-Paul Author-X-Name-Last: Watson Title: Risk-averse stochastic unit commitment with incomplete information Abstract: Due to its sustainable nature and government stimulus plans, renewable energy (such as wind and solar) has been increasingly used in power systems.
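To make the selection objective in the Gao and Chen record above tangible: a Monte Carlo estimate of the probability of correctly selecting the top m designs under a naive equal budget allocation, which is the baseline that optimal budget allocation rules aim to beat. The Gaussian sampling model and parameter names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_pcs(means, sigmas, n_per_design, m, reps=5_000):
    """Estimate P(the m true-best designs are also the m best by sample
    mean) when every design receives n_per_design replications."""
    means = np.asarray(means, dtype=float)
    scale = np.asarray(sigmas, dtype=float) / np.sqrt(n_per_design)
    true_top = set(np.argsort(means)[-m:])
    hits = 0
    for _ in range(reps):
        xbars = rng.normal(means, scale)      # sample means for all designs
        if set(np.argsort(xbars)[-m:]) == true_top:
            hits += 1
    return hits / reps

print(estimate_pcs([1.0, 1.1, 1.3, 2.0, 2.1], [1.0] * 5, n_per_design=50, m=2))
```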
However, the intermittency of renewable energy creates challenges for power system operators in keeping the systems reliable and cost-effective. In addition, information about renewable energy is usually incomplete. Instead of knowing the true probability distribution of the renewable energy source, only a set of historical data samples can be collected from the true (yet ambiguous) distribution. In this article, we study two risk-averse stochastic unit commitment models with incomplete information: the first model being a chance-constrained unit commitment model and the second one a two-stage stochastic unit commitment model with recourse. Based on historical data on renewable energy, we construct a confidence set for the probability distribution of the renewable energy and propose data-driven stochastic unit commitment models to hedge against the incomplete nature of the information. Our models also ensure that, with a high probability, a large portion of renewable energy is utilized. Furthermore, we develop solution approaches to solve the models based on deriving strong valid inequalities and Benders’ decomposition algorithms. We show that the risk-averse behavior of both models decreases as more data samples are collected and eventually vanishes as the sample size goes to infinity. Finally, our case studies verify the effectiveness of our proposed models and solution approaches. Journal: IIE Transactions Pages: 838-854 Issue: 9 Volume: 48 Year: 2016 Month: 9 X-DOI: 10.1080/0740817X.2016.1167287 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1167287 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:9:p:838-854 Template-Type: ReDIF-Article 1.0 Author-Name: Stephen Frank Author-X-Name-First: Stephen Author-X-Name-Last: Frank Author-Name: Steffen Rebennack Author-X-Name-First: Steffen Author-X-Name-Last: Rebennack Title: An introduction to optimal power flow: Theory, formulation, and examples Abstract: The set of optimization problems in electric power systems engineering known collectively as Optimal Power Flow (OPF) is one of the most practically important and well-researched subfields of constrained nonlinear optimization. OPF has enjoyed a rich history of research, innovation, and publication since its debut five decades ago. Nevertheless, entry into OPF research is a daunting task for the uninitiated—both due to the sheer volume of literature and because OPF's ubiquity within the electric power systems community has led authors to assume a great deal of prior knowledge that readers unfamiliar with electric power systems may not possess. This article provides an introduction to OPF from an operations research perspective; it describes a complete and concise basis of knowledge for beginning OPF research. The discussion is tailored for the operations researcher who has experience with nonlinear optimization but little knowledge of electrical engineering. Topics covered include power systems modeling, the power flow equations, typical OPF formulations, and common OPF extensions. Journal: IIE Transactions Pages: 1172-1197 Issue: 12 Volume: 48 Year: 2016 Month: 12 X-DOI: 10.1080/0740817X.2016.1189626 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1189626 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
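For readers outside power systems, the central object in the OPF tutorial record above is the set of bus power balance equations; in the standard polar form they read as follows (with Y = G + jB the bus admittance matrix and θ_ik = θ_i − θ_k). This is the textbook formulation, not anything specific to the article.

```latex
% Net active and reactive power injections at bus i over an N-bus network:
P_i = \sum_{k=1}^{N} |V_i||V_k| \left( G_{ik}\cos\theta_{ik} + B_{ik}\sin\theta_{ik} \right)
Q_i = \sum_{k=1}^{N} |V_i||V_k| \left( G_{ik}\sin\theta_{ik} - B_{ik}\cos\theta_{ik} \right)
```

A typical OPF then minimizes generation cost subject to these balance equations plus voltage-magnitude and line-flow limits.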
Handle: RePEc:taf:uiiexx:v:48:y:2016:i:12:p:1172-1197 Template-Type: ReDIF-Article 1.0 Author-Name: James Cao Author-X-Name-First: James Author-X-Name-Last: Cao Author-Name: Kut C. So Author-X-Name-First: Kut C. Author-X-Name-Last: So Title: The value of demand forecast updates in managing component procurement for assembly systems Abstract: This article examines the value of demand forecast updates in an assembly system where a single assembler must order components from independent suppliers with different lead times. By staggering each ordering time, the assembler can utilize the latest market information, as it is developed, to form a better forecast over time. The updated forecast can subsequently be used to make the next procurement decision. The objective of this research is to understand the specific operating environment under which demand forecast updates are most beneficial. Using a uniform demand adjustment model, we are able to derive analytical results that allow us to quantify the impact of demand forecast updates. We show that forecast updates can drastically improve profitability by reducing the mismatch cost caused by demand uncertainty. Journal: IIE Transactions Pages: 1198-1216 Issue: 12 Volume: 48 Year: 2016 Month: 12 X-DOI: 10.1080/0740817X.2016.1189630 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1189630 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:12:p:1198-1216 Template-Type: ReDIF-Article 1.0 Author-Name: Liwen Ouyang Author-X-Name-First: Liwen Author-X-Name-Last: Ouyang Author-Name: Daniel W. Apley Author-X-Name-First: Daniel W. Author-X-Name-Last: Apley Author-Name: Sanjay Mehrotra Author-X-Name-First: Sanjay Author-X-Name-Last: Mehrotra Title: Designed sampling from large databases for controlled trials Abstract: Controlled trials are ubiquitously used to investigate the effect of a medical treatment. The trial outcome can be dependent on a set of patient covariates. Traditional approaches have relied primarily on randomized patient sampling and allocation to treatment and control groups. However, when covariate data for a large set of patients are available and the dependence of the outcome on the covariates is of interest, one can potentially design treatment/control groups that provide better estimates of the covariate-dependent effects of the treatment or provide similarly accurate estimates with a smaller trial cohort size. In this article, we develop an approach that uses optimal Design Of Experiments (DOE) concepts to select the patients for the treatment and control groups upfront, based on their covariate values, in a manner that optimizes the information content in the data. For the optimal treatment and control groups selection, we develop simple guidelines and an optimization algorithm that achieves much more accurate estimates of the covariate-dependent effects of the treatment than random sampling. We demonstrate the advantage of our method through both theoretical and numerical performance comparisons. The advantages are more pronounced when the trial cohort size is smaller, relative to the number of records in the database. Moreover, our approach causes no sampling bias in the estimated effects, for the same reason that DOE principles do not bias estimated effects.
Although we focus on medical treatment assessment, the approach has applicability in many analytics application domains where one wants to conduct a controlled experimental study to identify the covariate-dependent effects of a factor (e.g., a marketing sales promotion), based on a sample of study subjects selected optimally from a large database of covariates. Journal: IIE Transactions Pages: 1087-1097 Issue: 12 Volume: 48 Year: 2016 Month: 12 X-DOI: 10.1080/0740817X.2016.1189633 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1189633 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:12:p:1087-1097 Template-Type: ReDIF-Article 1.0 Author-Name: Yan Li Author-X-Name-First: Yan Author-X-Name-Last: Li Author-Name: Yi Zhang Author-X-Name-First: Yi Author-X-Name-Last: Zhang Author-Name: Nan Kong Author-X-Name-First: Nan Author-X-Name-Last: Kong Author-Name: Mark Lawley Author-X-Name-First: Mark Author-X-Name-Last: Lawley Title: Capacity planning for long-term care networks Abstract: We study the problem of capacity planning for long-term care services, which is important not only for the elderly and disabled who cannot adequately care for themselves but also for long-term care providers and health policymakers. Patients with long-term care needs usually have to transfer between different settings such as nursing homes and home- and community-based services. We model patient flows among these settings using an open migration network and formulate the planning of the capacity needed to provide long-term care with a newsvendor-type model. We explore the structural properties of the model and identify the most influential factors, such as the penalty cost for capacity shortage and transition rates between different care settings, in making capacity decisions. With the model developed, capacity decisions for long-term care service networks can be made more systematically with full consideration of different patient flow patterns and budget constraints. The research will be especially useful to long-term care policymakers at the state or national level, given the worsening shortage of care providers and the escalating long-term care needs resulting from population aging. Journal: IIE Transactions Pages: 1098-1111 Issue: 12 Volume: 48 Year: 2016 Month: 12 X-DOI: 10.1080/0740817X.2016.1190480 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1190480 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:12:p:1098-1111 Template-Type: ReDIF-Article 1.0 Author-Name: Brian Lunday Author-X-Name-First: Brian Author-X-Name-Last: Lunday Author-Name: Matthew J. Robbins Author-X-Name-First: Matthew J. Author-X-Name-Last: Robbins Title: Informing pediatric vaccine procurement policy via the pediatric formulary design, pricing, and production problem Abstract: This research improves upon the monopsonist vaccine formulary design problem in the literature by incorporating several modeling enhancements and applying different methodologies to efficiently obtain solutions and derive insights. Our multi-objective formulation seeks to minimize the overall price to immunize a cohort of children, maximize the net profit shared among pediatric vaccine manufacturers, and minimize the average number of injections per child among the prescribed formularies.
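As a concrete companion to the newsvendor-type capacity model in the long-term care record above: the classic critical-fractile solution, here under an assumed normal demand distribution. The cost values and the normality assumption are illustrative only.

```python
from scipy.stats import norm

def newsvendor_capacity(mu, sigma, shortage_cost, excess_cost):
    """Critical-fractile capacity: choose q so that F(q) equals
    shortage_cost / (shortage_cost + excess_cost), with demand ~ N(mu, sigma)."""
    fractile = shortage_cost / (shortage_cost + excess_cost)
    return norm.ppf(fractile, loc=mu, scale=sigma)

# e.g., mean demand of 1000 beds, sd 100, shortage penalty 9x the excess cost:
print(round(newsvendor_capacity(1000.0, 100.0, 9.0, 1.0)))   # ~1128 beds
```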
Accounting for Centers for Disease Control and Prevention (CDC) guidelines, we restrict vaccines utilized against a given disease within a given formulary to those produced by a single manufacturer. We also account for a circumstance in which one manufacturer's vaccine has a greater relative efficacy. For the resulting nonconvex mixed-integer nonlinear program, we bound the second and third objectives using optimal formulary designs for current public sector prices and utilize the ϵ-constraint method to solve an instance representative of contemporary immunization schedule requirements. Augmenting our formulation with symmetry reduction constraints to reduce the required computational effort, we identify a set of non-inferior solutions. Of practical interest to the CDC, our model enables the design of a pricing and purchasing policy, creating a sustainable and stable capital investment environment for the provision of pediatric vaccines. Journal: IIE Transactions Pages: 1112-1126 Issue: 12 Volume: 48 Year: 2016 Month: 12 X-DOI: 10.1080/0740817X.2016.1198064 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1198064 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:12:p:1112-1126 Template-Type: ReDIF-Article 1.0 Author-Name: Chenxu Li Author-X-Name-First: Chenxu Author-X-Name-Last: Li Author-Name: Yu An Author-X-Name-First: Yu Author-X-Name-Last: An Author-Name: Dachuan Chen Author-X-Name-First: Dachuan Author-X-Name-Last: Chen Author-Name: Qi Lin Author-X-Name-First: Qi Author-X-Name-Last: Lin Author-Name: Nian Si Author-X-Name-First: Nian Author-X-Name-Last: Si Title: Efficient computation of the likelihood expansions for diffusion models Abstract: Closed-form likelihood expansion is an important method for econometric assessment of continuous-time models driven by stochastic differential equations based on discretely sampled data. However, practical applications for sophisticated models usually involve significant computational efforts in calculating high-order expansion terms in order to obtain the desirable level of accuracy. We provide new and efficient algorithms for symbolically implementing the closed-form expansion of the transition density. First, combinatorial analysis leads to an alternative expression of the closed-form formula for assembling expansion terms from that currently available in the literature. Second, as the most challenging task and central building block for constructing the expansions, a novel analytical formula for calculating the conditional expectation of iterated Stratonovich integrals is proposed and a new algorithm for converting the conditional expectation of the multiplication of iterated Stratonovich integrals to a linear combination of conditional expectations of iterated Stratonovich integrals is developed. In addition to a procedure for creating expansions for a nonaffine exponential Ornstein–Uhlenbeck stochastic volatility model, we illustrate the computational performance of our method. Journal: IIE Transactions Pages: 1156-1171 Issue: 12 Volume: 48 Year: 2016 Month: 12 X-DOI: 10.1080/0740817X.2016.1200201 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1200201 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:48:y:2016:i:12:p:1156-1171 Template-Type: ReDIF-Article 1.0 Author-Name: Ferdinand Kiermaier Author-X-Name-First: Ferdinand Author-X-Name-Last: Kiermaier Author-Name: Markus Frey Author-X-Name-First: Markus Author-X-Name-Last: Frey Author-Name: Jonathan F. Bard Author-X-Name-First: Jonathan F. Author-X-Name-Last: Bard Title: Flexible cyclic rostering in the service industry Abstract: Companies in the service industry frequently depend on cyclic rosters to schedule their workforce. Such rosters offer a high degree of fairness and long-term predictability of days on and off, but they can hinder an organization’s ability to respond to changing demand. Motivated by the need for improving cyclic planning at an airport ground handling company, this article introduces the idea of flexible cyclic rostering as a means of accommodating limited weekly adjustments of employee schedules. The problem is first formulated as a multi-stage stochastic program; however, this turned out to be computationally intractable. To find solutions, two approximations were developed that involved reductions to a two-stage problem. In the computational study, the flexible and traditional cyclic rosters derived from these approximations are compared and metrics associated with the value of stochastic information are reported. In the testing, we considered seven different perturbations of the demand curve that incorporate the types of uncertainty that are common throughout the service industry. To the best of our knowledge, this is the first analysis of cyclic rostering that applies stochastic optimization. The results show that a reduction in undercoverage of more than 10% on average can be achieved with minimal computational effort. It was also observed that the new approach can overcome most of the limitations of traditional cyclic rostering while still providing most of its advantages. Journal: IIE Transactions Pages: 1139-1155 Issue: 12 Volume: 48 Year: 2016 Month: 12 X-DOI: 10.1080/0740817X.2016.1200202 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1200202 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:12:p:1139-1155 Template-Type: ReDIF-Article 1.0 Author-Name: Weihong Hu Author-X-Name-First: Weihong Author-X-Name-Last: Hu Author-Name: Mariel S. Lavieri Author-X-Name-First: Mariel S. Author-X-Name-Last: Lavieri Author-Name: Alejandro Toriello Author-X-Name-First: Alejandro Author-X-Name-Last: Toriello Author-Name: Xiang Liu Author-X-Name-First: Xiang Author-X-Name-Last: Liu Title: Strategic health workforce planning Abstract: Analysts predict impending shortages in the health care workforce, yet wages for health care workers already account for over half of U.S. health expenditures. It is thus increasingly important to adequately plan to meet health workforce demand at reasonable cost. Using infinite linear programming methodology, we propose an infinite-horizon model for health workforce planning in a large health system for a single worker class; e.g., nurses. We give a series of common-sense conditions that any system of this kind should satisfy and use them to prove the optimality of a natural lookahead policy. We then use real-world data to examine how such policies perform in more complex systems; in particular, our experiments show that a natural extension of the lookahead policy performs well when incorporating stochastic demand growth. 
Journal: IIE Transactions Pages: 1127-1138 Issue: 12 Volume: 48 Year: 2016 Month: 12 X-DOI: 10.1080/0740817X.2016.1204488 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1204488 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:12:p:1127-1138 Template-Type: ReDIF-Article 1.0 Author-Name: The Editors Title: EOV Focus Area Editorial Boards Journal: IIE Transactions Pages: ebi-ebi Issue: 12 Volume: 48 Year: 2016 Month: 12 X-DOI: 10.1080/0740817X.2016.1241052 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1241052 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:12:p:ebi-ebi Template-Type: ReDIF-Article 1.0 Author-Name: Feng Ju Author-X-Name-First: Feng Author-X-Name-Last: Ju Author-Name: Jingshan Li Author-X-Name-First: Jingshan Author-X-Name-Last: Li Author-Name: Guoxian Xiao Author-X-Name-First: Guoxian Author-X-Name-Last: Xiao Author-Name: Jorge Arinez Author-X-Name-First: Jorge Author-X-Name-Last: Arinez Author-Name: Weiwen Deng Author-X-Name-First: Weiwen Author-X-Name-Last: Deng Title: Modeling, analysis, and improvement of integrated productivity and quality system in battery manufacturing Abstract: A battery manufacturing system typically includes a serial production line with multiple inspection stations and repair processes. In such systems, productivity and quality are tightly coupled. Variations in battery quality may add up along the line so that the upstream quality may impact the downstream operations. The repair process after each inspection can also affect downstream quality behavior and may further impose an effect on the throughput of conforming batteries. In this article, an analytical model of such an integrated productivity and quality system is introduced. Analytical methods based on an overlapping decomposition approach are developed to estimate the production rate of conforming batteries. The convergence of the method is analytically proved and the accuracy of the estimation is numerically justified. In addition, bottleneck identification methods based on the probabilities of blockage, starvation, and quality statistics are investigated. Indicators are proposed to identify the downtime and quality bottlenecks that remove the need to calculate throughput and quality performance and their sensitivities. These methods provide a quantitative tool for modeling, analysis, and improvement of productivity and quality in battery manufacturing systems and can be applied to other manufacturing systems amenable to investigation using integrated productivity and quality models. Journal: IIE Transactions Pages: 1313-1328 Issue: 12 Volume: 47 Year: 2015 Month: 12 X-DOI: 10.1080/0740817X.2015.1005777 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1005777 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:12:p:1313-1328 Template-Type: ReDIF-Article 1.0 Author-Name: Mohsen Moghaddam Author-X-Name-First: Mohsen Author-X-Name-Last: Moghaddam Author-Name: Shimon Y. Nof Author-X-Name-First: Shimon Y.
Author-X-Name-Last: Nof Title: Balanceable assembly lines with dynamic tool sharing and best matching decisions—a collaborative assembly framework Abstract: A Collaborative Assembly Framework (CAF), inspired by the design principles of the Collaborative Control Theory, is developed in this article to enhance the extent of balancing of assembly lines. The notion of the CAF lies in the dynamic utilization of idle resources to eliminate bottlenecks. The CAF is composed of two modules: the first one, the Tool Sharing Protocol (TShP), makes dynamic tool-sharing decisions among fully loaded (i.e., bottleneck) and partially loaded Work Stations (WSs), and the second one, the Best Matching Protocol (BMP), dynamically matches tasks and WSs (BMP-1) and partially and fully loaded WSs for tool sharing (BMP-2). A Multi-Objective Mixed-Integer Programming model is developed for mathematical representation and a Fuzzy Goal Programming approach is applied for optimization purposes. The objectives are to minimize (i) the number of WSs, (ii) cycle time, and (iii) the total collaboration cost. The developed CAF is proven to guarantee the relative extent of balancing of assembly lines, depending on pairwise tool compatibility and tool-sharing performance. Numerical experiments on a set of small-sized case studies repeated and expanded from previous research show the superiority of the CAF over the existing non-collaborative approaches in terms of line efficiency, utilization, and balancing. Journal: IIE Transactions Pages: 1363-1378 Issue: 12 Volume: 47 Year: 2015 Month: 12 X-DOI: 10.1080/0740817X.2015.1027456 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1027456 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:12:p:1363-1378 Template-Type: ReDIF-Article 1.0 Author-Name: Faraz Ramtin Author-X-Name-First: Faraz Author-X-Name-Last: Ramtin Author-Name: Jennifer A. Pazour Author-X-Name-First: Jennifer A. Author-X-Name-Last: Pazour Title: Product allocation problem for an AS/RS with multiple in-the-aisle pick positions Abstract: An automated storage/retrieval system with multiple in-the-aisle pick positions is a semi-automated case-level order fulfillment technology that is widely used in distribution centers. We study the impact of product-to-pick-position assignments on the expected throughput for different operating policies, demand profiles, and shape factors. We develop efficient algorithms of complexity O(n log(n)) that provide the assignment that minimizes the expected travel time. Also, for different operating policies, shape configurations, and demand curves, we explore the structure of the optimal assignment of products to pick positions and quantify the difference between using a simple, practical assignment policy versus the optimal assignment. Finally, we derive closed-form analytical travel time models by approximating the optimal assignment's expected travel time using continuous demand curves and assuming an infinite number of pick positions in the aisle. We illustrate that these continuous models work well in estimating the travel time of a discrete rack and use them to find optimal design configurations. Journal: IIE Transactions Pages: 1379-1396 Issue: 12 Volume: 47 Year: 2015 Month: 12 X-DOI: 10.1080/0740817X.2015.1027458 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1027458 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:47:y:2015:i:12:p:1379-1396 Template-Type: ReDIF-Article 1.0 Author-Name: Yanqing Duanmu Author-X-Name-First: Yanqing Author-X-Name-Last: Duanmu Author-Name: Qiang Huang Author-X-Name-First: Qiang Author-X-Name-Last: Huang Title: Analysis and optimization of the edge effect for III–V nanowire synthesis via selective area metal-organic chemical vapor deposition Abstract: Selective Area Metal-Organic Chemical Vapor Deposition (SA-MOCVD) is a promising technique for the scale-up of nanowire fabrication. Our previous study investigated the growth mechanism of SA-MOCVD processes by quantifying contributions from various diffusion sources. However, the edge effect on nanostructure uniformity captured by skirt area diffusion was not quantitatively analyzed. This work further improves our understanding of the process by considering the edge effect as a superposition of skirt area diffusion and “blocking effect” and optimizing the edge effect for uniformity control of nanowire growth. We directly model the blocking effect of nanowires in the process of precursor diffusion from the skirt area to the center of a substrate. The improved model closely captures the distribution of the nanowire length across the substrate. Physical interpretation of the edge effect is provided. With the established model, we provide a method to optimize the width of the skirt area to improve the predicted structural uniformity of SA-MOCVD growth. Journal: IIE Transactions Pages: 1424-1431 Issue: 12 Volume: 47 Year: 2015 Month: 12 X-DOI: 10.1080/0740817X.2015.1033038 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1033038 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:12:p:1424-1431 Template-Type: ReDIF-Article 1.0 Author-Name: Jack Brimberg Author-X-Name-First: Jack Author-X-Name-Last: Brimberg Author-Name: Zvi Drezner Author-X-Name-First: Zvi Author-X-Name-Last: Drezner Title: A location–allocation problem with concentric circles Abstract: We consider a continuous location problem for p concentric circles serving a given set of demand points. Each demand point is serviced by the closest circle. The objective is to minimize the sum of weighted distances between demand points and their closest circle. We analyze and solve the problem when demand is uniformly and continuously distributed in a disk and when a finite number of demand points are located in the plane. Heuristic and exact algorithms are proposed for the solution of the discrete demand problem. A much faster heuristic version of the exact algorithm is also proposed and tested. The exact algorithm solves the largest tested problem with 1000 demand points in about 3.5 hours. The faster heuristic version solves it in about 2 minutes. Journal: IIE Transactions Pages: 1397-1406 Issue: 12 Volume: 47 Year: 2015 Month: 12 X-DOI: 10.1080/0740817X.2015.1034897 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1034897 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:uiiexx:v:47:y:2015:i:12:p:1397-1406 Template-Type: ReDIF-Article 1.0 Author-Name: Kan Wu Author-X-Name-First: Kan Author-X-Name-Last: Wu Author-Name: Ning Zhao Author-X-Name-First: Ning Author-X-Name-Last: Zhao Title: Analysis of dual tandem queues with a finite buffer capacity and non-overlapping service times and subject to breakdowns Abstract: Tandem queues with a finite buffer capacity are common structures embedded in practical production systems. We study the properties of tandem queues with a finite buffer capacity and non-overlapping service times subject to time-based preemptive breakdowns. Different from prior aggregation and decomposition approaches, we view a tandem queue as an integrated system and develop an innovative approach to analyze the performance of a dual tandem queue through the insight from Friedman's reduction method. We show that the system capacity of a dual tandem queue with a finite buffer and breakdowns can be less than that of its bottleneck-sees-initial-arrivals system due to the existence of virtual interruptions. Furthermore, the virtual interruptions depend on job arrival rates in general. Approximate models are derived using priority queues and the concept of virtual interruptions. Journal: IIE Transactions Pages: 1329-1341 Issue: 12 Volume: 47 Year: 2015 Month: 12 X-DOI: 10.1080/0740817X.2015.1055389 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1055389 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:12:p:1329-1341 Template-Type: ReDIF-Article 1.0 Author-Name: Jie Pan Author-X-Name-First: Jie Author-X-Name-Last: Pan Author-Name: Yi Tao Author-X-Name-First: Yi Author-X-Name-Last: Tao Author-Name: Loo Hay Lee Author-X-Name-First: Loo Hay Author-X-Name-Last: Lee Author-Name: Ek Peng Chew Author-X-Name-First: Ek Peng Author-X-Name-Last: Chew Title: Production planning and inventory control for a two-product recovery system Abstract: The significance of product recovery through remanufacturing has been widely recognized and has compelled manufacturers to incorporate product recovery activities into normal manufacturing processes. Consequently, increasing attention has been paid to production and inventory management of the product recovery system where demand is satisfied through either manufacturing brand-new products or remanufacturing returned products into new ones. In this work, we investigate a recovery system with two product types and two return flows. A periodic-review inventory problem is addressed in the two-product recovery system and an approximate dynamic programming approach is proposed to obtain production and recovery decisions. A single-period problem is first solved and the optimal solution is characterized by a multilevel threshold policy. For the multi-period problem, we show that the threshold levels of each period are solely dependent on the gradients of the cost-to-go function at points of interest after approximation. The gradients are estimated by an infinitesimal perturbation analysis–based method and a backward induction approach is then applied to derive the threshold levels of each period. Numerical experiments are conducted under different scenarios and the threshold policy is shown to outperform two other heuristic policies.
Journal: IIE Transactions Pages: 1342-1362 Issue: 12 Volume: 47 Year: 2015 Month: 12 X-DOI: 10.1080/0740817X.2015.1056389 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1056389 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:12:p:1342-1362 Template-Type: ReDIF-Article 1.0 Author-Name: The Editors Title: Corrigendum Journal: IIE Transactions Pages: 1432-1432 Issue: 12 Volume: 47 Year: 2015 Month: 12 X-DOI: 10.1080/0740817X.2015.1082403 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1082403 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:12:p:1432-1432 Template-Type: ReDIF-Article 1.0 Author-Name: The Editors Title: Editorial Boards Journal: IIE Transactions Pages: ebi-ebi Issue: 12 Volume: 47 Year: 2015 Month: 12 X-DOI: 10.1080/0740817X.2015.1095585 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1095585 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:12:p:ebi-ebi Template-Type: ReDIF-Article 1.0 Author-Name: Liyu Zheng Author-X-Name-First: Liyu Author-X-Name-Last: Zheng Author-Name: Janis Terpenny Author-X-Name-First: Janis Author-X-Name-Last: Terpenny Author-Name: Peter Sandborn Author-X-Name-First: Peter Author-X-Name-Last: Sandborn Title: Design refresh planning models for managing obsolescence Abstract: Fast-moving technologies cause high-tech components to have shortened life cycles, rendering them quickly obsolete. Obsolescence is a significant problem for systems whose operational and support life is much longer than the procurement lifetimes of their constituent components. Long field-life systems, such as aircraft and ships, require many updates of components and technology over their life to remain manufacturable and supportable. Design refresh planning is a strategic way of managing obsolescence. In this article, efficient mathematical models based on Integer Programming for design refresh planning are developed to determine the plan that minimizes the total obsolescence management costs. Decisions are made on when to execute design refreshes (dates) and which obsolete/non-obsolete system components should be replaced at a specific design refresh. Data uncertainty is also considered and obsolescence dates of the components are assumed to follow specific probability distributions. With this approach, different scenarios of executing design refreshes and the probabilities of adopting these scenarios can be determined. The final optimal cost becomes an expected value. An example of an electronic engine control unit is included for demonstration of the developed models. Journal: IIE Transactions Pages: 1407-1423 Issue: 12 Volume: 47 Year: 2015 Month: 12 X-DOI: 10.1080/0740817X.2014.999898 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.999898 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:12:p:1407-1423 Template-Type: ReDIF-Article 1.0 Author-Name: Yu Jin Author-X-Name-First: Yu Author-X-Name-Last: Jin Author-Name: Harry A. Pierson Author-X-Name-First: Harry A.
Author-X-Name-Last: Pierson Author-Name: Haitao Liao Author-X-Name-First: Haitao Author-X-Name-Last: Liao Title: Toolpath allocation and scheduling for concurrent fused filament fabrication with multiple extruders Abstract: Fused filament fabrication, like most layer-wise additive manufacturing processes, is hindered by low production rates and scalability issues. Concurrent fused filament fabrication mitigates these disadvantages by distributing the processing of each layer among multiple extruders working in parallel. The objective of this work is to develop a general toolpath allocation and scheduling methodology for such concurrent processing. Breaks in a toolpath that are inherently created by slicing software for single-extruder machines are used to form sub-paths, and the assignment of these to available extruders is formulated as a scheduling problem with collision constraints. A formal optimization model is presented, and two novel heuristics are developed to obtain approximate solutions. Three case studies demonstrate the application of these algorithms and compare their relative performance with respect to fabrication time and computational cost. In simulations with three extruders, layer printing times were reduced by as much as 60% compared with single-extruder machines. The proposed heuristics also exceeded the performance of two baseline toolpath scheduling algorithms by as much as 45%. Two key layer characteristics were found to influence heuristic performance, and the advantages and disadvantages of each algorithm are discussed in the context of these characteristics. Journal: IISE Transactions Pages: 192-208 Issue: 2 Volume: 51 Year: 2019 Month: 2 X-DOI: 10.1080/24725854.2017.1374582 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1374582 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:2:p:192-208 Template-Type: ReDIF-Article 1.0 Author-Name: Lin Li Author-X-Name-First: Lin Author-X-Name-Last: Li Author-Name: Azadeh Haghighi Author-X-Name-First: Azadeh Author-X-Name-Last: Haghighi Author-Name: Yiran Yang Author-X-Name-First: Yiran Author-X-Name-Last: Yang Title: Theoretical modelling and prediction of surface roughness for hybrid additive–subtractive manufacturing processes Abstract: Hybrid additive–subtractive manufacturing processes are becoming increasingly popular as a promising solution to overcome the current limitations of Additive Manufacturing (AM) technology and improve the dimensional accuracy and surface quality of parts. Surface roughness, as one of the most important surface quality measures, plays a key role in the fit of assemblies and thus needs to be thoroughly evaluated at the design and manufacturing stages. However, most of the studies on surface roughness modelling and analysis employ empirical approaches, and only consider the effect of a single manufacturing process. In particular, the existing surface roughness models are not applicable to hybrid additive–subtractive manufacturing processes in which a secondary process is involved. In this article, analytical models are established to predict the surface roughness of parts fabricated by AM as well as hybrid additive–subtractive manufacturing processes. A novel surface profile representation scheme is also proposed to increase the prediction accuracy. Case studies are performed to validate the effectiveness of the proposed models.
An average of 4.25% error is observed for the AM case, which is significantly smaller than the prediction error of the existing models in the literature. Furthermore, in the hybrid case, an average of 91.83% accuracy is obtained. Journal: IISE Transactions Pages: 124-135 Issue: 2 Volume: 51 Year: 2019 Month: 2 X-DOI: 10.1080/24725854.2018.1458268 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1458268 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:2:p:124-135 Template-Type: ReDIF-Article 1.0 Author-Name: Yossi Luzon Author-X-Name-First: Yossi Author-X-Name-Last: Luzon Author-Name: Eugene Khmelnitsky Author-X-Name-First: Eugene Author-X-Name-Last: Khmelnitsky Title: Job sizing and sequencing in additive manufacturing to control process deterioration Abstract: The term Additive Manufacturing (AM) describes a set of novel manufacturing technologies in which successive layers of matter are formed to create an object, e.g., three-dimensional (3D) printing. These technologies have several major advantages that have led to their rapidly increasing involvement in mass production. However, due to their unique properties they are subject to deterioration, which is expressed in the aging of different components followed by random maintenance requirements. Additionally, they are all preemptive-repeat; namely, if a failure occurs during the printing of an object, then its printing will have to recommence from the start as the work is resumed. This article addresses the problem of sequencing an AM process while referring to its relevant properties. It also addresses a more complicated environment in which the work may arrive over time. We adopt a stochastic preemptive-repeat scheduling model, generalize it to incorporate the process age, and develop the formalization of two main measures of a given schedule: the expected completion time, i.e., the time duration required to complete the printing of all jobs, and the total expected flow time, i.e., the expected time a job spends in the system. Our formalization enables the determination of a schedule that minimizes these measures. In particular, we formulate and solve a constrained continuous optimization problem to determine the optimal size of the designed jobs to be printed. This challenge, which relates to the unique flexibility of these technologies, currently hinders the practice of dental 3D printing manufacturing lines. Journal: IISE Transactions Pages: 181-191 Issue: 2 Volume: 51 Year: 2019 Month: 2 X-DOI: 10.1080/24725854.2018.1460518 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1460518 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:uiiexx:v:51:y:2019:i:2:p:181-191 Template-Type: ReDIF-Article 1.0 Author-Name: Kubra Karayagiz Author-X-Name-First: Kubra Author-X-Name-Last: Karayagiz Author-Name: Alaa Elwany Author-X-Name-First: Alaa Author-X-Name-Last: Elwany Author-Name: Gustavo Tapia Author-X-Name-First: Gustavo Author-X-Name-Last: Tapia Author-Name: Brian Franco Author-X-Name-First: Brian Author-X-Name-Last: Franco Author-Name: Luke Johnson Author-X-Name-First: Luke Author-X-Name-Last: Johnson Author-Name: Ji Ma Author-X-Name-First: Ji Author-X-Name-Last: Ma Author-Name: Ibrahim Karaman Author-X-Name-First: Ibrahim Author-X-Name-Last: Karaman Author-Name: Raymundo Arróyave Author-X-Name-First: Raymundo Author-X-Name-Last: Arróyave Title: Numerical and experimental analysis of heat distribution in the laser powder bed fusion of Ti-6Al-4V Abstract: Laser Powder Bed Fusion (LPBF) of metallic parts is a complex process involving simultaneous interplay between several physical mechanisms such as solidification, heat transfer (convection, conduction, radiation, etc.), and fluid flow. In the present work, a three-dimensional finite element model is developed for studying the thermal behavior during LPBF of Ti-6Al-4V alloy. Two phase transitions are considered in the model: solid-to-liquid and liquid-to-gas. It is demonstrated that metal evaporation has a notable effect on the thermal history evolution during fabrication and should not be overlooked, in contrast with the majority of previous research efforts on modeling and simulation of additive manufacturing processes. The model is validated through experimental measurements of different features including the size and morphology of the Heat-Affected Zone (HAZ), melt pool size, and thermal history. Reasonable agreement with experimental measurements of the HAZ width and depth is obtained with corresponding errors of 3.2% and 10.8%. Qualitative agreement with experimental measurements of the multi-track thermal history is also obtained, with some discrepancies whose sources are discussed in detail. The current work presents one of the first efforts to validate the multi-track thermal history using dual-wavelength pyrometry, as opposed to single-track experiments. The effects of selected model parameters and evaporation on the melt pool/HAZ size, geometry, and peak predicted temperature during processing, and their sensitivities to these parameters are also discussed. Sensitivity analysis reveals that thermal conductivity of the liquid phase, porosity level of the powder bed, and absorptivity have direct influence on the model predictions, with the influence of the thermal conductivity of the liquid phase being most significant. Journal: IISE Transactions Pages: 136-152 Issue: 2 Volume: 51 Year: 2019 Month: 2 X-DOI: 10.1080/24725854.2018.1461964 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1461964 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:2:p:136-152 Template-Type: ReDIF-Article 1.0 Author-Name: Jia (Peter) Liu Author-X-Name-First: Jia (Peter) Author-X-Name-Last: Liu Author-Name: Chenang Liu Author-X-Name-First: Chenang Author-X-Name-Last: Liu Author-Name: Yun Bai Author-X-Name-First: Yun Author-X-Name-Last: Bai Author-Name: Prahalada Rao Author-X-Name-First: Prahalada Author-X-Name-Last: Rao Author-Name: Christopher B. Williams Author-X-Name-First: Christopher B.
Author-X-Name-Last: Williams Author-Name: Zhenyu (James) Kong Author-X-Name-First: Zhenyu (James) Author-X-Name-Last: Kong Title: Layer-wise spatial modeling of porosity in additive manufacturing Abstract: The objective of this work is to model and quantify the layer-wise spatial evolution of porosity in parts made using Additive Manufacturing (AM) processes. This is an important research area because porosity has a direct impact on the functional integrity of AM parts such as their fatigue life and strength. To realize this objective, an Augmented Layer-wise Spatial log Gaussian Cox process (ALS-LGCP) model is proposed. The ALS-LGCP approach quantifies the spatial distribution of pores within each layer of the AM part and tracks their sequential evolution across layers. Capturing the layer-wise spatial behavior of porosity leads to a deeper understanding of where (at what location), when (at which layer), and to what severity (size and number) pores are formed. This work therefore provides a mathematical framework for identifying specific pore-prone areas in an AM part, and tracking the evolution of porosity in AM parts in a layer-wise manner. This knowledge is essential for initiating remedial corrective actions to avoid porosity in future parts, e.g., by changing the process parameters or part design. The ALS-LGCP approach proposed herein is a significant improvement over the current scalar metric used to quantify porosity, namely, the percentage porosity relative to the bulk part volume. In this article, the ALS-LGCP approach is tested for metal parts made using a binder jetting AM process to model the layer-wise spatial behavior of porosity. Based on offline, non-destructive X-Ray computed tomography (XCT) scan data of the part, the approach identifies those areas with high risk of porosity with statistical fidelity approaching 85% (F-score). While the proposed work uses offline XCT data, it takes the critical first step from a data analytics perspective for taking advantage of the recently reported breakthroughs in online, in-situ X-Ray-based monitoring of AM processes. Further, the ALS-LGCP approach is readily extensible for porosity analysis in other AM processes; our future forays will focus on improving the computational tractability of the approach for online monitoring. Journal: IISE Transactions Pages: 109-123 Issue: 2 Volume: 51 Year: 2019 Month: 2 X-DOI: 10.1080/24725854.2018.1478169 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1478169 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:2:p:109-123 Template-Type: ReDIF-Article 1.0 Author-Name: Ulas Yaman Author-X-Name-First: Ulas Author-X-Name-Last: Yaman Author-Name: Melik Dolen Author-X-Name-First: Melik Author-X-Name-Last: Dolen Author-Name: Christoph Hoffmann Author-X-Name-First: Christoph Author-X-Name-Last: Hoffmann Title: Generation of patterned indentations for additive manufacturing technologies Abstract: This article proposes a novel approach to generate patterned indentations for different additive manufacturing methodologies. Surface textures have many practical applications in various fields, but require special manufacturing considerations. In addition to conventional manufacturing processes, additive processes have also been utilized in the last decade to obtain textured surfaces. The current design and fabrication pipeline of additive manufacturing operations has many disadvantages in that respect.
For instance, the size of the design (CAD) files grows considerably when there are detailed indentations on the surfaces of the artifacts. The presented method, which employs morphological operations on a sequence of binary images representing the cross-sections of the printed artifact, overcomes such problems while fabricating the textured objects. Furthermore, the presented technique could be conveniently implemented using the existing hardware resources of almost any three-dimensional printer. Journal: IISE Transactions Pages: 209-217 Issue: 2 Volume: 51 Year: 2019 Month: 2 X-DOI: 10.1080/24725854.2018.1491076 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1491076 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:2:p:209-217 Template-Type: ReDIF-Article 1.0 Author-Name: Yanglong Lu Author-X-Name-First: Yanglong Author-X-Name-Last: Lu Author-Name: Yan Wang Author-X-Name-First: Yan Author-X-Name-Last: Wang Title: An efficient transient temperature monitoring of fused filament fabrication process with physics-based compressive sensing Abstract: Sensors play an important role in manufacturing processes. Different types of sensors have been used in process monitoring to ensure the quality of products. As a result, the cost of quality control is rising. Processing a large amount of sensor data for real-time process monitoring is also challenging. Recently, a Physics-Based Compressive Sensing (PBCS) approach was proposed to reduce the number of sensors and the amount of data collection associated with manufacturing process monitoring. PBCS significantly improves the compression ratio from traditional compressed sensing by incorporating the knowledge of physical phenomena in specific applications. In this article, the PBCS approach is demonstrated with the dynamic process of fused filament fabrication where the constantly changing temperature field needs to be continuously monitored. A transient thermal model for PBCS is formulated. Based on the model, three-dimensional thermal distributions in manufacturing processes can be efficiently monitored by reconstructing distributions from sparse samplings in both spatial and temporal domains. The systematic error from reconstruction can also be predicted and compensated based on a Gaussian process uncertainty quantification approach. Journal: IISE Transactions Pages: 168-180 Issue: 2 Volume: 51 Year: 2019 Month: 2 X-DOI: 10.1080/24725854.2018.1499054 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1499054 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:2:p:168-180 Template-Type: ReDIF-Article 1.0 Author-Name: Tianjiao Wang Author-X-Name-First: Tianjiao Author-X-Name-Last: Wang Author-Name: Chi Zhou Author-X-Name-First: Chi Author-X-Name-Last: Zhou Author-Name: Wenyao Xu Author-X-Name-First: Wenyao Author-X-Name-Last: Xu Title: Online droplet monitoring in inkjet 3D printing using catadioptric stereo system Abstract: Inkjet 3D printing is becoming one of the most disruptive additive manufacturing technologies, due to its unique capability of precisely depositing micro-droplets of multi-functional materials. It has found widespread industrial applications in aerospace, energy and health areas by processing multi-functional metal-materials, nano-materials, and bio-materials.
However, the current inkjet 3D printing system still suffers from a low production quality issue, due to low process reliability caused by the complex and dynamic droplet dispensing behavior. Due to the challenges in terms of efficiency, accuracy, and versatility, robust droplet monitoring and process inspection tools are still largely unavailable. To this end, a novel catadioptric stereo system is proposed for online droplet monitoring in an inkjet 3D printing process. In this system, a regular industrial CCD camera is coupled with a flat mirror and magnification lens system to capture the tiny droplet images to detect the droplet location in 3D space. A mathematical model is formulated to calculate the droplet location in 3D world space from 2D image space. A holistic hardware and software framework is constructed to evaluate the performance of the proposed system in terms of resolution, accuracy, efficiency, and versatility, both theoretically and experimentally. The results show that the proposed catadioptric stereo system can achieve single micron resolution and accuracy, which is one order of magnitude higher than that of the 3D printing system itself. The proposed droplet location detection algorithm has low time complexity, and the detection efficiency can meet the online monitoring requirement. Multi-facet features including the droplet location and speed can be effectively detected by the presented technique. The proposed catadioptric stereo system is a promising online droplet monitoring tool and has tremendous potential to enable trustworthy quality assurance in inkjet 3D printing. Journal: IISE Transactions Pages: 153-167 Issue: 2 Volume: 51 Year: 2019 Month: 2 X-DOI: 10.1080/24725854.2018.1532133 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1532133 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:2:p:153-167 Template-Type: ReDIF-Article 1.0 Author-Name: Qiang Huang Author-X-Name-First: Qiang Author-X-Name-Last: Huang Author-Name: Zhengyu (James) Kong Author-X-Name-First: Zhengyu (James) Author-X-Name-Last: Kong Author-Name: Xiaoping Qian Author-X-Name-First: Xiaoping Author-X-Name-Last: Qian Author-Name: Bianca Colosimo Author-X-Name-First: Bianca Author-X-Name-Last: Colosimo Title: Contributions to additive manufacturing Journal: IISE Transactions Pages: 107-108 Issue: 2 Volume: 51 Year: 2019 Month: 2 X-DOI: 10.1080/24725854.2019.1540686 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1540686 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:2:p:107-108 Template-Type: ReDIF-Article 1.0 Author-Name: Marianne Frisén Author-X-Name-First: Marianne Author-X-Name-Last: Frisén Title: Spatial outbreak detection based on inference principles for multivariate surveillance Abstract: Spatial surveillance is a special case of multivariate surveillance. Thus, in this review of spatial outbreak methods, the relation to general multivariate surveillance approaches is discussed. Different outbreak models are useful for different aims. First, it makes a great difference which spreading pattern is of main interest to detect. We will discuss methods for the detection of (i) spatial clusters of increased incidence; (ii) increased incidence at only one (unknown) location; (iii) simultaneous increase at all locations; and (iv) outbreaks with a time lag between the onsets in different regions.
The sufficient reduction was used to find likelihood ratio methods for some of these spreading patterns. Second, an alternative to the common assumption of a step change to an increased incidence level is suggested. The assumption is sometimes too restrictive and errors in the estimation of the baseline have great influence. Instead, a robust nonparametric model is suggested. The seasonal variation of influenza in Sweden is used as an example. Here, the outbreak was characterized by a monotonic increase following the constant non-epidemic level. The semi-parametric generalized likelihood ratio surveillance method used for this application is described. Third, evaluation metrics are discussed. Evaluation in spatial and other multivariate surveillance requires special consideration. Journal: IIE Transactions Pages: 759-769 Issue: 8 Volume: 46 Year: 2014 Month: 8 X-DOI: 10.1080/0740817X.2012.748995 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.748995 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:8:p:759-769 Template-Type: ReDIF-Article 1.0 Author-Name: Huifen Chen Author-X-Name-First: Huifen Author-X-Name-Last: Chen Author-Name: Chaosian Huang Author-X-Name-First: Chaosian Author-X-Name-Last: Huang Title: The use of a CUSUM residual chart to monitor respiratory syndromic data Abstract: This article reports a surveillance mechanism that can be used to monitor syndromic data on respiratory syndrome. The data used for illustration are the daily counts of respiratory-syndrome visits sampled from the National Health Insurance Research Database in Taiwan. The population size is 160 000. A regression model with an autoregressive-integrated-moving-average error term is fitted to the data and then CUmulative SUM (CUSUM) residual charts are plotted to detect aberrations in the frequency of visits to a walk-in clinic. Day-of-the-week, seasonal, and holiday effects are considered in the regression model. It is shown that a CUSUM residual chart can be used to detect abnormal increases in daily counts of respiratory-syndrome visits. Journal: IIE Transactions Pages: 790-797 Issue: 8 Volume: 46 Year: 2014 Month: 8 X-DOI: 10.1080/0740817X.2012.761369 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.761369 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:8:p:790-797 Template-Type: ReDIF-Article 1.0 Author-Name: Hai-yan Xu Author-X-Name-First: Hai-yan Author-X-Name-Last: Xu Author-Name: Min Xie Author-X-Name-First: Min Author-X-Name-Last: Xie Author-Name: Thong Ngee Goh Author-X-Name-First: Thong Ngee Author-X-Name-Last: Goh Title: Objective Bayes analysis of zero-inflated Poisson distribution with application to healthcare data Abstract: In this article, non-informative priors are investigated for a zero-inflated Poisson distribution with two parameters: the probability of zeros and the mean of the Poisson part. Both the reference prior and the Jeffreys prior are derived and shown to be second-order matching priors when only the mean of the Poisson part is of interest. However, when the probability of zeros is of interest, the reference prior is still a second-order matching prior, whereas the Jeffreys prior is not so. Furthermore, when both parameters are of interest, the reference prior is a unique second-order matching prior.
Frequentist coverage probabilities of the posterior confidence sets based on the Jeffreys and reference priors are compared with each other using Monte Carlo simulations and with confidence sets based on maximum likelihood estimation. Journal: IIE Transactions Pages: 843-852 Issue: 8 Volume: 46 Year: 2014 Month: 8 X-DOI: 10.1080/0740817X.2013.770190 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.770190 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:8:p:843-852 Template-Type: ReDIF-Article 1.0 Author-Name: Mi Lim Lee Author-X-Name-First: Mi Lim Author-X-Name-Last: Lee Author-Name: David Goldsman Author-X-Name-First: David Author-X-Name-Last: Goldsman Author-Name: Seong-Hee Kim Author-X-Name-First: Seong-Hee Author-X-Name-Last: Kim Author-Name: Kwok-Leung Tsui Author-X-Name-First: Kwok-Leung Author-X-Name-Last: Tsui Title: Spatiotemporal biosurveillance with spatial clusters: control limit approximation and impact of spatial correlation Abstract: Multivariate CUSUM charts formed over spatial clusters have been used over the last several years to detect emerging disease clusters in spatiotemporal biosurveillance. The control limits for the CUSUM charts are typically calibrated by trial-and-error simulation, but this task can be time-consuming and challenging when the monitoring area is large. This article introduces an analytical method that approximates the control limits and average run length when spatial correlation is not strong. In addition, the practical range of the scan radius in which the approximation method works well is investigated. Also studied is how the outbreak radius and spatial correlation impact the scheme’s outbreak detection performance with respect to two metrics: detection delay and identification accuracy. Experimental results show that the approximation method performs well, making the design of the multivariate CUSUM chart convenient; and higher spatial correlation does not always yield faster detection but often facilitates accurate identification of outbreak clusters. Journal: IIE Transactions Pages: 813-827 Issue: 8 Volume: 46 Year: 2014 Month: 8 X-DOI: 10.1080/0740817X.2013.785296 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.785296 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:8:p:813-827 Template-Type: ReDIF-Article 1.0 Author-Name: Ralph Gailis Author-X-Name-First: Ralph Author-X-Name-Last: Gailis Author-Name: Ajith Gunatilaka Author-X-Name-First: Ajith Author-X-Name-Last: Gunatilaka Author-Name: Leo Lopes Author-X-Name-First: Leo Author-X-Name-Last: Lopes Author-Name: Alex Skvortsov Author-X-Name-First: Alex Author-X-Name-Last: Skvortsov Author-Name: Kate Smith-Miles Author-X-Name-First: Kate Author-X-Name-Last: Smith-Miles Title: Managing uncertainty in early estimation of epidemic behaviors using scenario trees Abstract: The onset of an epidemic can be foreshadowed theoretically through observation of a number of syndromic signals, such as absenteeism or rising sales of particular pharmaceuticals. The success of such approaches depends on how well the uncertainty associated with the early stages of an epidemic can be managed. This article uses scenario trees to summarize the uncertainty in the parameters defining an epidemiological process and the future path the epidemic might take.
Extensive simulations are used to generate various syndromic and epidemic time series, which are then summarized in scenario trees, creating a simple data structure that can be explored quickly at surveillance time without the need to fit models. Decisions can be made based on the subset of the uncertainty (the subtree) that best fits the current observed syndromic signals. Simulations are performed to investigate how well an underlying dynamic model of an epidemic with inhomogeneous mixing and noise fluctuations can capture the effects of social interactions. Two noise terms are introduced to capture the observable fluctuations in the social network connectivity and variation in some model parameters (e.g., infectious time). Finally, it is shown how the entire framework can be used to compare syndromic surveillance systems against each other; to evaluate the effect of lag and noise on accuracy; and to evaluate the impact that differences in syndromic behavior among susceptible and infected populations have on accuracy. Journal: IIE Transactions Pages: 828-842 Issue: 8 Volume: 46 Year: 2014 Month: 8 X-DOI: 10.1080/0740817X.2013.803641 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.803641 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:8:p:828-842 Template-Type: ReDIF-Article 1.0 Author-Name: Sheng-I Chen Author-X-Name-First: Sheng-I Author-X-Name-Last: Chen Author-Name: Bryan A. Norman Author-X-Name-First: Bryan A. Author-X-Name-Last: Norman Author-Name: Jayant Rajgopal Author-X-Name-First: Jayant Author-X-Name-Last: Rajgopal Author-Name: Tina M. Assi Author-X-Name-First: Tina M. Author-X-Name-Last: Assi Author-Name: Bruce Y. Lee Author-X-Name-First: Bruce Y. Author-X-Name-Last: Lee Author-Name: Shawn T. Brown Author-X-Name-First: Shawn T. Author-X-Name-Last: Brown Title: A planning model for the WHO-EPI vaccine distribution network in developing countries Abstract: In many developing countries, inefficiencies in the supply chain for the World Health Organization's Expanded Program on Immunization (EPI) vaccines are of grave concern; these inefficiencies result in thousands of people not being fully immunized and create significant risk of disease epidemics. Thus, there is a great deal of interest in these countries in building tools to analyze and optimize how vaccines flow down several levels of the supply chain from manufacturers to vaccine recipients. This article develops a mathematical model for typical vaccine distribution networks in developing countries. This model has been successfully adapted for supply chains in three different countries (Niger, Thailand, and Vietnam), and its application to several issues of interest to public health administrators in developing countries is discussed. Journal: IIE Transactions Pages: 853-865 Issue: 8 Volume: 46 Year: 2014 Month: 8 X-DOI: 10.1080/0740817X.2013.813094 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.813094 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:46:y:2014:i:8:p:853-865 Template-Type: ReDIF-Article 1.0 Author-Name: Lianjie Shu Author-X-Name-First: Lianjie Author-X-Name-Last: Shu Author-Name: Yan Su Author-X-Name-First: Yan Author-X-Name-Last: Su Author-Name: Wei Jiang Author-X-Name-First: Wei Author-X-Name-Last: Jiang Author-Name: Kwok-Leung Tsui Author-X-Name-First: Kwok-Leung Author-X-Name-Last: Tsui Title: A comparison of exponentially weighted moving average-based methods for monitoring increases in incidence rate with varying population size Abstract: Estimation of incidence rate and quick detection of its increases are important tasks in public health surveillance. In addition to being an efficient tool for online parameter estimation, the Exponentially Weighted Moving Average (EWMA) method has been widely used as an effective monitoring tool in statistical process control. Motivated by its successful applications, several EWMA-type methods are discussed for monitoring and estimating the incidence rate of adverse events in health care applications. The comparison results show that the conventional EWMA chart has a superior performance in detecting small shifts that occur at the start-up but very poor performance when shifts occur at a later time point. Instead, the adaptive EWMA method that is capable of dynamically updating its smoothing parameter can provide an overall good detection performance when shifts occur at both the first time point and a later time point. This result is validated using male thyroid cancer data in New Mexico. Journal: IIE Transactions Pages: 798-812 Issue: 8 Volume: 46 Year: 2014 Month: 8 X-DOI: 10.1080/0740817X.2014.894805 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.894805 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:8:p:798-812 Template-Type: ReDIF-Article 1.0 Author-Name: Saylisse Dávila Author-X-Name-First: Saylisse Author-X-Name-Last: Dávila Author-Name: George Runger Author-X-Name-First: George Author-X-Name-Last: Runger Author-Name: Eugene Tuv Author-X-Name-First: Eugene Author-X-Name-Last: Tuv Title: Public health surveillance with ensemble-based supervised learning Abstract: Public health surveillance is a special case of the general problem that monitors counts (or rates) of events for changes. Modern data complements event counts with many additional measurements (such as geographic, demographic, and others) that comprise high-dimensional covariates. This leads to an important challenge to detect a change that only occurs within a region, initially unspecified, defined by these covariates. Current methods used to handle covariate information are limited to low-dimensional data. The approach presented in this article transforms the problem to supervised learning, so that an appropriate learner and signal criteria can then be defined. A feature selection algorithm is used to identify covariates that contribute to a model (either individually or through interactions) and this is used to generate a signal based on formal statistical inference. A measure of statistical significance is also included to control false alarms. Graphical plots are used to isolate change locations in covariate space. Results on a variety of simulated examples are provided. 
Journal: IIE Transactions Pages: 770-789 Issue: 8 Volume: 46 Year: 2014 Month: 8 X-DOI: 10.1080/0740817X.2014.894806 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.894806 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:8:p:770-789 Template-Type: ReDIF-Article 1.0 Author-Name: Wei Jiang Author-X-Name-First: Wei Author-X-Name-Last: Jiang Author-Name: Lianjie Shu Author-X-Name-First: Lianjie Author-X-Name-Last: Shu Author-Name: Kwok-leung Tsui Author-X-Name-First: Kwok-leung Author-X-Name-Last: Tsui Title: Public health and healthcare surveillance and response Journal: IIE Transactions Pages: 757-758 Issue: 8 Volume: 46 Year: 2014 Month: 8 X-DOI: 10.1080/0740817X.2014.900306 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.900306 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:8:p:757-758 Template-Type: ReDIF-Article 1.0 Author-Name: Xiang Wu Author-X-Name-First: Xiang Author-X-Name-Last: Wu Author-Name: Sarah Ryan Author-X-Name-First: Sarah Author-X-Name-Last: Ryan Title: Value of condition monitoring for optimal replacement in the proportional hazards model with continuous degradation Abstract: This article investigates the value of perfect monitoring information for optimal replacement of deteriorating systems in the Proportional Hazards Model (PHM). A continuous-time Markov chain describes the condition of the system. Although the form of an optimal replacement policy for a system under periodic monitoring in the PHM was developed previously, an approximation of the Markov process as constant within inspection intervals led to a counterintuitive result that less frequent monitoring could yield a replacement policy with lower average cost. This article explicitly accounts for possible state transitions between inspection epochs to remove the approximation and eliminate the cost anomaly. However, the mathematical evaluation becomes significantly more complicated. To overcome this difficulty, a new recursive procedure to obtain the parameters of the optimal replacement policy and the optimal average cost is presented. A numerical example is provided to illustrate the computational procedure and the value of condition monitoring. By taking the monitoring cost into consideration, the relationships between the unit cost of periodic monitoring and the upfront cost of continuous monitoring under which the continuous, periodic, or no monitoring scheme is optimal are obtained. Journal: IIE Transactions Pages: 553-563 Issue: 8 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903232571 File-URL: http://hdl.handle.net/10.1080/07408170903232571 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:8:p:553-563 Template-Type: ReDIF-Article 1.0 Author-Name: Jing Li Author-X-Name-First: Jing Author-X-Name-Last: Li Author-Name: Jionghua (Judy) Jin Author-X-Name-First: Jionghua (Judy) Author-X-Name-Last: Jin Title: Optimal sensor allocation by integrating causal models and set-covering algorithms Abstract: Massive amounts of data are generated in Distributed Sensor Networks (DSNs), posing challenges to effective and efficient detection of system abnormality through data analysis. This article proposes a new method for optimal sensor allocation in a DSN with the objective of timely detection of the abnormalities in an underlying physical system.
This method involves two steps: first, a Bayesian Network (BN) is built to represent the causal relationships among the physical variables in the system; second, an integrated algorithm combining the BN and a set-covering algorithm is developed to determine which physical variables should be sensed in order to minimize the total sensing cost while satisfying a prescribed detectability requirement. Case studies are performed on a hot forming process and a large-scale cap alignment process, showing that the developed algorithm satisfies both the cost and detectability requirements. Journal: IIE Transactions Pages: 564-576 Issue: 8 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903232597 File-URL: http://hdl.handle.net/10.1080/07408170903232597 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:8:p:564-576 Template-Type: ReDIF-Article 1.0 Author-Name: Ming Jin Author-X-Name-First: Ming Author-X-Name-Last: Jin Author-Name: Yanting Li Author-X-Name-First: Yanting Author-X-Name-Last: Li Author-Name: Fugee Tsung Author-X-Name-First: Fugee Author-X-Name-Last: Tsung Title: Chart allocation strategy for serial-parallel multistage manufacturing processes Abstract: The application of Statistical Process Control (SPC) to multistage manufacturing processes has received considerable attention recently. However, how to effectively allocate conventional SPC charts in a serial multistage environment to monitor process quality has not been thoroughly studied. This article adopts a linear state space model to describe multistage processes and proposes a strategy to properly allocate control charts in serial-parallel multistage manufacturing processes by considering the interrelationship information between stages. Based on the proposed chart allocation strategy, it proves possible to make rational chart allocation decisions that achieve a quicker detection capability over the whole potential fault set. A hood assembly example is used to demonstrate the applications of the chart allocation strategy. Extensions are also discussed. Journal: IIE Transactions Pages: 577-588 Issue: 8 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903394330 File-URL: http://hdl.handle.net/10.1080/07408170903394330 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:8:p:577-588 Template-Type: ReDIF-Article 1.0 Author-Name: Jeffrey Kharoufeh Author-X-Name-First: Jeffrey Author-X-Name-Last: Kharoufeh Author-Name: Christopher Solo Author-X-Name-First: Christopher Author-X-Name-Last: Solo Author-Name: M. Ulukus Author-X-Name-First: M. Author-X-Name-Last: Ulukus Title: Semi-Markov models for degradation-based reliability Abstract: This article presents hybrid, degradation-based reliability models for a single-unit system whose degradation is driven by a semi-Markov environment. The primary objective is to develop a mathematical framework and associated computational techniques that unite environmental data and stochastic failure models to assess the current or future health of the system. By employing phase-type distributions, it is possible to construct a surrogate environment process that is amenable to analysis by exact Markovian techniques to obtain reliability estimates. The viability of the proposed approach and the quality of the approximations are demonstrated in two numerical experiments.
The numerical results indicate that remarkably accurate lifetime distribution and moment approximations are attainable. Journal: IIE Transactions Pages: 599-612 Issue: 8 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903394371 File-URL: http://hdl.handle.net/10.1080/07408170903394371 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:8:p:599-612 Template-Type: ReDIF-Article 1.0 Author-Name: Jinsuk Lee Author-X-Name-First: Jinsuk Author-X-Name-Last: Lee Author-Name: Rong Pan Author-X-Name-First: Rong Author-X-Name-Last: Pan Title: Analyzing step-stress accelerated life testing data using generalized linear models Abstract: In this article, the parameter estimation of a Step-Stress Accelerated Life Testing (SSALT) model is discussed by utilizing techniques from Generalized Linear Models (GLMs). A multiple progressive SSALT with exponential failure data and right censoring is analyzed. The likelihood function of the SSALT is treated as that of a censoring variate with a Poisson distribution, and the life-stress relationship is defined by a log link function of a GLM. Both the maximum likelihood estimation and the Bayesian estimation of the GLM parameters are discussed. The iteratively weighted least squares method is implemented to obtain the maximum likelihood solution. The Bayesian estimation is derived by applying Jeffreys' non-informative prior and the Markov chain Monte Carlo method. Finally, a real industrial example is presented to demonstrate these estimation methods. Journal: IIE Transactions Pages: 589-598 Issue: 8 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903459976 File-URL: http://hdl.handle.net/10.1080/07408170903459976 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:8:p:589-598 Template-Type: ReDIF-Article 1.0 Author-Name: José Ramirez-Marquez Author-X-Name-First: José Author-X-Name-Last: Ramirez-Marquez Author-Name: Claudio Rocco Author-X-Name-First: Claudio Author-X-Name-Last: Rocco Title: Evolutionary optimization technique for multi-state two-terminal reliability allocation in multi-objective problems Abstract: This article presents a newly developed evolutionary algorithm for solving multi-objective optimization models for the design of multi-state two-terminal networks. It is assumed that for each network component, a known set of functionally equivalent component types (with different performance specifications) can be used to provide redundancy. Furthermore, the reliability behavior of the network and its components can have a range of states varying from perfect functioning to complete failure; that is, a multi-state behavior. Thus, the new algorithm allows the multi-objective case of the reliability allocation problem to be solved for general multi-state two-terminal networks. The optimization routine is based on three major steps that use an evolutionary optimization approach and Monte Carlo simulation to generate a Pareto-optimal string of probabilistic solutions to these problems. Examples for different multi-state two-terminal networks are used throughout the article to illustrate the approach. The results obtained for test cases are compared with other proposed methods to show the accuracy of the algorithm in generating approximate Pareto-optimal sets for problems with a large solution space.
Journal: IIE Transactions Pages: 539-552 Issue: 8 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903459984 File-URL: http://hdl.handle.net/10.1080/07408170903459984 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:8:p:539-552 Template-Type: ReDIF-Article 1.0 Author-Name: Maximiliano Udenio Author-X-Name-First: Maximiliano Author-X-Name-Last: Udenio Author-Name: Eleni Vatamidou Author-X-Name-First: Eleni Author-X-Name-Last: Vatamidou Author-Name: Jan C. Fransoo Author-X-Name-First: Jan C. Author-X-Name-Last: Fransoo Author-Name: Nico Dellaert Author-X-Name-First: Nico Author-X-Name-Last: Dellaert Title: Behavioral causes of the bullwhip effect: An analysis using linear control theory Abstract: It has long been recognized that the bullwhip effect in real life depends on a behavioral component. However, non-experimental research typically considers only structural causes in its analysis. In this article, we study the impact of behavioral biases on the performance of inventory/production systems modeled through an APVIOBPCS (Automatic Pipeline, Variable Inventory, Order-Based Production Control System) design using linear control theory. To explicitly model managerial behavior, we allow independent adjustments to inventory and pipeline feedback loops. We consider the biases of smoothing/over-reaction to inventory and pipeline mismatches and the under-/over-estimation of the pipeline. To quantify the performance of the system, we first develop a new procedure to determine the exact stability region of the system, and we derive an asymptotic stability region that is independent of the lead time. Afterwards, we analyze the effect of different demand signals on order and inventory variations. Our findings suggest that normative policy recommendations must take demand structure explicitly into account. Finally, through extensive numerical experiments, we find that the performance of the system depends on the combination of the behavioral biases and the structure of the demand stream. Journal: IISE Transactions Pages: 980-1000 Issue: 10 Volume: 49 Year: 2017 Month: 10 X-DOI: 10.1080/24725854.2017.1325026 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1325026 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:10:p:980-1000 Template-Type: ReDIF-Article 1.0 Author-Name: Maya Bam Author-X-Name-First: Maya Author-X-Name-Last: Bam Author-Name: Brian T. Denton Author-X-Name-First: Brian T. Author-X-Name-Last: Denton Author-Name: Mark P. Van Oyen Author-X-Name-First: Mark P. Author-X-Name-Last: Van Oyen Author-Name: Mark E. Cowen Author-X-Name-First: Mark E. Author-X-Name-Last: Cowen Title: Surgery scheduling with recovery resources Abstract: Surgical services are major revenue sources that also account for a large portion of hospital expenses. Thus, efficient resource allocation is crucial in this system; however, this is a challenging problem, in part due to the interaction of the different stages of the surgery delivery system and the uncertainty of surgery and recovery durations. This article focuses on single-day in-patient elective surgery scheduling considering surgeons, operating rooms (ORs), and the post-anesthesia care unit (recovery).
We propose a mixed-integer programming formulation of this problem and then present a fast two-phase heuristic: phase 1 is used for determining the number of ORs to open for the day and surgeon-to-OR assignments, and phase 2 is used for surgical case sequencing. Both phases have provable worst-case performance guarantees and excellent average-case performance. We evaluate schedules under uncertainty using a discrete-event simulation model based on data provided by a mid-sized hospital. We show that the fast and easy-to-implement two-phase heuristic performs extremely well in both deterministic and stochastic settings. The new methods developed reduce the computational barriers to implementation and demonstrate that hospitals can realize substantial benefits without resorting to sophisticated optimization software implementations. Journal: IISE Transactions Pages: 942-955 Issue: 10 Volume: 49 Year: 2017 Month: 10 X-DOI: 10.1080/24725854.2017.1325027 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1325027 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:10:p:942-955 Template-Type: ReDIF-Article 1.0 Author-Name: H. Dharma Kwon Author-X-Name-First: H. Dharma Author-X-Name-Last: Kwon Author-Name: Onesun Steve Yoo Author-X-Name-First: Onesun Steve Author-X-Name-Last: Yoo Title: Retention of capable new employees under uncertainty: Impact of strategic interactions Abstract: We study a game involving a firm and a newly hired employee whose capability is initially unknown to both parties. Both players observe the performance of the employee and update their common posterior beliefs about the employee’s capability. The learning process presents each party with an option: the firm can terminate an incapable employee, and a capable employee can leave the firm for greater financial remuneration elsewhere. To understand the impact of this noncooperative interaction, we examine the Markov perfect equilibrium termination strategies and payoffs that unfold. We find that in the region of sufficiently high learning rates, reducing the rate of learning can increase the equilibrium payoff for both parties. Slower learning prolongs employment because more performance outcomes must be observed to fully assess the employee’s capability. In the region of sufficiently slow learning rates, reducing the rate of learning can benefit the firm if the employee is deemed capable but hurt the firm otherwise. Our result identifies a nonfinancial way for firms to improve retention of capable new employees. Journal: IISE Transactions Pages: 927-941 Issue: 10 Volume: 49 Year: 2017 Month: 10 X-DOI: 10.1080/24725854.2017.1325028 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1325028 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:10:p:927-941 Template-Type: ReDIF-Article 1.0 Author-Name: George G. Polak Author-X-Name-First: George G. Author-X-Name-Last: Polak Author-Name: David F. Rogers Author-X-Name-First: David F.
Author-X-Name-Last: Rogers Author-Name: Chaojiang Wu Author-X-Name-First: Chaojiang Author-X-Name-Last: Wu Title: A generalized maximin decision model for managing risk and measurable uncertainty Abstract: We propose an innovative approach to probabilistic decision making, in which the optimal selection is made both for a decision alternative to manage risk and for a collection of measurable events to simultaneously manage uncertainty as measured by information entropy. The resulting generalized maximin model is a combinatorial optimization problem for maximizing the expected value of a random variable, defined as the minimum return in a given event, over all measurable events in a discrete sample space. The collection of measurable events and the applicable probability measure are endogenously determined by a partition of the sample space and optimized for a given index that specifies the number of constituent events. The modeling approach is very general, encompassing as a special case the maximin decision criterion and providing an equivalent solution to the expected value criterion, with other cases representing trade-offs between these criteria. A dynamic programming algorithm for solving the non-diversified model in polynomial time is developed. Diversification of the decisions results in a nonlinear integer optimization model that is transformed to an easily solvable mixed-integer linear model. Publicly available data on 79 investments over 10 periods are used to compare the model with mean–variance, conditional value-at-risk, and constrained maximin models. Journal: IISE Transactions Pages: 956-966 Issue: 10 Volume: 49 Year: 2017 Month: 10 X-DOI: 10.1080/24725854.2017.1335918 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1335918 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:10:p:956-966 Template-Type: ReDIF-Article 1.0 Author-Name: M. Serkan Akturk Author-X-Name-First: M. Serkan Author-X-Name-Last: Akturk Author-Name: James D. Abbey Author-X-Name-First: James D. Author-X-Name-Last: Abbey Author-Name: H. Neil Geismar Author-X-Name-First: H. Neil Author-X-Name-Last: Geismar Title: Strategic design of multiple lifecycle products for remanufacturing operations Abstract: Based on observations from practice, this study analytically investigates product design philosophies for remanufacturing original equipment manufacturers to determine how the optimal design choice depends on market conditions. Though designing to increase the level of remanufacturability can yield increased profitability by lowering remanufacturing costs, several complicating factors exist. We examine how these market factors (industry clockspeed, the level of competition, and the product’s original market value) interact with characteristics whose values are determined by the choice of design paradigm: time-to-market, manufacturing cost, and remanufacturing cost. A key determinant of the optimal design choice is the number of profitable lifecycles that each design choice provides under specific combinations of values for the market factors. Journal: IISE Transactions Pages: 967-979 Issue: 10 Volume: 49 Year: 2017 Month: 10 X-DOI: 10.1080/24725854.2017.1336684 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1336684 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:49:y:2017:i:10:p:967-979 Template-Type: ReDIF-Article 1.0 Author-Name: Xiao Liu Author-X-Name-First: Xiao Author-X-Name-Last: Liu Author-Name: Khalifa N. Al-Khalifa Author-X-Name-First: Khalifa N. Author-X-Name-Last: Al-Khalifa Author-Name: Elsayed A. Elsayed Author-X-Name-First: Elsayed A. Author-X-Name-Last: Elsayed Author-Name: David W. Coit Author-X-Name-First: David W. Author-X-Name-Last: Coit Author-Name: Abdelmagid S. Hamouda Author-X-Name-First: Abdelmagid S. Author-X-Name-Last: Hamouda Title: Criticality measures for components with multi-dimensional degradation Abstract: Failures of engineering structures and equipment are often attributed to the failure of a single component. Hence, it is important to identify critical components in a system and understand how a component's criticality changes over time under dynamic environments. This article investigates criticality analysis for components with multiple competing failure modes due to degradation. The component degradation is modeled as a k-dimensional Wiener process. A component fails when any of the k degradation processes associated with that component attains a certain threshold level. Motivated by Nelson's cumulative exposure model, a relationship between both the mean and diffusion of the degradation process and environmental conditions is established. Expressions for a component's criticality measures are derived. Numerical examples are presented to illustrate the criticality analysis. Journal: IIE Transactions Pages: 987-998 Issue: 10 Volume: 46 Year: 2014 Month: 10 X-DOI: 10.1080/0740817X.2013.851433 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.851433 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:10:p:987-998 Template-Type: ReDIF-Article 1.0 Author-Name: Shuai Huang Author-X-Name-First: Shuai Author-X-Name-Last: Huang Author-Name: Zhenyu Kong Author-X-Name-First: Zhenyu Author-X-Name-Last: Kong Author-Name: Wenzhen Huang Author-X-Name-First: Wenzhen Author-X-Name-Last: Huang Title: High-dimensional process monitoring and change point detection using embedding distributions in reproducing kernel Hilbert space Abstract: High-dimensional process monitoring has become ubiquitous in many domains, which creates tremendous challenges for conventional process monitoring methods. This article proposes a novel Reproducing Kernel Hilbert Space (RKHS)-based control chart that can be applied to high-dimensional processes with sophisticated process distributions to detect a wide range of process changes beyond the ones that are detected by traditional statistical process control methods. Through extensive experiments on both simulated and real-world processes and various kinds of process change patterns, it is shown that the RKHS-based control chart leads to improved statistical stability, fault detection power, and robustness to non-normality as compared with existing methods such as T² and MEWMA control charts. Journal: IIE Transactions Pages: 999-1016 Issue: 10 Volume: 46 Year: 2014 Month: 10 X-DOI: 10.1080/0740817X.2013.855848 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.855848 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:46:y:2014:i:10:p:999-1016 Template-Type: ReDIF-Article 1.0 Author-Name: Qiang Zhou Author-X-Name-First: Qiang Author-X-Name-Last: Zhou Author-Name: Junbo Son Author-X-Name-First: Junbo Author-X-Name-Last: Son Author-Name: Shiyu Zhou Author-X-Name-First: Shiyu Author-X-Name-Last: Zhou Author-Name: Xiaofeng Mao Author-X-Name-First: Xiaofeng Author-X-Name-Last: Mao Author-Name: Mutasim Salman Author-X-Name-First: Mutasim Author-X-Name-Last: Salman Title: Remaining useful life prediction of individual units subject to hard failure Abstract: To develop a cost-effective condition-based maintenance strategy, accurate prediction of the Remaining Useful Life (RUL) is the key. It is known that many failure mechanisms in engineering can be traced back to some underlying degradation processes. This article proposes a two-stage prognostic framework for individual units subject to hard failure, based on joint modeling of degradation signals and time-to-event data. The proposed algorithm features a low computational load, online prediction, and dynamic updating. Its application to automotive battery RUL prediction is discussed in this article as an example. The effectiveness of the proposed method is demonstrated through a simulation study and real data. Journal: IIE Transactions Pages: 1017-1030 Issue: 10 Volume: 46 Year: 2014 Month: 10 X-DOI: 10.1080/0740817X.2013.876126 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.876126 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:10:p:1017-1030 Template-Type: ReDIF-Article 1.0 Author-Name: Dan Zhang Author-X-Name-First: Dan Author-X-Name-Last: Zhang Author-Name: Haitao Liao Author-X-Name-First: Haitao Author-X-Name-Last: Liao Title: Design of statistically and energy-efficient accelerated life testing experiments Abstract: The basic idea of Accelerated Life Testing (ALT) is to expose a limited number of test units of a product to harsher-than-normal operating conditions to expedite failures. Based on the failure time data collected in a short time period, an ALT model incorporating the underlying failure time distribution and life–stress relationship can be developed for predicting the reliability of the product under the normal operating condition. However, ALT experiments often consume significant amounts of energy due to the harsher-than-normal operating conditions created and controlled by test equipment. In this article, a new ALT design methodology is developed that has the objective of improving the statistical and energy efficiency of ALT experiments. The resulting statistically and energy-efficient ALT plan depends not only on the reliability of the product to be evaluated, but also on the physical characteristics of the test equipment and its controller. Particularly, the statistical efficiency of each candidate ALT plan needs to be evaluated and the corresponding controller capable of providing the required stress loadings must be designed and simulated to evaluate the total energy consumption of the ALT plan. In this article, mathematical formulations, computational algorithms, and simulation tools are provided to handle such complex experimental design problems. Numerical examples are provided to demonstrate the effectiveness of the proposed methodology in energy reduction in ALT. 
Journal: IIE Transactions Pages: 1031-1049 Issue: 10 Volume: 46 Year: 2014 Month: 10 X-DOI: 10.1080/0740817X.2013.876127 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.876127 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:10:p:1031-1049 Template-Type: ReDIF-Article 1.0 Author-Name: George Nenes Author-X-Name-First: George Author-X-Name-Last: Nenes Author-Name: Philippe Castagliola Author-X-Name-First: Philippe Author-X-Name-Last: Castagliola Author-Name: Giovanni Celano Author-X-Name-First: Giovanni Author-X-Name-Last: Celano Author-Name: Sofia Panagiotidou Author-X-Name-First: Sofia Author-X-Name-Last: Panagiotidou Title: The variable sampling interval control chart for finite-horizon processes Abstract: Remaining globally competitive requires companies to have a high level of flexibility that allows for the production of a large variety of products. To limit work-in-process, decision makers periodically schedule the production of finite batches of the same product code according to a make-to-order management strategy. Scheduling calls for frequent set-up activities, which require the reconfiguration of a manufacturing process, and allows manufacturers to switch between different codes. This can limit the production horizon of one product code to a few hours or shifts. In this context, efficient online quality monitoring using control charts is strategic to eliminate scrap or rework and to meet the demand at the time specified by the production plan. The design of control charts for a process with a limited production horizon is a challenge for statistical process control practitioners. Under this framework, this article investigates the issues related to the implementation of the Variable Sampling Interval (VSI) Shewhart control chart in a process with a finite production horizon. When the production horizon is finite, the statistical properties of a control chart are known to be a function of the number of scheduled inspections. In the case of a VSI control chart, the quality practitioner cannot fix the number of inspections a priori due to the stochastic nature of the sampling interval selection. Therefore, the aim of this article is to propose a new Markov chain approach for the exact computation of the statistical performance of the VSI control chart in processes with an unknown and finite number of inspections. The proposed approach is general and does not depend on the monitored sample statistic. With reference to monitoring of the process mean, an extensive numerical analysis compares the performance of the VSI $\bar{X}$ chart to the Variable Sample Size and Fixed Sampling Rate z charts. Numerical results show that the VSI $\bar{X}$ chart outperforms the other charts for moderate to large shift sizes. An illustrative example shows the implementation of the VSI $\bar{X}$ chart in a short run producing a finite batch of mechanical parts. Journal: IIE Transactions Pages: 1050-1065 Issue: 10 Volume: 46 Year: 2014 Month: 10 X-DOI: 10.1080/0740817X.2013.876128 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.876128 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:46:y:2014:i:10:p:1050-1065 Template-Type: ReDIF-Article 1.0 Author-Name: Cheng-Ta Yeh Author-X-Name-First: Cheng-Ta Author-X-Name-Last: Yeh Author-Name: Yi-Kuei Lin Author-X-Name-First: Yi-Kuei Author-X-Name-Last: Lin Author-Name: Cheng-Fu Huang Author-X-Name-First: Cheng-Fu Author-X-Name-Last: Huang Title: A reliability indicator to measure a stochastic supply chain network with transportation damage and limited production capacity Abstract: This article proposes a reliability measurement for a supply chain network, in which a vertex denotes a supplier, a transfer center, or a customer, while a route connecting a pair of vertices denotes a carrier. Each carrier's available transportation capacity (e.g., number of containers) is not deterministic, since the transportation capacity may be partially reserved by other customers. Thus, the supply chain network can be regarded as a Stochastic Supply Chain Network (SSCN). In an SSCN with multiple suppliers and markets, the goods may be damaged due to traffic accidents, natural disasters, inclement weather, time, or collisions during transportation, such that the intact goods may not meet the customers' demands. In addition, the goods supplied by a specified supplier cannot exceed its production capacity, and the total transportation cost cannot exceed a budget. SSCN reliability is defined as the probability that the SSCN can successfully deliver goods to multiple customers subject to a specified level of damage, budget, and limited production capacity. An algorithm is proposed to evaluate the SSCN reliability based on minimal paths. A real case study of a pineapple supply chain network is utilized to demonstrate the utility of the proposed algorithm. Journal: IIE Transactions Pages: 1066-1078 Issue: 10 Volume: 46 Year: 2014 Month: 10 X-DOI: 10.1080/0740817X.2013.876130 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.876130 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:10:p:1066-1078 Template-Type: ReDIF-Article 1.0 Author-Name: Minsu Kim Author-X-Name-First: Minsu Author-X-Name-Last: Kim Author-Name: Suk Joo Bae Author-X-Name-First: Suk Joo Author-X-Name-Last: Bae Title: Drop fragility of the display of a smart mobile phone: weakest link failure or cumulative shock failure? Abstract: It is not unusual for portable devices to be damaged when they are accidentally dropped on hard floors. Shock tests are increasingly being used to evaluate the drop impact response of portable devices. However, the underlying failure mechanisms have not been fully theoretically explored. There are two candidate failure mechanisms that can account for the drop impact fragility of the display in a smart mobile phone during a shock test: weakest link failure and cumulative damage failure. The weakest link theory provides a basis for a Weibull distribution and the cumulative damage theory for an inverse Gaussian distribution. This article proposes a discrimination procedure for the two distribution types. The probability of correct selection is computed using asymptotic results on the Maximum Likelihood (ML) ratio used to discriminate between the two distributions. Expressions are provided that can be used to compute the asymptotic distributions of ML estimators or their functions when there is model mis-specification. The proposed method is applied to real shock test data for smart mobile phone display modules to determine the underlying failure mechanisms.
Journal: IIE Transactions Pages: 1079-1092 Issue: 10 Volume: 46 Year: 2014 Month: 10 X-DOI: 10.1080/0740817X.2014.882039 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.882039 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:10:p:1079-1092 Template-Type: ReDIF-Article 1.0 Author-Name: Yisha Xiang Author-X-Name-First: Yisha Author-X-Name-Last: Xiang Author-Name: David W. Coit Author-X-Name-First: David W. Author-X-Name-Last: Coit Author-Name: Qianmei (May) Feng Author-X-Name-First: Qianmei (May) Author-X-Name-Last: Feng Title: Accelerated burn-in and condition-based maintenance for n-subpopulations subject to stochastic degradation Abstract: For some engineering design and manufacturing applications, particularly for evolving and new technologies, populations of manufactured components can be heterogeneous and consist of several subpopulations. The co-existence of n subpopulations can be common in devices when the manufacturing process is still maturing or highly variable. A new model is developed and demonstrated to determine accelerated burn-in and condition-based maintenance policies for populations composed of distinct subpopulations subject to stochastic degradation. Accelerated burn-in procedures with multiple accelerating factors are considered for the degradation-based heterogeneous populations. Condition-based maintenance is implemented during field operation after burn-in procedures. The proposed joint accelerated burn-in and condition-based maintenance policy is compared with two benchmark policies: a joint accelerated burn-in and age-based preventive replacement policy and a condition-based maintenance-only policy. Numerical examples are provided to illustrate the proposed procedure. Sensitivity analysis is performed to investigate the value of the joint accelerated burn-in and condition-based maintenance policy and to indicate which type of policy should be applied according to different conditions and device characteristics. Journal: IIE Transactions Pages: 1093-1106 Issue: 10 Volume: 46 Year: 2014 Month: 10 X-DOI: 10.1080/0740817X.2014.889335 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.889335 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:10:p:1093-1106 Template-Type: ReDIF-Article 1.0 Author-Name: Michael E. Cholette Author-X-Name-First: Michael E. Author-X-Name-Last: Cholette Author-Name: Dragan Djurdjanovic Author-X-Name-First: Dragan Author-X-Name-Last: Djurdjanovic Title: Degradation modeling and monitoring of machines using operation-specific hidden Markov models Abstract: In this article, a novel data-driven approach to the monitoring of systems operating under variable operating conditions is described. The method is based on characterizing the degradation process via a set of operation-specific hidden Markov models (HMMs), whose hidden states represent the unobservable degradation states of the monitored system while its observable symbols represent the sensor readings. Using the HMM framework, modeling, identification, and monitoring methods are detailed that allow one to identify an HMM of degradation for each operation from mixed-operation data and perform operation-specific monitoring of the system. Using a large data set provided by a major manufacturer, the new methods are applied to a semiconductor manufacturing process running multiple operations in a production environment.
Journal: IIE Transactions Pages: 1107-1123 Issue: 10 Volume: 46 Year: 2014 Month: 10 X-DOI: 10.1080/0740817X.2014.905734 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.905734 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:10:p:1107-1123 Template-Type: ReDIF-Article 1.0 Author-Name: Wei-Chang Yeh Author-X-Name-First: Wei-Chang Author-X-Name-Last: Yeh Title: A simple universal generating function method for estimating the reliability of general multi-state node networks Abstract: Many real-world systems (such as cellular telephone and transportation networks) are Multi-state Node Networks (MNNs) composed of multi-state nodes, where a node's states are determined by a set of nodes receiving the signal directly from it without satisfying the conservation law. Current methods for evaluating MNN reliability are all derived from Universal Generating Function Methods (UGFMs). Unfortunately, UGFMs are only effective for special MNNs without any cycle, i.e., acyclic MNNs. A very simple revised UGFM is developed for the general MNN reliability problem. The proposed UGFM allows cycles with the same time complexity as the best-known UGFM. The correctness and computational complexity of the proposed UGFM are analyzed and proven. An example is given to illustrate how MNN reliability is evaluated using the proposed UGFM. Journal: IIE Transactions Pages: 3-11 Issue: 1 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802322622 File-URL: http://hdl.handle.net/10.1080/07408170802322622 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:1:p:3-11 Template-Type: ReDIF-Article 1.0 Author-Name: Nozer Singpurwalla Author-X-Name-First: Nozer Author-X-Name-Last: Singpurwalla Author-Name: Alyson Wilson Author-X-Name-First: Alyson Author-X-Name-Last: Wilson Title: Probability, chance and the probability of chance Abstract: In our day-to-day discourse on uncertainty, words like belief, chance, plausible, likelihood and probability are commonly encountered. Often, these words are used interchangeably, because they are intended to encapsulate some loosely articulated notions about the unknowns. The purpose of this paper is to propose a framework that is able to show how each of these terms can be made precise, so that each reflects a distinct meaning. To construct our framework, we use a basic scenario upon which caveats are introduced. Each caveat motivates us to bring in one or more of the above notions. The scenario considered here is very basic; it arises in both the biomedical context of survival analysis and the industrial context of engineering reliability. This paper is expository and much of what is said here has been said before. However, the manner in which we introduce the material via a hierarchy of caveats that could arise in practice, namely our proposed framework, is the novel aspect of this paper. To appreciate all this, we require of the reader a knowledge of the calculus of probability. However, in order to make our distinctions transparent, probability has to be interpreted subjectively, not as an objective relative frequency. Journal: IIE Transactions Pages: 12-22 Issue: 1 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802322630 File-URL: http://hdl.handle.net/10.1080/07408170802322630 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:41:y:2009:i:1:p:12-22 Template-Type: ReDIF-Article 1.0 Author-Name: Jason Cook Author-X-Name-First: Jason Author-X-Name-Last: Cook Author-Name: Jose Ramirez-Marquez Author-X-Name-First: Jose Author-X-Name-Last: Ramirez-Marquez Title: Mobility and reliability modeling for a mobile network Abstract: The mobile ad hoc wireless network (MAWN) promises to become an ever-present technology with application to a myriad of areas. This networking scheme has its roots in DoD technology, yet the ability to measure and calculate its reliability is largely absent. This paper describes the unique attributes of the MAWN and how the classical methods for the analysis of network reliability must be adjusted. The methods developed acknowledge the dynamic and scalable nature of the new networking scheme, along with its absence of infrastructure, and remove the need to define a network configuration a priori. The methods rely on a novel modeling approach that recognizes the effects of mobility on the formation of the network's configurations. Hence, this paper proposes a Monte Carlo simulation method that accounts for node mobility, node reliability, and node performance in the network's connectivity and resultant network reliability. Journal: IIE Transactions Pages: 23-31 Issue: 1 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802322648 File-URL: http://hdl.handle.net/10.1080/07408170802322648 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:1:p:23-31 Template-Type: ReDIF-Article 1.0 Author-Name: Zhigang Tian Author-X-Name-First: Zhigang Author-X-Name-Last: Tian Author-Name: Ming Zuo Author-X-Name-First: Ming Author-X-Name-Last: Zuo Author-Name: Richard Yam Author-X-Name-First: Richard Author-X-Name-Last: Yam Title: Multi-state systems and their performance evaluation Abstract: The k-out-of-n system structure is a very popular type of redundancy in fault-tolerant systems, with wide applications in both industrial and military systems. In this paper, the modeling, application, and reliability evaluation of k-out-of-n systems are studied for the case where the components and the system have multiple performance levels. A multi-state k-out-of-n system model is proposed that allows different requirements on the number of components for different state levels and, very importantly, allows more practical engineering systems to fit into this model. The multiple states in the model can be interpreted in two ways: (i) multiple levels of capacity; and (ii) multiple failure modes. Application examples of the proposed multi-state k-out-of-n system model are given under each of the interpretations. An approach is presented for efficient reliability evaluation of multi-state k-out-of-n systems with identically and independently distributed components. A recursive algorithm is proposed for reliability evaluation of multi-state k-out-of-n systems with independent components. Efficiency investigations show that both of the reliability evaluation approaches are efficient. The multi-state k-out-of-n system model with a constant k value, which is a special case of the general multi-state k-out-of-n system model, has been studied for a long time, but only at the theoretical level. A practical application of this model is presented in this paper as well.
Journal: IIE Transactions Pages: 32-44 Issue: 1 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802322655 File-URL: http://hdl.handle.net/10.1080/07408170802322655 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:1:p:32-44 Template-Type: ReDIF-Article 1.0 Author-Name: Juan Ruiz-Castro Author-X-Name-First: Juan Author-X-Name-Last: Ruiz-Castro Author-Name: Gemma Fernández-Villodre Author-X-Name-First: Gemma Author-X-Name-Last: Fernández-Villodre Author-Name: Rafael Pérez-Ocón Author-X-Name-First: Rafael Author-X-Name-Last: Pérez-Ocón Title: A level-dependent general discrete system involving phase-type distributions Abstract: A discrete repairable redundant n-system with one online unit and the others in warm standby is presented. The operational and repair times follow general distributions, and their phase-type representations are considered. It is shown that the process governing the system is a discrete level-dependent M/G/1 process. For this system, the stationary distribution, performance reliability measures in the transient and stationary regimes, the up period, and the involved costs are worked out in matrix and algorithmic form. The presented results have been computationally implemented using Matlab. An application illustrates the developed model. Journal: IIE Transactions Pages: 45-56 Issue: 1 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802322663 File-URL: http://hdl.handle.net/10.1080/07408170802322663 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:1:p:45-56 Template-Type: ReDIF-Article 1.0 Author-Name: Nader Ebrahimi Author-X-Name-First: Nader Author-X-Name-Last: Ebrahimi Title: The mean function of a repairable system that is subjected to an imperfect repair policy Abstract: A repairable system can be simply characterized as a system that is repaired rather than replaced after a failure. In many practical situations, the mean function of a repairable system depends on several explanatory variables, referred to as covariates, that are time dependent. The mean function is an important quantity that represents the expected number of failures up to a certain time. In this article, a repair policy is proposed by using auxiliary stochastic processes that describe the behaviors of these covariates, and various properties are derived using the proposed policy. A general method to assess indirectly the mean function of a repairable system by using information on these covariates is also discussed. Journal: IIE Transactions Pages: 57-64 Issue: 1 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802322671 File-URL: http://hdl.handle.net/10.1080/07408170802322671 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:1:p:57-64 Template-Type: ReDIF-Article 1.0 Author-Name: Toshio Nakagawa Author-X-Name-First: Toshio Author-X-Name-Last: Nakagawa Author-Name: Satoshi Mizutani Author-X-Name-First: Satoshi Author-X-Name-Last: Mizutani Title: Optimum problems in backward times of reliability models Abstract: This paper considers the problem of searching for the actual time of failure of a system when the only information available is that it has failed by a time t. This situation can be analyzed by using the concept of the reversed failure rate, and this paper considers optimization problems that can be solved by using it.
When a unit is detected to have failed at time t, we discuss an optimum backward time from time t for searching for its failure time, chosen to minimize the expected cost. The recovery of a database system and the reweighing of products using a scale are taken to be two typical applications of the backward time concept. Two models are proposed for appropriate maintenance actions for these situations when a unit is detected to have failed at time t. Journal: IIE Transactions Pages: 65-71 Issue: 1 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802322689 File-URL: http://hdl.handle.net/10.1080/07408170802322689 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:1:p:65-71 Template-Type: ReDIF-Article 1.0 Author-Name: Bo Lindqvist Author-X-Name-First: Bo Author-X-Name-Last: Lindqvist Author-Name: Guro Skogsrud Author-X-Name-First: Guro Author-X-Name-Last: Skogsrud Title: Modeling of dependent competing risks by first passage times of Wiener processes Abstract: Consider the competing risks situation for a component that may be subject to either a failure or a preventive maintenance action, where the latter will prevent the failure. It is then reasonable to expect a dependence between the time to failure and the time to preventive maintenance. This paper briefly reviews some modeling approaches and introduces a new approach based on modeling the degradation of a component by means of Wiener processes, with failure corresponding to the first crossing of a certain level and the potential time for maintenance corresponding to the crossing of a certain lower degradation level. Journal: IIE Transactions Pages: 72-80 Issue: 1 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802322697 File-URL: http://hdl.handle.net/10.1080/07408170802322697 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:1:p:72-80 Template-Type: ReDIF-Article 1.0 Author-Name: Sheldon Ross Author-X-Name-First: Sheldon Author-X-Name-Last: Ross Title: A new simulation approach to estimating expected values of functions of Bernoulli random variables under certain types of dependencies Abstract: Consider an n-component system, where each component either works or is failed. With $X_i$ equal to one if component i works and zero otherwise, a new simulation approach, based on an innovative use of stratified sampling, is presented for estimating $E[h(X_1, \ldots, X_n)]$ when h is a monotone function and the vector $(X_1, \ldots, X_n)$ is exchangeable. It is shown how to extend the proposed approach to the case where there is a random environmental parameter $\Theta$ such that, conditional on $\Theta = \theta$, the components act independently, with component i working with probability $\theta p_i$. Improvements in the method when the components are independent are also indicated. Journal: IIE Transactions Pages: 81-85 Issue: 1 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802322705 File-URL: http://hdl.handle.net/10.1080/07408170802322705 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:1:p:81-85 Template-Type: ReDIF-Article 1.0 Author-Name: Bo Bergman Author-X-Name-First: Bo Author-X-Name-Last: Bergman Title: Conceptualistic Pragmatism: A framework for Bayesian analysis?
Abstract: This paper argues for an extended framework for the subjectivist approach to statistical decision making: the judgements made in deriving a likelihood function should be carefully reflected upon. The Harvard professor of philosophy Clarence I. Lewis offered a philosophical, action-oriented framework for this type of reflection. The philosophy of Lewis strongly influenced the originators of the quality movement. This constitutes an interesting link between two important learning-oriented approaches in the current statistical discourse: the subjectivist theory of statistical inference and the quality movement, with its focus on continuous improvement. Journal: IIE Transactions Pages: 86-93 Issue: 1 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802322713 File-URL: http://hdl.handle.net/10.1080/07408170802322713 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:1:p:86-93 Template-Type: ReDIF-Article 1.0 Author-Name: E. Elsayed Author-X-Name-First: E. Author-X-Name-Last: Elsayed Title: Foreword Journal: IIE Transactions Pages: 1-2 Issue: 1 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802424055 File-URL: http://hdl.handle.net/10.1080/07408170802424055 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:1:p:1-2 Template-Type: ReDIF-Article 1.0 Author-Name: Yugang Yu Author-X-Name-First: Yugang Author-X-Name-Last: Yu Author-Name: René De Koster Author-X-Name-First: René Author-X-Name-Last: De Koster Title: Sequencing heuristics for storing and retrieving unit loads in 3D compact automated warehousing systems Abstract: Sequencing unit-load retrieval requests has been extensively reported on in the literature for conventional single-deep automated warehousing systems. A proper sequence can greatly reduce the makespan when carrying out a group of such requests. Although the sequencing problem is NP-hard, some very good heuristics exist. Surprisingly, the problem has not yet been investigated for compact (multi-deep) storage systems, which have greatly increased in popularity over the last decade. This article studies how to sequence a group (or block) of storage and retrieval requests in a multi-deep automated storage system with the objective of minimizing the makespan. Sequencing heuristics currently utilized for the multi-deep system are adapted in this article, and in addition a new heuristic, Percentage Priority to Retrievals with Shortest Leg (PPR-SL), is proposed and evaluated. It is shown that the PPR-SL heuristic consistently outperforms all of the other heuristics. Generally, it can outperform the benchmark First-Come First-Served (FCFS) heuristic by between 20 and 70%. The nearest neighbor heuristic, which performs very well in conventional single-deep storage systems, appears to perform poorly in the multi-deep system, even worse than FCFS. In addition, based on FCFS and PPR-SL, robust rack dimensions that yield a short makespan, regardless of the number of storage and retrieval requests, are found. Journal: IIE Transactions Pages: 69-87 Issue: 2 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.575441 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.575441 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:44:y:2012:i:2:p:69-87 Template-Type: ReDIF-Article 1.0 Author-Name: Simon Emde Author-X-Name-First: Simon Author-X-Name-Last: Emde Author-Name: Malte Fliedner Author-X-Name-First: Malte Author-X-Name-Last: Fliedner Author-Name: Nils Boysen Author-X-Name-First: Nils Author-X-Name-Last: Boysen Title: Optimally loading tow trains for just-in-time supply of mixed-model assembly lines Abstract: In today's mixed-model assembly production, there are two recent trends—namely, increasing vertical integration and the proliferation of product variety—that shift focus to an efficient just-in-time part supply. In this context, many automobile manufacturers set up decentralized logistics areas referred to as supermarkets. Here, small tow trains are loaded with parts and travel across the shop floor on specific routes to make frequent small-lot deliveries that are needed by the stations of the line. This article investigates the loading problem of tow trains, which aims at minimizing inventory near the line while avoiding material shortages given the limited capacity of the tow trains. An exact polynomial-time solution procedure is presented and interdependencies with production planning, that is, the sequencing problem of product models launched down the line, are investigated in a comprehensive computational study. Journal: IIE Transactions Pages: 121-135 Issue: 2 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.575442 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.575442 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:2:p:121-135 Template-Type: ReDIF-Article 1.0 Author-Name: Paul Berglund Author-X-Name-First: Paul Author-X-Name-Last: Berglund Author-Name: Rajan Batta Author-X-Name-First: Rajan Author-X-Name-Last: Batta Title: Optimal placement of warehouse cross-aisles in a picker-to-part warehouse with class-based storage Abstract: Given a picker-to-part warehouse having a simple rectilinear aisle arrangement with north–south storage aisles and east–west travel aisles (or “cross-aisles”), this article investigates the optimal placement of the cross-aisles as a consequence of the probability mass function of the order pick points, as determined by the storage policy. That is, what placement of the cross-aisles will result in a minimal expected path length for the picker? An analytical solution procedure is developed for the optimal placement of a single middle cross-aisle for a given storage policy. A simplifying assumption is made as regards picker routing, but arbitrary non-random storage policies are considered. The solution procedure is generalized to a method for multiple cross-aisles. Some example problems are solved and a simulation study is used to measure the impact of the assumptions made to generate the method. Journal: IIE Transactions Pages: 107-120 Issue: 2 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.578608 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.578608 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:uiiexx:v:44:y:2012:i:2:p:107-120 Template-Type: ReDIF-Article 1.0 Author-Name: Soondo Hong Author-X-Name-First: Soondo Author-X-Name-Last: Hong Author-Name: Andrew Johnson Author-X-Name-First: Andrew Author-X-Name-Last: Johnson Author-Name: Brett Peters Author-X-Name-First: Brett Author-X-Name-Last: Peters Title: Large-scale order batching in parallel-aisle picking systems Abstract: This article discusses an order batching formulation and a heuristic solution procedure suitable for large-scale order picking situations in parallel-aisle picking systems. Order batching can decrease the total travel distance of pickers not only by reducing the number of trips but also by shortening the length of each trip. In practice, some order picking systems retrieve 500–2000 orders per hour and include ten or more aisles. The proposed heuristic produces near-optimal solutions with run times of roughly 70 s in a ten-aisle system. The quality of the solutions is demonstrated by comparison with a lower bound obtained from a linear programming relaxation of the batching formulation developed in this article. A simulation study indicates that the proposed heuristic outperforms existing methods described in the literature or used in practice. In addition, the resulting order picking operations are relatively robust to picker blocking. Journal: IIE Transactions Pages: 88-106 Issue: 2 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.588994 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.588994 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:2:p:88-106 Template-Type: ReDIF-Article 1.0 Author-Name: Sainath Gopinath Author-X-Name-First: Sainath Author-X-Name-Last: Gopinath Author-Name: Theodor Freiheit Author-X-Name-First: Theodor Author-X-Name-Last: Freiheit Title: A waste relationship model and center point tracking metric for lean manufacturing systems Abstract: Lean manufacturing is about eliminating waste, which requires waste metrics that are tracked in order to create the conditions for elimination. In this article, metrics used to monitor the seven traditional non-value-adding waste types of overproduction, defects, transportation, waiting, inventory, motion, and processing are explored, and a “center point metric pair” is proposed that can give systematic insight into system waste performance and trade-offs. For example, lower work-in-process levels (inventory waste) may require more replenishment (transportation waste) in order to maintain production. A waste relationship model is proposed that can be used to derive the relationship between different wastes in a Pareto-optimal waste-dependent lean system. The trade-off relationships are statistically verified using simulation experiments across different system configurations, complexities, and planning scenarios. Journal: IIE Transactions Pages: 136-154 Issue: 2 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.593609 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.593609 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:44:y:2012:i:2:p:136-154 Template-Type: ReDIF-Article 1.0 Author-Name: Salil Desai Author-X-Name-First: Salil Author-X-Name-Last: Desai Author-Name: Taye Esho Author-X-Name-First: Taye Author-X-Name-Last: Esho Author-Name: Ravindra Kaware Author-X-Name-First: Ravindra Author-X-Name-Last: Kaware Title: Experimental investigation of controlled microdroplet evaporation toward scalable micro/nanomanufacturing Abstract: This article focuses on an experimental investigation of microdroplet evaporation as a step toward developing a scalable droplet-based micro/nanomanufacturing process. A customized direct-write inkjet setup is utilized to generate monodisperse microdroplets for two fluid types (acetone and distilled water). The microdroplet evaporation dynamics was studied using light-emitting diode strobe-based high-speed photography. The microdroplet was heated using convective heat transfer via a resistive heating ring fixture that controlled the heat flux. The effect of nozzle size and fluid type on microdroplet size reduction was investigated. The output responses include percentage volume reductions, drop size shrinkage, and changes in the surface-to-volume ratio. The experimental results were validated with an equivalent theoretical model and close agreement between the results was obtained. This research provides a basic understanding of the evaporation dynamics of microdroplets and is a precursor toward their transition to the sub-micrometer and nano regimes. Journal: IIE Transactions Pages: 155-162 Issue: 2 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.593610 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.593610 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:2:p:155-162 Template-Type: ReDIF-Article 1.0 Author-Name: Yenho Chung Author-X-Name-First: Yenho Author-X-Name-Last: Chung Author-Name: Feryal Erhun Author-X-Name-First: Feryal Author-X-Name-Last: Erhun Title: Designing supply contracts for perishable goods with two periods of shelf life Abstract: A critical feature of a perishable product is its age. When “old” units are on the shelf along with “young” units, a supplier who is designing a contract for a buyer must consider the product’s age. While numerous papers in the contracting literature have discussed channel coordination, none have studied the case in which the supplier needs to account for both old and young units. This article considers a supply chain for perishable goods with a two-period shelf life to address the aforementioned supplier’s problem. A two-level wholesale price contract, a two-level buy-back contract, and a buy-back contract with channel rebates are studied. The channel-coordinating conditions under each of these contracts are demonstrated. Although all three contracts can coordinate the channel, potential implementation issues with each of them are discussed and guidance for suppliers who want to design a contract when selling both old and young units is provided. Journal: IIE Transactions Pages: 53-67 Issue: 1 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.654847 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.654847 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
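For the microdroplet evaporation record above (Desai, Esho, and Kaware), the classical d²-law is the standard first-order reference model for droplet size reduction: the squared diameter decays linearly in time. The sketch below uses it with hypothetical constants purely as an illustration; it is not the equivalent theoretical model validated in the article.

    import numpy as np

    d0 = 60e-6                     # hypothetical initial diameter (m)
    K = 1.0e-9                     # hypothetical evaporation constant (m^2/s)
    t = np.linspace(0.0, 2.0, 5)   # time (s)
    d = np.sqrt(np.maximum(d0**2 - K * t, 0.0))   # d^2-law: d^2 = d0^2 - K t
    volume_reduction = 1.0 - (d / d0) ** 3        # fractional volume lost
    surface_to_volume = 6.0 / d                   # ratio for a sphere, 6/d
    print(volume_reduction.round(3))
    print(surface_to_volume.round(0))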
Handle: RePEc:taf:uiiexx:v:45:y:2013:i:1:p:53-67 Template-Type: ReDIF-Article 1.0 Author-Name: Chung-Yee Lee Author-X-Name-First: Chung-Yee Author-X-Name-Last: Lee Author-Name: Ruina Yang Author-X-Name-First: Ruina Author-X-Name-Last: Yang Title: Supply chain contracting with competing suppliers under asymmetric information Abstract: This article employs a screening model to examine the problem of supply chain contracting involving one retailer and two suppliers. The two suppliers compete to sell their products, which are partial substitutes, through a common retailer. The problem is analyzed using a two-stage game. In the first stage, suppliers independently but simultaneously announce the contract bundles. The retailer, who is closer to customers, has superior market information and decides which contracts to sign. Then the suppliers invest in raw materials. In the second stage, the retailer sets prices, which in turn determine the demand rates of products, to optimize its profit. The game is analyzed for two types of contracts: two-part tariff contracts and quantity discount contracts. The retailer’s optimal strategy is derived and the suppliers’ optimal contract design for both types of contracts is fully characterized. In particular, the performance of the two types of contracts is evaluated when the two products are independent. The result suggests that the information rent is higher under quantity discounts than two-part tariffs, although the latter makes the supplier better off. Additionally, the two types of contracts are compared in terms of total supply chain profit, information rent, and suppliers’ expected profits when the two products are imperfect substitutes. Both analytical and numerical results support the proposed approach. Journal: IIE Transactions Pages: 25-52 Issue: 1 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.662308 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.662308 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:1:p:25-52 Template-Type: ReDIF-Article 1.0 Author-Name: Laura McLay Author-X-Name-First: Laura Author-X-Name-Last: McLay Author-Name: Maria Mayorga Author-X-Name-First: Maria Author-X-Name-Last: Mayorga Title: A model for optimally dispatching ambulances to emergency calls with classification errors in patient priorities Abstract: The decision of which servers to dispatch to which customers is an important aspect of service systems. Such decisions are complicated when servers have different operating characteristics, customers are prioritized, and there are errors in assessing customer priorities. This article formulates a model for determining how to optimally dispatch servers to prioritized customers given that dispatchers make classification errors in assessing the true customer priorities. These issues are examined through the lens of Emergency Medical Service (EMS) dispatch, for which a Markov Decision Process (MDP) model is developed that captures how to optimally dispatch ambulances (servers) to prioritized patients (customers). It is assumed that patients arrive sequentially, with the location and perceived priority of each patient becoming known upon arrival. The proposed model determines how to optimally dispatch ambulances to patients to maximize the long-run average utility of the system, defined as the expected coverage of true high-risk patients.
The utilities and transition probabilities are location dependent, with respect to both the ambulance and patient locations. The analysis considers two cases for approaching the classification errors that correspond to over- and under-responding to perceived patient risk. A computational example is applied to an EMS system. The optimal policies under different classification strategies are compared to a myopic policy and the effect that classification errors have on the performance of these policies is examined. Simulations suggest that the policies remain effective when they are applied to more realistic situations. Journal: IIE Transactions Pages: 1-24 Issue: 1 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.665200 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.665200 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:1:p:1-24 Template-Type: ReDIF-Article 1.0 Author-Name: Ted Klastorin Author-X-Name-First: Ted Author-X-Name-Last: Klastorin Author-Name: Gary Mitchell Author-X-Name-First: Gary Author-X-Name-Last: Mitchell Title: Optimal project planning under the threat of a disruptive event Abstract: This article considers the problem of planning a complex project when there is the possibility of a Disruptive Event (DE) occurring sometime during the project. If such a disruption occurs, work on all activities will stop for some (random) time, but overhead and indirect costs will continue to accrue as well as possible penalty costs. Given information about the likelihood of such an event, how should a risk-neutral manager who wants to minimize the expected total cost of the project react (where total cost includes direct labor costs, indirect/overhead costs, and penalty costs)? Should a manager take preventive action at the start of the project (i.e., build additional slack into the project beyond that of a normal cost-minimizing schedule), act at any time during the project after gaining more information about the likelihood of a disruption, or wait until the DE occurs? The problem is formulated as a stochastic dynamic programming problem and this model is used to demonstrate several important implications for managers who face the threat of potential DEs. An efficient algorithm is described that can find the optimal compression strategies for large-scale projects; a numerical example illustrates both the algorithm and implications. Journal: IIE Transactions Pages: 68-80 Issue: 1 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.682700 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.682700 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:1:p:68-80 Template-Type: ReDIF-Article 1.0 Author-Name: Oded Berman Author-X-Name-First: Oded Author-X-Name-Last: Berman Author-Name: Iman Hajizadeh Author-X-Name-First: Iman Author-X-Name-Last: Hajizadeh Author-Name: Dmitry Krass Author-X-Name-First: Dmitry Author-X-Name-Last: Krass Title: The maximum covering problem with travel time uncertainty Abstract: Both public and private facilities often have to provide adequate service under a variety of conditions. In particular, travel times, which determine customer access, change due to changing traffic patterns throughout the day, as well as a result of special events ranging from traffic accidents to natural disasters.
This article studies the maximum covering location problem on a network with travel time uncertainty represented by different travel time scenarios. Three model types—expected covering, robust covering, and expected p-robust covering—are studied; each one is appropriate for different types of facilities operating under different conditions. Exact and approximate algorithms are developed. The models are applied to the analysis of the location of fire stations in the city of Toronto. Using real traffic data, it is shown that the current system design is quite far from optimality. The best locations for the four new fire stations that the city of Toronto is planning to add to the system are determined and alternative improvement plans are discussed. Journal: IIE Transactions Pages: 81-96 Issue: 1 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.689121 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.689121 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:1:p:81-96 Template-Type: ReDIF-Article 1.0 Author-Name: Burak Boyaci Author-X-Name-First: Burak Author-X-Name-Last: Boyaci Author-Name: İ. Altinel Author-X-Name-First: İ. Author-X-Name-Last: Altinel Author-Name: Necat Aras Author-X-Name-First: Necat Author-X-Name-Last: Aras Title: Approximate solution methods for the capacitated multi-facility Weber problem Abstract: This work considers the capacitated multi-facility Weber problem, which is concerned with locating m facilities and allocating their limited capacities to n customers in order to satisfy their demand at minimum total transportation cost. This is a non-convex optimization problem and difficult to solve. Therefore, approximate solution methods are proposed in this article. Some of them are based on the relaxation of the capacity constraints and apply the subgradient algorithm. The resulting Lagrangian subproblem is a variant of the well-known multi-facility Weber problem and can be solved using column generation and a branch-and-price approach on a variant of the set covering formulation. Others are based on the approximating mixed-integer linear programming formulations obtained by exploiting norm properties and the alternate solution of the discrete location and transportation problems. The results of a detailed computational analysis are also reported. Journal: IIE Transactions Pages: 97-120 Issue: 1 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.695100 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.695100 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:1:p:97-120 Template-Type: ReDIF-Article 1.0 Author-Name: Elisa Gebennini Author-X-Name-First: Elisa Author-X-Name-Last: Gebennini Author-Name: Andrea Grassi Author-X-Name-First: Andrea Author-X-Name-Last: Grassi Title: Discrete-time model for two-machine one-buffer transfer lines with buffer bypass and two capacity levels Abstract: This article deals with the analytical modeling of transfer lines consisting of two machines decoupled by one finite buffer. The innovative contribution of this work consists in representing a particular behavior that can be found in a number of industrial applications, such as in the ceramics and electronics industries.
Specifically, the buffer significantly affects the line’s performance as, when it is accumulating or releasing material (i.e., when one machine is operational and the other machine is under repair), it forces the operational machine to slow down. Conversely, when both machines are operational, they can work at a higher capacity since the buffer is bypassed. Thus, two levels for the machine capacity can be identified, based on the conditions of the machines and, consequently, the state of the buffer. The system is modeled as a discrete-time, discrete-state Markov process. The resulting two-Machine one-Buffer Model with Buffer Bypass is here called the 2M-1B-BB model. The analytical solution of the model is obtained and mathematical expressions of the most important performance measures are provided. Finally, some numerical results are discussed. Journal: IIE Transactions Pages: 715-727 Issue: 7 Volume: 47 Year: 2015 Month: 7 X-DOI: 10.1080/0740817X.2014.952849 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.952849 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:7:p:715-727 Template-Type: ReDIF-Article 1.0 Author-Name: Mengying Fu Author-X-Name-First: Mengying Author-X-Name-Last: Fu Author-Name: Ronald Askin Author-X-Name-First: Ronald Author-X-Name-Last: Askin Author-Name: John Fowler Author-X-Name-First: John Author-X-Name-Last: Fowler Author-Name: Muhong Zhang Author-X-Name-First: Muhong Author-X-Name-Last: Zhang Title: Stochastic optimization of product–machine qualification in a semiconductor back-end facility Abstract: In order to process a product in a semiconductor back-end facility, a machine needs to be qualified, first by having product-specific software installed and then running test wafers through it to verify that the machine is capable of performing the process correctly. In general, not all machines are qualified to process all products due to the high machine qualification cost and tool set availability. The machine qualification decision affects future capacity allocation in the facility and subsequently affects daily production schedules. To balance the tradeoff between current machine qualification costs and future potential backorder costs incurred when too few machines are qualified under uncertain demand, a stochastic product–machine qualification optimization model is proposed in this article. The L-shaped method and acceleration techniques are proposed to solve the stochastic model. Computational results are provided to show the necessity of the stochastic model and the performance of different solution methods. Journal: IIE Transactions Pages: 739-750 Issue: 7 Volume: 47 Year: 2015 Month: 7 X-DOI: 10.1080/0740817X.2014.964887 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.964887 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:7:p:739-750 Template-Type: ReDIF-Article 1.0 Author-Name: Lixin Tang Author-X-Name-First: Lixin Author-X-Name-Last: Tang Author-Name: Wei Jiang Author-X-Name-First: Wei Author-X-Name-Last: Jiang Author-Name: Jiyin Liu Author-X-Name-First: Jiyin Author-X-Name-Last: Liu Author-Name: Yun Dong Author-X-Name-First: Yun Author-X-Name-Last: Dong Title: Research into container reshuffling and stacking problems in container terminal yards Abstract: Container stacking and reshuffling are important issues in the management of operations in a container terminal.
Minimizing the number of reshuffles can increase productivity of the yard cranes and the efficiency of the terminal. In this research, the authors improve the existing static reshuffling model, develop five effective heuristics, and analyze the performance of these algorithms. A discrete-event simulation model is developed to animate the stacking, retrieving, and reshuffling operations and to test the performance of the proposed heuristics and their extended versions in a dynamic environment with arrivals and retrievals of containers. The experimental results for the static problem show that the improved model can solve the reshuffling problem more quickly than the existing model and the proposed extended heuristics are superior to the existing ones. The experimental results for the dynamic problem show that the results of the extended versions of the five proposed heuristics are superior or similar to the best results of the existing heuristics and require very little computation time. Journal: IIE Transactions Pages: 751-766 Issue: 7 Volume: 47 Year: 2015 Month: 7 X-DOI: 10.1080/0740817X.2014.971201 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.971201 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:7:p:751-766 Template-Type: ReDIF-Article 1.0 Author-Name: Ravindra Kaware Author-X-Name-First: Ravindra Author-X-Name-Last: Kaware Author-Name: Salil Desai Author-X-Name-First: Salil Author-X-Name-Last: Desai Title: Molecular dynamics modeling of water nanodroplet spreading on topographically patterned silicon dioxide and silicon nitride substrates Abstract: This article reports the investigation of the spreading behavior of a nanodroplet in a droplet-based scalable nanomanufacturing process using Molecular Dynamics (MD) modeling and simulation. The objective of the study is to understand the effect of substrate topology on the wetting behavior of nanodroplets at the molecular level. A water nanodroplet spreading on silicon dioxide (SiO₂) and silicon nitride (Si₃N₄) substrates with different topologies was studied. A migration of the SiO₂–water system from a hydrophilic to a hydrophobic interaction was observed with an increase in the aspect ratio of the patterns. In contrast, for the Si₃N₄–water system the fluid–structural interaction shifted from a hydrophobic to a hydrophilic behavior for patterns with correspondingly higher aspect ratios. The MD models were validated using molecular kinetic theory. This research provides a foundation for extending the functional range of substrate and solvent combinations by manipulating the substrate topology. The results of this work are expected to support effective control of the hydrophobic/hydrophilic nature of the substrates and thereby aid in the prediction of nanofeature deposition in droplet-based micro/nanomanufacturing processes. Journal: IIE Transactions Pages: 767-782 Issue: 7 Volume: 47 Year: 2015 Month: 7 X-DOI: 10.1080/0740817X.2014.973983 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.973983 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
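For the container reshuffling record above (Tang, Jiang, Liu, and Dong), a standard lower bound on the number of reshuffles counts blocking containers: every container stacked above one that must be retrieved earlier forces at least one move. The bay encoding below is hypothetical, and the bound is a generic one from this literature rather than the authors' model.

    def blocking_lower_bound(stacks):
        """Count containers that sit above a container with an earlier
        retrieval order; each such container forces at least one reshuffle.
        Stacks are listed bottom-up; a smaller label means retrieved earlier."""
        count = 0
        for stack in stacks:
            for i, c in enumerate(stack):
                if any(below < c for below in stack[:i]):
                    count += 1
        return count

    bay = [[3, 1], [2, 5, 4]]            # two stacks, bottom-up, hypothetical
    print(blocking_lower_bound(bay))     # containers 5 and 4 block container 2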
Handle: RePEc:taf:uiiexx:v:47:y:2015:i:7:p:767-782 Template-Type: ReDIF-Article 1.0 Author-Name: Svenja Lagershausen Author-X-Name-First: Svenja Author-X-Name-Last: Lagershausen Author-Name: Bariş Tan Author-X-Name-First: Bariş Author-X-Name-Last: Tan Title: On the Exact Inter-departure, Inter-start, and Cycle Time Distribution of Closed Queueing Networks Subject to Blocking Abstract: This paper presents a method to determine the exact inter-departure, inter-start, and cycle time distribution of closed queueing networks that can be modeled as Continuous-Time Markov Chains with finite state space. The method is based on extending the state space to determine the transitions that lead to a departure or to an arrival of a part at a station. Once these transitions are identified and represented in an indicator matrix, a first passage time analysis is utilized to determine the exact distributions of the inter-departure, inter-start, and cycle time. In order to demonstrate the methodology, we consider closed-loop production lines with phase-type service time distributions and finite buffers. We discuss the methodology to automatically generate the state space and to obtain the transition rate matrices for the considered distributions. We use the proposed method to analyze the effects of the system parameters on the inter-departure, inter-start time, and cycle time distributions numerically for various cases. The proposed methodology allows the exact analysis of the inter-departure, inter-start, and cycle time distributions of a wide range of production systems with phase-type servers that can be modeled as Continuous-Time Markov Chains in a unified way. Journal: IIE Transactions Pages: 673-692 Issue: 7 Volume: 47 Year: 2015 Month: 7 X-DOI: 10.1080/0740817X.2014.982841 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.982841 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:7:p:673-692 Template-Type: ReDIF-Article 1.0 Author-Name: Sanket Bhat Author-X-Name-First: Sanket Author-X-Name-Last: Bhat Author-Name: Ananth Krishnamurthy Author-X-Name-First: Ananth Author-X-Name-Last: Krishnamurthy Title: Value of capacity flexibility in manufacturing systems with seasonal demands Abstract: In this article, we analyze a manufacturing system subject to seasonal demands. The system is modeled as a single-stage production facility with flexible production rate, and seasonal demands are modeled using a Markov-modulated Poisson process. Using dynamic programming, we develop optimal policies under the infinite-horizon discounted expected cost and the infinite-horizon average expected cost criteria. We prove that the optimal policy is a season-dependent base-stock policy with state-dependent production rates. Furthermore, we identify the monotonic structure of the optimal policy with respect to the net inventory, the demand in a season, and the overall workload on the system. We then illustrate the value of joint flexibility in capacity and inventory levels under the seasonal demand settings. Numerical comparisons with the policies used in practice demonstrate the value of capacity flexibility. Journal: IIE Transactions Pages: 693-714 Issue: 7 Volume: 47 Year: 2015 Month: 7 X-DOI: 10.1080/0740817X.2014.991477 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.991477 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
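The closed queueing network record above (Lagershausen and Tan) builds its first-passage analysis on a finite Continuous-Time Markov Chain representation. The sketch below shows only the foundational step, solving pi Q = 0 with the probabilities summing to one for a hypothetical three-state generator; the extended state space and indicator-matrix construction of the article are not reproduced.

    import numpy as np

    # Hypothetical 3-state generator matrix Q (each row sums to zero).
    Q = np.array([[-2.0,  2.0,  0.0],
                  [ 1.0, -3.0,  2.0],
                  [ 0.0,  4.0, -4.0]])

    # Solve pi Q = 0 with sum(pi) = 1 by replacing one balance equation
    # with the normalization constraint.
    A = np.vstack([Q.T[:-1], np.ones(Q.shape[0])])
    b = np.zeros(Q.shape[0]); b[-1] = 1.0
    pi = np.linalg.solve(A, b)
    print(pi)    # stationary distribution of the chain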
Handle: RePEc:taf:uiiexx:v:47:y:2015:i:7:p:693-714 Template-Type: ReDIF-Article 1.0 Author-Name: İbrahim Muter Author-X-Name-First: İbrahim Author-X-Name-Last: Muter Author-Name: Temel Öncan Author-X-Name-First: Temel Author-X-Name-Last: Öncan Title: An exact solution approach for the order batching problem Abstract: In this article, we deal with the Order Batching Problem (OBP) considering traversal, return, and midpoint routing policies. We consider the Set Partitioning Problem formulation of the OBP and develop a specially tailored column generation–based algorithm for this problem. We suggest acceleration techniques such as a column pool strategy and a relaxation of the column generation subproblem. Also, a specially devised upper-bounding procedure and a lower-bounding method based on column generation that is strengthened by adding subset-row inequalities are employed. According to the computational results, the proposed solution approach manages to solve OBP instances with up to 100 orders to optimality. Journal: IIE Transactions Pages: 728-738 Issue: 7 Volume: 47 Year: 2015 Month: 7 X-DOI: 10.1080/0740817X.2014.991478 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.991478 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:7:p:728-738 Template-Type: ReDIF-Article 1.0 Author-Name: Sheng-Tsaing Tseng Author-X-Name-First: Sheng-Tsaing Author-X-Name-Last: Tseng Author-Name: Bo-Yan Jou Author-X-Name-First: Bo-Yan Author-X-Name-Last: Jou Author-Name: Chuan-Hao Liao Author-X-Name-First: Chuan-Hao Author-X-Name-Last: Liao Title: Adaptive variable EWMA controller for drifted processes Abstract: The double exponentially weighted moving average (dEWMA) feedback controller is a popular run-to-run (RTR) controller that is used to adjust semiconductor manufacturing processes that have a linear drift. Although this controller, with suitable fixed discount factors, can guarantee long-term stability, it usually requires a moderately large number of runs to bring the process output to its target value. To overcome this difficulty, an enhanced controller, called the “adaptive variable EWMA controller,” is proposed. This controller brings the process output to a desired target much more quickly. An analytical expression of the process output of this controller and its long-term stability conditions are derived. Furthermore, examples are presented that compare the performance of the proposed controller with several competing controllers. It is demonstrated that the proposed controller has the best performance of the considered controllers in terms of the reduction of total mean square errors.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following supplemental resource: Appendix] Journal: IIE Transactions Pages: 247-259 Issue: 4 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170902735392 File-URL: http://hdl.handle.net/10.1080/07408170902735392 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
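For the run-to-run control record above (Tseng, Jou, and Liao), one common formulation of the fixed-discount dEWMA recursion that their adaptive variable EWMA controller is designed to improve upon is sketched below; the plant parameters, gain estimate, and discount factors are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    target = 0.0
    beta_true, drift = 1.5, 0.1     # hypothetical true plant gain and drift
    b = 1.2                         # controller's estimate of the gain
    w1, w2 = 0.3, 0.2               # fixed discount factors
    a = D = u = 0.0
    for t in range(1, 51):
        y = 2.0 + beta_true * u + drift * t + rng.normal(0.0, 0.1)
        a_prev = a
        a = w1 * (y - b * u) + (1 - w1) * a_prev          # intercept EWMA
        D = w2 * (y - b * u - a_prev) + (1 - w2) * D      # drift EWMA
        u = (target - a - D) / b    # recipe for the next run
    print(round(y, 3))              # output should have settled near the target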
Handle: RePEc:taf:uiiexx:v:42:y:2010:i:4:p:247-259 Template-Type: ReDIF-Article 1.0 Author-Name: Loon Tang Author-X-Name-First: Loon Author-X-Name-Last: Tang Author-Name: Shao Lam Author-X-Name-First: Shao Author-X-Name-Last: Lam Author-Name: Quock Ng Author-X-Name-First: Quock Author-X-Name-Last: Ng Author-Name: Jing Goh Author-X-Name-First: Jing Author-X-Name-Last: Goh Title: A reliability modeling framework for the hard disk drive development process Abstract: Motivated by the fact that the major causes of catastrophic failure in micro hard disk drives are mostly induced by the presence of particles, a new particle-induced failure susceptibility metric, called the Cumulative Particle Counts (CPC), is proposed for managing reliability risk in a fast-paced hard disk drive product development process. This work is thought to represent the first successful attempt to predict particle-induced failure through an accelerated testing framework which leverages existing streams of research for both particle-injection-based and inherent-particle-generation laboratory experiments to produce a practical reliability prediction framework. In particular, a new testing technique that injects particles into hard disk drives so as to increase the susceptibility of failure is introduced. The experimental results are then analyzed through a proposed framework which comprises the modeling of a CPC-to-failure distribution. The framework also requires the estimation of the growth curve for the CPC in a prime hard disk drive under normal operating conditions without particle injection. Both parametric and non-parametric inferences are presented for the estimation of the CPC growth curve. Statistical inferential procedures are developed in relation to a proposed non-linear CPC growth curve with a change-point. Finally, two applications of the framework to design selection during an actual hard disk drive development project and the subsequent assessment of reliability growth are discussed. Journal: IIE Transactions Pages: 260-272 Issue: 4 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170902906019 File-URL: http://hdl.handle.net/10.1080/07408170902906019 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:4:p:260-272 Template-Type: ReDIF-Article 1.0 Author-Name: Berna Dengiz Author-X-Name-First: Berna Author-X-Name-Last: Dengiz Author-Name: Fulya Altiparmak Author-X-Name-First: Fulya Author-X-Name-Last: Altiparmak Author-Name: Onder Belgin Author-X-Name-First: Onder Author-X-Name-Last: Belgin Title: Design of reliable communication networks: A hybrid ant colony optimization algorithm Abstract: This article proposes a hybrid approach based on Ant Colony Optimization (ACO) and Simulated Annealing (SA), called ACO_SA, for the design of communication networks. The design problem is to find the optimal network topology for which the total cost is a minimum and the all-terminal reliability is not less than a given level of reliability. The proposed ACO_SA has the advantages of the ability to find higher performance solutions, created by the ACO, and the ability to jump out of local minima to find better solutions, created by the SA. The effectiveness of ACO_SA is investigated by comparing its results with those obtained by individual application of SA and ACO, which are basic forms of ACO_SA, two different genetic algorithms and a probabilistic solution discovery algorithm given in the literature for the design problem.
Computational results show that ACO_SA has a better performance than its basic forms and the investigated heuristic approaches. Journal: IIE Transactions Pages: 273-287 Issue: 4 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903039836 File-URL: http://hdl.handle.net/10.1080/07408170903039836 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:4:p:273-287 Template-Type: ReDIF-Article 1.0 Author-Name: Eunshin Byon Author-X-Name-First: Eunshin Author-X-Name-Last: Byon Author-Name: Abhishek Shrivastava Author-X-Name-First: Abhishek Author-X-Name-Last: Shrivastava Author-Name: Yu Ding Author-X-Name-First: Yu Author-X-Name-Last: Ding Title: A classification procedure for highly imbalanced class sizes Abstract: This article develops an effective procedure for handling two-class classification problems with highly imbalanced class sizes. In many imbalanced two-class problems, the majority class represents “normal” cases, while the minority class represents “abnormal” cases, detection of which is critical to decision making. When the class sizes are highly imbalanced, conventional classification methods tend to strongly favor the majority class, resulting in very low or even no detection of the minority class. The research objective of this article is to devise a systematic procedure to substantially improve the power of detecting the minority class so that the resulting procedure can help screen the original data set and select a much smaller subset for further investigation. A procedure is developed that is based on ensemble classifiers, where each classifier is constructed from a resized training set with reduced dimension space. In addition, how to find the best values of the decision variables in the proposed classification procedure is specified. The proposed method is compared to a set of off-the-shelf classification methods using two real data sets. The prediction results of the proposed method show remarkable improvements over the other methods. The proposed method can detect about 75% of the minority class units, while the other methods yield much lower detection rates. Journal: IIE Transactions Pages: 288-303 Issue: 4 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903228967 File-URL: http://hdl.handle.net/10.1080/07408170903228967 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:4:p:288-303 Template-Type: ReDIF-Article 1.0 Author-Name: Young Chun Author-X-Name-First: Young Author-X-Name-Last: Chun Title: Bayesian inspection model for the production process subject to a random failure Abstract: Consider a sequence of items produced on a high-speed mass production line which is subject to a random failure. When an item in the sequence is inspected, it is possible to obtain directional information about the exact timing of a process failure—before or after producing the inspected item. Using this directional information, this paper proposes Bayesian inspection procedures that deal with three related problems: (i) how often to inspect items on the production line; (ii) how to conduct the search for more defective items; and (iii) when to stop the search process and salvage the remaining items. Based on various cost factors, the problem of the optimal inspection interval, the optimal search process, and the optimal stopping rule is formulated as a profit-maximization model via a dynamic programming approach.
For the production process with an unknown failure rate, Bayesian methods of estimating the process failure rate are proposed. The proposed Bayesian inspection procedures can be applied to a wide variety of high-speed mass production processes such as printing labels, filling containers or mixing ingredients. Journal: IIE Transactions Pages: 304-316 Issue: 4 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903228975 File-URL: http://hdl.handle.net/10.1080/07408170903228975 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:4:p:304-316 Template-Type: ReDIF-Article 1.0 Author-Name: Lawrence Leemis Author-X-Name-First: Lawrence Author-X-Name-Last: Leemis Title: Reliability growth via testing Abstract: Observed data values are typically assumed to come from an infinite population of items in reliability and survival analysis applications. The case of a finite population of items with exponentially distributed lifetimes is considered here. The data set consists of the lifetimes of a large number of items that are known to have exponentially distributed failure times with a failure rate that is known with high precision. Failure of the items is not self-announcing, as is the case with a smoke detector. A significant fraction of the items are sampled periodically, and the items that have failed are repaired to a like-new condition with respect to their survival distribution. The goal is to assess the impact of this periodic sampling and repair on the overall finite population reliability over time. Journal: IIE Transactions Pages: 317-324 Issue: 4 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903398000 File-URL: http://hdl.handle.net/10.1080/07408170903398000 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:4:p:317-324 Template-Type: ReDIF-Article 1.0 Author-Name: Wei Zhang Author-X-Name-First: Wei Author-X-Name-Last: Zhang Author-Name: Zhongsheng Hua Author-X-Name-First: Zhongsheng Author-X-Name-Last: Hua Author-Name: Yu Xia Author-X-Name-First: Yu Author-X-Name-Last: Xia Author-Name: Baofeng Huo Author-X-Name-First: Baofeng Author-X-Name-Last: Huo Title: Dynamic multi-technology production-inventory problem with emissions trading Abstract: We study a periodic-review multi-technology production-inventory problem of a single product with emissions trading over a planning horizon consisting of multiple periods. A manufacturer selects among multiple technologies with different unit production costs and emissions allowance consumption rates to produce the product to meet independently distributed random market demands. The manufacturer receives an emissions allowance at the beginning of the planning horizon and is allowed to trade allowances through an outside market in each of the following periods. To solve the dynamic multi-technology production-inventory problem, we virtually separate the problem into an inner layer and an outer layer. Based on the structural properties of the two layers, we find that the optimal emissions trading policy follows a target interval policy with two thresholds, whereas the optimal production policy has a composite base-stock structure. Our theoretical results show that no more than two technologies should be selected simultaneously at any state. However, different groups of technologies may be selected at different states. 
Our numerical tests confirm that it can be economically beneficial for a manufacturer to maintain multiple available technologies. Journal: IIE Transactions Pages: 110-119 Issue: 2 Volume: 48 Year: 2016 Month: 2 X-DOI: 10.1080/0740817X.2015.1011357 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1011357 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:2:p:110-119 Template-Type: ReDIF-Article 1.0 Author-Name: Oguz Solyali Author-X-Name-First: Oguz Author-X-Name-Last: Solyali Author-Name: Meltem Denizel Author-X-Name-First: Meltem Author-X-Name-Last: Denizel Author-Name: Haldun Süral Author-X-Name-First: Haldun Author-X-Name-Last: Süral Title: Effective network formulations for lot sizing with backlogging in two-level serial supply chains Abstract: This study considers the serial lot sizing problem with backlogging in two-level supply chains to determine when and how much to order at a warehouse and ship to a retailer over a T-period planning horizon so that the known external demand occurring at the retailer is satisfied and the total cost at all levels is minimized. In particular, the uncapacitated two-level serial lot sizing problem with backlogging and the two-level serial lot sizing problem with cargo capacity and backlogging are formulated using effective shortest-path network representations, which define the convex hull of their feasible solutions. These representations lead to efficient algorithms with O(T³) time for the uncapacitated problem and O(T⁶) time for the capacitated problem. Furthermore, a tight reformulation with O(T³) variables and O(T²) constraints (resp. O(T⁶) variables and O(T⁵) constraints) is proposed for the uncapacitated (resp. capacitated) problem. Journal: IIE Transactions Pages: 146-157 Issue: 2 Volume: 48 Year: 2016 Month: 2 X-DOI: 10.1080/0740817X.2015.1027457 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1027457 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:2:p:146-157 Template-Type: ReDIF-Article 1.0 Author-Name: Emine Gundogdu Author-X-Name-First: Emine Author-X-Name-Last: Gundogdu Author-Name: Hakan Gultekin Author-X-Name-First: Hakan Author-X-Name-Last: Gultekin Title: Scheduling in two-machine robotic cells with a self-buffered robot Abstract: This study considers a production cell consisting of two machines and a material handling robot. The robot has a buffer space that moves with it. Identical parts are to be produced repetitively in this flowshop environment. The problem is to determine the cyclic schedule of the robot moves that maximizes the throughput rate. After developing the necessary framework to analyze such cells, we separately consider the single-, double-, and infinite-capacity buffer cases. For single- and double-capacity cases, consistent with the literature, we consider one-unit cycles that produce a single part in one repetition. We compare these cycles with each other and determine the set of undominated cycles. For the single-capacity case, we determine the parameter regions where each cycle is optimal, whereas for the double-capacity case, we determine efficient cycles and their worst-case performance bounds. For the infinite-capacity buffer case, we define a new class of cycles that better utilizes the benefits of the buffer space.
We derive all such cycles and determine the set of undominated ones. We perform a computational study where we investigate the benefits of robots with a buffer space and the effects of the size of the buffer space on the performance. We compare the performances of self-buffered robots, dual-gripper robots, and robots with swap ability. Journal: IIE Transactions Pages: 170-191 Issue: 2 Volume: 48 Year: 2016 Month: 2 X-DOI: 10.1080/0740817X.2015.1047475 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1047475 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:2:p:170-191 Template-Type: ReDIF-Article 1.0 Author-Name: Nils Boysen Author-X-Name-First: Nils Author-X-Name-Last: Boysen Author-Name: Dirk Briskorn Author-X-Name-First: Dirk Author-X-Name-Last: Briskorn Author-Name: Simon Emde Author-X-Name-First: Simon Author-X-Name-Last: Emde Title: Just-in-time vehicle scheduling with capacity constraints Abstract: This article treats a scheduling problem where timely departures of vehicles executing Just-In-Time (JIT) deliveries between a single supplier and a single receiver along with the assignment of supplies to vehicles are to be determined. This problem, for instance, arises in the automotive industry where parts are to be delivered JIT from a central distribution center to a car manufacturer. We define different subproblems and provide an analysis of computational complexity along with suitable solution procedures. Furthermore, we apply these procedures to an industry case, so that managerial implications are revealed; e.g., with regard to the impact of standardized containers on delivery costs. Journal: IIE Transactions Pages: 134-145 Issue: 2 Volume: 48 Year: 2016 Month: 2 X-DOI: 10.1080/0740817X.2015.1056390 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1056390 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:2:p:134-145 Template-Type: ReDIF-Article 1.0 Author-Name: Lawrence V. Snyder Author-X-Name-First: Lawrence V. Author-X-Name-Last: Snyder Author-Name: Zümbül Atan Author-X-Name-First: Zümbül Author-X-Name-Last: Atan Author-Name: Peng Peng Author-X-Name-First: Peng Author-X-Name-Last: Peng Author-Name: Ying Rong Author-X-Name-First: Ying Author-X-Name-Last: Rong Author-Name: Amanda J. Schmitt Author-X-Name-First: Amanda J. Author-X-Name-Last: Schmitt Author-Name: Burcu Sinsoysal Author-X-Name-First: Burcu Author-X-Name-Last: Sinsoysal Title: OR/MS models for supply chain disruptions: a review Abstract: We review the Operations Research/Management Science (OR/MS) literature on supply chain disruptions in order to take stock of the research to date and to provide an overview of the research questions that have been addressed. We first place disruptions in the context of other forms of supply uncertainty and discuss common modeling approaches. We then discuss 180 scholarly works on the topic, organized into six categories: evaluating supply disruptions; strategic decisions; sourcing decisions; contracts and incentives; inventory; and facility location. We conclude with a discussion of future research directions. Journal: IIE Transactions Pages: 89-109 Issue: 2 Volume: 48 Year: 2016 Month: 2 X-DOI: 10.1080/0740817X.2015.1067735 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1067735 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
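The two-level lot-sizing record above (Solyali, Denizel, and Süral) exploits shortest-path structure. The classical single-level, uncapacitated dynamic program sketched below (with hypothetical data, no backlogging, and no second echelon) shows that structure in its simplest form: each arc corresponds to a regeneration interval in which an order placed in period s covers demand through period t.

    # Classical single-level uncapacitated lot-sizing DP (Wagner-Whitin style),
    # shown only to illustrate the shortest-path structure; plain cubic version.
    def lot_size(demand, setup, hold):
        T = len(demand)
        INF = float("inf")
        cost = [0.0] + [INF] * T          # cost[t] = optimal cost for periods 1..t
        for t in range(1, T + 1):
            for s in range(1, t + 1):     # last order placed in period s covers s..t
                c = cost[s - 1] + setup
                c += sum(hold * (j - s) * demand[j - 1] for j in range(s, t + 1))
                cost[t] = min(cost[t], c)
        return cost[T]

    print(lot_size(demand=[20, 50, 10, 60], setup=100.0, hold=1.0))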
Handle: RePEc:taf:uiiexx:v:48:y:2016:i:2:p:89-109 Template-Type: ReDIF-Article 1.0 Author-Name: Kayse Lee Maass Author-X-Name-First: Kayse Lee Author-X-Name-Last: Maass Author-Name: Mark S. Daskin Author-X-Name-First: Mark S. Author-X-Name-Last: Daskin Author-Name: Siqian Shen Author-X-Name-First: Siqian Author-X-Name-Last: Shen Title: Mitigating hard capacity constraints with inventory in facility location modeling Abstract: Although the traditional capacitated facility location model uses inflexible, limited capacities, facility managers often have many operational tools to extend capacity or to allow a facility to accept demands in excess of the capacity constraint for short periods of time. We present a mixed-integer program that captures these operational extensions. In particular, demands are not restricted by the capacity constraint, as we allow for unprocessed materials from one day to be held over in inventory and processed on a following day. We also consider demands at a daily level, which allows us to explicitly incorporate the daily variation in, and possibly correlated nature of, demands. Large problem instances, in terms of the number of demand nodes, candidate nodes, and number of days in the time horizon, are generated from United States census population data. We demonstrate that, in some instances, optimal locations identified by the new model differ from those of the traditional capacitated facility location problem and result in significant cost savings. Journal: IIE Transactions Pages: 120-133 Issue: 2 Volume: 48 Year: 2016 Month: 2 X-DOI: 10.1080/0740817X.2015.1078015 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1078015 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:2:p:120-133 Template-Type: ReDIF-Article 1.0 Author-Name: Mark Mahyar Nejad Author-X-Name-First: Mark Mahyar Author-X-Name-Last: Nejad Author-Name: Lena Mashayekhy Author-X-Name-First: Lena Author-X-Name-Last: Mashayekhy Author-Name: Ratna Babu Chinnam Author-X-Name-First: Ratna Babu Author-X-Name-Last: Chinnam Author-Name: Anthony Phillips Author-X-Name-First: Anthony Author-X-Name-Last: Phillips Title: Hierarchical time-dependent shortest path algorithms for vehicle routing under ITS Abstract: The development of efficient algorithms for vehicle routing on time-dependent networks is one of the major challenges in routing under intelligent transportation systems. Existing vehicle routing navigation systems, whether built-in or portable, lack the ability to rely on online servers. Such systems must compute the route in a stand-alone mode with limited hardware processing/memory capacity given an origin/destination pair and departure time. In this article, we propose a computationally efficient, yet effective, hierarchical algorithm to solve the time-dependent shortest path problem. Our proposed algorithm exploits community-based hierarchical representations of road networks, and it recursively reduces the search space in each level of the hierarchy by using our proposed search strategy algorithm. Our proposed algorithm is efficient in terms of finding shortest paths in milliseconds for large-scale road networks while eliminating the need to store preprocessed shortest paths, shortcuts, lower bounds, etc. We demonstrate the performance of the proposed algorithm using data from Detroit, New York, and San Francisco road networks. 
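The routing record above (Nejad, Mashayekhy, Chinnam, and Phillips) has the time-dependent shortest path problem as its core subproblem. Below is a minimal label-setting (Dijkstra-style) sketch under the usual FIFO assumption on arc travel times; the toy graph and travel-time profiles are hypothetical, and the article's hierarchical search-space reduction is not reproduced.

    import heapq

    def td_dijkstra(graph, source, target, t0):
        """Time-dependent Dijkstra. graph[u] = list of (v, tau) where tau(t)
        returns the travel time of arc (u, v) when entered at time t;
        FIFO arcs are assumed, so label-setting remains valid."""
        best = {source: t0}
        pq = [(t0, source)]
        while pq:
            t, u = heapq.heappop(pq)
            if u == target:
                return t
            if t > best.get(u, float("inf")):
                continue                     # stale label
            for v, tau in graph[u]:
                arr = t + tau(t)
                if arr < best.get(v, float("inf")):
                    best[v] = arr
                    heapq.heappush(pq, (arr, v))
        return float("inf")

    rush = lambda t: 4.0 if 7 <= t % 24 < 9 else 2.0    # hypothetical arc profile
    flat = lambda t: 3.0
    g = {"A": [("B", rush), ("C", flat)], "B": [("D", flat)], "C": [("D", rush)], "D": []}
    print(td_dijkstra(g, "A", "D", t0=7.5))             # earliest arrival time at D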
Journal: IIE Transactions Pages: 158-169 Issue: 2 Volume: 48 Year: 2016 Month: 2 X-DOI: 10.1080/0740817X.2015.1078523 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1078523 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:2:p:158-169 Template-Type: ReDIF-Article 1.0 Author-Name: Arthur Yeh Author-X-Name-First: Arthur Author-X-Name-Last: Yeh Author-Name: Longcheen Huwang Author-X-Name-First: Longcheen Author-X-Name-Last: Huwang Author-Name: Yu-Mei Li Author-X-Name-First: Yu-Mei Author-X-Name-Last: Li Title: Profile monitoring for a binary response Abstract: Pertaining to industrial applications in which the response variable of interest is binary, this paper studies how the profile functional relationship between the response and predictor variables can be monitored using logistic regression. Under such a premise, several Hotelling T² charts that have been studied for continuous response variables are extended to binary response variables for the purpose of Phase I profile monitoring. The performance of these T² charts in terms of the signal probability for different out-of-control scenarios is compared based on simulation studies. A real example originating from aircraft construction is given in which these T² charts are applied and compared using the data. A discussion of potential future research is also given. Journal: IIE Transactions Pages: 931-941 Issue: 11 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170902735400 File-URL: http://hdl.handle.net/10.1080/07408170902735400 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:11:p:931-941 Template-Type: ReDIF-Article 1.0 Author-Name: Matthias Tan Author-X-Name-First: Matthias Author-X-Name-Last: Tan Author-Name: Szu Ng Author-X-Name-First: Szu Author-X-Name-Last: Ng Title: Estimation of the mean and variance response surfaces when the means and variances of the noise variables are unknown Abstract: The means and variances of noise variables are typically assumed known in the design and analysis of robust design experiments. However, these parameters are often not known with certainty and are estimated from field data. Standard experimentation and optimization conducted with the estimated parameters can lead to results that are far from optimal due to variability in the data. In this paper, the estimation of the mean and variance response surfaces is considered using a combined array experiment in which estimates of the means and variances of the noise variables are obtained from random samples. The effects of random sampling error on the estimated mean and variance models are studied and a method to guide the design of the sampling effort and experiment to improve the estimation of the models is proposed. Mathematical programs are formulated to find the sample sizes for the noise variables and number of factorial, axial and center point replicates for a mixed resolution design that minimize the average variances of the estimators for the mean and variance models. Furthermore, an algorithm is proposed to find the optimal design and sample sizes given a candidate set of design points.[Supplementary materials are available for this article.
Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix] Journal: IIE Transactions Pages: 942-956 Issue: 11 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170902735418 File-URL: http://hdl.handle.net/10.1080/07408170902735418 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:11:p:942-956 Template-Type: ReDIF-Article 1.0 Author-Name: Sangmun Shin Author-X-Name-First: Sangmun Author-X-Name-Last: Shin Author-Name: Byung Cho Author-X-Name-First: Byung Author-X-Name-Last: Cho Title: Studies on a biobjective robust design optimization problem Abstract: The vast majority of the multiobjective robust design research reported in the literature has been performed under the assumption of multiple quality characteristics. This paper differs in that the process mean and variance are considered to be a biobjective problem since the primary goal of robust design is to determine the optimal robust design factor settings by minimizing performance variability and deviation from a target value of a product. A more comprehensive set of solutions is developed using a lexicographic weighted Tchebycheff approach to the biobjective robust design model rather than the approaches traditionally used in the dual-response approach to obtain efficient solutions. Numerical examples show that the proposed model is far more effective than the traditional weighted sum approach.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix with mathematical proof, figures, and tables.] Journal: IIE Transactions Pages: 957-968 Issue: 11 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170902789084 File-URL: http://hdl.handle.net/10.1080/07408170902789084 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:11:p:957-968 Template-Type: ReDIF-Article 1.0 Author-Name: Navinchandra Acharya Author-X-Name-First: Navinchandra Author-X-Name-Last: Acharya Author-Name: Harriet Nembhard Author-X-Name-First: Harriet Author-X-Name-Last: Nembhard Title: Bayesian algorithms for missing observations in experimental designs for a nanolubrication process Abstract: Three new Bayesian algorithms for missing observations based on predictive ability and minimization of the Residual Sum of Squares (RSS) are proposed. Their performance is compared to three existing algorithms based on an appropriate predicted residual error sum of squares statistic. Different positions of the missing observations and initial model conditions are considered. In all the investigated cases, the Bayesian algorithms perform significantly better than non-Bayesian algorithms. A numerical study is performed using a nanolubrication process. It shows that the Bayesian complete RSS minimization algorithm yields the closest estimates of the missing observations, with the maximum predictive ability. Journal: IIE Transactions Pages: 969-978 Issue: 11 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170902806888 File-URL: http://hdl.handle.net/10.1080/07408170902806888 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
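The biobjective robust design record above (Shin and Cho) rests on the weighted Tchebycheff scalarization: minimize the larger of the weighted deviations of each objective from its ideal value. The sketch below applies that scalarization to hypothetical mean and variance response models over a one-dimensional grid of factor settings; the lexicographic tie-breaking stage is omitted.

    import numpy as np

    # Hypothetical fitted response surfaces for a single design factor x:
    mean = lambda x: 10.0 + 2.0 * x - 0.5 * x**2     # process mean model
    var = lambda x: 1.0 + (x - 1.0) ** 2             # process variance model
    target = 11.0

    xs = np.linspace(0.0, 3.0, 301)
    f1 = np.abs(mean(xs) - target)                   # bias objective
    f2 = var(xs) - var(xs).min()                     # variance above its ideal
    w1, w2 = 0.5, 0.5
    scal = np.maximum(w1 * f1, w2 * f2)              # weighted Tchebycheff value
    x_star = xs[int(np.argmin(scal))]
    print(round(float(x_star), 2))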
Handle: RePEc:taf:uiiexx:v:41:y:2009:i:11:p:969-978 Template-Type: ReDIF-Article 1.0 Author-Name: Joongsup (Jay) Lee Author-X-Name-First: Joongsup (Jay) Author-X-Name-Last: Lee Author-Name: Christos Alexopoulos Author-X-Name-First: Christos Author-X-Name-Last: Alexopoulos Author-Name: David Goldsman Author-X-Name-First: David Author-X-Name-Last: Goldsman Author-Name: Seong-Hee Kim Author-X-Name-First: Seong-Hee Author-X-Name-Last: Kim Author-Name: Kwok-Leung Tsui Author-X-Name-First: Kwok-Leung Author-X-Name-Last: Tsui Author-Name: James Wilson Author-X-Name-First: James Author-X-Name-Last: Wilson Title: Monitoring autocorrelated processes using a distribution-free tabular CUSUM chart with automated variance estimation Abstract: We formulate and evaluate distribution-free statistical process control (SPC) charts for monitoring shifts in the mean of an autocorrelated process when a training data set is used to estimate the marginal variance of the process and the variance parameter (i.e., the sum of covariances at all lags). Two alternative variance estimators are adapted for automated use in DFTC-VE, a distribution-free tabular CUSUM chart, based on the simulation-analysis methods of standardized time series and a simplified combination of autoregressive representation and non-overlapping batch means. Extensive experimentation revealed that these variance estimators did not seriously degrade DFTC-VE's performance compared with its performance using the exact values of the marginal variance and the variance parameter. Moreover, DFTC-VE's performance compared favorably with that of other competing distribution-free SPC charts.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplementary resource: Appendix] Journal: IIE Transactions Pages: 979-994 Issue: 11 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170902906035 File-URL: http://hdl.handle.net/10.1080/07408170902906035 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:11:p:979-994 Template-Type: ReDIF-Article 1.0 Author-Name: Qianmei (May) Feng Author-X-Name-First: Qianmei (May) Author-X-Name-Last: Feng Author-Name: Hande Sahin Author-X-Name-First: Hande Author-X-Name-Last: Sahin Author-Name: Marvin Karson Author-X-Name-First: Marvin Author-X-Name-Last: Karson Title: Bayesian analysis models for aviation baggage screening Abstract: This paper not only develops Bayesian analysis models for addressing both single-level and sequential multiple-level baggage screening problems but also provides a new perspective on the design and evaluation of aviation security systems. Using the Bayesian approach, the operator's prior knowledge about the status of a bag is incorporated with present data. The posterior distributions for single-level and multiple-level screenings are developed, respectively. To evaluate the performance for Bayesian screening, two metrics are introduced and implemented: (i) system risk bounded to the posterior mean of undetected threats; and (ii) system direct cost per bag that incorporates purchasing costs, operating costs and processing rates. By evaluating the trade-off between system risk and system cost, this paper assesses different screening technologies and combinations of them for single-level and two-level systems. 
The findings from numerical analyses provide recommendations for the cost-effective selection of technologies with low risk levels.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix] Journal: IIE Transactions Pages: 995-1006 Issue: 11 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170902906043 File-URL: http://hdl.handle.net/10.1080/07408170902906043 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:11:p:995-1006 Template-Type: ReDIF-Article 1.0 Author-Name: Yongzhong Zhu Author-X-Name-First: Yongzhong Author-X-Name-Last: Zhu Author-Name: Wei Jiang Author-X-Name-First: Wei Author-X-Name-Last: Jiang Title: An adaptive chart for multivariate process monitoring and diagnosis Abstract: Hotelling's T2 chart is one of the most popular multivariate control charts for monitoring multiple variables simultaneously. The identification of response variables from T2 charts is an area that is currently receiving considerable attention. This paper proposes an adaptive T2 chart that combines process monitoring and diagnosis in a unified manner. The proposed procedure has a close relationship with the U2 chart, but it is data-oriented and does not require a priori knowledge of the potential shift space. It can adaptively capture shift information from the sample data to construct the U2 test statistic and is shown to be very competitive with other alternative charts for multivariate statistical process control in terms of both mean monitoring and fault diagnosis. Journal: IIE Transactions Pages: 1007-1018 Issue: 11 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170902942675 File-URL: http://hdl.handle.net/10.1080/07408170902942675 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:11:p:1007-1018 Template-Type: ReDIF-Article 1.0 Author-Name: Michel-Alexandre Cardin Author-X-Name-First: Michel-Alexandre Author-X-Name-Last: Cardin Author-Name: Qihui Xie Author-X-Name-First: Qihui Author-X-Name-Last: Xie Author-Name: Tsan Sheng Ng Author-X-Name-First: Tsan Sheng Author-X-Name-Last: Ng Author-Name: Shuming Wang Author-X-Name-First: Shuming Author-X-Name-Last: Wang Author-Name: Junfei Hu Author-X-Name-First: Junfei Author-X-Name-Last: Hu Title: An approach for analyzing and managing flexibility in engineering systems design based on decision rules and multistage stochastic programming Abstract: This article introduces an approach to assess the value of and manage flexibility in engineering systems design based on decision rules and stochastic programming. The approach differs from standard Real Options Analysis (ROA), which relies on dynamic programming, in that it parameterizes the decision variables used to design and manage the flexible system in operations. Decision rules are based on heuristic-triggering mechanisms that are used by Decision Makers (DMs) to determine when it is appropriate to exercise the flexibility. They can be treated similarly to, and combined with, physical design variables, and optimal values can be determined using multistage stochastic programming techniques. The proposed approach is applied, as a demonstration, to the analysis of a flexible hybrid waste-to-energy system with two independent flexibility strategies under two independent uncertainty drivers in an urban environment subject to growing waste generation.
Results show that the proposed approach recognizes the value of flexibility to a similar extent as the standard ROA. The form of the solution provides intuitive guidelines to DMs for exercising the flexibility in operations. The demonstration shows that the method is suitable for analyzing complex systems and problems when multiple uncertainty sources and different flexibility strategies are considered simultaneously. Journal: IISE Transactions Pages: 1-12 Issue: 1 Volume: 49 Year: 2017 Month: 1 X-DOI: 10.1080/0740817X.2016.1189627 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1189627 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:1:p:1-12 Template-Type: ReDIF-Article 1.0 Author-Name: Amir M. Aboutaleb Author-X-Name-First: Amir M. Author-X-Name-Last: Aboutaleb Author-Name: Linkan Bian Author-X-Name-First: Linkan Author-X-Name-Last: Bian Author-Name: Alaa Elwany Author-X-Name-First: Alaa Author-X-Name-Last: Elwany Author-Name: Nima Shamsaei Author-X-Name-First: Nima Author-X-Name-Last: Shamsaei Author-Name: Scott M. Thompson Author-X-Name-First: Scott M. Author-X-Name-Last: Thompson Author-Name: Gustavo Tapia Author-X-Name-First: Gustavo Author-X-Name-Last: Tapia Title: Accelerated process optimization for laser-based additive manufacturing by leveraging similar prior studies Abstract: Manufacturing parts with target properties and quality in Laser-Based Additive Manufacturing (LBAM) is crucial for enhancing the “trustworthiness” of this emerging technology and pushing it into the mainstream. Most of the existing LBAM studies do not use a systematic approach to optimize process parameters (e.g., laser power, laser velocity, layer thickness, etc.) for desired part properties. We propose a novel process optimization method that directly utilizes experimental data from previous studies as the initial experimental data to guide the sequential optimization experiments of the current study. This serves to reduce the total number of time- and cost-intensive experiments needed. We verify our method and test its performance via comprehensive simulation studies that consider various types of prior data. The results show that our method significantly reduces the number of optimization experiments, compared with conventional optimization methods. We also conduct a real-world case study that optimizes the relative density of parts manufactured using a Selective Laser Melting system. A combination of optimal process parameters is achieved within five experiments. Journal: IISE Transactions Pages: 31-44 Issue: 1 Volume: 49 Year: 2017 Month: 1 X-DOI: 10.1080/0740817X.2016.1189629 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1189629 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:1:p:31-44 Template-Type: ReDIF-Article 1.0 Author-Name: Xin Li Author-X-Name-First: Xin Author-X-Name-Last: Li Author-Name: Phong H. Tran Author-X-Name-First: Phong H. Author-X-Name-Last: Tran Author-Name: Tao Liu Author-X-Name-First: Tao Author-X-Name-Last: Liu Author-Name: Chiwoo Park Author-X-Name-First: Chiwoo Author-X-Name-Last: Park Title: Simulation-guided regression approach for estimating the size distribution of nanoparticles with dynamic light scattering data Abstract: This article presents a simulation-guided regression approach for estimating the size distribution of nanoparticles from Dynamic Light Scattering (DLS) measurements.
The properties and functionalities exhibited by nanoparticles often depend on their sizes, so the precise quantification of the sizes is important for characterizing and monitoring the quality of a nanoparticle synthesis process. The state-of-the-art method used for size quantification from DLS measurements is CONTIN, which is based on a computationally inefficient numerical inversion. We propose a new approach that avoids the numerical inversion by reformulating the problem into a regularized regression problem, with the basis functions being generated by a computer simulation of DLS measurements. In many simulation studies and one real-data study, our method outperformed CONTIN in terms of estimation accuracy and computational efficiency. Journal: IISE Transactions Pages: 70-83 Issue: 1 Volume: 49 Year: 2017 Month: 1 X-DOI: 10.1080/0740817X.2016.1198063 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1198063 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:1:p:70-83 Template-Type: ReDIF-Article 1.0 Author-Name: Na Zou Author-X-Name-First: Na Author-X-Name-Last: Zou Author-Name: Jing Li Author-X-Name-First: Jing Author-X-Name-Last: Li Title: Modeling and change detection of dynamic network data by a network state space model Abstract: Dynamic network data are often encountered in social, biological, and engineering domains. There are two types of variability in dynamic network data: variability of natural evolution and variability due to assignable causes. The latter is the “change” referred to in this article. Accurate and timely change detection from dynamic network data is important. However, it has been infrequently studied, with most of the existing research having focused on community detection, prediction, and visualization. Change detection is a classic research area in Statistical Process Control (SPC), and various approaches have been developed for dynamic data in the form of univariate or multivariate time series but not in the form of networks. We propose a Network State Space Model (NSSM) to characterize the natural evolution of dynamic networks. For tractable parameter estimation of the NSSM, we develop an Expectation Propagation algorithm to produce an approximation for the observation equation of the NSSM and then use Expectation–Maximization integrated with Bayesian Optimal Smoothing to estimate the parameters. For change detection, we further propose a Singular Value Decomposition (SVD)-based method that integrates the NSSM with SPC. A real-world application on Enron dynamic email networks is presented, in which our method successfully detects two known changes. Journal: IISE Transactions Pages: 45-57 Issue: 1 Volume: 49 Year: 2017 Month: 1 X-DOI: 10.1080/0740817X.2016.1198065 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1198065 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:1:p:45-57 Template-Type: ReDIF-Article 1.0 Author-Name: Sangahn Kim Author-X-Name-First: Sangahn Author-X-Name-Last: Kim Author-Name: Myong K. Jeong Author-X-Name-First: Myong K. Author-X-Name-Last: Jeong Author-Name: Elsayed A. Elsayed Author-X-Name-First: Elsayed A. Author-X-Name-Last: Elsayed Title: Generalized smoothing parameters of a multivariate EWMA control chart Abstract: The Multivariate Exponentially Weighted Moving Average (MEWMA) control chart is effective in detecting a small process mean shift.
Its simplicity and generality stem from the assumption that the smoothing parameters of the variables are given constants and are equal along the diagonal of the smoothing matrix. Recently, the MEWMA model with a full non-diagonal smoothing matrix (FEWMA) has been studied. The model, however, has limited use due to the assumption that the off-diagonal elements are identical, and it is therefore necessarily sensitive to the correlation structure of the observations. In this article, we propose a generalized MEWMA model that uses appropriate non-diagonal elements in the smoothing matrix based on the correlation among variables. We also offer an interpretation of the off-diagonal elements of the smoothing matrix and suggest an optimal design for the proposed MEWMA chart. A case study on the automatic monitoring of the dimensions of bolts using an image processing system is presented to illustrate the proposed control chart. The proposed model is effective in detecting small mean shifts and shows improved performance when compared with MEWMA, FEWMA, and other recently improved control charts. Journal: IISE Transactions Pages: 58-69 Issue: 1 Volume: 49 Year: 2017 Month: 1 X-DOI: 10.1080/0740817X.2016.1198509 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1198509 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:1:p:58-69 Template-Type: ReDIF-Article 1.0 Author-Name: Mingdi You Author-X-Name-First: Mingdi Author-X-Name-Last: You Author-Name: Eunshin Byon Author-X-Name-First: Eunshin Author-X-Name-Last: Byon Author-Name: Jionghua (Judy) Jin Author-X-Name-First: Jionghua (Judy) Author-X-Name-Last: Jin Author-Name: Giwhyun Lee Author-X-Name-First: Giwhyun Author-X-Name-Last: Lee Title: When wind travels through turbines: A new statistical approach for characterizing heterogeneous wake effects in multi-turbine wind farms Abstract: Modern utility-scale wind farms consist of a large number of wind turbines. In order to improve the power generation efficiency of wind turbines, accurate quantification of the power generation levels of multiple turbines is critical in both wind farm design and operational control. One challenging issue is that the power output levels of multiple wind turbines are different, due to complex interactions between turbines, known as wake effects. In general, upstream turbines in a wind farm absorb kinetic energy from the wind. Therefore, downstream turbines tend to produce less power than upstream turbines. Moreover, depending on weather conditions, the power deficits of downstream turbines exhibit heterogeneous patterns. This study proposes a new statistical approach to characterize heterogeneous wake effects. The proposed approach decomposes the power outputs into the average pattern commonly exhibited by all turbines and the turbine-to-turbine variability caused by multi-turbine interactions. To capture the wake effects, turbine-specific regression parameters are modeled using a Gaussian Markov random field. A case study using actual wind farm data demonstrates the proposed approach's superior performance. Journal: IISE Transactions Pages: 84-95 Issue: 1 Volume: 49 Year: 2017 Month: 1 X-DOI: 10.1080/0740817X.2016.1204489 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1204489 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
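To make the smoothing-matrix generalization in the Kim, Jeong, and Elsayed abstract above concrete, here is a hedged Python sketch of a MEWMA recursion z_t = R x_t + (I - R) z_{t-1} with a full, non-diagonal smoothing matrix R, charting T2_t = z_t' S^{-1} z_t. The asymptotic covariance S of z_t solves the discrete Lyapunov equation S = (I - R) S (I - R)' + R Sigma R'. The data, R, Sigma, and shift below are made up for demonstration, and the control limit would have to be calibrated to a desired in-control run length; this is not the authors' design procedure.

```python
# A minimal sketch with made-up data and a hypothetical smoothing matrix.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(0)
sigma = np.array([[1.0, 0.6], [0.6, 1.0]])   # assumed in-control covariance
R = np.array([[0.2, 0.05], [0.05, 0.2]])     # full, non-diagonal smoothing matrix
I = np.eye(2)
# Asymptotic covariance of z_t: S = (I - R) S (I - R)' + R sigma R'.
S = solve_discrete_lyapunov(I - R, R @ sigma @ R.T)
S_inv = np.linalg.inv(S)

z = np.zeros(2)
for t in range(30):
    mean = np.array([0.8, 0.0]) if t >= 15 else np.zeros(2)  # mean shift at t = 15
    x = rng.multivariate_normal(mean, sigma)
    z = R @ x + (I - R) @ z                  # MEWMA recursion
    t2 = z @ S_inv @ z                       # charted statistic
    print(t, round(float(t2), 3))            # signal when t2 exceeds a calibrated limit
```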
Handle: RePEc:taf:uiiexx:v:49:y:2017:i:1:p:84-95 Template-Type: ReDIF-Article 1.0 Author-Name: Chiel van Oosterom Author-X-Name-First: Chiel Author-X-Name-Last: van Oosterom Author-Name: Hao Peng Author-X-Name-First: Hao Author-X-Name-Last: Peng Author-Name: Geert-Jan van Houtum Author-X-Name-First: Geert-Jan Author-X-Name-Last: van Houtum Title: Maintenance optimization for a Markovian deteriorating system with population heterogeneity Abstract: We develop a partially observable Markov decision process model to incorporate population heterogeneity when scheduling replacements for a deteriorating system. The single-component system deteriorates over a finite set of condition states according to a Markov chain. The population of spare components that is available for replacements is composed of multiple component types that cannot be distinguished by their exterior appearance but deteriorate according to different transition probability matrices. This situation may arise, for example, because of variations in the production process of components. We provide a set of conditions under which we characterize the structure of the optimal policy that minimizes the total expected discounted operating and replacement cost over an infinite horizon. In a numerical experiment, we benchmark the optimal policy against a heuristic policy that neglects population heterogeneity. Journal: IISE Transactions Pages: 96-109 Issue: 1 Volume: 49 Year: 2017 Month: 1 X-DOI: 10.1080/0740817X.2016.1205239 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1205239 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:1:p:96-109 Template-Type: ReDIF-Article 1.0 Author-Name: George Nenes Author-X-Name-First: George Author-X-Name-Last: Nenes Author-Name: Philippe Castagliola Author-X-Name-First: Philippe Author-X-Name-Last: Castagliola Author-Name: Giovanni Celano Author-X-Name-First: Giovanni Author-X-Name-Last: Celano Title: Economic and statistical design of Vp control charts for finite-horizon processes Abstract: In manufacturing environments where the production horizon for a specific product can be limited to a few hours or shifts, statistical process monitoring based on control charts is strategic for cutting scrap and rework costs and meeting due dates. In this article, a Markov chain model is proposed to design a fully adaptive Shewhart control chart in a process with a finite production horizon. The proposed Markov chain model allows the exact computation of several statistical performance metrics, as well as the expected cost of the monitoring and operation process for any adaptive Shewhart control chart with an unknown but finite number of inspections. Illustrative examples show the implementation of the Vp $\bar{X}$ chart in short runs producing a finite batch of products. Journal: IISE Transactions Pages: 110-125 Issue: 1 Volume: 49 Year: 2017 Month: 1 X-DOI: 10.1080/0740817X.2016.1206674 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1206674 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:1:p:110-125 Template-Type: ReDIF-Article 1.0 Author-Name: Dominik Kress Author-X-Name-First: Dominik Author-X-Name-Last: Kress Author-Name: Nils Boysen Author-X-Name-First: Nils Author-X-Name-Last: Boysen Author-Name: Erwin Pesch Author-X-Name-First: Erwin Author-X-Name-Last: Pesch Title: Which items should be stored together?
A basic partition problem to assign storage space in group-based storage systems Abstract: We consider a basic partition problem that subdivides Stock Keeping Units (SKUs) into disjoint subsets, such that the minimum number of groups has to be accessed when retrieving a given order set under a pick-by-order policy. We formalize this SKU partition problem and show its applicability in a wide range of storage systems that are based on separating their storage space into groups of SKUs stored in separate areas; examples are carousel racks and mobile shelves. We analyze the computational complexity and propose two mathematical models for the problem under consideration. Furthermore, we present an ejection chain heuristic and a branch and bound procedure. We analyze these algorithms and the mathematical models in computational tests. Journal: IISE Transactions Pages: 13-30 Issue: 1 Volume: 49 Year: 2017 Month: 1 X-DOI: 10.1080/0740817X.2016.1213469 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1213469 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:1:p:13-30 Template-Type: ReDIF-Article 1.0 Author-Name: Wanmo Kang Author-X-Name-First: Wanmo Author-X-Name-Last: Kang Author-Name: Kyoung-Kuk Kim Author-X-Name-First: Kyoung-Kuk Author-X-Name-Last: Kim Author-Name: Hayong Shin Author-X-Name-First: Hayong Author-X-Name-Last: Shin Title: Fairing the gamma: an engineering approach to sensitivity estimation Abstract: In the finance industry, obtaining stable estimates for sensitivities of derivatives to price changes in an underlying asset is very important from a practical point of view. However, this aim is often hindered by the absence of closed-form expressions for Greeks or the requirement of an excessive computational workload due to the complexities of various exotic derivative structures. Moreover, ad hoc numerical schemes to produce stable Greeks, such as nonlinear regression, can result in nonsensical values. This article proposes a fairing algorithm designed for the computation of gamma values of exotic derivatives. Examples are presented of exotic derivatives to which the algorithm is applied, and some analytical and numerical results are provided that show its usefulness in reducing the mean square error of gamma estimates. Journal: IIE Transactions Pages: 374-396 Issue: 4 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2012.689125 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.689125 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:4:p:374-396 Template-Type: ReDIF-Article 1.0 Author-Name: Gonglin Yuan Author-X-Name-First: Gonglin Author-X-Name-Last: Yuan Author-Name: Zengxin Wei Author-X-Name-First: Zengxin Author-X-Name-Last: Wei Author-Name: Qiumei Zhao Author-X-Name-First: Qiumei Author-X-Name-Last: Zhao Title: A modified Polak–Ribière–Polyak conjugate gradient algorithm for large-scale optimization problems Abstract: Mathematical programming is a rich and well-advanced area in operations research. However, there are still many challenging problems in mathematical programming, and the large-scale optimization problem is one of them. In this article, a modified Polak–Ribière–Polyak conjugate gradient algorithm that incorporates a non-monotone line search technique is presented. This method possesses not only gradient value information but also function value information.
Moreover, the sufficient descent condition holds without any line search. Under suitable conditions, the global convergence is established for non-convex functions. Numerical results show that the proposed method is competitive with other conjugate gradient methods for large-scale optimization problems. Journal: IIE Transactions Pages: 397-413 Issue: 4 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2012.726757 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.726757 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:4:p:397-413 Template-Type: ReDIF-Article 1.0 Author-Name: Tiaojun Xiao Author-X-Name-First: Tiaojun Author-X-Name-Last: Xiao Author-Name: Yusen Xia Author-X-Name-First: Yusen Author-X-Name-Last: Xia Author-Name: G. Zhang Author-X-Name-First: G. Author-X-Name-Last: Zhang Title: Strategic outsourcing decisions for manufacturers competing on product quality Abstract: This article examines strategic outsourcing decisions for two competing manufacturers whose key components have quality improvement (QI) opportunities. We consider the effects of vertical and horizontal product differentiation on demand. In deriving the Subgame Perfect Nash Equilibria (SPNE), it is shown that either a symmetric outsourcing strategy (i.e., both manufacturers outsource) or an asymmetric sourcing strategy profile (i.e., manufacturers use different sourcing strategies) can be an SPNE. Insights are provided into how sourcing strategies are affected by factors such as fixed setup cost, unit production cost, QI efficiency, and horizontal and vertical differentiation. It is found that larger horizontal differentiation expands the range over which symmetric outsourcing is an equilibrium. In addition, the outsourcing manufacturer may provide a higher QI level, produce a higher quantity, and offer a lower retail price than the insourcing one in the asymmetric sourcing setting. On the other hand, the insourcing manufacturer should pay more attention to the effect of the rival's sourcing strategy on the QI level than the outsourcing one. Finally, it is shown that all players have an incentive to determine sourcing partner relationships before the wholesale price contract is offered. Journal: IIE Transactions Pages: 313-329 Issue: 4 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2012.761368 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.761368 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:4:p:313-329 Template-Type: ReDIF-Article 1.0 Author-Name: M. Güler Author-X-Name-First: M. Author-X-Name-Last: Güler Author-Name: Taner Bi̇lgi̇ç Author-X-Name-First: Taner Author-X-Name-Last: Bi̇lgi̇ç Author-Name: Refi̇k Güllü Author-X-Name-First: Refi̇k Author-X-Name-Last: Güllü Title: Joint inventory and pricing decisions with reference effects Abstract: This article considers a periodic review joint replenishment and pricing problem of a single item with reference effects. The demand is random and is contingent on the price history as well as the current price. Randomness is introduced with both an additive and a multiplicative random term. Price history is captured by a reference price, which is developed by consumers who are frequent buyers of a product or a service. The common reference price acts as a benchmark against which the consumers compare the price of a product.
They perceive the difference between the price and the reference price as a loss or a gain and have different attitudes to these perceptions, such as loss aversion, loss neutrality, or loss seeking. A general way to handle the nonconvexity of the holding cost for nonlinear demand models is to make a transformation and use the inverse demand function. However, in reference price-dependent demand models, this brings the problem of a nonconvex action space. This problem is circumvented by defining an action space that preserves the convexity after a transformation. For the transformed problem, it is shown that a state-dependent order-up-to policy is optimal for concave demand models and concave transformed expected revenue functions that are not necessarily differentiable. It is shown that there are demand models with relative difference reference effects and loss-averse customers that satisfy the considered concavity assumptions. A computational study is performed to highlight the effects of joint inventory and pricing decisions under reference effects. Journal: IIE Transactions Pages: 330-343 Issue: 4 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.768782 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.768782 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:4:p:330-343 Template-Type: ReDIF-Article 1.0 Author-Name: Manuel Lozano Author-X-Name-First: Manuel Author-X-Name-Last: Lozano Author-Name: Fred Glover Author-X-Name-First: Fred Author-X-Name-Last: Glover Author-Name: Carlos García-Martínez Author-X-Name-First: Carlos Author-X-Name-Last: García-Martínez Author-Name: Francisco Rodríguez Author-X-Name-First: Francisco Author-X-Name-Last: Rodríguez Author-Name: Rafael Martí Author-X-Name-First: Rafael Author-X-Name-Last: Martí Title: Tabu search with strategic oscillation for the quadratic minimum spanning tree Abstract: The quadratic minimum spanning tree problem consists of determining a spanning tree that minimizes the sum of costs of the edges and pairs of edges in the tree. Many algorithms and methods have been proposed for this hard combinatorial problem, including several highly sophisticated metaheuristics. This article presents a simple Tabu Search (TS) for this problem that incorporates Strategic Oscillation (SO) by alternating between constructive and destructive phases. The commonalities shared by this strategy and the more recently introduced methodology called iterated greedy search are shown, and the implications of their differences regarding the use of memory structures are identified. Extensive computational experiments reveal that the proposed SO algorithm with embedded TS is highly effective for solving complex instances of the problem as compared with the best metaheuristics in the literature. A hybrid method is introduced that proves similarly effective for both simple and complex problem instances. Journal: IIE Transactions Pages: 414-428 Issue: 4 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.768785 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.768785 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
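The reference-price mechanism in the Güler, Bi̇lgi̇ç, and Güllü abstract above can be sketched in a few lines. The exponential-smoothing reference update and the piecewise-linear demand with asymmetric gain/loss sensitivity below are assumed functional forms with hypothetical parameters, not the article's model.

```python
# A minimal sketch under assumed functional forms and hypothetical parameters.
def update_reference(r, p, alpha=0.8):
    # Frequent buyers anchor on past prices via exponential smoothing.
    return alpha * r + (1 - alpha) * p

def expected_demand(p, r, a=100.0, b=2.0, eta_gain=1.0, eta_loss=3.0):
    # Consumers perceive p < r as a gain and p > r as a loss;
    # eta_loss > eta_gain encodes loss aversion.
    gap = r - p
    reference_effect = eta_gain * gap if gap >= 0 else eta_loss * gap
    return max(a - b * p + reference_effect, 0.0)

r = 20.0
for p in (22.0, 20.0, 18.0, 18.0):      # a short, hypothetical price path
    d = expected_demand(p, r)
    print(f"price={p:5.1f}  reference={r:5.2f}  demand={d:6.2f}")
    r = update_reference(r, p)
```

With eta_loss greater than eta_gain, demand drops faster when the price exceeds the reference than it rises when the price undercuts it, which is the asymmetry the abstract attributes to loss-averse customers.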
Handle: RePEc:taf:uiiexx:v:46:y:2014:i:4:p:414-428 Template-Type: ReDIF-Article 1.0 Author-Name: Raimund Kovacevic Author-X-Name-First: Raimund Author-X-Name-Last: Kovacevic Author-Name: David Wozabal Author-X-Name-First: David Author-X-Name-Last: Wozabal Title: A semiparametric model for electricity spot prices Abstract: This article proposes a semiparametric single-index model for short-term forecasting of day-ahead electricity prices. The approach captures the dependency of electricity prices on covariates, such as demand for electricity, amount of energy produced by intermittent sources, and weather-dependent variables. To obtain parsimonious models, principal component analysis is used for dimension reduction. The approach is tested on two data sets from different markets and its performance is analyzed in terms of fit, forecast quality, and computational efficiency. The results are encouraging, in that the proposed method leads to a good in-sample fit and performs well out-of-sample compared with four benchmark models, including a SARIMA model as well as a functional nonparametric regression approach recently proposed in the literature. Journal: IIE Transactions Pages: 344-356 Issue: 4 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.803640 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.803640 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:4:p:344-356 Template-Type: ReDIF-Article 1.0 Author-Name: Kuo-Hao Chang Author-X-Name-First: Kuo-Hao Author-X-Name-Last: Chang Author-Name: Ming-Kai Li Author-X-Name-First: Ming-Kai Author-X-Name-Last: Li Author-Name: Hong Wan Author-X-Name-First: Hong Author-X-Name-Last: Wan Title: Combining STRONG with screening designs for large-scale simulation optimization Abstract: Simulation optimization has received a great deal of attention over the decades due to its generality and solvability in many practical problems. On the other hand, simulation optimization is well recognized as a difficult problem, especially when the problem dimensionality grows. The Stochastic Trust-Region Response Surface Method (STRONG) is a newly developed method built upon the traditional Response Surface Methodology (RSM). Like the traditional RSM, STRONG employs efficient design of experiments and regression analysis; hence, it can enjoy computational advantages for higher-dimensional problems. However, STRONG is superior to the traditional RSM in that it is an automated algorithm and has a provable convergence guarantee. This article exploits the structure of STRONG and proposes a new framework that combines STRONG with efficient screening designs to enable the solving of large-scale problems (e.g., those with hundreds of factors). It is shown that the new framework is convergent with probability one. Numerical experiments show that the new framework is capable of handling problems with hundreds of factors and its computational performance is far more satisfactory than other existing approaches. Two illustrative examples are provided to show the viability of the new framework in practical settings. Journal: IIE Transactions Pages: 357-373 Issue: 4 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.812268 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.812268 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
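The dimension-reduction step that Kovacevic and Wozabal describe (principal component analysis over a high-dimensional covariate vector before fitting the price model) can be sketched as follows. The synthetic factor-driven data and the plain linear fit are assumptions for illustration; the article's semiparametric single-index link is not reproduced here.

```python
# A minimal sketch on synthetic, factor-driven data; not the article's
# semiparametric single-index estimator.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
factors = rng.normal(size=(300, 5))      # latent drivers (e.g., load, wind)
loadings = rng.normal(size=(48, 5))      # 48 hypothetical covariates per day
X = factors @ loadings.T + 0.1 * rng.normal(size=(300, 48))
y = factors @ np.array([3.0, -2.0, 1.0, 0.5, -0.5]) + 0.2 * rng.normal(size=300)

# Reduce the 48 covariates to 5 principal components, then regress prices.
model = make_pipeline(StandardScaler(), PCA(n_components=5), LinearRegression())
model.fit(X[:250], y[:250])
print("out-of-sample R^2:", round(model.score(X[250:], y[250:]), 3))
```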
Handle: RePEc:taf:uiiexx:v:46:y:2014:i:4:p:357-373 Template-Type: ReDIF-Article 1.0 Author-Name: Emre Yamangil Author-X-Name-First: Emre Author-X-Name-Last: Yamangil Author-Name: İ. Altınel Author-X-Name-First: İ. Author-X-Name-Last: Altınel Author-Name: Bora Çekyay Author-X-Name-First: Bora Author-X-Name-Last: Çekyay Author-Name: Orhan Feyzioğlu Author-X-Name-First: Orhan Author-X-Name-Last: Feyzioğlu Author-Name: Süleyman Özekici Author-X-Name-First: Süleyman Author-X-Name-Last: Özekici Title: Design of optimum component test plans in the demonstration of diverse system performance measures Abstract: While component-level tests have many advantages over system-level tests, the actual protection offered in making inferences about system reliability is not the same as what is expected. Thus, a significant proportion of research has concentrated on the design of system-based component test plans that also have minimum cost. This article extends those previous studies by considering two additional system performance measures: expected system lifetime and system availability. After explicitly expressing these performance measures as a function of failure rates for various system types, the component testing problem is formulated as a semi-infinite linear programming problem and solved with a column generation technique incorporating signomial geometric programming. Several numerical examples are presented that provide insight into the model parameters. Journal: IIE Transactions Pages: 535-546 Issue: 7 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.523768 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.523768 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:7:p:535-546 Template-Type: ReDIF-Article 1.0 Author-Name: Seoung Kim Author-X-Name-First: Seoung Author-X-Name-Last: Kim Author-Name: Thuntee Sukchotrat Author-X-Name-First: Thuntee Author-X-Name-Last: Sukchotrat Author-Name: Sun-Kyoung Park Author-X-Name-First: Sun-Kyoung Author-X-Name-Last: Park Title: A nonparametric fault isolation approach through one-class classification algorithms Abstract: Multivariate control charts provide control limits for the monitoring of processes and detection of abnormal events so that processes can be improved. However, these multivariate control charts provide limited information about the contribution of any specific variable to the out-of-control alarm. Although many fault isolation methods have been developed to address this deficiency, most of these methods require a parametric distributional assumption that restricts their applicability to specific problems of process control and thus limits their broader usefulness. This study proposes a nonparametric fault isolation method based on a one-class classification algorithm that overcomes the limitation posed by the parametric assumption in existing fault isolation methods. The proposed approach decomposes the monitoring statistics obtained from a one-class classification algorithm into components that reflect the contribution of each variable to the out-of-control signal. A bootstrap approach is used to determine the significance of each variable. A simulation study is presented that examines the performance of the proposed method under various scenarios, and the results are compared with those obtained using the T2 decomposition method.
The simulation results reveal that the proposed method outperforms the T2 decomposition method in non-normal distribution cases. Journal: IIE Transactions Pages: 505-517 Issue: 7 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.523769 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.523769 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:7:p:505-517 Template-Type: ReDIF-Article 1.0 Author-Name: Gregory Levitin Author-X-Name-First: Gregory Author-X-Name-Last: Levitin Title: Reliability of multi-state systems with common bus performance sharing Abstract: This article extends an existing model for performance sharing among multi-state units. The extended model considers an arbitrary number of units that must satisfy individual random demands. If a unit has a performance that exceeds the demand, it can transmit the surplus performance to other units. The amount of transmitted performance is limited by the random capacity of a transmission system. The entire system fails if at least one demand is not satisfied. An algorithm based on the universal generating function technique is suggested to evaluate the system reliability and expected performance deficiency. Analytical and numerical examples are presented. Journal: IIE Transactions Pages: 518-524 Issue: 7 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.523770 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.523770 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:7:p:518-524 Template-Type: ReDIF-Article 1.0 Author-Name: Daniel Apley Author-X-Name-First: Daniel Author-X-Name-Last: Apley Author-Name: Jeongbae Kim Author-X-Name-First: Jeongbae Author-X-Name-Last: Kim Title: A cautious approach to robust design with model parameter uncertainty Abstract: Industrial robust design methods rely on empirical process models that relate an output response variable to a set of controllable input variables and a set of uncontrollable noise variables. However, when determining the input settings that minimize output variability, model uncertainty is typically neglected. Using a Bayesian problem formulation similar to what has been termed cautious control in the adaptive feedback control literature, this article develops a cautious robust design approach that takes model parameter uncertainty into account via the posterior (given the experimental data) parameter covariance. A tractable and interpretable expression for the posterior response variance and mean square error is derived that is well suited for numerical optimization and that also provides insight into the impact of parameter uncertainty on the robust design objective. The approach is cautious in the sense that as parameter uncertainty increases, the input settings are often chosen closer to the center of the experimental design region or, more generally, in a manner that mitigates the adverse effects of parameter uncertainty. A brief discussion of an extension of the approach to consider model structure uncertainty is presented. Journal: IIE Transactions Pages: 471-482 Issue: 7 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.532854 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.532854 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
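Levitin's reliability evaluation above rests on the universal generating function (UGF) technique, which is compact enough to sketch: each multi-state unit is a discrete performance distribution, and composition operators convolve these distributions under a structure function. The unit data, the capacity-sum structure function, and the demand below are hypothetical.

```python
# A minimal UGF sketch with hypothetical unit data; not the article's model,
# which also encodes random demands and limited transmission capacity.
from itertools import product

def compose(u1, u2, op):
    # UGF composition: convolve two performance distributions under a
    # structure function `op` (here, addition for shared capacity).
    out = {}
    for (g1, p1), (g2, p2) in product(u1.items(), u2.items()):
        g = op(g1, g2)
        out[g] = out.get(g, 0.0) + p1 * p2
    return out

unit_a = {0: 0.1, 50: 0.3, 100: 0.6}    # hypothetical performance levels
unit_b = {0: 0.05, 80: 0.95}
system = compose(unit_a, unit_b, lambda a, b: a + b)

demand = 120
reliability = sum(p for g, p in system.items() if g >= demand)
print("P(system performance >= demand) =", round(reliability, 4))
```

In the article's setting the composition operators would additionally account for the individual random demands and the random capacity of the transmission system, which this sketch omits.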
Handle: RePEc:taf:uiiexx:v:43:y:2011:i:7:p:471-482 Template-Type: ReDIF-Article 1.0 Author-Name: Mahmood Shafiee Author-X-Name-First: Mahmood Author-X-Name-Last: Shafiee Author-Name: Stefanka Chukova Author-X-Name-First: Stefanka Author-X-Name-Last: Chukova Author-Name: Won Yun Author-X-Name-First: Won Author-X-Name-Last: Yun Author-Name: Seyed Niaki Author-X-Name-First: Seyed Author-X-Name-Last: Niaki Title: On the investment in a reliability improvement program for warranted second-hand items Abstract: A reliability improvement program (such as an upgrade action) can be seen as an investment by a dealer to restore a second-hand product to a better operational state. Due to the nature of the actions performed, the item's reliability at the end of this program is usually uncertain. This article develops a stochastic cost–benefit model for investment made in reliability improvement programs for second-hand items sold with a failure-free warranty. Depending on the product's lifetime modeling approach, two modifications of the model are considered and are solved for the optimal improvement level. A real case application of the model is presented to validate the proposed approach. Journal: IIE Transactions Pages: 525-534 Issue: 7 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.540638 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.540638 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:7:p:525-534 Template-Type: ReDIF-Article 1.0 Author-Name: Kjell Hausken Author-X-Name-First: Kjell Author-X-Name-Last: Hausken Title: Strategic defense and attack of series systems when agents move sequentially Abstract: In the September 11, 2001, attack, the defender moved first with a weak defense, and the attacker moved second with an overwhelming attack. A second alternative is that the attacker moves first by announcing an attack, while the defender moves second to defend against that attack. A third is a simultaneous encounter, as between two ships, in which neither side can take the opponent's strategy as given. For a series system that the defender prefers to operate reliably and the attacker prefers to operate unreliably, this article demonstrates that these three scenarios lead to crucially different recommendations for defense and attack investments. For example, the defender prefers to move first rather than participate in a simultaneous game in a series system with two components. In contrast, an advantaged attacker in a series system prefers the simultaneous game since it does not want to expose which components are to be attacked. When the defender is advantaged in a series system, its first move deters the attacker. Deterrence is not possible in simultaneous games. When equally matched, both agents prefer to avoid the uncertain and costly simultaneous game that causes high investment costs. The results for the defender (attacker) in a parallel system are equivalent to the results for the attacker (defender) in a series system. Journal: IIE Transactions Pages: 483-504 Issue: 7 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.541178 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.541178 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
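The cost-benefit trade-off in the Shafiee, Chukova, Yun, and Niaki abstract above can be illustrated with a deliberately simple stand-in model: a Weibull (power-law) failure intensity, an upgrade that reduces the item's virtual age, minimal repairs during the warranty period, and a convex upgrade cost. All functional forms and parameter values are hypothetical, not the article's.

```python
# A minimal sketch under assumed Weibull / virtual-age / minimal-repair
# dynamics with hypothetical parameters.
import numpy as np

BETA, ETA = 2.2, 6.0        # hypothetical Weibull shape and scale (years)
AGE, WARRANTY = 4.0, 2.0    # item age at sale and warranty length (years)
REPAIR_COST = 250.0         # hypothetical cost per minimal repair

def cumulative_intensity(t):
    # Expected number of failures in [0, t] under a power-law process.
    return (t / ETA) ** BETA

def total_cost(u):
    # u in [0, 1] is the upgrade level; virtual age shrinks to (1 - u) * AGE.
    upgrade_cost = 400.0 * u ** 1.5                  # hypothetical convex cost
    a = (1.0 - u) * AGE
    expected_failures = cumulative_intensity(a + WARRANTY) - cumulative_intensity(a)
    return upgrade_cost + REPAIR_COST * expected_failures

levels = np.linspace(0.0, 1.0, 101)
best = levels[np.argmin([total_cost(u) for u in levels])]
print(f"optimal upgrade level: {best:.2f}, cost: {total_cost(best):.2f}")
```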
Handle: RePEc:taf:uiiexx:v:43:y:2011:i:7:p:483-504 Template-Type: ReDIF-Article 1.0 Author-Name: Yossi Bukchin Author-X-Name-First: Yossi Author-X-Name-Last: Bukchin Author-Name: Eran Hanany Author-X-Name-First: Eran Author-X-Name-Last: Hanany Author-Name: Eugene Khmelnitsky Author-X-Name-First: Eugene Author-X-Name-Last: Khmelnitsky Title: Bucket brigade with stochastic worker pace Abstract: We study Bucket Brigade (BB) production systems under the assumption of stochastic worker speeds. Our analysis provides interesting and counter-intuitive insights into realistic production environments. We analyze the following three systems: the traditional BB found in the literature, BB with overtaking allowed (BBO), and a benchmark system of parallel workers. After formulating the dynamic equations for all systems, we solve them analytically when possible and numerically in general. We identify settings in which conclusions that emerge from deterministic analysis fail to hold when speeds are stochastic, in particular relating to worker order assignment. Specifically, a fastest-to-slowest order with respect to expected speeds may be optimal as long as the standard deviation of the fastest worker is large enough. Significantly, in a stochastic environment the BB can improve the throughput rate compared to parallel workers, despite the fact that no blockage or starvation may occur in the latter. The BBO setting, which is relevant in a stochastic environment and can sometimes be implemented in practice, provides an upper bound on the throughput rate of parallel workers, and is shown numerically to significantly improve upon BB. Journal: IISE Transactions Pages: 1027-1042 Issue: 12 Volume: 50 Year: 2018 Month: 12 X-DOI: 10.1080/24725854.2018.1476790 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1476790 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:12:p:1027-1042 Template-Type: ReDIF-Article 1.0 Author-Name: Lanqing Hong Author-X-Name-First: Lanqing Author-X-Name-Last: Hong Author-Name: Zhi-Sheng Ye Author-X-Name-First: Zhi-Sheng Author-X-Name-Last: Ye Author-Name: Josephine Kartika Sari Author-X-Name-First: Josephine Author-X-Name-Last: Kartika Sari Title: Interval estimation for Wiener processes based on accelerated degradation test data Abstract: Degradation is a primary cause of failures for many materials and products. Although stochastic processes have been widely applied to degradation data, there is a lack of efficient and accurate methods for interval estimation of model parameters and reliability characteristics given limited degradation data. Using the method of generalized pivotal quantities, this study develops interval estimation procedures for fixed-effects and mixed-effects Wiener degradation processes based on accelerated degradation test data. The fixed-effects processes are common for mature products and the mixed-effects model is capable of capturing heterogeneities in an immature product. The constructed confidence intervals are shown to have exact, or asymptotically exact, frequentist coverage probabilities. Extensive simulations are conducted to compare the proposed procedures to other competing methods, including the large-sample normal approximation and the bootstrap. The simulation results reveal that the proposed intervals have satisfactory performance in terms of the coverage probability and the average interval length.
The proposed interval estimation procedures are successfully applied to accelerated degradation data from commercial white LEDs. Journal: IISE Transactions Pages: 1043-1057 Issue: 12 Volume: 50 Year: 2018 Month: 12 X-DOI: 10.1080/24725854.2018.1468121 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1468121 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:12:p:1043-1057 Template-Type: ReDIF-Article 1.0 Author-Name: Shuming Wang Author-X-Name-First: Shuming Author-X-Name-Last: Wang Author-Name: Yan-Fu Li Author-X-Name-First: Yan-Fu Author-X-Name-Last: Li Author-Name: Yakun Wang Author-X-Name-First: Yakun Author-X-Name-Last: Wang Title: Hybrid uncertainty model for multi-state systems and linear programming-based approximations for reliability assessment Abstract: This article studies reliability assessment for Multi-State Systems (MSSs) with component states that are uncertain in both probability and performance realizations. First, we propose a model of a (discrete) Hybrid Uncertainty Variable (HUV) for modeling the hybrid uncertainty of the MSS, in which both the state performance levels and the associated probability levels are described by uncertain values. The HUV can be regarded as a generalization of a random variable whose realizations and corresponding probabilities are both uncertain values. In particular, the uncertain probabilities are governed by the probability law. Leveraging the HUV-based hybrid uncertainty model, the primitive probability law is considered throughout the whole process, from modeling component state probabilities, through the resulting system state probabilities, to the final reliability computations. Therefore, the information loss is reduced to a minimum. Furthermore, we develop a framework for assessing the reliability of the MSS with hybrid uncertainty. In particular, because the hybrid uncertainty is considered together with the primitive probability law constraints, the reliability bound computations essentially require solving a pair of multi-linear optimization problems, which in general are non-convex and non-concave and therefore belong to a class of difficult optimization problems. We therefore develop a linear programming-based cut-generation approach for solving the reliability bound assessment problem, which achieves a computationally attractive approximation. Finally, the effectiveness of our approaches is validated in a case study with comparisons to published results. Journal: IISE Transactions Pages: 1058-1075 Issue: 12 Volume: 50 Year: 2018 Month: 12 X-DOI: 10.1080/24725854.2018.1468123 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1468123 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:12:p:1058-1075 Template-Type: ReDIF-Article 1.0 Author-Name: Ying Lin Author-X-Name-First: Ying Author-X-Name-Last: Lin Author-Name: Shan Liu Author-X-Name-First: Shan Author-X-Name-Last: Liu Author-Name: Shuai Huang Author-X-Name-First: Shuai Author-X-Name-Last: Huang Title: Selective sensing of a heterogeneous population of units with dynamic health conditions Abstract: Monitoring a large number of units whose health conditions follow complex dynamic evolution is a challenging problem in many healthcare and engineering applications. For instance, a unit may represent a patient in a healthcare application or a machine in a manufacturing process.
Challenges mainly arise from: (i) insufficient data collection that results in limited measurements for each unit to build an accurate personalized model in the prognostic modeling stage; and (ii) limited capacity to further collect surveillance measurements of the units in the monitoring stage. In this study, we develop a selective sensing method that integrates prognostic models, collaborative learning, and sensing resource allocation to efficiently and economically monitor a large number of units by exploiting the similarity between them. We showcase the effectiveness of the proposed method using two real-world applications: one on depression monitoring and another on cognitive degradation monitoring for Alzheimer’s disease. Compared with existing benchmark methods such as the ranking-and-selection method, our fully integrated prognosis-driven selective sensing method enables more accurate and faster identification of high-risk individuals. Journal: IISE Transactions Pages: 1076-1088 Issue: 12 Volume: 50 Year: 2018 Month: 12 X-DOI: 10.1080/24725854.2018.1470357 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1470357 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:12:p:1076-1088 Template-Type: ReDIF-Article 1.0 Author-Name: Phillip Howard Author-X-Name-First: Phillip Author-X-Name-Last: Howard Author-Name: Daniel W. Apley Author-X-Name-First: Daniel W. Author-X-Name-Last: Apley Author-Name: George Runger Author-X-Name-First: George Author-X-Name-Last: Runger Title: Identifying nonlinear variation patterns with deep autoencoders Abstract: The discovery of nonlinear variation patterns in high-dimensional profile data is an important task in many quality control and manufacturing settings. We present an automated method for discovering nonlinear variation patterns using deep autoencoders. The approach provides a functional mapping from a low-dimensional representation to the original spatially-dense feature space of the profile data that is both interpretable and efficient with respect to preserving information. We compare our deep autoencoder approach to several other methods for discovering variation patterns in profile data. Our results indicate that deep autoencoders consistently outperform the alternative approaches in reproducing the original profiles from the learned variation sources. Journal: IISE Transactions Pages: 1089-1103 Issue: 12 Volume: 50 Year: 2018 Month: 12 X-DOI: 10.1080/24725854.2018.1472407 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1472407 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:12:p:1089-1103 Template-Type: ReDIF-Article 1.0 Author-Name: Kun Wang Author-X-Name-First: Kun Author-X-Name-Last: Wang Author-Name: Jing Li Author-X-Name-First: Jing Author-X-Name-Last: Li Title: Integration of sparse singular vector decomposition and statistical process control for traffic monitoring and quality of service improvement in mission-critical communication networks Abstract: Mission-Critical Communication Networks (MCCNs) are wireless networks whose malfunction can cause significant problems. The nature of MCCNs puts an extremely high standard on the Quality of Service (QoS). QoS assurance starts with the monitoring and change/anomaly detection of network packet data.
This problem has been primarily studied by the research community of communication networks, in which the existing methods fall short of providing a privacy-preserving, minimum-disruption, global monitoring tool. Another relevant research area is Multivariate Statistical Process Control (MSPC), in which generic methods have been developed for monitoring high-dimensional data streams. These methods do not account for the special data distribution and correlation structure of packet streams. Nor are they efficient enough to support real-time analytics in MCCNs. We propose a method called Sparse Singular Value Decomposition (SSVD)-MSPC. SSVD-MSPC addresses the aforementioned limitations and additionally provides key capabilities toward QoS improvement, including monitoring, fault identification, and fault characterization. Extensive case studies are conducted for MCCNs that experience faults of different magnitudes and various temporal shapes. SSVD-MSPC achieves good performance across the different settings in comparison with existing methods. Journal: IISE Transactions Pages: 1104-1116 Issue: 12 Volume: 50 Year: 2018 Month: 12 X-DOI: 10.1080/24725854.2018.1474300 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1474300 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:12:p:1104-1116 Template-Type: ReDIF-Article 1.0 Author-Name: The Editors Title: IISE Transactions Journal: IISE Transactions Pages: 1117-1120 Issue: 12 Volume: 50 Year: 2018 Month: 12 X-DOI: 10.1080/24725854.2018.1545508 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1545508 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:12:p:1117-1120 Template-Type: ReDIF-Article 1.0 Author-Name: Xiaonan Liu Author-X-Name-First: Xiaonan Author-X-Name-Last: Liu Author-Name: Mirek Fatyga Author-X-Name-First: Mirek Author-X-Name-Last: Fatyga Author-Name: Teresa Wu Author-X-Name-First: Teresa Author-X-Name-Last: Wu Author-Name: Jing Li Author-X-Name-First: Jing Author-X-Name-Last: Li Title: Integration of biological and statistical models toward personalized radiation therapy of cancer Abstract: Radiation Therapy (RT) is one of the most common treatments for cancer. To understand the impact of radiation toxicity on normal tissue, a Normal Tissue Complication Probability (NTCP) model is needed to link RT dose with radiation-induced complications. There are two types of NTCP models: biological and statistical models. Biological models have good generalizability but low accuracy, as they cannot factor in patient-specific information. Statistical models can incorporate patient-specific variables, but may not generalize well across different studies. We propose an integrated model that borrows strength from both biological and statistical models. Specifically, we propose a novel model formulation followed by an efficient parameter estimation algorithm, and investigate the statistical properties of the estimator. We apply the integrated model to a real dataset of prostate cancer patients treated with Intensity Modulated RT at the Mayo Clinic Arizona, who are at risk of developing a grade 2+ acute rectal complication. The integrated model achieves an Area Under the Curve (AUC) level of 0.82 in prediction, whereas the AUCs for the biological and statistical models are only 0.66 and 0.76, respectively.
The superior performance of the integrated model is also consistently observed across different simulation experiments. Journal: IISE Transactions Pages: 311-321 Issue: 3 Volume: 51 Year: 2019 Month: 3 X-DOI: 10.1080/24725854.2018.1486054 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1486054 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:3:p:311-321 Template-Type: ReDIF-Article 1.0 Author-Name: Yinchao Zhu Author-X-Name-First: Yinchao Author-X-Name-Last: Zhu Author-Name: Giulia Pedrielli Author-X-Name-First: Giulia Author-X-Name-Last: Pedrielli Author-Name: Loo Hay Lee Author-X-Name-First: Loo Hay Author-X-Name-Last: Lee Title: TD-OCBA: Optimal computing budget allocation and time dilation for simulation optimization of manufacturing systems Abstract: Discrete event simulation has been widely applied to study the behavior of stochastic manufacturing systems. This is due to the fact that manufacturing systems are usually too complex to obtain a closed-form analytical model that can accurately predict their performance. This becomes particularly critical when the optimization of these systems is of concern. In fact, simulation optimization techniques are employed to identify the manufacturing system configuration that can maximize the expected system performance when this can only be estimated by running a simulator. In this article, we look into simulation-based optimization when a finite number of solutions are available and we have to identify the best. In particular, we propose, for the first time, the integration of Optimal Computing Budget Allocation (OCBA), which is based on independent measures from each simulation experiment, and Time Dilation (TD), which is a single-run simulation optimization algorithm. As a result, the optimization problem is solved with only one experiment of the system being performed, by changing the “speed” of the simulation at each configuration in order to control the computational effort. The challenge is how to iteratively select such a speed. We solve this problem by proposing TD-OCBA, which integrates TD and OCBA while relying on standardized time series variance estimators. Numerical experiments have been conducted to study the performance of the algorithm when the response is generated from a time series. This provides the possibility of testing the robustness of TD-OCBA. Comparison between TD-OCBA and the original TD method was performed by simulating a job shop system reported in the literature. Finally, an application involving semiconductor remote diagnostics is used to compare the TD-OCBA method and what is known as the equal allocation method. Journal: IISE Transactions Pages: 219-231 Issue: 3 Volume: 51 Year: 2019 Month: 3 X-DOI: 10.1080/24725854.2018.1488305 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1488305 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:3:p:219-231 Template-Type: ReDIF-Article 1.0 Author-Name: Mahmut Tutam Author-X-Name-First: Mahmut Author-X-Name-Last: Tutam Author-Name: John A. White Author-X-Name-First: John A. Author-X-Name-Last: White Title: A multi-dock, unit-load warehouse design Abstract: Expected distance formulations are developed for a rectangle-shaped, unit-load warehouse having dock doors along one warehouse wall.
Based on dock-door configurations treated in the literature and/or used in practice, three scenarios are considered: (i) equally spaced dock doors spanning a wall; (ii) equally spaced dock doors with a specified distance between adjacent dock doors, and an equal number of dock doors located on each side of the wall’s centerline; and (iii) equally spaced dock doors with a specified distance between adjacent dock doors and the first dock door located a given distance to the right of the left wall. Defining the shape factor as the warehouse width divided by its depth, the shape factor minimizing expected distance is determined. Single- and dual-command travel results from discrete formulations are compared with results from closed-form expressions using continuous approximations. The optimal shape factor depends on the number and locations of dock doors. When the distance between adjacent dock doors is a function of the warehouse’s width, previous research results are confirmed. However, when distances between adjacent dock doors are specified, our results differ from a commonly held belief that the optimal shape factor is always less than or equal to 2.0. Journal: IISE Transactions Pages: 232-247 Issue: 3 Volume: 51 Year: 2019 Month: 3 X-DOI: 10.1080/24725854.2018.1488307 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1488307 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:3:p:232-247 Template-Type: ReDIF-Article 1.0 Author-Name: Xiujie Zhao Author-X-Name-First: Xiujie Author-X-Name-Last: Zhao Author-Name: Olivier Gaudoin Author-X-Name-First: Olivier Author-X-Name-Last: Gaudoin Author-Name: Laurent Doyen Author-X-Name-First: Laurent Author-X-Name-Last: Doyen Author-Name: Min Xie Author-X-Name-First: Min Author-X-Name-Last: Xie Title: Optimal inspection and replacement policy based on experimental degradation data with covariates Abstract: In this article, a novel maintenance model is proposed for single-unit systems with an atypical degradation path, whose pattern is influenced by inspections. After each inspection, the system degradation is assumed to instantaneously decrease by a random value. Meanwhile, the degradation rate is elevated by the inspection. Considering the double effects of inspections, we develop a parameter estimation procedure for such systems from experimental data obtained via accelerated degradation tests with environmental covariates. Next, the inspection and replacement policy is optimized with the objective of minimizing the Expected Long-Run Cost Rate (ELRCR). Inspections are assumed to be non-periodically scheduled. A numerical algorithm that combines analytical and simulation methods is presented to evaluate the ELRCR. We then investigate the robustness of maintenance policies for such systems by taking the parameter uncertainty into account with the aid of large-sample approximation and parametric bootstrapping. The application of the proposed method is illustrated by degradation data from the electricity industry. Journal: IISE Transactions Pages: 322-336 Issue: 3 Volume: 51 Year: 2019 Month: 3 X-DOI: 10.1080/24725854.2018.1488308 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1488308 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:uiiexx:v:51:y:2019:i:3:p:322-336 Template-Type: ReDIF-Article 1.0 Author-Name: Vedat Bayram Author-X-Name-First: Vedat Author-X-Name-Last: Bayram Author-Name: Fatma Gzara Author-X-Name-First: Fatma Author-X-Name-Last: Gzara Author-Name: Samir Elhedhli Author-X-Name-First: Samir Author-X-Name-Last: Elhedhli Title: Joint capacity, inventory, and demand allocation decisions in manufacturing systems Abstract: We study the demand, inventory, and capacity allocation problem in production systems with multiple inventory locations and a production facility operating under linear and concave costs. Independent stochastic demand from multiple sources is fulfilled from multiple warehouses that are in turn replenished from a shared production facility with stochastic production lead times. We propose a novel formulation of the demand allocation problem, and show that the optimal customer allocations are not necessarily single-sourced. The new formulation allows the inclusion of additional decisions alongside demand and inventory allocation. Capacity decisions are incorporated under two cost structures: linear and concave. For the concave case, we show that for a given demand and inventory allocation, the optimal capacity of the production facility takes on discrete values within a finite set, which allows the objective to be linearized. We demonstrate numerically that the joint optimization of capacity, inventory, and demand allocation decisions yields significant cost savings over a sequential approach and leads to a high utilization of the production facility under linear capacity costs, but relatively low utilization under concave costs. Safety stock at the distribution centers, on the other hand, is relatively low under both linear and concave costs. Journal: IISE Transactions Pages: 248-265 Issue: 3 Volume: 51 Year: 2019 Month: 3 X-DOI: 10.1080/24725854.2018.1490045 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1490045 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:3:p:248-265 Template-Type: ReDIF-Article 1.0 Author-Name: Ziwei Lin Author-X-Name-First: Ziwei Author-X-Name-Last: Lin Author-Name: Andrea Matta Author-X-Name-First: Andrea Author-X-Name-Last: Matta Author-Name: J. George Shanthikumar Author-X-Name-First: J. George Author-X-Name-Last: Shanthikumar Title: Combining simulation experiments and analytical models with area-based accuracy for performance evaluation of manufacturing systems Abstract: Simulation is considered one of the most practical tools for estimating manufacturing system performance, but it is slow to execute. Analytical models are generally available to provide fast, but biased, estimates of the system performance. These two approaches are commonly used separately, either sequentially or one as an alternative to the other, for assessing manufacturing system performance. This article proposes a method to combine simulation experiments with analytical results in a single performance evaluation model. The method is based on kernel regression and allows more than one analytical method to be considered. A high-fidelity model is combined with low-fidelity models for manufacturing system performance evaluation. Multiple area-based low-fidelity models can be considered for the prediction. The numerical results show that the proposed method is able to identify the reliability of low-fidelity models in different areas and provide estimates with higher accuracy. 
Comparison with alternative approaches shows that the method is more accurate in a studied manufacturing application. Journal: IISE Transactions Pages: 266-283 Issue: 3 Volume: 51 Year: 2019 Month: 3 X-DOI: 10.1080/24725854.2018.1490046 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1490046 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:3:p:266-283 Template-Type: ReDIF-Article 1.0 Author-Name: Hoang M. Tran Author-X-Name-First: Hoang M. Author-X-Name-Last: Tran Author-Name: Satish T. S. Bukkapatnam Author-X-Name-First: Satish T. S. Author-X-Name-Last: Bukkapatnam Author-Name: Mridul Garg Author-X-Name-First: Mridul Author-X-Name-Last: Garg Title: Detecting changes in transient complex systems via dynamic network inference Abstract: Graph analytics methods have evoked significant interest in recent years. Their applicability to real-world complex systems is currently limited by the challenges of inferring effective graph representations of the high-dimensional, noisy, nonlinear and transient dynamics from limited time series outputs, as well as of extracting statistical quantifiers that capture the salient structure of the inferred graphs for detecting change. In this article, we present an approach to detecting changes in complex dynamic systems that is based on spectral graph theory and uses a single realization of time series data collected under specific, common types of transient conditions, such as intermittency. We introduce a statistic, γk, based on the spectral content of the inferred graph. We show that the γk statistic under high-dimensional dynamics converges to a normal distribution, and we employ the parameters of this distribution to construct a procedure to detect qualitative changes in the coupling structure of a dynamical system. Experimental investigations suggest that the γk statistic by itself is able to detect changes with a modified area under the curve (mAUC) of about 0.96 (for numerical simulation tests), and can achieve a true positive rate of about 40% for detecting seizures from EEG signals. In addition, incorporating this statistic into random forest, one of the best seizure detection methods, improves the seizure detection rate of the random forest method by 5% in 35% of the subjects. These studies of the network inferred from EEG signals suggest that γk can capture salient structural changes in the physiology of the process and can therefore serve as an effective feature for detecting seizures from EEG signals. Journal: IISE Transactions Pages: 337-353 Issue: 3 Volume: 51 Year: 2019 Month: 3 X-DOI: 10.1080/24725854.2018.1491075 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1491075 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:3:p:337-353 Template-Type: ReDIF-Article 1.0 Author-Name: Eleonora Bottani Author-X-Name-First: Eleonora Author-X-Name-Last: Bottani Author-Name: Giuseppe Vignali Author-X-Name-First: Giuseppe Author-X-Name-Last: Vignali Title: Augmented reality technology in the manufacturing industry: A review of the last decade Abstract: The aim of this article is to analyze and review the scientific literature relating to the application of Augmented Reality (AR) technology in industry. 
AR technology is becoming increasingly pervasive, due to the ease of application development and the widespread use of hardware devices (mainly smartphones and tablets) able to support its adoption. Today, a growing number of applications based on AR solutions are being developed for industrial purposes. Although these applications are often little more than experimental prototypes, AR technology is proving highly flexible and is showing great potential in numerous areas (e.g., maintenance, training/learning, assembly or product design) and in industrial sectors (e.g., the automotive, aircraft or manufacturing industries). It is expected that AR systems will become even more widespread in the near future. The purpose of this review is to classify the literature on AR published from 2006 to early 2017, to identify the main areas and sectors where AR is currently deployed, to describe the technological solutions adopted, and to outline the main benefits achievable with this kind of technology. Journal: IISE Transactions Pages: 284-310 Issue: 3 Volume: 51 Year: 2019 Month: 3 X-DOI: 10.1080/24725854.2018.1493244 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1493244 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:3:p:284-310 Template-Type: ReDIF-Article 1.0 Author-Name: Maichel M. Aguayo Author-X-Name-First: Maichel M. Author-X-Name-Last: Aguayo Author-Name: Subhash C. Sarin Author-X-Name-First: Subhash C. Author-X-Name-Last: Sarin Author-Name: Hanif D. Sherali Author-X-Name-First: Hanif D. Author-X-Name-Last: Sherali Title: Solving the single and multiple asymmetric Traveling Salesmen Problems by generating subtour elimination constraints from integer solutions Abstract: We present an algorithm to solve single and multiple asymmetric traveling salesmen problems (ATSP and mATSP) by generating violated subtour elimination constraints from specific integer solutions. Computational results for the ATSP reveal that the proposed approach is able to solve 29 out of 33 well-known instances taken from the literature (involving between 100 and 1001 cities) to optimality within an hour of CPU time. Furthermore, the proposed approach is demonstrated to outperform the most effective state-of-the-art exact algorithms available in the literature when applied to solve the given ATSP instances via their equivalently transformed symmetric TSP representations. For the mATSP, the proposed approach is able to solve 27 out of 36 instances derived from the ATSP library involving up to 1001 cities to optimality within an hour of CPU time and also outperforms the direct solution by CPLEX, one of the three most effective formulations reported in the literature for this class of problems. The proposed approach is easy to implement and can be used to solve ATSP and mATSP as stand-alone models or can be applied in contexts where they appear as sub-models within some application settings. Journal: IISE Transactions Pages: 45-53 Issue: 1 Volume: 50 Year: 2018 Month: 1 X-DOI: 10.1080/24725854.2017.1374580 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1374580 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:uiiexx:v:50:y:2018:i:1:p:45-53 Template-Type: ReDIF-Article 1.0 Author-Name: Damiano Brigo Author-X-Name-First: Damiano Author-X-Name-Last: Brigo Author-Name: Francesco Rapisarda Author-X-Name-First: Francesco Author-X-Name-Last: Rapisarda Author-Name: Abir Sridi Author-X-Name-First: Abir Author-X-Name-Last: Sridi Title: The multivariate mixture dynamics: Consistent no-arbitrage single-asset and index volatility smiles Abstract: We introduce a new arbitrage-free multivariate dynamic asset pricing model that allows us to reconcile single name and index/basket volatility smiles using a tractable and explicit dependence structure that goes beyond instantaneous correlation. Each asset volatility smile is modeled according to a density-mixture dynamical model, while the same property holds for the multivariate process of all assets, whose density is a mixture of multivariate basic densities. After introducing the model, we derive tractable index option smile formulas resulting from the model and related closed-form solutions for multivariate densities taking the form of multivariate mixtures. Using Markovian projection techniques, we relate our model to a multivariate uncertain volatility model and show a consistency result with geometric baskets, with hints on possible uses in investigating triangular relationships between foreign exchange rates and the related smiles in practice. We also derive closed-form solutions for a number of terminal statistics of dependence and derive a precise relationship with a simpler, but less tractable, model based on a basic instantaneous correlation structure. Finally, closed-form solutions for volatility/asset correlations illuminating the relationship with the uncertain volatility model are introduced. The model tractability makes it particularly suited for calibration and risk management applications, where speed of calculations and tractability are essential. A few numerical examples on basket and spread options pricing conclude the article. Journal: IISE Transactions Pages: 27-44 Issue: 1 Volume: 50 Year: 2018 Month: 1 X-DOI: 10.1080/24725854.2017.1374581 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1374581 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:1:p:27-44 Template-Type: ReDIF-Article 1.0 Author-Name: Chung-Lun Li Author-X-Name-First: Chung-Lun Author-X-Name-Last: Li Author-Name: Weiya Zhong Author-X-Name-First: Weiya Author-X-Name-Last: Zhong Title: Task scheduling with progress control Abstract: Tasks with long durations are often subject to the requirement of periodically reporting progress to process controllers. Under this requirement, working teams that simultaneously process multiple tasks need to schedule their work carefully in order to demonstrate satisfactory progress on each unfinished task. We present a single-machine scheduling model that reflects this requirement. Our model has multiple milestones at which the tasks are penalized if their progress is below a satisfactory level. We develop polynomial-time solution methods for the general case with convex nonlinear penalty functions and for the special case with linear penalty functions. Extensions of our model are also discussed. 
Journal: IISE Transactions Pages: 54-61 Issue: 1 Volume: 50 Year: 2018 Month: 1 X-DOI: 10.1080/24725854.2017.1380334 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1380334 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:1:p:54-61 Template-Type: ReDIF-Article 1.0 Author-Name: Victor Richmond R. Jose Author-X-Name-First: Victor Richmond R. Author-X-Name-Last: Jose Author-Name: Jun Zhuang Author-X-Name-First: Jun Author-X-Name-Last: Zhuang Title: Incorporating risk preferences in stochastic noncooperative games Abstract: Traditional game-theoretic models of competition with uncertainty often ignore preferences and attitudes toward risk by assuming that players are risk neutral. In this article, we begin by considering how a comprehensive analysis and incorporation of expected utility theory affect players’ equilibrium behavior in a simple, single-period, sequential stochastic game. Although the literature posits that the more risk-averse a first mover is, the more likely she is to compete and defend her position as the “leader”, and that the more risk-seeking a “follower” is, the more likely he is willing to participate and compete, we find that this behavior may not always be true in this more general setting. Under simple assumptions on the utility function, we perform sensitivity analyses on the parameters and show which behavior changes when deviations from risk neutrality are introduced into a model. We also provide some insights on how risk preferences influence pre-emption and interdiction by looking at how these preferences affect the first mover’s advantage in a sequential setting. This article generates novel insights into when a confluence of factors leads players to deviate or change their behavior in many risk analysis settings where stochastic games are used. Journal: IISE Transactions Pages: 1-13 Issue: 1 Volume: 50 Year: 2018 Month: 1 X-DOI: 10.1080/24725854.2017.1382749 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1382749 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:1:p:1-13 Template-Type: ReDIF-Article 1.0 Author-Name: Jin Fang Author-X-Name-First: Jin Author-X-Name-Last: Fang Author-Name: L. Jeff Hong Author-X-Name-First: L. Jeff Author-X-Name-Last: Hong Title: A simulation-based estimation method for bias reduction Abstract: Models are often built to evaluate system performance measures or to make quantitative decisions. These models sometimes involve unknown input parameters that need to be estimated statistically using data. In these situations, a statistical method is typically used to estimate these input parameters and the estimates are then plugged into the models to evaluate system output performances. The output performance estimators obtained from this approach usually have a large bias when the model is nonlinear and the sample size of the data is finite. A simulation-based estimation method to reduce the bias of performance estimators for models that have a closed-form expression already exists in the literature. In this article, we extend that method to more general situations where the models have no closed-form expression and can only be evaluated through simulation. A stochastic root-finding problem is formulated to obtain the simulation-based estimators, and several algorithms are designed. 
Furthermore, we give a thorough asymptotic analysis of the properties of the simulation-based estimators, including consistency, the order of the bias, and the asymptotic variance. Our numerical experiments show that the experimental results are consistent with the theoretical analysis. Journal: IISE Transactions Pages: 14-26 Issue: 1 Volume: 50 Year: 2018 Month: 1 X-DOI: 10.1080/24725854.2017.1382751 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1382751 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:1:p:14-26 Template-Type: ReDIF-Article 1.0 Author-Name: Ross Sparks Author-X-Name-First: Ross Author-X-Name-Last: Sparks Author-Name: Chris Carter Author-X-Name-First: Chris Author-X-Name-Last: Carter Author-Name: Petra Graham Author-X-Name-First: Petra Author-X-Name-Last: Graham Author-Name: David Muscatello Author-X-Name-First: David Author-X-Name-Last: Muscatello Author-Name: Tim Churches Author-X-Name-First: Tim Author-X-Name-Last: Churches Author-Name: Jill Kaldor Author-X-Name-First: Jill Author-X-Name-Last: Kaldor Author-Name: Robyn Turner Author-X-Name-First: Robyn Author-X-Name-Last: Turner Author-Name: Wei Zheng Author-X-Name-First: Wei Author-X-Name-Last: Zheng Author-Name: Louise Ryan Author-X-Name-First: Louise Author-X-Name-Last: Ryan Title: Understanding sources of variation in syndromic surveillance for early warning of natural or intentional disease outbreaks Abstract: Daily counts of computer records of hospital emergency department arrivals grouped according to diagnosis (called here syndrome groupings) can be monitored by epidemiologists for changes in frequency that could provide early warning of bioterrorism events or naturally occurring disease outbreaks and epidemics. This type of public health surveillance is sometimes called syndromic surveillance. We used transitional Poisson regression models to obtain one-day-ahead arrival forecasts. Regression parameter estimates and forecasts were updated for each day using the latest 365 days of data. The resulting time series of recursive estimates of parameters such as the amplitude and location of the seasonal peaks, as well as the one-day-ahead forecasts and forecast errors, can be monitored to understand changes in the epidemiology of each syndrome grouping. The counts for each syndrome grouping were autocorrelated and non-homogeneous Poisson. As such, the main methodological contribution of the article is the adaptation of Cumulative Sum (CUSUM) and Exponentially Weighted Moving Average (EWMA) plans for monitoring non-homogeneous counts. These plans were valid for small counts, where the assumption of normally distributed one-day-ahead forecast errors, typically used in other papers, breaks down. In addition, these adaptive plans have the advantage that control limits do not have to be trained for different syndrome groupings or aggregations of emergency departments. Conventional methods for signaling increases in syndrome grouping counts (Shewhart, CUSUM, and EWMA control charts of the standardized forecast errors) were also examined. Shewhart charts were, at times, insensitive to shifts of interest. CUSUM and EWMA charts were only reasonable for large counts. We illustrate our methods with respiratory, influenza, diarrhea, and abdominal pain syndrome groupings. 
Journal: IIE Transactions Pages: 613-631 Issue: 9 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170902942667 File-URL: http://hdl.handle.net/10.1080/07408170902942667 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:9:p:613-631 Template-Type: ReDIF-Article 1.0 Author-Name: Steven Shechter Author-X-Name-First: Steven Author-X-Name-Last: Shechter Author-Name: Oguzhan Alagoz Author-X-Name-First: Oguzhan Author-X-Name-Last: Alagoz Author-Name: Mark Roberts Author-X-Name-First: Mark Author-X-Name-Last: Roberts Title: Irreversible treatment decisions under consideration of the research and development pipeline for new therapies Abstract: This article addresses a topic not considered in previous models of patient treatment: the possible downstream availability of improved treatment options coming out of the medical research and development (R&D) pipeline. We provide clinical examples in which a patient may prefer to wait and take the chance that an improved therapy comes to market rather than choose an irreversible treatment option that has serious quality of life ramifications and would render future treatment discoveries meaningless for that patient. We then develop a Markov decision process model of the optimal time to initiate treatment, which incorporates uncertainty around the development of new therapies and their effects. After deriving structural properties for the model, we provide a numerical example that demonstrates how models that do not have any foresight of the R&D pipeline may result in optimal policies that differ from models that have such foresight, implying erroneous decisions in the former models. Our example quantifies the effects of such errors. Journal: IIE Transactions Pages: 632-642 Issue: 9 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903468589 File-URL: http://hdl.handle.net/10.1080/07408170903468589 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:9:p:632-642 Template-Type: ReDIF-Article 1.0 Author-Name: Paul Enders Author-X-Name-First: Paul Author-X-Name-Last: Enders Author-Name: Alan Scheller-Wolf Author-X-Name-First: Alan Author-X-Name-Last: Scheller-Wolf Author-Name: Nicola Secomandi Author-X-Name-First: Nicola Author-X-Name-Last: Secomandi Title: Interaction between technology and extraction scaling real options in natural gas production Abstract: This article is the outcome of a research engagement studying questions of technology utilization and production management with managers at EQT Corp., an integrated natural gas production and distribution company. The question of how to best leverage the use of technology is fundamental to almost any industry; this is especially true for those companies operating in the volatile field of commodities production, as does EQT Corp. This article considers the interaction between two types of real options that arise in natural gas production: the option to scale the production level, through enhanced extraction and communication technologies, and the option to scale the extraction rate, by pausing production. This interaction is studied by applying stochastic dynamic programming to actual operational and financial data. 
The analysis brings to light data-driven managerial principles pertaining to the valuation and deployment of these scaling options, the effect of price uncertainty on the option values, how to effectively simplify the optimal deployment policy, and whether these options, or subsets of them, are complements or substitutes. These principles are significant to natural gas production managers and, potentially, to managers of other natural resource production processes, such as oil extraction and mining. Journal: IIE Transactions Pages: 643-655 Issue: 9 Volume: 42 Year: 2010 X-DOI: 10.1080/07408171003673003 File-URL: http://hdl.handle.net/10.1080/07408171003673003 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:9:p:643-655 Template-Type: ReDIF-Article 1.0 Author-Name: Loo Lee Author-X-Name-First: Loo Author-X-Name-Last: Lee Author-Name: Ek Chew Author-X-Name-First: Ek Author-X-Name-Last: Chew Author-Name: Suyan Teng Author-X-Name-First: Suyan Author-X-Name-Last: Teng Author-Name: David Goldsman Author-X-Name-First: David Author-X-Name-Last: Goldsman Title: Finding the non-dominated Pareto set for multi-objective simulation models Abstract: This article considers a multi-objective Ranking and Selection (R&S) problem, where the system designs are evaluated in terms of more than one performance measure. The concept of Pareto optimality is incorporated into the R&S scheme, and attempts are made to find all of the non-dominated designs rather than a single “best” one. In addition to a performance index to measure how non-dominated a design is, two types of errors are defined to measure the probabilities that designs in the true Pareto/non-Pareto sets are dominated/non-dominated based on observed performance. Asymptotic allocation rules are derived for simulation replications based on a Lagrangian relaxation method, under the assumption that an arbitrarily large simulation budget is available. Finally, a simple sequential procedure is proposed to allocate the simulation replications based on the asymptotic allocation rules. Computational results show that the proposed solution framework is efficient when compared to several other algorithms in terms of its capability of identifying the Pareto set. Journal: IIE Transactions Pages: 656-674 Issue: 9 Volume: 42 Year: 2010 X-DOI: 10.1080/07408171003705367 File-URL: http://hdl.handle.net/10.1080/07408171003705367 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:9:p:656-674 Template-Type: ReDIF-Article 1.0 Author-Name: Wooseung Jang Author-X-Name-First: Wooseung Author-X-Name-Last: Jang Author-Name: James Noble Author-X-Name-First: James Author-X-Name-Last: Noble Author-Name: Thomas Hutsel Author-X-Name-First: Thomas Author-X-Name-Last: Hutsel Title: An integrated model to solve the winter asset and road maintenance problem Abstract: Winter road maintenance operations require many complex strategic and operational planning decisions. The main problems include locating depots, designing sectors and routes, and configuring and scheduling vehicles. The complexity involved in each of these decisions has generally resulted in research that approaches each of the problems separately, which can lead to isolated and suboptimal solutions. 
In addition to being integrated, a successful approach to these problems must consider the practical aspects of each problem and include a straightforward, easily executable solution methodology; otherwise, it is unlikely to be implemented. This research proposes a systematic, heuristic-based optimization approach to integrate the winter road maintenance planning decisions. The approach is illustrated using an example of the state highway road network in Boone County, Missouri, which is maintained by the Missouri Department of Transportation. Journal: IIE Transactions Pages: 675-689 Issue: 9 Volume: 42 Year: 2010 X-DOI: 10.1080/07408171003705375 File-URL: http://hdl.handle.net/10.1080/07408171003705375 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:9:p:675-689 Template-Type: ReDIF-Article 1.0 Author-Name: Lisa Maillart Author-X-Name-First: Lisa Author-X-Name-Last: Maillart Author-Name: Akram Kamrani Author-X-Name-First: Akram Author-X-Name-Last: Kamrani Author-Name: Bryan Norman Author-X-Name-First: Bryan Author-X-Name-Last: Norman Author-Name: Jayant Rajgopal Author-X-Name-First: Jayant Author-X-Name-Last: Rajgopal Author-Name: Peter Hawrylak Author-X-Name-First: Peter Author-X-Name-Last: Hawrylak Title: Optimizing RFID tag-inventorying algorithms Abstract: Maximizing the rate at which Radio Frequency IDentification (RFID) tags can be read is a critical issue for end-users of RFID technology as well as for RFID hardware manufacturers. Supply chain applications typically involve tags for which the reader–tag communication is regulated by a protocol that enforces a slotted Aloha scheme. This article shows how to dynamically adjust the parameter of this scheme as a function of the number of tags remaining and the last-used value of the parameter, such that the total amount of time required to read a given set of tags is minimized. To do so, several variations of this stochastic shortest path problem are formulated as Markov decision processes, which are then solved for the optimal policies by exploiting the models' decomposability. Computational results indicate that the optimal policies are complex and that static heuristics perform poorly, but that a myopic heuristic performs nearly optimally under the current cost structure when the reader processes all slots before selecting a new parameter value. Journal: IIE Transactions Pages: 690-702 Issue: 9 Volume: 42 Year: 2010 X-DOI: 10.1080/07408171003705714 File-URL: http://hdl.handle.net/10.1080/07408171003705714 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:9:p:690-702 Template-Type: ReDIF-Article 1.0 Author-Name: Margarit Khachatryan Author-X-Name-First: Margarit Author-X-Name-Last: Khachatryan Author-Name: Leon F. McGinnis Author-X-Name-First: Leon F. Author-X-Name-Last: McGinnis Title: Picker travel time model for an order picking system with buffers Abstract: This article addresses a new class of picker-to-goods order picking systems, in which item retrieval is decoupled from order assembly by a pick-to-buffer technology. This decoupling of item retrieval and order assembly dramatically changes the dynamics of the pick process. 
Picking to buffers means that the picker is limited by the number of available buffers, rather than by the number of items or orders that may be transported simultaneously; order assembly must therefore be carefully synchronized in real time so that the picker is not blocked by already-filled buffers. This synchronization requirement forces consideration of the stochastic nature of the picking process in designing such systems, in particular, estimating the variance of pick times. This is in sharp contrast with traditional picker-to-goods systems, where estimates of mean pick times are sufficient for evaluating design performance. In this article, models for both the expected value and the variance of picker travel time are proposed and evaluated. The approximate analytic results are compared with results from detailed simulation and are sufficiently accurate to support design. The real-time synchronization of order assembly with picking is a topic of further research. Journal: IIE Transactions Pages: 894-904 Issue: 9 Volume: 46 Year: 2014 Month: 9 X-DOI: 10.1080/0740817X.2013.823001 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.823001 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:9:p:894-904 Template-Type: ReDIF-Article 1.0 Author-Name: Xiao Cai Author-X-Name-First: Xiao Author-X-Name-Last: Cai Author-Name: Sunderesh S. Heragu Author-X-Name-First: Sunderesh S. Author-X-Name-Last: Heragu Author-Name: Yang Liu Author-X-Name-First: Yang Author-X-Name-Last: Liu Title: Modeling and evaluating the AVS/RS with tier-to-tier vehicles using a semi-open queueing network Abstract: The Autonomous Vehicle Storage and Retrieval System (AVS/RS) with tier-to-tier vehicles is modeled as a semi-open queueing network (SOQN). Different storage/retrieval requests in the AVS/RS are modeled as different classes of customers in the SOQN model. Analyzing multiple configurations of an SOQN via computer simulation is time-consuming. Therefore, this article uses analytical methods to evaluate the performance of a multi-class, multi-stage SOQN with general service time and interarrival time distributions. Two synchronization policies are also compared, and the results show that an AVS/RS with virtual synchronization performs better than one with physical synchronization. Journal: IIE Transactions Pages: 905-927 Issue: 9 Volume: 46 Year: 2014 Month: 9 X-DOI: 10.1080/0740817X.2013.849832 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.849832 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:9:p:905-927 Template-Type: ReDIF-Article 1.0 Author-Name: Olga Battaïa Author-X-Name-First: Olga Author-X-Name-Last: Battaïa Author-Name: Alexandre Dolgui Author-X-Name-First: Alexandre Author-X-Name-Last: Dolgui Author-Name: Nikolai Guschinsky Author-X-Name-First: Nikolai Author-X-Name-Last: Guschinsky Author-Name: Genrikh Levin Author-X-Name-First: Genrikh Author-X-Name-Last: Levin Title: Combinatorial techniques to optimally customize an automated production line with rotary transfer and turrets Abstract: The problem of designing complex automated production lines with rotary transfer and turrets is considered. Operations are partitioned into groups that are performed by spindle heads or by turrets. Constraints related to the design of spindle heads, turrets, and working positions, as well as precedence constraints related to operations, are given. 
The problem consists of minimizing the estimated cost of this automated production line, while meeting a given cycle time and satisfying all constraints. Two methods are proposed to solve the problem. The first uses a mixed-integer programming formulation; the second is based on reducing the problem to a constrained shortest path problem. An industrial example is presented. Journal: IIE Transactions Pages: 867-879 Issue: 9 Volume: 46 Year: 2014 Month: 9 X-DOI: 10.1080/0740817X.2013.849837 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.849837 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:9:p:867-879 Template-Type: ReDIF-Article 1.0 Author-Name: Lisa M. Thomas Author-X-Name-First: Lisa M. Author-X-Name-Last: Thomas Author-Name: Russell D. Meller Author-X-Name-First: Russell D. Author-X-Name-Last: Meller Title: Analytical models for warehouse configuration Abstract: The performance of a warehouse is impacted by how it is configured, yet there is no optimization model in the literature to answer the question of how to best configure the warehouse in terms of warehouse shape and the configuration of the dock doors. Moreover, the building blocks for such a model (put-away, replenishment, and order picking models that can be combined in an optimization model) are either not available (in the case of replenishment) or built on a set of inconsistent assumptions (in the case of put-away and order picking). Therefore, this article lays the foundation for more sophisticated warehouse configuration optimization models by developing the first analytical model for replenishment operation performance and extending put-away and order picking performance models. These new models are used to address a question motivated by industry: the optimal configuration of a case-picking warehouse in terms of the shape of the facility and whether the facility is configured with dock doors on one or both sides. An example is presented to demonstrate the use of the proposed models in answering such a question, quantifying the benefit of using an integrated approach to warehouse configuration. Journal: IIE Transactions Pages: 928-947 Issue: 9 Volume: 46 Year: 2014 Month: 9 X-DOI: 10.1080/0740817X.2013.855847 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.855847 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:9:p:928-947 Template-Type: ReDIF-Article 1.0 Author-Name: Debjit Roy Author-X-Name-First: Debjit Author-X-Name-Last: Roy Author-Name: Jennifer A. Pazour Author-X-Name-First: Jennifer A. Author-X-Name-Last: Pazour Author-Name: René de Koster Author-X-Name-First: René Author-X-Name-Last: de Koster Title: A novel approach for designing rental vehicle repositioning strategies Abstract: An important tactical decision for vehicle rental providers is the design of a repositioning strategy to balance vehicle utilization with customer wait times due to vehicle unavailabilities. To address this problem, this article analyzes alternative repositioning strategies: a no-repositioning strategy, a customer repositioning strategy, and a vehicle repositioning strategy, using queuing network models that are able to handle stochastic demand and vehicle unavailabilities. 
Optimization models are formulated to determine the repositioning fractions for alternate strategies that minimize the rental provider’s cost by balancing repositioning costs with customer waiting penalty costs. The nonlinear optimization problems are challenging to solve because the objective functions are non-differentiable and the decision variables (such as effective arrival rates and customer repositioning fractions) are interrelated. Therefore, a two-phase sequential solution approach to estimate the repositioning fractions is developed. Phase 1 determines the effective arrival rates by developing an approximate network model, deriving structural results, determining a high-quality solution point, and refining the solution. Phase 2 determines the repositioning fractions by solving a transportation problem. Numerical experiments are used to evaluate the efficacy of the proposed solution approach, to analyze alternate repositioning strategies, and to illustrate how the developed techniques can be adopted to create better readiness at a depot. Journal: IIE Transactions Pages: 948-967 Issue: 9 Volume: 46 Year: 2014 Month: 9 X-DOI: 10.1080/0740817X.2013.876129 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.876129 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:9:p:948-967 Template-Type: ReDIF-Article 1.0 Author-Name: Faraz Ramtin Author-X-Name-First: Faraz Author-X-Name-Last: Ramtin Author-Name: Jennifer A. Pazour Author-X-Name-First: Jennifer A. Author-X-Name-Last: Pazour Title: Analytical models for an automated storage and retrieval system with multiple in-the-aisle pick positions Abstract: An automated storage and retrieval system with multiple in-the-aisle pick positions (MIAPP-AS/RS) is a case-level order fulfillment technology that enables order picking via multiple pick positions (outputs) located in the aisle. This article develops expected travel time models for different operating policies and different physical configurations. These models can be used to analyze MIAPP-AS/RS throughput performance during peak and non-peak hours. Moreover, closed-form approximations are derived for the case of an infinite number of pick positions, which enable the optimal shape configuration that minimizes expected travel times to be derived. The expected travel time models are compared with a simulation model of a discrete rack, and the results validate that the proposed models provide good estimates. Finally, a numerical experiment is conducted to illustrate the trade-offs between performance of operating policies and design configurations. It is found that MIAPP-AS/RS with a dual picking floor and input point is a robust configuration because its single-command operating policy has throughput performance comparable to that of a dual-command operating policy. Journal: IIE Transactions Pages: 968-986 Issue: 9 Volume: 46 Year: 2014 Month: 9 X-DOI: 10.1080/0740817X.2014.882037 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.882037 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:9:p:968-986 Template-Type: ReDIF-Article 1.0 Author-Name: Toyin Clottey Author-X-Name-First: Toyin Author-X-Name-Last: Clottey Author-Name: W.C. Benton Author-X-Name-First: W.C. 
Author-X-Name-Last: Benton Title: Determining core acquisition quantities when products have long return lags Abstract: An important problem faced in closed-loop supply chains is ensuring a sufficient supply of reusable products (i.e., cores) to support reuse activities. Accurate forecasting of used product returns can assist in effectively managing the sourcing activities for cores. The application of existing forecasting models to actual data provided by an Original Equipment Manufacturer (OEM) remanufacturer resulted in the following challenges: (i) inherent difficulties in estimation due to long return lags in the data and (ii) the need to adjust for initial conditions. This article develops methods to address these issues and illustrates the proposed approach using the data provided by the OEM remanufacturer. The cost implications of using the proposed method to source cores are also investigated. The analysis showed that the proposed forecasting approach performs best when the product is in the maturity or decline stages of its life cycle, with the rate of product returns balanced with the demand volume for the remanufactured product. Forecasting product returns can therefore be best leveraged for reducing the acquisition costs of cores in such settings. Journal: IIE Transactions Pages: 880-893 Issue: 9 Volume: 46 Year: 2014 Month: 9 X-DOI: 10.1080/0740817X.2014.882531 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.882531 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:9:p:880-893 Template-Type: ReDIF-Article 1.0 Author-Name: Yonit Barron Author-X-Name-First: Yonit Author-X-Name-Last: Barron Author-Name: Uri Yechiali Author-X-Name-First: Uri Author-X-Name-Last: Yechiali Title: Generalized control-limit preventive repair policies for deteriorating cold and warm standby Markovian systems Abstract: Consider a deteriorating repairable Markovian system with N stochastically independent identical units. The lifetime of each unit follows a discrete phase-type distribution. There is one online unit and the others are in standby status. In addition, there is a single repair facility and the repair time of a failed unit has a geometric distribution. The system is inspected at equally spaced points in time. After each inspection, either repair or a full replacement is possible. We consider state-dependent operating costs, repair costs that are dependent on the extent of the repair, and failure penalty costs. Applying dynamic programming, we show that under reasonable conditions on the system’s law of evolution and on the state-dependent costs, a generalized control-limit policy is optimal for the expected total discounted criterion for both cold standby and warm standby systems. Illustrative numerical examples are presented and insights are provided. Journal: IISE Transactions Pages: 1031-1049 Issue: 11 Volume: 49 Year: 2017 Month: 11 X-DOI: 10.1080/24725854.2017.1335919 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1335919 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:uiiexx:v:49:y:2017:i:11:p:1031-1049 Template-Type: ReDIF-Article 1.0 Author-Name: Kan Wu Author-X-Name-First: Kan Author-X-Name-Last: Wu Author-Name: Yichi Shen Author-X-Name-First: Yichi Author-X-Name-Last: Shen Author-Name: Ning Zhao Author-X-Name-First: Ning Author-X-Name-Last: Zhao Title: Analysis of tandem queues with finite buffer capacity Abstract: Tandem queues with finite buffer capacity commonly exist in practical applications. Viewing a tandem queue as an integrated system, we develop an innovative approach to analyzing its performance, drawing on insight from Friedman's reduction method. In our approach, the starvation at the bottleneck caused by service time randomness is modeled by interruptions. Fundamental properties of tandem queues with finite buffer capacity are examined. Without the assumptions of phase-type distributions and stochastic independence, we show that, in general, the system service rate of a tandem queue with a finite buffer capacity is equal to or smaller than its bottleneck service rate, and that virtual interruptions, the extra idle periods at the bottleneck caused by the non-bottleneck stations, depend on arrival rates. Hence, the system service rate is a function of arrival rates when the buffer capacity of a tandem queue is finite. Approximations for the mean queue time of a dual tandem queue are developed using the concept of virtual interruptions. Journal: IISE Transactions Pages: 1001-1013 Issue: 11 Volume: 49 Year: 2017 Month: 11 X-DOI: 10.1080/24725854.2017.1342055 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1342055 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:11:p:1001-1013 Template-Type: ReDIF-Article 1.0 Author-Name: Peican Zhu Author-X-Name-First: Peican Author-X-Name-Last: Zhu Author-Name: Yangming Guo Author-X-Name-First: Yangming Author-X-Name-Last: Guo Author-Name: Shubin Si Author-X-Name-First: Shubin Author-X-Name-Last: Si Author-Name: Jie Han Author-X-Name-First: Jie Author-X-Name-Last: Han Title: A stochastic analysis of competing failures with propagation effects in functional dependency gates Abstract: Various dynamic gates have been utilized to model behaviors in dynamic fault trees (DFTs). For the functional dependency relationship among different components, a functional dependency (FDEP) gate models the scenario in which the failure of some trigger events may result in the failures of other components. Conventionally, dependent relationships are modeled by an OR gate for systems with perfect fault coverage. However, this is usually inaccurate, due to the effect of different types of failures, including local and propagated failures. A propagated failure originating from a component may affect the status of other dependent components. Whether this occurs, however, is determined by the failure order of the trigger and dependent events. A conventional DFT analysis incurs a high computational complexity for this scenario. In this article, a stochastic analysis is performed for an FDEP gate under imperfect fault coverage. The reliability of a system with competing failures can be efficiently predicted by the proposed stochastic analysis. Furthermore, the encoding using a non-Bernoulli sequence with a random permutation of a fixed number of ones and zeros enables effective modeling of any time-to-failure distribution for the components. 
The corresponding efficiency and accuracy are demonstrated by the analysis of several benchmarks. Journal: IISE Transactions Pages: 1050-1064 Issue: 11 Volume: 49 Year: 2017 Month: 11 X-DOI: 10.1080/24725854.2017.1342056 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1342056 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:11:p:1050-1064 Template-Type: ReDIF-Article 1.0 Author-Name: Marcelo Bacher Author-X-Name-First: Marcelo Author-X-Name-Last: Bacher Author-Name: Irad Ben-Gal Author-X-Name-First: Irad Author-X-Name-Last: Ben-Gal Title: Ensemble-Bayesian SPC: Multi-mode process monitoring for novelty detection Abstract: We propose a monitoring method based on a Bayesian analysis of an ensemble-of-classifiers for Statistical Process Control (SPC) of multi-mode systems. A specific case is considered, in which new modes of operations (new classes), also called “novelties,” are identified during the monitoring stage of the system. The proposed Ensemble-Bayesian SPC (EB-SPC) models the known operating modes by categorizing their corresponding observations into data classes that are detected during the training stage. Ensembles of decision trees are trained over replicated subspaces of features, with class-dependent thresholds being computed and used to detect novelties. In contrast with existing monitoring approaches that often focus on a single operating mode as the “in-control” class, the EB-SPC exploits the joint information of the trained classes and combines the posterior probabilities of various classifiers by using a “mixture-of-experts” approach. Performance evaluation on real datasets from both public repositories and real-world semiconductor datasets shows that the EB-SPC outperforms both conventional multivariate SPC and ensemble-of-classifiers methods and has high potential for novelty detection, including the monitoring of multi-mode systems. Journal: IISE Transactions Pages: 1014-1030 Issue: 11 Volume: 49 Year: 2017 Month: 11 X-DOI: 10.1080/24725854.2017.1347984 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1347984 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:11:p:1014-1030 Template-Type: ReDIF-Article 1.0 Author-Name: Chen Zhang Author-X-Name-First: Chen Author-X-Name-Last: Zhang Author-Name: Yong Lei Author-X-Name-First: Yong Author-X-Name-Last: Lei Author-Name: Linmiao Zhang Author-X-Name-First: Linmiao Author-X-Name-Last: Zhang Author-Name: Nan Chen Author-X-Name-First: Nan Author-X-Name-Last: Chen Title: Modeling tunnel profile in the presence of coordinate errors: A Gaussian process-based approach Abstract: This article presents a Gaussian process (GP)-based approach to model a tunnel’s inner surface profile with high frequency sensing data provided by a Terrestrial Laser Scanner (TLS). We introduce a reading-surface profile that uniquely determines a three-dimensional tunnel in a Cartesian coordinate system. This reading-surface transforms the cylindrical tunnel into a two-dimensional surface profile, hence allowing us to model the tunnel profile by GP. To account for coordinate errors induced by TLS, we take repeated measurements at designed coordinates. We apply a Taylor approximation to extract mean and gradient estimations from the repeated measurements and then fit the GP model with both estimations to obtain a more robust reconstruction of the tunnel profile. 
We validate our method through numerical examples. The simulation results show that with the help of derivative estimations, our method outperforms the conventional GP regression with noisy observations in terms of mean-squared prediction error. We also present a case study to demonstrate that our method provides a more accurate result than the existing cylinder-fitting approach and has great potential for deformation monitoring in the presence of coordinate errors. Journal: IISE Transactions Pages: 1065-1077 Issue: 11 Volume: 49 Year: 2017 Month: 11 X-DOI: 10.1080/24725854.2017.1348646 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1348646 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:11:p:1065-1077 Template-Type: ReDIF-Article 1.0 Author-Name: Wei-Chang Yeh Author-X-Name-First: Wei-Chang Author-X-Name-Last: Yeh Title: Methodology for the reliability evaluation of the novel learning-effect multi-state flow network Abstract: In the traditional multi-state flow networks (MFNs), it is assumed that the flow is fixed in each arc. However, the flow may experience gain after transmission via arcs in many real-life applications; e.g., the infected population size increases over time during outbreaks of disease, and the number of bit errors is amplified in digital transmission. Hence, a novel network model called the learning-effect MFN (MFNle) is proposed to address real-world problems. A straightforward and simple algorithm based on the minimal path (MP) set is presented here to evaluate MFNle reliability, which is defined as the probability that at least d units of data can be sent from the source node and dout (≥d) units of data exit from the sink node through a single MP in the MFNle. The computational complexity of the proposed algorithm is also analyzed. Finally, an example is given to illustrate how the MFNle reliability is calculated using the proposed algorithm. Journal: IISE Transactions Pages: 1078-1085 Issue: 11 Volume: 49 Year: 2017 Month: 11 X-DOI: 10.1080/24725854.2017.1351044 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1351044 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:11:p:1078-1085 Template-Type: ReDIF-Article 1.0 Author-Name: Peter Carr Author-X-Name-First: Peter Author-X-Name-Last: Carr Author-Name: Dilip B. Madan Author-X-Name-First: Dilip B. Author-X-Name-Last: Madan Title: Joint modeling of VIX and SPX options at a single and common maturity with risk management applications Abstract: A double gamma model is proposed for the VIX. The VIX is modeled as gamma distributed with a mean and variance that respond to a gamma-distributed realized variance over the preceding month. Conditional on VIX and the realized variance, the logarithm of the stock is variance gamma distributed with affine conditional drift and quadratic variation. The joint density for the triple of realized variance, VIX, and SPX is in closed form. Maximum likelihood estimation on time series data addresses model adequacy. A joint calibration of the model to SPX and VIX options is employed to illustrate a risk management application: hedging realized volatility options. 
Journal: IIE Transactions Pages: 1125-1131 Issue: 11 Volume: 46 Year: 2014 Month: 11 X-DOI: 10.1080/0740817X.2013.857063 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.857063 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:11:p:1125-1131 Template-Type: ReDIF-Article 1.0 Author-Name: Fei Li Author-X-Name-First: Fei Author-X-Name-Last: Li Author-Name: Diwakar Gupta Author-X-Name-First: Diwakar Author-X-Name-Last: Gupta Title: The extraboard operator scheduling and work assignment problem Abstract: An instance of the operational fixed job scheduling problem arises when open work caused by unplanned events such as bus breakdowns, inclement weather, and driver (operator) absenteeism needs to be covered by reserve (extraboard) drivers. Each work-piece, which is referred to as a job, requires one operator who must work continuously between specified start and end times to complete the job. Each extraboard operator may be assigned up to w hours of work, which need not be continuous so long as the total work time falls within an s-hour time window of that operator’s shift start time. Parameters w and s are called the allowable work-time and spread-time, respectively. The objective is to choose operators’ shift start times and work assignments, while honoring work-time and spread-time constraints, such that the amount of work covered as part of regular duties is maximized. This paper argues that the extraboard operator scheduling problem is NP-hard, and three heuristic approaches are presented for its solution. These include a decomposition-based algorithm whose worst-case performance ratio is proved to lie in [1 − 1/e, 19/27], where e ≈ 2.718 is the base of the natural logarithm. Numerical experiments that use data from a large transit agency are presented, showing that the average performance of the decomposition algorithm is good when applied to real-world data. Journal: IIE Transactions Pages: 1132-1146 Issue: 11 Volume: 46 Year: 2014 Month: 11 X-DOI: 10.1080/0740817X.2014.882036 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.882036 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:11:p:1132-1146 Template-Type: ReDIF-Article 1.0 Author-Name: Douglas R. Bish Author-X-Name-First: Douglas R. Author-X-Name-Last: Bish Author-Name: Ebru K. Bish Author-X-Name-First: Ebru K. Author-X-Name-Last: Bish Author-Name: Ryan S. Xie Author-X-Name-First: Ryan S. Author-X-Name-Last: Xie Author-Name: Susan L. Stramer Author-X-Name-First: Susan L. Author-X-Name-Last: Stramer Title: Going beyond “same-for-all” testing of infectious agents in donated blood Abstract: Blood products, derived from donated blood, are essential for many medical treatments, and their safety, in terms of being free of Transfusion-Transmitted Infections (TTIs)—i.e., infectious agents that can be spread through their use—is crucial. However, blood screening tests are not perfectly reliable and may produce false-negative or false-positive results. Currently, blood donations are tested using a same-for-all testing scheme, where a single test set is used on all blood donations. This article studies differential testing schemes, which may involve multiple test sets, each applied to a randomly selected fraction of the donated blood. Thus, although each blood donation is still tested by a single test set, multiple test sets may be used by the Blood Center.
This problem is modeled within an optimization framework and a novel solution methodology is provided that allows important structural properties of such testing schemes to be characterized. It is shown that an optimal differential testing scheme consists of at most two test sets, and such a dual-test scheme can significantly reduce the TTI risk over the current same-for-all testing. The presented analysis leads to an efficient greedy algorithm that generates the optimal differential test sets for a range of budgets to inform the decision-maker (e.g., Blood Center). The differential model is extended to the case where different test sets can be used on sub-sets of donations defined by donation characteristics (e.g., donor demographics, seasonality, or region) that differentiate the sub-set’s TTI prevalence rates. The risk reduction potential of differential testing is quantified through two case studies that use published data from Sub-Saharan Africa and the United States. The study generates key insight into public policy decision making on the design of blood screening schemes. Journal: IIE Transactions Pages: 1147-1168 Issue: 11 Volume: 46 Year: 2014 Month: 11 X-DOI: 10.1080/0740817X.2014.882038 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.882038 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:11:p:1147-1168 Template-Type: ReDIF-Article 1.0 Author-Name: Arash Khatibi Author-X-Name-First: Arash Author-X-Name-Last: Khatibi Author-Name: Golshid Baharian Author-X-Name-First: Golshid Author-X-Name-Last: Baharian Author-Name: Estelle R. Kone Author-X-Name-First: Estelle R. Author-X-Name-Last: Kone Author-Name: Sheldon H. Jacobson Author-X-Name-First: Sheldon H. Author-X-Name-Last: Jacobson Title: The sequential stochastic assignment problem with random success rates Abstract: Given a finite number of workers with constant success rates, the Sequential Stochastic Assignment problem (SSAP) assigns the workers to sequentially arriving tasks with independent and identically distributed reward values, so as to maximize the total expected reward. This article studies the SSAP, with some (or all) workers having random success rates that are assumed to be independent but not necessarily identically distributed. Several assignment policies are proposed to address different levels of uncertainty in the success rates. Specifically, if the probability density functions of the random success rates are known, an optimal mixed policy is provided. If only the expected values of these rates are known, an optimal expectation policy is derived. Journal: IIE Transactions Pages: 1169-1180 Issue: 11 Volume: 46 Year: 2014 Month: 11 X-DOI: 10.1080/0740817X.2014.882530 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.882530 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:11:p:1169-1180 Template-Type: ReDIF-Article 1.0 Author-Name: Banu Lokman Author-X-Name-First: Banu Author-X-Name-Last: Lokman Author-Name: Murat Köksalan Author-X-Name-First: Murat Author-X-Name-Last: Köksalan Title: Finding highly preferred points for multi-objective integer programs Abstract: This article develops exact algorithms to generate all non-dominated points in a specified region of the criteria space in Multi-Objective Integer Programs (MOIPs). Typically, there are too many non-dominated points in large MOIPs and it is not practical to generate them all. 
Therefore, the problem of generating non-dominated points in the preferred region of the decision-maker is addressed. To define the preferred region, the non-dominated set is approximated using a hyper-surface. A procedure is then developed that finds a preferred hypothetical point on this surface and defines a preferred region around the hypothetical point. Once the preferred region is defined, all non-dominated points in that region are generated. The performance of the proposed approach is tested on multi-objective assignment, multi-objective knapsack, and multi-objective shortest path problems with three and four objectives. Computational results show that a small set of non-dominated points that contains highly preferred points is generated in a reasonable time. Journal: IIE Transactions Pages: 1181-1195 Issue: 11 Volume: 46 Year: 2014 Month: 11 X-DOI: 10.1080/0740817X.2014.882532 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.882532 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:11:p:1181-1195 Template-Type: ReDIF-Article 1.0 Author-Name: Nader Ebrahimi Author-X-Name-First: Nader Author-X-Name-Last: Ebrahimi Author-Name: Lei Hua Author-X-Name-First: Lei Author-X-Name-Last: Hua Title: Assessing the reliability of a nanocomponent by using copulas Abstract: A nanocomponent is a collection of atoms arranged in a definite pattern in order to achieve a desired function with an acceptable performance and reliability. The types of atoms, the manner in which they are arranged within the nanocomponent, and their inter-relationships have a direct effect on the nanocomponent’s reliability and its failure. This article proposes a general method using the notion of a copula to model the inter-relationships between the atoms of a nanocomponent and then assess the reliability or probability of failure of the nanocomponent under different structures. This approach is referred to as a “zoom-out” approach. The proposed method is flexible and easy to implement. This article considers a nanocomponent at a fixed moment of time, say the present moment, and assumes that the present status of a nanocomponent depends on the present status of its atoms. Journal: IIE Transactions Pages: 1196-1208 Issue: 11 Volume: 46 Year: 2014 Month: 11 X-DOI: 10.1080/0740817X.2014.883240 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.883240 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:11:p:1196-1208 Template-Type: ReDIF-Article 1.0 Author-Name: Benjamin J. Lobo Author-X-Name-First: Benjamin J. Author-X-Name-Last: Lobo Author-Name: James R. Wilson Author-X-Name-First: James R. Author-X-Name-Last: Wilson Author-Name: Kristin A. Thoney Author-X-Name-First: Kristin A. Author-X-Name-Last: Thoney Author-Name: Thom J. Hodgson Author-X-Name-First: Thom J. Author-X-Name-Last: Hodgson Author-Name: Russell E. King Author-X-Name-First: Russell E. Author-X-Name-Last: King Title: A practical method for evaluating worker allocations in large-scale dual resource constrained job shops Abstract: In two recent articles, Lobo et al. present algorithms for allocating workers to machine groups in a Dual Resource Constrained (DRC) job shop so as to minimize Lmax, the maximum job lateness.
Procedure LBSA delivers an effective lower bound on Lmax, while the heuristic HSP delivers an allocation whose associated schedule has a (usually) near-optimal Lmax value. To evaluate an HSP-based allocation’s quality in a given DRC job shop, the authors first compute the gap between HSP’s associated Lmax value and LBSA’s lower bound. Next they refer this gap to the distribution of a “quasi-optimality” gap that is generated as follows: (i) independent simulation replications of the given job shop are obtained by randomly sampling each job’s characteristics; and (ii) for each replication, the associated quasi-optimality gap is computed by enumerating all feasible allocations. Because step (ii) is computationally intractable in large-scale problems, this follow-up article formulates a revised step (ii) wherein each simulation invokes HSP2, an improved version of HSP, to yield an approximation to the quasi-optimality gap. Based on comprehensive experimentation, it is concluded that the HSP2-based distribution did not differ significantly from its enumeration-based counterpart; and the revised evaluation method was computationally tractable in practice. Two examples illustrate the use of the revised method. Journal: IIE Transactions Pages: 1209-1226 Issue: 11 Volume: 46 Year: 2014 Month: 11 X-DOI: 10.1080/0740817X.2014.892231 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.892231 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:11:p:1209-1226 Template-Type: ReDIF-Article 1.0 Author-Name: John A. Flory Author-X-Name-First: John A. Author-X-Name-Last: Flory Author-Name: Jeffrey P. Kharoufeh Author-X-Name-First: Jeffrey P. Author-X-Name-Last: Kharoufeh Author-Name: Nagi Z. Gebraeel Author-X-Name-First: Nagi Z. Author-X-Name-Last: Gebraeel Title: A switching diffusion model for lifetime estimation in randomly varying environments Abstract: This article presents a switching diffusion model for estimating the useful lifetime of a component that operates in a randomly varying environment. The component’s degradation process is unobservable; therefore, a signal of degradation is observed to estimate the environment parameters using a Markov chain Monte Carlo statistical procedure. These parameter estimates serve as key inputs to an analytical stochastic model that approximates the first passage time of the degradation process to a critical threshold. Several numerical examples involving simulated and real degradation data are presented to illustrate the quality of these approximations. Journal: IIE Transactions Pages: 1227-1241 Issue: 11 Volume: 46 Year: 2014 Month: 11 X-DOI: 10.1080/0740817X.2014.893400 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.893400 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:11:p:1227-1241 Template-Type: ReDIF-Article 1.0 Author-Name: Rachel Chen Author-X-Name-First: Rachel Author-X-Name-Last: Chen Author-Name: Lawrence Robinson Author-X-Name-First: Lawrence Author-X-Name-Last: Robinson Title: Optimal multiple-breakpoint quantity discount schedules for customers with heterogeneous demands: all-unit or incremental? Abstract: The supplier's problem of designing a quantity discount schedule is much more complicated when she faces customers who vary in size.
This article considers both all-unit and incremental discount schedules with multiple breakpoints that maximize the supplier's net savings. Specifically, we assume that the distribution of customers’ demand belongs to the general family of Bender's Pareto curves, which generalizes the well-known “80-20” rule from A-B-C inventory analysis. For any number of breakpoints, we prove that the supplier is at least as well off under the optimal incremental discount schedule as she would be under the optimal all-unit discount schedule. A numerical study shows that most of the savings can be captured by a modest number (five) of breakpoints and that the advantage of incremental schedules over all-unit schedules goes to zero as the number of breakpoints grows large. Journal: IIE Transactions Pages: 199-214 Issue: 3 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.568038 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.568038 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:3:p:199-214 Template-Type: ReDIF-Article 1.0 Author-Name: Shuang Chen Author-X-Name-First: Shuang Author-X-Name-Last: Chen Author-Name: Joseph Geunes Author-X-Name-First: Joseph Author-X-Name-Last: Geunes Author-Name: Ajay Mishra Author-X-Name-First: Ajay Author-X-Name-Last: Mishra Title: Algorithms for multi-item procurement planning with case packs Abstract: A distribution case pack contains an assortment of varying quantities of different stock keeping units (SKUs) packed in a single box or pallet, with a goal of reducing handling requirements in the distribution chain. This article studies case pack procurement planning problems that address the trade-off between reduced order handling costs and higher inventory-related costs under dynamic, deterministic demand. The properties of optimal solutions for special cases of the problem involving one and two case packs are first established, and these properties are used to solve the problem via dynamic programming. For the general model with multiple predefined case packs, which is shown to be strongly NP-hard, the exact approach is generalized to solve the problem in pseudopolynomial time for a fixed number of case packs. In addition, for large-size problems, the problem formulation is strengthened using valid inequalities and a family of heuristic solutions is designed. Computational tests show that these heuristic approaches perform very well compared with the commercial mixed-integer programming solver CPLEX. In addition to providing detailed methods for solving problems with deterministic demand, strategies for addressing problems with uncertain demands are discussed. Journal: IIE Transactions Pages: 181-198 Issue: 3 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.596510 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.596510 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:3:p:181-198 Template-Type: ReDIF-Article 1.0 Author-Name: Jeffrey Herrmann Author-X-Name-First: Jeffrey Author-X-Name-Last: Herrmann Title: Finding optimally balanced words for production planning and maintenance scheduling Abstract: Balanced words are useful for scheduling mixed-model, just-in-time assembly lines; planning preventive maintenance; managing inventory; and controlling asynchronous transfer mode networks.
This article considers the challenging problem of finding a balanced word (a periodic sequence) for a finite set of letters, when the desired densities of the letters in the alphabet are given. Two different measures of balance are considered. This article presents a branch-and-bound approach for finding optimally balanced words and presents the results of computational experiments to show how problem characteristics affect the time required to find an optimal solution. The optimal solutions are also used to evaluate the performance of an aggregation approach that combines letters with the same density, constructs a word for the aggregated alphabet, and then disaggregates this word into a feasible word for the original alphabet. Computational experiments show that using aggregation with the heuristics not only finds more balanced words but also reduces computational effort for larger instances. Journal: IIE Transactions Pages: 215-229 Issue: 3 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.602660 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.602660 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:3:p:215-229 Template-Type: ReDIF-Article 1.0 Author-Name: David Sinreich Author-X-Name-First: David Author-X-Name-Last: Sinreich Author-Name: Ola Jabali Author-X-Name-First: Ola Author-X-Name-Last: Jabali Author-Name: Nico Dellaert Author-X-Name-First: Nico Author-X-Name-Last: Dellaert Title: Reducing emergency department waiting times by adjusting work shifts considering patient visits to multiple care providers Abstract: Reducing Emergency Department (ED) overcrowding in the hope of improving the ED's operational efficiency and health care delivery ranks high on every health care decision maker's wish list. The current study concentrates on developing efficient work shift schedules that make the best use of current resource capacity with the objectives of reducing patient waiting time and leveling resource utilization as much as possible. The study introduces two iterative heuristic algorithms, which combine simulation and optimization models for scheduling the work shifts of the ED resources: physicians, nurses and technicians. The algorithms are distinctive because they account for patients being treated by multiple care providers, possibly over the course of several hours, often with interspersed waiting. In such instances, patient arrival time is not a good indicator of when the various care providers are needed. The algorithms were tested using a detailed simulation based on data from five general hospital EDs. A patient's Length of Stay (LOS) is measured as the time a patient spends in the ED until being admitted to the hospital or discharged. The first algorithm achieved an average reduction of between 20 and 45% in the total patient waiting time, which led to a reduction of between 7 and 17% in the combined average patient LOS. By allowing a restructure of the ED resource capacities, the second algorithm achieved an average reduction of between 20 and 64% in the total patient waiting time, leading to an 11 to 29% reduction in the combined average patient LOS. Journal: IIE Transactions Pages: 163-180 Issue: 3 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.609875 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.609875 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:uiiexx:v:44:y:2012:i:3:p:163-180 Template-Type: ReDIF-Article 1.0 Author-Name: Neil Geismar Author-X-Name-First: Neil Author-X-Name-Last: Geismar Author-Name: U. Manoj Author-X-Name-First: U. Author-X-Name-Last: Manoj Author-Name: Avanthi Sethi Author-X-Name-First: Avanthi Author-X-Name-Last: Sethi Author-Name: Chelliah Sriskandarajah Author-X-Name-First: Chelliah Author-X-Name-Last: Sriskandarajah Title: Scheduling robotic cells served by a dual-arm robot Abstract: This article assesses the benefits of implementing a dual-arm robot in a flow shop manufacturing cell. Such a robot has the ability to tend to (unload or load) two adjacent machines simultaneously. This significantly changes the analysis required to find sequences of robot actions that maximize a cell's throughput. For cells processing identical parts, optimal sequences are identified for two- and three-machine cells, and structural results are also derived for cells with an arbitrary number of machines. Cells processing different part-types are fully analyzed for the case of two-machine cells. For each case the productivity of single-arm and dual-arm robotic cells is compared. Journal: IIE Transactions Pages: 230-248 Issue: 3 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.618174 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.618174 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:3:p:230-248 Template-Type: ReDIF-Article 1.0 Author-Name: Dirk Briskorn Author-X-Name-First: Dirk Author-X-Name-Last: Briskorn Author-Name: Joseph Leung Author-X-Name-First: Joseph Author-X-Name-Last: Leung Author-Name: Michael Pinedo Author-X-Name-First: Michael Author-X-Name-Last: Pinedo Title: Robust scheduling on a single machine using time buffers Abstract: This article studies the scheduling of buffer times in a single-machine environment. A buffer time is a machine idle time in between two consecutive jobs and is a common tool to protect a schedule against disruptions such as machine failures. This article introduces new classes of robust machine scheduling problems. For an arbitrary non-preemptive scheduling problem 1|β|γ, three corresponding robustness problems are obtained: (i) maximize overall (weighted) buffer time while ensuring a given schedule's performance (with regard to the objective function γ); (ii) optimize the schedule's performance (with regard to γ) while ensuring a given minimum overall (weighted) buffer time; and (iii) find the trade-off curve regarding both objectives. The relationships between the different classes of problems and the corresponding underlying problems are outlined. Furthermore, the robust counterparts of three very basic single-machine scheduling problems are analyzed. Journal: IIE Transactions Pages: 383-398 Issue: 6 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.505123 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.505123 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:43:y:2011:i:6:p:383-398 Template-Type: ReDIF-Article 1.0 Author-Name: Chee-Chong Teo Author-X-Name-First: Chee-Chong Author-X-Name-Last: Teo Author-Name: Rohit Bhatnagar Author-X-Name-First: Rohit Author-X-Name-Last: Bhatnagar Author-Name: Stephen Graves Author-X-Name-First: Stephen Author-X-Name-Last: Graves Title: Setting planned lead times for a make-to-order production system with master schedule smoothing Abstract: This article considers a make-to-order manufacturing environment with fixed guaranteed delivery lead times and multiple product families, each with a stochastic demand process. The primary challenge in this environment is how to meet the quoted delivery times subject to fluctuating workload and capacity limits. The tactical planning parameters are considered, namely, the planning windows and planned lead times. The planning process that is modeled is the one in which the demand represents a dynamic input into the master production schedule. A planning window for each product family controls how the schedule of each product family is translated into a job release. It can be thought of as the slack that exists when the fixed quoted delivery lead time is longer than the total planned production lead time. Furthermore, the planned lead time of each station regulates the workflow within a multi-station shop. The model has underlying discrete time periods to allow the modeling of the planning process that is typically defined in time buckets; within each time period, an intra-period workflow that permits multiple job movements is modeled. The presented model characterizes key performance measures for the shop as functions of the planning windows and planned lead times. An optimization model is formulated that is able to determine the values of these planning parameters that minimize the relevant production-related costs. [Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for additional results of the simulation study in Section 6 of this manuscript.] Journal: IIE Transactions Pages: 399-414 Issue: 6 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.523765 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.523765 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:6:p:399-414 Template-Type: ReDIF-Article 1.0 Author-Name: John Buzacott Author-X-Name-First: John Author-X-Name-Last: Buzacott Author-Name: Houmin Yan Author-X-Name-First: Houmin Author-X-Name-Last: Yan Author-Name: Hanqin Zhang Author-X-Name-First: Hanqin Author-X-Name-Last: Zhang Title: Risk analysis of commitment–option contracts with forecast updates Abstract: The standard treatment of supply chain models largely focuses on the optimization of the expected value of a given cost or profit measure. Due to highly uncertain supply and demand conditions, the use of the expected objective measure may not be justified. This article studies a class of commitment–option supply contracts in a mean-variance framework. With structural properties established, it is shown that a mean-variance trade-off analysis with advanced reservation can be carried out. Moreover, it is indicated how the corresponding contract decisions differ from decisions for optimizing an expected objective value.
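The distinction the abstract draws between expected-value and mean-variance decisions is easy to reproduce in a stylized one-period commitment contract: commit Q units at a low price, cover any shortfall at a higher option price, and compare the maximizers of E[profit] and E[profit] − λ·Var[profit]. The demand distribution, prices, and λ below are assumptions for illustration; this is not the article's contract model.

```python
# Stylized mean-variance vs. expected-value commitment decision. All cost and
# demand parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
demand = rng.gamma(shape=4.0, scale=25.0, size=100_000)  # assumed demand model

def profit(q, d, price=10.0, commit_cost=6.0, option_cost=8.0):
    # Committed units cost commit_cost each; any shortfall d - min(q, d) is
    # covered at the higher option price, so no demand is lost in this sketch.
    return price * d - commit_cost * q - option_cost * (d - np.minimum(q, d))

def objective(q, lam):
    p = profit(q, demand)
    return p.mean() - lam * p.var()

qs = np.linspace(0.0, 300.0, 121)
ev_best = max(qs, key=lambda q: objective(q, 0.0))    # risk-neutral choice
mv_best = max(qs, key=lambda q: objective(q, 0.005))  # variance-averse choice
print(f"expected-value commitment {ev_best:.1f}, mean-variance commitment {mv_best:.1f}")
```

Even in this toy version the two objectives generally select different commitment levels, which is the qualitative point of the mean-variance analysis.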
Journal: IIE Transactions Pages: 415-431 Issue: 6 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.532851 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.532851 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:6:p:415-431 Template-Type: ReDIF-Article 1.0 Author-Name: Önder Bulut Author-X-Name-First: Önder Author-X-Name-Last: Bulut Author-Name: Mehmet Fadiloğlu Author-X-Name-First: Mehmet Author-X-Name-Last: Fadiloğlu Title: Production control and stock rationing for a make-to-stock system with parallel production channels Abstract: This article considers the problem of production control and stock rationing in a make-to-stock production system with lost sales, multiple servers in parallel production channels, and several customer classes. Demand streams are assumed to be independent stationary Poisson processes, and service times are assumed to be exponentially distributed. At decision epochs, the control specifies whether or not to increase the number of active servers in conjunction with the stock allocation decision. Previously placed production orders cannot be cancelled. The system is modeled as an M/M/s make-to-stock queue, and properties of the optimal cost function and of the optimal production and rationing policies are characterized. It is shown that the optimal production policy is a state-dependent base-stock policy, and the optimal rationing policy is of threshold type. Furthermore, it is shown that the rationing levels are non-increasing in the number of active channels. It is also shown that the optimal ordering policy transforms into a bang-bang type policy when the model is relaxed by allowing order cancellations. Another model with partial order-cancellation flexibility is provided to fill the gap between the no-flexibility and the full-flexibility models. The additional gain that the optimal policy provides over the suboptimal base-stock policy proposed in the literature is quantified along with the value of the flexibility to cancel production orders. Journal: IIE Transactions Pages: 432-450 Issue: 6 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.532853 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.532853 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:6:p:432-450 Template-Type: ReDIF-Article 1.0 Author-Name: Yusen Xia Author-X-Name-First: Yusen Author-X-Name-Last: Xia Author-Name: Karthik Ramachandran Author-X-Name-First: Karthik Author-X-Name-Last: Ramachandran Author-Name: Haresh Gurnani Author-X-Name-First: Haresh Author-X-Name-Last: Gurnani Title: Sharing demand and supply risk in a supply chain Abstract: This article studies two contract mechanisms to share demand and supply risk in a decentralized supply chain. In an option contract, the buyer reserves capacity with a supplier who guarantees delivery up to this limit. This insulates the buyer from any disruption risk, but the supplier faces both demand and supply risk. The second mechanism, the firm order contract, represents a conventional dyadic channel relationship where the buyer places a firm order and the supplier builds capacity but does not guarantee delivery if any disruption occurs. It is shown that the buyer's preference for using the different risk-sharing mechanisms switches back and forth as the probability of disruption increases.
Consequently, a supplier with a higher disruption risk may make higher expected profits compared with one with lower risk. In addition, the buyer may benefit from a higher wholesale price since it provides an incentive for the supplier to participate without requiring the buyer to use higher order quantities. Two operational mitigation strategies that can be used by the buyer to hedge against the disruption risk are considered: the use of an alternate reliable supplier during a shortage and the use of a direct subsidy for the supplier to improve reliability. It is found that the value of the reliable supplier depends on the type of contract with the unreliable supplier: interestingly, it is in the option contract—where supply is guaranteed—that the buyer almost always uses the reliable supplier as well. Also, it is found that offering a subsidy for reliability improvement acts as a strategic alternative to placing large pre-orders as a way to improve supplier operations. Journal: IIE Transactions Pages: 451-469 Issue: 6 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.541415 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.541415 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:6:p:451-469 Template-Type: ReDIF-Article 1.0 Author-Name: Wenpo Huang Author-X-Name-First: Wenpo Author-X-Name-Last: Huang Author-Name: Lianjie Shu Author-X-Name-First: Lianjie Author-X-Name-Last: Shu Author-Name: Yan Su Author-X-Name-First: Yan Author-X-Name-Last: Su Title: An accurate evaluation of adaptive exponentially weighted moving average schemes Abstract: As a natural generalization of the conventional Exponentially Weighted Moving Average (EWMA) monitoring scheme, the Adaptive EWMA (AEWMA) scheme has received a great deal of attention. The Markov chain method was originally used to approximate the average run length performance of the AEWMA chart; however, this method may suffer from slow convergence and unstable approximation due to kernel discontinuity. In order to overcome this issue, this article extends the piecewise collocation method and the Clenshaw–Curtis (CC) quadrature method to the evaluation of AEWMA chart performance. It is shown that both the collocation and CC quadrature methods are very competitive and can provide more accurate and faster approximations of the run length performance of AEWMA charts than the conventional Markov chain approach. Journal: IIE Transactions Pages: 457-469 Issue: 5 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.803642 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.803642 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:5:p:457-469 Template-Type: ReDIF-Article 1.0 Author-Name: Linkan Bian Author-X-Name-First: Linkan Author-X-Name-Last: Bian Author-Name: Nagi Gebraeel Author-X-Name-First: Nagi Author-X-Name-Last: Gebraeel Title: Stochastic modeling and real-time prognostics for multi-component systems with degradation rate interactions Abstract: Many conventional models that characterize the reliability of multi-component systems are developed on the premise that component failures in a system are independent. By contrast, this article offers a unique perspective on modeling component interdependencies and predicting their residual lifetimes.
Specifically, the article provides a stochastic modeling framework for characterizing interactions among the degradation processes of interdependent components of a given system. This is achieved by modeling the behaviors of condition-/degradation-based sensor signals that are associated with each component. The proposed model is also used to estimate the residual lifetime distributions of each component. In addition, a Bayesian framework is used to update the predicted residual lifetime distributions using sensor signals that are correlated with the real-time dynamics associated with the interactions. The robustness and prediction accuracy of the methodology are investigated through a comprehensive simulation study that compares the performance of the proposed model to a counterpart benchmark that does not account for degradation interactions. Journal: IIE Transactions Pages: 470-482 Issue: 5 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.812269 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.812269 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:5:p:470-482 Template-Type: ReDIF-Article 1.0 Author-Name: Koosha Rafiee Author-X-Name-First: Koosha Author-X-Name-Last: Rafiee Author-Name: Qianmei Feng Author-X-Name-First: Qianmei Author-X-Name-Last: Feng Author-Name: David Coit Author-X-Name-First: David Author-X-Name-Last: Coit Title: Reliability modeling for dependent competing failure processes with changing degradation rate Abstract: This article proposes reliability models for devices subject to dependent competing failure processes of degradation and random shocks with a changing degradation rate according to particular random shock patterns. The two dependent failure processes are soft failure due to continuous degradation, in addition to sudden degradation increases caused by random shocks, and hard failure due to the same shock process. In complex devices such as Micro-Electro-Mechanical Systems the degradation rate can change when the system becomes more susceptible to fatigue and deteriorates faster, as a result of withstanding shocks. This article considers four different shock patterns that can increase the degradation rate: (i) generalized extreme shock model: when the first shock above a critical value is recorded; (ii) generalized δ-shock model: when the inter-arrival time of two sequential shocks is less than a threshold δ; (iii) generalized m-shock model: when m shocks greater than a critical level are recorded; and (iv) generalized run shock model: when there is a run of n consecutive shocks that are greater than a critical value. Numerical examples are presented to illustrate the developed reliability models, along with sensitivity analysis. Journal: IIE Transactions Pages: 483-496 Issue: 5 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.812270 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.812270 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:uiiexx:v:46:y:2014:i:5:p:483-496 Template-Type: ReDIF-Article 1.0 Author-Name: Wenzhen Huang Author-X-Name-First: Wenzhen Author-X-Name-Last: Huang Author-Name: Jinya Liu Author-X-Name-First: Jinya Author-X-Name-Last: Liu Author-Name: Vijya Chalivendra Author-X-Name-First: Vijya Author-X-Name-Last: Chalivendra Author-Name: Darek Ceglarek Author-X-Name-First: Darek Author-X-Name-Last: Ceglarek Author-Name: Zhenyu Kong Author-X-Name-First: Zhenyu Author-X-Name-Last: Kong Author-Name: Yingqing Zhou Author-X-Name-First: Yingqing Author-X-Name-Last: Zhou Title: Statistical modal analysis for variation characterization and application in manufacturing quality control Abstract: A Statistical Modal Analysis (SMA) methodology is developed for geometric variation characterization, modeling, and applications in manufacturing quality monitoring and control. The SMA decomposes a variation (spatial) signal into modes, revealing the fingerprints engraved on the feature in manufacturing with a few truncated modes. A discrete cosine transformation approach is adopted for mode decomposition. Statistical methods are used for model estimation, mode truncation, and determining the sampling strategy. The emphasis is on implementation and application aspects, including quality monitoring, diagnosis, and process capability study in manufacturing. Case studies are conducted to demonstrate application examples in modeling, diagnosis, and process capability analysis. Journal: IIE Transactions Pages: 497-511 Issue: 5 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.814928 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.814928 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:5:p:497-511 Template-Type: ReDIF-Article 1.0 Author-Name: Shuai Huang Author-X-Name-First: Shuai Author-X-Name-Last: Huang Author-Name: Jing Li Author-X-Name-First: Jing Author-X-Name-Last: Li Author-Name: Gerri Lamb Author-X-Name-First: Gerri Author-X-Name-Last: Lamb Author-Name: Madeline Schmitt Author-X-Name-First: Madeline Author-X-Name-Last: Schmitt Author-Name: John Fowler Author-X-Name-First: John Author-X-Name-Last: Fowler Title: Multiple data sources fusion for enterprise quality improvement by a multilevel latent response model Abstract: Quality improvement of an enterprise needs a model to link multiple data sources, including the independent and interdependent activities of individuals in the enterprise, enterprise infrastructure, climate, and administration strategies, as well as the quality outcomes of the enterprise. This is a challenging problem because the data are at two levels—i.e., the individual and enterprise levels—and each individual's contribution to the enterprise quality outcome is usually not explicitly known. These challenges make general regression analysis and conventional multilevel models non-applicable to the problem. This article proposes a new multilevel model that treats each individual's contribution to the enterprise quality outcome as a latent variable. Under this new formulation, an algorithm is developed to estimate the model parameters, which integrates the Fisher scoring algorithm and generalized least squares estimation. Extensive simulation studies are performed that demonstrate the superiority of the proposed model over the competing approach in terms of the statistical properties in parameter estimation.
The proposed model is applied to a real-world nursing quality improvement study and helps identify key nursing activities and unit (a hospital unit is an enterprise in this context) quality-improving measures that help reduce patient falls. Journal: IIE Transactions Pages: 512-525 Issue: 5 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.849829 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.849829 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:5:p:512-525 Template-Type: ReDIF-Article 1.0 Author-Name: Jian Li Author-X-Name-First: Jian Author-X-Name-Last: Li Author-Name: Fugee Tsung Author-X-Name-First: Fugee Author-X-Name-Last: Tsung Author-Name: Changliang Zou Author-X-Name-First: Changliang Author-X-Name-Last: Zou Title: Multivariate binomial/multinomial control chart Abstract: This article considers statistical process control for multivariate categorical processes. In particular, there is a focus on multivariate binomial and multivariate multinomial processes. More and more real applications involve categorical quality characteristics, which cannot be measured on a continuous scale. These characteristic factors usually correlate with each other, indicating a need for multivariate charting techniques. However, there is a scarcity of research on monitoring multivariate categorical data, and most existing methods have deficiencies that limit their robustness. This article reports the use of log-linear models for characterizing the relationship among categorical factors that are adapted into a framework of multivariate binomial and multivariate multinomial distributions. A Phase II control chart is proposed that is robust in efficiently detecting various shifts, especially those in interaction effects representing the dependence among factors. Numerical simulations and a real data example demonstrate the effectiveness of the chart. Journal: IIE Transactions Pages: 526-542 Issue: 5 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.849830 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.849830 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:5:p:526-542 Template-Type: ReDIF-Article 1.0 Author-Name: Arash Pourhabib Author-X-Name-First: Arash Author-X-Name-Last: Pourhabib Author-Name: Faming Liang Author-X-Name-First: Faming Author-X-Name-Last: Liang Author-Name: Yu Ding Author-X-Name-First: Yu Author-X-Name-Last: Ding Title: Bayesian site selection for fast Gaussian process regression Abstract: Gaussian Process (GP) regression is a popular method in the field of machine learning and computer experiment designs; however, its ability to handle large data sets is hindered by the computational difficulty in inverting a large covariance matrix. Likelihood approximation methods were developed as a fast GP approximation, thereby reducing the computation cost of GP regression by utilizing a much smaller set of unobserved latent variables called pseudo points. This article reports a further improvement to the likelihood approximation methods by simultaneously deciding both the number and locations of the pseudo points. The proposed approach is a Bayesian site selection method where both the number and locations of the pseudo inputs are parameters in the model, and the Bayesian model is solved using a reversible jump Markov chain Monte Carlo technique.
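For readers unfamiliar with pseudo-point likelihood approximations, the sketch below shows the simplest member of that family (a subset-of-regressors predictor) with the pseudo inputs fixed on a grid; the article's contribution, choosing both the number and the locations of the pseudo points by reversible jump MCMC, is deliberately not reproduced. The kernel, data, and pseudo-point placement are assumptions for illustration.

```python
# Minimal subset-of-regressors sparse GP: predictions cost O(n m^2) with m
# pseudo points instead of O(n^3). Everything here is illustrative.
import numpy as np

def rbf(a, b, ls=0.5):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 5.0, 200))        # training inputs
y = np.sin(2.0 * x) + 0.1 * rng.standard_normal(200)
z = np.linspace(0.0, 5.0, 10)                  # fixed pseudo (inducing) inputs
xs = np.linspace(0.0, 5.0, 50)                 # test inputs

sigma2 = 0.01
Kzz, Kzx, Ksz = rbf(z, z), rbf(z, x), rbf(xs, z)
A = Kzz + (Kzx @ Kzx.T) / sigma2               # only an m x m system to solve
mean = Ksz @ np.linalg.solve(A, Kzx @ y) / sigma2  # predictive mean at xs
print(f"max abs error vs. truth: {np.max(np.abs(mean - np.sin(2.0 * xs))):.3f}")
```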
Through a number of simulated and real data sets, it is demonstrated that with appropriate priors chosen, the Bayesian site selection method can produce a good balance between computation time and prediction accuracy: it is fast enough to handle large data sets that a full GP is unable to handle, and it improves the prediction accuracy, quite often remarkably, compared with the existing likelihood approximations. Journal: IIE Transactions Pages: 543-555 Issue: 5 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.849833 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.849833 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:5:p:543-555 Template-Type: ReDIF-Article 1.0 Author-Name: Amit Shinde Author-X-Name-First: Amit Author-X-Name-Last: Shinde Author-Name: Anshuman Sahu Author-X-Name-First: Anshuman Author-X-Name-Last: Sahu Author-Name: Daniel Apley Author-X-Name-First: Daniel Author-X-Name-Last: Apley Author-Name: George Runger Author-X-Name-First: George Author-X-Name-Last: Runger Title: Preimages for variation patterns from kernel PCA and bagging Abstract: Manufacturing industries collect massive amounts of multivariate measurements through automated inspection processes. Noisy measurements and high-dimensional, irrelevant features make it difficult to identify useful patterns in the data. Principal component analysis provides linear summaries of datasets with fewer latent variables. Kernel Principal Component Analysis (KPCA), however, identifies nonlinear patterns. One challenge in KPCA is to inverse-map the denoised signal from a high-dimensional feature space into its preimage in input space to visualize the nonlinear variation sources. However, such an inverse map is not always defined. This article provides a new meta-method, applicable to any KPCA algorithm, to approximate the preimage. It improves upon previous work, where a strong assumption was the availability of noise-free training data, which is problematic for applications such as manufacturing variation analysis. To attenuate noise in kernel subspace estimation, the final preimage is estimated as the average from bagged samples drawn from the original dataset. The improvement is most pronounced when the parameters differ from those that minimize the error rate. Consequently, the proposed approach improves the robustness of any base KPCA algorithm. The usefulness of the proposed method is demonstrated by analyzing a classic handwritten digit dataset and a face dataset. Significant improvement over the existing methods is observed. Journal: IIE Transactions Pages: 429-456 Issue: 5 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.849836 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.849836 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:5:p:429-456 Template-Type: ReDIF-Article 1.0 Author-Name: G. Mincsovics Author-X-Name-First: G. Author-X-Name-Last: Mincsovics Author-Name: N. Dellaert Author-X-Name-First: N. Author-X-Name-Last: Dellaert Title: Workload-dependent capacity control in production-to-order systems Abstract: The development of job intermediation and the increasing use of the Internet allow companies to carry out ever quicker capacity changes. In many cases, capacity can be adapted rapidly to the actual workload, which is especially important in production-to-order systems, where inventory cannot be used as a buffer for demand variation.
A set of Markov chain models is introduced that can represent workload-dependent capacity control policies. Two analytical approaches to evaluate the policies' due date performance based on a stationary analysis are presented. One provides an explicit expression for the throughput time distribution; the other is a fixed-point iteration method that calculates the moments of the throughput time. Due date performance and the costs of capacity, capacity switching, and lost sales are compared to select optimal policies. Insights into situations in which a workload-dependent policy can be beneficial are presented. The results can be used by manufacturing and service industries when establishing a static policy for dynamic capacity planning. [Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplementary resource: Appendix] Journal: IIE Transactions Pages: 853-865 Issue: 10 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802369391 File-URL: http://hdl.handle.net/10.1080/07408170802369391 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:10:p:853-865 Template-Type: ReDIF-Article 1.0 Author-Name: Ali Koc Author-X-Name-First: Ali Author-X-Name-Last: Koc Author-Name: Ihsan Sabuncuoglu Author-X-Name-First: Ihsan Author-X-Name-Last: Sabuncuoglu Author-Name: Erdal Erel Author-X-Name-First: Erdal Author-X-Name-Last: Erel Title: Two exact formulations for disassembly line balancing problems with task precedence diagram construction using an AND/OR graph Abstract: In this paper, the disassembly line balancing problem, which involves determining a line design in which used products are completely disassembled to obtain useable components in a cost-effective manner, is studied. Because of the growing demand for a cleaner environment, this problem has become an important issue in reverse manufacturing. In this study, two exact formulations are developed that utilize an AND/OR Graph (AOG) as the main input to ensure the feasibility of the precedence relations among the tasks. It is also shown that traditional task precedence diagrams can be derived from the AOG of a given product structure. This procedure leads to considerably better solutions of the traditional assembly line balancing problems; it may alter the approach taken by previous researchers in this area. Journal: IIE Transactions Pages: 866-881 Issue: 10 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802510390 File-URL: http://hdl.handle.net/10.1080/07408170802510390 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:10:p:866-881 Template-Type: ReDIF-Article 1.0 Author-Name: Marcel van Vuuren Author-X-Name-First: Marcel Author-X-Name-Last: van Vuuren Author-Name: Ivo Adan Author-X-Name-First: Ivo Author-X-Name-Last: Adan Title: Performance analysis of tandem queues with small buffers Abstract: An approximation for the performance analysis of single-server tandem queues with small buffers and generally distributed service times is presented. The approximation is based on the decomposition of the tandem queue into subsystems, the parameters of which are determined by an iterative algorithm. By employing a detailed description of the service process of each subsystem, it proved possible to obtain an accurate approximation of performance characteristics such as throughput and mean sojourn time.
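Decomposition approximations of this kind are usually judged against simulated throughput, and for a saturated two-station line with blocking after service that simulation reduces to a short recursion, as sketched below. The service-time distribution and buffer size are illustrative assumptions, and the decomposition iteration itself is not reproduced.

```python
# Saturated two-station tandem line with a finite intermediate buffer and
# blocking after service: job i can leave station 1 only once job
# i - (buffer_size + 1) has departed station 2. Parameters are illustrative.
import numpy as np

def tandem_throughput(n=100_000, buffer_size=2, seed=4):
    rng = np.random.default_rng(seed)
    s1 = rng.gamma(2.0, 0.5, n)   # station-1 service times (mean 1, SCV 0.5)
    s2 = rng.gamma(2.0, 0.5, n)   # station-2 service times (mean 1, SCV 0.5)
    d1 = np.zeros(n)              # departure epochs from station 1
    d2 = np.zeros(n)              # departure epochs from station 2
    for i in range(n):
        finish1 = (d1[i - 1] if i else 0.0) + s1[i]       # service done at 1
        blocked_until = d2[i - buffer_size - 1] if i > buffer_size else 0.0
        d1[i] = max(finish1, blocked_until)               # wait for buffer space
        d2[i] = max(d1[i], d2[i - 1] if i else 0.0) + s2[i]
    return n / d2[-1]

print(f"throughput with buffer 2: {tandem_throughput():.3f} jobs per unit time")
```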
The proposed technique significantly outperforms existing methods. Journal: IIE Transactions Pages: 882-892 Issue: 10 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170902806862 File-URL: http://hdl.handle.net/10.1080/07408170902806862 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:10:p:882-892 Template-Type: ReDIF-Article 1.0 Author-Name: Jean-Philippe Loose Author-X-Name-First: Jean-Philippe Author-X-Name-Last: Loose Author-Name: Nan Chen Author-X-Name-First: Nan Author-X-Name-Last: Chen Author-Name: Shiyu Zhou Author-X-Name-First: Shiyu Author-X-Name-Last: Zhou Title: Surrogate modeling of dimensional variation propagation in multistage assembly processes Abstract: In assembly process control and design optimization, it is critical to establish a mathematical model that describes the relationship between the dimensional quality of the final product and the various process parameters (e.g., the fixture layout and locator position deviation). This article presents a surrogate modeling methodology for multistage assembly processes to characterize the relationship between fixture layout and product dimensional quality. The mathematical structure of the model is derived from a physical analysis based on first principles, and then the parameters of the model are identified using data from computer experiments. The resulting surrogate model can enable fixture layout optimization in process planning. A comprehensive case study of a multistage assembly process is also presented to demonstrate the effectiveness and high fidelity of the developed method. Journal: IIE Transactions Pages: 893-904 Issue: 10 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170902906027 File-URL: http://hdl.handle.net/10.1080/07408170902906027 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:10:p:893-904 Template-Type: ReDIF-Article 1.0 Author-Name: Konstantin Kogan Author-X-Name-First: Konstantin Author-X-Name-Last: Kogan Title: Production control under uncertainty: Closed-loop versus open-loop approach Abstract: A basic production system facing two types of uncertainty (shocks), multiplicative and additive, is considered. The former is due to a stochastic yield; the latter to a stochastic demand. The objective of the production system is to choose a production rate (control) that minimizes expected inventory and production costs. Stochastic production control is typically considered the prerogative of closed-loop or on-line approaches. However, in certain manufacturing systems, information about inventory levels may at best be imprecise. Moreover, the production rate cannot be instantaneously adjusted in response to inventory updates. This warrants the exploration of an open-loop or off-line control methodology. In the comparative analysis of the two approaches presented in this paper, the probability distribution of inventories is characterized, the damage associated with the inability to adjust production is assessed, and the conditions under which the gap between the two approaches is insignificant are highlighted. [Supplementary materials are available for this article.
Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix] Journal: IIE Transactions Pages: 905-915 Issue: 10 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170902973944 File-URL: http://hdl.handle.net/10.1080/07408170902973944 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:10:p:905-915 Template-Type: ReDIF-Article 1.0 Author-Name: Linda Zhang Author-X-Name-First: Linda Author-X-Name-Last: Zhang Author-Name: Brian Rodrigues Author-X-Name-First: Brian Author-X-Name-Last: Rodrigues Title: A tree unification approach to constructing generic processes Abstract: In dealing with product diversity, manufacturing companies strive to maintain stable production by eliminating variations in production processes. In this respect, planning process families in relation to product families to achieve production stability is a promising approach. In this paper, the generic processes underlying process families and their construction are studied. Such generic processes entail well-structured mechanisms that can help companies develop similar processes to fulfill a diversity of customized products. In view of the fact that a production process is commonly represented by a tree, in particular, a binary tree, an approach based on tree unification to construct generic processes from large volumes of existing production data is proposed. This approach is tested using an industrial example involving electronic products, and the derived generic processes have been verified by the users. [Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix] Journal: IIE Transactions Pages: 916-929 Issue: 10 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170903026049 File-URL: http://hdl.handle.net/10.1080/07408170903026049 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:10:p:916-929 Template-Type: ReDIF-Article 1.0 Author-Name: Qi Liu Author-X-Name-First: Qi Author-X-Name-Last: Liu Author-Name: Russell Meller Author-X-Name-First: Russell Author-X-Name-Last: Meller Title: Erratum for “A sequence-pair representation and MIP-model-based heuristic for the facility layout problem with rectangular departments” Journal: IIE Transactions Pages: 930-930 Issue: 10 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170903070633 File-URL: http://hdl.handle.net/10.1080/07408170903070633 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:10:p:930-930 Template-Type: ReDIF-Article 1.0 Author-Name: Eric Beier Author-X-Name-First: Eric Author-X-Name-Last: Beier Author-Name: Saravanan Venkatachalam Author-X-Name-First: Saravanan Author-X-Name-Last: Venkatachalam Author-Name: V. Jorge Leon Author-X-Name-First: V. Jorge Author-X-Name-Last: Leon Author-Name: Lewis Ntaimo Author-X-Name-First: Lewis Author-X-Name-Last: Ntaimo Title: Nodal decomposition–coordination for stochastic programs with private information restrictions Abstract: We present a nodal decomposition–coordination method for stochastic programs with private data (information) restrictions. We consider coordinated systems where a single optimal or close-to-optimal solution is desired.
However, because of competitive issues, confidentiality requirements, incompatible database issues, or other complicating factors, no global view of the system is possible. In our iterative methodology, each entity in the cooperation forms its own nodal deterministic or stochastic program. We use Lagrangian relaxation and subgradient optimization techniques to facilitate negotiation between the nodal decisions in the system without any one entity gaining access to the private information from other nodes. We perform a computational study on supply chain inventory coordination problem instances. The results demonstrate that the new methodology can obtain solution values that are close to optimal within a stipulated time without violating private information restrictions. The results also show that the stochastic solutions outperform the corresponding expected value solutions. Journal: IIE Transactions Pages: 283-297 Issue: 3 Volume: 48 Year: 2016 Month: 3 X-DOI: 10.1080/0740817X.2015.1055390 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1055390 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:3:p:283-297 Template-Type: ReDIF-Article 1.0 Author-Name: Rafay Ishfaq Author-X-Name-First: Rafay Author-X-Name-Last: Ishfaq Author-Name: Uzma Raja Author-X-Name-First: Uzma Author-X-Name-Last: Raja Author-Name: Mark Clark Author-X-Name-First: Mark Author-X-Name-Last: Clark Title: Fuel-switch decisions in the electric power industry under environmental regulations Abstract: The changing landscape of environmental regulations, discovery of new domestic sources of natural gas, and the economics of energy markets have resulted in a major shift in the choice of fuel for electric power generation. This research focuses on the relevant factors that impact a power plant's decision to switch fuel from coal to natural gas and the timing of such decisions. The factors studied in this article include capital costs of plant replacement, public policy, associated monetary penalties, availability and access to gas supply networks, and the option of plant retirement. These factors are evaluated in a case study of power plants in the Southeastern United States, using mathematical programming and logistic regression models. The results show that environmental regulations can be effective if the monetary penalties imposed by such regulations are set at an appropriate level, with respect to plant replacement costs. Although it is economical for large-size (power generation capacity > 600 MW) coal-fired power plants to switch fuel to natural gas, plant retirement is more suitable for smaller-sized plants. This article also presents a multi-logit decision model that can help identify the best time for a power plant to switch fuel and whether such a decision is useful in the context of plant replacement costs, fuel costs, electric power decommission limits, and environmental penalties. Journal: IIE Transactions Pages: 205-219 Issue: 3 Volume: 48 Year: 2016 Month: 3 X-DOI: 10.1080/0740817X.2015.1056391 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1056391 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:48:y:2016:i:3:p:205-219 Template-Type: ReDIF-Article 1.0 Author-Name: Chun-Hung Cheng Author-X-Name-First: Chun-Hung Author-X-Name-Last: Cheng Author-Name: Yong-Hong Kuo Author-X-Name-First: Yong-Hong Author-X-Name-Last: Kuo Title: A dissimilarities balance model for a multi-skilled multi-location food safety inspector scheduling problem Abstract: In this work, we examine a staff scheduling problem in a governmental food safety center that is responsible for the surveillance of imported food at an international airport. In addition to the fact that the staff have different levels of efficiency and different preferences for work shifts, the Operations Manager of the food safety center would like to balance the dissimilarities of workers in order to provide unbiased work schedules for staff members. We adopt a two-phase approach, where the first phase is to schedule the work shifts of food safety inspectors (including rest days and shift types) with schedule fairness and staff preference taken into account, and the second phase is to best fit them to tasks in terms of skill matches and to create diversity in team formations. We also provide polyhedral results and devise valid inequalities for the two formulations. For the first-phase problem, we relax some constraints of the fairness criteria to reduce the problem size and, hence, the computational effort. We derive an upper bound for the objective value of the relaxation and provide computational results to show that the solutions devised from our proposed methodology are of good quality. For the second-phase problem, we develop a shift-by-shift assignment heuristic to obtain an upper bound for the maximum number of times any pair of workers is assigned to the same shift at the same location. We propose an enumeration algorithm that solves the problem for fixed values of this number until an optimality condition holds or the problem is infeasible. Computational results show that our proposed approach can produce solutions of good quality in a much shorter period of time, compared with a standalone commercial solver. Journal: IIE Transactions Pages: 235-251 Issue: 3 Volume: 48 Year: 2016 Month: 3 X-DOI: 10.1080/0740817X.2015.1057303 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1057303 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:3:p:235-251 Template-Type: ReDIF-Article 1.0 Author-Name: Irem Sengul Orgut Author-X-Name-First: Irem Sengul Author-X-Name-Last: Orgut Author-Name: Julie Ivy Author-X-Name-First: Julie Author-X-Name-Last: Ivy Author-Name: Reha Uzsoy Author-X-Name-First: Reha Author-X-Name-Last: Uzsoy Author-Name: James R. Wilson Author-X-Name-First: James R. Author-X-Name-Last: Wilson Title: Modeling for the equitable and effective distribution of donated food under capacity constraints Abstract: Mathematical models are presented and analyzed to facilitate a food bank's equitable and effective distribution of donated food among a population at risk for hunger. Typically exceeding the donated supply, demand is proportional to the poverty population within the food bank's service area. The food bank seeks to ensure a perfectly equitable distribution of food; i.e., each county in the service area should receive a food allocation that is exactly proportional to the county's demand such that no county is at a disadvantage compared to any other county.
This objective often conflicts with the goal of maximizing effectiveness by minimizing the amount of undistributed food. Deterministic network-flow models are developed to minimize the amount of undistributed food while maintaining a user-specified upper bound on the absolute deviation of each county from a perfectly equitable distribution. An extension of this model identifies optimal policies for the allocation of additional receiving capacity to counties in the service area. A numerical study using data from a large North Carolina food bank illustrates the uses of the models. A probabilistic sensitivity analysis reveals the effect on the models' optimal solutions arising from uncertainty in the receiving capacities of the counties in the service area. Journal: IIE Transactions Pages: 252-266 Issue: 3 Volume: 48 Year: 2016 Month: 3 X-DOI: 10.1080/0740817X.2015.1063792 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1063792 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:3:p:252-266 Template-Type: ReDIF-Article 1.0 Author-Name: Wenting Pan Author-X-Name-First: Wenting Author-X-Name-Last: Pan Author-Name: Kut C. So Author-X-Name-First: Kut C. Author-X-Name-Last: So Title: Component procurement strategies in decentralized assembly systems under supply uncertainty Abstract: In this article we analyze the interactions among the assembler and two component suppliers in their procurement decisions under a Vendor-Managed Inventory (VMI) contract. Under the VMI contract, the assembler first offers a unit price for each component and will pay component suppliers only for the amounts used to meet the actual demand. The two independent component suppliers then decide on the production quantities of their individual components before the actual demand is realized. We assume that one of the component suppliers has uncertainty in the supply process, in which the actual number of components available for assembly is equal to a random fraction of the production quantity. Under the assembly structure, both component suppliers need to take into account the underlying supply uncertainty in deciding their individual production quantities, as both components are required for the assembly of the final product. We first analyze the special case under deterministic demand and then extend our analysis to the general case under stochastic demand. We derive the optimal component prices offered by the assembler and the corresponding equilibrium production quantities of the component suppliers. Journal: IIE Transactions Pages: 267-282 Issue: 3 Volume: 48 Year: 2016 Month: 3 X-DOI: 10.1080/0740817X.2015.1063793 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1063793 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:3:p:267-282 Template-Type: ReDIF-Article 1.0 Author-Name: İhsan Yanıkoğlu Author-X-Name-First: İhsan Author-X-Name-Last: Yanıkoğlu Author-Name: Dick den Hertog Author-X-Name-First: Dick Author-X-Name-Last: den Hertog Author-Name: Jack P. C. Kleijnen Author-X-Name-First: Jack P. C. Author-X-Name-Last: Kleijnen Title: Robust dual-response optimization Abstract: This article presents a robust optimization reformulation of the dual-response problem developed in response surface methodology. The dual-response approach fits separate models for the mean and the variance and analyzes these two models in a mathematical optimization setting. 
We use metamodels estimated from experiments with both controllable and environmental inputs. These experiments may be performed with either real or simulated systems; we focus on simulation experiments. For the environmental inputs, classic approaches assume known means, variances, or covariances and sometimes even a known distribution. We, however, develop a method that uses only experimental data, so it does not need a known probability distribution. Moreover, our approach yields a solution that is robust against the ambiguity in the probability distribution. We also propose an adjustable robust optimization method that enables adjusting the values of the controllable factors after observing the values of the environmental factors. We illustrate our novel methods through several numerical examples, which demonstrate their effectiveness. Journal: IIE Transactions Pages: 298-312 Issue: 3 Volume: 48 Year: 2016 Month: 3 X-DOI: 10.1080/0740817X.2015.1067737 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1067737 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:3:p:298-312 Template-Type: ReDIF-Article 1.0 Author-Name: Hugh R. Medal Author-X-Name-First: Hugh R. Author-X-Name-Last: Medal Author-Name: Edward A. Pohl Author-X-Name-First: Edward A. Author-X-Name-Last: Pohl Author-Name: Manuel D. Rossetti Author-X-Name-First: Manuel D. Author-X-Name-Last: Rossetti Title: Allocating Protection Resources to Facilities When the Effect of Protection is Uncertain Abstract: We study a new facility protection problem in which one must allocate scarce protection resources to a set of facilities given that allocating resources to a facility only has a probabilistic effect on the facility’s post-disruption capacity. This study seeks to test three common assumptions made in the literature on modeling infrastructure systems subject to disruptions: 1) perfect protection, i.e., protecting an element makes it fail-proof; 2) binary protection, i.e., an element is either fully protected or unprotected; and 3) binary state, i.e., disrupted elements are either fully operational or non-operational. We model this facility protection problem as a two-stage stochastic program with endogenous uncertainty. Because this stochastic program is non-convex, we present a greedy algorithm and show that it has a worst-case performance guarantee of 0.63. However, empirical results indicate that the average performance is much better. In addition, experimental results indicate that the mean-value version of this model, in which parameters are set to their mean values, performs close to optimal. Results also indicate that the perfect and binary protection assumptions together significantly affect the performance of a model. On the other hand, the binary state assumption was found to have a smaller effect. Journal: IIE Transactions Pages: 220-234 Issue: 3 Volume: 48 Year: 2016 Month: 3 X-DOI: 10.1080/0740817X.2015.1078013 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1078013 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:48:y:2016:i:3:p:220-234 Template-Type: ReDIF-Article 1.0 Author-Name: Diwakar Gupta Author-X-Name-First: Diwakar Author-X-Name-Last: Gupta Author-Name: Fei Li Author-X-Name-First: Fei Author-X-Name-Last: Li Title: Reserve driver scheduling Abstract: Transit agencies use reserve drivers to cover open work that arises from planned and unplanned time off, equipment breakdowns, weather, and special events. Work assignment decisions must be made sequentially without information about future job requests, a driver’s earlier assignment may not be interrupted to accommodate a new job (no pre-emption), and the scheduler may need to select a particular driver when multiple drivers can perform a job. Motivated by this instance of the interval scheduling problem, we propose a randomized algorithm that carries a performance guarantee relative to the best offline solution and simultaneously performs better than any deterministic algorithm. A key objective of this article is to develop an algorithm that performs well in both average and worst-case scenarios. For this reason, our approach includes discretionary parameters that allow the user to achieve a balance between a myopic approach (accept all jobs that can be scheduled) and a strategic approach (consider accepting only if jobs are longer than a certain threshold). We test our algorithm on data from a large transit agency and show that it performs well relative to the commonly used myopic approach. Although this article is motivated by a transit industry application, the approach we develop is applicable in a whole host of applications involving on-demand processing of jobs. Supplementary materials are available for this article. Go to the publisher’s online edition of IIE Transactions for datasets, additional tables, detailed proofs, etc. Journal: IIE Transactions Pages: 193-204 Issue: 3 Volume: 48 Year: 2016 Month: 3 X-DOI: 10.1080/0740817X.2015.1078016 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1078016 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:3:p:193-204 Template-Type: ReDIF-Article 1.0 Author-Name: Nurcin Celik Author-X-Name-First: Nurcin Author-X-Name-Last: Celik Author-Name: Seungho Lee Author-X-Name-First: Seungho Author-X-Name-Last: Lee Author-Name: Karthik Vasudevan Author-X-Name-First: Karthik Author-X-Name-Last: Vasudevan Author-Name: Young-Jun Son Author-X-Name-First: Young-Jun Author-X-Name-Last: Son Title: DDDAS-based multi-fidelity simulation framework for supply chain systems Abstract: Dynamic-Data-Driven Application Systems (DDDAS) is a new modeling and control paradigm that adaptively adjusts the fidelity of a simulation model. The fidelity of the simulation model is adjusted against available computational resources by incorporating dynamic data into the executing model, which then steers the measurement process for selective data update. To this end, a comprehensive system architecture and methodologies are first proposed, where the components include a real-time DDDAS simulation, grid modules, a web service communication server, databases, various sensors, and a real system. Abnormality detection, fidelity selection, fidelity assignment, and prediction and task generation are enabled through the embedded algorithms developed in this work. Grid computing is used for computational resource management and web services are used for inter-operable communications among distributed software components.
The proposed DDDAS is demonstrated on an example of preventive maintenance scheduling in a semiconductor supply chain. Journal: IIE Transactions Pages: 325-341 Issue: 5 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903394306 File-URL: http://hdl.handle.net/10.1080/07408170903394306 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:5:p:325-341 Template-Type: ReDIF-Article 1.0 Author-Name: Chin Tan Author-X-Name-First: Chin Author-X-Name-Last: Tan Author-Name: Joseph Hartman Author-X-Name-First: Joseph Author-X-Name-Last: Hartman Title: Equipment replacement analysis with an uncertain finite horizon Abstract: Equipment replacement strategies are highly dependent on the horizon of analysis. While the literature can generally be categorized according to infinite- or finite-horizon solutions, this paper examines the case where the horizon will last at least Ts periods but may last as long as Tl periods, due to uncertainty in the length of production runs or the temporary provision of services. Stochastic dynamic programming formulations are presented and solutions that minimize either expected costs or maximum regret are explored. Furthermore, the critical time period where optimal decisions diverge for different horizon realizations is identified. As this time period may be substantially earlier than Ts, a lease option contract is designed that allows owners to lower the risk of high costs that may result from a given horizon realization, while opening a possible source of revenue for a lessor. Journal: IIE Transactions Pages: 342-353 Issue: 5 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903394363 File-URL: http://hdl.handle.net/10.1080/07408170903394363 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:5:p:342-353 Template-Type: ReDIF-Article 1.0 Author-Name: Santanu Chakraborty Author-X-Name-First: Santanu Author-X-Name-Last: Chakraborty Author-Name: Kumar Muthuraman Author-X-Name-First: Kumar Author-X-Name-Last: Muthuraman Author-Name: Mark Lawley Author-X-Name-First: Mark Author-X-Name-Last: Lawley Title: Sequential clinical scheduling with patient no-shows and general service time distributions Abstract: A sequential clinical scheduling method for patients with general service time distributions is developed in this paper. Patients call a medical clinic to request an appointment with their physician. During the call, the scheduler assigns the patient to an available slot in the physician's schedule. This is communicated to the patient before the call terminates and, thus, the schedule is constructed sequentially. In practice, there is very limited opportunity to adjust the schedule once the complete set of patients is known. Scheduled patients might not attend, that is, they might “no-show,” and the service times of those attending are random. A myopic scheduling algorithm with an optimal stopping criterion for this problem assuming exponential service times already exists in the literature. This work relaxes this assumption and develops numerical techniques for general service time distributions. A special case in which service times are gamma distributed is considered and it is shown that computation is significantly reduced. Finally, exhaustive experimental results are provided along with discussions that provide insights into the practical aspects of the scheduling approach.
Journal: IIE Transactions Pages: 354-366 Issue: 5 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903396459 File-URL: http://hdl.handle.net/10.1080/07408170903396459 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:5:p:354-366 Template-Type: ReDIF-Article 1.0 Author-Name: Fatih Mutlu Author-X-Name-First: Fatih Author-X-Name-Last: Mutlu Author-Name: Sila Çetinkaya Author-X-Name-First: Sila Author-X-Name-Last: Çetinkaya Author-Name: James Bookbinder Author-X-Name-First: James Author-X-Name-Last: Bookbinder Title: An analytical model for computing the optimal time-and-quantity-based policy for consolidated shipments Abstract: The logistics literature reports that three different types of shipment consolidation policies are popular in current practice. These are time-based, quantity-based and Time-and-Quantity (TQ)-based consolidation policies. Although time-based and quantity-based policies have been studied via analytical modeling, to the best of the authors' knowledge, there is no exact analytical model for computing the optimal TQ-based policy parameters. Considering the case of stochastic demand/order arrivals, an analytical model for computing the expected long-run average cost of a consolidation system implementing a TQ-based policy is developed. The cost expression is used to analyze the optimal TQ-based policy parameters. The presented analytical results prove that: (i) the optimal TQ-based policy outperforms the optimal time-based policy; and (ii) the optimal quantity-based policy is superior to the other two (i.e., optimal time-based and TQ-based) policies in terms of cost. Considering the expected maximum waiting time as a measure of timely delivery performance, however, it is numerically demonstrated that the TQ-based policies improve on the quantity-based policies significantly with only a slight increase in the cost. Journal: IIE Transactions Pages: 367-377 Issue: 5 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903462368 File-URL: http://hdl.handle.net/10.1080/07408170903462368 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:5:p:367-377 Template-Type: ReDIF-Article 1.0 Author-Name: Kyung Sung Jung Author-X-Name-First: Kyung Sung Author-X-Name-Last: Jung Author-Name: H. Neil Geismar Author-X-Name-First: H. Neil Author-X-Name-Last: Geismar Author-Name: Michael Pinedo Author-X-Name-First: Michael Author-X-Name-Last: Pinedo Author-Name: Chelliah Sriskandarajah Author-X-Name-First: Chelliah Author-X-Name-Last: Sriskandarajah Title: Approximations to optimal sequences in single-gripper and dual-gripper robotic cells with circular layouts Abstract: This article considers the problems of scheduling operations in single-gripper and dual-gripper bufferless robotic cells in which the arrangement of machines is circular. The cells are designed to produce identical parts under the free-pickup criterion with additive intermachine travel time. The objective is to find a cyclic sequence of robot moves that minimizes the long-run average time required to produce a part or, equivalently, that maximizes the throughput. Obtaining an efficient algorithm for an approximation to an optimal k-unit cyclic solution (over all k ≥ 1) is the focus of this article. The proposed algorithms introduce a new class of schedules, which are referred to as epi-cyclic cycles.
A polynomial algorithm with a 5/3-approximation to an optimal k-unit cycle over all cells is developed. The structural analysis performed for dual-gripper cells leads to a polynomial-time algorithm that provides at worst a 3/2-approximation for the practically relevant case in which the dual-gripper switch time is less than twice the intermachine robot movement time. A computational study demonstrates that the algorithm performs much better on average than this worst-case bound suggests. These theoretical studies are a stepping stone for establishing the complexity status of the corresponding domain. They also provide theoretical as well as practical insights that are useful in maximizing productivity of any cell configuration with either type of robot. Journal: IIE Transactions Pages: 634-652 Issue: 6 Volume: 47 Year: 2015 Month: 6 X-DOI: 10.1080/0740817X.2014.937019 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.937019 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:6:p:634-652 Template-Type: ReDIF-Article 1.0 Author-Name: Mohammad E. Nikoofal Author-X-Name-First: Mohammad E. Author-X-Name-Last: Nikoofal Author-Name: Mehmet Gümüs Author-X-Name-First: Mehmet Author-X-Name-Last: Gümüs Title: On the value of terrorist’s private information in a government’s defensive resource allocation problem Abstract: The ability to understand and predict the sequence of events leading to a terrorist attack is one of the main issues in developing pre-emptive defense strategies for homeland security. This article explores the value of a terrorist’s private information in a government’s defense allocation decision. In particular, two settings with different informational structures are considered. In the first setting, the government knows the terrorist’s target preference but does not know whether the terrorist is fully rational in his target selection decision. In the second setting, the government knows the degree of rationality of the terrorist but does not know the terrorist’s target preference. The government’s equilibrium budget allocation strategy for each setting is fully characterized and it is shown that the government makes resource allocation decisions by comparing her valuation for each target with a set of thresholds. The Value Of Information (VOI) from the perspective of the government for each setting is derived. The obtained results show that VOI mainly depends on the government’s budget and the degree of heterogeneity among the targets. In general, VOI goes to zero when the government’s budget is high enough. However, the impact of heterogeneity among the targets on VOI further depends on whether or not the terrorist’s target preference matches that of the government. Finally, various extensions of the baseline model are examined and it is shown that the structural properties of the budget allocation equilibrium still hold true. Journal: IIE Transactions Pages: 533-555 Issue: 6 Volume: 47 Year: 2015 Month: 6 X-DOI: 10.1080/0740817X.2014.938844 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.938844 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:47:y:2015:i:6:p:533-555 Template-Type: ReDIF-Article 1.0 Author-Name: Lance Sherry Author-X-Name-First: Lance Author-X-Name-Last: Sherry Title: Improving the accuracy of airport emissions inventories using disparate datasets Abstract: Environmental regulations require airports to report air quality emissions inventories (i.e., tons emitted) for aircraft emissions such as carbon oxides (COx) and nitrogen oxides (NOx). Traditional methods for emission inventory calculations yield over-estimated inventories due to the assumption that maximum takeoff thrust settings are used for all departures. To reduce costs, airlines use “reduced” thrust settings (such as derated or flex temperature thrust settings) that can be up to 25% lower than the maximum takeoff thrust setting. Thrust data for each flight operation are not readily available to those responsible for the emission inventory. This article describes an approach to estimate the actual takeoff thrust for each flight operation using algorithms that combine radar surveillance track data, weather data, and standardized aircraft performance models. A case study for flights from Chicago's O’Hare airport exhibited an average takeoff thrust of 86% of maximum takeoff thrust (within 4% of the average for actual takeoff thrust settings). The implications and limitations of this method are discussed. Journal: IIE Transactions Pages: 577-585 Issue: 6 Volume: 47 Year: 2015 Month: 6 X-DOI: 10.1080/0740817X.2014.938845 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.938845 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:6:p:577-585 Template-Type: ReDIF-Article 1.0 Author-Name: Paul A. Rubin Author-X-Name-First: Paul A. Author-X-Name-Last: Rubin Author-Name: Lihui Bai Author-X-Name-First: Lihui Author-X-Name-Last: Bai Title: Forming competitively balanced teams Abstract: This article examines the problem of assigning individuals to teams to make the teams as similar as possible to each other across multiple attributes. This may be complicated by a variety of constraints, including restrictions on whether specific individuals can or should be assigned to the same team. The problem arises in multiple contexts, including youth recreation leagues and academic programs or courses with mandated project groups. A model for the problem is proposed and various solution approaches are investigated, including mixed-integer programming and several heuristics. Supplementary materials are available for this article. Go to the publisher’s online edition of IIE Transactions for datasets, additional tables, detailed proofs, etc. Journal: IIE Transactions Pages: 620-633 Issue: 6 Volume: 47 Year: 2015 Month: 6 X-DOI: 10.1080/0740817X.2014.953643 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.953643 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:6:p:620-633 Template-Type: ReDIF-Article 1.0 Author-Name: Andrew Romich Author-X-Name-First: Andrew Author-X-Name-Last: Romich Author-Name: Guanghui Lan Author-X-Name-First: Guanghui Author-X-Name-Last: Lan Author-Name: J. Cole Smith Author-X-Name-First: J.
Cole Author-X-Name-Last: Smith Title: Algorithms for optimizing the placement of stationary monitors Abstract: This article examines the problem of placing stationary monitors in a continuous space, with the goal of minimizing an adversary’s maximum probability of traversing an origin–destination route without being detected. The problem arises, for instance, in defending against the transport of illicit material through some area of interest. In particular, we consider the deployment of monitors whose probability of detecting an intruder is a function of the distance between the monitor and the intruder. Under the assumption that the detection probabilities are mutually independent, a two-stage mixed-integer nonlinear programming formulation is constructed for the problem. An algorithm is provided that optimally locates monitors in a continuous space. Then, this problem is examined for the case where the monitor locations are restricted to two different discretized subsets of continuous space. The analysis provides optimization algorithms for each case and derives bounds on the worst-case optimality gap between the restrictions and the initial (continuous-space) problem. Empirically, it is shown that discretized solutions can be obtained whose worst-case and actual optimality gaps are well within practical limits. Journal: IIE Transactions Pages: 556-576 Issue: 6 Volume: 47 Year: 2015 Month: 6 X-DOI: 10.1080/0740817X.2014.953646 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.953646 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:6:p:556-576 Template-Type: ReDIF-Article 1.0 Author-Name: Muer Yang Author-X-Name-First: Muer Author-X-Name-Last: Yang Author-Name: Michael J. Fry Author-X-Name-First: Michael J. Author-X-Name-Last: Fry Author-Name: Corey Scurlock Author-X-Name-First: Corey Author-X-Name-Last: Scurlock Title: The ICU will see you now: efficient–equitable admission control policies for a surgical ICU with batch arrivals Abstract: Intensive Care Units (ICUs) are frequently the bottleneck in a hospital system, limiting patient flow and negatively impacting profits. This article examines admission control policies for a surgical ICU where patients arrive in batches. This problem is formulated as a Markov Decision Process (MDP) with an objective function that allows for varying degrees of emphasis on efficiency versus equity. Equity concerns are driven by a combination of surgery type and operating surgeon and are captured in a robust manner in the proposed models. A simple and efficient heuristic solution method related to our MDP formulation is proposed that provides a performance guarantee. The proposed admissions policy is applied to a real setting motivated by the cardiothoracic surgical ICU at Mount Sinai Medical Center in New York; the results demonstrate that the ICU can achieve large equity gains with no efficiency losses. Journal: IIE Transactions Pages: 586-599 Issue: 6 Volume: 47 Year: 2015 Month: 6 X-DOI: 10.1080/0740817X.2014.955151 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.955151 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:6:p:586-599 Template-Type: ReDIF-Article 1.0 Author-Name: Jeremy J. Tejada Author-X-Name-First: Jeremy J. Author-X-Name-Last: Tejada Author-Name: Julie S. Ivy Author-X-Name-First: Julie S. Author-X-Name-Last: Ivy Author-Name: James R. 
Wilson Author-X-Name-First: James R. Author-X-Name-Last: Wilson Author-Name: Matthew J. Ballan Author-X-Name-First: Matthew J. Author-X-Name-Last: Ballan Author-Name: Kathleen M. Diehl Author-X-Name-First: Kathleen M. Author-X-Name-Last: Diehl Author-Name: Bonnie C. Yankaskas Author-X-Name-First: Bonnie C. Author-X-Name-Last: Yankaskas Title: Combined DES/SD model of breast cancer screening for older women, I: Natural-history simulation Abstract: Two companion articles develop and exploit a simulation modeling framework to evaluate the effectiveness of breast cancer screening policies for U.S. women who are at least 65 years old. This first article examines the main components in the breast cancer screening-and-treatment process for older women; then it introduces a two-phase simulation approach to defining and modeling those components. Finally, this article discusses the first-phase simulation, a natural-history model of the incidence and progression of untreated breast cancer for randomly sampled individuals from the designated population of older U.S. women. The companion article details the second-phase simulation, an integrated screening-and-treatment model that uses information about the genesis of breast cancer in the sampled individuals as generated by the natural-history model to estimate the benefits of different policies for screening the designated population and treating the women afflicted with the disease. Both simulation models are composed of interacting sub-models that represent key aspects of the incidence, progression, screening, treatment, survival, and cost of breast cancer in the population of older U.S. women as well as the overall structure of the system for detecting and treating the disease. Journal: IIE Transactions Pages: 600-619 Issue: 6 Volume: 47 Year: 2015 Month: 6 X-DOI: 10.1080/0740817X.2014.959671 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.959671 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:6:p:600-619 Template-Type: ReDIF-Article 1.0 Author-Name: Michael J. Brusco Author-X-Name-First: Michael J. Author-X-Name-Last: Brusco Title: An exact algorithm for maximizing grouping efficacy in part–machine clustering Abstract: The Grouping Efficacy Index (GEI) is well-recognized as a measure of the quality of a solution to a part–machine clustering problem. During the past two decades, numerous approximation procedures (heuristics and metaheuristics) have been proposed for maximization of the GEI. Although the development of effective approximation procedures is essential for large part–machine incidence matrices, the design of computationally feasible exact algorithms for modestly sized matrices also affords an important contribution. This article presents an exact (branch-and-bound) algorithm for maximization of the GEI. Among the important features of the algorithm are (i) the use of a relocation heuristic to establish a good lower bound for the GEI; (ii) a careful reordering of the parts and machines; and (iii) the establishment of upper bounds using the minimum possible contributions to the number of exceptional elements and voids for yet unassigned parts and machines. The scalability of the algorithm is limited by the number of parts and machines, as well as the inherent structure of the part–machine incidence matrix.
Nevertheless, the proposed method produced globally optimal solutions for 104 test problems spanning 31 matrices from the literature, many of which are of nontrivial size. The new algorithm also compares favorably to a mixed-integer linear programming approach to the problem using CPLEX. Journal: IIE Transactions Pages: 653-671 Issue: 6 Volume: 47 Year: 2015 Month: 6 X-DOI: 10.1080/0740817X.2014.971202 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.971202 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:6:p:653-671 Template-Type: ReDIF-Article 1.0 Author-Name: Prahalad K. Rao Author-X-Name-First: Prahalad K. Author-X-Name-Last: Rao Author-Name: Omer F. Beyca Author-X-Name-First: Omer F. Author-X-Name-Last: Beyca Author-Name: Zhenyu (James) Kong Author-X-Name-First: Zhenyu (James) Author-X-Name-Last: Kong Author-Name: Satish T. S. Bukkapatnam Author-X-Name-First: Satish T. S. Author-X-Name-Last: Bukkapatnam Author-Name: Kenneth E. Case Author-X-Name-First: Kenneth E. Author-X-Name-Last: Case Author-Name: Ranga Komanduri Author-X-Name-First: Ranga Author-X-Name-Last: Komanduri Title: A graph-theoretic approach for quantification of surface morphology variation and its application to chemical mechanical planarization process Abstract: We present an algebraic graph-theoretic approach for quantification of surface morphology. Using this approach, heterogeneous, multi-scaled aspects of surfaces (e.g., semiconductor wafers) are tracked from optical micrographs, as opposed to reticent profile mapping techniques. Therefore, this approach can facilitate in situ real-time assessment of surface quality. We report two complementary methods for realizing graph-theoretic representation and subsequent quantification of surface morphology variations from optical micrograph images. Experimental investigations with specular finished copper wafers (surface roughness (Sa) ∼ 6 nm) obtained using a semiconductor chemical mechanical planarization process suggest that the graph-based topological invariant Fiedler number (λ2) was able to quantify and track variations in surface morphology more effectively than other quantifiers reported in the literature. Journal: IIE Transactions Pages: 1088-1111 Issue: 10 Volume: 47 Year: 2015 Month: 10 X-DOI: 10.1080/0740817X.2014.1001927 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.1001927 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:10:p:1088-1111 Template-Type: ReDIF-Article 1.0 Author-Name: Brandon Pope Author-X-Name-First: Brandon Author-X-Name-Last: Pope Author-Name: Abhijit Deshmukh Author-X-Name-First: Abhijit Author-X-Name-Last: Deshmukh Author-Name: Andrew Johnson Author-X-Name-First: Andrew Author-X-Name-Last: Johnson Author-Name: J. James Rohack Author-X-Name-First: J. James Author-X-Name-Last: Rohack Title: Modeling dependence in health behaviors Abstract: The prediction and control of distributed health behaviors within a population, such as smoking, diet, and physical activity, are of great concern to those who pay for healthcare, including employers, insurers, and public policy makers, given their significant effect on costs. When modeling the selection of multiple health behaviors, the nature of dependence between behaviors must be considered, because simplifying assumptions such as independence are untenable.
Using data from the National Heart, Lung, and Blood Institute, we find strong evidence to reject the hypothesis of independence between the aforementioned behaviors, while finding some evidence of conditional independence. In this article, several alternatives to the assumption of independence are presented, each of which significantly improves the ability to predict combined behavior. We present models of dependence through marginal probabilities and, taking inspiration from non-expected utility maximizing behavior, through attractions to behavioral alternatives. We find that consistently healthy (or unhealthy) combinations of behaviors are more likely to occur relative to the assumption of independence. We discuss how our results could be used in designing policies to curtail costs and improve health. Journal: IIE Transactions Pages: 1112-1121 Issue: 10 Volume: 47 Year: 2015 Month: 10 X-DOI: 10.1080/0740817X.2015.1009197 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1009197 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:10:p:1112-1121 Template-Type: ReDIF-Article 1.0 Author-Name: Tongdan Jin Author-X-Name-First: Tongdan Author-X-Name-Last: Jin Author-Name: Ying Yu Author-X-Name-First: Ying Author-X-Name-Last: Yu Author-Name: Elsayed Elsayed Author-X-Name-First: Elsayed Author-X-Name-Last: Elsayed Title: Reliability and quality control for distributed wind/solar energy integration: a multi-criteria approach Abstract: A distributed generation system that integrates wind and solar energy has emerged as a new paradigm for power supply. The goal of distributed generation planning is to determine the generation capacity, placement, and maintenance such that the system performance is optimized. In prior studies, the decision is often made assuming a deterministic operating condition. Power intermittency and equipment cost are the main challenges in deploying a wind- and solar-based energy solution. This article proposes a multi-criteria generation planning model to maximize the renewable energy throughput while minimizing its cost. The system is designed under stringent reliability and power quality criteria stipulated as loss-of-load-probability and voltage variation, respectively. A variance propagation model is derived to decouple the voltage correlations between upstream and downstream nodes. A two-stage, metaheuristic algorithm is developed to search for the non-dominated solution set. Our approach differs from existing planning methods in that we implement a statistical quality control mechanism to reduce the voltage drops. A 13-node distribution network is used to demonstrate the performance of the proposed method. Journal: IIE Transactions Pages: 1122-1138 Issue: 10 Volume: 47 Year: 2015 Month: 10 X-DOI: 10.1080/0740817X.2015.1009199 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1009199 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:10:p:1122-1138 Template-Type: ReDIF-Article 1.0 Author-Name: Yonit Barron Author-X-Name-First: Yonit Author-X-Name-Last: Barron Title: Group replacement policies for a repairable cold standby system with fixed lead times Abstract: Consider a multi-component repairable cold standby system and assume that repaired units are as good as new. The operational times of the units follow a phase-type distribution. Downtime costs are incurred when failed components are not repaired or replaced.
There are also fixed, unit repair, and replacement costs associated with the maintenance facility; repairs and replacements are carried out after a fixed lead time τ. Closed-form results are derived for three classes of group replacement policies (m-failure, T-age, and (m, T, τ), which is a refinement of the classic (m, T) policy) for the expected discounted case and for the long-run average criteria. Illustrative examples are provided. Journal: IIE Transactions Pages: 1139-1151 Issue: 10 Volume: 47 Year: 2015 Month: 10 X-DOI: 10.1080/0740817X.2015.1019163 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1019163 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:10:p:1139-1151 Template-Type: ReDIF-Article 1.0 Author-Name: Saumuy Suriano Author-X-Name-First: Saumuy Author-X-Name-Last: Suriano Author-Name: Hui Wang Author-X-Name-First: Hui Author-X-Name-Last: Wang Author-Name: Chenhui Shao Author-X-Name-First: Chenhui Author-X-Name-Last: Shao Author-Name: S. Jack Hu Author-X-Name-First: S. Jack Author-X-Name-Last: Hu Author-Name: Praveen Sekhar Author-X-Name-First: Praveen Author-X-Name-Last: Sekhar Title: Progressive measurement and monitoring for multi-resolution data in surface manufacturing considering spatial and cross correlations Abstract: Controlling variations in part surface shapes is critical to high-precision manufacturing. To estimate the surface variations, a manufacturing plant usually employs a number of multi-resolution metrology systems to measure surface flatness and roughness with limited information about surface shape. Conventional research establishes surface models by considering spatial correlation; however, the prediction accuracy is restricted by the measurement range, speed, and resolution of metrology systems. In addition, existing monitoring approaches do not locate abnormal variations and lead to high rates of false alarms or misdetections. This article proposes a new methodology for efficiently measuring and monitoring surface variations by fusing in-plant multi-resolution measurements and process information. The fusion is achieved by considering cross-correlations among the measured data and manufacturing process variables along with spatial correlations. Such cross-correlations are induced by cutting force dynamics and can be used to reduce the number of measurements or improve prediction precision. Under a Bayesian framework, the prediction model is combined with measurements on incoming parts to progressively make inferences on surface shapes. Based on the inference, a new monitoring scheme is proposed for jointly detecting and locating defective areas without significantly increasing false alarms. A case study demonstrates the effectiveness of the method. Journal: IIE Transactions Pages: 1033-1052 Issue: 10 Volume: 47 Year: 2015 Month: 10 X-DOI: 10.1080/0740817X.2014.998389 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.998389 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:47:y:2015:i:10:p:1033-1052 Template-Type: ReDIF-Article 1.0 Author-Name: Changqing Cheng Author-X-Name-First: Changqing Author-X-Name-Last: Cheng Author-Name: Akkarapol Sa-Ngasoongsong Author-X-Name-First: Akkarapol Author-X-Name-Last: Sa-Ngasoongsong Author-Name: Omer Beyca Author-X-Name-First: Omer Author-X-Name-Last: Beyca Author-Name: Trung Le Author-X-Name-First: Trung Author-X-Name-Last: Le Author-Name: Hui Yang Author-X-Name-First: Hui Author-X-Name-Last: Yang Author-Name: Zhenyu (James) Kong Author-X-Name-First: Zhenyu (James) Author-X-Name-Last: Kong Author-Name: Satish T.S. Bukkapatnam Author-X-Name-First: Satish T.S. Author-X-Name-Last: Bukkapatnam Title: Time series forecasting for nonlinear and non-stationary processes: a review and comparative study Abstract: Forecasting the evolution of complex systems is noted as one of the 10 grand challenges of modern science. Time series data from complex systems capture the dynamic behaviors and causalities of the underlying processes and provide a tractable means to predict and monitor system state evolution. However, the nonlinear and non-stationary dynamics of the underlying processes pose a major challenge for accurate forecasting. For most real-world systems, the vector field of state dynamics is a nonlinear function of the state variables; i.e., the relationship connecting intrinsic state variables with their autoregressive terms and exogenous variables is nonlinear. Time series emerging from such complex systems exhibit aperiodic (chaotic) patterns even under steady state. Also, since real-world systems often evolve under transient conditions, the signals obtained therefrom tend to exhibit myriad forms of non-stationarity. Nonetheless, methods reported in the literature focus mostly on forecasting linear and stationary processes. This article presents a review of these advancements in nonlinear and non-stationary time series forecasting models and a comparison of their performances in certain real-world manufacturing and health informatics applications. Conventional approaches do not adequately capture the system evolution (from the standpoint of forecasting accuracy, computational effort, and sensitivity to quantity and quality of a priori information) in these applications. Journal: IIE Transactions Pages: 1053-1071 Issue: 10 Volume: 47 Year: 2015 Month: 10 X-DOI: 10.1080/0740817X.2014.999180 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.999180 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:10:p:1053-1071 Template-Type: ReDIF-Article 1.0 Author-Name: Junbo Son Author-X-Name-First: Junbo Author-X-Name-Last: Son Author-Name: Qiang Zhou Author-X-Name-First: Qiang Author-X-Name-Last: Zhou Author-Name: Shiyu Zhou Author-X-Name-First: Shiyu Author-X-Name-Last: Zhou Author-Name: Mutasim Salman Author-X-Name-First: Mutasim Author-X-Name-Last: Salman Title: Prediction of the failure interval with maximum power based on the remaining useful life distribution Abstract: Prognosis of the Remaining Useful Life (RUL) of a unit or system plays an important role in system reliability analysis and maintenance decision making. One key aspect of the RUL prognosis is the construction of the best prediction interval for failure occurrence. The interval should have a reasonable length and yield the best prediction power. In current practice, the center-based interval and traditional confidence interval are widely used. 
Although both are easy to construct, they do not provide the best prediction performance. In this article, we propose a new scheme, the Maximum Power Interval (MPI), for estimating the interval with maximum prediction power. The MPI guarantees the best prediction power under a given interval length. Some technical challenges involved in the MPI method were resolved using the maximum entropy principle and truncation method. A numerical simulation study confirmed that the MPI has better prediction power than other prediction intervals. A case study using a real industry data set was conducted to illustrate the capability of the MPI method. Journal: IIE Transactions Pages: 1072-1087 Issue: 10 Volume: 47 Year: 2015 Month: 10 X-DOI: 10.1080/0740817X.2014.999899 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.999899 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:10:p:1072-1087 Template-Type: ReDIF-Article 1.0 Author-Name: Frank Meisel Author-X-Name-First: Frank Author-X-Name-Last: Meisel Author-Name: Walter Rei Author-X-Name-First: Walter Author-X-Name-Last: Rei Author-Name: Michel Gendreau Author-X-Name-First: Michel Author-X-Name-Last: Gendreau Author-Name: Christian Bierwirth Author-X-Name-First: Christian Author-X-Name-Last: Bierwirth Title: Designing supply networks under maximum customer order lead times Abstract: We consider the problem of designing supply networks for producing and distributing goods under restricted customer order lead times. Companies apply various instruments for fulfilling orders within preset lead times, such as locating facilities close to markets, producing products to stock, choosing fast modes of transportation, or delivering products directly from plants to customers without the use of distribution centers. We provide two alternative models that consider these options to a different extent, when designing multi-layer, multi-product facility networks that guarantee meeting restricted lead times. A computational evaluation compares both models with respect to solvability and the quality of the obtained networks. We find that formulating the problem as a time–space network flow model considerably helps to design high-quality networks. Furthermore, the lead times quoted to customers affect the design of all layers in the supply network. In turn, this shows that when service requirements are applied, the strategic planning of the network should be adapted accordingly. Concerning the instruments considered for meeting quoted lead times, the choice between make-to-order and make-to-stock production is found to be of utmost importance, whereas transportation decisions have a minor impact. Journal: IIE Transactions Pages: 921-937 Issue: 10 Volume: 48 Year: 2016 Month: 10 X-DOI: 10.1080/0740817X.2015.1110267 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1110267 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:uiiexx:v:48:y:2016:i:10:p:921-937 Template-Type: ReDIF-Article 1.0 Author-Name: Jen-Yi Chen Author-X-Name-First: Jen-Yi Author-X-Name-Last: Chen Author-Name: Maqbool Dada Author-X-Name-First: Maqbool Author-X-Name-Last: Dada Author-Name: Qiaohai (Joice) Hu Author-X-Name-First: Qiaohai (Joice) Author-X-Name-Last: Hu Title: Designing supply contracts: buy-now, reserve, and wait-and-see Abstract: We consider three types of purchase contracts a manufacturer could offer in order to maximize its profit when supplying a retailer that uses responsive pricing to sell in an uncertain market: buy-now before the selling season starts, reserve stock for possible future purchase, and wait-and-see the market before making purchases. The existing literature has shown that adding a recourse purchase—i.e., the wait-and-see alternative—is always beneficial for the retailer who faces an uncertain demand. We find that this is not necessarily the case for the manufacturer who supplies the retailer, as its optimal contract mix depends on the market uncertainty as well as its production characteristics. The manufacturer should offer only the buy-now alternative if its recourse production is much more costly than advance production. As the recourse production cost decreases, the manufacturer should add a second contract to the portfolio: initially the reserve contract and then the wait-and-see contract. However, when the recourse production is cheaper than advance production, the manufacturer should drop the buy-now contract from the mix. As such, it is only in a small region, which shrinks with decreasing uncertainty in demand, that the manufacturer finds it optimal to offer all three purchasing alternatives. Journal: IIE Transactions Pages: 881-900 Issue: 10 Volume: 48 Year: 2016 Month: 10 X-DOI: 10.1080/0740817X.2015.1110649 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1110649 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:10:p:881-900 Template-Type: ReDIF-Article 1.0 Author-Name: Sandra Duni Ekşioğlu Author-X-Name-First: Sandra Duni Author-X-Name-Last: Ekşioğlu Author-Name: Hadi Karimi Author-X-Name-First: Hadi Author-X-Name-Last: Karimi Author-Name: Burak Ekşioğlu Author-X-Name-First: Burak Author-X-Name-Last: Ekşioğlu Title: Optimization models to integrate production and transportation planning for biomass co-firing in coal-fired power plants Abstract: Co-firing biomass is a strategy that leads to reduced greenhouse gas emissions in coal-fired power plants. Incentives such as the Production Tax Credit (PTC) are designed to help power plants overcome the financial challenges faced during the implementation phase. Decision makers at power plants face two big challenges. The first challenge is identifying whether the benefits from incentives such as PTC can overcome the costs associated with co-firing. The second challenge is identifying the extent to which a plant should co-fire in order to maximize profits. We present a novel mathematical model that integrates production and transportation decisions at power plants. Such a model enables decision makers to evaluate the impacts of co-firing on the system performance and the cost of generating renewable electricity. 
The model presented is a nonlinear mixed-integer program that captures the loss in process efficiencies due to using biomass, a product that has a lower heating value than coal, as well as the additional investment costs necessary to support biomass co-firing and the savings due to the PTC. To solve real-life instances of this problem efficiently, we present a Lagrangean relaxation model that provides upper bounds and two linear approximations that provide lower bounds for the problem at hand. We use numerical analysis to evaluate the quality of these bounds. We develop a case study using data from nine states located in the southeast region of the United States. Via numerical experiments, we observe that (i) incentives such as the PTC do facilitate renewable energy production; (ii) the PTC should not be “one size fits all”; instead, tax credits could be a function of plant capacity or the amount of renewable electricity produced; and (iii) there is a need for comprehensive tax credit schemes to encourage renewable electricity production and reduce GHG emissions. Journal: IIE Transactions Pages: 901-920 Issue: 10 Volume: 48 Year: 2016 Month: 10 X-DOI: 10.1080/0740817X.2015.1126004 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1126004 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:10:p:901-920 Template-Type: ReDIF-Article 1.0 Author-Name: Oguzhan Vicil Author-X-Name-First: Oguzhan Author-X-Name-Last: Vicil Author-Name: Peter Jackson Author-X-Name-First: Peter Author-X-Name-Last: Jackson Title: Computationally efficient optimization of stock pooling and allocation levels for two-demand-classes under general lead time distributions Abstract: In this article we develop a procedure for estimating service levels (fill rates) and for optimizing stock and threshold levels in a two-demand-class model managed based on a lot-for-lot replenishment policy and a static threshold allocation policy. We assume that the priority demand classes exhibit mutually independent, stationary, Poisson demand processes and non-zero order lead times that are independent and identically distributed. A key feature of the optimization routine is that it requires computation of the stationary distribution only once. There are two approaches extant in the literature for estimating the stationary distribution of the stock level process: a so-called single-cycle approach and an embedded Markov chain approach. Both approaches rely on constant lead times. We propose a third approach based on a Continuous-Time Markov Chain (CTMC), solving it exactly for the case of exponentially distributed lead times. We prove that if the independence assumption of the embedded Markov chain approach is true, then the CTMC approach is exact for general lead time distributions as well. We evaluate all three approaches for a spectrum of lead time distributions and conclude that, although the independence assumption does not hold, both the CTMC and embedded Markov chain approaches perform well, dominating the single-cycle approach. The advantages of the CTMC approach are that it is several orders of magnitude less computationally complex than the embedded Markov chain approach and it can be extended in a straightforward fashion to three demand classes.
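As a rough illustration of the CTMC approach just described, the sketch below computes the stationary distribution of the on-hand stock level under exponentially distributed lead times and reads off a fill rate for each demand class. This is a minimal sketch, not the authors' formulation: the base-stock level S, threshold K, demand rates, and replenishment rate are all assumed, illustrative parameters.

    import numpy as np

    def fill_rates(S, K, lam_hi, lam_lo, mu):
        # On-hand stock x = 0..S; each of the S - x outstanding orders
        # completes at rate mu (exponential lead times, lot-for-lot policy).
        n = S + 1
        Q = np.zeros((n, n))
        for x in range(n):
            if x > 0:
                # Demand depletes stock; low-priority demand is served
                # only while the stock level is above the threshold K.
                Q[x, x - 1] = lam_hi + (lam_lo if x > K else 0.0)
            if x < S:
                Q[x, x + 1] = (S - x) * mu  # a replenishment order arrives
            Q[x, x] = -Q[x].sum()
        # Stationary distribution: solve pi Q = 0 with sum(pi) = 1.
        A = np.vstack([Q.T, np.ones(n)])
        b = np.zeros(n + 1)
        b[-1] = 1.0
        pi = np.linalg.lstsq(A, b, rcond=None)[0]
        return pi[1:].sum(), pi[K + 1:].sum()  # (high, low) fill rates

    print(fill_rates(S=8, K=2, lam_hi=1.0, lam_lo=0.5, mu=0.4))

Because demand is Poisson, the stationary probabilities seen by arriving demands equal the time-stationary ones (PASTA), which is why the fill rates reduce to simple tail sums of the stationary distribution.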
Journal: IIE Transactions Pages: 955-974 Issue: 10 Volume: 48 Year: 2016 Month: 10 X-DOI: 10.1080/0740817X.2016.1146421 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1146421 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:10:p:955-974 Template-Type: ReDIF-Article 1.0 Author-Name: Kunlei Lian Author-X-Name-First: Kunlei Author-X-Name-Last: Lian Author-Name: Ashlea Bennett Milburn Author-X-Name-First: Ashlea Bennett Author-X-Name-Last: Milburn Author-Name: Ronald L. Rardin Author-X-Name-First: Ronald L. Author-X-Name-Last: Rardin Title: An improved multi-directional local search algorithm for the multi-objective consistent vehicle routing problem Abstract: This article presents a multi-objective variant of the Consistent Vehicle Routing Problem (MoConVRP). Instead of modeling consistency considerations such as driver consistency and time consistency as constraints, as in the majority of the ConVRP literature, they are included as objectives. Furthermore, instead of formulating a single weighted objective that relies on specifying relative priorities among objectives, an approach to approximate the Pareto frontier is developed. Specifically, an improved version of multi-directional local search (MDLS) is developed. The updated algorithm, IMDLS, makes use of large neighborhood search to find solutions that are improved according to at least one objective to add to the set of nondominated solutions at each iteration. The performance of IMDLS is compared with MDLS and five other multi-objective algorithms on a set of ConVRP test instances from the literature. The computational study validates the competitive performance of IMDLS. Furthermore, results of the computational study suggest that pursuing the best compromise solution among all three objectives may increase travel costs by about 5% while improving driver and time consistency by approximately 60% and over 75% on average, when compared with a compromise solution having the lowest overall travel distance. Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for datasets, additional tables, detailed proofs, etc. Journal: IIE Transactions Pages: 975-992 Issue: 10 Volume: 48 Year: 2016 Month: 10 X-DOI: 10.1080/0740817X.2016.1167288 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1167288 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:10:p:975-992 Template-Type: ReDIF-Article 1.0 Author-Name: Z. Melis Teksan Author-X-Name-First: Z. Melis Author-X-Name-Last: Teksan Author-Name: Joseph Geunes Author-X-Name-First: Joseph Author-X-Name-Last: Geunes Title: Production planning with price-dependent supply capacity Abstract: We consider a production planning problem in which a producer procures an input component for production by offering a price to suppliers. The available supply quantity for the production input depends on the price the producer offers, and this supply level constrains production output. The producer seeks to meet a set of demands over a finite horizon at a minimum cost, including component procurement costs. We model the problem as a discrete-time production and component supply–pricing planning problem with nonstationary costs, demands, and component supply levels. This leads to a two-level lot-sizing problem with an objective function that is neither concave nor convex.
Although the most general version of the problem is NP-hard, we provide polynomial-time algorithms for two special cases of the model under particular assumptions on the cost structure. We then apply the resulting algorithms heuristically to the more general problem version and provide computational results that demonstrate the high performance quality of the resulting heuristic solution methods. Journal: IIE Transactions Pages: 938-954 Issue: 10 Volume: 48 Year: 2016 Month: 10 X-DOI: 10.1080/0740817X.2016.1189628 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1189628 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:10:p:938-954 Template-Type: ReDIF-Article 1.0 Author-Name: Kan Wu Author-X-Name-First: Kan Author-X-Name-Last: Wu Author-Name: Sandeep Srivathsan Author-X-Name-First: Sandeep Author-X-Name-Last: Srivathsan Author-Name: Yichi Shen Author-X-Name-First: Yichi Author-X-Name-Last: Shen Title: Three-moment approximation for the mean queue time of a GI/G/1 queue Abstract: The approximation of a GI/G/1 queue plays a key role in the performance evaluation of queueing systems. To improve the conventional two-moment approximations, we propose a three-moment approximation for the mean queue time of a GI/G/1 queue based on the exact results of the H2/M/1 queue. The model is validated over a wide range of numerical experiments. Based on paired t-tests, our three-moment approximation outperforms the two-moment ones when the inter-arrival time variability is greater than one. Journal: IISE Transactions Pages: 63-73 Issue: 2 Volume: 50 Year: 2018 Month: 2 X-DOI: 10.1080/24725854.2017.1357216 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1357216 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:2:p:63-73 Template-Type: ReDIF-Article 1.0 Author-Name: Hoon Hwangbo Author-X-Name-First: Hoon Author-X-Name-Last: Hwangbo Author-Name: Andrew L. Johnson Author-X-Name-First: Andrew L. Author-X-Name-Last: Johnson Author-Name: Yu Ding Author-X-Name-First: Yu Author-X-Name-Last: Ding Title: Spline model for wake effect analysis: Characteristics of a single wake and its impacts on wind turbine power generation Abstract: Understanding and quantifying the wake effect plays an important role in improving wind turbine designs and operations, as well as wind farm layout planning. The majority of the current wake effect models are physics based, but these models have a number of shortcomings. Sophisticated models based on computational fluid dynamics suffer from computational limitations and are impractical for modeling commercial-sized wind farms, whereas simplified physics-based models are generally inaccurate for wake effect quantification. Nowadays, data-driven wake effect models are gaining popularity as the data from commercially operating wind turbines become available, but this development is still in its early stages. This study contributes to the general category of data-driven wake effect modeling that makes use of actual wind turbine operational data. We propose a wake effect model based on splines with physical constraints incorporated, which sets out to estimate wake effect characteristics such as wake width and wake depth under single-wake situations. Our model is one of the first data-driven models that provides a detailed account of the wake effect.
The prediction accuracy of the proposed spline model, compared with that of other alternatives, also confirms the benefit of incorporating the physical constraints in the statistical estimation. Journal: IISE Transactions Pages: 112-125 Issue: 2 Volume: 50 Year: 2018 Month: 2 X-DOI: 10.1080/24725854.2017.1370176 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1370176 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:2:p:112-125 Template-Type: ReDIF-Article 1.0 Author-Name: Xiaonan Liu Author-X-Name-First: Xiaonan Author-X-Name-Last: Liu Author-Name: Andrew M. Gough Author-X-Name-First: Andrew M. Author-X-Name-Last: Gough Author-Name: Jing Li Author-X-Name-First: Jing Author-X-Name-Last: Li Title: Semiconductor corner lot generation robust to process variation: Modeling and analysis Abstract: Product characterization is an important phase in developing new semiconductors. The goal is to determine if the new product will function when produced under the extreme edge of fabrication variation; if not, the product might be considered to have insufficient design margin, necessitating circuit redesign. Achieving this goal requires producing a so-called corner lot that consists of skew chips; i.e., chips whose key performance parameters are expected to be around certain targeted extreme values. These skew chips are extensively tested to determine whether their functions still meet specifications. However, due to extensive variation in the fabrication process, few skew chips can be guaranteed in a produced corner lot, and this is a long-standing frustration in the semiconductor industry. One approach to produce a satisfactory corner lot is through variation reduction of the fabrication process. Despite being a popular research area, variation reduction is a long-term effort that involves both technical and managerial considerations. We approach this problem from a different avenue by treating process variation as given and instead identifying a design strategy that guarantees production of a good corner lot robust to the variation. Specifically, we propose a first-of-its-kind rigorous mathematical formulation of this problem, investigate the theoretical properties and practical implications of this formulation, and further propose several optimality criteria and a corresponding design search algorithm. Applications to a broad range of semiconductor products are presented to demonstrate the universal improvement of the proposed optimal design compared with the traditional design used in current industrial practice. Journal: IISE Transactions Pages: 126-139 Issue: 2 Volume: 50 Year: 2018 Month: 2 X-DOI: 10.1080/24725854.2017.1383636 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1383636 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:50:y:2018:i:2:p:126-139 Template-Type: ReDIF-Article 1.0 Author-Name: Adel Alaeddini Author-X-Name-First: Adel Author-X-Name-Last: Alaeddini Author-Name: Abed Motasemi Author-X-Name-First: Abed Author-X-Name-Last: Motasemi Author-Name: Syed Hasib Akhter Faruqui Author-X-Name-First: Syed Hasib Akhter Author-X-Name-Last: Faruqui Title: A spatiotemporal outlier detection method based on partial least squares discriminant analysis and area Delaunay triangulation for image-based process monitoring Abstract: Over the past two decades, statistical process control has evolved from monitoring individual data points to linear profiles to image data. Image sensors are now being deployed in complex systems at increasing rates due to the rich information they can provide. As a result, image data play an important role in process monitoring in different application domains ranging from manufacturing to service systems. Many of the existing process monitoring methods fail to take full advantage of the image data due to the data's complex nature in both the spatial and temporal domains. This article proposes a spatiotemporal outlier detection method based on partial least squares discriminant analysis and a control statistic based on the area of the Delaunay triangulation of the squared prediction errors to improve the performance of an image-based monitoring scheme. First, partial least squares discriminant analysis is used to efficiently extract the most important features from the high-dimensional image data to identify the benchmark images of the products and obtain the pixel value errors. Next, the squared errors resulting from the previous step are connected using a Delaunay triangulation to form a surface, the area of which is used as the control statistic for the purpose of outlier detection. A real case study at a paper product manufacturing company is used to compare the performance of the proposed method in detecting different types of outliers with that of some existing methods and to demonstrate the merit of the proposed method. Journal: IISE Transactions Pages: 74-87 Issue: 2 Volume: 50 Year: 2018 Month: 2 X-DOI: 10.1080/24725854.2017.1386336 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1386336 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:2:p:74-87 Template-Type: ReDIF-Article 1.0 Author-Name: Jinho Kim Author-X-Name-First: Jinho Author-X-Name-Last: Kim Author-Name: Youngmin Lee Author-X-Name-First: Youngmin Author-X-Name-Last: Lee Author-Name: Heeyoung Kim Author-X-Name-First: Heeyoung Author-X-Name-Last: Kim Title: Detection and clustering of mixed-type defect patterns in wafer bin maps Abstract: In semiconductor manufacturing, a wafer bin map (WBM) is a map that consists of assigned bin values for dies based on wafer test results (e.g., value 1 for good dies and value 0 for defective dies). The bin values of adjacent dies are often spatially correlated, forming some systematic defect patterns. These non-random defect patterns occur due to assignable causes; therefore, it is important to identify these systematic defect patterns in order to know the root causes of failure and to take actions for quality management and yield enhancement. In particular, as wafer fabrication processes have become more complicated, mixed-type defect patterns (two or more different types of defect patterns occurring simultaneously in a single wafer) occur more frequently than in the past.
For more effective classification of wafers based on their defect patterns, mixed-type defect patterns need to be detected and separated into several clusters of different patterns; subsequently, each cluster of a single pattern can be matched to a well-known defect type (e.g., scratch, ring) or it may indicate the emergence of a new defect pattern. There are several challenges to be overcome in the detection and clustering of mixed-type defect patterns. These include (i) the separation of random defects from systematic defect patterns; (ii) determining the number of clusters; and (iii) the clustering of defect patterns of complex shapes. To address these challenges, in this article, we propose a new framework for detecting and clustering mixed-type defect patterns. First, we propose a new filtering method, called the connected-path filtering method, to denoise WBMs. Subsequently, we adopt the infinite warped mixture model for the clustering of mixed-type defect patterns; this model is flexible in its ability to deal with complex shapes of defect patterns; furthermore, the number of clusters does not need to be specified in advance but is determined automatically during the clustering procedure. We validate the proposed method using real data from a semiconductor company. The experimental results demonstrate the effectiveness of the proposed method in estimating the number of underlying clusters as well as in the clustering of mixed-type defect patterns. Journal: IISE Transactions Pages: 99-111 Issue: 2 Volume: 50 Year: 2018 Month: 2 X-DOI: 10.1080/24725854.2017.1386337 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1386337 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:2:p:99-111 Template-Type: ReDIF-Article 1.0 Author-Name: Qingsong Gong Author-X-Name-First: Qingsong Author-X-Name-Last: Gong Author-Name: Genke Yang Author-X-Name-First: Genke Author-X-Name-Last: Yang Author-Name: Changchun Pan Author-X-Name-First: Changchun Author-X-Name-Last: Pan Author-Name: Yuwang Chen Author-X-Name-First: Yuwang Author-X-Name-Last: Chen Title: Performance analysis of single EWMA controller subject to metrology delay under dynamic models Abstract: This article analyzes the performance of a closed-loop system in which a Single Exponentially Weighted Moving Average (SEWMA) controller subject to metrology delay is applied to regulate a semiconductor manufacturing process that exhibits input–output dynamics. Based on the Hurwitz stability criterion, the sufficient and necessary conditions for the stability of the closed-loop system are established. Based on these conditions, it is convenient to study the effect of metrology delay on the feasible region of the weighting factor in the SEWMA controller. Later, under the stability condition, the asymptotic properties of the SEWMA controller are discussed and the performance of the closed-loop control system is analyzed in terms of the asymptotic variation and the transient deviation in the presence of several typical types of stochastic process disturbance. Then an optimization model is built to find the appropriate weighting factor to reduce the overall variation of the process output during production. Finally, extensive simulations are carried out to demonstrate the validity of our theoretical analysis in the context of a chemical–mechanical planarization process.
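To make the closed-loop setup concrete, here is a minimal simulation sketch of a single-EWMA run-to-run controller with a metrology delay of d runs. The linear process y = alpha + beta*u plus noise, the target, the EWMA weight, and the deliberately mismatched gain estimate b are all illustrative assumptions; the article's stability conditions and disturbance models are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)
    T, w, d = 10.0, 0.3, 2            # target, EWMA weight, metrology delay in runs
    alpha, beta = 2.0, 1.5            # true process intercept and gain (unknown to controller)
    b = 1.2                           # controller's (mismatched) gain estimate
    a = 0.0                           # EWMA estimate of the process intercept
    u_hist, y_hist = [], []
    for t in range(200):
        u = (T - a) / b               # recipe computed from the current model
        y = alpha + beta * u + 0.2 * rng.standard_normal()  # noisy process output
        u_hist.append(u)
        y_hist.append(y)
        if t >= d:                    # only the measurement of run t - d is available now
            a = w * (y_hist[t - d] - b * u_hist[t - d]) + (1 - w) * a
    print("mean output over the last 100 runs:", np.mean(y_hist[-100:]))

With the delay, the controller can only fold in the measurement taken d runs earlier, which is exactly the mechanism whose effect on the feasible weighting-factor region the article characterizes.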
Journal: IISE Transactions Pages: 88-98 Issue: 2 Volume: 50 Year: 2018 Month: 2 X-DOI: 10.1080/24725854.2017.1386338 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1386338 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:2:p:88-98 Template-Type: ReDIF-Article 1.0 Author-Name: Afshin Kamyabniya Author-X-Name-First: Afshin Author-X-Name-Last: Kamyabniya Author-Name: M. M. Lotfi Author-X-Name-First: M. M. Author-X-Name-Last: Lotfi Author-Name: Hua Cai Author-X-Name-First: Hua Author-X-Name-Last: Cai Author-Name: Hasan Hosseininasab Author-X-Name-First: Hasan Author-X-Name-Last: Hosseininasab Author-Name: Saeed Yaghoubi Author-X-Name-First: Saeed Author-X-Name-Last: Yaghoubi Author-Name: Yuehwern Yih Author-X-Name-First: Yuehwern Author-X-Name-Last: Yih Title: A two-phase coordinated logistics planning approach to platelets provision in humanitarian relief operations Abstract: In the provision of platelets in humanitarian relief operations, the excessive number of non-coordinated organizations sharply increases relief response times and the corresponding costs of platelets. This article proposes a two-phase mechanism to coordinate two heterogeneous relief organizations (i.e., the relief and rescue organization of the Red Crescent Society and a blood transfusion organization) in a decentralized network in such a way that each organization's own interests and objectives are also satisfied. The blood transfusion organization tries to minimize the wastage level of platelets considering the related total costs. At the same time, the relief and rescue organization decides on the selection of shelters where the platelets can be administered to injured people, while minimizing total relief time. Thus, first a bi-level mixed-integer linear model under demand and supply uncertainties is developed (phase 1), and then a capacity-sharing coordination mechanism based on collaborative control theory is proposed (phase 2). To solve large-scale instances, a fuzzy Kth-Best algorithm is developed for the first phase, and phase 2 is then solved by the proposed coordination mechanism. We compare our model with a centralized relief logistics model using a data set for a possible earthquake in Tehran, Iran. Results show that our model reduces shortage and wastage compared with the centralized model. Journal: IISE Transactions Pages: 1-21 Issue: 1 Volume: 51 Year: 2019 Month: 1 X-DOI: 10.1080/24725854.2018.1479901 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1479901 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:1:p:1-21 Template-Type: ReDIF-Article 1.0 Author-Name: Yanzi Zhang Author-X-Name-First: Yanzi Author-X-Name-Last: Zhang Author-Name: Zhi-Hai Zhang Author-X-Name-First: Zhi-Hai Author-X-Name-Last: Zhang Title: Impact of the cannibalization effect between new and remanufactured products on supply chain design and operations Abstract: The cannibalization effect between new and remanufactured products impacts market demand and further influences supply chain design, which makes supply chain operations complex. This article studies the impact of cannibalization between new and remanufactured products on supply chain network design and operations by considering a joint pricing-location-inventory problem. A three-level supply chain network that consists of multiple distribution centers and retailers is considered.
New and remanufactured products are supplied simultaneously. The problem is formulated as a nonlinear mixed-integer program and is then transformed into a conic quadratic mixed-integer program. An outer approximation-based solution approach is developed to solve the program. Extensive numerical experiments are conducted to explore the performance of the algorithm and the effects of market cannibalization on the supply chain network design and operations. Journal: IISE Transactions Pages: 22-40 Issue: 1 Volume: 51 Year: 2019 Month: 1 X-DOI: 10.1080/24725854.2018.1486055 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1486055 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:1:p:22-40 Template-Type: ReDIF-Article 1.0 Author-Name: Zhi-Hai Zhang Author-X-Name-First: Zhi-Hai Author-X-Name-Last: Zhang Author-Name: Gemma Berenguer Author-X-Name-First: Gemma Author-X-Name-Last: Berenguer Author-Name: Xiaoyong Pan Author-X-Name-First: Xiaoyong Author-X-Name-Last: Pan Title: Location, inventory and testing decisions in closed-loop supply chains: A multimedia company Abstract: Our partnering firm is a Chinese manufacturer of multimedia products that needs guidance developing its imminent Closed-Loop Supply Chain (CLSC). To study this problem, we take into account location, inventory, and testing decisions in a CLSC setting with stochastic demands of new and time-sensitive returned products. Our analysis pays particular attention to the different roles assigned to the reverse Distribution Centers (DCs) and how each option affects the optimal CLSC design. The roles considered are collection and consolidation, additional testing tasks, and direct shipments with no reverse DCs. The problem concerning our partnering firm is formulated as a scenario-based chance-constrained mixed-integer program and it is reformulated to a conic quadratic mixed-integer program that can be solved efficiently via commercial optimization packages. The completeness of the model proposed allows us to develop a decision support tool for the firm and to offer several useful managerial insights. These insights are inferred from our computational experiments using data from the Chinese firm and a second data set based on the U.S. geography. Particularly interesting insights are related to how changes in the reverse flows can impact the forward supply chain and the inventory dynamics concerning the joint DCs. Journal: IISE Transactions Pages: 41-56 Issue: 1 Volume: 51 Year: 2019 Month: 1 X-DOI: 10.1080/24725854.2018.1494868 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1494868 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:1:p:41-56 Template-Type: ReDIF-Article 1.0 Author-Name: Jen-Yen Lin Author-X-Name-First: Jen-Yen Author-X-Name-Last: Lin Author-Name: Sean X. Zhou Author-X-Name-First: Sean X. Author-X-Name-Last: Zhou Author-Name: Fei Gao Author-X-Name-First: Fei Author-X-Name-Last: Gao Title: Production and technology choice under emissions regulation: Centralized vs decentralized supply chains Abstract: To study how emissions regulations impact supply chain operations, we consider a supply chain where a supplier produces and sells raw material to a manufacturer, who then uses it to produce a final product to satisfy random market demand. 
Both firms are equipped with two production technologies, one of which is costlier but generates fewer emissions than the other. Each firm’s emissions are capped by the amount of allowances it holds, and if the firm over-emits, it pays a penalty. We derive the optimal solutions of a centralized system, both jointly regulated and separately regulated, and of a decentralized system. We find that the relationships between the emissions abatement cost, the emissions penalty, and the salvage value of the allowance largely determine the technology choice of the firms. For the centralized system, joint regulation results in a higher profit than separate regulation, but it may not result in a larger production quantity. For the decentralized system, under a more stringent regulation (fewer allowances), the firms may produce more while not using more of the green technology; and if the manufacturer has fewer allowances, the manufacturer and the whole chain may be better off. The numerical study further illustrates that adding a green technology is always economically beneficial to the centralized supply chain, although it may hurt the manufacturer and the decentralized chain. In the scenarios where only the supplier or only the manufacturer is regulated, we show analytically that the centralized system produces more, uses more green technology, and generates more emissions than the decentralized one. More interestingly, the decentralized supply chain with the regulated supplier produces more, has a higher profit, and emits more than the supply chain with the regulated manufacturer when the emissions intensities of the production technologies are the same for the firms. Journal: IISE Transactions Pages: 57-73 Issue: 1 Volume: 51 Year: 2019 Month: 1 X-DOI: 10.1080/24725854.2018.1506193 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1506193 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:1:p:57-73 Template-Type: ReDIF-Article 1.0 Author-Name: Greggory J. Schell Author-X-Name-First: Greggory J. Author-X-Name-Last: Schell Author-Name: Gian-Gabriel P. Garcia Author-X-Name-First: Gian-Gabriel P. Author-X-Name-Last: Garcia Author-Name: Mariel S. Lavieri Author-X-Name-First: Mariel S. Author-X-Name-Last: Lavieri Author-Name: Jeremy B. Sussman Author-X-Name-First: Jeremy B. Author-X-Name-Last: Sussman Author-Name: Rodney A. Hayward Author-X-Name-First: Rodney A. Author-X-Name-Last: Hayward Title: Optimal coinsurance rates for a heterogeneous population under inequality and resource constraints Abstract: Although operations research has contributed heavily to the derivation of optimal treatment guidelines for chronic diseases, patient adherence to treatment plans is low and variable. One mechanism for improving patient adherence to guidelines is to tailor coinsurance rates for prescription medications to patient characteristics. We seek to find coinsurance rates that maximize the welfare of the heterogeneous patient population at risk for cardiovascular disease. We analyze the problem as a bilevel optimization model where the lower optimization problem has the structure of a Markov decision process that determines the optimal treatment plan for each patient class. The upper optimization problem is a nonlinear resource allocation problem with constraints on total expenditures and coinsurance inequality.
We used dynamic programming with a penalty function for nonseparable constraint violations to derive the optimal coinsurance rates. We parameterized and solved this model by considering patients who are insured by Medicare and are prescribed medications for prevention of cardiovascular disease. We find that optimizing coinsurance rates can be a cost-effective intervention for improving patient adherence and health outcomes, particularly for those patients at high risk for cardiovascular disease. Journal: IISE Transactions Pages: 74-91 Issue: 1 Volume: 51 Year: 2019 Month: 1 X-DOI: 10.1080/24725854.2018.1499053 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1499053 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:1:p:74-91 Template-Type: ReDIF-Article 1.0 Author-Name: Nevin Mutlu Author-X-Name-First: Nevin Author-X-Name-Last: Mutlu Author-Name: Ebru K. Bish Author-X-Name-First: Ebru K. Author-X-Name-Last: Bish Title: Optimal demand shaping for a dual-channel retailer under growing e-commerce adoption Abstract: The e-commerce adoption level within our society has been growing in the past decade, leading to dynamically evolving demand patterns across retailing channels. In this work, we study a dual-channel retailer’s optimal demand shaping strategy, through e-commerce marketing efforts and store service levels, in the presence of this dynamic evolution. Our stylized model integrates the growing adoption of e-commerce within society with individual consumers’ channel choice, and explicitly models the reference effects of the retailer’s prior decisions on consumer decision-making in a multi-period setting. This model allows us to characterize the settings in which e-commerce marketing is beneficial for the retailer, and to show that the retailer’s optimal demand shaping strategy depends on the product’s e-commerce adoption phase. Interestingly, we find that if the retailer provides the consumers with information on store availability levels, then the retailer’s optimal service levels stay constant over time, even if e-commerce adoption in the society grows. Journal: IISE Transactions Pages: 92-106 Issue: 1 Volume: 51 Year: 2019 Month: 1 X-DOI: 10.1080/24725854.2018.1508927 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1508927 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:51:y:2019:i:1:p:92-106 Template-Type: ReDIF-Article 1.0 Author-Name: Burcu Balcik Author-X-Name-First: Burcu Author-X-Name-Last: Balcik Author-Name: Seyed Iravani Author-X-Name-First: Seyed Author-X-Name-Last: Iravani Author-Name: Karen Smilowitz Author-X-Name-First: Karen Author-X-Name-Last: Smilowitz Title: Multi-vehicle sequential resource allocation for a nonprofit distribution system Abstract: This article introduces a multi-vehicle sequential allocation problem that considers two critical objectives for nonprofit operations: providing equitable service and minimizing unused donations. This problem is motivated by an application in food redistribution from donors such as restaurants and grocery stores to agencies such as soup kitchens and homeless shelters. A set partitioning model is formulated that can be used to design vehicle routes; it primarily focuses on equity maximization and implicitly considers waste. 
The behavior of the model in clustering agencies and donors on routes is studied, and the impacts of demand variability and supply availability on route composition and solution performance are analyzed. A comprehensive numerical study is performed in order to develop insights on optimal solutions. Based on this study, an efficient decomposition-based heuristic for the problem that can handle an additional constraint on route length is developed and it is shown that the heuristic obtains high-quality solutions in terms of equity and waste. Journal: IIE Transactions Pages: 1279-1297 Issue: 12 Volume: 46 Year: 2014 Month: 12 X-DOI: 10.1080/0740817X.2013.876240 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.876240 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:12:p:1279-1297 Template-Type: ReDIF-Article 1.0 Author-Name: Cameron A. MacKenzie Author-X-Name-First: Cameron A. Author-X-Name-Last: MacKenzie Author-Name: Kash Barker Author-X-Name-First: Kash Author-X-Name-Last: Barker Author-Name: Joost R. Santos Author-X-Name-First: Joost R. Author-X-Name-Last: Santos Title: Modeling a severe supply chain disruption and post-disaster decision making with application to the Japanese earthquake and tsunami Abstract: Modern supply chains are increasingly vulnerable to disruptions, and a disruption in one part of the world can cause supply difficulties for companies around the globe. This article develops a model of severe supply chain disruptions in which several suppliers suffer from disabled production facilities and firms that purchase goods from those suppliers may consequently suffer a supply shortage. Suppliers and firms can choose disruption management strategies to maintain operations. A supplier with a disabled facility may choose to move production to an alternate facility, and a firm encountering a supply shortage may be able to use inventory or buy supplies from an alternate supplier. The supplier’s and firm’s optimal decisions are expressed in terms of model parameters such as the cost of each strategy, the chances of losing business, and the probability of facilities reopening. The model is applied to a simulation based on the 2011 Japanese earthquake and tsunami, which closed several facilities of key suppliers in the automobile industry and caused supply difficulties for both Japanese and U.S. automakers. Journal: IIE Transactions Pages: 1243-1260 Issue: 12 Volume: 46 Year: 2014 Month: 12 X-DOI: 10.1080/0740817X.2013.876241 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.876241 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:12:p:1243-1260 Template-Type: ReDIF-Article 1.0 Author-Name: Subhash C. Sarin Author-X-Name-First: Subhash C. Author-X-Name-Last: Sarin Author-Name: Hanif D. Sherali Author-X-Name-First: Hanif D. Author-X-Name-Last: Sherali Author-Name: Lingrui Liao Author-X-Name-First: Lingrui Author-X-Name-Last: Liao Title: Primary pharmaceutical manufacturing scheduling problem Abstract: This article addresses an integrated lot-sizing and scheduling problem that arises in the primary manufacturing phase of a pharmaceutical supply chain. Multiple pharmaceutical ingredients and their intermediate products are to be scheduled on parallel and capacitated bays for production in batches. 
Sequence-dependent setup times and costs are incurred when cleaning a bay during changeovers between different product families. The problem also contains a high-multiplicity asymmetric traveling salesman-type substructure because of the sequence-dependent setups and special restrictions. Mixed-integer programming formulations are proposed for this problem, and several valid inequalities are developed to tighten the model. A column generation method along with a decomposition scheme and an advanced-start solution are designed to efficiently derive good solutions to this highly complex problem. A computational investigation is performed, based on instances that closely follow a real-life application, and it demonstrates the efficacy of the proposed solution approach. Journal: IIE Transactions Pages: 1298-1314 Issue: 12 Volume: 46 Year: 2014 Month: 12 X-DOI: 10.1080/0740817X.2014.882529 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.882529 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:12:p:1298-1314 Template-Type: ReDIF-Article 1.0 Author-Name: King-Wah Pang Author-X-Name-First: King-Wah Author-X-Name-Last: Pang Author-Name: Jiyin Liu Author-X-Name-First: Jiyin Author-X-Name-Last: Liu Title: An integrated model for ship routing with transshipment and berth allocation Abstract: This article studies a decision problem faced by short sea shipping companies that operate both container vessels and container terminals. The problem is to jointly decide container ship routing, berth allocation at the terminals, and transshipment of containers to minimize the overall operating cost. The problem is formulated as an integer programming model and an iterative decomposition heuristic method is proposed for the solution of this complex problem. Computational experiments are conducted on problem instances that simulate the practice of a feeder service company operating around the Pearl River Delta region. Results show that integrating the decisions on ship routing, berth allocation, and transshipment of containers can achieve significant benefits compared with traditional approaches in which sequential and independent decisions are made. Journal: IIE Transactions Pages: 1357-1370 Issue: 12 Volume: 46 Year: 2014 Month: 12 X-DOI: 10.1080/0740817X.2014.889334 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.889334 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:12:p:1357-1370 Template-Type: ReDIF-Article 1.0 Author-Name: Tianke Feng Author-X-Name-First: Tianke Author-X-Name-Last: Feng Author-Name: Joseph Geunes Author-X-Name-First: Joseph Author-X-Name-Last: Geunes Title: Speculation in a two-stage retail supply chain Abstract: One perhaps surprising outcome of E-Commerce has been the emergence of speculators who resell products via the web. These speculators create retail shortages for popular products (e.g., toys) by removing them from store shelves in bulk and then selling them at inflated prices through secondary channels; e.g., on sites such as eBay. This article examines the impact of such speculation on ordering decisions in a two-stage manufacturer–retailer supply chain. The equilibrium results of the proposed model demonstrate a range of outcomes: in some cases both the retailer and manufacturer benefit from speculators, whereas in other cases, both may be hurt by a high number of speculators.
The proposed model provides insight on when it is best for the manufacturer to take measures to preclude a high degree of speculation. Journal: IIE Transactions Pages: 1315-1328 Issue: 12 Volume: 46 Year: 2014 Month: 12 X-DOI: 10.1080/0740817X.2014.904975 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.904975 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:12:p:1315-1328 Template-Type: ReDIF-Article 1.0 Author-Name: Daniel Adelman Author-X-Name-First: Daniel Author-X-Name-Last: Adelman Author-Name: Christiane Barz Author-X-Name-First: Christiane Author-X-Name-Last: Barz Title: A price-directed heuristic for the economic lot scheduling problem Abstract: The article formulates the well-known economic lot scheduling problem (ELSP) with sequence-dependent setup times and costs as a semi-Markov decision process. Using an affine approximation of the bias function, a semi-infinite linear program is obtained and a lower bound for the minimum average total cost rate is determined. The solution of this problem is directly used in a price-directed, dynamic heuristic to determine a good cyclic schedule. As the state space of the ELSP is non-trivial for the multi-product setting with setup times, the authors further illustrate how a lookahead version of the price-directed, dynamic heuristic can be used to construct and dynamically improve an approximation of the state space. Numerical results show that the resulting heuristic performs competitively with one reported in the literature. Journal: IIE Transactions Pages: 1343-1356 Issue: 12 Volume: 46 Year: 2014 Month: 12 X-DOI: 10.1080/0740817X.2014.905733 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.905733 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:12:p:1343-1356 Template-Type: ReDIF-Article 1.0 Author-Name: Yazhe Feng Author-X-Name-First: Yazhe Author-X-Name-Last: Feng Author-Name: Kimberly P. Ellis Author-X-Name-First: Kimberly P. Author-X-Name-Last: Ellis Author-Name: Charlie W. Crawford Author-X-Name-First: Charlie W. Author-X-Name-Last: Crawford Title: Trailer fleet planning for industrial gas distribution Abstract: For industrial gas providers, fleet planning is important to their financial and operational performance. This article considers long-term vehicle purchase decisions, medium-term vehicle relocation decisions, and short-term rental decisions that are useful for increasing flexibility to meet time-varying demand. A mixed-integer programming model is developed to minimize total distribution costs and fleet investment costs over multiple time periods and multiple depots. To solve the industrial-sized problem efficiently, a two-phase approach is proposed. In Phase I, routes are generated to capture the characteristics of typical gas delivery operations. A reduced model is solved to select routes for meeting customer demands, estimate distribution costs, and determine the preferred fleet size. Phase II addresses the trailer purchase, relocation, and rental decisions based on the outputs of Phase I. The numerical studies, conducted using a data set from a leading industrial gas company, demonstrate the effectiveness and efficiency of the decomposition approach. Different routing algorithms are compared to evaluate the impact of candidate routes. 
When compared with the integrated optimization model, the two-phase approach obtains high-quality solutions within a reasonable computational time. Journal: IIE Transactions Pages: 1329-1342 Issue: 12 Volume: 46 Year: 2014 Month: 12 X-DOI: 10.1080/0740817X.2014.905737 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.905737 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:12:p:1329-1342 Template-Type: ReDIF-Article 1.0 Author-Name: Anantaram Balakrishnan Author-X-Name-First: Anantaram Author-X-Name-Last: Balakrishnan Author-Name: Harihara Prasad Natarajan Author-X-Name-First: Harihara Prasad Author-X-Name-Last: Natarajan Title: Designing fee tables for retail delivery services by third-party logistics providers Abstract: Manufacturers are increasingly relying on third-party logistics service providers to distribute their products to retail stores. Fee tables, specifying how much to pay for each delivery based on weight and distance, are commonly used as the basis for compensating distributors for their delivery services. This article proposes and solves an optimization model to help a large building products manufacturer design an appropriate fee table for payments to its distributors for delivering products from regional distribution centers to retail stores. Given the distance and the distribution of shipment weights to each store served by every distribution center, the model selects the weight and distance ranges of the fee table and sets the fees for each combination of ranges to minimize total distribution costs while satisfying fee structure requirements and ensuring adequate total compensation for each distributor. Since the problem is difficult to solve using commercial solvers, we develop a tailored approach to obtain near-optimal solutions quickly by adding valid inequalities to strengthen the model formulation and using an optimization-based procedure to generate a heuristic solution. When applied to actual data from the building products manufacturer, our composite solution method, combining cutting planes and a heuristic, was effective (yielding solutions that are within 1% of optimality) and generated substantial savings (of nearly 10%) over the current fee table. Journal: IIE Transactions Pages: 1261-1278 Issue: 12 Volume: 46 Year: 2014 Month: 12 X-DOI: 10.1080/0740817X.2014.916458 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.916458 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:12:p:1261-1278 Template-Type: ReDIF-Article 1.0 Author-Name: The Editors Title: Editorial Boards EOV Journal: IIE Transactions Pages: ebi-ebi Issue: 12 Volume: 46 Year: 2014 Month: 12 X-DOI: 10.1080/0740817X.2014.955405 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.955405 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:12:p:ebi-ebi Template-Type: ReDIF-Article 1.0 Author-Name: Alf Kimms Author-X-Name-First: Alf Author-X-Name-Last: Kimms Author-Name: Igor Kozeletskyi Author-X-Name-First: Igor Author-X-Name-Last: Kozeletskyi Title: Consideration of multiple objectives in horizontal cooperation with an application to transportation planning Abstract: This article contributes to the interface between mathematical programming and (cooperative) game theory.
Using the well-known traveling salesman problem as a basis, we discuss situations where multiple players cooperate, which leads to a multi-objective optimization problem. What is new is that not only the individual objectives of the players are considered, but also a joint objective. Hence, a sharing problem is created, which must somehow be integrated into multi-objective optimization. From a game-theoretic view, we thus face a cooperative game with non-transferable, as well as transferable, utilities. This is an innovative problem setting, for which we propose a solution procedure. To succeed, we extend knowledge from cooperative game theory and propose a concept based on the core to tackle the sharing problem when non-transferable, as well as transferable, utilities are present. As a result, we obtain a mathematical programming–based procedure that solves the multi-objective optimization problem and computes fair shares. Similar settings may occur in a wide range of applications, and the presented ideas may be adapted for those situations. Journal: IISE Transactions Pages: 1160-1171 Issue: 12 Volume: 49 Year: 2017 Month: 12 X-DOI: 10.1080/24725854.2017.1335920 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1335920 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:12:p:1160-1171 Template-Type: ReDIF-Article 1.0 Author-Name: Kena Zhao Author-X-Name-First: Kena Author-X-Name-Last: Zhao Author-Name: Tsan Sheng (Adam) Ng Author-X-Name-First: Tsan Sheng (Adam) Author-X-Name-Last: Ng Author-Name: Harn Wei Kua Author-X-Name-First: Harn Wei Author-X-Name-Last: Kua Author-Name: Muchen Tang Author-X-Name-First: Muchen Author-X-Name-Last: Tang Title: Modeling environmental impacts and risk under data uncertainties Abstract: In this article, we propose a methodology to evaluate the risk of environmental and life cycle impacts under data uncertainties that can be applied to a broad range of data availability assumptions. Specifically, we first propose a data uncertainty model that can accommodate scenarios where only a few data points are known, where data histograms are available, or where multiple, inconsistent data sources are present. An impact risk valuation model is then developed, based on the certainty equivalent of an exponential disutility function. We show that the evaluation of the impact risk value can be achieved using a closed-form expression and demonstrate an application to a comparison of food waste recycling alternatives. We further extend the methodology to construct an impact safety index model that evaluates uncertain impacts using an impact tolerance level. We show that the proposed model is computationally tractable and can be used as an optimization criterion. Computational studies in an example of sustainable building material selection are then used to demonstrate the improvement of the proposed model compared with the standard approach of optimizing average impacts across several statistical criteria. Journal: IISE Transactions Pages: 1150-1159 Issue: 12 Volume: 49 Year: 2017 Month: 12 X-DOI: 10.1080/24725854.2017.1342054 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1342054 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:12:p:1150-1159 Template-Type: ReDIF-Article 1.0 Author-Name: Divya J.
Nair Author-X-Name-First: Divya J. Author-X-Name-Last: Nair Author-Name: David Rey Author-X-Name-First: David Author-X-Name-Last: Rey Author-Name: Vinayak V. Dixit Author-X-Name-First: Vinayak V. Author-X-Name-Last: Dixit Title: Fair allocation and cost-effective routing models for food rescue and redistribution Abstract: Not-for-profit food rescue organizations play a vital role in alleviating hunger in many developing and developed countries. They rescue surplus food from the business sector and redistribute it to welfare agencies supporting different forms of food relief. Routing and allocation decisions are critical in food rescue operations, in particular when there is a significant gap between supply and demand. However, there is a gap in the literature with regard to models that account for fairness in the allocation of limited rescued food along with efficient routing. We present three objective functions (utilitarian, egalitarian, and deviation-based) for efficient and fair food allocation and a goal programming–based formulation combining cost-effective routing and allocation objectives to obtain balanced solutions. We propose and implement a heuristic solution algorithm for this food relief logistics problem and report numerical results from realistic food rescue instances. Journal: IISE Transactions Pages: 1172-1188 Issue: 12 Volume: 49 Year: 2017 Month: 12 X-DOI: 10.1080/24725854.2017.1351043 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1351043 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:12:p:1172-1188 Template-Type: ReDIF-Article 1.0 Author-Name: Canan G. Corlu Author-X-Name-First: Canan G. Author-X-Name-Last: Corlu Author-Name: Bahar Biller Author-X-Name-First: Bahar Author-X-Name-Last: Biller Author-Name: Sridhar Tayur Author-X-Name-First: Sridhar Author-X-Name-Last: Tayur Title: Demand fulfillment probability in a multi-item inventory system with limited historical data Abstract: In a budget-constrained multi-item inventory system with independent demands, we consider the case of unknown demand parameters that are estimated from limited amounts of historical demand data. In this situation, the probability of satisfying all item demands, as a measure of demand fulfillment, is a function of the finite-sample estimates of the unknown demand parameters; thus, the demand fulfillment probability is a random variable. First, we characterize the properties of an asymptotic approximation to the mean and variance of this random variable due to the use of limited data for demand parameter estimation. Second, we use the characterization of the variance of the demand fulfillment probability for quantifying the impact of demand parameter uncertainty on demand fulfillment via numerical experiments. Third, we propose an inventory optimization problem that minimizes the variance of the demand fulfillment probability due to demand parameter uncertainty subject to a budget constraint on the total inventory investment. Our numerical experiments demonstrate that, despite the availability of limited amounts of historical demand data, it is possible to manage inventory with significantly reduced variance in the demand fulfillment probability. Journal: IISE Transactions Pages: 1087-1100 Issue: 12 Volume: 49 Year: 2017 Month: 12 X-DOI: 10.1080/24725854.2017.1355125 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1355125 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:49:y:2017:i:12:p:1087-1100 Template-Type: ReDIF-Article 1.0 Author-Name: Dacheng Yao Author-X-Name-First: Dacheng Author-X-Name-Last: Yao Title: Joint pricing and inventory control for a stochastic inventory system with Brownian motion demand Abstract: In this article, we consider an infinite horizon, continuous-review, stochastic inventory system in which the cumulative customers’ demand is price dependent and is modeled as a Brownian motion. Excess demand is backlogged. The revenue is earned by selling products and the costs are incurred by holding/shortage and ordering; the latter consists of a fixed cost and a proportional cost. Our objective is to simultaneously determine a pricing strategy and an inventory control strategy to maximize the expected long-run average profit. Specifically, the pricing strategy provides the price p_t for any time t ⩾ 0 and the inventory control strategy characterizes when and how much we need to order. We show that an (s*, S*, p*) policy is optimal and obtain equations for the optimal policy parameters, where p* = {p_t*: t ⩾ 0}. Furthermore, we find that at each time t, the optimal price p_t* depends on the current inventory level z, and it is increasing in [s*, z*] and decreasing in [z*, ∞), where z* is a negative level. Journal: IISE Transactions Pages: 1101-1111 Issue: 12 Volume: 49 Year: 2017 Month: 12 X-DOI: 10.1080/24725854.2017.1355126 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1355126 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:12:p:1101-1111 Template-Type: ReDIF-Article 1.0 Author-Name: Jorge A. Sefair Author-X-Name-First: Jorge A. Author-X-Name-Last: Sefair Author-Name: J. Cole Smith Author-X-Name-First: J. Cole Author-X-Name-Last: Smith Author-Name: Miguel A. Acevedo Author-X-Name-First: Miguel A. Author-X-Name-Last: Acevedo Author-Name: Robert J. Fletcher Author-X-Name-First: Robert J. Author-X-Name-Last: Fletcher Title: A defender-attacker model and algorithm for maximizing weighted expected hitting time with application to conservation planning Abstract: This article studies an interdiction problem in which two agents with opposed interests, a defender and an attacker, interact in a system governed by an absorbing discrete-time Markov chain. The defender protects a subset of transient states, whereas the attacker targets a subset of the unprotected states. By changing some of the transition probabilities related to the attacked states, the attacker seeks to minimize the Weighted Expected Hitting Time (WEHT). The defender seeks to maximize the attacker’s minimum possible objective, mitigating the worst-case WEHT. Many applications can be represented by this problem; this article focuses on conservation planning. We present a defender–attacker model and algorithm for maximizing the minimum WEHT. As WEHT is not generally a convex function of the attacker’s decisions, we examine large-scale integer programming formulations and first-order approximation methods for its solution. We also develop an algorithm for solving the defender’s problem via mixed-integer programming methods augmented by supervalid inequalities. The efficacy of the proposed solution methods is then evaluated using data from a conservation case study, along with an array of randomly generated instances.
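For readers unfamiliar with the WEHT objective above, a small worked example may help. The transition matrix and weights below are invented for illustration, and only the inner evaluation of the weighted expected hitting time is shown, not the defender-attacker optimization the article actually solves.

    import numpy as np

    # Sub-stochastic transition matrix Q among three transient states;
    # each row's deficit from 1 is that state's one-step absorption probability.
    Q = np.array([[0.5, 0.3, 0.0],
                  [0.2, 0.5, 0.2],
                  [0.0, 0.3, 0.6]])
    w = np.array([0.5, 0.3, 0.2])  # weights over possible starting states

    # Expected hitting times of the absorbing set: t = (I - Q)^{-1} 1
    t = np.linalg.solve(np.eye(3) - Q, np.ones(3))
    print("WEHT:", w @ t)

The solve uses the standard absorbing-chain identity: with Q restricted to the transient states, the vector of expected absorption times is t = (I - Q)^{-1} 1, and the WEHT is the weighted sum of its entries.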
Journal: IISE Transactions Pages: 1112-1128 Issue: 12 Volume: 49 Year: 2017 Month: 12 X-DOI: 10.1080/24725854.2017.1360533 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1360533 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:12:p:1112-1128 Template-Type: ReDIF-Article 1.0 Author-Name: Andrew L. Liu Author-X-Name-First: Andrew L. Author-X-Name-Last: Liu Author-Name: Yihsu Chen Author-X-Name-First: Yihsu Author-X-Name-Last: Chen Title: Price containment in emissions permit markets: Balancing market risk and environmental outcomes Abstract: While cap-and-trade policies have been advocated as an efficient market-based approach in regulating greenhouse gas (GHG) emissions from the power sector, a major criticism is that the resulting emissions permit prices may be volatile, adding uncertainty for market participants and hence deterring their interest in participating in the electricity market. To ease such a concern, various permit price-containment instruments, such as a price ceiling, floor, or collar, have been proposed. Though such instruments may prevent permit prices from being extreme, they may lead to unintended outcomes such as underinvestment in low-emission technologies or little reduction of system-wide GHG emissions, hence defeating the purpose of establishing a cap-and-trade policy in the first place. To address such issues, this article examines the effect of imposing various price-containment policies on investment decisions and spot market equilibria in an electricity market. Our major contribution is that, unlike other work in this area, in which the price-containment schemes are exogenous to the market models, we endogenously incorporate a price ceiling/floor in our models and hence can analyze the interactions between the policies and their corresponding market outcomes. We further introduce uncertainties in our market models and use California's data as a case study. Journal: IISE Transactions Pages: 1129-1149 Issue: 12 Volume: 49 Year: 2017 Month: 12 X-DOI: 10.1080/24725854.2017.1362506 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1362506 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:12:p:1129-1149 Template-Type: ReDIF-Article 1.0 Author-Name: Phillip O. Kriett Author-X-Name-First: Phillip O. Author-X-Name-Last: Kriett Author-Name: Martin Grunow Author-X-Name-First: Martin Author-X-Name-Last: Grunow Title: Generation of low-dimensional capacity constraints for parallel machines Abstract: A crucial input to production planning is a capacity model that accurately describes the amount of work that parallel machines can complete per planning period. This article proposes a procedure that generates the irredundant set of low-dimensional, linear capacity constraints for unrelated parallel machines. Low-dimensional means that the constraints contain one decision variable per product type, modeling the total production quantity across all machines. The constraint generation procedure includes the Minkowski addition and the facet enumeration of convex polytopes. We discuss state-of-the-art algorithms and demonstrate their effectiveness in experiments with data from semiconductor manufacturing. Since the computational complexity of the procedure is critical, we show how uniformity among machines and products can be used to reduce the problem size.
Further, we propose a heuristic based on graph partitioning that trades constraint accuracy against computation time. A full-factorial experiment with randomly generated problem instances shows that the heuristic provides more accurate capacity constraints than alternative low-dimensional capacity models. Journal: IISE Transactions Pages: 1189-1205 Issue: 12 Volume: 49 Year: 2017 Month: 12 X-DOI: 10.1080/24725854.2017.1364875 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1364875 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:12:p:1189-1205 Template-Type: ReDIF-Article 1.0 Author-Name: The Editors Title: EOV Focus Area Editorial Boards Journal: IISE Transactions Pages: ebi-ebiv Issue: 12 Volume: 49 Year: 2017 Month: 12 X-DOI: 10.1080/24725854.2017.1392784 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1392784 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:12:p:ebi-ebiv Template-Type: ReDIF-Article 1.0 Author-Name: Massimiliano Giorgio Author-X-Name-First: Massimiliano Author-X-Name-Last: Giorgio Author-Name: Maurizio Guida Author-X-Name-First: Maurizio Author-X-Name-Last: Guida Author-Name: Gianpaolo Pulcini Author-X-Name-First: Gianpaolo Author-X-Name-Last: Pulcini Title: An age- and state-dependent Markov model for degradation processes Abstract: Many technological units are subjected during their operating life to a gradual deterioration process that progressively degrades their characteristics until a failure occurs. Statisticians and engineers have almost always modeled degradation phenomena using independent increments processes, which imply that the degradation growth depends, at most, on the unit age. Only a few models have been proposed in which the degradation growth is assumed to depend on the current unit state. In many cases, however, both the current age and the current state of a unit can affect the degradation process. As such, this article proposes a degradation model in which the transition probabilities between unit states depend on both the current age and the current degradation level. Two applications based on real data sets are analyzed and discussed. Journal: IIE Transactions Pages: 621-632 Issue: 9 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.532855 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.532855 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:9:p:621-632 Template-Type: ReDIF-Article 1.0 Author-Name: Qingzhu Yao Author-X-Name-First: Qingzhu Author-X-Name-Last: Yao Author-Name: Xiaoyan Zhu Author-X-Name-First: Xiaoyan Author-X-Name-Last: Zhu Author-Name: Way Kuo Author-X-Name-First: Way Author-X-Name-Last: Kuo Title: Heuristics for component assignment problems based on the Birnbaum importance Abstract: This article considers the Component Assignment Problem (CAP), which concerns the problem of finding the optimal arrangement of n available components in the n positions of a system so that the system reliability is maximized. The Birnbaum Importance (BI) is a well-known measure that evaluates the relative contributions of components to system reliability. The ordering of BI values of components is a good indicator for the solution of the CAP and has been used to design heuristics for the CAP. This article proposes five new BI-based heuristics and presents their corresponding properties. 
Based on the numerical testing of the BI-based heuristics, a two-stage approach is proposed to solve the CAP, with each stage using different BI-based heuristics. Comprehensive numerical experiments involving both small and large systems are used to evaluate the two-stage approach and to benchmark it against the GAMS/CoinBonmin solver and a randomization method. The numerical results show that the two-stage approach is much more efficient and is able to generate solutions of higher quality than the GAMS/CoinBonmin solver and the randomization method. Journal: IIE Transactions Pages: 633-646 Issue: 9 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.532856 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.532856 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:9:p:633-646 Template-Type: ReDIF-Article 1.0 Author-Name: Yuan Yuan Author-X-Name-First: Yuan Author-X-Name-Last: Yuan Author-Name: Shiyu Zhou Author-X-Name-First: Shiyu Author-X-Name-Last: Zhou Author-Name: Crispian Sievenpiper Author-X-Name-First: Crispian Author-X-Name-Last: Sievenpiper Author-Name: Kamal Mannar Author-X-Name-First: Kamal Author-X-Name-Last: Mannar Author-Name: Yibin Zheng Author-X-Name-First: Yibin Author-X-Name-Last: Zheng Title: Event log modeling and analysis for system failure prediction Abstract: Event logs, commonly available in modern mechatronic systems, contain rich information on the operating status and working conditions of the system. This article proposes a method to build a statistical model using event logs for system failure prediction. To achieve the best prediction performance, prescreening and statistical variable selection are adopted to select the best set of predictor events, coded as covariates in the statistical model. In-depth discussion of the prediction power of the model in terms of false alarm and misdetection probability is presented. The effectiveness of the proposed method is further confirmed using a real-world example. Journal: IIE Transactions Pages: 647-660 Issue: 9 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.546385 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.546385 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:9:p:647-660 Template-Type: ReDIF-Article 1.0 Author-Name: Chi Zhang Author-X-Name-First: Chi Author-X-Name-Last: Zhang Author-Name: José Ramirez-Marquez Author-X-Name-First: José Author-X-Name-Last: Ramirez-Marquez Author-Name: Claudio Sanseverino Author-X-Name-First: Claudio Author-X-Name-Last: Sanseverino Title: A holistic method for reliability performance assessment and critical components detection in complex networks Abstract: Many infrastructures are now considered to be critical for both the economic development and general functioning of modern societies. Thus, understanding their performance is important as a basis to develop intelligent and cost-effective ways to protect these networks. In this article, a critical infrastructure is modeled as a complex network for which a new metric is defined to understand its reliability. This metric, called reliability Π, describes the average reliability between every pair of nodes in a complex network. As such, it is related to the two-terminal reliability concept in the traditional network context.
Furthermore, in an effort to identify the most critical components that affect reliability Π, a multi-objective optimization problem, known as the critical component detection problem, is introduced. The solution to this problem provides two important insights about the behavior of a complex network: (i) an approximation to the set of optimal solutions that identifies the most critical components; and (ii) a quantitative assessment of how these failures affect the complete complex network. Journal: IIE Transactions Pages: 661-675 Issue: 9 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.546387 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.546387 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:9:p:661-675 Template-Type: ReDIF-Article 1.0 Author-Name: Ying Zhang Author-X-Name-First: Ying Author-X-Name-Last: Zhang Author-Name: Philippe Castagliola Author-X-Name-First: Philippe Author-X-Name-Last: Castagliola Author-Name: Zhang Wu Author-X-Name-First: Zhang Author-X-Name-Last: Wu Author-Name: Michael Khoo Author-X-Name-First: Michael Author-X-Name-Last: Khoo Title: The synthetic X̄ chart with estimated parameters Abstract: A synthetic X̄ chart integrates a Shewhart X̄ chart and a conforming run length chart. This type of chart has been extensively used to detect a process mean shift under the assumption of known process parameters. However, in practice, the process parameters are rarely known and are usually estimated from an in-control Phase I data set. The goals of this article are to (i) evaluate (using a Markov chain model) the performances of the synthetic X̄ chart when the process parameters are estimated; (ii) compare it with the case where the process parameters are assumed known to demonstrate that these performances are quite different when the number of samples used during Phase I is small; and (iii) suggest guidelines concerning the choice of the number of Phase I samples and provide new optimal constants tailored to the numbers of samples used in practice. Journal: IIE Transactions Pages: 676-687 Issue: 9 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.549547 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.549547 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:9:p:676-687 Template-Type: ReDIF-Article 1.0 Author-Name: Gregory Steeger Author-X-Name-First: Gregory Author-X-Name-Last: Steeger Author-Name: Steffen Rebennack Author-X-Name-First: Steffen Author-X-Name-Last: Rebennack Title: Strategic bidding for multiple price-maker hydroelectric producers Abstract: In a market composed of multiple price-maker firms, the payoff each firm receives depends not only on its own actions but also on the actions of the other firms. This is the defining characteristic of a non-cooperative economic game. In this article, we ask: What is the revenue-maximizing production schedule for multiple price-maker hydroelectric producers competing in a deregulated, bid-based market? In every time stage, we seek a set of bids such that, given all other price-maker producers’ bids, no price-maker can improve (increase) its revenue by changing its bid; i.e., a pure-strategy Nash–Cournot equilibrium. From a game-theoretic perspective, the analysis of the underlying non-cooperative game has been lacking.
Specifically, existing approaches cannot detect when multiple equilibria exist and treat any equilibrium found as optimal. In our approach, we create interpolations for each price-maker's best response function using mixed-integer linear programming formulations within a dynamic programming framework. When multiple Nash equilibria are present, our approach finds the equilibrium that is Pareto optimal, when one exists. If a Pareto-optimal Nash equilibrium does not exist, we use a tailored bargaining algorithm to determine a unique solution. To illustrate some of the finer details of our method, we present three examples and a case study on an electricity market in Honduras. Journal: IIE Transactions Pages: 1013-1031 Issue: 9 Volume: 47 Year: 2015 Month: 9 X-DOI: 10.1080/0740817X.2014.1001928 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.1001928 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:9:p:1013-1031 Template-Type: ReDIF-Article 1.0 Author-Name: Wai Kin Victor Chan Author-X-Name-First: Wai Kin Victor Author-X-Name-Last: Chan Author-Name: Cheng Hsu Author-X-Name-First: Cheng Author-X-Name-Last: Hsu Title: When human networks collide: the degree distributions of hyper-networks Abstract: The study of service value networks adds a new dimension of investigation to industrial systems: human networks. The existing literature shows that humans form hyper-networks to co-create value within and outside the traditional structures of an organization or an extended enterprise, such as social networking for innovation and e-commerce for supply chains. Since human networks tend to be composite and multi-dimensional, new results are needed to understand how networks collide during economic activities and what new coalesced networks will result. The hyper-network model uniquely describes this multi-layered, evolving nature of human networks and reveals some of the basic networking properties directly from either the initial community base networks or the colliding single networks. This article answers an important question about service value networks: What are the connection patterns of a network of networks, such as the distribution of the number of connections at a node, i.e., the degree distribution? We develop degree-distribution formulae for four prototypical classes of hyper-networks, which constitute a baseline analysis for the new study of network evolution in network science and service science. Journal: IIE Transactions Pages: 929-942 Issue: 9 Volume: 47 Year: 2015 Month: 9 X-DOI: 10.1080/0740817X.2014.980868 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.980868 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:9:p:929-942 Template-Type: ReDIF-Article 1.0 Author-Name: Eunhye Song Author-X-Name-First: Eunhye Author-X-Name-Last: Song Author-Name: Barry L. Nelson Author-X-Name-First: Barry L. Author-X-Name-Last: Nelson Title: Quickly Assessing Contributions to Input Uncertainty Abstract: “Input uncertainty” refers to the (often unmeasured) variability in simulation-based performance estimators that is a consequence of driving the simulation with input models (e.g., fully specified univariate distributions of i.i.d. inputs) that are based on real-world data. In 2012, Ankenman and Nelson presented a quick-and-easy diagnostic experiment to assess the overall effect of input uncertainty on simulation output.
When their method reveals that input uncertainty is substantial, the natural next questions are: which input distributions contribute the most to input uncertainty, and from which input distributions would it be most beneficial to collect more data? They proposed a possibly lengthy sequence of additional diagnostic experiments to answer these questions. In this paper we provide a method that obtains an estimator of the overall variance due to input uncertainty, the relative contribution to this variance of each input distribution, and a measure of the sensitivity of overall uncertainty to increasing the real-world sample size used to fit each distribution, all from a single diagnostic experiment. Our approach exploits a metamodel that relates the means and variances of the input distributions to the mean response of the simulation output, and bootstrapping of the real-world data to represent input-model uncertainty. Further, we investigate whether and how the simulation outputs from the nominal and diagnostic experiments may be combined to obtain a better performance estimator. For the case when the analyst obtains additional real-world data, refines the input models, and runs a follow-up experiment, we analyze whether and how the simulation outputs from all three experiments should be combined. Numerical illustrations are provided. Journal: IIE Transactions Pages: 893-909 Issue: 9 Volume: 47 Year: 2015 Month: 9 X-DOI: 10.1080/0740817X.2014.980869 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.980869 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:9:p:893-909 Template-Type: ReDIF-Article 1.0 Author-Name: M. Tekin Author-X-Name-First: M. Author-X-Name-Last: Tekin Author-Name: S. Özekici Author-X-Name-First: S. Author-X-Name-Last: Özekici Title: Mean-Variance Newsvendor Model with Random Supply and Financial Hedging Abstract: In this paper, we follow a mean-variance (MV) approach to the newsvendor model. Unlike the risk-neutral newsvendor that is mostly adopted in the literature, the MV newsvendor considers the risks in demand as well as in supply. We further consider the case where the randomness in demand and supply is correlated with the financial markets. The MV newsvendor hedges demand and supply risks by investing in a portfolio composed of various financial instruments. The problem therefore includes both the determination of the optimal ordering policy and the selection of the optimal portfolio. Our aim is to maximize the hedged MV objective function. We provide explicit characterizations of the structure of the optimal policy. We also present numerical examples to illustrate the effects of risk-aversion on the optimal order quantity and the effects of financial hedging on risk reduction. Journal: IIE Transactions Pages: 910-928 Issue: 9 Volume: 47 Year: 2015 Month: 9 X-DOI: 10.1080/0740817X.2014.981322 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.981322 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:47:y:2015:i:9:p:910-928 Template-Type: ReDIF-Article 1.0 Author-Name: Issac Shams Author-X-Name-First: Issac Author-X-Name-Last: Shams Author-Name: Saeede Ajorlou Author-X-Name-First: Saeede Author-X-Name-Last: Ajorlou Author-Name: Kai Yang Author-X-Name-First: Kai Author-X-Name-Last: Yang Title: Bayesian component selection in multi-response hierarchical structured additive models with an application to clinical workload prediction in patient-centered medical homes Abstract: Motivated by a large health care data set obtained from the U.S. Veterans Health Administration (VHA), we develop a multivariate version of hierarchical structured additive regression (STAR) models that involves a set of health care responses defined at the lowest level of the hierarchy, a set of patient factors to account for individual heterogeneity, and a set of higher-level effects to capture dependence between patients within the same medical home team and facility. We show how a special class of such models can equivalently be represented and estimated in a structural equation modeling framework. We then propose a Bayesian component selection approach with a spike-and-slab prior structure that allows inclusion or exclusion of single effects as well as of grouped coefficients representing particular model terms. A simple parameter expansion is used to improve mixing and convergence properties of the Markov chain Monte Carlo simulation. The proposed methods are applied to a real-world application of the VHA patient-centered medical home (PCMH) data and help to provide a good prediction of the clinical workload portfolio for a certain mix of health care professionals, based on key patient demographic, diagnostic, and medical attributes. Journal: IIE Transactions Pages: 943-960 Issue: 9 Volume: 47 Year: 2015 Month: 9 X-DOI: 10.1080/0740817X.2014.982840 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.982840 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:9:p:943-960 Template-Type: ReDIF-Article 1.0 Author-Name: Qipeng P. Zheng Author-X-Name-First: Qipeng P. Author-X-Name-Last: Zheng Author-Name: Siqian Shen Author-X-Name-First: Siqian Author-X-Name-Last: Shen Author-Name: Yuhui Shi Author-X-Name-First: Yuhui Author-X-Name-Last: Shi Title: Loss-constrained minimum cost flow under arc failure uncertainty with applications in risk-aware kidney exchange Abstract: In this article, we study a Stochastic Minimum Cost Flow (SMCF) problem under arc failure uncertainty, where an arc flow solution may correspond to multiple path flow representations. We assume that the failure of an arc will cause flow losses on all paths using that arc, and for any path carrying positive flows, the failure of any arc on the path will cause the loss of all flows carried by the path. We formulate two SMCF variants to minimize the cost of arc flows, while respectively restricting the Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) of random path flow losses due to uncertain arc failure (reflected as network topological changes). We formulate a linear program to compute possible losses, yielding a mixed-integer programming formulation of SMCF-VaR and a linear programming formulation of SMCF-CVaR. We present a kidney exchange problem under uncertain match failure as an application and use the two SMCF models to maximize the utility/social welfare of pairing kidneys subject to constrained risk of utility losses.
Our results show the efficacy of our approaches, the conservatism of using CVaR, and the optimal flow patterns given by the VaR and CVaR models on diverse instances. Journal: IIE Transactions Pages: 961-977 Issue: 9 Volume: 47 Year: 2015 Month: 9 X-DOI: 10.1080/0740817X.2014.991476 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.991476 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:9:p:961-977 Template-Type: ReDIF-Article 1.0 Author-Name: Jiun Hong Chan Author-X-Name-First: Jiun Hong Author-X-Name-Last: Chan Author-Name: Mark Joshi Author-X-Name-First: Mark Author-X-Name-Last: Joshi Title: Optimal limit methods for computing sensitivities of discontinuous integrals including triggerable derivative securities Abstract: We introduce an approach to computing sensitivities of discontinuous integrals. The methodology is generic in that it only requires knowledge of the simulation scheme and the location of the integrand’s singularities. The methodology is proven to be optimal in terms of minimizing the variance of the measure changes. For piecewise constant payoffs this minimizes the variance of Monte Carlo sensitivities. An efficient adjoint implementation is discussed, and the method is shown to be effective for a number of natural financial examples including double barrier options and triggerable interest rate derivative securities. Journal: IIE Transactions Pages: 978-997 Issue: 9 Volume: 47 Year: 2015 Month: 9 X-DOI: 10.1080/0740817X.2014.998390 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.998390 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:9:p:978-997 Template-Type: ReDIF-Article 1.0 Author-Name: Zheng Zhang Author-X-Name-First: Zheng Author-X-Name-Last: Zhang Author-Name: Xiaolan Xie Author-X-Name-First: Xiaolan Author-X-Name-Last: Xie Title: Simulation-based optimization for surgery appointment scheduling of multiple operating rooms Abstract: This study is devoted to appointment scheduling for a sequence of surgeries with random durations served by multiple operating rooms (Multi-OR). Surgeries are assigned to ORs dynamically on a first-come, first-served (FCFS) basis. The study materially differs from past literature in that dynamic assignments are proactively anticipated in the determination of appointment times. A discrete-event framework is proposed to model the execution of the surgery schedule and to evaluate the sample path gradient of a total cost incurred by surgeon waiting, OR idling, and OR overtime. The sample path cost function is shown to be unimodal, Lipschitz-continuous, and differentiable w.p.1, and the expected cost function is shown to be continuously differentiable. A stochastic approximation algorithm based on unbiased gradient estimators is proposed, and extensive numerical experiments suggest that it converges to a global optimum. A series of numerical experiments is performed to show the significant benefits of the Multi-OR setting and properties of the optimal solution with respect to various system parameters such as cost structure and numbers of surgeries and ORs. Journal: IIE Transactions Pages: 998-1012 Issue: 9 Volume: 47 Year: 2015 Month: 9 X-DOI: 10.1080/0740817X.2014.999900 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.999900 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:47:y:2015:i:9:p:998-1012 Template-Type: ReDIF-Article 1.0 Author-Name: Yu-Hung Chien Author-X-Name-First: Yu-Hung Author-X-Name-Last: Chien Author-Name: Zhe George Zhang Author-X-Name-First: Zhe George Author-X-Name-Last: Zhang Title: Analysis of a hybrid warranty policy for discrete-time operating products Abstract: This article considers a warranty policy consisting of a renewable free-replacement period and a rebate period for products operating in discrete time. Under such a Hybrid Warranty Policy (HWP), if a product fails during the first N periods (from period 1 to period N), it is replaced with either a new or a repaired unit for free (or at the manufacturer's expense) and the policy is renewed; if it fails in the next W − N periods (from period N + 1 to period W), then the manufacturer refunds a pre-specified proportion of the sales price to the buyer. A pure rebate warranty policy and a pure renewable replacement policy can be considered special cases of the HWP. With the HWP, customer service and warranty cost can be traded off. The conditions under which the HWP is more cost-effective are derived from the perspective of the manufacturer or seller. Some structural properties of the HWP are examined. Furthermore, how to choose between using new and repaired products to replace failed products is discussed. A numerical example is presented to demonstrate the computation of the optimal N*, the decision variable of the HWP, and a sensitivity analysis is performed to gain some managerial insights for practitioners. Journal: IIE Transactions Pages: 442-459 Issue: 5 Volume: 47 Year: 2015 Month: 5 X-DOI: 10.1080/0740817X.2014.953645 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.953645 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:5:p:442-459 Template-Type: ReDIF-Article 1.0 Author-Name: Lei Jiang Author-X-Name-First: Lei Author-X-Name-Last: Jiang Author-Name: Qianmei Feng Author-X-Name-First: Qianmei Author-X-Name-Last: Feng Author-Name: David W. Coit Author-X-Name-First: David W. Author-X-Name-Last: Coit Title: Modeling zoned shock effects on stochastic degradation in dependent failure processes Abstract: This article studies a system that experiences two dependent competing failure processes, in which shocks are categorized into different shock zones. These two failure processes, a stochastic degradation process and a random shock process, are dependent because arriving shocks can cause instantaneous damage to the degradation process. In existing studies, every shock causes abrupt damage to degradation. However, this may not be the case when shock loads are small and within the tolerance of system resistance. In the proposed model, only shock loads that are larger than a certain level are considered to cause abrupt damage to degradation, which makes this new model realistic and challenging. Shocks are divided into three zones based on their magnitudes: safety zone, damage zone, and fatal zone. The abrupt damage is modeled using an explicit function of shock load exceedances (differences between load magnitudes and a given threshold). Due to the complexity in modeling these two dependent stochastic failure processes, no closed form of the reliability function can be derived. Monte Carlo importance sampling is used to estimate the system reliability. Finally, two application examples with sensitivity analyses are presented to demonstrate the models.
Journal: IIE Transactions Pages: 460-470 Issue: 5 Volume: 47 Year: 2015 Month: 5 X-DOI: 10.1080/0740817X.2014.955152 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.955152 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:5:p:460-470 Template-Type: ReDIF-Article 1.0 Author-Name: Linkan Bian Author-X-Name-First: Linkan Author-X-Name-Last: Bian Author-Name: Nagi Gebraeel Author-X-Name-First: Nagi Author-X-Name-Last: Gebraeel Author-Name: Jeffrey P. Kharoufeh Author-X-Name-First: Jeffrey P. Author-X-Name-Last: Kharoufeh Title: Degradation modeling for real-time estimation of residual lifetimes in dynamic environments Abstract: This article presents a methodology for modeling degradation signals from components functioning under dynamically evolving environment conditions. In situ sensor signals related to the degradation process, as well as the environment conditions, are utilized to predict and update, in real time, the distribution of a component’s residual lifetime. The model assumes that the time-dependent rate at which a component’s degradation signal increases (or decreases) is affected by the severity of the current environmental or operational conditions. These conditions are assumed to evolve as a continuous-time Markov chain. Unique to the proposed model is the union of historical data with real-time, sensor-based data to update the signal parameters, environment parameters, and the residual lifetime distribution of the component within a Bayesian framework. Journal: IIE Transactions Pages: 471-486 Issue: 5 Volume: 47 Year: 2015 Month: 5 X-DOI: 10.1080/0740817X.2014.955153 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.955153 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:5:p:471-486 Template-Type: ReDIF-Article 1.0 Author-Name: Li Hao Author-X-Name-First: Li Author-X-Name-Last: Hao Author-Name: Nagi Gebraeel Author-X-Name-First: Nagi Author-X-Name-Last: Gebraeel Author-Name: Jianjun Shi Author-X-Name-First: Jianjun Author-X-Name-Last: Shi Title: Simultaneous signal separation and prognostics of multi-component systems: the case of identical components Abstract: When monitoring complex engineering systems, sensors often measure mixtures of signals that are unique to individual components (component signals). However, isolating component signals directly from sensor signals can be a challenge. As an example, in vibration monitoring of a rotating machine, if different components generate vibration signals at similar frequencies, they cannot be distinguished using traditional spectrum analysis (they are non-separable). However, developing degradation signals from component signals is important to monitor the deterioration of crucial components and to predict their residual lifetimes. This article proposes a simultaneous signal separation and prognostics framework for multi-component systems with non-separable component signals. In the signal separation stage, an Independent Component Analysis (ICA) algorithm is used to isolate component signals from mixed sensor signals, and an online amplitude recovery procedure is used to recover amplitude information that is lost after applying the ICA. In the prognostics stage, an adaptive prognostics method that models component degradation signals as continuous stochastic processes is used to predict the residual lifetimes of individual components.
A case study is presented that investigates the performance of the signal separation stage and that of the final residual-life prediction under different conditions. The simulation results show that the methodology is reasonably robust. Journal: IIE Transactions Pages: 487-504 Issue: 5 Volume: 47 Year: 2015 Month: 5 X-DOI: 10.1080/0740817X.2014.955357 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.955357 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:5:p:487-504 Template-Type: ReDIF-Article 1.0 Author-Name: Qiang Huang Author-X-Name-First: Qiang Author-X-Name-Last: Huang Author-Name: Jizhe Zhang Author-X-Name-First: Jizhe Author-X-Name-Last: Zhang Author-Name: Arman Sabbaghi Author-X-Name-First: Arman Author-X-Name-Last: Sabbaghi Author-Name: Tirthankar Dasgupta Author-X-Name-First: Tirthankar Author-X-Name-Last: Dasgupta Title: Optimal offline compensation of shape shrinkage for three-dimensional printing processes Abstract: Dimensional accuracy is a key control issue in direct three-dimensional (3D) printing. Part shrinkage due to material phase changes often leads to deviations in the final shape, requiring extra post-machining steps for correction. Shrinkage has traditionally been analyzed through finite element simulation and experimental investigations. Systematic models for accuracy control through shrinkage compensation are rarely available, particularly for complete control of all local features. To fill the gap for direct printing and compensate for shape shrinkage, this article develops a new approach to (i) model and predict part shrinkage and (ii) derive an optimal shrinkage compensation plan to achieve dimensional accuracy. The developed approach is demonstrated both analytically and experimentally in a stereolithography process, one of the most widely employed 3D printing techniques. Experimental results demonstrate that the proposed compensation approach achieves an order-of-magnitude reduction in geometric errors for cylindrical products. Journal: IIE Transactions Pages: 431-441 Issue: 5 Volume: 47 Year: 2015 Month: 5 X-DOI: 10.1080/0740817X.2014.955599 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.955599 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:5:p:431-441 Template-Type: ReDIF-Article 1.0 Author-Name: Ramin Moghaddass Author-X-Name-First: Ramin Author-X-Name-Last: Moghaddass Author-Name: Ming J Zuo Author-X-Name-First: Ming J Author-X-Name-Last: Zuo Author-Name: Yu Liu Author-X-Name-First: Yu Author-X-Name-Last: Liu Author-Name: Hong-zhong Huang Author-X-Name-First: Hong-zhong Author-X-Name-Last: Huang Title: Predictive analytics using a nonhomogeneous semi-Markov model and inspection data Abstract: Predicting the remaining useful life plays an important role in minimizing the overall maintenance cost of mechanical systems. Although most conventional reliability models deal with binary systems to perform such predictions, in most practical cases, mechanical systems experience multiple levels of degradation states before failure. When the degradation level associated with such a multistate deteriorating process is monitored only at fixed inspection points, the extracted monitoring data are interval-censored. Interval censoring can influence both the parameter estimation (model training) and the calculation of principal reliability measures.
This article studies the problem of parameter estimation and the development of principal prognostic-based reliability measures, including the reliability function and the mean residual life, for a multistate device under limited inspection capacity. The correctness of the introduced models is demonstrated through simulation-based numerical experiments. Finally, an example of the wear process of the shell of a bearing is used to demonstrate the application of the proposed models. Journal: IIE Transactions Pages: 505-520 Issue: 5 Volume: 47 Year: 2015 Month: 5 X-DOI: 10.1080/0740817X.2014.959672 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.959672 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:5:p:505-520 Template-Type: ReDIF-Article 1.0 Author-Name: Nailong Zhang Author-X-Name-First: Nailong Author-X-Name-Last: Zhang Author-Name: Qingyu Yang Author-X-Name-First: Qingyu Author-X-Name-Last: Yang Title: Optimal maintenance planning for repairable multi-component systems subject to dependent competing risks Abstract: Many complex multi-component systems suffer from dependent competing risks. The reliability modeling and maintenance planning of repairable dependent competing risks systems are challenging tasks because the repair of the failed component can change the lifetime of the other components when multiple components fail dependently. This article first proposes a generally dependent latent age model to capture the dependence of competing risks under general component repairs. Based on the proposed reliability model, both system- and component-level periodic inspection-based maintenance policies are considered for repairable multi-component systems that are subject to dependent competing risks. Under the system-level maintenance policy, the entire system is restored to the as-good-as-new state once a failure is detected; under the component-level maintenance policy, only the failed component is repaired, and the repair is imperfect. The optimal solution of the system-level policy is obtained by using renewal theory. The optimal solution of the component-level policy, however, cannot be obtained analytically, due to its complex failure and repair characteristics. A simulation-based optimization approach with stochastic approximation is developed to solve the optimization problem for the component-level policy. The developed methods are illustrated by using a cylinder head assembly cell that consists of multiple stations. Journal: IIE Transactions Pages: 521-532 Issue: 5 Volume: 47 Year: 2015 Month: 5 X-DOI: 10.1080/0740817X.2014.974115 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.974115 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:5:p:521-532 Template-Type: ReDIF-Article 1.0 Author-Name: Mandyam Srinivasan Author-X-Name-First: Mandyam Author-X-Name-Last: Srinivasan Author-Name: S. Viswanathan Author-X-Name-First: S. Author-X-Name-Last: Viswanathan Title: Optimal work-in-process inventory levels for high-variety, low-volume manufacturing systems Abstract: This article considers a manufacturing system that operates in a high-variety, low-volume environment, with significant setup times. The goal is to determine the optimal Work-In-Process (WIP) inventory levels for operating the system to meet the required demand for each product.
The decision variables are the number of pallets (containers) for each product and the number of units in each pallet (lot size). The objective is to minimize the total WIP inventory across all products. To capture congestion in the system, it is modeled as a closed queueing network with multiple product types. However, this leads to a complex non-linear integer program with a non-convex objective function. A lower bound on the objective function is derived and used to obtain upper and lower bounds on the number of pallets for each product. The bounds on the number of pallets allow the use of exhaustive enumeration within these bounds to obtain the optimal solution to this complex queueing network-based optimization problem. A simple heuristic is developed to further reduce the number of candidate configurations evaluated in the search for the optimal solution. A computational study reveals that the heuristic obtains the optimal solution in many of the test instances. [Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for supplemental resources containing details on some procedures and heuristics.] Journal: IIE Transactions Pages: 379-391 Issue: 6 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170902761406 File-URL: http://hdl.handle.net/10.1080/07408170902761406 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:6:p:379-391 Template-Type: ReDIF-Article 1.0 Author-Name: Pratik Parikh Author-X-Name-First: Pratik Author-X-Name-Last: Parikh Author-Name: Russell Meller Author-X-Name-First: Russell Author-X-Name-Last: Meller Title: A note on worker blocking in narrow-aisle order picking systems when pick time is non-deterministic Abstract: The focus of this work is to develop analytical models to estimate worker blocking in narrow-aisle order picking systems. The situation with deterministic pick times has already been reported in the literature, but the case with non-deterministic pick times remains an open question. Models that use a non-deterministic pick time for cases where the pick:walk-time ratios are 1:1 and ∞:1 are presented. It is shown that the results obtained using deterministic pick times are in fact sensitive to this assumption. Journal: IIE Transactions Pages: 392-404 Issue: 6 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903171043 File-URL: http://hdl.handle.net/10.1080/07408170903171043 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:6:p:392-404 Template-Type: ReDIF-Article 1.0 Author-Name: Seok Chang Author-X-Name-First: Seok Author-X-Name-Last: Chang Author-Name: Stanley Gershwin Author-X-Name-First: Stanley Author-X-Name-Last: Gershwin Title: Modeling and analysis of two unreliable batch machines with a finite buffer in between Abstract: This article considers two unreliable batch machines with a finite buffer in between. Batch machines process a set of parts simultaneously; the maximum number in the set is the size of the machine. The purpose of this article is threefold: (i) to present a model of these systems and its exact analysis; (ii) to present new qualitative insights and interpretations of system behavior; and (iii) to present the comparison between full-batch and partial-batch policies. We demonstrate new generalized conservation of flow and flow-rate–idle-time relationships.
Various performance measures of interest, such as production rate, mean size of batches served in each machine, machine efficiencies, probabilities of blocking and starvation, and expected in-process inventory, are presented. A reversibility property is demonstrated and deadlock behavior is described. The effect of the size of machines on performance measures is examined, new phenomena and insights are observed, and possible interpretations are presented. [Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following supplementary resources: datasets, additional tables, detailed proofs] Journal: IIE Transactions Pages: 405-421 Issue: 6 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903228934 File-URL: http://hdl.handle.net/10.1080/07408170903228934 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:6:p:405-421 Template-Type: ReDIF-Article 1.0 Author-Name: Yang Liu Author-X-Name-First: Yang Author-X-Name-Last: Liu Author-Name: Jingshan Li Author-X-Name-First: Jingshan Author-X-Name-Last: Li Title: Split and merge production systems: performance analysis and structural properties Abstract: Production split and merge operations are widely used in many manufacturing systems to increase production capacity and variety, improve product quality, and carry out scheduling and control activities. This article presents analytical methods to analyze split and merge production systems with exponential machine reliability models, operating under circulate, strictly circulate, priority, and percentage split/merge policies. In addition to developing the recursive procedures for performance analysis, the structural properties of the systems and the impacts of routing policies on system performance are investigated. Journal: IIE Transactions Pages: 422-434 Issue: 6 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903394348 File-URL: http://hdl.handle.net/10.1080/07408170903394348 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:6:p:422-434 Template-Type: ReDIF-Article 1.0 Author-Name: Kwan Wee Author-X-Name-First: Kwan Author-X-Name-Last: Wee Author-Name: Maqbool Dada Author-X-Name-First: Maqbool Author-X-Name-Last: Dada Title: A make-to-stock manufacturing system with component commonality: A queuing approach Abstract: In this article, a manufacturing system that features component commonality and a production allocation mechanism that postpones inventory commitment is modeled. Under this mechanism, which is called the First-Use First-Serve (FUFS) approach, a component is committed to an order only when all other required components become available. Furthermore, production of a component can only occur if its inventory is below a threshold. The performance of the proposed model is compared with the benchmark case of the First-Come First-Served (FCFS) approach, which commits components to orders in the sequence in which the orders are received. The presented results show that FUFS outperforms FCFS on most system performance criteria. However, FCFS may outperform FUFS on some performance measures of dispersion when the workload is heavy. Journal: IIE Transactions Pages: 435-453 Issue: 6 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903544322 File-URL: http://hdl.handle.net/10.1080/07408170903544322 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:42:y:2010:i:6:p:435-453 Template-Type: ReDIF-Article 1.0 Author-Name: Kimberly P. Ellis Author-X-Name-First: Kimberly P. Author-X-Name-Last: Ellis Author-Name: Hanif D. Sherali Author-X-Name-First: Hanif D. Author-X-Name-Last: Sherali Author-Name: Lance Saunders Author-X-Name-First: Lance Author-X-Name-Last: Saunders Author-Name: Shengzhi Shao Author-X-Name-First: Shengzhi Author-X-Name-Last: Shao Author-Name: Charlie Crawford Author-X-Name-First: Charlie Author-X-Name-Last: Crawford Title: Bulk tank allocation to improve distribution planning for the industrial gas industry Abstract: An important strategic-level decision in the industrial gas industry involves the allocation of bulk gas tanks to customer sites. For a set of customers having specified demands, the bulk tank allocation problem determines the preferred size of bulk tanks to assign to customer sites in order to minimize tank investment costs and gas distribution costs for the industrial gas distributor. For a single gas product and multiple depots, the problem is modeled as a mixed-integer program and then solved using a decomposition approach. A heuristic for clustering customers and developing routes is proposed based on a sweep algorithm. These potential routes serve as input for the bulk tank allocation problem, which selects routes and assigns tanks to customer sites. The approach provides a valuable method for analyzing strategic-level decisions while incorporating operational-level characteristics. The approach is evaluated using data from an international industrial gas distributor to demonstrate the potential for improving the efficiency of gas distribution. Journal: IIE Transactions Pages: 557-566 Issue: 6 Volume: 46 Year: 2014 Month: 6 X-DOI: 10.1080/0740817X.2013.849831 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.849831 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:6:p:557-566 Template-Type: ReDIF-Article 1.0 Author-Name: Nickolas K. Freeman Author-X-Name-First: Nickolas K. Author-X-Name-Last: Freeman Author-Name: John Mittenthal Author-X-Name-First: John Author-X-Name-Last: Mittenthal Author-Name: Sharif H. Melouk Author-X-Name-First: Sharif H. Author-X-Name-Last: Melouk Title: Parallel-machine scheduling to minimize overtime and waste costs Abstract: This article considers scheduling products in a parallel, non-identical machine environment subject to sequence-dependent setup costs and sequence-dependent setup times, where the production waste and processing time of a product depend on feasible machine assignments. A Mixed-Integer Programming (MIP) formulation is developed that captures trade-offs between overtime labor costs and waste costs. Two solution procedures are developed to address large problem instances. One procedure is an algorithm that determines a vector of product-to-machine assignments that assists an MIP solver to find an initial feasible solution. The second procedure is a decomposition heuristic that iteratively solves a relaxed subproblem and uses the subproblem solution to fix assignment variables in the main MIP formulation. In addition, bounds on the quality of solutions found using the decomposition heuristic are presented. Experiments are conducted that show the developed formulation outperforms more traditional scheduling objectives with respect to the waste and overtime labor costs.
Additional experimentation investigates the effects that problem parameter values have on total waste and labor costs, the performance of the approaches, and the use of overtime labor. Journal: IIE Transactions Pages: 601-618 Issue: 6 Volume: 46 Year: 2014 Month: 6 X-DOI: 10.1080/0740817X.2013.851432 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.851432 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:6:p:601-618 Template-Type: ReDIF-Article 1.0 Author-Name: Christos Koulamas Author-X-Name-First: Christos Author-X-Name-Last: Koulamas Title: A note on the scheduling problem with revised delivery times and job-dependent tardiness penalties Abstract: This article considers the single-machine scheduling problem with revised delivery times and job-dependent tardiness penalties. It is shown that the exact dynamic programming algorithm for the simpler problem with job-independent tardiness penalties can be extended to the problem with job-dependent tardiness penalties. When the job-dependent tardiness penalties are bounded by a polynomial function of the number of jobs, the dynamic programming algorithm can be converted to a Fully Polynomial Time Approximation Scheme by implementing the “scaling the input” technique. Journal: IIE Transactions Pages: 619-622 Issue: 6 Volume: 46 Year: 2014 Month: 6 X-DOI: 10.1080/0740817X.2013.851435 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.851435 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:6:p:619-622 Template-Type: ReDIF-Article 1.0 Author-Name: Da Lu Author-X-Name-First: Da Author-X-Name-Last: Lu Author-Name: Fatma Gzara Author-X-Name-First: Fatma Author-X-Name-Last: Gzara Author-Name: Samir Elhedhli Author-X-Name-First: Samir Author-X-Name-Last: Elhedhli Title: Facility location with economies and diseconomies of scale: models and column generation heuristics Abstract: Most of the literature on facility location assumes a fixed setup and a linear variable cost. It is known, however, that as volume increases, cost savings are achieved through economies of scale, but when the volume exceeds a certain level, diseconomies of scale occur and marginal costs start to increase. This is best captured by an inverse S-shaped cost function that is initially concave and then turns convex. This article studies such a class of location problems and proposes solution methods based on Lagrangian relaxation, column generation, and branch-and-bound. A nonlinear mixed-integer programming formulation is introduced that is decomposable by environment type; i.e., economies or diseconomies of scale. The resulting concave and convex subproblems are then solved efficiently as piecewise convex and concave bounded knapsack problems, respectively. A heuristic solution is found based on dual information from the column generation master problems and the solution of the subproblems. Armed with the Lagrangian lower bound and the heuristic solution, the procedure is embedded in a branch-and-price-type algorithm. Unfortunately, due to the nonlinearity of the problem, global optimality is not guaranteed, but high-quality solutions are achieved depending on the amount of branching performed. The methodology is tested on three function types and four cost settings. Solutions with an average gap of 1.1% are found within an average of 20 minutes.
Journal: IIE Transactions Pages: 585-600 Issue: 6 Volume: 46 Year: 2014 Month: 6 X-DOI: 10.1080/0740817X.2013.860508 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.860508 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:6:p:585-600 Template-Type: ReDIF-Article 1.0 Author-Name: Bacel Maddah Author-X-Name-First: Bacel Author-X-Name-Last: Maddah Author-Name: Ebru K. Bish Author-X-Name-First: Ebru K. Author-X-Name-Last: Bish Author-Name: Hussein Tarhini Author-X-Name-First: Hussein Author-X-Name-Last: Tarhini Title: Newsvendor pricing and assortment under Poisson decomposition Abstract: This article studies the structure of and the interdependence among the critical decisions on pricing, inventory, and assortment of a retailer’s product line. It considers substitutable retail products that are horizontally differentiated variants under a logit consumer choice model, within a newsvendor-type supply setting and homogeneous pricing. The focus of this article is on analyzing joint pricing and inventory decisions for a given assortment, within a natural Poisson decomposition setting, under a “multiplicative–additive” demand model, where both the variance and the coefficient of variation of the demand depend on the price. For this problem, a Taylor series-based approximation is developed for the inventory cost and its accuracy is subsequently demonstrated. It is then shown that, under this approximation, the expected profit is unimodal in the price, and sufficient conditions are provided for the “risky” price, at optimal inventory, to be above (or below) the “riskless” price, pertaining to a make-to-order system. It is also shown that inventory considerations alter the behavior of the risky price in demand and cost parameters. Furthermore, joint assortment and inventory decisions under exogenous pricing are considered, and the unimodality of the expected profit in the assortment size is proven. Also, a comparative statics analysis is performed and insights are presented. Journal: IIE Transactions Pages: 567-584 Issue: 6 Volume: 46 Year: 2014 Month: 6 X-DOI: 10.1080/0740817X.2013.860509 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.860509 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:6:p:567-584 Template-Type: ReDIF-Article 1.0 Author-Name: Liang Liang Author-X-Name-First: Liang Author-X-Name-Last: Liang Author-Name: Zhao-Qiong Li Author-X-Name-First: Zhao-Qiong Author-X-Name-Last: Li Author-Name: Wade Cook Author-X-Name-First: Wade Author-X-Name-Last: Cook Author-Name: Joe Zhu Author-X-Name-First: Joe Author-X-Name-Last: Zhu Title: Data envelopment analysis efficiency in two-stage networks with feedback Abstract: Conventional applications of data envelopment analysis generally treat the Decision-Making Unit (DMU) as a black box in that the internal processes are not examined in detail. In some situations, such as the measurement of performance of a set of supply chains, the DMU can be viewed as exhibiting a network structure. A significant body of recent literature has examined a particular form of network structure, namely, where the DMU is a two-stage serial process in which the outputs from the first stage are intermediate variables that serve as inputs to the second stage. The current article extends this idea to include those situations where outputs from the second stage can be fed back as inputs to the first stage.
Such feedback variables thus serve a dual role. Models are developed for examining performance in this feedback setting and are illustrated using an application involving the measurement of performance of a set of Chinese universities. Journal: IIE Transactions Pages: 309-322 Issue: 5 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.509307 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.509307 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:5:p:309-322 Template-Type: ReDIF-Article 1.0 Author-Name: Minsuk Suh Author-X-Name-First: Minsuk Author-X-Name-Last: Suh Author-Name: Goker Aydin Author-X-Name-First: Goker Author-X-Name-Last: Aydin Title: Dynamic pricing of substitutable products with limited inventories under logit demand Abstract: This article considers the dynamic pricing of two substitutable products over a predetermined, finite selling season. The initial inventory levels of the products are fixed exogenously and there are no replenishment opportunities during the season. It is assumed that each arriving customer chooses from available products based on the multinomial logit choice model, which captures the effect of prices on consumer choice. Every time a product runs out of stock, the set of choices shrinks, capturing the effect of stockouts on consumer choice. It is shown that, under the optimal pricing policy, the marginal value of an item is increasing in the remaining time and decreasing in its own stock level and the other product's stock level. Despite such non-surprising behavior on the part of marginal values, the optimal price itself is not simply monotonic in the remaining time or the other product's stock level. For example, a product's optimal price may increase if the remaining time decreases or if the total inventory grows. It is shown that such optimal behavior can be understood through alternative gauges such as the optimal price difference between the two products and the optimal purchase probabilities. Journal: IIE Transactions Pages: 323-331 Issue: 5 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.521803 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.521803 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:5:p:323-331 Template-Type: ReDIF-Article 1.0 Author-Name: Jian Chen Author-X-Name-First: Jian Author-X-Name-Last: Chen Author-Name: Youhua Chen Author-X-Name-First: Youhua Author-X-Name-Last: Chen Author-Name: Mahmut Parlar Author-X-Name-First: Mahmut Author-X-Name-Last: Parlar Author-Name: Yongbo Xiao Author-X-Name-First: Yongbo Author-X-Name-Last: Xiao Title: Optimal inventory and admission policies for drop-shipping retailers serving in-store and online customers Abstract: This article studies the optimal inventory and dynamic admission policies of two physical retailers who, besides selling through their traditional in-store channels, also act as drop-shippers for an online retailer (e-tailer). The e-tailer carries no inventory of its own and always turns to one of the two physical retailers for order fulfillment. The considered scenario is the one in which retailer 1 (R1) and retailer 2 (R2) act as the primary and secondary drop-shippers of the e-tailer, respectively. While trying to maximize their respective revenues, both retailers face the problem of whether or not to accept the e-tailer's order-fulfillment request. 
It is initially assumed that the initial inventory levels of each retailer are fixed and that R1 shares his inventory information with R2. By adopting a revenue management framework, the dynamic admission policies of both retailers are studied and it is shown that R1 and R2 should implement one-dimensional and two-dimensional threshold policies, respectively. The scenario in which R1 does not share his inventory information with R2 is then considered. For this scenario, two heuristic policies for R2 are proposed and they are compared to the optimal policy when information is shared. A detailed sensitivity analysis for varying parameter values is presented, which shows the impact of information sharing between the two retailers. Finally, the assumption of fixed initial inventory levels is relaxed and the optimal initial inventory levels of each retailer that maximize their expected profits are determined. Journal: IIE Transactions Pages: 332-347 Issue: 5 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.540637 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.540637 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:5:p:332-347 Template-Type: ReDIF-Article 1.0 Author-Name: Elodie Adida Author-X-Name-First: Elodie Author-X-Name-Last: Adida Author-Name: Po-Ching DeLaurentis Author-X-Name-First: Po-Ching Author-X-Name-Last: DeLaurentis Author-Name: Mark Lawley Author-X-Name-First: Mark Author-X-Name-Last: Lawley Title: Hospital stockpiling for disaster planning Abstract: In response to the increasing threat of terrorist attacks and natural disasters, governmental and private organizations worldwide have invested significant resources in disaster planning activities. This article addresses joint inventory stockpiling of medical supplies for groups of hospitals prior to a disaster. Specifically, the problem of determining the stockpile quantity of a medical item at several hospitals is considered. It is assumed that demand is uncertain and driven by the characteristics of a variety of disaster scenarios. Furthermore, it is assumed that hospitals have mutual aid agreements for inventory sharing in the event of a disaster. Each hospital's desire to minimize its stockpiling cost together with the potential to borrow from other stockpiles creates individual incentives well represented in a game-theoretic framework. This problem is modeled as a non-cooperative strategic game, the existence of a Nash equilibrium is proved, and the equilibrium solutions are analyzed. A centralized model of stockpile decision making where a central decision maker optimizes the entire system is also examined and the solutions obtained using this model are compared to those of the decentralized (game) model. The comparison provides some managerial insights and public health policy implications valuable for disaster planning. Journal: IIE Transactions Pages: 348-362 Issue: 5 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.540639 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.540639 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
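Editorial aside (not part of the bibliographic record): the non-cooperative stockpiling game in the abstract above can be illustrated with a toy best-response computation. Everything below is hypothetical (two hospitals, Poisson disaster demands, unit holding and shortage costs); it is a sketch of the modeling idea, not the article's model.

```python
import numpy as np

rng = np.random.default_rng(0)
scenarios = rng.poisson(lam=(40, 60), size=(1000, 2))  # hypothetical disaster demands
HOLD, PENALTY = 1.0, 10.0   # hypothetical unit holding / unmet-demand costs
levels = range(0, 121, 5)   # candidate stockpile quantities

def cost(i, s):
    """Expected cost of hospital i under stockpiles s = [s0, s1]: holding
    cost plus a penalty on demand left unmet after borrowing the other
    hospital's surplus (the mutual-aid agreement)."""
    own_short = np.maximum(scenarios[:, i] - s[i], 0)
    partner_surplus = np.maximum(s[1 - i] - scenarios[:, 1 - i], 0)
    unmet = np.maximum(own_short - partner_surplus, 0)
    return HOLD * s[i] + PENALTY * unmet.mean()

s = [0, 0]
for _ in range(50):  # best-response dynamics; a fixed point is a Nash equilibrium
    best = [min(levels, key=lambda x, i=i: cost(i, [x, s[1]] if i == 0 else [s[0], x]))
            for i in range(2)]
    if best == s:
        break
    s = best
print("equilibrium stockpiles:", s)
```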
Handle: RePEc:taf:uiiexx:v:43:y:2011:i:5:p:348-362 Template-Type: ReDIF-Article 1.0 Author-Name: Seyed Iravani Author-X-Name-First: Seyed Author-X-Name-Last: Iravani Author-Name: Bora Kolfal Author-X-Name-First: Bora Author-X-Name-Last: Kolfal Author-Name: Mark Van Oyen Author-X-Name-First: Mark Author-X-Name-Last: Van Oyen Title: Capability flexibility: a decision support methodology for parallel service and manufacturing systems with flexible servers Abstract: To obtain improved performance, many firms pursue operational flexibility by endowing their production operations with multiple capabilities (e.g., multi-skilled workers, flexible machines and/or flexible plants). This article focuses on the problem of ranking (according to average wait in queue) alternative system designs that vary by capacity and the structure of capabilities for open, parallel queueing networks with partially flexible servers. Prior literature introduced the Structural Flexibility (SF) concept and because the SF method was intended for a strategic context with very little information, it did not incorporate mean service times by demand type, server speeds, or wide ranges in demand arrival rates. This article develops the Capability Flexibility (CF) index methodology to extend the range of operational environments and designs that can be ranked. By showing the effectiveness of a deterministic, second-order approximation of a capability-design's relative flexibility/performance—the CF index—it proved possible to establish the insight that the proposed simple deterministic approximation of these complex stochastic systems is able to capture the dominant drivers of congestion of one design relative to another. Journal: IIE Transactions Pages: 363-382 Issue: 5 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.541177 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.541177 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:5:p:363-382 Template-Type: ReDIF-Article 1.0 Author-Name: William H. Woodall Author-X-Name-First: William H. Author-X-Name-Last: Woodall Author-Name: Meng J. Zhao Author-X-Name-First: Meng J. Author-X-Name-Last: Zhao Author-Name: Kamran Paynabar Author-X-Name-First: Kamran Author-X-Name-Last: Paynabar Author-Name: Ross Sparks Author-X-Name-First: Ross Author-X-Name-Last: Sparks Author-Name: James D. Wilson Author-X-Name-First: James D. Author-X-Name-Last: Wilson Title: An overview and perspective on social network monitoring Abstract: In this expository article we give an overview of some statistical methods for the monitoring of social networks. We discuss the advantages and limitations of various methods as well as some relevant issues. One of our primary contributions is to give the relationships between network monitoring methods and monitoring methods in engineering statistics and public health surveillance. We encourage researchers in the industrial process monitoring area to work on developing and comparing the performance of social network monitoring methods. We also discuss some of the issues in social network monitoring and give a number of research ideas. Journal: IISE Transactions Pages: 354-365 Issue: 3 Volume: 49 Year: 2017 Month: 3 X-DOI: 10.1080/0740817X.2016.1213468 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1213468 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
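Editorial aside (not part of the bibliographic record): the overview above relates social network monitoring to engineering control charts. A minimal sketch of that connection, with hypothetical Poisson message counts and a plain three-sigma Shewhart-style limit (the article surveys far richer methods):

```python
import numpy as np

rng = np.random.default_rng(1)
# Phase I: hypothetical in-control daily message counts on a watched network,
# used to estimate the chart's center line and control limits.
phase1 = rng.poisson(lam=200, size=60)
mu, sigma = phase1.mean(), phase1.std(ddof=1)
ucl, lcl = mu + 3 * sigma, max(mu - 3 * sigma, 0.0)

# Phase II: the final five days carry a simulated communication burst.
phase2 = rng.poisson(lam=np.r_[np.full(20, 200.0), np.full(5, 260.0)])
for day, count in enumerate(phase2, start=1):
    if not lcl <= count <= ucl:
        print(f"signal on day {day}: count {count} outside [{lcl:.1f}, {ucl:.1f}]")
```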
Handle: RePEc:taf:uiiexx:v:49:y:2017:i:3:p:354-365 Template-Type: ReDIF-Article 1.0 Author-Name: Yaonan Kong Author-X-Name-First: Yaonan Author-X-Name-Last: Kong Author-Name: Zhisheng Ye Author-X-Name-First: Zhisheng Author-X-Name-Last: Ye Title: Interval estimation for k-out-of-n load-sharing systems Abstract: Load-sharing systems are commonly seen in industry; e.g., water pumps in a cooling system. In a load-sharing system, the stress level on each surviving component increases as components fail. The load-sharing nature of the component failure process creates difficulties in statistical inference for the system. This article develops two interval procedures for failure data from load-sharing systems. We assume that the component lifetime under each stress level follows an exponential distribution, a common distribution in reliability engineering. A log-linear link function is used to model the relationship between the stress levels and component lifetimes. In the two proposed interval procedures, we construct confidence intervals for the model parameters by using pivotal quantities and generalized pivotal quantities. Interval estimation for important reliability characteristics including the mean lifetime and the reliability of the load-sharing system is also discussed. A simulation study shows that the confidence intervals produced from the proposed procedures are more accurate than traditional approximate interval procedures, such as the large sample normal approximation and the bootstrap. A numerical example is used to demonstrate the performance of the proposed procedures. Journal: IISE Transactions Pages: 344-353 Issue: 3 Volume: 49 Year: 2017 Month: 3 X-DOI: 10.1080/0740817X.2016.1217102 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1217102 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:3:p:344-353 Template-Type: ReDIF-Article 1.0 Author-Name: Behnaz Hosseini Author-X-Name-First: Behnaz Author-X-Name-Last: Hosseini Author-Name: Barış Tan Author-X-Name-First: Barış Author-X-Name-Last: Tan Title: Simulation and optimization of continuous-flow production systems with a finite buffer by using mathematical programming Abstract: We present a mathematical programming approach for simulation and optimization of a general continuous-flow production system with an intermediate finite buffer. In this system, each station is represented with a discrete state space–continuous time process with given transition time distributions between the states and a set of flow rates associated with each discrete state. We develop a mathematical programming formulation to determine the critical time instances of the sample trajectory of the buffer that correspond to state transitions, buffer dynamics, and changing flow rates. We show that a simulated sample realization of the system is obtained by solving a mixed-integer linear program. The mathematical programming representation is also used to show that the production rate is a monotonically increasing function of the buffer capacity. We analyze the buffer capacity determination problem with the objective of determining the minimum buffer capacity that achieves a desired production rate and also with the objective of maximizing the profit. It is shown that the computational performance depends on the rates of change among system states and not on the number of states at each stage or on the buffer capacity.
Our numerical results show a significant computational improvement compared with using a discrete-event simulation. As a result, the mathematical programming approach is proposed as a viable alternative method for performance evaluation and optimization of continuous-flow systems with a finite buffer. Journal: IISE Transactions Pages: 255-267 Issue: 3 Volume: 49 Year: 2017 Month: 3 X-DOI: 10.1080/0740817X.2016.1217103 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1217103 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:3:p:255-267 Template-Type: ReDIF-Article 1.0 Author-Name: Yutian Yang Author-X-Name-First: Yutian Author-X-Name-Last: Yang Author-Name: Jonathan F. Bard Author-X-Name-First: Jonathan F. Author-X-Name-Last: Bard Title: Internal mail transport at processing & distribution centers Abstract: This article addresses a Vehicle Routing Problem (VRP) within mail processing and distribution centers. Throughout the day, large volumes of partially processed mail must be transferred between workstations in accordance with narrow time windows and a host of operational constraints. To facilitate management supervision, it is first necessary to cluster the pickup and delivery points into zones. Given these zones, the first objective is to solve a VRP to minimize the number of vehicles required to satisfy all demand requests and, second, to minimize the total distance traveled. A solution consists of an invariant assignment of vehicles to zones and a routing plan for each 8-hour shift of the day. The clustering is performed with a greedy randomized adaptive search procedure, and two heuristics are developed to find solutions to the VRP, which proved intractable for realistic instances. The heuristics are optimization based within a rolling horizon framework. The first uses a fixed time increment and the second a fixed number of requests for each sub-problem. The respective solutions are pieced together to determine the “optimal” fleet size and set of routes. An extensive analysis was undertaken to evaluate the relative performance of the two heuristics and to better understand how solution quality is affected by changes in parameter values, including sub-problem size, vehicle speed, number of zones, and time window length. Test data were provided by the Chicago center. Journal: IISE Transactions Pages: 285-303 Issue: 3 Volume: 49 Year: 2017 Month: 3 X-DOI: 10.1080/0740817X.2016.1217104 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1217104 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:3:p:285-303 Template-Type: ReDIF-Article 1.0 Author-Name: Barış Tan Author-X-Name-First: Barış Author-X-Name-Last: Tan Author-Name: Svenja Lagershausen Author-X-Name-First: Svenja Author-X-Name-Last: Lagershausen Title: On the output dynamics of production systems subject to blocking Abstract: Analyzing the output dynamics of a production system gives valuable information for operation and performance evaluation of a production system. In this article, we present an analytical method to determine the autocorrelation of the inter-departure times in queueing networks subject to blocking that can be represented by a Continuous-Time Markov Chain. We particularly focus on production systems that are modeled as open or closed queueing networks, and where stations have phase-type service time distributions. 
We use the analytical results for the mean and the variance of the time to produce a given number of products in queueing networks to determine the correlation of inter-departure times with different lags. We present a computationally efficient recursive method to determine the correlation of the inter-departure times in open and closed queueing networks. The method also yields closed-form expressions for the correlations of a two-station production line with exponential servers and a finite buffer. We show how the correlations develop with increasing lags subject to different processing time distributions, buffer capacities, and number of stations, in both open and closed queueing networks. As a result, we propose the analytical method given in this study as a tool to study the effects of design and control parameters on the output dynamics of production systems. Journal: IISE Transactions Pages: 268-284 Issue: 3 Volume: 49 Year: 2017 Month: 3 X-DOI: 10.1080/0740817X.2016.1222470 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1222470 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:3:p:268-284 Template-Type: ReDIF-Article 1.0 Author-Name: Xiao Huang Author-X-Name-First: Xiao Author-X-Name-Last: Huang Author-Name: Greys Sošić Author-X-Name-First: Greys Author-X-Name-Last: Sošić Author-Name: Gregory Kersten Author-X-Name-First: Gregory Author-X-Name-Last: Kersten Title: Selling through Priceline? On the impact of name-your-own-price in competitive market Abstract: Priceline.com patented the innovative pricing strategy, Name-Your-Own-Price (NYOP), that sells opaque products through customer-driven pricing. In this article, we study how competitive sellers with substitutable, non-replenishable goods may sell their products (i) as regular goods, through a direct channel at posted prices, and possibly at the same time (ii) as opaque goods, through a third-party channel that engages in NYOP. We establish a stylized model framework that incorporates three sets of stakeholders: two competing sellers, an intermediary NYOP firm, and a sequence of customers. We first characterize customers’ optimal purchasing/bidding decisions under various channel structures and then analyze corresponding sellers’ dynamic pricing equilibrium. We conduct extensive numerical studies to illustrate the impact of inventory and time on equilibrium prices, expected profit, and channel strategies. We find that the implications are highly dependent on channel structure (dual versus single). In particular, more inventory may reduce one’s expected profit under the dual structure, whereas this never happens when a seller only uses the direct channel. Interestingly, although competing sellers seldom benefit from the existence of NYOP channels, it is possible that one or both of the sellers adopt it in equilibrium. We identify timing, inventory levels, and channel opaqueness as key drivers for NYOP adoption and characterize equilibrium areas for each type of channel structure. Journal: IISE Transactions Pages: 304-319 Issue: 3 Volume: 49 Year: 2017 Month: 3 X-DOI: 10.1080/0740817X.2016.1237060 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1237060 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
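Editorial aside (not part of the bibliographic record): the pricing abstracts in this section lean on discrete-choice demand, in particular the multinomial logit model under which a stockout shrinks the choice set and shifts demand to the remaining products. A minimal sketch with hypothetical qualities and prices:

```python
import numpy as np

def mnl_probs(prices, quality, price_sens=1.0):
    """Multinomial logit purchase probabilities over the available
    products, with a no-purchase option whose utility is normalized to 0."""
    utilities = np.asarray(quality) - price_sens * np.asarray(prices)
    weights = np.exp(utilities)
    return weights / (1.0 + weights.sum())

# Hypothetical qualities/prices for two substitutable products.
both_in_stock = mnl_probs([5.0, 4.5], quality=[6.0, 5.5])
after_stockout = mnl_probs([5.0], quality=[6.0])   # product 2 unavailable
print("P(buy product 1), both available:", round(float(both_in_stock[0]), 3))
print("P(buy product 1), 2 stocked out :", round(float(after_stockout[0]), 3))
```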
Handle: RePEc:taf:uiiexx:v:49:y:2017:i:3:p:304-319 Template-Type: ReDIF-Article 1.0 Author-Name: Wenmeng Tian Author-X-Name-First: Wenmeng Author-X-Name-Last: Tian Author-Name: Ran Jin Author-X-Name-First: Ran Author-X-Name-Last: Jin Author-Name: Tingting Huang Author-X-Name-First: Tingting Author-X-Name-Last: Huang Author-Name: Jaime A. Camelio Author-X-Name-First: Jaime A. Author-X-Name-Last: Camelio Title: Statistical process control for multistage processes with non-repeating cyclic profiles Abstract: In many manufacturing processes, process data are observed in the form of time-based profiles, which may contain rich information for process monitoring and fault diagnosis. Most approaches currently available in profile monitoring focus on single-stage processes or multistage processes with repeating cyclic profiles. However, a number of manufacturing operations are performed in multiple stages, where non-repeating profiles are generated. For example, in a broaching process, non-repeating cyclic force profiles are generated by the interaction between each cutting tooth and the workpiece. This article presents a process monitoring method based on Partial Least Squares (PLS) regression models, where PLS regression models are used to characterize the correlation between consecutive stages. Instead of monitoring the non-repeating profiles directly, the residual profiles from the PLS models are monitored. A Group Exponentially Weighted Moving Average control chart is adopted to detect both global and local shifts. The performance of the proposed method is compared with conventional methods in a simulation study. Finally, a case study of a hexagonal broaching process is used to illustrate the effectiveness of the proposed methodology in process monitoring and fault diagnosis. Journal: IISE Transactions Pages: 320-331 Issue: 3 Volume: 49 Year: 2017 Month: 3 X-DOI: 10.1080/0740817X.2016.1241454 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1241454 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:3:p:320-331 Template-Type: ReDIF-Article 1.0 Author-Name: Jian Li Author-X-Name-First: Jian Author-X-Name-Last: Li Author-Name: Kaibo Liu Author-X-Name-First: Kaibo Author-X-Name-Last: Liu Author-Name: Xiaochen Xian Author-X-Name-First: Xiaochen Author-X-Name-Last: Xian Title: Causation-based process monitoring and diagnosis for multivariate categorical processes Abstract: As many manufacturing and service processes nowadays involve multiple categorical quality characteristics, statistical surveillance for multivariate categorical processes has attracted increasing attention recently. However, in the literature there are only a few research papers that focus on the monitoring and diagnosis of such processes. This may be partly due to the challenges and limitations in describing the correlation relationships among categorical variables. In many applications, causal relationships may exist among categorical variables, in which the shifts at upstream, or cause, variables will propagate to their downstream, or effect, variables based on the causal structure. In such cases, a causation-based rather than correlation-based description would better account for the relationship among multiple categorical variables. This provides a new opportunity to establish improved monitoring and diagnosis schemes. 
In this article, we employ a Bayesian network to characterize such causal relationships and integrate it with a statistical process control technique. We propose two control charts for detecting shifts in the conditional probabilities of the multiple categorical variables that are embedded in the Bayesian network. The first chart provides a general tool, and the second chart integrates directional information, which also leads to a diagnostic prescription of shift locations. Both simulation and real case studies are used to demonstrate the effectiveness of the proposed monitoring and diagnostic schemes. Journal: IISE Transactions Pages: 332-343 Issue: 3 Volume: 49 Year: 2017 Month: 3 X-DOI: 10.1080/0740817X.2016.1241455 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1241455 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:3:p:332-343 Template-Type: ReDIF-Article 1.0 Author-Name: Nils Boysen Author-X-Name-First: Nils Author-X-Name-Last: Boysen Author-Name: Malte Fliedner Author-X-Name-First: Malte Author-X-Name-Last: Fliedner Author-Name: Armin Scholl Author-X-Name-First: Armin Author-X-Name-Last: Scholl Title: Assembly line balancing: Joint precedence graphs under high product variety Abstract: Previous approaches for balancing mixed-model assembly lines rely on detailed forecasts of the demand for each model to be produced on the line (model mix). With the help of the anticipated model mix, a joint precedence graph for a virtual average model is deduced, so that the mixed-model balancing problem is reduced to the single-model case and traditional balancing approaches can be employed. Today's ever-increasing product variety often impedes reliable forecasts for individual models. Instead, only forecasts for the estimated occurrences of each product feature (e.g., the percentage of cars with air conditioning) are obtainable. This paper shows how the generation of joint precedence graphs can be altered to account for this fundamental change in information. For the first time a tractable approach to provide the information necessary to balance mixed-model assembly lines carrying considerable product variety is presented. Journal: IIE Transactions Pages: 183-193 Issue: 3 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170801965082 File-URL: http://hdl.handle.net/10.1080/07408170801965082 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:3:p:183-193 Template-Type: ReDIF-Article 1.0 Author-Name: Pratik Parikh Author-X-Name-First: Pratik Author-X-Name-Last: Parikh Author-Name: Russell Meller Author-X-Name-First: Russell Author-X-Name-Last: Meller Title: Estimating picker blocking in wide-aisle order picking systems Abstract: In designing an Order Picking System (OPS) with multiple pickers, the design (or selection) of several parameters (e.g., width of aisles, storage system and picking strategy) is dependent on the blocking that occurs between pickers. In this paper, analytical models to estimate blocking in an OPS that has picking aisles wide enough to allow pickers to pass other pickers in the aisle are developed. In such OPSs, pickers can experience blocking at a pick face when two or more pickers need to pick at the same pick face. The developed models are compared with simulation, with results indicating that the proposed models are sufficiently accurate.
Test results suggest that when pickers pick one SKU at a pick face, blocking is lower in a wide-aisle OPS than in a narrow-aisle OPS. However, when pickers pick more than one SKU at a pick face, blocking increases monotonically with an increase in the number of SKUs picked. The last result is significant since it highlights the importance of the proposed model that considers the variation in the time the picker is stopped to pick.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resources: (i) Appendix A, which describes the procedure to obtain a closed-form expression for b1(2); (ii) Appendix B, which describes the derivations of the distributions for the case when pickers pick one SKU and pick:walk time ratio is ∞:1; and (iii) Appendix C, which describes the derivations of the distributions for the case when pickers may pick more than one SKU and pick:walk time ratio is ∞:1.] Journal: IIE Transactions Pages: 232-246 Issue: 3 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802108518 File-URL: http://hdl.handle.net/10.1080/07408170802108518 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:3:p:232-246 Template-Type: ReDIF-Article 1.0 Author-Name: Kevin Gue Author-X-Name-First: Kevin Author-X-Name-Last: Gue Author-Name: Russell Meller Author-X-Name-First: Russell Author-X-Name-Last: Meller Title: Aisle configurations for unit-load warehouses Abstract: Unit-load warehouses are used to store items—typically pallets—that can be stowed or retrieved in a single trip. In the traditional, ubiquitous design, storage racks are arranged to create parallel picking aisles, which force workers to travel rectilinear distances to picking locations. We consider the problem of arranging aisles in new ways to reduce the cost of travel for a single-command cycle within these warehouses. The proposed models produce alternative designs with piecewise diagonal cross aisles, and with picking aisles that are not parallel. One of the designs promises to reduce the expected distance that workers travel by more than 20% for warehouses of reasonable size. We also develop a theoretical bound that shows that this design is close to optimal. Journal: IIE Transactions Pages: 171-182 Issue: 3 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802112726 File-URL: http://hdl.handle.net/10.1080/07408170802112726 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:3:p:171-182 Template-Type: ReDIF-Article 1.0 Author-Name: Konstantin Kogan Author-X-Name-First: Konstantin Author-X-Name-Last: Kogan Author-Name: Hanan Tell Author-X-Name-First: Hanan Author-X-Name-Last: Tell Title: Production smoothing by balancing capacity utilization and advance orders Abstract: This paper focuses on advance orders and continuous-time production smoothing under uncertain demands. Similar to the classical newsboy problem, the demands may represent items that quickly become obsolete, spoil or have a future that is uncertain beyond a single period or selling season. The exact demand for items is unknown prior to the end of the selling season and may exceed the available capacity. To handle the uncertainty, initial inventories are accumulated by advance ordering or contracting out at a lower cost relative to the manufacturer's production cost.
In contrast to the classical newsboy problem, the manufacturer's capacity is used to smooth inaccuracy in demand estimation during the selling season. The objective is to determine both the advance order quantity to be delivered by the beginning of the selling season and the production rates over the selling season to minimize advance order costs and expected inventory shortage or surplus costs as well as production costs during the season. The maximum principle is employed to study the problem. As a result, closed-form optimal solutions are derived for various production conditions. The sensitivity analysis shows that these solutions do not always depend on the demand shape. An example illustrates the approach. Journal: IIE Transactions Pages: 223-231 Issue: 3 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802116305 File-URL: http://hdl.handle.net/10.1080/07408170802116305 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:3:p:223-231 Template-Type: ReDIF-Article 1.0 Author-Name: Manuel Laguna Author-X-Name-First: Manuel Author-X-Name-Last: Laguna Author-Name: Javier Roa Author-X-Name-First: Javier Author-X-Name-Last: Roa Author-Name: Antonio Jiménez Author-X-Name-First: Antonio Author-X-Name-Last: Jiménez Author-Name: Fernando Seco Author-X-Name-First: Fernando Author-X-Name-Last: Seco Title: Diversified local search for the optimal layout of beacons in an indoor positioning system Abstract: The navigation of Autonomous Guided Vehicles (AGVs) in industrial environments is often controlled by positioning systems based on landmarks or artificial beacons. In these systems, the position of an AGV navigating in an interior space is determined by the calculation of its relative distance to beacons, whose location is known in advance. A fundamental design problem associated with landmark navigation systems consists in determining the optimal location of the minimum number of beacons necessary to achieve a desired level of accuracy and reliability. A local search procedure coupled with a diversification strategy is developed for this problem. Comparisons with an earlier solution method based on genetic algorithms are provided and it is shown that the proposed procedure finds better designs in a fraction of the computational time employed by the genetic algorithm. Journal: IIE Transactions Pages: 247-259 Issue: 3 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802369383 File-URL: http://hdl.handle.net/10.1080/07408170802369383 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:3:p:247-259 Template-Type: ReDIF-Article 1.0 Author-Name: Philippe Lavoie Author-X-Name-First: Philippe Author-X-Name-Last: Lavoie Author-Name: Jean-Pierre Kenné Author-X-Name-First: Jean-Pierre Author-X-Name-Last: Kenné Author-Name: Ali Gharbi Author-X-Name-First: Ali Author-X-Name-Last: Gharbi Title: Optimization of production control policies in failure-prone homogenous transfer lines Abstract: The production control of homogenous transfer lines with machines that are prone to failure is considered in terms of inventory and backlog costs. Because problem complexity grows with line size, a heuristic method based on the profile of the distribution of buffer capacities in moderate size lines is developed in order to enable the optimization of long lines. 
A method consisting of an analytical formalism, combined discrete/continuous simulation modeling, design of experiments and response surface methodology is used to optimize a set of transfer lines, with one parameter per machine, for up to seven machines. A profile in the parameter distribution that can be modeled using four parameters is observed. Consequently, the optimization problem is reduced to four parameters, in turn greatly reducing the required optimization effort. An example of a 20-machine line, optimized using 130 runs, versus the 5 243 090 runs that would be necessary to solve the 20-parameter problem, is presented to illustrate the usefulness of the parameterized profile. Journal: IIE Transactions Pages: 209-222 Issue: 3 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802375760 File-URL: http://hdl.handle.net/10.1080/07408170802375760 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:3:p:209-222 Template-Type: ReDIF-Article 1.0 Author-Name: Yugang Yu Author-X-Name-First: Yugang Author-X-Name-Last: Yu Author-Name: René de Koster Author-X-Name-First: René Author-X-Name-Last: de Koster Title: Optimal zone boundaries for two-class-based compact three-dimensional automated storage and retrieval systems Abstract: Compact, multi-deep three-dimensional (3D), Automated Storage and Retrieval Systems (AS/RS) are becoming more common, due to new technologies, lower investment costs, time efficiency and compact size. Decision-making research on these systems is still in its infancy. This paper studies a particular compact system with rotating conveyors for the depth movement and a Storage/Retrieval (S/R) machine for the horizontal and vertical movement of unit loads. The optimal storage zone boundaries are determined for this system with two product classes: high- and low-turnover, by minimizing the expected S/R machine travel time. We formulate a mixed-integer non-linear programming model to determine the zone boundaries. A decomposition algorithm and a one-dimensional search scheme are developed to solve the model. The algorithm is complex, but the results are appealing since most of them are in closed-form and easy to apply to optimally lay out the 3D AS/RS rack. The results show that the S/R machine travel time is significantly influenced by the zone dimensions, zone sizes and ABC curve skewness (representing turnover patterns of different products). The presented results are compared with those under random storage and it is shown that significant reductions of the machine travel time are obtainable by using class-based storage.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix] Journal: IIE Transactions Pages: 194-208 Issue: 3 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802375778 File-URL: http://hdl.handle.net/10.1080/07408170802375778 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:3:p:194-208 Template-Type: ReDIF-Article 1.0 Author-Name: Zeynep Avsar Author-X-Name-First: Zeynep Author-X-Name-Last: Avsar Author-Name: W. Zijm Author-X-Name-First: W.
Author-X-Name-Last: Zijm Author-Name: Umut Rodoplu Author-X-Name-First: Umut Author-X-Name-Last: Rodoplu Title: An approximate model for base-stock-controlled assembly systems Abstract: This study is on continuously reviewed base-stock-controlled assembly systems with Poisson demand arrivals and exponential single-server facilities used for manufacturing and assembly operations. A partially aggregated but exact queueing model is developed and approximated assuming that the state-dependent transition rates arising as a result of the partial aggregation are constant. It is shown that the steady-state probability distribution of this approximate model is a product-form distribution for the simplest case with two components making up an assembly. Based on this analytical observation, similar product-form distributions are proposed for more complex assembly systems. Comparisons with simulation and matrix-geometric solutions show that the proposed product-form steady-state distributions accurately approximate relevant performance measures with a considerable advantage in terms of the required computational effort. A greedy heuristic is devised to use approximate steady-state probabilities for optimizing design parameters like base-stock levels.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resources: appendix for the proofs, additional remarks, and figures and tables for the numerical results.] Journal: IIE Transactions Pages: 260-274 Issue: 3 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802510382 File-URL: http://hdl.handle.net/10.1080/07408170802510382 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:3:p:260-274 Template-Type: ReDIF-Article 1.0 Author-Name: Zhishuang Yao Author-X-Name-First: Zhishuang Author-X-Name-Last: Yao Author-Name: Loo Lee Author-X-Name-First: Loo Author-X-Name-Last: Lee Author-Name: Ek Chew Author-X-Name-First: Ek Author-X-Name-Last: Chew Author-Name: Vernon Hsu Author-X-Name-First: Vernon Author-X-Name-Last: Hsu Author-Name: Wikrom Jaruphongsa Author-X-Name-First: Wikrom Author-X-Name-Last: Jaruphongsa Title: Dual-channel component replenishment problem in an assemble-to-order system Abstract: This article considers a component inventory problem in an assemble-to-order system where the assemble-to-order manufacturer faces a single-period stochastic demand for a single product created from multiple components. After the demand is realized, required components that are not available from inventory can be obtained using two sourcing channels that have different prices and lead times. The considered problem is formulated and solved analytically and the structural properties of its optimal solutions are explored. Computational experiments are performed to demonstrate the efficiency of the solution methods and comparisons are made between the performance of assemble-to-order systems with single- and dual-channel procurement approaches. It is also demonstrated that the proposed approach can be extended to the problem that considers multiple sourcing channels for components. Journal: IIE Transactions Pages: 229-243 Issue: 3 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.676748 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.676748 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
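Editorial aside (not part of the bibliographic record): the dual-channel replenishment abstract above pre-positions components and fills shortfalls from a faster, pricier channel after demand is realized. A newsvendor-style special case with a single expediting channel and hypothetical costs shows the critical-fractile logic (the article's dual-channel, dual-lead-time model is richer):

```python
import numpy as np
from scipy import stats

c_pre, c_exp = 1.0, 3.0        # hypothetical pre-order vs. expedited unit costs
demand = stats.poisson(mu=100)  # hypothetical single-period component demand

# One extra pre-ordered unit costs c_pre and saves c_exp exactly when demand
# exceeds the pre-order, so optimality requires P(D > q) = c_pre / c_exp.
q_star = int(demand.ppf(1.0 - c_pre / c_exp))

grid = np.arange(0, 300)
def expected_cost(q):
    shortfall = np.maximum(grid - q, 0)   # units bought from the fast channel
    return c_pre * q + c_exp * (shortfall * demand.pmf(grid)).sum()

for q in (q_star - 5, q_star, q_star + 5):  # q_star should be the cheapest
    print(f"q = {q:3d}: expected cost {expected_cost(q):.2f}")
```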
Handle: RePEc:taf:uiiexx:v:45:y:2013:i:3:p:229-243 Template-Type: ReDIF-Article 1.0 Author-Name: Chi Zhang Author-X-Name-First: Chi Author-X-Name-Last: Zhang Author-Name: Jose Ramirez-Marquez Author-X-Name-First: Jose Author-X-Name-Last: Ramirez-Marquez Title: Protecting critical infrastructures against intentional attacks: a two-stage game with incomplete information Abstract: It is now paramount to protect critical infrastructures because of their significance for economic development and social well-being of modern societies. One of the main threats to these networked systems is from intentional attackers, who are resourceful and inventive in selecting time, target, and means of attack. Thus, attackers’ intelligence should be considered when developing intelligent and cost-effective protection strategies. In this research, critical infrastructures are modeled as networks and the development of network protection strategies is modeled as a two-stage game between a protector and an attacker with incomplete information. Due to the complexity of critical infrastructures, there are usually a large number of combinations of potential protection and attack strategies leading to a computational challenge to find the Pareto equilibrium solutions for the proposed game. To meet this challenge, this research develops an evolutionary algorithm to solve a transformation of the proposed game into a multi-objective optimization model. Journal: IIE Transactions Pages: 244-258 Issue: 3 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.676749 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.676749 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:3:p:244-258 Template-Type: ReDIF-Article 1.0 Author-Name: Imry Rosenbaum Author-X-Name-First: Imry Author-X-Name-Last: Rosenbaum Author-Name: Irad Ben-Gal Author-X-Name-First: Irad Author-X-Name-Last: Ben-Gal Author-Name: Uri Yechiali Author-X-Name-First: Uri Author-X-Name-Last: Yechiali Title: Node generation and capacity reallocation in open Jackson networks Abstract: This article investigates methods for reallocation of service capacities in open Jackson networks in order to minimize either a system's mean total work-in-process or its response time. The focus is mainly on a method called node generation, by which capacity can be transferred from a node in echelon j to a newly generated node in echelon j + 1. The proposed procedure is compared with the more conventional capacity redistribution method, by which capacity can be transferred from any node in echelon j to existing successor nodes in echelon j + 1. Formulation of each method as a mathematical programming problem reveals the structure of the optimal solution for both problems. The motivation for considering these approaches stems from real-life settings, in particular, from a production line or supply chains where the two types of capacity reallocation are applied. Heuristic methods are developed to solve relatively large networks in tractable time. Numerical results and analyses are presented. Journal: IIE Transactions Pages: 259-272 Issue: 3 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.677571 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.677571 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
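Editorial aside (not part of the bibliographic record): the node-generation abstract above reallocates service capacity in an open Jackson network to reduce work-in-process. Under the classical product-form result, the total WIP of M/M/1 nodes in series is a one-line sum, which makes the trade-off easy to probe; the split below is hypothetical and ignores the echelon structure and transfer constraints of the article's model.

```python
def total_wip(arrival_rate, service_rates):
    """Mean total work-in-process of M/M/1 nodes in series: by the
    product-form (Jackson) result each node behaves as an independent
    M/M/1 queue, so L = sum of rho/(1 - rho) with rho = lambda/mu."""
    assert all(arrival_rate < mu for mu in service_rates), "unstable network"
    return sum((arrival_rate / mu) / (1.0 - arrival_rate / mu)
               for mu in service_rates)

lam = 0.8  # hypothetical external arrival rate
# Node generation moves capacity from a node to a newly created successor;
# an even split of the same total capacity is not automatically better.
print("one node,  mu = 2.0      :", round(total_wip(lam, [2.0]), 3))
print("two nodes, mu = 1.0 each :", round(total_wip(lam, [1.0, 1.0]), 3))
```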
Handle: RePEc:taf:uiiexx:v:45:y:2013:i:3:p:259-272 Template-Type: ReDIF-Article 1.0 Author-Name: Kan Wu Author-X-Name-First: Kan Author-X-Name-Last: Wu Author-Name: Leon McGinnis Author-X-Name-First: Leon Author-X-Name-Last: McGinnis Title: Interpolation approximations for queues in series Abstract: Tandem queues constitute a fundamental structure of queueing networks. However, exact queue times in tandem queues are notoriously difficult to compute except for some special cases. Several approximation schemes that are based on mathematical assumptions that enable approximate analyses of tandem queues have been reported in the literature. This article proposes an approximation approach that is based on observed properties of the behavior of tandem queues: the intrinsic gap and intrinsic ratio. The approach exploits the nearly linear and heavy-traffic properties of the intrinsic ratio, which appear to hold in realistic production situations. The proposed approach outperforms existing approximation methods across a broad range of examined cases. It is also demonstrated that the proposed approach has the potential, when applied to historical data, to achieve accurate mean queue time estimates in practical production environments.[Supplemental materials are available for this article. Go to the publisher's online edition of IIE Transactions to view the supplemental file.] Journal: IIE Transactions Pages: 273-290 Issue: 3 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.682699 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.682699 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:3:p:273-290 Template-Type: ReDIF-Article 1.0 Author-Name: Sven Axsäter Author-X-Name-First: Sven Author-X-Name-Last: Axsäter Author-Name: Christian Howard Author-X-Name-First: Christian Author-X-Name-Last: Howard Author-Name: Johan Marklund Author-X-Name-First: Johan Author-X-Name-Last: Marklund Title: A distribution inventory model with transshipments from a support warehouse Abstract: This article considers a distribution inventory system consisting of N retailers and a regional support warehouse, all being replenished from a central warehouse/outside supplier. For the case where stock-outs occur, the retailers receive transshipments from the support warehouse at an extra cost. A model is presented for cost evaluation and optimization of the reorder points in the system under fill rate constraints. The solution method is designed to handle large real-life systems and is fast enough to be directly implemented in practice. A numerical study illustrates that (i) the model renders near-optimal solutions; (ii) the value of using a support warehouse can be significant even for large transshipment costs; and (iii) significant cost savings can be obtained using the proposed model. Using real data for a sample of 50 representative products, the proposed model reduces the expected holding and transshipment costs by 29% while still meeting target fill rates.[Supplemental materials are available for this article. Go to the publisher's online edition of IIE Transactions to view the supplemental file.] Journal: IIE Transactions Pages: 309-322 Issue: 3 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.706375 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.706375 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
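Editorial aside (not part of the bibliographic record): the support-warehouse abstract above optimizes reorder points under fill rate constraints. A minimal sketch of the fill-rate building block for a single location with Poisson lead-time demand (numbers are hypothetical; the article's multi-echelon transshipment model is far more involved):

```python
from scipy import stats

def fill_rate(base_stock, mean_lt_demand):
    """Item fill rate of a base-stock policy with Poisson lead-time
    demand: an arriving unit is filled from stock when fewer than S
    units are already outstanding, i.e., with probability P(D <= S - 1)."""
    return stats.poisson(mean_lt_demand).cdf(base_stock - 1)

target, mu = 0.95, 4.0   # hypothetical fill-rate target and lead-time demand
S = next(s for s in range(200) if fill_rate(s, mu) >= target)
print(f"smallest base stock meeting {target:.0%}: {S} "
      f"(achieved {fill_rate(S, mu):.3f})")
```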
Handle: RePEc:taf:uiiexx:v:45:y:2013:i:3:p:309-322 Template-Type: ReDIF-Article 1.0 Author-Name: Mark Brantley Author-X-Name-First: Mark Author-X-Name-Last: Brantley Author-Name: Loo Lee Author-X-Name-First: Loo Author-X-Name-Last: Lee Author-Name: Chun-Hung Chen Author-X-Name-First: Chun-Hung Author-X-Name-Last: Chen Author-Name: Argon Chen Author-X-Name-First: Argon Author-X-Name-Last: Chen Title: Efficient simulation budget allocation with regression Abstract: Simulation can be a very powerful tool to help decision making in many applications; however, exploring multiple courses of actions can be time consuming. Numerous Ranking & Selection (R&S) procedures have been developed to enhance the simulation efficiency of finding the best design. This article explores the potential of further enhancing R&S efficiency by incorporating simulation information from across the domain into a regression metamodel. This article assumes that the underlying function to be optimized is one-dimensional as well as approximately quadratic or piecewise quadratic. Under some common conditions in most regression-based approaches, the proposed method provides approximations of the optimal rules that determine the design locations to conduct simulation runs and the number of samples allocated to each design location. Numerical experiments demonstrate that the proposed approach can dramatically enhance efficiency over existing efficient R&S methods and can obtain significant savings over regression-based methods. In addition to utilizing concepts from the Design Of Experiments (DOE) literature, it introduces to that literature the probability-of-correct-selection optimality criterion that underpins the new R&S method. Journal: IIE Transactions Pages: 291-308 Issue: 3 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.712238 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.712238 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:3:p:291-308 Template-Type: ReDIF-Article 1.0 Author-Name: Amit Bardhan Author-X-Name-First: Amit Author-X-Name-Last: Bardhan Author-Name: Milind Dawande Author-X-Name-First: Milind Author-X-Name-Last: Dawande Author-Name: Srinagesh Gavirneni Author-X-Name-First: Srinagesh Author-X-Name-Last: Gavirneni Author-Name: Yinping Mu Author-X-Name-First: Yinping Author-X-Name-Last: Mu Author-Name: Suresh Sethi Author-X-Name-First: Suresh Author-X-Name-Last: Sethi Title: Forecast and rolling horizons under demand substitution and production changeovers: analysis and insights Abstract: For most multi-period decision-making problems, it is generally well accepted that the influence of information about later periods on the optimal decision in the current period reduces as we move farther into the future. If and when this influence reduces to zero, the corresponding problem horizon is referred to as a forecast horizon. For real businesses, the problem of obtaining a minimal forecast horizon becomes relevant because the task of estimating reliable data for future periods gets progressively more challenging and expensive. This article investigates forecast horizons for a two-product dynamic lot-sizing model under (i) the possibility of substitution in one direction; that is, one product can be used to satisfy the demand of the other product but not vice versa; and (ii) a changeover cost when production switches from one product to the other. It is assumed that only one of the two products can be produced in a period.
The notion of substitution, due to the inherent flexibility it offers, has recently been recognized as an effective tool to improve the efficiency of multi-product inventory systems. The concept of regeneration points is used to justify the use of a practically relevant restricted version of the problem to obtain forecast horizons. A dynamic programming-based polynomial-time algorithm for the restricted version is developed and, subsequently, an efficient procedure for obtaining minimal forecast horizons by establishing the monotonicity of the regeneration points is obtained. Using a comprehensive test bed of instances, useful insights are obtained on the impact of substitution and production changeovers on the length of the minimal forecast horizons. Finally, for infinite-horizon problems, a practical rolling-horizon procedure is developed that uses forecasting costs to balance the benefit of additional information. It is shown that, instead of fixing the duration of the rolling horizon at a predetermined value, changing it dynamically based on the lengths of the minimal forecast horizons can significantly reduce the combined production and forecasting cost.[Supplemental materials are available for this article. Go to the publisher’s online edition of IIE Transactions to view the supplemental file.] Journal: IIE Transactions Pages: 323-340 Issue: 3 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.712239 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.712239 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:3:p:323-340 Template-Type: ReDIF-Article 1.0 Author-Name: Oualid Jouini Author-X-Name-First: Oualid Author-X-Name-Last: Jouini Author-Name: Ger Koole Author-X-Name-First: Ger Author-X-Name-Last: Koole Author-Name: Alex Roubos Author-X-Name-First: Alex Author-X-Name-Last: Roubos Title: Performance indicators for call centers with impatient customers Abstract: An important feature of call center modeling is the presence of impatient customers. This article considers single-skill call centers including customer abandonments. A number of different service-level definitions are structured, including all those used in practice, and the explicit computation of their performance measures is performed. Based on data from different call centers, models are defined that extend the common Erlang A model. It is shown that the proposed models fit reality very well. Journal: IIE Transactions Pages: 341-354 Issue: 3 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.712241 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.712241 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:3:p:341-354 Template-Type: ReDIF-Article 1.0 Author-Name: Suprasad V. Amari Author-X-Name-First: Suprasad V. Author-X-Name-Last: Amari Author-Name: Chaonan Wang Author-X-Name-First: Chaonan Author-X-Name-Last: Wang Author-Name: Liudong Xing Author-X-Name-First: Liudong Author-X-Name-Last: Xing Author-Name: Rahamat Mohammad Author-X-Name-First: Rahamat Author-X-Name-Last: Mohammad Title: An efficient phased-mission reliability model considering dynamic k-out-of-n subsystem redundancy Abstract: In this article, an efficient method is proposed for exact reliability evaluation of a special class of Phased-Mission Systems (PMSs) containing multiple k-out-of-n subsystems, each of which has multiple identical and non-repairable components. 
A PMS performs missions involving multiple, consecutive, and non-overlapping phases of operations. In each phase, the system has to accomplish a specific task and may be subject to different stresses. Thus, the configuration of each subsystem can change from phase to phase, including its active and inactive status, redundancy type, and minimum required working components. If any one of the required (active) subsystems is failed in a phase, the system is considered to be failed in that phase. The proposed method for accurate reliability analysis of PMS considers statistical dependencies of component states across the phases, time-varying and phase-dependent failure rates, and associated cumulative damage effects. Based on conditional probabilities and an efficient recursive formula to compute these probabilities, the proposed method has both computational time and memory requirements linear to the system size. Medium-scale and large-scale systems are analyzed to demonstrate high efficiency of the proposed method. Journal: IISE Transactions Pages: 868-877 Issue: 10 Volume: 50 Year: 2018 Month: 10 X-DOI: 10.1080/24725854.2018.1439205 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1439205 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:10:p:868-877 Template-Type: ReDIF-Article 1.0 Author-Name: Changyue Song Author-X-Name-First: Changyue Author-X-Name-Last: Song Author-Name: Kaibo Liu Author-X-Name-First: Kaibo Author-X-Name-Last: Liu Title: Statistical degradation modeling and prognostics of multiple sensor signals via data fusion: A composite health index approach Abstract: Nowadays multiple sensors are widely used to simultaneously monitor the degradation status of a unit. Because those sensor signals are often correlated and measure different characteristics of the same unit, effective fusion of such a diverse “gene pool” is an important step to better understanding the degradation process and producing a more accurate prediction of the remaining useful life. To address this issue, this article proposes a novel data fusion method that constructs a composite Health Index (HI) via the combination of multiple sensor signals for better characterizing the degradation process. In particular, we formulate the problem as indirect supervised learning and leverage the quantile regression to derive the optimal fusion coefficient. In this way, the prognostic performance of the proposed method is guaranteed. To the best of our knowledge, this is the first article that provides the theoretical analysis of the data fusion method for degradation modeling and prognostics. Simulation studies are conducted to evaluate the proposed method in different scenarios. A case study on the degradation of aircraft engines is also performed, which shows the superior performance of our method over existing HI-based methods. Journal: IISE Transactions Pages: 853-867 Issue: 10 Volume: 50 Year: 2018 Month: 10 X-DOI: 10.1080/24725854.2018.1440673 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1440673 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:10:p:853-867 Template-Type: ReDIF-Article 1.0 Author-Name: Yanqing Duanmu Author-X-Name-First: Yanqing Author-X-Name-Last: Duanmu Author-Name: Carson T. Riche Author-X-Name-First: Carson T. 
Author-X-Name-Last: Riche Author-Name: Malancha Gupta Author-X-Name-First: Malancha Author-X-Name-Last: Gupta Author-Name: Noah Malmstadt Author-X-Name-First: Noah Author-X-Name-Last: Malmstadt Author-Name: Qiang Huang Author-X-Name-First: Qiang Author-X-Name-Last: Huang Title: Scale-up modeling for manufacturing nanoparticles using microfluidic T-junction Abstract: Nanoparticles have great potential to revolutionize industry and improve our lives in various fields such as energy, security, medicine, food, and environmental science. Droplet-based microfluidic reactors serve as an important tool for producing monodisperse nanoparticles with a high yield. Depending on process settings, droplet formation in a typical microfluidic T-junction is explained by different mechanisms: squeezing, dripping, or squeezing-to-dripping. Therefore, the manufacturing process can potentially operate under multiple physical domains due to uncertainties. Although mechanistic models have been developed for individual domains, a modeling approach for the scale-up manufacturing of droplet formation across multiple domains does not exist. Establishing an integrated and scalable droplet formation model, which is vital for scaling up microfluidic reactors for large-scale production, faces two critical challenges: the high dimensionality of the modeling space; and ambiguity among the boundaries of physical domains. This work establishes a novel and generic formulation for the scale-up of multiple-domain manufacturing processes and provides a scalable modeling approach for the quality control of products, which enables and supports the scale-up of manufacturing processes that can potentially operate under multiple physical domains due to uncertainties. Journal: IISE Transactions Pages: 892-899 Issue: 10 Volume: 50 Year: 2018 Month: 10 X-DOI: 10.1080/24725854.2018.1443529 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1443529 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:10:p:892-899 Template-Type: ReDIF-Article 1.0 Author-Name: Lijun Shang Author-X-Name-First: Lijun Author-X-Name-Last: Shang Author-Name: Shubin Si Author-X-Name-First: Shubin Author-X-Name-Last: Si Author-Name: Shudong Sun Author-X-Name-First: Shudong Author-X-Name-Last: Sun Author-Name: Tongdan Jin Author-X-Name-First: Tongdan Author-X-Name-Last: Jin Title: Optimal warranty design and post-warranty maintenance for products subject to stochastic degradation Abstract: Warranty policy, as a marketing strategy, has been widely studied for several decades, but warranty models incorporating condition-based maintenance are still rare. With condition monitoring, product reliability in the warranty period can be tracked and predicted based on its degradation path. In this article, we first propose a condition-based renewable replacement warranty policy through the integration of an Inverse Gaussian degradation model. The goal is to maximize the manufacturer's profit by optimizing the warranty period, sale price, and replacement threshold. In a monopoly market, we show that it is more profitable to let the replacement threshold equal the failure threshold. However, in a competitive market, the optimal replacement threshold should be below the failure threshold.
Second, depending on whether or not the historical degradation level is observable to the customer, an optimal post-warranty maintenance policy considering a hybrid preventive maintenance effect (i.e., both age and degradation level reduction) is derived. Numerical experiments show that a larger replacement threshold can increase the manufacturer's profit, reduce the sale price, and prolong the warranty period, but it has less effect on saving the consumer's cost or extending the replacement age. Journal: IISE Transactions Pages: 913-927 Issue: 10 Volume: 50 Year: 2018 Month: 10 X-DOI: 10.1080/24725854.2018.1448490 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1448490 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:10:p:913-927 Template-Type: ReDIF-Article 1.0 Author-Name: Chen Zhang Author-X-Name-First: Chen Author-X-Name-Last: Zhang Author-Name: Hao Yan Author-X-Name-First: Hao Author-X-Name-Last: Yan Author-Name: Seungho Lee Author-X-Name-First: Seungho Author-X-Name-Last: Lee Author-Name: Jianjun Shi Author-X-Name-First: Jianjun Author-X-Name-Last: Shi Title: Weakly correlated profile monitoring based on sparse multi-channel functional principal component analysis Abstract: Although several works have been proposed for multi-channel profile monitoring, two additional challenges are yet to be addressed: (i) how to model complex correlations of multi-channel profiles when different profiles have different features (i.e., weakly or sparsely correlated); (ii) how to efficiently detect sparse changes occurring in only a small segment of a few profiles. To fill this research gap, our contributions are threefold. First, we propose a novel Sparse Multi-channel Functional Principal Component Analysis (SMFPCA) to model multi-channel profiles. SMFPCA can not only flexibly describe the correlation structure of multiple, or even high-dimensional, profiles with distinct features, but also achieve sparse PCA scores which are easily interpretable. Second, we propose an efficient convergence-guaranteed optimization algorithm to solve SMFPCA in real time based on the block coordinate descent algorithm. Third, as the SMFPCA scores can naturally identify sparse out-of-control (OC) patterns, we use the scores to construct a monitoring scheme which provides increased sensitivity to sparse OC changes. Numerical studies together with a real case study in a manufacturing system demonstrate the effectiveness of the developed methodology. Journal: IISE Transactions Pages: 878-891 Issue: 10 Volume: 50 Year: 2018 Month: 10 X-DOI: 10.1080/24725854.2018.1451012 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1451012 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:10:p:878-891 Template-Type: ReDIF-Article 1.0 Author-Name: Kai Wang Author-X-Name-First: Kai Author-X-Name-Last: Wang Author-Name: Fugee Tsung Author-X-Name-First: Fugee Author-X-Name-Last: Tsung Title: A cost-effective and reliable measurement strategy for 3D printed parts by integrating low- and high-resolution measurement systems Abstract: Metrology data are crucial to quality control of three-dimensional (3D) printed parts. Low-cost measurement systems are often unreliable due to their low resolutions, whereas high-resolution measurement systems usually induce high measurement costs.
To balance the measurement cost and accuracy, a new cost-effective and reliable measurement strategy is proposed in this article, which jointly uses two-resolution measurement systems. Specifically, only a small sample of base parts is measured by both the low- and high-resolution measurement systems in order to save costs. The measurement accuracy of most parts with only low-resolution metrology data is improved by effectively integrating high-resolution metrology data of the base parts. A Bayesian generative model parameterizes a part-independent bias and variance pattern of the low-resolution metrology data and facilitates a between-part data integration via an efficient Markov chain Monte Carlo sampling algorithm. This multi-part two-resolution metrology data integration highlights the novelty and contribution of this article compared with the existing one-part data integration methods in the literature. Finally, an intensive experimental study involving a laser scanner and a machine vision system has validated the effectiveness of our measurement strategy in acquiring reliable metrology data of 3D printed parts. Journal: IISE Transactions Pages: 900-912 Issue: 10 Volume: 50 Year: 2018 Month: 10 X-DOI: 10.1080/24725854.2018.1455117 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1455117 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:10:p:900-912 Template-Type: ReDIF-Article 1.0 Author-Name: Melike Meterelliyoz Author-X-Name-First: Melike Author-X-Name-Last: Meterelliyoz Author-Name: Christos Alexopoulos Author-X-Name-First: Christos Author-X-Name-Last: Alexopoulos Author-Name: David Goldsman Author-X-Name-First: David Author-X-Name-Last: Goldsman Author-Name: Tuba Aktaran-Kalayci Author-X-Name-First: Tuba Author-X-Name-Last: Aktaran-Kalayci Title: Reflected variance estimators for simulation Abstract: We propose a new class of estimators for the asymptotic variance parameter of a stationary simulation output process. The estimators are based on Standardized Time Series (STS) functionals that converge to Brownian bridges that are themselves derived from appropriately reflected Brownian motions. The main result is that certain linear combinations of reflected estimators have substantially smaller variances than their constituents. We illustrate the performance of the new estimators via Monte Carlo experiments. These experiments show that the reflected estimators behave as expected and, in addition, perform better than certain competitors such as nonoverlapping batch means estimators and STS folded estimators. Journal: IIE Transactions Pages: 1185-1202 Issue: 11 Volume: 47 Year: 2015 Month: 11 X-DOI: 10.1080/0740817X.2015.1005776 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1005776 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
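The reflected estimators in the abstract above are benchmarked against nonoverlapping batch means (NBM). As a point of reference, here is a minimal NBM sketch, not the authors' code, with a stationary AR(1) series standing in for generic simulation output; batch size and all other parameters are hypothetical. For an AR(1) process with unit-variance noise, the asymptotic variance parameter is 1/(1 - phi)^2, which the estimate should approach.

```python
# Minimal sketch (not the authors' code): nonoverlapping batch means (NBM)
# estimation of the asymptotic variance parameter of a stationary series.
import numpy as np

def nbm(y, b):
    """NBM estimator of sigma^2 = lim_n n * Var(mean of y_1..y_n)."""
    k = len(y) // b                          # number of complete batches
    means = y[: k * b].reshape(k, b).mean(axis=1)
    return b * means.var(ddof=1)             # scaled variance of batch means

rng = np.random.default_rng(7)
phi, n = 0.7, 2**18
eps = rng.standard_normal(n)
y = np.empty(n)
y[0] = eps[0] / np.sqrt(1 - phi**2)          # start in stationarity
for i in range(1, n):
    y[i] = phi * y[i - 1] + eps[i]

# True variance parameter for AR(1) with unit-variance noise: 1/(1-phi)^2.
print(nbm(y, b=4096), 1 / (1 - phi) ** 2)
```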
Handle: RePEc:taf:uiiexx:v:47:y:2015:i:11:p:1185-1202 Template-Type: ReDIF-Article 1.0 Author-Name: Haobin Li Author-X-Name-First: Haobin Author-X-Name-Last: Li Author-Name: Loo Hay Lee Author-X-Name-First: Loo Hay Author-X-Name-Last: Lee Author-Name: Ek Peng Chew Author-X-Name-First: Ek Peng Author-X-Name-Last: Chew Author-Name: Peter Lendermann Author-X-Name-First: Peter Author-X-Name-Last: Lendermann Title: MO-COMPASS: a fast convergent search algorithm for multi-objective discrete optimization via simulation Abstract: Discrete Optimization via Simulation (DOvS) has drawn considerable attention from both simulation researchers and industry practitioners, due to its wide application and significant effects. In fact, DOvS usually implies the need to solve large-scale problems, making efficiency a key factor in the design of search algorithms. In this research work, MO-COMPASS (Multi-Objective Convergent Optimization via Most-Promising-Area Stochastic Search) is developed, as an extension of the single-objective COMPASS, for solving DOvS problems with two or more objectives by taking into consideration Pareto optimality and the probability of correct selection. The algorithm is proven to be locally convergent, and numerical experiments have been carried out to show its ability to achieve a high convergence rate. Journal: IIE Transactions Pages: 1153-1169 Issue: 11 Volume: 47 Year: 2015 Month: 11 X-DOI: 10.1080/0740817X.2015.1005778 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1005778 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:11:p:1153-1169 Template-Type: ReDIF-Article 1.0 Author-Name: Christopher M. Healey Author-X-Name-First: Christopher M. Author-X-Name-Last: Healey Author-Name: Sigrún Andradóttir Author-X-Name-First: Sigrún Author-X-Name-Last: Andradóttir Author-Name: Seong-Hee Kim Author-X-Name-First: Seong-Hee Author-X-Name-Last: Kim Title: A minimal switching procedure for constrained ranking and selection under independent or common random numbers Abstract: Constrained Ranking and Selection (R&S) aims to select the best system according to a primary performance measure, while also satisfying constraints on secondary performance measures. Several procedures have been proposed for constrained R&S, but these procedures seek to minimize the number of samples required to choose the best constrained system without taking into account the setup costs incurred when switching between systems. We introduce a new procedure that minimizes the number of such switches, while still making a valid selection of the best constrained system. Analytical and experimental results show that the procedure is valid for independent systems and efficient in terms of total cost (incorporating both switching and sampling costs). We also investigate the use of the Common Random Numbers (CRN) approach to improve the efficiency of our new procedure. When implementing CRN, we see a significant decrease in the samples needed to identify the best constrained system, but this is sometimes achieved at the expense of a valid Probability of Correct Selection (PCS) due to the comparison of systems with unequal numbers of samples. We propose four variance estimate modifications and show that their use within our new procedure provides good PCS under CRN at the cost of some additional observations.
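The variance-reduction effect of CRN exploited in the abstract above is easy to see in a toy comparison. The following sketch, not the authors' procedure, estimates the expected cost difference between two hypothetical newsvendor order quantities with and without shared demand draws; all distributions and parameter values are illustrative assumptions.

```python
# Minimal sketch (not the authors' procedure): common random numbers (CRN)
# versus independent sampling when comparing two systems.
import numpy as np

def cost(q, demand, c_over=1.0, c_under=4.0):
    """Newsvendor overage/underage cost for order quantity q."""
    return c_over * np.maximum(q - demand, 0) + c_under * np.maximum(demand - q, 0)

rng = np.random.default_rng(42)
n = 10_000
d1 = rng.exponential(100.0, n)
d2 = rng.exponential(100.0, n)               # a second, independent stream

diff_indep = cost(120, d1) - cost(140, d2)   # independent sampling
diff_crn = cost(120, d1) - cost(140, d1)     # CRN: same demands for both systems

for name, diff in (("indep", diff_indep), ("CRN  ", diff_crn)):
    print(name, "mean %.2f, std err %.2f" % (diff.mean(), diff.std(ddof=1) / np.sqrt(n)))
```

Because the two cost functions respond monotonically to the same demand draws, CRN induces positive correlation between the paired observations and shrinks the standard error of the estimated difference.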
Journal: IIE Transactions Pages: 1170-1184 Issue: 11 Volume: 47 Year: 2015 Month: 11 X-DOI: 10.1080/0740817X.2015.1009198 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1009198 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:11:p:1170-1184 Template-Type: ReDIF-Article 1.0 Author-Name: Banafsheh Behzad Author-X-Name-First: Banafsheh Author-X-Name-Last: Behzad Author-Name: Sheldon H. Jacobson Author-X-Name-First: Sheldon H. Author-X-Name-Last: Jacobson Author-Name: Matthew J. Robbins Author-X-Name-First: Matthew J. Author-X-Name-Last: Robbins Title: A symmetric capacity-constrained differentiated oligopoly model for the United States pediatric vaccine market with linear demand Abstract: The United States pediatric vaccine market is examined using Bertrand–Edgeworth–Chamberlin price competition. The proposed game captures interactions between symmetric, capacity-constrained manufacturers in a differentiated, single-product market with linear demand. Results indicate that a unique pure strategy equilibrium exists in the case where the capacities of the manufacturers are at their extreme. For the capacity region where no pure strategy equilibrium exists, there exists a mixed strategy equilibrium where the distribution function, its support, and the expected profit of the manufacturers are characterized. Three game instances are introduced to model the United States pediatric vaccine market. In each instance, the manufacturers are assumed to have equal capacity in producing vaccines. Vaccines are differentiated based upon the number of reported adverse medical events for that vaccine. Using a game-theoretic model, equilibrium prices are computed for each monovalent vaccine. Results indicate that the equilibrium prices for monovalent vaccines are lower than the federal contract prices. The numerical results provide both a lower and upper bound for the vaccine equilibrium prices in the public sector, based on the capacity of the vaccine manufacturers. Results illustrate the importance of several model parameters such as market demand and vaccine adverse events on the equilibrium prices. Supplementary materials are available for this article. Go to the publisher’s online edition of IIE Transactions for datasets, additional tables, detailed proofs, etc. Journal: IIE Transactions Pages: 1252-1266 Issue: 11 Volume: 47 Year: 2015 Month: 11 X-DOI: 10.1080/0740817X.2015.1009759 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1009759 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:11:p:1252-1266 Template-Type: ReDIF-Article 1.0 Author-Name: S. Ayca Erdogan Author-X-Name-First: S. Ayca Author-X-Name-Last: Erdogan Author-Name: Alexander Gose Author-X-Name-First: Alexander Author-X-Name-Last: Gose Author-Name: Brian T. Denton Author-X-Name-First: Brian T. Author-X-Name-Last: Denton Title: Online appointment sequencing and scheduling Abstract: We formulate and solve a new stochastic integer programming model for dynamic sequencing and scheduling of appointments to a single stochastic server. We assume that service durations and the number of customers to be served on a particular day are uncertain. Customers are sequenced and scheduled dynamically (online) one at a time as they request appointments. 
We present a two-stage stochastic mixed integer program that uses a novel set of non-anticipativity constraints to capture the dynamic multi-stage nature of appointment requests as well as the sequencing of customers. We describe several ways to improve the computational efficiency of decomposition methods to solve our model. We also present some theoretical findings based on small problems to help motivate decision rules for larger problems. Our numerical experiments provide insights into optimal sequencing and scheduling decisions and the performance of the solution methods we propose. Journal: IIE Transactions Pages: 1267-1286 Issue: 11 Volume: 47 Year: 2015 Month: 11 X-DOI: 10.1080/0740817X.2015.1011355 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1011355 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:11:p:1267-1286 Template-Type: ReDIF-Article 1.0 Author-Name: Su Xiu Xu Author-X-Name-First: Su Xiu Author-X-Name-Last: Xu Author-Name: George Q. Huang Author-X-Name-First: George Q. Author-X-Name-Last: Huang Title: Auction-based transportation procurement in make-to-order systems Abstract: This article presents an auction-based model for Transportation Service Procurement (TSP) in make-to-order systems. It is one of the first models to integrate auction-based TSP with inventory decisions. The underlying model is applicable in the general context of coordinating TSP and inventory decisions. Using the well-known Revenue Equivalence Principle, we formulate a dynamic programming problem. When no fixed auction costs occur, we establish the optimality of the state-dependent deliver-down-to allocation policy, which is essentially a state-dependent base-stock-type (S(x)-like) policy. We characterize the property of the optimal state-dependent deliver-down-to level. When fixed auction costs apply, we establish the optimality of the state-dependent (s(x), S(x))-like policy. We show that the optimal allocation can be achieved by running a Vickrey–Clarke–Groves auction or a first-price auction with closed-form reserve prices. A symmetric equilibrium bidding strategy for each carrier can be easily computed. Our model is also extended to the case where each carrier has multi-unit supply. With mild technical modifications, all of the results derived in the infinite-horizon case can be extended to the finite-horizon case. Some key features of the finite-horizon case are discussed. Journal: IIE Transactions Pages: 1236-1251 Issue: 11 Volume: 47 Year: 2015 Month: 11 X-DOI: 10.1080/0740817X.2015.1011356 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1011356 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:11:p:1236-1251 Template-Type: ReDIF-Article 1.0 Author-Name: Andrew C. Trapp Author-X-Name-First: Andrew C. Author-X-Name-Last: Trapp Author-Name: Renata A. Konrad Author-X-Name-First: Renata A. Author-X-Name-Last: Konrad Title: Finding diverse optima and near-optima to binary integer programs Abstract: Typical output from an optimization solver is a single optimal solution. There are contexts, however, where a set of high-quality and diverse solutions may be beneficial; for example, problems involving imperfect information or those for which the structure of high-quality solution vectors can reveal meaningful insights.
In view of this, we discuss a novel method to obtain multiple diverse optima and near-optima to pure binary (0–1) integer programs, employing fractional programming techniques to manage these typically competing goals. Specifically, we develop a general approach that makes use of Dinkelbach’s algorithm to sequentially generate solutions that evaluate well with respect to both (i) individual performance and, as a whole, (ii) mutual variety. We assess the performance of our approach on a number of MIPLIB test instances from the literature. Using two diversity metrics, computational results show that our method provides an efficient way to optimize the fractional objective while sequentially generating multiple high-quality and diverse solutions. Journal: IIE Transactions Pages: 1300-1312 Issue: 11 Volume: 47 Year: 2015 Month: 11 X-DOI: 10.1080/0740817X.2015.1019161 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1019161 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:11:p:1300-1312 Template-Type: ReDIF-Article 1.0 Author-Name: Hamed Jalali Author-X-Name-First: Hamed Author-X-Name-Last: Jalali Author-Name: Inneke Van Nieuwenhuyse Author-X-Name-First: Inneke Van Author-X-Name-Last: Nieuwenhuyse Title: Simulation optimization in inventory replenishment: a classification Abstract: Simulation optimization is increasingly popular for solving complicated and mathematically intractable business problems. Focusing on academic articles published between 1998 and 2013, the present survey aims to unveil the extent to which simulation optimization has been used to solve practical inventory problems (as opposed to small, theoretical “toy problems”), and to detect any trends that might have arisen (e.g., popular topics, effective simulation optimization methods, frequently studied inventory system structures). We find that metaheuristics (especially genetic algorithms) and methods that combine several simulation optimization techniques are the most popular. The resulting categorizations provide a useful overview for researchers studying complex inventory management problems, by providing detailed information on the inventory system characteristics and the employed simulation optimization techniques, highlighting articles that involve stochastic constraints (e.g., expected fill rate constraints) or that employ a robust simulation optimization approach. Finally, in highlighting both trends and gaps in the research field, this review suggests avenues for further research. Journal: IIE Transactions Pages: 1217-1235 Issue: 11 Volume: 47 Year: 2015 Month: 11 X-DOI: 10.1080/0740817X.2015.1019162 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1019162 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:11:p:1217-1235 Template-Type: ReDIF-Article 1.0 Author-Name: Hyeong Suk Na Author-X-Name-First: Hyeong Suk Author-X-Name-Last: Na Author-Name: Amarnath Banerjee Author-X-Name-First: Amarnath Author-X-Name-Last: Banerjee Title: A disaster evacuation network model for transporting multiple priority evacuees Abstract: There is an increasing number of natural disasters occurring worldwide, particularly in populated areas. These events affect a large number of people, causing injuries and fatalities. Providing rapid medical treatment is of utmost importance in such circumstances.
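Dinkelbach's algorithm, the engine behind the fractional-programming approach in the Trapp and Konrad abstract above, reduces the maximization of a ratio f(x)/g(x) to a sequence of parametric subproblems max f(x) - lambda*g(x). The sketch below is not the authors' implementation: the inner problem is solved by brute force over a tiny hypothetical 0-1 feasible set, and both weight vectors are made-up illustrations.

```python
# Minimal sketch (not the authors' code): Dinkelbach's parametric method for
# maximizing a fractional objective f(x)/g(x) over a finite 0-1 feasible set.
from itertools import product

f_w = [3.0, 5.0, 2.0, 4.0]                   # hypothetical "quality" weights
g_w = [2.0, 4.0, 1.0, 3.0]                   # hypothetical "penalty" weights
feasible = [x for x in product((0, 1), repeat=4) if sum(x) >= 1]

f = lambda x: sum(fi * xi for fi, xi in zip(f_w, x))
g = lambda x: 1.0 + sum(gi * xi for gi, xi in zip(g_w, x))   # g > 0 everywhere

lam, tol = 0.0, 1e-9
while True:
    # Parametric subproblem: maximize f(x) - lam * g(x) over the feasible set.
    x_star = max(feasible, key=lambda x: f(x) - lam * g(x))
    if f(x_star) - lam * g(x_star) < tol:    # zero optimal value => ratio optimal
        break
    lam = f(x_star) / g(x_star)              # Dinkelbach update of the ratio
print(x_star, lam)                           # maximizer and optimal ratio f/g
```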
The problem of transporting patients to medical facilities has been studied to only a small extent. One of the challenges is to find a strategy that can simultaneously maximize the number of survivors and minimize the total evacuation cost under a given set of resource and geographic constraints. We propose a mathematical optimization model, called the Triage–Assignment–Transportation (TAT) model, that decides on the tactical routing assignment of several classes of evacuation vehicles between staging areas and shelters in the nearby area. The model takes into account the level of injury to the evacuees, the capacities of vehicles, and available resources at each shelter. TAT is a mixed-integer linear programming and minimum-cost flow model. Comprehensive computational experiments are performed to examine the applicability of the TAT model. TAT can offer valuable insights for decision-makers about the number of staging areas, evacuation vehicles, and medical resources that are required to complete a large-scale evacuation based on the estimated number of evacuees. Journal: IIE Transactions Pages: 1287-1299 Issue: 11 Volume: 47 Year: 2015 Month: 11 X-DOI: 10.1080/0740817X.2015.1040929 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1040929 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:11:p:1287-1299 Template-Type: ReDIF-Article 1.0 Author-Name: Yingchieh Yeh Author-X-Name-First: Yingchieh Author-X-Name-Last: Yeh Author-Name: Bruce W. Schmeiser Author-X-Name-First: Bruce W. Author-X-Name-Last: Schmeiser Title: VAMP1RE: a single criterion for rating and ranking confidence-interval procedures Abstract: We propose VAMP1RE, a single criterion for rating and ranking confidence-interval procedures (CIPs) that use a fixed sample size. The quality of a CIP is traditionally thought to be many-dimensional, typically composed of the probability of covering the unknown performance measure and the mean (and sometimes the standard deviation) of interval width, each of these over some set of nominal coverage probabilities. These many criteria reflect symptoms, rather than causes, of CIP quality. The VAMP1RE criterion focuses on two causes: departure from validity—violation of assumptions—and inability to mimic—the dissimilarity, for every data set, of a CIP’s interval to that of an ideal CIP. The ideal CIP is both valid (that is, adheres to all assumptions) and is an agreed-upon standard; possibly the ideal CIP is allowed knowledge not available to the real-world CIPs of interest. A high inability to mimic the ideal CIP implies that a CIP uses data inefficiently. For a given CIP, the VAMP1RE criterion is the expected squared difference between Schruben’s coverage values (analogous to p values) arising from the given CIP and from the ideal CIP. The implication is that an interval arising from a particular data set is good not because it is large or small but, rather, it is good to the extent that it is similar to the interval provided by the ideal CIP. We discuss the relationship to Schruben’s coverage function, provide a graphical interpretation, decompose the VAMP1RE criterion into the two cause components, and provide examples to illustrate that the VAMP1RE criterion provides numerical values that are useful for rating and ranking CIPs.
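The traditional "symptoms" that VAMP1RE replaces, coverage probability and interval width, are straightforward to estimate by Monte Carlo. The sketch below, not the authors' criterion, estimates both for the standard t interval under a skewed (exponential) population, where the interval's normality assumption is violated; sample size, replication count, and distribution are hypothetical choices.

```python
# Minimal sketch (not the authors' criterion): Monte Carlo estimates of the
# coverage probability and mean width of the standard t confidence interval.
import numpy as np

rng = np.random.default_rng(0)
n, reps, true_mean = 10, 20_000, 1.0
tcrit = 2.262                                # t quantile at 0.975 with df = 9

cover = widths = 0.0
for _ in range(reps):
    x = rng.exponential(true_mean, n)        # skewed data: assumption violated
    half = tcrit * x.std(ddof=1) / np.sqrt(n)
    cover += abs(x.mean() - true_mean) <= half
    widths += 2 * half
print("coverage %.3f (nominal 0.95), mean width %.3f" % (cover / reps, widths / reps))
```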
Journal: IIE Transactions Pages: 1203-1216 Issue: 11 Volume: 47 Year: 2015 Month: 11 X-DOI: 10.1080/0740817X.2015.1047068 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1047068 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:11:p:1203-1216 Template-Type: ReDIF-Article 1.0 Author-Name: Maliheh Aramon Bajestani Author-X-Name-First: Maliheh Aramon Author-X-Name-Last: Bajestani Author-Name: Dragan Banjevic Author-X-Name-First: Dragan Author-X-Name-Last: Banjevic Title: Calendar-based age replacement policy with dependent renewal cycles Abstract: In this article, we introduce an age-based replacement policy in which the preventive replacements are restricted to specific calendar times. Under the new policy, the assets are renewed at failure or if their ages are greater than or equal to a replacement age at given calendar times, whichever occurs first. This policy is logistically applicable in industries such as utilities where there are large and geographically diverse populations of deteriorating assets with different installation times. Since preventive replacements are performed at fixed times, the renewal cycles are dependent random variables. Therefore, the classic renewal reward theorem cannot be directly applied. Using the theory of Markov chains with general state space and a suitably defined ergodic measure, we analyze the problem to find the optimal replacement age, minimizing the long-run expected cost per time unit. We further find the limiting distributions of the backward and forward recurrence times for this policy and show how our ergodic measure can be used to analyze more complicated policies. Finally, using a real data set of utility wood poles’ maintenance records, we numerically illustrate some of our results including the importance of defining an appropriate ergodic measure in reducing the computational expense. Journal: IIE Transactions Pages: 1016-1026 Issue: 11 Volume: 48 Year: 2016 Month: 11 X-DOI: 10.1080/0740817X.2016.1163444 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1163444 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:11:p:1016-1026 Template-Type: ReDIF-Article 1.0 Author-Name: Mei Han Author-X-Name-First: Mei Author-X-Name-Last: Han Author-Name: Matthias Hwai Yong Tan Author-X-Name-First: Matthias Hwai Author-X-Name-Last: Yong Tan Title: Integrated parameter and tolerance design with computer experiments Abstract: Robust parameter and tolerance design are effective methods to improve process quality. It is reported in the literature that the traditional two-stage approach that performs parameter design followed by tolerance design to reduce the sensitivity to variations of input characteristics is suboptimal. To mitigate the problem, an integrated parameter and tolerance design (IPTD) methodology that is suitable for linear models is suggested. In this article, a computer-aided IPTD approach for computer experiments is proposed, in which the means and tolerances of input characteristics are simultaneously optimized to minimize the total cost. A Gaussian process metamodel is used to emulate the response function to reduce the number of simulations. A closed-form expression for the posterior expected quality loss is derived to facilitate optimization in computer-aided IPTD. 
As there is often uncertainty about the true quality and tolerance costs, multiobjective optimization with quality loss and tolerance cost as objective functions is proposed to find robust optimal solutions. Journal: IIE Transactions Pages: 1004-1015 Issue: 11 Volume: 48 Year: 2016 Month: 11 X-DOI: 10.1080/0740817X.2016.1167289 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1167289 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:11:p:1004-1015 Template-Type: ReDIF-Article 1.0 Author-Name: Hu-Chen Liu Author-X-Name-First: Hu-Chen Author-X-Name-Last: Liu Author-Name: Jian-Xin You Author-X-Name-First: Jian-Xin Author-X-Name-Last: You Author-Name: Shouming Chen Author-X-Name-First: Shouming Author-X-Name-Last: Chen Author-Name: Yi-Zeng Chen Author-X-Name-First: Yi-Zeng Author-X-Name-Last: Chen Title: An integrated failure mode and effect analysis approach for accurate risk assessment under uncertainty Abstract: Failure Mode and Effect Analysis (FMEA) is a reliability analysis technique that plays a prominent role in improving the reliability and safety of systems, products, and/or services. Although commonly used in quality improvement efforts, the conventional Risk Priority Number (RPN) method has been heavily criticized in the literature for its various limitations, such as in failure mode evaluations, risk factor weights, and RPN computation. In this article, we describe the application of an ELECTRE (ELimination Et Choix Traduisant la REalité)-based outranking approach for FMEA within the interval two-tuple linguistic environment. Considering different types of FMEA team members' assessment information, we employ a hybrid averaging operator to construct the group assessment matrix and use a modified ELECTRE method to analyze the group interval two-tuple linguistic data. Furthermore, the new risk-ranking model deals with the subjective and objective weights of risk factors concurrently, considering the degree of importance that each concept has in the risk analysis. The practicality and applicability of the proposed methodology are demonstrated by applying it to a risk evaluation problem of proton beam radiotherapy, and a comparative study is conducted to validate the effectiveness of the new FMEA approach. Journal: IIE Transactions Pages: 1027-1042 Issue: 11 Volume: 48 Year: 2016 Month: 11 X-DOI: 10.1080/0740817X.2016.1172742 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1172742 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:11:p:1027-1042 Template-Type: ReDIF-Article 1.0 Author-Name: Yin Shu Author-X-Name-First: Yin Author-X-Name-Last: Shu Author-Name: Qianmei Feng Author-X-Name-First: Qianmei Author-X-Name-Last: Feng Author-Name: Edward P.C. Kao Author-X-Name-First: Edward P.C. Author-X-Name-Last: Kao Author-Name: Hao Liu Author-X-Name-First: Hao Author-X-Name-Last: Liu Title: Lévy-driven non-Gaussian Ornstein–Uhlenbeck processes for degradation-based reliability analysis Abstract: We use Lévy subordinators and non-Gaussian Ornstein–Uhlenbeck processes to model the evolution of degradation with random jumps. The superiority of our models stems from the flexibility of such processes in the modeling of stylized features of degradation data series such as jumps, linearity/nonlinearity, symmetry/asymmetry, and light/heavy tails. 
Based on corresponding Fokker–Planck equations, we derive explicit results for the reliability function and lifetime moments in terms of Laplace transforms, represented by Lévy measures. Numerical experiments are used to demonstrate that our general models perform well and are applicable for analyzing a large number of degradation phenomena. More important, they provide us with a new methodology to deal with multi-degradation processes under dynamic environments. Journal: IIE Transactions Pages: 993-1003 Issue: 11 Volume: 48 Year: 2016 Month: 11 X-DOI: 10.1080/0740817X.2016.1172743 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1172743 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:11:p:993-1003 Template-Type: ReDIF-Article 1.0 Author-Name: Mostafa Abouei Ardakan Author-X-Name-First: Mostafa Author-X-Name-Last: Abouei Ardakan Author-Name: Mohammad Sima Author-X-Name-First: Mohammad Author-X-Name-Last: Sima Author-Name: Ali Zeinal Hamadani Author-X-Name-First: Ali Author-X-Name-Last: Zeinal Hamadani Author-Name: David W. Coit Author-X-Name-First: David W. Author-X-Name-Last: Coit Title: A novel strategy for redundant components in reliability--redundancy allocation problems Abstract: This article presents a new interpretation and formulation of the Reliability–Redundancy Allocation Problem (RRAP) and demonstrates that solutions to this new problem provide distinct advantages compared with traditional approaches. Using redundant components is a common method to increase the reliability of a system. In order to add the redundant components to a system or a subsystem, there are two traditional types of strategies called active and standby redundancy. Recently a new redundancy strategy, called the “mixed” strategy, has been introduced. It has been proved that in the Redundancy Allocation Problem (RAP), this new strategy has a better performance compared with active and standby strategies alone. In this article, the recently introduced mixed strategy is implemented in the RRAP, which is more complicated than the RAP, and the results of using the mixed strategy are compared with the active and standby strategies. To analyze the performance of the new approach, some benchmark problems on the RRAP are selected and the mixed strategy is used to optimize the system reliability in these situations. Finally, the reliability of benchmark problems with the mixed strategy is compared with the best results of the systems when active or standby strategies are considered. The final results show that the mixed strategy results in an improvement in the reliability of all the benchmark problems and the new strategy outperforms the active and standby strategies in RRAP. Journal: IIE Transactions Pages: 1043-1057 Issue: 11 Volume: 48 Year: 2016 Month: 11 X-DOI: 10.1080/0740817X.2016.1189631 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1189631 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
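The two classical redundancy strategies that the RRAP abstract above mixes have simple closed forms in the 1-out-of-n exponential case, which makes the appeal of mixing concrete: cold standby with perfect switching dominates active (hot) redundancy for the same component count. The following sketch is illustrative only and not the authors' RRAP formulation; the failure rate and mission time are hypothetical.

```python
# Minimal sketch (not the authors' formulation): reliability of a 1-out-of-n
# subsystem of identical exponential components under the two classical
# redundancy strategies.
import math

def active(n, lam, t):
    """Parallel (hot) redundancy: the system fails when all n have failed."""
    return 1.0 - (1.0 - math.exp(-lam * t)) ** n

def cold_standby(n, lam, t):
    """Perfect-switch standby: lifetime is Erlang(n, lam), a Poisson tail sum."""
    return math.exp(-lam * t) * sum((lam * t) ** i / math.factorial(i) for i in range(n))

for n in (2, 3, 4):   # standby reliability exceeds active for every n
    print(n, round(active(n, 0.5, 2.0), 4), round(cold_standby(n, 0.5, 2.0), 4))
```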
Handle: RePEc:taf:uiiexx:v:48:y:2016:i:11:p:1043-1057 Template-Type: ReDIF-Article 1.0 Author-Name: Akram Khaleghei Author-X-Name-First: Akram Author-X-Name-Last: Khaleghei Author-Name: Viliam Makis Author-X-Name-First: Viliam Author-X-Name-Last: Makis Title: Reliability estimation of a system subject to condition monitoring with two dependent failure modes Abstract: A new competing risk model is proposed to calculate the Conditional Mean Residual Life (CMRL) and Conditional Reliability Function (CRF) of a system subject to two dependent failure modes, namely, degradation failure and catastrophic failure. The degradation process can be represented by a three-state continuous-time stochastic process having a healthy state, a warning state, and a failure state. The system is subject to condition monitoring at regular sampling times, which provides partial information about the system's working state; only the failure state is directly observable. To model the dependency between the two failure modes, it is assumed that the joint distribution of the time to catastrophic failure and the sojourn time in the healthy state follows a Marshall–Olkin bivariate exponential distribution. The Expectation–Maximization algorithm is developed to estimate the model's parameters and the explicit formulas for the CRF and CMRL are derived in terms of the posterior probability that the system is in the warning state. A comparison with a previously published model is provided to illustrate the effectiveness of the proposed model using real data. Journal: IIE Transactions Pages: 1058-1071 Issue: 11 Volume: 48 Year: 2016 Month: 11 X-DOI: 10.1080/0740817X.2016.1189632 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1189632 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:11:p:1058-1071 Template-Type: ReDIF-Article 1.0 Author-Name: Yan-Hui Lin Author-X-Name-First: Yan-Hui Author-X-Name-Last: Lin Author-Name: Yan-Fu Li Author-X-Name-First: Yan-Fu Author-X-Name-Last: Li Author-Name: Enrico Zio Author-X-Name-First: Enrico Author-X-Name-Last: Zio Title: Reliability assessment of systems subject to dependent degradation processes and random shocks Abstract: System failures can be induced either by internal degradation mechanisms or by external causes. In this article, we consider the reliability of systems experiencing both degradation and random shock processes. The dependencies between degradation processes and random shocks and those among the degradation processes are explicitly modeled. The degradation processes of system components are modeled using Multi-State Models (MSMs) and Physics-Based Models (PBMs). The piecewise-deterministic Markov process modeling framework is employed to combine MSMs and PBMs and to incorporate degradation and random shocks dependencies. The Monte Carlo simulation and finite-volume methods are used to compute the system reliability. A subsystem of a residual heat removal system in a nuclear power plant is considered as an illustrative case. Journal: IIE Transactions Pages: 1072-1085 Issue: 11 Volume: 48 Year: 2016 Month: 11 X-DOI: 10.1080/0740817X.2016.1190481 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1190481 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
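The Marshall–Olkin bivariate exponential used in the Khaleghei and Makis abstract above couples two failure times through a common shock, and it is easy to simulate and check against its closed-form joint survival function P(T1 > t1, T2 > t2) = exp(-l1*t1 - l2*t2 - l12*max(t1, t2)). The sketch below is illustrative only, not the authors' model fit, and all rates are hypothetical.

```python
# Minimal sketch (not the authors' model fit): simulating the Marshall-Olkin
# bivariate exponential via its common-shock construction.
import numpy as np

rng = np.random.default_rng(1)
l1, l2, l12, n = 0.3, 0.5, 0.2, 500_000
e1 = rng.exponential(1 / l1, n)      # shock hitting only component 1
e2 = rng.exponential(1 / l2, n)      # shock hitting only component 2
e12 = rng.exponential(1 / l12, n)    # common shock hitting both
t1_sim, t2_sim = np.minimum(e1, e12), np.minimum(e2, e12)

t1, t2 = 1.0, 2.0
empirical = np.mean((t1_sim > t1) & (t2_sim > t2))
exact = np.exp(-l1 * t1 - l2 * t2 - l12 * max(t1, t2))
print(empirical, exact)              # the two values should nearly agree
```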
Handle: RePEc:taf:uiiexx:v:48:y:2016:i:11:p:1072-1085 Template-Type: ReDIF-Article 1.0 Author-Name: Yuan Feng Author-X-Name-First: Yuan Author-X-Name-Last: Feng Author-Name: Xiang Zhong Author-X-Name-First: Xiang Author-X-Name-Last: Zhong Author-Name: Jingshan Li Author-X-Name-First: Jingshan Author-X-Name-Last: Li Author-Name: Wenhui Fan Author-X-Name-First: Wenhui Author-X-Name-Last: Fan Title: Analysis of closed-loop production lines with Bernoulli reliability machines: Theory and application Abstract: In this article, an iteration approach is introduced to study closed-loop production lines with a constant number of carriers. A Bernoulli machine reliability model is assumed. The closed-loop system is decomposed into multiple small loop lines and further down to two-machine loops, in which the distributions of carriers are derived. Then an iteration procedure is presented to estimate the interactions between the small loops to modify the carrier distributions. Upon convergence, the system production rate can be estimated using these distributions. The convergence of the procedure is proved analytically, and the accuracy of estimation is justified numerically. It is shown that the method has good accuracy and computational efficiency. In addition, a case study at an automotive assembly plant is introduced to illustrate the applicability of the method. Journal: IISE Transactions Pages: 143-160 Issue: 3 Volume: 50 Year: 2018 Month: 3 X-DOI: 10.1080/24725854.2017.1299957 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1299957 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:3:p:143-160 Template-Type: ReDIF-Article 1.0 Author-Name: James MacGregor Smith Author-X-Name-First: James MacGregor Author-X-Name-Last: Smith Title: Simultaneous buffer and service rate allocation in open finite queueing networks Abstract: Simultaneous buffer and service rate allocation in open finite queueing networks is a nonlinear mixed-integer programming problem that is $\mathcal{NP}$-hard. A queueing network decomposition methodology is coupled with a nonlinear sequential quadratic programming algorithm to compute the simultaneous optimal buffer allocations and service rates via a branch-and-bound scheme for various network topologies. It is shown that the optimization problem is a nonlinear convex programming problem, which assists in the search for local optimal solutions. The material handling or transportation system for transferring the finite customer population between the nodes in the network is also included. Extensive numerical results demonstrate the efficacy of the methodology for series, split, and merge topology networks. Examination of the persistence or absence of the allocation patterns of the service rates and buffers is one of the focal points of this work. Journal: IISE Transactions Pages: 203-216 Issue: 3 Volume: 50 Year: 2018 Month: 3 X-DOI: 10.1080/24725854.2017.1300359 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1300359 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
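The building block of the closed-loop decomposition in the Feng, Zhong, Li, and Fan abstract above is the two-machine Bernoulli line, whose production rate can be checked by a slot-by-slot simulation. The sketch below is illustrative only, not the authors' decomposition; the blocking/starvation conventions (machine 1 blocked when the buffer is full, machine 2 starved when it is empty, states observed at the start of each slot) and all parameters are assumptions for the example.

```python
# Minimal sketch (not the authors' method): simulating a two-machine Bernoulli
# line with an intermediate buffer of capacity N.
import random

def bernoulli_line(p1, p2, N, slots=1_000_000, seed=3):
    rng = random.Random(seed)
    buf, produced = 0, 0
    for _ in range(slots):
        up1 = rng.random() < p1           # machine 1 up this slot?
        up2 = rng.random() < p2           # machine 2 up this slot?
        take = up2 and buf > 0            # machine 2 consumes a part (not starved)
        put = up1 and buf < N             # machine 1 adds a part (not blocked)
        buf += put - take
        produced += take
    return produced / slots               # estimated production rate

print(bernoulli_line(p1=0.9, p2=0.85, N=3))   # hypothetical parameters
```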
Handle: RePEc:taf:uiiexx:v:50:y:2018:i:3:p:203-216 Template-Type: ReDIF-Article 1.0 Author-Name: Ayse Sena Eruguz Author-X-Name-First: Ayse Sena Author-X-Name-Last: Eruguz Author-Name: Tarkan Tan Author-X-Name-First: Tarkan Author-X-Name-Last: Tan Author-Name: Geert-Jan van Houtum Author-X-Name-First: Geert-Jan Author-X-Name-Last: van Houtum Title: Integrated maintenance and spare part optimization for moving assets Abstract: We consider an integrated maintenance and spare part optimization problem for a single critical component of a moving asset for which the degradation level is observable. Degradation is modeled as a function of the current operating mode, mostly dictated by the actual location of the moving asset. The spare part is stocked at the home base that the moving asset eventually visits. Alternatively, the spare part can be stocked on-board the moving asset to prevent costly expedited deliveries. The costs associated with spare part deliveries and part replacements depend on the operating mode. Our objective is to minimize the expected total discounted cost of spare part deliveries, part replacements, and inventory holding over an infinite planning horizon. We formulate the problem as a Markov decision process and characterize the structure of the optimal policy, which is shown to be a bi-threshold policy in each operating mode. Our numerical experiments show that the cost savings obtained by the integrated optimization of spare part inventory and part replacement decisions are significant. We also demonstrate the value of the integrated approach in a case study from the maritime sector. Journal: IISE Transactions Pages: 230-245 Issue: 3 Volume: 50 Year: 2018 Month: 3 X-DOI: 10.1080/24725854.2017.1312037 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1312037 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:3:p:230-245 Template-Type: ReDIF-Article 1.0 Author-Name: Sophie Weiss Author-X-Name-First: Sophie Author-X-Name-Last: Weiss Author-Name: Andrea Matta Author-X-Name-First: Andrea Author-X-Name-Last: Matta Author-Name: Raik Stolletz Author-X-Name-First: Raik Author-X-Name-Last: Stolletz Title: Optimization of buffer allocations in flow lines with limited supply Abstract: The supply of flow lines is often assumed to be unlimited or to follow certain distributions. However, this assumption may not always be realistic, as flow lines are usually an integral part of a supply chain where raw material is replenished based on some rule. We therefore incorporate the limited supply, in terms of an order policy, into the optimization of buffer capacities. To integrate this type of supply into an optimization model, we exploit the flexibility of a sample-based optimization approach. We develop an efficient rule-based local search algorithm that employs new individual lower bounds in order to determine the optimal buffer capacities of a flow line. In addition to the efficiency of the proposed algorithm, the numerical study demonstrates that the order policy has a significant impact on the optimal buffer allocation. Journal: IISE Transactions Pages: 191-202 Issue: 3 Volume: 50 Year: 2018 Month: 3 X-DOI: 10.1080/24725854.2017.1328751 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1328751 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
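Markov decision processes like the one formulated in the Eruguz, Tan, and van Houtum abstract above are typically solved by discounted value iteration, and threshold-structured policies emerge naturally in replacement settings. The sketch below is a toy condition-based replacement MDP, not the authors' moving-asset model: states, costs, and transition probabilities are all hypothetical.

```python
# Minimal sketch (not the authors' model): discounted value iteration for a toy
# condition-based replacement MDP. States 0..4 are degradation levels (4 = failed);
# actions are "continue" or "replace".
import numpy as np

n_s, gamma = 5, 0.95
c_replace, c_down = 10.0, 50.0
# Under "continue", state s < 4 stays with prob 0.6 or degrades to s+1 with 0.4.
P_cont = np.zeros((n_s, n_s))
for s in range(n_s - 1):
    P_cont[s, s], P_cont[s, s + 1] = 0.6, 0.4
P_cont[n_s - 1, n_s - 1] = 1.0               # failed stays failed until replaced

V = np.zeros(n_s)
for _ in range(1000):
    cost_cont = np.array([0, 0, 0, 0, c_down]) + gamma * P_cont @ V
    cost_repl = c_replace + gamma * V[0]     # replacement renews to state 0
    V_new = np.minimum(cost_cont, cost_repl)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new
policy = np.where(cost_cont <= cost_repl, "continue", "replace")
print(V.round(2), policy)                    # a threshold policy should emerge
```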
Handle: RePEc:taf:uiiexx:v:50:y:2018:i:3:p:191-202 Template-Type: ReDIF-Article 1.0 Author-Name: Stefan Helber Author-X-Name-First: Stefan Author-X-Name-Last: Helber Author-Name: Karl Inderfurth Author-X-Name-First: Karl Author-X-Name-Last: Inderfurth Author-Name: Florian Sahling Author-X-Name-First: Florian Author-X-Name-Last: Sahling Author-Name: Katja Schimmelpfeng Author-X-Name-First: Katja Author-X-Name-Last: Schimmelpfeng Title: Flexible versus robust lot-scheduling subject to random production yield and deterministic dynamic demand Abstract: We consider the problem of scheduling production lots for multiple products competing for a common production resource that processes the product units serially. The demand for each product and period is assumed to be known with certainty, but the yield per production lot is random as the production process can reach an out-of-control state while processing each single product unit of a lot. A service-level constraint is used to limit the backlog in the presence of this yield uncertainty. We address the question of how to determine static production lots and how to schedule these lots over the discrete periods of a finite planning horizon. The scheduling problem is characterized by a trade-off between the cost of holding inventory and the cost of overtime, whereas the production output is uncertain. For this purpose, we develop a rigid and robust planning approach and two flexible heuristic scheduling approaches. In an extensive numerical study, we compare the different approaches to assess the cost of operating according to a robust plan as opposed to a flexible policy. Journal: IISE Transactions Pages: 217-229 Issue: 3 Volume: 50 Year: 2018 Month: 3 X-DOI: 10.1080/24725854.2017.1357089 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1357089 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:3:p:217-229 Template-Type: ReDIF-Article 1.0 Author-Name: George Liberopoulos Author-X-Name-First: George Author-X-Name-Last: Liberopoulos Title: Performance evaluation of a production line operated under an echelon buffer policy Abstract: We consider a production line consisting of several machines in series separated by intermediate finite-capacity buffers. The line operates under an Echelon Buffer (EB) policy according to which each machine can store the parts that it produces in any of its downstream buffers if the next machine is occupied. If the capacities of all but the last buffer are zero, the EB policy is equivalent to constant work in process (CONWIP). To evaluate the performance of the line under the EB policy, we model it as a queueing network and we develop a method that is based on decomposing this network into as many nested segments as there are buffers and approximating each segment with a two-machine subsystem that can be analyzed in isolation. For the case where the machines have geometrically distributed processing times, we model each subsystem as a two-dimensional Markov chain that can be solved numerically. The parameters of the subsystems are determined by relationships among the flows of parts through the echelon buffers in the original system. An iterative algorithm is developed to solve these relationships. We use this method to evaluate the performance of several instances of five- and 10-machine lines including cases where the EB policy is equivalent to CONWIP. 
Our numerical results show that this method is highly accurate and computationally efficient. We also compare the performance of the EB policy against the performance of the traditional “installation buffer” policy according to which each machine can store the parts that it produces only in its immediate downstream buffer if the next machine is occupied. Journal: IISE Transactions Pages: 161-177 Issue: 3 Volume: 50 Year: 2018 Month: 3 X-DOI: 10.1080/24725854.2017.1390800 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1390800 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:3:p:161-177 Template-Type: ReDIF-Article 1.0 Author-Name: Engin Topan Author-X-Name-First: Engin Author-X-Name-Last: Topan Author-Name: Tarkan Tan Author-X-Name-First: Tarkan Author-X-Name-Last: Tan Author-Name: Geert-Jan van Houtum Author-X-Name-First: Geert-Jan Author-X-Name-Last: van Houtum Author-Name: Rommert Dekker Author-X-Name-First: Rommert Author-X-Name-Last: Dekker Title: Using imperfect advance demand information in lost-sales inventory systems with the option of returning inventory Abstract: Motivated by real-life applications, we consider an inventory system where it is possible to collect information about the quantity and timing of future demand in advance. However, this Advance Demand Information (ADI) is imperfect as (i) it may turn out to be false; (ii) a time interval is provided for the demand occurrences rather than their exact times; and (iii) there are still customer demand occurrences for which ADI cannot be provided. To make the best use of imperfect information and integrate it with inventory supply decisions, we allow for returning excess stock built up due to imperfections to the upstream supplier and we propose a lost-sales inventory model with a general representation of imperfect ADI. A partial characterization of the optimal ordering and return policy is provided. Through an extensive numerical study, we investigate the value of ADI and factors that affect that value. We show that using imperfect ADI can yield substantial savings, the amount of savings being sensitive to the quality of information; the benefit of the ADI increases considerably if the excess stock can be returned. We apply our model to a spare parts case. The value of imperfect ADI turns out to be significant. Journal: IISE Transactions Pages: 246-264 Issue: 3 Volume: 50 Year: 2018 Month: 3 X-DOI: 10.1080/24725854.2017.1403060 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1403060 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:3:p:246-264 Template-Type: ReDIF-Article 1.0 Author-Name: George Liberopoulos Author-X-Name-First: George Author-X-Name-Last: Liberopoulos Author-Name: Cathal Heavey Author-X-Name-First: Cathal Author-X-Name-Last: Heavey Author-Name: Stefan Helber Author-X-Name-First: Stefan Author-X-Name-Last: Helber Author-Name: Fikri Karaesmen Author-X-Name-First: Fikri Author-X-Name-Last: Karaesmen Author-Name: Andrea Matta Author-X-Name-First: Andrea Author-X-Name-Last: Matta Title: Contributions to stochastic models of manufacturing and service operations Journal: IISE Transactions Pages: 141-142 Issue: 3 Volume: 50 Year: 2018 Month: 3 X-DOI: 10.1080/24725854.2018.1404810 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1404810 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
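The lost-sales setting in the Topan, Tan, van Houtum, and Dekker abstract above can be made concrete with a plain base-stock simulation, before any ADI or return option is layered on. The sketch below is illustrative only and not the authors' model: it assumes a periodic-review system with Poisson demand, a fixed replenishment lead time, and an order-up-to policy, with all parameter values hypothetical.

```python
# Minimal sketch (not the authors' model): fill rate of a periodic-review,
# lost-sales inventory system under an order-up-to (base-stock) policy.
import numpy as np

def lost_sales_fill_rate(S, L=2, mean_demand=10.0, periods=200_000, seed=5):
    rng = np.random.default_rng(seed)
    on_hand = S
    pipeline = [0] * L                        # orders due in 1..L periods
    met = total = 0.0
    for _ in range(periods):
        on_hand += pipeline.pop(0)            # receive the order due this period
        d = rng.poisson(mean_demand)
        sales = min(on_hand, d)
        met += sales
        total += d
        on_hand -= sales                      # unmet demand is lost
        pipeline.append(S - on_hand - sum(pipeline))   # order up to level S
    return met / total

for S in (10, 15, 20, 25):                    # fill rate rises with the level S
    print(S, round(lost_sales_fill_rate(S), 3))
```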
Handle: RePEc:taf:uiiexx:v:50:y:2018:i:3:p:141-142 Template-Type: ReDIF-Article 1.0 Author-Name: Alessio Angius Author-X-Name-First: Alessio Author-X-Name-Last: Angius Author-Name: Marcello Colledani Author-X-Name-First: Marcello Author-X-Name-Last: Colledani Author-Name: Andras Horvath Author-X-Name-First: Andras Author-X-Name-Last: Horvath Title: Lead-time-oriented production control policies in two-machine production lines Abstract: The ability to meet target production lead times is of fundamental importance in modern manufacturing systems producing perishable products, where the product quality or value deteriorates with the time parts spend in the system, and in manufacturing contexts where strict lead time constraints are imposed due to tight shipping schedules. In these settings, traditional manufacturing system engineering methods and token-based production control policies lose effectiveness as they aim at achieving target production rates while minimizing inventory, without directly taking into account the effect on the lead time distribution. In this article, a production control policy for unreliable manufacturing systems that aims at maximizing the throughput of parts that respect a given lead time constraint is proposed for the first time. The proposed policy jointly considers the actual level of the buffer and the state of the second machine in the system and stops the part loading at the first machine if there is unacceptable risk of exceeding the lead time constraint. The effectiveness of this new policy against the traditional kanban policy is quantified by numerical analysis. The results show that this new policy outperforms the kanban policy by providing a tighter control on the production lead time. This approach paves the way to the introduction of new lead time–oriented production control policies to maximize the effective throughput in real manufacturing systems. Journal: IISE Transactions Pages: 178-190 Issue: 3 Volume: 50 Year: 2018 Month: 3 X-DOI: 10.1080/24725854.2017.1417654 File-URL: http://hdl.handle.net/10.1080/24725854.2017.1417654 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:3:p:178-190 Template-Type: ReDIF-Article 1.0 Author-Name: David Mildebrath Author-X-Name-First: David Author-X-Name-Last: Mildebrath Author-Name: Wendy Knight Author-X-Name-First: Wendy Author-X-Name-Last: Knight Author-Name: Andrew Schaefer Author-X-Name-First: Andrew Author-X-Name-Last: Schaefer Title: Optimal jersey retirement in the National Basketball Association Abstract: One of the highest honors an individual player in the National Basketball Association (NBA) can receive is to have their jersey number retired by a franchise. Players selected for jersey retirement are often chosen carefully, in part because each franchise has only a finite number of jerseys available for retirement. In this work, we present a method to optimize the selection of players to honor with jersey retirement. We first present a Markov Decision Process (MDP) to model the jersey retirement decisions of a given franchise. We then embed this MDP into a nonlinear regression model, which we solve approximately using a modified support vector machine. Our results indicate that most NBA franchises behave approximately in accordance with our optimality criteria. We also use our model to suggest optimal retirement decisions for several NBA franchises. 
Journal: IISE Transactions Pages: 363-376 Issue: 4 Volume: 52 Year: 2020 Month: 4 X-DOI: 10.1080/24725854.2019.1633030 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1633030 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:4:p:363-376 Template-Type: ReDIF-Article 1.0 Author-Name: Haowei Wang Author-X-Name-First: Haowei Author-X-Name-Last: Wang Author-Name: Jun Yuan Author-X-Name-First: Jun Author-X-Name-Last: Yuan Author-Name: Szu Hui Ng Author-X-Name-First: Szu Hui Author-X-Name-Last: Ng Title: Gaussian process based optimization algorithms with input uncertainty Abstract: Metamodels, as cheap approximation models for expensive-to-evaluate functions, have been commonly used in simulation optimization problems. Among various types of metamodels, the Gaussian Process (GP) model is popular for both deterministic and stochastic simulation optimization problems. However, input uncertainty is usually ignored in simulation optimization problems, and thus current GP-based optimization algorithms do not incorporate input uncertainty. This article aims to refine the current GP-based optimization algorithms to solve stochastic simulation optimization problems when input uncertainty is considered. Comprehensive numerical results indicate that our refined algorithms can find optimal designs more efficiently than the existing algorithms when input uncertainty is present. Journal: IISE Transactions Pages: 377-393 Issue: 4 Volume: 52 Year: 2020 Month: 4 X-DOI: 10.1080/24725854.2019.1639859 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1639859 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:4:p:377-393 Template-Type: ReDIF-Article 1.0 Author-Name: Emily L. Tucker Author-X-Name-First: Emily L. Author-X-Name-Last: Tucker Author-Name: Mark S. Daskin Author-X-Name-First: Mark S. Author-X-Name-Last: Daskin Author-Name: Burgunda V. Sweet Author-X-Name-First: Burgunda V. Author-X-Name-Last: Sweet Author-Name: Wallace J. Hopp Author-X-Name-First: Wallace J. Author-X-Name-Last: Hopp Title: Incentivizing resilient supply chain design to prevent drug shortages: policy analysis using two- and multi-stage stochastic programs Abstract: Supply chain disruptions have caused hundreds of shortages of medically-necessary drugs since 2011. Once a disruption occurs, the industry is limited in its ability to adapt, and improving strategic resiliency decisions is important to preventing future shortages. Yet, many shortages have been of low-margin, generic injectable drugs, and it is an open question whether resiliency is optimal. It is also unknown which policies would be effective at inducing companies to be resilient. To study these questions, we develop new supply chain design models that consider disruptions and recovery over time. The first model is a two-stage stochastic program that selects the configuration of suppliers, plants, and lines. The second is a multi-stage stochastic program that selects the configuration and target safety stock level. We then overlay incentives and regulations to change the market conditions and evaluate their effects on two generic oncology drug supply chains. We find that profit-maximizing firms may maintain vulnerable supply chains without intervention. 
Shortages may be reduced with: moderate failure-to-supply penalties; mandatory supply chain redundancy; substantial amounts of inventory; and/or large price increases. We compare policies by evaluating the societal costs of reducing the expected shortages to 2% and 5% of demand. Journal: IISE Transactions Pages: 394-412 Issue: 4 Volume: 52 Year: 2020 Month: 4 X-DOI: 10.1080/24725854.2019.1646441 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1646441 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:4:p:394-412 Template-Type: ReDIF-Article 1.0 Author-Name: Ankit Bansal Author-X-Name-First: Ankit Author-X-Name-Last: Bansal Author-Name: Reha Uzsoy Author-X-Name-First: Reha Author-X-Name-Last: Uzsoy Author-Name: Karl Kempf Author-X-Name-First: Karl Author-X-Name-Last: Kempf Title: Iterative combinatorial auctions for managing product transitions in semiconductor manufacturing Abstract: Successful management of product transitions in the semiconductor industry requires effective coordination of manufacturing and product development activities. Manufacturing units must meet demand for current products while also allocating capacity to product development units for prototype fabrication that will support timely introduction of new products into high-volume manufacturing. Knowledge of detailed operational constraints and capabilities is available only within each unit, precluding the use of a centralized planning model with complete information about all units. However, the decision support tools used by the individual units offer the possibility of a decentralized decision framework that uses these local models as components to rapidly obtain mutually acceptable, implementable solutions. We develop Iterative Combinatorial Auctions (ICAs) that achieve coordinated decisions for all units to maximize the firm’s profit while motivating all units to share information truthfully. Computational results show that the ICA that uses column generation to update prices outperforms the one using subgradient search, obtaining near-optimal corporate profit in short CPU times. Journal: IISE Transactions Pages: 413-431 Issue: 4 Volume: 52 Year: 2020 Month: 4 X-DOI: 10.1080/24725854.2019.1651951 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1651951 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:4:p:413-431 Template-Type: ReDIF-Article 1.0 Author-Name: Ge Yu Author-X-Name-First: Ge Author-X-Name-Last: Yu Author-Name: Sheldon H. Jacobson Author-X-Name-First: Sheldon H. Author-X-Name-Last: Jacobson Title: Approximation algorithms for scheduling C-benevolent jobs on weighted machines Abstract: This article considers a new variation of the online interval scheduling problem, which consists of scheduling C-benevolent jobs on multiple heterogeneous machines with different positive weights. The reward for completing a job assigned to a machine is given by the product of the job value and the machine weight. The objective of this scheduling problem is to maximize the total reward for completed jobs. Two classes of approximation algorithms are analyzed, Cooperative Greedy algorithms and Prioritized Greedy algorithms, and their competitive ratios are provided. We show that when the weight ratios between machines are small, the Cooperative Greedy algorithm outperforms the Prioritized Greedy algorithm. 
As the weight ratios increase, the Prioritized Greedy algorithm outperforms the Cooperative Greedy algorithm. Moreover, as the weight ratios approach infinity, the competitive ratio of the Prioritized Greedy algorithm approaches four. We also provide lower bounds of 3/2 and 9/7 on the competitive ratio of any deterministic algorithm for scheduling C-benevolent jobs on two and three machines with arbitrary weights, respectively. Journal: IISE Transactions Pages: 432-443 Issue: 4 Volume: 52 Year: 2020 Month: 4 X-DOI: 10.1080/24725854.2019.1657606 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1657606 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:4:p:432-443 Template-Type: ReDIF-Article 1.0 Author-Name: Afshin Oroojlooyjadid Author-X-Name-First: Afshin Author-X-Name-Last: Oroojlooyjadid Author-Name: Lawrence V. Snyder Author-X-Name-First: Lawrence V. Author-X-Name-Last: Snyder Author-Name: Martin Takáč Author-X-Name-First: Martin Author-X-Name-Last: Takáč Title: Applying deep learning to the newsvendor problem Abstract: The newsvendor problem is one of the most basic and widely applied inventory models. If the probability distribution of the demand is known, the problem can be solved analytically. However, approximating the probability distribution is not easy and is prone to error; therefore, the resulting solution to the newsvendor problem may not be optimal. To address this issue, we propose an algorithm based on deep learning that optimizes the order quantities for all products based on features of the demand data. Our algorithm integrates the forecasting and inventory-optimization steps, rather than solving them separately, as is typically done, and does not require knowledge of the probability distributions of the demand. One can view the optimal order quantities as the labels in the deep neural network. However, unlike most deep learning applications, our model does not know the true labels (order quantities), but rather learns them during training. Numerical experiments on real-world data suggest that our algorithm outperforms other approaches, including data-driven and machine learning approaches, especially for demands with high volatility. Finally, to show how this approach can be used for other inventory optimization problems, we provide an extension for (r, Q) policies. Journal: IISE Transactions Pages: 444-463 Issue: 4 Volume: 52 Year: 2020 Month: 4 X-DOI: 10.1080/24725854.2019.1632502 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1632502 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:4:p:444-463 Template-Type: ReDIF-Article 1.0 Author-Name: Özgen Karaer Author-X-Name-First: Özgen Author-X-Name-Last: Karaer Author-Name: Tim Kraft Author-X-Name-First: Tim Author-X-Name-Last: Kraft Author-Name: Pınar Yalçın Author-X-Name-First: Pınar Author-X-Name-Last: Yalçın Title: Supplier development in a multi-tier supply chain Abstract: We examine how a buyer can use a full-control strategy and cost sharing to develop the sustainable quality capabilities of his tier-1 and tier-2 suppliers. In particular, we consider how the buyer’s development decisions and the suppliers’ sustainable quality decisions are impacted by consumers’ demand sensitivity to sustainable quality and the division of the supply chain margin. 
Two quality-demand models are studied: the overall quality of the supply chain equals either (i) the sum of or (ii) the minimum of the suppliers’ quality levels. We find that when the suppliers’ sustainable quality levels are additive, even if the low-margin supplier has a positive net profit return from improved quality, she may still choose to free ride on the high-margin supplier’s quality investment. Interestingly, the buyer can cause the free riding with his cost-sharing decisions. When, instead, the overall sustainable quality is determined by the minimum of the suppliers’ quality levels, the buyer’s strategy is often to focus only on developing the low-margin supplier. Nevertheless, when the buyer’s market gain from improved quality is large and the suppliers’ gains are comparable to one another, the buyer can justify sharing costs with both suppliers and raising the overall sustainable quality of the supply chain to a level neither supplier can achieve without development support. Journal: IISE Transactions Pages: 464-477 Issue: 4 Volume: 52 Year: 2020 Month: 4 X-DOI: 10.1080/24725854.2019.1659523 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1659523 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:4:p:464-477 Template-Type: ReDIF-Article 1.0 Author-Name: Ali Diabat Author-X-Name-First: Ali Author-X-Name-Last: Diabat Author-Name: Alexandre Dolgui Author-X-Name-First: Alexandre Author-X-Name-Last: Dolgui Author-Name: Władysław Janiak Author-X-Name-First: Władysław Author-X-Name-Last: Janiak Author-Name: Mikhail Y. Kovalyov Author-X-Name-First: Mikhail Y. Author-X-Name-Last: Kovalyov Title: Three parallel task assignment problems with shared resources Abstract: We study three optimization problems in which non-renewable resources are used to execute tasks in parallel. The problems differ in their assumptions about whether a resource can be shared among several tasks and whether resource sharing between tasks is limited. We present very efficient solution procedures for two of these problems and prove that the third problem is NP-hard in the strong sense, although it can be solved efficiently in special cases. Applications include optimal resource allocation problems in labor-intensive cellular manufacturing and in parallel task computing. Journal: IISE Transactions Pages: 478-485 Issue: 4 Volume: 52 Year: 2020 Month: 4 X-DOI: 10.1080/24725854.2019.1680907 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1680907 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:4:p:478-485 Template-Type: ReDIF-Article 1.0 Author-Name: Y. Guan Author-X-Name-First: Y. Author-X-Name-Last: Guan Author-Name: K. Pan Author-X-Name-First: K. Author-X-Name-Last: Pan Author-Name: K. Zhou Author-X-Name-First: K. Author-X-Name-Last: Zhou Title: Correction Journal: IISE Transactions Pages: 486-487 Issue: 4 Volume: 52 Year: 2020 Month: 4 X-DOI: 10.1080/24725854.2019.1697094 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1697094 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:uiiexx:v:52:y:2020:i:4:p:486-487 Template-Type: ReDIF-Article 1.0 Author-Name: Subhamoy Ganguly Author-X-Name-First: Subhamoy Author-X-Name-Last: Ganguly Author-Name: Manuel Laguna Author-X-Name-First: Manuel Author-X-Name-Last: Laguna Title: Modeling and solving a closed-loop scheduling problem with two types of setups Abstract: Production systems with closed-loop facilities must deal with the problem of sequencing batches in consecutive loops. This article studies a problem encountered in a production facility in which plastic parts of several shapes must be painted with different colors to satisfy the demand given by a set of production orders. The shapes and the colors produce a dual-setup problem that, to the best of our knowledge, has not been considered in the literature. The problem is formulated as a mixed-integer program, and the limitations of this approach as a viable solution method are discussed. Two alternative heuristic solution approaches are described: a specialized procedure developed from scratch and another built within the framework of commercial software. The presented computational experiments were designed to assess the advantages and disadvantages of both approaches. Journal: IIE Transactions Pages: 880-891 Issue: 8 Volume: 47 Year: 2015 Month: 8 X-DOI: 10.1080/0740817X.2014.928963 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.928963 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:8:p:880-891 Template-Type: ReDIF-Article 1.0 Author-Name: Michael R. Wagner Author-X-Name-First: Michael R. Author-X-Name-Last: Wagner Title: Robust purchasing and information asymmetry in supply chains with a price-only contract Abstract: This article proves that information can be a double-edged sword in supply chains. A simple supply chain is studied that consists of one supplier and one retailer, interacting via a wholesale price contract, where one firm knows the probability distribution of demand and the other only knows the mean and variance. The firm with limited distributional knowledge applies simple robust optimization techniques. It is proved that a firm’s informational advantage is not necessarily beneficial and can lead to a reduction of the firm’s profit, demonstrating the detriment of information. It is shown how the direction of asymmetry, demand variability, and product economics affect both firms’ profits. These results also provide an understanding of how asymmetric information impacts the double-marginalization effect on the cumulative profits of the supply chain, in certain cases reducing the effect. The symmetric incomplete informational case, where both firms only know the mean and variance of demand, is also studied, and it is shown that it is possible that both firms can benefit from their collective lack of information. Throughout this article, practical guidelines on when a supplier or retailer is motivated to share, hide, or seek information are identified. Journal: IIE Transactions Pages: 819-840 Issue: 8 Volume: 47 Year: 2015 Month: 8 X-DOI: 10.1080/0740817X.2014.953644 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.953644 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:uiiexx:v:47:y:2015:i:8:p:819-840 Template-Type: ReDIF-Article 1.0 Author-Name: Kyoung-Kuk Kim Author-X-Name-First: Kyoung-Kuk Author-X-Name-Last: Kim Author-Name: Jun Liu Author-X-Name-First: Jun Author-X-Name-Last: Liu Author-Name: Chi-Guhn Lee Author-X-Name-First: Chi-Guhn Author-X-Name-Last: Lee Title: A stochastic inventory model with price quotation Abstract: This article studies a single-item periodic-review inventory problem with stochastic demand, uncertain price, and price search cost. At the beginning of a period, an inventory manager has to decide, considering the current inventory level, whether a price should be searched for at a non-zero cost. Once the price is known, she will have to decide the order size. For tractability, the number of realizable prices is limited to two and (r, S1, S2)-type policies are considered, where r is the threshold for the price search decision and Si is the order-up-to level for price pi, for i = 1, 2. Although the problem is significantly simplified, it still allows for price speculations by the inventory manager; i.e., she requests a quote but may not buy. The properties of the long-run average costs are studied, and optimization algorithms are presented. Numerical studies show the effectiveness of the proposed policy compared with the classic (s, S)-type policy and its natural three-parameter extension. Journal: IIE Transactions Pages: 851-864 Issue: 8 Volume: 47 Year: 2015 Month: 8 X-DOI: 10.1080/0740817X.2014.955598 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.955598 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:8:p:851-864 Template-Type: ReDIF-Article 1.0 Author-Name: Emre Kirac Author-X-Name-First: Emre Author-X-Name-Last: Kirac Author-Name: Ashlea Bennett Milburn Author-X-Name-First: Ashlea Bennett Author-X-Name-Last: Milburn Author-Name: Clarence Wardell Author-X-Name-First: Clarence Author-X-Name-Last: Wardell Title: The Traveling Salesman Problem with Imperfect Information with Application in Disaster Relief Tour Planning Abstract: Many in the disaster response community have begun to explore ways to use information posted on social media platforms to identify a larger set of needs in a shorter amount of time following a disaster. However, needs communicated through social media platforms are initially unverified, so many within the emergency response community remain skeptical about the usefulness of such information. Consequently, as emergency managers consider whether to incorporate social media data in disaster planning efforts, a key tradeoff must be assessed. Ignoring information discovered on social media increases confidence in the accuracy of the needs to which resources are allocated, but risks leaving unassisted those populations that have not yet been discovered through traditional means. This paper introduces a new problem framework that describes a formal method for quantitatively assessing the impact of including unverified information in disaster relief planning. The usefulness of the framework is demonstrated in the context of the traveling salesman problem. A decision approach that considers social media information is compared to one that does not on the basis of the total response time of the resulting tours. A case study that considers variations in report accuracy and quantity for uniformly distributed demand instances is presented. 
Journal: IIE Transactions Pages: 783-799 Issue: 8 Volume: 47 Year: 2015 Month: 8 X-DOI: 10.1080/0740817X.2014.976351 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.976351 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:8:p:783-799 Template-Type: ReDIF-Article 1.0 Author-Name: Zhongsheng Hua Author-X-Name-First: Zhongsheng Author-X-Name-Last: Hua Author-Name: Yimin Yu Author-X-Name-First: Yimin Author-X-Name-Last: Yu Author-Name: Wei Zhang Author-X-Name-First: Wei Author-X-Name-Last: Zhang Author-Name: Xiaoyan Xu Author-X-Name-First: Xiaoyan Author-X-Name-Last: Xu Title: Structural properties of the optimal policy for dual-sourcing systems with general lead times Abstract: This article considers a periodic-review inventory problem with two suppliers. The regular supplier has a longer lead time than the expedited supplier but has a lower unit cost. The structural properties of the optimal orders are characterized using the notion of L♮-convexity. Interestingly, the optimal regular order is more sensitive to the late-to-arrive outstanding orders, but the optimal expedited order is more sensitive to the soon-to-arrive outstanding orders. A heuristic policy is designed that provides an average cost saving of 1.02% over the best heuristic policy in the literature. Journal: IIE Transactions Pages: 841-850 Issue: 8 Volume: 47 Year: 2015 Month: 8 X-DOI: 10.1080/0740817X.2014.982839 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.982839 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:8:p:841-850 Template-Type: ReDIF-Article 1.0 Author-Name: Mingzhu Yu Author-X-Name-First: Mingzhu Author-X-Name-Last: Yu Author-Name: Kap Hwan Kim Author-X-Name-First: Kap Hwan Author-X-Name-Last: Kim Author-Name: Chung-Yee Lee Author-X-Name-First: Chung-Yee Author-X-Name-Last: Lee Title: Inbound container storage pricing schemes Abstract: In ocean transportation, contractual and operational relationships may be asynchronous. A case in point is the relationship between a customer requiring inbound container storage and a container terminal operator. The customer may store inbound containers in a container terminal and will pay the ocean carrier a specific storage fee, without establishing any contractual relationship with the terminal operator. After collecting the storage fee, the ocean carrier will then pay the container terminal operator for the storage of the inbound container. In this article, we study two-level inbound container storage pricing problems involving a container terminal operator and an ocean carrier in two different inbound container storage contract systems: the free-time contract system and the free-space contract system. In each contract system, we propose a two-stage pricing game model. We derive the optimal decisions for the container terminal operator and the ocean carrier. An analysis of the coordination of the two contract systems is also provided. Numerical studies are conducted to compare the outcomes of the two systems for the ocean carrier and the container terminal. The results of computational experiments reveal that the free-space contract system is a preferable strategy for busy container terminals in terms of traffic control. 
Journal: IIE Transactions Pages: 800-818 Issue: 8 Volume: 47 Year: 2015 Month: 8 X-DOI: 10.1080/0740817X.2014.999179 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.999179 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:8:p:800-818 Template-Type: ReDIF-Article 1.0 Author-Name: Kai Hoberg Author-X-Name-First: Kai Author-X-Name-Last: Hoberg Author-Name: Ulrich W. Thonemann Author-X-Name-First: Ulrich W. Author-X-Name-Last: Thonemann Title: Analyzing Variability, Cost, and Responsiveness of Base-Stock Inventory Policies with Linear Control Theory Abstract: The effect of inventory policies on order variability has been analyzed extensively. Two popular means of reducing order variability are demand smoothing and order smoothing. If the objective is minimizing order variability, demands and orders can be heavily smoothed, resulting in an inventory policy that orders equal amounts in each time period. Such a policy obviously minimizes order variability, but it leads to high cost and low responsiveness of the inventory system. To optimize the overall performance of an inventory system, the effect of the inventory policy on all relevant dimensions of operational performance must be analyzed. We address this issue and analyze the effect of the parameter values of an inventory policy on three main dimensions of operational performance: order variability, expected cost, and responsiveness. The inventory policy we use is the partial correction policy, a policy that can be used to smooth demand and to smooth orders. To analyze this policy, we use linear control theory. We derive the transfer function of the policy and prove the stability of the inventory system under this policy. Then, we determine the effect of the policy parameters on order variability, cost, and responsiveness and discuss how good parameter values can be chosen. Journal: IIE Transactions Pages: 865-879 Issue: 8 Volume: 47 Year: 2015 Month: 8 X-DOI: 10.1080/0740817X.2014.999897 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.999897 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:8:p:865-879 Template-Type: ReDIF-Article 1.0 Author-Name: Michael Hassoun Author-X-Name-First: Michael Author-X-Name-Last: Hassoun Author-Name: Gad Rabinowitz Author-X-Name-First: Gad Author-X-Name-Last: Rabinowitz Author-Name: Noam Reshef Author-X-Name-First: Noam Author-X-Name-Last: Reshef Title: Security agent allocation to partially observable heterogeneous frontier segments Abstract: This article proposes a stochastic attention allocation and reactive scheduling model to prevent illegal border crossings. To intercept infiltrators, a limited pool of security agents is dynamically assigned to heterogeneous frontier segments that transmit erratic signals of crossing attempts by independent trespassers. The frontier segments may differ in terms of rates of crossing attempts, ease of crossing, and reliability of the detection systems. Due to the high complexity of the agent scheduling decision, a relaxed Markovian model is proposed, whose solution is a set of optimal steady-state allocation rates for sending security agents to any frontier segment where a crossing attempt is apparently taking place. This solution is used to derive a heuristic policy for dispatching security agents among the frontier segments based on the evolving signals. 
Simulation experiments demonstrate that the proposed heuristic outperforms other scheduling policies. Border crossing is just one example of a viable application for this attention allocation model, which can be extended and customized for a wide variety of other scenarios. Journal: IIE Transactions Pages: 566-574 Issue: 8 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.532852 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.532852 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:8:p:566-574 Template-Type: ReDIF-Article 1.0 Author-Name: Ahmed Ghoniem Author-X-Name-First: Ahmed Author-X-Name-Last: Ghoniem Author-Name: Hanif Sherali Author-X-Name-First: Hanif Author-X-Name-Last: Sherali Title: Defeating symmetry in combinatorial optimization via objective perturbations and hierarchical constraints Abstract: This article introduces the concept of defeating symmetry in combinatorial optimization via objective perturbations based on, and combined with, symmetry-defeating constraints. Under this novel reformulation, the original objective function is suitably perturbed using a weighted sum of expressions derived from hierarchical symmetry-defeating constraints in a manner that preserves optimality and judiciously guides and curtails the branch-and-bound enumeration process. Computational results are presented for a noise dosage problem, a doubles tennis scheduling problem, and a wagon load-balancing problem to demonstrate the efficacy of using this strategy in concert with traditional hierarchical symmetry-defeating constraints. The proposed methodology is shown to significantly outperform the use of hierarchical constraints or objective perturbations in isolation, as well as the automatic symmetry-defeating feature that is enabled by CPLEX, version 11.2. Journal: IIE Transactions Pages: 575-588 Issue: 8 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.541899 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.541899 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:8:p:575-588 Template-Type: ReDIF-Article 1.0 Author-Name: Fred Easton Author-X-Name-First: Fred Author-X-Name-Last: Easton Title: Cross-training performance in flexible labor scheduling environments Abstract: Cross-training effectively pools multiple demand streams, improving service levels and, when demand streams are negatively correlated, boosting productivity. When services operate for extended hours, however, those benefits are intermittent because employees take their skills home with them at the end of their shift. This study explores how cross-training and workforce management decisions interact to affect labor costs and service levels in extended-hour service operations with uncertain demand and employee attendance. Using a two-stage stochastic model, we first optimally staff, cross-train, schedule, and allocate workers across departments. We then simulate demand and attendance and, as needed, re-allocate available cross-trained workers to best satisfy realized demand. Comparing the performance of full and partial cross-training policies with that of dedicated specialists, we found that cross-training often, but not always, dominated the performance of a specialized workforce. When cross-trained workers are less proficient than specialists, however, increased cross-training forced tradeoffs between workforce size and capacity shortages. 
However, both workforce size and service levels often improved with increased scheduling flexibility. Further, increased scheduling flexibility appears to be an efficient strategy for mitigating the effects of absenteeism. Thus, scheduling flexibility may be an important cofactor for exploiting the benefits of cross-training in labor scheduling environments. Journal: IIE Transactions Pages: 589-603 Issue: 8 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.550906 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.550906 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:8:p:589-603 Template-Type: ReDIF-Article 1.0 Author-Name: Joachim Arts Author-X-Name-First: Joachim Author-X-Name-Last: Arts Author-Name: Marcel van Vuuren Author-X-Name-First: Marcel Author-X-Name-Last: van Vuuren Author-Name: Gudrun Kiesmüller Author-X-Name-First: Gudrun Author-X-Name-Last: Kiesmüller Title: Efficient optimization of the dual-index policy using Markov chains Abstract: This article considers the inventory control of a single product in one location with two supply sources facing stochastic demand. A premium is paid for each product ordered from the faster “emergency” supply source. Unsatisfied demand is backordered and ordering decisions are made periodically. The optimal control policy for this system is known to be complex. For this reason a type of base-stock policy known as the Dual-Index Policy (DIP) is used as the control mechanism for this inventory system. Under this policy ordering decisions are based on a regular and an emergency inventory position and their corresponding order-up-to levels. Previous work on this policy assumes deterministic lead times and uses simulation to find the optimal order-up-to levels. This article provides an alternative proof of the result that separates the optimization of the DIP into two one-dimensional problems. An insight from this proof allows the model to be generalized to accommodate stochastic regular lead times and provides an approximate evaluation method based on limiting results, so that optimization can be done without simulation. An extensive numerical study shows that this approach yields excellent results for deterministic lead times and good results for stochastic lead times. Journal: IIE Transactions Pages: 604-620 Issue: 8 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.550908 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.550908 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:8:p:604-620 Template-Type: ReDIF-Article 1.0 Author-Name: Adrian Lee Author-X-Name-First: Adrian Author-X-Name-Last: Lee Author-Name: Sheldon Jacobson Author-X-Name-First: Sheldon Author-X-Name-Last: Jacobson Title: Evaluating the effectiveness of sequential aviation security screening policies Abstract: Passenger and baggage screening is an essential component of aviation security systems designed to detect and remove threats endangering the safety of air transportation. Upon entering the security checkpoint, passengers are assigned to a multilevel security class system defined by sets of screening devices. The sequential passenger assignment problem has been formulated as a stochastic process, with the objective of maximizing the overall true alarm rate, subject to device capacity constraints. 
This article introduces three metrics for evaluating the performance of sequential passenger assignment policies with respect to the retrospective optimal solution. The concepts of under-screening and over-screening passengers are defined, from which expressions for the expectation and variance of the number of under-screened passengers are obtained in part through a Markov chain. A conditional probability inequality is used to develop an upper bound on the probability of attaining the set of optimal assignments for a given realization of passenger risk. Estimators for the performance metrics are presented for the efficient simulation of cases involving a large number of passenger assignments. The key result is that for populations containing a majority of low-risk passengers, an initial underestimation of the overall risk level produces fewer under-screened passengers than result when the true risk level lies below what was anticipated. Furthermore, fewer passengers are assigned an inappropriate screening intensity if security class capacities are biased toward screening the majority of low-risk passengers using a combination of low-intensity detection devices, while reserving the high-intensity, time-consuming devices for the limited number of high-risk passengers. Journal: IIE Transactions Pages: 547-565 Issue: 8 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.550909 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.550909 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:8:p:547-565 Template-Type: ReDIF-Article 1.0 Author-Name: Xiaojun (Gene) Shan Author-X-Name-First: Xiaojun (Gene) Author-X-Name-Last: Shan Author-Name: Frank A. Felder Author-X-Name-First: Frank A. Author-X-Name-Last: Felder Author-Name: David W. Coit Author-X-Name-First: David W. Author-X-Name-Last: Coit Title: Game-theoretic models for electric distribution resiliency/reliability from a multiple stakeholder perspective Abstract: We study decentralized decisions among resiliency investors for hardening electric distribution systems under governance, which could coordinate the achievement of social optimums. Significant investments are being made to build resilient infrastructure for societal well-being by hardening electric distribution networks. However, whether independent investment decisions can reach social optimums is not well studied. Previous research has focused on optimization of system designs to improve resiliency, with limited modeling efforts on the interactions of decentralized decision making. Within regulatory governance, we investigate interactions between two independent resiliency investors with a game-theoretic model incorporating detailed payoff functions. Moreover, we demonstrate the framework with typical data and sensitivity analyses. We find that the decentralized optimal solution is not a social optimum without governance and that the government could subsidize grid hardening to achieve the social optimum. Additionally, we conduct Monte Carlo simulations by varying key parameters and find that a socially undesirable outcome could occur with the highest frequency. Therefore, it is important to narrow the uncertain ranges for particular benefits/costs and use policy instruments to induce the socially desired outcomes. 
These results yield important insights into the role of regulatory governance in supervising resiliency investors and highlight the significance of studying the interactions between independent investors. Journal: IISE Transactions Pages: 159-177 Issue: 2 Volume: 49 Year: 2017 Month: 2 X-DOI: 10.1080/0740817X.2016.1213466 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1213466 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:2:p:159-177 Template-Type: ReDIF-Article 1.0 Author-Name: Mark M. Nejad Author-X-Name-First: Mark M. Author-X-Name-Last: Nejad Author-Name: Lena Mashayekhy Author-X-Name-First: Lena Author-X-Name-Last: Mashayekhy Author-Name: Ratna Babu Chinnam Author-X-Name-First: Ratna Babu Author-X-Name-Last: Chinnam Author-Name: Daniel Grosu Author-X-Name-First: Daniel Author-X-Name-Last: Grosu Title: Online scheduling and pricing for electric vehicle charging Abstract: We design strategy-proof online scheduling and pricing mechanisms for Electric Vehicle (EV) charging in a competitive environment. EV drivers submit their requests for charging services dynamically over time, and they can name their own price on the charging services. The mechanisms schedule EV charging and determine charging prices considering the incentives of both EV drivers and power providers. In addition, our proposed online mechanisms do not assume availability of information about future demand. Our charging mechanisms are preemption-aware, allowing flexibility in when charging takes place. This is in alignment with power providers’ load-balancing goals. We perform extensive experiments to investigate the performance of our proposed mechanisms compared to that of the optimal offline mechanism. We analyze the various properties of our proposed mechanisms; in particular, we prove that they are strategy-proof; that is, truthful reporting of price and amount of charging is a dominant strategy for self-interested EV drivers. Journal: IISE Transactions Pages: 178-193 Issue: 2 Volume: 49 Year: 2017 Month: 2 X-DOI: 10.1080/0740817X.2016.1213467 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1213467 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:2:p:178-193 Template-Type: ReDIF-Article 1.0 Author-Name: Ruina Yang Author-X-Name-First: Ruina Author-X-Name-Last: Yang Author-Name: Xiaochen Gao Author-X-Name-First: Xiaochen Author-X-Name-Last: Gao Author-Name: Chung-Yee Lee Author-X-Name-First: Chung-Yee Author-X-Name-Last: Lee Title: A novel floating price contract for the ocean freight industry Abstract: In this article, we investigate the carrier–shipper contracting issue arising from the ocean freight industry. In current practice, the carrier may sign a contract with the shipper stating a fixed price per container for the whole year, which comprises a low-demand season followed by a high-demand season. By accepting the contract, the shipper agrees to comply with the obligation of transporting the exact number of containers needed in both the low and high seasons at the contracted price. However, the shipper will often default on the committed quantity in the low season when the spot market price drops below the contract price. To address this problem, we propose a floating price contract. 
In the article, we characterize the optimal contract parameters and the shipper's optimal strategy and evaluate the effectiveness of the floating price contract with a combination of analytical and numerical methods. We find that, under a broad class of macroeconomic market conditions, the floating price contract not only enables the carrier to tackle the contract default issue more effectively while better enjoying the benefits of capacity planning, but also makes the shipper better off than relying solely on the spot market. Journal: IISE Transactions Pages: 194-208 Issue: 2 Volume: 49 Year: 2017 Month: 2 X-DOI: 10.1080/0740817X.2016.1215610 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1215610 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:2:p:194-208 Template-Type: ReDIF-Article 1.0 Author-Name: Izack Cohen Author-X-Name-First: Izack Author-X-Name-Last: Cohen Author-Name: Chen Epstein Author-X-Name-First: Chen Author-X-Name-Last: Epstein Author-Name: Tal Shima Author-X-Name-First: Tal Author-X-Name-Last: Shima Title: On the discretized Dubins Traveling Salesman Problem Abstract: This research deals with a variation of the Traveling Salesman Problem in which the cost of a tour, during which a kinematically constrained vehicle visits a set of targets, has to be minimized. We are motivated by situations that include motion planning for unmanned aerial, marine, and ground vehicles, to name just a few possible applications. We discretize the original continuous problem and explicitly formulate it as an integer optimization problem. We then develop a performance bound as a function of the discretization level and the number of targets. The inclusion of a discretization level provides an opportunity to achieve tighter bounds, compared to what has been reported in the literature. We perform a numerical study that quantifies the performance of the suggested approach. The suggested linkage between discretization level, number of targets, and performance may guide discretization-level choices for the solution of motion planning problems. Specifically, theoretical and numerical results indicate that, in many instances, discretization may be set at a low level to strike a balance between computational time and the length of a tour. Journal: IISE Transactions Pages: 238-254 Issue: 2 Volume: 49 Year: 2017 Month: 2 X-DOI: 10.1080/0740817X.2016.1217101 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1217101 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:2:p:238-254 Template-Type: ReDIF-Article 1.0 Author-Name: Kai-Chuan Yang Author-X-Name-First: Kai-Chuan Author-X-Name-Last: Yang Author-Name: Candace Arai Yano Author-X-Name-First: Candace Arai Author-X-Name-Last: Yano Title: Multi-product (R, T) inventory policies with short-shipping: Enabling fixed order intervals and well-utilized vehicles under random demand Abstract: We devise and analyze a new (R, T) policy that allows for short shipments (shipping less than the desired quantities) when the total order quantity would otherwise exceed a truckload. All products are ordered every T time units, with a target order-up-to level of Ri for each product i, where T and the Ri values can be decided. 
By allowing short shipments, it is possible to maintain both a fixed order interval, which manufacturers often prefer because it facilitates their production planning (thereby reducing their costs indirectly), and relatively high utilization of truck capacity, which is a key concern of customers when they face transportation economies of scale and thus prefer that orders utilize the truck capacity well. We develop an approach to find near-optimal control parameters for our proposed (R, T) policy with short shipments. In a numerical study, we show that our proposed policy performs comparably to a (Q, S) policy that roughly mimics how customers would order if they were concerned about filling a truck when placing an order. Thus, the proposed (R, T) policy can be used in place of a continuous review policy, offering the potential for better coordination between the customer and manufacturer when the manufacturer prefers fixed order timing. Journal: IISE Transactions Pages: 209-222 Issue: 2 Volume: 49 Year: 2017 Month: 2 X-DOI: 10.1080/0740817X.2016.1224398 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1224398 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:2:p:209-222 Template-Type: ReDIF-Article 1.0 Author-Name: Yiwei Huang Author-X-Name-First: Yiwei Author-X-Name-Last: Huang Author-Name: H. Neil Geismar Author-X-Name-First: H. Neil Author-X-Name-Last: Geismar Author-Name: Divakar Rajamani Author-X-Name-First: Divakar Author-X-Name-Last: Rajamani Author-Name: Suresh Sethi Author-X-Name-First: Suresh Author-X-Name-Last: Sethi Author-Name: Chelliah Sriskandarajah Author-X-Name-First: Chelliah Author-X-Name-Last: Sriskandarajah Author-Name: Marcelo Carlos Author-X-Name-First: Marcelo Author-X-Name-Last: Carlos Title: Optimizing logistics operations in a country's currency supply network Abstract: We optimize a large country's currency supply network for its central bank. The central bank provides currency to all branches (which in turn serve consumers and commerce) through its network of big vaults, regional vaults, and retail vaults. The central bank intends to reduce its total transportation cost by enlarging a few retail vaults to regional vaults. It seeks further reductions by optimizing the sourcing in the updated currency network. We develop an optimization model to select the retail vaults to upgrade, so that the total cost is minimized. Optimally choosing which retail vaults to upgrade is strongly NP-hard, so we develop an efficient heuristic that provides solutions whose costs average less than 3% above the optimum for realistic problem instances. An implementation of our methodology for a particular state has generated a total cost reduction of approximately 57% (equivalently, $2 million). To optimize the sourcing, we propose an alternative delivery process that further reduces the transportation cost by over 31% for the actual collected data and by over 38% for randomly generated data. This alternative optimizes the sourcing within the new currency network and requires significantly less computational effort. Journal: IISE Transactions Pages: 223-237 Issue: 2 Volume: 49 Year: 2017 Month: 2 X-DOI: 10.1080/0740817X.2016.1224958 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1224958 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:uiiexx:v:49:y:2017:i:2:p:223-237 Template-Type: ReDIF-Article 1.0 Author-Name: Kai He Author-X-Name-First: Kai Author-X-Name-Last: He Author-Name: Lisa M. Maillart Author-X-Name-First: Lisa M. Author-X-Name-Last: Maillart Author-Name: Oleg A. Prokopyev Author-X-Name-First: Oleg A. Author-X-Name-Last: Prokopyev Title: Optimal planning of unpunctual preventive maintenance Abstract: In traditional maintenance decision-making, maintenance planners assume that their prescribed Preventive Maintenance (PM) policies will be implemented without error. In practice, however, the individuals responsible for implementing such plans often deviate from the intended PM policy, resulting in unpunctual PM actions. We formulate cost-rate minimizing models to investigate the impact of such deviations, assuming that the actual PM time differs from the scheduled PM time in a probabilistic manner. We establish both analytical and numerical results for two specific types of maintenance policies, namely, age replacement with and without minimal repair. Journal: IISE Transactions Pages: 127-143 Issue: 2 Volume: 49 Year: 2017 Month: 2 X-DOI: 10.1080/0740817X.2016.1224959 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1224959 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:2:p:127-143 Template-Type: ReDIF-Article 1.0 Author-Name: Mor Kaspi Author-X-Name-First: Mor Author-X-Name-Last: Kaspi Author-Name: Tal Raviv Author-X-Name-First: Tal Author-X-Name-Last: Raviv Author-Name: Michal Tzur Author-X-Name-First: Michal Author-X-Name-Last: Tzur Title: Bike-sharing systems: User dissatisfaction in the presence of unusable bicycles Abstract: In bike-sharing systems, at any given moment, a certain share of the bicycle fleet is unusable. This phenomenon may significantly affect the quality of service provided to the users. However, to date this matter has not received any attention in the literature. In this article, the users' quality of service is modeled in terms of their satisfaction from the system. We measure user dissatisfaction using a weighted sum of the expected shortages of bicycles and lockers at a single station. The shortages are evaluated as a function of the initial inventory of usable and unusable bicycles at the station. We analyze the convexity of the resulting bivariate function and propose an accurate method for fitting a convex polyhedral function to it. The fitted polyhedral function can later be used in linear optimization models for operational and strategic decision making in bike-sharing systems. Our numerical results demonstrate the significant effect of the presence of unusable bicycles on the level of user dissatisfaction. This emphasizes the need to have accurate real-time information regarding bicycle usability. Journal: IISE Transactions Pages: 144-158 Issue: 2 Volume: 49 Year: 2017 Month: 2 X-DOI: 10.1080/0740817X.2016.1224960 File-URL: http://hdl.handle.net/10.1080/0740817X.2016.1224960 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:49:y:2017:i:2:p:144-158 Template-Type: ReDIF-Article 1.0 Author-Name: Tirthankar Dasgupta Author-X-Name-First: Tirthankar Author-X-Name-Last: Dasgupta Author-Name: Benjamin Weintraub Author-X-Name-First: Benjamin Author-X-Name-Last: Weintraub Author-Name: V. Joseph Author-X-Name-First: V. 
Author-X-Name-Last: Joseph Title: A physical–statistical model for density control of nanowires Abstract: In order to develop a simple, scalable, and cost-effective technique for controlling zinc oxide nanowire array growth density, layer-by-layer polymer thin films were used in a solution-based growth process. The objective of this article is to develop a model connecting the thickness of polymer films to the observed density of nanowires that would enable prediction, and consequently control, of nanowire array density. A physical–statistical model that incorporates available physical knowledge of the process in a statistical framework is proposed. Model parameters are estimated using the maximum likelihood method. Apart from helping scientists achieve the basic objectives of prediction, control, and quantification of uncertainty, the model facilitates a better understanding of the fundamental scientific phenomena that explain the growth mechanism. Journal: IIE Transactions Pages: 233-241 Issue: 4 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.505124 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.505124 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:4:p:233-241 Template-Type: ReDIF-Article 1.0 Author-Name: Haifeng Xia Author-X-Name-First: Haifeng Author-X-Name-Last: Xia Author-Name: Yu Ding Author-X-Name-First: Yu Author-X-Name-Last: Ding Author-Name: Bani Mallick Author-X-Name-First: Bani Author-X-Name-Last: Mallick Title: Bayesian hierarchical model for combining misaligned two-resolution metrology data Abstract: This article presents a Bayesian hierarchical model to combine misaligned two-resolution metrology data for inspecting the geometric quality of manufactured parts. High-resolution data points are scarce and scattered over the surface being measured, while low-resolution data are pervasive but less accurate and less precise. Combining the two datasets should produce better predictions than using a single dataset. One challenge in combining them is the misalignment between data from different resolutions. This article attempts to address this issue and make improved predictions. The proposed method improves on the methods of using a single dataset or a combined prediction that does not address the misalignment problem. Improvements of 24% to 74% are demonstrated both for simulated data of circles and for datasets obtained from a milled sinewave surface measured by two coordinate measuring machines of different resolutions. Journal: IIE Transactions Pages: 242-258 Issue: 4 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.521804 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.521804 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:4:p:242-258 Template-Type: ReDIF-Article 1.0 Author-Name: Yanting Li Author-X-Name-First: Yanting Author-X-Name-Last: Li Author-Name: Fugee Tsung Author-X-Name-First: Fugee Author-X-Name-Last: Tsung Title: Detecting and diagnosing covariance matrix changes in multistage processes Abstract: Multistage process monitoring and fault identification are currently receiving considerable attention. This article focuses on detecting common faults in a multistage process that affect the process covariance matrix. The process covariance matrix monitoring problem is formulated as a multiple hypotheses testing problem. 
The proposed method is an exponentially weighted moving average chart built on vectors that are transformed from sample covariance matrices of the collected observations. Extensive simulation analysis shows that, compared to alternative methods for multistage process covariance monitoring and diagnosis, the proposed method is capable of not only detecting variation changes more quickly but also identifying faults with higher accuracy. Journal: IIE Transactions Pages: 259-274 Issue: 4 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.521805 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.521805 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:4:p:259-274 Template-Type: ReDIF-Article 1.0 Author-Name: Kamran Paynabar Author-X-Name-First: Kamran Author-X-Name-Last: Paynabar Author-Name: Jionghua Jin Author-X-Name-First: Jionghua Author-X-Name-Last: Jin Title: Characterization of non-linear profiles variations using mixed-effect models and wavelets Abstract: There is an increasing research interest in the modeling and analysis of complex non-linear profiles using the wavelet transform. However, most existing modeling and analysis methods assume that the total inherent profile variations are mainly due to the noise within each profile. In many practical situations, however, the profile-to-profile variation is too large to be neglected. In this article, a new method is proposed to model non-linear profile data variations using wavelets. For this purpose, a wavelet-based mixed-effect model is developed to consider both within- and between-profile variations. The utilization of wavelets not only reduces the computational complexity of the mixed-effect model estimation but also facilitates the identification of the sources of the between-profile variations. In addition, a change-point model involving the likelihood ratio test is applied to ensure that the collected profiles used in the model estimation follow an identical distribution. Finally, the performance of the proposed model is evaluated using both Monte Carlo simulations and a case study. Journal: IIE Transactions Pages: 275-290 Issue: 4 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.521807 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.521807 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:4:p:275-290 Template-Type: ReDIF-Article 1.0 Author-Name: Shuohui Chen Author-X-Name-First: Shuohui Author-X-Name-Last: Chen Author-Name: Harriet Nembhard Author-X-Name-First: Harriet Author-X-Name-Last: Nembhard Title: Multivariate Cuscore control charts for monitoring the mean vector in autocorrelated processes Abstract: In many systems, quantitative observations of process variables can be used to characterize a process for quality control purposes. As the intervals between observations become shorter, autocorrelation may occur and lead to a high false alarm rate in traditional Statistical Process Control (SPC) charts. In this article, a Multivariate Cuscore (MCuscore) SPC procedure based on the sequential likelihood ratio test and fault signature analysis is developed for monitoring the mean vector of an autocorrelated multivariate process. The MCuscore charts for the transient, steady, and ramp mean shift signals are designed; they do not rely on the assumption of a known signal starting time. 
An example is presented to demonstrate the application of the MCuscore chart to monitoring three autocorrelated variables of an online search engine marketing tracking process. Furthermore, the simulation analysis shows that the MCuscore chart outperforms the traditional multivariate cumulative sum control chart in detecting process shifts. Journal: IIE Transactions Pages: 291-307 Issue: 4 Volume: 43 Year: 2011 X-DOI: 10.1080/0740817X.2010.523767 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.523767 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:43:y:2011:i:4:p:291-307 Template-Type: ReDIF-Article 1.0 Author-Name: Wen-Chih Chen Author-X-Name-First: Wen-Chih Author-X-Name-Last: Chen Title: Revisiting dual-role factors in data envelopment analysis: derivation and implications Abstract: Data Envelopment Analysis (DEA) is a mathematical programming method to evaluate relative performance. Typical DEA studies consider a production process transforming inputs to outputs. In some cases, however, some factors can be both inputs and outputs simultaneously and are termed dual-role factors. For example, research funding can be an input that strengthens a university's academic performance and the actual funds can be an output. This article investigates the problem of how to incorporate dual-role factors in DEA. Rather than proposing an ad hoc evaluation model directly, this article considers the concept of “joint technology,” in which two individual production processes act in common, thereby formalizing the intuitive thinking. The efficiency evaluation models, based on varying assumptions, can thus be axiomatically derived, validated, and extended. How to determine the input/output tendency of a dual-role factor based on the evaluation results is shown and explained from different perspectives. It is concluded that the tendency is a property of the projected boundary, not of the data point itself. Journal: IIE Transactions Pages: 653-663 Issue: 7 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2012.721943 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.721943 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:7:p:653-663 Template-Type: ReDIF-Article 1.0 Author-Name: Maryam Mofrad Author-X-Name-First: Maryam Author-X-Name-Last: Mofrad Author-Name: Lisa Maillart Author-X-Name-First: Lisa Author-X-Name-Last: Maillart Author-Name: Bryan Norman Author-X-Name-First: Bryan Author-X-Name-Last: Norman Author-Name: Jayant Rajgopal Author-X-Name-First: Jayant Author-X-Name-Last: Rajgopal Title: Dynamically optimizing the administration of vaccines from multi-dose vials Abstract: Many vaccines are manufactured in large, multi-dose vials that, once opened, must be used within a matter of hours. As a result, clinicians (especially those in remote locations) face difficult tradeoffs between opening a vial to satisfy a potentially small immediate demand and retaining the vial to satisfy a potentially large future demand. This article formulates a Markov decision process model that determines when to conserve vials as a function of time of day, the current vial inventory, and the remaining clinic-days until the next replenishment. The objective is to minimize open-vial waste while administering as many vaccinations as possible. It is analytically established that the optimal policy is of a threshold type.
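The threshold structure is easy to see in a toy finite-horizon dynamic program in the spirit of the vial model (a minimal sketch; the five-dose vials, Bernoulli arrivals, waste cost, and single-day horizon are invented for illustration and are not the article's parameterization):

```python
from functools import lru_cache

DOSES, P_ARRIVAL, WASTE_COST, T = 5, 0.5, 0.3, 20

@lru_cache(maxsize=None)
def V(t, open_doses, vials):
    """Expected reward-to-go: +1 per vaccination, -WASTE_COST for each
    dose left in an opened vial at the end of the day."""
    if t == T:
        return -WASTE_COST * open_doses
    cont = V(t + 1, open_doses, vials)        # no patient arrives this slot
    if open_doses > 0:                        # serve from the open vial
        serve = 1 + V(t + 1, open_doses - 1, vials)
    elif vials > 0:                           # open a new vial, or conserve
        serve = max(1 + V(t + 1, DOSES - 1, vials - 1), V(t + 1, 0, vials))
    else:
        serve = V(t + 1, 0, 0)                # no stock left: turn away
    return P_ARRIVAL * serve + (1 - P_ARRIVAL) * cont

# the open-vs-conserve decision flips as the end of the day approaches
for t in (10, 17, 19):
    open_new = 1 + V(t + 1, DOSES - 1, 0)
    conserve = V(t + 1, 0, 1)
    print(t, "open" if open_new >= conserve else "conserve")
```

Running it shows that opening a fresh vial for a single arrival is optimal early in the day but not near closing time, which is exactly the threshold behavior.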
Furthermore, an extensive sensitivity analysis is conducted that speaks to the benefits of consolidating demand, investing in buffer stock, and adopting different vial sizes. Lastly, a practical heuristic is evaluated and shown to perform competitively with the optimal policy. Journal: IIE Transactions Pages: 623-635 Issue: 7 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.849834 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.849834 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:7:p:623-635 Template-Type: ReDIF-Article 1.0 Author-Name: Anahita Khojandi Author-X-Name-First: Anahita Author-X-Name-Last: Khojandi Author-Name: Lisa Maillart Author-X-Name-First: Lisa Author-X-Name-Last: Maillart Author-Name: Oleg Prokopyev Author-X-Name-First: Oleg Author-X-Name-Last: Prokopyev Title: Optimal planning of life-depleting maintenance activities Abstract: This article considers a system with a deterministic initial lifetime that generates reward at a decreasing rate as its virtual age increases. Maintenance can be performed to reduce the virtual age of the system; however, maintenance also shortens the remaining lifetime of the system. Given this tradeoff, the lifetime reward-maximizing maintenance policies under perfect maintenance for non-failure-prone systems, and both perfect and imperfect maintenance for failure-prone systems are analyzed. For each combination considered, structural properties of the resulting optimal policies are derived and exploited to develop solution techniques. Insightful numerical examples are also provided. Journal: IIE Transactions Pages: 636-652 Issue: 7 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.849835 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.849835 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:7:p:636-652 Template-Type: ReDIF-Article 1.0 Author-Name: Philip Kaminsky Author-X-Name-First: Philip Author-X-Name-Last: Kaminsky Author-Name: Ming Yuen Author-X-Name-First: Ming Author-X-Name-Last: Yuen Title: Production capacity investment with data updates Abstract: This article presents a model of the capacity investment problem faced by pharmaceutical firms and other firms with long and risky product development cycles. These firms must balance two conflicting objectives: on one hand, the delay in scaling up production once the product is approved must be minimized and, on the other hand, the risk of investing in ultimately unused capacity must be minimized. A stylized model of this type of capacity investment problem is developed and analyzed. In this model the firm re-evaluates its capacity investment strategy as information about the potential success of the product is continually updated (for example, via clinical trial results in the case of the pharmaceutical industry). Motivated by observations of current practices in the biopharmaceutical industry, a computational study is used to explore how practices such as more frequent re-evaluations of investment decisions, stopping and restarting of projects, and the use of alternative types of capacity can, under certain conditions, help the firm reduce both the delay of the commercial launch of the new product and the risk of lost investment. 
Journal: IIE Transactions Pages: 664-682 Issue: 7 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.849838 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.849838 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:7:p:664-682 Template-Type: ReDIF-Article 1.0 Author-Name: Jonathan Bard Author-X-Name-First: Jonathan Author-X-Name-Last: Bard Author-Name: Yufen Shao Author-X-Name-First: Yufen Author-X-Name-Last: Shao Author-Name: Xiangtong Qi Author-X-Name-First: Xiangtong Author-X-Name-Last: Qi Author-Name: Ahmad Jarrah Author-X-Name-First: Ahmad Author-X-Name-Last: Jarrah Title: The traveling therapist scheduling problem Abstract: This article presents a new model for constructing weekly schedules for therapists who treat patients with fixed appointment times at various healthcare facilities throughout a large geographic area. The objective is to satisfy the demand for service over a 5-day planning horizon at minimum cost subject to a variety of constraints related to time windows, overtime rules, and breaks. Each therapist works under an individually negotiated contract and may be full-time or part-time. Patient preferences for specific therapists and therapist preferences for assignments at specific facilities are also taken into account when they do not jeopardize feasibility. To gain an understanding of the computational issues, the complexity of various relaxations is examined and characterized. The results indicated that even simple versions of the problem are NP-hard. The model takes the form of a large-scale mixed-integer program but was not solvable with CPLEX for instances of realistic size. Subsequently, a branch-and-price-and-cut algorithm was developed and proved capable of finding near-optimal solutions within 50 minutes for small instances. High-quality solutions were ultimately found with a rolling horizon algorithm in a fraction of that time. The work was performed in conjunction with Key Rehab, a company that provides physical, occupational, and speech therapy services throughout the U.S. Midwest. The policies, practices, compensation rules, and legal restrictions under which Key operates are reflected in the model formulation. Journal: IIE Transactions Pages: 683-706 Issue: 7 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.851434 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.851434 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:7:p:683-706 Template-Type: ReDIF-Article 1.0 Author-Name: Jeremy Tejada Author-X-Name-First: Jeremy Author-X-Name-Last: Tejada Author-Name: Julie Ivy Author-X-Name-First: Julie Author-X-Name-Last: Ivy Author-Name: Russell King Author-X-Name-First: Russell Author-X-Name-Last: King Author-Name: James Wilson Author-X-Name-First: James Author-X-Name-Last: Wilson Author-Name: Matthew Ballan Author-X-Name-First: Matthew Author-X-Name-Last: Ballan Author-Name: Michael Kay Author-X-Name-First: Michael Author-X-Name-Last: Kay Author-Name: Kathleen Diehl Author-X-Name-First: Kathleen Author-X-Name-Last: Diehl Author-Name: Bonnie Yankaskas Author-X-Name-First: Bonnie Author-X-Name-Last: Yankaskas Title: Combined DES/SD model of breast cancer screening for older women, II: screening-and-treatment simulation Abstract: In the second article of a two-article sequence, the focus is on a simulation model for screening and treatment of breast cancer in U.S. women of age 65+. 
The first article details a natural-history simulation model of the incidence and progression of untreated breast cancer in a representative simulated population of older U.S. women, which ultimately generates a database of untreated breast cancer histories for individuals in the simulated population. Driven by the resulting database, the screening-and-treatment simulation model is composed of discrete-event simulation (DES) and system dynamics (SD) submodels. For each individual in the simulated population, the DES submodel simulates screening policies and treatment procedures to estimate the resulting survival rates and the costs of screening and treatment. The SD submodel represents the overall structure and operation of the U.S. system for detecting and treating breast cancer. The main results and conclusions are summarized, including a final recommendation for annual screening between ages 65 and 80. A discussion is also presented on how both the natural-history and screening-and-treatment simulations can be used for performance comparisons of proposed screening policies based on overall cost-effectiveness, the numbers of life-years and quality-adjusted life-years saved, and the main components of the total cost incurred by each policy. Journal: IIE Transactions Pages: 707-727 Issue: 7 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.851436 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.851436 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:7:p:707-727 Template-Type: ReDIF-Article 1.0 Author-Name: Izack Cohen Author-X-Name-First: Izack Author-X-Name-Last: Cohen Author-Name: Avishai Mandelbaum Author-X-Name-First: Avishai Author-X-Name-Last: Mandelbaum Author-Name: Noa Zychlinski Author-X-Name-First: Noa Author-X-Name-Last: Zychlinski Title: Minimizing mortality in a mass casualty event: fluid networks in support of modeling and staffing Abstract: The demand for medical treatment of casualties in mass casualty events (MCEs) exceeds resource supply. A key requirement in the management of such tragic but frequent events is thus the efficient allocation of scarce resources. This article develops a mathematical fluid model that captures the operational performance of a hospital during an MCE. The problem is how to allocate the surgeons—the scarcest of resources—between two treatment stations in order to minimize mortality. A focus is placed on casualties in need of immediate care. To this end, optimization problems are developed that are solved by combining theory with numerical analysis. This approach yields structural results that create optimal or near-optimal resource allocation policies. The results give rise to two types of policies, one that prioritizes a single treatment station throughout the MCE and a second policy in which the allocation priority changes. The approach can be implemented when preparing for MCEs and also during their real-time management when future decisions are based on current available information. The results of experiments, based on the outline of real MCEs, demonstrate that the proposed approach provides decision support tools, which are both useful and implementable. Journal: IIE Transactions Pages: 728-741 Issue: 7 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.855846 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.855846 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:uiiexx:v:46:y:2014:i:7:p:728-741 Template-Type: ReDIF-Article 1.0 Author-Name: Archis Ghate Author-X-Name-First: Archis Author-X-Name-Last: Ghate Author-Name: Shih-Fen Cheng Author-X-Name-First: Shih-Fen Author-X-Name-Last: Cheng Author-Name: Stephen Baumert Author-X-Name-First: Stephen Author-X-Name-Last: Baumert Author-Name: Daniel Reaume Author-X-Name-First: Daniel Author-X-Name-Last: Reaume Author-Name: Dushyant Sharma Author-X-Name-First: Dushyant Author-X-Name-Last: Sharma Author-Name: Robert Smith Author-X-Name-First: Robert Author-X-Name-Last: Smith Title: Sampled fictitious play for multi-action stochastic dynamic programs Abstract: This article introduces a class of finite-horizon dynamic optimization problems that are called multi-action stochastic Dynamic Programs (DPs). Their distinguishing feature is that the decision in each state is a multi-dimensional vector. These problems can in principle be solved using Bellman’s backward recursion. However, the complexity of this procedure grows exponentially in the dimension of the decision vectors. This is called the curse of action space dimensionality. To overcome this computational challenge, an approximation algorithm is proposed that is rooted in the game-theoretic paradigm of Sampled Fictitious Play (SFP). SFP solves a sequence of DPs with a one-dimensional action space that are exponentially smaller than the original multi-action stochastic DP. In particular, the computational effort in a fixed number of SFP iterations is linear in the dimension of the decision vectors. It is shown that the sequence of SFP iterates converges to a local optimum, and a numerical case study in manufacturing is presented in which SFP is able to find solutions with objective values within 1% of the optimal objective value, hundreds of times faster than backward recursion. In this case study, SFP solutions are also better by a statistically significant margin than those found by a one-step look-ahead heuristic. Journal: IIE Transactions Pages: 742-756 Issue: 7 Volume: 46 Year: 2014 X-DOI: 10.1080/0740817X.2013.857062 File-URL: http://hdl.handle.net/10.1080/0740817X.2013.857062 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:46:y:2014:i:7:p:742-756 Template-Type: ReDIF-Article 1.0 Author-Name: Seçil Savaşaneril Author-X-Name-First: Seçil Author-X-Name-Last: Savaşaneril Author-Name: Nesim Erkip Author-X-Name-First: Nesim Author-X-Name-Last: Erkip Title: An analysis of manufacturer benefits under vendor-managed systems Abstract: Vendor-Managed Inventory (VMI) has attracted considerable attention due to its benefits such as fewer stock-outs, higher sales, and lower inventory levels at the retailers. Vendor-Managed Availability (VMA) is an improvement that extends these advantages beyond VMI. This article analyzes the benefits beyond information sharing and assesses the manufacturer's (vendor's) motivation for joining such a program. It is shown that such vendor-managed systems provide increased flexibility in the manufacturer's operations and may bring additional benefits. An analysis is presented of how the system parameters affect profitability and determine the conditions that make the vendor-managed system a viable strategy for the manufacturer.
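Returning to the sampled fictitious play idea of Ghate et al. above: the coordinate decomposition can be sketched on a static toy problem, where each dimension of the decision vector best-responds to actions sampled from the other dimensions' play history (the objective, grid sizes, and sample counts are hypothetical, and the dynamic-programming stage structure is omitted):

```python
import numpy as np

rng = np.random.default_rng(1)

def sfp_maximize(f, n_dims, n_levels, iters=200, samples=5):
    """Fictitious-play-style search: each coordinate best-responds to
    profiles sampled from the other coordinates' history of past play."""
    history = [[int(rng.integers(n_levels))] for _ in range(n_dims)]
    for _ in range(iters):
        new = []
        for i in range(n_dims):
            best_a, best_val = 0, -np.inf
            for a in range(n_levels):          # one-dimensional subproblem
                val = 0.0
                for _ in range(samples):       # sample the others' history
                    x = [a if j == i else int(rng.choice(history[j]))
                         for j in range(n_dims)]
                    val += f(x)
                if val > best_val:
                    best_a, best_val = a, val
            new.append(best_a)
        for i, a in enumerate(new):
            history[i].append(a)
    return [h[-1] for h in history]

# toy multi-action problem: maximize -||x - target||^2 over a 6^5 grid
target = np.array([3, 1, 4, 1, 5])
print(sfp_maximize(lambda x: -np.sum((np.array(x) - target) ** 2), 5, 6))
```

The point of the decomposition is visible in the loop structure: the effort per iteration is linear, not exponential, in the number of decision dimensions.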
Journal: IIE Transactions Pages: 455-477 Issue: 7 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903459968 File-URL: http://hdl.handle.net/10.1080/07408170903459968 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:7:p:455-477 Template-Type: ReDIF-Article 1.0 Author-Name: Halit Üster Author-X-Name-First: Halit Author-X-Name-Last: Üster Author-Name: Homarjun Agrahari Author-X-Name-First: Homarjun Author-X-Name-Last: Agrahari Title: An integrated load-planning problem with intermediate consolidated truckload assignments Abstract: This article considers an integrated load-planning problem where decisions on how commodities with unique origin–destination nodes are routed over a given transportation network, along with decisions on their explicit consolidation and assignment to capacitated truckloads, are addressed. In a logistical context, a commodity may refer to a shipper's load handled by a freight forwarder who works as an intermediary between the shippers and carriers. A compact formulation that addresses the load consolidations from many shippers into truckloads and the associated transportation decisions explicitly is first provided. Then, to develop efficient solution algorithms, four compound neighborhood functions and a branching scheme are suggested. Each compound neighborhood function has two main components, level change and content change, with the latter based on various schemes of combining simple neighborhood functions. The compound neighborhood functions and branching strategies enable the solution space to be efficiently searched. Two heuristic algorithms (one with deterministic and the other with probabilistic features) and a tabu search algorithm are also developed. The two components of compound neighborhood functions provide the means to efficiently incorporate intensification and diversification characteristics into these algorithms. Extensive computational results illustrating and comparing the relative efficiency and effectiveness of the algorithms and the compound neighborhood functions are reported. The alternative compounding schemes and the search strategies provided in this study are potentially useful in other problem domains as well. Journal: IIE Transactions Pages: 490-513 Issue: 7 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903468571 File-URL: http://hdl.handle.net/10.1080/07408170903468571 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:7:p:490-513 Template-Type: ReDIF-Article 1.0 Author-Name: Shervin AhmadBeygi Author-X-Name-First: Shervin Author-X-Name-Last: AhmadBeygi Author-Name: Amy Cohn Author-X-Name-First: Amy Author-X-Name-Last: Cohn Author-Name: Marcial Lapp Author-X-Name-First: Marcial Author-X-Name-Last: Lapp Title: Decreasing airline delay propagation by re-allocating scheduled slack Abstract: Passenger airline delays have received increasing attention over the past several years as air space congestion, severe weather, mechanical problems, and other sources cause substantial disruptions to a planned flight schedule. Adding to this challenge is the fact that each flight delay can propagate to disrupt subsequent downstream flights that await the delayed flight's aircraft and crew. 
This potential for delays to propagate is exacerbated by a fundamental conflict: slack in the planned schedule is often viewed as undesirable, as it implies missed opportunities to utilize costly perishable resources, whereas slack is critical in operations as a means for absorbing disruption. This article shows how delay propagation can be reduced by redistributing existing slack in the planning process, making minor modifications to the flight schedule while leaving the original fleeting and crew scheduling decisions unchanged. Computational results based on data from a major U.S. carrier are presented that show that significant improvements in operational performance can be achieved without increasing planned costs. Journal: IIE Transactions Pages: 478-489 Issue: 7 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903468605 File-URL: http://hdl.handle.net/10.1080/07408170903468605 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:7:p:478-489 Template-Type: ReDIF-Article 1.0 Author-Name: Alexandre Dolgui Author-X-Name-First: Alexandre Author-X-Name-Last: Dolgui Author-Name: Anton Eremeev Author-X-Name-First: Anton Author-X-Name-Last: Eremeev Author-Name: Mikhail Kovalyov Author-X-Name-First: Mikhail Author-X-Name-Last: Kovalyov Author-Name: Pavel Kuznetsov Author-X-Name-First: Pavel Author-X-Name-Last: Kuznetsov Title: Multi-product lot sizing and scheduling on unrelated parallel machines Abstract: This article studies a problem of optimal lot sizing and scheduling of a number of products on m unrelated parallel machines to satisfy given demands. A sequence-dependent setup time is required between lots of different products. The products are assumed to be all continuously divisible or all discrete. The criterion is to minimize the time at which all the demands are satisfied, Cmax, or the maximum lateness of the product completion times from the given due dates, Lmax. The problem is motivated by real-life scheduling applications in multi-product plants. The properties of optimal solutions, NP-hardness proofs, enumeration, and dynamic programming algorithms for various special cases of the problem are presented. A greedy-type heuristic is proposed and experimentally tested. The major contributions are an NP-hardness proof, pseudo-polynomial algorithms that are linear in m for the case in which the number of products is a given constant, and the heuristic. The results can be adapted for solving a production line design problem. Journal: IIE Transactions Pages: 514-524 Issue: 7 Volume: 42 Year: 2010 X-DOI: 10.1080/07408170903542649 File-URL: http://hdl.handle.net/10.1080/07408170903542649 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:7:p:514-524 Template-Type: ReDIF-Article 1.0 Author-Name: Eun-Seok Kim Author-X-Name-First: Eun-Seok Author-X-Name-Last: Kim Author-Name: Marc Posner Author-X-Name-First: Marc Author-X-Name-Last: Posner Title: Parallel machine scheduling with s-precedence constraints Abstract: For s-precedence constraints, job i cannot start processing until all jobs that precede i start. This is different from the standard definition of a precedence relation where i cannot start until all prior jobs complete. While not discussed in the scheduling literature, s-precedence constraints have wide applicability in real-world settings such as first-come, first-served processing systems.
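Because the distinction between s-precedence and standard precedence is easy to miss, here is a small greedy list-scheduling sketch on identical machines in which a job may start as soon as all of its predecessors have started (the longest-processing-time tie-break and the toy instance are invented; this is not the heuristic with worst-case bounds analyzed in the article):

```python
import heapq

def s_precedence_makespan(p, preds, m):
    """Greedy list schedule on m identical machines under s-precedence:
    a job may start once every predecessor has STARTED (not completed)."""
    n = len(p)
    started = [None] * n
    machines = [0.0] * m                      # next-free time per machine
    heapq.heapify(machines)
    makespan = 0.0
    for _ in range(n):
        # unscheduled jobs whose predecessors have all started
        ready = [j for j in range(n) if started[j] is None
                 and all(started[k] is not None for k in preds[j])]
        j = min(ready, key=lambda k: -p[k])   # longest processing time first
        t = heapq.heappop(machines)
        start = max(t, max((started[k] for k in preds[j]), default=0.0))
        started[j] = start
        heapq.heappush(machines, start + p[j])
        makespan = max(makespan, start + p[j])
    return makespan

# toy instance: jobs 1 and 2 only need job 0 to have started
print(s_precedence_makespan([4.0, 2.0, 3.0], {0: [], 1: [0], 2: [0]}, 2))
```

In the toy instance the makespan is 5, whereas under standard precedence jobs 1 and 2 would have to wait for job 0 to complete and the makespan would be 7.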
This article considers a deterministic scheduling problem where jobs with s-precedence relations are processed by multiple identical parallel machines. The objective is to minimize the makespan. The problem is shown to be intractable. A heuristic procedure is developed and tight worst-case bounds on the relative error are derived. Finally, computational experiments show that the proposed heuristic provides effective solutions. Journal: IIE Transactions Pages: 525-537 Issue: 7 Volume: 42 Year: 2010 X-DOI: 10.1080/07408171003670975 File-URL: http://hdl.handle.net/10.1080/07408171003670975 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:42:y:2010:i:7:p:525-537 Template-Type: ReDIF-Article 1.0 Author-Name: Zhili Tian Author-X-Name-First: Zhili Author-X-Name-Last: Tian Author-Name: Panos Kouvelis Author-X-Name-First: Panos Author-X-Name-Last: Kouvelis Author-Name: Charles L. Munson Author-X-Name-First: Charles L. Author-X-Name-Last: Munson Title: Understanding and managing product line complexity: Applying sensitivity analysis to a large-scale MILP model to price and schedule new customer orders Abstract: This article analyzes a complex scheduling problem at a company that uses a continuous chemical production process. A detailed mixed-integer linear programming model is developed for scheduling the expansive product line, which can save the company an average of 1.5% of production capacity per production run. Furthermore, through sensitivity analysis of the model, key independent variables are identified, and regression equations are created that can estimate both the capacity usage and material waste generated by the product line complexity of a particular production run. These regression models can be used to estimate the complexity costs imposed on the system by any particular product or customer order. Such cost estimates can be used to properly price new customer orders and to most economically assign them to the production runs with the best fit. The proposed approach may be adapted for other long-production-run manufacturing companies that face uncertain demand and short customer lead times. Journal: IIE Transactions Pages: 307-328 Issue: 4 Volume: 47 Year: 2015 Month: 4 X-DOI: 10.1080/0740817X.2014.916461 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.916461 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:4:p:307-328 Template-Type: ReDIF-Article 1.0 Author-Name: Xing Hong Author-X-Name-First: Xing Author-X-Name-Last: Hong Author-Name: Miguel A. Lejeune Author-X-Name-First: Miguel A. Author-X-Name-Last: Lejeune Author-Name: Nilay Noyan Author-X-Name-First: Nilay Author-X-Name-Last: Noyan Title: Stochastic network design for disaster preparedness Abstract: This article introduces a risk-averse stochastic modeling approach for a pre-disaster relief network design problem under uncertain demand and transportation capacities. The sizes and locations of the response facilities and the inventory levels of relief supplies at each facility are determined while guaranteeing a certain level of network reliability. A probabilistic constraint on the existence of a feasible flow is introduced to ensure that the demand for relief supplies across the network is satisfied with a specified high probability. 
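As a drastically simplified illustration of such a probabilistic constraint (a single pooled region rather than the article's Gale–Hoffman network conditions; the demand distribution and reliability level are invented), a scenario approximation reduces the constraint to an empirical quantile:

```python
import numpy as np

rng = np.random.default_rng(2)

def min_stock_for_reliability(demand_sampler, alpha=0.05, n=10_000):
    """Scenario approximation of a probabilistic demand-satisfaction
    constraint: smallest stock s with P(demand <= s) >= 1 - alpha."""
    scenarios = np.sort(demand_sampler(n))
    return scenarios[int(np.ceil((1 - alpha) * n)) - 1]

# toy region with lognormal relief demand (hypothetical parameters)
s = min_stock_for_reliability(lambda n: rng.lognormal(3.0, 0.5, n))
print(round(float(s), 1))
```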
Responsiveness is also accounted for by defining multiple regions in the network and introducing local probabilistic constraints on satisfying demand within each region. These local constraints ensure that each region is self-sufficient in terms of providing for its own needs with a large probability. In particular, the Gale–Hoffman inequalities are used to represent the conditions on the existence of a feasible network flow. The solution method rests on two pillars. A preprocessing algorithm is used to eliminate redundant Gale–Hoffman inequalities and then the proposed models are formulated as computationally efficient mixed-integer linear programs by utilizing a method based on combinatorial patterns. Computational results for a case study and randomly generated problem instances demonstrate the effectiveness of the models and the solution method. Journal: IIE Transactions Pages: 329-357 Issue: 4 Volume: 47 Year: 2015 Month: 4 X-DOI: 10.1080/0740817X.2014.919044 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.919044 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:4:p:329-357 Template-Type: ReDIF-Article 1.0 Author-Name: Jian Hu Author-X-Name-First: Jian Author-X-Name-Last: Hu Author-Name: Sanjay Mehrotra Author-X-Name-First: Sanjay Author-X-Name-Last: Mehrotra Title: Robust decision making over a set of random targets or risk-averse utilities with an application to portfolio optimization Abstract: In many situations, decision-makers need to exceed a random target or make decisions using expected utilities. These two situations are equivalent when a decision-maker’s utility function is increasing and bounded. This article focuses on the problem where the random target has a concave cumulative distribution function (cdf) or a risk-averse decision-maker’s utility is concave (alternatively, the probability density function (pdf) of the random target or the decision-maker’s marginal utility is decreasing) and the concave cdf or utility can only be specified by an uncertainty set. Specifically, a robust (maximin) framework is studied to facilitate decision making in such situations. Functional bounds on the random target’s cdf and pdf are used. Additional general auxiliary requirements may also be used to describe the uncertainty set. It is shown that a discretized version of the problem may be formulated as a linear program. A result showing the convergence of discretized models for uncertainty sets specified using continuous functions is also proved. A portfolio investment decision problem is used to illustrate the construction and usefulness of the proposed decision-making framework. Journal: IIE Transactions Pages: 358-372 Issue: 4 Volume: 47 Year: 2015 Month: 4 X-DOI: 10.1080/0740817X.2014.919045 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.919045 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:4:p:358-372 Template-Type: ReDIF-Article 1.0 Author-Name: Renato E. De Matta Author-X-Name-First: Renato E. Author-X-Name-Last: De Matta Author-Name: Vernon N.
Author-X-Name-Last: Hsu Author-Name: Chung-Lun Li Author-X-Name-First: Chung-Lun Author-X-Name-Last: Li Title: Coordinated production and delivery for an exporter Abstract: This article considers an exporter who produces multiple products at geographically separated production plants to meet a stream of deterministic overseas demands over a planning horizon. Each production plant uses either a direct delivery mode, which sends pre-loaded cargos of a product directly from its production facility to the ocean port, or a consolidated delivery mode where products are consolidated into outbound ocean cargos at the facility of a third-party logistics firm and delivered to the ocean port. The exporter's problem is to develop a minimum-cost production and delivery plan for the entire supply chain over a finite planning horizon. The problem is modeled as a mixed-integer program and it is shown to be NP-hard even for the case with one production plant. Two distinct but related decisions characterize the problem, namely, the product delivery mode selection decision and the production scheduling decision. The natural separation of these decisions is exploited in a Benders decomposition solution procedure. An important finding of this study is that the exporter can extract the most value from integrating consolidated deliveries in the production and distribution plan when plants have modest production costs and demand variability is high. Journal: IIE Transactions Pages: 373-391 Issue: 4 Volume: 47 Year: 2015 Month: 4 X-DOI: 10.1080/0740817X.2014.928961 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.928961 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:4:p:373-391 Template-Type: ReDIF-Article 1.0 Author-Name: Jan-Pieter L. Dorsman Author-X-Name-First: Jan-Pieter L. Author-X-Name-Last: Dorsman Author-Name: Sandjai Bhulai Author-X-Name-First: Sandjai Author-X-Name-Last: Bhulai Author-Name: Maria Vlasiou Author-X-Name-First: Maria Author-X-Name-Last: Vlasiou Title: Dynamic server assignment in an extended machine-repair model Abstract: This article considers an extension of the classic machine-repair problem. The machines, apart from receiving service from a single repairman, now also supply service themselves to queues of products. The extended model can be viewed as a two-layered queueing network, in which the queues of products in the first layer are generally correlated, due to the fact that the machines have to share the repairman’s capacity in the second layer. Of particular interest is the dynamic control problem of how the repairman should allocate his/her capacity to the machines at any point in time so that the long-term average (weighted) sum of the queue lengths of the first-layer queues is minimized. Since the optimal policy for the repairman cannot be found analytically due to the correlations in the queue lengths, a near-optimal policy is proposed. This is obtained by combining intuition and results from queueing theory with techniques from Markov decision theory. Specifically, the relative value functions for several policies for which the model can be decomposed in less complicated subsystems are studied, and the results are combined with the classic one-step policy improvement algorithm. The resulting policy is easy to apply, is scalable in the number of machines, and is shown to be highly accurate over a wide range of parameter settings. 
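The one-step policy improvement step that the article combines with queueing-based relative value functions can be sketched generically: evaluate a simple base policy exactly, then act greedily against its value function. The tiny two-state, two-action numbers below are hypothetical, and a discounted criterion stands in for the long-run-average criterion so that policy evaluation is a single linear solve:

```python
import numpy as np

def policy_eval(P, r, gamma=0.95):
    """Value of a fixed policy: solve (I - gamma * P) v = r."""
    return np.linalg.solve(np.eye(len(r)) - gamma * P, r)

def one_step_improvement(P_a, r_a, base_policy, gamma=0.95):
    """Evaluate the base policy, then act greedily on its value function.
    P_a[a] is the transition matrix and r_a[a] the reward under action a."""
    n = len(base_policy)
    P = np.array([P_a[base_policy[s]][s] for s in range(n)])
    r = np.array([r_a[base_policy[s]][s] for s in range(n)])
    v = policy_eval(P, r, gamma)
    q = np.array([[r_a[a][s] + gamma * P_a[a][s] @ v
                   for a in range(len(P_a))] for s in range(n)])
    return q.argmax(axis=1)

# toy 2-state, 2-action "serve queue 0 or queue 1" model (invented numbers)
P_a = [np.array([[0.9, 0.1], [0.4, 0.6]]),
       np.array([[0.6, 0.4], [0.1, 0.9]])]
r_a = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(one_step_improvement(P_a, r_a, base_policy=[0, 0]))
```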
Journal: IIE Transactions Pages: 392-413 Issue: 4 Volume: 47 Year: 2015 Month: 4 X-DOI: 10.1080/0740817X.2014.928962 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.928962 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:4:p:392-413 Template-Type: ReDIF-Article 1.0 Author-Name: Benjamin Legros Author-X-Name-First: Benjamin Author-X-Name-Last: Legros Author-Name: Oualid Jouini Author-X-Name-First: Oualid Author-X-Name-Last: Jouini Author-Name: Ger Koole Author-X-Name-First: Ger Author-X-Name-Last: Koole Title: Adaptive threshold policies for multi-channel call centers Abstract: In the context of multi-channel call centers with inbound calls and emails, this article considers a threshold policy on the reservation of agents for the inbound calls. We study a general non-stationary model where calls arrive according to a non-homogeneous Poisson process. The optimization problem consists in maximizing the throughput of emails under a constraint on the waiting time of inbound calls. An efficient adaptive threshold policy is proposed that is easy to implement in the automatic call distributor. This scheduling policy is evaluated through a comparison with the optimal performance measures found in the case of a constant arrival rate and also with other intuitive adaptive threshold policies in the general non-stationary case. Journal: IIE Transactions Pages: 414-430 Issue: 4 Volume: 47 Year: 2015 Month: 4 X-DOI: 10.1080/0740817X.2014.928965 File-URL: http://hdl.handle.net/10.1080/0740817X.2014.928965 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:47:y:2015:i:4:p:414-430 Template-Type: ReDIF-Article 1.0 Author-Name: Li Zeng Author-X-Name-First: Li Author-X-Name-Last: Zeng Author-Name: Xinwei Deng Author-X-Name-First: Xinwei Author-X-Name-Last: Deng Author-Name: Jian Yang Author-X-Name-First: Jian Author-X-Name-Last: Yang Title: Constrained hierarchical modeling of degradation data in tissue-engineered scaffold fabrication Abstract: In tissue-engineered scaffold fabrication, the degradation of scaffolds is a critical issue because it needs to match the rate of new tissue formation in the human body. However, scaffold degradation is a very complicated process, making degradation regulation a challenging task. To provide a scientific understanding of the degradation of scaffolds, we propose a novel constrained hierarchical model (CHM) for the degradation data. The proposed model has two levels, with the first level characterizing scaffold degradation profiles and the second level characterizing the effect of process parameters on the degradation. Moreover, it can incorporate expert knowledge in the modeling through meaningful constraints, leading to insightful inference on scaffold degradation. Bayesian methods are used for parameter estimation and model comparison. In the case study, the proposed method is illustrated and compared with existing methods using data from a novel tissue-engineered scaffold fabrication process. A numerical study is conducted to examine the effect of sample size on model estimation. Journal: IIE Transactions Pages: 16-33 Issue: 1 Volume: 48 Year: 2016 Month: 1 X-DOI: 10.1080/0740817X.2015.1019164 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1019164 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:48:y:2016:i:1:p:16-33 Template-Type: ReDIF-Article 1.0 Author-Name: Linmiao Zhang Author-X-Name-First: Linmiao Author-X-Name-Last: Zhang Author-Name: Kaibo Wang Author-X-Name-First: Kaibo Author-X-Name-Last: Wang Author-Name: Nan Chen Author-X-Name-First: Nan Author-X-Name-Last: Chen Title: Monitoring wafers’ geometric quality using an additive Gaussian process model Abstract: The geometric quality of a wafer is an important quality characteristic in the semiconductor industry. However, it is difficult to monitor this characteristic during the manufacturing process due to the challenges created by the complexity of the data structure. In this article, we propose an Additive Gaussian Process (AGP) model to approximate a standard geometric profile of a wafer while quantifying the deviations from the standard when a manufacturing process is in an in-control state. Based on the AGP model, two statistical tests are developed to determine whether or not a newly produced wafer is conforming. We have conducted extensive numerical simulations and real case studies, the results of which indicate that our proposed method is effective and has potentially wide application. Journal: IIE Transactions Pages: 1-15 Issue: 1 Volume: 48 Year: 2016 Month: 1 X-DOI: 10.1080/0740817X.2015.1027455 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1027455 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:1:p:1-15 Template-Type: ReDIF-Article 1.0 Author-Name: Nailong Zhang Author-X-Name-First: Nailong Author-X-Name-Last: Zhang Author-Name: Qingyu Yang Author-X-Name-First: Qingyu Author-X-Name-Last: Yang Title: A random effect autologistic regression model with application to the characterization of multiple microstructure samples Abstract: The microstructure of a material can strongly influence its properties such as strength, hardness, wear resistance, etc., which in turn play an important role in the quality of products produced from these materials. Existing studies on a material's microstructure have mainly focused on the characteristics of a single microstructure sample and the variation between different microstructure samples is ignored. In this article, we propose a novel random effect autologistic regression model that can be used to characterize the variation in microstructures between different samples for two-phase materials that consist of two distinct parts with different chemical structures. The proposed model differs from the classic autologistic regression model in that we consider the unit-to-unit variability among the microstructure samples, which is characterized by the random effect parameters. To estimate the model parameters given a set of microstructure samples, we first derive a likelihood function, based on which a maximum likelihood estimation method is developed. However, maximizing the likelihood function of the proposed model is generally difficult as it has a complex form. To overcome this challenge, we further develop a stochastic approximation expectation maximization algorithm to estimate the model parameters. A simulation study is conducted to verify the proposed methodology. A real-world example of a dual-phase high strength steel is used to illustrate the developed methods. 
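Stepping back to the additive Gaussian process of Zhang, Wang, and Chen above: in one dimension (the article works with two-dimensional wafer profiles; the kernels, length-scales, and noise level here are invented), summing two kernels lets the posterior mean be split into a smooth standard-profile component and a short-scale deviation component:

```python
import numpy as np

def rbf(A, B, ls, var):
    """Squared-exponential kernel matrix between 1-D inputs A and B."""
    return var * np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ls ** 2)

def agp_components(x, y, xs, noise=1e-2):
    """Additive GP regression with K = K_smooth + K_local; returns the
    posterior-mean contribution of each additive component at xs."""
    K = rbf(x, x, 1.0, 1.0) + rbf(x, x, 0.1, 0.3) + noise * np.eye(len(x))
    alpha = np.linalg.solve(K, y)
    return rbf(xs, x, 1.0, 1.0) @ alpha, rbf(xs, x, 0.1, 0.3) @ alpha

rng = np.random.default_rng(5)
x = np.linspace(0, 4, 80)
y = np.sin(x) + 0.3 * np.sin(12 * x) + 0.05 * rng.standard_normal(80)
smooth, local = agp_components(x, y, x)
print(float(np.corrcoef(smooth, np.sin(x))[0, 1]))  # close to 1
```

The decomposition is what makes the two statistical tests possible in principle: the smooth part plays the role of the standard profile, and departures load onto the local component.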
Journal: IIE Transactions Pages: 34-42 Issue: 1 Volume: 48 Year: 2016 Month: 1 X-DOI: 10.1080/0740817X.2015.1047069 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1047069 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:1:p:34-42 Template-Type: ReDIF-Article 1.0 Author-Name: Xiao Liu Author-X-Name-First: Xiao Author-X-Name-Last: Liu Author-Name: Loon Ching Tang Author-X-Name-First: Loon Ching Author-X-Name-Last: Tang Title: Reliability analysis and spares provisioning for repairable systems with dependent failure processes and a time-varying installed base Abstract: Line Replaceable Units (LRUs), which can be quickly replaced at a first-level maintenance facility, are widely deployed on capital-intensive systems in order to maintain high system availability. Failed LRUs are repaired after replacement and reused as fully serviceable spare units. Demand for spare LRUs depends on factors such as the time-varying installed base, reliability deterioration or growth over maintenance cycles, procurement leadtime of new LRUs, turn-around leadtime of repaired LRUs, etc. In this article, we propose an integrated framework for both reliability analysis and spares provisioning for LRUs with a time-varying installed base. We assume that each system consists of multiple types of LRUs, and associated with each type of LRU is a non-stationary sub-failure process. The failure of a system is triggered by sub-failure processes that are statistically dependent. A hierarchical probability model is developed for the demand forecasting of LRUs. Based on the forecasted demand, the optimum inventory level is found through dynamic programming. An application example is presented. A computer program, called the Integrated Platform for Reliability Analysis and Spare Provision, is available that makes the proposed methods readily applicable. Journal: IIE Transactions Pages: 43-56 Issue: 1 Volume: 48 Year: 2016 Month: 1 X-DOI: 10.1080/0740817X.2015.1055391 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1055391 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:1:p:43-56 Template-Type: ReDIF-Article 1.0 Author-Name: Giovanna Capizzi Author-X-Name-First: Giovanna Author-X-Name-Last: Capizzi Author-Name: Guido Masarotto Author-X-Name-First: Guido Author-X-Name-Last: Masarotto Title: Efficient control chart calibration by simulated stochastic approximation Abstract: The accurate determination of control limits is crucial in statistical process control. The usual approach consists in computing the limits so that the in-control run-length distribution has some desired properties; for example, a prescribed mean. However, as a consequence of the increasing complexity of process data, the run-length of many control charts discussed in the recent literature can be studied only through simulation. Furthermore, in some scenarios, such as profile and autocorrelated data monitoring, the limits cannot be tabulated in advance, and when different charts are combined, the control limits depend on a multidimensional vector of parameters. In this article, we propose the use of stochastic approximation methods for control chart calibration and discuss enhancements for their implementation (e.g., the initialization of the algorithm, an adaptive choice of the gain, a suitable stopping rule for the iterative process, and the advantages of using multicore workstations).
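The simulated stochastic approximation idea can be sketched with a plain Robbins-Monro recursion on the simplest possible chart, a two-sided Shewhart X chart whose limit for a given in-control average run length is known in closed form, which makes the sketch easy to sanity-check (the gain sequence, clipping, and iteration budget are arbitrary choices, not the enhancements proposed in the article):

```python
import numpy as np

rng = np.random.default_rng(3)

def run_length(h):
    """Simulated in-control run length of a two-sided Shewhart X chart."""
    n = 1
    while abs(rng.standard_normal()) <= h:
        n += 1
    return n

def calibrate_limit(target_arl=370.0, iters=3000, h=3.5):
    """Robbins-Monro recursion pushing the simulated ARL toward target."""
    for k in range(1, iters + 1):
        step = (2.0 / k ** 0.6) * (target_arl - run_length(h)) / target_arl
        h += float(np.clip(step, -0.2, 0.2))  # damp the heavy-tailed noise
    return h

print(round(calibrate_limit(), 2))  # the closed-form answer is about 3.00
```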
Examples are used to show that simulated stochastic approximation provides a reliable and fully automatic approach for computing the control limits in complex applications. An R package implementing the algorithm is available in the supplemental materials. Journal: IIE Transactions Pages: 57-65 Issue: 1 Volume: 48 Year: 2016 Month: 1 X-DOI: 10.1080/0740817X.2015.1055392 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1055392 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:1:p:57-65 Template-Type: ReDIF-Article 1.0 Author-Name: Philippe Castagliola Author-X-Name-First: Philippe Author-X-Name-Last: Castagliola Author-Name: Petros E. Maravelakis Author-X-Name-First: Petros E. Author-X-Name-Last: Maravelakis Author-Name: Fernanda Otilia Figueiredo Author-X-Name-First: Fernanda Otilia Author-X-Name-Last: Figueiredo Title: The EWMA median chart with estimated parameters Abstract: The usual practice in control charts is to assume that the chart parameters are known or can be accurately estimated from in-control historical samples and the data are free from outliers. Neither of these assumptions is realistic in practice: a control chart may involve the estimation of process parameters from a very limited number of samples and the data may contain some outliers. In order to overcome these issues, in this article, we develop an Exponentially Weighted Moving Average (EWMA) median chart with estimated parameters to monitor the mean value of a normal process. We study the run length properties of the proposed chart using a Markov Chain approach, and the performance of the proposed chart is compared to the EWMA median chart with known parameters. Several tables for the design of the proposed chart are given in order to expedite the use of the chart by practitioners. An illustrative example is also given along with some recommendations about the minimum number of initial subgroups m for different sample sizes n that must be collected for the estimation of the parameters so that the proposed chart has performance identical to that of the chart with known parameters. From the results we deduce that (i) there is a large difference between the known and estimated parameters cases unless the initial number of subgroups m is large; and (ii) the difference between the known and estimated parameters cases can be reduced by using dedicated chart parameters. Journal: IIE Transactions Pages: 66-74 Issue: 1 Volume: 48 Year: 2016 Month: 1 X-DOI: 10.1080/0740817X.2015.1056861 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1056861 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:1:p:66-74 Template-Type: ReDIF-Article 1.0 Author-Name: Paul D. Arendt Author-X-Name-First: Paul D. Author-X-Name-Last: Arendt Author-Name: Daniel W. Apley Author-X-Name-First: Daniel W. Author-X-Name-Last: Apley Author-Name: Wei Chen Author-X-Name-First: Wei Author-X-Name-Last: Chen Title: A preposterior analysis to predict identifiability in the experimental calibration of computer models Abstract: When using physical experimental data to adjust, or calibrate, computer simulation models, two general sources of uncertainty that must be accounted for are calibration parameter uncertainty and model discrepancy.
This is complicated by the well-known fact that systems to be calibrated are often subject to identifiability problems, in the sense that it is difficult to precisely estimate the parameters and to distinguish between the effects of parameter uncertainty and model discrepancy. We develop a form of preposterior analysis that can be used, prior to conducting physical experiments but after conducting the computer simulations, to predict the degree of identifiability that will result after conducting the physical experiments for a given experimental design. Specifically, we calculate the preposterior covariance matrix of the calibration parameters and demonstrate that, in the examples that we consider, it provides a reasonable prediction of the actual posterior covariance that is calculated after the experimental data are collected. Consequently, the preposterior covariance can be used as a criterion for designing physical experiments to help achieve better identifiability in calibration problems. Journal: IIE Transactions Pages: 75-88 Issue: 1 Volume: 48 Year: 2016 Month: 1 X-DOI: 10.1080/0740817X.2015.1064554 File-URL: http://hdl.handle.net/10.1080/0740817X.2015.1064554 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:48:y:2016:i:1:p:75-88 Template-Type: ReDIF-Article 1.0 Author-Name: Jean-Philippe Gayon Author-X-Name-First: Jean-Philippe Author-X-Name-Last: Gayon Author-Name: Francis de Véricourt Author-X-Name-First: Francis Author-X-Name-Last: de Véricourt Author-Name: Fikri Karaesmen Author-X-Name-First: Fikri Author-X-Name-Last: Karaesmen Title: Stock rationing in an M/E/1 multi-class make-to-stock queue with backorders Abstract: A model of a single-item make-to-stock production system is presented. The item is demanded by several classes of customers arriving according to Poisson processes with different backorder costs. Item processing times have an Erlang distribution. It is shown that certain structural properties of optimal stock and capacity allocation policies exist for the case where production may be interrupted and restarted. Also, a complete characterization of the optimal policy in the case of uninterrupted production when excess production can be diverted to a salvage market is presented. A heuristic policy is developed and assessed based on the results obtained in the analysis. Finally the value of production status information and the effects of processing time variability are investigated.[Supplemental materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix] Journal: IIE Transactions Pages: 1096-1109 Issue: 12 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170902800279 File-URL: http://hdl.handle.net/10.1080/07408170902800279 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:12:p:1096-1109 Template-Type: ReDIF-Article 1.0 Author-Name: Kumar Rajaram Author-X-Name-First: Kumar Author-X-Name-Last: Rajaram Author-Name: Zhili Tian Author-X-Name-First: Zhili Author-X-Name-Last: Tian Title: Buffer location and sizing to optimize cost and quality in semi-continuous manufacturing processes: Methodology and application Abstract: The problem of optimizing the location and size of buffers in semi-continuous manufacturing processes is considered. 
This problem is formulated as a non-linear integer program that determines the optimal buffer size for individual stages and allocates tanks to those stages in order to minimize total tank inclusion, holding, quality, process overshoot and undershoot costs. Heuristics are developed to solve the problem, and bounds are derived to evaluate the quality of the heuristics. This method has been implemented in three glucose and three sorbitol production processes at a leading food processing company. This has resulted in total annual cost savings of around 6.4%, or $9,000,000. In addition, this work has had a significant impact on several strategic operational decisions at this company.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix] Journal: IIE Transactions Pages: 1035-1048 Issue: 12 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170902889694 File-URL: http://hdl.handle.net/10.1080/07408170902889694 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:12:p:1035-1048 Template-Type: ReDIF-Article 1.0 Author-Name: Ayten Turkcan Author-X-Name-First: Ayten Author-X-Name-Last: Turkcan Author-Name: M. Akturk Author-X-Name-First: M. Author-X-Name-Last: Akturk Author-Name: Robert Storer Author-X-Name-First: Robert Author-X-Name-Last: Storer Title: Predictive/reactive scheduling with controllable processing times and earliness-tardiness penalties Abstract: In this study, a machine scheduling problem with controllable processing times in a parallel-machine environment is considered. The objectives are the minimization of manufacturing cost, which is a convex function of processing time, and total weighted earliness and tardiness. It is assumed that parts have job-dependent earliness and tardiness penalties and distinct due dates, and idle time is allowed. The problem is formulated as a time-indexed integer programming model with discrete processing time alternatives for each part. A linear-relaxation-based algorithm is used to assign the parts to the machines and to find a sequence on each machine. A non-linear programming model is proposed to find the optimal starting and processing times of the parts for a given sequence. The proposed non-linear programming model is converted to a minimum-cost network flow model by piecewise linearization of the convex manufacturing cost in the objective function. The proposed method is used to find initial schedules in predictive scheduling. The proposed models are revised to incorporate a stability measure for reacting to unexpected disruptions such as machine breakdowns, the arrival of a new job, or delays in the arrival or shortage of materials in reactive scheduling. Journal: IIE Transactions Pages: 1080-1095 Issue: 12 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170902905995 File-URL: http://hdl.handle.net/10.1080/07408170902905995 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:12:p:1080-1095 Template-Type: ReDIF-Article 1.0 Author-Name: Minhui Liu Author-X-Name-First: Minhui Author-X-Name-Last: Liu Author-Name: Mandyam Srinivasan Author-X-Name-First: Mandyam Author-X-Name-Last: Srinivasan Author-Name: Nana Vepkhvadze Author-X-Name-First: Nana Author-X-Name-Last: Vepkhvadze Title: What is the value of real-time shipment tracking information?
Abstract: This paper presents a stochastic model for evaluating the value of real-time shipment tracking information in a supply system with a manufacturer that fulfills demand from a retailer for a single product using a periodic review, order-up-to-level inventory control policy. Products shipped by the manufacturer can move through multiple stages before they reach the retailer, where each stage represents a physical location or a step in the replenishment process. The lead time for an order placed by the retailer depends on the distribution of all the shipments, or outstanding orders, across the different stages. Hence, it is desirable to track the location of shipments to determine the ordering policy that minimizes the long-run average cost for the retailer. The long-run average cost is modeled under different scenarios: with real-time tracking information, with lagged tracking information, with no tracking information, and with historical information on lead times for past orders. Under optimal myopic ordering policies, it is shown that the long-run average cost for the retailer increases when the shipment tracking information is lagged. Numerical examples are used to demonstrate the savings in long-run average cost with real-time tracking information.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix] Journal: IIE Transactions Pages: 1019-1034 Issue: 12 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170902906001 File-URL: http://hdl.handle.net/10.1080/07408170902906001 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:12:p:1019-1034 Template-Type: ReDIF-Article 1.0 Author-Name: O. Jabali Author-X-Name-First: O. Author-X-Name-Last: Jabali Author-Name: T. Van Woensel Author-X-Name-First: T. Author-X-Name-Last: Van Woensel Author-Name: A. de Kok Author-X-Name-First: A. Author-X-Name-Last: de Kok Author-Name: C. Lecluyse Author-X-Name-First: C. Author-X-Name-Last: Lecluyse Author-Name: H. Peremans Author-X-Name-First: H. Author-X-Name-Last: Peremans Title: Time-dependent vehicle routing subject to time delay perturbations Abstract: This paper extends the existing vehicle routing models by incorporating unanticipated delays at the customers' docking stations in time-dependent environments, i.e., where speeds are not constant throughout the planning horizon. A model is presented that is capable of optimizing the relevant costs taking into account these unplanned delays. The theoretical implications for the composition of the routing schedules are examined in detail. Experiments using a Tabu Search heuristic on a large number of datasets are provided. Based on these experiments, the generated routing schedules are analyzed and the cost-benefit trade-off between routing schedules that are protected against delays and routing schedules that are not protected against these delays is examined. Structural properties of the different solutions that have higher endurance capabilities with regard to the disruptions are highlighted. Journal: IIE Transactions Pages: 1049-1066 Issue: 12 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170902976194 File-URL: http://hdl.handle.net/10.1080/07408170902976194 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:12:p:1049-1066 Template-Type: ReDIF-Article 1.0 Author-Name: G.
Gutiérrez-Jarpa Author-X-Name-First: G. Author-X-Name-Last: Gutiérrez-Jarpa Author-Name: V. Marianov Author-X-Name-First: V. Author-X-Name-Last: Marianov Author-Name: C. Obreque Author-X-Name-First: C. Author-X-Name-Last: Obreque Title: A single vehicle routing problem with fixed delivery and optional collections Abstract: The Single-Vehicle Routing Problem with Fixed Delivery and Optional Collections considers a set of delivery customers receiving goods from a depot and a set of collection customers sending goods to the same depot. All delivery customers must be visited by the vehicle, while a collection customer is visited only if the capacity of the vehicle is large enough to fit the collected load and the visit reduces collection costs that would otherwise be incurred. The goal is to minimize the transportation and collection costs. A model is proposed and solved using a branch-and-cut method. Efficient new cuts are proposed. Computational experience is offered on two sets of test problems. The method is shown to solve instances that previous approaches were unable to handle, and it was also tested on larger instances. Journal: IIE Transactions Pages: 1067-1079 Issue: 12 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170903113771 File-URL: http://hdl.handle.net/10.1080/07408170903113771 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:12:p:1067-1079 Template-Type: ReDIF-Article 1.0 Author-Name: Bo Li Author-X-Name-First: Bo Author-X-Name-Last: Li Author-Name: Kaibo Wang Author-X-Name-First: Kaibo Author-X-Name-Last: Wang Author-Name: Arthur Yeh Author-X-Name-First: Arthur Author-X-Name-Last: Yeh Title: Monitoring the covariance matrix via penalized likelihood estimation Abstract: In many industrial multivariate quality control applications, based on the engineering and operational understanding of how the process works, when the process variability is out of control, changes typically occur in only a small number of elements of the covariance matrix. Under such a premise, we propose a new Phase II Shewhart chart for monitoring changes in the covariance matrix of a multivariate normal process. The new control chart is essentially based on calculating the likelihood ratio of testing the hypothesis that the in-control covariance matrix is equal to a known covariance matrix, where the unknown covariance matrix that appears in the likelihood ratio is replaced by an estimate obtained from a penalized likelihood function. The penalized likelihood function is derived by adding an L1 penalty function to the usual likelihood. The performance of the proposed chart is evaluated based on simulations and compared with that of several existing Shewhart charts for monitoring the covariance matrix. The simulation results indicate that the proposed chart outperforms existing charts. A real example from the semiconductor industry is presented and analyzed using the proposed chart and other existing charts. Potential future research directions are also discussed. Journal: IIE Transactions Pages: 132-146 Issue: 2 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.663952 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.663952 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:45:y:2013:i:2:p:132-146 Template-Type: ReDIF-Article 1.0 Author-Name: Hussam Alshraideh Author-X-Name-First: Hussam Author-X-Name-Last: Alshraideh Author-Name: Enrique Castillo Author-X-Name-First: Enrique Author-X-Name-Last: Castillo Title: Statistical performance of tests for factor effects on the shape of objects with application in manufacturing Abstract: This article considers experiments in manufacturing where the response of interest is the geometric shape of a manufactured part and the goal is to determine whether the process settings varied during the experiment affect the resulting shape of the part. A common approach in practice to determine factor effects is to estimate the form error of the part—if a standard definition of the form error of interest exists—and conduct an analysis of variance (ANOVA) on the form errors. Instead, we study the performance of several statistical shape analysis techniques to analyze this class of experiments. Simulated shape data were used to perform power comparisons for two- and three-dimensional shapes. The ANOVA on the form errors was found to perform poorly in detecting mean shape differences in circular and cylindrical shapes. Procrustes-based tests such as an ANOVA test due to Goodall and a recently proposed ANOVA permutation test provide the highest power to detect differences in the mean shape. These tests can also be applied to parts produced in “free form” manufacturing, where no standard definition of form error exists, provided that corresponding points exist on each part. Journal: IIE Transactions Pages: 121-131 Issue: 2 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.669877 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.669877 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:2:p:121-131 Template-Type: ReDIF-Article 1.0 Author-Name: Chanseok Park Author-X-Name-First: Chanseok Author-X-Name-Last: Park Title: Parameter estimation from load-sharing system data using the expectation–maximization algorithm Abstract: This article considers a system of multiple components connected in parallel. As components fail one by one, the remaining working components share the total load applied to the system. This is commonly referred to as load sharing in the reliability engineering literature. This article considers the traditional approach to the modeling of a load-sharing system under the assumption of the existence of underlying hypothetical latent random variables. Using the Expectation–Maximization (EM) algorithm, a methodology is proposed to obtain the maximum likelihood estimates in such a model in the case where the underlying lifetime distribution of the components is lognormal or normal. The proposed EM method is also illustrated and substantiated using numerical examples. The estimates obtained using the EM algorithm are compared with those obtained using the Broyden–Fletcher–Goldfarb–Shanno algorithm, which falls under the class of numerical methods known as Newton or quasi-Newton methods. The results show that the estimates obtained using the proposed EM method always converge to a unique global maximizer, whereas the estimates obtained using the Newton-type method are highly sensitive to the choice of starting values and thus often fail to converge.
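To make the E-step/M-step structure concrete, here is a minimal Python sketch of EM for normal lifetimes under right-censoring. It is an illustration only, not the article's load-sharing model (which introduces hypothetical latent variables for the changing load); the censoring time and distribution parameters are assumptions.

    import numpy as np
    from scipy.stats import norm

    def em_censored_normal(t, censored, mu=1.0, sigma=1.0, iters=200):
        """EM estimates of (mu, sigma) for normal lifetimes with right-censoring.

        t: observed times; censored[i] is True when t[i] is a censoring time.
        """
        t = np.asarray(t, dtype=float)
        censored = np.asarray(censored, dtype=bool)
        for _ in range(iters):
            a = (t - mu) / sigma
            h = norm.pdf(a) / norm.sf(a)  # inverse Mills ratio
            # E-step: conditional moments of the latent lifetime X given X > t.
            ex = np.where(censored, mu + sigma * h, t)
            ex2 = np.where(censored,
                           mu ** 2 + 2 * mu * sigma * h + sigma ** 2 * (1 + a * h),
                           t ** 2)
            # M-step: closed-form normal MLE based on the completed moments.
            mu = ex.mean()
            sigma = np.sqrt(max(ex2.mean() - mu ** 2, 1e-12))
        return mu, sigma

    rng = np.random.default_rng(0)
    x = rng.normal(10.0, 2.0, 500)              # latent lifetimes
    obs = np.minimum(x, 11.0)                   # right-censored at time 11
    print(em_censored_normal(obs, x > 11.0))    # close to the true (10, 2)

Each iteration replaces the unobserved tail of a censored lifetime by its conditional first and second moments, then re-solves the complete-data normal MLE, the same two-step pattern the abstract describes for the more elaborate load-sharing model.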
Journal: IIE Transactions Pages: 147-163 Issue: 2 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.669878 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.669878 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:2:p:147-163 Template-Type: ReDIF-Article 1.0 Author-Name: Zhi-Sheng Ye Author-X-Name-First: Zhi-Sheng Author-X-Name-Last: Ye Author-Name: D.N. Murthy Author-X-Name-First: D.N. Author-X-Name-Last: Murthy Author-Name: Min Xie Author-X-Name-First: Min Author-X-Name-Last: Xie Author-Name: Loon-Ching Tang Author-X-Name-First: Loon-Ching Author-X-Name-Last: Tang Title: Optimal burn-in for repairable products sold with a two-dimensional warranty Abstract: Warranty data analyses reveal that products sold with two-dimensional warranties may have significant infant mortalities. To deal with this problem, this article proposes and studies a new burn-in modeling approach for repairable products sold with a two-dimensional warranty. More specifically, two types of failures are characterized—i.e., normal and defect failures—and performance- and cost-based burn-in models are developed under the non-renewing free-repair warranty policy. The proposed models subsume the special cases of a one-dimensional warranty, allow different failure modes to have distinct accelerated relationships, and take consumer usage heterogeneity into consideration. Under mild assumptions, it is established that the optimal burn-in usage rate should be as high as possible, provided that no extraneous failure modes are introduced. Furthermore, it is shown that the optimal burn-in duration determined from the performance-based model is not shorter than that from the cost-based model. Numerical examples are used to demonstrate the benefits of burn-in. In addition, some practical implications from a sensitivity analysis are elaborated. Journal: IIE Transactions Pages: 164-176 Issue: 2 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.677573 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.677573 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:2:p:164-176 Template-Type: ReDIF-Article 1.0 Author-Name: Leslie Murray Author-X-Name-First: Leslie Author-X-Name-Last: Murray Author-Name: Héctor Cancela Author-X-Name-First: Héctor Author-X-Name-Last: Cancela Author-Name: Gerardo Rubino Author-X-Name-First: Gerardo Author-X-Name-Last: Rubino Title: A splitting algorithm for network reliability estimation Abstract: Splitting is a variance reduction technique widely used to efficiently estimate the probability of rare events in the simulation of Markovian models. In this article, splitting is applied to improve a well-known method called the Creation Process (CP), used in network reliability estimation. The resulting proposal, called here Splitting/CP, is particularly appropriate in the case of highly reliable networks; i.e., networks for which failure is a rare event. The article introduces the basis of Splitting/CP and presents a set of computational experiments based on network topologies taken from the literature. The results of these experiments show that Splitting/CP is accurate, efficient, and robust and is therefore a valid alternative to the best known methods used in network reliability estimation.
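For context, the sketch below implements the crude Monte Carlo baseline that variance-reduction schemes such as Splitting/CP are designed to improve: as link reliability approaches one, s-t failure becomes rare and the crude estimator's relative error explodes. The bridge topology and the per-link reliability are illustrative assumptions, not taken from the article.

    import random

    EDGES = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]   # small bridge network
    P_UP = 0.99                                        # per-link reliability

    def connected(up_edges, s=0, t=3):
        """Depth-first search over the surviving links only."""
        adj = {}
        for a, b in up_edges:
            adj.setdefault(a, []).append(b)
            adj.setdefault(b, []).append(a)
        seen, stack = {s}, [s]
        while stack:
            for v in adj.get(stack.pop(), []):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return t in seen

    def crude_mc(n=200_000, seed=1):
        rng = random.Random(seed)
        fails = sum(
            not connected([e for e in EDGES if rng.random() < P_UP])
            for _ in range(n)
        )
        return fails / n          # estimate of s-t unreliability

    print(crude_mc())   # noisy once P_UP -> 1, which motivates splitting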
Journal: IIE Transactions Pages: 177-189 Issue: 2 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.677574 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.677574 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:2:p:177-189 Template-Type: ReDIF-Article 1.0 Author-Name: Junwen Wang Author-X-Name-First: Junwen Author-X-Name-Last: Wang Author-Name: Jingshan Li Author-X-Name-First: Jingshan Author-X-Name-Last: Li Author-Name: Jorge Arinez Author-X-Name-First: Jorge Author-X-Name-Last: Arinez Author-Name: Stephan Biller Author-X-Name-First: Stephan Author-X-Name-Last: Biller Title: Quality bottleneck transitions in flexible manufacturing systems with batch productions Abstract: A Markov chain model to analyze quality in flexible manufacturing systems with batch productions is developed in this article. The cycles when good quality and defective parts are produced are defined as the good and defective states, respectively, and transition probabilities are introduced to characterize the changes between these states. The product quality is presented as a function of these transition probabilities, and the transition that has the largest impact on quality is referred to as the quality bottleneck transition (BN-t). Analytical expressions to quantify the sensitivity of quality with respect to transition probabilities are derived, and indicators to identify the BN-t based on data collected on the factory floor are developed. Through extensive numerical experiments, it is shown that such indicators have a high accuracy in identifying the correct bottlenecks and can be used as an effective tool in quality improvement efforts. Finally, a case study at an automotive paint shop is presented to illustrate the applicability of the method. Journal: IIE Transactions Pages: 190-205 Issue: 2 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.677575 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.677575 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:2:p:190-205 Template-Type: ReDIF-Article 1.0 Author-Name: Zeynep Icten Author-X-Name-First: Zeynep Author-X-Name-Last: Icten Author-Name: Steven Shechter Author-X-Name-First: Steven Author-X-Name-Last: Shechter Author-Name: Lisa Maillart Author-X-Name-First: Lisa Author-X-Name-Last: Maillart Author-Name: Mahesh Nagarajan Author-X-Name-First: Mahesh Author-X-Name-Last: Nagarajan Title: Optimal management of a limited number of replacements under Markovian deterioration Abstract: This article examines the problem of adaptively scheduling a limited number of identical replacements of a vital component, each of which degrades according to a discrete-time Markov chain. The component is vital in the sense that its failure results in irreparable system breakdown; therefore, the objective is to maximize the total expected lifetime of the system, though other performance measures are explored under the optimal policy. The problem is formulated as a discrete-time Markov decision process and its structural properties are examined. While the problem scenario is fairly simple, the proofs of intuitive structural properties via standard dynamic programming approaches are elusive. Instead, we demonstrate the usefulness of a sample path approach in a maintenance optimization context, and provide some counterexamples to other seemingly intuitive structural results. 
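As a toy illustration of the decision structure in the replacement problem above (not the article's sample-path analysis), the following brute-force dynamic program computes the maximum expected lifetime when a component deteriorates by an assumed upper-triangular Markov chain, replacement is instantaneous, allowed only before failure, and limited to R spares; all numbers are hypothetical.

    import numpy as np

    P = np.array([[0.7, 0.2, 0.1, 0.0],   # hypothetical deterioration matrix;
                  [0.0, 0.6, 0.3, 0.1],   # state 3 is failure (absorbing and
                  [0.0, 0.0, 0.5, 0.5],   # irreparable, worth 0)
                  [0.0, 0.0, 0.0, 1.0]])
    FAIL = 3

    def expected_lifetime(R):
        """V[s] = expected remaining lifetime in state s with r spares left."""
        V_prev = np.zeros(len(P))              # placeholder; unused when r == 0
        for r in range(R + 1):
            V = np.zeros(len(P))               # V[FAIL] stays 0
            for s in range(FAIL - 1, -1, -1):
                # Continue: one period survives, then the chain moves on.
                # Solve for V[s] despite the self-loop probability P[s, s].
                cont = (1 + P[s, s + 1:] @ V[s + 1:]) / (1 - P[s, s])
                # Replace (before failure): start a fresh component, r - 1 spares.
                replace = V_prev[0] if r > 0 else -np.inf
                V[s] = max(cont, replace)
            V_prev = V
        return V_prev[0]

    print(expected_lifetime(R=2))   # lifetime starting new with two spares

The article's point is precisely that intuitive structure of the optimal policy for this kind of recursion is hard to prove by standard dynamic programming arguments, which is why it resorts to sample-path methods.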
Journal: IIE Transactions Pages: 206-214 Issue: 2 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.679349 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.679349 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:2:p:206-214 Template-Type: ReDIF-Article 1.0 Author-Name: Nader Ebrahimi Author-X-Name-First: Nader Author-X-Name-Last: Ebrahimi Author-Name: Kristin McCullough Author-X-Name-First: Kristin Author-X-Name-Last: McCullough Author-Name: Zhili Xiao Author-X-Name-First: Zhili Author-X-Name-Last: Xiao Title: Reliability of sensors based on nanowire networks Abstract: Nanowires have great potential in many industrial applications, including electronics and sensors. Palladium nanowire network-based hydrogen sensors have been found to outperform their counterparts that consist of an individual nanowire or palladium thin or thick films. However, reliability issues that affect these sensors still need to be addressed. This article considers hydrogen gas sensors based on a nanowire network with a square lattice structure. A general model for describing the reliability behavior of this network is proposed. It is shown that the reliability function can be improved by considering a network of nanowires rather than a single nanowire. Among many other applications, the presented results can also be used to assess the reliability of any nanosystem/nanodevice where the proposed model is a reasonable choice. What distinguishes the work presented in this article from related work are the unique difficulties that the nanocomponents introduce to the evaluation of reliability and the way reliability is defined over cycles of hydrogen gas. Journal: IIE Transactions Pages: 215-228 Issue: 2 Volume: 45 Year: 2013 X-DOI: 10.1080/0740817X.2012.679350 File-URL: http://hdl.handle.net/10.1080/0740817X.2012.679350 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:45:y:2013:i:2:p:215-228 Template-Type: ReDIF-Article 1.0 Author-Name: Matthias Tan Author-X-Name-First: Matthias Author-X-Name-Last: Tan Author-Name: C. Wu Author-X-Name-First: C. Author-X-Name-Last: Wu Title: Generalized selective assembly Abstract: Selective assembly has traditionally been used to achieve tight specifications on the clearance of two mating parts. However, its applicability is not limited to this particular type of assembly. This article develops a generalized version of selective assembly, called GSA. It is a powerful tool that can be used to improve the quality of assemblies of single units of different component types. Two variants of GSA are considered: Direct Selective Assembly (DSA) and Fixed Bin Selective Assembly (FBSA). The former is selective assembly using information from measurements on component characteristics directly, whereas the latter is selective assembly of components sorted into bins. For each variant, the problem of matching the N components of each type to give N assemblies that minimize quality cost is formulated as a linear integer program. The component matching problem for DSA is an axial multi-index assignment problem, whereas for FBSA, it is an axial multi-index transportation problem. Simulations are performed to evaluate the performance of GSA and to find the optimal number of bins. Realistic examples are given to show that the proposed methods can significantly improve the quality of assemblies.
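For two component types, the DSA matching step is an ordinary linear assignment problem, solvable exactly in polynomial time; the sketch below pairs measured components to minimize a quadratic quality cost. The cost function, target value, and measurement distributions are illustrative assumptions; with three or more component types one obtains the axial multi-index assignment problem mentioned in the abstract.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(0)
    a = rng.normal(10.0, 0.05, size=8)      # measured characteristic, type A
    b = rng.normal(5.0, 0.05, size=8)       # measured characteristic, type B
    TARGET = 15.0                           # nominal assembly dimension

    # Quality cost of pairing component i of type A with component j of type B.
    cost = (a[:, None] + b[None, :] - TARGET) ** 2

    rows, cols = linear_sum_assignment(cost)          # optimal one-to-one matching
    print(list(zip(rows, cols)), cost[rows, cols].sum())

The same code with a coarsened cost matrix (bin midpoints instead of raw measurements) gives a feel for the FBSA variant, at the price of the information lost in binning.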
Journal: IIE Transactions Pages: 27-42 Issue: 1 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2010.551649 File-URL: http://hdl.handle.net/10.1080/0740817X.2010.551649 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:1:p:27-42 Template-Type: ReDIF-Article 1.0 Author-Name: Ran Jin Author-X-Name-First: Ran Author-X-Name-Last: Jin Author-Name: Chia-Jung Chang Author-X-Name-First: Chia-Jung Author-X-Name-Last: Chang Author-Name: Jianjun Shi Author-X-Name-First: Jianjun Author-X-Name-Last: Shi Title: Sequential measurement strategy for wafer geometric profile estimation Abstract: Geometric profile factors, such as thickness, flatness, and local warp, are important quality features of a wafer. Fast and accurate measurements of those features are crucial in multistage wafer manufacturing processes to ensure product quality, process monitoring, and quality improvement. Current wafer profile measurement schemes are time-consuming and essentially offline, and are hence unable to provide a quick assessment of wafer quality. This article proposes a sequential measurement strategy to reduce the number of samples measured in wafers while still providing adequate accuracy for quality feature estimation. In the proposed approach, initial samples are measured, and then a Gaussian process model is used to fit the measured data and estimate the true profile of the measured wafer. The profile prediction and its uncertainty serve as guidelines to determine the measurement locations for the next sampling iteration. The measurement stops when the prediction error of the testing sample set satisfies a pre-designated accuracy requirement. A case study is provided to illustrate the procedures and effectiveness of the proposed methods based on the wafer thickness profile measurement in slicing processes. Journal: IIE Transactions Pages: 1-12 Issue: 1 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.557030 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.557030 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:1:p:1-12 Template-Type: ReDIF-Article 1.0 Author-Name: Dong-Hee Lee Author-X-Name-First: Dong-Hee Author-X-Name-Last: Lee Author-Name: Kwang-Jae Kim Author-X-Name-First: Kwang-Jae Author-X-Name-Last: Kim Author-Name: Murat Köksalan Author-X-Name-First: Murat Author-X-Name-Last: Köksalan Title: An interactive method to multiresponse surface optimization based on pairwise comparisons Abstract: In Multi-Response Surface Optimization (MRSO), responses are often in conflict. To obtain a satisfactory compromise, the preference information of a Decision Maker (DM) on the trade-offs among the responses should be incorporated into the problem. Most existing methods employ preference parameters to incorporate the DM’s subjective judgment on the responses. The preference parameter values are specified in advance or adjusted in an interactive manner. However, it is often difficult to specify or adjust preference parameter values that are representative of the DM’s preference structure without use of a systematic method. An interactive method for MRSO is developed in this article in which the DM provides preference information in the form of pairwise comparisons. The results of these comparisons are used to estimate the preference parameter values in an interactive manner.
The required preference information is relevant and therefore easy for the DM to provide. The method is effective in that a highly satisfactory solution for the DM can be obtained through a few pairwise comparisons, regardless of the type of the DM’s utility function, in the problems solved in this work. Journal: IIE Transactions Pages: 13-26 Issue: 1 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.564604 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.564604 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:1:p:13-26 Template-Type: ReDIF-Article 1.0 Author-Name: Hyunjoong Kim Author-X-Name-First: Hyunjoong Author-X-Name-Last: Kim Author-Name: Frank Guess Author-X-Name-First: Frank Author-X-Name-Last: Guess Author-Name: Timothy Young Author-X-Name-First: Timothy Author-X-Name-Last: Young Title: An extension of regression trees to generate better predictive models Abstract: For situations where the data are drawn from reasonably homogeneous populations, traditional methods such as multiple regression typically yield insightful analyses. For situations where the data are drawn from more heterogeneous populations, decision tree approaches, such as Classification and Regression Trees (CART) and Generalized, Unbiased, Interaction Detection and Estimation (GUIDE), are more likely to recognize idiosyncratic subpopulations and interactions automatically. In contrast to CART, however, GUIDE yields models with better predictive performance for each subpopulation. This article extends the idea of GUIDE to handle analysis of covariance-type problems. This article compares GUIDE modeling to various decision tree methods and to multiple regression. The article identifies and discusses the relative advantages and disadvantages of multiple regression, CART, and GUIDE. GUIDE produces quality or reliability models that exhibit greater predictive accuracy than multiple regression or CART for complex, highly diverse populations. Also, GUIDE is readily applicable to many other areas, such as repairability and maintainability settings involving both qualitative and quantitative variables. A small case study of an engineered wood product, medium-density fiberboard, is presented to illustrate the application of GUIDE. Accepted in 2005 for a special issue on Reliability co-edited by Hoang Pham, Rutgers University; Dong Ho Park, Hallym University, Korea; and Richard Cassady, University of Arkansas. Journal: IIE Transactions Pages: 43-54 Issue: 1 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.590441 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.590441 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:1:p:43-54 Template-Type: ReDIF-Article 1.0 Author-Name: Won Yun Author-X-Name-First: Won Author-X-Name-Last: Yun Author-Name: Gui Kim Author-X-Name-First: Gui Author-X-Name-Last: Kim Author-Name: Hisashi Yamamoto Author-X-Name-First: Hisashi Author-X-Name-Last: Yamamoto Title: Economic design of a load-sharing consecutive-k-out-of-n:F system Abstract: This article considers a linear and circular consecutive-k-out-of-n:F system composed of n identical components with exponential failure time distributions and subjected to a total load that is equally shared by all the working components in the system. The event of a component failure results in a higher load, therefore inducing a higher failure rate, in each of the surviving components.
A power rule relationship between the amount of the load shared by surviving components and the failure rate of the surviving components is assumed. The system reliability of the proposed consecutive-k-out-of-n:F system with load-sharing dependency is obtained. Three optimization problems (in which the expected cost per unit time is used as an optimization criterion) are considered to determine the system configuration n and a preventive maintenance interval. The effect of dependence parameters, the system configuration parameter k, and various cost parameters on the optimal n and maintenance interval is investigated in numerical examples. A comparison between the three problems is also performed. Accepted in 2005 for a special issue on Reliability co-edited by Hoang Pham, Rutgers University; Dong Ho Park, Hallym University, Korea; and Richard Cassady, University of Arkansas. Journal: IIE Transactions Pages: 55-67 Issue: 1 Volume: 44 Year: 2012 X-DOI: 10.1080/0740817X.2011.590442 File-URL: http://hdl.handle.net/10.1080/0740817X.2011.590442 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:44:y:2012:i:1:p:55-67 Template-Type: ReDIF-Article 1.0 Author-Name: Michael Veatch Author-X-Name-First: Michael Author-X-Name-Last: Veatch Title: The impact of customer impatience on production control Abstract: Most analyses of make-to-stock production control assume that either all orders are eventually met (complete backordering) or that no customers are willing to wait (lost sales). We consider a more nuanced model of customer behavior, where the fraction of potential customers who place orders depends on the current backlog, and hence the lead time. A continuous one-part-type, single-machine model with Markov-modulated demand and deterministic production is considered. We show that the impact of customer impatience is captured by one quantity, the mean sojourn time in the backlog states. A simple procedure finds this quantity and the optimal policy, which has hedging point form. In applications, observing the durations of stockouts gives a practical method of incorporating the effect of customer impatience. Journal: IIE Transactions Pages: 95-102 Issue: 2 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170801958913 File-URL: http://hdl.handle.net/10.1080/07408170801958913 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:2:p:95-102 Template-Type: ReDIF-Article 1.0 Author-Name: Philip Kaminsky Author-X-Name-First: Philip Author-X-Name-Last: Kaminsky Author-Name: Onur Kaya Author-X-Name-First: Onur Author-X-Name-Last: Kaya Title: Combined make-to-order/make-to-stock supply chains Abstract: A multi-item manufacturer served by a single supplier in a stochastic environment is considered. The manufacturer and the supplier have to decide which items to produce to stock and which to produce to order. The manufacturer also has to quote due dates to arriving customers for make-to-order products. The manufacturer is penalized for long lead times, missing the quoted lead times, and high inventory levels. Several variations of this problem are considered, and effective heuristics for the make-to-order/make-to-stock decision are designed to find the appropriate inventory levels for make-to-stock items. Scheduling and lead time quotation algorithms for centralized and decentralized versions of the model are also developed.
Extensive computational testing is performed to assess the effectiveness of the proposed algorithms, and the centralized and decentralized models are compared in order to quantify the value of centralized control in this supply chain. As centralized control is not always practical or cost-effective, the value of limited information exchange for this system is explored.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resource(s): Online appendix including additional computational analysis and proofs.] Journal: IIE Transactions Pages: 103-119 Issue: 2 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170801975065 File-URL: http://hdl.handle.net/10.1080/07408170801975065 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:2:p:103-119 Template-Type: ReDIF-Article 1.0 Author-Name: Chester Chambers Author-X-Name-First: Chester Author-X-Name-Last: Chambers Author-Name: Panos Kouvelis Author-X-Name-First: Panos Author-X-Name-Last: Kouvelis Author-Name: Ping Su Author-X-Name-First: Ping Author-X-Name-Last: Su Title: Adoption of cost-reducing technology by asymmetric duopolists in stochastically evolving markets Abstract: This work considers the adoption of technology that will reduce unit production costs by one or two players sharing a single market. Three models are developed involving a monopolist, a Stackelberg game with two firms and a designated order of adoption, and an open loop game with no prespecified order of adoption. In the “two-firm” cases the firms are allowed to differ in per unit production costs both before and after technology adoption, as well as the capital outlay required for adoption. In each setting, an evolution of market size is manifested by the level of an exogenous parameter which evolves according to geometric Brownian motion. Structural and numerical results are presented that help to explain the logic and optimal timing of technology adoption. The inclusion of cost and investment level asymmetry leads to a variety of cases. In some instances the high-cost firm is the first to adopt, and adopts at the point that maximizes its profits. In other cases, the higher-cost firm is the first to adopt but the timing of its adoption is dictated by the threat that its rival can make a pre-emptive move. In some cases the lower-cost firm does pre-empt its higher-cost rival and it is optimal for the higher-cost firm to sit idle while this happens. Such an outcome is possible even when both firms have the same per unit production costs after adoption. This work expands on existing literature in that it is the first to consider output rate selection, pricing decisions and technology investments in a continuous time framework while considering a real deferral option and asymmetric players.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix] Journal: IIE Transactions Pages: 145-157 Issue: 2 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802116313 File-URL: http://hdl.handle.net/10.1080/07408170802116313 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:uiiexx:v:41:y:2009:i:2:p:145-157 Template-Type: ReDIF-Article 1.0 Author-Name: Diego Klabjan Author-X-Name-First: Diego Author-X-Name-Last: Klabjan Title: “Approximate dynamic programming: Solving the curses of dimensionality” by Warren B. Powell Journal: Pages: 168-169 Issue: 2 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802189500 File-URL: http://hdl.handle.net/10.1080/07408170802189500 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:2:p:168-169 Template-Type: ReDIF-Article 1.0 Author-Name: Michael Masin Author-X-Name-First: Michael Author-X-Name-Last: Masin Author-Name: Vittal Prabhu Author-X-Name-First: Vittal Author-X-Name-Last: Prabhu Title: AWIP: A simulation-based feedback control algorithm for scalable design of self-regulating production control systems Abstract: A new simulation-based feedback control algorithm, called Adaptive Work In Process (AWIP), for the design of Self-regulating Production Control Systems (SPCSs) such as Kanban, CONWIP, base stock control, and their generalizations is presented. The problem of minimizing average Work In Process (WIP) subject to a required throughput is solved. The AWIP algorithm is used as a feedback controller to adjust the WIP at various stages in the production system. The algorithm is synthesized based on the structural properties of SPCSs that are established analytically in this paper. In this approach, simulation is used to provide the feedback to the controllers, leading to an iterative numerical algorithm. Computational experiments show that the AWIP algorithm is near-optimal and computationally efficient, which makes it an attractive approach for designing and controlling large production systems.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix] Journal: IIE Transactions Pages: 120-133 Issue: 2 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802275366 File-URL: http://hdl.handle.net/10.1080/07408170802275366 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:2:p:120-133 Template-Type: ReDIF-Article 1.0 Author-Name: Claudia Antonini Author-X-Name-First: Claudia Author-X-Name-Last: Antonini Author-Name: Christos Alexopoulos Author-X-Name-First: Christos Author-X-Name-Last: Alexopoulos Author-Name: David Goldsman Author-X-Name-First: David Author-X-Name-Last: Goldsman Author-Name: James Wilson Author-X-Name-First: James Author-X-Name-Last: Wilson Title: Area variance estimators for simulation using folded standardized time series Abstract: We estimate the variance parameter of a stationary simulation-generated process using “folded” versions of standardized time series area estimators. Asymptotically as the sample size increases, different folding levels yield unbiased estimators that are independent scaled chi-squared variates, each with one degree of freedom. This result is exploited to formulate improved variance estimators based on the combination of multiple levels as well as the use of batching. The improved estimators preserve the asymptotic bias properties of their predecessors, but have substantially lower asymptotic variances.
The performance of the new variance estimators is demonstrated in a first-order autoregressive process with autoregressive parameter 0.9 and in the queue-waiting-time process for an M/M/1 queue with server utilization 0.8.[Supplementary materials are available for this article. Go to the publisher's online edition of IIE Transactions for the following free supplemental resource: Appendix] Journal: IIE Transactions Pages: 134-144 Issue: 2 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802331268 File-URL: http://hdl.handle.net/10.1080/07408170802331268 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:2:p:134-144 Template-Type: ReDIF-Article 1.0 Author-Name: Vishnu Nanduri Author-X-Name-First: Vishnu Author-X-Name-Last: Nanduri Author-Name: Tapas Das Author-X-Name-First: Tapas Author-X-Name-Last: Das Title: A reinforcement learning algorithm for obtaining the Nash equilibrium of multi-player matrix games Abstract: With the advent of e-commerce, the contemporary marketplace has evolved significantly toward competition-based trading of goods and services. Competition in many such market scenarios can be modeled as matrix games. This paper presents a computational algorithm to obtain the Nash equilibrium of n-player matrix games. The algorithm uses a stochastic-approximation-based Reinforcement Learning (RL) approach and has the potential to solve n-player matrix games with large player–action spaces. The proposed RL algorithm uses a value-iteration-based approach, which is well established in the Markov decision processes/semi-Markov decision processes literature. To emphasize the broader impact of our solution approach for matrix games, we discuss the established connection of matrix games with discounted and average reward stochastic games, which model a much larger class of problems. The solutions from the RL algorithm are extensively benchmarked with those obtained from an openly available software (GAMBIT). This comparative analysis is performed on a set of 16 matrix games with up to four players and 64 action choices. We also implement our algorithm on practical examples of matrix games that arise due to strategic bidding in restructured electric power markets. Journal: IIE Transactions Pages: 158-167 Issue: 2 Volume: 41 Year: 2009 X-DOI: 10.1080/07408170802369417 File-URL: http://hdl.handle.net/10.1080/07408170802369417 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:41:y:2009:i:2:p:158-167 Template-Type: ReDIF-Article 1.0 Author-Name: Hui Xiao Author-X-Name-First: Hui Author-X-Name-Last: Xiao Author-Name: Fei Gao Author-X-Name-First: Fei Author-X-Name-Last: Gao Author-Name: Loo Hay Lee Author-X-Name-First: Loo Hay Author-X-Name-Last: Lee Title: Optimal computing budget allocation for complete ranking with input uncertainty Abstract: Existing research in ranking and selection has focused on the problem of selecting the best design, subset selection and selecting the set of Pareto designs. Few works have addressed the problem of complete ranking. In this research, we consider the problem of ranking all alternatives completely with consideration of input uncertainty. Given a fixed simulation budget, we aim to maximize the probability of correct ranking among all designs based on their worst-case performances. The problem is formulated as an optimal computing budget allocation model. 
To make this optimization problem computationally tractable, we develop an approximated probability of correct ranking and derive the asymptotic optimality condition based on it. A sequential ranking procedure is then suggested to implement the proposed simulation budget allocation rule. The high efficiency of the proposed simulation procedure is demonstrated via a set of numerical experiments. In addition, useful insights and analysis on characterizing the optimality condition and implementing the efficient budget allocation rule are provided. Journal: IISE Transactions Pages: 489-499 Issue: 5 Volume: 52 Year: 2020 Month: 5 X-DOI: 10.1080/24725854.2019.1659524 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1659524 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:5:p:489-499 Template-Type: ReDIF-Article 1.0 Author-Name: Mohammad Montazeri Author-X-Name-First: Mohammad Author-X-Name-Last: Montazeri Author-Name: Abdalla R. Nassar Author-X-Name-First: Abdalla R. Author-X-Name-Last: Nassar Author-Name: Alexander J. Dunbar Author-X-Name-First: Alexander J. Author-X-Name-Last: Dunbar Author-Name: Prahalada Rao Author-X-Name-First: Prahalada Author-X-Name-Last: Rao Title: In-process monitoring of porosity in additive manufacturing using optical emission spectroscopy Abstract: A key challenge in metal additive manufacturing is the prevalence of defects, such as discontinuities within the part (e.g., porosity). The objective of this work is to monitor porosity in Laser Powder Bed Fusion (L-PBF) additive manufacturing of nickel alloy 718 (popularly called Inconel 718) test parts using in-process optical emission spectroscopy. To realize this objective, cylinder-shaped test parts are built under different processing conditions on a commercial L-PBF machine instrumented with an in-situ multispectral photodetector sensor. Optical emission signatures are captured continuously during the build by the multispectral sensor. Following processing, the porosity-level within each layer of a test part is quantified using X-ray Computed Tomography (CT). The graph Fourier transform coefficients are derived layer-by-layer from signatures acquired from the multispectral photodetector sensor. These graph Fourier transform coefficients are subsequently invoked as input features within various machine learning models to predict the percentage porosity-level in each layer with CT data taken as ground truth. This approach is found to predict the porosity on a layer-by-layer basis with an accuracy of ∼90% (F-score) in a computation time less than 0.5 seconds. In comparison, statistical moments, such as mean, variation, etc., are less accurate (F-score ≈ 80%) and require a computation time exceeding 5 seconds. Journal: IISE Transactions Pages: 500-515 Issue: 5 Volume: 52 Year: 2020 Month: 5 X-DOI: 10.1080/24725854.2019.1659525 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1659525 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:uiiexx:v:52:y:2020:i:5:p:500-515 Template-Type: ReDIF-Article 1.0 Author-Name: Piao Chen Author-X-Name-First: Piao Author-X-Name-Last: Chen Author-Name: Zhi-Sheng Ye Author-X-Name-First: Zhi-Sheng Author-X-Name-Last: Ye Author-Name: Qingqing Zhai Author-X-Name-First: Qingqing Author-X-Name-Last: Zhai Title: Parametric analysis of time-censored aggregate lifetime data Abstract: Many large organizations have developed ambitious programs to build reliability databases by collecting field failure data from a large variety of components. To make the database concise, only the number of component replacements in a component position during an operation time interval is reported in these databases. This leads to time-censoring in the aggregate failure data. Statistical inference for the time-censored aggregate data is challenging, because the likelihood function based on some common lifetime distributions can be intractable. In this study, we propose a general parametric estimation framework for the aggregate data. We first use the gamma distribution and the Inverse Gaussian (IG) distribution to model the aggregate data. Bayesian inference for the two models is discussed. Unlike the gamma/IG distribution, other lifetime distributions involve multiple integrals in the likelihood function, making the standard Bayesian inference difficult. To address the estimation problem, an approximate Bayesian computation algorithm that does not require evaluating the likelihood function is proposed, and its performance is assessed by simulation. As there are several candidate distributions, we further propose a model selection procedure to identify an appropriate distribution for the time-censored aggregate data. A real aggregate dataset extracted from a reliability database is used for illustration. Journal: IISE Transactions Pages: 516-527 Issue: 5 Volume: 52 Year: 2020 Month: 5 X-DOI: 10.1080/24725854.2019.1628374 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1628374 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:5:p:516-527 Template-Type: ReDIF-Article 1.0 Author-Name: Shifeng Xiong Author-X-Name-First: Shifeng Author-X-Name-Last: Xiong Title: Personalized optimization and its implementation in computer experiments Abstract: Optimization problems with both control variables and environmental variables often arise in quality engineering. This article introduces a personalized optimization strategy to handle such problems when the environmental variables can be observed or measured. Unlike traditional robust optimization, personalized optimization aims to find the values of control variables that yield the optimal value of the objective function for given values of environmental variables. Therefore, the solution from personalized optimization, which consists of optimal surfaces defined on the domain of environmental variables, is more reasonable and better than that from robust optimization. The implementation of personalized optimization for expensive black-box computer models is discussed. Based on statistical modeling of computer experiments, we provide two algorithms to sequentially design input values for approximating the optimal surfaces. Numerical examples including a real application show the effectiveness of our algorithms. 
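A toy version of the personalized-optimization idea (not the article's sequential design algorithms): fit a Gaussian process surrogate jointly over a control variable x and an environmental variable e, then, for each observed e, minimize the surrogate over x, tracing out an approximation of the optimal surface x*(e). The objective function and the grid search below are assumptions for illustration.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def f(x, e):
        # Hypothetical black-box response: the best control setting depends on e.
        return (x - 0.6 * e) ** 2 + 0.1 * np.sin(5 * x)

    rng = np.random.default_rng(1)
    X = rng.uniform(0.0, 1.0, size=(60, 2))          # columns: control x, environment e
    y = f(X[:, 0], X[:, 1])
    gp = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True).fit(X, y)

    xs = np.linspace(0.0, 1.0, 201)
    for e in (0.2, 0.5, 0.8):                        # observed environmental values
        grid = np.column_stack([xs, np.full_like(xs, e)])
        x_star = xs[np.argmin(gp.predict(grid))]     # personalized optimum for this e
        print(e, round(x_star, 3))                   # near 0.6*e, shifted by the sine term

In contrast, a robust-optimization answer would pick a single x for all values of e, which is exactly the conservatism the abstract argues personalized optimization avoids.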
Journal: IISE Transactions Pages: 528-536 Issue: 5 Volume: 52 Year: 2020 Month: 5 X-DOI: 10.1080/24725854.2019.1630866 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1630866 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:5:p:528-536 Template-Type: ReDIF-Article 1.0 Author-Name: Salman Jahani Author-X-Name-First: Salman Author-X-Name-Last: Jahani Author-Name: Raed Kontar Author-X-Name-First: Raed Author-X-Name-Last: Kontar Author-Name: Shiyu Zhou Author-X-Name-First: Shiyu Author-X-Name-Last: Zhou Author-Name: Dharmaraj Veeramani Author-X-Name-First: Dharmaraj Author-X-Name-Last: Veeramani Title: Remaining useful life prediction based on degradation signals using monotonic B-splines with infinite support Abstract: Degradation modeling traditionally relies on monitoring degradation signals to model the underlying degradation process. In this context, failure is typically defined as the point where the degradation signal reaches a pre-specified threshold level. Many models assume that degradation signals are completely observed beyond the failure threshold, whereas the issue of truncated degradation signals still remains a challenge. Moreover, based on the physics of a degradation process, the degradation signal should be inherently monotonic. However, it is almost inevitable that most sensor-based degradation signals are subject to noise, which can lead to misleading prediction results. In this article, a non-parametric approach to modeling and prognosis of degradation signals using B-splines in a mixed-effects setting is proposed. In order to deal with the issue of truncated historical degradation signals, our approach is based on augmenting B-spline basis functions with functions of infinite support. Moreover, to model the degradation signal more accurately and robustly in a noisy setting, necessary and sufficient conditions to ensure monotonic evolution of the modeled signals are derived. Appropriate procedures are also presented for online updating of the random coefficients of the mixed-effects model, subject to the derived monotonicity constraints, using degradation data collected from an in-service unit. The performance of the proposed framework is investigated and benchmarked through analysis based on numerical studies and a case study using real-world data from automotive lead-acid batteries. Journal: IISE Transactions Pages: 537-554 Issue: 5 Volume: 52 Year: 2020 Month: 5 X-DOI: 10.1080/24725854.2019.1630868 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1630868 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:5:p:537-554 Template-Type: ReDIF-Article 1.0 Author-Name: Zhaohui Li Author-X-Name-First: Zhaohui Author-X-Name-Last: Li Author-Name: Dan Yu Author-X-Name-First: Dan Author-X-Name-Last: Yu Author-Name: Jian Liu Author-X-Name-First: Jian Author-X-Name-Last: Liu Author-Name: Qingpei Hu Author-X-Name-First: Qingpei Author-X-Name-Last: Hu Title: Higher-order normal approximation approach for highly reliable system assessment Abstract: In this study, the issue of system reliability assessment (SRA) based on component failure data is considered. In industrial statistics, the delta method has become a popular approach for confidence interval approximation.
However, for high-reliability systems, the assessment is usually confronted with very limited component sample sizes, a variety of multi-parameter lifetime models, and complex system structures. Combined with strict requirements on assessment accuracy and computational efficiency, these circumstances mean that existing approaches barely work. In this article, a normal approximation approach is proposed for determining the lower confidence limit of system reliability using components’ time-to-failure data. The polynomial adjustment method is adopted to construct a higher-order approximate confidence limit. The main contribution of this work is constructing an integrated methodology for SRA. Specifically, a reliability-based Winterbottom-extended Cornish-Fisher (R-WCF) expansion method for the log-location-scale family is elaborated. The proposed methodology overcomes the efficiency limitation of Cramér–Rao theory. Numerical studies are conducted to illustrate the effectiveness of the proposed approach, and results show that the R-WCF approach is more efficient than the delta method for highly reliable system assessment, especially with ultra-small sample sizes. Supplementary materials are available for this article. Go to the publisher’s online edition of IISE Transactions. Journal: IISE Transactions Pages: 555-567 Issue: 5 Volume: 52 Year: 2020 Month: 5 X-DOI: 10.1080/24725854.2019.1630869 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1630869 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:5:p:555-567 Template-Type: ReDIF-Article 1.0 Author-Name: Chenglong Li Author-X-Name-First: Chenglong Author-X-Name-Last: Li Author-Name: Xiaolin Wang Author-X-Name-First: Xiaolin Author-X-Name-Last: Wang Author-Name: Lishuai Li Author-X-Name-First: Lishuai Author-X-Name-Last: Li Author-Name: Min Xie Author-X-Name-First: Min Author-X-Name-Last: Xie Author-Name: Xin Wang Author-X-Name-First: Xin Author-X-Name-Last: Wang Title: On dynamically monitoring aggregate warranty claims for early detection of reliability problems Abstract: Warranty databases managed by most world-leading manufacturers are constantly expanding in the big data era. An important application of warranty databases is to detect unobservable reliability problems that emerge at design and/or manufacturing stages, through modeling and analysis of warranty claims data. Usually, serious reliability problems will result in certain abnormal patterns in warranty claims, which can be captured by appropriate statistical methods. In this article, a dynamic control charting scheme is developed for early detection of reliability problems by monitoring warranty claims one period after another, over the product life cycle. Instead of specifying a constant control limit, we determine the control limits progressively by considering stochastic product sales and non-homogeneous failure processes, simultaneously. The false alarm rate at each time period is controlled at a desired level, based on which abrupt changes in field reliability, if any, will be detected in a timely manner. Furthermore, a maximum-likelihood-based post-signal diagnosis scheme is presented to aid in identifying the most probable time of problem occurrence (i.e., change point). It is shown, through in-depth simulation studies and a real case study, that the proposed scheme is able to detect an underlying reliability problem promptly and meanwhile estimate the change point with acceptable accuracy.
Finally, a moving window approach concerning only recent production periods is introduced to extend the original model so as to mitigate the “inertia” problem. Journal: IISE Transactions Pages: 568-587 Issue: 5 Volume: 52 Year: 2020 Month: 5 X-DOI: 10.1080/24725854.2019.1647477 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1647477 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:5:p:568-587 Template-Type: ReDIF-Article 1.0 Author-Name: Beste Basciftci Author-X-Name-First: Beste Author-X-Name-Last: Basciftci Author-Name: Shabbir Ahmed Author-X-Name-First: Shabbir Author-X-Name-Last: Ahmed Author-Name: Nagi Gebraeel Author-X-Name-First: Nagi Author-X-Name-Last: Gebraeel Title: Data-driven maintenance and operations scheduling in power systems under decision-dependent uncertainty Abstract: Generator maintenance scheduling plays a pivotal role in ensuring uncompromised operations of power systems. There exists a tight coupling between the condition of the generators and corresponding operational schedules, significantly affecting the reliability of the system. In this study, we effectively model and solve an integrated condition-based maintenance and operations scheduling problem for a fleet of generators with an explicit consideration of decision-dependent generator conditions. We propose a sensor-driven degradation framework with remaining lifetime estimation procedures under time-varying load levels. We present estimation methods by adapting our model to the underlying signal variability. Then, we develop a stochastic optimization model that considers the effect of the operational decisions on the generators’ degradation levels along with the uncertainty of the unexpected failures. As the resulting problem includes nonlinearities, we adopt piecewise linearization along with other linearization techniques and propose formulation enhancements to obtain a stochastic mixed-integer linear programming formulation. We develop a decision-dependent simulation framework for assessing the performance of a given solution. Finally, we present computational experiments demonstrating significant cost savings and reductions in failures in addition to highlighting computational benefits of the proposed approach. Journal: IISE Transactions Pages: 589-602 Issue: 6 Volume: 52 Year: 2020 Month: 6 X-DOI: 10.1080/24725854.2019.1660831 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1660831 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:6:p:589-602 Template-Type: ReDIF-Article 1.0 Author-Name: Mehmet Başdere Author-X-Name-First: Mehmet Author-X-Name-Last: Başdere Author-Name: Karen Smilowitz Author-X-Name-First: Karen Author-X-Name-Last: Smilowitz Author-Name: Sanjay Mehrotra Author-X-Name-First: Sanjay Author-X-Name-Last: Mehrotra Title: A study of the lock-free tour problem and path-based reformulations Abstract: Motivated by marathon course design, this article introduces a novel tour-finding problem, the Lock-Free Tour Problem (LFTP), which ensures that the resulting tour does not block access to certain critical vertices. The LFTP is formulated as a mixed-integer linear program. Structurally, the LFTP yields excessive subtour formation, causing the standard branch-and-cut approach to perform poorly, even with valid inequalities derived from locking properties of the LFTP. 
For this reason, we introduce path-based reformulations arising from a provably stronger disjunctive program, where the disjunctions are obtained by fixing the order in which must-visit edges are visited. In computational tests, the reformulations are shown to yield up to a 100-fold improvement in solution times. Additional tests demonstrate the value of reformulations for more general tour-finding problems with visit requirements and length restrictions. Finally, practical insights from the Bank of America Chicago Marathon are presented. Supplementary materials are available for this article. We refer the reader to the publisher’s online edition for additional experiments. Journal: IISE Transactions Pages: 603-616 Issue: 6 Volume: 52 Year: 2020 Month: 6 X-DOI: 10.1080/24725854.2019.1662141 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1662141 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:6:p:603-616 Template-Type: ReDIF-Article 1.0 Author-Name: Xiao-Feng Shao Author-X-Name-First: Xiao-Feng Author-X-Name-Last: Shao Title: Online and offline assortment strategy for vertically differentiated products Abstract: A growing number of retailers are becoming brick-and-click retailers as they pursue a dual-channel strategy that uses both conventional retail stores and the Internet to sell products. In a brick-and-click retail environment, assortment planning in one channel has a significant impact on the demand in both channels, and consequently, influences retailers’ bottom-line performance. We developed a stylized model to address a dual-channel retailer's assortment decision for vertically differentiated products and to investigate how the product and price offerings in online and offline channels jointly affect consumer demand. We incorporated the shelf-space constraint of the brick-and-mortar store and a disutility of online purchase into the assortment optimization model. Analytically, we derived results on the structure of the optimal assortment. We demonstrated that it was more profitable for the retailer to sell through both offline and online channels. Counterintuitively, we showed that in the optimal assortment, it was not necessary to designate a high-quality product to the offline store and a low-quality product to the online channel. We also identified two effective ways to improve profit: expansion of the offline channel and reduction of online disutility. Journal: IISE Transactions Pages: 617-637 Issue: 6 Volume: 52 Year: 2020 Month: 6 X-DOI: 10.1080/24725854.2019.1665758 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1665758 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:6:p:617-637 Template-Type: ReDIF-Article 1.0 Author-Name: Debasis Mitra Author-X-Name-First: Debasis Author-X-Name-Last: Mitra Author-Name: Qiong Wang Author-X-Name-First: Qiong Author-X-Name-Last: Wang Title: Management of intellectual asset production in industrial laboratories Abstract: Industrial laboratories generate profit for their parent companies and in so doing benefit society through spillovers of novel technologies and solutions. However, research’s share of corporate investment in R&D has been declining. To understand this trend from the operations perspective, we develop a model-based analysis of the management of intellectual asset production in industrial laboratories.
The model consists of a linear network with multiple stages in which the first stage is the research division engaged in generating novel concepts and prototypes. It is followed by multiple development stages that transform research outputs into intellectual assets and marketable products. Management is responsible for strategic budget allocation to the stages, and tactical management of individual projects. Decisions are based on intrinsic return on investment in the laboratory, and option values of projects, both of which are endogenously determined. Our model and analyses have revealed several possible pathways that can lead the management of the laboratories to reduce the share of research spending in their budgets, namely: (i) lower variability of project values; (ii) improved investment efficiency at development stages; and (iii) higher revenue realization from assets produced at early development stages. Journal: IISE Transactions Pages: 638-650 Issue: 6 Volume: 52 Year: 2020 Month: 6 X-DOI: 10.1080/24725854.2019.1670371 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1670371 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:6:p:638-650 Template-Type: ReDIF-Article 1.0 Author-Name: Gülşah Karakaya Author-X-Name-First: Gülşah Author-X-Name-Last: Karakaya Author-Name: Murat Köksalan Author-X-Name-First: Murat Author-X-Name-Last: Köksalan Title: Estimating the form of a decision maker’s preference function and converging towards preferred solutions Abstract: Preference functions have been widely used to scalarize multiple objectives. Various forms such as linear, quasiconcave, or general monotone have been assumed. In this article, we consider a general family of functions that can take a variety of forms and has properties that allow for estimating the form efficiently. We exploit these properties to estimate the form of the function and converge towards preferred solutions. We develop the theory and algorithms to efficiently estimate the parameters of the function that best represent a decision maker’s preferences. This in turn facilitates fast convergence to preferred solutions. We demonstrate on a variety of experiments that the algorithms work well both in estimating the form of the preference function and converging to preferred solutions. Journal: IISE Transactions Pages: 651-664 Issue: 6 Volume: 52 Year: 2020 Month: 6 X-DOI: 10.1080/24725854.2019.1670373 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1670373 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:6:p:651-664 Template-Type: ReDIF-Article 1.0 Author-Name: Makusee Masae Author-X-Name-First: Makusee Author-X-Name-Last: Masae Author-Name: Christoph H. Glock Author-X-Name-First: Christoph H. Author-X-Name-Last: Glock Author-Name: Panupong Vichitkunakorn Author-X-Name-First: Panupong Author-X-Name-Last: Vichitkunakorn Title: Optimal order picker routing in the chevron warehouse Abstract: Order picking has often been considered one of the most labor- and time-intensive tasks in warehouse operations. Among the various planning problems that have to be solved in manual picker-to-parts systems, the routing of the order picker usually accounts for the highest share of the total warehouse operating cost.
To minimize the cost of order picking, researchers have developed various routing procedures that guide the order picker through the warehouse to complete given customer orders. For some warehouse layouts such as the chevron warehouse, an optimal routing algorithm has not yet been proposed. This article, therefore, contributes to filling this research gap by developing an optimal order picker routing policy for the chevron warehouse. The optimal routing algorithm proposed in this article is based on graph theory and utilizes a dynamic programming procedure. In addition, we propose various simple routing heuristics for the chevron warehouse. In computational experiments, the average order picking tour lengths resulting from optimal routing and from the simple heuristics are compared. Moreover, we compare the performance of the chevron warehouse to the conventional two-block warehouse under various conditions using the tour lengths obtained by the optimal algorithms. Journal: IISE Transactions Pages: 665-687 Issue: 6 Volume: 52 Year: 2020 Month: 6 X-DOI: 10.1080/24725854.2019.1660833 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1660833 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:6:p:665-687 Template-Type: ReDIF-Article 1.0 Author-Name: Vernon Hsu Author-X-Name-First: Vernon Author-X-Name-Last: Hsu Author-Name: Qiaohai (Joice) Hu Author-X-Name-First: Qiaohai Author-X-Name-Last: (Joice) Hu Title: Global sourcing decisions for a multinational firm with foreign tax credit planning Abstract: Various countries’ tax laws adopt certain forms of a so-called worldwide tax system, which levies taxes on their multinational firms’ global incomes at home-country tax rates. To avoid double taxation, they permit tax cross-crediting, i.e., a global firm can use excess foreign tax credits (the portion of foreign tax payments that exceeds its home-country tax liabilities) generated from a subsidiary (located in a high-tax country) to offset the tax obligations from another subsidiary (located in a low-tax country). This article studies a multinational firm’s global sourcing decisions at two subsidiaries located in low- and high-tax countries, respectively, with the objective of maximizing its expected worldwide after-tax profit. We characterize the optimal sourcing decisions under various decentralized and centralized after-tax profit-maximizing performance measures. We show that the global firm can devise an easily implementable incentive scheme to optimally coordinate the decentralized sourcing decisions made at the two subsidiaries. We also demonstrate that decentralized policies with after-tax performance measures are more advantageous than the traditional pretax policies when the “tax effects”, i.e., the impacts of certain features of the tax laws (such as tax cross-crediting) on supply chain decisions, are significant. Journal: IISE Transactions Pages: 688-702 Issue: 6 Volume: 52 Year: 2020 Month: 6 X-DOI: 10.1080/24725854.2019.1670370 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1670370 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:6:p:688-702 Template-Type: ReDIF-Article 1.0 Author-Name: Huy Nguyen Author-X-Name-First: Huy Author-X-Name-Last: Nguyen Author-Name: Thomas C. Sharkey Author-X-Name-First: Thomas C. Author-X-Name-Last: Sharkey Author-Name: John E. Mitchell Author-X-Name-First: John E.
Author-X-Name-Last: Mitchell Author-Name: William A. Wallace Author-X-Name-First: William A. Author-X-Name-Last: Wallace Title: Optimizing the recovery of disrupted single-sourced multi-echelon assembly supply chain networks Abstract: We consider optimization problems related to the scheduling of Multi-Echelon Assembly Supply Chain (MEASC) networks that find application in the recovery from large-scale disruptive events. Each manufacturer within this network assembles a component from a series of sub-components received from other manufacturers and, due to high qualification standards, each sub-component of the manufacturer is single-sourced. Our motivating industries for this problem are defense aircraft and biopharmaceutical manufacturing. We develop scheduling decision rules that are applied locally at each manufacturer and are proven to optimize two industry-relevant global recovery metrics: (i) minimizing the maximum tardiness of any order of the final product of the MEASC network, and (ii) minimizing the time to recover from the disruptive event. Our approaches are applied to a data set based on an industrial partner’s supply chain to show their applicability as well as their advantages over Integer Programming (IP) models. The developed decision rules are proven to be optimal, faster, and more robust than the equivalent IP formulations. In addition, they provide conditions under which local manufacturer decisions will lead to globally optimal recovery efforts. These decision rules can help managers to make better production and shipping decisions to optimize the recovery after disruptions and quantitatively test the impact of different pre-event mitigation strategies against potential disruptions. They can be further useful in MEASCs that have, or expect, a large number of backorders. Journal: IISE Transactions Pages: 703-720 Issue: 7 Volume: 52 Year: 2020 Month: 7 X-DOI: 10.1080/24725854.2019.1670372 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1670372 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:7:p:703-720 Template-Type: ReDIF-Article 1.0 Author-Name: Yuan Jin Author-X-Name-First: Yuan Author-X-Name-Last: Jin Author-Name: S. Joe Qin Author-X-Name-First: S. Joe Author-X-Name-Last: Qin Author-Name: Qiang Huang Author-X-Name-First: Qiang Author-X-Name-Last: Huang Title: Modeling inter-layer interactions for out-of-plane shape deviation reduction in additive manufacturing Abstract: Shape accuracy is an important quality measure of finished parts built by Additive Manufacturing (AM) processes. Previous work has established a generic and prescriptive methodology to represent, predict and compensate in-plane (x–y plane) shape deviation of AM built products using a limited number of test cases. However, extension to the out-of-plane (z-plane) shape deviation faces a major challenge due to intricate inter-layer interactions and error accumulation. One direct manifestation of this complication is that products of the same shape exhibit different deviation patterns when product sizes vary. This work devises an economical experimental plan and a data analytical approach to model out-of-plane deviation for improving the understanding of inter-layer interactions using a small set of training shapes. The key strategy is to discover the transition of deviation patterns from a smaller shape with fewer layers to a bigger one with more layers.
This transition is established through the effect equivalence principle, which enables the model that predicts a smaller shape to digitally “reproduce” the bigger shape by identifying the equivalent amount of design adjustment. In addition, a Bayesian approach is established to infer the deviation models capable of predicting deviation of complex shapes along the z-direction. Furthermore, prediction and compensation of out-of-plane deviation for two-dimensional freeform shapes are accomplished with experimental validation in a stereolithography process. Journal: IISE Transactions Pages: 721-731 Issue: 7 Volume: 52 Year: 2020 Month: 7 X-DOI: 10.1080/24725854.2019.1676936 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1676936 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:7:p:721-731 Template-Type: ReDIF-Article 1.0 Author-Name: Chuanzhou Jia Author-X-Name-First: Chuanzhou Author-X-Name-Last: Jia Author-Name: Chi Zhang Author-X-Name-First: Chi Author-X-Name-Last: Zhang Title: Joint optimization of maintenance planning and workforce routing for a geographically distributed networked infrastructure Abstract: It is paramount to perform timely and appropriate maintenance actions on networked infrastructures, such as power transmission, transportation, telecommunications, and so forth, in order to ensure their reliability in satisfying the prescribed demand required by the economic development and social well-being of a society. For this purpose, the travel time between the components to be maintained needs to be considered, as the components of a real-world infrastructure are usually geographically widely distributed. To address this problem, we propose a holistic bi-objective optimization approach for the joint optimization of maintenance planning and workforce routing for a networked infrastructure, in order to determine a practical maintenance plan that can simultaneously maximize its reliability and minimize the incurred cost. To deal with the complexity of the proposed problem, we develop a Two-level Pareto Simulated Annealing algorithm to approximate the Pareto-optimal solutions of the proposed problem. Finally, two numerical examples are employed to illustrate the ability of the proposed approach to deal with the maintenance optimization problem of a geographically distributed networked infrastructure. Journal: IISE Transactions Pages: 732-750 Issue: 7 Volume: 52 Year: 2020 Month: 7 X-DOI: 10.1080/24725854.2019.1647478 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1647478 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:7:p:732-750 Template-Type: ReDIF-Article 1.0 Author-Name: Tao Jiang Author-X-Name-First: Tao Author-X-Name-Last: Jiang Author-Name: Yu Liu Author-X-Name-First: Yu Author-X-Name-Last: Liu Title: Robust selective maintenance strategy under imperfect observations: A multi-objective perspective Abstract: Selective maintenance, as a pervasive maintenance policy in both military and industrial environments, aims to achieve the maximum success of subsequent missions under limited maintenance resources by choosing an optimal subset of feasible maintenance actions. The existing works on selective maintenance optimization all assume that the condition of components in a system can be perfectly observed after the system completes the last mission.
However, such a premise may not always be true in reality due to the limited accuracy/precision of sensors or inspection instruments. To fill this gap, a new robust selective maintenance model is proposed in this work to consider uncertainties that originate from imperfect observations. The uncertainties associated with imperfect observations are incorporated into the states and effective ages of components via Bayes’ rule. The Kijima type II model, as a specific imperfect maintenance model, is used to characterize the imperfect maintenance efficiency of each selected maintenance action. The expectation and variance of the probability of a repairable system successfully completing the subsequent mission are derived to quantify the uncertainty that is propagated from imperfect observations. To guarantee the robustness of a selective maintenance strategy under uncertainties, a multi-objective selective maintenance model is constructed with the aims of maximizing the expectation of the probability that a system successfully completes the subsequent mission and simultaneously minimizing the variance of this probability. The Pareto-optimality approach is utilized to offer a set of non-dominated solutions. Two illustrative examples are presented to demonstrate the advantages of the proposed method. Journal: IISE Transactions Pages: 751-768 Issue: 7 Volume: 52 Year: 2020 Month: 7 X-DOI: 10.1080/24725854.2019.1649505 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1649505 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:7:p:751-768 Template-Type: ReDIF-Article 1.0 Author-Name: Jian Li Author-X-Name-First: Jian Author-X-Name-Last: Li Author-Name: Qiang Zhou Author-X-Name-First: Qiang Author-X-Name-Last: Zhou Author-Name: Dong Ding Author-X-Name-First: Dong Author-X-Name-Last: Ding Title: Efficient monitoring of autocorrelated Poisson counts Abstract: Statistical surveillance for autocorrelated Poisson counts has drawn considerable attention recently. These works are usually based on a first-order integer-valued autoregressive model and focus on monitoring separately either the marginal mean or the autocorrelation coefficient. Inspired by multivariate statistical process control, this article transforms autocorrelated Poisson counts into a bivariate representation and proposes an efficient control chart. By borrowing the power of the likelihood ratio test, this chart, perhaps surprisingly, demonstrates almost uniformly stronger power than the existing alternatives in simultaneously detecting shifts in both the marginal mean and the autocorrelation coefficient. In addition, the robustness of the proposed chart against overdispersion, often encountered in count data, is also verified. It is shown that this chart also has superiority in monitoring autocorrelated overdispersed counts. Journal: IISE Transactions Pages: 769-779 Issue: 7 Volume: 52 Year: 2020 Month: 7 X-DOI: 10.1080/24725854.2019.1649506 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1649506 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:7:p:769-779 Template-Type: ReDIF-Article 1.0 Author-Name: Nha Vo-Thanh Author-X-Name-First: Nha Author-X-Name-Last: Vo-Thanh Author-Name: Peter Goos Author-X-Name-First: Peter Author-X-Name-Last: Goos Author-Name: Eric D. Schoen Author-X-Name-First: Eric D.
Author-X-Name-Last: Schoen Title: Integer programming approaches to find row–column arrangements of two-level orthogonal experimental designs Abstract: Design of experiments is an effective, generic methodology for problem solving as well as for improving or optimizing product design and manufacturing processes. The most commonly used experimental designs are two-level fractional factorial designs. In recent years, nonregular fractional factorial two-level experimental designs have gained much popularity compared to the traditional regular fractional factorial designs, because they offer more flexibility in terms of run size as well as the possibility of estimating partially aliased effects. For this reason, there is much interest in finding good nonregular designs, and in orthogonal blocking arrangements of these designs. In this contribution, we address the problem of finding orthogonal blocking arrangements of high-quality nonregular two-level designs in scenarios with two crossed blocking factors. We call these blocking arrangements orthogonal row-column arrangements. We propose two strategies to find row-column arrangements of given two-level orthogonal treatment designs such that the treatment factors’ main effects are orthogonal to both blocking factors. The first strategy involves a sequential approach, which is especially useful when one blocking factor is more important than the other. The second strategy involves a simultaneous approach for situations where both blocking factors are equally important. For the latter approach, we propose three different optimization models, so that, in total, we consider four different methods to obtain row-column arrangements. We compare the performance of the four methods by looking for good row-column arrangements of the best two-level 24-run orthogonal designs in terms of the G-aberration criterion, and apply the best of these methods to 64- and 72-run orthogonal designs. Journal: IISE Transactions Pages: 780-796 Issue: 7 Volume: 52 Year: 2020 Month: 7 X-DOI: 10.1080/24725854.2019.1655608 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1655608 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:7:p:780-796 Template-Type: ReDIF-Article 1.0 Author-Name: Qiuzhuang Sun Author-X-Name-First: Qiuzhuang Author-X-Name-Last: Sun Author-Name: Zhi-Sheng Ye Author-X-Name-First: Zhi-Sheng Author-X-Name-Last: Ye Author-Name: Xiaoyan Zhu Author-X-Name-First: Xiaoyan Author-X-Name-Last: Zhu Title: Managing component degradation in series systems for balancing degradation through reallocation and maintenance Abstract: In a physical system, components are usually installed in fixed positions that are known as operating slots. For reasons such as user behavior and imbalanced workload, a component’s degradation can be affected by the corresponding installation position in the system. As a result, components’ degradation levels can be significantly different even when the components come from a homogeneous population. Dynamic reallocation of the components among the installation positions is a feasible way to balance the extent of the degradation, and hence, extend the time from system installation to its replacement. In this study, we quantify the benefit of incorporating reallocation into the condition-based maintenance framework for series systems.
The degradation of components in the system is modeled as a multivariate Wiener process, where the correlation between the components’ degradation processes is considered. Under the periodic inspection framework, the optimal control limits for reallocation and preventive replacement are investigated. We first propose a reallocation policy for two-component systems, where the degradation process with reallocation and replacement is formulated as a semi-regenerative process. Then the long-run average operational cost is computed based on the stationary distribution of its embedded Markov chain. We then generalize the model to general series systems and use Monte Carlo simulations to approximate the maintenance cost. The optimal thresholds for reallocation and replacement are obtained from a stochastic response surface method using a stochastic kriging model. We further generalize the model to the scenario of an unknown degradation rate associated with each slot. The proposed model is applied to the tire system of a car and the battery system of hybrid-electric vehicles, where we show that the reallocation policy is capable of significantly reducing the system’s long-run average operational cost. Journal: IISE Transactions Pages: 797-810 Issue: 7 Volume: 52 Year: 2020 Month: 7 X-DOI: 10.1080/24725854.2019.1672908 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1672908 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:7:p:797-810 Template-Type: ReDIF-Article 1.0 Author-Name: Merve Meraklı Author-X-Name-First: Merve Author-X-Name-Last: Meraklı Author-Name: Simge Küçükyavuz Author-X-Name-First: Simge Author-X-Name-Last: Küçükyavuz Title: Risk aversion to parameter uncertainty in Markov decision processes with an application to slow-onset disaster relief Abstract: In classic Markov Decision Processes (MDPs), action costs and transition probabilities are assumed to be known, although an accurate estimation of these parameters is often not possible in practice. This study addresses MDPs under cost and transition probability uncertainty and aims to provide a mathematical framework to obtain policies minimizing the risk of high long-term losses due to not knowing the true system parameters. To this end, we utilize the risk measure value-at-risk associated with the expected performance of an MDP model with respect to parameter uncertainty. We provide mixed-integer linear and nonlinear programming formulations and heuristic algorithms for such risk-averse models of MDPs under a finite distribution of the uncertain parameters. Our proposed models and solution methods are illustrated on an inventory management problem for humanitarian relief operations during a slow-onset disaster. The results demonstrate the potential of our risk-averse modeling approach for reducing the risk of highly undesirable outcomes in uncertain/risky environments. Journal: IISE Transactions Pages: 811-831 Issue: 8 Volume: 52 Year: 2020 Month: 8 X-DOI: 10.1080/24725854.2019.1674464 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1674464 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:8:p:811-831 Template-Type: ReDIF-Article 1.0 Author-Name: Jivan Deglise-Hawkinson Author-X-Name-First: Jivan Author-X-Name-Last: Deglise-Hawkinson Author-Name: David L. Kaufman Author-X-Name-First: David L.
Author-X-Name-Last: Kaufman Author-Name: Blake Roessler Author-X-Name-First: Blake Author-X-Name-Last: Roessler Author-Name: Mark P. Van Oyen Author-X-Name-First: Mark P. Author-X-Name-Last: Van Oyen Title: Access planning and resource coordination for clinical research operations Abstract: This research creates an operations engineering and management methodology to address a complex operational planning and coordination challenge faced by sites that perform clinical research trials. The time-sensitive and resource-specific treatment sequences for each of the many trial protocols conducted at a site make it very difficult to capture the dynamics of this unusually complex system. Existing approaches for site planning and participant scheduling exhibit both excessively long and highly variable Time to First Available Visit (TFAV) waiting times and high staff overtime costs. We have created a new method, termed CApacity Planning Tool And INformatics (CAPTAIN), which provides decision support to identify the most valuable set of research trials to conduct within available resources and a plan for how to book their participants. Constraints include (i) the staff overtime costs, and/or (ii) the TFAV by trial. To estimate the site’s metrics via a Mixed-Integer Program, CAPTAIN combines participant trajectory forecasting with an efficient visit booking reservation plan to allocate the date for the first visit of every participant’s treatment sequence. It also plans a daily nursing staff schedule, optimized together with the booking reservation plan, that sets each nurse’s shift assignments in consideration of participants’ requirements/needs. Journal: IISE Transactions Pages: 832-849 Issue: 8 Volume: 52 Year: 2020 Month: 8 X-DOI: 10.1080/24725854.2019.1675202 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1675202 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:8:p:832-849 Template-Type: ReDIF-Article 1.0 Author-Name: Kübra Tanınmış Author-X-Name-First: Kübra Author-X-Name-Last: Tanınmış Author-Name: Necati Aras Author-X-Name-First: Necati Author-X-Name-Last: Aras Author-Name: İ. Kuban Altınel Author-X-Name-First: İ. Kuban Author-X-Name-Last: Altınel Author-Name: Evren Güney Author-X-Name-First: Evren Author-X-Name-Last: Güney Title: Minimizing the misinformation spread in social networks Abstract: The Influence Maximization Problem has been widely studied in recent years, due to rich application areas including marketing. It involves finding k nodes to trigger a spread such that the expected number of influenced nodes is maximized. The problem we address in this study is an extension of the reverse influence maximization problem, i.e., the misinformation minimization problem, where two players make decisions sequentially in the form of a Stackelberg game. The first player aims to minimize the spread of misinformation, whereas the second player aims to maximize it. Two algorithms, one greedy heuristic and one matheuristic, are proposed for the first player’s problem. In both of them, the second player’s problem is approximated by Sample Average Approximation, a well-known method for solving two-stage stochastic programming problems, that is augmented with a state-of-the-art algorithm developed for the influence maximization problem.
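The interplay of the greedy heuristic and Sample Average Approximation can be caricatured in a few lines: estimate a seed set's expected spread by averaging over sampled independent-cascade realizations and grow the set greedily. This is a didactic sketch under generic assumptions, not the article's matheuristic or its misinformation-minimization objective; all names are hypothetical.

    import random

    def cascade(graph, seeds, p, rng):
        """One independent-cascade sample; returns the influenced node set."""
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    frontier.append(v)
        return active

    def greedy_seeds(graph, k, p=0.1, samples=200, seed=0):
        rng = random.Random(seed)
        nodes = set(graph)
        for succ in graph.values():
            nodes.update(succ)
        chosen = []
        for _ in range(k):                 # add the node with best marginal gain
            best, best_val = None, -1.0
            for v in nodes - set(chosen):
                val = sum(len(cascade(graph, chosen + [v], p, rng))
                          for _ in range(samples)) / samples
                if val > best_val:
                    best, best_val = v, val
            chosen.append(best)
        return chosen

    toy = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}
    print(greedy_seeds(toy, k=2))

Averaging over a fixed collection of sampled realizations is exactly the Sample Average Approximation idea the abstract invokes.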
Journal: IISE Transactions Pages: 850-863 Issue: 8 Volume: 52 Year: 2020 Month: 8 X-DOI: 10.1080/24725854.2019.1680909 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1680909 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:8:p:850-863 Template-Type: ReDIF-Article 1.0 Author-Name: Daniel Mitchell Author-X-Name-First: Daniel Author-X-Name-Last: Mitchell Author-Name: Jȩdrzej Białkowski Author-X-Name-First: Jȩdrzej Author-X-Name-Last: Białkowski Author-Name: Stathis Tompaidis Author-X-Name-First: Stathis Author-X-Name-Last: Tompaidis Title: Volume-weighted average price tracking: A theoretical and empirical study Abstract: The Volume-Weighted Average Price (VWAP) of a security is a key measure of execution quality for large orders that is often used by institutional investors. We propose a VWAP tracking model with general price and volume dynamics and transaction costs. We find the theoretically optimal VWAP tracking strategy in several special cases. With these solutions we investigate three questions empirically. Do dynamic strategies outperform static strategies? How important is the choice of the market impact model? Does capturing the relationship between trading volume and the variance of stock price returns play an important role in optimal VWAP execution? We find that static strategies are preferable to dynamic ones, that simpler market impact models, which assume either constant or linear market impact, perform as well as more sophisticated nonlinear market impact models, and that capturing the relationship between trading volume and the variance of stock price returns improves the performance of VWAP execution significantly. Journal: IISE Transactions Pages: 864-889 Issue: 8 Volume: 52 Year: 2020 Month: 8 X-DOI: 10.1080/24725854.2019.1688896 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1688896 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:8:p:864-889 Template-Type: ReDIF-Article 1.0 Author-Name: Han Ye Author-X-Name-First: Han Author-X-Name-Last: Ye Author-Name: Lawrence D. Brown Author-X-Name-First: Lawrence D. Author-X-Name-Last: Brown Author-Name: Haipeng Shen Author-X-Name-First: Haipeng Author-X-Name-Last: Shen Title: Hazard rate estimation for call center customer patience time Abstract: Estimating the hazard function of customer patience time has become a necessary component of effective operational planning such as workforce staffing and scheduling in call centers. When customers get served, their patience times are right-censored. In addition, the exact event times in call centers are sometimes unobserved and naturally binned into time intervals, due to the design of data collection systems. We develop a TunT (Transform-unTransform) estimator that turns the difficult problem of nonparametric hazard function estimation into a regression problem on binned and right-censored data. Our approach starts with binning event times and transforming event count data with a mean-matching transformation, which enables a simpler characterization of the heteroscedastic variance function. A nonparametric regression technique is then applied to the transformed data. Finally, the estimated regression function is back-transformed to yield an estimator for the original hazard function.
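The Transform-unTransform flow can be pictured with a toy stand-in. The article's mean-matching transformation and its choice of nonparametric smoother are not reproduced here; the root transform and moving average below are placeholders in the same spirit (root transforms of the form sqrt(x + 1/4) are a standard variance-stabilizing device for count data).

    import numpy as np

    counts = np.array([5, 7, 6, 9, 12, 15, 14, 18, 17, 21], dtype=float)

    z = np.sqrt(counts + 0.25)        # variance-stabilizing transform (stand-in)

    kernel = np.ones(3) / 3.0         # crude nonparametric smoother: moving average
    smooth = np.convolve(z, kernel, mode="same")

    est = smooth**2 - 0.25            # back-transform to the count scale
    print(np.round(est, 2))

In the actual estimator the smoothing step is a proper nonparametric regression and the transformation is the mean-matching one described above, but the three-stage transform, regress, and back-transform structure is the same.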
The proposed estimation procedure is illustrated using call center data to reveal interesting customer patience behavior, and health insurance plan trial data to compare effects between the treatment and control groups. The numerical study shows that our approach yields more accurate estimates and better staffing decisions than existing methods. Journal: IISE Transactions Pages: 890-903 Issue: 8 Volume: 52 Year: 2020 Month: 8 X-DOI: 10.1080/24725854.2019.1692159 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1692159 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:8:p:890-903 Template-Type: ReDIF-Article 1.0 Author-Name: Saeed Poormoaied Author-X-Name-First: Saeed Author-X-Name-Last: Poormoaied Author-Name: Zümbül Atan Author-X-Name-First: Zümbül Author-X-Name-Last: Atan Author-Name: Ton de Kok Author-X-Name-First: Ton Author-X-Name-Last: de Kok Author-Name: Tom Van Woensel Author-X-Name-First: Tom Author-X-Name-Last: Van Woensel Title: Optimal inventory and timing decisions for emergency shipments Abstract: Emergency shipments provide a powerful mechanism to alleviate the risk of imminent stock-outs, and hence, can result in substantial benefits. Customer satisfaction and increased service level are immediate consequences of utilizing emergency shipments. In this article, we study a periodic review inventory problem of a retailer that uses a base-stock policy to replenish its inventory from a supplier with infinite capacity. The retailer can also place emergency shipment orders in each period. The goal is to determine the optimal base-stock level, the optimal period length, as well as the optimal timing and size of the emergency shipment orders such that the total expected operating cost composed of expected replenishment, emergency shipment, holding, and backordering costs is minimized. Different from classic periodic review inventory models, we consider time-dependent holding and shortage costs. The timing decision is of particular interest since it provides valuable information to the supplier. Our analytical results reveal the complex behavior of the cost function. We prove the existence of the optimal solution and develop a heuristic algorithm to find a near-optimal solution. We compare our policy with simpler policies and with a policy that relies on the inventory level to decide whether to trigger emergency orders. In our analysis we study the case with a fixed period length as well. Our results indicate that the benefit of using the proposed policy depends on emergency shipment costs, backordering costs and demand variability. We observe that when the period length is a decision variable and emergency shipment costs are small, the retailer needs to focus on the timing of emergency shipments; when the period length is fixed and the demand variation is high, the size of the emergency shipments should be the focus. Our results indicate that using an inventory level-dependent emergency shipment policy is not beneficial, especially when the period length is a decision variable and the emergency order lead time is long. Journal: IISE Transactions Pages: 904-925 Issue: 8 Volume: 52 Year: 2020 Month: 8 X-DOI: 10.1080/24725854.2019.1697016 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1697016 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:8:p:904-925 Template-Type: ReDIF-Article 1.0 Author-Name: Satya S.
Malladi Author-X-Name-First: Satya S. Author-X-Name-Last: Malladi Author-Name: Alan L. Erera Author-X-Name-First: Alan L. Author-X-Name-Last: Erera Author-Name: Chelsea C. White Author-X-Name-First: Chelsea C. Author-X-Name-Last: White Title: A dynamic mobile production capacity and inventory control problem Abstract: We analyze a problem of dynamic logistics planning given uncertain demands for a multi-location production-inventory system with transportable modular production capacity. In such systems, production modules provide capacity, and can be moved from one location to another to produce stock and satisfy demand. We formulate a dynamic programming model for a planning problem that considers production and inventory decisions, and develop suboptimal lookahead and rollout policies that use value function approximations based on geographic decomposition. Mixed-integer programming formulations are provided for several single-period optimization problems that define these policies. These models generalize a formulation for the single-period newsvendor problem, and in some cases the feasible region polyhedra contain only integer extreme points, allowing efficient solution computation. A computational study with stationary demand distributions, which should benefit least from mobile capacity, provides an analysis of the effectiveness of these policies and the value that mobile production capacity provides. For problems with 20 production locations, the best suboptimal policies produce on average 13% savings over fixed capacity allocation policies when the costs of module movement, holding, and backordering are accounted for. Greater savings result when the number of locations increases. Journal: IISE Transactions Pages: 926-943 Issue: 8 Volume: 52 Year: 2020 Month: 8 X-DOI: 10.1080/24725854.2019.1693709 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1693709 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:8:p:926-943 Template-Type: ReDIF-Article 1.0 Author-Name: The Editors Title: Correction Journal: IISE Transactions Pages: 944-944 Issue: 8 Volume: 52 Year: 2020 Month: 8 X-DOI: 10.1080/24725854.2020.1743095 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1743095 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:8:p:944-944 Template-Type: ReDIF-Article 1.0 Author-Name: Yasemin Limon Author-X-Name-First: Yasemin Author-X-Name-Last: Limon Author-Name: Ananth Krishnamurthy Author-X-Name-First: Ananth Author-X-Name-Last: Krishnamurthy Title: Resource allocation strategies for protein purification operations Abstract: We analyze resource allocation challenges in protein purification operations where differences in scientist capabilities can lead to significantly different outcomes. We use queueing models to capture the underlying dynamics and quantify the performance of different strategies based on solutions obtained using the matrix-geometric approach. We show that certain partial flexibility structures coupled with appropriate priority rules can yield very efficient system performance. We also define a new server utilization metric that can be very effective in rank ordering strategies. Through numerical studies, we provide useful rules for biomanufacturers to achieve higher profits and shorter lead times.
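For readers unfamiliar with the matrix-geometric approach used above, its core computation for a generic quasi-birth-death chain is a fixed-point iteration for the rate matrix R solving A0 + R A1 + R^2 A2 = 0 (Neuts); stationary level probabilities then satisfy pi_{k+1} = pi_k R. The blocks below are arbitrary toy data, not the protein purification model from the article.

    import numpy as np

    A0 = np.array([[0.2, 0.0],
                   [0.1, 0.1]])        # level-up transition rates
    A2 = np.array([[0.5, 0.0],
                   [0.0, 0.6]])        # level-down transition rates
    off = np.array([[0.0, 0.1],
                    [0.2, 0.0]])       # within-level rates
    # Diagonal chosen so generator rows sum to zero.
    A1 = off - np.diag(A0.sum(1) + A2.sum(1) + off.sum(1))

    R = np.zeros((2, 2))
    A1_inv = np.linalg.inv(A1)
    for _ in range(1000):              # fixed-point iteration for R
        R_new = -(A0 + R @ R @ A2) @ A1_inv
        if np.abs(R_new - R).max() < 1e-12:
            R = R_new
            break
        R = R_new
    print(np.round(R, 4))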
Journal: IISE Transactions Pages: 945-960 Issue: 9 Volume: 52 Year: 2020 Month: 9 X-DOI: 10.1080/24725854.2019.1680908 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1680908 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:9:p:945-960 Template-Type: ReDIF-Article 1.0 Author-Name: Wan Wu Author-X-Name-First: Wan Author-X-Name-Last: Wu Author-Name: René B.M. de Koster Author-X-Name-First: René B.M. Author-X-Name-Last: de Koster Author-Name: Yugang Yu Author-X-Name-First: Yugang Author-X-Name-Last: Yu Title: Forward-reserve storage strategies with order picking: When do they pay off? Abstract: Customer order response time and system throughput capacity are key performance measures in warehouses. They depend strongly on the storage strategies deployed. One popular strategy is to split inventory into bulk storage and a pick stock, or Forward-Reserve (FR) storage. Managers often use a rule of thumb: when the ratio m of average picks per replenishment is larger than a certain factor, it is beneficial to split inventory. However, research that systematically quantifies the benefits is lacking. We quantify the benefits analytically by developing response travel time models for FR storage in an Automated Storage/Retrieval system combined with order picking. We compare the performance of FR storage with that of turnover class-based storage, and find when it pays off. Our findings illustrate that, in FR storage systems where forward and reserve stocks are stored in the same rack, FR storage usually pays off, as long as m is sufficiently larger than 1. The response time savings can go up to 50% when m is larger than 10. We validate these results using real data from a wholesale distributor. Journal: IISE Transactions Pages: 961-976 Issue: 9 Volume: 52 Year: 2020 Month: 9 X-DOI: 10.1080/24725854.2019.1699979 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1699979 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:9:p:961-976 Template-Type: ReDIF-Article 1.0 Author-Name: Shenghan Guo Author-X-Name-First: Shenghan Author-X-Name-Last: Guo Author-Name: Weihong “Grace” Guo Author-X-Name-First: Weihong “Grace” Author-X-Name-Last: Guo Author-Name: Linkan Bian Author-X-Name-First: Linkan Author-X-Name-Last: Bian Title: Hierarchical spatial-temporal modeling and monitoring of melt pool evolution in laser-based additive manufacturing Abstract: Melt pool dynamics reflect the formation of microstructural defects in parts during Laser-Based Additive Manufacturing (LBAM). The thermal images of the melt pool collected during the LBAM process provide unique opportunities for modeling and monitoring its evolution. The recognized anomalies are evidence of part defects that are to be eliminated for higher product quality. A unique concern in analyzing thermal images is spatial-temporal correlation: the heat transfer within the melt pool causes spatial correlations among pixels in an image, and the evolution of the melt pool causes temporal correlations across images. The objective of this study is to develop a LBAM modeling-monitoring framework that incorporates spatial-temporal effects in characterizing and monitoring melt pool behavior. Spatial-Temporal Conditional Autoregressive (STCAR) models are explored. STCAR-AR is identified as the best candidate among the numerous STCAR variants.
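Before the control chart construction described next, the spatial-temporal regression idea can be illustrated with a toy stand-in (not the STCAR-AR model itself): regress each pixel on its own lag and on the lagged mean of its four neighbors, here with wraparound boundaries and known simulation coefficients so that recovery can be checked.

    import numpy as np

    rng = np.random.default_rng(1)
    T, H, W = 60, 8, 8
    X = np.zeros((T, H, W))

    def nb_mean(frame):
        """Mean of the 4-neighbors of every pixel (torus boundaries)."""
        return (np.roll(frame, 1, 0) + np.roll(frame, -1, 0)
                + np.roll(frame, 1, 1) + np.roll(frame, -1, 1)) / 4.0

    for t in range(1, T):              # simulate with phi = 0.5, rho = 0.3
        X[t] = 0.5 * X[t-1] + 0.3 * nb_mean(X[t-1]) + rng.normal(0, 0.1, (H, W))

    lag = X[:-1].ravel()
    nbr = np.stack([nb_mean(f) for f in X[:-1]]).ravel()
    coef = np.linalg.lstsq(np.column_stack([lag, nbr]), X[1:].ravel(), rcond=None)[0]
    print(np.round(coef, 2))           # approximately [0.5, 0.3]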
A novel two-level control chart is constructed on top of the STCAR-AR model to monitor the melt pool dynamics. A hierarchical structure underlies the two-level control chart in the sense that global anomalies recognized in Level II can be traced in Level I for further inspection. A comparison with other recently developed in-situ monitoring approaches shows that the proposed framework achieves the best detection power and false positive rate. Journal: IISE Transactions Pages: 977-997 Issue: 9 Volume: 52 Year: 2020 Month: 9 X-DOI: 10.1080/24725854.2019.1704465 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1704465 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:9:p:977-997 Template-Type: ReDIF-Article 1.0 Author-Name: Joonho Bae Author-X-Name-First: Joonho Author-X-Name-Last: Bae Author-Name: Jinkyoo Park Author-X-Name-First: Jinkyoo Author-X-Name-Last: Park Title: Count-based change point detection via multi-output log-Gaussian Cox processes Abstract: The ability to detect change points is a core skill in system monitoring and prognostics. When data take the form of frequencies, i.e., count data, counting processes such as Poisson processes are extensively used for modeling. However, many existing count-based approaches rely on parametric models or deterministic frameworks, failing to consider complex system uncertainty based on temporal and environmental contexts. Another challenge is analyzing interrelated events simultaneously to detect change points that can be missed by independent analyses. This article presents a Multi-Output Log-Gaussian Cox Process with a Cross-Spectral Mixture kernel (MOLGCP-CSM) as a count-based change point detection algorithm. The proposed model employs MOLGCP to flexibly model time-varying intensities of events over multiple channels with the CSM kernel that can capture either negative or positive correlations, as well as phase differences between stochastic processes. During monitoring, the proposed approach measures the level of change in real time by computing a weighted likelihood of observation with respect to the constructed model and determines whether a target system experiences a change point by conducting a statistical test based on extreme value theory. Our method is validated using three types of datasets: synthetic, accelerometer vibration, and gas regulator data. Journal: IISE Transactions Pages: 998-1013 Issue: 9 Volume: 52 Year: 2020 Month: 9 X-DOI: 10.1080/24725854.2019.1676937 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1676937 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:9:p:998-1013 Template-Type: ReDIF-Article 1.0 Author-Name: Tangfan Xiahou Author-X-Name-First: Tangfan Author-X-Name-Last: Xiahou Author-Name: Yu Liu Author-X-Name-First: Yu Author-X-Name-Last: Liu Title: Reliability bounds for multi-state systems by fusing multiple sources of imprecise information Abstract: It is crucial to evaluate reliability measures of a system over time, so that reliability-related decisions, such as maintenance planning and warranty policy, can be appropriately made for the system. However, accurately assessing system reliability becomes challenging if only limited amounts of reliability data are available.
On the other hand, imprecise information related to reliability measures of a system can be collected based on experts’ judgments/experiences; however, these pieces of information may be heterogeneous and come from multiple sources. By properly fusing the imprecise information, reliability bounds of a system can be assessed to facilitate the ensuing reliability-related decision-making. In this article, a constrained optimization framework is proposed to assess reliability bounds of multi-state systems by fusing multiple sources of imprecise information. The proposed framework is composed of three basic steps: (i) constructing a set of constraints for a resulting optimization formulation by representing all the imprecise information as functions of unknown parameters of the degradation models for components; (ii) identifying the upper and lower bounds of the system reliability function by solving the resulting constrained optimization problem via a tailored feasibility-based particle swarm algorithm; and (iii) developing a model selection approach to choose the best component degradation model that matches all the imprecise information to the maximum extent. A numerical example along with an engineering example is given to demonstrate the effectiveness of the proposed method. Journal: IISE Transactions Pages: 1014-1031 Issue: 9 Volume: 52 Year: 2020 Month: 9 X-DOI: 10.1080/24725854.2019.1680910 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1680910 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:9:p:1014-1031 Template-Type: ReDIF-Article 1.0 Author-Name: Hao Yan Author-X-Name-First: Hao Author-X-Name-Last: Yan Author-Name: Kamran Paynabar Author-X-Name-First: Kamran Author-X-Name-Last: Paynabar Author-Name: Jianjun Shi Author-X-Name-First: Jianjun Author-X-Name-Last: Shi Title: AKM2D: An adaptive framework for online sensing and anomaly quantification Abstract: In point-based sensing systems such as coordinate measuring machines and laser ultrasonics, where complete sensing is impractical due to the high sensing time and cost, adaptive sensing through a systematic exploration is vital for online inspection and anomaly quantification. Most of the existing sequential sampling methodologies focus on reducing the overall fitting error for the entire sampling space. However, in many anomaly quantification applications, the main goal is to estimate sparse anomalous regions accurately at the pixel level. In this article, we develop a novel framework named Adaptive Kernelized Maximum-Minimum Distance (AKM2D) to speed up the inspection and anomaly detection process through an intelligent sequential sampling scheme integrated with fast estimation and detection. The proposed method balances the sampling efforts between the space-filling sampling (exploration) and focused sampling near the anomalous region (exploitation). The proposed methodology is validated by conducting simulations and a case study of anomaly detection in composite sheets using a guided wave test. Journal: IISE Transactions Pages: 1032-1046 Issue: 9 Volume: 52 Year: 2020 Month: 9 X-DOI: 10.1080/24725854.2019.1681606 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1681606 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
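The exploration-exploitation balance behind AKM2D can be caricatured with a plain greedy maximum-minimum-distance rule: each new measurement location maximizes its minimum distance to the points already sampled, weighted toward regions flagged as anomalous. This is a generic sketch; the article's kernelized criterion and its coupling with estimation and detection are not reproduced, and the anomalous strip below is hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    cand = rng.random((400, 2))          # candidate sensing locations in [0, 1]^2
    weight = np.ones(len(cand))          # exploitation weight per candidate
    weight[cand[:, 0] > 0.8] = 2.0       # hypothetical suspected-anomaly strip

    sampled = [cand[0]]                  # arbitrary first measurement
    for _ in range(15):
        d = np.linalg.norm(cand[:, None, :] - np.array(sampled)[None, :, :],
                           axis=2).min(axis=1)
        sampled.append(cand[np.argmax(weight * d)])
    print(np.round(np.array(sampled), 2))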
Handle: RePEc:taf:uiiexx:v:52:y:2020:i:9:p:1032-1046 Template-Type: ReDIF-Article 1.0 Author-Name: Naichao Wang Author-X-Name-First: Naichao Author-X-Name-Last: Wang Author-Name: Yu (Chelsea) Jin Author-X-Name-First: Yu (Chelsea) Author-X-Name-Last: Jin Author-Name: Lin Ma Author-X-Name-First: Lin Author-X-Name-Last: Ma Author-Name: Haitao Liao Author-X-Name-First: Haitao Author-X-Name-Last: Liao Title: A computational method for finding the availability of opportunistically maintained multi-state systems with non-exponential distributions Abstract: Availability is one of the most important performance measures of a repairable system. Among various mathematical methods, the method of supplementary variables is an effective way of modeling the steady-state availability of systems governed by non-exponential distributions. However, when all the underlying probability distributions are non-exponential (e.g., Weibull), the corresponding state equations are difficult to solve. To overcome this challenge, a new method is proposed in this article to determine the steady-state availability of a multi-state repairable system, where all the state sojourn times, as well as the maintenance times, are generally distributed. As an indispensable step, the well-posedness and stability of the system’s state equations are illustrated and proved using C0 operator semigroup theory. Afterwards, based on the generalized Integral Mean Value Theorem, the expression for system steady-state availability is derived as a function of state probabilities. Then, the original problem is transformed into a system of linear equations that can be easily solved. A simulation study and an instance studied in the literature are used to demonstrate the applications of the proposed method in practice. These numerical examples illustrate that the proposed method provides a new computational tool for effectively evaluating the availability of a repairable system without relying on simulation. Journal: IISE Transactions Pages: 1047-1061 Issue: 9 Volume: 52 Year: 2020 Month: 9 X-DOI: 10.1080/24725854.2019.1688897 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1688897 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:9:p:1047-1061 Template-Type: ReDIF-Article 1.0 Author-Name: Gregory Steeger Author-X-Name-First: Gregory Author-X-Name-Last: Steeger Author-Name: Timo Lohmann Author-X-Name-First: Timo Author-X-Name-Last: Lohmann Author-Name: Steffen Rebennack Author-X-Name-First: Steffen Author-X-Name-Last: Rebennack Title: Strategic bidding for a price-maker hydroelectric producer: Stochastic dual dynamic programming and Lagrangian relaxation Abstract: In bid-based markets, energy producers seek bidding strategies that maximize their revenue. In this article, we seek the maximum-revenue bidding schedule for a single price-maker hydroelectric producer. We assume the producer sells energy in the day-ahead electricity market and has the ability to impact the market-clearing price with its bids. To obtain the price-maker hydroelectric producer’s bidding schedule, we use a combination of Stochastic Dual Dynamic Programming and Lagrangian relaxation. In this framework, we dualize the water balance equations, allowing an exact representation of the non-concave immediate revenue function, while preserving the concave shape of the future revenue function. We model inflow uncertainty and its stagewise dependence by a periodic autoregressive model. 
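A periodic autoregressive inflow model of the kind mentioned can be sketched in a few lines; the monthly means, AR coefficients, and noise scales below are hypothetical placeholders, not calibrated to any real river system.

    import numpy as np

    rng = np.random.default_rng(7)
    mu = 100 + 40 * np.sin(2 * np.pi * np.arange(12) / 12)  # monthly mean inflows
    phi = np.full(12, 0.6)              # month-specific AR(1) coefficients
    sigma = np.full(12, 10.0)           # month-specific noise standard deviations

    T = 48
    inflow = np.empty(T)
    inflow[0] = mu[0]
    for t in range(1, T):               # PAR(1): AR on the deviation from the mean
        m, m_prev = t % 12, (t - 1) % 12
        inflow[t] = (mu[m] + phi[m] * (inflow[t-1] - mu[m_prev])
                     + rng.normal(0, sigma[m]))
    print(np.round(inflow[:12], 1))

Stagewise dependence enters through the lagged deviation term, which is what couples the stages of the stochastic dynamic program.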
To demonstrate our approaches’ utility, we model Honduras’ electricity market assuming that the thermal producers act as price-takers and that one price-maker hydro producer operates all of the hydroelectric plants. Journal: IISE Transactions Pages: 929-942 Issue: 11 Volume: 50 Year: 2018 Month: 11 X-DOI: 10.1080/24725854.2018.1461963 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1461963 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:11:p:929-942 Template-Type: ReDIF-Article 1.0 Author-Name: Haihui Shen Author-X-Name-First: Haihui Author-X-Name-Last: Shen Author-Name: L. Jeff Hong Author-X-Name-First: L. Jeff Author-X-Name-Last: Hong Author-Name: Xiaowei Zhang Author-X-Name-First: Xiaowei Author-X-Name-Last: Zhang Title: Enhancing stochastic kriging for queueing simulation with stylized models Abstract: Stochastic kriging is a popular metamodeling technique to approximate computationally expensive simulation models. However, it typically treats the simulation model as a black box in practice and often fails to capture the highly nonlinear response surfaces that arise from queueing simulations. We propose a simple, effective approach to improve the performance of stochastic kriging by incorporating stylized queueing models that contain useful information about the shape of the response surface. We provide several statistical tools to measure the usefulness of the incorporated stylized models. We show that even a relatively crude stylized model can substantially improve the prediction accuracy of stochastic kriging. Journal: IISE Transactions Pages: 943-958 Issue: 11 Volume: 50 Year: 2018 Month: 11 X-DOI: 10.1080/24725854.2018.1465242 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1465242 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:11:p:943-958 Template-Type: ReDIF-Article 1.0 Author-Name: Guopeng Song Author-X-Name-First: Guopeng Author-X-Name-Last: Song Author-Name: Daniel Kowalczyk Author-X-Name-First: Daniel Author-X-Name-Last: Kowalczyk Author-Name: Roel Leus Author-X-Name-First: Roel Author-X-Name-Last: Leus Title: The robust machine availability problem – bin packing under uncertainty Abstract: We define and solve the robust machine availability problem in a parallel machine environment, which aims to minimize the number of identical machines required while completing all the jobs before a given deadline. The deterministic version of this problem essentially coincides with the bin packing problem. Our formulation preserves a user-defined robustness level regarding possible deviations in the job durations. For better computational performance, a branch-and-price procedure is proposed based on a set covering reformulation. We use zero-suppressed binary decision diagrams for solving the pricing problem, which enable us to manage the difficulty entailed by the robustness considerations as well as by extra constraints imposed by branching decisions. Computational results are reported that show the effectiveness of a pricing solver with zero-suppressed binary decision diagrams compared with a mixed integer programming solver. Journal: IISE Transactions Pages: 997-1012 Issue: 11 Volume: 50 Year: 2018 Month: 11 X-DOI: 10.1080/24725854.2018.1468122 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1468122 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
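Since the deterministic version of the problem essentially coincides with bin packing, a quick upper bound on the machine count comes from first-fit decreasing. The sketch below shows only that deterministic baseline with made-up job durations; it is not the branch-and-price procedure with zero-suppressed binary decision diagrams described above.

    def first_fit_decreasing(durations, deadline):
        """Pack jobs onto machines of capacity `deadline`; return machine count."""
        machines = []                       # remaining capacity per open machine
        for d in sorted(durations, reverse=True):
            for i, rem in enumerate(machines):
                if d <= rem:
                    machines[i] -= d
                    break
            else:
                machines.append(deadline - d)   # open a new machine
        return len(machines)

    print(first_fit_decreasing([4, 8, 1, 4, 2, 1], deadline=10))   # -> 2

Robustness against deviations in the job durations is what breaks this simple view and motivates the set covering reformulation.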
Handle: RePEc:taf:uiiexx:v:50:y:2018:i:11:p:997-1012 Template-Type: ReDIF-Article 1.0 Author-Name: Simon Emde Author-X-Name-First: Simon Author-X-Name-Last: Emde Author-Name: Hamid Abedinnia Author-X-Name-First: Hamid Author-X-Name-Last: Abedinnia Author-Name: Christoph H. Glock Author-X-Name-First: Christoph H. Author-X-Name-Last: Glock Title: Scheduling electric vehicles making milk-runs for just-in-time delivery Abstract: Battery-operated electric vehicles are frequently used in in-plant logistics systems to feed parts from a central depot to workcells on the shop floor. These vehicles, often called tow trains, make many milk-run trips during a typical day, with the delivery timetable depending on the production schedule. To operate such a milk-run delivery system efficiently, not only do the timetabled trips need to be assigned to vehicles, it is also important to take the limited battery capacity into consideration. Moreover, since most tow trains in use today are still operated by human drivers, fairness aspects with respect to the division of the workload also need to be considered. In this context, we tackle the following problem we encountered at a large manufacturer of engines for trucks and buses in Germany. Given a fixed schedule of milk-runs (round trips) to be performed during a planning horizon, and a fleet of homogeneous electric vehicles stationed at a depot, which vehicle should set out on which milk-run and when should recharging breaks be scheduled, such that all runs can be completed with the minimum number of vehicles and all vehicles are about equally busy? We investigate the computational complexity of this problem and develop suitable heuristics, which are shown to solve instances of realistic size to near-optimality in a matter of a few minutes. We also offer some insight into how battery technology influences vehicle utilization. Journal: IISE Transactions Pages: 1013-1025 Issue: 11 Volume: 50 Year: 2018 Month: 11 X-DOI: 10.1080/24725854.2018.1479899 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1479899 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:11:p:1013-1025 Template-Type: ReDIF-Article 1.0 Author-Name: Mahdi Hamzeei Author-X-Name-First: Mahdi Author-X-Name-Last: Hamzeei Author-Name: James Luedtke Author-X-Name-First: James Author-X-Name-Last: Luedtke Title: Service network design with equilibrium-driven demands Abstract: We study a service network design problem in which the network operator wishes to determine facility locations and sizes in order to satisfy the demand of the customers while balancing the cost of the system with a measure of the quality of service faced by the customers. We assume customers choose the facilities that meet their demand so as to minimize their total cost, including costs associated with traveling and waiting. When having demand served at a facility, customers face a service delay that depends on the total usage (congestion) of the facility. The total cost of meeting a customer’s demand at a facility includes a facility-specific unit travel cost and a function of the service delay. When customers all minimize their own costs, the resulting distribution of customer demand to facilities is modeled as an equilibrium. This problem is motivated by several applications, including supplier selection in supply chain planning, preventive healthcare services planning, and shelter location-allocation in disaster management.
We model the problem as a mixed-integer bilevel program that can be reformulated as a nonconvex mixed-integer nonlinear program. The reformulated problem is difficult to solve by general-purpose solvers. Hence, we propose a Lagrangian relaxation approach that finds a candidate feasible solution along with a lower bound that can be used to validate the solution quality. The computational results indicate that the method can efficiently find feasible solutions, along with bounds on their optimality gap, even for large instances. Journal: IISE Transactions Pages: 959-969 Issue: 11 Volume: 50 Year: 2018 Month: 11 X-DOI: 10.1080/24725854.2018.1479900 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1479900 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:11:p:959-969 Template-Type: ReDIF-Article 1.0 Author-Name: Xiang Zhong Author-X-Name-First: Xiang Author-X-Name-Last: Zhong Title: A queueing approach for appointment capacity planning in primary care clinics with electronic visits Abstract: Electronic visits (e-visits), which allow patients and primary care providers to communicate through secure messages sent from patient portals, have enabled virtual care delivery as an alternative to traditional office visits for selected and non-urgent medical issues. In this study, we address the appointment capacity planning problem for care providers and administrators who are engaged in facilitating e-visits to improve care delivery efficiency and patient access. We model the dynamics of patient appointment backlog using discrete-time bulk-service queues, and develop novel numerical methods for incorporating patients rejoining and flexible provider capacity tailored to the service nature of e-visits. The analytically tractable model enables evaluating the impacts of service system intensity, effectiveness of e-visits, and popularity of e-visits on care delivery performance, and identifying the conditions favoring e-visit implementation. The insights obtained from the model provide guidance on service capacity design to maximize the potential of e-visits. Journal: IISE Transactions Pages: 970-988 Issue: 11 Volume: 50 Year: 2018 Month: 11 X-DOI: 10.1080/24725854.2018.1486053 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1486053 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:11:p:970-988 Template-Type: ReDIF-Article 1.0 Author-Name: Christopher Garcia Author-X-Name-First: Christopher Author-X-Name-Last: Garcia Title: Optimal multiunit transfer over adversarial paths with increasing intercept probabilities Abstract: We consider a problem involving transporting a set of items over a set of hostile paths where an adversary seeks to intercept them, with the goal of maximizing the probability that all items successfully cross. Items leave a unique footprint as they cross a path, and the probability that an item is intercepted on a given path increases according to an intercept probability function as the cumulative footprint on that path increases. We provide a problem formulation and demonstrate several properties important for its solution. We then use these to develop four optimization algorithms: an exact algorithm (E), a greedy heuristic (G), a greedy heuristic with local search (LS), and a genetic algorithm (GA).
These algorithms were evaluated via computational experiments on a large set of benchmark problems spanning different sizes and characteristics. LS provided the largest number of best solutions while outperforming GA in terms of solution time. Journal: IISE Transactions Pages: 989-996 Issue: 11 Volume: 50 Year: 2018 Month: 11 X-DOI: 10.1080/24725854.2018.1488306 File-URL: http://hdl.handle.net/10.1080/24725854.2018.1488306 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:50:y:2018:i:11:p:989-996 Template-Type: ReDIF-Article 1.0 Author-Name: Michel Siemon Author-X-Name-First: Michel Author-X-Name-Last: Siemon Author-Name: Maximilian Schiffer Author-X-Name-First: Maximilian Author-X-Name-Last: Schiffer Author-Name: Sumit Mitra Author-X-Name-First: Sumit Author-X-Name-Last: Mitra Author-Name: Grit Walther Author-X-Name-First: Grit Author-X-Name-Last: Walther Title: Value-based production planning in non-ferrous metal industries: Application in the copper industry Abstract: Production planners in the non-ferrous metal industry face an inherent combinatorial complexity of the metal production process within a fast-changing market environment. Herein, we study the benefit of an integrated optimization-based planning approach. We present the first value-based optimization approach for operational planning in the non-ferrous metal industry that yields high economic and technical benefits. We formulate a mixed-integer linear program for non-ferrous metal operational production planning that covers the complexity of material flows and the entire production process and is amenable to real-time application. We give insights into the practical implementation and evaluation of our modeling approach at a plant of Aurubis, a large European non-ferrous metal producer. Our results show that an optimization and value-based production planning approach provides significant benefits, including a 38% better planning solution in practice. In addition to economic benefits, we highlight the technical advantages that result from a detailed techno-economic representation of the entire production process. Journal: IISE Transactions Pages: 1063-1080 Issue: 10 Volume: 52 Year: 2020 Month: 10 X-DOI: 10.1080/24725854.2020.1711992 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1711992 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:10:p:1063-1080 Template-Type: ReDIF-Article 1.0 Author-Name: Bradley Guthrie Author-X-Name-First: Bradley Author-X-Name-Last: Guthrie Author-Name: Pratik J. Parikh Author-X-Name-First: Pratik J. Author-X-Name-Last: Parikh Title: The rack orientation and curvature problem for retailers Abstract: The layout of a retail store directly affects the products to which shoppers are exposed, and ultimately their purchases. Optimizing a layout to increase product visibility can directly benefit both the retailer (via increased revenue) and the shopper (via increased satisfaction). We introduce the Rack Orientation and Curvature Problem, which determines the orientation and curvature of each rack in a layout to maximize the expected marginal impulse profit (after discounting for floor space cost). We account for the dynamic interaction between a walking shopper’s field of regard and a layout of racks filled with products.
As several constraints in the optimization model cannot be expressed in a closed analytical form, we propose a particle swarm optimization-based solution approach and, subsequently, conduct a comprehensive experimental study using realistic data. Layouts with either high-acute and straight-to-medium-curved racks, or high-obtuse and high-curved racks, appear to dominate common rack layouts with orthogonal and straight racks; increases of over 23% can be realized depending on the location policy of products. Sensitivity of these solutions to shopper volume, cost of floor space, travel direction, and aspect ratio is also evaluated. Retailers can use our approach to quantitatively evaluate new rack designs or benchmark existing ones. Journal: IISE Transactions Pages: 1081-1097 Issue: 10 Volume: 52 Year: 2020 Month: 10 X-DOI: 10.1080/24725854.2020.1725253 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1725253 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:10:p:1081-1097 Template-Type: ReDIF-Article 1.0 Author-Name: Gian-Gabriel P. Garcia Author-X-Name-First: Gian-Gabriel P. Author-X-Name-Last: Garcia Author-Name: Mariel S. Lavieri Author-X-Name-First: Mariel S. Author-X-Name-Last: Lavieri Author-Name: Ruiwei Jiang Author-X-Name-First: Ruiwei Author-X-Name-Last: Jiang Author-Name: Michael A. McCrea Author-X-Name-First: Michael A. Author-X-Name-Last: McCrea Author-Name: Thomas W. McAllister Author-X-Name-First: Thomas W. Author-X-Name-Last: McAllister Author-Name: Steven P. Broglio Author-X-Name-First: Steven P. Author-X-Name-Last: Broglio Author-Name: CARE Consortium Investigators Author-X-Name-First: CARE Consortium Investigators Author-X-Name-Last: Title: Data-driven stochastic optimization approaches to determine decision thresholds for risk estimation models Abstract: The increasing availability of data has popularized risk estimation models in many industries, especially healthcare. However, properly utilizing these models for accurate diagnosis decisions remains challenging. Our research aims to determine when a risk estimation model provides sufficient evidence to make a positive or negative diagnosis, or if the model is inconclusive. We formulate the Two-Threshold Problem (TTP) as a stochastic program which maximizes sensitivity and specificity while constraining false-positive and false-negative rates. We characterize the optimal solutions to TTP as either two-threshold or one-threshold and show that its optimal solution can be derived from a related linear program (TTP*). We also derive utility-based and multi-class classification frameworks for which our analytical results apply. We solve TTP* using data-driven methods: quantile estimation (TTP*-Q) and distributionally robust optimization (TTP*-DR). Through simulation, we characterize the feasibility, optimality, and computational burden of TTP*-Q and TTP*-DR and compare TTP*-Q to an optimized single threshold. Finally, we apply TTP* to concussion assessment data and find that it achieves greater accuracy at lower misclassification rates compared with traditional approaches. This data-driven framework can provide valuable decision support to clinicians by identifying “easy” cases which can be diagnosed immediately and “hard” cases which may require further evaluation before diagnosing. 
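One way to picture the quantile-estimation variant (TTP*-Q) in the two-threshold abstract above is the sketch below: estimate an upper threshold from the negative-class scores to cap the false-positive rate and a lower threshold from the positive-class scores to cap the false-negative rate, declaring everything in between inconclusive. The quantile rule and the alpha/beta targets are illustrative assumptions, not the paper's exact stochastic program.

```python
import numpy as np

def two_thresholds(scores_pos, scores_neg, alpha=0.05, beta=0.05):
    # Upper threshold: roughly an alpha fraction of negatives score above it (FPR control).
    t_hi = np.quantile(scores_neg, 1.0 - alpha)
    # Lower threshold: roughly a beta fraction of positives score below it (FNR control).
    t_lo = np.quantile(scores_pos, beta)
    # If the two cross, the rule collapses to a single threshold, mirroring the
    # two-threshold-or-one-threshold structure of the optimal solutions.
    return min(t_lo, t_hi), max(t_lo, t_hi)

def diagnose(score, t_lo, t_hi):
    if score >= t_hi:
        return "positive"
    if score < t_lo:
        return "negative"
    return "inconclusive"   # "hard" case deferred for further evaluation

rng = np.random.default_rng(0)
t_lo, t_hi = two_thresholds(rng.normal(1.0, 1.0, 500), rng.normal(-1.0, 1.0, 500))
```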
Journal: IISE Transactions Pages: 1098-1121 Issue: 10 Volume: 52 Year: 2020 Month: 10 X-DOI: 10.1080/24725854.2020.1725254 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1725254 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:10:p:1098-1121 Template-Type: ReDIF-Article 1.0 Author-Name: German A. Velasquez Author-X-Name-First: German A. Author-X-Name-Last: Velasquez Author-Name: Maria E. Mayorga Author-X-Name-First: Maria E. Author-X-Name-Last: Mayorga Author-Name: Osman Y. Özaltın Author-X-Name-First: Osman Y. Author-X-Name-Last: Özaltın Title: Prepositioning disaster relief supplies using robust optimization Abstract: Emergency disaster managers are concerned with responding to disasters in a timely and efficient manner. We focus on determining the location and amount of disaster relief supplies to be prepositioned in anticipation of disasters. These supplies are stocked when the locations of affected areas and the amount of relief items needed are uncertain. Furthermore, a proportion of the prepositioned supplies might be damaged by the disasters. We propose a two-stage robust optimization model. The location and amount of prepositioned relief supplies are decided in the first stage before any disaster occurs. In the second stage, a limited amount of relief supplies can be procured post-disaster and prepositioned supplies are distributed to affected areas. The objective is to minimize the total cost of prepositioning and distributing disaster relief supplies. We solve the proposed robust optimization model using a column-and-constraint generation algorithm. Two optimization criteria are considered: absolute cost and maximum regret. A case study of the hurricane season in the Southeast US is used to gain insights into the effects of the optimization criteria and critical model parameters on the relief supply prepositioning strategy. Journal: IISE Transactions Pages: 1122-1140 Issue: 10 Volume: 52 Year: 2020 Month: 10 X-DOI: 10.1080/24725854.2020.1725692 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1725692 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:10:p:1122-1140 Template-Type: ReDIF-Article 1.0 Author-Name: Kyohong Shin Author-X-Name-First: Kyohong Author-X-Name-Last: Shin Author-Name: Taesik Lee Author-X-Name-First: Taesik Author-X-Name-Last: Lee Title: Emergency medical service resource allocation in a mass casualty incident by integrating patient prioritization and hospital selection problems Abstract: Mass casualty incidents often cause a shortage of resources for emergency medical services such as ambulances and emergency departments. These resources must be effectively managed to save as many lives as possible. Critical decisions in operating emergency medical service systems include the prioritization of patients for ambulance transport and the selection of destination hospitals. We develop a stochastic dynamic model that integrates patient transport prioritization and hospital selection problems. Policy solutions from the model are compared with other plausible heuristics, and our experimental results show that our policy solution outperforms other alternatives. More importantly, we show that there are considerable benefits from optimally selecting hospitals, which suggests that this decision is just as important as the patient prioritization decision.
Motivated by this finding, we propose a heuristic policy that considers both patient prioritization and hospital selection. Experimental results demonstrate the strong performance of our heuristic policy compared with existing heuristics. In addition, the proposed approach offers practical advantages. Whereas the existing heuristic policies use patient information, our heuristic policy requires information on the hospital state, which is more readily available and reliable. Journal: IISE Transactions Pages: 1141-1155 Issue: 10 Volume: 52 Year: 2020 Month: 10 X-DOI: 10.1080/24725854.2020.1727069 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1727069 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:10:p:1141-1155 Template-Type: ReDIF-Article 1.0 Author-Name: Xufei Liu Author-X-Name-First: Xufei Author-X-Name-Last: Liu Author-Name: Changhyun Kwon Author-X-Name-First: Changhyun Author-X-Name-Last: Kwon Title: Exact robust solutions for the combined facility location and network design problem in hazardous materials transportation Abstract: We consider a leader-follower game in the form of a bi-level optimization problem that simultaneously optimizes facility locations and network design in hazardous materials transportation. In the upper level, the leader intends to reduce the facility setup cost and the hazmat exposure risk by choosing facility locations and road segments to close for hazmat transportation. When making such decisions, the leader anticipates the response of the followers who want to minimize the transportation costs. To account for uncertainty in the hazmat exposure and the hazmat transport demand, we adopt a robust optimization approach with multiplicative uncertain parameters and polyhedral uncertainty sets. The resulting problem has a min-max problem in the upper level and a shortest-path problem in the lower level. We devise an exact algorithm that combines a cutting plane algorithm with Benders decomposition. Journal: IISE Transactions Pages: 1156-1172 Issue: 10 Volume: 52 Year: 2020 Month: 10 X-DOI: 10.1080/24725854.2019.1697017 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1697017 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:10:p:1156-1172 Template-Type: ReDIF-Article 1.0 Author-Name: Levente Sipeki Author-X-Name-First: Levente Author-X-Name-Last: Sipeki Author-Name: Alexandra M. Newman Author-X-Name-First: Alexandra M. Author-X-Name-Last: Newman Author-Name: Candace A. Yano Author-X-Name-First: Candace A. Author-X-Name-Last: Yano Title: Selecting support pillars in underground mines with ore veins Abstract: We address the design optimization problem for a mine in which the ore is concentrated in a system of long, thin veins and for which the so-called top-down open-stope mining method is customarily used. In such a mine, a large volume of earth below the surface is envisioned for extraction and is conceptually divided into three-dimensional rectangular blocks on each of several layers. The mine design specifies which blocks are left behind as part of a pillar to provide geotechnical structural stability; the remainder are extracted and processed to obtain ore. We seek a design that maximizes profit subject to geotechnical stability constraints, which we represent as a set partitioning problem with side constraints.
Due to the complex geotechnical considerations, a formulation that guarantees feasibility would require exponentially large numbers of variables and constraints. We devise a method to limit the number of variables that need to be included and develop a heuristic in which violated constraints are iteratively incorporated into the formulation, thereby eliminating the vast majority of voids (openings in the mine) that would cause instability. A final evaluation of geotechnical stability via finite element analysis is necessary, but we have found that systematic inclusion of relatively simple constraints is adequate for the mine design to pass this evaluation. In a case study based on real data, our approach provided a mine design that satisfied the finite element analysis standards, with an estimated profit 16% higher than that of the best solution identified by the company’s mining engineers, leading to tens of millions of dollars in profit enhancement. Journal: IISE Transactions Pages: 1173-1188 Issue: 10 Volume: 52 Year: 2020 Month: 10 X-DOI: 10.1080/24725854.2019.1699978 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1699978 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:10:p:1173-1188 Template-Type: ReDIF-Article 1.0 Author-Name: Mengyi Zhang Author-X-Name-First: Mengyi Author-X-Name-Last: Zhang Author-Name: Andrea Matta Author-X-Name-First: Andrea Author-X-Name-Last: Matta Title: Models and algorithms for throughput improvement problem of serial production lines via downtime reduction Abstract: Throughput is one of the key performance indicators for manufacturing systems, and its improvement remains an interesting topic in both industrial and academic fields. One way to achieve improvement is to reduce the downtime of unreliable machines. Along this direction, it is natural to pose questions about the optimal allocation of improvement effort to a set of machines and failure modes. This article develops mixed-integer linear programming models to improve system throughput by reducing downtime in the case of multi-stage serial lines. The models take samples of processing time, uptime and downtime as input, generated from random distributions or collected from a real system. To improve computational efficiency while guaranteeing the exact optimality of the solution, algorithms based on Benders Decomposition and discrete-event relationships of serial lines are proposed. Numerical cases show that the solution approach can significantly improve efficiency. The proposed model and algorithms are applied to throughput improvement of various systems, including a long line and a multi-failure system, and also to the downtime bottleneck detection problem. Comparison with state-of-the-art approaches shows the effectiveness of the approach. Supplementary materials are available for this article. Go to the publisher’s online edition of IISE Transactions. Journal: IISE Transactions Pages: 1189-1203 Issue: 11 Volume: 52 Year: 2020 Month: 11 X-DOI: 10.1080/24725854.2019.1700431 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1700431 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
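The sample-based evaluation behind the throughput-improvement models above can be pictured with the following sketch, which scores candidate downtime reductions on a serial line by replaying sampled processing and downtime data through the standard flow-line recursion. Infinite buffers, additive downtime, and the candidate action of halving one machine's downtime are simplifying assumptions; the paper's exact MILP formulations and Benders-based algorithms are not reproduced here.

```python
import numpy as np

def makespan(t):
    # Completion time of the last job on the last machine (flow-line recursion,
    # infinite buffers assumed).
    n, m = t.shape
    D = np.zeros((n, m))
    for j in range(n):
        for k in range(m):
            D[j, k] = max(D[j - 1, k] if j else 0.0,
                          D[j, k - 1] if k else 0.0) + t[j, k]
    return D[-1, -1]

rng = np.random.default_rng(0)
proc = rng.exponential(1.0, size=(500, 4))                    # processing-time samples
down = rng.exponential([0.5, 0.2, 0.8, 0.3], size=(500, 4))   # downtime samples per machine

throughput = lambda t: t.shape[0] / makespan(t)
base = throughput(proc + down)
# Estimated throughput gain from halving machine k's downtime, for each machine k.
gains = [throughput(proc + down * np.where(np.arange(4) == k, 0.5, 1.0)) - base
         for k in range(4)]
```

In this simplified setting, the machine with the largest entry of gains plays the role of the downtime bottleneck.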
Handle: RePEc:taf:uiiexx:v:52:y:2020:i:11:p:1189-1203 Template-Type: ReDIF-Article 1.0 Author-Name: Aniruddha Gaikwad Author-X-Name-First: Aniruddha Author-X-Name-Last: Gaikwad Author-Name: Reza Yavari Author-X-Name-First: Reza Author-X-Name-Last: Yavari Author-Name: Mohammad Montazeri Author-X-Name-First: Mohammad Author-X-Name-Last: Montazeri Author-Name: Kevin Cole Author-X-Name-First: Kevin Author-X-Name-Last: Cole Author-Name: Linkan Bian Author-X-Name-First: Linkan Author-X-Name-Last: Bian Author-Name: Prahalada Rao Author-X-Name-First: Prahalada Author-X-Name-Last: Rao Title: Toward the digital twin of additive manufacturing: Integrating thermal simulations, sensing, and analytics to detect process faults Abstract: The goal of this work is to achieve the defect-free production of parts made using Additive Manufacturing (AM) processes. As a step towards this goal, the objective is to detect flaws in AM parts during the process by combining predictions from a physical model (simulation) with in-situ sensor signatures in a machine learning framework. We hypothesize that flaws in AM parts are detected with significantly higher statistical fidelity (F-score) when both in-situ sensor data and theoretical predictions are pooled together in a machine learning model, compared to an approach that is based exclusively on machine learning of sensor data (black-box model) or physics-based predictions (white-box model). We test the hypothesized efficacy of such a gray-box model or digital twin approach in the context of the laser powder bed fusion (LPBF) and directed energy deposition (DED) AM processes. For example, in the DED process, we first predicted the instantaneous spatiotemporal distribution of temperature in a thin-wall titanium alloy part using a computational heat transfer model based on graph theory. Subsequently, we combined the preceding physics-derived thermal trends with in-situ temperature measurements obtained from a pyrometer in a readily implemented supervised machine learning framework (support vector machine). We demonstrate that the integration of temperature predictions from an ab initio heat transfer model and in-situ sensor data is capable of detecting flaws in the DED-produced thin-wall part with F-score approaching 90%. By contrast, the F-score decreases to nearly 80% when either temperature measurements from the in-situ sensor or temperature distribution predictions from the theoretical model are used alone. This work thus demonstrates an early foray into the digital twin paradigm for real-time process monitoring in AM via seamless integration of physics-based modeling (simulation), in-situ sensing, and data analytics (machine learning). Journal: IISE Transactions Pages: 1204-1217 Issue: 11 Volume: 52 Year: 2020 Month: 11 X-DOI: 10.1080/24725854.2019.1701753 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1701753 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:11:p:1204-1217 Template-Type: ReDIF-Article 1.0 Author-Name: Kai Yang Author-X-Name-First: Kai Author-X-Name-Last: Yang Author-Name: Peihua Qiu Author-X-Name-First: Peihua Author-X-Name-Last: Qiu Title: Online sequential monitoring of spatio-temporal disease incidence rates Abstract: Online sequential monitoring of the incidence rates of chronic or infectious diseases is critically important for public health.
Governments have invested a great amount of money in building global, national and regional disease reporting and surveillance systems. In these systems, conventional control charts, such as the cumulative sum (CUSUM) and the exponentially weighted moving average (EWMA) charts, are usually included for disease surveillance purposes. However, these charts impose many assumptions on the observed data, including that observations at different places and/or times be independent and that they follow a parametric distribution when no disease outbreaks are present. These assumptions are rarely valid in practice, making the results from the conventional control charts unreliable. Motivated by an application to monitor the Florida influenza-like illness data, we develop a new sequential monitoring approach in this article, which can accommodate the dynamic nature of the observed disease incidence rates (i.e., the distribution of the observed disease incidence rates can change over time due to seasonality and other reasons), spatio-temporal data correlation, and arbitrary data distribution. It is shown that the new method is more reliable in practice than the commonly used conventional charts for sequential monitoring of disease incidence rates. Because of its generality, the proposed method should be useful for many other applications as well, including spatio-temporal monitoring of the air quality in a region or the sea-level pressure data collected in a region of an ocean. Journal: IISE Transactions Pages: 1218-1233 Issue: 11 Volume: 52 Year: 2020 Month: 11 X-DOI: 10.1080/24725854.2019.1696496 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1696496 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:11:p:1218-1233 Template-Type: ReDIF-Article 1.0 Author-Name: Qingqing Zhai Author-X-Name-First: Qingqing Author-X-Name-Last: Zhai Author-Name: Zhi-Sheng Ye Author-X-Name-First: Zhi-Sheng Author-X-Name-Last: Ye Title: How reliable should military UAVs be? Abstract: The recent decade has witnessed an emerging usage of Unmanned Aerial Vehicles (UAVs) in both military and civilian applications. Compared with manned aircraft, a much larger percentage of UAVs are reported to crash every year, due to their unmanned nature and immature technologies. The high failure rate is mainly attributed to the lack of redundancy design and insufficient reliability growth tests. It is natural to ask whether the UAVs are reliable enough, and to what extent their reliability should be improved. Through cost modeling, this study shows that the designed reliability for military UAVs need not be extremely high. The main reason is that military UAVs are exposed to external threats such as enemy fire and cyber-attacks. Reliability enhancement actions are able to improve the operational reliability, but cannot ease the external threats to the UAVs. In our UAV cost models, both the reliability enhancement actions using reliability growth test and external failures due to intentional attacks are considered, based on which the optimal reliability growth duration that minimizes the total operation cost is derived. We investigate the impacts of the reliability growth pattern and the intensity of external threats on the effectiveness of the reliability growth test. In particular, the external threats can weaken the effectiveness of the reliability growth test in terms of the overall operation cost.
An illustrative example is used to demonstrate our model and support our results. Journal: IISE Transactions Pages: 1234-1245 Issue: 11 Volume: 52 Year: 2020 Month: 11 X-DOI: 10.1080/24725854.2019.1699977 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1699977 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:11:p:1234-1245 Template-Type: ReDIF-Article 1.0 Author-Name: Hyungjin Kim Author-X-Name-First: Hyungjin Author-X-Name-Last: Kim Author-Name: Chuljin Park Author-X-Name-First: Chuljin Author-X-Name-Last: Park Author-Name: Yoonshik Kang Author-X-Name-First: Yoonshik Author-X-Name-Last: Kang Title: Distribution-guided heuristic search for nonlinear parameter estimation with an application in semiconductor manufacturing Abstract: Estimating a batch of parameter vectors of a nonlinear model is considered, where there exists a model interpreting the independent and the dependent variables, and the parameter vectors of the model are assumed to be sampled from a multivariate normal distribution. The mean vector and the covariance matrix of the parameter distribution can be assumed and such a parameter distribution is referred to as the hypothetical underlying distribution. A new framework is proposed, namely, the distribution-guided heuristic search framework, which uses the information of the hypothetical underlying distribution with the following two main concepts: (i) changing the coordinate of the parameter vectors via linear transformation and (ii) probabilistically filtering a parameter vector sampled by a heuristic algorithm. The framework is not a stand-alone algorithm, but it works with any heuristic algorithms to solve the target problem. The framework was tested in two simulation studies and was applied to a real example of measuring the critical dimensions of a 2-dimensional high-aspect-ratio structure of a wafer in semiconductor manufacturing. The test results show that a heuristic algorithm within the proposed framework outperforms the original heuristic algorithm as well as other existing algorithms. Journal: IISE Transactions Pages: 1246-1261 Issue: 11 Volume: 52 Year: 2020 Month: 11 X-DOI: 10.1080/24725854.2019.1709135 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1709135 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:11:p:1246-1261 Template-Type: ReDIF-Article 1.0 Author-Name: Jingyuan Shen Author-X-Name-First: Jingyuan Author-X-Name-Last: Shen Author-Name: Jiawen Hu Author-X-Name-First: Jiawen Author-X-Name-Last: Hu Author-Name: Zhi-Sheng Ye Author-X-Name-First: Zhi-Sheng Author-X-Name-Last: Ye Title: Optimal switching policy for warm standby systems subjected to standby failure mode Abstract: Standby is used extensively in mission-critical and safety-critical systems to improve reliability and availability. The dormant period during standby may introduce additional failure modes to the standby components. This is commonly observed in many real systems, yet it has been overlooked in existing research. This study is motivated by a two-motor standby system used in a power plant, in which periodic switching between the two motors is used to mitigate standby failure. We propose a generic system reliability model that captures both the normal aging and standby failures. The long-run average cost of the system can be derived using the technique of semi-regenerative processes. 
Thereafter, the problem of the optimal switching policy is formulated with the objective of determining the optimal switching period that minimizes the long-run average cost. We further consider a special case where the component failures under normal-use conditions follow a Poisson process and the repair times are exponentially distributed. A numerical study is conducted to demonstrate the proposed methodologies. Journal: IISE Transactions Pages: 1262-1274 Issue: 11 Volume: 52 Year: 2020 Month: 11 X-DOI: 10.1080/24725854.2019.1709136 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1709136 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:11:p:1262-1274 Template-Type: ReDIF-Article 1.0 Author-Name: Hengjie Zhang Author-X-Name-First: Hengjie Author-X-Name-Last: Zhang Author-Name: Yucheng Dong Author-X-Name-First: Yucheng Author-X-Name-Last: Dong Author-Name: Jing Xiao Author-X-Name-First: Jing Author-X-Name-Last: Xiao Author-Name: Francisco Chiclana Author-X-Name-First: Francisco Author-X-Name-Last: Chiclana Author-Name: Enrique Herrera-Viedma Author-X-Name-First: Enrique Author-X-Name-Last: Herrera-Viedma Title: Personalized individual semantics-based approach for linguistic failure modes and effects analysis with incomplete preference information Abstract: Failure Modes and Effects Analysis (FMEA) is a very useful reliability-management instrument for detecting and mitigating risks in various fields. The linguistic assessment approach has recently been widely used in FMEA. Words mean different things to different people, so FMEA members may present Personalized Individual Semantics (PIS) in their linguistic assessment information. This article presents the design of a PIS-based FMEA approach, in which members express their opinions over failure modes and risk factors using Linguistic Distribution Assessment Matrices (LDAMs) and also provide their opinions over failure modes using incomplete Additive Preference Relations (APRs). A preference information preprocessing method with a two-stage optimization model is presented to generate complete APRs with acceptable consistency levels from incomplete APRs. Then, a deviation minimum-based optimization model is designed to personalize individual semantics by minimizing the deviation between APR and the numerical assessment matrix derived from the corresponding LDAM. This is followed by the development of a ranking process to generate the risk ordering of failure modes. A case study and a detailed comparison analysis are presented to show the effectiveness of the PIS-based linguistic FMEA approach. Journal: IISE Transactions Pages: 1275-1296 Issue: 11 Volume: 52 Year: 2020 Month: 11 X-DOI: 10.1080/24725854.2020.1731774 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1731774 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:11:p:1275-1296 Template-Type: ReDIF-Article 1.0 Author-Name: Reut Noham Author-X-Name-First: Reut Author-X-Name-Last: Noham Author-Name: Michal Tzur Author-X-Name-First: Michal Author-X-Name-Last: Tzur Title: Design and incentive decisions to increase cooperation in humanitarian relief networks Abstract: During humanitarian relief operations, designated facilities are established to assist the affected population and distribute relief goods. 
In settings where the authorities manage the operations, they instruct the population regarding which facility they should visit. However, in times of crises and uncertainty, these instructions are often not followed. In this work, we investigate how the authorities should invest in incentivizing the population to follow their instructions. These decisions need to be combined with those concerning the relief network design. The population’s behavior and level of cooperation are key factors in deciding on the incentive investments. We present a new mathematical model that incorporates decisions regarding which populations to incentivize to follow the local authorities’ instructions. Then, we develop properties that can help the authorities decide on the level of investment in incentives. A numerical study demonstrates that incentives can improve the system’s performance and enable an equitable supply allocation. Furthermore, an investment in a small number of communities is typically sufficient to significantly improve the system’s performance. We also demonstrate that incentives affect relief-network design decisions. Journal: IISE Transactions Pages: 1297-1311 Issue: 12 Volume: 52 Year: 2020 Month: 12 X-DOI: 10.1080/24725854.2020.1727070 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1727070 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:12:p:1297-1311 Template-Type: ReDIF-Article 1.0 Author-Name: Daniel Solow Author-X-Name-First: Daniel Author-X-Name-Last: Solow Author-Name: Jie Ning Author-X-Name-First: Jie Author-X-Name-Last: Ning Author-Name: Jieying Zhu Author-X-Name-First: Jieying Author-X-Name-Last: Zhu Author-Name: Yishen Cai Author-X-Name-First: Yishen Author-X-Name-Last: Cai Title: Improved heuristics for finding balanced teams Abstract: This research addresses the problem of dividing a group of people into a collection of teams that need to be “balanced” across a variety of different attributes. This type of problem arises, for example, in an academic setting where it is necessary to partition students into a number of balanced study teams and also in a youth camp in which children need to be formed into sports teams that are competitive with each other. Recent work has resulted in both linear and nonlinear integer programming models for solving this problem. In the research here, improvements to the models are made together with a linear approximation to the nonlinear objective function that significantly reduce the number of integer variables and constraints. Computational experiments are performed on random instances of the problem, as well as on instances for which there are almost perfectly balanced teams, the latter providing a way to determine the quality of the optimal solution obtained by the heuristics. These tests show that the approach developed here almost always obtains better-balanced teams than those from prior research. Journal: IISE Transactions Pages: 1312-1323 Issue: 12 Volume: 52 Year: 2020 Month: 12 X-DOI: 10.1080/24725854.2020.1732506 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1732506 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:12:p:1312-1323 Template-Type: ReDIF-Article 1.0 Author-Name: Girish Jampani Hanumantha Author-X-Name-First: Girish Jampani Author-X-Name-Last: Hanumantha Author-Name: Berkin T. Arici Author-X-Name-First: Berkin T.
Author-X-Name-Last: Arici Author-Name: Jorge A. Sefair Author-X-Name-First: Jorge A. Author-X-Name-Last: Sefair Author-Name: Ronald Askin Author-X-Name-First: Ronald Author-X-Name-Last: Askin Title: Demand prediction and dynamic workforce allocation to improve airport screening operations Abstract: Workforce allocation and configuration decisions at airport security checkpoints (e.g., number of lanes open) are usually based on passenger volume forecasts. The accuracy of such forecasts is critical for the smooth functioning of security checkpoints, where unexpected surges in passenger volumes must be handled proactively. In this article, we present a forecasting model that combines flight schedules and other business fundamentals with historically observed throughput patterns to predict passenger volumes at an airport with multiple terminals and multiple security screening checkpoints. We then present an optimization model and a solution strategy for dynamically selecting a configuration of open screening lanes to minimize passenger queues and wait times while simultaneously determining workforce allocations. We present a real-world case study at a US airport to demonstrate the efficacy of the proposed models. Journal: IISE Transactions Pages: 1324-1342 Issue: 12 Volume: 52 Year: 2020 Month: 12 X-DOI: 10.1080/24725854.2020.1749765 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1749765 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:12:p:1324-1342 Template-Type: ReDIF-Article 1.0 Author-Name: Xin Pan Author-X-Name-First: Xin Author-X-Name-Last: Pan Author-Name: Jie Song Author-X-Name-First: Jie Author-X-Name-Last: Song Author-Name: Jingtong Zhao Author-X-Name-First: Jingtong Author-X-Name-Last: Zhao Author-Name: Van-Anh Truong Author-X-Name-First: Van-Anh Author-X-Name-Last: Truong Title: Online contextual learning with perishable resources allocation Abstract: We formulate a novel class of online matching problems with learning. In these problems, randomly arriving customers must be matched to perishable resources so as to maximize a total expected reward. The matching accounts for variations in rewards among different customer–resource pairings. It also accounts for the perishability of the resources. Our work is motivated by a healthcare application, but it can be easily extended to other service applications. Our work belongs to the online resource allocation stream in service systems. We propose the first online algorithm for contextual learning and resource allocation with perishable resources. Our algorithm explores and exploits in distinct interweaving phases. We prove that our algorithm achieves an expected regret per period that increases sub-linearly with the number of planning cycles. Journal: IISE Transactions Pages: 1343-1357 Issue: 12 Volume: 52 Year: 2020 Month: 12 X-DOI: 10.1080/24725854.2020.1752958 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1752958 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:12:p:1343-1357 Template-Type: ReDIF-Article 1.0 Author-Name: Shenming Song Author-X-Name-First: Shenming Author-X-Name-Last: Song Author-Name: Chen Wang Author-X-Name-First: Chen Author-X-Name-Last: Wang Title: Incentivizing catastrophe risk sharing Abstract: Government plays a vital role in improving community resilience against natural disasters.
Due to the limited relief capacity of a government, it is desirable to develop a risk-sharing mechanism involving both private sector providers (e.g., insurers, for-profit disaster agencies, and firms that provide resources for risk mitigation and recovery) and the public. In this article, we take catastrophe insurance as an example to examine ways of providing incentives for multilateral risk sharing, especially when it involves socially connected communities. We consider a sequential game with three sets of players: the government, a private insurer, and a community of households. The government determines an optimal subsidy portfolio (including ex ante insurance premium subsidy and ex post relief subsidy) for a community with particular levels of social network influence and risk perception. We characterize the equilibrium purchase rate within the community by positive and negative herding behaviors and identify the government’s optimal subsidy strategy dependent on the available budget and the emphasis on ex post social responsibility. We also extend the game to account for multi-community coverage and multi-year insurance contracts to demonstrate the benefits of spatial and inter-temporal risk pooling. Journal: IISE Transactions Pages: 1358-1385 Issue: 12 Volume: 52 Year: 2020 Month: 12 X-DOI: 10.1080/24725854.2020.1757792 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1757792 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:12:p:1358-1385 Template-Type: ReDIF-Article 1.0 Author-Name: Yossi Bukchin Author-X-Name-First: Yossi Author-X-Name-Last: Bukchin Author-Name: Eran Hanany Author-X-Name-First: Eran Author-X-Name-Last: Hanany Title: Decentralization cost in two-machine job-shop scheduling with minimum flow-time objective Abstract: A decentralized two-machine job-shop system is considered, where each machine minimizes its own flow-time objective. Analyzing the system as a non-cooperative game, we investigate the Decentralization Cost (DC), the ratio in terms of the system flow-time between the best Nash equilibrium and the centralized solution. Settings generating significant inefficiency are identified and discussed. We provide bounds on the maximal DC, and prove they are tight for two-job problems. For larger problems, we use a cross entropy meta-heuristic that searches for DC-maximizing job durations. This supports the tightness of the proposed bounds for a flow-shop. Additionally, for a flow-shop, a simple, scheduling-based mechanism is proposed, which always generates efficiency. Journal: IISE Transactions Pages: 1386-1402 Issue: 12 Volume: 52 Year: 2020 Month: 12 X-DOI: 10.1080/24725854.2020.1730528 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1730528 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:12:p:1386-1402 Template-Type: ReDIF-Article 1.0 Author-Name: Tammy Drezner Author-X-Name-First: Tammy Author-X-Name-Last: Drezner Author-Name: Zvi Drezner Author-X-Name-First: Zvi Author-X-Name-Last: Drezner Author-Name: Pawel Kalczynski Author-X-Name-First: Pawel Author-X-Name-Last: Kalczynski Title: Multiple obnoxious facilities location: A cooperative model Abstract: A given number of communities exist in an area. Several obnoxious facilities, such as polluting factories and garbage dumps, need to be located in the area. The nuisance emitted by the facilities is cumulative.
The objective is to minimize the nuisance inflicted on the most affected community. This problem is useful for planners who frequently face the challenge of locating obnoxious facilities and have no easy way to determine a good set of locations for these facilities. No existing model considers the cumulative effect of nuisance generated by the facilities. A multi-start approach using the SNOPT and IPOPT solvers in Matlab, which are considered to be the best available general-purpose nonlinear solvers, gave poor results. However, an innovative, specially designed Voronoi-based heuristic produced much better results in a small fraction of the run time. In many cases, nuisance is cut by more than half, and run time is more than a hundred times faster. As detailed in the conclusions section, the applications of our methodology extend beyond the model presented in this article. Journal: IISE Transactions Pages: 1403-1412 Issue: 12 Volume: 52 Year: 2020 Month: 12 X-DOI: 10.1080/24725854.2020.1753898 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1753898 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:52:y:2020:i:12:p:1403-1412 Template-Type: ReDIF-Article 1.0 Author-Name: Jingming Liu Author-X-Name-First: Jingming Author-X-Name-Last: Liu Author-Name: Haitao Liao Author-X-Name-First: Haitao Author-X-Name-Last: Liao Author-Name: John A. White Author-X-Name-First: John A. Author-X-Name-Last: White Title: Queueing analysis of the replenishment of multiple in-the-aisle pick positions Abstract: A case-picking operation with Multiple In-The-Aisle Pick Positions (MIAPP) is modeled as an M/G/1/N queueing system. Cases are picked manually from pallets located on the bottom level of storage racks. An aisle-captive Narrow-Aisle Lift Truck (NALT) travels rectilinearly to replenish the floor level of the rack by retrieving a pallet load from an upper level of the rack. From a queueing perspective, the NALT is the server and the order-picking positions in need of replenishment are customers. In this article, the replenishment requests from order-picking positions are assumed to occur at a Poisson rate (i.e., homogeneous customers). The corresponding probability density functions of service times are derived, and their Laplace–Stieltjes transforms are obtained, leading to steady-state performance measures of the system. In many situations, the replenishment requests from individual pick positions may not follow a homogeneous Poisson process, and the order-picking operation consists of heterogeneous customers. However, a simulation study indicates that an M/G/1/N queueing model yields accurate performance measures in such situations. Interestingly, when the number of pick positions is large enough to justify an MIAPP-NALT operation, the time between consecutive replenishment requests within a storage/retrieval aisle is approximately exponentially distributed. A numerical example is provided to illustrate the use of the developed model and to show the practical values of the analytical results in the performance analysis of such storage/retrieval systems. Journal: IISE Transactions Pages: 1-20 Issue: 1 Volume: 53 Year: 2021 Month: 1 X-DOI: 10.1080/24725854.2020.1731773 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1731773 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
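A minimal sketch in the spirit of the Voronoi-based heuristic for the multiple obnoxious facilities model above: candidate sites are taken at the Voronoi vertices of the communities, which are locally farthest from their surrounding communities, and facilities are then picked greedily to minimize the largest cumulative nuisance. The inverse-square nuisance decay, the unit-square region, and the greedy selection rule are assumptions for illustration, not the authors' heuristic itself.

```python
import numpy as np
from scipy.spatial import Voronoi
from scipy.spatial.distance import cdist

def place_obnoxious(communities, k):
    # Candidate sites: Voronoi vertices of the communities, clipped to the unit square.
    cand = Voronoi(communities).vertices
    cand = cand[np.all((cand >= 0.0) & (cand <= 1.0), axis=1)]
    inv_d2 = 1.0 / cdist(cand, communities) ** 2   # inverse-square nuisance (assumed)
    nuisance = np.zeros(communities.shape[0])      # cumulative nuisance per community
    chosen = []
    for _ in range(k):
        worst = (nuisance + inv_d2).max(axis=1)    # worst-hit community per candidate
        best = int(np.argmin(worst))               # candidate minimizing that maximum
        chosen.append(cand[best])
        nuisance += inv_d2[best]
        cand = np.delete(cand, best, axis=0)       # do not reuse a chosen site
        inv_d2 = np.delete(inv_d2, best, axis=0)
    return np.array(chosen)

communities = np.random.default_rng(3).random((20, 2))  # 20 communities in the unit square
sites = place_obnoxious(communities, k=3)
```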
Handle: RePEc:taf:uiiexx:v:53:y:2021:i:1:p:1-20 Template-Type: ReDIF-Article 1.0 Author-Name: Dmitry Ivanov Author-X-Name-First: Dmitry Author-X-Name-Last: Ivanov Author-Name: Boris Sokolov Author-X-Name-First: Boris Author-X-Name-Last: Sokolov Author-Name: Weiwei Chen Author-X-Name-First: Weiwei Author-X-Name-Last: Chen Author-Name: Alexandre Dolgui Author-X-Name-First: Alexandre Author-X-Name-Last: Dolgui Author-Name: Frank Werner Author-X-Name-First: Frank Author-X-Name-Last: Werner Author-Name: Semyon Potryasaev Author-X-Name-First: Semyon Author-X-Name-Last: Potryasaev Title: A control approach to scheduling flexibly configurable jobs with dynamic structural-logical constraints Abstract: We study the problem of scheduling in manufacturing environments which are dynamically reconfigurable for supporting highly flexible individual operation compositions of the jobs. We show that such production environments give rise to the simultaneous process design and operation sequencing with dynamically changing hybrid structural-logical constraints. We conceptualize a model to schedule jobs in such environments when the structural-logical constraints are changing dynamically and offer a design framework of algorithmic development to obtain a tractable solution analytically within the proven axiomatics of optimal control and mathematical optimization. We further develop an algorithm to simultaneously determine the process design and operation sequencing. The algorithm is decomposition-based and leads to an approximate solution of the underlying optimization problem that is modeled by optimal control. We theoretically analyze the algorithmic complexity and apply this approach to an illustrative example. The findings suggest that our approach can be of value for modeling problems with a simultaneous process design and operation sequencing when the structural and logical constraints are dynamic and interconnected. Utilizing the outcomes of this research could also support the analysis of processing dynamics during operations execution. Journal: IISE Transactions Pages: 21-38 Issue: 1 Volume: 53 Year: 2021 Month: 1 X-DOI: 10.1080/24725854.2020.1739787 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1739787 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:1:p:21-38 Template-Type: ReDIF-Article 1.0 Author-Name: Ziwei Lin Author-X-Name-First: Ziwei Author-X-Name-Last: Lin Author-Name: Andrea Matta Author-X-Name-First: Andrea Author-X-Name-Last: Matta Author-Name: Shichang Du Author-X-Name-First: Shichang Author-X-Name-Last: Du Title: A budget allocation strategy minimizing the sample set quantile for initial experimental design Abstract: The increased complexity of manufacturing systems makes the acquisition of the system performance estimate a black-box procedure (e.g., simulation tools). The efficiency of most black-box optimization algorithms is affected significantly by initial designs (populations). In most population initializers, points are spread out to explore the entire domain, e.g., space-filling designs. Some population initializers also consider exploitation procedures to speed up the optimization process. However, they are either application-dependent or require an additional budget. This article proposes a generic method to generate, without an additional budget, several good solutions in the initial design.
The aim of the method is to optimize the quantile of the objective function values in the generated sample set. The proposed method is based on a clustering of the solution space; feasible solutions are clustered into groups and the budget is allocated to each group dynamically based on the observed information. The asymptotic performance of the proposed method is analyzed theoretically. The numerical results show that, if proper clustering rules are applied, an unbalanced design is generated in which promising solutions have higher sampling probabilities than non-promising solutions. The numerical results also show that the method is robust to wrong clustering rules. Journal: IISE Transactions Pages: 39-57 Issue: 1 Volume: 53 Year: 2021 Month: 1 X-DOI: 10.1080/24725854.2020.1748771 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1748771 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:1:p:39-57 Template-Type: ReDIF-Article 1.0 Author-Name: Kai Wang Author-X-Name-First: Kai Author-X-Name-Last: Wang Author-Name: Fugee Tsung Author-X-Name-First: Fugee Author-X-Name-Last: Tsung Title: Hierarchical sparse functional principal component analysis for multistage multivariate profile data Abstract: Modern manufacturing systems typically involve multiple production stages, the real-time status of which can be tracked continuously using sensor networks that generate a large number of profiles associated with all process variables at all stages. The analysis of the collective behavior of the multistage multivariate profile data is essential for understanding the variance patterns of the entire manufacturing process. For this purpose, two major challenges regarding the high data dimensionality and low model interpretability have to be well addressed. This article proposes integrating Multivariate Functional Principal Component Analysis (MFPCA) with a three-level structured sparsity idea to develop a novel Hierarchical Sparse MFPCA (HSMFPCA), in which the stage-wise, profile-wise and element-wise sparsity are jointly investigated to clearly identify the informative stages and variables in each eigenvector. In this way, the derived principal components would be more interpretable. The proposed HSMFPCA employs the regression-type reformulation of the PCA and the reparameterization of the entries of eigenvectors, and enjoys an efficient optimization algorithm in high-dimensional settings. The extensive simulations and a real example study verify the superiority of the proposed HSMFPCA with respect to the estimation accuracy and interpretation clarity of the derived eigenvectors. Journal: IISE Transactions Pages: 58-73 Issue: 1 Volume: 53 Year: 2021 Month: 1 X-DOI: 10.1080/24725854.2020.1738599 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1738599 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:1:p:58-73 Template-Type: ReDIF-Article 1.0 Author-Name: Alp Akcay Author-X-Name-First: Alp Author-X-Name-Last: Akcay Author-Name: Engin Topan Author-X-Name-First: Engin Author-X-Name-Last: Topan Author-Name: Geert-Jan van Houtum Author-X-Name-First: Geert-Jan Author-X-Name-Last: van Houtum Title: Machine tools with hidden defects: Optimal usage for maximum lifetime value Abstract: We consider randomly failing high-precision machine tools in a discrete manufacturing setting. 
Before a tool fails, it goes through a defective phase where it can continue processing new products. However, the products processed by a defective tool do not necessarily generate the same reward obtained from the ones processed by a normal tool. The defective phase of the tool is not visible and can only be detected by a costly inspection. The tool can be retired from production to avoid a tool failure and save its salvage value; however, doing so too early means that the production potential of the tool is not fully used. We build a Markov decision model and study when it is the right moment to inspect or retire a tool with the objective of maximizing the total expected reward obtained from an individual tool. The structure of the optimal policy is characterized. The implementation of our model using real-world maintenance logs from the Philips shaver factory shows that the value of the optimal policy can be substantial compared to the policy currently used in practice. Journal: IISE Transactions Pages: 74-87 Issue: 1 Volume: 53 Year: 2021 Month: 1 X-DOI: 10.1080/24725854.2020.1739786 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1739786 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:1:p:74-87 Template-Type: ReDIF-Article 1.0 Author-Name: Zhicheng Zhu Author-X-Name-First: Zhicheng Author-X-Name-Last: Zhu Author-Name: Yisha Xiang Author-X-Name-First: Yisha Author-X-Name-Last: Xiang Title: Condition-based maintenance for multi-component systems: Modeling, structural properties, and algorithms Abstract: Condition-Based Maintenance (CBM) is an effective maintenance strategy to improve system performance while lowering operating and maintenance costs. Real-world systems typically consist of a large number of components with various interactions among components. However, existing studies on CBM mainly focus on single-component systems. Multi-component CBM, which joins the components’ stochastic degradation processes and the combinatorial maintenance grouping problem, remains an open issue in the literature. In this article, we study the CBM optimization problem for multi-component systems. We first develop a multi-stage stochastic integer model with the objective of minimizing the total maintenance cost over a finite planning horizon. We then investigate the structural properties of a two-stage model. Based on the structural properties, two efficient algorithms are designed to solve the two-stage model. Algorithm 1 solves the problem to its optimality and Algorithm 2 heuristically searches for high-quality solutions based on Algorithm 1. Our computational studies show that Algorithm 1 obtains optimal solutions in a reasonable amount of time and Algorithm 2 can find high-quality solutions quickly. The multi-stage problem is solved using a rolling horizon approach based on the algorithms for the two-stage problem. Supplementary materials are available for this article. Go to the publisher’s online edition of IISE Transactions for datasets, additional tables, detailed proofs, etc. Journal: IISE Transactions Pages: 88-100 Issue: 1 Volume: 53 Year: 2021 Month: 1 X-DOI: 10.1080/24725854.2020.1741740 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1741740 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
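The inspect-or-retire trade-off in the hidden-defect machine tool abstract above can be mimicked with a small belief-state value iteration; the simplified defect/failure dynamics and every parameter value below are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical parameters for illustration; not from the paper.
q, theta = 0.08, 0.25       # P(defect onset per period); P(failure per period once defective)
r_n, r_d = 1.0, 0.4         # per-period reward from a normal vs. defective tool
c_ins, salvage = 0.6, 5.0   # inspection cost; salvage value if retired before failure
T = 50                      # planning horizon (periods)

grid = np.linspace(0.0, 1.0, 201)   # belief that the tool is currently defective
V = np.full_like(grid, salvage)     # terminal rule: retire whatever has survived

for _ in range(T):
    p = grid
    surv = 1.0 - p * theta                          # P(no failure this period)
    p_next = (p * (1.0 - theta) + (1.0 - p) * q) / np.maximum(surv, 1e-12)
    cont = (1.0 - p) * r_n + p * r_d + surv * np.interp(p_next, grid, V)
    # Inspect: pay c_ins; if defective, retire for salvage; if normal, run one
    # period and carry the post-onset belief q forward.
    insp = -c_ins + p * salvage + (1.0 - p) * (r_n + np.interp(q, grid, V))
    retire = np.full_like(grid, salvage)
    V = np.maximum.reduce([cont, insp, retire])

policy = np.argmax(np.vstack([cont, insp, retire]), axis=0)  # 0=continue, 1=inspect, 2=retire
```

In this toy model the first-period action can be read off policy as a function of the defect belief, which is the kind of structural insight the abstract refers to.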
Handle: RePEc:taf:uiiexx:v:53:y:2021:i:1:p:88-100 Template-Type: ReDIF-Article 1.0 Author-Name: Samira Karimi Author-X-Name-First: Samira Author-X-Name-Last: Karimi Author-Name: Haitao Liao Author-X-Name-First: Haitao Author-X-Name-Last: Liao Author-Name: Neng Fan Author-X-Name-First: Neng Author-X-Name-Last: Fan Title: Flexible methods for reliability estimation using aggregate failure-time data Abstract: The actual failure times of individual components are usually unavailable in many applications. Instead, only aggregate failure-time data are collected by actual users, due to technical and/or economic reasons. When dealing with such data for reliability estimation, practitioners often face the challenges of selecting the underlying failure-time distributions and the corresponding statistical inference methods. So far, only the exponential, normal, gamma and inverse Gaussian distributions have been used in analyzing aggregate failure-time data, because these distributions have closed-form expressions for such data. However, the limited choices of probability distributions cannot satisfy extensive needs in a variety of engineering applications. PHase-type (PH) distributions are robust and flexible in modeling failure-time data, as they can mimic a large collection of probability distributions of non-negative random variables arbitrarily closely by adjusting the model structures. In this article, PH distributions are utilized, for the first time, in reliability estimation based on aggregate failure-time data. A Maximum Likelihood Estimation (MLE) method and a Bayesian alternative are developed. For the MLE method, an Expectation-Maximization algorithm is developed for parameter estimation, and the corresponding Fisher information is used to construct the confidence intervals for the quantities of interest. For the Bayesian method, a procedure for performing point and interval estimation is also introduced. Numerical examples show that the proposed PH-based reliability estimation methods are quite flexible and alleviate the burden of selecting a probability distribution when the underlying failure-time distribution is general or even unknown. Journal: IISE Transactions Pages: 101-115 Issue: 1 Volume: 53 Year: 2021 Month: 1 X-DOI: 10.1080/24725854.2020.1746869 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1746869 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:1:p:101-115 Template-Type: ReDIF-Article 1.0 Author-Name: Junjie Wang Author-X-Name-First: Junjie Author-X-Name-Last: Wang Author-Name: Min Xie Author-X-Name-First: Min Author-X-Name-Last: Xie Title: Modeling and monitoring unweighted networks with directed interactions Abstract: Networks have been widely employed to represent interactive relationships among individual units in complex systems such as the Internet of Things. Assignable causes in systems can lead to abrupt increases or decreases in the frequency of communications within the corresponding network, which allows us to detect such assignable causes by monitoring the communication level of the network. However, existing statistical process control methods for unweighted networks have scarcely incorporated either the network sparsity or the direction of interactions between two network nodes, i.e., dyadic interaction. To address this, we establish a matrix-form model to characterize directional dyadic interactions in time-independent unweighted networks.
With inactive dyadic interactions excluded, the proposed procedure of parameter estimation achieves higher consistency with less computational cost than its alternative when networks are large-scale and sparse. Using the generalized likelihood ratio test, we derive two schemes for monitoring directed unweighted networks. The first can be used in general cases, whereas the second incorporates a priori shift information to improve change detection efficiency in some cases and estimate the location of a single shifted parameter. A simulation study and a real application are provided to demonstrate the advantages and effectiveness of the proposed schemes. Journal: IISE Transactions Pages: 116-130 Issue: 1 Volume: 53 Year: 2021 Month: 1 X-DOI: 10.1080/24725854.2020.1762141 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1762141 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:1:p:116-130 Template-Type: ReDIF-Article 1.0 Author-Name: Erfan Mehmanchi Author-X-Name-First: Erfan Author-X-Name-Last: Mehmanchi Author-Name: Hoda Bidkhori Author-X-Name-First: Hoda Author-X-Name-Last: Bidkhori Author-Name: Oleg A. Prokopyev Author-X-Name-First: Oleg A. Author-X-Name-Last: Prokopyev Title: Analysis of process flexibility designs under disruptions Abstract: Most previous studies concerning process flexibility designs have focused on expected sales and demand uncertainty. In this paper, we examine the worst-case performance of flexibility designs in the case of demand and supply uncertainties, where the latter can be in the form of either plant or arc disruptions. We define the Plant Cover Index under Disruptions (PCID) as the minimum required plants’ capacity to supply a fixed number of products after the disruptions. By exploiting PCID, we establish that under symmetric uncertainty sets the worst-case performance can be expressed in terms of PCID, supply and demand uncertainties. Additionally, PCID enables us to make meaningful comparisons of different designs. In particular, we demonstrate that under disruptions the 2-long chain design is superior to a broad class of designs. Moreover, we identify a condition wherein both Q-short and Q-long chain designs have the same worst-case performance. We also discuss the notion of fragility that quantifies the impact of disruptions in the worst case and compare fragilities of Q-short and Q-long chain designs under different types of disruptions. Finally, by employing PCID, we develop an algorithm to generate designs that perform well under supply and demand uncertainties in both the worst case and in expectation. Journal: IISE Transactions Pages: 131-148 Issue: 2 Volume: 53 Year: 2020 Month: 7 X-DOI: 10.1080/24725854.2020.1759162 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1759162 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:53:y:2020:i:2:p:131-148 Template-Type: ReDIF-Article 1.0 Author-Name: Tugce Martagan Author-X-Name-First: Tugce Author-X-Name-Last: Martagan Author-Name: Alp Akcay Author-X-Name-First: Alp Author-X-Name-Last: Akcay Author-Name: Maarten Koek Author-X-Name-First: Maarten Author-X-Name-Last: Koek Author-Name: Ivo Adan Author-X-Name-First: Ivo Author-X-Name-Last: Adan Title: Optimal production decisions in biopharmaceutical fill-and-finish operations Abstract: Fill-and-finish is among the most commonly outsourced operations in biopharmaceutical manufacturing and involves several challenges. For example, fill-operations have a random production yield, as biopharmaceutical drugs might lose their quality or stability during these operations. In addition, biopharmaceuticals are fragile molecules that need specialized equipment with limited capacity, and the associated production quantities are often strictly regulated. The non-stationary nature of the biopharmaceutical demand and limitations in forecasts add another layer of challenge in production planning. Furthermore, most companies tend to “freeze” their production decisions for a limited period of time, in which they do not react to changes in the manufacturing system. Using such freeze periods helps to improve stability in planning, but comes at a price of reduced flexibility. To address these challenges, we develop a finite-horizon, discounted-cost Markov decision model, and optimize the production decisions in biopharmaceutical fill-and-finish operations. We characterize the structural properties of optimal cost and policies, and propose a new, zone-based decision-making approach for these operations. More specifically, we show that the state space can be partitioned into decision zones that provide guidelines for optimal production policies. We illustrate the use of the model with an industry case study. Journal: IISE Transactions Pages: 149-163 Issue: 2 Volume: 53 Year: 2020 Month: 7 X-DOI: 10.1080/24725854.2020.1770902 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1770902 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2020:i:2:p:149-163 Template-Type: ReDIF-Article 1.0 Author-Name: Navid Matin-Moghaddam Author-X-Name-First: Navid Author-X-Name-Last: Matin-Moghaddam Author-Name: Jorge A. Sefair Author-X-Name-First: Jorge A. Author-X-Name-Last: Sefair Title: Route assignment and scheduling with trajectory coordination Abstract: We study the problem of finding optimal routes and schedules for multiple vehicles traveling in a network. Vehicles may have different origins and destinations, and must coordinate their trajectories to keep a minimum distance from each other at any time. We determine a route and a schedule for each vehicle, which possibly requires vehicles to wait at some nodes. Vehicles are heterogeneous in terms of their speed on each arc, which we assume is known and constant once in motion. Applications of this problem include air and maritime routing, where vehicles maintain a steady cruising speed as well as a safety distance to avoid collision. Additional related problems arise in the transportation of hazardous materials and in military operations, where vehicles cannot be too close to each other given the risk posed to the population or the mission in case of a malicious attack. We discuss the hardness of this problem and present an exact formulation for its solution. 
We devise an exact solution algorithm based on a network decomposition that exploits the sparsity of the optimal solution. We illustrate the performance of our methods on real and randomly generated networks. Journal: IISE Transactions Pages: 164-181 Issue: 2 Volume: 53 Year: 2020 Month: 7 X-DOI: 10.1080/24725854.2020.1774096 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1774096 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2020:i:2:p:164-181 Template-Type: ReDIF-Article 1.0 Author-Name: Forough Enayaty-Ahangar Author-X-Name-First: Forough Author-X-Name-Last: Enayaty-Ahangar Author-Name: Laura A. Albert Author-X-Name-First: Laura A. Author-X-Name-Last: Albert Author-Name: Eric DuBois Author-X-Name-First: Eric Author-X-Name-Last: DuBois Title: A survey of optimization models and methods for cyberinfrastructure security Abstract: Critical infrastructure from a cross-section of sectors has become increasingly reliant on cyber systems and cyberinfrastructure. Increasing risks to these cyber components, including cyber-physical systems, have highlighted the importance of cybersecurity in protecting critical infrastructure. The need to cost-effectively improve cyberinfrastructure security has made this topic suitable for optimization research. In this survey, we review studies in the literature that apply optimization to enhance or improve cyberinfrastructure security and were published or accepted before the end of the year 2019. We select 68 relevant peer-reviewed scholarly works among 297 studies found on Scopus and provide an overview of their application areas, mission areas, and optimization models and methods. Finally, we consider gaps in the literature and possible directions for future research. Journal: IISE Transactions Pages: 182-198 Issue: 2 Volume: 53 Year: 2020 Month: 7 X-DOI: 10.1080/24725854.2020.1781306 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1781306 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2020:i:2:p:182-198 Template-Type: ReDIF-Article 1.0 Author-Name: Tonghoon Suk Author-X-Name-First: Tonghoon Author-X-Name-Last: Suk Author-Name: Xinchang Wang Author-X-Name-First: Xinchang Author-X-Name-Last: Wang Title: Optimal pricing policies for tandem queues: Asymptotic optimality Abstract: We study the optimal pricing problem for a tandem queueing system with an arbitrary number of stations, finite buffers, and blocking. The problem is formulated using a Markov decision process model with the objective to maximize the long-run expected time-average revenue or gain of the service provider. Our interest lies in comparing the performances of static and dynamic pricing policies in maximizing the gain. We show that the optimal static pricing policies perform as well as the optimal dynamic pricing policies when the buffer size at station 1 becomes large and the arrival rate is either small or large. More importantly, we propose two specific static pricing policies for systems with small and large arrival rates, respectively, and show that each proposed policy produces a gain converging to the optimal gain with an approximately exponential rate as the buffer size before station 1 becomes large. We learn from numerical results that the proposed static policies perform as well as optimal dynamic policies even for a moderate-sized buffer at station 1. 
We also learn, however, that there exist cases where even the best static pricing policies are neither optimal nor near-optimal. Journal: IISE Transactions Pages: 199-220 Issue: 2 Volume: 53 Year: 2020 Month: 7 X-DOI: 10.1080/24725854.2020.1783471 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1783471 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2020:i:2:p:199-220 Template-Type: ReDIF-Article 1.0 Author-Name: Saeed Poormoaied Author-X-Name-First: Saeed Author-X-Name-Last: Poormoaied Author-Name: Ülkü Gürler Author-X-Name-First: Ülkü Author-X-Name-Last: Gürler Author-Name: Emre Berk Author-X-Name-First: Emre Author-X-Name-Last: Berk Title: An exact analysis on age-based control policies for perishable inventories Abstract: We investigate the impact of the effective lifetime of items in an age-based control policy for perishable inventories, a so-called (Q, r, T) policy, with positive lead time and fixed lifetime. The exact analysis of this control policy in the presence of a service level constraint is available in the literature under the restriction that the aging process of a batch begins when it is unpacked for consumption, and that at most one order can be outstanding at any time. In this work, we generalize those results to allow for more than one outstanding order and assume that the aging process of a batch starts at the time it is ordered. Under this aging process, we derive the effective lifetime distribution of batches at the beginning of embedded cycles in an embedded Markov process. We provide the operating characteristic expressions and construct the cost rate function by the renewal reward theorem approach. We develop an exact algorithm by investigating the cost rate and service level constraint structures. The proposed policy considerably dominates its special two-parameter policies, which are time-dependent (Q, T) and stock-dependent (Q, r) policies. Numerical studies demonstrate that the aging process of items significantly influences the inventory policy performance. Moreover, allowing more than one outstanding order in the system reaps considerable cost savings, especially when the lifetime of items is short and the service level is high. Journal: IISE Transactions Pages: 221-245 Issue: 2 Volume: 53 Year: 2020 Month: 8 X-DOI: 10.1080/24725854.2020.1785649 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1785649 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2020:i:2:p:221-245 Template-Type: ReDIF-Article 1.0 Author-Name: Murat Karatas Author-X-Name-First: Murat Author-X-Name-Last: Karatas Author-Name: Erhan Kutanoglu Author-X-Name-First: Erhan Author-X-Name-Last: Kutanoglu Title: Joint optimization of location, inventory, and condition-based replacement decisions in service parts logistics Abstract: We model, analyze and study the effects of considering condition-based replacement of parts within an integrated Service Parts Logistics (SPL) system, where geographically dispersed customers’ products are serviced with new parts from network facilities. Conventional SPL models consider replacing the parts upon failure. This is true even for the latest models in which facility locations and their part stock levels are jointly optimized.
Taking advantage of the increasingly affordable, continuous, and accurate collection of part condition data (via sensors and Internet-of-Things devices), we develop a new integrated model in which optimal conditions to replace the parts are decided along with facility locations and stock levels. We capture the part degradation, replacement and failure process using a Continuous Time Markov Chain (CTMC) and embed this into the integrated location and inventory model. The resulting formulation is a mixed-integer optimization model with quadratic constraints and is solved with a state-of-the-art second-order cone programming solver. Our extensive comparison with the traditional failure-based replacement model shows that optimizing replacement conditions in this integrated framework can provide significant cost savings (network, inventory, transportation and downtime costs), leading to different facility location, allocation and inventory decisions. We also study the effects of several important parameters on the condition-based replacement model, including facility costs, shipment speeds, replacement costs, part degradation parameters, and holding costs. Journal: IISE Transactions Pages: 246-271 Issue: 2 Volume: 53 Year: 2020 Month: 9 X-DOI: 10.1080/24725854.2020.1793035 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1793035 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2020:i:2:p:246-271 Template-Type: ReDIF-Article 1.0 Author-Name: Mikiya Kohama Author-X-Name-First: Mikiya Author-X-Name-Last: Kohama Author-Name: Chiharu Sugimoto Author-X-Name-First: Chiharu Author-X-Name-Last: Sugimoto Author-Name: Ojiro Nakano Author-X-Name-First: Ojiro Author-X-Name-Last: Nakano Author-Name: Yusuke Maeda Author-X-Name-First: Yusuke Author-X-Name-Last: Maeda Title: Robotic additive manufacturing with toy blocks Abstract: We develop and study a block-type three-dimensional (3D) printer that can assemble toy blocks based on 3D CAD models. Our system automatically converts a 3D CAD model into a block model consisting of toy blocks in basic shapes. Next, it automatically generates a feasible assembly plan for the block model. An industrial robot then assembles the block sculpture layer-by-layer, from bottom to top, using this assembly plan. This approach has advantages including the ease of combining multiple types of materials and reusing them, which is difficult for conventional 3D printers to accomplish. We also introduce a technique to reliably determine the order of block placement to assemble block models from various patterns. This technique includes converting unassemblable shapes in the models to assemblable ones with support blocks and/or decomposing them into subassemblies. In addition, we implement a robot control system that automatically generates a stable sculpture according to a predetermined placement order. We also demonstrate the assembly of various toy block models using our system. Journal: IISE Transactions Pages: 273-284 Issue: 3 Volume: 53 Year: 2020 Month: 12 X-DOI: 10.1080/24725854.2020.1755067 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1755067 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2020:i:3:p:273-284 Template-Type: ReDIF-Article 1.0 Author-Name: Yuwen Chen Author-X-Name-First: Yuwen Author-X-Name-Last: Chen Author-Name: John Z. Ni Author-X-Name-First: John Z.
Author-X-Name-Last: Ni Title: Product positioning and pricing decisions in a two-attribute disruptive new market Abstract: In disruptive innovation, a new entrant firm with fewer resources challenges the established incumbent firms. The new entrant firm offers an innovative product that is considered superior in new features appealing to a group of new customers, but inferior along the traditional performance attributes valued by mainstream customers. Previous research primarily focuses on the product strategies that could benefit the incumbent firms; thus, it is less clear how the new entrant firm should position the innovative product. This article investigates the product pricing and positioning strategies for new entrant firms that wholly invest in a single product. Our analytical model incorporates two horizontally distinct product attributes, where potential consumers have different preferences and reservation prices toward the two product attributes. We identify four product pricing and positioning strategies, and their corresponding optimal conditions. Our analysis shows that the product position is closely related to the customer valuation gap between the two attributes. As the innovative product will improve sufficiently on the traditional performance attributes, results from this study can be applied at different stages of product development by the new entrant firm. Journal: IISE Transactions Pages: 285-297 Issue: 3 Volume: 53 Year: 2020 Month: 12 X-DOI: 10.1080/24725854.2020.1759163 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1759163 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2020:i:3:p:285-297 Template-Type: ReDIF-Article 1.0 Author-Name: Longwei Cheng Author-X-Name-First: Longwei Author-X-Name-Last: Cheng Author-Name: Kai Wang Author-X-Name-First: Kai Author-X-Name-Last: Wang Author-Name: Fugee Tsung Author-X-Name-First: Fugee Author-X-Name-Last: Tsung Title: A hybrid transfer learning framework for in-plane freeform shape accuracy control in additive manufacturing Abstract: Shape accuracy control is one of the quality issues of greatest concern in Additive Manufacturing (AM). An efficient approach to improving the shape accuracy of a fabricated product is to compensate the fabrication errors of AM systems by modifying the input shape defined by a digital design model. In contrast with mass production, AM processes typically fabricate customized products with extremely low volume and huge shape varieties, which makes shape accuracy control in AM a challenging problem. In this article, we propose a hybrid transfer learning framework to predict and compensate the in-plane shape deviations of new and untried freeform products based on a small number of previously fabricated products. Within this framework, the shape deviation is decomposed into a shape-independent error and a shape-specific error. A parameter-based transfer learning approach is used to facilitate a sharing of parameters for modeling the shape-independent error, whereas a feature-based transfer learning approach is taken to promote the learning of a common representation of local shape features for modeling the shape-specific error. Experimental studies of a fused filament fabrication process demonstrate the effectiveness of our proposed framework in predicting the shape deviation and improving the shape accuracy of new products with freeform shapes.
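The two-part error decomposition described in the preceding abstract can be pictured with a toy two-stage fit: a shared, shape-independent term estimated on pooled data, followed by a shape-specific term estimated from the residuals using local shape features. The sketch below is a minimal illustration only, with synthetic data and ordinary least squares standing in for the article's parameter- and feature-based transfer-learning components; all variable names and parameter values are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in: radial deviation samples pooled across products.
    # X_shared: features driving the shape-INDEPENDENT error
    # X_local:  local shape features driving the shape-SPECIFIC error
    n = 200
    X_shared = rng.normal(size=(n, 3))
    X_local = rng.normal(size=(n, 4))
    y = (X_shared @ np.array([0.5, -0.2, 0.1])
         + X_local @ np.array([0.3, 0.0, -0.1, 0.05])
         + rng.normal(scale=0.01, size=n))

    # Stage 1: fit the shared (shape-independent) component on pooled data.
    beta_shared, *_ = np.linalg.lstsq(X_shared, y, rcond=None)

    # Stage 2: fit the shape-specific component on the residuals.
    resid = y - X_shared @ beta_shared
    beta_local, *_ = np.linalg.lstsq(X_local, resid, rcond=None)

    # Predicted deviation = shared term + shape-specific term.
    y_hat = X_shared @ beta_shared + X_local @ beta_local
    print("residual RMS after compensation:", np.sqrt(np.mean((y - y_hat) ** 2)))

In a compensation setting, the deviation predicted by such a model would be subtracted from the nominal input shape before fabrication.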
Journal: IISE Transactions Pages: 298-312 Issue: 3 Volume: 53 Year: 2020 Month: 12 X-DOI: 10.1080/24725854.2020.1741741 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1741741 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2020:i:3:p:298-312 Template-Type: ReDIF-Article 1.0 Author-Name: Xiaoning Kang Author-X-Name-First: Xiaoning Author-X-Name-Last: Kang Author-Name: Xiaoyu Chen Author-X-Name-First: Xiaoyu Author-X-Name-Last: Chen Author-Name: Ran Jin Author-X-Name-First: Ran Author-X-Name-Last: Jin Author-Name: Hao Wu Author-X-Name-First: Hao Author-X-Name-Last: Wu Author-Name: Xinwei Deng Author-X-Name-First: Xinwei Author-X-Name-Last: Deng Title: Multivariate regression of mixed responses for evaluation of visualization designs Abstract: Information visualization significantly enhances human perception by graphically representing complex data sets. The variety of visualization designs makes it challenging to efficiently evaluate all possible designs catering to users’ preferences and characteristics. Most existing evaluation methods perform user studies to obtain multivariate qualitative responses from users via questionnaires and interviews. However, these methods cannot support online evaluation of designs, as they are often time-consuming. A statistical model is desired to predict users’ preferences on visualization designs based on non-interference measurements (i.e., wearable sensor signals). In this work, we propose a Multivariate Regression of Mixed Responses (MRMR) to facilitate quantitative evaluation of visualization designs. The proposed MRMR method is able to provide accurate model prediction with meaningful variable selection. A simulation study and a user study evaluating visualization designs with 14 effective participants are conducted to illustrate the merits of the proposed model. Journal: IISE Transactions Pages: 313-325 Issue: 3 Volume: 53 Year: 2020 Month: 12 X-DOI: 10.1080/24725854.2020.1755068 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1755068 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2020:i:3:p:313-325 Template-Type: ReDIF-Article 1.0 Author-Name: Minhee Kim Author-X-Name-First: Minhee Author-X-Name-Last: Kim Author-Name: Kaibo Liu Author-X-Name-First: Kaibo Author-X-Name-Last: Liu Title: A Bayesian deep learning framework for interval estimation of remaining useful life in complex systems by incorporating general degradation characteristics Abstract: Deep learning has emerged as a powerful tool to model complicated relationships between inputs and outputs in various fields including degradation modeling and prognostics. Existing deep learning-based prognostic approaches are often used in a black-box manner and provide only point estimations of remaining useful life. However, accurate interval estimations of the remaining useful life are crucial to understand the stochastic nature of degradation processes and perform reliable risk analysis and maintenance decision making. This study proposes a novel Bayesian deep learning framework that incorporates general characteristics of degradation processes and provides the interval estimations of remaining useful life.
The proposed method enjoys several unique advantages: (i) providing a general approach by not assuming any particular type of degradation process or the availability of domain-specific prior knowledge such as a failure threshold; (ii) offering the interval estimations of the remaining useful life; (iii) systematically modeling two types of uncertainties embedded in prognostics; and (iv) exhibiting great prognostic performance and wide applicability to complex systems that may involve multiple sensor signals, multiple failure modes, and multiple operational conditions. Numerical studies demonstrate improved prognostic performance and practicality of the proposed method over benchmark approaches. Additional numerical results including the analysis of sensitivity and computational costs are given in the online supplemental materials. Journal: IISE Transactions Pages: 326-340 Issue: 3 Volume: 53 Year: 2020 Month: 12 X-DOI: 10.1080/24725854.2020.1766729 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1766729 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2020:i:3:p:326-340 Template-Type: ReDIF-Article 1.0 Author-Name: Xin Wang Author-X-Name-First: Xin Author-X-Name-Last: Wang Author-Name: Zhi-Sheng Ye Author-X-Name-First: Zhi-Sheng Author-X-Name-Last: Ye Title: Design of customized two-dimensional extended warranties considering use rate and heterogeneity Abstract: Extended warranties have become a rich source of profits for manufacturers. In the current marketplace, two-dimensional extended warranties are often sold to all customers with a fixed coverage region and the same price. However, customers are heterogeneous, as they have different use rates and failure histories, which significantly affect the cost of the extended warranty. In this study, we first propose a customized extended warranty strategy that takes into account the use rate of the customer. Under the strategy, we develop a warranty cost model to compute the expected repair cost and the expected profit of the manufacturer, based on which the optimal price for the customized extended warranty can be determined. Building upon this model, we further propose a policy that makes use of both the customer’s use rate and the product’s failure history. We conduct a comprehensive numerical study to compare the two proposed policies with the prevailing fixed-region strategy. The results reveal that the customized policies are better than the prevailing policy in terms of fairness, flexibility, and profitability. Journal: IISE Transactions Pages: 341-351 Issue: 3 Volume: 53 Year: 2020 Month: 12 X-DOI: 10.1080/24725854.2020.1768455 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1768455 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:53:y:2020:i:3:p:341-351 Template-Type: ReDIF-Article 1.0 Author-Name: Babak Farmanesh Author-X-Name-First: Babak Author-X-Name-Last: Farmanesh Author-Name: Arash Pourhabib Author-X-Name-First: Arash Author-X-Name-Last: Pourhabib Author-Name: Balabhaskar Balasundaram Author-X-Name-First: Balabhaskar Author-X-Name-Last: Balasundaram Author-Name: Austin Buchanan Author-X-Name-First: Austin Author-X-Name-Last: Buchanan Title: A Bayesian framework for functional calibration of expensive computational models through non-isometric matching Abstract: We study statistical calibration, i.e., adjusting features of a computational model that are not observable or controllable in its associated physical system. We focus on functional calibration, which arises in many manufacturing processes where the unobservable features, called calibration variables, are a function of the input variables. A major challenge in many applications is that computational models are expensive and can only be evaluated a limited number of times. Furthermore, without making strong assumptions, the calibration variables are not identifiable. We propose Bayesian Non-isometric Matching Calibration (BNMC) that allows calibration of expensive computational models with only a limited number of samples taken from a computational model and its associated physical system. BNMC replaces the computational model with a dynamic Gaussian process whose parameters are trained in the calibration procedure. To resolve the identifiability issue, we present the calibration problem from the geometric perspective of non-isometric curve-to-surface matching, which enables us to take advantage of combinatorial optimization techniques to extract necessary information for constructing prior distributions. Our numerical experiments demonstrate that in terms of prediction accuracy BNMC outperforms, or is comparable to, other existing calibration frameworks. Journal: IISE Transactions Pages: 352-364 Issue: 3 Volume: 53 Year: 2020 Month: 7 X-DOI: 10.1080/24725854.2020.1774688 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1774688 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2020:i:3:p:352-364 Template-Type: ReDIF-Article 1.0 Author-Name: Changxi Wang Author-X-Name-First: Changxi Author-X-Name-Last: Wang Author-Name: Elsayed A. Elsayed Author-X-Name-First: Elsayed A. Author-X-Name-Last: Elsayed Title: Stochastic modeling of degradation branching processes Abstract: Degradation branching is a common phenomenon in many real-life applications. The degradation of a location not only increases with time, but also propagates to other locations in the same system. While the degradation of an individual location has been studied extensively, research on degradation branching is sparse. In this paper, we develop a general stochastic degradation branching model that characterizes both the degradation growth and degradation propagation. The probabilistic properties of the general degradation branching processes are analyzed. Reliability metrics such as the mean time to failure, mean residual life, failure probability and others are also investigated. In particular, closed-form expressions for the expectation and variance of the degradation and selected reliability metrics are obtained when the time to branch follows an exponential distribution. The model is validated using actual crack growth data.
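The branching mechanism in the preceding abstract lends itself to a quick Monte Carlo check. The sketch below simulates a deliberately simplified variant, assuming linear degradation growth, exponential times to branch, a single propagation per location, and hypothetical parameter values; it is a toy stand-in, not the article's closed-form analysis.

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_ttf(rate=0.1, growth=1.0, threshold=50.0):
        # One run of a toy branching degradation process. Every active
        # location degrades linearly at rate `growth`; after an Exp(rate)
        # holding time it propagates once, spawning one new location.
        # Failure: the summed degradation crosses `threshold`.
        t, total, n = 0.0, 0.0, 1
        events = [rng.exponential(1.0 / rate)]   # pending propagation times
        while True:
            t_next = min(events)
            # Time the threshold would be hit if no branching happened first.
            t_hit = t + (threshold - total) / (n * growth)
            if t_hit <= t_next:
                return t_hit
            total += n * growth * (t_next - t)   # degrade up to the branch
            t = t_next
            events.remove(t_next)
            n += 1                               # degradation propagates
            events.append(t + rng.exponential(1.0 / rate))

    mttf = np.mean([simulate_ttf() for _ in range(500)])
    print("Monte Carlo mean time to failure:", round(mttf, 2))

Such a simulation is useful mainly as a sanity check against closed-form moments in the exponential-branching special case.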
Journal: IISE Transactions Pages: 365-374 Issue: 3 Volume: 53 Year: 2020 Month: 7 X-DOI: 10.1080/24725854.2020.1775914 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1775914 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2020:i:3:p:365-374 Template-Type: ReDIF-Article 1.0 Author-Name: Michael G. Klein Author-X-Name-First: Michael G. Author-X-Name-Last: Klein Author-Name: Vedat Verter Author-X-Name-First: Vedat Author-X-Name-Last: Verter Author-Name: Hughie F. Fraser Author-X-Name-First: Hughie F. Author-X-Name-Last: Fraser Author-Name: Brian G. Moses Author-X-Name-First: Brian G. Author-X-Name-Last: Moses Title: Specialist care in rural hospitals: From Emergency Department consultation to hospital discharge Abstract: In urban and rural hospitals, congested Emergency Departments (EDs) are filled with patients boarding in the ED awaiting admission to inpatient wards. We study this problem beyond the walls of the ED, examining the multi-departmental process managed by specialists. In rural hospitals, an Internal Medicine Specialist (Internist) commonly serves simultaneously as both the Intensive Care Unit (ICU) physician and Internist on call. We develop a stochastic dynamic programming framework for specialists’ workflow decisions and apply it to data sets developed from two rural hospitals. One uses the dual role approach and the other, similar to urban hospitals, staffs the ICU with another physician, each with a single role. Our empirical results show that, excluding an overnight batch, arrivals of ED consultation requests for rural specialists follow a homogeneous Poisson process. Our models help identify better policies and determine how much better off a hospital is with two rather than one Internist on call. Although current guidelines suggest an early inpatient discharge strategy, we find that specialists should give higher priority to ED consultations unless a threshold number of patients are boarding in the ED, or until a threshold time of day after which specialists should give higher priority to inpatient discharges. Journal: IISE Transactions Pages: 375-388 Issue: 4 Volume: 53 Year: 2021 Month: 4 X-DOI: 10.1080/24725854.2020.1790699 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1790699 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:4:p:375-388 Template-Type: ReDIF-Article 1.0 Author-Name: Lavanya Marla Author-X-Name-First: Lavanya Author-X-Name-Last: Marla Author-Name: Kaushik Krishnan Author-X-Name-First: Kaushik Author-X-Name-Last: Krishnan Author-Name: Sarang Deo Author-X-Name-First: Sarang Author-X-Name-Last: Deo Title: Managing EMS systems with user abandonment in emerging economies Abstract: In many emerging economies, callers may abandon ambulance requests due to a combination of operational (small fleet size), infrastructural (long travel times) and behavioral factors (low trust in the ambulance system). As a result, ambulance capacity, which is already scarce, is wasted in serving calls that are likely to be abandoned later. In this article, we investigate the design of an ambulance system in the presence of abandonment behavior, using a two-step approach. First, because the callers’ actual willingness to wait for ambulances is censored, we adopt a Maximum Likelihood Estimation approach suitable for interval-censored data.
Second, we employ a simulation-based optimization approach to explicitly incorporate customers’ willingness to wait in: (i) tactical short-term decisions such as modification of dispatch policies and ambulance allocations at existing base locations; and (ii) strategic long-term network design decisions of increasing fleet size and re-designing base locations. We calibrate our models using data from a major metropolitan city in India where historically 81.3% of calls were successfully served without being abandoned. We find that modifying dispatch policies or reallocating ambulances provides relatively small gains in successfully served calls (around 1%). By contrast, increasing fleet size and network re-design can more significantly increase the fraction of successfully served calls, with the latter being particularly effective. Redesigning bases with the current fleet size is equivalent to increasing the fleet size by 8.6% at current base locations. Similarly, adding 29% more ambulances and redesigning the base locations is equivalent to doubling the fleet size at the current base locations, and adding 34% more ambulances and redesigning base locations is equivalent to a three-fold increase. Our results indicate that in the absence of changes in behavioral factors, significant investment is required to modify operational factors by increasing fleet size, and to modify infrastructural factors by redesigning base locations. Journal: IISE Transactions Pages: 389-406 Issue: 4 Volume: 53 Year: 2021 Month: 4 X-DOI: 10.1080/24725854.2020.1802086 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1802086 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:4:p:389-406 Template-Type: ReDIF-Article 1.0 Author-Name: Jon M. Stauffer Author-X-Name-First: Jon M. Author-X-Name-Last: Stauffer Author-Name: Aly Megahed Author-X-Name-First: Aly Author-X-Name-Last: Megahed Author-Name: Chelliah Sriskandarajah Author-X-Name-First: Chelliah Author-X-Name-Last: Sriskandarajah Title: Elasticity management for capacity planning in software as a service cloud computing Abstract: Applications of cloud computing are increasing as companies shift from on-premise IT environments to public, private, or hybrid clouds. Consequently, cloud providers use capacity planning to maintain the capacity of computing resources (instances) required to meet the dynamic nature of the demand (queries). However, there is a trade-off between deploying too many costly instances, and deploying too few instances and paying penalties for not being able to process queries on-time. An instance has multiple resource dimensions and executing a query consumes multiple dimensions of an instance’s capacity. This detailed multi-dimensional management of cloud computing resource capacity is known as elasticity management and is an important issue faced by all cloud providers. Determining the optimal number of instances needed in a given planning horizon is challenging, due to the combinatorial nature of the optimization problem involved. We develop an optimization model and related algorithms to capture the trade-off between the resource cost versus the delayed execution penalty in software as a service applications from the cloud provider’s perspective. We develop an exact approach to solve small- to medium-sized applications and heuristics to solve large applications.
We then evaluate their performance via extensive computational analyses with real-world data and current cloud provider approaches. We also develop a stochastic framework and methodology to deal with demand uncertainty, and using two different randomly generated data sets (representing problem instances in practice), we demonstrate that robust solutions can be obtained. Journal: IISE Transactions Pages: 407-424 Issue: 4 Volume: 53 Year: 2021 Month: 4 X-DOI: 10.1080/24725854.2020.1810368 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1810368 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:4:p:407-424 Template-Type: ReDIF-Article 1.0 Author-Name: Shadi Sanoubar Author-X-Name-First: Shadi Author-X-Name-Last: Sanoubar Author-Name: Lisa M. Maillart Author-X-Name-First: Lisa M. Author-X-Name-Last: Maillart Author-Name: Oleg A. Prokopyev Author-X-Name-First: Oleg A. Author-X-Name-Last: Prokopyev Title: Age-replacement policies under age-dependent replacement costs Abstract: We consider a stochastically deteriorating system with self-announcing failures that require immediate reactive replacement. For such a system, we consider an age-replacement policy (without minimal repair) under which the system is replaced at failure (reactive replacement) or at a prescribed replacement time (preventive replacement), whichever occurs first. Motivated by factors such as decreasing salvage value or increasing costs associated with obtaining spare parts, we assume that replacement costs are non-decreasing in system age. We formulate a long-run expected cost-rate minimization model with instantaneous replacements that captures this dependency, and provide conditions under which there exists a unique optimal solution. We provide analytical and numerical results that compare the cost-rate minimizing optimal replacement policy, and its performance, to those for the case in which replacement costs are assumed to be constant. Finally, we also consider non-instantaneous replacements, and compare cost-rate minimizing and availability maximizing policies. Journal: IISE Transactions Pages: 425-436 Issue: 4 Volume: 53 Year: 2021 Month: 4 X-DOI: 10.1080/24725854.2020.1819580 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1819580 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:4:p:425-436 Template-Type: ReDIF-Article 1.0 Author-Name: Jianqiu Huang Author-X-Name-First: Jianqiu Author-X-Name-Last: Huang Author-Name: Kai Pan Author-X-Name-First: Kai Author-X-Name-Last: Pan Author-Name: Yongpei Guan Author-X-Name-First: Yongpei Author-X-Name-Last: Guan Title: Cutting planes for security-constrained unit commitment with regulation reserve Abstract: With significant economic and environmental benefits, renewable energy is increasingly used to generate electricity. To hedge against the uncertainty due to the increasing penetration of renewable energy, an ancillary service market was introduced to maintain reliability and efficiency, in addition to day-ahead and real-time energy markets. To co-optimize these two markets, a unit commitment problem with regulation reserve (the most common ancillary service product) is solved for daily power system operations, leading to a large-scale and computationally challenging mixed-integer program. 
In this article, we analyze the polyhedral structure of the co-optimization model to speed up the solution process by deriving problem-specific strong valid inequalities. Convex hull results for certain special cases (i.e., two- and three-period cases) with rigorous proofs are provided, and strong valid inequalities covering multiple periods under the most general setting are derived. We also develop efficient polynomial-time separation algorithms for the families of inequalities of exponential size. We further tighten the formulation by deriving an extended formulation for each generator in a higher-dimensional space. Finally, we conduct computational experiments to apply our derived inequalities as cutting planes in a branch-and-cut algorithm. Significant improvement from our inequalities over commercial solvers demonstrates the effectiveness of our approach and its practical usefulness in enhancing the co-optimization of energy and ancillary service markets. Journal: IISE Transactions Pages: 437-452 Issue: 4 Volume: 53 Year: 2021 Month: 4 X-DOI: 10.1080/24725854.2020.1823533 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1823533 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:4:p:437-452 Template-Type: ReDIF-Article 1.0 Author-Name: Qingwei Jin Author-X-Name-First: Qingwei Author-X-Name-Last: Jin Author-Name: Jen-Yen Lin Author-X-Name-First: Jen-Yen Author-X-Name-Last: Lin Author-Name: Sean X. Zhou Author-X-Name-First: Sean X. Author-X-Name-Last: Zhou Title: Price discounts and personalized product assortments under multinomial logit choice model: A robust approach Abstract: With increasing availability of consumer data and rapid advancement and application of technologies, online retailers are gaining better knowledge of customers’ shopping behavior and preferences. Thus more and more retailers are providing personalized product assortments to better match the needs of customers and generate more sales. In this article, we study a two-stage revenue management model. In the first stage, the retailer decides non-personalized price discounts for each product. In the second stage (upon the arrival of customers), the retailer offers a personalized assortment to each type of customer. Based on this assortment, the customer then makes a purchase decision according to the multinomial logit choice model. We employ a robust approach for the joint discounts and personalized assortment optimization problem in order to handle data uncertainty from estimating customer preferences and the distribution of different customer segments. We analyze the structural properties of the problems and propose efficient computational methods to solve the problems with/without a cardinality constraint on the assortment. In certain cases, our algorithm converges at a superlinear rate. When there is a cardinality constraint on the assortment, we find that the retailer should offer greater discounts as the constraint becomes more restrictive. We also discuss the value of our robust solution and the extension in which the customer discount sensitivity function is also uncertain. Finally, our extensive numerical study shows that the solutions under the robust approach perform very well when compared to those assuming accurate information, and are robust when there is uncertainty.
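For reference, the multinomial logit purchase probabilities that drive the second-stage assortment decisions above take the standard form P(i | S) = exp(v_i) / (1 + sum over j in S of exp(v_j)), with the no-purchase utility normalized to zero. A minimal Python sketch with hypothetical utilities and revenues follows; it illustrates only the choice model, not the article's robust optimization algorithm.

    import numpy as np

    def mnl_choice_probs(utilities):
        # Multinomial logit choice probabilities for an offered assortment.
        # The no-purchase option has utility 0, so its weight exp(0) = 1
        # appears in the denominator.
        w = np.exp(utilities)
        return w / (1.0 + w.sum())

    v = np.array([1.2, 0.4, -0.3])   # hypothetical post-discount utilities
    r = np.array([10.0, 8.0, 6.0])   # hypothetical unit revenues net of discount
    p = mnl_choice_probs(v)
    print("purchase probabilities:", p)
    print("expected revenue of this assortment:", float((p * r).sum()))

Under parameter uncertainty, a robust variant would evaluate such expected revenues against the worst case over an uncertainty set for the utilities rather than a single point estimate.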
Journal: IISE Transactions Pages: 453-471 Issue: 4 Volume: 53 Year: 2021 Month: 4 X-DOI: 10.1080/24725854.2020.1798036 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1798036 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:4:p:453-471 Template-Type: ReDIF-Article 1.0 Author-Name: Oguzhan Vicil Author-X-Name-First: Oguzhan Author-X-Name-Last: Vicil Title: Inventory rationing on a one-for-one inventory model for two priority customer classes with backorders and lost sales Abstract: In this study, we are primarily motivated by the research problem of recognizing heterogeneous customer behavior towards waiting for order fulfillment under the threshold rationing policy (also known as the critical level policy), and aim to find its effect on system stock levels and performance measures. We assume a continuous review one-for-one ordering policy with generally distributed lead times. In the first model, we consider the case in which the low-priority customer class exhibits zero patience for waiting if the demand is not satisfied immediately (a lost sale), whereas the demand of the high-priority customer class can be backordered. This is the first study in the literature to consider this model. We provide an exact analysis for the derivation of the steady-state probability distribution and the average infinite horizon cost per unit time. We then develop an efficient optimization procedure to minimize the average expected cost rate. We also determine the forms of the optimal solutions for the two service level optimization models that are common in practice. In the second model, we study the opposite case in which the high-priority customer class exhibits zero patience for waiting. We establish a theoretical basis for the rationale of using the Continuous-Time Markov Chain (CTMC) approach as an approximation. We show that under certain assumptions, the steady-state probabilities of the system with generally distributed lead times are identical to the steady-state probabilities of the CTMC system with the same mean. This result enables us to link the dynamics of the studied model to the CTMC model, which may open new doors for future research. Journal: IISE Transactions Pages: 472-495 Issue: 4 Volume: 53 Year: 2021 Month: 4 X-DOI: 10.1080/24725854.2020.1805530 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1805530 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:4:p:472-495 Template-Type: ReDIF-Article 1.0 Author-Name: Hong-Bin Yan Author-X-Name-First: Hong-Bin Author-X-Name-Last: Yan Author-Name: Ming Li Author-X-Name-First: Ming Author-X-Name-Last: Li Title: An uncertain Kansei Engineering methodology for behavioral service design Abstract: To perfect a service, service providers must understand the fundamental emotional effects that a service may invoke. Kansei Engineering (KE) has recently been adapted to service industries to capture the relationships between service design elements and customers’ emotional perceptions. However, effective service design based on KE is still seriously challenged by the uncertainty and behavioral biases of customers’ emotions. This article proposes an uncertain KE methodology for behavioral service design.
To do so, an integrative framework is first proposed by linking design attributes, emotional needs, and overall satisfaction, so as to design services that best satisfy customers’ emotional needs. Second, multinomial logistic regression is used to build the uncertain relationships between design attributes and emotional attributes. Third, a quantitative Kano model is proposed to model the asymmetric and nonlinear satisfaction functions reflecting the “gains and losses” effect of positive emotions and negative emotions. Next, Prospect Theory is used to derive customer overall satisfaction by distinguishing the “gains and losses”. Finally, the proposed methodology is applied to a case study of the campus express delivery service in China. An independent tracking study shows that the results are consistent with service acceptance and provide valuable insights. Journal: IISE Transactions Pages: 497-522 Issue: 5 Volume: 53 Year: 2021 Month: 5 X-DOI: 10.1080/24725854.2020.1766727 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1766727 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:5:p:497-522 Template-Type: ReDIF-Article 1.0 Author-Name: Mengyue Wang Author-X-Name-First: Mengyue Author-X-Name-Last: Wang Author-Name: Hongxuan Huang Author-X-Name-First: Hongxuan Author-X-Name-Last: Huang Author-Name: Jingshan Li Author-X-Name-First: Jingshan Author-X-Name-Last: Li Title: Transients in flexible manufacturing systems with setups and batch operations: Modeling, analysis, and design Abstract: Significant research and practice efforts have been devoted to flexible manufacturing systems. Many of them focus on performance analysis, production and inventory control, planning, and scheduling. Steady state analysis is prevalent in these studies. The transient behavior of flexible lines is less investigated. However, the dynamic changes in customer demands and the uncertain nature of production make the transient performance critical for system control, scheduling, and improvement. Due to non-negligible setups during product change, batch operation is typically carried out in many flexible lines. How to design the system to allocate multiple products, determine batch size, and schedule part sequence is of significant importance to system performance during transients. In this article, an analytical method is developed to evaluate the system throughput, work-in-process, and other performance measures in transient periods for multi-product lines with Bernoulli reliability machines, finite buffers, non-negligible setups, and batch productions. System properties, such as monotonicity, are discussed. Moreover, optimal order assignment and part scheduling in systems with multiple flexible lines are studied. Both centralized and decentralized optimization policies are investigated. Journal: IISE Transactions Pages: 523-540 Issue: 5 Volume: 53 Year: 2021 Month: 5 X-DOI: 10.1080/24725854.2020.1766728 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1766728 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:53:y:2021:i:5:p:523-540 Template-Type: ReDIF-Article 1.0 Author-Name: Peng Yang Author-X-Name-First: Peng Author-X-Name-Last: Yang Author-Name: Zhijie Zhao Author-X-Name-First: Zhijie Author-X-Name-Last: Zhao Author-Name: Zuo-Jun Max Shen Author-X-Name-First: Zuo-Jun Max Author-X-Name-Last: Shen Title: A flow picking system for order fulfillment in e-commerce warehouses Abstract: A flow picking system in which the existing picking list is updated in real time has been considered as an effective solution for e-commerce warehouses to increase order fulfillment efficiency. The performance analysis of flow picking systems and the comparison between batch picking and flow picking systems are pivotal issues of great concern to both academics and practitioners of warehouse operations management. In this study, we first develop analytic models to estimate the critical performance indicators of a flow picking system, including picking density and turnover time of an order. Second, we leverage the proposed models and real warehouse data to compare the performance of batch picking and flow picking systems through simulation. Our results show that a flow picking system requires fewer order pickers and shorter walking distances than a batch picking system to achieve the same service level in most scenarios, especially those with a higher order arrival rate. Our study can provide valuable guidelines to warehouse managers and decision-makers for choosing an order fulfillment solution by comparing a batch picking system and a flow picking system. Journal: IISE Transactions Pages: 541-551 Issue: 5 Volume: 53 Year: 2021 Month: 5 X-DOI: 10.1080/24725854.2020.1772525 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1772525 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:5:p:541-551 Template-Type: ReDIF-Article 1.0 Author-Name: Hieu Bui Author-X-Name-First: Hieu Author-X-Name-Last: Bui Author-Name: Harry A. Pierson Author-X-Name-First: Harry A. Author-X-Name-Last: Pierson Author-Name: Sarah Nurre Pinkley Author-X-Name-First: Sarah Nurre Author-X-Name-Last: Pinkley Author-Name: Kelly M. Sullivan Author-X-Name-First: Kelly M. Author-X-Name-Last: Sullivan Title: Toolpath planning for multi-gantry additive manufacturing Abstract: Additive Manufacturing (AM), specifically Fused Filament Fabrication (FFF), is revolutionizing the production of many products. FFF is one of the most popular AM processes because it is inexpensive, requires little maintenance, and has high material utilization. Unfortunately, long cycle times are a significant drawback that prevents FFF from being more widely implemented, especially for large-scale components. In response to this, printers that employ multiple independent FFF printheads simultaneously working on the same part have been developed, and multi-gantry configurations are now commercially available; however, there is a dearth of formal research on multi-gantry path planning, and current practices do not maximize printhead utilization or as-built mechanical properties. This article proposes a novel methodology for generating collision-free toolpaths for multi-gantry printers that yields shorter print times and superior mechanical properties compared with the state of the art.
In this approach, a metaheuristic is used to seek near-optimal segmentation and scheduling of each layer while a collision checking and resolution algorithm enforces kinematic constraints to ensure collision-free solutions. Simulation is used to show the resulting makespan reduction for various layers, and the proposed methodology is physically implemented and verified. Tensile testing on samples printed via the current and proposed methods confirms that the proposed methodology results in superior mechanical properties. Journal: IISE Transactions Pages: 552-567 Issue: 5 Volume: 53 Year: 2021 Month: 5 X-DOI: 10.1080/24725854.2020.1775915 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1775915 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:5:p:552-567 Template-Type: ReDIF-Article 1.0 Author-Name: Shenghan Guo Author-X-Name-First: Shenghan Author-X-Name-Last: Guo Author-Name: Weihong “Grace” Guo Author-X-Name-First: Weihong “Grace” Author-X-Name-Last: Guo Author-Name: Amir Abolhassani Author-X-Name-First: Amir Author-X-Name-Last: Abolhassani Author-Name: Rajeev Kalamdani Author-X-Name-First: Rajeev Author-X-Name-Last: Kalamdani Title: Nonparametric, real-time detection of process deteriorations in manufacturing with parsimonious smoothing Abstract: Machine faults and systematic failures result from manufacturing process deterioration. With early recognition of patterns closely related to process deterioration, e.g., trends, preventative maintenance can be conducted to avoid severe loss of productivity. Change-point detection identifies the time when abnormal patterns occur; thus, it is ideal for this purpose. However, trend detection is not extensively explored in existing studies about change-point detection – the widely adopted approaches mainly target abrupt mean shifts and offline monitoring. Practical considerations in manufacturing pose additional challenges for methodology development: data complexity and real-time detection. Data complexity in manufacturing restricts the utilization of parametric statistical modeling; the industrial demand for online decision-making requires real-time detection. In this article, we develop an innovative change-point detection method based on Parsimonious Smoothing that targets trend detection in nonparametric, online settings. The proposed method is demonstrated to outperform benchmark approaches in capturing trends within complex data. A case study validates the feasibility and performance of the proposed method on real data from automotive manufacturing. Journal: IISE Transactions Pages: 568-581 Issue: 5 Volume: 53 Year: 2021 Month: 5 X-DOI: 10.1080/24725854.2020.1786195 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1786195 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:5:p:568-581 Template-Type: ReDIF-Article 1.0 Author-Name: Andi Wang Author-X-Name-First: Andi Author-X-Name-Last: Wang Author-Name: Jianjun Shi Author-X-Name-First: Jianjun Author-X-Name-Last: Shi Title: Holistic modeling and analysis of multistage manufacturing processes with sparse effective inputs and mixed profile outputs Abstract: In a Multistage Manufacturing Process (MMP), multiple types of sensors are deployed to collect intermediate product quality measurements after each stage of manufacturing. 
This study aims at modeling the relationship between these quality outputs of mixed profiles and sparse effective process inputs. We propose an analytical framework based on four process characteristics: (i) every input only affects the outputs of the same and the later stages; (ii) the outputs from all stages are smooth functional curves or images; (iii) only a small number of inputs influence the outputs; and (iv) the inputs cause a few variation patterns on the outputs. We formulate an optimization problem that simultaneously estimates the effects of process inputs on the outputs across the entire MMP. An ADMM consensus algorithm is developed to solve this problem. This algorithm is highly parallelizable and can handle a large amount of data of mixed types obtained from multiple stages. The ability of this algorithm to estimate effects, select effective inputs, and identify the variation patterns of each stage is validated with simulation experiments. Journal: IISE Transactions Pages: 582-596 Issue: 5 Volume: 53 Year: 2021 Month: 5 X-DOI: 10.1080/24725854.2020.1786197 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1786197 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:5:p:582-596 Template-Type: ReDIF-Article 1.0 Author-Name: Xiaolei Fang Author-X-Name-First: Xiaolei Author-X-Name-Last: Fang Author-Name: Hao Yan Author-X-Name-First: Hao Author-X-Name-Last: Yan Author-Name: Nagi Gebraeel Author-X-Name-First: Nagi Author-X-Name-Last: Gebraeel Author-Name: Kamran Paynabar Author-X-Name-First: Kamran Author-X-Name-Last: Paynabar Title: Multi-sensor prognostics modeling for applications with highly incomplete signals Abstract: Multi-stream degradation signals have been widely used to predict the residual useful lifetime of partially degraded systems. To achieve this goal, most of the existing prognostics models assume that degradation signals are complete, i.e., they are observed continuously and frequently at regular time grids. In reality, however, degradation signals are often (highly) incomplete, i.e., they contain missing and corrupt observations. Such signal incompleteness poses a significant challenge for the parameter estimation of prognostics models. To address this challenge, this article proposes a prognostics methodology that is capable of using highly incomplete multi-stream degradation signals to predict the residual useful lifetime of partially degraded systems. The method first employs multivariate functional principal components analysis to fuse multi-stream signals. Next, the fused features are regressed against time-to-failure using (log)-location-scale regression. To estimate the fused features using incomplete multi-stream degradation signals, we develop two computationally efficient algorithms: subspace detection and signal recovery. The performance of the proposed prognostics methodology is evaluated using simulated datasets and a degradation dataset of aircraft turbofan engines from the NASA repository. Journal: IISE Transactions Pages: 597-613 Issue: 5 Volume: 53 Year: 2021 Month: 2 X-DOI: 10.1080/24725854.2020.1789779 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1789779 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:uiiexx:v:53:y:2021:i:5:p:597-613 Template-Type: ReDIF-Article 1.0 Author-Name: Tingting Huang Author-X-Name-First: Tingting Author-X-Name-Last: Huang Author-Name: Yuepu Zhao Author-X-Name-First: Yuepu Author-X-Name-Last: Zhao Author-Name: David W. Coit Author-X-Name-First: David W. Author-X-Name-Last: Coit Author-Name: Loon-Ching Tang Author-X-Name-First: Loon-Ching Author-X-Name-Last: Tang Title: Reliability assessment and lifetime prediction of degradation processes considering recoverable shock damages Abstract: Many products degrade over time and their degradation processes could be affected by instantaneous shocks during field usage. Instantaneous shocks can cause incremental increases to the degradation signals through shock damages, and can also increase the degradation rates of products. In practice, some kinds of products can recover fully or partially from shock damages in a certain period of time. In this article, a degradation model for soft failures is proposed that considers continuous degradation processes with recoverable shock damages, for reliability assessment and lifetime prediction of products. The random component of the degradation processes is characterized by a Wiener process and the effect of instantaneous shocks on the degradation process is expressed by an exponential function with a residual effect for either partial or full recovery. The impact of shocks affecting the degradation rate is established to be proportional to the shock size. The resulting model includes some existing models as its special cases and can easily be extended to cases where the degradation process follows other random processes with independent increments. Numerical examples are presented to illustrate the applications of the proposed model. Sensitivity analysis and validation of the two-stage parameter estimation approach are conducted based on simulation. Journal: IISE Transactions Pages: 614-628 Issue: 5 Volume: 53 Year: 2021 Month: 5 X-DOI: 10.1080/24725854.2020.1793036 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1793036 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:5:p:614-628 Template-Type: ReDIF-Article 1.0 Author-Name: Bryan Wilder Author-X-Name-First: Bryan Author-X-Name-Last: Wilder Author-Name: Sze-chuan Suen Author-X-Name-First: Sze-chuan Author-X-Name-Last: Suen Author-Name: Milind Tambe Author-X-Name-First: Milind Author-X-Name-Last: Tambe Title: Allocating outreach resources for disease control in a dynamic population with information spread Abstract: Infected individuals must be aware of disease symptoms to seek care, so outreach and education programs are critical to disease control. However, public health organizations often only have limited resources for outreach and must carefully design campaigns to maximize effectiveness, potentially leveraging word-of-mouth information spread. We show how classic epidemiological models can be reformulated such that identifying an efficient disease control resource allocation policy in the context of information spread becomes a submodular maximization problem. This means that our framework can simultaneously handle multiple, interacting dynamic processes coupled through the likelihood of disease clearance, allowing it to provide insight into optimal resource allocation while considering social dynamics in addition to disease dynamics (e.g., knowledge spread and disease spread). 
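Because the reformulation above yields a submodular maximization problem, the classic greedy heuristic (which carries a (1 - 1/e) approximation guarantee for monotone objectives under a cardinality budget) is the natural baseline. A minimal generic sketch with a toy coverage objective follows; the names and the objective are invented, not the article's model.

```python
# Greedy heuristic for budgeted monotone submodular maximization: repeatedly
# add the candidate with the largest marginal gain until the budget is spent.
def greedy(candidates, f, budget):
    chosen = set()
    for _ in range(budget):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: f(chosen | {c}) - f(chosen),
                   default=None)
        if best is None:
            break
        chosen.add(best)
    return chosen

# Toy coverage objective: each outreach site reaches a set of people.
reach = {"clinic": {1, 2, 3}, "market": {3, 4}, "school": {4, 5, 6}}
f = lambda S: len(set().union(*(reach[s] for s in S))) if S else 0
print(greedy(reach.keys(), f, budget=2))  # {'clinic', 'school'}
```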
We then examine a numerical example of tuberculosis control in India to demonstrate that this problem can be solved algorithmically and can handle stochasticity in input parameters. Journal: IISE Transactions Pages: 629-642 Issue: 6 Volume: 53 Year: 2020 Month: 9 X-DOI: 10.1080/24725854.2020.1798037 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1798037 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2020:i:6:p:629-642 Template-Type: ReDIF-Article 1.0 Author-Name: Peter Nesbitt Author-X-Name-First: Peter Author-X-Name-Last: Nesbitt Author-Name: Levente Sipeki Author-X-Name-First: Levente Author-X-Name-Last: Sipeki Author-Name: Tulay Flamand Author-X-Name-First: Tulay Author-X-Name-Last: Flamand Author-Name: Alexandra M. Newman Author-X-Name-First: Alexandra M. Author-X-Name-Last: Newman Title: Optimizing underground mine design with method-dependent precedences Abstract: This article addresses an underground mine design and scheduling problem, in which ore extraction methods are determined and resulting mining activities are scheduled. The mining method influences necessary infrastructure, the activities selected, and their timing. We divide the ore body into partitions (i.e., panels), each of which is extracted using a specific method, if at all. We consider two extraction methods, namely open-stope mining and bottom-up stoping with backfill, as well as an option of doing nothing. The myriad decisions present a challenging integer-programming problem for which we propose an optimization-based heuristic to generate an initial feasible solution. We further expedite solutions to the monolith by (i) eliminating unnecessary variables and (ii) strengthening the formulation. Our empirical study, conducted using 36 instances – four of which are directly obtained from our industry partner – demonstrates that the proposed model and corresponding solution methodology provide good-quality solutions, with gaps averaging less than 8%, superior to those obtained via several general-purpose-solver heuristics; CPU times of two hours or fewer are considered to be reasonable for long-term planning purposes. Our results also show that instances with irregular disposition and lower development rates are more tractable. We provide managerial insights gained from our solutions, which reveal that the mining method selected depends heavily on the extraction rate. Finally, we note that solving an industry-partner-provided instance results in a design with 44% additional value compared to that obtained via industry practice. Journal: IISE Transactions Pages: 643-656 Issue: 6 Volume: 53 Year: 2020 Month: 11 X-DOI: 10.1080/24725854.2020.1823534 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1823534 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2020:i:6:p:643-656 Template-Type: ReDIF-Article 1.0 Author-Name: Laura A. Albert Author-X-Name-First: Laura A. Author-X-Name-Last: Albert Author-Name: Alexander Nikolaev Author-X-Name-First: Alexander Author-X-Name-Last: Nikolaev Author-Name: Adrian J. Lee Author-X-Name-First: Adrian J. Author-X-Name-Last: Lee Author-Name: Kenneth Fletcher Author-X-Name-First: Kenneth Author-X-Name-Last: Fletcher Author-Name: Sheldon H. Jacobson Author-X-Name-First: Sheldon H. 
Author-X-Name-Last: Jacobson Title: A review of risk-based security and its impact on TSA PreCheck Abstract: Since September 11, 2001, the United States has invested a significant amount of resources into improving aviation security operations, with the Transportation Security Administration (TSA) assuming the responsibilities for security policy-making at commercial airports. This article reviews the literature that supports policies for risk-based passenger screening procedures and chronicles the analytical work leading up to the launch (in October 2011) of the TSA PreCheck program, as a first step toward implementing a risk-based security strategy for passenger and baggage screening. Multi-level passenger prescreening is the basis of the mathematical framework behind TSA PreCheck; the framework provides prescriptive control of security operations in settings with limited resources. TSA PreCheck assigns each passenger to a risk group, based on the initial perceived risk level (assessed in the prescreening stage), and then calibrates the security measures to mitigate the risk associated with each group. With passengers arriving in real time and the order of their arrivals uncertain, the resource utilization problem is solved by dynamic programming. A numerical comparison between a risk-based and an equal-risk (i.e., non-risk-based) security model is presented to quantify the benefits of risk-based security. Journal: IISE Transactions Pages: 657-670 Issue: 6 Volume: 53 Year: 2020 Month: 10 X-DOI: 10.1080/24725854.2020.1825881 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1825881 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2020:i:6:p:657-670 Template-Type: ReDIF-Article 1.0 Author-Name: Leonardo Lozano Author-X-Name-First: Leonardo Author-X-Name-Last: Lozano Author-Name: Michael J. Magazine Author-X-Name-First: Michael J. Author-X-Name-Last: Magazine Author-Name: George G. Polak Author-X-Name-First: George G. Author-X-Name-Last: Polak Title: Decision diagram-based integer programming for the paired job scheduling problem Abstract: The paired job scheduling problem seeks to schedule n jobs on a single machine, each job consisting of two tasks for which there is a mandatory minimum waiting time between the completion of the first task and the start of the second task. We provide complexity results for problems defined by three commonly used objective functions. We propose an integer programming formulation based on a decision diagram decomposition that models the objective function and some of the challenging constraints in the space of the flow variables stemming from the diagrams while enforcing the simpler constraints in the space of the original scheduling variables. We then show how to simplify our reformulation by projecting out a subset of the flow variables, resulting in a lifted reformulation for the problem that can be obtained without building the decision diagrams. Computational results show that our proposed model performs considerably better than a standard time-indexed formulation over a set of randomly generated instances. Journal: IISE Transactions Pages: 671-684 Issue: 6 Volume: 53 Year: 2020 Month: 11 X-DOI: 10.1080/24725854.2020.1828668 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1828668 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:uiiexx:v:53:y:2020:i:6:p:671-684 Template-Type: ReDIF-Article 1.0 Author-Name: Joris Kinable Author-X-Name-First: Joris Author-X-Name-Last: Kinable Author-Name: Willem-Jan van Hoeve Author-X-Name-First: Willem-Jan Author-X-Name-Last: van Hoeve Author-Name: Stephen F. Smith Author-X-Name-First: Stephen F. Author-X-Name-Last: Smith Title: Snow plow route optimization: A constraint programming approach Abstract: Many cities have to cope with annual snowfall, but are struggling to manage their snow plowing activities efficiently. Although winter road maintenance has been a popular research subject for decades, very few papers propose scalable models that can incorporate side constraints encountered in real-life applications. In this work, we propose a Constraint Programming formulation for a Snow Plow Routing Problem (SPRP). The SPRP under consideration involves finding a set of vehicle routes to service a street network in a pre-defined service area, while accounting for various vehicle constraints and traffic restrictions. The fundamental mathematical problem underlying SPRP is the well-known Capacitated Arc Routing Problem (CARP). Common Mathematical Programming (MP) approaches for CARP are typically based on (i) a graph transformation that converts CARP into an equivalent node routing problem, or (ii) a sparse network formulation. The CP formulation in this article is based on the former graph transformation. Using geospatial data from the city of Pittsburgh, we empirically show that our CP approach outperforms existing MP formulations for SPRP. For some of the larger instances, our CP model finds 26% shorter plowing schedules than alternative Integer Programming formulations. A pilot test with actual vehicles demonstrates the applicability of our approach in practice: our routes are 3–156% shorter than the routes the city of Pittsburgh generated with commercial routing software. Journal: IISE Transactions Pages: 685-703 Issue: 6 Volume: 53 Year: 2020 Month: 11 X-DOI: 10.1080/24725854.2020.1831713 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1831713 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2020:i:6:p:685-703 Template-Type: ReDIF-Article 1.0 Author-Name: Feimin Zhong Author-X-Name-First: Feimin Author-X-Name-Last: Zhong Author-Name: Zhongbao Zhou Author-X-Name-First: Zhongbao Author-X-Name-Last: Zhou Author-Name: Mingming Leng Author-X-Name-First: Mingming Author-X-Name-Last: Leng Title: Game-theoretic analyses of strategic pricing decision problems in supply chains Abstract: We consider strategic pricing problems in which each firm chooses between a non-cooperative (individual pricing) strategy and a cooperative (price negotiation) strategy. We first analyze a monopoly supply chain involving a supplier and a retailer, and then investigate two competing supply chains each consisting of a supplier and a retailer. We find that an appropriate power allocation between the supplier and the retailer can make the two firms benefit from negotiating the wholesale and retail prices. When the supplier negotiates the wholesale price, the retailer’s cooperative strategy can always induce supply chain coordination in the monopoly setting, whereas the two supply chains in the duopoly setting can possibly be coordinated only when the retailers determine their retail prices individually. 
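As a toy backdrop to the coordination discussion above (and not the article's game), the textbook linear-demand double-marginalization calculation shows why joint pricing raises total chain profit:

```python
# Toy two-echelon chain with demand D(p) = a - b*p and supplier unit cost c.
# Individual pricing: the retailer's best response is p(w) = (a/b + w)/2, so
# the supplier, anticipating it, sets w = (a/b + c)/2. Coordination instead
# picks p to maximize the whole chain's profit (p - c)*(a - b*p).
a, b, c = 100.0, 1.0, 20.0

w = (a / b + c) / 2                 # supplier's wholesale price: 60
p_ind = (a / b + w) / 2             # retail price under individual pricing: 80
q_ind = a - b * p_ind
profit_ind = (w - c) * q_ind + (p_ind - w) * q_ind   # 800 + 400 = 1200

p_coord = (a / b + c) / 2           # coordinated retail price: 60
profit_coord = (p_coord - c) * (a - b * p_coord)     # 1600

print(profit_ind, profit_coord)     # coordination lifts chain profit by 1/3
```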
In both the monopoly and duopoly settings, the wholesale price negotiation is a necessary part of the communications between supply chain members. When the supply chain competition intensifies, all firms are more likely to determine their prices individually rather than to negotiate their prices. Journal: IISE Transactions Pages: 704-718 Issue: 6 Volume: 53 Year: 2020 Month: 11 X-DOI: 10.1080/24725854.2020.1830206 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1830206 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2020:i:6:p:704-718 Template-Type: ReDIF-Article 1.0 Author-Name: Yanling Chang Author-X-Name-First: Yanling Author-X-Name-Last: Chang Author-Name: Lu Sun Author-X-Name-First: Lu Author-X-Name-Last: Sun Author-Name: Matthew F. Keblis Author-X-Name-First: Matthew F. Author-X-Name-Last: Keblis Author-Name: Jie Yang Author-X-Name-First: Jie Author-X-Name-Last: Yang Title: Uniform-price auctions in staffing for self-scheduling service Abstract: This research examines a uniform-price auction mechanism in managing staffing for self-scheduling businesses such as task sourcing and work-from-home call centers. We consider two types of service providers: Type-1 agents who require advanced notice before a shift starts and Type-2 agents who are flexible enough to be scheduled on-demand. We develop an integrated framework that can jointly analyze demand forecast, short-term scheduling, and long-term planning of staff capacity. We discuss the adoption of a blended workforce in scheduling and the implications of attrition costs for long-term staffing. In addition, we compare the auction model with a popular fixed-wage model, in order to examine under what conditions the auction model is preferred. These results provide insights to staff managers on the choice of staffing and wage models. Journal: IISE Transactions Pages: 719-734 Issue: 6 Volume: 53 Year: 2020 Month: 11 X-DOI: 10.1080/24725854.2020.1841345 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1841345 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2020:i:6:p:719-734 Template-Type: ReDIF-Article 1.0 Author-Name: Hui Xiao Author-X-Name-First: Hui Author-X-Name-Last: Xiao Author-Name: Loo Hay Lee Author-X-Name-First: Loo Hay Author-X-Name-Last: Lee Author-Name: Douglas Morrice Author-X-Name-First: Douglas Author-X-Name-Last: Morrice Author-Name: Chun-Hung Chen Author-X-Name-First: Chun-Hung Author-X-Name-Last: Chen Author-Name: Xiang Hu Author-X-Name-First: Xiang Author-X-Name-Last: Hu Title: Ranking and selection for terminating simulation under sequential sampling Abstract: This research develops an efficient ranking and selection procedure to select the best design for terminating simulation under sequential sampling. This approach enables us to obtain an accurate estimate of the mean performance at a particular point using regression in the case of a terminating simulation. The sequential sampling constraint is imposed to fully utilize the information along the simulation replication. The asymptotically optimal simulation budget allocation among all designs is derived concurrently with the optimal simulation run length and optimal number of simulation groups for each design. 
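For flavor, here is a sketch of a classic OCBA-style split of a simulation budget, in the spirit of the asymptotically optimal allocation described above; the article's procedure additionally optimizes run lengths and group counts per design, which this toy omits. Larger means are assumed better, with a unique best design.

```python
# Classic OCBA heuristic (Chen et al.): for non-best designs i, N_i is
# proportional to (sigma_i / delta_i)^2, where delta_i is the gap to the best
# mean; the best design b gets N_b = sigma_b * sqrt(sum_i (N_i / sigma_i)^2).
import numpy as np

def ocba_fractions(means, stds):
    means, stds = np.asarray(means, float), np.asarray(stds, float)
    b = int(np.argmax(means))                 # current sample-best design
    delta = means[b] - means                  # optimality gaps (0 for b)
    ratio = np.zeros_like(means)
    others = np.arange(len(means)) != b
    ratio[others] = (stds[others] / delta[others]) ** 2
    ratio[b] = stds[b] * np.sqrt(np.sum((ratio[others] / stds[others]) ** 2))
    return ratio / ratio.sum()                # fractions of the next budget chunk

print(ocba_fractions([1.0, 1.2, 2.0], [0.8, 0.9, 0.8]).round(3))
```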
To implement the simulation budget allocation rule with a fixed finite simulation budget, a heuristic sequential simulation procedure is suggested with the objective of maximizing the probability of correct selection. Numerical experiments confirm the efficiency of the procedure relative to extant approaches. Journal: IISE Transactions Pages: 735-750 Issue: 7 Volume: 53 Year: 2021 Month: 4 X-DOI: 10.1080/24725854.2020.1785647 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1785647 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:7:p:735-750 Template-Type: ReDIF-Article 1.0 Author-Name: Govind Lal Kumawat Author-X-Name-First: Govind Lal Author-X-Name-Last: Kumawat Author-Name: Debjit Roy Author-X-Name-First: Debjit Author-X-Name-Last: Roy Title: AGV or Lift-AGV? Performance trade-offs and design insights for container terminals with robotized transport vehicle technology Abstract: New container terminals are embracing robotized transport vehicles such as lift-automated guided vehicles (LAGVs) and automated guided vehicles (AGVs) to enhance the terminal throughput capacity. Although LAGVs have a high container handling time, they require less coordination with other terminal equipment in comparison with AGVs. In contrast, AGVs are hard-coupled resources, require shorter container handling times, but operate with high coordination delays in comparison with LAGVs. The effect of such operational trade-offs on terminal performance under various design parameter settings, such as the yard block layout and the number of resources, is not well understood and needs to be evaluated at the terminal design phase. To analyze these trade-offs, we develop stylized semi-open queuing network models, which consist of two-phase servers and finite capacity queues. We develop a novel network decomposition method for solving the proposed queuing models. The accuracy of the solution method is validated using detailed simulation models. Using the analytical models, we study the performance trade-offs between the transport vehicle choices: LAGVs and AGVs. Our results show that the throughput capacity of the terminal in the container unloading process increases by up to 16% if LAGVs are chosen as transport vehicles instead of AGVs. However, at certain parameter settings, specifically, when the arrival rate of containers is low, the throughput time performance of the terminal is better (by up to 8%) with AGVs than with LAGVs. We also derive insights on the yard block layout and the technology choice for quay cranes. Journal: IISE Transactions Pages: 751-769 Issue: 7 Volume: 53 Year: 2021 Month: 4 X-DOI: 10.1080/24725854.2020.1785648 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1785648 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:7:p:751-769 Template-Type: ReDIF-Article 1.0 Author-Name: Chao Wang Author-X-Name-First: Chao Author-X-Name-Last: Wang Author-Name: Xiaojin Zhu Author-X-Name-First: Xiaojin Author-X-Name-Last: Zhu Author-Name: Shiyu Zhou Author-X-Name-First: Shiyu Author-X-Name-Last: Zhou Author-Name: Yingqing Zhou Author-X-Name-First: Yingqing Author-X-Name-Last: Zhou Title: Bayesian learning of structures of ordered block graphical models with an application on multistage manufacturing processes Abstract: The Ordered Block Model (OBM) is a special form of directed graphical models and is widely used in various fields. 
In this article, we focus on learning the structure of an OBM based on prior knowledge obtained from historical data. The proposed learning method is applied to a multistage car body assembly process to validate the learning efficiency. In this approach, a Bayesian score is used to learn the graph structure and a novel informative structure prior distribution is constructed to help the learning process. Specifically, the graphical structure is represented by a categorical random variable and its distribution is treated as the informative prior. In this way, the informative prior distribution construction is equivalent to the parameter estimation of the graph random variable distribution using historical data. Since the historical OBMs may not contain the same nodes as those in the new OBM, the sample space of the graphical structure of the historical OBMs and the new OBM may be inconsistent. We deal with this issue by adding pseudo nodes with probability normalization, then removing extra nodes through marginalization to align the sample space between historical OBMs and the new OBM. The performance of the proposed method is illustrated and compared to conventional methods through numerical studies and a real car assembly process. The results show that the proposed informative structure prior can effectively boost the performance of the graph structure learning procedure, especially when the data from the new OBM are limited. Journal: IISE Transactions Pages: 770-786 Issue: 7 Volume: 53 Year: 2021 Month: 4 X-DOI: 10.1080/24725854.2020.1786196 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1786196 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:7:p:770-786 Template-Type: ReDIF-Article 1.0 Author-Name: Mithun Ghosh Author-X-Name-First: Mithun Author-X-Name-Last: Ghosh Author-Name: Yongxiang Li Author-X-Name-First: Yongxiang Author-X-Name-Last: Li Author-Name: Li Zeng Author-X-Name-First: Li Author-X-Name-Last: Zeng Author-Name: Zijun Zhang Author-X-Name-First: Zijun Author-X-Name-Last: Zhang Author-Name: Qiang Zhou Author-X-Name-First: Qiang Author-X-Name-Last: Zhou Title: Modeling multivariate profiles using Gaussian process-controlled B-splines Abstract: Due to the increasing presence of profile data in manufacturing, profile monitoring has become one of the most popular research directions in statistical process control. The core of profile monitoring is how to model the profile data. Most of the current methods deal with univariate profile modeling where only within-profile correlation is considered. In this article, a linear mixed-effects model framework is adopted for dealing with multivariate profiles, having both within- and between-profile correlations. For better flexibility yet reduced computational cost, we propose to construct the random component of the linear mixed-effects model using B-splines, whose control points are governed by a multivariate Gaussian process. Extensive simulations have been conducted to compare the model with classic models. In the case study, the proposed model is applied to the transmittance profiles from low-emittance glasses. Journal: IISE Transactions Pages: 787-798 Issue: 7 Volume: 53 Year: 2021 Month: 4 X-DOI: 10.1080/24725854.2020.1798038 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1798038 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:uiiexx:v:53:y:2021:i:7:p:787-798 Template-Type: ReDIF-Article 1.0 Author-Name: Bing Si Author-X-Name-First: Bing Author-X-Name-Last: Si Author-Name: Todd J. Schwedt Author-X-Name-First: Todd J. Author-X-Name-Last: Schwedt Author-Name: Catherine D. Chong Author-X-Name-First: Catherine D. Author-X-Name-Last: Chong Author-Name: Teresa Wu Author-X-Name-First: Teresa Author-X-Name-Last: Wu Author-Name: Jing Li Author-X-Name-First: Jing Author-X-Name-Last: Li Title: A novel hierarchically-structured factor mixture model for cluster discovery from multi-modality data Abstract: Advances in sensing technology have generated multi-modality datasets with complementary information in various domains. In health care, it is common to acquire images of different types/modalities for the same patient to facilitate clinical decision making. We propose a clustering method called hierarchically-structured Factor Mixture Model (hierFMM) that enables cluster discovery from multi-modality datasets to exploit their joint strength. HierFMM employs a novel double-L21-penalized likelihood formulation to achieve hierarchical selection of modalities and features that are nested within the modalities. This formulation is proven to satisfy a Quadratic Majorization condition that allows for an efficient Group-wise Majorization Descent algorithm to be developed for model estimation. Simulation studies show significantly better performance of hierFMM than competing methods. HierFMM is applied to an application of identifying clusters/subgroups of migraine patients based on brain cortical area, thickness, and volume datasets extracted from Magnetic Resonance Imaging. Two subgroups are found, whose patients significantly differ in clinical characteristics. This finding shows the promise of using multi-modality imaging data to aid patient stratification and develop optimal treatments for different subgroups with migraine. Journal: IISE Transactions Pages: 799-811 Issue: 7 Volume: 53 Year: 2021 Month: 4 X-DOI: 10.1080/24725854.2020.1800149 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1800149 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:7:p:799-811 Template-Type: ReDIF-Article 1.0 Author-Name: Dan Li Author-X-Name-First: Dan Author-X-Name-Last: Li Author-Name: Kamran Paynabar Author-X-Name-First: Kamran Author-X-Name-Last: Paynabar Author-Name: Nagi Gebraeel Author-X-Name-First: Nagi Author-X-Name-Last: Gebraeel Title: A degradation-based detection framework against covert cyberattacks on SCADA systems Abstract: Supervisory Control and Data Acquisition (SCADA) systems are commonly used in critical infrastructures. However, these systems are typically vulnerable to cyberattacks. Among the different types of cyberattacks, the covert attack is one of the hardest to detect – it is undetectable when the system is operating under normal conditions. In this article, we develop a data-driven detection framework that utilizes the degradation process of the system to detect covert attacks. We derive mathematical characteristics of the degradation processes under covert attacks that are used for developing a sequential likelihood ratio test method for attack detection. We verify our methodology through an extensive numerical study and a case study on a rotating machinery setup. Our results show that the methodology helps detect covert attacks within a reasonable delay time and is applicable under real-world settings. 
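To give a feel for sequential likelihood-ratio detection of the kind invoked above, here is a generic Wald SPRT for a Gaussian mean shift; the article's statistic is built from the degradation-process characteristics it derives, not from this toy observation model.

```python
# Wald SPRT: accumulate the log-likelihood ratio of N(mu1, sigma^2) versus
# N(mu0, sigma^2) and stop when it crosses an error-controlled boundary.
import numpy as np

def sprt(stream, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    upper, lower = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    llr = 0.0
    for t, x in enumerate(stream, start=1):
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma**2
        if llr >= upper:
            return "attack", t      # decide the shifted (attacked) model
        if llr <= lower:
            return "normal", t      # decide the in-control model
    return "undecided", len(stream)

rng = np.random.default_rng(0)
print(sprt(rng.normal(0.0, 1, 200), 0.0, 0.8, 1.0))  # ('normal', small t)
print(sprt(rng.normal(0.8, 1, 200), 0.0, 0.8, 1.0))  # ('attack', small t)
```

In a monitoring setting the test is restarted after each "normal" decision, so the detection delay is governed by the post-change behavior of the statistic.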
Journal: IISE Transactions Pages: 812-829 Issue: 7 Volume: 53 Year: 2021 Month: 4 X-DOI: 10.1080/24725854.2020.1802537 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1802537 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:7:p:812-829 Template-Type: ReDIF-Article 1.0 Author-Name: Yao Cheng Author-X-Name-First: Yao Author-X-Name-Last: Cheng Author-Name: Elsayed A. Elsayed Author-X-Name-First: Elsayed A. Author-X-Name-Last: Elsayed Title: Design of optimal sequential hybrid testing plans Abstract: One-shot units are produced in batches and stored in either a dormant or standby mode until retrieved or activated to perform their function when needed. In this article, we propose hybrid reliability testing approaches to utilize the advantages of non-destructive testing and destructive testing for assessment of reliability metrics. Specifically, we design a sequence of optimal hybrid testing plans under flexible scenarios. Extensive simulation models are developed to validate the accuracy and efficiency of the proposed approaches. Journal: IISE Transactions Pages: 830-841 Issue: 7 Volume: 53 Year: 2021 Month: 4 X-DOI: 10.1080/24725854.2020.1805828 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1805828 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:7:p:830-841 Template-Type: ReDIF-Article 1.0 Author-Name: Yunxia Zhu Author-X-Name-First: Yunxia Author-X-Name-Last: Zhu Author-Name: Milind Dawande Author-X-Name-First: Milind Author-X-Name-Last: Dawande Author-Name: Nagesh Gavirneni Author-X-Name-First: Nagesh Author-X-Name-Last: Gavirneni Author-Name: Vaidyanathan Jayaraman Author-X-Name-First: Vaidyanathan Author-X-Name-Last: Jayaraman Title: Industrial symbiosis: Impact of competition on firms’ willingness to implement Abstract: Industrial Symbiosis or By-Product Synergy is defined as a resource-sharing strategy that engages traditionally separate industries in a collective approach that involves a physical exchange of materials, water, energy, and by-products. Inspired by a real-world example of a paper–sugar symbiotic complex, we study the impact of competition on a firm’s willingness to implement an industrial symbiotic system. Sugar and paper firms are symbiotically connected, in the sense that the biomass from the manufacture of one product is used as a raw material for the second product, and vice versa. We characterize the firm’s optimal/equilibrium operational decisions for its two products – both in the presence and absence of a symbiotic system – under monopoly as well as under competition. Our models capture both the supply-side impact (e.g., a fixed production cost and changes in the variable production costs) and the demand-side impact (“green” consumers who value the nature-friendly production process) of implementing industrial symbiosis. Our results indicate that firms are more willing to implement industrial symbiosis when (i) the proportion of green consumers is high; or (ii) consumers’ appreciation for the green variants is high; or (iii) variable production costs after implementation are lower. For a firm, competition from firms that only produce regular products encourages implementation of industrial symbiosis, whereas competition from firms that produce both regular and green products discourages it. 
Journal: IISE Transactions Pages: 897-913 Issue: 8 Volume: 53 Year: 2021 Month: 8 X-DOI: 10.1080/24725854.2020.1781305 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1781305 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:8:p:897-913 Template-Type: ReDIF-Article 1.0 Author-Name: Farjana Nur Author-X-Name-First: Farjana Author-X-Name-Last: Nur Author-Name: Mario Aboytes-Ojeda Author-X-Name-First: Mario Author-X-Name-Last: Aboytes-Ojeda Author-Name: Krystel K. Castillo-Villar Author-X-Name-First: Krystel K. Author-X-Name-Last: Castillo-Villar Author-Name: Mohammad Marufuzzaman Author-X-Name-First: Mohammad Author-X-Name-Last: Marufuzzaman Title: A two-stage stochastic programming model for biofuel supply chain network design with biomass quality implications Abstract: Biofuel, an efficient alternative to fossil fuels, has gained considerable attention as a potential source to satisfy energy demands. Biomass collection and distribution typically incur a significant portion of the biofuel production cost. Thus, it is imperative to design a biofuel supply chain network that not only aims to minimize the delivery cost, but also incorporates biomass quality properties that make this raw material so unique yet challenging. This article proposes a novel two-stage stochastic programming model that captures different time- and weather-dependent biomass quality parameters (e.g., the moisture content, ash content, and dry matter loss) and their impact on the overall supply chain design. To efficiently solve this optimization model, we propose a parallelized hybrid decomposition algorithm that combines the sample average approximation with an enhanced progressive hedging algorithm. The proposed mathematical model and solutions are validated with a real-life case study. The numerical experiments reveal that the biomass quality variability impacts the supply chain design by requiring additional depots, and therefore increases the capital investment. The storage of unprocessed biomass at depots and biorefineries decreased by 88.5% and 97.9%, respectively, and the densified biomass inventory at biorefineries increased 17-fold when baseline quality considerations were taken into account. Journal: IISE Transactions Pages: 845-868 Issue: 8 Volume: 53 Year: 2021 Month: 8 X-DOI: 10.1080/24725854.2020.1751347 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1751347 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:8:p:845-868 Template-Type: ReDIF-Article 1.0 Author-Name: Refael Hassin Author-X-Name-First: Refael Author-X-Name-Last: Hassin Title: Equilibrium and optimal one-shot delegated search Abstract: A target is located at one of m given sites, with known probabilities. Each of a set of searchers selects a site to search, and a fixed prize is shared by those who search the correct location. What (symmetric) search strategies are adopted when the searchers act selfishly to maximize their expected returns, and how can a firm affect this behavior to increase the efficiency of the search? This is a common situation when, for example, a firm faces a time-limited business opportunity. To materialize it, the firm must solve a design problem, which it delegates to a limited number of experts, setting up a contest to motivate them to search for the solution. 
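A numerical sketch (toy, two sites) of the tension this delegated-search record analyzes between selfish and planner-optimal search. The prize-sharing rule, the parameters, and the two-site restriction are illustrative assumptions, not the article's general model:

```python
# n searchers mix over two sites; the target is at site 1 with probability p1.
# Planner: maximize P(at least one searcher picks the right site).
# Equilibrium: each searcher equalizes expected prize shares across sites,
# where a site searched w.p. q yields share E[1/(1+Bin(n-1,q))] = (1-(1-q)^n)/(n*q).
from scipy.optimize import brentq, minimize_scalar

n, p1, p2 = 4, 0.7, 0.3

def success(q):                 # q = probability each searcher picks site 1
    return p1 * (1 - (1 - q) ** n) + p2 * (1 - q ** n)

def share(q):
    return 1.0 if q == 0 else (1 - (1 - q) ** n) / (n * q)

q_eq = brentq(lambda q: p1 * share(q) - p2 * share(1 - q), 1e-9, 1 - 1e-9)
q_opt = minimize_scalar(lambda q: -success(q), bounds=(0, 1), method="bounded").x

print(f"equilibrium q1={q_eq:.3f}, success={success(q_eq):.3f}")   # ~0.80, ~0.88
print(f"optimal     q1={q_opt:.3f}, success={success(q_opt):.3f}") # ~0.57, ~0.94
```

Consistent with the record's conclusion, the toy equilibrium over-searches the high-probability site and leaves success probability on the table.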
The firm is interested in maximizing the probability that the problem is solved by at least one of the searchers. Other applications include mathematical contests, innovation contests, and “guess & win” contests. We investigate the searchers’ incentives and how they conform with the firm’s goal. We analyze the equilibrium selection strategies and the strategies that maximize the probability that the search is successful and the target is discovered by at least one searcher. We show that selfish (equilibrium) search leaves too many sites unsearched while excessively searching the high-probability locations. We analyze the relative loss caused when the equilibrium search strategy is applied rather than the optimal one, and show that even with just two sites, it can be as large as 20%. We present two methods for inducing the optimal strategy in equilibrium: one uses heterogeneous prizes, while the other does not use direct monetary incentives. Awareness of the gap between agents’ incentives and the firm’s goals should direct a principal when deciding whether it is desirable to delegate the search for a design problem, and, if so, how to provide adequate incentives. Journal: IISE Transactions Pages: 928-941 Issue: 8 Volume: 53 Year: 2021 Month: 8 X-DOI: 10.1080/24725854.2019.1663085 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1663085 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:8:p:928-941 Template-Type: ReDIF-Article 1.0 Author-Name: Gökhan Memişoğlu Author-X-Name-First: Gökhan Author-X-Name-Last: Memişoğlu Author-Name: Halit Üster Author-X-Name-First: Halit Author-X-Name-Last: Üster Title: Design of a biofuel supply network under stochastic and price-dependent biomass availability Abstract: This article presents a framework for profit-maximizing strategic bio-energy supply chain design by taking into account variability in biomass availability in response to the price set, as well as uncertainty in biomass yield. We present our model as a two-stage stochastic integer program for a multi-period integrated design of a network in which the here-and-now strategic decisions include biorefinery locations and sizes as well as the base biomass price. To efficiently solve our model, we suggest an L-shaped-based algorithm along with a Sample Average Approximation approach. Finally, we demonstrate our results in a case study in Texas using realistic data. Within our framework, we present the relationship between biomass and biofuel price, as well as the optimal network design for the biofuel producer. Journal: IISE Transactions Pages: 869-882 Issue: 8 Volume: 53 Year: 2021 Month: 8 X-DOI: 10.1080/24725854.2020.1869870 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1869870 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:8:p:869-882 Template-Type: ReDIF-Article 1.0 Author-Name: Amin Khademi Author-X-Name-First: Amin Author-X-Name-Last: Khademi Author-Name: Sandra Eksioglu Author-X-Name-First: Sandra Author-X-Name-Last: Eksioglu Title: Optimal governmental incentives for biomass cofiring to reduce emissions in the short-term Abstract: Several studies have shown that biomass cofiring is a viable short-term option for coal-fired power plants to reduce their emissions if supported by appropriate tax incentives. 
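For background on the "piecewise linear relaxation of bilinear terms" that this cofiring record goes on to describe: the standard McCormick envelope for $w = xy$ with $x \in [x^L, x^U]$ and $y \in [y^L, y^U]$ is

\[
\begin{aligned}
w &\ge x^L y + y^L x - x^L y^L, &\qquad w &\ge x^U y + y^U x - x^U y^U,\\
w &\le x^U y + y^L x - x^U y^L, &\qquad w &\le x^L y + y^U x - x^L y^U,
\end{aligned}
\]

and a piecewise linear relaxation tightens it by partitioning one variable's range into segments and imposing these bounds per segment, with binaries selecting the active segment. (This is the generic construction; the article's exact relaxation may differ.)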
These results suggest a unique opportunity for governments to design monetary incentives such as tax credits that lead to reductions in greenhouse gas emissions in biomass-rich regions. Therefore, a natural question is: What is an optimal tax credit strategy in these regions? To this end, we propose a Stackelberg/Nash game, and solve it algorithmically by reformulating the model as a mixed-integer bilinear program and using a piecewise linear relaxation of bilinear terms. The structure of the optimal solution of special cases is exploited, which helps design efficient heuristics. We develop a case study using real data on power plants and biomass availability in Mississippi and Arkansas. The results compare the optimal tax credit schemes and plants’ cofiring strategies to provide insights on optimal tax credit mechanisms. Results show that a flexible tax credit scheme, which allows a plant-specific tax credit rate, is more efficient than the currently used flat tax credit rate. The proposed approach uses a smaller budget and targets the plants that need funding support to comply with emissions regulations. Journal: IISE Transactions Pages: 883-896 Issue: 8 Volume: 53 Year: 2021 Month: 8 X-DOI: 10.1080/24725854.2020.1718247 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1718247 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:8:p:883-896 Template-Type: ReDIF-Article 1.0 Author-Name: Sandra Eksioglu Author-X-Name-First: Sandra Author-X-Name-Last: Eksioglu Title: Contributions to sustainable bioenergy systems design, planning and operations Journal: IISE Transactions Pages: 843-844 Issue: 8 Volume: 53 Year: 2021 Month: 8 X-DOI: 10.1080/24725854.2021.1895455 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1895455 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:8:p:843-844 Template-Type: ReDIF-Article 1.0 Author-Name: Pelin G. Canbolat Author-X-Name-First: Pelin G. Author-X-Name-Last: Canbolat Title: Risk-sensitive control of branching processes Abstract: This article solves the risk-sensitive control problem for branching processes where the one-period progeny of an individual can take values from a finite set. The decision maker is assumed to maximize the expected risk-averse exponential utility (or to minimize the expected risk-averse exponential disutility) of the rewards earned in an infinite horizon. Individuals are assumed to produce progeny independently, and with the same probability mass function if they take the same action. This article characterizes the expected disutility of stationary policies, identifies necessary and sufficient conditions for the existence of a stationary optimal policy that assigns the same action to all individuals in all periods, and discusses computational methods to obtain such a policy. Supplementary materials are available for this article; see the publisher’s online edition of IISE Transactions for datasets, additional tables, detailed proofs, etc. Journal: IISE Transactions Pages: 914-927 Issue: 8 Volume: 53 Year: 2021 Month: 8 X-DOI: 10.1080/24725854.2019.1655609 File-URL: http://hdl.handle.net/10.1080/24725854.2019.1655609 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:uiiexx:v:53:y:2021:i:8:p:914-927 Template-Type: ReDIF-Article 1.0 Author-Name: Akash Deep Author-X-Name-First: Akash Author-X-Name-Last: Deep Author-Name: Shiyu Zhou Author-X-Name-First: Shiyu Author-X-Name-Last: Zhou Author-Name: Dharmaraj Veeramani Author-X-Name-First: Dharmaraj Author-X-Name-Last: Veeramani Title: Copula-based multi-event modeling and prediction using fleet service records Abstract: Recent advances in information and communication technology are enabling availability of event sequence data from equipment fleets potentially comprising a large number of similar units. The data from a specific unit may be related to multiple types of events, such as occurrence of different types of failures, and are recorded as part of the unit’s service history. In this article, we present a novel method for modeling and prediction of such event sequences using fleet service records. The proposed method uses copula to approximate the joint distribution of time-to-event variables corresponding to each type of event. The marginal distributions of the time-to-event variables that are needed for the copula function are obtained through Cox Proportional Hazard (PH) regression models. Our method is flexible and efficient in modeling the relationships among multiple events, and overcomes limitations of traditional approaches, such as Cox PH. With simulations and a real-world case study, we demonstrate that the proposed method outperforms the base regression model in prediction accuracy of future event occurrences. Journal: IISE Transactions Pages: 1023-1036 Issue: 9 Volume: 53 Year: 2021 Month: 6 X-DOI: 10.1080/24725854.2020.1802792 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1802792 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:9:p:1023-1036 Template-Type: ReDIF-Article 1.0 Author-Name: Girish Jampani Hanumantha Author-X-Name-First: Girish Jampani Author-X-Name-Last: Hanumantha Author-Name: Ronald G. Askin Author-X-Name-First: Ronald G. Author-X-Name-Last: Askin Title: Approximations for dynamic multi-class manufacturing systems with priorities and finite buffers Abstract: Capacity planning models for tactical to operational decisions in manufacturing systems require a performance evaluation component that relates demand processes with production resources and system state. Steady-state queueing models are widely used for such performance evaluations. However, these models typically assume stationary demand processes. With shorter new product development and life cycles, increasing customization, and constantly evolving customer preferences, the assumption of stationary demand processes is not always reasonable. Nonstationary demand processes can capture the dynamic nature of modern manufacturing systems. The ability to analyze dynamic manufacturing systems with multiple products and finite buffers is essential for estimating throughput rates, throughput times, and work-in-process levels as well as evaluating the impact of proposed capacity plans and resource allocations. In this article, we present computationally efficient numerical approximations for the performance evaluation of dynamic multi-product manufacturing systems with priorities and finite buffers. The approximation breaks time into short periods, estimates throughput and arrival rates based on system status and current arrival processes, and then pieces the periods together through flow balance equations. 
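A crude sketch of the "slice time, then chain the slices" idea just described, with a single station standing in for the multi-class system; the per-slice stationary formula, the capacity clipping, and the backlog carry-over are illustrative simplifications, not the authors' equations:

```python
# Each short period uses a stationary M/M/1-style estimate at the current
# offered load, clipped by capacity mu; unserved work becomes next period's
# backlog, which chains the periods together.
def transient_workload(arrivals_per_period, mu, dt=1.0):
    backlog, trace = 0.0, []
    for lam in arrivals_per_period:        # nonstationary arrival rates
        offered = lam + backlog / dt       # backlog re-offered this period
        served = min(offered, mu) * dt     # finite capacity per period
        backlog = max(0.0, offered * dt - served)
        rho = min(offered / mu, 0.95)      # capped so the formula stays finite
        trace.append((served, rho / (1 - rho), backlog))  # M/M/1 mean in system
    return trace

# Demand ramps past capacity mu = 5, then falls back; backlog builds and drains.
for t, row in enumerate(transient_workload([2, 4, 6, 6, 3, 2], mu=5)):
    print(t, "served=%.2f wip=%.2f backlog=%.2f" % row)
```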
The dynamic nature of product demands is modeled through non-homogeneous Poisson processes. The performance of these approximations is presented for practically sized flowshops and jobshops. Journal: IISE Transactions Pages: 974-989 Issue: 9 Volume: 53 Year: 2021 Month: 6 X-DOI: 10.1080/24725854.2020.1811434 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1811434 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:9:p:974-989 Template-Type: ReDIF-Article 1.0 Author-Name: Yi Zhang Author-X-Name-First: Yi Author-X-Name-Last: Zhang Author-Name: Elif Akçalı Author-X-Name-First: Elif Author-X-Name-Last: Akçalı Author-Name: Sıla Çetinkaya Author-X-Name-First: Sıla Author-X-Name-Last: Çetinkaya Title: An analytical investigation of alternative batching policies for remanufacturing under stochastic demands and returns Abstract: This article examines a fundamental lot-sizing problem which arises in the context of a make-to-order remanufacturing environment. The problem setting is characterized by a stochastic used-item return process along with a stochastic remanufactured-item demand process faced by a remanufacturer. We explicitly account for all relevant costs, including the fixed costs (associated with remanufacturing of used items and dispatching of remanufactured-item orders in batches) and inventory-related costs (associated with used-item inventory holding and remanufactured-item order waiting). We propose five batching policies inspired by shipment consolidation practice (three periodic policies and two threshold policies). For the purpose of computing policy parameters, we develop analytical models that are aimed at minimizing the long-run average expected total cost of the remanufacturer. Since the underlying cost expressions are not analytically tractable, we propose easily computable approximations that lead to closed-form expressions for obtaining policy parameters. A careful numerical investigation demonstrates that the resulting policy parameters are highly effective approximations. Then, we extend the policies by considering disposal options when needed. For this extension, an effective parameter-based approximation approach is developed for computational purposes, and additional numerical experiments demonstrate the effectiveness of the proposed approach. Journal: IISE Transactions Pages: 990-1009 Issue: 9 Volume: 53 Year: 2021 Month: 6 X-DOI: 10.1080/24725854.2020.1817632 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1817632 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:9:p:990-1009 Template-Type: ReDIF-Article 1.0 Author-Name: Fan Jiang Author-X-Name-First: Fan Author-X-Name-Last: Jiang Author-Name: Matthias Hwai Yong Tan Author-X-Name-First: Matthias Hwai Yong Author-X-Name-Last: Tan Author-Name: Kwok-Leung Tsui Author-X-Name-First: Kwok-Leung Author-X-Name-Last: Tsui Title: Multiple-target robust design with multiple functional outputs Abstract: Robust Parameter Design (RPD) is a quality improvement method to mitigate the effect of input noise on system output quality via adjustment of control and signal factors. This article considers RPD with multiple functional outputs and multiple target functions based on a time-consuming nonlinear simulator, which is a challenging problem rarely studied in the literature. 
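As background to the separable Gaussian process machinery this RPD record describes next: on data with a Cartesian product structure, a separable covariance $K_1 \otimes K_2$ lets the large linear solve factor through two small eigendecompositions. A minimal numpy sketch with arbitrary stand-in kernels and sizes:

```python
# Solve (K1 (x) K2 + s2*I) alpha = vec(Y) using only the eigendecompositions
# of the small factors, then verify against the dense solve at toy size.
import numpy as np

def se_kernel(x, ls):
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

x1, x2 = np.linspace(0, 1, 30), np.linspace(0, 1, 40)
K1, K2, s2 = se_kernel(x1, 0.2), se_kernel(x2, 0.3), 0.01
Y = np.random.default_rng(1).normal(size=(30, 40))   # gridded responses

L1, Q1 = np.linalg.eigh(K1)                # K1 = Q1 diag(L1) Q1'
L2, Q2 = np.linalg.eigh(K2)                # K2 = Q2 diag(L2) Q2'
Yt = Q1.T @ Y @ Q2                         # rotate into the joint eigenbasis
Yt /= np.outer(L1, L2) + s2                # divide by joint eigenvalues + noise
alpha = Q1 @ Yt @ Q2.T                     # rotate back

dense = np.linalg.solve(np.kron(K1, K2) + s2 * np.eye(30 * 40), Y.ravel())
print(np.allclose(alpha.ravel(), dense))   # True
```

The factored route costs two small eigendecompositions plus matrix products instead of factoring an n1*n2 by n1*n2 matrix, which is what makes separability attractive for large gridded designs.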
The Joseph–Wu formulation of multi-target RPD as an optimization problem is extended to accommodate multiple functional outputs and use of a Gaussian Process (GP) emulator for the outputs. Due to computational demands in emulator fitting and expected loss function estimation posed by this big-data problem, a separable GP model is used. The separable regression and prior covariance functions, and the Cartesian product structure of the data are exploited to derive computationally efficient formulas for the posterior means of expected loss criteria for optimizing signal and control factors, and to develop a fast Monte Carlo procedure for building credible intervals for the criteria. Our approach is applied to an example on RPD of a coronary stent for treating narrowed arteries, which allows the optimal signal and control factor settings to be estimated efficiently. Supplementary materials are available for this article. Go to the publisher’s online edition of IISE Transactions for datasets, additional tables, detailed proofs, etc. Journal: IISE Transactions Pages: 1052-1066 Issue: 9 Volume: 53 Year: 2021 Month: 6 X-DOI: 10.1080/24725854.2020.1823532 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1823532 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:9:p:1052-1066 Template-Type: ReDIF-Article 1.0 Author-Name: Zhengqian Jiang Author-X-Name-First: Zhengqian Author-X-Name-Last: Jiang Author-Name: Hui Wang Author-X-Name-First: Hui Author-X-Name-Last: Wang Author-Name: Yanshuo Sun Author-X-Name-First: Yanshuo Author-X-Name-Last: Sun Title: Improved co-scheduling of multi-layer printing path scanning for collaborative additive manufacturing Abstract: Additive manufacturing processes, especially those based on the fused filament fabrication mechanism, have low productivity. One solution to this problem is to adopt a collaborative additive manufacturing system that employs multiple printers/extruders working simultaneously to improve productivity by reducing the process makespan. However, very limited research is available to address the major challenges in the co-scheduling of printing path scanning for different extruders. Existing studies lack: (i) a consideration of the impact of sub-path partitions and simultaneous printing of multiple layers on the multi-extruder printing makespan; and (ii) efficient algorithms to deal with the multiple decision-making involved. This article develops an improved method by first breaking down printing paths on different printing layers into sub-paths and assigning these generated sub-paths to different extruders. A mathematical model is formulated for the co-scheduling problem, and a hybrid algorithm with sequential solution procedures integrating an evolutionary algorithm and a heuristic is customized to the multiple decision-making involved in the co-scheduling for collaborative printing. The performance was compared with the most recent research, and the results demonstrated further makespan reduction when sub-path partitioning or the simultaneous printing of multiple layers is considered. This article discusses the impacts of process setups on makespan reduction, providing a quantitative tool for guiding process development. Journal: IISE Transactions Pages: 960-973 Issue: 9 Volume: 53 Year: 2021 Month: 6 X-DOI: 10.1080/24725854.2020.1807076 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1807076 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:uiiexx:v:53:y:2021:i:9:p:960-973 Template-Type: ReDIF-Article 1.0 Author-Name: Xiaonan Liu Author-X-Name-First: Xiaonan Author-X-Name-Last: Liu Author-Name: Kewei Chen Author-X-Name-First: Kewei Author-X-Name-Last: Chen Author-Name: David Weidman Author-X-Name-First: David Author-X-Name-Last: Weidman Author-Name: Teresa Wu Author-X-Name-First: Teresa Author-X-Name-Last: Wu Author-Name: Fleming Lure Author-X-Name-First: Fleming Author-X-Name-Last: Lure Author-Name: Jing Li Author-X-Name-First: Jing Author-X-Name-Last: Li Title: A novel transfer learning model for predictive analytics using incomplete multimodality data Abstract: Multimodality datasets are becoming increasingly common in various domains to provide complementary information for predictive analytics. One significant challenge in fusing multimodality data is that the multiple modalities are not universally available for all samples due to cost and accessibility constraints. This situation results in a unique data structure called an Incomplete Multimodality Dataset. We propose a novel Incomplete-Multimodality Transfer Learning (IMTL) model that builds a predictive model for each sub-cohort of samples with the same missing modality pattern, and meanwhile couples the model estimation processes for different sub-cohorts to allow for transfer learning. We develop an Expectation-Maximization (EM) algorithm to estimate the parameters of IMTL and further extend it to a collaborative learning paradigm that is specifically valuable for patient privacy preservation in health care applications. We prove two advantageous properties of IMTL: the ability for out-of-sample prediction and a theoretical guarantee for a larger Fisher information compared with models without transfer learning. IMTL is applied to diagnosis and prognosis of Alzheimer’s disease at an early stage called Mild Cognitive Impairment using incomplete multimodality imaging data. IMTL achieves higher accuracy than competing methods without transfer learning. Journal: IISE Transactions Pages: 1010-1022 Issue: 9 Volume: 53 Year: 2021 Month: 6 X-DOI: 10.1080/24725854.2020.1798569 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1798569 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:9:p:1010-1022 Template-Type: ReDIF-Article 1.0 Author-Name: Feifan Wang Author-X-Name-First: Feifan Author-X-Name-Last: Wang Author-Name: Feng Ju Author-X-Name-First: Feng Author-X-Name-Last: Ju Title: Decomposition-based real-time control of multi-stage transfer lines with residence time constraints Abstract: It is commonly observed in the food industry, battery production, automotive paint shops, and semiconductor manufacturing that an intermediate product’s residence time in the buffer within a production line is controlled by a time window to guarantee product quality. There is typically a minimum time limit reflected by a part’s travel time or process requirement. Meanwhile, these intermediate parts are prevented from staying in the buffer for too long by an upper time limit, exceeding which a part will be scrapped or need additional treatment. To increase production throughput and reduce scrap, one needs to control machines’ working modes according to real-time system information in the stochastic production environment, which is a difficult problem to solve, due to the system’s complexity.
In this article, we propose a novel decomposition-based control approach by decomposing a production system into small-scale subsystems based on domain knowledge and their structural relationship. An iterative aggregation procedure is then used to generate a production control policy with a convergence guarantee. Numerical studies suggest that the decomposition-based control approach outperforms a general-purpose reinforcement learning method by delivering a significant system performance improvement and a substantial reduction in computational overhead. Journal: IISE Transactions Pages: 943-959 Issue: 9 Volume: 53 Year: 2021 Month: 6 X-DOI: 10.1080/24725854.2020.1803513 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1803513 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:9:p:943-959 Template-Type: ReDIF-Article 1.0 Author-Name: Tao Yuan Author-X-Name-First: Tao Author-X-Name-Last: Yuan Author-Name: Tianqiang Yan Author-X-Name-First: Tianqiang Author-X-Name-Last: Yan Author-Name: Suk Joo Bae Author-X-Name-First: Suk Joo Author-X-Name-Last: Bae Title: Superposed Poisson process models with a modified bathtub intensity function for repairable systems Abstract: Bathtub-shaped failure intensity is typical for large-scale repairable systems with a number of different failure modes. Sometimes, repairable systems may exhibit a failure pattern different from the traditional bathtub shape, due to the existence of multiple failure modes. This study proposes two superposed Poisson process models with modified bathtub intensity functions to capture this kind of failure pattern. The new models are constructed by the superposition of the generalized Goel–Okumoto process and power law process (or log-linear process). The proposed models can be applied to masked failure-time data from repairable systems where the modes of collected failure-times are unobserved or unavailable. Bayesian posterior computation algorithms based on the data augmentation method are developed for inference on the parameters of the superposed Poisson process models, or functions thereof. This study also examines selection of the best model among the candidate models in the Bayesian framework and model checking using the residuals. A practical case study with a data set of unscheduled maintenance events for complex artillery systems illustrates potential applications of the proposed models for reliability prediction of repairable systems. Journal: IISE Transactions Pages: 1037-1051 Issue: 9 Volume: 53 Year: 2021 Month: 6 X-DOI: 10.1080/24725854.2020.1820630 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1820630 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:9:p:1037-1051 Template-Type: ReDIF-Article 1.0 Author-Name: Douniel Lamghari-Idrissi Author-X-Name-First: Douniel Author-X-Name-Last: Lamghari-Idrissi Author-Name: Rob Basten Author-X-Name-First: Rob Author-X-Name-Last: Basten Author-Name: Geert-Jan van Houtum Author-X-Name-First: Geert-Jan Author-X-Name-Last: van Houtum Title: Reducing risks in spare parts service contracts with a long downtime constraint Abstract: This article investigates spare parts service contracts for capital goods. We consider a single-item, single-location inventory system that serves one customer with multiple machines. During the contract execution phase, the true demand rate is observed.
It can differ from the estimated demand rate because of two factors: increased demand variation in finite horizon settings and a shift in the mean utilization of the machines by the user during the contract. When the true demand rate is higher than the estimated demand rate, the Original Equipment Manufacturer (OEM) is faced with higher-than-expected costs for the execution of the contract, and the asset user is generally faced with a higher number of extremely long downtime events. Therefore, we introduce the flexible-time contract, which ends after a predetermined number of demands. Using a Markov decision process, we prove that a state-dependent base stock policy is optimal under a flexible-time contract. Using simulation, we compare the flexible-time contract with the standard fixed-time contract. Our results show that the flexible-time contract reduces the costs for the OEM by up to 35% and prevents the agreed-on service level from being missed. We obtain similar results in a multi-item setting. Journal: IISE Transactions Pages: 1067-1080 Issue: 10 Volume: 53 Year: 2021 Month: 10 X-DOI: 10.1080/24725854.2020.1849874 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1849874 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:10:p:1067-1080 Template-Type: ReDIF-Article 1.0 Author-Name: Sait Cakmak Author-X-Name-First: Sait Author-X-Name-Last: Cakmak Author-Name: Di Wu Author-X-Name-First: Di Author-X-Name-Last: Wu Author-Name: Enlu Zhou Author-X-Name-First: Enlu Author-X-Name-Last: Zhou Title: Solving Bayesian risk optimization via nested stochastic gradient estimation Abstract: In this article, we aim to solve Bayesian Risk Optimization (BRO), which is a recently proposed framework that formulates simulation optimization under input uncertainty. In order to efficiently solve the BRO problem, we derive nested stochastic gradient estimators and propose corresponding stochastic approximation algorithms. We show that our gradient estimators are asymptotically unbiased and consistent, and that the algorithms converge asymptotically. We demonstrate the empirical performance of the algorithms on a two-sided market model. Our estimators are of independent interest in extending the literature of stochastic gradient estimation to the case of nested risk measures. Journal: IISE Transactions Pages: 1081-1093 Issue: 10 Volume: 53 Year: 2021 Month: 10 X-DOI: 10.1080/24725854.2020.1869352 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1869352 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:10:p:1081-1093 Template-Type: ReDIF-Article 1.0 Author-Name: Lauren N. Steimle Author-X-Name-First: Lauren N. Author-X-Name-Last: Steimle Author-Name: David L. Kaufman Author-X-Name-First: David L. Author-X-Name-Last: Kaufman Author-Name: Brian T. Denton Author-X-Name-First: Brian T. Author-X-Name-Last: Denton Title: Multi-model Markov decision processes Abstract: Markov decision processes (MDPs) have found success in many application areas that involve sequential decision making under uncertainty, including the evaluation and design of treatment and screening protocols for medical decision making. However, the data used to parameterize the model can influence what policies are recommended, and multiple competing data sources are common in many application areas, including medicine.
In this article, we introduce the Multi-model Markov decision process (MMDP), which generalizes a standard MDP by allowing for multiple models of the rewards and transition probabilities. Solution of the MMDP generates a single policy that maximizes the weighted performance over all models. This approach allows the decision maker to explicitly trade off conflicting sources of data while generating a policy of the same level of complexity as models that consider only a single source of data. We study the structural properties of this problem and show that it is at least NP-hard. We develop exact methods and fast approximation methods supported by error bounds. Finally, we illustrate the effectiveness and the scalability of our approach using a case study in preventative blood pressure and cholesterol management that accounts for conflicting published cardiovascular risk models. Journal: IISE Transactions Pages: 1124-1139 Issue: 10 Volume: 53 Year: 2021 Month: 10 X-DOI: 10.1080/24725854.2021.1895454 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1895454 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:10:p:1124-1139 Template-Type: ReDIF-Article 1.0 Author-Name: Kyuree Ahn Author-X-Name-First: Kyuree Author-X-Name-Last: Ahn Author-Name: Jinkyoo Park Author-X-Name-First: Jinkyoo Author-X-Name-Last: Park Title: Cooperative zone-based rebalancing of idle overhead hoist transportations using multi-agent reinforcement learning with graph representation learning Abstract: Due to the recent advances in manufacturing systems, the semiconductor FABs have become larger, and thus, more overhead hoist transporters (OHTs) need to be operated. In this article, we propose a cooperative zone-based rebalancing algorithm to allocate idle overhead hoist vehicles in a semiconductor FAB. The proposed model is composed of two parts: (i) a state representation learning part that extracts the localized embedding of each agent using a graph neural network; and (ii) a policy learning part that makes a rebalancing action using the constructed embedding. By conducting both representation learning and policy learning in a single framework, the proposed method can train the decentralized policy for agents to rebalance OHTs cooperatively. The experiments show that the proposed method can significantly reduce the average retrieval time while reducing the OHT utilization ratio. In addition, we investigated the transferability of the suggested algorithm by testing the policy on unseen dynamic scenarios without further training. Journal: IISE Transactions Pages: 1140-1156 Issue: 10 Volume: 53 Year: 2021 Month: 10 X-DOI: 10.1080/24725854.2020.1851823 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1851823 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:10:p:1140-1156 Template-Type: ReDIF-Article 1.0 Author-Name: Ran Liu Author-X-Name-First: Ran Author-X-Name-Last: Liu Author-Name: Xiaolan Xie Author-X-Name-First: Xiaolan Author-X-Name-Last: Xie Title: Weekly scheduling of emergency department physicians to cope with time-varying demand Abstract: Overcrowding, long waiting times and delays frequently occur in hospital Emergency Departments (EDs). The main causes are the stochastic and strongly time-varying demands of patient arrivals at an ED and the temporary overloading of EDs.
Motivated by collaboration with large EDs, we investigate the physician scheduling problem in the ED for a weekly planning horizon to address the stochastic and time-varying demands. The patient–physician service system is modeled as a time-varying and temporarily overloaded queueing system without abandonments. We employ a continuous-time Markov chain and the uniformization method for the analytical evaluation of waiting times of patients. Based on an increasing convex order property, patient waiting times are proven to be convex in the system state. Based on this convexity, an approximation technique is established to model the physician scheduling problem as a mixed-integer program to decide the start and end working times of physicians. We also obtain a tight lower bound of the optimal solution to this scheduling problem. A local search-based algorithm is designed to solve this scheduling problem. Our method improves the physician schedule obtained via the approaches from the literature, significantly improves actual hospital scheduling, and simultaneously reduces physician working times and patient waiting times. Journal: IISE Transactions Pages: 1109-1123 Issue: 10 Volume: 53 Year: 2021 Month: 10 X-DOI: 10.1080/24725854.2021.1894656 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1894656 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:10:p:1109-1123 Template-Type: ReDIF-Article 1.0 Author-Name: Chenhao Zhou Author-X-Name-First: Chenhao Author-X-Name-Last: Zhou Author-Name: Ning Ma Author-X-Name-First: Ning Author-X-Name-Last: Ma Author-Name: Xinhu Cao Author-X-Name-First: Xinhu Author-X-Name-Last: Cao Author-Name: Loo Hay Lee Author-X-Name-First: Loo Hay Author-X-Name-Last: Lee Author-Name: Ek Peng Chew Author-X-Name-First: Ek Peng Author-X-Name-Last: Chew Title: Classification and literature review on the integration of simulation and optimization in maritime logistics studies Abstract: The traditional maritime logistics industry is facing an industry transformation created by technology development. Along with industry transformation, the maritime logistics research field is also facing new challenges and opportunities. It is found that using simulation or optimization alone to solve maritime logistics decision problems has some drawbacks. Instead, a trend of integrating the two methods is becoming increasingly popular in the recent literature. However, an in-depth and systematic literature review is absent. Thus, this article reviews 107 papers on the integration of simulation and optimization for maritime logistics studies published in the last two decades. Five modes of integration are identified based on how the two methods interact. For each mode, a detailed literature review on different maritime logistics processes is presented, covering terminal operation, shipping line operation, and hinterland transport operation. Lastly, how the integration of simulation and optimization could contribute to next-generation maritime systems is discussed, with future research directions given. Journal: IISE Transactions Pages: 1157-1176 Issue: 10 Volume: 53 Year: 2021 Month: 10 X-DOI: 10.1080/24725854.2020.1856981 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1856981 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
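The Liu and Xie abstract above evaluates patient waiting times by applying the uniformization method to a continuous-time Markov chain (CTMC). For readers unfamiliar with the device, the sketch below is a minimal, generic Python/NumPy implementation of uniformization for a CTMC's transient distribution; the small birth–death generator used as the example is an illustrative toy, not the article's ED model.

```python
import numpy as np

def transient_dist(Q, p0, t, tol=1e-12):
    """p(t) = p0 @ expm(Q t) for CTMC generator Q, via uniformization:
    with Lam >= max_i |Q[i,i]| and P = I + Q/Lam, p(t) is a
    Poisson(Lam*t)-weighted mixture of the DTMC iterates p0 @ P^k."""
    Lam = max(-Q.diagonal().min(), 1e-12)   # uniformization rate
    P = np.eye(Q.shape[0]) + Q / Lam        # embedded DTMC transition matrix
    term = np.asarray(p0, dtype=float)      # p0 @ P^k for k = 0
    w = np.exp(-Lam * t)                    # Poisson weight for k = 0
    acc, cum, k = w * term, w, 0
    while 1.0 - cum > tol:                  # truncate the Poisson tail
        k += 1
        term = term @ P
        w *= Lam * t / k
        acc += w * term
        cum += w
    return acc

# Toy birth-death queue: states = number of patients 0..3,
# arrival rate 2, service rate 3 (numbers illustrative only).
lam_a, mu = 2.0, 3.0
Q = np.zeros((4, 4))
for i in range(4):
    if i < 3: Q[i, i + 1] = lam_a
    if i > 0: Q[i, i - 1] = mu
    Q[i, i] = -Q[i].sum()
print(transient_dist(Q, [1, 0, 0, 0], t=0.5))
```

For large Lam*t the leading weight exp(-Lam*t) underflows, so production implementations start the Poisson summation from the mode; the simple loop above is enough to show the idea.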
Handle: RePEc:taf:uiiexx:v:53:y:2021:i:10:p:1157-1176 Template-Type: ReDIF-Article 1.0 Author-Name: Ali Yekkehkhany Author-X-Name-First: Ali Author-X-Name-Last: Yekkehkhany Author-Name: Ebrahim Arian Author-X-Name-First: Ebrahim Author-X-Name-Last: Arian Author-Name: Rakesh Nagi Author-X-Name-First: Rakesh Author-X-Name-Last: Nagi Author-Name: Ilan Shomorony Author-X-Name-First: Ilan Author-X-Name-Last: Shomorony Title: A cost–based analysis for risk–averse explore–then–commit finite–time bandits Abstract: In this article, a multi–armed bandit problem is studied in an explore–then–commit setting where the cost of pulling an arm in the experimentation (exploration) phase may not be negligible. Identifying the best arm after a pure experimentation phase to exploit it once or for a given finite number of times is the goal of the problem. Applications of this are prevalent in personalized health-care and financial investments where the frequency of exploitation is limited. In this setting, we observe that pulling the arm with the highest expected reward is not necessarily the most desirable objective for exploitation. Alternatively, we advocate the idea of risk aversion, where the objective is to compete against the arm with the best risk–return trade–off. Additionally, a trade–off between cost and regret should be considered in the case where pulling arms in the exploration phase incurs a cost. In the case that the exploration cost is not considered, we propose a class of hyper–parameter–free risk–averse algorithms, called OTE/FTE–MAB (One/Finite–Time Exploitation Multi–Armed Bandit), whose objectives are to select the arm that is most probable to reward the most in a single or finite–time exploitations. To analyze these algorithms, we define a new notion of finite–time exploitation regret for our setting of interest. We provide an upper bound of order ln(1/εr) for the minimum number of experiments that should be done to guarantee an upper bound εr for regret. As compared with existing risk–averse bandit algorithms, our algorithms do not rely on hyper–parameters, resulting in a more robust behavior in practice. In the case that pulling an arm in the exploration phase has a cost, we propose the c–OTE–MAB algorithm for two–armed bandits that addresses the cost–regret trade–off, corresponding to the exploration–exploitation trade–off, by minimizing a linear combination of cost and regret that is called the cost–regret function, using a hyper–parameter. This algorithm determines an estimation of the optimal number of explorations whose cost–regret value approaches the minimum value of the cost–regret function at the rate 1/ne with an associated confidence level, where ne is the number of explorations of each arm. Journal: IISE Transactions Pages: 1094-1108 Issue: 10 Volume: 53 Year: 2021 Month: 10 X-DOI: 10.1080/24725854.2021.1882014 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1882014 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
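The explore–then–commit abstract above selects, for a single exploitation, "the arm that is most probable to reward the most" rather than the arm with the best mean. Assuming that criterion can be estimated nonparametrically from the exploration samples (the paper's actual OTE/FTE–MAB rules and guarantees are more refined than this), a minimal Monte Carlo sketch looks as follows; all distributions and numbers are hypothetical.

```python
import numpy as np
rng = np.random.default_rng(0)

def one_shot_win_prob(samples, n_mc=200_000):
    """For each arm, estimate the probability that a single fresh reward
    from it would be the largest among all arms, by jointly resampling
    the empirical reward samples gathered during exploration."""
    draws = np.stack([rng.choice(s, size=n_mc) for s in samples])
    wins = np.bincount(np.argmax(draws, axis=0), minlength=len(samples))
    return wins / n_mc

# Two hypothetical arms: B has the higher mean but a very skewed payoff.
arm_a = rng.normal(1.0, 0.05, size=500)              # steady payoffs near 1
arm_b = np.where(rng.random(500) < 0.1, 10.0, 0.5)   # mean ~1.45, rarely pays off
print(one_shot_win_prob([arm_a, arm_b]))             # roughly [0.9, 0.1]
```

Despite arm B's higher sample mean, a one-time exploiter would pick arm A under this criterion, which is the risk-averse distinction the article formalizes.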
Handle: RePEc:taf:uiiexx:v:53:y:2021:i:10:p:1094-1108 Template-Type: ReDIF-Article 1.0 Author-Name: Yue Shi Author-X-Name-First: Yue Author-X-Name-Last: Shi Author-Name: Yisha Xiang Author-X-Name-First: Yisha Author-X-Name-Last: Xiang Author-Name: Ying Liao Author-X-Name-First: Ying Author-X-Name-Last: Liao Author-Name: Zhicheng Zhu Author-X-Name-First: Zhicheng Author-X-Name-Last: Zhu Author-Name: Yili Hong Author-X-Name-First: Yili Author-X-Name-Last: Hong Title: Optimal burn-in policies for multiple dependent degradation processes Abstract: Many complex engineering devices experience multiple dependent degradation processes. For each degradation process, there may exist substantial unit-to-unit heterogeneity. In this article, we describe the dependence structure among multiple dependent degradation processes using copulas and model unit-level heterogeneity as random effects. A two-stage estimation method is developed for statistical inference of multiple dependent degradation processes with random effects. To reduce the heterogeneity, we propose two degradation-based burn-in models, one with a single screening point and the other with multiple screening points. At each screening point, a unit is scrapped if one or more degradation levels pass their respective burn-in thresholds. Efficient algorithms are devised to find optimal burn-in decisions. We illustrate the proposed models using experimental data from light-emitting diode lamps. Impacts of parameter uncertainties on optimal burn-in decisions are investigated. Our results show that ignoring multiple dependent degradation processes can cause inferior system performance, such as increased total costs. Moreover, a higher level of dependence among multiple degradation processes often leads to longer burn-in time and higher burn-in thresholds for the two burn-in models. For the multiple-screening-point model, a higher level of dependence can also result in fewer screening points. Our results also show that burn-in with multiple screening points can lead to potential cost savings. Journal: IISE Transactions Pages: 1281-1293 Issue: 11 Volume: 53 Year: 2021 Month: 11 X-DOI: 10.1080/24725854.2020.1841344 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1841344 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:11:p:1281-1293 Template-Type: ReDIF-Article 1.0 Author-Name: Esma S. Gel Author-X-Name-First: Esma S. Author-X-Name-Last: Gel Author-Name: John W. Fowler Author-X-Name-First: John W. Author-X-Name-Last: Fowler Author-Name: Ketan Khowala Author-X-Name-First: Ketan Author-X-Name-Last: Khowala Title: Queuing approximations for capacity planning under common setup rules Abstract: We consider the problem of estimating the resulting utilization and cycle times in manufacturing settings that are subject to significant capacity losses due to setups when switching between different product or part types. In particular, we develop queuing approximations for a multi-item server with sequence-dependent setups operating under four distinct setup rules that we have determined to be common in such settings: first-in-first-out, setup avoidance, setup minimization and type priority. We first derive expressions for the setup utilization and overall utilization, and use Kingman’s well-known approximation to estimate the average cycle time at the station under each setup rule.
We test the accuracy of the approximations using a simulation experiment, and provide insights on the use of different setup rules under various conditions. Journal: IISE Transactions Pages: 1177-1195 Issue: 11 Volume: 53 Year: 2021 Month: 11 X-DOI: 10.1080/24725854.2020.1815105 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1815105 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:11:p:1177-1195 Template-Type: ReDIF-Article 1.0 Author-Name: Shizhe Peng Author-X-Name-First: Shizhe Author-X-Name-Last: Peng Author-Name: Wei Jiang Author-X-Name-First: Wei Author-X-Name-Last: Jiang Author-Name: Wenhui Zhao Author-X-Name-First: Wenhui Author-X-Name-Last: Zhao Title: A preventive maintenance policy with usage-dependent failure rate thresholds under two-dimensional warranties Abstract: This article considers Preventive Maintenance (PM) under a two-dimensional (2-D) warranty contract with time and usage limits. From a manufacturer’s point of view, we develop a dynamic maintenance model with a random horizon to include the impact of random and dynamic usage rates on PM decisions. The model treats the cumulative amount of usage as a state variable that provides information about the failure rate and the expiration of the 2-D warranty. We characterize the optimal PM policy by a sequence of usage-dependent failure rate thresholds. Each threshold is a function of the cumulative usage. Our failure rate threshold policy chooses one of the following two actions in each period: performing perfect PM or no PM. Specifically, the manufacturer should bring the failure rate back to its original level when it exceeds the threshold in the corresponding period. This policy is also optimal under a constant usage rate. In the numerical experiments, we demonstrate the effectiveness of the proposed policy and conduct a sensitivity analysis to investigate how this policy is affected by the model parameters. Journal: IISE Transactions Pages: 1231-1243 Issue: 11 Volume: 53 Year: 2021 Month: 11 X-DOI: 10.1080/24725854.2020.1825879 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1825879 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:11:p:1231-1243 Template-Type: ReDIF-Article 1.0 Author-Name: Maoqi Liu Author-X-Name-First: Maoqi Author-X-Name-Last: Liu Author-Name: Li Zheng Author-X-Name-First: Li Author-X-Name-Last: Zheng Author-Name: Changchun Liu Author-X-Name-First: Changchun Author-X-Name-Last: Liu Title: Faster or fewer iterations? A strategic perspective of a sequential product development project Abstract: Shortening the lead time for Product Development (PD) provides enterprises with a competitive advantage. Given the iterative nature of PD projects, two aspects are regularly considered to shorten the PD lead time, that is, conducting faster or fewer iterations. However, executing faster iterations usually causes more iterations and vice versa. Therefore, suitable coordination between faster and fewer iterations is necessary to minimize the PD lead time. We investigate this coordination from a strategic perspective, whereby a PD project is considered as a sequence of stages and characterized by the design rates and rework probabilities of those stages. We model the coordination as a decision to choose the appropriate design rates for each stage, wherein the rework probabilities are negatively related to the design rates. 
An absorbing Markov process is applied to calculate the expected lead time of a PD project. Further, we formulate a geometric programming model to determine the design rates of the stages that minimize the expected lead time. Several insights are extracted from the model to provide general guidance on the coordination, including the effects of the project’s acceptance check rate and the stages’ rework risk on the optimal design rates, and the decomposability of the coordination. Inspired by these insights, an efficient heuristic algorithm is designed. The algorithm performs well in numerical experiments, which in turn validates the insights. Additionally, a field case demonstrates the effectiveness of our model. Compared with the current policy, 12.25% of the PD lead time is saved through appropriate coordination between faster and fewer iterations. Journal: IISE Transactions Pages: 1196-1214 Issue: 11 Volume: 53 Year: 2021 Month: 11 X-DOI: 10.1080/24725854.2020.1830207 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1830207 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:11:p:1196-1214 Template-Type: ReDIF-Article 1.0 Author-Name: Sinan Obaidat Author-X-Name-First: Sinan Author-X-Name-Last: Obaidat Author-Name: Haitao Liao Author-X-Name-First: Haitao Author-X-Name-Last: Liao Title: Optimal sampling plan for an unreliable multistage production system subject to competing and propagating random shifts Abstract: Sampling plans play an important role in monitoring production systems and reducing quality- and maintenance-related costs. Existing sampling plans usually focus on one assignable cause. However, multiple assignable causes may occur, especially for a multistage production system, and the resulting process shift may propagate downstream. This article addresses the problem of finding the optimal sampling plan for an unreliable multistage production system subject to competing and propagating random quality shifts. In particular, a serial production system with two unreliable machines that produce a product at a fixed production rate is studied. It is assumed that both machines are subject to random quality shifts with increased nonconforming rates and can suddenly fail with increasing failure rates. A sampling plan is implemented at the end of the production line to determine whether the system has shifted or not. If a process shift is detected, a necessary maintenance action will be initiated. The optimal sample size, sampling interval, and acceptance threshold are determined by minimizing the long-run cost rate subject to the constraints on average time to signal a true alarm, effective production rate, and system availability. A numerical example on an automatic shot blasting and painting system is provided to illustrate the application of the proposed sampling plan and the effects of key parameters and system constraints on the optimal sampling plan. Moreover, the proposed model shows better performance for various cases than an alternative model that ignores shift propagation. Journal: IISE Transactions Pages: 1244-1265 Issue: 11 Volume: 53 Year: 2021 Month: 8 X-DOI: 10.1080/24725854.2020.1825880 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1825880 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
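The faster-or-fewer-iterations abstract above computes the expected lead time of a PD project with an absorbing Markov process. The standard mechanism is the fundamental matrix: if Q is the transient block of the transition matrix, N = (I - Q)^{-1} gives the expected number of visits to each stage before absorption (project completion). A minimal sketch under a toy stage structure; the self-loop rework pattern and all numbers are illustrative assumptions, not the article's model.

```python
import numpy as np

# Toy 3-stage sequential project: from stage i the design passes to
# stage i+1 with probability 1 - r[i], or is reworked at stage i itself
# (a self-loop) with probability r[i]; each attempt at stage i takes
# 1/mu[i] time units.
r  = np.array([0.3, 0.2, 0.4])    # rework probabilities
mu = np.array([1.0, 0.8, 1.2])    # design rates (higher rate = shorter attempt)

n = len(r)
Q = np.zeros((n, n))              # transient block of the transition matrix
for i in range(n):
    Q[i, i] = r[i]                # rework: repeat the stage
    if i + 1 < n:
        Q[i, i + 1] = 1 - r[i]    # pass the check: move to the next stage

# Fundamental matrix: N[i, j] = expected number of attempts at stage j,
# starting from stage i, before the project completes.
N = np.linalg.inv(np.eye(n) - Q)
print(N[0] @ (1.0 / mu))          # expected lead time starting at stage 0
```

In the article's model the rework probabilities vary with the chosen design rates, so the expected lead time becomes a function of the rates; that dependence is what the geometric program optimizes.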
Handle: RePEc:taf:uiiexx:v:53:y:2021:i:11:p:1244-1265 Template-Type: ReDIF-Article 1.0 Author-Name: Chenang Liu Author-X-Name-First: Chenang Author-X-Name-Last: Liu Author-Name: Zhenyu (James) Kong Author-X-Name-First: Zhenyu (James) Author-X-Name-Last: Kong Author-Name: Suresh Babu Author-X-Name-First: Suresh Author-X-Name-Last: Babu Author-Name: Chase Joslin Author-X-Name-First: Chase Author-X-Name-Last: Joslin Author-Name: James Ferguson Author-X-Name-First: James Author-X-Name-Last: Ferguson Title: An integrated manifold learning approach for high-dimensional data feature extractions and its applications to online process monitoring of additive manufacturing Abstract: As an effective dimension reduction and feature extraction technique, manifold learning has been successfully applied to high-dimensional data analysis. With the rapid development of sensor technology, large amounts of high-dimensional data, such as image streams, are readily available. Thus, a promising application of manifold learning is in the field of sensor signal analysis, particularly for applications of online process monitoring and control using high-dimensional data. The objective of this study is to develop a manifold learning-based feature extraction method for process monitoring of Additive Manufacturing (AM) using online sensor data. Due to the non-parametric nature of most existing manifold learning methods, their performance in terms of computational efficiency, as well as noise resistance, has yet to be improved. To address this issue, this study proposes an integrated manifold learning approach termed multi-kernel metric learning embedded isometric feature mapping (MKML-ISOMAP) for dimension reduction and feature extraction of online high-dimensional sensor data such as images. Based on the extracted features with the utilization of supervised classification and regression methods, an online process monitoring methodology for AM is implemented to identify the actual process quality status. In the numerical simulation and real-world case studies, the proposed method demonstrates excellent performance in both prediction accuracy and computational efficiency. Journal: IISE Transactions Pages: 1215-1230 Issue: 11 Volume: 53 Year: 2021 Month: 11 X-DOI: 10.1080/24725854.2020.1849876 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1849876 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:11:p:1215-1230 Template-Type: ReDIF-Article 1.0 Author-Name: Marzieh Hashemi Author-X-Name-First: Marzieh Author-X-Name-Last: Hashemi Author-Name: Majid Asadi Author-X-Name-First: Majid Author-X-Name-Last: Asadi Title: Optimal preventive maintenance of coherent systems: A generalized Pólya process approach Abstract: We propose optimal preventive maintenance strategies for n-component coherent systems. We assume that in the early period of system operation all failed components are repaired, such that a failed component is returned to a working state that is worse than the one prior to failure. To model this repair action, we utilize a counting process on the interval (0,τ], known as the generalized Pólya process (which subsumes the non-homogeneous Poisson process as a special case). Two generalized Pólya process-based repair strategies are proposed.
The criteria to be optimized, in order to obtain the optimal time of preventive maintenance of the system, are a cost function formulated from the repair costs of the components/system and the system availability. To illustrate the theoretical results, two coherent systems are studied for which the optimal preventive maintenance times are explored under different conditions. Journal: IISE Transactions Pages: 1266-1280 Issue: 11 Volume: 53 Year: 2021 Month: 11 X-DOI: 10.1080/24725854.2020.1831712 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1831712 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:11:p:1266-1280 Template-Type: ReDIF-Article 1.0 Author-Name: Esra Koca Author-X-Name-First: Esra Author-X-Name-Last: Koca Author-Name: Nilay Noyan Author-X-Name-First: Nilay Author-X-Name-Last: Noyan Author-Name: Hande Yaman Author-X-Name-First: Hande Author-X-Name-Last: Yaman Title: Two-stage facility location problems with restricted recourse Abstract: We introduce a new class of two-stage stochastic uncapacitated facility location problems under system nervousness considerations. The location and allocation decisions are made under uncertainty, while the allocation decisions may be altered in response to the realizations of the uncertain parameters. A practical concern is that the uncertainty-adaptive second-stage allocation decisions might substantially deviate from the corresponding pre-determined first-stage allocation decisions, resulting in a high level of nervousness in the system. To this end, we develop two-stage stochastic programming models with restricted recourse that hedge against undesirable values of a dispersion measure quantifying such deviations. In particular, we control the robustness between the corresponding first-stage and scenario-dependent recourse decisions by enforcing an upper bound on the Conditional Value-at-Risk (CVaR) measure of the random CVaR-norm associated with the scenario-dependent deviations of the recourse decisions. We devise exact Benders-type decomposition algorithms to solve the problems of interest. To enhance the computational performance, we also develop efficient combinatorial algorithms to construct optimal solutions of the Benders cut generation subproblems, as an alternative to using an off-the-shelf solver. The results of our computational study demonstrate the value of the proposed modeling approaches and the effectiveness of our solution methods. Journal: IISE Transactions Pages: 1369-1381 Issue: 12 Volume: 53 Year: 2021 Month: 12 X-DOI: 10.1080/24725854.2021.1910883 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1910883 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:12:p:1369-1381 Template-Type: ReDIF-Article 1.0 Author-Name: Ankit Bansal Author-X-Name-First: Ankit Author-X-Name-Last: Bansal Author-Name: Bjorn Berg Author-X-Name-First: Bjorn Author-X-Name-Last: Berg Author-Name: Yu-Li Huang Author-X-Name-First: Yu-Li Author-X-Name-Last: Huang Title: A distributionally robust optimization approach for coordinating clinical and surgical appointments Abstract: In this article, we address a two-stage scheduling problem that requires coordination between clinical and surgical appointments for specialized surgeries. First, patients have a clinical appointment with a surgeon to determine whether they are appropriate candidates for the surgical procedure.
Subsequently, if the decision to pursue the surgery is made, the patient undergoes the procedure at a later date. However, the scheduling process aims to book both the clinical and surgical appointments for a patient at the time of the initial appointment request. Two sources of uncertainty make this scheduling process challenging: (i) the patient may or may not need surgery after the clinical appointment and (ii) the surgery duration for each patient and procedure is unknown. We present a Distributionally Robust Optimization (DRO) approach for coordinating clinical and surgical appointments under these uncertainties. A case study of the Transcatheter Aortic Valve Replacement procedure at Mayo Clinic, Rochester, MN is presented. Numerical results include comparisons with the current practice and four heuristic scheduling policies from the literature. Results show that the DRO-based scheduling policies lead to lower total surgeon idle time and overtime per day. The proposed policies also restrict the under- and over-utilization of clinical capacity. Journal: IISE Transactions Pages: 1311-1323 Issue: 12 Volume: 53 Year: 2021 Month: 12 X-DOI: 10.1080/24725854.2021.1906467 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1906467 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:12:p:1311-1323 Template-Type: ReDIF-Article 1.0 Author-Name: Lauren N. Steimle Author-X-Name-First: Lauren N. Author-X-Name-Last: Steimle Author-Name: Vinayak S. Ahluwalia Author-X-Name-First: Vinayak S. Author-X-Name-Last: Ahluwalia Author-Name: Charmee Kamdar Author-X-Name-First: Charmee Author-X-Name-Last: Kamdar Author-Name: Brian T. Denton Author-X-Name-First: Brian T. Author-X-Name-Last: Denton Title: Decomposition methods for solving Markov decision processes with multiple models of the parameters Abstract: We consider the problem of decision-making in Markov decision processes (MDPs) when the reward or transition probability parameters are not known with certainty. We study an approach in which the decision maker considers multiple models of the parameters for an MDP and wishes to find a policy that optimizes an objective function that considers the performance with respect to each model, such as maximizing the expected performance or maximizing worst-case performance. Existing solution methods rely on mixed-integer program (MIP) formulations, but have previously been limited to small instances, due to the computational complexity. In this article, we present branch-and-cut and policy-based branch-and-bound (PB-B&B) solution methods that leverage the decomposable structure of the problem and allow for the solution of MDPs that consider many models of the parameters. Numerical experiments show that a customized implementation of PB-B&B significantly outperforms the MIP-based solution methods and that the variance among model parameters can be an important factor in the value of solving these problems. Journal: IISE Transactions Pages: 1295-1310 Issue: 12 Volume: 53 Year: 2021 Month: 12 X-DOI: 10.1080/24725854.2020.1869351 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1869351 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
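Both Steimle et al. articles above optimize the weighted performance of a single policy across multiple models of an MDP's parameters. As a point of reference for that objective (not for their branch-and-cut or policy-based branch-and-bound methods), the toy sketch below exactly evaluates every deterministic stationary policy of a two-state, two-action MDP under two hypothetical models and returns the policy with the best weighted value; all numbers are made up.

```python
import numpy as np
from itertools import product

gamma = 0.9
# P[m][a] is the transition matrix of action a under model m; R[a][s] rewards.
P = [  # model 0
     [np.array([[0.9, 0.1], [0.4, 0.6]]), np.array([[0.2, 0.8], [0.7, 0.3]])],
       # model 1
     [np.array([[0.5, 0.5], [0.1, 0.9]]), np.array([[0.6, 0.4], [0.3, 0.7]])]]
R = [np.array([1.0, 0.0]), np.array([0.5, 0.8])]
lam = [0.5, 0.5]                      # model weights

def value(policy, m):
    """Exact discounted value of a deterministic stationary policy in model m."""
    n = len(R[0])
    P_pi = np.array([P[m][policy[s]][s] for s in range(n)])  # induced chain
    R_pi = np.array([R[policy[s]][s] for s in range(n)])     # induced rewards
    return np.linalg.solve(np.eye(n) - gamma * P_pi, R_pi)   # (I - gP)^-1 R

# One policy, scored by its lambda-weighted value from state 0 across models.
best = max(product(range(2), repeat=2),
           key=lambda pi: sum(l * value(pi, m)[0] for m, l in enumerate(lam)))
print(best)
```

Brute-force enumeration is only viable for toy instances; making this objective scale is precisely what the articles' exact methods address.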
Handle: RePEc:taf:uiiexx:v:53:y:2021:i:12:p:1295-1310 Template-Type: ReDIF-Article 1.0 Author-Name: Yaarit Miriam Cohen Author-X-Name-First: Yaarit Miriam Author-X-Name-Last: Cohen Author-Name: Pinar Keskinocak Author-X-Name-First: Pinar Author-X-Name-Last: Keskinocak Author-Name: Jordi Pereira Author-X-Name-First: Jordi Author-X-Name-Last: Pereira Title: A note on the flowtime network restoration problem Abstract: The flowtime network restoration problem was introduced by Averbakh and Pereira (2012), who presented a Minimum Spanning Tree heuristic, two local search procedures, and an exact branch-and-bound algorithm. This note corrects the computational results in Averbakh and Pereira (2012). Journal: IISE Transactions Pages: 1351-1352 Issue: 12 Volume: 53 Year: 2021 Month: 12 X-DOI: 10.1080/24725854.2020.1869353 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1869353 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:12:p:1351-1352 Template-Type: ReDIF-Article 1.0 Author-Name: Reem Khir Author-X-Name-First: Reem Author-X-Name-Last: Khir Author-Name: Alan Erera Author-X-Name-First: Alan Author-X-Name-Last: Erera Author-Name: Alejandro Toriello Author-X-Name-First: Alejandro Author-X-Name-Last: Toriello Title: Two-stage sort planning for express parcel delivery Abstract: The design and control of effective sortation systems has become more complex as both the volume of parcels and the number of time-definite service options offered by parcel carriers have grown. In this article, we describe approaches for planning two-stage parcel sort operations that explicitly consider time deadlines and sorting capacities. In two-stage sorting, parcels are sorted into groups by a primary sorter and then parcels from these groups are dispatched to secondary stations for final sort. We define a sort planning optimization problem in this setting using mixed-integer programming, where the primary objective is to minimize operational cost subject to machine capacity and parcel deadline constraints. Since a detailed optimization problem for sort planning based on flows in a time–space network is difficult to solve for realistically sized instances, we develop an alternative formulation that is easier to solve and shares the same feasible region of first-stage sorting decisions with the detailed model; for many practical objective functions, this simpler model can be used to find cost-optimal solutions to the detailed model. We illustrate the proposed modeling approach and its effectiveness using real-world instances obtained from a large parcel express service provider. Journal: IISE Transactions Pages: 1353-1368 Issue: 12 Volume: 53 Year: 2021 Month: 12 X-DOI: 10.1080/24725854.2021.1889078 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1889078 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
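To make the feasibility structure in the Khir et al. abstract above concrete: the second stage assigns the groups produced by the primary sorter to secondary stations, subject to station capacities and parcel deadlines, at minimum cost. The brute-force toy below captures only that structure; the volumes, rates, costs, and the earliest-deadline-first completion check are illustrative assumptions, not the article's MIP formulation or data.

```python
from itertools import product

groups   = {"g1": 120, "g2": 80, "g3": 60}      # parcels per primary-sort group
capacity = {"s1": 150, "s2": 140}               # parcels per secondary station
rate     = {"s1": 2.0, "s2": 1.5}               # parcels sorted per minute
deadline = {"g1": 90, "g2": 60, "g3": 120}      # minutes until dispatch cutoff
cost     = {("g1","s1"): 3, ("g1","s2"): 4, ("g2","s1"): 2,
            ("g2","s2"): 2, ("g3","s1"): 5, ("g3","s2"): 1}

def feasible(assign):
    load = {s: 0 for s in capacity}
    for g, s in assign.items():
        load[s] += groups[g]
    if any(load[s] > capacity[s] for s in capacity):
        return False                     # station capacity violated
    for s in capacity:                   # crude deadline check: each station
        t = 0.0                          # sorts its groups back to back,
        for g in sorted((g for g in assign if assign[g] == s),
                        key=lambda g: deadline[g]):      # earliest deadline first
            t += groups[g] / rate[s]
            if t > deadline[g]:
                return False
    return True

best = min((a for a in (dict(zip(groups, combo))
                        for combo in product(capacity, repeat=len(groups)))
            if feasible(a)),
           key=lambda a: sum(cost[g, s] for g, s in a.items()),
           default=None)
print(best)   # cheapest feasible assignment of groups to stations
```

Enumerating 2^|groups| assignments obviously does not scale, which is why the article works with MIP formulations and a simplified but feasibility-equivalent model instead.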
Handle: RePEc:taf:uiiexx:v:53:y:2021:i:12:p:1353-1368 Template-Type: ReDIF-Article 1.0 Author-Name: Amin Aghalari Author-X-Name-First: Amin Author-X-Name-Last: Aghalari Author-Name: Nazanin Morshedlou Author-X-Name-First: Nazanin Author-X-Name-Last: Morshedlou Author-Name: Mohammad Marufuzzaman Author-X-Name-First: Mohammad Author-X-Name-Last: Marufuzzaman Author-Name: Daniel Carruth Author-X-Name-First: Daniel Author-X-Name-Last: Carruth Title: Inverse reinforcement learning to assess safety of a workplace under an active shooter incident Abstract: Active shooter incidents are posing an increasing threat to public safety. Given that the majority of past incidents took place in built environments (e.g., educational, commercial buildings), there is an urgent need for a method to assess the safety of buildings under an active shooter situation. This study aims to bridge this knowledge gap by developing a learning technique that can be used to model the behavior of the shooter and the trapped civilians under an active shooter incident. By understanding how civilians respond to different simulated environments, one can undertake a number of actions to bolster the safety measures of a given facility. This study provides a customized decision-making tool that adopts a tailored maximum entropy inverse reinforcement learning algorithm and utilizes safety measurement metrics, such as the percentage of civilians who can hide in or exit from the system, to assess a workplace’s safety under an active shooter incident. For instance, our results demonstrate how different building configurations (e.g., location and number of entrances/exits, hiding places) play a significant role in the safety of civilians under an active shooter situation. The results further demonstrate that the shooter’s prior shooting experiences, the type of firearm carried, and the timing of the incident are some of the important factors that may pose serious security concerns to the civilians under an active shooter incident. Journal: IISE Transactions Pages: 1337-1350 Issue: 12 Volume: 53 Year: 2021 Month: 12 X-DOI: 10.1080/24725854.2021.1922785 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1922785 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:12:p:1337-1350 Template-Type: ReDIF-Article 1.0 Author-Name: Satyam Mukherjee Author-X-Name-First: Satyam Author-X-Name-Last: Mukherjee Author-Name: Tarun Jain Author-X-Name-First: Tarun Author-X-Name-Last: Jain Title: Do the mobility patterns for city taxicabs impact road safety? Abstract: Recently, large investments have been made by cities such as Singapore, New York City, and London towards creating smart city initiatives in the areas of traffic safety enhancement and higher mobility. In this article, we investigate the impact of various network topology measures on the number of vehicle crashes in a city mobility network. Extant literature on mobility in city traffic networks has not studied the impact of network structure on road accidents. We fill this important gap by identifying the structural properties of critical zones in the city traffic network, which have a high risk of vehicle crashes. We use econometric methods to analyze a large dataset on city mobility from the New York Taxi and Limousine Commission, and a dataset on motor vehicle collisions from the New York Police Department, and derive various insights on the scope of traffic safety issues in a smart city.
Our dataset has information on about 100,000,000 taxi trips over the year 2018. In this year, around 1,500,000 vehicle crash events were reported in New York City. One would expect that, due to the large number of shortest paths passing through them, the number of accidents would be significantly higher in the high-betweenness-centrality zones of the traffic mobility network. However, our analysis reveals that zones with high betweenness centrality tend to have a lower number of accidents. Furthermore, zones with a high degree centrality in the traffic mobility network are associated with a higher number of vehicle crash incidents. Our study reveals some crucial pointers for smart city policymakers and the operations managers of ride-sharing companies on how information on the mobility patterns of the high accident risk zones can be leveraged to reduce motor vehicle collisions. Journal: IISE Transactions Pages: 1324-1336 Issue: 12 Volume: 53 Year: 2021 Month: 12 X-DOI: 10.1080/24725854.2021.1914879 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1914879 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:53:y:2021:i:12:p:1324-1336 Template-Type: ReDIF-Article 1.0 Author-Name: Di Liu Author-X-Name-First: Di Author-X-Name-Last: Liu Author-Name: Tugce Isik Author-X-Name-First: Tugce Author-X-Name-Last: Isik Author-Name: B. Rae Cho Author-X-Name-First: B. Author-X-Name-Last: Rae Cho Title: Double-tolerance design for manufacturing systems Abstract: Most production environments are stochastic in nature, due to the randomness inherent in the production processes. One important engineering problem commonly faced by practitioners is to determine optimal engineering tolerances to be used in production. This article develops optimization models for determining tolerance sets to maximize the long-run average net profit on a production line with processing and rework stations, as well as instantaneous inspection and scrap operations. We assume that only one server works at the rework station, and the service times at the processing and rework stations are uncertain; thus, a stochastic queueing system is embedded into the manufacturing process. We also consider the trade-off between the overall production cost and the cost associated with a quality loss in the final product. Our work is the first to introduce the concept of double-tolerance sets to the tolerance design optimization literature. By comparing the proposed double-tolerance model with a single-tolerance model, we investigate the impact of different parameter settings and modeling assumptions on the optimal tolerances through numerical examples and a sensitivity analysis. Journal: IISE Transactions Pages: 17-28 Issue: 1 Volume: 54 Year: 2021 Month: 10 X-DOI: 10.1080/24725854.2020.1852632 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1852632 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2021:i:1:p:17-28 Template-Type: ReDIF-Article 1.0 Author-Name: Francesco Zammori Author-X-Name-First: Francesco Author-X-Name-Last: Zammori Author-Name: Mattia Neroni Author-X-Name-First: Mattia Author-X-Name-Last: Neroni Author-Name: Davide Mezzogori Author-X-Name-First: Davide Author-X-Name-Last: Mezzogori Title: Cycle time calculation of shuttle-lift-crane automated storage and retrieval system Abstract: This article deals with cycle time calculation of Automated Storage and Retrieval Systems (AS/RS).
Cycle time has a high impact on the operating performance of an AS/RS, and knowledge of it is essential at both the operational and design levels. The novelty of this work concerns the particular kind of system considered, as the focus is on the Shuttle-Lift-Crane AS/RS. This solution, common in the steel sector, is used to store bundles of long metal bars, which are automatically handled by cranes, lifts, and shuttles. The functioning of these machines, which can operate in parallel and independently, is stochastically modeled, and the probability distribution function of the cycle time is computed, both for single and dual command cycles. The model, assessed via discrete event simulation, ensures a high average accuracy of 96% and 98%, under single and dual command cycles, respectively. Journal: IISE Transactions Pages: 40-59 Issue: 1 Volume: 54 Year: 2021 Month: 10 X-DOI: 10.1080/24725854.2020.1861391 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1861391 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2021:i:1:p:40-59 Template-Type: ReDIF-Article 1.0 Author-Name: Jialei Chen Author-X-Name-First: Jialei Author-X-Name-Last: Chen Author-Name: Zhaonan Liu Author-X-Name-First: Zhaonan Author-X-Name-Last: Liu Author-Name: Kan Wang Author-X-Name-First: Kan Author-X-Name-Last: Wang Author-Name: Chen Jiang Author-X-Name-First: Chen Author-X-Name-Last: Jiang Author-Name: Chuck Zhang Author-X-Name-First: Chuck Author-X-Name-Last: Zhang Author-Name: Ben Wang Author-X-Name-First: Ben Author-X-Name-Last: Wang Title: A calibration-free method for biosensing in cell manufacturing Abstract: Chimeric antigen receptor T-cell therapy has demonstrated innovative therapeutic effectiveness in fighting cancers; however, it is extremely expensive, due to the intrinsic patient-to-patient variability in cell manufacturing. We propose in this work a novel calibration-free statistical framework to effectively deduce critical quality attributes under the patient-to-patient variability. Specifically, we model this variability via a patient-specific calibration parameter, and use readings from multiple biosensors to construct a patient-invariance statistic, thereby alleviating the effect of the calibration parameter. A carefully formulated optimization problem and an algorithmic framework are presented to find the best patient-invariance statistic and the model parameters. Using the patient-invariance statistic, we can deduce the critical quality attribute of interest, free from the calibration parameter. We demonstrate improvements of the proposed calibration-free method in different simulation experiments. In the cell manufacturing case study, our method not only effectively deduces viable cell concentration for monitoring, but also reveals insights for the cell manufacturing process. Journal: IISE Transactions Pages: 29-39 Issue: 1 Volume: 54 Year: 2021 Month: 10 X-DOI: 10.1080/24725854.2020.1856982 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1856982 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2021:i:1:p:29-39 Template-Type: ReDIF-Article 1.0 Author-Name: Yishuang Hu Author-X-Name-First: Yishuang Author-X-Name-Last: Hu Author-Name: Yi Ding Author-X-Name-First: Yi Author-X-Name-Last: Ding Author-Name: Yu Lin Author-X-Name-First: Yu Author-X-Name-Last: Lin Author-Name: Ming J. Zuo Author-X-Name-First: Ming J.
Author-X-Name-Last: Zuo Author-Name: Donglian Qi Author-X-Name-First: Donglian Author-X-Name-Last: Qi Title: Optimal structure screening for large-scale multi-state series-parallel systems based on structure ordinal optimization Abstract: Multi-state series-parallel systems are widely used for representing engineering systems. In real-life cases, engineers need to select an optimal system structure among many different multi-state series-parallel system structures. Screening of system structures is meaningful and critical. Moreover, to design a reliable structure, reliability evaluation is an indispensable part of the process. Due to the large number of available system structures, the computational burden can be huge when selecting the optimal one. Also, the number of components and possible states of each system can be enormous when the system scale is large, which causes significant complexity in exact reliability evaluation. To effectively select the optimal structure among numerous multi-state series-parallel systems under a reliability constraint, this article proposes an optimal structure screening method called structure ordinal optimization. The proposed method combines the fuzzy universal generating function technique with an ordinal optimization algorithm. The fuzzy universal generating function technique is applied to reduce the computational time by approximately evaluating the reliability. Based on the approximate reliabilities, ordinal optimization helps to reduce the number of structure options and thus accelerate the screening process. Numerical examples show that the structure ordinal optimization method has advantages in computational efficiency with satisfactory accuracy. Journal: IISE Transactions Pages: 60-72 Issue: 1 Volume: 54 Year: 2021 Month: 10 X-DOI: 10.1080/24725854.2020.1836434 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1836434 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2021:i:1:p:60-72 Template-Type: ReDIF-Article 1.0 Author-Name: Jiaxiang Cai Author-X-Name-First: Jiaxiang Author-X-Name-Last: Cai Author-Name: Zhi-Sheng Ye Author-X-Name-First: Zhi-Sheng Author-X-Name-Last: Ye Title: Optimal design of accelerated destructive degradation tests with block effects Abstract: Accelerated Destructive Degradation Tests (ADDTs) are effective for reliability assessment of highly reliable products whose key performance characteristic has to be destructively measured. Test units in a reliability experiment typically share the same test environments, and this introduces block effects to the resulting ADDT data. Nevertheless, the block effects are seldom considered in the optimal design of an ADDT plan. Motivated by an application of a seal strength test, this study discusses methods for planning ADDTs with block effects. In particular, two types of block effects are considered, i.e., the rig-layer blocking due to a shared test rig, and the gauge-layer blocking resulting from simultaneous measurements. The ADDT planning specifies the optimal stress levels and allocation of test units to these stress levels to minimize the asymptotic variance of the estimated lifetime quantiles at use conditions. The optimal test plans are investigated analytically and through a comprehensive numerical study. An application to the motivating example reveals the importance of considering the block effects in the test design.
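The structure ordinal optimization method in the Hu et al. abstract above follows the classic ordinal-optimization pattern: rank a large candidate set with a cheap approximate evaluator (there, a fuzzy universal generating function), then spend exact evaluation only on a small elite subset. A minimal generic sketch of that pattern, with a placeholder "exact" oracle and a noisy surrogate standing in for the paper's evaluators:

```python
import random
random.seed(1)

def true_reliability(structure):
    """Stand-in for the exact (expensive) reliability evaluation."""
    return 1.0 - 1.0 / (2 + sum(structure))

def cheap_estimate(structure):
    """Stand-in for the fast approximate evaluator: exact value + noise."""
    return true_reliability(structure) + random.gauss(0, 0.02)

# 1000 hypothetical candidate structures (e.g., redundancy levels per stage).
candidates = [[random.randint(1, 5) for _ in range(4)] for _ in range(1000)]

# Step 1: order every candidate by the cheap estimate only.
ranked = sorted(candidates, key=cheap_estimate, reverse=True)

# Step 2: run the expensive exact evaluation on the top s << 1000 only.
elite = ranked[:20]
best = max(elite, key=true_reliability)
print(best, true_reliability(best))
```

The ordinal-optimization argument is that approximate values are far more reliable for ranking than for estimation, so with high probability the elite set contains a near-best structure while the exact evaluator runs only 20 times instead of 1000.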
Journal: IISE Transactions Pages: 73-90 Issue: 1 Volume: 54 Year: 2021 Month: 10 X-DOI: 10.1080/24725854.2020.1849875 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1849875 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2021:i:1:p:73-90 Template-Type: ReDIF-Article 1.0 Author-Name: Samaneh Ebrahimi Author-X-Name-First: Samaneh Author-X-Name-Last: Ebrahimi Author-Name: Mostafa Reisi-Gahrooei Author-X-Name-First: Mostafa Author-X-Name-Last: Reisi-Gahrooei Author-Name: Kamran Paynabar Author-X-Name-First: Kamran Author-X-Name-Last: Paynabar Author-Name: Shawn Mankad Author-X-Name-First: Shawn Author-X-Name-Last: Mankad Title: Monitoring sparse and attributed networks with online Hurdle models Abstract: In this article, we create a novel monitoring system to detect changes within a sequence of networks. Specifically, we consider sparse, weighted, directed, and attributed networks. Our approach uses the Hurdle model to capture sparsity and explain the weights of the edges as a function of the node and edge attributes. Here, the weight of an edge represents the number of interactions between two nodes. We then integrate the Hurdle model with a state-space model to capture temporal dynamics of the edge formation process. Estimation is performed using an extended Kalman Filter. Statistical process control charts are used to monitor the network sequence in real time in order to identify changes in connectivity patterns that are caused by regime shifts. We show that the proposed methodology outperforms alternative approaches on both synthetic and real data. We also perform a detailed case study on the 2007–2009 financial crisis. Demonstrating the promise of the proposed approach as an early warning system, we show that our method applied to financial interbank lending networks would have raised alarms to the public prior to key events and announcements by the European Central Bank. Journal: IISE Transactions Pages: 91-104 Issue: 1 Volume: 54 Year: 2021 Month: 10 X-DOI: 10.1080/24725854.2020.1861390 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1861390 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2021:i:1:p:91-104 Template-Type: ReDIF-Article 1.0 Author-Name: Lening Wang Author-X-Name-First: Lening Author-X-Name-Last: Wang Author-Name: Xiaoyu Chen Author-X-Name-First: Xiaoyu Author-X-Name-Last: Chen Author-Name: Daniel Henkel Author-X-Name-First: Daniel Author-X-Name-Last: Henkel Author-Name: Ran Jin Author-X-Name-First: Ran Author-X-Name-Last: Jin Title: Family learning: A process modeling method for cyber-additive manufacturing network Abstract: A Cyber-Additive Manufacturing Network (CAMNet) integrates connected additive manufacturing processes with advanced data analytics as computation services to support personalized product realization. However, highly personalized product designs (e.g., geometries) in CAMNet limit the sample size for each design, which may lead to unsatisfactory accuracy for computation services, e.g., a low prediction accuracy for quality modeling. Motivated by the modeling challenge, we propose a data-driven model called family learning to jointly model similar-but-non-identical products as family members by quantifying the shared information among these products in the CAMNet.
Specifically, the amount of shared information for each product is estimated by optimizing a similarity generation model based on design factors, which directly improves the prediction accuracy for the family learning model. The advantages of the proposed method are illustrated by both simulations and a real case study of the selective laser melting process. This family learning method can be broadly applied to data-driven modeling in a network with similar-but-non-identical connected systems. Journal: IISE Transactions Pages: 1-16 Issue: 1 Volume: 54 Year: 2021 Month: 10 X-DOI: 10.1080/24725854.2020.1851824 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1851824 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2021:i:1:p:1-16 Template-Type: ReDIF-Article 1.0 Author-Name: Christopher Green Author-X-Name-First: Christopher Author-X-Name-Last: Green Author-Name: Morvarid Rahmani Author-X-Name-First: Morvarid Author-X-Name-Last: Rahmani Title: The implications of rating systems on workforce performance Abstract: Enhancing workforce performance is the key to success for professional firms. Firms often evaluate workers based on their performance compared with their peers or against an objective standard. Which of these rating systems leads to higher workforce performance? To answer this question, we construct game-theoretic models of two performance rating systems: (i) a Relative rating system where workers compete with each other for a constrained number of high ratings, and (ii) an Absolute rating system where workers are awarded high ratings by performing at or above a standard threshold. We derive the workers’ equilibrium performance as a function of their ability and the characteristics of the rating pool. From a firm’s perspective, we find that an Absolute rating system can lead to higher performance than a Relative rating system when the rating pool size is small or the workers’ cost of effort relative to their efficiency rate is low, and the reverse holds true otherwise. When considering the workers’ perspective, we find that higher ability workers prefer an Absolute system due to its predictable nature, while lower ability workers prefer a Relative system as it provides them an opportunity to outperform other workers. Journal: IISE Transactions Pages: 159-172 Issue: 2 Volume: 54 Year: 2021 Month: 11 X-DOI: 10.1080/24725854.2021.1944704 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1944704 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2021:i:2:p:159-172 Template-Type: ReDIF-Article 1.0 Author-Name: Yuguang Wu Author-X-Name-First: Yuguang Author-X-Name-Last: Wu Author-Name: Minmin Chen Author-X-Name-First: Minmin Author-X-Name-Last: Chen Author-Name: Xin Wang Author-X-Name-First: Xin Author-X-Name-Last: Wang Title: Incentivized self-rebalancing fleet in electric vehicle sharing Abstract: With the rising need for efficient and flexible short-distance urban transportation, more vehicle sharing companies are offering one-way car-sharing services. Electrified vehicle sharing systems are even more effective in terms of reducing fuel consumption and carbon emission. In this article, we investigate a dynamic fleet management problem for an Electric Vehicle (EV) sharing system that faces time-varying random demand and electricity price. Demand is elastic in each time period, reacting to the announced price.
To maximize the revenue, the EV fleet optimizes trip pricing and EV dispatching decisions dynamically. We develop a new value function approximation with input convex neural networks to generate high-quality solutions. Through a New York City case study, we compare it with standard dynamic programming methods and develop insights regarding the interaction between the EV fleet and the power grid. Journal: IISE Transactions Pages: 173-185 Issue: 2 Volume: 54 Year: 2021 Month: 11 X-DOI: 10.1080/24725854.2021.1928340 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1928340 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2021:i:2:p:173-185 Template-Type: ReDIF-Article 1.0 Author-Name: Xu Chen Author-X-Name-First: Xu Author-X-Name-Last: Chen Author-Name: Xiaojun Wang Author-X-Name-First: Xiaojun Author-X-Name-Last: Wang Author-Name: Yusen Xia Author-X-Name-First: Yusen Author-X-Name-Last: Xia Title: Low-carbon technology transfer between rival firms under cap-and-trade policies Abstract: We investigate the effects of low-carbon technology transfer between two rival manufacturers on their economic, environmental, and social welfare performance under a cap-and-trade policy. We model alternative licensing arrangements of technology transfer and evaluate the model performance from the perspectives of different stakeholders, including manufacturers, customers, and policy makers. Our findings show that the contractual choice on low-carbon technology licensing is dependent on the trade-off between the benefits gained from the licensing of technology and the consequential losses incurred from competition with a strengthened competitor, which is influenced by a combination of factors, including internal technological abilities, the interfirm power relationship, external market competition, and the carbon emission control policy. Among them, the interfirm power relationship is most influential in determining the optimal contractual decision. In addition, we extend the analysis of technology licensing strategies to different carbon emissions caps with additional cost incurred from purchasing emission allowances through auction, and a two-period model considering emissions cap reduction, respectively. Finally, our analyses show it is critical for policy makers to develop appropriate emissions control policies to promote the agenda of a sustainable, low-carbon economy. Journal: IISE Transactions Pages: 105-121 Issue: 2 Volume: 54 Year: 2021 Month: 11 X-DOI: 10.1080/24725854.2021.1925786 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1925786 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2021:i:2:p:105-121 Template-Type: ReDIF-Article 1.0 Author-Name: Soeun Park Author-X-Name-First: Soeun Author-X-Name-Last: Park Author-Name: Woonghee Tim Huh Author-X-Name-First: Woonghee Tim Author-X-Name-Last: Huh Author-Name: Byung Cho Kim Author-X-Name-First: Byung Cho Author-X-Name-Last: Kim Title: Optimal inventory management with buy-one-give-one promotion Abstract: Recently, the Buy-One-Give-One (BOGO) model, where the firm donates one unit of its product for every unit purchased, has emerged as a viable option to practice corporate social responsibility. Despite growing public attention to the BOGO model, optimal inventory management and profitability associated with BOGO has not yet been explored adequately in the academic literature. 
Under the BOGO promotion, inventory management naturally becomes a key decision, since the firm has to produce an extra unit for each unit sold. In this article, we examine optimal inventory management of the BOGO model under stochastic demand and compare it to the standard newsvendor model as well as a model with cash donation. Analogous to the standard newsvendor model, we clearly define the BOGO fractile and optimal stocking quantity. We show that, counterintuitively, it is not necessarily optimal to produce more units under BOGO, due to the trade-off between give-away commitment and reduced product margin. Moreover, although the BOGO model invariably yields a lower profit than the classic newsvendor model or cash donation model if demand remains the same, there often exists a certain level of positive demand shift that renders BOGO more profitable, which helps explain the growing presence of BOGO in the marketplace. Journal: IISE Transactions Pages: 198-209 Issue: 2 Volume: 54 Year: 2021 Month: 11 X-DOI: 10.1080/24725854.2021.1938299 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1938299 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2021:i:2:p:198-209 Template-Type: ReDIF-Article 1.0 Author-Name: Xi Jiang Author-X-Name-First: Xi Author-X-Name-Last: Jiang Author-Name: Barry L. Nelson Author-X-Name-First: Barry L. Author-X-Name-Last: Nelson Author-Name: L. Jeff Hong Author-X-Name-First: L. Author-X-Name-Last: Jeff Hong Title: Meaningful sensitivities: A new family of simulation sensitivity measures Abstract: Sensitivity analysis quantifies how a model output responds to variations in its inputs. However, the following sensitivity question has never been rigorously answered: How sensitive is the mean or variance of a stochastic simulation output to the mean or variance of a stochastic input distribution? This question does not have a simple answer because there is often more than one way of changing the mean or variance of an input distribution, which leads to correspondingly different impacts on the simulation outputs. In this article, we propose a new family of output-property-with-respect-to-input-property sensitivity measures for stochastic simulation. We focus on four useful members of this general family: sensitivity of output mean or variance with respect to input-distribution mean or variance. Based on problem-specific characteristics of the simulation, we identify appropriate point and error estimators for these sensitivities that require no additional simulation effort beyond the nominal experiment. Two representative examples are provided to illustrate the family, the estimators, and the interpretation of results. Journal: IISE Transactions Pages: 122-133 Issue: 2 Volume: 54 Year: 2021 Month: 11 X-DOI: 10.1080/24725854.2021.1931571 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1931571 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:54:y:2021:i:2:p:122-133 Template-Type: ReDIF-Article 1.0 Author-Name: Yuguang Wu Author-X-Name-First: Yuguang Author-X-Name-Last: Wu Author-Name: Qiao-Chu He Author-X-Name-First: Qiao-Chu Author-X-Name-Last: He Author-Name: Xin Wang Author-X-Name-First: Xin Author-X-Name-Last: Wang Title: Competitive spatial pricing for urban parking systems: Network structures and asymmetric information Abstract: Inspired by new technologies to monitor parking occupancy and process market signals, we aim to expand the application of demand-responsive pricing in the parking industry. Based on a graphical Hotelling model wherein each garage has information about its incoming parking demand, we consider general competitive spatial pricing in parking systems under an asymmetric information structure. We focus on the impact of urban network structure on the incentive of information sharing. Our analyses suggest that the garages are always better off in a circular-networked city, while they could be worse off in the suburbs of a star-networked city. Nevertheless, the overall revenue for garages is improved and the aggregate congestion is reduced under information sharing. Our results also suggest that information sharing helps garages further exploit the customers, who in turn become worse off. Therefore, policy-makers should carefully evaluate their transportation data policy since impacts on the service-providers and the customers are typically conflicting. Using the SFpark data, we empirically confirm the value of information sharing. In particular, garages with higher price-demand elasticity and lower demand variance tend to enjoy larger benefits via information sharing. These insights support the joint design of parking rate structures and information systems. Journal: IISE Transactions Pages: 186-197 Issue: 2 Volume: 54 Year: 2021 Month: 11 X-DOI: 10.1080/24725854.2021.1937755 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1937755 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2021:i:2:p:186-197 Template-Type: ReDIF-Article 1.0 Author-Name: Rashid Anzoom Author-X-Name-First: Rashid Author-X-Name-Last: Anzoom Author-Name: Rakesh Nagi Author-X-Name-First: Rakesh Author-X-Name-Last: Nagi Author-Name: Chrysafis Vogiatzis Author-X-Name-First: Chrysafis Author-X-Name-Last: Vogiatzis Title: A review of research in illicit supply-chain networks and new directions to thwart them Abstract: Illicit trades have emerged as a significant problem for almost every government across the world. Their gradual expansion and diversification throughout the years suggest the existence of robust yet obscure supply chains as well as the inadequacy of current approaches to understand and disrupt them. In response, researchers have been trying hard to identify strategies that would succeed in controlling the proliferation of these trades. With the same motivation, this article conducts a comprehensive review of prior research in the field of illicit supply-chain networks. The review is primarily focused on the trade of physical products, ignoring virtual products and services. Our discussion includes analyses of their structure and operations, as well as procedures for their detection and disruption, especially from the perspective of operations research, management science, network science, and industrial engineering. We also address persisting challenges in this domain and offer future research directions to pursue.
Journal: IISE Transactions Pages: 134-158 Issue: 2 Volume: 54 Year: 2021 Month: 11 X-DOI: 10.1080/24725854.2021.1939466 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1939466 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2021:i:2:p:134-158 Template-Type: ReDIF-Article 1.0 Author-Name: Moshe Eben-Chaime Author-X-Name-First: Moshe Author-X-Name-Last: Eben-Chaime Title: On the relationships between the design of assembly manufacturing and inspection systems and product quality Abstract: This study is a response to the observation that little attention has been paid to the relationship between production (manufacturing) system design and product quality. In particular, only very limited work exists analyzing assembly system quality when multiple products are produced in the same system. Here, a computational scheme to estimate material, capacity and resource requirements in assembly manufacturing systems with imperfect quality is proposed. Accurate estimation of these requirements is vital to system design. The proposed scheme integrates the design of both production and inspection systems, accounts for inspection errors of both types—missing defective units and false rejection—and enables the comparison of alternative designs. In addition, the structure of the inspection system significantly affects both the volumes of material flows and the structure of the material flow network. These ramifications of imperfect quality have been overlooked thus far, but are also examined and discussed in this study. Journal: IISE Transactions Pages: 227-237 Issue: 3 Volume: 54 Year: 2022 Month: 3 X-DOI: 10.1080/24725854.2021.1905196 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1905196 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:3:p:227-237 Template-Type: ReDIF-Article 1.0 Author-Name: Akash Deep Author-X-Name-First: Akash Author-X-Name-Last: Deep Author-Name: Shiyu Zhou Author-X-Name-First: Shiyu Author-X-Name-Last: Zhou Author-Name: Dharmaraj Veeramani Author-X-Name-First: Dharmaraj Author-X-Name-Last: Veeramani Title: A data-driven recurrent event model for system degradation with imperfect maintenance actions Abstract: Although a large number of degradation models for industrial systems have been proposed by researchers over the past few decades, the modeling of impacts of maintenance actions has been mostly limited to single-component systems. Among multi-component models, past work either ignores the general impact of maintenance, or is limited to studying failure interactions. In this article, we propose a multivariate imperfect maintenance model that models impacts of maintenance actions across sub-systems while considering continual operation of the unit. Another feature of the proposed model is that the maintenance actions can have any degree of impact on the sub-systems. In other words, we propose a multivariate recurrent event model with stochastic dependence, and for this model we present a two-stage approach which makes estimation scalable, thus practical for large-scale industrial applications. We also derive expressions for the Fisher information so as to conduct asymptotic statistical tests for the maintenance impact parameters. We demonstrate the scalability through numerical studies, and derive insights by applying the model on real-world maintenance records obtained from oil rigs. 
In the online supplemental material, we provide the following: (i) sketch of proof for likelihood, (ii) convergence analysis, (iii) contamination analysis, and (iv) a set of R codes to implement the current method. Journal: IISE Transactions Pages: 271-285 Issue: 3 Volume: 54 Year: 2022 Month: 3 X-DOI: 10.1080/24725854.2021.1871687 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1871687 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:3:p:271-285 Template-Type: ReDIF-Article 1.0 Author-Name: Kai Wang Author-X-Name-First: Kai Author-X-Name-Last: Wang Author-Name: Jian Li Author-X-Name-First: Jian Author-X-Name-Last: Li Author-Name: Fugee Tsung Author-X-Name-First: Fugee Author-X-Name-Last: Tsung Title: Distribution inference from early-stage stationary data streams by transfer learning Abstract: Data streams are prevalent in current manufacturing and service systems where real-time data arrive progressively. A quick distribution inference from such data streams at their early stages is extremely useful for prompt decision making in many industrial applications. For example, a quality monitoring scheme can be quickly started if the process data distribution is available and the optimal inventory level can be determined early once the customer demand distribution is estimated. To this end, this article proposes a novel online recursive distribution inference method for stationary data streams that can respond as soon as the streaming data are generated and update as regularly as the data accumulate. A major challenge is that the data size might be too small to produce an accurate estimation at the early stage of data streams. To solve this, we resort to an instance-based transfer learning approach which integrates a sufficient amount of auxiliary data from similar processes or products to aid the distribution inference in our target task. Particularly, the auxiliary data are reweighted automatically by a density ratio fitting model with a prior-belief-guided regularization term to alleviate data scarcity. Our proposed distribution inference method also possesses an efficient online algorithm with recursive formulas to update upon every incoming data point. Extensive numerical simulations and real case studies verify the advantages of the proposed method. Journal: IISE Transactions Pages: 303-320 Issue: 3 Volume: 54 Year: 2022 Month: 3 X-DOI: 10.1080/24725854.2021.1875520 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1875520 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:3:p:303-320 Template-Type: ReDIF-Article 1.0 Author-Name: Lochana K. Palayangoda Author-X-Name-First: Lochana K. Author-X-Name-Last: Palayangoda Author-Name: Ronald W. Butler Author-X-Name-First: Ronald W. Author-X-Name-Last: Butler Author-Name: Hon Keung Tony Ng Author-X-Name-First: Hon Keung Tony Author-X-Name-Last: Ng Author-Name: Fangfang Yang Author-X-Name-First: Fangfang Author-X-Name-Last: Yang Author-Name: Kwok Leung Tsui Author-X-Name-First: Kwok Leung Author-X-Name-Last: Tsui Title: Evaluation of mean-time-to-failure based on nonlinear degradation data with applications Abstract: In reliability engineering, obtaining lifetime information for highly reliable products is a challenging problem. 
When a product has a quality characteristic whose degradation over time can be related to lifetime, the degradation data can be used to estimate the first-passage (failure) time distribution and the Mean-Time-To-Failure (MTTF) for a given threshold level. To model the degradation data, the commonly used Lévy process modeling approach assumes that the degradation measurements are linearly related to time throughout the lifetime of the product. However, the degradation data may not be linearly related to time in practice. For this reason, trend-renewal-process-type models can be considered for degradation modeling in which a proper trend function is used to transform the degradation data so that the Lévy process approach can be applied. In this article, we study several parametric and semiparametric models and approaches to estimate the first-passage time distribution and MTTF for degradation data that may not be linearly related to time. A Monte Carlo simulation study is used to demonstrate the performance of the proposed methods. In addition, a model selection procedure is proposed to select among different models. Two numerical examples of lithium-ion battery degradation data are used to illustrate the proposed methodologies. Journal: IISE Transactions Pages: 286-302 Issue: 3 Volume: 54 Year: 2022 Month: 3 X-DOI: 10.1080/24725854.2021.1874080 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1874080 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:3:p:286-302 Template-Type: ReDIF-Article 1.0 Author-Name: Ya-jun Zhang Author-X-Name-First: Ya-jun Author-X-Name-Last: Zhang Author-Name: Ningjian Huang Author-X-Name-First: Ningjian Author-X-Name-Last: Huang Author-Name: Robert G. Radwin Author-X-Name-First: Robert G. Author-X-Name-Last: Radwin Author-Name: Zheng Wang Author-X-Name-First: Zheng Author-X-Name-Last: Wang Author-Name: Jingshan Li Author-X-Name-First: Jingshan Author-X-Name-Last: Li Title: Flow time in a human-robot collaborative assembly process: Performance evaluation, system properties, and a case study Abstract: In this article, an analytical method is introduced to evaluate the flow time of an assembly process with collaborative robots. In such a process, an operator and a collaborative robot can independently carry out preparation tasks first, then they work jointly to finish the assembly operations. To study the productivity performance of such systems, a stochastic process model is developed, where the joint work is modeled as an assembly merge process, and the task times of all operation steps are described by phase-type distributions. Closed-form solutions of system performance, such as flow time expectation and variability, as well as service rate, are derived analytically. The system properties of monotonicity, work allocation, and bottleneck identification are investigated. In addition, a case study is introduced to evaluate the performance of a front panel assembly process in automotive manufacturing. Journal: IISE Transactions Pages: 238-250 Issue: 3 Volume: 54 Year: 2022 Month: 3 X-DOI: 10.1080/24725854.2021.1907489 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1907489 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:54:y:2022:i:3:p:238-250 Template-Type: ReDIF-Article 1.0 Author-Name: Yinan Wang Author-X-Name-First: Yinan Author-X-Name-Last: Wang Author-Name: Kaiwen Wang Author-X-Name-First: Kaiwen Author-X-Name-Last: Wang Author-Name: Wenjun Cai Author-X-Name-First: Wenjun Author-X-Name-Last: Cai Author-Name: Xiaowei Yue Author-X-Name-First: Xiaowei Author-X-Name-Last: Yue Title: NP-ODE: Neural process aided ordinary differential equations for uncertainty quantification of finite element analysis Abstract: Finite Element Analysis (FEA) has been widely used to generate simulations of complex nonlinear systems. Despite its strength and accuracy, FEA usually has two limitations: (i) running high-fidelity FEA often requires high computational cost and consumes a large amount of time; (ii) FEA is a deterministic method that is insufficient for uncertainty quantification when modeling complex systems with various types of uncertainties. In this article, a physics-informed data-driven surrogate model, named Neural Process Aided Ordinary Differential Equation (NP-ODE), is proposed to model the FEA simulations and capture both input and output uncertainties. To validate the advantages of the proposed NP-ODE, we conduct experiments on both the simulation data generated from a given ordinary differential equation and the data collected from a real FEA platform for tribocorrosion. The results show that the proposed NP-ODE outperforms benchmark methods. The NP-ODE method realizes the smallest predictive error as well as generating the most reasonable confidence intervals with the best coverage on testing data points. Appendices, code, and data are available in the supplementary files. Journal: IISE Transactions Pages: 211-226 Issue: 3 Volume: 54 Year: 2022 Month: 3 X-DOI: 10.1080/24725854.2021.1891485 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1891485 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:3:p:211-226 Template-Type: ReDIF-Article 1.0 Author-Name: Xiujie Zhao Author-X-Name-First: Xiujie Author-X-Name-Last: Zhao Author-Name: Zhenglin Liang Author-X-Name-First: Zhenglin Author-X-Name-Last: Liang Author-Name: Ajith K. Parlikad Author-X-Name-First: Ajith K. Author-X-Name-Last: Parlikad Author-Name: Min Xie Author-X-Name-First: Min Author-X-Name-Last: Xie Title: Performance-oriented risk evaluation and maintenance for multi-asset systems: A Bayesian perspective Abstract: In this article, we present a risk evaluation and maintenance strategy optimization approach for systems with parallel identical assets subject to continuous deterioration. System performance is defined by the number of functional assets, and the penalty cost is measured by the loss of performance. To overcome the practical challenges of information sparsity, we employ a Bayesian framework to dynamically update unknown parameters in a Wiener degradation model. Order statistics are utilized to describe the failure times of assets and the stepwise incurred performance penalty cost. Furthermore, based on the Bayesian parameter inferences, we propose a short-term value-based replacement policy to minimize the expected cost rate in the current planning horizon. The proposed strategy simultaneously considers the variability of parameter estimators and the inherent uncertainty of the stochastic degradation processes. 
A simulation study and a realistic example from the petrochemical industry are presented to demonstrate the proposed framework. Journal: IISE Transactions Pages: 251-270 Issue: 3 Volume: 54 Year: 2022 Month: 3 X-DOI: 10.1080/24725854.2020.1869871 File-URL: http://hdl.handle.net/10.1080/24725854.2020.1869871 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:3:p:251-270 Template-Type: ReDIF-Article 1.0 Author-Name: H. Neil Geismar Author-X-Name-First: H. Neil Author-X-Name-Last: Geismar Author-Name: Bruce A. McCarl Author-X-Name-First: Bruce A. Author-X-Name-Last: McCarl Author-Name: Stephen W. Searcy Author-X-Name-First: Stephen W. Author-X-Name-Last: Searcy Title: Optimal design and operation of a second-generation biofuels supply chain Abstract: This article investigates how climate influences the value of adding preprocessing depots to a second-generation biorefinery’s supply chain. This is vital because humidity determines the amount of dry matter loss—exponential decay of energy content—suffered by biomass stored without preprocessing. The large volume of biomass required to fuel a biorefinery poses challenges in storage and in transportation. Further complications arise because the biomass is produced seasonally by hundreds of growers. Thus, recent failures of biorefineries may have been avoided, and future success may be achieved, by adding an intermediate layer to a biorefinery’s supply chain. A rigorous climate-based analysis of cost functions for each potential grower/depot pair leads to a stochastic program that optimizes the locations of depots, the assignment of growers to depots, and the volume of biomass stored fieldside vs. the volume stored as pelleted feedstock at depots. A computational study reveals that the humidity of the climate has much greater influence on the value of adding pre-processing depots to a second-generation biofuel supply chain than does transportation consolidation or any other parameter, endogenous or exogenous. Under favorable circumstances, our process reduces a biorefinery’s costs by over 30%, on average. Journal: IISE Transactions Pages: 390-404 Issue: 4 Volume: 54 Year: 2022 Month: 4 X-DOI: 10.1080/24725854.2021.1956022 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1956022 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:4:p:390-404 Template-Type: ReDIF-Article 1.0 Author-Name: Changwen Li Author-X-Name-First: Changwen Author-X-Name-Last: Li Author-Name: Yong-Wu Zhou Author-X-Name-First: Yong-Wu Author-X-Name-Last: Zhou Author-Name: Bin Cao Author-X-Name-First: Bin Author-X-Name-Last: Cao Author-Name: Yuanguang Zhong Author-X-Name-First: Yuanguang Author-X-Name-Last: Zhong Title: Equilibrium analysis and coalition stability in R&D cooperation with spillovers Abstract: This article analyzes cost-reducing R&D cooperation by n horizontal firms, and studies two common cooperation modes with knowledge spillovers: R&D cartels (CT) and research joint ventures (RV). These firms are allowed to freely form coalitions (or alliances) among themselves to better coordinate their R&D efforts, and then compete in the production stage. We model the endogenous alliance/coalition formation between n firms given coalition structures as a two-stage game, in which the firms in the first stage choose to cooperate in R&D, and in the second stage all firms play a Cournot game in production quantity. 
Our results show that the stability of the coalition structure is closely affected by the spillover rate and degree of R&D difficulty, the number of participating firms, the cooperation mode, and whether the firms take a myopic or a farsighted view. We find that, under the CT mode, the grand coalition is not always myopically stable but is farsightedly stable in a three-firm or four-firm market, while under the RV mode the grand coalition is never myopically stable but is farsightedly stable only in a four-firm market. Moreover, we find that the conditions on the spillover rate and the degree of R&D difficulty under which no myopically stable coalition structures exist differ between the RV and CT modes. Journal: IISE Transactions Pages: 348-362 Issue: 4 Volume: 54 Year: 2022 Month: 4 X-DOI: 10.1080/24725854.2021.1947545 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1947545 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:4:p:348-362 Template-Type: ReDIF-Article 1.0 Author-Name: Reut Noham Author-X-Name-First: Reut Author-X-Name-Last: Noham Author-Name: Michal Tzur Author-X-Name-First: Michal Author-X-Name-Last: Tzur Author-Name: Dan Yamin Author-X-Name-First: Dan Author-X-Name-Last: Yamin Title: An indirect prioritization approach to optimizing sample referral networks for HIV early infant diagnosis Abstract: Early diagnosis and treatment of newborns with Human Immunodeficiency Virus (HIV) can substantially reduce mortality rates. Polymerase chain reaction technology is desirable for diagnosing HIV-exposed infants and for monitoring the disease progression in older patients. In low- and middle-income countries (LMICs), processing both types of tests requires the use of scarce resources. In this article, we present a supply chain network model for referring/assigning HIV test samples from clinics to labs. These assignments aim to minimize the expected infant mortality from AIDS due to delays in the return of test results. Using queuing theory, we present an analytical framework to evaluate the distribution of the sample waiting times at the testing labs and incorporate it into a mathematical model. The suggested framework takes into consideration the non-stationarity in the availability of reagents and technical staff. Hence, our model provides a method to find an assignment strategy that involves an indirect prioritization of samples that are more likely than others to be positive. We also develop a heuristic to simplify the implementation of an assignment strategy and provide general managerial insights for operating sample referral networks in LMICs with limited resources. Using a case study from Tanzania, we show that the potential improvement is substantial, especially when some labs are utilized almost to their full capacity. Our results apply to other settings in which expensive equipment with volatile availability is used to perform crucial operations, for example, the recent COVID-19 pandemic. Journal: IISE Transactions Pages: 405-420 Issue: 4 Volume: 54 Year: 2022 Month: 4 X-DOI: 10.1080/24725854.2021.1970294 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1970294 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:54:y:2022:i:4:p:405-420 Template-Type: ReDIF-Article 1.0 Author-Name: Jiankui Yang Author-X-Name-First: Jiankui Author-X-Name-Last: Yang Author-Name: Junfei Huang Author-X-Name-First: Junfei Author-X-Name-Last: Huang Author-Name: Yunan Liu Author-X-Name-First: Yunan Author-X-Name-Last: Liu Title: Mind your own customers and ignore the others: Asymptotic optimality of a local policy in multi-class queueing systems with customer feedback Abstract: This work contributes to the investigation of optimal routing and scheduling policies in multi-class multi-server queueing systems with customer feedback. We propose a new policy, dubbed the local policy, which requires access to only local queue information. Our new local policy specifies how an idle server chooses the next customer by using the queue length information of only those queues this server is eligible to serve, rather than of all queues. To gain useful insights and mathematical tractability, we consider a simple W model with customer feedback, and we establish limit theorems to show that our local policy is asymptotically optimal among all policies that may use the global system information, with the objective of minimizing the cumulative queueing costs measured by convex functions of the queue lengths. Numerical experiments provide convincing engineering confirmations of the effectiveness of our local policy for both the W model and a more general non-W model. Journal: IISE Transactions Pages: 363-375 Issue: 4 Volume: 54 Year: 2022 Month: 4 X-DOI: 10.1080/24725854.2021.1952358 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1952358 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:4:p:363-375 Template-Type: ReDIF-Article 1.0 Author-Name: Yanling Chang Author-X-Name-First: Yanling Author-X-Name-Last: Chang Author-Name: Chelsea C. White Author-X-Name-First: Chelsea C. Author-X-Name-Last: White Title: Worst-case analysis for a leader–follower partially observable stochastic game Abstract: Although Partially Observable Stochastic Games (POSGs) provide a powerful mathematical paradigm for modeling multi-agent dynamic decision making under uncertainty and partial information, they are notoriously hard to solve (e.g., the common-payoff POSGs are NEXP-complete) and have an extensive data requirement on each agent. The latter may represent a serious challenge to a defending agent who has limited knowledge of the adversary. A worst-case analysis can significantly reduce both model computational complexity and data requirements regarding the adversary; further, a (near) optimal worst-case policy may represent a useful guide for action selection for risk-averse defenders (e.g., benchmarks). This article introduces a worst-case analysis to a leader–follower POSG where: (i) the defending leader has little knowledge of the adversarial follower’s reward structure, level of rationality, and process for gathering and transmitting data relevant for decision making; (ii) the objective is to determine a best worst-case value function and a control strategy for the leader. We show that the worst-case assumption transforms this POSG into a more computationally tractable single-agent problem with a simple sufficient statistic. However, the value function can be non-convex, in contrast with the value function of a partially observable Markov decision process.
We design an iterative solution procedure for computing a lower bound of the leader’s value function and its control policy for the finite horizon case. This approach was numerically illustrated to support decision making in a security example. Journal: IISE Transactions Pages: 376-389 Issue: 4 Volume: 54 Year: 2022 Month: 4 X-DOI: 10.1080/24725854.2021.1955167 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1955167 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:4:p:376-389 Template-Type: ReDIF-Article 1.0 Author-Name: Maryam Khatami Author-X-Name-First: Maryam Author-X-Name-Last: Khatami Author-Name: Michelle Alvarado Author-X-Name-First: Michelle Author-X-Name-Last: Alvarado Author-Name: Nan Kong Author-X-Name-First: Nan Author-X-Name-Last: Kong Author-Name: Pratik J. Parikh Author-X-Name-First: Pratik J. Author-X-Name-Last: Parikh Author-Name: Mark A. Lawley Author-X-Name-First: Mark A. Author-X-Name-Last: Lawley Title: Inpatient discharge planning under uncertainty Abstract: Delay in inpatient discharge processes reduces patient satisfaction and increases hospital congestion and length of stay. Further, flow congestion manifests as patient boarding, where new patients awaiting admission are blocked by bed unavailability. Finally, length of stay is extended if the discharge delay incurs an extra overnight stay. These factors are often in conflict, thus, good hospital performance can only be achieved through careful balancing. We formulate the discharge planning problem as a two-stage stochastic program with uncertain discharge processing and bed request times. The model minimizes a combination of discharge lateness, patient boarding, and deviation from preferred discharge times. Patient boarding is integrated by aligning bed requests with bed releases. The model is solved for different instances generated using data from a large hospital in Texas. Stochastic decomposition is compared with the extensive form and the L-shaped algorithm. A shortest expected processing time heuristic is also investigated. Computational experiments indicate that stochastic decomposition outperforms the L-shaped algorithm and the heuristic, with a significantly shorter computational time and small deviation from optimal. The L-shaped method solves only small problems within the allotted time budget. Simulation experiments demonstrate that our model improves discharge lateness and patient boarding compared to current practice. Journal: IISE Transactions Pages: 332-347 Issue: 4 Volume: 54 Year: 2022 Month: 4 X-DOI: 10.1080/24725854.2021.1943764 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1943764 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:4:p:332-347 Template-Type: ReDIF-Article 1.0 Author-Name: Lei Lei Author-X-Name-First: Lei Author-X-Name-Last: Lei Author-Name: Jian-Qiang Hu Author-X-Name-First: Jian-Qiang Author-X-Name-Last: Hu Author-Name: Chenbo Zhu Author-X-Name-First: Chenbo Author-X-Name-Last: Zhu Title: Discrete-event stochastic systems with copula correlated input processes Abstract: In this article, we develop a new method based on copulas to model correlated inputs in discrete-event stochastic systems. We first define a type of correlated stochastic process, called Copula Correlated Processes (CCPs), which we then use to model correlated inputs for discrete-event stochastic systems. 
In general, it is very difficult to analyze discrete-event stochastic systems with correlated inputs. However, we show that discrete-event stochastic systems with CCPs can be discretized and approximated by discrete-event stochastic systems with discrete copula correlated processes, which are equivalent to discrete-event stochastic systems driven by Markov-modulated processes and are much easier to analyze. An illustrative queueing example is provided to demonstrate how our method works. Journal: IISE Transactions Pages: 321-331 Issue: 4 Volume: 54 Year: 2022 Month: 4 X-DOI: 10.1080/24725854.2021.1943571 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1943571 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:4:p:321-331 Template-Type: ReDIF-Article 1.0 Author-Name: Mengyi Zhang Author-X-Name-First: Mengyi Author-X-Name-Last: Zhang Author-Name: Erica Pastore Author-X-Name-First: Erica Author-X-Name-Last: Pastore Author-Name: Arianna Alfieri Author-X-Name-First: Arianna Author-X-Name-Last: Alfieri Author-Name: Andrea Matta Author-X-Name-First: Andrea Author-X-Name-Last: Matta Title: Buffer allocation problem in production flow lines: A new Benders-decomposition-based exact solution approach Abstract: The Buffer Allocation Problem (BAP) in production flow lines is very relevant from a practical point of view and very challenging from a scientific perspective. For this reason, it has drawn great attention both in industry and in the academic community. However, despite the problem’s relevance, no exact method is available in the literature to solve it when long production lines are being considered, i.e., in practical settings. This work proposes a new Mixed-Integer Linear Programming (MILP) formulation for exact solution of sample-based BAP. Due to the huge number of variables and constraints in the model, an algorithm based on Benders decomposition is proposed to increase the computational efficiency. The algorithm iterates between a simulation module that generates the Benders cuts and an optimization module that involves the solution of an updated MILP model. Multiple Benders cuts after each simulation run are generated by exploiting the structural properties of reversibility and monotonicity of flow line throughput. The new MILP formulation is tighter than the state-of-the-art model from a theoretical point of view, and order of magnitude of computation time saving is also observed in the numerical results. Journal: IISE Transactions Pages: 421-434 Issue: 5 Volume: 54 Year: 2022 Month: 5 X-DOI: 10.1080/24725854.2021.1905195 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1905195 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:5:p:421-434 Template-Type: ReDIF-Article 1.0 Author-Name: Xiao Qinge Author-X-Name-First: Xiao Author-X-Name-Last: Qinge Author-Name: Ben Niu Author-X-Name-First: Ben Author-X-Name-Last: Niu Author-Name: Chen Ying Author-X-Name-First: Chen Author-X-Name-Last: Ying Title: Policy manifold generation for multi-task multi-objective optimization of energy flexible machining systems Abstract: Contemporary organizations recognize the importance of lean and green production to realize ecological and economic benefits. 
Compared with existing optimization methods, multi-task multi-objective reinforcement learning (MT-MORL) offers an attractive means to address the dynamic, multi-target process-optimization problems associated with Energy-Flexible Machining (EFM). Despite the recent advances in reinforcement learning, the realization of an accurate Pareto frontier representation remains a major challenge. This article presents a generative manifold-based policy-search method to approximate the continuously distributed Pareto frontier for EFM optimization. To this end, multi-pass operations are formulated as part of a multi-policy Markov decision process, wherein the machining configurations change dynamically. However, the traditional Gaussian distribution cannot accurately fit complex upper-level policies. Thus, a multi-layered generator is designed to map the high-dimensional policy manifold from a simple Gaussian distribution without performing complex calculations. Additionally, a hybrid multi-task training approach is proposed to handle the mode collapse and large task differences observed while improving the generalization performance. Extensive computational testing and comparisons against existing baseline methods have been performed to demonstrate the improved Pareto frontier quality and computational efficiency of the proposed algorithm. Journal: IISE Transactions Pages: 448-463 Issue: 5 Volume: 54 Year: 2022 Month: 5 X-DOI: 10.1080/24725854.2021.1934756 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1934756 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:5:p:448-463 Template-Type: ReDIF-Article 1.0 Author-Name: Yinan Wang Author-X-Name-First: Yinan Author-X-Name-Last: Wang Author-Name: Weihong “Grace” Guo Author-X-Name-First: Weihong “Grace” Author-X-Name-Last: Guo Author-Name: Xiaowei Yue Author-X-Name-First: Xiaowei Author-X-Name-Last: Yue Title: Tensor decomposition to compress convolutional layers in deep learning Abstract: Feature extraction for tensor data serves as an important step in many tasks such as anomaly detection, process monitoring, image classification, and quality control. Although many methods have been proposed for tensor feature extraction, there are still two challenges that need to be addressed: (i) how to reduce the computation cost for high dimensional and large volume tensor data; (ii) how to interpret the output features and evaluate their significance. The most recent methods in deep learning, such as Convolutional Neural Networks, have shown outstanding performance in analyzing tensor data, but their wide adoption is still hindered by model complexity and lack of interpretability. To fill this research gap, we propose to use CP-decomposition to approximately compress the convolutional layer (CPAC-Conv layer) in deep learning. The contributions of our work include three aspects: (i) we adapt CP-decomposition to compress convolutional kernels and derive the expressions of forward and backward propagations for our proposed CPAC-Conv layer; (ii) compared with the original convolutional layer, the proposed CPAC-Conv layer can reduce the number of parameters without decaying prediction performance. It can combine with other layers to build novel Deep Neural Networks; (iii) the value of decomposed kernels indicates the significance of the corresponding feature map, which provides us with insights to guide feature selection.
Journal: IISE Transactions Pages: 481-495 Issue: 5 Volume: 54 Year: 2022 Month: 5 X-DOI: 10.1080/24725854.2021.1894514 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1894514 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:5:p:481-495 Template-Type: ReDIF-Article 1.0 Author-Name: İbrahim Muter Author-X-Name-First: İbrahim Author-X-Name-Last: Muter Author-Name: Temel Öncan Author-X-Name-First: Temel Author-X-Name-Last: Öncan Title: Order batching and picker scheduling in warehouse order picking Abstract: This article focuses on the integration of order batching and picker scheduling decisions while taking into account two objectives that have been considered in the literature, namely the minimization of both the total travel time to collect all items and the makespan of the pickers. This integrated problem not only occurs naturally in wave picking systems in which the latest picking time of orders becomes the key performance metric, but also arises when there is a limit on the picker operating time. We present models that result from combining these objectives and analyze their relationship through bounds. We propose a column generation-based exact algorithm for the integrated problem. The novelty of the proposed approach lies in its ability to efficiently solve the integrated order batching and picker scheduling problem to optimality by designing a column generation subproblem based on the set of batches, which makes it a challenging optimization problem due to its size. We alleviate this difficulty by reformulating this subproblem, which allows efficient implicit enumeration of its variables. We also devise a Variable Neighborhood Search algorithm, used as a subprocedure within the proposed exact solution algorithm. Finally, we conduct experiments on randomly generated instances and show that the proposed algorithms are capable of solving instances with up to 100 orders. Journal: IISE Transactions Pages: 435-447 Issue: 5 Volume: 54 Year: 2022 Month: 5 X-DOI: 10.1080/24725854.2021.1925178 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1925178 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:5:p:435-447 Template-Type: ReDIF-Article 1.0 Author-Name: Hao Yan Author-X-Name-First: Hao Author-X-Name-Last: Yan Author-Name: Marco Grasso Author-X-Name-First: Marco Author-X-Name-Last: Grasso Author-Name: Kamran Paynabar Author-X-Name-First: Kamran Author-X-Name-Last: Paynabar Author-Name: Bianca Maria Colosimo Author-X-Name-First: Bianca Maria Author-X-Name-Last: Colosimo Title: Real-time detection of clustered events in video-imaging data with applications to additive manufacturing Abstract: The use of video-imaging data for in-line process monitoring applications has become popular in industry. In this framework, spatio-temporal statistical process monitoring methods are needed to capture the relevant information content and signal possible out-of-control states. Video-imaging data are characterized by a spatio-temporal variability structure that depends on the underlying phenomenon, and typical out-of-control patterns are related to events that are localized both in time and space. In this article, we propose an integrated spatio-temporal decomposition and regression approach for anomaly detection in video-imaging data. Out-of-control events are typically sparse, spatially clustered and temporally consistent.
The goal is not only to detect the anomaly as quickly as possible (“when”) but also to locate it in space (“where”). The proposed approach works by decomposing the original spatio-temporal data into random natural events, sparse spatially clustered and temporally consistent anomalous events, and random noise. Recursive estimation procedures for spatio-temporal regression are presented to enable the real-time implementation of the proposed methodology. Finally, a likelihood ratio test procedure is proposed to detect when and where the anomaly happens. The proposed approach is applied to the analysis of high-speed video-imaging data to detect and locate local hot-spots during a metal additive manufacturing process. Journal: IISE Transactions Pages: 464-480 Issue: 5 Volume: 54 Year: 2022 Month: 5 X-DOI: 10.1080/24725854.2021.1882013 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1882013 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:5:p:464-480 Template-Type: ReDIF-Article 1.0 Author-Name: Hung-Ping Tung Author-X-Name-First: Hung-Ping Author-X-Name-Last: Tung Author-Name: Sheng-Tsaing Tseng Author-X-Name-First: Sheng-Tsaing Author-X-Name-Last: Tseng Author-Name: Nan-Jung Hsu Author-X-Name-First: Nan-Jung Author-X-Name-Last: Hsu Author-Name: Yi-Ting Hou Author-X-Name-First: Yi-Ting Author-X-Name-Last: Hou Title: A generalized pH acceleration model of nano-sol products and the effects of model misspecification on shelf-life prediction Abstract: In existing pH acceleration models, which are used to assess the shelf-life of liquid-phase nano-sol products, a mixture of normal distributions is commonly employed to describe the sizes of particles drawn from two populations. The Gaussian mixture model approach falls short when used to characterize the asymmetric distributions of particle sizes in subgroups. This work instead considers a broader class of mixtures of log-F distributions, to be embedded in a pH acceleration model. This study aims at understanding the impact of the new modeling approach, in the presence of model misspecification of the particle size distribution, on the accuracy and precision of the shelf-life predictions. This study found that model misspecification indeed significantly affects the shelf-life predictions. The proposed method shows favorable finite sample performance on a simulated data set. Both the quantitative analysis of the impact due to model misspecification and the solution proposed herein could benefit practitioners in the long run. Journal: IISE Transactions Pages: 496-504 Issue: 5 Volume: 54 Year: 2022 Month: 5 X-DOI: 10.1080/24725854.2021.1896054 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1896054 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:5:p:496-504 Template-Type: ReDIF-Article 1.0 Author-Name: Kai Yang Author-X-Name-First: Kai Author-X-Name-Last: Yang Author-Name: Peihua Qiu Author-X-Name-First: Peihua Author-X-Name-Last: Qiu Title: Design variable-sampling control charts using covariate information Abstract: Statistical Process Control (SPC) charts are widely used in the manufacturing industry for monitoring the performance of sequential production processes over time.
A common practice in using a control chart is to first collect samples and take measurements of certain quality variables from them at equally-spaced sampling times, and then make decisions about the process status using the chart based on the observed data. In some applications, however, the quality variables are associated with certain covariates, and it should improve the performance of an SPC chart if the covariate information can be used properly. Intuitively, if the covariate information indicates that the process under monitoring is likely to have a distributional shift soon, based on the established relationship between the quality variables and the covariates, then it should benefit the process monitoring to collect the next process observation sooner than usual. Motivated by this idea, we propose a general framework to design a variable-sampling control chart by using covariate information. Our proposed chart is self-starting and can well accommodate stationary short-range serial data correlation. To our knowledge, it is the first variable-sampling control chart in the literature in which the sampling intervals are determined by covariate information. Numerical studies show that the proposed method performs well in the different cases considered. Journal: IISE Transactions Pages: 505-519 Issue: 5 Volume: 54 Year: 2022 Month: 5 X-DOI: 10.1080/24725854.2021.1902591 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1902591 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:5:p:505-519 Template-Type: ReDIF-Article 1.0 Author-Name: Hadi El-Amine Author-X-Name-First: Hadi Author-X-Name-Last: El-Amine Author-Name: Hrayer Aprahamian Author-X-Name-First: Hrayer Author-X-Name-Last: Aprahamian Title: A heuristic scheme for multivariate set partitioning problems with application to classifying heterogeneous populations for multiple binary attributes Abstract: We provide a novel heuristic approach to solve a class of multivariate set partitioning problems in which each item is characterized by three attribute values. The scheme first identifies a series of orderings of the items and then solves a corresponding sequence of shortest path problems. We provide theoretical findings on the structure of an optimal solution that motivate the design of the proposed heuristic scheme. The proposed algorithm runs in polynomial time and is independent of the number of groups in the partition, making it more efficient than existing algorithms. To measure the performance of our solutions, we construct bounds for special instances, which allow us to provide optimality gaps. We conduct an extensive numerical experiment in which we solve a large number of problem instances and show that our proposed approach converges to the global optimal solution in the vast majority of cases and, in the cases where it does not, yields very low optimality gaps. We demonstrate our findings with an application in the context of classifying a large heterogeneous population as positive or negative for multiple binary attributes as efficiently as possible. We conduct a case study on the screening of three of the most prevalent sexually transmitted diseases in the United States. The resulting solutions are shown to be within 2.6% of optimality and lead to a 26% cost saving over current screening practices.
Journal: IISE Transactions Pages: 537-549 Issue: 6 Volume: 54 Year: 2022 Month: 6 X-DOI: 10.1080/24725854.2021.1959964 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1959964 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:6:p:537-549 Template-Type: ReDIF-Article 1.0 Author-Name: Jesse G. Wales Author-X-Name-First: Jesse G. Author-X-Name-Last: Wales Author-Name: Alexander J. Zolan Author-X-Name-First: Alexander J. Author-X-Name-Last: Zolan Author-Name: Alexandra M. Newman Author-X-Name-First: Alexandra M. Author-X-Name-Last: Newman Author-Name: Michael J. Wagner Author-X-Name-First: Michael J. Author-X-Name-Last: Wagner Title: Optimizing vehicle fleet and assignment for concentrating solar power plant heliostat washing Abstract: Concentrating solar power central-receiver plants use thousands of sun-tracking mirrors, i.e., heliostats, to reflect sunlight to a central receiver, which collects and uses the heat to generate electricity. Over time, soiling reduces the reflectivity of the heliostats and, therefore, the efficiency of the system. Current industry practice sends vehicles to wash heliostats in an ad hoc fashion. We present a mixed-integer nonlinear program that determines wash vehicle fleet size, mix, and assignment of wash crews to heliostats to minimize the sum of (i) the revenues lost due to heliostat soiling, (ii) the costs of hiring wash crews and operating the vehicles, and (iii) the costs of purchasing wash vehicles. We establish conditions for convexity of the objective function, and then propose a decomposition method that yields near-optimal solutions to the wash vehicle fleet sizing and assignment problem in a couple of minutes. These solutions yield hundreds of thousands of dollars in savings per year over current industry practices. Journal: IISE Transactions Pages: 550-562 Issue: 6 Volume: 54 Year: 2022 Month: 6 X-DOI: 10.1080/24725854.2021.1966858 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1966858 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:6:p:550-562 Template-Type: ReDIF-Article 1.0 Author-Name: Haldun Aytug Author-X-Name-First: Haldun Author-X-Name-Last: Aytug Author-Name: Anand Paul Author-X-Name-First: Anand Author-X-Name-Last: Paul Title: Mean throughput rate in restart-upon-failure systems subject to random disruptions Abstract: We model a manufacturing or service system processing a fixed number of tasks subject to random disruptions generated by a renewal process, such that a task has to start over from scratch if it is interrupted. We study the impact of an increase in the disruption rate on the mean throughput of the system, using tools from stochastic ordering. We show that a system that is stochastically better than another does not necessarily have a higher mean throughput; the deciding factor is the hazard rate of the underlying disruption process. We prove that a system that is better than another in the sense of hazard rate ordering of the time between disruptions is guaranteed to enjoy a higher average throughput. Journal: IISE Transactions Pages: 578-589 Issue: 6 Volume: 54 Year: 2022 Month: 6 X-DOI: 10.1080/24725854.2021.1975061 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1975061 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:54:y:2022:i:6:p:578-589 Template-Type: ReDIF-Article 1.0 Author-Name: Liping Zhou Author-X-Name-First: Liping Author-X-Name-Last: Zhou Author-Name: Na Geng Author-X-Name-First: Na Author-X-Name-Last: Geng Author-Name: Zhibin Jiang Author-X-Name-First: Zhibin Author-X-Name-Last: Jiang Author-Name: Xiuxian Wang Author-X-Name-First: Xiuxian Author-X-Name-Last: Wang Title: Dynamic multi-type patient advance scheduling for a diagnostic facility considering heterogeneous waiting time targets and equity Abstract: This article studies a dynamic advance scheduling problem where multi-type patients arrive randomly to book the future service of a diagnostic facility in a public healthcare setting. The demand for the diagnostic facility generally arises from multiple sources such as emergency patients, inpatients, and outpatients. It is challenging for public hospital managers to dynamically allocate their limited capacities to serve the incoming multi-type patients not only to achieve their heterogeneous waiting time targets in a cost-effective manner but also to maintain equity among multi-type patients. To address this problem, a finite-horizon Markov Decision Process (MDP) model is proposed to minimize the total expected costs under the constraints of maintaining equity. Because of the complex structure of the feasible region and the high-dimensional state space, the characterization of the properties of the optimal scheduling policy and the exact solution of the MDP are intractable. To solve the MDP model with high-dimensional state and action spaces, we reformulate the MDP as a multi-stage stochastic programming model and propose a modified Benders decomposition algorithm based on new dual integer cuts to solve the model. Based on real data from our collaborating hospital, we perform extensive numerical experiments to demonstrate that our proposed approach yields good performance. Journal: IISE Transactions Pages: 521-536 Issue: 6 Volume: 54 Year: 2022 Month: 6 X-DOI: 10.1080/24725854.2021.1957521 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1957521 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:6:p:521-536 Template-Type: ReDIF-Article 1.0 Author-Name: Mehmet Sekip Altug Author-X-Name-First: Mehmet Sekip Author-X-Name-Last: Altug Author-Name: Oben Ceryan Author-X-Name-First: Oben Author-X-Name-Last: Ceryan Title: Optimal dynamic allocation of rental and sales inventory for fashion apparel products Abstract: There is a growing trend towards renting rather than permanent ownership of various product categories such as designer clothes and accessories. In this article, we study an emerging retail business model that simultaneously serves rental and sales markets. Specifically, we consider a retailer that primarily focuses on renting while also selectively meeting incidental sales demand. Once a unit is sold, the firm forgoes potentially recurring rental revenues from that unit during the remaining periods. Therefore, it is critical for a retailer to dynamically decide how much of its inventory to allocate for sales and rentals in each period. We first develop a consumer choice model that determines the fraction of the market that chooses renting over purchasing. We characterize the optimal inventory allocation policy and explore how market characteristics and prices impact inventory allocation.
We discuss the value of dynamic allocation and observe that the profit improvement can be substantial. In addition, we propose a simple and efficient heuristic policy. Finally, we extend our analysis to study the optimal allocation policies for (i) a retailer that is primarily a seller that selectively meets rental demand, and (ii) a retailer that does not enforce any prioritization between rental and sales demand. Journal: IISE Transactions Pages: 603-617 Issue: 6 Volume: 54 Year: 2022 Month: 6 X-DOI: 10.1080/24725854.2021.1982157 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1982157 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:6:p:603-617 Template-Type: ReDIF-Article 1.0 Author-Name: Shahab Derhami Author-X-Name-First: Shahab Author-X-Name-Last: Derhami Author-Name: Benoit Montreuil Author-X-Name-First: Benoit Author-X-Name-Last: Montreuil Title: Estimation of potential lost sales in retail networks of high-value substitutable products Abstract: Sales data reveal only partial information about demand due to stockout-based substitutions and lost sales. We develop a data-driven algorithm to estimate stockout-based lost sales and product demands in a distribution network of high-value substitutable products such as cars, using only past sales and inventory log data and product substitution ratios. The model considers the particular customer and retailer behaviors frequently observed in high-value product markets, such as customers visiting multiple stores for a better match and retailers exploiting on-demand inventory transshipments to satisfy the demand for out-of-stock products. It identifies unavailable products for which a retailer could not fulfill demand, and it estimates the potential lost sales and the probability distribution of the corresponding product demands using sales data from retailers with similar sales profiles while accounting for retailers’ market sizes. We validate the results of our algorithm through field data collection, simulation, and a pilot project for a case of recreational vehicles. We also show the results of implementing our model to estimate lost sales across the large retail network of a leading vehicle manufacturer. Our case study shows sales data significantly underestimate the demand for most products. Journal: IISE Transactions Pages: 563-577 Issue: 6 Volume: 54 Year: 2022 Month: 6 X-DOI: 10.1080/24725854.2021.1969484 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1969484 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:6:p:563-577 Template-Type: ReDIF-Article 1.0 Author-Name: Shaoxuan Liu Author-X-Name-First: Shaoxuan Author-X-Name-Last: Liu Author-Name: Kut C. So Author-X-Name-First: Kut C. Author-X-Name-Last: So Author-Name: Wenhui Zhao Author-X-Name-First: Wenhui Author-X-Name-Last: Zhao Title: Direct supply base reduction in a decentralized assembly system with suppliers of varying market power Abstract: This paper studies a decentralized assembly system with two types of independent component suppliers: one type, so-called commanding suppliers, has strong market power and sets the component price in a push contract offered to the assembler; the other type, subordinate suppliers, has weak market power and accepts a pull contract with the component price set by the assembler.
We analyze how direct supply base reduction of component suppliers through supplier clustering can affect the profitability of the assembler and the component suppliers, and we show that direct supply base reduction through clustering of commanding suppliers is generally beneficial to the assembler. Direct supply base reduction through clustering of subordinate suppliers also benefits the assembler, except when the clustering would add another commanding supplier to the system. In that case, our numerical results show that such clustering would likely improve the assembler’s profitability when a product’s profit margin is low and when clustering yields a reduction in the assembler’s coordination costs. We also give sufficient conditions under which direct supply base reduction through supplier clustering also benefits the suppliers involved in the clustering. Journal: IISE Transactions Pages: 590-602 Issue: 6 Volume: 54 Year: 2022 Month: 6 X-DOI: 10.1080/24725854.2021.1978016 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1978016 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:6:p:590-602 Template-Type: ReDIF-Article 1.0 Author-Name: Hang Dong Author-X-Name-First: Hang Author-X-Name-Last: Dong Author-Name: Kaibo Wang Author-X-Name-First: Kaibo Author-X-Name-Last: Wang Title: Interaction event network modeling based on temporal point process Abstract: Interaction event networks, which consist of interaction events among a set of individuals, exist in many areas, from social and biological to financial applications. The individuals on networks interact with each other for several possible reasons, such as periodic contact or replies to former interactions. Regarding these interaction events as expectations based on previous interactions is crucial for understanding the underlying network and the corresponding dynamics. Usually, any change in the individuals of the network will be reflected in the pattern of their interaction events. However, the causes and resulting patterns of interaction events on networks have not been properly considered in network models. This article proposes a dynamic model for interaction event networks based on the temporal point process, which aims to incorporate the impact of historical interaction events on later interaction events while considering both network structure and node connections. A network representation learning method is developed to learn the interaction event processes. The proposed interaction event network model also provides a convenient representation of the rate of interaction events for any pair of sender–receiver nodes on the network and therefore facilitates monitoring such event networks by summarizing these pairwise rates. Both simulation experiments and experiments on real-world data validate the effectiveness of the proposed model and the corresponding network representation learning algorithm. Journal: IISE Transactions Pages: 630-642 Issue: 7 Volume: 54 Year: 2022 Month: 7 X-DOI: 10.1080/24725854.2021.1906468 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1906468 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:54:y:2022:i:7:p:630-642 Template-Type: ReDIF-Article 1.0 Author-Name: Jongwoo Ko Author-X-Name-First: Jongwoo Author-X-Name-Last: Ko Author-Name: Heeyoung Kim Author-X-Name-First: Heeyoung Author-X-Name-Last: Kim Title: Deep Gaussian process models for integrating multifidelity experiments with nonstationary relationships Abstract: The problem of integrating multifidelity data has been studied extensively, due to integrated analyses being able to provide better results than separately analyzing various data types. One popular approach is to use linear autoregressive models with location- and scale-adjustment parameters. Such parameters are typically modeled using stationary Gaussian processes. However, the stationarity assumption may not be appropriate in real-world applications. To introduce nonstationarity for enhanced flexibility, we propose a novel integration model based on deep Gaussian processes that can capture nonstationarity via successive warping of latent variables through multiple layers of Gaussian processes. For inference of the proposed model, we use a doubly stochastic variational inference algorithm. We validate the proposed model using simulated and real-data examples. Journal: IISE Transactions Pages: 686-698 Issue: 7 Volume: 54 Year: 2022 Month: 7 X-DOI: 10.1080/24725854.2021.1931572 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1931572 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:7:p:686-698 Template-Type: ReDIF-Article 1.0 Author-Name: Cheoljoon Jeong Author-X-Name-First: Cheoljoon Author-X-Name-Last: Jeong Author-Name: Xiaolei Fang Author-X-Name-First: Xiaolei Author-X-Name-Last: Fang Title: Two-dimensional variable selection and its applications in the diagnostics of product quality defects Abstract: The root cause diagnostics of product quality defects in multistage manufacturing processes often requires a joint identification of crucial stages and process variables. To meet this requirement, this article proposes a novel penalized matrix regression methodology for two-dimensional variable selection. The method regresses a scalar response variable against a matrix-based predictor using a generalized linear model. The unknown regression coefficient matrix is decomposed as a product of two factor matrices. The rows of the first factor matrix and the columns of the second factor matrix are simultaneously penalized to induce sparsity. To estimate the parameters, we develop a Block Coordinate Proximal Descent (BCPD) optimization algorithm, which cyclically solves two convex sub-optimization problems. We have proved that the BCPD algorithm always converges to a critical point from any initialization. In addition, we have proved that each of the sub-optimization problems has a closed-form solution if the response variable follows a distribution whose (negative) log-likelihood function has a Lipschitz continuous gradient. A simulation study and a dataset from a real-world application are used to validate the effectiveness of the proposed method. Journal: IISE Transactions Pages: 619-629 Issue: 7 Volume: 54 Year: 2022 Month: 7 X-DOI: 10.1080/24725854.2021.1904524 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1904524 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:54:y:2022:i:7:p:619-629 Template-Type: ReDIF-Article 1.0 Author-Name: Chiwoo Park Author-X-Name-First: Chiwoo Author-X-Name-Last: Park Author-Name: David J. Borth Author-X-Name-First: David J. Author-X-Name-Last: Borth Author-Name: Nicholas S. Wilson Author-X-Name-First: Nicholas S. Author-X-Name-Last: Wilson Author-Name: Chad N. Hunter Author-X-Name-First: Chad N. Author-X-Name-Last: Hunter Title: Variable selection for Gaussian process regression through a sparse projection Abstract: This article presents a new variable selection approach integrated with Gaussian process regression. We consider a sparse projection of input variables and a general stationary covariance model that depends on the Euclidean distance between the projected features. The sparse projection matrix is treated as an unknown parameter. We propose a forward stagewise approach with embedded gradient descent steps to co-optimize the parameter with other covariance parameters based on the maximization of a non-convex marginal likelihood function with a concave sparsity penalty, and some convergence properties of the algorithm are provided. The proposed model covers a broader class of stationary covariance functions than the existing automatic relevance determination approaches, and the solution approach is more computationally feasible than the existing Markov chain Monte Carlo sampling procedures for automatic relevance parameter estimation with a sparsity prior. The approach is evaluated for a large number of simulated scenarios. The choice of tuning parameters and the accuracy of the parameter estimation are evaluated with the simulation study. In comparison to some chosen benchmark approaches, the proposed approach has demonstrated improved accuracy in the variable selection. It is applied to the important problem of identifying environmental factors that affect atmospheric corrosion of metal alloys. Journal: IISE Transactions Pages: 699-712 Issue: 7 Volume: 54 Year: 2022 Month: 7 X-DOI: 10.1080/24725854.2021.1959965 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1959965 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:7:p:699-712 Template-Type: ReDIF-Article 1.0 Author-Name: Linhan Ouyang Author-X-Name-First: Linhan Author-X-Name-Last: Ouyang Author-Name: Shichao Zhu Author-X-Name-First: Shichao Author-X-Name-Last: Zhu Author-Name: Keying Ye Author-X-Name-First: Keying Author-X-Name-Last: Ye Author-Name: Chanseok Park Author-X-Name-First: Chanseok Author-X-Name-Last: Park Author-Name: Min Wang Author-X-Name-First: Min Author-X-Name-Last: Wang Title: Robust Bayesian hierarchical modeling and inference using scale mixtures of normal distributions Abstract: Empirical models that relate multiple quality features to a set of design variables play a vital role in many industrial process optimization methods. Many of the current modeling methods employ a single-response normal model to analyze industrial processes without taking into consideration the high correlations and the non-normality among the response variables. Also, the problem of variable selection has not yet been fully investigated within this modeling framework. Failure to account for these issues may result in a misleading prediction model, and therefore, poor process design.
In this article, we propose a robust Bayesian seemingly unrelated regression model to simultaneously analyze multiple-feature systems while accounting for the high correlation, non-normality, and variable selection issues. Additionally, we propose a Markov chain Monte Carlo sampling algorithm to generate posterior samples from the full joint posterior distribution to obtain the robust Bayesian estimates. Simulation experiments are executed to investigate the performance of the proposed Bayesian method, which is also illustrated by application to a laser cladding repair process. The analysis results show that the proposed modeling technique compares favorably with its classic counterpart in the literature. Journal: IISE Transactions Pages: 659-671 Issue: 7 Volume: 54 Year: 2022 Month: 7 X-DOI: 10.1080/24725854.2021.1912440 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1912440 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:7:p:659-671 Template-Type: ReDIF-Article 1.0 Author-Name: Hongtao Yu Author-X-Name-First: Hongtao Author-X-Name-Last: Yu Author-Name: Zhongsheng Hua Author-X-Name-First: Zhongsheng Author-X-Name-Last: Hua Title: Meta-modeling of heterogeneous data streams: A dual-network approach for online personalized fault prognostics of equipment Abstract: In fault prognosis, the individual heterogeneity among degradation processes of equipment is a critical problem that decreases the reliability and stability of prognostic models. The diversity of degradation mechanisms, along with the complex temporal nature of multivariate equipment measurements, makes it difficult for existing approaches to forecast the trend of health status and predict the Remaining Useful Life (RUL) of equipment. To resolve this problem, this article proposes a dual-network approach for online RUL prediction. The proposed approach predicts the RUL by constructing a Recurrent Neural Network (RNN) and a Feedforward Neural Network (FNN) from the degradation measurements and failure occurrence data of equipment. The RNN is used to predict the evolution of degradation measurements, whereas the FNN is used to determine the failure occurrence based on the predicted measurements. Considering the individual heterogeneity problem, a novel meta-learning procedure is proposed for network training. The main idea of the meta-learning approach is to train two network generators to capture the average behavior and variation of equipment degradation, and generate dual networks dynamically tailored to different equipment in the online RUL prediction process. Numerical studies on a simulation dataset and a real-world dataset are performed for performance evaluation. Journal: IISE Transactions Pages: 672-685 Issue: 7 Volume: 54 Year: 2022 Month: 7 X-DOI: 10.1080/24725854.2021.1918804 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1918804 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:7:p:672-685 Template-Type: ReDIF-Article 1.0 Author-Name: Amirhossein Fallahdizcheh Author-X-Name-First: Amirhossein Author-X-Name-Last: Fallahdizcheh Author-Name: Chao Wang Author-X-Name-First: Chao Author-X-Name-Last: Wang Title: Profile monitoring based on transfer learning of multiple profiles with incomplete samples Abstract: Profile monitoring is an important tool for quality control.
Most existing profile monitoring approaches focus on monitoring a single profile. In practice, multiple profiles also widely exist, and these profiles contain rich correlation information that can benefit the monitoring of the target profile of interest. In this article, we propose a transfer learning framework to extract profile-to-profile inter-relationships to improve the monitoring performance. In this framework, profiles are modeled as a multi-output Gaussian process (MGP), and a specially designed covariance structure is proposed to reduce the computational load in optimizing the MGP parameters. More importantly, the proposed framework contains features for dealing with incomplete samples in each profile, which facilitates the information sharing among profiles with different data collection costs/availability. The proposed method is validated and compared with various benchmarks in extensive numerical studies and a case study of monitoring ice machine temperature profiles. The results show that the proposed method can successfully transfer knowledge from related profiles to benefit the monitoring performance in the target profile. The R code for this paper is available as online supplementary material. Journal: IISE Transactions Pages: 643-658 Issue: 7 Volume: 54 Year: 2022 Month: 7 X-DOI: 10.1080/24725854.2021.1912439 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1912439 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:7:p:643-658 Template-Type: ReDIF-Article 1.0 Author-Name: Ohad Eisenhandler Author-X-Name-First: Ohad Author-X-Name-Last: Eisenhandler Author-Name: Michal Tzur Author-X-Name-First: Michal Author-X-Name-Last: Tzur Title: Multi-period collection and distribution problems and their application to humanitarian gleaning operations Abstract: We focus on a multi-period logistic setting, in which a given fleet of vehicles is used to collect products from suppliers, transfer them to a depot where they undergo certain time-consuming processing procedures, only at the end of which can they be distributed to customers. The problem is to schedule visits to the suppliers, as well as vehicle routes that distribute the processed products to the customers, such that each vehicle can perform at most one of these activity types every day. The processed inventory at the depot creates an additional dependence between these activities. Non-profit gleaning operations performed by food banks provide a real-life motivation for the analysis of this setting. It is also applicable for the collection and distribution of blood donations. In the solution method we propose, the problem is decomposed into its collection and distribution aspects, both of which constitute non-trivial sub-problems that have not been previously studied. We show how to tackle each of them individually, while considering information obtained from their counterpart, with the inventory storage as their linkage. We further present a rolling horizon framework for the problem. We demonstrate the implementation of this approach based on the activity of an Israeli food bank that manages gleaning operations, and test it with real-life data, as well as with randomly generated instances. The numerical experiments establish the advantage of the proposed method, compared to naïve methods currently being used in practice.
Journal: IISE Transactions Pages: 785-802 Issue: 8 Volume: 54 Year: 2022 Month: 8 X-DOI: 10.1080/24725854.2021.1998937 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1998937 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:8:p:785-802 Template-Type: ReDIF-Article 1.0 Author-Name: Fei Yang Author-X-Name-First: Fei Author-X-Name-Last: Yang Author-Name: Ying Dai Author-X-Name-First: Ying Author-X-Name-Last: Dai Author-Name: Zu-Jun Ma Author-X-Name-First: Zu-Jun Author-X-Name-Last: Ma Title: A blood component production–inventory problem with preparation method combination and ABO compatibility Abstract: We consider the component dependence resulting from each preparation method consistently outputting a particular component combination in the integrated blood component production–inventory problem in a blood center. We first formulate a Markov decision process that comprehensively considers the alternative preparation methods, ABO compatibility, varying age-based inventories, stochastic supply of whole blood, and stochastic demand for components. Then, an approximate dynamic programming algorithm with the interval-adaptive and myopic-learning acceleration approaches is proposed to solve the problem. It performs well in improving both precision and learning speed. The numerical study displays the sophisticated optimal inventory levels and issuance policies of the blood components. We show that an integrated, operational production–inventory model is more capable of dealing with the interaction among the dependent outputs and their differentiated compatibilities and perishabilities. Moreover, the different substitution strategies (i.e., push and pull) are verified to provide similar overall supply levels while causing slight differences in other aspects. Further suggestions for specific components are also provided. Journal: IISE Transactions Pages: 713-727 Issue: 8 Volume: 54 Year: 2022 Month: 8 X-DOI: 10.1080/24725854.2021.1971341 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1971341 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:8:p:713-727 Template-Type: ReDIF-Article 1.0 Author-Name: Xiaojie Wang Author-X-Name-First: Xiaojie Author-X-Name-Last: Wang Author-Name: Yongpei Guan Author-X-Name-First: Yongpei Author-X-Name-Last: Guan Author-Name: Xiang Zhong Author-X-Name-First: Xiang Author-X-Name-Last: Zhong Title: Service system design of video conferencing visits with nurse assistance Abstract: Despite providing convenience and reducing the travel burden of patients, Video-Conferencing (VC) clinical visits have not enjoyed wide uptake by patients and care providers. It is desired that the medical problems addressed by VC visits can match a face-to-face encounter in scope and quality. Subsequently, VC visits with nurse assistance are emerging; however, the scalability and financial sustainability of such services are unclear. Therefore, we explore the implementability of VC visits with nursing services using a game-theoretic model, and investigate the impact of different pricing schemes (discriminative pricing based on patient characteristics vs. non-discriminative) on patients’ care choices between VC and in-person visits.
Our results shed light on the “artificial congestion” created by a profit-driven medical institution that hurts patient welfare, and subsequently identify the conditions under which the interests of the social planner and the medical institution are aligned. Our results highlight that, compared to a uniform price for VC visits, which seems fair, discriminative pricing can be more beneficial for patients and the medical institution alike. This heightens the importance of insurance coverage of telehealth-related services to promote the adoption of telehealth by patients and care providers and, ultimately, improve care access and patient outcomes. Journal: IISE Transactions Pages: 741-756 Issue: 8 Volume: 54 Year: 2022 Month: 8 X-DOI: 10.1080/24725854.2021.1982156 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1982156 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:8:p:741-756 Template-Type: ReDIF-Article 1.0 Author-Name: Kyuree Ahn Author-X-Name-First: Kyuree Author-X-Name-Last: Ahn Author-Name: Kanghoon Lee Author-X-Name-First: Kanghoon Author-X-Name-Last: Lee Author-Name: Juneyoung Yeon Author-X-Name-First: Juneyoung Author-X-Name-Last: Yeon Author-Name: Jinkyoo Park Author-X-Name-First: Jinkyoo Author-X-Name-Last: Park Title: Congestion-aware dynamic routing for an overhead hoist transporter system using a graph convolutional gated recurrent unit Abstract: Overhead Hoist Transporters (OHTs), which transport semiconductor wafers between tools/stockers, are a crucial component of an Automated Material Handling System (AMHS). As semiconductor fabrication plants (FABs) become larger, more OHT vehicles need to be operated. This necessitates the development of a scalable algorithm to effectively operate these OHTs and increase the productivity of the AMHS. This study proposes an algorithm that can predict the traveling times of all the edges in an OHT rail network by utilizing past traffic information. The model first represents the OHT rail network and the dynamic traffic conditions using a graph. A sequence of graphs that represent the past traffic is then used as an input to produce a sequence of graphs that predicts the future traffic conditions as an output. Using the AutoMod simulator, we have shown that the proposed model scalably and effectively predicts the future edge-traveling time. We have also demonstrated that the predicted values can be used to reroute the OHTs optimally to avoid congestion. Journal: IISE Transactions Pages: 803-816 Issue: 8 Volume: 54 Year: 2022 Month: 8 X-DOI: 10.1080/24725854.2021.2000680 File-URL: http://hdl.handle.net/10.1080/24725854.2021.2000680 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:8:p:803-816 Template-Type: ReDIF-Article 1.0 Author-Name: Giorgi Tadumadze Author-X-Name-First: Giorgi Author-X-Name-Last: Tadumadze Author-Name: Simon Emde Author-X-Name-First: Simon Author-X-Name-Last: Emde Title: Loading and scheduling outbound trucks at a dispatch warehouse Abstract: We address the operational planning problem of loading and scheduling outbound trucks at a dispatch warehouse shipping goods to several customers.
This entails, first, assigning shipments to outbound trucks given the trailers’ capacities and, second, scheduling the trucks’ processing at the dock doors such that the amount of required resources at the terminal (e.g., dock doors and logistics workers) does not exceed the available levels. The trucks should be scheduled as late as possible within their time windows, but no later than the deadlines of the loaded shipments. Such planning problems arise, e.g., at dispatch warehouses of automotive parts manufacturers supplying parts to original equipment manufacturers in a just-in-time or even just-in-sequence manner. We formalize this operational problem and provide a time-indexed mixed-integer linear programming model. Moreover, we develop an exact branch-and-price algorithm, which is shown to perform very well, solving most realistically sized problem instances to optimality within a few minutes. In a numerical study, we also look into the interplay between the time window policy for trucks and just-in-time deliveries. Finally, we find evidence that too small a workforce or too few outbound dock doors in the dispatch warehouse can substantially compromise the punctuality of the deliveries. Journal: IISE Transactions Pages: 770-784 Issue: 8 Volume: 54 Year: 2022 Month: 8 X-DOI: 10.1080/24725854.2021.1983923 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1983923 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:8:p:770-784 Template-Type: ReDIF-Article 1.0 Author-Name: Hrayer Aprahamian Author-X-Name-First: Hrayer Author-X-Name-Last: Aprahamian Author-Name: Hadi El-Amine Author-X-Name-First: Hadi Author-X-Name-Last: El-Amine Title: Optimal clustering of frequency data with application to disease risk categorization Abstract: We provide a clustering procedure for a special type of dataset, known as frequency data, which counts the frequency of a certain binary outcome. An interpretation of the data as a discrete distribution enables us to extract statistical information, which we embed within an optimization-based framework. Our analysis of the resulting combinatorial optimization problem allows us to reformulate it as a more tractable network flow problem. This, in turn, enables the construction of exact algorithms that converge to the optimal solution in quadratic time. In addition, to be able to handle large-scale datasets, we provide two hierarchical heuristic algorithms that run in linearithmic time. Our moment-based method results in clustering solutions that are shown to perform well for a family of applications. We illustrate the benefits of our findings through a case study on HIV risk categorization within the context of large-scale screening through group testing. Our results on CDC data show that the proposed clustering framework consistently outperforms other popular clustering methods. Journal: IISE Transactions Pages: 728-740 Issue: 8 Volume: 54 Year: 2022 Month: 8 X-DOI: 10.1080/24725854.2021.1973158 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1973158 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:uiiexx:v:54:y:2022:i:8:p:728-740 Template-Type: ReDIF-Article 1.0 Author-Name: Konstantin Kogan Author-X-Name-First: Konstantin Author-X-Name-Last: Kogan Author-Name: Dmitry Tsadikovich Author-X-Name-First: Dmitry Author-X-Name-Last: Tsadikovich Author-Name: Tal Avinadav Author-X-Name-First: Tal Author-X-Name-Last: Avinadav Title: Water scarcity and welfare: Regulated public–private supply chain versus spot-market competition Abstract: Despite legislation and price controls by the government and state agencies that typically assume responsibility for providing water services, water bills continue to rise. However, this has not prevented growth in overall water consumption, nor an increase in water scarcity, both of which are fueled by worldwide population growth, urbanization, and demand for a higher quality of life. Market-based competition is thought to be a promising approach to controlling water charges and managing water scarcity. We compare a spot-market-based competitive supply model for water, which determines the equilibrium price, with a supply chain approach, in which a non-profit public entity encourages competition between private water providers within the framework of a regulated, time-invariant price. We derive dynamic equilibrium replenishment and inventory policies and show that, contrary to expectations, spot-market competition does not necessarily result in greater levels of supply, nor in a lower price, than does a regulated supply chain. Furthermore, the public-private partnership can have an additional advantage in the form of both higher consumption and higher consumer welfare. However, increasing the distribution cost, and hence the regulated price, is likely to diminish the differences between the two market types. Journal: IISE Transactions Pages: 757-769 Issue: 8 Volume: 54 Year: 2022 Month: 8 X-DOI: 10.1080/24725854.2021.1973695 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1973695 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:8:p:757-769 Template-Type: ReDIF-Article 1.0 Author-Name: Guojin Si Author-X-Name-First: Guojin Author-X-Name-Last: Si Author-Name: Tangbin Xia Author-X-Name-First: Tangbin Author-X-Name-Last: Xia Author-Name: Ershun Pan Author-X-Name-First: Ershun Author-X-Name-Last: Pan Author-Name: Lifeng Xi Author-X-Name-First: Lifeng Author-X-Name-Last: Xi Title: Service-oriented global optimization integrating maintenance grouping and technician routing for multi-location multi-unit production systems Abstract: With the product-service requirement of modern production enterprises, service-oriented manufacturing and its corresponding operations and maintenance have gained growing attention. Advances in sensor technology and wireless communication have prompted lessors to propose new strategies for intelligent maintenance decision-making in geographically distributed manufacturing enterprises. In this article, we present a comprehensive strategy for solving the maintenance grouping and technician routing problem of multi-location multi-unit production systems. Based on real-time machine degradation, we estimate the failure rate of leased machines and establish a time-varying maintenance cost function to quantify the trade-off between early maintenance and delayed maintenance.
Unlike group maintenance for a single system, our strategy integrates the travel time between systems and the maintenance capacity of technician teams into a mixed-integer optimization model to provide a dynamic preventive maintenance scheme. Finally, numerical examples are employed to illustrate the effectiveness of the proposed strategy and explore some managerial insights for the lessor’s daily management. Journal: IISE Transactions Pages: 894-907 Issue: 9 Volume: 54 Year: 2022 Month: 6 X-DOI: 10.1080/24725854.2021.1957181 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1957181 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:9:p:894-907 Template-Type: ReDIF-Article 1.0 Author-Name: Andi Wang Author-X-Name-First: Andi Author-X-Name-Last: Wang Author-Name: Tzyy-Shuh Chang Author-X-Name-First: Tzyy-Shuh Author-X-Name-Last: Chang Author-Name: Jianjun Shi Author-X-Name-First: Jianjun Author-X-Name-Last: Shi Title: Multiple event identification and characterization by retrospective analysis of structured data streams Abstract: The sensors installed in complex systems generate massive amounts of data, which contain rich information about a system’s operational status. This article proposes a retrospective analysis method for a historical data set, which simultaneously identifies when multiple events occur to the system and characterizes how they affect the multiple sensing signals. The problem formulation is motivated by the dictionary learning method and the solution is obtained by iteratively updating the event signatures and sequences using ADMM algorithms. A simulation study and a case study of the steel rolling process validate our approach. The supplementary materials, including the appendices and the reproduction report, are available online. Journal: IISE Transactions Pages: 908-921 Issue: 9 Volume: 54 Year: 2022 Month: 6 X-DOI: 10.1080/24725854.2021.1970863 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1970863 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:9:p:908-921 Template-Type: ReDIF-Article 1.0 Author-Name: Fikri Kucuksayacigil Author-X-Name-First: Fikri Author-X-Name-Last: Kucuksayacigil Author-Name: K. Jo Min Author-X-Name-First: K. Jo Author-X-Name-Last: Min Title: The value of jumboization in transportation ships: A real options approach Abstract: In the presence of budget constraints, “jumboization” has been adopted as a practical solution to meet increased transportation needs. By jumboization, we mean increasing the capacity of a ship by extending its length at a future date. There are, however, two kinds of jumboization: fixed design (retrofitting) and flexible design. With fixed design, the initial construction cost is lower, but the subsequent jumboization cost is higher. With flexible design, the initial construction cost is higher, but the subsequent jumboization cost is lower. In this article, for both designs, we build and analyze economic decision models, and show how to value the option to jumboize. Our framework utilizes a stochastic optimal control approach that considers the volume of transportation needs (the demand) as an underlying uncertain factor. Under the criterion of cost savings maximization, we determine the optimal threshold demand level at which to jumboize.
Through analytical and numerical analyses, we obtain conditions under which the flexible design is preferred over the fixed design, and vice versa. A comprehensive, illustrative example is also provided. Journal: IISE Transactions Pages: 858-868 Issue: 9 Volume: 54 Year: 2022 Month: 6 X-DOI: 10.1080/24725854.2021.1973154 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1973154 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:9:p:858-868 Template-Type: ReDIF-Article 1.0 Author-Name: Kay Peeters Author-X-Name-First: Kay Author-X-Name-Last: Peeters Author-Name: Ivo J. B. F. Adan Author-X-Name-First: Ivo J. B. F. Author-X-Name-Last: Adan Author-Name: Tugce Martagan Author-X-Name-First: Tugce Author-X-Name-Last: Martagan Title: Throughput control and revenue optimization of a poultry product batcher Abstract: This article studies the optimal control of batching equipment in the poultry processing industry. The problem is to determine a control policy that maximizes the long-term average revenue while achieving a target throughput. This problem is formulated as a Markov Decision Process (MDP). The developed MDP model captures the unique characteristics of poultry processing operations, such as the trade-off between giveaway and throughput. Structural properties of the optimal policy are derived for small-sized problems where batching equipment utilizes a single bin. Since the MDP model is numerically intractable for industry-sized problems, we propose a heuristic index policy with a Dynamic Rejection Threshold (DRT). The DRT heuristic is constructed based on the salient characteristics of the problem setting and is easy to implement in practice. Numerical experiments demonstrate that DRT performs well. We present an industry case study where DRT is benchmarked against current practice, which shows that the expected revenue can be potentially increased by over 2% (yielding an additional revenue between 750,000 and 2,270,000 Euros per year) through the use of DRT. Journal: IISE Transactions Pages: 845-857 Issue: 9 Volume: 54 Year: 2022 Month: 6 X-DOI: 10.1080/24725854.2021.1966556 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1966556 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:9:p:845-857 Template-Type: ReDIF-Article 1.0 Author-Name: Sahand Hajifar Author-X-Name-First: Sahand Author-X-Name-Last: Hajifar Author-Name: Hongyue Sun Author-X-Name-First: Hongyue Author-X-Name-Last: Sun Title: Online domain adaptation for continuous cross-subject liver viability evaluation based on irregular thermal data Abstract: Accurate evaluation of liver viability during its procurement is a challenging issue and has traditionally been addressed by taking an invasive biopsy of the liver. Recently, researchers have started to investigate the non-invasive evaluation of liver viability during its procurement using liver surface thermal images. However, existing works include the background noise in the thermal images and do not consider the cross-subject heterogeneity of livers; thus, the viability evaluation accuracy can be affected. In this article, we propose to use the irregular thermal data of the pure liver region, and the cross-subject liver evaluation information (i.e., the available viability label information in cross-subject livers), for the real-time evaluation of a new liver’s viability.
To achieve this objective, we extract features of irregular thermal data based on tools from Graph Signal Processing (GSP), and propose an online Domain Adaptation (DA) and classification framework using the GSP features of cross-subject livers. A multiconvex block coordinate descent-based algorithm is designed to jointly learn the domain-invariant features during online DA and the classifier. Our proposed framework is applied to the liver procurement data, and classifies the liver viability accurately. Journal: IISE Transactions Pages: 869-880 Issue: 9 Volume: 54 Year: 2022 Month: 6 X-DOI: 10.1080/24725854.2021.1949762 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1949762 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:9:p:869-880 Template-Type: ReDIF-Article 1.0 Author-Name: Young Myoung Ko Author-X-Name-First: Young Myoung Author-X-Name-Last: Ko Author-Name: Eunshin Byon Author-X-Name-First: Eunshin Author-X-Name-Last: Byon Title: Optimal budget allocation for stochastic simulation with importance sampling: Exploration vs. replication Abstract: This article investigates a budget allocation problem for optimally running stochastic simulation models with importance sampling in computer experiments. In particular, we consider a two-level (or nested) simulation to estimate the expectation of the simulation output, where the first level draws random input samples and the second level obtains the output given the input from the first level. The two-level simulation faces a trade-off in allocating the computational budget: exploring more inputs (exploration) or exploiting the stochastic response surface at a sampled point in more detail (replication). We study an appropriate computational budget allocation strategy that strikes a balance between exploration and replication to minimize the variance of the estimator when importance sampling is employed at the first-level simulation. Our analysis suggests that exploration can be more beneficial than replication in many practical situations. We also conduct numerical experiments in a wide range of settings and a wind turbine case study to investigate the trade-off. Journal: IISE Transactions Pages: 881-893 Issue: 9 Volume: 54 Year: 2022 Month: 6 X-DOI: 10.1080/24725854.2021.1953197 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1953197 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:9:p:881-893 Template-Type: ReDIF-Article 1.0 Author-Name: Aakil M. Caunhye Author-X-Name-First: Aakil M. Author-X-Name-Last: Caunhye Author-Name: Michel-Alexandre Cardin Author-X-Name-First: Michel-Alexandre Author-X-Name-Last: Cardin Author-Name: Muhammad Rahmat Author-X-Name-First: Muhammad Author-X-Name-Last: Rahmat Title: Flexibility and real options analysis in power system generation expansion planning under uncertainty Abstract: Over many years, there has been a drive in the electricity industry towards better integration of environmentally friendly and renewable generation resources for power systems. Such resources show highly variable availability, impacting the design and performance of power systems. In this article, we propose using a stochastic programming approach to optimize Generation Expansion Planning (GEP), with explicit consideration of generator output capacity uncertainty.
Flexibility implementation, via real options exercised in response to uncertainty realizations, is considered an important design approach to the GEP problem. It more effectively captures upside opportunities, while reducing exposure to downside risks. A decision-rule-based approach to real options modeling is used, combining conditional-go and finite adaptability principles. The solutions provide decision makers with easy-to-use guidelines with threshold values from which to exercise the options in operations. To demonstrate application of the proposed methodologies and decision rules, a case study situated in the Midwest United States is used. The case study demonstrates how to quantify the value of flexibility, and showcases the usefulness of the proposed approach. Journal: IISE Transactions Pages: 832-844 Issue: 9 Volume: 54 Year: 2022 Month: 6 X-DOI: 10.1080/24725854.2021.1965699 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1965699 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:9:p:832-844 Template-Type: ReDIF-Article 1.0 Author-Name: Sepehr Fathizadan Author-X-Name-First: Sepehr Author-X-Name-Last: Fathizadan Author-Name: Feng Ju Author-X-Name-First: Feng Author-X-Name-Last: Ju Author-Name: Feifan Wang Author-X-Name-First: Feifan Author-X-Name-Last: Wang Author-Name: Kyle Rowe Author-X-Name-First: Kyle Author-X-Name-Last: Rowe Author-Name: Nils Hofmann Author-X-Name-First: Nils Author-X-Name-Last: Hofmann Title: Dynamic material deposition control for large-scale additive manufacturing Abstract: Large-scale additive manufacturing involves fabricating parts by joint printing of materials layer upon layer. Product quality and process efficiency have yet to be addressed to guarantee the viability of the process in practice. The print surface temperature has a significant impact on both of these elements and can be controlled by properly scheduling the material depositions on the surface. The thermal infrared images captured in real-time are processed, and the extracted thermal profiles are translated into a nonlinear profile model describing the heat dissipation on the surface. A real-time layer time control model is formulated to determine the best time to print the next layer. Furthermore, exploiting the maneuverability characteristics of the printer head while considering its mechanical constraints, a real-time printer head speed control model is formulated as a nonlinear mixed-integer program. Following the deterministic finite-state optimal control and shortest path problem paradigm, a novel algorithm is developed to decide the optimal printing speed trajectory for each layer. The proposed approach was tested in two case studies, including a thin wall specimen and a car lower chassis. The results showed that the method can capture the thermodynamics of the process and achieve simultaneous improvement in both quality and efficiency. Journal: IISE Transactions Pages: 817-831 Issue: 9 Volume: 54 Year: 2022 Month: 6 X-DOI: 10.1080/24725854.2021.1956702 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1956702 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:9:p:817-831 Template-Type: ReDIF-Article 1.0 Author-Name: Bahadır Pamuk Author-X-Name-First: Bahadır Author-X-Name-Last: Pamuk Author-Name: Semra Ağralı Author-X-Name-First: Semra Author-X-Name-Last: Ağralı Author-Name: Z.
Caner Taşkın Author-X-Name-First: Z. Caner Author-X-Name-Last: Taşkın Author-Name: Banu Kabakulak Author-X-Name-First: Banu Author-X-Name-Last: Kabakulak Title: A lot-sizing problem in deliberated and controlled co-production systems Abstract: We consider an uncapacitated lot-sizing problem in co-production systems, in which it is possible to produce multiple items simultaneously in a single production run. Each product has a deterministic demand to be satisfied on time. The decision is to choose which items to co-produce and the amount of production throughout a predetermined planning horizon. We show that the lot-sizing problem with co-production is strongly NP-Hard. Then, we develop various Mixed-Integer Linear Programming (MILP) formulations of the problem and show that LP relaxations of all MILPs are equal. We develop a separation algorithm based on a set of valid inequalities, lower bounds based on a dynamic lot-sizing relaxation of our problem and a constructive heuristic that is used to obtain an initial solution for the solver, which form the basis of our proposed Branch & Cut algorithm for the problem. We test our models and algorithms on different data sets and provide the results. Journal: IISE Transactions Pages: 950-962 Issue: 10 Volume: 54 Year: 2022 Month: 7 X-DOI: 10.1080/24725854.2021.2022250 File-URL: http://hdl.handle.net/10.1080/24725854.2021.2022250 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:10:p:950-962 Template-Type: ReDIF-Article 1.0 Author-Name: Mümin Kurtuluş Author-X-Name-First: Mümin Author-X-Name-Last: Kurtuluş Author-Name: Alper Nakkas Author-X-Name-First: Alper Author-X-Name-Last: Nakkas Author-Name: Sezer Ülkü Author-X-Name-First: Sezer Author-X-Name-Last: Ülkü Title: Allocation of operational decisions in retail supply chains Abstract: We consider a supply chain where a single manufacturer sells multiple products via a single retailer who faces uncertain consumer demand for multiple variants in its assortment. First, we consider a model where the retailer is responsible for making assortment and stocking quantity decisions and bears the associated risks. Then, we consider a model where the retailer delegates assortment and stocking quantity decisions and the associated risks to the manufacturer. We investigate how delegation of operational decisions and associated costs impacts operational decisions and profitability of each member in the channel. Our findings suggest that delegation of operational decisions can lead to a win-win outcome for the retailer and manufacturer because delegation of operational decisions can serve as a tool to mitigate inefficiencies due to double marginalization in the channel. Journal: IISE Transactions Pages: 976-987 Issue: 10 Volume: 54 Year: 2022 Month: 7 X-DOI: 10.1080/24725854.2021.2008065 File-URL: http://hdl.handle.net/10.1080/24725854.2021.2008065 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:10:p:976-987 Template-Type: ReDIF-Article 1.0 Author-Name: Haoxiang Yang Author-X-Name-First: Haoxiang Author-X-Name-Last: Yang Author-Name: Daniel Duque Author-X-Name-First: Daniel Author-X-Name-Last: Duque Author-Name: David P. Morton Author-X-Name-First: David P. 
Author-X-Name-Last: Morton Title: Optimizing diesel fuel supply chain operations to mitigate power outages for hurricane relief Abstract: Hurricanes can cause severe property damage and casualties in coastal regions. Diesel fuel plays a crucial role in hurricane disaster relief. It is important to optimize fuel supply chain operations so that emergency diesel fuel demand for power generation in a hurricane’s immediate aftermath can be mitigated. It can be challenging to estimate diesel fuel demand and make informed decisions in the distribution process, accounting for the hurricane’s path and severity. We develop predictive and prescriptive models to guide diesel fuel supply chain operations for hurricane disaster relief. We estimate diesel fuel demand from historical weather forecasts and power outage data. This predictive model feeds a prescriptive stochastic programming model implemented in a rolling-horizon fashion to dispatch tank trucks. This data-driven optimization tool provides a framework for decision support in preparation for approaching hurricanes, and our numerical results provide insights regarding key aspects of operations. Journal: IISE Transactions Pages: 936-949 Issue: 10 Volume: 54 Year: 2022 Month: 7 X-DOI: 10.1080/24725854.2021.2021461 File-URL: http://hdl.handle.net/10.1080/24725854.2021.2021461 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:10:p:936-949 Template-Type: ReDIF-Article 1.0 Author-Name: Elifcan Yaşa Author-X-Name-First: Elifcan Author-X-Name-Last: Yaşa Author-Name: Dilek Tüzün Aksu Author-X-Name-First: Dilek Tüzün Author-X-Name-Last: Aksu Author-Name: Linet Özdamar Author-X-Name-First: Linet Author-X-Name-Last: Özdamar Title: Metaheuristics for the stochastic post-disaster debris clearance problem Abstract: Post-disaster debris clearance is of utmost importance in disaster response and recovery. The goal in planning debris clearance operations in emergency response is to maximize road network accessibility and enable transport of casualties to medical facilities, primary relief distribution to survivors, and evacuation of survivors from the affected region. We develop a novel stochastic mathematical model to represent the debris clearance scheduling problem with multiple cleaning crews. The inherent uncertainty in the debris clearance planning problem lies in the estimation of clearance times for road debris. The durations required to clear road segments are estimated by helicopter surveys and satellite imagery. The goal is to maximize network accessibility throughout the clearance process. The model creates a schedule that takes all clearing time scenarios into consideration. To enable the usage of the model in practice, we also propose a rolling horizon approach to revise the initial schedule based on updated clearance time estimates received from the field. We use the Sample Average Approximation method to determine the number of scenarios required to adequately represent the problem. Since the resulting mathematical model is intractable for large-scale networks, we design metaheuristics that utilize Biased Random Sampling, Tabu Search, Simulated Annealing, and Variable Neighborhood Search algorithms. 
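The Sample Average Approximation step mentioned in the Yaşa, Tüzün Aksu, and Özdamar abstract above replaces an expectation with an average over sampled scenarios and grows the sample until the solution stabilizes. Below is a minimal sketch on a toy newsvendor-style problem; the cost parameters, the demand distribution, and the grid search are assumptions made for illustration, not the paper's model.

```python
# Sample Average Approximation (SAA) on a toy stochastic problem.
import numpy as np

rng = np.random.default_rng(1)
hold, short = 1.0, 4.0  # assumed unit overage / underage costs

def saa_solution(n_scenarios):
    """Minimize the scenario-average cost over a decision grid."""
    demand = rng.lognormal(mean=3.0, sigma=0.5, size=n_scenarios)
    grid = np.linspace(0.0, 60.0, 301)
    over = np.maximum(grid[:, None] - demand[None, :], 0.0)
    under = np.maximum(demand[None, :] - grid[:, None], 0.0)
    avg_cost = (hold * over + short * under).mean(axis=1)
    return grid[np.argmin(avg_cost)], avg_cost.min()

# Increase the scenario count until the SAA solution settles down -- the
# same logic used to decide how many scenarios adequately represent a problem.
for n in [10, 100, 1000, 10000]:
    x_star, cost = saa_solution(n)
    print(f"N={n:6d}: decision={x_star:6.2f}, estimated cost={cost:7.2f}")
```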
Journal: IISE Transactions Pages: 1004-1017 Issue: 10 Volume: 54 Year: 2022 Month: 7 X-DOI: 10.1080/24725854.2022.2030075 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2030075 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:10:p:1004-1017 Template-Type: ReDIF-Article 1.0 Author-Name: Kaan Unnu Author-X-Name-First: Kaan Author-X-Name-Last: Unnu Author-Name: Jennifer Pazour Author-X-Name-First: Jennifer Author-X-Name-Last: Pazour Title: Evaluating on-demand warehousing via dynamic facility location models Abstract: On-demand warehousing platforms match companies with underutilized warehouse and distribution capabilities with customers who need extra space or distribution services. These new business models have unique advantages, in terms of reduced capacity and commitment granularity, but also have different cost structures compared with traditional ways of obtaining distribution capabilities. This research is the first quantitative analysis to consider distribution network strategies given the advent of on-demand warehousing. Our multi-period facility location model – a mixed-integer linear program – simultaneously determines location-allocation decisions of three distribution center types (self-distribution, 3PL/lease, on-demand). A simulation model operationally evaluates the impact of the planned distribution strategy when various uncertainties can occur. Computational experiments for a company receiving products produced internationally to fulfill a set of regional customer demands illustrate that the power of on-demand warehousing is in creating hybrid network designs that more efficiently use self-distribution facilities through improved capacity utilization. However, the business case for on-demand warehousing is shown to be influenced by several factors, namely on-demand capacity availability, responsiveness requirements, and demand patterns. This work supports a firm’s use of on-demand warehousing if it has tight response requirements, for example for same-day delivery; however, if a firm has relaxed response requirements, then on-demand warehousing is only recommended if capacity availability of planned on-demand services is high. We also analyze capacity flexibility options leased by third-party logistics companies for a premium price and draw attention to the importance of their offering more granular solutions to stay competitive in the market. Journal: IISE Transactions Pages: 988-1003 Issue: 10 Volume: 54 Year: 2022 Month: 7 X-DOI: 10.1080/24725854.2021.2008066 File-URL: http://hdl.handle.net/10.1080/24725854.2021.2008066 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:10:p:988-1003 Template-Type: ReDIF-Article 1.0 Author-Name: Haitao Liu Author-X-Name-First: Haitao Author-X-Name-Last: Liu Author-Name: Jinpeng Liang Author-X-Name-First: Jinpeng Author-X-Name-Last: Liang Author-Name: Loo Hay Lee Author-X-Name-First: Loo Hay Author-X-Name-Last: Lee Author-Name: Ek Peng Chew Author-X-Name-First: Ek Peng Author-X-Name-Last: Chew Title: Unifying offline and online simulation for online decision-making Abstract: Stochastic simulation is typically deployed for offline system design and control; however, the time delay in executing simulation hinders its application in making online decisions. With the rapid growth of computing power, simulation-based online optimization has emerged as an attractive research topic.
We consider a problem of ranking and selection via simulation in the context of online decision-making, in which only a short time (referred to as the online budget) is available after online scenarios are observed. The goal is to select the best alternative conditional on each scenario. We propose a Unified Offline and Online Learning (UOOL) paradigm that exploits offline simulation, online scenarios, and the online simulation budget simultaneously. Specifically, we model the mean performance of each alternative as a function of scenarios and learn a predictive model based on offline data. Then, we develop a sequential sampling procedure to generate online simulation data. The predictive model is updated based on offline and online data. Our theoretical result shows that the online budget should be allocated to the revealed online scenario. Numerical experiments are conducted to demonstrate the superior performance of the UOOL paradigm and the benefits of offline and online simulation. Journal: IISE Transactions Pages: 923-935 Issue: 10 Volume: 54 Year: 2022 Month: 7 X-DOI: 10.1080/24725854.2021.2018739 File-URL: http://hdl.handle.net/10.1080/24725854.2021.2018739 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:10:p:923-935 Template-Type: ReDIF-Article 1.0 Author-Name: Nils Boysen Author-X-Name-First: Nils Author-X-Name-Last: Boysen Author-Name: Konrad Stephan Author-X-Name-First: Konrad Author-X-Name-Last: Stephan Author-Name: Felix Weidinger Author-X-Name-First: Felix Author-X-Name-Last: Weidinger Title: Efficient order consolidation in warehouses: The product-to-order-assignment problem in warehouses with sortation systems Abstract: To improve picking performance, many warehouses apply order batching and/or zoning in their picking areas. The former policy collects multiple customer orders jointly on a picker tour to increase picking density, and the latter partitions the picking area into smaller zones to enable parallel order processing. Both picking policies require an additional consolidation stage, where bins filled with partial orders arriving from multiple zones are sorted according to customer orders. To connect both stages, a conveyor system is applied on which the picked products, each being a piece of a specific Stock Keeping Unit (SKU), move from the picking area toward the consolidation stage. If multiple pieces of the product sequence, approaching the consolidation area on the conveyor, refer to the same SKU, these products are interchangeable among customer orders, and our product-to-order assignment problem arises: Given a product sequence where each product refers to some SKU, we assign products to customer orders, such that demands are fulfilled and order-related objectives, e.g., the sum of completion times, are optimized. We investigate different objectives for this very basic optimization task and show that some problem versions are solvable in polynomial time, whereas others turn out to be NP-hard. Furthermore, we provide exact and heuristic solution approaches. By applying these algorithms in a comprehensive simulation study, we show that our product-to-order assignment problem can be an impactful lever to improve consolidation performance.
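The product-to-order assignment problem in the Boysen, Stephan, and Weidinger abstract above can be illustrated with a simple greedy rule; the exact and heuristic algorithms in the paper are more elaborate. In this sketch, each piece arriving on the conveyor is assigned to the open order needing its SKU that is closest to completion; the sequence and the order data are invented.

```python
# Greedy product-to-order assignment along a conveyor sequence (toy example).
def assign(sequence, orders):
    """sequence: SKUs in conveyor order; orders: list of dicts SKU -> demand.
    Returns the completion position of each order (None if never completed)."""
    remaining = [dict(o) for o in orders]
    outstanding = [sum(o.values()) for o in orders]
    completion = [None] * len(orders)
    for pos, sku in enumerate(sequence, start=1):
        # Open orders still needing this SKU; pick the one closest to completion.
        cands = [i for i, o in enumerate(remaining) if o.get(sku, 0) > 0]
        if not cands:
            continue  # piece not demanded by any open order
        i = min(cands, key=lambda k: outstanding[k])
        remaining[i][sku] -= 1
        outstanding[i] -= 1
        if outstanding[i] == 0:
            completion[i] = pos
    return completion

seq = ["A", "B", "A", "C", "B", "A"]
orders = [{"A": 1, "B": 1}, {"A": 2, "C": 1}, {"B": 1}]
print(assign(seq, orders))  # e.g., [2, 6, 5] -> sum of completion times 13
```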
Journal: IISE Transactions Pages: 963-975 Issue: 10 Volume: 54 Year: 2022 Month: 7 X-DOI: 10.1080/24725854.2021.2004336 File-URL: http://hdl.handle.net/10.1080/24725854.2021.2004336 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:10:p:963-975 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_1987593_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220804T044749 git hash: 24b08f8188 Author-Name: Nathan Gaw Author-X-Name-First: Nathan Author-X-Name-Last: Gaw Author-Name: Safoora Yousefi Author-X-Name-First: Safoora Author-X-Name-Last: Yousefi Author-Name: Mostafa Reisi Gahrooei Author-X-Name-First: Mostafa Reisi Author-X-Name-Last: Gahrooei Title: Multimodal data fusion for systems improvement: A review Abstract: In recent years, information available from multiple data modalities has become increasingly common for industrial engineering and operations research applications. A number of research works have combined these data in unsupervised, supervised, and semi-supervised fashions and have addressed various issues of combining heterogeneous data, although several open challenges remain to be addressed. In this review paper, we provide an overview of some methods for the fusion of multimodal data. We provide detailed real-world examples in manufacturing and medicine, introduce early, late, and intermediate fusion, as well as discuss several approaches under decomposition-based and neural network fusion paradigms. We summarize the capabilities and limitations of these methods and conclude the review article by discussing the existing challenges and potential research opportunities. Journal: IISE Transactions Pages: 1098-1116 Issue: 11 Volume: 54 Year: 2022 Month: 11 X-DOI: 10.1080/24725854.2021.1987593 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1987593 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:11:p:1098-1116 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_1989093_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220804T044749 git hash: 24b08f8188 Author-Name: Muyue Han Author-X-Name-First: Muyue Author-X-Name-Last: Han Author-Name: Yiran Yang Author-X-Name-First: Yiran Author-X-Name-Last: Yang Author-Name: Lin Li Author-X-Name-First: Lin Author-X-Name-Last: Li Title: Techno-economic modeling of 4D printing with thermo-responsive materials towards desired shape memory performance Abstract: Four-dimensional (4D) printing enables the fabrication of smart materials with self-adaptations of shapes and properties over time in response to external stimuli, indicating potential applications in numerous areas such as aerospace, healthcare, and automotive. Evaluating the techno-economic feasibility is key to enhancing the technology readiness level of 4D printing. In the current literature, studies have been conducted to understand the 3D printing process mechanism and associated cost; however, they are not applicable to 4D printing due to the much-increased complexity of the intercorrelated relationships between material compositions, process parameters across multiple stages, the stimuli-response mechanisms along the added time dimension, and 4D printing cost.
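The early and late fusion paradigms surveyed in the Gaw, Yousefi, and Reisi Gahrooei review above differ mainly in where the modalities are combined. The sketch below contrasts the two on synthetic two-modality data, with scikit-learn assumed available; it is illustrative only.

```python
# Early fusion (concatenate features) vs. late fusion (average predictions).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 200
x_img = rng.normal(size=(n, 5))     # stand-in for an imaging modality
x_sen = rng.normal(size=(n, 3))     # stand-in for a sensor modality
y = (x_img[:, 0] + x_sen[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)

# Early fusion: a single classifier on the concatenated feature vector.
early = LogisticRegression().fit(np.hstack([x_img, x_sen]), y)

# Late fusion: one classifier per modality, probabilities averaged.
m1 = LogisticRegression().fit(x_img, y)
m2 = LogisticRegression().fit(x_sen, y)
late = 0.5 * (m1.predict_proba(x_img)[:, 1] + m2.predict_proba(x_sen)[:, 1])

print("early-fusion accuracy:", early.score(np.hstack([x_img, x_sen]), y))
print("late-fusion accuracy: ", ((late > 0.5) == y).mean())
```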
In this research, a techno-economic model is established to quantify the cost of 4D printing with methacrylate-based thermo-responsive polymers, embedded with explicit relations between cost and the material solidification chemistry and shape memory properties. A nonlinear optimization problem is formulated, resulting in a set of process parameters that can lead to a 22.25% reduction in the total cost per part without sacrificing the desired shape memory performance. A sensitivity analysis is conducted to investigate market-dependent and operator-oriented parameters in 4D printing. Two primary cost drivers are identified, i.e., the raw material unit price and the operator’s hourly rate. Journal: IISE Transactions Pages: 1047-1059 Issue: 11 Volume: 54 Year: 2022 Month: 11 X-DOI: 10.1080/24725854.2021.1989093 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1989093 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:11:p:1047-1059 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_1972184_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220804T044749 git hash: 24b08f8188 Author-Name: Congshan Wu Author-X-Name-First: Congshan Author-X-Name-Last: Wu Author-Name: Rong Pan Author-X-Name-First: Rong Author-X-Name-Last: Pan Author-Name: Xian Zhao Author-X-Name-First: Xian Author-X-Name-Last: Zhao Title: Reliability assessment of multi-state performance sharing systems with transmission loss and random shocks Abstract: In this article, a performance sharing system with transmission loss and a shock operation environment is studied. Such systems are widely found in power distribution systems, distributed computing systems, data transmission systems, communication systems, and so on. The system consists of n components and each of them works to satisfy its demand and shares its performance surplus with others through a common bus. When the system operates, it may suffer a variety of stresses from its operating environment, which can be regarded as random external shocks, and transmission loss is also widely seen in engineering systems. Therefore, random shocks and transmission loss are both considered in this article. The performance level of a component is affected by three types of random external shocks – invalid shocks, valid shocks, and extreme shocks. The system fails if at least one component cannot satisfy its demand. A finite Markov chain imbedding approach and phase-type distributions are used to estimate the performance level for each component and the universal generating function technique is applied to analyze system reliability. Analysis of a power distribution system is given to show the application of the model under study and the effectiveness of the proposed method. Journal: IISE Transactions Pages: 1060-1071 Issue: 11 Volume: 54 Year: 2022 Month: 11 X-DOI: 10.1080/24725854.2021.1972184 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1972184 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:54:y:2022:i:11:p:1060-1071 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_1974129_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220804T044749 git hash: 24b08f8188 Author-Name: Jaesung Lee Author-X-Name-First: Jaesung Author-X-Name-Last: Lee Author-Name: Chao Wang Author-X-Name-First: Chao Author-X-Name-Last: Wang Author-Name: Xiaoyu Sui Author-X-Name-First: Xiaoyu Author-X-Name-Last: Sui Author-Name: Shiyu Zhou Author-X-Name-First: Shiyu Author-X-Name-Last: Zhou Author-Name: Junhong Chen Author-X-Name-First: Junhong Author-X-Name-Last: Chen Title: Landmark-embedded Gaussian process with applications for functional data modeling Abstract: In practice, we often need to infer the value of a target variable from functional observation data. A challenge in this task is that the relationship between the functional data and the target variable is very complex: the target variable influences not only the shape but also the location of the functional data. In addition, due to the uncertainties in the environment, the relationship is probabilistic, that is, for a given fixed target variable value, we still see variations in the shape and location of the functional data. To address this challenge, we present a landmark-embedded Gaussian process model that describes the relationship between the functional data and the target variable. A unique feature of the model is that landmark information is embedded in the Gaussian process model so that both the shape and location information of the functional data are considered simultaneously in a unified manner. A Gibbs–Metropolis–Hastings algorithm is used for model parameter estimation and target variable inference. The performance of the proposed framework is evaluated by extensive numerical studies and a case study of nano-sensor calibration. Journal: IISE Transactions Pages: 1033-1046 Issue: 11 Volume: 54 Year: 2022 Month: 11 X-DOI: 10.1080/24725854.2021.1974129 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1974129 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:11:p:1033-1046 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_1968079_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220804T044749 git hash: 24b08f8188 Author-Name: Haitao Liu Author-X-Name-First: Haitao Author-X-Name-Last: Liu Author-Name: Hui Xiao Author-X-Name-First: Hui Author-X-Name-Last: Xiao Author-Name: Haobin Li Author-X-Name-First: Haobin Author-X-Name-Last: Li Author-Name: Loo Hay Lee Author-X-Name-First: Loo Hay Author-X-Name-Last: Lee Author-Name: Ek Peng Chew Author-X-Name-First: Ek Peng Author-X-Name-Last: Chew Title: Offline sequential learning via simulation Abstract: Simulation has been widely used for static system designs, but it is rarely used in making online decisions, due to the time delay of executing simulation. We consider a system with stochastic binary outcomes that can be predicted via a logistic model depending on scenarios and decisions. The goal is to identify all feasible decisions conditional on any online scenario. We propose to learn offline the relationship among scenarios, decisions, and binary outcomes. An Information Gradient (IG) policy is developed to sequentially allocate the offline simulation budget. We show that the maximum likelihood estimator produced via the IG policy is consistent and asymptotically normal.
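The idea of steering simulation budget by Fisher information, as in the Liu, Xiao, Li, Lee, and Chew abstract above, can be sketched for a logistic model: the information contributed by a design point x is p(1-p)xx', which peaks near the decision boundary. The code below uses a D-optimal-style log-determinant gain and a ridge-penalized Newton update as stand-ins; it is not a reproduction of the paper's Information Gradient policy, and every parameter is an assumption.

```python
# Fisher-information-guided sequential design for a logistic model (sketch).
import numpy as np

rng = np.random.default_rng(3)
theta_true = np.array([1.5, -1.0])
designs = np.column_stack([np.ones(21), np.linspace(-3, 3, 21)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lam = 0.1               # small L2 penalty keeps early MLE updates stable
X, y = [], []
theta = np.zeros(2)
info = lam * np.eye(2)  # accumulated Fisher information matrix
for t in range(200):
    # Choose the candidate with the largest log-det information gain.
    p = sigmoid(designs @ theta)
    gains = [np.linalg.slogdet(info + pi * (1 - pi) * np.outer(d, d))[1]
             for pi, d in zip(p, designs)]
    d = designs[int(np.argmax(gains))]
    X.append(d)
    y.append(float(rng.random() < sigmoid(d @ theta_true)))  # one "run"
    Xa, ya = np.array(X), np.array(y)
    for _ in range(5):  # a few Newton steps on the penalized log-likelihood
        ph = sigmoid(Xa @ theta)
        grad = Xa.T @ (ya - ph) - lam * theta
        hess = Xa.T @ (Xa * (ph * (1 - ph))[:, None]) + lam * np.eye(2)
        theta = theta + np.linalg.solve(hess, grad)
    info = hess

print("estimated theta:", theta.round(2), " true:", theta_true)
```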
Numerical results on synthetic data and a case study demonstrate the superior performance of the IG policy over benchmark policies. Moreover, we find that the IG policy tends to sample locations near the boundaries of the design space, due to their higher Fisher information, and that the time complexity of the IG policy is linear in the number of design points and the simulation budget. Journal: IISE Transactions Pages: 1019-1032 Issue: 11 Volume: 54 Year: 2022 Month: 11 X-DOI: 10.1080/24725854.2021.1968079 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1968079 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:11:p:1019-1032 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_1987592_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220804T044749 git hash: 24b08f8188 Author-Name: Lujia Wang Author-X-Name-First: Lujia Author-X-Name-Last: Wang Author-Name: Todd J. Schwedt Author-X-Name-First: Todd J. Author-X-Name-Last: Schwedt Author-Name: Catherine D. Chong Author-X-Name-First: Catherine D. Author-X-Name-Last: Chong Author-Name: Teresa Wu Author-X-Name-First: Teresa Author-X-Name-Last: Wu Author-Name: Jing Li Author-X-Name-First: Jing Author-X-Name-Last: Li Title: Discriminant subgraph learning from functional brain sensory data Abstract: The human brain is a complex system with many functional units interacting with each other. This interacting relationship, known as the Functional Connectivity Network (FCN), is critical for brain functions. To learn the FCN, machine learning algorithms can be built based on brain signals captured by sensing technologies such as EEG and fMRI. In neurological diseases, past research has revealed that the FCN is altered. Also, focusing on a specific disease, some part of the FCN, i.e., a sub-network, can be more susceptible than other parts. However, the current knowledge about disease-specific sub-networks is limited. We propose a novel Discriminant Subgraph Learner (DSL) to identify a functional sub-network that best differentiates patients with a specific disease from healthy controls based on brain sensory data. We develop an integrated optimization framework for DSL to simultaneously learn the FCN of each class and identify the discriminant sub-network. Further, we develop tractable and converging algorithms to solve the optimization. We apply DSL to identify a functional sub-network that best differentiates patients with episodic migraine from healthy controls based on an fMRI dataset. DSL achieved the best accuracy compared to five state-of-the-art competing algorithms. Journal: IISE Transactions Pages: 1084-1097 Issue: 11 Volume: 54 Year: 2022 Month: 11 X-DOI: 10.1080/24725854.2021.1987592 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1987592 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:54:y:2022:i:11:p:1084-1097 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_1973156_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220804T044749 git hash: 24b08f8188 Author-Name: Jianyu Xu Author-X-Name-First: Jianyu Author-X-Name-Last: Xu Author-Name: Xiujie Zhao Author-X-Name-First: Xiujie Author-X-Name-Last: Zhao Author-Name: Bin Liu Author-X-Name-First: Bin Author-X-Name-Last: Liu Title: A risk-aware maintenance model based on a constrained Markov decision process Abstract: The Markov Decision Process (MDP) model has been widely studied and used in sequential decision-making problems. In particular, it has proven effective in maintenance policy optimization problems where the system state is assumed to continuously evolve under sequential maintenance policies. In traditional MDP models for maintenance, the long-run expected total discounted cost is taken as the objective function. The maintenance manager’s target is to find an optimal policy that incurs the minimum expected total discounted cost through the corresponding MDP model. However, a significant drawback of these existing MDP-based maintenance strategies is that they fail to incorporate and characterize the safety issues of the system during the maintenance process. Therefore, in some applications that are sensitive to functional risks, such strategies fail to accommodate the requirement of risk awareness. In this study, we apply the concept of risk-aversion in the MDP maintenance model to develop risk-aware maintenance policies. Specifically, we use risk functions to measure indices of the system that reflect the safety level and formulate a safety constraint. Then, we cast the problem as a constrained MDP model and use a linear programming approach to compute the proposed risk-aware optimal maintenance policy. Journal: IISE Transactions Pages: 1072-1083 Issue: 11 Volume: 54 Year: 2022 Month: 11 X-DOI: 10.1080/24725854.2021.1973156 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1973156 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:11:p:1072-1083 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2030074_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Farhad Hasankhani Author-X-Name-First: Farhad Author-X-Name-Last: Hasankhani Author-Name: Amin Khademi Author-X-Name-First: Amin Author-X-Name-Last: Khademi Title: Proportionally fair organ transplantation allocation rules Abstract: We introduce a new fairness measure for designing organ allocation rules for transplantation. In particular, we apply the proportional fairness measure for organ transplantation whose solution can be interpreted as the solution to the Nash bargaining problem for sharing limited donor organs among patients who seek to maximize their Quality-Adjusted Life-Years (QALYs). The motivation arises from several observations that current measures of fairness induce significant inefficiencies in terms of total QALYs of the patient population. We use the asymptotic results for the fluid approximation of a transplant queuing system to estimate the expected utility of patients and formulate an optimization problem where the decision maker partitions the set of organ types to achieve a proportionally fair objective.
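Proportional fairness, as adopted in the Hasankhani and Khademi abstract above, maximizes the sum of log utilities, which coincides with the Nash bargaining objective. A minimal sketch with invented affine QALY utilities and a unit organ supply, scipy assumed available:

```python
# Proportionally fair allocation = maximize sum of log expected QALYs.
import numpy as np
from scipy.optimize import minimize

a = np.array([5.0, 3.0, 1.5])  # assumed QALY gain per unit of organ supply
b = np.array([0.5, 1.0, 2.0])  # assumed QALYs without a transplant

def neg_nash(x):
    return -np.sum(np.log(a * x + b))

res = minimize(neg_nash, x0=np.full(3, 1.0 / 3.0),
               bounds=[(0.0, 1.0)] * 3,
               constraints={"type": "eq", "fun": lambda x: x.sum() - 1.0})
print("proportionally fair shares:", res.x.round(3))
# Groups that gain more per organ (large a) and are worse off without one
# (small b) receive larger shares, trading off efficiency against equity.
```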
We use an achievable region approach to transform our formulation to an alternative optimization problem and show that the optimal allocation policy under the proportional fairness measure is assortative, which yields the following insight: higher-quality organs are allocated to patients who will have higher expected QALYs. We compare the performance of our proposed policy with that of allocation rules developed for fairness purposes in organ allocation via a validated simulation model for heart transplantation in the US. Journal: IISE Transactions Pages: 1131-1142 Issue: 12 Volume: 54 Year: 2022 Month: 9 X-DOI: 10.1080/24725854.2022.2030074 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2030074 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:12:p:1131-1142 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2040760_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Yossi Bukchin Author-X-Name-First: Yossi Author-X-Name-Last: Bukchin Author-Name: Eran Hanany Author-X-Name-First: Eran Author-X-Name-Last: Hanany Author-Name: Yigal Gerchak Author-X-Name-First: Yigal Author-X-Name-Last: Gerchak Title: Coordination of the decentralized concurrent open-shop Abstract: In a concurrent open-shop, several jobs have to be completed, where each job consists of multiple components that are processed simultaneously by different dedicated machines. We assume that the components are sequenced on each machine in a decentralized manner, and analyze the resulting coordination problem under the objective of minimizing the weighted sum of disutility of completion times. The decentralized system is modeled as a non-cooperative game for two environments: (i) local completion times, where each machine considers only the completion times of its components, disregarding the other machines; and (ii) global completion times, where each machine considers the job completion times from the perspective of the system, i.e., when all components of each job are completed. Tight bounds are provided on the inefficiency that might occur in the decentralized system, showing potentially severe efficiency loss in both environments. We propose and investigate scheduling-based, coordinating job-weighting mechanisms that use concise information, showing impossibility in the local completion times environment and possibility, using the related weights mechanism, in the global completion times environment. These results extend to a setting with incomplete information in which only the distribution of the processing times is commonly known, and each machine is additionally informed about its own processing times. Journal: IISE Transactions Pages: 1172-1185 Issue: 12 Volume: 54 Year: 2022 Month: 9 X-DOI: 10.1080/24725854.2022.2040760 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2040760 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:12:p:1172-1185 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2031351_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Görkem Emirhüseyinoğlu Author-X-Name-First: Görkem Author-X-Name-Last: Emirhüseyinoğlu Author-Name: Sarah M. Ryan Author-X-Name-First: Sarah M.
Author-X-Name-Last: Ryan Title: Farm management optimization under uncertainty with impacts on water quality and economic risk Abstract: Farm management decisions under uncertainty are important, not only for farmers trying to maximize their net income, but also for policy makers responsible for incentives and regulations to achieve environmental goals. We focus on corn production as a significant contributor to the economy of the US Midwest. Nitrogen is one of the key nutrients needed to increase production efficiency, but its leaching and loss as nitrate through subsurface flow and agricultural drainage systems pose a threat to water quality. We build a novel two-stage stochastic mixed-integer program to find the annual farm management decisions that maximize the expected farm profit. A decomposition-based solution strategy is suggested to reduce the computational complexity resulting from the predominance of binary variables and complicated constraints. Case study results indicate that farmers may compensate for the additional risks associated with nutrient reduction strategies by increasing the planned nitrogen application rate. Significant financial incentives would be required for farmers to achieve substantial reductions in nitrate loss by fertilizer management alone. The complicated interactions between fertilizer management and crop insurance decisions observed in the numerical study suggest that crop insurance programs can affect water quality by influencing the adoption of environmentally beneficial practices. Journal: IISE Transactions Pages: 1143-1160 Issue: 12 Volume: 54 Year: 2022 Month: 9 X-DOI: 10.1080/24725854.2022.2031351 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2031351 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:12:p:1143-1160 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2026539_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Seyed Ali MirHassani Author-X-Name-First: Seyed Ali Author-X-Name-Last: MirHassani Author-Name: Fatemeh Garmroudi Author-X-Name-First: Fatemeh Author-X-Name-Last: Garmroudi Author-Name: Farnaz Hooshmand Author-X-Name-First: Farnaz Author-X-Name-Last: Hooshmand Title: Modeling and solution algorithm for a disaster management problem based on Benders decomposition Abstract: Pre-disaster planning and management activities may have significant effects on reducing post-disaster damages. In this article, a two-stage stochastic programming model is provided to design a resilient rescue network, assuming that the demands for relief items and the network functionality after the disaster are affected by uncertainty. Locations and capacities of relief centers, the inventory of relief items, and strengthening vulnerable arcs of the network are among the main decisions that must be taken before the disaster. Servicing the affected points is decided after the disaster, and the risk of not satisfying demands is controlled by using the conditional-value-at-risk measure. Since solving the model directly is intractable and time-consuming on realistic large-sized instances, an improved Benders decomposition algorithm that exploits the problem structure is proposed to overcome this difficulty. Computational results highlight the effectiveness of the proposed method compared to the existing approaches.
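The Benders decomposition pattern referenced in the MirHassani, Garmroudi, and Hooshmand abstract above alternates between a relaxed master problem and cuts derived from the recourse subproblems. Below is a minimal L-shaped sketch for a toy two-stage capacity problem, min 2y + E[10 max(d - y, 0)], not the paper's improved algorithm; the scenario demands and costs are invented, and scipy's linprog solves the master.

```python
# Toy L-shaped (Benders) loop for a two-stage stochastic program.
import numpy as np
from scipy.optimize import linprog

demands = np.array([2.0, 5.0, 9.0])  # equally likely scenarios (assumed)
c, p = 2.0, 10.0                     # capacity cost, unit shortage penalty
cuts = []                            # optimality cuts: theta >= a*y + b

for it in range(20):
    # Master: min c*y + theta subject to all cuts, y in [0, 10], theta >= 0.
    A_ub = [[a, -1.0] for a, b in cuts] or None
    b_ub = [-b for a, b in cuts] or None
    res = linprog([c, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0.0, 10.0), (0.0, None)])
    y_hat, theta_hat = res.x
    # Subproblems: expected recourse cost and a subgradient at y_hat.
    q = p * np.maximum(demands - y_hat, 0.0)
    if q.mean() <= theta_hat + 1e-8:
        break  # the master's estimate matches the true recourse: optimal
    a = -p * np.mean(demands > y_hat)
    cuts.append((a, q.mean() - a * y_hat))

print(f"converged in {it + 1} iterations: y* = {y_hat:.2f}")
```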
Journal: IISE Transactions Pages: 1161-1171 Issue: 12 Volume: 54 Year: 2022 Month: 9 X-DOI: 10.1080/24725854.2022.2026539 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2026539 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:12:p:1161-1171 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2041773_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Saeed Poormoaied Author-X-Name-First: Saeed Author-X-Name-Last: Poormoaied Author-Name: Zümbül Atan Author-X-Name-First: Zümbül Author-X-Name-Last: Atan Author-Name: Tom van Woensel Author-X-Name-First: Tom Author-X-Name-Last: van Woensel Title: Quantity-based emergency shipment policies Abstract: Emergency shipments, arranged either structurally or in an ad hoc manner, are widely utilized to prevent the potentially negative impacts of stock-outs. In addition to having policies for their regular orders, companies need to decide when and for how many units to place emergency orders. In this article, we consider the periodic-review problem of a retailer who uses a quantity-based emergency shipment policy. Under this policy, an emergency order is triggered if the inventory level falls below a certain threshold within the review period. We consider the base-stock policy for the regular replenishment orders. The goal is to determine the optimal base-stock level and period length, as well as the optimal size and threshold value of emergency orders, such that the total expected cost rate is minimized. We use renewal reward theory to derive expressions for the operating characteristics. The expected cost rate’s properties are analyzed to develop an optimization algorithm for finding the optimal policy parameters. We compare our policy with the time-based and hybrid emergency shipment policies and conclude that the quantity-based policy can bring substantial benefits compared to other policies. Journal: IISE Transactions Pages: 1186-1198 Issue: 12 Volume: 54 Year: 2022 Month: 9 X-DOI: 10.1080/24725854.2022.2041773 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2041773 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:12:p:1186-1198 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2045045_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Fei Ye Author-X-Name-First: Fei Author-X-Name-Last: Ye Author-Name: Lunhai Liang Author-X-Name-First: Lunhai Author-X-Name-Last: Liang Author-Name: Yang Tong Author-X-Name-First: Yang Author-X-Name-Last: Tong Author-Name: Guangyi Xu Author-X-Name-First: Guangyi Author-X-Name-Last: Xu Author-Name: Zefei Xie Author-X-Name-First: Zefei Author-X-Name-Last: Xie Title: Brick-and-mortar or brick-and-click? The influence of online customer reviews on a retailer’s channel strategy Abstract: Adding an online channel generates more information about consumers’ preferences for particular products (e.g., through online customer reviews), and this information can be used to forecast demand more precisely. In this article, we investigate the influence of online customer reviews on a retailer’s decision to add an online channel to its physical store channel, in a supply chain consisting of a brick-and-mortar retailer and a third-party logistics provider.
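The quantity-based trigger in the Poormoaied, Atan, and van Woensel abstract above is straightforward to mimic in simulation, even though the paper derives the operating characteristics analytically through renewal reward theory. A toy discrete-time sketch, assuming zero emergency lead time and invented cost and demand parameters:

```python
# Base-stock policy with a quantity-based emergency shipment trigger (toy).
import numpy as np

rng = np.random.default_rng(4)

def avg_cost(S, T, s_e, q_e, h=1.0, p=20.0, K_e=50.0, periods=20000):
    total, inv = 0.0, S
    for _ in range(periods):
        for _ in range(T):                # days within one review period
            inv -= rng.poisson(2.0)       # daily demand
            if inv < s_e:                 # quantity-based emergency trigger
                inv += q_e
                total += K_e              # fixed emergency-shipment cost
            total += h * max(inv, 0) + p * max(-inv, 0)
        inv = S                           # regular order-up-to at review
    return total / (periods * T)

print("average cost per day:", round(avg_cost(S=15, T=5, s_e=3, q_e=8), 2))
```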
We find that the retailer’s optimal channel strategy (store only or online plus store) will be jointly affected by the additional cost of establishing an online channel, the increase in market size that is attained by adding the online channel, and the additional value of information revealed by online customer reviews. Interestingly, under both differentiated and uniform pricing strategies, when the fixed cost is moderate, the retailer benefits from adding an online channel if the additional value of information and the increase in market size exceed some thresholds; otherwise, doing so cannot increase the retailer’s profit. However, when online customer reviews exert two effects on updating demand uncertainty, negative (positive) customer reviews will reduce (increase) the value of information, and the retailer will be inhibited from adding (induced to add) an online channel. Moreover, a larger degree of risk aversion on the part of the retailer will inhibit the retailer from adding an online channel. Journal: IISE Transactions Pages: 1199-1210 Issue: 12 Volume: 54 Year: 2022 Month: 9 X-DOI: 10.1080/24725854.2022.2045045 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2045045 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:54:y:2022:i:12:p:1199-1210 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2001608_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Zhaolin Hu Author-X-Name-First: Zhaolin Author-X-Name-Last: Hu Author-Name: Wenjie Sun Author-X-Name-First: Wenjie Author-X-Name-Last: Sun Author-Name: Shushang Zhu Author-X-Name-First: Shushang Author-X-Name-Last: Zhu Title: Chance constrained programs with Gaussian mixture models Abstract: In this article, we discuss input modeling and solution techniques for several classes of Chance Constrained Programs (CCPs). We propose to use a Gaussian Mixture Model (GMM) to fit the available data and to model the randomness. We demonstrate the merits of using a GMM. We consider several scenarios that arise from practical applications and analyze how the problem structures could embrace alternative optimization techniques. More specifically, for several scenarios, we study how to assess the gradient of the chance constraint and incorporate the results into gradient-based nonlinear optimization algorithms, and for a class of CCPs, we propose a spatial branch-and-bound procedure and solve the problems to global optimality. We also conduct numerical experiments to test the efficiency of our approach and present a hedge fund portfolio example to illustrate the practical application of the method. Journal: IISE Transactions Pages: 1117-1130 Issue: 12 Volume: 54 Year: 2022 Month: 9 X-DOI: 10.1080/24725854.2021.2001608 File-URL: http://hdl.handle.net/10.1080/24725854.2021.2001608 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
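Under the Gaussian Mixture Model input proposed in the Hu, Sun, and Zhu abstract above, a linear chance constraint can be evaluated in closed form as a weighted sum of normal CDFs, which is also what makes its gradient tractable. A minimal sketch with an invented two-component mixture:

```python
# Chance constraint P(xi' x <= b) under a Gaussian Mixture Model input.
import numpy as np
from scipy.stats import norm

weights = np.array([0.7, 0.3])                            # mixture weights
means = [np.array([0.05, 0.08]), np.array([-0.02, 0.01])]
covs = [np.diag([0.01, 0.02]), np.diag([0.04, 0.03])]

def prob_satisfied(x, b):
    """P(xi' x <= b) = sum_k pi_k * Phi((b - mu_k' x) / sqrt(x' Sigma_k x))."""
    total = 0.0
    for pi_k, mu_k, cov_k in zip(weights, means, covs):
        s = np.sqrt(x @ cov_k @ x)
        total += pi_k * norm.cdf((b - mu_k @ x) / s)
    return total

x = np.array([0.6, 0.4])  # e.g., hypothetical portfolio weights
print("P(constraint holds):", round(prob_satisfied(x, b=0.2), 4))
```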
Handle: RePEc:taf:uiiexx:v:54:y:2022:i:12:p:1117-1130 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2046893_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Xiaohong Chen Author-X-Name-First: Xiaohong Author-X-Name-Last: Chen Author-Name: Tianhu Deng Author-X-Name-First: Tianhu Author-X-Name-Last: Deng Author-Name: Zuo-Jun Max Shen Author-X-Name-First: Zuo-Jun Max Author-X-Name-Last: Shen Author-Name: Yi Yu Author-X-Name-First: Yi Author-X-Name-Last: Yu Title: Mind the gap between research and practice in operations management Abstract: The mission of the Institute of Industrial and Systems Engineers (IISE) is to serve those who solve complex and critical problems of the world. Notably, the research–practice gap in Operations Management (OM) marginalizes the value and relevance of the IISE. To maintain and enhance the impact of the IISE, we identify major bottlenecks that limit the industrial installation of OM research outcomes. Ranked by relative importance, the three bottlenecks are verifying the performance improvement, building trust with practitioners, and balancing model accuracy and simplicity, arising respectively in the stages of value verification, implementation, and development. We propose potential research opportunities and illustrate the challenges and opportunities using real case studies from three Fortune Global 500 companies. In particular, we emphasize the role of data-driven decision methods in dealing with the three bottlenecks. Journal: IISE Transactions Pages: 32-42 Issue: 1 Volume: 55 Year: 2023 Month: 1 X-DOI: 10.1080/24725854.2022.2046893 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2046893 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:1:p:32-42 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2070801_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Elise Miller-Hooks Author-X-Name-First: Elise Author-X-Name-Last: Miller-Hooks Title: Constructs in infrastructure resilience framing – from components to community services and the built and human infrastructures on which they rely Abstract: This article describes five constructs for framing infrastructure resilience estimation. These constructs range from the consideration of a single component to a community service provided through a set of buildings whose functionality relies on interdependent supporting lifelines. A key aim is to explore how the construct that is adopted affects resilience understanding. It discusses the value of reframing the resilience computation around services that are provided by built environments rather than around the built systems themselves. The built environment would provide little in the way of services if not for human involvement and other needed resources. A construct for framing resilience is expanded to incorporate the role of humans as infrastructure, as well as permanent and consumable limiting resources, in creating service capacity. Taking a service-based viewpoint induces a change in perspective with rippling impact. It affects the choice of metrics for measuring resilience, adaptation strategies to include in assessment, baselines for comparison, and elements of the built environment to incorporate in the evaluation. It necessitates consideration of socio-technical concerns.
It also brings hidden issues of inequity to the foreground. This article suggests that underlying many resilience studies is an implicit construct for framing resilience, and explores how the construct affects and enables resilience understanding. Journal: IISE Transactions Pages: 43-56 Issue: 1 Volume: 55 Year: 2023 Month: 1 X-DOI: 10.1080/24725854.2022.2070801 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2070801 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:1:p:43-56 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2106391_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Yu Ding Author-X-Name-First: Yu Author-X-Name-Last: Ding Title: Editorial: Perspectives of ISE/OR researchers Journal: IISE Transactions Pages: 1-1 Issue: 1 Volume: 55 Year: 2023 Month: 1 X-DOI: 10.1080/24725854.2022.2106391 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2106391 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:1:p:1-1 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2045392_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Laura A. Albert Author-X-Name-First: Laura A. Author-X-Name-Last: Albert Author-Name: Alexander Nikolaev Author-X-Name-First: Alexander Author-X-Name-Last: Nikolaev Author-Name: Sheldon H. Jacobson Author-X-Name-First: Sheldon H. Author-X-Name-Last: Jacobson Title: Homeland security research opportunities Abstract: Homeland security research has gone through a significant transformation since the events of September 11, 2001, and continues to evolve. This article identifies opportunities that the industrial engineering and operations research communities can seize. By drawing together insights from thought leaders in these communities, a path outlining research problems and discovery is provided that will serve to guide industrial engineering and operations research innovations and help move homeland security research forward over the next decade. Journal: IISE Transactions Pages: 22-31 Issue: 1 Volume: 55 Year: 2023 Month: 1 X-DOI: 10.1080/24725854.2022.2045392 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2045392 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:1:p:22-31 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2080892_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Ozlem Ergun Author-X-Name-First: Ozlem Author-X-Name-Last: Ergun Author-Name: Wallace J. Hopp Author-X-Name-First: Wallace J. Author-X-Name-Last: Hopp Author-Name: Pinar Keskinocak Author-X-Name-First: Pinar Author-X-Name-Last: Keskinocak Title: A structured overview of insights and opportunities for enhancing supply chain resilience Abstract: Widespread product shortages during the COVID-19 pandemic and other emergencies have prompted several large studies of how to make supply chains more resilient. In this article we leverage these studies, as well as the academic literature, to provide a review of our state of knowledge about supply chain resilience. 
To do this, we (i) classify the failure modes of a supply chain, (ii) quantitatively evaluate the level of resilience needed in a supply chain to achieve desired business or societal outcomes, (iii) describe a structured framework of actions to enhance supply chain resilience, and (iv) use the resulting conceptual paradigm to review the academic literature on supply chain risk and resilience. In each step, we summarize key insights from our current state of understanding, as well as gaps that present opportunities for research and practice. Journal: IISE Transactions Pages: 57-74 Issue: 1 Volume: 55 Year: 2023 Month: 1 X-DOI: 10.1080/24725854.2022.2080892 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2080892 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:1:p:57-74 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2059725_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Jianjun Shi Author-X-Name-First: Jianjun Author-X-Name-Last: Shi Title: In-process quality improvement: Concepts, methodologies, and applications Abstract: This article presents the concepts, methodologies, and applications of In-Process Quality Improvement (IPQI) in complex manufacturing systems. As opposed to traditional quality control concepts that emphasize process change detection, acceptance sampling, and offline designed experiments, IPQI focuses on integrating data science and system theory, taking full advantage of in-process sensing data to achieve process monitoring, diagnosis, and control. The implementation of IPQI leads to root cause diagnosis (in addition to change detection), automatic compensation (in addition to off-line adjustment), and defect prevention (in addition to defect inspection). The methodologies of IPQI have been developed and implemented in various manufacturing processes. This paper provides a brief historical review of the IPQI, summarizes the developments and applications of IPQI methodologies, and discusses some challenges and opportunities in the current data-rich manufacturing systems. Future research directions are discussed at the end of the article with a special focus on leveraging emerging machine learning tools to address quality improvements in data-rich advanced manufacturing systems. Journal: IISE Transactions Pages: 2-21 Issue: 1 Volume: 55 Year: 2023 Month: 1 X-DOI: 10.1080/24725854.2022.2059725 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2059725 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:1:p:2-21 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2089785_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Satish T.S. Bukkapatnam Author-X-Name-First: Satish T.S. Author-X-Name-Last: Bukkapatnam Title: Autonomous materials discovery and manufacturing (AMDM): A review and perspectives Abstract: This article presents an overview of the emerging themes in Autonomous Materials Discovery and Manufacturing (AMDM). This interdisciplinary field is garnering a growing interest among the scientists and engineers in the materials and manufacturing domains as well as those in the Artificial Intelligence (AI) and data sciences domains, and it offers immense research potential for the industrial systems engineering (ISE) and manufacturing fields. 
Although there are a few reviews related to this topic, they have focused exclusively on sequential experimentation techniques, AI/machine learning applications, or materials synthesis processes. In contrast, this review treats AMDM as a cyberphysical system, comprising an intelligent software brain that incorporates various computational models and sequential experimentation strategies, and a hardware body that integrates equipment platforms for materials synthesis with measurement and testing capabilities. This review offers a balanced perspective of the software and the hardware components of an AMDM system, and discusses the current state of the art and the emerging challenges at the nexus of manufacturing/materials sciences and AI/data sciences in this nascent, exciting area. Journal: IISE Transactions Pages: 75-93 Issue: 1 Volume: 55 Year: 2023 Month: 1 X-DOI: 10.1080/24725854.2022.2089785 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2089785 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:1:p:75-93 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2092918_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Brian T. Denton Author-X-Name-First: Brian Author-X-Name-Last: T. Denton Title: Frontiers of medical decision-making in the modern age of data analytics Abstract: Recent decades have seen considerable advances in developing Industrial Engineering/Operations Research (IE/OR) models for improving decision-making in healthcare. These approaches span the full range of descriptive, predictive, and prescriptive models for supporting patients' and clinicians' decision-making. The pervasive use of information technology to collect and store electronic health records, insurance claims, genomic information, and other observational data has opened new doors for developing, validating, and applying these types of data-driven IE/OR models. This article describes opportunities at the frontier of medical decision-making, emphasizing the intersection of medicine, data analytics, and operations research. Many of the examples covered intersect the fields of statistics, machine learning, and artificial intelligence. A series of motivating examples illustrate the possibilities and some promising future research directions. Journal: IISE Transactions Pages: 94-105 Issue: 1 Volume: 55 Year: 2023 Month: 1 X-DOI: 10.1080/24725854.2022.2092918 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2092918 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:1:p:94-105 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2022815_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Sixiang Zhao Author-X-Name-First: Sixiang Author-X-Name-Last: Zhao Author-Name: William B. Haskell Author-X-Name-First: William B. Author-X-Name-Last: Haskell Author-Name: Michel-Alexandre Cardin Author-X-Name-First: Michel-Alexandre Author-X-Name-Last: Cardin Title: A flexible system design approach for multi-facility capacity expansion problems with risk aversion Abstract: This article studies a model for risk aversion when designing a flexible capacity expansion plan for a multi-facility system. In this setting, the decision maker can dynamically expand the capacity of each facility given observations of uncertain demand.
We model this situation as a multi-stage stochastic programming problem, and we express risk aversion through the Conditional Value-at-Risk (CVaR) and a mean-CVaR objective. We optimize the multi-stage problem over a tractable family of if–then decision rules using a decomposition algorithm. This algorithm decomposes the stochastic program over scenarios and updates the solutions via the subgradients of the function of cumulative future costs. To illustrate the practical effectiveness of this method, we present a numerical study of a decentralized waste-to-energy system in Singapore. The simulation results show that the risk-averse model can reduce the tail risk of investment losses by adjusting the weight factors of the mean-CVaR objective. The simulations also demonstrate that the proposed algorithm can converge to high-performance policies within a reasonable time, and that it is more scalable than existing flexible design approaches. Journal: IISE Transactions Pages: 187-200 Issue: 2 Volume: 55 Year: 2022 Month: 11 X-DOI: 10.1080/24725854.2021.2022815 File-URL: http://hdl.handle.net/10.1080/24725854.2021.2022815 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2022:i:2:p:187-200 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_1988770_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Chiwoo Park Author-X-Name-First: Chiwoo Author-X-Name-Last: Park Author-Name: Peihua Qiu Author-X-Name-First: Peihua Author-X-Name-Last: Qiu Author-Name: Jennifer Carpena-Núñez Author-X-Name-First: Jennifer Author-X-Name-Last: Carpena-Núñez Author-Name: Rahul Rao Author-X-Name-First: Rahul Author-X-Name-Last: Rao Author-Name: Michael Susner Author-X-Name-First: Michael Author-X-Name-Last: Susner Author-Name: Benji Maruyama Author-X-Name-First: Benji Author-X-Name-Last: Maruyama Title: Sequential adaptive design for jump regression estimation Abstract: Selecting input variables or design points for statistical models has been of great interest in adaptive design and active learning. Motivated by two scientific examples, this article presents a strategy of selecting the design points for a regression model when the underlying regression function is discontinuous. The first example we undertook was to accelerate imaging speed in high-resolution material imaging, and the second was to use sequential design for mapping a chemical phase diagram. In both examples, the underlying regression functions have discontinuities, and thus many existing design optimization approaches cannot be used, as they assume a continuous regression function. Although some existing adaptive design strategies developed from treed regression models can handle the discontinuities, the related Bayesian model estimation approaches come with computationally expensive Markov Chain Monte Carlo algorithms for posterior inferences and the subsequent design point selections, which may not be applicable to the first motivating example, where the computation must be faster than the original imaging speed. In addition, the treed models are based on domain partitioning and are inefficient in cases where the discontinuities occur at complex sub-domain boundaries. In this article, we propose a simple and effective adaptive design strategy for regression analysis with discontinuities.
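# Editor's note: the Zhao, Haskell, and Cardin record above optimizes a mean-CVaR objective. For readers unfamiliar with the term, the standard Rockafellar-Uryasev form of CVaR and the weighted objective are sketched below in generic notation (the loss X, level alpha, and weight lambda are illustrative symbols, not taken from the paper):
\mathrm{CVaR}_{\alpha}(X) = \min_{t \in \mathbb{R}} \Big\{ t + \tfrac{1}{1-\alpha}\, \mathbb{E}\big[(X - t)^{+}\big] \Big\}, \qquad \min_{\pi}\; (1-\lambda)\, \mathbb{E}[X_{\pi}] + \lambda\, \mathrm{CVaR}_{\alpha}(X_{\pi}), \quad \lambda \in [0,1].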
After some statistical properties of the estimated regression model are derived in cases with a fixed design, a new criterion for sequentially selecting the design points is suggested. The suggested sequential design selection procedure is then evaluated using a comprehensive simulation study and demonstrated using two motivating examples. Journal: IISE Transactions Pages: 111-128 Issue: 2 Volume: 55 Year: 2022 Month: 11 X-DOI: 10.1080/24725854.2021.1988770 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1988770 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2022:i:2:p:111-128 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2010152_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Fengming Cui Author-X-Name-First: Fengming Author-X-Name-Last: Cui Author-Name: Chen Wang Author-X-Name-First: Chen Author-X-Name-Last: Wang Author-Name: Lefei Li Author-X-Name-First: Lefei Author-X-Name-Last: Li Title: A PageRank-like measure for evaluating process flexibility Abstract: Uncertainty has always been a threat to system performance in both manufacturing and service industries. Although cost-budgeting may limit available resources, a more flexible structure can still improve the system’s ability to deal with uncertainty. In this study, we develop a new measure to help find a more flexible structure without extensive simulation. We create a PageRank-analogous score whereby we can calculate the Flexibility Gap (FG) index to predict the better of two alternative structures with topological information only. We theoretically analyze how the FG index recognizes flexible sparse structures such as expander graphs with high expansion ratios. Numerical experiments show that the FG index is effective in ranking the flexibility performance of different structures in terms of average waiting time and expected lost sales. Moreover, we extend the FG index with minimal modification to accommodate the case of imperfect flexibility (i.e., flexible suppliers with shrinking capacities) and demonstrate that the generalized FG index is still a good predictor of expected lost sales. Our approach provides a novel view to explain flexibility. That is, sparse structures with higher graph expansion ratios disperse the demand fluctuation more “rapidly” among resources to cushion the shock of uncertainty. Journal: IISE Transactions Pages: 172-186 Issue: 2 Volume: 55 Year: 2022 Month: 11 X-DOI: 10.1080/24725854.2021.2010152 File-URL: http://hdl.handle.net/10.1080/24725854.2021.2010152 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2022:i:2:p:172-186 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2018528_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Sumin Park Author-X-Name-First: Sumin Author-X-Name-Last: Park Author-Name: Keunseo Kim Author-X-Name-First: Keunseo Author-X-Name-Last: Kim Author-Name: Heeyoung Kim Author-X-Name-First: Heeyoung Author-X-Name-Last: Kim Title: Prediction of highly imbalanced semiconductor chip-level defects using uncertainty-based adaptive margin learning Abstract: In semiconductor manufacturing, the package test is a process that verifies whether the product specifications are satisfied before the semiconductor products are finally shipped to customers. 
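# Editor's note: the Cui, Wang, and Li record above builds a PageRank-analogous score. To make the analogy concrete, the following is a minimal power-iteration sketch of classic PageRank itself, not the authors' FG index; the damping factor and the toy adjacency matrix are illustrative choices.

import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10):
    """Classic PageRank by power iteration on a column-normalized matrix."""
    n = adj.shape[0]
    out = adj.sum(axis=0)
    out[out == 0] = 1.0               # crude guard for dangling nodes
    M = adj / out                     # each column distributes its score
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - damping) / n + damping * (M @ r)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# Toy 4-node flexibility structure; higher scores suggest nodes that
# spread (demand) fluctuation to more of the network.
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
print(pagerank(A))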
The packaged chips are classified as good or defective according to the verification results. To ensure high-quality products and customer satisfaction, it is important to detect defective chips during the package test. In this article, we consider the problem of predicting potential defects in advance using the wafer-test results data obtained from an earlier stage of the wafer test. There are several challenges in this problem. First, package-test data are highly class-imbalanced with a very low defect rate, and the imbalance level may vary due to the variability in manufacturing processes. Second, there is a complex relationship between package- and wafer-test results. Third, it is more important to increase the detection accuracy of defects than the overall classification accuracy. To address these challenges, we propose a Bayesian-neural-network-based prediction model. The proposed model adaptively considers unknown imbalance levels through the flexible adjustment of the decision boundary by using class- and sample-level prediction uncertainties and the relative frequency of each class. Using a real semiconductor manufacturing dataset from a global semiconductor company, we demonstrate that the proposed model can effectively predict defects even when the imbalance level of the test dataset differs from that of the training dataset. Journal: IISE Transactions Pages: 147-155 Issue: 2 Volume: 55 Year: 2022 Month: 11 X-DOI: 10.1080/24725854.2021.2018528 File-URL: http://hdl.handle.net/10.1080/24725854.2021.2018528 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2022:i:2:p:147-155 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2116508_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Allen L. Soyster Author-X-Name-First: Allen L. Author-X-Name-Last: Soyster Title: Back to the future—IIE Transactions 1988 Abstract: The IIE Transactions began publishing focused issues in 1992. This article reviews the history and decision-making that led to this expansion. In addition, this article includes two recommendations for possible expansion of IISE Transactions. Journal: IISE Transactions Pages: 107-110 Issue: 2 Volume: 55 Year: 2022 Month: 11 X-DOI: 10.1080/24725854.2022.2116508 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2116508 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2022:i:2:p:107-110 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2010151_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Hu Yu Author-X-Name-First: Hu Author-X-Name-Last: Yu Author-Name: Yugang Yu Author-X-Name-First: Yugang Author-X-Name-Last: Yu Author-Name: René de Koster Author-X-Name-First: René Author-X-Name-Last: de Koster Title: Dense and fast: Achieving shortest unimpeded retrieval with a minimum number of empty cells in puzzle-based storage systems Abstract: This article studies puzzle-based storage (PBS) systems with multiple simultaneously movable empty cells and with block movement (i.e., multiple loads in a line can move simultaneously). Simultaneous movements can potentially reduce the retrieval time substantially, but have not yet been rigorously studied.
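# Editor's note: one simple, generic mechanism in the same spirit as the uncertainty-based boundary adjustment of the Park, Kim, and Kim record above is to shift classification logits by the log class priors and demand a larger margin where predictive uncertainty is high. The sketch below is an illustrative stand-in (the function, parameter names, and toy numbers are assumptions), not the authors' Bayesian neural network.

import numpy as np

def adjusted_decision(logits, class_freq, uncertainty, tau=1.0, kappa=1.0):
    """Shift logits toward the rare class and inflate the required margin
    under uncertainty; favors recall on defects.

    logits: (n, 2) raw scores for [good, defective]
    class_freq: (2,) relative class frequencies from training data
    uncertainty: (n,) per-sample predictive uncertainty (e.g., MC-dropout std)
    """
    adj = logits - tau * np.log(class_freq)     # prior (imbalance) correction
    margin = adj[:, 1] - adj[:, 0]              # defective-vs-good margin
    return (margin > -kappa * uncertainty).astype(int)

# Toy usage: a 1% defect rate pushes borderline samples toward "defective".
logits = np.array([[2.0, -1.0], [0.5, 0.3]])
print(adjusted_decision(logits, class_freq=np.array([0.99, 0.01]),
                        uncertainty=np.array([0.1, 0.8])))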
Our aim is to determine the minimum number of empty cells sufficient for creating a shortest and unimpeded retrieval path for a requested load to the Input/Output (I/O) point at the bottom left corner of the system. We prove that four (or five) empty cells are sufficient for shortest unimpeded retrieval of a load from any interior location (or location on the left or bottom boundary), independent of the system size. This means the vast majority of cells in the system can be utilized for storing loads without any impact on the retrieval time. In addition, constructive optimal algorithms are developed for scheduling four or five empty cells to realize shortest unimpeded retrieval. Interestingly, the requested load’s shortest unimpeded retrieval path to the I/O point may contain upward or rightward movements, even if the I/O point is at the bottom left corner of the system. Compared with systems with one empty cell (sequential movements), systems with four or five empty cells (simultaneous movements) can bring about 70% (50% or more) retrieval time reduction. Furthermore, PBS systems can yield shorter average retrieval time (in addition to their intrinsic advantage of a higher storage density), compared with parallel-aisle unit-load storage systems. Journal: IISE Transactions Pages: 156-171 Issue: 2 Volume: 55 Year: 2022 Month: 11 X-DOI: 10.1080/24725854.2021.2010151 File-URL: http://hdl.handle.net/10.1080/24725854.2021.2010151 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2022:i:2:p:156-171 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2000075_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Han Wang Author-X-Name-First: Han Author-X-Name-Last: Wang Author-Name: Haitao Liao Author-X-Name-First: Haitao Author-X-Name-Last: Liao Author-Name: Xiaobing Ma Author-X-Name-First: Xiaobing Author-X-Name-Last: Ma Author-Name: Rui Bao Author-X-Name-First: Rui Author-X-Name-Last: Bao Author-Name: Yu Zhao Author-X-Name-First: Yu Author-X-Name-Last: Zhao Title: A new class of mechanism-equivalence-based Wiener process models for reliability analysis Abstract: It is quite common to see that a unit of a product with a higher degradation rate also presents a more prominent dispersion. This phenomenon has motivated studies on developing new degradation models. In particular, a variety of Wiener process models have been developed by correlating the drift parameter and the diffusion parameter based on the statistical features of data. However, no insightful explanations are provided for such interesting correlations. In this article, degradation mechanism equivalence is first introduced based on the acceleration factor invariant principle, and the correlation between degradation rate and variation is explained using basic principles. Then, mechanism-equivalence-based Wiener process models, including a basic model and a random-effects model, are proposed to characterize such degradation behavior of a product. Analytical solutions for both point estimation and interval estimation of unknown model parameters are obtained using the maximum likelihood estimation method and an expectation–maximization algorithm. An extension of the proposed model that is able to handle accelerated degradation tests is developed. 
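# Editor's note: for the Wang, Liao, Ma, Bao, and Zhao record above, a generic Wiener degradation model of the kind being extended reads
X(t) = \mu \Lambda(t) + \sigma B(\Lambda(t)),
with drift \mu, diffusion \sigma, a monotone time-scale function \Lambda, and standard Brownian motion B. If a constant acceleration factor A maps the time scale between two stress levels, matching X_2(t) with X_1(At) in distribution forces \mu_2 = A\mu_1 and \sigma_2^2 = A\sigma_1^2, so the ratio \sigma^2/\mu is invariant. This is one common way such drift-diffusion correlations are rationalized; it is sketched here as an illustration rather than the paper's exact mechanism-equivalence derivation.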
A simulation study and two real-world applications are provided to illustrate the effectiveness of the proposed models in product reliability estimation based on degradation data. Journal: IISE Transactions Pages: 129-146 Issue: 2 Volume: 55 Year: 2022 Month: 11 X-DOI: 10.1080/24725854.2021.2000075 File-URL: http://hdl.handle.net/10.1080/24725854.2021.2000075 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2022:i:2:p:129-146 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2034195_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Mustafa Esengün Author-X-Name-First: Mustafa Author-X-Name-Last: Esengün Author-Name: Alp Üstündağ Author-X-Name-First: Alp Author-X-Name-Last: Üstündağ Author-Name: Gökhan İnce Author-X-Name-First: Gökhan Author-X-Name-Last: İnce Title: Development of an augmented reality-based process management system: The case of a natural gas power plant Abstract: Since the beginning of the Industry 4.0 era, Augmented Reality (AR) has gained significant popularity. Especially in production industries, AR has proven itself to be an innovative technology renovating traditional production activities, making operators more productive, and helping companies make savings on different expense items. Despite these findings, its adoption rate is surprisingly low, especially in production industries, due to various organizational and technical limitations. Various AR platforms have been proposed to close this gap; however, there is still no widely accepted framework for such a tool. This research presents the reasons behind the low adoption rate of AR in production industries, and analyzes the existing AR frameworks. Based on the findings from these analyses and a field study, a cloud-based AR framework, which provides tools for creating AR applications without any coding and features for managing, monitoring, and improving industrial processes, is proposed. The design and development phases are presented together with the evaluation of the platform in a real-world industrial scenario. Journal: IISE Transactions Pages: 201-216 Issue: 2 Volume: 55 Year: 2022 Month: 11 X-DOI: 10.1080/24725854.2022.2034195 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2034195 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2022:i:2:p:201-216 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2068087_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Wenjie Huang Author-X-Name-First: Wenjie Author-X-Name-Last: Huang Author-Name: Jing Jiang Author-X-Name-First: Jing Author-X-Name-Last: Jiang Author-Name: Xiao Liu Author-X-Name-First: Xiao Author-X-Name-Last: Liu Title: Online non-convex learning for river pollution source identification Abstract: In this article, novel gradient-based online learning algorithms are developed to investigate an important environmental application: real-time river pollution source identification, which aims at estimating the released mass, location, and time of a river pollution source based on downstream sensor data monitoring the pollution concentration. The pollution is assumed to be released instantaneously and only once.
The problem can be formulated as a non-convex loss minimization problem in statistical learning, and our online algorithms use vectorized and adaptive step sizes to ensure high estimation accuracy in the three dimensions, which have different magnitudes. To keep the algorithm from getting stuck at saddle points of the non-convex loss, an “escaping from saddle points” module and a multi-start setting are derived to further improve the estimation accuracy by searching for the global minimizer of the loss functions. Theoretically and experimentally, this is reflected in the O(N) local regret of the algorithms and, under a particular error bound condition on the loss functions, a high-probability cumulative regret bound of O(N). A real-life river pollution source identification example shows the superior performance of our algorithms compared with existing methods in terms of estimation accuracy. Managerial insights to help decision makers use the algorithms are also provided. Journal: IISE Transactions Pages: 229-241 Issue: 3 Volume: 55 Year: 2023 Month: 3 X-DOI: 10.1080/24725854.2022.2068087 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2068087 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:3:p:229-241 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2100523_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Song Jiu Author-X-Name-First: Song Author-X-Name-Last: Jiu Author-Name: Qiang Guo Author-X-Name-First: Qiang Author-X-Name-Last: Guo Author-Name: Chao Liang Author-X-Name-First: Chao Author-X-Name-Last: Liang Title: Robust optimization for integrating preventative maintenance with coal production under demand uncertainty Abstract: We consider a coal mine producing a catalog of products through multiple pieces of equipment with variable production rates over a multi-period horizon, where each product faces random demand in each period. Each piece of equipment requires Preventative Maintenance (PM) with a given duration. We study a joint PM and production problem that adaptively determines the PM starting time and the production rates for the equipment to minimize the expected total cost. We formulate a multi-period stochastic optimization model that is challenging to solve due to the complexity of adjustable binary decisions. This motivates us to propose a two-phase approach based on robust optimization to solve the problem. Phase 1 determines the binary PM decisions using a target-oriented robust optimization approach. Fixing the PM decisions, Phase 2 adaptively determines the production rates using a linear decision rule. Numerical experiments suggest that our approach outperforms some existing approaches that handle adjustable binary decisions, and performs very close to the expected value given perfect information for varying problem instances. A case study using real data from a major coal mine in China suggests that implementing our approach can potentially yield cost savings in the long run over the status quo policy. Journal: IISE Transactions Pages: 242-258 Issue: 3 Volume: 55 Year: 2023 Month: 3 X-DOI: 10.1080/24725854.2022.2100523 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2100523 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
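# Editor's note: the generic template that the Huang, Jiang, and Liu record above refines is online gradient descent with per-coordinate (vectorized) step sizes plus multiple restarts. A minimal sketch under that reading follows; the toy loss, gradient, and step sizes are placeholders, not the pollution-transport model.

import numpy as np

def online_gd(grad, x0, eta, rounds):
    """Online GD; eta is a per-coordinate step vector so that parameters
    of very different magnitudes (mass, location, time) move sensibly."""
    x = np.array(x0, dtype=float)
    for t in range(1, rounds + 1):
        x = x - (eta / np.sqrt(t)) * grad(x)
    return x

def multi_start(grad, loss, starts, eta, rounds):
    """Run several starts and keep the best, to escape poor stationary points."""
    candidates = [online_gd(grad, s, eta, rounds) for s in starts]
    return min(candidates, key=loss)

# Placeholder non-convex toy problem.
loss = lambda x: np.sin(x[0]) + 0.1 * (x[1] - 3.0) ** 2
grad = lambda x: np.array([np.cos(x[0]), 0.2 * (x[1] - 3.0)])
best = multi_start(grad, loss, [np.zeros(2), np.array([2.0, 0.0])],
                   eta=np.array([0.5, 0.1]), rounds=200)
print(best)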
Handle: RePEc:taf:uiiexx:v:55:y:2023:i:3:p:242-258 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2057625_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Emre Berk Author-X-Name-First: Emre Author-X-Name-Last: Berk Author-Name: Ayhan Özgür Toy Author-X-Name-First: Ayhan Özgür Author-X-Name-Last: Toy Title: A serial inventory system with lead-time-dependent backordering: A reduced-state approximation Abstract: We study a serial inventory system where the external customers may have a maximum time that they would be willing to wait for delivery in cases of stock-out, and the demand would be lost if the remaining delivery lead time of the next available item is longer. This lead-time-dependent backordering behavior subsumes the models of partial backordering regardless of the wait that a customer would experience. In the inventory literature, this behavior has only been analyzed in single-location settings. We study this behavior in a multi-stage setting. We consider continuous review (S−1,S) policies at all stages facing external Poisson demands. Using the method of supplementary variables, we define the stochastic process representing the inventory system and obtain the expressions for the operating characteristics of the inventory system. Based on the solution structures for the special cases, we propose an approximate solution which rests on replacing the state-dependent purchasing decision of the customer with an averaged-out purchase probability computed using only the age of the oldest item. An extensive numerical study indicates that the proposed approximation performs very well. Our numerical study provides additional insights about the sensitivity and allocation of stock levels across stages. Journal: IISE Transactions Pages: 259-270 Issue: 3 Volume: 55 Year: 2023 Month: 3 X-DOI: 10.1080/24725854.2022.2057625 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2057625 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:3:p:259-270 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2064567_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Meng Cheng Author-X-Name-First: Meng Author-X-Name-Last: Cheng Author-Name: Yu Ning Author-X-Name-First: Yu Author-X-Name-Last: Ning Author-Name: Su Xiu Xu Author-X-Name-First: Su Xiu Author-X-Name-Last: Xu Author-Name: Zhaohua Wang Author-X-Name-First: Zhaohua Author-X-Name-Last: Wang Title: Novel double auctions for spatially distributed parking slot assignment with externalities Abstract: This article considers a parking slot assignment problem in a sharing economy where parking slots are spatially heterogeneous. When buyers (i.e., the slot users) park and retrieve their cars at the reserved parking slots, environmental externalities are created. We incorporate the externality costs in our winner determination model and construct a padding-based Vickrey-Clarke-Groves (PV) double auction where each buyer submits an XOR bid on his/her desired parking slots at the same price. In the PV double auction, the padding intuition is adopted on either the supply or demand side. We then propose a padding-based shadow price (PS) double auction by integrating the padding method with the shadow price method.
Due to rises in buying prices and declines in selling prices, the PS double auction is likely to realize a higher payoff for the auctioneer. Both PV and PS double auctions achieve incentive compatibility, individual rationality, budget balance, and asymptotic efficiency. Our numerical experiments demonstrate that the proposed double auctions can realize high efficiency. Journal: IISE Transactions Pages: 288-300 Issue: 3 Volume: 55 Year: 2023 Month: 3 X-DOI: 10.1080/24725854.2022.2064567 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2064567 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:3:p:288-300 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2074578_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Foad Ghadimi Author-X-Name-First: Foad Author-X-Name-Last: Ghadimi Author-Name: Tarik Aouam Author-X-Name-First: Tarik Author-X-Name-Last: Aouam Author-Name: Reha Uzsoy Author-X-Name-First: Reha Author-X-Name-Last: Uzsoy Title: Safety stock placement with market selection under load-dependent lead times Abstract: We study the problem of safety stock placement in a supply chain with market selection decisions. A manufacturer with deterministic, load-dependent lead time supplies multiple warehouses, each serving multiple retailers. Each retailer has access to a set of potential markets with different characteristics. Serving more markets increases revenues, but also increases the manufacturer’s lead time, resulting in higher inventory costs. Adopting the Guaranteed Service Approach, we present a nonlinear mixed-integer programming model and reformulate it to eliminate integer variables related to service times at warehouses. We then propose a successive piecewise linearization algorithm and a mixed-integer conic quadratic formulation to solve the resulting nonlinear binary formulation. Computational experiments show that the successive piecewise linearization algorithm outperforms two state-of-the-art solvers, BARON and CPLEX, which are used to solve instances of the original formulation and the mixed-integer conic quadratic reformulation, respectively. The value of incorporating load-dependent lead times is greatest when capacity is limited relative to available demand. The benefit of integrating market selection and safety stock decisions is greatest when capacity is limited and marginal revenue is relatively low. Journal: IISE Transactions Pages: 314-328 Issue: 3 Volume: 55 Year: 2023 Month: 3 X-DOI: 10.1080/24725854.2022.2074578 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2074578 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:55:y:2023:i:3:p:314-328 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2060535_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Lu Zhen Author-X-Name-First: Lu Author-X-Name-Last: Zhen Author-Name: Jiajing Gao Author-X-Name-First: Jiajing Author-X-Name-Last: Gao Author-Name: Zheyi Tan Author-X-Name-First: Zheyi Author-X-Name-Last: Tan Author-Name: Shuaian Wang Author-X-Name-First: Shuaian Author-X-Name-Last: Wang Author-Name: Roberto Baldacci Author-X-Name-First: Roberto Author-X-Name-Last: Baldacci Title: Branch-price-and-cut for trucks and drones cooperative delivery Abstract: The truck- and drone-based cooperative model of delivery can improve the efficiency of last-mile delivery, and has thus attracted increasing attention from both academia and practitioners. In this study, we examine a vehicle routing problem and apply a cooperative form of delivery involving trucks and drones. We propose a mixed-integer programming model and a branch-price-and-cut-based exact algorithm to address this problem. To reduce the computation time, we design several acceleration strategies, including a combination of dynamic programming and calculus-based approximation for the pricing problem, and various effective inequalities for the restricted master problem. Numerical experiments are conducted to validate the effectiveness and efficiency of the proposed solution. Journal: IISE Transactions Pages: 271-287 Issue: 3 Volume: 55 Year: 2023 Month: 3 X-DOI: 10.1080/24725854.2022.2060535 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2060535 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:3:p:271-287 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2044567_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Xu Guan Author-X-Name-First: Xu Author-X-Name-Last: Guan Author-Name: Hao Wu Author-X-Name-First: Hao Author-X-Name-Last: Wu Author-Name: Jin Xu Author-X-Name-First: Jin Author-X-Name-Last: Xu Author-Name: Jianghua Zhang Author-X-Name-First: Jianghua Author-X-Name-Last: Zhang Title: Privatization reform in public healthcare system: Competition vs. collaboration Abstract: Privatization reform is increasingly considered an efficient mechanism to reduce waiting time in the public healthcare system. This article focuses on two popular privatization reform formats: (i) the competition format, under which the private hospital is allowed to enter the market and compete with the public hospital, and (ii) the collaboration format, under which the public hospital and private hospital collaborate toward a common goal. We investigate the adverse impacts of the two formats on patients and social welfare, which depend on two key factors: (i) the reimbursement rate that determines to what extent the government can provide capital support to the public hospital, and (ii) the privatization level that reflects to what extent the joint public–private hospital provides care for its own profit. When both the reimbursement rate and privatization level are relatively high, the private hospital prefers the collaboration format.
We also identify two separate regions wherein the private hospital’s interest can be aligned with the interests of patients and social welfare: (i) the competition format arises when the reimbursement rate is high and the privatization level is low, and (ii) the collaboration format arises when the reimbursement rate is moderate and the privatization level is high. Journal: IISE Transactions Pages: 217-228 Issue: 3 Volume: 55 Year: 2023 Month: 3 X-DOI: 10.1080/24725854.2022.2044567 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2044567 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:3:p:217-228 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2074577_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Jia Shu Author-X-Name-First: Jia Author-X-Name-Last: Shu Author-Name: Miao Song Author-X-Name-First: Miao Author-X-Name-Last: Song Author-Name: Beilun Wang Author-X-Name-First: Beilun Author-X-Name-Last: Wang Author-Name: Jing Yang Author-X-Name-First: Jing Author-X-Name-Last: Yang Author-Name: Shaowen Zhu Author-X-Name-First: Shaowen Author-X-Name-Last: Zhu Title: Humanitarian relief network design: Responsiveness maximization and a case study of Typhoon Rammasun Abstract: In this article, we study a humanitarian relief network design problem, where the demand for relief supplies in each affected area is uncertain and can be met by more than one relief facility. Given a certain cost budget, we simultaneously optimize the decisions of relief facility location, inventory pre-positioning, and relief-facility-to-affected-area assignment so as to maximize responsiveness. The problem is formulated as a chance-constrained stochastic programming model in which a joint chance constraint is utilized to measure the responsiveness of the humanitarian relief network. We approximate the proposed model by another model with chance constraints, which can be solved under two settings of the demand information in each affected area: (i) the demand distribution is given; and (ii) the partial demand information, e.g., the mean, the variance, and the support, is given. We use a case study of the 2014 Typhoon Rammasun to illustrate the application of the model. Computational results demonstrate the effectiveness of the solution approaches and show that the chance-constrained stochastic programming models are superior to the deterministic model for humanitarian relief network design. Journal: IISE Transactions Pages: 301-313 Issue: 3 Volume: 55 Year: 2023 Month: 3 X-DOI: 10.1080/24725854.2022.2074577 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2074577 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
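# Editor's note: the joint chance constraint that the Shu, Song, Wang, Yang, and Zhu record above uses to measure responsiveness can be sketched generically as
\mathbb{P}\Big\{ \sum_{i} y_{ij}\, x_{i} \ge D_{j},\ \forall j \Big\} \ge 1 - \epsilon,
where x_i is the inventory pre-positioned at facility i, y_{ij} the facility-to-area assignment, D_j the random demand of affected area j, and 1-\epsilon the responsiveness target. The symbols are illustrative, not the paper's notation.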
Handle: RePEc:taf:uiiexx:v:55:y:2023:i:3:p:301-313 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2000681_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Zhenyu Wu Author-X-Name-First: Zhenyu Author-X-Name-Last: Wu Author-Name: Yanting Li Author-X-Name-First: Yanting Author-X-Name-Last: Li Author-Name: Fugee Tsung Author-X-Name-First: Fugee Author-X-Name-Last: Tsung Author-Name: Ershun Pan Author-X-Name-First: Ershun Author-X-Name-Last: Pan Title: Real-time monitoring and diagnosis scheme for IoT-enabled devices using multivariate SPC techniques Abstract: This article is aimed at condition monitoring and fault identification for Internet of Things (IoT) devices, and proposes a multivariate statistical process control scheme. The new method aims to detect sparse mean shifts using spatial rank and an improved adaptive elastic net algorithm, which can monitor the high-dimensional data streams collected by IoT devices and pinpoint faulty variables. The new method is also applicable in the presence of a non-normal distribution and insufficient reference samples. Numerical simulations verify that the proposed method has clear advantages over existing methods. A case study of wind turbines shows that the method can be applied to real-time monitoring and diagnosis of real IoT devices, which could provide valuable root-cause diagnosis and optimize subsequent maintenance strategies. Journal: IISE Transactions Pages: 348-362 Issue: 4 Volume: 55 Year: 2023 Month: 4 X-DOI: 10.1080/24725854.2021.2000681 File-URL: http://hdl.handle.net/10.1080/24725854.2021.2000681 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:4:p:348-362 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2123999_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Shiyu Zhou Author-X-Name-First: Shiyu Author-X-Name-Last: Zhou Author-Name: Yong Chen Author-X-Name-First: Yong Author-X-Name-Last: Chen Author-Name: Nan Kong Author-X-Name-First: Nan Author-X-Name-Last: Kong Author-Name: Raed Al Kontar Author-X-Name-First: Raed Al Author-X-Name-Last: Kontar Title: Contributions to Internet of Things (IoT)-enabled systems Journal: IISE Transactions Pages: 329-330 Issue: 4 Volume: 55 Year: 2023 Month: 4 X-DOI: 10.1080/24725854.2022.2123999 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2123999 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:4:p:329-330 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2039423_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Honghan Ye Author-X-Name-First: Honghan Author-X-Name-Last: Ye Author-Name: Xiaochen Xian Author-X-Name-First: Xiaochen Author-X-Name-Last: Xian Author-Name: Jing-Ru C. Cheng Author-X-Name-First: Jing-Ru C. Author-X-Name-Last: Cheng Author-Name: Brock Hable Author-X-Name-First: Brock Author-X-Name-Last: Hable Author-Name: Robert W. Shannon Author-X-Name-First: Robert W.
Author-X-Name-Last: Shannon Author-Name: Mojtaba Kadkhodaie Elyaderani Author-X-Name-First: Mojtaba Kadkhodaie Author-X-Name-Last: Elyaderani Author-Name: Kaibo Liu Author-X-Name-First: Kaibo Author-X-Name-Last: Liu Title: Online nonparametric monitoring of heterogeneous data streams with partial observations based on Thompson sampling Abstract: With the rapid advancement of sensor technology driven by Internet-of-Things-enabled applications, tremendous amounts of measurements of heterogeneous data streams are frequently acquired for online process monitoring. Such massive data, involving a large number of data streams with high sampling frequency, incur high costs on data collection, transmission, and analysis in practice. As a result, the resource constraint often restricts the data observability to only a subset of data streams at each data acquisition time, posing significant challenges in many online monitoring applications. Unfortunately, existing methods do not provide a general framework for monitoring heterogeneous data streams with partial observations. In this article, we propose a nonparametric monitoring and sampling algorithm to quickly detect abnormalities occurring to heterogeneous data streams. In particular, an approximation framework is incorporated with an antirank-based CUSUM procedure to collectively estimate the underlying status of all data streams based on partially observed data. Furthermore, an intelligent sampling strategy based on Thompson sampling is proposed to dynamically observe the informative data streams and balance between exploration and exploitation to facilitate quick anomaly detection. Theoretical justification of the proposed algorithm is also investigated. Both simulations and case studies are conducted to demonstrate the superiority of the proposed method. Journal: IISE Transactions Pages: 392-404 Issue: 4 Volume: 55 Year: 2023 Month: 4 X-DOI: 10.1080/24725854.2022.2039423 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2039423 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:4:p:392-404 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2004626_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Huihui Miao Author-X-Name-First: Huihui Author-X-Name-Last: Miao Author-Name: Andi Wang Author-X-Name-First: Andi Author-X-Name-Last: Wang Author-Name: Bing Li Author-X-Name-First: Bing Author-X-Name-Last: Li Author-Name: Tzyy-Shuh Chang Author-X-Name-First: Tzyy-Shuh Author-X-Name-Last: Chang Author-Name: Jianjun Shi Author-X-Name-First: Jianjun Author-X-Name-Last: Shi Title: Process modeling with multi-level categorical inputs via variable selection and level aggregation Abstract: An Industrial IoT-enabled manufacturing system often involves multiple categorical variables, denoting the process configurations and product customizations. These categorical variables lead to a flexible relationship between the input process variables and output quality measurements, as there are many potential configurations of the manufacturing process. This causes significant challenges for data-driven process modeling and root cause diagnosis. This article proposes a data-driven additive model to address the effects of different categorical variables on the relationship between process variables and quality measurements. 
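# Editor's note: the sampling strategy in the Ye et al. record above builds on Thompson sampling. The following minimal Beta-Bernoulli Thompson sampler for choosing which streams to observe is a toy stand-in under simplified assumptions; it does not include the paper's antirank-based CUSUM machinery, and the stream count and signals are placeholders.

import numpy as np

rng = np.random.default_rng(0)

def thompson_select(alpha, beta, k):
    """Sample each stream's posterior Beta(alpha, beta) probability of
    being abnormal and observe the k largest draws this round."""
    draws = rng.beta(alpha, beta)
    return np.argsort(draws)[-k:]

def thompson_update(alpha, beta, observed, signals):
    """Bernoulli-Beta conjugate update for the observed streams only."""
    for i, s in zip(observed, signals):
        alpha[i] += s
        beta[i] += 1 - s

# Toy loop: 20 streams, resource budget of 5 observations per round.
alpha, beta = np.ones(20), np.ones(20)
for _ in range(100):
    obs = thompson_select(alpha, beta, 5)
    signals = rng.integers(0, 2, size=5)   # placeholder local alarm statistics
    thompson_update(alpha, beta, obs, signals)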
The estimation algorithm automatically identifies the variables that have significant effects on the product quality, aggregates the levels of each categorical variable based on a priori knowledge of level similarity, and provides an accurate model that describes the relationship between the process variables and quality measurements. The simulation study validates the accuracy and effectiveness of the proposed method, and a case study on a hot rolling process shows that the method provides useful guidance for understanding the production system. Journal: IISE Transactions Pages: 363-376 Issue: 4 Volume: 55 Year: 2023 Month: 4 X-DOI: 10.1080/24725854.2021.2004626 File-URL: http://hdl.handle.net/10.1080/24725854.2021.2004626 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:4:p:363-376 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_1973157_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Di Wang Author-X-Name-First: Di Author-X-Name-Last: Wang Author-Name: Fangyu Li Author-X-Name-First: Fangyu Author-X-Name-Last: Li Author-Name: Kaibo Liu Author-X-Name-First: Kaibo Author-X-Name-Last: Liu Title: Modeling and monitoring of a multivariate spatio-temporal network system Abstract: With the development of information technology, various network systems are created to connect physical objects and people by sensor nodes or smart devices, providing unprecedented opportunities to realize automated interconnected systems and revolutionize people’s lives. However, network systems are vulnerable to attacks, due to the integration of physical objects and human behaviors as well as the complex spatio-temporal correlation structures of the network systems. Therefore, how to accurately and effectively model and monitor a network system is critical to ensure information security and support system automation. To address this issue, this article develops a multivariate spatio-temporal modeling and monitoring methodology for a network system by using multiple types of sensor signals collected from the network system. We first propose a Multivariate Spatio-Temporal Autoregressive (MSTA) model by integrating a Gaussian Markov Random Field and a vector autoregressive model structure to characterize the spatio-temporal correlation of the network system. In particular, we develop an iterative model learning algorithm that integrates Bayesian inference, least squares, and a sum-of-squared-errors-based optimization method to learn the network structure and estimate parameters in the MSTA model. Then, we propose two spatio-temporal control schemes to monitor the network system based on the MSTA model. Numerical experiments and a real case study of an IoT network system are presented to validate the performance of the proposed method. Journal: IISE Transactions Pages: 331-347 Issue: 4 Volume: 55 Year: 2023 Month: 4 X-DOI: 10.1080/24725854.2021.1973157 File-URL: http://hdl.handle.net/10.1080/24725854.2021.1973157 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
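# Editor's note: generically, a model of the MSTA type in the Wang, Li, and Liu record above combines vector autoregression with a Gaussian Markov random field,
x_t = \sum_{k=1}^{p} A_k\, x_{t-k} + \varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0, Q^{-1}),
where the sparsity pattern of the precision matrix Q encodes the spatial neighborhood structure of the network; p, A_k, and Q are generic symbols used here for illustration only, not the paper's exact notation.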
Handle: RePEc:taf:uiiexx:v:55:y:2023:i:4:p:331-347 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2030881_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Yuxin Wen Author-X-Name-First: Yuxin Author-X-Name-Last: Wen Author-Name: Xinxing Guo Author-X-Name-First: Xinxing Author-X-Name-Last: Guo Author-Name: Junbo Son Author-X-Name-First: Junbo Author-X-Name-Last: Son Author-Name: Jianguo Wu Author-X-Name-First: Jianguo Author-X-Name-Last: Wu Title: A neural-network-based proportional hazard model for IoT signal fusion and failure prediction Abstract: Accurate prediction of Remaining Useful Life (RUL) plays a critical role in optimizing condition-based maintenance decisions. In this article, a novel joint prognostic modeling framework that combines both time-to-event data and multi-sensor degradation signals is proposed. With the increasing use of IoT devices, unprecedented amounts of diverse signals associated with the underlying health condition of in-situ units have become easily accessible. To take full advantage of modern IoT-enabled engineering systems, we propose a specialized framework for RUL prediction at the level of individual units. Specifically, a Bayesian linear regression model is developed for the multi-sensor degradation signals, and a functional neural network is proposed to allow the proportional hazard model to characterize the complex nonlinearity between the hazard function and degradation signals. Based on the proposed model, an online model updating procedure is established to accurately predict RUL in real time. The advantageous features of the proposed method are demonstrated through simulation studies and the application to a high-fidelity gas turbine engine dataset. Journal: IISE Transactions Pages: 377-391 Issue: 4 Volume: 55 Year: 2023 Month: 4 X-DOI: 10.1080/24725854.2022.2030881 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2030881 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:4:p:377-391 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2127164_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: Jiachen Shi Author-X-Name-First: Jiachen Author-X-Name-Last: Shi Author-Name: Heraldo Rozas Author-X-Name-First: Heraldo Author-X-Name-Last: Rozas Author-Name: Murat Yildirim Author-X-Name-First: Murat Author-X-Name-Last: Yildirim Author-Name: Nagi Gebraeel Author-X-Name-First: Nagi Author-X-Name-Last: Gebraeel Title: A stochastic programming model for jointly optimizing maintenance and spare parts inventory for IoT applications Abstract: Service supply chain models typically use conservative maintenance and spare part management policies that result in significant losses due to redundancies. Conservatism without an improved understanding of risks, however, does not cushion against unexpected consequences. Risk scenarios associated with asset failure and inventory shortage are frequently observed in practice. Advances in Internet of Things (IoT) technology are unlocking new methods that attain significant prediction accuracy for these risk factors. IoT-enabled predictions on asset state of health can drive dynamic decision models that conduct maintenance and replenishment actions more efficiently while reducing risk.
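# Editor's note: the central modeling move in the Wen, Guo, Son, and Wu record above, letting a neural network drive a proportional hazards model, can be sketched as
h(t \mid s) = h_0(t)\, \exp\{ g_{\theta}(s(\cdot)) \},
where h_0 is a baseline hazard, s(\cdot) the fused degradation signal, and g_{\theta} a (functional) neural network replacing the usual linear predictor \beta^{\top} s. The notation is illustrative and is not taken from the paper.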
In this study, we propose a unified framework that utilizes IoT data to jointly optimize condition-based maintenance and inventory decisions. We formulate our problem as a stochastic mixed-integer program that accounts for the interplay between maintenance, spare parts inventory, and asset reliability. We introduce a new reformulation that is efficient for solving large-scale instances of the proposed model. The framework presented herein is applied to real-world degradation data to demonstrate the benefits of our methodology in terms of cost and reliability. Journal: IISE Transactions Pages: 419-431 Issue: 4 Volume: 55 Year: 2023 Month: 4 X-DOI: 10.1080/24725854.2022.2127164 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2127164 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:4:p:419-431 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2080306_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20220907T060133 git hash: 85d61bd949 Author-Name: An-Tsun Wei Author-X-Name-First: An-Tsun Author-X-Name-Last: Wei Author-Name: Hui Wang Author-X-Name-First: Hui Author-X-Name-Last: Wang Author-Name: Tarik Dickens Author-X-Name-First: Tarik Author-X-Name-Last: Dickens Author-Name: Hongmei Chi Author-X-Name-First: Hongmei Author-X-Name-Last: Chi Title: Co-learning of extrusion deposition quality for supporting interconnected additive manufacturing systems Abstract: Additive manufacturing systems are being deployed on a cloud platform to provide networked manufacturing services. This article explores the value of interconnected printing systems that share process data on the cloud in improving quality control. We employ an example of quality learning for cloud printers, in which the aim is to understand how printing conditions impact printing errors. Traditionally, extensive experiments are necessary to collect data and estimate the relationship between printing conditions and quality. This research establishes a multi-printer co-learning methodology to obtain the relationship between the printing conditions and quality using limited data from each printer. Based on multiple interconnected extrusion-based printing systems, the methodology is demonstrated by learning the printing line variations and resultant infill defects induced by extruder kinematics. The method leverages the common covariance structures among printers for the co-learning of kinematics-quality models. This article further proposes a sampling-refined hybrid metaheuristic to reduce the search space for solutions. The results show significant improvements in quality prediction by leveraging data from data-limited printers, an advantage over traditional transfer learning that transfers knowledge from a data-rich source to a data-limited target. The research establishes algorithms to support quality control for reconfigurable additive manufacturing systems on the cloud. Journal: IISE Transactions Pages: 405-418 Issue: 4 Volume: 55 Year: 2023 Month: 4 X-DOI: 10.1080/24725854.2022.2080306 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2080306 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:55:y:2023:i:4:p:405-418 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2039813_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Bo Shen Author-X-Name-First: Bo Author-X-Name-Last: Shen Author-Name: Raghav Gnanasambandam Author-X-Name-First: Raghav Author-X-Name-Last: Gnanasambandam Author-Name: Rongxuan Wang Author-X-Name-First: Rongxuan Author-X-Name-Last: Wang Author-Name: Zhenyu James Kong Author-X-Name-First: Zhenyu James Author-X-Name-Last: Kong Title: Multi-task Gaussian process upper confidence bound for hyperparameter tuning and its application for simulation studies of additive manufacturing Abstract: In many scientific and engineering applications, Bayesian Optimization (BO) is a powerful tool for hyperparameter tuning of a machine learning model, materials design and discovery, etc. Multi-task BO is a general method to efficiently optimize multiple different, but correlated, “black-box” functions. The objective of this work is to develop an algorithm for multi-task BO with automatic task selection so that only one task evaluation is needed per query round. Specifically, a new algorithm, namely, Multi-Task Gaussian Process Upper Confidence Bound (MT-GPUCB), is proposed to achieve this objective. The MT-GPUCB is a two-step algorithm, where the first step chooses which query point to evaluate, and the second step automatically selects the most informative task to evaluate. Under the bandit setting, a theoretical analysis is provided to show that our proposed MT-GPUCB is no-regret under some mild conditions. Our proposed algorithm is verified experimentally on a range of synthetic functions. In addition, our algorithm is applied to Additive Manufacturing simulation software, namely, Flow-3D Weld, to determine material property values, ensuring the quality of simulation output. The results clearly show the advantages of our query strategy for both design point and task. Journal: IISE Transactions Pages: 496-508 Issue: 5 Volume: 55 Year: 2023 Month: 5 X-DOI: 10.1080/24725854.2022.2039813 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2039813 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:5:p:496-508 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2062627_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Yu Liu Author-X-Name-First: Yu Author-X-Name-Last: Liu Author-Name: Jian Gao Author-X-Name-First: Jian Author-X-Name-Last: Gao Author-Name: Tao Jiang Author-X-Name-First: Tao Author-X-Name-Last: Jiang Author-Name: Zhiguo Zeng Author-X-Name-First: Zhiguo Author-X-Name-Last: Zeng Title: Selective maintenance and inspection optimization for partially observable systems: An interactively sequential decision framework Abstract: Selective maintenance is an important condition-based maintenance strategy for multi-component systems, where optimal maintenance actions are identified to maximize the success likelihood of subsequent missions. Most of the existing works on selective maintenance assumed that after each mission, the components’ states can be precisely known without additional efforts. In engineering scenarios, the states of the components in a system need to be revealed by inspections that are usually inaccurate. Inspection activities also consume the limited resources shared with maintenance activities. 
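# Editor's note: the Shen, Gnanasambandam, Wang, and Kong record above generalizes GP-UCB. The single-task GP-UCB rule it builds on picks the next query point by maximizing an upper confidence bound on the Gaussian process posterior; a minimal single-task sketch with scikit-learn follows (the multi-task task-selection step of MT-GPUCB is not shown, and beta is a generic exploration weight).

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def gp_ucb_next(X_obs, y_obs, candidates, beta=2.0):
    """Fit a GP to past evaluations and return the candidate maximizing
    the upper confidence bound mu + sqrt(beta) * sigma."""
    gp = GaussianProcessRegressor(normalize_y=True).fit(X_obs, y_obs)
    mu, sigma = gp.predict(candidates, return_std=True)
    return candidates[np.argmax(mu + np.sqrt(beta) * sigma)]

# Usage: evaluate the returned point, append it to (X_obs, y_obs), repeat.
X_obs = np.array([[0.1], [0.5], [0.9]])
y_obs = np.array([0.2, 0.8, 0.3])
grid = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
print(gp_ucb_next(X_obs, y_obs, grid))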
We thus put forth a novel decision framework for selective maintenance of partially observable systems, in which maintenance and inspection activities are scheduled in a holistic and interactively sequential manner. As the components’ states are partially observable and the remaining resources are fully observable, we formulate a finite-horizon Mixed Observability Markov Decision Process (MOMDP) model to support the optimization. In the MOMDP model, both maintenance and inspection actions can be interactively and sequentially planned based on the distributions of components’ states and the remaining resources. To improve the solution efficiency of the MOMDP model, we customize a Deep Value Network (DVN) algorithm in which the maximum mission success probability is approximated. A five-component system and a real-world multi-state coal transportation system are used to demonstrate the effectiveness of the proposed method. It is shown that the probability of the system successfully completing the next mission can be significantly increased by taking inspections into account. The results also demonstrate the computational efficiency of the customized DVN algorithm. Journal: IISE Transactions Pages: 463-479 Issue: 5 Volume: 55 Year: 2023 Month: 5 X-DOI: 10.1080/24725854.2022.2062627 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2062627 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:5:p:463-479 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2067915_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Shijuan Yang Author-X-Name-First: Shijuan Author-X-Name-Last: Yang Author-Name: Jianjun Wang Author-X-Name-First: Jianjun Author-X-Name-Last: Wang Author-Name: Jiawei Wu Author-X-Name-First: Jiawei Author-X-Name-Last: Wu Author-Name: Yiliu Tu Author-X-Name-First: Yiliu Author-X-Name-Last: Tu Title: Modeling and optimization for multiple correlated responses with distribution variability Abstract: In production design processes, multiple correlated responses with different distributions are often encountered. The existing literature usually assumes that they follow normal distributions for computational convenience, and then analyzes these responses using traditional parametric methods. A few research papers assume that they follow the same type of distribution, such as the t-distribution, and then use a multivariate joint distribution to deal with the correlation. However, these methods give a poor approximation to the actual problem and may lead to recommended settings that yield substandard products. In this article, we propose a new method for robust parameter design that can solve the above problems. Specifically, a semiparametric model is used to estimate the margins, and then a joint distribution function is constructed using a multivariate copula function. Finally, the probability that the responses meet the specifications simultaneously is used to obtain the optimal settings. The advantages of the proposed method lie in the consideration of multiple correlation patterns among responses, the absence of restrictions on the response distributions, and the use of nonparametric smoothing to reduce the risk of model misspecification. The results of the case study and the simulation study validate the effectiveness of the proposed method.
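# Editor's note: the construction in the Yang, Wang, Wu, and Tu record above rests on Sklar's theorem, under which a joint distribution factors into marginal distributions and a copula,
F(y_1, \dots, y_d) = C\big(F_1(y_1), \dots, F_d(y_d)\big),
and the process settings x are then chosen to maximize the joint conformance probability \mathbb{P}\{ y_k(x) \in [L_k, U_k],\ k = 1, \dots, d \}. The specification limits L_k, U_k and the dimension d are generic symbols used for illustration, not the paper's notation.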
Journal: IISE Transactions Pages: 480-495 Issue: 5 Volume: 55 Year: 2023 Month: 5 X-DOI: 10.1080/24725854.2022.2067915 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2067915 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:5:p:480-495 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2040761_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Zhaohui Geng Author-X-Name-First: Zhaohui Author-X-Name-Last: Geng Author-Name: Arman Sabbaghi Author-X-Name-First: Arman Author-X-Name-Last: Sabbaghi Author-Name: Bopaya Bidanda Author-X-Name-First: Bopaya Author-X-Name-Last: Bidanda Title: Reconstructing original design: Process planning for reverse engineering Abstract: Reverse Engineering (RE) has been widely used to extract geometric design information from a physical product for reproduction or redesign purposes. A scan of the object is often performed to (re-)construct the computer-aided design model. However, this model is most likely an inaccurate representation of the original design, due to uncertainties in each part and in the scanning process. This randomness can result in shrinking the original tolerance region or even yielding asymmetric tolerance regions, which can call for unnecessarily high-precision reproduction. In this article, we first propose an algorithm to generate the mean configuration based on the data clouds collected from several scans and multiple parts (if applicable). A Bayesian model with prior knowledge of production processes and scanners is specified to model the statistical properties of the mean configuration. Its marginal posterior outperforms those of single-scan models, with lower variances concentrating around the physical object or initial design. Furthermore, we propose a bi-objective optimization model to address RE process planning questions regarding the required number of scans and parts to achieve target accuracy requirements. Simulations and industrial case studies, including both unique freeform objects and mechanical parts, are conducted to illustrate and evaluate the performance of the proposed methods. Journal: IISE Transactions Pages: 509-522 Issue: 5 Volume: 55 Year: 2023 Month: 5 X-DOI: 10.1080/24725854.2022.2040761 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2040761 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:5:p:509-522 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2072545_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Fabian Lorson Author-X-Name-First: Fabian Author-X-Name-Last: Lorson Author-Name: Andreas Fügener Author-X-Name-First: Andreas Author-X-Name-Last: Fügener Author-Name: Alexander Hübner Author-X-Name-First: Alexander Author-X-Name-Last: Hübner Title: New team mates in the warehouse: Human interactions with automated and robotized systems Abstract: Despite all the technological progress in the arena of automated and robotized systems, humans will continue to play a significant role in the warehouse of the future, due to their distinctive skills and economic advantages for certain tasks. Although industry and engineering have mainly dealt with the design and functionalities of automated warehouses, the role of human factors and behavior is still underrepresented.
However, many novel warehousing systems require human–machine interactions, leading to a growing scientific and managerial necessity to consider human factors and behavior, particularly for operational activities. This is the first study that comprehensively identifies and analyzes relevant behavioral issues of interactions between warehouse operators and machines. To do so, we develop a systematic framework that links human–machine interactions with behavioral issues and implications for system performance across all operational warehouse activities. Insights generated by interviews with warehousing experts are used to identify the most important issues. We develop a comprehensive research agenda, consisting of a set of potential research questions associated with the identified behavioral issues. The discussion is enriched by providing theoretical and managerial insights from related domains and existing warehousing research. Ultimately, we consolidate our findings by developing overarching theoretical foundations and deriving unifying themes. Journal: IISE Transactions Pages: 536-553 Issue: 5 Volume: 55 Year: 2023 Month: 5 X-DOI: 10.1080/24725854.2022.2072545 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2072545 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:5:p:536-553 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2055269_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Haitao Liu Author-X-Name-First: Haitao Author-X-Name-Last: Liu Author-Name: Hui Xiao Author-X-Name-First: Hui Author-X-Name-Last: Xiao Author-Name: Loo Hay Lee Author-X-Name-First: Loo Hay Author-X-Name-Last: Lee Author-Name: Ek Peng Chew Author-X-Name-First: Ek Peng Author-X-Name-Last: Chew Title: A convergent algorithm for ranking and selection with censored observations Abstract: We consider a problem of Ranking and Selection in the presence of Censored Observations (R&S-CO). An observation within the interval defined by lower and upper limits is observed at the actual value, whereas an observation outside the interval takes the closer limit value. The censored sample average is thus a biased estimator for the true mean performance of each alternative. The goal of R&S-CO is to efficiently find the best alternative in terms of the true mean. We first derive the censored variable’s mean and variance in terms of the mean and variance of the uncensored variable and the lower and upper limits, and then develop a sequential sampling algorithm. Under mild conditions, we prove that the algorithm is consistent, in the sense that the best can be identified almost surely, as the sampling budget goes to infinity. Moreover, we show that the asymptotic allocation converges to the optimal static allocation derived by the large deviations theory. Extensive numerical experiments are conducted to investigate the finite-budget performance, the asymptotic allocation, and the robustness of the algorithm. Journal: IISE Transactions Pages: 523-535 Issue: 5 Volume: 55 Year: 2023 Month: 5 X-DOI: 10.1080/24725854.2022.2055269 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2055269 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:55:y:2023:i:5:p:523-535 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2024925_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Hyojoong Kim Author-X-Name-First: Hyojoong Author-X-Name-Last: Kim Author-Name: Heeyoung Kim Author-X-Name-First: Heeyoung Author-X-Name-Last: Kim Title: Contextual anomaly detection for high-dimensional data using Dirichlet process variational autoencoder Abstract: Due to recent advances in sensing technologies, response measurements of various sensors are frequently used for system monitoring purposes. However, response data are often affected by some contextual variables, such as equipment settings and time, resulting in different patterns, even when the system is in the normal state. In this case, anomaly detection methods that do not consider contextual variables may be unable to distinguish between abnormal and normal patterns of the response data affected by the contextual variables. Motivated by this problem, we propose a method for contextual anomaly detection, particularly in the case where the response and contextual variables are both high-dimensional and complex. The proposed method is based on Variational AutoEncoders (VAEs), which are neural-network-based generative models suitable for modeling high-dimensional and complex data. The proposed method combines two VAEs: one for response variables and the other for contextual variables. Specifically, in the latent space of the VAE for contextual variables, we model the latent variables using a Dirichlet process Gaussian mixture model. Consequently, the effects of the contextual variables can be modeled using several clusters, each representing a different contextual environment. The latent contextual variables are then used as additional inputs to the other VAE’s decoder for reconstructing response data from their latent representations. We then detect the anomalies based on the negative reconstruction loss of a new response observation. The effectiveness of the proposed method is demonstrated using several benchmark datasets and a case study based on a global tire company. Journal: IISE Transactions Pages: 433-444 Issue: 5 Volume: 55 Year: 2023 Month: 5 X-DOI: 10.1080/24725854.2021.2024925 File-URL: http://hdl.handle.net/10.1080/24725854.2021.2024925 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:5:p:433-444 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2037792_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Feiran Xu Author-X-Name-First: Feiran Author-X-Name-Last: Xu Author-Name: Ramin Moghaddass Author-X-Name-First: Ramin Author-X-Name-Last: Moghaddass Title: A scalable Bayesian framework for large-scale sensor-driven network anomaly detection Abstract: Many real systems have a network/graph structure with many connected nodes and many edges representing deterministic or stochastic dependencies and interactions between nodes. Various types of known or unknown anomalies and disturbances may occur across these networks over time. Developing real-time anomaly detection and isolation frameworks is crucial to enable network operators to make more informed and timely decisions and take appropriate maintenance and operations actions. 
To monitor the health of modern networks in real time, different types of sensors and smart devices that can track real-time data from a particular node or a section of a network are installed across these networks. In this article, we introduce an innovative inference method to calculate the most probable explanation of a set of hidden nodes in heterogeneous attributed networks with a directed acyclic graph structure represented by a Bayesian network, given the values of a set of binary data observed from available sensors, which may be located only at a subset of nodes. The innovative use of Bayesian networks to incorporate parallelization and vectorization makes the proposed framework applicable to large-scale graph structures. The efficiency of the model is shown through a comprehensive set of numerical experiments. Journal: IISE Transactions Pages: 445-462 Issue: 5 Volume: 55 Year: 2023 Month: 5 X-DOI: 10.1080/24725854.2022.2037792 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2037792 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:5:p:445-462 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2167137_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Wilbert E. Wilhelm Author-X-Name-First: Wilbert E. Author-X-Name-Last: Wilhelm Title: Birth and early years of the focused-issue structure of IISE Transactions Abstract: This article records the history of an important era in the life of IISE Transactions: the conceptualization and implementation of the Focused Issue structure. It reviews the 1988–1992 process during which the structure was conceived and approved, as well as the reasons that motivated the effort and some of the main players involved. It then records the early years 1993–1996 during which the structure was implemented, discussing the editorial organization, including its staffing, policies, and practices. Primary milestones achieved during the early years as well as mixed successes are then described. To contribute to the historical record, the article identifies issues facing the journal at the time and the specific steps taken to address each of them. The article ends with an epilogue. Journal: IISE Transactions Pages: 555-560 Issue: 6 Volume: 55 Year: 2023 Month: 6 X-DOI: 10.1080/24725854.2023.2167137 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2167137 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:6:p:555-560 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2076178_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Shing Chih Tsai Author-X-Name-First: Shing Chih Author-X-Name-Last: Tsai Author-Name: Jun Luo Author-X-Name-First: Jun Author-X-Name-Last: Luo Author-Name: Guangxin Jiang Author-X-Name-First: Guangxin Author-X-Name-Last: Jiang Author-Name: Wei Cheng Yeh Author-X-Name-First: Wei Cheng Author-X-Name-Last: Yeh Title: Adaptive fully sequential selection procedures with linear and nonlinear control variates Abstract: A decision-making process often involves selecting the best solution from a finite set of possible alternatives regarding some performance measure, which is known as Ranking-and-Selection (R&S) when the performance is not explicitly available and can only be estimated by taking samples.
Many R&S procedures have been proposed for different problem formulations. In this article, we adopt the classic fully sequential Indifference-Zone (IZ) formulation developed in the statistical literature, and take advantage of control variates, a well-known variance reduction technique in the simulation literature, to investigate the potential benefits as well as the statistical guarantee by designing a new type of R&S procedure in an adaptive fashion. In particular, we propose a generic adaptive fully sequential procedure that can employ both linear and nonlinear control variates, in which both the control coefficient and sample variance can be sequentially updated as the sampling process progresses. We demonstrate that the proposed procedures provide the desired probability of correct selection in the asymptotic regime as the IZ parameter goes to zero. We then compare the proposed procedures with various existing procedures through simulation experiments on practical illustrative examples, in which we observe several interesting findings and demonstrate the advantage of our proposed procedures. Journal: IISE Transactions Pages: 561-573 Issue: 6 Volume: 55 Year: 2023 Month: 6 X-DOI: 10.1080/24725854.2022.2076178 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2076178 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:6:p:561-573 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2081744_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Wenxin Li Author-X-Name-First: Wenxin Author-X-Name-Last: Li Author-Name: Ness Shroff Author-X-Name-First: Ness Author-X-Name-Last: Shroff Title: Work-conserving disciplines are asymptotically optimal in completion time minimization Abstract: We prove that in a stable multi-server system, where different machines are allowed to have different speeds, all work-conserving disciplines are asymptotically optimal for minimizing total completion time, if job size and interarrival time distributions have finite moments. As a byproduct of our analysis, we obtain a tight upper bound on the competitive ratios of work-conserving disciplines for the flow time metric. Journal: IISE Transactions Pages: 616-628 Issue: 6 Volume: 55 Year: 2023 Month: 6 X-DOI: 10.1080/24725854.2022.2081744 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2081744 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:6:p:616-628 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2088903_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Bo Li Author-X-Name-First: Bo Author-X-Name-Last: Li Author-Name: Shaoxuan Liu Author-X-Name-First: Shaoxuan Author-X-Name-Last: Liu Title: When to swing into high gear? A time-limit approach to problem escalation Abstract: In manufacturing and services, random problems arise that disrupt normal operations. Organizations must resolve these problems in a timely and cost-efficient manner, which can be a daunting task. A typical management response is assigning escalation routes whereby the lowest tier initially owns a problem that may later be escalated to higher tiers.
To minimize the costs associated with these problems, we consider a management policy that specifies a time limit beyond which problems must move up a tier; the setting is formulated as a stochastic dynamic program. For scenarios involving a single problem type, we find that optimal time limits can exist when the problem service times are generally distributed or correlated and when the marginal delay cost increases with time; we also characterize the single-problem-type, multi-tier optimal solution when the marginal delay cost is constant. When multiple types of problems are pooled together, we show that a time-limit-based approach is robust to various probability distributions of service times and performs reasonably well even when the problem type is unidentified ex ante. Finally, we derive comparative statics on how parameter values, problem characteristics, and the composition of problems each affect the optimal time limit. Journal: IISE Transactions Pages: 644-655 Issue: 6 Volume: 55 Year: 2023 Month: 6 X-DOI: 10.1080/24725854.2022.2088903 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2088903 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:6:p:644-655 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2086719_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Yale T. Herer Author-X-Name-First: Yale T. Author-X-Name-Last: Herer Author-Name: Enver Yücesan Author-X-Name-First: Enver Author-X-Name-Last: Yücesan Title: An asymptotic perspective on risk pooling: Limitations and relationship to transshipments Abstract: In this article we provide a novel perspective on risk pooling approaches by characterizing and comparing their asymptotic performance, highlighting the conditions under which one approach dominates the other. More specifically, we determine the inventory policy and the expected total costs of systems under physical and information pooling as the number of locations grows. We show that physical pooling dominates information pooling in settings with no additional per-location costs for operating the centralized system. In the presence of such costs, however, information pooling becomes a viable alternative to physical pooling. Through asymptotic analysis, we also address the grouping problem, i.e., the division of a given set of non-identical locations into an ordered collection of mutually exclusive and collectively exhaustive subsets of predetermined sizes, and demonstrate that homogeneous groups, comprising locations with similar demand volatility, achieve a lower expected total cost. Finally, the convergence of the expected total costs and the base stock levels under the two pooling approaches is demonstrated through a simple numerical illustration. Our analysis supports the assertion that it is important to consider not only the individual characteristics of each location in isolation, but also the interactions among them, when designing pooling systems. Journal: IISE Transactions Pages: 629-643 Issue: 6 Volume: 55 Year: 2023 Month: 6 X-DOI: 10.1080/24725854.2022.2086719 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2086719 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:55:y:2023:i:6:p:629-643 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2075568_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Hussein Tarhini Author-X-Name-First: Hussein Author-X-Name-Last: Tarhini Author-Name: Bacel Maddah Author-X-Name-First: Bacel Author-X-Name-Last: Maddah Author-Name: Ahmad Shmayssani Author-X-Name-First: Ahmad Author-X-Name-Last: Shmayssani Title: Pricing, assortment, and inventory decisions under nested logit demand and equal profit margins Abstract: We consider the interdependent decisions on assortment, inventory, and pricing of substitutable products that are differentiated by some primary and secondary attributes captured by a nested logit consumer choice model. We examine a newsvendor-type setting with several products competing for demand over a single selling season. We assume that all products have equal profit margins. The demand has a multiplicative-additive structure where both variance and coefficient of variation depend on the common profit margin, which adds to the model applicability at the expense of tractability. Under a Taylor series-type approximation, we show that the expected profit is unimodal in the common margin of products in a given assortment. Then, we compare the optimal profit margin to the case under ample inventory, which allows us to understand the effect of inventory on pricing. We also study the optimal assortment problem under exogenous pricing. We show that the classic result on the optimality of popular sets holds under tight approximations of the profit function. We finally propose a heuristic for jointly deciding on assortment, pricing, and inventory decisions, which assumes equal profit margins across products and exploits popular sets only. Our detailed numerical study shows that this equal-margin heuristic produces high-quality solutions. Journal: IISE Transactions Pages: 602-615 Issue: 6 Volume: 55 Year: 2023 Month: 6 X-DOI: 10.1080/24725854.2022.2075568 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2075568 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:6:p:602-615 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2103755_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Zahra Ghatrani Author-X-Name-First: Zahra Author-X-Name-Last: Ghatrani Author-Name: Archis Ghate Author-X-Name-First: Archis Author-X-Name-Last: Ghate Title: Inverse Markov decision processes with unknown transition probabilities Abstract: Inverse optimization involves recovering parameters of a mathematical model using observed values of decision variables. In Markov Decision Processes (MDPs), it has been applied to estimate rewards that render observed policies optimal. A counterpart is not available for transition probabilities. We study two variants of this problem. First, the decision-maker wonders whether there exist a policy and transition probabilities that attain given target values of expected total discounted rewards over an infinite horizon. We derive necessary and sufficient existence conditions, and formulate a feasibility linear program whose solution yields the requisite policy and transition probabilities. We extend these results when the decision-maker wants to render the target values optimal.
In the second variant, the decision-maker wishes to find transition probabilities that make a given policy optimal. The resulting problem is nonconvex bilinear, and we propose tailored versions of two heuristics, the Convex-Concave Procedure and Sequential Linear Programming (SLP). Their performance is compared via numerical experiments against an exact method. Computational experiments on randomly generated MDPs reveal that SLP outperforms the other two in both runtime and objective value. Further insights into SLP’s performance are derived via numerical experiments on inverse inventory control, equipment replacement, and multi-armed bandit problems. Journal: IISE Transactions Pages: 588-601 Issue: 6 Volume: 55 Year: 2023 Month: 6 X-DOI: 10.1080/24725854.2022.2103755 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2103755 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:6:p:588-601 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2086326_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Yuanyuan Lei Author-X-Name-First: Yuanyuan Author-X-Name-Last: Lei Author-Name: Chen Wang Author-X-Name-First: Chen Author-X-Name-Last: Wang Title: Extracting the wisdom of a smaller crowd from dependent quantile judgments Abstract: The goal of this article is to harness the wisdom of a crowd without calibration. We assume that each expert forms predictions by linearly combining various information cues, as inspired by the lens model, and use a Gaussian process to account for sampling and judgmental errors in quantile judgments. Without knowing the experts’ observed information cues, we develop a three-step estimation algorithm to factor quantile judgments into “variable profiles” (latent cues underlying each variable of interest) and “expert profiles” (each expert’s weights over these cues). We can inquire about expert similarity using their weights of the latent cues, which preserve the same clustering results as the actual weights of the observed cues up to a full-rank linear transform. We can then depict the diversity and dependency among experts explicitly and retain a subcrowd by picking delegates from each subgroup of experts based on the estimated weights. Simulation and case studies demonstrate that a subcrowd selected this way can represent the entire expert panel well. Journal: IISE Transactions Pages: 574-587 Issue: 6 Volume: 55 Year: 2023 Month: 6 X-DOI: 10.1080/24725854.2022.2086326 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2086326 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:6:p:574-587 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2102272_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Zhen Zhong Author-X-Name-First: Zhen Author-X-Name-Last: Zhong Author-Name: Shancong Mou Author-X-Name-First: Shancong Author-X-Name-Last: Mou Author-Name: Jeffrey H. Hunt Author-X-Name-First: Jeffrey H. Author-X-Name-Last: Hunt Author-Name: Jianjun Shi Author-X-Name-First: Jianjun Author-X-Name-Last: Shi Title: Convex relaxation for optimal fixture layout design Abstract: This article proposes a general fixture layout design framework that directly integrates the system equation with the convex relaxation method.
Because the optimal fixture design problem is a large-scale combinatorial optimization problem, we relax it to a convex Semi-Definite Programming (SDP) problem by adopting sparse learning and SDP relaxation techniques. It can be solved efficiently by existing convex optimization algorithms and thus generates a near-optimal fixture layout. A real case study in the half-to-half fuselage assembly process indicates the superiority of our proposed algorithm compared to the current industry practice and state-of-the-art methods. Journal: IISE Transactions Pages: 746-754 Issue: 7 Volume: 55 Year: 2023 Month: 7 X-DOI: 10.1080/24725854.2022.2102272 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2102272 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:7:p:746-754 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2074579_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Xiaomeng Peng Author-X-Name-First: Xiaomeng Author-X-Name-Last: Peng Author-Name: Xiaoning Jin Author-X-Name-First: Xiaoning Author-X-Name-Last: Jin Author-Name: Shiming Duan Author-X-Name-First: Shiming Author-X-Name-Last: Duan Author-Name: Chaitanya Sankavaram Author-X-Name-First: Chaitanya Author-X-Name-Last: Sankavaram Title: Active learning-assisted semi-supervised learning for fault detection and diagnostics with imbalanced dataset Abstract: Data-driven Fault Detection and Diagnostics (FDD) methods often assume that sufficient labeled samples are class-balanced and faulty classes in testing are precedented, i.e., seen previously during model training. When monitoring a large fleet of assets at scale, these assumptions may be violated: (I) only a limited number of samples can be manually labeled due to constraints of time and/or cost; (II) most of the samples collected in the engineering systems are under normal conditions, leading to a highly imbalanced class distribution and a biased prediction model. This work presents a robust and cost-effective FDD framework that integrates active learning and semi-supervised learning methods to detect both known and unknown failure modes iteratively. This framework makes it possible to strategically select the samples to be annotated from a fully unlabeled dataset while keeping the labeling cost minimal. Specifically, a novel graph-based semi-supervised classifier with adaptive graph construction is developed to predict labels with imbalanced data and detect novel classes. We designed a multi-criteria active learning sampling strategy to select the most informative samples from unlabeled data in order to query a minimal number of labels for classification. We tested the framework and algorithms on three synthetic datasets and one real-world dataset of vehicle air intake systems, and demonstrated superior performance compared to state-of-the-art methods for fleet-level FDD. Journal: IISE Transactions Pages: 672-686 Issue: 7 Volume: 55 Year: 2023 Month: 7 X-DOI: 10.1080/24725854.2022.2074579 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2074579 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:55:y:2023:i:7:p:672-686 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2089784_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Chengjie Wang Author-X-Name-First: Chengjie Author-X-Name-Last: Wang Author-Name: Jian Liu Author-X-Name-First: Jian Author-X-Name-Last: Liu Author-Name: Qingyu Yang Author-X-Name-First: Qingyu Author-X-Name-Last: Yang Author-Name: Qingpei Hu Author-X-Name-First: Qingpei Author-X-Name-Last: Hu Author-Name: Dan Yu Author-X-Name-First: Dan Author-X-Name-Last: Yu Title: Recoverability effects on reliability assessment for accelerated degradation testing Abstract: Accelerated Degradation Testing (ADT) provides an efficient experimental approach to collect lifetime-related data for the reliability assessment of highly reliable products under normal use stress. Recoverability, which occurs in many typical failure modes when online in-chamber measurements have to be replaced by offline ex-chamber measurements with stress released, is an important factor that may affect the accuracy of reliability estimation from ADT data. Nonetheless, the presence of recoverability has not been adequately considered in traditional methods, leading to an over-optimistic estimation of lifetime and reliability. The linkage between recoverability and such inferential bias has not been studied systematically from a theoretical perspective. In this study, recoverability is explicitly incorporated into the ADT modeling framework. Without loss of generality, the Wiener process is adopted as the basis for the proposed degradation model, superimposed with a cumulative recovery. Theoretical results show that the Mean Time To Failure (MTTF) will be overestimated when recoverability is neglected, which leads to poor lifetime-centered decision-making. For finite and even small sample sizes, this conclusion is no longer certain, but the chance of overestimation remains high, and the corresponding overestimation probability can be calculated explicitly. All theoretical conclusions are validated by simulation studies, in which the MTTFs are overestimated by 6% to 42% under different parameter settings. A real-world application to semiconductor products shows that even slight recoverability could lead to an obvious overestimation of the MTTF, and the proposed model provides a convenient way to derive more accurate assessment results. Journal: IISE Transactions Pages: 698-710 Issue: 7 Volume: 55 Year: 2023 Month: 7 X-DOI: 10.1080/24725854.2022.2089784 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2089784 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:7:p:698-710 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2093424_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Mingman Sun Author-X-Name-First: Mingman Author-X-Name-Last: Sun Author-Name: Meng Zhang Author-X-Name-First: Meng Author-X-Name-Last: Zhang Title: Multiphysics modeling of ultrasound-assisted biomass torrefaction for fuel pellets production Abstract: When an ultrasound wave propagates through a volume of biomass medium, the majority of the energy in the acoustic field is absorbed locally by the biomass, resulting in the generation of heat.
This torrefaction effect results in a temperature increase of the biomass, converting the biomass into a coal-like intermediate with upgraded fuel properties relative to the original biomass. However, few analyses can be found in the literature explaining the mechanism of ultrasound-assisted biomass torrefaction. This research aims to model an ultrasound-assisted biomass torrefaction system. The developed multiphysics model depicts the piezoelectric effect of a transducer, the vibration amplitude at the output end of the ultrasonic horn, and the acoustic intensity and temperature distributions in the biomass medium. The vibration amplitude and frequency of the ultrasonic horn were measured by a non-contact capacitive sensor, and it was verified that the model can accurately simulate the ultrasonic vibration of the experimental system. The temperature at the center of the biomass was measured to validate the model’s temperature prediction. Both simulation and experiment showed that ultrasound-assisted biomass torrefaction can create a torrefied fuel pellet within 60 seconds. Journal: IISE Transactions Pages: 723-730 Issue: 7 Volume: 55 Year: 2023 Month: 7 X-DOI: 10.1080/24725854.2022.2093424 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2093424 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:7:p:723-730 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2078523_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Nan Zhang Author-X-Name-First: Nan Author-X-Name-Last: Zhang Author-Name: Sen Tian Author-X-Name-First: Sen Author-X-Name-Last: Tian Author-Name: Kaiquan Cai Author-X-Name-First: Kaiquan Author-X-Name-Last: Cai Author-Name: Jun Zhang Author-X-Name-First: Jun Author-X-Name-Last: Zhang Title: Condition-based maintenance assessment for a deteriorating system considering stochastic failure dependence Abstract: In this article, the condition-based maintenance optimization of a K-out-of-N deteriorating system considering failure dependence is discussed. The degradation of each component is modelled by a pure jump Lévy process. Whenever one component fails, it can either induce instantaneous failures or lead to increments in the degradation levels of other components. Thus, this model has the flexibility to describe the phenomenon of instantaneous failures of multiple components, which is known as common cause failure. It can also model the accumulative, gradual propagation effect of the component failure to the system. A periodic inspection policy is considered to reveal the real state of the system, upon which possible maintenance actions can be carried out according to the observations. The inspection and maintenance problem is formulated as a Markov decision process and the value iteration algorithm is employed to solve the problem. The proposed policy is assessed by the total expected discounted cost in the long-run horizon. Under mild conditions, some structural properties of the optimal maintenance policies are obtained. A numerical example is given to illustrate the applicability of the proposed model. It can provide a theoretical reference for decision-makers when developing maintenance policies.
Journal: IISE Transactions Pages: 687-697 Issue: 7 Volume: 55 Year: 2023 Month: 7 X-DOI: 10.1080/24725854.2022.2078523 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2078523 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:7:p:687-697 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2091184_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Eric Weflen Author-X-Name-First: Eric Author-X-Name-Last: Weflen Author-Name: Jakob D. Hamilton Author-X-Name-First: Jakob D. Author-X-Name-Last: Hamilton Author-Name: Samantha Sorondo Author-X-Name-First: Samantha Author-X-Name-Last: Sorondo Author-Name: Ola L. A. Harrysson Author-X-Name-First: Ola L. A. Author-X-Name-Last: Harrysson Author-Name: Matthew C. Frank Author-X-Name-First: Matthew C. Author-X-Name-Last: Frank Author-Name: Iris V. Rivero Author-X-Name-First: Iris V. Author-X-Name-Last: Rivero Title: Evaluating interlayer gaps in friction stir spot welds for rapid tooling applications Abstract: The potential for a rapid hybrid additive/subtractive manufacturing technique capable of generating large aluminum components with deep geometrical features exists if, for example, layers of aluminum plates can be deposited and held in place using friction stir welding followed by machining. However, when plates of aluminum are friction stir spot welded together, adjacent material can deform, causing a gap between layers. This research investigates how friction stir tool geometry affects the formation of interlayer gaps and the lap shear strength of the weld. For this purpose, residual stresses and microhardness were measured to characterize the weld formation process, while friction stir welding of 6061 aluminum bar stock was carried out on a machining center with three different pin diameters (5.7, 6.4, and 7.0 mm) and two pin lengths (8.9 and 10.2 mm). In general, outcomes of the research show that lap shear strength trends upward with increasing pin diameter but does not show a strong relation to the pin length. Interlayer gap size increases with pin length, but does not show a clear trend with pin diameter. Compressive residual stresses were observed on the weld shoulder with no significant variations occurring among the studied stir tool geometries. No significant change was measured in microhardness values when the pin length or shoulder diameter was changed, suggesting that the increase in lap shear strength is due to a change in weld cross-section instead of a material property change. This research will guide friction stir tool geometry selection for this hybrid manufacturing process and can be applied more broadly to any application where material deformation around a weld is undesirable. Journal: IISE Transactions Pages: 711-722 Issue: 7 Volume: 55 Year: 2023 Month: 7 X-DOI: 10.1080/24725854.2022.2091184 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2091184 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:55:y:2023:i:7:p:711-722 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2068812_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Yucheng Dong Author-X-Name-First: Yucheng Author-X-Name-Last: Dong Author-Name: Siqi Wu Author-X-Name-First: Siqi Author-X-Name-Last: Wu Author-Name: Xiaoping Shi Author-X-Name-First: Xiaoping Author-X-Name-Last: Shi Author-Name: Yao Li Author-X-Name-First: Yao Author-X-Name-Last: Li Author-Name: Francisco Chiclana Author-X-Name-First: Francisco Author-X-Name-Last: Chiclana Title: Clustering method with axiomatization to support failure mode and effect analysis Abstract: Failure Mode and Effect Analysis (FMEA) is a highly structured risk-prevention management process that improves the reliability and safety of a system. This article investigates one of the most critical issues in FMEA practice: Clustering failure modes based on their risks. In the failure mode clustering problem, all identified failure modes need to be assigned to several predefined and risk-ordered categories to manage their risks. We model the clustering of failure modes through multi-expert multiple criteria decision making with an additive value function, and call it the additive N-clustering problem. We begin by proposing six axioms that describe an ideal clustering method in the additive N-clustering problem, and find that the EXogenous Clustering Method (EXCM), where category thresholds can be exogenously provided, is ideal (Exogenous Possibility Theorem), whereas any endogenous clustering method, where the clustering is determined endogenously in the given method, cannot satisfy all six axioms simultaneously (Endogenous Impossibility Theorem). In practice, endogenous clustering methods are important, due to the difficulty in providing accurate and reasonable category thresholds of the EXCM. Therefore, we propose the Consensus-based ENdogenous Clustering Method (CENCM) and discuss its axiomatic properties. We also apply the CENCM to the SARS-CoV-2 prevention case and justify the CENCM through axiomatic comparisons and a detailed simulation experiment. Journal: IISE Transactions Pages: 657-671 Issue: 7 Volume: 55 Year: 2023 Month: 7 X-DOI: 10.1080/24725854.2022.2068812 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2068812 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:7:p:657-671 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2100050_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Yuhao Zhong Author-X-Name-First: Yuhao Author-X-Name-Last: Zhong Author-Name: Akash Tiwari Author-X-Name-First: Akash Author-X-Name-Last: Tiwari Author-Name: Hitomi Yamaguchi Author-X-Name-First: Hitomi Author-X-Name-Last: Yamaguchi Author-Name: Akhlesh Lakhtakia Author-X-Name-First: Akhlesh Author-X-Name-Last: Lakhtakia Author-Name: Satish T.S. Bukkapatnam Author-X-Name-First: Satish T.S. 
Author-X-Name-Last: Bukkapatnam Title: Identifying the influence of surface texture waveforms on colors of polished surfaces using an explainable AI approach Abstract: An explainable artificial intelligence approach based on consolidating Local Interpretable Model-agnostic Explanations (LIME) model outputs was devised to discern the influence of the surface morphology on the colors exhibited by stainless-steel 304 parts polished with a Magnetic Abrasive Finishing (MAF) process. The MAF polishing process was used to create two regions, each appearing either blue or red to the naked eye. The color distribution was microscopically heterogeneous, i.e., some red microscale patches were dispersed in blue regions, and vice versa. The surface morphology was represented in the frequency domain (using a 2D Fourier transform) to capture the harmonic surface patterns, such as the feed and lay marks from the polishing process. A Convolutional Neural Network (CNN) was employed to identify the color of the region from the frequency characteristics of the surface morphology. The CNN was able to predict the observed colors with test accuracies exceeding 99%, suggesting that the frequency characteristics of the surface morphology of the red regions are distinctly different from those of the blue regions. A LIME model was constructed around each small segment within each region of the surface to identify the frequency features that are influential for differentiating between the colors. To deal with the effect of heterogeneity, an algorithm based on query-by-experts was used to reconcile the local influences and gather the global explanations of the frequency characteristics that inform the blue versus red regions. We found that the dominant morphological features in the red regions are those that capture the polishing lay patterns underlying the surface structure, whereas those in the blue regions capture the non-uniform and high-frequency waveform patterns, such as those that result when oxide films form due to the intense polishing conditions. Journal: IISE Transactions Pages: 731-745 Issue: 7 Volume: 55 Year: 2023 Month: 7 X-DOI: 10.1080/24725854.2022.2100050 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2100050 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:7:p:731-745 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2135182_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Lei Fang Author-X-Name-First: Lei Author-X-Name-Last: Fang Author-Name: Sai Zhao Author-X-Name-First: Sai Author-X-Name-Last: Zhao Title: The bright side of having a strong rival under environmental certification Abstract: Third-party certification plays an important role in promoting Environmental Corporate Social Responsibility (ECSR) activities, which usually cannot be directly observed by consumers. This article studies firms’ costly abatement efforts as their ECSR, in a horizontally differentiated duopoly with asymmetric costs. We consider a Non-Governmental Organization (NGO) as the external certifier to set an ECSR standard. We show that the equilibrium certification outcome depends on the cost asymmetry between the two firms as well as the relevant market dynamics (e.g., the degree of product substitutability, consumers’ valuation for ECSR efforts).
Specifically, when the cost gap between the two firms is sufficiently wide, the NGO would set a high standard to selectively target the more efficient firm. In this setting, we find that, surprisingly, the more efficient firm can be better off when the abatement cost of its less efficient rival is reduced, provided that the competition is not too intense. We also examine mandatory certification operated by the government and show that a firm may prefer to have a strong rival staying in the market rather than a weak rival exiting the market. Our analysis provides a rationale for the (free) sharing of environmental technology patents among competing firms in the industry. Journal: IISE Transactions Pages: 846-859 Issue: 8 Volume: 55 Year: 2023 Month: 8 X-DOI: 10.1080/24725854.2022.2135182 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2135182 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:8:p:846-859 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2120222_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Taner Cokyasar Author-X-Name-First: Taner Author-X-Name-Last: Cokyasar Author-Name: Mingzhou Jin Author-X-Name-First: Mingzhou Author-X-Name-Last: Jin Title: Additive manufacturing capacity allocation problem over a network Abstract: The use of Additive Manufacturing (AM) for low demand volumes, such as spare parts, has recently attracted considerable attention from researchers and practitioners. This study defines the AM Capacity Allocation Problem (AMCAP) to design an AM supply network and choose between printing upon demand and sourcing through an alternative option for each part in a given set. A mixed-integer nonlinear program was developed to minimize the production, transportation, alternative sourcing, and lead time costs. We developed a cut generation algorithm to find optimal solutions in finite iterations by exploiting the convexity of the nonlinear waiting time for AM products at each AM facility. Numerical experiments show the effectiveness of the proposed algorithm for the AMCAP. A case study was conducted to demonstrate that the optimal AM deployment can save almost 20% of costs over situations that do not use any AM. The case also shows that AM can realize its maximum benefits when it works in conjunction with an alternative option, e.g., inventory holding, and its capacity is strategically deployed. Since AM is a new technology and is rapidly evolving, this study includes a sensitivity analysis to examine the effects of improved AM technology features, such as machine cost and build speed. When the build speed increases, the total cost decreases quickly, but the number of AM machines will first increase and then decrease as more parts are assigned to the AM option. Journal: IISE Transactions Pages: 807-820 Issue: 8 Volume: 55 Year: 2023 Month: 8 X-DOI: 10.1080/24725854.2022.2120222 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2120222 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:55:y:2023:i:8:p:807-820 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2111481_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Gang Li Author-X-Name-First: Gang Author-X-Name-Last: Li Author-Name: Yu Xia Author-X-Name-First: Yu Author-X-Name-Last: Xia Title: A supply chain sourcing model at the interface of operations and sustainability Abstract: In this article, we develop a supply chain sourcing model that incorporates both sustainability and operational performance measures. The model selects suppliers and determines sustainability investments and order allocations among the selected suppliers. High sustainability performance, as well as cost efficiency, is achieved while a high operational performance level is maintained. We formulate the problem as a nonlinear bi-objective integer-programming model, discover the model’s special features, and propose an effective and computationally efficient algorithm to solve it. To quantify a supply chain’s sustainability performance, we adopt the Environmental, Social, and Governance index. Instead of searching for just one sourcing solution, we find the Pareto-optimal set of effective solutions. Numerical tests verify that our algorithm outperforms an existing sourcing algorithm in terms of computational efficiency. A simulation of Apple’s sourcing decisions demonstrates the effectiveness of the model in business practice. Our work also provides managerial insights on how sustainable operations alter traditional supply chain sourcing decisions. It helps practitioners make fast and effective sourcing and investment decisions to implement their sustainability strategies at the operational level. Journal: IISE Transactions Pages: 794-806 Issue: 8 Volume: 55 Year: 2023 Month: 8 X-DOI: 10.1080/24725854.2022.2111481 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2111481 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:8:p:794-806 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2135797_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Dachuan Chen Author-X-Name-First: Dachuan Author-X-Name-Last: Chen Author-Name: Chenxu Li Author-X-Name-First: Chenxu Author-X-Name-Last: Li Title: Closed-form expansion for option price under stochastic volatility model with concurrent jumps Abstract: We propose and implement a novel path-perturbation-based closed-form expansion for approximating option prices under a general class of models featuring stochastic volatility and jumps in both asset return and volatility. The expansion naturally employs formulas reported in the literature for pricing options under jump-diffusions with constant volatility as the leading term and provides corrections up to an arbitrary order. It offers an efficient computational tool for empirical analysis on the models through, e.g., calibration or estimation based on option data, in particular for flexible yet analytically intractable cases. Journal: IISE Transactions Pages: 781-793 Issue: 8 Volume: 55 Year: 2023 Month: 8 X-DOI: 10.1080/24725854.2022.2135797 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2135797 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:uiiexx:v:55:y:2023:i:8:p:781-793 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2128235_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Zhan Pang Author-X-Name-First: Zhan Author-X-Name-Last: Pang Author-Name: Yixuan Xiao Author-X-Name-First: Yixuan Author-X-Name-Last: Xiao Title: Inventory control under corporate income tax and accrual accounting Abstract: Corporate income tax constitutes a significant portion of financial costs in business operations. A typical tax system is characterized by the tax structure, accounting method, and loss carryover rule. In this article, we study the optimal inventory policies under taxation and accrual accounting. We consider a lost-sales periodic-review inventory system under a proportional tax with loss carryforward. We formulate the problem as a cyclic stochastic dynamic program with multiple accounting periods, each of which consists of multiple review periods. We show that an income-dependent basestock policy is optimal. We find that tax function convexity and loss carryforward may introduce conflicting incentives into inventory decisions: the former drives risk-averse decisions whereas the latter induces risk-seeking behavior. We identify two intertemporal effects of taxation, the intra-accounting-period effect and the inter-accounting-period effect, both inducing smaller basestock levels. We further extend the analysis to the progressive tax, backordering system, and cash accounting method. Our numerical study examines the effects of various characteristics of taxation on the optimal policies. Our results demonstrate that ignoring taxation in inventory decisions may lead to significant losses, especially when the tax rate and demand uncertainty are sufficiently high, which reveals the value of tax considerations in inventory management. Journal: IISE Transactions Pages: 833-845 Issue: 8 Volume: 55 Year: 2023 Month: 8 X-DOI: 10.1080/24725854.2022.2128235 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2128235 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:8:p:833-845 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2107249_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: John Alasdair Warwicker Author-X-Name-First: John Alasdair Author-X-Name-Last: Warwicker Author-Name: Steffen Rebennack Author-X-Name-First: Steffen Author-X-Name-Last: Rebennack Title: Generating optimal robust continuous piecewise linear regression with outliers through combinatorial Benders decomposition Abstract: Using piecewise linear (PWL) functions to model discrete data has applications in, for example, healthcare, engineering, and pattern recognition. Recently, mixed-integer linear programming (MILP) approaches have been used to optimally fit continuous PWL functions. We extend these formulations to allow for outliers. The resulting MILP models rely on binary variables and big-M constructs to model logical implications. The combinatorial Benders decomposition (CBD) approach removes the dependency on the big-M constraints by separating the MILP model into a master problem of the complicating binary variables and a linear subproblem over the continuous variables, which feeds combinatorial solution information into the master problem.
We use the CBD approach to decompose the proposed MILP model and solve for optimal PWL functions. Computational results show that vast speedups can be found using this robust approach, with problem-specific improvements, including smart initialization, strong cut generation, and special branching approaches, leading to even faster solve times, up to more than 12,000 times faster than the standard MILP approach. Journal: IISE Transactions Pages: 755-767 Issue: 8 Volume: 55 Year: 2023 Month: 8 X-DOI: 10.1080/24725854.2022.2107249 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2107249 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:8:p:755-767 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2126037_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Mehmet Sekip Altug Author-X-Name-First: Mehmet Sekip Author-X-Name-Last: Altug Title: Flexible consumer return policies and rising clearance sales in retailing: Can this dual trend co-exist? Abstract: We have been witnessing the dual trend of retailers offering increasingly flexible consumer return policies and the rise in clearance sales and the number of off-price clearance channels. We model and connect these two trends by making the salvage/clearance price endogenous and ask whether retailers can continue to be better off without restricting their return policies, even if they end up with lower clearance prices. Interestingly, compared with a setting with no returns, although the retailers’ clearance price will be lower, we show that they would be better off with flexible return policies when the bargain hunter segment’s valuation heterogeneity is above a certain threshold. We show that this result holds in both store-clearance and the two types of off-price clearance-channel settings in which multiple retailers clear their excess stock through an intermediary. We also show that the expected clearance price in off-price clearance channels is higher than the clearance price in the store-clearance setting, with and without returns. We believe that these two results jointly explain both trends of offering flexible return policies and the rise in off-price clearance channel business in the retail industry. Journal: IISE Transactions Pages: 821-832 Issue: 8 Volume: 55 Year: 2023 Month: 8 X-DOI: 10.1080/24725854.2022.2126037 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2126037 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:8:p:821-832 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2123117_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Zeyad Kassem Author-X-Name-First: Zeyad Author-X-Name-Last: Kassem Author-Name: Adolfo R. Escobedo Author-X-Name-First: Adolfo R. Author-X-Name-Last: Escobedo Title: Models and network insights for edge-based districting with simultaneous location-allocation decisions Abstract: We introduce two edge-based districting optimization models with no pre-fixed centers to partition a road network into a given number of compact, contiguous, and balanced districts. The models are applicable to logistics applications. The first model is a mixed-integer programming model with network flow-based contiguity constraints.
Since this model performs poorly on medium-to-large instances, a second model with cut set-based contiguity constraints is introduced. The full specification of the contiguity constraints requires substantial computational resources and is impractical except for very small instances. However, paired with an iterative branch-and-bound algorithm with a cut generation scheme (B&B&Cut), the second model tends to outperform the first computationally. We show that the underlying problem is NP-hard. Moreover, we derive network insights, from which cutting planes that enable a reduction in the solution space can be generated. The cuts are tested on road networks with up to 500 nodes and 687 edges, leading to computational speedups of up to almost 27x relative to solving the second optimization model exactly with B&B&Cut alone. Journal: IISE Transactions Pages: 768-780 Issue: 8 Volume: 55 Year: 2023 Month: 8 X-DOI: 10.1080/24725854.2022.2123117 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2123117 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:8:p:768-780 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2104972_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Xiao-Lin Wang Author-X-Name-First: Xiao-Lin Author-X-Name-Last: Wang Title: Design and pricing of usage-driven customized two-dimensional extended warranty menus Abstract: In this article, we study the design and pricing of customized two-dimensional extended warranty menus from the provider’s perspective, where each consumer is presented with a tailored warranty menu based on her usage rate. We propose to design such usage-driven customized menus in two steps. First, for a fixed usage rate, we design the corresponding menu by seeking optimal prices for given warranty limits. Then, the menu for any other usage rate is designed by fixing the optimal prices and determining the associated warranty limits. We prove that under the multinomial logit choice framework, if the product failure probabilities for respective options in two customized menus are the same, then the menus will have identical expected profits and attach rates. A case study on commercial vehicles is presented to demonstrate the customization methodology. We find that customized menus can generate a higher profit and attach rate than a uniform menu without customization, indicating the benefit of customization. In addition, we extend the original model to incorporate ancillary preventive maintenance programs. Our analysis reveals that the optimal preventive maintenance policy for each option is independent of those for other options; moreover, bundling preventive maintenance programs can increase profit and attach rate. Journal: IISE Transactions Pages: 873-885 Issue: 9 Volume: 55 Year: 2023 Month: 9 X-DOI: 10.1080/24725854.2022.2104972 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2104972 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
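For the warranty-menu abstract above, a small sketch of the multinomial logit choice probabilities that underlie such menu designs; the option utilities, prices, and price-sensitivity parameter below are hypothetical:

    import numpy as np

    def mnl_probabilities(values, prices, beta):
        # utility u_j = v_j - beta * p_j; option 0 represents "no purchase"
        u = np.asarray(values) - beta * np.asarray(prices)
        expu = np.exp(u - u.max())          # subtract max for numerical stability
        return expu / expu.sum()

    values = [0.0, 3.0, 4.5, 5.5]           # no-purchase plus three coverage options
    prices = [0.0, 1.0, 2.0, 3.5]
    p = mnl_probabilities(values, prices, beta=1.2)
    attach_rate = p[1:].sum()               # probability of buying any option
    print(p.round(3), round(attach_rate, 3))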
Handle: RePEc:taf:uiiexx:v:55:y:2023:i:9:p:873-885 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2106389_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Zhaohui Geng Author-X-Name-First: Zhaohui Author-X-Name-Last: Geng Author-Name: Arman Sabbaghi Author-X-Name-First: Arman Author-X-Name-Last: Sabbaghi Author-Name: Bopaya Bidanda Author-X-Name-First: Bopaya Author-X-Name-Last: Bidanda Title: Automated variance modeling for three-dimensional point cloud data via Bayesian neural networks Abstract: Three-dimensional (3-D) point cloud data are increasingly being used to describe a wide range of physical objects in detail, corresponding to customized and flexible shape designs. The advent of a new generation of optical sensors has simplified and reduced the costs of acquiring 3-D data in near-real-time. However, the variation of the acquired point clouds, and methods to describe them, create bottlenecks in manufacturing practices such as Reverse Engineering (RE) and metrology in additive manufacturing. We address this issue by developing an automated variance modeling algorithm that utilizes a physical object’s local geometric descriptors and Bayesian Extreme Learning Machines (BELMs). Our proposed ensemble and residual BELM-variants are trained by a scanning history that is composed of multiple scans of other, distinct objects. The specific scanning history is selected by a new empirical Kullback–Leibler divergence we developed to identify objects that are geometrically similar to an object of interest. A case study of our algorithm on additively manufactured products demonstrates its capability to model the variance of point cloud data for arbitrary freeform shapes based on a scanning history involving simpler, and distinct, shapes. Our algorithm has utility for measuring the process capability of 3-D scanning for RE processes. Journal: IISE Transactions Pages: 912-925 Issue: 9 Volume: 55 Year: 2023 Month: 9 X-DOI: 10.1080/24725854.2022.2106389 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2106389 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:9:p:912-925 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2106390_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Kai Wang Author-X-Name-First: Kai Author-X-Name-Last: Wang Author-Name: Jian Li Author-X-Name-First: Jian Author-X-Name-Last: Li Author-Name: Fugee Tsung Author-X-Name-First: Fugee Author-X-Name-Last: Tsung Title: Efficient and interpretable monitoring of high-dimensional categorical processes Abstract: High-Dimensional (HD) processes have become prevalent in many data-intensive scientific domains and engineering applications. The monitoring of HD categorical data, where each variable of interest is evaluated by attribute levels or nominal values, however, has seldom been studied. As the joint distribution of HD categorical variables can be fully characterized by a high-way contingency table or a high-order tensor, we propose a Probabilistic Tensor Decomposition (PTD) which factorizes a huge tensor into a few latent classes (rank-one tensors) to dramatically reduce the number of model parameters. 
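A rough numerical illustration of such a latent-class factorization, assuming a small three-way table and hypothetical class proportions; it shows the parameter reduction from a full joint table to a sum of rank-one tensors:

    import numpy as np

    rng = np.random.default_rng(1)
    K, dims = 2, (3, 4, 5)                  # 2 latent classes, 3 categorical variables
    pi = np.array([0.6, 0.4])               # latent class proportions
    factors = [rng.dirichlet(np.ones(d), size=K) for d in dims]  # per-class marginals

    # P = sum_k pi_k * (p_k outer q_k outer r_k), a mixture of rank-one tensors
    P = sum(pi[k] * np.einsum('i,j,k->ijk',
                              factors[0][k], factors[1][k], factors[2][k])
            for k in range(K))
    print(P.sum())                          # ~1.0: a valid joint distribution
    print(P.size, "cells vs.",
          K * (sum(dims) - len(dims)) + (K - 1), "free parameters")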
Moreover, to enable high interpretability of this latent-class-type PTD model, a novel polarization regularization is devised, which makes each latent class focus on only a few vital combinations of attribute levels of categorical variables. An Expectation-Maximization algorithm is designed for parameter estimation from a historical normal dataset in Phase I, and an exponentially weighted moving average control chart is built in Phase II to monitor the proportions of latent classes that act as surrogates for each original categorical vector. Extensive simulations and a real case study validate the superior inference and monitoring performance of our proposed efficient and interpretable method. Journal: IISE Transactions Pages: 886-900 Issue: 9 Volume: 55 Year: 2023 Month: 9 X-DOI: 10.1080/24725854.2022.2106390 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2106390 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:9:p:886-900 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2092241_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Yifan Li Author-X-Name-First: Yifan Author-X-Name-Last: Li Author-Name: Chunjie Wu Author-X-Name-First: Chunjie Author-X-Name-Last: Wu Author-Name: Wendong Li Author-X-Name-First: Wendong Author-X-Name-Last: Li Author-Name: Fugee Tsung Author-X-Name-First: Fugee Author-X-Name-Last: Tsung Title: Nonparametric passenger flow monitoring using a minimum distance criterion Abstract: Monitoring real-time passenger flow in urban rapid transit systems is very important to maintain social stability and prevent unexpected group events and system failure. To monitor passenger flow, data are collected by sensors deployed in important stations and many existing control charts can be applied. However, because of unknown complex distributions and the requirement to detect shifts of all ranges effectively, conventional methods may perform poorly. Although certain charting schemes truncate the Log-Likelihood Ratio (LLR) function to detect large shifts more quickly, the truncation can cause a massive loss of information, and such schemes can only handle particular distributions, leading to unstable online monitoring. In this article, we propose a nonparametric CUSUM charting scheme to monitor passenger flow dynamically. We propose a novel minimum distance criterion to minimize the functional distance between the objective function and the original LLR function while maintaining its monotonically increasing property. By integrating this concept with kernel density estimation, our proposed chart does not require any parametric process distribution, it can be constructed easily in any situation, and it is sensitive to shifts of all sizes. Theoretical analysis, simulations and a real application to monitoring passenger flow in the Mass Transit Railway in Hong Kong show that our method performs well in various cases. Journal: IISE Transactions Pages: 861-872 Issue: 9 Volume: 55 Year: 2023 Month: 9 X-DOI: 10.1080/24725854.2022.2092241 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2092241 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
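For reference, the generic one-sided CUSUM recursion that the nonparametric chart above builds on, shown with a simple standardized-mean score rather than the article's kernel-based surrogate of the LLR; the allowance k and control limit h below are illustrative:

    import numpy as np

    def cusum(stream, mu0, sigma, k=0.5, h=5.0):
        # S_t = max(0, S_{t-1} + score(x_t) - k); alarm when S_t exceeds h
        s = 0.0
        for t, x in enumerate(stream):
            s = max(0.0, s + (x - mu0) / sigma - k)
            if s > h:
                return t                     # first alarm time
        return None

    rng = np.random.default_rng(2)
    data = np.concatenate([rng.normal(0, 1, 100), rng.normal(1.0, 1, 100)])
    print(cusum(data, mu0=0.0, sigma=1.0))   # alarms shortly after the shift at t = 100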
Handle: RePEc:taf:uiiexx:v:55:y:2023:i:9:p:861-872 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2113186_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Qiang Huang Author-X-Name-First: Qiang Author-X-Name-Last: Huang Title: An impulse response formulation for small-sample learning and control of additive manufacturing quality Abstract: Machine learning for additive manufacturing (ML4AM) has emerged as a viable strategy in recent years to enhance 3D printing performance. However, the amount of data required for model training and the lack of ability to infer AM process insights can be serious barriers for black-box learning methods. Due to the nature of low-volume fabrication of infinite product variety in AM, ML4AM also faces “small data, big tasks” challenges to learn heterogeneous point cloud data and control the quality of new designs. To address these challenges, this work establishes an impulse response formulation of layer-wise AM processes to relate design inputs with the deformed final products. To enable prescriptive learning from a small sample of printed parts with different 3D shapes, we develop a fabrication-aware input–output representation, where each product is constructed by a large number of basic shape primitives. The impulse response model depicts how the 2D shape primitives (circular sectors, line segments, and corner segments) in each layer are stacked up to become final 3D shape primitives. The geometric quality of a new design can therefore be predicted through the construction of learned shape primitives. Essentially, the small-sample learning of printed products is transformed into a large-sample learning of printed shape primitives under the impulse response formulation of AM. This fabrication-aware formulation builds the foundation for applying well-established control theory to intelligent quality control in AM. It not only provides theoretical underpinning and justification of our previous work, but also enables new opportunities in ML4AM. As an example, it leads to transfer function characterization of AM processes to uncover process insights. It also provides block-diagram representation of AM processes to design and optimize the control of AM quality. Journal: IISE Transactions Pages: 926-939 Issue: 9 Volume: 55 Year: 2023 Month: 9 X-DOI: 10.1080/24725854.2022.2113186 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2113186 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
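A toy illustration of the impulse-response view in the abstract above: the layer-wise output is the discrete convolution of the design input with a process impulse response. The kernel below is hypothetical, whereas the article estimates such responses from printed shape primitives:

    import numpy as np

    h = np.array([0.5, 0.3, 0.15, 0.05])     # hypothetical decaying layer response
    u = np.zeros(20)
    u[5] = 1.0                                # a unit "impulse" input at layer 5
    y = np.convolve(u, h)[:u.size]            # predicted layer-wise deformation
    print(y[5:9])                             # echoes the impulse response coefficients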
Handle: RePEc:taf:uiiexx:v:55:y:2023:i:9:p:926-939 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2113481_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Shilan Jin Author-X-Name-First: Shilan Author-X-Name-Last: Jin Author-Name: Rui Tuo Author-X-Name-First: Rui Author-X-Name-Last: Tuo Author-Name: Akash Tiwari Author-X-Name-First: Akash Author-X-Name-Last: Tiwari Author-Name: Satish Bukkapatnam Author-X-Name-First: Satish Author-X-Name-Last: Bukkapatnam Author-Name: Chantel Aracne-Ruddle Author-X-Name-First: Chantel Author-X-Name-Last: Aracne-Ruddle Author-Name: Ariel Lighty Author-X-Name-First: Ariel Author-X-Name-Last: Lighty Author-Name: Haley Hamza Author-X-Name-First: Haley Author-X-Name-Last: Hamza Author-Name: Yu Ding Author-X-Name-First: Yu Author-X-Name-Last: Ding Title: Hypothesis tests with functional data for surface quality change detection in surface finishing processes Abstract: This work is concerned with providing a principled decision process for stopping or tool-changing in a surface finishing process. The decision process is designed to work for products of non-flat geometry. The solution is based on conducting hypothesis testing on the bearing area curves from two consecutive stages of a surface finishing process. In each stage, the bearing area curves, which are in fact the nonparametric quantile curves representing the surface roughness, are extracted from surface profile measurements at a number of sampling locations on the surface of the products. The hypothesis test of these curves informs the decision makers whether there is a change in surface quality induced by the current finishing action. When such a change is detected, the current action is deemed effective and should thus continue, whereas when no change is detected, the effectiveness of the current action is called into question, possibly signaling the need for a change in the course of action. Application of the hypothesis testing-based decision procedure to both spherical and flat surfaces demonstrates the effectiveness and benefit of the proposed method and confirms its geometry-agnostic nature. Journal: IISE Transactions Pages: 940-956 Issue: 9 Volume: 55 Year: 2023 Month: 9 X-DOI: 10.1080/24725854.2022.2113481 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2113481 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:9:p:940-956 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2115593_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Yuanyuan Gao Author-X-Name-First: Yuanyuan Author-X-Name-Last: Gao Author-Name: Xinming Wang Author-X-Name-First: Xinming Author-X-Name-Last: Wang Author-Name: Junbo Son Author-X-Name-First: Junbo Author-X-Name-Last: Son Author-Name: Xiaowei Yue Author-X-Name-First: Xiaowei Author-X-Name-Last: Yue Author-Name: Jianguo Wu Author-X-Name-First: Jianguo Author-X-Name-Last: Wu Title: Hierarchical modeling of microstructural images for porosity prediction in metal additive manufacturing via two-point correlation function Abstract: Porosity is one of the most critical quality issues in Additive Manufacturing (AM). As process parameters are closely related to porosity formation, it is vitally important to study their relationship for better process optimization.
In this article, motivated by the emerging application of metal AM, a three-level hierarchical mixed-effects modeling approach is proposed to characterize the relationship between microstructural images and process parameters for porosity prediction and microstructure reconstruction. Specifically, a Two-Point Correlation Function (TPCF) is used to capture the morphology of the pores quantitatively. Then, the relationship between the TPCF profile and process parameters is established. A blocked Gibbs sampling approach is developed for parameter inference. Our modeling framework can reconstruct the microstructure based on the predicted TPCF through a simulated annealing optimization algorithm. The effectiveness and advantageous features of our method are demonstrated by both the simulation study and the case study with real-world data from metal AM applications. Journal: IISE Transactions Pages: 957-969 Issue: 9 Volume: 55 Year: 2023 Month: 9 X-DOI: 10.1080/24725854.2022.2115593 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2115593 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:9:p:957-969 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2121882_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Gecheng Chen Author-X-Name-First: Gecheng Author-X-Name-Last: Chen Author-Name: Rui Tuo Author-X-Name-First: Rui Author-X-Name-Last: Tuo Title: Projection pursuit Gaussian process regression Abstract: A primary goal of computer experiments is to reconstruct the function given by the computer code via scattered evaluations. Traditional isotropic Gaussian process models suffer from the curse of dimensionality when the input dimension is relatively high and the data points are limited. Gaussian process models with additive correlation functions scale well with dimensionality, but they are more restrictive, as they only work for additive functions. In this work, we consider a projection pursuit model, in which the nonparametric part is driven by an additive Gaussian process regression. We choose the dimension of the additive function to be higher than the original input dimension, and call this strategy “dimension expansion”. We show that dimension expansion can help approximate more complex functions. A gradient descent algorithm is proposed for model training based on maximum likelihood estimation. Simulation studies show that the proposed method outperforms the traditional Gaussian process models. Journal: IISE Transactions Pages: 901-911 Issue: 9 Volume: 55 Year: 2023 Month: 9 X-DOI: 10.1080/24725854.2022.2121882 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2121882 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
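A compact sketch of plain Gaussian process regression with an RBF kernel, the building block behind the projection pursuit model above; the projection step (dimension expansion) and the additive structure are omitted, and all data below are synthetic:

    import numpy as np

    def rbf(A, B, ell=1.0):
        # squared-exponential kernel between the row vectors of A and B
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ell ** 2)

    rng = np.random.default_rng(3)
    X = rng.uniform(-3.0, 3.0, size=(30, 1))
    y = np.sin(X[:, 0]) + rng.normal(0.0, 0.05, 30)
    Xs = np.linspace(-3.0, 3.0, 5)[:, None]         # test inputs

    K = rbf(X, X) + 0.05 ** 2 * np.eye(30)           # kernel matrix plus noise variance
    mean = rbf(Xs, X) @ np.linalg.solve(K, y)        # GP posterior mean
    print(np.abs(mean - np.sin(Xs[:, 0])).max())     # small test error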
Handle: RePEc:taf:uiiexx:v:55:y:2023:i:9:p:901-911 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2149906_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Mahboubeh Madadi Author-X-Name-First: Mahboubeh Author-X-Name-Last: Madadi Author-Name: Mohammadhossein Heydari Author-X-Name-First: Mohammadhossein Author-X-Name-Last: Heydari Author-Name: Lisa Maillart Author-X-Name-First: Lisa Author-X-Name-Last: Maillart Author-Name: Richard Cassady Author-X-Name-First: Richard Author-X-Name-Last: Cassady Author-Name: Shengfan Zhang Author-X-Name-First: Shengfan Author-X-Name-Last: Zhang Title: Erlang loss systems with shortest idle server first service discipline: Maintenance considerations Abstract: We consider a variation of an Erlang loss system in which jobs are routed to servers according to the Shortest Idle Server First service discipline. Specifically, we consider a system in which idle servers are arranged in a stack; servers are returned to the top of the stack upon service completion; and arriving jobs are assigned to the server currently at the top of the stack. When busy, servers accumulate age and incur an age-dependent operating cost. For such systems, we (i) formulate a continuous-time Markov chain model to characterize the system’s transient behavior, and (ii) develop maintenance policies consisting of two possible actions: server group replacement and stack inversion. The stack inversion may be performed at any time prior to group replacement to achieve a more evenly distributed utilization among servers. We develop an optimization model to determine the optimal inversion and replacement times so as to minimize the long-run expected cost rate. Because the model is nonlinear and non-convex, we develop a set of algorithms to solve for the optimal replacement and inversion times. Lastly, we establish a lower bound for the inversion cost threshold below which it is optimal to invert the stack of servers before their replacement. Journal: IISE Transactions Pages: 1008-1021 Issue: 10 Volume: 55 Year: 2023 Month: 10 X-DOI: 10.1080/24725854.2022.2149906 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2149906 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:10:p:1008-1021 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2140367_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Kati Moug Author-X-Name-First: Kati Author-X-Name-Last: Moug Author-Name: Huiwen Jia Author-X-Name-First: Huiwen Author-X-Name-Last: Jia Author-Name: Siqian Shen Author-X-Name-First: Siqian Author-X-Name-Last: Shen Title: A shared-mobility-based framework for evacuation planning and operations under forecast uncertainty Abstract: To meet the evacuation needs of carless populations who need personalized assistance to evacuate safely, in this article we propose a ridesharing-based evacuation program that recruits volunteer drivers before a disaster strikes, and then matches volunteer drivers with evacuees once demand is realized. We optimize resource planning and evacuation operations under uncertain spatiotemporal demand, and construct a two-stage stochastic mixed-integer program to ensure high demand fulfillment rates.
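A toy sample-average sketch of the two-stage structure just described, as a reading aid: recruit x volunteer drivers now, observe random demand, and pay a per-evacuee penalty for unmet demand. All cost figures and the demand distribution below are hypothetical:

    import numpy as np

    rng = np.random.default_rng(8)
    demand = rng.poisson(40, size=5000)              # sampled demand scenarios
    recruit_cost, penalty = 1.0, 5.0                 # hypothetical unit costs

    def total_cost(x):
        # first-stage recruiting cost plus expected second-stage shortfall penalty
        return recruit_cost * x + penalty * np.maximum(demand - x, 0).mean()

    best = min(range(0, 80), key=total_cost)
    print(best, round(total_cost(best), 2))          # near the 0.8-quantile of demand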
We consider three formulations to improve the number of evacuees served, by minimizing an expected penalty cost, imposing a probabilistic constraint, and enforcing a constraint on the conditional value at risk of the total number of unserved evacuees, respectively. We discuss the benefits and disadvantages of the different risk measures used in the three formulations, given certain carless population sizes and the variety of evacuation modes available. We also develop a heuristic approach to provide quick, dynamic and conservative solutions. We demonstrate the performance of our approaches using five different networks of varying sizes based on regions of Charleston County, South Carolina, an area that experienced a mandatory evacuation order during Hurricane Florence, and utilize real demographic data and hourly traffic count data to estimate the demand distribution. Journal: IISE Transactions Pages: 971-984 Issue: 10 Volume: 55 Year: 2023 Month: 10 X-DOI: 10.1080/24725854.2022.2140367 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2140367 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:10:p:971-984 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2147606_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Lien Wang Author-X-Name-First: Lien Author-X-Name-Last: Wang Author-Name: Erik Demeulemeester Author-X-Name-First: Erik Author-X-Name-Last: Demeulemeester Title: Simulation optimization in healthcare resource planning: A literature review Abstract: In healthcare, the planning and the management of resources are challenging as there are always many complex and stochastic factors in both demand and supply. Simulation Optimization (SO) that combines simulation analysis and optimization techniques is well suited for solving complicated, stochastic, and mathematically intractable decision problems. In order to comprehensively unveil the degree to which SO has been used to solve healthcare resource planning problems, this article reviews the academic articles published until 2021 and categorizes them into multiple classification fields that are related to either problem perspectives (i.e., healthcare services, planning decisions, and objectives) or methodology perspectives (i.e., SO approaches and applications). We also examine the relations between the individual fields. We find that emergency care services are the most applied domain of SO, and that discrete-event simulation and random search methods (especially genetic algorithms) are the most frequently used methods. The literature classification can help researchers quickly learn this research area and identify the publications of interest. Finally, we identify major trends, insights and conclusions that deserve special attention when studying this area. We suggest many avenues for further research that provide opportunities for expanding existing methodologies and for narrowing the gap between theory and practice. Journal: IISE Transactions Pages: 985-1007 Issue: 10 Volume: 55 Year: 2023 Month: 10 X-DOI: 10.1080/24725854.2022.2147606 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2147606 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
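For the conditional value-at-risk measure named in the evacuation-planning abstract above, one common empirical estimator is the mean of the worst (1 - alpha) fraction of scenario outcomes; the sketch below applies it to simulated counts of unserved evacuees:

    import numpy as np

    def cvar(samples, alpha=0.95):
        # empirical CVaR: average of the scenarios at or beyond the alpha-quantile
        x = np.sort(np.asarray(samples))
        tail = x[int(np.ceil(alpha * x.size)) - 1:]
        return tail.mean()

    rng = np.random.default_rng(4)
    unserved = rng.poisson(8, size=10_000)           # hypothetical scenario outcomes
    print(cvar(unserved, alpha=0.95))                # tail-average unserved demand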
Handle: RePEc:taf:uiiexx:v:55:y:2023:i:10:p:985-1007 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2162168_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Xiaole Chen Author-X-Name-First: Xiaole Author-X-Name-Last: Chen Author-Name: Ke Fu Author-X-Name-First: Ke Author-X-Name-Last: Fu Author-Name: Yanli Tang Author-X-Name-First: Yanli Author-X-Name-Last: Tang Title: Impact of customer bounded rationality on on-demand service platforms Abstract: We consider an on-demand platform connecting independent agents and delay-sensitive boundedly rational customers. By adopting a queueing-game framework, we analyze the equilibrium joining behaviors of customers and agents, the platform’s optimal pricing strategy, and the resulting profit and social welfare. Moreover, we investigate how the bounded rationality level affects these results. We obtain the following main insights. First, when customers are irrational, the positive externality between the agents and the customers becomes weak. In other words, increasing the same number of agents attracts fewer boundedly rational customers than fully rational ones. Second, we distinguish two types of equilibria in terms of agent participation: one in which all or none of the agents participate and the other in which it is possible that a fraction of the agents may participate. We show that in the latter case, a social planner and a platform can easily align their goals since they are affected by the bounded rationality in the same direction. We attribute the insight to the interactions among the cross-side effect, bounded rationality, and the pooling effect embedded in a queueing system. Next, we characterize the platform’s optimal pricing decisions and prove that the increase in the bounded rationality level does not always lead to higher optimal prices and wages. Finally, we examine the agent side bounded rationality and the social welfare maximizing decisions in the extended models. Journal: IISE Transactions Pages: 1049-1061 Issue: 10 Volume: 55 Year: 2023 Month: 10 X-DOI: 10.1080/24725854.2022.2162168 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2162168 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:10:p:1049-1061 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2151672_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Rupal Mandania Author-X-Name-First: Rupal Author-X-Name-Last: Mandania Author-Name: Fernando S. Oliveira Author-X-Name-First: Fernando S. Author-X-Name-Last: Oliveira Title: Dynamic pricing of regulated field services using reinforcement learning Abstract: Resource flexibility and dynamic pricing are effective strategies in mitigating uncertainties in production systems; however, they have yet to be explored in relation to the improvement of field operations services. We investigate the value of dynamic pricing and flexible allocation of resources in the field service operations of a regulated monopoly providing two services: installations (paid-for) and maintenance (free). 
We study the conditions under which the company can improve service quality and the profitability of field services by introducing dynamic pricing for installations and the joint management of the resources allocated to paid-for (with a relatively stationary demand) and free (with seasonal demand) services when there is an interaction between quality constraints (lead time) and the flexibility of resources (overtime workers at extra cost). We formalize this problem as a contextual multi-armed bandit problem to make pricing decisions for the installation services. A bandit algorithm can find the near-optimal policy for joint management of the two services independently of the shape of the unobservable demand function. The results show that (i) dynamic pricing and resource management increase profitability; (ii) regulation of the service window is needed to maintain quality; (iii) under certain conditions, dynamic pricing of installation services can decrease the maintenance lead time; (iv) underestimation of demand is more detrimental to profit contribution than overestimation. Journal: IISE Transactions Pages: 1022-1034 Issue: 10 Volume: 55 Year: 2023 Month: 10 X-DOI: 10.1080/24725854.2022.2151672 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2151672 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:10:p:1022-1034 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2169417_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Li Xiao Author-X-Name-First: Li Author-X-Name-Last: Xiao Author-Name: Zuo-jun Max Shen Author-X-Name-First: Zuo-jun Max Author-X-Name-Last: Shen Title: Efficiency of the carpooling service: Customer waiting and driver utilization Abstract: The carpooling service, which is the sharing of journeys heading in similar directions, is expected to be more efficient. In this article, we study the impact of providing a carpooling service on customer waiting time and driver utilization in an on-demand service platform. We build an M/G/N queueing model to approximate the dynamics of the on-demand service platform that provides two services: a standard service and a carpooling service. We find that two factors influence the efficiency of the carpooling service: the source of customers who use the carpooling service, and the length of a normalized detour. If the carpooling service attracts customers who have not used this on-demand service before, i.e., the market size increases, then both the customer waiting time and driver utilization increase when more customers use the carpooling service. If the carpooling service primarily attracts customers who have already used the standard on-demand service, i.e., customers switch from the standard service to the carpooling service, then the impact of the carpooling service depends on the normalized detour. It is possible that customer waiting time and driver utilization first increase and then decrease as customers switch from the standard service to the carpooling service. Journal: IISE Transactions Pages: 1062-1074 Issue: 10 Volume: 55 Year: 2023 Month: 10 X-DOI: 10.1080/24725854.2023.2169417 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2169417 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
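A stripped-down, context-free epsilon-greedy bandit over a discrete price grid, as a reading aid for the field-services pricing abstract above (the article uses a contextual variant); the demand curve below is hypothetical and unknown to the learner:

    import numpy as np

    rng = np.random.default_rng(5)
    prices = np.array([1.0, 1.5, 2.0, 2.5])

    def profit(p):
        # hypothetical Bernoulli demand: purchase probability falls with price
        return p * rng.binomial(1, max(0.0, 1.1 - 0.35 * p))

    n = np.zeros(4)
    value = np.zeros(4)
    for t in range(5000):
        # explore with probability 0.1, otherwise exploit the best estimate
        a = rng.integers(4) if rng.random() < 0.1 else int(value.argmax())
        r = profit(prices[a])
        n[a] += 1
        value[a] += (r - value[a]) / n[a]            # incremental mean update
    print(prices[int(value.argmax())])               # learned near-optimal price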
Handle: RePEc:taf:uiiexx:v:55:y:2023:i:10:p:1062-1074 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2159590_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Akshay Mutha Author-X-Name-First: Akshay Author-X-Name-Last: Mutha Author-Name: Saurabh Bansal Author-X-Name-First: Saurabh Author-X-Name-Last: Bansal Title: Determining assortments of used products for B2B transactions in reverse supply chain Abstract: The reverse supply chain – in which used products are collected from end-users and remanufactured for resale – includes a consolidator, broker, and remanufacturer. The used products entering the reverse supply chain tend to be of heterogeneous quality levels and require different amounts of remanufacturing effort and cost. The consolidator collects these products from the end-users and sells them in unsorted form to the broker; the broker sorts them into different grades and offers the graded units to the remanufacturer. This article analyzes the business-to-business transaction between the broker and the remanufacturer. We determine the broker’s optimal assortment in terms of the optimal number of grades, their expected remanufacturing costs, and selling prices. We show that: (i) the optimal grades created by the broker and their prices depend on the distribution of the remanufacturing costs for the used products acquired by the broker and the remanufacturer’s demand distribution; and (ii) the expected remanufacturing costs and selling prices for each grade follow a specific ordering and constitute a convex hull. Comparative statics analyses show the malleability of the grading process to the changes in the broker’s exogenous parameters. A numerical study using data from prior research on reverse channels shows that the optimal grading policy results in higher profits for both the broker and the remanufacturer as compared with a heuristic policy for creating grades. Journal: IISE Transactions Pages: 1035-1048 Issue: 10 Volume: 55 Year: 2023 Month: 10 X-DOI: 10.1080/24725854.2022.2159590 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2159590 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:10:p:1035-1048 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2116133_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Shancong Mou Author-X-Name-First: Shancong Author-X-Name-Last: Mou Author-Name: Michael Biehler Author-X-Name-First: Michael Author-X-Name-Last: Biehler Author-Name: Xiaowei Yue Author-X-Name-First: Xiaowei Author-X-Name-Last: Yue Author-Name: Jeffrey H. Hunt Author-X-Name-First: Jeffrey H. Author-X-Name-Last: Hunt Author-Name: Jianjun Shi Author-X-Name-First: Jianjun Author-X-Name-Last: Shi Title: SPAC: Sparse sensor placement-based adaptive control for high precision fuselage assembly Abstract: Optimal shape control is important in fuselage assembly processes. To achieve high precision assembly, shape adjustment is necessary for fuselages with initial shape deviations. The state-of-the-art methods accomplish this goal by using actuators whose forces are derived from a model based on the mechanical properties of the designed fuselage. These methods have a significant limitation: they do not consider the model mismatch due to mechanical property changes induced by the shape deviation of an individual incoming fuselage.
The model mismatch will result in control performance deterioration. To improve the performance, the shape control model needs to be updated based on the online feedback information from the fuselage shape adjustment. However, due to the large size of the fuselage surface, highly accurate inline measurements are expensive or even infeasible to obtain in practice. To resolve these issues, this article proposes a Sparse sensor Placement-based Adaptive Control methodology. In this method, the model is updated based on the sparse sensor measurement of the response signal. The reconstruction performance under a minor model mismatch is quantified theoretically. Its performance has been evaluated based on real data of a half-to-half fuselage assembly process, and the proposed method improves the control performance with acceptable sensing effort. Journal: IISE Transactions Pages: 1133-1143 Issue: 11 Volume: 55 Year: 2023 Month: 11 X-DOI: 10.1080/24725854.2022.2116133 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2116133 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:11:p:1133-1143 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2133196_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Xinye Hao Author-X-Name-First: Xinye Author-X-Name-Last: Hao Author-Name: Changchun Liu Author-X-Name-First: Changchun Author-X-Name-Last: Liu Author-Name: Maoqi Liu Author-X-Name-First: Maoqi Author-X-Name-Last: Liu Author-Name: Canrong Zhang Author-X-Name-First: Canrong Author-X-Name-Last: Zhang Author-Name: Li Zheng Author-X-Name-First: Li Author-X-Name-Last: Zheng Title: Solving a real-world large-scale cutting stock problem: A clustering-assignment-based model Abstract: This study stems from a furniture factory producing products by cutting and splicing operations. We formulate the problem as an assignment-based model, which reflects the problem accurately but is intractable due to the large number of binary variables and severe symmetry in the solution space. To overcome these drawbacks, we reformulate the problem into a clustering-assignment-based model (and its variation), which provides lower (upper) bounds of the assignment-based model. According to the classification of the board types, we categorize the instances into three cases: Narrow Board, Wide Board, and Mixed Board. We prove that the clustering-assignment-based model can obtain the optimal schedule for the original problem in the Narrow Board case. Based on the lower and upper bounds, we develop an iterative heuristic to solve instances in the other two cases. We use industrial data to evaluate the performance of the iterative heuristic. On average, our algorithm can generate high-quality solutions within a minute. Compared with the greedy rounding heuristic, our algorithm has obvious advantages in terms of computational efficiency and stability. From the perspective of the total costs and practical metrics, our method reduces costs by 20.90% and cutting waste by 4.97%, compared with the factory’s method. Journal: IISE Transactions Pages: 1160-1173 Issue: 11 Volume: 55 Year: 2023 Month: 11 X-DOI: 10.1080/24725854.2022.2133196 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2133196 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
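As a baseline for the cutting stock abstract above, the classic first-fit-decreasing heuristic for one-dimensional stock cutting; this is a textbook greedy method, not the article's clustering-assignment model, and the piece lengths below are illustrative:

    def first_fit_decreasing(pieces, board_length):
        # open a new board only when no existing board can hold the piece
        boards = []                            # remaining capacity per opened board
        for piece in sorted(pieces, reverse=True):
            for i, rest in enumerate(boards):
                if piece <= rest:
                    boards[i] -= piece
                    break
            else:
                boards.append(board_length - piece)
        return len(boards)

    print(first_fit_decreasing([7, 5, 4, 4, 3, 2, 2], board_length=10))  # 3 boards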
Handle: RePEc:taf:uiiexx:v:55:y:2023:i:11:p:1160-1173 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2124470_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Andreas Hottenrott Author-X-Name-First: Andreas Author-X-Name-Last: Hottenrott Author-Name: Maximilian Schiffer Author-X-Name-First: Maximilian Author-X-Name-Last: Schiffer Author-Name: Martin Grunow Author-X-Name-First: Martin Author-X-Name-Last: Grunow Title: Flexible assembly layouts in smart manufacturing: An impact assessment for the automotive industry Abstract: Currently, automotive manufacturers take the concept of flexible manufacturing to an unprecedented level, considering the deployment of flexible assembly layouts (also known as matrix production systems) in which automated guided vehicles transport bodyworks on individual routes between assembly stations. To this end, a methodological framework that allows the assessment of the impact of technology choice decisions between traditional assembly lines and flexible assembly layouts as well as the impact of different flexibility levers and configurations is necessary for optimal decision support. We provide such a framework based on analytical insights and a chance-constrained problem formulation. We further show how this problem formulation can be solved optimally using a tailored branch-and-price algorithm. Our results quantify the impact of different flexibility configurations in flexible assembly layouts. We show that flexibility enables a simultaneous improvement in worker utilization and work in progress, resolving a classic trade-off in manufacturing systems. Moreover, we find that worker utilization and output are up to 30% higher in flexible assembly layouts compared with line assembly layouts. Further, flexible assembly layouts prove to be especially beneficial during the ramp-up of vehicles with alternative drivetrain technologies, such as the current transition to electric vehicles. Journal: IISE Transactions Pages: 1144-1159 Issue: 11 Volume: 55 Year: 2023 Month: 11 X-DOI: 10.1080/24725854.2022.2124470 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2124470 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:11:p:1144-1159 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2125602_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Tangfan Xiahou Author-X-Name-First: Tangfan Author-X-Name-Last: Xiahou Author-Name: Yu Liu Author-X-Name-First: Yu Author-X-Name-Last: Liu Author-Name: Zhiguo Zeng Author-X-Name-First: Zhiguo Author-X-Name-Last: Zeng Author-Name: Muchen Wu Author-X-Name-First: Muchen Author-X-Name-Last: Wu Title: Remaining useful life prediction with imprecise observations: An interval particle filtering approach Abstract: Particle Filtering (PF) has been widely used for predicting Remaining Useful Life (RUL) of industrial products, especially for those with nonlinear degradation behavior and non-Gaussian noise. Traditional PF is a recursive Bayesian filtering framework that updates the posterior probability density function of RULs when new observation data become available. In engineering practice, due to the limited accuracy of monitoring/inspection techniques, the observation data available for PF are inevitably imprecise and often need to be treated as interval data. 
In this article, a novel Interval Particle Filtering (IPF) approach is proposed to effectively leverage such interval-valued observations for RUL prediction. The IPF is built on three pillars: (i) an interval contractor that mitigates the error explosion problem when the epistemic uncertainty in the interval-valued observation data is propagated; (ii) an interval intersection method for constructing the likelihood function based on the interval observation data; and (iii) an interval kernel smoothing algorithm for estimating the unknown parameters in the IPF. The developed methods are applied to the interval-valued capacity data of batteries and fatigue crack growth data of railroad tracks. The results demonstrate that the developed methods can improve the performance of RUL predictions based on interval observation data. Journal: IISE Transactions Pages: 1075-1090 Issue: 11 Volume: 55 Year: 2023 Month: 11 X-DOI: 10.1080/24725854.2022.2125602 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2125602 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:11:p:1075-1090 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2148780_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Wanshan Li Author-X-Name-First: Wanshan Author-X-Name-Last: Li Author-Name: Chen Zhang Author-X-Name-First: Chen Author-X-Name-Last: Zhang Title: A Markov-switching hidden heterogeneous network autoregressive model for multivariate time series data with multimodality Abstract: Multivariate networked time series data are ubiquitous in many applications, where multiple variables of interest are sequentially collected over time for each vertex as multivariate time series. Data from different vertices may have heterogeneous influence on each other through the network topology. These time series often exhibit multimodal marginal distributions, due to complex system variations. In this article, we propose a novel approach for such data modeling. In particular, we assume that each vertex has multiple latent states and exhibits state-switching behaviors according to a Markov process. The multivariate time series of each vertex depend on a defined latent effect variable influenced by both its own latent state and the latent effects of its neighbors through a heterogeneous network autoregressive model according to the network topology. Furthermore, the influence of some exogenous covariates on the time series can also be incorporated in the model. Some model properties are discussed, and a variational EM algorithm is proposed for model parameter estimation and state inference. Extensive synthetic experiments and a real-world case study demonstrate the effectiveness and applicability of the proposed model. Journal: IISE Transactions Pages: 1118-1132 Issue: 11 Volume: 55 Year: 2023 Month: 11 X-DOI: 10.1080/24725854.2022.2148780 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2148780 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
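A minimal bootstrap particle filter for a scalar degradation state, as background for the interval particle filtering abstract above; this sketch collapses interval observations to midpoints, whereas the article keeps the intervals via an interval contractor and an intersection-based likelihood, and the degradation model below is hypothetical:

    import numpy as np

    rng = np.random.default_rng(6)
    N, T = 1000, 50
    particles = rng.normal(0.0, 0.1, N)       # initial degradation level
    state = 0.0
    for t in range(T):
        state += 0.1 + rng.normal(0, 0.02)               # true degradation growth
        obs = state + rng.normal(0, 0.05)                # midpoint of the interval
        particles += 0.1 + rng.normal(0, 0.02, N)        # propagate particles
        w = np.exp(-0.5 * ((obs - particles) / 0.05) ** 2)
        w /= w.sum()                                     # normalized importance weights
        particles = particles[rng.choice(N, size=N, p=w)]  # multinomial resampling
    print(round(particles.mean(), 2), round(state, 2))     # filtered vs. true state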
Handle: RePEc:taf:uiiexx:v:55:y:2023:i:11:p:1118-1132 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2148779_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Alexander Krall Author-X-Name-First: Alexander Author-X-Name-Last: Krall Author-Name: Daniel Finke Author-X-Name-First: Daniel Author-X-Name-Last: Finke Author-Name: Hui Yang Author-X-Name-First: Hui Author-X-Name-Last: Yang Title: Virtual sensing network for statistical process monitoring Abstract: Physical sensing is increasingly implemented in modern industries to improve information visibility, which generates real-time signals that are spatially distributed and temporally varying. These signals are often nonlinear and nonstationary in the high-dimensional space, posing significant challenges to the monitoring and control of complex systems. Therefore, this article presents a new “virtual sensing” approach that places imaginary sensors at different locations in signaling trajectories to monitor evolving dynamics within the signal space. First, we propose self-organizing principles to investigate distributional and topological features of nonlinear signals for optimal placement of imaginary sensors. Second, we design and develop the network model to represent real-time flux dynamics among these virtual sensors, in which each node represents a virtual sensor, while edges signify signal flux among sensors. Third, the network model, together with the notion of transition uncertainty, enables a fine-grained view into system dynamics and leads to a new Flux Rank (FR) algorithm for process monitoring. Experimental results show that the network FR methodology not only delineates real-time flux patterns in nonlinear signals, but also effectively monitors spatiotemporal changes in the dynamics of nonlinear dynamical systems. Journal: IISE Transactions Pages: 1103-1117 Issue: 11 Volume: 55 Year: 2023 Month: 11 X-DOI: 10.1080/24725854.2022.2148779 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2148779 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:11:p:1103-1117 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2152140_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Chengyu Tao Author-X-Name-First: Chengyu Author-X-Name-Last: Tao Author-Name: Juan Du Author-X-Name-First: Juan Author-X-Name-Last: Du Author-Name: Tzyy-Shuh Chang Author-X-Name-First: Tzyy-Shuh Author-X-Name-Last: Chang Title: Anomaly detection for fabricated artifact by using unstructured 3D point cloud data Abstract: 3D point cloud data has been widely used in surface quality inspection to measure fabricated artifacts, allowing the high density and precision of measurements and providing quantitative 3D geometric characteristics for anomalies. Unlike structured 3D point cloud data, unstructured 3D point cloud data can capture the surface geometry completely. However, anomaly detection by using unstructured 3D point cloud data is more challenging, due to the nonexistence of global coordinate ordering and the difficulty of mathematically modeling anomalies and discriminating outliers. To deal with these challenges, this article formulates the anomaly detection problem in a probabilistic framework.
By categorizing points into three types, i.e., reference surface point, anomaly point, and outlier point, a novel Bayesian network is proposed to model the unstructured 3D point cloud data. The variational expectation-maximization algorithm is used to estimate parameters and make inference on the unknown types of points. Both simulation and real case studies demonstrate the accuracy and robustness of the proposed method. Journal: IISE Transactions Pages: 1174-1186 Issue: 11 Volume: 55 Year: 2023 Month: 11 X-DOI: 10.1080/24725854.2022.2152140 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2152140 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:11:p:1174-1186 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2147607_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Paweł Marcin Kozyra Author-X-Name-First: Paweł Marcin Author-X-Name-Last: Kozyra Title: An efficient algorithm for the reliability evaluation of multistate flow networks under budget constraints Abstract: Many real-world systems can be modeled by multi-state flow networks (MFNs), and their reliability evaluation is central to the design and control of these systems. Considering the cost constraint makes the problem of reliability evaluation of an MFN more realistic. For a given demand value d and a given cost limit c, the reliability of an MFN at level (d, c) is the probability of transmitting at least d units from the source node to the sink node through the network within the cost of c. This article addresses the so-called (d, c)-MC problem, i.e., the problem of reliability evaluation of an MFN with cost constraint in terms of minimal cuts. It presents new results on which a new algorithm is based. This algorithm finds all (d, c)-MC candidates without duplicates and verifies them more efficiently than existing ones. The complexity results for this algorithm and an example of its use are provided. Finally, numerical experiments with R language implementations of the presented algorithm and other competitive algorithms are considered. Both the time complexity analysis and numerical experiments demonstrate the presented algorithm to be more efficient than the fastest competing algorithms in 81.41–85.11% of cases. Journal: IISE Transactions Pages: 1091-1102 Issue: 11 Volume: 55 Year: 2023 Month: 11 X-DOI: 10.1080/24725854.2022.2147607 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2147607 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
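As a contrast to the exact minimal-cut approach above, a crude Monte Carlo estimate of two-terminal reliability for a small binary-state network; the bridge topology and edge reliability below are illustrative, and the article's multi-state, budget-constrained setting is not reproduced:

    import random

    EDGES = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]   # a small bridge network

    def connected(up_edges, s=0, t=3):
        # depth-first search over the edges that are currently "up"
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for a, b in up_edges:
                for v in ((b,) if a == u else (a,) if b == u else ()):
                    if v not in seen:
                        seen.add(v)
                        stack.append(v)
        return t in seen

    def reliability(p=0.9, trials=20_000):
        hits = sum(connected([e for e in EDGES if random.random() < p])
                   for _ in range(trials))
        return hits / trials

    random.seed(7)
    print(round(reliability(), 3))                     # roughly 0.98 for p = 0.9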
Handle: RePEc:taf:uiiexx:v:55:y:2023:i:11:p:1091-1102 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2153948_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Kartik Kulkarni Author-X-Name-First: Kartik Author-X-Name-Last: Kulkarni Author-Name: Manish Bansal Author-X-Name-First: Manish Author-X-Name-Last: Bansal Title: Exact algorithms for multi-module capacitated lot-sizing problem, and its generalizations with two-echelons and piecewise concave production costs Abstract: We study new generalizations of the classic capacitated lot-sizing problem with concave production (or transportation), holding, and subcontracting cost functions in which the total production (or transportation) capacity in each time period is the summation of capacities of a subset of n available modules (machines or vehicles) of different capacities. We refer to this problem as Multi-module Capacitated Lot-Sizing Problem without or with Subcontracting, and denote it by MCLS or MCLS-S, respectively. These are NP-hard problems if n is a part of the input and polynomially solvable for n = 1. In this article we address an open question: Does there exist a polynomial time exact algorithm for solving the MCLS or MCLS-S with fixed n≥2? We present exact fixed-parameter tractable (polynomial) algorithms that solve MCLS and MCLS-S in O(T^(2n+3)) time for a given n≥2. This generalizes the algorithm of Atamtürk and Hochbaum [Management Science 47(8):1081–1100, 2001] for MCLS-S with n = 1. We also present exact algorithms for two generalizations of the MCLS and MCLS-S: (a) a lot-sizing problem with piecewise concave production cost functions (denoted by LS-PC-S) that takes O(T^(2m+3)) time, where m is the number of breakpoints in these functions, and (b) two-echelon MCLS that takes O(T^(4n+4)) time. The former reduces the run time of the algorithm of Koca et al. [INFORMS J. on Computing 26(4):767–779, 2014] for LS-PC-S by 93.6%, and the latter generalizes the algorithm of van Hoesel et al. [Management Science 51(11):1706–1719, 2005] for two-echelon MCLS with n = 1. We perform computational experiments to evaluate the efficiency of our algorithms for MCLS and LS-PC-S and their parallel computing implementation, in comparison to Gurobi 9.1. The results of these experiments show that our algorithms are computationally efficient and stable. Our algorithm for MCLS-S addresses another open question related to the existence of a polynomial time algorithm for optimizing a linear function over the n-mixing set (a generalization of the well-known 1-mixing set). Journal: IISE Transactions Pages: 1187-1202 Issue: 12 Volume: 55 Year: 2023 Month: 12 X-DOI: 10.1080/24725854.2022.2153948 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2153948 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
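For orientation, the classic O(T^2) Wagner-Whitin dynamic program for uncapacitated lot sizing, the single-module, uncapacitated ancestor of the recursions generalized above; the demands and costs below are illustrative:

    def wagner_whitin(demand, setup, hold):
        # F[t] = min over s <= t of F[s-1] + setup + cost of holding demand of
        # periods s..t from a single order placed in period s
        T = len(demand)
        F = [0.0] + [float("inf")] * T
        for t in range(1, T + 1):
            for s in range(1, t + 1):
                carry = sum(demand[j] * (j - (s - 1)) * hold for j in range(s - 1, t))
                F[t] = min(F[t], F[s - 1] + setup + carry)
        return F[T]

    print(wagner_whitin(demand=[20, 50, 10, 50, 50], setup=100.0, hold=1.0))  # 320.0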
Handle: RePEc:taf:uiiexx:v:55:y:2023:i:12:p:1187-1202 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2176951_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Dun Li Author-X-Name-First: Dun Author-X-Name-Last: Li Author-Name: Bangdong Zhi Author-X-Name-First: Bangdong Author-X-Name-Last: Zhi Author-Name: Tobias Schoenherr Author-X-Name-First: Tobias Author-X-Name-Last: Schoenherr Author-Name: Xiaojun Wang Author-X-Name-First: Xiaojun Author-X-Name-Last: Wang Title: Developing capabilities for supply chain resilience in a post-COVID world: A machine learning-based thematic analysis Abstract: This study examines the past, present, and future of Supply Chain Resilience (SCR) research in the context of COVID-19. Specifically, a total of 1717 papers in the SCR field are classified into 11 thematic clusters, which are subsequently verified by a supervised machine learning approach. Each cluster is then analyzed within the context of COVID-19, leading to the identification of three associated capabilities (i.e., interconnectedness, transformability, and sharing) on which firms should focus to build a more resilient supply chain in the post-COVID world. The derived insights offer invaluable guidance not only for practicing managers, but also for scholars as they design their future research projects related to SCR for greatest impact. Journal: IISE Transactions Pages: 1256-1276 Issue: 12 Volume: 55 Year: 2023 Month: 12 X-DOI: 10.1080/24725854.2023.2176951 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2176951 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:12:p:1256-1276 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2175939_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Andy Alexander Author-X-Name-First: Andy Author-X-Name-Last: Alexander Author-Name: Yanjun Li Author-X-Name-First: Yanjun Author-X-Name-Last: Li Author-Name: Robert Plante Author-X-Name-First: Robert Author-X-Name-Last: Plante Title: Comparative study of two menus of contracts for outsourcing the maintenance function of a process having a linear failure rate Abstract: One of the most frequently outsourced functions in business is maintenance. A model for the design of an incentive-based maintenance outsourcing contract has been proposed under the assumption of an information-symmetric system, wherein both the manufacturer and the contractor have full knowledge of the parameters needed for contract design. In this article, we relax the assumption and extend this model under an information-asymmetric system, wherein the manufacturer knows only limited information about one of the parameters, the contractor’s expected repair cost. We use two approaches to determine a menu of contracts, where the menu of contracts is often used in the presence of limited information. The first approach maximizes the manufacturer’s expected profit, and the second approach maximizes the system’s expected profit. Our analytical and numerical comparisons of the two approaches show that the second approach is not only robust with respect to the estimated probability of the expected repair cost, but also more acceptable to the manufacturer than the first approach when a Black Swan event is possible, which occurs with small probability in a context where the contractor’s expected repair cost is large. 
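For the linear failure rate in the abstract above, under a minimal-repair (nonhomogeneous Poisson process) reading the expected number of failures over a contract horizon [0, T] is the integral of the intensity lambda(t) = a + b*t, i.e., a*T + b*T^2/2; the parameters below are illustrative:

    def expected_failures(a, b, horizon):
        # integral of lambda(t) = a + b*t over [0, horizon]
        return a * horizon + b * horizon ** 2 / 2

    # e.g., baseline rate a = 0.1 failures/year, wear-out slope b = 0.05 per year^2:
    print(expected_failures(a=0.1, b=0.05, horizon=4.0))   # 0.8 expected failures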
Journal: IISE Transactions Pages: 1230-1241 Issue: 12 Volume: 55 Year: 2023 Month: 12 X-DOI: 10.1080/24725854.2023.2175939 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2175939 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:12:p:1230-1241 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2163436_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Dali Zhang Author-X-Name-First: Dali Author-X-Name-Last: Zhang Author-Name: Lingyun Ji Author-X-Name-First: Lingyun Author-X-Name-Last: Ji Author-Name: Sixiang Zhao Author-X-Name-First: Sixiang Author-X-Name-Last: Zhao Author-Name: Lizhi Wang Author-X-Name-First: Lizhi Author-X-Name-Last: Wang Title: Variable-sample method for the computation of stochastic Nash equilibrium Abstract: This article proposes a variable-sample method for the computation of a stochastic stable Nash equilibrium, in which the objective functions are approximated, in each iteration, by the sample average approximation with different sample sizes. We start by investigating the contraction mapping properties under the variable-sample framework. Under some moderate conditions, it is shown that the accumulation points attained from the algorithm satisfy the first-order equilibrium conditions with probability one. Moreover, we use the asymptotic unbiasedness condition to prove the convergence of the accumulation points of the algorithm to the set of fixed points and prove the finite termination property of the algorithm. We also verify that the algorithm converges to the equilibrium even if the optimization problems in each iteration are solved inexactly. In the numerical tests, we compare the accuracy and precision errors of the estimators under different sample-size schedules, with respect to sampling load and computational time. The results validate the effectiveness of the algorithm. Journal: IISE Transactions Pages: 1217-1229 Issue: 12 Volume: 55 Year: 2023 Month: 12 X-DOI: 10.1080/24725854.2022.2163436 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2163436 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:12:p:1217-1229 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2179139_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Lu Zhen Author-X-Name-First: Lu Author-X-Name-Last: Zhen Author-Name: Xueting He Author-X-Name-First: Xueting Author-X-Name-Last: He Author-Name: Shuaian Wang Author-X-Name-First: Shuaian Author-X-Name-Last: Wang Author-Name: Jingwen Wu Author-X-Name-First: Jingwen Author-X-Name-Last: Wu Author-Name: Kai Liu Author-X-Name-First: Kai Author-X-Name-Last: Liu Title: Vehicle routing for customized on-demand bus services Abstract: This study investigates a variant of the Vehicle Routing Problem (VRP) for customized on-demand bus service platforms. In this problem, the platform plans customized bus routes upon receiving a batch of orders released by passengers and informs the passengers of the planned pick-up and drop-off locations. The related decision process takes into account passenger-side time-window requirements and walking limits, as well as the availability and capacities of various types of buses.
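A minimal sketch of the variable-sample idea in the Zhang, Ji, Zhao, and Wang record above: each iteration redraws a larger sample, replaces the players' objectives by sample averages, and takes a best-response step. The two-player quadratic game, its parameters, and the sample-size schedule are invented for illustration and are not from the paper.

import numpy as np

rng = np.random.default_rng(0)
theta = np.array([1.0, 2.0])                # illustrative cost parameters, one per player

def best_response(i, x_other, xi):
    # Closed-form minimizer of mean((x_i - theta_i * xi)**2) + 0.5 * x_i * x_other.
    return theta[i] * xi.mean() - 0.25 * x_other

x = np.zeros(2)
for k in range(1, 21):
    n_k = 50 * k                            # growing sample-size schedule
    xi = rng.normal(1.0, 0.3, n_k)          # fresh samples in each iteration
    x = np.array([best_response(0, x[1], xi),
                  best_response(1, x[0], xi)])
print(x)                                    # tends to the true equilibrium (8/15, 28/15)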
A mixed-integer linear programming model of this new VRP variant with floating targets (passengers) is formulated. To solve the model efficiently, a solution method is developed that combines the branch-and-bound and column generation algorithms and also includes embedded acceleration techniques such as the multi-labeling algorithm. Experiments based on real data from Dalian, China are conducted to validate the effectiveness of the proposed model and the efficiency of the algorithm; the small-scale experimental results demonstrate that our algorithm obtains optimal solutions in the majority of instances. Additionally, sensitivity analysis is conducted, and model extensions are investigated, to provide customized bus service platform operators with potentially useful managerial insights; for example, a platform need not establish as many candidate stops as possible; a wider allowable walking distance does not necessarily bring customers earlier arrivals at their destinations; and, in our real-world case, more mini-buses than large buses should be deployed. Moreover, a rolling-horizon context and zoning strategies are also investigated by extending the proposed methodology. Journal: IISE Transactions Pages: 1277-1294 Issue: 12 Volume: 55 Year: 2023 Month: 12 X-DOI: 10.1080/24725854.2023.2179139 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2179139 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:12:p:1277-1294 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2156003_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Zeyu Liu Author-X-Name-First: Zeyu Author-X-Name-Last: Liu Author-Name: Mohammad Ramshani Author-X-Name-First: Mohammad Author-X-Name-Last: Ramshani Author-Name: Anahita Khojandi Author-X-Name-First: Anahita Author-X-Name-Last: Khojandi Author-Name: Xueping Li Author-X-Name-First: Xueping Author-X-Name-Last: Li Title: Optimal utilization of integrated photovoltaic battery systems: An application in the residential sector Abstract: PhotoVoltaic (PV) panels have been increasingly favored by residential users in recent years, due to noticeable reductions in their costs. PV systems become more effective when combined with battery packages, which store the energy produced by the PV systems for later use. This way, PV systems can provide flexible and reliable service even when the peak demand for electricity misaligns with the window of most efficient PV power generation. In this study, we develop an integrated charge/discharge scheme for lithium-ion batteries to maximize their total expected benefit. Specifically, we develop a Markov Decision Process (MDP) model to maximize the battery utilization, subject to uncertainty in weather conditions and electricity demands, while accounting for battery degradation due to calendar aging and charging/discharging cycles. Due to the extremely slow rate of degradation in batteries, the state space of the MDP is excessively large. To solve the problem efficiently, we establish structural properties of the MDP and exploit them, improving the backward induction algorithm with the established structural properties. We further improve the Deep Q-network (DQN) algorithm by proposing two novel algorithms, the augmented DQN (ADQN) algorithm and the stochastic augmented DQN (SADQN) algorithm.
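For the integrated photovoltaic battery record above (Liu, Ramshani, Khojandi, and Li), the following is a bare-bones sketch of finite-horizon backward induction on a toy deterministic battery model; the state space, price curve, and rewards are invented, and none of the paper's structural properties, weather uncertainty, or degradation modeling is included.

import numpy as np

# Toy battery MDP: charge level 0..C; actions: discharge (-1), idle (0), charge (+1).
C, T = 4, 24
price = 5 + 3 * np.sin(np.arange(T) / T * 2 * np.pi)   # made-up daily price curve
V = np.zeros((T + 1, C + 1))                           # V[t, s]: value from period t on
policy = np.zeros((T, C + 1), dtype=int)
for t in range(T - 1, -1, -1):
    for s in range(C + 1):
        best_val, best_a = -np.inf, 0
        for a in (-1, 0, 1):
            s2 = s + a
            if not 0 <= s2 <= C:
                continue                               # infeasible charge level
            reward = -a * price[t]                     # charging costs money, discharging earns it
            if reward + V[t + 1, s2] > best_val:
                best_val, best_a = reward + V[t + 1, s2], a
        V[t, s], policy[t, s] = best_val, best_a
print(V[0, C // 2], policy[0])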
Computational results indicate that ADQN and SADQN solve the problem much faster than DQN, with better solution quality. The ADQN and SADQN algorithms provide flexibility for practitioners in real-world implementations. Journal: IISE Transactions Pages: 1203-1216 Issue: 12 Volume: 55 Year: 2023 Month: 12 X-DOI: 10.1080/24725854.2022.2156003 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2156003 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:12:p:1203-1216 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2183531_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Seung Min Baik Author-X-Name-First: Seung Min Author-X-Name-Last: Baik Author-Name: Young Myoung Ko Author-X-Name-First: Young Myoung Author-X-Name-Last: Ko Title: QoS-aware energy-efficient workload routing and server speed control policy in data centers: A robust queueing theoretic approach Abstract: Operating cloud service infrastructures requires high energy efficiency while ensuring a satisfactory service level. Motivated by data centers, we consider a workload routing and server speed control policy applicable to the system operating under fluctuating demands. Dynamic control algorithms are generally more energy-efficient than static ones. However, they often require frequent information exchanges between routers and servers, making data center managers hesitant to deploy them. This study presents a static routing and server speed control policy that can achieve energy efficiency similar to that of a dynamic algorithm while eliminating the need for frequent communication among resources. We take a robust queueing-theoretic approach to the response-time constraints underlying the Quality of Service (QoS) conditions. Each server is modeled as a G/G/1 processor-sharing queue, and uncertainty sets define the domain of the stochastic primitives. We derive an approximate upper bound on sojourn times from the uncertainty sets and develop an approximate sojourn-time quantile estimation method for QoS. Numerical experiments confirm that the proposed static policy offers solutions competitive with the dynamic algorithm. Journal: IISE Transactions Pages: 1242-1255 Issue: 12 Volume: 55 Year: 2023 Month: 12 X-DOI: 10.1080/24725854.2023.2183531 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2183531 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:55:y:2023:i:12:p:1242-1255 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2199813_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Jihoon Chung Author-X-Name-First: Jihoon Author-X-Name-Last: Chung Author-Name: Bo Shen Author-X-Name-First: Bo Author-X-Name-Last: Shen Author-Name: Zhenyu (James) Kong Author-X-Name-First: Zhenyu (James) Author-X-Name-Last: Kong Title: A novel sparse Bayesian learning and its application to fault diagnosis for multistation assembly systems Abstract: This article addresses the problem of fault diagnosis in multistation assembly systems. Fault diagnosis identifies, from dimensional measurements, the process faults that cause excessive dimensional variation of the product.
For such problems, the challenge is solving an underdetermined system caused by a common phenomenon in practice: the number of measurements is smaller than the number of process errors. To address this challenge, this article attempts to solve the following two problems: (i) how to utilize the temporal correlation in the time series data of each process error and (ii) how to apply prior knowledge regarding which process errors are more likely to be process faults. A novel sparse Bayesian learning method is proposed to achieve the above objectives. The method consists of three hierarchical layers. The first layer has a parameterized prior distribution that exploits the temporal correlation of each process error. The second and third layers provide the prior distribution representing the prior knowledge of process faults. Since the posterior distributions of process faults are intractable, this article derives approximate posterior distributions via Variational Bayes inference. Numerical and simulation case studies using an actual autobody assembly process are performed to demonstrate the effectiveness of the proposed method. Journal: IISE Transactions Pages: 84-97 Issue: 1 Volume: 56 Year: 2024 Month: 1 X-DOI: 10.1080/24725854.2023.2199813 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2199813 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:1:p:84-97 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2168321_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Alberto Loffredo Author-X-Name-First: Alberto Author-X-Name-Last: Loffredo Author-Name: Nicla Frigerio Author-X-Name-First: Nicla Author-X-Name-Last: Frigerio Author-Name: Ettore Lanzarone Author-X-Name-First: Ettore Author-X-Name-Last: Lanzarone Author-Name: Andrea Matta Author-X-Name-First: Andrea Author-X-Name-Last: Matta Title: Energy-efficient control in multi-stage production lines with parallel machine workstations and production constraints Abstract: Enhancing the sustainability of manufacturing processes is a growing priority in industry. One of the most widely supported strategies for increasing the energy efficiency of manufacturing activities is controlling the machine state toward the optimal trade-off between production rate and energy demand. This method is referred to as energy-efficient control, and it switches machines into a standby state with low power demand. In this article, multi-stage production lines composed of identical parallel machine workstations are the systems of interest, and the energy-efficient control policies make use of buffer-level information. Each machine can be switched off instantaneously and switched on with a stochastic startup time. The objective is to minimize energy demand while satisfying production constraints. This article proposes a novel approach to solve the problem at hand. An exact model for the two-stage system is formulated as a Markov Decision Process and solved with a linear programming methodology. A novel technique, namely the Backward-Recursive approach, is used to address systems with more than two stages. Numerical experiments confirm the effectiveness of the proposed approach.
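The Loffredo, Frigerio, Lanzarone, and Matta record above solves an exact MDP through linear programming. As a generic illustration of that route (not the paper's two-stage production-line formulation), the sketch below solves a tiny discounted-cost MDP with the standard LP; the transition matrices and energy costs are invented.

import numpy as np
from scipy.optimize import linprog

gamma = 0.95
# Toy MDP: 2 states, 2 actions (e.g., run vs. standby); P[a][s] is a transition row.
P = np.array([[[0.9, 0.1], [0.6, 0.4]],    # action 0: run
              [[0.2, 0.8], [0.1, 0.9]]])   # action 1: standby
c = np.array([[2.0, 1.5],                  # energy cost of action 0 in each state
              [0.5, 0.2]])                 # energy cost of action 1 in each state
nA, nS = c.shape
# LP for cost minimization: maximize sum_s v(s) subject to
# v(s) <= c(s,a) + gamma * sum_{s'} P(s'|s,a) v(s') for every state-action pair.
A_ub, b_ub = [], []
for a in range(nA):
    for s in range(nS):
        row = -gamma * P[a, s]
        row[s] += 1.0
        A_ub.append(row)
        b_ub.append(c[a, s])
res = linprog(-np.ones(nS), A_ub=np.array(A_ub), b_ub=b_ub,
              bounds=[(None, None)] * nS)
print(res.x)   # optimal discounted cost-to-go per state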
Journal: IISE Transactions Pages: 69-83 Issue: 1 Volume: 56 Year: 2024 Month: 1 X-DOI: 10.1080/24725854.2023.2168321 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2168321 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:1:p:69-83 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2171518_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Kaizong Bai Author-X-Name-First: Kaizong Author-X-Name-Last: Bai Author-Name: Jian Li Author-X-Name-First: Jian Author-X-Name-Last: Li Author-Name: Dong Ding Author-X-Name-First: Dong Author-X-Name-Last: Ding Title: Two approaches to monitoring multivariate Poisson counts: Simple and accurate Abstract: We consider the monitoring of multivariate correlated count data, which have many applications in practice. Although there are quite a few methods for the statistical process control of Multivariate Poisson (MP) counts, they are either too complicated or too simple to provide a satisfactory tool for efficient online monitoring. In addition, they mostly focus on only the mean vector of multivariate counts and ignore the correlations among them. In this article, we adopt the multivariate Poisson distribution with a two-way covariance structure for modeling MP counts, which has marginal Poisson distributions in each dimension and allows for pairwise correlations. Based on this, we develop two control charts to simultaneously monitor the mean vector and covariance matrix of MP counts. The first chart enjoys a simple charting statistic and is computationally fast, whereas the second one is accurate and provides a gold standard for monitoring MP counts. We also give recommendations on choice between them. Numerical simulations have demonstrated the advantages of the proposed two charts, and in non-Poisson cases we also test their robustness against underdispersion and overdispersion that are encountered often in count data. Journal: IISE Transactions Pages: 29-42 Issue: 1 Volume: 56 Year: 2024 Month: 1 X-DOI: 10.1080/24725854.2023.2171518 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2171518 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:1:p:29-42 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2152913_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Petros Papadopoulos Author-X-Name-First: Petros Author-X-Name-Last: Papadopoulos Author-Name: David W. Coit Author-X-Name-First: David W. Author-X-Name-Last: Coit Author-Name: Ahmed Aziz Ezzat Author-X-Name-First: Ahmed Author-X-Name-Last: Aziz Ezzat Title: STOCHOS: Stochastic opportunistic maintenance scheduling for offshore wind farms Abstract: Despite the promising outlook, the numerous economic and environmental benefits of offshore wind energy are still compromised by its high Operations and Maintenance (O&M) expenditures. On one hand, offshore-specific challenges such as site remoteness, harsh weather, transportation requirements, and production losses, significantly inflate the O&M costs relative to land-based wind farms. On the other hand, the uncertainties in weather conditions, asset degradation, and electricity prices largely constrain the farm operator’s ability to identify the time windows at which maintenance is possible, let alone optimal. 
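The Bai, Li, and Ding record above monitors multivariate Poisson counts having Poisson marginals with pairwise correlation. A standard way to generate such counts is the common-shock construction sketched below, paired here with a naive standardized-mean check; this is illustrative only, not the paper's two-way covariance model or its two control charts.

import numpy as np

rng = np.random.default_rng(1)

def mp_counts(lam, lam0, n):
    # Common shock: X_i = Y_i + Z, Y_i ~ Poisson(lam_i - lam0), shared Z ~ Poisson(lam0),
    # so each X_i is Poisson(lam_i) and cov(X_i, X_j) = lam0 for i != j.
    z = rng.poisson(lam0, n)
    y = rng.poisson(np.asarray(lam) - lam0, (n, len(lam)))
    return y + z[:, None]

lam = np.array([4.0, 6.0])
x = mp_counts(lam, 1.0, 1000)
zscore = (x.mean(axis=0) - lam) / np.sqrt(lam / len(x))   # naive Shewhart-type statistic
print(zscore, np.abs(zscore) > 3)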
In response, we propose STOCHOS, short for the stochastic holistic opportunistic scheduler—a maintenance scheduling approach tailored to address the unique challenges and uncertainties in offshore wind farms. Given probabilistic forecasts of key environmental and operational parameters, STOCHOS optimally schedules the offshore maintenance tasks by harnessing the opportunities that arise due to favorable weather conditions, on-site maintenance resources, and maximal operating revenues. STOCHOS is formulated as a two-stage stochastic mixed-integer linear program, which we solve using a scenario-based rolling horizon algorithm that aligns with industrial practice. Tested on real-world data from the U.S. North Atlantic, where several offshore wind farms are in development, STOCHOS demonstrates considerable improvements relative to prevalent maintenance benchmarks, across various O&M metrics, including total cost, downtime, resource utilization, and maintenance interruptions. Journal: IISE Transactions Pages: 1-15 Issue: 1 Volume: 56 Year: 2024 Month: 1 X-DOI: 10.1080/24725854.2022.2152913 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2152913 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:1:p:1-15 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2184004_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Michael Biehler Author-X-Name-First: Michael Author-X-Name-Last: Biehler Author-Name: Zhen Zhong Author-X-Name-First: Zhen Author-X-Name-Last: Zhong Author-Name: Jianjun Shi Author-X-Name-First: Jianjun Author-X-Name-Last: Shi Title: SAGE: Stealthy Attack Generation in cyber-physical systems Abstract: Cyber-physical systems (CPSs) have been increasingly attacked by hackers. CPSs are especially vulnerable to attackers that have full knowledge of the system's configuration. Therefore, novel anomaly detection algorithms in the presence of a knowledgeable adversary need to be developed. However, this research is still in its infancy, due to limited attack data availability and test beds. By proposing a holistic attack modeling framework, we aim to show the vulnerability of existing detection algorithms and provide a basis for novel sensor-based cyber-attack detection. Stealthy Attack GEneration (SAGE) for CPSs serves as a tool for cyber-risk assessment of existing systems and detection algorithms for practitioners and researchers alike. Stealthy attacks are characterized by malicious injections into the CPS through input, output, or both, which produce bounded changes in the detection residue. By using the SAGE framework, we generate stealthy attacks to achieve three objectives: (i) Maximize damage, (ii) Avoid detection, and (iii) Minimize the attack cost. Additionally, an attacker needs to adhere to the physical principles in a CPS (objective (iv)). The goal of SAGE is to model worst-case attacks, where we assume limited information asymmetries between attackers and defenders (e.g., insider knowledge of the attacker). Those worst-case attacks are the hardest to detect, but are common in practice and allow an understanding of the maximum conceivable damage. We propose an efficient solution procedure for the novel SAGE optimization problem. The SAGE framework is illustrated in three case studies.
Those case studies serve as modeling guidelines for the development of novel attack detection algorithms and comprehensive cyber-physical risk assessment of CPSs. The results show that SAGE attacks can cause severe damage to a CPS, while only changing the input control signals minimally. This avoids detection and keeps the cost of an attack low. This highlights the need for more advanced detection algorithms and novel research in cyber-physical security. Journal: IISE Transactions Pages: 54-68 Issue: 1 Volume: 56 Year: 2024 Month: 1 X-DOI: 10.1080/24725854.2023.2184004 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2184004 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:1:p:54-68 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2183440_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Hung Yi Lee Author-X-Name-First: Hung Yi Author-X-Name-Last: Lee Author-Name: Mostafa Reisi Gahrooei Author-X-Name-First: Mostafa Author-X-Name-Last: Reisi Gahrooei Author-Name: Hongcheng Liu Author-X-Name-First: Hongcheng Author-X-Name-Last: Liu Author-Name: Massimo Pacella Author-X-Name-First: Massimo Author-X-Name-Last: Pacella Title: Robust tensor-on-tensor regression for multidimensional data modeling Abstract: In recent years, high-dimensional data, such as waveform signals and images, have become ubiquitous. This type of data is often represented by multiway arrays or tensors. Several statistical models, including tensor regression, have been developed for such tensor data. However, these models are sensitive to the presence of arbitrary outliers within the tensors. To address the issue, this article proposes a Robust Tensor-On-Tensor (RTOT) regression approach, which can model high-dimensional data when the data are corrupted by outliers. Through several simulations and case studies, we evaluate the performance of the proposed method. The results reveal the advantage of the RTOT over some benchmarks in the literature in terms of estimation error. A Python implementation is available at https://github.com/Reisi-Lab/RTOT.git. Journal: IISE Transactions Pages: 43-53 Issue: 1 Volume: 56 Year: 2024 Month: 1 X-DOI: 10.1080/24725854.2023.2183440 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2183440 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:1:p:43-53 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2202226_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Xiaokun Chang Author-X-Name-First: Xiaokun Author-X-Name-Last: Chang Author-Name: Ming Dong Author-X-Name-First: Ming Author-X-Name-Last: Dong Title: Product–machine qualification using a process flexibility strategy considering capacity loss Abstract: Due to uncertainties in future demands regarding product variety, manufacturers must maintain a balance between capacity investment costs and future demand satisfaction levels, which makes capacity management challenging for high-value asset industries. Product–machine qualification, which is an essential process in configuring machines to produce new products, plays an important role in allocating appropriate capacity to different products.
Motivated by the challenges presented by this real-world trade-off problem, this article develops effective “product–machine” production structures by introducing new process flexibility strategies. The product–machine qualification process is complex and very time-consuming, which generally causes processing machines to experience massive capacity loss. The more flexible a machine is, the more product–machine qualification steps are required and the more capacity will be lost. Traditional process flexibility strategies, such as the long chain strategy, cannot be used directly for product–machine qualification decisions because they all assume that the capacities of machines are constant and that only product demands vary. This article extends traditional process flexibility theory by introducing capacity loss into the model. The proposed strategies for balancing process flexibility and capacity loss have proven effective in satisfying uncertain demands through cost-effective capacity–demand matching structures. Journal: IISE Transactions Pages: 98-113 Issue: 1 Volume: 56 Year: 2024 Month: 1 X-DOI: 10.1080/24725854.2023.2202226 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2202226 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:1:p:98-113 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2157912_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Xubo Yue Author-X-Name-First: Xubo Author-X-Name-Last: Yue Author-Name: Raed Al Kontar Author-X-Name-First: Raed Al Author-X-Name-Last: Kontar Author-Name: Ana María Estrada Gómez Author-X-Name-First: Ana María Estrada Author-X-Name-Last: Gómez Title: Federated data analytics: A study on linear models Abstract: As edge devices become increasingly powerful, data analytics are gradually moving from a centralized to a decentralized regime where edge computing resources are exploited to process more of the data locally. This regime of analytics has been coined Federated Data Analytics (FDA). Despite the recent success stories of FDA, most literature focuses exclusively on deep neural networks. In this work, we take a step back to develop an FDA treatment for one of the most fundamental statistical models: linear regression. Our treatment is built upon hierarchical modeling that allows borrowing strength across multiple groups. To this end, we propose two federated hierarchical model structures that provide a shared representation across devices to facilitate information sharing. Notably, our proposed frameworks are capable of providing uncertainty quantification, variable selection, hypothesis testing, and fast adaptation to new unseen data. We validate our methods on a range of real-life applications, including condition monitoring for aircraft engines. The results show that our FDA treatment for linear models can serve as a competitive benchmark model for the future development of federated algorithms. Journal: IISE Transactions Pages: 16-28 Issue: 1 Volume: 56 Year: 2024 Month: 1 X-DOI: 10.1080/24725854.2022.2157912 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2157912 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
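The Yue, Kontar, and Estrada Gómez record above develops a federated treatment of linear models. As a bare-bones illustration of federated linear regression (not the paper's hierarchical, shared-representation frameworks), each simulated device below shares only sufficient statistics with the server; the data-generating process and ridge term are invented.

import numpy as np

rng = np.random.default_rng(2)
true_beta = np.array([1.0, -2.0, 0.5])

def local_stats(X, y):
    # Devices share only X'X and X'y, never raw observations.
    return X.T @ X, X.T @ y

stats = []
for _ in range(5):                                   # five edge devices
    X = rng.normal(size=(100, 3))
    y = X @ (true_beta + rng.normal(0, 0.1, 3)) + rng.normal(0, 0.5, 100)
    stats.append(local_stats(X, y))

XtX = sum(s[0] for s in stats)
Xty = sum(s[1] for s in stats)
beta_global = np.linalg.solve(XtX + 1e-6 * np.eye(3), Xty)   # pooled normal equations
print(beta_global)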
Handle: RePEc:taf:uiiexx:v:56:y:2024:i:1:p:16-28 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2193835_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Oylum Şeker Author-X-Name-First: Oylum Author-X-Name-Last: Şeker Author-Name: Merve Bodur Author-X-Name-First: Merve Author-X-Name-Last: Bodur Author-Name: Hamed Pouya Author-X-Name-First: Hamed Author-X-Name-Last: Pouya Title: Routing and wavelength assignment with protection: A quadratic unconstrained binary optimization approach enabled by Digital Annealer technology Abstract: Routing and wavelength assignment with protection is an important problem in telecommunications. Given an optical network and incoming connection requests, a commonly studied variant of the problem aims to grant a maximum number of requests by assigning lightpaths with minimum network resource usage, while ensuring the provided services remain functional in the case of a single-link failure through dedicated protection paths. We consider a version where alternative lightpaths for requests are assumed to be given as a precomputed set and show that it is NP-hard. We formulate the problem as an Integer Programming (IP) model and also use it as a foundation to develop a Quadratic Unconstrained Binary Optimization (QUBO) model. We present necessary and sufficient conditions on objective function parameters to prioritize the request-granting objective over wavelength-link usage for both models, and a sufficient condition ensuring the exactness of the QUBO model. Moreover, we implement a problem-specific branch-and-cut algorithm for the IP model and employ a new quantum-inspired technology, Digital Annealer (DA), for the QUBO model. We conduct computational experiments on a large suite of nontrivial instances to assess the efficiency and efficacy of all of these approaches as well as two problem-specific heuristics. Although the objective penalty coefficient values that guarantee the exactness of the QUBO model were outside the acceptable range for DA, it always yielded feasible solutions even with smaller values in practice. The results show that the emerging DA technology outperforms the considered GUROBI-based techniques, mostly finding significantly better or equally good solutions within two minutes versus a two-hour run time, whereas the problem-specific heuristics fail to be competitive. Journal: IISE Transactions Pages: 156-171 Issue: 2 Volume: 56 Year: 2024 Month: 2 X-DOI: 10.1080/24725854.2023.2193835 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2193835 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:2:p:156-171 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2183532_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Miao Yu Author-X-Name-First: Miao Author-X-Name-Last: Yu Author-Name: Jie Xu Author-X-Name-First: Jie Author-X-Name-Last: Xu Author-Name: Jiafu Tang Author-X-Name-First: Jiafu Author-X-Name-Last: Tang Title: Managing customer contact centers with delay announcements and automated service Abstract: This article presents a study of the queueing system resulting from the service of customers in a generic customer contact center that has both automated service and traditional human agent service.
Contact center managers would prefer that customers use the provided automated service, to reduce both the average customer queuing time and the staffing costs of human service agents. However, forcing customers to use automated service may lead to customer dissatisfaction. In this study, we propose to increase the use of automated service by using delay announcements to guide customers who would otherwise have chosen agent service toward the automated service. We present a stochastic optimization formulation to determine the optimal staffing level and delay announcement policy, with the objective of minimizing staffing cost and customer balking and reneging penalties. Closed-form solutions are derived using a fluid approximation, and the asymptotic optimality of the solutions is established. The obtained optimal policies are demonstrated by numerical experiments. A key managerial insight is that, with automated service, the optimal delay announcement policy no longer satisfies the well-known information-consistent balking property that holds for a contact center using delay announcements without automated service. Instead, there is a new equilibrium delay policy that uses an “extreme positive bias” to reduce the length of the queue for human agent service to zero. Journal: IISE Transactions Pages: 115-127 Issue: 2 Volume: 56 Year: 2024 Month: 2 X-DOI: 10.1080/24725854.2023.2183532 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2183532 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:2:p:115-127 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2215843_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Mingda Liu Author-X-Name-First: Mingda Author-X-Name-Last: Liu Author-Name: Yanlu Zhao Author-X-Name-First: Yanlu Author-X-Name-Last: Zhao Author-Name: Xiaolei Xie Author-X-Name-First: Xiaolei Author-X-Name-Last: Xie Title: Continuity-skill-restricted scheduling and routing problem: Formulation, optimization and implications Abstract: As the aging population grows, the demand for long-term continuously Attended Home Healthcare (AHH) services has increased significantly in recent years. AHH services are beneficial since they not only alleviate the pressure on hospital resources, but also provide more convenient care for patients. However, how to reasonably assign patients to doctors and arrange their visiting sequences is still a challenging task due to various complex factors such as heterogeneous doctors, skill-matching requirements, continuity of care, and uncertain travel and service times. Motivated by a practical problem faced by an AHH service provider, we investigate a deterministic continuity-skill-restricted scheduling and routing problem (CSRP) and its stochastic variant (SCSRP) to address these operational challenges. The problem is formulated as a heterogeneous site-dependent and consistent vehicle routing problem with time windows. However, the literature offers neither a compact model nor a practically implementable exact algorithm for such a complicated problem. To fill this gap, we propose a branch-price-and-cut algorithm to solve the CSRP and a discrete-approximation-method adaptation for the SCSRP. Extensive numerical experiments and a real case study verify the effectiveness and efficiency of the proposed algorithms and provide managerial insights for AHH service providers to achieve better performance.
Journal: IISE Transactions Pages: 201-220 Issue: 2 Volume: 56 Year: 2024 Month: 2 X-DOI: 10.1080/24725854.2023.2215843 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2215843 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:2:p:201-220 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2192250_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Jiayi Lin Author-X-Name-First: Jiayi Author-X-Name-Last: Lin Author-Name: Hrayer Aprahamian Author-X-Name-First: Hrayer Author-X-Name-Last: Aprahamian Author-Name: Hadi El-Amine Author-X-Name-First: Hadi Author-X-Name-Last: El-Amine Title: Optimal unlabeled set partitioning with application to risk-based quarantine policies Abstract: We consider the problem of partitioning a set of items into unlabeled subsets so as to optimize an additive objective, i.e., the objective function value of a partition is equal to the sum of the contributions of its subsets. Under an arbitrary objective function, this family of problems is known to be an NP-complete combinatorial optimization problem. We study this problem under a broad family of objective functions characterized by elementary symmetric polynomials, which are the “building blocks” of symmetric functions. By analyzing a continuous relaxation of the problem, we identify conditions that enable the use of a reformulation technique in which the set partitioning problem is cast as a more tractable network flow problem solvable in polynomial time. We show that a number of results from the literature arise as special cases of our proposed framework, highlighting its generality. We demonstrate the usefulness of the developed methodology through a novel and timely application of quarantining heterogeneous populations in an optimal manner. Our case study on real COVID-19 data reveals significant benefits over conventional measures in terms of both spread mitigation and economic impact, underscoring the importance of data-driven policies. Journal: IISE Transactions Pages: 143-155 Issue: 2 Volume: 56 Year: 2024 Month: 2 X-DOI: 10.1080/24725854.2023.2192250 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2192250 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:2:p:143-155 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2191668_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Su Li Author-X-Name-First: Su Author-X-Name-Last: Li Author-Name: Hrayer Aprahamian Author-X-Name-First: Hrayer Author-X-Name-Last: Aprahamian Title: An optimization-based framework to minimize the spread of diseases in social networks with heterogeneous nodes Abstract: We provide an optimization-based framework that identifies social separation policies to mitigate the spread of diseases in social networks. The study considers subject-specific risk information, social structure, and the negative economic impact of imposing restrictions. We first analyze a simplified variation of the problem consisting of a single period and a specific social structure to establish key structural properties and construct a tailored globally-convergent solution scheme. We extend this solution scheme to heuristically solve the more general model with multiple time periods and any social structure.
We use real COVID-19 data to illustrate the benefits of the proposed framework. Our results reveal that the optimized policies substantially reduce the spread of the disease when compared with existing benchmark algorithms and policies that are based on a single risk factor. In addition, we utilize the considered framework to identify important subject attributes when distributing Personal Protective Equipment. Moreover, the results reveal that the optimized policies continue to outperform under a more realistic setting. Our results underscore the importance of considering subject-specific information when designing policies and provide high-level data-driven observations to policy-makers that are tailored to the specific risk profile of the population that is being served. Journal: IISE Transactions Pages: 128-142 Issue: 2 Volume: 56 Year: 2024 Month: 2 X-DOI: 10.1080/24725854.2023.2191668 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2191668 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:2:p:128-142 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2209622_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Pengcheng Dong Author-X-Name-First: Pengcheng Author-X-Name-Last: Dong Author-Name: Yang Liu Author-X-Name-First: Yang Author-X-Name-Last: Liu Author-Name: Qingchun Meng Author-X-Name-First: Qingchun Author-X-Name-Last: Meng Author-Name: Guodong Yu Author-X-Name-First: Guodong Author-X-Name-Last: Yu Title: Reliable network design considering endogenous customers’ choices under probabilistic arc failures Abstract: We consider a reliable network design where the facility location and road ban decisions are jointly optimized to minimize the total expected costs and risks against uncertain exogenous arc-dependent failures and customers’ endogenous interactions. We formulate endogenous customers’ choices by incorporating an expressive measure, Cumulative Prospect Theory, into the widely used multinomial logit model. Additionally, we use a well-known downside measure, Conditional Value-at-Risk, for the designer to control integrated risks from exogenous arc failures and endogenous customers’ choices. Accordingly, a mixed-integer trilinear program is developed. To solve the model, we first transform it into a class of mixed-integer linear programs based on the separable structure. Then, a customized branch-and-Benders-cut algorithm is proposed to solve these mixed-integer linear programs. We devise a set of novel valid inequalities based on the endogenous transition of choice probability to strengthen the weak relaxation of the master problem. Moreover, by combining grouping and dual-iteration-shrinking techniques for solving the sub-problems, the branch-and-Benders-cut algorithm converges within 30 seconds, and the whole problem can be solved within 15 minutes, for a network with 90 nodes and 149 road segments. Some managerial insights for balancing risk and cost are finally extracted. Journal: IISE Transactions Pages: 186-200 Issue: 2 Volume: 56 Year: 2024 Month: 2 X-DOI: 10.1080/24725854.2023.2209622 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2209622 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
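The Dong, Liu, Meng, and Yu record above uses Conditional Value-at-Risk as its downside risk measure. Below is a minimal sample-based CVaR computation under the standard definition (mean loss in the worst (1 - alpha) tail); the loss distribution is invented and nothing here reflects the paper's trilinear program or Benders scheme.

import numpy as np

def cvar(losses, alpha=0.95):
    # Sample CVaR: average of the losses at or beyond the alpha-quantile (the VaR).
    losses = np.asarray(losses)
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

rng = np.random.default_rng(3)
sample = rng.lognormal(mean=0.0, sigma=0.8, size=10_000)   # illustrative losses
print(np.quantile(sample, 0.95), cvar(sample, 0.95))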
Handle: RePEc:taf:uiiexx:v:56:y:2024:i:2:p:186-200 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2203748_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20230119T200553 git hash: 724830af20 Author-Name: Negin Enayaty Ahangar Author-X-Name-First: Negin Author-X-Name-Last: Enayaty Ahangar Author-Name: Kelly M. Sullivan Author-X-Name-First: Kelly M. Author-X-Name-Last: Sullivan Author-Name: Shantih M. Spanton Author-X-Name-First: Shantih M. Author-X-Name-Last: Spanton Author-Name: Yu Wang Author-X-Name-First: Yu Author-X-Name-Last: Wang Title: Algorithms and complexity results for the single-cut routing problem in a rail yard Abstract: Rail yards are facilities that play a critical role in the freight rail transportation system. A number of essential rail yard functions require moving connected “cuts” of rail cars through the rail yard from one position to another. In a congested rail yard, it is therefore of interest to identify a shortest route for such a move. With this motivation, we contribute theory and algorithms for the Single-Cut Routing Problem (SCRP) in a rail yard. Two key features distinguish SCRP from a traditional shortest path problem: (i) the entity occupies space on the network; and (ii) track geometry further restricts route selection. To establish the difficulty of solving SCRP in general, we prove NP-completeness of a related problem that seeks to determine whether there is space in the rail yard network to position the entity in a given direction relative to a given anchor node. However, we then demonstrate this problem becomes polynomially solvable—and therefore, SCRP becomes polynomially solvable, too—for “Bounded Cycle Length” (BCL) yard networks. We formalize the resulting two-stage algorithm for BCL yard networks and validate our algorithm on a rail yard data set provided by the class I railroad CSX Transportation. Journal: IISE Transactions Pages: 172-185 Issue: 2 Volume: 56 Year: 2024 Month: 2 X-DOI: 10.1080/24725854.2023.2203748 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2203748 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:2:p:172-185 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2169418_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857 Author-Name: Daniel Kosmas Author-X-Name-First: Daniel Author-X-Name-Last: Kosmas Author-Name: Christina Melander Author-X-Name-First: Christina Author-X-Name-Last: Melander Author-Name: Emily Singerhouse Author-X-Name-First: Emily Author-X-Name-Last: Singerhouse Author-Name: Thomas C. Sharkey Author-X-Name-First: Thomas C. Author-X-Name-Last: Sharkey Author-Name: Kayse Lee Maass Author-X-Name-First: Kayse Lee Author-X-Name-Last: Maass Author-Name: Kelle Barrick Author-X-Name-First: Kelle Author-X-Name-Last: Barrick Author-Name: Lauren Martin Author-X-Name-First: Lauren Author-X-Name-Last: Martin Title: A transdisciplinary approach for generating synthetic but realistic domestic sex trafficking networks Abstract: One of the major challenges associated with applying Operations Research (OR) models to disrupting human trafficking networks is the limited amount of reliable data sources readily available for public use, since operations are intentionally hidden to prevent detection, and data from known operations are often incomplete. 
To help address this data gap, we propose a network generator for domestic sex trafficking networks by integrating OR concepts and qualitative research. Multiple sources regarding sex trafficking in the upper Midwest of the United States have been triangulated to ensure that networks produced by the generator are realistic, including law enforcement case file analysis, interviews with domain experts, and a survivor-centered advisory group with first-hand knowledge of sex trafficking. The output models the relationships between traffickers, so-called “bottoms”, and victims. This generator allows operations researchers to access realistic sex trafficking network structures in a responsible manner that does not disclose identifiable details of the people involved. We demonstrate the use of output networks in exploring policy recommendations from max flow network interdiction with restructuring. To do so, we propose a novel conceptualization of flow as the ability of a trafficker to control their victims. Our results show the importance of understanding how sex traffickers react to disruptions, especially in terms of recruiting new victims. Journal: IISE Transactions Pages: 340-354 Issue: 3 Volume: 56 Year: 2024 Month: 3 X-DOI: 10.1080/24725854.2023.2169418 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2169418 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:3:p:340-354 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2120223_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857 Author-Name: Yaren Bilge Kaya Author-X-Name-First: Yaren Bilge Author-X-Name-Last: Kaya Author-Name: Kayse Lee Maass Author-X-Name-First: Kayse Lee Author-X-Name-Last: Maass Author-Name: Geri L. Dimas Author-X-Name-First: Geri L. Author-X-Name-Last: Dimas Author-Name: Renata Konrad Author-X-Name-First: Renata Author-X-Name-Last: Konrad Author-Name: Andrew C. Trapp Author-X-Name-First: Andrew C. Author-X-Name-Last: Trapp Author-Name: Meredith Dank Author-X-Name-First: Meredith Author-X-Name-Last: Dank Title: Improving access to housing and supportive services for runaway and homeless youth: Reducing vulnerability to human trafficking in New York City Abstract: Recent estimates indicate that there are over a million runaway and homeless youth and young adults (RHY) in the United States (US). Exposure to trauma, violence, and substance abuse, coupled with a lack of community support services, puts homeless youth at high risk of being exploited and trafficked. Although access to safe housing and supportive services such as physical and mental healthcare is an effective response to the vulnerability of RHY towards being trafficked, the number of youth experiencing homelessness exceeds the capacity of available housing resources in most US communities. We undertake a RHY-informed, systematic, and data-driven approach to project the collective capacity required by service providers to adequately meet the needs of RHY in New York City, including those most at risk of being trafficked. Our approach involves an integer linear programming model that extends the multiple multidimensional knapsack problem and is informed by partnerships with key stakeholders. The mathematical model allows for time-dependent allocation and capacity expansion, while incorporating stochastic youth arrivals and length of stays, services provided in a periodic fashion, and service delivery time windows. 
Our RHY and service provider-centered approach is an important step toward meeting the actual, rather than presumed, survival needs of vulnerable youth. Journal: IISE Transactions Pages: 296-310 Issue: 3 Volume: 56 Year: 2024 Month: 3 X-DOI: 10.1080/24725854.2022.2120223 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2120223 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:3:p:296-310 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2173368_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857 Author-Name: Annibal Parracho Sant’Anna Author-X-Name-First: Annibal Parracho Author-X-Name-Last: Sant’Anna Author-Name: Luiz Octávio Gavião Author-X-Name-First: Luiz Octávio Author-X-Name-Last: Gavião Author-Name: Tiago Lezan Sant’Anna Author-X-Name-First: Tiago Lezan Author-X-Name-Last: Sant’Anna Title: Multi-criteria classification of reward collaboration proposals Abstract: This article develops a mechanism for automatically classifying rewarded collaboration proposals. The research’s purpose is to increase transparency in the rewarded collaboration process, thereby inviting more collaboration proposals, to aid in the fight against criminal organizations. The research focuses on critical facets of the public security system and of the organized crime in Brazil. Through rewarded collaboration, a new approach to plea bargaining is achieved that helps detect, disrupt, and ultimately dismantle illicit operations. This multi-criteria approach enables the consideration of the interests of detainees, the priorities of police institutions, and the perspective of the community. This approach results in the formation of a holistic understanding of the issue, taking into account the costs and benefits to society of punishing defendants whose guilt can be established. Composition of Probabilistic Preferences Trichotomic is the multi-criteria method employed to take imprecision into consideration while performing classification into predetermined classes. It enables the evaluation of each proposal independently. This boosts the system’s objectivity and consequently its attractiveness. Taking the interaction between the criteria into consideration, the analysis naturally applies to any number of evaluation criteria and individuals involved in the investigated crimes. Novel forms of interaction modeling are compared in practical instances. Journal: IISE Transactions Pages: 374-384 Issue: 3 Volume: 56 Year: 2024 Month: 3 X-DOI: 10.1080/24725854.2023.2173368 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2173368 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:3:p:374-384 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2254357_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857 Author-Name: Margret V. Bjarnadottir Author-X-Name-First: Margret V. 
Author-X-Name-Last: Bjarnadottir Author-Name: Siddharth Chandra Author-X-Name-First: Siddharth Author-X-Name-Last: Chandra Author-Name: Pengfei He Author-X-Name-First: Pengfei Author-X-Name-Last: He Author-Name: Greg Midgette Author-X-Name-First: Greg Author-X-Name-Last: Midgette Title: Analyzing illegal psychostimulant trafficking networks using noisy and sparse data Abstract: This article applies analytical approaches to map illegal psychostimulant (cocaine and methamphetamine) trafficking networks in the US using purity-adjusted price data from the System to Retrieve Information from Drug Evidence. We use two assumptions to build the network: (i) the purity-adjusted price is lower at the origin than at the destination and (ii) price perturbations are transmitted from origin to destination. We then adopt a two-step analytical approach: we formulate the data aggregation problem as an optimization problem, then construct an inferred network of connected states and examine its properties. We find, first, that the inferred cocaine network created from the optimally aggregated dataset explains 46% of the anecdotal evidence, compared with 28.4% for an over-aggregated and 14.5% for an under-aggregated dataset. Second, our network reveals a number of phenomena, some aligning with what is known and some previously unobserved. To demonstrate the applicability of our method, we compare our cocaine data analysis results with parallel analysis of methamphetamine data. These results likewise align with prior knowledge, but also present new insights. Our findings show that an optimally aggregated dataset can provide a more accurate picture of an illicit drug network than can suboptimally aggregated data. Journal: IISE Transactions Pages: 269-281 Issue: 3 Volume: 56 Year: 2024 Month: 3 X-DOI: 10.1080/24725854.2023.2254357 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2254357 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:3:p:269-281 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2113187_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857 Author-Name: Margaret Tobey Author-X-Name-First: Margaret Author-X-Name-Last: Tobey Author-Name: Ruoting Li Author-X-Name-First: Ruoting Author-X-Name-Last: Li Author-Name: Osman Y. Özaltın Author-X-Name-First: Osman Y. Author-X-Name-Last: Özaltın Author-Name: Maria E. Mayorga Author-X-Name-First: Maria E. Author-X-Name-Last: Mayorga Author-Name: Sherrie Caltagirone Author-X-Name-First: Sherrie Author-X-Name-Last: Caltagirone Title: Interpretable models for the automated detection of human trafficking in illicit massage businesses Abstract: Sexually oriented establishments across the United States often pose as massage businesses and force victim workers into a hybrid of sex and labor trafficking, simultaneously harming the legitimate massage industry. Stakeholders with varied goals and approaches to dismantling the illicit massage industry all report the need for multi-source data to clearly and transparently identify the worst offenders and highlight patterns in behaviors. We utilize findings from primary stakeholder interviews with law enforcement, regulatory bodies, legitimate massage practitioners, and subject-matter experts from nonprofit organizations to identify data sources and potential indicators of illicit massage businesses (IMBs).
We focus our analysis on data from open sources in Texas and Florida including customer reviews and business data from Yelp.com, the U.S. Census, and GIS files such as truck stop, highway, and military base locations. We build two interpretable prediction models, risk scores and optimal decision trees, to determine the risk that a given massage establishment is an IMB. The proposed multi-source data-based approach and interpretable models can be used by stakeholders at all levels to save time and resources, serve victim-workers, and support well informed regulatory efforts. Journal: IISE Transactions Pages: 311-324 Issue: 3 Volume: 56 Year: 2024 Month: 3 X-DOI: 10.1080/24725854.2022.2113187 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2113187 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:3:p:311-324 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2255643_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857 Author-Name: Emily C. Griffin Author-X-Name-First: Emily C. Author-X-Name-Last: Griffin Author-Name: Aaron Ferber Author-X-Name-First: Aaron Author-X-Name-Last: Ferber Author-Name: Lucas Lafferty Author-X-Name-First: Lucas Author-X-Name-Last: Lafferty Author-Name: Burcu B. Keskin Author-X-Name-First: Burcu B. Author-X-Name-Last: Keskin Author-Name: Bistra Dilkina Author-X-Name-First: Bistra Author-X-Name-Last: Dilkina Author-Name: Meredith Gore Author-X-Name-First: Meredith Author-X-Name-Last: Gore Title: Interdiction of wildlife trafficking supply chains: An analytical approach Abstract: Illicit Wildlife Trade (IWT) is a serious global crime that negatively impacts biodiversity, human health, national security, and economic development. Many flora and fauna are trafficked in different product forms. We investigate a network interdiction problem for wildlife trafficking and introduce a new model to tackle key challenges associated with IWT. Our model captures the interdiction problem faced by law enforcement impeding IWT on flight networks, though it can be extended to other types of transportation networks. We incorporate vital issues unique to IWT, including the need for training and difficulty recognizing illicit wildlife products, the impact of charismatic species and geopolitical differences, and the varying amounts of information and objectives traffickers may use when choosing transit routes. Additionally, we incorporate different detection probabilities at nodes and along arcs depending on law enforcement’s interdiction and training actions. We present solutions for several key IWT supply chains using realistic data from conservation research, seizure databases, and international reports. We compare our model to two benchmark models and highlight key features of the interdiction strategy. We discuss the implications of our models for combating IWT in practice and highlight critical areas of concern for stakeholders. Journal: IISE Transactions Pages: 355-373 Issue: 3 Volume: 56 Year: 2024 Month: 3 X-DOI: 10.1080/24725854.2023.2255643 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2255643 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
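The Griffin, Ferber, Lafferty, Keskin, Dilkina, and Gore record above studies interdiction of trafficking flows on flight networks. As a heavily simplified baseline (with none of the paper's detection probabilities, training decisions, or trafficker information levels), the sketch below brute-forces a budget-limited arc-removal interdiction against a max-flow adversary on an invented toy network.

import itertools
import networkx as nx

# Toy directed network; capacities proxy trafficking throughput (invented numbers).
edges = [("src", "hub1", 8), ("src", "hub2", 6), ("hub1", "hub3", 5),
         ("hub2", "hub3", 4), ("hub1", "dst", 3), ("hub3", "dst", 7), ("hub2", "dst", 2)]
G = nx.DiGraph()
for u, v, cap in edges:
    G.add_edge(u, v, capacity=cap)

budget = 2                       # arcs law enforcement can fully interdict
best = None
for combo in itertools.combinations(edges, budget):
    H = G.copy()
    H.remove_edges_from([(u, v) for u, v, _ in combo])
    flow = nx.maximum_flow_value(H, "src", "dst")
    if best is None or flow < best[0]:
        best = (flow, [(u, v) for u, v, _ in combo])
print(best)                      # smallest residual max flow and the arcs to interdict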
Handle: RePEc:taf:uiiexx:v:56:y:2024:i:3:p:355-373 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2172631_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857 Author-Name: Hirbod Akhavantaheri Author-X-Name-First: Hirbod Author-X-Name-Last: Akhavantaheri Author-Name: Peter Sandborn Author-X-Name-First: Peter Author-X-Name-Last: Sandborn Author-Name: Diganta Das Author-X-Name-First: Diganta Author-X-Name-Last: Das Title: An enterprise network model for understanding and disrupting illicit counterfeit electronic part supply chains Abstract: This article analyses several promising policies in the electronic parts industry for disrupting the flow of counterfeit electronic parts. A socio-technical electronic part supply-chain network model has been developed to facilitate policy analysis. The model is used to understand the technical and social dynamics associated with the insertion of counterfeit electronic components into critical systems (e.g., aerospace, transportation, defense, and infrastructure) and to analyze the impact of various anti-counterfeiting policies and practices. This network model is used to assess the effectiveness of mandatory original component manufacturer buyback programs and the debarment of distributors found to provide counterfeit components. In this agent-based model, each participant in the supply chain is modeled as an independent entity governed by its own motivations and constraints. The entities in the model include the original component manufacturers, distributors, system integrators, operators, and counterfeiters. Each of these entities has dynamic behaviors and connections to the other agents. Since time is an integral factor (lead times and inventory levels can be drivers behind the appearance of counterfeits), the simulation is dynamic. The model allows the prediction of the risk of counterfeits making it into an operator’s system and the length of time between relevant supply-chain events/disruptions and the appearance of counterfeits. Journal: IISE Transactions Pages: 257-268 Issue: 3 Volume: 56 Year: 2024 Month: 3 X-DOI: 10.1080/24725854.2023.2172631 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2172631 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:3:p:257-268 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2177364_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857 Author-Name: Abhishek Ray Author-X-Name-First: Abhishek Author-X-Name-Last: Ray Author-Name: Viplove Arora Author-X-Name-First: Viplove Author-X-Name-Last: Arora Author-Name: Kayse Maass Author-X-Name-First: Kayse Author-X-Name-Last: Maass Author-Name: Mario Ventresca Author-X-Name-First: Mario Author-X-Name-Last: Ventresca Title: Optimal resource allocation to minimize errors when detecting human trafficking Abstract: Accurately detecting human trafficking is particularly challenging due to its covert nature, difficulty in distinguishing trafficking from non-trafficking exploitative conditions, and varying operational definitions. Typically, detecting human trafficking requires resource-intensive efforts from resource-constrained anti-trafficking stakeholders. Such measures may need personnel training or machine learning-based identification technologies that suffer from detection errors. 
Repeated usage of such measures risks biasing detection efforts and reducing detection effectiveness. Such problems raise the question: “How should imperfect detection resources be allocated to most effectively identify human trafficking?” As an answer, we construct a class of resource allocation models that considers various optimal allocation scenarios. These scenarios range from optimal location selection for monitoring to optimal allocation of a finite set of imperfect resources, given error rates. We illustrate the applicability of these models across both human and technology-facilitated detection contexts at the India–Nepal border and in the global seafood industry. Insights from our models help inform operational strategies for allocating limited anti-human trafficking resources in a way that effectively preserves human rights and dignity. Journal: IISE Transactions Pages: 325-339 Issue: 3 Volume: 56 Year: 2024 Month: 3 X-DOI: 10.1080/24725854.2023.2177364 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2177364 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:3:p:325-339 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2174277_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857 Author-Name: Eugene Wickett Author-X-Name-First: Eugene Author-X-Name-Last: Wickett Author-Name: Matthew Plumlee Author-X-Name-First: Matthew Author-X-Name-Last: Plumlee Author-Name: Karen Smilowitz Author-X-Name-First: Karen Author-X-Name-Last: Smilowitz Author-Name: Souly Phanouvong Author-X-Name-First: Souly Author-X-Name-Last: Phanouvong Author-Name: Victor Pribluda Author-X-Name-First: Victor Author-X-Name-Last: Pribluda Title: Inferring sources of substandard and falsified products in pharmaceutical supply chains Abstract: Substandard and falsified pharmaceuticals, prevalent in low- and middle-income countries, substantially increase levels of morbidity, mortality and drug resistance. Regulatory agencies combat this problem using post-market surveillance by collecting and testing samples where consumers purchase products. Existing analysis tools for post-market surveillance data focus attention on the locations of positive samples. This article seeks to expand such analysis through underutilized supply-chain information to provide inference on the sources of substandard and falsified products. We first establish the presence of unidentifiability issues when integrating this supply-chain information with surveillance data. We then develop a Bayesian methodology for evaluating substandard and falsified sources that extracts utility from supply-chain information and mitigates unidentifiability while accounting for multiple sources of uncertainty. Using de-identified surveillance data, we show the proposed methodology to be effective in providing valuable inference. Journal: IISE Transactions Pages: 241-256 Issue: 3 Volume: 56 Year: 2024 Month: 3 X-DOI: 10.1080/24725854.2023.2174277 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2174277 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:56:y:2024:i:3:p:241-256 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2162169_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857 Author-Name: Rashid Anzoom Author-X-Name-First: Rashid Author-X-Name-Last: Anzoom Author-Name: Rakesh Nagi Author-X-Name-First: Rakesh Author-X-Name-Last: Nagi Author-Name: Chrysafis Vogiatzis Author-X-Name-First: Chrysafis Author-X-Name-Last: Vogiatzis Title: Uncovering illicit supply networks and their interfaces to licit counterparts through graph-theoretic algorithms Abstract: The rapid market growth of different illicit trades in recent years can be attributed to their discreet, yet effective, supply chains. This article presents a graph-theoretic approach for investigating the composition of illicit supply networks using limited information. Two key steps constitute our strategy. The first is the construction of a broad network that comprises entities suspected of participating in the illicit supply chain. Two intriguing concepts are involved here: unification of alternate Bills-of-materials and identification of entities positioned at the interface of licit and illicit supply chains; logical graph representation and graph matching techniques are applied to achieve those objectives. In the second step, we search for a set of dissimilar supply chain structures that criminals are likely to adopt. We provide an integer linear programming formulation as well as a graph-theoretic representation for this problem, the latter of which leads us to a new variant of the Steiner Tree problem: the Generalized Group Steiner Tree Problem. Additionally, a three-step algorithmic approach for extracting single (cheapest), multiple, and dissimilar trees is proposed to solve the problem. We conclude this work with a semi-real case study on counterfeit footwear to illustrate the utility of our approach in uncovering illicit trades. We also present extensive numerical studies to demonstrate the scalability of our algorithms. Journal: IISE Transactions Pages: 224-240 Issue: 3 Volume: 56 Year: 2024 Month: 3 X-DOI: 10.1080/24725854.2022.2162169 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2162169 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:3:p:224-240 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2271536_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857 Author-Name: Thomas C. Sharkey Author-X-Name-First: Thomas C. Author-X-Name-Last: Sharkey Author-Name: Burcu B. Keskin Author-X-Name-First: Burcu B. Author-X-Name-Last: Keskin Author-Name: Renata Konrad Author-X-Name-First: Renata Author-X-Name-Last: Konrad Author-Name: Maria E. Mayorga Author-X-Name-First: Maria E. Author-X-Name-Last: Mayorga Title: Introduction to the Special Issue on Analytical Methods for Detecting, Disrupting, and Dismantling Illicit Operations Journal: IISE Transactions Pages: 221-223 Issue: 3 Volume: 56 Year: 2024 Month: 3 X-DOI: 10.1080/24725854.2023.2271536 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2271536 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:3:p:221-223 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2123998_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857 Author-Name: Nicholas R.
Magliocca Author-X-Name-First: Nicholas R. Author-X-Name-Last: Magliocca Author-Name: Ashleigh N. Price Author-X-Name-First: Ashleigh N. Author-X-Name-Last: Price Author-Name: Penelope C. Mitchell Author-X-Name-First: Penelope C. Author-X-Name-Last: Mitchell Author-Name: Kevin M. Curtin Author-X-Name-First: Kevin M. Author-X-Name-Last: Curtin Author-Name: Matthew Hudnall Author-X-Name-First: Matthew Author-X-Name-Last: Hudnall Author-Name: Kendra McSweeney Author-X-Name-First: Kendra Author-X-Name-Last: McSweeney Title: Coupling agent-based simulation and spatial optimization models to understand spatially complex and co-evolutionary behavior of cocaine trafficking networks and counterdrug interdiction Abstract: Despite more than 40 years of counterdrug interdiction efforts in the Western Hemisphere, cocaine trafficking, or “narco-trafficking”, networks continue to evolve and increase their global reach. Counterdrug interdiction continues to fall short of performance targets, due to the adaptability of narco-trafficking networks and spatially complex constraints on interdiction operations (e.g., resource and jurisdictional constraints). Due to these dynamics, current modeling approaches offer limited strategic insights into time-varying, spatially optimal allocation of counterdrug interdiction assets. This study presents coupled agent-based and spatial optimization models to investigate the co-evolution of counterdrug interdiction deployment and narco-trafficking networks’ adaptive responses. Increasing spatially optimized interdiction assets was found to increase seizure volumes. However, the value per seized shipment concurrently decreased and the number of active nodes increased or was unchanged. Narco-trafficking networks adaptively responded to increased interdiction pressure by spatially diversifying routes and dispersing shipment volumes. Thus, increased interdiction pressure had the unintended effect of expanding the spatial footprint of narco-trafficking networks. This coupled modeling approach enabled narco-trafficking networks to be studied as spatially complex adaptive systems evolving under varying interdiction pressure. Capturing such co-evolution dynamics is essential for simulating traffickers’ realistic adaptive responses to a wide range of interdiction scenarios. Journal: IISE Transactions Pages: 282-295 Issue: 3 Volume: 56 Year: 2024 Month: 3 X-DOI: 10.1080/24725854.2022.2123998 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2123998 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:3:p:282-295 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2123116_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857 Author-Name: Alberto Costa Author-X-Name-First: Alberto Author-X-Name-Last: Costa Author-Name: Tsan Sheng Ng Author-X-Name-First: Tsan Sheng Author-X-Name-Last: Ng Author-Name: Jidong Kang Author-X-Name-First: Jidong Author-X-Name-Last: Kang Author-Name: Zhuochun Wu Author-X-Name-First: Zhuochun Author-X-Name-Last: Wu Author-Name: Bin Su Author-X-Name-First: Bin Author-X-Name-Last: Su Title: Modelling fortification strategies for network resilience optimization: The case of immunization and mitigation Abstract: The ability of a system to tolerate disruptions and mitigate malicious attacks is crucial in many applications, especially when a failure of the system can have huge economic and social consequences.
The concept of system resilience has drawn increasing research and practical interest, and in this article, we propose a framework to define system resilience based on modelling of fortification strategies in the context of network interdiction problems. Specifically, we consider fortification strategies that address disruptions in two ways: immunization, where a certain disruption is not permitted to take place, and mitigation, where the disruption occurs, but is unable to significantly impair system performance. We then propose the resilience optimization problem (RES-OPT) to maximize the capability of a system in fortifying against disruptions through immunization and mitigation strategies. The flexibility of this approach lies in the fact that, instead of assuming a fixed set of disruption scenarios, we fortify the system against as powerful an attacker as possible. In addition, we propose a cutting plane methodology to effectively solve the resulting optimization problem, and apply it to a network flow problem with transmission-link fortification and an electricity transmission network problem in Southern China. The results show that RES-OPT yields good quality solutions in terms of both resilience and average costs, compared to other benchmark approaches. Journal: IISE Transactions Pages: 411-423 Issue: 4 Volume: 56 Year: 2024 Month: 4 X-DOI: 10.1080/24725854.2022.2123116 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2123116 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:4:p:411-423 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2261028_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857 Author-Name: Barry L. Nelson Author-X-Name-First: Barry L. Author-X-Name-Last: Nelson Title: Rebooting simulation Abstract: Computer simulation has been in the toolkit of industrial engineers for over 50 years and its value has been enhanced by advances in research, including both modeling and analysis, and in application software, both commercial and open source. However, “advances” are different from paradigm shifts. Motivated by big data, big computing and the big consequences of model-based decisions, it is time to reboot simulation for industrial engineering. Journal: IISE Transactions Pages: 385-397 Issue: 4 Volume: 56 Year: 2024 Month: 4 X-DOI: 10.1080/24725854.2023.2261028 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2261028 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:4:p:385-397 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2216759_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857 Author-Name: Jingyao Huang Author-X-Name-First: Jingyao Author-X-Name-Last: Huang Author-Name: Douglas Morrice Author-X-Name-First: Douglas Author-X-Name-Last: Morrice Author-Name: Jonathan Bard Author-X-Name-First: Jonathan Author-X-Name-Last: Bard Title: Coordinated scheduling for in-clinic and virtual medicine patients in a multi-station network Abstract: In this article, we study a coordinated scheduling problem with both Virtual Medicine patients (VM patients) and In-Clinic patients (IC patients) in a multi-disciplinary setting.
The problem was motivated by appointment scheduling requirements in a multi-disciplinary clinic called an Integrated Practice Unit (IPU), which incorporates differing priorities, heterogeneous service time distributions, distinct cost structures and unique care paths in a multi-station network. We establish priority for IC patients and introduce time windows for VM patients to create flexibility. Recursion expressions are derived for a performance measure of interest, which balances revenue against clinic overtime and patient waiting time costs. We develop an approach where IC patients are scheduled first. To do so, we establish discrete convexity for a special tree-type directed network structure and generate near-optimal IC patient schedules for a more general acyclic directed network. Conditioned on the IC patients’ schedule, we show that the VM patient scheduling problem has a discrete convexity property even in the presence of non-linear costs. Through numerical examples based on IPUs being implemented by the Dell Medical School at the University of Texas at Austin, we find that the introduction of VM patients can substantially improve system performance and patient access without adding resources. Journal: IISE Transactions Pages: 437-457 Issue: 4 Volume: 56 Year: 2024 Month: 4 X-DOI: 10.1080/24725854.2023.2216759 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2216759 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:4:p:437-457 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2219281_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857 Author-Name: Jun Song Author-X-Name-First: Jun Author-X-Name-Last: Song Author-Name: William Yang Author-X-Name-First: William Author-X-Name-Last: Yang Author-Name: Chaoyue Zhao Author-X-Name-First: Chaoyue Author-X-Name-Last: Zhao Title: Decision-dependent distributionally robust Markov decision process method in dynamic epidemic control Abstract: In this article, we present a Distributionally Robust Markov Decision Process (DRMDP) approach for addressing the dynamic epidemic control problem. The Susceptible-Exposed-Infectious-Recovered (SEIR) model is widely used to represent the stochastic spread of infectious diseases, such as COVID-19. The Markov Decision Process (MDP) offers a mathematical framework for identifying optimal actions, such as vaccination and transmission-reducing interventions, to combat disease spread as calculated using the SEIR model. However, uncertainties in these scenarios demand a more robust approach that is less reliant on error-prone assumptions. The primary objective of our study is to introduce a new DRMDP framework that allows for an ambiguous distribution of transition dynamics. Specifically, we consider the worst-case distribution of these transition probabilities within a decision-dependent ambiguity set. To overcome the computational complexities associated with policy determination, we propose an efficient Real-Time Dynamic Programming (RTDP) algorithm that is capable of computing optimal policies based on the reformulated DRMDP model in an accurate, timely, and scalable manner. Comparative analysis against the classic MDP model demonstrates that the DRMDP achieves a lower proportion of infections and susceptibilities at a reduced cost.
Journal: IISE Transactions Pages: 458-470 Issue: 4 Volume: 56 Year: 2024 Month: 4 X-DOI: 10.1080/24725854.2023.2219281 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2219281 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:4:p:458-470 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2213754_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857 Author-Name: Adam Schmidt Author-X-Name-First: Adam Author-X-Name-Last: Schmidt Author-Name: Laura A. Albert Author-X-Name-First: Laura A. Author-X-Name-Last: Albert Title: The drop box location problem Abstract: For decades, voting-by-mail and the use of ballot drop boxes have substantially grown within the USA, and in response, many USA election officials have added drop boxes to their voting infrastructure. However, existing guidance for locating drop boxes is limited. In this article, we introduce an integer programming model, the Drop Box Location Problem (DBLP), to locate drop boxes. The DBLP considers criteria of cost, voter access, and risk. The cost of the drop box system is determined by the fixed cost of adding drop boxes and the operational cost of a collection tour by a bipartisan team that regularly collects ballots from selected locations. The DBLP utilizes covering sets to ensure each voter is in close proximity to a drop box and incorporates a novel access measure that captures the ability to use multiple voting pathways. The DBLP is shown to be NP-hard, and we introduce a heuristic to generate a large number of feasible solutions for policy makers to select from a posteriori. Using a real-world case study of Milwaukee, WI, U.S., we study the benefits of the DBLP. The results demonstrate that the proposed optimization model identifies drop box locations that perform well across multiple criteria. The results also demonstrate that the trade-off between cost, access, and risk is non-trivial, which supports the use of the proposed optimization-based approach to select drop box locations. Journal: IISE Transactions Pages: 424-436 Issue: 4 Volume: 56 Year: 2024 Month: 4 X-DOI: 10.1080/24725854.2023.2213754 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2213754 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:4:p:424-436 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2220772_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857 Author-Name: Hanqi Wen Author-X-Name-First: Hanqi Author-X-Name-Last: Wen Author-Name: Jingtong Zhao Author-X-Name-First: Jingtong Author-X-Name-Last: Zhao Author-Name: Van-Anh Truong Author-X-Name-First: Van-Anh Author-X-Name-Last: Truong Author-Name: Jie Song Author-X-Name-First: Jie Author-X-Name-Last: Song Title: Dynamic expansions of social followings with lotteries and give-aways Abstract: The problem of how to attract a robust following on social media is one of the most pressing for influencers. We study a practice common on popular microblogging platforms such as Twitter, in which influencers expand their followings by running lotteries and giveaways. We are interested in how the lottery size and the seeding decisions influence the information propagation and the final reward for such a campaign.
First, we construct an information-diffusion model based on a random graph and show that the market demand curve of the lottery reward, driven by promotion through the social network, is “S”-shaped. This property lays a foundation for finding the optimal lottery size. Second, we observe that (i) dynamic seeding could re-stimulate the spread of information and (ii) with a fixed budget, seeding on two fixed occasions is always better than seeding once at the beginning. This observation motivates us to study the joint optimization of lottery size and adaptive seeding. We model the adaptive seeding problem as a Markov Decision Process. We establish the monotonicity of the value functions and trends in the optimal actions, and we show that, with adaptive seeding, the reward curve is approximately “S”-shaped with respect to the lottery size. Journal: IISE Transactions Pages: 471-484 Issue: 4 Volume: 56 Year: 2024 Month: 4 X-DOI: 10.1080/24725854.2023.2220772 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2220772 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:4:p:471-484 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2043570_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20231214T103247 git hash: d7a2cb0857 Author-Name: Di H. Nguyen Author-X-Name-First: Di H. Author-X-Name-Last: Nguyen Author-Name: J. Cole Smith Author-X-Name-First: J. Cole Author-X-Name-Last: Smith Title: Asymmetric stochastic shortest-path interdiction under conditional value-at-risk Abstract: We study a two-stage shortest-path interdiction problem between an interdictor and an evader, in which the cost for an evader to use each arc is given by the arc’s base cost plus an additional cost if the arc is attacked by the interdictor. The interdictor acts first to attack a subset of arcs, and then the evader traverses the network using a shortest path. In the problem we study, the interdictor does not know the exact value of each base cost, but instead only knows the (non-negative uniform) distribution of each arc’s base cost. The evader observes both the subset of arcs attacked by the interdictor and the true base cost values before traversing the network. The interdictor seeks to maximize the conditional value-at-risk of the evader’s shortest-path costs, given some specified risk parameter. We provide an exact method for this problem that utilizes row generation, partitioning, and bounding strategies, and demonstrate the efficacy of our approach on a set of randomly generated instances. Journal: IISE Transactions Pages: 398-410 Issue: 4 Volume: 56 Year: 2024 Month: 4 X-DOI: 10.1080/24725854.2022.2043570 File-URL: http://hdl.handle.net/10.1080/24725854.2022.2043570 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:56:y:2024:i:4:p:398-410 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2184884_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Yifu Li Author-X-Name-First: Yifu Author-X-Name-Last: Li Author-Name: Lening Wang Author-X-Name-First: Lening Author-X-Name-Last: Wang Author-Name: Xiaoyu Chen Author-X-Name-First: Xiaoyu Author-X-Name-Last: Chen Author-Name: Ran Jin Author-X-Name-First: Ran Author-X-Name-Last: Jin Title: Distributed data filtering and modeling for fog and networked manufacturing Abstract: Fog Manufacturing applies both Fog and Cloud Computing collaboratively in Smart Manufacturing to create an interconnected network through sensing, actuation, and computation nodes. Fog Manufacturing has become a promising research component to be integrated into the existing Smart Manufacturing paradigm and provides reliable and responsive computation services. However, Fog nodes' relatively limited communication bandwidth and computation capabilities call for reduced data communication load and computation time latency for modeling. There has long been a lack of an integrated framework to automatically reduce manufacturing data and perform computationally efficient modeling/machine learning. This research direction is increasingly important as both computational demands and Fog/networked Manufacturing become more prevalent. This paper proposes an integrated and distributed framework for data reduction and modeling of multiple systems in a Smart Manufacturing network, considering system similarities. A simulation study and a Fog Manufacturing testbed for ingot growth manufacturing validated that the proposed framework significantly reduces the required sample size, improving computational runtime metrics, while outperforming various other data reduction methods in modeling performance. Journal: IISE Transactions Pages: 485-496 Issue: 5 Volume: 56 Year: 2024 Month: 5 X-DOI: 10.1080/24725854.2023.2184884 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2184884 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:5:p:485-496 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2207615_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Michael Biehler Author-X-Name-First: Michael Author-X-Name-Last: Biehler Author-Name: Daniel Lin Author-X-Name-First: Daniel Author-X-Name-Last: Lin Author-Name: Jianjun Shi Author-X-Name-First: Jianjun Author-X-Name-Last: Shi Title: DETONATE: Nonlinear Dynamic Evolution Modeling of Time-dependent 3-dimensional Point Cloud Profiles Abstract: Modeling the evolution of a 3D profile over time as a function of heterogeneous input data and the previous time steps’ 3D shape is a challenging, yet fundamental problem in many applications. We introduce a novel methodology for the nonlinear modeling of dynamically evolving 3D shape profiles. Our model integrates heterogeneous, multimodal inputs that may affect the evolution of the 3D shape profiles. We leverage the forward and backward temporal dynamics to preserve the underlying temporal physical structures. Our approach is based on the Koopman operator theory for high-dimensional nonlinear dynamical systems. We leverage the theoretical Koopman framework to develop a deep learning-based framework for nonlinear, dynamic 3D modeling with consistent temporal dynamics.
We evaluate our method on multiple high-dimensional and short-term dependent problems, and it achieves accurate estimates, while also being robust to noise. Journal: IISE Transactions Pages: 541-558 Issue: 5 Volume: 56 Year: 2024 Month: 5 X-DOI: 10.1080/24725854.2023.2207615 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2207615 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:5:p:541-558 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2207252_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Xing Yang Author-X-Name-First: Xing Author-X-Name-Last: Yang Author-Name: Chen Zhang Author-X-Name-First: Chen Author-X-Name-Last: Zhang Title: Online directed-structural change-point detection: A segment-wise time-varying dynamic Bayesian network approach Abstract: High-dimensional data streams exist in many applications. Generally, these high-dimensional streaming data have complex directed conditional dependence relationships evolving over time. However, modeling their directed conditional dependence structure and detecting its change over time in an online way has not been well studied in the current literature. To that end, we propose an ONline Segment-wise tiMe-varying dynAmic Bayesian netwoRk model with exTernal information (ONSMART), together with an online score-based inference algorithm for directed-structural change-point detection in high-dimensional data. ONSMART adopts a linear vector autoregressive (VAR) model to describe directed inter-slice and intra-slice relations of variables. It further takes additional information about similarities of variables into account and regularizes similar variables to have similar structure positions in the network with a graph Laplacian. ONSMART allows the parameters of VAR to change segment-wise over time to describe the evolution of the conditional dependence structure and adopts a customized pruned exact linear time algorithm framework to identify directed-structural change-points. The L-BFGS-B approach is embedded in this framework to obtain the optimal dependence structure for each segment. Numerical studies using synthetic data and real data from a three-phase flow system are performed to verify the effectiveness of ONSMART. Journal: IISE Transactions Pages: 527-540 Issue: 5 Volume: 56 Year: 2024 Month: 5 X-DOI: 10.1080/24725854.2023.2207252 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2207252 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:5:p:527-540 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2212725_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Yanhong Liu Author-X-Name-First: Yanhong Author-X-Name-Last: Liu Author-Name: Haojie Ren Author-X-Name-First: Haojie Author-X-Name-Last: Ren Author-Name: Zhonghua Li Author-X-Name-First: Zhonghua Author-X-Name-Last: Li Title: A unified diagnostic framework via symmetrized data aggregation Abstract: In statistical process control of high-dimensional data streams, in addition to online monitoring of abnormal changes, fault diagnosis of responsible components has become increasingly important. Existing diagnostic procedures have been designed for some typical models with distribution assumptions.
Moreover, there is a lack of systematic approaches to provide a theoretical guarantee of significance in estimating shifted components. In this article, we introduce a new procedure to control the False Discovery Rate (FDR) of fault diagnosis. The proposed method formulates the fault diagnosis as a variable selection problem and utilizes the symmetrized data aggregation technique via sample splitting, data screening, and information pooling to control the FDR. Under some mild conditions, we show that the proposed method can achieve FDR control asymptotically. Extensive numerical studies and two real-data examples demonstrate satisfactory FDR control and remarkable diagnostic power in comparison to existing methods. Journal: IISE Transactions Pages: 573-584 Issue: 5 Volume: 56 Year: 2024 Month: 5 X-DOI: 10.1080/24725854.2023.2212725 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2212725 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:5:p:573-584 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2210629_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Liang Ding Author-X-Name-First: Liang Author-X-Name-Last: Ding Author-Name: Rui Tuo Author-X-Name-First: Rui Author-X-Name-Last: Tuo Author-Name: Shahin Shahrampour Author-X-Name-First: Shahin Author-X-Name-Last: Shahrampour Title: A sparse expansion for deep Gaussian processes Abstract: In this work, we use Deep Gaussian Processes (DGPs) as statistical surrogates for stochastic processes with complex distributions. Conventional inferential methods for DGP models can suffer from high computational complexity, as they require large-scale operations with kernel matrices for training and inference. We propose an efficient scheme for accurate inference and efficient training based on a range of Gaussian Processes, called the Tensor Markov Gaussian Processes (TMGP). We construct an induced approximation of TMGP referred to as the hierarchical expansion. Next, we develop a deep TMGP (DTMGP) model as the composition of multiple hierarchical expansions of TMGPs. The proposed DTMGP model has the following properties: (i) the outputs of each activation function are deterministic while the weights are chosen independently from a standard Gaussian distribution; (ii) in training or prediction, only O(polylog(M)) (out of M) activation functions have non-zero outputs, which significantly boosts the computational efficiency. Our numerical experiments on synthetic models and real datasets show the superior computational efficiency of DTMGP over existing DGP models. Journal: IISE Transactions Pages: 559-572 Issue: 5 Volume: 56 Year: 2024 Month: 5 X-DOI: 10.1080/24725854.2023.2210629 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2210629 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:56:y:2024:i:5:p:559-572 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2203736_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Junjie Wang Author-X-Name-First: Junjie Author-X-Name-Last: Wang Author-Name: Ahmed Maged Author-X-Name-First: Ahmed Author-X-Name-Last: Maged Author-Name: Min Xie Author-X-Name-First: Min Author-X-Name-Last: Xie Title: Log-linear stochastic block modeling and monitoring of directed sparse weighted network systems Abstract: Networks have been widely employed to reflect the relationships of entities in complex systems. In a weighted network, each node corresponds to one entity while the edge weight between two nodes can represent the number of interactions between two associated entities. More and more schemes have been established to monitor the networks, which help identify the possible changes or anomalies in the corresponding systems. However, few works in the literature comprehensively reflect the community structure, node heterogeneity, interaction sparsity, and direction of weighted networks. This article proposes a log-linear stochastic block model with latent features of nodes based on the mixture of Bernoulli distribution and Poisson distribution to characterize the sparse directional interaction counts within network systems. Explicit matrices and vectors are designed to incorporate community structure and enable straightforward maximum likelihood estimation of parameters. We further construct a monitoring statistic based on the generalized likelihood ratio test for change detection of sparse weighted networks. Comparative studies based on simulations and real data are conducted to validate the high efficiency of the proposed model and monitoring scheme. Journal: IISE Transactions Pages: 515-526 Issue: 5 Volume: 56 Year: 2024 Month: 5 X-DOI: 10.1080/24725854.2023.2203736 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2203736 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:5:p:515-526 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2185323_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Md Tanzin Farhat Author-X-Name-First: Md Tanzin Author-X-Name-Last: Farhat Author-Name: Ramin Moghaddass Author-X-Name-First: Ramin Author-X-Name-Last: Moghaddass Title: State-space modeling for degrading systems with stochastic neural networks and dynamic Bayesian layers Abstract: To monitor the dynamic behavior of degrading systems over time, a flexible hierarchical discrete-time state-space model (SSM) is introduced that can mathematically characterize the stochastic evolution of the latent states (discrete, continuous, or hybrid) of degrading systems, dynamic measurements collected from condition monitoring sources (e.g., sensors with mixed-type outputs), and the failure process. This flexible SSM is inspired by Bayesian hierarchical modeling and recurrent neural networks without imposing prior knowledge regarding the stochastic structure of the system dynamics and its variables. The temporal behavior of degrading systems and the relationship between variables of the corresponding system dynamics are fully characterized by stochastic neural networks without having to define parametric relationships/distributions between deterministic and stochastic variables.
A Bayesian filtering-based learning method is introduced to train the structure of the proposed framework with historical data. Also, the steps to utilize the proposed framework for inference and prediction of the latent states and sensor outputs are discussed. Numerical experiments are provided to demonstrate the application of the proposed framework for degradation system modeling and monitoring. Journal: IISE Transactions Pages: 497-514 Issue: 5 Volume: 56 Year: 2024 Month: 5 X-DOI: 10.1080/24725854.2023.2185323 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2185323 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:5:p:497-514 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2219468_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Xubo Yue Author-X-Name-First: Xubo Author-X-Name-Last: Yue Author-Name: Raed Al Kontar Author-X-Name-First: Raed Author-X-Name-Last: Al Kontar Title: Optimize to generalize in Gaussian processes: An alternative objective based on the Rényi divergence Abstract: We introduce an alternative closed-form objective function α-ELBO for improved parameter estimation in the Gaussian process (GP) based on the Rényi α-divergence. We use a decreasing temperature parameter α to iteratively deform the objective function during optimization. Ultimately, our objective function converges to the exact log-marginal likelihood function of the GP. At early optimization stages, α-ELBO can be viewed as a regularizer that smooths some unwanted critical points. At late stages, α-ELBO recovers the exact log-marginal likelihood function that guides the optimizer to solutions that best explain the observed data. Theoretically, we derive an upper bound of the Rényi divergence under the proposed objective and establish convergence rates for a class of smooth and non-smooth kernels. Case studies on a wide range of real-life engineering applications demonstrate that our proposed objective is a practical alternative that offers improved prediction performance over several state-of-the-art inference techniques. Journal: IISE Transactions Pages: 600-610 Issue: 6 Volume: 56 Year: 2024 Month: 6 X-DOI: 10.1080/24725854.2023.2219468 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2219468 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:6:p:600-610 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2224854_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Bo Shen Author-X-Name-First: Bo Author-X-Name-Last: Shen Author-Name: Zhenyu (James) Kong Author-X-Name-First: Zhenyu (James) Author-X-Name-Last: Kong Title: Active defect discovery: A human-in-the-loop learning method Abstract: Unsupervised defect detection methods are applied to an unlabeled dataset by producing a ranked list based on defect scores. Unfortunately, many of the top-ranked instances by unsupervised algorithms are not defects, which leads to high false-positive rates. Active Defect Discovery (ADD), which sequentially selects instances to obtain labeling information (defect or not), is proposed to overcome this deficiency. However, labeling is often costly. Therefore, balancing detection accuracy and labeling cost is essential. Along this line, this article proposes a novel ADD method to achieve the goal.
Our approach employs the state-of-the-art unsupervised defect detection method Isolation Forest as the baseline defect detector to extract features. Thereafter, the sparsity of the extracted features is utilized to adjust the defect detector so that it can focus on more important features for defect detection. To enforce the sparsity of the features and subsequent improvement of the detection accuracy, a new algorithm based on online gradient descent, namely, Sparse Approximated Linear Defect Discovery (SALDD), is proposed, together with a theoretical regret analysis. Extensive experiments are conducted on real-world datasets from healthcare, manufacturing, security, and other domains. The results demonstrate that the proposed algorithm significantly outperforms the state-of-the-art algorithms for defect detection. Journal: IISE Transactions Pages: 638-651 Issue: 6 Volume: 56 Year: 2024 Month: 6 X-DOI: 10.1080/24725854.2023.2224854 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2224854 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:6:p:638-651 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2222402_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Hui Wu Author-X-Name-First: Hui Author-X-Name-Last: Wu Author-Name: Yan-Fu Li Author-X-Name-First: Yan-Fu Author-X-Name-Last: Li Title: A multi-sensor fusion-based prognostic model for systems with partially observable failure modes Abstract: With the rapid development of sensor and communication technology, multi-sensor data is available to monitor the degradation of complex systems and predict the failure modes. However, two major challenges remain to be resolved: (i) how to predict the failure modes with limited failure-mode-labeled systems to alleviate the heavy dependence on expert experience; (ii) how to effectively fuse the useful information from the multi-sensor data to achieve an accurate estimation of the degradation status automatically. To address these issues, we propose a novel semi-supervised prognostic model for systems with partially observable failure modes, where only a small fraction of the systems in the training set are known for their failure modes. First, we develop a graph-based semi-supervised learning method to extract features characterizing the failure modes. Then, we input these features as well as the multi-sensor streams into an elastic net functional regression model to predict the residual useful lifetime. The proposed model is validated by extensive simulation studies and a case study of aircraft turbofan engines available from the NASA repository. Journal: IISE Transactions Pages: 624-637 Issue: 6 Volume: 56 Year: 2024 Month: 6 X-DOI: 10.1080/24725854.2023.2222402 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2222402 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
Handle: RePEc:taf:uiiexx:v:56:y:2024:i:6:p:624-637 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2219290_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Yanrong Li Author-X-Name-First: Yanrong Author-X-Name-Last: Li Author-Name: Juan Du Author-X-Name-First: Juan Author-X-Name-Last: Du Author-Name: Wei Jiang Author-X-Name-First: Wei Author-X-Name-Last: Jiang Title: Reinforcement learning for process control with application in semiconductor manufacturing Abstract: Process control is widely discussed in manufacturing, especially in semiconductor manufacturing. Due to unavoidable disturbances in manufacturing, different process controllers have been proposed to realize variation reduction. Since Reinforcement Learning (RL) has shown great advantages in learning actions from interactions with a dynamic system, we introduce RL methods for process control and propose a new controller called the RL-based controller. Considering the fact that most existing run-to-run (R2R) controllers mainly rely on a linear model assumption for the process input–output relationship, we first discuss theoretical properties of RL-based controllers based on the linear model assumption. Then the performance of RL-based controllers and traditional R2R controllers (e.g., Exponentially Weighted Moving Average (EWMA), double EWMA, adaptive EWMA, and general harmonic rule controllers) is compared for linear processes. Furthermore, we find that the RL-based controllers have potential advantages in dealing with other complicated nonlinear processes. Intensive numerical studies validate the advantages of the proposed RL-based controllers. Journal: IISE Transactions Pages: 585-599 Issue: 6 Volume: 56 Year: 2024 Month: 6 X-DOI: 10.1080/24725854.2023.2219290 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2219290 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:6:p:585-599 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2253869_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Barış Tan Author-X-Name-First: Barış Author-X-Name-Last: Tan Author-Name: Andrea Matta Author-X-Name-First: Andrea Author-X-Name-Last: Matta Title: The digital twin synchronization problem: Framework, formulations, and analysis Abstract: As the adoption of digital twins increases steadily, it is necessary to determine how to operate them most effectively and efficiently. In this article, the digital twin synchronization problem is introduced and defined formally. Frequent synchronizations would increase cost and data traffic congestion, whereas infrequent synchronizations would increase the bias of the predictions and yield wrong decisions. This work defines the synchronization problem variants in different contexts. To discuss the problem and its solution, the problem of determining when to synchronize an unreliable production system with its digital twin to minimize the average synchronization and bias costs is formulated and analyzed analytically. The state-independent, state-dependent, and full-information solutions have been determined by using a stochastic model of the system. Solving the synchronization problem using simulation is discussed, and an approximate policy is proposed.
Our results show that the performance of the state-dependent policy is close to the optimal solution that can be obtained with full information and significantly better than the performance of the state-independent policy. Furthermore, the approximate periodic state-dependent policy yields near-optimal results. To operate digital twins more effectively, the digital twin synchronization problem must be considered and solved to determine the optimal synchronization policy. Journal: IISE Transactions Pages: 652-665 Issue: 6 Volume: 56 Year: 2024 Month: 6 X-DOI: 10.1080/24725854.2023.2253869 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2253869 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:6:p:652-665 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2222162_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Weizhi Lin Author-X-Name-First: Weizhi Author-X-Name-Last: Lin Author-Name: Cesar Ruiz Author-X-Name-First: Cesar Author-X-Name-Last: Ruiz Author-Name: Matan Aroosh Author-X-Name-First: Matan Author-X-Name-Last: Aroosh Author-Name: Hadar Ben-Yoav Author-X-Name-First: Hadar Author-X-Name-Last: Ben-Yoav Author-Name: Qiang Huang Author-X-Name-First: Qiang Author-X-Name-Last: Huang Title: Multiresolution functional characterization and correction of biofouling for improved biosensing efficacy Abstract: Multielectrode electrochemical biosensors promise on-the-spot inspection of target compounds in biofluids, reducing costs in personalized healthcare. However, sensor sensitivity may decrease after each use due to biofouling, where chemical attachments on sensor electrodes curtail sensing signals. Current biofouling characterization techniques rely on time-consuming offline tests and analysis, making them impractical for on-the-spot signal correction. Alternatively, we propose to statistically model and correct the biofouling-induced signal changes. However, in addition to biofouling, the signals are influenced by multiple sources of variation, each with different levels of impact. To effectively characterize and separate biofouling effects from the major sources of variability, we establish a multiresolution functional mixed-effect model based on domain knowledge. A biosensing signal is first decomposed into a smooth trend and local peaks. The smooth trend models the effects of population-level biofluid composition, as well as patient and electrode effects to isolate variability sources. Changes in local peak location and amplitude indicate biofouling. These local peaks are modeled using a sparse subset of high-order functional terms. By modeling the changes of those high-order terms, we can characterize and predict the biofouling between consecutive measurements. We propose a sequential parameter estimation procedure that ensures model identifiability. A nonparametric regression model is developed for biofouling prediction. The proposed strategy is validated through simulation and real case studies, effectively correcting biofouling-affected signals from new patients. Journal: IISE Transactions Pages: 611-623 Issue: 6 Volume: 56 Year: 2024 Month: 6 X-DOI: 10.1080/24725854.2023.2222162 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2222162 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:uiiexx:v:56:y:2024:i:6:p:611-623 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2253881_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Alexandre Dolgui Author-X-Name-First: Alexandre Author-X-Name-Last: Dolgui Author-Name: Oleg Gusikhin Author-X-Name-First: Oleg Author-X-Name-Last: Gusikhin Author-Name: Dmitry Ivanov Author-X-Name-First: Dmitry Author-X-Name-Last: Ivanov Author-Name: Xingyu Li Author-X-Name-First: Xingyu Author-X-Name-Last: Li Author-Name: Kathryn Stecke Author-X-Name-First: Kathryn Author-X-Name-Last: Stecke Title: A network-of-networks adaptation for cross-industry manufacturing repurposing Abstract: During a crisis, manufacturing processes in supply chains of different industries may network with each other as an adaptation response. We propose and examine a “network-of-networks” mechanism of such a cross-industry adaptation to learn about the value of reducing uncertainty through collaborative crisis preparedness and response during the COVID-19 pandemic. Our study reveals the underlying trade-offs between the manufacturing capacity conversion time and effort required to adapt, and the gains from collaborative preparedness under uncertainty. Through a real-life data-based analysis with the help of mathematical optimization, we connect the network-of-networks coordination design and the outcomes of scenario modeling, demonstrating the superiority of coordinated capacity repurposing compared to ad-hoc adaptation. We conclude that an appropriate collaboration of governmental agencies, healthcare, and industry is crucial for a prompt capacity conversion to healthcare production in a pandemic. Concrete implementation measures include visibility, healthcare inventory monitoring, technology backup plans, and repurposing contingency plans at the preparedness stage. At the response stage, a correct adaptation start time determines success. The results obtained can be instructive for developing technological and managerial plans for cross-industry adaptation. The proposed “network-of-networks” perspective contributes to the theory of supply chain viability and adaptation under disruptions using intertwined supply networks. Journal: IISE Transactions Pages: 666-682 Issue: 6 Volume: 56 Year: 2024 Month: 6 X-DOI: 10.1080/24725854.2023.2253881 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2253881 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:6:p:666-682 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2204329_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Xiaoyan Xu Author-X-Name-First: Xiaoyan Author-X-Name-Last: Xu Author-Name: Suresh P. Sethi Author-X-Name-First: Suresh P. Author-X-Name-Last: Sethi Author-Name: Sai-Ho Chung Author-X-Name-First: Sai-Ho Author-X-Name-Last: Chung Author-Name: Tsan-Ming Choi Author-X-Name-First: Tsan-Ming Author-X-Name-Last: Choi Title: Ordering COVID-19 vaccines for social welfare with information updating: Optimal dynamic order policies and vaccine selection in the digital age Abstract: In the digital age, operations can be improved by a wise use of information and technological tools. During the COVID-19 pandemic, governments faced various choices of vaccines possessing different efficacy and availability levels at different time points.
In this article, we consider a two-stage vaccine ordering problem of a government from a first and only supplier in the first stage, and either the same supplier or a new second supplier in the second stage. Between the two stages, potential demand information for the vaccine is collected to update the forecast. Using dynamic programming, we derive the government’s optimal vaccine ordering policy. We find that the government should select its vaccine supplier based on the disease’s infection rate in the society. When the infection rate is low, the government should order nothing at the first stage and order from the supplier with a higher efficacy level at the second stage. When the disease’s infection rate is high, the government should order vaccines at the first stage and switch to the other supplier with a lower efficacy level at the second stage. We extend our model to examine (i) the value of blockchain adoption and (ii) the impact of vaccines’ side effects. Journal: IISE Transactions Pages: 729-745 Issue: 7 Volume: 56 Year: 2024 Month: 7 X-DOI: 10.1080/24725854.2023.2204329 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2204329 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:7:p:729-745 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2305732_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Ailing Xu Author-X-Name-First: Ailing Author-X-Name-Last: Xu Author-Name: Yuhan Miao Author-X-Name-First: Yuhan Author-X-Name-Last: Miao Author-Name: Ying-Ju Chen Author-X-Name-First: Ying-Ju Author-X-Name-Last: Chen Author-Name: Qiao-Chu He Author-X-Name-First: Qiao-Chu Author-X-Name-Last: He Author-Name: Zuo-Jun Max Shen Author-X-Name-First: Zuo-Jun Max Author-X-Name-Last: Shen Title: Incentivizing compliance behaviors with investment goods in pandemic preparedness and resilience Abstract: To understand non-compliance behaviors in investment goods in pandemic preparedness and resilience, we resort to a form of bounded rationality in which people suffer from a lack of self-control due to “present bias”, and differ in their sophistication levels (the degree to which they are aware of such compliance barriers). We focus on (i) the manufacturer’s pricing strategy under an advance selling framework, and (ii) the subsidy policy to mitigate under-adoption, to generate operational insights. We show that subsidizing the manufacturer is more cost-effective than subsidizing consumers, because the latter subsidy will not fully trickle down to consumers when the manufacturer manipulates prices in response. In particular, when the subsidy program is budget-constrained, the manufacturer subsidy should be provided only in the spot period. In contrast, when the budget constraint is relaxed, subsidies in both periods should be provided. Surprisingly, such a subsidy program has non-monotone impacts on consumers’ adoption quantities. Intuitively, this is because the spot-period subsidy induces consumers’ non-compliance behaviors in the advance period. In response, the manufacturer/seller will shift to a pricing strategy that may further reduce the advance-selling quantity. This result provides a natural explanation to reconcile the mixed effects of adoption subsidies.
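The flavor of the two-stage ordering problem in the vaccine record above can be conveyed with a small dynamic-programming sketch. The demand scenarios, the costs, and the assumption that the updated forecast reveals demand exactly are all illustrative stand-ins, not the article's model.

```python
import numpy as np

# Schematic two-stage ordering with information updating (illustrative only).
scenarios = np.array([100.0, 200.0, 300.0])   # possible vaccine demand levels (assumed)
prior = np.array([0.3, 0.4, 0.3])             # first-stage forecast (assumed)
c1, c2, p = 1.0, 1.2, 5.0                     # stage-1 cost, stage-2 cost, benefit per dose used

def stage2_value(q1, d):
    """Best achievable welfare once the updated forecast reveals demand d."""
    q2_grid = np.arange(0, 301, 10)
    welfare = p * np.minimum(q1 + q2_grid, d) - c1 * q1 - c2 * q2_grid
    return welfare.max()

# Stage 1: choose the order that maximizes expected welfare under the prior.
q1_grid = np.arange(0, 301, 10)
expected = [prior @ np.array([stage2_value(q1, d) for d in scenarios]) for q1 in q1_grid]
print("optimal first-stage order:", q1_grid[int(np.argmax(expected))])
```

Because the stage-2 unit cost exceeds the stage-1 cost, the recursion trades off cheap early ordering against the value of waiting for the updated forecast, which is the basic tension the article analyzes.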
Journal: IISE Transactions Pages: 685-698 Issue: 7 Volume: 56 Year: 2024 Month: 7 X-DOI: 10.1080/24725854.2024.2305732 File-URL: http://hdl.handle.net/10.1080/24725854.2024.2305732 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:7:p:685-698 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2317845_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Dmitry Ivanov Author-X-Name-First: Dmitry Author-X-Name-Last: Ivanov Author-Name: Weiwei Chen Author-X-Name-First: Weiwei Author-X-Name-Last: Chen Author-Name: David W. Coit Author-X-Name-First: David W. Author-X-Name-Last: Coit Author-Name: Nezih Altay Author-X-Name-First: Nezih Author-X-Name-Last: Altay Title: Modeling and Optimization of Supply Chain Resilience to Pandemics and Long-Term Crises Journal: IISE Transactions Pages: 683-684 Issue: 7 Volume: 56 Year: 2024 Month: 7 X-DOI: 10.1080/24725854.2024.2317845 File-URL: http://hdl.handle.net/10.1080/24725854.2024.2317845 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:7:p:683-684 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2223246_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Xuecheng Yin Author-X-Name-First: Xuecheng Author-X-Name-Last: Yin Author-Name: Sabah Bushaj Author-X-Name-First: Sabah Author-X-Name-Last: Bushaj Author-Name: Yue Yuan Author-X-Name-First: Yue Author-X-Name-Last: Yuan Author-Name: İ. Esra Büyüktahtakın Author-X-Name-First: İ. Esra Author-X-Name-Last: Büyüktahtakın Title: COVID-19: Agent-based simulation-optimization to vaccine center location vaccine allocation problem Abstract: This article presents an agent-based simulation-optimization modeling and algorithmic framework to determine the optimal vaccine center location and vaccine allocation strategies under budget constraints during an epidemic outbreak. Both simulation and optimization models incorporate population health dynamics, such as susceptible (S), vaccinated (V), infected (I) and recovered (R), while their integrated utilization focuses on the COVID-19 vaccine allocation challenges. We first formulate a dynamic location–allocation Mixed-Integer Programming (MIP) model, which determines the optimal vaccination center locations and vaccines allocated to vaccination centers, pharmacies, and health centers in a multi-period setting in each region over a geographical location. We then extend the agent-based epidemiological simulation model of COVID-19 (Covasim) by adding new vaccination compartments representing people who take the first vaccine shot and the first two shots. The Covasim involves complex disease transmission contact networks, including households, schools, and workplaces, and demographics, such as age-based disease transmission parameters. We combine the extended Covasim with the vaccination center location-allocation MIP model into one single simulation-optimization framework, which works iteratively forward and backward in time to determine the optimal vaccine allocation under varying disease dynamics. The agent-based simulation captures the inherent uncertainty in disease progression and forecasts the refined number of susceptible individuals and infections for the current time period to be used as an input into the optimization. 
We calibrate, validate, and test our simulation-optimization vaccine allocation model using the COVID-19 data and vaccine distribution case study in New Jersey. The resulting insights support ongoing mass vaccination efforts to mitigate the impact of the pandemic on public health, while the simulation-optimization algorithmic framework could be generalized for other epidemics. Journal: IISE Transactions Pages: 699-714 Issue: 7 Volume: 56 Year: 2024 Month: 7 X-DOI: 10.1080/24725854.2023.2223246 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2223246 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:7:p:699-714 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2184515_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Shanshan Li Author-X-Name-First: Shanshan Author-X-Name-Last: Li Author-Name: Yong He Author-X-Name-First: Yong Author-X-Name-Last: He Author-Name: Hongfu Huang Author-X-Name-First: Hongfu Author-X-Name-Last: Huang Author-Name: Junyi Lin Author-X-Name-First: Junyi Author-X-Name-Last: Lin Author-Name: Dmitry Ivanov Author-X-Name-First: Dmitry Author-X-Name-Last: Ivanov Title: Supply chain hoarding and contingent sourcing strategies in anticipation of price hikes and product shortages Abstract: In anticipation of price hikes and shortages caused by supplier disruptions and manufacturer production stops, customers might stockpile extra products. In the case of a supplier disruption, a manufacturer may decide to continue producing using a contingent source. Capturing the price dynamics in four disruption-related periods (i.e., responding, rising, recovering, and recovered), we derive optimal hoarding policies for customers. The results indicate that customer hoarding decisions fall into multiple patterns depending on the interactions between disruption events, market responses (quick and slow), and market recovery (instant, quick, slow, and never). We next present contingent sourcing tactics for manufacturers to mitigate disruptions with and without customer hoarding. We find that future price increases could induce contingent sourcing even if it is unprofitable to resume production during the price-responding phase. Our results offer recommendations regarding when and how to use hoarding and contingent sourcing accounting for uncertain disruption duration and asymmetric information along with disruption- and recovery-driven price dynamics. These recommendations can be of particular value for supply chain decision-making at times of growing inflation. We also demonstrate the impacts of customer hoarding and disruption information on the value of contingent sourcing. Journal: IISE Transactions Pages: 746-761 Issue: 7 Volume: 56 Year: 2024 Month: 7 X-DOI: 10.1080/24725854.2023.2184515 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2184515 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
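The forward-backward coupling described in the simulation-optimization record above can be sketched schematically. The toy two-region S-V-I-R recursion and the greedy budgeted allocation below stand in for Covasim and the location-allocation MIP, respectively; all rates, populations, and the budget are invented.

```python
import numpy as np

# Toy simulation-optimization loop (a stand-in for Covasim + the MIP).
# Two regions, a per-period vaccine budget, allocation greedy by infections.
beta, gamma, T, budget = 0.3, 0.1, 20, 150
S = np.array([5000.0, 8000.0]); I = np.array([50.0, 10.0])
V = np.zeros(2); R = np.zeros(2)

for t in range(T):
    # "Backward" step: allocate this period's doses where infections are highest
    # (a greedy proxy for re-solving the MIP with refined simulation forecasts).
    w = I / I.sum()
    doses = np.minimum(budget * w, S)
    # "Forward" step: advance the epidemic one period under the chosen allocation.
    N = S + V + I + R
    new_inf = beta * S * I / N
    S, V = S - new_inf - doses, V + doses
    I, R = I + new_inf - gamma * I, R + gamma * I

print("final infected:", I.round(1), "vaccinated:", V.round(0))
```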
Handle: RePEc:taf:uiiexx:v:56:y:2024:i:7:p:746-761 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2217248_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Byeongmok Kim Author-X-Name-First: Byeongmok Author-X-Name-Last: Kim Author-Name: Jong Gwang Kim Author-X-Name-First: Jong Gwang Author-X-Name-Last: Kim Author-Name: Seokcheon Lee Author-X-Name-First: Seokcheon Author-X-Name-Last: Lee Title: A multi-agent reinforcement learning model for inventory transshipments under supply chain disruption Abstract: The COVID-19 pandemic has significantly disrupted global Supply Chains (SCs), emphasizing the importance of SC resilience, which refers to the ability of SCs to return to their original or more desirable state following disruptions. This study focuses on collaboration, a key component of SC resilience, and proposes a novel collaborative structure that incorporates a fictitious agent to manage inventory transshipment decisions between retailers in a centralized manner while maintaining the retailers’ autonomy in ordering. The proposed collaborative structure offers the following advantages from SC resilience and operational perspectives: (i) it facilitates decision synchronization for enhanced collaboration among retailers, and (ii) it allows retailers to collaborate without the need for information sharing, addressing the potential issue of information sharing reluctance. Additionally, this study employs non-stationary probability to capture the deeply uncertain nature of the ripple effect and the highly volatile customer demand caused by the pandemic. A new Reinforcement Learning (RL) algorithm is developed to handle non-stationary environments and to implement the proposed collaborative structure. Experimental results demonstrate that the proposed collaborative structure using the new RL algorithm achieves superior SC resilience compared with centralized inventory management systems with transshipment and decentralized inventory management systems without transshipment using traditional RL algorithms. Journal: IISE Transactions Pages: 715-728 Issue: 7 Volume: 56 Year: 2024 Month: 7 X-DOI: 10.1080/24725854.2023.2217248 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2217248 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:7:p:715-728 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2243615_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Canan Ulu Author-X-Name-First: Canan Author-X-Name-Last: Ulu Author-Name: Thomas S. Shively Author-X-Name-First: Thomas S. Author-X-Name-Last: Shively Title: A Bayesian model for multicriteria sorting problems Abstract: Decision makers are often interested in assigning alternatives to preference classes under multiple criteria instead of choosing the best alternative or ranking all the alternatives. Firms need to categorize suppliers based on performance, credit agencies need to classify customers according to their risks, and graduate programs need to decide who to admit. In this article, we develop an interactive Bayesian algorithm to aid a decision maker (DM) with a multicriteria sorting problem by learning about her preferences and using that knowledge to sort alternatives. We assume the DM has a linear value function and value thresholds for preference classes. 
Our method specifies an informative prior distribution on the uncertain parameters. At each stage of the process, we compare the expected cost of stopping with the expected cost of continuing to consult the DM. If it is optimal to continue, we select an alternative to present to the DM and, given the DM’s response, we update the prior distribution using Bayes’ Theorem. The goal of the algorithm is to minimize expected total cost. We develop lower bounds on the optimal cost and study the performance of a heuristic policy that presents the DM alternatives with the highest expected cost of misplacement. Journal: IISE Transactions Pages: 777-791 Issue: 7 Volume: 56 Year: 2024 Month: 7 X-DOI: 10.1080/24725854.2023.2243615 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2243615 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:7:p:777-791 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2227666_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Melis Boran Author-X-Name-First: Melis Author-X-Name-Last: Boran Author-Name: Bahar Çavdar Author-X-Name-First: Bahar Author-X-Name-Last: Çavdar Author-Name: Tuğçe Işık Author-X-Name-First: Tuğçe Author-X-Name-Last: Işık Title: Capacity allocation in service systems with preferred delivery times and multiple customer classes Abstract: Motivated by operational problems in click-and-collect systems, such as curbside pickup programs, we study a joint admission control and capacity allocation problem. We consider systems where customers have preferred service delivery times and can be of different priority classes. The service provider can reject customers upon arrival or serve jobs via overtime when service capacity is insufficient. The service provider’s goal is to find the minimum-cost admission and capacity allocation policy to dynamically decide when to serve and whom to serve. We model this problem as a Markov Decision Process and present structural results to partially characterize suboptimal solutions. We then develop a linear programming-based exact solution method using these results. We also present a problem-specific approximation method using a new state aggregation rule to address computational challenges faced due to large state and action spaces. Finally, we develop heuristic policies for large instances based on the behavior of optimal policies in small problems. We evaluate our methods through extensive computational experiments where we vary the service capacity, arrivals, associated service costs, customer segmentation, and order patterns. Our solution methods perform significantly better than several benchmarks in managing the tradeoff between the computation time and solution quality. Journal: IISE Transactions Pages: 762-776 Issue: 7 Volume: 56 Year: 2024 Month: 7 X-DOI: 10.1080/24725854.2023.2227666 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2227666 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
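The Bayes-update step of the interactive sorting algorithm above admits a compact sketch with a discretized prior. The two-criteria grid, the noiseless response model, and the single class threshold below are simplifying assumptions for illustration, not the article's specification.

```python
import numpy as np

# Discrete approximation of the Bayes update in an interactive sorting loop.
# Prior over weight vectors of a linear value function on two criteria.
grid = np.array([[w, 1 - w] for w in np.linspace(0.05, 0.95, 19)])
prior = np.full(len(grid), 1 / len(grid))
threshold = 0.5            # value threshold separating class 1 from class 2 (assumed)

def update(prior, alternative, dm_says_class1):
    """Posterior after the DM assigns `alternative` to a class (noiseless response)."""
    values = grid @ alternative
    likelihood = (values >= threshold) if dm_says_class1 else (values < threshold)
    post = prior * likelihood
    return post / post.sum()

# The DM places alternative (0.8, 0.3) in class 1: low-value weights are ruled out.
prior = update(prior, np.array([0.8, 0.3]), dm_says_class1=True)
print("posterior mass on w1 >= 0.5:", prior[grid[:, 0] >= 0.5].sum())
```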
Handle: RePEc:taf:uiiexx:v:56:y:2024:i:7:p:762-776 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2256369_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Hao Wu Author-X-Name-First: Hao Author-X-Name-Last: Wu Author-Name: Qiao Liang Author-X-Name-First: Qiao Author-X-Name-Last: Liang Author-Name: Kaibo Wang Author-X-Name-First: Kaibo Author-X-Name-Last: Wang Title: Modeling and monitoring multilayer attributed weighted directed networks via a generative model Abstract: As data with network structures are widely seen in diverse applications, the modeling and monitoring of network data have drawn considerable attention in recent years. When individuals in a network have multiple types of interactions, a multilayer network model should be considered to better characterize its behavior. Most existing network models have concentrated on characterizing the topological structure among individuals, and important attributes of individuals are largely disregarded in existing works. In this article, first, we propose a unified static Network Generative Model (static-NGM), which incorporates individual attributes in network topology modeling. The proposed model can be utilized for a general multilayer network with weighted and directed edges. A variational expectation maximization algorithm is developed to estimate model parameters. Second, to characterize the time-dependent property of a network sequence and perform network monitoring, we extend the static-NGM model to a sequential version, namely, the sequential-NGM model, with the Markov assumption. Last, a sequential-NGM chart is developed to detect shifts and identify root causes of shifts in a network sequence. Extensive simulation experiments show that considering attributes improves the parameter estimation accuracy and that the proposed monitoring method also outperforms the three competitive approaches, static-NGM chart, score test-based chart (ST chart) and Bayes factor-based chart (BF chart), in both shift detection and root cause diagnosis. We also perform a case study with Enron E-mail data; the results further validate the proposed method. Journal: IISE Transactions Pages: 902-914 Issue: 8 Volume: 56 Year: 2024 Month: 8 X-DOI: 10.1080/24725854.2023.2256369 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2256369 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:8:p:902-914 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2225097_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Hanxiao Zhang Author-X-Name-First: Hanxiao Author-X-Name-Last: Zhang Author-Name: Yan-Fu Li Author-X-Name-First: Yan-Fu Author-X-Name-Last: Li Author-Name: Min Xie Author-X-Name-First: Min Author-X-Name-Last: Xie Author-Name: Chen Zhang Author-X-Name-First: Chen Author-X-Name-Last: Zhang Title: Two-stage distributionally robust optimization for joint system design and maintenance scheduling in high-consequence systems Abstract: The failures of high-consequence systems can cause serious harm to humans, including loss of human health, life security, finance, and even social chaos. To protect high-consequence systems, both optimal system design and maintenance activities contribute to improving system reliability and social safety. 
The existing works generally optimize these two problems sequentially and assume that the degradation process of components is precisely known. However, sequential optimization often results in significant losses due to redundancies, and such a presumption usually cannot be guaranteed in practice, due to limited historical data or a lack of expert knowledge, referred to as epistemic uncertainty. To fill this gap, in this article, we consider an integrated optimization of system design and maintenance scheduling for multi-state high-consequence systems in which the component’s degradation is only known with limited distributional information. To address this issue, we utilize the framework of distributionally robust optimization to provide a risk-averse decision to decision-makers even under the worst realizations of random parameters, and develop a two-stage integer distributionally robust model with a moment-based ambiguity set to determine the system design and maintenance scheduling simultaneously. The proposed model can be converted to a tractable approximation as an integer linear stochastic programming problem. In order to solve large-scale problems, we develop a sample-based adaptive large neighborhood search algorithm to find the optimal system designs. In the numerical experiments, we present a case study on feedwater heating systems in nuclear power plants and demonstrate that the integrated optimization creates significant benefits in profitability. We also present the out-of-sample performance of the distributionally robust design to avoid extreme risk. Journal: IISE Transactions Pages: 793-810 Issue: 8 Volume: 56 Year: 2024 Month: 8 X-DOI: 10.1080/24725854.2023.2225097 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2225097 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:8:p:793-810 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2238204_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Maryam Bayat Author-X-Name-First: Maryam Author-X-Name-Last: Bayat Author-Name: Farnaz Hooshmand Author-X-Name-First: Farnaz Author-X-Name-Last: Hooshmand Author-Name: Seyed Ali MirHassani Author-X-Name-First: Seyed Ali Author-X-Name-Last: MirHassani Title: Optimizing risk budgets in portfolio selection problem: A bi-level model and an efficient gradient-based algorithm Abstract: Risk budgeting is one of the recent and successful strategies for asset portfolio selection. In this strategy, risk budgets are associated with assets, and the amount of investment is adjusted so that the contribution of each asset to the portfolio risk is proportional to its risk budget. To the best of our knowledge, no specific method has been presented in the literature to systematically determine the value of risk budgets. To fill this research gap, in this article, we consider the risk budgets as decision variables and present a bi-level programming model where the upper level decides the risk budgets and the lower level determines the risk budgeting portfolio. Three approaches are introduced to solve the model. The first is a single-level reformulation of the bi-level model, the second is a novel gradient-based algorithm, and the third is the particle swarm optimization algorithm. Moreover, the k-means clustering method is utilized to determine the assets involved in the portfolio.
Computational results over real-world datasets demonstrate the significance of the bi-level model. In addition, the results confirm the proficiency of our gradient-based algorithm in terms of both solution quality and running time. Journal: IISE Transactions Pages: 841-854 Issue: 8 Volume: 56 Year: 2024 Month: 8 X-DOI: 10.1080/24725854.2023.2238204 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2238204 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:8:p:841-854 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2227659_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Zebin Li Author-X-Name-First: Zebin Author-X-Name-Last: Li Author-Name: Fei Yao Author-X-Name-First: Fei Author-X-Name-Last: Yao Author-Name: Hongyue Sun Author-X-Name-First: Hongyue Author-X-Name-Last: Sun Title: Reinforced active learning for CVD-grown two-dimensional materials characterization Abstract: Two-dimensional (2D) materials are one of the research frontiers in material science due to their promising properties. Chemical Vapor Deposition (CVD) is the most widely used technique to grow large-scale high-quality 2D materials. The CVD-grown 2D materials can be efficiently characterized by an optical microscope. However, annotating microscopy images to distinguish the growth quality from good to bad is time-consuming. In this work, we explore Active Learning (AL), which iteratively acquires quality labels from a human and updates the classifier for microscopy images. As a result, AL only requires a limited number of labels to achieve a good model performance. However, existing handcrafted query strategies in AL deal poorly with the dynamics of the query process, since a rigid handcrafted strategy may fail to choose the most informative instances (i.e., images) after each query. We propose a Reinforced Active Learning (RAL) framework that uses reinforcement learning to learn a query strategy for AL. In addition, by introducing intrinsic motivation into the proposed framework, a unique intrinsic reward is designed to enhance the classification performance. The results show that RAL outperforms AL, and can significantly reduce the annotation effort for the CVD-grown 2D materials characterization. Journal: IISE Transactions Pages: 811-823 Issue: 8 Volume: 56 Year: 2024 Month: 8 X-DOI: 10.1080/24725854.2023.2227659 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2227659 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:8:p:811-823 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2228861_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Qiao Liang Author-X-Name-First: Qiao Author-X-Name-Last: Liang Title: Tree-based data filtering for online user-generated reviews Abstract: Analysis of online user-generated reviews has attracted extensive attention with broad applications in recent years. However, the high volume and low value density of online reviews bring challenges to timely and effective data utilization. To address that challenge, this work proposes an unsupervised review filtering method based on the inherent tree-structured hierarchies among review data that reflect the general-to-specific characteristics of various quality aspects discussed in reviews.
In particular, the reviews with aspects distributed near the leaf nodes of the tree are capable of providing more specific and detailed information about the examined product, which is more likely to be retained after the tree-based filtering. To enable an effective extraction of aspect hierarchies from a broad variety of review corpora, a Bayesian nonparametric hierarchical topic model has been constructed and incorporated with an enhanced Pólya urn scheme. The approximate inference of model parameters is obtained by an efficient collapsed Gibbs sampling procedure. The proposed method can enhance the layered effect of individual reviews according to their general-to-specific characteristics and retain an information-rich subset filtered from the raw review corpus. The merits of the proposed method are demonstrated through case studies on two real-world data sets and an extensive simulation study. Journal: IISE Transactions Pages: 824-840 Issue: 8 Volume: 56 Year: 2024 Month: 8 X-DOI: 10.1080/24725854.2023.2228861 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2228861 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:8:p:824-840 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2247047_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Ying Liao Author-X-Name-First: Ying Author-X-Name-Last: Liao Author-Name: Ning Dong Author-X-Name-First: Ning Author-X-Name-Last: Dong Author-Name: Yisha Xiang Author-X-Name-First: Yisha Author-X-Name-Last: Xiang Title: Bayesian prognosis analysis of human papillomavirus-associated head and neck cancer using hierarchical Dirichlet process mixture models Abstract: The incidence and prognosis of Head and Neck Cancer (HNC) depend heavily on patients’ Human PapillomaVirus (HPV) status. Prognosis analysis of HPV-associated HNC is of clinical importance because in-depth understanding of the survival distribution is valuable for designing more informed treatment strategies. In this article, we develop a novel Hierarchical Dirichlet Process Weibull Mixture Model (HDP-WMM) to study the prognosis of HNC patients given their HPV status. The HDP-WMM is capable of simultaneously characterizing the survival distributions of grouped data and capturing the dependence among different groups. Moreover, the HDP-WMM can identify clusters of patients based on their outcomes, providing additional information for exploring patient subtypes. Effective Markov chain Monte Carlo sampling algorithms are designed for model inference and function estimation. The clustering structure is identified by summarizing the posterior samples of the data random partition using the Bayesian cluster analysis tool. A simulation study is designed to validate the performance of the proposed inference methods. The practical utility of the proposed HDP-WMM is demonstrated by a case study on prognosis analysis of HPV-associated HNC. Our results show that the Bayesian HDP-WMM achieves satisfactory performance on estimating the survival functions and clustering patients based on their outcomes. Journal: IISE Transactions Pages: 855-869 Issue: 8 Volume: 56 Year: 2024 Month: 8 X-DOI: 10.1080/24725854.2023.2247047 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2247047 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers.
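The survival distributions that the HDP-WMM record above characterizes are mixtures of Weibull components. The short sketch below only evaluates the survival function of such a mixture, S(t) = sum_k pi_k * exp(-(t/scale_k)^shape_k); the weights and parameters are made up and do not come from the case study.

```python
import numpy as np

# Survival function of a two-component Weibull mixture (illustrative parameters).
weights = np.array([0.6, 0.4])          # e.g., two latent patient clusters (assumed)
shapes = np.array([1.5, 0.8])
scales = np.array([40.0, 15.0])         # time unit: months (assumed)

def survival(t):
    """Mixture survival S(t) = sum_k weights[k] * exp(-(t/scales[k])**shapes[k])."""
    t = np.atleast_1d(t)[:, None]
    return (weights * np.exp(-((t / scales) ** shapes))).sum(axis=1)

print("S(12) =", survival(12.0).round(3))   # 1-year survival
print("S(60) =", survival(60.0).round(3))   # 5-year survival
```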
Handle: RePEc:taf:uiiexx:v:56:y:2024:i:8:p:855-869 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2249050_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Lisha Song Author-X-Name-First: Lisha Author-X-Name-Last: Song Author-Name: Shuguang He Author-X-Name-First: Shuguang Author-X-Name-Last: He Author-Name: Zhiqiong Wang Author-X-Name-First: Zhiqiong Author-X-Name-Last: Wang Author-Name: Zhen He Author-X-Name-First: Zhen Author-X-Name-Last: He Title: Dynamic monitoring of polynomial profiles with attribute responses and between-profile correlation Abstract: Profile monitoring has been a popular statistical process control problem in recent years. In many applications, quality characteristics of interest are attribute data due to the inherent features of processes or limitations on data collection costs. The correlation among data is becoming more significant, since data collection intervals are becoming shorter in the big data era. However, research on monitoring profiles with attribute responses in the presence of Between-Profile Correlation (BPC) has received relatively scant attention. Motivated by a real example of automobile warranty claims, this article aims to monitor polynomial profiles with attribute responses and BPC. The generalized polynomial model and the learning curve model are adopted to characterize the profile relationship and the correlation between profiles, respectively. Then, an EWMA chart with dynamic control limits (dEWMA) is developed. Simulation studies show that ignoring the BPC does not affect the in-control performance of the chart with dynamic control limits, but does have devastating effects on the out-of-control performance. The proposed dEWMA chart can address the impact of correlation and provide superior monitoring performance compared with some competitors. Finally, a real example of warranty claims data is presented to illustrate the implementation of the proposed chart. Journal: IISE Transactions Pages: 870-885 Issue: 8 Volume: 56 Year: 2024 Month: 8 X-DOI: 10.1080/24725854.2023.2249050 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2249050 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:8:p:870-885 Template-Type: ReDIF-Article 1.0 # input file: UIIE_A_2255887_J.xml processed with: repec_from_jats12.xsl darts-xml-transformations-20240209T083504 git hash: db97ba8e3a Author-Name: Qiyang Ma Author-X-Name-First: Qiyang Author-X-Name-Last: Ma Author-Name: Zimo Wang Author-X-Name-First: Zimo Author-X-Name-Last: Wang Title: A recurrent gated unit-based mixture kriging machine Bayesian filtering approach for long-term prediction of dynamic intermittency Abstract: The performance of long-term prediction models is currently impeded by the mismatch between the nonstationary representations of statistical learning models and the underlying dynamics from real-world systems, which results in low long-term prediction accuracies for many real-world applications. We present a Recurrent Gated Unit-based Mixture Kriging Machine Bayesian Filtering (ReGU-MKMBF) approach for characterizing nonstationary and nonlinear behaviors of one ubiquitous real-world process—dynamic intermittency. It models the transient dynamics in the state space as recurrent transitions between localized stationary segments/attractors.
Then, a case study on predicting the onset of pathological symptoms associated with Electrocardiogram signals is presented. The results suggest that ReGU-MKMBF improves the forecasting performance by extending the prediction time horizon by an order of magnitude while maintaining high accuracy of the foreseen estimates. Implementing the presented approach can subsequently change the current scheme of online monitoring and aftermath mitigation into one of prediction and timely prevention for telecardiology. Journal: IISE Transactions Pages: 886-901 Issue: 8 Volume: 56 Year: 2024 Month: 8 X-DOI: 10.1080/24725854.2023.2255887 File-URL: http://hdl.handle.net/10.1080/24725854.2023.2255887 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:uiiexx:v:56:y:2024:i:8:p:886-901
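For readers unfamiliar with EWMA charts with time-varying limits, as used in the dEWMA record above, the standard recursion and exact variance-based limits look as follows. This generic normal-response sketch only fixes notation; it is not Song et al.'s dEWMA chart, which handles attribute responses and between-profile correlation.

```python
import numpy as np

# Generic EWMA with exact time-varying ("dynamic") control limits:
#   z_t = lam * x_t + (1 - lam) * z_{t-1}
#   Var(z_t) = sigma^2 * lam / (2 - lam) * (1 - (1 - lam)^(2t))
# Early observations get tighter limits than the steady-state ones.
lam, L, mu0, sigma = 0.2, 3.0, 0.0, 1.0
rng = np.random.default_rng(7)
x = np.concatenate([rng.normal(mu0, sigma, 30),         # in control
                    rng.normal(mu0 + 2.0, sigma, 10)])  # mean shift (assumed size)

z = mu0
for t, xt in enumerate(x, start=1):
    z = lam * xt + (1 - lam) * z
    half_width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
    if abs(z - mu0) > half_width:
        print(f"signal at t={t}: z={z:.2f}, limit=±{half_width:.2f}")
        break
```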