Template-Type: ReDIF-Article 1.0 Author-Name: Chung-Yuan Dye Author-X-Name-First: Chung-Yuan Author-X-Name-Last: Dye Author-Name: Chih-Te Yang Author-X-Name-First: Chih-Te Author-X-Name-Last: Yang Author-Name: Chi-Chuan Wu Author-X-Name-First: Chi-Chuan Author-X-Name-Last: Wu Title: Joint dynamic pricing and preservation technology investment for an integrated supply chain with reference price effects Abstract: In this paper, we propose a system of joint dynamic pricing and preservation technology investment decisions for deteriorating items in an integrated supply chain management environment involving a manufacturer and a retailer, a controllable deterioration rate, and price-dependent demand. Because the purchasing decisions of consumers usually involve psychologically encoded prices based upon past shopping experiences, the effects of initial reference prices are also incorporated into the proposed model. An optimal dynamic pricing and preservation technology investment model is then established to determine the joint strategy that maximizes the discounted total profit over an infinite time horizon from the perspectives of the retailer and the integrated supply chain. We also characterize the properties of the optimal pricing and preservation technology investment decisions, and conduct numerical studies to investigate the impact of the initial reference price and various system parameters on the optimal strategies and discounted total profit for the retailer/integrated supply chain. Finally, we offer concluding remarks and suggestions for future studies. Journal: Journal of the Operational Research Society Pages: 811-824 Issue: 6 Volume: 69 Year: 2018 Month: 6 X-DOI: 10.1057/s41274-017-0247-y File-URL: http://hdl.handle.net/10.1057/s41274-017-0247-y File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:69:y:2018:i:6:p:811-824 Template-Type: ReDIF-Article 1.0 Author-Name: Yongjun Li Author-X-Name-First: Yongjun Author-X-Name-Last: Li Author-Name: Xiao Shi Author-X-Name-First: Xiao Author-X-Name-Last: Shi Author-Name: Ali Emrouznejad Author-X-Name-First: Ali Author-X-Name-Last: Emrouznejad Author-Name: Liang Liang Author-X-Name-First: Liang Author-X-Name-Last: Liang Title: Environmental performance evaluation of Chinese industrial systems: a network SBM approach Abstract: In recent years, environmental problems caused by industries in China have drawn increasing attention from both academics and policy makers. This paper assesses the environmental efficiency of Chinese regional industrial systems in order to offer recommendations to policy makers. First, we divided each Chinese regional industrial system into a production process and a pollutant treatment process. Then, we built a scientific input–intermediate–output index system by introducing a new network slacks-based measure (NSBM) model. This study is the first to combine the NSBM with DEA window analysis to give a dynamic evaluation of environmental efficiency. This enables us to assess the environmental efficiency of Chinese regional industrial systems considering their internal structure as well as China’s policies concerning resource utilization and environmental protection. Hence, the overall efficiency of each regional industrial system is decomposed into production efficiency and pollutant treatment efficiency. Our empirical results suggest: (1) 66.7% of Chinese regional industrial systems are overall inefficient; 63.3% and 66.7% of Chinese regional industrial systems are inefficient in the production process and the pollutant treatment process, respectively. (2) The efficiency scores for the overall system and both processes are all larger in the eastern area of China than in the central and western areas. 
(3) Correlation analysis indicates that SO2 generation intensity (SGI), solid waste generation intensity, COD discharge intensity, and SO2 discharge intensity have significantly negative impacts on the overall efficiency. (4) The overall inefficiency is mainly due to inefficiency of the pollutant treatment process for the majority of regional industrial systems. (5) In general, the overall efficiency was trending up from 2004 to 2010, indicating that the substantial efforts China has devoted to protecting the environment have yielded benefits. Journal: Journal of the Operational Research Society Pages: 825-839 Issue: 6 Volume: 69 Year: 2018 Month: 6 X-DOI: 10.1057/s41274-017-0257-9 File-URL: http://hdl.handle.net/10.1057/s41274-017-0257-9 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:6:p:825-839 Template-Type: ReDIF-Article 1.0 Author-Name: Yuanchang Zhu Author-X-Name-First: Yuanchang Author-X-Name-Last: Zhu Author-Name: Yongjun Li Author-X-Name-First: Yongjun Author-X-Name-Last: Li Author-Name: Liang Liang Author-X-Name-First: Liang Author-X-Name-Last: Liang Title: A variation of two-stage SBM with leader-follower structure: an application to Chinese commercial banks Abstract: The two-stage slack-based measure (SBM) model has many applications in the real world. Due to the limitations of the SBM model on which it is based, the two-stage SBM model unfortunately gives unrealistically low efficiencies and rather far projections (Tone in Eur J Oper Res 197(1):243–252, 2010) for inefficient decision-making units. Based on the novel idea in Tone (2010), this paper proposes a variation of the two-stage SBM model by incorporating a leader–follower structure and applies the proposed approach to Chinese commercial banks. The results show that our proposed approach can increase the efficiencies of inefficient banks and halve the projection distance of some inefficient banks. 
Journal: Journal of the Operational Research Society Pages: 840-848 Issue: 6 Volume: 69 Year: 2018 Month: 6 X-DOI: 10.1057/s41274-017-0262-z File-URL: http://hdl.handle.net/10.1057/s41274-017-0262-z File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:6:p:840-848 Template-Type: ReDIF-Article 1.0 Author-Name: Ruizhi Li Author-X-Name-First: Ruizhi Author-X-Name-Last: Li Author-Name: Shuli Hu Author-X-Name-First: Shuli Author-X-Name-Last: Hu Author-Name: Peng Zhao Author-X-Name-First: Peng Author-X-Name-Last: Zhao Author-Name: Yupeng Zhou Author-X-Name-First: Yupeng Author-X-Name-Last: Zhou Author-Name: Minghao Yin Author-X-Name-First: Minghao Author-X-Name-Last: Yin Title: A novel local search algorithm for the minimum capacitated dominating set Abstract: The minimum capacitated dominating set problem, an extension of the classic minimum dominating set problem, is an important NP-hard combinatorial optimization problem with a wide range of applications. The aim of this paper is to design a novel local search algorithm to solve this problem. First, a vertex penalizing strategy is introduced to define the scoring method so that our algorithm can increase the diversity of the solutions. Accordingly, a two-mode dominated vertex selecting strategy is introduced to choose the vertices dominated by the added vertex, to achieve more promising solutions. After that, an intensification scheme is proposed to make full use of the capacity of each vertex and to improve the solutions effectively. Based on these strategies, a novel local search framework, which we call local search based on vertex penalizing and two-mode dominated vertex selecting (LS_PD), is presented. LS_PD is evaluated against several state-of-the-art algorithms on a large collection of benchmark instances. 
The experimental results show that on most benchmark instances, LS_PD performs better than its competitors in terms of both solution quality and computational efficiency. Journal: Journal of the Operational Research Society Pages: 849-863 Issue: 6 Volume: 69 Year: 2018 Month: 6 X-DOI: 10.1057/s41274-017-0268-6 File-URL: http://hdl.handle.net/10.1057/s41274-017-0268-6 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:6:p:849-863 Template-Type: ReDIF-Article 1.0 Author-Name: Jian Zhang Author-X-Name-First: Jian Author-X-Name-Last: Zhang Author-Name: Roussos G. Dimitrakopoulos Author-X-Name-First: Roussos G. Author-X-Name-Last: Dimitrakopoulos Title: Stochastic optimization for a mineral value chain with nonlinear recovery and forward contracts* Abstract: When a new forward contract is signed between a mining company and a customer to hedge the risk incurred by uncertainty in the commodity market, the mining company needs to re-optimize the plans of the entire value chain to account for the change in risk level. A two-stage stochastic mixed integer nonlinear program is formulated to optimize a mineral value chain in consideration of both geological uncertainty and market uncertainty. A heuristic is developed to deal with the complexity incurred by the throughput- and head-grade-dependent recovery rate in the processing plant. Through a series of numerical tests, we show that the proposed heuristic is effective and efficient. The test results also show that ignoring the dynamic recovery rate will result in losses and severe misestimation of the mineral value chain’s profitability. Based on the proposed model and heuristic, an application in evaluating and designing a forward contract is demonstrated through a hypothetical case study. 
Journal: Journal of the Operational Research Society Pages: 864-875 Issue: 6 Volume: 69 Year: 2018 Month: 6 X-DOI: 10.1057/s41274-017-0269-5 File-URL: http://hdl.handle.net/10.1057/s41274-017-0269-5 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:6:p:864-875 Template-Type: ReDIF-Article 1.0 Author-Name: Mahsa Noori-daryan Author-X-Name-First: Mahsa Author-X-Name-Last: Noori-daryan Author-Name: Ata Allah Taleizadeh Author-X-Name-First: Ata Allah Author-X-Name-Last: Taleizadeh Author-Name: Kannan Govindan Author-X-Name-First: Kannan Author-X-Name-Last: Govindan Title: Joint replenishment and pricing decisions with different freight modes considerations for a supply chain under a composite incentive contract Abstract: Companies often offer incentive contracts to persuade buyers to order more, thereby increasing their sales volume and decreasing their setup and freight costs. More sales volume means an increase in profits, market power, and market share. This investigation analyzes the optimal pricing and replenishment decisions of a single-manufacturer/multiple-retailer supply chain where a composite contract combining quantity discounts, freight discounts, and free shipping is incorporated into the model. Here, the transportation modes for raw materials and finished products are subject to a limited capacity. The manufacturer, who faces geographically dispersed retailers, ships the ordered shipments under three different modes classified into three scenarios. In the first scenario, the shipments are shipped by identical transport modes to the retailers. In the second, they are delivered by different transport modes according to their capacities and distance from the manufacturing site. In the third scenario, products are sent to a central warehouse for fast shipment to the retailers. Demand depends on selling price and shortage is not permitted. 
A leader–follower game is considered between the members of the chain, in which the manufacturer is the follower and the retailers are the leaders. This research aims to optimize the chain’s total profit with respect to the selling prices and order quantities of the manufacturer and the retailers under different transport methods and a composite incentive contract. To clarify the applicability of the model, some numerical examples are presented and the effects of the optimal decision policies of the chain’s partners are examined. Journal: Journal of the Operational Research Society Pages: 876-894 Issue: 6 Volume: 69 Year: 2018 Month: 6 X-DOI: 10.1057/s41274-017-0270-z File-URL: http://hdl.handle.net/10.1057/s41274-017-0270-z File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:6:p:876-894 Template-Type: ReDIF-Article 1.0 Author-Name: Mourad Boudia Author-X-Name-First: Mourad Author-X-Name-Last: Boudia Author-Name: Thierry Delahaye Author-X-Name-First: Thierry Author-X-Name-Last: Delahaye Author-Name: Semi Gabteni Author-X-Name-First: Semi Author-X-Name-Last: Gabteni Author-Name: Rodrigo Acuna-Agost Author-X-Name-First: Rodrigo Author-X-Name-Last: Acuna-Agost Title: Novel approach to deal with demand volatility on fleet assignment models Abstract: One of the important applications of operations research in the airline industry is fleet assignment. The problem is posed as an assignment of aircraft capacity to flight legs at the planning level, in general one year before the departure date. The fleet assignment problem comes after schedule design and without any influence on it. Even if the schedule is defined by a set of flight legs, the source of revenues for airlines is itineraries, many of which have more than one leg. Existing research is based on the itinerary-based fleet assignment model (IFAM) that captures the network effects. 
Nevertheless, the difficulty of forecasting itinerary demand prevents the widespread implementation of the IFAM in the airline industry and heavily impacts its performance. This paper proposes a new model based on itinerary grouping. Our itinerary group fleet assignment model (IGFAM) deals with the difficulties caused by itinerary forecasts by replacing them with aggregated demand forecasts. We conduct comparisons between models, considering their respective profit based on real-life demand using a simulation framework. Though the comparison is conservative, the new model still delivers an advantage in almost all circumstances. The greatest benefits are observed at the highest demand volatility. Journal: Journal of the Operational Research Society Pages: 895-904 Issue: 6 Volume: 69 Year: 2018 Month: 6 X-DOI: 10.1057/s41274-017-0273-9 File-URL: http://hdl.handle.net/10.1057/s41274-017-0273-9 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:6:p:895-904 Template-Type: ReDIF-Article 1.0 Author-Name: Margaret F. Shipley Author-X-Name-First: Margaret F. Author-X-Name-Last: Shipley Author-Name: Steven P. Coy Author-X-Name-First: Steven P. Author-X-Name-Last: Coy Author-Name: J. Brooke Shipley Author-X-Name-First: J. Author-X-Name-Last: Brooke Shipley Title: Utilizing statistical significance in fuzzy interval-valued evidence sets for assessing artificial reef structure impact Abstract: Artificial reefs are often formed by reef balls, oil rigs (either toppled or partially removed), and sunken vessels. Data gathered from observations made in the Gulf of Mexico off the Texas coast were statistically analyzed to investigate the impact of the type of structure on fish presence, abundance, and species observed. Environmental variables were controlled, and crosstabulations between various structure material-type categories were conducted for the most frequently observed species. 
Based on Chi-square tests of significance, a fuzzy interval-valued evidence model was developed such that different types of artificial reef structures were compared to determine to what extent each structure was likely to impact fish presence and abundance of species. Results indicate that all materials have been at least somewhat impactful, but the greatest impact on the presence of individual species has come from the oil and gas jackets. A fuzzy goal-driven extension found that the potential costs associated with each material may affect reef material selection decision making. Journal: Journal of the Operational Research Society Pages: 905-918 Issue: 6 Volume: 69 Year: 2018 Month: 6 X-DOI: 10.1057/s41274-017-0277-5 File-URL: http://hdl.handle.net/10.1057/s41274-017-0277-5 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:6:p:905-918 Template-Type: ReDIF-Article 1.0 Author-Name: Patrick Chisan Hew Author-X-Name-First: Patrick Chisan Author-X-Name-Last: Hew Title: Cueing a submarine from standoff to ambush a target in an anti-submarine environment: How often can the cue be a false alarm? Abstract: Ambushing an adversary’s vessels as they transit a chokepoint is a key mission for submarines. While the submarine could patrol inside the chokepoint, anti-submarine capabilities are making such patrols too dangerous. Emerging technologies could allow the submarine to stand off at a safe location and move forward to the chokepoint when cued. There is a need for analysis that can establish the concept’s viability, and whether the technologies are mature enough to be developed into working systems. The important factor is the rate at which the submarine is cued to a target that is not actually present—a false alarm—for as the submarine is clearing a false alarm, it is exposed to counter-acquisition. 
We establish the false alarm performance that is acceptable for a given probability of the submarine being counter-acquired. Journal: Journal of the Operational Research Society Pages: 919-927 Issue: 6 Volume: 69 Year: 2018 Month: 6 X-DOI: 10.1057/s41274-017-0274-8 File-URL: http://hdl.handle.net/10.1057/s41274-017-0274-8 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:6:p:919-927 Template-Type: ReDIF-Article 1.0 Author-Name: Francisco Guijarro Author-X-Name-First: Francisco Author-X-Name-Last: Guijarro Title: A similarity measure for the cardinality constrained frontier in the mean–variance optimization model Abstract: This paper proposes a new measure to find the cardinality constrained frontier in the mean–variance portfolio optimization problem. In previous research, the assets belonging to the cardinality constrained portfolio change according to the desired level of expected return, so that the cardinality constraint can actually be violated if the fund manager wants to satisfy clients with different return requirements. We introduce a perceptual approach to the mean–variance cardinality constrained portfolio optimization problem by considering a novel similarity measure, which compares the cardinality constrained frontier with the unconstrained mean–variance frontier. We assume that the closer the cardinality constrained frontier is to the mean–variance frontier, the more appealing it is for the decision maker. This makes the assets included in the portfolio invariant to any specific level of return, by focusing not on the optimal portfolio but on the optimal frontier. Journal: Journal of the Operational Research Society Pages: 928-945 Issue: 6 Volume: 69 Year: 2018 Month: 6 X-DOI: 10.1057/s41274-017-0276-6 File-URL: http://hdl.handle.net/10.1057/s41274-017-0276-6 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:69:y:2018:i:6:p:928-945 Template-Type: ReDIF-Article 1.0 Author-Name: Rodrigo Cesar da Silva Author-X-Name-First: Rodrigo Cesar Author-X-Name-Last: da Silva Author-Name: Armando Zeferino Milioni Author-X-Name-First: Armando Zeferino Author-X-Name-Last: Milioni Author-Name: Joyce Evania Teixeira Author-X-Name-First: Joyce Evania Author-X-Name-Last: Teixeira Title: The general hyperbolic frontier model: establishing fair output levels via parametric DEA Abstract: This paper studies the problem of fairly allocating shares of a new and fixed output among a set of decision-making units under centralized management. To this end, we introduce a new parametric data envelopment analysis model that generalizes the previous parametric model for output allocation. In this model, it is possible to introduce value judgments and to employ it under conditions of constant, decreasing or increasing returns to scale. We present numerical results to demonstrate the performance of our model. Journal: Journal of the Operational Research Society Pages: 946-958 Issue: 6 Volume: 69 Year: 2018 Month: 6 X-DOI: 10.1057/s41274-017-0278-4 File-URL: http://hdl.handle.net/10.1057/s41274-017-0278-4 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:6:p:946-958 Template-Type: ReDIF-Article 1.0 Author-Name: Alex Leggate Author-X-Name-First: Alex Author-X-Name-Last: Leggate Author-Name: Seda Sucu Author-X-Name-First: Seda Author-X-Name-Last: Sucu Author-Name: Kerem Akartunalı Author-X-Name-First: Kerem Author-X-Name-Last: Akartunalı Author-Name: Robert van der Meer Author-X-Name-First: Robert Author-X-Name-Last: van der Meer Title: Modelling crew scheduling in offshore supply vessels Abstract: Crew scheduling problems have been widely studied in various transportation sectors, such as airlines, railways, and urban buses. 
However, to date, it appears that the application of these problems in sea transport has been very limited. In this paper, we explore key differences between various transport settings, and propose mixed-integer programming formulations for both the crew scheduling and re-scheduling problems for a company operating a fleet of offshore supply vessels (OSVs) on a global scale. Computational results on an extensive set of problems show that our proposed models are practically applicable for generating real-time solutions. We also present a thorough statistical analysis of key problem parameters, and share insights regarding their impacts. Journal: Journal of the Operational Research Society Pages: 959-970 Issue: 6 Volume: 69 Year: 2018 Month: 6 X-DOI: 10.1080/01605682.2017.1390531 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1390531 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:6:p:959-970 Template-Type: ReDIF-Article 1.0 Author-Name: Nailson dos Santos Cunha Author-X-Name-First: Nailson dos Santos Author-X-Name-Last: Cunha Author-Name: Anand Subramanian Author-X-Name-First: Anand Author-X-Name-Last: Subramanian Author-Name: Dorien Herremans Author-X-Name-First: Dorien Author-X-Name-Last: Herremans Title: Generating guitar solos by integer programming Abstract: In this paper, we present a framework for computer-aided composition (CAC) that uses exact combinatorial optimisation methods to generate guitar solos from a newly proposed data-set of licks over an accompaniment based on the 12-bar blues chord progression. An integer programming formulation, which can be solved to optimality by a branch-and-cut algorithm, was developed for this problem, whose objective is to determine an optimal sequence of a set of licks given a matrix of transition costs derived from user preferences. The generated solos are displayed in tablature format. 
Outputs of the system were evaluated in an empirical experiment with 173 participants. The results show that the solos whose licks were optimally sequenced were enjoyed significantly more than those randomly sequenced. We project that the developed framework could be of potential use to guitarists looking for original material; as an educational tool for future composers; and to support composers in discovering unique and novel compositional ideas. Journal: Journal of the Operational Research Society Pages: 971-985 Issue: 6 Volume: 69 Year: 2018 Month: 6 X-DOI: 10.1080/01605682.2017.1390528 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1390528 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:6:p:971-985 Template-Type: ReDIF-Article 1.0 Author-Name: Michele Battistutta Author-X-Name-First: Michele Author-X-Name-Last: Battistutta Author-Name: Sara Ceschia Author-X-Name-First: Sara Author-X-Name-Last: Ceschia Author-Name: Fabio De Cesco Author-X-Name-First: Fabio Author-X-Name-Last: De Cesco Author-Name: Luca Di Gaspero Author-X-Name-First: Luca Author-X-Name-Last: Di Gaspero Author-Name: Andrea Schaerf Author-X-Name-First: Andrea Author-X-Name-Last: Schaerf Title: Modelling and solving the thesis defense timetabling problem Abstract: The thesis defense timetabling problem consists of composing a suitable committee for a set of defense sessions and assigning each graduation candidate to one of the sessions. In this work, we define the problem formulation that applies to some Italian universities and we provide three alternative solution methods, based on Integer Programming, Constraint Programming and Local Search, respectively. We also develop a principled instance generator, in order to expand the set of available instances. We perform an experimental analysis and we compare our solvers among themselves, using a testbed composed of both real-world and artificial instances. 
Even though there is no dominant method, the outcome is that Integer Programming gives the best average results, with Local Search being second and Constraint Programming last on our testbed. All data are available on the web for verification and future comparisons. Journal: Journal of the Operational Research Society Pages: 1039-1050 Issue: 7 Volume: 70 Year: 2019 Month: 7 X-DOI: 10.1080/01605682.2018.1495870 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1495870 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:7:p:1039-1050 Template-Type: ReDIF-Article 1.0 Author-Name: Steven P. Dillenburger Author-X-Name-First: Steven P. Author-X-Name-Last: Dillenburger Author-Name: Jeremy D. Jordan Author-X-Name-First: Jeremy D. Author-X-Name-Last: Jordan Author-Name: Jeffery K. Cochran Author-X-Name-First: Jeffery K. Author-X-Name-Last: Cochran Title: Pareto-optimality for lethality and collateral risk in the airstrike multi-objective problem Abstract: The recent surge in attacks on terrorist organizations within heavily populated areas has brought precision airstrikes to the forefront of discussion topics among partnering nations. In this paper, we present a quick algorithm for accurately creating the Pareto-optimal frontier in the multi-objective airstrike problem. This algorithm, which leverages specific attributes of lethality and collateral risk, is shown to routinely outperform differential evolution and enumeration algorithms. Once Pareto-optimal solutions are found, they can quickly be converted to solutions for the associated goal-programming and weighted sum scalarization problems. The choice of damage function greatly affects the expected lethality and collateral risk in an airstrike, underscoring the need for accurate estimation of weapons effects. 
Notably, the cookie-cutter damage function underestimates collateral risk while overstating lethality in comparison to other damage functions. In addition, we demonstrate that differing guidelines or damage functions significantly alter the optimal aim point location in this targeting problem. The methodology presented greatly improves upon existing work in this field, thus ensuring effective precision airstrikes while maximizing civilian safety. Journal: Journal of the Operational Research Society Pages: 1051-1064 Issue: 7 Volume: 70 Year: 2019 Month: 7 X-DOI: 10.1080/01605682.2018.1487818 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1487818 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:7:p:1051-1064 Template-Type: ReDIF-Article 1.0 Author-Name: Chia-Yen Lee Author-X-Name-First: Chia-Yen Author-X-Name-Last: Lee Title: Proactive marginal productivity analysis for production shutdown decision by DEA Abstract: The decision to shut down a business usually arises when the product’s marginal revenue falls below the average variable cost, since the firm cannot offset the fixed cost. Today, however, this traditional business shutdown criterion (BSC) as defined by microeconomic theory may no longer apply to some types of industry; one example is the high-tech industry. This study proposes a BSC scheme that uses an iterative procedure embedded with three phases: level analysis, margin analysis, and budget and action, to solve the shutdown decision problem via proactive marginal productivity. We validate the proposed scheme with a case study of light-emitting diode manufacturers in Taiwan, the majority of which continued to operate and expand capacity despite experiencing a profit drop due to global competition and the higher sunk cost of capital investment. 
Based on the results, we conclude that the proposed BSC scheme gives decision-makers an improved, comprehensive prediction via margin analysis. Journal: Journal of the Operational Research Society Pages: 1065-1078 Issue: 7 Volume: 70 Year: 2019 Month: 7 X-DOI: 10.1080/01605682.2018.1487820 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1487820 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:7:p:1065-1078 Template-Type: ReDIF-Article 1.0 Author-Name: Ali Emrouznejad Author-X-Name-First: Ali Author-X-Name-Last: Emrouznejad Author-Name: Guo-liang Yang Author-X-Name-First: Guo-liang Author-X-Name-Last: Yang Author-Name: Gholam R. Amin Author-X-Name-First: Gholam R. Author-X-Name-Last: Amin Title: A novel inverse DEA model with application to allocate the CO2 emissions quota to different regions in Chinese manufacturing industries Abstract: This paper aims to address the problem of allocating the CO2 emissions quota, set by a government goal for Chinese manufacturing industries, to different Chinese regions. The CO2 emission reduction is conducted in three stages. The first stage obtains the total amount of CO2 emission reduction from the Chinese government goal as the total CO2 emission quota to be reduced. The second stage allocates the reduction quota to different two-digit-level manufacturing industries in China. The third stage further allocates the reduction quota for each industry to different provinces. A new inverse data envelopment analysis (InvDEA) model is developed to allocate the CO2 emission quota under several assumptions. Finally, we present empirical results based on real data from Chinese manufacturing industries. 
Journal: Journal of the Operational Research Society Pages: 1079-1090 Issue: 7 Volume: 70 Year: 2019 Month: 7 X-DOI: 10.1080/01605682.2018.1489344 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1489344 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:7:p:1079-1090 Template-Type: ReDIF-Article 1.0 Author-Name: Xinfang Wang Author-X-Name-First: Xinfang Author-X-Name-Last: Wang Title: Health service design with conjoint optimization Abstract: Health service providers have been under increasing pressure to consider user preferences in designing their programmes. Some organisations have met this challenge using stated preference methods. The two key fairness principles used in designing health services are Utilitarian and Rawlsian, and we propose a bi-objective integer programme to analyse the trade-off between them. Specifically, we model two types of information flow: bottom-up and top-down. The former is an analyst-driven process that fully examines the trade-off between a loss in a group’s average utility and a specific improvement in utility for the least well-off individuals, and vice versa. The latter represents a situation in which preferences are stated by decision makers in the hope of finding a best-compromise solution. Tested in a case study, our model yielded significantly more balanced designs than the method in current use. Results reveal that in a bottom-up process, a large gain in minimum utility can be achieved with only a minimal loss in average utility, while a top-down approach based on decision makers’ preferences may lead to a solution that is inferior on both objectives. A simulation study further reveals that the improvement in minimum utility is even greater when user preferences are more heterogeneous. 
Journal: Journal of the Operational Research Society Pages: 1091-1101 Issue: 7 Volume: 70 Year: 2019 Month: 7 X-DOI: 10.1080/01605682.2018.1489341 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1489341 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:7:p:1091-1101 Template-Type: ReDIF-Article 1.0 Author-Name: Nathan C. Proudlove Author-X-Name-First: Nathan C. Author-X-Name-Last: Proudlove Author-Name: Mhorag Goff Author-X-Name-First: Mhorag Author-X-Name-Last: Goff Author-Name: Kieran Walshe Author-X-Name-First: Kieran Author-X-Name-Last: Walshe Author-Name: Ruth Boaden Author-X-Name-First: Ruth Author-X-Name-Last: Boaden Title: The signal in the noise: Robust detection of performance “outliers” in health services Abstract: To make the increasing amounts of data about the performance of public sector organisations digestible by decision makers, composite indicators are commonly constructed, from which a natural step is rankings and league tables. However, how much credence should be given to the results of such approaches? Studying English NHS maternity services (N = 130 hospital trusts), we assembled and used a set of 38 indicators grouped into four baskets of aspects of service delivery. In the absence of opinion on how the indicators should be aggregated, we focus on the uncertainty this brings to the composite results. We use a large two-stage Monte Carlo simulation to generate possible aggregation weights and examine the discrimination in the composite results. We find that positive and negative “outliers” can be identified robustly, which is of particular value to decision makers investigating services for learning or intervention; however, results in between should be treated with great caution. 
Journal: Journal of the Operational Research Society Pages: 1102-1114 Issue: 7 Volume: 70 Year: 2019 Month: 7 X-DOI: 10.1080/01605682.2018.1487816 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1487816 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:7:p:1102-1114 Template-Type: ReDIF-Article 1.0 Author-Name: G. Farina Author-X-Name-First: G. Author-X-Name-Last: Farina Author-Name: R. Giacometti Author-X-Name-First: R. Author-X-Name-Last: Giacometti Author-Name: M. E. De Giuli Author-X-Name-First: M. E. Author-X-Name-Last: De Giuli Title: Systemic risk attribution in the EU Abstract: Systemic default risk is due to multiple private and/or public entities’ simultaneous default. This risk has caused great concern in the recent past and its assessment is not a trivial subject. We have provided a model for systemic risk attribution in order to disentangle its different components. We have applied it to a selection of EU countries consistent with previous research. We have extracted a common EU factor and analysed the residual components related to an individual country’s banking system, to the interaction between banking system and government, and to the country’s and banking idiosyncratic components as well. For this purpose, we have introduced a multivariate distribution for all the countries and the relative banks, also providing an integrated analysis. We have applied the multivariate Marshall–Olkin distribution, where the marginal probability of default for any country or bank depends on its default intensity. Risk attribution has been performed using weekly market data referring to sovereign and bank CDSs over the period 2009–2015. Our results have highlighted relevant differences between Northern and Southern EU countries, as far as risk decomposition is concerned. In Southern countries, risk is mainly concentrated in a country-banking system shock at each level. 
In Northern countries, the prevailing components of risk are the systemic EU shock at country level, and the idiosyncratic component at banking system level and individual bank level. Journal: Journal of the Operational Research Society Pages: 1115-1128 Issue: 7 Volume: 70 Year: 2019 Month: 7 X-DOI: 10.1080/01605682.2018.1487823 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1487823 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:7:p:1115-1128 Template-Type: ReDIF-Article 1.0 Author-Name: Wenbin Hu Author-X-Name-First: Wenbin Author-X-Name-Last: Hu Author-Name: Junzi Zhou Author-X-Name-First: Junzi Author-X-Name-Last: Zhou Title: Joint modeling: an application in behavioural scoring Abstract: Survival analysis has become an appealing approach in credit scoring. It is able to readily incorporate time-dependent covariates and dynamically predict the survival probability. However, the difference between endogenous and exogenous covariates is ignored in the existing extended Cox models in behavioural scoring. In this paper, we apply a joint modelling framework, which can be seen as an extension of survival analysis, to overcome this deficiency of survival models. We carefully design experiments on two datasets and verify the superiority of joint modelling over the extended Cox model through cross-validation on dynamic discrimination and calibration performance measures. The experimental results indicate that the joint model performance is better, especially in the calibration measure. The key reason for the better performance is discussed and illustrated. Journal: Journal of the Operational Research Society Pages: 1129-1139 Issue: 7 Volume: 70 Year: 2019 Month: 7 X-DOI: 10.1080/01605682.2018.1487821 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1487821 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:70:y:2019:i:7:p:1129-1139 Template-Type: ReDIF-Article 1.0 Author-Name: Tommi Pajala Author-X-Name-First: Tommi Author-X-Name-Last: Pajala Author-Name: Pekka Korhonen Author-X-Name-First: Pekka Author-X-Name-Last: Korhonen Author-Name: Jyrki Wallenius Author-X-Name-First: Jyrki Author-X-Name-Last: Wallenius Title: Judgments of importance revisited: What do they mean? Abstract: In a multiple criteria decision-making problem, decision-makers often make judgments of importance, for example, that “rent is more important than apartment size” when choosing apartments. Even though linear models are heavily used in choice prediction, it has remained unclear whether criterion weights are connected to judgments of importance. A surprisingly common assumption is that a more important criterion tends to have a larger weight, as if weights and importance were equal, or at least heavily correlated. In the experiment, subjects provided pairwise judgments of importance for four criteria and made pairwise choices with apartments defined by these criteria. According to our results, Goldstein’s (1990) idea of connecting judgments of importance to impact is more meaningful than connecting them to weights. Defining impact as the product of AHP weights and the coefficient of variation yields the best correlation with the original judgments of importance. Journal: Journal of the Operational Research Society Pages: 1140-1148 Issue: 7 Volume: 70 Year: 2019 Month: 7 X-DOI: 10.1080/01605682.2018.1489346 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1489346 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:70:y:2019:i:7:p:1140-1148 Template-Type: ReDIF-Article 1.0 Author-Name: Hongjun Lv Author-X-Name-First: Hongjun Author-X-Name-Last: Lv Author-Name: Yinghong Wan Author-X-Name-First: Yinghong Author-X-Name-Last: Wan Title: Contracting for online personalisation services: An economic analysis Abstract: Our study contributes to the literature as follows. Firstly, we are the first to develop and analyse a new contracting problem in the context of personalised services in which the vendor strategically offers two complementary personalisation services to acquire customer preference information. Secondly, given heterogeneity in the willingness to use and expected utility for complementary personalisation services, we uniquely incorporate market realities about the differentiation of customer segmentation. We investigate how the boundedly rational customer segment affects the vendor’s optimal personalisation service strategies and profits. Thirdly, our study extends the privacy calculus theory in information systems through an economic model to reveal customer privacy perceptions and online behaviours. In brief, our study offers guidelines for online vendors that address online personalisation and sheds light on how to effectively carry out information acquisition strategies in the presence of boundedly rational customers. Market demand uncertainty and customer preference complementarity make targeted advertising and pricing decisions for online vendors particularly challenging. Advanced Internet technologies have provided vendors with the capacity to acquire customer complementary preference information. These technologies depend on the level of personalisation services and segmentation of customers. These problems are exacerbated by the fact that a vendor cannot accurately predict customer preference and thus cannot charge for online personalisation services. 
Therefore, designing service contracts to acquire customer preference information is vital for the vendor. This study derives the optimal contracting structures (basic service, zero-utility complementary, and positive-utility complementary contracts) for vendors under information asymmetry, considering service complementarity and customer segmentation. Our study can be used in many online shopping and online interactive systems that give the vendor a noteworthy information advantage. Journal: Journal of the Operational Research Society Pages: 1149-1163 Issue: 7 Volume: 70 Year: 2019 Month: 7 X-DOI: 10.1080/01605682.2018.1487817 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1487817 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:7:p:1149-1163 Template-Type: ReDIF-Article 1.0 Author-Name: Zhiming Zhong Author-X-Name-First: Zhiming Author-X-Name-Last: Zhong Author-Name: Xingmei Li Author-X-Name-First: Xingmei Author-X-Name-Last: Li Author-Name: Xiaoyan Liu Author-X-Name-First: Xiaoyan Author-X-Name-Last: Liu Author-Name: William Lau Author-X-Name-First: William Author-X-Name-Last: Lau Title: Opportunity cost management in project portfolio selection with divisibility Abstract: This paper addresses project portfolio selection with divisibility, where cash flow and opportunity cost are simultaneously considered for the first time. If a project is selected, fixed assets required to execute the project will be occupied during its lifetime. The opportunity cost should be considered due to the commitment of fixed assets. An integrated profit analysis method is proposed to simultaneously consider cash flow and opportunity cost in divisible project portfolio selection. To derive the combination of projects as well as the schedule of selected projects, a mixed integer linear program is provided. 
A real-world case is used to illustrate the capability and characteristics of our proposed models. Journal: Journal of the Operational Research Society Pages: 1164-1178 Issue: 7 Volume: 70 Year: 2019 Month: 7 X-DOI: 10.1080/01605682.2018.1506546 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1506546 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:7:p:1164-1178 Template-Type: ReDIF-Article 1.0 Author-Name: Yeneneh Tamirat Author-X-Name-First: Yeneneh Author-X-Name-Last: Tamirat Author-Name: Fu-Kwun Wang Author-X-Name-First: Fu-Kwun Author-X-Name-Last: Wang Title: Acceptance sampling plans based on EWMA yield index for the first order autoregressive process Abstract: Acceptance sampling plans play an important role in quality control. Four new sampling plans based on the yield index are proposed to deal with lot sentencing for a first-order autoregressive process. The first plan is based on an exponentially weighted moving average (EWMA) model. The other three plans are based on resubmitted sampling, repetitive group sampling (RGS), and multiple dependent state repetitive (MDSR) sampling, respectively. The EWMA and MDSR models use the quality information of the current lot and previous lots. The resubmitted and repetitive group sampling plans allow resampling under certain conditions. We found that the sample size required for lot sentencing is the most economical for the EWMA model. Moreover, the RGS and MDSR plans are much more efficient than the traditional single sampling plan. The resubmitted scheme has the least efficiency. Considering the acceptable quality level at the producer’s risk and the lot tolerance percent defective at the consumer’s risk, nonlinear optimisation models are proposed to determine the plan parameters. Two examples are provided to show the applicability of the proposed sampling plans. 
Journal: Journal of the Operational Research Society Pages: 1179-1192 Issue: 7 Volume: 70 Year: 2019 Month: 7 X-DOI: 10.1080/01605682.2018.1487819 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1487819 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:7:p:1179-1192 Template-Type: ReDIF-Article 1.0 Author-Name: Tuğçe Yücel Author-X-Name-First: Tuğçe Author-X-Name-Last: Yücel Author-Name: Ayşegül Altın-Kayhan Author-X-Name-First: Ayşegül Author-X-Name-Last: Altın-Kayhan Title: A copy-at-neighbouring-node retransmission strategy for improved wireless sensor network lifetime and reliability Abstract: A Wireless Sensor Network (WSN) is composed of tiny autonomous sensors with limited battery power. WSNs are employed to observe specific fields of interest. In this paper, we study the energy-efficient and reliable network design problem using a mathematical programming framework. Energy efficiency is vital since battery replenishment is not always viable and the network lifetime is measured as the time until the first sensor exhausts its energy. Moreover, reliability is important since sensors are mostly deployed unattended, and transmitting data fully and correctly is clearly critical. We develop a retransmission strategy originating from the Pareto principle and the scale-free property of complex networks. In our modified hop-by-hop reliability definition, sensors forwarding data directly to the central node must perform retransmission. The central node is the sensor with the highest data transmission load, and our motivation is to secure the transmission of data passing through the central node against malicious attacks or technical failures. To this end, we present a mixed 0–1 integer programming model and an efficient heuristic. 
Our test results show an improvement of 80.5% in network lifetime and of 86.3% in redundant data overhead when compared with the classical conservative data redundancy approaches. We provide extensive test results, which reveal the contribution of our strategy in several other strategic design dimensions. Journal: Journal of the Operational Research Society Pages: 1193-1202 Issue: 7 Volume: 70 Year: 2019 Month: 7 X-DOI: 10.1080/01605682.2018.1475108 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1475108 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:7:p:1193-1202 Template-Type: ReDIF-Article 1.0 Author-Name: Xiaogang Lin Author-X-Name-First: Xiaogang Author-X-Name-Last: Lin Author-Name: Yong-Wu Zhou Author-X-Name-First: Yong-Wu Author-X-Name-Last: Zhou Title: Pricing policy selection for a platform providing vertically differentiated services with self-scheduling capacity Abstract: In this article, we study three pricing policies for a monopoly platform, such as Uber or Gett, which offers vertically differentiated services to customers via multiple types of self-scheduling providers. Ideally, the platform can employ a “dynamic pricing” policy, which pays providers wages and charges customers prices for the transactions of different services, both adjusted based on prevailing demand conditions, to maximize its profit. However, since this policy is challenging for the platform to implement and for providers to understand, two other pricing policies are commonly adopted in practice: a “surge pricing” policy (adopted by Uber), which pays providers a fixed commission of its dynamic prices, and a “static pricing” policy (applied by Gett), which pays providers a fixed commission of its fixed prices. Motivated by these observations, we study and discuss the platform’s profit performance under these three pricing policies. 
We show that the surge pricing policy does not always perform well, which can explain why some on-demand platforms would implement the static pricing policy in practice. Also, although the dynamic pricing policy will significantly improve the platform’s profit, we find that the profitability of the static (surge) pricing policy would approach that of the dynamic pricing policy if the platform can balance the number of different types of providers and/or reduce the commission rate. Journal: Journal of the Operational Research Society Pages: 1203-1218 Issue: 7 Volume: 70 Year: 2019 Month: 7 X-DOI: 10.1080/01605682.2018.1487822 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1487822 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:7:p:1203-1218 Template-Type: ReDIF-Article 1.0 Author-Name: Christopher Bayliss Author-X-Name-First: Christopher Author-X-Name-Last: Bayliss Author-Name: Geert De Maere Author-X-Name-First: Geert Author-X-Name-Last: De Maere Author-Name: Jason A. D. Atkin Author-X-Name-First: Jason A. D. Author-X-Name-Last: Atkin Author-Name: Marc Paelinck Author-X-Name-First: Marc Author-X-Name-Last: Paelinck Title: Scheduling airline reserve crew using a probabilistic crew absence and recovery model Abstract: Airlines require reserve crew to replace delayed or absent crew, with the aim of preventing consequent flight cancellations. A reserve crew schedule specifies the duty periods for which different reserve crew will be on standby to replace any absent crew. Due to dependencies between flights, the timing of a duty period of a reserve crew member influences the probabilities of flight cancellations and also the probabilities that other reserve crew are required to replace absent crew. These interactions make the exercise of scheduling reserve crew duties a combinatorial optimisation problem. 
This work develops an enhanced mathematical model for assessing the impact of any given reserve crew schedule, in terms of expected cancellations and reserve induced delays. The proposed model produces results that match a simulation model, in a much shorter time. The model is then used as a fitness function in metaheuristic algorithms and the results are analysed in detail. Journal: Journal of the Operational Research Society Pages: 543-565 Issue: 4 Volume: 71 Year: 2020 Month: 4 X-DOI: 10.1080/01605682.2019.1567649 File-URL: http://hdl.handle.net/10.1080/01605682.2019.1567649 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:71:y:2020:i:4:p:543-565 Template-Type: ReDIF-Article 1.0 Author-Name: Hande Küçükaydin Author-X-Name-First: Hande Author-X-Name-Last: Küçükaydin Author-Name: Barış Selçuk Author-X-Name-First: Barış Author-X-Name-Last: Selçuk Author-Name: Özgür Özlük Author-X-Name-First: Özgür Author-X-Name-Last: Özlük Title: Optimal keyword bidding in search-based advertising with budget constraint and stochastic ad position Abstract: This paper analyses the search-based advertising problem from an advertiser’s view point, and proposes optimal bid prices for a set of keywords targeted for the advertising campaign. The advertiser aims to maximise its expected potential revenue given a total budget constraint from a search-based advertising campaign. Optimal bid prices are formulated by considering various characteristics of the keywords such that the expected revenue from a keyword is a function of the ad’s position on the search page, and the ad position is a stochastic function of both the bid price and the competitive landscape for that keyword. We explore this problem analytically and numerically in an effort to generate important managerial insights for campaign setters. 
Journal: Journal of the Operational Research Society Pages: 566-578 Issue: 4 Volume: 71 Year: 2020 Month: 4 X-DOI: 10.1080/01605682.2019.1567650 File-URL: http://hdl.handle.net/10.1080/01605682.2019.1567650 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:71:y:2020:i:4:p:566-578 Template-Type: ReDIF-Article 1.0 Author-Name: Weiwei Zhu Author-X-Name-First: Weiwei Author-X-Name-Last: Zhu Author-Name: Mei Xu Author-X-Name-First: Mei Author-X-Name-Last: Xu Author-Name: Cheng-Ping Cheng Author-X-Name-First: Cheng-Ping Author-X-Name-Last: Cheng Title: Dealing with undesirable outputs in DEA: An aggregation method for a common set of weights Abstract: The existing approaches that deal with undesirable outputs tend to either increase the efficiency scores of DMUs or keep them constant; they do not allow undesirable outputs to have the opposite effect on the efficiency scores, which is inconsistent with the characteristics of undesirable outputs. To solve this problem, You and Yan proposed a new ratio model to allocate penalty coefficients for the undesirable outputs according to their economic costs, but in practical applications the various undesirable outputs differ in magnitude and dimension. Therefore, this study uses common weights instead of the penalty coefficients in the original method to obtain the aggregate weights of undesirable outputs. We propose two new models to calculate the aggregate weights of undesirable outputs and illustrate the methods using data given by You and Yan on China’s textile industry. The results reveal that our approaches can generally reduce the efficiency scores of DMUs after considering undesirable outputs, and do so more markedly than other available methods. 
Journal: Journal of the Operational Research Society Pages: 579-588 Issue: 4 Volume: 71 Year: 2020 Month: 4 X-DOI: 10.1080/01605682.2019.1568843 File-URL: http://hdl.handle.net/10.1080/01605682.2019.1568843 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:71:y:2020:i:4:p:579-588 Template-Type: ReDIF-Article 1.0 Author-Name: Jesse A. Nunez Author-X-Name-First: Jesse A. Author-X-Name-Last: Nunez Author-Name: Dashi I. Singham Author-X-Name-First: Dashi I. Author-X-Name-Last: Singham Author-Name: Michael P. Atkinson Author-X-Name-First: Michael P. Author-X-Name-Last: Atkinson Title: A particle filter approach to estimating target location using Brownian bridges Abstract: We study the problem of modelling the trajectory of a moving object of interest, or target, given limited locational and temporal information. Because of uncertainty in information, the location of the target can be represented using a spatial distribution, or heatmap. This paper proposes a comprehensive method for constructing and updating probability heatmaps for the location of a moving object based on uncertain information. This method uses Brownian bridges to model and construct temporal probability heatmaps of target movement, and employs a particle filter to update the heatmap as new intelligence arrives. This approach allows for more complexity than simple deterministic motion models, and is computationally easier to implement than detailed models for local target movement. Journal: Journal of the Operational Research Society Pages: 589-605 Issue: 4 Volume: 71 Year: 2020 Month: 4 X-DOI: 10.1080/01605682.2019.1570806 File-URL: http://hdl.handle.net/10.1080/01605682.2019.1570806 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:71:y:2020:i:4:p:589-605 Template-Type: ReDIF-Article 1.0 Author-Name: Vanessa M. R. Bezerra Author-X-Name-First: Vanessa M. R. 
Author-X-Name-Last: Bezerra Author-Name: Aline A. S. Leao Author-X-Name-First: Aline A. S. Author-X-Name-Last: Leao Author-Name: José Fernando Oliveira Author-X-Name-First: José Fernando Author-X-Name-Last: Oliveira Author-Name: Maristela O. Santos Author-X-Name-First: Maristela O. Author-X-Name-Last: Santos Title: Models for the two-dimensional level strip packing problem – a review and a computational evaluation Abstract: The two-dimensional level strip packing problem has received little attention from the scientific community. To the best of our knowledge, the most competitive model is the one proposed in 2004 by Lodi et al., where the items are packed by levels. In 2015, an arc flow model addressing the two-dimensional level strip cutting problem was proposed by Mrad. The literature presents some mathematical models that, despite not specifically addressing the two-dimensional level strip packing problem, are efficient and can be adapted to it. In this paper, we adapt two mixed integer linear programming models from the literature, rewrite Mrad’s model for the strip packing problem, and add well-known valid inequalities to the model proposed by Lodi et al. Computational experiments were performed on instances from the literature and show that the model put forward by Lodi et al. with valid inequalities outperforms the remaining models with respect to the number of optimal solutions found. Journal: Journal of the Operational Research Society Pages: 606-627 Issue: 4 Volume: 71 Year: 2020 Month: 4 X-DOI: 10.1080/01605682.2019.1578914 File-URL: http://hdl.handle.net/10.1080/01605682.2019.1578914 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:71:y:2020:i:4:p:606-627 Template-Type: ReDIF-Article 1.0 Author-Name: Qian Wei Author-X-Name-First: Qian Author-X-Name-Last: Wei Author-Name: Jianxiong Zhang Author-X-Name-First: Jianxiong Author-X-Name-Last: Zhang Author-Name: Guowei Zhu Author-X-Name-First: Guowei Author-X-Name-Last: Zhu Author-Name: Rui Dai Author-X-Name-First: Rui Author-X-Name-Last: Dai Author-Name: Shichen Zhang Author-X-Name-First: Shichen Author-X-Name-Last: Zhang Title: Retailer vs. vendor managed inventory with considering stochastic learning effect Abstract: Extending research on the impact of the learning effect on inventory management is of particular importance. This paper studies two inventory management models that consider a stochastic learning effect: a retailer-managed inventory (RMI) scenario and a vendor-managed inventory (VMI) scenario. We find that inventory exists in equilibrium provided that the holding cost is under a respective threshold in both the RMI and VMI scenarios; moreover, the threshold in the RMI scenario is significantly larger than that in the VMI scenario. Moreover, the RMI scenario is Pareto dominant over the VMI scenario except for a very large holding cost, and the advantage in enhancing profit is highlighted in the RMI scenario as the variability of the learning rate increases. Furthermore, the traditional double marginalization effect is weakened by a large variability in the RMI scenario while intensified in the VMI scenario. The results obtained in this paper can provide guidance for inventory management under a stochastic learning effect. Journal: Journal of the Operational Research Society Pages: 628-646 Issue: 4 Volume: 71 Year: 2020 Month: 4 X-DOI: 10.1080/01605682.2019.1581407 File-URL: http://hdl.handle.net/10.1080/01605682.2019.1581407 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:71:y:2020:i:4:p:628-646 Template-Type: ReDIF-Article 1.0 Author-Name: Konstantin Kogan Author-X-Name-First: Konstantin Author-X-Name-Last: Kogan Title: Retailing and long-term environmental concerns: The impact of inventory and pricing competition Abstract: Retailers are sources of environmental pollution, 80–90% of which is ultimately due to the processes that retailers set in motion by their orders for the products they carry and sell. The goal of this paper is to investigate environmental consequences of an intertemporal competition between retailers facing demand and price-related uncertainties. In such an environment, mass displays of inventories by a firm stimulate sales while inventory shortages discourage consumers and stimulate the sales of the firm’s competitors. We consider two types of retailers – price setters and price takers – both engaged in an associated inventory competition by selling products that are partially substitutable. While price-taking retailers let the market decide the prices, price-setting retailers compete also on prices. We find that competition by both types of firms does not necessarily increase the expected retail output and, consequently, the ensuing pollution. In particular, though the stocks of the price-taking retailers grow as the competition between them intensifies, their long-term expected output declines. Moreover, the impact of uncertainty implies greater precaution since both output and pollution further decline as the uncertainty grows. Journal: Journal of the Operational Research Society Pages: 647-659 Issue: 4 Volume: 71 Year: 2020 Month: 4 X-DOI: 10.1080/01605682.2019.1578627 File-URL: http://hdl.handle.net/10.1080/01605682.2019.1578627 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:71:y:2020:i:4:p:647-659 Template-Type: ReDIF-Article 1.0 Author-Name: Alberto Paucar-Caceres Author-X-Name-First: Alberto Author-X-Name-Last: Paucar-Caceres Author-Name: Bruno Jerardino-Wiesenborn Author-X-Name-First: Bruno Author-X-Name-Last: Jerardino-Wiesenborn Title: A bridge for two views: Checkland’s soft systems methodology and Maturana’s ontology of the observer Abstract: Checkland’s and Maturana’s work aims to understand and to improve problematic situations in organisations and in our everyday life. Maturana’s phenomenological onto-epistemology (we are immersed in the praxis of living in an ontological multi-universe) seems to resonate with Soft Systems Methodology (SSM) interpretivist epistemology. We argue that this concurrence makes it possible to reflect and explore some of Maturana’s ideas (structural determinism/structural coupling/organisational closure) when they are grafted into the phases of Checkland’s SSM seven-step process. This article aims to complement SSM by proposing a framework in which some key concepts from Maturana’s Ontology of the Observer (OoO) might enhance and expand the understanding of the SSM application process. An enriched and enhanced SSM process could have significant consequences in the Management Science/Operational Research (MS/OR) and Systems community practice. The framework proposed can have major social repercussions since it will incorporate the well-known influential OoO ideas into MS/OR practice. Journal: Journal of the Operational Research Society Pages: 660-672 Issue: 4 Volume: 71 Year: 2020 Month: 4 X-DOI: 10.1080/01605682.2019.1578629 File-URL: http://hdl.handle.net/10.1080/01605682.2019.1578629 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:71:y:2020:i:4:p:660-672 Template-Type: ReDIF-Article 1.0 Author-Name: Chaoan Lai Author-X-Name-First: Chaoan Author-X-Name-Last: Lai Author-Name: Liang Xu Author-X-Name-First: Liang Author-X-Name-Last: Xu Author-Name: Jennifer Shang Author-X-Name-First: Jennifer Author-X-Name-Last: Shang Title: Optimal planning of technology roadmap under uncertainty Abstract: The selection and planning of technical projects is an important and challenging investment decision for companies as a significant amount of capital is often involved. With growing complexity and scale, managing technical research projects and technology roadmaps (TRM) is more affected by uncertainties than ever before. However, existing approaches for addressing these problems are restricted to deterministic environments. In this study, a general methodology based on graph theory and mathematical programming for R&D project planning subject to uncertainty is proposed to maximize profit and to find precedence relations according to technological trends for given budgets and time. We first put forward a new graph model and its mathematical definition to represent the relations among technologies. The network contains nodes to represent technologies and edges to denote feasible paths between two technology nodes. To deal with uncertainty, a novel network-based robust optimization model as well as a chance-constrained model is developed. Finally, we apply the proposed model and solution approach to the TRM of the Smart Home industry. The numerical study shows that the proposed method can effectively and efficiently solve the optimization problems for technical project planning, path design, and project management under uncertainty. 
Journal: Journal of the Operational Research Society Pages: 673-686 Issue: 4 Volume: 71 Year: 2020 Month: 4 X-DOI: 10.1080/01605682.2019.1581406 File-URL: http://hdl.handle.net/10.1080/01605682.2019.1581406 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:71:y:2020:i:4:p:673-686 Template-Type: ReDIF-Article 1.0 Author-Name: Belaid Aouni Author-X-Name-First: Belaid Author-X-Name-Last: Aouni Author-Name: Michalis Doumpos Author-X-Name-First: Michalis Author-X-Name-Last: Doumpos Author-Name: Blanca Pérez-Gladish Author-X-Name-First: Blanca Author-X-Name-Last: Pérez-Gladish Author-Name: Ralph E. Steuer Author-X-Name-First: Ralph E. Author-X-Name-Last: Steuer Title: On the increasing importance of multiple criteria decision aid methods for portfolio selection Abstract: In 1952, Markowitz published his famous paper on portfolio selection that transformed the field of finance. Although over 65 years have passed since then, the mean-variance model remains today the predominant model in portfolio selection. Of the many criticisms it has endured over this period, perhaps the most persistent is that mainstream mean-variance theory is unable to accommodate additional criteria beyond expected return and variance. As investment decision-making has become more complex, this is a real limitation: problems with additional criteria abound and are only increasing in number and importance. In this paper, we review published papers that apply exact (as opposed to evolutionary) methods and procedures to portfolio selection problems with criteria beyond mean and variance. We also analyse the methodologies that allow the solution of the problem in a multiple criteria context, thus extending the features of the mean-variance approach that have caused portfolio theory to have such impact. 
Journal: Journal of the Operational Research Society Pages: 1525-1542 Issue: 10 Volume: 69 Year: 2018 Month: 10 X-DOI: 10.1080/01605682.2018.1475118 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1475118 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:10:p:1525-1542 Template-Type: ReDIF-Article 1.0 Author-Name: C. Calvo Author-X-Name-First: C. Author-X-Name-Last: Calvo Author-Name: C. Ivorra Author-X-Name-First: C. Author-X-Name-Last: Ivorra Author-Name: V. Liern Author-X-Name-First: V. Author-X-Name-Last: Liern Title: Controlling risk through diversification in portfolio selection with non-historical information Abstract: We deal with the portfolio selection problem for investors having information on the expected returns of the assets based not only on historical data. In the absence of a way of measuring the risk of non-historical information, the investor may try to adjust it through the consideration of a suitable set of diversification constraints. With this aim, we relate the concept of value of information (recently introduced by Kao and Steuer) to a qualitative subjective measure of the investor’s level of confidence in his/her non-historical information. As an illustration, we analyze the behavior of the proposed indicator in the Spanish IBEX35 index for risk, upper bound, semicontinuous variable and cardinality constraints. Journal: Journal of the Operational Research Society Pages: 1543-1548 Issue: 10 Volume: 69 Year: 2018 Month: 10 X-DOI: 10.1057/s41274-017-0195-6 File-URL: http://hdl.handle.net/10.1057/s41274-017-0195-6 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:69:y:2018:i:10:p:1543-1548 Template-Type: ReDIF-Article 1.0 Author-Name: Davide La Torre Author-X-Name-First: Davide Author-X-Name-Last: La Torre Author-Name: Franklin Mendivil Author-X-Name-First: Franklin Author-X-Name-Last: Mendivil Title: Stochastic linear optimization under partial uncertainty and incomplete information using the notion of probability multimeasure Abstract: We consider a scalar stochastic linear optimization problem subject to linear constraints. We introduce the notion of deterministic equivalent formulation when the underlying probability space is equipped with a probability multimeasure. The initial problem is then transformed into a set-valued optimization problem with linear constraints. We also provide a method for estimating the expected value with respect to a probability multimeasure and prove extensions of the classical strong law of large numbers, the Glivenko–Cantelli theorem, and the central limit theorem to this setting. The notion of sampling with respect to a probability multimeasure and the definition of cumulative distribution multifunction are also discussed. Finally, we show some properties of the deterministic equivalent problem. Journal: Journal of the Operational Research Society Pages: 1549-1556 Issue: 10 Volume: 69 Year: 2018 Month: 10 X-DOI: 10.1057/s41274-017-0249-9 File-URL: http://hdl.handle.net/10.1057/s41274-017-0249-9 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:10:p:1549-1556 Template-Type: ReDIF-Article 1.0 Author-Name: Fouad Ben Abdelaziz Author-X-Name-First: Fouad Author-X-Name-Last: Ben Abdelaziz Author-Name: Ray Saadaoui Author-X-Name-First: Ray Author-X-Name-Last: Saadaoui Author-Name: Meryem Masmoudi Author-X-Name-First: Meryem Author-X-Name-Last: Masmoudi Title: Single criterion vs. 
multi-criteria optimal stopping methods for portfolio management Abstract: This paper compares two novel methods applied to Portfolio Management based on the attractive theory of Optimal Stopping Problems. We test the single-criterion standard version of the latter theory against the multi-criteria version. The optimal moment to stop and trade (to Buy or Sell) represents the major challenge of our active management strategy. We subject the stocks included in the portfolio to the rules derived from the underlying theory. Our aim is to provide a method that helps portfolio managers create wealth by buying and selling securities (trading). Our algorithm proves its performance when applied to real data, and we compare it with the Buy & Hold Strategy. Journal: Journal of the Operational Research Society Pages: 1557-1567 Issue: 10 Volume: 69 Year: 2018 Month: 10 X-DOI: 10.1080/01605682.2018.1441638 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1441638 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:10:p:1557-1567 Template-Type: ReDIF-Article 1.0 Author-Name: Hatem Masri Author-X-Name-First: Hatem Author-X-Name-Last: Masri Title: A Shariah-compliant portfolio selection model Abstract: The paper aims to develop a Shariah-compliant optimization model for portfolio selection in an Islamic security market. The security return is considered stochastic and is estimated based on the stochastic market return. The proposed model follows Shariah principles by avoiding excessive risk and providing an ethical and socially responsible approach for portfolio selection. We assume that the portfolio return should be maximized for a given probability of loss and that any return below the Zakat threshold is a recourse cost. The Shariah-compliant portfolio selection model is obtained using a goal programming approach, a chance-constrained approach and a recourse approach. 
An empirical study from the Bahrain Islamic Market is reported. Journal: Journal of the Operational Research Society Pages: 1568-1575 Issue: 10 Volume: 69 Year: 2018 Month: 10 X-DOI: 10.1057/s41274-017-0223-6 File-URL: http://hdl.handle.net/10.1057/s41274-017-0223-6 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:10:p:1568-1575 Template-Type: ReDIF-Article 1.0 Author-Name: Amelia Bilbao-Terol Author-X-Name-First: Amelia Author-X-Name-Last: Bilbao-Terol Author-Name: Mar Arenas-Parra Author-X-Name-First: Mar Author-X-Name-Last: Arenas-Parra Author-Name: Verónica Cañal-Fernández Author-X-Name-First: Verónica Author-X-Name-Last: Cañal-Fernández Author-Name: Pablo Nguema Obam-Eyang Author-X-Name-First: Pablo Nguema Author-X-Name-Last: Obam-Eyang Title: Multi-criteria analysis of the GRI sustainability reports: an application to Socially Responsible Investment Abstract: The aim of this paper is to construct a decision-support system to evaluate the different items of corporate social responsibility. For this purpose, we propose a multi-criteria model that runs on two levels of decision-making in accordance with the hierarchical structure designed by the Global Reporting Initiative (GRI). Tools for modelling preferences and aggregating information are used in this framework. Arrays of normalized scores reflecting the company's performance in the Aspects and Categories of GRI are then made available for the stakeholders. The design of investment portfolios uses the obtained measures of sustainability in an Extended Goal Programming model that combines financial and sustainability objectives. The proposal enables more informed decision-making for investors with social concerns who prefer direct investment and wish to make their own financial decisions. The developed methodology has been applied to 8 Spanish companies, which have been selected for their relevance in the Spanish stock market. 
Journal: Journal of the Operational Research Society Pages: 1576-1598 Issue: 10 Volume: 69 Year: 2018 Month: 10 X-DOI: 10.1057/s41274-017-0229-0 File-URL: http://hdl.handle.net/10.1057/s41274-017-0229-0 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:10:p:1576-1598 Template-Type: ReDIF-Article 1.0 Author-Name: Tomás Gómez-Navarro Author-X-Name-First: Tomás Author-X-Name-Last: Gómez-Navarro Author-Name: Mónica García-Melón Author-X-Name-First: Mónica Author-X-Name-Last: García-Melón Author-Name: Francisco Guijarro Author-X-Name-First: Francisco Author-X-Name-Last: Guijarro Author-Name: Marion Preuss Author-X-Name-First: Marion Author-X-Name-Last: Preuss Title: Methodology to assess the market value of companies according to their financial and social responsibility aspects: An AHP approach Abstract: This paper proposes a combination of the Analytic Hierarchy Process with Goal Programming for a better valuation of companies. The methodology includes the economic dimension of the company and another based on its social responsibility. A set of relative and absolute economic variables is proposed, including concepts such as leverage, liquidity and solvency. For the CSR dimension, we present a set of variables extracted from sustainability reports based on the Global Reporting Initiative. This way, the whole methodology relies on publicly available data and can be readily reproduced. We demonstrate the methodology with a complex case study involving the valuation of a German real estate company that wants to estimate its market value. For that, we have analysed four comparable companies plus the target one. Journal: Journal of the Operational Research Society Pages: 1599-1608 Issue: 10 Volume: 69 Year: 2018 Month: 10 X-DOI: 10.1057/s41274-017-0222-7 File-URL: http://hdl.handle.net/10.1057/s41274-017-0222-7 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:69:y:2018:i:10:p:1599-1608 Template-Type: ReDIF-Article 1.0 Author-Name: K. Liagkouras Author-X-Name-First: K. Author-X-Name-Last: Liagkouras Author-Name: K. Metaxiotis Author-X-Name-First: K. Author-X-Name-Last: Metaxiotis Title: Handling the complexities of the multi-constrained portfolio optimization problem with the support of a novel MOEA Abstract: The incorporation of additional constraints to the basic mean–variance (MV) model adds realism to the model, but simultaneously makes the problem difficult to solve with exact approaches. In this paper, we address the challenges raised by the multi-constrained portfolio optimization problem with the assistance of a novel, specially engineered multi-objective evolutionary algorithm (MOEA). The proposed algorithm incorporates a new efficient representation scheme and specially designed mutation and recombination operators, along with efficient algorithmic approaches for the correct incorporation of complex real-world constraints into the MV model. We test the algorithm’s performance in comparison with two well-known MOEAs by using a wide range of test problems with up to 1317 stocks. For all examined cases, the proposed algorithm outperforms the other two MOEAs in terms of performance and processing speed. Journal: Journal of the Operational Research Society Pages: 1609-1627 Issue: 10 Volume: 69 Year: 2018 Month: 10 X-DOI: 10.1057/s41274-017-0209-4 File-URL: http://hdl.handle.net/10.1057/s41274-017-0209-4 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:69:y:2018:i:10:p:1609-1627 Template-Type: ReDIF-Article 1.0 Author-Name: Maria do Castelo Gouveia Author-X-Name-First: Maria Author-X-Name-Last: do Castelo Gouveia Author-Name: Elisabete Duarte Neves Author-X-Name-First: Elisabete Author-X-Name-Last: Duarte Neves Author-Name: Luís Cândido Dias Author-X-Name-First: Luís Author-X-Name-Last: Cândido Dias Author-Name: Carlos Henggeler Antunes Author-X-Name-First: Carlos Author-X-Name-Last: Henggeler Antunes Title: Performance evaluation of Portuguese mutual fund portfolios using the value-based DEA method Abstract: The increased volatility in capital markets since the outbreak of the 2008 global financial crisis and investors’ lack of confidence in the banking sector represented significant challenges to portfolio fund managers. The current study assesses the performance of Portuguese mutual fund portfolios over the period 2007–2014 using the value-based DEA method. This approach combines data envelopment analysis (DEA) with multiple criteria decision aiding. A dynamic evaluation including value judgements is carried out using data from 15 Portuguese equity funds. The results unveil the impact of the global crisis on the Portuguese investment funds industry. They show that Portuguese investment funds performed better between 2011 and 2013; this suggests that equity fund investors became more confident in these vehicles due to political measures reinforcing financial markets. The methodology followed in this study helps investors identify the funds with the best practices according to their judgements. Journal: Journal of the Operational Research Society Pages: 1628-1639 Issue: 10 Volume: 69 Year: 2018 Month: 10 X-DOI: 10.1057/s41274-017-0259-7 File-URL: http://hdl.handle.net/10.1057/s41274-017-0259-7 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:69:y:2018:i:10:p:1628-1639 Template-Type: ReDIF-Article 1.0 Author-Name: Ma Leonor Plá Author-X-Name-First: Ma Author-X-Name-Last: Leonor Plá Author-Name: Trinidad Casasús Author-X-Name-First: Trinidad Author-X-Name-Last: Casasús Author-Name: Vicente Liern Author-X-Name-First: Vicente Author-X-Name-Last: Liern Author-Name: Juan Carlos Pérez Author-X-Name-First: Juan Author-X-Name-Last: Carlos Pérez Title: On the importance of perspective and flexibility for efficiency measurement: effects on the ranking of decision-making units Abstract: The efficiency of a firm can be assessed from several perspectives and using a variety of methodologies. Data envelopment analysis (DEA) is one of the most commonly used methodologies. Conventional DEA analyses or models allow one to classify decision-making units (DMUs) into efficient and inefficient ones based on their efficiency scores, which could also be used for ranking DMUs; however, such rankings generally show many ties. Super-efficiency DEA analyses have been proposed to address the tie issue. On the other hand, conventional DEA analyses only take account of a single perspective in estimating efficiency scores. Cross-efficiency DEA analyses provide an alternative that takes account of the perspectives or perceptions of different DMUs. Conventional DEA analyses designed for handling crisp data have also been extended to deal with fuzzy data. In this paper, we propose a fuzzy version of cross-efficiency DEA analysis along with a method for ranking DMUs. We illustrate our proposal with a real example from the Spanish banking sector. In order to assess the robustness of our proposal, we compared our results with those obtained with three different approaches based on the perspective from which efficiency aims to be evaluated: a fuzzy DEA approach, a cross-efficiency-based approach and a TOPSIS-based approach. 
Journal: Journal of the Operational Research Society Pages: 1640-1652 Issue: 10 Volume: 69 Year: 2018 Month: 10 X-DOI: 10.1057/s41274-017-0250-3 File-URL: http://hdl.handle.net/10.1057/s41274-017-0250-3 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:10:p:1640-1652 Template-Type: ReDIF-Article 1.0 Author-Name: Jamal Ouenniche Author-X-Name-First: Jamal Author-X-Name-Last: Ouenniche Author-Name: Kais Bouslah Author-X-Name-First: Kais Author-X-Name-Last: Bouslah Author-Name: Jose Manuel Cabello Author-X-Name-First: Jose Manuel Author-X-Name-Last: Cabello Author-Name: Francisco Ruiz Author-X-Name-First: Francisco Author-X-Name-Last: Ruiz Title: A new classifier based on the reference point method with application in bankruptcy prediction Abstract: The finance industry relies heavily on the risk modelling and analysis toolbox to assess the risk profiles of entities such as individual and corporate borrowers and investment vehicles. This toolbox includes a variety of parametric and nonparametric methods for predicting risk class belonging. In this paper, we expand this toolbox by proposing an integrated framework for implementing a full classification analysis based on a reference point method, namely in-sample classification and out-of-sample classification. The empirical performance of the proposed reference point method-based classifier is tested on a UK data-set of bankrupt and nonbankrupt firms. Our findings show that the proposed classifier can deliver a very high predictive performance, which makes it a real contender in industry applications in banking and investment. 
Three main features of the proposed classifier drive its outstanding performance, namely its nonparametric nature, the design of our RPM score-based cut-off point procedure for in-sample classification, and the choice of a k-nearest neighbour as an out-of-sample classifier which is trained on the in-sample classification provided by the reference point method-based classifier. Journal: Journal of the Operational Research Society Pages: 1653-1660 Issue: 10 Volume: 69 Year: 2018 Month: 10 X-DOI: 10.1057/s41274-017-0254-z File-URL: http://hdl.handle.net/10.1057/s41274-017-0254-z File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:10:p:1653-1660 Template-Type: ReDIF-Article 1.0 Author-Name: Sebastián Román Author-X-Name-First: Sebastián Author-X-Name-Last: Román Author-Name: Andrés M. Villegas Author-X-Name-First: Andrés M. Author-X-Name-Last: Villegas Author-Name: Juan G. Villegas Author-X-Name-First: Juan G. Author-X-Name-Last: Villegas Title: An evolutionary strategy for multiobjective reinsurance optimization Abstract: In this work we tackle a multiobjective reinsurance optimization problem (MOROP) from the point of view of an insurance company. The MOROP seeks to find a reinsurance program that optimizes two conflicting objectives: the maximization of the expected value of the profit of the company and the minimization of the risk of the insurance losses retained by the company. To calculate these two objectives we built a probabilistic model of the portfolio of risks of the company. This model is embedded within an evolutionary strategy (ES) that approximates the efficient frontier of the MOROP using a combination of four classical reinsurance structures: surplus, quota share, excess-of-loss and stop-loss. 
Computational experiments with the risks of a specific line of business of a large Colombian general insurance company show that the proposed evolutionary strategy outperforms the classical non-dominated sorting genetic algorithm. Moreover, the analysis of the solutions in the efficient frontier obtained with our ES gave several insights to the company in terms of the structure and properties of the solutions for different risk-return trade-offs. Journal: Journal of the Operational Research Society Pages: 1661-1677 Issue: 10 Volume: 69 Year: 2018 Month: 10 X-DOI: 10.1057/s41274-017-0210-y File-URL: http://hdl.handle.net/10.1057/s41274-017-0210-y File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:10:p:1661-1677 Template-Type: ReDIF-Article 1.0 Author-Name: M. Ryan Haley Author-X-Name-First: M. Ryan Author-X-Name-Last: Haley Title: A moment-free nonparametric quantity-of-quality approach to optimal portfolio selection: A role for endogenous shortfall and windfall boundaries? Abstract: This article proposes a Quantity-of-Quality (QoQ) approach to optimal portfolio selection, which builds on the intuition of the widely applied h-index and e-index from the bibliometric literature. While moment-free and nonparametric, the method embraces quantity-of-high-quality returns and upside potential while simultaneously avoiding quantity-of-low-quality returns and downside risk. A non-standard measure of central tendency is also present, which functions in a way similar to a portfolio mean or median. The method delivers attractive and intuitively appealing results, and appears to be less susceptible to overfitting issues than the stylized Sharpe Ratio portfolio. The method is demonstrated with an established data set, and out-of-sample performance is gauged using training-holdout analysis in two distinct data sets. 
Because the proposed method uses a fundamentally different portfolio selection objective function than standard moment-based methods, the QoQ approach extracts information about the data-generating process that is perhaps overlooked or deemphasized by traditional moment-based methods, and as such may serve as a capable complement to standard moment-based portfolio selection criteria. Journal: Journal of the Operational Research Society Pages: 1678-1687 Issue: 10 Volume: 69 Year: 2018 Month: 10 X-DOI: 10.1080/01605682.2018.1489356 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1489356 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:10:p:1678-1687 Template-Type: ReDIF-Article 1.0 Author-Name: Sabri Boubaker Author-X-Name-First: Sabri Author-X-Name-Last: Boubaker Author-Name: Asma Houcine Author-X-Name-First: Asma Author-X-Name-Last: Houcine Author-Name: Zied Ftiti Author-X-Name-First: Zied Author-X-Name-Last: Ftiti Author-Name: Hatem Masri Author-X-Name-First: Hatem Author-X-Name-Last: Masri Title: Does audit quality affect firms’ investment efficiency? Abstract: This study investigates the effect of audit quality on firm investment efficiency for 125 French-listed companies over 2008–2015. It uses parametric and non-parametric measures of firm investment efficiency, based on residuals extracted from the investment efficiency model and the data envelopment analysis (DEA) approach, respectively, to assess whether audit quality improves investment inefficiency. It analyses this relationship after distinguishing between firms that under-invest and those that over-invest. The results show that investment inefficiency decreases with audit quality. Specifically, auditor knowledge leads to less investment in firms prone to over-investment and more investment in firms prone to under-investment. 
This relationship appears to be independent of a firm’s financial reporting quality, which indicates that auditors provide value-added services that impact the investment decisions of firm managers, separately from the quality of accounting information. Journal: Journal of the Operational Research Society Pages: 1688-1699 Issue: 10 Volume: 69 Year: 2018 Month: 10 X-DOI: 10.1080/01605682.2018.1489357 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1489357 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:10:p:1688-1699 Template-Type: ReDIF-Article 1.0 Author-Name: Hyojung Kang Author-X-Name-First: Hyojung Author-X-Name-Last: Kang Author-Name: Harriet Black Nembhard Author-X-Name-First: Harriet Black Author-X-Name-Last: Nembhard Author-Name: Nasrollah Ghahramani Author-X-Name-First: Nasrollah Author-X-Name-Last: Ghahramani Author-Name: William Curry Author-X-Name-First: William Author-X-Name-Last: Curry Title: A system dynamics approach to planning and evaluating interventions for chronic disease management Abstract: Studies have been reported on the applications of systems science to chronic disease management, but few, if any, have concentrated on chronic kidney disease (CKD). We examined the impact of a system dynamics approach to the evaluation of interventions in care of patients with CKD. We developed a stock flow simulation model and a multi-objective goal programming model. After calibrating the model, eight scenarios were analysed to measure intervention effects. Physician education (PE) had the most significant impact on reducing disease progression rate (DPR) from Stage 3 to Stage 4, while care coordination had a substantial impact on decreasing DPR Stage 4 to Stage 5. The addition of either CME or primary care team building to PE led to significant reductions in DPR for patients with Stage 3 CKD. 
The goal programming model indicated that a growing number of primary care physicians and care managers are needed to manage CKD patients over time. This study showed that the stock flow model is a potentially powerful tool for supporting informed decision-making for planning and implementing interventions at various phases. Journal: Journal of the Operational Research Society Pages: 987-1005 Issue: 7 Volume: 69 Year: 2018 Month: 7 X-DOI: 10.1057/s41274-017-0279-3 File-URL: http://hdl.handle.net/10.1057/s41274-017-0279-3 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:7:p:987-1005 Template-Type: ReDIF-Article 1.0 Author-Name: Li Li Author-X-Name-First: Li Author-X-Name-Last: Li Author-Name: Li Jiang Author-X-Name-First: Li Author-X-Name-Last: Jiang Title: Responsive pricing and stock redistribution: Implications for stock balancing and system performance Abstract: We consider two firms that order from a supplier and sell products to the markets with uncertain demand. The firms can transship stocks in between at an endogenous transfer price and set retail prices after learning actual market sizes. The relative timing of stock transshipment and retail pricing, both ex-post market size realization, gives rise to two decision models. The firms set retail prices and transship upon stock imbalance in the pro-pricing model, and, furthermore, transship stocks in between before retail pricing and demand satisfaction in the pro-transshipment model. We demonstrate that responsive pricing alone keeps the firms off stock imbalance and insulates them from the impacts of market uncertainty, and responsive stock redistribution contributes to a more efficient deployment of stocks to market selling, even absent volatility. Enhanced responsiveness in stocking and pricing benefits firms, but can hurt the supplier unless its production cost is sufficiently low. 
Journal: Journal of the Operational Research Society Pages: 1006-1020 Issue: 7 Volume: 69 Year: 2018 Month: 7 X-DOI: 10.1057/s41274-017-0280-x File-URL: http://hdl.handle.net/10.1057/s41274-017-0280-x File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:7:p:1006-1020 Template-Type: ReDIF-Article 1.0 Author-Name: Uwe Aickelin Author-X-Name-First: Uwe Author-X-Name-Last: Aickelin Author-Name: Jenna Marie Reps Author-X-Name-First: Jenna Marie Author-X-Name-Last: Reps Author-Name: Peer-Olaf Siebers Author-X-Name-First: Peer-Olaf Author-X-Name-Last: Siebers Author-Name: Peng Li Author-X-Name-First: Peng Author-X-Name-Last: Li Title: Using simulation to incorporate dynamic criteria into multiple criteria decision-making Abstract: In this paper, we present a case study demonstrating how dynamic and uncertain criteria can be incorporated into a multicriteria analysis with the help of discrete event simulation. The simulation-guided multicriteria analysis can include both monetary and non-monetary criteria that are static or dynamic, whereas standard multicriteria analysis only deals with static criteria and cost-benefit analysis only deals with static monetary criteria. The dynamic and uncertain criteria are incorporated by using simulation to explore how the decision options perform. The results of the simulation are then fed into the multicriteria analysis. By enabling the incorporation of dynamic and uncertain criteria, the dynamic multiple criteria analysis was able to take a unique perspective of the problem. The highest ranked option returned by the dynamic multicriteria analysis differed from the other decision aid techniques. The results suggest that dynamic multiple criteria analysis may be highly suitable for decisions that require long-term evaluation, as this is often when uncertainty is introduced. 
Journal: Journal of the Operational Research Society Pages: 1021-1032 Issue: 7 Volume: 69 Year: 2018 Month: 7 X-DOI: 10.1080/01605682.2017.1410010 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1410010 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:7:p:1021-1032 Template-Type: ReDIF-Article 1.0 Author-Name: Yuelin Shen Author-X-Name-First: Yuelin Author-X-Name-Last: Shen Title: Platform retailing with slotting allowance and revenue sharing Abstract: This paper investigates a type of platform retailing, where the retailer builds up large facilities, inside which a supplier (manufacturer) rents a mini-store and sells goods directly. The retailer demands a slotting allowance and a portion of the sales revenue from the supplier; however, this fee structure may cause a channel conflict and supplier exclusion. To understand these phenomena, we build a two-supplier−one-retailer Stackelberg model with the retailer acting as the leader and the suppliers acting as the followers. We solve the model analytically and numerically, assuming competitive and non-competitive suppliers, identical and nonidentical slotting allowance, and possibly different revenue-sharing rates for the two suppliers. It is found that supplier exclusion may happen if the slotting allowance is identical across the suppliers, whereby the market size difference and product substitution are the underlying driving forces. We also provide rationales for the existence of a slotting fee. Journal: Journal of the Operational Research Society Pages: 1033-1045 Issue: 7 Volume: 69 Year: 2018 Month: 7 X-DOI: 10.1057/s41274-017-0286-4 File-URL: http://hdl.handle.net/10.1057/s41274-017-0286-4 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:69:y:2018:i:7:p:1033-1045 Template-Type: ReDIF-Article 1.0 Author-Name: Subrata Mitra Author-X-Name-First: Subrata Author-X-Name-Last: Mitra Author-Name: Ashis K. Chatterjee Author-X-Name-First: Ashis K. Author-X-Name-Last: Chatterjee Title: Single-period newsvendor problem under random end-of-season demand Abstract: Newsvendor problems, which have attracted the attention of researchers since the 1950s, have wide applications in various industries. There have been many extensions to the standard single-period newsvendor problem. In this paper, we consider the single-period, single-item and single-stage newsvendor problem under random end-of-season demand and develop a model to determine the optimal order quantity and expected profit. We prove that the optimal order quantity and expected profit thus obtained are lower than their respective values obtained from the standard newsvendor formulation. We also provide numerical examples and perform sensitivity analyses to compute the extent of deviations of the ‘true’ optimal solutions from the newsvendor solutions. We observe that the deviations are most sensitive to the ratio of the means of the demand distributions. The deviations are also found to be sensitive to the contribution margin, salvage price, coefficients of variation of the demand distributions and correlation between seasonal and end-of-season demands. We provide broad guidelines for managers as to when the model developed in this paper should be used and when the standard newsvendor formulation would suffice to determine the order quantity. Journal: Journal of the Operational Research Society Pages: 1046-1060 Issue: 7 Volume: 69 Year: 2018 Month: 7 X-DOI: 10.1057/s41274-017-0288-2 File-URL: http://hdl.handle.net/10.1057/s41274-017-0288-2 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:69:y:2018:i:7:p:1046-1060 Template-Type: ReDIF-Article 1.0 Author-Name: Michael Zenker Author-X-Name-First: Michael Author-X-Name-Last: Zenker Author-Name: Nils Boysen Author-X-Name-First: Nils Author-X-Name-Last: Boysen Title: Dock sharing in cross-docking facilities of the postal service industry Abstract: In cross-docks, incoming shipments are unloaded, moved across the facility, and loaded into outbound trucks, such that truck load factors are increased and transportation costs are reduced. In this context, we treat the question of which outbound destinations should share a dock if not enough docking space is available to process each destination via its separate dock door. We aim at a partition of destinations (among docks), such that fixed processing intervals do not overlap and the maximum inventory accumulating in the staging areas is minimized. We define the resulting dock sharing problem specifically addressing the peculiarities of the postal service industry, investigate computational complexity, and provide solution procedures. Journal: Journal of the Operational Research Society Pages: 1061-1076 Issue: 7 Volume: 69 Year: 2018 Month: 7 X-DOI: 10.1057/s41274-017-0289-1 File-URL: http://hdl.handle.net/10.1057/s41274-017-0289-1 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:7:p:1061-1076 Template-Type: ReDIF-Article 1.0 Author-Name: Amirmohsen Golmohammadi Author-X-Name-First: Amirmohsen Author-X-Name-Last: Golmohammadi Author-Name: Elkafi Hassini Author-X-Name-First: Elkafi Author-X-Name-Last: Hassini Title: A two-period sourcing model with demand and supply risks Abstract: In this paper, we study the problem of sourcing a product when the demand and supply may be uncertain. We consider a two-period model where the supplier’s production quantity in the second period is dependent on the amount produced in the first period. 
This is a common situation in industries where the production capacity cannot be changed in a short period of time, such as in the almond industry. In this industry, a two-year contract between the supplier (farmer) and buyer (handler) is usually preferred. The buyer can sign two sets of contracts: a production contract where she is responsible for the uncertainty of yield or a purchasing contract where the provided quantity is guaranteed by the supplier at a higher cost. The buyer has to decide about the quantities to buy through the production and purchasing contracts. The buyer has the option to carry excess inventory from the first period to the second. We establish some analytical properties of our proposed model and perform comparative static analysis to study the buyer’s decisions. In particular, we show under which conditions the buyer may benefit from purchasing contracts. In addition, we shed some light on the debate over the merits of inventory carry-over in mitigating the yield risk in the almond industry. To gain some practical insights, we also apply our model to some real data from the California almond industry. Finally, we extend the model to investigate two cases: when the prices in the primary and the secondary markets are functions of yield and when the amount of carry-over is a decision variable. Journal: Journal of the Operational Research Society Pages: 1077-1095 Issue: 7 Volume: 69 Year: 2018 Month: 7 X-DOI: 10.1080/01605682.2017.1409411 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1409411 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:69:y:2018:i:7:p:1077-1095 Template-Type: ReDIF-Article 1.0 Author-Name: Jesús Alberto Tapia Author-X-Name-First: Jesús Alberto Author-X-Name-Last: Tapia Author-Name: Bonifacio Salvador Author-X-Name-First: Bonifacio Author-X-Name-Last: Salvador Author-Name: Jesús María Rodríguez Author-X-Name-First: Jesús María Author-X-Name-Last: Rodríguez Title: Data envelopment analysis in satisfaction survey research: sample size problem Abstract: Data envelopment analysis (DEA) frequently uses stochastic input and/or output data. If these data are estimated from a sample in each decision-making unit, the DEA efficiency will be an estimation of the efficiency that would be obtained if the population information were available. We propose a methodology to determine the relationship between the sample size and the estimation error of the efficiency in the presence of output data estimated with a sample. The practical utility of this result is to evaluate, with fixed precision, the efficiency of a set of decision-making units, taking deterministic inputs that explain the opinion–satisfaction of the unit users, whose opinion is known through a sampling survey. We illustrate how to apply the proposed research with a case study. Journal: Journal of the Operational Research Society Pages: 1096-1104 Issue: 7 Volume: 69 Year: 2018 Month: 7 X-DOI: 10.1057/s41274-017-0290-8 File-URL: http://hdl.handle.net/10.1057/s41274-017-0290-8 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:7:p:1096-1104 Template-Type: ReDIF-Article 1.0 Author-Name: Fatemeh Ghasemi Bojd Author-X-Name-First: Fatemeh Author-X-Name-Last: Ghasemi Bojd Author-Name: Hamidreza Koosha Author-X-Name-First: Hamidreza Author-X-Name-Last: Koosha Title: A robust goal programming model for the capital budgeting problem Abstract: Considering financial limitations, organizations should choose among various investment opportunities. 
Wrong decisions in selecting projects may lead to wasted resources as well as opportunity costs and negative long-term consequences. Thus, the capital budgeting problem must be solved to make proper decisions. Some of the main parameters of these problems, e.g., cash flows, are not deterministic. In addition, budget constraints of the capital budgeting problem are soft, i.e., they can be violated. In this paper, we propose a new model for the capital budgeting problem which can deal with uncertainty and benefits from soft constraints so that it can still provide a feasible solution. Goal programming is used to increase model flexibility and robust optimisation is applied to deal with uncertainty. The model is examined with different numerical illustrations. Finally, results are analysed and advantages of the new model are discussed. Results are promising and the approach is highly tractable and easy to implement. Journal: Journal of the Operational Research Society Pages: 1105-1113 Issue: 7 Volume: 69 Year: 2018 Month: 7 X-DOI: 10.1080/01605682.2017.1389673 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1389673 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:7:p:1105-1113 Template-Type: ReDIF-Article 1.0 Author-Name: I. L. Tomashevskii Author-X-Name-First: I. L. Author-X-Name-Last: Tomashevskii Title: Optimization methods to estimate alternatives in AHP: The classification with respect to the dependence of irrelevant alternatives Abstract: This paper focuses on specific rank reversal phenomena in optimization methods (the least squares method, the chi-square method, etc.) designed to derive preference weights of alternatives from pairwise comparison matrices in the Analytic Hierarchy Process. It is preferable that the most irrelevant alternative have no effect on the ranking of the other alternatives. 
Unfortunately, it appears that, for many methods, the most irrelevant alternatives tend to dictate the rank order of all the remaining alternatives. Accordingly, adding an irrelevant alternative may turn the most important alternative into an unimportant one, and conversely. We classify the optimization methods with respect to their dependence on irrelevant alternatives and specify all possible “dictatorial” methods, in which highly irrelevant alternatives dictate the ranking absolutely, as well as all methods that are free from the dictate of such alternatives. For the dictatorial methods, we propose “weight function” modifications, which prevent the influence of irrelevant alternatives. We show that without the modification, “dictatorial” methods can add confusion and false recommendations to the decision-making process even in the most ordinary decision-making situations. Journal: Journal of the Operational Research Society Pages: 1114-1124 Issue: 7 Volume: 69 Year: 2018 Month: 7 X-DOI: 10.1080/01605682.2017.1390533 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1390533 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:7:p:1114-1124 Template-Type: ReDIF-Article 1.0 Author-Name: Michele Fedrizzi Author-X-Name-First: Michele Author-X-Name-Last: Fedrizzi Author-Name: Fabio Ferrari Author-X-Name-First: Fabio Author-X-Name-Last: Ferrari Title: A chi-square-based inconsistency index for pairwise comparison matrices Abstract: In this paper, we introduce a new method for evaluating the inconsistency level of a pairwise comparison matrix. The classical Chi-square index suggests an interesting formal similarity for a consistent pairwise comparison matrix and, as a consequence, a method for measuring the relative deviation of the elicited preferences from a set of consistent preferences defined on the basis of the similarity mentioned above. 
Contrary to some previously introduced Chi-square-based approaches, no optimisation problems are involved. We verify that the new index satisfies some recently introduced characterising properties of inconsistency indices. Then, by means of numerical simulations, we compare our index with some other well-known inconsistency indices and we focus, in particular, on the comparison with Saaty’s consistency index. We discuss some numerical results showing that the new index is closely related to Saaty’s index but is more stable with respect to the number of alternatives. Journal: Journal of the Operational Research Society Pages: 1125-1134 Issue: 7 Volume: 69 Year: 2018 Month: 7 X-DOI: 10.1080/01605682.2017.1390523 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1390523 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:7:p:1125-1134 Template-Type: ReDIF-Article 1.0 Author-Name: Michael Becker-Peth Author-X-Name-First: Michael Author-X-Name-Last: Becker-Peth Author-Name: Ulrich W. Thonemann Author-X-Name-First: Ulrich W. Author-X-Name-Last: Thonemann Author-Name: Torsten Gully Author-X-Name-First: Torsten Author-X-Name-Last: Gully Title: A note on the risk aversion of informed newsvendors Abstract: The order behaviour of newsvendors has been extensively analysed in the behavioural operations literature and a robust observation has been that average order quantities are between expected-profit-maximising quantities and mean demand. This “pull-to-center” effect has been explained by anchoring, demand-chasing, inventory error minimisation, and other decision heuristics and biases. Risk preferences have been ruled out as an explanation of order behaviour, which we believe might have been premature. 
Risk preferences vary between people, and understanding the effect of risk preferences on ordering requires an analysis at the individual level and not only at the group level, which is the dominant approach used in the literature. In a controlled laboratory experiment, we measure individual risk preferences and analyse how they relate to order quantities. We find a significant correlation between individual risk preferences and order quantities, which indicates that risk preferences affect order behaviour. We also test how information about the effect of order quantities on the profit distribution affects ordering and find only a marginal moderation effect. Furthermore, our analyses show no mediation effect of risk preferences by gender, but a significant level effect of gender: female subjects anchor more on mean demand than male subjects. Journal: Journal of the Operational Research Society Pages: 1135-1145 Issue: 7 Volume: 69 Year: 2018 Month: 7 X-DOI: 10.1080/01605682.2017.1390525 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1390525 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:7:p:1135-1145 Template-Type: ReDIF-Article 1.0 Author-Name: Xiaojia Wang Author-X-Name-First: Xiaojia Author-X-Name-Last: Wang Author-Name: Wei Chen Author-X-Name-First: Wei Author-X-Name-Last: Chen Author-Name: Jennifer Shang Author-X-Name-First: Jennifer Author-X-Name-Last: Shang Author-Name: Shanlin Yang Author-X-Name-First: Shanlin Author-X-Name-Last: Yang Title: Foreign markets expansion for air medical transport business Abstract: Air medical transport has gained increasing popularity in modern society. However, the global market of this industry has not been fully explored either in theory or in practice. This research assesses the medical aviation market and identifies the most suitable regions for a private air transport company to expand its business on a global scale. 
We combine the analytic hierarchy process (AHP) and grey number (GN) theory to analyse the potential foreign market. Compared with conventional methods, the proposed model mitigates the adverse effects of uncertainty while providing a practical approach that considers management’s subjective judgements. The integrated model is comprehensive and flexible for assessing the demand for air medical service in different parts of the world. In addition to the air jet service industry, the GN-AHP model can be generalised to evaluate many other markets and industries. Journal: Journal of the Operational Research Society Pages: 1146-1159 Issue: 7 Volume: 69 Year: 2018 Month: 7 X-DOI: 10.1080/01605682.2017.1390529 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1390529 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:7:p:1146-1159 Template-Type: ReDIF-Article 1.0 Author-Name: Jack Jewson Author-X-Name-First: Jack Author-X-Name-Last: Jewson Author-Name: Simon French Author-X-Name-First: Simon Author-X-Name-Last: French Title: A comment on the Duckworth–Lewis–Stern method Journal: Journal of the Operational Research Society Pages: 1160-1163 Issue: 7 Volume: 69 Year: 2018 Month: 7 X-DOI: 10.1057/s41274-017-0281-9 File-URL: http://hdl.handle.net/10.1057/s41274-017-0281-9 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:69:y:2018:i:7:p:1160-1163 Template-Type: ReDIF-Article 1.0 Author-Name: Steven Stern Author-X-Name-First: Steven Author-X-Name-Last: Stern Title: Response to Jewson & French: ‘A comment on the Duckworth–Lewis–Stern method’ Journal: Journal of the Operational Research Society Pages: 1164-1165 Issue: 7 Volume: 69 Year: 2018 Month: 7 X-DOI: 10.1057/s41274-017-0282-8 File-URL: http://hdl.handle.net/10.1057/s41274-017-0282-8 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:7:p:1164-1165 Template-Type: ReDIF-Article 1.0 Author-Name: Chantal Baril Author-X-Name-First: Chantal Author-X-Name-Last: Baril Author-Name: Viviane Gascon Author-X-Name-First: Viviane Author-X-Name-Last: Gascon Author-Name: Dominic Vadeboncoeur Author-X-Name-First: Dominic Author-X-Name-Last: Vadeboncoeur Title: Discrete-event simulation and design of experiments to study ambulatory patient waiting time in an emergency department Abstract: Despite major investments in healthcare, access to front line health services (family doctors, walk-in clinics) is still difficult. If front line healthcare services remain insufficient, emergency departments will have to offer non-urgent patients appropriate services. More than half of emergency patients are ambulatory patients and their medical condition is not usually as serious as for patients on stretchers (Thibeault, 2014). This paper is devoted to the analysis of ambulatory patient length of stay in an emergency department of a hospital in the province of Quebec. The average length of stay for ambulatory patients in that hospital is slightly more than 7 hours, which exceeds the average of 4 hours observed for all emergency departments in Quebec. 
To identify the factors and their interactions affecting the performance of the hospital emergency department, measured by the average length of stay for ambulatory patients, experimental design and discrete-event simulation were used. This research aims at verifying how nurses can contribute to reducing emergency department overcrowding. Our results show that giving more responsibility to nurses (collective prescriptions, reviewing patients) greatly reduces the average patient length of stay with less financial effort than adding new doctors. This allows doctors to focus on the more acute patients. Journal: Journal of the Operational Research Society Pages: 2019-2038 Issue: 12 Volume: 70 Year: 2019 Month: 12 X-DOI: 10.1080/01605682.2018.1510805 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1510805 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:12:p:2019-2038 Template-Type: ReDIF-Article 1.0 Author-Name: Sui-zhi Luo Author-X-Name-First: Sui-zhi Author-X-Name-Last: Luo Author-Name: Hong-yu Zhang Author-X-Name-First: Hong-yu Author-X-Name-Last: Zhang Author-Name: Jian-qiang Wang Author-X-Name-First: Jian-qiang Author-X-Name-Last: Wang Author-Name: Lin Li Author-X-Name-First: Lin Author-X-Name-Last: Li Title: Group decision-making approach for evaluating the sustainability of constructed wetlands with probabilistic linguistic preference relations Abstract: Industrialisation and urbanisation have led to the substantial decrease of natural wetlands. Thus, assessing the sustainability of artificial wetlands has become an important topic. Generally, people are more accustomed to using linguistic phrases to express preference information. In this case, utilising probabilistic linguistic term sets (PLTSs) is a good choice for information expression. Besides, decision makers (DMs) may prefer to construct preference matrices by comparing the pairwise alternatives. 
Accordingly, this study focuses on probabilistic linguistic preference relations (PLPRs). The cosine similarity measure of PLTSs is first proposed. Thereafter, the consistency/consensus issues are discussed and a strategy based on trust degree is investigated to derive the importance degrees of experts. Then an optimisation model is developed to calculate the priority weights. The main novelty is the idea of maximising the similarity measure. Finally, an example of wetlands assessment is illustrated to verify the effectiveness and advantages of our approach. Journal: Journal of the Operational Research Society Pages: 2039-2055 Issue: 12 Volume: 70 Year: 2019 Month: 12 X-DOI: 10.1080/01605682.2018.1510806 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1510806 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:12:p:2039-2055 Template-Type: ReDIF-Article 1.0 Author-Name: Esra Adıyeke Author-X-Name-First: Esra Author-X-Name-Last: Adıyeke Author-Name: Ethem Çanakoğlu Author-X-Name-First: Ethem Author-X-Name-Last: Çanakoğlu Author-Name: Semra Ağralı Author-X-Name-First: Semra Author-X-Name-Last: Ağralı Title: Risk averse investment strategies for a private electricity generating company in a carbon constrained environment Abstract: We study a private electricity generating company that plans to enter a partially regulated market that operates under an active cap and trade system. There are different types of thermal and renewable power plants that the company considers investing in over a predetermined planning horizon. Thermal power plants may include a carbon capture and storage technology in order to comply with the carbon limitations. We develop a time-consistent multi-stage stochastic optimization model for this investment problem, where the objective is to minimize the conditional value at risk (CV@R) of the net present value of the profit obtained through the planning horizon. 
We implement the model for a hypothetical generating company located in Turkey. The results show that the developed model is appropriate for determining risk averse investment strategies for a company that operates under carbon restricted market conditions. Journal: Journal of the Operational Research Society Pages: 2056-2068 Issue: 12 Volume: 70 Year: 2019 Month: 12 X-DOI: 10.1080/01605682.2018.1535265 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1535265 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:12:p:2056-2068 Template-Type: ReDIF-Article 1.0 Author-Name: Nenggui Zhao Author-X-Name-First: Nenggui Author-X-Name-Last: Zhao Author-Name: Qiang Wang Author-X-Name-First: Qiang Author-X-Name-Last: Wang Author-Name: Ping Cao Author-X-Name-First: Ping Author-X-Name-Last: Cao Author-Name: Jie Wu Author-X-Name-First: Jie Author-X-Name-Last: Wu Title: Dynamic pricing with reference price effect and price-matching policy in the presence of strategic consumers Abstract: In this paper, we consider a retailer that sells a product with high storage cost over a two-period horizon. The goal is to investigate the single and combined effects of reference price and price-matching (PM) on the purchasing behavior of consumers and determine how these affect the retailer’s optimal pricing decisions and optimal total discounted revenue. We first present a discrete-time dynamic pricing (DP) model over a two-period horizon with a reference price effect in the presence of strategic consumers. This is then extended to another DP model in which the retailer implements a PM policy. The results show that under the reference price effect, the retailer’s revenue will always decrease, even when a PM policy is implemented. The PM policy is not always beneficial for the retailer, especially when the discount factor is infinitely close to 1. 
Second, we propose a model misspecification approach to investigate the effect of the reference price. Furthermore, the value of a PM policy is discussed. We find that the value of a PM policy is generally best when the discount factor is at a threshold value and also that the value of PM is much greater in the presence of strategic purchasing behavior. Journal: Journal of the Operational Research Society Pages: 2069-2083 Issue: 12 Volume: 70 Year: 2019 Month: 12 X-DOI: 10.1080/01605682.2018.1510809 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1510809 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:12:p:2069-2083 Template-Type: ReDIF-Article 1.0 Author-Name: Lingna Lin Author-X-Name-First: Lingna Author-X-Name-Last: Lin Author-Name: He Wang Author-X-Name-First: He Author-X-Name-Last: Wang Title: Dynamic incentive model of knowledge sharing in construction project team based on differential game Abstract: The construction project team is a demanding, high-stress environment, yet wary participants can be extremely reluctant to share their knowledge with others. This study targets dynamic knowledge sharing in a construction project team, constructing a dynamic incentive model framework through differential game theory; the Hamilton–Jacobi–Bellman equation is applied to solve a Nash non-cooperative game and leader-follower differential games. The results show that the optimal strategy of the Nash game is that agents do not share any knowledge and the principal does not give any incentives. However, the participants will share the cumulative amount of knowledge in the leader-follower differential games, the optimal profits of the agents and the principal increase over time, and the agents’ effort level of knowledge sharing eventually tends to stability. 
Journal: Journal of the Operational Research Society Pages: 2084-2096 Issue: 12 Volume: 70 Year: 2019 Month: 12 X-DOI: 10.1080/01605682.2018.1516177 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1516177 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:12:p:2084-2096 Template-Type: ReDIF-Article 1.0 Author-Name: Intissar Ben Othmane Author-X-Name-First: Intissar Author-X-Name-Last: Ben Othmane Author-Name: Monia Rekik Author-X-Name-First: Monia Author-X-Name-Last: Rekik Author-Name: Sehl Mellouli Author-X-Name-First: Sehl Author-X-Name-Last: Mellouli Title: A profit-maximization heuristic for combinatorial bid construction with pre-existing network restrictions Abstract: This paper proposes a heuristic approach for constructing combinatorial bids in TL transportation services procurement auctions. It considers the case where the carrier has already engaged in a set of transportation contracts before participating in the auction. The proposed heuristic identifies profitable new contracts and efficiently integrates them into the carrier’s existing routes, and/or builds new routes for unused vehicles. Experimental results demonstrate the efficiency of the proposed heuristic in terms of computing times and solution quality. Journal: Journal of the Operational Research Society Pages: 2097-2111 Issue: 12 Volume: 70 Year: 2019 Month: 12 X-DOI: 10.1080/01605682.2018.1512844 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1512844 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:70:y:2019:i:12:p:2097-2111 Template-Type: ReDIF-Article 1.0 Author-Name: Qingxian An Author-X-Name-First: Qingxian Author-X-Name-Last: An Author-Name: Yao Wen Author-X-Name-First: Yao Author-X-Name-Last: Wen Author-Name: Junfei Chu Author-X-Name-First: Junfei Author-X-Name-Last: Chu Author-Name: Xiaohong Chen Author-X-Name-First: Xiaohong Author-X-Name-Last: Chen Title: Profit inefficiency decomposition in a serial-structure system with resource sharing Abstract: Previous studies have used overall profit inefficiency (OPI) to assess the overall profit improvement of firms and decomposed OPI into technical and allocative components to identify the specific sources of overall potential gains. Resource sharing among the stages of a network structure system, as a type of resource allocation, is also a source of increasing system profit; however, this kind of source has not been identified and specified in previous works. Based on network data envelopment analysis (DEA), this study explores the OPI decomposition issue in a three-stage serial-structure system with resource sharing existing among stages. We prove that resource sharing among stages can bring potential gains. Using a multiplicative decomposition method based on the measure of profit ratios, we decompose the OPI into the product of technical profit inefficiency (TPI), resource sharing profit inefficiency (RSPI), and free allocation profit inefficiency (FAPI), where RSPI multiplied by FAPI is consistent with the allocative component in previous studies. A numerical example is used to illustrate our approach, and the results show that the overall potential gains can be decomposed into gains derived from removing technical inefficiency, resource sharing, and free allocation within the whole production possibility set. 
Journal: Journal of the Operational Research Society Pages: 2112-2126 Issue: 12 Volume: 70 Year: 2019 Month: 12 X-DOI: 10.1080/01605682.2018.1510810 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1510810 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:12:p:2112-2126 Template-Type: ReDIF-Article 1.0 Author-Name: Barnabé Walheer Author-X-Name-First: Barnabé Author-X-Name-Last: Walheer Title: Input allocation in multi-output settings: Nonparametric robust efficiency measurements Abstract: Comparing decision making units to detect their potential efficiency improvement, without relying on parametric unverifiable assumptions about the production process, is the goal of nonparametric efficiency analysis (such as FDH, DEA). While such methods have demonstrated their practical usefulness, practitioners sometimes have doubts about their fairness. In multi-output settings, two main limitations could lend credence to their doubts: (1) the production process is modelled as a “black box,” i.e., it is implicitly assumed that all the inputs produce simultaneously all the outputs; (2) the only existing techniques investigate outliers in all output directions simultaneously. In this article, we tackle these two limitations by presenting two new nonparametric robust efficiency measurements for multi-output settings. Our new measurements present several attractive features. First, they increase the realism of the modelling by taking the links between inputs and outputs into account, and thus tackle (1). Second, they provide flexibility in the outlier detection exercise, and thus also tackle (2). Overall, our new measurements better use the data available, and can be seen as natural extensions of well-known nonparametric robust efficiency measurements for multi-output contexts. To demonstrate the usefulness of our method, we propose both a simulation and an empirical application. 
Journal: Journal of the Operational Research Society Pages: 2127-2142 Issue: 12 Volume: 70 Year: 2019 Month: 12 X-DOI: 10.1080/01605682.2018.1516175 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1516175 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:12:p:2127-2142 Template-Type: ReDIF-Article 1.0 Author-Name: Akram Dehnokhalaji Author-X-Name-First: Akram Author-X-Name-Last: Dehnokhalaji Author-Name: Narges Soltani Author-X-Name-First: Narges Author-X-Name-Last: Soltani Title: Gradual efficiency improvement through a sequence of targets Abstract: The goal in efficiency analysis is not only to evaluate a decision-making unit’s (DMU’s) performance, but also to find an efficient target which provides information on inputs reduction and outputs increment values that are necessary to remove inefficiencies for each inefficient DMU. In data envelopment analysis (DEA), the target unit is located on the efficient frontier and possibly far from the unit under assessment. Therefore, in practice, performance improvement may seem disappointing or even impossible to achieve in only one step for some inefficient DMUs. In this regard, finding intermediate targets is of great importance in the benchmarking literature. In this article, we find a sequence of targets instead of a single target for each inefficient unit. In our method, the intermediate target at each step has three properties: (I) the intermediate targets and the unit under evaluation are all similar in size; (II) efficiency scores are ascending through the sequence of targets; (III) the target unit at each step is as close as possible to the specific part of the efficient frontier. These properties lead to finding a target that is more achievable in real applications. 
Journal: Journal of the Operational Research Society Pages: 2143-2152 Issue: 12 Volume: 70 Year: 2019 Month: 12 X-DOI: 10.1080/01605682.2018.1529723 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1529723 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:12:p:2143-2152 Template-Type: ReDIF-Article 1.0 Author-Name: Anyu Yu Author-X-Name-First: Anyu Author-X-Name-Last: Yu Author-Name: Yilei Shao Author-X-Name-First: Yilei Author-X-Name-Last: Shao Author-Name: Jianxin You Author-X-Name-First: Jianxin Author-X-Name-Last: You Author-Name: Maoguo Wu Author-X-Name-First: Maoguo Author-X-Name-Last: Wu Author-Name: Tao Xu Author-X-Name-First: Tao Author-X-Name-Last: Xu Title: Estimations of operational efficiencies and potential income gains considering the credit risk for China’s banks Abstract: This paper proposes a method framework to estimate operational efficiencies and potential income gains considering the credit risk for banks. The method is based on the optimization of operational income, interest income, and non-performing loan amounts. As main innovations, potential interest income gains from credit technology improvement and loan provision reduction are detected. Operational capability restriction is considered by an inverse-like DEA model. Based on an empirical study of Chinese banks, some suggestions are obtained: (1) diverse operational efficiencies are observed for bank groups. Operational efficiencies of rural commercial banks became worse after going public. (2) For city-owned and rural commercial banks, the investment performance and financial services should be improved to increase operational incomes. Excessive loan provisioning should be handled cautiously to prevent further non-performing loans. (3) Credit risk technology improvement should be addressed by state-owned and rural commercial banks. 
Their operational inefficiencies are mainly from weak credit risk control. Research Highlights: A modified data envelopment analysis for output optimisation is proposed. Potential interest gains have been decomposed into parts for different causes. Operational capacity restrictions are considered in potential output estimations. The approach is applied to measure banks’ operational performance in China. Future suggestions for bank groups are provided in the empirical study. Journal: Journal of the Operational Research Society Pages: 2153-2168 Issue: 12 Volume: 70 Year: 2019 Month: 12 X-DOI: 10.1080/01605682.2018.1510808 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1510808 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:12:p:2153-2168 Template-Type: ReDIF-Article 1.0 Author-Name: Yongjae Lee Author-X-Name-First: Yongjae Author-X-Name-Last: Lee Author-Name: Min Jeong Kim Author-X-Name-First: Min Jeong Author-X-Name-Last: Kim Author-Name: Jang Ho Kim Author-X-Name-First: Jang Ho Author-X-Name-Last: Kim Author-Name: Ju Ri Jang Author-X-Name-First: Ju Ri Author-X-Name-Last: Jang Author-Name: Woo Chang Kim Author-X-Name-First: Woo Chang Author-X-Name-Last: Kim Title: Sparse and robust portfolio selection via semi-definite relaxation Abstract: In investment management, especially for automated investment services, it is critical for portfolios to have a manageable number of assets and robust performance. First, portfolios should not contain too many assets in order to reduce the management fees, transaction costs, and taxes. Second, portfolios should be robust as investment environments change rapidly. In this study, therefore, we propose two convex portfolio selection models that provide portfolios that are sparse and robust. 
We first perform semi-definite relaxation to develop a sparse mean-variance portfolio selection model, and further extend the model by using L2-norm regularization and worst-case optimization to formulate two sparse and robust portfolio selection models. Empirical analyses with historical stock returns demonstrate the effectiveness of the proposed models in forming sparse and robust portfolios. Journal: Journal of the Operational Research Society Pages: 687-699 Issue: 5 Volume: 71 Year: 2020 Month: 5 X-DOI: 10.1080/01605682.2019.1581408 File-URL: http://hdl.handle.net/10.1080/01605682.2019.1581408 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:71:y:2020:i:5:p:687-699 Template-Type: ReDIF-Article 1.0 Author-Name: Zhang-Peng Tian Author-X-Name-First: Zhang-Peng Author-X-Name-Last: Tian Author-Name: Ru-Xin Nie Author-X-Name-First: Ru-Xin Author-X-Name-Last: Nie Author-Name: Jian-Qiang Wang Author-X-Name-First: Jian-Qiang Author-X-Name-Last: Wang Title: Probabilistic linguistic multi-criteria decision-making based on evidential reasoning and combined ranking methods considering decision-makers’ psychological preferences Abstract: This study aims to develop an integrated approach for solving probabilistic linguistic multi-criteria decision-making (MCDM) problems. This study first reveals the limitations in the existing methods for probabilistic linguistic term sets (PLTSs). Subsequently, an improved aggregation method and a novel ranking method are developed for addressing PLTSs. The proposed aggregation method is based on the evidential reasoning algorithm, and the proposed ranking method integrates a three-fold ranking based on optimistic, neutral and pessimistic decision-making processes. Thus, the proposed approach can straightforwardly and robustly deal with probabilistic linguistic MCDM problems considering decision-makers’ psychological preferences. 
Moreover, to flexibly obtain criteria weights, several models are constructed to adapt to different decision-making situations, in which criteria weight information is incomplete, inconsistent, or completely unknown. Finally, a case study on selecting the best investment objective(s) among the member countries of “One Belt, One Road” is conducted to validate the feasibility and effectiveness of the proposed approach, followed by a comparative analysis between the existing methods and the proposed approach. Journal: Journal of the Operational Research Society Pages: 700-717 Issue: 5 Volume: 71 Year: 2020 Month: 5 X-DOI: 10.1080/01605682.2019.1632752 File-URL: http://hdl.handle.net/10.1080/01605682.2019.1632752 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:71:y:2020:i:5:p:700-717 Template-Type: ReDIF-Article 1.0 Author-Name: Ming-Miin Yu Author-X-Name-First: Ming-Miin Author-X-Name-Last: Yu Author-Name: Li-Hsueh Chen Author-X-Name-First: Li-Hsueh Author-X-Name-Last: Chen Title: Evaluation of efficiency and technological bias of tourist hotels by a meta-frontier DEA model Abstract: Tourist hotels may face different production frontiers and bias of technology between the group frontier and meta-frontier due to technological heterogeneity. This paper develops a meta-frontier data envelopment analysis framework for evaluating the efficiency and technological bias of tourist hotels. By comparing the curvatures of the group frontier and meta-frontier, the relative technological bias between a specific group and the whole industry can be obtained. Furthermore, by investigating the relative technological bias, the direction of technological improvement needed for individual hotels can be ascertained. The proposed method is applied in an empirical example of Taiwanese tourist hotels. 
The results indicate that most hotels have technological bias and should adjust the curve of their production possibility frontier to match the meta-technology. Journal: Journal of the Operational Research Society Pages: 718-732 Issue: 5 Volume: 71 Year: 2020 Month: 5 X-DOI: 10.1080/01605682.2019.1578625 File-URL: http://hdl.handle.net/10.1080/01605682.2019.1578625 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:71:y:2020:i:5:p:718-732 Template-Type: ReDIF-Article 1.0 Author-Name: Muhammet Gul Author-X-Name-First: Muhammet Author-X-Name-Last: Gul Author-Name: Ali Fuat Guneri Author-X-Name-First: Ali Fuat Author-X-Name-Last: Guneri Author-Name: Murat M. Gunal Author-X-Name-First: Murat M. Author-X-Name-Last: Gunal Title: Emergency department network under disaster conditions: The case of possible major Istanbul earthquake Abstract: Emergency departments (EDs) provide health care services to people in need of urgent care. Their role is remarkable when extraordinary events that affect the public, such as earthquakes, occur. In this paper, we present a hybrid framework to evaluate the earthquake preparedness of EDs in cities. Our hybrid framework uses artificial neural networks (ANNs) to estimate the number of casualties and discrete event simulation (DES) to analyse the effect of a surge in patient demand in EDs after an earthquake happens. At the core of our framework resides the Earthquake Time Emergency Department Network Simulation Model (ET-EDNETSIM), which can simulate patient movements in a network of multiple and coordinated EDs. With the design of simulation experiments, different resource levels and sharing rules between EDs can be evaluated. We demonstrated our framework in a network of five EDs located in the region which is estimated to have the highest injury rate after an earthquake in Istanbul, Turkey. Results of our study contributed to the planning for the expected earthquake in Istanbul. 
Simulating a network of EDs extends the individual ED studies in the literature; furthermore, our hybrid framework can help increase earthquake preparedness in cities around the world. On the methodological side, the use of ANNs, which belong to the family of machine learning (ML) algorithms, in our hybrid framework also shows the close links between ML and DES. Journal: Journal of the Operational Research Society Pages: 733-747 Issue: 5 Volume: 71 Year: 2020 Month: 5 X-DOI: 10.1080/01605682.2019.1582588 File-URL: http://hdl.handle.net/10.1080/01605682.2019.1582588 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:71:y:2020:i:5:p:733-747 Template-Type: ReDIF-Article 1.0 Author-Name: Ming Liu Author-X-Name-First: Ming Author-X-Name-Last: Liu Author-Name: Xifen Xu Author-X-Name-First: Xifen Author-X-Name-Last: Xu Author-Name: Jie Cao Author-X-Name-First: Jie Author-X-Name-Last: Cao Author-Name: Ding Zhang Author-X-Name-First: Ding Author-X-Name-Last: Zhang Title: Integrated planning for public health emergencies: A modified model for controlling H1N1 pandemic Abstract: Infectious disease outbreaks have occurred many times in the past decades and are more likely to occur in the future. Recently, Büyüktahtakin et al. (2018) proposed a new epidemics-logistics model to control the 2014 Ebola outbreak in West Africa. Considering that different diseases have dissimilar diffusion dynamics and can cause different public health emergencies, we modify the proposed model by changing its capacity constraint, and then apply it to control the 2009 H1N1 outbreak in China. We formulate the problem as a mixed-integer non-linear programming (MINLP) model and simultaneously determine when to open new isolated wards and when to close unused isolated wards. 
The test results reveal that our model could provide effective suggestions for controlling the H1N1 outbreak, including the appropriate capacity setting and the minimum budget required with different intervention start times. Journal: Journal of the Operational Research Society Pages: 748-761 Issue: 5 Volume: 71 Year: 2020 Month: 5 X-DOI: 10.1080/01605682.2019.1582589 File-URL: http://hdl.handle.net/10.1080/01605682.2019.1582589 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:71:y:2020:i:5:p:748-761 Template-Type: ReDIF-Article 1.0 Author-Name: Carolina Marcelino Author-X-Name-First: Carolina Author-X-Name-Last: Marcelino Author-Name: Manuel Baumann Author-X-Name-First: Manuel Author-X-Name-Last: Baumann Author-Name: Leonel Carvalho Author-X-Name-First: Leonel Author-X-Name-Last: Carvalho Author-Name: Nelson Chibeles-Martins Author-X-Name-First: Nelson Author-X-Name-Last: Chibeles-Martins Author-Name: Marcel Weil Author-X-Name-First: Marcel Author-X-Name-Last: Weil Author-Name: Paulo Almeida Author-X-Name-First: Paulo Author-X-Name-Last: Almeida Author-Name: Elizabeth Wanner Author-X-Name-First: Elizabeth Author-X-Name-Last: Wanner Title: A combined optimisation and decision-making approach for battery-supported HMGS Abstract: Hybrid micro-grid systems (HMGS) are gaining increasing attention worldwide. The balance between electricity load and generation based on fluctuating renewable energy sources is a main challenge in the operation and design of HMGS. Battery energy storage systems are considered essential components for integrating high shares of renewable energy into a HMGS. Currently, there are very few studies in the field of mathematical optimisation and multi-criteria decision analysis that focus on the evaluation of different battery technologies and their impact on the HMGS design. 
The model proposed in this paper aims at optimising three different criteria: minimising electricity costs, reducing the loss of load probability, and maximising the use of locally available renewable energy. The model is applied in a case study in southern Germany. The optimisation is carried out using the C-DEEPSO algorithm. Its results are used as input for an AHP-TOPSIS model to identify the most suitable alternative out of five different battery technologies using expert weights. Lithium batteries are considered the best solution with regard to the given group preferences and the optimisation results. Journal: Journal of the Operational Research Society Pages: 762-774 Issue: 5 Volume: 71 Year: 2020 Month: 5 X-DOI: 10.1080/01605682.2019.1582590 File-URL: http://hdl.handle.net/10.1080/01605682.2019.1582590 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:71:y:2020:i:5:p:762-774 Template-Type: ReDIF-Article 1.0 Author-Name: Maha Bakoben Author-X-Name-First: Maha Author-X-Name-Last: Bakoben Author-Name: Tony Bellotti Author-X-Name-First: Tony Author-X-Name-Last: Bellotti Author-Name: Niall Adams Author-X-Name-First: Niall Author-X-Name-Last: Adams Title: Identification of credit risk based on cluster analysis of account behaviours Abstract: Assessment of risk levels for existing credit accounts is important to the implementation of bank policies and offering financial products. This article uses cluster analysis of behaviour of credit card accounts to help assess credit risk level. Account behaviour is modelled parametrically and we then implement the behavioural cluster analysis using a recently proposed dissimilarity measure of statistical model parameters. The advantage of this new measure is the explicit exploitation of uncertainty associated with parameters estimated from statistical models. 
Interesting clusters of real credit card behaviour data are obtained, in addition to superior prediction and forecasting of account default based on the clustering outcomes. Journal: Journal of the Operational Research Society Pages: 775-783 Issue: 5 Volume: 71 Year: 2020 Month: 5 X-DOI: 10.1080/01605682.2019.1582586 File-URL: http://hdl.handle.net/10.1080/01605682.2019.1582586 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:71:y:2020:i:5:p:775-783 Template-Type: ReDIF-Article 1.0 Author-Name: Omer Ozkan Author-X-Name-First: Omer Author-X-Name-Last: Ozkan Author-Name: Murat Ermis Author-X-Name-First: Murat Author-X-Name-Last: Ermis Author-Name: Ilker Bekmezci Author-X-Name-First: Ilker Author-X-Name-Last: Bekmezci Title: Reliable communication network design: The hybridisation of metaheuristics with the branch and bound method Abstract: Reliable communication network design (RCND) is a well-known optimisation problem to produce a network with maximum reliability. This paper addresses the minimum cost communication network design problem under the all-terminal reliability constraint. Due to the NP-hard nature of RCND, several different metaheuristic algorithms have been widely applied to solve this problem. The aim of this paper is to propose two new hybrid metaheuristic algorithms, namely, GABB and SABB, by integrating either a Genetic Algorithm (GA) with the Branch and Bound method (B&B) or Simulated Annealing (SA) with B&B. The GABB and SABB algorithms have the advantage of finding higher performance solutions produced from the GA or SA, along with the ability to repair infeasible solutions or improve solution quality by integrating the B&B method. 
To investigate the effectiveness of the proposed algorithms, extensive comparisons with the individual application of the GA and SA (the basic forms of GABB and SABB), the two hybrid algorithms (GABB and SABB), and two other approaches (ACO_SA and STH) that give the best results in the literature for these design problems are carried out in a three-stage experimental study (i.e., small-, medium-, and large-sized networks). The computational results show that hybridisation of metaheuristics with the B&B method is an effective approach to designing reliable networks and finding better solutions for existing problems in the literature. Journal: Journal of the Operational Research Society Pages: 784-799 Issue: 5 Volume: 71 Year: 2020 Month: 5 X-DOI: 10.1080/01605682.2019.1582587 File-URL: http://hdl.handle.net/10.1080/01605682.2019.1582587 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:71:y:2020:i:5:p:784-799 Template-Type: ReDIF-Article 1.0 Author-Name: Arash Geramian Author-X-Name-First: Arash Author-X-Name-Last: Geramian Author-Name: Arash Shahin Author-X-Name-First: Arash Author-X-Name-Last: Shahin Author-Name: Behzad Minaei Author-X-Name-First: Behzad Author-X-Name-Last: Minaei Author-Name: Jiju Antony Author-X-Name-First: Jiju Author-X-Name-Last: Antony Title: Enhanced FMEA: An integrative approach of fuzzy logic-based FMEA and collective process capability analysis Abstract: The aim of this study is to modify and enhance the quantitative/mathematical features of both computational and analytical aspects of the process failure modes and effects analysis (FMEA). For this purpose, a hybrid approach including the Fuzzy Logic-based FMEA (FFMEA) and collective process capability analysis (CPCA) has been developed in three phases. First, failure modes have been defined based on lack of quality in the quality characteristics under investigation, and then they have been prioritised using FFMEA. 
Second, the most critical failure has been selected for statistical analysis using CPCA, leading to the corrective actions in the third phase. The proposed approach was investigated in an electrical-equipment-manufacturing company. Findings indicated that the diameter deviation in Insulator A was the most critical failure effect caused by a rightward mean shift of 0.32 cm. In addition, Cpk has been improved from 0.41 to 1.12, and defective products have been reduced from 115,083.09 to 336.98 parts per million. Journal: Journal of the Operational Research Society Pages: 800-812 Issue: 5 Volume: 71 Year: 2020 Month: 5 X-DOI: 10.1080/01605682.2019.1606986 File-URL: http://hdl.handle.net/10.1080/01605682.2019.1606986 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:71:y:2020:i:5:p:800-812 Template-Type: ReDIF-Article 1.0 Author-Name: Lu Zhen Author-X-Name-First: Lu Author-X-Name-Last: Zhen Author-Name: Wenya Lv Author-X-Name-First: Wenya Author-X-Name-Last: Lv Author-Name: Kai Wang Author-X-Name-First: Kai Author-X-Name-Last: Wang Author-Name: Chengle Ma Author-X-Name-First: Chengle Author-X-Name-Last: Ma Author-Name: Ziheng Xu Author-X-Name-First: Ziheng Author-X-Name-Last: Xu Title: Consistent vehicle routing problem with simultaneous distribution and collection Abstract: To improve customer service in the reverse logistics, this article defines a new variant of the vehicle routing problem (VRP) by combining the consistent VRP (ConVRP) and the VRP with simultaneous distribution and collection (VRPSDC). This new variant is called the consistent vehicle routing problem with simultaneous distribution and collection, for which a mixed-integer programming model is formulated. To solve this problem, three heuristics are proposed on the basis of the record-to-record (RTR) travel algorithm, the local search with variable neighbourhood search (LSVNS), and the tabu search-based method. 
Numerical experiments are performed to validate the efficiency of our proposed solution methods and the effectiveness of the proposed model. The results show that the RTR-based heuristic has an advantage in small-scale instances. However, for medium-scale instances, the best option is the LSVNS-based heuristic, which can solve instances with 40 customers and 5 days within 10 s. Moreover, the LSVNS-based heuristic can solve large-scale instances with 200 customers and 5 days within 3 hours. Journal: Journal of the Operational Research Society Pages: 813-830 Issue: 5 Volume: 71 Year: 2020 Month: 5 X-DOI: 10.1080/01605682.2019.1590134 File-URL: http://hdl.handle.net/10.1080/01605682.2019.1590134 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:71:y:2020:i:5:p:813-830 Template-Type: ReDIF-Article 1.0 Author-Name: Decui Liang Author-X-Name-First: Decui Author-X-Name-Last: Liang Author-Name: Adjei Peter Darko Author-X-Name-First: Adjei Peter Author-X-Name-Last: Darko Author-Name: Zeshui Xu Author-X-Name-First: Zeshui Author-X-Name-Last: Xu Author-Name: Yinrunjie Zhang Author-X-Name-First: Yinrunjie Author-X-Name-Last: Zhang Title: Partitioned fuzzy measure-based linear assignment method for Pythagorean fuzzy multi-criteria decision-making with a new likelihood Abstract: The aim of this paper is to develop an extended linear assignment method to solve multi-criteria decision-making (MCDM) problems under the Pythagorean fuzzy environment, where the criteria values take the form of Pythagorean fuzzy numbers (PFNs) and the information about criteria weights is correlative. In order to obtain the criteria-wise rankings of the linear assignment method, we first define a new likelihood for the comparison between PFNs. Then, we introduce the fuzzy measure to determine the weighted-rank frequency matrix of the linear assignment method. 
Unlike the existing literature on the fuzzy measure, this paper incorporates the partitioned structure of the criteria set and proposes a new partitioned fuzzy measure. Further, we design the extended linear assignment method by using the new likelihood of PFNs and the partitioned fuzzy measure for Pythagorean fuzzy multi-criteria decision-making (PFMCDM). Finally, a practical example is used to illustrate and verify our proposed method. Journal: Journal of the Operational Research Society Pages: 831-845 Issue: 5 Volume: 71 Year: 2020 Month: 5 X-DOI: 10.1080/01605682.2019.1590133 File-URL: http://hdl.handle.net/10.1080/01605682.2019.1590133 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:71:y:2020:i:5:p:831-845 Template-Type: ReDIF-Article 1.0 Author-Name: Henrik C. Bylling Author-X-Name-First: Henrik C. Author-X-Name-Last: Bylling Author-Name: Steven A. Gabriel Author-X-Name-First: Steven A. Author-X-Name-Last: Gabriel Author-Name: Trine K. Boomsma Author-X-Name-First: Trine K. Author-X-Name-Last: Boomsma Title: A parametric programming approach to bilevel optimisation with lower-level variables in the upper level Abstract: This paper examines linearly constrained bilevel programming problems in which the upper-level objective function depends on both the lower-level primal and dual optimal solutions. We parametrize the lower-level solutions and thereby the upper-level objective function by the upper-level variables and argue that it may be non-convex and even discontinuous. However, when the upper-level objective is affine in the lower-level primal optimal solution, the parametric function is piece-wise linear. We show how this property facilitates the application of parametric programming and demonstrate how the approach allows for decomposition of a separable lower-level problem. 
When the upper-level objective is bilinear in the lower-level primal and dual optimal solutions, we also provide an exact linearisation method that reduces the bilevel problem to a single-level mixed-integer linear programme (MILP). We assess the performance of the parametric programming approach on two case studies of strategic investment in electricity markets and benchmark against state-of-the-art MILP and non-linear solution methods for bilevel optimisation problems. Preliminary results indicate substantial computational advantages over several standard solvers, especially when the lower-level problem separates into a large number of subproblems. Furthermore, we show that the parametric programming approach succeeds in solving problems to global optimality for which standard methods can fail. Journal: Journal of the Operational Research Society Pages: 846-865 Issue: 5 Volume: 71 Year: 2020 Month: 5 X-DOI: 10.1080/01605682.2019.1590132 File-URL: http://hdl.handle.net/10.1080/01605682.2019.1590132 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:71:y:2020:i:5:p:846-865 Template-Type: ReDIF-Article 1.0 Author-Name: Niloy J. Mukherjee Author-X-Name-First: Niloy J. Author-X-Name-Last: Mukherjee Author-Name: Subhash C. Sarin Author-X-Name-First: Subhash C. Author-X-Name-Last: Sarin Title: Comparison of single sourcing (with lot streaming) and dual-sourcing Abstract: Dual-sourcing is a strategy of ordering the material required to process a lot (order) from two suppliers, instead of a single supplier. This strategy has been well-studied in the literature and has been shown to reduce lead time. However, the additional ordering cost incurred for dual-sourcing makes this strategy unattractive. 
In this paper, we compare it to an alternative strategy of sourcing the required material from a single supplier, but permitting its delivery in two partial shipments (referred to here as single-sourcing (with lot streaming)). We consider the case when the supplier lead time is stochastic and its distribution depends on the size of the lot processed. We show that single-sourcing (with lot streaming): (1) is less prone to stockouts than dual-sourcing, (2) incurs lower expected lead time for a given stockout risk when the time between orders in dual-sourcing is large enough, and (3) results in lower inventory levels given the same lead time performance and allowable stockout risk when the supplier’s processing times are significantly smaller than those of the manufacturer. The risk of delay in the arrival of the first sublot is smaller for dual-sourcing; however, this advantage decreases as the time between orders increases. Journal: Journal of the Operational Research Society Pages: 1701-1710 Issue: 11 Volume: 69 Year: 2018 Month: 11 X-DOI: 10.1080/01605682.2017.1404182 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1404182 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:11:p:1701-1710 Template-Type: ReDIF-Article 1.0 Author-Name: Harish Garg Author-X-Name-First: Harish Author-X-Name-Last: Garg Author-Name: Rishu Arora Author-X-Name-First: Rishu Author-X-Name-Last: Arora Title: Bonferroni mean aggregation operators under intuitionistic fuzzy soft set environment and their applications to decision-making Abstract: Intuitionistic fuzzy soft set (IFSS) theory is one of the successful extensions of soft set theory, which handles uncertainty in data, as compared to the existing theories, by introducing a parametrisation factor during the decision-making process. 
Under this IFSS environment, the present paper develops new Bonferroni mean (BM) and weighted BM averaging operators for aggregating the different preferences of the decision-maker. Some of their desirable properties are also discussed in detail. Further, a decision-making method based on the proposed operators is presented and then illustrated with a numerical example. A comparative analysis between the proposed and the existing measures under the IFSS environment is performed in terms of counter-intuitive cases to show its validity. Journal: Journal of the Operational Research Society Pages: 1711-1724 Issue: 11 Volume: 69 Year: 2018 Month: 11 X-DOI: 10.1080/01605682.2017.1409159 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1409159 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:11:p:1711-1724 Template-Type: ReDIF-Article 1.0 Author-Name: Ehsan Mehdad Author-X-Name-First: Ehsan Author-X-Name-Last: Mehdad Author-Name: Jack P. C. Kleijnen Author-X-Name-First: Jack P. C. Author-X-Name-Last: Kleijnen Title: Efficient global optimisation for black-box simulation via sequential intrinsic Kriging Abstract: Efficient global optimisation (EGO) is a popular method that searches sequentially for the global optimum of a simulated system. EGO treats the simulation model as a black box, and balances local and global searches. In deterministic simulation, classic EGO uses ordinary Kriging (OK), which is a special case of universal Kriging (UK). In our EGO variant we use intrinsic Kriging (IK), which does not need to estimate the parameters that quantify the trend in UK. In random simulation, classic EGO uses stochastic Kriging (SK), but we replace SK by stochastic IK (SIK). Moreover, in random simulation, EGO needs to select the number of replications per simulated input combination, accounting for the heteroscedastic variances of the simulation outputs. 
A popular method uses optimal computer budget allocation (OCBA), which allocates the available total number of replications to simulated combinations. We replace OCBA by a new allocation algorithm. We perform several numerical experiments with deterministic simulations and random simulations. These experiments suggest that (1) in deterministic simulations, EGO with IK outperforms classic EGO; (2) in random simulations, EGO with SIK and our allocation rule does not perform significantly better than EGO with SK and OCBA. Journal: Journal of the Operational Research Society Pages: 1725-1737 Issue: 11 Volume: 69 Year: 2018 Month: 11 X-DOI: 10.1080/01605682.2017.1409154 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1409154 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:11:p:1725-1737 Template-Type: ReDIF-Article 1.0 Author-Name: G. Villa Author-X-Name-First: G. Author-X-Name-Last: Villa Author-Name: S. Lozano Author-X-Name-First: S. Author-X-Name-Last: Lozano Title: Dynamic Network DEA approach to basketball games efficiency Abstract: Although Data Envelopment Analysis (DEA) has been widely applied to sports, not many studies are related to basketball. Two types of approaches have been developed to measure efficiency in basketball so far: those focused on the assessment of players and those that assess the performance of teams. Assuming that the number of points scored in a basketball game greatly influences the appeal of the game, in this paper a new approach focused on measuring the scoring efficiency of the two teams that play a game is addressed. To do that, the performance of each team in each quarter and the carry-overs between successive quarters must be taken into account. This leads to a Dynamic Network DEA model with two subprocesses (corresponding to the home and visitor teams) running in each quarter. 
A scoring efficiency can be computed for each team in each quarter as well as for each team overall, for each quarter overall and for the whole game. The proposed approach is applied to the matches played during the 2014–2015 NBA season. Journal: Journal of the Operational Research Society Pages: 1738-1750 Issue: 11 Volume: 69 Year: 2018 Month: 11 X-DOI: 10.1080/01605682.2017.1409158 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1409158 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:11:p:1738-1750 Template-Type: ReDIF-Article 1.0 Author-Name: Binwei Dong Author-X-Name-First: Binwei Author-X-Name-Last: Dong Author-Name: Wansheng Tang Author-X-Name-First: Wansheng Author-X-Name-Last: Tang Author-Name: Chi Zhou Author-X-Name-First: Chi Author-X-Name-Last: Zhou Title: Strategic procurement outsourcing with asymmetric cost information under scale economies Abstract: This paper considers a supply chain in which an original equipment manufacturer (OEM) outsources her production to a contract manufacturer (CM). For the product’s component, the OEM can either control the component procurement (i.e., control strategy), or delegate this work to the CM (i.e., delegation strategy). Meanwhile, they have different discount abilities for the procurement cost due to scale economies. Moreover, the CM’s discount ability is his private information. In the scenario where a non-competitive CM does not have his own brand products, the control strategy is superior to the delegation strategy for the OEM. In contrast, when the CM is competitive (with own-brand production ability), the delegation strategy is optimal. This result is interesting and implies that the OEM prefers to adopt the delegation strategy because of the discount sharing effect, although the CM has private information in this case. 
Finally, the results of numerical simulation show that the CM’s competition can create a win–win situation under certain conditions. Journal: Journal of the Operational Research Society Pages: 1751-1772 Issue: 11 Volume: 69 Year: 2018 Month: 11 X-DOI: 10.1080/01605682.2017.1409155 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1409155 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:11:p:1751-1772 Template-Type: ReDIF-Article 1.0 Author-Name: Brice Assimizele Author-X-Name-First: Brice Author-X-Name-Last: Assimizele Author-Name: Johannes O. Royset Author-X-Name-First: Johannes O. Author-X-Name-Last: Royset Author-Name: Robin T. Bye Author-X-Name-First: Robin T. Author-X-Name-Last: Bye Author-Name: Johan Oppen Author-X-Name-First: Johan Author-X-Name-Last: Oppen Title: Preventing environmental disasters from grounding accidents: A case study of tugboat positioning along the Norwegian coast Abstract: An important task of operators in Norwegian vessel traffic services (VTS) centres is to cleverly position tugboats before potential vessel distress calls. Here, we formulate a non-linear binary-integer program, integrated in a receding horizon control algorithm, that minimises the expected cost of grounding accidents by positioning tugboats optimally under uncertainty about vessel incidents and environmental conditions. Linearisations of the model lead to easy-to-compute bounds on the optimal value. Numerical experiments with real-world data demonstrate significant reduction in the expected cost, suggesting that the model can be used as a decision-support tool at VTS centres. Journal: Journal of the Operational Research Society Pages: 1773-1792 Issue: 11 Volume: 69 Year: 2018 Month: 11 X-DOI: 10.1080/01605682.2017.1409157 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1409157 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:69:y:2018:i:11:p:1773-1792 Template-Type: ReDIF-Article 1.0 Author-Name: Feng Li Author-X-Name-First: Feng Author-X-Name-Last: Li Author-Name: Liang Liang Author-X-Name-First: Liang Author-X-Name-Last: Liang Author-Name: Yongjun Li Author-X-Name-First: Yongjun Author-X-Name-Last: Li Author-Name: Ali Emrouznejad Author-X-Name-First: Ali Author-X-Name-Last: Emrouznejad Title: An alternative approach to decompose the potential gains from mergers Abstract: Bogetoft and Wang proposed admirable production economic models to estimate and decompose the potential gains from mergers. They provided a good platform to quantify the merger efficiency and relate it to relevant organisational changes ex ante. In this paper, we develop an alternative approach to decompose the potential overall gains from mergers into technical effect, size effect, and harmony effect. The proposed approach uses strongly efficient projections, and consistently calculates radial input-based measures for these three effects based on the pre-merger aggregated inputs. In addition, the proposed approach is of vital significance in two special cases where the aggregated projected inputs are not proportional to the pre-merger aggregated inputs and where the production sizes are very different for the original decision-making units. Finally, an application to the City Commercial Banks (CCBs) in China is provided to illustrate the usefulness and efficacy of the proposed approach. The application shows that there exist significant merger efficiency gains for the top 20 CCBs. Further, both the technical effect and harmony effect favour mergers, whereas the size effect would work against most mergers. Thus, in most cases the full-size merger with “organisational sense” is not appropriate. 
Journal: Journal of the Operational Research Society Pages: 1793-1802 Issue: 11 Volume: 69 Year: 2018 Month: 11 X-DOI: 10.1080/01605682.2017.1409867 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1409867 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:11:p:1793-1802 Template-Type: ReDIF-Article 1.0 Author-Name: Sebastián Lozano Author-X-Name-First: Sebastián Author-X-Name-Last: Lozano Author-Name: Laura Calzada-Infante Author-X-Name-First: Laura Author-X-Name-Last: Calzada-Infante Title: Efficiency assessment using network analysis tools Abstract: In this paper, some of the network analysis techniques generally used for complex networks are applied to efficiency assessment. The proposed approach is units invariant and allows the computation of many interesting indexes, such as node specificity, benchmarking potential, clustering coefficient, betweenness centrality, components and layers structure, in- and out-degree distributions, etc. It also allows the visualisation of the dominance relationships within the data-set as well as the potential benchmarks and the gradual improvement paths from inefficient nodes. A number of useful filters (bipartite subgraph, ego networks, threshold networks, skeletonisation, etc.) can be applied on the network in order to highlight and focus on specific subgraphs of interest. The proposed approach provides a new perspective on efficiency analysis, one that allows us not only to focus on the distance to the efficient frontier and potential targets of individual units but also to study the data-set as a whole, with its component and layer structure, its overall dominance density, etc. 
Journal: Journal of the Operational Research Society Pages: 1803-1818 Issue: 11 Volume: 69 Year: 2018 Month: 11 X-DOI: 10.1080/01605682.2017.1409866 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1409866 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:11:p:1803-1818 Template-Type: ReDIF-Article 1.0 Author-Name: Jie Wu Author-X-Name-First: Jie Author-X-Name-Last: Wu Author-Name: Yafei Yu Author-X-Name-First: Yafei Author-X-Name-Last: Yu Author-Name: Qingyuan Zhu Author-X-Name-First: Qingyuan Author-X-Name-Last: Zhu Author-Name: Qingxian An Author-X-Name-First: Qingxian Author-X-Name-Last: An Author-Name: Liang Liang Author-X-Name-First: Liang Author-X-Name-Last: Liang Title: Closest target for the orientation-free context-dependent DEA under variable returns to scale Abstract: An important branch of data envelopment analysis (DEA) is context-dependent DEA, which evaluates efficiency by combining the attractiveness and progress for a particular decision-making unit (DMU). Traditionally, context-dependent DEA models are based on the assumption of constant returns to scale. Two limitations are found when directly extending original radial context-dependent DEA (ORCD-DEA) models into variable returns to scale versions. One is that it may not be possible to determine the attractiveness of a DMU that logically must be attractive in that context. The other problem is that the progress measure cannot ensure that an inefficient DMU projects onto a Pareto-efficient frontier. A small numerical example is used to illustrate these two issues. In order to overcome these deficiencies, the concept of closest target is introduced to determine the attractiveness and progress for each DMU. The closest target method can further improve DMUs’ performance with less waste in inputs or underproduction in outputs. Finally, a practical application involving computer printers is presented. 
Journal: Journal of the Operational Research Society Pages: 1819-1833 Issue: 11 Volume: 69 Year: 2018 Month: 11 X-DOI: 10.1080/01605682.2017.1409865 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1409865 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:11:p:1819-1833 Template-Type: ReDIF-Article 1.0 Author-Name: Edward Lawrence Umpfenbach Author-X-Name-First: Edward Lawrence Author-X-Name-Last: Umpfenbach Author-Name: Evrim Dalkiran Author-X-Name-First: Evrim Author-X-Name-Last: Dalkiran Author-Name: Ratna Babu Chinnam Author-X-Name-First: Ratna Babu Author-X-Name-Last: Chinnam Author-Name: Alper Ekrem Murat Author-X-Name-First: Alper Ekrem Author-X-Name-Last: Murat Title: Optimization of strategic planning processes for configurable products Abstract: Assortment planning aims to select the set of products that a retailer or manufacturer will offer to its customers to maximize profitability. While assortment planning research has been expanding in recent years, current models are inadequate for the needs of a configurable product manufacturer. In this paper, we develop models integrating assortment planning and supply chain management decisions for the strategic planning of a large automaker. Our model utilizes a multinomial logit choice model transformed into a mixed-integer linear program through the Charnes–Cooper transformation. It is able to scale to problems that contain thousands of configurations that could potentially be offered, a necessity given the number of possible configurations an automaker can build. In addition, most research in assortment planning uses simplified costs associated with product complexity. We better account for design, integration, manufacturing, and supply chain complexities that stem from large product assortments. 
We believe that our model can significantly aid automotive manufacturers in balancing their product complexity with supply chain complexity to improve the profitability of vehicle programs. We also present results from a case study motivated by a large global automotive original equipment manufacturer. Journal: Journal of the Operational Research Society Pages: 1834-1853 Issue: 11 Volume: 69 Year: 2018 Month: 11 X-DOI: 10.1057/s41274-017-0287-3 File-URL: http://hdl.handle.net/10.1057/s41274-017-0287-3 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:11:p:1834-1853 Template-Type: ReDIF-Article 1.0 Author-Name: Mike Wright Author-X-Name-First: Mike Author-X-Name-Last: Wright Title: Scheduling an amateur cricket league over a nine-year period Abstract: This paper describes a scheduling exercise carried out for the Minor Counties’ Cricket Association (MCCA), which runs an amateur league based across England. The MCCA League consists of two wholly separate divisions of 10 teams each, with each team playing three home matches and three away matches against teams in their own division each year, with opponents rotating between years; effectively this was scheduled as a double round robin over a three-year period. Originally the schedules had been repeated on a three-year cycle. However, problems of fairness and balance between years arose, and the MCCA, therefore, decided they needed to commission the creation of a nine-year schedule – a sextuple round robin – which would address these equity issues and others. These issues were formulated as soft constraints, some of which related to a nine-year period, and a schedule was successfully produced using a form of Simulated Annealing, operating over a variety of neighbourhoods. The new nine-year schedule is currently in operation. 
Journal: Journal of the Operational Research Society Pages: 1854-1862 Issue: 11 Volume: 69 Year: 2018 Month: 11 X-DOI: 10.1080/01605682.2017.1415642 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1415642 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:11:p:1854-1862 Template-Type: ReDIF-Article 1.0 Author-Name: Cheng Wang Author-X-Name-First: Cheng Author-X-Name-Last: Wang Author-Name: Lili Deng Author-X-Name-First: Lili Author-X-Name-Last: Deng Author-Name: Yi Han Author-X-Name-First: Yi Author-X-Name-Last: Han Title: Optimal appointment reminder sending strategy for a single service scenario with customer no-show behaviour Abstract: Making an appointment is an effective way to balance the supply and demand in service industries. However, people may not show up for their appointments at the scheduled time. Undoubtedly, sending a reminder to ask for a clear response for each appointment can lower the no-show rate and provide more time for service providers to perform other activities. Therefore, the key decision is to determine when the reminders should be sent. In this paper, we study the optimal appointment reminder sending strategy for a single service scenario with customer no-show behaviour. Through discretising the decision process, a dynamic programming model is formulated. Then the optimal time to send a reminder for each appointment is calculated. We prove that there exists an optimal time to send a reminder for each appointment and that the earlier an appointment is made, the earlier a reminder should be sent. Furthermore, our numerical studies show that there exists an optimal appointment time window for a service with a given arrival rate and no-show rate. In addition, the higher the no-show rate of a customer is, the later a reminder should be sent. 
Based on the optimal reminder sending strategy, the expected service utilisation can be improved compared with sending no reminders or sending reminders 24 h before the scheduled time. In particular, the increase in the expected service utilisation rate becomes more significant when the arrival rate decreases and the no-show rate increases. Journal: Journal of the Operational Research Society Pages: 1863-1875 Issue: 11 Volume: 69 Year: 2018 Month: 11 X-DOI: 10.1080/01605682.2017.1415639 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1415639 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:11:p:1863-1875 Template-Type: ReDIF-Article 1.0 Author-Name: Say Leng Goh Author-X-Name-First: Say Leng Author-X-Name-Last: Goh Author-Name: Graham Kendall Author-X-Name-First: Graham Author-X-Name-Last: Kendall Author-Name: Nasser R. Sabar Author-X-Name-First: Nasser R. Author-X-Name-Last: Sabar Title: Simulated annealing with improved reheating and learning for the post enrolment course timetabling problem Abstract: In this paper, we utilise a two-stage approach for addressing the post enrolment course timetabling (PE-CTT) problem. We attempt to find a feasible solution in the first stage. The solution is further improved in terms of soft constraint violations in the second stage. We present an enhanced variant of the Simulated Annealing with Reheating (SAR) algorithm, which we term Simulated Annealing with Improved Reheating and Learning (SAIRL). We propose a reinforcement learning-based methodology to obtain a suitable neighbourhood structure for the search to operate effectively. We incorporate the average cost changes into the reheating temperature function. The proposed enhancements are tested on three widely studied benchmark data-sets. Our algorithm eliminates the need for tuning parameters in conventional SA as well as neighbourhood structure composition in SAR. 
The results are highly competitive with SAR and other state-of-the-art methods. In addition, SAIRL is scalable when the runtime is extended. The algorithm achieves new best results for 6 instances and new mean results for 14 instances. Journal: Journal of the Operational Research Society Pages: 873-888 Issue: 6 Volume: 70 Year: 2019 Month: 6 X-DOI: 10.1080/01605682.2018.1468862 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1468862 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:6:p:873-888 Template-Type: ReDIF-Article 1.0 Author-Name: Sung Ho Park Author-X-Name-First: Sung Ho Author-X-Name-Last: Park Author-Name: Seoung Bum Kim Author-X-Name-First: Seoung Bum Author-X-Name-Last: Kim Title: Multivariate control charts that combine the Hotelling T2 and classification algorithms Abstract: Multivariate control charts, including Hotelling’s T2 chart, have been widely adopted for the multivariate processes found in many modern systems. However, traditional multivariate control charts assume that the in-control group is the only population that can be used to determine a decision boundary. This assumption has restricted the development of more efficient control chart techniques that can capitalise on available out-of-control information. In the present study, we propose a control chart that improves the sensitivity (i.e., detection accuracy) of a Hotelling’s T2 control chart by combining it with classification algorithms, while maintaining low false alarm rates. To the best of our knowledge, this is the first attempt to combine classification algorithms and control charts. Simulations and real case studies demonstrate the effectiveness and applicability of the proposed control chart. 
Journal: Journal of the Operational Research Society Pages: 889-897 Issue: 6 Volume: 70 Year: 2019 Month: 6 X-DOI: 10.1080/01605682.2018.1468859 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1468859 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:6:p:889-897 Template-Type: ReDIF-Article 1.0 Author-Name: Mohammad Tabatabaei Author-X-Name-First: Mohammad Author-X-Name-Last: Tabatabaei Author-Name: Markus Hartikainen Author-X-Name-First: Markus Author-X-Name-Last: Hartikainen Author-Name: Karthik Sindhya Author-X-Name-First: Karthik Author-X-Name-Last: Sindhya Author-Name: Jussi Hakanen Author-X-Name-First: Jussi Author-X-Name-Last: Hakanen Author-Name: Kaisa Miettinen Author-X-Name-First: Kaisa Author-X-Name-Last: Miettinen Title: An interactive surrogate-based method for computationally expensive multiobjective optimisation Abstract: Many disciplines involve computationally expensive multiobjective optimisation problems. Surrogate-based methods are commonly used in the literature to alleviate the computational cost. In this paper, we develop an interactive surrogate-based method called SURROGATE-ASF to solve computationally expensive multiobjective optimisation problems. This method employs preference information of a decision-maker. Numerical results demonstrate that SURROGATE-ASF efficiently provides preferred solutions for a decision-maker. It can handle different types of problems involving, for example, multimodal objective functions and nonconvex and/or disconnected Pareto frontiers. Journal: Journal of the Operational Research Society Pages: 898-914 Issue: 6 Volume: 70 Year: 2019 Month: 6 X-DOI: 10.1080/01605682.2018.1468860 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1468860 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:70:y:2019:i:6:p:898-914 Template-Type: ReDIF-Article 1.0 Author-Name: Omid Yaghubi Agreh Author-X-Name-First: Omid Author-X-Name-Last: Yaghubi Agreh Author-Name: Alireza Ghaffari-Hadigheh Author-X-Name-First: Alireza Author-X-Name-Last: Ghaffari-Hadigheh Title: Application of Dempster-Shafer theory in combining the experts’ opinions in DEA Abstract: In data envelopment analysis models, values of inputs, outputs and their slack weights are usually based on the domain experts’ opinions. While they play a key role in efficiency evaluation of decision-making units in practice, when there is more than one expert, the manager faces the problem of effectively specifying final values. The problem may worsen when the degree of belief in the opinions is incomplete and differs from one expert to another, which consequently leads to different and sometimes conflicting analytic results. The belief function defined in Dempster–Shafer theory is a powerful tool for deriving a possible solution in these circumstances. We adapt this theory to address such situations in data envelopment analysis. A linear optimisation model is devised as a new combination rule of experts’ opinions, which covers the drawbacks of some existing combination rules in the belief function theory. The methodology is visualised with simple examples. Moreover, Monte Carlo experimentation is used to test the performance of the proposed method. Journal: Journal of the Operational Research Society Pages: 915-925 Issue: 6 Volume: 70 Year: 2019 Month: 6 X-DOI: 10.1080/01605682.2018.1468858 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1468858 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:6:p:915-925 Template-Type: ReDIF-Article 1.0 Author-Name: Everette S. Gardner Author-X-Name-First: Everette S. 
Author-X-Name-Last: Gardner Author-Name: Yavuz Acar Author-X-Name-First: Yavuz Author-X-Name-Last: Acar Title: Fitting the damped trend method of exponential smoothing Abstract: The well-established forecasting methods of exponential smoothing rely on the “optimal” estimation of parameters if they are to perform well. A grid search procedure to minimise the MSE is often used in practice to fit exponential smoothing methods, especially in large inventory control applications. Grid searches are also found in some modern statistical software. We ask whether the ex ante forecast accuracy of the damped trend method of exponential smoothing can be improved by optimising parameters. Furthermore, we ask whether the method should be fitted according to a mean absolute error criterion rather than the mean squared error commonly used in practice. We found that model-fitting matters. Parameter optimisation makes significant improvements in forecast accuracy regardless of the fit criterion. We also show that the mean absolute error criterion usually produces better results. Journal: Journal of the Operational Research Society Pages: 926-930 Issue: 6 Volume: 70 Year: 2019 Month: 6 X-DOI: 10.1080/01605682.2018.1469457 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1469457 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:6:p:926-930 Template-Type: ReDIF-Article 1.0 Author-Name: Oded Berman Author-X-Name-First: Oded Author-X-Name-Last: Berman Author-Name: Zvi Drezner Author-X-Name-First: Zvi Author-X-Name-Last: Drezner Author-Name: Dmitry Krass Author-X-Name-First: Dmitry Author-X-Name-Last: Krass Title: The multiple gradual cover location problem Abstract: Covering location models assume that a demand point is either fully covered or not covered at all. Gradual cover models consider the possibility of partial cover. 
In this paper, we investigate the issue of joint partial coverage by several facilities in a multiple facilities location model. We establish theoretical foundations for the properties of the joint coverage relationship to individual partial covers and develop models based on these foundations. The location problems are solved both heuristically and to within a pre-specified percentage of the optimal solution. Journal: Journal of the Operational Research Society Pages: 931-940 Issue: 6 Volume: 70 Year: 2019 Month: 6 X-DOI: 10.1080/01605682.2018.1471376 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1471376 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:6:p:931-940 Template-Type: ReDIF-Article 1.0 Author-Name: Robin H. Pearce Author-X-Name-First: Robin H. Author-X-Name-Last: Pearce Author-Name: Michael Forbes Author-X-Name-First: Michael Author-X-Name-Last: Forbes Title: Disaggregated benders decomposition for solving a network maintenance scheduling problem Abstract: We consider a problem concerning a network and a set of maintenance requests to be undertaken. The aim is to schedule the maintenance in such a way as to minimise the impact on the total throughput of the network. We embed disaggregated Benders decomposition in a branch-and-cut framework to solve the problem to optimality, as well as explore the strengths and weaknesses of the technique. We prove that our Benders cuts are Pareto-optimal. Solutions to the linear programming relaxation also provide further valid inequalities to reduce total solving time. We implement these techniques on simulated data presented in previous papers and compare our solution technique to previous methods and a direct mixed-integer programming formulation. We prove optimality in many problem instances that have not previously been proven. 
Journal: Journal of the Operational Research Society Pages: 941-953 Issue: 6 Volume: 70 Year: 2019 Month: 6 X-DOI: 10.1080/01605682.2018.1471374 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1471374 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:6:p:941-953 Template-Type: ReDIF-Article 1.0 Author-Name: Nasim Nasrabadi Author-X-Name-First: Nasim Author-X-Name-Last: Nasrabadi Author-Name: Akram Dehnokhalaji Author-X-Name-First: Akram Author-X-Name-Last: Dehnokhalaji Author-Name: Pekka Korhonen Author-X-Name-First: Pekka Author-X-Name-Last: Korhonen Author-Name: Jyrki Wallenius Author-X-Name-First: Jyrki Author-X-Name-Last: Wallenius Title: A stepwise benchmarking approach to DEA with interval scale data Abstract: The conventional DEA models assume that all variables are measured on a ratio scale. However, in many applications, we have to deal with interval scale data. In Dehnokhalaji, A., Korhonen, P. J., Köksalan, M., Nasrabadi, N., & Wallenius, J. (2010). Efficiency analysis to incorporate interval scale data. European Journal of Operational Research 207(2), 1116–1121, we proposed a model for efficiency analysis to incorporate interval scale data in addition to ratio scale data. Our proposed model provides efficiency scores for each unit, but does not suggest target unit(s) for inefficient ones directly. In this paper, we investigate the concept of benchmarking in Dehnokhalaji et al.’s (2010) model. We propose an algorithm which results in a path of targets for each inefficient unit. All units on this path are better than the unit under evaluation in terms of efficiency scores defined for interval scale data. The intermediate targets belong to sequential layers obtained from a layering algorithm and the final unit on the path is an efficient unit. 
Journal: Journal of the Operational Research Society Pages: 954-961 Issue: 6 Volume: 70 Year: 2019 Month: 6 X-DOI: 10.1080/01605682.2018.1471375 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1471375 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:6:p:954-961 Template-Type: ReDIF-Article 1.0 Author-Name: Ghaith Rabadi Author-X-Name-First: Ghaith Author-X-Name-Last: Rabadi Author-Name: Mohamed Kais Msakni Author-X-Name-First: Mohamed Kais Author-X-Name-Last: Msakni Author-Name: Elkin Rodriguez-Velasquez Author-X-Name-First: Elkin Author-X-Name-Last: Rodriguez-Velasquez Author-Name: William Alvarez-Bermudez Author-X-Name-First: William Author-X-Name-Last: Alvarez-Bermudez Title: New characteristics of optimal solutions for the two-machine flowshop problem with unlimited buffers Abstract: The two-machine flowshop problem with unlimited buffers with the objective of minimising the makespan (F2||Cmax) is addressed. Johnson’s algorithm finds optimal solutions (permutations) to this problem, but these are not necessarily the only optimal solutions. We show in this paper that certain jobs, which we define as Critical Jobs, must occupy specific positions in any optimal sequence, not only in Johnson’s solutions. We also prove that jobs that precede a critical job cannot be exchanged with jobs that succeed it in an optimal sequence, which reduces the number of enumerations necessary to identify all optimal solutions. The findings of this research can be useful in reducing the search space for optimal enumeration algorithms such as branch-and-bound. Journal: Journal of the Operational Research Society Pages: 962-973 Issue: 6 Volume: 70 Year: 2019 Month: 6 X-DOI: 10.1080/01605682.2018.1475114 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1475114 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:70:y:2019:i:6:p:962-973 Template-Type: ReDIF-Article 1.0 Author-Name: Martin Kunc Author-X-Name-First: Martin Author-X-Name-Last: Kunc Author-Name: Frances A. O’Brien Author-X-Name-First: Frances A. Author-X-Name-Last: O’Brien Title: The role of business analytics in supporting strategy processes: Opportunities and limitations Abstract: Many organisations consider business analytics to be a key organisational capability. To date, there is little evidence on how organisations have included analytics at the heart of their strategy processes. This paper addresses this issue by exploring the activities within a strategy process and considering the potential role that business analytics might play in providing support to such processes. We perform a search in multidisciplinary databases for evidence of the use of business analytics within strategy processes and we reflect on its use in two case studies performing strategic analysis within the pharmaceutical industry. The findings indicate business analytics is still an emerging field without a structured approach. Business analytics can provide important data-driven insights into strategy processes; we therefore recommend its further integration with other traditional OR and strategy tools in order to support strategic decision-makers. Journal: Journal of the Operational Research Society Pages: 974-985 Issue: 6 Volume: 70 Year: 2019 Month: 6 X-DOI: 10.1080/01605682.2018.1475104 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1475104 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:70:y:2019:i:6:p:974-985 Template-Type: ReDIF-Article 1.0 Author-Name: Romain Montagné Author-X-Name-First: Romain Author-X-Name-Last: Montagné Author-Name: Michel Gamache Author-X-Name-First: Michel Author-X-Name-Last: Gamache Author-Name: Michel Gendreau Author-X-Name-First: Michel Author-X-Name-Last: Gendreau Title: A shortest path-based algorithm for the inventory routing problem of waste vegetable oil collection Abstract: We consider an inventory routing problem over a time horizon in which waste vegetable oil has to be collected from source points periodically. These source points have different accumulation rates and limited storage capacities that must not be exceeded. Collected waste is then processed and used as raw material for producing high-quality grease, oils and tallows. The decision problem is to determine for each day which source points to visit as well as the routes of the vehicles operating, such that cost-effectiveness is maximised. In this paper, we tackle this problem with two different but complementary approaches. First, we develop integer programming techniques to solve a relaxed version of the problem, without routing constraints. Then, we propose a constructive heuristic based on the shortest path and split procedures. Performance is compared with the company’s actual solution; numerical tests performed on real-world data with up to 3000 customers served on a 30-day time horizon show that our algorithms are able to increase cost-effectiveness by up to 20%. Journal: Journal of the Operational Research Society Pages: 986-997 Issue: 6 Volume: 70 Year: 2019 Month: 6 X-DOI: 10.1080/01605682.2018.1476801 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1476801 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:6:p:986-997 Template-Type: ReDIF-Article 1.0 Author-Name: Thu Ba T. Nguyễn Author-X-Name-First: Thu Ba T. 
Author-X-Name-Last: Nguyễn Author-Name: Tolga Bektaş Author-X-Name-First: Tolga Author-X-Name-Last: Bektaş Author-Name: Tom J. Cherrett Author-X-Name-First: Tom J. Author-X-Name-Last: Cherrett Author-Name: Fraser N. McLeod Author-X-Name-First: Fraser N. Author-X-Name-Last: McLeod Author-Name: Julian Allen Author-X-Name-First: Julian Author-X-Name-Last: Allen Author-Name: Oliver Bates Author-X-Name-First: Oliver Author-X-Name-Last: Bates Author-Name: Marzena Piotrowska Author-X-Name-First: Marzena Author-X-Name-Last: Piotrowska Author-Name: Maja Piecyk Author-X-Name-First: Maja Author-X-Name-Last: Piecyk Author-Name: Adrian Friday Author-X-Name-First: Adrian Author-X-Name-Last: Friday Author-Name: Sarah Wise Author-X-Name-First: Sarah Author-X-Name-Last: Wise Title: Optimising parcel deliveries in London using dual-mode routing Abstract: Last-mile delivery operations are complex, and the conventional way of using a single mode of delivery (e.g. driving) is not necessarily an efficient strategy. This paper describes a two-level parcel distribution model that combines walking and driving for a single driver. The model aims to minimise the total travelling time by scheduling a vehicle’s routing and the driver’s walking sequence when making deliveries, taking decisions on parking locations into consideration. The model is a variant of the Clustered Travelling Salesman Problem with Time Windows, in which the sequence of visits within each cluster is required to form a closed tour. When applied to a case study of an actual vehicle round from a parcel carrier operating in London, savings of over 20% in the total operation time were achieved compared with the current situation, in which 144 parcels were delivered to 57 delivery locations. 
Journal: Journal of the Operational Research Society Pages: 998-1010 Issue: 6 Volume: 70 Year: 2019 Month: 6 X-DOI: 10.1080/01605682.2018.1480906 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1480906 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:6:p:998-1010 Template-Type: ReDIF-Article 1.0 Author-Name: Linlin Zhao Author-X-Name-First: Linlin Author-X-Name-Last: Zhao Author-Name: Yong Zha Author-X-Name-First: Yong Author-X-Name-Last: Zha Author-Name: Rui Hou Author-X-Name-First: Rui Author-X-Name-Last: Hou Author-Name: Liang Liang Author-X-Name-First: Liang Author-X-Name-Last: Liang Title: Unobservable effort, objective consistency and the efficiencies of the principal and the top management team Abstract: The top management team (TMT) plays a leading role in strategic decision making and business success. On account of the unobservable and immeasurable characteristics of the TMT’s effort, prior research is of limited use in identifying its impact on the performances of the principal and the TMT. This paper formulates novel models to overcome these limitations by incorporating DEA and bi-level programming into the principal–agent framework. We begin with the basic models from the perspective of the principal and the TMT respectively, and propose integrated and bi-level models to illustrate the cooperative and leader–follower collaboration between the two parties. We then develop an effort-based DEA model in which the effort of the TMT is viewed as a variable. By comparing the optimal value of the variable with zero, we can identify whether the effort level is desired by the principal. A value higher (lower) than zero implies that the members of the TMT exert higher (lower) effort. Further, we identify objective consistency between the two parties by incorporating the organisational outcomes into the outputs of the TMT. 
A case study of 16 Chinese listed real estate companies shows that the TMT’s effort has a significant influence on the efficiencies of the principal and the TMT. In addition, the TMT has an incentive to adjust its effort level to adapt to various situations of objective consistency and inconsistency. Journal: Journal of the Operational Research Society Pages: 1011-1026 Issue: 6 Volume: 70 Year: 2019 Month: 6 X-DOI: 10.1080/01605682.2018.1487814 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1487814 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:6:p:1011-1026 Template-Type: ReDIF-Article 1.0 Author-Name: Salla Marttonen-Arola Author-X-Name-First: Salla Author-X-Name-Last: Marttonen-Arola Author-Name: Timo Kärri Author-X-Name-First: Timo Author-X-Name-Last: Kärri Author-Name: Tiina Sinkkonen Author-X-Name-First: Tiina Author-X-Name-Last: Sinkkonen Author-Name: Miia Pirttilä Author-X-Name-First: Miia Author-X-Name-Last: Pirttilä Title: A pricing model for Internet of Things-based fleet services to support equipment sales Abstract: Servitisation and rapid technological development have made data-based services a feasible way for many manufacturing companies to increase their cash flow and support their core products. In this article, an analytical model is presented for studying the development costs and pricing of new Internet of Things-based services, especially for populations, or fleets, of industrial production equipment and machines. The model suggests the optimal price of a fleet service as a function of the life cycle of the service, the required rate of return, the size of the fleet, and the extent of economies of scale in fleet research and development. This article contributes to the research on servitisation of manufacturing, and sheds light on the different natures of service and equipment sales. 
A numerical study is also presented to bring out the managerial implications of the model. Journal: Journal of the Operational Research Society Pages: 1027-1037 Issue: 6 Volume: 70 Year: 2019 Month: 6 X-DOI: 10.1080/01605682.2018.1487815 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1487815 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:6:p:1027-1037 Template-Type: ReDIF-Article 1.0 Author-Name: Dobrila Petrovic Author-X-Name-First: Dobrila Author-X-Name-Last: Petrovic Author-Name: Magdalena Kalata Author-X-Name-First: Magdalena Author-X-Name-Last: Kalata Title: Multi-objective optimisation of risk and business strategy in real-world supply networks in the presence of uncertainty Abstract: Selection of suppliers is very important for a strategic supply network (SN) design. This paper presents a novel multi-objective optimisation model for supplier selection and order allocation. In addition to a standard objective of total SN cost minimisation, two new objectives are considered: minimisation of suppliers’ risk and maximisation of achievement of a manufacturer business strategy. Uncertainty in supply lead times and non-conformance rates of delivered components causes uncertainty in the SN cost objective. These parameters are described using imprecise linguistic terms and modelled using fuzzy numbers. Risk classification of suppliers is carried out using imprecise knowledge which is modelled using fuzzy If-Then rules and embedded in the risk objective. Various experiments are carried out to analyse the trade-off between the considered objectives and the impact of SN parameters on supplier selection and order allocation. The size of the problem that the model can handle is also analysed. 
Journal: Journal of the Operational Research Society Pages: 1869-1884 Issue: 11 Volume: 70 Year: 2019 Month: 11 X-DOI: 10.1080/01605682.2018.1501459 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1501459 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:11:p:1869-1884 Template-Type: ReDIF-Article 1.0 Author-Name: Jie Chu Author-X-Name-First: Jie Author-X-Name-Last: Chu Author-Name: Kai Huang Author-X-Name-First: Kai Author-X-Name-Last: Huang Author-Name: Aurélie Thiele Author-X-Name-First: Aurélie Author-X-Name-Last: Thiele Title: A robust optimization approach to model supply and demand uncertainties in inventory systems Abstract: In this article, we simultaneously consider supply and demand uncertainties in a robust optimization (RO) framework. First, we apply the RO approach to a multi-period, single-station inventory problem where supply uncertainty is modeled by partial supply. Our main finding is that solving the robust counterpart is equivalent to solving a nominal problem with a modified deterministic demand sequence. In particular, in the stationary case the optimal robust policy follows the quasi-(s, S) form and the corresponding s and S levels are theoretically computable. Subsequently, the RO framework is extended to a multi-echelon case. We show that for a tree structure network, decomposition applies so that the optimal single-station robust policy remains valid for each echelon in the tree. We conduct extensive numerical studies to demonstrate the effectiveness of the proposed robust policies. Our results suggest that significant cost benefits can be realized by incorporating both supply and demand uncertainties. 
Journal: Journal of the Operational Research Society Pages: 1885-1899 Issue: 11 Volume: 70 Year: 2019 Month: 11 X-DOI: 10.1080/01605682.2018.1507424 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1507424 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:11:p:1885-1899 Template-Type: ReDIF-Article 1.0 Author-Name: Tea Vizinger Author-X-Name-First: Tea Author-X-Name-Last: Vizinger Author-Name: Janez Žerovnik Author-X-Name-First: Janez Author-X-Name-Last: Žerovnik Title: A stochastic model for better planning of product flow in retail supply chains Abstract: Retail supply chains operate in a constantly changing environment and need to adapt to different situations in order to increase their reliability, flexibility and convenience. Holding and transportation costs can amount to up to 40 per cent of the product value, so that the proper coordination of interrelated activities plays an essential role when managing retail flows. In order to provide a relevant model, we first focus on future demand satisfaction, whereas pricing policies, perishability factors, etc., are deferred to a complementary model for operative planning. The idea is to obtain a preferable distribution plan with minimal expected distribution costs, as well as minimal supply risks. The methodology produces a set of solutions and quality estimates that can be used to find a near-optimal distribution plan. While considering stochasticity on the demand side, a multi-objective optimisation approach is introduced to cope with the minimisation of transport and warehouse costs, the minimisation of overstocking effects and the maximisation of the customer service level. The optimisation problem that arises is a computationally hard problem. 
A computational experiment has shown that the version of the problem where the weighted sum of costs is minimised can be handled sufficiently well by some well-known simple heuristics. Journal: Journal of the Operational Research Society Pages: 1900-1914 Issue: 11 Volume: 70 Year: 2019 Month: 11 X-DOI: 10.1080/01605682.2018.1501460 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1501460 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:11:p:1900-1914 Template-Type: ReDIF-Article 1.0 Author-Name: Fouad El Ouardighi Author-X-Name-First: Fouad Author-X-Name-Last: El Ouardighi Author-Name: Matan Shniderman Author-X-Name-First: Matan Author-X-Name-Last: Shniderman Title: Supplier’s opportunistic behavior and the quality-efficiency tradeoff with conventional supply chain contracts Abstract: This paper presents a supply chain game with a manufacturer and its supplier, where each firm seeks to allocate its own resources between improving design quality and reducing the production cost of a finished product over a finite contract duration. The firms agree on a linear contract where the supplier either periodically updates the transfer price, i.e., cost-plus contract (CPC), or sets a definitive transfer price at the beginning of the contract, i.e., wholesale price contract (WPC). Assuming a committed manufacturer, we account for the possibility that the supplier is either committed or non-committed, and derive homogeneous and heterogeneous Nash equilibrium strategies under a CPC and a WPC. We then compare the impact of the supplier’s strategy on the tradeoff between quality and efficiency and the firms’ payoffs, and shed light on the relative merits of a CPC and a WPC. We notably show that a CPC is more robust to the supplier’s strategy type than a WPC in terms of efficiency, quality, and profits. 
Contrary to the literature, we conclude that a variable transfer price is preferable to a constant transfer price. Journal: Journal of the Operational Research Society Pages: 1915-1937 Issue: 11 Volume: 70 Year: 2019 Month: 11 X-DOI: 10.1080/01605682.2018.1510749 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1510749 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:11:p:1915-1937 Template-Type: ReDIF-Article 1.0 Author-Name: Wuhua Chen Author-X-Name-First: Wuhua Author-X-Name-Last: Chen Author-Name: Zhe George Zhang Author-X-Name-First: Zhe George Author-X-Name-Last: Zhang Author-Name: Zhongsheng Hua Author-X-Name-First: Zhongsheng Author-X-Name-Last: Hua Title: Analysis of price competition in two-tier service systems Abstract: This article focuses on the issue of price competition between two private service providers (SPs) in a market with a free public SP. The two private SPs, differentiated by service quality and capacity, choose the optimal prices to maximise their own profits. It is shown that such price competition between the private SPs in a market with a public SP reaches a pure Nash equilibrium. Such a system with both private and public SPs is called a two-tier service system. We investigate the impact of competition and collaboration between the two private SPs on the social welfare of the two-tier service system. In addition, we examine the impact of the public SP’s competitive advantage on customer service and on the private SPs’ pricing strategies and performance. Numerical examples are presented to generate managerial insights for practitioners. Journal: Journal of the Operational Research Society Pages: 1938-1950 Issue: 11 Volume: 70 Year: 2019 Month: 11 X-DOI: 10.1057/jors.2015.123 File-URL: http://hdl.handle.net/10.1057/jors.2015.123 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:70:y:2019:i:11:p:1938-1950 Template-Type: ReDIF-Article 1.0 Author-Name: Xiaoling Yin Author-X-Name-First: Xiaoling Author-X-Name-Last: Yin Author-Name: Zhe George Zhang Author-X-Name-First: Zhe George Author-X-Name-Last: Zhang Title: On Downs–Thomson paradox in two-tier service systems with a fast pass and revenue-based capacity investment Abstract: A two-tier service system consists of free and toll channels with its toll revenue reinvested in service capacity. In this study, we develop two models with revenue reinvestment in either the free or the toll system. Similar to the congestion problem in an urban transportation network, we investigate whether the Downs–Thomson paradox occurs in cases where the free service capacity is increased. Based on the relations between the major performance measures (such as the customer waiting time, toll system revenue, and total social cost) and the key system parameters and decision variables (such as the traffic intensity, proportion of revenue invested in capacity expansion, toll system price, and service cost of the free or toll system), we find that the Downs–Thomson paradox in terms of total social cost may exist. The findings provide managerial insights if an additional budget is invested to expand the free service capacity. Journal: Journal of the Operational Research Society Pages: 1951-1964 Issue: 11 Volume: 70 Year: 2019 Month: 11 X-DOI: 10.1080/01605682.2018.1510750 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1510750 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:11:p:1951-1964 Template-Type: ReDIF-Article 1.0 Author-Name: Shahrzad M. Pour Author-X-Name-First: Shahrzad M. Author-X-Name-Last: Pour Author-Name: Kourosh Marjani Rasmussen Author-X-Name-First: Kourosh Author-X-Name-Last: Marjani Rasmussen Author-Name: John H. Drake Author-X-Name-First: John H. 
Author-X-Name-Last: Drake Author-Name: Edmund K. Burke Author-X-Name-First: Edmund K. Author-X-Name-Last: Burke Title: A constructive framework for the preventive signalling maintenance crew scheduling problem in the Danish railway system Abstract: In this article, we consider the problem of planning preventive maintenance of railway signals in Denmark. This case is particularly relevant as the entire railway signalling system is currently being upgraded to the new European Railway Traffic Management System (ERTMS) standard. This upgrade has significant implications for signal maintenance scheduling in the system. We formulate the problem as a multi-depot vehicle routing and scheduling problem with time windows and synchronisation constraints, in a multi-day time schedule. The requirement that some tasks require the simultaneous presence of more than one engineer means that task synchronisation must be considered. A multi-stage constructive framework is proposed, which first distributes maintenance tasks using a clustering formulation. Following this, a Constraint Programming (CP) based approach is used to generate feasible monthly plans for large instances of practical interest. Experimental results indicate that the proposed framework can generate feasible solutions and schedule a monthly plan of up to 1000 tasks for eight crew members, in a reasonable amount of computational time. Journal: Journal of the Operational Research Society Pages: 1965-1982 Issue: 11 Volume: 70 Year: 2019 Month: 11 X-DOI: 10.1080/01605682.2018.1507423 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1507423 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:70:y:2019:i:11:p:1965-1982 Template-Type: ReDIF-Article 1.0 Author-Name: Naoto Katayama Author-X-Name-First: Naoto Author-X-Name-Last: Katayama Title: A combined fast greedy heuristic for the capacitated multicommodity network design problem Abstract: The capacitated multicommodity network design problem represents a network design system and has a wide range of real-life applications, such as the construction of logistics networks, transportation networks, communication networks, and production networks. In this article, we introduce a fast greedy algorithm for solving the capacitated multicommodity network design problem. The greedy algorithm is based on link-rerouting and partial link-rerouting heuristics for the uncapacitated multicommodity network design problem. This algorithm involves a capacity scaling procedure for reducing the number of candidate arcs and a restricted branch-and-bound procedure for improving solutions. The algorithm succeeds in finding good solutions within a short computation time. The average computation time for solving benchmark problem instances is only several tens of seconds. Journal: Journal of the Operational Research Society Pages: 1983-1996 Issue: 11 Volume: 70 Year: 2019 Month: 11 X-DOI: 10.1080/01605682.2018.1500977 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1500977 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:11:p:1983-1996 Template-Type: ReDIF-Article 1.0 Author-Name: Baruch Mor Author-X-Name-First: Baruch Author-X-Name-Last: Mor Author-Name: Dana Shapira Author-X-Name-First: Dana Author-X-Name-Last: Shapira Title: Improved algorithms for scheduling on proportionate flowshop with job-rejection Abstract: The rejection method is an option commonly used in manufacturing systems as a tool to overcome overloaded assembly lines. 
The operations manager is given the ability to refuse to produce, or alternatively to outsource, some of the products. A flowshop environment is a setting of machines in series such that each job should be processed on each of the machines. In a recent paper, Shabtay and Oron (2016) combined the two disciplines and studied the makespan criterion, one of the fundamental measures in scheduling theory, which reveals the utilization of the system. The problems considered are minimizing the makespan subject to a constraint on the maximal rejection cost, E, and minimizing the rejection cost given that the makespan cannot exceed a given upper bound, K. The computational complexities of the pseudo-polynomial dynamic programming (DP) algorithms presented by Shabtay and Oron are O(n²E) and O(n²K), respectively, where n is the number of jobs. In this paper, we consider the same problems, and our contributions are enhanced DP algorithms, which run in O(nE) and O(nK) time, respectively, implying an improvement by a factor of n. Furthermore, we supply empirical results based on experimental simulations. This study, therefore, has both theoretical significance and practical implications, as our numerical study shows that the introduced DP algorithms are capable of solving large-size instances. Journal: Journal of the Operational Research Society Pages: 1997-2003 Issue: 11 Volume: 70 Year: 2019 Month: 11 X-DOI: 10.1080/01605682.2018.1506540 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1506540 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:70:y:2019:i:11:p:1997-2003 Template-Type: ReDIF-Article 1.0 Author-Name: Lifen Wu Author-X-Name-First: Lifen Author-X-Name-Last: Wu Title: Linear fractional radial graph measure of efficiency of production Abstract: The linear fractional radial graph efficiency measure developed in this article is consistent with the definition of efficiency itself as a ratio of output to input, and has an interpretation of profitability as a ratio of revenue to cost. This measure embeds input and output radial efficiency measures, which have cost and revenue interpretations, respectively. As this graph measure does not have any path constraint, it can also be adapted to the hyperbolic graph efficiency measure, the generalised distance function, and the directional distance function by imposing a path constraint. This measure, under a variable returns to scale assumption about the technology, provides feasible ways of scale inefficiency improvement within the production possibility set. It also provides flexible ways of inefficiency improvement under a constant returns to scale assumption. Profitability is maximised within the technical efficiency context whenever scale efficiency is attained. Journal: Journal of the Operational Research Society Pages: 2004-2018 Issue: 11 Volume: 70 Year: 2019 Month: 11 X-DOI: 10.1080/01605682.2018.1510807 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1510807 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:70:y:2019:i:11:p:2004-2018 Template-Type: ReDIF-Article 1.0 Author-Name: Xiangyu Cui Author-X-Name-First: Xiangyu Author-X-Name-Last: Cui Author-Name: Xun Li Author-X-Name-First: Xun Author-X-Name-Last: Li Author-Name: Xianping Wu Author-X-Name-First: Xianping Author-X-Name-Last: Wu Author-Name: Lan Yi Author-X-Name-First: Lan Author-X-Name-Last: Yi Title: A mean-field formulation for multi-period asset–liability mean–variance portfolio selection with an uncertain exit time Abstract: This paper is concerned with multi-period asset–liability mean–variance portfolio selection with an uncertain exit time. By employing the mean-field formulation to this problem which involves two-dimensional state variables, we derive the analytical optimal strategy and efficient frontier successfully. The corresponding sensitivity analysis and a real-life example shed light on influences of liability and uncertain exit time to the optimal investment strategy. Journal: Journal of the Operational Research Society Pages: 487-499 Issue: 4 Volume: 69 Year: 2018 Month: 4 X-DOI: 10.1057/s41274-017-0232-5 File-URL: http://hdl.handle.net/10.1057/s41274-017-0232-5 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:4:p:487-499 Template-Type: ReDIF-Article 1.0 Author-Name: Xiangrui Chao Author-X-Name-First: Xiangrui Author-X-Name-Last: Chao Author-Name: Yi Peng Author-X-Name-First: Yi Author-X-Name-Last: Peng Title: A cost-sensitive multi-criteria quadratic programming model for imbalanced data Abstract: Multiple Criteria Quadratic Programming (MCQP), a mathematical programming-based classification method, has been developed recently and proved to be effective and scalable. However, its performance degraded when learning from imbalanced data. This paper proposes a cost-sensitive MCQP (CS-MCQP) model by introducing the cost of misclassifications to the MCQP model. 
The empirical tests were designed to compare the proposed model with MCQP and a selection of classifiers on 26 imbalanced datasets from the UCI repositories. The results indicate that the CS-MCQP model not only performs better than the optimization-based models (MCQP and SVM), but also outperforms the selected classifiers, ensemble, preprocessing techniques and hybrid methods on imbalanced datasets in terms of AUC and GeoMean measures. To validate the results statistically, Student’s t-test and the Wilcoxon signed-rank test were conducted and show that the superiority of CS-MCQP is statistically significant at the 0.05 significance level. In addition, we analyze the effect of noisy, small disjunct and overlapping data properties on the proposed model and conclude that the CS-MCQP model achieves better performance on imbalanced data with overlapping features than on noisy and small disjunct data. Journal: Journal of the Operational Research Society Pages: 500-516 Issue: 4 Volume: 69 Year: 2018 Month: 4 X-DOI: 10.1057/s41274-017-0233-4 File-URL: http://hdl.handle.net/10.1057/s41274-017-0233-4 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:4:p:500-516 Template-Type: ReDIF-Article 1.0 Author-Name: Kathryn Hoad Author-X-Name-First: Kathryn Author-X-Name-Last: Hoad Author-Name: Martin Kunc Author-X-Name-First: Martin Author-X-Name-Last: Kunc Title: Teaching system dynamics and discrete event simulation together: a case study Abstract: System dynamics (SD) and discrete event simulation (DES) follow two quite different modeling philosophies and can bring very different but, nevertheless, complementary insights in understanding the same ‘real world’ problem. Thus, learning SD and DES approaches requires students to absorb different modeling philosophies, usually through specific and distinct courses. 
We run a course in which we teach model conceptualization for SD and DES in parallel, followed by technical training on SD and DES software in sequence. The ability of students to assimilate, and then put into practice, both modeling approaches was evaluated using simulation-based problems. While we found evidence that students can master both simulation techniques, we observed that they were better able to develop skills at representing the tangible characteristics of systems, the realm of DES, rather than conceptualizing the intangible properties of systems such as feedback processes, the realm of SD. Suggestions and reflections on teaching both simulation methods together are proposed. Journal: Journal of the Operational Research Society Pages: 517-527 Issue: 4 Volume: 69 Year: 2018 Month: 4 X-DOI: 10.1057/s41274-017-0234-3 File-URL: http://hdl.handle.net/10.1057/s41274-017-0234-3 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:4:p:517-527 Template-Type: ReDIF-Article 1.0 Author-Name: Renan Felinto de Farias Aires Author-X-Name-First: Renan Author-X-Name-Last: Felinto de Farias Aires Author-Name: Luciano Ferreira Author-X-Name-First: Luciano Author-X-Name-Last: Ferreira Author-Name: Afranio Galdino de Araujo Author-X-Name-First: Afranio Author-X-Name-Last: Galdino de Araujo Author-Name: Denis Borenstein Author-X-Name-First: Denis Author-X-Name-Last: Borenstein Title: Student selection in a Brazilian university: using a multi-criteria method Abstract: Student selection is a complex decision-making process, in which several criteria need to be considered simultaneously. In this paper, we address this problem for a Brazilian university that has created an interdisciplinary degree in which several intermediate selection processes are required during the course, defining the final degree title. 
The university is currently using an aggregated score based on the performance of a student in the course. However, this method is facing difficulties in selecting the best students, because of deficiencies in the way transferred, dropped and quit course credits are accounted for. As a possible alternative to the current method, we developed a hybrid ranking algorithm, called ELECTRE–TOPSIS (E–T). This method combines elements of the ELECTRE family and TOPSIS, two well-known multi-attribute analysis tools, to rank students based on objective criteria. Computational experiments and a case study were conducted to evaluate E–T. The results show that our approach provides quite competitive rankings in comparison with similar methods, by simultaneously eliminating rank reversal and better balancing the formation time and the academic performance of the evaluated students. Journal: Journal of the Operational Research Society Pages: 528-540 Issue: 4 Volume: 69 Year: 2018 Month: 4 X-DOI: 10.1057/s41274-017-0242-3 File-URL: http://hdl.handle.net/10.1057/s41274-017-0242-3 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:4:p:528-540 Template-Type: ReDIF-Article 1.0 Author-Name: Yukun Zhao Author-X-Name-First: Yukun Author-X-Name-Last: Zhao Author-Name: Xiaobo Zhao Author-X-Name-First: Xiaobo Author-X-Name-Last: Zhao Title: On elicitation-method effect in game experiments: a competing newsvendor perspective Abstract: To test the behavioral validity of the strategy method in a setting of operations management, we experimentally investigate competing newsvendor behavior under incomplete information with both the strategy method and the direct-response method. We observe that the “pull-to-center” effect exists only with a low margin; the mean order quantity with a high margin does not significantly deviate from the equilibrium prediction. 
We build a behavioral model based on overestimation and mean anchoring to explain competing newsvendor behavior. Estimates of the behavioral model confirm the existence of the behavioral biases. Meanwhile, order levels are not significantly different between the strategy method and the direct-response method. Hence, we suggest that the strategy method should lead to similar decisions in newsvendor settings compared to the direct-response method and may be adopted in most operations management settings associated with the newsvendor problem to improve the efficiency of experimental studies. Journal: Journal of the Operational Research Society Pages: 541-555 Issue: 4 Volume: 69 Year: 2018 Month: 4 X-DOI: 10.1057/s41274-017-0246-z File-URL: http://hdl.handle.net/10.1057/s41274-017-0246-z File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:4:p:541-555 Template-Type: ReDIF-Article 1.0 Author-Name: Jianguo Qi Author-X-Name-First: Jianguo Author-X-Name-Last: Qi Author-Name: Shukai Li Author-X-Name-First: Shukai Author-X-Name-Last: Li Author-Name: Yuan Gao Author-X-Name-First: Yuan Author-X-Name-Last: Gao Author-Name: Kai Yang Author-X-Name-First: Kai Author-X-Name-Last: Yang Author-Name: Pei Liu Author-X-Name-First: Pei Author-X-Name-Last: Liu Title: Joint optimization model for train scheduling and train stop planning with passengers distribution on railway corridors Abstract: Aiming to provide a more practical modeling framework for railway optimization problems, this paper investigates a joint optimization model for train scheduling, train stop planning and passenger distribution by considering the passenger demands over each origin and destination (OD) pair on a high-speed railway corridor. 
Specifically, through introducing new decision variables associated with the number of passengers distributed in each train over each OD pair and formulating the connection constraints between the train stop plan and passenger distributions, the total travel time of all the trains is first adopted as the objective function to optimize the train stop plan and timetable with the passenger demands being guaranteed. Then, based on the generated train stop plan and timetable, the passenger distribution plan is further optimized with the purpose of minimizing the total travel time of all the passengers. Finally, the effectiveness and efficiency of the proposed approaches are verified by the obtained train stop plans, timetables and passenger distribution plans for a sample railway corridor and the Wuhan–Guangzhou high-speed railway corridor. The computational results show that the proposed methods can effectively obtain the train stop plan, timetable and passenger distribution plan at the same time. Journal: Journal of the Operational Research Society Pages: 556-570 Issue: 4 Volume: 69 Year: 2018 Month: 4 X-DOI: 10.1057/s41274-017-0248-x File-URL: http://hdl.handle.net/10.1057/s41274-017-0248-x File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:4:p:556-570 Template-Type: ReDIF-Article 1.0 Author-Name: Dongshuang Hou Author-X-Name-First: Dongshuang Author-X-Name-Last: Hou Author-Name: Panfei Sun Author-X-Name-First: Panfei Author-X-Name-Last: Sun Author-Name: Genjiu Xu Author-X-Name-First: Genjiu Author-X-Name-Last: Xu Author-Name: Theo Driessen Author-X-Name-First: Theo Author-X-Name-Last: Driessen Title: Compromise for the complaint: an optimization approach to the ENSC value and the CIS value Abstract: The main goal of this paper is to introduce a new solution concept: the optimal compromise value. 
We propose two kinds of complaint criteria based on which the optimistic complaint and the pessimistic complaint are defined. Two optimal compromise values are obtained by lexicographically minimizing the optimistic maximal complaint and the pessimistic maximal complaint, respectively. Interestingly, these two optimal compromise values coincide with the ENSC value and the CIS value, respectively. Moreover, these values are characterized in terms of equal maximal complaint property and efficiency. As an adjunct, we reveal the coincidence of the Nucleolus and the ENSC value of 1-convex games. Journal: Journal of the Operational Research Society Pages: 571-579 Issue: 4 Volume: 69 Year: 2018 Month: 4 X-DOI: 10.1057/s41274-017-0251-2 File-URL: http://hdl.handle.net/10.1057/s41274-017-0251-2 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:4:p:571-579 Template-Type: ReDIF-Article 1.0 Author-Name: Eliseo Vilalta-Perdomo Author-X-Name-First: Eliseo Author-X-Name-Last: Vilalta-Perdomo Author-Name: Martin Hingley Author-X-Name-First: Martin Author-X-Name-Last: Hingley Title: Beyond links and chains in food supply: a Community OR perspective Abstract: This theoretical paper complements traditional OR approaches to improve micro-businesses’ performance. When looking at local micro-businesses, we find that current supply chain and operations theory that focuses on efficiency and economic-based criteria for chain and network integration is inapplicable and external organisation inappropriate. An illustration shows how traditional modelling exercises may fall short in better informing independent-minded micro-entrepreneurs on how to collaborate, even though they recognise benefits from such endeavour. The illustration concerns consideration of food micro-producers, not as links constituting a chain, but as members of a community. 
This paper explores two different approaches to apply Community OR principles: on the one hand, the application of OR methods to phenomena in the ‘community’, and on the other, the development of research on ‘community operations’, which are symbolised as C+OR and CO+R, respectively. These approaches are associated with two different research languages: of needs and for interactions. The main contributions of this paper are: first, we show that collaboration does not always need shared aims; second, we offer a circular process where the identification of collective actions may help organisations to improve individually, and vice versa; and third, we suggest how to develop the role of a stronger collective actor by means of collaboration. Journal: Journal of the Operational Research Society Pages: 580-588 Issue: 4 Volume: 69 Year: 2018 Month: 4 X-DOI: 10.1057/s41274-017-0252-1 File-URL: http://hdl.handle.net/10.1057/s41274-017-0252-1 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:4:p:580-588 Template-Type: ReDIF-Article 1.0 Author-Name: Baruch Mor Author-X-Name-First: Baruch Author-X-Name-Last: Mor Title: Minmax common due-window assignment and scheduling on a single machine with two competing agents Abstract: We study the classical method of common due-date assignment and focus on minmax objective functions. In due-date assignment problems, the objective is to find the optimal due-date and job sequence that minimize the total earliness, tardiness and due-date-related costs. We extend the single-agent problem to a setting involving two competing agents and to a multi-agent setting. In the two-agent setting (herein agents A and B), the scheduler needs to minimize the maximum cost of agent A, subject to an upper bound on the maximal cost of agent B. 
In the general model of multi-agent scheduling, the scheduler needs to minimize the maximum cost among all A-type agents, subject to an agent-dependent upper bound on the maximal cost of the B-type agents. We further generalize the problems to the method of common due-window assignment. For all studied problems, we introduce efficient polynomial-time solutions. Journal: Journal of the Operational Research Society Pages: 589-602 Issue: 4 Volume: 69 Year: 2018 Month: 4 X-DOI: 10.1057/s41274-017-0253-0 File-URL: http://hdl.handle.net/10.1057/s41274-017-0253-0 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:4:p:589-602 Template-Type: ReDIF-Article 1.0 Author-Name: Stefano Starita Author-X-Name-First: Stefano Author-X-Name-Last: Starita Author-Name: Maria Paola Scaparra Author-X-Name-First: Maria Author-X-Name-Last: Paola Scaparra Title: Passenger railway network protection: a model with variable post-disruption demand service Abstract: Protecting transportation infrastructures is critical to avoid loss of life and to guard against economic upheaval. This paper addresses the problem of identifying optimal protection plans for passenger rail transportation networks, given a limited budget. We propose a bi-level protection model which extends and refines the model previously introduced by Scaparra et al. (Railway infrastructure security, Springer, New York, 2015). In our extension, we still measure the impact of rail disruptions in terms of the amount of unserved passenger demand. However, our model captures the post-disruption user behaviour in a more accurate way by assuming that passenger demand for rail services after disruptions varies with the extent of the travel delays. To solve this complex bi-level model, we develop a simulated annealing algorithm. 
The efficiency of the heuristic is tested on a set of randomly generated instances and compared with that of a more standard exact decomposition algorithm. To illustrate how the modelling approach might be used in practice to inform protection planning decisions, we present a case study based on the London Underground. The case study also highlights the importance of capturing flow demand adjustments in response to increased travel time in a mathematical model. Journal: Journal of the Operational Research Society Pages: 603-618 Issue: 4 Volume: 69 Year: 2018 Month: 4 X-DOI: 10.1057/s41274-017-0255-y File-URL: http://hdl.handle.net/10.1057/s41274-017-0255-y File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:4:p:603-618 Template-Type: ReDIF-Article 1.0 Author-Name: Cade M. Saie Author-X-Name-First: Cade M. Author-X-Name-Last: Saie Author-Name: Darryl K. Ahner Author-X-Name-First: Darryl K. Author-X-Name-Last: Ahner Title: Investigating the dynamics of nation-building through a system of differential equations Abstract: Nation-building modeling is an important field given the increasing number of candidate nations and the limited resources available. In this paper, we present a modeling methodology and a system of differential equations model to investigate the dynamics of nation-building. The methodology is based on solving inverse problems, much like Lanchester equations, and provides measures of merit to evaluate nation-building operations. An application is derived for Operation Iraqi Freedom to demonstrate the utility as well as the effects of various alternate strategies, using differing applications of national power. This modeling approach is data driven and offers a significant, novel capability when analyzing and planning for future nation-building scenarios. 
Journal: Journal of the Operational Research Society Pages: 619-629 Issue: 4 Volume: 69 Year: 2018 Month: 4 X-DOI: 10.1057/s41274-017-0256-x File-URL: http://hdl.handle.net/10.1057/s41274-017-0256-x File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:4:p:619-629 Template-Type: ReDIF-Article 1.0 Author-Name: Wendy K. Tam Cho Author-X-Name-First: Wendy K. Author-X-Name-Last: Tam Cho Title: An evolutionary algorithm for subset selection in causal inference models Abstract: Researchers in all disciplines desire to identify causal relationships. Randomized experimental designs isolate the treatment effect and thus permit causal inferences. However, experiments are often prohibitive because resources may be unavailable or the research question may not lend itself to an experimental design. In these cases, a researcher is relegated to analyzing observational data. To make causal inferences from observational data, one must adjust the data so that they resemble data that might have emerged from an experiment. The data adjustment can proceed through a subset selection procedure to identify treatment and control groups that are statistically indistinguishable. Identifying optimal subsets is a challenging problem but a powerful tool. An advance in operations research solutions is presented that is more efficient and identifies empirically better solutions than other proposed algorithms. The computational framework does not replace existing matching algorithms (e.g., propensity score models) but rather further enables and augments the ability of all causal inference models to identify more putatively randomized groups. 
Journal: Journal of the Operational Research Society Pages: 630-644 Issue: 4 Volume: 69 Year: 2018 Month: 4 X-DOI: 10.1057/s41274-017-0258-8 File-URL: http://hdl.handle.net/10.1057/s41274-017-0258-8 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:4:p:630-644 Template-Type: ReDIF-Article 1.0 Author-Name: Mohsen Lashgari Author-X-Name-First: Mohsen Author-X-Name-Last: Lashgari Author-Name: Ata Allah Taleizadeh Author-X-Name-First: Ata Allah Author-X-Name-Last: Taleizadeh Author-Name: Seyed Jafar Sadjadi Author-X-Name-First: Seyed Jafar Author-X-Name-Last: Sadjadi Title: Ordering policies for non-instantaneous deteriorating items under hybrid partial prepayment, partial trade credit and partial backordering Abstract: In this paper, an economic order quantity model is presented for non-instantaneous deteriorating items under a hybrid payment schedule. This payment schedule is composed of a multiple advance payments scheme and a delayed payment plan. Here, a retailer must prepay a portion of the purchasing cost to his supplier, during the lead time of order delivery, in several instalments. The retailer is allowed to pay the rest after a certain amount of time from receiving the order. The first policy may be adopted by the supplier to finance the procurement of materials or parts used to prepare the order, or to control the risk of order cancellation; and the second policy may be employed as a marketing strategy to stimulate sales. On the other hand, the retailer sells the products to his customers. For the proposed model, inventory shortage is also taken into account, which may occur as backorders, lost sales, or a combination of both. The retailer’s total inventory cost (including the costs of ordering, purchasing, inventory holding, shortage and also the interest costs incurred for advance payment and delayed payment) is minimised, in order to find the order and shortage quantities. 
Several numerical examples are presented for demonstrating the applicability of the framework. Finally, in order to provide managerial insights, sensitivity analyses are performed for several key parameters. Journal: Journal of the Operational Research Society Pages: 1167-1196 Issue: 8 Volume: 69 Year: 2018 Month: 8 X-DOI: 10.1080/01605682.2017.1390524 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1390524 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:8:p:1167-1196 Template-Type: ReDIF-Article 1.0 Author-Name: Jinting Wang Author-X-Name-First: Jinting Author-X-Name-Last: Wang Author-Name: Zhe George Zhang Author-X-Name-First: Zhe George Author-X-Name-Last: Zhang Title: Strategic joining in an M/M/1 queue with risk-sensitive customers Abstract: To analyze a stochastic service system with customers choosing to join or balk upon arrival, we model the system as a single server Markovian queue with a quadratic utility function for customers. In contrast to classical models with risk-neutral customers, we focus on the queueing model with risk-sensitive ones and study customer strategies under individual interest equilibrium, server’s profit optimization, and social welfare optimization. The quadratic utility function allows us to take the risk and return tradeoff into account in analyzing customer joining strategies. We show that while some of the well-known results for the risk-neutral customer situation apply, others may fail to hold in some realistic risk-sensitive customer situations. Furthermore, we examine the queue length information effect on different performance measures from server’s profit and social welfare perspectives. A practical implication of this study is that managers of service systems should be very cautious about relying on classical stationary queueing analysis when customers are risk-sensitive. 
Journal: Journal of the Operational Research Society Pages: 1197-1214 Issue: 8 Volume: 69 Year: 2018 Month: 8 X-DOI: 10.1080/01605682.2017.1390526 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1390526 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:8:p:1197-1214 Template-Type: ReDIF-Article 1.0 Author-Name: Wenbo Chen Author-X-Name-First: Wenbo Author-X-Name-Last: Chen Author-Name: Ming Dong Author-X-Name-First: Ming Author-X-Name-Last: Dong Title: Joint inventory control across products with supply uncertainty and responsive pricing Abstract: We study sourcing and responsive pricing decisions of a firm with two correlated products and price-dependent demand when supply capacities for these two products are uncertain. Cross-price effects exist between the two products, which means that the demand of each product depends on the prices of both products. First, we apply the tool of L♮-convexity and some recent results to establish the L♮-convexity of the value-to-go function during each period; then we use KKT conditions and some structural properties given by L♮-convexity to derive structural results of the optimal order policy for these two correlated products in a periodic review setting. Contrary to the classical “base-stock” policy derived in the literature for single product or multiple products without uncertain supply capacity, we show that the optimal order policy in our inventory control problem is complicated and it follows the “order-up-to” structural property only under some conditions. Numerical studies are carried out to present the structural properties of the optimal order policy and show the effect of the complementarity or substitutability on the profits. 
Journal: Journal of the Operational Research Society Pages: 1215-1226 Issue: 8 Volume: 69 Year: 2018 Month: 8 X-DOI: 10.1080/01605682.2017.1390527 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1390527 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:8:p:1215-1226 Template-Type: ReDIF-Article 1.0 Author-Name: Ala Pazirandeh Author-X-Name-First: Ala Author-X-Name-Last: Pazirandeh Author-Name: Amin Maghsoudi Author-X-Name-First: Amin Author-X-Name-Last: Maghsoudi Title: Improved coordination during disaster relief operations through sharing of resources Abstract: In this paper, we focus on coordination dynamics between nonprofit organisations in the short-term, nonprofit, and competitive settings in disaster relief operations. Sharing resources across organisations can be a key to better coordination. Thus, we tested the link between resource sharing, aspects impacting resource sharing, and operational performance of the organisations using 101 data points. Data was collected through a survey from humanitarian organisations within the Southeast Asian region and was analysed using the Structural Equation Modelling-Partial Least Square (SEM-PLS) approach. The results show that resource sharing can improve organisational performance in this horizontal and competitive context, and that complementarity of resources between organisations increases their willingness to share resources. Complementarity of resources can also improve the interdependencies between organisations, which are not perceived very highly in the current highly competitive settings. Journal: Journal of the Operational Research Society Pages: 1227-1241 Issue: 8 Volume: 69 Year: 2018 Month: 8 X-DOI: 10.1080/01605682.2017.1390530 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1390530 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:69:y:2018:i:8:p:1227-1241 Template-Type: ReDIF-Article 1.0 Author-Name: Thomas Chabot Author-X-Name-First: Thomas Author-X-Name-Last: Chabot Author-Name: Leandro C. Coelho Author-X-Name-First: Leandro C. Author-X-Name-Last: Coelho Author-Name: Jacques Renaud Author-X-Name-First: Jacques Author-X-Name-Last: Renaud Author-Name: Jean-François Côté Author-X-Name-First: Jean-François Author-X-Name-Last: Côté Title: Mathematical model, heuristics and exact method for order picking in narrow aisles Abstract: Order picking is one of the most challenging operations in distribution centre management and one of the most important sources of costs. One way to reduce the lead time and associated costs is to minimise the total amount of work for collecting all orders. This paper is motivated by a collaboration with an industrial partner who delivers furniture and electronic equipment. We have modelled their narrow-aisle order picking problem as a vehicle routing problem through a series of distance transformations between all pairs of locations. Security issues arising when working in narrow aisles impose an extra layer of difficulty when determining the routes. We show that these security measures and the operator equipment allow us to decompose the problem per aisle. In other words, if one has to pick orders from three aisles in the warehouse, it is possible to decompose the problem and create three different instances of the picking problem. Our approach yields an exact representation of all possible picking sequences. We also show that neglecting 2D aspects and solving the problem over a 1D warehouse yields significant differences in the solutions, which are then suboptimal for the real 2D case. We have solved a large set of instances reproducing realistic configurations using a combination of heuristics and an exact algorithm, minimising the total distance travelled for picking all items. 
Through extensive computational experiments, we identify which of our methods are better suited for each aisle configuration. We also compare our solutions with those obtained by the company’s order picking procedures, showing that improvements can be achieved by using our approach. Journal: Journal of the Operational Research Society Pages: 1242-1253 Issue: 8 Volume: 69 Year: 2018 Month: 8 X-DOI: 10.1080/01605682.2017.1390532 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1390532 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:8:p:1242-1253 Template-Type: ReDIF-Article 1.0 Author-Name: Puca Huachi Vaz Penna Author-X-Name-First: Puca Huachi Vaz Author-X-Name-Last: Penna Author-Name: Andréa Cynthia Santos Author-X-Name-First: Andréa Cynthia Author-X-Name-Last: Santos Author-Name: Christian Prins Author-X-Name-First: Christian Author-X-Name-Last: Prins Title: Vehicle routing problems for last mile distribution after major disaster Abstract: This study is dedicated to a complex Vehicle Routing Problem (VRP) applied to the response phase after a natural disaster. Raised by the last mile distribution of relief goods after earthquakes, it is modelled as a rich VRP involving a heterogeneous fleet of vehicles, multiple trips, multiple depots, and vehicle-site dependencies. The proposed method is a generic hybrid heuristic that uses a Set Partitioning formulation to add memory to a Multi-Start Iterated Local Search framework. To better fit the requirements of last mile distribution, the algorithm has been developed in partnership with members of the International Charter on Space and Major Disasters and has been evaluated on real scenarios from the Port-au-Prince earthquake. The heuristic quickly computes efficient routes while determining the number of required vehicles and the subset of depots to be used. 
Moreover, the computational results show that the proposed method is also competitive compared to state-of-the-art heuristics on closely related problems found in industrial distribution. Journal: Journal of the Operational Research Society Pages: 1254-1268 Issue: 8 Volume: 69 Year: 2018 Month: 8 X-DOI: 10.1080/01605682.2017.1390534 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1390534 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:8:p:1254-1268 Template-Type: ReDIF-Article 1.0 Author-Name: Fernando A. F. Ferreira Author-X-Name-First: Fernando A. F. Author-X-Name-Last: Ferreira Author-Name: Ronald W. Spahr Author-X-Name-First: Ronald W. Author-X-Name-Last: Spahr Author-Name: Mark A. Sunderman Author-X-Name-First: Mark A. Author-X-Name-Last: Sunderman Author-Name: Marjan S. Jalali Author-X-Name-First: Marjan S. Author-X-Name-Last: Jalali Title: A prioritisation index for blight intervention strategies in residential real estate Abstract: The existence of abandoned or poorly maintained properties, often with overgrowth, litter, and abandoned junk – or “neighbourhood blight” as it is sometimes referred to – is a complex and wide-ranging real estate problem. Its detrimental impact on neighbourhood property values, safety and reputation requires an elucidation of where its causes lie, which in turn requires that areas of intervention first be identified. The need is for multidimensional solutions which take into account different stakeholders’ interests and perceptions. To address this need, this paper integrates cognitive mapping and multiple criteria decision analysis (MCDA) and, based on the discussion of real world cases with a panel of urban planning experts from the Lisbon municipality in Portugal, constructs a blight intervention prioritisation index. 
The resulting framework was validated by the participating panel members and a representative of the city council, and is aimed at facilitating strategies for intervention and elimination of residential neighbourhood blight. Journal: Journal of the Operational Research Society Pages: 1269-1285 Issue: 8 Volume: 69 Year: 2018 Month: 8 X-DOI: 10.1080/01605682.2017.1390535 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1390535 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:8:p:1269-1285 Template-Type: ReDIF-Article 1.0 Author-Name: John H. Powell Author-X-Name-First: John H. Author-X-Name-Last: Powell Author-Name: Navonil Mustafee Author-X-Name-First: Navonil Author-X-Name-Last: Mustafee Author-Name: Colin S. Brown Author-X-Name-First: Colin S. Author-X-Name-Last: Brown Title: The rôle of knowledge in system risk identification and assessment: The 2014 Ebola outbreak Abstract: Current approaches to risk management stress the need for dynamic approaches to risk identification aimed at reducing the expected consequences of undesired outcomes. We contend that these approaches place insufficient emphasis on the system knowledge available to the assessor, particularly in respect of three related factors, namely the dynamic behaviour of the system under threat, the role of human agents and the knowledge availability to those agents. In this paper, we address the rôle of knowledge use and availability in critical human activity systems. We emphasise two distinctions: that between information and knowledge used in these systems, and that between knowledge about the system and knowledge deployed within it, the latter forming part of the system itself. 
Using the ongoing 2014–2015 West African Ebola outbreak as an example, we offer a practical procedure using the well-known system dynamics technique in its qualitative form for the identification of risks and appropriate policies for managing those risks. Journal: Journal of the Operational Research Society Pages: 1286-1308 Issue: 8 Volume: 69 Year: 2018 Month: 8 X-DOI: 10.1080/01605682.2017.1392404 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1392404 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:8:p:1286-1308 Template-Type: ReDIF-Article 1.0 Author-Name: Neng-Hui Shih Author-X-Name-First: Neng-Hui Author-X-Name-Last: Shih Author-Name: Yao-Sheng Liao Author-X-Name-First: Yao-Sheng Author-X-Name-Last: Liao Author-Name: Chih-Hsiung Wang Author-X-Name-First: Chih-Hsiung Author-X-Name-Last: Wang Title: Determining an economic production quantity under a zero defects policy Abstract: This study considers a production system that shifts randomly from an in-control state to an out-of-control state, where, when the system is out of control, it has a larger probability of producing a nonconforming product than it does in an in-control state. A zero defects policy is reached by inspecting all products at a non-negligible inspection time. The inspection information is used to monitor the process quality in order to determine whether to cease production. To achieve a minimal cost per conforming product, a decision rule is provided for deciding when to cease production using the previous product inspection information or to determine an optimal production batch size when on-line product inspection is infeasible. Numerical examples are given to illustrate our proposed model. 
Sensitivity analysis of the model parameter values was also performed in order to understand the effect of parameter changes on the model’s variables; it shows that a smaller manufacturing variation achieves a smaller expected total cost per conforming product. Finally, a conclusion, including some future research directions, is provided. Journal: Journal of the Operational Research Society Pages: 1309-1317 Issue: 8 Volume: 69 Year: 2018 Month: 8 X-DOI: 10.1080/01605682.2017.1392405 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1392405 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:8:p:1309-1317 Template-Type: ReDIF-Article 1.0 Author-Name: Shuo Yang Author-X-Name-First: Shuo Author-X-Name-Last: Yang Author-Name: Kai Yang Author-X-Name-First: Kai Author-X-Name-Last: Yang Author-Name: Lixing Yang Author-X-Name-First: Lixing Author-X-Name-Last: Yang Author-Name: Ziyou Gao Author-X-Name-First: Ziyou Author-X-Name-Last: Gao Title: MILP formulations and a TS algorithm for reliable last train timetabling with uncertain transfer flows Abstract: This paper aims to develop reliable last train timetabling models for increasing the number of successful transfer passengers and reducing the total running time for metro corporations. The model development is based on the observation that real-world transfer flows capture the characteristics of randomness in a subway network. For systematically modelling uncertainty, a sample-based representation and two types of non-expected evaluation criteria, namely the max–min reliability criterion and the percentile reliability criterion, are proposed to generate reliable timetables for last trains. The equivalent mixed integer linear programming formulations are deduced for the respective evaluation strategies by introducing auxiliary variables. 
Based upon the linearised models, an efficient tabu search (TS) algorithm incorporating a solution generation method is presented. Finally, a number of small problem instances are solved using CPLEX for the linear models. The obtained results are also used as a platform for assessing the performance of the proposed TS approach, which is then tested on large Beijing Subway instances with promising results. Journal: Journal of the Operational Research Society Pages: 1318-1334 Issue: 8 Volume: 69 Year: 2018 Month: 8 X-DOI: 10.1080/01605682.2017.1392406 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1392406 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:8:p:1318-1334 Template-Type: ReDIF-Article 1.0 Author-Name: Harsha Perera Author-X-Name-First: Harsha Author-X-Name-Last: Perera Author-Name: Jack Davis Author-X-Name-First: Jack Author-X-Name-Last: Davis Author-Name: Tim B. Swartz Author-X-Name-First: Tim B. Author-X-Name-Last: Swartz Title: Assessing the impact of fielding in Twenty20 cricket Abstract: This paper attempts to quantify the importance of fielding in Twenty20 cricket. We introduce the metric of expected runs saved due to fielding which is both interpretable and is directly relevant to winning matches. The metric is assigned to individual players and is based on a textual analysis of match commentaries using random forest methodology. We observe that the best fielders save on average 1.2 runs per match compared to a typical fielder. Journal: Journal of the Operational Research Society Pages: 1335-1343 Issue: 8 Volume: 69 Year: 2018 Month: 8 X-DOI: 10.1080/01605682.2017.1398204 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1398204 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:69:y:2018:i:8:p:1335-1343 Template-Type: ReDIF-Article 1.0 Author-Name: Jonathan Crook Author-X-Name-First: Jonathan Author-X-Name-Last: Crook Author-Name: Christophe Mues Author-X-Name-First: Christophe Author-X-Name-Last: Mues Author-Name: Tony Bellotti Author-X-Name-First: Tony Author-X-Name-Last: Bellotti Author-Name: Galina Andreeva Author-X-Name-First: Galina Author-X-Name-Last: Andreeva Title: Call for papers special issue on credit risk modelling Journal: Journal of the Operational Research Society Pages: ii-ii Issue: 5 Volume: 70 Year: 2019 Month: 5 X-DOI: 10.1080/01605682.2019.1603422 File-URL: http://hdl.handle.net/10.1080/01605682.2019.1603422 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:5:p:ii-ii Template-Type: ReDIF-Article 1.0 Author-Name: Yewen Gu Author-X-Name-First: Yewen Author-X-Name-Last: Gu Author-Name: Stein W. Wallace Author-X-Name-First: Stein W. Author-X-Name-Last: Wallace Author-Name: Xin Wang Author-X-Name-First: Xin Author-X-Name-Last: Wang Title: Integrated maritime fuel management with stochastic fuel prices and new emission regulations Abstract: Maritime fuel management (MFM) controls the procurement and consumption of the fuels used on board and therefore manages one of the most important cost drivers in the shipping industry. At the operational level, a shipping company needs to manage its fuel consumption by making optimal routing and speed decisions for each voyage. But since fuel prices are highly volatile, a shipping company sometimes also tactically procures fuel in the forward market to control risk and cost volatility. From an operations research perspective, it is customary to think of tactical and operational decisions as tightly linked. However, the existing literature on MFM normally focuses on only one of these two levels, rather than taking an integrated point of view. 
This is in line with how shipping companies operate; tactical and operational fuel management decisions are made in isolation. We develop a stochastic programming model involving both tactical and operational decisions in MFM in order to minimise the total expected fuel costs, controlled for financial risk, within a planning period. This paper points out that after the latest regulation of the Sulphur Emission Control Areas (SECA) came into force in 2015, an integration of the tactical and operational levels in MFM has become important for shipping companies whose business deals with SECA. The results of the computational study show that isolated decision making on either tactical or operational level in MFM will lead to various problems. Nevertheless, the most severe consequence occurs when tactical decisions are made in isolation. Journal: Journal of the Operational Research Society Pages: 707-725 Issue: 5 Volume: 70 Year: 2019 Month: 5 X-DOI: 10.1080/01605682.2017.1415649 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1415649 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:5:p:707-725 Template-Type: ReDIF-Article 1.0 Author-Name: Jian Ni Author-X-Name-First: Jian Author-X-Name-Last: Ni Author-Name: Shoude Li Author-X-Name-First: Shoude Author-X-Name-Last: Li Title: When better quality or higher goodwill can result in lower product price: A dynamic analysis Abstract: This article analyses the price-quality and price-goodwill relationships under the influence of quality on goodwill through the effects of cost, sales, and mark-up. We identify the conditions under which a negative price-quality or price-goodwill relationship will arise, that is, price falls as quality or goodwill increases over time. 
We show that the price-quality or price-goodwill relationship could be negative even if the demand function is linearly additive, and this relationship will tend to be positive when the customer demand becomes more sensitive to the product quality. Journal: Journal of the Operational Research Society Pages: 726-736 Issue: 5 Volume: 70 Year: 2019 Month: 5 X-DOI: 10.1080/01605682.2018.1452535 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1452535 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:5:p:726-736 Template-Type: ReDIF-Article 1.0 Author-Name: Min Wen Author-X-Name-First: Min Author-X-Name-Last: Wen Author-Name: Rune Larsen Author-X-Name-First: Rune Author-X-Name-Last: Larsen Author-Name: Stefan Ropke Author-X-Name-First: Stefan Author-X-Name-Last: Ropke Author-Name: Hanne L. Petersen Author-X-Name-First: Hanne L. Author-X-Name-Last: Petersen Author-Name: Oli B. G. Madsen Author-X-Name-First: Oli B. G. Author-X-Name-Last: Madsen Title: Centralised horizontal cooperation and profit sharing in a shipping pool Abstract: Horizontal cooperation in logistics has attracted an increasing amount of attention in both industry and the research community. The most common form of cooperation in the tramp shipping market is the shipping pool, formed by a fleet of ships from different ownerships operated by a centralised administration. This paper studies such a centralised horizontal cooperation, a product tanker pool in Denmark, and addresses the operational challenges, including how to maximise the pool profit and how to allocate it fairly. We apply discrete event simulation and dynamic ship routing and speed optimisation in order to maximise the pool profit in a highly dynamic environment and apply methods derived from cooperative game theory when allocating the total profit. 
Through a large number of experiments on realistic data, we evaluate the benefit of cooperation under different scenarios, present the results from the profit allocation and analyse the effect of pool size on the total profit and ship utilisation rate. Journal: Journal of the Operational Research Society Pages: 737-750 Issue: 5 Volume: 70 Year: 2019 Month: 5 X-DOI: 10.1080/01605682.2018.1457481 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1457481 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:5:p:737-750 Template-Type: ReDIF-Article 1.0 Author-Name: Juliana Quintanilha da Silveira Author-X-Name-First: Juliana Quintanilha Author-X-Name-Last: da Silveira Author-Name: João Carlos Correia Baptista Soares de Mello Author-X-Name-First: João Carlos Correia Baptista Author-X-Name-Last: Soares de Mello Author-Name: Lidia Angulo-Meza Author-X-Name-First: Lidia Author-X-Name-Last: Angulo-Meza Title: Input redistribution using a parametric DEA frontier and variable returns to scale: The parabolic efficient frontier Abstract: In practical use of Data Envelopment Analysis (DEA), there are some cases in which the resources used by each DMU can be shared with others, or there is a limit on the total amount of resources to be used. In this way, DMUs may have to redistribute inputs to achieve the efficient frontier. One way to deal with this situation is to use the so-called parametric DEA, which has dealt only with constant returns to scale. This paper proposes a method to determine a paraboloid frontier for the resource redistribution of DMUs, where the sum of one input among observed DMUs is constant. This extension of parametric DEA models deals with variable returns to scale. This paper also includes the mathematical demonstration of the variable returns to scale property of the parabolic frontier. To illustrate the use of the model, we present numerical examples. 
Journal: Journal of the Operational Research Society Pages: 751-759 Issue: 5 Volume: 70 Year: 2019 Month: 5 X-DOI: 10.1080/01605682.2018.1457484 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1457484 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:5:p:751-759 Template-Type: ReDIF-Article 1.0 Author-Name: Mona Barat Author-X-Name-First: Mona Author-X-Name-Last: Barat Author-Name: Ghasem Tohidi Author-X-Name-First: Ghasem Author-X-Name-Last: Tohidi Author-Name: Masoud Sanei Author-X-Name-First: Masoud Author-X-Name-Last: Sanei Author-Name: Shabnam Razavyan Author-X-Name-First: Shabnam Author-X-Name-Last: Razavyan Title: Data envelopment analysis for decision making unit with nonhomogeneous internal structures: An application to the banking industry Abstract: Traditional Data Envelopment Analysis (DEA) evaluates the relative efficiency of a set of homogeneous decision making units (DMUs) regarding multiple inputs and outputs. An important extension of DEA deals with applications wherein the internal structures of DMUs are known, specifically those that have a network framework. In some situations, the assumption of homogeneity among the internal structures of DMUs is violated; for instance, when a set of universities comprises DMUs but not all of them have the same faculties. This paper proposes a DEA-based methodology to deal with the problem of evaluating the relative efficiencies of a set of DMUs whose internal structures are nonhomogeneous. It is shown that the overall efficiency of each DMU can be evaluated through two stages: in the first stage, subgroup efficiency scores are derived, and the second stage evaluates the overall efficiency score of each DMU using a weighted average of the subgroup efficiency scores obtained in stage 1. 
To show the practical aspects of the newly developed model, it is applied to a hypothetical data-set in addition to a real data-set from the banking industry. Journal: Journal of the Operational Research Society Pages: 760-769 Issue: 5 Volume: 70 Year: 2019 Month: 5 X-DOI: 10.1080/01605682.2018.1457483 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1457483 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:5:p:760-769 Template-Type: ReDIF-Article 1.0 Author-Name: Marcio Alves Diniz Author-X-Name-First: Marcio Alves Author-X-Name-Last: Diniz Author-Name: Rafael Izbicki Author-X-Name-First: Rafael Author-X-Name-Last: Izbicki Author-Name: Danilo Lopes Author-X-Name-First: Danilo Author-X-Name-Last: Lopes Author-Name: Luis Ernesto Salasar Author-X-Name-First: Luis Ernesto Author-X-Name-Last: Salasar Title: Comparing probabilistic predictive models applied to football Abstract: We propose two Bayesian multinomial-Dirichlet models to predict the final outcome of football (soccer) matches and compare them to three well-known models regarding their predictive power. All the models predicted the full-time results of 1710 matches of the first division of the Brazilian football championship and the comparison used three proper scoring rules, the proportion of errors and a calibration assessment. We also provide a goodness of fit measure. Our results show that multinomial-Dirichlet models are not only competitive with standard approaches, but they are also well calibrated and present reasonable goodness of fit. Journal: Journal of the Operational Research Society Pages: 770-782 Issue: 5 Volume: 70 Year: 2019 Month: 5 X-DOI: 10.1080/01605682.2018.1457485 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1457485 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:70:y:2019:i:5:p:770-782 Template-Type: ReDIF-Article 1.0 Author-Name: Ning Zhu Author-X-Name-First: Ning Author-X-Name-Last: Zhu Author-Name: Jens Leth Hougaard Author-X-Name-First: Jens Leth Author-X-Name-Last: Hougaard Author-Name: Mojtaba Ghiyasi Author-X-Name-First: Mojtaba Author-X-Name-Last: Ghiyasi Title: Ranking production units by their impact on structural efficiency Abstract: League tables associated with various forms of service activities from schools to hospitals illustrate a public need for ranking institutions by their productive performance. We present a new approach for ranking production units which is based on each unit’s marginal contribution in terms of structural efficiency. The approach is radically different from conventional methods based on super-efficiency indexes in Data Envelopment Analysis. We illustrate the mechanics of our method by numerical examples as well as an empirical illustration. We further demonstrate that our new indexes inherit all relevant and desirable properties of the Farrell efficiency index upon which they are constructed. Journal: Journal of the Operational Research Society Pages: 783-792 Issue: 5 Volume: 70 Year: 2019 Month: 5 X-DOI: 10.1080/01605682.2018.1457486 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1457486 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:5:p:783-792 Template-Type: ReDIF-Article 1.0 Author-Name: Samaneh Shiri Author-X-Name-First: Samaneh Author-X-Name-Last: Shiri Author-Name: ManWo Ng Author-X-Name-First: ManWo Author-X-Name-Last: Ng Author-Name: Nathan Huynh Author-X-Name-First: Nathan Author-X-Name-Last: Huynh Title: Integrated drayage scheduling problem with stochastic container packing and unpacking times Abstract: This paper considers the integrated drayage scheduling problem. 
Two new models are developed that account for the uncertainty of (un)packing times in drayage operations without an explicit assumption about their probability distributions. These models are developed for situations in which an accurate probability distribution is not available. The first model requires the specification of the mean and variance of the (un)packing times, and the second model requires the specification of the mean and the upper and lower bounds of the (un)packing times. To demonstrate the feasibility of the developed models, they are tested on problem instances with real-life characteristics. The numerical results show that the drayage operation time increases when the mean of (un)packing times, the variance of the (un)packing times or the user-specified confidence level is increased. Also, the results indicate that the stochastic models produce schedules that are more likely to be feasible under a variety of scenarios compared to the deterministic model. Journal: Journal of the Operational Research Society Pages: 793-806 Issue: 5 Volume: 70 Year: 2019 Month: 5 X-DOI: 10.1080/01605682.2018.1457487 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1457487 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:5:p:793-806 Template-Type: ReDIF-Article 1.0 Author-Name: Mostafa Davtalab-Olyaie Author-X-Name-First: Mostafa Author-X-Name-Last: Davtalab-Olyaie Title: A secondary goal in DEA cross-efficiency evaluation: A “one home run is much better than two doubles” criterion Abstract: Data Envelopment Analysis (DEA) is a mathematical programming approach for assessing the relative efficiency of decision making units (DMUs). The cross-efficiency evaluation is an extension of DEA that provides a ranking method and eliminates unrealistic DEA weighting schemes without requiring weight restrictions or prior information. The cross-efficiency evaluation may have some shortcomings, e.g. 
the cross-efficiency scores may not be unique due to the presence of several optima. To rectify this issue, several secondary goals have been proposed in the literature. Some scholars have proposed several cross-efficiency evaluations based on maximising (minimising) the total deviation from their ideal point from an aggressive (benevolent) perspective. In some cases, minimising (maximising) the number of DMUs that achieve their target efficiencies is more important than maximising (minimising) the total deviation from the ideal point. We propose some alternative models for the cross-efficiency evaluation based on the cardinality of the set of “satisfied DMUs”, i.e. the DMUs that achieve their maximum efficiencies. For aggressive (benevolent) cross-efficiency evaluation, among all the optimal weights for a specific unit, we choose the weights which can maximise its efficiency, and at the same time minimise (maximise) the number of satisfied units. We demonstrate how the proposed method can be implemented and illustrate it using two examples. Journal: Journal of the Operational Research Society Pages: 807-816 Issue: 5 Volume: 70 Year: 2019 Month: 5 X-DOI: 10.1080/01605682.2018.1457482 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1457482 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:5:p:807-816 Template-Type: ReDIF-Article 1.0 Author-Name: Qi Wei Author-X-Name-First: Qi Author-X-Name-Last: Wei Author-Name: Yong Wu Author-X-Name-First: Yong Author-X-Name-Last: Wu Author-Name: Yiwei Jiang Author-X-Name-First: Yiwei Author-X-Name-Last: Jiang Author-Name: T.C.E. Cheng Author-X-Name-First: T.C.E. Author-X-Name-Last: Cheng Title: Two-machine hybrid flowshop scheduling with identical jobs: Solution algorithms and analysis of hybrid benefits Abstract: We study two-machine hybrid flowshop scheduling with identical jobs. 
Each job consists of two tasks, namely a flexible task and a fixed task. The flexible task can be processed on either machine, while the fixed task must be processed on the second machine. The fixed task can only be processed after the flexible task is finished. Due to the different technological capabilities of the two machines, the flexible task has different processing times on the two machines. Our goal is to find a schedule that minimises the makespan. We consider two variants of the problem, namely no buffer and infinite buffer capacity between the two machines. We present constant-time solution algorithms for both variants. In addition, analysing the relationship between the hybrid benefits and the performance difference between the two machines, we find that, for the infinite-buffer case, increasing the technological level of the second machine does not necessarily increase the hybrid benefits. Journal: Journal of the Operational Research Society Pages: 817-826 Issue: 5 Volume: 70 Year: 2019 Month: 5 X-DOI: 10.1080/01605682.2018.1458018 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1458018 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:5:p:817-826 Template-Type: ReDIF-Article 1.0 Author-Name: Yongming Song Author-X-Name-First: Yongming Author-X-Name-Last: Song Author-Name: Guangxu Li Author-X-Name-First: Guangxu Author-X-Name-Last: Li Title: A large-scale group decision-making with incomplete multi-granular probabilistic linguistic term sets and its application in sustainable supplier selection Abstract: A decision-making process in which a large number of stakeholders take part is usually called a large-scale group decision-making (LGDM) problem. Some stakeholders may only provide partial preference information because of their limited knowledge of the alternatives. 
In this paper, an LGDM model is proposed to handle such problems, in which the incomplete multi-granular linguistic information is more appropriate for representing the assessments of multiple stakeholders. Meanwhile, the proposed model obtains the maximum information from all decision makers and avoids the oversimplification of the elicited information found in traditional linguistic models. More significantly, we present three normalisation methods for obtaining complete probabilistic linguistic term sets (PLTSs) based on three risk attitudes: optimistic, pessimistic and neutral. In addition, alternatives are ranked by the extended TOPSIS method. Finally, a sustainable supplier selection problem is used to validate the effectiveness of the proposed model. Journal: Journal of the Operational Research Society Pages: 827-841 Issue: 5 Volume: 70 Year: 2019 Month: 5 X-DOI: 10.1080/01605682.2018.1458017 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1458017 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:5:p:827-841 Template-Type: ReDIF-Article 1.0 Author-Name: Bice Cavallo Author-X-Name-First: Bice Author-X-Name-Last: Cavallo Author-Name: Alessio Ishizaka Author-X-Name-First: Alessio Author-X-Name-Last: Ishizaka Author-Name: Maria Grazia Olivieri Author-X-Name-First: Maria Grazia Author-X-Name-Last: Olivieri Author-Name: Massimo Squillante Author-X-Name-First: Massimo Author-X-Name-Last: Squillante Title: Comparing inconsistency of pairwise comparison matrices depending on entries Abstract: Pairwise comparisons have been a long-standing technique for comparing alternatives/criteria and their role has been pivotal in the development of modern decision-making methods. 
Since several types of pairwise comparison matrices (e.g., multiplicative, additive, fuzzy) have been proposed in the literature, in this paper we investigate for which type of matrix decision-makers are more coherent when they express their subjective preferences. By performing an experiment, we found that the additive approach provides the worst level of coherence. Journal: Journal of the Operational Research Society Pages: 842-850 Issue: 5 Volume: 70 Year: 2019 Month: 5 X-DOI: 10.1080/01605682.2018.1464427 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1464427 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:5:p:842-850 Template-Type: ReDIF-Article 1.0 Author-Name: Sungmook Lim Author-X-Name-First: Sungmook Author-X-Name-Last: Lim Title: A note on a robust inventory model with stock-dependent demand Abstract: We investigate an inventory model with stock-dependent demand, where a larger pile of displayed stock leads the customer to purchase more. The dependency of demand on the inventory level is modelled as a monomial function whose shape and scale parameters are stochastic. We present a linear regression-based method for constructing ellipsoidal representations of the parameter uncertainty, which are subsequently incorporated into the inventory model under the robust optimisation framework. We show that the resulting robust optimisation model can be transformed into an equivalent convex programme, and also prove that a robust optimal inventory replenishment policy is of the base-stock type. Through a numerical illustration of the proposed approach and a performance analysis based upon Monte Carlo simulation, we demonstrate that robust optimal order decisions exhibit a unique advantage over deterministic ones. 
Journal: Journal of the Operational Research Society Pages: 851-866 Issue: 5 Volume: 70 Year: 2019 Month: 5 X-DOI: 10.1080/01605682.2018.1468861 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1468861 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:5:p:851-866 Template-Type: ReDIF-Article 1.0 Author-Name: Shibo Bian Author-X-Name-First: Shibo Author-X-Name-Last: Bian Author-Name: Wei Liu Author-X-Name-First: Wei Author-X-Name-Last: Liu Author-Name: Dewei Zhang Author-X-Name-First: Dewei Author-X-Name-Last: Zhang Title: The sovereign credit and the limited foreign exchange outflow and the liquidity management of foreign exchange reserves Abstract: In this paper, we use models of commercial bank liquidity management to study the liquidity of foreign exchange reserves. We build a model for the liquidity management of foreign exchange reserves, which includes the sovereign credit and the limited foreign exchange outflow, and we propose an optimal proportion at which central banks should hold their foreign exchange reserves in the form of liquidity, together with an accurate measurement of the overall gains of foreign exchange reserves. Journal: Journal of the Operational Research Society Pages: 867-871 Issue: 5 Volume: 70 Year: 2019 Month: 5 X-DOI: 10.1080/01605682.2018.1468863 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1468863 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:70:y:2019:i:5:p:867-871 Template-Type: ReDIF-Article 1.0 Author-Name: Sanne Wøhlk Author-X-Name-First: Sanne Author-X-Name-Last: Wøhlk Author-Name: Gilbert Laporte Author-X-Name-First: Gilbert Author-X-Name-Last: Laporte Title: A fast heuristic for large-scale capacitated arc routing problems Abstract: The purpose of this paper is to develop a fast heuristic called FastCARP for the solution of large-scale capacitated arc routing problems, with or without duration constraints. This study is motivated by a waste collection problem in Denmark. After a preprocessing phase, FastCARP creates a giant tour, partitions the graph into districts, and constructs routes within each district. It then iteratively merges and splits adjacent districts and reoptimises the routes. The heuristic was tested on 264 benchmark instances containing up to 11,640 nodes, 12,675 edges, 8581 required edges, and 323 vehicles. FastCARP was compared with an alternative heuristic called Base and with several Path-Scanning algorithms. On small graphs, it was better but slower than Base. On larger graphs, it was much faster and only slightly worse than Base in terms of solution quality. It also outperformed the Path-Scanning algorithms. Journal: Journal of the Operational Research Society Pages: 1877-1887 Issue: 12 Volume: 69 Year: 2018 Month: 12 X-DOI: 10.1080/01605682.2017.1415648 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1415648 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:69:y:2018:i:12:p:1877-1887 Template-Type: ReDIF-Article 1.0 Author-Name: Nicolas Antheaume Author-X-Name-First: Nicolas Author-X-Name-Last: Antheaume Author-Name: Daniel Thiel Author-X-Name-First: Daniel Author-X-Name-Last: Thiel Author-Name: François de Corbière Author-X-Name-First: François Author-X-Name-Last: de Corbière Author-Name: Frantz Rowe Author-X-Name-First: Frantz Author-X-Name-Last: Rowe Author-Name: Hiro Takeda Author-X-Name-First: Hiro Author-X-Name-Last: Takeda Title: An analytical model to investigate the economic and environmental benefits of a supply chain resource-sharing scheme based on collaborative consolidation centres Abstract: This study evaluates the cost and carbon dioxide-equivalent emissions of different supply chain configurations to determine when suppliers should move to a greener resource-sharing scheme. We build an analytical model based on a case study of a retailer that has developed a resource-sharing initiative introducing collaborative consolidation centres (CCC) between its suppliers and its warehouses (WH). We compare the costs and carbon dioxide-equivalent emissions of using a pair of CCCs with direct delivery to twenty WHs. Our parameters include the distances between suppliers, CCCs, and WHs, in addition to the volumes delivered. This model determines when there should be a switch to the CCC system. We also compare the actual CCC locations with better alternatives, the centres of gravity of the regions. On a real-cost basis, economic gains occur but environmental gains do not, highlighting the need for alternative models for optimal locations that include both economic and environmental constraints. Journal: Journal of the Operational Research Society Pages: 1888-1902 Issue: 12 Volume: 69 Year: 2018 Month: 12 X-DOI: 10.1080/01605682.2017.1415638 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1415638 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:69:y:2018:i:12:p:1888-1902 Template-Type: ReDIF-Article 1.0 Author-Name: Mahdi Mahdiloo Author-X-Name-First: Mahdi Author-X-Name-Last: Mahdiloo Author-Name: Abdol Hossein Jafarzadeh Author-X-Name-First: Abdol Hossein Author-X-Name-Last: Jafarzadeh Author-Name: Reza Farzipoor Saen Author-X-Name-First: Reza Farzipoor Author-X-Name-Last: Saen Author-Name: Yong Wu Author-X-Name-First: Yong Author-X-Name-Last: Wu Author-Name: John Rice Author-X-Name-First: John Author-X-Name-Last: Rice Title: Modelling undesirable outputs in multiple objective data envelopment analysis Abstract: Recent empirical and conceptual work in data envelopment analysis (DEA) has emphasised its potential importance in highlighting the environmental performance of economic entities. Initial work in this emerging research area has focused on the separation of output factors into desirable and undesirable ones. In this paper, we describe recent developments in the modelling of undesirable outputs. In particular, the modelling of undesirable outputs in the range adjusted measure (RAM) is investigated. We discuss some of the difficulties of RAM in assessing the environmental efficiency of decision-making units (DMUs) and develop a multiple objective DEA model to overcome these difficulties. The proposed multiple objective model is solved as a linear programme and its applicability as a mechanism for assessing environmental efficiency is demonstrated by evaluating the technical, ecological and process environmental quality efficiency scores of China’s provinces. Journal: Journal of the Operational Research Society Pages: 1903-1919 Issue: 12 Volume: 69 Year: 2018 Month: 12 X-DOI: 10.1080/01605682.2017.1415647 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1415647 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:69:y:2018:i:12:p:1903-1919 Template-Type: ReDIF-Article 1.0 Author-Name: Pan Zhang Author-X-Name-First: Pan Author-X-Name-Last: Zhang Author-Name: Yu Xiong Author-X-Name-First: Yu Author-X-Name-Last: Xiong Author-Name: Zhongkai Xiong Author-X-Name-First: Zhongkai Author-X-Name-Last: Xiong Author-Name: Yu Zhou Author-X-Name-First: Yu Author-X-Name-Last: Zhou Title: Information sharing and service channel design in the presence of forecasting demand Abstract: This paper investigates the issue of demand forecast sharing in a supply chain, in which either the manufacturer or the retailer conducts demand-enhancing service. In the mode with manufacturer conducting service (Mode M), our analysis shows that if the service efficiency is high (low), the retailer should voluntarily (not) share its demand forecast. If the service efficiency is moderate, a side-payment contract or a bargaining mechanism can induce the retailer to share. In the mode with retailer conducting service (Mode R), no information sharing is the unique equilibrium. In both modes, supply chain members are generally better off when their forecasts become more accurate. Moreover, the positive impact of more accurate forecasts on both the manufacturer and the retailer is generally much stronger in Mode R than in Mode M. Finally, we find that both firms prefer Mode M to Mode R if the service efficiency is high, while they prefer Mode R if the service efficiency is low. Journal: Journal of the Operational Research Society Pages: 1920-1934 Issue: 12 Volume: 69 Year: 2018 Month: 12 X-DOI: 10.1080/01605682.2017.1415644 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1415644 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:69:y:2018:i:12:p:1920-1934 Template-Type: ReDIF-Article 1.0 Author-Name: Lu Zhen Author-X-Name-First: Lu Author-X-Name-Last: Zhen Author-Name: Hao Hao Author-X-Name-First: Hao Author-X-Name-Last: Hao Author-Name: Xin Shi Author-X-Name-First: Xin Author-X-Name-Last: Shi Author-Name: Lufei Huang Author-X-Name-First: Lufei Author-X-Name-Last: Huang Author-Name: Yi Hu Author-X-Name-First: Yi Author-X-Name-Last: Hu Title: Task assignment and sequencing decision model under uncertain available time of service providers Abstract: This paper studies an integrated decision problem on assigning and sequencing tasks to service providers. By considering the uncertain available time of the service providers, a stochastic programming model is proposed on the basis of a finite set of scenarios. Some realistic factors are also taken into account and formulated as non-linear cost functions, which are linearised in this study. Moreover, a heuristic solution method is designed for solving extremely large-scale problem cases within a reasonable period of time. Numerical experiments are performed to validate the effectiveness of the proposed model and the efficiency of the proposed solution method. Journal: Journal of the Operational Research Society Pages: 1935-1946 Issue: 12 Volume: 69 Year: 2018 Month: 12 X-DOI: 10.1080/01605682.2017.1415645 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1415645 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:12:p:1935-1946 Template-Type: ReDIF-Article 1.0 Author-Name: Nadezhda I. Nedashkovskaya Author-X-Name-First: Nadezhda I. Author-X-Name-Last: Nedashkovskaya Title: Investigation of methods for improving consistency of a pairwise comparison matrix Abstract: Estimation of the quality of expert judgements and their suitability for the reliable evaluation of decision alternatives is one of the key issues for successful decision-making. 
The notions of strong and weak consistency are used to estimate the contradiction level of an expert pairwise comparison matrix (PCM). Several methods are used to find the most inconsistent elements of a PCM. These methods may lead to different results, and it is difficult to determine which element has to be changed to increase the consistency level of a PCM. To solve this problem, a novel methodology for analysing the efficiency of methods for finding the most inconsistent elements of a PCM is proposed. An improved M_Outflow method for finding the most inconsistent element and a cycle in a PCM is established. Using computer modelling, it is shown that the proposed M_Outflow method is more efficient under accepted conditions in comparison with other considered methods. Journal: Journal of the Operational Research Society Pages: 1947-1956 Issue: 12 Volume: 69 Year: 2018 Month: 12 X-DOI: 10.1080/01605682.2017.1415640 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1415640 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:12:p:1947-1956 Template-Type: ReDIF-Article 1.0 Author-Name: Ching-Ter Chang Author-X-Name-First: Ching-Ter Author-X-Name-Last: Chang Title: A technique of the salient success and survival aspiration levels for multiple objective/criteria decision-making problems Abstract: This paper proposes a new technique for getting as close as possible to the salient success aspiration level while simultaneously getting as far as possible from the survival aspiration level, in order to resolve multiple objective/criteria decision-making problems. Management implications are addressed to clarify the role of salient success and survival aspirations in multiple objective/criteria decision-making problems. 
An illustrative example is provided to demonstrate the validity of the proposed method, which involves a formulation of both salient success and survival aspiration levels using binary goal programming. In addition, the proposed method uses a membership (utility) function to improve the utilisation of goal programming so as to match decision-makers’ preferences as closely as possible. A real problem is also provided to demonstrate the usefulness of the proposed method. Journal: Journal of the Operational Research Society Pages: 1957-1965 Issue: 12 Volume: 69 Year: 2018 Month: 12 X-DOI: 10.1080/01605682.2017.1415646 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1415646 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:12:p:1957-1965 Template-Type: ReDIF-Article 1.0 Author-Name: Junguang Zhang Author-X-Name-First: Junguang Author-X-Name-Last: Zhang Author-Name: Saike Jia Author-X-Name-First: Saike Author-X-Name-Last: Jia Author-Name: Estrella Diaz Author-X-Name-First: Estrella Author-X-Name-Last: Diaz Title: Dynamic monitoring and control of a critical chain project based on phase buffer allocation Abstract: Improvement in the monitoring and control of the efficiency of project scheduling is a challenge for project management research. Classic static buffer monitoring methods cannot be adapted to a complex project environment with a high degree of uncertainty. In order to overcome this challenge, this paper suggests the use of a dynamic buffer monitoring model based on the phase attributes of the project. The proposed method allocates a project buffer to each phase based on the duration rate and the network complexity of the phases, sets buffer monitoring parameters and monitors the implementation of each phase dynamically. Buffer monitoring trigger points are determined and adjusted dynamically based on the attributes of each phase. 
Thus, dynamic rolling monitoring and control are conducted through the implementation of these phases. The empirical analysis through a Monte Carlo simulation shows that the duration and cost determined by the proposed method are more reasonable, thus signifying that it can optimise both project duration and cost. These findings indicate that, as opposed to traditional buffer monitoring methods, the proposed approach can effectively overcome the student syndrome, monitor project scheduling and avoid unnecessary cost caused by excessive measures. Journal: Journal of the Operational Research Society Pages: 1966-1977 Issue: 12 Volume: 69 Year: 2018 Month: 12 X-DOI: 10.1080/01605682.2017.1415641 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1415641 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:12:p:1966-1977 Template-Type: ReDIF-Article 1.0 Author-Name: Wei Jin Author-X-Name-First: Wei Author-X-Name-Last: Jin Author-Name: Jianwen Luo Author-X-Name-First: Jianwen Author-X-Name-Last: Luo Author-Name: Qinhong Zhang Author-X-Name-First: Qinhong Author-X-Name-Last: Zhang Title: Optimal ordering and financing decisions under advance selling and delayed payment for a capital-constrained supply chain Abstract: We compare two financing strategies, advance selling and delayed payment, for a supply chain with a capital-constrained retailer facing uncertain demand. We first establish the retailer’s optimal ordering and financing decisions under advance selling and delayed payment, respectively. In the case of advance selling, we find that a strikingly different price discount rate should be adopted when the retailer becomes capital-constrained. In the case of delayed payment, we analytically investigate the impact of the retailer’s capital level on her own performance. 
Furthermore, we identify the conditions under which the advance selling strategy or the delayed payment strategy is preferable. In particular, the advance selling strategy is preferable for the retailer when she is sufficiently capital-constrained or customers are relatively price sensitive; in contrast, the delayed payment strategy is preferable for the supplier and the entire supply chain when the retailer is sufficiently capital-constrained. Journal: Journal of the Operational Research Society Pages: 1978-1993 Issue: 12 Volume: 69 Year: 2018 Month: 12 X-DOI: 10.1080/01605682.2017.1415643 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1415643 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:12:p:1978-1993 Template-Type: ReDIF-Article 1.0 Author-Name: Caterina Liberati Author-X-Name-First: Caterina Author-X-Name-Last: Liberati Author-Name: Furio Camillo Author-X-Name-First: Furio Author-X-Name-Last: Camillo Title: Personal values and credit scoring: new insights in the financial prediction Abstract: The objective of quantitative credit scoring is to develop accurate models of classification. Most attention has been devoted to delivering new classifiers based on variables commonly used in the economic literature. Several interdisciplinary studies have found that personality traits are related to financial behaviour; therefore, psychological traits could be used to lower credit risk in scoring models. In our paper, we considered financial histories and psychological traits of customers of an Italian bank. We compared the performance of kernel-based classifiers with those of standard ones. We found very promising results in terms of misclassification error reduction when personality attitudes are included in models, with both linear and non-linear discriminants. We also measured the contribution of each variable to risk prediction in order to assess the importance of each predictor. 
Journal: Journal of the Operational Research Society Pages: 1994-2005 Issue: 12 Volume: 69 Year: 2018 Month: 12 X-DOI: 10.1080/01605682.2017.1417684 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1417684 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:12:p:1994-2005 Template-Type: ReDIF-Article 1.0 Author-Name: Guilherme F. Coelho Author-X-Name-First: Guilherme F. Author-X-Name-Last: Coelho Author-Name: Luiz R. Pinto Author-X-Name-First: Luiz R. Author-X-Name-Last: Pinto Title: Kriging-based simulation optimization: An emergency medical system application Abstract: Metamodeling is a common subject in simulation optimization literature. It aims to estimate the actual (simulated) value of a point even before the point is evaluated by a simulation model. However, most publications do not apply metamodeling to models with real-world complexity and size. This paper sought to apply Kriging to minimize the average response time of a Medical Emergency System by allocating ambulances throughout several city bases. Kriging is considered the state-of-the-art technique in metamodeling as it provides, in addition to the new point estimation, the level of prediction uncertainty. The optimization process followed the Efficient Global Optimization algorithm (EGO) and the Reinterpolation Procedure to deal with a stochastic simulation model. Finally, EGO was used to obtain a curve that reflected the relationship between the minimum response time and the total number of ambulances allocated to the city, representing significant information for public healthcare system managers. Journal: Journal of the Operational Research Society Pages: 2006-2020 Issue: 12 Volume: 69 Year: 2018 Month: 12 X-DOI: 10.1080/01605682.2017.1418149 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1418149 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:69:y:2018:i:12:p:2006-2020 Template-Type: ReDIF-Article 1.0 Author-Name: Hirofumi Fukuyama Author-X-Name-First: Hirofumi Author-X-Name-Last: Fukuyama Author-Name: Rolf Färe Author-X-Name-First: Rolf Author-X-Name-Last: Färe Author-Name: William L. Weber Author-X-Name-First: William L. Author-X-Name-Last: Weber Title: Valuing and ranking Japanese Banks: Application to Japan Post Bank and Mizuho Bank Abstract: Data Envelopment Analysis has been widely used to measure the technical efficiency of banks. Although technical efficiency provides information about how much inputs could be reduced or outputs expanded, bank owners and potential acquirers are likely more concerned with the value of the bank and how it compares to other banks. We apply the decision-making unit (DMU) pricing approach to value Japanese banks which operated during March 2014–March 2016. Each bank’s monetary value is determined by the adjoint transformation of the technology matrix and these estimates complement other financial information. We find that the values for the newly privatised Japan Post Bank and for the post-merger Mizuho Bank fell during the period. This study is the first empirical application of DMU pricing. Journal: Journal of the Operational Research Society Pages: 2021-2033 Issue: 12 Volume: 69 Year: 2018 Month: 12 X-DOI: 10.1080/01605682.2017.1421855 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1421855 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:69:y:2018:i:12:p:2021-2033 Template-Type: ReDIF-Article 1.0 Author-Name: Lorraine Dodd Author-X-Name-First: Lorraine Author-X-Name-Last: Dodd Title: Techne and techniques for engaging in a socially complex world Abstract: This paper addresses the challenge for Operational Research (OR) in extending out from traditional forms of modelling towards a more relational form of modelling. 
The challenge comes from OR practice becoming more transformative in nature, which puts more emphasis on reflective practice, people and relationships. Staged Appreciation is proposed as an overall guiding framework and selected illustrative techniques are presented for engaging with social complexity: so-called “wicked” problems. Systems Thinking techniques, guided by Staged Appreciation, add an insightful new dimension to knowledge sharing for understanding, and for reflecting upon the intricacies involved in socially complex situations. There are analytical advantages to standing apart from complexity. Staged Appreciation complements this analytical standpoint by asking analysts to take a more reflective view of their own working relationships, being more a part of the socially complex problem as well as standing apart from it. Staged Appreciation offers a reflective framework for working with Systems Thinking techniques and together they complement traditional practice. The proposal and suggestions aim to support analysts in adopting a more reflective and relational view of a complex problematic situation in order to see it “as a whole.” The paper draws lessons from holism, reflective practice and subjective analysis. Journal: Journal of the Operational Research Society Pages: 1399-1409 Issue: 9 Volume: 70 Year: 2019 Month: 9 X-DOI: 10.1080/01605682.2018.1501461 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1501461 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:9:p:1399-1409 Template-Type: ReDIF-Article 1.0 Author-Name: Tommi Pajala Author-X-Name-First: Tommi Author-X-Name-Last: Pajala Title: Explaining choice quality with decision style, cognitive reflection and decision environment Abstract: Psychological measures of decision making could help researchers better understand and model individual differences in Multiple Criteria Decision Making. 
However, such measures have so far gained little traction in behavioral operational research. I investigate whether decision style, cognitive reflection, and tendency to maximise, together with the decision environment, can explain choice quality. In total, 159 participants answered the psychological measures and made 26 choices in a setting with six alternatives and six criteria. According to the Bayesian analysis, low cognitive reflection and a high need to explore alternatives were related to a higher chance of making errors in choices. This indicates that psychological measures have explanatory power in MCDM, and that the relationship to choice quality is not always in the expected direction. Journal: Journal of the Operational Research Society Pages: 1410-1424 Issue: 9 Volume: 70 Year: 2019 Month: 9 X-DOI: 10.1080/01605682.2018.1495994 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1495994 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:9:p:1410-1424 Template-Type: ReDIF-Article 1.0 Author-Name: Jennifer Morgan Author-X-Name-First: Jennifer Author-X-Name-Last: Morgan Author-Name: Paul Harper Author-X-Name-First: Paul Author-X-Name-Last: Harper Author-Name: Vincent Knight Author-X-Name-First: Vincent Author-X-Name-Last: Knight Author-Name: Andreas Artemiou Author-X-Name-First: Andreas Author-X-Name-Last: Artemiou Author-Name: Alex Carney Author-X-Name-First: Alex Author-X-Name-Last: Carney Author-Name: Andrew Nelson Author-X-Name-First: Andrew Author-X-Name-Last: Nelson Title: Determining patient outcomes from patient letters: A comparison of text analysis approaches Abstract: This paper presents a case study comparing text analysis approaches used to classify the current status of a patient to inform scheduling. 
It aims to help one of the UK's largest healthcare providers systematically capture patient outcome information following a clinic attendance, ensuring records are closed when a patient is discharged and follow-up appointments can be scheduled to occur within the time-scale required for safe, effective care. Analysing patient letters allows systematic extraction of discharge or follow-up information to automatically update a patient record. This clarifies the demand placed on the system, and whether current capacity is a barrier to timely access. Three approaches for systematic information capture are compared: phrase identification (using lexicons), word frequency analysis and supervised text mining. Approaches are evaluated according to their precision and stakeholder acceptability. Methodological lessons are presented to encourage project objectives to be considered alongside text classification methods for decision support tools. Journal: Journal of the Operational Research Society Pages: 1425-1439 Issue: 9 Volume: 70 Year: 2019 Month: 9 X-DOI: 10.1080/01605682.2018.1506559 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1506559 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:9:p:1425-1439 Template-Type: ReDIF-Article 1.0 Author-Name: Li-Hao Zhang Author-X-Name-First: Li-Hao Author-X-Name-Last: Zhang Author-Name: Huixiao Yang Author-X-Name-First: Huixiao Author-X-Name-Last: Yang Title: Incentives for RFID adoption with imperfect read rates: Wholesale price premium versus cost sharing Abstract: We consider a supply chain system consisting of a manufacturer and a retailer who requires RFID technology to eliminate inventory misplacement errors. The demand is uncertain and RFID read rates are imperfect. 
To better align their incentives, the manufacturer can recoup her additional cost stemming from RFID adoption through two compensatory schemes: Wholesale price premium (PR) and cost sharing (CS). Our analysis shows that under PR, the manufacturer charges a premium higher than the tag cost, whereas under CS, the retailer will bear the entire tag cost. Because profit margin is proportionally allocated to the firms at the same ratio under PR and without RFID at the break-even point, their incentives to adopt RFID under PR are perfectly aligned. Surprisingly, under CS, a further increase in RFID read rates can benefit the retailer but hurt the manufacturer. Further, the manufacturer prefers PR, whereas the retailer prefers CS when the tag cost is small; but both prefer PR when the cost is medium. Contrary to previous findings, the retailer’s incentive for RFID adoption under CS is stronger than the first-best level; the reverse is true for the manufacturer. Finally, CS improves the supply chain efficiency over PR but cannot coordinate the chain. Journal: Journal of the Operational Research Society Pages: 1440-1456 Issue: 9 Volume: 70 Year: 2019 Month: 9 X-DOI: 10.1080/01605682.2018.1506252 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1506252 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:9:p:1440-1456 Template-Type: ReDIF-Article 1.0 Author-Name: Lei Xiao Author-X-Name-First: Lei Author-X-Name-Last: Xiao Author-Name: Minghui Xu Author-X-Name-First: Minghui Author-X-Name-Last: Xu Author-Name: Zhiyuan Chen Author-X-Name-First: Zhiyuan Author-X-Name-Last: Chen Author-Name: Xu Guan Author-X-Name-First: Xu Author-X-Name-Last: Guan Title: Optimal pricing for advance selling with uncertain product quality and consumer fitness Abstract: We investigate a seller’s equilibrium pricing strategies under two classic advance selling pricing schemes. 
Under the dynamic pricing scheme, the seller sequentially decides his retail prices, while under the price commitment scheme, the seller simultaneously offers his retail prices. The consumers arrive sequentially depending on their awareness of advance selling, and a consumer’s valuation of the purchase is jointly determined by the inherent quality of the product and her private fitness of the product. However, both factors are uncertain in the advance period but can be resolved in the spot period. We show that under dynamic pricing, the flexibility of pricing does not necessarily lead to a higher payoff as it may also reduce the consumers’ willingness-to-buy in the advance period. Therefore, when consumers’ fitness differentiation is low, dynamic pricing becomes dysfunctional compared to non-advance selling. Under the price commitment scheme, although the seller bears the risk of quality uncertainty by determining the retail price ex ante, he can also strategically induce more consumers to buy in advance by claiming a high spot price. When the consumer fitness differentiation is low and the consumer’s awareness of advance selling is high, the price commitment scheme dominates the dynamic pricing scheme. Journal: Journal of the Operational Research Society Pages: 1457-1474 Issue: 9 Volume: 70 Year: 2019 Month: 9 X-DOI: 10.1080/01605682.2018.1489342 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1489342 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:9:p:1457-1474 Template-Type: ReDIF-Article 1.0 Author-Name: Na Fu Author-X-Name-First: Na Author-X-Name-Last: Fu Author-Name: T. C. E. Cheng Author-X-Name-First: T. C. E. 
Author-X-Name-Last: Cheng Author-Name: Zhongjun Tian Author-X-Name-First: Zhongjun Author-X-Name-Last: Tian Title: RFID investment strategy for fresh food supply chains Abstract: We study RFID investment decisions for a fresh food supply chain consisting of a retailer, a manufacturer, and a supplier. The supplier supplies a type of raw fresh food that is further processed by the manufacturer. The end product remains fresh and is sold to the retailer, which then sells it to consumers. The retailer can choose to either control the procurement of the raw fresh food or delegate the function to the manufacturer. The demand for the end product is random and there exists a spot market with ample supply for emergency purchase. Applying game theory to analyse the retailer’s decisions as to whether or not to invest in RFID technology under the control or delegation strategy, we find the equilibrium outcomes. We derive the conditions under which RFID investment is profitable and discuss the investment cost sharing issue. We also determine the optimal joint decisions of procurement strategy selection and RFID investment. Our findings provide important managerial insights to managers in the fresh food business where RFID investment is an intriguing issue: (1) RFID shall be adopted so long as the investment cost is not significantly high, either by the retailer under the control strategy or by the manufacturer under the delegation strategy, whichever is more profitable; (2) sometimes it is not optimal for the retailer to invest in RFID by itself, but it is still possible to gain from the RFID investment by the manufacturer under the delegation strategy; (3) the procurement function shall not be delegated to the manufacturer unless it is optimal for the manufacturer to invest in RFID and the retailer is better off by doing so, i.e., the delegation with RFID strategy is better than both the control strategies with and without RFID. 
Journal: Journal of the Operational Research Society Pages: 1475-1489 Issue: 9 Volume: 70 Year: 2019 Month: 9 X-DOI: 10.1080/01605682.2018.1494526 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1494526 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:9:p:1475-1489 Template-Type: ReDIF-Article 1.0 Author-Name: Jian-Gang Peng Author-X-Name-First: Jian-Gang Author-X-Name-Last: Peng Author-Name: Guang Xia Author-X-Name-First: Guang Author-X-Name-Last: Xia Title: A systematic fuzzy multi-criteria group decision-making approach for alternatives evaluation Abstract: Given that the values of the criteria in uncertain multi-criteria group decision-making (MGDM) problems take the form of fuzzy linguistic variables, this paper proposes a model based on hesitant fuzzy linguistic term sets (HFLTSs), named MGDM-HFLTS, to evaluate investment alternatives for angel investors. To meet the challenges of complexity, lack of information and time pressure among several possible values in MGDM, the HFLTSs are introduced and revised. The HFLTSs, which are convenient and sufficiently flexible to reflect the decision-makers’ preferences, are introduced to represent the hesitation or doubt originating from systematic comparisons of the assessment values of alternatives for each criterion during both preference elicitation and alternative evaluation phases. Then, context-free grammar is revised for computing with words to enhance and extend the applicability of HFLTSs according to a set of various membership degrees over which decision-makers hesitate when eliciting their preferences over alternatives. Subsequently, the most satisfactory alternative(s) is/are determined by the outranking relationship approach to integrate the degree of preference and entropy information. In addition, evaluation criteria and their weights in angel investment decision-making are investigated. 
An illustrative example of an angel investment implemented by the proposed MGDM-HFLTS and its corresponding algorithm confirms the effectiveness and practicability of the proposed method. Journal: Journal of the Operational Research Society Pages: 1490-1501 Issue: 9 Volume: 70 Year: 2019 Month: 9 X-DOI: 10.1080/01605682.2018.1495995 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1495995 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:9:p:1490-1501 Template-Type: ReDIF-Article 1.0 Author-Name: Ai-Bing Ji Author-X-Name-First: Ai-Bing Author-X-Name-Last: Ji Author-Name: Hao Chen Author-X-Name-First: Hao Author-X-Name-Last: Chen Author-Name: Yanhua Qiao Author-X-Name-First: Yanhua Author-X-Name-Last: Qiao Author-Name: Jiahong Pang Author-X-Name-First: Jiahong Author-X-Name-Last: Pang Title: Data envelopment analysis with interactive fuzzy variables Abstract: In this article, we develop a novel fuzzy data envelopment analysis (DEA) model, using the fuzzy Choquet integral as an aggregating tool, to evaluate the efficiency of decision making units (DMUs). The proposed model can be used to evaluate the efficiency of a DMU with interactive fuzzy variables (fuzzy inputs or fuzzy outputs); the classical fuzzy DEA model is a special form of this novel fuzzy DEA model. At the end of the article, we use numerical examples to illustrate the performance of the proposed model. Journal: Journal of the Operational Research Society Pages: 1502-1510 Issue: 9 Volume: 70 Year: 2019 Month: 9 X-DOI: 10.1080/01605682.2018.1495158 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1495158 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:9:p:1502-1510 Template-Type: ReDIF-Article 1.0 Author-Name: S. Tohidnia Author-X-Name-First: S. Author-X-Name-Last: Tohidnia Author-Name: G. Tohidi Author-X-Name-First: G. 
Author-X-Name-Last: Tohidi Title: Measuring productivity change in DEA-R: A ratio-based profit efficiency model Abstract: The main purpose of the present study is to evaluate the productivity change of decision making units (DMUs) over time based on ratio data envelopment analysis (DEA-R). To achieve this aim, we formulate a ratio-based profit efficiency model that is inspired by the ratio form of the profit efficiency. Also, a non-oriented DEA-R model is presented to define the DEA-R allocative efficiency of DMUs, by means of which the proposed productivity index can be decomposed into four components. Finally, a numerical example is presented to compare the results of the proposed approach with the DEA approach. Journal: Journal of the Operational Research Society Pages: 1511-1521 Issue: 9 Volume: 70 Year: 2019 Month: 9 X-DOI: 10.1080/01605682.2018.1506561 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1506561 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:9:p:1511-1521 Template-Type: ReDIF-Article 1.0 Author-Name: Laurens Cherchye Author-X-Name-First: Laurens Author-X-Name-Last: Cherchye Author-Name: Kristof De Witte Author-X-Name-First: Kristof Author-X-Name-Last: De Witte Author-Name: Sergio Perelman Author-X-Name-First: Sergio Author-X-Name-Last: Perelman Title: A unified productivity-performance approach applied to secondary schools Abstract: We introduce a novel diagnostic tool to improve the performance of public services. We propose a method to compute performance/productivity ratios, which can be applied as soon as data on production units' outcomes and resources are available. 
Assuming outcome improvement as the main objective in a public services context, these ratios have an intuitive interpretation: values below unity indicate that better outcomes can be attained through weaker resource constraints (pointing at scarcity of resources) and, conversely, values above unity indicate that better outcomes can be achieved with the given resources (pointing at unexploited production capacity). We demonstrate the practical usefulness of our methodology through an application to secondary schools, where we account for outlier behaviour and environmental effects by using a robust nonparametric estimation method. Our results indicate that in most cases schools' performance improvement is a matter of unexploited production capacity, while scarcity of resources is a lesser issue. Journal: Journal of the Operational Research Society Pages: 1522-1537 Issue: 9 Volume: 70 Year: 2019 Month: 9 X-DOI: 10.1080/01605682.2018.1489351 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1489351 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:9:p:1522-1537 Template-Type: ReDIF-Article 1.0 Author-Name: Kaj Holmberg Author-X-Name-First: Kaj Author-X-Name-Last: Holmberg Title: Formation of student groups with the help of optimisation Abstract: We study the problem of forming groups of students so that the groups are as even as possible with respect to certain aspects and group members are changed as much as possible compared to previous groups, and formulate it as a mixed integer programming problem. We find that standard software cannot solve real life sized instances, so we develop several heuristics and metaheuristics for the problem. Computational tests are made on randomly generated instances as well as real life instances. 
Some of the heuristics give good solutions in a short time, and tests on real life problems indicate that satisfactory solutions can be found within 60 seconds. Journal: Journal of the Operational Research Society Pages: 1538-1553 Issue: 9 Volume: 70 Year: 2019 Month: 9 X-DOI: 10.1080/01605682.2018.1500429 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1500429 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:9:p:1538-1553 Template-Type: ReDIF-Article 1.0 Author-Name: Guo-Sheng Liu Author-X-Name-First: Guo-Sheng Author-X-Name-Last: Liu Author-Name: Jin-Jin Li Author-X-Name-First: Jin-Jin Author-X-Name-Last: Li Author-Name: Hai-Dong Yang Author-X-Name-First: Hai-Dong Author-X-Name-Last: Yang Author-Name: George Q. Huang Author-X-Name-First: George Q. Author-X-Name-Last: Huang Title: Approximate and branch-and-bound algorithms for the parallel machine scheduling problem with a single server Abstract: In this paper, we consider the scheduling problem of minimising the total weighted job completion time when a set of jobs must be processed on m parallel machines with a single server. This problem has various applications to networks, manufacturing, logistics, etc. The shortest weighted processing time (SWPT) sequencing by Hasani et al. is (3−2/m)-approximate for general problem cases and (2−1/m)-approximate for problems subjected to regular job restrictions. At present, these findings are the best-known results available for the worst-case analyses. Furthermore, dominance properties are discussed and several rules for improving a given schedule are provided. To solve the problem, a branch-and-bound (B&B) algorithm is developed by integrating SWPT sequencing, a new lower bound, and dominance properties. A number of numerical experiments are presented to validate the performance of our algorithms and identify implications for the considered problem. 
Journal: Journal of the Operational Research Society Pages: 1554-1570 Issue: 9 Volume: 70 Year: 2019 Month: 9 X-DOI: 10.1080/01605682.2018.1500976 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1500976 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:9:p:1554-1570 Template-Type: ReDIF-Article 1.0 Author-Name: Daniel Oron Author-X-Name-First: Daniel Author-X-Name-Last: Oron Title: Batching and resource allocation decisions on an m-machine proportionate flowshop Abstract: This article considers an m-machine proportionate flowshop scheduling problem where each stage of production consists of a batching operation. Moreover, we assume that the job processing times are controllable through the allocation of a non-renewable resource. The objective consists of minimising the makespan. The scheduler’s task consists of (1) allocating jobs to batches; (2) scheduling batches on the m-machine flowshop; (3) allocating resources to batches; and (4) allocating the resources within each batch to jobs. We show that there exists an optimal solution to the problem that consists of sequencing the jobs in Λ-shape order based on their workloads. Furthermore, the jobs that are sorted in non-decreasing order of workload are scheduled in batches of equal size, whereas the remaining jobs are allocated to batches of different sizes. We present an O(n2) time algorithm based on the observation that all jobs scheduled in the first segment of the Λ-shape have workloads that are smaller than (or equal to) that of the last job in the sequence. Journal: Journal of the Operational Research Society Pages: 1571-1578 Issue: 9 Volume: 70 Year: 2019 Month: 9 X-DOI: 10.1080/01605682.2018.1495996 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1495996 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:70:y:2019:i:9:p:1571-1578 Template-Type: ReDIF-Article 1.0 Author-Name: Dimitris Andriosopoulos Author-X-Name-First: Dimitris Author-X-Name-Last: Andriosopoulos Author-Name: Michalis Doumpos Author-X-Name-First: Michalis Author-X-Name-Last: Doumpos Author-Name: Panos M. Pardalos Author-X-Name-First: Panos M. Author-X-Name-Last: Pardalos Author-Name: Constantin Zopounidis Author-X-Name-First: Constantin Author-X-Name-Last: Zopounidis Title: Computational approaches and data analytics in financial services Journal: Journal of the Operational Research Society Pages: 1579-1580 Issue: 10 Volume: 70 Year: 2019 Month: 10 X-DOI: 10.1080/01605682.2019.1649932 File-URL: http://hdl.handle.net/10.1080/01605682.2019.1649932 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:10:p:1579-1580 Template-Type: ReDIF-Article 1.0 Author-Name: Dimitris Andriosopoulos Author-X-Name-First: Dimitris Author-X-Name-Last: Andriosopoulos Author-Name: Michalis Doumpos Author-X-Name-First: Michalis Author-X-Name-Last: Doumpos Author-Name: Panos M. Pardalos Author-X-Name-First: Panos M. Author-X-Name-Last: Pardalos Author-Name: Constantin Zopounidis Author-X-Name-First: Constantin Author-X-Name-Last: Zopounidis Title: Computational approaches and data analytics in financial services: A literature review Abstract: The level of modeling sophistication in financial services has increased considerably over the years. Nowadays, the complexity of financial problems and the vast amount of data require an engineering approach based on analytical modeling tools for planning, decision making, reporting, and supervisory control. This article provides an overview of the main financial applications of computational and data analytics approaches, focusing on the coverage of the recent developments and trends. 
The overview covers different methodological tools and their uses in areas such as portfolio management, credit analysis, banking, and insurance. Journal: Journal of the Operational Research Society Pages: 1581-1599 Issue: 10 Volume: 70 Year: 2019 Month: 10 X-DOI: 10.1080/01605682.2019.1595193 File-URL: http://hdl.handle.net/10.1080/01605682.2019.1595193 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:10:p:1581-1599 Template-Type: ReDIF-Article 1.0 Author-Name: Carole Bernard Author-X-Name-First: Carole Author-X-Name-Last: Bernard Author-Name: Rob H. De Staelen Author-X-Name-First: Rob H. Author-X-Name-Last: De Staelen Author-Name: Steven Vanduffel Author-X-Name-First: Steven Author-X-Name-Last: Vanduffel Title: Optimal portfolio choice with benchmarks Abstract: We construct an algorithm that makes it possible to numerically obtain an investor’s optimal portfolio under general preferences. In particular, the objective function and risk constraints may be driven by benchmarks (reflecting state-dependent preferences). We apply the algorithm to various classic optimal portfolio problems for which explicit solutions are available and show that our numerical solutions are compatible with them. This observation allows us to conclude that the algorithm can be trusted as a viable way to deal with portfolio optimisation problems for which explicit solutions are not within reach. Journal: Journal of the Operational Research Society Pages: 1600-1621 Issue: 10 Volume: 70 Year: 2019 Month: 10 X-DOI: 10.1080/01605682.2018.1470066 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1470066 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:70:y:2019:i:10:p:1600-1621 Template-Type: ReDIF-Article 1.0 Author-Name: Leonardo Riegel Sant'Anna Author-X-Name-First: Leonardo Author-X-Name-Last: Riegel Sant'Anna Author-Name: Tiago Pascoal Filomena Author-X-Name-First: Tiago Author-X-Name-Last: Pascoal Filomena Author-Name: João Frois Caldeira Author-X-Name-First: João Author-X-Name-Last: Frois Caldeira Author-Name: Denis Borenstein Author-X-Name-First: Denis Author-X-Name-Last: Borenstein Title: Investigating the use of statistical process control charts for index tracking portfolios Abstract: In this article, our goal is to introduce a statistical process control (SPC) charts approach to monitor the rebalancing process of index tracking (IT) portfolios. SPC methods derive from statistics and engineering as tools to control production processes. We use exponentially weighted moving average (EWMA) control charts to monitor IT portfolios based on two combined charts: portfolios’ tracking error performance and portfolios’ volatility. As a result, we endogenously control the rebalancing process of the portfolios based on both their returns and their risk conditions over time. Computational tests are performed to evaluate the developed approach in comparison with the traditional fixed-period strategy, using data from the Brazilian and U.S. markets from 2005 to 2014. Cointegration and optimization methods are applied to form the portfolios. The results show that the SPC approach can be a viable alternative for portfolio rebalancing. Journal: Journal of the Operational Research Society Pages: 1622-1638 Issue: 10 Volume: 70 Year: 2019 Month: 10 X-DOI: 10.1080/01605682.2018.1495887 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1495887 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:70:y:2019:i:10:p:1622-1638 Template-Type: ReDIF-Article 1.0 Author-Name: Carla Oliveira Henriques Author-X-Name-First: Carla Oliveira Author-X-Name-Last: Henriques Author-Name: Maria Elisabete Duarte Neves Author-X-Name-First: Maria Elisabete Duarte Author-X-Name-Last: Neves Title: A multiobjective interval portfolio framework for supporting investor’s preferences under different risk assumptions Abstract: This paper is aimed at presenting a multiobjective interval portfolio framework which considers investment decisions under different risk assumptions. New surrogate problems are obtained for the mean-absolute deviation risk measure based on the concept of necessary subtraction between interval numbers. A proposal for obtaining the efficient portfolio solutions is also suggested, which allows accounting for three types of investment strategies. Indices of robustness have also been computed, which allow assessing which assets are most often selected irrespective of the investment strategy followed and regardless of the business cycle contemplated. The results illustrate the trade-off between risk and return and are also consistent with the type of strategy followed by the investor. Overall, we were able to conclude that investors who are less prone to risk might find the formulation based on the mean-absolute necessary deviation more appealing, since it allows reaching, in general, lower volatility of returns. Journal: Journal of the Operational Research Society Pages: 1639-1661 Issue: 10 Volume: 70 Year: 2019 Month: 10 X-DOI: 10.1080/01605682.2019.1571004 File-URL: http://hdl.handle.net/10.1080/01605682.2019.1571004 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:70:y:2019:i:10:p:1639-1661 Template-Type: ReDIF-Article 1.0 Author-Name: Xue Cheng Author-X-Name-First: Xue Author-X-Name-Last: Cheng Author-Name: Marina Di Giacinto Author-X-Name-First: Marina Author-X-Name-Last: Di Giacinto Author-Name: Tai-Ho Wang Author-X-Name-First: Tai-Ho Author-X-Name-Last: Wang Title: Optimal execution with dynamic risk adjustment Abstract: This article considers the problem of optimal liquidation of a position in a risky security quoted in a financial market, where the price evolution is risky, trades have an impact on price, and there is uncertainty in the filling of orders. The problem is formulated as a continuous-time stochastic optimal control problem aiming at maximising a generalised risk-adjusted profit and loss function. The expression of the risk adjustment is derived from the general theory of dynamic risk measures and is selected in the class of g-conditional risk measures. The resulting theoretical framework is nonclassical since the target function depends on backward components. We show that, under a quadratic specification of the driver of a backward stochastic differential equation, it is possible to find a closed-form solution and an explicit expression of the optimal liquidation policies. In this way, it is immediate to quantify the impact of the risk adjustment on the profit and loss and on the expression of the optimal liquidation policies. Journal: Journal of the Operational Research Society Pages: 1662-1677 Issue: 10 Volume: 70 Year: 2019 Month: 10 X-DOI: 10.1080/01605682.2019.1644143 File-URL: http://hdl.handle.net/10.1080/01605682.2019.1644143 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:10:p:1662-1677 Template-Type: ReDIF-Article 1.0 Author-Name: Junbin Chen Author-X-Name-First: Junbin Author-X-Name-Last: Chen Author-Name: Frank P. A. Coolen Author-X-Name-First: Frank P. A. 
Author-X-Name-Last: Coolen Author-Name: Tahani Coolen-Maturi Author-X-Name-First: Tahani Author-X-Name-Last: Coolen-Maturi Title: On nonparametric predictive inference for asset and European option trading in the binomial tree model Abstract: This article introduces a novel method for asset and option trading in a binomial scenario. This method uses nonparametric predictive inference (NPI), a statistical methodology within imprecise probability theory. Instead of inducing a single probability distribution from the existing observations, the imprecise method used here induces a set of probability distributions. Based on the induced imprecise probability, one could form a set of conservative trading strategies for assets and options. By integrating NPI imprecise probability and expectation with the classical financial binomial tree model, two rational decision routes for asset trading and for European option trading are suggested. The performances of these trading routes are investigated by computer simulations. The simulation results indicate that the NPI based trading routes presented in this article have good predictive properties. Journal: Journal of the Operational Research Society Pages: 1678-1691 Issue: 10 Volume: 70 Year: 2019 Month: 10 X-DOI: 10.1080/01605682.2019.1643682 File-URL: http://hdl.handle.net/10.1080/01605682.2019.1643682 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:10:p:1678-1691 Template-Type: ReDIF-Article 1.0 Author-Name: Ting He Author-X-Name-First: Ting Author-X-Name-Last: He Author-Name: Frank P. A. Coolen Author-X-Name-First: Frank P. A. Author-X-Name-Last: Coolen Author-Name: Tahani Coolen-Maturi Author-X-Name-First: Tahani Author-X-Name-Last: Coolen-Maturi Title: Nonparametric predictive inference for European option pricing based on the binomial tree model Abstract: In finance, option pricing is one of the main topics. 
A basic model for option pricing is the Binomial Tree Model, proposed by Cox, Ross, and Rubinstein in 1979 (CRR). This model assumes that the underlying asset price follows a binomial distribution with a constant upward probability, the so-called risk-neutral probability. In this article, we propose a novel method based on the binomial tree. Rather than using the risk-neutral probability, we apply Nonparametric Predictive Inference (NPI) to infer imprecise probabilities of movements, reflecting more uncertainty while learning from data. To study its performance, we price the same European options utilising both the NPI method and the CRR model and compare the results in two different scenarios, firstly where the CRR assumptions are right, and secondly where the CRR model assumptions deviate from the real market. It turns out that our NPI method, as expected, cannot perform better than the CRR in the first scenario, but can do better in the second scenario. Journal: Journal of the Operational Research Society Pages: 1692-1708 Issue: 10 Volume: 70 Year: 2019 Month: 10 X-DOI: 10.1080/01605682.2018.1495997 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1495997 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:10:p:1692-1708 Template-Type: ReDIF-Article 1.0 Author-Name: C. J. Adcock Author-X-Name-First: C. J. Author-X-Name-Last: Adcock Author-Name: C. Ye Author-X-Name-First: C. Author-X-Name-Last: Ye Author-Name: S. Yin Author-X-Name-First: S. Author-X-Name-Last: Yin Author-Name: D. Zhang Author-X-Name-First: D. Author-X-Name-Last: Zhang Title: Price discovery and volatility spillover with price limits in Chinese A-shares market: A truncated GARCH approach Abstract: The use of price limits by a stock exchange means that the distribution of returns is truncated. 
By considering a GARCH model in conjunction with a truncated distribution for the residuals, this study investigates whether price limits have an effect on price behaviour and volatility of Chinese A-shares. The analysis has been applied to A-shares traded on the Shanghai Stock Exchange (SSE) and the Shenzhen Stock Exchange (SZSE) during the period from 2004 to 2018. The results suggest the Truncated-GARCH model outperforms a conventional model and offers substantially different insights into the effect of price limits. The delayed price discovery hypothesis is not rejected for either exchange after upper price limit hits. Limited evidence supports the volatility spillover hypothesis, as just over 5% of A-shares experience an increase of volatility after upper price limit hits on both exchanges. No evidence of reduction of volatility after price limit hits is shown in the research. Journal: Journal of the Operational Research Society Pages: 1709-1719 Issue: 10 Volume: 70 Year: 2019 Month: 10 X-DOI: 10.1080/01605682.2018.1542973 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1542973 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:10:p:1709-1719 Template-Type: ReDIF-Article 1.0 Author-Name: Loukia Meligkotsidou Author-X-Name-First: Loukia Author-X-Name-Last: Meligkotsidou Author-Name: Ekaterini Panopoulou Author-X-Name-First: Ekaterini Author-X-Name-Last: Panopoulou Author-Name: Ioannis D. Vrontos Author-X-Name-First: Ioannis D. Author-X-Name-Last: Vrontos Author-Name: Spyridon D. Vrontos Author-X-Name-First: Spyridon D. Author-X-Name-Last: Vrontos Title: Quantile forecast combinations in realised volatility prediction Abstract: This paper tests whether it is possible to improve point, quantile, and density forecasts of realised volatility by conditioning on a set of predictive variables. 
We employ quantile autoregressive models augmented with macroeconomic and financial variables. Complete subset combinations of both linear and quantile forecasts enable us to efficiently summarise the information content in the candidate predictors. Our findings suggest that no single variable is able to provide more information for the evolution of the volatility distribution beyond that contained in its own past. The best performing variable is the return on the stock market followed by the inflation rate. Our complete subset approach achieves superior point, quantile, and density predictive performance relative to the univariate models and the autoregressive benchmark. Journal: Journal of the Operational Research Society Pages: 1720-1733 Issue: 10 Volume: 70 Year: 2019 Month: 10 X-DOI: 10.1080/01605682.2018.1489354 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1489354 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:10:p:1720-1733 Template-Type: ReDIF-Article 1.0 Author-Name: Juan Carlos Matallín-Sáez Author-X-Name-First: Juan Carlos Author-X-Name-Last: Matallín-Sáez Author-Name: Amparo Soler-Domínguez Author-X-Name-First: Amparo Author-X-Name-Last: Soler-Domínguez Author-Name: Emili Tortosa-Ausina Author-X-Name-First: Emili Author-X-Name-Last: Tortosa-Ausina Title: Does active management add value? New evidence from a quantile regression approach Abstract: While it has long been recognised that active management is an important issue in the area of mutual fund performance, little consensus has been reached about the value managers’ abilities can add. This study examines funds’ and managers’ characteristics in an attempt to understand their influence on mutual fund efficiency. 
We explore these issues in a two-stage approach, considering partial frontier estimators (order-m, order-α) to assess performance in the first stage, and quantile regression in the second stage to isolate the determinants of efficiency. This combination of methodologies has barely been considered to date in the field of operations research. Our findings are of interest to both academics and practitioners as they shed light on the differences among funds as well as among managers. Our analysis provides some arguments to guide fund selection and points to some managerial features investors might consider taking into account. In addition, some of the differences in performance among funds are rather intricate because both the magnitude of the estimated regression coefficients and their significance vary depending on the quantile of the distribution of fund performance, suggesting that some relevant trends might be concealed by conditional-mean models such as Tobit or OLS. Journal: Journal of the Operational Research Society Pages: 1734-1751 Issue: 10 Volume: 70 Year: 2019 Month: 10 X-DOI: 10.1080/01605682.2019.1612549 File-URL: http://hdl.handle.net/10.1080/01605682.2019.1612549 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:10:p:1734-1751 Template-Type: ReDIF-Article 1.0 Author-Name: Linh Phuong Catherine Do Author-X-Name-First: Linh Phuong Catherine Author-X-Name-Last: Do Author-Name: Štefan Lyócsa Author-X-Name-First: Štefan Author-X-Name-Last: Lyócsa Author-Name: Peter Molnár Author-X-Name-First: Peter Author-X-Name-Last: Molnár Title: Impact of wind and solar production on electricity prices: Quantile regression approach Abstract: We study the impact of fuel prices, emission allowances, demand, past prices, wind and solar production on hourly day-ahead electricity prices in Germany over the period from January 2015 until June 2018. 
Working within a linear regression, ARX-EGARCH and quantile regression framework, we compare how different pricing factors influence the mean and quantiles of the electricity prices. Contrary to the existing literature, we find that short-term price fluctuations on the fuel markets and emission allowances have little effect on the electricity prices. We also find that day-of-the-week as well as monthly effects have a significant impact on the electricity prices in Germany and should not be ignored in model specifications. Three main factors are found to drive extreme prices: price persistence, expected demand and expected wind production. Our findings contribute to the understanding of extreme price movements, which can be used in pricing models and hedging strategies. Journal: Journal of the Operational Research Society Pages: 1752-1768 Issue: 10 Volume: 70 Year: 2019 Month: 10 X-DOI: 10.1080/01605682.2019.1634783 File-URL: http://hdl.handle.net/10.1080/01605682.2019.1634783 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:10:p:1752-1768 Template-Type: ReDIF-Article 1.0 Author-Name: Antonios K. Alexandridis Author-X-Name-First: Antonios K. Author-X-Name-Last: Alexandridis Author-Name: Dimitrios Karlis Author-X-Name-First: Dimitrios Author-X-Name-Last: Karlis Author-Name: Dimitrios Papastamos Author-X-Name-First: Dimitrios Author-X-Name-Last: Papastamos Author-Name: Dimitrios Andritsos Author-X-Name-First: Dimitrios Author-X-Name-Last: Andritsos Title: Real Estate valuation and forecasting in non-homogeneous markets: A case study in Greece during the financial crisis Abstract: In this paper, we develop an automatic valuation model for property valuation using a large database of historical prices from Greece. The Greek property market is an inefficient, non-homogeneous market, still in its infancy and governed by a lack of information. 
As a result, modelling the Greek real estate market is a very interesting and challenging problem. The available data cover a wide range of properties across time and include the financial crisis period in Greece, which led to tremendous changes in the dynamics of the real estate market. We formulate and compare linear and non-linear models based on regression, hedonic equations and artificial neural networks. The forecasting ability of each method is evaluated out-of-sample. Special care is given to measuring the success of the forecasts and to identifying the property characteristics that lead to large forecasting errors. Finally, by examining the strengths and the performance of each method, we apply a combined forecasting rule to improve forecasting accuracy. Our results indicate that the proposed methodology constitutes an accurate tool for property valuation in a non-homogeneous, newly developed market. Journal: Journal of the Operational Research Society Pages: 1769-1783 Issue: 10 Volume: 70 Year: 2019 Month: 10 X-DOI: 10.1080/01605682.2018.1468864 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1468864 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:10:p:1769-1783 Template-Type: ReDIF-Article 1.0 Author-Name: Silvia Angilella Author-X-Name-First: Silvia Author-X-Name-Last: Angilella Author-Name: Sebastiano Mazzù Author-X-Name-First: Sebastiano Author-X-Name-Last: Mazzù Title: A credit risk model with an automatic override for innovative small and medium-sized enterprises Abstract: The goal of this paper is to build an operational model for assessing the creditworthiness of innovative small and medium-sized enterprises. To this purpose, a novel multicriteria methodology is implemented through a simulation approach within the context of the ELECTRE TRI-based framework. 
The model is applied to a database, retrieved from AIDA, involving a sample of Italian innovative small and medium-sized enterprises. The main findings are twofold. From a theoretical point of view, the proposed credit rating model makes it possible to incorporate an override in the credit class, as required by Basel II in all the cases in which the availability of data is insufficient to describe the risk factors or a judgmental rating model is advised, as well as in innovative small and medium-sized enterprises. From an operational point of view, this methodology could be a useful tool for banks’ innovative lending processes, given the lack of a credit model in this context. Journal: Journal of the Operational Research Society Pages: 1784-1800 Issue: 10 Volume: 70 Year: 2019 Month: 10 X-DOI: 10.1080/01605682.2017.1411313 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1411313 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:10:p:1784-1800 Template-Type: ReDIF-Article 1.0 Author-Name: Fernando A. F. Ferreira Author-X-Name-First: Fernando A. F. Author-X-Name-Last: Ferreira Author-Name: José P. Esperança Author-X-Name-First: José P. Author-X-Name-Last: Esperança Author-Name: Maria A. S. Xavier Author-X-Name-First: Maria A. S. Author-X-Name-Last: Xavier Author-Name: Renato L. Costa Author-X-Name-First: Renato L. Author-X-Name-Last: Costa Author-Name: Blanca Pérez-Gladish Author-X-Name-First: Blanca Author-X-Name-Last: Pérez-Gladish Title: A socio-technical approach to the evaluation of social credit applications Abstract: Social credit is a type of micro-credit aiming at fighting poverty and social inequality. Although interest in this type of credit has increased significantly over time, namely after Muhammad Yunus was awarded the 2006 Nobel Peace Prize, there are few studies that address the assessment of social credit applications. 
This is an issue to be taken seriously primarily because the objectives of social credit differ from those of other types of credit, meaning that social credit applications should not be evaluated using the same credit-scoring systems. Assuming the baseline principles of the multiple criteria decision analysis (MCDA) approach, this study combines cognitive mapping and measuring attractiveness by a categorical-based evaluation technique (MACBETH) to develop an evaluation system for social credit applications. The results show that the socio-technical approach followed in this study provides value for the evaluation processes of this type of credit application as a result of the privileged contact established with a panel of credit analysts. The advantages, limitations and managerial implications of our study are also discussed. Journal: Journal of the Operational Research Society Pages: 1801-1816 Issue: 10 Volume: 70 Year: 2019 Month: 10 X-DOI: 10.1080/01605682.2017.1415650 File-URL: http://hdl.handle.net/10.1080/01605682.2017.1415650 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:10:p:1801-1816 Template-Type: ReDIF-Article 1.0 Author-Name: Apostolos G. Christopoulos Author-X-Name-First: Apostolos G. Author-X-Name-Last: Christopoulos Author-Name: Ioannis G. Dokas Author-X-Name-First: Ioannis G. Author-X-Name-Last: Dokas Author-Name: Petros Kalantonis Author-X-Name-First: Petros Author-X-Name-Last: Kalantonis Author-Name: Theodora Koukkou Author-X-Name-First: Theodora Author-X-Name-Last: Koukkou Title: Investigation of financial distress with a dynamic logit based on the linkage between liquidity and profitability status of listed firms Abstract: The scope of this paper is to investigate the predictability of financial distress, adopting a survival model based on dynamic logit for a sample of NYSE-listed firms. 
The main assumption of this study is that liquidity and profitability constitute the key criteria for the configuration of the financial distress status of a firm. Specifically, two independent models are applied for the period after the financial crisis of 2007–2008. The first model is constructed on the pillar of liquidity, and the classification into the subgroup of distressed firms is based on specific criteria such as the current ratio, Current Liabilities / Total Liabilities, Equity / Liabilities, and Total Debt / Total Assets. The second model is based on the pillar of profitability, where the specific criteria for the classification from the primary group into the subgroup of distressed firms are ROE < ROA and Net Profit Margin ≤ 0. Finally, a third model is established as a result of the combination of the two previous models. A further purpose of this work is to ascertain whether during the period of crisis there has been a differentiation in the policy of listed companies, namely whether their efforts have been shifted to addressing liquidity problems at the expense of profitability. Journal: Journal of the Operational Research Society Pages: 1817-1829 Issue: 10 Volume: 70 Year: 2019 Month: 10 X-DOI: 10.1080/01605682.2018.1460017 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1460017 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. Handle: RePEc:taf:tjorxx:v:70:y:2019:i:10:p:1817-1829 Template-Type: ReDIF-Article 1.0 Author-Name: Jun Pei Author-X-Name-First: Jun Author-X-Name-Last: Pei Author-Name: Xingming Wang Author-X-Name-First: Xingming Author-X-Name-Last: Wang Author-Name: Wenjuan Fan Author-X-Name-First: Wenjuan Author-X-Name-Last: Fan Author-Name: Panos M. Pardalos Author-X-Name-First: Panos M. 
Author-X-Name-Last: Pardalos Author-Name: Xinbao Liu Author-X-Name-First: Xinbao Author-X-Name-Last: Liu Title: Scheduling step-deteriorating jobs on bounded parallel-batching machines to maximise the total net revenue Abstract: This paper addresses a parallel-batching scheduling problem considering processing cost and revenue, with the objective of maximising the total net revenue. Specifically, the actual processing time of a job is assumed to be a step function of its starting time and the common due date. This problem involves assigning jobs to different machines, batching jobs, and sequencing batches on each machine. Some key structural properties are proposed for the scheduling problem, based on which an optimal scheduling scheme is developed for any given machine. Then, an effective hybrid VNS–IRG algorithm which combines Variable Neighborhood Search (VNS) and Iterated Reference Greedy algorithm (IRG) is proposed to solve this problem. Finally, the effectiveness and stability of the proposed VNS–IRG are demonstrated and compared with VNS, IRG, and Particle Swarm Optimization through computational experiments. Journal: Journal of the Operational Research Society Pages: 1830-1847 Issue: 10 Volume: 70 Year: 2019 Month: 10 X-DOI: 10.1080/01605682.2018.1464428 File-URL: http://hdl.handle.net/10.1080/01605682.2018.1464428 File-Format: text/html File-Restriction: Access to full text is restricted to subscribers. 
Handle: RePEc:taf:tjorxx:v:70:y:2019:i:10:p:1830-1847 Template-Type: ReDIF-Article 1.0 Author-Name: Yan Shi Author-X-Name-First: Yan Author-X-Name-Last: Shi Author-Name: Zhiyong Zhang Author-X-Name-First: Zhiyong Author-X-Name-Last: Zhang Author-Name: Fangming Zhou Author-X-Name-First: Fangming Author-X-Name-Last: Zhou Author-Name: Yongqiang Shi Author-X-Name-First: Yongqiang Author-X-Name-Last: Shi Title: Optimal ordering policies for a single deteriorating item with ramp-type demand rate under permissible delay in payments Abstract: In this paper, we assume that the supplier offers the retailer a credit period (i.e., M) and the retailer purchases a single deteriorating item from the supplier to satisfy the market demand, and shortages are not allowed. With the consideration of trade credit, we develop an inventory model for a single deteriorating item with ramp-type demand rate. Unlike the previous research, on the one hand, we consider both cases of μ ≥ T and μ