37th Annual ORSNZ Conference
29 to 30 November 2002
University of Auckland, Auckland, NZ
Below are the abstracts submitted for the conference so far. They are sorted by the last name of the first author. Check them out — the topics are wide-ranging and the contributors very international. So send in the registration form and attend!
This paper presents a multi-period inventory lot-sizing scenario with multiple products and multiple suppliers. We consider a situation where the demand for multiple discrete products is known over a planning horizon. Each of these products can be sourced from a set of approved suppliers, with a supplier-dependent transaction cost applying in each period in which an order is placed with a supplier. A product-dependent holding cost applies for each unit of a product carried in inventory across a period of the planning horizon. The decision maker needs to decide which products to order, in what quantities, from which suppliers, in which periods. An enumerative search algorithm and two heuristics are presented to address the problem.
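The abstract does not specify the enumerative search; as a rough illustration, the single-product case can be enumerated directly under a zero-inventory-ordering simplification (each order covers demand exactly up to the next order). All data below — demands, supplier transaction costs, holding cost — are hypothetical.

```python
from itertools import product

def lot_size(demand, trans_cost, hold_cost):
    """Enumerate order plans for a single product (hypothetical data).

    demand[t]     : demand in period t
    trans_cost[s] : transaction cost for an order placed with supplier s
    hold_cost     : holding cost per unit carried across one period
    choice[t] is None (no order) or the index of the supplier ordered
    from in period t; each order covers demand exactly up to the next
    order period (a zero-inventory-ordering simplification).
    Returns (best_cost, best_plan).
    """
    T = len(demand)
    suppliers = range(len(trans_cost))
    best_cost, best_plan = float("inf"), None
    for choice in product([None, *suppliers], repeat=T):
        cost, carried, feasible = 0.0, 0, True
        for t in range(T):
            if choice[t] is not None:
                cost += trans_cost[choice[t]]
                # order enough to reach the next order period
                nxt = next((u for u in range(t + 1, T)
                            if choice[u] is not None), T)
                carried += sum(demand[t:nxt])
            if carried < demand[t]:
                feasible = False
                break
            carried -= demand[t]
            cost += hold_cost * carried
        if feasible and cost < best_cost:
            best_cost, best_plan = cost, choice
    return best_cost, best_plan
```

Enumeration over `(|S|+1)^T` plans is exponential, which is exactly why the paper also proposes heuristics.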
In recent years, multileaf collimators have allowed for more complex radiation therapy plans with the ability to provide better treatment for cancer patients. The multileaf beam head is discretised into channels, each with a pair of leaves, and can be adjusted into different configurations to shape the distribution of the radiation. This project investigates the configuration of the multileaf collimator which minimises the total treatment time. An integer variable is used for each leaf to describe its position; together, these integer variables describe the intensity profile. Integer programming is used to model the problem; linear relaxation and column generation are then used to solve it.
A product warranty is an agreement offered by a producer to a consumer to repair or replace a faulty item, or to partially or fully reimburse the consumer in the event of a failure. Despite the fact that warranties are so commonly used, the accurate pricing of warranties in many situations remains an unsolved problem. This may seem surprising, since the fulfillment of warranty claims may cost companies large amounts of money. Underestimating true warranty costs will result in losses for a company; overestimating them will lead to uncompetitive product prices and, as a result, reduced sales. In this paper the main focus is on modeling repairs for the duration of the warranty coverage. For each warranty claim a decision has to be made as to what type of repair should be performed. The degree of repair affects both the duration of the repair and its associated cost. The virtual age of the product is modeled by a stochastic process. Under two forms of reimbursement, free replacement and linear pro-rata, the expected warranty costs over the warranty period and per life cycle of the product are evaluated. Examples are provided to illustrate the ideas.
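The abstract does not give the virtual-age process; as an illustrative stand-in (not the paper's model), the following Monte Carlo sketch uses a Kijima-type-II virtual-age update with a unit-scale Weibull failure law and a flat per-repair cost under free replacement. All parameter names and values are hypothetical.

```python
import math
import random

def expected_warranty_cost(W, beta, delta, cost_per_repair,
                           n_sims=20000, seed=1):
    """Monte Carlo estimate of the expected free-replacement warranty cost.

    Failures follow a Weibull hazard (shape beta, scale 1); each repair
    applies a Kijima-type-II update v <- delta * (v + gap) to the
    virtual age v (delta = 1 is minimal repair, delta = 0 is
    good-as-new), and every failure inside the warranty window [0, W]
    costs cost_per_repair.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        t = v = 0.0
        while True:
            u = rng.random()
            # time to next failure given current virtual age v,
            # sampled from the conditional Weibull survival function
            gap = (v ** beta - math.log(u)) ** (1.0 / beta) - v
            t += gap
            if t > W:
                break
            total += cost_per_repair
            v = delta * (v + gap)
    return total / n_sims
```

Varying `delta` then shows the cost trade-off between cheap minimal repairs and expensive better-than-minimal repairs that slow future failures.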
In the inventory routing problem, a distributor must choose the size and frequency of deliveries to each of its customers and design vehicle routes so as to avoid stockouts over a given planning horizon. We propose a tabu search heuristic for the deterministic version of this problem. Computational results are reported on randomly generated test problems.
An important part of any airline scheduling problem is "on the day" rescheduling. This occurs when the scheduled roster cannot be operated, perhaps because of mechanical problems on an aircraft, crew sickness or bad weather. To allow for disruptions to the scheduled roster, some crew members are designated as "on call" at the rostering stage. These crew members are not rostered any work, but are required to report to the airport at short notice in case of a disruption. This paper looks at the selection of a crew member from among the "on call" crew for each piece of work for which a crew member is required. This selection is made in order to maximise the number of crew that remain "on call" in the future.
This paper examines the nature of decisions made by senior executives in both private and public sector organisations. We focus on the inherent structure in emergent decision problems, and on how decision problems come about and arrive 'on the desk' of senior executives. Given the dearth of literature on decision problem structuring (most research deals with the processing of an already structured decision problem), a case study approach has been used, comprising sixteen in-depth interviews with senior executives, each discussing one specific decision problem, with particular emphasis on the manner in which the executive becomes aware of a decision problem and on the general nature of the problem. Our research shows that decision problems arrive in a variety of ways, are typically unstructured, and that executives do little work in structuring them. The nature of decision awareness was captured using three dimensions: top-down/bottom-up, foreseen/unforeseen and reactive/proactive. Implications for prescriptive decision-making models will be discussed.
A private surgery has a four-weekly rotating roster of clinicians for its theatre and endoscopy sessions. Currently the roster is developed by hand; it tries to balance the availability of the clinicians with the needs and costs of the surgery, particularly with regard to the number of patients staying overnight. A decision support tool was developed to aid the surgery in developing rosters. The tool needed to evaluate and provide information about particular rosters, generate new rosters that met various criteria, and manage the relevant data for these processes. Various measures were developed to evaluate different aspects of rosters, and heuristics were developed to generate new rosters and to improve on current rosters. This paper describes the decision support tool developed.
Over the last two decades industries have benefited significantly from research into the design of efficient rostering systems. Following the success of optimised rostering tools in the airline industry, it was soon recognised that similar technology could be implemented to gain a competitive advantage in other industries. Development in these areas followed a similar pattern to the airline industry, in that academic research paved the way for commercial viability. However, the complex nature of rostering problems meant that the rostering tools were specifically tuned for a particular industry or job. This means that software solutions based on optimal strategies are usually limited to handling problems that are very closely related to each other. In an attempt to maintain generality, developers appear to have reverted to heuristic-based solutions. Column generation is of particular interest to us, not only because it is capable of achieving optimal solutions, but also because it separates the characteristics that define a particular rostering problem from the optimiser. Hence if we can embed generality at the column generation level, we can achieve overall generality.
Norske Skog Australasia has been utilising a powerful tactical supply chain optimisation model, known as Pivot, for a number of years now. Pivot has been developed with the AMPL mathematical programming language. Until recently, users of Pivot relied on spreadsheets for constructing input data files and creating reports. A menu-driven interface has now been developed for Pivot using a Microsoft Access database. Input data are extracted directly from the database, and solutions are delivered to it each time the model is solved. Users are able to easily configure Pivot to suit their particular needs. In this paper we describe the design of the database and demonstrate the flexibility that its development has provided to the supply chain planning team.
Norske Skog owns and operates a newsprint mill at Kawerau in New Zealand's Bay of Plenty. Energy comprises about one third of variable costs with wholesale electricity purchases in the mechanical pulp mill comprising the bulk of this. The power hungry mechanical pulp mill is required to meet the pulp demand from the paper machines at all times. A large mixed integer optimisation model has been developed to provide advice regarding production schedules. The model utilises pulp storage tanks so that production in the mechanical pulp mill can be scheduled to avoid incurring high wholesale electricity prices, whilst still meeting paper machine demand. Costs associated with load shedding and rules encapsulating pulp quality requirements are important considerations. This paper describes the formulation of a mixed integer optimisation model and a second formulation using Lagrangian relaxation.
All educational institutions need to timetable the various teaching and related activities that they undertake. Many approaches have been proposed for improving solution procedures to a wide variety of timetabling problems. Our practical experience of timetabling in different universities leads us to the conclusion that automatic timetabling on a university-wide basis is unlikely to produce acceptable results, and that a more effective approach is to enhance the abilities of experienced timetablers. We describe our practical experience with one particular approach, the development of a course timetabling decision support system. We report on the successful implementation of the system by the Waikato Management School. The system is written in Microsoft Access, to run on a personal computer, and provides an effective and user-friendly addition to the university timetabler's toolkit.
The MANA model has been developed at the Defence Technology Agency to assist in performing operational research for the New Zealand Defence Force. It has been applied over the past few years primarily to provide assistance to the Army in planning their operations. Recently the use of this model has been extended to help answer operational questions for other branches of the military. Here we report on a particular application of MANA to the determination of optimal search strategies and sensor requirements for the interception of a target vessel in New Zealand waters.
Decision makers perceive the decision-making processes for solving complex spatial problems as unsatisfactory and lacking in generality. Current Spatial Decision Support Systems (SDSS) fulfil their narrow objectives, but fail to address many of the requirements for effective spatial problem solving, as they are inflexible and often domain-specific. This research attempts to overcome problems identified in the fields of spatial decision-making and SDSS. It synthesises ideas, frameworks and architectures from Geographic Information Systems, Decision Support Systems and SDSS. Concepts from spatial modelling, model and scenario management, knowledge management and Multi-Criteria Decision-Making methodology are explored and leveraged in the implementation of a Flexible Spatial Decision Support System (FSDSS) using object-oriented concepts and technologies. We first proposed a generic spatial decision-making process, and then developed a domain-independent FSDSS framework and architecture to support this process. We implemented a prototypical FSDSS that acts as a proof of concept for the spatial decision-making process, framework and architecture. The proposed process and the implemented FSDSS are evaluated through five scenarios across spatial decision problem domains, including location, allocation, routing and layout. The evaluation results indicate that the spatial decision-making process is valid and that the FSDSS framework and architecture are valuable for SDSS design.
The US gas supply system is critically dependent on gas stored in various types of reservoirs around the country to meet winter demand peaks. Gas for storage can be purchased in a variety of ways - day ahead, for the balance of the current month, and by NYMEX hedge contracts. Traders managing storage sites can earn additional revenue by taking advantage of price volatility. Trading decisions must be made quickly as arbitrage opportunities quickly disappear. A model is described which was written to assist traders, taking into account the various transaction costs, reservoir operating rules, taxes, etc. Key requirements were rapid development and the ability to make changes at short notice. The application uses a spreadsheet front end, with a model built in a modeling language which is solved using the Xpress-MP MIP solver.
Inverse Planning has the ability to dramatically improve treatment outcomes for patients undergoing radiation therapy. One of the major challenges in Inverse Planning is the development of effective tools to support the planning process. Not only must a large amount of patient data be collected and analysed, but a significant number of output variables can be changed to optimise treatment outcomes. These variables include irradiation directions, intensity profiles and beam parameters. This project explores the use of optimisation in radiotherapy planning, with a view towards developing tools that will specifically optimise irradiation directions. This problem has been solved with four different models, each benchmarked to provide a basis for comparison.
In this article we study how an ERP (Enterprise Resource Planning) project is being implemented by the Chengdu Enwei Group, a private enterprise in China. Drawing on the implementation experience investigated at Enwei and 50 other ERP cases, we suggest how Chinese enterprises can avoid failure in ERP project implementation.
Most businesses in Australasia face great difficulty in planning for future requirements due to often-volatile fluctuations in currency and commodity markets. New analytical techniques based on “fractal analysis” suggest that market data exhibit temporal correlations (i.e. volatile fluctuations tend to occur around the same time, punctuating periods of relative stability) and fat-tailed probability distributions (i.e. more extreme events than would be expected from a normal “bell curve” distribution). This fractal nature appears to be a near-universal property of complex systems, including financial and commodity markets, weather and population dynamics. By contrast, traditional techniques based on linear models fail to cope correctly with volatility, and may fail badly during volatile periods. This raises the possibility that some companies have been making policy and optimisation decisions, based on traditional methods, that may be very harmful. The clustered volatility and extreme events displayed by markets suggest adopting strategies which are robust, rather than optimal for some “average” case. This paper discusses how fractal methods can correctly describe complex and volatile systems, and how this allows the use of methods such as “genetic algorithms” to produce robust strategies.
This thesis details a dynamic programming approach for solving a network covering problem. The network is based on fibre optic designs used by TelstraSaturn to provide Central Business District (CBD) customers with a range of communication services. The network is generated by software called FIDO, and the dynamic program is compared with the bin packing heuristic employed by FIDO to find a solution covering the network. The results show that the dynamic program gives significantly improved solutions for a range of networks and costs. Trade-offs between optimality and solution speed are also discussed.
The main goals of radiotherapy treatment, those of achieving a high dose in the tumour and low dose in the surrounding healthy organs, are contradictory. This observation motivates the idea to use a multi-criteria approach in radiation therapy planning. In this model the deviations from desired doses in the tumour and each healthy organ are considered as separate objectives to be minimised. Because the ideal plan, where desired ideal dose limits are respected, will most often be unachievable, the aim of the multi-criteria formulation is to build a set of plans that are a good representation of the whole set of efficient treatment plans. A treatment plan is efficient if results for one organ can only be improved by worsening those for another. With this representative set and an appropriate navigational tool the oncologist can search for an appropriate plan for the individual patient through a database of pre-calculated plans. This search can be guided by patient specific criteria and does not involve knowledge of the physical parameters involved. The appropriate representation of possible solutions will be focused on the area of most interest, where comparable deviations from ideal dose limits are achieved, rather than satisfying one dose bound and having another far from optimal. Contained in the set of solutions is a designated starting point for the search, ideally being a balanced solution where the maximum deviation is minimised and all deviations are of a similar value. This will minimise the expected time for an appropriate plan to be found.
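The balanced starting point described above can be written as a standard min-max programme. Here $x$ ranges over the feasible treatment plans $X$, $d_k(x)$ denotes the deviation of structure $k$'s dose from its ideal bound, and the weights $w_k$ are hypothetical scaling factors:

```latex
\begin{align*}
\min_{x,\,z}\quad & z\\
\text{s.t.}\quad  & w_k\, d_k(x) \le z, \qquad k = 1,\dots,K,\\
                  & x \in X.
\end{align*}
```

Minimising the worst weighted deviation $z$ yields a solution in which all deviations are of a similar (scaled) value, which is exactly the balanced plan proposed as the search's starting point.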
When a forest is to be harvested, a restriction is often imposed on the maximum area of a clearfell. This generally causes intractable complications in any deterministic optimisation process: the problem becomes a very large and complicated integer programme, a formulation that is needed since the adjacency constraints must be site-specific. A strategy is advanced by which certain nuclear sets of blocks can be determined. Once these are found, a vastly reduced number of adjacency constraints is required. This leads to a comparatively speedy algorithm which permits deterministic optimisation of the forest harvest schedule. This talk will mainly discuss ways of defining and finding these nuclear sets.
The literature of statistics and social choice contains descriptions of many types of aggregation paradoxes, for example Anscombe's paradox (Wagner 1984) and Ostrogorski's paradox (Kelly 1989). The paradox analysed here is an aggregation paradox that occurs in a specific setting: multiple elections in which voters may not know the results of one election before they vote in another (Brams 1993). In a statistical sense, the results of one vote are independent of the results of another. The analysis of the paradox may be applicable to referenda containing several propositions to vote for or against, or to sequential votes in a legislature. The set of the winners' stances in each of the individual elections is called the winning combination. In some cases it is possible that nobody votes for the winning combination, yet it is still the winning one. This phenomenon is called the paradox of multiple elections (Brams 1993). In this paper the probability of occurrence of the paradox of multiple elections is evaluated and computed, and a general hypothesis about the tendency towards paradoxical results is formulated.
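The paper's computation is not reproduced here, but the paradox is easy to exhibit by simulation under an illustrative impartial-culture assumption (each voter votes yes/no uniformly at random on each question), which is a simplification of, not identical to, Brams' setting:

```python
import random

def paradox_probability(n_voters, n_issues, trials=2000, seed=7):
    """Estimate the chance that nobody voted for the winning
    combination in simultaneous binary elections. Voters are i.i.d.
    uniform over yes/no on each issue; keep n_voters odd to avoid ties.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        ballots = [tuple(rng.randint(0, 1) for _ in range(n_issues))
                   for _ in range(n_voters)]
        # issue-by-issue majority gives the winning combination
        winner = tuple(int(2 * sum(b[i] for b in ballots) > n_voters)
                       for i in range(n_issues))
        if winner not in ballots:
            hits += 1
    return hits / trials
```

With one issue the paradox is impossible (the majority side always contains a voter who cast it), while with few voters and many issues the winning combination is very likely to match nobody's ballot.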
During aircraft flight-testing, measurands such as speed, altitude, mechanical stress and pressure are sampled from a group of sensors mounted on the aircraft and transmitted to a ground receiving station. Prior to this transmission, measured data are multiplexed into a data structure known as a Data Cycle Map (DCM), where the data are packed into so-called frames of fixed size, each one identified by a sync-pattern and a subframe id. The process of designing a DCM prior to flight-testing is complex and time consuming, and may take several people several weeks. In the past, the problem of designing a DCM has been modelled using a set-covering integer programming formulation. In this approach the measurands (the parameters) are allocated to the map and the DCM design is optimised in terms of the amount of wasted space on the map. In an optimal solution, this wasted space is minimised and is just sufficient to ensure that measurand periodicity is satisfied. This paper develops the set-partitioning mathematical model further, using a pairwise view representation of measurands and then applying reduction and lifting techniques to reduce the size of the problem. The aim is then to find a way of packing the measurands into the DCM; that is, to find a feasible solution which satisfies measurand periodicity (i.e. no overlaps). Once the approach is implemented, the results will be investigated and compared with the past research done by David Panton.
This paper presents two applications of nonlinear programming to risk minimisation in finance. The first addresses the issue of hedging using either options or futures, the proportions of which can be optimised with a nonlinear programming model. The second application is the use of nonlinear programming to minimise risk in asset allocation, with the additional capability of artificial intelligence. The model focuses on achieving a target level of return while minimising downside risk. The artificial intelligence is used to indicate when derivatives, in particular put options on the physical asset, might be used to cushion downside movement.
We describe a system developed in the computer unit of Sambalpur University. The software has not only simplified the growing complexities of the University Examination System and enabled the timely publication of examination results, but has also brought transparency to the system, saved time, delivered error-free results and provided vital information for decision making.
We consider hydro-electric electricity generators operating in an electricity pool market in which each generator offers a supply curve and is dispatched by the independent system operator to meet demand at least cost. Each hydro-electric generator with reservoir storage seeks a supply curve that maximizes their expected profit in the next trading period while taking into account any foregone profits they might have earned by delaying release of the water. The opportunity cost of using water is computed using a dynamic programming model with a long-term horizon. We give a theoretical framework for this problem that enables water value surfaces to be computed with little computational effort.
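The paper's stochastic long-horizon model is not reproduced here; the following deterministic single-reservoir sketch shows only the basic backward recursion from which marginal water values can be read off. Prices, inflows and capacities are hypothetical.

```python
def water_values(prices, inflow, s_max, u_max):
    """Backward dynamic programme for a single reservoir (a
    deterministic sketch of the idea, not the paper's model).

    prices[t] : electricity price in trading stage t
    inflow    : inflow per stage, in storage units
    s_max     : storage capacity; u_max : maximum release per stage
    Returns V, where V[t][s] is the value of entering stage t with
    storage s; the difference V[t][s] - V[t][s-1] approximates the
    marginal water value, i.e. the opportunity cost of releasing water.
    """
    T = len(prices)
    V = [[0.0] * (s_max + 1) for _ in range(T + 1)]  # V[T] = 0: no end value
    for t in range(T - 1, -1, -1):
        for s in range(s_max + 1):
            avail = s + inflow
            best = 0.0
            for u in range(min(avail, u_max) + 1):
                nxt = min(avail - u, s_max)   # storage above s_max spills
                best = max(best, prices[t] * u + V[t + 1][nxt])
            V[t][s] = best
    return V
```

In the stochastic version, the inner maximisation is replaced by an expectation over inflow and price scenarios; the theoretical framework in the paper concerns computing such value surfaces cheaply.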
Mamer & McBride recently introduced a new decomposition algorithm for linear programming, which they call decomposition-based pricing. Our paper applies decomposition-based pricing to integer programs, and we show that for integer programs it produces tighter bounds than other common decomposition algorithms. We use the classic one-dimensional cutting stock problem as an example. The subproblem is the same as that in the classic Gilmore-Gomory algorithm, while the master problem corresponds to the cutting stock model developed by Dyckhoff. The new algorithm suggests that the typical formulation of the decomposition master problem can be improved.
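The Gilmore-Gomory subproblem mentioned above is an unbounded knapsack over cutting patterns; a minimal sketch, with hypothetical piece lengths and dual values, is:

```python
def price_pattern(lengths, duals, roll_len):
    """Gilmore-Gomory pricing subproblem for 1-D cutting stock: an
    unbounded knapsack that finds the cutting pattern with the largest
    total dual value. The pattern is added to the master problem when
    that value exceeds 1, the (relative) cost of a new roll.
    Returns (best_value, pattern) with pattern[i] = copies of piece i.
    """
    best = [0.0] * (roll_len + 1)
    choice = [-1] * (roll_len + 1)   # -1: leave one unit of roll unused
    for c in range(1, roll_len + 1):
        best[c], choice[c] = best[c - 1], -1
        for i, (l, pi) in enumerate(zip(lengths, duals)):
            if l <= c and pi + best[c - l] > best[c]:
                best[c], choice[c] = pi + best[c - l], i
    pattern = [0] * len(lengths)
    c = roll_len
    while c > 0:                      # walk back through the choices
        if choice[c] == -1:
            c -= 1
        else:
            pattern[choice[c]] += 1
            c -= lengths[choice[c]]
    return best[roll_len], pattern
```

The difference between decomposition schemes lies in the master problem that supplies the duals, which is precisely where the paper's comparison with Dyckhoff's model comes in.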
Both psychiatric breakdown and recovery involve the way in which the stresses and stimulations of people's psycho-social environment impact upon their psychology, brain functioning and other aspects of their person. Treating these matters entails the development and application of therapies within a range of connected 'stress-stimulation' environments that, taken together, might be described as a mental health system. In this paper we attempt to analyse and describe such a system in terms of environmental settings that are related to the stress/stimulation levels impacting upon people. A quantitative model for the system is developed as a semi-Markov stochastic process: the lengths of stay in the different environments are assumed to have a common probability distribution, and the probabilities of movement at the end of such stays are assumed to follow a Markov chain. These probabilities depend upon a whole range of factors related both to the therapies and to the facilities and management of the associated services. The paper will propose a general model based upon these ideas, discuss its psychiatric background, and illustrate it with reference to the system that exists on the ground in Wentworth Area Health, Western Sydney, Australia. It will also discuss its potential as a decision-making tool.
The New Zealand education system underwent a number of fundamental reforms at the beginning of the 1990s that effectively changed the governance structures of New Zealand state schools. These reforms provide an opportunity to examine the effect of increased levels of autonomy and accountability on the performance of schools. In this respect, a recent study prepared for the Ministry of Education has called for further research to examine the effect of these reforms on the performance of New Zealand schools. As part of a longer-term in-depth research project, this paper provides an exploratory examination of data related to Auckland secondary schools and uses Data Envelopment Analysis to evaluate their relative performance.
Focusing primarily on developing decision-support systems (DSSs) based on traditional mathematical programming techniques - LP/MIP, for instance - has unnecessarily restricted industrial OR's scope of practice. In a sense, OR practice in the corporate sector remains a niche within a niche. We describe recent DSS trends that allow us to extend OR practice into areas hitherto deemed too complex for structured optimization (e.g., real-world supply-chain planning, configuration) or "not really OR" (e.g., policy automation). Our presentation is driven by quintessentially practical considerations - what do the consumers of OR need? What are their organizational constraints? What is the perceived value of our raison d'etre - optimality?
In many vendor-manufacturer partnerships, it is quite typical for the vendor to hold consignment stock on behalf of the manufacturer, usually through an 'in-plant' from the vendor organization who is resident at the manufacturer's premises. However, empirical evidence suggests that where the buyer is dominant, suppliers are often forced to carry inventory either as part of the contract or to qualify for selection: "It seems that most of the costs are the suppliers' and most of the benefits are the customers'" (Waters-Fuller, 1995). However, in a certain vendor-manufacturer partnership in NZ forestry, we found that while the vendor had initial reservations in terms of holding stock on a consignment basis for the manufacturer, it subsequently fully supported the move; according to the vendor's commercial manager, it was ideal for the company with chemicals that had short shelf-lives. Accordingly, we develop inventory models to better understand the benefits from the vendor's management of consignment stocks of short-lived supplies. In particular, a simple model confirms the intuition that under positively skewed probability distributions of shelf life, the vendor stands to gain more than the manufacturer from the operation of consignment stocks through an in-plant.
We focus on planning the expansion of electricity distribution networks under uncertainty. The objective of the distribution network expansion planning problem is to determine an investment schedule to ensure an economic and reliable energy supply. This is done by constructing a minimum cost radial distribution network under the constraints of network line load capacities, voltage drops, reliability, and load demands. This problem can be classified as a complex combinatorial optimization problem. The problem complexity comes from: various options for transformer and substation location; several alternatives for cable/line size and routes; multistage investment decisions; complex objectives; and uncertainty about demand variation and location, equipment availability, and faults. We propose to use the Lagrangian relaxation technique to solve the distribution network expansion planning problem under uncertainty. This is achieved by decomposing the original NP-hard problem into smaller, easy-to-solve subproblems. We intend to use stochastic dynamic programming to solve these subproblems in association with a subgradient optimization technique to generate tight lower bounds on the optimal value of original problem. A heuristic will be used to find the upper bounds enabling us to measure the quality of the solution obtained. The developed model will be applied to the future planning of Auckland's central business district high voltage distribution network.
In finance there is the famous Markowitz formulation, whose solution set is the efficient frontier in mean/standard deviation space. While textbook theory states that everyone has the same efficient frontier and faces the same equilibrium market portfolio, it is widely known that this is not exactly the case. People have different limits on the amount they wish to invest in a stock, on the degree to which they wish to engage in short selling, and on the number of securities they will tolerate in their portfolio. Commonly implemented as constraints in order to remain in 2-space, all of these limitations, as shown in the paper, have an extremely dramatic effect on the efficient frontier, to the extent that it no longer makes much sense to restrict the search for an improved theory of portfolio analysis to this space. In particular, limits on short selling and on the number of securities in a portfolio are probably more adequately modeled as criteria. Along with other concerns such as social responsibility and dividends, we are now, in the opinion of this author, most decidedly beyond mean/standard deviation space and in the realm of multiple objectives. With multiple objectives transforming the efficient frontier into an efficient surface, this paper discusses new approaches, new solution procedures, and new ways of rationalizing equilibrium in asset prices, with a view toward developing a new, more realistic multiple-criteria-based theory of portfolio selection.
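For reference, the classical Markowitz formulation underlying the discussion is, with portfolio weights $x$, covariance matrix $\Sigma$, expected returns $\mu$ and a target return $\rho$:

```latex
\begin{align*}
\min_{x}\quad & x^{\top}\Sigma x\\
\text{s.t.}\quad & \mu^{\top}x \ge \rho,\\
                 & \mathbf{1}^{\top}x = 1.
\end{align*}
```

Tracing out the frontier as $\rho$ varies gives the efficient frontier; the limits discussed above (short-selling bounds, cardinality limits) are conventionally appended as extra constraints on $x$, which is precisely the practice the paper argues should give way to additional objectives.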
South Tyrol is a popular destination in Northern Italy for tourists from both the north and south of the Alpine mountain ranges. The growing demand for tourist activities such as skiing, hiking and climbing has led to an increased number of accidents in the region over the years. This has resulted in an increased demand for rescue helicopters, as most accident sites are not accessible by ground vehicles. This project is based on a case study of the base locations of three helicopters operated by a South Tyrolean rescue organisation called Weißes Kreuz. Currently, the bases of these rescue helicopters are not optimally located. The aim of this project is therefore to optimise the locations of the rescue helicopter bases so that response times to emergency calls are reduced. Response times depend on the distance between a particular accident site and a helicopter base. This gives rise to two basic models. The first minimises the average response time, by minimising the total weighted distance. The second minimises the worst response time, by minimising the maximal un-weighted distance between the helicopter bases and the accident sites. In this project, the two models have been solved using effective heuristics. These heuristics do not guarantee optimality, but they do provide locally optimal solutions that improve on the current locations by significant amounts. On the basis of the results obtained from the models and algorithms implemented in the project, it was found that the model minimising the maximal un-weighted distance produced more evenly allocated regions. However, we prefer the allocations from the first model, as it takes the number of missions into account; this can help reduce the distances to sites where accidents are more frequent.
The effect of moving from the base locations found by the first model to those found by the second has been analysed further, in order to provide Weißes Kreuz with an understanding of how response times can be improved.
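The two models described above are the classical p-median and p-center location problems. The project's heuristics are not reproduced here; for tiny instances both objectives can simply be compared by brute force. The distances and mission counts below are hypothetical.

```python
from itertools import combinations

def best_bases(dist, weights, p, objective):
    """Brute-force location of p helicopter bases over candidate sites
    (illustrative only; the project itself uses heuristics rather than
    enumeration).

    dist[j][i] : distance from candidate base j to accident site i
    weights[i] : number of missions at accident site i
    objective  : 'median' minimises total weighted distance,
                 'center' minimises the maximum un-weighted distance
    """
    def score(bases):
        # each accident site is served by its nearest chosen base
        nearest = [min(dist[j][i] for j in bases)
                   for i in range(len(weights))]
        if objective == 'median':
            return sum(w * d for w, d in zip(weights, nearest))
        return max(nearest)
    return min(combinations(range(len(dist)), p), key=score)
```

Even on toy data the two objectives disagree in the way the abstract reports: a site with many missions pulls the median base toward it, while the center base moves to cover the farthest site.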
Creating daily routes for grass-mowers around a city's municipal parks can be modelled as an interesting variation of the vehicle scheduling and routing problem. In this case, complications mean that the typical solution methods for such problems must be extended. For example, not all parks need to be mown every day; instead, soft constraints on the length of the grass mean that the longer the grass at a location has grown, the more urgently it needs to be mown. Further, many variables influence the time it takes to mow a location, and because of this uncertainty the routes constructed must be long enough to ensure a full day's work. These complications result in a problem that is too large to be solved exactly on a daily basis. We report on the design, performance and suitability of a system containing a new heuristic for the grass-mower scheduling and routing problem.
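One simple way to combine the urgency soft constraint with route construction is a greedy heuristic that repeatedly visits the park with the highest urgency per unit of time spent, subject to the day-length budget. The sketch below is a hypothetical illustration of that idea, not the system's actual heuristic; all names, the travel metric and the urgency score are assumptions.

```python
def plan_day(depot, parks, day_length, travel, mow_time):
    """Greedy route builder: repeatedly add the feasible park with the
    highest urgency per unit of time spent (travel plus mowing), always
    keeping enough time to return to the depot.
    `parks` maps a park id to (location, days since last mown)."""
    route, pos, used = [], depot, 0.0
    remaining = dict(parks)
    while remaining:
        feasible = [pid for pid in remaining
                    if used + travel(pos, remaining[pid][0]) + mow_time(pid)
                       + travel(remaining[pid][0], depot) <= day_length]
        if not feasible:
            break
        best = max(feasible, key=lambda pid: remaining[pid][1] /
                   (travel(pos, remaining[pid][0]) + mow_time(pid)))
        loc, _ = remaining.pop(best)
        used += travel(pos, loc) + mow_time(best)
        pos = loc
        route.append(best)
    return route, used + travel(pos, depot)
```

Because mowing times are uncertain in practice, a real system would pad the day-length budget or re-plan when a location takes longer than expected; this sketch treats all times as known.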
A great deal of time and money is invested worldwide in the sport of yacht racing. The three
major contributions to winning a yacht race are:
1. A great crew.
2. A well designed and constructed yacht.
3. The best choice of sailing course.
In the United States and the United Kingdom it is now commonplace for forecasters to issue ensemble forecasts, which are a collection of individual forecasts generated from slightly varying initial conditions. This project looked at using ensemble forecasts to optimise the choice of sailing directions in a yacht race. Each individual forecast of the ensemble (called a member) always has an associated realisation probability. This is the likelihood that the member forecast predicts the actual weather. As time moves on it is possible to compare what the weather has actually done with what each member predicted and adjust the realisation probabilities accordingly. This is done using Bayes' rule, and it means that if one member matches the actual weather closely, its realisation probability will be increased while the realisation probabilities of the members that differ from the observed weather will be decreased. When selecting a sailing direction there is a trade-off between changing course to catch strong winds and staying on course for the finish line but in weaker winds. The factors affecting this decision are where in the ocean the yacht is, what time it is, and what the weather is going to do. This project took these three factors to define a state space for a stochastic dynamic program. The weather behaviour is captured in the state space using sets of realisation probabilities on the ensemble members. The dynamic program developed will fit easily with different yacht movement models but is limited to forward sailing directions. Pilot problems were solved in reasonable time, giving logical solutions that demonstrate the trade-off decisions.
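The Bayes' rule update of realisation probabilities described above can be sketched as follows. The Gaussian forecast-error likelihood and its spread parameter are assumptions chosen for illustration; the project's actual likelihood model is not specified in the abstract.

```python
import math

def gaussian_likelihood(observed, forecast, sigma=2.0):
    # Likelihood of the observed weather value (e.g. wind speed) under one
    # member's forecast, assuming Gaussian forecast error with spread sigma.
    # Both the Gaussian form and sigma=2.0 are illustrative assumptions.
    return math.exp(-((observed - forecast) ** 2) / (2 * sigma ** 2))

def update_realisation_probs(priors, forecasts, observed):
    """Bayes' rule: scale each member's realisation probability by the
    likelihood of the observed weather under that member's forecast,
    then renormalise so the probabilities sum to one."""
    posterior = [p * gaussian_likelihood(observed, f)
                 for p, f in zip(priors, forecasts)]
    total = sum(posterior)
    return [p / total for p in posterior]
```

A member whose forecast tracks the observed weather closely gains probability mass at each update, exactly the behaviour the abstract describes.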
The paper analyses models of the consultation, cooperation and co-decision procedures in the decision making of the European Union institutions: the Commission, the Council of Ministers and the European Parliament. Using power index methodology, the distribution of influence among the Commission, the Council and the Parliament under the different decision-making procedures is evaluated, together with the voting power of member states and European political parties. Problems of institutional reform and implications of the results of the Nice summit are discussed.
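Power index calculations of the kind mentioned can be illustrated on a small weighted voting game. The normalised Banzhaf index below is one standard choice (the abstract does not say which index the paper uses), and the weights and quota are made up for the example.

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Normalised Banzhaf index of a weighted voting game: a player is
    a swing in a winning coalition if removing that player's weight
    turns the coalition into a losing one. Exponential enumeration,
    so only suitable for small games."""
    n = len(weights)
    swings = [0] * n
    for r in range(n + 1):
        for coalition in combinations(range(n), r):
            total = sum(weights[i] for i in coalition)
            if total >= quota:
                for i in coalition:
                    if total - weights[i] < quota:
                        swings[i] += 1
    s = sum(swings)
    return [c / s for c in swings]
```

The classic game with weights (50, 49, 1) and quota 51 shows why weight and power differ: the second and third players have very different weights but identical Banzhaf indices.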
Diamonds are cut from rough, polyhedral kimberlite stones. One of the most difficult tasks of the cutter is deciding where to place the diamond within the stone in order to minimize waste. This problem can be formulated as a design centering problem. This talk will address the diamond cutting problem and discuss methodologies for solving it. We will provide some numerical examples.
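The simplest instance of design centering is the Chebyshev centre: the largest ball inscribed in a polyhedron {x : Ax ≤ b}, which reduces to a linear program. The sketch below (assuming SciPy's `linprog` as the LP solver) illustrates the idea only; a real diamond-cutting model centres a scaled diamond shape rather than a ball.

```python
import numpy as np
from scipy.optimize import linprog

def chebyshev_center(A, b):
    """Largest ball inside the polyhedron {x : A x <= b}.
    Decision variables are (x, r); each facet constraint a_i . x <= b_i
    becomes a_i . x + ||a_i|| r <= b_i, and we maximise the radius r."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    n = A.shape[1]
    c = np.zeros(n + 1)
    c[-1] = -1.0                      # maximise r by minimising -r
    res = linprog(c, A_ub=np.hstack([A, norms]), b_ub=b,
                  bounds=[(None, None)] * n + [(0, None)])
    return res.x[:n], res.x[-1]
```

For the unit square the optimal centre is (0.5, 0.5) with radius 0.5, as expected.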
The purpose of this paper is twofold. On the one hand, we discuss the interaction between least absolute deviation regression and linear programming (Sadovski, 1974); on the other, the relationship between the least squares method and quadratic programming is explored. The discussion is based on statistical theory and optimization techniques. The parameter estimates of the two statistical models are obtained by the simplex method and the Kuhn-Tucker theorem. The optimization problems in mathematical programming are reformulated and solved using a stepwise regression algorithm.
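The LAD–LP connection can be sketched as follows: split each residual into non-negative parts u and v, minimise their sum subject to Xβ + u − v = y, and solve with any simplex-type LP solver. This sketch uses SciPy's `linprog` as the solver, which is an assumption for illustration rather than the paper's own implementation.

```python
import numpy as np
from scipy.optimize import linprog

def lad_fit(X, y):
    """Least absolute deviation regression as a linear program:
    minimise sum(u + v) subject to X b + u - v = y, u, v >= 0,
    where u and v are the positive and negative residual parts."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    m, k = X.shape
    c = np.concatenate([np.zeros(k), np.ones(2 * m)])   # cost only on u, v
    A_eq = np.hstack([X, np.eye(m), -np.eye(m)])
    bounds = [(None, None)] * k + [(0, None)] * (2 * m)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds)
    return res.x[:k]
```

Fitting a line to four collinear points plus one gross outlier shows the statistical pay-off: the L1 fit passes through the collinear points and ignores the outlier, where least squares would be pulled toward it.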
Group decision making is quite often organised in a procedural way, with decision-making rights allocated to several institutions or sub-groups of decision-makers; legislation serves as an example. These situations have traditionally been analysed using compound co-operative games, which can deal with inter-institutional decision-making but only when the sub-groups decide simultaneously; procedural and strategic aspects are disregarded in the analysis. In this paper we propose a method which extends the compound game approach and contains it as a special case. We illustrate the method and its differences from the traditional approach with legislative examples in federations.
In this presentation we will review market distribution functions, with particular application to the New Zealand electricity market, and discuss estimation techniques for them. We will also discuss ways of measuring the quality of these estimators, in particular the expected excess cost associated with them.