RESEARCH ARTICLES | RISK + CRYSTAL BALL + ANALYTICS

Posts tagged: Statistics

Modeling Time-Series Forecasts with @RISK


Making decisions about the future is becoming harder and harder because the sources and pace of uncertainty that can affect the final outcome of a project or investment keep growing. Several tools have proven instrumental in helping managers and decision makers tackle this: Time Series Forecasting, Judgmental Forecasting and Simulation.

This webinar presents these approaches and shows how they can be combined to improve both tactical and strategic decision making. We will also cover how the role of analytics in the organization has evolved over time, giving participants strategies for mobilizing analytics talent within the firm.

We will discuss these topics as well as present practical models and applications using @RISK.
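As a taste of how forecasting and simulation can work together, here is a minimal Python sketch (not taken from the webinar; the data, model and parameters are all hypothetical) that fits a simple AR(1) time-series model and then uses Monte Carlo simulation to turn its point forecast into a distribution of outcomes:

```python
# A minimal sketch (not from the webinar): fit a simple AR(1) model to a
# series, then use Monte Carlo simulation to turn the point forecast into
# a distribution of outcomes. Data and parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical historical demand series (in practice, load real data).
history = 100 + np.cumsum(rng.normal(0.5, 2.0, size=60))

# Fit AR(1): y[t] = c + phi * y[t-1] + eps, by ordinary least squares.
y_prev, y_curr = history[:-1], history[1:]
phi, c = np.polyfit(y_prev, y_curr, 1)
resid_sd = np.std(y_curr - (c + phi * y_prev), ddof=2)

# Simulate many 12-step-ahead paths instead of a single point forecast.
n_paths, horizon = 10_000, 12
paths = np.empty((n_paths, horizon))
last = np.full(n_paths, history[-1])
for t in range(horizon):
    last = c + phi * last + rng.normal(0.0, resid_sd, size=n_paths)
    paths[:, t] = last

# Decision-relevant summary: a range, not a single number.
p5, p50, p95 = np.percentile(paths[:, -1], [5, 50, 95])
print(f"12 periods ahead: median {p50:.1f}, 90% interval [{p5:.1f}, {p95:.1f}]")
```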

The Need for Speed: A performance comparison of Crystal Ball, ModelRisk, @RISK and Risk Solver


A detailed comparison of the top Monte Carlo simulation tools for Microsoft Excel

There are very few performance comparisons available to consult when considering the acquisition of an Excel-based Monte Carlo solution. With this in mind, and a bit of intellectual curiosity, we decided to evaluate Oracle Crystal Ball, Palisade @RISK, Vose ModelRisk and Frontline Risk Solver in terms of speed, accuracy and precision. We ran over 20 individual tests and 64 million trials to prepare a comprehensive comparison of the top Monte Carlo tools.
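For readers who want a feel for the methodology, here is a minimal Python sketch (not the benchmark harness used in the article; the toy cost model is hypothetical) of how speed and precision can be measured for a Monte Carlo run:

```python
# A minimal sketch (not the article's harness) of measuring Monte Carlo
# speed and precision: wall-clock time per run, and the standard error
# of the mean, which shrinks like 1/sqrt(n) as trials increase.
import time
import numpy as np

rng = np.random.default_rng(7)

def run_trials(n_trials: int) -> np.ndarray:
    """A toy model: project cost = sum of three uncertain line items."""
    labor = rng.triangular(80, 100, 140, size=n_trials)
    parts = rng.lognormal(mean=3.0, sigma=0.25, size=n_trials)
    overhead = rng.normal(25, 5, size=n_trials)
    return labor + parts + overhead

for n in (10_000, 100_000, 1_000_000):
    t0 = time.perf_counter()
    out = run_trials(n)
    elapsed = time.perf_counter() - t0
    sem = out.std(ddof=1) / np.sqrt(n)  # precision of the estimated mean
    print(f"{n:>9,} trials: {elapsed:6.3f} s, mean {out.mean():.2f} ± {sem:.3f}")
```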


Copulas vs. Correlation

Copulas and rank order correlation are two ways to model and explain the dependence between two or more variables. Historically used in biology and epidemiology, copulas have gained acceptance and prominence in the financial services sector.

In this article we are going to untangle what correlation and copulas are and how they relate to each other. Preparing a summary overview meant reading some painfully dry material… but the result is a practical guide to understanding copulas and when you should consider them. I lay no claim to being a stats expert or mathematician… just a risk analysis professional, so my approach is pragmatic. The tools used for this article and its demo models are Oracle Crystal Ball 11.1.2.1 and ModelRisk Industrial 4.0.
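To make the distinction concrete, here is a minimal Python sketch (using SciPy and NumPy rather than Crystal Ball or ModelRisk; the marginals are hypothetical) of a Gaussian copula joining two arbitrary marginals, with Spearman rank correlation summarizing the dependence it induces:

```python
# A minimal sketch, assuming SciPy/NumPy: a Gaussian copula joins two
# arbitrary marginals, and rank (Spearman) correlation summarizes the
# dependence the copula induces.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 50_000

# 1. Sample from a bivariate normal with the desired dependence.
rho = 0.8
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# 2. Push through the normal CDF -> correlated uniforms (the copula).
u = stats.norm.cdf(z)

# 3. Invert any marginal CDFs you like; the dependence is preserved.
x = stats.lognorm.ppf(u[:, 0], s=0.5)   # e.g. a cost driver
y = stats.gamma.ppf(u[:, 1], a=2.0)     # e.g. a duration driver

rho_s, _ = stats.spearmanr(x, y)
print(f"Spearman rank correlation of the marginals: {rho_s:.3f}")
# For a Gaussian copula, rank correlation = (6/pi)*arcsin(rho/2) ~ 0.786.
```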

Correlation and Impact on Monte Carlo Analysis Results (5/8)

All the top dogs in the Monte Carlo analysis spreadsheet universe have distribution-fitting capabilities. Their interfaces share common elements, of course, since they rely (for the most part) on the same PDFs in their fitting arsenals. There are important differences, to be sure, and we hope this comparison illustrates the pros and cons from a practical standpoint. Before going over our scorecard for Crystal Ball and ModelRisk, one more very important capability category begs for review: correlation.
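As a preview of why correlation deserves its own category, here is a minimal Python sketch (not tied to either tool's engine; the cost items are hypothetical) showing how correlating two Monte Carlo inputs widens the tails of the output they feed:

```python
# A minimal sketch (hypothetical model): correlating two cost inputs
# leaves the mean of their total roughly unchanged but fattens the tails,
# which is exactly what risk percentiles like P95 are meant to capture.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

def total_cost(rho: float) -> np.ndarray:
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    # Two lognormal cost items sharing an underlying dependence.
    a = np.exp(4.0 + 0.3 * z[:, 0])
    b = np.exp(4.0 + 0.3 * z[:, 1])
    return a + b

for rho in (0.0, 0.8):
    t = total_cost(rho)
    print(f"rho={rho}: mean {t.mean():,.1f}, P95 {np.percentile(t, 95):,.1f}")
```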

Discrete Distribution Fitting to Duke Basketball Scores, in ModelRisk (4/8)

Let the battle begin anew. We continue our journey in uncertainty modeling, having understood how to fit distributions to data using Crystal Ball (CB). How does that experience compare to what ModelRisk (MR) has to offer?

Open the Duke 09_10 Scores spreadsheet with ModelRisk loaded in the Excel environment. We will first create the MR Objects representing the fitted PDFs. (Just as with the CB exercise, it is good practice to examine a variety of best-fitting distributions rather than blindly accepting the top dog.) Then, in distinctly separate cells, we will create the VoseSimulate functions that return sampled values from the PDFs modeled by the MR Objects.
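For readers more comfortable in code than in spreadsheets, the Object/Simulate split has a rough analogue in Python (assuming SciPy, not ModelRisk; the scores below are hypothetical stand-ins for the Duke data): a frozen distribution plays the role of the MR Object, and separate .rvs() calls play the role of VoseSimulate cells.

```python
# A rough Python analogue (assuming SciPy) of ModelRisk's pattern of
# separating the distribution *object* from the cells that *sample* it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

# Hypothetical game scores (stand-ins for the Duke 09_10 data).
scores = np.array([82, 77, 90, 68, 74, 85, 79, 88, 71, 93, 76, 81])

# "Object" step: fit and freeze a candidate discrete distribution.
lam = scores.mean()                    # MLE for the Poisson rate
score_object = stats.poisson(mu=lam)   # frozen distribution = the Object

# "Simulate" step: separate calls draw samples from the same object.
sample = score_object.rvs(size=10_000, random_state=rng)
print(f"fitted mean {lam:.1f}, simulated mean {sample.mean():.1f}")
```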

Distributions in ModelRisk as Objects (3/8)

As with Crystal Ball, ModelRisk has the ability to fit distributions to historical data. The analyst looking to describe the variation of a Monte Carlo Analysis input can use "fitting" windows to select data and manipulate other options. How does the ModelRisk (MR) fitting experience stack up against the Crystal Ball (CB) methods and options? There are some important differences one should understand about MR before fitting PDFs to the Duke 09_10 Scores spreadsheet.

Subject Matter Expertise in Distribution Selection (2/8)

Are there discrete univariate probability distribution functions (PDFs) that can be used to simulate college basketball scores? Do we, as avid basketball observers, know enough to suggest one discrete PDF is better than another? When fitting distributions to data in your own business problems, you will be asking the same types of questions. If you are not an expert on the inputs and their behavior, seek out a subject-matter expert (SME) who can provide insight. Putting experience and theoretical knowledge together this way is a best practice for distribution selection.
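As an illustration of how the data can arbitrate between SME-suggested candidates, here is a minimal Python sketch (assuming SciPy; the scores are hypothetical stand-ins for real game data) comparing a Poisson fit against a negative binomial fit by AIC:

```python
# A minimal sketch (assuming SciPy): compare two candidate discrete PDFs
# for count-like scores by log-likelihood and AIC (lower AIC is better).
import numpy as np
from scipy import stats

# Hypothetical scores standing in for real game data.
scores = np.array([95, 62, 88, 70, 101, 59, 84, 77, 96, 66, 90, 73])
m, v = scores.mean(), scores.var(ddof=1)

# Candidate 1: Poisson (assumes variance equals the mean).
ll_pois = stats.poisson.logpmf(scores, mu=m).sum()

# Candidate 2: negative binomial (allows variance > mean), fitted by the
# method of moments; only meaningful when the data are overdispersed.
if v > m:
    p = m / v
    size = m * p / (1 - p)
    ll_nb = stats.nbinom.logpmf(scores, n=size, p=p).sum()
    print(f"AIC Poisson: {2*1 - 2*ll_pois:.1f}, "
          f"AIC NegBinom: {2*2 - 2*ll_nb:.1f}")
else:
    print(f"Variance {v:.1f} <= mean {m:.1f}: no overdispersion to model.")
```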

Discrete Distribution Fitting to Duke Basketball Scores, in Crystal Ball (1/8)

Let us assume we have a batch of historical data in a spreadsheet. Our mission of the moment is to use this data to fit probability distributions that describe its past variability (or uncertainty). Consider using either Crystal Ball or ModelRisk for the task. We offer free trials of both to registered users; if you register here, you can get yours too. Try fitting the same data using these two packages and let us know how and why one is better than the other. Demonstrating these capabilities gives first-hand experience with each alternative's usability and capabilities, and with which features matter most in a comparison. The best way to judge is to try them out for yourself.

Dealing with Uncertainty

Change is constant. Or so the saying goes. However, even change is ever-varying. So perhaps we should say: Change is constantly changing. As occupants of planet earth, we intuitively know this and yet strive to keep everything the same, at least those things that do well by us. Uncertainty derails the best of our plans, even uncertainties that we recognize up front.

Tolerance Analysis using Monte Carlo, continued (Part 12 / 13)

In the one-way clutch example, the current Monte Carlo quality predictions for the system outputs give us approximately 3- and 6-sigma capabilities (Z-scores). What if a sigma score of three is not good enough? What must the design engineer do to the input standard deviations to comply with a 6-sigma directive?
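As a worked example with hypothetical numbers (not the actual clutch model outputs), the arithmetic behind that question looks like this:

```python
# A minimal worked example (hypothetical numbers, not the clutch model):
# the sigma score Z = (USL - mean) / sigma dictates the required spread.
usl, mean, sigma_out = 7.0, 6.2, 0.2667    # chosen so Z is about 3 today
z_now = (usl - mean) / sigma_out
sigma_required = (usl - mean) / 6.0        # output spread needed for Z = 6
print(f"current Z = {z_now:.2f}")
print(f"output sigma must fall from {sigma_out:.4f} to {sigma_required:.4f}")

# With independent inputs, output variance is the sum of
# (sensitivity * input sigma)^2 terms, so halving the output sigma means
# halving every input sigma, or cutting the largest contributors harder.
shrink = sigma_required / sigma_out
print(f"uniform shrink factor on input sigmas: {shrink:.2f}")
```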