After installing Risk Simulator, start Excel and the Risk Simulator menu will appear in
Excel. Risk Simulator comes in 11 languages (English, Chinese Simplified, Chinese
Traditional, French, German, Italian, Japanese, Korean, Portuguese, Russian, and
Spanish) and has several main modules, briefly described below. A wealth of resources
is available to get you started, including Online Getting Started Videos, User Manuals,
Case Studies, White Papers, Help Files, and Hands-on Exercises (these are installed
with the software and available on the software DVD, as well as on the website
www.rovusa.com).
(1) Monte Carlo Risk Simulation, (2) Forecasting and Prediction Analytics, (3) Optimization, (4) Analytical Tools, (5) ROV BizStats, and (6) ROV Decision Tree.

* ANOVA: One-Way Single Factor with Multiple Treatments * ANOVA: One-Way Randomized Block * ANOVA: Two-Way * ARIMA * ARIMA (Auto) * Autocorrelation and Partial Autocorrelation * Autoeconometrics * Basic Econometrics/Custom Econometrics * Charts: Pareto * Charts: Box-Whisker * Charts: Q-Q Normal * Combinatorial Fuzzy Logic * Control Charts: C, NP, P, R, U, XMR * Correlation (Linear and Nonlinear) * Cubic Spline * Descriptive Statistics * Deseasonalizing * Distributional Fitting * Exponential J Curve * Heteroskedasticity * Generalized Linear Models/Limited Dependent Variables: Logit * Generalized Linear Models/Limited Dependent Variables: Probit * Generalized Linear Models/Limited Dependent Variables: Tobit * Linear Interpolation * Logistic S Curve * Markov Chain * Multiple Regression (Linear and Nonlinear) * Neural Network * Nonparametric Hypothesis Tests * Parametric Hypothesis Tests * Stochastic Processes * Structural Break * Time-Series Analysis * Time-Series Analysis (Auto) * Trending and Detrending: Difference, Exponential, Linear, Logarithmic, Moving Average, Polynomial, Power, Rate, Static Mean, and Static Median * Volatility: GARCH Models * Volatility: Log Returns Approach * Yield Curve (Bliss) * Yield Curve (Nelson-Siegel)

Most distributions can be defined up to four moments. The first moment describes its location or central tendency (expected values); the second moment describes its width or spread (risks and uncertainties); the third moment, its directional skew (most probable events); and the fourth moment, its peakedness or thickness in the tails (catastrophic extreme tail events). All four moments should be calculated in practice and interpreted to provide a more comprehensive view of the project under analysis. Risk Simulator provides the results of all four moments in its Statistics view in the forecast charts.
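As a sketch of the four moments described above, here is how they might be computed for a simulated dataset (the normal sample and its parameters are illustrative assumptions, not Risk Simulator output):

```python
import numpy as np
from scipy import stats

# Hypothetical simulated forecast data (assumed normal for illustration)
np.random.seed(42)
data = np.random.normal(loc=100, scale=15, size=10_000)

mean = np.mean(data)         # 1st moment: location / central tendency
std = np.std(data, ddof=1)   # 2nd moment: spread (risk and uncertainty)
skew = stats.skew(data)      # 3rd moment: directional skew
kurt = stats.kurtosis(data)  # 4th moment: excess kurtosis (tail thickness)

print(f"Mean={mean:.2f}  StDev={std:.2f}  Skew={skew:.3f}  ExcessKurt={kurt:.3f}")
```

These are the same four statistics reported in Risk Simulator's Statistics view of the forecast charts.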

In This Issue: 1. Learn the basics of statistical moments, 2. Understand the first four moments of a distribution, 3. Run correlated simulations in Risk Simulator, and 4. Review the correlation copulas and random number generators in Risk Simulator.

The S-Curve, or Cumulative Distribution Function (CDF), is a very powerful and often-used visual representation of a distribution of data points. This section briefly reviews the salient points of the S-Curve through the use of visual examples, starting with the four moments showing the Probability Density Function (PDF) or probability histogram shapes, and then moving on to the corresponding CDF charts. The following PDF and CDF charts were generated in the Risk Simulator │ Analytical Tools │ Distributional Charts and Tables tool.

In This Issue: 1. Learn the basics of the four moments, 2. Learn how to read and interpret the CDF or S-Curve based on its four moments.

This newsletter is a quick synopsis of the probability distributions available in Real Options Valuation, Inc.’s various software applications such as Risk Simulator, Real Options SLS, ROV Quantitative Data Miner, ROV Modeler, and others.

In This Issue: 1. Learn some helpful hints and tips on using Risk Simulator

A Risk Simulation model developed in Excel with Risk Simulator can be saved in several ways:
1. Saving the Excel Model, 2. Extracting the Simulated Data, 3. Saving a *.RiskSim File, 4. Generating a Simulation Report, 5. Generating a Simulation Statistics Report, 6. Copying and Pasting Charts,
7. Generating Live Histogram Charts, 8. Generating Overlay Charts.

In This Issue: 1. Learn the different ways of saving your simulation models and results, 2. Learn how to save results versus saving the simulation settings versus saving simulation charts.

Power tools such as Risk Simulator and Real Options SLS took years to build and many more years to be perfected. It is extremely likely that a new user can simply pick up software products such as these and hit the ground running immediately. However, some knowledge of the theory is required because, despite their analytical power, these software tools are just tools. They do not replace the analyst in any way. In fact, tools such as these only accouter the analyst with the appropriate analytics by, for example, relieving the analyst from the need to be proficient with fancy mathematics in order to build sophisticated models. In short, to create and perform sophisticated modeling, the analyst first needs to understand some of the underlying assumptions and approaches used in these analytics.

In This Issue: 1. Learn the basics of management’s due diligence efforts as we start on the topic of modeling errors…

Monte Carlo simulation is a very potent methodology for solving difficult and often intractable problems with simplicity and ease. It creates artificial futures by generating thousands and even millions of sample paths of outcomes and looks at their prevalent characteristics. For analysts in a company, taking graduate-level advanced mathematics courses is neither logical nor practical. A brilliant analyst would use all available tools at his or her disposal to obtain the same answer the easiest way possible. One such tool is Monte Carlo simulation using Risk Simulator. The major benefit that Risk Simulator brings is its simplicity of use. Here
are seven of fourteen due-diligence issues management should evaluate when an analyst presents a report with a series of advanced analytics using simulation. The remaining seven issues are covered in “Performing Due Diligence, Part 3.”
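As a minimal illustration of the method itself (not of Risk Simulator's interface), the following sketch simulates a hypothetical profit model; the input distributions and their parameters are invented for this example:

```python
import random

# Hypothetical model: profit = price * units - cost.
# All distributions below are illustrative assumptions.
random.seed(1)
TRIALS = 50_000
profits = []
for _ in range(TRIALS):
    price = random.triangular(9.0, 12.0, 10.0)  # worst, best, most likely
    units = random.gauss(1_000, 100)            # uncertain demand
    cost = random.uniform(7_500, 8_500)         # uncertain fixed cost
    profits.append(price * units - cost)

profits.sort()
mean_profit = sum(profits) / TRIALS
p05 = profits[int(0.05 * TRIALS)]               # 5th percentile (downside risk)
print(f"Expected profit: {mean_profit:,.0f}; 5% worst case: {p05:,.0f}")
```

Each trial is one "artificial future"; the sorted sample paths yield the forecast distribution and its percentiles.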

In This Issue: 1. Learn the main due diligence questions when it comes to setting up, running, and reviewing Monte Carlo risk simulation…

“Performing Due Diligence, Part 2” covers seven of fourteen due-diligence issues management should evaluate when an analyst presents a report with a series of advanced analytics using simulation. The remaining seven such concerns are covered in this issue. (Continued from “Performing Due Diligence, Part 2”).

In This Issue: 1. Continue the list of due diligence items on running and evaluating a Monte Carlo risk simulation model…

In addition to Monte Carlo simulation, another frequently used decision-analysis tool is forecasting. One thing is certain: You can never predict the future with perfect accuracy. The best that you can hope for is to get as close as possible. In addition, it is actually okay to be wrong on occasion. As a matter of fact, it is sometimes good to be wrong, as valuable lessons can be learned along the way. It is better to be wrong consistently than to be wrong only on occasion, because a consistent bias can be corrected: you can reduce your expectations when you are consistently overoptimistic, or increase them when you are consistently underoptimistic. The problem arises when you are occasionally right and occasionally wrong, and you have no idea when or why. Here are 12 issues that should be addressed when evaluating time-series or any other forecasting results.

In This Issue: 1. Learn the basics of forecasting and regression warning signs…

Risk analysis is never complete without the analysis of real options. What are uncertainty and risk analyses good for if one cannot make use of them? Real options analysis looks at the flexibility of a project or management’s ability to make midcourse corrections when uncertainty becomes resolved over time. At the outset, real options analysis looks like a very powerful analytical tool, but care should be taken when real options analysis is applied.

In This Issue: 1. Learn the key due diligence questions to ask or look out for in performing real options analysis

Here is a quick review of each methodology and several quick getting-started examples using the software. More detailed descriptions and example models of each of these techniques are found in Risk Simulator.

In This Issue: Learn about the various types of forecasting techniques available in Risk Simulator.
* ARIMA * Auto ARIMA * Basic Econometrics * Basic Auto Econometrics * Custom Distributions * GARCH * J-Curve * Markov Chains * Maximum Likelihood on Logit, Probit, and Tobit * Multivariate Regression * Nonlinear Extrapolation * S-Curves * Spline Curves * Stochastic Process Forecasting * Time-Series Analysis and Decomposition.

In This Issue:
1. Learn about the eight most common models used in time-series analysis
2. Perform a time-series analysis utilizing a sample from Risk Simulator’s example folder.

“Forecasting is a balance between art and science. How can you improve your forecasting skills?”

In This Issue:
1. Learn about the assumptions needed for a regression analysis to work.
2. Learn how to use a scatter plot to verify regression assumptions.
3. Become aware of the problems common in forecasting.

As noted in “The Pitfalls of Forecasting, Part 1,” forecasting is a balance between art and science. Using Risk Simulator can take care of the science, but it is almost impossible to take the art out of forecasting. In forecasting, experience and subject-matter expertise count. One effective way to support this point is to look at some of the more common problems and violations of the required underlying assumptions of the data and forecast interpretation.

In This Issue:
Learn about the following potential problems of forecasting: out-of-range forecasts, interactions among variables, survivorship bias and self-selection bias, omitted variables, multicollinearity, error measurements, causality loops, seasonality and cyclicality, micronumerosity, nonstationary data, stochastic processes, heteroskedasticity, redundant variables, bad-fitting models or bad goodness-of-fit, structural breaks, structural shifts, autocorrelation, leads and lags, and bad data and data-collection errors.

In This Issue: 1. Heteroskedasticity, 2. other technical issues, and 3. how scatter plots provide information about the behavior of the data series.

In This Issue: Scatter Plots and Behavior of the Data Series. “How can scatter plots help identify violations of the regression assumptions?”
1. outliers,
2. nonlinearity,
3. multicollinearity.

In This Issue: “How is stochastic process simulation used in forecasting?”
1. Become acquainted with the stochastic processes included in Risk Simulator’s Forecasting tool
2. Learn how to run and interpret a sample stochastic process using Risk Simulator

In This Issue: “What is multivariate regression?”
1. Learn how regression analysis finds the unique best-fitting line through a set of data points
2. Learn how to run and interpret a sample multiple regression using Risk Simulator

As noted in “Multivariate Regression, Part 1,” it is assumed that the user is knowledgeable about the fundamentals of regression analysis. In this part, we continue our coverage of the topic by examining stepwise regression and goodness-of-fit.

In This Issue:
1. Learn about a powerful approach to regression analysis known as “stepwise regression”,
2. Learn how goodness-of-fit statistics provide a glimpse into the accuracy and reliability of the estimated regression model.

In This Issue: “How can we model situations involving limited dependent variables?”
1. Learn about limited dependent variables.
2. Learn how to use Risk Simulator’s Maximum Likelihood Models to identify which variables affect the default behavior of individuals.

In This Issue: “What is GARCH modeling used for?”
1. Learn how to run a sample GARCH model using Risk Simulator.
2. Learn about some of the GARCH specifications used in Risk Simulator.

In This Issue: “How is ARIMA forecasting different from multivariate regression?”
1. Learn about Risk Simulator’s ARIMA and Auto ARIMA modules.
2. Find out why an ARIMA model is superior to common time-series analysis and multivariate regressions.

Econometrics refers to a branch of business analytics, modeling, and forecasting techniques for modeling the behavior of, or forecasting, certain business, financial, economic, physical science, and other variables. Running the Basic Econometrics models is similar to running a regular regression analysis except that the dependent and independent variables are allowed to be modified before the regression is run. The report generated is the same as the one shown in the Multiple Regression article, and the interpretations are identical to those described there.

In This Issue:
1. Learn how to run Risk Simulator’s Basic Econometrics models.
2. Discover how you can run hundreds of models at once.

This newsletter answers the question, “What do you do when there are missing values in a time-series dataset?”
In This Issue:
1. Discover the usefulness of spline curves.
2. Learn how to run Risk Simulator’s Cubic Spline module.

In This Issue: “Why is it useful to detrend or deseasonalize data?”
1. Learn how to use Risk Simulator’s Data Deseasonalization and Detrending tool.
2. Learn how to perform a seasonality test on your data.

The J curve, or exponential growth curve, is one where the growth of the next period depends on the current period’s level and the increase is exponential. This phenomenon means that over time, the values will increase significantly from one period to another. This model is typically used in forecasting biological growth and chemical reactions over time.
The S curve, or logistic growth curve, starts off like a J curve, with exponential growth rates. Over time, the environment becomes saturated (e.g., market saturation, competition, overcrowding), the growth slows, and the forecast value eventually ends up at a saturation or maximum level. The S-curve model is typically used in forecasting market share or sales growth of a new product from market introduction until maturity and decline, population dynamics, growth of bacterial cultures, and other naturally occurring variables.
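A minimal sketch of the logistic S-curve described above; the saturation level, shape, and growth-rate parameters are illustrative assumptions:

```python
import math

# Logistic (S-curve) forecast: value_t = L / (1 + a * exp(-b * t)).
L, a, b = 1_000.0, 50.0, 0.4  # assumed saturation level, shape, growth rate

def s_curve(t: float) -> float:
    """Logistic growth value at period t."""
    return L / (1.0 + a * math.exp(-b * t))

# Early periods grow exponentially (J-curve-like); later periods level off
# at the saturation level L as the environment saturates.
forecast = [s_curve(t) for t in range(0, 31)]
```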

In This Issue: 1. Learn how to run Risk Simulator’s J-Curve and S-Curve modules.

A Markov chain exists when the probability of a future state depends only on the current state; linked together, these states form a chain that reverts to a long-run steady-state level. The Markov approach is typically used to forecast the market share of two competitors. The required inputs are the starting probability of a customer in the first store (the first state) returning to the same store in the next period versus the probability of switching to a competitor’s store in the next period.
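The two-competitor market-share setup can be sketched as follows; the retention probabilities are illustrative assumptions, not defaults:

```python
import numpy as np

# Assumed inputs: a customer of Store A returns to A next period with
# probability 0.90; a customer of Store B returns to B with 0.80.
P = np.array([[0.90, 0.10],
              [0.20, 0.80]])  # transition matrix; each row sums to 1

share = np.array([0.5, 0.5])  # starting market shares
for _ in range(200):          # iterate the chain toward its steady state
    share = share @ P

print(f"Long-run shares: A={share[0]:.3f}, B={share[1]:.3f}")
```

With these inputs the chain converges to A holding two-thirds of the market and B one-third, the long-run steady-state level the text refers to.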

In This Issue
1. Learn how to run Risk Simulator’s Markov Chains module.

Extrapolation involves making statistical forecasts by using historical trends that are projected for a specified period of time into the future. It is only used for time-series forecasts. For cross-sectional or mixed panel data (time-series with cross-sectional data), multivariate regression is more appropriate. This methodology is useful when major changes are not expected, that is, causal factors are expected to remain constant or when the causal factors of a situation are not clearly understood. The use of extrapolation also helps discourage the introduction of personal biases into the process. Extrapolation is fairly reliable, relatively simple, and inexpensive. However, extrapolation, which assumes that recent and historical trends will continue, produces large forecast errors if discontinuities occur within the projected time period; that is, pure extrapolation of time series assumes that all we need to know is contained in the historical values of the series being forecasted. If we assume that past behavior is a good predictor of future behavior, extrapolation is appealing. This makes it
a useful approach when all that is needed are many short-term forecasts.
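As a simple sketch of trend extrapolation (a linear trend here; the historical series and three-period horizon are invented for illustration):

```python
import numpy as np

# Hypothetical historical series, one value per past period
history = np.array([112.0, 118.0, 125.0, 131.0, 138.0, 144.0])
t = np.arange(len(history))

# Fit a least-squares linear trend and project it forward,
# assuming the recent trend continues (the core extrapolation assumption)
slope, intercept = np.polyfit(t, history, deg=1)
future_t = np.arange(len(history), len(history) + 3)
forecast = intercept + slope * future_t  # next three periods
```

Note that this is exactly the case the text warns about: the projection is only as good as the assumption that no discontinuity occurs in the forecast window.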

In This Issue: 1. Find insight into the use of extrapolation. 2. Learn how to run Risk Simulator’s Nonlinear
Extrapolation module.

In This Issue: “How are neural networks and fuzzy logic different?”
1. Gain a simple understanding of the neural network and fuzzy logic forecasting techniques.
2. Learn how to run Risk Simulator’s Neural Network and Combinatorial Fuzzy Logic models.

Several statistical tests exist for deciding if a sample set of data comes from a specific distribution. The most commonly used are the Kolmogorov-Smirnov test and the Chi-Square test. Each test has its advantages and disadvantages. This newsletter details the specifics of these two tests as applied in distributional fitting in Monte Carlo simulation analysis. Other tests such as the Anderson-Darling, Jarque-Bera, and Shapiro-Wilk are not used in Risk Simulator because their accuracy depends on the dataset being normal or near-normal. Therefore, these tests oftentimes yield suspect or inconsistent results.
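A brief sketch of the Kolmogorov-Smirnov test as applied to distributional fitting, using SciPy on simulated data (the dataset and the candidate normal distribution are assumptions for illustration):

```python
import numpy as np
from scipy import stats

# Hypothetical dataset to be fitted
np.random.seed(7)
data = np.random.normal(50, 5, size=500)

# Fit the candidate distribution's parameters, then compare the empirical
# CDF against the fitted theoretical CDF with the KS test
mu, sigma = np.mean(data), np.std(data, ddof=1)
ks_stat, p_value = stats.kstest(data, "norm", args=(mu, sigma))

# A small KS statistic (large p-value) means the fit cannot be rejected
print(f"KS statistic={ks_stat:.4f}, p-value={p_value:.4f}")
```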

In This Issue: 1. Learn about the Kolmogorov-Smirnov and Chi-Square tests as applied in distributional fitting in Monte Carlo simulation analysis.

Another powerful simulation tool is distributional fitting; that is, which distribution does an analyst or engineer use for a particular input variable in a model? What are the relevant distributional parameters? If no historical data exist, then the analyst must make assumptions about the variables in question. One approach is to use the Delphi method, where a group of experts are tasked with estimating the behavior of each variable. For instance, a group of mechanical engineers can be tasked with evaluating the extreme possibilities of a spring coil’s diameter through rigorous experimentation or guesstimates. These values can be used as the
variable’s input parameters (e.g., uniform distribution with extreme values between 0.5 and 1.2). When testing is not possible (e.g., market share and revenue growth rate), management can still make estimates of potential outcomes and provide the best-case, most-likely case, and worst-case scenarios, whereupon a triangular or custom distribution can be created.
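The three-point-estimate approach described above can be sketched as follows; the worst/most-likely/best figures below are invented (they echo the spring-coil range for continuity), and a triangular distribution is assumed:

```python
import random

# Assumed Delphi-style estimates from subject-matter experts
random.seed(0)
worst, most_likely, best = 0.5, 0.8, 1.2

# Sample a triangular distribution built from the three scenarios
samples = [random.triangular(worst, best, most_likely) for _ in range(100_000)]
mean = sum(samples) / len(samples)  # theoretical mean = (0.5 + 0.8 + 1.2) / 3
```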

In This Issue: 1. Learn how to use Risk Simulator to perform a distributional fitting model.

The distributional analysis tool is a statistical probability tool in Risk Simulator that is rather useful in a variety of settings. It can be used to compute the probability density function (PDF) for continuous distributions, also called the probability mass function (PMF) for discrete distributions (these terms are used interchangeably here), where, given some distribution and its parameters, we can determine the probability of occurrence of some outcome x. In addition, the cumulative distribution function (CDF) can be computed, which is the sum of the PDF values up to this x value. Finally, the inverse cumulative distribution function (ICDF) is used
to compute the value x given the cumulative probability of occurrence. Example uses of PDF, CDF, and ICDF are provided herein.
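A short sketch of PDF, CDF, and ICDF using a standard normal distribution (an illustrative choice; any distribution works the same way):

```python
from scipy import stats

dist = stats.norm(loc=0, scale=1)  # assumed distribution for illustration

x = 1.0
pdf_at_x = dist.pdf(x)     # density at x
cdf_at_x = dist.cdf(x)     # P(X <= x): the S-curve value at x
icdf = dist.ppf(cdf_at_x)  # inverse CDF recovers x from its cumulative probability

print(f"PDF({x})={pdf_at_x:.4f}, CDF({x})={cdf_at_x:.4f}, ICDF={icdf:.4f}")
```

The round trip CDF → ICDF returning the original x is exactly the relationship the tool exploits.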

In This Issue: 1. Learn how to use Risk Simulator’s Distributional Analysis tool.

The Percentile Distributional Fitting tool is an alternate way of finding and fitting the appropriate probability distributions to use in a simulation. There are several related tools and each has its own uses and advantages: Distributional Fitting (Percentiles), Distributional Fitting (Single Variable), Distributional Fitting (Multiple Variables) and Custom Distribution (Set Input Assumption).

In This Issue: Learn how to find and fit the best probability distributions with limited information on percentiles or moments.

Distributional Charts and Tables is a Probability Distribution tool that is a very powerful and fast module used for generating distribution charts and tables. Note that there are three similar tools in Risk Simulator, but each does very different things: Distributional Analysis, Distributional Charts and Tables, and Overlay Charts.

In This Issue
1. Learn about the three tools available for quickly generating distribution charts and tables.
2. Become familiar with the different capabilities of each tool.

The regression and forecasting diagnostic tool is the advanced analytical tool in Risk Simulator used to determine the econometric properties of your data. The diagnostics include checking the data for heteroskedasticity, nonlinearity, outliers, specification errors, micronumerosity, stationarity and stochastic properties, normality and sphericity of the errors, and multicollinearity.

In This Issue: 1. Explore Risk Simulator’s regression and forecasting diagnostic tool. 2. Learn about common violations in forecasting and regression analysis. 3. Discover the significance of outliers. 4. Find out about autocorrelation.

Another very powerful tool in Risk Simulator is the Statistical Analysis tool, which determines the statistical properties of the data. The diagnostics run include checking the data for various statistical properties, from basic descriptive statistics to testing for and calibrating the stochastic properties of the data.

In This Issue: 1. Learn how to determine the statistical properties of your data.

One of the powerful simulation tools available in Risk Simulator is tornado analysis: it captures the static impacts of each variable on the outcome of the model; that is, the tool automatically perturbs each variable in the model a preset amount, captures the fluctuation in the model’s forecast or final result, and lists the resulting perturbations ranked from the most significant to the least.
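The perturb-and-rank procedure can be sketched in a few lines; the profit model and base-case values below are hypothetical, not from the source:

```python
# Hypothetical model for illustration: profit = price * units - cost
def model(price, units, cost):
    return price * units - cost

base = {"price": 10.0, "units": 1_000.0, "cost": 8_000.0}
base_out = model(**base)

# Statically perturb each input +/-10%, one at a time, holding the rest fixed
swings = {}
for var in base:
    lo, hi = dict(base), dict(base)
    lo[var] *= 0.90
    hi[var] *= 1.10
    swings[var] = abs(model(**hi) - model(**lo))

# Rank the output swings from most to least significant, as a tornado chart does
ranked = sorted(swings, key=swings.get, reverse=True)
```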

In This Issue: 1. Explore Risk Simulator’s tornado analysis tool, 2. Learn how to use tornado analysis to identify critical success drivers and 3. Discover the differences between a tornado chart and a spider chart.

A powerful feature in Risk Simulator is sensitivity analysis. While tornado analysis (tornado charts and spider charts) applies static perturbations before a simulation run, sensitivity analysis applies dynamic perturbations created after the simulation run. Tornado and spider charts are the results of static perturbations, meaning that each precedent or assumption variable is perturbed a preset amount one at a time, and the fluctuations in the results are tabulated. In contrast, sensitivity charts are the results of dynamic perturbations in the sense that multiple assumptions are perturbed simultaneously, and their interactions in the model and
correlations among variables are captured in the fluctuations of the results. Tornado charts therefore identify which variables drive the results the most and, hence, are suitable for simulation. In contrast, sensitivity charts identify the impact to the results when multiple interacting variables are simulated together in the model.

In This Issue: 1. Discover how sensitivity analysis contrasts with tornado analysis and 2. Learn when and how to run a sensitivity analysis.

The scenario analysis tool in Risk Simulator allows you to run multiple scenarios quickly and effortlessly by changing one or two input parameters to determine the output of a variable.

In This Issue: 1. Learn the basics of using the Scenario Analysis tool.

Make sure that you do not confuse several related but very different terms: accuracy, error, precision, and confidence. Although people tend to use them interchangeably and the concepts are related, they are different and should not be used as synonyms.

In This Issue:
1. Learn the differences among accuracy, precision, error, and confidence level.
2. Understand the difference between accuracy and precision through visual representations.
3. Understanding Precision Control in Risk Simulator.
4. Compute confidence intervals using error and precision based on a certain confidence level.

Bootstrap simulation is a simple technique that estimates the reliability or accuracy of forecast statistics or other sample raw data. Bootstrap simulation can be used to answer many confidence- and precision-based questions in simulation.
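A minimal bootstrap sketch, assuming a small invented dataset and a percentile confidence interval for the mean:

```python
import random

# Hypothetical raw sample data
random.seed(3)
data = [12.1, 9.8, 11.4, 10.6, 13.2, 9.5, 12.7, 10.9, 11.8, 10.2]

# Resample with replacement many times and record the statistic of interest
boot_means = []
for _ in range(5_000):
    resample = [random.choice(data) for _ in data]
    boot_means.append(sum(resample) / len(resample))

# Approximate 95% percentile confidence interval for the mean
boot_means.sort()
ci_low, ci_high = boot_means[125], boot_means[4874]
```

The same loop works for any distributional statistic (median, standard deviation, skew), which is what makes the bootstrap a general-purpose reliability tool.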

In This Issue:
1. Discover how bootstrap simulation is a hypothesis testing tool.
2. Learn how to use bootstrap simulation to test any distributional statistic.

Another analytical technique of interest is that of segmentation clustering.
In This Issue: 1. Learn how to run segmentation clustering.

The correlation coefficient is a measure of the strength and direction of the relationship between two variables, and it can take on any value between –1.0 and +1.0. That is, the correlation coefficient can be decomposed into its sign (positive or negative relationship between two variables) and the magnitude or strength of the relationship (the higher the absolute value of the correlation coefficient, the stronger the relationship).
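The sign-and-magnitude decomposition can be illustrated with two short invented series:

```python
import numpy as np

# Illustrative data: y rises almost linearly with x; z is y mirrored
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.2])
z = -y

r_xy = np.corrcoef(x, y)[0, 1]  # strong positive relationship (near +1)
r_xz = np.corrcoef(x, z)[0, 1]  # same strength, opposite sign (near -1)
```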

In This Issue:
1. Learn the basics of correlations, 2. Understand the effects of correlations on risk, 3. Run correlated simulations in Risk Simulator, and 4. Review the correlation copulas and random number generators in Risk Simulator.

A hypothesis test is performed when testing the means and variances of two distributions to determine if they are statistically identical or statistically different from one another; that is, to see if the differences between the means and variances of two different forecasts occur based on random chance or if they are, in fact, statistically significantly different from one another.

In This Issue: 1. Discover the reason for using hypothesis testing and 2. Learn how hypothesis testing and bootstrap simulation differ.

Several tests exist to check for the presence of heteroskedasticity. These tests also are applicable for testing misspecifications and nonlinearities. The simplest approach is to graphically represent each independent variable against the dependent variable. Another approach is to apply one of the most widely used models, White’s test, where the test is based on the null hypothesis of no heteroskedasticity against an alternate hypothesis of heteroskedasticity of some unknown general form. The test statistic is computed by an auxiliary or secondary regression, where the squared residuals or errors from the first regression are regressed on all possible (and nonredundant) cross products of the regressors.
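The auxiliary-regression procedure can be sketched directly (single-regressor case, simulated data whose error variance grows with x; this illustrates the test logic, not Risk Simulator's implementation):

```python
import numpy as np

# Simulated heteroskedastic data: error standard deviation grows with x
rng = np.random.default_rng(0)
n = 400
x = rng.uniform(1.0, 10.0, n)
y = 2.0 + 0.5 * x + rng.normal(0.0, x, n)

# Primary regression: y on [1, x]
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Auxiliary regression: squared residuals on [1, x, x^2]
Z = np.column_stack([np.ones(n), x, x**2])
gamma, *_ = np.linalg.lstsq(Z, resid**2, rcond=None)
e2 = resid**2
r2 = 1.0 - np.sum((e2 - Z @ gamma) ** 2) / np.sum((e2 - e2.mean()) ** 2)

# LM test statistic n*R^2, compared against chi-square with 2 df here
lm_stat = n * r2
print(f"White LM statistic: {lm_stat:.2f} (chi2(2) 5% critical value ~5.99)")
```

Because the simulated errors are strongly heteroskedastic, the LM statistic comfortably exceeds the critical value and the null of homoskedasticity is rejected.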

In This Issue: 1. Learn about three common forecasting problems and how to fix them.

A structural break test examines whether the coefficients in different datasets are equal; it is most commonly used in time-series analysis to test for the presence of a structural break.

In This Issue: 1. Learn how to identify and model structural breaks to see if they are indeed statistically significant

The ROV BizStats tool is a very powerful and fast module in Risk Simulator that is used for running business statistics and analytical models on your data. It covers more than 130 business statistics and analytical models (Figures 1 through 4). The following provides a few quick getting started steps on running the module and details on each of the elements in the software.

In This Issue: 1. Learn how to quickly get started using the ROV BizStats module.

Many algorithms exist to run optimization and many different procedures exist when optimization is coupled with Monte Carlo simulation. In Risk Simulator, there are three distinct optimization procedures and optimization types as well as different decision variable types. For instance, Risk Simulator can handle Continuous Decision Variables (1.2535, 0.2215, and so forth), Integer Decision Variables (e.g., 1, 2, 3, 4 or 1.5, 2.5, 3.5, and so forth), Binary Decision Variables (1 and 0 for go and no-go decisions), and Mixed Decision Variables (both integers and continuous variables). On top of that, Risk Simulator can handle Linear
Optimization (i.e., when both the objective and constraints are all linear equations and functions) and Nonlinear Optimizations (i.e., when the objective and constraints are a mixture of linear and nonlinear functions and equations).
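As a sketch of linear optimization with continuous decision variables, solved here with SciPy rather than Risk Simulator (the objective and constraints are invented for illustration):

```python
from scipy.optimize import linprog

# Hypothetical problem: maximize profit 3a + 2b subject to
# a + b <= 40 and 2a + b <= 60, with a, b >= 0 (continuous variables).
# linprog minimizes, so the objective is negated.
res = linprog(
    c=[-3.0, -2.0],
    A_ub=[[1.0, 1.0], [2.0, 1.0]],
    b_ub=[40.0, 60.0],
    bounds=[(0, None), (0, None)],
)
a_opt, b_opt = res.x
print(f"Optimal a={a_opt:.1f}, b={b_opt:.1f}, profit={-res.fun:.1f}")
```

Both the objective and all constraints are linear, which is exactly the Linear Optimization case described above; nonlinear objectives or constraints would require a nonlinear solver instead.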

In This Issue:
1. Explore Risk Simulator’s continuous optimization processes.
2. Learn how to use Risk Simulator to perform true stochastic optimizations.


In This Issue:
1. Gain an understanding of discrete optimization.
2. Learn about dynamic and stochastic optimization and efficient frontier.


In This Issue:
1. Gain an understanding of stochastic optimization.
2. Learn about discrete and dynamic optimization and efficient frontier.

In This Issue: “What Is a Markowitz Efficient Frontier for Investment Decisions?”
1. Gain an understanding of investment efficient frontier development using optimization.
2. Learn about discrete and dynamic optimization and efficient frontier.

Genetic Algorithms belong to the larger class of evolutionary algorithms that generate solutions to optimization and search problems using techniques inspired by natural evolution, such as inheritance, mutation, selection, and crossover. Additionally, the Goal Seek tool is a search algorithm applied to find the solution of a single variable within a model.
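A goal seek on a single variable can be sketched as a bisection search; the model function and target below are hypothetical, not from the source:

```python
# Hypothetical single-variable model whose output we want to hit a target
def model(x: float) -> float:
    return x**3 + 2.0 * x - 5.0

def goal_seek(f, target, lo, hi, tol=1e-10):
    """Bisection search for x in [lo, hi] with f(x) == target (f monotone)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Find the input that drives the model output to zero
x = goal_seek(model, target=0.0, lo=0.0, hi=5.0)
```

Bisection assumes the model is monotone over the search interval; a genetic algorithm drops that assumption at the cost of much more computation.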

In This Issue:
1. Gain an understanding of the basics of genetic algorithm for optimization.
2. Learn about goal seek and single variable optimization.

In This Issue:
1. Explore Risk Simulator’s Decision Tree module
2. Learn about running Monte Carlo risk simulation on a decision tree
3. Find out what the Bayesian analysis tool is used for
4. Learn how to compute the Expected Value of Perfect Information (EVPI), MINIMAX and MAXIMIN Analysis, as well as the Risk Profile and the Value of Imperfect Information
5. See how sensitivity charts and scenario tables can be useful
6. Learn how to generate utility functions

In This Issue
1. Learn the basics of a box plot or box-and-whisker plot.
2. Create a visual representation of outliers.
3. Run box plots in Risk Simulator’s ROV BizStats.
4. Compute confidence intervals using interquartile ranges.

In This Issue: “How do you visually identify outliers in your data and how dispersed is your data?”
1. Learn the basics of a Pareto Chart.
2. Create a visual representation of the 80-20 rule using a Pareto Chart.
3. Compare the similarities and differences among Pareto charts, sensitivity charts, and Tornado analysis


This newsletter briefly explains the probability density function (PDF) for continuous distributions, which is also called the probability mass function (PMF) for discrete distributions (we use these terms interchangeably), where given some distribution and its parameters, we can determine the probability of occurrence given some outcome or random variable x. In addition, the cumulative distribution function (CDF) can also be computed, which is the sum of the PDF values up to this x value. Finally, the inverse cumulative distribution function (ICDF) is used to compute the value x given the cumulative probability of occurrence.

In This Issue: 1. Learn the basics of PDF, CDF, and ICDF, 2. Learn how to read and interpret the CDF or S-Curve based on its four moments.