Model Reports: Evaluating Models
Up to three models can be evaluated at any time by dragging a variable onto the drop panel. The following types of model can be evaluated: PWE models, Decision Tree models and selector variables.
The purpose of a predictive model is to identify the records that are most likely to be part of a target group.
Selections
The evaluation of a model is based on the proportion of the target group (analysis selection) in each model category. A good model will have some categories that contain a high proportion of the target group and some that contain a low proportion. The contrast between the categories is reflected in the "power" of the model.
The base selection determines which records are evaluated.
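As a hedged illustration of this evaluation, the Python sketch below counts the proportion of the target group in each model category across a base selection. The records, category names and the simple contrast measure are invented for the example; they do not reproduce the product's actual power statistic.

    from collections import Counter

    # Hypothetical scored records from the base selection:
    # (model_category, is_in_analysis_selection)
    records = [
        ("A", True), ("A", True), ("A", False),
        ("B", True), ("B", False), ("B", False),
        ("C", False), ("C", False), ("C", False),
    ]

    totals = Counter(category for category, _ in records)
    hits = Counter(category for category, in_target in records if in_target)

    # Proportion of the target group (analysis selection) in each category.
    proportions = {c: hits[c] / totals[c] for c in totals}

    # A good model separates the categories: some proportions are high and
    # some are low. One crude contrast measure is the spread between them.
    contrast = max(proportions.values()) - min(proportions.values())

    for category in sorted(proportions):
        print(f"category {category}: {proportions[category]:.2f}")
    print(f"contrast: {contrast:.2f}")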
All the models are evaluated using the same analysis and base selections. These selections can be viewed or modified by clicking the Selections button at the top of the model report window, which opens the selections control.
Right-click a selection and choose "View a Copy". You can edit this copy, but you must drag the new version back onto the selections control for the changes to take effect.
Setting Selections
When you evaluate a PWE model or a Decision Tree model, the analysis and base selections used to create it are retrieved automatically and applied to the model report. Once the model has been dragged on, you can open the selections control to see the selections that were used to build it.
You will receive a warning if you add a PWE model or a Decision Tree model that was created with selections that differ from those currently used by the model report.
The first warning gives you the chance to cancel adding the inconsistent model variable. If you answer "No", the model report remains unchanged. Using inconsistent model variables is allowed, but you need to be aware of the issues in doing so (see Issues in Using Models with Different Selections below).
If you say "Yes" the inconsistent model variable will be added to the report. You will now be asked to specify whether you wish to use the selections associated with the new variable. If you say "Yes" the current model report selections will be changed to reflect those used in creating the model that is being added.
If you add a selector variable you must specify the analysis and base selections to use in the evaluation. This can be done either by using the selections control or by adding a PWE model or a Decision Tree model (which sets the analysis and base selections automatically).
Issues in Using Models with Different Selections
There are situations where you may wish to evaluate a model that was created with a different analysis or base selection. For example:
- You may build a model using a base selection of the whole country and then want to evaluate how well the model performs for a base selection of just one region (a different base selection).
- You may build a model using an analysis selection of "people who have travelled to the U.S." and want to see how well it performs at predicting "people who have travelled to Europe" (a different analysis selection).
However, you need to consider the following:
- All the records in the base selection used in the model report need to be scored for the model to be evaluated effectively. For example, if a model was rolled out to score people in just one region (say the Midlands), then all records outside the Midlands would have an unclassified score. This would distort a model report set up with a base selection of the whole country: a large proportion of the records would fall in the unclassified category (see the sketch after this list).
- The analysis selection needs to be applicable to all the models being evaluated. For example, if the analysis selection is "High spending customers", you will not be able to evaluate a base selection of prospects, since none of them will be in the analysis selection.
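The first point can be made concrete with a short sketch. The regions and record counts below are invented for illustration; the point is simply that records outside the model's rollout region receive no score and swamp the report's unclassified category.

    from collections import Counter

    # Hypothetical base selection covering the whole country.
    base = ["Midlands"] * 200 + ["North"] * 400 + ["South"] * 400

    def score(region):
        # The model was rolled out to score Midlands records only;
        # every other record receives no score (unclassified).
        return "scored" if region == "Midlands" else "unclassified"

    counts = Counter(score(region) for region in base)
    print(counts)  # e.g. Counter({'unclassified': 800, 'scored': 200})
    print(f"{counts['unclassified'] / len(base):.0%} of records fall in the unclassified category")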