to changes in risk adjudication.
● In-depth knowledge of actuarial practices.
Enterprise risk staff are usually advised when new strategies change the risk profile of the company's portfolio. Increasing or decreasing risk levels affect the amount of capital a company needs to allocate. Taking on significant additional risk may also contravene the company's stated risk profile target, and may potentially affect its own credit rating. Enterprise risk staff will ensure that all strategies comply with corporate risk guidelines, and that the company is sufficiently capitalized for its risk profile.
Legal Staff/Compliance Manager
Credit granting in most jurisdictions is subject to laws and regulations that determine the methods that can be used to assess creditworthiness, credit limits, and the characteristics that cannot be used in this effort. A good practice is to submit a list of proposed segmentation and scorecard characteristics to the legal department, to ensure that none of them is in contravention of existing laws and regulations. In the United States, for example, issues arising from the Equal Credit Opportunity Act,14 Fair Housing Act,15 Dodd-Frank,16 Regulation B,17 as well as “adverse” and “disparate” impact are all areas that need to be considered during scorecard development and usage.
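Part of this pre-screen can be automated before the list goes to legal. The sketch below is a minimal illustration in Python; the prohibited-attribute list and characteristic names are hypothetical and not legal guidance, since the actual restrictions depend on jurisdiction and product.

```python
# Hypothetical pre-screen of proposed scorecard characteristics against a
# list of attributes flagged as prohibited by legal/compliance.
# All names are illustrative only -- the real list comes from counsel.
PROHIBITED = {"race", "color", "religion", "national_origin", "sex",
              "marital_status"}

proposed_characteristics = [
    "utilization", "inquiries_last_6_months", "marital_status",
    "time_at_address",
]

flagged = [c for c in proposed_characteristics if c in PROHIBITED]
cleared = [c for c in proposed_characteristics if c not in PROHIBITED]

print("Flag for legal review (likely prohibited):", flagged)
print("Proceed, pending legal signoff:", cleared)
```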
Intelligent Scorecard Development
Involving these resources in the scorecard development and implementation project helps to incorporate collective organizational knowledge and experience, prevents delays, and produces scorecards that are more likely to fulfill business requirements. Most of this corporate intelligence is not documented; therefore, the only effective way to introduce it into credit scoring is to involve the relevant resources in the development and implementation process itself. This is the basis for intelligent scorecard development.
Note
Bearing in mind that different companies may have differing titles for similar functions, the preceding material is meant to reflect the typical parties needed to ensure that a developed scorecard is well balanced, with considerations from different stakeholders in a company. Actual participants may vary.
Scorecard Development and Implementation Process: Overview
When the appropriate participants have been selected to develop a scorecard, it is helpful to review the main stages of the scorecard development and implementation process, and to be sure that you understand the tasks associated with each stage. The following list describes the main stages and tasks. Detailed descriptions of each stage are in the chapters that follow. The following table also summarizes the output of each stage, whether signoff is recommended, and which team members should sign off. Note that while the chapter recommends getting advice from those in the marketing or product management areas, they are not involved in any signoff. The abbreviations used in the Participants columns are:
MD: Model development, usually represented by the head of the model development team.
RM: Risk management, usually the portfolio risk/policy manager or end user on the business side.
MV: Model validation or vetting, usually those responsible for overseeing the process.
IT: Information technology or equivalent function responsible for implementing the models.
The following stages cover post-development work on strategy development, and are usually handled by the business risk management function.
STAGE 6. SCORECARD IMPLEMENTATION
● Scoring strategy
● Setting cutoffs
● Policy rules
● Override policy
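To make these elements concrete, the following is a minimal sketch of how a cutoff, policy rules, and an override path might combine at decision time. All thresholds, rule names, and the decision flow are hypothetical, not a prescribed strategy.

```python
# Hypothetical scoring strategy: policy rules first, then a score cutoff.
# Referred cases go to a manual queue where the override policy applies.
CUTOFF = 200  # hypothetical approve/decline cutoff score

def adjudicate(score: int, months_on_bureau: int, recent_bankruptcy: bool) -> str:
    # Policy rules apply regardless of score
    if recent_bankruptcy:
        return "decline (policy rule)"
    if months_on_bureau < 6:
        return "refer (insufficient history; subject to override policy)"
    # Score-based cutoff decision
    return "approve" if score >= CUTOFF else "decline"

print(adjudicate(score=215, months_on_bureau=24, recent_bankruptcy=False))
# -> approve
```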
STAGE 7. POST-IMPLEMENTATION
● Scorecard and portfolio monitoring reports
● Scorecard management reports
● Portfolio performance reports
The preceding stages are not exhaustive – they represent the major stages where key output is produced, discussed, and signed off. The signoff process, which encourages teamwork and early identification of problems, will be discussed in the next chapter. Involvement by the Model Vetting/Validation unit depends on the model audit policies of each individual bank as well as expectations from regulators.
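As an example of the monitoring reports listed under Stage 7, scorecard stability is commonly tracked by comparing the recent score distribution against the development sample using a population stability index (PSI). The sketch below uses hypothetical score-band distributions and the common rule-of-thumb thresholds; it illustrates the calculation, not a prescribed report format.

```python
import numpy as np

def population_stability_index(expected_pct, actual_pct):
    """PSI = sum((actual% - expected%) * ln(actual% / expected%)) over bands."""
    expected = np.clip(np.asarray(expected_pct, dtype=float), 1e-6, None)
    actual = np.clip(np.asarray(actual_pct, dtype=float), 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

# Hypothetical proportions of accounts in six score bands
development_dist = [0.10, 0.20, 0.25, 0.20, 0.15, 0.10]
recent_dist      = [0.08, 0.18, 0.24, 0.22, 0.17, 0.11]

psi = population_stability_index(development_dist, recent_dist)
# Rule of thumb: < 0.10 stable, 0.10-0.25 some shift, > 0.25 significant shift
print(f"PSI = {psi:.4f}")
```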
Chapter 3
Designing the Infrastructure for Scorecard Development
As more banks around the world realize the value of analytics and credit scoring, we see a correspondingly high level of interest in setting up analytics and modeling disciplines in-house. This is where some planning and long-term vision is needed. Many banks hired well-qualified modelers and bought high-powered data mining software, thinking that their staff would soon be churning out models at a regular clip. For many of them, this did not materialize. Models and analytics took just as long to produce as before, or significantly longer than expected. The problem was not that their staff didn’t know how to build models, or that the model fitting was taking too long. It was the fact that the actual modeling is the easiest and sometimes fastest part of the entire data mining process. The major problems, which were not addressed, lay in all the other activities before and after the modeling. Problems with accessing data, data cleansing, getting business buy-in, model validation, documentation, producing audit reports, implementation, and other operational issues made the entire process slow and difficult.
In this chapter, we look at the most common problems organizations face when setting up infrastructure for analytics and suggest ways to reduce problems through better design.
The discussion in this chapter will be limited to the tasks involved in building, using, and monitoring scorecards. Exhibit 3.1 is a simplified example of the end-to-end tasks that would take place during scorecard development projects. These are not as exhaustive as the tasks that will be covered in the rest of the book, but serve only to illustrate points associated with creating an infrastructure to facilitate the entire process.
Exhibit 3.1 Major Tasks during Scorecard Development
Based on the most common problems lending institutions face when building scorecards, we suggest considering the following main issues when designing an architecture to enable analytics:
● One version of the truth. Two people asking the same question, or repeating the same exercise, should get the same answer. One way to achieve this is by sharing and reusing, for example, data sources, data extraction logic, conditions such as filters and segmentation logic, models, parameters, and variables, including the logic for derived ones (see the first sketch following this list).
● Transparency and audit. Given the low level of regulatory tolerance for black-box models and processes, everything from the creation of data to the analytics, deployment, and reporting should be transparent. Anyone who needs to see details of each phase of the development process should be able to do so easily. For example, how data is transformed to create aggregated and derived variables, the parameters chosen for model fitting, how variables entered the model, validation details, and other parameters should preferably be stored in graphical user interface (GUI) format for review. Although all of the above can be done through coding, auditing of code is somewhat more complex. In addition, one should also be able to produce an unbroken audit chain across all the tasks shown in Exhibit 3.1 – from the point where data is created in source systems, through all the data transformations, to the final models and reports (the second sketch following this list illustrates one way to record such a chain).
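The first sketch below illustrates the “one version of the truth” principle: a single shared module that development, scoring, and reporting could all import, so that segmentation and derived-variable logic exist in exactly one place. The module, names, and rules are hypothetical illustrations, not a prescribed design.

```python
# shared_definitions.py -- hypothetical central module reused by model
# development, scoring, and reporting, so everyone applies identical logic.

EXCLUSION_FILTER = "status in ('GOOD', 'BAD')"  # single filter definition

def segment(trade_lines: int) -> str:
    """Single source of the segmentation logic."""
    return "thin_file" if trade_lines < 3 else "thick_file"

def utilization(balance: float, credit_limit: float) -> float:
    """Single source of a derived variable's logic."""
    return balance / credit_limit if credit_limit > 0 else 0.0
```

The second sketch shows one way to support the audit requirement: writing an append-only record of what went into each model fit, so a reviewer can trace decisions without re-reading the fitting code. The fields and file format are assumptions for illustration.

```python
import json
import datetime

# Hypothetical audit record captured at model-fit time
audit_record = {
    "model_id": "APP_SCORECARD_V3",
    "fitted_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "training_extract": "apps_2015Q1_2016Q4",
    "segment_filter": "trade_lines >= 3",
    "derived_variables": {"utilization": "balance / credit_limit (0 if no limit)"},
    "fit_parameters": {"method": "logistic_regression", "selection": "stepwise"},
    "variables_in_model": ["utilization", "age_oldest_trade", "inq_last_6m"],
}

# An append-only log preserves an unbroken, reviewable chain of decisions
with open("model_audit_log.jsonl", "a") as log:
    log.write(json.dumps(audit_record) + "\n")
```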
14.
15.
16. https://www.congress.gov/bill/111th-congress/house-bill/4173
17. https://www.federalreserve.gov/boarddocs/supmanual/cch/fair_lend_reg_b.pdf