Wiley Practitioner's Guide to GAAS 2020. Joanne M. Flood

      Extent of Tests

Factor to consider | Increase number of tests | Decrease number of tests
How frequently the control procedure is performed | Procedure performed often (e.g., daily) | Procedure performed occasionally (e.g., once a month)
Importance of control | Important control (e.g., control addresses multiple assertions or is a period-end detective control) | Less important control
Degree of judgment required to perform the control | High degree of judgment | Low degree of judgment
Complexity of control procedure | Relatively complex control procedure | Relatively simple control procedure
Level of competence of the person performing the control procedure | Highly competent | Less competent

      When determining the extent of tests, you also should consider whether the control is manual or automated. When a control is performed manually, the consistency with which that control is performed can vary greatly. In contrast, once a control is automated, it is performed the same way every time. For that reason, you should plan to perform more extensive tests of manual controls than of automated controls.

      In some circumstances, testing a single operation of an automated control may be sufficient to obtain a high level of assurance that the control operated effectively, provided that IT general controls operated effectively throughout the period.

      Sample Sizes for Tests of Transactions

      You do not have to test every performance of a control to draw a valid conclusion about the operating effectiveness of the control. For example, suppose that one of the controls a manufacturing company performs in its revenue cycle is to match the shipping report to the customer’s invoice to make sure that the customer was billed for the right number of items and the revenue was recorded in the proper period. Over the course of a year, the company has thousands of shipments. How many of those should be tested to draw a conclusion?

      Statistical Sampling Principles

      You do not have to use statistical sampling to determine your sample size, but it helps to apply the basic principles of statistical sampling theory. In a nutshell, the size of your sample is driven by three variables:

      1 Confidence level. This variable has to do with how confident you are in your conclusion. If you want to be very confident that you reached the correct conclusion (say, 95% confident), then your sample size will be larger than if you want a lower confidence level (say, 60%).

      2 Tolerable rate of error. This variable addresses the issue of how many deviations in the performance of the control would be acceptable for you to still conclude that the control is operating effectively. If you can accept a high rate of error (the procedure is performed incorrectly 20% of the time), then your sample size can be smaller than if you can accept only a slight rate of error (the procedure is performed incorrectly only 2% of the time).

      3 Expected error rate of the population. This variable has to do with your expectation of the true error rate in the population. Do you think that the control procedure was performed correctly every single time it was performed (0% deviation rate), or do you think that a few errors might have been made? The lower the expected error rate, the lower the sample size.

      Note that the size of the population does not affect the sample size unless it is very small (e.g., when a control procedure is performed only once a month, in which case the population consists of only 12 items).
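      To see how these three variables interact, here is a minimal Python sketch of attribute (binomial) sampling, a common statistical model for tests of controls. The function name and the parameter values in the examples are illustrative assumptions, not figures prescribed by any standard; the point is only to show the direction in which each variable moves the sample size.

import math

def attribute_sample_size(confidence, tolerable_rate, expected_rate=0.0, max_n=5000):
    """Smallest sample size n such that finding no more than the expected number of
    deviations still supports concluding, at the stated confidence level, that the
    true deviation rate does not exceed the tolerable rate. Population size is
    ignored, which is appropriate for all but very small populations."""
    risk = 1.0 - confidence  # acceptable risk of relying on an ineffective control
    for n in range(1, max_n + 1):
        allowed = math.ceil(expected_rate * n)  # deviations expected in a sample of n
        # Probability of observing <= allowed deviations if the true deviation
        # rate were exactly equal to the tolerable rate.
        prob = sum(math.comb(n, i) * tolerable_rate**i * (1 - tolerable_rate)**(n - i)
                   for i in range(allowed + 1))
        if prob <= risk:
            return n
    raise ValueError("no sample size found; relax the planning assumptions")

# 95% confidence, 5% tolerable rate, 0% expected rate: roughly 59 items
print(attribute_sample_size(0.95, 0.05))
# Dropping the confidence level to 90% shrinks the sample to roughly 45 items
print(attribute_sample_size(0.90, 0.05))
# Raising the tolerable rate to 10% shrinks it further, to roughly 29 items
print(attribute_sample_size(0.95, 0.10))

      Consistent with the note above, the size of the population never enters the calculation; only the three planning assumptions do.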

      In practice, most companies have chosen sample sizes for tests of transactions that range from 20 items to 60 items. It is common for independent auditors to offer some guidance on sample sizes.

      Be careful about simply accepting sample sizes without questioning the underlying assumptions for the three variables just listed. In reviewing these assumptions, you should ask:

       Am I comfortable with the assumed confidence level? Given the importance of the control and other considerations, do I need a higher level of confidence (which would result in testing more items), or is the assumed level sufficient?

       Is the tolerable rate of error acceptable? Can I accept that percentage of errors in the application of the control procedure and still conclude that the control is operating effectively?

       Is the expected population deviation rate greater than 0%? Some sample sizes are determined using the assumption that the expected population deviation rate is 0%. Although this assumption reduces the initial sample size, if a deviation is discovered, the sample size must be increased to reach the same conclusion about control effectiveness. Unless you have a strong basis for assuming a population deviation rate of 0%, you should assume that the population contains some errors. That assumption will increase your initial sample size, but it is usually more efficient to start with a slightly higher sample size rather than increasing sample sizes subsequently as deviations are discovered.
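      As a rough numeric illustration of that last tradeoff, assume (purely for illustration) a 95% confidence level and a 5% tolerable rate of error, and apply the same binomial model sketched earlier:

import math

confidence, tolerable = 0.95, 0.05  # illustrative planning assumptions

# Plan for a 0% expected deviation rate: the smallest n with (1 - tolerable)^n <= 1 - confidence
n_zero = math.ceil(math.log(1 - confidence) / math.log(1 - tolerable))
print(n_zero)  # roughly 59 items, but the plan has no room for even one deviation

# Plan for one deviation in the sample: the smallest n such that the probability of seeing
# 0 or 1 deviations is <= 1 - confidence when the true rate equals the tolerable rate
n_one = 1
while (1 - tolerable) ** n_one + n_one * tolerable * (1 - tolerable) ** (n_one - 1) > 1 - confidence:
    n_one += 1
print(n_one)  # roughly 93 items, but the conclusion survives a single deviation

      The exact figures depend on the assumptions chosen; the point is the direction of the tradeoff. A 0% expected rate buys a smaller initial sample at the cost of having no margin for deviations, while planning for some deviations costs more items up front and avoids expanding the sample later.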

      Sample Sizes for Tests of Other Controls

Frequency of control performance | Typical sample sizes
Annually | 1
Quarterly | 2 or 3
Monthly | 2 to 6
Weekly | 5 to 15

      Inquiry and Focus Groups

      Formal inquiries of entity personnel—either individually or as part of a focus group—can be a reliable source of evidence about the operating effectiveness of application-level controls. Inquiries can serve two main purposes:

      1 To confirm your understanding of the design of the control (what should happen).

      2 To identify exceptions to the entity’s stated control procedures (what really happens).

      Confirming control design. Typically, this process consists primarily of a review of documentation (such as policies and procedures manuals) and limited inquiries of high-level individuals or those in the accounting department.

