By Maribeth Vander Weele

Designing a Risk Assessment

While the Uniform Guidance requires pass-through entities to develop risk assessments for subrecipient monitoring, it offers only general guidance on risk assessment methods. It is silent on sampling methodologies, such as criteria for sampling, sample size, or methods for stratifying operating units within a subrecipient organization. Ultimately, the risk assessment methodology is left to the professional judgment of the pass-through entity.


The Every Student Succeeds Act of 2015, one of many statutes that govern the programmatic aspects of grants monitoring, takes the same approach. It contains multiple references to required monitoring, but it does not prescribe a risk assessment or sampling methodology either.


So given that flexibility, how might a risk assessment be designed?


First, given that a risk assessment is required, it’s evident that it should be in writing so auditors can confirm its existence.


Second, dividing risks into categories helps ensure that the assessment is not lopsided toward the financial, compliance, or programmatic side. Based on the Uniform Guidance’s definition of monitoring, types of risk might include:

  • Risk of non-compliance with Federal statutes and regulations.

  • Risk of non-compliance with the terms and conditions of the subaward.

  • Performance risk, i.e., the risk that subaward performance goals are not achieved.

When it comes to taxpayer funds, another well-known category is the risk of fraud, waste, and abuse, which can be described as follows:

  • Control risk, specifically the risk of fraud, waste, and abuse, as gauged through testing of internal controls and testing for unallowable expenditures.

Next comes the assignment of specific risk factors, and both the Uniform Guidance and the audit profession offer ideas in this regard. Under 2 CFR §200.519, which sets criteria for evaluating federal program risk for audit purposes, these include:

  • consideration of the internal control environment

  • whether there are multiple internal control structures

  • the systems for monitoring

  • prior audit findings

  • recent monitoring or other reviews that disclosed no significant problems

  • the complexity of the program, and

  • the types of expenditures.

Other criteria for risk assessment under audit methodologies include:

  • significant changes in governing standards, such as statutes and regulations

  • the phase of the program

  • the size of the federal award, and

  • whether the program is well-established.

When single audits are performed annually and the auditors cite no material financial statement or internal control weaknesses under the requirements of Generally Accepted Government Auditing Standards (“GAGAS”), also known as the “Yellow Book,” an entity is considered lower risk. An additional consideration is whether the auditor expressed substantial doubt about the auditee’s ability to continue as a going concern.


But none of these can be measured unless the information is documented and available. The monitoring team may therefore need to rely on objective, available data and consider risk factors that directly affect whether the purposes of the grant are being achieved. For a school district, examples include whether the district:

  • Has an embedded Internal Audit unit

  • Has an Inspector General's office

  • Has an experienced and substantial financial team in place

  • Had a Single Audit with material weaknesses

  • Has a centralized financial system for all schools

  • Has a functional Audit Committee

  • Meets its goals in academic achievement, student attendance, high school graduations, and so forth.

For a school, programmatic examples—some of which speak to management effectiveness—include:

  • Principal turnover

  • Student achievement scores

  • Teacher attendance

  • Student attendance

  • Chronic truancy

  • Teacher retention

  • Parent involvement

After risk indicators are identified, each should be assigned a value or weight, recommends the U.S. Department of Education. The Department states that creating a risk framework ensures consistency in reviews and includes the following steps (a brief scoring sketch follows the list):

  1. Identify appropriate risk indicators and assign each a value or weight.

  2. Evaluate and rank subrecipients and programs based on relative risk.

  3. Identify available monitoring resources and staff, and weigh them against monitoring needs.

  4. Adjust the monitoring plan, including monitoring activities and schedule, based on the risk and resource assessments.
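
To make steps 1 and 2 concrete, here is a minimal sketch, in Python, of one way a weighted scoring model might be implemented. The indicator names, weights, and district data are hypothetical illustrations drawn from the examples above; neither the Uniform Guidance nor the Department prescribes any particular formula.

```python
# Hypothetical weighted risk-scoring sketch for subrecipient monitoring.
# All indicators, weights, and district data below are illustrative only.

# Step 1: identify risk indicators and assign each a value or weight.
# A higher weight means the indicator contributes more to the risk score.
WEIGHTS = {
    "no_internal_audit_unit": 3,          # lacks an embedded Internal Audit unit
    "single_audit_material_weakness": 5,  # prior Single Audit cited material weaknesses
    "decentralized_financials": 2,        # no centralized financial system for all schools
    "missed_academic_goals": 4,           # did not meet academic achievement goals
}

def risk_score(flags):
    """Sum the weights of every risk indicator flagged True for a subrecipient."""
    return sum(WEIGHTS[name] for name, present in flags.items() if present)

# Step 2: evaluate and rank subrecipients based on relative risk.
subrecipients = {
    "District A": {"no_internal_audit_unit": True,
                   "single_audit_material_weakness": False,
                   "decentralized_financials": True,
                   "missed_academic_goals": False},
    "District B": {"no_internal_audit_unit": False,
                   "single_audit_material_weakness": True,
                   "decentralized_financials": False,
                   "missed_academic_goals": True},
}

ranked = sorted(subrecipients, key=lambda d: risk_score(subrecipients[d]), reverse=True)
for district in ranked:
    print(district, risk_score(subrecipients[district]))
```

Running the sketch ranks District B (score 9) above District A (score 5), illustrating how steps 3 and 4 would then direct limited monitoring resources toward the higher-scoring subrecipients.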

This last step is one more indication that pass-through entities are accorded a high level of flexibility in designing risk- and resource-based monitoring programs. And while risk frameworks are important, they should be designed by the program managers who know the programs best.
