RIGOROUS TESTING PROMOTES STRONG PERFORMANCE IN GOVERNMENT AGENCY COMPLIANCE OFFICERS

CHALLENGE
A government agency responsible for ensuring that contractors comply with government requirements wanted to demonstrate that its compliance officers (COs) provided reliable information to contractors and applied those requirements consistently across all cases.

The COs’ role is to audit government contractors and answer their questions about how best to remain in compliance with nondiscrimination laws and regulations. After an OPM report showed differences in how COs applied requirements during audits, as well as inconsistent answers from COs to similar contractor questions, the agency chose Cynuria to create and validate tests through which COs could demonstrate capability in these areas.

SOLUTION

Cynuria’s industrial-organizational (I/O) psychology team specializes in assessment development and validation. The government agency chose us based on previous successful work for this group and on the strong partnership our I/O psychologists and instructional systems designers (ISDs) had built with the program office implementing the OPM report findings. We quickly assessed the situation and recommended solutions that would satisfy the requirement to show that the COs had reached performance expectations.

Step 1: Build an understanding of job-critical knowledge, skills, and abilities 

Cynuria began by reviewing the CO position’s job analysis to understand the knowledge, skills, and abilities (KSAs) required in the role. We conducted structured conversations with COs and their managers to understand what success looks like in this work and how important each task is to the role. Our I/O psychologists’ extensive experience performing and reviewing job analyses enabled them to quickly identify the critical elements of the role, categorize each, and weight its criticality.

Step 2: Design a matrix of test content reflecting job KSAs

Based on this thorough understanding of the job requirements, our team created a matrix of the required KSAs, their criticality, and their performance measures to clearly describe what kinds of questions would appear in a valid test for this job. We validated this matrix with CO management to confirm that the specific kinds of questions were appropriate to the work the job requires. This process confirmed that the test questions would appropriately assess the COs’ knowledge, and that question topics would appear in proportions matching how important each KSA is and how frequently it is required in the work.
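
As a rough illustration of how a blueprint like this can drive test assembly, the Python sketch below allocates a fixed number of items across topics in proportion to criticality weights, using largest-remainder rounding so the counts sum exactly to the test length. The topic names, weights, and test length here are hypothetical, invented for the example; they are not the agency’s actual matrix.

```python
# Allocate test items across KSA topics in proportion to criticality weights.
# Topic names, weights, and test length are hypothetical, for illustration only.

TOTAL_ITEMS = 60

blueprint = {
    "Regulatory knowledge": 0.30,
    "Audit procedures":     0.25,
    "Case documentation":   0.20,
    "Contractor guidance":  0.15,
    "Recordkeeping":        0.10,
}

def allocate_items(weights: dict[str, float], total: int) -> dict[str, int]:
    """Largest-remainder allocation so item counts sum exactly to `total`."""
    raw = {topic: w * total for topic, w in weights.items()}
    counts = {topic: int(r) for topic, r in raw.items()}
    shortfall = total - sum(counts.values())
    # Give leftover items to the topics with the largest fractional remainders.
    for topic in sorted(raw, key=lambda t: raw[t] - counts[t], reverse=True)[:shortfall]:
        counts[topic] += 1
    return counts

if __name__ == "__main__":
    for topic, n in allocate_items(blueprint, TOTAL_ITEMS).items():
        print(f"{topic}: {n} items")
```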

Step 3: Develop test items that effectively demonstrate understanding

Based on the confirmed proportions of valid test topics, our combined team of I/O psychologists, instructional designers/test developers, and client SMEs developed test items at levels of difficulty appropriate for demonstrating candidates’ levels of understanding. This included wording questions clearly but in a way that challenged candidates to interpret the requirement correctly, and writing distractors precise enough to reveal incomplete understanding.

Step 4: Pilot-test and validate the assessment

Using the newly developed items, Cynuria pilot-tested the questions and answer choices with COs from multiple regions across the organization to confirm their reliability, clarity, readability, and face validity. Every pilot tester saw every item, but both the item sequence and the order of the answer options were randomized for each tester. Pilot testing showed a 79% average score with a standard deviation of 0.07 across the six sections of the exam. We also confirmed that items with a higher degree of difficulty were answered correctly less often than those with an intermediate degree of difficulty. Based on these results, we edited some items to improve all of the tested qualities, focusing on items whose results did not correlate with an overall passing score. We re-delivered the adjusted items to testers and confirmed the improvement.
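
The Python sketch below illustrates, on fabricated data, the kinds of pilot analyses described above: randomizing item order per tester, computing classical item difficulty (the proportion of testers answering correctly), and computing each item’s correlation with the rest of the test, the statistic typically used to flag items that do not track an overall passing score. The response matrix is made up for illustration and is not the agency’s pilot data.

```python
import random
import statistics

# Fabricated 0/1 response matrix: rows are pilot testers, columns are items.
# Illustrative only -- not the agency's pilot data.
responses = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 1],
    [1, 1, 1, 1, 0],
    [0, 1, 0, 1, 1],
    [1, 1, 0, 0, 1],
]

def presentation_order(n_items: int, seed: int) -> list[int]:
    """Every tester sees all items, but in a different randomized sequence."""
    order = list(range(n_items))
    random.Random(seed).shuffle(order)
    return order

def item_difficulty(matrix: list[list[int]]) -> list[float]:
    """Classical difficulty: proportion of testers answering each item correctly."""
    n = len(matrix)
    return [sum(row[j] for row in matrix) / n for j in range(len(matrix[0]))]

def corrected_item_total(matrix: list[list[int]], item: int) -> float:
    """Correlation between one item and the total score on the remaining items."""
    item_scores = [row[item] for row in matrix]
    rest_scores = [sum(row) - row[item] for row in matrix]  # avoid part-whole inflation
    return statistics.correlation(item_scores, rest_scores)

if __name__ == "__main__":
    print("tester 0 sees items in order:", presentation_order(len(responses[0]), seed=0))
    for j, p in enumerate(item_difficulty(responses)):
        r = corrected_item_total(responses, j)
        print(f"item {j}: difficulty={p:.2f}, corrected item-total r={r:+.2f}")
```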

The final version of the test was administered to all COs and their supervisors as a developmental exercise. This confirmed the post-pilot improvements and the overall reliability of the assessment, establishing that the test items are statistically sound in discriminating between COs who understand the material and those who do not. Further analysis demonstrated that the test is fair to racial minorities and women.
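
Two statistics commonly used for the reliability and fairness checks described here are KR-20, an internal-consistency coefficient for tests scored right/wrong, and the four-fifths (80%) adverse-impact ratio comparing pass rates across demographic groups. The Python sketch below computes both from made-up numbers; the formulas are standard, but the data and group labels are hypothetical.

```python
import statistics

# Fabricated 0/1 response matrix: rows are test takers, columns are items.
responses = [
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 1, 0],
]

def kr20(matrix: list[list[int]]) -> float:
    """Kuder-Richardson 20: internal-consistency reliability for 0/1 items."""
    k = len(matrix[0])                       # number of items
    n = len(matrix)
    p = [sum(row[j] for row in matrix) / n for j in range(k)]
    sum_pq = sum(pj * (1 - pj) for pj in p)  # per-item score variances
    total_var = statistics.pvariance([sum(row) for row in matrix])
    return (k / (k - 1)) * (1 - sum_pq / total_var)

def four_fifths_ratio(pass_rates: dict[str, float]) -> float:
    """Adverse-impact ratio: lowest group pass rate divided by the highest.

    By convention, a ratio below 0.80 flags potential adverse impact.
    """
    return min(pass_rates.values()) / max(pass_rates.values())

if __name__ == "__main__":
    print(f"KR-20 reliability: {kr20(responses):.2f}")
    # Hypothetical pass rates by group -- illustrative only.
    rates = {"group A": 0.82, "group B": 0.78}
    print(f"adverse-impact ratio: {four_fifths_ratio(rates):.2f}")
```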

RESULTS
Based on the successful test validation, the agency was able to report to OPM that it had developed a valid, defensible test. COs now provide consistent information to contractors and have a solid baseline from which to conduct contractor audits.