This was originally penned as a letter to the editor of the International Journal of Social Work Values & Ethics, Dr. Stephen Marson, upon his invitation, extended in the last volume, to submit.
Thank you for your offer to submit a letter to the editor about measurement bias and predictive bias in the exams produced by the Association of Social Work Boards (ASWB). As this is the social work values and ethics journal, I am going to confine my brief comments to the ethical problems in ASWB’s most recent Request for Proposals (RFP) entitled Regulatory Research Initiative to Advance Equity.
ASWB has publicly touted this RFP as an opportunity for social workers, regulators, and the concerned public to understand what issues are driving the vast disparities in exam scores that further privilege white, English-dominant, and younger social workers. Contrary to ASWB’s assertions, I will demonstrate that ASWB is abusing the research and regulatory process to prevent researchers from investigating psychometric flaws in the examinations.
Instead of analyzing the exam itself, they will finance empirical research that supports what ASWB already tells test-takers in their Candidate Handbook, drawing on their $40 million in net assets, $30 million endowment of stocks and bonds, 29% profit margin, $1 million in executive compensation (ProPublica, n.d.), and $10 million examination defense fund (ASWB, 2019).
ASWB works to ensure the fairness of each of its exam questions but acknowledges that there may be differences in exam performance outcomes for members of different demographic groups because exam performance is influenced by many factors external to the exams [emphasis added]. ASWB has committed to contributing to the conversation around diversity, equity, and inclusion by investing in a robust analysis of examination pass rate data.
(ASWB, 2023a, p. 12)
In the RFP, there is one funding area that supports researchers investigating biased examination scores. In a glaring breach of research ethics, ASWB suggests hypotheses that exculpate ASWB and match what they already tell test-takers. ASWB intends to use the research process to manufacture their foregone conclusions.
ASWB requires research on the variables associated with the licensing exam pass rate data to determine future steps and areas in need of continued research. Research proposals might address correlating external [emphasis added] variables that may influence the disparities in the licensing exam pass rate data. Such variables could include upstream [emphasis added] factors such as differences in education programs; considerations of intersectionality, including age, gender, race, health, socioeconomic status; and social determinants of health, including life experiences from early childhood to post-graduate.
(ASWB, 2023b, p. 4)
Although ASWB promises a “robust analysis of examination pass rate data,” only one of the twelve focus areas of the RFP will fund studies related to exam bias. Within that 1/12 of the RFP, ASWB explicitly suggests researchers’ hypotheses for them. The idea that early childhood experiences are more relevant than the psychometric functioning of the exam itself—i.e., the internal properties and multivariate functioning of the exam—is research and regulatory malpractice. Further, it is unethical for ASWB to attribute exam disparities to “many factors external to the exam” without evidence and seek to manifest that empirical reality through the grant process.
Crucially, only researchers who support ASWB’s hypotheses will have access to exam bias data. The most relevant data will only tell one story—factors external to the exam drive disparities—while leaving glaring internal issues within the exam unexplored and unfunded. This is a craven and unethical abuse of the research and grant funding process. While it is possible for a test maker like ASWB to overcome conflicts of interest and fund objective research into its own examinations, ASWB values their self-interest over the public interest.
To be clear, ASWB does not directly state that they will reject proposals examining psychometric functioning; however, their most recent misinformation-filled blog post about psychometrics makes their position clear. The post incorrectly describes the purpose and procedure of differential test functioning analysis. It cites the psychometric standards that require independent tests of item and test functioning, but then states that:
Although it is theoretically possible that DIF analyses may fail to identify some problematic items and small amounts of bias may accumulate to produce DTF, it is very unlikely that practically important DTF will result, because there is often high power to detect small magnitudes of DIF, and DIF does not typically favor one examinee group consistently
(ASWB, 2023c, para. 4)
Do ASWB exams fall into the typical case, in which DIF does not favor one group over another? Or are ASWB exams systematically biased in favor of some groups? ASWB ignores these questions and only looks for bias item-by-item (DIF), thus ignoring patterns that emerge across multiple questions.
The descriptive data alone seem like strong evidence of bias, and understanding them completely would require multivariate analysis. ASWB cites psychometricians in their blog post but refuses to apply the multivariate methodologies in those very citations to assess the fairness of their examinations. They simply state that exam-level bias is uncommon and commit to reporting descriptive data, never testing a single hypothesis about test functioning.
ASWB’s lies echo the misinformation in the public statements made by the editor in chief of the International Journal of Social Work Values & Ethics. He succinctly stated the central piece of ASWB gaslighting in his emails to the BPD-L Listserv. “I ask one and only one question: Each and every single item on all of ASWB’s tests demonstrates no sex or race bias, but the test as a whole does demonstrate race bias. I want an explanation of how that is possible. That’s it.” (emphasis in original).
To answer this question, one would merely have to look at the Standards for Educational and Psychological Testing that ASWB and its former psychometrics consultant, Dr. Marson, say are used to validate ASWB examinations.
Differential test functioning (DTF) refers to differences in the functioning of tests (or sets of items) for different specially defined groups. When DTF occurs, individuals from different groups who have the same standing on the characteristic assessed by the test do not have the same expected test score. The term predictive bias may be used when evidence is found that differences exist in the patterns of associations between test scores and other variables for different groups, bringing with it concerns about bias in the inferences drawn from the use of test scores… (p. 51)
When credible evidence indicates potential bias in measurement (i.e., lack of consistent construct meaning across groups, DIF, DTF) or bias in predictive relations, these potential sources of bias should be independently investigated because the presence or absence of one form of such bias may have no relationship with other forms of bias. For example, a predictor test may show no significant levels of DIF, yet show group differences in regression lines in predicting a criterion.
(American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 2014, p. 52)
To translate a bit from methodology-speak, that last sentence states that a test like ASWB’s exams may show no item-level bias (DIF) but show differences across groups in the overall exam score (DTF) that are not related to the criterion being measured (entry-level social work competence). This is the exact situation we find ourselves in!
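The Standards’ point can be seen in a toy simulation (a hypothetical sketch with made-up numbers, not ASWB’s data or procedure): give one group a per-item advantage far too small to flag as DIF, and across a full-length exam it compounds into a practically important score gap, which is exactly the accumulated DTF the Standards warn about.

```python
import random

random.seed(0)

N_ITEMS = 170        # roughly a full-length licensing exam
N_PER_GROUP = 5000   # examinees simulated per group
SMALL_EDGE = 0.02    # tiny per-item bump in probability correct

def simulate_group(extra_per_item):
    # Both groups have identical true ability (p = 0.60 per item),
    # so any score gap here is pure test-level bias (DTF), not a
    # difference in the characteristic being measured.
    scores = []
    for _ in range(N_PER_GROUP):
        score = sum(
            1 for _ in range(N_ITEMS)
            if random.random() < 0.60 + extra_per_item
        )
        scores.append(score)
    return scores

ref = simulate_group(SMALL_EDGE)  # reference group: tiny edge per item
foc = simulate_group(0.0)         # focal group: no edge

gap = sum(ref) / len(ref) - sum(foc) / len(foc)
# A 2-percentage-point edge per item is hard to flag as DIF,
# but summed over 170 items it becomes a multi-point score gap.
print(round(gap, 1))
```

With a cut score near the passing margin, a gap of a few points, invisible item by item, is enough to fail one group at a higher rate than another.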
At the beginning of their blog post, ASWB (2023c) cites the Standards for Educational and Psychological Testing which states that these two sources of potential bias (DIF & DTF) should be investigated independently. Yet in the end, ASWB disagrees with the best practices of psychometricians and decides that DIF is good enough! This is what ASWB describes as meeting or exceeding psychometric standards.
If DIF vs. DTF feels too abstract, consider the content-area score report of a test-taker who failed an ASWB exam. Which content area displays the highest degree of differential functioning? We’ll never know, because ASWB only assesses for bias item by item. Shouldn’t we know?

ASWB’s self-interested RFP would prevent researchers from investigating bias that emerges from specific topics, content areas, or subsets of the test. It would also prevent researchers from testing ASWB’s cut scores against real-world performance of the examination. This fails to meet the ethical standard for exam validation, and the impacts of biased exams are manifestly clear in the dire shortages of social workers across the country.
While it is certainly possible for a regulator like ASWB to effectively manage conflicts of interest, it appears that ASWB is institutionally incapable of addressing these problems:
- ASWB purchases examinations from itself. It does not allow for competitive bidding on examination development.
- ASWB does not publish its exam validation methodology or results. Its member boards never ask for details on methods or results, which shields those details from disclosure under public records laws and prevents states or psychometricians from developing substantially equivalent exams, or Spanish-language exams, that could be used in place of ASWB exams in the new social work licensure compact.
- ASWB conditions researchers’ access to exam bias data on testing ASWB’s preexisting hypotheses.
- ASWB gaslights test-takers by stating as truth their untested hypotheses that only “external factors” cause biased exam scores.
- ASWB restricts researchers from talking publicly about their projects (ASWB, 2023a) and requires researchers to sign confidentiality agreements that give ASWB final authority over what is published (ASWB, 2020, p. 25).
- ASWB removes scored exam items due to biased functioning without notifying the test-takers who were denied licensure because their scores fell one point below the (biased) cut score, or the boards that denied them (analysis in DeCarlo, 2023; original information in Owens, 2021).
- ASWB has not published a procedure for its bias-detection methodology since 2010 (Marson et al., 2010), and those methods were outdated even when published, using correlations instead of regressions to detect biased items (AERA, APA, & NCME, 2014, pp. 51-52).
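On that last point, a toy illustration (hypothetical numbers, not ASWB’s items or data) shows why correlational screens are outdated: an item with nonuniform DIF can have a near-zero overall association between group and item response, which a correlation-style check would pass, while conditioning on ability, the logic that regression-based DIF methods formalize, exposes the bias.

```python
import random

random.seed(1)

# Hypothetical item with nonuniform DIF: the focal group is favored
# at low ability and penalized at high ability. Illustrative only.
N = 20000
rows = []  # (group, ability, correct)
for _ in range(N):
    group = random.randint(0, 1)     # 1 = focal group
    ability = random.choice([0, 1])  # low / high ability stratum
    p = 0.5
    if group == 1:
        p += 0.15 if ability == 0 else -0.15  # bias flips with ability
    correct = 1 if random.random() < p else 0
    rows.append((group, ability, correct))

def mean(xs):
    return sum(xs) / len(xs)

# Correlation-style screen: overall group difference in proportion
# correct, ignoring ability. The two directions of bias cancel out.
overall_gap = (
    mean([c for g, a, c in rows if g == 1])
    - mean([c for g, a, c in rows if g == 0])
)

# Regression-style logic: compare groups *within* ability strata.
def gap_in_stratum(stratum):
    return (
        mean([c for g, a, c in rows if g == 1 and a == stratum])
        - mean([c for g, a, c in rows if g == 0 and a == stratum])
    )

print(round(overall_gap, 3))        # near zero: correlation misses it
print(round(gap_in_stratum(0), 3))  # roughly +0.15
print(round(gap_in_stratum(1), 3))  # roughly -0.15
```

A screen that only looks at the unconditional association passes this item; any method that conditions on ability, as logistic-regression DIF procedures do, flags it immediately.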
ASWB will say whatever it wants and do whatever is necessary to maintain the oppressive status quo. I know this because I have seen it happen before. The last RFP they funded on regulatory research—the worthwhile studies by Dr. Joy Kim—produced results like this:
None of these factors—variations in state regulations, the field of practice, the type of employers, and social workers’ demographic vulnerability—helped to explain away the African American-White disparity in the odds of licensing for bachelor social workers. [emphasis in original] The odds of African American social workers holding any license were 43% lower than the odds of white social workers. For a required license, the odds of African Americans were 26% lower than those of Whites.
(Kim, 2022, p. 382)
No changes were made to baccalaureate licensure because of this finding. Even research by ASWB’s own grant awardee, the keynote speaker at its 2023 Education Meeting in New Orleans, LA, does not appear to convince ASWB that BSW licensure disparities are not the result of “external” or “pipeline” factors.
Despite funding results that say otherwise, ASWB still tells BSW examinees that external factors are the reason fewer than 40% of black social workers under 30 and fewer than 25% of black social workers over 50 pass the LBSW exam (ASWB, 2022). The evidence does not matter. Ethics do not matter. ASWB profits matter.
My view is that the entire project of regulatory research at ASWB is a callous marketing ploy. Even if a researcher were permitted by ASWB to conduct a proper measurement equivalence study and ASWB allowed their results to be published unedited, researchers’ conclusions would not inform how ASWB thinks or acts about the functioning of its exams. Only results that support ASWB’s profits are actionable.
ASWB exams are extremely lucrative, producing $17.6 million in revenue during 2021 (ProPublica, n.d.). Despite its nonprofit status, ASWB has sustained industry-shattering, recession-proof profit margins for the past decade. While 2021’s profit margin of 29% is quite high, its profit margin has averaged over 17% since 2011, and its net assets have increased over 447%. They are chokepoint capitalists—abusing their nonprofit status, enriching themselves off social workers, and further stratifying the profession by class, age, race, disability, language, and culture.
With their most recent RFP, ASWB has rigged the regulatory research game. They will provide access to exam bias data only to researchers pursuing hypotheses that exculpate ASWB and ignore obvious internal, psychometric issues in the examinations. I hope social workers who seek to collaborate productively with ASWB as researchers, question writers, or volunteers keep its frothing, unrepentant self-interest top of mind.