The flawed claims helping drive calls to gut federal workforce spending.
Six years ago, a government watchdog claimed all workforce programs are a lot alike. It did some headscratching things to get there.
The issue.
There is a notion that keeps showing up in Republican justifications for gutting American workforce funding: that there are too many workforce programs that do similar things for similar people.
The problem is that a key government report setting up that argument… kind of didn’t do the work to support it—which points to a bigger problem.
Explain.
Republican appropriators in the House have attempted to justify halving America’s workforce spending by saying they were “[e]liminat[ing] five duplicative and overlapping job training programs.” In August, the Trump Administration’s workforce blueprint similarly complained about a “patchwork of federal workforce programs” that “attempt to serve similar purposes.”
And from an American Enterprise Institute paper on Utah’s workforce system, which outlines policy ideas at the heart of some Republicans’ views of workforce development:
Public workforce development, employment, and job-training programs at the federal and state levels have been continually plagued with evaluations and audits demonstrating duplication, redundancy, and fragmentation in service delivery. At the federal level, the Government Accountability Office has documented these concerns through reports over many years, specifically in 2011 and again in 2019.
In terms of source quality, GAO generally is a good one, given that it is the federal government’s primary independent watchdog. I regularly dealt with our watchdogs when I was an attorney at the Department of Labor. GAO’s work was routinely a cut above our Inspector General’s Office and the questions we would get from the Hill.
So when I saw the 2019 GAO report1 while researching Utah’s (misunderstood) system, I was a bit taken aback by how thin and off I found their conclusions. The GAO report referenced by AEI uses “similar services to similar populations” quite a bit to describe America’s workforce programs. While it defines the services in very broad terms—in short, “Jobs Stuff”—neither I nor my Robot Research Assistant could find a definition for what it meant by “similar populations.”
This isn’t just me being nitpicky. Jobs Stuff may look the same at a high level, but the similarity in those services really varies based on populations. An older worker from a now-obsolete job has different training needs than a 16-year-old with a history of mental health issues, for example. Could they both use some wayfinding to find their career? Could they both need therapy? Sure. But someone who has worked 25 years and has proven skills that need repackaging is in a different space than a kid sorting out what they want to do with their life and the particular equation of behavioral health treatment that will help them actually be able to do it.
In trying to restrict how this money is spent, Congress has installed complex legal requirements on workforce programs that make the gradients you read in the previous paragraph seem like primary colors. For example, one of the main populations supported by American workforce dollars is called “dislocated workers.” “Dislocated workers” are not simply unemployed workers. They include:
Workers who are eligible for unemployment compensation (or have exhausted it), or who aren’t eligible but have shown “attachment to the workforce,” and who aren’t likely to return to their previous industry or occupation—meaning if you’re an accountant eligible for UI who wants to be a financial analyst, you’re not getting services, buddy.
Unemployed workers who were self-employed, but only if their unemployment is a result of “general economic conditions” or a disaster.
Workers who are military spouses, but only if they have experienced unemployment “as a direct result” of a base reassignment.
Those… aren’t particularly similar populations—and that’s not even every group of workers covered in one eligibility definition of the Workforce Innovation and Opportunity Act, America’s most consistent source of workforce dollars. The right brand of Jobs Stuff will be tailored to those eligibility requirements to choose the best way to help a worker while keeping their services funded.
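For readers who think in code, here is a minimal sketch of how those branches narrow the population. The field names and simplifications are my own invention; this illustrates the prose above, not the statutory text, which layers on more conditions.

```python
# A rough, illustrative model of the "dislocated worker" branches described
# above. Field names and simplifications are mine, not WIOA's statutory text;
# real eligibility determinations involve far more nuance.
from dataclasses import dataclass


@dataclass
class Worker:
    receives_or_exhausted_ui: bool = False
    ui_eligible: bool = False
    attached_to_workforce: bool = False            # demonstrated work history
    unlikely_to_return_to_prior_field: bool = False
    was_self_employed: bool = False
    unemployed_due_to_economy_or_disaster: bool = False
    is_military_spouse: bool = False
    unemployed_due_to_base_reassignment: bool = False


def is_dislocated_worker(w: Worker) -> bool:
    """True if the worker fits one of the three branches sketched above."""
    # Branch 1: laid-off workers (UI-eligible/exhausted, or attached to the
    # workforce without UI) who are unlikely to return to their prior field.
    laid_off = (
        (w.receives_or_exhausted_ui
         or (not w.ui_eligible and w.attached_to_workforce))
        and w.unlikely_to_return_to_prior_field
    )
    # Branch 2: formerly self-employed, unemployed because of general
    # economic conditions or a disaster.
    self_employed = (
        w.was_self_employed and w.unemployed_due_to_economy_or_disaster
    )
    # Branch 3: military spouses unemployed as a direct result of a base
    # reassignment.
    military_spouse = (
        w.is_military_spouse and w.unemployed_due_to_base_reassignment
    )
    return laid_off or self_employed or military_spouse


# The UI-eligible accountant who simply wants to become a financial analyst:
print(is_dislocated_worker(Worker(receives_or_exhausted_ui=True)))  # False
```

Even stripped down this far, the branches barely touch each other.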
So how did GAO come to a conclusion that these populations are the same? What did they find in their highly detailed and immaculate research of the problem that I didn’t?
Well, it turns out that I was wrong… about GAO doing highly detailed and immaculate research of the problem.2
Picking apart America’s workforce programs.
Before unpacking the issues with the GAO report, I think it’s important to understand why I bring a particular set of eyes to this set of issues.
As I have mentioned before, I was an investigative reporter who uncovered misuse of government funds, including funds spent under federal contracts and grants. Then, I was a lawyer whose job was to make new developments in jobs work with the restrictions and requirements Congress places on federal workforce programs—oh, and to deal with watchdogs like GAO when they had questions about them. Then, my last job in the federal government was keeping track of DOL’s grants portfolio and figuring out how to make each of them work toward good jobs.
To do that last part, I worked through each of the programs’ laws and did quite a bit of work to understand how everyone in the chain—from national officials to workers on the ground—experiences each program and where it could be made better.3 In other words, my default is to work like an investigator even when I’m trying to do things as a policymaker—something I’m sure you’ve never noticed as a reader of JOBS THAT WORK—and I have spent a lot of time exploring the corners of the phone booths in which Congress placed some of these programs.
What did GAO do?
Too little as a consequence of trying to do far too much.
Let’s first talk about the “too much” side: the report above reviews 47 different programs across nine Cabinet-level agencies—all of which GAO had to audition to figure out whether they were really workforce programs. I won’t dispute the report’s program fragmentation conclusion—more on that in the next section—but if you’re trying to draw meaningful conclusions, I think you need a more focused sample with more consistency in administration.
When I was getting to know all of DOL’s programs, my recollection is I counted around 30 programs—which is quite a few. Yet they all shared one Cabinet-level agency, which makes it a whole lot easier to get the consistent and deep information needed to reach thoughtful conclusions.
Here is how “too much” leads to “too little”: there’s really no way to herd all these cats in a way that gets the information needed for meaningful analysis, especially with limited staff and pressure to get the report out, which I’m inferring here since this was a congressionally requested review.
So what did GAO do?
To address all of our objectives, we administered a survey to program officials that included questions about services provided, budgetary information, and participants served. In addition, we included questions asking agency officials to confirm or correct program objectives and eligibility and beneficiary requirements . . . . We also included questions about agencies’ actions to manage overlap and fragmentation.
I’m not opposed to GAO using a big survey as a way to simplify a messy investigative process. Yes, you’re only going to learn so much by asking generalized questions meant to produce useful information on 47 programs at nine agencies. And because GAO’s process is highly formalized and adversarial (intentional or not), the survey answers are going to be the bare minimum needed to respond, probably with a lot of copying and pasting and suggestions to go look at the law. Still, agency officials should know about the services they provide and who they serve. This is as good and efficient a way as I can think of to knock out the agency-contact portion of this review and reduce moving pieces.
The problem is that GAO gave this survey way too much weight for a piece of evidence that is thin by its nature. For one, if you’re doing serious analysis—or building a case against something—you need to catch up with as many people touched by the issue as possible. Neither I nor The Robot could find evidence that GAO talked to grantees, participants, and employers about their experiences with these programs, which obviously could tell us plenty about duplication and overlap.4 My guess is GAO didn’t do it—or at least didn’t disclose any interactions—because of the size of the review and an effort to maintain the perceived uniformity of the evidence. Understandable, but it undercuts the quality of the conclusions.
Here’s something far less understandable, though:
We did not conduct a legal analysis to confirm the various characterizations of the programs in this report, such as services provided, target population, eligibility criteria, or program goals. Instead, such program information in this report is generally based on our survey results as confirmed by agency officials.
My first reaction to this passage was to laugh so hard I worried I would get a wellness check. Putting aside how much this harms the quality of GAO’s analysis, it cost GAO an opportunity to evaluate whether agencies actually knew what the hell they were talking about with their programs. As a reporter, I met with government officials who had insane assumptions about what was in their laws and had never read them. In other words, the Government Accountability Office missed a plum opportunity to gauge actual government accountability.
But not analyzing the law also explains why I came to a very different conclusion than GAO on the “similar populations” question. I looked at the law and saw requirements that narrow down workers who need help to a very specific group of people. I’m going to give GAO the benefit of the doubt that they looked at the law—more out of professional respect than anything—but by their own description, they let agency officials do their thinking about it.
Sort of. This passage is from Footnote 53 on Page 29 of the report:
Similar to their responses regarding fragmentation and overlap, program officials commonly reported taking no action to address duplication because their program was unique in the population it served or the services it provided.
That strongly implies that the program officials surveyed didn’t think their programs were “similar services to similar populations.”
Which raises a question: if GAO didn’t base its conclusion of “similar services to similar populations” on the law—freely available on the internet!—or on the opinions of the officials it surveyed—which it says it relied upon in reaching its conclusions—then how and why did it conclude that these programs were duplicative and overlapping?
Which, in turn, raises questions about a core premise driving American workforce policymaking in 2025.
So what do we do about it?
Two things to address off the bat: first, I can’t affirmatively say how much this GAO report plays into the Trump workforce blueprint or similar policy documents because I didn’t see it cited there. But as you’ll gather from the quotes at the start of this piece, the claim made by GAO certainly has had quite a bit of pickup from conservative voices who think we need to shed a lot of America’s workforce spending and infrastructure because it’s too inefficient. My bet is a lot of that has to do with the AEI paper on Utah, which did cite the GAO report, is ubiquitous in conservative workforce circles, and definitely flavored parts of the Administration’s workforce blueprint.
Second, I think the GAO report—and many of the Republican policy documents on workforce—are right that there is too much fragmentation in the current array of federal workforce programs, but they’re missing the cause.
The truth is there is too little overlap among workforce programs—out of fear of the wrong people getting money for the wrong things, Congress has legislated the eligibility requirements for these programs into a phone booth. Different populations may have some overlap—one person could fit into more than one grant’s eligibility requirements—but for the purposes of serving that population, each program has to be unique in some way because that’s what the law requires.
Speaking of which, a core assumption of federal appropriations law—in very rough terms—is that if Congress authorized two different programs with many of the same characteristics, they have to be for different purposes unless Congress left a route to mix and match funds. It rarely does. I learned that trying to do some of the cross-agency coordination that GAO complains doesn’t happen in its report. Ironically, GAO is the keeper of appropriations law—so they probably ought to have done some of the legal analysis they said they didn’t do.
Which brings me to this: a unifying theme of the keystone policy documents arguing for gutting workforce programs is that they don’t actually do the program-by-program work to show duplication or overlap—or explain why current programs can’t accomplish the policies those documents call for. Per The Robot, the AEI paper comes the closest, but this work certainly isn’t done by the Trump workforce blueprint, which was actually directed to do it by the executive order that led to it—and just… didn’t.
I give the Trump blueprint a teensy bit of slack because I respect some of the people involved and I understand they’re working through a really rigid filter in the Trump II White House, which seems to fundamentally misunderstand key concepts in these programs. But if you don’t do the work, you miss key implementation details like the ones I covered at the start of this section. Speaking from experience, you need those lessons even if you’re going to implement your idealized vision of federal workforce spending, which is mostly what the Trump workforce blueprint ends up doing.
Because if you don’t know what’s working and not working now—and why—it’s real hard to put something better into place in the future.
Appropriations update: Buying time.
We’re two weeks away from the end of the fiscal year, and it looks like House Republican leadership thinks it can buy time until Thanksgiving with a short-term spending bill keeping the government open. That would give the Senate and House more time to reach a compromise on many things, including the future of American workforce spending, which House appropriators halved and Senate appropriators restored to 2024 levels.
There are potential hitches: as of this writing, House Republicans have lost two of the three votes they can afford to lose and still pass a stopgap—including one member who wants more cuts. Democrats may want to bargain in return for the votes needed to get the bill passed, and I suspect the second Trump Administration isn’t there and may never be there.
Things change quick, but today, I’m doubtful we’ll get any resolution before September 30. The good news is it gives advocates more time to get their stuff together and do education and advocacy with moderates and conservatives who have a soft spot for workforce programs. The bad news—if you like the existence of workforce spending—is that if this lingers into the fall, all sorts of things could be put on the table.
There also is the possibility that more cuts will come in a stopgap bill—recall the $75 million lawmakers slashed from DOL emergency grant funding in a March stopgap bill.
Card subject to change.
Unless there is something dramatic in that stopgap bill, Friday likely will be the next time I’m in your inbox. Obviously, I’ll pop back in if I find out another state has gotten a bazillion dollars for apprenticeship and no one has heard about it.
Next Tuesday: There is an expectation that states will take a bigger role in the future of apprenticeship during Trump II, but because of some Bush-era regulations, that’s easier said than done. I have a few ideas for how to make it easier and better.
1. I didn’t do the same analysis on the 2011 GAO report because it preceded the 2014 passage of WIOA, the authorization for many of America’s current workforce programs.
2. To double-check the fairness of my conclusions, I supplemented my usual editing process by running my analysis and the report through a deep research AI tool that I tasked with looking for errors in my description of GAO’s work and any logic flaws. It suggested unpacking fragmentation versus overlap—which I address in the conclusion—and it suggested that my argument did not take into account GAO’s limited resources in building this report.
3. By the way, I’m not prying open a black box of government mysteries. If you look at DOL funding opportunities from 2023 and 2024, you can see the common good jobs language and strategies we put in place, the ways each strategy fit a particular program’s needs, and the ways each strategy evolved over time.
4. Something I wanted to acknowledge: in my DOL work, I’m not sure I would have felt comfortable doing direct and detailed engagement with current DOL grantees on these topics because of how it could muddy administration of individual grants and potential future competitions. Would I have loved that kind of context? Very much so. Would I have? I can’t recall any specific opportunities to do so, but my internal grants attorney probably would have nudged me to be very general even if I could have had these conversations.
GAO, however, isn’t a grantmaking agency—it’s an independent government watchdog—and it set out to build a case that there was too much overlap and not enough coordination. I think it’s a fair question to ask why GAO didn’t go down this route.