Image generated with artificial intelligence (ChatGPT image generator) based on author concept.

AI 3.0: Still a Counselor

We Have Been Here Before

This is my third conversation about the role of artificial intelligence (AI) in rehabilitation counseling. I find myself returning to the topic because the questions are becoming practical, not theoretical.

Every generation has had a tool that promised to change our work. The question has never really been whether to use the tool, but how to use it ethically.

As I have thought about the future of AI in clinical and vocational rehabilitation counseling practice, I have been reminded of our early experience with vocational evaluation. I have been a Certified Vocational Evaluation Specialist since the 1980s. At that time there was a tendency in some settings to rely exclusively on test results for vocational planning rather than on the individual choice of the consumer being served. Results were sometimes used as a way of saying “no” to a person’s expressed career goal.

What we learned over time was that tests could not provide all the answers. Test results could not measure desire, or as my father used to say, “want to.” They could not measure perspiration and inspiration — hard work and motivation. We also learned that people were sometimes denied services based on test results rather than on full consideration of accommodations and modifications that could mitigate limitations discovered during evaluation.

For a period of time the value of vocational evaluation itself was questioned. Consumers challenged results, and rightfully so. As professionals we adjusted our practice. Informed choice, accommodation, and modification again drove the process. The lesson was simple: a tool becomes dangerous the moment we stop questioning it.

We are now in a similar place. The technology has arrived before we have fully established the ethical standards for its use. No — the sky is not falling. AI holds tremendous promise. But ethical decision making must lead the way as we make choices about how it is used.

Promise and Responsibility

In a course I recently developed, I suggested that ethical practice lives between the promise of a tool and the responsibility of the professional using it. Artificial intelligence may be one of the most powerful tools rehabilitation counseling has encountered. The question is not whether we will use AI, but how we will use it.

An important question follows: who is driving the push toward AI?

Leadership faces significant pressures. Vocational rehabilitation agencies must serve more individuals with fewer counselors. They face restricted funding, difficulty filling vacancies, increased federal and state oversight, order-of-selection challenges, and higher expectations for reporting outcomes and customer satisfaction. These pressures are real, and the desire to better serve customers is understandable. AI promises improved efficiencies in case management software, faster documentation, automated appointment reminders, online applications, performance analysis, and financial reporting. These are real and meaningful benefits.

Rehabilitation counselors see something different. They hope AI can reduce paperwork and administrative burden so they can spend more time with the people they serve. They see value in assistance with case management software, case documentation, review of medical records, labor market information, employer tracking, and plan development. They hope efficiency leads back to counseling.

There is considerable overlap in these motivations. Both leadership and counselors want improved services and better outcomes. However, the ethical risk emerges when efficiency begins to shape professional decisions rather than support them.

Where the Risk Lies

Some uses of AI carry relatively low ethical risk when proper safeguards are in place — drafting correspondence, appointment scheduling, reminders, or organizing large amounts of information for review. These activities support practice.

Other uses require much greater caution — eligibility recommendations, automated plans, diagnostic impressions, or suggested steps based solely on disability categories or data patterns. These activities approach clinical judgment. Rehabilitation counseling has always depended on individualized understanding, not simply classification.

I have spoken with leaders who believe AI may allow them to fill difficult-to-hire positions by relying more heavily on technology and potentially less qualified staff. The temptation is understandable. However, the danger is not simply that AI may be wrong. The danger is that organizations may gradually redefine rehabilitation counseling work around what the system can produce. When that happens, professional judgment quietly erodes.

Both vocational rehabilitation leadership and rehabilitation counselors must therefore approach AI with deliberate care. We must safeguard confidentiality and transparency. We must understand algorithmic bias and the potential for inaccurate information (National Institute of Standards and Technology, 2023).

The concern is not simply that AI may make an error. The concern is that recommendations may be accepted without the same level of professional questioning we would apply to our own judgment. In rehabilitation counseling, a suggestion can quickly become a direction — influencing eligibility decisions, the type of employment explored, or whether an individual is viewed as capable of success.

We must also recognize that AI has no empathy and no lived understanding of disability, employment barriers, or human motivation (Commission on Rehabilitation Counselor Certification, 2026). It cannot appreciate perseverance, family support, accommodation, or what my father called “want to.” Those factors have always shaped outcomes more than any data point. Our responsibility is to evaluate the information the system provides, not defer to it.

Most importantly, we cannot abdicate authority to the system. The counselor must retain professional responsibility, and the consumer must retain informed choice. Both must be able to disagree with the output.

Still a Counselor

Guidance from professional organizations reflects this responsibility (American Counseling Association, n.d.; Commission on Rehabilitation Counselor Certification, 2026). The consistent themes are clear:

  • Protect confidentiality
  • Preserve client autonomy and informed decision making
  • Develop competence through training
  • Keep the human professional actively involved in every significant decision

The counselor makes the professional decision. The consumer makes the life decision. The AI system assists; it does not decide.

Conclusion

Every generation must decide how it will use its tools. Artificial intelligence may profoundly influence rehabilitation counseling, but its impact will not be determined by technology alone. It will be determined by the judgment, values, and ethics of the professionals who use it.

The history of rehabilitation counseling has never been defined by the tools we used, but by the judgment we exercised. Artificial intelligence will not change that — unless we allow it to.

References:

American Counseling Association. (n.d.). Recommendations for practicing counselors and their use of artificial intelligence. https://www.counseling.org/resources/research-reports/artificial-intelligence-counseling/recommendations-for-practicing-counselors

Commission on Rehabilitation Counselor Certification. (2026). Frequently asked questions (FAQs) and guiding statements to support certified rehabilitation counselors (CRCs) using artificial intelligence. https://crccertification.com/wp-content/uploads/2026/02/FAQ-AI-Guidelines.pdf

National Institute of Standards and Technology. (2023). Artificial intelligence risk management framework (AI RMF 1.0) (NIST AI 100-1). U.S. Department of Commerce. https://doi.org/10.6028/NIST.AI.100-1

*Edited with the assistance of ChatGPT