Lessons Learned About AI from a Novice

Artificial intelligence (AI) is all the rage right now. This blog focuses on my experiences as a novice using AI and the discoveries I’ve made along the way. My efforts are primarily centered on vocational rehabilitation, community rehabilitation, and electronic case management systems. In this post, I reflect on lessons learned, mistakes made, and opportunities for growth.

When I first started using AI, I felt completely inept and was frankly intimidated by the process. Fortunately, I had a professor colleague and the former owner of my company who were willing to teach me some basics. I also took several online courses to gain a better understanding.

One of the exciting aspects of my work is the opportunity to attend conferences across the country, most of which focus on vocational rehabilitation, community rehabilitation, and workforce development. Over the past 18 months, I have attended several breakout sessions specifically about AI. Many presenters use AI live during their presentations to demonstrate its applications, such as writing case notes, tracking services, and even creating individualized employment plans. Presenters also showcase how AI can assist with research. There is always excitement about its potential to improve efficiency and enhance documentation. However, these discussions also raise significant ethical concerns.

Public AI systems such as ChatGPT and Copilot may retain the text you enter and, depending on your account settings, use it to train future models. This creates substantial risks of personally identifiable information (PII) being exposed through public systems. Because of this, it’s crucial to distinguish between public and secure AI systems. Many states have enacted legislation restricting AI use, so understanding local regulations is essential. If you use a public system for drafting case notes, you must anonymize your drafts by using generic terms like “client” or “consumer” instead of names and by avoiding specific location or provider/employer information. Ethically, protecting sensitive information must be a top priority.

My Personal Experience with AI

Now that the disclaimers are out of the way, let’s talk about my experiences using AI. I haven’t tested every available platform, but I have experimented primarily with free versions of ChatGPT, Copilot, and Claude. Recently, I subscribed to ChatGPT’s paid version (about $20 per month), and I will discuss the differences between free and paid versions later.

Although I have tried multiple platforms, I personally prefer ChatGPT. This is not an endorsement or a sales pitch, just a matter of personal preference. I use it for research related to my blogs and the graduate courses I teach, as well as for grammar, organization, and basic editing. I have also used it to help draft speeches and PowerPoint presentations and to analyze data. Sometimes I like the results; other times, not at all.

As an academic, I am highly vigilant about plagiarism and academic integrity. I always write my drafts in my own words, based on my knowledge and research, before using AI for editing. I want to ensure that my work remains authentically mine, and I do not want to feel like I am “stealing” ideas from an AI platform. If I use any significant information from AI, I cite it at the end of my writing.

Challenges of AI-Generated Information

There are several challenges when working with AI-generated content. First, AI models are built by humans and trained on human-generated data, so they inherit human biases. AI systems tend to generalize, often overlooking nuances that don’t fit the most common patterns. Think of a bell curve: most of the data clusters around the middle, while the outliers sit in the tails. Because people with disabilities often fall outside the “average” range, AI-generated information specific to disability is frequently incomplete.

Second, AI is not always accurate; sometimes it is outright incorrect, confidently presenting fabricated information as fact. This phenomenon is known as a “hallucination.” It is essential to verify AI-generated content for accuracy and completeness. And while AI providers may use conversations to improve future models, that learning is not immediate: correcting the AI within a session does not change the underlying model, so the same errors can reappear later.

Keeping My Voice and Style

I write like a storyteller. I have a specific voice and rhythm, and I want my writing to sound like me. This can be challenging when using AI for editing. I have used AI multiple times to refine my drafts, and while I have occasionally been pleased with the results, I have also been deeply disappointed. The first time I used AI for editing, it condensed my work by two-thirds, reducing my narrative to bullet points. It didn’t sound like me at all.

I have written and published extensively over the years and have served as an editor for journals, monographs, and books. Yet, even with my experience, I sometimes find myself arguing with AI to maintain my original style. It often takes 8-10 iterations before the document reflects my voice. I rarely use AI-generated content in its entirety; instead, I extract and refine small sections to fit my writing style.

That said, I do appreciate AI’s ability to correct grammar and spelling and to suggest new ideas. Sometimes, I even find myself debating with AI over verb tense—whether to use past, present, or future. I use AI as a tool, but I always ensure that my final work remains my own.

Collaborating with AI

At times, working with AI feels like collaborating with a co-author. I interact with it as if I am conversing with a colleague. Interestingly, I have noticed that using polite language—such as “please” and “thank you”—often results in better responses. AI is not a human, but the interaction feels like a negotiation, requiring both openness to suggestions and critical evaluation of its output.

Lessons Learned

Through trial and error, I have learned several key strategies for making AI a useful partner:

  1. Framing Matters – How I phrase my request significantly impacts the response. I must decide whether I need an open-ended answer or specific information.
  2. Iteration is Key – I review the AI-generated output and refine my question as needed, sometimes multiple times.
  3. Voice and Style – To maintain my unique voice, I explicitly instruct AI to preserve my writing style.
  4. Fact-Checking is Essential – AI is not infallible. I compare AI-generated content against reputable sources to verify accuracy.
  5. Ownership of Work – I evaluate whether the final product reflects my writing or if AI has contributed too much. I always ensure my work remains authentically mine.
  6. Word Count Control – AI tends to over-summarize, so providing a specific word count helps maintain content length.
  7. Differences Between Free and Paid Versions – Paid versions of AI platforms generally allow for longer and more complex interactions.
  8. Practice Makes Perfect – To get better at using AI, I practice regularly. While most of my AI usage relates to vocational rehabilitation, community rehabilitation, and workforce development, I also explore other interests, such as literature, fly-tying, hunting, and fishing, to refine my skills.

Final Thoughts

AI is here to stay, and its influence on our work will only grow. It is a valuable tool for enhancing skills and expanding knowledge, but it does not replace human expertise. AI helps fine-tune my work, but it also presents challenges and limitations that I must carefully consider before finalizing any document.

I find my skills improving every day, and I am excited about AI’s possibilities. It saves time and effort, ultimately increasing my capacity. However, AI cannot replace me—it serves as a powerful assistant, but human decision-making remains essential. And yes, I used AI to review this document! I even used it to create the graphic for this blog.