AI Agents and Offloading Thinking

wanl.blue


I have quite the bee in my bonnet... I've noticed a troubling trend that drives me absolutely nuts.

AI agents have become prevalent in just about every space, and this is creating a slew of issues, most of which I won't be discussing here.

But just to start:

My problem is with a new mechanism of discussion alias abuse that I have been witnessing.

At work, we have large technical discussion aliases for the myriad of topics that we cover—Windows Filtering Platform (WFP), Kerberos, TLS, etc. In these aliases, subject matter experts (SMEs) provide their insights on these topics.

The new trend I have noticed is that someone will ask an AI agent an in-depth technical question, and it will produce a nonsense response. This isn't surprising; there may simply not be enough public data on these topics for the model to answer accurately.

HOWEVER, the individual will then take this nonsense response and, in essence, ask the discussion alias, "Hey, is this wrong? I don't know, and I haven't checked."

This is absolute madness and a monumental waste of time and money.

My bigger issue is with that second step: posting the unverified response to the alias shows a fundamental disrespect for the time of the people you are asking for help. The canned response I've been giving when I see this behavior is:

Sidebar: AI agents like Copilot are great for gathering initial insights, but please verify the information before posting questions to a large discussion group. When questions are based on unverified AI-generated details, experts often waste time deciphering the actual problem the asker is trying to solve. A quick fact-check keeps discussions focused and productive, allowing experts to provide meaningful help rather than untangling unclear queries.


Copyright © 2023 wanl.blue