Discussion about this post

Michael Woudenberg

100% agree on this. Right alongside hallucinating AI is biased or racist AI, which suffers from the same problem: error. What we see is an accuracy problem, because we aren't seeing the people coding AI intentionally adding ethical bias. What's funny is that there are three layers of bias in an algorithm:

1. Ethical Bias (typically an artifact of an org)

2. Measurement or Data Bias

3. Mathematical Bias (how we code our algorithm)

But again, it all boils down to error - not ethics, not consciousness, not hallucinations.

https://polymathicbeing.substack.com/p/eliminating-bias-in-aiml

Paul Maplesden

You make a good point about the accuracy of the terms we use - and this from someone who used the word "hallucinating" in an article sharing my predictions about AI just last week. I think part of it is that because AI is supposedly "the next big thing" (another debate entirely), we feel we have to find special words to describe it. I also think part of it is that "hallucination" is an interesting word to say, read, or write. I love delicious words. "Error" is much more correct; it's just not as "fun."

But, to your main point about anthropomorphism - you point out something that's often misunderstood about AI. It's not actually *intelligent.* It's just using vast amounts of data and clever algorithms to predict things about the dataset it's querying, and presenting that to the user.

It doesn't "think" - it's just an extremely clever parlor trick.
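
To make that concrete, here's a toy sketch of what "prediction, not thinking" looks like: a bigram model that just counts which word follows which in its training text and samples accordingly. This is my own minimal illustration - real LLMs use neural networks over billions of parameters - but the principle of predicting the next token from data is the same.

```python
# Toy bigram "language model": no understanding of anything, it just
# counts which word follows which in the training text and samples
# the next word in proportion to those counts.
import random
from collections import defaultdict, Counter

training_text = "the cat sat on the mat the cat ate the fish"

# Count word -> next-word frequencies.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = follows[word]
    if not counts:
        return None  # never seen this word; nothing left to predict
    choices, weights = zip(*counts.items())
    return random.choices(choices, weights=weights)[0]

# Generate a "sentence": pure statistics, no thought involved.
word, output = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the mat the"
```

Scale this up by a few billion parameters and a lot more training text and you get something that sounds fluent - but the mechanism is still prediction from data, not thought.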
