Generative-AI programs may eventually consume material that was created by other machines, with disastrous consequences.
www.theatlantic.com
The program at first fluently finished a sentence about English Gothic architecture, but after nine generations of learning from AI-generated data, it responded to the same prompt by spewing gibberish: "architecture. In addition to being home to some of the world's largest populations of black @-@ tailed jackrabbits, white @-@ tailed jackrabbits, blue @-@ tailed jackrabbits, red @-@ tailed jackrabbits, yellow @-."
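The degeneration the article describes can be sketched in miniature. In the hypothetical toy below, the "model" is nothing more than a Gaussian fitted to its training data, and each generation is trained only on samples generated by the previous generation's model. This is an illustration of the general idea, not the study's actual setup, and a model this tiny needs far more generations than nine before the drift is obvious:

```python
import random
import statistics

# Toy illustration of "model collapse" (a hypothetical sketch, not the
# study's actual method): the "model" is just a Gaussian fitted to its
# training data, and each new generation trains on data produced by the
# previous generation's model.

random.seed(42)

def fit(data):
    # "Training": estimate the mean and standard deviation of the data.
    return statistics.mean(data), statistics.stdev(data)

def generate(model, n):
    # "Generation": draw n samples from the fitted model.
    mu, sigma = model
    return [random.gauss(mu, sigma) for _ in range(n)]

data = [random.gauss(0.0, 1.0) for _ in range(20)]  # generation 0: "real" data
spreads = []
for generation in range(1000):
    model = fit(data)
    spreads.append(model[1])
    data = generate(model, 20)  # the next generation sees only generated data

# The estimated spread shrinks toward zero over the generations: rare
# "tail" content is the first thing to disappear, and eventually only a
# narrow sliver of the original distribution remains.
print(f"spread at generation 0:   {spreads[0]:.3f}")
print(f"spread at generation 999: {spreads[-1]:.6f}")
```

With samples this small, the estimated spread takes a noisy random walk with a downward drift, which is why the toy needs hundreds of generations; the article's language model showed the same loss of rare, tail content within nine generations of retraining.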
I don't question AI's ability to gather a lot of information and analyze it. Just as a spreadsheet works out specific problems, AI can provide decent answers to linear questions.
If we define "intelligence" loosely as the ability to remember pictures, formulas, writing, and so on, then ChatGPT is quite "intelligent." It can both recall and somewhat mimic data it has seen.
What it can't do is "think outside the box" about concepts. The answers it gives are the best statistical guess based on its training data, and it's unlikely to go beyond that, IMO.
Lots of human interaction is "scripted" and "form-based": even "random encounters" among strangers in a grocery store or wherever follow a reasonably predictable pattern of custom and civility, so a bot may handle that easily. That only holds, though, if you ignore the nuance of body language, eye movements, and the other cues that most (not all) humans react to and change the conversation because of.
If I see someone I've met looking away or appearing distracted, I instinctively know they're not really in the conversation. It'll take bots a long time to recognize these things, IMO.
AI is a good tool for some things, but I think it'll be easy to spot for many years because it "acts different" from the normal human script.