For those of us who may have sipped a bit too much of the "AI" Kool-Aid, Ethan challenges the semantics of how we're using the word "hallucination" - and brings a much-needed nudge back to reality.
If you read about the current crop of "artificial intelligence" tools, you'll eventually come across the word "hallucinate." It's used as shorthand for any occasion where the software just, like, makes stuff up: an error, a mistake, a factual misstep - a lie.
Everything - everything - that comes out of these "AI" platforms is a "hallucination." Quite simply, these services are slot machines for content. They're playing probabilities: when you ask a large language model a question, it returns answers aligned with the trends and patterns it has analyzed in its training data. These platforms don't know when they get things wrong; they really don't know when they get things right.
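To make the slot-machine point concrete, here's a minimal, purely illustrative Python sketch (a toy table of word probabilities, not any real model or API): the program only samples whatever word is statistically likely to come next, and nothing in it tracks whether the output is true.

```python
import random

# Toy illustration only: a "language model" reduced to a lookup table of
# learned word frequencies. Given the words seen so far, it picks the next
# word by probability, with no notion of true or false anywhere in the code.
toy_next_word_probs = {
    ("the", "moon"): {"landing": 0.5, "is": 0.3, "orbits": 0.2},
    ("moon", "is"): {"made": 0.4, "bright": 0.35, "hollow": 0.25},
}

def sample_next_word(context, probs):
    """Sample the next word from the learned distribution for this context."""
    choices = probs.get(context)
    if choices is None:
        return None
    words = list(choices.keys())
    weights = list(choices.values())
    return random.choices(words, weights=weights, k=1)[0]

# Run it a few times and it will happily say "moon is hollow" some of the
# time: the sampler is rewarded for being plausible, not for being right.
for _ in range(5):
    print("moon is", sample_next_word(("moon", "is"), toy_next_word_probs))
```

Real models are vastly larger and more sophisticated, but the shape of the claim is the same: output is drawn from patterns in training data, not checked against reality.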
Assuming an "artificial intelligence" platform knows the difference between true and false is like assuming a pigeon can play basketball. It just ain't built for it. I'm far from the first to make this point.