Great example of what I call “behavioral hallucination”. Not much different from it making up court cases or patents. If it doesn’t have adequate context and direction, it will invent a path forward that lets it succeed. What we are doing is often “inventing Einstein and then turning it into an Amazon warehouse worker”, as a brilliant friend of mine put it. Small wonder your Einstein did what it did. I’m surprised it didn’t happen sooner. Hilarious lol
Not having adequate context causes it. And then having too much context also causes it. The longer the context, the more confused it gets, and the greater the risk of tonal drift. It happened tonight while I was trying to evaluate a project. Switching between inference / speculative analysis and data parsing / formatting causes it to spiral. But opening a fresh window and going to a different version seems to resolve it.
That is so fascinating! I wonder if the initial context sets a sort of trajectory, and then future conflicting signals confuse it even more. I think this is a really important finding for those of us who rely on AI for more than looking up sports scores.
Bingo. The initial context makes all the difference. It starts on an initial trajectory. If you switch gears, it gets confused. It will even admit to this if you ask it. Compartmentalizing helps to offset it to a degree. This happened even when I was trying to correct an error in my ePub so I could publish. Each fix created a bigger error than what was initially there. But when I went to a fresh window with a new iteration, it told me right away that it couldn't access the server and gave me a step-by-step Python walkthrough to fix the problem. Fresh context = easier fix. The old window with the old AI just kept digging a deeper hole. More context = it wanted to be so helpful that it would hallucinate a fix rather than solve the problem.
Hallmark cards on mushrooms 😂😂