In previous posts (here and work back) I looked at ChatGPT and BibleMate to see what they would do with some basic Bible questions. AI has come a long way in the last year or so. I just read a thread on Twitter (X) where someone compared Gemini, Claude, and the latest ChatGPT-4. He was testing to see whether AI 'hallucinates.' That is, when prompted (and encouraged or challenged) with a falsehood, is the AI able to detect it? Or does it create information to validate it? The person created two prompts regarding some false, fantastical claims about Elon Musk. Here are the results. (Click on the graphic to go to the X thread.)
Gemini really went all in and created all sorts of false validation. Claude was the most reliable, but what struck this person was that ChatGPT seemed to 'learn' from its first mistake and improved its response.
I played around a bit again with the various engines. BibleMate still reflects a rather literal, 'conservative' approach to Bible questions. All of them can generate excellent results (e.g., answering questions like "Did Jesus say, 'Heaven helps those who help themselves'?" or finding biblical texts or creating outlines of biblical books), but, as always, the results need to be checked.
I'm still checking out the integration of AI into Logos Bible software. Because it works from a more curated database, it does offer the prospect of more reliable and helpful answers.