Two walkers said they relied on AI for tide times, and it nearly cost them their lives.

Two hikers were rescued from Sully Island, Wales, on October 14, 2025, after following ChatGPT’s advice on when it was safe to cross the causeway, only to find themselves surrounded by rising water. The incident was shared on Instagram and YouTube by former BBC reporter Anna Brees (@breesanna), who reported that the walkers had "checked the tidal times but say they were given the wrong time by ChatGPT."
According to the post, the pair attempted to cross the water around 10 AM when a local restaurant owner spotted them. Realizing they were walking straight into the incoming tide, he grabbed a megaphone and started shouting for them to turn back. The man, later identified as Gordon Hadfield, told Brees, "I asked them if they've got provisions to stay there. They went over early this morning in the dark." When she asked how he knew they needed help, he replied, "Because they started to walk into the water. They were coming into the water." It was Hadfield, not the hikers, who called for help. "The Coast Guard told me they called 999, but it was you," Brees told him.
One of the hikers later admitted that they had "made the mistake of using ChatGPT for research to see when the low tide was." The AI tool reportedly told them 9:30 AM, but the tide had already shifted by the time they tried to return. One of the NHS workers later explained that "this is the second-highest tidal range in the world. It’s uneven underfoot, so it’s hidden. The water looks calm, but there’s a drop-off you can’t [see]," adding that "it happens 10 to 20 times a year."

ChatGPT and other AI tools have repeatedly been shown to be unreliable, particularly when it comes to factual or time-sensitive information. A study published in Frontiers in Public Health evaluated ChatGPT-3.5, ChatGPT-4, and ChatGPT-4o on factual questions and found major variation in accuracy and reproducibility, with "statistically significant differences… (p < 0.0001)." Similarly, a study by the European Broadcasting Union (EBU) and the BBC found that AI assistants, including ChatGPT, produced responses with at least one significant issue in 45% of cases, and some form of problem (whether sourcing, accuracy, or bias) in 81% of responses.

Hadfield frequently warns visitors at the spot, especially during warmer months. The video of the near-miss spread online, prompting a wave of reactions about the usage of ChatGPT for research. @miltyg565 wrote, "Why is anyone asking ChatGPT the tide times instead of just checking the actual tide times? It’s not difficult." @663shellsalad added, "STOP USING AI!!!! It’s bad for the environment!!! And it’s clearly bad at its job!!!"
@hedvigrb commented, "ChatGPT makes stuff up. If it doesn’t have the data, it’ll just say whatever it thinks you want to hear. Never rely on it for facts." @deej404 joked, "AI actively melting the ice caps and raising the water levels, and here it is literally leading people to drown. Good job." @sopor44 suggested, "If this happens frequently, maybe put up signs with the tide schedules? Seems cheaper and safer than sending rescue teams."
You can follow Anna Brees (@breesanna) on Instagram for more news content.