Are you using AI / ChatGPT for travel planning and while travelling?

Yeah, fair points about double-checking the details. I always verify anything important, like flight times or bookings, through the actual airline/hotel site anyway. I don't use it to get flight or hotel pricing, as I find it's not reliable. AI is brilliant for the planning stage, though, like helping me figure out rough itineraries or what areas to stay in.

The AI blocker thing isn't really how it works, from what I understand. Those are more about stopping companies from scraping websites for training data, not about blocking us from using ChatGPT or Claude to access their website info as a personal travel assistant. But I get the caution around accuracy; that's definitely real and worth keeping in mind.
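For what it's worth, the blocking usually happens through robots.txt directives aimed at the training crawlers. OpenAI's crawler identifies itself as GPTBot, for example, so a site that wants to opt out of training adds something like this (a standard robots.txt snippet, shown just for illustration):

    User-agent: GPTBot
    Disallow: /

That keeps the crawler out of the training pipeline, but it doesn't stop any of us from asking a chatbot about the site or pasting its info into a chat ourselves.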

I find it's a massive time-saver for the initial research phase. Then I treat it like I would any other travel blog or forum post: helpful info, but always verify the specifics yourself before committing to anything.
Where we'll probably end up is having multiple AI agents behind the scenes that co-operate: the first one helps you with high-level planning about where to go and for how long, a second fills in the details, and a third verifies everything and makes sure the first two weren't hallucinating (a rough sketch of what that could look like is below).

Then we just need to make sure SkyNet doesn't end up with some kind of multiple identity disorder!
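Just to make that concrete, here's a minimal sketch of a planner/detailer/verifier pipeline in Python. The ask_model() helper is a made-up stand-in for whatever LLM API you'd actually call, and the prompts are invented; it's a sketch of the shape, not a real product.

    # Toy three-agent pipeline: planner -> detailer -> verifier.
    # ask_model() is a hypothetical stub; swap in a real LLM client.
    def ask_model(role: str, prompt: str) -> str:
        """Stand-in for an LLM API call."""
        return f"[{role} response to: {prompt[:50]}...]"

    def plan_trip(request: str) -> dict:
        # Agent 1: high-level planning (where to go, for how long).
        outline = ask_model("planner",
                            f"Suggest destinations and durations for: {request}")
        # Agent 2: fill in the details (areas to stay, day-by-day plan).
        details = ask_model("detailer",
                            f"Expand into a day-by-day itinerary: {outline}")
        # Agent 3: cross-check the other two for hallucinations.
        review = ask_model("verifier",
                           f"Flag factual errors or invented places in: {details}")
        return {"outline": outline, "details": details, "review": review}

    result = plan_trip("Two weeks in Japan in spring on a mid-range budget")
    for stage, text in result.items():
        print(f"--- {stage} ---\n{text}")

The verifier pass is the interesting bit, though as everyone here keeps saying, you'd still confirm anything important yourself before booking.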
 
I tried to use Google AI to plan an upcoming holiday next month. It kind of worked, but I'll do my own research as well.

Tried to use it to search for award flight availability, but no luck.
 
AI, like many things in life, is overrated. Google is more interested in harvesting data about your spending habits than in helping you plan anything.
 
I'm finding that ChatGPT/AI tools still routinely spit out incorrect travel-related information. Like the other day, Google's AI overview suggested going for Qantas status over Velocity status because Qantas offers family pooling (it's actually Velocity, not Qantas, that offers family pooling).

The annoying thing is that the incorrect information is often presented with an unjustified tone of confidence, and people believe it. I regularly get questions from people who read something from an AI and want to know why the thing it told them to do isn't working. The reason is usually that the AI gave them bad advice.

I personally wouldn't rely on AI for travel advice, at least not yet.
 
I’ve found the same. I occasionally give it another go, hoping it’s improved, but it seems to consistently come up with incorrect information. I think the smug confidence with which the misinformation is delivered is the most infuriating part of these AI systems. If you point out an error, there’s a grovelling apology and then more misinformation is spat out. You really have to already be knowledgeable about the subject to pick up on the errors, so it seems pretty pointless.

Sadly, so many publications are using AI to generate travel stories now, so those errors are creeping into publications that were traditionally more trustworthy. I read one article in a magazine recently that announced new accommodation at Cradle Mountain, making a big deal about how there hasn’t been new accommodation opening at Cradle for a long time. And yet when you read further, it said the accommodation was on Lake St Clair, which is either a 6 day walk away or a 3 hour winding drive down the West Coast. It was a very AI-style error and I was surprised that an established lifestyle magazine had published such slop.
 

And it all becomes a merry-go-round: the incorrect/inaccurate/misleading information in an article automatically becomes a training source for the AI models, so they eat it up and use it to train themselves further, not knowing they're feeding on wrong information. They then reproduce this new "learning" as fact in a new article, which goes back into the loop to feed and train the models again.
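To put toy numbers on that loop (these rates are invented for illustration, not measured from anything): if some share of each training cycle's corpus is AI output that carries forward the existing errors plus a few fresh hallucinations, the corpus error rate ratchets up every cycle.

    # Toy feedback-loop simulation with invented parameters.
    def corpus_error_rate(generations: int,
                          initial_error: float = 0.05,
                          ai_share: float = 0.3,
                          new_hallucinations: float = 0.02) -> float:
        """Fraction of wrong 'facts' in the corpus after n training cycles."""
        error = initial_error
        for _ in range(generations):
            # The model reproduces the corpus's existing errors and adds new ones...
            ai_output_error = min(1.0, error + new_hallucinations)
            # ...and its output is blended back into the next training corpus.
            error = (1 - ai_share) * error + ai_share * ai_output_error
        return error

    for g in (1, 5, 10, 25):
        print(f"after {g} cycles: {corpus_error_rate(g):.1%} of 'facts' are wrong")

With these made-up numbers the error rate climbs from 5% to about 20% after 25 cycles; the exact figures are meaningless, but the one-way direction of the ratchet is the point.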

Never has all of the information on the internet been true or correct, yet we are supposed to believe that the AI models can differentiate fully between fact and fiction and not have hallucinations.

I'm sure that if enough people wrote online that the moon was made of cheese we would soon enough have astronomy students in university being taught that the moon was a lovely matured wheel of cheddar.
 
