In a dramatic juxtaposition of industry ambition and consumer apprehension, the global travel sector is pouring billions of dollars into the development of artificial intelligence agents capable of autonomously planning, booking, and managing entire travel itineraries. This ambitious push, dubbed "agentic" AI by industry insiders, promises a future where sophisticated algorithms handle every facet of a trip on behalf of the traveler. However, a significant chasm exists between the industry's fervent embrace of this technology and the cautious, even skeptical, sentiment held by the very consumers it aims to serve.

Gareth Williams, a pivotal figure in the evolution of online travel as the founder and former CEO of Skyscanner, one of the world's largest travel metasearch platforms, has a unique vantage point on this shift. Having witnessed the industry's transition from cumbersome paper tickets to the ubiquitous mobile booking era, Williams is now observing, as an investor, the nascent stages of an AI-driven revolution. This new frontier envisions AI not merely as an information provider, but as a proactive agent capable of making complex decisions, comparing options, and ultimately executing purchases, all without direct human intervention.

"I've been really struck by how negative the public is towards AI compared to people inside the industry," Williams stated in an interview with Skift. The contrast highlights a fundamental disconnect. While those within the travel ecosystem are actively shaping and promoting agentic AI, the general public lags significantly in its acceptance and readiness to relinquish control over travel plans. Williams attributes this wariness to a deeper, perhaps intuitive, understanding of how easily trust can be eroded.
He believes that consumer skepticism, while potentially underestimated by the industry's tech-forward proponents, may prove a far more formidable obstacle than initially anticipated. The very notion of handing over the reins of a vacation, a significant financial and emotional investment for most individuals, to an opaque algorithm can trigger anxieties about potential missteps, unmet expectations, and a loss of personal agency.

This prevailing skepticism stands in direct opposition to an industry rapidly accelerating its adoption of AI. The term "agentic" itself carries significant weight, implying a level of autonomy and decision-making power that moves beyond simple chatbots or search engines. The ultimate vision is an AI that can seamlessly book flights, secure hotel accommodations, and proactively manage unforeseen disruptions, such as flight cancellations or overbookings, without requiring the traveler to lift a finger. This goal is being pursued with substantial financial commitments, with major travel companies, technology providers, and venture capitalists investing heavily in research, development, and implementation.

The potential benefits of agentic AI for the travel industry are manifold. For businesses, it offers the promise of increased efficiency, reduced operational costs, and enhanced customer service through hyper-personalized recommendations and seamless problem resolution. By automating repetitive tasks and providing instant, intelligent assistance, AI agents could free up human agents to focus on more complex or high-value interactions. For consumers, the allure lies in effortless travel planning that saves time and cognitive load. Imagine an AI that understands your travel preferences, budget, and past booking history, then curates and books your entire trip, from flights and accommodations to activities and transportation, all while optimizing for value and convenience.
However, the "Skift Take" succinctly captures the core dilemma: "Travelers say they aren't ready to let AI book a trip. The industry is spending billions anyway – and no one has answered the most basic question: who pays when AI agents get it wrong?" This question of liability is perhaps the most critical and unresolved aspect of the agentic AI revolution. When a human travel agent makes a mistake, the recourse is generally clear: there is a professional entity responsible for errors, and mechanisms for compensation or rectification. But when an AI agent errs – a booking is made for the wrong dates, a hotel is misrepresented, or a critical connection is missed – who bears the financial and reputational cost?

The implications of an AI booking error can be significant. A misbooked flight could mean missed important events, a wrong hotel reservation might result in an uncomfortable or even unsafe lodging situation, and a failure to manage a disruption could strand travelers in unfamiliar locations. The financial repercussions could range from the cost of rebooking and additional accommodation to lost business opportunities and significant emotional distress. Without a clear framework for accountability, consumers are left with an unsettling level of risk.

Industry stakeholders are keenly aware of this challenge, but definitive answers remain elusive. Some companies are exploring insurance models specifically designed for AI-driven travel services, while others are focusing on robust internal error-checking mechanisms and human oversight protocols. Yet the complexity of AI decision-making, which can involve intricate algorithms and vast datasets, makes it difficult to pinpoint the cause of an error and assign blame. Is it the algorithm itself, the data it was trained on, the specific parameters set by the user, or a flaw in the integration with third-party booking systems?
The development of agentic AI in travel is not occurring in a vacuum. It is part of a broader societal conversation about the role of artificial intelligence in our lives. Concerns about data privacy, algorithmic bias, job displacement, and the potential for AI to be used for malicious purposes are prevalent across sectors, and in travel these broader anxieties can amplify consumer reluctance. The idea of an AI having access to personal travel preferences, past itineraries, and potentially even financial information raises privacy red flags. Furthermore, if AI agents are trained on biased data, they could inadvertently perpetuate discriminatory practices in pricing, recommendations, or access to certain travel services.

The current landscape of AI in travel predominantly focuses on enhancing existing functionality: chatbots provide customer support, recommendation engines suggest destinations and activities, and dynamic pricing algorithms optimize fares. These are largely assistive technologies that augment human capabilities rather than replace them. Agentic AI represents a significant leap from assistance to autonomous action, and that leap requires a commensurate leap in consumer trust along with a clear regulatory and legal framework.

The "trust fall" depicted in one of the accompanying images aptly symbolizes the leap of faith that travelers are being asked to take. It is a gesture of vulnerability, of relinquishing control in anticipation of support and a safe landing. For many consumers, however, the landing zone provided by an AI agent feels uncertain and potentially precarious. The industry's significant investment in this technology suggests a strong conviction that these challenges will eventually be overcome.
This conviction is likely fueled by the potential for immense cost savings, operational efficiencies, and a superior customer experience if agentic AI can be successfully implemented and widely adopted.

Experts in human-computer interaction and consumer psychology suggest that building trust in AI agents will require a multi-pronged approach. Transparency in how AI makes decisions, clear communication about its capabilities and limitations, and robust mechanisms for recourse and compensation in case of errors are paramount. The industry also needs to demonstrate a tangible benefit that clearly outweighs the perceived risks, whether demonstrably better prices, more personalized and curated experiences, or a level of convenience unattainable through traditional planning methods.

The journey from skepticism to widespread adoption will likely be gradual. Early adopters, tech-savvy individuals, and those who prioritize convenience above all else may be the first to embrace agentic AI. As the technology matures, and as more positive use cases and success stories emerge, public confidence is likely to grow. However, the travel industry must not underestimate the power of negative experiences: a single high-profile failure of an AI agent could set back public trust by years.

The billions being invested by the travel industry are a clear indicator of its belief in the transformative power of agentic AI. The potential for a more efficient, personalized, and seamless travel experience is a compelling prospect. But this technological ambition must be tempered with a deep understanding of consumer psychology and a proactive approach to the fundamental questions of trust and liability. Without these crucial elements in place, the promise of AI-driven travel risks remaining an unfulfilled aspiration, overshadowed by the enduring skepticism of the very travelers it seeks to serve.
The industry is at a critical juncture, where innovation must be guided by ethical considerations and a commitment to building genuine, not just technological, partnerships with its customers. The success of agentic AI in travel hinges not only on its technical prowess but on its ability to earn and maintain the trust of the human traveler.