A recent Riskified survey revealed that a sizable gap exists between consumers’ trust in AI and their actual usage of these tools.
Despite 53.9% of consumers believing AI can increase their fraud risk, and 55% expressing discomfort with AI agents making purchases on their behalf, 61.5% have nonetheless used AI tools at some point along the shopping journey.
So, what’s driving this gap?
Bob Hutchins, CEO at Human Voice Media, drew parallels between AI and past technological advancements.
“For many of the same reasons people began to use GPS (and thus allowed tracking), they will begin to use AI. The tool works. The benefits show themselves sooner than the potential risks become an active barrier to adopting new behaviors. This same trend is evident in online banking, social login, autofill, and cell phone location sharing,” Hutchins told FI.
However, he also highlighted an important distinction between AI and the other tools: It can act on your behalf.
“You operated the internet. AI interacts with you, and occasionally for you. That alters how we think about trust. You’re giving away a little bit of control; you’re also providing a little bit of your data,” Hutchins explained.
Yad Senapathy, CEO and founder of PMTI, an institute that’s trained 80,000-plus professionals on best practices for organizations adopting new processes, believes the gap between consumer trust and adoption of AI is more about risk evaluation than confusion.
“People use AI tools when the risk of using them is manageable. For instance, when you search for product recommendations, there is very little risk involved. On the other hand, when you allow your AI agent to order something using your credit card, that is a much larger decision,” Senapathy told FI.
“That’s where the gap lies: 61.5% of people have used AI at some point in their shopping journey. However, 55% of people are uncomfortable allowing AI to make purchases on their behalf. This is not contradictory; this is rational human behavior,” he added.
So, is there any legitimacy to the fraud concerns?
Does Agentic AI Actually Increase Fraud Risk?
Eric Lam, CEO of Berry AI, told FI that fraud risk is a legitimate concern with agentic AI, but consumers’ worries are often misdirected: AI doesn’t inherently increase fraud risk so much as change how fraud occurs.
“Fraudsters can create more convincing phishing attempts, automate account takeovers, or mimic legitimate behavior at scale. But it’s also one of the most powerful tools we have to detect and prevent fraud and loss,” Lam explained.
Pavel Sirotin, VP of partnerships at Newo.ai, pointed out that AI is merely a tool and therefore has the capacity to scale both good and bad behavior.
“At the same time, structured systems can reduce risk by applying rules consistently and validating inputs in real time. What people are reacting to is the unevenness. Experiences vary widely depending on how the system is built,” Sirotin told FI.
Hutchins zeroed in on some specific ways AI can be used for fraudulent purposes.
“Voice cloning exists today. There’s measurable evidence of improved AI-generated phishing. Synthetic identity fraud is increasing, and deepfake videos have reached consumer-level quality through free tools,” Hutchins told FI.
Patrick Gibbs, founder of AI automation agency Epiphany Dynamics LLC, added that, though the fraud-related fears are partially legitimate, they’re often “mis-framed.”
“The fraud vector that scales with AI is not ‘AI buying things on consumers’ behalf.’ It is voice cloning and conversational social engineering,” Gibbs told FI.
So, what are some ways brands leveraging agentic AI can build trust with their customer base?
Best Practices for Building Trust in Agentic AI
Lam recommends cultivating trust in AI through outcomes over messaging.
“The mistake brands make is trying to convince users to trust the technology instead of letting the experience prove itself,” Lam told FI.
That said, at least some messaging about AI tools is unavoidable. Hutchins stressed the importance of authenticity and accuracy in communications, as brands that oversell their products are typically the ones that lose trust fastest.
“Communicate clearly what occurs beneath the hood. Use clear language when describing what data enters and exits and what data is stored,” Hutchins advised.
Matt Little, founder and managing director of Festoon House, has built an e-commerce brand on AI-driven marketing, paid media, and performance funnels, and he agrees that transparency is key.
“The most successful consumer-facing brands deploying AI tools are doing so transparently. Consumers know they’re using AI, understand how the tools are being used, and can easily override it,” Little told FI, adding that another effective method is a gradual opt-in to the technologies.
“Begin with low-risk interactions that allow consumers to remain in control and gradually expand use to higher-risk interactions,” he advised.
Little shared that when his company started providing AI-based product recommendations using a “suggested for you” label instead of a default selection, the return rate increased by 23.4% over the next quarter.
Lam added that skepticism often increases when people feel like a platform’s incentives aren’t aligned with their best interests.
“If AI tools become overly commercialized or start prioritizing paid outcomes over useful ones, trust will erode quickly,” he explained.