
Insights from the AI-Native Banking & Fintech Conference

Senior leaders in banking and fintech met in Salt Lake City to discuss how AI is reshaping products, risk, and regulation. Sessions highlighted practical deployments across fraud, underwriting, support automation, and secure LLM usage—alongside the non-negotiables of governance and cybersecurity. The event featured a Qubika case study on AI workflows for global payments, showing faster resolutions and sharper compliance with strong guardrails and human oversight.

This week in Salt Lake City, Spring Labs hosted the AI-Native Banking & Fintech Conference, bringing together senior leaders from across banking and fintech to exchange perspectives on the future of finance and AI.

In this article we highlight some of the key insights shared by participants during the conference.

Regulation, guardrails, and human oversight – Keeping AI accountable

A recurring theme throughout the conference was how regulation and oversight intersect with innovation and the development of AI agents.

This was the topic of one of the key panel discussions of the event, “Mitigating AI risks: Addressing bias and compliance with today’s AI models”. The expert lineup was moderated by Marc Rasich of Greenberg Traurig, LLP, and featured Kareem Saleh, Founder & CEO, FairPlay AI; Annie Delgado, Chief Risk Officer, Upstart; Martha O’Malley, Assistant General Counsel, Prosper Marketplace; and Elisabeth Bohlmann, Managing Director, Qubika.

The panel emphasized that regulation can both protect and complicate. With AI agents moving from theory to practice in under a year, the industry is already accumulating both success stories and cautionary failures. Panelists discussed using risk/impact matrices to identify low-risk, high-impact use cases, while always keeping a human in the loop.
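The risk/impact prioritization the panelists described can be sketched as a simple scoring exercise. The use cases, scores, and thresholds below are hypothetical illustrations, not figures from the conference:

```python
# Hypothetical risk/impact matrix for triaging AI use cases.
# Scores (1-5) and the use cases themselves are illustrative only.
use_cases = [
    {"name": "internal document search",   "risk": 1, "impact": 4},
    {"name": "customer support triage",    "risk": 2, "impact": 5},
    {"name": "automated credit decisions", "risk": 5, "impact": 5},
    {"name": "marketing copy drafts",      "risk": 1, "impact": 2},
]

def prioritize(cases, max_risk=2, min_impact=4):
    """Select low-risk, high-impact candidates; everything else
    stays gated behind human review before any deployment."""
    return [c["name"] for c in cases
            if c["risk"] <= max_risk and c["impact"] >= min_impact]

print(prioritize(use_cases))
# → ['internal document search', 'customer support triage']
```

Even in a sketch like this, a high-risk case such as automated credit decisions never passes the filter automatically; it stays with a human in the loop, which is the point the panel stressed.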

Jay Budzik, SVP and Director of AI/ML at Fifth Third Bank, advised community banks, which often lack deep tech resources, to develop AI strategies and partner with experienced providers.

The discussion underscored the importance of balance – that agents need to be proactive and predictive, but at the same time require strong guardrails to prevent unintended risks.

How AI is impacting regulation and the role of regulators

Several speakers addressed the role of regulators themselves. During the session on “How Bank Regulators Can and Should be Using AI”, Raj Date, Managing Partner at Fenway Summer, argued that supervisory processes are too slow. And because they are typically done manually, they reflect conditions 18-24 months in the past. He proposed experimenting with aggressive timelines for low-risk institutions, accepting some risk in exchange for more timely oversight.

Laurel Loomis Rimon, Partner & Fintech Co-Chair at Jenner & Block, reinforced the point that even in the absence of formal AI regulation, banks and financial institutions should have policies in place and demonstrate that they follow them. For regulators, the existence of a coherent policy indicates that the institution has thought through the risks and is taking responsibility for them.

AI in action: Banking and fintech use cases

The most compelling sessions showcased how leading institutions are already deploying AI. Here is a summary of some of the key points.

  • Affirm’s SVP and Head of Product, Vishal Kapoor, described how AI is reshaping both product development and customer interaction. Product managers now rely on large language model (LLM) agents to better understand customer needs, while designers use generative tools to accelerate prototyping – the so-called “vibe coding” approach. On the consumer side, Kapoor noted a crucial shift: people are treating LLMs less like search engines and more like advisors. This creates opportunities for Affirm to be positioned as the preferred option at the point of decision.
  • FinWise Bank’s Senior VP and Deputy Chief Compliance & Risk Officer, Kenzie Mexican, shared that they are leaning on AI primarily for fraud detection, deploying tools that connect signals across sponsor bank partners to reduce false positives. In underwriting, Mexican emphasized the importance of safeguards: the bank relies more on conservative, supervised machine learning models that are reviewed regularly by independent third parties to ensure reliability and compliance.
  • Pravesh Rijal of Cross River Bank offered yet another angle, highlighting the need for security-first approaches. Cross River has built its own ChatGPT variant with semantic layering, ensuring alignment with its internal security posture. By integrating AI tools directly with internal APIs, the bank has been able to improve operational productivity while maintaining strict guardrails around data access.
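The security-first pattern Rijal described pairs an internal LLM interface with strict checks on which data a request may touch. A minimal sketch of that idea follows; the roles, source names, and the `call_internal_llm` stub are entirely hypothetical, as Cross River’s actual implementation was not shared:

```python
# Hypothetical data-access guardrail in front of an internal LLM endpoint.
# The role-to-source policy and the model stub are illustrative only.
ALLOWED_SOURCES = {
    "analyst": {"public_docs", "ops_metrics"},
    "support": {"public_docs", "ticket_history"},
}

def call_internal_llm(prompt: str, context: list[str]) -> str:
    # Stand-in for a call to an internally hosted model.
    return f"answer based on {len(context)} approved sources"

def guarded_query(role: str, prompt: str, requested_sources: list[str]) -> str:
    """Refuse the request outright if any source falls outside
    the caller's policy, before the model ever sees the prompt."""
    allowed = ALLOWED_SOURCES.get(role, set())
    blocked = [s for s in requested_sources if s not in allowed]
    if blocked:
        raise PermissionError(f"{role} may not access: {blocked}")
    return call_internal_llm(prompt, requested_sources)

print(guarded_query("support", "summarize recent tickets", ["ticket_history"]))
# → answer based on 1 approved sources
```

Enforcing the policy before the model call, rather than filtering its output afterward, is what keeps sensitive data out of the prompt in the first place.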
  • From the fintech side, Mercury’s VP of Product, Ryan Wiggins, detailed how the company is weaving AI into daily workflows. Employees use ChatGPT Enterprise to solve internal tasks, averaging 15 queries per day. For clients, Mercury has automated onboarding processes to handle scale without adding headcount, and introduced accounting automations that let customers validate and upload transactions seamlessly. Wiggins also pointed to the potential of proactive insights – like flagging when advertising spend spikes unexpectedly – to deepen client value.
  • Finally, Robert Marx, VP of Compliance at WebBank, underscored the compliance dimension. By leveraging AWS Bedrock, WebBank has been able to secure its AI use cases and keep sensitive financial data protected, setting a model for responsible AI deployment in highly regulated environments.

Compliance and cybersecurity: The non-negotiables

The conference closed with a strong focus on governance and cybersecurity.

Alec Crawford, Founder and CEO of Artificial Intelligence Risk, warned that poor controls can have severe consequences, citing a Texas bank that nearly faced a $20M fine after staff uploaded sensitive data into a public AI tool. He stressed that not all employees should have equal access to AI systems, and that institutions must adopt frameworks like NIST’s, and go beyond them, to safeguard operations.

The message was consistent: AI can expand access to insights, but without strong guardrails, it also expands risk.

Feature presentation: Building AI workflows for a global payments platform

Andreas Fast, Director of Finance Solutions at Qubika, addressed the operational complexity of scaling global payments.

He presented a case study of a leading payments platform that handled nearly $25 billion in international payments in 2024, where surging transaction volumes strained merchant support and compliance teams. Manual processes slowed resolution times, while false-positive fraud alerts threatened scalability.

By deploying intelligent finance AI agents built with the Qubika Agentic Platform, the company achieved:

  • AI-powered merchant support and automation: Resolution times fell from hours to minutes as AI agents handled common queries.
  • Compliance efficiency: AI-driven alert analysis cut false positives by 80% and doubled daily case throughput, producing regulator-ready reports with human-level accuracy.

For this organization, AI has already become key to growth, compliance, and customer trust.

Looking ahead: AI’s transformative role demands a secure-first approach

The AI-Native Banking & Fintech Conference made it clear that AI has become foundational to payments, compliance, customer experience, and regulatory engagement.

For senior financial leaders, the challenge is twofold:

  1. Harness the potential of AI: Deploy intelligent agents that unlock scalability, personalization, and speed.
  2. Apply discipline: Build guardrails, policies, and oversight that maintain trust and regulatory readiness.

A big thank you to the organizers of the conference, Spring Labs, and to everyone who joined the event.

Modern data and AI-driven financial services solutions

Qubika’s Finance Studio offers IP-driven solutions, which both financial and non-financial organizations can use to build modern, next-generation financial services.


By Carla Massola

Senior Client Partner at Qubika

Carla Massola is a Senior Client Partner at Qubika. She partners with both enterprise and startup leaders to scope initiatives, align stakeholder expectations, and deliver measurable outcomes. Carla has a solid background in Economics and Digital Customer Experience. She brings expertise in strategic account leadership, executive stakeholder management, governance, and value realization across complex, multi-workstream engagements.
