List of cybersecurity conferences in 2025
Discover the most influential Cybersecurity conferences of 2025. From global summits to niche forums, this guide helps you stay informed, connected, and ahead of threats
Qubika at Databricks Data + AI Summit
Join us June 9-12 to see our leading Databricks capabilities
Financial services
Expertise in core banking, BaaS integrations, payments, and GenAI-enhanced financial solutions.
Healthcare
People-centric healthcare design and solutions, from virtual care, integrations, to smart devices.
Insurance
Modern solutions including self-service, on-demand, and algorithm-driven personalization.
Hi-tech & semiconductors
Semiconductor design, firmware and IoT development, and AI-powered embedded systems.
Qubika is a Databricks Select Partner
Learn more about our journey, our 200+ certified Databricks experts, and how we’re delivering solutions such as autonomous AI agents.
Databricks Capabilities
Learn more about Qubika's strong partnership with Databricks as a Select Partner, delivering solutions across the finance, banking, healthcare, hi-tech, and entertainment industries.
Databricks Impact on Financial Institutions
Databricks empowers financial institutions to harness unified data and AI, such as for real-time fraud detection, dynamic risk modeling, and personalized customer experiences.
June 19, 2025
Explore the OWASP Top 10 risks for LLMs — from prompt injection to data poisoning — and how Qubika helps secure AI systems with robust cybersecurity strategies.
As Large Language Models (LLMs) get more integrated into our everyday lives, from writing emails to summarizing complex documents and even helping doctors diagnose illnesses, the potential impact of the security issues inside these models grows exponentially.
The OWASP AI Security Project has emerged as an initiative to help organizations prepare for the upcoming wave of generative artificial intelligence (GenAI) security attacks for this type of applications.
Prompt injection remains the top concern when it comes to securing LLMs. This vulnerability allows an attacker to craft malicious inputs that manipulate the model’s output, potentially leading to data breaches or even compromising the entire system.
For example, imagine a chatbot designed to answer user questions being tricked into revealing information about other users or executing unauthorized commands.
This vulnerability focuses on LLMs leaking confidential business information, proprietary algorithms, or similar sensitive data. Unlike other issues, the risk here isn’t limited to malicious actors; the model itself can become the vector for the leak.
A well-known real-world example is the “Proof Pudding” exploit, where training data was inadvertently disclosed and used to manipulate an email filter. This case demonstrated how seemingly minor disclosures can escalate into a severe and dangerous vulnerability.
This issue focuses on the vulnerabilities introduced by the components that make up an LLM, such as libraries, pre-trained models, and other dependencies. Since LLMs are built from multiple sources, any one of these components can introduce risks. The problem isn’t just the LLM itself but its entire supply chain. Think of it like a car: even if only the engine has an issue, the whole vehicle needs to be serviced.
A real-world example is PoisonGPT, a malicious LLM used to bypass security measures and spread misinformation. It even managed to infiltrate a popular AI platform, demonstrating how weaknesses in the supply chain can lead to dangerous consequences. Organizations need to stop assuming these platforms are secure and instead demand proof, applying the same level of scrutiny they would with any other business partner.
This vulnerability highlights how easily models can be manipulated during their training process by introducing biased data or deliberately false information. For example, imagine a finance chatbot trained on data that promotes a specific company or favors one stock over another, recommending it to users even when it’s not the best option.
To prevent this, data hygiene is critical. The datasets used to train models must be thoroughly examined and validated before relying on the AI to make decisions. A multilayered approach is essential, combining machine learning techniques with human expertise to ensure the data is clean and trustworthy. This approach helps safeguard the model from being influenced by poisoned datasets, making it more robust and reliable.
Even if the training data is clean and the model itself is secure, the output can still be manipulated or misused if not handled properly. Continuing the car analogy, even if the car is safe and well-tested, it can still cause harm in the hands of a reckless driver.
A common real-world example of this vulnerability is when LLM output is passed directly into a system shell or similar function, leading to remote code execution. This highlights the importance of treating LLM output as an untrusted data source. To mitigate this risk, outputs must be carefully validated, sanitized, and checked for inconsistencies, errors, or potentially dangerous content.
The issue arises when AIs are granted too much autonomy without proper human oversight. It’s like giving a child the keys to a car…potentially dangerous without supervision and safeguards. LLM-based systems are often allowed to call functions, interface with other systems, or perform actions through tools, skills, or plugins. For example, an LLM responding to customers on social media could post something inappropriate, harming a brand’s reputation.
To mitigate this, clear boundaries must be defined regarding what each model is permitted to do. Safeguards should be in place to prevent overstepping, and human oversight remains essential, just as we monitor human employees.
Leaking the base prompt of an LLM can be just as dangerous as exposing an API key. The base prompt often contains critical details about the system’s functionality, restrictions, and interactions, potentially including sensitive information. This is like giving someone the blueprints to your system.
For example, in a financial trading LLM, the prompt might reveal trading strategies, risk tolerance levels, or conditions for certain actions. If an external actor obtains this information, they could exploit it to manipulate the system. To prevent this, strict access controls for prompts are essential, along with minimizing the inclusion of sensitive information to reduce the risk of further exposure.
Vectors and embeddings act as the DNA of an LLM. They are the numerical representations of words and concepts that enable the model to understand and process language. However, these vectors and embeddings can be manipulated or tampered with, altering how the LLM interprets information and how it generates responses.
For instance, an attacker could modify embeddings to subtly guide the tone of responses in favor of a specific topic or, for example, recommending one political candidate over another. Addressing this vulnerability requires careful monitoring and validation of embeddings to ensure the integrity of the LLM’s outputs.
In an era of deepfakes and fake news, misinformation is one of the most pressing concerns surrounding LLMs. These models can be misused to fake conversations (e.g., customer support) or craft personalized emails, amplifying misinformation at scale. However, not all misinformation propagated by LLMs is intentional. A major cause is hallucinations; when the model generates content that appears accurate but is entirely fabricated.
Hallucinations occur because LLMs rely on statistical patterns to fill gaps in their training data without truly understanding the content. This can lead to answers that sound credible but lack any factual basis. Additionally, biases in the training data and incomplete information further exacerbate the problem. Common risks include factual inaccuracies, unsupported claims, and unsafe code generation. Addressing misinformation requires robust fact-checking, model fine-tuning, and human oversight to ensure outputs remain reliable and trustworthy.
This issue occurs when an LLM is allowed to run unchecked, consuming excessive resources such as power, network bandwidth, and data storage. In our car analogy, it’s like having a brick stuck on the accelerator, with no way to stop it. If left unregulated, this uncontrolled resource usage can cause other critical systems to underperform.
The financial impact can also be significant, as cloud environments are designed to scale with demand, potentially leading to massive, unexpected costs. To mitigate this, it’s essential to monitor how LLMs interact with resources, detect unusual activity, and have the right tools in place to proactively prevent surprises. Proper limits and oversight are key to ensuring smooth, efficient operation.
As LLMs become integral to our systems and daily lives, their vast potential for enhancing productivity, innovation, and business capabilities also comes with significant cybersecurity risks. Navigating these challenges requires expertise and a proactive approach.
At Qubika, we are committed to safeguarding organizations by developing and implementing robust security measures. Our comprehensive cybersecurity services include securing LLMs and AI within applications, ensuring safe integration and protection against vulnerabilities. With offerings like AI security, Application security, Compliance Assessment, Cloud Security, Qubika’s portfolio is designed to protect enterprise systems while enabling the safe and effective use of these powerful technologies. By addressing risks head-on, we empower businesses to harness the full potential of LLMs without compromising on security or trust.
Qubika's Cybersecurity Studio offers a full suite of advanced services to protect your assets, streamline security, and ensure seamless business operations
Tags
Cybersecurity Senior at Qubika
Receive regular updates about our latest work
Discover the most influential Cybersecurity conferences of 2025. From global summits to niche forums, this guide helps you stay informed, connected, and ahead of threats
If you’re looking forward to making the most of the late summer and early autumn season, here are some exciting events lined up for September, and October!
Learn about how our Cloud, SRE & Cybersecurity team utilize some best practices to enhance cybersecurity in our client’s products.
Receive regular updates about our latest work
Get in touch with our experts to review your idea or product, and discuss options for the best approach
Get in touchArtificial Intelligence Services
Accelerate AI
Healthcare Solutions
Data
Agentic Factory
Financial Services Technology
Platform engineering
Data Foundation