Latest Insights on AI Trust and Verification
Stay up to date with our expert analysis of AI audit practices, digital trust-building, and emerging trends in agent verification, so your AI systems remain reliable and trustworthy.

October 11, 2025
In August 2025, researchers uncovered a novel attack vector targeting production AI systems — a vulnerability that exploits something as simple as image scaling.
Their findings, described in “Weaponizing Image Scaling Against Production AI Systems,” reveal how attackers can hide malicious prompts inside images that only become visible after the image is resized.
When the AI system processes the scaled version, it can be tricked into executing unintended commands — from data exfiltration to external API calls — without any visible warning to the user.
Read more about: When Image Scaling Becomes a Weapon: Why Every AI Deployment Needs Independent Auditing
Image with a hidden message for the AI agent [source]

How the Exploit Works
The attack takes advantage of a subtle but widespread design flaw:
- Many AI platforms automatically resize images before analysis or inference.
- This preprocessing step (using common interpolation algorithms such as bilinear, bicubic, or nearest neighbor) recomputes pixel values in predictable ways, creating opportunities to hide invisible “payloads” in the full-resolution image.
- When scaled, those hidden pixels form visible text or tokens that the AI interprets as legitimate commands.
Trail of Bits demonstrated that this exploit worked on several real-world AI systems, including Gemini CLI, Vertex AI, and Google Assistant integrations.
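To see why the resizing step matters, compare what different resampling filters produce from the same upload. The short Pillow sketch below is illustrative only; the filename and target resolution are placeholder assumptions, not details from the research.

```python
# Illustrative sketch: the same source image downscaled with different filters
# yields different pixel values, so what the model "sees" depends entirely on
# the preprocessing pipeline. Filename and target size are placeholders.
from PIL import Image
import numpy as np

src = Image.open("uploaded_image.png").convert("RGB")    # hypothetical upload
target = (512, 512)                                      # assumed model input size

nearest  = np.asarray(src.resize(target, Image.Resampling.NEAREST))
bilinear = np.asarray(src.resize(target, Image.Resampling.BILINEAR))
bicubic  = np.asarray(src.resize(target, Image.Resampling.BICUBIC))

# These per-pixel differences are exactly what an attacker can fingerprint:
# a payload tuned for one filter can stay invisible under another.
print("nearest vs bilinear differing pixels:",
      int((nearest != bilinear).any(axis=-1).sum()))
print("bilinear vs bicubic differing pixels:",
      int((bilinear != bicubic).any(axis=-1).sum()))
```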
Why This Matters
This vulnerability is not about a single bug. It’s a systemic risk that emerges from the complexity of modern AI pipelines.
- AI systems are more interconnected than ever. Images move through multiple components (browsers, preprocessors, APIs, agents), each with its own assumptions, and every transformation layer (compression, scaling, encoding) adds risk.
- Attackers only need one weak link. A mismatch between what the user sees and what the model processes is all it takes to introduce harmful or manipulative instructions.
- Default protections are not enough. Even standard libraries like OpenCV and Pillow implement scaling differently, and an attacker can fingerprint those differences and craft images specifically designed for your system’s behavior.
The researchers’ recommendations include:
- Always showing a preview of the image as seen by the model.
- Avoiding automatic scaling when possible.
- Limiting image size and file types.
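To make the first recommendation concrete, here is a minimal sketch of an upload gate that restricts file types and sizes and saves the exact downscaled image the model will receive, so it can be shown to the user. It assumes a Pillow-based pipeline; the allowed formats, size limit, and model resolution are illustrative assumptions.

```python
# Minimal sketch of the "preview what the model sees" recommendation,
# assuming a Pillow-based pipeline. The allowed formats, size limit, and
# model input size are illustrative, not prescriptive.
from PIL import Image

ALLOWED_FORMATS = {"PNG", "JPEG"}      # restrict accepted file types
MAX_PIXELS = 4096 * 4096               # reject oversized uploads
MODEL_INPUT_SIZE = (512, 512)          # assumed model input resolution

def prepare_image(path: str) -> Image.Image:
    """Validate the upload and return the exact image the model will see."""
    img = Image.open(path)
    if img.format not in ALLOWED_FORMATS:
        raise ValueError(f"unsupported format: {img.format}")
    if img.width * img.height > MAX_PIXELS:
        raise ValueError("image too large")
    # Use one explicit, documented resampling filter end to end so the preview
    # shown to the user matches the model input exactly.
    preview = img.convert("RGB").resize(MODEL_INPUT_SIZE, Image.Resampling.BICUBIC)
    preview.save("model_view_preview.png")   # surface this preview to the user
    return preview
```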
But even with such safeguards, the core issue remains: complexity makes unseen vulnerabilities inevitable.
Why Independent Auditing Is Critical
Protocols and internal testing are valuable, but an external, specialized audit provides a deeper layer of protection.
Here’s why:
- Complex systems hide complex risks: AI agents, multimodal inputs, and API chains make it difficult to track every transformation. Independent auditors can trace end-to-end behavior and identify invisible exposure points.
- Subtle flaws lead to major consequences: what looks like a harmless preprocessing step could lead to data loss, financial harm, or reputational damage.
- Audits ensure accountability and trust: for regulated industries (finance, healthcare, insurance, e-commerce), an external audit demonstrates that your AI systems have been independently validated for safety and compliance.
The Takeaway
The image scaling attack is a reminder that modern AI systems are not just models — they are complex ecosystems of transformations, agents, and integrations.
And in such ecosystems, even the smallest oversight can be weaponized.
For organizations deploying AI agents or multimodal systems, the value of automation and scalability is immense — but so are the risks. Engaging independent AI auditors helps ensure that your innovations remain secure, compliant, and trustworthy.
Because in AI, visibility isn’t enough — you need verification.
September 28, 2025
AI is shifting from tools that generate outputs to agents that act — negotiating, transacting, and executing workflows on our behalf. This leap brings enormous value, but also new risks.
Read more about: The Rise of Agentic Protocols
Google’s AP2: Enabling Agentic Payments with Trust

Recently, Google introduced the Agent Payments Protocol (AP2) — an open, shared standard to let AI agents carry out transactions (among merchants, banking systems, and users) in a secure, verifiable way.
- Mandates & Verifiable Credentials: Every instruction from a user is captured in a cryptographically signed “mandate,” which becomes the non-repudiable proof of user intent.
- Multiple Modes: AP2 accommodates both real-time (human in the loop) and delegated purchases (agent acts later within permitted bounds), using “intent mandates” and “cart mandates.”
- Payment Agnostic: The protocol supports credit cards, debit cards, real-time bank transfers, as well as stablecoins and crypto integrations, making it future-flexible.
- Auditability & Accountability: Because each step is traced via mandates and verifiable credentials, the protocol gives edges of transparency: you can trace who authorized what, when.
In short: AP2 is an attempt to give AI agents the “permissions + visibility + trust infrastructure” that humans naturally assume in transactions. It aims to shrink the gap between human-based commerce and agent-based commerce, without compromising security or compliance
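As a rough illustration of the signed-mandate idea (this is a toy example, not the actual AP2 schema or wire format), the sketch below signs and verifies a user instruction with an Ed25519 key pair using the Python cryptography library; every field name is invented.

```python
# Toy example of a cryptographically signed "mandate". The structure and field
# names are invented for illustration and are NOT the real AP2 mandate format.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

user_key = Ed25519PrivateKey.generate()          # the user's signing key

mandate = {
    "type": "intent_mandate",                    # hypothetical field
    "agent_id": "shopping-agent-42",             # hypothetical field
    "constraint": "running shoes, max 120 USD",  # hypothetical field
    "expires": "2025-10-01T00:00:00Z",
}
payload = json.dumps(mandate, sort_keys=True).encode()
signature = user_key.sign(payload)               # non-repudiable proof of intent

# Anyone holding the user's public key can later verify that this exact
# instruction was authorized; verification raises InvalidSignature if the
# payload was tampered with.
user_key.public_key().verify(signature, payload)
print("mandate verified")
```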
Why Identity & Access Models Break When Agents Come In
While AP2 tackles the payments dimension, identity and access control are an equally thorny frontier. Felicis, among others, argues that traditional IAM (Identity & Access Management) systems are ill-suited for AI agents.
- Traditional IAM assumes deterministic actors (e.g. services, human users) whose behavior is predictable and bounded.
- AI agents, by contrast, are non-deterministic, dynamic, and context-sensitive. They might make decisions or branch logic unexpectedly.
- Static roles, fixed scopes, and long-lived permissions create risk: over-permission, privilege creep, and gaps when the agent adapts or evolves.
To support agents securely, Felicis suggests shifting from static permission models to dynamic, context-aware permissions, ephemeral authorizations, and identity systems that understand “agenthood” as a first-class entity.
In the broader “agentic web” ecosystem, think of identity and authentication layers (e.g. verifying the agent’s identity vs. verifying the user-agent linkage) as foundational scaffolding.
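What “dynamic, context-aware, ephemeral” can look like in practice: the sketch below issues a narrow, short-lived grant derived from the agent’s current context instead of assigning a static role. The policy rules, scopes, and field names are invented for illustration and do not reference any specific IAM product or standard.

```python
# Sketch of dynamic, context-aware, short-lived authorization for agents.
# All rules, scopes, and field names here are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentContext:
    agent_id: str
    acting_for_user: str
    task: str                # what the agent is trying to do right now
    amount_usd: float = 0.0

@dataclass
class EphemeralGrant:
    scope: str
    expires_at: datetime

def authorize(ctx: AgentContext) -> EphemeralGrant:
    """Issue a narrow, short-lived grant based on the current context."""
    if ctx.task == "read_catalog":
        ttl, scope = timedelta(minutes=30), "catalog:read"
    elif ctx.task == "place_order" and ctx.amount_usd <= 100:
        ttl, scope = timedelta(minutes=5), "orders:create:limit-100"
    else:
        raise PermissionError(f"context not permitted: {ctx.task}")
    return EphemeralGrant(scope=scope,
                          expires_at=datetime.now(timezone.utc) + ttl)

grant = authorize(AgentContext("agent-7", "alice", "place_order", 42.50))
print(grant.scope, "valid until", grant.expires_at.isoformat())
```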
The High Value — and High Stakes — of Agentic Deployments
The value is clear:
- Efficiency — agents automate multi-step processes across tools.
- User experience — agents act proactively, from shopping to scheduling.
- New business models — agent-driven services open new markets.
But the risks are real:
- Financial loss if agents misinterpret mandates or overstep permissions.
- Reputation damage if customer-facing agents behave unexpectedly.
- Security exposure when static roles or broad access collide with adaptive agent behavior.
Even with AP2 or identity protocols, a small configuration error can have outsized consequences.
Why Expert Review is Essential
Protocols provide the foundation, but expert oversight ensures safe implementation. Here’s why:
- Complexity is unavoidable: agents cross systems, APIs, and contexts.
- Mistakes are subtle but costly: a small misstep can trigger financial or compliance fallout.
- Audits build trust: with regulators, partners, and customers.
- Oversight must be ongoing: agents and environments evolve constantly.
Final Thoughts
Agentic AI is a leap forward: it can transact, optimize, and operate independently. But its very power makes it risky if deployed carelessly. Protocols like AP2 and emerging identity models are promising, yet the missing piece is expert review and auditing.
Companies that embrace both innovation and oversight will unlock the benefits of agentic AI while avoiding its pitfalls.
September 21, 2025
Agentic AI (systems that can act, decide, and operate autonomously) is transforming businesses. From e-commerce platforms and insurers to financial services and crypto exchanges, companies are deploying AI agents to handle transactions, customer support, risk assessments, and more.
Read more about: Why Agentic AI Needs Trusted Verification
Rising Adoption
- The global Agentic AI market was valued at $5.2 billion in 2024 and is projected to exceed $190 billion by 2034.
- Nearly 80% of businesses report experimenting with or deploying AI agents in some capacity.
- Key areas of use include customer engagement, trading, claims processing, and internal operations.
The Risk of Unverified Agents
While adoption grows, so do the risks. Deploying AI agents without proper verification can cause:
- Financial loss — costly mistakes, compliance penalties, or failed trades.
- Reputation damage — agents mishandling customers or exposing sensitive data.
- Operational disruption — uncontrolled or misaligned agent behavior.
A recent industry estimate suggested that tens of billions of dollars in damages could stem from unverified AI agent failures in finance, healthcare, and e-commerce.
The Best Defence: Independent Verification
The strongest safeguard is to have specialized audit services verify your AI agents before and during deployment. This ensures:
- Confidence for customers and partners, leading to greater adoption and higher ROI.
- Trusted and compliant agents that align with industry regulations.
- Risk identification upfront so uncontrolled behaviors never reach production.
- Transparency and accountability in how agents operate.
Final Thoughts
Agentic AI is here to stay — but without trust, it can do more harm than good. Verifying and auditing your AI agents ensures innovation with security, compliance, and confidence.
