Wednesday September 3, 2025 2:00pm - 2:50pm PDT
David vonThenen, Senior AI/ML Engineer, NetApp

In a digital landscape dominated by APIs and AI, security threats from adversarial manipulation have become critical risks. This session explores the intersection of APIs, AI security, and adversarial attacks. We'll dissect how adversaries manipulate APIs feeding data to machine learning models—by injecting noise, crafting misleading inputs, and exploiting data obfuscation techniques—to compromise model integrity and security. Attendees will gain insights into real-world adversarial scenarios, learn practical defensive techniques, and understand the implications for privacy, model fairness, and data reliability.
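
As a rough illustration of the kind of manipulation described above (not taken from the session materials), the Python sketch below perturbs a feature payload bound for a hypothetical model-serving endpoint. The endpoint URL, feature values, bounds, and noise budget are all illustrative assumptions.

```python
# A minimal sketch of how an adversary might perturb a feature payload
# sent to a hypothetical model-serving endpoint. The endpoint, feature
# values, bounds, and epsilon below are illustrative assumptions.
import json
import numpy as np

rng = np.random.default_rng(seed=7)

# Legitimate feature vector a client would normally submit.
legit_features = np.array([0.42, 1.10, 0.05, 3.30])

# Small, bounded noise keeps each value inside a plausible range,
# so naive min/max validation on the API side still passes.
epsilon = 0.08
noise = rng.uniform(-epsilon, epsilon, size=legit_features.shape)
adversarial_features = np.clip(legit_features + noise, 0.0, 5.0)

payload = {"features": adversarial_features.round(4).tolist()}
print(json.dumps(payload))
# e.g. POST this JSON to https://api.example.com/v1/predict (hypothetical)
```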

The session will provide practical examples and live demonstrations showcasing how adversarial strategies can exploit API vulnerabilities to undermine AI models. We'll examine defensive frameworks and best practices for securing APIs against adversarial attacks, ensuring data integrity, maintaining privacy compliance, and reinforcing ethical AI usage. By the end, attendees will be equipped with strategies for hardening their AI-driven APIs, proactively identifying vulnerabilities, and deploying robust security measures to mitigate adversarial threats.
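
For flavor, here is a minimal defensive sketch along the lines the abstract describes, assuming a hypothetical four-feature numeric payload: server-side schema and range checks combined with a simple z-score anomaly test against recently accepted traffic. The dimensions, bounds, baseline values, and threshold are assumptions, not the speaker's framework.

```python
# A minimal defensive sketch (assumed, not the speaker's framework):
# server-side validation combining schema/range checks with a simple
# statistical anomaly test before a request reaches the model.
import numpy as np

EXPECTED_DIM = 4
FEATURE_MIN, FEATURE_MAX = 0.0, 5.0

# Rolling baseline of recently accepted inputs (illustrative values).
baseline = np.array([
    [0.40, 1.05, 0.04, 3.25],
    [0.45, 1.12, 0.06, 3.35],
    [0.39, 1.08, 0.05, 3.28],
])
baseline_mean = baseline.mean(axis=0)
baseline_std = baseline.std(axis=0) + 1e-6  # avoid division by zero

def validate_request(features: list[float], z_threshold: float = 4.0) -> bool:
    """Reject malformed, out-of-range, or statistically anomalous inputs."""
    x = np.asarray(features, dtype=float)
    if x.shape != (EXPECTED_DIM,):
        return False
    if np.any(x < FEATURE_MIN) or np.any(x > FEATURE_MAX):
        return False
    z_scores = np.abs((x - baseline_mean) / baseline_std)
    return bool(np.all(z_scores < z_threshold))

print(validate_request([0.44, 1.11, 0.05, 3.31]))   # True: close to baseline
print(validate_request([4.90, 0.01, 4.80, 0.02]))   # False: flagged as anomalous
```
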
Speakers

David vonThenen

Senior AI/ML Engineer, NetApp
David is a Senior AI/ML Engineer at NetApp, where he’s dedicated to empowering developers to build, scale, and deploy AI/ML models in production. He brings deep expertise in building and training models for applications like NLP, data visualization, and real-time analytics.
API World -- Workshop Stage A (PRO)
