Prevent Privacy & IP loss in AI LLMs Workshop


Instructor-led by industry experts in AI

Interactive sessions with AI Security & Privacy Experts

Trustworthy and Responsible AI for competitive advantage, while ensuring compliance with evolving regulations, ethical standards, and security requirements.

This specialized, instructor-led workshop focuses on challenges related to Large Language Models (LLMs) that can have a high impact on:

  1. GDPR compliance
  2. Intellectual property loss
  3. Wasted time, money, and computational resources


Areas we will cover:

  • How Large Language Models like ChatGPT memorize, link, and infer personal data.
  • The application of re-identification guidance to assess EU GDPR applicability.
  • The importance of evaluating LLMs themselves, not just their training data, for personal data processing risks.
  • Strategies for mitigating re-identification risks through privacy techniques and security measures, versus the potential conflict with a core LLM capability: their ability to memorize.
  • Factors that influence memorization in LLMs and ways of counteracting it.
  • The feasibility of pseudonymization in meeting GDPR criteria under certain conditions, potentially reducing GDPR obligations.
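As a small illustration of the kind of pseudonymization discussed in the final bullet, the sketch below replaces a direct identifier with a keyed-hash pseudonym before data reaches an LLM pipeline. This is a minimal, hypothetical example, not the workshop's methodology; the record fields and key handling shown here are assumptions for illustration.

```python
import hmac
import hashlib

# GDPR pseudonymization requires that the additional information needed
# to re-identify (here, the secret key) be kept separately from the data.
SECRET_KEY = b"store-me-separately"  # hypothetical key, managed out of band

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a deterministic keyed pseudonym."""
    digest = hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256)
    return "PSEUDO_" + digest.hexdigest()[:12]

# Example record whose name field is pseudonymized before processing.
record = {"name": "Alice Example", "query": "refund status"}
record["name"] = pseudonymize(record["name"])
```

Because the pseudonym is deterministic for a given key, records for the same person remain linkable for analysis, while re-identification requires access to the separately stored key.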