Description
Areas we will cover:
- How Large Language Models like ChatGPT memorize, link, and infer personal data.
- The application of re-identification guidance to assess EU GDPR applicability.
- The importance of evaluating LLMs themselves, not just their training data, for personal data processing risks.
- Strategies for mitigating re-identification risks through privacy techniques and security measures, versus the potential conflict with a core LLM functionality: their ability to memorize.
- Factors that influence memorization in LLMs and ways of counteracting it.
- Whether pseudonymization can, under certain conditions, meet GDPR criteria and thereby reduce GDPR obligations.