Education

M.Sc. in Computer Science
Advisor: Dr. Cicek

B.Sc. in Computer Engineering
Advisor: Dr. Tanha
Research Interests
Honestly, I’m curious about almost everything related to computers! Right now, my main interests are:
Data Privacy
Computational Biology
Generative Models
Deep Learning
Academic Experience
Bilkent University
Teaching Assistant - CS101 Algorithms and Programming I
Teaching Assistant - CS223 Digital Design
Semesters: Spring 2025, Fall 2024, Spring 2024, Fall 2023, Spring 2022, Spring 2021, Fall 2020
Community Service
IEEE Transactions on Computational Biology and Bioinformatics (TCBB) - Reviewer
Research in Computational Molecular Biology (RECOMB) - Speaker
Intelligent Systems for Molecular Biology (ISMB) Conference - Reviewer
Papers
Generated Data with Fake Privacy: Hidden Dangers of Fine-tuning Large Language Models on Generated Data
Authors: Akkus A, Poorghaffar Aghdam M, Li M, Chu J, Backes M, Zhang Y, Sav S.
Fine-tuning large language models (LLMs) with generated data is often considered a privacy-preserving alternative to real data, but our study reveals significant privacy risks. We evaluate Personal Information Identifier (PII) leakage and Membership Inference Attacks (MIAs) on the Pythia Model Suite and Open Pre-trained Transformer (OPT), finding that fine-tuning with generated data can increase privacy vulnerabilities.
USENIX Security’25 · https://usenix.org/conference/usenixsecurity25/presentation/akkus