I am a Master's student in Electrical and Computer Engineering at Johns Hopkins University. I work as a Research Assistant at the SMILE Lab, affiliated with the
Center for Language and Speech Processing (CLSP), under the supervision of Prof. Berrak Sisman.
I am passionate about research. My current research interests focus on deep-learning-based speech and language processing, especially in healthcare-related areas. My recent work has led to two paper submissions to Interspeech 2026, one as first author and the other as second author. I have also submitted a first-author paper to EMNLP 2026.
I received my bachelor’s degree from Dalian University of Technology, where I worked as a Research Assistant under the supervision of Prof. Qiufen Xia.
During my undergraduate studies, I focused on cold-start latency optimization in serverless computing and published a first-author paper on this work.
Outside of research, I also love films and traveling. My current dream is to travel to France to watch films at the Cannes Film Festival.
Hsiang-Chen Yeh, Luqi Sun, Aurosweta Mahapatra, Shreeram Suresh Chandra, Emily Mower Provost, Berrak Sisman
This study investigates whether speech-based depression detection models learn depression-related acoustic biomarkers or instead rely on speaker identity cues. Using the DAIC-WOZ dataset, we propose a data-splitting strategy that controls speaker overlap between training and test sets while keeping the training size constant, and evaluate three models of varying complexity. Results show that speaker overlap significantly boosts performance, whereas accuracy drops sharply on unseen speakers. Even with a Domain-Adversarial Neural Network, a substantial performance gap remains. These findings indicate that depression-related features extracted by current speech models are highly entangled with speaker identity. Conventional evaluation protocols may therefore overestimate generalization and clinical utility, highlighting the need for strictly speaker-independent evaluation.
Luqi Sun, Qiufen Xia, Xiaolong Zhai, Zhen Feng, Jiankang Ren
International Conference on Mobility, Sensing and Networking (MSN 2025)
While serverless computing offers developers convenience and cost benefits, cold-start latency remains a critical performance bottleneck. This paper presents an innovative solution based on a nested container architecture: we design and implement a Nested Container System that achieves efficient runtime sharing through pre-warmed parent containers and lightweight child containers, combined with Docker volume mechanisms. The system employs optimized Docker-outside-of-Docker (DooD) technology, enabling resource sharing while maintaining container isolation. Experimental results show that, compared to traditional OpenWhisk, our Nested Container System significantly reduces cold-start times (by up to 90% in some scenarios) while maintaining similar CPU utilization and lower memory consumption. This research provides new insights and solutions for resource management and performance optimization in serverless computing.
Qiufen Xia, Luqi Sun, Zichuan Xu
Chinese Patent (CN120821531B) · Dalian University of Technology
The present invention discloses a container cold-start method for serverless computing based on a nested architecture, relating to the field of serverless computing technology. In response to a user function call request, the method selects a target parent container based on the parent container's running-status score. It periodically decides on and executes parent-container scaling based on node CPU utilization and scaling timing, configuring resources and mounting shared dependencies when scaling up, and preferentially releasing low-load resources when scaling down. After analyzing a function's dependencies, it creates a child container and mounts the parent container's shared dependency directory along with an independent temporary dependency directory; once the child container finishes executing the function, the parent container cleans up the related resources. The method improves response efficiency through accurate parent-container scheduling, optimizes resource utilization through dynamic scaling, shortens cold-start time by sharing dependencies to reduce repeated loading, and preserves isolation throughout. Resources are cleaned up promptly to avoid leaks, so the method adapts effectively to multi-tenant and diversified deployment scenarios and strikes a balance between low resource usage and low startup latency.
SMILE Lab, affiliated with the Center for Language and Speech Processing (CLSP)
Advised by Prof. Berrak Sisman.
Research focus: Deep-learning-based speech and language processing for healthcare
Output:
Advised by Prof. Qiufen Xia.
Research focus: Cold start latency optimization in serverless computing
Output:
Electrical and Computer Engineering
Digital Media Technology, Software Engineering