Portfolio item number 1
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in Proceedings of the ACL-21 Workshop on Computing Semantics with Types, Frames and Related Structures, 2021
This paper briefly presents an evaluation of three models: a domain-specific one based upon typed feature structures, a neural language model, and a mixture of the two, on an unseen but in-domain corpus of user queries in the context of a dialogue classification task. We find that the mixture performs the best, which opens the door to a potentially new application of neural language models. We also consider the inner workings of the domain-specific model in more detail, as well as how it came into being, from an ethnographic perspective. This has changed our perspective on the potential role of structured representations in the future of dialogue systems, and suggests that formal research in this area may have a new role to play in validating and coordinating ad hoc dialogue systems development.
Recommended citation: Penn, Gerald and Shi, Ken. (2021). "Feature Structures in the Wild: A Case Study in Mixing Traditional Linguistic Knowledge Representation with Neural Language Models." Proceedings of the ACL-21 Workshop on Computing Semantics with Types, Frames and Related Structures. https://aclanthology.org/2021.cstfrs-1.6.pdf
Published in Proceedings of the First Workshop on Writing Aids at the Crossroads of AI, Cognitive Science and NLP (WRAICOGS 2025), 2025
In this paper, we introduce the concept of Semantic Masking, where semantically coherent surrounding text (the haystack) interferes with the retrieval and comprehension of specific information (the needle) embedded within it. We propose the Needle-in-a-Haystack-QA Test, an evaluation pipeline that assesses LLMs’ long-text capabilities through question answering, explicitly accounting for the Semantic Masking effect. We conduct experiments to demonstrate that Semantic Masking impacts LLM performance significantly more than text length does. By accounting for Semantic Masking, we provide a more accurate assessment of LLMs’ true proficiency in utilizing extended contexts, paving the way for future research to develop models that are not only capable of handling longer inputs but are also adept at navigating complex semantic landscapes.
Recommended citation: Ken Shi and Gerald Penn. 2025. Semantic Masking in a Needle-in-a-haystack Test for Evaluating Large Language Model Long-Text Capabilities. In Proceedings of the First Workshop on Writing Aids at the Crossroads of AI, Cognitive Science and NLP (WRAICOGS 2025), pages 16–23, Abu Dhabi, UAE. International Committee on Computational Linguistics. https://aclanthology.org/2025.wraicogs-1.2/
Published:
Voice assistants in vehicles have been a popular application of dialogue systems, and there have been many different approaches to this task. This talk will briefly present an evaluation of three models: a domain-specific one based upon typed feature structures, a neural language model, and a mixture of the two, on an unseen but in-domain corpus of user queries in the context of a dialogue classification task. The finding opens the door to a potentially new application of neural language models. The study has changed our perspective on the potential role of structured representations in the future of dialogue systems, and suggests that formal research in this area may have a new role to play in validating and coordinating ad hoc dialogue systems development.
Published:
In this paper, we introduce the concept of Semantic Masking, where semantically coherent surrounding text (the haystack) interferes with the retrieval and comprehension of specific information (the needle) embedded within it. We propose the Needle-in-a-Haystack-QA Test, an evaluation pipeline that assesses LLMs’ long-text capabilities through question answering, explicitly accounting for the Semantic Masking effect. We conduct experiments to demonstrate that Semantic Masking impacts LLM performance significantly more than text length does. By accounting for Semantic Masking, we provide a more accurate assessment of LLMs’ true proficiency in utilizing extended contexts, paving the way for future research to develop models that are not only capable of handling longer inputs but are also adept at navigating complex semantic landscapes.
Graduate / Undergraduate Courses, University of Toronto, 2023
Two Teaching Assistantships: CSC485/2501 in the Department of Computer Science, and ESC180 in the Department of Applied Science and Engineering.
Graduate / Undergraduate Courses, University of Toronto, 2024
Two Teaching Assistantships: CSC401/2511 in the Department of Computer Science, and ESC190 in the Department of Applied Science and Engineering.
Graduate / Undergraduate Courses, University of Toronto, 2024
Teaching Assistantship: CSC401/2511 in the Department of Computer Science.