Page Not Found
Page not found. Your pixels are in another canvas.
A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.
About me
This is a page not in the main menu.
Published:
This post will show up by default. To disable scheduling of future posts, edit config.yml and set future: false.
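As a sketch, the relevant setting in a standard Jekyll config.yml might look like this (the surrounding keys are illustrative assumptions, not taken from this site):

```yaml
# config.yml — site configuration (illustrative excerpt)
title: My Academic Site

# When false, posts dated in the future are not published
# until their date arrives; set to true to show them immediately.
future: false
```

After changing the value, rebuild the site for the setting to take effect.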
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in EMNLP, 2021
We study verbal leakage cues to understand how the data construction method affects their significance, and we examine the relationship between such cues and model validity. Results show that data with audio statements and lie-based annotations exhibit a greater number of strong verbal leakage cue categories, and that models trained on datasets with more strong verbal leakage cue categories yield superior results.
Recommended citation: Min-Hsuan Yeh and Lun-Wei Ku. (2021). "Lying Through One’s Teeth: A Study on Verbal Leakage Cues," in Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP). https://aclanthology.org/2021.emnlp-main.370.pdf
Published in EMNLP, 2022
We propose a new task, Multi-VQG, which aims to generate engaging questions for multiple images. We introduce a new dataset, MVQG, which contains around 30,000 question and image-sequence pairs. We also propose both end-to-end and dual-staged models, extended from VL-T5, to generate questions with story information. We evaluate our models on MVQG and show that models with explicit story information yield better results.
Recommended citation: Min-Hsuan Yeh, Vincent Chen, Ting-Hao 'Kenneth' Huang, and Lun-Wei Ku. (2022). "Multi-VQG: Generating Engaging Questions for Multiple Images," in Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP). https://arxiv.org/abs/2211.07441
Published:
This is a description of your talk, which is a markdown file that can be markdown-ified like any other post. Yay markdown!
Published:
This is a description of your conference proceedings talk; note the different value in the type field. You can put anything in this field.
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.