Publications

Selected publications of NALA members (since 2020).

2022

  1. Response Construct Tagging: NLP-Aided Assessment for Engineering Education
    Ananya Ganesh, Hugh Scribner, Jasdeep Singh, Katherine Goodman, Jean Hertzberg, and Katharina Kann
    In Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications, 2022
  2. Open-domain Dialogue Generation: What We Can Do, Cannot Do, And Should Do Next
    Katharina Kann, Abteen Ebrahimi, Joewie J. Koh, Shiran Dudy, and Alessandro Roncone
    In Proceedings of the 4th Workshop on NLP for Conversational AI, 2022
  3. AmericasNLI: Evaluating Zero-shot Natural Language Understanding of Pretrained Multilingual Models in Truly Low-resource Languages
    Abteen Ebrahimi, Manuel Mager, Arturo Oncevay, Vishrav Chaudhary, Luis Chiruzzo, Angela Fan, John Ortega, Ricardo Ramos, Annette Rios, Ivan Vladimir Meza Ruiz, Gustavo A. Giménez-Lugo, Elisabeth Mager, Graham Neubig, Alexis Palmer, Rolando Coto-Solano, Thang Vu, and Katharina Kann
    In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, 2022
  4. How Does Multilingual Pretraining Affect Cross-Lingual Transferability?
    Yoshinari Fujinuma, Jordan Lee Boyd-Graber, and Katharina Kann
    In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, 2022
  5. Morphological Processing of Low-Resource Languages: Where We Are and What’s Next
    Adam Wiemerslage, Miikka Silfverberg, Changbing Yang, Arya D. McCarthy, Garrett Nicolai, Eliana Colunga, and Katharina Kann
    In Findings of the 60th Annual Meeting of the Association for Computational Linguistics, 2022
  6. BPE vs. Morphological Segmentation: A Case Study on Machine Translation of Four Polysynthetic Languages
    Manuel Mager, Arturo Oncevay, Elisabeth Mager, Katharina Kann, and Thang Vu
    In Findings of the 60th Annual Meeting of the Association for Computational Linguistics, 2022

2021

  1. The World of an Octopus: How Reporting Bias Influences a Language Model’s Perception of Color
    Cory Paik, Stéphane Aroca-Ouellette, Alessandro Roncone, and Katharina Kann
    In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021
  2. What Would a Teacher Do? Predicting Future Talk Moves
    Ananya Ganesh, Martha Palmer, and Katharina Kann
    In Findings of the 59th Annual Meeting of the Association for Computational Linguistics, 2021
  3. PROST: Physical Reasoning of Objects through Space and Time
    Stéphane Aroca-Ouellette, Cory Paik, Alessandro Roncone, and Katharina Kann
    In Findings of the 59th Annual Meeting of the Association for Computational Linguistics, 2021
  4. How to Adapt Your Pretrained Multilingual Model to 1600 Languages
    Abteen Ebrahimi and Katharina Kann
    In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics, 2021
  5. Don’t Rule Out Monolingual Speakers: A Method For Crowdsourcing Machine Translation Data
    Rajat Bhatnagar, Ananya Ganesh, and Katharina Kann
    In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics, 2021
  6. Findings of the LoResMT 2021 Shared Task on COVID and Sign Language for Low-resource Languages
    Atul Kr. Ojha, Chao-Hong Liu, Katharina Kann, John Ortega, Sheetal Shatam, and Theodorus Fransen
    In Proceedings of the 4th Workshop on Technologies for MT of Low Resource Languages (LoResMT2021), 2021
  7. Paradigm Clustering with Weighted Edit Distance
    Andrew Gerlach, Adam Wiemerslage, and Katharina Kann
    In Proceedings of the 18th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, 2021
  8. Findings of the SIGMORPHON 2021 Shared Task on Unsupervised Morphological Paradigm Clustering
    Adam Wiemerslage, Arya D. McCarthy, Alexander Erdmann, Garrett Nicolai, Manex Agirrezabal, Miikka Silfverberg, Mans Hulden, and Katharina Kann
    In Proceedings of the 18th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, 2021
  9. Findings of the AmericasNLP 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas
    Manuel Mager, Arturo Oncevay, Abteen Ebrahimi, John Ortega, Annette Rios, Angela Fan, Ximena Gutierrez-Vasques, Luis Chiruzzo, Gustavo Giménez-Lugo, Ricardo Ramos, Ivan Vladimir Meza Ruiz, Rolando Coto-Solano, Alexis Palmer, Elisabeth Mager-Hois, Vishrav Chaudhary, Graham Neubig, Ngoc Thang Vu, and Katharina Kann
    In Proceedings of the First Workshop on Natural Language Processing for Indigenous Languages of the Americas, 2021
  10. Coloring the Black Box: What Synesthesia Tells Us about Character Embeddings
    Katharina Kann and Mauro M. Monsalve-Mercado
    In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics, 2021
  11. CLiMP: A Benchmark for Chinese Language Model Evaluation
    Beilei Xiang, Changbing Yang, Yu Li, Alex Warstadt, and Katharina Kann
    In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics, 2021

2020

  1. Making a Point: Pointer-Generator Transformers for Disjoint Vocabularies
    Nikhil Prabhu and Katharina Kann
    In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing Student Research Workshop, 2020
    Best Paper Award
  2. English Intermediate-Task Training Improves Zero-Shot Cross-Lingual Transfer Too
    Jason Phang, Phu Mon Htut, Yada Pruksachatkun, Haokun Liu, Clara Vania, Iacer Calixto, Katharina Kann, and Samuel R. Bowman
    In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, 2020
  3. Tackling the Low-resource Challenge for Canonical Segmentation
    Manuel Mager, Özlem Çetinoğlu, and Katharina Kann
    In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020
  4. Acrostic Poem Generation
    Rajat Agarwal and Katharina Kann
    In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020
  5. IGT2P: From Interlinear Glossed Texts to Paradigms
    Sarah Moeller, Ling Liu, Changbing Yang, Katharina Kann, and Mans Hulden
    In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020
  6. Why Overfitting Isn’t Always Bad: Retrofitting Cross-Lingual Word Embeddings to Dictionaries
    Mozhi Zhang, Yoshinari Fujinuma, Michael J. Paul, and Jordan Boyd-Graber
    In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020
  7. The SIGMORPHON 2020 Shared Task on Unsupervised Morphological Paradigm Completion
    Katharina Kann, Arya D. McCarthy, Garrett Nicolai, and Mans Hulden
    In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, 2020
  8. Frustratingly Easy Multilingual Grapheme-to-Phoneme Conversion
    Nikhil Prabhu and Katharina Kann
    In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, 2020
  9. The NYU-CUBoulder Systems for SIGMORPHON 2020 Task 0 and Task 2
    Assaf Singer and Katharina Kann
    In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, 2020
  10. The IMS–CUBoulder System for the SIGMORPHON 2020 Shared Task on Unsupervised Morphological Paradigm Completion
    Manuel Mager and Katharina Kann
    In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, 2020
  11. Self-Training for Unsupervised Parsing with PRPN
    Anhad Mohananey, Katharina Kann, and Samuel R. Bowman
    In Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies, 2020
  12. Intermediate-Task Transfer Learning with Pretrained Language Models: When and Why Does It Work?
    Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, and Samuel R. Bowman
    In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020
  13. Unsupervised Morphological Paradigm Completion
    Huiming Jin, Liwei Cai, Yihui Peng, Chen Xia, Arya D. McCarthy, and Katharina Kann
    In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020
  14. Learning to Learn Morphological Inflection for Resource-Poor Languages
    Katharina Kann, Samuel R. Bowman, and Kyunghyun Cho
    In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020
  15. Weakly Supervised POS Taggers Perform Poorly on Truly Low-Resource Languages
    Katharina Kann, Ophélie Lacroix, and Anders Søgaard
    In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020
  16. Acquisition of Inflectional Morphology in Artificial Neural Networks With Prior Knowledge
    Katharina Kann
    In Proceedings of the Society for Computation in Linguistics, 2020