Publications

Selected publications of NALA members (since 2020).

2021

  1. What Would a Teacher Do? Predicting Future Talk Moves
    Ananya Ganesh, Martha Palmer, and Katharina Kann
    In Findings of the 59th Annual Meeting of the Association for Computational Linguistics, 2021
  2. PROST: Physical Reasoning of Objects through Space and Time
    Stephane Aroca-Ouellette, Cory Paik, Alessandro Roncone, and Katharina Kann
    In Findings of the 59th Annual Meeting of the Association for Computational Linguistics, 2021
  3. How to Adapt Your Pretrained Multilingual Model to 1600 Languages
    Abteen Ebrahimi and Katharina Kann
    In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics, 2021
  4. Don’t Rule Out Monolingual Speakers: A Method For Crowdsourcing Machine Translation Data
    Rajat Bhatnagar, Ananya Ganesh, and Katharina Kann
    In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics, 2021
  5. Findings of the AmericasNLP 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas
    Manuel Mager, Arturo Oncevay, Abteen Ebrahimi, John Ortega, Annette Rios, Angela Fan, Ximena Gutierrez-Vasques, Luis Chiruzzo, Gustavo Giménez-Lugo, Ricardo Ramos, Ivan Vladimir Meza Ruiz, Rolando Coto-Solano, Alexis Palmer, Elisabeth Mager-Hois, Vishrav Chaudhary, Graham Neubig, Ngoc Thang Vu, and Katharina Kann
    In Proceedings of the First Workshop on Natural Language Processing for Indigenous Languages of the Americas, 2021
  6. Coloring the Black Box: What Synesthesia Tells Us about Character Embeddings
    Katharina Kann and Mauro M. Monsalve-Mercado
    In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics, 2021
  7. CLiMP: A Benchmark for Chinese Language Model Evaluation
    Beilei Xiang, Changbing Yang, Yu Li, Alex Warstadt, and Katharina Kann
    In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics, 2021

2020

  1. Making a Point: Pointer-Generator Transformers for Disjoint Vocabularies
    Nikhil Prabhu and Katharina Kann
    In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing Student Research Workshop, 2020
    Best Paper Award
  2. English Intermediate-Task Training Improves Zero-Shot Cross-Lingual Transfer Too
    Jason Phang, Phu Mon Htut, Yada Pruksachatkun, Haokun Liu, Clara Vania, Iacer Calixto, Katharina Kann, and Samuel R. Bowman
    In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, 2020
  3. Tackling the Low-resource Challenge for Canonical Segmentation
    Manuel Mager, Özlem Çetinoğlu, and Katharina Kann
    In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020
  4. Acrostic Poem Generation
    Rajat Agarwal and Katharina Kann
    In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020
  5. IGT2P: From Interlinear Glossed Texts to Paradigms
    Sarah Moeller, Ling Liu, Changbing Yang, Katharina Kann, and Mans Hulden
    In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020
  6. Why Overfitting Isn’t Always Bad: Retrofitting Cross-Lingual Word Embeddings to Dictionaries
    Mozhi Zhang, Yoshinari Fujinuma, Michael J. Paul, and Jordan Boyd-Graber
    In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020
  7. The SIGMORPHON 2020 Shared Task on Unsupervised Morphological Paradigm Completion
    Katharina Kann, Arya D. McCarthy, Garrett Nicolai, and Mans Hulden
    In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, 2020
  8. Frustratingly Easy Multilingual Grapheme-to-Phoneme Conversion
    Nikhil Prabhu and Katharina Kann
    In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, 2020
  9. The NYU-CUBoulder Systems for SIGMORPHON 2020 Task 0 and Task 2
    Assaf Singer and Katharina Kann
    In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, 2020
  10. The IMS–CUBoulder System for the SIGMORPHON 2020 Shared Task on Unsupervised Morphological Paradigm Completion
    Manuel Mager and Katharina Kann
    In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, 2020
  11. Self-Training for Unsupervised Parsing with PRPN
    Anhad Mohananey, Katharina Kann, and Samuel R. Bowman
    In Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies, 2020
  12. Intermediate-Task Transfer Learning with Pretrained Language Models: When and Why Does It Work?
    Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, and Samuel R. Bowman
    In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020
  13. Unsupervised Morphological Paradigm Completion
    Huiming Jin, Liwei Cai, Yihui Peng, Chen Xia, Arya McCarthy, and Katharina Kann
    In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020
  14. Learning to Learn Morphological Inflection for Resource-Poor Languages
    Katharina Kann, Samuel R. Bowman, and Kyunghyun Cho
    In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020
  15. Weakly Supervised POS Taggers Perform Poorly on Truly Low-Resource Languages
    Katharina Kann, Ophélie Lacroix, and Anders Søgaard
    In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020
  16. Acquisition of Inflectional Morphology in Artificial Neural Networks With Prior Knowledge
    Katharina Kann
    In Proceedings of the Society for Computation in Linguistics, 2020