BIONLP 2021 @ NAACL 2021
Program
All times are in Pacific Time (Seattle, San Francisco, Los Angeles)
Friday, June 11, 2021

08:00–08:15  Opening remarks

08:15–09:15  Session 1: Information Extraction

08:15–08:30  Improving BERT Model Using Contrastive Learning for Biomedical Relation Extraction
Peng Su, Yifan Peng and K. Vijay-Shanker

08:30–08:45  Triplet-Trained Vector Space and Sieve-Based Search Improve Biomedical Concept Normalization
Dongfang Xu and Steven Bethard

08:45–09:00  Scalable Few-Shot Learning of Robust Biomedical Name Representations
Pieter Fivez, Simon Suster and Walter Daelemans

09:00–09:15  SAFFRON: tranSfer leArning For Food-disease RelatiOn extractioN
Gjorgjina Cenikj, Tome Eftimov and Barbara Koroušić Seljak

09:15–10:00  Session 2: Clinical NLP

09:15–09:30  Are we there yet? Exploring clinical domain knowledge of BERT models
Madhumita Sushil, Simon Suster and Walter Daelemans

09:30–09:45  Towards BERT-based Automatic ICD Coding: Limitations and Opportunities
Damian Pascual, Sandro Luck and Roger Wattenhofer

09:45–10:00  emrKBQA: A Clinical Knowledge-Base Question Answering Dataset
Preethi Raghavan, Jennifer J Liang, Diwakar Mahajan, Rachita Chandra and Peter Szolovits

10:00–10:30  Coffee Break

Session 3: MEDIQA 2021 Overview: Asma Ben Abacha

10:30–11:00  Overview of the MEDIQA 2021 Shared Task on Summarization in the Medical Domain
Asma Ben Abacha, Yassine Mrabet, Yuhao Zhang, Chaitanya Shivade, Curtis Langlotz and Dina Demner-Fushman

11:00–12:00  Session 4: MEDIQA 2021 Presentations

11:00–11:15  WBI at MEDIQA 2021: Summarizing Consumer Health Questions with Generative Transformers
Mario Sänger, Leon Weber and Ulf Leser

11:15–11:30  paht_nlp @ MEDIQA 2021: Multi-grained Query Focused Multi-Answer Summarization
Wei Zhu, Yilong He, Ling Chai, Yunxiao Fan, Yuan Ni, GUOTONG XIE and Xiaoling Wang

11:30–11:45  BDKG at MEDIQA 2021: System Report for the Radiology Report Summarization Task
Songtai Dai, Quan Wang, Yajuan Lyu and Yong Zhu

11:45–12:00  damo_nlp at MEDIQA 2021: Knowledge-based Preprocessing and Coverage-oriented Reranking for Medical Question Summarization
Yifan He, Mosha Chen and Songfang Huang

12:00–12:30  Coffee Break

12:30–14:30  Session 5: Poster session 1

Stress Test Evaluation of Biomedical Word Embeddings
Vladimir Araujo, Andrés Carvallo, Carlos Aspillaga, Camilo Thorne and Denis Parra

BLAR: Biomedical Local Acronym Resolver
William Hogan, Yoshiki Vazquez Baeza, Yannis Katsis, Tyler Baldwin, Ho-Cheol Kim and Chun-Nan Hsu

Claim Detection in Biomedical Twitter Posts
Amelie Wührl and Roman Klinger

BioELECTRA:Pretrained Biomedical text Encoder using Discriminators
Kamal raj Kanakarajan, Bhuvana Kundumani and Malaikannan Sankarasubbu

Word centrality constrained representation for keyphrase extraction
Zelalem Gero and Joyce Ho

End-to-end Biomedical Entity Linking with Span-based Dictionary Matching
Shogo Ujiie, Hayate Iso, Shuntaro Yada, Shoko Wakamiya and Eiji ARAMAKI

Word-Level Alignment of Paper Documents with their Electronic Full-Text Counterparts
Mark-Christoph Müller, Sucheta Ghosh, Ulrike Wittig and Maja Rey

Improving Biomedical Pretrained Language Models with Knowledge
Zheng Yuan, Yijia Liu, Chuanqi Tan, Songfang Huang and Fei Huang

EntityBERT: Entity-centric Masking Strategy for Model Pretraining for the Clinical Domain
Chen Lin, Timothy Miller, Dmitriy Dligach, Steven Bethard and Guergana Savova

Contextual explanation rules for neural clinical classifiers
Madhumita Sushil, Simon Suster and Walter Daelemans

Exploring Word Segmentation and Medical Concept Recognition for Chinese Medical Texts
Yang Liu, Yuanhe Tian, Tsung-Hui Chang, Song Wu, Xiang Wan and Yan Song

BioM-Transformers: Building Large Biomedical Language Models with BERT, ALBERT and ELECTRA
Sultan Alrowili and Vijay Shanker

Semi-Supervised Language Models for Identification of Personal Health Experiential from Twitter Data: A Case for Medication Effects
Minghao Zhu and Keyuan Jiang

Context-aware query design combines knowledge and data for efficient reading and reasoning
Emilee Holtzapple, Brent Cochran and Natasa Miskov-Zivanov

Measuring the relative importance of full text sections for information retrieval from scientific literature.
Lana Yeganova, Won Gyu KIM, Donald Comeau, W John Wilbur and Zhiyong Lu

14:30–15:00  Coffee Break

15:00–17:00  Session 7: MEDIQA 2021 Poster Session

UCSD-Adobe at MEDIQA 2021: Transfer Learning and Answer Sentence Selection for Medical Summarization
Khalil Mrini, Franck Dernoncourt, Seunghyun Yoon, Trung Bui, Walter Chang, Emilias Farcas and Ndapa Nakashole

ChicHealth @ MEDIQA 2021: Exploring the limits of pre-trained seq2seq models for medical summarization
Liwen Xu, Yan Zhang, Lei Hong, Yi Cai and Szui Sung

NCUEE-NLP at MEDIQA 2021: Health Question Summarization Using PEGASUS Transformers
Lung-Hao Lee, Po-Han Chen, Yu-Xiang Zeng, Po-Lei Lee and Kuo-Kai Shyu

SB_NITK at MEDIQA 2021: Leveraging Transfer Learning for Question Summarization in Medical Domain
Spandana Balumuri, Sony Bachina and Sowmya Kamath S

Optum at MEDIQA 2021: Abstractive Summarization of Radiology Reports using simple BART Finetuning
Ravi Kondadadi, Sahil Manchanda, Jason Ngo and Ronan McCormack

QIAI at MEDIQA 2021: Multimodal Radiology Report Summarization
Jean-Benoit Delbrouck, Cassie Zhang and Daniel Rubin

NLM at MEDIQA 2021: Transfer Learning-based Approaches for Consumer Question and Multi-Answer Summarization
Shweta Yadav, Mourad Sarrouti and Deepak Gupta

IBMResearch at MEDIQA 2021: Toward Improving Factual Correctness of Radiology Report Abstractive Summarization
Diwakar Mahajan, Ching-Huei Tsou and Jennifer J Liang

UETrice at MEDIQA 2021: A Prosper-thy-neighbour Extractive Multi-document Summarization Model
Duy-Cat Can, Quoc-An Nguyen, Quoc-Hung Duong, Minh-Quang Nguyen, Huy-Son Nguyen, Linh Nguyen Tran Ngoc, Quang-Thuy Ha and Mai-Vu Tran

MNLP at MEDIQA 2021: Fine-Tuning PEGASUS for Consumer Health Question Summarization
Jooyeon Lee, Huong Dang, Ozlem Uzuner and Sam Henry

UETfishes at MEDIQA 2021: Standing-on-the-Shoulders-of-Giants Model for Abstractive Multi-answer Summarization
Hoang-Quynh Le, Quoc-An Nguyen, Quoc-Hung Duong, Minh-Quang Nguyen, Huy-Son Nguyen, Tam Doan Thanh, Hai-Yen Thi Vuong and Trang M. Nguyen

Session 8: Invited Talk by Makoto Miwa

17:00–17:30  Makoto Miwa: Information Extraction from Texts Using Heterogeneous Information

17:30–18:00  Closing remarks
IMPORTANT DATES
- Submission deadline: March 20, 2021, 11:59 PM Eastern US. Submission site: https://www.softconf.com/naacl2021/bionlp21/
- Notification of acceptance: April 15, 2021
- Camera-ready copy due from authors: April 26, 2021 (HARD DEADLINE)
- Workshop: June 11, 2021
Final papers should follow the NAACL 2021 style guide and formatting instructions: https://2021.naacl.org/calls/style-and-formatting/. General *ACL formatting guidelines: https://acl-org.github.io/ACLPUB/formatting.html
MEDIQA 2021
The second edition of the MEDIQA challenge, co-located with the BioNLP 2021 workshop, focuses on summarization in the medical domain with three tasks:
- Consumer health question summarization
- Multi-answer summarization
- Radiology report summarization
Please check the website for details on the tasks, datasets, and submission guidelines: https://sites.google.com/view/mediqa2021
Submission Types & Requirements
As in previous years, BioNLP 2021 accepts two types of submissions: long and short papers. For the shared task, please select the "long - shared task" submission type. Please use the NAACL instructions and templates: https://2021.naacl.org/calls/style-and-formatting/. The submission site is available at https://www.softconf.com/naacl2021/bionlp21/
Program Committee
- Sophia Ananiadou, National Centre for Text Mining and University of Manchester, UK
- Emilia Apostolova, Language.ai, USA
- Eiji Aramaki, University of Tokyo, Japan
- Asma Ben Abacha, US National Library of Medicine
- Steven Bethard, University of Arizona, USA
- Olivier Bodenreider, US National Library of Medicine
- Leonardo Campillos Llanos, Universidad Autónoma de Madrid, Spain
- Qingyu Chen, US National Library of Medicine
- Fenia Christopoulou, National Centre for Text Mining and University of Manchester, UK
- Kevin Bretonnel Cohen, University of Colorado School of Medicine, USA
- Brian Connolly, Kroger Digital, USA
- Dina Demner-Fushman, US National Library of Medicine
- Bart Desmet, Clinical Center, National Institutes of Health, USA
- Travis Goodwin, The University of Texas at Dallas, USA
- Natalia Grabar, CNRS, France
- Cyril Grouin, LIMSI - CNRS, France
- Tudor Groza, The Garvan Institute of Medical Research, Australia
- Antonio Jimeno Yepes, IBM, Melbourne Area, Australia
- William Kearns, UW Medicine, USA
- Halil Kilicoglu, University of Illinois at Urbana-Champaign, USA
- Ari Klein, University of Pennsylvania, USA
- André Lamúrias, University of Lisbon, Portugal
- Alberto Lavelli, FBK-ICT, Italy
- Robert Leaman, US National Library of Medicine
- Ulf Leser, Humboldt-Universität zu Berlin, Germany
- Timothy Miller, Children's Hospital Boston, USA
- Aurelie Neveol, LIMSI - CNRS, France
- Claire Nédellec, INRA, France
- Mariana Neves, German Federal Institute for Risk Assessment, Germany
- Denis Newman-Griffis, Clinical Center, National Institutes of Health, USA
- Nhung Nguyen, The University of Manchester, UK
- Karen O'Connor, University of Pennsylvania, USA
- Yifan Peng, Cornell Medical School, USA
- Laura Plaza, UNED, Madrid, Spain
- Francisco J. Ribadas-Pena, University of Vigo, Spain
- Fabio Rinaldi, University of Zurich, Switzerland
- Angus Roberts, The University of Sheffield, UK
- Kirk Roberts, The University of Texas Health Science Center at Houston, USA
- Roland Roller, DFKI GmbH, Berlin, Germany
- Diana Sousa, University of Lisbon, Portugal
- Karin Verspoor, The University of Melbourne, Australia
- Davy Weissenbacher, University of Pennsylvania, USA
- W John Wilbur, US National Library of Medicine
- Shankai Yan, US National Library of Medicine
- Chrysoula Zerva, National Centre for Text Mining and University of Manchester, UK
- Ayah Zirikly, Clinical Center, National Institutes of Health, USA
- Pierre Zweigenbaum, LIMSI - CNRS, France
- Spandana Balumuri, National Institute of Technology Karnataka, Surathkal, India
- Asma Ben Abacha, NLM/NIH
- Yi Cai, Chic Health, Shanghai, China
- Duy-Cat Can, University of Engineering and Technology, Vietnam
- Songtai Dai, Baidu, Inc, Beijing, China
- Jean-Benoit Delbrouck, Stanford University
- Deepak Gupta, NLM/NIH
- Yifan He, Alibaba Group, Sunnyvale, CA
- Abdullah Faiz Ur Rahman Khilji, National Institute of Technology Silchar, Mumbai, India
- Ravi Kondadadi, Optum
- Jooyeon Lee, George Mason University, Fairfax, VA
- Lung-Hao Lee, National Central University, Taiwan
- Diwakar Mahajan, IBM Research, Yorktown Heights, NY
- Yassine Mrabet, NLM/NIH
- Khalil Mrini, University of California, San Diego
- Mourad Sarrouti, NLM/NIH
- Mario Sänger, Humboldt-Universität zu Berlin
- Chaitanya Shivade, Amazon
- Shweta Yadav, NLM/NIH
- Yuhao Zhang, Stanford University
- Wei Zhu, East China Normal University, Shanghai
WORKSHOP OVERVIEW AND SCOPE
The BioNLP workshop, associated with the ACL SIGBIOMED special interest group, has established itself as the primary venue for presenting foundational research in language processing for the biological and medical domains. Despite, or perhaps because of, its maturity, the field of biomedical NLP continues to grow stronger. BioNLP welcomes and encourages inclusion and diversity. The workshop encompasses the breadth of the domain, brings together researchers in bio- and clinical NLP from all over the world, and will continue to present work on a broad range of topics in biomedical NLP.
The active areas of research include, but are not limited to:
- Entity identification and normalization (linking) for a broad range of semantic categories
- Extraction of complex relations and events
- Discourse analysis
- Anaphora/coreference resolution
- Text mining / Literature based discovery
- Summarization
- Question Answering
- Resources and novel strategies for system testing and evaluation
- Infrastructures for biomedical text mining / Processing and annotation platforms
- Translating NLP research to practice
- Explainable models for biomedical NLP
- Multi-modal models for biomedical NLP
- Getting reproducible results
- BioNLP research in languages other than English
Organizers
- Dina Demner-Fushman, US National Library of Medicine
- Kevin Bretonnel Cohen, University of Colorado School of Medicine
- Sophia Ananiadou, National Centre for Text Mining and University of Manchester, UK
- Jun-ichi Tsujii, National Institute of Advanced Industrial Science and Technology, Japan and University of Manchester, UK
Dual submission policy
Papers may NOT be submitted to the BioNLP 2021 workshop if they are or will be concurrently submitted to another meeting or publication.