Jasy Suet Yan Liew


2022

English-Malay Word Embeddings Alignment for Cross-lingual Emotion Classification with Hierarchical Attention Network
Ying Hao Lim | Jasy Suet Yan Liew
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis

The main challenge in English-Malay cross-lingual emotion classification is the lack of Malay emotion training corpora. Because machine translation can fall short on contextually complex tweets, we restrict it to the word level. In this paper, we bridge the language gap between English and Malay through cross-lingual word embeddings constructed using singular value decomposition. We pre-train our hierarchical attention model on English tweets and fine-tune it on a set of gold-standard Malay tweets. Our model requires significantly fewer computational resources than the pre-trained language models. Experimental results show that our model outperforms mBERT in zero-shot learning by 2.4% and Malay BERT by 0.8% when a limited number of Malay tweets is available. While requiring 6–7 times less computational time, our model lags behind mBERT and XLM-RoBERTa by only 0.9–4.3% in few-shot learning. The word-level attention also transfers accurately to Malay tweets through the cross-lingual word embeddings.
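The SVD-based embedding alignment described in the abstract can be illustrated with a minimal orthogonal-Procrustes sketch: monolingual vectors for the word pairs in a seed bilingual lexicon are stacked into two matrices, and the orthogonal map between them is recovered from an SVD. This is only a sketch of the general technique, not the authors' code; the function name and the random stand-in vectors below are illustrative.

```python
import numpy as np

def svd_align(src_vecs, tgt_vecs):
    """Orthogonal Procrustes: find orthogonal W minimising ||src_vecs @ W - tgt_vecs||_F.

    src_vecs, tgt_vecs: (n, d) arrays; row i of each holds the embeddings of
    the i-th translation pair in the seed bilingual lexicon.
    """
    # The optimal orthogonal map is W = U @ Vt from the SVD of src^T @ tgt.
    u, _, vt = np.linalg.svd(src_vecs.T @ tgt_vecs)
    return u @ vt

# Toy usage with random vectors standing in for pre-trained Malay/English embeddings.
rng = np.random.default_rng(0)
malay = rng.normal(size=(500, 300))    # vectors for 500 Malay lexicon entries
english = rng.normal(size=(500, 300))  # vectors for their English translations
W = svd_align(malay, english)
aligned_malay = malay @ W              # Malay vectors mapped into the English space
print(aligned_malay.shape)             # (500, 300)
```

Because W is orthogonal, the mapping preserves monolingual distances while placing translation pairs close together, which is what lets an attention model pre-trained on English tweets be fine-tuned directly on the aligned Malay inputs.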

English-Malay Cross-Lingual Embedding Alignment using Bilingual Lexicon Augmentation
Ying Hao Lim | Jasy Suet Yan Liew
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop

As high-quality Malay language resources are still scarce, cross-lingual word embeddings make it possible for richer English resources to be leveraged for downstream Malay text classification tasks. This paper focuses on creating English-Malay cross-lingual word embeddings through embedding alignment, exploiting existing language resources. We augmented the training bilingual lexicons using machine translation with the goal of improving the alignment precision of our cross-lingual word embeddings. We investigated the quality of the current state-of-the-art English-Malay bilingual lexicon and improved it using Google Translate. We also examined the effect of Malay word coverage on the quality of the cross-lingual word embeddings. Experimental results with a precision of up to 28.17% show that the alignment precision of the cross-lingual word embeddings inevitably degrades beyond 1-NN, but a better seed lexicon and cleaner nearest neighbours can reduce the number of word pairs required to achieve satisfactory performance. As the English and Malay monolingual embeddings are pre-trained on informal language corpora, our proposed English-Malay embedding alignment approach is also able to map non-standard Malay translations in the English nearest neighbours.
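The alignment-precision numbers in this abstract come from nearest-neighbour word translation retrieval. A minimal precision@1 evaluation under that reading is sketched below; the names (precision_at_1, gold_lexicon) and the toy two-word vocabulary are illustrative, not the paper's evaluation script.

```python
import numpy as np

def precision_at_1(aligned_src, tgt_matrix, tgt_words, gold_lexicon):
    """Fraction of source words whose nearest target neighbour is the gold translation."""
    # Normalise target rows once so a dot product gives cosine similarity.
    tgt_unit = tgt_matrix / np.linalg.norm(tgt_matrix, axis=1, keepdims=True)
    hits = 0
    for src_word, gold_translation in gold_lexicon.items():
        v = aligned_src[src_word]
        sims = tgt_unit @ (v / np.linalg.norm(v))          # cosine to every target word
        if tgt_words[int(np.argmax(sims))] == gold_translation:
            hits += 1
    return hits / len(gold_lexicon)

# Toy usage: two English words retrieved against a two-word Malay vocabulary.
tgt_words = ["kucing", "anjing"]
tgt_matrix = np.array([[1.0, 0.0], [0.0, 1.0]])
aligned_src = {"cat": np.array([0.9, 0.1]), "dog": np.array([0.2, 0.8])}
gold_lexicon = {"cat": "kucing", "dog": "anjing"}
print(precision_at_1(aligned_src, tgt_matrix, tgt_words, gold_lexicon))  # 1.0
```

Lexicon augmentation in this setting amounts to enlarging gold_lexicon (and the seed pairs used for alignment) with machine-translated entries before re-running the alignment and this retrieval step.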

XLNET-GRU Sentiment Regression Model for Cryptocurrency News in English and Malay
Nur Azmina Mohamad Zamani | Jasy Suet Yan Liew | Ahmad Muhyiddin Yusof
Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022

Contextual word embeddings from transformer language models are gaining popularity in text classification and analytics, but have rarely been explored for sentiment analysis of cryptocurrency news, particularly in languages other than English. Various state-of-the-art (SOTA) pre-trained language models have recently been introduced for text representation, such as BERT, ALBERT, ELECTRA, RoBERTa, and XLNet. Hence, this study investigates the performance of a Gated Recurrent Unit (GRU) combined with XLNet (Generalized Autoregressive Pretraining for Language Understanding) contextual word embeddings for sentiment analysis on English and Malay cryptocurrency news (Bitcoin and Ethereum). We also compare the performance of our XLNet-GRU model against other SOTA pre-trained language models. Manually labelled corpora of English and Malay news are used to learn the context of text specific to the cryptocurrency domain. Based on our experiments, our XLNet-GRU sentiment regression model outperformed the lexicon-based baseline with a mean adjusted R² of 0.631 across Bitcoin and Ethereum for English and a mean adjusted R² of 0.514 for Malay.
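A minimal sketch of the XLNet-GRU idea, assuming the Hugging Face transformers and PyTorch APIs: token-level XLNet states are fed through a GRU whose final hidden state drives a single regression output. The checkpoint xlnet-base-cased, the GRU hidden size, and the pooling choice are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn
from transformers import XLNetModel, XLNetTokenizer

class XLNetGRURegressor(nn.Module):
    """XLNet contextual embeddings -> GRU -> one sentiment score per text."""

    def __init__(self, checkpoint="xlnet-base-cased", hidden=128):
        super().__init__()
        self.encoder = XLNetModel.from_pretrained(checkpoint)
        self.gru = nn.GRU(self.encoder.config.d_model, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # regression head for the sentiment score

    def forward(self, input_ids, attention_mask):
        # Token-level contextual representations from XLNet.
        states = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        _, last = self.gru(states)               # last: (1, batch, hidden)
        return self.head(last.squeeze(0)).squeeze(-1)  # (batch,)

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetGRURegressor()
batch = tokenizer(["Bitcoin rallies after ETF approval"],
                  return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    print(model(batch["input_ids"], batch["attention_mask"]).shape)  # torch.Size([1])
```

Training such a model with a mean-squared-error loss against continuous sentiment labels, then reporting adjusted R² on held-out news, matches the regression framing described in the abstract.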

2016

EmoTweet-28: A Fine-Grained Emotion Corpus for Sentiment Analysis
Jasy Suet Yan Liew | Howard R. Turtle | Elizabeth D. Liddy
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

This paper describes EmoTweet-28, a carefully curated corpus of 15,553 tweets annotated with 28 emotion categories for the purpose of training and evaluating machine learning models for emotion classification. EmoTweet-28 is, to date, the largest tweet corpus annotated with fine-grained emotion categories. The corpus contains annotations for four facets of emotion: valence, arousal, emotion category and emotion cues. We first used small-scale content analysis to inductively identify a set of emotion categories that characterize the emotions expressed in microblog text. We then expanded the size of the corpus using crowdsourcing. The corpus encompasses a variety of examples including explicit and implicit expressions of emotions as well as tweets containing multiple emotions. EmoTweet-28 represents an important resource to advance the development and evaluation of more emotion-sensitive systems.
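The four annotated facets can be pictured as one record per tweet. The sketch below is a hypothetical schema for such a record; the field names and label values are illustrative and are not the corpus's actual file format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TweetAnnotation:
    """Illustrative record holding the four EmoTweet-28 annotation facets."""
    tweet_id: str
    text: str
    valence: str                                                  # e.g. "positive", "negative", "neutral"
    arousal: str                                                  # e.g. "high", "low"
    emotion_categories: List[str] = field(default_factory=list)   # one or more of the 28 categories
    emotion_cues: List[str] = field(default_factory=list)         # spans that signal the emotion

example = TweetAnnotation(
    tweet_id="0001",
    text="Finally got my exam results back!!! So relieved :)",
    valence="positive",
    arousal="high",
    emotion_categories=["relief", "happiness"],
    emotion_cues=["So relieved", ":)"],
)
```

Allowing emotion_categories to hold several labels reflects the abstract's point that the corpus includes tweets expressing multiple emotions, and emotion_cues captures implicit as well as explicit expressions.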

Exploring Fine-Grained Emotion Detection in Tweets
Jasy Suet Yan Liew | Howard R. Turtle
Proceedings of the NAACL Student Research Workshop

2014

Expanding the Range of Automatic Emotion Detection in Microblogging Text
Jasy Suet Yan Liew
Proceedings of the Student Research Workshop at the 14th Conference of the European Chapter of the Association for Computational Linguistics

Optimizing Features in Active Machine Learning for Complex Qualitative Content Analysis
Jasy Suet Yan Liew | Nancy McCracken | Shichun Zhou | Kevin Crowston
Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science

Design of an Active Learning System with Human Correction for Content Analysis
Nancy McCracken | Jasy Suet Yan Liew | Kevin Crowston
Proceedings of the Workshop on Interactive Language Learning, Visualization, and Interfaces