1 code implementation • 24 Apr 2024 • Qianyu He, Jie Zeng, Qianxi He, Jiaqing Liang, Yanghua Xiao
It is imperative for large language models (LLMs) to follow instructions with elaborate requirements (i.e., Complex Instruction Following).
1 code implementation • 19 Apr 2024 • Wenhao Huang, Chenghao Peng, Zhixu Li, Jiaqing Liang, Yanghua Xiao, Liqian Wen, Zulong Chen
We propose AutoCrawler, a two-stage framework that leverages the hierarchical structure of HTML for progressive understanding.
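The hierarchy-aware, progressive idea can be illustrated as a top-down traversal that repeatedly narrows to the subtree still containing the target value, shrinking the context at each step. The tree builder and `progressive_locate` helper below are illustrative stand-ins, not AutoCrawler's actual components:

```python
from html.parser import HTMLParser

class TreeBuilder(HTMLParser):
    """Build a simple nested-dict DOM tree from an HTML string."""
    def __init__(self):
        super().__init__()
        self.root = {"tag": "root", "text": "", "children": []}
        self.stack = [self.root]

    def handle_starttag(self, tag, attrs):
        node = {"tag": tag, "text": "", "children": []}
        self.stack[-1]["children"].append(node)
        self.stack.append(node)

    def handle_endtag(self, tag):
        if len(self.stack) > 1:
            self.stack.pop()

    def handle_data(self, data):
        self.stack[-1]["text"] += data.strip()

def full_text(node):
    return node["text"] + "".join(full_text(c) for c in node["children"])

def progressive_locate(node, target):
    """Descend step by step, keeping only the child subtree that still
    contains the target -- each step shrinks the context to inspect."""
    path = [node["tag"]]
    while node["children"]:
        matching = [c for c in node["children"] if target in full_text(c)]
        if len(matching) != 1:
            break  # ambiguous or absent: stop narrowing
        node = matching[0]
        path.append(node["tag"])
    return node, path

doc = "<html><body><div><span>price</span><b>42</b></div><p>other</p></body></html>"
builder = TreeBuilder()
builder.feed(doc)
node, path = progressive_locate(builder.root, "42")
print(path)  # → ['root', 'html', 'body', 'div', 'b']
```

Each narrowing step discards siblings that cannot contain the target, which is what makes the understanding "progressive" rather than whole-page.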
no code implementations • 18 Apr 2024 • Rui Xu, Xintao Wang, Jiangjie Chen, Siyu Yuan, Xinfeng Yuan, Jiaqing Liang, Zulong Chen, Xiaoqing Dong, Yanghua Xiao
Can Large Language Models substitute humans in making important decisions?
no code implementations • 16 Apr 2024 • Haixia Han, Tingyun Li, Shisong Chen, Jie Shi, Chengyu Du, Yanghua Xiao, Jiaqing Liang, Xin Lin
Specifically, we first identify three key problems: (1) How to capture the inherent confidence of the LLM?
no code implementations • 15 Apr 2024 • Zepeng Ding, Wenhao Huang, Jiaqing Liang, Deqing Yang, Yanghua Xiao
The framework includes an evaluation model that can extract related entity pairs with high precision.
no code implementations • 14 Apr 2024 • Guochao Jiang, Ziqin Luo, Yuchen Shi, Dixuan Wang, Jiaqing Liang, Deqing Yang
In recent years, fine-tuned generative models have proven more powerful than previous tagging-based or span-based models on the named entity recognition (NER) task.
no code implementations • 11 Apr 2024 • Haokun Zhao, Haixia Han, Jie Shi, Chengyu Du, Jiaqing Liang, Yanghua Xiao
Continual Learning (CL) is a commonly used method to address this issue.
no code implementations • 4 Apr 2024 • Yanda Li, Dixuan Wang, Jiaqing Liang, Guochao Jiang, Qianyu He, Yanghua Xiao, Deqing Yang
Large Language Models (LLMs) have demonstrated good performance in many reasoning tasks, but they still struggle with some complicated reasoning tasks including logical reasoning.
no code implementations • 25 Mar 2024 • Wenhao Huang, Qianyu He, Zhixu Li, Jiaqing Liang, Yanghua Xiao
Definition bias is a negative phenomenon that can mislead models.
no code implementations • 14 Mar 2024 • Yuncheng Huang, Qianyu He, Yipei Xu, Jiaqing Liang, Yanghua Xiao
In our experiments, we find that atomic skills cannot spontaneously generalize to compositional tasks.
no code implementations • 14 Jan 2024 • Haixia Han, Jiaqing Liang, Jie Shi, Qianyu He, Yanghua Xiao
In this paper, we introduce Intrinsic Self-Correction (ISC) in generative language models, aiming to correct the initial output of LMs in a self-triggered manner, even for small LMs with 6 billion parameters.
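The excerpt does not spell out ISC's mechanism, but a self-triggered correction loop generically looks like the sketch below, where `generate`, `verify`, and `revise` are hypothetical stand-ins for calls to the same language model:

```python
def self_correct(generate, verify, revise, prompt, max_rounds=3):
    """Generic self-triggered correction loop: the model produces an
    answer, checks it itself, and revises until the check passes."""
    answer = generate(prompt)
    for _ in range(max_rounds):
        ok, feedback = verify(prompt, answer)
        if ok:
            break
        answer = revise(prompt, answer, feedback)
    return answer

# Toy stand-ins for the LM calls (hypothetical):
gen = lambda p: "2 + 2 = 5"
def ver(p, a):
    return (a.endswith("4"), "arithmetic is wrong")
rev = lambda p, a, fb: "2 + 2 = 4"

print(self_correct(gen, ver, rev, "What is 2 + 2?"))  # → 2 + 2 = 4
```

The key point is that verification and revision are triggered by the model itself rather than by external feedback.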
no code implementations • 11 Jan 2024 • Xintao Wang, Zhouhong Gu, Jiaqing Liang, Dakuan Lu, Yanghua Xiao, Wei Wang
In this paper, we propose ConcEPT, which stands for Concept-Enhanced Pre-Training for language models, to infuse conceptual knowledge into PLMs.
no code implementations • 29 Dec 2023 • Yuncheng Huang, Qianyu He, Jiaqing Liang, Sihang Jiang, Yanghua Xiao, Yunwen Chen
Hence, we present a framework to enhance the quantitative reasoning ability of language models based on dimension perception.
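One concrete face of dimension perception is normalizing quantities to base units before comparing them, so that "2 km" and "1500 m" land on the same scale. The unit table and `compare` helper below are a minimal sketch, not the paper's framework:

```python
# Base-unit factors for a few length/time units (illustrative subset).
TO_BASE = {"m": 1.0, "km": 1000.0, "cm": 0.01, "s": 1.0, "min": 60.0, "h": 3600.0}
DIMENSION = {"m": "length", "km": "length", "cm": "length",
             "s": "time", "min": "time", "h": "time"}

def normalize(value, unit):
    """Return (dimension, value expressed in base units)."""
    return DIMENSION[unit], value * TO_BASE[unit]

def compare(q1, q2):
    """Compare two (value, unit) quantities; refuse mismatched dimensions."""
    d1, v1 = normalize(*q1)
    d2, v2 = normalize(*q2)
    if d1 != d2:
        raise ValueError(f"cannot compare {d1} with {d2}")
    return (v1 > v2) - (v1 < v2)  # -1, 0, or 1

print(compare((2, "km"), (1500, "m")))   # 2000 m > 1500 m → 1
print(compare((30, "min"), (0.5, "h")))  # 1800 s = 1800 s → 0
```

Without this normalization step, a model comparing raw numbers (2 vs. 1500) gets the answer exactly backwards.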
no code implementations • 16 Nov 2023 • Yipei Xu, Dakuan Lu, Jiaqing Liang, Xintao Wang, Yipeng Geng, Yingsi Xin, Hengkui Wu, Ken Chen, Ruiji Zhang, Yanghua Xiao
Pre-trained language models (PLMs) have established a new paradigm in the field of NLP.
2 code implementations • 17 Sep 2023 • Qianyu He, Jie Zeng, Wenhao Huang, Lina Chen, Jin Xiao, Qianxi He, Xunzhe Zhou, Lida Chen, Xintao Wang, Yuncheng Huang, Haoning Ye, Zihan Li, Shisong Chen, Yikai Zhang, Zhouhong Gu, Jiaqing Liang, Yanghua Xiao
To bridge this gap, we propose CELLO, a benchmark for evaluating LLMs' ability to follow complex instructions systematically.
1 code implementation • 12 Sep 2023 • Tinghui Zhu, Jingping Liu, Jiaqing Liang, Haiyun Jiang, Yanghua Xiao, ZongYu Wang, Rui Xie, Yunsen Xian
Specifically, on the Chinese taxonomy dataset, our method significantly improves accuracy by 8.75%.
no code implementations • 17 Aug 2023 • Xintao Wang, Qianwen Yang, Yongting Qiu, Jiaqing Liang, Qianyu He, Zhouhong Gu, Yanghua Xiao, Wei Wang
Large language models (LLMs) have demonstrated impressive performance in the field of natural language processing, but they still struggle with several issues, such as completeness, timeliness, faithfulness, and adaptability.
no code implementations • 19 Jun 2023 • Wenhao Huang, Jiaqing Liang, Zhixu Li, Yanghua Xiao, Chuanjun Ji
Information extraction (IE) has been studied extensively.
1 code implementation • 13 Jun 2023 • Qianyu He, Yikai Zhang, Jiaqing Liang, Yuncheng Huang, Yanghua Xiao, Yunwen Chen
Similes play an imperative role in creative writing such as story and dialogue generation.
1 code implementation • 10 May 2023 • Siyu Yuan, Jiangjie Chen, Changzhi Sun, Jiaqing Liang, Yanghua Xiao, Deqing Yang
Analogical reasoning is a fundamental cognitive ability of humans.
1 code implementation • 3 May 2023 • Siyu Yuan, Deqing Yang, Jinxi Liu, Shuyu Tian, Jiaqing Liang, Yanghua Xiao, Rui Xie
The prompt adopts the topic of the given entity from the existing knowledge in KGs to mitigate the spurious co-occurrence correlations between entities and biased concepts.
no code implementations • 25 Mar 2023 • Zhouhong Gu, Sihang Jiang, Wenhao Huang, Jiaqing Liang, Hongwei Feng, Yanghua Xiao
The model's ability to understand synonymous expressions is crucial in many kinds of downstream tasks.
no code implementations • 25 Mar 2023 • Zhouhong Gu, Sihang Jiang, Jingping Liu, Yanghua Xiao, Hongwei Feng, Zhixu Li, Jiaqing Liang, Jian Zhong
The previous methods suffer from low efficiency, since they waste much time on newly arriving concepts, most of which are in fact noisy.
2 code implementations • 18 Feb 2023 • Dakuan Lu, Hengkui Wu, Jiaqing Liang, Yipei Xu, Qianyu He, Yipeng Geng, Mengkun Han, Yingsi Xin, Yanghua Xiao
Our aim is to facilitate research in the development of NLP within the Chinese financial domain.
2 code implementations • 10 Dec 2022 • Qianyu He, Xintao Wang, Jiaqing Liang, Yanghua Xiao
The ability to understand and generate similes is an imperative step to realize human-level AI.
1 code implementation • 6 Oct 2022 • Siyu Yuan, Deqing Yang, Jiaqing Liang, Zhixu Li, Jinxi Liu, Jingyue Huang, Yanghua Xiao
To overcome these drawbacks, we propose a novel generative entity typing (GET) paradigm: given a text with an entity mention, the multiple types for the role that the entity plays in the text are generated with a pre-trained language model (PLM).
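The GET paradigm can be sketched as prompting a PLM with the context and the entity mention, then decoding type phrases for the role the mention plays there. The prompt template and the `fake_plm` stand-in below are hypothetical:

```python
def typing_prompt(text, mention):
    """Cloze-style prompt asking a PLM to generate context-dependent
    types for the mention (template is hypothetical)."""
    return f"{text} In this sentence, {mention} is a [MASK]."

def generate_types(prompt, generate, k=3):
    """Decode k type phrases; `generate` stands in for beam-search
    decoding of the masked slot by a pre-trained LM."""
    return generate(prompt, k)

# Toy stand-in for the PLM call:
fake_plm = lambda prompt, k: ["athlete", "gold medalist", "person"][:k]

prompt = typing_prompt("Bolt won the 100m final in Rio.", "Bolt")
print(generate_types(prompt, fake_plm, k=2))  # → ['athlete', 'gold medalist']
```

Because the types are generated rather than picked from a fixed label set, the same mention can receive different types in different contexts.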
1 code implementation • 30 Aug 2022 • Siyu Yuan, Deqing Yang, Jiaqing Liang, Jilun Sun, Jingyue Huang, Kaiyan Cao, Yanghua Xiao, Rui Xie
In order to supply existing KGs with more fine-grained and new concepts, we propose a novel concept extraction framework, namely MRC-CE, to extract large-scale multi-granular concepts from the descriptive texts of entities.
1 code implementation • 30 Jun 2022 • Jingping Liu, Yuqiu Song, Kui Xue, Hongli Sun, Chao Wang, Lihan Chen, Haiyun Jiang, Jiaqing Liang, Tong Ruan
Specifically, we focus on layer tuning for feed-forward network in the Transformer, namely FL-tuning.
1 code implementation • 25 Jun 2022 • Xintao Wang, Qianyu He, Jiaqing Liang, Yanghua Xiao
In this paper, we propose LMKE, which adopts Language Models to derive Knowledge Embeddings, aiming at both enriching representations of long-tail entities and solving problems of prior description-based methods.
Ranked #3 on Link Prediction on WN18RR
no code implementations • 17 May 2022 • Ailisi Li, Xueyao Jiang, Bang Liu, Jiaqing Liang, Yanghua Xiao
Math Word Problems (MWP) is an important task that requires understanding and reasoning over mathematical text.
no code implementations • 21 Feb 2022 • Lihan Chen, Sihang Jiang, Jingping Liu, Chao Wang, Sheng Zhang, Chenhao Xie, Jiaqing Liang, Yanghua Xiao, Rui Song
Knowledge graphs (KGs) are an important knowledge source for a wide range of applications, and rule mining from KGs has recently attracted wide interest in the KG research community.
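A minimal form of KG rule mining scores candidate chain rules r1(x,y) ∧ r2(y,z) ⇒ r3(x,z) by support (how many predicted facts actually hold) and confidence (support over predictions). The brute-force miner below sketches that idea on toy triples; real systems use far more scalable search:

```python
from collections import defaultdict

def mine_chain_rules(triples, min_conf=0.5, min_support=1):
    """Mine chain rules r1(x,y) ∧ r2(y,z) ⇒ r3(x,z) from
    (head, relation, tail) triples, scored by support and confidence."""
    by_rel = defaultdict(set)       # relation -> {(head, tail)}
    out_edges = defaultdict(list)   # (relation, head) -> [tails]
    rels = set()
    for h, r, t in triples:
        by_rel[r].add((h, t))
        out_edges[(r, h)].append(t)
        rels.add(r)

    rules = []
    for r1 in rels:
        for r2 in rels:
            # all (x, z) pairs predicted by the rule body r1 ∘ r2
            predicted = {(x, z)
                         for (x, y) in by_rel[r1]
                         for z in out_edges[(r2, y)]}
            if not predicted:
                continue
            for r3 in rels:
                support = len(predicted & by_rel[r3])
                conf = support / len(predicted)
                if support >= min_support and conf >= min_conf:
                    rules.append(((r1, r2, r3), support, round(conf, 2)))
    return rules

kg = [("a", "father", "b"), ("b", "father", "c"), ("a", "grandfather", "c"),
      ("d", "father", "e"), ("e", "father", "f"), ("d", "grandfather", "f")]
print(mine_chain_rules(kg))  # finds father ∘ father ⇒ grandfather
```

On this toy KG the only rule clearing both thresholds is the expected one: father of a father is a grandfather, with confidence 1.0.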
no code implementations • 7 Jan 2022 • Ailisi Li, Jiaqing Liang, Yanghua Xiao
In this paper, we propose a set of novel data augmentation approaches to supplement existing datasets with such data that are augmented with different kinds of local variances, and help to improve the generalization ability of current neural models.
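One common local-variance augmentation is substituting fresh numbers into a problem template and recomputing the gold answer, so the surface form changes while the underlying reasoning stays fixed. The sketch below is illustrative; the paper's augmentations are broader:

```python
import random

def augment(template, answer_fn, n_variants=3, lo=2, hi=20, seed=0):
    """Generate MWP variants by substituting new numbers into a
    template and recomputing the gold answer with answer_fn."""
    rng = random.Random(seed)
    variants = []
    for _ in range(n_variants):
        a, b = rng.randint(lo, hi), rng.randint(lo, hi)
        variants.append((template.format(a=a, b=b), answer_fn(a, b)))
    return variants

tmpl = "Tom has {a} apples and buys {b} more. How many apples does he have?"
for question, answer in augment(tmpl, lambda a, b: a + b):
    print(answer, "-", question)
```

Pairing each template with an answer function keeps the augmented labels correct by construction, which is what lets such data target the model's generalization rather than memorization.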
1 code implementation • ACL 2021 • Chenhao Xie, Jiaqing Liang, Jingping Liu, Chengsong Huang, Wenhao Huang, Yanghua Xiao
Next, we formulate relation extraction as a positive-unlabeled (PU) learning task to alleviate the false negative problem.
Ranked #1 on Relation Extraction on NYT11-HRL
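Framing relation extraction as PU learning means estimating risk from positive and unlabeled examples only. The non-negative PU risk of Kiryo et al. (2017) is one standard formulation (not necessarily the exact estimator used in this paper): the negative-class risk is approximated from unlabeled data minus the positives' contribution, clipped at zero to avoid going negative.

```python
def nn_pu_risk(loss_pos, loss_unl_as_neg, loss_pos_as_neg, pi):
    """Non-negative PU risk: with only positive (P) and unlabeled (U)
    examples, estimate the negative-class risk as
    R^-(U) - pi * R^-(P), clipped at zero.
    pi is the (assumed known) class prior P(y = +1)."""
    def mean(xs):
        return sum(xs) / len(xs)
    risk_pos = pi * mean(loss_pos)                       # positives labeled +
    risk_neg = mean(loss_unl_as_neg) - pi * mean(loss_pos_as_neg)
    return risk_pos + max(0.0, risk_neg)

# Toy per-example losses under some classifier (hypothetical numbers):
print(nn_pu_risk(loss_pos=[0.2, 0.4],          # l(f(x), +1) on positives
                 loss_unl_as_neg=[0.3, 0.5],   # l(f(x), -1) on unlabeled
                 loss_pos_as_neg=[0.9, 1.1],   # l(f(x), -1) on positives
                 pi=0.3))                      # ≈ 0.19
```

The clipping (`max(0.0, ...)`) is what keeps flexible models from overfitting the unlabeled set by driving the estimated negative risk below zero.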
no code implementations • 6 May 2020 • Chenhao Xie, Qiao Cheng, Jiaqing Liang, Lihan Chen, Yanghua Xiao
In contrast, traditional machine learning algorithms often rely on negative examples; otherwise the model is prone to collapsing into always-true predictions.
no code implementations • 27 Feb 2019 • Jindong Chen, Ao Wang, Jiangjie Chen, Yanghua Xiao, Zhendong Chu, Jingping Liu, Jiaqing Liang, Wei Wang
Taxonomies play an important role in machine intelligence.