1 code implementation • COLING 2022 • Bo Xu, Shizhou Huang, Ming Du, Hongya Wang, Hui Song, Chaofeng Sha, Yanghua Xiao
In this paper, we argue that different social media posts should consider different modalities for multimodal information extraction.
no code implementations • COLING 2022 • Xuantao Lu, Jingping Liu, Zhouhong Gu, Hanwen Tong, Chenhao Xie, Junyang Huang, Yanghua Xiao, Wenguang Wang
In this paper, we propose a scoring model to automatically learn a model-based reward, and an effective training strategy based on curriculum learning is further proposed to stabilize the training process.
1 code implementation • 24 Apr 2024 • Qianyu He, Jie Zeng, Qianxi He, Jiaqing Liang, Yanghua Xiao
It is imperative for Large Language Models (LLMs) to follow instructions with elaborate requirements (i.e., Complex Instruction Following).
1 code implementation • 19 Apr 2024 • Wenhao Huang, Chenghao Peng, Zhixu Li, Jiaqing Liang, Yanghua Xiao, Liqian Wen, Zulong Chen
We propose AutoCrawler, a two-stage framework that leverages the hierarchical structure of HTML for progressive understanding.
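The idea of exploiting HTML's hierarchical structure can be illustrated with a toy top-down traversal that records the tag path leading to target content, which a later stage could turn into an extraction rule. This is a hand-rolled standard-library sketch of the general idea, not the AutoCrawler implementation; the HTML snippet and target string are made-up examples.

```python
from html.parser import HTMLParser

class PathRecorder(HTMLParser):
    """Walk the HTML hierarchy and record tag paths to matching text."""

    def __init__(self, target):
        super().__init__()
        self.target = target
        self.stack = []   # current path of open tags
        self.hits = []    # tag paths whose text contains the target

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()

    def handle_data(self, data):
        if self.target in data:
            self.hits.append("/".join(self.stack))

page = "<html><body><div><span>price: $9</span></div></body></html>"
parser = PathRecorder("price")
parser.feed(page)
print(parser.hits)  # ['html/body/div/span']
```

The recorded path plays the role of a reusable crawler action: once learned from a few pages, it can be replayed on same-template pages without re-reading the full document.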
no code implementations • 18 Apr 2024 • Rui Xu, Xintao Wang, Jiangjie Chen, Siyu Yuan, Xinfeng Yuan, Jiaqing Liang, Zulong Chen, Xiaoqing Dong, Yanghua Xiao
Can Large Language Models substitute humans in making important decisions?
no code implementations • 16 Apr 2024 • Haixia Han, Tingyun Li, Shisong Chen, Jie Shi, Chengyu Du, Yanghua Xiao, Jiaqing Liang, Xin Lin
Specifically, we first identify three key problems: (1) How to capture the inherent confidence of the LLM?
no code implementations • 15 Apr 2024 • Zepeng Ding, Wenhao Huang, Jiaqing Liang, Deqing Yang, Yanghua Xiao
The framework includes an evaluation model that can extract related entity pairs with high precision.
1 code implementation • 15 Apr 2024 • Yuchen Shi, Deqing Yang, Jingping Liu, Yanghua Xiao, ZongYu Wang, Huimin Xu
To achieve NTE, we devise a novel Syntax&Semantic-Enhanced Negation Extraction model, namely SSENE, which is built on a generative pretrained language model (PLM) of Encoder-Decoder architecture with a multi-task learning framework.
no code implementations • 11 Apr 2024 • Haokun Zhao, Haixia Han, Jie Shi, Chengyu Du, Jiaqing Liang, Yanghua Xiao
Continual Learning (CL) is a commonly used method to address this issue.
no code implementations • 9 Apr 2024 • Xintao Wang, Jiangjie Chen, Nianqi Li, Lida Chen, Xinfeng Yuan, Wei Shi, Xuyang Ge, Rui Xu, Yanghua Xiao
In rapidly advancing research fields such as AI, managing and staying abreast of the latest scientific literature has become a significant challenge for researchers.
1 code implementation • 4 Apr 2024 • Siye Wu, Jian Xie, Jiangjie Chen, Tinghui Zhu, Kai Zhang, Yanghua Xiao
By leveraging the retrieval of information from external knowledge databases, Large Language Models (LLMs) exhibit enhanced capabilities for accomplishing many knowledge-intensive tasks.
no code implementations • 4 Apr 2024 • Yanda Li, Dixuan Wang, Jiaqing Liang, Guochao Jiang, Qianyu He, Yanghua Xiao, Deqing Yang
Large Language Models (LLMs) have demonstrated good performance in many reasoning tasks, but they still struggle with some complicated reasoning tasks including logical reasoning.
no code implementations • 25 Mar 2024 • Wenhao Huang, Qianyu He, Zhixu Li, Jiaqing Liang, Yanghua Xiao
Definition bias is a negative phenomenon that can mislead models.
1 code implementation • 20 Mar 2024 • Zhouhong Gu, Xiaoxuan Zhu, Haoran Guo, Lin Zhang, Yin Cai, Hao Shen, Jiangjie Chen, Zheyu Ye, Yifei Dai, Yan Gao, Yao Hu, Hongwei Feng, Yanghua Xiao
Language significantly influences the formation and evolution of Human emergent behavior, which is crucial in understanding collective intelligence within human societies.
no code implementations • 14 Mar 2024 • Yuncheng Huang, Qianyu He, Yipei Xu, Jiaqing Liang, Yanghua Xiao
In our experiments, we find that atomic skills can not spontaneously generalize to compositional tasks.
no code implementations • 12 Mar 2024 • Jianchen Wang, Zhouhong Gu, Zhuozhi Xiong, Hongwei Feng, Yanghua Xiao
Large Language Models have revolutionized numerous tasks with their remarkable efficacy. However, the editing of these models, crucial for rectifying outdated or erroneous information, often leads to a complex issue known as the ripple effect in the hidden space.
no code implementations • 3 Mar 2024 • Haiquan Zhao, Xuwu Wang, Shisong Chen, Zhixu Li, Xin Zheng, Yanghua Xiao
In this paper, we propose a task called Online Video Entity Linking (OVEL), aiming to establish connections between mentions in online videos and a knowledge base with high accuracy and timeliness.
1 code implementation • 20 Feb 2024 • Jiayi Fu, Xuandong Zhao, Ruihan Yang, Yuansen Zhang, Jiangjie Chen, Yanghua Xiao
Large language models (LLMs) excel at generating human-like text, but also raise concerns about misuse in fake news and academic dishonesty.
no code implementations • 8 Feb 2024 • Yikai Zhang, Siyu Yuan, Caiyu Hu, Kyle Richardson, Yanghua Xiao, Jiangjie Chen
Despite remarkable advancements in emulating human-like behavior through Large Language Models (LLMs), current textual simulations do not adequately address the notion of time.
1 code implementation • 2 Feb 2024 • Jian Xie, Kai Zhang, Jiangjie Chen, Tinghui Zhu, Renze Lou, Yuandong Tian, Yanghua Xiao, Yu Su
Are these language agents capable of planning in more complex settings that are out of the reach of prior AI agents?
1 code implementation • 20 Jan 2024 • Zhen Chen, Jingping Liu, Deqing Yang, Yanghua Xiao, Huimin Xu, ZongYu Wang, Rui Xie, Yunsen Xian
Open information extraction (OpenIE) aims to extract schema-free triplets in the form of (subject, predicate, object) from a given sentence.
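The schema-free (subject, predicate, object) output format can be made concrete with a naive pattern matcher. This is purely an illustration of the task's input/output, not the paper's model; the tiny verb lexicon and example sentence are invented for the sketch.

```python
import re

# Made-up verb lexicon for illustration only.
VERBS = r"(founded|acquired|located in)"

def extract_triplets(sentence):
    """Return (subject, predicate, object) tuples for simple 'X verb Y' patterns."""
    pattern = re.compile(rf"(.+?)\s+{VERBS}\s+(.+)")
    m = pattern.match(sentence.rstrip("."))
    if not m:
        return []
    return [(m.group(1), m.group(2), m.group(3))]

print(extract_triplets("Larry Page founded Google"))
# [('Larry Page', 'founded', 'Google')]
```

A real OpenIE system replaces the fixed lexicon with learned extraction over arbitrary predicates, which is exactly what makes the task "schema-free".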
no code implementations • 14 Jan 2024 • Haixia Han, Jiaqing Liang, Jie Shi, Qianyu He, Yanghua Xiao
In this paper, we introduce Intrinsic Self-Correction (ISC) in generative language models, aiming to correct the initial output of LMs in a self-triggered manner, even for small LMs with 6 billion parameters.
no code implementations • 11 Jan 2024 • Xintao Wang, Zhouhong Gu, Jiaqing Liang, Dakuan Lu, Yanghua Xiao, Wei Wang
In this paper, we propose ConcEPT, which stands for Concept-Enhanced Pre-Training for language models, to infuse conceptual knowledge into PLMs.
no code implementations • 29 Dec 2023 • Yuncheng Huang, Qianyu He, Jiaqing Liang, Sihang Jiang, Yanghua Xiao, Yunwen Chen
Hence, we present a framework to enhance the quantitative reasoning ability of language models based on dimension perception.
no code implementations • 16 Dec 2023 • Zhiwei Zha, Jiaan Wang, Zhixu Li, Xiangru Zhu, Wei Song, Yanghua Xiao
To collect concept-image and concept-description alignments, we propose a context-aware multi-modal symbol grounding approach that considers context information in existing large-scale image-text pairs with respect to each concept.
1 code implementation • 4 Dec 2023 • Xiangru Zhu, Penglei Sun, Chengyu Wang, Jingping Liu, Zhixu Li, Yanghua Xiao, Jun Huang
We use Winoground-T2I with a dual objective: to evaluate the performance of T2I models and the metrics used for their evaluation.
no code implementations • 16 Nov 2023 • Yipei Xu, Dakuan Lu, Jiaqing Liang, Xintao Wang, Yipeng Geng, Yingsi Xin, Hengkui Wu, Ken Chen, ruiji zhang, Yanghua Xiao
Pre-trained language models (PLMs) have established a new paradigm in the field of NLP.
2 code implementations • 27 Oct 2023 • Xintao Wang, Yunze Xiao, Jen-tse Huang, Siyu Yuan, Rui Xu, Haoran Guo, Quan Tu, Yaying Fei, Ziang Leng, Wei Wang, Jiangjie Chen, Cheng Li, Yanghua Xiao
This paper, instead, introduces a novel perspective to evaluate the personality fidelity of RPAs with psychological scales.
2 code implementations • 17 Sep 2023 • Qianyu He, Jie Zeng, Wenhao Huang, Lina Chen, Jin Xiao, Qianxi He, Xunzhe Zhou, Lida Chen, Xintao Wang, Yuncheng Huang, Haoning Ye, Zihan Li, Shisong Chen, Yikai Zhang, Zhouhong Gu, Jiaqing Liang, Yanghua Xiao
To bridge this gap, we propose CELLO, a benchmark for evaluating LLMs' ability to follow complex instructions systematically.
1 code implementation • 12 Sep 2023 • Tinghui Zhu, Jingping Liu, Jiaqing Liang, Haiyun Jiang, Yanghua Xiao, ZongYu Wang, Rui Xie, Yunsen Xian
Specifically, on the Chinese taxonomy dataset, our method significantly improves accuracy by 8.75%.
1 code implementation • 26 Aug 2023 • Shuang Li, Jiangjie Chen, Siyu Yuan, Xinyi Wu, Hao Yang, Shimin Tao, Yanghua Xiao
To translate well, machine translation (MT) systems and general-purposed language models (LMs) need a deep understanding of both source and target languages and cultures.
no code implementations • 17 Aug 2023 • Xintao Wang, Qianwen Yang, Yongting Qiu, Jiaqing Liang, Qianyu He, Zhouhong Gu, Yanghua Xiao, Wei Wang
Large language models (LLMs) have demonstrated impressive impact in the field of natural language processing, but they still struggle with several issues, such as completeness, timeliness, faithfulness and adaptability.
1 code implementation • 9 Aug 2023 • Jingdan Zhang, Jiaan Wang, Xiaodan Wang, Zhixu Li, Yanghua Xiao
Multi-modal knowledge graphs (MMKGs) combine different modal data (e.g., text and image) for a comprehensive understanding of entities.
no code implementations • 11 Jul 2023 • Zhouhong Gu, Lin Zhang, Jiangjie Chen, Haoning Ye, Xiaoxuan Zhu, Zihan Li, Zheyu Ye, Yan Gao, Yao Hu, Yanghua Xiao, Hongwei Feng
We introduce DetectBench, a reading comprehension dataset designed to assess a model's ability to jointly perform key information detection and multi-hop reasoning when facing complex and implicit information.
no code implementations • 19 Jun 2023 • Wenhao Huang, Jiaqing Liang, Zhixu Li, Yanghua Xiao, Chuanjun Ji
Information extraction (IE) has been studied extensively.
no code implementations • 16 Jun 2023 • Jingsong Yang, Guanzhou Han, Deqing Yang, Jingping Liu, Yanghua Xiao, Xiang Xu, Baohua Wu, Shenghua Ni
In this paper, we propose a novel Multi-Modal Model for POI Tagging, namely M3PT, which achieves enhanced POI tagging through fusing the target POI's textual and visual features, and the precise matching between the multi-modal representations.
1 code implementation • 13 Jun 2023 • Qianyu He, Yikai Zhang, Jiaqing Liang, Yuncheng Huang, Yanghua Xiao, Yunwen Chen
Similes play an imperative role in creative writing such as story and dialogue generation.
1 code implementation • 11 Jun 2023 • Jian Xie, Yidan Liang, Jingping Liu, Yanghua Xiao, Baohua Wu, Shenghua Ni
In this paper, we propose QUERT, a Continual Pre-trained Language Model for QUERy Understanding in Travel Domain Search.
2 code implementations • 9 Jun 2023 • Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Yixin Zhu, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Weijie Wu, Qianyu He, Rui Xu, Wenhao Huang, Jingping Liu, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
New Natural Language Processing (NLP) benchmarks are urgently needed to align with the rapid development of large language models (LLMs).
1 code implementation • 22 May 2023 • Siyu Yuan, Jiangjie Chen, Xuyang Ge, Yanghua Xiao, Deqing Yang
The vital role of analogical reasoning in human cognition allows us to grasp novel concepts by linking them with familiar ones through shared relational structures.
1 code implementation • 10 May 2023 • Siyu Yuan, Jiangjie Chen, Changzhi Sun, Jiaqing Liang, Yanghua Xiao, Deqing Yang
Analogical reasoning is a fundamental cognitive ability of humans.
1 code implementation • 10 May 2023 • Jiangjie Chen, Wei Shi, Ziquan Fu, Sijie Cheng, Lei LI, Yanghua Xiao
Large language models (LLMs) have been widely studied for their ability to store and utilize positive knowledge.
1 code implementation • 9 May 2023 • Siyu Yuan, Jiangjie Chen, Ziquan Fu, Xuyang Ge, Soham Shah, Charles Robert Jankowski, Yanghua Xiao, Deqing Yang
In everyday life, humans often plan their actions by following step-by-step instructions in the form of goal-oriented scripts.
1 code implementation • 3 May 2023 • Siyu Yuan, Deqing Yang, Jinxi Liu, Shuyu Tian, Jiaqing Liang, Yanghua Xiao, Rui Xie
The prompt adopts the topic of the given entity from the existing knowledge in KGs to mitigate the spurious co-occurrence correlations between entities and biased concepts.
no code implementations • 23 Apr 2023 • Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Zhuozhi Xiong, Zihan Li, Qianyu He, Sihang Jiang, Hongwei Feng, Yanghua Xiao
Domain knowledge refers to the in-depth understanding, expertise, and familiarity with a specific subject, industry, field, or area of special interest.
no code implementations • 25 Mar 2023 • Zhouhong Gu, Sihang Jiang, Jingping Liu, Yanghua Xiao, Hongwei Feng, Zhixu Li, Jiaqing Liang, Jian Zhong
The previous methods suffer from low efficiency, since they waste much time when most of the newly arriving concepts are in fact noisy concepts.
no code implementations • 25 Mar 2023 • Zhouhong Gu, Sihang Jiang, Wenhao Huang, Jiaqing Liang, Hongwei Feng, Yanghua Xiao
A model's ability to understand synonymous expressions is crucial in many kinds of downstream tasks.
2 code implementations • 18 Feb 2023 • Dakuan Lu, Hengkui Wu, Jiaqing Liang, Yipei Xu, Qianyu He, Yipeng Geng, Mengkun Han, Yingsi Xin, Yanghua Xiao
Our aim is to facilitate research in the development of NLP within the Chinese financial domain.
2 code implementations • 10 Dec 2022 • Qianyu He, Xintao Wang, Jiaqing Liang, Yanghua Xiao
The ability to understand and generate similes is an imperative step to realize human-level AI.
no code implementations • 7 Dec 2022 • Jiangjie Chen, Yanghua Xiao
The rapid development and application of natural language generation (NLG) techniques has revolutionized the field of automatic text production.
1 code implementation • 25 Nov 2022 • Shuoyao Zhai, Baichuan Liu, Deqing Yang, Yanghua Xiao
Furthermore, we propose two auxiliary losses corresponding to the two sub-tasks, to refine the representation learning in our model.
1 code implementation • 22 Nov 2022 • Jiangjie Chen, Rui Xu, Wenxuan Zeng, Changzhi Sun, Lei LI, Yanghua Xiao
Given a possibly false claim sentence, how can we automatically correct it with minimal editing?
no code implementations • COLING 2022 • Chengwei Hu, Deqing Yang, Haoliang Jin, Zhen Chen, Yanghua Xiao
Continual relation extraction (CRE) aims to extract relations towards the continuous and iterative arrival of new data, of which the major challenge is the catastrophic forgetting of old tasks.
1 code implementation • 6 Oct 2022 • Siyu Yuan, Deqing Yang, Jiaqing Liang, Zhixu Li, Jinxi Liu, Jingyue Huang, Yanghua Xiao
To overcome these drawbacks, we propose a novel generative entity typing (GET) paradigm: given a text with an entity mention, the multiple types for the role that the entity plays in the text are generated with a pre-trained language model (PLM).
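The generative entity typing (GET) paradigm can be sketched as a prompt template plus a decoder that emits type words, rather than classification over a fixed type set. Both the template and the `toy_generate` stand-in below are hypothetical illustrations; a real system would decode from a trained PLM.

```python
def build_prompt(text, mention):
    """Wrap the context and mention into a fill-in prompt (illustrative template)."""
    return f"{text} In this sentence, {mention} is a"

def toy_generate(prompt):
    """Stand-in for PLM decoding; a real system calls a trained generative model."""
    lexicon = {"Einstein": "physicist"}  # toy lookup, not learned
    for name, typ in lexicon.items():
        if name in prompt:
            return typ
    return "entity"

prompt = build_prompt("Einstein developed relativity.", "Einstein")
print(toy_generate(prompt))  # physicist
```

Because the output is generated text rather than a class index, the same machinery can produce multiple, fine-grained, or previously unseen types for one mention.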
1 code implementation • 30 Aug 2022 • Siyu Yuan, Deqing Yang, Jiaqing Liang, Jilun Sun, Jingyue Huang, Kaiyan Cao, Yanghua Xiao, Rui Xie
In order to supply existing KGs with more fine-grained and new concepts, we propose a novel concept extraction framework, namely MRC-CE, to extract large-scale multi-granular concepts from the descriptive texts of entities.
1 code implementation • 27 Jul 2022 • Jingjie Yi, Deqing Yang, Siyu Yuan, Caiyan Cao, Zhiyao Zhang, Yanghua Xiao
The newly proposed ERC models have leveraged pre-trained language models (PLMs) with the paradigm of pre-training and fine-tuning to obtain good performance.
1 code implementation • 27 Jul 2022 • Lyuxin Xue, Deqing Yang, Yanghua Xiao
Most sequential recommendation (SR) systems employing graph neural networks (GNNs) only model a user's interaction sequence as a flat graph without hierarchy, overlooking diverse factors in the user's preference.
1 code implementation • 25 Jun 2022 • Xintao Wang, Qianyu He, Jiaqing Liang, Yanghua Xiao
In this paper, we propose LMKE, which adopts Language Models to derive Knowledge Embeddings, aiming at both enriching representations of long-tail entities and solving problems of prior description-based methods.
Ranked #3 on Link Prediction on WN18RR
no code implementations • 17 May 2022 • Ailisi Li, Xueyao Jiang, Bang Liu, Jiaqing Liang, Yanghua Xiao
Math Word Problems (MWP) is an important task that requires the ability to understand and reason over mathematical text.
1 code implementation • NAACL 2022 • Chun Zeng, Jiangjie Chen, Tianyi Zhuang, Rui Xu, Hao Yang, Ying Qin, Shimin Tao, Yanghua Xiao
To this end, we propose a plug-in algorithm for this line of work, i.e., Aligned Constrained Training (ACT), which alleviates this problem by familiarizing the model with the source-side context of the constraints.
3 code implementations • ACL 2022 • Xuwu Wang, Junfeng Tian, Min Gui, Zhixu Li, Rui Wang, Ming Yan, Lihan Chen, Yanghua Xiao
In this paper, we present WikiDiverse, a high-quality human-annotated MEL dataset with diversified contextual topics and entity types from Wikinews, which uses Wikipedia as the corresponding knowledge base.
1 code implementation • 28 Mar 2022 • Sijie Cheng, Zhouhong Gu, Bang Liu, Rui Xie, Wei Wu, Yanghua Xiao
Specifically, i) to fully exploit user behavioral information, we extract candidate hyponymy relations that match user interests from query-click concepts; ii) to enhance the semantic information of new concepts and better detect hyponymy relations, we model concepts and relations through both user-generated content and structural information in existing taxonomies and user click logs, by leveraging Pre-trained Language Models and Graph Neural Networks combined with Contrastive Learning; iii) to reduce the cost of dataset construction and overcome data skews, we construct a high-quality and balanced training dataset from the existing taxonomy with no supervision.
no code implementations • Findings (ACL) 2022 • Jiangjie Chen, Rui Xu, Ziquan Fu, Wei Shi, Zhongqiao Li, Xinbo Zhang, Changzhi Sun, Lei LI, Yanghua Xiao, Hao Zhou
Holding the belief that models capable of reasoning should be right for the right reasons, we propose a first-of-its-kind Explainable Knowledge-intensive Analogical Reasoning benchmark (E-KAR).
1 code implementation • ACL 2022 • Qianyu He, Sijie Cheng, Zhixu Li, Rui Xie, Yanghua Xiao
In this paper, we investigate the ability of PLMs in simile interpretation by designing a novel task named Simile Property Probing, i.e., to let the PLMs infer the shared properties of similes.
no code implementations • 21 Feb 2022 • Lihan Chen, Sihang Jiang, Jingping Liu, Chao Wang, Sheng Zhang, Chenhao Xie, Jiaqing Liang, Yanghua Xiao, Rui Song
Knowledge graphs (KGs) are an important source repository for a wide range of applications, and rule mining from KGs has recently attracted wide interest in the KG-related research community.
no code implementations • 11 Feb 2022 • Xiangru Zhu, Zhixu Li, Xiaodan Wang, Xueyao Jiang, Penglei Sun, Xuwu Wang, Yanghua Xiao, Nicholas Jing Yuan
In this survey on MMKGs constructed by texts and images, we first give definitions of MMKGs, followed with the preliminaries on multi-modal tasks and techniques.
no code implementations • 13 Jan 2022 • Yuyan Chen, Yanghua Xiao, Bang Liu
In this research, we argue that the evidence for an answer is critical to enhancing the interpretability of QA models.
no code implementations • 7 Jan 2022 • Ailisi Li, Jiaqing Liang, Yanghua Xiao
In this paper, we propose a set of novel data augmentation approaches that supplement existing datasets with data augmented with different kinds of local variances, helping to improve the generalization ability of current neural models.
1 code implementation • 10 Dec 2021 • Jiangjie Chen, Chun Gan, Sijie Cheng, Hao Zhou, Yanghua Xiao, Lei LI
We also propose a new metric to alleviate the shortcomings of current automatic metrics and better evaluate the trade-off.
no code implementations • 27 Nov 2021 • Jianian Wang, Sheng Zhang, Yanghua Xiao, Rui Song
With multiple components and relations, financial data are often presented as graph data, since graphs can represent both individual features and complicated relations.
no code implementations • 6 Nov 2021 • Ye Liu, Rui Song, Wenbin Lu, Yanghua Xiao
A large number of models and algorithms have been proposed to perform link prediction, among which the tensor factorization method has proven to achieve state-of-the-art performance in terms of computation efficiency and prediction accuracy.
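A classic tensor-factorization scorer for link prediction is the DistMult-style trilinear product: a triple (h, r, t) is scored as the sum over dimensions of the product of the three embedding components. The minimal sketch below uses toy hand-picked vectors rather than trained embeddings, and names only this generic scorer, not the paper's specific method.

```python
def distmult_score(h, r, t):
    """Trilinear DistMult score: sum_d h[d] * r[d] * t[d]."""
    return sum(hd * rd * td for hd, rd, td in zip(h, r, t))

head = [1.0, 0.5]   # toy entity embedding
rel  = [2.0, 1.0]   # toy relation embedding
tail = [0.5, 2.0]   # toy entity embedding
print(distmult_score(head, rel, tail))  # 2.0
```

At prediction time, candidate tails are ranked by this score; the factorized form is what makes scoring all candidates cheap compared with materializing the full adjacency tensor.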
no code implementations • 21 Oct 2021 • Sijie Cheng, Jingwen Wu, Yanghua Xiao, Yang Liu
Today data is often scattered among billions of resource-constrained edge devices with security and privacy constraints.
no code implementations • 2 Aug 2021 • Junyang Huang, Yongbo Wang, Yongliang Wang, Yang Dong, Yanghua Xiao
It first learns relation embeddings over the schema entities and question words under predefined schema relations, with ELECTRA and a relation-aware transformer layer as the backbone.
1 code implementation • ACL 2021 • Li Cui, Deqing Yang, Jiaxin Yu, Chengwei Hu, Jiayang Cheng, Jingjie Yi, Yanghua Xiao
As a typical task of continual learning, continual relation extraction (CRE) aims to extract relations between entities from texts, where the samples of different relations are delivered into the model continuously.
1 code implementation • ACL 2021 • Chenhao Xie, Jiaqing Liang, Jingping Liu, Chengsong Huang, Wenhao Huang, Yanghua Xiao
Next, we formulate relation extraction as a positive-unlabeled learning task to alleviate the false negative problem.
Ranked #1 on Relation Extraction on NYT11-HRL
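The positive-unlabeled formulation can be sketched with the standard non-negative PU risk estimator (Kiryo et al., 2017): the risk on unlabeled data is corrected by the class prior, with a floor at zero to prevent overfitting. The scores and prior below are toy values; this shows the generic PU objective, not the paper's extraction model.

```python
import math

def sigmoid_loss(score, label):
    """Surrogate loss for predicting `label` (+1 or -1) from a raw score."""
    return 1.0 / (1.0 + math.exp(label * score))

def mean(xs):
    return sum(xs) / len(xs)

def nn_pu_risk(pos_scores, unl_scores, prior):
    """Non-negative PU risk: prior * R_p^+ + max(0, R_u^- - prior * R_p^-)."""
    r_p_pos = mean([sigmoid_loss(s, +1) for s in pos_scores])
    r_p_neg = mean([sigmoid_loss(s, -1) for s in pos_scores])
    r_u_neg = mean([sigmoid_loss(s, -1) for s in unl_scores])
    return prior * r_p_pos + max(0.0, r_u_neg - prior * r_p_neg)

pos = [2.0, 1.5]    # scores of labeled positive entity pairs (toy)
unl = [-1.0, 0.3]   # scores of unlabeled pairs (toy)
print(round(nn_pu_risk(pos, unl, prior=0.3), 4))
```

Treating unannotated pairs as unlabeled rather than negative is precisely what lets this objective tolerate false negatives in distantly supervised extraction data.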
no code implementations • 7 Apr 2021 • Jiayang Cheng, Haiyun Jiang, Deqing Yang, Yanghua Xiao
However, few works have focused on how to validate and correct the results generated by the existing relation extraction models.
1 code implementation • 25 Dec 2020 • Jiangjie Chen, Qiaoben Bao, Changzhi Sun, Xinbo Zhang, Jiaze Chen, Hao Zhou, Yanghua Xiao, Lei LI
The final claim verification is based on all latent variables.
no code implementations • 17 Dec 2020 • Zhendong Chu, Haiyun Jiang, Yanghua Xiao, Wei Wang
We treat information sources as multiple views and fuse them to construct an intact space with sufficient information.
no code implementations • 9 Dec 2020 • Haiyun Jiang, Qiaoben Bao, Qiao Cheng, Deqing Yang, Li Wang, Yanghua Xiao
In recent years, many complex relation extraction tasks, i.e., variants of simple binary relation extraction, have been proposed to meet the complex applications in practice.
1 code implementation • EMNLP 2020 • Ye Liu, Sheng Zhang, Rui Song, Suo Feng, Yanghua Xiao
Effectively filtering out noisy articles as well as bad answers is the key to improving extraction accuracy.
1 code implementation • 19 Jun 2020 • Junyang Jiang, Deqing Yang, Yanghua Xiao, Chenlu Shen
Most existing embedding-based recommendation models use embeddings (vectors) corresponding to a single fixed point in low-dimensional space to represent users and items.
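The alternative to fixed-point embeddings is to represent a user or item as a distribution, e.g. a diagonal Gaussian with a mean vector and per-dimension variances. A standard dissimilarity between two such embeddings is the closed-form KL divergence, sketched below with toy numbers; this illustrates the general distribution-based representation, not the specific model of the paper.

```python
import math

def kl_diag_gaussians(mu0, var0, mu1, var1):
    """KL(N0 || N1) for diagonal Gaussians, summed over dimensions."""
    total = 0.0
    for m0, v0, m1, v1 in zip(mu0, var0, mu1, var1):
        total += v0 / v1 + (m1 - m0) ** 2 / v1 - 1.0 + math.log(v1 / v0)
    return 0.5 * total

user = ([0.0, 0.0], [1.0, 1.0])   # (mean, variance per dimension)
item = ([0.0, 0.0], [1.0, 1.0])
print(kl_diag_gaussians(*user, *item))  # 0.0 for identical distributions
```

The learned variances give the model an explicit notion of uncertainty: a user with broad preferences gets large variances, which a point embedding cannot express.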
no code implementations • 18 Jun 2020 • Deqing Yang, Zengcun Song, Lvxin Xue, Yanghua Xiao
Deep neural networks (DNNs) have been widely employed in recommender systems, including those incorporating attention mechanisms for performance improvement.
1 code implementation • 12 Jun 2020 • Wenjing Meng, Deqing Yang, Yanghua Xiao
These insights motivate us to propose a novel SR model MKM-SR in this paper, which incorporates user Micro-behaviors and item Knowledge into Multi-task learning for Session-based Recommendation.
2 code implementations • 17 May 2020 • Chen Lin, Si Chen, Hui Li, Yanghua Xiao, Lianyun Li, Qian Yang
Recommendation Systems (RS) have become an essential part of many online services.
no code implementations • 6 May 2020 • Chenhao Xie, Qiao Cheng, Jiaqing Liang, Lihan Chen, Yanghua Xiao
On the contrary, traditional machine learning algorithms often rely on negative examples; otherwise the model would be prone to collapse into always-true predictions.
no code implementations • IJCNLP 2019 • Xiaofei Shi, Yanghua Xiao
We calibrate embeddings of different KGs via a small set of pre-aligned seeds.
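Calibrating two KG embedding spaces from a few pre-aligned seed entity pairs can be illustrated with a deliberately simple stand-in: fit a mean translation on the seeds, then apply it to every other entity. Real alignment methods learn a rotation or full linear map; the seeds and vectors here are made-up 2-D examples.

```python
def fit_translation(seeds_src, seeds_tgt):
    """Average per-dimension offset between seed pairs (a toy calibration map)."""
    dim = len(seeds_src[0])
    offset = [0.0] * dim
    for s, t in zip(seeds_src, seeds_tgt):
        for d in range(dim):
            offset[d] += (t[d] - s[d]) / len(seeds_src)
    return offset

def calibrate(vec, offset):
    """Move a KG1 embedding into KG2's space using the fitted offset."""
    return [v + o for v, o in zip(vec, offset)]

src = [[0.0, 0.0], [1.0, 1.0]]   # seed entities embedded in KG1's space
tgt = [[1.0, 2.0], [2.0, 3.0]]   # the same entities in KG2's space
off = fit_translation(src, tgt)
print(calibrate([0.5, 0.5], off))  # [1.5, 2.5]
```

Once the two spaces are calibrated, entity alignment reduces to nearest-neighbor search across the shared space.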
no code implementations • 14 Oct 2019 • Hao Cheng, Xiaoqing Yang, Zang Li, Yanghua Xiao, Yu-Cheng Lin
Deep neural networks have been widely used in text classification.
1 code implementation • 28 Aug 2019 • Yuting Ye, Xuwu Wang, Jiangchao Yao, Kunyang Jia, Jingren Zhou, Yanghua Xiao, Hongxia Yang
Low-dimensional embeddings of knowledge graphs and behavior graphs have proved remarkably powerful in varieties of tasks, from predicting unobserved edges between entities to content recommendation.
no code implementations • ACL 2019 • Jiangjie Chen, Ao Wang, Haiyun Jiang, Suo Feng, Chenguang Li, Yanghua Xiao
A type description is a succinct noun compound which helps humans and machines quickly grasp the informative and distinctive information of an entity.
no code implementations • 6 Mar 2019 • Wanyun Cui, Yanghua Xiao, Haixun Wang, Yangqiu Song, Seung-won Hwang, Wei Wang
Based on these templates, our QA system KBQA effectively supports binary factoid questions, as well as complex questions which are composed of a series of binary factoid questions.
no code implementations • 27 Feb 2019 • Jindong Chen, Ao Wang, Jiangjie Chen, Yanghua Xiao, Zhendong Chu, Jingping Liu, Jiaqing Liang, Wei Wang
Taxonomies play an important role in machine intelligence.
1 code implementation • 21 Feb 2019 • Jindong Chen, Yizhou Hu, Jingping Liu, Yanghua Xiao, Haiyun Jiang
For the purpose of measuring the importance of knowledge, we introduce attention mechanisms and propose deep Short Text Classification with Knowledge powered Attention (STCKA).
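The knowledge-powered attention can be sketched as ordinary scaled dot-product attention: the short text's query vector weighs candidate knowledge vectors by relevance and mixes them. All vectors below are toy values; this shows the generic attention mechanism such models build on, not STCKA's exact architecture.

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention: softmax(q . k / sqrt(d)) weighted sum of values."""
    scale = math.sqrt(len(query))
    scores = [sum(q * k for q, k in zip(query, key)) / scale for key in keys]
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(values[0])
    return [sum(w * v[d] for w, v in zip(weights, values)) for d in range(dim)]

q = [1.0, 0.0]                       # short-text representation (toy)
ks = [[1.0, 0.0], [0.0, 1.0]]        # knowledge concept keys (toy)
vs = [[1.0, 0.0], [0.0, 1.0]]        # knowledge concept values (toy)
print([round(x, 3) for x in attention(q, ks, vs)])
```

Concepts aligned with the text receive higher weights, so relevant knowledge dominates the fused representation while noisy concepts are damped rather than hard-filtered.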
no code implementations • 20 Oct 2017 • Wanyun Cui, Xiyou Zhou, Hangyu Lin, Yanghua Xiao, Haixun Wang, Seung-won Hwang, Wei Wang
In this paper, we introduce verb patterns to represent verbs' semantics, such that each pattern corresponds to a single semantic of the verb.
no code implementations • 29 Nov 2015 • Yi Zhang, Yanghua Xiao, Seung-won Hwang, Haixun Wang, X. Sean Wang, Wei Wang
This paper provides a query processing method based on the relevance models between entity sets and concepts.