
Do RNNs learn human-like abstract word order preferences?



Abstract: RNN language models have achieved state-of-the-art results on various tasks, but what exactly they are representing about syntax is as yet unclear. Here we investigate whether RNN language models learn human-like word order preferences in syntactic alternations. We collect language model surprisal scores for controlled sentence stimuli exhibiting major syntactic alternations in English: heavy NP shift, particle shift, the dative alternation, and the genitive alternation. We show that RNN language models reproduce human preferences in these alternations based on NP length, animacy, and definiteness. We collect human acceptability ratings for our stimuli, in the first acceptability judgment experiment directly manipulating the predictors of syntactic alternations. We show that the RNNs' performance is similar to the human acceptability ratings and is not matched by an n-gram baseline model. Our results show that RNNs learn the abstract features of weight, animacy, and definiteness which underlie soft constraints on syntactic alternations.

Authors: Richard Futrell; Roger P. Levy

Affiliations: Department of Language Science, UC Irvine; Department of Brain and Cognitive Sciences, MIT

Year (Volume), Issue: 2019

Pages: 50-59

Total pages: 10

Language: English

