Title: Using a Maximum Entropy Classifier to link “good” corpus examples to dictionary senses
Authors: Geyken, Alexander / Pölitz, Christian / Bartz, Thomas
Year: 2015
Type: Article
Publisher: Trojina, Institute for Applied Slovene Studies / Lexical Computing Ltd.
Place: Ljubljana / Brighton
In: Kosem, Iztok / Jakubíček, Miloš / Kallas, Jelena / Krek, Simon (eds.): Electronic lexicography in the 21st century: linking lexical data in the digital age. Proceedings of the eLex 2015 conference, 11-13 August 2015, Herstmonceux Castle, United Kingdom
Pages: 304-314
Languages examined: Deutsch*German - Englisch*English
Keywords: Beispiel*example
Disambiguierung*disambiguation
Kollokationen/Phraseologismen/Wortverbindungen*collocations/phraseologisms/multi word items
korpusbasierte Lexikografie*corpus-based lexicography
Redaktionssystem*lexicographic editor
Medium: Online
URI: https://elex.link/elex2015/conference-proceedings/
Last accessed: 22.10.2018
Abstract: A particular problem in maintaining dictionaries is replacing outdated example sentences with up-to-date corpus examples. Extraction methods such as Good Dictionary Examples (GDEX; Kilgarriff, 2008) have been developed to tackle this problem. We extend GDEX to polysemous entries by applying machine learning techniques to map example sentences to the appropriate dictionary senses. The idea is to enrich our knowledge base by computing the set of all collocations and to use a maximum entropy classifier (MEC; Nigam, 1999) to learn the correct mapping between a corpus sentence and its dictionary sense. Our method is based on hand-labelled sense annotations. Results show an accuracy of 49.16% for the MEC, which is significantly better than the Lesk algorithm (31.17%).
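The maximum entropy classifier described in the abstract is, in essence, multinomial logistic regression over collocation features. The following minimal pure-Python sketch illustrates the general technique only; the toy collocation features, sense labels, and training settings below are invented for illustration and are not the paper's actual data or setup.

```python
import math

# Sketch of a maximum entropy (multinomial logistic regression) sense
# classifier trained with stochastic gradient ascent on binary collocation
# features. All senses and features below are invented examples.

def train_maxent(examples, senses, epochs=200, lr=0.5):
    """examples: list of (set_of_collocation_features, sense) pairs."""
    feats = sorted({f for x, _ in examples for f in x})
    w = {s: {f: 0.0 for f in feats} for s in senses}  # one weight vector per sense
    for _ in range(epochs):
        for x, y in examples:
            # softmax over candidate senses (max-subtracted for numerical stability)
            scores = {s: sum(w[s].get(f, 0.0) for f in x) for s in senses}
            m = max(scores.values())
            exps = {s: math.exp(scores[s] - m) for s in senses}
            z = sum(exps.values())
            for s in senses:
                grad = (1.0 if s == y else 0.0) - exps[s] / z
                for f in x:
                    w[s][f] += lr * grad
    return w

def predict(w, x, senses):
    """Assign the sense with the highest linear score for feature set x."""
    return max(senses, key=lambda s: sum(w[s].get(f, 0.0) for f in x))

# Toy training data: collocates of German "Bank" ('financial institution'
# vs. 'bench'), hand-labelled with the appropriate dictionary sense.
senses = ["bank_institution", "bank_bench"]
examples = [
    ({"geld", "konto"}, "bank_institution"),
    ({"zins", "geld"}, "bank_institution"),
    ({"park", "sitzen"}, "bank_bench"),
    ({"holz", "park"}, "bank_bench"),
]
w = train_maxent(examples, senses)
print(predict(w, {"konto", "geld"}, senses))  # bank_institution
```

The hand-labelled (feature set, sense) pairs play the role of the sense annotations mentioned in the abstract; the Lesk baseline it compares against would instead score senses by word overlap with the dictionary gloss, without any training.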