Abstract Wikipedia/Related and previous work/Natural language generation


Abstract Wikipedia will generate natural language text from an abstract representation. This is not a novel idea; it has been attempted a number of times before.

This page aims to collect existing approaches: it summarizes the core ideas of each approach, their advantages and disadvantages, and points to existing implementations. This page, by and for the community, will help to choose which approach to focus on first.

Implementations[edit]

Arria NLG[edit]

ASTROGEN[edit]

Chimera[edit]

Elvex[edit]

FUF/SURGE[edit]

Genl[edit]

GoPhi[edit]

Grammar Explorer[edit]

Grammatical Framework[edit]

  • Wikipedia: Grammatical Framework [ en ] [ nn ]
  • Website: https://www.grammaticalframework.org
  • Implementation link:
  • License: GNU General Public License (see text)
  • Supported languages: Afrikaans, Amharic (partial), Arabic (partial), Basque (partial), Bulgarian, Catalan, Chinese, Czech (partial), Danish, Dutch, English, Estonian, Finnish, French, German, Greek ancient (partial), Greek modern, Hebrew (fragments), Hindi, Hungarian (partial), Interlingua, Italian, Japanese, Korean (partial), Latin (partial), Latvian, Maltese, Mongolian, Nepali, Norwegian bokmål, Norwegian nynorsk, Persian, Polish, Punjabi, Romanian, Russian, Sindhi, Slovak (partial), Slovene (partial), Somali (partial), Spanish, Swahili (fragments), Swedish, Thai, Turkish (fragments), and Urdu.

jsRealB[edit]

KPML[edit]

Linguistic Knowledge Builder[edit]

Multimodal Unification Grammar[edit]

NaturalOWL[edit]

NLGen and NLGen2[edit]

OpenCCG[edit]

rLDCP[edit]

RoseaNLG[edit]

Semantic Web Authoring Tool (SWAT)[edit]

SimpleNLG[edit]

SPUD[edit]

Suregen-2[edit]

Syntax Maker[edit]

TGen[edit]

Universal Networking Language[edit]

UralicNLP[edit]

  • Website: https://uralicnlp.com/
  • Website: https://github.com/mikahama/uralicNLP
  • Supported languages: Finnish, Russian, German, English, Norwegian, Swedish, Arabic, Ingrian, Meadow & Eastern Mari, Votic, Olonets-Karelian, Erzya, Moksha, Hill Mari, Udmurt, Tundra Nenets, Komi-Permyak, North Sami, South Sami and Skolt Sami[1]

...[edit]

  • Wikipedia:
  • Website:
  • Implementation link:
  • License:
  • Supported languages:

Theoretical background[edit]


Natural language generation [ de ] [ en ] [ es ] [ fr ] [ 日本語 ] [ nn ] [ 中文 ] is a sub-field of natural language processing. See the broader topic on Scholia.[2]

Pipeline model[edit]

In their 2018 survey,[3] Gatt[4] and Krahmer[5] describe natural language generation as the "task of generating text or speech from non-linguistic input." They identify six sub-problems (after Reiter & Dale 1997, 2000[6]) [2. NLG Tasks, pp. 70–82]:[3]

  1. Content determination (Content determination (Q5165077))
  2. Text structuring (Document structuring (Q5287648))
  3. Sentence aggregation (Aggregation (Q4692263))
  4. Lexicalisation (Lexical choice (Q6537688))
  5. Referring expression generation (Referring expression generation (Q7307185))
  6. Linguistic realisation (Realization (Q7301282))

Note that, as of 24 July 2020, the six topics listed above have articles only in the English Wikipedia.

These six sub-problems segment the "pipeline": the "early" tasks are aligned to the purpose of the linguistic output, while the "late" tasks are more aligned to its final linguistic form. A summary might be: "What (1), ordered (2) and segmented (3) how, with what words (4 & 5) in which forms (6)". This summary does not clearly distinguish lexicalisation (4) from referring expression generation (REG) (5). The key idea in REG is avoiding repetition and ambiguity, or rather managing the tension between those conflicting aims. This corresponds to the Gricean maxim (Grice, 1975[7]) that "speakers should make sure that their contributions are sufficiently informative for the purposes of the exchange, but not more so" (or, as Roger Sessions said (1950) after Albert Einstein (1933): "everything should be as simple as it can be, but not simpler!").
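To make the pipeline concrete, the six stages can be sketched as a chain of small functions. The Python sketch below is purely illustrative: the facts, function names, and lexical choices are invented for this example and do not come from any system listed on this page.

```python
# Toy illustration of the six-stage NLG pipeline (after Reiter & Dale).
# All data and names here are hypothetical, invented for this sketch.

FACTS = [
    {"entity": "Marie Curie", "relation": "field", "value": "physics"},
    {"entity": "Marie Curie", "relation": "award", "value": "the Nobel Prize"},
]

def determine_content(facts):
    """1. Content determination: select which facts to express."""
    return [f for f in facts if f["relation"] in ("field", "award")]

def structure_document(facts):
    """2. Text structuring: order the selected facts."""
    order = {"field": 0, "award": 1}
    return sorted(facts, key=lambda f: order[f["relation"]])

def aggregate(facts):
    """3. Aggregation: group facts about the same entity into one plan."""
    plans = {}
    for f in facts:
        plans.setdefault(f["entity"], []).append(f)
    return list(plans.items())

def lexicalise(relation):
    """4. Lexicalisation: choose a verb phrase for each relation."""
    return {"field": "worked in", "award": "won"}[relation]

def refer(entity, mentioned):
    """5. Referring expression generation: pronominalise repeat mentions."""
    return "she" if entity in mentioned else entity

def realise(plans):
    """6. Linguistic realisation: produce surface sentences."""
    mentioned = set()
    sentences = []
    for entity, facts in plans:
        for f in facts:
            subject = refer(entity, mentioned)
            mentioned.add(entity)
            clause = f"{subject} {lexicalise(f['relation'])} {f['value']}"
            sentences.append(clause[0].upper() + clause[1:] + ".")
    return " ".join(sentences)

text = realise(aggregate(structure_document(determine_content(FACTS))))
print(text)  # Marie Curie worked in physics. She won the Nobel Prize.
```

The second mention of the entity is pronominalised by the REG step (5), illustrating the avoidance of repetition discussed above; everything else is hard-coded to keep the pipeline's shape visible.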

Content determination[edit]

Document structuring[edit]

Aggregation[edit]

Lexical choice[edit]

Referring expression generation[edit]

Realization[edit]

"In linguistics, realization is the process by which some kind of surface representation is derived from its underlying representation; that is, the way in which some abstract object of linguistic analysis comes to be produced in actual language. Phonemes are often said to be realized by speech sounds. The different sounds that can realize a particular phoneme are called its allophones.

"Realization is also a subtask of natural language generation, which involves creating an actual text in a human language (English, French, etc.) from a syntactic representation."

English Wikipedia
(Wikipedia contributors, 'Realization', Wikipedia, The Free Encyclopedia, 26 May 2020, 02:46 UTC, <https://en.wikipedia.org/w/index.php?title=Realization&oldid=958866516> [accessed 31 August 2020].)
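As a minimal illustration of surface realisation, the toy Python function below derives an English sentence from an abstract clause specification, handling only third-person subject-verb agreement. The data format and function name are invented for this sketch; this is not the API of SimpleNLG or of any other realiser listed above.

```python
# Toy surface realiser: maps an abstract clause specification to an
# English sentence. Purely illustrative; real realisers handle far
# richer morphology, syntax, and punctuation.

def realise_clause(spec):
    """Turn {'subject', 'verb', 'object', 'number'} into a sentence."""
    verb = spec["verb"]
    if spec.get("number", "singular") == "singular":
        # Very simplified third-person-singular inflection.
        verb += "es" if verb.endswith(("s", "sh", "ch")) else "s"
    sentence = " ".join([spec["subject"], verb, spec["object"]])
    return sentence[0].upper() + sentence[1:] + "."

print(realise_clause({"subject": "the cat", "verb": "chase",
                      "object": "the mouse", "number": "singular"}))
# The cat chases the mouse.
```

Changing `"number"` to `"plural"` leaves the verb uninflected ("The cats chase the mouse."), showing how the surface form is derived from, rather than stored in, the abstract representation.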

Black-box approach[edit]

In a later survey, Gârbacea and Mei[8] suggested "neural language generation" as an emerging sub-field of NLG. Eleven of the papers cited in their survey have "neural language" in their titles, the earliest from 2016 (Edouard Grave, Armand Joulin, and Nicolas Usunier. 2016. Improving neural language models with a continuous cache). The earliest cited title in which "neural language generation" appears is from 2017 (Jessica Ficler and Yoav Goldberg. 2017. Controlling linguistic style aspects in neural language generation. In Proceedings of the Workshop on Stylistic Variation, pages 94–104); slightly earlier that year came Van-Khanh Tran and Le-Minh Nguyen. 2017. Semantic Refinement GRU-based Neural Language Generation for Spoken Dialogue Systems. As of mid-2020, neural language generation is not mature enough to be used to generate natural language renditions of language-neutral content.

References[edit]

  • Gârbacea and Mei, 2020[8]
  • Gardent et al. 2017[9]
  • Gatt & Krahmer, 2018[3]
  • Reiter & Dale, 2000[6] (PDF ends at the end of the first section.)

External Links[edit]


  1. (https://models.uralicnlp.com/nightly/)
  2. The Scholia view on Natural-language generation lacked the standard sources and leading authors on 27 July 2020. Instead, see Google Scholar
  3. a b c Gatt, Albert; Krahmer, Emiel (January 2018), "Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation", Journal of Artificial Intelligence Research 61: 65–170, archived from the original on 2020-06-23, retrieved 2020-07-24 
  4. Gatt's publications
  5. Emiel Krahmer ( Q51689943) selected publications
  6. a b Reiter, EB; Dale, R (2000), Building Natural-Language Generation Systems (PDF), Cambridge University Press, archived from the original (PDF) on 2019-07-11, retrieved 2020-07-27 
  7. Grice, H. Paul (1975), Logic and conversation (PDF), retrieved 2020-08-10 
  8. a b Gârbacea, Cristina; Mei, Qiaozhu, Neural Language Generation: Formulation, Methods, and Evaluation (PDF), pp. 1–70, retrieved 2020-08-08, Compared to the survey of (Gatt and Krahmer, 2018), our overview is a more comprehensive and updated coverage of neural network methods and evaluation centered around the novel problem definitions and task formulations. 
  9. Gardent, Claire; Shimorina, Anastasia; Narayan, Shashi; Perez-Beltrachini, Laura (2017), "The WebNLG Challenge: Generating Text from RDF data." (PDF), Proceedings of the 10th International Conference on Natural Language Generation: 124–133