Wikimedia Research Newsletter
Vol. 14 • Issue 11 • November 2024
SPINACH: AI help for asking Wikidata "challenging real-world questions"
By: Tilman Bayer
"SPINACH": LLM-based tool to translate "challenging real-world questions" into Wikidata SPARQL queries
A paper[1] presented at last week's EMNLP conference reports on a promising new AI-based tool (available at https://spinach.genie.stanford.edu/) to retrieve information from Wikidata using natural language questions. It can successfully answer complicated questions like the following:
"What are the musical instruments played by people who are affiliated with the University of Washington School of Music and have been educated at the University of Washington, and how many people play each instrument?"
The authors note that Wikidata is "one of the largest publicly available knowledge bases [and] currently contains 15 billion facts", and claim that it is "of significant value to many scientific communities". However, they observe that "[e]ffective access to Wikidata data can be challenging, requiring use of the SPARQL query language."
This motivates the use of large language models to convert natural language questions into SPARQL queries, which could obviously be of great value to non-technical users. The paper is far from the first such attempt – see also below for a more narrowly tailored effort. In fact, some of its authors (including Monica S. Lam and members of her group at Stanford) had already built such a system last year – "WikiSP", obtained by fine-tuning an LLM; see our review: "Fine-tuned LLMs Know More, Hallucinate Less with Few-Shot Sequence-to-Sequence Semantic Parsing over Wikidata". (Readers of this column may also recall coverage of Wikipedia-related publications out of Lam's group, see "STORM: AI agents role-play as 'Wikipedia editors' and 'experts' to create Wikipedia-like articles" and "WikiChat, 'the first few-shot LLM-based chatbot that almost never hallucinates'" – a paper that received the Wikimedia Foundation's "Research Award of the Year".)
The SPINACH dataset
More generally, this kind of task is called "Knowledge Base Question Answering" (KBQA). The authors observe that many benchmarks have been published for it over the last decade, and that "recently, the KBQA community has shifted toward using Wikidata as the underlying knowledge base for KBQA datasets". However, they criticize those existing benchmarks as "either contain[ing] only simple questions [...] or synthetically generated complex logical forms that are not representative enough of real-world queries". To remedy this, they
introduce the SPINACH dataset, an expert-annotated KBQA dataset collected from forum discussions on Wikidata's "Request a Query" forum with 320 decontextualized question-SPARQL pairs. Much more complex than existing datasets, SPINACH calls for strong KBQA systems that do not rely on training data to learn the KB schema, but can dynamically explore large and often incomplete schemas and reason about them.
In more detail, the researchers scraped the "Request a Query" forum's archive from 2016 up to May 2024, obtaining 2780 discussions that had resulted in a valid SPARQL query. These were filtered by various criteria and sampled down to a subset of 920 conversations "spanning many domains" for consideration. Those were then further winnowed with a focus on end-users rather than Wikipedia and Wikidata contributors interested in obscure optimizations or formatting. The remaining conversations were manually annotated with "a self-contained, decontextualized natural language question that accurately captures the meaning of the user-written SPARQL". These steps include disambiguation of terms in the question as originally asked in the forum ("For example, instead of asking 'where a movie takes place', we distinguish between the 'narrative location' and the 'filming location'" – thus avoiding the kind of ambiguity that had confused the authors' own WikiSP system). This might be regarded as attaching training wheels, i.e. artificially making the task a little easier. However, another step goes in the other direction, by "refrain[ing] from directly using [Wikidata's] entity and property names, instead using a more natural way to express the meaning. For instance, instead of asking 'what is the point of time of the goal?', a more natural question with the same level of accuracy like 'when does the goal take place?' should be used."
The SPINACH agent
The paper's second contribution is an LLM-based system, also called "SPINACH", that on the authors' own dataset "outperforms all baselines, including the best GPT-4-based KBQA agent by a large margin", and also "achiev[es] a new state of the art on several existing KBQA benchmarks" – although it narrowly remains behind the aforementioned WikiSP model on the WikiWebQuestions dataset (both also out of Lam's lab).
"unlike prior work, we design SPINACH with the primary goal of mimicking a human expert writing a SPARQL query. An expert starts by writing simple queries and looking up Wikidata entity or property pages when needed, all to understand the structure of the knowledge graph and what connections exist. This is especially important for Wikidata due to its anomalous structure (Shenoy et al., 2022). An expert then might add new SPARQL clauses to build towards the final SPARQL, checking their work along the way by executing intermediate queries and eyeballing the results."
This agent is given several tools to use – sketched in code after this list – namely:
- searching Wikidata for the QID for a string (like a human user would using the search box on the Wikidata site). This addresses an issue that thwarts many naive attempts to use e.g. ChatGPT directly for generating SPARQL queries, which the aforementioned WikiSP paper already pointed out last year: "While zero-shot LLMs [e.g. ChatGPT] can generate SPARQL queries for the easiest and most common questions, they do not know all the PIDs and QIDs [property and item IDs in Wikidata]."
- retrieving the Wikidata entry for a QID (i.e. all the information on its Wikidata page)
- retrieving a few examples demonstrating the use of the specified property in Wikidata
- running a SPARQL query on the Wikidata Query Service
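A minimal sketch of how these four tools might be implemented against public Wikidata endpoints follows. The function names and signatures here are our own guesses based on the paper's description, not the authors' code.

    # Sketch of the agent's four tools, built on public Wikidata endpoints.
    # Names and signatures are guesses from the paper's description.
    import requests

    WD_API = "https://www.wikidata.org/w/api.php"
    SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"

    def search_wikidata(term, entity_type="item"):
        """Tool 1: look up QIDs (or PIDs, with entity_type='property') for a string."""
        r = requests.get(WD_API, params={
            "action": "wbsearchentities", "search": term, "language": "en",
            "type": entity_type, "format": "json"})
        return [(hit["id"], hit.get("label", "")) for hit in r.json()["search"]]

    def get_entity(qid):
        """Tool 2: retrieve the full Wikidata entry (labels, claims, ...) for a QID."""
        r = requests.get(f"https://www.wikidata.org/wiki/Special:EntityData/{qid}.json")
        return r.json()["entities"][qid]

    def execute_sparql(query):
        """Tool 4: run a SPARQL query on the Wikidata Query Service."""
        r = requests.get(SPARQL_ENDPOINT, params={"query": query, "format": "json"})
        return r.json()["results"]["bindings"]

    def property_examples(pid, limit=3):
        """Tool 3: fetch a few statements demonstrating the use of a property."""
        return execute_sparql(f"SELECT ?s ?o WHERE {{ ?s wdt:{pid} ?o . }} LIMIT {limit}")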
The authors note that "[i]mportantly, the results of the execution of each action are put in a human-readable format to make it easier for the LLM to process. To limit the amount of information that the agent has to process, we limit the output of search results to at most 8 entities and 4 properties, and limit large results of SPARQL queries to the first and last 5 rows." That LLMs and humans have similar problems reading through copious Wikidata query results is a somewhat intriguing observation, considering that Wikidata was conceived as a machine-readable knowledge repository. (In an apparent effort to address the low usage of Wikidata in today's AI systems, Wikimedia Deutschland recently announced "a project to simplify access to the open data in Wikidata for AI applications" by "transformation of Wikidata's data into semantic vectors.")
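The row-truncation rule is simple enough to sketch directly (our illustration, not the authors' code):

    def truncate_rows(rows, head=5, tail=5):
        """Keep only the first and last few rows of a large SPARQL result,
        mirroring the paper's 'first and last 5 rows' rule."""
        if len(rows) <= head + tail:
            return rows
        omitted = len(rows) - head - tail
        return rows[:head] + [f"... {omitted} rows omitted ..."] + rows[-tail:]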
The SPINACH system uses the popular ReAct (Reasoning and Acting) framework for LLM agents,[supp 1] where the model alternates between reasoning about its task (e.g. "It seems like there is an issue with the QID I used for the University of Washington. I should search for the correct QID") and acting (e.g. using its search tool: search_wikidata("University of Washington")). The generation of these thought + action pairs in each turn is driven by an agent policy prompt that "only includes high-level instructions such as 'start by constructing very simple queries and gradually build towards the complete query' and 'confirm all your assumptions about the structure of Wikidata before proceeding' [...]. The decision of selecting the action at each time step is left to the LLM."
Successfully answering a question with a correct SPARQL query can require numerous turns. The researchers limit these by providing the agent with a budget of 15 actions, plus an extra 15 actions to spend on [...] "rollbacks" of such actions. Even so, "[s]ince [the] SPINACH agent makes multiple LLM calls for each question, its latency and cost are higher compared to simpler systems. [...] This seems to be the price for a more accurate KBQA system."
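Schematically, a ReAct loop with this action budget might look as follows. This is our sketch, not the authors' code: llm_next_step is a hypothetical stand-in for the policy-prompted LLM call, and tools maps action names to functions like those sketched above.

    # Schematic ReAct loop with SPINACH's action budget (our sketch).
    ACTION_BUDGET = 15  # the paper also grants 15 extra actions for rollbacks

    def react_agent(question, llm_next_step, tools):
        history = [f"Question: {question}"]
        for _ in range(ACTION_BUDGET):
            # Each turn, the LLM produces a free-form thought plus one action.
            thought, action, argument = llm_next_step(history)
            history.append(f"Thought: {thought}")
            if action == "stop":   # the agent deems its SPARQL query final
                return argument
            observation = tools[action](argument)  # e.g. search_wikidata(...)
            history.append(f"Action: {action}({argument!r}) -> {observation}")
        return None  # budget exhausted without a final query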
Still, for the time being, an instance is available for free at https://spinach.genie.stanford.edu/, and also on-wiki as a bot (operated by one of the authors, a now-former Wikimedia Foundation employee), which has already answered about 30 user queries since its introduction some months ago.
Briefly
- See the page of the monthly Wikimedia Research Showcase for videos and slides of past presentations.
Other recent publications
Other recent publications that could not be covered in time for this issue include the items listed below. Contributions, whether reviewing or summarizing newly published research, are always welcome.
"SPARQL Generation: an analysis on fine-tuning OpenLLaMA for Question Answering over a Life Science Knowledge Graph"
From the abstract:[2]
"we evaluate several strategies for fine-tuning the OpenLlama LLM for question answering over life science knowledge graphs. In particular, we propose an end-to-end data augmentation approach for extending a set of existing queries over a given knowledge graph towards a larger dataset of semantically enriched question-to-SPARQL query pairs, enabling fine-tuning even for datasets where these pairs are scarce."
From the paper:
"Recently, the benchmark dataset so-called [sic] KQA Pro was released [...]. It is a large-scale dataset for complex question answering over a dense subset of the Wikidata1 KB. [...] Although Wikidata is not a domain specific KB, it contains relevant life science data."
"We augment an existing catalog of representative questions over a given knowledge graph and fine-tune OpenLlama in two steps: We first fine-tune the base model using the KQA Pro dataset over Wikidata. Next, we further fine-tune the resulting model using the extended set of questions and queries over the target knowledge graph. Finally, we obtain a system for Question Answering over Knowledge Graphs (KGQA) which translates natural language user questions into their corresponding SPARQL queries over the target KG."
A small number of "culprits" cause over 10 million "Disjointness Violations in Wikidata"
This preprint identifies 51 pairs of classes on Wikidata that should be disjoint (e.g. "natural object" vs. "artificial object") but aren't, with over 10 million violations, caused by a small number of "culprits". From the abstract:[3]
"Disjointness checks are among the most important constraint checks in a knowledge base and can be used to help detect and correct incorrect statements and internal contradictions. [...] Because of both its size and construction, Wikidata contains many incorrect statements and internal contradictions. We analyze the current modeling of disjointness on Wikidata, identify patterns that cause these disjointness violations and categorize them. We use SPARQL queries to identify each 'culprit' causing a disjointness violation and lay out formulas to identify and fix conflicting information. We finally discuss how disjointness information could be better modeled and expanded in Wikidata in the future."
"Automatic Quality Assessment of Wikipedia Articles - A Systematic Literature Review"
From the abstract:[4]
"We review existing methods for automatically measuring the quality of Wikipedia articles, identifying and comparing machine learning algorithms, article features, quality metrics, and used datasets, examining 149 distinct studies, and exploring commonalities and gaps in them. The literature is extensive, and the approaches follow past technological trends. However, machine learning is still not widely used by Wikipedia, and we hope that our analysis helps future researchers change that reality."
References
- ↑ Liu, Shicheng; Semnani, Sina; Triedman, Harold; Xu, Jialiang; Zhao, Isaac Dan; Lam, Monica (November 2024). "SPINACH: SPARQL-Based Information Navigation for Challenging Real-World Questions". In Yaser Al-Onaizan; Mohit Bansal; Yun-Nung Chen (eds.). Findings of the Association for Computational Linguistics: EMNLP 2024. Miami, Florida, USA: Association for Computational Linguistics. pp. 15977–16001. Data and code • Online tool
- ↑ Rangel, Julio C.; de Farias, Tarcisio Mendes; Sima, Ana Claudia; Kobayashi, Norio (2024-02-07), SPARQL Generation: an analysis on fine-tuning OpenLLaMA for Question Answering over a Life Science Knowledge Graph, arXiv, doi:10.48550/arXiv.2402.04627 (accepted submission at SWAT4HCLS 2024: The 15th International Conference on Semantic Web Applications and Tools for Health Care and Life Sciences)
- ↑ Doğan, Ege Atacan; Patel-Schneider, Peter F. (2024-10-17), Disjointness Violations in Wikidata, arXiv, doi:10.48550/arXiv.2410.13707
- ↑ Moás, Pedro Miguel; Lopes, Carla Teixeira (2023-09-22). "Automatic Quality Assessment of Wikipedia Articles - A Systematic Literature Review". ACM Computing Surveys. ISSN 0360-0300. doi:10.1145/3625286.
- Supplementary references and notes:
- ↑ Yao, Shunyu; Zhao, Jeffrey; Yu, Dian; Du, Nan; Shafran, Izhak; Narasimhan, Karthik; Cao, Yuan (2023-03-09), ReAct: Synergizing Reasoning and Acting in Language Models, arXiv, doi:10.48550/arXiv.2210.03629